Technical Report No. 822
MIT Laboratory for Computer Science
November 2001

Roles Are Really Great!

Viktor Kuncak, Patrick Lam, and Martin Rinard
arXiv:cs/0408013v1 [] 5 Aug 2004
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
{vkuncak, plam, rinard}@lcs.mit.edu

ABSTRACT
We present a new role system for specifying changing referencing relationships of heap objects. The role of an object depends, in large part, on its aliasing relationships with other objects, with the role of each object changing as its aliasing relationships change. Roles therefore capture important object and data structure properties and provide useful information about how the actions of the program interact with these properties. Our role system enables the programmer to specify the legal aliasing relationships that define the set of roles that objects may play, the roles of procedure parameters and object fields, and the role changes that procedures perform while manipulating objects. We present an interprocedural, compositional, and context-sensitive role analysis algorithm that verifies that a program respects the role constraints.

Contents
1 Introduction
2 Example
  2.1 Role Definitions
  2.2 Roles and Procedure Interfaces
3 Abstract Syntax and Semantics of Roles
  3.1 Heap Representation
  3.2 Role Representation
  3.3 Role Semantics
4 Role Properties
  4.1 Formal Properties of Roles
5 A Programming Model
  5.1 A Simple Imperative Language
  5.2 Operational Semantics
  5.3 Onstage and Offstage Objects
  5.4 Role Consistency
    5.4.1 Offstage Consistency
    5.4.2 Reference Removal Consistency
    5.4.3 Procedure Call Consistency
    5.4.4 Explicit Role Check
  5.5 Instrumented Semantics
6 Intraprocedural Role Analysis
  6.1 Abstraction Relation
  6.2 Transfer Functions
    6.2.1 Expansion
    6.2.2 Contraction
    6.2.3 Symbolic Execution
    6.2.4 Node Check
7 Interprocedural Role Analysis
  7.1 Procedure Transfer Relations
    7.1.1 Initial Context
    7.1.2 Procedure Effects
    7.1.3 Semantics of Procedure Effects
  7.2 Verifying Procedure Transfer Relations
    7.2.1 Role Graphs at Procedure Entry
    7.2.2 Verifying Basic Statements
    7.2.3 Verifying Procedure Postconditions
  7.3 Analyzing Call Sites
    7.3.1 Context Matching
    7.3.2 Effect Instantiation
    7.3.3 Role Reconstruction
8 Extensions
  8.1 Multislots
  8.2 Cascading Role Changes
9 Related Work
10 Conclusion

∗ This research was supported in part by DARPA Contract F33615-00-C-1692, NSF Grant CCR00-86154, NSF Grant CCR00-63513, and an NSERC graduate scholarship.

1 Introduction
Types capture important properties of the objects that programs manipulate, increasing both the safety and readability of the program.
Traditional type systems capture properties (such as the format of data items stored in the fields of the object) that are invariant over the lifetime of the object. But in many cases, properties that do change are as important as properties that do not. Recognizing the benefit of capturing these changes, researchers have developed systems in which the type of the object changes as the values stored in its fields change or as the program invokes operations on the object [44, 43, 10, 47, 48, 4, 20, 13]. These systems integrate the concept of changing object states into the type system. The fundamental idea in this paper is that the state of each object also depends on the data structures in which it participates. Our type system therefore captures the referencing relationships that determine this data structure participation. As objects move between data structures, their types change to reflect their changing relationships with other objects. Our system uses roles to formalize the concept of a type that depends on the referencing relationships. Each role declaration provides complete aliasing information for each object that plays that role—in addition to specifying roles for the fields of the object, the role declaration also identifies the complete set of references in the heap that refer to the object. In this way roles generalize linear type systems [45, 2, 30] by allowing multiple aliases to be statically tracked, and extend alias types [42, 46] with the ability to specify roles of objects that are the source of aliases. This approach attacks a key difficulty associated with state-based type systems: the need to ensure that any state change performed using one alias is correctly reflected in the declared types of the other aliases. Because each object’s role identifies all of its heap aliases, the analysis can verify the correctness of the role information at all remaining or new heap aliases after an operation changes the referencing relationships. Roles capture important object and data structure properties, improving both the safety and transparency of the program. For example, roles allow the programmer to express data structure consistency properties (with the properties verified by the role analysis), to improve the precision of procedure interface specifications (by allowing the programmer to specify the role of each parameter), to express precise referencing and interaction behaviors between objects (by specifying verified roles for object fields and aliases), and to express constraints on the coordinated movements of objects between data structures (by using the aliasing information in role definitions to identify legal data structure membership combinations). Roles may also aid program optimization by providing precise aliasing information. This paper makes the following contributions: • Role Concept: The concept that the state of an object depends on its referencing relationships; specifically, that objects with different heap aliases should be regarded as having different states. • Role Definition Language: It presents a language for defining roles. The programmer can use this language to express data structure invariants and properties such as data structure participation. • Programming Model: It presents a set of role consistency rules. These rules give a programming model for changing the role of an object and the circumstances under which roles can be temporarily violated. 
• Procedure Interface Specification Language: It presents a language for specifying the initial context and effects of each procedure. The effects summarize the actions of the procedure in terms of the references it changes and the regions of the heap that it affects.
• Role Analysis Algorithm: It presents an algorithm for verifying that the program respects the constraints given by a set of role definitions and procedure specifications. The algorithm uses a data-flow analysis to infer intermediate referencing relationships between objects, allowing the programmer to focus on role changes and procedure interfaces.

2 Example
Figure 1 presents a role reference diagram for a process scheduler. Each box in the diagram denotes a disjoint set of objects of a given role. The labelled arrows between boxes indicate possible references between the objects in each set.

[Figure 1: Role Reference Diagram for Scheduler]

As the diagram indicates, the scheduler maintains a list of live processes. A live process can be either running or sleeping. The running processes form a doubly-linked list, while sleeping processes form a binary tree. Both kinds of processes have proc references from the live list nodes LiveList. Header objects RunningHeader and SleepingTree simplify operations on the data structures that store the process objects.
As Figure 1 shows, data structure participation determines the conceptual state of each object. In our example, processes that participate in the sleeping process tree data structure are classified as sleeping processes, while processes that participate in the running process list data structure are classified as running processes. Moreover, movements between data structures correspond to conceptual state changes—when a process stops sleeping and starts running, it moves from the sleeping process tree to the running process list.

2.1 Role Definitions
Figure 2 presents the role definitions for the objects in our example.1 Each role definition specifies the constraints that an object must satisfy to play the role. Field constraints specify the roles of the objects to which the fields refer, while slot constraints identify the number and kind of aliases of the object.

1 In general, each role definition would specify the static class of objects that can play that role. To simplify the presentation, we assume that all objects are instances of a single class with a set of fields F.
role LiveHeader {
  fields next : LiveList | null;
}
role LiveList {
  fields next : LiveList | null,
         proc : RunningProc | SleepingProc;
  slots LiveList.next | LiveHeader.next;
  acyclic next;
}
role RunningHeader {
  fields next : RunningProc | RunningHeader,
         prev : RunningProc | RunningHeader;
  slots RunningHeader.next | RunningProc.next,
        RunningHeader.prev | RunningProc.prev;
  identities next.prev, prev.next;
}
role RunningProc {
  fields next : RunningProc | RunningHeader,
         prev : RunningProc | RunningHeader;
  slots RunningHeader.next | RunningProc.next,
        RunningHeader.prev | RunningProc.prev,
        LiveList.proc;
  identities next.prev, prev.next;
}
role SleepingTree {
  fields root : SleepingProc | null;
  acyclic left, right;
}
role SleepingProc {
  fields left : SleepingProc | null,
         right : SleepingProc | null;
  slots SleepingProc.left | SleepingProc.right | SleepingTree.root,
        LiveList.proc;
  acyclic left, right;
}
role DeadProc { }

Figure 2: Role Definitions for a Scheduler

Role definitions may also contain two additional kinds of constraints: identity constraints, which specify paths that lead back to the object, and acyclicity constraints, which specify paths with no cycles. In our example, the identity constraint next.prev in the RunningProc role specifies the cyclic doubly-linked list constraint that following the next, then prev fields always leads back to the initial object. The acyclic constraint left, right in the SleepingProc role specifies that there are no cycles in the heap involving only left and right edges. On the other hand, the list of running processes must be cyclic because its nodes can never point to null. The slot constraints specify the complete set of heap aliases for the object. In our example, this implies that no process can be simultaneously running and sleeping.
In general, roles can capture data structure consistency properties such as disjointness and can prevent representation exposure [8]. As a data structure description language, roles can naturally specify trees with additional pointers. Roles can also approximate non-tree data structures like sparse matrices. Because most role constraints are local, it is possible to inductively infer them from data structure instances.

2.2 Roles and Procedure Interfaces
Procedures specify the initial and final roles of their parameters. The suspend procedure in Figure 3, for example, takes two parameters: an object p with role RunningProc, and the SleepingTree s. The procedure changes the role of the object referenced by p to SleepingProc, whereas the object referenced by s retains its original role. To perform the role change, the procedure removes p from its RunningList data structure and inserts it into the SleepingTree data structure s. If the procedure fails to perform the insertions or deletions correctly, for instance by leaving an object in both structures, the role analysis will report an error.

procedure suspend(p : RunningProc ->> SleepingProc, s : SleepingTree)
local pp, pn, r;
{
  pp = p.prev;
  pn = p.next;
  r = s.root;
  p.prev = null;
  p.next = null;
  pp.next = pn;
  pn.prev = pp;
  s.root = p;
  p.left = r;
  setRole(p : SleepingProc);
}

Figure 3: Suspend Procedure

3 Abstract Syntax and Semantics of Roles
In this section, we precisely define what it means for a given heap to satisfy a set of role definitions. In subsequent sections we will use this definition as a starting point for a programming model and role analysis.
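To make the four kinds of constraints concrete before they are formalized below, the following sketch shows one possible way to encode two of the scheduler's role definitions from Figure 2 as plain data. The dictionary layout and the names RUNNING_PROC, SLEEPING_PROC, and ROLE_DEFS are our own illustrative choices, not part of the paper's system.

# Illustrative encoding of role definitions (not the paper's notation):
# "fields" maps a field name to the set of roles (or "null") it may reference,
# "slots" is a list of slots, each a set of (source role, field) alternatives,
# "identities" is a set of (f, g) field pairs, "acyclic" a set of field names.

RUNNING_PROC = {
    "fields": {"next": {"RunningProc", "RunningHeader"},
               "prev": {"RunningProc", "RunningHeader"}},
    "slots": [{("RunningHeader", "next"), ("RunningProc", "next")},
              {("RunningHeader", "prev"), ("RunningProc", "prev")},
              {("LiveList", "proc")}],
    "identities": {("next", "prev"), ("prev", "next")},
    "acyclic": set(),
}

SLEEPING_PROC = {
    "fields": {"left": {"SleepingProc", "null"},
               "right": {"SleepingProc", "null"}},
    "slots": [{("SleepingProc", "left"), ("SleepingProc", "right"),
               ("SleepingTree", "root")},
              {("LiveList", "proc")}],
    "identities": set(),
    "acyclic": {"left", "right"},
}

ROLE_DEFS = {"RunningProc": RUNNING_PROC, "SleepingProc": SLEEPING_PROC}

Later sketches in this report reuse this encoding when they need a machine-readable form of a set of role definitions.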
3.1 Heap Representation We represent a concrete program heap as a finite directed graph Hc with nodes(Hc ) representing objects of the heap and labelled edges representing heap references. A graph edge ho1 , f, o2 i ∈ Hc denotes a reference with field name f from object o1 to object o2 . To simplify the presentation, we fix a global set of fields F and assume that all objects have all fields in F . We do not consider subtyping or dynamic dispatch in this paper. 3.2 Role Representation Let R denote the set of roles used in role definitions, nullR be a special symbol always denoting a null object nullc , and let R0 = R ∪ {nullR }. We represent each role as the conjunction of the following four kinds of constraints: • Fields: For every field name f ∈ F we introduce a function fieldf : R → 2R0 denoting the set of roles that objects of role r ∈ R can reference through field f . A field f of role r can be null if and only if nullR ∈ fieldf (r). The explicit use of nullR and the possibility to specify a set of alternative roles for every field allows roles to express both may and must referencing relationships. • Slots: Every role r has slotno(r) slots. A slot slotk (r) of role r ∈ R is a subset of R×F . Let o be an object of role r and o′ an object of role r ′ . A reference ho′ , f, oi ∈ Hc can fill a slot k of object o if and only if hr ′ , f i ∈ slotk (r). An object with role r must have each of its slots filled by exactly one reference. • Identities: Every role r ∈ R has a set of identities(r) ⊆ F × F . Identities are pairs of fields hf, gi such that following reference f on object o and then returning on reference g leads back to o. • Acyclicities: Every role r ∈ R has a set acyclic(r) ⊆ F of fields along which cycles are forbidden. 3.3 Role Semantics We define the semantics of roles as a conjunction of invariants associated with role definitions. A concrete role assignment is a map ρc : nodes(Hc ) → R0 such that ρc (nullc ) = nullR . Definition 1 Given a set of role definitions, we say that heap Hc is role consistent iff there exists a role assignment ρc : nodes(Hc ) → R0 such that for every o ∈ nodes(Hc ) the predicate locallyConsistent(o, Hc , ρc ) is satisfied. We call any such role assignment ρc a valid role assignment. The predicate locallyConsistent(o, Hc , ρc ) formalizes the constraints associated with role definitions. Definition 2 locallyConsistent(o, Hc , ρc ) iff all of the following conditions are met. Let r = ρc (o). 1) For every field f ∈ F and ho, f, o′ i ∈ Hc , ρc (o′ ) ∈ fieldf (r). 2) Let {ho1 , f1 i, . . . , hok , fk i} = {ho′ , f i | ho′ , f, oi ∈ Hc } be the set of all aliases of object o. Then k = slotno(r) and there exists some permutation p of the set {1, . . . , k} such that hρc (oi ), fi i ∈ slotpi (r) for all i. 3) If ho, f, o′ i ∈ Hc , ho′ , g, o′′ i ∈ Hc , and hf, gi ∈ identities(r), then o = o′′ . 4) It is not the case that graph Hc contains a cycle o1 , f1 , . . . , os , fs , o1 where o1 = o and f1 , . . . , fs ∈ acyclic(r) Note that a role consistent heap may have multiple valid role assignments ρc . However, in each of these role assignments, every object o is assigned exactly one role ρc (o). The existence of a role assignment ρc with the property ρc (o1 ) 6= ρc (o2 ) thus implies o1 6= o2 . This is just one of the ways in which roles make aliasing more predictable. 4 Role Properties Roles capture important properties of the objects and provide useful information about how the actions of the program affect those properties. 
• Consistency Properties: Roles can ensure that the program respects application-level data structure consistency properties. The roles in our process scheduler, for example, ensure that a process cannot be simultaneously sleeping and running.
• Interface Changes: In many cases, the interface of an object changes as its referencing relationships change. In our process scheduler, for example, only running processes can be suspended. Because procedures declare the roles of their parameters, the role system can ensure that the program uses objects correctly even as the object's interface changes.
• Multiple Uses: Code factoring minimizes code duplication by producing general-purpose classes (such as the Java Vector and Hashtable classes) that can be used in a variety of contexts. But this practice obscures the different purposes that different instances of these classes serve in the computation. Because each instance's purpose is usually reflected in its relationships with other objects, roles can often recapture these distinctions.
• Correlated Relationships: In many cases, groups of objects cooperate to implement a piece of functionality. Standard type declarations provide some information about these collaborations by identifying the points-to relationships between related objects at the granularity of classes. But roles can capture a much more precise notion of cooperation, because they track correlated state changes of related objects.

Programmers can use roles for specifying the membership of objects in data structures and the structural invariants of data structures. In both cases, the slot constraints are essential. When used to describe membership of an object in a data structure, slots specify the source of the alias from a data structure node that stores the object. By assigning different sets of roles to data structures used at different program points, it is possible to distinguish nodes stored in different data structure instances. As an object moves between data structures, the role of the object changes appropriately to reflect the new source of the alias. When describing nodes of data structures, slot constraints specify the aliasing constraints of nodes; this is enough to precisely describe a variety of data structures and approximate many others. Property 16 below shows how to identify trees in role definitions even if tree nodes have additional aliases from other sets of nodes. It is also possible to define nodes which make up a compound data structure linked via disjoint sets of fields, such as threaded trees, sparse matrices and skip lists.

Example 3 The following role definitions specify a sparse matrix of width and height at least 3. These definitions can be easily constructed from a sketch of a sparse matrix, as in Figure 4.

[Figure 4: Roles of Nodes of a Sparse Matrix]
role A1 { fields right : A2, down : A4; acyclic right, down; }
role A2 { fields right : A2 | A3, down : A5; slots A1.right | A2.right; acyclic right, down; }
role A3 { fields down : A6; slots A2.right; acyclic right, down; }
role A4 { fields right : A5, down : A4 | A7; slots A1.down | A4.down; acyclic right, down; }
role A5 { fields right : A5 | A6, down : A5 | A8; slots A4.right | A5.right, A2.down | A5.down; acyclic right, down; }
role A6 { fields down : A6 | A9; slots A5.right, A3.down | A6.down; acyclic right, down; }
role A7 { fields right : A8; slots A4.down; acyclic right, down; }
role A8 { fields right : A8 | A9; slots A7.right | A8.right, A5.down; acyclic right, down; }
role A9 { slots A8.right, A6.down; acyclic right, down; }

Example 4 We next give role definitions for a two-level skip list [36] sketched in Figure 5.

[Figure 5: Sketch of a Two-Level Skip List]

role SkipList { fields one : OneNode | TwoNode | null, two : TwoNode | null; }
role OneNode { fields one : OneNode | TwoNode | null, two : null; slots OneNode.one | TwoNode.one | SkipList.one; acyclic one, two; }
role TwoNode { fields one : OneNode | TwoNode | null, two : TwoNode | null; slots OneNode.one | TwoNode.one | SkipList.one, TwoNode.two | SkipList.two; acyclic one, two; }

4.1 Formal Properties of Roles
In this section we identify some of the invariants expressible using sets of mutually recursive role definitions. A further study of role properties can be found in [31]. The following properties show some of the ways role specifications make object aliasing more predictable. They are an immediate consequence of the semantics of roles.

Property 5 (Role Disjointness) If there exists a valid role assignment ρc for Hc such that ρc(o1) ≠ ρc(o2), then o1 ≠ o2.

The previous property gives a simple criterion for showing that objects o1 and o2 are unaliased: find a valid role assignment which assigns different roles to o1 and o2. This use of roles generalizes the use of static types for pointer analysis [12]. Since roles create a finer partition of objects than a typical static type system, their potential for proving absence of aliasing is even larger.

Property 6 (Disjointness Propagation) If ⟨o1, f, o2⟩, ⟨o3, g, o4⟩ ∈ Hc, o1 ≠ o3, and there exists a valid role assignment ρc for Hc such that ρc(o2) = ρc(o4) = r but fieldf(r) ∩ fieldg(r) = ∅, then o2 ≠ o4.

Property 7 (Generalized Uniqueness) If ⟨o1, f, o2⟩, ⟨o3, g, o4⟩ ∈ Hc, o1 ≠ o3, and there exists a role assignment ρc such that ρc(o2) = ρc(o4) = r, but there are no indices i ≠ j such that ⟨ρc(o1), f⟩ ∈ sloti(r) and ⟨ρc(o2), g⟩ ∈ slotj(r), then o2 ≠ o4.

A special case of Property 7 occurs when slotno(r) = 1; this constrains all references to objects of role r to be unique.
Role definitions induce a role reference diagram RRD which captures some, but not all, role constraints.

Definition 8 (Role Reference Diagram) Given a set of definitions of roles R, a role reference diagram RRD is a directed graph with nodes R0 and labelled edges defined by
RRD = {⟨r, f, r′⟩ | r′ ∈ fieldf(r) and ∃i ⟨r, f⟩ ∈ sloti(r′)} ∪ {⟨r, f, nullR⟩ | nullR ∈ fieldf(r)}

Each role reference diagram is a refinement of the corresponding class diagram in a statically typed language, because it partitions classes into multiple roles according to their referencing relationships. The sets ρc^−1(r) of objects with role r change during program execution, reflecting the changing referencing relationships of objects.
Role definitions give more information than a role reference diagram. Slot constraints specify not only that objects of role r1 can reference objects of role r2 along field f, but also give cardinalities on the number of references from other objects. In addition, role definitions include identity and acyclicity constraints, which are not present in role reference diagrams.

Property 9 Let ρc be any valid role assignment. Define
G = {⟨ρc(o1), f, ρc(o2)⟩ | ⟨o1, f, o2⟩ ∈ Hc}
Then G is a subgraph of RRD.

It follows from Property 9 that roles give an approximation of may-reachability among heap objects.

Property 10 (May Reachability) If there is a valid role assignment ρc : nodes(Hc) → R0 such that ρc(o1) ≠ ρc(o2) where o1, o2 ∈ nodes(Hc) and there is no path from ρc(o1) to ρc(o2) in the role reference diagram RRD, then there is no path from o1 to o2 in Hc.
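Definition 8 and Properties 9 and 10 suggest a small computation: build the RRD once from the role definitions, then use plain graph reachability on it as a conservative may-reachability test between heap objects. The sketch below is one way to phrase this; the helper names are ours, it reuses the dictionary encoding assumed earlier, and it expects every role mentioned in a field constraint to be present in role_defs.

def build_rrd(role_defs):
    # Definition 8: edge (r, f, r2) is in the RRD when r2 is allowed in field f
    # of r and some slot of r2 accepts an (r, f) alias; edges to "null" only
    # need the field constraint.
    edges = set()
    for r, d in role_defs.items():
        for f, targets in d["fields"].items():
            for r2 in targets:
                if r2 == "null":
                    edges.add((r, f, "null"))
                elif any((r, f) in slot for slot in role_defs[r2]["slots"]):
                    edges.add((r, f, r2))
    return edges

def rrd_reachable(rrd, r_from, r_to):
    # Property 10: if r_to is unreachable from r_from in the RRD, then no heap
    # path leads from an object of role r_from to an object of role r_to.
    seen, frontier = {r_from}, {r_from}
    while frontier:
        frontier = {r2 for (r1, _, r2) in rrd if r1 in frontier} - seen
        seen |= frontier
    return r_to in seen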
The next property shows the advantage of explicitly specifying null references in role definitions. While the ability to specify acyclicity is provided by the acyclic constraint, it is also possible to indirectly specify must-cyclicity.

Property 11 (Must Cyclicity) Let F0 ⊆ F and RCYC ⊆ R be a set of nodes in the role reference diagram RRD such that for every node r ∈ RCYC, if ⟨r, f, r′⟩ ∈ RRD then r′ ∈ RCYC. If ρc is a valid role assignment for Hc, then every object o1 ∈ Hc with ρc(o1) ∈ RCYC is a member of a cycle in Hc with edges from F0.

The following property shows that roles can specify a form of must-reachability among the sets of objects with the same role.

Property 12 (Downstream Path Termination) Assume that for some set of fields F0 ⊆ F there are sets of nodes RINTER ⊆ R, RFINAL ⊆ R0 of the role reference diagram RRD such that for every node r ∈ RINTER:
1. F0 ⊆ acyclic(r)
2. if ⟨r, f, r′⟩ ∈ RRD for f ∈ F0, then r′ ∈ RINTER ∪ RFINAL
Let ρc be a valid role assignment for Hc. Then every path in Hc starting from an object o1 with role ρc(o1) ∈ RINTER and containing only edges labelled with F0 is a prefix of a path that terminates at some object o2 with ρc(o2) ∈ RFINAL.

Property 13 (Upstream Path Termination) Assume that for some set of fields F0 ⊆ F there are sets of nodes RINTER ⊆ R, RINIT ⊆ R0 of the role reference diagram RRD such that for every node r ∈ RINTER:
1. F0 ⊆ acyclic(r)
2. if ⟨r′, f, r⟩ ∈ RRD for f ∈ F0, then r′ ∈ RINTER ∪ RINIT
Let ρc be a valid role assignment for Hc. Then every path in Hc terminating at an object o2 with ρc(o2) ∈ RINTER and containing only edges labelled with F0 is a suffix of a path which started at some object o1, where ρc(o1) ∈ RINIT.

The next two properties guarantee reachability properties by which there must exist at least one path in the heap, rather than stating properties of all paths as in Properties 12 and 13.

Property 14 (Downstream Must Reachability) Assume that for some set of fields F0 ⊆ F there are sets of roles RINTER ⊆ R, RFINAL ⊂ R0 of the role reference diagram RRD such that for every node r ∈ RINTER:
1. F0 ⊆ acyclic(r)
2. there exists f ∈ F0 such that fieldf(r) ⊆ RINTER ∪ RFINAL
Let ρc be a valid role assignment for Hc.
Then for every object o1 with ρc (o1 ) ∈ RINTER there is a path in Hc with edges from F0 from o1 to some object o2 where ρc (o2 ) ∈ RFINAL . Property 15 (Upstream Must Reachability) Assume that for some set of fields F0 ⊆ F there are sets of nodes RINTER ⊆ R, RINIT ⊆ R of the role reference diagram RRD such that for every node r ∈ RINTER : 1. F0 ⊆ acyclic(r) 2. there exists k such that slotk (r) ⊆ (RINTER ∪ RINIT ) × F Let ρc be a valid role assignment for Hc . Then for every object o2 with ρc (o2 ) ∈ RINTER there is a path in Hc from some object o1 with ρc (o1 ) ∈ RINIT to the object o2 . Trees are a class of data structures especially suited for static analysis. Roles can express graphs that are not trees, but it is useful to identify trees as certain sets of mutually recursive role definitions. Property 16 (Treeness) Let RTREE ⊆ R be a set of roles and F0 ⊆ F set of fields such that for every r ∈ RTREE 1. F0 ⊆ acyclic(r) 2. |{i | sloti (r) ∩ (RTREE × F0 ) 6= ∅}| ≤ 1 Let ρc be a valid role assignment for Hc and S ⊆ {hn1 , f, n2 i | hn1 , f, n2 i ∈ Hc , ρ(n1 ), ρ(n2 ) ∈ RTREE , f ∈ F0 }. Then S is a set of trees. 5 A Programming Model In this section we define what it means for an execution of a program to respect the role constraints. This definition is complicated by the need to allow the program to temporarily violate the role constraints during data structure manipulations. Our approach is to let the program violate the constraints for objects referenced by local variables or parameters, but require all other objects to satisfy the constraints. We first present a simple imperative language with dynamic object allocation and give its operational semantics. We then specify additional statement preconditions that enforce the role consistency requirements. 5.1 A Simple Imperative Language Our core language contains, as basic statements, Load (x=y.f), Store (x.f=y), Copy (x=y), and New (x=new). All variables are references to objects in the global heap and all assignments are reference assignments. We use an elementary test statement combined with nondeterministic choice and iteration to express if and while statement, using the usual translation [22, 1]. We represent the control flow of programs using control-flow graphs. A program is a collection of procedures proc ∈ Proc. Procedures change the global heap but do not return values. Every procedure proc has a list of parameters param(proc) = {parami (proc)}i and a list of local variables local(proc). We use var(proc) to denote param(proc) ∪ local(proc). A procedure definition specifies the initial role preRk (proc) and the final role postRk (proc) for every parameter paramk (proc). We use procj for indices j ∈ N to denote activation records of procedure proc. We further assume that there are no modifications of parameter variables so every parameter references the same object throughout the lifetime of procedure activation. Example 17 The following kill procedure removes a process from both the doubly linked list of running processes and the list of all active processes. This is indicated by the transition from RunningProc to DeadProc. procedure kill(p : RunningProc ->> DeadProc, l : LiveHeader) local prev, current, cp, nxt, lp, ln; { // find ’p’ in ’l’ prev = l; current = l.next; cp = current.proc; while (cp != p) { prev = current; current = current.next; cp = current.proc; } // remove ’current’ and ’p’ from active list nxt = current.next; prev.next = nxt; current. 
current.proc = null; setRole(current : IsolatedCell); // remove ’p’ from running list lp = p.prev; ln = p.next; p.prev = null; p.next = null; lp.next = ln; ln.prev = lp; setRole(p : DeadProc); } 5.2 Operational Semantics In this section we give the operational semantics for our language. We focus on the first three columns in Figures 6 and 7; the safety conditions in the fourth column are detailed in Section 5.4. Figure 6 gives the small-step operational semantics for the basic statements. We use A ⊎ B to denote the union A ∪ B where the sets A and B are disjoint. The program state consists of the stack s and the concrete heap Hc . The stack s is a sequence of pairs p@proci ∈ ×(Proc × N ), where p ∈ NCFG (proc) is a program point, and proci ∈ Proc × N is an activation record of procedure proc. Program points p ∈ NCFG (proc) are nodes of the control-flow graphs. There is one control-flow graph for every procedure proc. An edge of the control-flow graph hp, p′ i ∈ ECFG (proc) indicates that control may transfer from point p to point p′ . We write p : stat to state that program point p contains a statement stat. The control flow graph of each procedure contains special program points entry and exit indicating procedure entry and exit, with no statements associated with them. We assume that all conditions are of the form x==y or !(x==y) where x and y are either variables or a special constant null which always points to the nullc object. The concrete heap is either an error heap errorc or a nonerror heap. A non-error heap Hc ⊆ N ×F ×N ∪((Proc×N )× V × N ) is a directed graph with labelled edges, where nodes represent objects and procedure activation records, whereas edges represent heap references and local variables. An edge ho1 , f, o2 i ∈ N × F × N denotes a reference from object o1 to object o2 via field f ∈ F . An edge hproci , x, oi ∈ Hc means that local variable x in activation record proci points to object o. A load statement x=y.f makes the variable x point to node of , which is referenced by the f field of object oy , which is in turn referenced by variable y. A store statement x.f=y replaces the reference along field f in object ox by a reference to object oy that is referenced by y. The copy statement x=y copies a reference to object oy into variable x. The statement x=new creates a new object on with all fields initially referencing nullc , and makes x point to on . The statement test(c) allows execution to proceed only if condition c is satisfied. Figure 7 describes the semantics of procedure calls. Procedure call pushes new activation record onto stack, inserts it into the heap, and initializes the parameters. Procedure entry initializes local variables. Procedure exit removes the activation record from the heap and the stack. 
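As a companion to Figure 6, the following sketch replays the heap manipulation performed by the four basic statements on a concrete heap represented as a set of labelled edges, with local variables modelled as edges from an activation-record token (as in Section 5.2). The helper names are ours; the control-flow graph, the test statement, and all role-consistency side conditions are deliberately omitted.

import itertools

FIELDS = ("next", "prev", "proc")        # an example choice for the field set F
_fresh = itertools.count()

def _target(heap, src, label):
    # unique target of the edge (src, label, _); the semantics keeps variable
    # and field edges functional, and every variable/field starts out at null
    return next(o for (s, l, o) in heap if s == src and l == label)

def load(heap, frame, x, y, f):          # x = y.f
    of = _target(heap, _target(heap, frame, y), f)
    return heap - {(frame, x, _target(heap, frame, x))} | {(frame, x, of)}

def store(heap, frame, x, f, y):         # x.f = y
    ox, oy = _target(heap, frame, x), _target(heap, frame, y)
    return heap - {(ox, f, _target(heap, ox, f))} | {(ox, f, oy)}

def copy(heap, frame, x, y):             # x = y
    oy = _target(heap, frame, y)
    return heap - {(frame, x, _target(heap, frame, x))} | {(frame, x, oy)}

def new(heap, frame, x):                 # x = new
    on = ("obj", next(_fresh))
    return (heap - {(frame, x, _target(heap, frame, x))}
            | {(frame, x, on)} | {(on, f, "null") for f in FIELDS})

# Example use: starting from a frame whose variables are null, allocate two
# objects and link them with a field reference.
frame = ("proc", 0)
h0 = {(frame, "x", "null"), (frame, "y", "null")}
h1 = new(new(h0, frame, "x"), frame, "y")
h2 = store(h1, frame, "x", "next", "y")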
5.3 Onstage and Offstage Objects At every program point the set of all objects of heap Hc can be partitioned into: Statement Transition p : x=y.f hp@proci ; s, Hc ⊎ {hproci , x, ox i}i −→ hp′ @proci ; s, Hc′ i p : x.f=y hp@proci ; s, Hc ⊎ {hox , f, of i}i −→ hp′ @proci ; s, Hc′ i p : x=y hp@proci ; s, Hc ⊎ {hproci , x, ox i}i −→ hp′ @proci ; s, Hc′ i p : x=new hp@proci ; s, Hc ⊎ {hproci , x, ox i}i −→ hp′ @proci ; s, Hc′ i p : test(c) hp@proci ; s, Hc i −→ hp′ @proci ; s, Hc i Constraints x, y ∈ local(proc), hproci , y, oy i, hoy , f, of i ∈ Hc , hp, p′ i ∈ ECFG (proc), Hc′ = Hc ⊎ {proci , x, of } x, y ∈ local(proc), hproci , x, ox i, hproci , y, oy i ∈ Hc , hp, p′ i ∈ ECFG (proc), Hc′ = Hc ⊎ {hox , f, oy i} x ∈ local(proc), y ∈ var(proc), hproci , y, oy i ∈ Hc , hp, p′ i ∈ ECFG (proc), ′ Hc = Hc ⊎ {hproci , x, oy i} x ∈ local(proc), on fresh, hp, p′ i ∈ ECFG (proc), Hc′ = Hc ⊎ {hproci , x, on i} ⊎ nulls, nulls = {on } × F × {null} satisfiedc (c, proci , Hc ), hp, p′ i ∈ ECFG (proc) Role Consistency accessible(of , proci , Hc ), con(Hc′ , offstage(Hc′ )) of ∈ onstage(Hc , proci ) con(Hc′ , offstage(Hc′ )) con(Hc′ , offstage(Hc′ )) con(Hc′ , offstage(Hc′ )) con(Hc , offstage(Hc )) satisfiedc (x==y, proci , Hc ) iff {o | hproci , x, oi ∈ Hc } = {o | hproci , y, oi ∈ Hc } satisfiedc (!(x==y), proci , Hc ) iff not satisfiedc (x==y, proci , Hc ) accessible(o, proci , Hc ) := (∃p ∈ param(proc) : hproci , p, oi ∈ Hc ) or not (∃proc′j ∃v ∈ var(proc′ ) : hproc′j , v, oi ∈ Hc ) Figure 6: Semantics of Basic Statements Statement entry : p : proc′ (xk )k exit : Transition hp@proci ; s, Hc i −→ hp′ @proci ; s, Hc ⊎ nullsi hp@proci ; s, Hc i −→ hentry@proc′j ; p′ @proci ; s, Hc′ i hp@proci ; s, Hc i −→ hs, Hc \ AFi Constraints nulls = {hproci , v, nullc i | v ∈ local(proc), hp, p′ i ∈ ECFG (proc) j fresh in p@proci; s, hp, p′ i ∈ ECFG (proc), ok : hproci , xk , ok i ∈ Hc , Hc′ = Hc ⊎ {hproc′j , pk , ok i}k , ∀k pk = paramk (proc′ ) AF = {hproci , v, ni | hproci , v, ni ∈ Hc } Role Consistency con(Hc , offstage(Hc )) conW(ra, Hc , S), ra = {hok , preRk (proc′ )i}k , S = offstage(Hc ) ∪ {ok }k conW(ra, Hc , S), ra = {hparndk (proci ), postRk (proc)i}k , S = offstage(Hc ) ∪ {o | hproci , v, oi ∈ Hc } parndk (proci ) = o where hproci , paramk (proc), oi ∈ Hc Figure 7: Semantics of Procedure Call 1. onstage objects (onstage(Hc )) referenced by a local variable or parameter of some activation frame; onstage(Hc , proci ):={o | ∃x ∈ var(proc) Shproci , x, oi ∈ Hc } onstage(Hc ):= onstage(Hc , proci ) proci 2. offstage objects (offstage(Hc )) unreferenced by local or parameter variables. offstage(Hc ) := nodes(Hc ) \ onstage(Hc ) Onstage objects need not have correct roles. Offstage objects must have correct roles assuming some role assignment for onstage objects; the exception is that acyclicity constraints for offstage objects can be violated due to cycles that pass through the onstage objects. Definition 18 Given a set of role definitions and a set of objects Sc ⊆ nodes(Sc ), we say that heap Hc is role consistent for Sc , and we write con(Hc , Sc ), iff there exists a role assignment ρc : nodes(Hc ) → R0 such that the predicate locallyConsistent(o, Hc , ρc , Sc ) is satisfied for every object o ∈ Sc . We define locallyConsistent(o, Hc , ρc , Sc ) to generalize the locallyConsistent(o, Hc , ρc ) predicate, weakening the acyclicity condition. 
Definition 19 locallyConsistent(o, Hc , ρc , Sc ) holds iff conditions 1), 2), and 3) of Definition 2 are satisfied and the following condition holds: 4’) It is not the case that graph Hc contains a cycle o1 , f1 , . . . , os , fs , o1 such that o1 = o, f1 , . . . , fs ∈ acyclic(r), and additionally o1 , . . . , os ∈ Sc . Here Sc is the set of onstage objects that are not allowed to create a cycle; objects in nodes(Hc ) \ Sc are exempt from the acyclicity condition. The locallyConsistent(o, Hc , ρc , Sc ) and con(Hc , Sc ) predicates are monotonic in Sc , so a larger Sc implies a stronger invariant. For Sc = nodes(Hc ), consistency for Sc is equivalent with heap consistency from Definition 1. Note that the role assignment ρc specifies roles even for objects o ∈ nodes(Hc ) \ Sc . This is because the role of o may influence the role consistency of objects in Sc which are adjacent to o. At procedure calls, the role declarations for parameters restrict the set of potential role assignments. We therefore generalize con(Hc , Sc ) to conW(ra, Hc , Sc ), which restricts the set of role assignments ρc considered for heap consistency. Definition 20 Given a set of role definitions, a heap Hc , a set Sc ⊆ nodes(Hc ), and a partial role assignment ra ⊆ Sc → R, we say that the heap Hc is consistent with ra for Sc , and write conW(ra, Hc , Sc ), iff there exists a (total) role assignment ρc : nodes(Hc ) → R0 such that ra ⊆ ρc and for every object o ∈ Sc the predicate locallyConsistent(o, Hc , ρc , Sc ) is satisfied. 5.4 Role Consistency We are now able to precisely state the role consistency requirements that must be satisfied for program execution. The role consistency requirements are in the fourth row of Figures 6 and 7. We assume the operational semantics is extended with transitions leading to a program state with heap errorc whenever role consistency is violated. 5.4.1 Offstage Consistency At every program point, we require con(Hc , offstage(Hc )) to be satisfied. This means that offstage objects have correct roles, but onstage objects may have their role temporarily violated. 5.4.2 Reference Removal Consistency The Store statement x.f=y has the following safety precondition. When a reference hox , f, of i ∈ Hc for hprocj , x, ox i ∈ Hc , and hox , f, of i ∈ Hc is removed from the heap, both ox and of must be referenced from the current procedure activation record. It is sufficient to verify this condition for of , as ox is already onstage by definition. The reference removal consistency condition enables the completion of the role change for of after the reference hox , f, of i is removed and ensures that heap references are introduced and removed only between onstage objects. 5.4.3 Procedure Call Consistency Our programming model ensures role consistency across procedure calls using the following protocol. A procedure call proc′ (x1 , ..., xp ) in Figure 7 requires the role consistency precondition conW(ra, Hc , Sc ), where the partial role assignment ra requires objects ok , corresponding to parameters xk , to have roles preRk (proc′ ) expected by the callee, and Sc = offstage(Hc )∪{ok }k for hprocj , xk , ok i ∈ Hc . To ensure that the callee proc′j never observes incorrect roles, we impose an accessibility condition for the callee’s Load statements (see the fourth column of Figure 6). The accessibility condition prohibits access to any object o referenced by some local variable of a stack frame other than proc′j , unless o is referenced by some parameter of proc′j . 
5.4 Role Consistency
We are now able to precisely state the role consistency requirements that must be satisfied for program execution. The role consistency requirements are in the fourth column of Figures 6 and 7. We assume the operational semantics is extended with transitions leading to a program state with heap errorc whenever role consistency is violated.

5.4.1 Offstage Consistency
At every program point, we require con(Hc, offstage(Hc)) to be satisfied. This means that offstage objects have correct roles, but onstage objects may have their role temporarily violated.

5.4.2 Reference Removal Consistency
The Store statement x.f=y has the following safety precondition. When a reference ⟨ox, f, of⟩ ∈ Hc, for ⟨procj, x, ox⟩ ∈ Hc, is removed from the heap, both ox and of must be referenced from the current procedure activation record. It is sufficient to verify this condition for of, as ox is already onstage by definition. The reference removal consistency condition enables the completion of the role change for of after the reference ⟨ox, f, of⟩ is removed and ensures that heap references are introduced and removed only between onstage objects.

5.4.3 Procedure Call Consistency
Our programming model ensures role consistency across procedure calls using the following protocol. A procedure call proc′(x1, ..., xp) in Figure 7 requires the role consistency precondition conW(ra, Hc, Sc), where the partial role assignment ra requires objects ok, corresponding to parameters xk, to have roles preRk(proc′) expected by the callee, and Sc = offstage(Hc) ∪ {ok}k for ⟨procj, xk, ok⟩ ∈ Hc.
To ensure that the callee proc′j never observes incorrect roles, we impose an accessibility condition for the callee's Load statements (see the fourth column of Figure 6). The accessibility condition prohibits access to any object o referenced by some local variable of a stack frame other than proc′j, unless o is referenced by some parameter of proc′j. Provided that this condition is not violated, the callee proc′j only accesses objects with correct roles, even though objects that it does not access may have incorrect roles. In Section 7 we show how the role analysis ensures that the accessibility condition is never violated.
At the procedure exit point (Figure 7), we require correct roles for all objects referenced by the current activation frame proc′j. This implies that heap operations performed by proc′j preserve heap consistency for all objects accessed by proc′j.

5.4.4 Explicit Role Check
The programmer can specify a stronger invariant at any program point using the statement roleCheck(x1, . . . , xp, ra). As Figure 8 indicates, roleCheck requires the conW(ra, Hc, Sc) predicate to be satisfied for the supplied partial role assignment ra, where Sc = offstage(Hc) ∪ {ok}k for objects ok referenced by the given local variables xk.

Statement: p : roleCheck(x1, . . . , xn, ra)
Transition: ⟨p@proci; s, Hc⟩ −→ ⟨p′@proci; s, Hc⟩
Constraints: ⟨p, p′⟩ ∈ ECFG
Role Consistency: conW(ra, Hc, S), S = offstage(Hc) ∪ {o | ⟨proci, xk, o⟩ ∈ Hc}

Figure 8: Operational Semantics of Explicit Role Check

5.5 Instrumented Semantics
We expect the programmer to have a specific role assignment in mind when writing the program, with this role assignment changing as the statements of the program change the referencing relationships. So when the programmer wishes to change the role of an object, he or she writes a program that brings the object onstage, changes its referencing relationships so that it plays a new role, then puts it offstage in its new role. The roles of other objects do not change.2

2 An extension to the programming model supports cascading role changes in which a single role change propagates through the heap changing the roles of offstage objects, see Section 8.2.

To support these programmer expectations, we introduce an augmented programming model in which the role assignment ρc is conceptually part of the program's state. The role assignment changes only if the programmer changes it explicitly using the setRole statement. The augmented programming model has an underlying instrumented semantics as opposed to the original semantics.

Example 21 The original semantics allows asserting different roles at different program points even if the structure of the heap was not changed, as in the following procedure foo.

role A1 { fields f : B1; }
role B1 { slots A1.f; }
role A2 { fields f : B2; }
role B2 { slots A2.f; }

procedure foo()
var x, y;
{
  x = new; y = new;
  x.f = y;
  roleCheck(x,y, x:A1,y:B1);
  roleCheck(x,y, x:A2,y:B2);
}

Both role checks would succeed since each of the specified partial role assignments can be extended to a valid role assignment. On the other hand, the check roleCheck(x,y, x:A1,y:B2) would fail. The procedure foo in the instrumented semantics can be written as follows.

procedure foo()
var x, y;
{
  x = new; y = new;
  x.f = y;
  setRole(x:A1); setRole(y:B1);
  roleCheck(x,y, x:A1,y:B1);
  setRole(x:A2); setRole(y:B2);
  roleCheck(x,y, x:A2,y:B2);
}

The setRole statement makes the role change of the object explicit. The instrumented semantics extends the concrete heap Hc with a role assignment ρc. Figure 9 outlines the changes in the instrumented semantics with respect to the original semantics. We introduce a new statement setRole(x:r), which modifies a role assignment ρc, giving ρc[ox ↦ r], where ox is the object referenced by x. All statements other than setRole preserve the current role assignment.
For every consistency condition conW(ra, Hc , Sc ) in the original semantics, the instrumented semantics uses the corresponding condition conW(ρc ∪ ra, Hc , Sc ) and fails if ρc is not an extension of ra. Here we consider con(Hc , S) to be a shorthand for conW(∅, Hc , S). For example, the new role consistency condition for the Copy statement x=y is conW(ρc , Hc , offstage(Hc )). The New statement assigns an identifier unknown to the newly created object on . By definition, a node with unknown does not satisfy the locallyConsistent predicate. This means that setRole must be used to set a a valid role of on before on moves offstage. By introducing an instrumented semantics we are not suggesting an implementation that explicitly stores roles of objects at run-time. We instead use the instrumented semantics as the basis of our role analysis and ensure that all role checks can be statically removed. Because the instrumented semantics is more restrictive than the original semantics, our role analysis is a conservative approximation of both the instrumented semantics and the original semantics. 6 Intraprocedural Role Analysis This section presents an intraprocedural role analysis algorithm. The goal of the role analysis is to statically verify the role consistency requirements described in the previous section. The key observation behind our analysis algorithm is that we can incrementally verify role consistency of the concrete heap Hc by ensuring role consistency for every node when it goes offstage. This allows us to represent the statically unbounded offstage portion of the heap using summary nodes with “may” references. In contrast, we use a “must” interpretation for references from and to onstage nodes. The exact representation of onstage nodes allows the analysis to verify role consistency in the presence of temporary violations of role constraints. Our analysis representation is a graph in which nodes represent objects and edges represent references between ob- Statement p : x=new p: setRole(x:r) p : stat Transition hp@proci ; s, Hc ⊎ {hproci , x, ox i}, ρc i −→ hp′ @proci ; s, Hc′ , ρ′c i hp@proci ; s, Hc , ρc i −→ hp′ @proci ; s, Hc , ρ′c i hs, Hc , ρc i −→ hs′ , Hc′ , ρc i Constraints x ∈ local(proc), on fresh, hp, p′ i ∈ ECFG (proc), Hc′ = Hc ⊎{hproci , x, on i} ⊎{on } × F × {null}, ρ′c = ρc [on 7→ unknown] x ∈ local(proci ), hproci , x, ox i ∈ Hc , ρ′c = ρc [ox 7→ r], hp, p′ i ∈ ECFG hs, Hc i −→hs ′ , Hc′ i Role Consistency conW(ρ′c , Hc′ , offstage(Hc′ )) conW(ρ′c , Hc , offstage(Hc )) P ∧ conW(ρc ∪ ra, Hc′′ , S) for every original condition P ∧ conW(ra, Hc′′ , S) Figure 9: Instrumented Semantics jects. There are two kinds of nodes: onstage nodes represent onstage objects, with each onstage node representing one onstage object; and offstage nodes, with each offstage node corresponding to a set of objects that play that role. To increase the precision of the analysis, the algorithm occasionally generates multiple offstage nodes that represent disjoint sets of objects playing the same role. Distinct offstage objects with the same role r represent disjoint sets of objects of role r with different reachability properties from onstage nodes. We frame role analysis as a data-flow analysis operating on a distributive lattice P(RoleGraphs) of sets of role graphs with set union ∪ as the join operator. In this section we present an algorithm for intraprocedural analysis. We use procc to denote the topmost activation record in a concrete heap Hc . 
In Section 7 we generalize the algorithm to the compositional interprocedural analysis.

6.1 Abstraction Relation
Every data-flow fact G ⊆ RoleGraphs is a set of role graphs G ∈ G. Every role graph G ∈ RoleGraphs is either a bottom role graph ⊥G representing the set of all concrete heaps (including errorc), or a tuple G = ⟨H, ρ, K⟩ representing non-error concrete heaps, where
• H ⊆ N × F × N is the abstract heap with nodes N representing objects and fields F. The abstract heap H represents heap references ⟨n1, f, n2⟩ and variables of the currently analyzed procedure ⟨proc, x, n⟩ where x ∈ local(proc). Null references are represented as references to abstract node null. We define abstract onstage nodes onstage(H) = {n | ⟨proc, x, n⟩ ∈ H, x ∈ local(proc) ∪ param(proc)} and abstract offstage nodes offstage(H) = nodes(H) \ onstage(H) \ {proc, null}.
• ρ : nodes(H) → R0 is an abstract role assignment, ρ(null) = nullR;
• K : nodes(H) → {i, s} indicates the kind of each node; when K(n) = i, then n is an individual node representing at most one object, and when K(n) = s, n is a summary node representing zero or more objects. We require K(proc) = K(null) = i, and require all onstage nodes to be individual, K[onstage(H)] ⊆ {i}.

The abstraction relation α relates a pair ⟨Hc, ρc⟩ of concrete heap and concrete role assignment with an abstract role graph G.

Definition 22 We say that an abstract role graph G represents concrete heap Hc with role assignment ρc and write ⟨Hc, ρc⟩ α G, iff G = ⊥G or: Hc ≠ errorc, G = ⟨H, ρ, K⟩, and there exists a function h : nodes(Hc) → nodes(H) such that
1) Hc is role consistent: conW(ρc, Hc, offstage(Hc)),
2) identity constraints of onstage nodes with offstage nodes hold: if ⟨o1, f, o2⟩ ∈ Hc and ⟨o2, g, o3⟩ ∈ Hc for o1 ∈ onstage(Hc), o2 ∈ offstage(Hc), and ⟨f, g⟩ ∈ identities(ρc(o1)), then o3 = o1;
3) h is a graph homomorphism: if ⟨o1, f, o2⟩ ∈ Hc then ⟨h(o1), f, h(o2)⟩ ∈ H;
4) an individual node represents at most one concrete object: K(n) = i implies |h−1(n)| ≤ 1;
5) h is bijection on edges which originate or terminate at onstage nodes: if ⟨n1, f, n2⟩ ∈ H and n1 ∈ onstage(H) or n2 ∈ onstage(H), then there exists exactly one ⟨o1, f, o2⟩ ∈ Hc such that h(o1) = n1 and h(o2) = n2;
6) h(nullc) = null and h(procc) = proc;
7) the abstract role assignment ρ corresponds to the concrete role assignment: ρc(o) = ρ(h(o)) for every object o ∈ nodes(Hc).

Note that the error heap errorc can be represented only by the bottom role graph ⊥G. The analysis uses ⊥G to indicate a potential role error.
Condition 3) implies that role graph edges are a conservative approximation of concrete heap references. These edges are in general "may" edges. Hence it is possible for an offstage node n that ⟨n, f, n1⟩, ⟨n, f, n2⟩ ∈ H for n1 ≠ n2. This cannot happen when n ∈ onstage(H) because of 5). Another consequence of 5) is that an edge in H from an onstage node n0 to a summary node ns implies that ns represents at least one object. Condition 2) strengthens 1) by requiring certain identity constraints for onstage nodes to hold, as explained in Section 6.2.4.

[Figure 10: Abstraction Relation]
[Figure 11: Simulation Relation Between Abstract and Concrete Execution]

Example 23 Consider the following role declaration for an acyclic list.
role L { // List header fields first : LN | null; } role LN { // List node fields next : LN | null; slots LN.next | L.first; acyclic next; } Figure 10 shows a role graph and one of the concrete heaps represented by the role graph via homomorphism h. There are two local variables, prev and current, referencing distinct onstage objects. Onstage objects are isomorphic to onstage nodes in the role graph. In contrast, there are two objects mapped to each of the summary nodes with role LN (shown as LN-labelled rectangles in Figure 10). Note that the sets of objects mapped to these two summary nodes are disjoint. The first summary LN-node represents objects stored in the list before the object referenced by prev. The second summary LN-node represents objects stored in the list after the object referenced by current. Transfer Functions The key complication in developing the transfer functions for the role analysis is to accurately model the movement of objects onstage and offstage. For example, a load statement x=y.f may cause the object referred to by y.f to move onstage. In addition, if x was the only reference to an onstage object o before the statement executed, object o moves offstage after the execution of the load statement, and thus must satisfy the locallyConsistent predicate. The analysis uses an expansion relation  to model the movement of objects onstage and a contraction relation  to model the movement of objects offstage. The expansion relation uses the invariant that offstage nodes have correct roles to generate possible aliasing relationships for the node being pulled onstage. The contraction relation establishes the role invariants for the node going offstage, allowing the node to be merged into the other offstage nodes and represented more compactly. We present our role analysis as an abstract execution rest lation ❀. The abstract execution ensures that the abstraction relation α is a forward simulation relation [33] from the space of concrete heaps with role assignments to the set RoleGraphs. The simulation relation implies that the traces of ❀ include the traces of the instrumented semantics −→. To ensure that the program does not violate constraints associated with roles, it is thus sufficient to guarantee that ⊥G is not reachable via ❀. To prove that ⊥G is not reachable in the abstract execution, the analysis computes for every program point p a set of role graphs G that conservatively approximates the possible program states at point p. The transfer function for a st statement st is an image [[st]](G) = {G′ | G ∈ G, G ❀ G′ }. st The analysis computes the relation ❀ in three steps: 1. ensure that the relevant nodes are instantiated using expansion relation  (Section 6.2.1); st 2. perform symbolic execution =⇒ of the statement st (Section 6.2.3); 3. merge nodes if needed using contraction relation  to keep the role graph bounded (Section 6.2.2). Figure 11 shows how the abstraction relation α relates , st =⇒, and  with the concrete execution −→ in instrumented semantics. Assume that a concrete heap hHc , ρc i is represented by the role graph G1 . Then one of the role graphs G2 obtained after expansion remains an abstraction of hHc , ρc i. 
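The three-step computation of the abstract execution relation ❀ can be wrapped in an ordinary worklist iteration over the control-flow graph, with sets of role graphs as dataflow facts and set union as join. The sketch below is only the driver: expand, symbolic_exec, and contract are assumed to be provided as functions returning sets of role graphs (they stand for the relations of Sections 6.2.1-6.2.3), role graphs are assumed to be hashable values, and BOTTOM stands for ⊥G; none of these names come from the paper.

BOTTOM = "BOTTOM"   # stands for the bottom role graph, i.e. a potential role error

def transfer(stmt, graphs, expand, symbolic_exec, contract):
    # [[st]](G) = { G' | G in graphs, G ~st~> G' }: bring the needed nodes
    # onstage, execute the statement symbolically, then contract to stay bounded.
    out = set()
    for g in graphs:
        for g1 in expand(stmt, g):
            for g2 in symbolic_exec(stmt, g1):
                out |= contract(stmt, g2)
    return out

def analyze(cfg, entry, init, expand, symbolic_exec, contract):
    # cfg maps a program point to its outgoing (statement, successor) pairs.
    facts = {p: set() for p in cfg}
    facts[entry] = set(init)
    work = [entry]
    while work:
        p = work.pop()
        for stmt, q in cfg[p]:
            new = transfer(stmt, facts[p], expand, symbolic_exec, contract)
            if not new <= facts[q]:
                facts[q] |= new
                work.append(q)
    # the program is rejected if the bottom role graph is reachable anywhere
    errors = {p for p, gs in facts.items() if BOTTOM in gs}
    return facts, errors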
Transition Definition x=y.f ′ hH, ρ, Ki ❀ G x=y ny ,f ′ hH, ρ, Ki  G1 =⇒ G2  G n1 x=y ′ hH, ρ, Ki ❀ G x=new Conditions nx x=y.f hproc, x, nx i, hproc, y, ny i ∈ H ′ hH, ρ, Ki =⇒ G1  G x=new ′ hH, ρ, Ki ❀ G n1 hproc, x, n1 i ∈ H ′ hH, ρ, Ki =⇒ G1  G s s hH, ρ, Ki ❀ G′ hH, ρ, Ki =⇒ G′ hproc, x, n1 i ∈ H s ∈ {x.f=y, test(c), setRole(x:r), roleCheck(x1..p , ra)} Figure 12: Abstract Execution ❀ st The symbolic execution =⇒ followed by the contraction relation  corresponds to the instrumented operational semantics →. Figure 12 shows rules for the abstract execution relation st ❀. Only Load statement uses the expansion relation, because the other statements operate on objects that are already onstage. Load, Copy, and New statements may remove a local variable reference from an object, so they use contraction relation to move the object offstage if needed. For the rest of the statements, the abstract execution reduces to symbolic execution =⇒ described in Section 6.2.3. st Nondeterminism and Failure The ❀ relation is not a function because the expansion relation  can generate a set of role graphs from a single role graph. Also, there might be st no ❀ transitions originating from a given state G if the symbolic execution =⇒ produces no results. This corresponds to a trace which cannot be extended further due to a test statement which fails in state G. This is in contrast to a transition from G to ⊥G which indicates a potential role consistency violation or a null pointer dereference. We assume that =⇒ and  relations contain the transition h⊥G , ⊥G i to propagate the error role graph. In most cases we do not write the explicit transitions to error states. 6.2.1 Expansion n,f Figure 13 shows the expansion relation  . Given a role graph hH, ρ, Ki expansion attempts to produce a set of role graphs hH ′ , ρ′ , K ′ i in each of which hn, f, n0 i ∈ H ′ and K(n0 ) = i. Expansion is used in abstract execution of the Load statement. It first checks for null pointer dereference and reports an error if the check fails. If hn, f, n′ i ∈ H and K(n′ ) = i already hold, the expansion returns the original state. Otherwise, hn, f, n′ i ∈ H with K(n′ ) = s. In that case, the summary node n′ is first instantiated using instann0 n0 tiation relation ⇑ . Next, the split relation k is applied. Let n′ ρ(n0 ) = r. The split relation ensures that n0 is not a member of any cycle of offstage nodes which contains only edges in acyclic(r). We explain instantiation and split in more detail below. generates the set of role graphs hH ′ , ρ′ , K ′ i such that each concrete heap represented by hH, ρ, Ki is represented by one of the graphs hH ′ , ρ′ , K ′ i. Each of the new role graphs contains a fresh individual node n0 that satisfies localCheck. The edges of n0 are a subset of edges from and to n′ . Let H0 be a subset of the references between n′ and onstage nodes, and let H1 be a subset of the references between n′ and offstage nodes. References in H0 are moved from n′ to the new node n0 , because they represent at most one reference, while references in H1 are copied to n0 because they may represent multiple concrete heap references. Moving a reference is formalized via the swing operation in Figure 14. The instantiation of a single graph can generate multiple role graphs depending on the choice of H0′ and H1′ . The number of graphs generated is limited by the existing references of node n′ and by the localCheck requirement for n0 . 
This is where our role analysis takes advantage of constraints associated with role definitions to reduce the number of aliasing possibilities that need to be considered. Split The split relation is important for verifying operations on data structures such as skip lists and sparse matrices. It is also useful for improving the precision of the initial set of role graphs on procedure entry (Section 7.2.1). The goal of the split relation is to exploit the acyclicity constraints associated with role definitions. After a node n0 is brought onstage, split represents the acyclicity condition of ρ(n0 ) explicitly by eliminating impossible paths in the role graph. It uses additional offstage nodes to encode the reachability information implied by the acyclicity conditions. This information can then be used even after the role of node n0 changes. In particular, it allows the acyclicity condition of n0 to be verified when n0 moves offstage. Example 24 Consider a role graph for an acyclic list with nodes LN and a header node L. The instantiated node n0 is in the middle of the list. Figure 16 a) shows a role graph with a single summary node representing all offstage LNnodes. Figure 16 b) shows the role graph after applying the split relation. The resulting role graph contains two LN summary nodes. The first LN summary node represents objects definitely reachable from n0 along next edges; the second summary NL node represents objects definitely not reachable from n0 . Figure 14 presents the instantiation rela- Figure 15 shows the definition of the split operation on tion. Given a role graph G = hH, ρ, Ki, instantiation ⇑ node n0 , denoted by k . Let G = hH, ρ, Ki be the initial Instantiation n0 n′ n0 Transition Definition Condition n,f hH, ρ, Ki  hH, ρ, Ki n0 n,f hH, ρ, Ki  G′ n0 hH, ρ, Ki ⇑ hH1 , ρ1 , K1 i k G′ n′ hn, f, n′ i ∈ H, n′ ∈ onstage(H) hn, f, n′ i ∈ H, n′ ∈ offstage(H) hn, f, n0 i ∈ H1 Figure 13: Expansion Relation n0 hH, ρ, Ki ⇑ hH ′ , ρ′ , K ′ i n′ swing(nold , nnew , H) H ′ = H \ H0 ∪ H0′ ∪ H1′ n′ ∈ / nodes(H ′ ), if K(n′ ) = i ′ ρ = ρ[n0 7→ ρ(n′ )] K ′ = K[n0 7→ i] localCheck(n0 , hH ′ , ρ′ , K ′ i)  H0 ⊆ H ∩ onstage(H) × F × {n′ } ∪ {n′ } × F × onstage(H)  H1 ⊆ H ∩ offstage(H) × F × {n′ } ∪ {n′ } × F × offstage(H) H0′ = swing(n′ , n0 , H0 ) H1′ ⊆ swing(n′ , n0 , H1 ) {hnnew , f, ni | hnold , f, ni ∈ H} ∪ {hn, f, nnew i | hn, f, nold i ∈ H} ∪ {hnnew , f, nnew i | hnold , f, nold i ∈ H} = Figure 14: Instantiation Relation ⇑ n0 hH, ρ, Ki k hH, ρ, Ki, n0 ′ ′ ′ hH, ρ, Ki k hH , ρ , K i, acycCheck(n0 , hH, ρ, Ki, offstage(H)) ¬acycCheck(n0 , hH, ρ, Ki, offstage(H)) where H ′ = (H \ Hcyc ) ∪ Hoff ∪ BfNR ∪ BfR ∪ BtNR ∪ BtR ∪ Nf ∪ Nt Hcyc = {hn1 , f, n2 i | n1 or n2 ∈ Scyc } Hoff = { hn′1 , f, n′2 i | n1 = c(n′1 ), n2 = c(n′2 ), n1 , n2 ∈ offstage1 (H), n1 or n2 ∈ Scyc , hn1 , f, n2 i ∈ H } \(SR × acyclic(r) × SNR ) H ∩ (onstage(H) × F ∪ {n0 } × acyclic(r)) × Scyc = AfNR ⊎ AfR H ∩ Scyc × (acyclic(r) × {n0 } ∪ F × onstage(H)) = AtNR ⊎ AtR BfNR = {hn1 , f, hNR (n2 )i | hn1 , f, n2 i ∈ AfNR } BfR = {hn1 , f, hR (n2 )i | hn1 , f, n2 i ∈ AfR } BtNR = {hhNR (n1 ), f, n2 i | hn1 , f, n2 i ∈ AtNR } BtR = {hhR (n1 ), f, n2 i | hn1 , f, n2 i ∈ AtR } Nf = {hn0 , f, n′ i | n′ ∈ SR , hn0 , f, c(n′ )i ∈ H, f ∈ acyclic(r)} Nt = {hn′ , f, n0 i | n′ ∈ SNR , hc(n′ ), f, n0 i ∈ H, f ∈ acyclic(r)} Scyc = {n | ∃n1 , . . . , np−1 ∈ offstage(H) : hn0 , f0 , n1 i, . . . , hnk , fk , ni, hn, fk+1 , nk+2 i, hnp−1 , fp−1 , n0 i ∈ H, f0 , . . . 
, fp−1 ∈ acyclic(r)} offstage1 (H) = offstage(H) \ {n0 } r = ρ(n0 ) ρ′ (c(n)) = ρ(n) K ′ (c(n)) = K(n) Figure 15: Split Relation n0 LN LN null L a) Before Split LN n 0 LN LN null L b) After Split Figure 16: A Role Graph for an Acyclic List role graph and ρ(n0 ) = r. If acyclic(r) = ∅, then the split operation returns the original graph G; otherwise it proceeds as follows. Call a path in graph H cycle-inducing if all of its nodes are offstage and all of its edges are in acyclic(r). Let Scyc be the set of nodes n such that there is a cycle-inducing path from n0 to n and a cycle-inducing path from n to n0 . The goal of the split operation is to split the set Scyc into a fresh set of nodes SNR representing objects definitely not reachable from n0 along edges in acyclic(r) and a fresh set of nodes SR representing objects definitely reachable from n0 . Each of the newly generated graphs H ′ has the following properties: 1) merging the corresponding nodes from SNR and SR in H ′ yields the original graph H; 2) n0 is not a member of any cycle in H ′ consisting of offstage nodes and edges in acyclic(r); 3) onstage nodes in H ′ have the same number of fields and aliases as in H. Let S0 = nodes(H) \ Scyc and let hNR : Scyc → SNR and hR : Scyc → SR be bijections. Define a function c : nodes(H ′ ) → nodes(H) as follows: c(n) = ( n, h−1 R (n), h−1 NR (n), n ∈ S0 n ∈ SR n ∈ SNR Then H ′ ⊆ {hn′1 , f, n′2 i | hc(n′1 ), f, c(n′2 )i ∈ H}. Because there are two copies of S0 in H ′ , there might be multiple edges hn′1 , f, n′2 i in H ′ corresponding to an edge hc(n1 ), f, c(n2 )i ∈ H. If both n′1 and n′2 are offstage nodes other than n0 , we always include hn′1 , f, n′2 i in H ′ unless hn′1 , f, n′2 i ∈ SR × acyclic(r) × SNR . The last restriction prevents cycles in H ′ . For an edge hn1 , f, n2 i ∈ H where n1 ∈ onstage(H) and n2 ∈ Scyc we include in H ′ either the edge hn1 , f, hNR (n2 )i or hn1 , f, hR (n2 )i but not both. Split generates multiple graphs H ′ to cover both cases. We proceed analogously if n2 ∈ onstage(H) and n1 ∈ Scyc . The node n0 itself is treated in the same way as onstage nodes for f ∈ / acyclic(r). If f ∈ acyclic(r) then we choose references to n0 to have a source in SNR , whereas the reference from n0 have the target in SR . Details of the split construction are given in Figure 15. The intuitive meaning of the sets of edges is the following: Hoff : edges between offstage nodes BfNR : edges from onstage nodes to SNR BfR : edges from onstage nodes to SR BtNR : edges from SNR to onstage nodes BtR : edges from SR to onstage nodes Nf : acyclic(r)-edges from n0 to SR Nt : acyclic(r)-edges from SNR to n0 The sets BfNR and BfR are created as images of the sets AfNR and AfR which partition edges from onstage nodes to nodes in Scyc . Similarly, the sets BtNR and BtR are created as images of the sets AtNR and AtR which partition edges from nodes in Scyc to onstage nodes. We note that if in the split operation Scyc = ∅ then the operation has no effect and need not be performed. In Figure 16, after performing a single split, there is no need to split for subsequent elements of the list. Examples like this indicate that split will not be invoked frequently during the analysis. 6.2.2 Contraction Figure 17 shows the non-error transitions of the contraction n relation . The analysis uses contraction when a local variable reference to node n is removed. If there are other local references to n, the result is the original graph. Otherwise n has just gone offstage, so analysis invokes nodeCheck. 
If the check fails, the result is ⊥G . If the role check succeeds, the contraction invokes normalization operation to ensure that the role graph remains bounded. For simplicity, we use normalization whenever nodeCheck succeeds, although it is sufficient to perform normalization only at program points adjacent to back edges of the control-flow graph. Normalization Figure 18 shows the normalization relation. Normalization accepts a role graph hH, ρ, Ki and produces a normalized role graph hH ′ , ρ′ , K ′ i which is a factor graph of hH, ρ, Ki under the equivalence relation ∼. Two offstage nodes are equivalent under ∼ if they have the same role and the same reachability from onstage nodes. Here we consider node n to be reachable from an onstage node n0 iff there is some path from n0 to n whose edges belong to acyclic(ρ(n0 )) and whose nodes are all in offstage(H). Note that, by construction, normalization avoids merging nodes which were previously generated in the split operation k, while still ensuring a bound on the size of the role graph. For a procedure with l local variables, f fields and r roles the number of nodes in a role graph is on the order of r2l so the n ∃x ∈ var(proc) : hproc, x, ni ∈ H hH, ρ, Ki  normalize(hH, ρ, Ki) nodeCheck(n, hH, ρ, Ki, offstage(H)) hH, ρ, Ki hH, ρ, Ki n Figure 17: Contraction Relation normalize(hH, ρ, Ki) = hH ′ , ρ′ , K ′ i where H ′ = {hn1/∼ , f, n2/∼ i | hn1 , f, n2 i ∈ H} ρ′ (n/∼ ) = ρ(n)  i, n/∼ = {n}, K(n) = i K ′ (n/∼ ) = s, otherwise n1 ∼ n2 iff n1 = n2 or (n1 , n2 ∈ offstage(H), ρ(n1 ) = ρ(n2 ), ∀n0 ∈ onstage(H) : (reach(n0 , n1 ) iff reach(n0 , n2 )) reach(n0 , n) iff ∃n1 , . . . , np−1 ∈ offstage(n), ∃f1 , . . . , fp ∈ acyclic(ρ(n0 )) : hn0 , f1 , n1 i, . . . , hnp−1 , fp , ni ∈ H Figure 18: Normalization l maximum size of a chain in the lattice is of the order of 2r2 . To ensure termination we consider role graphs equal up to isomorphism. Isomorphism checking can be done efficiently if normalization assigns canonical names to the equivalence classes it creates. 1. r ∈ fieldf (ρ(n)) for all hn, f, nx i ∈ H, and 6.2.3 Symbolic Execution st Figure 19 shows the symbolic execution relation =⇒. In most cases, the symbolic execution of a statement acts on the abstract heap in the same way that the statement would act on the concrete heap. In particular, the Store statement always performs strong updates. The simplicity of symbolic execution is due to conditions 3) and 5) in the abstraction relation α. These conditions are ensured by the  relation which instantiates nodes, allowing strong updates. The symbolic execution also verifies the consistency conditions that are not verified by  or . Verifying Reference Removal Consistency for offstage nodes are consistent with the new role r. The acyclicity constraint involves only offstage nodes, so it remains satisfied. The other role constraints are local, so they can only be violated for offstage neighbors of nx . To make sure that no violations occur, we require: The ab- 2. hr, f i ∈ sloti (ρ(n)) for all hnx , f, ni ∈ H and every slot i such that hr0 , f i ∈ sloti (ρ(n)) This is sufficient to guarantee conW(ρc , Hc , offstage(Hc )). To ensure condition 2) in Definition 22 of the abstraction relation, we require that for every hf, gi ∈ identities(r), 1. hf, gi ∈ identities(r0 ) or 2. for all hnx , f, ni ∈ H: K(n) = i and (hn, g, n′ i ∈ H implies n′ = nx ). We use roleChOk(nx , r, hH, ρ, Ki) to denote the check just described. 
st stract execution ❀ for the Store statement can easily verify the Store safety condition from section 5.4.2, because the set of onstage and offstage nodes is known precisely for every role graph. It returns ⊥G if the safety condition fails. Symbolic Execution of setRole The setRole(x:r) statement sets the role of node nx referenced by variable x to r. Let G = hH, ρ, Ki be the current role graph and let hproc, x, nx i ∈ H. If nx has no adjacent offstage nodes, the role change always succeeds. In general, there are restrictions on when the change can be done. Let hHc , ρc i be a concrete heap with role assignment represented by G and h be a homomorphism from Hc to H. Let h(ox ) = nx . Let r0 = ρc (ox ). The symbolic execution must make sure that the condition conW(ρc , Hc , offstage(Hc )) continues to hold after the role change. Because the set of onstage nodes does not change, it suffices to ensure that the original roles Symbolic Execution of roleCheck To symbolically execute roleCheck(x1 , . . . , xp , ra), we ensure that the conW predicate of the concrete semantics is satisfied for the concrete heaps which correspond to the current abstract role graph. The symbolic execution for roleCheck returns the error graph ⊥G if ρ is inconsistent with ra or if any of the nodes ni referenced by xi fail to satisfy nodeCheck. 6.2.4 Node Check The analysis uses the localCheck, acycCheck, acycCheckAll, and nodeCheck predicates to incrementally maintain the abstraction relation. We first define the predicate localCheck, which roughly corresponds to the predicate locallyConsistent (Definition 2), but ignores the nonlocal acyclicity condition and additionally ensures condition 2) from Definition 22. Statement s Transition Conditions s hproc, y, ny i, hny , f, nf i ∈ H hproc, x, nx i, hproc, y, ny i ∈ H nf ∈ onstage(H) hproc, y, ny i ∈ H nn fresh ρ′ = ρ[nn 7→ unknown] satisfied(c, H) hproc, x, nx i ∈ H roleChOk(nx , r, hH, ρ, Ki) ∀i hproc, xi , ni i ∈ H nodeCheck(ni , hH, ρ, Ki, S) S = offstage(H) ∪ {ni }i ρ(ni ) = ra(ni ) x = y.f hH ⊎ {proc, x, nx }, ρ, Ki =⇒ hH ⊎ {proc, x, nf }, ρ, Ki x.f = y s hH ⊎ {nx , f, nf }, ρ, Ki =⇒ hH ⊎ {nx , f, ny }, ρ, Ki x = y hH ⊎ {proc, x, nx }, ρ, Ki =⇒ hH ⊎ {proc, x, ny }, ρ, Ki x = new hH ⊎ {proc, x, nx }, ρ, Ki =⇒ hH ⊎ {proc, x, nn }, ρ′ , Ki test(c) hH, ρ, Ki =⇒ hH, ρ, Ki setRole(x:r) hH, ρ, Ki =⇒ hH, ρ[nx 7→ r], Ki roleCheck(x1..p , ra) hH, ρ, Ki =⇒ hH, ρ, Ki s s s s s satisfied(x==y, Hc ) iff {o | hproc, x, oi ∈ Hc } = {o | hproc, y, oi ∈ Hc } satisfied(!(x==y), Hc ) iff not satisfied(x==y, Hc ) Figure 19: Symbolic Execution of Basic Statements Definition 25 For a role graph G = hH, ρ, Ki, an individual node n and a set S, the predicate localCheck(n, G) holds iff the following conditions are met. Let r = ρ(n). S, and we write acycCheck(n, G, S), iff it is not the case that H contains a cycle n1 , f1 , . . . , ns , fs , n1 where n1 = n, f1 , . . . , fs ∈ acyclic(ρ(n)) and n1 , . . . , ns ∈ S. 1A. (Outgoing fields check) For fields f ∈ F , if hn, f, n′ i ∈ H then ρ(n′ ) ∈ fieldf (r). The analysis uses the predicate acycCheckAll(n, G, S) in the contraction relation (Section 6.2.2) to make sure that n is not a member of any cycle that would violate the acyclicity condition of any of the nodes in S, including n. 2A. (Incoming slots check) Let {hn1 , f1 i, . . . , hnk , fk i} = {hn′ , f i | hn′ , f, ni ∈ H} be the set of all aliases of node n in abstract heap H. Then k = slotno(r) and there exists a permutation p of the set {1, . . . 
, k} such that hρ(ni ), fi i ∈ slotpi (r) for all i. 3A. (Identity Check) If hn, f, n′ i ∈ H, hn′ , g, n′′ i ∈ H, hf, gi ∈ identities(r), and K(n′ ) = i, then n = n′′ . 4A. (Neighbor Identity Check) For every edge hn′ , f, ni ∈ H, if K(n′ ) = i, ρ(n′ ) = r ′ and hf, gi ∈ identities(r ′ ) then hn, g, n′ i ∈ H. 5A. (Field Sanity Check) For every f ∈ F there is exactly one edge hn, f, n′ i ∈ H. Conditions 1A and 2A correspond to conditions 1) and 2) in Definition 2. Condition 3) in Definition 19 is not necessarily implied by condition 3A) if some of the neighbors of n are summary nodes. Condition 3) cannot be established based only on summary nodes, because verifying an identity constraint for field f of node n where hn, f, n′ i ∈ H requires knowing the identity of n′ , not only its existence and role. We therefore rely on Condition 2) of the Definition 22 to ensure that identity relations of neighbors of node n are satisfied before n moves offstage. The predicate acycCheck(n, G, S) verifies the acyclicity condition from Definition 19 for the node that has just been brought onstage. The split operation uses acycCheck to check whether it should split any nodes (Section 6.2.1). The set S represents offstage nodes. Definition 26 We say that a node n satisfies an acyclicity check in graph G = hH, ρ, Ki with respect to the set Definition 27 We say that a node n satisfies a strong acyclicity check in graph G = hH, ρ, Ki with respect to a set S, and we write acycCheckAll(n, G, S), iff it is not the case that: H contains a cycle n1 , f1 , . . . , ns , fs , n1 where n1 = n, f1 , . . . , fs ∈ acyclic(ρ(ni )), for some 1 ≤ i ≤ s, and n1 , . . . , ns ∈ S. acycCheckAll is a stronger condition than acycCheck because it ensures the absence of cycles containing n for acyclic(ρ(ni )) fields of all offstage nodes ni , and not only for the fields acyclic(ρ(n)). The analysis uses the predicate nodeCheck to verify that bringing a node n offstage does not violate role consistency for offstage nodes. Definition 28 nodeCheck(n, G, S) holds iff both predicates localCheck(n, G) and acycCheckAll(n, G, S) hold. 7 Interprocedural Role Analysis This section describes the interprocedural aspects of our role analysis. Interprocedural role analysis can be viewed as an instance of the functional approach to interprocedural dataflow analysis [41]. For each program point p, role analysis approximates program traces from procedure entry to point p. The solution in [41] proposes tagging the entire data-flow fact G at point p with the data flow fact G0 at procedure entry. In contrast, our analysis computes the correspondence between heaps at procedure entry and heaps at point p at the granularity of sets of objects that constitute role graphs. This allows our analysis to detect which regions of the heap have been modified. We approximate the concrete executions of a procedure with procedure transfer relations consisting of 1) an initial context and 2) a set of effects. Effects are fine-grained transfer relations which summarize load and store statements and can naturally describe local heap modifications. In this paper we assume that procedure transfer relations are supplied and we are concerned with a) verifying that transfer relations are a conservative approximation of procedure implementation b) instantiating transfer relations at call sites. 7.1 Procedure Transfer Relations A transfer relation for a procedure proc extends the procedure signature with an initial context context(proc), and procedure effects effect(proc). 
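As a rough picture of what such a transfer relation carries, the following Python sketch groups the pieces into one record. The names and types are hypothetical and only mirror the components defined in the next two subsections; this is not a notation used elsewhere in the paper.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

Edge = Tuple[str, str, str]          # (source node, field name, target node)

@dataclass
class InitialContext:
    roles: dict                      # node name -> role name (rho_IC)
    kinds: dict                      # node name -> 'i' (individual) or 's' (summary) (K_IC)
    edges: FrozenSet[Edge]           # parameter edges and heap edges (H_IC)

@dataclass
class TransferRelation:
    context: InitialContext          # context(proc)
    read: FrozenSet[str]             # read(proc): initial-context nodes the procedure may access
    may_writes: FrozenSet[Edge]      # mayWr(proc): writes the procedure is free to perform or not
    must_writes: List[FrozenSet[Edge]] = field(default_factory=list)
                                     # mustWr_j(proc): each entry must be matched by one performed store
```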
imply that parameters p1 and p2 must be aliased, p1 -> n1 p2 -> n2 force p1 and p2 to be unaliased, whereas p1 -> n1|n2 p2 -> n1|n2 allow for both possibilities. A heap edge n -f-> m denotes hn, f, mi ∈ HIC . The shorthand notation n1 -f-> n2 -g-> n3 denotes two heap edges hn1, f, n2i, hn1, g, n3i ∈ HIC . An expression n1 -f-> n2|n3 denotes two edges n1 -f-> n2 and n1 -f-> n3. We use similar shorthands for parameter edges. 7.1.1 Initial Context Figures 20 and 21 contain examples of initial context specification. An initial context is a description of the initial role graph hHIC , ρIC , KIC i where ρIC and KIC are determined by a nodes declaration and HIC is determined by a edges declaration. The initial role graph specifies a set of concrete heaps at procedure entry and assigns names for sets of nodes in these heaps. The next definition is similar to Definition 22. Definition 29 We say that a concrete heap hHc , ρc i is represented by the initial role graph hHIC , ρIC , KIC i and write hHc , ρc i α0 hHIC , ρIC , KIC i, iff there exists a function h0 : nodes(Hc ) → nodes(HIC ) such that proc p l lx ph proc P1 LL1 null proc proc px proc l2 SleepingProc 1. conW(ρc , Hc , h−1 0 (read(proc)); 2. h0 is a graph homomorphism; 3. KIC (n) = i implies |h−1 0 (n)| ≤ 1; proc P2 proc LL2 proc 4. h0 (nullc ) = null and h0 (procc ) = proc; 5. ρc (o) = ρIC (h0 (o)) for every object o ∈ nodes(Hc ). Here read(proc) is the set of initial-context nodes read by the procedure (see below). For simplicity, we assume one context per procedure; it is straightforward to generalize the treatment to multiple contexts. A context is specified by declaring a list of nodes and a list of edges. A list of nodes is given with nodes declaration. It specifies a role for every node at procedure entry. Individual nodes are denoted with lowercase identifiers, summary nodes with uppercase identifiers. By using summary nodes it is possible to indicate disjointness of entire heap regions and reachability between nodes in the heap. There are two kinds of edges in the initial role graph: parameter edges and heap edges. A parameter edge p->pn is interpreted as hproc, p, pni ∈ HIC . We require every parameter edge to have an individual node as a target, we call such node a parameter node. The role of a parameter node referenced by parami (proc) is always preRi (proc). Since different nodes in the initial role graph denote disjoint sets of concrete objects, parameter edges p1 -> n1 p2 -> n1 nodes ph : RunningHeader, P1, px, P2 : RunningProc, lx : LiveHeader, LL1, l2, LL2 : LiveList; edges p-> px, l-> px, ph -next-> P1|px -prev-> px|P2, P1 -next-> P1|px -prev-> ph|P1, px -next-> P2|ph -prev-> P1|ph, P2 -next-> P2|ph -prev-> P2|px, lx -next-> LL1|l2, LL1 -next-> LL1|l2 -proc-> P1|P2|SleepingProc l2 -next-> LL2|null -proc-> px, LL2 -next-> LL2|null -proc-> P1|P2|SleepingProc Figure 20: Initial Context for kill Procedure Example 30 Figure 20 shows an initial context graph for the kill procedure from Example 17. It is a refinement of the role reference diagram of Figure 1 as it gives description of the heap specific to the entry of kill procedure. The initial context makes explicit the fact that there is only one header node for the list of running processes (ph) and one header node for the list of all active processes (lx). More importantly, it shows that traversing the list of active processes reaches a node l2 whose proc field references the parameter node px. 
This is sufficient for the analysis to conclude that there will be no null pointer dereferences in the while loop of kill procedure since l2 is reached before null. We assume that the initial context always contains the role reference diagram RRD (Definition 8). Nodes from RRD are called anonymous nodes and are referred to via role name. This further reduces the size of initial context specifications by leveraging global role definitions. In Figure 20 there is no need to specify edges originating from SleepingProc or even mention the node SleepingTree, since role definitions alone contain enough information on this part of the heap to enable the analysis of the procedure. Note, however, that all edges between anonymous nodes and named nodes must be explicitly specified. 7.1.2 Procedure Effects Procedure effects conservatively approximate the region of the heap that the procedure accesses and indicate changes to the referencing relationships in that region. There are two kinds of effects: read effects and write effects. A read effect specifies a set read(proc) of initial graph nodes accessed by the procedure. It is used to ensure that the accessibility condition in Section 5.4.3 is satisfied. If the set of nodes denoted by read(proc) is mapped to a node n which is onstage in the caller but is not an argument of the procedure call, a role check error is reported at the call site. Write effects are used to modify caller’s role graph to conservatively model the procedure call. A write effect e1 .f = e2 approximates Store operations within a procedure. The expression e1 denotes objects being written to, f denotes the field written, and e2 denotes the set of objects which could be assigned to the field. Write effects are may effects by default, which means that the procedure is free not to perform them. It is possible to specify that a write effect must be performed by prefixing it with a “!” sign. Example 31 In Figure 21, the insert procedure inserts an isolated cell into the end of an acyclic singly linked list. As a result, the role of the cell changes to LN. The initial context declares parameter nodes ln and xn (whose initial roles are deduced from roles of parameters), and mentions anonymous LN node from a default copy of the role reference diagram RRD. The code of the procedure is summarized with two write effects. The first write effect indicates that the procedure may perform zero or more Store operations to field next of nodes mapped to ln or LN in context(proc). The second write effect indicates that the execution of the procedure must perform a Store to the field next of xn node where the reference stored is either a node mapped onto anonymous LN node or null. procedure insert(l : L, x : IsolatedN ->> LN) nodes ln, xn; edges l-> ln, x-> xn, ln -next-> LN|null; effects ln|LN . next = xn, ! xn.next = LN|null; local c, p; { p = l; c = l.next; while (c!=null) { p = c; c = p.next; } p.next = x; x.next = c; setRole(x:LN); } Figure 21: Insert Procedure for Acyclic List procedure insertSome(l : L) nodes ln; edges l-> ln, ln -next-> LN|null; effects ln|LN . next = NEW, NEW.next = LN|null; aux c, p, x; { p = l; c = l.next; while (c!=null) { p = c; c = p.next; } x = new; p.next = x; x.next = c; setRole(x:LN); } Figure 22: Insert Procedure with Object Allocation Effects also describe assignments that procedures perform on the newly created nodes. Here we adopt a simple solution of using a single summary node denoted NEW to represent all nodes created inside the procedure. 
We write nodes0 (HIC ) for the set nodes(HIC ) ∪ {NEW}. Example 32 Procedure insertSome in Figure 22 is similar to procedure insert in Figure 21, except that the node inserted is created inside the procedure. It is therefore referred to in effects via generic summary node NEW. We represent all may write effects as a set mayWr(proc) of triples hnj , f, n′j i where n, n′j ∈ nodes0 (HIC ) and f ∈ F . We represent must write effects as a sequence mustWrj (proc) of subsets of the set KIC−1 (i) × F × nodes0 (HIC ). Here 1 ≤ j ≤ mustWrNo(proc). To simplify the interpretation of the declared procedure effects in terms of concrete reads and writes, we re- quire the union ∪i mustWri (proc) to be disjoint from the set mayWr(proc). We also require the nodes n1 , . . . , nk in a must write effect n1 | · · · |nk .f = e2 to be individual nodes. This allows strong updates when instantiating effects (Section 7.3.2). 7.1.3 Semantics of Procedure Effects We now give precise meaning to procedure effects. Our definition is slightly complicated by the desire to capture the set of nodes that are actually read in an execution while still allowing a certain amount of observational equivalence for write effects. The effects of procedure proc define a subset of permissible program traces in the following way. Consider a concrete heap Hc with role assignment ρc such that hHc , ρc i α0 hHIC , ρIC , KIC i with graph homomorphism h0 from Definition 29. Consider a trace T starting from a state with heap Hc and role assignment ρc . Extract the subsequence of all loads and stores in trace T . Replace Load x=y.f by concrete read read ox where ox is the concrete object referenced by x at the point of Load, and replace Store x.f=y by a concrete write ox .f = oy where ox is the object referenced by x and oy object referenced by y at the point of Store. Let p1 , . . . , pk be the sequence of all concrete read statements and q1 , . . . , qk the sequence of all concrete write statements. We say that trace T starting at Hc conforms to the effects iff for all choices of h0 the following conditions hold: 1. h0 (o) ∈ read(proc) for every pi of the form read o 2. there exists a subsequence qi1 , . . . , qit of q1 , . . . , qk such that (a) executing qi1 , . . . , qit on Hc yields the same result as executing the entire sequence q1 , . . . , qk (b) the sequence qi1 , . . . , qit implements write effects of procedure proc A typical way to obtain a sequence qi1 , . . . , qit from the sequence q1 , . . . , qk is to consider only the last write for each pair hoi , f i of object and field. We say that a sequence qi1 , . . . , qit implements write effects mayWr(proc) and mustWri (proc) for 1 ≤ i ≤ i0 , i0 = mustWrNo if and only if there exists an injection s : {1, . . . , i0 } → {i1 , . . . , it } such that 1. hh′ (o), f, h′ (o′ )i ∈ mustWri (proc) for every concrete write qs(i) of the form o.f = o′ , and 2. hh′ (o), f, h′ (o′ )i ∈ mayWr(proc) for all concrete writes qi of the form o.f = o′ for i ∈ {i1 , . . . , it } \ {s(1), . . . , s(i0 )}. Here h′ (n) = h0 (n) for n ∈ nodes(Hc ) where Hc is the initial concrete heap and h′ (n) = NEW otherwise. It is possible (although not very common) for a single concrete heap Hc to have multiple homomorphisms h0 to the initial context HIC . Note that in this case we require the trace T to conform to effects for all possible valid choices of h0 . 
This places the burden of multiple choices of h0 on procedure transfer relation verification (Section 7.2) but in turn allows the context matching algorithm in Section 7.3.1 to select an arbitrary homomorphism between a caller’s role graph and an initial context. 7.2 Verifying Procedure Transfer Relations In this section we show how the analysis makes sure that a procedure conforms to its specification, expressed as an initial context with a list of effects. To verify procedure effects, we extend the analysis representation from Section 6.1. A non-error role graph is now a tuple hH, ρ, K, τ, Ei where: 1. τ : nodes(H) → nodes0 (HIC ) is initial context transformation that assigns an initial context node τ (n) ∈ nodes(HIC ) to every node n representing objects that existed prior to the procedure call, and assigns NEW to every node representing objects created during procedure activation; 2. E ⊆ ∪i mustWri (proc) is a list of must write effects that procedure has performed so far. The initial context transformation τ tracks how objects have moved since the beginning of procedure activation and is essential for verifying procedure effects which refer to initial context nodes. We represent the list E of performed must effects as a partial map from the set KIC−1 (i)×F to nodes0 (HIC ). This allows the analysis to perform must effect folding by recording only the last must effect for every pair hn, f i of individual node n and field f . [[entry•]] = n hH, ρ, K, τ, Ei P : {proc} × {parami (proc)}i → N, P ⊆ HIC H0 = (HIC \ {proc} × param(proc) × N ) ∪ P ni = P (proc, parami (proc)) H1 ⊆ H0 H1 \ H0 ⊆ {hn′ , f, n′′ i | {n1 , n2 } ∩ {ni }i 6= ∅} ∀j : localCheck(nj , hH, ρ, Ki, nodes(H1 )) n1 n2 np H1 k H2 k · · · k H ρ = ρIC K = KIC τ = ρICo E=∅ Figure 23: The Set of Role Graphs at Procedure Entry 7.2.1 Role Graphs at Procedure Entry Our role analysis creates the set of role graphs at procedure entry point from the initial context context(proc). This is simple because role graphs and the initial context have similar abstraction relations (Sections 6.1 and 7.1). The difference is that parameters in role graphs point to exactly one node, and parameter nodes are onstage nodes in role graphs which means that all their edges are “must” edges. Figure 23 shows the construction of the initial set of role graphs. First the graph H0 is created such that every parameter parami (proc) references exactly one parameter node ni . Next graph H1 is created by using localCheck to ensure that parameter nodes have the appropriate number of edges. Finally, the instantiation is performed on parameter nodes to ensure acyclicity constraints if the initial context does not make them explicit already. 
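A minimal sketch of the bookkeeping for τ and the folded must-effect map E during the verification of a Store statement is shown below. The helper names are hypothetical and the sketch only mirrors the rules of Figure 24 that follow; it is not the analysis implementation.

```python
def verify_store(tau, E, may_writes, must_writes, n_x, field_name, n_y):
    """Check a Store x.f = y against the declared effects.
    tau maps current role-graph nodes to initial-context nodes (or 'NEW');
    E maps (individual initial-context node, field) to the last written target,
    realizing the must-effect folding described above.
    Returns the updated E, or None to signal an effect violation."""
    src, tgt = tau[n_x], tau[n_y]
    effect = (src, field_name, tgt)
    if effect in may_writes:
        return E                              # may effects need no extra bookkeeping
    if any(effect in mw for mw in must_writes):
        E = dict(E)
        E[(src, field_name)] = tgt            # fold: remember only the last must write
        return E
    return None                               # the store matches no declared effect
```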
Statement s Transition Constraints hproc, y, ny i, hny , f, nf i ∈ H τ (nf ) ∈ read(proc) hproc, y, ny i, hny , f, nf i ∈ H τ (nf ) ∈ / read(proc) hproc, x, nx i, hproc, y, ny i ∈ H hτ (nx ), f, τ (ny )i ∈ mayWr(proc) hproc, x, nx i, hproc, y, ny i ∈ H hτ (nx ), f, τ (ny )i ∈ ∪i mustWri (proc) E ′ = updateWr(E, hτ (nx ), f, τ (ny )i) hproc, x, nx i, hproc, y, ny i ∈ H hτ (nx ), f, τ (ny )i ∈ / mayWr(proc)∪ ∪i mustWri (proc) nn fresh τ ′ = τ [nn 7→ NEW] s x = y.f hH ⊎ {proc, x, nx }, ρ, K, τ, Ei =⇒ hH ⊎ {proc, x, nf }, ρ, K, τ, Ei x = y.f hH ⊎ {proc, x, nx }, ρ, K, τ, Ei =⇒ ⊥G x.f = y hH ⊎ {nx , f, nf }, ρ, K, τ, Ei =⇒ hH ⊎ {nx , f, ny }, ρ, K, τ, Ei x.f = y hH ⊎ {nx , f, nf }, ρ, K, τ, Ei =⇒ hH ⊎ {nx , f, ny }, ρ, K, τ, E ′ i x.f = y hH ⊎ {nx , f, nf }, ρ, K, τ, Ei =⇒ ⊥G x = new hH ⊎ {proc, x, nx }, ρ, K, τ, Ei =⇒ hH ⊎ {proc, x, nn }, ρ, K, τ ′ , Ei s s s s s updateWr(E, hn1 , f, n2 i) = E[hn1 , f i 7→ n2 ] Figure 24: Verifying Load, Store, and New Statements 7.2.2 Verifying Basic Statements To ensure that a procedure conforms to its transfer relation the analysis uses the initial context transformation τ to assign every Load and Store statement to a declared effect. Figure 24 shows new symbolic execution of Load, Store and New statements. The symbolic execution of Load statement x=y.f makes sure that the node being loaded is recorded in some read effect. If this is not the case, an error is reported. The symbolic execution of the Store statement x.f=y first retrieves nodes τ (nx ) and τ (ny ) in the initial role graph context that correspond to nodes nx and ny in the current role graph. If the effect hτ (nx ), f, τ (ny )i is declared as a may write effect the execution proceeds as usual. Otherwise, the effect is used to update the list E of must-write effects. The list E is checked at the end of procedure execution. The symbolic execution of the New statement updates the initial context transformation τ assigning τ (nn ) = NEW for the new node nn . The τ transformation is similarly updated during other abstract heap operations. Instantiation of node n′ into node n0 assigns τ (n0 ) = τ (n′ ), split copies values of τ into the new set of isomorphic nodes, and normalization does not merge nodes n1 and n2 if τ (n1 ) 6= τ (n2 ). 7.2.3 Verifying Procedure Postconditions At the end of the procedure, the analysis verifies that ρ(ni ) = postRi (proc) where hproc, parami (proc), ni i ∈ H, and then performs node check on all onstage nodes using predicate nodeCheck(n, hH, ρ, Ki, nodes(H)) for all n ∈ onstage(H). At the end of the procedure, the analysis also verifies that every performed effect in E = {e1 , . . . , ek } can be attributed to exactly one declared must effect. This means that k = mustWrNo(proc) and there exists a permutation s of set {1, . . . , k} such that es(i) ∈ mustWri (proc) for all i. [[proc′ (x1 , . . . , xp )]](G) = if ∃G ∈ G : ¬paramCheck(G) then {⊥G } else try G1 = matchContext(G) if failed then {⊥G } else {G′′ | hG, µi ∈ G1 FX RR haddNEW(G), µi −→hG′ , µi −→ G′′ } paramCheck(hH, ρ, K, τ, Ei) iff ∀ni : nodeCheck(ni , G, offstage(H) ∪ {ni }i ) ni are such that hproc, xi , ni i ∈ H addNEW(hH, ρ, K, τ, Ei) = hH ∪ {n0 } × F × {null}, ρ[n0 7→ unknown], K[n0 7→ s], τ [n0 7→ NEW], Ei where n0 is fresh in H Figure 25: Procedure Call 7.3 Analyzing Call Sites The set of role graphs at the procedure call site is updated based on the procedure transfer relation as follows. Consider procedure proc containing call site p ∈ NCFG (proc) with procedure call proc′ (x1 , . . . , xp ). 
Let hHIC , ρIC , KIC i = context(proc′ ) be the initial context of the callee. Figure 25 shows the transfer function for procedure call sites. It has the following phases: 1. Parameter Check ensures that roles of parameters conform to the roles expected by the callee proc′ . 2. Context Matching (matchContext) ensures that the caller’s role graphs represent a subset of concrete heaps represented by context(proc′ ). This is done by deriving a mapping µ from the caller’s role graph to nodes(HIC ). FX 3. Effect Instantiation (−→ ) uses effects mayWr(proc′ ) and mustWri (proc′ ) in order to approximate all structural changes to the role graph that proc′ may perform. RR 4. Role Reconstruction (−→) uses final roles for parameter nodes and global role declarations postRi (proc′ ) to reconstruct roles of all nodes in the part of the role graph representing modified region of the heap. the inverse image S of the performed effects. The effects in S are grouped by the source n and field f . Each field n.f is applied in sequence. There are three cases when applying an effect to n.f : 1. There is only one node target of the write in nodes(H) and the effect is a must write effect. In this case we do a strong update. 2. The condition in 1) is not satisfied, and the node n is offstage. In this case we conservatively add all relevant edges from S to H. The parameter check requires nodeCheck(ni , G, offstage(H)∪ {ni }i ) for the parameter nodes ni . The other three phases are explained in more detail below. 3. The condition in 1) is not satisfied, but the node n is onstage i.e. it is a parameter node3 . In this case there is no unique target for n.f , and we cannot add multiple edges either as this would violate the invariant for onstage nodes. We therefore do case analysis choosing which effect was performed last. If there are no must effects that affect n, then we also consider the case where the original graph is unchanged. 7.3.1 Context Matching Figure 26 shows our context matching function. The matchContext function takes a set G of role graphs and produces a set of pairs hG, µi where G = hH, ρ, K, τ, Ei is a role graph and µ is a homomorphism from H to HIC . The homo′ morphism µ guarantees that α−1 (G) ⊆ α−1 0 (context(proc )) since the homomorphism h0 from Definition 29 can be constructed from homomorphism h in Definition 22 by putting h0 = µ ◦ h. This implies that it is legal to call proc′ with any concrete graph represented by G. The algorithm in Figure 26 starts with empty maps µ = nodes(G) × {⊥} and extends µ until it is defined on all nodes(G) or there is no way to extend it further. It proceeds by choosing a role graph hH, ρ, K, τ, Ei and node n0 for which the mapping µ is not defined yet. It then finds candidates in the initial context that n0 can be mapped to. The candidates are chosen to make sure that µ remains a homomorphism. The accessibility requirement—that a procedure may see no nodes with incorrect role—is enforced by making sure that nodes in inaccessible are never mapped into nodes in read for the callee. As long as this requirement holds, nodes in inaccessible can be mapped onto nodes of any role since their role need not be correct anyway. We generally require that the set µ−1 (n′0 ) for individual node n′0 in the initial context contain at most one node, and this node must be individual. In contrast, there might be many individual and summary nodes mapped onto a summary node. 
We relax this requirement by performing instantiation of a summary node of the caller if, at some point, that is the only way to extend the mapping µ (this corresponds to the first recursive call in the definition of match in Figure 26). The algorithm is nondeterministic in the order in which nodes to be matched are selected. One possible ordering of nodes is depth-first order in the role graph starting from parameter nodes. If some nondeterministic branch does not succeed, the algorithm backtracks. The function fails if all branches fail. In that case the procedure call is considered illegal and ⊥G is returned. The algorithm terminates since every procedure call lexicographically increases the sorted list of numbers |µ[nodes(H)]| for hhH, ρ, K, τ, Ei, µi ∈ Γ. 7.3.2 Effect Instantiation The result of the matching algorithm is a set of pairs hG, µi of role graphs and mappings. These pairs are used to instantiate procedure effects in each of the role graphs of the caller. Figure 30 gives rules for effect instantiation. The analysis first verifies that the region read by the callee is included in the region read by the caller. Then it uses map µ to find 7.3.3 Role Reconstruction Procedure effects approximate structural changes to the heap, but do not provide information about role changes for non-parameter nodes. We use the role reconstruction RR algorithm −→ in Figure 27 to conservatively infer possible roles of nodes after the procedure call based on role changes for parameters and global role definitions. Role reconstruction first finds the set N0 of all nodes that might be accessed by the callee since these nodes might have their roles changed. Then it splits each node n ∈ N0 into |R| different nodes ρ(n, r), one for each role r ∈ R. The node ρ(n, r) represents the subset of objects that were initially represented by n and have role r after procedure executes. The edges between nodes in the new graph are derived by simultaneously satisfying 1) structural constraints between nodes of the original graph; and 2) global role constraints from the role reference diagram. The nodes ρ(n, r) not connected to the parameter nodes are garbage collected in the role graph. In practice, we generate nodes ρ(n, r) and edges on demand starting from parameters making sure that they are reachable and satisfy both kinds of constraints. 8 Extensions This section presents two extensions of the basic role system. The first extension allows statically unbounded number of aliases for objects. The second extension allows the analysis to verify more complex role changes. Additional ways of extending roles are given in [31]. 8.1 Multislots A multislot hr ′ , f i ∈ multislots(r) in the definition of role r allows any number of aliases ho′ , f, oi ∈ Hc for ρc (o′ ) = r ′ and ρc (o) = r. We require multislots multislots(r) to be disjoint from all sloti (r). To handle multislots in role analysis we relax the condition 5) in Definition 22 of the abstraction 3 Non-parameter onstage nodes are never affected by effects, as guaranteed by the matching algorithm. 
matchContext(G) = match({hG, nodes(G) × {⊥}i | G ∈ G}) match : P(RoleGraphs × (N ∪ {⊥})N ) ⇀ P(RoleGraphs × N N ) match(Γ) = Γ0 := {hG, µi ∈ Γ | µ−1 (⊥) 6= ∅}; if Γ0 = ∅ then return Γ; hhH, ρ, K, τ, Ei, µi := choose Γ0 ; Γ′ = Γ \ hhH, ρ, K, τ, Ei, µi; paramnodes := {n | ∃i : hproc, xi , ni ∈ H}; inaccessible := onstage(H) \ paramnodes; n0 := choose µ−1 (⊥); candidates := {n′ ∈ nodes(HIC ) | (n0 ∈ / inaccessible and ρIC (n′ ) = ρ(n0 )) or (n0 n ∈ inaccessible and n′ ∈ /o read(proc′ ))} T n′ hn′ , f, µ(n)i ∈ HIC hn0 ,f,ni∈H µ(n)6=⊥ T hn,f,n0 i∈H µ(n)6=⊥ o n n′ hµ(n), f, n′ i ∈ HIC ; if candidates = ∅ then fail ; if candidates = {n′0 }, K(n0 ) = s, KIC (n′0 ) = i, µ−1 (n′0 ) = ∅ n1 then match(Γ′ ∪ {hG′ , µ[n1 7→ n′0 ]i | hH, ρ, K, τ, Ei ⇑ G′ }) else n′0 := choose {n′ ∈ candidates | K(n′ ) = s or (K(n0 ) = i, µ−1 (n′ ) = ∅)} ′ match(Γ ∪ hhH, ρ, K, τ, Ei, µ[n0 7→ n′0 ]i); n0 Figure 26: The Context Matching Algorithm relation by allowing h to map more than one concrete edge ho′ , f, oi onto abstract edge hn′ , f, ni ∈ H terminating at an onstage node n provided that hρ(n′ ), f i ∈ multislots(ρ(n)). The nodeCheck and expansion relation  are then extended appropriately. Note that a role graph does not represent the exact number of references that fill each multislot. The analysis therefore does not attempt to recognize actions that remove the last reference from the multislot. Once an object plays a role with a multislot, all subsequent roles that it plays must also have the multislot. 8.2 Cascading Role Changes In some cases it is desirable to change roles of an entire set of offstage objects without bringing them onstage. We use the statement setRoleCascade(x1 : r1 , . . . , xn : rn ) to perform such cascading role change of a set of nodes. The need for cascading role changes arises when roles encode reachability properties. Example 33 The code fragment in Figure 28 manipulates object m of role Data. The role Data has fields buffer and work, each being a root for a singly linked acyclic list. Elements of the first list have BufferNode role and elements of the second list have WorkNode role. At some point procedure swaps the contents of the fields buffer and work, which requires all nodes in both lists to change the roles. These role changes are triggered by the setRoleCascade statement. role BufferNode { fields next : BufferNode | null; slots BufferNode.next | Data.buffer; acyclic next; } role WorkNode { fields next : WorkNode | null; WorkNode.next | Data.work; acyclic next; } role Data { fields buffer : BufferNode | null, work : WorkNode | null; } { ... roleCheck(m : Data); x = m.buffer; y = m.work; m.buffer = y; m.work = x; setRoleCascade(x:WorkNode, y:BufferNode); ... } Figure 28: Example of a Cascading Role Change RR hhH, ρ, K, τ, Ei, µi −→hH ′ , ρ′ , K ′ , τ ′ , E ′ i hproc, xi , ni i ∈ H N0 = µ−1 [read(proc′ )] s : N0 × R → N where s(n, r) are all different nodes fresh in H ρ′ = ρ \ (N0 × R) ∪ {hs(n, r), ri | n ∈ N0 , r ∈ R} \({ni }i × R) ∪ {hni , postRi (proc)i} K ′ (s(n, r)) = K(n) τ ′ (s(n, r)) = τ (n) E′ = E H0 = H \ {hn1 , f, n2 i | n1 ∈ N0 or n2 ∈ N0 } ∪ {hs(n1 , r1 ), f, s(n2 , r2 )i | hn1 , f, n2 i ∈ H, hr1 , f, r2 i ∈ RRD} ∪ {hn1 , f, s(n2 , r2 )i | hn1 , f, n2 i ∈ H, hρIC (µ(n1 )), f, r2 i ∈ RRD} ∪ {hs(n1 , r1 ), f, n2 i | hn1 , f, n2 i ∈ H, hr1 , f, ρIC (µ(n2 ))i ∈ RRD} H ′ = GC(H0 ) Figure 27: Call Site Role Reconstruction The statement indicates new roles for onstage nodes, and the analysis cascades role changes to offstage nodes. 
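One way to picture the cascade is as a small search over role assignments for the affected offstage nodes. The Python sketch below is illustrative only: it propagates a single layer of changes, uses a hypothetical cascading_ok callback standing in for the consistency conditions given next, and prefers solutions that change the fewest roles, matching the heuristic described below.

```python
from itertools import product

def cascade_roles(graph, roles, requested, all_roles, cascading_ok):
    """Brute-force sketch of setRoleCascade: nodes named in `requested` take
    their new roles; every node adjacent to a node whose role changed may be
    re-assigned.  Among consistent solutions, one changing the fewest roles is
    returned; None means the cascade fails.  `graph` maps a node to its
    neighbors; `cascading_ok(n, assignment)` checks the local consistency of n."""
    new_roles = dict(roles)
    new_roles.update(requested)
    changed = {n for n in requested if roles[n] != requested[n]}
    dirty = sorted({m for n in changed for m in graph[n] if m not in requested})

    best = None
    for choice in product(all_roles, repeat=len(dirty)):
        candidate = dict(new_roles)
        candidate.update(zip(dirty, choice))
        if all(cascading_ok(n, candidate) for n in dirty):
            cost = sum(candidate[n] != roles[n] for n in dirty)
            if best is None or cost < best[0]:
                best = (cost, candidate)
    return None if best is None else best[1]
```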
Given a role graph hH, ρ, K, Ei cascading role change finds a new valid role assignment ρ′ where the onstage nodes have desired roles and the roles of offstage nodes are adjusted appropriately. Figure 29 shows abstract execution of the setRoleCascade statement. Here neighbors(n, H) denotes nodes in H adjacent to n. The condition cascadingOk(n, H, ρ, K, ρ′ ) makes sure it is legal to change the role of node n from ρ(n) to ρ′ (n) given that the neighbors of n also change role according to ρ′ . This check resembles the check for setRole statement in Section 6.2.3. Let r = rho(n) and r ′ = ρ′ (n). Then cascadingOk(n, H, ρ, K, ρ′ ) requires the following conditions: 1. hn, f, n1 i ∈ H implies ρ′ (n1 ) ∈ fieldf (r ′ ) 2. slotno(r ′ ) = slotno(r) = k, and for every list hn1 , f1 , ni, . . . , hnk , fk , ni ∈ H if there is a permutation p : {1, . . . , k} → {1, . . . , k} such that hρ(ni ), fi i ∈ slotpi (r), then there is a permutation p′ : {1, . . . , k} → {1, . . . , k} such that hρ(ni ), fi i ∈ slotpi (r ′ ). 3. identity relations were already satisfied or can be explicitly checked: hf, gi ∈ identities(ρ′ (n)) implies (a) hf, gi ∈ identities(ρ(n)) or (b) for all hn, f, n′ i ∈ H: K(n′ ) = i, and if hn′ , g, n′′ i ∈ H then n′′ = n 4. either acyclic(ρ′ (n)) ⊆ acyclic(ρ(n)) or acycCheck(n, hH, ρ′ , Ki, offstage(H)). In practice there may be zero or more solutions that satisfy constraints for a given cascading role change. Selecting any solution that satisfies the constraints is sound with respect to the original semantics. A useful heuristic for searching the solution space is to first explore branches with as few roles changed as possible. If no solutions are found, an error is reported. 9 Related Work Typestate, as a type system extension for statically verifying dynamically changing properties, was proposed in [44, 43]. Aliasing causes problems for typestate-based systems because the declared typestates of all aliases must change whenever the state of the referred object changes. Faced with the complexity of aliasing, [44] resorted to a more controlled language model which avoids aliasing. More recently proposed typestate approaches use linear types for heap references to support state changes of dynamic allocated objects without addressing aliasing issues [10]. Motivated by the need to enforce safety properties in lowlevel software systems, [42, 46, 9] use extensions of linear types to describe aliasing of objects and rely on language design to avoid non-local type inference. These systems take a construction based approach that specifies data structures as unfoldings of basic elaboration steps [46]. Similarly to shape types [15, 14] and graph types [29, 34], this allows tree-like data structures to be expressed more precisely than using our roles, but cannot approximate data structures such as sparse matrices. More importantly, this approach makes it difficult to express nodes that are members of multiple data structures. Handling multiple data structures is the essential ingredient of our approach because the role of an object depends on data structures in which it participates. Like shape analysis techniques [5, 17, 39, 40] we have therefore adopted the constraint based approach which characterizes data structures in terms of the constraints that they satisfy. The constraint based approach allows us to handle a wider range of data structure while giving up some precision. 
Like [47, 48] we perform non-local inference of program properties, but while [47, 48] focus on linear integer constraints and handle recursive data structures conservatively, we do not handle integer arithmetic but have a more precise representation of the heap. At a higher level, these approaches all focus on detailed properties of individual data structures. We view our research as focusing more on global aspects such as the participation of objects in multiple data structures. The path matrix approaches [18, 17] have been used to implement efficient interprocedural analyses that infer one level of referencing relationships, but are not sufficiently pre- s ′ hH, ρ, K, τ, Ei ❀hH, ρ , K, τ, Ei s = setRoleCascade(x1 : r1 , . . . , xn : rn ) ni : hproc, xi , ni i ∈ H ρ′ (ni ) = ri ρ′ (n) = ρ(n), n ∈ onstage(H) \ {ni }i N0 = {n ∈ offstage(H) | ∃n′ ∈ neighbors(n, H) : ρ(n′ ) 6= ρ′ (n′ )} ∀n ∈ N0 : cascadingOk(n, H, ρ, K, ρ′ ) Figure 29: Abstract Execution for setRoleCascade cise to track must aliases of heap objects for programs with destructive updates of more complex data structures. The use of the instantiation relation in role analysis is analogous to the materialization operation of [39, 40]. Role analysis can also track reachability properties, but we use an abstraction relation based on graph homomorphism rather than 3-valued logic. Our split operation achieves a similar goal to the focus operation of [40]. However, the generic focus algorithm of [32] cannot handle the reachability predicate which is needed for our split operation. This is because it conservatively refuses to focus on edges between two summary nodes to avoid generating an infinite number of structures. Rather than requiring definite values for reachability predicate, our role analysis splits by reachability properties in the abstract role graph, which illustrates the flexibility of the homomorphism-based abstraction relation. Another difference with [40] is that our role analysis does not require the developer to supply the predicate update formulae for instrumentation predicates. A precise interprocedural analysis [38] extends shape analysis techniques to treat activation records as dynamically allocated structures. The approach also effectively synthesizes an application-specific set of contexts. Our approach differs in that it uses a less precise but more scalable treatment of procedures. It also uses a compositional approach that analyzes each procedure once to verify that it conforms to its specification. Like [48] our interprocedural analysis can apply both may and must effects, but our contexts are general graphs with summary nodes and not trees. Roles are similar to the ADDS and ASAP data structure description languages [25, 26, 23]. These systems use sound techniques to apply the data structure invariants for parallelization and general dependence testing but do not verify that the data structure invariants are preserved by destructive updates of data structures [24]. The object-oriented community has long been aware of benefits that dynamically changing classes give in large systems [37]. Recognizing these benefits, researchers have proposed dynamic techniques that change the class of an object to reflect its state changes [16, 20, 4, 13]. These systems illustrate the need for a static system that can verify the correct use of objects with changing roles. 
10 Conclusion This paper proposes two key ideas: aliasing relationships should determine, in large part, the state of each object, and the type system should use the resulting object states as its fundamental abstraction for describing procedure interfaces and object referencing relationships. We present a role system that realizes these two key ideas in a concrete system, and present an analysis algorithm that can verify that the program correctly respects the constraints of this role system. The result is that programmers can use roles for a variety of purposes: to ensure the correctness of extended procedure interfaces that take the roles of parameters into account, to verify important data structure consistency properties, to express how procedures move objects between data structures, and to check that the program correctly implements correlated relationships between the states of multiple objects. We therefore expect roles to improve the reliability of the program and its transparency to developers and maintainers. Note. This November 2001 version of the technical report corrects some errors in the formal description of the analysis algorithm of the original technical report from July 2001. REFERENCES [1] Thomas Ball, Rupak Majumdar, Todd Millstein, and Sriram K. Rajamani. Automatic predicate abstraction of C programs. In Proc. ACM PLDI, 2001. [2] Barendsen and J. E. W. Smetsers. Conventional and uniqueness typing in graph rewrite systems. In Proceedings of the 13th Conference on the Foundations of Software Technology and Theoretical Computer Science. Springer-Verlag, 1993. [3] R. Cartwright and M. Fagan. Soft typing. In Proceedings of the ACM SIGPLAN ’91 Conference on Programming Language Design and Implementation (PLDI), number 6 in 26, pages 278–292, 1991. [4] Craig Chambers. Predicate classes. ECOOP, pages 268–296, 1993. In Proc. 7th [5] David R. Chase, Mark Wegman, and F. Kenneth Zadeck. Analysis of pointers and structures. In Proc. ACM PLDI, 1990. [6] Ramkrishna Chatterjee, Barbara G. Ryder, and William Landi. Relevant context inference. In Proc. 26th ACM POPL, pages 133–146, 1999. [7] Ben-Chung Cheng and Wen mei W. Hwu. Modular interprocedural pointer analysis using access paths. In Proc. ACM PLDI, 2000. [8] David G. Clarke, John M. Potter, and James Noble. Ownership types for flexible alias protection. In Proc. 13th Annual ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications, 1998. FX hhH, ρ, K, τ, Ei, µi −→h⊥G , µi where FX hhH, ρ, K, τ, Ei, µi −→ Gt where τ [µ−1 [read(proc′ )]] 6⊆ read(proc) τ [µ−1 [read(proc′ )]] ⊆ read(proc) n1 ,f1 nt ,ft hH, ρ, K, τ, Ei ⊢ G1 ⊢ · · · ⊢ Gt S = {hn, f, n′ i ∈ H | hµ(n), f, µ(n′ )i ∈ mayWr(proc′ ) ∪ ∪i mustWri (proc′ )} {hn1 , f1 i, . . . 
, hnt , ft i} = {hn, f i | hn, f, n′ i ∈ S} Single Write Effect Instantiation: n,f hH1 , ρ1 , K1 , τ1 , E1 i ⊢ G′ iff case condition deterministic effect {n1 | hn, f, n1 i ∈ S} = {n0 } and ∃i : hµ(n), f, µ(n0 )i ∈ mustWri (proc′ ) |{n1 | hn, f, n1 i ∈ S}| > 1 or ∃n1 : hµ(n), f, µ(n1 )i ∈ mayWr(proc′ ) n ∈ offstage(H) {hτ (n), f, τ (n1 )i | hn, f, n1 i ∈ S} ⊆ mayWr(proc) |{n1 | hn, f, n1 i ∈ S}| > 1 or ∃n1 : hµ(n), f, µ(n1 )i ∈ mayWr(proc′ ) n ∈ offstage(H) {hτ (n), f, τ (n1 )i | hn, f, n1 i ∈ S} 6⊆ mayWr(proc) |{n1 | hn, f, n1 i ∈ S}| > 1 or ∃n1 : hµ(n), f, µ(n1 )i ∈ mayWr(proc′ ) n∈ / offstage(H) {hτ (n), f, τ (n1 )i | hn, f, n1 i ∈ S} ⊆ mayWr(proc) ¬({n1 | hn, f, n1 i ∈ S} = {n1 } and ∃i : hµ(n), f, µ(n0 )i ∈ mustWri (proc′ )) n∈ / offstage(H) {hτ (n), f, τ (n1 )i | hn, f, n1 i ∈ S} 6⊆ mayWr(proc) nondeterministic effect for non-parameters nondeterministic effect for parameters orem(H1 ) =  result G′ = hH2 , ρ1 , K1 , τ1 , E2 i H2 = H1 \ {hn, f, n1 i | hn, f, n1 i ∈ H1 } ∪{hn, f, n0 i} E2 = updateWr(E1 , hτ (n), f, τ (n0 )i) G′ = hH2 , ρ1 , K1 , τ1 , E2 i H2 = orem(H1 )∪ {hn, f, n1 i | hn, f, n1 i ∈ S} G′ = ⊥G G′ = hH2 , ρ1 , K1 , τ1 , E2 i H0 = H1 \ {hn, f, n1 i | hn, f, n1 i ∈ H1 } H2 = H1 or H2 = H0 ∪ {hn, f, n1 i} hn, f, n1 i ∈ S G′ = ⊥G H1 \ {hn, f, n′ i | hn, f, n′ i ∈ H1 }, if ∃i ∃n′ : hµ(n), f, µ(n′ )i ∈ mustWri (proc′ ) H1 , otherwise Figure 30: Effect Instantiation [9] Karl Crary, David Walker, and Greg Morrisett. Typed memory management in a calculus of capabilities. In Proc. 26th ACM POPL, 1999. [10] Robert DeLine and Manuel Fähndrich. Enforcing highlevel protocols in low-level software. In Proc. ACM PLDI, 2001. [11] Alain Deutsch. Interprocedural may-alias analysis for pointers: beyond k-limiting. SIGPLAN Notices, 29(6):230–241, 1994. [12] Amer Diwan, Kathryn McKinley, and J. Elliot B. Moss. Type-based alias analysis. In Proc. ACM PLDI, 1998. [25] Joseph Hummel, Laurie J. Hendren, and Alexandru Nicolau. Abstract description of pointer data structures: An approach for improving the analysis and optimization of imperative programs. ACM Letters on Programming Languages and Systems, 1(3), September 1993. [26] Joseph Hummel, Laurie J. Hendren, and Alexandru Nicolau. A language for conveying the aliasing properties of dynamic, pointer-based data structures. In Proc. 8th International Parallel Processing Symposium, Cancun, Mexico, April 26–29 1994. [27] Samin Ishtiaq and Peter W. O’Hearn. BI as an assertion language for mutable data structures. In Proc. 28th ACM POPL, 2001. [13] S. Drossopoulou, F. Damiani, M. Dezani-Ciancaglini, and P. Giannini. Fickle: Dynamic object reclassification. In Proc. 15th ECOOP, LNCS 2072, pages 130–149. Springer, 2001. [28] Pierre Jouvelot and David K. Gifford. Algebraic reconstruction of types and effects. In Proc. 18th ACM POPL, 1991. [14] Pascal Fradet and Daniel Le Metayer. Structured gamma. Technical Report 989, IRISA, 1996. [29] Nils Klarlund and Michael I. Schwartzbach. Graph types. In Proc. 20th ACM POPL, Charleston, SC, 1993. [15] Pascal Fradet and Daniel Le Métayer. Shape types. In Proc. 24th ACM POPL, 1997. [30] Naoki Kobayashi. Quasi-linear types. In Proc. 26th ACM POPL, 1999. [16] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlisside. Design Patterns. Elements of Reusable ObjectOriented Software. Addison-Wesley, Reading, Mass., 1994. [17] Rakesh Ghiya and Laurie Hendren. Is it a tree, a DAG, or a cyclic graph? In Proc. 23rd ACM POPL, 1996. [18] Rakesh Ghiya and Laurie J. Hendren. 
Connection analysis: A practical interprocedural heap analysis for C. In Proc. 8th Workshop on Languages and Compilers for Parallel Computing, 1995. [19] Rakesh Ghiya and Laurie J. Hendren. Putting pointer analysis to work. In Proc. 25th ACM POPL, 1998. [20] Georg Gottlob, Michael Schrefl, and Brigitte Roeck. Extending object-oriented systems with roles. ACM Transactions on Information Systems, 14(3), 1994. [21] Samuel Z. Guyer and Calvin Lin. An annotation language for optimizing software libraries. In Second Conference on Domain Specific Languages, 1999. [22] David Harel, Dexter Kozen, and Jerzy Tiuryn. Dynamic Logic. The MIT Press, Cambridge, Mass., 2000. [23] Laurie J. Hendren, Joseph Hummel, and Alexandru Nicolau. A general data dependence test for dynamic, pointer-based data structures. In Proc. ACM PLDI, 1994. [24] Joseph Hummel. Data Dependence Testing in the Presence of Pointers and Pointer-Based Data Structures. PhD thesis, Dept. of Computer Science, Univ. of California at Irvine, 1998. [31] Viktor Kuncak. Designing an algorithm for role analysis. Master’s thesis, MIT Laboratory for Computer Science, 2001. [32] Tal Lev-Ami. TVLA: A framework for kleene based logic static analyses. Master’s thesis, Tel-Aviv University, Israel, 2000. [33] Nancy Lynch and Frits Vaandrager. Forward and backward simulations – Part I: Untimed systems. Information and Computation, 121(2), 1995. [34] Anders Møller and Michael I. Schwartzbach. The Pointer Assertion Logic Engine. In Proc. ACM PLDI, 2001. [35] John Plevyak, Vijay Karamcheti, and Andrew A. Chien. Analysis of dynamic structures for efficient parallel execution. In Workshop on Languages and Compilers for Parallel Architectures, 1993. [36] William Pugh. Skip lists: A probabilistic alternative to balanced trees. In Communications of the ACM 33(6):668–676, 1990. [37] Trygve Reenskaug. Working With Objects. Prentice Hall, 1996. [38] Noam Rinetzky and Mooly Sagiv. Interprocedual shape analysis for recursive programs. In Proc. 10th International Conference on Compiler Construction, 2001. [39] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Solving shape-analysis problems in languages with destructive updating. In Proc. 23rd ACM POPL, 1996. [40] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Parametric shape analysis via 3-valued logic. In Proc. 26th ACM POPL, 1999. [41] Micha Sharir and Amir Pnueli. Two approaches to interprocedural data flow analysis problems. In Program Flow Analysis: Theory and Applications. Prentice-Hall, Inc., 1981. [42] Frederick Smith, David Walker, and Greg Morrisett. Alias types. In Proc. 9th ESOP, Berlin, Germany, March 2000. [43] Robert E. Strom and Daniel M. Yellin. Extending typestate checking using conditional liveness analysis. IEEE Transactions on Software Engineering, May 1993. [44] Robert E. Strom and Shaula Yemini. Typestate: A programming language concept for enhancing software reliability. IEEE Transactions on Software Engineering, January 1986. [45] Philip Wadler. Linear types can change the world! In IFIP TC 2 Working Conference on Programming Concepts and Methods, Sea of Galilee, Israel, 1990. [46] David Walker and Greg Morrisett. Alias types for recursive data structures. In Workshop on Types in Compilation, 2000. [47] Zhichen Xu, Barton Miller, and Thomas Reps. Safety checking of machine code. In Proc. ACM PLDI, 2000. [48] Zhichen Xu, Thomas Reps, and Barton Miller. Typestate checking of machine code. In Proc. 10th ESOP, 2001.
Global finite element matrix construction based on a CPU-GPU implementation arXiv:1501.04784v1 [cs.NA] 20 Jan 2015 Francisco Javier Ramı́rez-Gila , Marcos de Sales Guerra Tsuzukib , Wilfredo Montealegre-Rubioa,∗ a Department of Mechanical Engineering, Faculty of Mines, Universidad Nacional de Colombia, Medellı́n, 050034, Colombia b Computational Geometry Laboratory, Department of Mechatronics and Mechanical Systems Engineering, Escola Politécnica da Universidade de São Paulo, Av. Prof. Mello Moraes, 2231 São Paulo - SP, Brazil Abstract The finite element method (FEM) has several computational steps to numerically solve a particular problem, to which many efforts have been directed to accelerate the solution stage of the linear system of equations. However, the finite element matrix construction, which is also time-consuming for unstructured meshes, has been less investigated. The generation of the global finite element matrix is performed in two steps, computing the local matrices by numerical integration and assembling them into a global system, which has traditionally been done in serial computing. This work presents a fast technique to construct the global finite element matrix that arises by solving the Poisson’s equation in a three-dimensional domain. The proposed methodology consists in computing the numerical integration, due to its intrinsic parallel opportunities, in the graphics processing unit (GPU) and computing the matrix assembly, due to its intrinsic serial operations, in the central processing unit (CPU). In the numerical integration, only the lower triangular part of each local stiffness matrix is computed thanks to its symmetry, which saves GPU memory and computing time. As a result of symmetry, the global sparse matrix also contains non-zero ∗ Corresponding author Email addresses: [email protected] (Francisco Javier Ramı́rez-Gil), [email protected] (Marcos de Sales Guerra Tsuzuki), [email protected] (Wilfredo Montealegre-Rubio ) Preprint submitted to arXiv.org January 21, 2015 elements only in its lower triangular part, which reduces the assembly operations and memory usage. This methodology allows generating the global sparse matrix from any unstructured finite element mesh size on GPUs with little memory capacity, only limited by the CPU memory. Keywords: Finite element method (FEM), unstructured mesh, matrix generation, parallel computing, graphic processing unit (GPU), heterogeneous computing 1. Introduction Numerous physical phenomena in a stationary condition such as electrical and magnetic potential, heat conduction, fluid flow and elastic problems in a static condition can be described by elliptic partial differential equations (EPDEs). The EPDEs does not involve a time variable, and so describes the steady state response. A linear EPDE has the general form given in Eq. (1), where a, b, c are coefficient functions, f is a source (excitation) function and u is the unknown variable. All of these functions can vary spatially (x, y, z). ∇(c · ∇u) + b · ∇u + au = f (1) EPDEs can be solved exactly by mathematical procedures such as Fourier series [1]. However, the classical solution does not frequently exist; for those problems which allow the use of these analytical methods, many simplifications are made [2]. Consequently, several numerical methods have been developed such as the finite element method (FEM) and finite difference method to efficiently solve EPDEs. The FEM has several advantages over other methods. 
The main advantage is that it is particularly useful for problems with complicated geometry using unstructured meshes [2]. One way to get a suitable framework for solving EPDEs problems, by using FEM, is formulating them as variational problems, also called weak solution. 2 The variational formulation of an EPDE is a mathematical treatment for converting the strong formulation into a weak formulation, which permits the approximation in elements or subdomains, and the EPDE variational form is solved using the so-called Galerking method [3]. Fig. 1 summarizes the steps required to numerically solve an EPDE by using FEM. Some projects are publicly available and all of these steps are solved automatically in the computer [4]; however, the most common commercial FEM packages are focused on steps 4-8, which are: (i) Domain discretization with finite elements (FEs); (ii) Numerical integration of all FEs. In this step, a local matrix ke and a local load vector f e are computed for each finite element; (iii) Construction of the global sparse matrix K from local matrices ke and global load vector F from local load vectors f e ; (iv) Application of boundary conditions; (v) Solution of the linear equation system formed previously, KU = F , where U represents the vector of the nodal solution. Figure 1: FEM steps to solve EPDEs. Many efforts have been made to accelerate the linear equation solver by using direct [5] and iterative methods [6]; however, the FE sparse matrix construction (steps 5 and 6 of Fig. 1), which are also time-consuming for dense unstructured meshes, have been less investigated [7]. Hence, in this work the fast construction 3 of the global sparse matrix is developed, which arises in the numerical solution of EPDEs by FEM. One way to efficiently construct the global FE matrix is by using parallel computing. Parallelism is a trend in current computing and, for this reason, microprocessor manufacturers focus on adding cores rather than on increasing single-thread performance [8]. Although there are several parallel architectures, the most widespread many-core architecture are the GPUs (graphics processing units), and the most common multi-core architecture are the CPUs (central processing units) [9]. However, due to the massive parallelism offered by GPU compared to CPU, the trend in scientific computing is to use GPU for accelerating intensive and intrinsic parallel computations [10]. Additionally, the GPU is preferred over the other architectures to be used as a co-processor to accelerate part or the whole code because its market is well established in the computer games industry. This fact allows having a graphic processor in most personal computers, which makes GPUs an economic and attractive option for application developers [10]. Although some researches are focused on completely porting the FEM subroutines to be executed in GPU [11, 12], other works are focused on porting parts of the FEM code to the GPU, such as numerical integration [13, 14, 15, 16], matrix assembly [17, 18], and the solution of large linear systems of equations [19, 20, 21]. Even though some previous works accelerated the construction of sparse FE matrix [7, 22, 23], these approaches require high GPU memory, which means that the size of the problem that may be analyzed is limited or an expensive GPU with high memory capacity is needed. However, in most personal computers equipped with a graphics processor, the GPU has low-memory capacity. 
Accordingly, the proposed approach is based on the assumption that a single GPU has significantly less memory than the CPU; therefore, a CPU-GPU implementation is developed. The central idea is to compute the numerical integration on the GPU, which is an intensive and intrinsically parallel subroutine, and to compute the assembly on the CPU, which is a predominantly serial subroutine with high memory consumption. Hence, based on this methodology, we are able to construct any large sparse matrix arising in FEM, limited only by the CPU memory. The rest of the paper is organized as follows. In section 2, the FEM formulation of the Poisson equation used as an example for checking the proposed implementation is given. Section 3 presents the CPU-GPU implementation of the FE matrix construction. The results are discussed in section 4 and finally, in section 5, the conclusions are drawn.

2. Elliptic PDE modelling by FEM

The computational benefit of our proposed implementation in the construction of large sparse FE matrices is shown by solving the Poisson equation in a three-dimensional (3D) domain. The Poisson equation is an elliptic-type PDE. This type of equation is commonly used to solve scalar problems such as electrical and magnetic potential, heat transfer and other problems with no external flows [3]. In addition, the Poisson equation is used for modelling vector field problems such as structural problems without external applied forces. All of these problems should be in a static or stationary condition. In this section, the FEM steps are presented, from the strong formulation to the construction of the global sparse matrix (steps 1 to 6 of Fig. 1). The strong formulation of the Poisson equation is given by [3]:

∇ · (c ∇φ(r)) = 0 in Ω,   φ = φ̂ on Γ_φ,   q_n = 0 on Γ_q    (2)

where φ represents a general scalar variable on the domain Ω, i.e. it can be the electrical potential or the temperature, depending on the specific application. The position vector is r and the term c can represent the electrical or thermal conductivity for an isotropic material. The term φ̂ is a prescribed value of the scalar variable φ at the boundary Γ_φ (Dirichlet condition), and the normal component of the flow (or flux) q_n is stated, for simplicity, as zero at the boundary Γ_q (Neumann condition) for the purposes of computational evaluation only. After integration by parts of Eq. (2), the weak form is found as:

∫_Ω (∇ν)ᵀ (c ∇φ) dΩ = 0    (3)

where ν is an arbitrary function [3]. Based on the weak form of the Poisson equation, the FE formulation can be found. Accordingly, the domain is discretized into FEs; next, in each FE, the scalar variable is approximated as:

φ ≈ φ̂ = Σ_{a=1}^{n} N_a φ̃_a = N φ̃    (4)

where n is the number of nodes in the element and, according to the Galerkin principle, ν = N_a are the shape functions. Substituting Eq. (4) into Eq. (3) and evaluating the integrals for all elements, a system of equations is obtained as:

K Φ = F;   F = 0    (5)

where K is the so-called global sparse matrix or stiffness matrix. The term Φ is the nodal solution vector, and the global load vector F is modified by the boundary conditions, ceasing to be a vector of zeroes. The K matrix is obtained as a sum of contributions of the matrices obtained in the numerical integration of each FE.
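For readers who want the intermediate step that the text skips, the entries of K follow directly from substituting the approximation (4) into the weak form (3) with ν = N_a; this is the standard Galerkin identity rather than something stated explicitly above, and the split into element contributions is what the assembly equation below formalizes:

K_{ab} = Σ_{e=1}^{nel} ∫_{Ω_e} c (∇N_a)ᵀ (∇N_b) dΩ,   so that Σ_b K_{ab} φ̃_b = F_a for every node a.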
This step is known as assembly [3]:

K = Σ_{e=1}^{nel} k^e    (6)

where nel is the number of FEs and k^e is the local matrix of element e (local stiffness matrix), which is obtained as:

k^e = ∫_{Ω_e} c Bᵀ B dΩ,   with B = Σ_{a=1}^{n} ∇N_a    (7)

In our approach we consider that the coefficient c, as part of the material properties array (MP), can vary from one element to another, but it remains constant for a specific element, which means it can be taken outside the integral in Eq. (7). The global sparse matrix construction is made in Eq. (6) and Eq. (7). However, for computational efficiency, Eq. (7) is solved numerically instead of by exact integration. There are several numerical procedures to solve an integral, but the most common in FEM is the Gauss-Legendre, or simply Gauss, quadrature [3]. For using the Gauss quadrature, a transformation by the reference element technique must be applied, in which the physical coordinates (x, y, z) are transformed into the natural coordinates (r, s, t). The equivalence between these coordinate systems in the local matrix calculation (see Eq. (7)) is represented by:

k^e = c ∫_{Ω_e} Bᵀ(x, y, z) B(x, y, z) dx dy dz = c ∫_{-1}^{1} ∫_{-1}^{1} ∫_{-1}^{1} Bᵀ(r, s, t) B(r, s, t) |J(r, s, t)| dr ds dt    (8)

where J is the Jacobian matrix and |J| is its determinant. After that, the Gauss quadrature is applied to the normalized domain (right-hand side of Eq. (8)) as:

k^e = c Σ_{i=1}^{Mi} Σ_{j=1}^{Mj} Σ_{k=1}^{Mk} Bᵀ(r_i, s_j, t_k) B(r_i, s_j, t_k) |J(r_i, s_j, t_k)| W_i W_j W_k    (9)

where M_i, M_j, M_k, r_i, s_j, t_k and W_i, W_j, W_k are, respectively, the number, the points and the weights of the integration points in each direction of the 3D domain. In our implementation, we use hexahedral FEs with eight nodes in the discretization of the domain, also known as 8-node brick elements, which is a low-order tri-linear FE. The 8-node brick element was selected based on the idea that the majority of FE codes are based on low-order elements and few works have focused on this type of element [13]. The points and weights for an exact numerical integration of the 8-node brick element can be consulted in [3].

3. Numerical implementation

A typical way to construct the global sparse FE matrix is using serial computing. There are several ways to construct sparse matrices; however, the fastest method is to create the triplet form first and then use a sparse function to compress the triplet sparse format, avoiding some memory challenges [24]. Despite the triplet format being simple to create, it is difficult to use in most sparse matrix algorithms; a compressed sparse format is thus required [5]. A common compressed sparse format is the CSC (compressed sparse column) format, used by default in MATLAB [25]. Algorithm 1 shows a typical implementation of the sparse matrix generation using MATLAB syntax. This algorithm needs a subroutine that performs the numerical integration to compute the local matrix k^e and the local load vector f^e. However, this work is focused only on generating the matrix K; the load vector F is not considered for parallel computation because it is computationally much less demanding [15, 16]. The numerical integration subroutine changes with the PDE, the element type and the basis functions; it is a function of the nodal coordinates (C) and any nodal fields such as forces or boundary conditions (BCs) and material properties (MP) of element e [17]. After computing the local matrix k^e, which is stored in the array K_all, a mapping from local to global degrees of freedom (DOFs) is required.
The mapping is made considering the connectivity matrix (E), which gives the global node numbers that compose an element, and the number of DOFs per node (DOFxn). As a result of the mapping function, the positions in the global sparse matrix of each entry of k^e are obtained, i.e. the row (Ind_row) and column (Ind_col) indices, which are stored in the arrays K_rows and K_cols, respectively. Finally, the global sparse matrix is obtained in a single call of an assembly function (e.g. the MATLAB sparse function [5, 24]).

(Footnote 1: In the triplet sparse format, the row and column indices that specify the position of the NNZ (non-zero) entries must be stored. This means that each NNZ entry requires three numbers: the row and column positions as integer numbers and the value of that entry as a double-precision number. In this format, several NNZ entries with the same row/column indices can be repeated, which is the difference from the coordinate (COO) sparse matrix format, where entries with the same row/column indices are summed.)

Algorithm 1 Global sparse FE matrix construction in serial computing.
1: Initialize K_rows, K_cols and K_vals to zero;
2: for each element e in nel do
3:   ke <- NumericalIntegration(C, MP, BCs);
4:   K_vals(e, :) = ke(:);
5:   (Ind_row, Ind_col) <- Mapping(E(e), DOFxn);
6:   K_rows(e, :) = Ind_row(:);
7:   K_cols(e, :) = Ind_col(:);
8: end for
9: K = sparse(K_rows, K_cols, K_vals);

Several methods have been proposed to parallelize the global sparse matrix construction. In this work, instead of viewing the matrix generation subroutine as a single block, it is separated into two subroutines: the numerical integration subroutine and the assembly subroutine, which allows each of them to be analyzed for parallelism opportunities. The numerical integration subroutine has several opportunities for parallelism [16]:
1. Computing all Gauss integration points in parallel;
2. Parallelizing the computation of each k^e component [15, 16];
3. Computing k^e in parallel for all elements [13, 16].
The loop over integration points (first approach) is a good strategy for higher-order FEs, but it is harder to parallelize due to the inherent data race in updating entries of k^e with contributions from different integration points, and the degree of concurrency is generally lower [16]. The second approach is also helpful for higher-order FEs, but not convenient for low-order elements. The parallelization of the loop over low-order elements (third approach) is the most natural way because a thread can be put entirely in charge of one local matrix k^e [16, 17]. Hence, the third approach is selected in our implementation, since the 8-node brick FE offers a low degree of parallelism under the first two approaches. In the computation of k^e there are no dependencies, and only the information from the specific 8-node brick element is necessary. As a consequence, the numerical integration for different elements is also perfectly parallelizable [15]. A thread is used here to compute each local matrix k^e. Then, with a one-dimensional grid of thread blocks, all FEs in the mesh can be computed.

(Footnote 2: A race condition, or race hazard, is the behavior of a system in which the output depends on the sequence of events. It becomes a bug when events do not happen in the proper order. Race conditions can occur in computer software, especially in multithreaded or distributed programs.)
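To make the per-element task concrete, the following is a minimal C++ sketch of the work that one thread performs under the third approach: a 2×2×2 Gauss loop that builds k^e for an 8-node brick (Eq. (9)). It is only an illustration under assumed names (`element_stiffness` and the node ordering are mine, not the authors' code), and it keeps the full 8×8 matrix for clarity rather than only the lower triangle discussed next.

```cpp
#include <array>
#include <cmath>

using Mat8 = std::array<std::array<double, 8>, 8>;

// Local stiffness of one trilinear hexahedron: ke = c * sum_gp B^T B |J| (unit Gauss weights).
Mat8 element_stiffness(const std::array<std::array<double, 3>, 8>& X, double c)
{
    // Natural coordinates (+/-1) of the 8 nodes of the brick element.
    static const int sign[8][3] = {{-1,-1,-1},{ 1,-1,-1},{ 1, 1,-1},{-1, 1,-1},
                                   {-1,-1, 1},{ 1,-1, 1},{ 1, 1, 1},{-1, 1, 1}};
    const double g = 1.0 / std::sqrt(3.0);          // 2-point Gauss abscissa, weight = 1
    Mat8 ke{};                                       // zero-initialized 8x8 local matrix

    for (int gp = 0; gp < 8; ++gp) {                 // loop over the 2x2x2 integration points
        const double r = g * sign[gp][0], s = g * sign[gp][1], t = g * sign[gp][2];

        double dN[3][8];                             // shape-function derivatives w.r.t. (r,s,t)
        for (int a = 0; a < 8; ++a) {
            dN[0][a] = 0.125 * sign[a][0] * (1 + s * sign[a][1]) * (1 + t * sign[a][2]);
            dN[1][a] = 0.125 * sign[a][1] * (1 + r * sign[a][0]) * (1 + t * sign[a][2]);
            dN[2][a] = 0.125 * sign[a][2] * (1 + r * sign[a][0]) * (1 + s * sign[a][1]);
        }

        double J[3][3] = {};                         // Jacobian J = dN * X (3x8 times 8x3)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int a = 0; a < 8; ++a) J[i][j] += dN[i][a] * X[a][j];

        const double detJ = J[0][0]*(J[1][1]*J[2][2]-J[1][2]*J[2][1])
                          - J[0][1]*(J[1][0]*J[2][2]-J[1][2]*J[2][0])
                          + J[0][2]*(J[1][0]*J[2][1]-J[1][1]*J[2][0]);

        const double Jinv[3][3] = {                  // inverse via the adjugate / determinant
            {(J[1][1]*J[2][2]-J[1][2]*J[2][1])/detJ, (J[0][2]*J[2][1]-J[0][1]*J[2][2])/detJ, (J[0][1]*J[1][2]-J[0][2]*J[1][1])/detJ},
            {(J[1][2]*J[2][0]-J[1][0]*J[2][2])/detJ, (J[0][0]*J[2][2]-J[0][2]*J[2][0])/detJ, (J[0][2]*J[1][0]-J[0][0]*J[1][2])/detJ},
            {(J[1][0]*J[2][1]-J[1][1]*J[2][0])/detJ, (J[0][1]*J[2][0]-J[0][0]*J[2][1])/detJ, (J[0][0]*J[1][1]-J[0][1]*J[1][0])/detJ}};

        double B[3][8];                              // gradients in physical coordinates, B = J^{-1} dN
        for (int i = 0; i < 3; ++i)
            for (int a = 0; a < 8; ++a)
                B[i][a] = Jinv[i][0]*dN[0][a] + Jinv[i][1]*dN[1][a] + Jinv[i][2]*dN[2][a];

        for (int a = 0; a < 8; ++a)                  // accumulate ke += c * B^T B * |J|
            for (int b = 0; b < 8; ++b)
                ke[a][b] += c * (B[0][a]*B[0][b] + B[1][a]*B[1][b] + B[2][a]*B[2][b]) * detJ;
    }
    return ke;
}

int main() {
    // Unit-cube element with conductivity c = 1, purely for illustration.
    std::array<std::array<double, 3>, 8> X = {{{0,0,0},{1,0,0},{1,1,0},{0,1,0},
                                               {0,0,1},{1,0,1},{1,1,1},{0,1,1}}};
    Mat8 ke = element_stiffness(X, 1.0);
    return ke[0][0] > 0 ? 0 : 1;                     // diagonal entries are positive; row sums are zero
}
```

In the GPU scheme described above, each call of this routine simply becomes the body of one thread, indexed by the element number.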
In the eight-node brick element for scalar problems (1 DOF per node), each k^e is a square dense matrix with 64 (8 × 8) components, but by exploiting the matrix symmetry, only the lower or upper triangular part, including the diagonal, needs to be considered. This leads to a significant saving in floating-point operations and memory requirements, since approximately half of the local matrix components are needed, specifically 36 components instead of 64. Additionally, there are calculations that need to be made only once for the same FE type, such as the shape functions and their derivatives in natural coordinates. We use the same FE type to discretize the whole domain. Algorithm 2 presents the CUDA kernel to compute the local stiffness matrix k^e. A thread is responsible for computing the 36 components of k^e and storing them in the GPU global memory as a vector in the array K_vals. For each integration point, each thread must serially perform several operations, such as the computation of the Jacobian matrix (J), its determinant (|J|) and its inverse (J^-1), in addition to the gradient matrix (B) and the local stiffness matrix (k^e). The shape functions (N) and their derivatives with respect to the natural coordinates (dN) evaluated at each integration point are computed only once and are stored in a fast GPU memory. The computation of the matrix J requires three dense matrix-vector multiplications, i.e. the multiplication of dN (3 × 8) by the nodal Cartesian coordinates (8 × 3). The computation of |J| is performed with the determinant formula of a 3 × 3 matrix, and J^-1 with the formula to invert matrices of the same size. The matrix B is computed as the multiplication of J^-1 (3 × 3) by dN (3 × 8). Finally, the element matrix k^e is computed as presented in Eq. (9), with some modifications to calculate only the lower triangular matrix components.

Algorithm 2 CUDA kernel for numerical integration in the 8-node brick element.
1: __global__ void NumericalIntegration(ke, E, C, MP)
2:   tid = blockDim.x * blockIdx.x + threadIdx.x;  // Thread ID
3:   // Initialize local variables
4:   if tid < nel then
5:     for i = 0; i < 8; i++ do  // Loop over integration points
6:       read N(i) and dN(i);
7:       compute J, |J| and J^-1;
8:       compute B;
9:       compute the lower triangular part of ke and store it in K_vals;
10:    end for
11:  end if

However, if the FE mesh is denser, the numerical integration may not be executable in one pass, because it is a heavily memory-consuming operation and the GPU memory cannot allocate it entirely [17]. Thus, a simple domain partitioning based on the available GPU memory is used, which is a modified implementation of the one proposed by Komatitsch et al. [11]. Consequently, the numerical integration is carried out in subgroups of FEs (GPU_g), i.e. the numerical integration kernel is invoked to operate on one set of FEs at a time. The element subgroups are executed sequentially by the GPU, but the numerical integration over the FEs within a group is executed in parallel. With this approach, the construction of the global FE matrix can be made from any mesh size, limited only by the CPU memory.
Thus, if the available GPU memory is GPU_ma and the memory required by the numerical integration subroutine is GPU_mr (known through GPU query functions), the work done by the GPU is divided into groups based on the following formula:

GPU_g = ceil(GPU_mr / GPU_ma)    (10)

Each group of FEs is integrated numerically in the GPU, and the results allocated in the global GPU memory are downloaded to the CPU memory, leaving the GPU memory free for a new group of FEs. Every kernel is uploaded with only the data required by the FEs belonging to the group that will be executed (only part of the arrays C and E), which allows a better use of memory. After the numerical integration over all elements, the assembly of all local matrices into the global matrix is executed. To do this, the assembly step can be viewed as a sparse matrix format conversion [7, 24]. As stated before, the global position of each component of the K_all matrix should be stored in the arrays K_rows and K_cols, which is an implicit sparse matrix format storage [5]. The assembly step consists of converting the triplet format into the CSC format. This function requires sorting and reducing operations, which are not well suited for parallel computing because they are intrinsically serial operations. In fact, the assembly cannot be considered an embarrassingly parallel algorithm with no dependencies, as the numerical integration is [15]. The dependencies present in the assembly algorithm can cause race conditions, which should be avoided to prevent wrong results. To remove the dependencies present in the assembly algorithm there are some options, but the best for GPU programming is the use of coloring techniques or atomic operations [7, 17]. Using the first option, sets of elements with different colors are obtained, with two elements having the same color if their local stiffness matrices do not coincide at any global matrix entry. Given this element partition, the numerical integration algorithm followed by immediate matrix assembly can be executed in parallel for elements within a group of the same color, proceeding color by color sequentially. However, this option requires precomputing operations to divide the mesh into colors, where the problem of finding the optimal number of colors is NP-hard. Although there are many heuristics for finding nearly minimal colorings [26], this operation can take substantial execution time in meshes with a large number of FEs. On the other hand, atomic operations are undesirable because the penalty on the GPU execution time can be high [17]. Dziekonski et al. [7] use the sparse matrix conversion concept, which is executed in the GPU with the advantage that the assembly occurs in parallel using atomic operations. Using this approach, the arrays K_rows and K_cols must be computed and stored in the GPU, which substantially increases the memory requirements. Although this approach is good on high-end GPUs provided with high memory capacity to avoid memory transactions between processors, it is less interesting for the many users of low-end video cards. Additionally, in many current personal computers, the memory installed on the CPU motherboard is often four to eight times as large as that installed on the graphics card [11]. Hence, a good option is to make the sparse matrix format conversion in the CPU. Therefore, a methodology using the GPU for the intrinsically parallel algorithms and the CPU for the intrinsically serial algorithms in the global FE matrix construction is proposed.
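As a concrete illustration of the "assembly as format conversion" idea described above, here is a small, hedged C++ sketch that turns duplicated triplet entries into a CSC matrix by sorting and reducing. It mirrors what the MATLAB-side assembly functions do conceptually; it is not the MILAMIN or SuiteSparse implementation, and all names are mine.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Compressed sparse column storage (0-based column indices assumed).
struct Csc {
    std::vector<std::int64_t> colptr;   // size ncols + 1
    std::vector<std::int64_t> rowind;   // size nnz after duplicate reduction
    std::vector<double>       values;   // size nnz after duplicate reduction
};

Csc triplet_to_csc(const std::vector<std::int64_t>& rows,
                   const std::vector<std::int64_t>& cols,
                   const std::vector<double>&       vals,
                   std::int64_t ncols)
{
    const std::size_t nt = vals.size();
    std::vector<std::size_t> order(nt);
    for (std::size_t k = 0; k < nt; ++k) order[k] = k;

    // Sort triplets by (column, row): the intrinsically serial part of the assembly.
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return cols[a] != cols[b] ? cols[a] < cols[b] : rows[a] < rows[b];
    });

    Csc m;
    m.colptr.assign(static_cast<std::size_t>(ncols) + 1, 0);
    for (std::size_t k = 0; k < nt; ++k) {
        const std::size_t t = order[k];
        // Reduce: identical (row, col) pairs produced by neighboring elements are summed.
        const bool dup = k > 0 && cols[order[k - 1]] == cols[t] && rows[order[k - 1]] == rows[t];
        if (dup) {
            m.values.back() += vals[t];
        } else {
            m.rowind.push_back(rows[t]);
            m.values.push_back(vals[t]);
            ++m.colptr[static_cast<std::size_t>(cols[t]) + 1];
        }
    }
    for (std::size_t j = 0; j < static_cast<std::size_t>(ncols); ++j)
        m.colptr[j + 1] += m.colptr[j];              // prefix sum -> column pointers
    return m;
}
```

The sort and the prefix sum are exactly the "sorting and reducing operations" the text refers to, which is why this step is left on the CPU in the proposed methodology.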
The components of K_vals are computed in parallel on the GPU, but the indices K_rows and K_cols are computed and stored on the CPU, since their computation is simple [24]. After computing the global FE matrix in triplet format in the CPU and GPU, as described above, the sparse matrix format conversion is executed in MATLAB using one of the following options: the native MATLAB function sparse (MATLAB R2013b, http://www.mathworks.com/help/matlab/ref/sparse.html), the sparse2 function from SuiteSparse (version 4.0.2, http://www.cise.ufl.edu/research/sparse/SuiteSparse/) and the sparse create function from MILAMIN (Mutils 0.3, http://www.milamin.org/). A graph showing the flow of our proposed implementation is presented in Fig. 2. In Fig. 2, there is a preprocessing phase that can be performed by any specialized software for complex domains. However, as our goal is evaluating the computational time involved in the matrix generation, we use cubic domains discretized by 8-node brick elements. This simple discretization is performed in MATLAB, generating three arrays: the connectivity matrix E, which indicates the nodes that compose an element; the nodal coordinate matrix C, which specifies the Cartesian coordinates of the nodes; and the material properties matrix MP, which states the material properties of each element. As this subroutine is simple and takes a few seconds to generate large meshes, it is executed in the CPU. The second phase in Fig. 2 shows the necessary steps for building the global FE matrix, as described previously. In this step, the heterogeneous computing concept is used: while the GPU is executing the numerical integration kernel, the CPU is used for generating the indices of the sparse matrix in triplet format. Once the local matrices from all elements are computed, the assembly function can be executed and the global matrix is obtained. After the global system is obtained, the boundary conditions are applied and the solution of the linear system is found, but these steps are not shown in the figure because they are out of the scope of this paper.

Figure 2: Global sparse FE matrix construction.

4. Results

The GPU results are obtained using a 2GB NVIDIA Quadro 4000 video card based on the Fermi architecture. The CPU computations were performed in MATLAB R2013b using an Intel Xeon E5-2620 CPU at 2.0 GHz with six cores in a 64-bit Windows 7 operating system with 40GB of RAM. However, in the serial implementation only one CPU core is used. As stated before, the matrix is generated in two steps: first, conducting the numerical integration (NI) over all FEs, and second, assembling all local matrices into a global matrix. The results are presented separately in the following subsections.

4.1. Numerical integration

The NI subroutine is programmed in CUDA C and is called from the MATLAB workspace using the parallel.gpu.CUDAKernel constructor, which creates a CUDAKernel object. The properties of the CUDAKernel object, such as the number of blocks in the grid and the number of threads in the block, are also configured in MATLAB. After the creation and configuration of the kernel, it is evaluated using the feval MATLAB function. The code executed in the GPU is compared with its respective serial implementation counterpart executed in one CPU core. Fig. 3 shows the computational time when the NI routine is executed in CPU and GPU for different meshes.
In the time spent by the GPU, the time spent in memory transactions is also taken into account: uploading data from CPU to GPU memory using the MATLAB function gpuArray and, vice versa, downloading data from GPU to CPU memory using the function gather. The gather function guarantees that all work is finished in the GPU, and it ensures an accurate time measurement using the tic-toc MATLAB functions. Fig. 3 also shows the speedup reached by the GPU implementation. The GPU speedup is calculated as the ratio between the serial (ts) and parallel (tp) execution times. The advantage of the GPU implementation is observed in the curves presented. The NI routine can compute meshes with almost five million 3D FEs in a GPU with low memory capacity without using mesh partitioning. Although the NI routine is considered memory bounded [17], the implemented code consumes little memory due to the low order of the FE selected. The maximum time spent in the GPU for the largest mesh is 7.56 seconds, while for the same mesh in serial computing in the CPU, the NI routine spent 875.1 seconds, which means a GPU speedup of 115.8X. The GPU speedup increases dramatically up to 8000 FEs. After that, the GPU speedup grows slowly; indeed, with more than four million FEs, the GPU speedup decreases. This fact is explained by the ratio between floating point operations (flops) and memory operations (memops). When the data volume transferred from one memory type to another (memops) is larger than the flops, the problem becomes memory bounded and the speedup is decreased. This is known as the memory wall, in which there is a difference between the speed at which data are transferred to the processor and the rate at which instructions are executed [27]. A common problem in transferring data between the CPU and the GPU memories is found in the PCI-express port, which limits the rate of moving data between these processors. For this reason, minimizing the transactions between the processor memories is recommended.

Figure 3: Execution time in the numerical integration subroutine and GPU speedup as a function of the mesh size.

4.2. Assembly

The assembly step is performed in the CPU for two reasons: the assembly algorithm presents few parallel opportunities and requires high temporary memory. As the CPU cores are designed to maximize the execution speed of sequential programs (latency), this processor is well suited for serial tasks, as proposed here (assembly). The memory requirements of the assembly routine are shown in Table 1. As indicated in Section 3, the assembly phase can be viewed as a sparse matrix conversion from the triplet to the CSC format. Table 1 shows the percentage of compression in the non-zero elements (NNZ) going from one sparse format to the other through different meshes. This table also shows the percentage of memory saving when the CSC format is used instead of the triplet form. The percentages of compression and memory saving are not equal due to the matrix structure, which depends on the node and element numbering; however, these percentages are very similar. Taking into account that the assembly subroutine initially requires a lot of memory for storing the three arrays K_rows, K_cols and K_vals, it is not feasible for a GPU implementation, particularly if GPUs with low memory capacity are used for forming large matrices.
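To see where the memory figures in Table 1 come from, a quick back-of-the-envelope check for the 8 000 000-element mesh is shown below. The per-entry byte counts (8-byte double values, 4-byte integer indices in the triplet arrays, 8-byte row indices and column pointers in CSC as MATLAB stores them) are my assumptions; they are consistent with, but not stated in, the table.

```cpp
#include <cstdint>
#include <cstdio>

// Rough memory estimate for the 8 000 000-element case of Table 1 (assumed storage layouts).
int main() {
    const std::int64_t nel        = 8000000;        // number of 8-node brick elements
    const std::int64_t nnzTriplet = nel * 36;        // 36 lower-triangular entries per element
    const std::int64_t nnzCsc     = 112601201;       // unique entries reported in Table 1
    const std::int64_t ncols      = 8120601;         // matrix size (number of nodes) in Table 1

    // Triplet: value (8 B) + row index (4 B) + column index (4 B) per stored entry.
    const double mbTriplet = nnzTriplet * (8.0 + 4.0 + 4.0) / 1.0e6;
    // CSC: value (8 B) + row index (8 B) per entry, plus (ncols + 1) column pointers of 8 B.
    const double mbCsc = (nnzCsc * (8.0 + 8.0) + (ncols + 1) * 8.0) / 1.0e6;

    std::printf("triplet: %.1f MB, CSC: %.1f MB\n", mbTriplet, mbCsc);  // ~4608.0 and ~1866.6
    return 0;
}
```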
Three different routines for changing the sparse matrix format are considered herein: the sparse function from MATLAB (Sparse), the sparse2 function from SuiteSparse (Sparse2) and the sparse create function from MILAMIN (SparseCreate). The execution time spent in the NI routine, executed in the GPU, and in the assembly routine, executed in the CPU, are presented in Fig. 4. Additionally, this figure shows the time spent computing the arrays K_rows and K_cols (Index) used by the Sparse and Sparse2 routines.

(Footnote 6: Latency, in a basic definition, is the amount of time needed to complete a task, which means that it is measured in units of time. While the CPU tries to minimize latency, the GPU tries to maximize throughput. Throughput is the number of tasks completed per unit of time, such as jobs completed per second. Therefore, the NI subroutine is executed in the GPU because we need to obtain many local matrices per unit of time, and the assembly subroutine is executed in the CPU because the whole task is required to be finished in a short time.)

Table 1: Memory requirements in the triplet and CSC sparse formats for different mesh discretizations.

# of FEs   | NNZ in Triplet | NNZ in CSC  | NNZ compression | Memory used (triplet) | Memory used (CSC) | Memory saving
1 000      | 36 000         | 15 561      | 56.8%           | 0.58 MB               | 0.26 MB           | 54.9%
8 000      | 288 000        | 118 121     | 59.0%           | 4.61 MB               | 1.96 MB           | 57.4%
64 000     | 2 304 000      | 920 241     | 60.1%           | 36.9 MB               | 15.3 MB           | 58.6%
512 000    | 18 432 000     | 7 264 481   | 60.6%           | 294.9 MB              | 120.5 MB          | 59.1%
1 728 000  | 62 208 000     | 24 408 721  | 60.8%           | 995.3 MB              | 404.7 MB          | 59.3%
2 744 000  | 98 784 000     | 38 710 841  | 60.8%           | 1 580.5 MB            | 641.8 MB          | 59.4%
4 096 000  | 147 456 000    | 57 728 961  | 60.9%           | 2 359.3 MB            | 957.0 MB          | 59.4%
5 832 000  | 209 952 000    | 82 135 081  | 60.9%           | 3 359.2 MB            | 1 361.6 MB        | 59.5%
8 000 000  | 288 000 000    | 112 601 201 | 60.9%           | 4 608.0 MB            | 1 866.6 MB        | 59.5%

The NI routine spends more time than the Sparse2 and SparseCreate routines for all mesh discretizations, even though the NI routine is executed in parallel in the GPU and the assembly routines are executed serially in the CPU. This occurs because the NI is an intensive routine that requires many operations between small matrices, such as multiplications, additions and inversions, whereas the assembly only requires sorting and reducing operations [7].

Figure 4: Execution time spent by the assembly using three functions and different mesh sizes.

The sparse MATLAB function consumes significantly more time and memory than the other assembly functions and, for meshes larger than 200 thousand NNZs, it requires more time than the NI routine. The large memory usage explains the drawbacks of this function. In this routine, the row and column indices must be entered as double-type arrays, which increases the memory requirements and the performance overhead, making this function the most inefficient assembly routine [24]. In turn, although the sparse2 function has an excellent performance, this routine requires computing and storing the index arrays, and their computation requires almost the same time as the assembly function itself, meaning that the computational time spent by sparse2 can be doubled if the time for computing K_rows and K_cols is considered. Regarding the sparse create function for the sparse matrix conversion, in all discretization cases the computational time invested in the assembly is always the lowest. Indeed, this function takes less time than the computation of the index arrays, even when they are computed efficiently as in MILAMIN [24].
Moreover, the sparse create function does not require the computation of the index arrays for the assembly step, saving memory and computations. However, this function is a specialized routine that forms only matrices arising in FEM, which can represent a disadvantage when general sparse matrices are required.

4.3. Partitioning the numerical integration

Finally, as shown above, the numerical integration executed in parallel on the GPU together with the assembly of the NNZ entries based on the sparse create CPU function is the most efficient implementation. However, without mesh partitioning for the NI routine, the problem size that can be solved using low-memory-capacity GPUs is limited. Hence, using an approach such as the one presented in Section 3 for partitioning the mesh based on the available GPU memory, FE matrices can be constructed as large as the CPU memory allows. In addition, as the CPU memory is usually greater than the GPU memory, larger matrices can be constructed than those created only in the GPU. Fig. 5 shows the computational time for the numerical integration and assembly routines in the construction of the global sparse matrix, considering different mesh sizes. Adding the time spent by these two routines, the time necessary to construct the global sparse matrix (MatGen) is obtained. Table 2 presents these results for different mesh sizes and the time percentage employed per function. The largest mesh constructed has 27.27 million nodes and 27 million 3D elements. As stated before, the numerical integration routine always consumes more time than the assembly. Specifically, the assembly function consumes less than 40% of the total matrix generation time for all mesh cases: 37.9% is the greatest percentage, spent assembling 1 728 000 FEs (an intermediate mesh size), and the lowest percentage was obtained assembling 1 000 FEs (the smallest mesh). These results indicate an appropriate approach to generate large FE matrices from unstructured meshes in systems in which the GPU memory is lower than the CPU memory. Additionally, this method shows the benefits of heterogeneous computing: using the CPU for intrinsically serial routines and the GPU for intrinsically parallel and intensive computations.

Figure 5: Execution time spent generating global sparse FE matrices of several sizes.

Table 2: Computational time spent in the sparse matrix construction.

# of FEs    | Matrix size | MatGen time (s) | NI time | Assembly time
1 000       | 1 331       | 0.02            | 82.8%   | 17.2%
8 000       | 9 261       | 0.03            | 80.9%   | 19.1%
64 000      | 68 921      | 0.15            | 68.4%   | 31.6%
512 000     | 531 441     | 1.17            | 62.9%   | 37.1%
1 728 000   | 1 771 561   | 3.98            | 62.1%   | 37.9%
2 744 000   | 2 803 221   | 5.92            | 66.2%   | 33.8%
4 096 000   | 4 173 281   | 8.85            | 66.2%   | 33.8%
5 832 000   | 5 929 741   | 15.3            | 70.9%   | 29.1%
8 000 000   | 8 120 601   | 23.8            | 67.8%   | 32.2%
15 625 000  | 15 813 251  | 40.9            | 70.5%   | 29.5%
27 000 000  | 27 270 901  | 115.0           | 82.4%   | 17.6%

As a final comment, for reducing the matrix generation time, one option is to form temporary small sparse matrices and then assemble all of them to form the global matrix. Since the numerical integration for large meshes is divided into groups, which are executed sequentially in the GPU, the GPU can execute kernels asynchronously; an approach that can accelerate large sparse matrix generation in FEM is therefore to assemble all local matrices belonging to one group of FEs while the GPU is executing the NI routine for the next group, as sketched below.

(Footnote 7: Some function calls are asynchronous to facilitate concurrent execution, returning control to the CPU thread before the GPU has completed a specific task. This means that the GPU is able to execute several kernels while the CPU executes other routines.)
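A minimal sketch of that CPU/GPU overlap follows, with hypothetical interfaces: `integrate_group` stands in for launching the numerical-integration work of one group and `assemble_group` for folding the previous group's triplets into the global matrix. Plain C++ threads stand in for the asynchronous kernel machinery the paper alludes to; this is an illustration of the idea, not the authors' implementation.

```cpp
#include <cstdio>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical stand-ins for the paper's routines.
struct TripletBlock { std::vector<long long> rows, cols; std::vector<double> vals; };

TripletBlock integrate_group(int group) {             // placeholder for the GPU numerical integration
    std::printf("integrating group %d\n", group);
    return TripletBlock{};
}
void assemble_group(const TripletBlock&) {             // placeholder for the CPU assembly
    std::printf("assembling a group\n");
}

// Overlap assembly of group g-1 with integration of group g (double buffering).
void build_matrix_overlapped(int n_groups) {
    if (n_groups <= 0) return;
    TripletBlock current = integrate_group(0);          // fill the pipeline with the first group
    for (int g = 1; g < n_groups; ++g) {
        TripletBlock next;
        std::thread producer([&next, g] { next = integrate_group(g); });
        assemble_group(current);                         // CPU assembles while the next group integrates
        producer.join();
        current = std::move(next);
    }
    assemble_group(current);                             // drain the last group
}

int main() { build_matrix_overlapped(4); }
```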
5. Conclusions

The implementation proposed shows that large global sparse matrices arising in the numerical solution of EPDEs by FEM can be generated in heterogeneous computing with limited computational resources. In fact, with only 2GB of memory in the GPU, FEM matrices of 3D unstructured meshes with up to 27 million FEs are generated. Moreover, a significant acceleration of the numerical integration executed in the GPU, as compared to the serial CPU implementation, is achieved. The maximum speedup reached by the GPU is 126x and the mean value is 121x, which shows the good performance of the developed code. Additionally, the assembly function (sparse create) used in this work is an efficient routine that requires lower runtime and memory consumption compared with the other two functions (sparse and sparse2). In fact, sparse create consumes less runtime than the numerical integration routine, even though the latter is implemented in parallel computing. The maximum time spent by the sparse create function was only twenty seconds for assembling 27 million FEs, one-fifth of the time spent by the NI routine. Finally, although the proposed methodology is applied to the numerical solution of the Poisson equation by using FEM, this approach is general and can be used to construct not only the stiffness matrix of static or stationary problems, but also the mass matrix of dynamic problems, and for solving other types of PDEs by using the FEM.

Acknowledgments

The first author thanks Universidad Nacional de Colombia for the financial support of this work through the Estudiantes sobresalientes de posgrado scholarship. MSG Tsuzuki was partially supported by CNPq (grant 310.663/2013-0).

References

[1] E. Kreyszig, H. Kreyszig, E. Norminton, Advanced engineering mathematics, 10 ed., John Wiley & Sons, Hoboken, N.J., 2011.
[2] D. Braess, Finite elements: Theory, fast solvers, and applications in solid mechanics, 2 ed., Cambridge University Press, Cambridge, 2001.
[3] O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu, The finite element method: its basis and fundamentals, 6 ed., Elsevier, Amsterdam, 2005.
[4] A. Logg, K.-A. Mardal, G. Wells (Eds.), Automated solution of differential equations by the finite element method: The FEniCS Book, volume 84 of Lecture Notes in Computational Science and Engineering, Springer, 2012.
[5] T. A. Davis, Direct methods for sparse linear systems, Fundamentals of Algorithms, SIAM, Philadelphia, 2006.
[6] Y. Saad, Iterative methods for sparse linear systems, 2 ed., SIAM, 2003. doi:10.1137/1.9780898718003.
[7] A. Dziekonski, P. Sypek, A. Lamecki, M. Mrozowski, Progress In Electromagnetics Research 128 (2012) 249–265. doi:10.2528/PIER12040301.
[8] J. Owens, M. Houston, D. Luebke, S. Green, J. Stone, J. Phillips, Proceedings of the IEEE 96 (2008) 879–899. doi:10.1109/JPROC.2008.917757.
[9] A. R. Brodtkorb, C. Dyken, T. R. Hagen, J. M. Hjelmervik, O. O. Storaasli, Scientific Programming 18 (2010) 1–33. doi:10.3233/SPR-2009-0296.
[10] D. B. Kirk, W.-m. W. Hwu, Programming massively parallel processors: A hands-on approach, 2 ed., Morgan Kaufmann, Burlington, MA, USA, 2012.
[11] D. Komatitsch, D. Michéa, G. Erlebacher, Journal of Parallel and Distributed Computing 69 (2009) 451–460. doi:10.1016/j.jpdc.2009.01.006.
[12] Z. Fu, T. J. Lewis, R. M. Kirby, R. T. Whitaker, Journal of Computational and Applied Mathematics 257 (2014) 195–211. doi:10.1016/j.cam.2013.09.001.
[13] M. G. Knepley, A. R. Terrel, ACM Transactions on Mathematical Software 39 (2013) 1–13. doi:10.1145/2427023.2427027. arXiv:1103.0066v1.
[14] M. Pawel, P. Plaszewski, K. Banaś, Procedia Computer Science 1 (2010) 1093–1100. doi:10.1016/j.procs.2010.04.121.
[15] F. Krużel, K. Banaś, Computers & Mathematics with Applications 66 (2013) 2030–2044. doi:10.1016/j.camwa.2013.08.026.
[16] K. Banaś, P. Plaszewski, P. Maciol, Computers & Mathematics with Applications 67 (2014) 1319–1344. doi:10.1016/j.camwa.2014.01.021.
[17] C. Cecka, A. J. Lew, E. Darve, International Journal for Numerical Methods in Engineering 85 (2011) 640–669. doi:10.1002/nme.2989.
[18] G. R. Markall, A. Slemmer, D. A. Ham, P. H. J. Kelly, C. D. Cantwell, S. J. Sherwin, International Journal for Numerical Methods in Fluids 71 (2013) 80–97. doi:10.1002/fld.3648.
[19] J. Krüger, R. Westermann, in: M. Pharr, R. Fernando (Eds.), GPU gems 2: programming techniques for high-performance graphics and general-purpose computation, number 2 in GPU gems, Addison-Wesley, Upper Saddle River, NJ, 2005, pp. 703–718.
[20] F. Stock, A. Koch, in: R. Wyrzykowski, J. Dongarra, K. Karczewski, J. Wasniewski (Eds.), Parallel Processing and Applied Mathematics, number 6067 in Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2010, pp. 457–466.
[21] R. Li, Y. Saad, The Journal of Supercomputing 63 (2012) 443–466. doi:10.1007/s11227-012-0825-3.
[22] A. Dziekonski, P. Sypek, A. Lamecki, M. Mrozowski, International Journal for Numerical Methods in Engineering 94 (2013) 204–220. doi:10.1002/nme.4452.
[23] A. Dziekonski, P. Sypek, A. Lamecki, M. Mrozowski, IEEE Antennas and Wireless Propagation Letters 11 (2012) 1346–1349. doi:10.1109/LAWP.2012.2227449.
[24] M. Dabrowski, M. Krotkiewski, D. W. Schmid, Geochemistry, Geophysics, Geosystems 9 (2008) 1–24. doi:10.1029/2007GC001719.
[25] J. R. Gilbert, C. Moler, R. Schreiber, SIAM Journal on Matrix Analysis and Applications 13 (1992) 333–356. doi:10.1137/0613024.
[26] M. Kubale, Graph Colorings, American Mathematical Society, Providence, Rhode Island, 2004.
[27] K. Asanovic, J. Wawrzynek, D. Wessel, K. Yelick, R. Bodik, J. Demmel, T. Keaveny, K. Keutzer, J. Kubiatowicz, N. Morgan, D. Patterson, K. Sen, Communications of the ACM 52 (2009) 56. doi:10.1145/1562764.1562783.
BOOST: A fast approach to detecting gene-gene interactions in genome-wide case-control studies

arXiv:1001.5130v1 [q-bio.GN] 28 Jan 2010

Xiang Wan1,∗, Can Yang1,∗, Qiang Yang2, Hong Xue3, Xiaodan Fan4, Nelson L.S. Tang5 and Weichuan Yu1,†

1 Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology. 2 Department of Computer Science and Engineering, The Hong Kong University of Science and Technology. 3 Department of Biochemistry, The Hong Kong University of Science and Technology. 4 Department of Statistics, The Chinese University of Hong Kong. 5 Laboratory for Genetics of Disease Susceptibility, Li Ka Shing Institute of Health Sciences, The Chinese University of Hong Kong.

∗ These authors contributed equally to this work. † Corresponding author.

Gene-gene interactions have long been recognized to be fundamentally important to understand genetic causes of complex disease traits. At present, identifying gene-gene interactions from genome-wide case-control studies is computationally and methodologically challenging. In this paper, we introduce a simple but powerful method, named 'BOolean Operation based Screening and Testing' (BOOST). To discover unknown gene-gene interactions that underlie complex diseases, BOOST allows examining all pairwise interactions in genome-wide case-control studies in a remarkably fast manner. We have carried out interaction analyses on seven data sets from the Wellcome Trust Case Control Consortium (WTCCC). Each analysis took less than 60 hours on a standard 3.0 GHz desktop with 4G memory running the Windows XP system. The interaction patterns identified from the type 1 diabetes data set display a significant difference from those identified from the rheumatoid arthritis data set, while both data sets share a very similar hit region in the WTCCC report. BOOST has also identified many undiscovered interactions between genes in the major histocompatibility complex (MHC) region in the type 1 diabetes data set. In the coming era of large-scale interaction mapping in genome-wide case-control studies, our method can serve as a computationally and statistically useful tool.

Table 1: The number of interactions identified from the seven disease data sets under different constraints. C1 – significance threshold constraint: the significance threshold is 0.05 for the Bonferroni-corrected interaction P-value; C2 – distance constraint: the physical distance between two interacting SNPs is at least 1Mb. This constraint is used to avoid interactions that might be attributed to linkage disequilibrium (LD) effects [5]; C3 – main effect constraint: the single-locus P-value should not be less than 10^-6. This constraint is used to see whether there exist strong interactions without significant main effects, because SNPs with P ≥ 10^-6 are usually filtered out in the typical single-locus scan.

Disease | C1   | C1 & C2 | C1 & C2 & C3
BD      | 10   | 0       | 0
CAD     | 16   | 0       | 0
CD      | 8    | 1       | 1
HT      | 7    | 0       | 0
RA      | 350  | 0       | 0
T1D     | 4499 | 789     | 91
T2D     | 18   | 0       | 0
Genome-wide case-control studies use high-throughput genotyping technologies to assay hundreds of thousands of single-nucleotide polymorphisms (SNPs) and relate them to clinical conditions or measurable traits. To understand the underlying causes of complex disease traits, it is often necessary to consider joint genetic effects (epistasis) across the whole genome. The concept of epistasis [2] was introduced around 100 years ago. It is generally defined as interactions among different genes. The existence of epistasis has been widely accepted as an important contributor to genetic variation in complex diseases such as asthma, cancer, diabetes, hypertension, and obesity [5]. As a matter of fact, most researchers believe that it is critical to model complex interactions to elucidate the joint genetic effects causing complex diseases. They have demonstrated the presence of gene-gene interactions in complex diseases, such as breast cancer [14] and coronary heart disease [13]. The problem of detecting gene-gene interactions in genome-wide case-control studies has attracted extensive research interest. The difficulty of this problem is the heavy computational burden. For example, in order to detect pairwise interactions from 500,000 SNPs genotyped in thousands of samples, we need 1.25 × 10^11 statistical tests in total. A recent review [5] presented a detailed analysis of many popular methods, including PLINK [8], MDR [14], Tuning ReliefF [12], Random Jungle (http://randomjungle.com/, i.e., Random Forest [3]) and BEAM [17]. Among them, PLINK was recommended as the most computationally feasible method that is able to detect gene-gene interactions in genome-wide data [5]. (Footnote 2: Marchini et al. [10] demonstrated that it is feasible to test association allowing for interactions on a genome-wide scale. Besides that, Random Jungle can handle genome-wide data efficiently. However, they aim at testing associations allowing for interactions, which is easier than testing interactions. Please check the supplementary document and [5] for detailed explanations of 'test of association allowing for interactions' and 'test of interactions'.) PLINK finished the pairwise interaction examination of 89,294 SNPs selected from the WTCCC Crohn's disease data set in 14 days. Here we propose a new method, named 'BOolean Operation based Screening and Testing' (BOOST), to analyze all pairwise interactions in genome-wide SNP data. In our method, we use a Boolean representation of genotype data which is designed to be CPU efficient for multi-locus analysis. Based on this data representation, we design a two-stage (screening and testing) search method. In the screening stage, we use a non-iterative method to approximate the likelihood ratio statistic in evaluating all pairs of SNPs and select those passing a specified threshold. Most non-significant interactions will be filtered out and the survival of significant interactions is guaranteed. In the testing stage, we employ the typical likelihood ratio test to measure the interaction effects of the selected SNP pairs.

Results

We have applied BOOST to analyze data (14,000 cases in total and 3,000 shared controls) from the Wellcome Trust Case Control Consortium (WTCCC) on seven common human diseases: bipolar disorder (BD), coronary artery disease (CAD), Crohn's disease (CD), hypertension (HT), rheumatoid arthritis (RA), type 1 diabetes (T1D) and type 2 diabetes (T2D). The analysis of each disease data set with control samples took less than 60 hours (around 2.5 days) to completely evaluate all pairs of roughly 360,000 SNPs on a standard 3.0 GHz desktop with 4G memory running the Windows XP system. (Footnote 3: 500,000 SNPs are genotyped in 5,000 samples; about 360,000 SNPs pass the quality control, see the supplementary for details.) The results under different constraints are reported in Table 1. For T1D, we discovered many gene-gene interactions in the MHC region (see detailed descriptions in the following section).
For the other six diseases, however, we did not find nontrivial interactions (except one interacting SNP pair in CD).

T1D & RA. The MHC region in chromosome 6 has long been investigated as the most variable region in the human genome with respect to infection, inflammation, autoimmunity and transplant medicine [9]. The recent study conducted by the WTCCC [15] has shown that both T1D and RA are strongly associated with the MHC region via single-locus association mapping. The top-left panel of Figure 1 shows that the single-locus association map does not reveal much difference between T1D and RA. In our study, BOOST reports 4499 interactions in the T1D data set (see Table 1), of which 4489 interactions (99.8%) are in the MHC region. By comparison, BOOST reports 350 interactions in the RA data set, of which 280 interactions (80.0%) are in the MHC region. Our genome-wide interaction map provides the evidence that the MHC region is associated with these two diseases in different ways. The bottom panel of Figure 1 gives detailed interaction maps in the MHC region for the T1D and RA data. The LD map of the MHC region (composite LD is calculated using the method by Zaykin et al. [16]) is provided in the top-right panel of Figure 1. These interaction maps, different from the LD map, reveal a distinct pattern difference between T1D and RA. Specifically, there are three subregions in the MHC region, namely, the MHC class I region (29.8Mb - 31.6Mb), the MHC class III region (31.6Mb - 32.3Mb) and the MHC class II region (32.3Mb - 33.4Mb). A closer inspection of the T1D interaction map indicates that strong interaction effects widely exist between genes within and across the three classes, while most significant interactions in RA only involve loci closely placed in the MHC class II region. The contrast of the interaction patterns between T1D and RA may explain their different aetiologies, which are not revealed by single-locus association mapping.

Table 2: The interacting SNP pairs in the two regions shown in Figure 2. The SNPs in the column 'SNP 1' reside in the gene HLA-B and the SNPs in the column 'SNP 2' locate at the block across the genes HLA-DQA2 and HLA-DQB2. They show strong interactions without displaying significant main effects.

SNP 1     | SNP 1 single-locus P-value | SNP 2     | SNP 2 single-locus P-value | Interaction BOOST P-value
rs2524057 | 4.807 × 10^-1              | rs9276448 | 8.878 × 10^-3              | 5.362 × 10^-14
rs2524057 | 4.807 × 10^-1              | rs5014418 | 1.116 × 10^-2              | 2.738 × 10^-13
rs2853934 | 8.336 × 10^-2              | rs9276448 | 8.878 × 10^-3              | 2.507 × 10^-13
rs2524115 | 1.215 × 10^-1              | rs9276448 | 8.878 × 10^-3              | 6.456 × 10^-13
rs3873385 | 3.368 × 10^-1              | rs9276448 | 8.878 × 10^-3              | 3.186 × 10^-14
rs3873385 | 3.368 × 10^-1              | rs5014418 | 1.116 × 10^-2              | 3.841 × 10^-14
rs3873385 | 3.368 × 10^-1              | rs6919798 | 6.077 × 10^-2              | 4.257 × 10^-13
rs396038  | 9.939 × 10^-2              | rs9276448 | 8.878 × 10^-3              | 5.894 × 10^-13

Random Jungle took about 5 hours to handle the selected 89,294 SNPs genotyped in 5000 samples. It is unknown how much time Random Jungle will need to find interactions from 360,000 SNPs. However, Random Jungle aims at detecting association allowing for interactions rather than detecting interactions (see detailed explanations in the supplementary). Besides, Random Jungle has difficulty finding interacting SNP pairs displaying weak main effects, because the trees built in Random Jungle are conditional on the main effects of SNPs. BEAM took about 8 days to handle 47,727 SNPs using 5 × 10^7 Markov chain Monte Carlo iterations. Currently, BEAM fails to handle 500,000 to 1,000,000 SNPs genotyped in 5000 or more samples. Cordell [5] recommended PLINK and Random Jungle as the two most computationally feasible methods.
Our method BOOST makes a tremendous progress. It evaluated all pairs of roughly 360, 000 SNPs within 60 hours (around 2.5 days) on a standard desktop (3.0 GHz CPU with 4G memory running Windows XP professional x64 Edition system). The WTCCC phase 2 study will analyze over 60,000 samples of various diseases using either the Affymetrix v6.0 chip or the Illumina 660K chip. The shared control samples will increase from 3, 000 to 6, 000. Such an increase in numbers of SNPs and sample size are more demanding on the computation efficiency. We anticipate that BOOST is still applicable to analyze the new data sets. Interactions without significant main effects detected in T1D. The MHC region is a highly polymorphic region with a high gene density. Although previous reports [7, 15] using the single-locus scan have identified strong associations between MHC genes (such as HLADQB1 and HLA-DRB1) and T1D, it is still unclear which and how many loci within the MHC region determine T1D susceptibility. Interactions without significant main effects can provide additional information to help pinpoint disease-associated loci because SNPs involved in those interactions are usually filtered out in the single-locus scan. Among the selected 789 interacting pairs in T1D, 91 pairs have non-significant loci under the single-locus scan (all of them are listed in the supplementary). A careful inspection of these 91 interactions has identified two interesting interaction patterns between the MHC Class I and Class II. Figure 2 presents one interaction pattern between the region 31350k-31390k and the region 32810k-32860k in chromosome 6. Another pattern between the region 31350k-31390k and the region 32930k - 32960k in chromosome 6 is provided in the supplementary. The interactions between two regions in Figure 2 are listed in Table 2. All SNPs in these interactions display weak main effects while their joint effects are statistically significant. As Nejentsev et al. [7] argued that both the MHC class I and II genes should be considered to better understand type 1 diabetes susceptibility, our results further provide the evidence that the interaction effects between these two classes may contribute to the aetiology of type 1 diabetes. CONCLUSION The large number of SNPs genotyped in genome-wide case-control studies poses a great computational challenge in identification of gene-gene interactions. During the last few years, there have been fast growing interests in developing and applying computational and statistical approaches to finding gene-gene interactions. However, many approaches fail to handle genome-wide data sets (e.g., 500, 000 SNPs and 5, 000 samples). This hampers identification of interactions in genome-wide case-control studies. In this paper, we present a method named ‘BOOST’ to address this problem. We have successfully applied our method to analyze seven data sets from WTCCC. Not only is BOOST computationally efficient, it also has the advantage over PLINK with respect to the statistical power (see simulation study in the supplementary). Our experiment results demonstrate that interaction mapping is both computationally and statistically feasible for hundreds of thousands of SNPs genotyped in thousands of samples. METHODS COMPUTATION TIME Boolean representation of genotype data. Suppose we have L SNPs and n samples. The data set is usually stored in an L × n matrix. 
Each cell in this matrix takes a value in {1, 2, 3}, representing the homozygous reference genotype, the heterozygous genotype and the homozygous variant genotype, respectively. In our method, we introduce a Boolean representation of genotype data (the details are provided in the supplementary). This Boolean representation improves not only the space efficiency but also the CPU efficiency, because it involves only Boolean values and thus allows fast logic (bitwise) operations to be used to collect contingency tables.

Figure 1: Top-left panel: Single-locus association mapping of T1D and RA. They share a very similar hit region in chromosome 6. Top-right panel: The LD map of the MHC region in control samples. Bottom panel: Genome-wide interaction mapping of T1D and RA. 99.8% of the interactions of T1D and 80.0% of the interactions of RA are in the MHC region. Strong interaction effects widely exist between genes in and across the MHC classes I, II and III in T1D, while most significant interactions of RA only involve loci closely placed in the MHC class II region (the P values are truncated at P = 1.0 x 10^-16).

Figure 2: Interactions between the genes in the MHC class I and those in the MHC class II. Left panel (a): The region 31350k - 31390k of chromosome 6. The gene HLA-B in the MHC class I is located in this region. The recombination rate and LD plot from HapMap show that a block structure spans from 31360k to 31380k. This region is mapped through the SNPs rs2524057, rs2853934, rs2524115, rs396038, rs3873385, rs2524095, and rs2524089. The SNPs rs2524095 and rs2524089 are involved in the interactions with the region 32930k - 32960k shown in Figure 1(b) of the supplementary. Right panel (b): The region 32810k - 32860k of chromosome 6. The genes HLA-DQA2 and HLA-DQB2 in the MHC class II reside in this region. The recombination rate and LD plot from HapMap show that a block structure exists from 32820k to 32847k. This region is mapped through the genotyped SNPs rs9276448, rs5014418, and rs6919798. The ungenotyped SNPs rs9276438 and rs7774954 reside in the genes HLA-DQA2 and HLA-DQB2, respectively. They are in strong LD with those genotyped SNPs.
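To make the Boolean representation described in the Methods above concrete, the following minimal sketch (Python with NumPy; all names are illustrative and this is not the actual BOOST implementation, which packs genotypes into machine words and uses popcount) stores each SNP as three Boolean indicator vectors and collects the 3 x 3 x 2 genotype contingency table of a SNP pair with bitwise AND operations:

import numpy as np

def encode_snp(genotypes):
    """One SNP over n samples (values 1, 2, 3) as three Boolean indicator vectors,
    one per genotype category."""
    g = np.asarray(genotypes)
    return [g == value for value in (1, 2, 3)]

def contingency_table(snp_a, snp_b, is_case):
    """3 x 3 x 2 genotype-by-genotype-by-class table for a SNP pair, collected
    with bitwise AND over the Boolean vectors (a real implementation would pack
    the bits into machine words and count set bits)."""
    table = np.zeros((3, 3, 2), dtype=int)
    for i, a in enumerate(snp_a):
        for j, b in enumerate(snp_b):
            joint = a & b                         # samples with genotype pair (i, j)
            table[i, j, 1] = np.count_nonzero(joint & is_case)
            table[i, j, 0] = np.count_nonzero(joint & ~is_case)
    return table

# toy usage with 8 samples
snp1 = encode_snp([1, 2, 3, 1, 2, 2, 3, 1])
snp2 = encode_snp([2, 2, 1, 3, 1, 2, 3, 1])
cases = np.array([True, True, False, True, False, False, True, False])
print(contingency_table(snp1, snp2, cases))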
Measuring interaction effects. Logistic regression models are often used to measure the strength of gene-gene interactions [4, 5]. Let L_M and L_F be the log-likelihoods of the logistic regression model M_M with only main effect terms (df = 4) and of the full model M_F with both main effect and interaction terms (df = 8), respectively. An interaction effect is measured by the difference of the log-likelihoods evaluated at the maximum likelihood estimates (MLE), i.e., L̂_F − L̂_M. The likelihood ratio statistic 2(L̂_F − L̂_M) is asymptotically χ² distributed with df = 4. However, it is computationally unaffordable to use this measure directly to evaluate all pairs of SNPs in a genome-wide case-control study, because there are hundreds of billions of pairs to be tested. Noticing the correspondence between a logistic regression model and a log-linear model in categorical data analysis [1], we are able to measure an interaction effect under log-linear models. In the space of log-linear models, the homogeneous model M_H is the equivalent form of the main effect model M_M and the saturated model M_S is the equivalent form of the full model M_F. Let L̂_H and L̂_S be the log-likelihoods of M_H and M_S evaluated at their MLEs, respectively. Interaction effects can thus be measured using L̂_S − L̂_H.
After some algebra, it turns out that L̂_S − L̂_H is connected with the Kullback-Leibler divergence [6]:

L̂_S − L̂_H = n · D_KL(π̂ || p̂),    (1)

where n is the number of samples, π̂ is the joint distribution estimated under M_S and p̂ is the joint distribution estimated under M_H. More details can be found in the supplementary.

Boolean operation based screening and testing. In Eq. (1), there is no closed-form solution for p̂ under the model M_H. Using iterative methods to estimate p̂ is too computationally intensive to allow testing all SNP pairs in genome-wide case-control studies. Here we use a non-iterative method known as the 'Kirkwood Superposition Approximation' (KSA) [11] to approximate p̂, denoted as p̂_K. Let L̂_KSA be the log-likelihood of the KSA model evaluated at its MLE. We show that

L̂_S − L̂_H ≤ L̂_S − L̂_KSA.    (2)

In other words, L̂_S − L̂_KSA = n · D_KL(π̂ || p̂_K) is an upper bound of L̂_S − L̂_H (please see the supplementary for details). Based on this upper bound, we propose our new method 'BOolean Operation based Screening and Testing' (BOOST):

• Stage 1 (Screening): we evaluate all pairwise interactions using KSA in the screening stage. For each pair, the calculation of 2(L̂_S − L̂_KSA) = 2n · D_KL(π̂ || p̂_K) is based on the contingency table collected using Boolean operations. Since 2(L̂_S − L̂_H) ≤ 2(L̂_S − L̂_KSA), an interaction that does not pass a specified threshold τ under KSA, i.e., 2(L̂_S − L̂_KSA) ≤ τ, is not considered in Stage 2. We set the threshold τ = 30 in our experiment; this corresponds to P = 0.0002 without Bonferroni correction, which is a very weak significance level for a genome-wide study. For the WTCCC data, the number of interactions passing this threshold ranges from about 300,000 to about 600,000.

• Stage 2 (Testing): for each pair with 2(L̂_S − L̂_KSA) > τ, we test the interaction effect using the likelihood ratio statistic 2(L̂_S − L̂_H). We fit the log-linear models M_H and M_S, and calculate this test statistic using Eq. (1). After that, we conduct the χ² test with df = 4 to determine whether the interaction effect is significant.
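As a sketch of the two-stage procedure just described (Python with NumPy/SciPy; the function names and numerical details are illustrative assumptions, not the authors' implementation), the screening statistic 2n · D_KL(π̂ || p̂_K) can be computed in closed form from the pairwise and single marginals of the contingency table, while the Stage 2 statistic requires fitting the homogeneous model M_H, here done by iterative proportional fitting:

import numpy as np
from scipy.stats import chi2

def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def ksa_statistic(table):
    """Screening statistic 2n * D_KL(pi_hat || p_K), where p_K is the (normalized)
    Kirkwood superposition approximation built from pairwise and single marginals."""
    n = table.sum()
    pi = table / n
    p12 = pi.sum(axis=2, keepdims=True)
    p1y = pi.sum(axis=1, keepdims=True)
    p2y = pi.sum(axis=0, keepdims=True)
    p1 = pi.sum(axis=(1, 2), keepdims=True)
    p2 = pi.sum(axis=(0, 2), keepdims=True)
    py = pi.sum(axis=(0, 1), keepdims=True)
    p_k = p12 * p1y * p2y / (p1 * p2 * py)
    p_k /= p_k.sum()
    return 2.0 * n * kl_divergence(pi, p_k)

def fit_homogeneous(table, iters=100):
    """Fit the homogeneous log-linear model M_H (all two-way, no three-way terms)
    by iterative proportional fitting on the three pairwise margins."""
    pi = table / table.sum()
    q = np.full_like(pi, 1.0 / pi.size)
    for _ in range(iters):
        for axis in (2, 1, 0):
            q *= pi.sum(axis=axis, keepdims=True) / q.sum(axis=axis, keepdims=True)
    return q

def boost_pair(table, tau=30.0):
    """Two-stage evaluation of one SNP pair on its 3 x 3 x 2 table (sketch)."""
    if ksa_statistic(table) <= tau:            # Stage 1: screen with the upper bound
        return None
    n = table.sum()
    stat = 2.0 * n * kl_divergence(table / n, fit_homogeneous(table))   # Stage 2
    return stat, chi2.sf(stat, df=4)

toy_table = np.random.default_rng(0).integers(5, 50, size=(3, 3, 2))
print(boost_pair(toy_table))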
References

[1] A. Agresti. Categorical Data Analysis. Wiley Series in Probability and Statistics. Wiley and Sons Inc., second edition, 2002.
[2] W. Bateson and G. Mendel. Mendel's Principles of Heredity. Cambridge University Press, Cambridge, 1909.
[3] L. Breiman. Random forests. Machine Learning, 45:5-32, 2001.
[4] H.J. Cordell. Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans. Human Molecular Genetics, 11:2463-2468, 2002.
[5] H.J. Cordell. Detecting gene-gene interactions that underlie human diseases. Nature Reviews Genetics, 10:392-404, 2009.
[6] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, second edition, 2006.
[7] S. Nejentsev et al. Localization of type 1 diabetes susceptibility to the MHC class I genes HLA-B and HLA-A. Nature, 450(6):887-892, 2007.
[8] S. Purcell et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am. J. Hum. Genet., 81:559-575, 2007.
[9] R. Lechler and A. Warrens. HLA in Health and Disease. Academic, 2000.
[10] J. Marchini, P. Donnelly, and L. R. Cardon. Genome-wide strategies for detecting multiple loci that influence complex diseases. Nature Genetics, 37(4):413-417, 2005.
[11] H. Matsuda. Physical nature of higher-order mutual information: Intrinsic correlations and frustration. Physical Review E, 6:3096-3102, 2000.
[12] J.H. Moore and B.C. White. Tuning ReliefF for genome-wide genetic analysis. Lecture Notes in Computer Science, 4447:166-175, 2007.
[13] M.R. Nelson, S.L. Kardia, R.E. Ferrell, and C.F. Sing. Combinatorial partitioning method to identify multilocus genotypic partitions that predict quantitative trait variation. Genome Research, 11:458-470, 2001.
[14] M.D. Ritchie, L.W. Hahn, N. Roodi, L.R. Bailey, W.D. Dupont, F.F. Parl, and J.H. Moore. Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer. The American Journal of Human Genetics, 69:138-147, 2001.
[15] WTCCC. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Nature, 447:661-678, 2007.
[16] D.V. Zaykin, Z. Meng, and M.G. Ehm. Contrasting linkage-disequilibrium patterns between cases and controls as a novel association-mapping method. The American Journal of Human Genetics, 78(5):737-746, 2006.
[17] Y. Zhang and J.S. Liu. Bayesian inference of epistatic interactions in case-control studies. Nature Genetics, 39:1167-1173, 2007.
A Historical Review of Forty Years of Research on CMAC

Frank Z. Xing
arXiv:1702.02277v1 [] 8 Feb 2017
School of Computer Science and Engineering
Nanyang Technological University
[email protected]

Abstract— The Cerebellar Model Articulation Controller (CMAC) is an influential brain-inspired computing model in many relevant fields. Since its inception in the 1970s, the model has been intensively studied and many variants of the prototype, such as KCMAC, MCMAC, and LCMAC, have been proposed. This review article focuses on how the CMAC model has gradually been developed and refined to meet the demands of fast, adaptive, and robust control. Two perspectives, CMAC as a neural network and CMAC as a table look-up technique, are presented. Three aspects of the model are discussed: the architecture, learning algorithms and applications. In the end, some potential future research directions on this model are suggested.

I. INTRODUCTION

The Cerebellar Model Articulation Controller (CMAC) was proposed by J. S. Albus in 1975 [2]. At that point in history, the concept of the perceptron [23] was already popular, whereas effective learning schemes to tune perceptrons [24] were not yet on the stage. In 1969, Minsky and Papert had also pointed out, in their book Perceptrons: An Introduction to Computational Geometry, the limitation that exclusive disjunction logic cannot be solved by the perceptron model. These facts made it less promising to present CMAC in a neural network form. Consequently, although the name of CMAC appears sufficiently bio-inspired, and the theory that the cerebellum is analogous to a perceptron had been proposed earlier [1], CMAC was emphasized to be understood as a table look-up technique that can adapt to real-time control systems. Nevertheless, the underlying bioscience mechanism was addressed again in 1979 by Albus [3], which leaves the CMAC model with two different ways of interpretation.

The structure of CMAC was originally described as two inter-layer mappings, illustrated in Fig. 1. The control functions are represented in the weighted look-up table, rather than by solution of analytic equations or by analog means [2]. If we use S to denote sensory input vectors, and let A and P stand for association cells and response output vectors respectively, both the CMAC and multilayer perceptron (MLP) models can be formalized as

f : S → A
g : A → P

and the final output can be calculated as y = Σ_i A*_i w_i, where the asterisk denotes a link or activation between a certain association cell and the output.

Fig. 1. The primary CMAC structure

The nuance between the MLP and CMAC models is that, for the mapping f, the MLP model is fully connected but CMAC restricts the association to a certain neighboring range. This property of the mapping significantly accelerates the learning process of CMAC, which is considered a main advantage compared to other neural network models. It is notable that CMAC may not represent accurately how the human cerebellum works, even at a most simplified level. For instance, recent biological evidence from eyelid conditioning experiments suggests that the cerebellum is capable of computing exclusive disjunction [34]. However, CMAC is still an important computational model, because the restriction on the mapping function effectively decreases the chance of being trapped in a local minimum during the learning process.
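A minimal sketch of the structure just described, assuming Python/NumPy and a one-dimensional input (all sizes and names are illustrative): each of several overlapping tilings activates one association cell in a small neighborhood of the input, and the output is the sum of the attached weights, y = Σ_i A*_i w_i.

import numpy as np

class SimpleCMAC:
    """Minimal one-dimensional CMAC: each overlapping tiling maps the input to one
    association cell, and the output is the sum of the attached weights."""
    def __init__(self, n_cells=64, n_tilings=8, x_min=0.0, x_max=1.0):
        self.n_cells, self.n_tilings = n_cells, n_tilings
        self.x_min, self.span = x_min, x_max - x_min
        self.w = np.zeros((n_tilings, n_cells))

    def active_cells(self, x):
        # each tiling is offset by a fraction of a cell, so nearby inputs share
        # most of their active cells (the local generalization property)
        cells = []
        for t in range(self.n_tilings):
            offset = t / self.n_tilings
            cell = int((x - self.x_min) / self.span * self.n_cells + offset) % self.n_cells
            cells.append((t, cell))
        return cells

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self.active_cells(x))

cmac = SimpleCMAC()
print(cmac.predict(0.3))   # 0.0 before any weight has been trained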
Despite the advantages mentioned above, CMAC has the following disadvantages as well [25]:

• many more weight parameters are needed compared to a normal MLP model;
• the local generalization mechanism may cause some training data to be incorrectly interpolated, especially when the data is sparse considering the size of the CMAC;
• CMAC is a discrete model, so analytical derivatives do not exist.

In response to these problems, modified or high-order CMACs, storage space compression, and fast convergent learning algorithms have been continuously studied. These discoveries will be elaborated in the following sections. Recent advances in Big Data and computing power seem to have watered down these problems, but in many physical scenarios the computing power is still restricted and high-speed responses are required. This is the reason to further study CMAC-like models, even though they have been in and out of fashion several times. Although there have been a few other pioneering review works on CMAC, for instance by Mohajeri et al. in 2009 [20], this article is distinctive for its chronological perspective rather than its emphasis on detailed techniques.

The remainder of the article is organized as follows: Section II traces the evolution of the CMAC structure and efficient storage techniques. Section III discusses the learning algorithms. Section IV presents various circumstances in which the CMAC model has been applied. Section V summarizes the paper and opens discussion of potential improvements that can be made to the CMAC model.

Fig. 3. Albus' model [1] of the cerebellum resembles a feed-forward perceptron. It assumes that climbing fiber activity causes parallel fiber synapses to be weakened. S stands for stellate cells, B stands for basket cells.

II. ARCHITECTURE

A. Basic Architectures

Before CMAC was proposed in the 1970s, the anatomy and physiology of the cerebellum had been studied for a long time. It is widely agreed that many different types of nerve cells are involved in cerebellar functioning. Fig. 2 shows the computational model proposed by Mauk and Donegan [18]. A simpler and more implementable model proposed by Albus [3] is exhibited in Fig. 3. If we use A to represent the actual memory (association cells) and M the conceptual memory that encodes S, then the conceptual mapping f is more sparse and constrained within a certain range, but the mapping g could be random:

f : S → M
g : M → A
h : A → P

When it comes to implementation, the connectionist perspective that recognizes CMAC as a neural network and the table look-up perspective are equivalent. Fig. 5 illustrates the difference at a conceptual level [25]. The upper part shows a two-input one-output neural network structure; the lower part shows a two-input one-output table look-up structure. An intuitive observation from both Fig. 4 and Fig. 5 is that the number of weights increases exponentially with the number of input variables. This problem brings two challenges: 1) the storage of weights becomes space consuming; 2) the training process becomes difficult to converge and the waiting time before termination lengthens.

Fig. 2. Mauk's model assumes complex interaction between different types of cells: an open arrow stands for plastic synapses; a filled circle arrow stands for inhibitory synapses; a hollow square arrow stands for excitatory synapses.

Based on the computational model of the cerebellum (Fig. 3), the primary CMAC structure as shown in Fig. 1 was conceived.
However, it is obvious that high-dimensional proximity of association rules cannot be captured in this primary form, because the nodes are arranged in a one-dimensional array. A simple solution to this problem is to introduce some nonlinearity into the first mapping. As a result, another layer called "conceptual memory" was soon added to the CMAC structure, which involves one additional mapping compared to the primary structure. The function of the conceptual memory is illustrated in Fig. 4.

Fig. 4. CMAC with conceptual memory

B. Modified Architectures

Since simply increasing the CMAC size gives diminishing returns, two directions of modification have been undertaken to push forward the research on CMAC. The first is to combine multiple low-dimensional CMACs. The second is to introduce other properties, for example spline neurons, fuzzy coding or a cooperative PID controller. The Cascade CMAC architecture was first proposed in 1997 for the purpose of printer calibration [10] (Fig. 6). Input variables are sequentially added to keep each of the CMAC components two dimensional.

Fig. 5. A conceptual-level comparison of the two different perspectives

A previously used technique to solve the first challenge is called tile coding, which was later developed into an adaptive version as well [36]. The advantage of tile coding is that we can strictly control the number of features through tile splitting. Another commonly employed trick is called hashing. This technique was applied to CMAC in the 1990s; it maps a large virtual space A to a smaller physical address space A'. Many common hashing functions fh can be used, for instance from MD5 or DES, which are fundamental methods in cryptography. However, how to reduce collisions for specific problems according to the properties of the data is still considered an art.

Fig. 6. Cascade CMAC architecture

Another way to combine multiple CMACs can be realized by a voting technique. If we regard the Cascade CMAC model as a fusion of input information at the feature level, then voting CMACs can be reckoned as a fusion at the decision level. Each small CMAC accepts only a subset of the whole input space. In this case an important precondition is that the input data is well partitioned. The reason for this requirement is that a voting lift can only be achieved by heterogeneous expert networks. Taking this into account, some prior knowledge of the input data or unsupervised clustering techniques can be applied at this stage. If we make more effort at dimension reduction, multiple levels of voting can be used. The architecture can then be reckoned as a Hierarchical CMAC (H-CMAC), which was described by Tham [32] in 1996. H-CMAC has several advantages, such as less storage space and fast adaptation for learning non-linear functions. In Fig. 7, a two-level H-CMAC is illustrated. It is noticeable that each conventional CMAC component in the different layers plays a different role; the gating network works at a higher level.

Fig. 7. Hierarchical CMAC architecture, adapted from [32]

Reference [25] introduced a hardware implementation which uses selected binary bits of the indexes as the hashing code, whereas other research, e.g. [35], claims that due to the diminishing learning rate and slower convergence, hashing is not effective for enhancing the approximation capability of CMAC. Therefore, many other attempts have been made, such as neighbor sequential, CMAC with general basis functions, and adaptive coding [7], which is a similar idea to hashing in the sense of weight space compression.
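The hashing idea described above can be sketched as follows (Python; the MD5-based index function is only an illustrative choice, since, as the text notes, hardware implementations often select binary bits of the indexes instead): a huge virtual association address is mapped onto a small physical weight table, and collisions are simply tolerated.

import hashlib
import numpy as np

class HashedCMAC:
    """Sketch of hashing a huge virtual association space A onto a small physical
    weight table A'."""
    def __init__(self, physical_size=4096):
        self.w = np.zeros(physical_size)
        self.size = physical_size

    def _address(self, tiling, virtual_cell):
        key = f"{tiling}:{virtual_cell}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % self.size

    def predict(self, virtual_cells):
        # virtual_cells: one (possibly astronomically large) cell index per tiling
        return sum(self.w[self._address(t, c)] for t, c in enumerate(virtual_cells))

hc = HashedCMAC()
print(hc.predict([10**9 + t for t in range(8)]))   # huge addresses fit in the small table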
These architectures can be employed at the same time as the more fundamental modifications made for the second consideration. In 1992, Lane et al. [11] described high-order CMAC as a CMAC with the binary neuron output replaced by spline functions. This modification brings more spline parameters, but makes the output differentiable, which sometimes gives better performance because the learning phase goes deeper. Sharing the idea of allowing a more meticulous transfer function, a similar modification can be made by introducing linguistic rules and fuzzy logic. The Linguistic CMAC (LCMAC) was proposed by He and Lawry in 2009, based on label semantics rather than mapping functions. A cascade of LCMAC series was further developed in 2015 [8]. Borrowing the terminology of "focal element" from evidence theory, the properties used to activate a memory unit are represented as membership functions (usually trapezoidal) of several attributes. Therefore, for each input tuple, the excited neurons form a hypercube in the matrix of all the weights serving as memory. The responsive output can be depicted distributionally as

P(a|x) = ∏_{d=1}^{N} m_{x_d}(F_{d_i}),

where P is the probability of some memory unit a being activated given the input vector x, F denotes a focal element, d_i is the index of linguistic attributes, and m is the hidden weight for F called the "mass assignment".

Fuzzy CMAC (FCMAC) is yet another form of fuzzy coding. The intuition for using a fuzzy representation is similar to that for using spline functions. For most well-defined problems, the nature of CMAC approximation is to use multiple steps to emulate a smooth surface. Proper selection of the fuzzy membership function would obviously relieve the pressure of weight storage and training. From my understanding, FCMAC is an inverse structure of many established Neuro-Fuzzy Inference Systems. Usually, two extra fuzzy/defuzzy layers are added next to the association layer; the consequents can be Mamdani type, TSK type, weights, or a hybrid of them, e.g. the Parametric-FCMAC (PFCMAC) [21]. More advanced FCMAC models, perhaps inspired by spline methods, use interpolation to solve the discrete inference problem. In 2015, Zhou and Quek proposed FIE-FCMAC [39], which adds fuzzy interpolation and extrapolation to a rigid CMAC structure.

Recently, CMAC applications to more specific scenarios have been studied. For example, for the control of time-varying nonlinear systems, a combination of the Radial Basis Function network and the local learning feature of CMAC has been proposed (RBFCMAC). It is reported that using RBFs can prevent parameter drift and accelerate the speed of synchronization to the changing system [17]. For this type of combination, besides RBF, a Wavelet Neural Network (WNN), fuzzy rule neurons and a recurrent mechanism [15], or a mixture of them, can also be employed with the CMAC model simultaneously. Previous works, such as [38], have provided evidence that these features are effective for modeling complicated dynamic systems. In a broader scenario, CMAC can be applied together with other control systems as well (Fig. 8). Traditionally, the role of CMAC at the primary stage is to assist the output of a main controller. As the training proceeds, the error between the CMAC and the actual model decays, and the CMAC takes over the role of the main controller. A back-forward signal acceptor or conjugated CMACs are often used to accelerate this process [13]. More precisely, this arrangement is a change of information flow but not a change of architecture.

Fig. 8. A combined control system of CMAC and PID
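The information flow of such a combined arrangement can be sketched as follows (Python; the PID gains and the table-based stand-in for a CMAC are placeholder assumptions, not any specific published scheme): the CMAC is trained towards the total control signal, so its contribution grows as its error decays and it gradually takes over from the main controller.

class PID:
    """Textbook PID controller with illustrative gains."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class TinyTable:
    """Crude stand-in for a CMAC: one weight per quantized state (not a real
    CMAC with overlapping receptive fields)."""
    def __init__(self, resolution=0.1, alpha=0.3):
        self.w, self.res, self.alpha = {}, resolution, alpha

    def predict(self, x):
        return self.w.get(round(x / self.res), 0.0)

    def train(self, x, target):
        key = round(x / self.res)
        self.w[key] = self.w.get(key, 0.0) + self.alpha * (target - self.w.get(key, 0.0))

def combined_control_step(pid, cmac, reference, measurement, dt=0.01):
    """The CMAC (here the stand-in table) is trained towards the total control
    signal, so its share of the output grows while the PID share shrinks."""
    error = reference - measurement
    u_total = pid.update(error, dt) + cmac.predict(measurement)
    cmac.train(measurement, u_total)
    return u_total

print(combined_control_step(PID(), TinyTable(), reference=1.0, measurement=0.2))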
Similarly, CMAC structures that modify the storage optimization methods, for example quantization [16] and multiresolution [19], only result in architectural differences in a hardware implementation sense. According to my personal understanding, they are the same thing as far as the conceptual structure is concerned, though these techniques are of sufficient interest to be discussed in Section III.

III. LEARNING ALGORITHM

The original form of learning proposed by Albus is based on backward propagation of errors. The fast convergence of this algorithm was proved mathematically by succeeding research. Specifically, the convergence rate is only governed by the size of the receptive fields of the association cells [37]. Some studies [25] suggest that it is useful to distinguish between target training and error training, although they share the same mathematical form. In the weight updating rule

w_i(k + 1) = w_i(k) + α (Error / count(A*)) x_i,

k is the epoch index, α ∈ [0, 1] is the learning rate, and x_i is a state indicator of activation. The learning process is theoretically faster with a larger α, although overshooting may occur. If the difference between the output and the desired value, ∆y = y − ŷ, can well define the error, target training and error training are equivalent. In certain cases, Error = (y − ŷ)² / (2c) is a popular cost function as well. Based on the original learning algorithm, many further developments have been derived. Most of them can be categorized into two directions of improvement. The first relies on extra supervisory signals or value assignment mechanisms based on statistics. The second endeavors to optimize the use of memory. In a more practical sense, the dichotomy can be understood as learning to adjust the value of weights, and learning to adjust the number or size of weights.

A. Adjusting the value of weights

Facing the trade-off between speed of convergence and learning stability, it is intuitive to consider using a relatively large α at the beginning, and to slow down the weight adjustment near the optimum point. This improvement is called an adaptive learning rate [14]. It can be achieved by using two CMAC components, one as the main controller and another as a supervisory or compensating controller. Another way to achieve this adaptation is by imposing a resistant term in the weight updating rule:

w_i(k + 1) = w_i(k) + (α_0 / (c α_i)) (Error / count(A*)) x_i.

In the above rule, α_0 and c are constants, and α_i denotes the average number of activations of the memory units that have been activated by training sample i. For both fixed and adaptive learning rates, the weight adjustment starts from a randomized set of parameters. Experiments suggest that for sparse data, even if the training samples are within the local generalization range, perfect linear interpolation may not be achieved [25]. As a result, the approximation may appear to have many small zigzag patterns. Therefore, the weight smoothing technique was proposed. After each iteration, the weights are globally adjusted according to

w_i(A*_j) = (1 − δ) w_i(A*_j) + (δ / count(A*)) Σ_{k=1}^{count(A*)} w_i(A*_k),

where δ is the proportional coefficient measuring the share of the weight with index j that is to be replaced by the average of all activated weights A*. While the weight smoothing technique tries to affect as much memory as possible in one iteration, repeated activation of the same units may not be a good thing. In 2003, research [26] pointed out that assigning the error equally to each weight is not a meticulous method. During the learning process, if an associative memory is activated many times, which means that many relevant samples have already been learned, the weight should be closer to the desired value. In other words, the weight value is more "credible". In this situation, less error should be assigned to it so that other memory units can learn faster. This rule is called learning based on credit assignment:

w_i(k + 1) = w_i(k) + α (f(i)^{-1} / Σ_{i=1}^{g} f(i)^{-1}) (y − Σ_{i=1}^{g} w_i(k)) x_i,

where g is a parameter regarding the degree of local generalization and f(i) records the number of times memory unit i has been activated. Further research has proved that convergence is guaranteed with a learning rate α < 2.

The hardware implementation of CMAC memory enables other interpretations of the associative architecture. If we recognize it as a self-organizing feature map (SOFM), a competitive learning algorithm can be realized. With input variables x = {x_1, x_2, ..., x_n}, the output can directly be the weight of the winning neuron. The weight updating rule can be formalized according to Hebbian theory:

w_x(k + 1) = w_x(k) + α (y(k) − w_x(k)).

Note that the index of x can be time dependent.
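For concreteness, a minimal sketch of the basic Albus update described at the beginning of this section (Python/NumPy; the learning rate and sizes are illustrative): the output error is shared equally among the count(A*) activated weights.

import numpy as np

def cmac_train_step(weights, active_index, target, alpha=0.5):
    """Albus-style error training: w_i <- w_i + alpha * Error / count(A*) for the
    activated cells only."""
    output = weights[active_index].sum()
    error = target - output
    weights[active_index] += alpha * error / len(active_index)
    return error

w = np.zeros(32)
active = np.array([3, 4, 5, 6])            # cells activated by some fixed input
for _ in range(20):
    cmac_train_step(w, active, target=1.0)
print(round(w[active].sum(), 3))           # approaches the target 1.0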
A modified version of the aforementioned rule involves not only the inputs, but also the errors as feedback:

w_{x,e}(k + 1) = w_{x,e}(k) + α (y(k) − w_{x,e}(k)).

Using this learning mechanism, the SOFM-like CMAC is named MCMAC. In 2000, Ang and Quek [5] proposed learning with momentum, neighborhood, and averaged fuzzy output for both CMAC and MCMAC. For CMAC and MCMAC with momentum, the weight updating rules can be written as

∆w_j(k + 1) = α ∆w_j(k) + η δ_j(k) y_j(k),
∆w_{x,e}(k + 1) = α ∆w_{x,e}(k) + λ (1 − α)[y(k) − w_{x,e}(k)],

where the first term represents a momentum, the second term is a back-propagation term with learning rate η and local gradient δ_j, and j is the index of the activated weights. Therefore, given a sequential learning process, the aggregate weight adjustment can be derived from the above rules:

∆w_j(k + 1) = η Σ_{i=1}^{k} α^{k−i} δ_j(i) y_j(i) = −η Σ_{i=1}^{k} α^{k−i} ∂Error/∂w_j(i).

When the sign of ∂Error/∂w_j stays unchanged, ∆w_j is accelerating, while if the sign reverses, ∆w_j will slow down to stabilize the learning process. The neighborhood learning rule proposed by Ang and Quek [5] serves the same purpose as the weight smoothing technique. However, they used a Gaussian function to pay more attention to the neurons surrounding the winning neuron instead of evenly adjusting each weight regardless of distance. The weight updating rule for MCMAC with momentum and neighborhood, in singular form, is

∆w_i(k + 1) = α ∆w_i(k) + h_ij [λ (1 − α)(y(k) − w_i(k))],

where

h_ij = exp(−|r_j − r_i|² / (2σ²))

is the distance metric between neuron j and the winning neuron i. Besides using additional terms such as momentum and neighborhood, kernel methods can also be applied to CMAC learning. In 2007, the Kernel CMAC (KCMAC) was proposed [9], [11] to reduce the CMAC dimension, which usually hazards the training speed and convergence. KCMAC treats the association memory as the feature space of a kernel machine, so the weights can be determined by solving an optimization problem. The supervised learning, using the errors e_i as slack variables, aims to achieve
min_{w,e} (1/2) wᵀw + (β/2) Σ_{i=1}^{n} e_i²
s.t. wᵀφ(u_i) + (β/2) e_i ≥ 1,  i = 1, 2, ..., n,

where w denotes the weights, the coefficient β serves as a penalty parameter, φ(·) is the mapping function and K(u, w) = φ(u) · φ(w) is the kernel function. The standard procedure for this problem is to solve the maxima-minima of the Lagrangian function, though other learning methods can be employed as well, for instance Bayesian Ying-Yang (BYY) learning, as proposed in 2013 by Tian et al. [33]. The key idea behind BYY learning can be represented as harmonizing the joint probability written in different product forms of Bayesian components. In this specific KCMAC case, u and the output z are observable, while w is a hidden variable. The joint distribution can be written either in ying form or in yang form:

p_ying(u, z, w) = p(w) p(u|w) p(z|u, w),
p_yang(u, z, w) = p(u) p(z|u) p(w|u, z).

Our goal is to maximize H(p_ying, p_yang),

H(p_ying, p_yang) = ∫∫∫ p_ying ln p_yang du dz dw.

In practice, ∆H is frequently calculated, depending on how the conditional probabilities are estimated, and the maximum of H is reached by searching heuristically.

B. Adjusting the number of weights

Adjusting the number of weights is chiefly realized by introducing multi-resolution and dynamic quantization techniques. As Section I has explained, CMAC was first used for real-time control systems. Consequently, the structural design fits hardware implementation well. Many CMAC variants, such as LCMAC and FCMAC, inherit the lattice-based division of memory units. Inputs are generally built on grids with equal spacing. This characteristic adds local constraints to the values of adjacent units. If we consider this problem from a function approximation perspective, it is also rather intuitive that a locally complex shape needs a larger number of low-order elements to approximate it. Naturally, multi-resolution lattice techniques [19] were proposed in the 1990s. With prior knowledge, some metrics can be used to determine the resolution, for example the variation of the output in certain memory unit areas, which can be formalized as

resolution ∝ v = (1/N) Σ_{j=1}^{N} |y_j − ȳ|,

where N is the number of output samples and v is the variation. Other attempts use a tree structure to manage the resolution hierarchically: a new high-resolution lattice is generated only if the variance threshold is exceeded. The concept of quantization is almost identical to resolution. The nuance may be that the term quantization puts more emphasis on the discretization of a continuous signal. Furthermore, increasing the resolution will also cause the number of weights to grow, whereas quantization deals with a given memory capacity. To the best of my knowledge, the idea of adaptive quantization was initially proposed in 2006 [16]. The algorithm used to interpolate points looks at changes of slope, which is similar to the variation metric discussed above. The input space is initialized with uniform quantization. For each point x_i, the slopes are calculated from neighboring points:

slope_j = (f(x_j) − f(x_i)) / (x_j − x_i),  j = 1, 2, 3, ...

Here f stands for the mapping function between the input vector and the association cells. A change of sign in corresponding directions indicates finer local structure, or fluctuation in other words, so a new critical point is added to split the interval. The termination condition is that, within all intervals, the difference of the output values should not exceed an experimental constant C [28].
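A rough sketch of such slope-driven adaptive quantization (Python/NumPy; this is only an illustrative reading of the procedure, not the published algorithm of [16] or [28]): starting from a uniform grid, a new critical point is inserted wherever the slopes on the two sides of an interval change sign, and refinement stops once the output difference within every interval is below the constant C.

import numpy as np

def adaptive_quantization(f, x_min, x_max, n_init=8, c=0.1, max_iter=10):
    """Insert knots where neighbouring slopes change sign; stop when all interval
    output differences are below c."""
    knots = list(np.linspace(x_min, x_max, n_init + 1))
    for _ in range(max_iter):
        if all(abs(f(b) - f(a)) <= c for a, b in zip(knots[:-1], knots[1:])):
            break                                     # termination condition
        new_knots = []
        for a, b in zip(knots[:-1], knots[1:]):
            mid = 0.5 * (a + b)
            slope_left = (f(mid) - f(a)) / (mid - a)
            slope_right = (f(b) - f(mid)) / (b - mid)
            if np.sign(slope_left) != np.sign(slope_right):
                new_knots.append(mid)                 # finer local structure found
        if not new_knots:
            break
        knots = sorted(knots + new_knots)
    return np.array(knots)

print(len(adaptive_quantization(np.sin, 0.0, 2 * np.pi)))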
The adaptive quantization technique was later developed into the Pseudo Self-Evolving CMAC (PSECMAC) model [30], which further introduced a neighborhood activation mechanism.

IV. APPLICATION

Though CMAC was first proposed for manipulator control, during the past decades it has proved effective in robotic arm control, fractionator control, piezoelectric actuator control [22], mechanical systems, signal processing, intelligent driver warning systems, fuel injection, canal control automation, printer color reproduction, ATM cash demand [31], and many other fields [7]. This can be attributed to the fast learning and stable performance of CMAC models. Moreover, engineering-oriented software and toolkits also help to promote the application of CMAC. In the following part, CMAC applications to two emerging engineering fields are elaborated.

A. Financial Engineering

The practical side of financial engineering has employed various instruments to model the price and risk of bonds, stocks, options, futures and other derivatives. In most cases, historical data can be partitioned into chunks of a selected time span to enable a supervised learning process. In 2008, Teddy et al. [29] proposed an improved CMAC variant (PSECMAC) to model the price of a currency exchange American call option. Three variables are taken into consideration: the difference between the strike price and the current price, the time to maturity, and the pricing volatility. Thus the pricing function is formulated as

C_0 = f(S_0 − X, T, σ_30).

The article reported PSECMAC as the best-performing model among several CMAC-like systems. It is also reported that most of the CMAC models produce better results than the Black-Scholes model in the sense of RMSE, though for American options Black-Scholes is not a good benchmark because it is sensitive to the details of the calculation. This work further constructed an arbitrage system based on the pricing model. Positions are adjusted according to the Delta hedging ratio. Experiments suggested the model has a marginally positive ROI, omitting transaction costs.

B. Adaptive Control

The application of CMAC on commercial devices may be even more advantageous. The pressure to reduce cost and the demand for embedded controllers make CMAC-like models a good choice. Recently, studies have been carried out on the adaptive control of disabled wheelchairs and seatless unicycles [12]. This type of problem can be formalized as controlling a set of variables within a certain range (e.g. speed and balance angle) in the presence of a set of unknown and varying variables (e.g. friction with the ground and weight of the rider). Li et al. [12] proposed a TSK-type FCMAC to synthesize the equations of adaptive steering control. The back-stepping error is associated with the torque τ, which can simultaneously act on the critical moments:

θ̈ = Ā(θ, θ̇) + B̄(θ)(τ − µφ̇ − c sgn(φ̇)),
φ̈ = C̄(θ, θ̇) + D̄(θ)(τ − µφ̇ − c sgn(φ̇)).

As the output adapts to the moment parameters, the balance angle is controlled near zero. The performance of the FCMAC is benchmarked against a linear-quadratic regulator (LQR) applied to the differential equations describing the state of the motor. Simulations suggest that the LQR is not able to make the speed and balance angle converge, while the FCMAC provides satisfactory results.

V. DISCUSSIONS

Referring to Section IV, CMAC has proved to be effective in many classic control problems and has been applied to emerging engineering problems. However, this model seems to have encountered a bottleneck because of the lack of fundamental breakthroughs during the past decade.
Nowadays, the issues discussed mainly focus on incremental modifications of the memory structure and the learning algorithm. The framework of error propagation, or minimization of a loss function, is kept unchanged. According to my understanding, the limitation of current CMAC models can be ascribed to their oversimplification of the cerebellum structure. Therefore, the next generation of cerebellar models may adopt new discoveries from neuroscience. For example, the associative memory cells may take different roles rather than being treated identically. In fact, anatomical models usually feature several types of elementary cerebellar processing units. Meanwhile, the theory of Spike Timing Dependent Plasticity (STDP) suggests that the learning process of firing neurons may be ordered [6]. This feature can introduce far more complexity into the current learning algorithm.

REFERENCES

[1] J. S. Albus, "A Theory of Cerebellar Function," Mathematical Biosciences, pp. 25-61, 1971.
[2] J. S. Albus, "A New Approach to Manipulator Control: the Cerebellar Model Articulation Controller (CMAC)," Trans. ASME, Series G. Journal of Dynamic Systems, Measurement and Control, 97, pp. 220-233, 1975.
[3] J. S. Albus, "Mechanisms of Planning and Problem Solving in the Brain," Mathematical Biosciences, 45, pp. 247-293, 1979.
[4] P. C. E. An, W.T. Miller, and P.C. Parks, "Design Improvements in Associative Memories for Cerebellar Model Articulation Controllers," in Proceedings of ICANN, pp. 1207-10, 1991.
[5] K. K. Ang and C. Quek, "Improved MCMAC with momentum, neighborhood, and averaged trapezoidal output," IEEE Transactions on Systems, Man and Cybernetics: Part B, 30(3), pp. 491-590, 2000.
[6] N. Caporale and Y. Dan, "Spike Timing-Dependent Plasticity: A Hebbian Learning Rule," Annual Review of Neuroscience, 31(1), pp. 25-46, 2008.
[7] P. Duan and H. Shao, "CMAC Neural Network based Neural Computation and Neuro-control," Information and Control (in Chinese), 28(3), 1999.
[8] H. He, Z. Zhu, A. Tiwari, and A. Mills, "A Cascade of Linguistic CMAC Neural Networks for Decision Making," in International Joint Conference on Neural Networks (IJCNN), 2015.
[9] G. Horváth and T. Szabó, "Kernel CMAC with improved capability," IEEE Transactions on Systems, Man and Cybernetics: Part B, 37(1), pp. 124-138, 2007.
[10] K.-L. Huang, S.-C. Hsieh, and H.-C. Fu, "Cascade-CMAC neural network applications on the color scanner to printer calibration," in International Conference on Neural Networks, 1997.
[11] S. H. Lane, D. A. Handelman, and J. J. Gelfand, "Theory and development of higher-order CMAC neural networks," IEEE Control Systems, April, pp. 23-30, 1992.
[12] Y.-Y. Li, C.-C. Tsai, F.-C. Tai and H.-S. Yap, "Adaptive steering control using fuzzy CMAC for electric seatless unicycles," IEEE International Conference on Control & Automation, pp. 556-561, 2014.
[13] C. C. Lin and F. C. Chen, "On a new CMAC control scheme, and its comparisons with the PID controllers," Proceedings of the American Control Conference, pp. 769-774, 2001.
[14] C.-M. Lin and Y.-F. Peng, "Adaptive CMAC-based supervisory control for uncertain nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B, 34(2), pp. 1248-60, 2004.
[15] C.-M. Lin and H.-Y. Li, "Self-organizing adaptive wavelet CMAC backstepping control system design for nonlinear chaotic systems," Nonlinear Analysis: Real World Applications, 14(1), pp. 206-223, 2013.
[16] H.-C. Lu, M.-F. Yeh, and J.-C. Chang, "CMAC Study with Adaptive Quantization," IEEE Intl. Conf.
on Systems, Man, and Cybernetics, Taipei, pp. 2596–2601, 2006. [17] C. J. B. Macnab, “Using RBFs in a CMAC to prevent parameter drift in adaptive control,” Neurocomputing, pp. 45–52, 2016. [18] M. D. Mauk and N. H. Donegan, “A Model of Pavlovian Eyelid Conditioning Based on The Synaptic Organization of Cerebellum,” Learn. Mem., 3, pp. 130-158, 1997. [19] A. Menozzi and M.-Y. Chow, “On the training of a multi-resolution CMAC neural network,” in Proceedings of the IEEE International Symposium on Industrial Electronics, 1997. [20] K. Mohajeri, G. Pishehvar, and M. Seifi, “CMAC neural networks structures,” in IEEE International Symposium on Computational Intelligence in Robotics and Automation, 2009. [21] K. Mohajeri, M. Zakizadeh, B. Moaveni, and M. Teshnehlab, “Fuzzy CMAC structures,” in IEEE International Conference on Fuzzy Systems, 2009. [22] Y.-F. Peng, R.-J. Wai and C.-M. Lin, “Adaptive CMAC model reference control system for linear piezoelectric ceramic motor,” in IEEE International Symposium on Computational Intelligence in Robotics and Automation, 2003. [23] F. Rosenblatt, “Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms,” Spartan Books, Washington DC, 1961. [24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Internal Representations by Error Propagation,” Nature, 323(9), 1986. [25] R. L. Smith, “Intelligent Motion Control with an Artificial Cerebellum,” PhD Thesis of The University of Auckland, New Zealand, Chapter 3, 1998. [26] S.-F. Su, T. Tao, and T.-H. Hung, “Credit assigned CMAC and its application to online learning robust controllers,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 33(2), 2003. [27] R. S. Sutton, “Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding,” Advances in Neural Information Processing Systems, pp. 1038–1044, MIT Press, 1996. [28] S. D. Teddy, E. M.-K. Lai, and C. Quek, “Hierarchically Clustered Adaptive Quantization CMAC and Its Learning Convergence,” IEEE TRANSACTIONS ON NEURAL NETWORKS, 18(6), 2007. [29] S. D. Teddy, C. Quek, and E. M.-K. Lai, “A cerebellar associative memory approach to option pricing and arbitrage trading,” Neurocomputing, 19(4), 2008. [30] S. D. Teddy, C. Quek, and E. M.-K. Lai, “PSECMAC: A Novel SelfOrganizing Multiresolution Associative Memory Architecture,” IEEE Trans. Neural Networks, 19(4), pp. 689–712, 2008. [31] S. D. Teddy and S. K. Ng, “Forecasting ATM cash demands using a local learning model of cerebellar associative memory network,” International Journal of Forecasting, 27(3), pp. 760–776, 2011. [32] C.-K. Tham, “A hierarchical CMAC architecture for context dependent function approximation,” in IEEE International Conference on Neural Networks, 1996. [33] K. Tian, B. Guo, G. Liu, I. Mitchell, D. Cheng, and W. Zhao, “KCMAC-BYY: Kernel CMAC using Bayesian YingYang learning,” Neurocomputing, 101, pp. 24-31, 2013. [34] H. Voicu, “The Cerebellum: An Incomplete Multilayer Perceptron?,” Neurocomputing, 72, pp. 592-599, 2008. [35] Z.-Q. Wang, J. L. Shiano, and M. Ginsberg, “Hash-coding in CMAC Neural Networks,” in IEEE International Conference on Neural Networks, Washington, pp. 1698-1703, 1996. [36] S. Whiteson, M. E. Taylor, and P. Stone “Adaptive Tile Coding for Value Function Approximation,” AI Technical Report AI-TR-07-339, University of Texas at Austin, 2007. [37] Y.-F. , “CMAC Learning is Governed by a Single Parameter,” in IEEE International Conference on Neural Networks, San Francisco, pp. 1439-43, 1993. [38] F. Z. 
Xing, E. Cambria and X. Zou, “Predicting Evolving Chaotic Time Series with Fuzzy Neural Networks,” in International Joint Conference on Neural Networks (IJCNN), 2017. [39] W. J. Zhou, D. L. Maskell and C. Quek, “FIE-FCMAC: A novel fuzzy cerebellum model articulation controller (FCMAC) using fuzzy interpolation and extrapolation technique,” in International Joint Conference on Neural Networks (IJCNN), 2015.
Differential Evolution for Quantum Robust Control: Algorithm, Applications and Experiments

Daoyi Dong, Xi Xing, Hailan Ma, Chunlin Chen, Zhixin Liu, Herschel Rabitz

This work was supported in part by the National Natural Science Foundation of China (No. 61374092), the Australian Research Council's Discovery Projects funding scheme under Project DP130101658, NSF (CHE-1464569) and ARO (W911NF-16-1-0014).
D. Dong is with the School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2600, Australia, and the Department of Chemistry, Princeton University, Princeton, NJ 08544, USA (email: [email protected]).
X. Xing is with the Department of Chemistry, Princeton University, Princeton, NJ 08544, USA (email: [email protected]).
H. Ma and C. Chen are with the Department of Control and Systems Engineering, School of Management and Engineering, Nanjing University, Nanjing 210093, China (e-mail: [email protected]).
Z.X. Liu is with the Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]).
H. Rabitz is with the Department of Chemistry, Princeton University, Princeton, NJ 08544, USA (email: [email protected]).

Abstract—Robust control design for quantum systems has been recognized as a key task in quantum information technology, molecular chemistry and atomic physics. In this paper, an improved differential evolution algorithm, msMS DE, is proposed to search for robust fields for various quantum control problems. In msMS DE, multiple samples are used for fitness evaluation and a mixed strategy is employed for the mutation operation. In particular, the msMS DE algorithm is applied to the control problem of open inhomogeneous quantum ensembles and the consensus problem of a quantum network with uncertainties. Numerical results are presented to demonstrate the excellent performance of the improved DE algorithm for these two classes of quantum robust control problems. Furthermore, msMS DE is experimentally implemented on femtosecond laser control systems to generate good two-photon absorption signals and to control the fragmentation of the halomethane molecule CH2BrI. Experimental results demonstrate the excellent performance of msMS DE in searching for effective femtosecond laser pulses for various tasks.

I. INTRODUCTION

Controlling quantum systems has become a fundamental task in promising quantum technology [1], and it is relevant to many emerging areas such as atomic physics, molecular chemistry, and quantum information science [2], [3], [4]. Some control methods, such as optimal control theory [1], learning control algorithms [2] and Lyapunov control approaches [5], have been developed for quantum systems. Among these methods, learning control is a powerful approach for many complex quantum control tasks [6] and has achieved great success in laser control of molecules since the method was presented in the seminal paper [7]. Many quantum learning control problems can be formulated as an optimization problem in which a learning algorithm is employed to search for an optimal control field achieving the desired performance. Gradient algorithms have been demonstrated to be a good candidate for numerically finding an optimal field due to their high efficiency [8]. In many practical applications, the gradient information may not be easy to obtain and some complex quantum control problems may have local optima.
For these situations, it is necessary to employ stochastic searching algorithms to find a good control field. The genetic algorithm (GA) has been widely used in the area of quantum control and has achieved success in closed-loop learning control of molecular systems in the laboratory [2]. In this paper, we focus on robust control problems of quantum systems and explore the use of differential evolution (DE) [9] for searching for robust control fields. DE is a competitive form of evolutionary computation and has shown great performance for many complex optimization problems [10]. Recently, it has also been used for solving quantum control problems [11], [12]. For example, Zahedinejad et al. [11], [13] proposed a subspace-selective self-adaptive differential evolution (SUSSADE) algorithm to achieve a high-fidelity single-shot Toffoli gate and single-shot three-qubit gates. DE methods have been adopted to fulfil a desired state transition by designing optimal control fields for an open quantum ensemble [14]. Zahedinejad et al. [15] investigated several promising evolutionary algorithms and found that DE outperformed GA and particle swarm optimization (PSO) for hard quantum control. In this paper, we employ DE algorithms to solve several classes of quantum robust control problems. The robustness of quantum systems has been recognized as an essential requirement for the development of practical quantum technology, and robust control methods could provide enhanced robustness performance for quantum systems [16], [17]. In particular, we propose an msMS DE (multiple-samples and Mixed-Strategy DE) algorithm in which a mixed strategy and an average performance over multiple samples are employed. msMS DE is used for three classes of quantum robust control problems: control of inhomogeneous open quantum ensembles, consensus of quantum networks with uncertainties, and experimental fragmentation control using femtosecond laser pulses [18].

A quantum ensemble consists of a large number of single quantum systems (e.g., spin systems or molecules) and every individual system is referred to as a member of the ensemble [19]. In practical applications, the members of a quantum ensemble could have variations in the parameters characterizing the system dynamics. Such a quantum ensemble is called an inhomogeneous quantum ensemble [19], [20]. Inhomogeneous quantum ensembles have wide applications in fields ranging from long-distance quantum communication to magnetic-resonance imaging [21], [22]. Hence, it is highly desirable to design control laws that can steer an inhomogeneous ensemble from an initial state to a desired state when variations exist in the system parameters. Some results have been presented for controllability analysis and control design of inhomogeneous quantum ensembles. For example, Li and co-workers [20], [21] presented a series of results to analyze controllability and design optimal control laws for inhomogeneous spin ensembles. A Lyapunov control design approach has been proposed to asymptotically stabilize a spin ensemble around a uniform state of spin +1/2 or −1/2 [23]. Chen et al. [19] presented a sampling-based learning control method to achieve high-fidelity control of inhomogeneous quantum ensembles. In these results, decoherence and dissipation were usually not considered.
The existence of decoherence and dissipation may irreversibly lead a quantum ensemble to become an open system [24], and the manipulation of inhomogeneous open quantum ensembles is more challenging than the case where decoherence is not considered. For an inhomogeneous quantum ensemble, we cannot employ different control fields to control individual members. A practical solution is to find robust control fields that can drive all of the members in the ensemble into a given target state. In this paper, we employ the msMS DE algorithm to search for such robust control fields for inhomogeneous open quantum ensembles, with the aim of achieving enhanced control performance.

Another problem under consideration is to drive a quantum network into a consensus state even in the presence of uncertainties. Achieving consensus is one of the primary objectives in distributed control and coordination of classical (non-quantum) networked systems [25]. Consensus usually means that all of the nodes in a network hold the same state. Recent developments in quantum technology have made it significant and feasible to analyze quantum networks where each node (agent) represents a quantum system such as a photon, an electron, a spin system or a superconducting quantum bit (qubit). Consensus of quantum networks may have potential applications in promising quantum communication networks, distributed quantum computation and one-way quantum computation [26], [27]. Since the nodes in a quantum network are described by quantum mechanics, some unique characteristics such as quantum entanglement and measurement backaction should be carefully considered, and the analysis and control of quantum networks raise new challenges. Some results have been presented for the consensus problem of quantum networks. For example, Sepulchre et al. [28] generalized consensus algorithms to noncommutative spaces and analyzed the asymptotic convergence to the consensus state of a fully mixed state. Ticozzi and co-workers [26], [29], [30] presented a series of results on consensus of quantum networks, including several different definitions of quantum consensus, quantum gossip algorithms, and quantum consensus results within a group-theoretic framework. Shi et al. [27] presented a systematic investigation of consensus of quantum networks with continuous-time dynamics within the framework of graph theory. In this paper, we consider the basic problem of finding a robust control law to steer a quantum network to a reduced state consensus (as defined in [26]) and do not consider distributed solutions for achieving quantum consensus. In particular, we employ the proposed msMS DE for driving a superconducting qubit network with uncertainties into a reduced state consensus.

Femtosecond (fs) (1 fs = 10^-15 s) lasers [18] have found wide applications in controlling molecular dynamics because of their short pulse duration, which is comparable to the time scales of the electronic and nuclear motions of a molecule. The temporal structure of a femtosecond pulse can be manipulated by pulse shaping techniques [18], typically achieved by modulating the phase and/or amplitude of the laser frequency components, spatially separated in the Fourier plane, with a computer-programmable spatial light modulator (SLM) before recombination into a "shaped pulse". The advances in both fs lasers and pulse shaping techniques have spurred recent interest in laser-selective chemistry and quantum control using shaped fs laser pulses.
In quantum control experiments, a practical approach is to use closed-loop learning control [2] to find an optimal field that can steer the quantum system towards the desired outcome, which has achieved many successes. An evolutionary algorithm (e.g., GA) is often employed to assist the search for an optimal pulse. To the best of our knowledge, no experimental results have been reported where DE algorithms are used on femtosecond laser control systems, and few quantum control experiments using femtosecond laser pulses have investigated robustness subject to variations in the control. In this work, we apply the msMS DE algorithm to an experimental quantum control problem, where the goal is to identify a robust solution (a shaped fs laser pulse) that can maximize the CH2Br+/CH2I+ product ratio from the fragmentation of the CH2BrI molecule in time-of-flight mass spectrometry (TOF-MS). The msMS DE algorithm is also used to identify the transform-limited (TL) pulse by optimizing the two-photon absorption (TPA) signal, which is carried out prior to the fragmentation experiment.

The main contributions of this paper are summarized as follows:

• Motivated by solving three classes of quantum robust control problems, an improved DE algorithm, msMS DE, is proposed in which an average fitness function over multiple samples is utilized and a mixed strategy of mutation is employed.

• The control problem of inhomogeneous open quantum ensembles is investigated and the msMS DE algorithm is used to learn robust control fields for inhomogeneous open quantum ensembles. Numerical results show that the control fields learned by msMS DE usually have improved robustness performance compared with those learned by basic DE and GA.

• The task of driving a quantum network with uncertainties to a consensus state is investigated and robust control fields can be found by employing the msMS DE algorithm. Numerical results demonstrate that msMS DE has excellent performance in searching for robust control fields for achieving consensus of quantum networks.

• Several experiments are implemented on femtosecond laser control systems where the msMS DE algorithm is employed to find effective femtosecond laser pulses for generating an excellent TPA signal and achieving good fragmentation control of molecules. These experiments present the first results of testing DE for femtosecond laser quantum control systems and also realize the sampling-based learning control method [19], [31], [32] in the area of quantum control.

The paper is organized as follows. Section II formulates the three classes of quantum robust control problems under consideration. Section III briefly introduces basic DE and then presents a systematic description of the msMS DE algorithm. Numerical results on quantum ensemble control and quantum network consensus are provided in Section IV. In Section V, we present experimental results on femtosecond laser control systems. Conclusions and discussions are given in Section VI.

II. THREE CLASSES OF QUANTUM CONTROL PROBLEMS

For an inhomogeneous open quantum ensemble, the Hamiltonian can be described in the form

H_Θ(t) = g_0(θ_0) H_0 + Σ_{j=1}^{M} g_j(θ_j) u_j(t) H_j.

Let Θ = (θ_0, θ_1, ..., θ_M) and let the functions g_j(θ_j) (j = 0, 1, ..., M) characterize possible inhomogeneities. For example, g_0(θ_0) corresponds to inhomogeneity in the free Hamiltonian (e.g., due to chemical shift in NMR).
g j (θ j ) ( j = 1, ..., M) can characterize possible multiplicative noises in the control fields or imprecise parameters in the dipole approximation. We assume that g j (θ j ) ( j = 0, 1, ..., M) are continuous functions of θ j and the parameters θ j could be time-dependent and θ j ∈ [1 − E j , 1 + E j ]. For simplicity, we assume that g j (θ j ) = θ j , the nominal value of θ j is 1 and E0 = ... = E j = ... = EM = E. For an open quantum system in (1), we may define a coherent vector as y := (tr(Ul ρ ), tr(U2 ρ ), ..., tr(Um ρ ))⊤ , where iU1 , iU2 , ... iUm (m = n2 − 1) are orthogonal generators of the special unitary group SU(n) with degree n . Its density operator can be written as: ρ= A. Control of inhomogeneous open quantum ensembles We consider an inhomogeneous quantum ensemble where the members of the ensemble could have variations in the parameters that characterize the system dynamics. For example, the spins of an ensemble in nuclear magneticresonance (NMR) experiments may encounter large dispersion in the strength of the applied radio frequency field (rf inhomogeneity) and there also exist variations in the natural frequencies of these spins (Larmor dispersion) [20]. Several methods have been presented to design control laws for inhomogeneous quantum ensembles when dissipation and decoherence were not considered [19], [21]. For a practical quantum ensemble, it may be unavoidable to interact with its environment and each member in the quantum ensemble should be dealt as an open quantum system. For an open quantum system, its state can be described by a positive Hermitian density operator ρ satisfying tr(ρ ) = 1. Under the assumption of a short environmental correlation time permitting the neglect of memory effects, a Markovian master equation for ρ (t) can be used to describe the dynamics of an open quantum system interacting with its environment [24]. Markovian master equations in the Lindblad form are described as [3], [33] i ρ̇ (t) = − [H(t), ρ (t)] + ∑ γk D[Lk ]ρ (t), h̄ k I 1 m + ∑ yl Ul . n 2 l=1 (3) Substituting (3) into (1), we can obtain the evolution of the coherent vector y as: M ẏ = (LH0 + LD )y + ∑ u j LH j y + l0 , (4) j=1 where the superoperators LH0 , LD , LH j ( j = 1, 2, ..., M) and the term l0 are explained in detail in [34], [35]. We choose the objective function to be maximized as follows [34]: J(u) = 1 − n k y f − y(T ) k2 , 8(n − 1) (5) where k x k2 = xT x is a vector norm and it is clear that J(u) ∈ [0, 1]. And y f and y(T ) are the target state and the final state of the quantum system in terms of coherent vector, respectively. The control problem of an inhomogeneous open quantum ensemble can be formulated as: max J(u) := max E[JΘ (u)] u u s.t. (1) √ where i = −1, H(t) is the system Hamiltonian, h̄ is reduced Planck constant (we set h̄ = 1 in this paper), the non-negative coefficient γk specify the relevant relaxation rates, Lk are appropriate Lindblad operators and 1 1 D[Lk ]ρ = (Lk ρ L†k − L†k Lk ρ − ρ L†k Lk ). 2 2 (2) j=1      ẏΘ (t) = (θ0 LH0 + LD + θ j M ∑ u j (t)LH j )yΘ (t) + l0 j=1  yΘ (0) = y0 , t ∈ [0, T ]     θ j ∈ [1 − E, 1 + E], j = 0, 1, ...M (6) where E[JΘ (u)] denotes the average performance function regarding parameter inhomogeneities Θ . B. Consensus in quantum networks The problem can be formulated as follows: Achieving quantum consensus is a primary objective in the investigation of quantum networks. Existing results presented some distributed solutions to quantum consensus problems. For example, Mazzarella et al. 
[26] proposed a quantum gossip iteration algorithm where discrete-time quantum swapping operations between arbitrary two nodes are used to make a quantum network achieving consensus. A graphical method has been developed in [27] to build the connection between quantum consensus and its classical counterpart, and asymptotical convergence results on achieving a consensus state have been presented for a class of quantum networks with continuous-time Markovian dynamics. Here, we would not intend to develop a distributed algorithm for quantum consensus. In contrast, we consider how to design a robust control field to drive a quantum network from an initial state into a consensus state with high fidelity when uncertainties or inaccuracies may exist in the system dynamics. We consider the type of Reduced State Consensus that was defined in [26]. We denote H a Hilbert space, A ⊗ B returns the tensor product of A and B, and |ai with Dirac representation is a vector in H , i.e., |ai ∈ H [4]. In order to present the definition of reduced state consensus, we need to use the concept of partial trace defined as follows. Definition 1 (Partial trace): [4] Let HA and HB be the state spaces of two quantum systems A and B, respectively. Their composite system is described as a density operator ρ AB. The partial trace over system B, denoted as TrHB is given in the following form TrHB (|a1 iha2 | ⊗ |b1ihb2 |) = |a1 iha2 |Tr(|b1 ihb2 |), (7) where the vectors |a1 i, |a2 i ∈ HA , and the vectors |b1 i, |b2 i ∈ HB . When the composite system is in the state ρ AB , the reduced density operator for system A is defined as ρA = TrHB (ρ AB ) and the reduced density operator for system B is defined as ρB = TrHA (ρ AB ). Reduced state consensus for a quantum network can be defined as follows. Definition 2 (Reduced State Consensus): [26] A quantum network consisting of m nodes with the state ρ̄ is in Reduced State Consensus (RSC) if ρ̄1 = ρ̄2 = ... = ρ̄m , where ρ̄ j = Tr⊗k6= j Hk (ρ̄ ) ( j = 1, 2, ..., m) are defined as the reduced density operator for node j and can be calculated according to Definition 1. We aim to steer a quantum network into a consensus state defined in Definition 2. In practical applications, the existence of noise (including extrinsic and intrinsic), inaccuracies (e.g., inaccurate operation in the coupling between nodes) and fluctuations (e.g., fluctuations in control fields) in quantum networks is unavoidable. We assume that the Hamiltonian with uncertainties can be written into M HΘ (t) = θ0 H0 + ∑ θ j u j (t)H j . j=1 (8) max J(u) := max E[JΘ (u, ρ̄ )] u u s.t. " #  M     ρ̇ (t) = −i θ0 H0 + ∑ θ j u j (t)H j , ρ (t) (9) j=1  ρ (0) = ρ 0 , t ∈ [0, T ]     θ j ∈ [1 − E, 1 + E], j = 0, 1, 2, ...M where E[JΘ (u, ρ̄ ] denotes the average performance function with respect to the parameter fluctuations Θ and the target consensus state ρ̄ , and E ∈ [0, 1] are the bounds of the parameter uncertainties. C. Femtosecond laser quantum control We consider the experimental control of molecular fragmentation using shaped femtosecond laser pulses. Here, CH2 BrI is chosen as the target molecule. As a family member of Halomethane molecules, whose dissociative product plays a central role in ozone depletion, CH2 BrI has attracted wide attention because of its importance in environmental chemistry. In addition, it is one of the simplest prototype molecules containing different bonds, a stronger C-Br bond and a weaker C-I bond, which is ideal for the study of controlling the selective bond-breaking. 
Under strong fields with femtosecond laser pulses, CH2 BrI molecules will undergo ionization and dissociation, and their charged products can be separated and detected with a TOF-MS. In particular, we choose to optimize the photoproduct ratio of CH2 Br+ /CH2 I+ as our control objective, which corresponds to breaking the strong bond versus the weak bond. We apply closed-loop learning control, using our proposed msms DE algorithm, to search for a robust ultrafast laser pulse that maximizes this ratio. In closed-loop learning control, the learning process can be conceptually demonstrated as follows. First, one applies trial input pulses to the molecules to be controlled and observes the results. Second, a learning algorithm suggests better control inputs based on the prior experiments. Third, one applies “better” control inputs to new molecules [2]. This approach has been employed to explore the quantum control landscape [36] to find the optimal control strategy where the control performance function J(u)reaches its maximum. In order to achieve good performance, we first need to identify a reference phase mask on the SLM that give shortest transform limited (TL) pulse, which can be obtained from optimizing the signal of two-photon absorption (TPA). We first use the proposed msms DE algorithm to search for a good control to obtain highest TPA signal. Then we apply the same algorithm to search for a good control for the fragment ratio of CH2 Br+ /CH2 I+ . The consideration of robustness with multiple samples (MS) in DE would also ensure good transferability of the experimental results or photonic reagents [37]. That is, an optimal pulse identified from one laser system would also perform well (if not optimal) when transferred to another system despite the minor differences or uncertainties lying in the control parameters (i.e., the spectral phases on the SLM). III. D IFFERENTIAL EVOLUTION AND msMS DE ALGORITHM DE was initially proposed in 1990s by Storn and Price [9], [39] and has derived many variants. DE algorithms have witnessed wide applications in diverse domains of science and engineering [10], [38]. In this section, we first briefly introduce the conventional DE algorithm, analyze DE with variant strategies and control parameters, and then propose an improved DE algorithm of msMS DE for these quantum robust control problems presented in Section II. A. DE Algorithm In DE, the individual trial solutions (which constitute a population) are termed as parameter vectors or genomes, usually represented in a vector X = [x1 , x2 , · · · , xD ]T where each parameter xi is a real number. Solving an optimization problem using DE is basically a search for a parameter vector to minimize or maximize a fitness function (or objective function) f (X). The following operations are used to evolve a population of D-dimensional parameter vectors until a “best” individual is generated and found. (a) Initialization. DE searches for a global optimum point in a D-dimensional real parameter space ℜD . Here, we denote the population at the current generation as j j j Xi,G = (x1i,G , · · · , xD i,G ), i = 1, ..., NP and let xi,G ∈ [xmin , xmax ], ( j = 1, 2, ..., D) since these parameters are usually related to physical variables with relevant bounds. We usually initialize the population (at G = 0) as follows [43]: j j j j xi,0 = xmin + rand(0, 1) · (xmax − xmin ), j = 1, 2, ..., D, (10) where rand(0, 1) is a uniformly distributed random number. (b) Mutation. 
In DE, the key to “mutation” is to generate a difference vector by choosing three other distinct parameter vectors from the current generation (say, Xr1 , Xr2 , Xr3 ). The indices r1 , r2 , r3 ∈ {1, ..., NP} are mutually exclusive integers randomly generated within the range [1, NP] and r1 , r2 , r3 6= i. The donor vector Vi,G+1 are generated by Vi,G = Xr1 ,G + F · (Xr2 ,G − Xr3 ,G ), (11) where the scaling factor F is a positive control parameter typically in the interval [0.4, 1]. (c) Crossover. To enhance the potential diversity of the population, a crossover operation comes into play after the mutation. The DE family has two types of crossover operations (i.e., exponential and binomial). The binomial (uniform) crossover is outlined as ( j vi,G , if rand( j) ≤ CR or j = rand(1, D), j ui,G = (12) j xi,G , if rand( j) > CR and j 6= rand(1, D), where j = 1, 2, ..., D and rand( j) ∈ [0, 1] is a uniform random number. CR is a user-specified constant within the range [0, 1), rand(1, D) ∈ {1, 2, ..., D} is a randomly chosen index. (d) Selection. To keep the population size constant over subsequent generations, DE uses the following selection operation to determine whether the target vector or the trial vector survives to the next generation: ( Ui,G , if f (Ui,G ) ≥ f (Xi,G ), Xi,G+1 = (13) Xi,G , otherwise. If the new trial vector yields an equal or lower value of the objective function, it replaces the corresponding target vector in the next generation; otherwise the target vector survives. Usually, it is the mutation operation that demarcates one DE scheme from another. The DE strategy with the mutation in (11) is referred to as “DE/rand/1” using the notation “DE/x/y”, where x represents a string denoting the base vector to be perturbed, y is the number of difference vector considered for perturbation of x. Furthermore, when crossover is also considered, the notation “DE/x/y/z” is used, where z stands for the type of crossover (bin: binomial, exp: exponential). DE variants with different mutation strategies usually have different performance for solving optimization problems. “DE/rand/1/bin” is a commonly used strategy and it usually shows slow convergence speed and strong exploration capacity. The strategies relying on the best solution found so far such as “DE/rand-to-best/1/bin”, “DE/best/2/bin”, usually have a fast convergence speed and perform well when solving unimodal problems. However, they are more likely to get stuck at a local optimum and lead to premature convergence for multimodal problems. Two-difference-vectors-based strategies may result in better perturbations than one-difference-vector-based strategies. For control parameters of DE, Storn and Price [39] have proposed that a good initial choice of F was 0.5 and the range of F is usually set [0.4, 1]. The crossover rate CR may be CR ∈ [0, 1]. Several results also proposed the techniques of self-adaptation to automatically find an optimal set of control parameters [10], [40] to provide improved performance. B. msMS DE algorithm When implementing DE, users need to determine the appropriate mutation-trial strategies and parameter settings to ensure the success of the algorithm [40]. It is a highcost practice to perform a trial-and-error search for the most appropriate trial vector generation strategy and fine-tune its associated control parameter values, i.e., the values of CR, F for a given problem. 
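To make the basic scheme concrete, the following is a small Python sketch of one DE/rand/1/bin generation, combining the initialization (10), the mutation (11), the binomial crossover (12) and the greedy selection (13) for a maximization problem. The use of numpy and all function names are illustrative choices, not part of the original description.

```python
import numpy as np

rng = np.random.default_rng(0)

def initialize_population(NP, D, x_min, x_max):
    """Eq. (10): uniform initialization of NP parameter vectors in the box [x_min, x_max]^D."""
    return x_min + rng.random((NP, D)) * (x_max - x_min)

def de_rand_1_bin_generation(pop, fitness, f_obj, F=0.5, CR=0.9):
    """One generation of DE/rand/1/bin for maximizing f_obj."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])            # donor vector, eq. (11)
        cross = rng.random(D) <= CR
        cross[rng.integers(D)] = True                    # index j = rand(1, D) always taken from v
        u = np.where(cross, v, pop[i])                   # trial vector, eq. (12)
        fu = f_obj(u)
        if fu >= fitness[i]:                             # greedy selection, eq. (13)
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```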
Moreover, during different stages of evolution, different trial vector generation strategies coupled with specific control parameter values can be more effective and single strategy may result in premature convergence thus leading to a failure in complex problems such as nonseparable and multimodal functions [10], [41], [42]. Also, several variants of DE utilizing the idea of mixed strategies such as SaDE [40] and EPDE [43] have been proposed and exhibited good performance. Our numerical results show that DE with a single strategy might be enough for easy problems while DE variants with mixed strategies might emerge as a promising candidate for quantum control problems with multimodal landscapes. Existing results of sampling-based learning control [19], [31], [44] have shown that the employment of an average objective function with multiple samples can provide improved performance for quantum robust control problems. Inspired by these observations, we adopt a mixed strategy and an average performance of multiple samples to present an improved DE algorithm (i.e., msMS DE) for these quantum robust control problems outlined in Section II. We first choose one mutation scheme from a pool of strategy candidates where several mutation schemes with effective yet diverse characteristics are equally distributed. Subsequently, a binomial crossover operation is performed on the corresponding mutant vector to generate the trial vector. Note that we assign various values of F and CR for each individual during the current generation to increase the diversity of the population. To construct the candidate pool, we investigate several commonly used mutation strategies [42] and select four strategies with distinct capabilities at different stages of evolution as follows: DE/rand/1: Vi = Xr1 + F · (Xr2 − Xr3 ). (14) DE/rand to best/2: Vi = Xi + F · (Xbest − Xi ) + F · (Xr1 − Xr2 ) + F · (Xr3 − Xr4 ). (15) DE/rand/2: Vi = Xr1 + F · (Xr2 − Xr3 ) + F · (Xr4 − Xr5 ). (16) DE/current-to-rand/1: Vi = Xi + K · (Xr1 − Xi ) + F · (Xr2 − Xr3 ). (17) The indices r1 , r2 , r3 , r4 and r5 are mutually exclusive integers randomly chosen from the range [1, NP] and all of them are different from the index i. Xbest is the best individual vector with the best fitness (i.e., the lowest objective function value for a minimization problem) in the population. The control parameter K in the strategy DE/current-to-rand/1 is set as K = 0.5 to eliminate one additional parameter. As for the crossover operation, the first three mutation schemes are combined with a binomial crossover operation, while the fourth scheme directly generates trial vectors without crossover. In the proposed msMS DE algorithm, the parameter F is approximated by a normal distribution with mean value 0.5 and standard deviation 0.3, denoted by N(0.5, 0.3). It is easy to verify that values of F fall into he range [−0.4, 1.4] with probability of 0.997 which helps maintain both exploitation (with small F values) and exploration (with larger F values). Similarly, we assume CR obeys a normal distribution denoted by N(0.5,0.1) and the small standard deviation 0.1 is enough to guarantee that most values of CR lies in [0, 1] [40]. Consequently, a set of F and CR values are randomly sampled from normal distribution (denoted by Normrnd) and applied to each target vector in the current population. We may obtain some extraordinary values far from [-0.4,1.4] for scale factor F and we usually accept them to increase diversity. 
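The sketch below illustrates how the per-individual control parameters and the mixed mutation strategy described above could be drawn in code: F ~ N(0.5, 0.3) is accepted as drawn even when it falls far from [-0.4, 1.4], CR ~ N(0.5, 0.1) is resampled into [0, 1] (as described next), and one of the four strategies (14)-(17) is selected uniformly at random. This is an illustrative fragment, not the authors' implementation, and the helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_F_CR():
    """Per-individual parameters: F ~ N(0.5, 0.3) (kept as drawn),
    CR ~ N(0.5, 0.1) resampled until it lies in [0, 1]."""
    F = rng.normal(0.5, 0.3)
    CR = rng.normal(0.5, 0.1)
    while not (0.0 <= CR <= 1.0):
        CR = rng.normal(0.5, 0.1)
    return F, CR

def mixed_mutation(pop, i, best, F, K=0.5):
    """Draw one of the strategies (14)-(17) uniformly at random.
    Returns (donor vector, needs_crossover); strategy (17) skips binomial crossover."""
    NP = pop.shape[0]
    r = rng.choice([k for k in range(NP) if k != i], size=5, replace=False)
    p = rng.random()
    if p <= 0.25:    # DE/rand/1, eq. (14)
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]), True
    if p <= 0.50:    # DE/rand-to-best/2, eq. (15)
        return (pop[i] + F * (best - pop[i])
                + F * (pop[r[0]] - pop[r[1]]) + F * (pop[r[2]] - pop[r[3]])), True
    if p <= 0.75:    # DE/rand/2, eq. (16)
        return pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]]), True
    # DE/current-to-rand/1, eq. (17), used without crossover
    return pop[i] + K * (pop[r[0]] - pop[i]) + F * (pop[r[1]] - pop[r[2]]), False
```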
While the crossover rate has probabilistic meaning for the chance of survival, we should abandon those fall outside [0, 1], and generate another valid parameter by CR = N(0.5, 0.1) to guarantee the practical meaning of crossover. The msMS DE method is proposed for three classes of quantum control tasks. In order to design appropriate control laws to achieve good robustness performance, we integrate the idea of sampling-based learning control [19] into the msMS DE algorithm. To begin with, we prepare N samples Θk = (θ0 , θ1 , ..., θM ) (k = 1, 2, . . . , N) with different values of the uncertainty parameters. We compute the fitness values of these sample vectors . Then, we evaluate the average fitness value f¯ for these samples, and f¯ is defined as follows 1 N f¯(Ui,G ) = ∑ f (Ui,G , Θk ), N k=1 1 N f¯(Xi,G ) = ∑ f (Xi,G , Θk ). N k=1 The algorithmic description of the msMS DE is presented in Algorithm 1. Remark 1: In simulation, after we obtain the nominal value of an individual, we may generate the other samples by perturbing the nominal value and then calculate the average fitness function. In experiment, we need to run an experiment for each sample to measure the fitness of each sample, and then calculate the average fitness. Usually, a larger number of samples N may lead to a better robustness performance [19]. However, the computational or experimental time will significantly increase with the increase of N. In this paper, we use three samples for each uncertainty parameter for saving computational and experimental time. Remark 2: After performing the mutation operation, we obtain new donor vectors, and some of them might survive into the next generation and serve as parents. Therefore, we add a procedure where each vector is evaluated in view of boundary constraints. If any parameter of the vector falls beyond the pre-defined lower or upper bounds, we will replace it with a random value within the allowed range. Remark 3: In our proposed msMS DE, we preset a maximum generation Gmax as the termination criterion. During the implementation of the algorithm, the population evolves until the learning process reaches G = Gmax . In numerical examples, we let Gmax = 50000. In experimental examples, we choose Gmax = 150. IV. N UMERICAL RESULTS FOR ENSEMBLE CONTROL AND QUANTUM NETWORK CONSENSUS A. Control of open inhomogeneous two-level quantum ensembles We consider a specific inhomogeneous open two-level ensemble with inhomogeneous parameter bound E = 0.2. Members of the ensemble are governed by the following Hamiltonian: 1 H(t) = θ0 σz + θ1 u(t)(cos ϕσx + sin ϕσy ), (18) 2 Algorithm 1. 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: Algorithmic description of msMS DE Set the generation number G = 0 for i = 1 to NP do for j = 1 to D do j j j j xi,G = xmin + rand(0, 1) · (xmax − xmin) end for end for Initialize fitness f (Xi,G ) and evaluate vector by f Mark the best vector with maximum f as Xbest,G repeat (for each generation G = 1, 2, . . . , Gmax ) repeat (for each vector Xi , i = 1, 2, . . . 
, NP) Set parameter Fi,G = Normrnd(0.5, 0.3) Randomly generate a real number pp∈ [0, 1], if pp > 0 and pp ≤ 0.25 then flag=1 Vi,G = Xr1 ,G + Fi,G · (Xr2 ,G − Xr3 ,G ) end if if pp > 0.25 and pp ≤ 0.5 then flag=2 Vi,G = Xi,G + Fi,G · (Xbest,G − Xi,G ) + Fi,G · (Xr1 ,G − Xr2,G ) 18: +Fi,G · (Xr3 ,G − Xr4 ,G ) 19: end if 20: 21: if pp > 0.5 and pp ≤ 0.75 then flag=3 22: end if if pp > 0.75 and pp ≤ 1 then flag=4 23: 24: 25: 26: 27: 28: 29: 30: 31: 32: 33: 34: 35: 36: 37: 38: 39: 40: 41: 42: 43: 44: 45: 46: 47: 48: 49: 50: 51: 52: 53: Vi,G = Xr1 ,G + Fi,G · (Xr2 ,G − Xr3 ,G ) + Fi,G · (Xr4 ,G − Xr5 ,G ) Vi,G = Xi,G + K · (Xr1 ,G − Xi,G ) + Fi,G · (Xr2 ,G − Xr3,G ) end if for i = 1 to D do j j j j if vi,G > xmax or vi,G < xmin then j j j j vi,G = xmin + rand(0, 1) · (xmax − xmin ) end if end for Set parameter CRi,G = Normrnd(0.5, 0.1) while CRi,G < 0 or CRi,G > 1 do CRi,G = Normrnd(0.5, 0.1) end while if flag=1 or flag=2 or flag=3 then for j = 1 to D do j j ui,G = vi,G , if (rand[0, 1] ≤ CRi,G or j = jrand ) j j ui,G = xi,G , otherwise end for end if if flag=4 then Ui,G = Vi,G end if for each sample Θk , (k = 1, 2, ..., N) do evaluate the fitness function f (Ui,G , Θk ) end for Compute f¯(Ui,G ) = N1 ∑Nk=1 f (Ui,G , Θk ) if f¯(Ui,G ) ≥ f¯(Xi,G ) then Xi,G+1 ← Ui,G , f¯(Xi,G+1 ) ← f¯(Ui,G ). end if Renew the best vector Xbest,G and i ← i + 1 until i = NP G ← G+1 until G = Gmax where ϕ ∈ [0, 2π ], u(t) ∈ [−10, 10], and the Pauli operators are defined as:       0 1 0 −i 1 0 , σy = , σz = . (19) σx = 1 0 i 0 0 −1 Let γk = 1 and the Lindblad operators are given by [45]       0 0 0 0.2 0.2 0 L1 = , L2 = , L3 = . 0.1 0 0 0 0 0 (20) For a two-level quantum ensemble, U1 ,U2 and U3 in (3) can be chosen as σx , σy and σz . The coherent vector for the density matrix is the Bloch vector r = (x, y, z)T = (tr(σx ρ ), tr(σy ρ ), tr(σz ρ )). The dynamical equation for r can be written as     −0.045 −θ0 0 0  r(t) +  0  θ0 −0.045 0 ṙ(t) =  0 0 −0.05 0.03 0 0 −2 sin ϕ 0 0 2 cos ϕ  r(t). +θ1 u(t)  2 sin ϕ −2 cos ϕ 0 (21) The average fitness function is given as E[JΘ (u)] = 1 1 ∑[1 − 4 k r f − rθ0 ,θ1 (T ) k2], N∑ θ θ 0 (22) 1 where N is the total number of the chosen samples. An upper bound of the fitness function is 1 although we do not know the maximum that can be achieved. In msMS DE algorithm, we choose three samples for each parameter, and here we have N = 9. During learning control of the inhomogeneous quantum ensemble, we employ DE algorithms to seek for the optimal control u∗ (t). Then, we apply the optimal control field to additional samples with inhomogeneous parameters (θ0 , θ1 ) following uniform distributions within [0.8, 1.2] to test its performances. We assume that the initial state and the target state are, respectively,     1 0 0 0 ρ0 = , ρf = . (23) 0 0 0 1 The target time T = 10 and the time interval [0, T ] is equally divided into D = 200 time steps, and ∆t = 0.05. The population size is set as NP = 50 for all the algorithms in this example. The simulation is implemented on MATLAB platform (version 8.3.0.532). The hardware environment for simulation is Intel(R)-Core(TM) i7-6700K CPU, dominant frequency @4.00GHz, and 16G(ARM). To demonstrate the performance of the proposed msMS DE algorithm for the control problem of inhomogeneous quantum ensembles, we make performance comparison between it and ms DE (DE with multiple samples, i.e., using the average fitness function of multiple samples) with various parameters. 
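Before turning to the results, the sketch below shows one way to evaluate the average fitness (22) for a candidate piecewise-constant control sequence, using an Euler discretization of the Bloch-vector dynamics (21). It is a simplified stand-in for the authors' MATLAB simulation: the fixed value of ϕ, the placeholder control field and the sample grid {0.8, 1.0, 1.2} for (θ0, θ1) are illustrative assumptions, and a finer step or a proper ODE integrator may be preferable in practice.

```python
import numpy as np

def bloch_step(r, u, phi, theta0, theta1, dt):
    """One Euler step of the Bloch-vector dynamics (21)."""
    A = np.array([[-0.045, -theta0, 0.0],
                  [theta0, -0.045, 0.0],
                  [0.0, 0.0, -0.05]])
    B = np.array([[0.0, 0.0, -2.0 * np.sin(phi)],
                  [0.0, 0.0, 2.0 * np.cos(phi)],
                  [2.0 * np.sin(phi), -2.0 * np.cos(phi), 0.0]])
    drift = np.array([0.0, 0.0, 0.03])
    return r + dt * ((A + theta1 * u * B) @ r + drift)

def ensemble_average_fitness(u_seq, phi, r0, r_f, dt, theta_samples):
    """Average fitness (22) over sampled inhomogeneity pairs (theta0, theta1)."""
    Js = []
    for theta0, theta1 in theta_samples:
        r = np.array(r0, dtype=float)
        for u in u_seq:
            r = bloch_step(r, u, phi, theta0, theta1, dt)
        Js.append(1.0 - 0.25 * float(np.sum((np.asarray(r_f) - r) ** 2)))
    return float(np.mean(Js))

# Illustrative use: D = 200 piecewise-constant controls on [0, 10], N = 9 samples.
dt, D = 0.05, 200
samples = [(a, b) for a in (0.8, 1.0, 1.2) for b in (0.8, 1.0, 1.2)]
u_seq = np.zeros(D)                          # placeholder control field
r0, r_f = [0.0, 0.0, 1.0], [0.0, 0.0, -1.0]  # Bloch vectors of rho_0 and rho_f in (23)
print(ensemble_average_fitness(u_seq, 0.0, r0, r_f, dt, samples))
```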
To begin with, we present the results for the traditional DE (i.e., "DE/rand/1/bin") using multiple samples with three typical sets of control parameters. The three cases are labeled "ms DE1" (F = 0.9, CR = 0.1), "ms DE2" (F = 0.9, CR = 0.9) and "ms DE3" (F = 0.5, CR = 0.3), and the training performance is presented in Fig. 1(a). It is clear that ms DE1 and ms DE3 perform better than ms DE2 for this quantum control problem; ms DE1 achieves the highest fitness, 0.9566, among the three cases. We then compare the training performance of ms DE1, GA and msMS DE, with the results illustrated in Fig. 1(b). The msMS DE algorithm achieves the highest fitness Jmax = 0.9798, while ms DE1 and GA converge to maximum values of 0.9566 and 0.9667, respectively.

Fig. 1. (a) The training performance of the two-level open quantum ensemble via ms DE1 (F = 0.9, CR = 0.1), ms DE2 (F = 0.9, CR = 0.9) and ms DE3 (F = 0.5, CR = 0.3). (b) The training performance of ms DE1, GA and msMS DE.

A comparison of testing performance on 2000 additional samples and of training time between DE1 (using one sample), ms DE1, ms DE2, ms DE3, GA (with crossover probability Pc = 0.8 and mutation probability Pm = 0.05) and msMS DE, given in Table I, shows that msMS DE is superior to ms DE and DE1. Further numerical results also show that msMS DE can usually find the control field with the best robustness among these algorithms, because it employs mixed mutation strategies as well as an average performance over multiple samples. ms DE1, ms DE2, ms DE3, GA and msMS DE take similar time to find an optimal solution for the ensemble control problem; for example, msMS DE takes 9 hours 20 minutes and GA takes 10 hours 18 minutes and 14 seconds.

TABLE I
PERFORMANCE COMPARISON OF DIFFERENT ALGORITHMS

Algorithm | parameters                                | training time | J̄(u)
DE1       | F = 0.9, CR = 0.1, N = 1                  | 1h10m47s      | 0.9408
ms DE1    | F = 0.9, CR = 0.1, N = 9                  | 9h27m47s      | 0.9610
ms DE2    | F = 0.9, CR = 0.9, N = 9                  | 9h20m15s      | 0.9537
ms DE3    | F = 0.5, CR = 0.3, N = 9                  | 9h40m5s       | 0.9601
GA        | Pc = 0.8, Pm = 0.05, N = 9                | 10h18m14s     | 0.9691
msMS DE   | F = N(0.5, 0.3), CR = N(0.5, 0.1), N = 9  | 9h20m0s       | 0.9803

B. Consensus in superconducting qubit networks

The nodes in a quantum network could be photons, electrons, or other quantum systems. In this section, we consider a quantum network whose nodes are superconducting qubits. Superconducting quantum circuits based on Josephson junctions are a promising candidate for building the hardware of quantum computers. These macroscopic circuits can behave quantum mechanically like artificial atoms, and can also be used to test the laws of quantum mechanics on macroscopic systems [46]. Superconducting qubits have been widely investigated theoretically and implemented experimentally since they can easily be embedded in nanometer-scale electronic devices and scaled up to provide a large number of qubits for quantum computation [47]. Manipulation of superconducting qubits can be achieved by adjusting external parameters such as currents and voltages, or by tuning the coupling between two superconducting qubits [48]. In a superconducting circuit, the charging energy EC and the Josephson coupling energy EJ have a significant effect on the quantum mechanical behavior of a Josephson-junction circuit.
Different kinds of superconducting qubits can be realized depending on the regime of EJ/EC. For example, a charge qubit forms when EC ≫ EJ. In practical applications, the Josephson junction in the charge qubit is usually replaced by a dc superconducting quantum interference device (SQUID) with low inductance and a magnetic flux [49]. The equivalent Hamiltonian of a charge qubit can be described as [50], [51]

H = Fz(Vg) σz − Fx(Φ) σx,   (24)

where Fz(Vg) can be adjusted through the external voltage Vg, and Fx(Φ) corresponds to a tunable effective coupling with the external magnetic flux Φ in the SQUID. Hence, Fz(Vg) and Fx(Φ) are related to external control fields.

Now consider a quantum network consisting of three superconducting qubits with control fields acting on all qubits. We denote σx^(12) = σx ⊗ σx ⊗ I, σx^(23) = I ⊗ σx ⊗ σx, and σx^(13) = σx ⊗ I ⊗ σx. Its free Hamiltonian can be described as

H0 = ω12 σx^(12) + ω23 σx^(23) + ω13 σx^(13).   (25)

Denote σx^(1) = σx ⊗ I ⊗ I, σx^(2) = I ⊗ σx ⊗ I, σx^(3) = I ⊗ I ⊗ σx, and σz^(1) = σz ⊗ I ⊗ I, σz^(2) = I ⊗ σz ⊗ I, σz^(3) = I ⊗ I ⊗ σz. The control Hamiltonian has the form

Hu(t) = ux1 σx^(1) + uz1 σz^(1) + ux2 σx^(2) + uz2 σz^(2) + ux3 σx^(3) + uz3 σz^(3).   (26)

Our task is to drive the quantum network from an arbitrary initial state (usually with the three qubits having different reduced states) to a consensus state. Furthermore, if we withdraw the external control fields, the quantum network should remain in the consensus state under the free Hamiltonian. Denote by 1n the n × n matrix with all of its elements equal to 1. Let the target state be ρ̄ = (1/8) 1_8. We have the following result.

Proposition 1: The state ρ̄ = (1/8) 1_8 is a consensus state for the three-qubit network. Moreover, ρ̄ is invariant under the action of the free Hamiltonian H0 = ω12 σx^(12) + ω23 σx^(23) + ω13 σx^(13).

Fig. 2. The initial and target (reduced) states of the three qubits on the Bloch sphere.

Proof: For ρ̄ = (1/8) 1_8, the reduced states of the three nodes are

ρ̄1 = (1/2) 1_2, ρ̄2 = (1/2) 1_2, ρ̄3 = (1/2) 1_2.

It is clear that ρ̄1 = ρ̄2 = ρ̄3; that is, ρ̄ is a consensus state for the three-qubit network according to Definition 2. A direct calculation shows that [H0, ρ̄] = 0. Hence, dρ̄/dt = −i[H0, ρ̄] = 0; that is, ρ̄ is invariant under the action of the free Hamiltonian H0.

The chosen target state ρ̄ is a symmetric state in which the superconducting qubit network remains invariant under the free Hamiltonian alone. The initial state is set as ρ̄0 = ρ1^0 ⊗ ρ2^0 ⊗ ρ3^0, where

ρ1^0 = [1 0; 0 0], ρ2^0 = [0 0; 0 1], ρ3^0 = [1/2 −1/2; −1/2 1/2].

The initial and target states of the qubit network are illustrated in Fig. 2. In practical applications, there may exist fluctuations in the magnetic fields and electric fields of superconducting qubits. The practical control Hamiltonian is assumed to be

Hu(t) = θx ux1 σx^(1) + θz uz1 σz^(1) + θx ux2 σx^(2) + θz uz2 σz^(2) + θx ux3 σx^(3) + θz uz3 σz^(3).   (27)

We apply the msMS DE algorithm to search for robust control fields that drive the above quantum network to a consensus state. Simulation parameters are set as follows: the population size is NP = 100, the time interval [0, 20] ns is equally divided into 100 smaller time intervals (i.e., D = 100), and the control terms ux1, uz1, ux2, uz2, ux3, uz3 ∈ [0, 1] GHz. Let ω12 = ω23 = ω13 = 0.1 GHz. We assume that θx ∈ [0.98, 1.02] and θz ∈ [0.98, 1.02] (i.e., E = 0.02).
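A short numerical check of this setup is easy to write. The sketch below builds the free and control Hamiltonians (25)-(26) with ω12 = ω23 = ω13 = 0.1 GHz and verifies the two claims of Proposition 1 (invariance under H0 and equal reduced states). It is an illustrative sketch only; the helper names are not from the paper.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Free Hamiltonian (25) with omega_12 = omega_23 = omega_13 = 0.1 GHz.
w = 0.1
H0 = w * (kron3(sx, sx, I2) + kron3(I2, sx, sx) + kron3(sx, I2, sx))

def Hu(u):
    """Control Hamiltonian (26) for field amplitudes u = (ux1, uz1, ux2, uz2, ux3, uz3)."""
    ops = [kron3(sx, I2, I2), kron3(sz, I2, I2),
           kron3(I2, sx, I2), kron3(I2, sz, I2),
           kron3(I2, I2, sx), kron3(I2, I2, sz)]
    return sum(uk * op for uk, op in zip(u, ops))

# Proposition 1: rho_bar = (1/8) 1_8 (the all-ones matrix) commutes with H0 ...
rho_bar = np.ones((8, 8), dtype=complex) / 8.0
print(np.allclose(H0 @ rho_bar - rho_bar @ H0, 0.0))        # True

# ... and every node has the same reduced state (1/2) 1_2 (Definition 2).
t = rho_bar.reshape(2, 2, 2, 2, 2, 2)        # ket indices (a, b, c), bra indices (d, e, f)
rho1 = np.einsum('abcdbc->ad', t)            # trace out qubits 2 and 3
rho2 = np.einsum('abcadc->bd', t)            # trace out qubits 1 and 3
rho3 = np.einsum('abcabd->cd', t)            # trace out qubits 1 and 2
print(np.allclose(rho1, rho2) and np.allclose(rho2, rho3))  # True
```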
For each uncertainty parameter, we choose three samples and we have N = 9 samples for training. We employ DE1 (one sample) for comparison. The training performance of driving qubit network is illustrated in Fig. 3. As we can see, msMS DE achieves a rather high fitness Jmax = 0.9988 (where 1 is an upper bound), while DE1 achieves the fitness Fig. 3. The training performance of superconducting qubit network via DE1 and msMS DE. of Jmax = 0.9561. For the case θx = θz = 1, Fig. 4 shows the reduced states of three qubits from different trajectories asymptotically converging to the same trajectory using the control learned from msMS DE. Based on the control fields from DE1 and msMS ED, we test 2000 additional samples regarding the trace distance defined as (for i = 1, 2, 3) r 1 1 1 1 1 2 ||ρi − 12 ||Tr = Tr (ρi − 12 )† (ρi − 12 ) = ∑ |λ j | 2 2 2 2 2 j=1 where λ j are eigenvalues of ρi − 12 12 . Since the maximum trace distance between two quantum states may be 1, we define the relative error between two quantum states ρi and ρ j as ||ρi − ρ j ||Tr × 100%. Fig. 5 shows that the relative error between each qubit and its target state always keeps below 1.2% for the case using msMS DE while the relative error between each qubit and its target state may exceed 10.0% for the case using DE1. We further show the trace distances between the quantum states of different qubits after the control Hamiltonian is withdrawn in Fig. 6. It is clear that the relative errors between the reduced states are always below 2.0% for the case of msMS DE while the relative errors may exceed 12.0% for the case of DE1. The results demonstrate that the approximate consensus state achieved using msMS DE has much better stability than that obtained using DE1. V. E XPERIMENTAL RESULTS ON FEMTOSECOND LASER CONTROL SYSTEMS The following quantum control experiments were carried out in Department of Chemistry at Princeton University. A. Experimental setup The experimental setup contains three major components: 1) a fs laser system, 2) a pulse shaper and 3) a time-of-flight mass spectrometry (TOF-MS). Briefly, the fs laser system (KMlab, Dragon) consists of a Ti:sapphire oscillator and a amplifier, which produces 1 mJ, 25 fs pulses centered at 790 nm. The laser pulses are introduced into a pulse shaper with a programmable dual-mask liquid crystal spatial light modulator (SLM). The SLM has the capability of independent phase and amplitude modulation and has 640 pixels with 0.2 nm/pixel resolution [52], [53]. Typically, every 8 adjacent pixels are bundled together to form an array of 80 “grouped pixels”, which are the control variables. Each 1 trace distance r3=(0,0,1) qubit1 qubit2 qubit3 o qubit1−qubit2 qubit2−qubit3 qubit1−qubit3 0.15 0.1 0.05 20 25 30 35 40 t o z (a) 0 o −1 1 trace distance 0.02 rtarget=(1,0,0) r2=(−1,0,0) qubit1−qubit2 qubit2−qubit3 qubit1−qubit3 0.01 (b) o 20 25 30 1 r1=(0,0,−1) 0 −1 y −1 35 40 t 0 x Fig. 4. The reduced states of three qubits from different trajectory asymptotically converge to the same trajectory for θx = θz = 1 using the control learned by msMS DE. Fig. 6. The trace distances between different qubits in the qubit network with free Hamiltonian. (a) Average evolution curves of trace distances between three qubits for 2000 samples (DE1). (b) Average evolution curves of trace distances between three qubits for 2000 samples (msMS DE). trace distance B. 
Optimization of TPA signal 1 0.8 qubit1 qubit2 qubit3 0.6 0.4 (a) 0.2 0 0 10 20 30 40 trace distance t 1 0.8 qubit1 qubit2 qubit3 0.6 0.4 (b) 0.2 0 0 10 20 30 40 t Fig. 5. The testing performance of the qubit network. (a) Average evolution curves of trace distances of three qubits for 2000 samples using the control field learned by DE1. (b) Average evolution curves of trace distances of three qubits for 2000 samples using the control field learned by msMS DE. control variable can have a phase value between 0 and 2π , and an amplitude value between 0 and 1. In this experiment, we do phase-only control, with all the amplitude values fixed at 1. The shaped laser pulses out of the shaper are focused into a vacuum chamber, where photoionization and photofragmentation occurs. The fragment ions are separated with a set of ion lens and passing through a TOF tube before being collected with an MCP detector. The MS signals are recorded with a fast oscilloscope, which accumulates 1 second with 3000 laser shots each time before sending to a personal computer for further analysis. A small fraction of the beam (< 5%) is separated from the main beam and focused into a GaP photodiode (Thorlab, DET25K), which collects signals arising from two photon absorption. A preliminary task is to optimize the two-photon absorption (TPA) signal, which is a convenient way to identify a shortest pulse that removes the residual highorder dispersion in the amplifier output. The parameter setting is as follows: D = 80, NP = 30 and N = 3. The control variable are the phases bounded in between 0 and 2π . The fitness function corresponds to the TPA signal and an average fitness of three samples was used. Three samples for each individual were selected as follows: The first sample came from the T current individual, denoted as Xi1 = [x1i , x2i , . . . , x80 i ] , the second sample was generated by adding a random fluctuation between 0 and 0.1π to each component of the current individual, i.e., Xi2 = [x1i + 0.05rand(0, 1) × 2π , x2i + T 0.05rand(0, 1) × 2π , . . . , x80 i + 0.05rand(0, 1) × 2π ] , and the 3 1 third sample was selected as Xi = [xi − 0.05rand(0, 1) × T 2π , x2i − 0.05rand(0, 1) × 2π , . . ., x80 i − 0.05rand(0, 1) × 2π ] . This means that each control variable is permitted to have up to 5% (of the maximum phase) additive noise. The interaction between the algorithm and the modulation of SLM is accomplished by LabVIEW system design software. An experimentally reasonable termination condition of 150 generations (iterations) is used. For 150 iterations, it takes around five and a half hours to run the experiment. For each generation, a total of 90,000 signal measurements were made. The experimental result is shown in Figure 7 where TPA signal during each iteration is presented in Fig. 7(a) and the optimized phases of 80 control variables for the final optimal result is given in Fig. 7(b). After 150 generations, the best average TPA signal for three samples can reach 1.35. C. Fragmentation control We consider the fragmentation control of molecules CH2 BrI, where the fitness is defined as the photofragment ratio of CH2 Br+ /CH2 I+ , while the control variables are the phases. The parameter setting is the same as that in Section Fig. 7. Experimental result for optimizing the TPA signal using msMS DE. (a) TPA signal vs iterations, where ‘Best’ represents the maximum fitness and ‘Average’ represents the average fitness of all individuals during each iteration. 
(b) Optimized phases of 80 control variables for the final optimal result corresponding to the maximum fitness. V.B: D = 80 and NP = 30. DE algorithms were employed to optimize the phases of 80 control variables. Here, we apply both DE1 and msMS DE for comparison. Figure 8 shows the experimental results using DE1, where the ratio CH2 Br+ /CH2 I+ as the fitness function is presented in Fig. 8(a) and the optimized phases of 80 control variables for the final optimal result is given in Fig. 8(b). In Fig. 8(b), ‘Best’ represents the maximum fitness and ‘Average’ represents the average fitness of all individuals during each iteration. With 150 iterations, DE1 can find an optimized pulse to make CH2 Br+ /CH2 I+ achieve 2.41. Figure 9 shows the results from the msMS DE algorithm, in which three samples are measured in each experiment. Three samples for each individual were selected using the same method as that in the experiments of optimizing TPA signals. With 150 iterations, msMS DE can find an optimal pulse for making the average CH2 Br+ /CH2 I+ of three samples to achieve 2.67. The experimental results are shown in Fig. 9, where the average ratio CH2 Br+ /CH2 I+ of three samples as the fitness function is presented in Fig. 9(a) and the optimized phases of 80 control variables for the final optimal result is given in Fig. 9(b). After we obtained the optimal femtosecond control pulses using DE1 and msMS DE, we can test the performance of the optimal pulses. The testing results are shown in Fig. 10, where Fig. 10(a) is the average TOF signal of 100 testing results with random noises between −7.5% and +7.5% (with respect to the maximum phase 2π ) for the femtosecond pulse optimized by DE1. In other words, if T we denote the best individual as Xbest = [x1b , x2b , . . . , x80 b ] , k 1 these 100 testing samples can be written as Xs = [xb + 0.075(2rand(0, 1) − 1) × 2π , x2b + 0.075(2rand(0, 1) − 1) × T 2π , . . ., x80 b + 0.075(2rand(0, 1) − 1) × 2π ] . Fig. 10(b) shows the average TOF signal of 100 testing results for the femtosecond pulse optimized by msMS DE. The average CH2 Br+ /CH2 I+ of the 100 testing samples can achieve Fig. 8. Experimental result for optimizing the ratio between the product CH2 Br+ and the product CH2 I+ using DE1. (a) Ratio CH2 Br+ /CH2 I+ vs iterations, where ‘Best’ represents the maximum fitness and ‘Average’ represents the average fitness of all individuals during each iteration. (b) Optimized phases of 80 control variables for the final optimal result corresponding to the maximum fitness. Fig. 9. Experimental result for optimizing the ratio between the product CH2 Br+ and the product CH2 I+ using msMS DE. (a) Ratio + CH2 Br /CH2 I+ vs iterations, where ‘Best’ represents the maximum fitness and ‘Average’ represents the average fitness of all individuals during each iteration. (b) Optimized phases of 80 control variables for the final optimal result corresponding to the maximum fitness. 2.61 for the pulse from msMS DE while the average CH2 Br+ /CH2 I+ is only 2.12 for the pulse from DE1. It is clearly shown that msMS DE outperforms DE1 in terms of reaching a better objective fitness value in this experiment. VI. CONCLUSION In order to solve three classes of quantum robust control problems, we have proposed improved msMS DE using multiple samples for fitness evaluation and a mixed strategy for mutation. 
The msMS DE algorithm shows excellent performance for the control problem of open inhomogeneous quantum ensembles and the consensus problem of quantum networks with uncertainties. We have experimentally implemented msMS DE on femtosecond laser control systems in the laboratory to generate good TPA signal and control Fig. 10. Experimental results for TOF signal where m represents mass and q represents charge. (a) The average TOF signal of 100 testing results with random noises between −7.5% and +7.5% for the femtosecond pulse optimized by DE1. (b) The average TOF signal of 100 testing results with random noises between −7.5% and +7.5% for the femtosecond pulse optimized by msMS DE. fragmentation of chemical molecules. In future research, there is plenty of room for exploring the use of DE for emerging quantum control engineering. For example, it is worth adapting more efficient DE algorithms for high-dimensional quantum control problems [54]. In the laboratory, many quantum control problems involve multiple even many objectives that need to be optimized, and it is also worth adapting multi-objective and many-objective evolution optimization algorithms [55], [56] for these challenging quantum control problems. R EFERENCES [1] D. Dong, and I. R. Petersen, “Quantum control theory and applications: a survey,” IET Control Theory & Applications, vol. 4, no. 12, pp. 26512671, 2010. [2] H. Rabitz, R. De Vivie-Riedle, M. Motzkus, and K. Kompa, “Whither the future of controlling quantum phenomena?” Science, vol. 288, no. 5467, pp. 824-828, 2000. [3] H. M. Wiseman, and G. J. Milburn, “Quantum Measurement and Control,” Cambridge, England: Cambridge University Press, 2010. [4] M. A. Nielsen, and I. L. Chuang, “Quantum Computation and Quantum Information,” Cambridge, U.K.: Cambridge University. Press, 2000. [5] S. Kuang, D. Dong, and I. R. Petersen, “Rapid Lyapunov control of finite-dimensional quantum systems” Automatica, in press, 2017, online: quant-ph, arXiv, https://arxiv.org/abs/1512.02545. [6] C. Chen, D. Dong, H.-X. Li, J. Chu, and T.-J. Tarn, “Fidelity based probabilistic Q-learning for control of quantum systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 920-933, 2014. [7] R. S. Judson, and H. Rabitz, “Teaching lasers to control molecules,” Physical Review Letters, vol. 68, pp. 1500-1503, 1992. [8] N. Khaneja, T. Reiss, C Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, “Optimal control of coupled spin dynamics: design of NMR pulse sequences by gradient ascent algorithms,” Journal of Magnetic Resonance, vol. 172, no. 2, pp. 296-305, 2005. [9] R. Storn, and K. Price, “Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997. [10] S. Das, and P. N. Suganthan, “Differential evolution: A survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4-31, 2011. [11] E. Zahedinejad, J. Ghosh, and B. C. Sanders, “High-fidelity singleshot Toffoli gate via quantum control,” Physics Review Letters, vol. 114, no. 20, p. 200502, 2015. [12] H. Ma, C. Chen, and D. Dong, “Differential evolution with equallymixed strategies for robust control of open quantum systems,” IEEE International Conferernce on Systems, Man and Cybernetics, pp. 20552060, Hong Kong, October 9-12, 2015. [13] E. Zahedinejad, J. Ghosh, and B. C. 
Sanders, “Desgining high-fidelity single-shot three-qubit gates: A machine-learning approach,” Physics Review Applied, vol. 6, p. 054005, 2016. [14] Y. Sun, H. Ma, C. Wu, C. Chen, and D. Dong, “Ensemble control of open quantum systems using differential evolution,” 2015 10th Asian Control Conference (ASCC), pp. 1-6, 2015. [15] E. Zahedinejad, S. Schirmer, and B. C. Sanders, “Evolutionary algorithms for hard quantum control,” Physics Review A, vol. 90, no. 3, p. 032310, 2014. [16] M. R. James, H. I. Nurdin, and I. R. Petersen, “H ∞ control of linear quantum stochastic systems,” IEEE Transactions on Automatic Control, vol. 53, pp. 1787-1803, 2008. [17] D. Dong and I. R. Petersen, “Sliding mode control of two-level quantum systems,” Automatica, 48(5): 725-735, 2012. [18] X. Xing, R. Rey-de-Castro and H. Rabitz, “Assessment of optimal control mechanism complexity by experimental landscape Hessian analysis: fragmentation of CH2 BrI,” New Journal of Physics, vol. 16, p. 125004, 2014. [19] C. Chen, D. Dong, R Long, and H. A. Rabitz, “Sampling-based learning control of inhomogeneous quantum ensembles,” Physics Review A, vol. 89, no. 2, p. 023402, 2014. [20] J. S. Li, and N. Khaneja, “Control of inhomogeneous quantum ensembles,” Physics Review A, vol. 73, no. 3, p. 030302, 2006. [21] J. S. Li, J. Ruths, T. Y. Yu, H. Arthanari, and G. Wagner, “Optimal pulse design in quantum control: A unified computational method,” Proceedings of the National Academy of Sciences of the USA, vol. 108, no. 5, pp. 1879-1884, 2011. [22] L. M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, “Long-distance quantum communication with atomic ensembles and linear optics,” Nature, vol. 414, no. 6862, pp. 413-418, 2001. [23] K. Beauchard, P.S.P. da Silva, and P. Rouchon, “Stabilization for an ensemble of half-spin systems,” Automatica, vol. 48, pp. 68-76, 2012. [24] H. P. Breuer, and F. Petruccione, “The Theory of Open Quantum Systems,” Oxford University Press, 2002. [25] M. Mesbahi, and M. Egerstedt, “Graph theoretic methods in multiagent networks,” Princeton University Press, 2010. [26] L. Mazzarella, A. Sarlette, and F. Ticozzi, “Consensus for quantum networks: from symmetry to gossip iterations,” IEEE Transactions on Automatic Control, vol. 60, no. 1, pp. 158-172, 2015. [27] G. Shi, D. Dong, I. R. Petersen, and K. H. Johansson, “Reaching a quantum consensus: master equations that generate symmetrization and synchronization,” IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 374-387, 2016. [28] R. Sepulchre, A.Sarlette, and P. Rouchon, “Consensus in noncommutative spaces,” in Proc. 49th IEEE Conf. Decision Control, Atlanta, GA, USA, Dec. 15-17, 2010, pp. 6596-6601. [29] L. Mazzarella, F. Ticozzi, and A. Sarlette, “Extending robustness and randomization from consensus to symmetrization algorithms,” SIAM Journal on Control and Optimization, vol. 53, no. 4, pp. 2076-2099, 2015. [30] F. Ticozzi, “Symmetrizing quantum dynamics beyond gossip-type algorithms,” Automatica, vol. 74, pp. 38-46, 2016. [31] D. Dong, M. A. Mabrok, I. R. Petersen, B. Qi, C. Chen, and H. Rabitz, “Sampling-based learning control for quantum systems with uncertainties,” IEEE Transactions on Control Systems Technology, vol. 23, pp. 2155-2166, 2015. [32] C. Wu, B. Qi, C. Chen, and D. Dong, “Robust learning control design for quantum unitary transformations,” IEEE Transactions on Cybernetics, in press, 2017, online http://ieeexplore.ieee.org/document/7579636/. [33] G. 
Lindblad, “On the generators of quantum dynamical semigroups,” Communications in Mathmatical Physics, vol. 48, no. 2, pp. 119-130, 1976. [34] F. Yang, S. Cong, R. Long, T. S. Ho, R. Wu, and H. Rabitz, “Exploring the transition-probability-control landscape of open quantum systems: Application to a two-level case,” Physics Review A, vol. 88, no. 3, p. 033420, 2013. [35] G. Kimura, “The Bloch vector for N-level systems,” Physics Letters A, vol. 314, pp. 339-349, 2003. [36] H. Rabitz, M. M. Hsieh, and C. M. Rosenthal, “Quantum optimally controlled transition landscapes,” Science, vol. 303, no. 5666, pp. 1998-2001, 2004. [37] K. M. Tibbetts, X. Xing and H. Rabitz, “Laboratory transferability of optimally shaped laser pulses for quantum control,” Journal of Chemical Physics, vol. 140, p. 074302, 2014. [38] N. M. Hamza, D. L. Essam, and R. A. Sarker, “Constraint consensus mutation-based differential evolution for constrained optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 3, pp. 447-459, 2016. [39] R. Storn, and K. Price, “Differential evolution-a simple and efficient adaptive scheme for global optimization over continuous spaces,” ICSI, USA, Techonology Rep. TR-95-012, 1995 [Online]. Avaliable: http://icsi.berkeley.edu/ storn/litera.html [40] A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398-417, 2009. [41] F. Neri, and V. Tirronen, “Recent advances in differential evolution: A survey and experimental analysis,” Artificial Intelligence Review, vol. 33, no. 1-2, pp. 61-106, 2010. [42] R. L. Becerra, and C. A. Coello Coello, “Cultured differential evolution for constrained optimization,” Computing Methods in Applied Mechanics and Engineering, vol. 195, no. 33-36, pp. 4303-4322, 2006. [43] R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetirenc, “Differential evolution algorithm with ensemble of parameters and mutation strategies,” Applied Soft Computing, vol. 11, no. 2, pp. 16791696, 2011. [44] C. Chen, D. Dong, B. Qi, I. R. Petersen and H. Rabitz, “Quantum ensemble classification: A samplingbased learning control approach,” IEEE Transactions on Neural Networks and Learning Systems, in press, online http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7439835. [45] H. Jirari, and W. Pötz, “Optimal coherent control of dissipative N-level systems,” Physics Review A, vol. 72, no. 1, p. 013409, 2005. [46] J. Q. You, and F. Nori, “Superconducting circuits and quantum information,” Physics Today, vol. 58, no. 11, pp. 42-47, 2005. [47] Y. Makhlin, G. Scöhn, and A. Shnirman, “Josephson-junction qubits with controlled couplings,” Nature, vol. 398, no. 6725, pp. 305-307, 1999. [48] R. C. Bialczak, M. Ansmann, M. Hofheinz, M. Lenander, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, H. Wang, M. Weides, J. Wenner, T. Yamamoto, A. N. Cleland, and J. M. Martinis, “Fast tunable coupler for superconducting qubits,” Physics Review Letters, vol. 106, no. 6, p. 060501, 2011. [49] J. Q. You, J. S. Tsai, and F. Nori, “Controllable manipulation and entanglement of macroscopic quantum states in coupled charge qubits,” Physics Review B, vol. 68, no. 2, p. 024510, 2003. [50] D. Dong, C. Chen, B. Qi, I. R. Petersen and F. Nori, “Robust manipulation of superconducting qubits in the presence of fluctuations,” Scientific Reports, 5: 7873, 2015. [51] D. Dong, C. Wu, C. Chen, B. Qi, I. R. Petersen and F. 
Nori, “Learning robust pulses for generating universal quantum gates,” Scientific Reports, 6: 36090, 2016. [52] K. M. Tibbetts, X. Xing and H. Rabitz, “Systematic trends in photonic reagent induced reactions in a homologous chemical family,” Journal of Physical Chemistry A, vol. 117, pp. 8025-8215, 2013. [53] K. M. Tibbetts, X. Xing and H. Rabitz, “Optimal control of molecular fragmentation with homologous families of photonic reagents and chemical substrates,” Physical Chemistry Chemical Physics, vol. 15, pp. 18012-18022, 2013. [54] P. Palittapongarnpim, P. Wittek, E. Zahedinejad, S. Vedaie, and B. C. Sanders, “Learning in quantum control: High-dimensional global optimization for noisy quantum dynamics”, learning, arxiv: https://arxiv.org/abs/1607.03428. [55] H. Zhang, A. Zhou, S. Song, Q. Zhang, X. Z. Gao and J. Zhang, “A self-organizing multiobjective evolutionary algorithm,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 792-806, 2016. [56] Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relationbased evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16-37, 2016.
Deep Semi-Random Features for Nonlinear Function Approximation Kenji Kawaguchi,1∗ Bo Xie,2∗ Vikas Verma,3 Le Song2 1 Massachusetts Institute of Technology Institute of Technology 3 Aalto University arXiv:1702.08882v7 [cs.LG] 21 Nov 2017 2 Georgia Abstract We propose semi-random features for nonlinear function approximation. The flexibility of semi-random feature lies between the fully adjustable units in deep learning and the random features used in kernel methods. For one hidden layer models with semi-random features, we prove with no unrealistic assumptions that the model classes contain an arbitrarily good function as the width increases (universality), and despite non-convexity, we can find such a good function (optimization theory) that generalizes to unseen new data (generalization bound). For deep models, with no unrealistic assumptions, we prove universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound. Depending on the problems, the generalization bound of deep semi-random features can be exponentially better than the known bounds of deep ReLU nets; our generalization error bound can be independent of the depth, the number of trainable weights as well as the input dimensionality. In experiments, we show that semi-random features can match the performance of neural networks by using slightly more units, and it outperforms random features by using significantly fewer units. Moreover, we introduce a new implicit ensemble method by using semi-random features. Introduction Many recent advances, such as human-level image classification (Deng et al. 2009) and game playing (Bellemare et al. 2013; Silver et al. 2016) in machine learning are attributed to large-scale nonlinear function models. There are two dominating paradigms for nonlinear modeling in machine learning: kernel methods and neural networks: • Kernel methods employ pre-defined basis functions, k(x, x ), called kernels to represent nonlinear functions (Scholkopf and Smola 2001; Shawe-Taylor and Cristianini 2004). Learning algorithms that use kernel methods often come with nice theoretical properties–globally optimal parameters can be found via convex optimization, and statistical guarantees can be provided rigorously. However, kernel methods typically work with matrices that are quadratic in the number of samples, leading to unfavorable computation and storage complexities. A popular ∗ Equal contribution c 2018, Association for the Advancement of Artificial Copyright  Intelligence (www.aaai.org). All rights reserved. approach to tackle such issues is to approximate kernel functions using random features (Rahimi and Recht 2008; Sindhwani, Avron, and Mahoney 2014; Pennington, Yu, and Kumar 2015). One drawback of random features is that its approximation powers suffer from the curse of dimensionality (Barron 1993) because its bases are not adaptive to the data. • Neural networks use adjustable basis functions and learn their parameters to approximate the target nonlinear function (LeCun, Bengio, and Hinton 2015). Such adaptive nature allows neural networks to be compact yet expressive. As a result, they can be efficiently trained on some of the largest datasets today. By incorporating domain specific network architectures, neural networks have also achieved state-of-the-art results in many applications. However, learning the basis functions involves difficult non-convex optimization. 
Few theoretical insights are available in the literature and more research is needed to understand the working mechanisms and theoretical guarantees for neural networks (Choromanska, LeCun, and Arous 2015; Swirszcz, Czarnecki, and Pascanu 2016; Shamir 2016). Can we have the best of both worlds? Can we develop a framework for big nonlinear problems which has the ability to adapt basis functions, has low computational and storage complexity, while at the same time retaining some of the theoretical properties of random features? Towards this goal, we propose semi-random features to explore the space of trade-off between flexibility, provability and efficiency in nonlinear function approximation. We show that semirandom features have a set of nice theoretical properties, like random features, while possessing a (deep) representation learning ability, like deep learning. More specifically: • Despite the nonconvex learning problem, semi-random feature model with one hidden layer has no bad local minimum; • Depending on the problems, the generalization bound of deep semi-random features can be exponentially better than known bounds of deep ReLU nets; • Semi-random features can be composed into multi-layer architectures, and going deep in the architecture leads to more expressive model than going wide; • Semi-random features also lead to statistical stable func- tion classes, where generalization bounds can be readily provided. Background We briefly review different ways of representing nonlinear functions in this section. Hand-designed basis. In a classical machine learning approach for nonlinear function approximation, users or domain experts typically handcraft a set of features φexpert : X → H, a map from an input data space X to a (complete) inner product space H. Many empirical risk minimization algorithms then require us to compute the inner product of the features as φexpert (x), φexpert (x )H for each pair (x, x ) ∈ X × X . Computing this inner product can be expensive when the dimensionality of H is large, and indeed it can be infinite. For example, if H is the space of square integrable functions, we need to evaluate the integral as φexpert (x), φexpert (x )H =  φ (x; ω)φexpert (x ; ω). ω expert Kernel methods. When our algorithms solely depend on the inner product, the kernel trick avoids this computational burden by introducing an easily computable kernel function as kexpert (x , x) = φexpert (x), φexpert (x )H , resulting in an implicit definition of the features φexpert (Scholkopf and Smola 2001; Shawe-Taylor and Cristianini 2004). However, the kernel approach typically scales poorly on large datasets. Given a training set of m input points {xi }m i=1 , evaluating a learned function at a new point x requires computing m fˆ(x) = i=1 αi kexpert (xi , x), the cost of which increases linearly with m. Moreover, it usually requires computing (or approximating) inverses of matrices of size m × m. Random features. In order to scale to large datasets, one can approximate the kernel by a set of random basis functions sampled according to some distributions. That is, C kexpert (x , x) ≈ C1 j=1 φrandom (x; rj )φrandom (x ; rj ), where both the type of basis functions φrandom , and the sampling distribution for the random parameter rj are determined by the kernel function. 
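As a concrete instance of this construction, the sketch below draws random Fourier features for the common case of the Gaussian (RBF) kernel k(x, x') = exp(-γ‖x - x'‖²), for which the sampling distribution is Gaussian (Rahimi and Recht 2008). The choice of kernel and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, C, gamma=1.0):
    """C random cosine features such that phi(x) . phi(x') ~ exp(-gamma * ||x - x'||^2)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, C))   # spectral samples of the kernel
    b = rng.uniform(0.0, 2.0 * np.pi, size=C)                # random phase offsets
    return np.sqrt(2.0 / C) * np.cos(X @ W + b)

# The Monte-Carlo approximation of the kernel improves as C grows.
X = rng.normal(size=(5, 3))
Phi = random_fourier_features(X, C=20000, gamma=0.5)
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.max(np.abs(Phi @ Phi.T - K_exact)))                 # small
```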
Due to its computational advantage and theoretical foundation, the random feature approach has many applications and is an active research topic (Rahimi and Recht 2008; Sindhwani, Avron, and Mahoney 2014; Pennington, Yu, and Kumar 2015). Neural networks. Neural networks approximate functions using weighted combination of adaptable basis functions n (2) (1) f (x) = k=1 wk φ(x; wk ), where both the combination (2) (1) weights wk and the parameters wk in each basis function φ are learned from data. Neural networks can be composed into multilayers to express highly flexible nonlinear functions. Semi-Random Features When comparing different nonlinear representaitons, we can see that random features are designed to approximate a known kernel, but not for learning these features from the given dataset (i.e., it is not a representation learning). As a result, when compared to neural network, it utilizes less amount of information encoded in the dataset, which could be disadvantageous. Neural networks, on the other hand, pose a difficulty for theoretical developments due to nonconvexity in optimization. This suggests a hybrid approach of random feature and neural network, called semi-random feature (or semirandom unit), to learn representation (or feature) from datasets. The goal is to obtain a new type of basis functions which can retain some theoretical guarantees via injected randomness (or diversity) in hidden weights, while at the same time have the ability to adapt to the data at hand. More concretely, semi-random features are defined as   (1) φs (x; r, w) = σs (x r) x w , where x = (1, x ) is assumed to be in R1+d , r = (r0 , r  ) is sampled randomly, and w = (w0 , w  ) is adjustable weights to be learned from data (hence, it is “semi-random”). Furthermore, the family of functions σs for s ∈ {0, 1, 2, . . . } is defined as σs (z) = (z)s H(z), where H is Heaviside step function (H(z) = 1 for z > 0 and 0 otherwise). For instance, σ0 is simply Heaviside step function, σ1 is ramp function, and so on. We call the corresponding semi-random features with s = 0 “linear semi-random features (LSR)” and with s = 1 “squared semi-random features (SSR)”. An illustration of example semi-random features can be found in Figure 4 in Appendix. Unlike dropout, which uses a data independent random switching mechanism (during training), the random switching in semi-random feature depends on the input data x (during both training and testing), inducing highly nonlinear models with practical advantages. By further enhancing this property, we additionally introduce linear semi-random implicit-ensemble (LSR-IE) features in Section “Image classification benchmarks”. Intuitively, models with semi-random features have more expressive power than those with random features because of the learnable unit parameter w. Yet, these models are less flexible compared to neural networks, since the parameters in σs (x r) is sampled randomly. Depending on the problems. this property of semi-random feature can result in exponential advantage over fully random feature in expressiveness (as discussed in Section “Better than Random Feature?”), and exponential advantage over deep ReLU models in generalization error bound (as discussed in Section “Generalization Guarantee”). 
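As a minimal illustration of equation (1), the following sketch evaluates a single semi-random unit. The variable names are ours, and a unit data radius is assumed when sampling the bias r0 (the sampling procedure follows the description in the Appendix); this is not the paper's released code.

```python
import numpy as np

def sigma_s(z, s):
    # sigma_s(z) = z**s * H(z), with H the Heaviside step function.
    return float(z > 0) if s == 0 else (z ** s) * (z > 0)

def semi_random_feature(x, r, w, s=0):
    # Eq. (1): phi_s(x; r, w) = sigma_s(x~^T r) * (x~^T w), with x~ = (1, x).
    # s = 0 gives the linear semi-random (LSR) unit, s = 1 the squared (SSR) unit.
    x_tilde = np.concatenate(([1.0], x))
    return sigma_s(x_tilde @ r, s) * (x_tilde @ w)

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)

# r = (r0, r'): direction r' uniform on the unit sphere, bias r0 from an
# interval covering the data radius (here we simply assume radius(Omega) = 1);
# r is sampled once and never trained.
r_dir = rng.normal(size=d)
r_dir /= np.linalg.norm(r_dir)
r = np.concatenate(([rng.uniform(-1.0, 1.0)], r_dir))

w = rng.normal(size=d + 1)      # adjustable weights, learned from data
print(semi_random_feature(x, r, w, s=0), semi_random_feature(x, r, w, s=1))
```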
One Hidden Layer Model With semi-random features φs in equation (1), we define one hidden layer model for nonlinear function as fˆns (x; w) = n  (1) (2) φs (x; rk , wk )wk , (2) k=1 where rk is sampled randomly for k ∈ {2, 3, · · · , n} as described in Section “Semi-Random Features”, and r1 is fixed to be the first element of the standard basis as r1 = e1 = (1, 0, 0, · · · 0) (to compactly represent a constant term in x). We can think of this model as one hidden layer model (1) by considering φs (x; r, wk ) as the output of k-th unit of (1) the hidden layer, and wk as adjustable parameters associated with this hidden layer unit. This way of understanding the model will become helpful when we generalize it to a multilayer model in Section “Multilayer Model”. Note that fˆns (x; w) is a nonlinear function of x. When it is clear, by the notation w, we denote all adjustable parameters in the entire model. In matrix notation, the model in (2) can be rewritten as   fˆns (x; w) = σs (x R)  (x W(1) ) W (2) , (3) where (1) (1) W(1) = (w1 , w2 , . . . , wn(1) ) ∈ R(d+1)×n , R = (r1 , r2 , . . . , rn ) ∈ R(d+1)×n , and (2) W(2) = (w1 , . . . , wn(2) ) ∈ Rn+1 . Here, (M1  M2 ) represents a Hadamard product of two matrices M1 and M2 . Furthermore σs (M )ij = σs (Mij ), given a matrix M of any size (with overloads of the symbol σs ). In the following subsections, we present our theoretical results for one hidden layer model. All proofs in this paper are deferred to the appendix. Universal Approximation Ability We show that our model class has universal approximation ability. Given a finite s, our model class is defined as Fns = {x → fˆns (x; w) | w ∈ Rdw }, where dw = (d+1)n+n is the number of adjustable parameters. Let L2 (Ω) be the space of square integrable functions on a compact set Ω ⊆ Rd . Then Theorem 1 states that we can approximate any f ∈ L2 (Ω) arbitrarily well as we increase the number of units n. We discuss the importance of the bias term r0 to obtain the universal approximation power in Appendix. Theorem 1 (Universal approximation) Let s be any fixed finite integer and let Ω = {0} be any fixed nonempty compact subset of Rd . Then, for any f ∈ L2 (Ω), with probability one, lim inf n→∞ fˆ∈F s n f − fˆ L2 (Ω) = 0. Optimization Theory As we have confirmed universal approximation ability of our model class Fns in the previous section, we now want to find a good fˆ ∈ Fns via empirical loss minimization. More specifically, given a dataset {(xi , yi )}m i=1 , we will consider the following optimization problem: m 2 1  yi − fˆns (xi ; w) . minimize L(w) = 2m i=1 w∈Rdw Let Y = (y1 , y2 , . . . , ym ) ∈ R and Ŷ = (fns (x1 ; w), fns (x2 ; w), . . . , fns (xm ; w)) ∈ Rm . Given a  m matrix M , let Pcol(M ) and Pnull(M ) be the projection matrices onto the column space and null space of M . Our optimization problem turns out to be characterized by the following m by nd matrix: ⎤ ⎡   · · · σs (x σs (x 1 r1 )x1 1 rn )x1 ⎥ ⎢ .. .. .. (4) D=⎣ ⎦, . . .  σs (x m r1 )xm ···  σs (x m rn )xm  where σs (x i rj )xi is a 1 × d block at (i, j)-th block entry. That is, at any global minimum, we have L(w) = 1 2 and Ŷ = Pcol(D) Y . Moreover, we can 2m Pker(DT ) Y achieve a global minimum in polynomial time in dw based on the following theorem. Theorem 2 (No bad local minima and few bad critical points) For any s and any n, the optimization problem of L(w) has the following properties: (i) it is non-convex (if D = 0)1 , (ii) every local minimum is a global minimum, (2) (iii) if wk = 0 for all k ∈ {1, 2, . . . 
, n}, every critical point is a global minimum, and 1 Pnull(DT ) Y 2 (iv) at any global minimum, L(w) = 2m and Ŷ = Pcol(D) Y . Theorem 2 (optimization) together with Theorem 1 (universality) suggests that not only does our model class contain an arbitrarily good function (as n increases), but also we can find the best function in the model class given a dataset. In the context of understanding the loss surface of neural networks (Kawaguchi 2016; Freeman and Bruna 2016), Theorem 2 implies that the potential problems in the loss surface are due to the inclusion of r as an optimization variable. Generalization Guarantee In the previous sections, we have shown that our model class has universal approximation ability and that we can learn the best model given a finite dataset. A major remaining question is about the generalization property; how well can a learned model generalize to unseen new observations? Theorem 3 bounds the generalization error; the difference between expected risk, 12 Ex (f (x) − fˆ(x; w∗ ))2 , and empirical risk, L(w). In Theorem 3, we use the following notations: σ, x = [σs (x r1 )x · · · σs (x rn )x] ∈ Rnd and (2) (1) (2) (1) (2) (1) w = (w1 w1 , w2 w2 , . . . , wn wn ) ∈ Rnd . Theorem 3 (Generalization bound for shallow model) Let s ≥ 0 and n > 0 be fixed. Consider any model class Fns with w 2 ≤ CW and σ, x 2 ≤ Cσx almost surely. Then, with probability at least 1 − δ, for any fˆ ∈ Fns , 1 Ex (f (x) − fˆ(x; w))2 − L(w) 2  ≤ (CY2 + CŶ2 ) log 1δ C + 2(CY + CŶ ) √ Ŷ , 2m m where CŶ = CW Cσx . 1 In the case of D = 0, L(w) is convex in a trivial way; our model class only contains a single constant function x → 0. By combining Theorem 2 (optimization) and Theorem 3 (generalization), we obtain the following remark. Remark 1. (Expected risk bound) Let CW be values such 1 Pker(DT ) Y 2 that the global minimal value L(w) = 2m is attainable in the model class (e.g., setting CW = (D D)† D Y suffices). Then, at any critical points with (2) wk = 0 for all k ∈ {1, 2, . . . , n} and any local minimum such that w 2 < CW , we have Ex (f (x) − fˆ(x; w∗ ))2 ≤ Pnull(DT ) Y 2 m ⎛ +O⎝ log m 1 δ ⎞ ⎠ , (5) with probability at least 1 − δ. Here, O(·) notation simply hides the constants in Theorem 3. In the right-hand side of equation (5), the second term goes to zero as m increases, and the first term goes to zero as n or d increases (because null(DT ) becomes a smaller and smaller space as n or d increases, eventually containing only 0). Hence, we can minimize the expected risk to zero. Multilayer Model We generalize one hidden layer model to H hidden layer model by composing semi-random features in a nested fashion. More specifically, let nl be the number of units, or width, in the l-th hidden layer for all l = 1, 2, . . . , H. Then we will denote a model of fully-connected feedforward semi-random networks with H hidden layers by fˆs (x; w) = h(H) W (H+1) , (6) n1 ,...,nH w where for all l ∈ {2, 3, · · · , H}, (l) (l−1) (x) W (l) ) and h(l) w (x) = hr (x)  (hw (l−1) h(l) (x) R(l) ) r (x) = σs (hr is the output of the l-th semi-random hidden layer, and the output of the l-th random switching layer respectively. Here, (l) (l) (l) W (l) = (w1 , w2 , . . . , wnl ) ∈ Rnl−1 ×nl and R(l) = (l) (l) (l) (r1 , r2 , . . . , rnl ) ∈ Rnl−1 ×nl . Similarly to one hidden (l) layer model, rk is sampled randomly for k ∈ {2, 3, . . . , nl } (l) but r1 is fixed to be e1 = (1, 0, 0, . . . , 0) (to compactly write the effect of constant terms in x). 
The output of the first hidden layer is the same as that of one hidden layer model: (1) (1) (1) h(1) ) and h(1) ), w (x) = hr (x)  (xW r (x) = σs (xR where the boldface notation emphasizes that we require the bias terms at least in the first layer. In other words, we keep (l) randomly updating the random switching layer hr (x), and (l) couple it with a linearly adjustable hidden layer hw (x)W (l) (l+1) to obtain the next semi-random hidden layer hw (x). Convolutional semi-random feedforward neural networks can be defined in the same way as in equation (6) with vector-matrix multiplication being replaced by c dimensional convolution (for some number c). In our experiments, we will test both convolutional semi-random networks as well as fully-connected versions. We will discuss further generalizations of our architecture in Appendix. Benefit of Depth We first confirm that our multilayer model class Fns1 ,...,nH = {x → fˆns1 ,...,nH (x; w)|w ∈ Rdw } preserves universal approximation ability. Corollary 4 (Universal approximation with deep model) Let s be any fixed finite integer and let Ω = {0} be any fixed nonempty compact subset of Rd . Then, for any f ∈ L2 (Ω), with probability one, lim inf n1 ,...,nH →∞ fˆ∈F s n1 ,...,nH f − fˆ L2 (Ω) = 0. We now know that both of one hidden layer models and deeper models have universal approximation ability. Then, a natural question arises: how can depth benefit us? To answer the question, note that H hidden layer model only needs O(nH) number of parameters (by setting n = n1 , n2 , · · · , nH ) to create around nH d paths, whereas one hidden layer model requires O(nH ) number of parameters to do so. Intuitively, because of this, the expressive power would grow exponentially in depth H, if those exponential paths are not redundant to each other. The redundancy among the paths would be minimized via randomness in the switching and exponentially many combinations of nonlinearities σs . We formalize this intuition by considering concrete degrees of approximation powers for our models. To do so, we adopt a type of a degree of “smoothness” on the target functions from a previous work (Barron 1993). Consider Fourier representation of a target function as f (x) =   f˜(ω)eiω x . Define a class of smooth functions ΓC : ω∈Rd    ˜ ΓC = x → f (x) : ω 2 |f (ω)| ≤ C . ω∈Rd Any f ∈ ΓC with finite C is continuously differentiable in Rd , and the gradient of f can be written as ∇x f (x) =   iω f˜(ω)eiω x . Thus, via Plancherel theorem, we can ω∈Rd view C as the bound on ∇x f (x) L(Rd ) . See the previous work (Barron 1993) for a more detailed discussion on the properties of this function class ΓC . Theorem 5 states that a lower bound on the approximation power gets better exponentially in depth H. Theorem 5 (Lower bound on universal approximation power) Let Ω = [0, 1]d . For every fixed finite integer s, for any depth H ≥ 0, and for any set of nonzero widths {n1 , n2 , . . . , nH },  H −1/d  κC sup inf f − fˆ L2 (Ω) ≥ 2 nl , s d f ∈ΓC fˆ∈Fn ···n 1 where κ ≥ (8πe l=1 H (π−1) −1 ) is a constant. By setting n = n1 = · · · = nH , the lower bound in Theorem 5 becomes: supf ∈ΓC inf fˆ∈F s f − fˆ L2 (Ω) ≥ κC −H/d , d2 n H. n1 ···nH where we can easily see the benefit of the depth However, this analysis is still preliminary for the purpose of fully understanding the benefit of the depth. Our hope here is to provide a formal statement to aid intuitions. 
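For readers who prefer code to the recursive definition, the multilayer model of equation (6) can be sketched as follows. The toy dimensions and variable names are our own; only the W matrices and the output weights would be trained in practice, while the R matrices remain random.

```python
import numpy as np

def sigma(Z, s):
    # Elementwise sigma_s(z) = z**s * H(z).
    return (Z > 0).astype(float) if s == 0 else (Z ** s) * (Z > 0)

def deep_semirandom(X, Rs, Ws, W_out, s=0):
    # Eq. (6): the random switching path h_r and the trainable path h_w are
    # updated in parallel; h_r^(l) = sigma_s(h_r^(l-1) R^(l)) and
    # h_w^(l) = h_r^(l) ⊙ (h_w^(l-1) W^(l)); the output is h_w^(H) W^(H+1).
    A = np.hstack([np.ones((X.shape[0], 1)), X])   # x~ = (1, x): bias in layer 1
    h_r = sigma(A @ Rs[0], s)
    h_w = h_r * (A @ Ws[0])
    for R, W in zip(Rs[1:], Ws[1:]):
        h_r = sigma(h_r @ R, s)
        h_w = h_r * (h_w @ W)
    return h_w @ W_out

rng = np.random.default_rng(0)
m, d, widths = 4, 3, [6, 5, 5]                     # H = 3 hidden layers
dims = [d + 1] + widths
Rs = [rng.normal(size=(a, b)) for a, b in zip(dims, dims[1:])]
Ws = [rng.normal(size=(a, b)) for a, b in zip(dims, dims[1:])]
for R in Rs:                                       # r_1^(l) = e_1: a constant path
    R[:, 0] = 0.0
    R[0, 0] = 1.0
X = rng.normal(size=(m, d))
W_out = rng.normal(size=widths[-1])
print(deep_semirandom(X, Rs, Ws, W_out, s=0).shape)   # -> (4,)
```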
To help our intuitions further, we discuss about an upper bound on universal approximation power for multilayer model in Appendix. Fns1 ,...,nH , Optimization Theory where CŶ = CW Cσx . Similarly to one hidden layer case, we consider the following optimization problem: The generalization bound in Corollary 7 can be exponentially better than the known bounds of ReLU; e.g., see (Sun et al. 2016; Neyshabur, Tomioka, and Srebro 2015; Xie, Deng, and Xing 2015). The known generalization upper bounds of ReLU explicitly contain 2H factor, which comes from ReLU nonlinearity and grows exponentially in depth H. In contrast, the generalization bound in Corollary 7 does not explicitly depend on any of the depth, the number of trainable weights, and the dimensionality of the domain. Note that fn1 ,...,nH (x) = vec(w) vec(σ, x) = vec(w) 2 vec(σ, x) 2 cos θ. Hence, even though the dimensionality of vec(w) and vec(σ, x) grows exponentially in the depth, the norm bound CW Cσx would stay near the norm of the output. Indeed, CW Cσx ≈ fn1 ,...,nH (x)/ cos θ. As a new related work, the generalization bounds of standard nets trained by a novel two-phase training procedure in a recent paper (Kawaguchi, Kaelbling, and Bengio 2017) have this desired qualitative property similarly to our generalization bounds of semi-random nets. minimize L(H) (w) = w∈Rdw m 2 1  yi − fns1 ,...,nH (x; w) . 2m i=1 Compared to one hidden layer case, our theoretical understanding of multilayer model is rather preliminary. Here, given a function g(w1 , w2 , . . . , wn ), we say that w̄ = (w̄1 , w̄2 , . . . , w̄n ) is a global minimum of g with respect to w1 if w̄1 is a global minimum of g̃(w1 ) = g(w1 , w̄2 , . . . , w̄n ). Corollary 6 (No bad local minima and few bad critical points w.r.t. last two layers) For any s, any depth H ≥ 1, and any set of nonzero widths {n1 , n2 , . . . , nH }, the optimization problem of L(H) (w) has the following property: (i) every local minimum is a global minimum with respect to (W (H) , W (H+1) ), and 1 Ex (f (x) − fˆ(x; w∗ ))2 − L(H) (w) 2  ≤ (CY2 + CŶ2 ) (H+1) = 0 for all k ∈ {1, 2, . . . , nH }, every criti(ii) if wk cal point is a global minimum with respect to (W (H) , W (H+1) ). Future work is still required to investigate the theoretical nature of the optimization problem with respect to all parameters. Some hardness results of a standard neural network optimization come from the difficulty of learning activation pattern via optimization of the variable r (Livni, ShalevShwartz, and Shamir 2014). In this sense, our optimization problem is somewhat easier, and it would be interesting to see if we can establish meaningful optimization theory for semi-random model as a first step to establish the theory for neural networks in general. Generalization Guarantee The following corollary bounds the generalization error. In the statement of the corollary, we can easily see that the generalization error goes to zero as m increases (as long as the relevant norms are bounded). Hence, we can achieve generalization. In Corollary 7, we use the following notations: σ, xk0 ,k1 ,...,kH = σk1 ,...,kH (x)xk0 and wk0 ,k1 ,...,kH =  H (l) (H+1) . Let vec(M ) be a vectorization of l=1 wkl−1 kl wkH a tensor M . Corollary 7 (Generalization bound for deep model) Let s ≥ 0 and H ≥ 1 be fixed. Let nl > 0 be fixed for l = 1, 2, . . . , H. Consider the model class Fns1 ,...,nH with vec(w) 2 ≤ CW and vec(σ, x) 2 ≤ Cσx almost surely. 
Then, with probability at least 1 − δ, for any f̂ ∈ F^s_{n1,...,nH},
(1/2) E_x(f(x) − f̂(x; w))^2 − L^(H)(w) ≤ (C_Y^2 + C_Ŷ^2) √(log(1/δ) / (2m)) + 2 (C_Y + C_Ŷ) C_Ŷ / √m,
where C_Ŷ = C_W C_σx.

Experiments
We compare semi-random features with random features (RF) and neural networks with ReLU on both UCI datasets and image classification benchmarks. We study two variants of semi-random features, s = 0 (LSR: linear semi-random features) and s = 1 (SSR: squared semi-random features) in σs(·) from equation (1). Additional experimental details are presented in the Appendix. The source code of the proposed method is publicly available at: http://github.com/zixu1986/semi-random.

Simple Test Function
We first tested the methods on a simple sine function, f(x) = sin(x), where we can easily understand what is happening. Figure 1 shows the test errors with one standard deviation. As we can see, the semi-random network (LSR) performed the best. The problem of ReLU became clear once we visualized the function learned at each iteration: the ReLU network had difficulty diversifying its activation units to mimic the frequent oscillations of the sine function (i.e., it took a long time to diversely allocate the breaking points of its piecewise linear function). The visualizations of the learned functions at each iteration for each method are presented in the Appendix. On average, the methods took 54.39 (ReLU), 43.04 (random), and 45.44 (semi-random) seconds. Their training errors are presented in the Appendix.

Figure 1: Test error for a simple test function.

UCI datasets
We have comparisons on six large UCI datasets. The network architecture used on these datasets is multi-layer networks with l ∈ {1, 2, 4} hidden layers and k ∈ {1, 2, 4, 8, 16} × d hidden units per layer, where d is the input data dimension.

Table 1: Performance comparison on UCI datasets. RF: random features; LSR and SSR: linear (s = 0) and squared (s = 1) semi-random features, respectively. m_tr: number of training data points; m_te: number of test data points; d: dimension of the data. Errors are in %.

  Dataset    m_tr      m_te     d    | ReLU  | RF    | LSR   | SSR
  covtype    522,910   58,102   54   | 5.6   | 20.2  | 5.7   | 14.4
  webspam    280,000   70,000   123  | 1.1   | 6.0   | 1.1   | 2.2
  letter     15,000    5,000    16   | 13.1  | 14.9  | 6.5   | 5.6
  adult      32,561    16,281   123  | 15.0  | 14.9  | 14.8  | 14.9
  senseit    78,823    19,705   100  | 13.6  | 16.0  | 13.9  | 13.3
  sensor     48,509    10,000   48   | 2.0   | 13.4  | 1.4   | 5.7

Comparison of best performance. In Table 1, we list the best performance among different architectures for the semi-random and fully random methods. For ReLU, we fix the number of hidden units to be d. On most datasets, semi-random features achieve smaller errors than ReLU by using more units. In addition, semi-random units have significantly lower errors than random features.

Matching the performance of ReLU. The top row of Figure 2 demonstrates how many more units are required for random and semi-random features to reach the test errors of networks with ReLU. First, all three methods enjoy lower test errors as the number of hidden units increases. Second, semi-random units can achieve comparable performance to ReLU with slightly more units, around 2 to 4 times more on the webspam dataset. In comparison, random features require many more units, more than 16 times. These experiments clearly show the benefit of adaptivity in semi-random features.

Depth vs width. The bottom row of Figure 2 explores the benefit of depth. Here, "l-layer" indicates an l hidden layer model. To grow the total number of units, we can either use more layers or more units per layer. The experimental results suggest that we gain more in performance by going deeper. The ability to benefit from a deeper architecture is an important feature that is not possessed by random features. The details of how the test error changes w.r.t. the number of layers and the number of units per layer are shown in Figure 3. As we can see, on most datasets, more layers and more units lead to smaller test errors. However, the adult dataset is noisier and easier to overfit; all types of neurons perform about the same on this dataset, and more parameters actually lead to worse results. Furthermore, the squared semi-random features have a very similar error pattern to the neural network with ReLU.
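Before turning to the image benchmarks, it may help to connect the simple test function back to the one hidden layer theory. Since equation (2) is linear in the products w_k^(2) w_k^(1), its global minimum can be computed directly by least squares on the matrix D of equation (4), as characterized in Theorem 2 (iv); any such solution can be realized by setting w_k^(2) = 1 and w_k^(1) to the k-th block. The sketch below uses our own variable names and a single hidden layer of 50 LSR units, whereas the experiments above use a 1-50-50-1 architecture trained with SGD, so this is an illustration of the theory rather than a reproduction of the experiment.

```python
import numpy as np

def build_D(X, R, s=0):
    # (i, j)-th 1 x (d+1) block of D is sigma_s(x_i~^T r_j) * x_i~^T  (Eq. 4).
    A = np.hstack([np.ones((X.shape[0], 1)), X])      # x~ = (1, x)
    Z = A @ R
    S = (Z > 0).astype(float) if s == 0 else (Z ** s) * (Z > 0)
    return (S[:, :, None] * A[:, None, :]).reshape(X.shape[0], -1)

rng = np.random.default_rng(0)
m, n = 5000, 50
X = rng.uniform(-12 * np.pi, 12 * np.pi, size=(m, 1))
y = np.sin(X[:, 0])

R = rng.normal(size=(2, n))
R[:, 0] = (1.0, 0.0)                                  # r_1 = e_1 (constant unit)

D = build_D(X, R, s=0)                                # LSR units
v, *_ = np.linalg.lstsq(D, y, rcond=None)             # v plays the role of [[w]]
train_mse = np.mean((D @ v - y) ** 2)

# Recover explicit weights: W1[:, k] = k-th block of v, W2[k] = 1.
W1, W2 = v.reshape(n, 2).T, np.ones(n)
print(train_mse)
```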
Figure 2 ((a) covtype dataset, (b) webspam dataset): Top row: linear semi-random features match the performance of ReLU for two hidden layer networks on the two datasets. Bottom row: depth vs. width of linear semi-random features. Both plots show the performance of semi-random units. Even when the total number of units is the same, deeper models achieve lower test error.

Table 2: Test error (in %) of different methods on three image classification benchmark datasets. 2×, 4× and 16× mean the number of units used is 2, 4, and 16 times that used in the neural network with ReLU, respectively.

  neuron type   MNIST   CIFAR10   SVHN
  ReLU          0.70    16.3      3.9
  RF            8.80    59.2      73.9
  RF 2×         5.71    55.8      70.5
  RF 4×         4.10    49.8      58.4
  RF 16×        2.69    40.7      37.1
  LSR           0.97    21.4      7.6
  LSR 2×        0.78    17.4      6.9
  LSR 4×        0.71    18.7      6.4
  LSR-IE        0.59    20.0      6.9
  LSR-IE 2×     0.47    16.8      5.9
  LSR-IE 4×     0.54    14.9      4.8

Figure 3: Detailed experiment results for all types of neurons on all datasets, shown as heat maps of test error vs. the number of layers and the number of units per layer. Panels: (a) ReLU, (b) random features, (c) linear semi-random features, (d) squared semi-random features.

Image classification benchmarks
We have also compared different methods on three image classification benchmark datasets. Here we use publicly available and well-tuned neural network architectures from tensorflow for the experiments.
We simply replace ReLU by random and semi-random units respectively. The results are summarized in Table 2. LSR-IE Unit. To improve the practical performance of LSR unit while preserving its theoretical properties, we additionally introduced a new unit, called linear semi-random implicit-ensemble (LSR-IE) unit. Unlike an explicit ensemble, LSR-IE trains only a single network and comparable to a single standard network with dropout. Recall that in LSR unit, we have a single random net (with random weights r) and a single trainable net (with trainable weights w). In contrast, in LSR-IE unit, we increase the number of the random nets (with random weights rk for k = 1, . . . , K) while keeping the number of the trainable net to be one. That is, by modifying equation (1) of semirandom unit, we implement the LSR-IE unit as follows: for each k ∈ {1, . . . , K}, we have   φs (x; rk , w) = σs (x rk ) x w , (7) with 5 × 5 filters and the number of channels is 32 and 64, respectively. Each convolution is followed by a maxpooling layer, then finally a fully-connected layer of 512 units with 0.5 dropout. Increasing the number of units for semi-random leads to better performance. At four times the size of the original network, semi-random feature can achieve very close errors of 0.71%. In contrast, even when increasing the number of units to 16 times more, random features still cannot reach below 1%. CIFAR10 dataset. CIFAR 10 contains internet images and consists of 50,000 32 × 32 color images for training and 10,000 images for test. We use a convolutional neural network architecture with two convolution layers, each with 64 5 × 5 filters and followed by max-pooling. The fullyconnected layers contain 384 and 192 units. By using two times more units, semi-random features are able to achieve similar performance with ReLU. However, the performance of random features lags behind by a huge margin. and we randomly sample the index k at each step of SGD during training. Then, during test, we average the preactivation of output units over k ∈ {1, . . . , K}. We set the number of random nets K to be 2 in MNIST and CIFAR-10 and 4 in SVHN. SVHN dataset. The Street View House Numbers (SVHN) dataset contains house digits collected by Google Street View. We use the 32 × 32 color images version and only predict the digits in the middle of the image. For training, we combined the training set and the extra set to get a dataset with 604,388 images. We use the same architecture as in the CIFAR10 experiments. MNIST dataset. MNIST is a popular dataset for recognizing handwritten digits. It contains 28 × 28 grey images, 60,000 for training and 10,000 for test. We use a convolution neural network consisting of two convolution layers, In this paper, we proposed the method of semi-random features. For one hidden layer model, we proved that our model Conclusion class contains an arbitrarily good function as the width increases (universality), and we can find such a good function (optimization theory) that generalizes to unseen new data (generalization bound). For deep model, we proved universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound. Furthermore, we demonstrated the advantage of semi-random features over fully-random features via empirical results and theoretical insights. The idea of semi-random feature itself is more general than what is explored in this paper, and it opens up several intersecting directions for future work. 
Indeed, we can generalize any deep architecture by having an option to include semi-random units per unit level. We can also define a more general semi-random feature as: given some nonconstant functions σ and g,   φ(x; r, w) = σ(x r)g x w , where x = (1, x) is assumed to be in R1+d , r = (r0 , r) is sampled randomly, and w = (w0 , w) is adjustable weights to be learned from data. This general formulation would lead to a flexibility to balance expressivity, generalization and theoretical tractability. References Barron, A. R. 1993. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information theory 39(3):930–945. Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47:253–279. Choromanska, A.; LeCun, Y.; and Arous, G. B. 2015. Open problem: The landscape of the loss surfaces of multilayer networks. In Proceedings of The 28th Conference on Learning Theory, 1756–1760. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and FeiFei, L. 2009. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009, 248–255. IEEE. Freeman, C. D., and Bruna, J. 2016. Topology and geometry of half-rectified network optimization. arXiv preprint arXiv:1611.01540. Huang, G.-B.; Chen, L.; Siew, C. K.; et al. 2006. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Networks 17(4):879–892. Kawaguchi, K.; Kaelbling, L. P.; and Bengio, Y. 2017. Generalization in deep learning. arXiv preprint arXiv:1710.05468. Kawaguchi, K. 2016. Deep learning without poor local minima. In Advances in Neural Information Processing Systems. LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep learning. Nature 521(7553):436–444. Leshno, M.; Lin, V. Y.; Pinkus, A.; and Schocken, S. 1993. Multilayer feedforward networks with a nonpolynomial ac- tivation function can approximate any function. Neural networks 6(6):861–867. Livni, R.; Shalev-Shwartz, S.; and Shamir, O. 2014. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, 855– 863. Mohri, M.; Rostamizadeh, A.; and Talwalkar, A. 2012. Foundations of machine learning. MIT press. Neyshabur, B.; Tomioka, R.; and Srebro, N. 2015. Normbased capacity control in neural networks. In Proceedings of The 28th Conference on Learning Theory, 1376–1401. Pennington, J.; Yu, F.; and Kumar, S. 2015. Spherical random features for polynomial kernels. In Advances in Neural Information Processing Systems, 1846–1854. Rahimi, A., and Recht, B. 2008. Random features for largescale kernel machines. In Advances in Neural Information Processing Systems, 1177–1184. Rahimi, A., and Recht, B. 2009. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in neural information processing systems, 1313–1320. Scholkopf, B., and Smola, A. J. 2001. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press. Shamir, O. 2016. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037. Shawe-Taylor, J., and Cristianini, N. 2004. Kernel methods for pattern analysis. Cambridge university press. Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. 
Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484–489. Sindhwani, V.; Avron, H.; and Mahoney, M. W. 2014. Quasi-monte carlo feature maps for shift-invariant kernels. In International Conference on Machine Learning. Sun, S.; Chen, W.; Wang, L.; Liu, X.; and Liu, T.-Y. 2016. On the depth of deep neural networks: a theoretical view. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2066–2072. AAAI Press. Swirszcz, G.; Czarnecki, W. M.; and Pascanu, R. 2016. Local minima in training of deep networks. arXiv preprint arXiv:1611.06310. Telgarsky, M. 2016. Benefits of depth in neural networks. In 29th Annual Conference on Learning Theory, 1517–1539. Xie, P.; Deng, Y.; and Xing, E. 2015. On the generalization error bounds of neural networks under diversityinducing mutual angular regularization. arXiv preprint arXiv:1511.07110. Xie, B.; Liang, Y.; and Song, L. 2016. Diversity leads to generalization in neural networks. arXiv preprint arXiv:1611.03131. Appendix In practice, we can typically assume that x ∈ Ω ⊆ Rd with some sufficiently large compact subspace Ω. Thus, given such a Ω, we use the following sampling procedure in order for our method to be efficiently executable in practice: r is sampled uniformly from a d dimensional unit sphere Sd−1 , and r0 is sampled such that the probability measure on any open ball in R with its center in [−radius(Ω), radius(Ω)] is nonzero.2 For example, uniform distribution that covers [−radius(Ω), radius(Ω)] or normal distribution with a nonzero finite variance suffices the above requirement. Visualization of semi-random features We visualize the surfaces of two semi-random features (a) linear (LSR) (s = 0) and (b) square (SSR) (s = 1) in Figure 4. The semi-random features are defined by two parts. The random weights and nonlinear thresholding define a hyperplane, where on one side, it is zero and on the other side, it is either an adjustable linear or adjustable square function. During learning, gradient descent is used to tune the adjustable parts to fit target functions. Importance of Bias Terms in First Layer We note the importance of the bias term r0 in the first layer, to obtain universal approximation power. Without the bias term, Theorem 1 does not hold. Indeed, it is easy to see that without the bias term, Heaviside step functions do not form a function class with universal approximation ability for x ∈ Rd . The importance of the bias term can also be seen by the binomial theorem. For example, let g(z) (z ∈ R) be a polynomial of degree at least c for some constant c, then g does not form a function class that contains the polynomial of degree less than c in z. However, if z = x + r0 , it is possible to contain polynomials of degrees less than c in x by binomial theorem. Proofs for One Hidden Layer Model In this section, we provide the proofs of the theorems presented in Section “One Hidden Layer Model”. We denote a constant function x → 1 by 1 (i.e., 1(x) = 1). Proof of Theorem 1 (Universal Approximation) To prove the theorem, we recall the following known result. Lemma 8 (Leshno et al. 1993, Proposition 1) For any σ : R → R, span({σ(r x + r0 ) : r ∈ Rd , r0 ∈ R}) is dense in Lp (Ω) for every compact subset Ω ⊆ Rd and for every p ∈ [1, ∞),if and only if σ is not a polynomial of finite degree (almost everywhere). With this lemma, we first prove that for every fixed s, the span of a set of our functions σs is dense in L2 (Ω). 
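(As an aside, the sampling procedure described at the beginning of this appendix is simple to implement; a small sketch with our own names, using the uniform choice for r0, is given below.)

```python
import numpy as np

def sample_r(d, radius_omega, rng):
    # Direction uniform on the unit sphere S^{d-1} (normalized Gaussian);
    # bias r0 from a distribution placing nonzero mass on every open ball
    # centered in [-radius(Omega), radius(Omega)]; uniform on that interval works.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    r0 = rng.uniform(-radius_omega, radius_omega)
    return np.concatenate(([r0], direction))   # ordered as (r0, r), like x~ = (1, x)

rng = np.random.default_rng(0)
print(sample_r(d=5, radius_omega=3.0, rng=rng))
```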
2 The radius of a compact subspace Ω of Rd is defined as radius(Ω) = supx∈Ω x2 and a open ball is defined as B(z̄, δ) = {z : z − z̄ < δ} where z̄ is the center of the ball. (a) linear semi-random (b) square semi-random Figure 4: Visualization of linear semi-random (LSR) (s = 0) and squared semi-random (SSR) (s = 1) units. The random weights define a hyperplane: on one side, it is zero while on the other side, it is either adjustable linear or adjustable square function. Lemma 9 For every fixed finite integer s and for every compact subset Ω ⊆ Rd , span{σs (r x + r0 ), 1(x) : r ∈ Sd−1 , r0 ∈ R} is dense in L2 (Ω), where g(x) = 1 represents a constant function. Proof Since σ0 is Heaviside step function, σ0 is not a polynomial of finite degree and span{σ0 (r x + r0 ) : r ∈ Rd , r0 ∈ R} is dense in L2 (Ω) (we know each of both statements independently without Lemma 8). For the case of s ≥ 1, σs (z) = z s σ0 (z) is not a polynomial of finite degree (i.e., otherwise, σ0 is a polynomial of finite degree). Thus, by Lemma 8, for fixed finite integer s ≥ 1, span{σs (r x + r0 ) : r ∈ Rd , r0 ∈ R} is dense in L2 (Ω).  Since σs ( rrx + r0 ) = r −s σs (r x + r r0 ) for all r = 0 (because H(z) is insensitive to a positive scaling of z), span{σs (r x + r0 ) : r ∈ Rd , r0 ∈ R} = span{σs (r x + r0 ) : r ∈ Sd−1 ∪ {0}, r0 ∈ R} = span{σs (r x + r0 ), 1(x) : r ∈ Sd−1 , r0 ∈ R}. This completes the proof. In practice, we want to avoid wasting samples; we want to have a choice to sample r0 from the interval that matters. In order to do that, we admit the dependence of our function class on Ω via the following lemma. Lemma 10 For every fixed finite integer s and for every compact subset Ω ⊆ Rd , span{σs (r x + r0 ), 1(x) : r ∈ Sd−1 , |r0 | ≤ radius(Ω)} is dense in L2 (Ω). Proof For any r0 ∈ R \ [−radius(Ω), +radius(Ω)], since |r x| ≤ r 2 x 2 ≤ radius(Ω), either σs (r x + r0 ) = 0 or = (r x + r0 )s , for all x ∈ Ω. Meanwhile, with r0 = radius(Ω), σs (r x + r0 ) = (r x + r0 )s , for all s s s−k  k x ∈ Ω. Since (r x + r0 )s = (r x) , k=0 k r0 it is a polynomial in x, and r0 only affects the coefficients of the polynomial. Thus, for any x ∈ Ω, span{σs (r x + r0 ), 1(x) : r ∈ Sd−1 , r0 ∈ R} = span{σs (r x + r0 ), 1(x) : r ∈ Sd−1 , |r0 | ≤ radius(Ω)}, because the difference in the nonzero coefficients of the polynomials can be taken care of by adjusting the coefficients of the linear combination, unless the right-hand side only contains r0 ∈ {0}. Having only r0 ∈ {0} implies Ω = {0}, which is assumed to be false. We use the following lemma to translate the statement with uncountable samples in Lemma 10 to countable samples in Theorem 1. Lemma 11 Let s be a fixed finite integer, Ω be any compact subset of Rd , and (r̄, r̄0 ) is in Sd−1 × [−radius(Ω), radius(Ω)]. Then, for any > 0, there exists a δ such that for any (r, r0 ) ∈ B((r̄, r̄0 ), δ),  (σs (r x + r0 ) − σs (r̄ x + r̄0 ))2 < . (ri , r0i ) in any δ-ball becomes one. This shows the last line. The above inequality leads a contradiction |c| < |c| by choosing < |c|/ f0 . This completes the proof. Proof of Theorem 2 (No Bad Local Minima and Few Bad Critical Points) Since multiplying a constant in w does not change the optimization problem (in terms of optimizer), in the proof of Theorem 2, we consider L(w) := to be succinct. m   i=1 2 yi − fˆns (xi ; w) , x∈Ω Proof For the case of s = 0, it directly follows the proof of lemma II.5 in (Huang et al. 2006). 
For the case of s ≥ 1, we can think of σs as a uniformly continuous function with a compact domain that contains all the possible inputs of σs for our choice of Ω and Sd−1 × [−radius(Ω), radius(Ω)]. The lemma immediately follows from its uniform continuity. Proof of Theorem 2 (iii) and (iv) Define a m × (nd) matrix as ⎤ ⎡ (2) (2)   · · · wn σs (x w1 σs (x 1 r1 )x1 1 rn )x1 ⎥ ⎢ .. .. .. D = ⎣ ⎦. . . . (2)  w1 σs (x m r1 )xm (2)  wn σs (x m rn )xm ··· (2) Note that if s → ∞, x → ∞, r → ∞, or r0 → ∞, the proof of Lemma 11 does not work. We are now ready to prove Theorem 1. Proof of Theorem 1 Fix s and let σi : x → σs (ri x + r0i ) with ri and r0i being sampled randomly for i = 2, 3, . . . as specified in Section “Semi-Random Features”. Let σ1 : x → σs (r1 x + r01 ) = 1(x) = 1 as specified in Section “One Hidden Layer Model”. Since span{σ1 , σ2 , ..., σn } ⊆ Fns (as we can get span{σ1 , σ2 , ..., σn } by only using (1) (2) w0i and wi ), we only need to prove the universality of span{σ1 , σ2 , ..., σn } as n → ∞. We prove the statement by contradiction. Suppose that span{σ1 , σ2 , ...} is not dense in L2 (Ω) (with some nonzero probability). Then, there exists a nonzero f0 ∈ L2 (Ω) such that f0 , σi  = 0 for all i = 1, 2, . . .. However, from Lemma 10, either f0 , σ1  = 0 or there exists (r̄, r̄0 ) ∈ Sd−1 × [−radius(Ω), radius(Ω)] such that | f0 , σr̄,r̄0  | = |c| > 0 with some constant c, where σr̄,r̄0 = σs (r̄ x + r̄0 ). If f0 , σ1  = 0, we get the desired contradiction, and hence we considerer the latter case. In the latter case, for any > 0, |c| = | f0 , σr̄,r̄0  | = | f0 , σr̄,r̄0  − f0 , σi  | ∀i ∈ {2, 3, ...} = | f0 , σr̄,r̄0 − σi  | ∀i ∈ {2, 3, ...} ≤ σr̄,r̄0 − σi f0 ∀i ∈ {2, 3, ...} < f0 ∃i ∈ {2, 3, ...}, where the last line follows from lemma 11 and our sampling procedure; that is, for any > 0, if a sampled (ri , r0i ) is in a δ-ball, σi − σr̄,r̄0 < (from Lemma 11). Since our sampling procedure allocates nonzero probability on any such interval, as i → ∞, the probability of sampling  where wj σs (x i rj )xi is a 1 × d block at (i, j)-th block entry. Then, we can rewrite the objective function as where L(w) = D vec(w(1) ) − Y (1) vec(w(1) ) = (w1 (1) , w2 2 2, , . . . , wn(1) ) . Therefore, at any critical point w.r.t. w(1) (i.e., by taking gradient of L w.r.t. w(1) and setting it to zero), Ŷ is the projection of Y onto the column space of D . n (2)  (1) Since fˆn (xi ; w) = k=1 σs (x i rk )(xi wk )wk , L(w) = D[[w]] − Y where (2) (1) [[w]] = (w1 w1 (2) (1) , w 2 w2 2 2, , . . . , wn(2) wn(1) ) . (8) Therefore, we achieve a global minimum if Ŷ is the projection of Y onto the column space of D. (2) If wk = 0 for all k ∈ {1, 2, . . . , n}, the column space  of D is equal to that of D. Hence, Ŷ being the projection of Y onto the column space of D is achieved by our parameterization, which completes the proof of Theorem 2 (iv). Any critical point w.r.t. (w(1) , w (2) ) needs to be a critical point w.r.t. w(1) . Hence, every critical point is a (2) global minimum if wk = 0 for all k ∈ {1, 2, . . . , n}. This completes the proof of Theorem 2 (iii). (2) Proof of Theorem 2 (ii) From Theorem 2 (iii), if wk = 0 for all k ∈ {1, 2, . . . , n}, every local minimum is a global minimum (because a set of local minima is included in a set of critical points). (2) Consider any point w̄ where w̄k = 0 for some k ∈ {1, 2, . . . , n}. Without loss of generality, with some integer (2) (2) c, let w̄k = 0 for all k ∈ {1, . . . , c − 1} and w̄k = 0 for all k ∈ {c, . . . , n}. 
Then, at the point w̄, 2 2, L(w̄) = D1:c−1 [[w̄]]1:c−1 − Y where D1:c−1 and [[w̄]]1:c−1 are the block elements of D (equation 4) and [[w̄]] (equation 8) that correspond to the nonzero w̄k . Then, from the proof of Theorem 2 (iii), if w̄ is a critical point, then D1:c−1 [[w̄]]1:c−1 is the projection of Y onto D1:c−1 . If w̄ is a local minimum, for sufficiently small > 0, for (1) (2) any k̄ ∈ {c, . . . , n}, for any (wk̄ , wk̄ ) ∈ Rd × R , 2 2 D1:c−1 [[w̄]]1:c−1 − Y ≤ D1:c−1 [[w̄]]1:c−1 + Dk̄ [[w̄ ]]k̄ − Y which is simplified to + Dk̄ [[w̄ ]]k̄ 2 , (9) where [[w̄ ]]k̄ is a -perturbation of [[w̄]]k̄ as [[w̄ ]]k̄ = (2) (1) wk̄ (w̄k̄ + (1) wk̄ ). Here, where D,k̄ is the corresponding k̄-th block of size m × d. In equation (9), the first term contains and 2 terms whereas the second term contains 2 , 3 and 4 terms. With sufficiently small, the term must be zero (as we can change its sign by the direction of perturbation). Then, with sufficiently small, the 2 terms become dominant and must satisfy (1) (2) 0 ≤2 2 (D1:c−1 [[w̄]]1:c−1 − Y ) (Dk̄ wk̄ wk̄ ) 2 + (1) (1) (2) 2 Dk̄ w̄k̄ wk̄ , (2) for any (wk̄ , wk̄ ) ∈ Rd × R, which implies (1) (2) (1) for any |wk̄ | << minj∈{1,2,...,d} |(wk̄ )j | (see footnote3 ). It implies that for any k̄ ∈ {c, . . . , n}, 0 = (D1:c−1 [[w̄]]1:c−1 − Y ) Dk̄ . Since D1:c−1 [[w̄]]1:c−1 is the projection of Y onto D1:c−1 (as discussed above), this implies that D (D[[w̄]] − Y ) = (D1:c−1 [[w̄]]1:c−1 − Y ) D = 0. Thus, D[[w̄]] is the projection of Y onto D, which is a global minimum. 3 For readers who have experienced deriving some conditions of a local minimum, it may sound too good to be true that we can ignore the 2 term from Dk̄ [[w̄ ]]k̄ 2 . However, this is true and (2) not that good since we are considering a spacial case of w̄k̄ = 0. (2) (2) (2) (1) With a standard use of Rademacher complexity (e.g., see section 3 in Mohri, Rostamizadeh, and Talwalkar for a clear introduction of Rademacher complexity, and the proof of lemma 12 in Xie, Liang, and Song for its concrete use), 1 Ex (f (x) − fˆ(x; w∗ ))2 − L(w) 2  (1) Indeed, if w̄k̄ = 0, [[w̄ ]]k̄ = (w̄k̄ + wk̄ )(w̄k̄ + wk̄ ) creates the 2 term from Dk̄ [[w̄ ]]k̄ 2 that we cannot ignore. (10) log 1δ + 2(CY + CŶ )Rm (F), 2m ≤ (CY2 + CŶ2 ) with probability at least 1 − δ, where (CY2 + CŶ2 ) is an upper bound on the value of L(w), and (CY + CŶ ) comes from an upper bound on the Lipschitz constant of the squared loss function. Note that f (x; w) = σ, x w ≤ CW Cσx . We first compute an upper bound on the empirical Rademacher complexity of our model class F as follows: with Rademacher variables ξi ,   m   ξi σ, xi  w mR̂m (F) = E sup w∈Sw i=1 m       ≤ E sup  ξi σ, xi  w 2  w∈Sw  i=1  2  m     = CW E  ξi σ, xi     0 ≤2 2 (D1:c−1 [[w̄]]1:c−1 − Y ) (Dk̄ wk̄ wk̄ ), (2) if Dk = 0. Thus, it is non-convex if D = 0. Proof of Theorem 3 (Generalization Bound for Shallow Model) 2 2, 0 ≤2(D1:c−1 [[w̄]]1:c−1 − Y ) (Dk̄ [[w̄ ]]k̄ ) Proof of Theorem 2 (i) It is sufficient to prove non(1) (2) convexity w.r.t. the parameters (wk , wk ) for each k ∈ (k) (2) {1, 2, . . . , n}; that is, we prove that L (w1 , wk ) = (1) (2) L(w)|wi=k =0 = Dk wk wk − Y 22 is non-convex, where D,k is the corresponding k-th block of size m × d. It is indeed easy to see that L is non-convex. For example, at (1) (2) (wk , wk ) = 0, its Hessian is   0 Dk  0, Dk 0 i=1 2 where the second line follows the Cauchy-Schwarz inequality. Furthermore,     m   m m       E  ξi σ, xi  ≤ E ξi ξj σ, xi  σ, xj    i=1 i=1 j=1 2  m  σ, xi 22 ≤ i=1 √ ≤ Cσx m. 
where the first line uses Jensen’s inequality for the concave function, and the second line follows that for any   m 2 m m ≤ z, E ξ ξ z z i j i j j=1 i=1 i=1 zi , where we used the definition of Rademacher variables for E[ξi ξj ]. Hence, σx . Since the Rademacher complexity R̂m (F) ≤ CW√C m Rm (F) is the expectation of R̂m (F) over samples (x, y), σx Rm (F) ≤ CW√C . m Tensorial Structure Since the output of our network is the sum of the outputs of all the paths in the network, we observe the following interesting structure: fˆns1 ,...,nH (x) = n1 d   k0 =0 k1 =1 where Proof of Theorem 5 (Lower Bound on Universal Approximation Power) Since the output of our network is the sum of the outputs of all the paths in the network, we observed in Section “Tensorial Structure” that fˆn1 ,...,nH (x) = n1 d   k0 =0 k1 =1 ··· nH  [[σ]]k1 ,...,kH (x)xk0 [[w]]k0 ,k1 ,...,kH , kH =1 where (1) ··· nH  [[σ]]k1 ,...,kH (x) = σs (x rk1 ) kH =1 σk1 ,...,kH (x)xk0 wk0 ,k1 ,...,kH , (1) σk1 ,...,kH (x) = σs (x rk1 )  H  l=2 (l) H  (l) σs (h(l−1) (x)rkl ), r l=2 and [[w]]k0 ,k1 ,...,kH = H  (l) wkl−1 kl l=1 (l) σs (h(l−1) (x) rkl ), r H   (H+1) wk H . By re-writing gk0,k1,...,kH (x) = [[σ]]k1 ,...,kH (x)xk0 , this means that (H+1) and wk0 ,k1 ,...,kH = . This means l=1 wkl−1 kl wkH that the function is the weighted combination of (d + 1) × n1 × n2 . . . nH nonlinear basis functions, which is exponential in the number of layers H. Alternatively, the function can also be viewed as the inner product between two tensors σk1 ,...,kH (x)xk0 and wk0 ,k1 ,...,kH , which is nonlinear in input x, but linear in the parameter tensor wk0 ,k1 ,...,kH . We note that the parameter tensor is not arbitrary but highly structured: it is composed using a collection of matrices with H dw = (d + 1)n1 + nH + l=2 nl−1 nl number of adjustable parameters. Such special structure allows the function to generate an exponential number of basis functions yet keep the parameterization compact. Fns1 ,...,nH ⊆ span({gk0,k1,...,kH : ∀k0 , ∀k1 , . . . , ∀kH }). Then, Theorem 5 is a direct application of the following result: Lemma 12 (Barron 1993, Theorem 6) For any fixed set of M functions g1 , g2 , . . . , gM , sup inf f ∈ΓC fˆ∈span(g1 ,g2 ,...,dM ) f − fˆ L2 (Ω) ≥κ C −1/d M . d We apply Lemma 12 with our gk0,k1,...,kH to obtain the statement of Theorem 5. Proofs for Multilayer Model In this section, we provide the proofs of the theorems and corollary presented in Section “Multilayer Model”. Proof of Corollary 4 (Universal Approximation with Deep Model) (1) (2) (3) (H) From the definition of r1 , r1 , r1 , . . . , r1 , the first unit in each hidden layer of the main semi-random net is not affected by the random net; the output of the first unit of semi-random net is multiplied by one. By setting most of the weights w to zeros, we can use only the first unit at each layer from the second hidden layer, creating a single path that is not turned-off by the random activation net from the second hidden layer. Then, by adjusting weights w in the path, we can create an identity map from the second hidden layer unit, from which the path starts. In theorem 1, we have already shown that the output of any second hidden layer unit has universal approximation ability. This completes the proof. Proof of Corollary 6 (No Bad Local Minima and Few Bad Critical Points w.r.t. 
Two Last Layers) It directly follows the proof of Theorem 2, because any critical point (or any local minimum) is a critical point (or respectively, a local minimum) with respect to (W (H) , W (H+1) ). Proof of Corollary 7 (Generalization Bound for Deep Model) By noticing that fn1 ,...,nH (x) = vec(w) vec(σ, x) (i.e., the tensorial structure as discussed in the proof of Theorem 5), it directly follows the proof of Theorem 3 (Generalization Bound – shallow); in the proof of Theorem 3, replace σ, xi  and w with the corresponding definitions for multilayer models, which are vec(σ, xi ) and vec(w) of multilayer models. Discussion on an Upper Bound on Approximation Error for Multilayer Model Here, we continue the discussion in Section “Benefit of Depth”. An upper bound on approximation error can be decomposed into three illustrative terms as f − fˆ2L2 (Ω)  = f − fˆ2L2 (Ωi ) (11) i∈S ≤  i∈S f − fi 2L2 (Ωi ) Figure 5: Training error for a simple test function. + fˆ − fˆi L2 (Ωi ) + fi − fˆi 2L2 (Ωi ) , where {Ωi }i∈S is an arbitrary partition of Ω, and fi = f (xi ) is a constant function with some xi ∈ Ωi . Then, the first term and the second term represent how much each of f and fˆ varies in Ωi . In other words, we can bound these two terms by some “smoothness” of f and fˆ (some discontinuity at a set of measure zero poses no problem as we are taking the norm of L2 (Ωi ) space). The last term in equation (11) represents the expressive power of fˆ on the finite points {xi }i∈S (assuming that S is finite). Essentially, to obtain a good upper bound, we want to have a fˆ that is “smooth” and yet expressive on some finite points. For the first term in equation (11), we can obtain a bound via a simple calculation for any f ∈ ΓC as follows: by the mean value theorem, there exists zx for each x such that  f − fi 2L2 (Ωi ) = (f (x) − f (xi ))2 x∈Ωi  = (∇f (zx ) (x − xi ))2 dx x∈Ωi 2 L2 (Ωi ) x − xi 2 L2 (Rd ) x − xi xi 2L2 (Ωi ) . ≤ ∇f (zx ) ≤ ∇f (x) 2 L2 (Ωi ) 2 L2 (Ωi ) ≤ C2 x −  where x − xi 2L2 (Ωi ) = x∈Ωi x − xi 22 . Hence, the bound goes to zero as the size of Ωi decreases. We can use the same reasoning to bound the second term in equation (11) with some “smoothness” of fˆ. A possible concern is that fˆ becomes less “smooth” as the width n and depth H increase. However, from Bessel’s inequality with Gram-Schmidt process, it is clear that such an effect is bounded for a best fˆ with a finite s ≥ 1 (notice the difference from statistical learning theory, where we cannot focus on the best fˆ). The last term in equation (11) represents the error at points xi ∈ Ωi , which would be bounded via optimization theory, as the expressive power of fˆ increases as depth H and width increase. While this reasoning illustrates what factors may matter, a formal proof is left to future work. Additional Experimental Details In this section, we provide additional Experimental details. A simple test function Figure 5 shows the training errors for a sine function experiment discussed in Section “Simple Test Function”. It shows roughly the same patterns as in their test errors, indicating that there is no large degree of overfitting. Figures 6, 7, and 8 visualize the function learned at each iteration for each method. We can see that semi-random features (LSR) learns the function quickly. At each trial, 5000 points of the inputs x were sampled uniformly from [−12π, 12π] for each of training dataset and test dataset. We repeated this trial 20 times to obtain standard deviations. 
All hyperparameters were fixed to the same values for all methods, with the network architecture fixed to 1-50-50-1. For all methods, we used a learning rate of 5 × 10^-4, a momentum parameter of 0.9, and a mini-batch size of 500. For all methods, the weights are initialized as 0.1 times random samples drawn from the standard normal distribution.

Figure 6: Function learned at each iteration (semi-random, LSR); panels (a)-(f) show the learned function at increasing iteration counts, from iteration 0 up to 10^5.

Figure 7: Function learned at each iteration (fully random); panels as in Figure 6.

Figure 8: Function learned at each iteration (ReLU); panels as in Figure 6.

UCI datasets
For all methods on all datasets, we use SGD with 0.9 momentum and a batch size of 128 to run 100 epochs (passes over the whole dataset). The initial learning rate is set to 0.1 (for some combinations this leads to NaN, and we use 0.02 as the initial learning rate instead). We also use an exponential staircase decaying schedule for the learning rate, where after each epoch we decrease it to 0.95 of the previous rate. The parameters are initialized as normal distributions times 1/√d, where d is the input dimension for the layer. For ReLU and semi-random features (LSR and SSR), the behaviors change similarly on most datasets: more layers and more units lead to better performance. Also, on most datasets, adding more layers leads to better performance than adding more units per layer. The adult dataset is an exception, where all methods have similar performance and more parameters actually lead to worse performance. This is likely because adult is quite noisy and using more parameters leads to overfitting. Overfitting is also observed on the senseit dataset for LSR, where the training errors keep becoming smaller but the test errors increase.
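For reference, the UCI training setup just described corresponds to the following configuration sketch (our own names, not the released training script):

```python
# UCI training setup described above, restated as a configuration sketch.
uci_training = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "batch_size": 128,
    "epochs": 100,
    "initial_lr": 0.1,            # 0.02 for the few combinations that hit NaN
    "lr_decay_per_epoch": 0.95,   # exponential "staircase" schedule
    "init": "Normal(0, 1) / sqrt(fan_in)",
}

def uci_lr(epoch, initial_lr=0.1, decay=0.95):
    # Learning rate after `epoch` full passes over the data.
    return initial_lr * decay ** epoch

print(uci_lr(0), uci_lr(50))   # 0.1 at the start, about 0.0077 halfway through
```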
The best-performing results on a validation set are then picked as the final solution; usually the learning rate of 0.1 worked best. The parameters are again randomly initialized from normal distributions scaled by 1/√d, where d is the input dimension of the layer.

Better than Random Features?

The experimental results support our intuition that semi-random features can outperform random features with fewer units, owing to their learnable weights. We can also strengthen this intuition via the following theoretical insights. Let f ∈ F_{random,n1···nH} be a function that is a composition of fully-random features of depth H, where the adjustable weights appear only in the last layer. The following corollary states that, in the worst case, the approximation power of a model class of fully-random features is exponentially bad in the dimensionality of x.

Corollary 13 (Lower bound on approximation power for fully-random features) Let Ω = [0,1]^d. For any depth H ≥ 0, and for any set of nonzero widths {n1, n2, ..., nH},

  sup_{f∈Γ_C} inf_{f̂∈F_{random,n1···nH}} ‖f − f̂‖_{L2(Ω)} ≥ κ C (nH)^{−1/d} / d^2,

where κ ≥ (8π e^{π−1})^{−1} is a constant.

Proof. The model class of any random features is the span of the features represented at the last hidden layer. Hence, the statement of the corollary follows directly from the proof of Theorem 5.

Corollary 13 (the lower bound for fully-random features), together with Theorem 5 (the lower bound for semi-random features), reflects our intuition that the semi-random feature model can potentially gain an exponential advantage over random features by learning the hidden layers' weights. Again, because the lower bound may not be tight, this is intended only to aid our intuition.

We can also compare upper bounds on their approximation errors under an additional assumption. Assume that we can represent a target function f using some basis as

  f(x) = ∫_{r∈S^{d−1}, ‖w‖≤C_W} σ(r⊤x)(w⊤x) p(r, w).

Then we can obtain the following results.

• If we have access to the true distribution p(r, w), f(x) can be approximated as a finite sample average, with approximation error O(1/√n).

• Without knowing the true distribution p(r, w), a purely random feature approximation of f(x) incurs a large approximation error of O(c/(q0·q1·√n)). Here, q0 is the inverse of the hyper-surface area of the unit hypersphere and q1 is the inverse of the volume of a ball of radius C_W.

• Without knowing the true distribution p(r, w), the semi-random feature approach with a one-hidden-layer model can obtain a smaller approximation error of O(35c/√n).

Derivation of Upper Bounds on Approximation Errors

With an additional assumption, we compare upper bounds on approximation errors for random features and semi-random features. Recall the additional assumption that we can represent a target function f using some basis as

  f(x) = ∫_{r∈S^{d−1}, ‖w‖≤C_W} σ(r⊤x)(w⊤x) p(r, w),

where we can write p(r, w) = p(r) p(w|r).

If we have access to the true distribution p(r, w), then f(x) can be approximated as a finite sample average

  f1(x) = (1/n) Σ_{i=1}^n σ(ri⊤x)(wi⊤x),  with (ri, wi) ∼ p(r, w) i.i.d.

Such an approximation will incur an error of O(1/√n).

On the other hand, without knowing the true distribution p(r, w), a purely random feature approximation of f(x) samples both r and w from uniform distributions: with ri ∼ q(r) := Uniform(S^{d−1}) and wi ∼ q(w) := Uniform({‖w‖ ≤ C_W}) i.i.d.,

  f2(x) = (1/n) Σ_{i=1}^n [p(ri, wi)/(q(ri) q(wi))] σ(ri⊤x)(wi⊤x) = (1/n) Σ_{i=1}^n αi σ(ri⊤x)(wi⊤x),

where q(r) = q0 := Γ(d/2)/(2π^{d/2}) (the inverse of the hyper-surface area of the unit hypersphere) and q(w) = q1 := Γ(d/2 + 1)/(π^{d/2} C_W^d) (the inverse of the volume of a ball of radius C_W). Interestingly enough, the hyper-surface area of the unit hypersphere reaches a maximum and then decreases towards 0 as d increases; one can show that the seven-dimensional unit hypersphere attains the maximum hyper-surface area, which is less than 35. The volume of a ball of radius C_W, however, depends exponentially on the radius C_W. If ‖p(r, w)‖_∞ = c, then the importance weight αi can be as large as c/(q0·q1). Because q1 can be very small, which makes this bound very large, the approximation incurs a large error of O(c/(q0·q1·√n)).

In contrast, if we sample r from the unit sphere and then optimize over w (i.e., the semi-random approach with a one-hidden-layer model), we obtain the following approximation: with ri ∼ q(r) := Uniform(S^{d−1}) i.i.d.,

  f3(x) = (1/n) Σ_{i=1}^n [p(ri)/q(ri)] σ(ri⊤x)(wi⊤x) = (1/n) Σ_{i=1}^n βi σ(ri⊤x)(wi⊤x).

A detailed derivation of this result is presented in the Appendix. By finding the best w, we can do at least as well as the case wi ∼ p(w|ri) i.i.d., where p(w|r) = p(w, r)/p(r) is the true conditional distribution given r. If ‖p(r)‖_∞ = c, then the importance weight βi can be much smaller, with a bound of c/q0 ≤ 35c that is independent of the dimension. Accordingly, this approximation incurs an error of O(35c/√n).

Additional Future Work and Open Problems

We have discussed several open questions in the relevant sections of the main text. Here, we discuss additional future work and open problems. We emphasize that our theoretical understanding of the multilayer model is of a preliminary nature and is not intended to be complete. Future work may adopt an oscillation argument as in the previous work by (Telgarsky 2016), in order to directly compare deep models and shallow models. To obtain a tight upper bound, future work may start with a stronger assumption, such as the one used in the previous work by (Rahimi and Recht 2009): an assumption on the relationship between the random sampling distribution and the true distribution. Indeed, when compared with a neural network, semi-random features would be advantageous when the target function is known to decompose into an unknown part w and a random part r with a known distribution. However, we did not assume such knowledge in this paper, leaving the study of such a case to future work.
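For concreteness, the one-hidden-layer semi-random construction f3 above (random directions r_i frozen on the unit sphere, linear weights w_i learned) can be sketched in a few lines of NumPy. This is purely our own illustration: the step-function gate, the least-squares fit, and all names are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_random_design(X, R):
    """Per-unit features gate_i(x) * x, where gate_i(x) = 1{r_i^T x > 0}.
    Stacking them gives a design matrix that is linear in the trainable weights w_i."""
    gates = (X @ R > 0).astype(float)              # shape (n, k): sigma(r_i^T x)
    n, d = X.shape
    k = R.shape[1]
    return (gates[:, :, None] * X[:, None, :]).reshape(n, k * d)

# Toy 1-d regression: fit f(x) = sin(x) on [-5, 5].
X = np.linspace(-5, 5, 400)[:, None]
X_aug = np.hstack([X, np.ones_like(X)])            # append a bias coordinate
y = np.sin(X[:, 0])

R = rng.standard_normal((2, 50))
R /= np.linalg.norm(R, axis=0, keepdims=True)      # r_i drawn once from the unit sphere, then fixed

Phi = semi_random_design(X_aug, R)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # optimise over the w_i (closed form here)
print("RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```

Because the r_i are fixed, the model is linear in the trainable parameters, so the optimal w_i have a closed form; in the multilayer experiments above the corresponding weights are instead trained by SGD.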
arXiv:1604.07566v2 [math.NT] 9 May 2017

THE COHOMOLOGY OF CANONICAL QUOTIENTS OF FREE GROUPS AND LYNDON WORDS

IDO EFRAT

Abstract. For a prime number p and a free profinite group S, let S (n,p) be the nth term of its lower p-central filtration, and S [n,p] the corresponding quotient. Using tools from the combinatorics of words, we construct a canonical basis of the cohomology group H 2 (S [n,p] , Z/p), which we call the Lyndon basis, and use it to obtain structural results on this group. We show a duality between the Lyndon basis and canonical generators of S (n,p) /S (n+1,p) . We prove that the cohomology group satisfies shuffle relations, which for small values of n fully describe it.

2010 Mathematics Subject Classification. Primary 12G05, Secondary 20J06, 68R15. Key words and phrases. Profinite cohomology, lower p-central filtration, Lyndon words, shuffle relations, Massey products. This work was supported by the Israel Science Foundation (grant No. 152/13).

1. Introduction

Let p be a fixed prime number. For a profinite group G one defines the lower p-central filtration G(n,p) , n = 1, 2, . . . , inductively by

  G(1,p) = G,    G(n+1,p) = (G(n,p) )^p [G, G(n,p) ].

Thus G(n+1,p) is the closed subgroup of G generated by the powers h^p and commutators [g, h] = g^{-1} h^{-1} g h, where g ∈ G and h ∈ G(n,p) . We also set G[n,p] = G/G(n,p) . Now let S be a free profinite group on the basis X, and let n ≥ 2. Then S [n,p] is a free object in the category of pro-p groups G with G(n,p) trivial. As with any pro-p group, the cohomology groups H l (S [n,p] ) = H l (S [n,p] , Z/p), l = 1, 2, capture the main information on generators and relations, respectively, in a minimal presentation of S [n,p] . The group H 1 (S [n,p] ) is just the dual (S [2,p] )^∨ ≅ ⊕_{x∈X} Z/p, and it remains to understand H 2 (S [n,p] ).

When n = 2 the quotient S [2,p] is an elementary abelian p-group, and the structure of H 2 (S [2,p] ) is well known. Namely, for p > 2 one has an isomorphism

  H 1 (S [2,p] ) ⊕ ∧^2 H 1 (S [2,p] ) → H 2 (S [2,p] ),

which is the Bockstein map on the first component, and the cup product on the second component. Furthermore, taking a basis χx , x ∈ X, of H 1 (S [2,p] ) dual to X, there is a fundamental duality between pth powers and commutators in the presentation of S and Bockstein elements and cup products, respectively, of the χx (see [NSW08, Ch. III, §9] for details). These facts have numerous applications in Galois theory, ranging from class field theory ([Koc02], [NSW08]), the works by Serre and Labute on the pro-p Galois theory of p-adic fields ([Ser63], [Lab67]), the structure theory of general absolute Galois groups ([MS96], [EM11]), the birational anabelian phenomena ([Bog91], [Top16]), Galois groups with restricted ramification ([Vog05], [Sch10]), and mild groups ([Lab06], [For11], [LM11]), to mention only a few of the pioneering works in these areas.

In this paper we generalize the above results from the case n = 2 to arbitrary n ≥ 2. Namely, we give a complete description of H 2 (S [n,p] ) in terms of a canonical linear basis of this cohomology group. This basis is constructed using tools from the combinatorics of words – in particular, the Lyndon words in the alphabet X, i.e., words which are lexicographically smaller than all their proper suffixes (for a fixed total order on X). We call it the Lyndon basis, and use it to prove several structural results on H 2 (S [n,p] ), and in particular to compute its size (see below).
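As a toy illustration of the definition of the lower p-central filtration (purely for intuition; nothing in the paper depends on it), one can compute G(n,p) for a small finite group by brute force, directly from the generators h^p and [g, h]. The sketch below, in Python, uses the Heisenberg group of unitriangular 3×3 matrices over Z/p, encoded by its three above-diagonal entries; all names and the choice of group are our own assumptions.

```python
from itertools import product

p = 3  # any small prime works for this toy check

def mul(u, v):                      # (I + aE12 + bE23 + cE13)(I + dE12 + eE23 + fE13)
    a, b, c = u; d, e, f = v
    return ((a + d) % p, (b + e) % p, (c + f + a * e) % p)

def inv(u):
    a, b, c = u
    return ((-a) % p, (-b) % p, (a * b - c) % p)

def power(u, m):
    r = (0, 0, 0)
    for _ in range(m):
        r = mul(r, u)
    return r

def comm(g, h):                     # [g, h] = g^-1 h^-1 g h, as in the definition above
    return mul(mul(inv(g), inv(h)), mul(g, h))

def generated_subgroup(gens):
    """Closure of the identity under right multiplication by the generators (finite group)."""
    elems, frontier = {(0, 0, 0)}, set(gens)
    while frontier:
        new = set()
        for x in frontier | elems:
            for g in gens:
                y = mul(x, g)
                if y not in elems and y not in frontier:
                    new.add(y)
        elems |= frontier
        frontier = new
    return elems

G = set(product(range(p), repeat=3))    # the full group, of order p^3
level, filtration = G, [G]
while len(level) > 1:
    gens = {power(h, p) for h in level} | {comm(g, h) for g in G for h in level}
    gens.discard((0, 0, 0))
    level = generated_subgroup(gens) if gens else {(0, 0, 0)}
    filtration.append(level)

print([len(H) for H in filtration])     # [p**3, p, 1]: G, then the centre, then the trivial group
```

Here the filtration stabilises after two steps; for the free profinite group S of the paper, generators of the quotients S (n,p) /S (n+1,p) are described in Section 5.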
The Lyndon basis constructed here can be most naturally described in terms of central extensions, as follows: For 1 ≤ s ≤ n let U denote the group of all unipotent upper-triangular (s + 1) × (s + 1)-matrices over the ring Z/pn−s+1 . There is a central extension 0 → Z/p → U → U[n,p] → 1 (Proposition 6.3). It corresponds to a cohomology element γn,s ∈ H 2 (U[n,p]). For a profinite group G and a continuous homomorphism ρ : G → U we write ρ̄ : G[n,p] → U[n,p] for the induced homomorphism, and ρ̄∗ γn,s for the pullback to H 2 (G[n,p]). Now for any word w = (x1 · · · xs ) in the alphabet X we define a homomorphism ρw : S → U by setting the entry (ρw (σ))ij to be the coefficient of the subword (xi · · · xj−1 ) in the power series Λ(σ), where Λ : S → (Z/pn−s+1 )hhXii× is the Magnus homomorphism, defined on the generators x ∈ X by Λ(x) = 1 + x (see §3 and §6 for more details). The Lyndon basis is now given by: Main Theorem. The pullbacks αw,n = (ρ̄w )∗ γn,s , where w ranges over all Lyndon words of length 1 ≤ s ≤ n in the alphabet X, form a linear basis of H 2 (S [n,p]) over Z/p. COHOMOLOGY AND LYNDON WORDS 3 We further show a duality between the Lyndon basis and certain canonical elements σw ∈ S (n,p) , with w a Lyndon word of length ≤ n, generalizing the above mentioned duality in the case n = 2 (see Corollary 8.3). The cohomology elements ρ̄∗ γn,s include the Bockstein elements (for s = 1), the cup products (for n = s = 2), and more generally, the elements of n-fold Massey products in H 2 (G[n,p]) (for n = s ≥ 2); see Examples 7.4. The full spectrum ρ̄∗ γn,s , 1 ≤ s ≤ n, appears to give new significant “external” objects in profinite cohomology, which to our knowledge have not been investigated so far in general. Lyndon words are known to have tight connections with the shuffle algebra, and indeed, the αw,n for arbitrary words w of length ≤ n in X satisfy natural shuffle relations (Theorem 9.4). In §10-§11 we show that for n = 2, 3 these shuffle relations fully describe H 2 (S [n,p]), provided that p > 2, p > 3, respectively (for n = 2 this was essentially known). Interestingly, related considerations arise also in the context of multiple zeta values, see e.g. [MP00], although we are not aware of a direct connection. The Lyndon words on X form a special instance of Hall sets, which are well-known to have fundamental role in the structure theory of free groups and free Lie algebras (see [Reu93], [Ser92]). In addition, the Lyndon words have a triangularity property (see Proposition 4.4(b)). This property allows us to construct certain upper-triangular unipotent matrices that express a (semi-)duality between the σw and the cohomology elements αw,n . We now outline the proof that the αw,n form a linear basis of H 2 (S [n,p]). For simplicity we assume for the moment that X is finite. To each Lyndon word w of length 1 ≤ s ≤ n one associates in a canonical way an element τw of the s-th term of the lower central series of S (see §4). The cosets n−s of the powers σw = τwp generate S (n,p) /S (n+1,p) (Theorem 5.3). Using the special structure of the lower p-central filtration of U we define for any two Lyndon words w, w ′ of length ≤ n a value hw, w ′in ∈ Z/p (see §6). Thetriangularity property of Lyndon words implies that the matrix hw, w ′in is unipotent upper-triangular, whence invertible. Turning now to cohomology, we define a natural perfect pairing (·, ·)n : S (n,p) /S (n+1,p) × H 2 (S [n,p]) → Z/p. 
Cohomological computations show that, for Lyndon words w, w ′ of length  ≤ n, one has hw, w ′in = (σw , αw′ ,n )n . Hence the matrix (σw , αw′ ,n )n is also invertible. We then conclude that the αw′ ,n form a basis of H 2 (S [n,p]) (Theorem 8.5). This immediately determines the dimension of the latter 4 IDO EFRAT cohomology group, in terms of Witt’s necklace function, which counts the number of Lyndon words over X of a given length (Corollary 8.6). In the special case n = 2, the theory developed here generalizes the above description of H 2 (S [2,p]) in terms of the Bockstein map and cup products (see §10 for details). Namely, the matrix (hw, w ′i2 ) is the identity matrix, which gives the above duality between pth powers/commutators and Bockstein elements/cup products. Likewise, the shuffle relations just recover the basic fact that the cup product factors via the exterior product. In §11 we describe our theory explicitly also for the (new) case n = 3. While here we focus primarily on free profinite groups, it may be interesting to study the canonical elements ρ̄∗ γn,s for more general profinite groups G, in particular, when G = GF is the absolute Galois group of a field F . For instance, when n = 2, they were used in [EM11] (following [3,p] [MS96] and [AKM99]) and [CEM12] to describe the quotient GF . Triple Massey products for GF (which correspond to the case n = s = 3) were also extensively studied in recent years – see [EM15], [MT15b], [MT16], [MT17], and [Wic12] and the references therein. I thank Claudio Quadrelli and the anonymous referee for their careful reading of this paper and for their very valuable comments and suggestions on improving the exposition. 2. Words Let X be a nonempty set, considered as an alphabet. Let X ∗ be the free monoid on X. We view its elements as associative words on X. The length of a word w is denoted by |w|. We write ∅ for the empty word, and ww ′ for the concatenation of words w and w ′ . Recall that a magma is a set M with a binary operation (·, ·) : M × M → M. A morphism of magmas is a map which commutes with the associated binary operations. There is a free magma MX on X, unique up to an isomorphism; that is, X ⊆ MX , and for every magma (·, ·) : N × N → N and a map f0 : X → N there is a magma morphism f : MX → N extending f0 . See [Ser92, Ch. IV, §1] for an explicit construction of MX . The elements of MX may be viewed as non-associative words on X. The monoid X ∗ is a magma with respect to concatenation, so the universal property of MX gives rise to a unique magma morphism f : MX → X ∗ , called the foliage (or brackets dropping) map, such that f (x) = x for x ∈ X. COHOMOLOGY AND LYNDON WORDS 5 Let H be a subset of MX and ≤ a total order on H. We say that (H, ≤) is a Hall set in MX , if the following conditions hold [Reu93, §4.1]: (i) X ⊆ H; (ii) If h = (h′ , h′′ ) ∈ H \ X, then h < h′′ ; (iii) For h = (h′ , h′′ ) ∈ MX \ X, one has h ∈ H if and only if • h′ , h′′ ∈ H and h′ < h′′ ; and • either h′ ∈ X, or h′ = (v, u) where u ≥ h′′ . Given a Hall set (H, ≤) in MX we call H = f (H) a Hall set in X ∗ . Every w ∈ H can be written as w = f (h) for a unique h ∈ H [Reu93, Cor. 4.5]. If w ∈ H \ X, then we can uniquely write h = (h′ , h′′ ) with h′ , h′′ ∈ H, and call w = w ′w ′′ , where w ′ = f (h′ ) and w ′′ = f (h′′ ), the standard factorization of w [Reu93, p. 89]. Next we fix a total order ≤ on X, and define a total order ≤alp (the alphabetical order) on X ∗ as follows: Let w1 , w2 ∈ X ∗ . 
Then w1 ≤alp w2 if and only if w2 = w1 v for some v ∈ X∗, or w1 = v x1 u1, w2 = v x2 u2 for some words v, u1, u2 and some letters x1, x2 ∈ X with x1 < x2. Note that the restriction of ≤alp to X^n is the lexicographic order. In addition, we order Z≥0 × X∗ lexicographically with respect to the usual order on Z≥0 and the order ≤alp on X∗. We then define a second total order ⪯ on X∗ by setting

(2.1)  w1 ⪯ w2 ⇐⇒ (|w1|, w1) ≤ (|w2|, w2)

with respect to the latter order on Z≥0 × X∗.

A nonempty word w ∈ X∗ is called a Lyndon word if it is smaller in ≤alp than all its non-trivial proper right factors. Equivalently, no nontrivial rotation leaves w invariant, and w is lexicographically strictly smaller than all its rotations ≠ w ([CFL58, Th. 1.4], [Reu93, Cor. 7.7]). We denote the set of all Lyndon words on X by Lyn(X), and the set of all such words of length n (resp., ≤ n) by Lynn(X) (resp., Lyn≤n(X)). The set Lyn(X), totally ordered with respect to ≤alp, is a Hall set [Reu93, Th. 5.1].

Example 2.1. The Lyndon words of length ≤ 4 on X are
(x) for x ∈ X,
(xy), (xxy), (xyy), (xxxy), (xxyy), (xyyy) for x, y ∈ X, x < y,
(xyz), (xzy), (xxyz), (xxzy), (xyxz), (xyyz), (xyzy), (xyzz), (xzyy), (xzyz), (xzzy) for x, y, z ∈ X, x < y < z,
(xyzt), (xytz), (xzyt), (xzty), (xtyz), (xtzy) for x, y, z, t ∈ X, x < y < z < t.

The necklace map (also called the Witt map) is defined for integers n, m ≥ 1 by

  ϕn(m) = (1/n) Σ_{d|n} µ(d) m^{n/d}.

Here µ is the Möbius function, defined by µ(d) = (−1)^k if d is a product of k distinct prime numbers, and µ(d) = 0 otherwise. Alternatively, 1/ζ(s) = Σ_{n=1}^∞ µ(n)/n^s, where ζ(s) is the Riemann zeta function and s is a complex number with real part > 1. When m = q is a prime power, ϕn(q) also counts the number of irreducible monic polynomials of degree n over a field of q elements [Reu93, §7.6.2]. We also define ϕn(∞) = ∞. One has [Reu93, Cor. 4.14]

(2.2)  |Lynn(X)| = ϕn(|X|).

3. Power series

We fix a commutative unital ring R. Recall that a bilinear map M × N → R of R-modules is non-degenerate if its left and right kernels are trivial, i.e., the induced maps M → Hom(N, R) and N → Hom(M, R) are injective. The bilinear map is perfect if these two maps are isomorphisms.

Let RhXi be the free associative R-algebra on X. We may view its elements as polynomials over R in the non-commuting variables x ∈ X. The additive group of RhXi is the free R-module on the basis X∗. We grade RhXi by total degree. Let RhhXii be the ring of all formal power series in the non-commuting variables x ∈ X and coefficients in R. For f ∈ RhhXii we write fw for the coefficient of f at w ∈ X∗. Thus f = Σ_{w∈X∗} fw w. There are natural embeddings X∗ ⊆ RhXi ⊆ RhhXii, where we identify w ∈ X∗ with the series f such that fw = 1 and fw′ = 0 for w′ ≠ w. There is a well-defined non-degenerate bilinear map of R-modules

(3.1)  RhhXii × RhXi → R,  (f, g) 7→ Σ_{w∈X∗} fw gw;

see [Reu93, p. 17]. For every integer n ≥ 0, it restricts to a non-degenerate bilinear map on the homogenous components of degree n. We may identify the additive group of RhhXii with R^{X∗} via the map f 7→ (fw)w. When R is equipped with a profinite ring topology (e.g. when R is finite), the product topology on R^{X∗} induces a profinite group topology on RhhXii. Moreover, the multiplication map of RhhXii is continuous, making it a profinite ring. The group RhhXii× of all invertible elements in RhhXii is then a profinite group.
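The combinatorial objects of Section 2 are easy to experiment with. The sketch below (our own illustration in Python, not code from the paper) generates Lyndon words by Duval's algorithm and checks the count (2.2) against the necklace map for a two-letter alphabet; it reproduces the counts 2, 1, 2, 3 visible in Example 2.1.

```python
def lyndon_words(k, n):
    """Yield all Lyndon words of length <= n over the alphabet {0, ..., k-1},
    in lexicographic order (Duval's algorithm)."""
    w = [-1]
    while w:
        w[-1] += 1
        yield tuple(w)
        m = len(w)
        while len(w) < n:
            w.append(w[len(w) - m])     # extend periodically ...
        while w and w[-1] == k - 1:     # ... then discard the maximal tail
            w.pop()

def mobius(d):
    """Moebius function mu(d), straight from the definition, via trial division."""
    count, q, p = 0, d, 2
    while p * p <= q:
        if q % p == 0:
            q //= p
            if q % p == 0:
                return 0                # square factor
            count += 1
        p += 1
    if q > 1:
        count += 1
    return (-1) ** count

def necklace(n, m):
    """Witt/necklace map phi_n(m) = (1/n) * sum over d | n of mu(d) * m^(n/d)."""
    return sum(mobius(d) * m ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

k, n_max = 2, 4
counts = [0] * (n_max + 1)
for w in lyndon_words(k, n_max):
    counts[len(w)] += 1
print(counts[1:])                                     # [2, 1, 2, 3]
print([necklace(n, k) for n in range(1, n_max + 1)])  # [2, 1, 2, 3], as in (2.2)
```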
COHOMOLOGY AND LYNDON WORDS 7 Next we recall from [FJ08, §17.4] the following terminology and facts on free profinite groups. Let G be a profinite group and X a set. A map ψ : X → G converges to 1, if for every open normal subgroup N of G, the set X \ ψ −1 (N) is finite. We say that a profinite group S is a free profinite group on basis X with respect to a map ι : X → S if (i) ι : X → S converges to 1 and ι(X) generates S as a profinite group; (ii) For every profinite group G and a map ψ : X → G converging to 1, there is a unique continuous homomorphism ψ̂ : S → G such that ψ = ψ̂ ◦ ι on X. A free profinite group on X exists, and is unique up to a continuous isomorphism. We denote it by SX . The map ι is then injective, and we identify X with its image in SX . We define the (profinite) Magnus homomorphism ΛX,R : SX → RhhXii× as follows (compare [Efr14, §5]): P i i Assume first that X is finite. For x ∈ X one has 1 = (1+x) ∞ i=0 (−1) x in RhhXii, so 1 + x ∈ RhhXii× . Hence, by (ii), the map ψ : X → RhhXii× , x 7→ 1 + x, uniquely extends to a continuous homomorphism ΛX,R : SX → RhhXii×. Now suppose that X is arbitrary. Let Y range over all finite subsets of X. The map ψ : X → SY , which is the identity on Y and 1 on X \ Y , converges to 1. Hence it extends to a unique continuous group homomorphism SX → SY . Also, there is a unique continuous R-algebra homomorphism RhhXii → RhhY ii, which is the identity on Y and 0 on X \ Y . Then SX = lim SY , RhhXii = lim RhhY ii, RhhXii× = lim RhhY ii× . ←− ←− ←− We define ΛX,R = lim ΛY,R . It is functorial in both X and R in the natural ←− way. Note that ΛX,R (x) = 1 + x for x ∈ X. In the sequel, X will be fixed, so we abbreviate S = SX and ΛR = ΛX,R . For σ ∈ S and a word w ∈ X ∗ we denote the coefficient of w in ΛR (σ) by ǫw,R (σ). Thus X ΛR (σ) = ǫw,R (σ)w. w∈X ∗ By the construction of ΛR , we have ǫ∅,R (σ) = 1. Since ΛR is a homomorphism, for every σ, τ ∈ S and w ∈ X ∗ one has X ǫu1 ,R (σ)ǫu2 ,R (τ ). (3.2) ǫw,R (στ ) = w=u1 u2 We will also need the classical discrete version of the Magnus homomorphism. To define it, assume that X is finite, and let FX be the free 8 IDO EFRAT group on basis X. There is a natural homomorphism FX → SX . The dis× crete Magnus homomorphism Λdisc Z : FX → ZhhXii is defined again by x 7→ 1 + x. There is a commutative square (3.3) FX Λdisc Z / SX ΛZp   ZhhXii×   / Zp hhXii× . 4. Lie algebra constructions Recall that the lower central filtration G(n,0) , n = 1, 2, . . . , of a profinite group G is defined inductively by G(1,0) = G, G(n+1,0) = [G(n,0) , G]. Thus G(n+1,0) is generated as a profinite group by all elements of the form [σ, τ ] with σ ∈ G(n,0) and τ ∈ G. One has [G(n,0) , G(m,0) ] ≤ G(n+m,0) for every n, m ≥ 1 (compare [Ser92, Part I, Ch. II, §3]). Proposition 4.1. Let S = SX and let σ ∈ S. Then: (a) σ ∈ S (n,0) if and only if ǫw,Zp (σ) = 0 for every w ∈ X ∗ with 1 ≤ |w| < n; (b) σ ∈ S (n,p) if and only if ǫw,Zp (σ) ∈ pn−|w|Zp for every w ∈ X ∗ with 1 ≤ |w| < n. Proof. In the discrete case (a) and (b) are due to Grün and Magnus (see [Ser92, Part I, Ch. IV, th. 6.3]) and Koch [Koc60], respectively. The results in the profinite case follow by continuity. For other approaches see [Mor12, Prop. 8.15], [CE16, Example 4.5], and [MT15a, Lemma 2.2(d)].  We will need the following profinite analog of [Fen83, Lemma 4.4.1(iii)]. Lemma 4.2. Let σ ∈ S (n,0) and τ ∈ S (m,0) , and let w ∈ X ∗ have length n + m. Write w = u1 u2 = u′2 u′1 with |u1| = |u′1 | = n and |u2 | = |u′2| = m. 
Then ǫw,Zp ([σ, τ ]) = ǫu1 ,Zp (σ)ǫu2 ,Zp (τ ) − ǫu′2 ,Zp (τ )ǫu′1 ,Zp (σ). Proof. By Proposition 4.1(a), we may write ΛZp (σ) = 1 + P + O(n + 1) and ΛZp (τ ) = 1 + Q + O(m + 1), where P, Q ∈ Zp hhXii are homogenous of degrees n, m, respectively, and where O(r) denotes a power series containing only terms of degree ≥ r. Then ΛZp ([σ, τ ]) = 1 + P Q − QP + O(n + m + 1). COHOMOLOGY AND LYNDON WORDS 9 (compare e.g., [Mor12, Proof of Prop. 8.5]). By (3.2), it follows that ǫw,Zp ([σ, τ ]) = (P Q − QP )w = Pu1 Qu2 − Qu′2 Pu′1 = ǫu1 ,Zp (σ)ǫu2 ,Zp (τ ) − ǫu′2 ,Zp (τ )ǫu′1 ,Zp (σ).  L∞ The commutator map induces on the graded ring n=1 S (n,0) /S (n+1,0) a graded Lie algebra structure [Ser92, Part I, Ch. II, Prop. 2.3]. Let d be the L n n+1 is a ideal in the Zp -algebra Zp hhXii generated by X. Then ∞ n=1 d /d Lie algebra with the Lie brackets defined on homogenous components by [f¯, ḡ] = f g − gf for f ∈ dn , g ∈ dm [Ser92, p. 25]. By Proposition 4.1(a), ΛZp induces a graded Zp -algebra homomorphism gr ΛZp : ∞ M n=1 S (n,0) /S (n+1,0) → ∞ M dn /dn+1 , σS (n+1,0) 7→ n=1 X ǫw,Zp (σ)+dn+1. |w|=n Then Lemma 4.2 means that gr ΛZp is a Lie algebra homomorphism. For w ∈ Lyn(X) we inductively define an element τw of S and a noncommutative polynomial Pw ∈ ZhXi ⊆ Zp hXi as follows: • If w = (x) has length 1, then τw = x and Pw = x; • If |w| > 1, then we take the standard factorization w = w ′w ′′ of w with respect to the Hall set Lyn(X) (see §2), and set τw = [τw′ , τw′′ ], Pw = Pw′ Pw′′ − Pw′′ Pw′ . For w ∈ Lynn (X) one has τw ∈ S (n,0) . Moreover: Proposition 4.3. Let n ≥ 1. The cosets of τw , with w ∈ Lynn (X), generate S (n,0) /S (n+1,0) . Proof. See [Reu93, Cor. 6.16] for the discrete version. The profinite version follows by taking closure.  Let ≤alp and  be the total orders on X ∗ defined in §2. The importance of the Lyndon words in our context, beside forming a Hall set, is part (b) of the following Proposition, called the triangularity property. Proposition 4.4. Let w ∈ Lyn(X). Then (a) ΛZp (τw ) − 1 − Pw is a combination of words of length > |w|. (b) Pw − w is a combination of words of length |w| which are strictly larger than w with respect to ≤alp . (c) ΛZp (τw ) − 1 − w is a combination of words which are strictly larger than w in . 10 IDO EFRAT Proof. (a) Since gr ΛZp is a Lie algebra homomorphism, for w ∈ Lynn (X) we have by induction (gr ΛZp )(τw S (n+1,0) ) = Pw + dn+1 , and the assertion follows. See also [Reu93, Lemma 6.10(ii)]. (b) See [Reu93, Th. 5.1] and its proof. (c) This follows from (a) and (b).  5. Generators for S (n,p)/S (n+1,p) Let π be an indeterminate over the ring Z/p and let Z/p[π] be the polynoL mial ring. Let A• = ∞ n=1 An be a graded Lie Z/p-algebra with Lie bracket [·, ·]. Suppose that there is a map Z/p[π] × A• → A• , (α, ξ) 7→ αξ, which is Z/p-linear in Z/p[π], such that πξ ∈ As+1 for ξ ∈ As , and such that for every ξ1 , ξ2 ∈ As , ( πξ1 + πξ2 , if p > 2 or s > 1, (5.1) π(ξ1 + ξ2 ) = πξ1 + πξ2 + [ξ1 , ξ2 ], if p = 2, s = 1. By induction, this extends to: Lemma 5.1. Let r, k, s ≥ 1 and let ξ1 , . . . , ξk ∈ As . Then (P k k X π r ξi , if p > 2 or s > 1, πr ( ξi ) = Pi=1 P k r r−1 [ξi , ξj ], if p = 2, s = 1. i=1 π ξi + i<j π i=1 We write hT i for the submodule of A• generated by a subset T . Lemma 5.2. Let n ≥ 2 and for each 1 ≤ s ≤ n let Ts be a subset of As . When p = 2 assume also that [τ1 , τ2 ] ∈ T2 ∪ {0} for every τ1 , τ2 ∈ T1 . If the sets π n−s hTs i, s = 1, 2, . . . , n, generate An , then the sets π n−s Ts , s = 1, 2, . . . , n, also generate An . Proof. 
When p > 2 or s > 1 Lemma 5.1 shows that π n−s hTs i = hπ n−s Ts i. When p = 2 and s = 1, it shows that π n−1 hT1 i ⊆ hπ n−1 T1 i + hπ n−2T2 i ⊆ An . Therefore the subgroup of An generated by the sets π n−s Ts , s = 1, 2, . . . , n, contains the sets π n−s hTs i, s = 1, 2, . . . , n, and hence equals An .  Motivated by e.g., [Laz54], [Ser63], [Lab67], we now specialize to a graded Lie algebra defined using the lower p-central filtration. We refer to [NSW08, Ch. III, §8] for the following facts. For the free profinite group S on the basis X and for n ≥ 1 we set grn (S) = S (n,p) /S (n+1,p) . It is an elementary COHOMOLOGY AND LYNDON WORDS 11 abelian p-group, which we write additively. The commutator map induces a map [·, ·] : grn (S) × grm (S) → grn+m (S), which endows a graded Lie L p (r,p) algebra structure on gr• (S) = ∞ n=1 grn (S). The map τ 7→ τ maps S (r+1,p) r into S , and induces a map πr : grr (S) → grr+1 (S). The map (π , ξ) 7→ πr (ξ) for ξ ∈ grr (S) extends to a map Z/p[π] × gr• (S) → gr• (S) which is Z/p-linear in the first component and which satisfies (5.1). n−s Theorem 5.3. Let n ≥ 1. The cosets of τwp w ∈ Lyns (X), generate S (n,p) /S (n+1,p). , with 1 ≤ s ≤ n and Proof. The case n = 1 is immediate, so we assume that n ≥ 2. For every 1 ≤ s ≤ n let Ts be the set of cosets of τw , w ∈ Lyns (X), in grs (S). By Proposition 4.3, hTs i is the image of S (s,0) in grs (S). Hence π n−s hTs i consists of the cosets in grn (S) of the pn−s -powers of S (s,0) . One has n Y n−s (S (s,0) )p S (n,p) = s=1 [NSW08, Prop. 3.8.6]. Thus the sets π n−s hTs i, s = 1, 2, . . . , n, generate grn (S). Further, T1 consists of the cosets of x, with x ∈ X, and T2 consists of the cosets of commutators [x, y] with x < y in X. Moreover, when p = 2 the cosets of [x, y] and [y, x] = [x, y]−1 in gr2 (S) coincide. Lemma 5.2 therefore implies that even the sets π n−s Ts , s = 1, 2, . . . , n, generate grn (S), as required.  6. The pairing hw, w ′in For a commutative unitary ring R and a positive integer m, let Um (R) be the group of all m × m upper-triangular unipotent matrices over R. We write Im for the identity matrix in Um (R), and Eij for the matrix with 1 at entry (i, j) and 0 elsewhere. As above, X will be a totally ordered set. For the following fact we refer, e.g., to [Efr14, Lemma 7.5]. We recall from §3 that ǫu,R (σ) is the coefficient of the word u ∈ X ∗ in the formal power series ΛR (σ) ∈ RhhXii× . Proposition 6.1. Given a profinite ring R and a word w = (x1 · · · xs ) in X ∗ there is a well defined homomorphism of profinite groups ρw R : S → Us+1 (R), σ 7→ (ǫ(xi ···xj−1 ),R (σ))1≤i<j≤s+1. Remark 6.2. In particular, for each x ∈ X the map χx,R = ǫ(x),R : S → R is a group homomorphism, where R is considered as an additive group. 12 IDO EFRAT The homomorphisms χx,R , x ∈ X, are dual to the basis X, in the sense that χx,R (x) = 1, and χx,R (y) = 0 for x 6= y in X. Proposition 6.3. Let 1 ≤ s ≤ n. For R = Z/pn−s+1 one has: (a) Us+1 (R)(n,p) = Is+1 + Zpn−s E1,s+1 . (b) Us+1 (R)(n,p) is central in Us+1(R). Proof. (a) We follow the argument of [MT15a, Lemma 2.4]. Take X = {x1 , . . . , xs } be a set of s elements, let S = SX , and let w = (x1 · · · xs ). The matrices ρw R (xi ) = Is+1 + Ei,i+1 , i = 1, 2, . . . , s, generate Us+1 (R) [Wei55, p. w 55], so ρR is surjective. Therefore it maps S (n,p) onto Us+1 (R)(n,p) . By Proposition 4.1(b), for σ ∈ S (n,p) and u ∈ X ∗ of length 1 ≤ |u| ≤ s one has ǫu,Zp (σ) ∈ pn−|u| Zp . If |u| < s, then ǫu,Zp (σ) ∈ pn−|u| Zp ⊆ pn−s+1 Zp . Hence ǫu,R (σ) = 0 in this case. 
If |u| = s, then ǫu,Zp (σ) ∈ pn−s Zp , so ǫu,R (σ) ∈ pn−s R. n−s n−s Moreover, τwp ∈ (S (s,0) )p ≤ S (n,p) . By Proposition 4.4(c), ΛZp (τw ) = 1 + w + f , where f is a combination of words strictly larger than w in . n−s Hence ΛZp (τwp ) = 1 + pn−s w + g, where g is also a combination of words n−s strictly larger than w in , which implies that ǫw,R (τwp ) = pn−s · 1R . (n,p) Consequently, Us+1 (R)(n,p) = ρw ) = Is+1 + Zpn−s E1,s+1 . R (S (b) It is straightforward to see that Is+1 +ZE1,s+1 is central in Us+1 (R), so the assertion follows from (a).  See [Bor04, §2] for a related analysis of the lower p-central filtration of Us+1(Z/pn−s+1 ). Consider the obvious isomorphism ∼ ιn,s : pn−s Z/pn−s+1 Z − → Z/p, apn−s (mod pn−s+1 ) 7→ a (mod p). In view of Proposition 6.3(a), we may define a group isomorphism ∼ ιUn,s : Us+1 (Z/pn−s+1)(n,p) − → Z/p, (aij ) 7→ ιn,s (a1,s+1 ). Next let w, w ′ ∈ X ∗ be words of lengths 1 ≤ s, s′ ≤ n, respectively, n−s n−s where w is Lyndon. We have τwp ∈ (S (s,0) )p ≤ S (n,p) . By Propon−s n−s ′ sition 4.1(b), ǫw′ ,Zp (τwp ) ∈ pn−s Zp , and therefore ǫw′ ,Z/pn−s′ +1 (τwp ) ∈ ′ ′ pn−s Z/pn−s +1 Z. We set (6.1) n−s hw, w ′in = ιn,s′ (ǫw′ ,Z/pn−s′ +1 (τwp )) ∈ Z/p. Alternatively, (6.2) ′ n−s (τ p hw, w ′in = ιUn,s′ (ρw Z/pn−s′ +1 w )). COHOMOLOGY AND LYNDON WORDS 13 Let  be as in (2.1). Proposition 6.4. Let w, w ′ be words in X ∗ of lengths 1 ≤ s, s′ ≤ n, respectively, with w Lyndon. (a) If w ′ ≺ w, then hw, w ′in = 0; (b) hw, win = 1; (c) If w ′ contains letters which do not appear in w, then hw, w ′in = 0; (d) If s < s′ < 2s, then hw, w ′in = 0. n−s Proof. (a), (b) Proposition 4.4(c) implies that ΛZp (τwp ) − 1 − pn−s w is a combination of words strictly larger than w with respect to , and ′ the same therefore holds over the coefficient ring Z/pn−s +1 . Hence, if n−s w ′ ≺ w, then ǫw′ ,Z/pn−s′ +1 (τwp ) = 0, so hw, w ′in = 0. If w = w ′, then n−s ǫw,Z/pn−s+1 (τwp ) = pn−s · 1Z/pn−s+1 , whence hw, win = 1. (c) n−s Here we clearly have ǫw′ ,Z/pn−s′ +1 (τwp ) = 0. (d) Since τw ∈ S (s,0) , one may write ΛZp (τw ) = 1 + P + O(s′ + 1), where P is a combination of words w ′′ of length s ≤ |w ′′ | ≤ s′ , and O(s′ + 1) denotes a combination of words of length ≥ s′ + 1 (Proposition 4.1(a)). n−s Since s′ < 2s, this implies that ΛZp (τwp ) = 1 + pn−s P + O(s′ + 1). In n−s particular, ǫw′ ,Zp (τwp ) ∈ pn−s Zp , and therefore n−s ǫw′ ,Z/pn−s′ +1 (τwp ′ ) ∈ pn−s (Z/pn−s +1 ) = {0}, since s < s′ . Hence hw, w ′in = 0.  7. Transgressions Given a profinite group G and a discrete G-module A, we write as usual C (G, A), Z i (G, A), and H i (G, A) for the corresponding group of continuous i-cochains, group of continuous i-cocycles, and the ith profinite cohomology group, respectively. For x ∈ Z i (G, A) let [x] be its cohomology class in H i (G, A). For a normal closed subgroup N of G, let trg : H 1 (N, A)G → H 2 (G/N, AN ) be the transgression homomorphism. It is the map d0,1 of the Lyndon– 2 Hochschild–Serre spectral sequence associated with G and N [NSW08, Th. 2.4.3]. We recall the explicit description of trg, assuming for simplicity that the G-action on A is trivial [NSW08, Prop. 1.6.6]: If x ∈ Z 1 (N, A), then there exists y ∈ C 1 (G, A) such that y|N = x and (∂y)(σ1 , σ2 ) depends only on the cosets of σ1 , σ2 modulo N, so that ∂y may be viewed as an element of Z 2 (G/N, A). For any such y one has trg([x]) = [∂y]. i 14 IDO EFRAT We fix for the rest of this section a finite group U and a normal subgroup N of U satisfying: (i) N ∼ = Z/p; and (ii) N lies in the center of U. 
We set Ū = U/N, and let it act trivially on U. We denote the image of u ∈ U in Ū by ū. We may choose a section λ of the projection U → Ū such that λ(1̄) = 1. We define a map δ ∈ C 2 (Ū, N) by δ(ū, ū′ ) = λ(ū) · λ(ū′ ) · λ(ūū′ )−1 . It is normalized, i.e., δ(ū, 1) = δ(1, ū) = 1 for every ū ∈ Ū. We also define y ∈ C 1 (U, N) by y(u) = uλ(ū)−1 . Note that y|N = idN . Lemma 7.1. For every u, u′ ∈ U one has δ(ū, ū′ ) · y(u) · y(u′) = y(uu′). Proof. Since y(u) and y(u′) are in N, they are central in U, so δ(ū, ū′ ) · y(u) · y(u′) = λ(ū) · λ(ū′) · λ(ūū′)−1 · y(u) · y(u′) = y(u) · λ(ū) · y(u′) · λ(ū′ ) · λ(ūū′ )−1 = uu′ λ(ūū′ )−1 = y(uu′).  For the correspondence between elements of H 2 and central extensions see e.g., [NSW08, Th. 1.2.4]. Proposition 7.2. Using the notation above, the following holds. (a) δ ∈ Z 2 (Ū, N); (b) One has trg(idN ) = −[δ] for the transgression map trg : H 1 (N, N)U → H 2 (Ū, N). (c) The cohomology class [δ] ∈ H 2 (Ū, N) corresponds to the equivalence class of the central extension 1 → N → U → Ū → 1. (7.1) Proof. (a), (b): For u, u′ ∈ U Lemma 7.1 gives (∂y)(u, u′) = y(u) · y(u′) · y(uu′)−1 = δ(ū, ū′ )−1 . This shows that δ is a 2-cocycle, and that (∂y)(u, u′) depends only on the cosets ū, ū′ . Further, idN ∈ Z 1 (N, N). By the explicit description of the transgression above, trg(idN ) = −[δ]. COHOMOLOGY AND LYNDON WORDS (c) 15 Consider the set B = N × Ū with the binary operation (u, v̄) ∗ (u′, v̄ ′ ) = (δ(v̄, v̄ ′ )uu′, v̄v̄ ′ ). The proof of [NSW08, Th. 1.2.4] shows that this makes B a group, and [δ] corresponds to the equivalence class of the central extension 1 → N → B → Ū → 1. (7.2) Moreover, the map h : U → B, u 7→ (y(u), ū) is clearly bijective. We claim that it is a homomorphism, whence an isomorphism. Indeed, for u, u′ ∈ U Lemma 7.1 gives: h(u) ∗ h(u′ ) = (y(u), ū) ∗ (y(u′), ū′ ) = (δ(ū, ū′ )y(u)y(u′), ūū′ ) = (y(uu′), ūū′ ) = h(uu′ ). We obtain that the central extension (7.2) is equivalent to the central extension (7.1).  Next let Ḡ be a profinite group, and let ρ̄ : Ḡ → Ū be a continuous ∼ homomorphism. Let ι : N − → Z/p be a fixed isomorphism (see (i)). Set (7.3) α = (ρ̄∗ ◦ ι∗ )([δ]) = −(ρ̄∗ ◦ ι∗ ◦ trg)(idN ) ∈ H 2 (Ḡ, Z/p), where the second equality is by Proposition 7.2(b). Then α corresponds to the equivalence class of the central extension ι−1 ×1 0 → Z/p −−−→ U ×Ū Ḡ → Ḡ → 1, (7.4) where U ×Ū Ḡ is the fiber product with respect to the natural projection U → Ū and to ρ̄; See [Hoe68, Proof of 1.1]. Suppose further that there is a profinite group G, a closed normal subgroup M of G, and a continuous homomorphism ρ : G → U such that Ḡ = G/M, ρ(M) ≤ N, and ρ̄ : Ḡ → Ū is induced from ρ. The functoriality of transgression yields a commutative diagram H 1 (N, N)U ι∗ ∼ H 1 (N, Z/p)U / trg  H 2 (Ū, N) ρ∗ H 1 (M, Z/p)G / trg trg ι∗ ∼ /  H 2 (Ū, Z/p) ρ̄∗ /  H 2 (Ḡ, Z/p). The image of idN ∈ H 1 (N, N) in H 1 (M, Z/p)G is (7.5) θ = ι ◦ (ρ|M ) ∈ H 1 (M, Z/p). By (7.3) and the commutativity of the diagram, (7.6) α = − trg(θ) ∈ H 2 (Ḡ, Z/p) 16 IDO EFRAT Remark 7.3. Suppose that Ū is abelian and that Ḡ acts trivially on Ū. A 2-cocycle representing α is (σ̄, σ̄ ′ ) 7→ ι(λ(ρ̄(σ̄)) · λ(ρ̄(σ̄ ′ )) · λ(ρ̄(σ̄σ̄ ′ ))−1 ). But λ(ρ̄(σ̄)) · λ(ρ̄(σ̄ ′ )) · λ(ρ̄(σ̄σ̄ ′ ))−1 is a 2-cocycle representing the image of ρ̄ under the connecting homomorphism H 1 (Ḡ, Ū) → H 2 (Ḡ, N) arising from (7.1). Thus α is the image of ρ̄ under the composition ι ∗ H 1 (Ḡ, Ū) → H 2 (Ḡ, N) − → H 2 (Ḡ, Z/p). Example 7.4. 
We give several examples of the above construction with the group U = Us+1 (Z/pn−s+1 ) (where 1 ≤ s ≤ n), a continuous homomorphism ρ : G → U, where G is a profinite group, and the induced homomorphism ρ̄ : G[n,p] → U[n,p] . Note that assumptions (i) and (ii) then hold for N = U(n,p) , by Proposition 6.3. We will be especially interested in the case where G = S = SX is a free profinite group, M = S (n,p) , ρ = ρw Z/pn−s+1 for ∗ w [n,p] a word w ∈ X of length 1 ≤ s ≤ n, and ρ̄ = ρ̄Z/pn−s+1 : S → U[n,p] is the induced homomorphism. In this setup we write αw,n for α. (1) Bocksteins. For a positive integer m and a profinite group Ḡ, the connecting homomorphism arising from the short exact sequence of trivial Ḡ-modules 0 → Z/p → Z/pm → Z/m → 0 is the Bockstein homomorphism Bockm,Ḡ : H 1 (Ḡ, Z/m) → H 2 (Ḡ, Z/p). Let U = U2 (Z/pn ) (i.e., s = 1). There is a commutative diagram of central extensions 1 / U(n,p) ≀ / / / ≀  0 U pn−1 Z/pn Z / 1 ≀   / U[n,p] Z/pn / Z/pn−1 / 0. For a profinite group Ḡ and a homomorphism ρ̄ : Ḡ → Z/pn−1 , Remark 7.3 therefore implies that α = Bockpn−1 ,Ḡ (ρ̄). In particular, for x ∈ X take (x) ρ̄ = ρ̄Z/pn : Ḡ = S [n,p] → U2 (Z/pn )[n,p]. Identifying U2 (Z/pn )[n,p] = Z/pn−1 , we obtain α(x),n = Bockpn−1 ,S [n,p] (ǫ(x),Z/pn−1 ). COHOMOLOGY AND LYNDON WORDS 17 (2) Massey products. Let n = s ≥ 2, so U = Un+1 (Z/p). Let Ḡ be a profinite group, let ρ̄ : Ḡ → U[n,p] be a continuous homomorphism, and let ρi,i+1 : Ḡ → Z/p denote its projection on the (i, i + 1)-entry, i = 1, 2, . . . , n. By a result of Dwyer [Dwy75, Th. 2.6], the extension (7.4) then corresponds to a defining system for the n-fold Massey product hρ12 , ρ23 , . . . , ρn,n+1i ⊆ H 2 (Ḡ, Z/p) (see [Efr14, Prop. 8.3] for the profinite analog of this fact). Thus, for a fixed Ḡ and for homomorphisms ρ̄1 , . . . , ρ̄n : Ḡ → Z/p, with ρ̄ varying over all homomorphisms such that ρ̄i,i+1 = ρ̄i , i = 1, 2, . . . , n, the cohomology element α ranges over the elements of the Massey product hρ̄1 , . . . , ρ̄n i. In particular, for Ḡ = S [n,p] and for a word w = (x1 · · · xn ) ∈ X ∗ of length n, the cohomology elements αw,n range over the Massey product hǫ(x1 ),Z/p , . . . , ǫ(xn ),Z/p i ⊆ H 2 (S [n,p], Z/p), where the ǫ(xi ),Z/p are viewed as elements of H 1 (S [n,p], Z/p). (3) Cup products. In the special case n = s = 2, the Massey product contains only the cup product. Hence for every profinite group Ḡ and a homomorphism ρ̄ : Ḡ → U3 (Z/p) the cohomology element α ∈ H 2 (Ḡ, Z/p) is the cup product ρ̄12 ∪ ρ̄23 . In particular, for w = (xy) we have α(xy),Z/p = ǫ(x),Z/p ∪ ǫ(y),Z/p ∈ H 2 (S [2,p], Z/p). 8. Cohomological duality Let S = SX be again a free profinite group on the totally ordered set X, and let it act trivially on Z/p. Let n ≥ 2, so S (n,p) ≤ S p [S, S]. Then the inflation map H 1 (S [n,p], Z/p) → H 1 (S, Z/p) is an isomorphism. Further, H 2 (S, Z/p) = 0. By the five-term sequence of cohomology groups [NSW08, Prop. 1.6.7], trg : H 1 (S (n,p) , Z/p)S → H 2 (S [n,p], Z/p) is an isomorphism. There is a natural non-degenerate bilinear map S (n,p) /S (n+1,p) × H 1 (S (n,p) , Z/p)S → Z/p, (σ̄, ϕ) 7→ ϕ(σ) (see [EM11, Cor. 2.2]). It induces a bilinear map (·, ·)n : S (n,p) × H 2 (S [n,p], Z/p) → Z/p, (σ, α)n = −(trg−1 (α))(σ), with left kernel S (n+1,p) and trivial right kernel. Now let w ∈ X ∗ be a word of length 1 ≤ s ≤ n. 
As in Examples 7.4, we apply the computations in Section 7 to the group U = Us+1 (Z/pn−s+1 ), the open normal subgroup N = U(n,p) , the homomorphism ρ = ρw Z/pn−s+1 : S → w [n,p] [n,p] U, the induced homomorphism ρ̄ = ρ̄Z/pn−s+1 : S → U , and the closed (n,p) normal subgroup M = S of S. We write θw,n , αw,n for θ, α, respectively. 18 IDO EFRAT Lemma 8.1. For σ ∈ S (n,p) and a word w ∈ X ∗ of length 1 ≤ s ≤ n one has (σ, αw,n )n = ιn,s (ǫw,Z/pn−s+1 (σ)). Proof. By (7.6) and (7.5), (σ, αw,n )n = θw,n (σ) = ιUn,s (ρw Z/pn−s+1 (σ)) = ιn,s (ǫw,Z/pn−s+1 (σ)).  This and (6.1) give: Corollary 8.2. Let w, w ′ be words in X ∗ of lengths 1 ≤ s, s′ ≤ n, respectively, with w Lyndon. Then n−s (τwp , αw′ ,n )n = hw, w ′in . Proposition 6.4(a)(b) now gives: Corollary 8.3. Let Lyn≤n (X) be totally ordered by . The matrix  n−|w|  (τwp , αw′ ,n )n , where w, w ′ ∈ Lyn≤n (X), is upper-triangular unipotent. In general the above matrix need not be the identity matrix – see e.g., Proposition 11.2 below. Next we observe the following general fact: Lemma 8.4. Let R be a commutative ring and let (·, ·) : A × B → R be a non-degenerate bilinear map of R-modules. Let (L, ≤) be a finite totally ordered set, and for every w ∈ L let aw ∈ A, bw ∈ B. Suppose that the matrix (aw , bw′ ) w,w′∈L is invertible, and that aw , w ∈ L, generate A. Then aw , w ∈ L, is an R-linear basis of A, and bw , w ∈ L, is an R-linear basis of B. Proof. Let b ∈ B, and consider rw′ ∈ R, with w ′ ∈ L. The assumptions P P imply that b = w′ rw′ bw′ if and only if (aw , b − w′ rw′ bw′ ) = 0 for every P w. Equivalently, the rw′ solve the linear system w′ (aw , bw′ )Xw′ = (aw , b), for w ∈ L. By the invertibility, the latter system has a unique solution. This shows that bw , w ∈ L, is an R-linear basis of B. By reversing the roles of aw , bw , we conclude that the aw , w ∈ L, form an R-linear basis of A.  Theorem 8.5. (a) The cohomology elements αw,n , where w ∈ Lyn≤n (X), form a Z/p-linear basis of H 2 (S [n,p], Z/p). n−s (b) When X is finite, the cosets of the powers τwp , w ∈ Lyn≤n (X), form a basis of the Z/p-module S (n,p) /S (n+1,p) . COHOMOLOGY AND LYNDON WORDS 19 Proof. When X is finite, the set Lyn≤n (X) is also finite. By Theorem n−|w| 5.3, the cosets aw of τwp , where w ∈ Lyn≤n (X), generate the Z/p(n,p) (n+1,p) module S /S . We apply Lemma 8.4 with the Z/p-modules A = (n,p) (n+1,p) S /S and B = H 2 (S [n,p], Z/p), the non-degenerate bilinear map A × B → Z/p induced by (·, ·)n , the generators aw of A, and the elements bw = αw,n of B. Corollary 8.3 implies that the matrix (aw , bw′ ) is invertible. Therefore Lemma 8.4 gives both assertions in the finite case. The general case of (a) follows from the finite case by a standard limit argument.  We call αw,n , w ∈ Lyn≤n (X), the Lyndon basis of H 2 (S [n,p], Z/p). Recall that the number of relations in a minimal presentation of a pro-p group G is given by dim H 2 (G, Z/p) [NSW08, Cor. 3.9.5]. In view of (2.2), Theorem 8.5 gives this number for G = S [n,p]: Corollary 8.6. One has 2 dimFp H (S [n,p] , Z/p) = dimFp (S (n,p) /S (n+1,p) )= n X ϕs (|X|), s=1 where ϕs is the necklace map. 9. The shuffle relations We recall the following constructions from [CFL58], [Reu93, pp. 134– 135]. Let u1 , . . . , ut ∈ X ∗ be words of lengths s1 , . . . , st , respectively. We say that a word w ∈ X ∗ of length 1 ≤ n ≤ s1 + · · ·+ st is an infiltration of u1 , . . . , ut , if there exist sets I1 , . . . , It of respective cardinalities s1 , . . . , st such that {1, 2, . . . 
, n} = I1 ∪ · · · ∪ It and the restriction of w to the index set Ij is uj , j = 1, 2, . . . , t. We then write w = w(I1 , . . . , It , u1 , . . . , ut). We write Infil(u1 , . . . , ut ) for the set of all infiltrations of u1 , . . . , ut . The P infiltration product u1 ↓ · · · ↓ ut of u1 , . . . , ut is the polynomial w in ZhXi, where the sum is over all such infiltrations, taken with multiplicity. If in the above setting, the sets I1 , . . . , It are pairwise disjoint, then w(I1, . . . , It , u1, . . . , ut ) is called a shuffle of u1 , . . . , ut. We write Sh(u1 , . . . , ut ) for the set of all shuffles of u1, . . . , ut . It consists of the words in Infil(u1 , . . . , ut ) of length s1 + · · · + st . The shuffle product u1 x · · · xut is the polynoP mial w(I1 , . . . , It , u1 , . . . , ut ) in ZhXi, where the sum is over all shuffles of u1 , . . . , ut , taken with multiplicity. Thus u1 x · · · xut is the homogenous 20 IDO EFRAT part of u1 ↓ · · · ↓ ut of (maximal) degree s1 + · · · + st . For instance (xy) ↓ (xz) = (xyxz) + 2(xxyz) + 2(xxzy) + (xzxy) + (xyz) + (xzy), (xy)x(xz) = (xyxz) + 2(xxyz) + 2(xxzy) + (xzxy) (x) ↓ (x) = 2(xx) + (x), (x)x(x) = 2(xx). We may view infiltration and shuffle products also as elements of Zp hXi. Let Shuffles(X) be the Z-submodule of ZhXi generated by all shuffle products uxv, with ∅ = 6 u, v ∈ X ∗ . Let Shufflesn (X) be its homogenous component of degree n. Examples 9.1. Shuffles1 (X) = {0}, Shuffles2 (X) = h(xy) + (yx) | x, y ∈ Xi, Shuffles3 (X) = h(xyz) + (xzy) + (zxy) | x, y, z ∈ Xi. Let (·, ·) be the pairing of (3.1) for the ring R = Zp . As before, S = SX is the free profinite group on the set X. The following fact is due to Chen, Fox, and Lyndon in discrete case [CFL58, Th. 3.6] (see also [Mor12, Prop. 8.6], [Reu93, Lemma 6.7]), as well as [Vog05, Prop. 2.25] in the profinite case. Proposition 9.2. For every ∅ = 6 u, v ∈ X ∗ and every σ ∈ S one has ǫu,Zp (σ)ǫv,Zp (σ) = (ΛZp (σ), u ↓ v). Corollary 9.3. Let u, v be nonempty words in X ∗ with s = |u| + |v| ≤ n. For every σ ∈ S (n,p) one has (ΛZp (σ), uxv) ∈ pn−s+1 Zp . Proof. If w is a nonempty word of length |w| < s, then by Proposition 4.1(b), ǫw,Zp (σ) ∈ pn−|w| Zp ⊆ pn−s+1 Zp . In particular, this is the case for w = u, w = v, and when w ∈ Infil(u, v) \ Sh(u, v). It follows from  Proposition 9.2 that (ΛZp (σ), uxv) ∈ pn−s+1 Zp . We obtain the following shuffle relations (see also [Vog04, Cor. 1.2.10] and [FS84, Th. 6.8]). We write X s for the set of words in X ∗ of length s. Theorem 9.4. For every ∅ = 6 u, v ∈ X ∗ with s = |u| + |v| ≤ n one has X (uxv)w αw,n = 0. w∈X s Proof. For σ ∈ S (n,p) , Corollary 9.3 gives X X (uxv)w ǫw,Zp (σ) = (ΛZp (σ), uxv) ∈ pn−s+1 Zp . (uxv)w ǫw,Zp (σ) = w∈X s w∈X ∗ COHOMOLOGY AND LYNDON WORDS 21 Therefore, by Lemma 6.2, X X X (uxv)w ιn,s (ǫw,Z/pn−s+1 (σ)) (uxv)w (σ, αw,n )n = (uxv)w αw,n )n = (σ, w∈X s w∈X s w∈X s = ιn,s X w∈X s  (uxv)w ǫw,Z/pn−s+1 (σ) = 0. Now use the fact that (·, ·)n : S (n,p) × H 2 (S [n,p], Z/p) → Z/p has a trivial right kernel.  Corollary 9.5. There is a canonical epimorphism n  M  M  Z / Shuffless (X) ⊗ (Z/p) → H 2 (S [n,p], Z/p) s=1 w∈X s (r̄w )w 7→ X rw αw,n . w Proof. By Theorem 9.4 this homomorphism is well defined. By Theorem 8.5(a), it is surjective.  Remark 9.6. In view of Lemma 6.2, the epimorphism of Corollary 9.5 and the canonical pairing (·, ·)n induce a bilinear map n  M  M  (n,p) Z / Shuffless (X) → Z/p, S × s=1 w∈X s (σ, (rw )w ) = X rw ιn,s (ǫw,Z/pn−s+1 (σ)) w with left kernel S (n+1,p) . Example 9.7. 
We show that for every x1 , x2 , . . . , xk ∈ X one has (x1 x2 · · · xk ) + (−1)k−1 (xk · · · x2 x1 ) ∈ Shufflesn (X). We may assume that x1 , x2 , . . . , xk are distinct. For 1 ≤ l ≤ k − 1 let u = (x · · · x x ) and vl = (xl+1 · · · xk ). We consider the polynomial Pl k−1 l l−1 2 1 ul xvl in ZhXi. It is homogenous of degree k. If w ∈ Sh(ul , vl ), l=1 (−1) then either: (1) xl appears before xl+1 in w, and then w appears with an opposite sign also in Sh(ul−1 , vl−1 ); or (2) xl+1 appears before xl in w, and then w appears with opposite sign also in Sh(ul+1 , vl+1 ). 22 IDO EFRAT The only exceptions are w = (x1 x2 · · · xk ) ∈ Sh(u1 , v1 ) and w = (xk · · · x2 x1 ) ∈ Sh(uk−1, vk−1 ). This shows that k (x1 x2 · · · xk ) + (−1) (xk · · · x2 x1 ) = k−1 X (−1)l−1 ul xvl . l=1 For 1 ≤ k ≤ n Corollary 9.5 therefore implies that α(x1 x2 ···xk ),n = (−1)k−1 α(xk ···x2 x1 ),n . 10. Example: The case n = 2 Our results in this case are fairly well known, and are brought here in order to illustrate the general theory. As before let S = SX with X totally ordered. Here S (2,p) = S p [S, S] and S̄ = S [2,p] is the maximal elementary p-abelian quotient of S. We L may identify H 1 (S, Z/p) = H 1 (S̄, Z/p) ∼ = x∈X Z/p. Let χx,Z/p = ǫ(x),Z/p , x ∈ X, be the basis of H 1 (S, Z/p) dual to X (see Remark 6.2). The Lyndon words w of length ≤ 2 are (x), where x ∈ X, and (xy), where x, y ∈ X and x < y. For these words we have τ(x) = x and τ(xy) = [x, y]. By Examples 7.4(1)(3), α(x),2 = Bockp,S̄ (χx,Z/p ), α(xy),2 = χx,Z/p ∪ χy,Z/p . Hence, by Theorem 8.5, Bockp,S̄ (χx,Z/p ) and χx,Z/p ∪ χy,Z/p , where x < y, form a Z/p-linear basis of H 2 (S̄, Z/p). Furthermore, when X is finite, the elements of the form xp and [x, y] with x, y ∈ X, x < y, form a Z/p-linear basis of S (2,p) /S (3,p) . In view of Examples 9.1 and Corollary 9.5, the map P (r̄w ) 7→ w rw αw,2 induces an epimorphism  M   M Z/p ⊕ Z /h(xy) + (yx) | x, y ∈ Xi ⊗ (Z/p) → H 2 (S̄, Z/p). x∈X x,y∈X It coincides with the map H 1 (S̄, Z/p) ⊕ V2 H 1 (S̄, Z/p) → H 2 (S̄, Z/p) which is Bockp,S̄ on the first component and ∪ on the second component. When p 6= 2 the direct sum is a free Z/p-module on Lyn≤2 (X), and by comparing dimensions we see that the epimorphism is in fact an isomorphism (compare [EM11, Cor. 2.9(a)]). However when p = 2 one has Bock2,S̄ (χ) = χ ∪ χ [EM11, Lemma 2.4], so the above epimorphism is not injective. COHOMOLOGY AND LYNDON WORDS 23 Next, Proposition 6.4 shows that the matrix (hw, w ′i2 ), where w, w ′ ∈ Lyn≤2 (X), is the identity matrix. In view of Corollary 8.2, it coincides  2−|w| with the matrix (τwp , αw′,2 )2 . Thus (xp , Bockp,S̄ (χx,Z/p ))2 = 1 for every x ∈ X, (xp , Bockp,S̄ (χy,Z/p ))2 = 0 for every x, y ∈ X, x 6= y, (xp , χy,Z/p ∪ χz,Z/p )2 = 0 for every x, y, z ∈ X, ([x, y], Bockp,S̄ (χz,Z/p ))2 = 0 for every x, y, z ∈ X, ([x, y], χz,Z/p ∪ χt,Z/p )2 = 0 for every x, y, z, t ∈ X, (xy) 6= (zt), (tz), ([x, y], χx,Z/p ∪ χy,Z/p )2 = 1 for every x, y ∈ X with x < y. This recovers well known facts from [Lab66, §2.3], [Koc02, §7.8] and [NSW08, Th. 3.9.13 and Prop. 3.9.14] 11. Example: The case n = 3. 2 Here S (3,p) = S p [S, S]p [S, [S, S]]. We abbreviate S̄ = S [3,p] . Recall that Lyn≤3 (X) consists of the words (x) for x ∈ X, (xy), (xxy), (xyy) for x, y ∈ X with x < y, (xyz), (xzy) for x, y, z ∈ X with x < y < z. For these words τ(x) = x, τ(xy) = [x, y], τ(xxy) = [x, [x, y]], τ(xyz) = [x, [y, z]], τ(xyy) = [[x, y], y], τ(xzy) = [[x, z], y]. 
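The values of the pairing h·, ·i3 computed below (Proposition 11.2, via Lemma 4.2) can also be checked mechanically, by evaluating the Magnus expansion of the commutators just listed in a truncated ring of noncommuting polynomials. The following Python sketch is only our own illustration (words as dictionary keys, integer coefficients); it reproduces the coefficients ±1 and 0 used in that computation.

```python
N = 3  # truncate the Magnus expansion at total degree N

def trunc_mul(f, g):
    """Product of truncated noncommutative polynomials (dict: word tuple -> coefficient)."""
    h = {}
    for u, a in f.items():
        for v, b in g.items():
            if len(u) + len(v) <= N:
                h[u + v] = h.get(u + v, 0) + a * b
    return {w: c for w, c in h.items() if c != 0}

ONE = {(): 1}

def inverse(f):
    """(1 + higher-degree terms)^(-1) via the geometric series, truncated at degree N."""
    m = {w: -c for w, c in f.items() if w != ()}    # f = 1 - m  =>  f^-1 = 1 + m + m^2 + ...
    inv, power = dict(ONE), dict(ONE)
    for _ in range(N):
        power = trunc_mul(power, m)
        for w, c in power.items():
            inv[w] = inv.get(w, 0) + c
    return inv

def comm(f, g):
    """Magnus image of the commutator [sigma, tau] = sigma^-1 tau^-1 sigma tau."""
    return trunc_mul(trunc_mul(inverse(f), inverse(g)), trunc_mul(f, g))

def gen(letter):
    """Magnus image Lambda(x) = 1 + x of a generator x."""
    return {(): 1, (letter,): 1}

x, y, z = gen('x'), gen('y'), gen('z')
tau_xyz = comm(x, comm(y, z))       # tau_(xyz) = [x, [y, z]]
tau_xxy = comm(x, comm(x, y))       # tau_(xxy) = [x, [x, y]]

print(tau_xyz.get(('x', 'y', 'z'), 0))   #  1 : <(xyz), (xyz)>_3 = 1
print(tau_xyz.get(('x', 'z', 'y'), 0))   # -1 : <(xyz), (xzy)>_3 = -1
print(tau_xxy.get(('x', 'y', 'y'), 0))   #  0 : <(xxy), (xyy)>_3 = 0
```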
By Theorem 5.3, the cosets of 3 xp , [x, y]p , [x, [x, y]], [[x, y], y], [x, [y, z]], [[x, z], y], with x, y, z as above, generate S (3,p) /S (4,p) . When X is finite, they form a linear basis of S (3,p) /S (4,p) over Z/p (Theorem 8.5(b)). Furthermore, Theorem 8.5(a) gives: Theorem 11.1. The following cohomology elements form a Z/p-linear basis of H 2 (S̄, Z/p): α(x),3 , α(xy),3 , α(xxy),3 , α(xyy),3 , α(xyz),3 , α(xzy),3 , where x, y, z ∈ X and we assume that x < y < z. 24 IDO EFRAT By Examples 7.4, α(x),3 = Bockp2 ,S̄ (χx,Z/p2 ), and for every x, y, z ∈ X, α(xyz),3 belongs to the triple Massey product hχx,Z/p , χy,Z/p , χz,Z/p i ⊆ H 2 (S̄, Z/p). (xy) We further recall that α(xy),3 is the pullback to H 2 (S̄, Z/p) under ρ̄Z/p2 : S̄ → U3 (Z/p2 )[3,p] of the cohomology element in H 2 (U3 (Z/p2 )[3,p], Z/p) corresponding to the central extension 0 → Z/p → U3 (Z/p2 ) → U3 (Z/p2 )[3,p] → 1. Alternatively, it has the following explicit description: By Proposition ∼ 6.3(a), U3 (Z/p2 )(3,p) = I3 + ZpE13 , and let ι = ιU3,2 : U3 (Z/p2 )(3,p) − → Z/p be the natural isomorphism. By (7.5) and (7.6), α(xy),3 = − trg(θ), where ∼ (xy) θ = ι ◦ (ρZ/p2 |S (3,p) ) and trg : H 1 (S (3,p) , Z/p)S − → H 2 (S̄, Z/p) is the transgression isomorphism. 3−|w| , αw′ ,3 )3 ) = (hw, w ′i3 ), where w, w ′ ∈ Next we compute the matrix ((τwp Lyn≤3 (X): Proposition 11.2. For w, w ′ ∈ Lyn≤3 (X) one has  ′    1, if w = w ; hw, w ′i3 = −1, if w = (xyz), w ′ = (xzy) for some x, y, z ∈ X, x < y < z;    0, otherwise. Proof. In view of Proposition 6.4, it is enough to show the assertion when w 6= w ′ , either |w| = |w ′ | or 2|w| ≤ |w ′|, and the letters of w ′ appear in w. Furthermore, when |w| = |w ′ | we may assume that w ≤alp w ′ . Thus when w has one of the forms (x), (xy), (xyy), (xzy) (where x < y < z) there is nothing more to show. When w = (xxy) with x < y we need to check only the word w ′ = (xyy). Then Lemma 4.2 gives h(xxy), (xyy)i3 = ǫ(xyy),Z/p ([x, [x, y]]) = ǫ(x),Z/p (x) · ǫ(yy),Z/p ([x, y]) − ǫ(y),Z/p (x) · ǫ(xy),Z/p ([x, y]) = 1 · 0 − 0 · 1 = 0. When w = (xyz) with x < y < z we need to check only the word w = (xzy). Then Lemma 4.2 gives ′ h(xyz), (xzy)i3 = ǫ(xzy),Z/p ([x, [y, z]]) = ǫ(x),Z/p (x) · ǫ(zy),Z/p ([y, z]) − ǫ(y),Z/p (x) · ǫ(xz),Z/p ([y, z]) = 1 · (−1) − 0 · 0 = −1. This completes the verification in all cases.  COHOMOLOGY AND LYNDON WORDS 25 In view of Examples 9.1, Corollary 9.5 gives rise to an epimorphism (11.1)  M  M  Z/p ⊕ Z(xy) /h(xy) + (yx) | x, y ∈ Xi ⊗ (Z/p) x∈X x,y∈X ⊕  M x,y,z∈X   Z(xyz) /h(xyz) + (xzy) + (zxy) | x, y, z ∈ Xi ⊗ (Z/p) → H 2 (S̄, Z/p). Moreover, for x, y, z ∈ X, x < y < z, we have (yx) = (x)x(y) − (xy) 2(xx) = (x)x(x) (xyx) = (x)x(xy) − 2(xxy) (yxx) = (x)x(yx) − (xx)x(y) + (xxy) (yxy) = (xy)x(y) − 2(xyy) (yyx) = (yy)x(x) − (y)x(xy) + (xyy) (yxz) = (y)x(xz) − (xyz) − (xzy) (zxy) = (z)x(xy) − (xzy) − (xyz) (yzx) = (zx)x(y) − (x)x(zy) + (xzy) (zyx) = (yx)x(z) − (x)x(yz) + (xyz) 3(xxx) = (x)x(xx). These congruences and Example 2.1 imply that X X Zw ≡ Zw + 2-torsion (mod Shuffles2 (X)), w∈X 2 X w∈X 3 w∈Lyn2 (X) Zw ≡ X Zw + 3-torsion (mod Shuffles3 (X)). w∈Lyn3 (X) Therefore, for p > 3, the direct sum in (11.1) is the free Z/p-module on the basis Lyn≤3 (X). Thus the epimorphism (11.1) maps the Z/p-linear basis 1w, w ∈ Lyn≤3 (X), bijectively onto the Z/p-linear basis αw,3, w ∈ Lyn≤3 (X) (see Theorem 8.5). Consequently we have: Theorem 11.3. For n = 3 and p > 3, (11.1) is an isomorphism. 
Thus all relations in H 2 (S [3,p] , Z/p) are consequences of the shuffle relations of Theorem 9.4. 26 IDO EFRAT References [AKM99] A. Adem, D. B. Karagueuzian, and J. Mináč, On the cohomology of Galois groups determined by Witt rings, Adv. Math. 148 (1999), 105–160. [Bog91] F. A. Bogomolov, On two conjectures in birational algebraic geometry, Proc. of Tokyo Satellite conference ICM-90 Analytic and Algebraic Geometry, 1991, pp. 26–52. [Bor04] I. C. Borge, A cohomological approach to the modular isomorphism problem, J. Pure Appl. Algebra 189 (2004), 7–25. [CE16] M. Chapman and I. Efrat, Filtrations of the free group arising from the lower central series, J. Group Theory 19 (2016), 405–433, special issue in memory of O. Melnikov. [CEM12] S. K. Chebolu, I. Efrat, and J. Mináč, Quotients of absolute Galois groups which determine the entire Galois cohomology, Math. Ann. 352 (2012), 205– 221. [CFL58] K.-T. Chen, R. H. Fox, and R. C. Lyndon, Free differential calculus. IV. The quotient groups of the lower central series, Ann. Math. 68 (1958), 81–95. [Dwy75] W. G. Dwyer, Homology, Massey products and maps between groups, J. Pure Appl. Algebra 6 (1975), 177–190. [Efr14] I. Efrat, The Zassenhaus filtration, Massey products, and representations of profinite groups, Adv. Math. 263 (2014), 389–411. [EM15] I. Efrat and E. Matzri, Triple Massey products and absolute Galois groups, J. Eur. Math Soc. (2015), to appear, available at arXiv:1412.7265. [EM11] I. Efrat and J. Mináč, On the descending central sequence of absolute Galois groups, Amer. J. Math. 133 (2011), 1503–1532. [Fen83] R. A. Fenn, Techniques of Geometric Topology, London Math. Society Lect. Note Series, vol. 57, Cambridge Univ. Press, Cambridge, 1983. [FS84] R. Fenn and D. Sjerve, Basic commutators and minimal Massey products, Canad. J. Math. 36 (1984), 1119–1146. [FJ08] M. D. Fried and M. Jarden, Field arithmetic, Springer-Verlag, Berlin, 2008. [For11] P. Forré, Strongly free sequences and pro-p groups of cohomological dimension 2, J. reine angew. Math. 658 (2011), 173–192. [Hoe68] K. Hoechsmann, Zum Einbettungsproblem 229 (1968), 81–106. [Koc60] H. Koch, Über die Faktorgruppen einer absteigenden Zentralreihe, Math. Nach. 22 (1960), 159–161. [Koc02] H. Koch, Galois theory of p-extensions, Springer, Berlin, 2002. [Lab66] J. P. Labute, Demuškin groups of rank ℵ0 , Bull. Soc. Math. France 94 (1966), 211–244. [Lab67] J. Labute, Classification of Demuškin groups, Can. J. Math. 19 (1967), 106– 132. [Lab06] J. Labute, Mild pro-p groups and Galois groups of p-extensions of Q, J. reine angew. Math. 596 (2006), 155–182. [LM11] J. Labute and J. Mináč, Mild pro-2-groups and 2-extensions of Q with restricted ramification, J. Algebra 332 (2011), 136–158. [Laz54] M. Lazard, Sur les groupes nilpotents et les anneaux de Lie, Ann. Sci. Ecole Norm. Sup. (3) 71 (1954), 101–190. COHOMOLOGY AND LYNDON WORDS 27 [MS96] J. Mináč and M. Spira, Witt rings and Galois groups, Ann. Math. 144 (1996), 35–60. [MT15a] J. Mináč and N. D. Tân, The Kernel Unipotent Conjecture and the vanishing of Massey products for odd rigid fields, (with an appendix by Efrat, I., Mináč, J. and Tân, N. D.), Adv. Math. 273 (2015), 242–270. [MT15b] J. Mináč and N. D. Tân, Triple Massey products over global fields, Doc. Math. 20 (2015), 1467–1480. [MT16] J. Mináč and N. D. Tân, Triple Massey products vanish over all fields, J. London Math. Soc. 94 (2016), 909–932. [MT17] J. Mináč and N. D. Tân, Triple Massey products and Galois theory, J. Eur. Math. Soc. 19 (2017), 255–284. [MP00] H. N. 
Minh and M. Petitot, Lyndon words, polylogarithms and the Riemann ζ function, Discrete Math. 217 (2000), 273–292. [Mor12] M. Morishita, Knots and primes, Universitext, Springer, London, 2012. [NSW08] J. Neukirch, A. Schmidt, and K. Wingberg, Cohomology of Number Fields, Second edition, Springer, Berlin, 2008. [Reu93] C. Reutenauer, Free Lie algebras, London Mathematical Society Monographs. New Series, vol. 7, The Clarendon Press, Oxford University Press, New York, 1993. Oxford Science Publications. [Sch10] A. Schmidt, Über pro-p-Fundamentalgruppen markierter arithmetischer Kurven, J. reine angew. Math. 640 (2010), 203–235. [Ser63] J.-P. Serre, Structure de certains pro-p-groupes (d’après Demuškin), Séminaire Bourbaki (1962/63), Exp. 252. [Ser92] J.-P. Serre, Lie Algebras and Lie Groups, Springer, 1992. [Top16] A. Topaz, Reconstructing function fields from rational quotients of mod-ℓ Galois groups, Math. Ann. 366 (2016), 337–385. [Vog04] D. Vogel, Massey products in the Galois cohomology of number fields, Ph.D. thesis, Universität Heidelberg, 2004. [Vog05] D. Vogel, On the Galois group of 2-extensions with restricted ramification, J. reine angew. Math. 581 (2005), 117–150. [Wei55] A. J. Weir, Sylow p-subgroups of the general linear group over finite fields of characteristic p, Proc. Amer. Math. Soc. 6 (1955), 454–464. [Wic12] K. Wickelgren, n-nilpotent obstructions to π1 sections of P1 − {0, 1, ∞} and Massey products, Galois-Teichmüller theory and arithmetic geometry, Adv. Stud. Pure Math., vol. 63, Math. Soc. Japan, Tokyo, 2012, pp. 579–600. Department of Mathematics, Ben-Gurion University of the Negev, P.O. Box 653, Be’er-Sheva 84105, Israel E-mail address: [email protected]
Nonparametric independence testing via mutual arXiv:1711.06642v1 [stat.ME] 17 Nov 2017 information Thomas B. Berrett and Richard J. Samworth Statistical Laboratory, University of Cambridge November 20, 2017 Abstract We propose a test of independence of two multivariate random vectors, given a sample from the underlying population. Our approach, which we call MINT, is based on the estimation of mutual information, whose decomposition into joint and marginal entropies facilitates the use of recently-developed efficient entropy estimators derived from nearest neighbour distances. The proposed critical values, which may be obtained from simulation (in the case where one marginal is known) or resampling, guarantee that the test has nominal size, and we provide local power analyses, uniformly over classes of densities whose mutual information satisfies a lower bound. Our ideas may be extended to provide a new goodness-of-fit tests of normal linear models based on assessing the independence of our vector of covariates and an appropriately-defined notion of an error vector. The theory is supported by numerical studies on both simulated and real data. 1 Introduction Independence is a fundamental concept in statistics and many related fields, underpinning the way practitioners frequently think about model building, as well as much of statistical theory. Often we would like to assess whether or not the assumption of independence is reasonable, for instance as a method of exploratory data analysis (Steuer et al., 2002; Albert et al., 2015; Nguyen and Eisenstein, 2017), or as a way of evaluating the goodness-of-fit of a statistical model (Einmahl and van Keilegom, 2008). Testing independence and estimating dependence 1 are well-established areas of statistics, with the related idea of the correlation between two random variables dating back to Francis Galton’s work at the end of the 19th century (Stigler, 1989), which was subsequently expanded upon by Karl Pearson (e.g. Pearson, 1920). Since then many new measures of dependence have been developed and studied, each with its own advantages and disadvantages, and there is no universally accepted measure. For surveys of several measures, see, for example, Schweizer (1981), Joe (1989), Mari and Kotz (2001) and the references therein. We give an overview of more recently-introduced quantities below; see also Josse and Holmes (2016). In addition to the applications mentioned above, dependence measures play an important role in independent component analysis (ICA), a special case of blind source separation, in which a linear transformation of the data is sought so that the transformed data is maximally independent; see e.g. Comon (1994), Bach and Jordan (2002), Miller and Fisher (2003) and Samworth and Yuan (2012). Here, independence tests may be carried out to check the convergence of an ICA algorithm and to validate the results (e.g. Wu, Yu and Li, 2009). Further examples include feature selection, where one seeks a set of features which contains the maximum possible information about a response (Torkkola, 2003; Song et al., 2012), and the evaluation of the quality of a clustering in cluster analysis (Vinh, Epps and Bailey, 2010). When dealing with discrete data, often presented in a contingency table, the independence testing problem is typically reduced to testing the equality of two discrete distributions via a chi-squared test. Here we will focus on the case of distributions that are absolutely continuous with respect to the relevant Lebesgue measure. 
Classical nonparametric approaches to measuring dependence and independence testing in such cases include Pearson’s correlation coefficient, Kendall’s tau and Spearman’s rank correlation coefficient. Though these approaches are widely used, they suffer from a lack of power against many alternatives; indeed Pearson’s correlation only measures linear relationships between variables, while Kendall’s tau and Spearman’s rank correlation coefficient measure monotonic relationships. Thus, for example, if X has a symmetric distribution on the real line, and Y = X 2 , then the population quantities corresponding to these test statistics are zero in all three cases. Hoeffding’s test of independence (Hoeffding, 1948) is able to detect a wider class of departures from independence and is distribution-free under the null hypothesis but, as with these other classical methods, was only designed for univariate variables. Recent work of Weihs, Drton and Meinshausen (2017) has aimed to address some of computational challenges involved in extending these ideas to multivariate settings. 2 Recent research has focused on constructing tests that can be used for more complex data and that are consistent against wider classes of alternatives. The concept of distance covariance was introduced in Székely, Rizzo and Bakirov (2007) and can be expressed as a weighted L2 norm between the characteristic function of the joint distribution and the product of the marginal characteristic functions. This concept has also been studied in high dimensions (Székely and Rizzo, 2013; Yao, Zhang and Shao, 2017), and for testing independence of several random vectors (Fan et al., 2017). In Sejdinovic et al. (2013) tests based on distance covariance were shown to be equivalent to a reproducing kernel Hilbert space (RKHS) test for a specific choice of kernel. RKHS tests have been widely studied in the machine learning community, with early understanding of the subject given by Bach and Jordan (2002) and Gretton et al. (2005), in which the Hilbert–Schmidt independence criterion was proposed. These tests are based on embedding the joint distribution and product of the marginal distributions into a Hilbert space and considering the norm of their difference in this space. One drawback of the kernel paradigm here is the computational complexity, though Jitkrittum, Szabó and Gretton (2016) and Zhang et al. (2017) have recently attempted to address this issue. The performance of these methods may also be strongly affected by the choice of kernel. In another line of work, there is a large literature on testing independence based on an empirical copula process; see for example Kojadinovic and Holmes (2009) and the references therein. Other test statistics include those based on partitioning the sample space (e.g. Gretton and Györfi, 2010; Heller et al., 2016). These have the advantage of being distribution-free under the null hypothesis, though their performance depends on the particular partition chosen. We also remark that the basic independence testing problem has spawned many variants. For instance, Pfister et al. (2017) extend kernel tests to tests of mutual independence between a group of random vectors. Another important extension is to the problem of testing conditional independence, which is central to graphical modelling (Lauritzen, 1996) and also relevant to causal inference (Pearl, 2009). Existing tests of conditional independence include the proposals of Su and White (2008), Zhang et al. (2011) and Fan, Feng and Xia (2017). 
To formalise the problem we consider, let X and Y have (Lebesgue) densities fX on RdX and fY on RdY respectively, and let Z = (X, Y ) have density f on Rd , where d := dX + dY . Given independent and identically distributed copies Z1 , . . . , Zn of Z, we wish to test the null hypothesis that X and Y are independent, denoted H0 : X ⊥⊥ Y , against the alternative that X and Y are not independent, written H1 : X 6⊥⊥ Y . Our approach is based on constructing an estimator Ibn = Ibn (Z1 , . . . , Zn ) of the mutual information I(X; Y ) between 3 X and Y . Mutual information turns out to be a very attractive measure of dependence in this context; we review its definition and basic properties in Section 2 below. In particular, its decomposition into joint and marginal entropies (see (1) below) facilitates the use of recentlydeveloped efficient entropy estimators derived from nearest neighbour distances (Berrett, Samworth and Yuan, 2017). The next main challenge is to identify an appropriate critical value for the test. In the simpler setting where either of the marginals fX and fY is known, a simulation-based approach can be employed to yield a test of any given size q ∈ (0, 1). We further provide regularity conditions under which the power of our test converges to 1 as n → ∞, uniformly over classes of alternatives with I(X; Y ) ≥ bn , say, where we may even take bn = o(n−1/2 ). To the best of our knowledge this is the first time that such a local power analysis has been carried out for an independence test for multivariate data. When neither marginal is known, we obtain our critical value via a permutation approach, again yielding a test of the nominal size. Here, under our regularity conditions, the test is uniformly consistent (has power converging to 1) against alternatives whose mutual information is bounded away from zero. We call our test MINT, short for Mutual Information Test; it is implemented in the R package IndepTest (Berrett, Grose and Samworth, 2017). As an application of these ideas, we are able to introduce new goodness-of-fit tests of normal linear models based on assessing the independence of our vector of covariates and an appropriately-defined notion of an error vector. Such tests do not follow immediately from our earlier work, because we do not observe realisations of the error vector directly; instead, we only have access to residuals from a fitted model. Nevertheless, we are able to provide rigorous justification, again in the form of a local analysis, for our approach. It seems that, when fitting normal linear models, current standard practice in the applied statistics community for assessing goodness-of-fit is based on visual inspection of diagnostic plots such as those provided by the plot command in R when applied to an object of class lm. Our aim, then, is to augment the standard toolkit by providing a formal basis for inference regarding the validity of the model. Related work here includes Neumeyer (2009), Neumeyer and Van Keilegom (2010), Müller, Schick and Wefelmeyer (2012), Sen and Sen (2014) and Shah and Bühlmann (2017). The remainder of the paper is organised as follows: after reviewing the concept of mutual information in Section 2.1, we explain in Section 2.2 how it can be estimated effectively from data using efficient entropy estimators. Our new tests, for the cases where one of the marginals is known and where they are both unknown, are introduced in Sections 3 and 4 4 respectively. 
The regression setting is considered in Section 5, and numerical studies on both simulated and real data are presented in Section 6. Proofs are given in Section 7. The following notation is used throughout. For a generic dimension $D \in \mathbb{N}$, let $\lambda_D$ and $\|\cdot\|$ denote Lebesgue measure and the Euclidean norm on $\mathbb{R}^D$ respectively. If $f = d\mu/d\lambda_D$ and $g = d\nu/d\lambda_D$ are densities on $\mathbb{R}^D$ with respect to $\lambda_D$, we write $f \ll g$ if $\mu \ll \nu$. For $z \in \mathbb{R}^D$ and $r \in [0,\infty)$, we write $B_z(r) := \{w \in \mathbb{R}^D : \|w - z\| \leq r\}$ and $B_z^\circ(r) := B_z(r) \setminus \{z\}$. We write $\lambda_{\min}(A)$ for the smallest eigenvalue of a positive definite matrix $A$, and $\|B\|_F$ for the Frobenius norm of a matrix $B$.

2 Mutual information

2.1 Definition and basic properties

Retaining our notation from the introduction, let $\mathcal{Z} := \{(x,y) : f(x,y) > 0\}$. A very natural measure of dependence is the mutual information between $X$ and $Y$, defined to be
\[
I(X;Y) = I(f) := \int_{\mathcal{Z}} f(x,y) \log \frac{f(x,y)}{f_X(x) f_Y(y)} \, d\lambda_d(x,y)
\]
when $f \ll f_X f_Y$, and defined to be $\infty$ otherwise. This is the Kullback–Leibler divergence between the joint distribution of $(X,Y)$ and the product of the marginal distributions, so is non-negative, and equal to zero if and only if $X$ and $Y$ are independent. Another attractive feature of mutual information as a measure of dependence is that it is invariant to smooth, invertible transformations of $X$ and $Y$. Indeed, suppose that $\mathcal{X} := \{x : f_X(x) > 0\}$ is an open subset of $\mathbb{R}^{d_X}$ and $\phi : \mathcal{X} \to \mathcal{X}'$ is a continuously differentiable bijection whose inverse $\phi^{-1}$ has Jacobian determinant $J(x')$ that never vanishes on $\mathcal{X}'$. Let $\Phi : \mathcal{X} \times \mathcal{Y} \to \mathcal{X}' \times \mathcal{Y}$ be given by $\Phi(x,y) := (\phi(x), y)$, so that $\Phi^{-1}(x',y) = (\phi^{-1}(x'), y)$ also has Jacobian determinant $J(x')$. Then $\Phi(X,Y)$ has density $g(x',y) = f(\phi^{-1}(x'), y)\,|J(x')|$ on $\Phi(\mathcal{Z})$ and $X' = \phi(X)$ has density $g_{X'}(x') = f_X(\phi^{-1}(x'))\,|J(x')|$ on $\mathcal{X}'$. It follows that
\[
I\bigl(\phi(X); Y\bigr) = \int_{\Phi(\mathcal{Z})} g(x',y) \log \frac{g(x',y)}{g_{X'}(x') f_Y(y)} \, d\lambda_d(x',y)
= \int_{\Phi(\mathcal{Z})} g(x',y) \log \frac{f(\phi^{-1}(x'), y)}{f_X(\phi^{-1}(x')) f_Y(y)} \, d\lambda_d(x',y) = I(X;Y).
\]
This means that mutual information is nonparametric in the sense of Weihs, Drton and Meinshausen (2017), whereas several other measures of dependence, including distance covariance, RKHS measures and correlation-based notions, are not in general.

Under a mild assumption, the mutual information between $X$ and $Y$ can be expressed in terms of their joint and marginal entropies; more precisely, writing $\mathcal{Y} := \{y : f_Y(y) > 0\}$, and provided that each of $H(X,Y)$, $H(X)$ and $H(Y)$ are finite,
\[
I(X;Y) = \int_{\mathcal{Z}} f \log f \, d\lambda_d - \int_{\mathcal{X}} f_X \log f_X \, d\lambda_{d_X} - \int_{\mathcal{Y}} f_Y \log f_Y \, d\lambda_{d_Y} =: -H(X,Y) + H(X) + H(Y). \qquad (1)
\]
Thus, mutual information estimators can be constructed from entropy estimators. Moreover, the concept of mutual information is easily generalised to more complex situations. For instance, suppose now that $(X, Y, W)$ has joint density $f$ on $\mathbb{R}^{d + d_W}$, and let $f_{(X,Y)|W}(\cdot,\cdot|w)$, $f_{X|W}(\cdot|w)$ and $f_{Y|W}(\cdot|w)$ denote the joint conditional density of $(X,Y)$ given $W = w$ and the conditional densities of $X$ given $W = w$ and of $Y$ given $W = w$ respectively. When $f_{(X,Y)|W}(\cdot,\cdot|w) \ll f_{X|W}(\cdot|w)\, f_{Y|W}(\cdot|w)$ for each $w$ in the support of $W$, the conditional mutual information between $X$ and $Y$ given $W$ is defined as
\[
I(X;Y|W) := \int_{\mathcal{W}} f(x,y,w) \log \frac{f_{(X,Y)|W}(x,y|w)}{f_{X|W}(x|w)\, f_{Y|W}(y|w)} \, d\lambda_{d + d_W}(x,y,w),
\]
where $\mathcal{W} := \{(x,y,w) : f(x,y,w) > 0\}$. This can similarly be written as $I(X;Y|W) = H(X,W) + H(Y,W) - H(X,Y,W) - H(W)$, provided each of the summands is finite.
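For example, if $(X,Y)$ is bivariate normal with unit variances and correlation $\rho \in (-1,1)$, then $H(X) = H(Y) = \frac{1}{2}\log(2\pi e)$ and $H(X,Y) = \frac{1}{2}\log\{(2\pi e)^2(1-\rho^2)\}$, so (1) gives
\[
I(X;Y) = -\tfrac{1}{2}\log(1-\rho^2),
\]
which is zero precisely when $\rho = 0$ and diverges as $|\rho| \to 1$.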
Finally, we mention that the concept of mutual information easily generalises to situations with $p$ random vectors. In particular, suppose that $X_1, \ldots, X_p$ have joint density $f$ on $\mathbb{R}^d$, where $d = d_1 + \ldots + d_p$, and that $X_j$ has marginal density $f_j$ on $\mathbb{R}^{d_j}$. Then, when $f \ll f_1 \ldots f_p$, and writing $\mathcal{X}_p := \{(x_1, \ldots, x_p) \in \mathbb{R}^d : f(x_1, \ldots, x_p) > 0\}$, we can define
\[
I(X_1; \ldots; X_p) := \int_{\mathcal{X}_p} f(x_1, \ldots, x_p) \log \frac{f(x_1, \ldots, x_p)}{f_1(x_1) \ldots f_p(x_p)} \, d\lambda_d(x_1, \ldots, x_p) = \sum_{j=1}^{p} H(X_j) - H(X_1, \ldots, X_p),
\]
with the second equality holding provided that each of the entropies is finite. The tests we introduce in Sections 3 and 4 therefore extend in a straightforward manner to tests of independence of several random vectors.

2.2 Estimation of mutual information

For $i = 1, \ldots, n$, let $Z_{(1),i}, \ldots, Z_{(n-1),i}$ denote a permutation of $\{Z_1, \ldots, Z_n\} \setminus \{Z_i\}$ such that $\|Z_{(1),i} - Z_i\| \leq \ldots \leq \|Z_{(n-1),i} - Z_i\|$. For conciseness, we let $\rho_{(k),i} := \|Z_{(k),i} - Z_i\|$ denote the distance between $Z_i$ and the $k$th nearest neighbour of $Z_i$. To estimate the unknown entropies, we will use a weighted version of the Kozachenko–Leonenko estimator (Kozachenko and Leonenko, 1987). For $k = k^Z \in \{1, \ldots, n-1\}$ and weights $w_1^Z, \ldots, w_k^Z$ satisfying $\sum_{j=1}^{k} w_j^Z = 1$, this is defined as
\[
\widehat{H}_n^Z = \widehat{H}_{n,k}^{\,d,w^Z}(Z_1, \ldots, Z_n) := \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{k} w_j^Z \log\biggl\{ \frac{V_d\,(n-1)\,\rho_{(j),i}^d}{e^{\Psi(j)}} \biggr\},
\]
where $V_d := \pi^{d/2}/\Gamma(1 + d/2)$ denotes the volume of the unit $d$-dimensional Euclidean ball and where $\Psi$ denotes the digamma function. Berrett, Samworth and Yuan (2017) provided conditions on $k$, $w_1^Z, \ldots, w_k^Z$ and the underlying data generating mechanism under which $\widehat{H}_n^Z$ is an efficient estimator of $H(Z)$ (in the sense that its asymptotic normalised squared error risk achieves the local asymptotic minimax lower bound) in arbitrary dimensions. With estimators $\widehat{H}_n^X$ and $\widehat{H}_n^Y$ of $H(X)$ and $H(Y)$ defined analogously as functions of $(X_1, \ldots, X_n)$ and $(Y_1, \ldots, Y_n)$ respectively, we can use (1) to define an estimator of mutual information by
\[
\widehat{I}_n = \widehat{I}_n(Z_1, \ldots, Z_n) = \widehat{H}_n^X + \widehat{H}_n^Y - \widehat{H}_n^Z. \qquad (2)
\]
Having identified an appropriate mutual information estimator, we turn our attention in the next two sections to obtaining appropriate critical values for our independence tests.

3 The case of one known marginal distribution

In this section, we consider the case where at least one of $f_X$ and $f_Y$ is known (in our experience, little is gained by knowledge of the second marginal density), and without loss of generality, we take the known marginal to be $f_Y$. We further assume that we can generate independent and identically distributed copies of $Y$, denoted $\{Y_i^{(b)} : i = 1, \ldots, n,\ b = 1, \ldots, B\}$, independently of $Z_1, \ldots, Z_n$. Our test in this setting, which we refer to as MINTknown (or MINTknown($q$) when the nominal size $q \in (0,1)$ needs to be made explicit), will reject $H_0$ for large values of $\widehat{I}_n$. The ideal critical value, if both marginal densities were known, would therefore be
\[
C_q^{(n)} := \inf\{r \in \mathbb{R} : P_{f_X f_Y}(\widehat{I}_n > r) \leq q\}.
\]
Using our pseudo-data $\{Y_i^{(b)} : i = 1, \ldots, n,\ b = 1, \ldots, B\}$, generated as described above, we define the statistics
\[
\widehat{I}_n^{(b)} = \widehat{I}_n\bigl((X_1, Y_1^{(b)}), \ldots, (X_n, Y_n^{(b)})\bigr)
\]
for $b = 1, \ldots, B$. Motivated by the fact that these statistics have the same distribution as $\widehat{I}_n$ under $H_0$, we can estimate the critical value $C_q^{(n)}$ by
\[
\widehat{C}_q^{(n),B} := \inf\biggl\{ r \in \mathbb{R} : \frac{1}{B+1} \sum_{b=0}^{B} \mathbb{1}_{\{\widehat{I}_n^{(b)} \geq r\}} \leq q \biggr\},
\]
the $(1-q)$th quantile of $\{\widehat{I}_n^{(0)}, \ldots, \widehat{I}_n^{(B)}\}$, where $\widehat{I}_n^{(0)} := \widehat{I}_n$.
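To make the preceding definitions concrete, the following minimal sketch in base R implements the unweighted special case of the Kozachenko–Leonenko estimator (i.e. $w_j^Z = \mathbb{1}_{\{j=k\}}$), the plug-in mutual information statistic (2), and the simulation-based p-value corresponding to the MINTknown critical value above. The function names kl_entropy, mint_stat and mint_known_pvalue are illustrative placeholders and are not the interface of the IndepTest package; the weighted, efficiency-corrected estimator of Berrett, Samworth and Yuan (2017) is what is analysed in the theory below.

## Unweighted Kozachenko--Leonenko entropy estimate (only the kth neighbour is used).
kl_entropy <- function(Z, k = 5) {
  Z <- as.matrix(Z)
  n <- nrow(Z); d <- ncol(Z)
  D <- as.matrix(dist(Z))                       # pairwise Euclidean distances
  diag(D) <- Inf                                # a point is not its own neighbour
  rho_k <- apply(D, 1, function(r) sort(r)[k])  # kth nearest-neighbour distances
  V_d <- pi^(d / 2) / gamma(1 + d / 2)          # volume of the unit d-ball
  mean(log(V_d * (n - 1) * rho_k^d / exp(digamma(k))))
}

## Plug-in mutual information statistic, as in (2).
mint_stat <- function(X, Y, k = 5) {
  X <- as.matrix(X); Y <- as.matrix(Y)
  kl_entropy(X, k) + kl_entropy(Y, k) - kl_entropy(cbind(X, Y), k)
}

## MINTknown p-value: 'ry' draws n independent copies of Y from the known marginal,
## e.g. ry = function(n) rnorm(n); reject H0 when the p-value is at most q.
mint_known_pvalue <- function(X, Y, ry, k = 5, B = 99) {
  n <- nrow(as.matrix(X))
  I0 <- mint_stat(X, Y, k)
  Ib <- replicate(B, mint_stat(X, ry(n), k))
  (1 + sum(Ib >= I0)) / (B + 1)
}

In this sketch the $\widehat{H}_n^X$ term is recomputed for every $b$ purely for clarity; as noted after Lemma 1 below, it is common to all of $\widehat{I}_n^{(0)}, \ldots, \widehat{I}_n^{(B)}$ and so need not be calculated at all.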
The following lemma justifies this critical value estimate. Lemma 1. For any q ∈ (0, 1) and B ∈ N, the MINTknown(q) test that rejects H0 if and only bq(n),B has size at most q, in the sense that if Ibn > C sup  bq(n),B ≤ q, P Ibn > C sup k∈{1,...,n−1} (X,Y ):I(X;Y )=0 where the inner supremum is over all joint distributions of pairs (X, Y ) with I(X; Y ) = 0. An interesting feature of MINTknown, which is apparent from the proof of Lemma 1, is b X in (1), either on the original data, or on the pseudothat there is no need to calculate H n (b) (b) data sets {(X1 , Y1 ), . . . , (Xn , Yn ) : b = 1, . . . , B}. This is because in the (b) b X appears on both sides of of the event {Ibn ≥ Ibn } into entropy estimates, H n decomposition the inequality, so it cancels. This observation somewhat simplifies our assumptions and analysis, as well as 8 reducing the number of tuning parameters that need to be chosen. The remainder of this section is devoted to a rigorous study of the power of MINTknown that is compatible with a sequence of local alternatives (fn ) having mutual information In → 0. To this end, we first define the classes of alternatives that we consider; these parallel the classes introduced by Berrett, Samworth and Yuan (2017) in the context of entropy estimation. Let Fd denote the class of all density functions with respect to Lebesgue measure on Rd . For f ∈ Fd and α > 0, let Z kzkα f (z) dz. µα (f ) := Rd Now let A denote the class of decreasing functions a : (0, ∞) → [1, ∞) satisfying a(δ) = o(δ − ) as δ & 0, for every  > 0. If a ∈ A, β > 0, f ∈ Fd is m := (dβe − 1)-times differentiable and z ∈ Z, we define ra (z) := {8d1/2 a(f (z))}−1/(β∧1) and kf (t) (z)k , Mf,a,β (z) := max max t=1,...,m f (z)   kf (m) (w) − f (m) (z)k sup . f (z)kw − zkβ−m w∈Bz◦ (ra (z)) The quantity Mf,a,β (z) measures the smoothness of derivatives of f in neighbourhoods of z, relative to f (z) itself. Note that these neighbourhoods of z are allowed to become smaller when f (z) is small. Finally, for Θ := (0, ∞)4 × A, and θ = (α, β, ν, γ, a) ∈ Θ, let Fd,θ   := f ∈ Fd : µα (f ) ≤ ν, kf k∞ ≤ γ, sup Mf,a,β (z) ≤ a(δ) ∀δ > 0 . z:f (z)≥δ Berrett, Samworth and Yuan (2017) show that all Gaussian and multivariate-t densities (amongst others) belong to Fd,θ for appropriate θ ∈ Θ. Now, for dX , dY ∈ N and ϑ = (θ, θY ) ∈ Θ2 , define n o FdX ,dY ,ϑ := f ∈ FdX +dY ,θ : fY ∈ FdY ,θY , fX fY ∈ FdX +dY ,θ and, for b ≥ 0, let n o FdX ,dY ,ϑ (b) := f ∈ FdX ,dY ,ϑ : I(f ) > b . Thus, FdX ,dY ,ϑ (b) consists of joint densities whose mutual information is greater than b. In Theorem 2 below, we will show that for a suitable choice of b = bn and for certain ϑ ∈ Θ2 , the power of the test defined in Lemma 1 converges to 1, uniformly over FdX ,dY ,ϑ (b). Before we can state this result, however, we must define the allowable choices of k and 9 the weight vectors. Given d ∈ N and θ = (α, β, γ, ν, a) ∈ Θ let  α−d 4(β ∧ 1) 2α , , τ1 (d, θ) := min 5α + 3d 2α 4(β ∧ 1) + 3d and    d d/4 τ2 (d, θ) := min 1 − , 1− 2β bd/4c + 1 Note that mini=1,2 τi (d, θ) > 0 if and only if both α > d and β > d/2. Finally, for k ∈ N, let W (k)  := T k w = (w1 , . . . , wk ) ∈ R : k X wj j=1 k X Γ(j + 2`/d) = 0 for ` = 1, . . . , bd/4c Γ(j)  wj = 1 and wj = 0 if j ∈ / {bk/dc, b2k/dc, . . . , k} . 
j=1 Thus, our weights sum to 1; the other constraints ensure that the dominant contributions to the bias of the unweighted Kozachenko–Leonenko estimator cancel out to sufficiently high order, and that the corresponding jth nearest neighbour distances are not too highly correlated. Theorem 2. Fix dX , dY ∈ N, set d = dX + dY and fix ϑ = (θ, θY ) ∈ Θ2 with n o min τ1 (d, θ) , τ1 (dY , θY ) , τ2 (d, θ) , τ2 (dY , θY ) > 0. ∗ ∗ and k ∗ = kn∗ denote any deterministic sequences of positive integers , kY∗ = kY,n Let k0∗ = k0,n with k0∗ ≤ min{kY∗ , k ∗ }, with k0∗ / log5 n → ∞ and with  max k∗ nτ1 (d,θ)− , kY∗ nτ1 (dY ,θY )− , k∗ nτ2 (d,θ) , kY∗ nτ2 (dY ,θY )  →0 (k ) for some  > 0. Also suppose that wY = wY Y ∈ W (kY ) and w = w(k) ∈ W (k) , and that lim supn max(kwk, kwY k) < ∞. Then there exists a sequence (bn ) such that bn = o(n−1/2 ) and with the property that for each q ∈ (0, 1) and any sequence (Bn∗ ) with Bn∗ → ∞, inf inf inf ∗ k ∈{k ∗ ,...,k ∗ } f ∈F Bn ≥Bn Y dX ,dY ,ϑ (bn ) 0 Y k∈{k0∗ ,...,k∗ } b(n),Bn ) → 1. Pf (Ibn > C q Theorem 2 provides a strong guarantee on the ability of MINTknown to detect alternatives, 10 uniformly over classes whose mutual information is at least bn , where we may even have bn = o(n−1/2 ). 4 The case of unknown marginal distributions We now consider the setting in which the marginal distributions of both X and Y are unknown. Our test statistic remains the same, but now we estimate the critical value by permuting our sample in an attempt to mimic the behaviour of the test statistic under H0 . More explicity, for some B ∈ N, we propose independently of (X1 , Y1 ), . . . , (Xn , Yn ) to simulate independent random variables τ1 , . . . , τB uniformly from Sn , the permutation group (b) (b) (b) (b) of {1, . . . , n}, and for b = 1, . . . , B, set Z := (Xi , Yτ (i) ) and I˜n := Ibn (Z1 , . . . , Zn ). For i q ∈ (0, 1), we can now estimate C̃q(n),B (n) Cq b by  := inf r ∈ R :  B 1 X 1 (b) ≤ q , B + 1 b=0 {I˜n ≥r} (0) (n),B where I˜n := Ibn , and refer to the test that rejects H0 if and only if Ibn > C̃q as MINTunknown(q). Lemma 3. For any q ∈ (0, 1) and B ∈ N, the MINTunknown(q) test has size at most q: sup  P Ibn > C̃q(n),B ≤ q. sup k∈{1,...,n−1} (X,Y ):I(X;Y )=0 (n),B Note that Ibn > C̃q if and only if B 1 X 1 (b) ≤ q. B + 1 b=0 {I˜n ≥Ibn } (3) This shows that estimating either of the marginal entropies is unnecessary to carry out (b) b Z − H̃n(b) , where H̃n(b) := H b d,wZ (Z1(b) , . . . , Zn(b) ) is the weighted the test, since I˜n − Ibn = H n n,k Kozachenko–Leonenko joint entropy estimator based on the permuted data. We now study the power of MINTunknown, and begin by introducing the classes of marginal densities that we consider. To define an appropriate notion of smoothness, for 11 z ∈ {w : g(w) > 0} =: W, g ∈ Fd and δ > 0, let  rz,g,δ := δeΨ(k) Vd (n − 1)g(z) 1/d . (4) Now, for A belonging to the class of Borel subsets of W, denoted B(W), define Mg (A) := sup sup δ∈(0,2] z∈A Z 1 d g(z) Vd rz,g,δ g dλd − 1 . Bz (rz,g,δ ) Both rz,g,δ and Mg (·) depend on n and k, but for simplicity we suppress this in our notation. Let φ = (α, µ, ν, (cn ), (pn )) ∈ (0, ∞)3 × [0, ∞)N × [0, ∞)N =: Φ and define GdX ,dY ,φ  := f ∈ FdX +dY : max{µα (fX ), µα (fY )} ≤ µ, max{kfX k∞ , kfY k∞ } ≤ ν,  Z fX fY dλd ≤ pn ∀n ∈ N . ∃Wn ∈ B(X × Y) s.t. 
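As an aside before continuing with the classes used in the power analysis, the permutation test just described admits a very short implementation. The sketch below reuses the hypothetical kl_entropy helper from Section 2.2 and exploits the simplification noted after (3): only the joint entropy term needs to be recomputed on the permuted data, so large values of $\widehat{I}_n$ correspond to small values of $\widehat{H}_n^Z$.

## Sketch of MINTunknown: only the joint entropy changes under permutation of the Y-sample.
mint_perm_pvalue <- function(X, Y, k = 5, B = 99) {
  X <- as.matrix(X); Y <- as.matrix(Y); n <- nrow(X)
  h0 <- kl_entropy(cbind(X, Y), k)            # joint entropy on the observed pairing
  hb <- replicate(B, {
    Yp <- Y[sample(n), , drop = FALSE]        # uniformly random permutation of the Y-sample
    kl_entropy(cbind(X, Yp), k)
  })
  (1 + sum(hb <= h0)) / (B + 1)               # small h0 relative to hb is evidence against H0
}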
MfX fY (Wn ) ≤ cn , Wnc In addition to controlling the αth moment and uniform norms of the marginals fX and fY , the class Gd,φ asks for there to be a (large) set Wn on which this product of marginal densities is uniformly well approximated by a constant over small balls. This latter condition is satisfied by products of many standard parametric families of marginal densities, including normal, Weibull, Gumbel, logistic, gamma, beta, and t densities, and is what ensures that nearest neighbour methods are effective in this context. The corresponding class of joint densities we consider, for φ = (α, µ, ν, (cn ), (pn )) ∈ Φ, is Hd,φ  := f ∈ Fd : µα (f ) ≤ µ, kf k∞ ≤ ν, Z ∃Zn ∈ B(Z) s.t. Mf (Zn ) ≤ cn ,  f dλd ≤ pn ∀n ∈ N . Znc In many cases, we may take Zn = {z : f (z) ≥ δn }, for some appropriately chosen sequence (δn ) with δn → 0 as n → ∞. For instance, suppose we fix d ∈ N and θ = (α, β, ν, γ, a) ∈ Θ. Then, by Berrett, Samworth and Yuan (2017, Lemma 12), there exists such a sequence (δn ), as well as sequences (cn ) and (pn ), where d ka(k/(n − 1)) β∧1 δn = log(n − 1), n−1 β∧1 β∧1 15 2 d d3/2 cn = log− d (n − 1), 7 d + (β ∧ 1) 12 α for large n and pn = o((k/n) α+d − ) for every  > 0, such that Fd,θ ⊆ Hd,φ with φ = (α, µ, ν, (cn ), (pn )) ∈ Φ. We are now in a position to state our main result on the power of MINTunknown. Theorem 4. Let dX , dY ∈ N, let d = dX +dY and fix φ = (α, µ, ν, (cn ), (pn )) ∈ Φ with cn → 0 ∗ ∗ and pn = o(1/ log n) as n → ∞. Let k0∗ = k0,n and k1∗ = k1,n denote two deterministic sequences of positive integers satisfying k0∗ ≤ k1∗ , k0∗ / log2 n → ∞ and (k1∗ log2 n)/n → 0. Then for any b > 0, q ∈ (0, 1) and any sequence (Bn∗ ) with Bn∗ → ∞ as n → ∞, inf inf inf ∗ k∈{k ∗ ,...,k ∗ } f ∈G Bn ≥Bn dX ,dY ,φ ∩Hd,φ : 0 1 I(f )≥b Pf (Ibn > C̃q(n),Bn ) → 1 as n → ∞. Theorem 4 shows that MINTunknown is uniformly consistent against a wide class of alternatives. 5 Regression setting In this section we aim to extend the ideas developed above to the problem of goodness-offit testing in linear models. Suppose we have independent and identically distributed pairs (X1 , Y1 ), . . . , (Xn , Yn ) taking values in Rp × R, with E(Y12 ) < ∞ and E(X1 X1T ) finite and positive definite. Then β0 := argmin E{(Y1 − X1T β)2 } β∈Rp is well-defined, and we can further define i := Yi − XiT β0 for i = 1, . . . , n. We show in the proof of Theorem 6 below that E(1 X1 ) = 0, but for the purposes of interpretability and inference, it is often convenient if the random design linear model Yi = XiT β0 + i , i = 1, . . . , n, holds with Xi and i independent. A goodness-of-fit test of this property amounts to a test of H0 : X1 ⊥⊥ 1 against H1 : X1 6⊥⊥ 1 . The main difficulty here is that 1 , . . . , n are not observed directly. Given an estimator βb of β0 , the standard approach for dealing with this problem is to compute residuals b i := Yi − XiT βb for i = 1, . . . , n, and use these as a proxy for 1 , . . . , n . Many introductory statistics textbooks, e.g. Dobson (2002, Section 2.3.4), Dalgaard (2002, Section 5.2) suggest examining for patterns plots of residuals against fitted 13 values, as well as plots of residuals against each covariate in turn, as a diagnostic, though it is difficult to formalise this procedure. (It is also interesting to note that when applying the plot function in R to an object of type lm, these latter plots of residuals against each covariate in turn are not produced, presumably because it would be prohibitively time-consuming to check them all in the case of many covariates.) 
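For concreteness, this informal practice amounts to something like the following, where dat is a data frame with response y and covariates x1 and x2 (placeholder names, not tied to any particular data set):

fit <- lm(y ~ x1 + x2, data = dat)
plot(fit, which = 1)                          # residuals against fitted values
for (v in c("x1", "x2")) {
  plot(dat[[v]], residuals(fit), xlab = v, ylab = "Residuals")   # residuals against each covariate
}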
The naive approach based on our work so far is simply to use the permutation test of Section 4 on the data (X1 , b 1 ), . . . , (Xn , b n ). Unfortunately, calculating the test statistic Ibn on permuted data sets does not result in an exchangeable sequence, which makes it difficult to ensure that this test has the nominal size q. To circumvent this issue, we assume that the marginal distribution of 1 under H0 has E(1 ) = 0 and is known up to a scale factor σ 2 := E(21 ); in practice, it will often be the case that this marginal distribution is taken to be N (0, σ 2 ), for some unknown σ > 0. We also assume that we can sample from the density fη of η1 := 1 /σ; of course, this is straightforward in the normal distribution case above. Let X = (X1 · · · Xn )T , Y = (Y1 , . . . , Yn )T , and suppose the vector of residuals b  := (b 1 , . . . , b n )T is computed from the least squares estimator βb := (X T X)−1 X T Y . We then define standardised residuals by ηbi := b i /b σ , for i = 1, . . . , n, where σ b2 := n−1 kb k2 ; these standardised residuals are invariant under changes of scale of  := (1 , . . . , n )T . Suppressing the dependence of our entropy estimators on k and the weights w1 , . . . , wk for notational simplicity, our test statistic is now given by  b np (X1 , . . . , Xn ) + H b n1 (b b np+1 (X1 , ηb1 ), . . . , (Xn , ηbn ) . Iˇn(0) := H η1 , . . . , ηbn ) − H Writing η := /σ, we note that  1/2  n ηi − XiT (X T X)−1 X T η 1 1 Tb T b i − Xi (β − β0 ) = , ηbi = (Yi − Xi β) = σ b σ b kη − X T (X T X)−1 X T ηk (b) (b) whose distribution does not depend on the unknown β0 or σ 2 . Let {η (b) = (η1 , . . . , ηn )T : b = 1, . . . , B} denote independent random vectors, whose components are generated independently from fη . For b = 1, . . . , B we then set sb(b) := n−1/2 kb η (b) k and, for i = 1, . . . , n, let (b) ηbi := 1  (b) ηi − XiT (X T X)−1 X T η (b) . (b) sb We finally compute  (b) b np (X1 , . . . , Xn ) + H b n1 (b b np+1 (X1 , ηb1(b) ), . . . , (Xn , ηbn(b) ) . Iˇn(b) := H η1 , . . . , ηbn(b) ) − H 14 Analogously to our development in Sections 3 and 4, we can then define a critical value by Čq(n),B  := inf r ∈ R :  B 1 X 1 (b) ≤ q . B + 1 b=0 {Iˇn ≥r} (0) (B) Conditional on X and under H0 , the sequence (Iˇn , . . . , Iˇn ) is independent and identically distributed and unconditionally it is an exchangeable sequence under H0 . This is the crucial observation from which we can immediately derive that the resulting test has the nominal size. Lemma 5. For each q ∈ (0, 1) and B ∈ N, the MINTregression(q) test that rejects H0 if (0) (n),B and only if Iˇn > Čq has size at most q: sup  P Iˇn > Čq(n),B ≤ q. sup k∈{1,...,n−1} (X,Y ):I(X;Y )=0 (0) (b) As in previous sections, we are only interested in the differences Iˇn − Iˇn for b = 1, . . . , B, b np (X1 , . . . , Xn ) terms cancel out, so these marginal entropy and in these differences, the H estimators need not be computed. In fact, to simplify our power analysis, it is more convenient to define a slightly modified test, which also has the nominal size. Specifically, we assume for simplicity that m := n/2 is an integer, and consider a test in which the sample is split in half, with the second half of the sample used to calculate the estimators βb(2) and σ b2 of β0 and σ 2 respectively. On the (2) first half of the sample, we calculate ηbi,(1) := Yi − XiT βb(2) σ b(2) for i = 1, . . . , m and the test statistic  p+1 p 1 bm bm bm (b η1,(1) , . . . , ηbm,(1) ) − H (X1 , ηb1,(1) ), . . . , (Xm , ηbm,(1) ) . I˘n(0) := H (X1 , . . . 
, Xm ) + H (b) Corresponding estimators {I˘n : b = 1, . . . , B} based on the simulated data may also be computed using the same sample-splitting procedure, and we then obtain the critical value (n),B C̆q in the same way as above. The advantage from a theoretical perspective of this 2 approach is that, conditional on βb(2) and σ b(2) , the random variables ηb1,(1) , . . . , ηbm,(1) are independent and identically distributed. 15 To describe the power properties of MINTregression, we first define several densities: for γ ∈ Rp and s > 0, let fηbγ,s and fηbγ,s b1γ,s := (η1 − X1T γ)/s and (1) denote the densities of η (1),γ,s ηb1 (1) γ,s γ,s be the densities of (X1 , ηb1γ,s ) := (η1 −X1T γ)/s respectively; further, let fX,b η and fX,b η (1) (1),γ,s and (X1 , ηb1 ) respectively. Note that imposing assumptions on these densities amounts to imposing assumptions on the joint density f of (X, ). For Ω := Θ × Θ × (0, ∞) × (0, 1) × ∗ denote the class of joint (0, ∞) × (0, ∞), and ω = (θ1 , θ2 , r0 , s0 , Λ, λ0 ), we therefore let Fp+1,ω densities f of (X, ) satisfying the following three requirements: (i) o n o n γ,s γ,s fηb : γ ∈ B0 (r0 ), s ∈ [s0 , 1/s0 ] ∪ fηb(1) : γ ∈ B0 (r0 ), s ∈ [s0 , 1/s0 ] ⊆ F1,θ1 and n o n o γ,s γ,s fX,b : γ ∈ B (r ), s ∈ [s , 1/s ] ∪ f : γ ∈ B (r ), s ∈ [s , 1/s ] ⊆ Fp+1,θ2 . 0 0 0 0 0 0 0 0 η X,b η (1) (ii) n o sup max E log2 fηbγ,1 (η1 ) , E log2 fηbγ,1 (η ) ≤ Λ, 1 (1) (5) o n  (1),γ,1  γ,1 2 2 ≤ Λ. sup max E log fη ηb1 , E log fη ηb1 (6) γ∈B0 (r0 ) and γ∈B0 (r0 ) (iii) Writing Σ := E(X1 X1T ), we have λmin (Σ) ≥ λ0 . The first of these requirements ensures that we can estimate efficiently the marginal entropy of our scaled residuals, as well as the joint entropy of these scaled residuals and our covariates. The second condition is a moment condition that allows us to control |H(η1 − X1T γ) − H(η1 )| (and similar quantities) in terms of kγk, when γ belongs to a small ball around the origin. To illustrate the second part of this condition, it is satisfied, for instance, if fη is a standard normal density and E(kX1 k4 ) < ∞, or if fη is a t density and E(kX1 kα ) < ∞ for some α > 0; the first part of the condition is a little more complicated but similar. The final condition is very natural for random design regression problems. (0) (B) By the same observation on the sequence (I˘n , . . . , I˘n ) as was made regarding the (0) (B) sequence (Iˇn , . . . , Iˇn ) just before Lemma 5, we see that the sample-splitting version of the MINTregression(q) test has size at most q. Theorem 6. Fix p ∈ N and ω = (θ1 , θ2 , r0 , s0 , Λ, λ0 ) ∈ Ω, where the first component of θ2 is 16 α2 ≥ 4 and the second component of θ1 is β1 ≥ 1. Assume that n o min τ1 (1, θ1 ) , τ1 (p + 1, θ2 ) , τ2 (1, θ1 ) , τ2 (p + 1, θ2 ) > 0. ∗ ∗ and k ∗ = kn∗ denote any deterministic sequences of positive integers , kη∗ = kη,n Let k0∗ = k0,n with k0∗ ≤ min{kη∗ , k ∗ }, with k0∗ / log5 n → ∞ and with  max k∗ nτ1 (p+1,θ2 )− , kη∗ nτ1 (1,θ1 )− , (kη ) for some  > 0. Also suppose that wη = wη k∗ nτ2 (p+1,θ2 ) , kη∗ nτ2 (1,θ1 )  →0 ∈ W (kη ) and w = w(k) ∈ W (k) , and that lim supn max(kwk, kwη k) < ∞. Then for any sequence (bn ) such that n1/2 bn → ∞, any q ∈ (0, 1) and any sequence (Bn∗ ) with Bn∗ → ∞, inf inf inf ∗ k ∈{k ∗ ,...,k ∗ } f ∈F ∗ Bn ≥Bn η η 0 1,p+1 :I(f )≥bn k∈{k0∗ ,...,k∗ } Pf (I˘n > C̆q(n),Bn ) → 1. 
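To make this construction concrete, here is a minimal sketch of MINTregression in its original form (without the sample splitting introduced for Theorem 6), for the common case in which the error distribution is taken to be Gaussian. It reuses the hypothetical kl_entropy helper from Section 2.2; the design matrix X is assumed to already contain any intercept column, and the marginal entropy of the covariates is omitted from the statistic because, as noted above, it cancels in the comparisons between the observed and simulated statistics.

mint_regression_pvalue <- function(X, Y, k = 5, B = 99) {
  X <- as.matrix(X); n <- nrow(X)
  M <- diag(n) - X %*% solve(crossprod(X), t(X))    # residual-forming matrix I - X(X'X)^{-1}X'
  std_resid <- function(v) {                        # project onto the residual space, scale to unit RMS
    r <- as.numeric(M %*% v)
    r / sqrt(mean(r^2))
  }
  stat <- function(eta_hat)                         # entropy of covariates omitted (it cancels)
    kl_entropy(eta_hat, k) - kl_entropy(cbind(X, eta_hat), k)
  T0 <- stat(std_resid(Y))                          # observed standardised residuals
  Tb <- replicate(B, stat(std_resid(rnorm(n))))     # simulated N(0,1) errors, same projection and scaling
  (1 + sum(Tb >= T0)) / (B + 1)
}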
Finally in this section, we consider partitioning our design matrix as X = (X ∗ X ∗∗ ) ∈ Rn×(p0 +p1 ) , with p0 + p1 = p, and describe an extension of MINTregression to cases where we are interested in testing the independence between  and X ∗ . For instance, X ∗∗ may consist of an intercept term, or transformations of variables in X ∗ , as in the real data example presented in Section 6.3 below. Our method for simulating standardised residual vectors {b η (b) : b = 1, . . . , B} remains unchanged, but our test statistic and corresponding null statistics become b p (X ∗ , . . . , X ∗ ) + H b 1 (b b p+1 (X ∗ , ηb1 ), . . . , (X ∗ , ηbn ) bn ) − H I¯n(0) := H n 1 n n η1 , . . . , η n 1 n   (b) b p (X ∗ , . . . , X ∗ ) + H b 1 (b b p+1 (X ∗ , ηb1(b) ), . . . , (X ∗ , ηb(b) ) , b = 1, . . . , B. I¯n(b) := H bn(b) ) − H n 1 n n η1 , . . . , η n 1 n n (0) (1) (B) The sequence (I¯n , I¯n , . . . , I¯n ) is again exchangeable under H0 , so a p-value for this mod- ified test is given by B 1 X 1 (b) (0) . B + 1 b=0 {I¯n ≥I¯n } 17 6 Numerical studies 6.1 Practical considerations For practical implementation of the MINTunknown test, we consider both a direct, data-driven approach to choosing k, and a multiscale approach that averages over a range of values of k. To describe the first method, let K ⊆ {1, . . . , n − 1} denote a plausible set of values of k. For a given N ∈ N and independently of the data, generate τ1 , . . . , τ2N independently b (j) be the (unweighted) and uniformly from Sn , and for each k ∈ K and j = 1, . . . , 2N let H n,k Kozachenko–Leonenko joint entropy estimate with tuning parameter k based on the sample {(Xi , Yτj (i) ) : i = 1, . . . , n}. We may then choose b k := sargmin k∈K N X b (2j) − H b (2j−1) )2 , (H n,k n,k j=1 where sargmin denotes the smallest index of the argmin in the case of a tie. In our simulations below, which use N = 100, we refer to the resulting test as MINTauto. For our multiscale approach, we again let K ⊆ {1, . . . , n−1} and, for k ∈ K, let b h(0) (k) := b n,k denote the (unweighted) Kozachenko–Leonenko entropy estimate with tuning parameter H k based on the original data (X1 , Y1 ), . . . , (Xn , Yn ). Now, for b = 1, . . . , B and k ∈ K, (b) we let b h(b) (k) := H̃n,(k) denote the Kozachenko–Leonenko entropy estimate with tuning P (b) (b) b h(b) (k) parameter k based on the permuted data Z , . . . , Zn . Writing h̄(b) := |K|−1 1 k∈K for b = 0, 1, . . . , B, we then define the p-value for our test to be B 1 X 1 (0) (b) . B + 1 b=0 {h̄ ≥h̄ } By the exchangeability of (h̄(0) , h̄(1) , . . . , h̄(B) ) under H0 , the corresponding test has the nominal size. We refer to this test as MINTav, and note that if K is taken to be a singleton set then we recover MINTunknown. In our simulations below, we took K = {1, . . . , 20} and B = 100. 6.2 Simulated data To study the empirical performance of our methods, we first compare our tests to existing approaches through their performance on simulated data. For comparison, we present cor18 responding results for a test based on the empirical copula process decribed by Kojadinovic and Holmes (2009) and implemented in the R package copula (Hofert et al., 2017), a test based on the HSIC implemented in the R package dHSIC (Pfister and Peters, 2017), a test based on the distance covariance implemented in the R package energy (Rizzo and Szekely, 2017) and the improvement of Hoeffding’s test described in Bergsma and Dassios (2014) and implemented in the R package TauStar (Weihs, Drton and Leung, 2016). 
We also present results for an oracle version of our tests, denoted simply as MINT, which for each parameter value in each setting, uses the best (most powerful) choice of k. Throughout, we took q = 0.05 and n = 200, ran 5000 repetitions for each parameter setting, and for our comparison methods, used the default tuning parameter values recommended by the corresponding authors. We consider three classes of data generating mechanisms, designed to illustrate different possible types of dependence: (i) For l ∈ N and (x, y) ∈ [−π, π]2 , define the density function fl (x, y) = 1 {1 + sin(lx) sin(ly)}. π2 This class of densities, which we refer to as sinusoidal, are identified by Sejdinovic et al. (2013) as challenging ones to detect dependence, because as l increases, the dependence becomes increasingly localised, while the marginal densities are uniform on [−π, π] for each l. Despite this, by the periodicity of the sine function, we have that the mutual information does not depend on l: indeed, Z πZ π  1 {1 + sin(lx) sin(ly)} log 1 + sin(lx) sin(ly) dx dy I(fl ) = 2 π −π −π Z πZ π 1 = 2 (1 + sin u sin v) log(1 + sin u sin v) du dv ≈ 0.0145. π −π −π (ii) Let L, Θ, 1 , 2 be independent with L ∼ U ({1, . . . , l}) for some parameter l ∈ N, Θ ∼ U [0, 2π], and 1 , 2 ∼ N (0, 1). Set X = L cos Θ + 1 /4 and Y = L sin Θ + 2 /4. For large values of l, the distribution of (X/l, Y /l) approaches the uniform distribution on the unit disc. (iii) Let X,  be independent with X ∼ U [−1, 1],  ∼ N (0, 1), and for a parameter ρ ∈ [0, ∞), let Y = |X|ρ . For each of these three classes of data generating mechanisms, we also consider a corre19 sponding multivariate setting in which we wish to test the independence of X and Y when X = (X1 , X2 )T , Y = (Y1 , Y2 )T . Here, (X1 , Y1 ), X2 , Y2 are independent, with X1 and Y1 having the dependence structures described above, and X2 , Y2 ∼ U (0, 1). In these multivariate settings, the generalisations of the TauStar test were too computationally intensive, so were omitted from the comparison. The results are presented in Figure 1(a). Unsurprisingly, there is no uniformly most powerful test among those considered, and if the form of the dependence were known in advance, it may well be possible to design a tailor-made test with good power. Nevertheless, Figure 1(a) shows that, especially in the first and second of the three settings described above, the MINT and MINTav approaches have very strong performance. In these examples, the dependence becomes increasingly localised as l increases, and the flexibility to choose a smaller value of k in such settings means that MINT approaches are particularly effective. Where the dependence is more global in nature, such as in setting (iii), other approaches may be better suited, though even here, MINT is competitive. Interestingly, the multiscale MINTav appears to be much more effective than the MINTauto approach of choosing a single value of k; indeed, in setting (iii), MINTav even outperforms the oracle choice of k. 6.3 Real data In this section we illustrate the use of MINTregression on a data set comprising the average minimum daily January temperature, between 1931 and 1960 and measured in degrees Fahrenheit, in 56 US cities, indexed by their latitude and longitude (Peixoto, 1990). Fitting a normal linear model to the data with temperature as the response and latitude and longitude as covariates leads to the diagnostic plots shown in Figures 2(a) and 2(b). 
Based on these plots, it is relatively difficult to assess the strength of evidence against the model. Centering all of the variables and running MINTregression with kη = 6, k = 3 and B = 1785 = b105 /56c yielded a p-value of 0.00224, providing strong evidence that the normal linear model is not a good fit to the data. Further investigation is possible via partial regression: in Figure 2(c) we plot residuals based on a simple linear fit of Temperature on Latitude against the residuals of another linear fit of Longitude on Latitude. This partial regression plot is indicative of a cubic relationship between Temperature and Longitude after removing the effects of Latitude. Based on this evidence we fitted a new model with a quadratic and cubic term in Longitude added to the first model. The p-value of the resulting F -test was < 2.2 × 10−16 , giving overwhelming evidence in favour of the inclusion of the quadratic and cubic terms. We then ran our extension of MINTregression to test the independence of  20 Figure 1: Power curves for the different tests considered, for settings (i) (left), (ii) (middle) and (iii) (right). The marginals are univariate (top) and bivariate (bottom). Figure 2: Plots of residuals against fitted values (left) and square roots of the absolute values of the standardised residuals against fitted values (middle) for the US cities temperature data. The right panel gives a partial regression plot after removing the effect of latitude from both the response and longitude. 21 and (Longitude, Latitude), as described at the end of Section 5 with B = 1000. This resulted in a p-value of 0.0679, so we do not reject this model at the 5% significance level. 7 Proofs 7.1 Proofs from Section 3 (1) (B) Proof of Lemma 1. Under H0 , the finite sequence (Ibn , Ibn , . . . , Ibn ) is exchangeable. If fol(1) (B) lows that under H0 , if we split ties at random, then every possible ordering of Ibn , Ibn , . . . , Ibn (1) (B) is equally likely. In particular, the (descending order) rank of Ibn among {Ibn , Ibn , . . . , Ibn }, denoted rank(Ibn ), is uniformly distributed on {1, 2, . . . , B + 1}. We deduce that for any k ∈ {1, . . . , n − 1} and writing PH0 for the probability under the product of any marginal distributions for X and Y , PH0  bq(n),B = PH0 Ibn > C  B 1 X 1 (b) ≤q B + 1 b=0 {Ibn ≥Ibn }   bq(B + 1)c ≤ PH0 rank(Ibn ) ≤ q(B + 1) = ≤ q, B+1 as required. (X,Y ) Let V (f ) := Var log fX f(X)f . The following result will be useful in the proof of TheoY (Y ) rem 2. Lemma 7. Fix dX , dY ∈ N and ϑ ∈ Θ2 . Then V (f ) < ∞. 1/4 f ∈FdX ,dY ,ϑ (0) I(f ) sup Proof of Lemma 7. For x ∈ R we write x− := max(0, −x). We have  E log f (X, Y ) fX (X)fY (Y )   Z fX (x)fY (y) f (x, y) log dλd (x, y) f (x, y) {(x,y):f (x,y)≤fX (x)fY (y)}   Z fX (x)fY (y) ≤ f (x, y) − 1 dλd (x, y) f (x, y) {(x,y):f (x,y)≤fX (x)fY (y)} Z Z  = sup fX fY − f ≤ {I(f )/2}1/2 , = − A∈B(Rd ) A A 22 where the final inequality is Pinsker’s inequality. We therefore have that   f (X, Y ) f (X, Y ) 1/2 f (X, Y ) 3/2 = E log V (f ) ≤ E log log fX (X)fY (Y ) fX (X)fY (Y ) fX (X)fY (Y )  1/2 1/2  f (X, Y ) f (X, Y ) 3 ≤ E log E log fX (X)fY (Y ) fX (X)fY (Y )  1/2  1/2   f (X, Y ) 3 f (X, Y ) E log = I(f ) + 2E log fX (X)fY (Y ) − fX (X)fY (Y )    1/2 1/2 3 . E log fX (X)fY (Y ) + E| log f (X, Y )|3 ≤ 2 I(f ) + 21/2 I(f )1/2 2 By Berrett, Samworth and Yuan (2017, Lemma 11(i)), we conclude that V (f ) < ∞. 
1/4 f ∈FdX ,dY ,ϑ :I(f )≤1 I(f ) sup The result follows from this fact, together with the observation that sup V (f ) ≤ 2 f ∈FdX ,dY ,ϑ sup f ∈FdX ,dY ,ϑ  E log2 f (X, Y ) + E log2 fX (X)fY (Y )  < ∞, (7) where the final bound follows from another application of Berrett, Samworth and Yuan (2017, Lemma 11(i)). Proof of Theorem 2. Fix f ∈ FdX ,dY ,ϑ (bn ). We have by two applications of Markov’s inequality that 1 {1 + Bn Pf (Ibn(1) ≥ Ibn )} q(Bn + 1)  1 1 ≤ Pf |Ibn − Ibn(1) − I(f )| ≥ I(f ) + q q(Bn + 1) 1 1 ≤ Ef [{Ibn − Ibn(1) − I(f )}2 ] + . 2 qI(f ) q(Bn + 1) b(n),Bn ) ≤ Pf (Ibn ≤ C q (8) It is convenient to define the following entropy estimators based on pseudo-data, as well as (1) oracle versions of our entropy estimators: let Zi 23 (1)T T := (XiT , Yi (1) (1) ) , Z (1) := (Z1 , . . . , Zn )T (1) (1) and Y (1) := (Y1 , . . . , Yn )T and set b Y,(1) := H b Y,wY (Y (1) ) H n n,k n 1X ∗,Y Hn := − log fY (Yi ), n i=1 b Z,(1) := H b Z,wZ (Z (1) ), H n n,k n X 1 ∗ Hn := − log f (Xi , Yi ), n i=1 n Hn∗,(1) Writing In∗ := 1 n n 1X 1X (1) (1) log fX (Xi )fY (Yi ), Hn∗,Y,(1) := − log fY (Yi ). := − n i=1 n i=1 Pn i=1 i ,Yi ) log fXf(X(Xi )f we have by Berrett, Samworth and Yuan (2017, TheoY (Yi ) rem 1) that sup sup ∗} kY ∈{k0∗ ,...,kY k∈{k0∗ ,...,k∗ } f ∈FdX ,dY ,ϑ = nEf {(Ibn − Ibn(1) − In∗ )2 }    2 Y Y,(1) (1) ∗,Y ∗ ∗,Y,(1) ∗,(1) b b b b + Hn sup nEf Hn − Hn − Hn + Hn − Hn − Hn − Hn sup ∗ } f ∈F kY ∈{k0∗ ,...,kY dX ,dY ,ϑ k∈{k0∗ ,...,k∗ } ≤ 4n sup sup ∗ } f ∈F kY ∈{k0∗ ,...,kY dX ,dY ,ϑ k∈{k0∗ ,...,k∗ } n b Y − H ∗,Y )2 + (H b n − H ∗ )2 Ef (H n n n o Y,(1) ∗,Y,(1) 2 (1) ∗,(1) 2 b b + (Hn − Hn ) + (Hn − Hn ) → 0.  2 It follows by Cauchy–Schwarz and the fact that E In∗ − I(f ) = V (f )/n that 2n := sup sup ∗ } f ∈F kY ∈{k0∗ ,...,kY dX ,dY ,ϑ k∈{k0∗ ,...,k∗ } ≤ sup sup ∗ } f ∈F kY ∈{k0∗ ,...,kY dX ,dY ,ϑ k∈{k0∗ ,...,k∗ } nEf  2 Ibn − Ibn(1) − I(f ) − V (f ) n  1/2 o nEf {(Ibn − Ibn(1) − In∗ )2 } + 2 nEf {(Ibn − Ibn(1) − In∗ )2 }V (f ) → 0, (9) 1/2 where we use (7) to bound V (f ) above. Now consider bn := max(n n−1/2 , n−4/7 log n). 24 By (8), (9) and Lemma 7 we have that inf inf inf ∗ k ∈{k ∗ ,...,k ∗ } f ∈F Bn ≥Bn Y dX ,dY ,ϑ (bn ) 0 Y k∈{k0∗ ,...,k∗ } ≥1− Pf (Ibn > Cq(n),Bn ) (1) E[{Ibn − Ibn − I(f )}2 ] 1 − 2 ∗ ∗ } f ∈F qI(f ) q(Bn + 1) kY ∈{k0∗ ,...,kY dX ,dY ,ϑ (bn ) sup sup k∈{k0∗ ,...,k∗ } ≥1− V (f ) + 2n 1 − 2 q(Bn∗ + 1) f ∈FdX ,dY ,ϑ (bn ) nqI(f ) ≥1− 1 V (f ) 1 n − → 1, sup − q q(Bn∗ + 1) q log7/4 n f ∈FdX ,dY ,ϑ (0) I(f )1/4 sup as required. 7.2 Proofs from Section 4 (0) (1) (B) Proof of Lemma 3. We first claim that (Ibn , Ibn , . . . , Ibn ) is an exchangeable sequence un(b) der H0 . Indeed, let σ be an arbitrary permutation of {0, 1, . . . , B}, and recall that Ibn is computed from (Xi , Yτb (i) )ni=1 , where τb is uniformly distributed on Sn . Note that, under H0 , d we have (Xi , Yi )ni=1 = (Xi , Yτ (i) )ni=1 for any τ ∈ Sn . Hence, for any A ∈ B(RB+1 ),  PH0 (Ibn(σ(0)) , . . . , Ibn(σ(B)) ) ∈ A X  1 bn(σ(0)) , . . . , Ibn(σ(B)) ) ∈ A|τ1 = π1 , . . . , τB = πB = P ( I H 0 (n!)B π ,...,π ∈S n 1 B X  1 bn(0) , . . . , Ibn(B) ) ∈ A|τ1 = πσ(1) π −1 , . . . , τB = πσ(B) π −1 P ( I = H 0 σ(0) σ(0) (n!)B π ,...,π ∈S n 1 B  = PH0 (Ibn(0) , . . . , Ibn(B) ) ∈ A . By the same argument as in the proof of Lemma 1 we now have that bq(B + 1)c ≤ q, PH0 (Ibn > C̃q(n),B ) ≤ B+1 as required. b n(1) given just after (3). We give the proof of Theorem 4 below, Recall the definition of H followed by Lemma 8, which is the key ingredient on which it is based. 25 Proof of Theorem 4. 
We have by Markov’s inequality that Pf (Ibn ≤ C̃q(n),Bn ) ≤ 1 1 1 b Z ≥ H̃ (1) ) + {1 + Bn Pf (Ibn ≤ I˜n(1) )} ≤ Pf (H . n n q(Bn + 1) q q(Bn + 1) (10) But  bZ ≥ H b (1) ) = Pf H b Z − H(X, Y ) ≥ H b (1) − H(X) − H(Y ) + I(X; Y ) Pf (H n n n n     1 b (1) − H(X) − H(Y )| ≥ 1 I(X; Y ) b Z − H(X, Y )| ≥ I(X; Y ) + P |H ≤ P |H n n 2 2  2 b Z − H(X, Y )| + E|H b (1) − H(X) − H(Y )| . ≤ E|H (11) n n I(X; Y ) The theorem therefore follows immediately from (10), (11) and Lemma 8 below. Lemma 8. Let dX , dY ∈ N, let d = dX + dY and fix φ = (α, µ, ν, (cn ), (pn )) ∈ Φ with cn → 0 ∗ ∗ and pn = o(1/ log n) as n → ∞. Let k0∗ = k0,n and k1∗ = k1,n denote two deterministic sequences of positive integers satisfying k0∗ ≤ k1∗ , k0∗ / log2 n → ∞ and (k1∗ log2 n)/n → 0. Then sup sup k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ sup k∈{k0∗ ,...,k1∗ } b (1) − H(X) − H(Y )| → 0, Ef |H n b Z − H(X, Y )| → 0 sup Ef |H n f ∈Hd,φ as n → ∞. Proof. We prove only the first claim in Lemma 8, since the second claim involves similar arguments but is more straightforward because the estimator is based on an independent and identically distributed sample and no permutations are involved. Throughout this proof, we write a . b to mean that there exists C > 0, depending only on dX , dY ∈ N and φ ∈ Φ, such that a ≤ Cb. Fix f ∈ GdX ,dY ,φ , where φ = (α, µ, ν, (cn ), (pn )). Write ρ(k),i,(1) (1) for the distance from Zi (1) ξi := e −Ψ(k) Vd (n − (1) (1) to its kth nearest neighbour in the sample Z1 , . . . , Zn 1)ρd(k),i,(1) and so that n X (1) b (1) = 1 H log ξi . n n i=1 Recall that Sn denotes the set of all permutations of {1, . . . , n}, and define Snl ⊆ Sn to be the 26 set of permutations that fix {1, . . . , l} and have no fixed points among {l + 1, . . . , n}; thus P r |Snl | = (n−l)! n−l r=0 (−1) /r!. For τ ∈ Sn , write Pτ (·) := P(·|τ1 = τ ), and Eτ (·) := E(·|τ1 = τ ). Let ln := blog log nc. For i = 1, . . . , n, let (1) A0i := {The k nearest neighbours of Zi (1) (1) are among Zl+1 , . . . , Zn(1) }, (1) and let Ai := A0i ∩ {Zi (1) ∈ Wn }. Using the exchangeability of ξ1 , . . . , ξn , our basic decomposition is as follows: b n(1) − H(X) − H(Y )| EH  ln   X  X l n 1 X 1 X n 1 (1) (1) ≤ Eτ | log ξi | + Eτ log ξi − H(X) − H(Y ) n! l=0 l n i=1 n i=l+1 l τ ∈Sn   n 1 X n X b n(1) − H(X) − H(Y ) + Eτ H n! l=l +1 l l τ ∈Sn n ln  X   l n X 1X n X 1 (1) (log ξi )1Ai − H(X) − H(Y ) l n i=1 n i=l+1 l l=0 τ ∈Sn    n n   1 X 1 X n X (1) b (1) | + |H(X) + H(Y )| . + Eτ | log ξi |1{(Aci \(A0i )c )∪(A0i )c } + Eτ |H n n i=l+1 n! l=l +1 l l ≤ 1 n! (1) Eτ | log ξi | + Eτ n τ ∈Sn (12) The rest of the proof consists of handling each of these four terms. Step 1: We first show that sup sup k∈{1,...,n−1} f ∈GdX ,dY ,φ   n  1 X n X b n(1) | + |H(X) + H(Y )| → 0 Eτ |H n! l=l +1 l l n (13) τ ∈Sn as n → ∞. Writing ρ(k),i,X for the distance from Xi to its kth nearest neighbour in the sample X1 , . . . , Xn and defining ρ(k),i,Y similarly, we have max{ρ2(k),i,X , ρ2(k),τ1 (i),Y } ≤ ρ2(k),i,(1) ≤ ρ2(n−1),i,X + ρ2(n−1),τ1 (i),Y . 27 Using the fact that log(a + b) ≤ log 2 + | log a| + | log b| for a, b > 0, we therefore have that   2  ρ(k),i,(1) Vd (n − 1) d ≤ log + d| log ρ(k),i,X | + log 2 eΨ(k) 2 ρ(k),i,X   Vd (n − 1) d ≤ log + 2d| log ρ(k),i,X | + log 2 + d| log ρ(n−1),i,X | + d| log ρ(n−1),τ1 (i),Y | Ψ(k) e 2 (14)   Vd (n − 1) d ≤ log + log 2 Ψ(k) e 2  (1) | log ξi | + 4d max max{− log ρ(1),j,X , − log ρ(1),j,Y , log ρ(n−1),j,X , log ρ(n−1),j,Y }. 
j=1,...,n (15) Now, by the triangle inequality, a union bound and Markov’s inequality, n o   hn  o i E max log ρ(n−1),j,X − log 2 ≤ E log max kXj k ≤ E log max kXj k j=1,...,n j=1,...,n j=1,...,n + Z ∞   = P max kXj k ≥ eM dM 0 j=1,...,n   1 1 max 0, log n + log E(kX1 kα ) + nE(kX1 kα ) exp − log n − log E(kX1 kα ) α α 1 ≤ 1 + log n + max(0, log µ) , (16) α ≤ and the same bound holds for E maxj=1,...,n log ρ(n−1),j,Y . Similarly, hn E min log ρ(1),j,X j=1,...,n o i − Z ∞  min ρ(1),j,X ≤ e−M dM j=1,...,n 0 Z −1 ≤ 2dX log n + n(n − 1)VdX kfX k∞ =  P ∞ 2d−1 X ≤ 2d−1 X and the same bound holds for E log n + e−M dX dM log n d−1 X VdX ν,  minj=1,...,n log ρ(1),j,Y (17)  − once we replace dX with dY . By (15), (16), (17) and Stirling’s approximation,   n   n X X n n (1) b max sup E(|Hn |1{τ1 ∈Snl } ) . log n P(τ1 ∈ Snl ) k=1,...,n−1 f ∈Gd ,d ,φ l l X Y l=ln +1 l=ln +1    ln +1 n log n e log n log n (n − ln − 1)! = ≤p →0 ≤ n! ln + 1 (ln + 1)! 2π(ln + 1) ln + 1 28 (18) as n → ∞. Moreover, for any f ∈ GdX ,dY ,φ and (X, Y ) ∼ f , writing  = α/(α + 2dX ), Z |H(X)| ≤ X   kfX k∞ fX | log fX | ≤ fX (x) log dx + | log kfX k∞ | fX (x) X Z kfX k∞ ≤ fX (x)1− dx + | log kfX k∞ |  X Z  ν 1− α − 1−  (1 + kxk ) dx ≤ (1 + µ)  RdX   α dX  VdX µ (α + dX )α+dX 1 + max log ν , log , α αα ddXX Z (19) where the lower bound on kfX k∞ is due to the fact that VdαX µα (fX )dX kfX kα∞ ≥ αα ddXX /(α + dX )α+dX by Berrett, Samworth and Yuan (2017, Lemma 10(i)). Combining (19) with the corresponding bound on H(Y ), as well as (18), we deduce that (13) holds. Step 2: We observe that by (15), (16) and (17), sup sup k∈{1,...,n−1} f ∈GdX ,dY ,φ ln   X l ln log n n 1X 1 X (1) → 0. Eτ | log ξi | . n! l=0 l n i=1 n l (20) τ ∈Sn Step 3: We now show that sup sup k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ ln   X n  1 X n 1 X (1) Eτ | log ξi |1{(Aci \(A0i )c )∪(A0i )c } → 0. n! l=0 l n i=l+1 l (21) τ ∈Sn We use an argument that involves covering Rd by cones; cf. Biau and Devroye (2015, Section 20.7). For w0 ∈ Rd , z ∈ Rd \ {0} and θ ∈ [0, π], define the cone of angle θ centred at w0 in the direction z by  C(w0 , z, θ) := w0 + w ∈ Rd \ {0} : cos−1 (z T w/(kzkkwk)) ≤ θ ∪ {0}. By Biau and Devroye (2015, Theorem 20.15), there exists a constant Cπ/6 ∈ N depending (1) only on d such that we may cover Rd by Cπ/6 cones of angle π/6 centred at Z1 . In each (1) cone, mark the k nearest points to Z1 (1) (1) (1) among Z2 , . . . , Zn . Now fix a point Zi not marked and a cone that contains it, and let 29 (1) (1) Zi1 , . . . , Zik that is be the k marked points in this cone. By Biau and Devroye (2015, Lemma 20.5) we have, for each j = 1, . . . , k, that (1) (1) (1) (1) kZi − Zij k < kZi − Z1 k. (1) Thus, Z1 (1) is not one of the k nearest neighbours of the unmarked point Zi , and only the (1) marked points, of which there are at most kCπ/6 , may have Z1 as one of their k near- est neighbours. This immediately generalises to show that at most klCπ/6 of the points (1) (1) Zl+1 , . . . , Zn (1) (1) may have any of Z1 , . . . , Zl among their k nearest neighbours. Then, by (15), (16) and (17), we have that, uniformly over k ∈ {1, . . . , n − 1}, 1 Eτ max sup ln l n f ∈GdX ,dY ,φ τ ∈∪l=0 Sn X n 1 (1) | log ξi | (A0i )c  . i=l+1 Now, for x ∈ X and r ∈ [0, ∞), let hx (r) := R Bx (r) kln log n. n fX , and, for s ∈ [0, 1), let h−1 x (s) := Γ(n) sk−1 (1 Γ(k)Γ(n−k) inf{r ≥ 0 : hx (r) ≥ s}. We also write Bk,n−k (s) := (22) − s)n−k−1 for s ∈ (0, 1). 
Then, by (4) and (13) in Berrett, Samworth and Yuan (2017), there exists C > 0, depending only on dX and φ, such that, uniformly for k ∈ {1, . . . , n − 1},  max Eτ | log ρ(k),i,X |1{Z (1) ∈W c } n i i=l+1,...,n   Z 1 Z VdX (n − 1)h−1 x (s) log fX (x)fY (y) Bk,n−k (s) ds dλd (x, y) = eΨ(k) 0 Wnc Z Z 1  . fX (x)fY (y) log n − log s − log(1 − s) + log(1 + Ckxk) Bk,n−k (s) ds dλd (x, y). Wnc 0 (23) Now, given  ∈ (0, 1), by Hölder’s inequality, and with K := max{1, (α−1) log 2, C α }/(α), Z Z fX (x)fY (y) log(1 + Ckxk) dλd (x, y) ≤ K Wnc Z ≤ K α fX (x)fY (y)(1 Wnc  Z fX (x)fY (y)(1 + kxk ) dλd (x, y) + kxkα ) dλd (x, y) 1− fX (x)fY (y) dλd (x, y) Wnc Wnc ≤ K (1 + µ) p1− n . (24) We deduce from (23) and (24) that for each  ∈ (0, 1) and uniformly for k ∈ {1, . . . , n − 1}, max sup l+1=1,...,n f ∈Gd  Eτ | log ρ(k),i,X |1{Z (1) ∈W c } . pn log n + K pn1− . i X ,dY ,φ 30 n From similar bounds on the corresponding terms with ρ(k),i,X replaced with ρ(n−1),i,X and ρ(n−1),τ1 (i),Y , we conclude from (14) that for every  ∈ (0, 1) and uniformly for k ∈ {1, . . . , n− 1}, max  (1) Eτ | log ξi |1{Z (1) ∈W c } . pn log n + K pn1− . sup i=l+1,...,n f ∈Gd (25) n i X ,dY ,φ The claim (21) follows from (22) and (25). Step 4: Finally, we show that sup sup k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ ln   X n 1 X 1 X n (1) Eτ (log ξi )1Ai − H(X) − H(Y ) → 0. n! l=0 l n l i=l+1 (26) τ ∈Sn To this end, we consider the following further decomposition: n 1 X (1) (log ξi )1Ai − H(X) − H(Y ) Eτ n i=l+1 n n X 1 X 1 (1) (1)  (1)  ≤ Eτ log ξi fX fY (Zi ) 1Ai + Eτ log fX fY (Zi ) 1{(Aci \(A0i )c )∪(A0i )c } n i=l+1 n i=l+1 n n − l 1 X (1) log fX fY (Zi ) − H(X) + H(Y ) + Eτ n i=l+1 n + l H(X) + H(Y ) . n (27) We deal first with the second term in (27). Now, by the arguments in Step 3,   n X Cπ/6 kl 1 (1)  (1) Eτ log fX fY (Zi ) 1(A0i )c ≤ Eτ max | log fX fY (Zi )| i=l+1,...,n n i=l+1 n      Cπ/6 k1∗ l E max | log fX (Xi )| + E max | log fY (Yi )| . ≤ i=l+1,...,n i=l+1,...,n n (28) Now, for any s > 0, by Hölder’s inequality, α  P fX (X1 ) ≤ s ≤ s 2(α+dX ) Z α 1− 2(α+d fX (x) X) dx x:f (x)≤s ≤s α 2(α+dX ) α 1− 2(α+d (1 + µ) Z X) RdX 1 dx α (1 + kxk )(α+2dX )/α 31 α  2(α+d X) α =: K̃dX ,φ s 2(α+dX ) . X) Writing t∗ := − 2(α+d log(n − l), we deduce that for n ≥ 3, α    E max | log fX (Xi )| ≤ log ν − E min  log fX (Xi ) i=l+1,...,n i=l+1,...,n Z t∗ αt 2(α + dX ) e 2(α+dX ) dt + ≤ log ν + (n − l)K̃dX ,φ log n α −∞ 2(α + dX ) 2(α + dX ) = log ν + log n. (29) K̃dX ,φ + α α  Combining (29) with the corresponding bound on E maxi=l+1,...,n | log fY (Yi )| , which is obtained in the same way, we conclude from (28) that sup sup k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ n X k ∗ ln log n 1 (1)  log fX fY (Zi ) 1(A0i )c . 1 Eτ → 0. n i=l+1 n (30)  Moreover, given any  ∈ 0, (α + 2d)/α , set 0 := α/(α + 2d). Then by two applications of Hölder’s inequality, n X 1 (1)  Eτ log fX fY (Zi ) 1{Z (1) ∈W c } n i n i=l+1 Z 0 Z ν 0 fX fY (z)| log fX fY (z)| dz ≤ νpn + 0 ≤ fX fY (z)1− dz  Wnc Wnc   Z 0  ν 2d/(α+2d) ≤ νpn + 0 p1− f f (z) dz X Y  n Wnc  Z α/(α+2d)  0 ν  1−  1 2d/(α+2d) α−1 . p1− ≤ νpn + 0 pn 1 + max(1, 2 )µ dz n . α )2d/α  (1 + kzk d R (31) From (30) and (31), we find that n X 1 (1)  sup sup max Eτ log fX fY (Zi ) 1{(Aci \(A0i )c )∪(A0i )c } → 0, ln l n Sn k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ τ ∈∪l=0 i=l+1 (32) which takes care of the second term in (27). 
(1) For the third term in (27), since Zi  (j) and Zi : j ∈ / {i, τ (i), τ −1 (i)} are independent, 32 we have  X  n n 1 X n − l (1) (1) 1/2 1 H(X) + H(Y ) ≤ Varτ log fX fY (Zi ) − log fX fY (Zi ) Eτ n i=l+1 n n i=l+1    31/2  31/2 1/2 1/2 2 2 1/2 (1) + E log fY (Y1 ) . ≤ 1/2 Varτ log fX fY (Zn ) ≤ 1/2 E log fX (X1 ) n n By a very similar argument to that in (19), we deduce that n 1 X n − l (1) max Eτ sup sup H(X) + H(Y ) log fX fY (Zi ) − ln l n i=l+1 n Sn k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ τ ∈∪l=0 → 0, (33) which handles the third term in (27). For the fourth term in (27), we simply observe that, by (19), sup sup max ln l Sn k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ τ ∈∪l=0 ln |H(X) + H(Y )| → 0. n (34) Finally, we turn to the first term in (27). Recalling the definition of rz,fX fY ,δ in (4), we define the random variables Ni,δ := X 1kZ (1) −Z (1) k≤r l+1≤j≤n j i . (1) Z ,fX fY ,δ i j6=i We study the bias and the variance of Ni,δ . For δ ∈ (0, 2] and z ∈ Wn , we have that (1) Eτ (Ni,δ |Zi Z = z) ≥ (n − ln − 3) fX fY (w) dw Bz (rz,fX fY ,δ ) d ≥ (n − ln − 3)Vd rz,f f ,δ fX fY (z)(1 − cn )  X Y  Ψ(k) n − ln − 3 = δ(1 − cn )e . n−1 (35) Similarly, also, for δ ∈ (0, 2] and z ∈ Wn , (1) Eτ (Ni,δ |Zi Z = z) ≤ 2 + (n − ln − 3) fX fY (w) dw Bz (rz,fX fY ,δ ) ≤ 2 + δ(1 + cn )e 33 Ψ(k)   n − ln − 3 . n−1 (36) Note that if j2 ∈ / {j1 , τ (j1 ), τ −1 (j1 )} then for δ ∈ (0, 2] and z ∈ Wn , Covτ  1 (1) 1 kZj −zk≤rz,fX fY ,δ , 1 (1) 2 kZj −zk≤rz,fX fY ,δ (1) |Zi  = z = 0. Also, for j ∈ / {i, τ (i), τ −1 (i)} we have Varτ  1 (1) |Zi (1) kZj −zk≤rz,fX fY ,δ  Z =z ≤ fX fY (w) dw ≤ Bz (rz,fX fY ,δ ) δ(1 + cn )eΨ(k) . n−1 When j ∈ {i, τ (i), τ −1 (i)} we simply bound the variance above by 1/4 so that, by Cauchy– Schwarz, when n − 1 ≥ 2(1 + cn eΨ(k) ), (1) Varτ (Ni,δ |Zi = z) ≤ 3(n − ln − 3) δ(1 + cn )eΨ(k) + 1. n−1 (37) 1/2 Letting n,k := max(k −1/2 log n, cn ), there exists n0 ∈ N, depending only on dX , dY , φ and k0∗ , such that for n ≥ n0 and all k ∈ {k0∗ , . . . , k1∗ }, we have n,k ≤ 1/2 and     k  n,k Ψ(k) n−ln −3 Ψ(k) n−ln −3 min (1+n,k )(1−cn )e −k , k−2−(1−n,k )(1+cn )e ≥ . n−1 n−1 2 We deduce from (37) and (35) that for n ≥ n0 ,   (1) (1) Pτ {ξi fX fY (Zi ) − 1}1Ai ≥ n,k = Pτ Ai ∩ {ρ(k),i,(1) ≥ rZ (1) ,fX fY ,1+ } n,k i Z (1) (1) fX fY (z)Pτ (Ni,1+n,k ≤ k|Zi = z) dz ≤ Pτ (Ni,1+n,k ≤ k, Zi ∈ Wn ) = Wn (1) Varτ (Ni,1+n,k |Zi = z) fX fY (z) (1) {Eτ (Ni,1+n,k |Zi = z) − k}2 Wn Z ≤ ≤ dz (1+n,k )(1+cn )eΨ(k) +1 n−1  2. n −3 cn )eΨ(k) n−l − k n−1 3(n − ln − 3) (1 + n,k )(1 − (38) Using very similar arguments, but using (36) in place of (35), we also have that for n ≥ n0 , (1) (1) Pτ {ξi fX fY (Zi ) − 1}1Ai ≤ −n,k ≤  (1−n,k )(1+cn )eΨ(k) +1 n−1  2. n −3 n,k )(1 + cn )eΨ(k) n−l n−1 3(n − ln − 3)  k − 2 − (1 − 34 (39) Now, by Markov’s inequality, for k ≥ 3, i ∈ {l + 1, . . . , n}, τ ∈ Snl and δ ∈ (0, 2], Pτ   (1) (1) (1) ξi fX fY (Zi ) ≤ δ ∩ Ai = Pτ (Ni,δ ≥ k, Zi ∈ Wn ) Z Z n−l−3 ≤ fX fY (z) k−2 Wn Bz (rz,f ≤ fX fY (w) dw dz ) X fY ,δ n−l−3 δeΨ(k) (1 + cn ) . k−2 n−1 (40) Moreover, for any i ∈ {l + 1, . . . , n}, τ ∈ Snl and t > 0, by two applications of Markov’s inequality, Pτ   (1) (1) (1) ξi fX fY (Zi ) ≥ t ∩ Ai ≤ Pτ (Ni,t ≤ k, Zi ∈ Wn ) Z Z n−l−3 ≤ fX fY (z) fX fY (w) dw dz n − l − k − 3 Wn Bz (rz,fX fY ,t )c Z µ + kzkα n − ln − 3 α−1 max{1, 2 } fX fY (z) α ≤ dz n − ln − k − 3 rz,fX fY ,t Wn  α/d n − ln − 3 α−1 2α/d α/d n − 1 max{1, 2 }2µν Vd ≤ t−α/d . 
Ψ(k) n − ln − k − 3 e (41)  Writing sn := log2 (n/k) log2d/α n , we deduce from (38), (39), (40) and (41) that for n ≥ n0 , Z ∞  2 (1)  (1)  1/2 (1)  (1) Eτ log ξi fX fY (Zi ) 1Ai = Pτ ξi fX fY (Zi ) ≤ e−s ∩ Ai ds 0 Z sn Z ∞  (1)  1/2 (1) 2 + Pτ ξi fX fY (Zi ) > es ∩ Ai ds + log (1 + n,k ) + log2 (1+n,k ) Ψ(k) Z ∞ ≤ e n−l−3 (1 + cn ) k−2 n−1 + sn sn e−s 1/2 ds + log2 (1 + n,k ) 0 12(n − ln − 3) (1+n,k )(1+cn )eΨ(k) n−1 k 2 2n,k +1  α/d n − ln − 3 α−1 2α/d α/d n − 1 + max{1, 2 }2µν Vd n − ln − k − 3 eΨ(k) Z ∞ e−s 1/2 α/d ds. sn We conclude that sup sup sup max  (1) (1)  max Eτ log2 ξi fX fY (Zi ) 1Ai < ∞. n S l i=l+1,...,n n∈N k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ τ ∈∪ll=1 n 35 (42) Finally, then, from (38), (39) and (42), n 1 X (1) (1)  (1) (1)  Eτ log ξi fX fY (Zi ) 1Ai ≤ max Eτ log ξi fX fY (Zi ) 1Ai i=l+1,...,n n i=l+1 h  i1/2  (1) (1)  (1) (1)  ≤ 2n,k + max Pτ log ξi fX fY (Zi ) 1Ai ≥ 2n,k Eτ log2 ξi fX fY (Zi ) 1Ai i=l+1,...,n h  i1/2  (1) (1)  (1) (1) ≤ 2n,k + max Pτ ξi fX fY (Zi ) − 1 1Ai ≥ n,k Eτ log2 ξi fX fY (Zi ) 1Ai , i=l+1,...,n so sup sup max ln l Sn k∈{k0∗ ,...,k1∗ } f ∈GdX ,dY ,φ τ ∈∪l=1 n 1 X (1) (1)  Eτ log ξi fX fY (Zi ) 1Ai → 0. n i=l+1 (43) as n → ∞. The proof of claim (26) follows from (27), together with (32), (33), (34) and (43). The final result therefore follows from (12), (13), (20), (21) and (26). 7.3 Proof from Section 5 T T T Proof of Theorem 6. We partition X = (X(1) X(2) ) ∈ R(m+m)×p and, writing η (0) := /σ,  T (b) (b) T (0) T η(2) , X(2) )−1 X(2) let η (b) = (η(1) )T , (η(2) )T ∈ Rm+m , for b = 0, . . . , B. Further, let γ b := (X(2) (1) T T b(2) /σ. Now define the events η(2) and sb := σ X(2) )−1 X(2) γ b(1) := (X(2) n o Ar0 ,s0 := max(kb γ k, kb γ (1) k) ≤ r0 , sb ∈ [s0 , 1/s0 ] , sb(1) ∈ [s0 , 1/s0 ] and A0 := {λmin (n−1 X T X) > 21 λmin (Σ)}. Now  P(I˘n(0) ≤ I˘n(1) ) ≤ P {I˘n(0) ≤ I˘n(1) } ∩ Ar0 ,s0 ∩ A0 + P(Ac0 ) + P(A0 ∩ Acr0 ,s0 ). For the first term in (44), define functions h, h(1) : Rp → R by (1) h(b) := H(η1 − X1T b) and h(1) (b) := H(η1 − X1T b). 36 (44) Writing R1 and R2 for remainder terms to be bounded below, we have b m (b b m (X1 , ηb1,(1) ), . . . , (Xm , ηbm,(1) ) I˘n(0) − I˘n(1) = H η1,(1) , . . . , ηbm,(1) ) − H   (1) (1) b m (b b m (X1 , ηb(1) ), . . . , (Xm , ηb(1) ) −H η1,(1) , . . . , ηbm,(1) ) + H 1,(1) m,(1) = I(X1 ; 1 ) + h(b γ ) − h(0) − h(1) (b γ (1) ) + h(1) (0) + R1 = I(X1 ; 1 ) + R1 + R2 . (45) To bound R1 on the event Ar0 ,s0 , we first observe that for fixed γ, γ (1) ∈ Rp and s, s(1) > 0, η1 − X1T γ H s     (1)    (1) η1 − X1T γ η1 − X1T γ (1) η1 − X1T γ (1) − H X1 , −H + H X1 , s s(1) s(1)   (1)   η   η (1)  η1 − X1T γ (1) η1 − X1T γ 1 1 −H X1 − H + H X =H 1 s s s(1) s(1)  = H(η1 ) − H(η1 |X1 ) + h(γ) − h(0) − h(1) (γ (1) ) + h(1) (0) = I(X1 ; 1 ) + h(γ) − h(0) − h(1) (γ (1) ) + h(1) (0). (46) Now, for a density g on Rd , define the variance functional Z v(g) := g(x){log g(x) + H(g)}2 dx. Rd If (b γ, γ b(1) , sb, sb(1) )T are such that Ar0 ,s0 holds, then conditional on (b γ, γ b(1) , sb, sb(1) ), we have (1) (1) γ b,b s γ b ,b s fηbγb,bs ∈ F1,θ1 , fX,b η ∈ Fp+1,θ2 , fηb(1) (1) (1) γ b ,b s ∈ F1,θ1 and fX,b η (1) ∈ Fp+1,θ2 . It follows by (46), Berrett, Samworth and Yuan (2017, Theorem 1 and Lemma 11(i)) that lim sup n1/2 n→∞ sup sup Ef (|R1 |1Ar0 ,s0 ) ∗ kη ∈{k0∗ ,...,kη∗ } f ∈Fp+1,ω ∗ ∗ k∈{k0 ,...,k } ≤ lim sup n1/2 n→∞ sup ∗ kη ∈{k0∗ ,...,kη∗ } f ∈Fp+1,ω k∈{k0∗ ,...,k∗ } ≤ 4 sup v(g) + 4 g∈F1,θ1 sup sup  E(R12 1Ar0 ,s0 ) v(g) < ∞. 
1/2 (47) g∈Fp+1,θ2 Now we turn to R2 , and study the continuity of the functions h and h(1) , following the approach taken in Proposition 1 of Polyanskiy and Wu (2016). Write a1 ∈ A for the fifth 37 component of θ1 , and for x ∈ R with fη (x) > 0, let ra1 (x) := 1 . 8a1 fη (x) Then, for |y − x| ≤ ra1 (x) we have by Berrett, Samworth and Yuan (2017, Lemma 12) that |fη (y) − fη (x)| ≤ 15 15 fη (x)a1 (fη (x))|y − x| ≤ fη (x). 7 56 Hence | log fη (y) − log fη (x)| ≤ 120 a1 (fη (x))|y − x|. 41 When |y − x| > ra1 (x) we may simply write | log fη (y) − log fη (x)| ≤ 8{| log fη (y)| + | log fη (x)|}a1 (fη (x))|y − x|. Combining these two equations we now have that, for any x, y such that fη (x) > 0, | log fη (y) − log fη (x)| ≤ 8{1 + | log fη (y)| + | log fη (x)|}a1 (fη (x))|y − x|. By an application of the generalised Hölder inequality and Cauchy–Schwarz, we conclude that fη (η1 − X1T γ) fη (η1 )  2 2 1/2   1/4 1/2   ≤ 8 Ea1 (fη (η1 )) E 1 + | log fη (η1 − X1T γ)| + | log fη (η1 )| E(|X1T γ|4  1/2  1/2   1/4 ≤ 16kγk Ea21 (fη (η1 )) E 1 + log2 fη (η1 − X1T γ) + log2 fη (η1 ) E(kX1 k4 ) . E log (48) We also obtain a similar bound on the quantity E log fηbγ,1 (η1 −X1T γ) fηbγ,1 (η1 ) d when kγk ≤ r0 . Moreover, for any random vectors U, V with densities fU , fV on R satisfying H(U ), H(V ) ∈ R, and writing U := {fU > 0}, V := {fV > 0}, we have by the non-negativity of Kullback–Leibler divergence that Z H(U ) − H(V ) = Z fV log fV − V Z fU log fV − U fU log U 38 fU fV (V ) ≤ E log . fV fV (U ) Combining this with the corresponding bound for H(V ) − H(U ), we obtain that   fU (U ) fV (V ) . |H(U ) − H(V )| ≤ max E log , E log fU (V ) fV (U ) Since fη , fηbγ,1 ∈ F1,θ1 when kγk ≤ r0 , we may apply (48), the second part of Berrett, Samworth and Yuan (2017, Proposition 9), (5), (6) and the fact that α2 ≥ 4 to deduce that sup sup ∗ f ∈Fp+1,ω γ∈B0◦ (r0 ) |h(γ) − h(0)| kγk   fηbγ,1 (η1 − X1T γ) 1 fη (η1 − X1T γ) ≤ sup sup max E log , E log < ∞. ∗ fη (η1 ) fηbγ,1 (η1 ) f ∈Fp+1,ω γ∈B0◦ (r0 ) kγk (49) Similarly, |h(1) (γ) − h(1) (0)| < ∞. ∗ kγk f ∈Fp+1,ω γ∈B0◦ (r0 ) sup sup (50) We now study the convergence of γ b and γ b(1) to 0. By definition, the unique minimiser of the function R(β) := E{(Y1 − X1T β)2 } = E(21 ) − 2(β − β0 )T E(1 X1 ) + (β − β0 )T Σ(β − β0 ) is given by β0 , and R(β0 ) = E(21 ). Now, for λ > 0,  R β0 + λE(1 X1 ) = E(21 ) − 2λkE(1 X1 )k2 + λ2 E(1 X1 )T ΣE(1 X1 ). If E(1 X1 ) 6= 0 then taking λ = kE(1 X1 )k2 E(1 X1 )T ΣE(1 X1 )  we have R β0 + λE(1 X1 ) < E(21 ), a contradition. Hence E(1 X1 ) = 0. Moreover, p p n n n  X  X 1 X 1 X 1 X 1/2 E Xi ηi ≤ E Xij ηi ≤ Var Xij ηi m i=m+1 m j=1 i=m+1 m j=1 i=m+1 p 1 X 21/2 p 4 1/4 = 1/2 Var1/2 (X1j η1 ) ≤ 1/2 E(η14 )1/4 max E(X1j ) . j=1,...,p m j=1 n 39 (51) It follows that n o  21/2 p T T (0) 4 1/4 E kb γ k1A0 = E (X(2) X(2) )−1 X(2) η(2) 1A0 ≤ E(η14 )1/4 max E(X1j ) . j=1,...,p λ0 n1/2 (52)  Similar arguments yield the same bound for E kb γ (1) k1A0 . Hence, from (49), (50) and (52), we deduce that lim sup n1/2 n→∞ sup sup Ef (|R2 |1Ar0 ,s0 ∩A0 ) < ∞. (53) ∗ kη ∈{k0∗ ,...,kη∗ } f ∈Fp+1,ω k∈{k0∗ ,...,k∗ } We now bound P(Ac0 ). By the Hoffman–Wielandt inequality, we have that T T {λmin (m−1 X(2) X(2) ) − λmin (Σ)}2 ≤ km−1 X(2) X(2) − Σk2F . Thus P(Ac0 )  −1 ≤ P km T X(2) X(2) p  X 1 4 − ΣkF ≥ λmin (Σ) ≤ Var(X1j X1l ) 2 mλ2min (Σ) j,l=1 ≤ 8p2 4 max E(X1j ). nλ2min (Σ) j=1,...,p (54) Finally, we bound P(A0 ∩ Acr0 ,s0 ). By Markov’s inequality and (52),   1 21/2 p 4 1/4 P {kb γ k ≥ r0 } ∩ A0 ≤ E kb γ k1A0 ≤ E(η14 )1/4 max E(X1j ) . 
j=1,...,p r0 r0 λ0 n1/2 (55)  The same bound also holds for P {kb γ (1) k ≥ r0 } ∩ A0 . Furthermore, writing P(2) := T T X(2) (X(2) X(2) )−1 X(2) , note that n 1 1 X 2 1 (0)T 1 (0) 2 (0) 2 (0) kη(2) k − 1 − kP(2) η(2) k ≤ (ηi − 1) + η(2) P(2) η(2) , |b s − 1| = m m m i=m+1 m 2 40 so by Chebychev’s inequality, Markov’s inequality and (51), for any δ > 0  P {|b s2 − 1| > δ} ∩ A0      n  λ δ 1/2  δ 1 T (0) 1 X 2 0 ∩ A0 + P X η > ∩ A0 ≤P (η − 1) > m i=m+1 i 2 m (2) (2) 2 ≤ 8 2p 4 1/4 ) . E(η14 ) + E(η14 )1/4 max E(X1j 1/2 1/2 2 j=1,...,p 1/2 nδ δ λ0 n (56)  The same bound holds for P {|(b s(1) )2 − 1| > δ} ∩ A0 . We conclude from Markov’s inequality, (44), (45), (47), (53), (54), (55) and (56) that sup sup sup ∗ k ∈{k ∗ ,...,k ∗ } f ∈F ∗ Bn ≥Bn η η 0 1,p+1 :I(f )≥bn k∈{k0∗ ,...,k∗ } ≤ Pf (I˘n ≤ C̆q(n),Bn ) 1 + sup sup Pf (I˘n(1) ≥ I˘n(0) ) ∗ + 1) kη ∈{k0∗ ,...,kη∗ } f ∈F1,p+1 :I(f )≥bn q(Bn∗ k∈{k0∗ ,...,k∗ } n  1 ˘(1) ≥ I˘(0) } ∩ Ar0 ,s0 ∩ A0 + sup sup P { I ≤ f n n ∗ q(Bn∗ + 1) kη ∈{k0∗ ,...,kη∗ } f ∈F1,p+1 :I(f )≥bn k∈{k0∗ ,...,k∗ } o + Pf (A0 ∩ Acr0 ,s0 ) + Pf (Ac0 ) n1 1 ≤ + sup Ef (|R1 + R2 |1Ar0 ,s0 ) sup ∗ q(Bn∗ + 1) kη ∈{k0∗ ,...,kη∗ } f ∈F1,p+1 :I(f )≥bn bn k∈{k0∗ ,...,k∗ } o + Pf (A0 ∩ Acr0 ,s0 ) + Pf (Ac0 ) → 0, as required. Acknowledgements: Both authors are supported by an EPSRC Programme grant. The first author was supported by a PhD scholarship from the SIMS fund; the second author is supported by an EPSRC Fellowship and a grant from the Leverhulme Trust. References Albert, M., Bouret, Y., Fromont, M. and Reynaud-Bouret, P. (2015) Bootstrap and permutation tests of independence for point processes. Ann. Statist., 43, 2537–2564. 41 Bach, F. R. and Jordan, M. I. (2002) Kernel independent component analysis. J. Mach. Learn. Res., 3, 1–48. Bergsma, W. and Dassios, A. (2014) A consistent test of independence based on a sign covariance related to Kendall’s tau. Bernoulli, 20, 1006–1028. Berrett, T. B., Grose, D. J. and Samworth, R. J. (2017) IndepTest: parametric independence tests based on entropy estimation. non- Available at https://cran.r-project.org/web/packages/IndepTest/index.html. Berrett, T. B., Samworth, R. J. and Yuan, M. (2017) Efficient multivariate entropy estimation via k-nearest neighbour distances. https://arxiv.org/abs/1606.00304v3. Biau, G. and Devroye, L. (2015) Lectures on the Nearest Neighbor Method. Springer, New York. Comon, P. (1994) Independent component analysis, a new concept?. Signal Process., 36, 287–314. Dalgaard, P. (2002) Introductory Statistics with R. Springer-Verlag, New York. Dobson, A. J. (2002) An Introduction to Generalized Linear Models. Chapman & Hall, London. Einmahl, J. H. J. and van Keilegom, I. (2008) Tests for independence in nonparametric regression. Statistica Sinica, 18, 601–615. Fan, J., Feng, Y. and Xia, L. (2017) A projection based conditional dependence measure with applications to high-dimensional undirected graphical models. Available at arXiv:1501.01617. Fan, Y., Lafaye de Micheaux, P., Penev, S. and Salopek, D. (2017) Multivariate nonparametric test of independence. J. Multivariate Anal., 153, 189–210. Gibbs, A. L. and Su, F. E. (2002) On choosing and bounding probability metrics. Int. Statist. Review, 70, 419–435. Gretton A., Bousquet O., Smola A. and Schölkopf B. (2005) Measuring Statistical Dependence with Hilbert-Schmidt Norms. Algorithmic Learning Theory, 63–77. 42 Gretton, A. and Györfi, L. (2010) Consistent nonparametric tests of independence. J. Mach. Learn. Res., 11, 1391–1423. 
Heller, R., Heller, Y., Kaufman, S., Brill, B. and Gorfine, M. (2016) Consistent distributionfree K-sample and independence tests for univariate random variables. J. Mach. Learn. Res., 17, 1–54. Hoeffding, W. (1948) A non-parametric test of independence. Ann. Math. Statist., 19, 546– 557. Hofert, M., Kojadinovic, I., Mächler, M. and Yan, J. (2017) copula: variate Dependence with Copulas. Multi- R Package version 0.999-18. Available from https://cran.r-project.org/web/packages/copula/index.html. Jitkrittum, W., Szabó, Z. and Gretton, A. (2016) An adaptive test of independence with analytic kernel embeddings. Available at arXiv:1610.04782. Joe, H. (1989) Relative entropy measures of multivariate dependence. J. Amer. Statist. Assoc., 84, 157–164. Josse, J. and Holmes, S. (2016) Measuring multivariate association and beyond. Statist. Surveys, 10, 132–167. Kojadinovic, I. and Holmes, M. (2009) Tests of independence among continuous random vectors based on Cramér–von Mises functionals of the empirical copula process. J. Multivariate Anal., 100, 1137–1154. Kozachenko, L. F. and Leonenko, N. N. (1987) Sample estimate of the entropy of a random vector. Probl. Inform. Transm., 23, 95–101. Lauritzen, S. L. (1996) Graphical Models. Oxford University Press, Oxford. Mari, D. D. and Kotz, S. (2001) Correlation and Dependence. Imperial College Press, London. Miller, E. G. and Fisher, J. W. (2003) ICA using spacings estimates of entropy. J. Mach. Learn. Res., 4, 1271–1295. Müller, U. U., Schick, A. and Wefelmeyer, W. (2012) Estimating the error distribution function in semiparametric additive regression models. J. Stat. Plan. Inference, 142, 552–566. 43 Neumeyer, N. (2009) Testing independence in nonparametric regression. J. Multivariate Anal., 100, 1551–1566. Neumeyer, N. and Van Keilegom, I. (2010) Estimating the error distribution in nonparametric multiple regression with applications to model testing. J. Multivariate Anal., 101, 1067–1078. Nguyen, D. and Eisenstein, J. (2017) A kernel independence test for geographical language variation. Comput. Ling., to appear. Pearl, J. (2009) Causality. Cambridge University Press, Cambridge. Pearson, K. (1920) Notes on the history of correlation. Biometrika, 13, 25–45. Peixoto, J. L. (1990) A property of well-formulated polynomial regression models. The American Statistician, 44, 26–30. Pfister, N., Bühlmann, P., Schölkopf, B. and Peters, J. (2017) Kernel-based tests for joint independence. J. Roy. Statist. Soc., Ser. B, to appear. Pfister, N. and Peters, J. (2017) Schmidt Independence Criterion. dHSIC: Independence Testing via Hilbert R Package version 2.0. Available at https://cran.r-project.org/web/packages/dHSIC/index.html. Polyanskiy, Y. and Wu, Y. (2016) Wasserstein continuity of entropy and outer bounds for interference channels. IEEE Trans. Inf. Theory, 62, 3992–4002. Rizzo, M. L. and Szekely, G. J. (2017) energy: Inference via the Energy of Data. E-Statistics: Multivariate R Package version 1.7-2. Available from https://cran.r-project.org/web/packages/energy/index.html. Samworth, R. J. and Yuan, M. (2012) Independent component analysis via nonparametric maximum likelihood estimation. Ann. Statist., 40, 2973–3002. Schweizer, B. and Wolff, E. F. (1981) On nonparametric measures of dependence for random variables. Ann. Statist., 9, 879–885. Sejdinovic, D., Sriperumbudur, B., Gretton, A. and Fukumizu, K. (2013) Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Ann. Statist., 41, 2263– 2291. 44 Sen, A. and Sen, B. 
(2014) Testing independence and goodness-of-fit in linear models. Biometrika, 101, 927–942. Shah, R. D. and Bühlmann, P. (2017) Goodness of fit tests for high-dimensional linear models. J. Roy. Statist. Soc., Ser. B, to appear. Song, L., Smola, A., Gretton, A., Bedo, J. and Borgwardt, K. (2012) Feature selection via dependence maximization. J. Mach. Learn. Res., 13, 1393–1434. Steuer, R., Kurths, J., Daub, C. O., Weise, J. and Selbig, J. (2002) The mutual information: detecting and evaluating dependencies between variables. Bioinformatics, 18, 231–240. Stigler, S. M. (1989) Francis Galton’s account of the invention of correlation. Stat. Sci., 4, 73–86. Su, L. and White, H. (2008) A nonparametric Hellinger metric test for conditional independence. Econometric Theory, 24, 829–864. Székely, G. J., Rizzo, M. L. and Bakirov, N. K. (2007) Measuring and testing dependence by correlation of distances. Ann. Statist., 35, 2769–2794. Székely, G. J. and Rizzo, M. L. (2013) The distance correlation t-test of independence in high dimension. J. Multivariate Anal., 117, 193–213. Torkkola, K. (2003) Feature extraction by non-parametric mutual information maximization. J. Mach. Learn. Res., 3, 1415–1438. Vinh, N. X., Epps, J. and Bailey, J. (2010) Information theoretic measures for clusterings comparison: variants, properties, normalisation and correction for chance. J. Mach. Learn. Res., 11, 2837–2854. Weihs, L., Drton, M. and Leung, D. (2016) Efficient computation of the Bergsma–Dassios sign covariance. Comput. Stat., 31, 315–328. Weihs, ances: L., Drton, M. and Meinshausen, N. (2017) Symmetric rank covari- a generalised framework for nonparametric measures of dependence. https://arxiv.org/abs/1708.05653. Wu, E. H. C., Yu, P. L. H. and Li, W. K. (2009) A smoothed bootstrap test for independence based on mutual information. Comput. Stat. Data Anal., 53, 2524–2536. 45 Yao, S., Zhang, X. and Shao, X. (2017) Testing mutual independence in high dimension via distance covariance. J. Roy. Statist. Soc., Set. B, to appear. Zhang, K., Peters, J., Janzing, D. and Schölkopf, B. (2011) Kernel-based conditional independence test and application in causal discovery. https://arxiv.org/abs/1202.3775. Zhang, Q., Filippi, S., Gretton, A. and Sejdinovic, D. (2017) Large-scale kernel methods for independence testing. Stat. Comput., 27, 1–18. 46
1 Distributed Rate and Power Control in Vehicular Networks arXiv:1511.01535v1 [] 4 Nov 2015 Jubin Jose, Chong Li, Xinzhou Wu, Lei Ying and Kai Zhu Abstract—The focus of this paper is on the rate and power control algorithms in Dedicated Short Range Communication (DSRC) for vehicular networks. We first propose a utility maximization framework by leveraging the well-developed network congestion control, and formulate two subproblems, one on rate control with fixed transmit powers and the other on power control with fixed rates. Distributed rate control and power control algorithms are developed to solve these two subproblems, respectively, and are proved to be asymptotically optimal. Joint rate and power control can be done by using the two algorithms in an alternating fashion. The performance enhancement of our algorithms compared with a recent rate control algorithm, called EMBARC [1], is evaluated by using the network simulator ns2. I. I NTRODUCTION Dedicated Short Range Communication (DSRC) service [2] is for vehicle-to-vehicle and vehicle-to-infrastructure communication in the 5.9 GHz band. Among the 75 MHz allocated to DSRC, the channel 172 (5.855 GHz – 5.865 GHz) is assigned for critical safety operations, which allows vehicles to periodically exchange Basic Safety Messages (BSM) to maximize the mutual awareness to prevent collisions. Such messages typically include the GPS position, velocity of the vehicle. By receiving these BSM messages from surrounding vehicles, all participating vehicles in the DSRC safety system can assess the threat of potential collisions and provide warnings to the driver, if necessary. United States Department of Transportation (USDOT) reported that the DSRC safety system based on this simple mechanism can address 80% of the traffic accidents on the road today and thus has potentially huge societal benefit. On the other hand, such benefit is possible only when timely and reliable information exchange among vehicles using DSRC can be guaranteed in all deployment scenarios. DSRC is based on IEEE 802.11p standards [3] at PHY and MAC layer. It has been well known that DSRC vehicular networks exhibit degraded performance in congested scenarios. In particular, excessive packet loss can be observed at high node density even between vehicles which are in close proximity to each other [4], which can severely undermine the safety benefit targeted by deploying such networks. Performance improvement in such scenarios has been one of the key challenging issues for the success of DSRC. The industry and academics have contributed various solutions to this issue in a collaborative way over the last decade, e.g., [5], [6], [7]. J.Jose, C. Li and X. Wu are with Qualcomm Research; L,Ying and K. Zhu are with School of Electrical, Computer and Energy Engineering, Arizona State University. The authors are listed in alphabetical order The key system parameters one may control at each vehicle are the transmit power and transmit rate, i.e. the periodicity of the BSM messages, to alleviate the system congestion, i.e. lower transmit rate and power reduces the footprint and also the number of messages a DSRC device may generate and thus reduce the congestion level in the critical safety channel. 
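A rough back-of-the-envelope calculation shows why the safety channel congests at high vehicle density. The sketch below is purely illustrative: the packet size and PHY rate are the simulation parameters reported later in this paper (300-byte BSMs at 6 Mbps, 10 Hz updates), and MAC overhead, preambles, inter-frame spacing and collisions are all ignored, so the true channel load is higher still.

```python
# Back-of-envelope check (illustrative only): why dense traffic congests the
# DSRC safety channel. Packet size and PHY rate follow the simulation
# parameters reported later in the paper; all overhead is ignored.
packet_bits = 300 * 8          # one Basic Safety Message
phy_rate_bps = 6e6             # DSRC data rate
bsm_rate_hz = 10               # nominal BSM update rate

airtime_per_msg = packet_bits / phy_rate_bps          # ~0.4 ms on air per message
load_per_vehicle = airtime_per_msg * bsm_rate_hz      # ~0.4% of the channel per vehicle

for n_vehicles in (100, 200, 400):
    # fraction of time the channel is busy, ignoring MAC overhead and collisions
    print(n_vehicles, "vehicles in carrier-sense range ->",
          f"{n_vehicles * load_per_vehicle:.0%} channel load")
```

With a few hundred vehicles within carrier-sense range the channel approaches or exceeds saturation, which is exactly the regime the rate and power control algorithms below are designed to manage.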
On the other hand, both rate control and power control are critical for system performance as the transmit power of a vehicle determines the number of surrounding vehicles which can discover the vehicle and higher transmit rate of the BSM message can improve the accuracy the of collision estimation between two vehicles by having more message exchanges. Thus, a key problem to be addressed in the DSRC system is: How to choose the transmit rate and power for each vehicle in a distributed manner such that the overall system performance is maximized without creating excessive network congestion, i.e. observing very high channel load in some locations of the network? Both rate control and power control have been studied in the literature (e.g. [8], [9]). However, most of these works propose heuristic methods to adjust the rate and (or) power in simplistic scenarios, e.g. single bottleneck scenarios (i.e., there is only one congested channel in the network). Further, some of the methods [6], [1] rely on the existence of global parameters for algorithm convergence, which leads to system resource under-utilization in some scenarios. The focus of this paper1 is to propose a network resource allocation framework for rate and power control in vehicular network, by leveraging existing network congestion control framework [11] established in the context of wireline and wireless networks, and then develop optimal distributed rate and power control algorithms to achieve such a goal. The main contributions of this paper are summarized below: • We propose a utility maximization framework for rate and power control in DSRC. In general, the utility maximization is a non-convex optimization problem with integer constraints. We separate the problem to two subproblems: rate control problem and power control problem, where the rate control problem is to find the optimal broadcast rates when the transmit power (i.e., the transmit ranges) of vehicles are fixed and the power control problem is to find the optimal transmit power (or transmission range) when the broadcast rates are fixed. • We develop a distributed rate control algorithm which is similar to the dual algorithm for the Internet congestion control and prove that the time-average total utility 1 Partial version of this paper has appeared in [10]. 2 obtained under the proposed rate control algorithm can be arbitrarily close to the optimal total utility with fixed transmission power. • The power control problem is a non-convex optimization problem. We reformulate the problem as an integer programming problem. After relaxing the integer constraints, we develop a distributed power control algorithm based on the dual formulation. Interestingly, it can be shown that one of the optimal solutions to the Lagrangian dual, when fixing the dual variables, is always an integer solution. That implies that the distributed algorithm derived from the relaxed optimization problem produces a valid power control decision and the relaxation is performed without loss of optimality (more details can be found in Section IV). Based on that, we prove that the time-average total utility obtained under the proposed power control algorithm can be arbitrarily close to the optimal total utility with fixed broadcast rates. The paper is organized as follows. In Section II, a utility maximization framework is provided for congestion control in DSRC. Then asymptotically optimal distributed rate control algorithm and power control algorithm are derived respectively in Section III and IV. 
In the end, simulation results are presented in Section V to verify our algorithms. The proofs in the paper are provided in the Appendix. A. Discussion on Related Work The design of rate and power control algorithms in DSRC is one of most critical problems in ITS. Error Model Based Adaptive Rate Control (EMBARC) [1] is a recent rate control protocol which integrates several existing rate control algorithms including the Linear Integrated Message Rate Control (LIMERIC) [6], Periodically Updated Load Sensitive Adaptive Rate control (PULSAR) [12], and the InterVechicle Transmission Rate Control (IVTRC) [13]. LIMERIC allocates the wireless channel equally among all vehicles that share the same bottleneck link while guaranteeing the channel load is below a given threshold. IVTRC generates messages and adapts transmission probabilities based on the Suspected Tracking Error (STE) calculated based on vehicle dynamics to avoid collisions. In EMBARC, the message rates are controlled by LIMERIC and are further modified to satisfy the STE requirement. A parallel work [14] introduced a network utility maximization (NUM) formulation on the rate control problem when specified to safety-awareness. A distributed algorithm was proposed to adjust the rate with the objective to maximize the utility function. Similarly, [15] also provided a NUM formulation on the rate control problem and proposed a fair adaptive beaconing rate for intervehicular communications (FABRIC) algorithm, which essentially is a particular scaled gradient projection algorithm to solve the dual of the NUM problem. Other related work includes the database approach proposed in [9], where the optimal broadcast rates and transmission power are calculated offline based on the network configurations. Also, [16] proposed an environment and context-aware distributed congestion control (DCC) algorithm, which jointly control the rate and power to improve cooperative awareness by adapting to both specific propagation environments (such as urban intersections, open highways, suburban roads) as well as application requirements (e.g., different target cooperative awareness range). However, the stability and convergence of the algorithm are not proved mathematically. Besides the rate control algorithm IVTRC, the authors also proposed range control algorithms in [17], [18], [8] where the objective is to adapt the transmission ranges to achieve a specific threshold. The motivation of limiting channel loads below the threshold is to control channel congestion to maximize effective channel throughput. However, fair resource allocation among vehicles to increase the safety awareness of all vehicles are not considered, and the stability of the algorithms is subject to certain conditions [8]. II. P ROBLEM F ORMULATION In this section, we formally define the utility maximization framework for the DSRC congestion control problem. We first introduce the set of notations used throughout this paper. • µi : the message broadcast rate of vehicle i; • pi : the transmit power of vehicle i; • αij : the minimum transmit power required for node i’s message to be decoded by node j; • βij : the minimum transmit power required for node i’s message to be sensed by node j, i.e. the received energy is above the carrier sensing energy detection threshold; • I : indicator function. Note αij is not necessarily the same as βij , as in IEEE802.11 standards, packet header decoding happens at a much lower energy level than energy based detection in carrier sensing. 
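To make the notation concrete, the short sketch below (with made-up thresholds and powers; nothing here comes from the paper) evaluates the two indicator relations for every pair of vehicles. The paper only notes that the decoding and sensing thresholds generally differ, so no ordering between them is assumed. The resulting sets are exactly the receive and sense relationships spelled out next, and they later become the sets R_i and I_i used by the algorithms.

```python
import numpy as np

# Illustrative only: n vehicles; alpha[i, j] / beta[i, j] are the minimum transmit
# powers for vehicle j to decode / sense vehicle i (values below are made up).
n = 4
rng = np.random.default_rng(0)
alpha = rng.uniform(1.0, 6.0, size=(n, n))   # decoding thresholds alpha_ij
beta = rng.uniform(1.0, 6.0, size=(n, n))    # carrier-sensing (energy detection) thresholds beta_ij
p = np.array([3.0, 4.5, 2.0, 6.0])           # transmit power of each vehicle

# Who can decode vehicle i's messages, and who merely senses them (and so sees i's channel load).
can_decode = {i: {j for j in range(n) if j != i and p[i] >= alpha[i, j]} for i in range(n)}
can_sense  = {i: {j for j in range(n) if j != i and p[i] >= beta[i, j]}  for i in range(n)}
print(can_decode)
print(can_sense)
```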
From the definition of αij and βij , it is easy to see that • Vehicle j can receive the message from vehicle i if pi ≥ αij ; • Vehicle j can detect (but not necessarily decode) the message from vehicle i if pi ≥ βij . We assume αij and βij are constants within the time frame for the distributed rate and power control algorithm, which is reasonable as the nominal BSM update rate is 10Hz, i.e. 10 transmissions in every second. In DSRC, a vehicle can control rate µi and power pi . We consider the following utility maximization problem for rate and power control: P P General − OPT maxµ,p i j Ipi ≥αij Uij (µi ) (1) P ∀j (2) subject to: i µi Ipi ≥βij ≤ γ µi ≥ 0, pi ≥ 0 ∀i. (3) Now we explain the particular choice of the objective function and constraints above. In the objective function (1), X Ipi ≥αij Uij (µi ) j is the total utility associated with vehicle i, which depends on the number of vehicles who can receive the transmissions of vehicle i, i.e., the size of the set  (4) j : Ipi ≥αij = 1 3 and the utility function Uij (µi ) associated with each ordered pair (i, j), which is a concave function and can be interpreted as the level of j ′ s awareness of i when j receives messages from i with rate µi . Obviously, higher transmission rate µi should lead to higher value of Uij in DSRC. The neighborhood size (4) is controlled by the transmit power pi and the value of utility Uij (µi ) is determined by rate µi . Remark 1. A widely used utility function [19] is called the α-fair utility function which includes proportional fairness, minimum potential-delay fairness as special cases and is given by µ1−αi Ui (µi ) = wi i , αi > 0, (5) 1 − αi where wi represents the weight of node i, determined by its local information such as relative velocity, instantaneous location in the application of vehicular network. Notice that this utility function, given in a generic form, is independent of communication links (from j). In other words, each vehicle only knows its own utility function. As will be seen later, a choice of such a form of utility function further simplifies the proposed distributed algorithms because there is no need of obtaining neighbors’ utility functions. i For αi = 2, the utility function turns to be Ui (µi ) = − w µi which implies weighted minimum potential delay fairness in network’s resource allocation. For αi = 1, the utility function behaves as Ui (µi ) = wi log(µi ) which leads to weighted proportional fairness. We refer interested readers to [20] for details. The constraint (2) states that the channel load at any vehicle j should be below a target channel load γ. In CSMA based systems, high γ value indicates channel congestion, which implies high packet collision rate and low effective channel throughout [17], [9]. In [17], the authors have observed that the curve of information dissemination rate versus channel load remains almost the same under different configurations. In [9], the authors also found that the effective channel throughput is maximized when the channel load is around 0.91 under various settings. Thus, it is natural to impose such a constraint (2) to limit the congestion level in the system. A. Problem decomposition General-OPT is difficult to solve because the objective function (1) is not jointly convex in (µ, p). We therefore separate the problem into rate control problem and power control problem as defined below. • Assume the transmit power is fixed at each vehicle. 
Then we can define Ri = {j : pi ≥ αij }, i.e., the set of vehicles who can receive the messages from vehicle i, and Ii = {j : pi ≥ βij }, i.e., the set of vehicles whose channel load can be affected by vehicle i′ s transmissions. When transmit power pi is • fixed, both Ri and Ii are fixed. In this case, general-OPT becomes the following Rate-OPT P P Rate − OPT : ρ = maxµ i j∈Ri Uij (µi ) (6) P subject to: ∀ j. (7) i:j∈Ii µi ≤ γ Assuming the broadcast rates are fixed, i.e., µi ’s are fixed, OPT becomes the following Power-OPT: P Power − OPT : ρ = maxp i,j Ipi ≥αij Uij (µi )(8) P (9) subject to: i µi Ipi ≥βij ≤ γ. B. Iterative joint rate and power control In light of the above decompositions, a (suboptimal) solution of General-OPT can be obtained by iterating Rate-OPT and Power-OPT. The initial set of rate or power parameters for the iterative algorithm can be appropriately chosen according to certain practical constraints. The stopping criterion at step k is typically set to be ρ(k + 1) − ρ(k) ≤ ǫ for ǫ > 0. It is worth noting that in each step of iterations the utility value ρ(k) is non-decreasing and ρ(k) is bounded above for all ∀k, given a well-defined utility function. Therefore, the convergence of the iterative algorithm is guaranteed. In the following sections, we will develop distributed algorithms to solve Rate-OPT and Power-OPT separately. The optimal rate control algorithm directly follows from the welldeveloped network congestion control while the optimal power control algorithm is innovative and rather technical. III. R ATE C ONTROL A LGORITHM In what follows, we study the rate control problem and develop a distributed rate control algorithm that solves (6). Note that Rate-OPT is similar to the network utility maximization (NUM) problem for the Internet congestion control (see [11] for a comprehensive introduction of the NUM problem for the Internet congestion control). Each vehicle i may represent both a flow and a link onPthe Internet, and µi is the data rate of flow i. Regarding j∈Ri Uij (µi ) as the utility function of vehicle i, the objective is to maximize the sum of user utilities. We may further say that flow i uses link j when j ∈ Ii . Then constraint (7) is equivalent to the link capacity constraint that requires the total data rate on link j to be no more than the link capacity γ. To this end, Rate-OPT can be viewed as a standard NUM problem for the Internet congestion control. The distributed rate control algorithm below is based on the dual congestion control algorithm for the Internet congestion control [11], which consists of rate control and congestion price update. The congestion price update monitors the channel load of vehicle j. The congestion price λj increases when the channel load at vehicle j exceeds the threshold γ and decreases otherwise. The rate control algorithm adapts the broadcast rate µi based on the aggregated congestion price from all vehicles who can sense the transmissions from vehicle i, i.e., the vehicles whose channel loads are affected by vehicle i. Rate Control Algorithm 4 1) Rate control algorithm at vehilce i : At time slot t, vehicle i broadcasts with rate µi [t] such that µi [t] = min   arg max µ  X X Uij (µ) − ǫµ j∈Ri λj [t − 1], µmax    j∈Ri (10) where ǫ ∈ (0, 1] is a tuning parameter. 2) Congestion price update at vehicle j : At time slot t, vehicle j updates its congestion price λj to be  + X λj [t] = λj [t − 1] + µi [t − 1] − γ  . 
(11) i:j∈Ii This rate control algorithm is developed based on the dual decomposition approach [11]. Specifically, the Lagrangian of optimization (6) is L(µi , λ) = X X i = X i X λj  Uij (µi ) − ǫµi X Uij (µi ) − ǫ j j∈Ri   X  j∈Ri j∈Ii X i:j∈Ii  µi − γ  λj  − γ X λj , j min g(λ) = min max L(µi , λ) λ µi When λ is fixed, the µi should maximize X X Uij (µi ) − ǫµi λj , j∈Ri j∈Ii which motivates the rate control algorithm (10). The congestion price update (11) is designed by taking derivative of g(λ) over λ and then the optimal λ, as a mean to optimize the dual problem, can be achieved by using a gradient search in (11). The next theorem shows the rate control algorithm is asymptotically optimal. Theorem 2. Denote by µ∗i the optimal solution to problem (6) and assume µmax > µ∗i for all i. Then there exists a constant B > 0, independent of ǫ, such that under the proposed rate control algorithm lim inf T →∞ 1 T T −1 X X t=0 i X j∈Ii Uij (µi [t]) ≥ XX i xij = Ipi ≥αij ∀ βik ≤ αij (15) ∀i, j (16) yij ∈ {0, 1} ∀i, j. xij ≥ xik yik ≥ xij (17) ∀ αij ≤ αik (20) ∀ βik ≤ αij (21) 0 ≤ xij ≤ 1 0 ≤ yij ≤ 1 ∀i, j ∀i, j. (22) (23) Now by including constraint (19) in the Lagrangian, we obtain P P P maxx,y i,j xij Uij (µi ) − ǫ j λj ( i yij µi − γ) s.t.: xij ≥ xik ∀ αij ≤ αik yik ≥ xij ∀ βik ≤ αij 0 ≤ xij ≤ 1 ∀i, j 0 ≤ yij ≤ 1 j∈Ii The proof of the theorem is similar to the proof of Theorem 6.1.1 in [11], and is omitted in thisP paper. Remark that if the objective function utility function j∈Ri Uij (µi ) is strictly concave, the optimal solution of Rate-OPT is unique since the search space is convex. As a consequence, the above algorithm converges to the unique optimal solution. yik ≥ xij xij ∈ {0, 1} Recall αij is the minimum transmit power for vehicle j to receive messages from vehicle i. So constraint (14) states that if vehicle j requires a smaller minimum transmit power of vehicle i than vehicle k, then vehicle j can receive from vehicle i if vehicle k can do so. Constraint (15) is similarly defined. Next, we relax the integer constraints (16) and (17) to obtain the following linear programming problem. P maxx,y i,j xij Uij (µi ) (18) P ∀j (19) subject to: i yij µi ≤ γ Uij (µ∗i ) − Bǫ.  and yij = Ipi ≥βij . The Power-OPT problem is equivalent to the following integer programming problem: P (12) maxx,y i,j xij Uij (µi ) P ∀j (13) subject to: i yij µi ≤ γ xij ≥ xik ∀ αij ≤ αik (14)  where ǫ is a tuning parameter. Then the dual problem is λ IV. P OWER C ONTROL A LGORITHM In this section, we develop a distributed power control algorithm that solves (8). The power control problem is developed by formulating the Power-OPT as an integer programming problem. After relaxing the integer constraint, we develop a distributed power control algorithm using the dual approach. Interestingly, it turns out the solution obtained from the Lagrangian dual is always an integer solution. In other words, the power control algorithm based on the linear approximation always gives a valid power control solution and is proved to be asymptotically optimal for Power-OPT. We first introduce new variables x and y such that ∀i, j. where ǫ is a tuning parameter. Note that constraints (20) and (21) impose conditions on x and y related to the same transmitter i. Therefore, given λ, the Lagrangian dual problem above can be decomposed into the sub-problems for each given i: P (24) maxx,y j xij Uij (µi ) − ǫλj µi yij subject to: xij ≥ xik ∀ αij ≤ αik 5 yik ≥ xij ∀ βik ≤ αij 0 ≤ xij ≤ 1 ∀j 0 ≤ yij ≤ 1 ∀j. 
Next we will show that one of the optimizers to the problem (24) is an integer solution. For a fixed vehicle i, we sort the vehicles in a descendent order according to αij and divide them into groups, called G-groups and denoted by Gg , such that αij = αik if j, k ∈ Gg , and αij < αik if j ∈ Gg and k ∈ Gg+1 . Associated with each group Gg , we define α̃g to be common α in the group. We further define H-groups Hg = {m : α̃g−1 < βim ≤ α̃g }. This is the set of vehicles that can sense the transmission of vehicle i when the transmit power is α̃g and cannot if the power is α̃g−1 . Furthermore, let g(j) denote the G-group vehicle j is in and h(j) the H-group vehicle j is in. The following lemma proves that one of the optimal solution to (24) is an integer solution. The proof is presented in the Appendix. 1. Note that the optimization problem can be further simplified for specific utility functions, e.g., Ui,j (µi ) = wi log(µi ). Based on the discussion and lemma above, we develop the following power control algorithm, which consists of congestion price update and power update. The congestion price update monitors the channel load and the power update adapts the transmission power pi based on the aggregated congestion price from all vehicles who can sense the transmissions from vehicle i. Power Control Algorithm 1) Power control at vehicle i : Vehicle i chooses the transmission power to be pi [t + 1] = α̃gi′ , where gi′ is defined in Lemma 3 with λ = λ[t]. 2) Congestion price update at vehicle j :  + X λj [t + 1] = λj [t] + µi − γ  . (25) (26) i:j∈Ii Lemma 3. Given λ, one of optimizers to optimization problem (24) for given vehicle i is the following integer solution   1, if g(j) ≤ gi′ 1, if h(j) ≤ gi′ Remark 4. Notice that the second step of the power control, xij = and yij = , congestion price update, is identical to that in the rate control. 0, otherwise. 0, otherwise. P In practice, the value of i:j∈Ii µi can be approximated by where measured/sensed channel load of individual vehicle2 . Further  more, as shown in Lemma 3, the congestion prices of vehicles X Uij (µi ) gi′ = max g : in the sensing range Ii are required in the power control (25)  j∈∪q:k≤q≤g Gq while only the prices of vehicles in the receiving range Ri are  needed in the rate control. Unlike the price acquisition in the  X receiving range, which can be piggybacked in the broadcasted λj µi > 0 ∀0 ≤ k ≤ g . −ǫ  BSM, the price information of vehicles in the sensing range j∈∪q:k≤q≤g Hq cannot be decoded. The approach of obtaining congestion  prices in the sensing range is not discussed in this paper since it is rather implementation-specific and out of the scope of this Algorithm 1 Sample algorithm for Lemma 3 paper. Input: gmax , λj . The next theorem shows the asymptotic optimality of the Output: gi′ P P proposed distributed power control algorithm. 1: Define fp = j∈Hp λj µi , p = j∈Gp Uij (µi ) − ǫ 1, 2, · · · , gmax . Theorem 5. Denote by p∗ the optimal solution to Power-OPT. 2: for all g ∈ {gmax , gmax − 1, · · · , 1} do There exists a constant M > 0, independent of ǫ, such that 3: k ← g and f lag ← 1. under the proposed power control algorithm, 4: while f lag = 1 and k ≥ 0 do Pg T −1 X 1 XX 5: Fg ← p=k fp ; lim inf Ip∗i ≥αij Uij (µi ) − ǫM. Ipi [t]≥αij Uij (µi ) ≥ 6: if Fg > 0 then T →∞ T t=0 i,j i,j 7: k ←k−1  8: else 9: f lag = 0 10: end if V. 
P ERFORMANCE E VALUATION USING NS 2 11: end while In this section, we evaluate the performance of the dis12: if f lag = 1 then tributed rate and power control algorithm developed in this 13: gi′ ← g; break; paper, and compare the performance with EMBARC [1]. We 14: end if used the ns2 platform to simulate the asynchronous IEEE 15: end for 802.11p media access control algorithm with the associated The optimization in Lemma 3 can be solved by low complexity algorithms. A sample algorithm is given in Algorithm 2 The channel load of DSRC is measured by carrier sensing technique which is widely implemented in CSMA network 6 Fig. 1: Deployment of 6-lane highway vehicles Number of packet delivered per second 7 5 4 3 2 1 0 0 1 EMBARC Joint rate&power control 0.8 Channel Load EMBARC Rate control Joint rate&power control 6 50 100 Distance (m) 150 200 Fig. 3: Number of successful received packets per second v.s. distance between transmitter and receiver 0.6 12 10 0.4 Rate 8 6 4 0.2 0 2 4 6 8 Time (s) 10 12 2 14 0 0 200 400 600 800 1000 1200 Location (m) 1400 1600 1800 2000 0 200 400 600 800 1000 1200 Location (m) 1400 1600 1800 2000 Fig. 2: Convergence of channel load lower layer functions. The 802.11MacExt package in ns2 is adopted. To simulate congestion at high densities, we constructed a 6-lane scenario where each lane is 4 meters wide and 2000 meters long. We use a wrap-around model of a network along the length of the road (see Figure 1). In each lane, 300 vehicles are deployed in a dense-sparse-dense-sparse fashion as a grid. Specifically, the first 120 vehicles are spaced with either 4 or 5 meters distance between any adjacent vehicles. Similarly, the next 30 vehicles are spaced with either 16 or 17 meters distance. The last 150 vehicles are deployed by following the same rule. A comprehensive list of simulation parameters is summarized in Table I. number of vehicles packet size carrier frequency noise floor carrier sense threshold contention window transmission rate carrier sensing period 1800 300 Byte 5.9 GHz -96 dBm -76 dBm 15 6 Mbps 0.25 s TABLE I: Simulation Parameters We now briefly review the EMBRAC algorithm, of which the transmission rate is a function of both channel load (using LIMERIC component) and vehicle dynamics (using the suspected tracking error component). In our ns2 simulations, we did not consider vehicle dynamics and assumed that the relative positions of the vehicles are static, which can be justified using a time-scale separation assumption under which the dynamics of the rate and power control algorithms are at Radius (m) 300 200 100 0 Fig. 4: Broadcast rates and transmission ranges of the vehicles in the first lane under the joint rate and power control algorithm a much faster time scale than the relative dynamics of the vehicles. Therefore, the suspected tracking error component of EMBARC was not simulated and EMBARC turns to be LIMERIC. According to [1], LIMERIC is a distributed and linear rate-control algorithm and the rate of vehicle i is evolving as follows, ri (t) = (1 − α)ri (t − 1) + β(rg − rc (t − 1)), (27) where rc is the total rate of all the K vehicles and rg is the target total rate. α and β are parameters that tunes stability, fairness and steady state convergence. In EMBARC, however, rc is defined to be the maximum channel load reported by all the 2-hop neighbors in order to achieve global fairness [1]. 
For the implementation of our rate and power control algorithm, the sum rate from the interfering vehicles in the congestion price update equations (11) and (26) can be replaced by the measured channel load at vehicle j. Therefore, each vehicle only needs to piggyback its congestion price in the safety message in broadcasting. Further, we chose the 7 following utility function for evaluation Uij (µi ) = max{vij , α} log µi . dij Awareness 0.7 (28) Joint rate&power control EMBARC 0.6 0.5 This specific choice of utility functions is motivated from the collision avoidance perspective, which we explain in Appendix. In simulations, the target channel load is set to be 0.6. 0.4 0.3 0.2 0.1 A. Convergence to Target Channel Load 0 The evolving equation (27) shows that in steady state LIMERIC converges to a value strictly smaller than rg [6]. In other words, the target channel load can not be achieved in steady state. However, our algorithm leads to full utilization of the target channel load. See Figure 2. Furthermore, our algorithm converges less than 4 seconds while in EMBARC oscillations still occur after 10 seconds. 0 50 100 150 200 Number of nodes 250 300 Fig. 5: Awareness distribution of joint rate and power control Coverage 0.7 0.6 Joint rate&power control EMBARC 0.5 B. Packet Reception Rate 0.4 We compare the number of successful received packets per second between EMBARC (with α = 0.1 and β = 0.001) and our joint congestion control algorithm, which motivates the need of congestion control algorithms in DSRC. To be fair with (rate-control only) EMBARC, we simulated our standalone rate control algorithm as well. Figure 3 shows that: 0.3 1) our rate control algorithm performs uniformly better than EMBARC because of full utilization of the target channel load. Specifically, our rate control guarantees the convergence of measured channel load of each vehicle to the target channel load while EMBARC is proved to have inevitable gap in its steady state (27); 2) the joint congestion control algorithm provides significant gain in short distance regime (safety-sensitive zone). This is because both rate and transmission range are adjusted according to the deployment topology, as shown in Figure 4. Specifically, the transmission range increases in the sparse segments and achieves maximum at the center vehicle, while the range is constantly short in the dense segments. Note that 80% vehicles have short range, e.g., 50m, which leads to the performance gain in the shortrange regime. C. Coverage and Awareness Figures 5 shows the distribution of the number of vehicles that a vehicle can receive messages from, called awareness. Figure 6 shows the distribution of the number of vehicles within a vehicle’s transmission range, called coverage. Under EMBARC, there are two peaks at 35 and 145 in both coverage and awareness, respectively associated with two different densities in the network. The joint algorithm only has one peak since the algorithm dynamically allocates the resources based on the network topology, achieving fairness in terms of both coverage and awareness. 0.2 0.1 0 0 50 100 150 200 Number of nodes 250 300 350 Fig. 6: Coverage distribution of joint rate and power control VI. C ONCLUSIONS In this paper, we proposed a utility maximization framework for joint rate and power control in DSRC, and formulated two optimization problems, named Rate-OPT and PowerOPT, where Rate-OPT deals with rate control with fixed transmit power and Power-OPT deals with power control with fixed rates. 
We developed both distributed rate control and power control algorithms and proved the algorithms are asymptotically optimal. Evaluations using ns2 showed that our algorithms outperform EMBARC at several relevant aspects including channel utilization, packet reception rate, coverage and awareness. R EFERENCES [1] G. Bansal, H. Lu, J. B. Kenney, and C. Poellabauer, “EMBARC: error model based adaptive rate control for vehicle-to-vehicle communications,” in Proc. ACM Int. Workshop on Vehicular Inter-Networking, Systems, and Applications (VANET), Taipei, Taiwan, 2013, pp. 41–50. [2] J. B. Kenney, “Dedicated short-range communications (DSRC) standards in the United States,” in Proc. IEEE, vol. 99, no. 7, pp. 1162 –1182, Jul. 2011. [3] IEEE 802.11p Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 6: Wireless Access in Vehicular Environments, 2007. [4] T. V. Nguyen, F. Baccelli, K. Zhu, S. Subramanian, and X. Wu, “A performance analysis of csma based broadcast protocol in vanets,” in in Proc. of IEEE Conference on Computer Communications, 2013, pp. 2805–2813. 8 [5] ETSI TS 102 687: Decentralized Congestion Control Mechanisms for Intelligent Transport Systems operating in the 5 GHz range; Access layer part, 2011. [6] G. Bansal, J. Kenney, and C. Rohrs, “Limeric: A linear adaptive message rate algorithm for dsrc congestion control,” IEEE Trans. Veh. Technol., vol. 62, no. 9, pp. 4182–4197, 2013. [7] C.-L. Huang, Y. Fallah, R. Sengupta, and H. Krishnan, “Intervehicle transmission rate control for cooperative active safety system,” IEEE Trans. on Intelligent Transportation Systems, vol. 12, no. 3, pp. 645 –658, Sep. 2011. [8] N. Nasiriani, Y. P. Fallah, and H. Krishnan, “Stability analysis of congestion control schemes in vehicular ad-hoc networks,” in IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, 2013, pp. 358–363. [9] T. Tielert, D. Jiang, H. Hartenstein, and L. Delgrossi, “Joint power/rate congestion control optimizing packet reception in vehicle safety communications,” in Proc. ACM Int. Workshop on Vehicular Inter-Networking, Systems, and Applications (VANET), Taipei, Taiwan, 2013, pp. 51–60. [10] J. Jose, C. Li, X. Wu, L. Ying, and K. Zhu, “Distributed rate and power control in dsrc,” in Proc. of IEEE International Symposium on Information Theory (ISIT), HongKong, China, 2015, pp. 2822–2826. [11] R. Srikant and L. Ying, Communication Networks: An Optimization, Control and Stochastic Networks Perspective. Cambridge University Press, 2014. [12] T. Tielert, D. Jiang, Q. Chen, L. Delgrossi, and H. Hartenstein, “Design methodology and evaluation of rate adaptation based congestion control for vehicle safety communications,” in Proc. IEEE Vehicular Networking Conf. (VNC), Amsterdam, The Netherlands, 2011, pp. 116–123. [13] C.-L. Huang, Y. P. Fallah, R. Sengupta, and H. Krishnan, “Intervehicle transmission rate control for cooperative active safety system,” IEEE Trans. on Intelligent Transportation Systems, vol. 12, no. 3, pp. 645– 658, 2011. [14] L. Zhang and S. Valaee, “Safety context-aware congestion control for vehicular broadcast networks,” in Signal Processing Advances in Wireless Communications (SPAWC), 2014, pp. 399–403. [15] Egea-Lopez, Esteban, and P. Pavon-Mariño, “Distributed and fair beaconing congestion control schemes for vehicular networks,” arXiv preprint arXiv:1407.6965, 2014. [16] B. Aygün, M. Boban, and A. M. 
Wyglinski, “Ecpr: Environment- and context-aware combined power and rate distributed congestion control for vehicular communications,” arXiv preprint arXiv:1502.00054, 2015. [17] Y. P. Fallah, C.-L. Huang, R. Sengupta, and H. Krishnan, “Congestion control based on channel occupancy in vehicular broadcast networks,” in Proc. IEEE Vehicular Technology Conf., Ottawa, Canada, 2010, pp. 1–5. [18] C.-L. Huang, Y. P. Fallah, R. Sengupta, and H. Krishnan, “Adaptive intervehicle communication control for cooperative safety systems,” IEEE Network, vol. 24, no. 1, pp. 6–13, 2010. [19] J. Mo and J. Walrand, “Fair end-to-end window-based congestion control,” IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 556– 567, 2000. [20] R. Srikant, The mathematics of Internet congestion control. Springer Science & Business Media, 2012. A PPENDIX A. Proof of Lemma 3 Recall that the optimization problem (24) is for fixed i and λ. Define X xij Uij (µi ) − ǫλj µi yij . Fi (x, y) = j Let (x̂, ŷ) denote an optimal solution. We first have the following two observations: • According to constraint (20), x̂ij = x̂ik if dij = dik , which implies that x̂ij = x̂ik • j, k ∈ Gg . if (29) To maximize the objective (24), y should be chosen as small as possible. So ŷik = max j:βik ≤αij x̂ij . (30) Since x̂ij ≥ x̂ik when αij < αik , (30) is equivalent to ŷik = x̂ij ′ where j ′ = arg min j:βik ≤αij αij , which further implies that ŷik = x̂ij if k ∈ Hg and j ∈ Gg . (31) In other words, x̂ij and ŷik are equal if j ∈ Gg and k ∈ Hg for the same g. This is easy to understand because the define of H-group Hg is the set of vehicles that can sense the transmission of vehicle i when the vehicles in G-group Gg can receive messages from vehicle i. Now suppose that x̂ij is not an integer solution. Initially, ˜ = x̂. Identify vehicle j ′ is the vehicle has the maximum let x̂ αij among all vehicles such that 0 < x̂ij < 1, i.e., j ′ = arg max j:0<x̂ij <1 αij . According to observations (29) and (31), we have x̂ij = ŷik = x̂ij ′ for j ∈ Gg (j ′ ) and k ∈ Hg(j′) . Therefore, X X x̂ij Uij (µi ) − ǫ λk µi ŷik j∈Gg(j′ )  If =  k∈Hg(j′ ) X Uij (µi ) − ǫ j∈Gg(j′ ) X Uij (µi ) − j∈Gg(j′ ) X k∈Hg(j′ ) X  λk µi  x̂ij ′ . λk µi ≤ 0, k∈Hg(j′ ) then we define x̂˜ij = ŷ˜ik = 0 for j ∈ Gg (j ′ ) and k ∈ Hg(j′) . ˜ij = ŷ˜ik = x̂ib for j ∈ Gg (j ′ ) and Otherwise, we define x̂ k ∈ Hg(j′) where b ∈ Gg(j ′ )−1 . It is easy to see that the following inequality holds: ˜ ˜ ŷ). Fi (x̂, ŷ) ≤ Fi (x̂, Now for the second scenario discussed above, we have X X ˜ij − ǫ Uij (µi )x̂ λk µi ŷ˜ik j∈Gg(j′ ) ∪Gg(j′ )−1 k∈Hg(j′ ) ∪Hg(j′ )−1 9   P 2 Note that ( i yij [t]µi − γ) ≤ (nµmax + γmax )2 , where n is  = Uij (µi ) − ǫ λk µi x̂ibthe . maximum number of vehicles whose transmissions can be j∈Gg(j′ ) ∪Gg(j′ )−1 k∈Hg(j′ ) ∪Hg(j′ )−1 sensed by vehicle j, P since 0 ≤ yij [t] ≤ 1 and 0 ≤ γ ≤ γmax . 2 ′ Defining M = ˜ ˜ i (γmax + nµmax ) and denoting by Similarly, we define x̂ij = ŷik = 0 for j ∈ Gg(j ′ ) ∪ Gg(j ′ )−1 ∗ ∗ (x̃ , ỹ ) the optimal solution to problem (18), we have and k ∈ H ∪H if X  X g(j′) g(j′)−1 X X Uij (µi ) − ǫ j∈Gg(j′ ) ∪Gg(j′ )−1 ∆V [t] = λk µi ≤ 0; k∈Hg(j′ ) ∪Hg(j′ )−1 ≤ ˜ij = ŷ˜ik = x̂ic for j ∈ Gg(j ′ ) ∪ Gg(j ′ )−1 and otherwise define x̂ and k ∈ Hg(j′) ∪ Hg(j′)−1 , where c ∈ Gg(j′)−2 . Similarly, we also have ˜ ˜ ŷ). Fi (x̂, ŷ) ≤ Fi (x̂, = j∈∪g:g≤g′ Gg  −ǫ Uij (µi ) λj µi > 0 ∀k < g   B. Proof of Theorem 5 P Defining V [t] = j λ2j [t], we have . 
≤ X λj [t] + = λj [t] + = X 2λj [t] + X j X !2 yij [t]µi − γ i j = yij [t]µi − γ i j X I(pi [t]≥βij µi − γ i j X X 2λj [t] X i yij [t]µi − γ ! ! + − − X λ2j [t] j X Note that λ2j [t] X yij [t]µi − γ ! X yij [t]µi − γ !2 i PT −1 ≤ ∆V [t] = V [T ] − V [0], so P ′ ∗ (V [T ] − V [0]) − ǫM i,j Uij (µi )x̃ij 2 + P P T −1 1 t=0 i,j Uij (µi )xij [t], T t=0 ǫ 2T j i ! which implies X X ǫ Uij (µi )xij [t], Uij (µi )x̃∗ij ≤ (∆V [t] − M ′ ) + 2 i,j i,j V [t + 1] − V [t] !2 . (34) (35) (36) According to Lemma 3, (33) + (34) ≤ 0. Further, since (x̃∗ , ỹ ∗ ) is a feasible solution to problem (18), we have (36) ≤ 0. Hence, we conclude 2X 2X Uij (µi )x̃∗ij + Uij (µi )xij [t], ∆V [t] ≤ M ′ − ǫ i,j ǫ i,j ∆V [t] = −γ i ! i j  j∈∪q:k≤q≤g Hg λj [t] ∗ ỹij µi X X 2X ∗ ỹij µi − γ λi [t] Uij (µi )x̃∗ij − 2 ǫ i,j i j 2X 2X − Uij (µi )x̃∗ij + Uij (µi )xij [t] ǫ i,j ǫ i,j ! X X ∗ ỹij µi − γ . λi [t] +2 + j∈∪q:k≤q≤g Gg X X (33) where X X ∆V [t] ≤ M ′ (32) ! X X 2X yij [t]µi − γ λi [t] Uij (µi )xij [t] + 2 − ǫ i,j i j From the discussion above, the integer optimizer is   1, if g(j) ≤ g ′ 1, if h(j) ≤ g ′ xij = and yij = , 0, otherwise. 0, otherwise. g: i By rearranging the terms above, we obtain ˜ ŷ). ˜ Fi (x̂, ŷ) ≤ Fi (x̂, g ′ = min yij [t]µi − γ i j j and x = 0 otherwise. Therefore, from any optimizer (x̂, ŷ) we ˜ ŷ) ˜ such that can construct an integer solution (x̂,   λj [t] ! 2X 2X Uij (µi )x̃∗ij − Uij (µi )x̃∗ij ǫ i,j ǫ i,j 2X 2X Uij (µi )xij [t] − Uij (µi )xij [t] + ǫ i,j ǫ i,j ! X X ∗ ỹij µi − γ λj [t] +2 −2 j∈∪g:g≤g′ Hg i X + j∈∪g:g≤g′ Hg It is easy to see that we should choose x = 1 if X X Uij (µi ) − ǫ λj µi > 0. M′ + 2 j X j Repeating the same argument, we can conclude that there ˜ij = x if exists g ′ such that x̂˜ij = 0 if g(j) > g ′ and x̂ ′ g(j) ≤ g . Therefore,   X X ˜ ŷ) ˜ = Fi (x̂, Uij (µi ) − ǫ λj µi  x. j∈∪g:g≤g′ Gg V [t + 1] − V [t] ! X X ′ yij [t]µi − γ λj [t] M +2 which implies . − T −1 1 X X ηij ǫV [0] ǫM ′ X Uij (µi )x̃∗ij ≤ xij [t]. − + 2T 2 T t=0 i,j dij i,j 10 The theorem holds by choosing M = M ′ /2 and because X X Uij (µi )x∗ij . Uij (µi )x̃∗ij ≥ i,j i,j C. Selection of Utility Function (28) Note that given distance dij and relative speed vij , vehicles dij i and j on a line would collide after vij units of time if they d ij do not change their speeds. We therefore call vij reaction time of pair (i, j). To prevent the collision, vehicles i and j need to communicate at least once during this reaction time. Assume each message is reliably received with probability p, then the probability that vehicle j receives at least one message from vehicle i during the reaction time is dij 1 − (1 − p) vij µi . Imposing a lower bound pmin on this probability is equivalent to requiring that dij log(1 − pmin ) µi ≥ . vij log(1 − p) (37) These pairwise safety constraints may not always be feasible depending on the network density and the geographical distribution of the vehicles. Fig. 7: Vehicle 1 and vehicle 2 are both approaching vehicle 3 with different speeds Therefore, we consider a different requirement also from the collision avoidance perspective. Consider a scenario shown in Figure 7, where both vehicle 1 and vehicle 2 are approaching vehicle 3 with different speeds. A fair resource allocation should equalize the reaction time to avoid collisions, i.e., to have d23 d13 µ1 = µ2 . (38) v13 v23 In a general scenario, this objective could also be difficult to achieve. 
We note that if we assume all vehicles share a single-bottleneck link, e.g., in the scenario in Figure 7 where all vehicles can hear each other, then solving the following optimization problem

\max_{\mu}\ \frac{v_{13}}{d_{13}} \log \mu_1 + \frac{v_{23}}{d_{23}} \log \mu_2    (39)

results in the solution

\frac{v_{13}}{d_{13}\,\mu_1} = \frac{v_{23}}{d_{23}\,\mu_2},    (40)

which is equivalent to (38). This motivated the objective function in (28), where each link (i, j) is associated with a weighted log-utility \frac{\max\{v_{ij}, \alpha\}}{d_{ij}} \log \mu_i. Since v_{ij} may be negative, we define the weight to be \max\{v_{ij}, \alpha\} for some \alpha > 0.
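As a concrete check of this appendix, the following sketch evaluates the minimum-rate bound (37) and verifies that the weighted proportional-fair allocation of (39) satisfies the optimality condition (40) and equalises the reaction-time products of (38). All numbers (distances, closing speeds, delivery probability, and the shared rate budget C) are made-up illustrative values and do not come from the paper; the closed-form split assumes the single-bottleneck constraint mu1 + mu2 <= C.

```python
import math

# Hypothetical values for the two-vehicle scenario of Figure 7.
d13, v13 = 80.0, 8.0    # distance (m) and closing speed (m/s) of pair (1, 3)
d23, v23 = 60.0, 12.0   # distance and closing speed of pair (2, 3)
p, p_min = 0.7, 0.99    # per-message delivery probability and target reliability

def min_rate(d, v, p, p_min):
    # Eq. (37): smallest message rate so that at least one message is received
    # within the reaction time d / v with probability at least p_min.
    return (v / d) * math.log(1 - p_min) / math.log(1 - p)

print(min_rate(d13, v13, p, p_min), min_rate(d23, v23, p, p_min))

# Eqs. (38)-(40): with an assumed shared bottleneck budget C, maximising
# (v13/d13) log mu1 + (v23/d23) log mu2 subject to mu1 + mu2 <= C gives the
# proportional-fair split mu_i proportional to v_i3 / d_i3.
C = 20.0                          # assumed total message budget (msgs/s)
w1, w2 = v13 / d13, v23 / d23
mu1, mu2 = C * w1 / (w1 + w2), C * w2 / (w1 + w2)

# Optimality condition (40) holds, and the reaction-time products of (38)
# are equalised: (d13/v13)*mu1 == (d23/v23)*mu2.
assert abs(w1 / mu1 - w2 / mu2) < 1e-9
print(mu1 * d13 / v13, mu2 * d23 / v23)
```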
An evidential Markov decision making model

arXiv:1705.06578v1 [] 10 May 2017

Zichang He, Wen Jiang*

School of Electronics and Information, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China

Abstract

The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model cannot produce the disjunction effect, because it assumes that a decision has to be certain at any one time. In the EM model, by contrast, the state is allowed to be uncertain before the final decision is made. An extra uncertainty degree parameter, defined by a belief entropy named Deng entropy, is used to assign the basic probability assignment of the uncertain state, which is the key to predicting the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of the EM model. The disjunction effect can be well predicted, and fewer free parameters are required than in existing models.

* Corresponding author: Wen Jiang, School of Electronics and Information, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: [email protected], [email protected]

Keywords: Evidential Markov model; Markov decision making model; Dempster-Shafer evidence theory; the sure thing principle; the law of total probability; disjunction effect

1. Introduction

The sure thing principle introduced by Jim Savage [1] is fundamental in economics and probability theory. It states that if one prefers action A over B under state of the world X, and also prefers A under the complementary state ¬X, then one should still prefer action A over B when the state is unknown. The law of total probability is a fundamental rule of Bayesian probability relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events. However, many experiments and studies have shown that the sure thing principle and the law of total probability can be violated due to the disjunction effect [2–4]. The disjunction fallacy is an empirical finding in which the proportion taking the target gamble under the unknown condition falls below both of the proportions taking the target gamble under each of the known conditions. The same person takes the target gamble under both known conditions, but then rejects the target gamble under the unknown condition [5].

To explain the disjunction fallacy, many studies have been proposed. The original explanation was a psychological idea based on the failure of consequential reasoning under the unknown condition [5]. A Markov decision making model was proposed by Townsend et al. (2000) [3]. A Markov process can be used to model a random system that changes states according to a transition rule that only depends on the current state.
Markov models have been used in many applications [6–9], especially for prediction [10, 11], such as rainfall prediction [12, 13], economic prediction [14, 15], and so on. However, the Markov model fails to predict the disjunction effect, as will be shown later. More recently, the theory of quantum probability has been introduced into cognition and decision making. Quantum probability is an effective approach to psychology and behavioural science [16–19]. It has been widely applied in the fields of cognition and decision making to explain phenomena that classical theory cannot, such as order effects [20–22], the disjunction effect [4], the interference effect of categorization [23], the prisoner's dilemma [24], conceptual combinations [25, 26], quantum game theory [27–29], and so on. To explain the disjunction fallacy, many quantum models have been proposed, such as the quantum dynamical (QD) model [30, 31], quantum-like models [32–34], quantum prospect decision theory [35–38], and quantum-like Bayesian networks [39]. In a quantum framework, the decision in the human brain is deemed a superposition of several decisions before the final one is made. Although the quantum model works for explaining the disjunction fallacy, its major problem is the additional introduction of quantum parameters.

Decision making and optimization under uncertain environments is common in reality and is heavily studied [40–42]. Uncertain information modelling and processing is still an open issue [43–45]. Some methods have been proposed to handle uncertainty, such as probability theory [46], fuzzy set theory [47], Dempster-Shafer evidence theory [48, 49], rough sets [50], and D-numbers [51]. Dempster-Shafer (D-S) evidence theory [48, 49] is a powerful tool for handling uncertainty. It has been widely used in many fields, such as risk analysis [52–54], controller design [55, 56], pattern recognition [57–59], fault diagnosis [60–62], multiple attribute decision making [63, 64], and so on. D-S theory is also widely combined with Markov models [65–68].

In this paper, an evidential Markov (EM) decision making model based on D-S evidence theory and Markov modelling is proposed to model the decision making process. In the paradigm used to study the disjunction fallacy, a decision making process generally consists of beliefs and actions. The EM model assumes that a person's decision can be uncertain before the final one is made, whereas the classical Markov model assumes that the decision has to be certain at any one time. In an evidential framework, the action states are extended by introducing an uncertain state to represent the hesitance of a decision maker. An extra uncertainty degree is determined by a belief entropy, named Deng entropy, which is the key to predicting the disjunction effect in our model. The EM model is applied to a categorization decision-making experiment. The model results are discussed and compared, which shows the effectiveness and validity of our model.

The rest of the paper is organized as follows. In Section 2, the preliminaries of the basic theories employed are briefly introduced. The classical Markov decision making model is introduced in Section 3. Then a categorization decision-making experiment is described in Section 4. Our EM model is proposed and applied to explain the experimental results in Section 5. The model results are discussed and compared in Section 6. Finally, Section 7 concludes the paper.

2. Preliminaries
2.1. Dempster-Shafer evidence theory

In D-S theory, a finite set of N mutually exclusive and exhaustive elements, called the frame of discernment (FOD), is symbolized by

\Theta = \{H_1, H_2, \ldots, H_N\}.

Let P(\Theta) denote the power set of \Theta, composed of 2^N elements:

P(\Theta) = \{\emptyset, \{H_1\}, \{H_2\}, \ldots, \{H_N\}, \{H_1 \cup H_2\}, \{H_1 \cup H_3\}, \ldots, \Theta\}.

A basic probability assignment (BPA) is a mapping m : P(\Theta) \to [0, 1] satisfying [48, 49]

\sum_{A \in P(\Theta)} m(A) = 1, \quad m(\emptyset) = 0.    (1)

The mass function m represents the degree of support for the focal element A. The elements of P(\Theta) that have a non-zero mass are called focal elements. A body of evidence (BOE) is the set of all focal elements, defined as

(\Re, m) = \{[A, m(A)] : A \in P(\Theta), m(A) > 0\}.

A mass function corresponds to a belief (Bel) function and a plausibility (Pl) function, respectively. Given m : P(\Theta) \to [0, 1], the Bel function represents the total belief committed to A, defined as

Bel(A) = \sum_{B \subseteq A} m(B), \quad \forall A \subseteq P(\Theta).    (2)

The Pl function represents the degree of belief that does not deny A, defined as

Pl(A) = \sum_{B \cap A \neq \emptyset} m(B), \quad \forall A \subseteq P(\Theta).    (3)

It should be noted that when all the focal sets of m are singletons, m is said to be Bayesian; Bel and Pl then degenerate into the same probability measure.

In the following, a ball-picking game is used to show the ability of D-S theory to handle uncertainty [42]. There are two boxes filled with some balls, as shown in Figure 1. The left box contains red balls and the right box contains blue balls. The number of balls in each box is unknown. Now a ball is picked randomly from the two boxes. The probability of picking from the left box is known to be P1 = 0.4, while the probability of picking from the right box is known to be P2 = 0.6. Based on probability theory, it is easy to obtain that the probability of picking a red ball is 0.4 and that of picking a blue ball is 0.6.

Figure 1: A ball-picking game that can be handled by probability theory

Now the situation changes, as shown in Figure 2. The left box contains red balls, while the right box contains both red and blue balls. The exact number of balls in each box and the ratio of red balls to blue balls are completely unknown. The probabilities of selecting from the two boxes stay the same, P1 = 0.4 and P2 = 0.6. The question is: what is the probability that a red ball is picked? Due to the lack of information, the question cannot be addressed by probability theory. However, D-S evidence theory can handle it effectively. We obtain two mass functions, m(R) = 0.4 and m(R, B) = 0.6. The uncertainty is then well handled within the framework of D-S theory.

Figure 2: A ball-picking game where probability theory fails but D-S evidence theory applies

2.2. Pignistic probability transformation

The term "pignistic", proposed by Smets, originates from the Latin word pignus, meaning a bet. Pignistic probability transformation (PPT) is widely used in decision making. The principle of insufficient reason is used to assign the probability of a multiple-element set to singleton sets. In other words, a belief interval is distributed into a crisp probability determined as [69]:

Bet(A) = \sum_{A \subseteq B} \frac{m(B)}{|B|},    (4)

where m is a mass function, B is a focal element of m, and |B| is the cardinality of B, which denotes the number of elements in set B.
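As a concrete illustration of Eqs. (1)–(4), the following sketch (not part of the original paper; the function names are chosen for this example) computes belief, plausibility, and the pignistic transform for the second ball-picking game above, where m(R) = 0.4 and m({R, B}) = 0.6.

```python
# Frame of discernment and the mass function from the second ball-picking game.
theta = frozenset({"R", "B"})
m = {frozenset({"R"}): 0.4, frozenset({"R", "B"}): 0.6}

def bel(A):
    # Eq. (2): total mass of focal elements contained in A.
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    # Eq. (3): total mass of focal elements intersecting A.
    return sum(v for B, v in m.items() if B & A)

def bet(x):
    # Eq. (4): pignistic probability of the singleton {x}.
    return sum(v / len(B) for B, v in m.items() if x in B)

red, blue = frozenset({"R"}), frozenset({"B"})
print(bel(red), pl(red))     # 0.4 1.0 -> belief interval [0.4, 1.0] for "red"
print(bel(blue), pl(blue))   # 0.0 0.6
print(bet("R"), bet("B"))    # 0.7 0.3 -> the bet one would place on each colour
```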
2.3. Deng entropy

Deng entropy is proposed to measure uncertainty and is defined as follows [42]:

E_d = -\sum_{A \in X} m(A) \log_2 \frac{m(A)}{2^{|A|} - 1},    (5)

where m is a mass function defined on the FOD X, A is a focal element of m, and |A| is the cardinality of A. In particular, for a BOE (\Re, m) in which all focal elements are singletons, Deng entropy degenerates into Shannon entropy:

E_d = -\sum_{A \in X} m(A) \log_2 \frac{m(A)}{2^1 - 1} = -\sum_{A \in X} m(A) \log_2 m(A).    (6)

3. Classical Markov decision-making model

Townsend et al. (2000) [3] originally proposed a Markov model for studying the disjunction effect. For simplicity, we use a two-dimensional Markov model to illustrate the decision making process.

3.1. State representation

The model assumes that the perceptual system can be in one of two states at each moment in time: a plus state, denoted |+⟩, or a minus state, denoted |−⟩, representing the other orientation. A sample path of the Markov process starts in some initial state, either |S(0)⟩ = |+⟩ or |S(0)⟩ = |−⟩, and jumps back and forth across time between a clear plus state and a clear minus state (Figure 3). In general, however, we do not know which state it is in for any given sample path, and so the uncertainty is represented by assigning an initial probability distribution across these two possibilities, denoted φ_+(0) for plus and φ_-(0) for minus, with φ_+(0) + φ_-(0) = 1.

Figure 3: Two-state transition diagram

3.2. State transition

The initial state is defined by a probability distribution represented by a 2 × 1 matrix

φ(0) = (φ_+(0), φ_-(0))^T.

If the system is known to start from the plus state, then φ_+(0) = 1; if the system is known to start from the minus state, then φ_-(0) = 1; and if the state is unknown, then φ_+(0) + φ_-(0) = 1. After some period of time t_1, the perceptual system can transit to a new state. Hence, a probability transition matrix is defined as

T(t_1, 0) = \begin{pmatrix} T_{+,+}(t_1, 0) & T_{+,-}(t_1, 0) \\ T_{-,+}(t_1, 0) & T_{-,-}(t_1, 0) \end{pmatrix},

where, for example, T_{+,-}(t_1, 0) represents the probability of transferring from the minus state at time zero to the plus state at time t_1. It is called a stochastic matrix because all its elements are nonnegative and each column sums to unity. The matrix product of the transition matrix and the initial probability distribution at time zero produces the updated distribution across the two states at time t_1:

φ(t_1) = T(t_1, 0) · φ(0).

Given the state at time t_1, we can update again using the transition matrix T(t_2, t_1) that describes the probabilities of transitions for the period of time from t_1 to t_2:

φ(t_2) = T(t_2, t_1) · φ(t_1) = T(t_2, t_1) · T(t_1, 0) · φ(0).

It is reasonable to assume that the transition matrix remains stationary for many applications. Hence, the transition matrix can simply be written as T(t), in which t is the duration of the time period. In this case, the probability distribution at t_2 equals

φ(t_2) = T(t_2) · φ(0) = T(t_2 - t_1) · T(t_1) · φ(0).    (7)

This means that the transition matrix obeys the semigroup property of dynamic systems,

T(t + u) = T(t) · T(u) = T(u) · T(t).

The time evolution of the transition matrix obeys the following differential equation, called the Kolmogorov forward equation:

dT(t)/dt = K · T(t).    (8)

The time evolution of the probability distribution over states likewise obeys

dφ(t)/dt = K · φ(t).    (9)

The 2 × 2 matrix K is called the intensity matrix, which is used to construct the transition matrix as a function of processing time duration.
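A minimal numerical sketch of Sections 3.1–3.2, and of the response-probability and total-probability calculations discussed next, is given below. The intensity rates are arbitrary illustrative values that do not come from the paper, and scipy's matrix exponential is assumed to be available.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 2x2 intensity matrix K (columns sum to zero, off-diagonals >= 0).
a, b = 1.5, 0.8                       # assumed transition rates, not from the paper
K = np.array([[-a,  b],
              [ a, -b]])

def T(t):
    # Eq. (10): solution of the Kolmogorov forward equation dT/dt = K T.
    return expm(t * K)

t1, t2 = 0.7, 1.9
# Semigroup property of Eq. (7): T(t2) = T(t2 - t1) T(t1).
assert np.allclose(T(t2), T(t2 - t1) @ T(t1))

# Response probability of "plus" (Eq. (11)) for a given initial distribution.
L_row = np.array([1.0, 1.0])
M_plus = np.diag([1.0, 0.0])
def p_plus(t, phi0):
    return L_row @ M_plus @ T(t) @ phi0

phi_unknown = np.array([0.5, 0.5])    # unknown initial state
# Law of total probability (Section 3.4): the unknown-condition response
# probability is exactly the mixture of the two known-condition probabilities.
lhs = p_plus(t2, phi_unknown)
rhs = 0.5 * p_plus(t2, np.array([1.0, 0.0])) + 0.5 * p_plus(t2, np.array([0.0, 1.0]))
assert np.isclose(lhs, rhs)
print(lhs)
```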
The matrix must satisfy the following constraints to guarantee that the solution is a transition matrix: the off-diagonal elements must be nonnegative, and each column must sum to zero. The solution of the Kolmogorov equation is a matrix exponential function:

T(t) = e^{tK}, \quad φ(t) = e^{tK} · φ(0).    (10)

3.3. Response probabilities

A response refers to a measurement that the perceiver makes or reports. To derive response probabilities at various time points from the Markov model, measurement matrices are defined as

M_+ = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad and \quad M_- = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},

where M_+ is applied to measure the plus state and M_- is applied to measure the minus state. Let R(t) = + denote the response that the system is in the plus state at time t. In the following, it will be convenient to use the 1 × 2 matrix L = [1, 1] to perform summation across states. Using these matrices, the probability of observing a "plus" at time t equals

p(R(t) = +) = L · M_+ · T(t) · φ(0).    (11)

3.4. The law of total probability

Assume that the initial state is known to be "plus" or "minus"; the probability of R(t) = + then equals

p(R(t) = +|+) = [1, 1] · \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} · \begin{pmatrix} T_{+,+}(t) & T_{+,-}(t) \\ T_{-,+}(t) & T_{-,-}(t) \end{pmatrix} · \begin{pmatrix} 1 \\ 0 \end{pmatrix} = T_{+,+}(t),

p(R(t) = +|−) = [1, 1] · \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} · \begin{pmatrix} T_{+,+}(t) & T_{+,-}(t) \\ T_{-,+}(t) & T_{-,-}(t) \end{pmatrix} · \begin{pmatrix} 0 \\ 1 \end{pmatrix} = T_{+,-}(t),

respectively. Assume instead that the initial state is unknown; the probability of R(t) = + then equals

p(R(t) = +|U) = [1, 1] · \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} · \begin{pmatrix} T_{+,+}(t) & T_{+,-}(t) \\ T_{-,+}(t) & T_{-,-}(t) \end{pmatrix} · \begin{pmatrix} φ_+(0) \\ φ_-(0) \end{pmatrix} = T_{+,+}(t) φ_+(0) + T_{+,-}(t) φ_-(0).

The last line expresses that the probability of being in the plus state under the unknown condition equals the probability of reaching it along two different paths (see Figure ??), which means that the law of total probability is not violated. Hence, the classical Markov model cannot explain or predict the disjunction effect. To address this, an evidential Markov model is proposed. There are several paradigms for studying the disjunction fallacy; in the following, a categorization decision-making experiment is briefly introduced.

4. Categorization decision-making experiment

4.1. Experiment method

Townsend et al. [3] proposed a categorization decision-making experiment to study the interactions between categorization and decision making. It turned out that categorization can produce a disjunction effect on decision making. In the experiment, pictures of faces varying along face width and lip thickness are shown to participants. Generally, the faces fall into two different kinds: on average, a narrow (N) face with a narrow width and thick lips, and on average a wide (W) face with a wide width and thin lips (see Figure 4 for an example). The participants were informed that an N face had a 0.60 probability of coming from the "bad" guy population, while a W face had a 0.60 probability of coming from the "good" guy population. The participants were usually (probability 0.70) rewarded for attacking "bad guys", and they were usually (probability 0.70) rewarded for withdrawing from "good guys". The primary manipulation was produced by using the following two test conditions, presented across a series of trials, to each participant.

Figure 4: Example faces used in a categorization decision-making experiment

In the categorization-decision making (C-D) condition, participants were asked to categorize a face as belonging to either a "good" (G) guy or "bad" (B) guy group. Following the categorization, they were asked to make a decision whether to "attack" (A) or to "withdraw" (W).
In the decision-making alone (D-alone) condition, participants were only asked to make an action decision. The experiment included a total of 26 participants; each participant provided 51 observations for the C-D condition, for a total of 26 × 51 = 1326 observations, while each person produced 17 observations for the D-alone condition, for a total of 17 × 26 = 442 observations.

4.2. Experiment results

The experiment results are shown in Table 1.

Table 1: The results of Townsend et al. [3]
Type face   P(G)   P(A|G)   P(B)   P(A|B)   PT     P(A)
Wide        0.84   0.35     0.16   0.52     0.37   0.39
Narrow      0.17   0.41     0.83   0.63     0.59   0.69

The column labeled P(G) shows the probability of categorizing the face as a "good guy"; the column labeled P(A|G) shows the probability of attacking given that the face is categorized as a "good guy"; the column labeled P(B) shows the probability of categorizing the face as a "bad guy"; the column labeled P(A|B) shows the probability of attacking given that the face is categorized as a "bad guy"; and the column labeled PT shows the total probability of attacking in the C-D condition, computed by

P(A) = P(G) · P(A|G) + P(B) · P(A|B),    (12)

while the column labeled P(A) represents the probability of attacking in the D-alone condition. Obviously, PT and P(A) are unequal for both types of face. According to Bayesian probability theory, this violates the law of total probability, and the difference value is called the disjunction effect. In this experiment, the disjunction effect for wide-type faces is so weak that it can be ignored and explained as statistical error; however, the disjunction effect for the narrow type is too prominent to ignore. The classical paradigm has been discussed in many later works. Studies of the categorization decision-making experiment and their results are shown below in Table 2.

Table 2: Results of categorization decision-making experiments
Literature                                   Type   P(G)   P(A|G)   P(B)   P(A|B)   PT     P(A)
Townsend et al. (2000) [3]                   W      0.84   0.35     0.16   0.52     0.37   0.39
                                             N      0.17   0.41     0.83   0.63     0.59   0.69
Busemeyer et al. (2009) [31]                 W      0.80   0.37     0.20   0.53     0.40   0.39
                                             N      0.20   0.45     0.80   0.64     0.60   0.69
Wang & Busemeyer (2016) Experiment 1 [23]    W      0.78   0.39     0.22   0.52     0.42   0.42
                                             N      0.21   0.41     0.79   0.58     0.54   0.59
Wang & Busemeyer (2016) Experiment 2 [23]    W      0.78   0.33     0.22   0.53     0.37   0.37
                                             N      0.24   0.37     0.76   0.61     0.55   0.60
Wang & Busemeyer (2016) Experiment 3 [23]    W      0.77   0.34     0.23   0.58     0.40   0.39
                                             N      0.24   0.33     0.76   0.66     0.58   0.62
Average                                      W      0.79   0.36     0.21   0.54     0.39   0.39
                                             N      0.21   0.39     0.79   0.62     0.57   0.64

Notes: In Busemeyer et al. (2009) [31], the classical experiment was replicated. In Wang and Busemeyer (2016) [23], experiment 1 used a larger data set to replicate the classical experiment; experiment 2 introduced a new X-D trial versus the C-D trial, and only the results of the C-D trial are used here; in experiment 3, the reward rate for attacking bad people was a bit lower than in experiments 1 and 2.

5. Evidential Markov model

The EM model based on D-S evidence theory and Markov modelling is proposed to model the human decision making process. In the following, we use the categorization decision-making experiment to illustrate our model. The disjunction fallacy can be explained and the disjunction effect for narrow-type faces can be well predicted by our model. The flowchart of the EM model is shown in Figure ??.

Step 1: Frame of discernment determination

In the C-D condition, the outcome will be either to attack or to withdraw, given that the face is categorized as a good guy or a bad guy. Thus the FOD ΘCD is
determined as

Θ_CD = {AG, WG, AB, WB},

where, for example, AG represents the event that the participant decides to attack given that the face is categorized as a good guy. In this case, as the categorization is uncertain to the participant, it is meaningless to extend the belief states. However, in an evidential framework, the action states should be extended. The states AWG (denoted UG) and AWB (denoted UB) are therefore introduced to fill the power set of Θ_CD. The state UG represents that the participant is uncertain whether to attack or withdraw given that the face is categorized as good; the state UB represents that the participant is uncertain whether to attack or withdraw given that the face is categorized as bad. In brief, the hesitance of a participant is captured by UG and UB.

In the D-alone condition, the outcome of the game will be either to attack or to withdraw, without a precise categorization. Thus the FOD Θ_D is determined as

Θ_D = {AU, WU},

where, for example, AU represents that a participant decides to attack given that the categorization is unknown. In this case, the action states should be extended in an evidential framework. But since the belief state is unknown, it is actually an assembly of categorizing the face as good and as bad. The state AWU (denoted UU) is then introduced to fill the power set of Θ_D. The state UU represents that a participant is uncertain whether to attack or withdraw given that the categorization is unknown, which again reflects the hesitance of a participant.

Step 2: Representation of beliefs and actions

In general, beliefs and actions together constitute the decision making process. As the states have been extended, the initial state involves six combinations of beliefs and actions,

{|B_G A_A⟩, |B_G A_U⟩, |B_G A_W⟩, |B_B A_A⟩, |B_B A_U⟩, |B_B A_W⟩},

where, for example, |B_G A_A⟩ symbolizes the event in which the player believes the face belongs to a good guy, but the player intends to act by attacking. All of the possible transitions between the six states are illustrated in Figure ??. The Markov model assumes that a person is in exactly one of these states, but we do not know which one it is. Hence, we assign probabilities to the possible states. At the beginning of a trial, the initial probability of starting out in one of these six states is defined by a probability distribution represented by the 6 × 1 column matrix

φ(0) = (φ_AG, φ_UG, φ_WG, φ_AB, φ_UB, φ_WB)^T.

The participant's state is a superposition of the six basis states,

|ψ⟩ = ψ_AG |B_G A_A⟩ + ψ_UG |B_G A_U⟩ + ψ_WG |B_G A_W⟩ + ψ_AB |B_B A_A⟩ + ψ_UB |B_B A_U⟩ + ψ_WB |B_B A_W⟩,    (13)

and the initial state corresponds to an amplitude distribution represented by the 6 × 1 column matrix

ψ(0) = (ψ_AG, ψ_UG, ψ_WG, ψ_AB, ψ_UB, ψ_WB)^T,

where, for example, ψ_AG is the probability of observing state |B_G A_A⟩. This probability distribution satisfies the constraint L · φ = 1, where L = [1, 1, 1, 1, 1, 1]. In this experiment, we assume that a face is shown to the participant at time zero. The initial probability distribution is uniformly random when the participant has no time to deliberate about the categorization and the actions to take, namely ψ_UG = ψ_UB = 0.5.

Step 3: Inferences based on prior information

During the decision process, new information at time t_1 changes the initial state at time t = 0 into a new state at time t_1.
In the C-D condition, if the face is categorized as good, the state changes to  φ  AG  φ  UG   φW G 1  φ (t1 ) = φAG + φU G + φW G   0   0  0     " #   φ G = .  0     (14) The initial probability that the face is categorized as good equals φAG + φU G + φW G , and so the 3 × 1 matrix φG is the conditional probability distribution across actions given the categorization is good. If the face is categorized as bad, the state changes to   0    0   "  #     0 0 1 =  . φ (t1 ) =  φAB + φU B + φW B  φB  φAB     φU B    (15) φW B The initial probability that the face is categorized as bad equals φAB + φU B + φW B , and so the 3 × 1 matrix φB is the conditional probability distribution across actions given the categorization is bad. In the D-alone condition, the state remains the same as the initial state, which is a mixed state produced by a weighted average of the distribution 18 for two known conditions in C-D condition: φ (t1 ) = φ (0) " # (φAG + φU G + φW G ) · φG = (φAB + φU B + φW B ) · φB " # " # φG 0 = (φAG + φU G + φW G ) · + (φAB + φU B + φW B ) · . 0 φB (16) Step 4: Strategies based on payoffs During the decision making process, the participants need to evaluate the payoffs in order to select an appropriate action, which evolve the state at time t1 into a new state at time t2 . The evolution of the state during this time period corresponds the thought process leading to a action decision, defection, cooperation or hesitance. The evolution of the state obeys a Kolmogorov forward equation (Eq. (9)) driven by a 6 × 6 intensity matrix K. The solution is: φ (t2 ) = eKt · φ (t1 ) . For t = t2 − t1 , the state to state transition matrix is defined by T (t) = etK with Tij (t) represents the probability of transiting to state i at t2 given being in state j at time t2 . In this experiment, t is set to 2 corresponding to the average time that a participant takes to make a decision (approximately 2 seconds)[31]. The intensity matrix K is " # KG 0 H= 0 KB with   KG =   (17) − 3kr 2+kw kr kr kr +kw 2 −(kr + kw ) kw kw kr +kw 2 kr +3kw − 2 19   ,    KB =   w − kr +3k 2 kw kw kr +kw 2 −(kr + kw ) kr kr kr +kw 2 3kr +kw − 2   .  The 3 × 3 intensity matrix KG applies when the face is categorized as good, and the other matrix KB applies when the face is categorized as bad. The parameter kr represents the payoff for taking the right choice, namely attacking the bad guy and withdrawing the good guy, and the parameter kr represents the payoff for taking the wrong choice, namely attacking the good guy and withdrawing the bad guy. The off-diagonal elements of K are nonnegative, and the columns of K sum to zero, which then guarantees that the column of T sum to one, which finally guarantees that φ (t). In the C-D condition, if the face is categorized as good, the state changes to # " # " φG eKG ·t 0 K·t · = eKG ·t · φG (18) φ (t2 ) = e · φ(t1 )= KB ·t 0 0 e If the face is categorized as bad, the state changes to # " # " KG ·t 0 e 0 · = eKB ·t · φB φ (t2 ) = eK·t · φ(t1 )= φB 0 eKB ·t (19) In the D-alone condition, the state changes to φ (t2 ) = eK·t · φ(0) " # " # eKG ·t 0 (φAG + φUG + φW G ) · φG = · 0 eKB ·t (φAB + φUB + φW B ) · φB (20) = (φAG + φUG + φW G ) · eKG ·t · φG + (φAG + φUG + φW G ) · eKB ·t · φB . Step 5: Basic probability assignments measurement According to section 3.3, response probability of each decision can be derived. In an evidential framework, the BPA can also be derived. 
Specifically, when all the focal elements of a mass function are singletons, the BPA coincides with a probability. The measurement matrix is defined as

M = \begin{pmatrix} M_G & 0 \\ 0 & M_B \end{pmatrix}.

In the C-D condition, if the face is categorized as good, the 3 × 3 matrix M_G is set to diag(1, 0, 0), diag(0, 1, 0), or diag(0, 0, 1), respectively, to pick out the state of attacking, hesitance, or withdrawing, while the 3 × 3 matrix M_B is set to 0. If the face is categorized as bad, the 3 × 3 matrix M_B is set to diag(1, 0, 0), diag(0, 1, 0), or diag(0, 0, 1), respectively, to pick out the state of attacking, hesitance, or withdrawing, while the 3 × 3 matrix M_G is set to 0. In the D-alone condition, the 3 × 3 matrices M_B and M_G are both set to diag(1, 0, 0), diag(0, 1, 0), or diag(0, 0, 1), respectively, to pick out the state of attacking, hesitance, or withdrawing. Using L = [1, 1, 1, 1, 1, 1], the BPA of each state equals

m = L · M · e^{tK} · φ(t_1).    (21)

The derived BPAs constitute bodies of evidence, (ℜ_CD, m_CD) and (ℜ_D, m_D), for the two conditions respectively:

(ℜ_CD, m_CD) = [m(AG), m(UG), m(WG), m(AB), m(UB), m(WB)] = (0.0264, 0.0811, 0.0625, 0.3050, 0.3960, 0.1290),
(ℜ_D, m_D) = [m(AU), m(UU), m(WU)] = (0.3314, 0.4771, 0.1915).

Step 6: Probability distribution based on uncertainty measurement

To obtain the probability distribution, the BPA of the uncertain state should be redistributed. How to do so is still an open issue. The classical method is the pignistic probability transformation; obviously, however, the disjunction effect cannot be predicted in this way. To address this, a new parameter γ, which represents the extra uncertainty degree (EUD), is determined as follows:

γ = \frac{E_D - E_{CD}}{E_D + E_{CD}},    (22)

where E_CD and E_D are the information volumes of (ℜ_CD, m_CD) and (ℜ_D, m_D), respectively, calculated by Deng entropy as follows:

E_{CD} = -0.0264 \log_2\frac{0.0264}{2^1-1} - 0.0811 \log_2\frac{0.0811}{2^2-1} - 0.0625 \log_2\frac{0.0625}{2^1-1} - 0.3050 \log_2\frac{0.3050}{2^1-1} - 0.3960 \log_2\frac{0.3960}{2^2-1} - 0.1290 \log_2\frac{0.1290}{2^1-1} = 2.8715,

E_D = -0.3314 \log_2\frac{0.3314}{2^2-1} - 0.4771 \log_2\frac{0.4771}{2^4-1} - 0.1915 \log_2\frac{0.1915}{2^2-1} = 4.1868.

The cardinality of the states AG, WG, AB, WB is one, the cardinality of the states UG, UB, AU, WU is two, and the cardinality of the state UU is four. Without a categorization, the uncertainty in the D-alone condition is obviously larger than in the C-D condition. It is reasonable to assume that hesitant participants find it harder to reach a final decision, especially for narrow-type faces. According to cognitive dissonance and social projection in psychology, people tend to be consistent in their beliefs and actions [70–72]. Hesitant people are more inclined to attack, especially in the D-alone condition. Hence, for (ℜ_D, m_D), the BPA of the uncertain state should be assigned more to attacking based on the EUD. The probability of attacking in the D-alone condition is then calculated as

P_D(A) = m(AU) + \left(\frac{1}{2} + γ\right) m(UU) = 0.6589.    (23)

As the observed experimental results show that the disjunction effect is less than 0.1, the information volumes do not differ hugely, which guarantees γ < 0.5. For (ℜ_CD, m_CD), the BPA of the uncertain states is still handled using PPT (Eq. (4)). The probability of attacking in the C-D condition is then calculated as

P_{CD}(A) = m(AG) + \frac{1}{2} m(UG) + m(AB) + \frac{1}{2} m(UB) = 0.57.    (24)

The difference between Eq.
(23) and Eq. (24) is the predicted disjunction effect. Dis = PD (A) − PCD (A) = 0.0889 According to the above steps, the comprehensive process of our model is illustrated in Figure ??. 6. Model results and comparisons 6.1. Comparison with experiment results Applying the EM model to the categorization decision-making experiments, the probability distributions for all the experiments can be obtained. Compare the obtained model results with the observed experiment results (for 23 narrow type face), the model results are close to the practical situation, which verifies the correctness and effectiveness of our model. As Table 3 shows, the disjunction effect is well predicted and the average error rate is only approximately 0.5%, which proof the correctness and effectiveness of our model greatly. Table 3: The results of the EM model Literature P (G) P (A|G)1 P (B) P (A|B)2 PT P (A) Dis Townsend Obs 0.17 0.41 0.83 0.63 0.59 0.69 0.1 et al. (2000)[3] EM 0.17 0.394 0.83 0.606 0.57 0.6589 0.0889 Busemeyer Obs 0.20 0.45 0.80 0.64 0.60 0.69 0.09 et al. (2009)[31] EM 0.20 0.4019 0.5981 0.5588 0.80 0.6404 0.0816 Wang & Busemeyer (2016) Obs 0.21 0.41 0.79 0.58 0.54 0.59 0.05 experiment 1[23] EM 0.21 0.3840 0.79 0.6160 0.5673 0.6432 0.0759 Wang & Busemeyer (2016) Obs 0.24 0.37 0.76 0.61 0.55 0.60 0.05 experiment 2[23] EM 0.24 0.3384 0.76 0.6197 0.5622 0.63 0.0678 Wang & Busemeyer (2016) Obs 0.24 0.33 0.76 0.66 0.58 0.62 0.04 experiment 3[23] EM 0.24 0.3384 0.76 0.6616 0.5841 0.6436 0.0596 Obs 0.21 0.39 0.79 0.62 0.57 0.64 0.07 EM 0.21 0.3797 0.79 0.6203 0.5685 0.6432 0.0747 Average 1 Dis represents the disjunction effect. 2 Obs represents the observed experiment results. 3 EM represents the results of the Evidential Markov model. 6.2. Comparison with other uncertainty measurement method The crucial step of the EM model to predict the disjunction effect is the measurement of the extra uncertainty. Deng entropy is compared with many other uncertainty measuring methods (shown in Table 4) in Deng (2016)[42]. These methods are still used to do the comparison in our method. The comparison of the predicted disjunction effect and the standard error (SE) is illustrated in Figure 5 and Figure 6 respectively, where the Exp 1 to 5 comes from Townsend et al. (2000), Busemeyer et al. (2009), Wang & Busemeyer 24 Table 4: Methods of measuring uncertainty Name Formula Deng entropy[42] m (A) log2 m(A) 2|A| −1 P m (A) log2 |A| IDP (m) = − A∈X P m (A) log2 Bel (A) CH (m) = − A∈X P m (A) log2 P l (A) EY (m) = − A∈X P P |A∩B| m (B) |B| m (A) log2 DKR (m) = − B∈X A∈X P P m (B) |A∩B| m (A) log2 SKP (m) = − |B| B∈X A∈X h i P P |A∩B| m (B) 1 − |A∪B| m (A) log2 CGP (m) = E (m) = − P A∈X Dubois & Prades weighted Hartley entropy[73] Hohles confusion measure[74] Yagers dissonance measure[75] Klir & Ramers discord[76] Klir & Parvizs strife[77] George & Pals conflict measure[78] A∈X B∈X (2016) experiment 1, 2, 3 respectively. We can see that the SE of Deng entropy is small for each experiment and the average SE of Deng entropy is the smallest; the predicted disjunction effect of Deng entropy is close to the observed one for each experiment and the average disjunction effect is the most accurate. Deng entropy has a powerful ability of measuring the uncertainty of a BOE, especially when many focal sets are multi-element. Hence, Deng entropy is used to measure the extra uncertainty degree in the EM model. Figure 5: The predicted disjunction effect of different methods 25 Figure 6: The SE of different methods 6.3. 
Comparison with other models

As illustrated in Section 3.4, the classical Markov model cannot predict the disjunction effect. However, many quantum models have been proposed to explain the disjunction fallacy. In a quantum framework, before a decision is taken, human thoughts are seen as superposed waves that can interfere with each other, influencing the final decision. As shown in Figure ??, different decisions can coexist at one time before the final decision is made, whereas the Markov model assumes that the decision at any one time must be certain. The idea of the EM model is similar to the quantum model to some degree, namely in allowing a state to be uncertain. The interference effect, a term from quantum mechanics, is borrowed to account for the disjunction fallacy.

One of the effective quantum models is the quantum dynamical (QD) model, first proposed by Busemeyer et al. in 2006 [79], which is commonly used as a comparison with the classical Markov model. Busemeyer et al. (2009) [31] illustrate in detail that the QD model can predict the interference effect while the Markov model cannot. Both the QD model and the Markov model are formulated as random walk decision processes, but the QD model describes the evolution of complex-valued probability amplitudes over time. In a quantum framework, the state evolves according to a Schrödinger equation (Eq. (25)), which is driven by a Hermitian matrix H; two payoff functions are used to fill H:

dψ(t)/dt = -i · H · ψ(t).    (25)

Admittedly, the disjunction effect can be predicted by the QD model; however, its problem is that the model requires a growing number of parameters. For example, in Wang & Busemeyer (2016) [23], a modified QD model, the quantum belief-action entanglement (BAE) model, is proposed. Actions and beliefs are deemed to be entangled to some degree, which is the key to producing the interference effect. The entanglement degree is described by an introduced parameter whose value is set artificially. In our EM model, by contrast, the extra uncertainty degree is entirely driven by data, and it turns out that the uncertainty can be well measured in our model. To compare the ability to predict the disjunction effect, the model results for the probability of attacking in the D-alone condition are compared in Figure ??. As we can see, both the QD model and the EM model can predict the disjunction effect while the Markov model cannot; moreover, the prediction of our EM model is a bit more accurate, and the EM model has fewer free parameters. Based on the above, it is reasonable to conclude that our model is correct and efficient.

7. Conclusion

To model the decision making process and explain the disjunction fallacy, the EM decision making model is proposed in this paper. The model combines Dempster-Shafer evidence theory with the Markov model. The classical Markov model assumes that a person's decision has to be certain at any one time. In an evidential framework, the action states are extended, and the hesitance of a decision maker is represented by a newly introduced uncertain state. Hence, the state is allowed to be uncertain before a final decision is made. The uncertainty measurement is the key to predicting the disjunction effect. A belief entropy, named Deng entropy, is used due to its powerful ability to measure uncertainty. The model results show that the EM model can well predict the disjunction effect, which is impossible for the classical Markov model.
Also, the EM model has less free parameters than the quantum model. Hence, it is reasonable for us to conclude that the EM model is correct and effective. Acknowledgement The work is partially supported by National Natural Science Foundation of China (Grant No. 61671384), Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2016JM6018), Aviation Science Foundation (Program No. 20165553036), the Fund of SAST (Program No. SAST2016083) References [1] R. F. Jarrett, L. J. Savage, The foundation of statistics, American Journal of Psychology 69 (1956). [2] C. Lambdin, C. Burdsal, The disjunction effect reexamined: Relevant methodological issues and the fallacy of unspecified percentage compar- 28 isons, Organizational Behavior & Human Decision Processes 103 (2007) 268–276. [3] J. T. Townsend, K. M. Silva, J. Spencersmith, M. J. Wenger, Exploring the relations between categorization and decision making with regard to realistic face stimuli, Pragmatics & Cognition 8 (2000) 83–105. [4] E. M. Pothos, J. R. Busemeyer, A quantum probability explanation for violations of rationaldecision theory, Proceedings of the Royal Society of London B: Biological Sciences (2009) rspb–2009. [5] A. Tversky, E. Shafir, The disjunction effect in choice under uncertainty, Psychological Science 3 (1992) 305–309. [6] B. Annett, T. Sabine, B. Helge, Twelve years of succession on sandy substrates in a post-mining landscape: a markov chain analysis., Ecological Applications A Publication of the Ecological Society of America 20 (2010) 1136–47. [7] O. Alagoz, A. J. Hsu HSchaefer, M. S. Roberts, Markov decision processes: a tool for sequential decision making under uncertainty., Medical Decision Making 30 (2010) 474–483. [8] A. Farahat, Markov stochastic technique to determine galactic cosmic ray sources distribution, Journal of Astrophysics and Astronomy 31 (2010) 81–88. [9] Navaei, Markov chain for multiple hypotheses testing and identification of distributions for one object, Pakistan Journal of Statistics 26 (2010) 557–562. [10] Y. Lu, M. Zhang, T. Yu, M. Qu, Y. Lu, M. Zhang, T. Yu, M. Qu, Application of markov prediction method in the decision of insurance company, lemcs-14 (2014). 29 [11] H. Samet, A. Mojallal, Enhancement of electric arc furnace reactive power compensation using grey-markov prediction method, Generation, Transmission & Distribution, IET 8 (2014) 1626–1636. [12] K. Fraedrich, K. Muller, On single station forecasting: Sunshine and rainfall markov chains, Beitr. Phys. Atmos 56 (1983) 208–134. [13] C. Liu, Y. M. Tian, X. H. Wang, Study of rainfall prediction model based on gm (1, 1) - markov chain, in: Water Resource and Environmental Protection (ISWREP), 2011 International Symposium on, pp. 744–747. [14] Y. Jin, Z. Xie, J. Chen, E. Chen, Phev power distribution fuzzy logic control strategy based on prediction, Journal of Zhejiang University of Technology 43 (2015) 97–102. [15] J. Mar, F. Anto?anzas, R. Pradas, A. Arrospide, Probabilistic markov models in economic evaluation of health technologies: a practical guide, Gaceta Sanitaria 24 (2010) 209–214. [16] Z. Wang, J. R. Busemeyer, H. Atmanspacher, E. M. Pothos, The potential of using quantum theory to build models of cognition, Topics in Cognitive Science 5 (2013) 672–688. [17] E. M. Pothos, J. S. Trueblood, Structured representations in a quantum probability model of similarity, Journal of Mathematical Psychology 64 (2015) 35–43. [18] D. Aerts, L. Gabora, S. 
Sozzo, Concepts and their dynamics: A quantum-theoretic modeling of human thought, Topics in Cognitive Science 5 (2013) 737–772. [19] R. Blutner, E. M. Pothos, P. Bruza, A quantum probability perspective on borderline vagueness, Topics in Cognitive Science 5 (2013) 711–736. 30 [20] J. S. Trueblood, J. R. Busemeyer, A quantum probability account of order effects in inference, Cognitive science 35 (2011) 1518–1552. [21] Z. Wang, J. R. Busemeyer, A quantum question order model supported by empirical tests of an a priori and precise prediction, Topics in Cognitive Science 5 (2013) 689–710. [22] Z. Wang, T. Solloway, R. M. Shiffrin, J. R. Busemeyer, Context effects produced by question orders reveal quantum nature of human judgments, Proceedings of the National Academy of Sciences 111 (2014) 9431–9436. [23] Z. Wang, J. R. Busemeyer, Interference effects of categorization on decision making, Cognition 150 (2016) 133–149. [24] L. Chen, H. Ang, D. Kiang, L. Kwek, C. Lo, Quantum prisoner dilemma under decoherence, Physics Letters A 316 (2003) 317–323. [25] P. D. Bruza, K. Kitto, B. J. Ramm, L. Sitbon, A probabilistic framework for analysing the compositionality of conceptual combinations, Journal of Mathematical Psychology 67 (2015) 26–38. [26] D. Aerts, Quantum structure in cognition, Journal of Mathematical Psychology 53 (2009) 314–348. [27] H. Situ, Z. Huang, Relativistic Quantum Bayesian Game Under Decoherence, International Journal of Theoretical Physics 55 (2016) 2354– 2363. [28] H. Situ, Two-player conflicting interest Bayesian games and Bell nonlocality, Quantum Information Processing 15 (2016) 137–145. 31 [29] M. Asano, M. Ohya, A. Khrennikov, Quantum-like model for decision making process in two players game, Foundations of Physics 41 (2010) 538–548. [30] E. M. Pothos, J. R. Busemeyer, A quantum probability explanation for violations of ’rational’ decision theory., Proceedings of the Royal Society B Biological Sciences 276 (2009) 2171–8. [31] J. R. Busemeyer, Z. Wang, A. Lambert-Mogiliansky, Empirical comparison of markov and quantum models of decision making , Journal of Mathematical Psychology 53 (2009) 423–433. [32] J. Denolf, A. Lambert-Mogiliansky, Bohr complementarity in memory retrieval, Journal of Mathematical Psychology 73 (2016) 28–36. [33] P. Nyman, On the consistency of the quantum-like representation algorithm for hyperbolic interference, Advances in Applied Clifford Algebras 21 (2011) 799–811. [34] P. Nyman, I. Basieva, Quantum-like representation algorithm for trichotomous observables, International Journal of Theoretical Physics 50 (2011) 3864–3881. [35] V. I. Yukalov, D. Sornette, Quantum decision theory as quantum theory of measurement, Physics Letters A 372 (2015) 6867–6871. [36] V. I. Yukalov, D. Sornette, Physics of risk and uncertainty in quantum decision making, The European Physical Journal B 71 (2009) 533–548. [37] V. I. Yukalov, D. Sornette, Processing information in quantum decision theory, Entropy 11 (2009) 1073–1120. [38] V. I. Yukalov, D. Sornette, Decision theory with prospect interference and entanglement, Theory and Decision 70 (2011) 283–328. 32 [39] C. Moreira, A. Wichert, Quantum-like bayesian networks for modeling decision making., Frontiers in Psychology 7 (2016). [40] W. Jiang, C. Xie, M. Zhuang, Y. Shou, Y. Tang, Sensor data fusion with z-numbers and its application in fault diagnosis, Sensors 16 (2016) 1509. [41] W. Jiang, C. Xie, B. Wei, D. 
Zhou, A modified method for risk evaluation in failure modes and effects analysis of aircraft turbine rotor blades, Advances in Mechanical Engineering 8 (2016). [42] Y. Deng, Deng entropy, Chaos, Solitons & Fractals 91 (2016) 549–553. [43] Y. Tang, D. Zhou, S. Xu, Z. He, A weighted belief entropy-based uncertainty measure for multi-sensor data fusion, Sensors 17 (2017) 928. [44] X. Deng, D. Han, J. Dezert, Y. Deng, Y. Shyr, Evidence combination from an evolutionary game theory perspective, IEEE Transactions on Cybernetics 46 (2016) 2070–2082. [45] W. Jiang, J. Zhan, A modified combination rule in generalized evidence theory, Applied Intelligence 46 (2017) 630–640. [46] F. E. Grubbs, An introduction to probability theory and its applications, Technometrics 121 (1967) 354. [47] L. A. Zadeh, Fuzzy sets, Information & Control 8 (1965) 338–353. [48] A. P. Dempster, Upper and lower probabilities induced by a multivalued mapping, Annals of Mathematical Statistics 38 (1967) 325–339. [49] G. Shafer, A mathematical theory of evidence, Technometrics 20 (1978) 242. 33 [50] Z. Pawlak, Rough sets, International Journal of Parallel Programming 38 (1982) 88–95. [51] Y. Deng, D numbers: Theory and applications, JOURNAL OF INFORMATION &COMPUTATIONAL SCIENCE 9 (2012) 2421–2428. [52] C. Fu, J. B. Yang, S. L. Yang, A group evidential reasoning approach based on expert reliability, European Journal of Operational Research 246 (2015) 886–893. [53] X. Su, S. Mahadevan, P. Xu, Y. Deng, Dependence assessment in human reliability analysis using evidence theory and ahp, Risk Analysis 35 (2015) 1296. [54] W. Jiang, C. Xie, M. Zhuang, Y. Tang, Failure mode and effects analysis based on a novel fuzzy evidential method, Applied Soft Computing Published online (2017). [55] Y. Tang, D. Zhou, W. Jiang, A new fuzzy-evidential controller for stabilization of the planar inverted pendulum system, PloS ONE 11 (2016) e0160416. [56] R. R. Yager, D. P. Filev, Including probabilistic uncertainty in fuzzy logic controller modeling using dempster-shafer theory, IEEE Transactions on Systems Man & Cybernetics 25 (1995) 1221–1230. [57] D. Han, W. Liu, J. Dezert, Y. Yang, A novel approach to pre-extracting support vectors based on the theory of belief functions, KnowledgeBased Systems 110 (2016) 210–223. [58] J. Ma, W. Liu, P. Miller, H. Zhou, An evidential fusion approach for gender profiling, Information Sciences 333 (2016) 10–20. 34 [59] Z. G. Liu, Q. Pan, J. Dezert, A. Martin, Adaptive imputation of missing values for incomplete pattern classification, Pattern Recognition 52 (2016) 85–95. [60] K. Yuan, F. Xiao, L. Fei, B. Kang, Y. Deng, Conflict management based on belief function entropy in sensor fusion, SpringerPlus 5 (2016) 638. [61] K. Yuan, F. Xiao, L. Fei, B. Kang, Y. Deng, Modeling sensor reliability in fault diagnosis based on evidence theory, Sensors 16 (2016) 113. [62] W. Jiang, B. Wei, C. Xie, D. Zhou, An evidential sensor fusion method in fault diagnosis, Advances in Mechanical Engineering 8 (2016). [63] C. Fu, Y. Wang, An interval difference based evidential reasoning approach with unknown attribute weights and utilities of assessment grades , Computers & Industrial Engineering 81 (2015) 109–117. [64] W. S. Du, B. Q. Hu, Attribute reduction in ordered decision tables via evidence theory, Information Sciences 364-365 (2016) 91–110. [65] X. Zhou, S. Li, Z. Ye, A novel system anomaly prediction system based on belief markov model and ensemble classification, Mathematical Problems in Engineering 2013 (2013) 831–842. [66] X. 
Deng, Q. Liu, Y. Deng, Newborns prediction based on a belief markov chain model, Applied Intelligence 43 (2015) 1–14. [67] H. Soubaras, An Evidential Measure of Risk in Evidential Markov Chains, Springer Berlin Heidelberg, 2009. [68] H. Soubaras, On evidential markov chains, Studies in Fuzziness & Soft Computing 249 (2010) 247–264. 35 [69] P. Smets, R. Kennes, The transferable belief model, Artificial Intelligence 66 (1994) 191–234. [70] L. Festinger, A theory of cognitive dissonance., Stanford Univ. Pr.,, 1962. [71] J. I. Krueger, T. E. Didonato, D. Freestone, Social projection can solve social dilemmas, Psychological Inquiry 23 (2012) 1–27. [72] J. R. Busemeyer, E. M. Pothos, Social projection and a quantum approach for behavior in prisoner’s dilemma, Psychological Inquiry 23 (2012) 28–34. [73] D. DUBOIS, H. PRADE, A note on measures of specificity for fuzzy sets, International Journal of General Systems 10 (1984) 279–283. [74] U. Höhle, Entropy with respect to plausibility measures, in: Proceedings of the 12th IEEE International Symposium on Multiple-Valued Logic, pp. 167–169. [75] R. R. YAGER, Entropy and specificity in a mathematical theory of evidence, International Journal of General Systems 219 (1982) 291–310. [76] G. J. KLIR, A. RAMER, Uncertainty in the dempster-shafer theory: A critical re-examination, International Journal of General Systems 18 (1990) 155–166. [77] G. J. Klir, B. Parviz, A {NOTE} {ON} {THE} {MEASURE} {OF} {DISCORD}, in: D. Dubois, M. P. Wellman, B. D’Ambrosio, P. Smets (Eds.), Uncertainty in Artificial Intelligence, Morgan Kaufmann, 1992, pp. 138 – 141. 36 [78] T. GEORGE, N. R. PAL, Quantification of conflict in dempster-shafer framework: A new approach, International Journal of General Systems 24 (1996) 407–423. [79] J. R. Busemeyer, Z. Wang, J. T. Townsend, Quantum dynamics of human decision-making, Journal of Mathematical Psychology 50 (2006) 220–241. 37
arXiv:1404.7078v2 [cs.DB] 2 May 2014 Query shredding: Efficient relational evaluation of queries over nested multisets (extended version) James Cheney Sam Lindley Philip Wadler University of Edinburgh University of Edinburgh University of Edinburgh [email protected] [email protected] [email protected] ABSTRACT Nested relational query languages have been explored extensively, and underlie industrial language-integrated query systems such as Microsoft’s LINQ. However, relational databases do not natively support nested collections in query results. This can lead to major performance problems: if programmers write queries that yield nested results, then such systems typically either fail or generate a large number of queries. We present a new approach to query shredding, which converts a query returning nested data to a fixed number of SQL queries. Our approach, in contrast to prior work, handles multiset semantics, and generates an idiomatic SQL:1999 query directly from a normal form for nested queries. We provide a detailed description of our translation and present experiments showing that it offers comparable or better performance than a recent alternative approach on a range of examples. 1. INTRODUCTION Databases are one of the most important applications of declarative programming techniques. However, relational databases only support queries against flat tables, while programming languages typically provide complex data structures that allow arbitrary combinations of types including nesting of collections (e.g. sets of sets). Motivated by this so-called impedance mismatch, and inspired by insights into language design based on monadic comprehensions [33], database researchers introduced nested relational query languages [26, 4, 5] as a generalisation of flat relational queries to allow nesting collection types inside records or other types. Several recent language designs, such as XQuery [25] and PigLatin [23], have further extended these ideas, and they have been particularly influential on language-integrated querying systems such as Kleisli [35], Microsoft’s LINQ in C# and F# [22, 28, 6], Links [8, 20], and Ferry [11]. This paper considers the problem of translating nested queries over nested data to flat queries over a flat representation of nested data, or query shredding for short. Our motivation is to support a free combination of the features of nested relational query languages with those of high-level programming languages, particularly systems such as Links, Ferry, and LINQ. All three of these This paper is an extended version of a paper to appear at SIGMOD 2014. systems support queries over nested data structures (e.g. records containing nested sets, multisets/bags, or lists) in principle; however, only Ferry supports them in practice. Links and LINQ currently either reject such queries at run-time or execute them inefficiently in-memory by loading unnecessarily large amounts of data or issuing large numbers of queries (sometimes called query storms or avalanches [12] or the N + 1 query problem). To construct nested data structures while avoiding this performance penalty, programmers must currently write flat queries (e.g. loading in a superset of the needed source data) and convert the results to nested data structures. Manually reconstructing nested query results is tricky and hard to maintain; it may also mask optimisation opportunities. In the Ferry system, Grust et al. 
[11, 12] have implemented shredding for nested list queries by adapting an XQuery-to-SQL translation called loop-lifting [15]. Loop-lifting produces queries that make heavy use of advanced On-Line Analytic Processing (OLAP) features of SQL:1999, such as ROW_NUMBER and DENSE_RANK, and to optimise these queries Ferry relies on a SQL:1999 query optimiser called Pathfinder [14]. Van den Bussche [31] proved expressiveness results showing that it is possible in principle to evaluate nested queries over sets via multiple flat queries. To strengthen the result, Van den Bussche’s simulation eschews value invention mechanisms such as SQL:1999’s ROW_NUMBER. The downside, however, is that the flat queries can produce results that are quadratically larger than needed to represent sets and may not preserve bag semantics. In particular, for bag semantics the size of a union of two nested sets can be quadratic in the size of the inputs — we give a concrete example in Appendix A. Query shredding is related to the well-studied query unnesting problem [17, 10]. However, most prior work on unnesting only considers SQL queries that contain subqueries in WHERE clauses, not queries returning nested results; the main exception is Fegaras and Maier’s work on query unnesting in a complex object calculus [10]. In this paper, we introduce a new approach to query shredding for nested multiset queries (a case not handled by prior work). Our work is formulated in terms of the Links language, but should be applicable to other language-integrated query systems, such as Ferry and LINQ, or to other complex-object query languages [10]. Figure 1 illustrates the behaviour of Links, Ferry and our approach. We decompose the translation from nested to flat queries into a series of simpler translations. We leverage prior work on normalisation that translates a higher-order query to a normal form in which higher-order features have been eliminated [34, 7, 10]. Our algorithm operates on normal forms. We review normalisation in Section 2, and give a running example of our approach in Section 3. Sections 4–7 present the remaining phases of our approach, which (a) (b) Ferry Q r Links normalisation Q' RDBMS r Q loop-lifting Pathfinder r rebuilding Figure 1: (a) Default Links behaviour (flat queries) Q RDBMS r1 ... rn BACKGROUND We use metavariables x, y, . . . , f, g for variables, and c, d, . . . for constants and primitive operations. We also use letters t, t0 , . . . for table names, `, `0 , `i , . . . for record labels and a, b, . . . for tags. We write M [x := N ] for capture-avoiding substitution of N for x in M . We write ~x for a vector x1 , . . . , xn . Moreover, we extend vector notation pointwise to other constructs, writing, for example, −−−−→ h` = M i for h`1 = M1 , . . . , `n = Mn i. We write: square brackets [−] for the meta level list constructor; w :: ~v for adding the element w onto the front of the list ~v ; ~v ++ w ~ for the result of appending the list w ~ onto the end of the list ~v ; and concat for the function that concatenates a list of lists. We also normalisation shredding Q1 ... Qn r1 ... rn stitching r (b) Ferry (nested list queries) are new. The shredding phase translates a single, nested query to a number of flat queries. These queries are organised in a shredded package, which is essentially a type expression whose collection type constructors are annotated with queries. The different queries are linked by indexes, that is, keys and foreign keys. The shredding translation is presented in Section 4. 
Shredding leverages the normalisation phase in that we can define translations on types and terms independently. Section 5 shows how to run shredded queries and stitch the results back together to form nested values. The let-insertion phase implements a flat indexing scheme using a let-binding construct and a row-numbering operation. Letinsertion (described in Section 6) is conceptually straightforward, but provides a vital link to proper SQL by providing an implementation of abstract indexes. The final stage is to translate to SQL (Section 7) by flattening records, translating let-binding to SQL’s WITH, and translating row-numbering to SQL’s ROW_NUMBER. This phase also flattens nested records; the (standard) details are included in Appendix E for completeness. We have implemented and experimentally evaluated our approach (Section 8) in comparison with Ulrich’s implementation of looplifting in Links [30]. Our approach typically performs as well or better than loop-lifting, and can be significantly faster. Our contribution over prior work can be summarised as follows. Fegaras and Maier [10] show how to unnest complex object queries including lists, sets, and bags, but target a nonstandard nested relational algebra, whereas we target standard SQL. Van den Bussche’s simulation [31] translates nested set queries to several relational queries but was used to prove a theoretical result and was not intended as a practical implementation technique, nor has it been implemented and evaluated. Ferry [11, 12] translates nested list queries to several SQL:1999 queries and then tries to simplify the resulting queries using Pathfinder. Sometimes, however, this produces queries with cross-products inside OLAP operations such as ROW_NUMBER, which Pathfinder cannot simplify. In contrast, we delay introducing OLAP operations until the last stage, and our experiments show how this leads to much better performance on some queries. Finally, we handle nested multisets, not sets or lists. This report is the extended version of a SIGMOD 2014 paper, and contains additional details, proofs of correctness, and comparison with related work in appendices. 2. (c) Our approach Q1 ... Qn RDBMS (c) Our approach (nested bag queries) N JxKρ N Jc(X1 , . . . , Xn )Kρ N Jλx.M Kρ N JM N Kρ N Jh`i = Mi in i=1 Kρ N JM.`Kρ N Jif L then M else N Kρ N Jreturn M Kρ N J∅Kρ N JM ] N Kρ N Jfor (x ← M ) N Kρ N Jempty M Kρ N Jtable tKρ = ρ(x) = JcK(N JX1 Kρ , . . . , N JXn Kρ ) = λv.N JM Kρ[x7→v] = N JM Kρ (N JN Kρ ) = h`i = N JMi Kρ in i=1 =N  JM Kρ .` N JM Kρ , if N JLKρ = true = N JN Kρ , if N JLKρ = false = [N JM Kρ ] = [] = N JM Kρ ++ N JN Kρ = concat[N JN Kρ[x7→v] | v ← N JM Kρ ] true, if N JM Kρ = [ ] = false, if N JM Kρ 6= [ ] = JtK Figure 2: Semantics of λNRC make use of the following functions: n−1 init [xi ]n i=1 = [xi ]i=1 last [xi ]n i=1 = xn enum([v1 , . . . , vm ]) = [h1, v1 i, . . . hm, vm i] In the meta language we make extensive use of comprehensions, primarily list comprehensions. For instance, [v | x ← xs, y ← ys, p], returns a copy of v for each pair hx, yi of elements of xs and ys such that the predicate p holds. We write [vi ]n i=1 as shorthand for [vi | 1 ≤ i ≤ n] and similarly, e.g., h`i = Mi in i=1 for h`1 = M1 , . . . , `n = Mn i. 2.1 Nested relational calculus We take the higher-order, nested relational calculus (evaluated over bags) as our starting point. We call this λNRC ; this is essentially a core language for the query components of Links, Ferry, and LINQ. 
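To make the meta-level notation concrete, the following Python sketch (an illustration we add for readability, not part of the formal development) gives one possible rendering of init, last, enum and concat, together with a comprehension written in the style used above; the 1-based numbering in enum matches the convention used for dynamic indexes later on.

    # Meta-level helper functions used throughout the translation.
    def init(xs):                 # all elements but the last: init [x1,...,xn] = [x1,...,x(n-1)]
        return xs[:-1]

    def last(xs):                 # the last element: last [x1,...,xn] = xn
        return xs[-1]

    def enum(vs):                 # pair each element with its 1-based position
        return [(i + 1, v) for i, v in enumerate(vs)]

    def concat(xss):              # concatenate a list of lists
        return [x for inner in xss for x in inner]

    assert init([1, 2, 3]) == [1, 2] and last([1, 2, 3]) == 3

    # A meta-level comprehension: one copy of (x, y) per pair satisfying the predicate.
    xs, ys = [1, 2, 3], [10, 20]
    pairs = [(x, y) for x in xs for y in ys if x % 2 == 1]
    assert enum(pairs) == [(1, (1, 10)), (2, (1, 20)), (3, (3, 10)), (4, (3, 20))]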
The types of λNRC include base types (integers, strings, −−→ booleans), record types h` : Ai, bag types Bag A, and function types A → B. −−→ Types A, B ::= O | h` : Ai | Bag A | A → B Base types O ::= Int | Bool | String We say that a type is nested if it contains no function types and flat if it contains only base and record types. The terms of λNRC include λ-abstractions, applications, and the standard terms of nested relational calculus. ~ ) | table t | if M then N else N 0 Terms M, N ::= x | c(M −−−−→ | λx. M | M N | h` = M i | M.` | empty M | return M | ∅ | M ] N | for (x ← M ) N We assume that the constants and primitive functions include boolean values with negation and conjunction, and integer values with standard arithmetic operations and equality tests. We assume special labels #1 , #2 , . . . and encode tuple types hA1 , . . . , An i as record types h#1 : A1 , . . . , #n : An i, and similarly tuple terms hM1 , . . . , Mn i as record terms h#1 = M1 , . . . , #n = Mn i. We assume fixed signatures Σ(t) and Σ(c) for tables and constants. The tables are constrained to have flat relation type (Bag h`1 : O1 , . . . , `n : On i), and the constants must be of base type or first order n-ary functions (hO1 , . . . , On i → O). Most language constructs are standard. The ∅ expression builds an empty bag, return M constructs a singleton, and M ] N builds the bag union of two collections. The for (x ← M ) N comprehension construct iterates over a bag obtained by evaluating M , binds x to each element, evaluates N to another bag for each such binding, and takes the union of the results. The expression empty M evaluates to true if M evaluates to an empty bag, false otherwise. λNRC employs a standard type system similar to that presented in other work [35, 20, 6]. We will also employ several typed intermediate languages and translations mapping λNRC to SQL. All of these (straightforward) type systems are omitted due to space limits; they will be available in the full version of this paper. Semantics. We give a denotational semantics in terms of lists. Though we wish to preserve bag semantics, we interpret objectlevel bags as meta-level lists. For meta-level values v and v 0 , we consider v and v 0 equivalent as multisets if they are equal up to permutation of list elements. We use lists mainly so that we can talk about order-sensitive operations such as row_number. We interpret base types as integers, booleans and strings, function types as functions, record types as records, and bag types as lists. For each table t ∈ dom(Σ), we assume a fixed interpretation JtK of t as a list of records of type Σ(t). In SQL, tables do not have a list semantics by default, but we can impose one by choosing a canonical ordering for the rows of the table. We order by all of the columns arranged in lexicographic order (assuming linear orderings on field names and base types). We assume fixed interpretations JcK for the constants and primitive operations. The semantics of nested relational calculus are shown in Figure 2. The (standard) typing rules are given in Appendix B. We let ρ range over environments mapping variables to values, writing ε for the empty environment and ρ[x 7→ v] for the extension of ρ with x bound to v. 2.2 Query normalisation In Links, query normalisation is an important part of the execution model [7, 20]. Links currently supports only queries mapping flat tables to flat results, or flat–flat queries. 
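Before turning to normal forms, it may help to see the collection fragment of the semantics in Figure 2 rendered as executable code. The following Python sketch (ours, not the Links implementation) represents object-level bags as lists, exactly as the denotational semantics does, and mirrors return, the empty bag, multiset union, for-comprehension and the emptiness test:

    def bag_return(v):            # return M : the singleton bag [v]
        return [v]

    EMPTY = []                    # the empty bag

    def bag_union(m, n):          # M ⊎ N : multiset union, i.e. list append
        return m + n

    def bag_for(m, body):         # for (x <- M) N : concat [ N[x := v] | v <- M ]
        return [w for v in m for w in body(v)]

    def bag_empty(m):             # empty M
        return len(m) == 0

    # Example: for (x <- [1,2,3]) if odd(x) then return (x, x*x) else the empty bag
    result = bag_for([1, 2, 3], lambda x: bag_return((x, x * x)) if x % 2 else EMPTY)
    assert result == [(1, 1), (3, 9)]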
When a subexpression denoting a query is evaluated, the subexpression is first normalised and then converted to SQL, which is sent to the database for evaluation; the tuples received in response are then converted into a Links value and normal execution resumes (see Figure1(a)). For flat–nested queries that read from flat tables and produce a nested result value, our normalisation procedure is similar to the one currently used in Links [20], but we hoist all conditionals into the nearest enclosing comprehension as where clauses. This is a minor change; the modified algorithm is given in the full version of this paper. The resulting normal forms are: U ~ Query terms L ::= C ~ where X) return M Comprehensions C ::= for (G Generators G ::= x ← t Normalised terms M, N ::= X | R | L −−−−→ Record terms R ::= h` = M i ~ | empty L Base terms X ::= x.` | c(X) Any closed flat–nested query can be converted to an equivalent term in the above normal form. JdepartmentsK = (id) 1 2 3 4 name Product Quality Research Sales JtasksK = (id) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 employee Alex Bert Cora Cora Cora Cora Cora Drew Drew Erik Erik Fred Gina Gina JemployeesK = (id) 1 2 3 4 5 6 7 task build build abstract build call dissemble enthuse abstract enthuse call enthuse call call dissemble dept Product Product Research Research Sales Sales Sales JcontactsK = (id) 1 2 3 4 5 6 7 name Alex Bert Cora Drew Erik Fred Gina dept Product Product Research Research Sales Sales Sales salary 20000 900 50000 60000 2000000 700 100000 name Pam Pat Rob Roy Sam Sid Sue client false true false false false false true Figure 3: Sample data T HEOREM 1. Given a closed flat–nested query ` M : Bag A, there exists a normalisation function norm Bag A , mapping each M to an equivalent normal form norm Bag A (M ). The normalisation algorithm and correctness proof are similar to those in previous papers [7, 20, 6]. The details are given in Appendix C. The normal forms above can also be viewed as an SQL-like language allowing relation-valued attributes (similar to the complex-object calculus of Fegaras and Maier [10]). Thus, our results can also be used to support nested query results in an SQLlike query language. In this paper, however, we focus on the functional core language based on comprehensions, as illustrated in the next section. 3. RUNNING EXAMPLE To motivate and illustrate our work, we present an extended example showing how our shredding translation could be used to provide useful functionality to programmers working in LINQ using F#, Ferry or Links. We first describe the code the programmer would actually write and the results the system produces. Throughout the rest of the paper, we return to this example to illustrate how the shredding translation works. Consider a flat database schema Σ for an organisation: tasks(employee : String, task : String) employees(dept : String, name : String, salary : Int) contacts(dept : String, name : String, client : Bool) departments(name : String) Each department has a name, a collection of employees, and a collection of external contacts. Each employee has a name, salary and a collection of tasks. Some contacts are clients. Figure 3 shows a small instance of this schema. For convenience, we also assume every table has an integer-valued key id. In λNRC , queries of the form for . . . where . . . return . . . are natively supported. These are comprehensions as found in XQuery or functional programming languages, and they generalise idiomatic SQL SELECT FROM WHERE queries [4]. 
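For instance, representing a fragment of the Figure 3 instance as in-memory lists of records (a Python sketch added for illustration; in the system the tables of course live in the database), a flat–flat query is just such a comprehension:

    # A fragment of the Figure 3 instance, with tables as lists of records (dicts).
    employees = [
        {"name": "Alex", "dept": "Product",  "salary": 20000},
        {"name": "Bert", "dept": "Product",  "salary": 900},
        {"name": "Cora", "dept": "Research", "salary": 50000},
        {"name": "Erik", "dept": "Sales",    "salary": 2000000},
    ]
    tasks = [
        {"employee": "Alex", "task": "build"},
        {"employee": "Cora", "task": "abstract"},
        {"employee": "Cora", "task": "build"},
        {"employee": "Erik", "task": "call"},
    ]

    # A flat-flat query in the style of 'for ... where ... return ...':
    # the employees earning more than 10000, generalising SELECT-FROM-WHERE.
    well_paid = [{"emp": e["name"]} for e in employees if e["salary"] > 10000]
    assert well_paid == [{"emp": "Alex"}, {"emp": "Cora"}, {"emp": "Erik"}]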
Unlike SQL, we can use (nonrecursive) functions to define queries with parameters, or parts of queries, and freely combine them. For example, the following functions define useful query patterns over the above schema: tasksOfEmp e = for (t ← tasks) where (t.employee = e.name) return t.tsk contactsOfDept d = for (c ← contacts) where (d.dept = c.dept) return hname = c.name, client = c.clienti employeesByTask t = for (e ← employees, d ← departments) where (e.name = t.employee ∧ e.dept = d.dept) return hb = e.employee, c = d.depti Nested queries allow free mixing of collection (bag) types with record or base types. For example, the following query employeesOfDept d = for (e ← employees) where (d.dept = e.dept) return hname = e.name, salary = e.salary, tasks = tasksOfEmp ei returns a nested result: a collection of employees in a department, each with an associated collection of tasks. That is, its return type is Bag hname:String, salary:Int, tasks:Bag Stringi Consider the following nested schema for organisations: Task Employee Contact Department = String = hname : String, salary : Int, tasks : Bag Task i = hname : String, client : Booli = hname : String, employees : Bag Employee, contacts : Bag Contacti Organisation = Bag Department Using some of the above functions we can write a query Qorg that maps data in the flat schema Σ to the nested type Organisation, as follows: Qorg = for (d ← departments) return (hname = d.name, employees = employeesOfDept d, contacts = contactsOfDept di We can also define and use higher-order functions to build queries, such as the following: filter p xs = for (x ← xs) where (p(x)) return x any xs p = ¬(empty(for (x ← xs) where (p(x)) return hi)) all xs p = ¬(any xs (λx.¬(p(x)))) contains xs u = any xs (λx.x = u) To illustrate the main technical challenges of shredding, we consider a query with two levels of nesting and a union operation. Suppose we wish to find for each department a collection of people of interest, both employees and contacts, along with a list of the tasks they perform. Specifically, we are interested in those employees that earn less than a thousand euros and those who earn more than a million euros, call them outliers, along with those contacts who are clients. The following code defines poor, rich, outliers, and clients: isPoor x = x.salary < 1000 isRich x = x.salary > 1000000 outliers xs = filter (λx.isRich x ∨ isPoor x) xs clients xs = filter (λx. x.client) xs We also introduce a convenient higher-order function that uses its f parameter to initialise the tasks of its elements: getTasks xs f = for (x ← xs) return hname = x.name, tasks = f xi Using the above operations, the query Q returns each department, the outliers and clients associated with that department, and their tasks. We assign the special task “buy” to clients. Q(organisation) = for (x ← organisation) return (hdepartment = x.name, people = getTasks(outliers(x.employees)) (λy. y.tasks) ] getTasks(clients(x.contacts)) (λy. return “buy”)i) The result type of Q is: Result = Bag hdepartment : String, people : Bag hname : String, tasks : Bag Stringii We can compose Q with Qorg to form a query Q(Qorg ) from the flat data stored in Σ to the nested Result. 
The normal form of this composed query, which we call Qcomp , is as follows: Qcomp = for (x ← departments) return ( hdepartment = x.name, people = (for (y ← employees) where (x.name = y.dept ∧ (y.salary < 1000 ∨ y.salary > 1000000)) return (hname = y.name, tasks = for (z ← tasks) where (z.employee = y.name) return z.taski)) ] (for (y ← contacts) where (x.name = y.dept ∧ y.client) return (hname = y.name, tasks = return “buy”i))i) The result of running Qcomp on the data in Figure 3 is: [hdepartment = “Product”, people = [hname = “Bert”, tasks = [“build”]i, hname = “Pat”, tasks = [“buy”]i]i] hdepartment = “Research”, people = ∅i, hdepartment = “Quality”, people = ∅i, hdepartment = “Sales”, people = [hname = “Erik”, tasks = [“call”, “enthuse”]i, hname = “Fred”, tasks = [“call”]i, hname = “Sue”, tasks = [“buy”]i]i] Now, however, we are faced with a problem: SQL databases do not directly support nested multisets (or sets). Our shredding translation, like Van den Bussche’s simulation for sets [31] and Grust et al.’s for lists [12], can translate a normalised query such as Qcomp : Result that maps flat input Σ to nested output Result to a fixed number of flat queries q1 : Result 1 , . . . , qn : Result n whose results can be combined via a stitching operation Qstitch : Result 1 × · · · × Result n → Result. Thus, we can simulate the query Qcomp by running q1 , . . . , qn remotely on the database and stitching the results together using Qstitch . The number of intermediate queries n is the nesting degree of Result, that is, the number of collection type constructors in the result type. For example, the nesting degree of Bag hA : Bag Int, B : Bag Stringi is 3. The nesting degree of the type Result is also 3, which means Q can be shredded into three flat queries. The basic idea is straightforward. Whenever a nested bag appears in the output of a query, we generate an index that uniquely identifies the current context. Then a separate query produces the contents of the nested bag, where each element is paired up with its parent index. Each inner level of nesting requires a further query. We will illustrate by showing the results of the three queries and how they can be combined to reconstitute the desired nested result. The outer query q1 contains one entry for each department, with an index ha, idi in place of each nested collection: r1 = [hdepartment = “Product”, people = ha, 1ii, hdepartment = “Quality”, people = ha, 2ii, hdepartment = “Research”, people = ha, 3ii, hdepartment = “Sales”, people = ha, 4ii] The second query q2 generates the data needed for the people collections: r2 = [hha, 1i, hname = “Bert”, tasks = hb, 1, 2iii, hha, 4i, hname = “Erik”, tasks = hb, 4, 5iii, hha, 4i, hname = “Fred”, tasks = hb, 4, 6iii, hha, 1i, hname = “Pat”, tasks = hd, 1, 2iii, hha, 4i, hname = “Sue”, tasks = hd, 4, 7iii] The idea is to ensure that we can stitch the results of q1 together with the results of q2 by joining the inner indexes of q1 (bound to the people field of each result) with the outer indexes of q2 (bound to the first component of each result). In both cases the static components of these indexes are the same tag a. Joining the people field of q1 to the outer index of q2 correctly associates each person with the appropriate department. Finally, let us consider the results of the innermost query q3 for generating the bag bound to the tasks field: In order to shred nested queries, we introduce an abstract type Index of indexes for maintaining the correspondence between outer and inner queries. 
An index a  d has a static component a and a dynamic component d. The static component a links the index to the corresponding returna in the query. The dynamic component identifies the current bindings of the variables in the comprehension. Next, we modify types so that bag types have an explicit index component and we use indexes to replace nested occurrences of bags within other bags: Shredded types A, B ::= Bag hIndex , F i −−→ Flat types F ::= O | h` : F i | Index We also adapt the syntax of terms to incorporate indexes. After shredding, terms will have the following forms: U ~ Query terms L, M ::= C Comprehensions Generators Inner terms Record terms Base terms Indexes Dynamic indexes r3 = [hhb, 1, 2i, “build”i, hhb, 4, 5i, “call”i, hhb, 4, 5i, “enthuse”i, hhb, 4, 6i, “call”i, hhd, 1, 2i, “buy”i, hhd, 4, 7i, “buy”i] Recall that q2 returns further inner indexes for the tasks associated with each person. The two halves of the union have different static indexes for the tasks b and d, because they arise from different comprehensions in the source term. Furthermore, the dynamic index now consists of two id fields (x.id and y.id) in each half of the union. Thus, joining the tasks field of q2 to the outer index of q3 correctly associates each task with the appropriate outlier. Note that each of the queries q1 , q2 , q3 produces records that contain other records as fields. This is not strictly allowed by SQL, but it is straightforward to simulate such nested records by rewriting to a query with no nested collections in its result type; this is similar to Van den Bussche’s simulation [31]. However, this approach incurs extra storage and query-processing cost. Later in the paper, we explore an alternative approach which collapses the indexes at each level to a pair ha, ii of static index and a single “surrogate” integer, similarly to Ferry’s approach [12]. For example, using this approach we could represent the results of q2 and q3 as follows: r20 = [hha, 1i, hname = “Bert”, tasks = hb, 1iii, hha, 4i, hname = “Erik”, tasks = hb, 2iii, hha, 4i, hname = “Fred”, tasks = hb, 3iii, hha, 1i, hname = “Pat”, tasks = hd, 1iii, hha, 4i, hname = “Sue”, tasks = hd, 2iii] r30 = [hhb, 1i, “build”i, hhb, 2i, “call”i, hhb, 2i, “enthuse”i, hhb, 3i, “call”i, hhd, 1i, “buy”i, hhd, 2i, “buy”i] The rest of this paper gives the details of the shredding translation, explains how to stitch the results of shredded queries back together, and shows how to use row_number to avoid the space overhead of indexes. We will return to the above example throughout the paper, and we will use Qorg , Q and other queries based on this example in the experimental evaluation. 4. SHREDDING TRANSLATION As a pre-processing step, we annotate each comprehension body in a normalised term with a unique name a — the static component of an index. We write the annotations as superscripts, for example: ~ where X) returna M for (G C ::= returna hI, N i ~ where X) C | for (G G ::= x ← t N ::= X | R | I −−−→ R ::= h` = N i ~ | empty L X ::= x.` | c(X) I, J ::= a  d d ::= out | in A comprehension is now constructed from a sequence of genera~ where X) followed by a body of the tor clauses of the form for (G form returna hI, N i. Each level of nesting gives rise to such a generator clause. The body always returns a pair hI, N i of an outer index I, denoting where the result values from the shredded query should be spliced into the final nested result, and a (flat) inner term N . 
Records are restricted to contain inner terms, which are either base types, records, or indexes, which replace nested multisets. We assume a distinguished top level static index >, which allows us to treat all levels uniformly. Each shredded term is associated with an outer index out and an inner index in. (In fact out only appears in the left component of a comprehension body, and in only appears in the right component of a comprehension body. These properties will become apparent when we specify the shredding transformation on terms.) 4.1 Shredding types and terms We use paths to point to parts of types. Paths p ::=  | ↓.p | `.p The empty path is written . A path p can be extended by traversing a bag constructor (↓.p) or selecting a label (`.p). We will sometimes write p.↓ for the path p with ↓ appended at the end and similarly for p.`; likewise, we will write p.~ ` for the path p with all the labels of ~ ` appended. The function paths(A) defines the set of paths to bag types in a type A: paths(O) = {} Sn paths(h`i : Ai in i=1 {`i .p | p ← paths(Ai )} i=1 ) = paths(Bag A) = {} ∪ {↓.p | p ← paths(A)} We now define a shredding translation on types. This is defined in terms of the inner shredding TAU, a flat type that represents the contents of a bag. TOU = O n Th`i : Ai in i=1 U = h`i : TAi Uii=1 TBag AU = Index Un VLWp = V i=1 Ci W?a,p ? Vh`i = Mi in i=1 Wa,`j .p b ~ Vfor (G where X) return M W?a, ~ where X) returnb M W? Vfor (G a,↓.p U (VLW?>,p ) Tx.`Ua = x.` n Tc([Xi ]n i=1 )Ua = c([TXi Ua ]i=1 ) Tempty LUa = empty VLW = concat([VCi W?a,p ]n i=1 ) = VMj W?a,p ~ where X) returnb ha  out, TM U i] = [for (G b ~ where X) C | C ← VM W? ] = [for (G n Th`i = Mi in i=1 Ua = h`i = TMi Ua ii=1 TLUa = a  in b,p Figure 4: Shredding translation on terms Given a path p ∈ paths(A), the type VAWp is the outer shredding of A at p. It corresponds to the bag at path p in A. VBag AW = Bag hIndex , TAUi VBag AW↓.p = VAWp −−→ Vh` : AiW`i .p = VAi Wp For example, consider the result type Result from Section 3. Its nesting degree is 3, and its paths are: paths(Result) = {, ↓.people., ↓.people.↓.tasks.} We can shred Result in three ways using these three paths, yielding three shredded types: A1 = VResultW A2 = VResultW↓.people. A3 = VResultW↓.people..↓.tasks. or equivalently: A1 = Bag hIndex , hdepartment : String, people : Index ii A2 = Bag hIndex , hname : String, tasks : Index ii A3 = Bag hIndex , Stringi The shredding translation on terms VLWp is given in Figure 4. This takes a term L and a path p and gives a query VLWp that computes a result of type VAWp , where A is the type of L. The auxiliary translation VM W?a,p returns the shredded comprehensions of M along path p with outer static index a. The auxiliary translation TM Ua produces a flat representation of M with inner static index a. Note that the shredding translation is linear in time and space. Observe that for emptiness tests we need only the top-level query. Continuing the example, we can shred Qcomp in three ways, yielding shredded queries: q1 = VQcomp W q2 = VQcomp W↓.people. q3 = VQcomp W↓.people..↓.tasks. 
or equivalently: q1 = for (x ← departments) returna h>  1, hdepartment = x.name, people = a  inii q2 = (for (x ← departments) for (y ← employees) where (x.name = y.dept ∧ (y.salary < 1000 ∨ y.salary > 1000000)) returnb (ha  out, hname = y.name, tasks = b  inii)) ] (for (x ← departments) for (y ← contacts) where (x.name = y.dept ∧ y.client) returnd (ha  out, hname = y.name, tasks = d  inii)) q3 = (for (x ← departments) for (y ← employees) where (x.name = y.dept ∧ (y.salary < 1000 ∨ y.salary > 1000000)) for (z ← tasks) where (z.employee = y.employee) returnc hb  out, z.taski) ] (for (x ← departments) for (y ← contacts) where (x.name = y.dept ∧ y.client) returne hd  out, “buy”i) As a sanity check, we show that well-formed normalised terms shred to well-formed shredded terms of the appropriate shredded types. We discuss other correctness properties of the shredding translation in Section 5. Typing rules for shredded terms are shown in Appendix B. T HEOREM 2. Suppose L is a normalised flat-nested query with ` L : A and p ∈ paths(A), then ` VLWp : VAWp . 4.2 Shredded packages To maintain the relationship between shredded terms and the structure of the nested result they are meant to construct, we use shredded packages. A shredded package  is a nested type with annotations, denoted (−)α , attached to each bag constructor. −−→  ::= O | h` : Âi | (Bag Â)α For a given package, the annotations are drawn from the same set. We write Â(S) to denote a shredded package with annotations drawn from the set S. We sometimes omit the type parameter when it is clear from context. for shredded packages are shown in Appendix B. Given a shredded package Â, we can erase its annotations to obtain its underlying type. erase(O) = O n erase(h`i : Âi in i=1 ) = h`i : erase(Âi )ii=1 erase((Bag Â)α ) = Bag (erase(Â)) Given a type A and a shredding function f : paths(A) → S, we can construct a shredded package Â(S). package f (A) = package f, (A) package f,p (O) = O n package f,p (h`i : Ai in i=1 ) = h`i : package f,p.`i (Ai )ii=1 package f,p (Bag A) = (Bag (package f,p.↓ (A)))f (p) Using package, we lift the type- and term-level shredding functions V−W− to produce shredded packages, where each annotation contains the shredded version of the input type or query along the path to the associated bag constructor. shred B (A) = package (VBW ) (A) − shred L (A) = package (VLW ) (A) − For example, the shredded package for the Result type from Section 3 is: Shred Result (Result) = Bag hdepartment : String, people : Bag hname : String, tasks : (Bag String)A3 iA2 iA1 where A1 , A2 , and A3 are as shown in Section 4.1. Shredding the normalised query Q0 gives the same package, except the type annotations A1 , A2 , A3 become queries q1 , q2 , q3 . Again, as a sanity check we show that erasure is the left inverse of type shredding and that term-level shredding operations preserve types. T HEOREM 3. For any type A, we have erase(shred A (A)) = A. Furthermore, if L is a closed, normalised, flat–nested query such that ` L : A then ` shred L (A) : shred A (A). 5. 5.2 A shredded value package can be stitched back together into a nested value, preserving annotations, as follows: stitch(Â) = stitch >  1 (Â) QUERY EVALUATION AND STITCHING Having shredded a normalised nested query, we can then run all of the resulting shredded queries separately. If we stitch the shredded results together to form a nested result, then we obtain the same nested result as we would obtain by running the nested query directly. 
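Concretely, for results of the shape shown in Section 3, stitching amounts to nested loops that join inner indexes against outer indexes. The following Python sketch (ours, over a hand-written fragment of r1, r2, r3, with indexes as tuples) anticipates the stitching query Qstitch of Section 5.2:

    # Results in the shape of Section 3; indexes are tuples such as ("a", 1) or ("b", 1, 2).
    r1 = [{"department": "Product", "people": ("a", 1)},
          {"department": "Sales",   "people": ("a", 4)}]
    r2 = [(("a", 1), {"name": "Bert", "tasks": ("b", 1, 2)}),
          (("a", 4), {"name": "Erik", "tasks": ("b", 4, 5)})]
    r3 = [(("b", 1, 2), "build"),
          (("b", 4, 5), "call"),
          (("b", 4, 5), "enthuse")]

    def stitch_level(outer_index, rows, rebuild_inner):
        return [rebuild_inner(w) for i, w in rows if i == outer_index]

    nested = [{"department": d["department"],
               "people": stitch_level(d["people"], r2,
                          lambda p: {"name": p["name"],
                                     "tasks": stitch_level(p["tasks"], r3, lambda t: t)})}
              for d in r1]

    # nested == [{'department': 'Product',
    #             'people': [{'name': 'Bert', 'tasks': ['build']}]},
    #            {'department': 'Sales',
    #             'people': [{'name': 'Erik', 'tasks': ['call', 'enthuse']}]}]

The implementation performs this stitching in a single pass (Section 8) rather than by the repeated scans used in this sketch.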
In this section we describe how to run shredded queries and stitch their results back together to form a nested result. 5.1 Evaluating shredded queries The semantics of shredded queries SJ−K is given in Figure 5. Apart from the handling of indexes, it is much like the semantics for nested queries given in Figure 2. To allow different implementations of indexes, we first define a canonical notion of index, and then parameterise the semantics by the concrete type of indexes X and a function index : Index → X mapping each canonical index to a distinct concrete index. A canonical index a  ι comprises static index a and dynamic index ι, where the latter is a list of positive natural numbers. For now we take concrete indexes simply to be canonical indexes, and index to be the identity function. We consider other definitions of index in Section 6. The current dynamic index ι is threaded through the semantics in tandem with the environment ρ. The former encodes the position of each of the generators in the current comprehension and allows us to invoke index to construct a concrete index. The outer index at dynamic index ι.i is ι; the inner index is ι.i. In order for a comprehension to generate dynamic indexes we use the function enum (introduced in Section 2) that takes a list of elements and returns the same list with the element number paired up with each source element. Running a shredded query yields a list of pairs of indexes and shredded values. Results s ::= [hI1 , w1 i, . . . , hIm , wm i] Flat values w ::= c | h`1 = w1 , . . . , `n = wn i | I Given a shredded package Â(S) and a function f : S → T , we can map f over the annotations to obtain a new shredded package Â0 (T ) such that erase(Â) = erase(Â0 ). pmap f (O) = O n pmap f (h`i : Âi in i=1 ) = h`i : pmap f (Âi )ii=1 pmap f ((Bag Â)α ) = ((Bag pmap f (Â)))f (α) The semantics of a shredded query package is a shredded value package containing indexed results for each shredded query. For each type A we define HJAK = shred A (A) and for each flat–nested, closed ` L : A we define HJLKA : HJAK as pmap SJ−K (shred L (A)). In other words, we first construct the shredded query package shred L (A), then apply the shredded semantics SJqK to each query q in the package. For example, here is the shredded package that we obtain after running the normalised query Qcomp from Section 2.2: HJQcomp KA = Bag hdepartment : String, people : Bag hname : String, tasks : (Bag String)r3 ir2 ir1 where r1 , r2 , and r3 are as in Section 3 except that indexes are of the form a  1.2.3 instead of ha, 1, 2, 3i. Stitching shredded query results together stitch c (O) = c n stitch r (h`i : Âi in i=1 ) = h`i = stitch r.`i (Âi )ii=1 stitch I ((Bag Â)s ) = [(stitch w (Â)) | hI, wi ← s] The flat value parameter w to the auxiliary function stitch w (−) specifies which values to stitch along the current path. Resuming our running example, once the results r1 : A1 , r2 : A2 , r3 : A3 have been evaluated on the database, they are shipped back to the host system where we can run the following code inmemory to stitch the three tables together into a single value: the result of the original nested query. The code for this query Qstitch follows the same idea as the query Qorg that constructs the nested organisation from Σ. 
for (x ← r1 ) return (hdepartment = x.name, people = for (hi, yi ← r2 ) where (x.people = i)) return (hname = y.name, tasks = for (hj, zi ← r3 ) where (y.tasks = j) return zi)i) We can now state our key correctness property: evaluating shredded queries and stitching the results back together yields the same results as evaluating the original nested query directly. T HEOREM 4. If ` L : Bag A then: stitch(HJLKBag A ) = N JLK P ROOF S KETCH . We omit the full proof due to space limits; it is available in the full version of this paper. The proof introduces several intermediate definitions. Specifically, we consider an annotated semantics for nested queries in which each collection element is tagged with an index, and we show that this semantics is consistent with the ordinary semantics if the annotations are erased. We then prove the correctness of shredding and stitching with respect to the annotated semantics, and the desired result follows. 6. INDEXING SCHEMES So far, we have worked with canonical indexes of the form a  1.2.3. These could be represented in SQL by using multiple columns (padding with NULLs if necessary) since for a given query the length of the dynamic part is bounded by the number of for-comprehensions in the query. This imposes space and running time overhead due to constructing and maintaining the indexes. Instead, in this section we consider alternative, more compact indexing schemes. We can define alternative indexing schemes by varying the index parameter of the shredded semantics (see Section 5.1). Not all possible instantiations are valid. To identify those that are, we first define a function for computing the canonical indexes of a nested query result. IJLK = IJLKε,1 U n IJ n C i Kρ,ι = concat([IJCi Kρ,ι ]i=1 ) i=1 n IJh`i = Mi in i=1 Kρ,ι = concat([IJMi Kρ,ι ]i=1 ) IJXKρ,ι = [ ] a IJfor ([xi ← ti ]n i=1 where X) return M Kρ,ι = concat([a  ι.j :: IJM Kρ[xi 7→ri ]n ,ι.j i=1 | hj, ~ ri ← enum([~ r | [ri ← Jti K]n i=1 , N JXKρ[xi 7→ri ]n ])]) i=1 SJLK = SJLKε,1 n SJh` = N in i=1 Kρ,ι = h`i = SJNi Kρ,ι ii=1 SJXKρ,ι = N JXKρ U n SJ n i=1 Ci Kρ,ι = concat([SJCi Kρ,ι ]i=1 ) SJfor ([xi ← ti ]n where X) CK = concat([SJCKρ[xi 7→ri ]n ρ,ι i=1 i=1 ,ι.j SJa  outKρ,ι.i = index (a  ι) SJa  inKρ,ι.i = index (a  ι.i) SJreturna N Kρ,ι = [SJN Kρ,ι ] | hj, ~ ri ← enum([~ r | [ri ← Jti K]n i=1 , N JXKρ[xi 7→ri ]n ])]) i=1 Figure 5: Semantics of shredded queries An indexing function index : Index → X is valid with respect to the closed nested query L if it is injective and defined on every canonical index in IJLK. The only requirement on indexes in the proof of Theorem 4 is that index is valid. We consider two alternative valid indexing schemes: natural and flat indexes. 6.1 Natural indexes Natural indexes are synthesised from row data. In order to generate a natural index for a query every table must have a key, that is, a collection of fields guaranteed to be unique for every row in the table. For sets, this is always possible by using all of the field values as a key; this idea is used in Van den Bussche’s simulation for sets [31]. However, for bags this is not always possible, so using natural indexes may require adding extra key fields. Given a table t, let key t be the function that given a row r of t returns the key fields of r. We now define a function to compute the list of natural indexes for a query L. 
I \ JLK = I \ JLKε U \ n I\J n i=1 Ci Kρ = concat([I JCi Kρ ]i=1 ) \ JM K ]n ) I \ Jh`i = Mi in K = concat([I ρ ρ i i=1 i=1 I \ JXKρ = [ ] a I \ Jfor ([xi ← ti ]n i=1 where X) return M Kρ = \ JM K concat([a  hkey ti (ri )in :: I ρ[xi 7→ri ]n i=1 i=1 | [ri ← Jti K]n i=1 , N JXKρ[xi 7→ri ]n ]) i=1 If a  ι is the i-th element of IJLK, then index \L (a  ι) is defined as the i-th element of I \ JLK. The natural indexing scheme is defined by setting index = index \L . An advantage of natural indexes is that they can be implemented in plain SQL, so for a given comprehension all where clauses can be amalgamated (using the ∧ operator) and no auxiliary subqueries are needed. The downside is that the type of a dynamic index may still vary across the component comprehensions of a shredded query, complicating implementation of the query (due to the need to pad some subqueries with null columns) and potentially decreasing performance due to increased data movement. We now consider an alternative, in which row_number is used to generate dynamic indexes. 6.2 Flat indexes and let-insertion The idea of flat indexes is to enumerate all of the canonical dynamic indexes associated with each static index and use the enumeration as the dynamic index. Let ι be the i-th element of the list [ι0 | a  ι0 ← IJLK], then index [L (a  ι) = ha, ii. The flat indexing scheme is defined by setting index = index [L . Let I [ JLK = [index [L (I) | I ← IJLK] and let S [ JLK be SJLK where index = index [L . In this section, we give a translation called let-insertion that uses let-binding and an index primitive to manage flat indexes. In the next section, we take the final step from this language to SQL. Our semantics for shredded queries uses canonical indexes. We now specify a target language providing flat indexes. In order to do so, we introduce let-bound sub-queries, and translate each compre- hension into the following form: −−→ let q = for (Gout where Xout ) return Nout in −→ for (Gin where Xin ) return Nin The special index expression is available in each loop body, and is bound to the current index value. Following let-insertion, the types are as before, except indexes are represented as pairs of integers ha, di where a is the static component and d is the dynamic component. Types A, B ::= Bag hhInt, Inti, F i −−→ Flat types F ::= O | h` : F i | hInt, Inti The syntax of terms is adapted as follows: U ~ Query terms L, M ::= C Comprehensions Subqueries Data sources Generators Inner terms Record terms Base terms C ::= let q = S in S 0 ~ where X) return N S ::= for (G u ::= t | q G ::= x ← u N ::= X | R | index −−−→ R ::= h` = N i ~ | empty L X ::= x.~ ` | c(X) The semantics of let-inserted queries is given in Figure 6. Rather than maintaining a canonical index, it generates a flat index for each subquery. We first give the translation on shredded types as follows: L(O) = O −−−−−→ −−→ L(h` : F i) = h` : L(F )i L(Index ) = hInt, Inti L(Bag hIndex , F i) = Bag hhInt, Inti, L(F )i For example: L(A2 ) = Bag hhInt, Inti, hname : String, tasks : hInt, Intiii Without loss of generality we rename all the bound variables in our source query to ensure that all bound variables have distinct names, and that none coincides with the distinguished name z used for let-bindings. 
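To recap the flat indexing scheme in executable form: each canonical index a▹ι is replaced by the pair ⟨a, i⟩, where i is the position of ι among the dynamic indexes carrying the same static tag a, in the order in which they occur. A Python sketch (ours):

    def flat_index_map(canonical_indexes):
        # canonical_indexes: list of (static_tag, dynamic_tuple) pairs, e.g. ("b", (1, 2))
        position = {}    # next position to hand out, per static tag
        mapping = {}     # canonical index -> (static tag, flat position)
        for a, iota in canonical_indexes:
            position[a] = position.get(a, 0) + 1
            mapping[(a, iota)] = (a, position[a])
        return mapping

    # The tasks-level indexes from Section 3, in query-result order:
    m = flat_index_map([("b", (1, 2)), ("b", (4, 5)), ("b", (4, 6)),
                        ("d", (1, 2)), ("d", (4, 7))])
    assert m[("b", (4, 6))] == ("b", 3) and m[("d", (4, 7))] == ("d", 2)

Applied to the tasks-level indexes of Section 3, this reproduces the surrogate values shown in r2′ and r3′.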
The let-insertion translation L is defined in Figure 7, where we use the following auxiliary functions: expand(x, t) = h`i = x.`i in i=1 −−→ where Σ(t) = Bag h` : Ai ~ where X) C) = G ~ :: gens C gens (for (G gens (returna N ) = [ ] ~ where X) C) = X :: conds C conds (for (G conds (returna N ) = [ ] ~ where X) C) = body C body (for (G body (returna N ) = N Each comprehension is rearranged into two sub-queries. The first generates the outer indexes. The second computes the results. The translation sometimes produces n-ary projections in order to refer to values bound by the first subquery inside the second. LJLK = LJLKε U m LJ m j=1 Cj Kρ = concat([LJCj Kρ ]j=1 ) m LJh`j = Nj im j=1 Kρ,i = h`j = LJNj Kρ,i ij=1 LJXKρ,i = N JXKρ LJtKρ = JtK LJqKρ = ρ(q) LJindexKρ,i = i LJlet q = Sout in Sin Kρ = LJSin Kρ[q7→LJSout Kρ ] LJfor ([xj ← uj ]m ri ← enum([~ r | [rj ← LJuj Kρ ]m j=1 where X) return N Kρ = [LJN Kρ[xj 7→rj ]m ,i | hi, ~ j=1 , N JXKρ[xj 7→rj ]m ])] j=1 j=1 Figure 6: Semantics of let-inserted shredded queries U Un L( n i=1 Ci ) = i=1 L(Ci ) −−→ −→ L(C) = let q = (for (Gout where Xout ) return hRout , indexi) in for (z ← q, Gin where Ly~ (Xin )) return Ly~ (N ) −−→ −→ −−−→ −−→ N = body C where Gout = concat (init (gens C)) y = t = Gout Gin = last (gens C) V −−→ Xout = init (conds C) Rout = hexpand(yi , ti )in X n = length Gout in = last (conds C) i=1  Ly~ (empty L) = empty (Ly~ (L)) U U x.`, if x ∈ / {y1 , . . . , yn } Ly~ ( n Ci ) = n Ly~ (Ci ) Ly~ (x.`) = i=1 i=1 z.1.i.`, if x = yi ~ where X) returna ha, N i) = for (G ~ where Ly~ (X)) Ly~ (for (G Ly~ (c(X1 , . . . , Xm )) = c(Ly~ (X1 ), . . . , Ly~ (Xm )) return ha, Ly~ (N )i m Ly~ (h`j = Xj im ~ (Xj )ij=1 j=1 ) = h`j = Ly Ly~ (a  d) = ha, L(d)i L(out) = z.2 L(in) = index Figure 7: The let-insertion translation For example, applying L to q1 from Section 4.2 yields: for (x ← departments) return hh>, 1i, hdept = x.name, people = indexii and q2 becomes: (let q = for (x ← departments) return hhdept = x.namei, indexi in for (z ← q, y ← employees) where (z.1.1.name = y.dept ∧ (y.salary < 1000 ∨ y.salary > 1000000)) return (hha, z.2i, hname = y.name, tasks = hb, indexiii)) ] (let q = for (x ← departments) return hhdept = x.namei, indexi in for (z ← q, y ← contacts) where (z.1.1.name = y.dept ∧ y.client) return (hha, z.2i, hname = y.name, tasks = hd, indexiii)) As a sanity check, we show that the translation is type-preserving: T HEOREM 5. Given shredded query ` M : Bag hIndex , F i, then ` L(M ) : L(Bag hIndex , F i). To prove the correctness of let-insertion, we need to show that the shredded semantics and let-inserted semantics agree. In the statement of the correctness theorem, recall that S [ JLK refers to the version of SJLK using index = index [L . T HEOREM 6. Suppose ` L : A and VLWp = M . Then S [ JM K = LJL(M )K. P ROOF SKETCH . The high-level idea is to separate results into data and indexes and compare each separately. It is straightforward, albeit tedious, to show that the different definitions are equal if we replace all dynamic indexes by a dummy value. It then remains to show that the dynamic indexes agree. The pertinent case is the translation of a comprehension: b ~ i ← Xi )]n ~ [for (G i=1 for (Gin ← Xin ) return ha  out, N i which becomes let q = Sout in Sin for suitable Sout and Sin . The dynamic indexes computed by Sout coincide exactly with those of I [ JLK at static index a, and the dynamic indexes computed by Sin , if there are any, coincide exactly with those of I [ JLK. 7. 
CONVERSION TO SQL Earlier translation stages have employed nested records for convenience, but SQL does not support nested records. At this stage, we eliminate nested records from queries. For example, we can represent a nested record ha = hb = 1, c = 2i, d = 3i as a flat record ha b = 1, a c = 2, d = 3i. The (standard) details are presented in Appendix E. In order to interpret shredded, flattened, let-inserted terms as SQL, we interpret index generators using SQL’s OLAP facilities. ~ Query terms L ::= (union all) C Comprehensions C ::= with q as (S) C | S 0 ~ where X Subqueries S ::= select R from G Data sources u ::= t | q Generators G ::= u as x ~ Inner terms N ::= X | row_number() over (order by X) −−−→ Record terms R ::= N as ` ~ | empty L Base terms X ::= x.` | c(X) Continuing our example, L(q1 ) and L(q2 ) translate to q10 and q20 where: q10 = select x.name as i1_name, row_number() over (order by x.name) as i1_people from departments as x 0 q2 = (with q as (select x.name as i1_name, row_number() over (order by x.name) as i2 from departments as x) select a as i1_1, z.i2 as i1_2, y.name as i2_name, b as i2_tasks_1, row_number() over (order by z.i1_name, z.i2, y.dept, y.employee, y.salary) as i2_tasks_2 from employees as y, q as z where (z.i1_name = y.dept ∧ (y.salary < 1000 ∨ y.salary > 1000000))) union all (with q as (select x.name as i1_name, row_number() over (order by x.name) as i2 from departments as x) select a as i1_1, z.i2 as i1_2, y.name as i2_name, d as i2_tasks_1, row_number() over (order by z.i1_name, z.i2, y.dept, y.name, y.client) as i2_tasks_2 from contacts as y, q as z where (z.i1_name = y.dept ∧ y.client)) Modulo record flattening, the above fragment of SQL is almost isomorphic to the image of the let-insertion translation. The only significant difference is the use of row_number in place of index. Each instance of index in the body R of a subquery of the form QF1: SELECT e.emp FROM employees e WHERE e.salary > 10000 QF2: SELECT e.emp, t.tsk FROM employees e, tasks t WHERE e.emp = t.emp QF3: SELECT e1.emp, e2.emp FROM employees e1, employees e2 WHERE e1.dpt = e2.dpt AND e1.salary = e2.salary AND e1.emp <> e2.emp QF4: (SELECT t.emp FROM tasks t WHERE t.tsk = ’abstract’) UNION ALL (SELECT e.emp FROM employees WHERE e.salary > 50000) QF5: (SELECT t.emp FROM tasks t WHERE t.tsk = ’abstract’) MINUS (SELECT e.emp FROM employees e WHERE e.salary > 50000) QF6: ((SELECT t.emp FROM tasks t WHERE t.tsk = ’abstract’) UNION ALL (SELECT e.emp FROM employees e WHERE e.salary > 50000)) MINUS ((SELECT t.emp FROM tasks t WHERE t.tsk = ’enthuse’) UNION ALL (SELECT e.emp FROM employees e WHERE e.salary > 10000)) Figure 8: SQL versions of flat queries used in experiments −−−→ for (x ← t where X) return R is simulated by a term of the form − → row_number() over (order by x.`), where: xi : h`i,1 : Ai,1 , . . . , `i,mi : Ai,mi i − → x.` = x1 .`1,1 , . . . , x1 .`1,m1 , . . . , xn .`n,1 , . . . , xn .`n,mn A possible concern is that row_number is non-deterministic. It computes row numbers ordered by the supplied columns, but if there is a tie, then it is free to order the equivalent rows in any order. However, we always order by all columns of all tables referenced from the current subquery, so our use of row_number is always deterministic. (An alternative could be to use nonstandard features such as PostgreSQL’s OID or MySQL’s ROWNUM, but sorting would still be necessary to ensure consistency across inner and outer queries.) 8. 
EXPERIMENTAL EVALUATION Ferry’s loop-lifting translation has been implemented in Links previously by Ulrich, a member of the Ferry team [30], following the approach described by Grust et al. [12] to generate SQL:1999 algebra plans, then calling Pathfinder [14] to optimise and evaluating the resulting SQL on a PostgreSQL database. We have also implemented query shredding in Links, running against PostgreSQL; our implementation1 does not use Pathfinder. We performed initial experiments with a larger ad hoc query benchmark, and developed some optimisations, including inlining certain WITH clauses to unblock rewrites, using keys for row numbering, and implementing stitching in one pass to avoid construction of intermediate inmemory data structures that are only used once and then discarded. We report on shredding with all of these optimisations enabled. Benchmark queries. There is no standard benchmark for queries returning nested results. In particular, the popular TPC1 http://github.com/slindley/links/tree/shredding Q1 : for (d ← departments) return (hname = d.name, employees = employeesOfDept d, contacts = contactsOfDept di Q2 : for (d ← Q1) where (all d.employees (λx.contains x.tasks “abstract”)) return hdept = d.depti Q3 : for (e ← employees) return hname = e.name, task = tasksOfEmp(e)i Q4 : for (d ← departments) return hdept = d.dept, employees = for (e ← employees) where (d.dept = e.dept) return e.employeei Q5 : for (t ← tasks) return ha = t.task, b = employeesByTask ti Q6 : for (d ← Q1) return (hdepartment = d.name, people = getTasks(outliers(d.employees)) (λy. y.tasks) ] getTasks(clients(d.contacts)) (λy. return “buy”)i) Figure 9: Nested queries used in experiments H benchmark is not suitable for comparing shredding and looplifting: the TPC-H queries do not return nested results, and can be run directly on any SQL-compliant RDBMS, so neither approach needs to do any work to produce an SQL query. We selected twelve queries over the organisation schema described in Section 3 to use as a benchmark. The first six queries, named QF1–QF6, return flat results and can be translated to SQL using existing techniques, without requiring either shredding or loop-lifting. We considered these queries as a sanity check and in order to measure the overhead introduced by loop-lifting and shredding. Figure 8 shows the SQL versions of these queries. We also selected six queries Q1–Q6 that do involve nesting, either within query results or in intermediate stages. They are shown in Figure 9; they use the auxiliary functions defined in Section 3. Q1 is the query Qorg that builds the nested organisation view from Section 3. Q2 is a flat query that computes a flat result from Q1 consisting of all departments in which all employees can do the “abstract” task; it is a typical example of a query that employs higherorder functions. Q3 returns records containing each employee and the list of tasks they can perform. Q4 returns records containing departments and the set of employees in each department. Q5 returns a record of tasks paired up with sets of employees and their departments. Q6 is the outliers query Q introduced in Section 3. Experimental results. We measured the query execution time for all queries on randomly generated data, where we vary the number of departments in the organisation from 4 to 4096 (by powers of 2). Each department has on average 100 employees and each employee has 0–2 tasks, and the largest (4096 department) database was approximately 500MB. 
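For reference, data of this shape can be generated with a few lines of Python; the text above fixes only the averages, so the particular ranges and distributions in this sketch are our own choices for illustration (contacts are omitted).

    import random

    def generate(num_departments, seed=0):
        rng = random.Random(seed)
        task_names = ["abstract", "build", "call", "dissemble", "enthuse"]
        departments, employees, tasks = [], [], []
        for d in range(num_departments):
            dept = "dept_%d" % d
            departments.append({"id": d, "name": dept})
            for e in range(rng.randint(80, 120)):          # roughly 100 employees per department
                name = "emp_%d_%d" % (d, e)
                employees.append({"id": len(employees), "name": name,
                                  "dept": dept, "salary": rng.randint(500, 2000000)})
                for t in rng.sample(task_names, rng.randint(0, 2)):   # 0-2 tasks per employee
                    tasks.append({"id": len(tasks), "employee": name, "task": t})
        return departments, employees, tasks

    ds, es, ts = generate(4)
    print(len(ds), len(es), len(ts))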
Although the test datasets are moderate in size, they suffice to illustrate the asymptotic trends in the comparative performance of the different techniques. All tests were performed using PostgreSQL 9.2 running on a MacBook Pro with 4-core 2.6GHz CPU, 8GB RAM and 500GB SSD storage, with the database server running on the same machine (hence, negligible network latency).

We measure total time to translate a nested query to SQL, evaluate the resulting SQL queries, and stitch the results together to form a nested value, measured from Links. We evaluated each query using query shredding and loop-lifting, and for the flat queries we also measured the time for Links' default (flat) query evaluation. The experimental results for the flat queries are shown in Figure 10 and for the nested queries in Figure 11. Note that both axes are logarithmic. All times are in milliseconds; the times are medians of 5 runs. The times for small data sizes provide a comparison of the overhead associated with shredding, loop-lifting or Links' default query normalisation algorithm.

Figure 10: Experimental results (flat queries). [Plots of query time (msec) against #departments for QF1-QF6, comparing shredding, loop-lifting and default evaluation.]

Figure 11: Experimental results (nested queries). [Plots of query time (msec) against #departments for Q1-Q6, comparing shredding and loop-lifting.]

Discussion. We should re-emphasise that Ferry (and Ulrich's loop-lifting implementation for Links) supports grouping and aggregation features that are not handled by our translation. We focused on queries that both systems can handle, but Ferry has a clear advantage for grouping and aggregation queries or when ordering is important (e.g. sorting or top-k queries). Ferry is based on list semantics, while our approach handles multiset semantics. So, some performance differences may be due to our exploitation of multiset-based optimisations that Ferry (by design) does not exploit.

The results for flat queries show that shredding has low per-query overhead in most cases compared to Links' default flat query evaluation, but the queries it generates are slightly slower. Loop-lifting, on the other hand, has a noticeable per-query overhead, likely due to its invocation of Pathfinder and associated serialisation costs. In some cases, such as QF4 and QF5, loop-lifting is noticeably slower asymptotically; this appears to be due to additional sorting needed to maintain list semantics. We encountered a bug that prevented loop-lifting from running on query QF6; however, shredding had negligible overhead for this query. In any case, the overhead of either shredding or loop-lifting for flat queries is irrelevant: we can simply evaluate such queries using Links' default flat query evaluator. Nevertheless, these results show that the overhead of shredding for such queries is not large, suggesting that the queries it generates are similar to those currently generated by Links. (Manual inspection of the generated queries confirms this.)

The results for nested queries are more mixed. In most cases, the overhead of loop-lifting is dominant on small data sizes, which again suggests that shredding may be preferable for OLTP or Web workloads involving rapid, small queries. Loop-lifting scales poorly on two queries (Q1 and Q6), and did not finish within 1 minute even for small data sizes. Both Q1 and Q6 involve 3 levels of nesting and in the innermost query, loop-lifting generates queries with Cartesian products inside OLAP operators such as DENSE_RANK or ROW_NUMBER that Pathfinder was not able to remove. The queries generated by shredding in these cases avoid this pathological behaviour. For other queries, such as Q2 and Q4, loop-lifting performs better but is still slower than shredding as data size increases. Comparison of the queries generated by loop-lifting and shredding reveals that loop-lifting encountered similar problems with hard-to-optimise OLAP operations. Finally, for Q3 and Q5, shredding is initially faster (due to the overhead of loop-lifting and calling Pathfinder) but as data size increases, loop-lifting wins out. Inspection of these generated queries reveals that the queries themselves are similar, but the shredded queries involve more data movement. Also, loop-lifting returns sorted results, so it avoids in-memory hashing or sorting while constructing the nested result. It should be possible to incorporate similar optimisations into shredding to obtain comparable performance.

Our experiments show that shredding performs similarly or better than loop-lifting on our (synthetic) benchmark queries on moderate (up to 500MB) databases. Further work may need to be done to investigate scalability to larger databases or consider more realistic query benchmarks.

9. RELATED AND FUTURE WORK

We have discussed related work on nested queries, Links, Ferry and LINQ in the introduction. Besides Cooper [7], several authors have recently considered higher-order query languages. Benedikt et al. [2, 32] study higher-order queries over flat relations.
The Links approach was adapted to LINQ in F# by Cheney et al. [6]. Higher-order features are also being added to XQuery 3.0 [25]. Research on shredding XML data into relations and evaluating XML queries over such representations [18] is superficially similar to our work in using various indexing or numbering schemes to handle nested data. Grust et al.’s work on translating XQuery 32 64 128 to SQL via Pathfinder [14] is a mature solution to this problem, and Grust et al. [13] discuss optimisations in the presence of unordered data processing in XQuery. However, XQuery’s ordered data model and treatment of node identity would block transformations in our algorithm that assume unordered, pure operations. We can now give a more detailed comparison of our approach with the indexing strategies in Van den Bussche’s work and in Ferry. Van den Bussche’s approach uses natural indexes (that is, n-tuples of ids), but does not preserve multiset semantics. Our approach preserves multiplicity and can use natural indexes, we also propose a flat indexing scheme based on row_number. In Ferry’s indexing scheme, the surrogate indexes only link adjacent nesting levels, whereas our indexes take information at all higher levels into account. Our flat indexing scheme relies on this property, and Ferry’s approach does not seem to be an instance of ours (or vice versa). Ferry can generate multiple SQL:1999 operations and Pathfinder tries to merge them but cannot always do so. Our approach generates row_number operations only at the end, and does not rely on Pathfinder. Finally, our approach uses normalisation and tags parts of the query to disambiguate branches of unions. Loop-lifting has been implemented in Links by Ulrich [30], and Grust and Ulrich [16] recently presented techniques for supporting higher-order functions as query results. By using Ferry’s looplifting translation and Pathfinder, Ulrich’s system also supports list semantics and aggregation and grouping operations; to our knowledge, it is an open problem to either prove their correctness or adapt these techniques to fit our approach. Ferry’s approach supports a list-based semantics for queries, while we assume a bag-based semantics (matching SQL’s default behaviour). Either approach can accommodate set-based semantics simply by eliminating duplicates in the final result. In fact, however, we believe the core query shredding translation (Sections 4–6) works just as well for a list semantics. The only parts that rely on unordered semantics are normalisation (Section 2.2) and conversion to SQL (Section 7). We leave these extensions to future work. Our work is also partly inspired by work on unnesting for nested data parallelism. Blelloch and Sabot [3] give a compilation scheme for NESL, a data-parallel language with nested lists; Suciu and Tannen [27] give an alternative scheme for a nested list calculus. This work may provide an alternative (and parallelisable) implementation strategy for Ferry’s list-based semantics [12]. 10. CONCLUSION Combining efficient database access with high-level programming abstractions is challenging in part because of the limitations of flat database queries. Currently, programmers must write flat queries and manually convert the results to a nested form. This damages declarativity and maintainability. Query shredding can help to bridge this gap. 
10. CONCLUSION

Combining efficient database access with high-level programming abstractions is challenging in part because of the limitations of flat database queries. Currently, programmers must write flat queries and manually convert the results to a nested form. This damages declarativity and maintainability. Query shredding can help to bridge this gap.

Although it is known from prior work that query shredding is possible in principle, and some implementations (such as Ferry) support this, getting the details right is tricky, and can lead to queries that are not optimised well by the relational engine. Our contribution is an alternative shredding translation that handles queries over bags and delays use of OLAP operations until the final stage. Our translation compares favourably to loop-lifting in performance, and should be straightforward to extend and to incorporate into other language-integrated query systems.

Acknowledgements. We thank Ezra Cooper, Torsten Grust, and Jan Van den Bussche for comments. We are very grateful to Alex Ulrich for sharing his implementation of Ferry in Links and for extensive discussions. This work was supported by EPSRC grants EP/J014591/1 and EP/K034413/1 and a Google Research Award.

11. REFERENCES

[1] A. Beckmann. Exact bounds for lengths of reductions in typed λ-calculus. Journal of Symbolic Logic, 66:1277–1285, 2001.
[2] M. Benedikt, G. Puppis, and H. Vu. Positive higher-order queries. In PODS, pages 27–38, 2010.
[3] G. E. Blelloch and G. W. Sabot. Compiling collection-oriented languages onto massively parallel computers. J. Parallel Distrib. Comput., 8:119–134, February 1990.
[4] P. Buneman, L. Libkin, D. Suciu, V. Tannen, and L. Wong. Comprehension syntax. SIGMOD Record, 23, 1994.
[5] P. Buneman, S. A. Naqvi, V. Tannen, and L. Wong. Principles of programming with complex objects and collection types. Theor. Comput. Sci., 149(1):3–48, 1995.
[6] J. Cheney, S. Lindley, and P. Wadler. A practical theory of language-integrated query. In ICFP, pages 403–416. ACM, 2013.
[7] E. Cooper. The script-writer's dream: How to write great SQL in your own language, and be sure it will succeed. In DBPL, pages 36–51, 2009.
[8] E. Cooper, S. Lindley, P. Wadler, and J. Yallop. Links: web programming without tiers. In FMCO, volume 4709 of LNCS, 2007.
[9] P. de Groote. On the strong normalisation of intuitionistic natural deduction with permutation-conversions. Inf. Comput., 178(2):441–464, 2002.
[10] L. Fegaras and D. Maier. Optimizing object queries using an effective calculus. ACM Trans. Database Syst., 25:457–516, December 2000.
[11] T. Grust, M. Mayr, J. Rittinger, and T. Schreiber. Ferry: Database-supported program execution. In SIGMOD, June 2009.
[12] T. Grust, J. Rittinger, and T. Schreiber. Avalanche-safe LINQ compilation. PVLDB, 3(1), 2010.
[13] T. Grust, J. Rittinger, and J. Teubner. eXrQuy: Order indifference in XQuery. In ICDE, pages 226–235, 2007.
[14] T. Grust, J. Rittinger, and J. Teubner. Pathfinder: XQuery off the relational shelf. IEEE Data Eng. Bull., 31(4):7–14, 2008.
[15] T. Grust, S. Sakr, and J. Teubner. XQuery on SQL hosts. In VLDB, pages 252–263, 2004.
[16] T. Grust and A. Ulrich. First-class functions for first-order database engines. In DBPL, 2013.
[17] W. Kim. On optimizing an SQL-like nested query. ACM Trans. Database Syst., 7(3):443–469, Sept. 1982.
[18] R. Krishnamurthy, R. Kaushik, and J. F. Naughton. XML-SQL query translation literature: The state of the art and open problems. In Xsym, pages 1–18, 2003.
[19] S. Lindley. Extensional rewriting with sums. In TLCA, 2007.
[20] S. Lindley and J. Cheney. Row-based effect types for database integration. In TLDI, 2012.
[21] S. Lindley and I. Stark. Reducibility and ⊤⊤-lifting for computation types. In TLCA, 2005.
[22] E. Meijer, B. Beckman, and G. M. Bierman. LINQ: reconciling object, relations and XML in the .NET framework. In SIGMOD, 2006.
[23] C. Olston, B. Reed, U. Srivastava, R. Kumar, and A. Tomkins. Pig latin: a not-so-foreign language for data processing. In SIGMOD, pages 1099–1110, 2008.
[24] B. Pierce. Types and Programming Languages. MIT Press, 2002.
[25] J. Robie, D. Chamberlin, M. Dyck, and J. Snelson. XQuery 3.0: An XML query language. W3C Proposed Recommendation, October 2013. http://www.w3.org/TR/xquery-30/.
[26] H. J. Schek and M. H. Scholl. The relational model with relation-valued attributes. Inf. Syst., 11:137–147, April 1986.
[27] D. Suciu and V. Tannen. Efficient compilation of high-level data parallel algorithms. In SPAA, pages 57–66. ACM, 1994.
[28] D. Syme, A. Granicz, and A. Cisternino. Expert F# 3.0. Apress, 2012.
[29] W. W. Tait. Intensional interpretations of functionals of finite type I. Journal of Symbolic Logic, 32(2):198–212, June 1967.
[30] A. Ulrich. A Ferry-based query backend for the Links programming language. Master's thesis, University of Tübingen, 2011. Code supplement: http://github.com/slindley/links/tree/sand.
[31] J. Van den Bussche. Simulation of the nested relational algebra by the flat relational algebra, with an application to the complexity of evaluating powerset algebra expressions. Theor. Comput. Sci., 254(1-2), 2001.
[32] H. Vu and M. Benedikt. Complexity of higher-order queries. In ICDT, pages 208–219. ACM, 2011.
[33] P. Wadler. Comprehending monads. Math. Struct. in Comp. Sci., 2(4), 1992.
[34] L. Wong. Normal forms and conservative extension properties for query languages over collection types. J. Comput. Syst. Sci., 52(3), 1996.
[35] L. Wong. Kleisli, a functional query system. J. Funct. Programming, 10(1), 2000.

APPENDIX

A. BEHAVIOUR OF VAN DEN BUSSCHE'S SIMULATION ON MULTISETS

As noted in the Introduction, Van den Bussche's simulation of nested queries via flat queries does not work properly over multisets. To illustrate the problem, consider a simple query R ∪ S, where R and S have the same schema Bag⟨A : Int, B : Bag Int⟩. Suppose R and S have the following values:

    R:  A  B            S:  A  B
        1  {1}              1  {3, 4}
        2  {2}              2  {2}

then their multiset union is:

    R ∪ S:  A  B
            1  {1}
            1  {3, 4}
            2  {2}
            2  {2}

Van den Bussche's simulation (like ours) represents these nested values by queries over flat tables, such as the following:

    R1:  A  id      R2:  id  B      S1:  A  id      S2:  id  B
         1  a            a   1           1  a            a   3
         2  b            b   2           2  b            a   4
                                                          b   2

where a, b are arbitrary distinct ids. Note however that R and S have overlapping ids, so if we simply take the union of R1 and S1, and of R2 and S2 respectively, we will get:

    Wrong1:  A  id      Wrong2:  id  B
             1  a                a   1
             2  b                b   2
             1  a                a   3
             2  b                a   4
                                 b   2

corresponding to the nested value:

    Wrong:  A  B
            1  {1, 3, 4}
            1  {1, 3, 4}
            2  {2, 2}
            2  {2, 2}

Instead, both Van den Bussche's simulation and our approach avoid clashes among ids when taking unions. Van den Bussche's simulation does this by adding two new id fields to represent the result of a union, say id1 and id2. The tuples originating from R will have equal id1 and id2 values, while those originating from S will have different id1 and id2 values. In order to do this, the simulation defines two queries, one for each result table. The first query is of the form:

    T1 = (R1 × (id1 : x, id2 : x) | x ∈ adom) ∪ (S1 × (id1 : x, id2 : x′) | x ≠ x′ ∈ adom)

and similarly

    T2 = (R2 × (id1 : x, id2 : x) | x ∈ adom) ∪ (S2 × (id1 : x, id2 : x′) | x ≠ x′ ∈ adom)

where adom is the active domain of the database (in this case, adom = {1, 2, 3, 4, a, b}) — this is of course also definable as a query. Thus, in this example, the result is of the form

    T1:  A  id  id1  id2      T2:  id  B  id1  id2
         1  a   x    x             a   1  x    x
         2  b   y    y             b   2  y    y
         1  a   z    z′            a   3  z    z′
         2  b   v    v′            a   4  w    w′
                                   b   2  v    v′

where x, y are any elements of adom (rows mentioning x and y stand for 6 instances) and z ≠ z′, w ≠ w′ and v ≠ v′ are any pairs of distinct elements of adom (rows mentioning these variables stand for 30 instances of distinct pairs from adom). This leads to an O(|adom| · |R| + |adom|² · |S|) blowup in the number of tuples. Specifically, for our example, |T1| = 72, whereas the actual number of tuples in a natural representation of R ∪ S is only 9. In a set semantics, the set value simulated by these tables is correct even with all of the extra tuples; however, for a multiset semantics, this quadratic blowup is not correct — even for our example, with |adom| = 6, the result of evaluating R ∪ S yields a different number of tuples from the result of evaluating S ∪ R, and neither represents the correct multiset in an obvious way. It may be possible (given knowledge of the query and active domain, but not the source database) to "decode" the flat query results and obtain the correct nested result, but doing so appears no easier than developing an alternative, direct algorithm.
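To make the failure concrete, the id-clash part of the example can be replayed on small Python lists (bags represented as lists). This is only an illustration of the clash, not of Van den Bussche's full simulation; the stitch function below is an ad-hoc helper written for this example.

    # Illustrative sketch of the id-clash problem when the flat encoding is
    # read with a multiset (bag) semantics. Values mirror the example above.
    R1 = [(1, "a"), (2, "b")];  R2 = [("a", 1), ("b", 2)]
    S1 = [(1, "a"), (2, "b")];  S2 = [("a", 3), ("a", 4), ("b", 2)]

    # Naive simulation of R ∪ S: union the component tables directly.
    W1 = R1 + S1          # "Wrong1"
    W2 = R2 + S2          # "Wrong2"

    def stitch(outer, inner):
        # Rebuild the nested bag: for each outer row, collect all inner values
        # whose id matches. Overlapping ids from R and S get merged wrongly.
        return [(a, [b for (i2, b) in inner if i2 == i]) for (a, i) in outer]

    print(stitch(R1, R2))   # correct encoding of R: [(1, [1]), (2, [2])]
    print(stitch(W1, W2))   # "Wrong": [(1, [1, 3, 4]), (2, [2, 2]), (1, [1, 3, 4]), (2, [2, 2])]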
B. TYPING RULES

The (standard) typing rules for λNRC queries are shown in Figure 12. The typing rules for shredded terms, packages, and values are shown in Figures 13, 14, 16 and 18.

C. QUERY NORMALISATION

Following previous work [20], we separate query normalisation into a rewriting phase and a type-directed structurally recursive function. Where we diverge from this previous work is that we extend the rewrite relation to hoist all conditionals up to the nearest comprehension (in order to simplify the rest of the development), and the structurally recursive function is generalised to handle nested data. Thus normalisation can be divided into three stages.

• The first stage performs symbolic evaluation, that is, β-reduction and commuting conversions, flattening unnecessary nesting and eliminating higher-order functions.
• The second stage hoists all conditionals up to the nearest enclosing comprehension in order to allow them to be converted to where clauses.
• The third and final stage hoists all unions up to the top level, η-expands tables and variables, and turns all conditionals into where clauses.

If M is r-SN, then we write max r (M ) for the maximum length of a reduction sequence starting from M . A rewrite relation ;r is weakly-normalising, or WN, if all terms M are weakly-normalising with respect to ;r . Similarly, a rewrite relation ;r is strongly-normalising, or SN, if all terms M are strongly-normalising with respect to ;r . For any ;r we can define a partial function nf r such that nf r (M ) is a normal form of M , if one exists; otherwise nf r (M ) is undefined. Note that there may be more than one normal form for a given M ; nf r chooses a particular normal form for each M that has one. If ;r is weakly normalising, then nf r is a total function. We now describe each normalisation stage in turn.

C.1 Symbolic evaluation

The β-rules perform symbolic evaluation, including substituting argument values for function parameters, record field projection, conditionals where the test is known to be true or false, or iteration over singleton bags. (λx. N ) M −−−−→ h` = M i.`i if true then M else N if false then M else N for (x ← return M ) N M ; M1 ; . . .
; Mn 6; A term M is strongly normalising with respect to a rewrite relation ;r , or r-SN, if every reduction sequence starting from M is finite. N [x := M ] Mi M N N [x := M ] Fundamentally, β-rules always follow the same pattern. Each is associated with a particular type constructor T , and the left-hand side always consists of an introduction form for T inside an elimination form for T . For instance, in the case of functions (the first β-rule above), the introduction form is a lambda and the elimination form is an application. Applying a β-rule eliminates T in the sense that the introduction form from the left-hand side either no longer appears or has been replaced by a term of a simpler type on the right-hand side. Each instance of a β-rule is associated with an elimination frame. An elimination frame is simply the elimination form with a designated hole [ ] that can be plugged with another expression. For instance, elimination frames for the function β-rule are of the form E[ ] = [ ] M . If we plug an introduction form λx.N into E[ ], written E[λx.N ], then we obtain the left-hand side of the associated β-rule (λx.N ) M . (This notion of an expression with a hole that can be filled by another expression is commonly used in rewriting and operational semantics for programming languages [24, ch. 19]; here, it is not essential but helps cut down the number of explicit rules, and helps highlight commonality between rules.) The elimination frames of λNRC are as follows. E[ ] ::= [ ] M | [ ].` | if [ ] then M else N | for (x ← [ ]) N The following rules express that comprehensions, conditionals, empty bag constructors, and unions can always be hoisted out of the above elimination frames. In the literature such rules are often called commuting conversions. They are necessary in order to expose all possible β-reductions. For instance, Weak normalisation and strong normalisation. Given a term M and a rewrite relation ;, we write M 6; if M is irreducible, that is no rewrite rules in ; apply to M . The term M is said to be in normal form. A term M is weakly normalising with respect to a rewrite relation ;r , or r-WN, if there exists a finite reduction sequence ;c ;c ;c ;c ;c (if M then h` = N i else N 0 ).` cannot β-reduce, but if M then h` = N i.` else N 0 .` can. E[for (x ← M ) N ] ;c E[if L then M else N ] ;c E[∅] ;c E[M1 ] M2 ] ;c for (x ← M ) E[N ] if L then E[M ] else E[N ] ∅ E[M1 ] ] E[M2 ] For example: (if L then M1 else M2 ) M ;c if L then M1 M else M2 M C ONSTANT VAR L AM Σ(c) = hO1 , . . . , On i → O0 [Γ ` Mi : Oi ]n i=1 Γ ` c(M1 , . . . , Mn ) : O0 Γ, x : A ` x : A Γ, x : A ` M : B Γ ` λx. M : A → B R ECORD A PP P ROJECT [Γ ` Mi : Ai ]n i=1 n Γ ` h`i = Mi in : h` i = Ai ii=1 i=1 Γ`M :A→B Γ`N :A Γ`MN :B IF Γ ` M : Bool Γ`N :A Γ ` N0 : A Γ ` if M then N else N 0 : A E MPTY Γ ` ∅ : Bag A Γ ` M : h`i : Ai in i=1 Γ ` M.`j : Aj S INGLETON U NION Γ`M :A Γ ` return M : Bag A Γ ` M : Bag A Γ ` N : Bag A Γ ` M ] N : Bag A TABLE F OR I S E MPTY Γ ` M : Bag A Γ, x : A ` N : Bag B Γ ` for (x ← M ) N : Bag B Γ ` M : Bag A Γ ` empty M : Bool −−→ Σ(t) = Bag h` : Oi −−→ Γ ` table t : Bag h` : Oi Figure 12: Typing rules for higher-order nested queries C ONSTANT VAR Σ(c) = hO1 , . . . , On i → O0 [Γ ` Xi : Oi ]n i=1 Γ ` c(X1 , . . . 
, Xn ) : O0 −−→ −−→ Γ, x : h` : F i ` x : h` : F i P ROJECT Γ ` x : h`i : Ai in i=1 Γ ` x.`j : Aj F OR [Σ(ti ) = Bag Ai ]n i=1 R ECORD [Γ ` Ni : Ai ]n i=1 n Γ ` h`i = Ni in : h` i = Ai ii=1 i=1 S INGLETON I NDEX Γ ` a  d : Index U NION [Γ ` Ci : Bag hIndex , F i]n i=1 ] ~ Γ` C : Bag hIndex , F i Γ ` I : Index Γ`N :F Γ ` returna hI, N i : Bag hIndex , F i I S E MPTY Γ, [xi : Ai ]n Γ, [xi : Ai ]n i=1 ` X : Bool i=1 ` C : Bag hIndex , F i n Γ ` for ([xi ← ti ]i=1 where X) C : Bag hIndex , F i Γ ` L : Bag hIndex , F i Γ ` empty L : Bool Figure 13: Typing rules for shredded terms BASE ` O(ΛS ) : O(TS ) R ECORD BAG [` Âi (ΛS ) : Â0i (TS )]n i=1 ` h`i : Âi (ΛS )in i=1 : h`i : ` Â(ΛS ) : Â0 (TS ) Â0i (TS )in i=1 `L:A erase(Â(ΛS )) = A ` (Bag Â(ΛS )) : (Bag Â0 (TS ))A We write ΛS for the set of shredded terms, and TS for the set of shredded types. Figure 14: Typing rules for shredded packages L Note that some combinations of elimination frames and rewrite rule are impossible in a well-typed term, such as if ∅ then M else N . For the purposes of reduction we treat empty like an uninterpreted constant, that is, we do reduce inside emptiness tests, but they do not in any other way interact with the reduction rules. Next we prove that ;c is strongly normalising. The proof is based on a previous proof of strong normalisation for simply-typed λ-calculus with sums [19], which generalises the >>-lifting approach [21], which in turn extends Tait’s proof of strong normalisation for simply-typed λ- calculus [29]. (→) If S[M [x : = N ]] and N are c-SN then S[(λx.M ) N ] is cSN. −−−→ ~ , are c-SN then S[h− (hi) If M ` = M i.`i ] is c-SN. (Bag −) If S[N [x : = M ]] and M are c-SN then S[for (x ← return M ) N ] is c-SN. (Bool ) If S[N ] and S[N 0 ] are c-SN then S[if true then N else N 0 ] is c-SN and S[if false then N else N 0 ] is c-SN. P ROOF. (→): By induction on maxc (S) + maxc (M ) + maxc (N ). (hi): By induction on Frame stacks. S ::= Id | S ◦ E (frame stacks) |Id | = 0 |S ◦ E| = |S| + 1 (stack length) (plugging) Id [M ] = M (S ◦ E)[M ] = S[(E[M ])] Following previous work [19] we assume variables are annotated with types. We assume variables are annotated with types. We write A ( B for the type of frame stack S, if S[M ] : B for all terms M : A. Frame stack reduction. S ;c S 0 def ⇐⇒ ⇐⇒ max S + ( c Frame stacks are closed under reduction. A frame stack S is cstrongly normalising, or c-SN, if all reduction sequences starting from S are finite. L EMMA 7. If S ;c S 0 , for frame stacks S, S 0 , then |S 0 | ≤ |S|. P ROOF. Induction on the structure of S. Reducibility. We define reducibility as follows: • Id is reducible. • S ◦ ([ ] N ) : (A → B) ( C is reducible if S and N are reducible. −−→ • S ◦ ([ ].`) : (` : A) ( C is reducible if S is reducible. • S : Bag A ( C is reducible if S[return M ] is c-SN for all reducible M : A. • S : Bool ( C is reducible if S[true] is c-SN and S[false] is c-SN. • M : A is reducible if S[M ] is c-SN for all reducible S : A ( C. L EMMA 8. If M : A is reducible then M is c-SN. P ROOF. Follows immediately from reducibility of Id and the definition of reducibility on terms. L EMMA 9. x : A is reducible. P ROOF. By induction on A using Lemma 7 and Lemma 8. C OROLLARY 10. If S : A ( C is reducible then S is c-SN. i=1 n X max(Mi )) + max(N ) + ( max(Mi0 )). c c i=1 c (Bag ): By induction on |S| + max(S[N [x : = M ]]) + max(M ). c c (Bool ): By induction on |S| + maxc (S[N ]) + maxc (S[N 0 ]). This completes the proof. 
Now we obtain reducibility-closure properties for each type constructor. L EMMA 12 ∀M.S[M ] ;c S 0 [M ] S[x] ;c S 0 [x] n X ( REDUCIBILITY- CLOSURE ). (→) If M [x : = N ] is reducible for all reducible N , then λx.M is reducible. −−−→ ~ are reducible, then h− (hi) If M ` = M i is reducible. (Bag ) If M is reducible, N [x: =M 0 ] is reducible for all reducible M 0 , then for (x ← M ) N is reducible. (Bool ) If M, N, N 0 are reducible then if M then N else, N 0 is reducible. P ROOF. Each property follows from the corresponding part of Lemma 11 using Lemma 8 and Corollary 10. We also require additional closure properties for the empty bag and union constructs. L EMMA 13 ( REDUCIBILITY- CLOSURE II). (∅) The empty bag ∅ is reducible. (]) If M, N are reducible, then M ] N is reducible. P ROOF. (∅): Suppose S : Bag A ( C is reducible. We need to prove that S[∅] is c-SN. The proof is by induction on |S| + maxc (S). The only interesting case is hoisting the empty bag out of a bag elimination frame, which simply decreases the size of the frame stack by 1. (]): Suppose M, N : Bag A, and S : Bag A ( C are reducible. We need to show that S[M ] N ] is c-SN. The proof is by induction on |S| + maxc (S[M ]) + maxc (S[N ]). The only interesting case is hoisting the union out of a bag elimination frame, which again decreases the size of the frame stack by 1, whilst leaving the other components of the induction measure unchanged. This completes the proof. Each type constructor has an associated β-rule. Each β-rule gives rise to an SN-closure property. L EMMA 11 (SN- CLOSURE ). T HEOREM 14. Let M be any term. Suppose x1 : A1 , . . . , xn : An includes all the free variables of M . If N1 : A1 , . . . , Nn : An ~ ] is reducible. are reducible then M [~ x:=N P ROOF. By induction on the structure of terms using Lemma 12 and Lemma 13. C.3 The query normalisation function The following definition of the function norm generalises the normalisation algorithm from previous work [20]. T HEOREM 15 ( STRONG NORMALISATION ). The relation ;c is strongly normalising. norm A (M ) = Lnf h (nf c (M ))MA P ROOF. Let M be a term with free variables ~ x. By Lemma 9, ~ x are reducible. Hence, by Theorem 14, M is c-SN. where: Lc([Xi : Oi ]n i=1 )MO Lx.`MO Lempty (M : Bag A)MBool LM Mh`i :Ai in i=1 LM MBag A It is well known that β-reduction in simply-typed λ-calculus has non-elementary complexity in the worst case [1]. The relation ;c includes β-reduction, so it must be at least as bad (we conjecture that it has the same asymptotic complexity, as ;c can be reduced to β-reduction on simply-typed λ-calculus via a CPS translation following de Groote [9]). However, we believe that the asymptotic complexity is unlikely to pose a problem in practice, as the kind of higher-order code that exhibits worst-case behaviour is rare. It has been our experience with Links that query normalisation time is almost always dominated by SQL execution time. C.2 ~ where L) return LM MA ] BLreturn M M? ~ = [for (G A,G,L BLfor (x ← t) M M? ~ = BLM M? ~ A,G,L BLtable tM? ~ A,G,L The if-hoisting rule says that if an expression contains an if-hoisting frame around a conditional, then we can lift the conditional up and push the frame into both branches: F [if L then M else N ] ;h if L then F [M ] else F [N ] We write size(M ) for the size of M , as in the total number of syntax constructors in M . L EMMA 16. 1. If M, N, N 0 are h-SN then if M then N else N 0 is h-SN. 2. If M, N are h-SN then M ] N is h-SN. ~ are h-SN then c(M ~ ) is h-SN. 3. If M −−−−→ ~ 4. 
If M are h-SN then h` = M i is h-SN. P ROOF. 1: By induction on hmax h (M ), max h (N 0 ), size(N ) + size(N 0 )i. size(M ), max h (N ) + 2: By induction on hmax h (M ) + max h (N ), size h (M ) + size h (N )i using (1). 3 and 4: By induction on h n X max h (Mi ), i=1 n X size(Mi )i i=1 using (1). This concludes the proof. P ROPOSITION 17. The relation ;h is strongly normalising. P ROOF. By induction on the structure of terms using Lemma 16. ~ +[x←t],L A,G+ BL∅M? ~ = [ ] A,G,L BLM ] N M? ~ = BLM M? ~ ++ BLN M? ~ A,G,L A,G,L A,G,L BLif L0 M N M? ~ = BLM M? ~ A,G,L A,G,L∧L0 ++ BLN M? ~ 0 A,G,L∧¬L To hoist conditionals (if-expressions) out of constant applications, records, unions, and singleton bag constructors, we define if-hoisting frames as follows: | [ ] ] N | M ] [ ] | return [ ] A,G++[x←t],L = BLreturn xM? (x fresh) If hoisting −−−−→ −−−−→ ~ , [ ], N ~ ) | h`0 = M , ` = [ ], `00 = N i F [ ] ::= c(M = c([LXi MOi ]n i=1 ) = x.` = empty (LM MBag A ) = h`i = F LM MAi ,`i in i=1 U = (BLM M?A,[ ],true ) F LxMA,`i = Lx.`i MA F Lh`i = Mi in i=1 MA,`i = LMi MA Strictly speaking, in order for the above definition of norm A to make sense we need the two rewrite relations to be confluent. It is easily verified that the relation ;c is locally confluent, and hence by strong normalisation and Newman’s Lemma it is confluent. The relation ;h is not confluent as the ordering of hoisting determines the final order in which booleans are eliminated. However, it is easily seen to be confluent modulo reordering of conditionals, which in turn means that norm A is well-defined if we identify terms modulo commutativity of conjunction, which is perfectly reasonable given that conjunction is indeed commutative. T HEOREM 18. The function norm A terminates. P ROOF. The result follows immediately from strong normalisation of ;c and ;h , and the fact that the functions L−M, BL−M? , and FL−M are structurally recursive (modulo expanding out the righthand-side of the definition of BLtable tM?A,G,L ~ . D. PROOF OF CORRECTNESS OF SHREDDING The main correctness result is that if we shred an annotated normalised nested query, run all the resulting shredded queries, and stitch the shredded results back together, then that is the same as running the nested query directly. Intuitively, the idea is as follows: 1. define shredding on nested values; 2. show that shredding commutes with query execution; and 3. show that stitching is a left-inverse to shredding of values. Unfortunately, this strategy is a little too naive to work, because nested values do not contain enough information about the structure of the source query in order to generate the same indexes that are generated by shredded queries. To fix the problem, we will give an annotated semantics AJ−K that is defined only on queries that have first been converted to normal form and annotated with static indexes as described in Section 4. We will ensure that N Jerase(L)K = erase(AJLK), where we overload the erase function to erase annotations on terms and values. 
The structure of the resulting proof is depicted in the diagram in Figure 15, where we view a query as a function from the in- R ESULT [` vi : Ai ]n [` J : Index ]n i=1 i=1 ` [vi @Ji ]n i=1 : Bag A R ECORD [` vi : Ai ]n i=1 n ` h`i = vi in i=1 : h`i = Ai ii=1 C ONSTANT Σ(c) = O `c:O Figure 16: Typing rules for nested annotated values SJLK = SJLKε,1 n SJh` = N in i=1 Kρ,ι = h`i = SJNi Kρ,ι ii=1 SJXKρ,ι = N JXKρ SJa  outKρ,ι.i = index (a  ι) SJa  inKρ,ι.i = index (a  ι.i) U n SJreturna N Kρ,ι = [SJN Kρ,ι @index (a  ι)] SJ n i=1 Ci Kρ,ι = concat([SJCi Kρ,ι ]i=1 ) n SJfor ([xi ← ti ]i=1 where X) CKρ,ι = concat([SJCKρ[xi 7→ri ]ni=1 ,ι.j | hj, ~ri ← enum([~r | [ri ← Jti K]n ])]) i=1 , N JXKρ[xi 7→ri ]n i=1 Figure 17: Semantics of shredded queries, with annotations on annotated values is defined as follows: JΣK N Jerase(L)K AJLK HJLKA shred (−) (A) N JAK erase AJAK HJAK AJLK = AJLKε,1 U n AJ n i=1 Ci Kρ,ι = concat([AJCi Kρ,ι ]i=1 ) n AJh`i = Mi in i=1 Kρ,ι = h`i = AJMi Kρ,ι ii=1 AJXKρ,ι = N JXKρ a AJfor ([xi ← ti ]n i=1 where X) return M Kρ,ι = [AJM Kρ[xi 7→ri ]n ,ι.j @index (a  ι.j) i=1 | hj, ~ ri ← enum([~ r | [ri ← Jti K]n i=1 , N JXKρ[xi 7→ri ]n ])] i=1 stitch Figure 15: Correctness of shredding and stitching terpretation of the input schema Σ to the interpretation of its result type. To prove correctness is to prove that this diagram commutes. As well as an environment, the current dynamic index is threaded through the semantics. Note that we use the ordinary semantics to evaluate values that cannot have any annotations (such as boolean tests in the last case). It is straightforward to show by induction that the annotated semantics erases to the standard semantics: T HEOREM 19. For any L we have erase(AJLK) = N Jerase(L)K. D.2 D.1 Annotated semantics of nested queries We annotate bag elements with distinct indexes as follows: Results s ::= [w1 @I1 , . . . , wm @Im ] Inner values w ::= c | r | s Rows r ::= h`1 = w1 , . . . , `n = wn i Indexes I, J Typing rules for these values are in Figure 16. The index annotations @I on collection elements are needed solely for our correctness proof, and do not need to be present at run time. The canonical choice for representing indexes is to take I = a  ι. We also allow for alternative indexing schemes in by parameterising the semantics over an indexing function index mapping each canonical index a  ι.j to a concrete representation. In the simplest case, we take index to be the identity function (where static indexes are viewed as distinct integers). We will later consider other definitions of index that depend on the particular query being shredded. Informally, the only constraints on index are that it is defined on every canonical index a  ι that we might need in order to shred a query, and it is injective on all canonical indexes. We will formalise these constraints later, but for now, index can be assumed to be the identity function. Given an indexing function index , the semantics of nested queries Shredding and stitching annotated values To define the semantics of shredded queries and packages, we use annotated values in which collections are annotated pairs of indexes and annotated values. Again, we allow annotations @J on elements of collections to facilitate the proof of correctness. Typing rules for these values are shown in Figure 18. Results s ::= [hI1 , w1 i@J1 , . . . , hIm , wm i@Jm ] Shredded values w ::= c | r | I Rows r ::= h`1 = w1 , . . . 
, `n = wn i Indexes I, J Having defined suitably annotated versions of nested and shredded values, we now extend the shredding function to operate on nested values. VsWp ? V[vi @Ji ]n i=1 WI, ? V[vi @Ji ]n W i=1 I,↓.p ? Vh`i = vi in i=1 WI,`i .p = VsW?>  1,p = [hI, Tvi UJi i@Ji ]n i=1 = concat([Vvi W?Ji ,p ]n i=1 ) = Vvi W?I,p TcUI = c n Th`i = vi in i=1 UI = h`i = Tvi UI ii=1 TsUI = I Note that in the cases for bags, the index annotation J is passed as an argument to the recursive call: this is where we need the ghost indexes, to relate the semantics of nested and shredded queries. We lift the nested result shredding function to build an (annotated) shredded value package in the same way that we did for R ESULT C ONSTANT [` Ii : Index ]n [` wi : F ]n [` Ji : Index ]n i=1 i=1 i=1 ` hI1 , w1 @J1 i, . . . , hIn , wn @Jn i : Bag hIndex , F i Σ(c) = O `c:O R ECORD [` ni : Fi ]n i=1 n ` h`i = wi in i=1 : h`i = Fi ii=1 Figure 18: Typing rules for shredded values nested types and nested queries. shred s (Bag A) = package (VsW ) (Bag A) − shred s,p (Bag A) = package (VsW ),p (Bag A) Not all possible indexing schemes satisfy Lemma 21. To identify those that do, we recall the function for computing the canonical indexes of a nested query. − We adjust the stitching function slightly to maintain annotations; the adjustment is consistent with its behaviour on unannotated values. stitch(Â) = stitch >  1 (Â) stitch c (O) = c n stitch r (h`i : Âi in i=1 ) = h`i = stitch r.cli (Âi )ii=1 stitch I ((Bag Â)s ) = [(stitch w (Â))@J | hI, wi@J ← s] IJLK = IJLKε,1 U IJ n C = concat([IJCi Kρ,ι ]n i Kρ,ι i=1 i=1 ) IJh`i = Mi in K = concat([IJMi Kρ,ι ]n ρ,ι i=1 i=1 ) IJXKρ,ι = [ ] a IJfor ([xi ← ti ]n i=1 where X) return M Kρ,ι = concat([a  ι.j :: IJM Kρ[xi 7→ri ]n ,ι.j i=1 | hj, ~ ri ← enum([~ r | [ri ← Jti K]n i=1 , N JXKρ[xi 7→ri ]n ])]) i=1 Note that IJ−K resembles AJ−K, but instead of the nested value v it computes all indexes of v. An indexing function index : Index → A is valid with respect to the closed nested query L if it is injective and defined on every canonical index in IJLK. The inner value parameter w to the auxiliary function stitch w (−) specifies which values to stitch along the current path. We now show how to interpret shredded queries and query packages over shredded values. The semantics of shredded queries (including annotations) is given in Figure 17. The semantics of a shredded query package is a shredded value package containing indexed results for each shredded query. For each type A we define HJAK = shred A (A) and for each flat–nested, closed ` L : A we define HJLKA : HJAK as pmap SJ−K (shred L (A)). The only requirement on indexes in the proof of Theorem 4 is that nested values be well-indexed, hence the proof extends to any valid indexing scheme. The concrete, natural, and flat indexing schemes are all valid. D.3 D.4 Main results There are three key theorems. The first states that shredding commutes with the annotated semantics. T HEOREM 20. If ` L : Bag A then: HJLKBag A = shred AJLK (Bag A) In order to allow shredded results to be correctly stitched together, we need the indexes at the end of each path to a bag through a nested value to be unique. We define indexes p (v), the indexes of nested value v along path p as follows. 
n indexes  ([vi @Ji ]n i=1 ) = [Ji ]i=1 n indexes ↓.p ([vi @Ji ]n i=1 ) = concat([indexes p (vi )]i=1 ) indexes `i .p (h`i = vi in i=1 ) = indexes p (vi ) We say that a nested value v is well-indexed (at type A) provided ` v : A, and for every path p in paths(A), the elements of indexes p (v) are distinct. L EMMA 24. If index is valid for L then AJLK is well-indexed. L EMMA 25. SJTM Ua Kρ,ι = TAJM Kρ,ι Ua  ι P ROOF. By induction on the structure of M . Case x.`: = T HEOREM 22. If ` s : Bag A and s is well-indexed at type Bag A then stitch(shred s (Bag A)) = s. Combining Theorem 20, Lemma 21 and Theorem 22 we obtain the main correctness result (see Figure 15). T HEOREM 23. If ` L : Bag A then: stitch(HJLKBag A ) = AJLK, and in particular: erase(stitch(HJLKBag A )) = N JLK. SJTx.`Ua Kρ,ι SJx.`Kρ,ι = AJx.`Kρ,ι = TAJx.`Kρ,ι Ua  ι Case c([Xi ]n i=1 ): L EMMA 21. If ` L : A, then AJLK is well-indexed at A. Our next theorem states that for well-indexed values, stitching is a left-inverse of shredding. Detailed proofs We first show that the inner shredding function T−U commutes with the annotated semantics. = SJTc([Xi ]n i=1 )Ua Kρ,ι = SJc([TXi Ua ]n i=1 )Kρ,ι = AJc([TXi Ua ]n i=1 )Kρ,ι = JcK([AJTXi Ua Kρ,ι ]n i=1 ) TAJc([Xi ]n i=1 )Kρ,ι Ua  ι Case empty M : = SJTempty M Ua Kρ,ι = SJempty VM W Kρ,ι = AJempty VM W Kρ,ι TAJempty M Kρ,ι Ua  ι = = SJh`i = TMi Ua in i=1 Kρ,ι h`i = SJTMi Ua Kρ,ι in i=1 = (Induction hypothesis) h`i = TAJMi Kρ,ι Ua  ι in i=1 = Th`i = AJMi Kρ,ι in i=1 Ua  ι = TAJh`i = Mi in i=1 Kρ,ι Ua  ι = ? Vconcat([AJCi Kρ,ι ]n i=1 )Wa  ι,p = ~ ? Vconcat([AJCKρ,ι | C ← C])W a  ι,p ~ s ← AJCKρ,ι ])) concat(concat([VsW?a  ι,p | C ← C, = −−−−→ Case h` = M i: SJTh`i = Mi in i=1 Ua Kρ,ι = U ? VAJ n i=1 Ci Kρ,ι Wa  ι,p = ~ concat([VAJCKρ,ι W?a  ι,p | C ← C]) concat([VAJCi Kρ,ι W?a  ι,p ]n i=1 ) This completes the proof. We are now in a position to prove that the outer shredding function V−W commutes with the semantics. Case L: = SJTLUa Kρ,ι SJa  inKρ,ι = aι = TAJLKρ,ι Ua  ι This completes the proof. The first part of the following lemma allows us to run a shredded query by concatenating the results of running the shreddings of its component comprehensions. Similarly, the second part allows us to shred the results of running a nested query by concatenating the shreddings of the results of running its component comprehensions. L EMMA 27. SJVLWp Kρ,1 = VAJLKρ,1 Wp P ROOF. We prove the following: 1. SJVLWp Kρ,1 = VAJLKρ,1 Wp U 2. SJ VCW?a,p Kρ,ι = VAJCKρ,ι W?a  ι,p 3. SJVM W?a,p Kρ,ι = VAJM Kρ,ι W?a  ι,p The first equation is the result we require. Observe that it follows from (2) and Lemma 26. We now proceed to prove equations (2) and (3) by mutual induction on the structure of p. There are only two cases for (2), as the `i .p case cannot apply. Case : L EMMA U 26. U 1. SJ V n Ci W?a,p Kρ,ι = concat([SJVCi W?a,p Kρ,ι ]n i=1 ) Un i=1 2. VAJ i=1 Ci Kρ,ι W?a  ι,p = concat([VAJCi Kρ,ι W?a  ι,p ]n i=1 ) P ROOF. 1. = U U ? SJ V n i=1 Ci Wa,p Kρ,ι = U SJ concat([VCi W?a,p ]n i=1 )Kρ,ι U ~ ρ,ι SJ concat([VCW?a,p | C ← C])K = = = = = = ~ C 0 ← VCW? 
])) concat(concat([SJC 0 Kρ,ι | C ← C, a,p = U ~ concat([SJ VCW?a,p Kρ,ι | C ← C]) = U −−−→ SJ Vfor (x ← t where X) returnb M W?a, Kρ,ι (Definition of SJ−K) −−−→ SJfor (x ← t where X) returnb ha  ι, TM Ub iKρ,ι (Definition of SJ−K) − → [ha  ι, SJTM Ub Kρ[− x7− → v],ι.i i@b  ι.i − −−−− → − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])] (Lemma 25) − → [ha  ι, TAJM Kρ[x7−−→ v],ι.i Ub  ι.i i@b  ι.i − −−−− → − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])] (Definition of V−W? ) − → V[AJM Kρ[− x7− → v],ι.i @b  ι.i − −−−− → ? − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])]Wa  ι, (Definition of AJ−K) −−−→ VAJfor (x ← t where X) returnb M Kρ,ι W?a  ι, U concat([SJ VCi W?a,p Kρ,ι ]n i=1 ) 2. Case ↓.: = = = = = U −−−→ SJ Vfor (x ← t where X) returnb M W?a,↓.p Kρ,ι (Definition of V−W? ) U −−−→ SJ [for (x ← t where X) C | C ← VM W?b,p ]Kρ,ι (Definition of SJ−K) concat([concat − → ([SJCKρ[x7−−→ v],ι.i − −−−− → − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])]) | C ← VM W?b,p ]) (Definition of SJ−K) concat − → ([SJVM W?b,p Kρ[x7−−→ v],ι.i − −−−− → − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])]) (Induction hypothesis (3)) concat ? − → ([VAJM Kρ[− x7− → v],ι.i Wb,p − −−−− → − → | hi, ~v i ← enum([~v | v ← JtK, SJXKρ[− x7− → v] ])]) (Definitions of AJ−K and V−W? ) −−−→ VAJfor (x ← t where X) returnb M Kρ,ι W?a  ι,↓.p There are three cases for (3). We can now formulate the following crucial technical lemma, which states that given the descendants of a result v at path p.↓ and the shredded values of v at path p we can stitch them together to form the descendants at path p. L EMMA 28 Bag A, then v [ v HJLKA = shred AJLK (Bag A) This is straightforward by induction on A, using Lemma 27. We have proved that shredding commutes with the semantics. It remains to show that stitching after shredding is the identity on index-annotated nested results. We need two auxiliary notions: the descendant of a value at a path, and the indexes at the end of a path (which must be unique in order for stitching to work). We define v p,w , the descendant of a value v at path p with respect to inner value w as follows. J,p,h`i =wi in i=1 s J,,I n [vi @Ji ]i=1 J,↓.p,I h`i = vi in i=1 J,`i .p,I | hIout , wi@Iin ← VvW?J 0 ,p.~`, Iout = I] = [v 0 .~ `@Iin | v 0 @Iin ← v J 0 ,p,I ] Subcase : If J 0 6= I then both sides are empty lists. Suppose that J 0 = I. J 0 ,↓.~ `,c @Iin | hIout , ci@Iin ← VsW?J 0 ,~`, Iout = I] = (Definition of − ) [c@Iin | hIout , ci@Iin ← VsW?J 0 ,~`, Iout = I] = (J 0 = I) s = (Definition of − ) [v 0 .~ `@Iin | v 0 @Iin ← s J 0 ,,I ] Subcase `i .p: −−−→ [( h` = vi = v J 0 ,p.↓.~ `,w @Iin Case O: = p,w | hIout , wi@Iin ← VvW?J,p , Iout = I] The proof now proceeds by induction on the structure of A and side-induction on the structure of p. We now lift Lemma 27 to shredded packages. P ROOF OF T HEOREM 20. We need to show that if ` L : Bag A then J,p,c J,p.↓,w @Iin P ROOF. First we strengthen the induction hypothesis to account for records. The generalised induction hypothesis is as follows. If v is well-indexed and ` v J 0 ,p.~`,I : Bag A, then This completes the proof. v =[ v J,p,I [( s Cases  and ↓.: follow from (2) by applying the two parts of Lemma 26 to the left and right-hand side respectively. Case `i .: −−−−→ SJVh` = M iW?a,`i .p Kρ,ι = (Definition of V−W? ) SJVMi W?a,p Kρ,ι = (Induction hypothesis (3)) VAJMi Kρ,ι W?a  ι,p = (Definition of AJ−K) −−−−→ VAJh` = M iKρ,ι W?a  ι,`i .p v J,p,I ( KEY LEMMA ). 
If v is well-indexed and ` v = v >  1,p,w =c = h`i = v J,p.`i ,wi in i=1  s, if J = I = [ ], if J 6= I = concat([ vi Ji ,p,I ]n i=1 ) = vi J,p,I Essentially, this extracts the part of v that corresponds to w. The inner value w allows us to specify a particular descendant as an index, or nested record of indexes; for uniformity it may also contain constants. = = = J 0 ,`i .p.↓.~ `,c @Iin −−−→ | hIout , c@Iin i ← Vh` = viW?J 0 ,`i .p.~`, Iout = I] (Definition of − ) −−−→ [c@Iin | hIout , c@Iin i ← Vh` = viW?J 0 ,`i .p.~`, Iout = I] (Definition of V−W? ) [c@Iin | hIout , c@Iin i ← Vvi W?J 0 ,p.~`, Iout = I] (Definition of − ) [( vi J 0 ,p.↓.~`,c @Iin | hIout , ci@Iin ← Vvi W?J 0 ,p.~`, Iout = I] (Induction hypothesis) [v 0 .~ `@Iin | v 0 @Iin ← vi J 0 ,p,I ] (Definition of V−W? ) −−−→ 0 ~ [v .`@Iin | v 0 @Iin ← h` = vi J 0 ,`i .p,I ] Subcase ↓.p: [( [vi @Ji ]n i=1 J 0 ,↓.p.↓.~ `,c )@Iin ? | hIout , ci@Iin ← V[vi @Ji ]n , I = I] i=1 WJ 0 ,↓.p.~ ` out ? = (Definitions of − and V−W ) concat([c@Iin | hIout , c@Iin i ← Vvi W?Ji ,p.~`, Iout = I]n i=1 ) = (Induction hypothesis) concat([v 0 .~ `@Iin | v 0 @Iin ← vi Ji ,p ]n i=1 ) = (Definition of − ) [v 0 .~ `@Iin | v 0 @Iin ← [v@J]n i=1 J 0 ,p ] Case hi: The proof is the same as for base types with the constant c replaced by hi. : −−→ Case h` : Ai where |~ `| ≥ q: We rely on the functions zip ~`, for transforming a record of lists of equal length to a list of records, and unzip ~`, for transforming a list of records to a record of lists of equal length. In fact we require special versions of zip and unzip that handle annotations, such that zip takes a record of lists of equal length whose annotations must be in sync, and unzip returns such a record. zip ~` h`i = [ ]in i=1 = [ ] n h` = si in zip ~` h`i = vi @J :: si in i=1 i=1 = h`i = vi ii=1 @J :: zip ~ ` i unzip ~`(s) = h`i = [v.`i @J | v@J ← s]in i=1 If ~ ` is a non-empty list of column labels then zip ~` is the inverse of unzip ~`. zip ~`(unzip ~`(s)) = s, Subcase `i .p: [ h`i = vi in i=1 J 0 ,`i .p.↓.`~0 ,Iin @Iin ? | hIout , Iin i@Iin ← Vh`i = vi in i=1 WJ 0 ,` .p.`~0 , Iout = I] i ? = (Definitions of − and V−W ) [ vi J 0 ,p.↓.`~0 ,Iin @Iin | hIout , Iin i@Iin ← Vvi W?J 0 ,p.`~0 , Iout = I] = (Induction hypothesis) [v 0 .`~0 @Iin | v 0 @Iin ← vi J 0 ,p,I ] = (Definition of − ) [v 0 .`~0 @Iin | v 0 @Iin ← h`i = vi in i=1 J 0 ,`i .p,I ] Subcase ↓.p: if |~ `| ≥ 1 = [ v = = = = @Iin | hIout , wi@Iin ← VvW?J 0 ,p.`~0 , Iout = I] (Definition of − ) [h`i = v J 0 ,p.↓.`~0 .`i ,w in i=1 @Iin | hIout , w@Iin i ← VvW?J 0 ,p.`~0 .` , Iout = I] (Definition of zip) zip ~` h`i = [v`i ,w @Iin | hIout , w@Iin i ← VvW?J 0 ,p.`~0 .` , i Iout = I]in i=1 where v`,w stands for v J 0 ,p.↓.`~0 .`,w (Induction hypothesis) zip ~` h`i = [v 0 .`~0 .`i @Iin | v 0 @Iin ← v J 0 ,p,I ]in i=1 (Definition of zip) [v 0 .`~0 @Iin | v 0 @Iin ← v J 0 ,p,I ] J 0 ,p.↓.`~0 ,w = = = = ~ Subcase : If J 0 6= I then both sides of the equation Case Bag A: are equivalent to the empty bag. Suppose J 0 = I. = = = = = = = = [ [vi @Ji ]n i=1 J 0 ,↓.~ `,Iin @Iin ? , I = I] | hIout , Iin i@Iin ← V[vi @Ji ]n i=1 WJ 0 ,~ ` out (Definition of − ) [concat([ vi Ji ,~`,Iin ]n i=1 )@Iin ? | hIout , Iin i@Iin ← V[vi @Ji ]n , I = I] i=1 WJ 0 ,~ ` out ? n 0 (V[vi @Ji ]i=1 WJ 0 ,~` = hJ , J1 i@J1 , . . . 
, hJ 0 , Jn i@Jn ) ~ 0 [concat([ vi Ji ,~`,Iin ]n i=1 )@Iin | Iin ← J, J = I] 0 (J = I) ~ [concat([ vi Ji ,~`,Iin ]n i=1 )@Iin | Iin ← J] (Definition of − ) [concat([ vi Ji ,~`,Iin ~ | vi @Ji ← [vi @Ji ]n i=1 ])@Iin | Iin ← J] (Definition of − ) [concat([ vi .~ ` Ji ,,Iin | vi @Ji ← [vi @Ji ]n i=1 ])@Iin ~ | Iin ← J] (Definition of − ) [concat([vi .~ ` ~ | vi @Ji ← [vi @Ji ]n i=1 , Ji = Iin ])@Iin | Iin ← J] (v is well-indexed) [vi .~ `@Iin | vi @Iin ← [vi @Ji ]n i=1 ] (Definition of − and J 0 = I) [vi .~ `@Iin | vi @Iin ← [vi @Ji ]n i=1 J 0 ,,I ] = [ [vi @Ji ]n i=1 J 0 ,↓.p.↓.~ `,Iin @Iin ? | hIout , Iin i@Iin ← V[vi @Ji ]n , I = I] i=1 WIout ,↓.p.~ ` out ? (Definitions of − and V−W ) [(concat([ vi Ji ,p.↓.~`,Iin ]))@Iin | hIout , Iin i@Iin ← concat([Vvi W?Ji ,p.~`]n i=1 ), Iout = I] (Expanding concat([V−W? ]n i=1 )) [(concat([ vi Ji ,p.↓.~`,Iin ]))@Iin | vi @Ji ← [vi @Ji ]n i=1 , hIout , Iin i@Iin ← Vvi W?Ji ,p.~`, Iout = I] (v is well-indexed) [ vi Ji ,p.↓.~`,Iin @Iin | vi @Ji ← [vi @Ji ]n i=1 , hIout , Iin i@Iin ← Vvi W?Ji ,p.~`, Iout = I] (Comprehension → concatenation) concat ([ vi Ji ,p.↓.~`,Iin @Iin | hIout , Iin i@Iin ← Vvi W?Ji ,p.~`, Iout = I]n i=1 ) (Induction hypothesis) concat([v 0 .~ `@Iin | v 0 @Iin ← vi Ji ,↓.p,I ]n i=1 ) (Concatenation → comprehension) [v 0 .~ `@Iin | v 0 @Iin ← [vi @Ji ]n i=1 J 0 ,↓.p,I ] This completes the proof. The proof of this lemma is the only part of the formalisation that makes use of values being well-indexed. Stitching shredded results together does not depend on any other property of indexes, thus any representation of indexes that yields unique indexes suffices. T HEOREM 29. If s is well-indexed and ` stitch w (shred s,p (A)) = s p,w . P ROOF. By induction on the structure of A. Case O: stitch c (shred s,p (O)) = c = s p,c −−−→ Case h` = Ai: s p,w : A then stitch h`i =wi ini=1 (shred s,p (h`i : Ain i=1 )) E. RECORD FLATTENING = h`i = stitch wi (shred s,p.l (A))in i=1 = (Induction hypothesis) h`i = s p.l,wi in i=1 = s p,h`i =wi in i=1 Case Bag A: stitch I (shred s,p (Bag A)) = Flat types. For simplicity we extend base types to include the unit type hi. This allows us to define an entirely syntax-directed unflattening translation from flat record values to nested record values. −−→ Types A, B ::= Bag h` : Oi Base types O ::= Int | Bool | String | hi The Links implementation diverges slightly from the presentation here. Rather than treating the unit type as a base type, it relies on type information to construct units when unflattening values. stitch I (((shred s,p.↓ (A))VsWp )) Flat terms. = [(stitch n (shred s,p.↓ (A))@Iin ) | hIout , wi@Iin ← VsW?p , Iout = I] = (Induction hypothesis) [ s p.↓,w @Iin | hIout , wi@Iin ← VsW?p , Iout = I] = (Lemma 28) s p,I This completes the proof. U ~ Query terms L, M ::= C Comprehensions C ::= let q = S in S 0 ~ where X) return R Subqueries S ::= for (G Data sources u ::= t | q Generators G ::= x ← u Inner terms N ::= X | index −−−→ Record terms R ::= h` = N i ~ | empty L Base terms X ::= x.` | c(X) P ROOF OF T HEOREM 22. By Theorem 29, setting p =  and w = >  1. Flattening types. The record flattening function (−) flattens We now obtain the main result. P ROOF OF T HEOREM 4. Recall the statement of Theorem 4: we need to show that ` L : Bag A then: stitch(HJLKBag A ) = stitch(shred AJLK (Bag A)) = AJLK The first equation follows immediately from Theorem 20, and the second from Lemma 21 and Theorem 22. 
Furthermore, applying erase to both sides we have: erase(stitch(HJLKBag A )) = erase(AJLK) = N JLK where the second step follows by Theorem 19. record types. (Bag A) = Bag (A ) O = h• : Oi hi = h• : hii −−→  h` : F i = h[(`i _`0 ) : O | i ← dom(~ `), (`0 : O) ← Fi  ]i Labels in nested records are concatenated with the labels of their ancestors. Base and unit types are lifted to 1-ary records (with a special • field) for uniformity and to aid with reconstruction of nested values from flattened values. Flattening terms. The record flattening function (−) is defined on terms as well as types. U Un   ( n i=1 Ci ) = i=1 (Ci ) (let q = Sout in Sin ) = let q = Sout  in Sin   ~ ~ where X † ) return N  (for (G where X) return N ) = for (G X  = h• = X † i hi = h• = hii (m,ni )  0 (h`i = Ni im i=1 ) = h`i _`j = Xj i(i=1,j=1) , ni  where Ni = h`0j = Xj ij=1 (x.`1 . · · · .`n )† = x.`1 . . . `n • c(X1 , . . . , Xn )† = c((X1 )† , . . . , (Xn )† ) (empty L)† = empty L The auxiliary (−)† function flattens n-ary projections. Type soundness. `L:A ⇒ ` L : A Unflattening record values. [r1 , . . . , rn ]≺ = [(r1 )≺ , . . . , (rn )≺ ] h• = ci≺ = c h• = hii≺ = hi (m,n ) ≺ i (h`i _`0j = cj i(i=1,j=1) ) n ≺ i = h`i = (h`0j = cj ij=1 ) im i=1 Type soundness. `L:A ⇒ ` L≺ : A≺ Correctness P ROPOSITION 30. If L is a let-inserted query and ` L : Bag A, then ≺ LJL K = LJLK
arXiv:1512.07450v3 [] 4 Jan 2016

Interacting Behavior and Emerging Complexity∗

Alyssa Adams†,>, Hector Zenil‡,>, Eduardo Hermo Reyes∓, Joost J. Joosten∓,>

† Beyond Center, Arizona State University, Tempe, AZ, U.S.A.
‡ Department of Computer Science, University of Oxford, U.K.
∓ Department of Logic, History and Philosophy of Science, University of Barcelona, Spain
> Algorithmic Nature Group, LABoRES, Paris, France

Abstract

Can we quantify the change of complexity throughout evolutionary processes? We attempt to address this question through an empirical approach. In very general terms, we simulate two simple organisms on a computer that compete over limited available resources. We implement Global Rules that determine the interaction between two Elementary Cellular Automata on the same grid. Global Rules change the complexity of the state evolution output, which suggests that some complexity is intrinsic to the interaction rules themselves. The largest increases in complexity occurred when the interacting elementary rules had very little complexity, suggesting that they are able to accept complexity through interaction only. We also found that some Class 3 or 4 CA rules are more fragile to Global Rules, while others are more robust, hence suggesting some intrinsic properties of the rules independent of the Global Rule choice. We provide statistical mappings of Elementary Cellular Automata exposed to Global Rules and different initial conditions onto different complexity classes.

Keywords: Behavioral classes; emergence of behavior; cellular automata; algorithmic complexity; information theory

∗ Corresponding authors: hector.zenil at algorithmicnaturelab.org, jjoosten at ub.edu

1 Introduction

The race to understand and explain the emergence of complex structures and behavior has a wide variety of participants in contemporary science, whether it be sociology, physics, biology or any of the other major sciences. One of the more well-trodden paths is one where evolutionary processes play an important role in the emergence of complex structures. When various organisms compete over limited resources, complex behavior can be beneficial for outperforming competitors. But the question remains: can we quantify the change of complexity throughout evolutionary processes?

The experiment we undertake in this paper addresses this question through an empirical approach. In very general terms, we simulate two simple organisms on a computer that compete over limited available resources. Since this experiment takes place in a rather abstract modeling setting, we will use the term processes instead of organisms from now on. Two competing processes evolve over time, and we measure how the complexity of the emerging patterns evolves as the two processes interact. The complexity of the emerging structures will be compared to the complexity of the respective processes as they would have evolved without any interaction.

When setting up an experiment, especially one of an abstract nature, a subtle yet essential question emerges. What exactly defines a particular process, and how does the process distinguish itself from the background? For example, is the shell of a hermit crab part of the hermit crab even though the shell itself did not grow from it? Do essential bacteria that reside in larger organisms form part of that larger organism? Where is the line drawn between an organism and its environment?
Throughout this paper, we assume that a process is only separate from the background in a behavioral sense. In fact, we assume processes are different from backgrounds in a physical sense only through resource management. We will demonstrate how these ontological questions are naturally prompted in our very abstract modeling environment.

2 Methods

As mentioned already, we wish to investigate how complexity evolves over time when two processes compete for limited resources. We chose to represent competing processes by cellular automata as defined in the subsection below. Cellular automata (CA/CAs) represent a simple parallel computational paradigm that admits various analogies to processes in nature. In this section, we shall first introduce Elementary Cellular Automata to justify how they are useful for modeling interacting and competing processes.

2.1 Elementary Cellular Automata

We will consider Elementary Cellular Automata (ECA/ECAs), which are simple and well-studied CAs. They are defined to act on a one-dimensional discrete space divided into countably infinitely many cells. We represent this space by a copy of the integers Z and refer to it as a tape or simply row. Each cell in the tape can have a particular color. In the case of our ECAs we shall work with just two colors, represented for example by 0 and 1 respectively. We sometimes call the distribution of 0s and 1s over the tape the output space or spatial configuration. Thus, we can represent an output space by a function r : Z → {0, 1} (also written r ∈ {0, 1}^Z) from the integers Z to the set of colors, in this case {0, 1}. Instead of writing r(i), we shall often write r_i and call r_i the color of the i-th cell of row r.

An ECA will act on such an output space in discrete time steps and as such can be seen as a function mapping functions to functions; in our case, a function of type {0, 1}^Z → {0, 1}^Z. If some ECA acts on some initial row r^0, we will denote the output space after t time-steps by r^t, so that r^{t+1} is obtained by applying the ECA to r^t. Likewise, we will denote the n-th cell at time t by r^t_n. Our ECAs are entirely defined in a local fashion, in that the color r^{t+1}_n of cell n at time t + 1 will only depend on r^t_n and its two direct neighboring cells r^t_{n−1} and r^t_{n+1} (in more general terminology, we only consider radius-one CAs [3]). Thus, an ECA with just two colors in the output space is entirely determined by its behavior on three adjacent cells. Since each cell can only have two colors, there are 2^3 = 8 such possible triplets, so that there are 2^8 = 256 possible different ECAs. However, we will not consider all ECAs for this experiment, for computational simplicity. Instead, we only consider the 88 ECAs that are non-equivalent under horizontal translation, 1 and 0 exchanges, or any combination of the two.
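As an illustration of these definitions, a single ECA update step can be written in a few lines. The sketch below is ours and purely illustrative (the paper's own computations used Mathematica); it applies a Wolfram-numbered rule to a finite row, using periodic boundary conditions as a simplifying assumption.

    # Minimal ECA step, for illustration only. The rule number follows
    # Wolfram's enumeration: bit i of the rule gives the new colour for the
    # neighbourhood whose three cells, read as a binary number, equal i.
    def eca_step(rule, row):
        n = len(row)
        new_row = []
        for i in range(n):
            left, centre, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
            idx = (left << 2) | (centre << 1) | right
            new_row.append((rule >> idx) & 1)
        return new_row

    # Example: one step of rule 110 on a row with a single black cell.
    row = [0, 0, 0, 1, 0, 0, 0]
    print(eca_step(110, row))   # -> [0, 0, 1, 1, 0, 0, 0]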
2.2 Interacting and competing ECAs: global rules

In our experiment, the entire interaction will be modeled by a particular CA that is a combination of two ECAs and something else. So far we have decided to model a process Π by an ECA with a color 0 (white) and a non-white color 1. A process is modeled by the evolution of the white and non-white cells throughout time steps as governed by the particular ECA rule. Once we have made this choice, it is natural to consider a different process Π′ in a similar fashion. That is, we model Π′ also by an ECA. To tell the two ECAs apart on the same grid, we choose the second to work on the alphabet {0, 2}, whereas the alphabet of the first was {0, 1}. Both will use the same symbol 0 (white).

The next question is how to model the interaction between the two ECAs. We will do so by embedding both CAs in a setting where we have a global CA E with three colors {0, 1, 2}. We will choose E in such a fashion that E restricted to the alphabet {0, 1} is just the first ECA, while E restricted to the alphabet {0, 2} is the second. Of course these two requirements do not determine E for the two given ECAs. Thus, this leaves us with a difficult modeling choice, reminiscent of the ontological question for organisms: what to do for triplets that contain all three colors or are otherwise not specified by either ECA. Since there are 12 such triplets (namely {0, 1, 2}, {0, 2, 1}, {1, 0, 2}, {2, 0, 1}, {1, 2, 0}, {2, 1, 0}, {1, 1, 2}, {1, 2, 1}, {2, 1, 1}, {1, 2, 2}, {2, 1, 2}, and {2, 2, 1}), we have 3^12 = 531 441 different ways to define E given the two ECAs. Given two ECAs as above, we call any such E that extends both a corresponding global rule. Since there are 88 unique ECAs, there are 3916 unique combinations of two ECAs, which results in 3916 × 531 441 possible global rules. An online program illustrating interacting cellular automata via a global rule can be visited at http://demonstrations.wolfram.com/CompetingCellularAutomata/.
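A global rule can likewise be viewed as a lookup table over the 27 three-colour neighbourhoods. The following sketch (ours, illustrative only) builds such a table from two Wolfram rule numbers plus an assignment for the 12 mixed triplets; for consistency of the overlap it assumes both ECAs map the all-white neighbourhood to white (i.e., even rule numbers).

    from itertools import product

    # Illustrative construction of a global rule E from two ECAs: one acts on
    # colours {0,1}, the other on {0,2}, and `mixed` fixes the 12 remaining
    # neighbourhoods (those mentioning both non-white colours).
    def eca_table(rule, colour):
        table = {}
        for bits in product((0, 1), repeat=3):
            idx = (bits[0] << 2) | (bits[1] << 1) | bits[2]
            key = tuple(colour if b else 0 for b in bits)
            table[key] = colour if (rule >> idx) & 1 else 0
        return table

    def global_rule(rule_a, rule_b, mixed):
        table = eca_table(rule_a, 1)        # restriction to {0,1}
        table.update(eca_table(rule_b, 2))  # restriction to {0,2}
        table.update(mixed)                 # the 12 free triplets
        assert len(table) == 27             # every 3-colour neighbourhood covered
        return table

    # Example: rules 30 and 110 interacting; every mixed neighbourhood maps to 0.
    mixed = {t: 0 for t in product((0, 1, 2), repeat=3) if 1 in t and 2 in t}
    E = global_rule(30, 110, mixed)
    print(len(mixed), E[(1, 2, 0)])         # 12 mixed triplets; this one maps to 0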
2.3 Intrinsic versus evolutionary complexity

Let us go back to the main purpose of the paper. We wish to study two competing processes that evolve over time, and we want to measure how the complexity of the emerging patterns evolves as the two processes interact. The complexity of the emerging structures will be compared to the complexity of the respective processes as they would have evolved without interacting with each other.

The structure of an experiment readily suggests itself. Pick two ECAs defined on the alphabets {0, 1} and {0, 2} respectively. Measure the typical complexities c and c′ generated by the two ECAs respectively when they are applied in an isolated setting. Next, pick some corresponding global rule E and measure the typical complexity c̃ that is generated by E. Once this 'typical complexity' is well-defined, the experiment can be run and c̃ can be compared to c and c′. The question is how to interpret the results. If we see a change in the typical complexity, there are three possible reasons this change can be attributed to:

1. An evolutionary process triggered by the interaction between the two ECAs;
2. An intrinsic complexity injection due to the nature of how E is defined on the 12 previously non-determined triplets;
3. An intrinsic complexity injection due to scaling the alphabet from size 2 to size 3.

In the case of the second reason, an analogy to the cosmological constant is readily suggested. Recall that E is supposed to take care of the modeling of the background, so to say, where neither ECA is defined but only their interaction. Thus, a possible increase of complexity/entropy is attributed in the second reason to some intrinsic entropy density of the background. We shall see how the choice of E will affect the change of complexity upon interaction. Before we can further describe the experiment, we first need to decide on how to measure complexity.

2.4 Characterizing complexity

In [3] a qualitative characterization of complexity is given for dynamical processes in general and ECAs in particular. More specifically, Wolfram described four classes of complexity which can be characterized as follows (throughout this paper we use the Wolfram enumeration of ECA rules, see [3]):

• Class 1. Symbolic systems which rapidly converge to a uniform state. Examples are ECA rules 0, 32 and 160.
• Class 2. Symbolic systems which rapidly converge to a repetitive or stable state. Examples are ECA rules 4, 108 and 218.
• Class 3. Symbolic systems which appear to remain in a random state. Examples are ECA rules 22, 30, 126 and 189.
• Class 4. Symbolic systems which form areas of repetitive or stable states, but which also form structures that interact with each other in complicated ways. Examples are ECA rules 54 and 110.

It turns out that one can consider a quantitative complexity measure that largely generalizes the four qualitative complexity classes as given by Wolfram. In our definition we shall make use of de Bruijn sequences B(k, n), although different approaches are possible. A de Bruijn sequence B(k, n) for n colors is any sequence σ of n colors such that any of the n^k possible strings of length k with n colors is actually a substring of σ. In this sense, de Bruijn sequences are sometimes considered to be semi-random. Using de Bruijn sequences of increasing length for a fixed number of colors we can parametrize our input, and as such it makes sense to speak of asymptotic behavior.

Now, one can formally characterize Wolfram's classes in terms of Kolmogorov complexity, as was done in [5]. For a large enough input i and a sufficiently long evaluation time t, one can use Kolmogorov complexity to assign a complexity measure W to the configuration that a system produces on input i after runtime t (see [5]). A typical complexity measure for the system is then defined as the limit superior of these values as i and t grow. As a first approximation, one can use cut-off values of these outcomes to classify a system into one of the four Wolfram classes.
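For concreteness, a compression-based stand-in for this measure is easy to write down. The sketch below is ours: it uses Python's zlib in place of the Mathematica Compress function used later in the paper, and the class thresholds are placeholders rather than the trained values; it is meant only to convey the shape of the measure, including the normalisation against an all-white evolution used in the experiment setup below.

    import zlib

    # Approximate the complexity of a CA state evolution by the length of its
    # compressed output, normalised by subtracting the compressed length of an
    # all-white evolution of the same size. Thresholds are placeholders.
    def compressed_size(rows):
        data = bytes(cell for row in rows for cell in row)
        return len(zlib.compress(data, 9))

    def complexity_score(rows):
        blank = [[0] * len(rows[0])] * len(rows)
        return compressed_size(rows) - compressed_size(blank)

    def wolfram_class(score, thresholds=(5, 50, 200)):
        # Map the normalised score to a class 1-4 via cut-off values.
        for cls, cutoff in enumerate(thresholds, start=1):
            if score <= cutoff:
                return cls
        return 4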
Each complexity estimate c is normalized by subtracting the compression value of a string of 0's of equal length. For most of the CA instances, this is a good approximation of the long-term effects of a GR. As mentioned in Section 2.4, we can use these compression values to determine, or at least approximate, Wolfram's complexity classes according to critical threshold values. The thresholds were trained on the ECAs we used for the interactions. Using these compression methods, for each run we determined whether the output was Class 1, 2, 3 or 4. The outcome is organized according to the complexity classes of the constituent CAs. For clarity, the results are represented as heat maps in the next section.

3 Results

Each output class (1-4) is represented by its own heat map, so each figure has four heat maps to account for the different classes of outputs. Each map is composed of a four-by-four grid whose axes describe the complexity classes of ε and ε′, as shown. Thus, all of the runs (ECA ε and ε′ interaction, GR E, and initial condition) that yield a Class 1 output are represented under the heat map labeled Class 1. For example, in Figure 1, about 11% of all Class 1 outputs were generated by Class 1 ε and Class 1 ε′ interactions. The more intense the color, the more densely populated the corresponding cell is for that class.

3.1 General Global Rule effects

The outputs of all 425 600 simulated GRs are accumulated and represented in Figure 1. In general, this figure shows the change in complexity when a GR is used to determine the interaction between two ECAs. Most of the outputs for each complexity class were generated by Class 1 ECA interactions, which was the least expected outcome.

Figure 1: Heat maps for the 4 classes of output types for all executions. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

3.2 Global Rules of interest

Figure 2: Heat maps for the 4 classes of output types for GR 36983. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

Figure 3: Heat maps for the 4 classes of output types for GR 72499. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

Figure 4: Heat maps for the 4 classes of output types for GR 77499. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

There are several interesting examples of GRs that affect the classification of the behavior of the CA interaction output. The heat maps of three GRs in particular are shown in Figures 2-4. As shown in the figure captions, GRs are enumerated according to their position in the Tuples function; that is to say, we enumerate the GRs by generating all tuples with 3 symbols in lexicographical order. Over half of all outputs with GR 77499 were complexity Class 4, arising from Class 1 ε and ε′ interactions. The majority of complexity Class 3 outputs were generated from complexity Class 2 ε and ε′ interactions. GR 72499 had outputs that were 65 The actual outputs for some of these GRs are shown in Figure 5. Note that most ε and ε′ rules are complexity Class 1 or 2.

Figure 5: Interesting outputs of various ε and ε′ interactions and E. Note that most of the ε and ε′ rules belong to complexity Class 1 or 2.

4 Discussion

It is unexpected that the biggest increases in complexity arise from ε and ε′ complexity Class 1 interactions. This suggests that complexity is intrinsic to the GRs rather than to the ECAs themselves.
We suspect that interacting Class 1 ECAs readily accept complexity through the rules of their interactions. This is not as prevalent in any of the other complexity classes of ε and ε′ interactions. Likely, if ε and ε′ have more complexity without interaction, then they are more robust to any complexity changes introduced by the GR. In all cases, complexity increases or remains the same when an interaction rule is introduced via a GR. The most interesting cases are those in which Global Rules increase the complexity of the output by entire classes. There are cases where mixed neighborhoods are present and sustained throughout the output, which is a form of emergence through the interaction rule (GR). Because we used only a small number of time steps per execution, it is unclear whether these mixed neighborhoods eventually die out; it is nonetheless a case of intermediate emergence from a Global Rule. We have found interesting cases where Global Rules seem to drastically change the complexity of an interacting CA output. Some originally Class 3 ECAs, for example, were found to be fragile under most Global Rules, while others are more resilient. Most importantly, the greatest increases in complexity occur when the interacting ECAs are both Class 1, which holds for the majority of all possible Global Rules. Although we have yet to understand the mechanisms behind these results, we are confident that further analysis will be important in understanding the emergence of complex structures.

Acknowledgement

We want to acknowledge the ASU Saguaro cluster where most of the computational work was undertaken.

References

[1] G.J. Chaitin, On the length of programs for computing finite binary sequences: Statistical considerations, Journal of the ACM, 16(1):145-159, 1969.
[2] S.J. Chandler, "Cellular Automata with Global Control", http://demonstrations.wolfram.com/CellularAutomataWithGlobalControl/, Wolfram Demonstrations Project, published May 9, 2007.
[3] S. Wolfram, A New Kind of Science, Wolfram Media, Champaign, IL, 2002.
[4] H. Zenil, Compression-based investigation of the behaviour of cellular automata and other systems, Complex Systems, 19(2), 2010.
[5] H. Zenil and E. Villarreal-Zapata, Asymptotic behaviour and ratios of complexity in cellular automata rule spaces, International Journal of Bifurcation and Chaos, 23(9), 2013.
Fast Selection of Spatially Balanced Samples

F. Piersimoni · R. Benedetti

arXiv:1710.09116v1 [stat.ME] 25 Oct 2017

Abstract Sampling from very large spatial populations is challenging. The solutions suggested in the recent literature on this subject often require that the randomly selected units be well distributed across the study region, using complex algorithms that have the feature, essential in a design–based framework, of respecting the fixed first–order inclusion probabilities of every unit of the population. The size of the frame, N, often causes problems for these algorithms since, being based on the distance matrix between the units of the population, they have a computational cost of at least order N^2. In this paper we propose a draw–by–draw algorithm that randomly selects a sample of size n in exactly n steps, updating at each step the selection probabilities of the not yet selected units depending on their distance from the units already selected in the previous steps. The performance of this solution is compared with that of other methods derived from the spatially balanced sampling literature in terms of their root mean squared error (RMSE), using simple random sampling (SRS) without replacement as a benchmark. The fundamental interest is not only to evaluate the efficiency of such a different procedure, but also to understand whether similar results can be obtained with a notable reduction in the computational burden needed to obtain more efficient sampling designs. Repeated sample selections on real and simulated populations support this perspective. An application to the Land Use and Land Cover Survey (LUCAS) 2012 data–set in an Italian region is presented as a concrete and practical illustration of the capabilities of the proposed sample selection method.

Keywords Spatial dependence · Big Data · Generalized Random Tessellation Stratified design · Spatially correlated Poisson sampling · Local pivotal method · Product of the within sample distance

Mathematics Subject Classification (2000) 62D05 · 62H11

F. Piersimoni, Istat, Directorate for Methodology and Statistical Process Design, Via Cesare Balbo 16, Rome, IT-00184, Italy. E-mail: [email protected]
R. Benedetti, "G. d'Annunzio" University, Department of Economic Studies (DEc), Viale Pindaro 42, Pescara, IT-65127, Italy. Tel.: +39-085-45083210, +39-085-45083229. Fax: +39-085-45083208. E-mail: [email protected]

1 Introduction

Surveys are routinely used to gather data for environmental and ecological research. The units to be observed are often randomly selected from a population made up of geo-referenced units, and the spatial distribution of these units is information that can be used in sample design. It represents a source of auxiliaries that can be helpful in designing an effective random selection strategy which, by a proper use of this particular information and assuming that the observed phenomenon is related to the spatial features of the population, could lead to a remarkable efficiency gain of a design–based estimator of the unknown total of a target variable. In recent years, with the widespread use of Global Positioning System (GPS) technology, of remotely sensed data, of automatic address geocoding systems and of Geographical Information Systems (GIS) databases, spatial modeling and analysis of geo-referenced data-sets have received more and more attention.
The use of this particular type of data is becoming popular in several research areas including geological and environmental sciences [62], ecology [51], cartography [37], public health [15] and so on. Some extended reviews of current research tendencies in spatial modeling and analysis are presented in [11, 19, 23, 25, 40, 42,50]. At the same time it is surprising and incomprehensible the reason why, except for some attractive references on the subject [6,44], at least the same interest was not devoted to basic topics as spatial data collection and sample surveys selected from spatial populations, with the consequence that none of these authoritative texts addresses, unless mentioned as a border issue, the problem of sampling from spatial data–sets. To better understand the increase in the importance of spatial data, it is also appropriate to realize that is more and more frequent the practice that National Statistical Offices georeference their sampling frames of physical or administrative bodies, used for social and economic surveys, not only according to the codes of a geographical nomenclature, but also adding information regarding the exact, or estimated, position of each record. With the effect that such methodological problems are no longer felt only in local and occasional experiments, although of a high scientific level, but also in economic and social surveys that provide basic statistical figures at national and regional level. Indeed area frame surveys [6] often supported, or even replaced, the traditional methods based on list frames mainly due to the absence of coverage errors and the low probability of non–responses that they guarantee. Moreover the latest fashion in official statistics, currently considered a keyword, is the statistical use of administrative data that implies the need of estimating the coverage of administrative lists, which is often carried out through area frame surveys. Spatial surveys also present some drawbacks with regard to the reduced information that can be collected in a survey involving only a direct observation and not a questionnaire to be filled. Within this context, in [2] it is claimed that area frame surveys will play a very important role in the future development of agricultural surveys in particular in developing countries. In addition big spatial data–sets are very common in scientific problems, such as those involving remote sensing of the earth by satellites, climate-model output, small-area samples from national surveys, and so forth. The term Big Data includes data characterized not only by their large volume, but also by their variety and velocity, the organic way in which they are created, and the new types of processes needed to analyze them and make inference from them. The change in the nature of the new types of data, their availability, and the Fast Selection of Spatially Balanced Samples 3 way in which they are collected and disseminated is fundamental. This change constitutes a paradigm shift for survey research. There is great potential in Big Data, but there are some fundamental challenges that have to be resolved before their full potential can be realized. A large data–set entails not only computational but also theoretical problems as it is often defined on a large spatial domain, so the spatial process of interest typically exhibits local homogeneity over that domain. 
The availability of massive spatial data that is becoming quite common emphasizes the need for developing new and computationally efficient methods tailored to handle such big data–sets. These problems have already been widely treated from a modeling approach and there is a lot of literature on the subject [12, 13, 35, 57]. On the contrary, the aspect of data collection, in particular the sample selection, has not yet been fully addressed. The situation becomes even more difficult to manage if we consider that these data have complex structures. The use of points, lines, polygons or regular grids to represent spatial units and their relationships is certainly more than an established practice, it is the way used in geography to simplify reality. Each data type in this list is represented by different modes and rules representing a major framework for the inherent complexity of the analysis of spatial phenomena. This complexity is related to the stage of data collection, in that various situations can arise for recording and describing a given phenomenon under alternative topologies. Reflecting on the evidence that spatial sampling algorithms can be very slow when the data– sets are made up of a large amount of data, our main purpose was to propose a fast method for random selection of units that are well spread in every dimension that can therefore be used in massive surveys without creating computational problems and that, by founding its ground on an attempt to set the concepts used by different approaches to the problem, have the flexibility deriving from a model–based (MB) approach and can be safely used even by the most strong supporters of sampling from finite populations within a design–based (DB) framework. The plan of the paper is as follows. Some preliminaries and basic concepts are given in Section 2 together with some motivation to spread the sample over a spatial population. After a review of the literature on spatially balanced samples, in Section 3 is introduced a fast algorithm that, selecting well–spread samples, seeks to exploit the spatial characteristics of the population. In Section 4 the performance of the proposed method is empirically compared with some reviewed competitors by a design–based simulation study. The efficiency appraisal is evaluated in terms of the root mean squared error (RMSE) of the estimates by using the simple random sampling (SRS) as benchmark. Different estimation scenarios have been assumed for varying sample sizes and with respect to real and artificial populations characterized by different spatial distributions. Finally, in Section 5 the main results of the paper are discussed and some addresses for further research are provided. 2 Basic concepts and preliminaries The adoption of a spatial model exploiting the topological nature of geographical entities can be formally tackled by introducing the following super–population model for spatial data [11]: let i ∈ Rd be a generic data location in a d–dimensional Euclidean space within a region U ⊆ Rd and Y i a random variable representing the potential datum for the attribute or target variable at spatial location i. Now let i vary over index set U so as to generate the random field {Y (i) : i ∈ U }. Within this model, space is featured in every respect by the set U , including topology and coordinates system, and location in U indexed by i. Notice that 4 F. Piersimoni, R. 
The data generating process can also lie over a continuous spatial domain even if, to simplify the problem, it is observed only in a selection, possibly made at random, of fixed points, or averaged over a selection of predefined polygons. Even though in this paper we focus our attention on situations of this kind, which can be traced back to usual finite population sampling, it is appropriate to underline that, in natural resources monitoring and estimation, continuous-domain problems cover an important part of all the possible sampling problems that arise in this field. There is a huge list of phenomena that can be observed at any site of a linear object, such as a river, or of a surface, as is the case for meteorological data. In these cases the resulting sample is a set of points or polygons whose possible positions are not predefined but chosen from an infinite set of possible sites. A spatial finite population U = {1, . . . , i, . . . , N} of size N is thus recorded on a frame, and from the geographical position and topology of each unit i we can derive, according to some distance definition, a matrix D_U = {d_ij; i = 1, . . . , N; j = 1, . . . , N} that specifies how far apart all the pairs of units in the population are. The use of the matrix D_U as a synthesis of the spatial information implies the hypothesis that the dependence does not change with the position of the unit i or with direction, i.e. that the random field Y(i) is homogeneous and isotropic [11]: its distribution does not change if we shift or rotate the space of the coordinates. D_U is a very important tool to emphasize the importance of spreading the sample over U, a property that can be related, through a variogram [11], to the spatial dependence of Y(i) but also to some form of similarity between adjacent units, such as spatial clustering or spatial stratification. If the units are points, for the definition of D_U we can simply resort to standard concepts of distance between sets of coordinates, but if they are polygons we should use as a distance the notion of contiguity between areal units, or better the order of contiguity, unless we want to transform polygons into points by identifying them with their centroids. As far as linear topology is concerned, a distance measurement cannot ignore assumptions about the distances between the sets of points that make up the lines. Some synthetic indexes such as the minimum and the maximum are always available, while for the average, as it is necessary to calculate the area between the two lines, serious computational problems could arise. A traditional objective of most surveys is the estimation of the total t_Y = Σ_{i∈U} Y(i) when we assume that U is a finite population, which becomes t_Y = ∫_U Y(i) di if U is not finite but a continuous surface. In order to estimate t_Y, a sample of units from U is selected by identifying their labels i on the frame, and then measures y_i of their corresponding values are collected. Considering only without-replacement samples, a convenient way to state this sample selection process is to assume that, for each unit i on the frame, a random vector S = {s_i; i = 1, . . . , N} is generated whose generic element s_i is equal to 1 if unit i is selected in the sample and 0 otherwise. If the sample size is fixed, then n = Σ_{i=1}^{N} s_i. The distribution P(S), whose support is S and which in principle is under the complete control of the sampler, defines what is generally referred to as the design of the sample survey.
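As a side note on the computational burden mentioned in the abstract, even just materializing D_U is quadratic in N. A tiny illustrative snippet, assuming point units and Euclidean distance (the coordinates and frame size are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.random((2000, 2))   # hypothetical frame of N = 2000 point units on the unit square
# Full distance matrix D_U: O(N^2) time and memory, the bottleneck for very large frames.
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
```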
The first–order inclusion probability that unit i is included in the sample is denoted by π_i = Σ_{S∋i} P(S), where S ∋ i means that the sum extends over the samples that contain i, while the second-order inclusion probability for units i and j is denoted by π_ij = Σ_{S∋{i,j}} P(S). Within the DB estimation framework we consider P(S) as the only source of randomness; thus, assuming that y_i is not a random field and is not affected by any measurement error, we can define the well-known, and widely used in practical applications, Horvitz and Thompson (HT) [49] estimator of t_Y as t̂_{HT,Y} = Σ_{i∈S} y_i/π_i, where i ∈ S means that the sum extends over the units i that have been selected in the sample S ∈ S. The equivalent of the HT estimator, and its properties, for continuous populations was extensively treated by [14]. The alternative MB estimation framework consists in assuming that the actual finite population {y_i} is only one of the possible realizations of the random field {Y(i)}. The population total can be decomposed as the sum of two components, t_Y = t_{Y_S} + t_{Y_S̄}, where Y_S is the set of y_i values observed in the sample and Y_S̄ the unobserved values in the remaining units of the population (S̄ ≡ U − S). Once the data collection is completed, t_{Y_S} is known, and the estimation problem reduces to predicting t_{Y_S̄} with t̂_{Y_S̄}, the sum of the predicted values ŷ_i arising from a suitable model ξ fitted to the observed data [9, 58]. It follows that, depending on which estimation approach we adopt, we should evaluate the expected value and the mean squared error (MSE) of the estimators according to different criteria: in a DB approach, E_S(t̂_{HT,Y} − t_Y) and E_S[(t̂_{HT,Y} − t_Y)^2] should be used, while E_ξ(t̂_{HT,Y} − t_Y | S) and E_ξ[(t̂_{HT,Y} − t_Y)^2 | S] are more appropriate within a MB approach, where E_S and E_ξ denote expectation with respect to the sample S ∈ S and to the model ξ, respectively. A vector of C covariates x_i = {x_{i1}, . . . , x_{ic}, . . . , x_{iC}} is usually available for each unit i in the frame, represented by the coordinates of the unit (if the units are points or centroids of polygons), the land use derived from a map, the elevation, remotely sensed data, administrative data and so on. Considering that we are dealing with spatial data, the practice is that the model ξ consists of two parts: a trend component that relates the outcome Y(i) to the auxiliaries x_i, and an autocorrelation component that takes advantage of the knowledge of D_U to fit a variogram to the observed data. This model is known as kriging [11, 19] and, considering its theoretical properties and its popularity in practical applications, it is surely one of the best candidates for ξ. To select S, the logic of the optimal design is conceived so that preferential sampling [18] can be used, allowing explicitly for the minimization of a criterion Φ linked to some summary statistic Q, usually a utility function, arising from model ξ [61]. More formally, the set of n units that constitute the sample S, and possibly a set of weights F associated with the sample, are the result of an optimization problem such as max_{S,F} Q(ξ) [43, 45, 46]. This is a combinatorial optimization problem that can be solved through heuristics or by using the well-known Simulated Annealing algorithm, which has been shown to provide promising results [4].
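For readers who prefer code to notation, a minimal sketch of the HT estimator just defined, checked under SRS without replacement where π_i = n/N (an illustration only; the data and variable names are assumptions):

```python
import numpy as np

def ht_total(y_sample, pi_sample):
    """Horvitz-Thompson estimator of the population total from the sampled
    y-values and their first-order inclusion probabilities."""
    return np.sum(np.asarray(y_sample) / np.asarray(pi_sample))

# Toy check under SRS without replacement (pi_i = n/N), where the HT estimator
# reduces to N times the sample mean.
rng = np.random.default_rng(0)
N, n = 1000, 50
y = rng.gamma(2.0, 3.0, size=N)        # hypothetical target variable
pi = np.full(N, n / N)                 # equal first-order inclusion probabilities
estimates = []
for _ in range(2000):
    s = rng.choice(N, size=n, replace=False)
    estimates.append(ht_total(y[s], pi[s]))
print(y.sum(), np.mean(estimates))     # design unbiasedness: the two values should be close
```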
Alternatively, a design criterion motivated by Bayesian learning was suggested and estimated by MCMC [22] or, if the estimation of Φ is complex because the likelihood is intractable, approximate Bayesian computation has been shown to be a feasible solution [31, 32]. In a MB perspective, the concern is necessarily to find the sample configuration that best represents U, maximizing an objective function defined over the whole set S of possible samples which, in a spatial context, will surely depend on the loss of information due to spatial autocorrelation. The resulting optimal sample is selected with certainty, which is of course not advised if we assume the randomization hypothesis that is the background for DB inference [48, 56, 60]. Moreover, as clearly stated by [12]: “A modelbased approach seems therefore necessary under preferential sampling, such as assuming a spatial-statistical model for s(·). However, this does not mean that the basic design-based notions of randomisation, stratification, and clustering cannot be used in a preferential–sampling approach, since they are all useful tools that lead to a better representation of a heterogeneous population. In particular, what happens when the model . . . does not adequately describe the spatial variability in y(·) and s(·)? The optimality . . . , and the validity of any consequent inference depends critically on the appropriateness of this model”. The answer to these questions is, of course, a highly debated topic, not only for theoretical but also for practical reasons, starting from the evidence that, if we do not assume that a model holds for the measurement errors, the potential observations over each unit of the population, being of a deterministic nature, cannot be considered as dependent. Thus it is clear that sampling schemes to be reasonably adopted for spatial units in a DB context need a suitable alternative to the concept of spatial dependence. A useful property of random samples, that of being spatially balanced, has been introduced for this purpose by [53]. To define a spatial balance index (SBI), the use of Voronoi polygons is required for any sample S. Such a polygon, for the sample unit i, includes all population units closer to i than to any other sample unit j. If we let ν_i be the sum of the π_v of all units v in the i-th Voronoi polygon, for any sample unit we have E(ν_i) = 1. Thus the SBI:

\[ \mathrm{SBI}(S) = \operatorname{Var}(\nu_i) = \frac{\sum_{i\in S}(\nu_i - 1)^2}{n}, \tag{1} \]

can be used as a measure of the distance from the state of perfect spatial balance (SBI(S) = 0). A sample S is considered spatially balanced when SBI(S) ≤ ω_sb, as every ν_i should be close to 1, where ω_sb is an acceptable upper bound. Note that the role played by the SBI is exactly the same as that of Φ: it is nothing more than a criterion for assessing the suitability of S but, depending on the π_i s, its use is more appropriate in a design–based framework. Anyway, its minimization would again lead to a purposive sample that would not solve the problem of the randomization of S. A way to recover the use of autocorrelation in a DB sampling strategy, or at least to restore its acceptability, is to consider that in survey design an upper bound on the sampling error is usually imposed on the auxiliary variables rather than on the target variable.
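A small sketch of how the SBI in (1) can be computed for point units, forming the Voronoi cells by nearest-sampled-unit assignment (an illustration under assumed Euclidean distances; it is not the implementation of any of the cited packages):

```python
import numpy as np

def spatial_balance_index(coords, pi, sample_idx):
    """SBI of eq. (1): for each sampled unit, nu_i is the sum of the inclusion
    probabilities of the population units falling in its Voronoi cell, i.e. the
    units closer to it than to any other sampled unit."""
    coords = np.asarray(coords, dtype=float)
    pi = np.asarray(pi, dtype=float)
    sample_idx = np.asarray(sample_idx)
    # distance of every population unit to every sampled unit
    d = np.linalg.norm(coords[:, None, :] - coords[sample_idx][None, :, :], axis=2)
    owner = np.argmin(d, axis=1)          # index (within the sample) of the nearest sampled unit
    nu = np.bincount(owner, weights=pi, minlength=len(sample_idx))
    return np.mean((nu - 1.0) ** 2)       # Var(nu_i); E(nu_i) = 1 by construction

# Example: 1000 points in the unit square, equal inclusion probabilities n/N
rng = np.random.default_rng(1)
N, n = 1000, 50
xy = rng.random((N, 2))
pi = np.full(N, n / N)
srs = rng.choice(N, size=n, replace=False)
print(spatial_balance_index(xy, pi, srs))  # well-spread designs push this value towards 0
```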
This choice is usually motivated by practitioners on the basis of the hypothesis that a survey that bases its design on specific levels of precision for a set of auxiliary variables will approximately yield the same sampling errors for the target variable. However, in realistic cases of practical interest, this assumption is unlikely to hold, and appreciable differences are often measured between the covariates and Y(i). In such situations the suggested design could be incorrect: using x_i as a proxy for Y(i) could lead to underestimating the sample size needed to reach a predetermined level of precision. An appealing and widely used alternative is to assume a model that links Y(i) to x_i. The assumption underlying this solution is that it is possible to estimate the unknown parameters of such models from data collected in previous surveys. This concept of anticipated variance (AV, or anticipated MSE if the estimator is biased) was introduced by Isaki and Fuller [34] and is very useful to overcome the hitch of not being able to introduce, at least in the design of the sample, a stochastic model that can take into account the dependence of spatial units. It is defined as the average of the DB variance of t̂_{HT,Y}, the HT estimator of the total of Y, under a stochastic model ξ. To predict the Y(i)s we assume that, for each unit i of U, a linear model ξ holds, given the known auxiliaries x_i:

\[
\begin{cases}
Y(i) = x_i^{T}\beta + \epsilon_i\\
E_{\xi}(\epsilon_i) = 0\\
\operatorname{Var}_{\xi}(\epsilon_i) = \sigma_i^{2}\\
\operatorname{Cov}_{\xi}(\epsilon_i,\epsilon_j) = \sigma_i\sigma_j\rho_{ij},
\end{cases}
\tag{2}
\]

where E_ξ, Var_ξ and Cov_ξ denote respectively expectation, variance and covariance with respect to the model ξ, β is a vector of regression coefficients, ε_i is a random variable with variance σ_i^2, and ρ_ij is its autocorrelation coefficient. The AV of the unbiased HT estimator of the total of Y, under model (2), is:

\[
AV\!\left(\hat t_{HT,Y} - t\right) = E_S\!\left[\left(\left(\sum_{i\in S}\frac{x_i}{\pi_i} - \sum_{i\in U} x_i\right)^{T}\beta\right)^{2}\right] + \sum_{i\in U}\sum_{j\in U}\sigma_i\sigma_j\rho_{ij}\,\frac{\pi_{ij}-\pi_i\pi_j}{\pi_{ij}}.
\tag{3}
\]

The two components of (3) represent, respectively, the error implicitly introduced in the estimate of the totals of the auxiliary variables and the existing dependence of the population units. It is clear that the uncertainty of the estimates can be reduced by constraining the selected units to respect the average value of the population's covariates and, assuming that the autocorrelation coefficient decreases as the distance d_ij between the selected units i and j increases, by selecting units as far apart as possible. The logic derived from the AV criterion may, however, sound like an excuse to justify the forced introduction of a model ξ within a DB framework. In this regard, it is interesting to note that the authors of this proposal justify the designs derived from it within a MB and not a DB framework [29]: “Even though we justify the method by using a superpopulation model, the inference is based on the sampling design. The HT estimator will be efficient if the population is close to a realization from the model, but the estimator maintains desirable properties like design unbiasedness and design consistency even if the model is not properly specified”. Another motivation for spatially balanced samples, perhaps less forced and even less arbitrary, has been recently proposed [5] on the basis of the so-called decomposition lemma, a useful though not so well-known result in sampling theory, which states that [39, p. 87]:
\[
\sigma^{2}_{\breve y} = V_S\!\left(\bar{\breve y}_S\right) + \frac{n-1}{n}\,E_S\!\left(s^{2}_{\breve y,S}\right),
\tag{4}
\]

where y̆ is the vector of the expanded values of the target variable y, whose generic element is y̆_i = π_i^{-1} y_i; σ^2_y̆ is the constant and unknown population variance of the variable y̆; V_S(ȳ̆_S) is the variance between samples of the HT estimator of the mean, ȳ̆_S = (1/n) Σ_{i∈S} y̆_i; and E_S(s^2_{y̆,S}) is the expectation of s^2_{y̆,S} = (1/(n−1)) Σ_{i∈S} (y̆_i − ȳ̆_S)^2, i.e. the within-sample variance according to the design P(S) (for details see [39, ch. 3]). It can be seen from (4) that a gain in the efficiency of the HT estimator can be realized either by setting the π_i s in such a way that y̆ is approximately constant [18, p. 53], or by defining a design P(S) that increases the expected within-sample variance, or both. Provided that we are dealing with samples S with fixed and known π_i s, the intuitive explanation for this is that, the unknown variance of the population being constant, the only way to reduce the first term on the right side of (4) is to increase the second term. Thus, we should set the probability P(S) of selecting a sample S proportional, or even more than proportional, to s^2_{y̆,S}. Unfortunately, this proposal risks remaining purely theoretical, as this parameter is unknown, being relative to the unobserved target variable y. In the spatial interpolation literature [11, 19], the assumption is often adopted that the distance matrix D_U is highly related to the variance of a variable observed on a set of geo–referenced units. The variogram (or semivariogram), and its shape, is a classical tool for choosing how and to what extent the variance of y̆ is a function of the distance between the statistical units. Thus, when dealing with spatially distributed populations, a promising candidate to play the role of a proxy for s^2_{y̆,S} is D_U. The intuitive requirement that a sample should be well spread over a study region is justifiable if and only if there are reasons to assume that s^2_{y̆,S} is a monotone increasing function of D_U. This will surely happen when y̆ has a linear or monotone spatial trend or when there is spatial dependence. It follows that the distance matrix D_S between sample units, if appropriately synthesized through an index M(D_S), could be an additional criterion playing the same role as the SBI and Φ, and it is probably much simpler and faster to evaluate. Finally, noting that the Sen-Yates-Grundy (SYG) formulation of the estimator of the sample variance:

\[
\hat V_{SYG}\!\left(\hat t_{y}\right) = -\frac{1}{2}\sum_{i\in S}\sum_{j\in S}\frac{\pi_{ij}-\pi_i\pi_j}{\pi_{ij}}\left(\breve y_i - \breve y_j\right)^{2},
\tag{5}
\]

is very similar to the definition of the variogram, another interesting motivation for the practice of spreading the sample can be derived from the assumption that, as the distance d_ij between two units i and j increases, the difference (y̆_i − y̆_j)^2 between the values of the survey variable is expected to increase. As a result, an efficient design, in order to give less weight to the expected most relevant differences (y̆_i − y̆_j)^2, should use a set of π_ij strictly and positively related to d_ij.

3 Algorithms for random selection of spatially balanced samples

The need to avoid, as much as possible, including contiguous or too-close spatial units in the same sample is very much felt in real periodic surveys such as those used to estimate land use and land cover [6, ch. 2] or to implement a forestry inventory [30, 41].
The strategies adopted in planning these surveys invariably invoke simple and consolidated methods of sampling theory that have the drawback of not being specifically implemented to solve this problem. These solutions often involve systematic selection with a predetermined step and a random starting point, partitioning the area in exactly n contiguous strata from which randomly pick up only one unit per stratum and multi-stage samples with primary units represented by spatial aggregates. Each of these proposals has proved not to be appropriate to capture the potential efficiencies deriving from the existence of spatial trends or dependence. The main reason for these difficulties lies in the evidence that, if the population does not lie on a regular grilling, it is almost impossible to define a sampling step in the absence of a unit order and it is highly subjective to set up a partition of the region in n parts. The choices made to overcome these shortcomings seriously affect the results. One of the first attempts to formalize the problem, increasing the amount of information collected by avoiding the selection of pairs of contiguous units, was done in the pioneering work [33] that, however, explicitly requires the introduction of an exogenous ordering of the population units. The line drawn from this preliminary work has remarkably influenced the approaches adopted in the following studies on this topic. Many algorithms, even recently proposed in the literature [53], are primarily based on the search for a one-dimensional sorting of multidimensional units by studying the best mapping of Rd in R, while trying to preserve some multidimensional features of the population, and then use this induced ordering to systematically select the sample units. The fundamental principle is to extend the use of systematic sampling to two or more dimensions, even when the population is not a regular grid. Developed by the Environmental Protection Agency (EPA) and widely used in most of its surveys, the Generalized Random Tessellation Stratified (GRTS) design [53] is mainly based on such a logic. A multilevel grid hierarchy is used to index the units through a tree structure. The two parts of the AV (3) suggested instead the use of two different classes of methods, the first derived from the restriction of the sample space considered acceptable and the second from the assumption that dependence between units decreases as their distance increases. Despite the assonance of the two names, the notion of spatially balanced samples is quite far from that of balanced samples used to denote those samples that respect the known totals of Fast Selection of Spatially Balanced Samples 9 a set of auxiliary variables, their selection is made at random on a restricted support S̄ ⊂ S of all the possible samples [54,55,56]. Thus, for given πi s the set of balanced samples are defined as all those samples S for which it occurs |t̂HT,xc − tHT,xc | ≤ ωb , ∀ c = {1, . . . , C}, where ωb is an acceptable upper bound. These restrictions represent the intuitive requirement that the sample estimates of the total of a covariate should be as close as possible to the known total of the population. In a spatial context, this constraint could be applied by imposing that, for any S , the first p moments of each coordinate  should coincide with the first p moments of the population, implicitly assuming that Y i follows a polynomial spatial trend of order p. 
This logic was subsequently extended to approximate any nonlinear trends through penalized splines with particular reference to the space [8]. The CUBE algorithm [10,17], proposed as a general (not spatial) solution, is the tool used to achieve these samples. In recent studies [5,6], these sampling designs have demonstrated that they can effectively exploit the presence of linear and nonlinear  trends but are often unable to capture the presence of dependence or homogeneity in Y i . The second class is instead more articulated and consisting of algorithms of different nature. A selection strategy conceived with the clear goal of minimizing (1) should use DU as the only known information available on the spatial distribution of the sample units. Its use implies the adoption of the already mentioned intuitive criterion that units that are close should seldom appear simultaneously in the sample. Recalling (5), this request can be claimed as reasonable if we expect that, increasing dij , the difference (y̆i − y̆j )2 always increases. In such a situation, it is clear that the variance of the HT estimator will necessarily decrease if we set high joint inclusion probabilities to couples with very different values of the target variable as they are far each other. Following an approach based on distances, inspired by purely MB assumptions on the dependence of the stochastic process generating the data, [1] suggested a sampling strategy: the dependent areal units sequential technique (DUST). Starting with a unit selected at random, say i, at every step t < n, the selection probabilities are updated according to a multiplicative rule depending by a tuning parameter useful to control the distribution of the sample over the study region. This algorithm, or at least the design that it implies, can be easily interpreted and analyzed in a DB perspective in particular referring to a careful estimation and analysis of its first and second order inclusion probabilities. Another solution, based on a classic list sequential algorithm [54], was suggested by [26]. Introduced as a variant of the correlated Poisson sampling, in the SCPS (Spatially Correlated Poisson Sampling) for each unit, at every step t, it updates the inclusion probabilities according to a rule in such a way that the required inclusion probabilities are respected. The suggested maximal weights criterion, used to update the inclusion probabilities at each step, provides as much weight as possible to the closest unit, then to the second closest unit and so on. The resulting sample is then obtained in n ≤ t ≤ N steps depending on the randomness according to which the last unit is added. A procedure to select samples with fixed πi and correlated πij was derived by [28] as an extension of the pivotal method initially introduced to select π ps samples [16]. It is essentially based on an updating rule of the probabilities that at each step should locally keep the sum of the updated probabilities as constant as possible and differ from each other in a way not to choose the two nearby units. This method is referred to as the local pivotal method (LPM) and obtain a sample in, again, n ≤ t ≤ N steps. This solution has been also integrated with the balancing property, typical of the CUBE algorithm, to simultaneously minimize both the two members of the AV (3) [29]. A comprehensive review of the main spatially balanced samples selection methods can be found in [7]. 
According to (4), attention should be moved to the definition of a design with probability proportional to some synthetic index M(·) of the within-sample distance matrix observed within each possible sample: P(S) = M(D_S) / Σ_{S∈S} M(D_S) [5]. This innovative proposal is quite risky because extracting realizations from the complete P(S) could make it very difficult to control or specify the π_i s and the π_ij s, whose knowledge is required for the HT estimator to yield total and variance estimates. Despite this initial difficulty, it may open a very interesting window on how to use indices such as Φ or the SBI within a DB framework. Selecting the whole sample with probability proportional to the index, instead of maximizing or minimizing the index itself, could remove all the doubts that sometimes arise from choosing the units with certainty. Interpreted in this way, this proposal could thus represent a bridge between the MB and DB frameworks for setting up a spatial sampling strategy. Among the possible summary indexes M(·) of the distance matrix, the most promising results were provided by M(D_S) = ∏_{i∈S} ∏_{j∈S, j≠i} d_ij^γ, introducing a design proportional to the product of the within-sample distances (PWD) that depends on γ, a tuning parameter that can be used without limits to increase or decrease the effect of the distance matrix on the spread of the sample over the study region. In [5] it is suggested to start with an SRS without replacement and then repeatedly and randomly exchange a unit included in the sample with a unit not included in the sample, with probability equal to the exponential of the ratio between the two indexes before and after the exchange. This is an iterative MCMC procedure that, for a suitable choice of the number of iterations, generates a random outcome from the multivariate distribution P(S). These procedures are quick and efficient and thus have practical applicability in real spatial surveys if the size N of U is not prohibitive, the main reason being that the number of attempts or steps needed to select a sample strictly depends on N. According to the definition [54, p. 35] “A sampling design of fixed sample size n is said to be draw–by–draw if, at each one of the n steps of the procedure, a unit is definitively selected in the sample”, a possible way to reduce the computational burden is to look for a draw–by–draw alternative to the listed methods. With regard to the PWD, which derives its probabilities from the product of the distances between units, a natural candidate arises from the idea of iteratively updating, through a product, the π_i s at each step. The algorithm suggested in this paper starts by randomly selecting a unit i with equal probability. Then, at every step t ≤ n, the algorithm updates the selection probabilities π_j^t of every other unit j of the population according to the rule (see Figure 1):

\[
\pi_j^{t} = \frac{\pi_j^{t-1}\,\bar d_{ij}}{\sum_{j\in U}\pi_j^{t-1}\,\bar d_{ij}} \qquad \forall j \in U,
\tag{6}
\]

where d̄_ij = Ψ(d_ij^γ) is an appropriate transformation applied to D_U in order to standardize it so that it has known and fixed products by row, ∏_{j≠i, j∈U} d̄_ij, and by column, ∏_{i≠j, i∈U} d̄_ij. Notice that, as d_ii = 0 by the definition of distance, π_i^t is necessarily equal to 0 for all units i already selected in the sample in previous steps, thus preventing random selection with replacement.
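A compact sketch of the draw-by-draw selection implied by rule (6) follows. For brevity, the standardization Ψ that fixes the row and column products of the distance matrix is omitted and d̄_ij is simply taken as d_ij^γ, so the sketch illustrates the mechanics of the update rule rather than reproducing the exact HPWD inclusion probabilities; names and parameter values are assumptions.

```python
import numpy as np

def hpwd_sample(coords, n, gamma=1.0, rng=None):
    """Draw-by-draw HPWD-style selection: the first unit is taken with equal
    probability, then at each of the remaining steps the selection probabilities
    of the not-yet-selected units are updated multiplicatively by their
    (transformed) distance from the unit just selected, as in rule (6)."""
    if rng is None:
        rng = np.random.default_rng()
    coords = np.asarray(coords, dtype=float)
    N = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    dbar = d ** gamma                    # stand-in for the standardized distances dbar_ij
    p = np.full(N, 1.0 / N)              # current selection probabilities
    sample = []
    for _ in range(n):
        i = rng.choice(N, p=p)
        sample.append(i)
        # rule (6): multiply by the distance from the newly selected unit and renormalize;
        # dbar[i, i] = 0, so already-selected units keep probability zero.
        p = p * dbar[i]
        p /= p.sum()
    return np.array(sample)

# Example: select 50 units from 1000 random points; a larger gamma spreads the sample more.
rng = np.random.default_rng(2)
xy = rng.random((1000, 2))
s = hpwd_sample(xy, n=50, gamma=5.0, rng=rng)
```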
Criterion (6) can basically be considered as a heuristic method (HPWD) to generate samples with approximately the same probabilities as the PWD but with a much smaller number of steps. From a practical point of view, the idea behind (6) is very similar to the DUST method already suggested in the literature [1], although with two substantive differences: (6) is motivated by solid theoretical bases, while DUST did not find any justification according to either a MB or a DB logic; and in the HPWD it is possible to control the π_i s, while with DUST this is impossible, with the result that the HT estimator systematically yields irreparably biased estimates.

Fig. 1 First order inclusion probabilities π_j^t of the HPWD for a population on a 100 × 100 regular grid after the selection of the first t = 1, . . . , 6 units with the suggested algorithm (darker is lower and lighter is higher).

In [5] it is advised that the π_i s and the π_ij s of the PWD follow the rules:

\[
\hat\pi_i = k_1\left(\prod_{j=1}^{N} d_{ij}\right)^{k_2},
\tag{7}
\]

\[
\hat\pi_{ij} = k_3\left(\prod_{i=1}^{N} d_{ij}\times\prod_{j=1}^{N} d_{ij}\right)^{k_4}\left(d_{ij}\right)^{k_5},
\tag{8}
\]

where k_1, k_2, k_3, k_4 and k_5 are parameters mainly depending on the sample and population sizes. Notice that, to ensure the symmetry of the π_ij s, the row and column marginal products of the distance matrix have the same parameter. From (7) it is clear that, if we want to fix the π_i s, it is enough to standardize the d_ij s into the d̄_ij s by specifying an appropriate standardizing function Ψ. For this role, [5] suggests iteratively constraining the row (or column) sums of the log-transformed matrix to known totals. When the known totals are all equal to a constant, this is known to be a very simple and accurate method to scale a symmetric matrix to a doubly stochastic matrix [36]. To verify whether (7) and (8) can be used as working rules to specify the π_i s, we can generate as many independent replicates from the HPWD as needed, and the π_i s and the π_ij s may be estimated by the frequency with which a unit or a pair of units is selected. These π̂_i s and π̂_ij s can both be adopted in the estimation process instead of their theoretical counterparts [20, 21], and they can also be modeled to verify the fit of (8).

Table 1 Relative efficiency of the estimated first order inclusion probabilities CV(π̂_i) (9) of the HPWD, estimated in 100,000 replicated samples from the 5 × 5 and 10 × 10 regular grid populations, for different sample sizes and γ.

            5 × 5               10 × 10
γ      n = 5   n = 10    n = 5   n = 10   n = 20   n = 40
1      1.852    0.653    2.864    2.688    1.477    0.453
5      8.543    2.457   13.373    8.082    3.659    1.469
10    12.722    5.297   21.358   13.131    6.080    2.583

To this purpose we selected 100,000 replicated samples of size n = {5, 10} from a 5 × 5 regular grid and of size n = {5, 10, 20, 40} from a 10 × 10 regular grid, using the HPWD design with 3 different values of the parameter γ = {1, 5, 10}. As far as the empirical frequencies of units in the selected samples are concerned, Table (1) shows the values of the index:

\[
CV(\hat\pi_i) = \frac{\sqrt{\dfrac{\sum_{i\in U}\left(\hat\pi_i - \dfrac{n}{N}\right)^{2}}{N}}}{\dfrac{n}{N}}\times 100.
\tag{9}
\]

From the results in Table (1) we can observe that the π_i s are approximately constant once the row (and column) products of the d̄_ij s are fixed as constant, with an error that increases with γ and decreases as n increases.
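The Monte-Carlo check just described can be sketched as follows, reusing the hpwd_sample function from the previous snippet (again an illustration; the replicate count is arbitrary and, for large runs, the distance matrix should be precomputed outside the loop):

```python
import numpy as np

def empirical_cv_pi(coords, n, gamma, replicates=1000, rng=None):
    """Monte-Carlo check of eq. (9): estimate the first-order inclusion
    probabilities by the frequency with which each unit enters repeated
    HPWD-style samples, then compute CV(pi_hat) against the target n/N."""
    if rng is None:
        rng = np.random.default_rng()
    N = len(coords)
    counts = np.zeros(N)
    for _ in range(replicates):
        counts[hpwd_sample(coords, n, gamma, rng)] += 1
    pi_hat = counts / replicates
    target = n / N
    return np.sqrt(np.mean((pi_hat - target) ** 2)) / target * 100.0
```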
The first effect is probably due to the fact that increasing γ necessarily increases the concentration of the d¯ij s in few very high values and many low values, which undoubtedly reduces the accuracy of the matrix standardization and of the logarithmic transformation. While the second aspect occurs as increasing the number of random selections the empirical π̂i s are closer to the theoretical πi s. With regard to the π̂ij s the evidence from Figure (2) and Table (2) is that (8) fits enough well supporting the assumption of such a relationship between the distance and the estimated second order probabilities. However, some trends can be noticed not only in R2 but also in the intercept (k̂3 ) and in the slope (k̂5 ) estimated through ordinary least squares (OLS) on a logarithmic transformation of (8). As expected increasing γ the effects of the distance on the π̂ij s are more sensible leading to a more scattered spatial distribution of the sampling units. An evidence that is mitigated by the increase of n as in this case the HPWD has less space to better spread the units. Regarding the evaluation of the πij s we should also consider that their use is expected only in HT variance estimation and that a spatial design producing well spread samples necessarily will have very small joint inclusion probabilities for nearby units. Even though HPWD has the advantage to produce πij s that can be shown to be always strictly positive, although it is still unbiased, the HT variance estimator could become extremely unstable if there is some very small πij . Thus, using HPWD, the HT estimation of variance is always possible, and this is a clear advantage, but it cannot always be recommended as a good solution. In such circumstances it could be better the use of alternatives variance estimators as a MB approach to the problem [3] or the local variance estimator suggested by [52] that represents a general solution as it does not need the πij s. Fast Selection of Spatially Balanced Samples 13 Fig. 2 Second order inclusion probabilities π̂ij of the HPWD estimated in 100,000 replicated samples in the 5 × 5 and 10 × 10 regular grid populations for different sample sizes and γ = 1. Table 2 Regression parameters of the model (8) for the π̂ij of the HPWD estimated in 100,000 replicated samples in the 5 × 5 and 10 × 10 regular grid populations for different sample sizes and γ. 
population grid 5×5 5×5 5×5 5×5 5×5 5×5 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 10 × 10 ∗∗∗ γ 1 5 10 1 5 10 1 5 10 1 5 10 1 5 10 1 5 10 n   log k̂3 k̂5 R2 5 5 5 10 10 10 5 5 5 10 10 10 20 20 20 40 40 40 −3.377(0.006)∗∗∗ 0.718(0.012)∗∗∗ 0.943 0.846 0.806 0.872 0.709 0.660 0.917 0.830 0.763 0.862 0.774 0.599 0.782 0.591 0.540 0.722 0.406 0.337 −3.025(0.045)∗∗∗ 3.022(0.085)∗∗∗ −2.668(0.096)∗∗∗ 5.563(0.180)∗∗∗ −1.842(0.005)∗∗∗ 0.388(0.010)∗∗∗ −1.606(0.027)∗∗∗ 1.205(0.051)∗∗∗ −1.426(0.048)∗∗∗ 1.918(0.091)∗∗∗ −6.149(0.003)∗∗∗ 0.831(0.004)∗∗∗ −4.951(0.052)∗∗∗ 5.799(0.073)∗∗∗ −4.116(0.074)∗∗∗ 11.453(0.105)∗∗∗ −4.587(0.003)∗∗∗ 0.598(0.004)∗∗∗ −4.066(0.014)∗∗∗ 2.235(0.020)∗∗∗ −3.039(0.052)∗∗∗ 5.485(0.074)∗∗∗ −3.157(0.002)∗∗∗ 0.361(0.003)∗∗∗ −2.916(0.010)∗∗∗ 0.988(0.014)∗∗∗ −2.714(0.017)∗∗∗ 1.607(0.024)∗∗∗ −1.789(0.001)∗∗∗ 0.165(0.002)∗∗∗ −1.725(0.004)∗∗∗ 0.316(0.006)∗∗∗ −1.685(0.007)∗∗∗ 0.430(0.010)∗∗∗ p < 0.001, ∗∗ p < 0.01, ∗ p < 0.05 Table 3 Relative efficiency of the estimated first order inclusion probabilities CV (π̂i ) (9) of the HPWD in 10,000 replicated samples in the LUCAS - Emilia Romagna population, Meuse river, clustered and sparse simulated populations for different sample sizes and γ. LUCAS - Emilia Romagna Meuse river Sim. Clustered Population Sim. Sparse Population γ n = 100 n = 300 n = 600 n = 10 n = 50 n = 50 n = 100 n = 200 n = 50 n = 100 n = 200 1 7.658 4.480 3.001 5.695 2.271 10.542 7.616 5.788 5.766 5.396 5.153 5 11.183 7.505 5.519 18.393 12.329 28.539 28.146 26.747 21.514 26.076 26.237 10 16.264 11.682 8.864 32.002 20.952 43.558 47.529 40.981 37.762 44.454 41.986 14 F. Piersimoni, R. Benedetti   RM SE Table 4 Relative efficiency of the sample mean RM and average SBI for each design estimated SESRS in 10,000 replicated samples in the clustered and sparse populations for different sample sizes, trend and autocorrelation. 
Design GRTS LPM SCPS PWD1 PWD5 PWD10 HPWD1 HPWD5 HPWD10 GRTS LPM SCPS PWD1 PWD5 PWD10 HPWD1 HPWD5 HPWD10 GRTS LPM SCPS PWD1 PWD5 PWD10 HPWD1 HPWD5 HPWD10 n 50 50 50 50 50 50 50 50 50 100 100 100 100 100 100 100 100 100 200 200 200 200 200 200 200 200 200 Clustered Population Sparse Population No Trend Linear Trend No Trend Linear Trend Autocorrelation Autocorrelation Average Autocorrelation Autocorrelation Average Medium High Medium High SBI Medium High Medium High SBI 0.963 0.612 0.519 0.349 0.308 0.990 0.777 0.549 0.520 0.132 0.935 0.591 0.498 0.322 0.265 0.991 0.698 0.522 0.449 0.094 0.935 0.592 0.498 0.336 0.279 0.999 0.687 0.515 0.437 0.093 0.946 0.624 0.515 0.359 0.272 1.002 0.774 0.542 0.509 0.136 0.957 0.491 0.460 0.246 0.135 1.004 0.619 0.503 0.402 0.065 0.975 0.466 0.443 0.221 0.104 0.991 0.566 0.487 0.385 0.050 0.951 0.611 0.507 0.339 0.269 1.003 0.760 0.524 0.478 0.123 0.935 0.504 0.470 0.262 0.166 1.005 0.633 0.502 0.393 0.067 0.955 0.466 0.471 0.247 0.147 1.002 0.597 0.494 0.382 0.058 0.924 0.557 0.481 0.299 0.230 0.959 0.669 0.514 0.425 0.142 0.904 0.521 0.462 0.280 0.191 0.965 0.629 0.501 0.385 0.105 0.908 0.517 0.474 0.281 0.203 0.968 0.630 0.508 0.387 0.104 0.919 0.565 0.480 0.315 0.213 0.981 0.704 0.514 0.446 0.149 0.907 0.443 0.430 0.225 0.107 0.941 0.538 0.477 0.354 0.078 0.907 0.416 0.421 0.214 0.093 0.935 0.515 0.474 0.342 0.068 0.925 0.542 0.467 0.293 0.205 0.973 0.674 0.510 0.417 0.136 0.888 0.444 0.440 0.236 0.121 0.957 0.549 0.474 0.346 0.080 0.894 0.417 0.432 0.222 0.103 0.948 0.510 0.471 0.339 0.070 0.900 0.505 0.454 0.253 0.199 0.947 0.613 0.491 0.387 0.169 0.874 0.466 0.429 0.233 0.164 0.938 0.551 0.478 0.336 0.132 0.877 0.459 0.425 0.234 0.172 0.941 0.550 0.473 0.333 0.135 0.895 0.512 0.448 0.263 0.202 0.939 0.618 0.483 0.389 0.179 0.829 0.395 0.400 0.197 0.123 0.889 0.473 0.450 0.285 0.109 0.826 0.389 0.401 0.191 0.118 0.887 0.469 0.452 0.280 0.104 0.890 0.483 0.436 0.249 0.191 0.947 0.589 0.486 0.362 0.166 0.836 0.400 0.399 0.196 0.125 0.902 0.470 0.456 0.284 0.109 0.810 0.386 0.385 0.182 0.112 0.864 0.443 0.452 0.268 0.097   RM SE Table 5 Relative efficiency of the sample mean RM and average SBI for each design estimated SESRS in 10,000 replicated samples in the Meuse river population for two different sample sizes. Sample Size n = 10 Sample Size n = 50 Design Cadmium Copper Lead Zinc Av. SBI Cadmium Copper Lead Zinc Av. SBI GRTS 0.897 0.887 0.884 0.908 0.173 0.839 0.809 0.806 0.838 0.176 LPM 0.884 0.869 0.862 0.901 0.129 0.762 0.724 0.736 0.757 0.133 SCPS 0.883 0.864 0.853 0.893 0.120 0.755 0.708 0.705 0.735 0.138 PWD1 0.907 0.895 0.905 0.915 0.178 0.801 0.770 0.774 0.797 0.185 PWD5 0.770 0.782 0.773 0.799 0.086 0.688 0.631 0.647 0.672 0.119 PWD10 0.704 0.747 0.727 0.747 0.059 0.664 0.602 0.619 0.646 0.110 HPWD1 0.902 0.888 0.919 0.915 0.181 0.798 0.760 0.775 0.805 0.189 HPWD5 0.767 0.782 0.784 0.797 0.085 0.678 0.621 0.648 0.671 0.125 0.722 0.754 0.735 0.754 0.058 0.671 0.609 0.626 0.649 0.112 HPWD10 Fast Selection of Spatially Balanced Samples 15 Fig. 3 Spatial distribution of the simulated populations: (a) clustered and (c) sparse. The variograms are reported for Medium and High autocorrelation without trend (b) and for No Trend and Linear Trend without autocorrelation (d). 
4 Sampling Designs Comparison on Artificial and Real Populations In this Section the performance of the HPWD is empirically compared with respect to alternative spatially balanced designs via 10,000 sample replications, which have been carried out by using the free software environment for statistical computings R [47]. In particular, we used the following R packages: BalancedSampling [27], and spsurvey [38]. For further details on the R codes that can be employed to randomly select spatially balanced samples, see [6, ch. 7]. As possible alternatives to the HPWD, we considered the GRTS [53], the SCPS [26], the LPM [28] and the PWD [5]. In addition, for PWD and HPWD, three values for the parameter γ ∈ {1, 5, 10} were tried, indicated as suffix after the acronym of the design. To be comparable each other, all these alternative designs have been used setting the first order inclusion probabilities constant and equal to n/N . The comparison between different designs has been performed by using the (RMSE) of the HT estimates as relative to the RMSE obtained when using a SRS design that is used, thus, as a scale factor to remove the known effects of the size of the population N and of the sample n on the sampling errors. It is worth noticing that in every simulation performed, as the HT estimator is unbiased, the RMSEs were always very close to the standard error of 16 F. Piersimoni, R. Benedetti Fig. 4 Spatial distribution of the Meuse river data (a). The fitted exponential variogram models are reported for the variables Cadmium (b), Copper (c) and Zinc (d). each design as the bias can be considered negligible. The experiments carried out concern both real and artificial populations. The latter are two frames of size N =1,000 generated through point processes with two different levels of clustering of the units to control the distribution of the coordinates and with different spatial  features of the response variable Y i . The coordinates {x1 , x2 } are generated in the unit square [0, 1]2 according to a Neyman-Scott process with intensity of the cluster centers equal to 10 [59] with 100 expected units per cluster. Two different scale parameters for cluster kernel were used {0.005, 0.03}, representing respectively an highly clustered and a sparse population of spatial units (see Figure 3). For each frame four possible outcomes  Y i have been generated according to a Gaussian Markov Random Field with or withouta spatial linear trend x1 + x2 + η , that explain approximately the 80% of the variance of Y i . Spatial dependence of the errors η is modeled by an exponential variogram with two intensities of the parameter {0.01, 0.1}, representing respectively medium and high dependence between units. To avoid the possible effects due to different variability, each population was finally standardized to the same mean µY = 5 and standard deviation σY = 1. To verify if n has any effect on the efficiency, from each population, samples of different Fast Selection of Spatially Balanced Samples 17 Fig. 5 Land Cover LUCAS data on a 2km × 2km population grid in the Italian region Emilia Romagna overlaid on the NUTS2 and NUTS3 boundaries (wider and narrower lines respectively). size n ∈ {10, 50, 200} have been selected. One of the two real populations used is the well known case study, largely debated and analyzed in the field of geostatistics, the Meuse River data–set available in the package gstat. 
One of the two real populations used is a well-known case study, widely discussed and analyzed in the field of geostatistics: the Meuse river data set, available in the R package gstat. It is a set of 155 samples of top soil heavy metal concentrations (ppm), used in our experiments as a population, along with a number of soil and landscape variables. The samples were collected in a flood plain of the river Meuse, near the village of Stein (The Netherlands). Heavy metal concentrations are bulk sampled from areas of approximately 15 m × 15 m. In addition to the topographical map coordinates, the top soil concentrations of four metals have been used: cadmium, copper, lead and zinc. These variables show a high spatial autocorrelation, as is apparent from the variograms (Fig. 4). In this population the experiment consists of selecting samples of size {10, 50}.

Fig. 4 Spatial distribution of the Meuse river data (a). The fitted exponential variogram models are reported for the variables Cadmium (b), Copper (c) and Zinc (d).

Land is very important for most biological and human activities: it is the main economic resource for agriculture, forestry, industry and transport. The information collected on land deals mainly with two interconnected concepts: land cover, which refers to the biophysical coverage of land (e.g. crops, grass, broad-leaved forest, or built-up area), and land use, which specifies the socio-economic use of land (e.g. agriculture, forestry, recreation or residential use). It is for this reason that the land use and land cover data collected by the EU survey LUCAS have been chosen as the second real population. The Land Use/Cover Area frame Survey (LUCAS) [24] is a survey carried out by EUROSTAT that was initially designed to deliver European crop estimates on a yearly basis and then became a more general agro-environmental survey. Our population has been built considering 2012 as the reference year and the Italian region Emilia-Romagna as the area under investigation (see Figure 5). The regular 2 km × 2 km grid of points constitutes our frame population. In the LUCAS experiment three different sample sizes, {100, 300, 600}, have been adopted, on data aggregated into 10 land cover classes.

Fig. 5 Land Cover LUCAS data on a 2 km × 2 km population grid in the Italian region Emilia-Romagna overlaid on the NUTS2 and NUTS3 boundaries (wider and narrower lines respectively).

Table 6 Relative efficiency (RMSE_RM / RMSE_SRS) of the Land Cover codes area estimates and average SBI for each design, estimated in 10,000 replicated samples in the Emilia-Romagna LUCAS population for different sample sizes.

Design   n    Artif.  Wheat  Maize  Other  Fodder  Perman.  Other   Woodl.  Grassl.  Ot. Land  Average
              Land                  Cer.   Crops   Crops    Cropl.                   Cover     SBI
GRTS     100  0.981   0.950  0.933  0.981  0.937   0.939    0.961   0.819   0.967    0.976     0.288
LPM      100  0.956   0.947  0.935  0.968  0.937   0.903    0.954   0.801   0.950    0.942     0.110
SCPS     100  0.958   0.937  0.949  0.973  0.921   0.916    0.941   0.796   0.954    0.955     0.070
PWD1     100  0.965   0.952  0.936  0.965  0.942   0.936    0.963   0.825   0.965    0.954     0.054
PWD5     100  0.956   0.943  0.922  0.968  0.917   0.915    0.942   0.796   0.953    0.953     0.076
PWD10    100  0.951   0.935  0.939  0.961  0.925   0.911    0.954   0.793   0.963    0.935     0.044
HPWD1    100  0.959   0.951  0.938  0.982  0.948   0.915    0.958   0.820   0.970    0.964     0.028
HPWD5    100  0.947   0.942  0.934  0.963  0.928   0.919    0.960   0.793   0.957    0.961     0.067
HPWD10   100  0.948   0.948  0.944  0.967  0.917   0.915    0.948   0.793   0.963    0.937     0.044
GRTS     300  0.935   0.923  0.902  0.942  0.902   0.875    0.934   0.784   0.921    0.927     0.296
LPM      300  0.908   0.885  0.869  0.907  0.873   0.849    0.889   0.737   0.890    0.891     0.103
SCPS     300  0.922   0.873  0.865  0.899  0.868   0.848    0.885   0.742   0.901    0.894     0.070
PWD1     300  0.928   0.921  0.898  0.937  0.898   0.900    0.918   0.780   0.926    0.930     0.055
PWD5     300  0.900   0.888  0.852  0.903  0.859   0.850    0.880   0.733   0.891    0.877     0.075
PWD10    300  0.892   0.876  0.859  0.895  0.856   0.825    0.883   0.720   0.877    0.886     0.045
HPWD1    300  0.945   0.911  0.913  0.952  0.903   0.882    0.915   0.776   0.911    0.935     0.036
HPWD5    300  0.904   0.902  0.887  0.903  0.870   0.840    0.878   0.723   0.889    0.883     0.066
HPWD10   300  0.895   0.887  0.846  0.897  0.862   0.830    0.878   0.722   0.887    0.881     0.043
GRTS     600  0.903   0.869  0.852  0.900  0.860   0.836    0.872   0.730   0.898    0.888     0.298
LPM      600  0.853   0.840  0.806  0.838  0.823   0.780    0.817   0.679   0.845    0.837     0.109
SCPS     600  0.854   0.822  0.805  0.822  0.814   0.770    0.815   0.681   0.839    0.825     0.078
PWD1     600  0.906   0.882  0.862  0.904  0.867   0.846    0.878   0.734   0.905    0.898     0.066
PWD5     600  0.846   0.836  0.792  0.820  0.810   0.771    0.811   0.673   0.838    0.819     0.085
PWD10    600  0.837   0.823  0.793  0.819  0.807   0.760    0.796   0.659   0.834    0.820     0.056
HPWD1    600  0.892   0.862  0.858  0.899  0.868   0.840    0.884   0.734   0.893    0.889     0.052
HPWD5    600  0.842   0.834  0.796  0.810  0.800   0.758    0.807   0.663   0.823    0.817     0.076
HPWD10   600  0.815   0.806  0.775  0.781  0.791   0.741    0.795   0.638   0.806    0.798     0.053
Area (hec.)   1688    2380   1764   692    3120    1452     1368    5976    2368     1300

The first check that must necessarily be done before computing the estimates is related to the assurance of having respected the πi's; otherwise the HT estimator could provide extremely biased results. From the CV(π̂i)'s (9) reported in Table (3) it is clear that the standardization of the distance matrix again resulted in approximately constant πi's, even if, with the exception of the LUCAS frame, which lies on a regular grid, the error introduced is considerably higher than that observed in regular-grid populations. The spatial distribution of the population is thus an essential aspect for an efficient standardization of DU. In addition to the effects of γ and n already observed for regular grids, it is possible to notice that a combination of a small sample size, a high tuning parameter and the presence of clustering in the coordinates of the units in U may worryingly reduce the accuracy of the πi's. In such cases it would be advisable to partition the frame population into zones with a more homogeneous spatial distribution and then standardize DU independently within each stratum. It should also be considered that CV(π̂i)'s that may seem high did not actually result in significant biases in the HT estimates, proving the robustness of the estimator to even medium-high deviations in the πi's or in their estimated counterparts π̂i's. It is indeed necessary to remember that in this case we used only 10,000 sample replications, instead of the 100,000 used for regular grids, and these are probably not sufficient to achieve reliable estimates of the inclusion probabilities.
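As a concrete, hedged illustration of this check (not the authors' code; the exact definition of CV(π̂i) in equation (9) is not reproduced in this excerpt and may differ), the empirical inclusion probabilities can be estimated by replication as the fraction of samples containing each unit and then compared with the nominal value n/N. The select_sample argument below is a hypothetical placeholder for any of the selection routines compared above.

```r
# Monte Carlo check of the first-order inclusion probabilities (illustrative only).
# `select_sample` is a hypothetical placeholder: a function returning the indices
# of one sample of size n drawn with the design under study.
estimate_pi <- function(select_sample, N, n, R = 10000) {
  counts <- numeric(N)
  for (r in seq_len(R)) {
    s <- select_sample()
    counts[s] <- counts[s] + 1       # how many replicated samples contain each unit
  }
  pi_hat <- counts / R               # empirical inclusion probabilities
  cv <- sd(pi_hat) / mean(pi_hat)    # one common CV definition; the paper's eq. (9) may differ
  list(pi_hat = pi_hat, cv = cv)
}
```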
Looking at Tables (4), (5) and (6), it can be noticed that the HPWD generally gives encouraging results, as it seems able to handle any existing spatial data structure and to exploit it effectively in locating units over the study region. Clustering of the population, the presence of a spatial trend and spatial autocorrelation all have a clear direct effect on the reduction of the estimation variance, even though their joint impact is obviously quite moderate. A linear trend, in particular, has proved to be a valuable attribute for our design to exploit, even if very large variance reductions are also found in the presence of spatial autocorrelation, while clustering of the population units slightly mitigates its capacity to spread the sample. Although it is thus sensitive to the occurrence of any of these properties, it is also quite robust to their absence, as its RMSE is always much smaller than that obtained using SRS. Note that GRTS, SCPS and LPM behave similarly but with a lower gain in efficiency in all these situations, in particular GRTS, which produces samples that are less spatially balanced than those of any other method. The Meuse example confirms that the HPWD is at least as efficient as the PWD and that all the trends found in the artificial populations are also verified in this case study. The results on the LUCAS data are also very encouraging. The gain in RMSE of the estimates compared with SRS is remarkable, reaching and even exceeding 25% for some land cover classes when n = 600. In every experiment the PWD is slightly but systematically more efficient than SCPS and LPM when used with γ ≥ 5, and sometimes a bit less efficient when used with lower exponents. The HPWD proves to be an appreciable approximation of the PWD, as it always has very similar RMSEs and, in some rare cases, surprisingly better ones. Increasing n invariably improves the performance of the PWD and HPWD with respect not only to SRS but also to GRTS, while the difference with SCPS and LPM remains appreciably constant.

Table (7) reports the average CPU time in seconds, on a 3.06 GHz Intel Core 2 Duo, used by each of the algorithms to select a sample for different population and sample sizes. For each method the times of both the R version and, when available, the C++ version are reported (except for the GRTS, both versions are always available). For SCPS and LPM, the R code has been extracted from the online material accompanying [26, 28]. The CPU times of the LPM are reported for both suggested variants: LPM1 is slower but more accurate (and was used to produce all the results in this article), while LPM2 is faster but less accurate. The extent to which execution time matters, particularly when it depends strongly on N and n, is not secondary in the choice of a design. It is clear that among the examined procedures the GRTS is the most computationally intensive method and that the HPWD is considerably quicker than the other distance-based methods. The time spent by the HPWD to select a sample increases gradually with n and only proportionally with N, mainly because the number of iterations is exactly n, while the time used by SCPS and by the two versions of the LPM increases with N^2. To be honest, however, this comparison is strongly influenced by the computation of DU, which for the PWD and HPWD is excluded from the computing times: since this matrix has to be standardized in each application, it is computed externally. In SCPS and LPM the distance calculations are instead included in the sample selection algorithm. Finally, we can be confident that the HPWD can be applied without serious difficulty even to spatial populations of large size; the only limit is the amount of memory needed to store the distance matrix.

Table 7 Average computing time (in seconds) to select a sample, based on 1,000 replications, for each design and for different population and sample sizes.
                   R                                                     C++
N     n    GRTS    SCPS    LPM1    LPM2    PWD    HPWD      SCPS   LPM1   LPM2   PWD    HPWD
1000  100  0.182   3.114   8.348   2.008   0.243  0.014     0.058  0.013  0.007  0.009  0.000
1000  300  1.941   3.094   8.457   2.010   0.374  0.039     0.058  0.013  0.007  0.027  0.002
1000  600  57.008  2.962   8.137   1.960   0.594  0.075     0.058  0.013  0.007  0.040  0.003
1500  100  0.116   6.728   17.698  4.393   0.376  0.019     0.135  0.028  0.015  0.036  0.007
1500  300  0.848   6.665   16.662  4.161   0.598  0.059     0.133  0.028  0.015  0.014  0.002
1500  600  23.195  6.668   17.627  4.375   0.952  0.112     0.134  0.028  0.015  0.030  0.029
2000  100  0.104   11.920  31.587  7.757   0.504  0.026     0.246  0.050  0.026  0.049  0.001
2000  300  1.088   11.848  31.586  7.682   0.791  0.074     0.246  0.050  0.026  0.023  0.001
2000  600  5.267   11.718  31.084  7.661   1.264  0.147     0.245  0.050  0.026  0.038  0.003
2500  100  0.115   18.511  49.659  12.036  0.642  0.031     0.392  0.078  0.041  0.014  0.001
2500  300  1.108   18.304  49.919  12.051  1.019  0.094     0.392  0.078  0.041  0.051  0.011
2500  600  5.021   18.318  49.666  12.053  1.652  0.185     0.392  0.078  0.041  0.062  0.013
3000  100  0.134   26.623  71.000  17.255  0.769  0.038     0.574  0.112  0.059  0.058  0.001
3000  300  1.418   26.412  71.197  17.343  1.242  0.115     0.573  0.113  0.059  0.028  0.005
3000  600  4.688   26.532  66.318  16.284  1.972  0.223     0.564  0.113  0.059  0.054  0.011

5 Conclusions

The use of space as a criterion for selecting sample units from a population is a solution that promises significant developments in ecological, environmental, agricultural and forestry surveys. In addition to collecting data on phenomena that are impossible or particularly expensive to observe through an exhaustive direct observation of the frame population, this type of survey also makes it possible to update existing list frames (e.g. agricultural holdings), to use auxiliary information available only on a geographical basis (remotely sensed data), to facilitate some aspects of quality control and to better define some concepts and nomenclatures for data dissemination (small area estimation). Recent advances in geocoding, which have led to better knowledge of the position of population units, have thus extended the interest of the institutes responsible for the production of official statistics in these applications. Typical household and business surveys are based on archives that, in addition to many administrative data, include spatial information on each single unit of the population. It follows that these institutes have to update the survey techniques adopted so far. Among these techniques, random sample selection has seen both theoretical and practical developments in methods and algorithms, allowing us to conduct surveys efficiently on spatial populations. Their motivation is still unclear and their framing is still debated between two approaches, based on the model or on the design, which are formally very different but lead to logical and intuitive choices that can be reconciled. Selection with probability proportional to the distance between sampling units, as implemented in the PWD method, seems to go in this direction. One of the main limitations of these studies lies in the fact that only homogeneous and isotropic spatial processes can be considered when the distance matrix is used as a summary of the spatial features of the population. The population size, however, could be a big obstacle to circumvent, as this method is based on a computationally intensive MCMC approximation of the distribution P(S) representing the sampling design. With the simultaneous increase in the number of surveys in which these methods can be used and in the detail of the information on the phenomena to be investigated, both in terms of spatial resolution and of size, it has become essential to propose a much faster alternative to the PWD: the HPWD, which can actually randomly select S in exactly n steps.
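Purely as an illustration of why a draw-by-draw scheme scales so well (and not as the actual HPWD updating rule, which is defined in the earlier sections of the paper and not reproduced in this excerpt), the sketch below selects a sample in exactly n steps, drawing each new unit with probability proportional to the product of its powered distances from the units already selected; dist_matrix and gamma are assumed inputs.

```r
# Schematic draw-by-draw selection in exactly n steps (illustrative only, not the HPWD rule).
draw_by_draw <- function(dist_matrix, n, gamma = 1) {
  N <- nrow(dist_matrix)
  s <- sample.int(N, 1)                                  # first unit drawn uniformly
  for (k in 2:n) {
    remaining <- setdiff(seq_len(N), s)
    # weight of each remaining unit: product of its (powered) distances to the selected units
    w <- apply(dist_matrix[remaining, s, drop = FALSE]^gamma, 1, prod)
    s <- c(s, sample(remaining, 1, prob = w))
  }
  s                                                      # indices of the n selected units
}
```

Keeping a running product of the weights instead of recomputing them at every step would make each draw a single pass over the N units, which is consistent with the roughly linear-in-N behaviour reported in Table 7.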
The results of the empirical studies show that the proposed HPWD design can compete in precision with the best spatially balanced designs analyzed, while also allowing large reductions in the computational effort required. It represents an important alternative to such designs when the survey requires, as is often the case in real situations, the use of a very large spatial population. The results obtained seem to confirm that the simplicity of a draw-by-draw scheme may also be sufficient to handle many spatial aspects of the frame population and of the collected variables. Some issues remain open for future research, fundamentally linked to the theoretical derivation of the πi's and πij's when the HPWD or the PWD is used. Knowing them would make it possible to study the theoretical properties of these designs more thoroughly, and perhaps to derive more flexible alternatives that are better adapted to the treatment of spatial samples repeated over time, or to the selection of samples that are well spread, stratified and simultaneously balanced on some auxiliary variables.

References

1. Arbia, G.: The use of GIS in spatial statistical surveys. International Statistical Review 61, 2, 339–359. (1993)
2. Benedetti, R., Bee, M., Espa, G., Piersimoni, F. (eds): Agricultural Survey Methods. Chichester: Wiley. (2010)
3. Benedetti, R., Espa, G., Taufer, E.: Model-based variance estimation in non-measurable spatial designs. Journal of Statistical Planning and Inference 181, 52–61. (2017)
4. Benedetti, R., Palma, D.: Optimal sampling designs for dependent spatial units. Environmetrics 6, 101–114. (1995)
5. Benedetti, R., Piersimoni, F.: A spatially balanced design with probability function proportional to the within sample distance. Biometrical Journal doi:10.1111/insr.12216. (2017)
6. Benedetti, R., Piersimoni, F., Postiglione, P.: Sampling spatial units for agricultural surveys. Advances in Spatial Science Series. Berlin, Heidelberg: Springer-Verlag. (2015)
7. Benedetti, R., Piersimoni, F., Postiglione, P.: Spatially balanced sampling: a review and a reappraisal. International Statistical Review doi:10.1111/insr.12216. (2017)
8. Breidt, F.J., Chauvet, G.: Penalized balanced sampling. Biometrika 99, 945–958. (2012)
9. Chambers, R.L., Clark, R.G.: An introduction to model-based survey sampling with applications. Oxford: Oxford University Press. (2012)
10. Chauvet, G., Tillé, Y.: A fast algorithm of balanced sampling. Computational Statistics 21, 53–62. (2006)
11. Cressie, N.A.C.: Statistics for spatial data. 2nd edition. New York: Wiley. (1993)
12. Cressie, N.A.C., Chambers, R.L.: Comment on Article by Ferreira and Gamerman. Bayesian Analysis 10, 3, 741–748. (2015)
13. Cressie, N.A.C., Johannesson, G.: Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society, Series B 70, 209–226. (2008)
14. Cordy, C.B.: An extension of the Horvitz–Thompson theorem to point sampling from a continuous universe. Statistics and Probability Letters 18, 353–362. (1993)
15. Cromley, E., McLafferty, S.: GIS and Public Health. New York: Guilford Press. (2002)
16. Deville, J.C., Tillé, Y.: Unequal probability sampling without replacement through a splitting method. Biometrika 85, 89–101. (1998)
17. Deville, J.C., Tillé, Y.: Efficient balanced sampling: the cube method. Biometrika 91, 893–912. (2004)
18. Diggle, P., Menezes, R., Su, T.: Geostatistical inference under preferential sampling.
Journal of the Royal Statistical Society, Series C (Applied Statistics) 59, 191–232. (2010) 19. Diggle, P.J., Ribeiro, P.J.: Model-based geostatistics. New York: Springer-Verlag. (2007) 20. Fattorini, L.: Applying the Horvitz–Thompson criterion in complex designs: A computer-intensive perspective for estimating inclusion probabilities. Biometrika 93, 269-278. (2006) 21. Fattorini, L.: An adaptive algorithm for estimating inclusion probabilities and performing the Horvitz– Thompson criterion in complex designs. Computational Statistic 24, 623-639. (2009) 22. Ferreira, G., Gamerman, D.: Optimal design in geostatistics under preferential sampling. Bayesian Analysis 10, 3, 711–735. (2015) 23. Gaetan, C., Guyon, X.: Spatial statistics and modeling. Springer series in statistics. New York: SpringerVerlag. (2010) 24. Gallego, J., Delincé, J.: The European land use and cover area-frame statistical survey, In: Agricultural survey methods. Benedetti, R., Bee, M., Espa, G. and Piersimoni, F. (eds), John Wiley and Sons Ltd, Chichester, UK, 151–168. (2010) 22 F. Piersimoni, R. Benedetti 25. Gelfand, A.E., Diggle, P.J., Fuentes, M., Guttorp, P.: Handbook of spatial statistics. Handbooks of modern statistical methods. Boca Raton: Chapman and Hall/CRC. (2010) 26. Grafström, A.: Spatially correlated Poisson sampling. Journal of Statistical Planning and Inference 142, 139–147. (2012) 27. Grafström, A., Lisic, J.: BalancedSampling: balanced and spatially balanced sampling. R package version 1.5.2. https://CRAN.R-project.org/package=BalancedSampling. Accessed July 10, 2017. (2017) 28. Grafström, A., Lundström, N.L.P., Schelin, L.: Spatially balanced sampling through the pivotal method. Biometrics 68, 514–520. (2012) 29. Grafström, A., Tillè, Y.: Doubly balanced spatial sampling with spreading and restitution of auxiliary totals. Environmetrics 24, 120–131. (2013) 30. Gregoire, T.G., Valentine, H.T.: Sampling strategies for natural resources and the environment. Applied environmental statistics. London: Chapman and Hall/CRC (2008) 31. Hainy, M., Müller, W.G., Wagner, H.: Likelihood-free simulation-based optimal design with an application to spatial extremes. Stochastic Environmental Research and Risk Assessment 30, 2, 481–492. (2016) 32. Hainy, M., Müller, W.G., Wynn, H. P.: Learning functions and approximate bayesian computation design: ABCD. Entropy 16, 8, 4353–4374. (2014) 33. Hedayat, A., Rao, C.R., Stufken, J.: Sampling designs excluding contiguous units. Journal of Statistical Planning and Inference 19, 159–170. (1988) 34. Isaki, C.T., Fuller, W.A.: Survey design under the regression superpopulation model. Journal of the American Statistical Association 77, 377, 89–96. (1982) 35. Johannesson, G., Cressie, N.A.C.: Finding large-scale spatial trends in massive, global, environmental datasets. Environmetrics 15, 1, 1–44. (2004) 36. Johnson, C.R., Masson, R.D., Trosset, M.W.: On the diagonal scaling of Euclidean distance matrices to doubly stochastic matrices. Linear Algebra and its Applications 397, 253-264. (2005) 37. Jones, C.B.: Geographical Information Systems and Computer Cartography. Harlow, Essex, UK: Addison Wesley Longman. (1997) 38. Kincaid, T.M., Olsen, A.R.: spsurvey: spatial survey design and analysis. R package version 3.3. (2016) 39. Knottnerus, P.: Sample survey theory: some Pythagorean perspectives. New York: Springer-Verlag. (2003) 40. LeSage, J.P., Pace, R.K.: Introduction to spatial econometrics. Statistics, textbooks and monographs. Boca Raton: Chapman and Hall/CRC. (2009) 41. 
Mandallaz, D.: Sampling techniques for forest inventories. Applied environmental statistics. Boca Raton: Chapman and Hall/CRC. (2008). 42. Möller J.: Spatial Statistics and Computational Method. New York: Springer-Verlag. (2003) 43. Müller, P.: Simulation-Based optimal design. Bayesian Statistics 6, 459–474. (1999) 44. Müller, W.G.: Collecting spatial data: optimum design of experiments for random fields. Berlin: Springer-Verlag, 3rd edition (2007) 45. Müller, W.G., Pronzato, L., Rendas, J., Waldl, H.: Efficient prediction designs for random fields. Applied Stochastic Models in Business and Industry 31, 2, 178–194. (2015) 46. Müller, W.G., Stehlı́k, M.: Compound optimal spatial designs, Environmetrics 21, 3-4, 354–364. (2010) 47. R Core Team: R: a language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria. Available from:https://www.R-project.org/. Accessed March 10, 2017. (2016) 48. Rogerson, P., Delmelle, E.: Optimal sampling design for variables with varying spatial importance. Geographical Analysis 36, 177–194. (2004) 49. Särndal, C.E., Swensson, B. and Wretman, J.: Model Assisted Survey Sampling. New York: SpringerVerlag. (1992) 50. Schabenberger, O., Gotway, C.A.: Statistical Methods for Spatial Data Analysis. Texts in Statistical Science Series, Boca Raton: Chapman and Hall/CRC. (2004) 51. Scheiner, S.M., Gurevitch, J.: Design and Analysis of Ecological Experiments. 2nd edition. Oxford, UK: Oxford University Press. (2001) 52. Stevens, Jr.D.L., Olsen, A.R.: Variance estimation for spatially balanced samples of environmental resources. Environmetrics 14, 593–610. (2003) 53. Stevens, Jr.D.L., Olsen, A.R.: Spatially balanced sampling of natural resources. Journal of the American Statistical Association 99, 262–278. (2004) 54. Tillé, Y.: Sampling Algorithms. New York: Springer-Verlag. (2006) 55. Tillé, Y.: Ten years of balanced sampling with the CUBE method: An appraisal. Survey Methodology 2, 215–226. (2011) 56. Tillé, Y., Wilhelm, M.: Probability Sampling Designs: Principles for Choice of Design and Balancing. Statistical Science 32, 2, 176–189. (2017) 57. Tzeng, S., Huang, H.C., Cressie, N.A.C.: A Fast, Optimal Spatial–Prediction Method for Massive Datasets. Journal of the American Statistical Association 100, 472, 1343–1357. (2005) Fast Selection of Spatially Balanced Samples 23 58. Valliant, R., Dorfman. A.H., Royall, R.M.: Finite population sampling and inference: a prediction approach. New York: John Wiley and Sons. (2000) 59. Waagepetersen, R.: An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252–258. (2007) 60. Wang, J.F., Jiang,C.S., Hu, M.G., Cao, Z.D., Guo, Y.S., Li, L.F., Liu, T.J., Meng, B.: Design-based spatial sampling: Theory and implementation. Environmental Modelling and Software 40, 280–288. (2013) 61. Wang, J.F., Stein, A., Gao, B.B., Ge, Y.: A review of spatial sampling. Spatial Statistics 2, 1–14. (2012) 62. Webster, R., Oliver, M.A.: Geostatistics for Environmental Scientists. 2nd Edition, Statistics in Practice. Chichester: John Wiley and Sons. (2007)
arXiv:1701.02459v4 [] 27 Jul 2017 The Congruence Subgroup Problem for the Free Metabelian group on n ≥ 4 generators David El-Chai Ben-Ezra March 30, 2018 Abstract The congruence subgroup problem for a finitely generated group Γ \ asks whether the map Aut (Γ) → Aut(Γ̂) is injective, or more generally, what is its kernel C (Γ)? Here X̂ denotes the profinite completion of X. It n is well known that for finitely  generated free abelian groups C (Z ) = {1} 2 for every n ≥ 3, but C Z = F̂ω , where F̂ω is the free profinite group on countably many generators. Considering Φn , the free metabelian group on n generators, it was also proven that C (Φ2 ) = F̂ω and C (Φ3 ) ⊇ F̂ω . In this paper we prove that C (Φn ) for n ≥ 4 is abelian. So, while the dichotomy in the abelian case is between n = 2 and n ≥ 3, in the metabelian case it is between n = 2, 3 and n ≥ 4. Mathematics Subject Classification (2010): Primary: 19B37, 20H05, Secondary: 20E36, 20E18. Key words and phrases: congruence subgroup problem, automorphism groups, profinite groups, free metabelian groups. Contents 1 Introduction 2 2 Some properties of IA (Φn ) and its subgroups 6 3 The main theorem’s proof 8 4 Some elementary elements of hIA (Φn )m i 10 4.1 Elementary elements of type 1 . . . . . . . . . . . . . . . . . . . 10 4.2 Elementary elements of type 2 . . . . . . . . . . . . . . . . . . . 13 1 5 A main lemma 14 5.1 Reducing Lemma 3.1’s proof . . . . . . . . . . . . . . . . . . . . 14 5.2 A technical lemma . . . . . . . . . . . . . . . . . . . . . . . . . . 17 5.3 Finishing Lemma 3.1’s proof . . . . . . . . . . . . . . . . . . . . . 23 6 Remarks and problems for further research 1 25 Introduction The classical congruence subgroup problem (CSP) asks for, say, G = SLn (Z) or G = GLn (Z), whether every finite index subgroup of G contains a principal congruence subgroup, i.e. a subgroup of the form G (m) = ker (G → GLn (Z/mZ)) for some 0 6= m ∈ Z. Equivalently, it asks whether the natural map Ĝ → GLn (Ẑ) is injective, where Ĝ and Ẑ are the profinite completions of the group G and the ring Z, respectively. More generally, the CSP asks what is the kernel of this map. It is a classical 19th century result that the answer is negative for n = 2. Moreover (but not so classical, cf. [Mel], [L]), the kernel in this case is F̂ω - the free profinite group on a countable number of generators. On the other hand, it was proved in 1962 by Mennicke [Men] and Bass-Lazard-Serre [BLS] that for n ≥ 3 the answer is affirmative, and the kernel is therefore trivial. By the observation GLn (Z) ∼ = Aut (Zn ) = Out (Zn ), the CSP can be generalized as follows: Let Γ be a group and G ≤ Aut (Γ) (resp. G ≤ Out (Γ)). For a finite index characteristic subgroup M ≤ Γ denote: G (M ) = (resp. G (M ) = ker (G → Aut (Γ/M )) ker (G → Out (Γ/M ))). Such a G (M ) will be called a “principal congruence subgroup” and a finite index subgroup of G which contains G (M ) for some M will be called a “congruence subgroup”. The CSP for the pair (G, Γ) asks whether every finite index subgroup of G is a congruence subgroup. In some sense, the CSP tries to understand whether every finite quotient of G comes from a finite quotient of Γ. One can easily see that the CSP is equivalent to the question: Is the congruence map Ĝ = ← lim −G/U → lim ←−G/G (M ) injective? Here, U ranges over all finite index normal subgroups of G, and M ranges over all finite index characteristic subgroups of Γ. 
When Γ is finitely generated, it has only finitely many subgroups of given index m, and thus, the charateristic subgroups: Mm = ∩ {∆ ≤ Γ | [Γ : ∆] = m} are of finite index in Γ. Hence, one can write Γ̂ = 1 lim ←−m∈N Γ/Mm and have : limG/G (M ) = limm∈N G/G (Mm ) ≤ limm∈N Aut(Γ/Mm ) ←− ←− ←− ≤ Aut(lim (Γ/M )) = Aut(Γ̂) (resp. Out(Γ̂)). m∈N m ←− 1 By the celebrated theorem of Nikolov and Segal which asserts that every finite index subgroup of a finitely generated profinite group is open [NS], the second inequality is actually an equality. However, we do not need it. 2 Therefore, when Γ is finitely generated, the CSP is equivalent to the question: Is the congruence map: Ĝ → Aut(Γ̂) (resp. Ĝ → Out(Γ̂)) injective? More generally, the CSP asks what is the kernel C (G, Γ) of this map. For G = Aut (Γ) we will also use the simpler notation C (Γ) = C (G, Γ). The classical congruence  subgroup results mentioned above can therefore be reformulated as C Z2 = F̂ω while C (Zn ) = {e} for n ≥ 3. So the finite quotients of GLn (Z) are closely related to the finite quotients of Zn when n ≥ 3, but the finite quotients of GL2 (Z) are far of being understandable by the finite quotients of Z2 . Very few results are known when Γ is non-abelian. Most of the results are related to Γ = πg,n , the fundamental group of Sg,n , the closed surface of genus g with n punctures. In these cases one can take G = P M od (Sg,n ), the pure mapping class group of Sg,n , and can naturally view it as a subgroup of Out (πg,n ) (cf. [FM], chapter 8). Considering this cases, it is known that: Theorem 1.1. For g = 0, 1, 2 and every n ≥ 0, 1, 0 respectively, we have: C (P M od (Sg,n ) , πg,n ) = {1}. Z2 and P M od (S1,0 ) ∼ Note that when g = 1 and n = 0, π1,0 ∼ = SL2 (Z), =  2 so: C (P M od (S1,0 ) , π1,0 ) = C SL2 (Z) , Z = F̂ω . The cases for g = 0 were proved in [DDH] (see also [Mc]), the cases for g = 1 were proved in [A] (see also [Bo1], [BER]), and the cases for g = 2 were proved in [Bo1] (see also [Bo2] for the specific case where g = 2 and n = 0). In particular, as P M od (S1,1 ) is isomorphic to the special outer-automorphism group of F2 , we have an affirmative answer for the full outer-automorphism group of F2 , and by some standard arguments it shows that actually C (F2 ) is trivial (see [BER], [BL]). Note that for every n > 0, πg,n ∼ = F2g+n−1 = the free group on 2g + n − 1 generators. Hence, the above solved cases give an affirmative answer for various subgroups of the outerautomorphism group of finitely generated free groups, while the CSP for the full Aut (Fd ) when d ≥ 3 is still unsettled, and so is the situation with P M od (Sg,n ) when g ≥ 3. All the above settled cases have a common property which plays a crucial role in the proof of Theorem 1.1: There is an intrinsic description of G by iterative extension process by virtually free groups (groups which have a finite index free subgroup). Actually, in these cases, in some sense, we do understand the finite quotients of G, and the CSP tells us that these quotients are closely related to the finite quotients of Γ. This situation changes when we pass to G = Aut (Fd ) for d ≥ 3 or P M od (Sg,n ) for g ≥ 3. In these cases we do not have a description of G that can help to understand the finite quotients of G. So in some sense, all the known cases do not give us a new understanding of the finite quotients of G. 
Considering the abelian case, what makes the result of Mennicke and Bass-Lazard-Serre so special is that it not only shows that the finite quotients of GLn (Z) are related to the finite quotients of Zn , but also gives us a description of the finite quotients of GLn (Z), which we have not known without this result. Denote now the free metabelian group on n generators by Φn = Fn /Fn′′ . Considering the metabelian case, it was shown in [BL] (see also [Be1]) that 3 C (Φ2 ) = F̂ω . In addition, it was proven there that C (Φ3 ) ⊇ F̂ω . So, the finite quotients of Aut (Φ2 ) and Aut (Φ3 ) are far of being connected to the finite quotients of Φ2 and Φ3 , respectively. Here comes the main theorem of this paper: \ Theorem 1.2. For every n ≥ 4, C (IA (Φn ) , Φn ) is central in IA (Φn ), where: IA (Φn ) = ker (Aut (Φn ) → Aut (Φn /Φ′n ) = GLn (Z)) . u Using the commutative exact diagram (see 6): \ \ \ IA (Φn ) → Aut (Φn ) → GL n (Z) ց ↓ ↓ Aut(Φ̂n ) → GLn (Ẑ) → 1 \ and the fact that GL n (Z) → GLn (Ẑ) is injective for n ≥ 3, we obtain that C (IA (Φn ) , Φn ) is mapped onto C (Φn ). Therefore we deduce that: Theorem 1.3. For every n ≥ 4, C (Φn ) is abelian. This is dramatically different from the cases of n = 2, 3 described above. Theorem 1.3 tells us that when n ≥ 4 the situation changes, and the finite quotients of Aut (Φn ) are closely related to the finite quotients of Φn in the following manner: Corollary 1.4. Let n ≥ 4. Then, for every finite index subgroup H ≤ G = Aut (Φn ), there exists a finite index characteristic subgroup M ≤ Φn and r ∈ N such that G (M )′ G (M )r ⊆ H. Note that by a theorem of Bachmuth and Mochizuki [BM], Aut (Fn ) → Aut (Φn ) is surjective for every n ≥ 4, and thus G = Aut (Φn ) is finitely generated. Hence, the principal congruence subgroups of the form G (M ) are finitely generated, and thus, the subgroups of the form G (M )′ G (M )r are also of finite ′ r index in Aut (Φn ). Therefore, the quotients of the form Aut (Φn ) /G (M ) G (M ) describe all the finite quotients of Aut (Φn ). In particular, our theorem gives us a description of the finite quotients of Aut (Φn ) when n ≥ 4 - just like the theorem of [Men] and [BLS] gives for GLn (Z) when n ≥ 3. Corollary 1.4 obviously does not hold for n = 2, 3. So, the picture is that while the dichotomy in the abelian case is between n = 2 and n ≥ 3, in the metabelian case we have a dichotomy between n = 2, 3 and n ≥ 4. d In [KN], Kassabov and Nikolov showed that ker(SL\ n (Z [x]) → SLn (Z [x])) is central and not finitely generated, when n ≥ 3. In [Be2] we use their techniques and an interesting surjective representations:  x→1 IA (Φn ) ։ ker(GLn−1 Z[x±1 ] −→ GLn−1 (Z)) to show also that: 4 Theorem 1.5. For every n ≥ 4, C (IA (Φn ) , Φn ) is not finitely generated. We remark that despite the result of the latter theorem, we do not know whether C (Φn ) is also not finitely generated.uIn fact we can not even prove at this point that it is not trivial (for more, see 6). The main line of the proof of Theorem 1.2 is as follows: For G = IA (Φn ) we first take the principal congruence subgroups G (Mn,m ) where Mn,m = ′ ′ m m (Φ′n Φm (Φ /M ), and thus we deduce that n ) (Φn Φn ) . By [Be1], Φ̂n = lim ←− n n,m the subgroups of the form G Mn,m4 are enough to represent the congruence subgroups of IA(Φn ) in the sense that every congruence subgroup contains one of these principal congruence subgroups. 
Then, we follow the steps of the theorem of Bachmuth and Mochizuki [BM], showing that Aut (Fn ) → Aut (Φn ) is m surjective for n ≥ 4, and we try to build G Mn,m4 with elements of hIA (Φn ) i. This process, combined with some classical results from algebraic K-theory enables us to show that  m m hIA (Φn ) i G Mn,m4 / hIA (Φn ) i is finite and central in IA (Φn ) / hIA (Φn )m i, and thus, hIA (Φn )m i is of finite index in IA (Φn ). In particular, as every normal subgroup of index m in IA (Φn ) m m contains hIA (Φn ) i, we deduce that the groups of the form hIA (Φn ) i are enough to represent the finite index subgroups of IA (Φn ). From here, it follows \ easily that C (IA (Φn ) , Φn ) is central in IA (Φn ). We hope that the solution of the free metabelian case will help to understand some more cases of non-abelian groups, such as the automorphism group of a free group and the mapping class group of a surface. The immediate next challenges are the automorphism groups of free solvable groups. u Let us point out that, as remarked in 5 in [BL], one can deduce from Theorem 1.3 that for every n ≥ 4, Aut (Φn ) is not large, i.e does not contain a finite index subgroup which can be mapped onto a free group. This is in contrast with Aut (Φ2 ) and Aut (Φ3 ) which are large. u The paper is organized as follows: In 2 we present some needed notations and lemma, in u discuss IA (Φn ) and some of its subgroups. Then, up to a main u 3 we prove the main theorem of the paper, Theorem 1.2. In 4 we compute m some u elements of hIA (Φn ) i which we use in the proof of the main lemma. In 5 we prove the main reminded lemma. We end the paper with the proof of Theorem 1.3, and some remarks on the problem of computing C (Φn ) and C (IA (Φn ) , Φn ). Acknowledgements: I wish to offer my deepest thanks to my great supervisor Prof. Alexander Lubotzky for his sensitive and devoted guidance, and to the Rudin foundation trustees for their generous support during the period of the research. 5 2 Some properties of IA (Φn) and its subgroups Let G = IA (Φn ) = ker (Aut (Φn ) → Aut (Φn /Φ′n ) = GLn (Z)). We start with recalling some of the properties of G = IA (Φn ) and its subgroups, as presented in Section 3 in [Be2]. We also refer the reader to [Be2] for the proofs of the statements in this section. We start with the following notations: • Φn = Fn /Fn′′ = the free metabelian group on n elements. Here Fn′′ denotes the second derivative of Fn , the free group on n elements. ′ m ′ m • Φn,m = Φn /Mn,m , where Mn,m = (Φ′n Φm n ) (Φn Φn ) . • IGn,m = G(Mn,m ) = ker (IA (Φn ) → Aut (Φn,m )) . m • IAm n = hIA (Φn ) i. ±1 n • Rn = Z[Zn ] = Z[x±1 1 , . . . , xn ] where x1 , . . . , xn are the generators of Z . • Zm = Z/mZ. • σi = xi − 1 for 1 ≤ i ≤ n. We also denote by ~σ the column vector which has σi in its i-th entry. P • An = ni=1 σi Rn = the augmentation ideal of Rn . Pn • Hn,m = ker (Rn → Zm [Znm ]) = i=1 (xm i − 1) Rn + mRn . By the well known Magnus embedding (see [Bi], [RS], [Ma]), one can identify Φn with the matrix group: ) (  n X g a 1 t1 + . . . + a n tn n ai (xi − 1) | g ∈ Z , ai ∈ Rn , g − 1 = Φn = 0 1 i=1 where ti is a free basis for Rn -module, under the identification of the generators of Φn with the matrices   xi ti 1 ≤ i ≤ n. 0 1 Moreover, for every α ∈ IA (Φn ), one can describe α by its action on the generators of Φn , by:     xi ti xi ai,1 t1 + . . . + ai,n tn α: 7→ 0 1 0 1 and this description gives an injective homomorphism (see [Ba], [Bi]): IA (Φn ) ֒→ defined by α 7→ GLn (Rn )  a1,1 · · ·  ..  . 
an,1 6 ···  a1,n ..  .  an,n which gives an identification of IA (Φn ) with the subgroup: IA (Φn ) = = {A ∈ GLn (Rn ) | A~σ = ~σ } n o In + A ∈ GLn (Rn ) | A~σ = ~0 . One can find the proof of the following proposition in [Be2] (Propositions 3.1 and 3.2): Proposition 2.1. Let In + A ∈ IA (Φn ). Then: • If one denotes the P entries of A by ak,l for 1 ≤ k, l ≤ n, then for every n 1 ≤ k, l ≤ n, ak,l ∈ l6=i=1 σi Rn ⊆ An . Q • det (In + A) is of the form: det (In + A) = nr=1 xsrr for some sr ∈ Z. Consider now the map:    Pn g a 1 t1 + . . . + a n tn Φn = | g ∈ Zn , ai ∈ Rn , g − 1 = i=1 ai (xi − 1) 0 1 ↓    Pn g a 1 t1 + . . . + a n tn | g ∈ Znm , ai ∈ Zm [Znm ], g − 1 = i=1 ai (xi − 1) 0 1 which induced by the projections Zn → Znm , Rn = Z[Zn ] → Zm [Znm ]. Using result of Romanovskiı̆ [Rom], it is shown in [Be1] that this map is surjective and that Φn,m is canonically isomorphic to its image. Therefore, we can identify the principal congruence subgroup of IA (Φn ), IGn,m , with: IGn,m = = {A ∈ ker (GLn (Rn ) → GLn (Zm [Znm ])) | A~σ = ~σ } n o In + A ∈ GLn (Rn , Hn,m ) | A~σ = ~0 . Let us step forward with the following definitions: Definition 2.2. Let A ∈ GLn (Rn ), and for 1 ≤ i ≤ n, denote by Ai,i the minor which obtained from A by erasing its i-th row and i-th column. Now, for every 1 ≤ i ≤ n, define the subgroup IGLn−1,i ≤ IA (Φn ), by:   The i-th row of A is 0, IGLn−1,i = In + A ∈ IA (Φn ) | . In−1 + Ai,i ∈ GLn−1 (Rn , σi Rn ) The following proposition is proven in [Be2] (Proposition 3.4): Proposition 2.3. For every 1 ≤ i ≤ n we have: IGLn−1,i ∼ = GLn−1 (Rn , σi Rn ). We recall the following definitions from Algebraic K-Theory: Definition 2.4. Let R be a commutative ring (with identity), H ⊳ R an ideal, and d ∈ N. Then: 7 • Ed (R) = hId + rEi,j | r ∈ R, 1 ≤ i 6= j ≤ di ≤ SLd (R) where Ei,j is the matrix which has 1 in the (i, j)-th entry and 0 elsewhere. • SLd (R, H) = ker (SLd (R) → SLd (R/H)). • Ed (R, H) = the normal subgroup of Ed (R), which is generated as a normal subgroup by the elementary matrices of the form Id + hEi,j for h ∈ H. Under the above identification of IGLn−1,i with GLn−1 (Rn , σi Rn ), for every 1 ≤ i ≤ n we define: Definition 2.5. Let H ⊳ Rn . Then: ISLn−1,i (H) = IGLn−1,i ∩ SLn−1 (Rn , H) IEn−1,i (H) = IGLn−1,i ∩ E n−1 (Rn , H) ≤ ISLn−1,i (H) . 3 The main theorem’s proof Using the above notations we prove in u 5 the following main lemma: Lemma 3.1. For every n ≥ 4 and m ∈ N one has: IGn,m2 ⊆ IAm n · n Y ISLn−1,i (σi Hn,m ) i=1 = IAm n · ISLn−1,1 (σ1 Hn,m ) · . . . · ISLn−1,n (σn Hn,m ) . Observe that it follows that when n ≥ 4, then for every m ∈ N: IGn,m4 ⊆ 2 IAm n · n Y ISLn−1,i σi Hn,m2 i=1 ⊆ ⊆ IAm n · IAm n · n Y i=1 n Y i=1 ISLn−1,i σi Hn,m2  ISLn−1,i Hn,m2 .   The following Lemma is proved in [Be2], using classical results from Algebraic K-theory (Lemma 7.1 in [Be2]): Lemma 3.2. For every n ≥ 4, 1 ≤ i ≤ n and m ∈ N one has:  IEn−1,i Hn,m2 ⊆ IAm n. Let us now quote the following proposition (see [Be2], Corollary 2.3): Proposition 3.3. Let R be a commutative ring, H ⊳ R ideal of finite index and d ≥ 3. Assume also that Ed (R) = SLd (R). Then: SK1 (R, H; d) = SLd (R, H) /E d (R, H) is a finite group which is central in GLd (R) /E d (R, H). 8 Now, according to Proposition 3.3 and the fact that Ed (Rn ) = SLd (Rn ) for every d ≥ 3 [Su], we obtain that for every n ≥ 4: SLn−1 (Rn , Hn,m ) /E n−1 (Rn , Hn,m ) = SK1 (R, Hn,m ; n − 1) is a finite group. Thus ISLn−1,i (Hn,m ) /IEn−1,i (Hn,m ) ≤ SLn−1 (Rn , Hn,m ) /E n−1 (Rn , Hn,m ) is also a finite group. 
Hence, the conclusion from Lemmas 3.1 and 3.2 is that for every m ∈ N, one can cover IGn,m4 with finite number of cosets of IAm n . As IGn,m4 is obviously a finite index subgroup of IA (Φn ) we deduce that IAm n is also a finite index subgroup of IA (Φn ). Therefore, as every normal subgroup of IA (Φn ) of index m cotains IAm n we deduce that one can write explicitely m \ IA (Φn ) = lim ←− (IA (Φn ) /IAn ). On the other hand, it is proven in [Be1] that Φ̂n = ← lim −Φn,m , and thus: Corollary 3.4. For every n ≥ 4:   m C (IA (Φn ) , Φn ) = ker lim (IA (Φ ) /IA ) → lim (IA (Φ ) /IG ) n n n,m n ←− ←−   m 4 IA (Φ ) /IG = ker lim (IA (Φ ) /IA ) → lim n n n,m n ←− ←−  m m = lim IAn · IGn,m4 /IAn . ←− Now, Proposition 3.3 gives us also that for every m ∈ N and n ≥ 4, the subgroup SK1 (Rn , Hn,m ; n − 1) is central in GLn−1 (Rn ) /E n−1 (Rn , Hn,m ). This fact is used in [Be2] to prove that (see the arguments in Section 5 in [Be2]) : Proposition 3.5. For every n ≥ 4, m ∈ N and 1 ≤ i ≤ n the subgroup:  m IAm n · ISLn−1,i σi Hn,m2 /IAn is central in IA (Φn ) /IAm n. Corollary 3.6. For every n ≥ 4 and m ∈ N the elements of the set IAm n · n Y i=1  ISLn−1,i σi Hn,m2 /IAm n belong to the center of IA (Φn ) /IAm n. The conclusion from the latter corollary is that for every n ≥ 4 and m ∈ N, the set n Y  ISLn−1,i σi Hn,m2 /IAm IAm n n · i=1 is an abelian group which contained in the center of IA (Φn ) /IAm n . In particm m 4 /IA · IG ular, IAm is contained in the center of IA (Φ ) /IA n n,m n n , and thus, n \ C (IA (Φn ) , Φn ) is in the center of IA (Φn ). This finishes, up to the proof of Lemma 3.1, the proof of Theorem 1.2. So it remains to prove Lemma 3.1. But before we start to prove this lemma, we need to compute some elements of IAm n . We will do this in the following section. 9 4 Some elementary elements of hIA (Φn )m i m In this section we compute some elements of IAm n = hIA (Φn ) i which needed through the proof of Lemma 3.1. As one can see below, we separate the elementary elements to two types. In addition, we separate the treatment of the elements of type 1, to two parts. We hope this separations will make the process clearer. Additionally to the previous notations, on the section, and also later on, we will use the notation: µr,m = m−1 X xir for 1 ≤ r ≤ n. i=0 4.1 Elementary elements of type 1 Proposition 4.1. Let n ≥ 3, 1 ≤ u ≤ n and m ∈ N. Denote by ~ei the i-th row standard vector. Then, the elements of IA (Φn ) of the form (the following notation means that the matrix is similar to the identity matrix, except the entries in the u-th row):   Iu−1 0 0  au,1 · · · au,u−1 1 au,u+1 · · · au,n  ← u-th row 0 0 In−u when (au,1 , . . . , au,u−1 , 0, au,u+1 , . . . , au,n ) is a linear combination of the vectors: 1. {m (σi~ej − σj ~ei ) | i, j 6= u, i 6= j} 2. {σk µk,m (σi~ej − σj ~ei ) | i, j, k 6= u, i 6= j} 3. {σk µi,m (σi~ej − σj ~ei ) | i, j, k 6= u, i 6= j, k 6= j} with coefficients in Rn , belong to IAm n. Before proving this proposition, we present some more elements of this type. Note that for the following proposition we assume n ≥ 4: Proposition 4.2. Let n ≥ 4, IA (Φn ) of the form:  Iu−1  au,1 · · · au,u−1 0 1 ≤ u ≤ n and m ∈ N. Then, the elements of 0 1 0 au,u+1 0 ··· In−u  au,n  ← u-th row when (au,1 , . . . , au,u−1 , 0, au,u+1 , . . . , au,n ) is a linear combination of the vectors:  2 1. σu µu,m (σi~ej − σj ~ei ) | i, j 6= u, i 6= j 2. {σu σj µi,m (σi~ej − σj ~ei ) | i, j 6= u, i 6= j} with coefficients in Rn , belong to IAm n. 10 Proof. 
(of Proposition 4.1) Without loss of generality, we assume that u = 1. Observe now that for every ai , bi ∈ Rn for 2 ≤ i ≤ n one has:      1 a2 + b 2 · · · an + b n 1 b2 · · · bn 1 a2 · · · an . = 0 In−1 0 In−1 0 In−1 Hence, it is enough to prove that the elements of the following forms belong to IAm ei we mean that the entry of the i-th column in the first n (when we write a~ row is a):   1 mf (σi~ej − σj ~ei ) i, j 6= 1, i 6= j, f ∈ Rn 1. 0 In−1   1 σk µk,m f (σi~ej − σj ~ei ) i, j, k 6= 1, i 6= j, f ∈ Rn 2. 0 In−1   1 σk µi,m f (σi~ej − σj ~ei ) i, j, k 6= 1, i 6= j, k 6= j, f ∈ Rn . 3. 0 In−1 We start with the elements of form 1. Here we have: m    1 f (σi~ej − σj ~ei ) 1 mf (σi~ej − σj ~ei ) = ∈ IAm n. 0 In−1 0 In−1 We pass to the elements of form 2. In this case we have: " m # −1  1 f (σi~ej − σj ~ei ) xk −σ1~ek m IAn ∋ , 0 In−1 0 In−1   1 σk µk,m f (σi~ej − σj ~ei ) . = 0 In−1 We finish with the elements of form 3. If k = i, it is a private case of the previous case, so we assume k 6= i. So we assume that i, j, k are all different from each other and i, j, k 6= 1 - observe that this case is interesting only when n ≥ 4. The computation here is more complicated than in the previous cases, so we will demonstrate it for the private case: n = 4, i = 2, j = 3, k = 4. It is clear that symmetrically, with similar argument, the same holds in general when n ≥ 4 for every i, j, k 6= 1 which different from each other. So: −m     1 0 0 0 1 0 −σ4 f σ3 f    0 1 1 0 0  0 0    , 0  IAm ∋  4      0 0 0 −σ3 x2 0 1 0 0 0 0 1 0 0 0 1   1 −σ4 f µ2,m σ3 σ4 f σ2 µ2,m 0  0 1 0 0  . =   0 0 1 0  0 0 0 1 11 We pass now to the proof of Proposition 4.2. Proof. (of Proposition 4.2) Also here, without loss of generality, we assume that u = 1. Thus, all we need to show is that also the elements of the following forms belong to IAm n:   1 σ12 µ1,m f (σi~ej − σj ~ei ) i, j 6= 1, i 6= j, f ∈ Rn 1. 0 In−1   1 σ1 σj µi,m f (σi~ej − σj ~ei ) i, j 6= 1, i 6= j, f ∈ Rn . 2. 0 In−1 Also here, to simplify the notations, we will demonstrate the proof in the private case: n = 4, i = 2, j = 3. We start with the first form. From Proposition 4.1 we have (an element of form 2 in Proposition 4.1):   1 0 0 0  0 1 0 0  .  IAm 4 ∋ 0 0 1 0  0 σ3 σ1 µ1,m f −σ2 σ1 µ1,m f 1 Therefore, we also have:  x4 0 0 −σ1   0  0 1 0 ∋  IAm 4  0 0 1 0 0 0 0 1  2 1 −σ3 σ1 µ1,m f  0 1 =   0 0 0 0   1 0   0 1 ,   0 0 0 σ3 σ1 µ1,m f  σ2 σ12 µ1,m f 0 0 0  . 1 0  0 1 0 0 1 −σ2 σ1 µ1,m f  0  0   0  1 We pass to the elements of form 2. From Proposition 4.1 we have (an element of form 3 in Proposition 4.1):   1 0 0 0  0 1 0 0    IAm 4 ∋ 0 0 1 0  0 σ1 σ3 µ2,m f −σ1 σ2 µ2,m f 1 and therefore, we have:  1 0 σ4 −σ3  0 1 0 0 m   IA4 ∋  0 0 1 0 0 0 0 1  1 −σ1 σ32 µ2,m f  0 1 =   0 0 0 0   1 0   0 1 ,   0 0 0 σ1 σ3 µ2,m f  σ3 σ1 σ2 µ2,m f 0 0 0  . 1 0  0 1 12 0 0 1 −σ1 σ2 µ2,m f  0  0    0  1 4.2 Elementary elements of type 2 Proposition 4.3. Let n ≥ 4, 1 ≤ u < v ≤ n and m ∈ N. Then, the elements of IA (Φn ) of the form:   Iu−1 0 0 0 0  0 1 + σu σv f 0 −σu2 f 0    ← u-th row  0  0 I 0 0 v−u−1    0 σv2 f 0 1 − σu σv f 0  ← v-th row 0 0 0 0 In−v for f ∈ Hn,m , belong to IAm n. Proof. As before, to simplify the notations we will demonstrate the proof in the case: n = 4, u = 1 and v = 2, and it will be clear from the computation that the same holds in the general case, provided n ≥ 4. 
First observe that for every f, g ∈ Rn we have:    1 + σ1 σ2 g −σ12 g 0 0 1 + σ1 σ2 f −σ12 f 0 0   σ22 g 1 − σ1 σ2 g 0 0  σ22 f 1 − σ1 σ2 f 0 0        0 0 1 0  0 0 1 0 0 0 0 1 0 0 0 1  1 + σ1 σ2 (f + g) −σ12 (f + g) 2  σ2 (f + g) 1 − σ1 σ2 (f + g) =  0 0 0 0 0 0 1 0  0 0   0  1 so it is enough to consider the cases f ∈ mR4 and f ∈ σr µr,m R4 for 1 ≤ r ≤ 4, separately. Consider now the following computation. For an arbitrary f ∈ Rn we have:    −1  1 0 0 0 x4 0 0 −σ1  0    1 0 0    ,  0 x4 0 −σ2    0 0 1 0   0 0 1 0   −σ2 f σ1 f 0 1 0 0 0 1     1 0 0 0 1 + σ1 σ2 f −σ12 f 0 0   0 1 0 0  1 − σ1 σ2 f 0 0  σ22 f = . ·  0 0 1 0   0 0 1 0  −σ4 σ2 f σ4 σ1 f 0 1 0 0 0 1 Therefore, we conclude that  1 0 0  0 1 0   0 0 1 −σ4 σ2 f σ4 σ1 f 0 if:   1 0  0 0  ,  0   0 −σ2 f 1 13 0 1 0 σ1 f 0 0 1 0  0 0   ∈ IAm 4 0  1 then also:  1 + σ1 σ2 f  σ22 f   0 0 −σ12 f 1 − σ1 σ2 f 0 0  0 0   ∈ IAm 4 . 0  1 0 0 1 0 Thus, the cases f ∈ mR4 and f ∈ σr µr,m R4 for r 6= 4, are obtained immediately from Proposition 4.1. Hence, it remains to deal with the case f ∈ σr µr,m R4 for r = 4. However, it is easy to see that by switching the roles of 3 and 4, the remained case is also obtained by similar arguments. 5 A main lemma In this section we prove Lemma 3.1 which states that for every n ≥ 4 and m ∈ N we have: IGn,m2 ⊆ IAm n · n Y ISLn−1,i (σi Hn,m ) i=1 = IAm n · ISLn−1,1 (σ1 Hn,m ) · . . . · ISLn−1,n (σn Hn,m ) . The proof will be presented in a few stages - each of which will have a separated subsection. In this sections n ≥ 4 will be constant, so we will make notations simpler and write: R = Rn , A = An , Hm = Hn,m , IAm = IAm n , IGm = IGn,m . We will also use the following notations: Om = mR, Ur,m = µr,m R when µr,m = Pm−1 i=0 xir for 1 ≤ r ≤ n . Pn Notice that it follows from the definitions, that: Hm = r=1 σr Ur,m + Om (we note that in [Be2] we used the notation Ur,m for σr µr,m R). 5.1 Reducing Lemma 3.1’s proof We start this subsection with introducing the following objects: Definition 5.1. Let m ∈ N. Define: R ⊲ Jm = n X 2 σr3 Ur,m + A2 Om + AOm r=1 Jm =  In + A | In + A ∈ IA (Φn ) ∩ GLn (R, Jm ) Qn 2 det (In + A) = r=1 xrsr m , sr ∈ Z Proposition 5.2. For every m ∈ N we have: IGm2 = IA (Φn ) ∩ GLn (R, Hm2 ) ⊆ Jm . 14  . Pm−1 Proof. Let x ∈ R. Notice that i=0 xi ∈ (x − 1) R + mR. In addition, by P mi replacing x by xm we obtain: m−1 ∈ (xm − 1) R + mR. Hence: i=0 x m2 x −1 2 m −1 X xi = (x − 1) m−1 X xi m−1 X xmi = (x − 1) ∈ (x − 1) ((x − 1) R + mR) ((xm − 1) R + mR) ⊆ (x − 1)2 (xm − 1) R + (x − 1)2 mR + (x − 1) m2 R. i=0 i=0 i=0 P 2 2 2 Thus, we obtain that Hm2 = nr=1 (xm r − 1)R + m R ⊆ Jm + Om . Now, let In +A ∈ IGm2 = IA (Φn )∩GLn (R, Hm2 ). From the above observation and  from 2 ∩A = Proposition 2.1, it follows that every entry of A belongs to Jm + Om J . In addition, by Proposition 2.1, the determinant of I + A is of the form m n Qn m2 sr . On the other hand, we know that under the projection x → 7 1 and x r r r=1 Qn m2 7→ 0 one has: In + A 7→ In and thus also r=1 xsrr = det (In + A) 7→ 1. Qn 2 Therefore, det (In + A) is of the form r=1 xrm sr , as required. Corollary 5.3. Let n ≥ 4 and m ∈ N. Then, for proving Lemma 3.1 it suffices to prove that: n Y ISLn−1,i (σi Hm ) . Jm ⊆ IAm · i=1 We continue with defining the following objects: Definition 5.4. For 0 ≤ u ≤ n and 1 ≤ v ≤ n, define the following ideals of ±1 R = Rn = Z[x±1 1 , . . . 
, xn ]: Ãu = n X σr R r=u+1 J˜m,u,v =   Pu 2 Ãu  r=1 Aσr Ur,m + AOm + Om +   P  n 3 r=u+1 σr Ur,m  Pu 2  Ãu  r=1 Aσr Ur,m + AOm + Om +  P  n 3 2 v6=r=u+1 σr Ur,m + Aσv Uv,m v≤u v>u and for 0 ≤ u ≤ n define the groups: Ãu = IA (Φn ) ∩ GLn (R, Ãu ), and: ) ( Qn 2 det (In + A) = i=1 xsi i m , every entry in . J̃m,u = In + A ∈ IA (Φn ) | the v-th colmun of A belongs to J˜m,u,v Remark 5.5. If In + A ∈ J̃m,u , the entries of the columns of A may belong to different ideals in R, so it is not obvious that J̃m,u is indeed a group, i.e. closed under matrix multiplication and the inverse operation. However, showing that J̃m,u is a group is not difficult and we leave it to the reader. 15 Notice now the extreme cases: 1. For u = 0 we have (for every v and m): Ã0 = A, and Jm ⊆ J˜m,0,v . Hence, we have Jm ⊆ J̃m,0 . 2. For u = n we have (for every v and m): Ãn = J˜m,n,v = 0. Hence, we also have J̃m,n = {In }. Corollary 5.6. For proving Lemma 3.1, it is enough to prove that for every 1 ≤ u ≤ n: J̃m,u−1 ⊆ IAm · ISLn−1,u (σu Hm ) · J̃m,u . Proof. Using that IAm is normal in IA (Φn ) and the latter observations, under the above assumption, one obtains that: Jm ⊆ J̃m,0 ⊆ ⊆ ⊆ IAm · ISLn−1,1 (σ1 Hm ) · J̃m,1 ... n Y (IAm · ISLn−1,u (σu Hm )) · J̃m,n u=1 = IAm n Y ISLn−1,u (σu Hm ) u=1 which is the requirement of Corollary 5.3. We continue with defining the following objects: Definition 5.7. For 0 ≤ u ≤ n and 1 ≤ v ≤ n, define the following ideals of R:  Pu  2 A  r=1 Aσr Ur,m + AOm + Om +  Pn  3 v≤u r=u+1 σr Ur,m  Pu Jm,u,v = 2  + Aσ U + AO + O A r r,m m  m  Pn r=1 3 2 v>u v6=r=u+1 σr Ur,m + Aσv Uv,m and for 0 ≤ u ≤ n define the group:   Qn si m2 , every entry in . Jm,u = In + A ∈ IA (Φn ) | det (In + A) = i=1 xi the v-th colmun of A belongs to Jm,u,v It follows from the definitions that for every 1 ≤ u ≤ n we have: 1. Jm,u−1,v ⊆ Jm,u,v , but Ãu−1 ⊇ Ãu . Thus, we have also 2. Jm,u−1 ⊆ Jm,u , but Ãu−1 ⊇ Ãu . Here comes the connection between the latter objects to the objects defined in Definition 5.4. Proposition 5.8. For every 0 ≤ u ≤ n and 1 ≤ v ≤ n we have: Jm,u,v ∩ Ãu = J˜m,u,v , and hence: Jm,u ∩ Ãu = J̃m,u . 16 Proof. It is clear from the definitions that we have: J˜m,u,v ⊆ Jm,u,v ∩ Ãu , so we have to show an inclusion on the opposite way. Let a ∈ Jm,u,v ∩ Ãu . As: (P n σ 3 Ur,m v≤u ˜ Jm,u,v ⊇ Pnr=u+1 r 3 2 v6=r=u+1 σr Ur,m + Aσv Uv,m v > u  Pu 2 we can assume that: a ∈ A r=1 Aσr Ur,m + AOm + Om ∩ Ãu . Observe now that by dividing an element b ∈ R by σu+1 , . . . , σn (with residue), one can present b as a summand of an element of Ãu with an ele±1 ment of Ru = Z[x±1 1 , . . . , xu ]. Hence, R = Ãu + Ru and A = Ãu + Au , where Au is the augmentation ideal of Ru . Hence: a ∈ (Ãu + Au )2 u X σr µr,m (Ãu + Ru ) r=1 2 ⊆ + (Ãu + Au ) m(Ãu + Ru ) + (Ãu + Au )m2 (Ãu + Ru ) u X J˜m,u,v + A2u σr µr,m Ru + A2u mRu + Au m2 Ru . r=1  Pu Hence, we can assume that a ∈ A2u r=1 σr µr,m Ru + A2u mRu + Au m2 Ru ∩ Ãu = {0}, i.e. a = 0 ∈ J˜m,u,v , as required. Due to the above, we can now reduce Lemma 3.1’s proof as follows. Corollary 5.9. For proving Lemma 3.1 it suffices to show that given 1 ≤ u ≤ n, for every α ∈ J̃m,u−1 there exist β ∈ IAm ∩ Jm,u and γ ∈ ISLn−1,u (σu Hm ) ∩ Jm,u such that γαβ ∈ Ãu . Proof. As clearly Jm,u ⊇ Jm,u−1 ⊇ J̃m,u−1 , we obtain from Proposition 5.8 that: γαβ ∈ Ãu ∩ Jm,u = J̃m,u . Thus: α ∈ ISLn−1,u (σu Hm ) · J̃m,u · IAm = IAm · ISLn−1,u (σu Hm ) · J̃m,u . This yields that J̃m,u−1 ⊆ IAm ·ISLn−1,u (σu Hm )·J̃m,u which is the requirement of Corollary 5.6. 
5.2 A technical lemma I this section we will prove a technical lemma, which will help us in subsection 5.3 to prove Lemma 3.1. In the following subsections 1 ≤ u ≤ n will be constant. Under this statement, we will use the following notations: • For a ∈ R we denote its image in Ru under the projection xu+1 , . . . , xn 7→ 1 by ā. In addition, we denote its image in Ru−1 under the projection ¯. xu , . . . , xn 7→ 1 by ā • For α ∈ GLn (R) we denote its image in GLn (Ru ) under the projection xu+1 , . . . , xn 7→ 1 by ᾱ. 17 • Similarly, we will use the following notations for every m ∈ N: P – Ā = Au = ui=1 σi P Ru , Ūr,m = µr,m Ru for 1 ≤ r ≤ u, Ōm = mRu u and H̄m = Hu,m = r=1 σr Ūr,m + Ōm . P u−1 ¯ =A ¯ – Ā u−1 = i=1 σi Ru−1 , Ūr,m = µr,m Ru−1 for 1 ≤ r ≤ u − 1 and ¯ = mR Ō . m u−1 Now, let α = In + A ∈ J̃m,u−1 , and denote the entries of A by ai,j . Consider the u-th row of A. Under the above assumption, for every v we have:  P  u−1 2  + Aσr Ur,m + AOm + Om Ãu−1  r=1   Pn σ 3 U v<u r=u r r,m  au,v ∈ Pu−1 2  Ãu−1  r=1 Aσr Ur,m + AOm + Om +   Pn 3 2 v ≥ u. v6=r=u σr Ur,m + Aσv Uv,m Hence we have:   P u−1 2  Ū + Ā Ō + Ō + Āσu2 Ūu,m Āσ σ r,m m r u  m r=1     = σu Pu Āσr Ūr,m + ĀŌm + Ō2 m r=1 P  āu,v ∈ u−1 2  + σu3 Ūu,m Āσr Ūr,m + ĀŌm + Ōm σu  r=1   P   u−1 = σ 2 2 Ū + σ Ū + Ā Ō + Ō Āσ r,m u,m m r u u m r=1 v=u (5.1) v 6= u. We can state now the technical lemma: Lemma 5.10. Let α = In + A ∈ J̃m,u−1 . Then, there exists δ ∈ IAm ∩ J̃m,u−1 such that for every v 6= u, the (u, v)-th entry of αδ −1 belongs to σu2 H̄m . We will prove the lemma in two steps. Here is the first step: Proposition 5.11. Let α = In + A ∈ J̃m,u−1 . Then, there exists δ ∈ IAm ∩ J̃m,u−1 such that for every v < u, the (u, v)-th entry of αδ −1 belongs to σu2 H̄m . Proof. So let α = In + A ∈ J̃m,u−1 , and observe that for every 1 ≤ v ≤ u − 1 one Pu−1 2 . can write āu,v = σu b̄u,v for some: b̄u,v ∈ r=1 Āσr Ūr,m + σu2 Ūu,m + ĀŌm + Ōm In addition, as it is easy to see that: u−1 X r=1 Āσr Ūr,m = u−1 X ¯ )⊆σ ¯ )σ (σ Ū + Ū (σu Ru + Ā r,m u r u r,m r=1 u−1 X σr Ūr,m + r=1 u−1 X ¯ ¯ σ Ū Ā r r,m r=1 2 ¯2 ¯ + Ō ¯ Ō ¯ )2 ⊆ σ Ō + Ā ¯ ) + (σ Ō + Ō ¯ )(σ Ō + Ō ĀŌm + Ōm = (σu Ru + Ā m m u m m u m u m m ¯ for every 1 ≤ v ≤ u − 1, for some: one can write b̄ = σ c̄ + b̄ u,v ¯b̄ u,v u,v u u,v ∈ u−1 X ¯¯ ¯ σ Ū ¯2 ¯ Ā r r,m + ĀŌm + Ōm r=1 c̄u,v ∈ u−1 X σr Ūr,m + σu Ūu,m + Ōm = H̄m . r=1 18 Notice, that as A satisfies the condition A~σ = ~0 we have the equality σ1 au,1 + . . . + σn au,n = 0, which yields the following equalities as well: σ1 āu,1 + . . . + σu−1 āu,u−1 + σu āu,u = 0 σ1 b̄u,1 + . . . + σu−1 b̄u,u−1 + āu,u ⇓ = 0 σ1¯b̄u,1 + . . . + σu−1¯b̄u,u−1 ⇓ = 0. Observe now that for every 1 ≤ v ≤ u − 1 we have: ! u−1 X 2 ¯ ¯ ¯ ¯ ¯ ¯ ⊆ J˜m,u−1,v Āσr Ūr,m + ĀŌm + Ō σu b̄u,v ∈ σu m r=1 and thus, if we define:  δ =  σu¯b̄u,1 Iu−1 ··· 0 σu¯b̄u,u−1  0 0 1 0  ← u-th row 0 In−u then δ ∈ J̃m,u−1 . We claim now that we also have δ ∈ IAm . We will prove this claim soon, but assuming this claim, we can now multiply α from the right by δ −1 ∈ J̃m,u−1 ∩ IAm and obtain an element in J̃m,u−1 such that the image of its (u, v)-th entry for 1 ≤ v ≤ u − 1, under the projection xu+1 , . . . , xn 7→ 1, is: āu,v − σu¯b̄u,v (1 + āu,u ) = σu2 c̄u,v − σu¯b̄u,v āu,u ∈ σu2 H̄m + σu2 u−1 X ¯ σ Ū ¯¯ ¯ ¯2 Ā r r,m + ĀŌm + Ōm r=1 = σu2 H̄m ! as required. So it remains to prove the following claim: Pu−1 ¯ ¯ ¯ 2 for ¯ + Ō ¯ Ō σr Ūr,m + Ā Claim 5.12. Let n ≥ 4, 1 ≤ u ≤ n, and ¯b̄u,v ∈ r=1 Ā m m 1 ≤ v ≤ u − 1 which satisfy the condition: σ1¯b̄u,1 + . . . 
+ σu−1¯b̄u,u−1 = 0. Then:  u-th row →  σu¯b̄u,1 Iu−1 ··· 0 σu¯b̄u,u−1 19  0 0 1 0  ∈ IAm . 0 In−u (5.2) Proof. It will be easier to prove a bit more - we will prove that if for every 1 ≤ v ≤ u − 1: u−1 X ¯b̄ ∈ ¯ ¯2 ¯ ¯ ¯ σ Ū Ā r r,m + Ā Ūv,m + Ōm u,v v6=r=1 then the vector: ~b = (¯b̄u,1 , . . . , ¯b̄u,u−1 , 0, . . . , 0) is a linear combination of the vectors:   σk µk,m (σi~ej − σj ~ei ) , m (σi~ej − σj ~ei ) | i, j, k ≤ u − 1, i 6= j σk µi,m (σi~ej − σj ~ei ) with coefficients in Ru−1 . This will show that σu (¯b̄u,1 , . . . , ¯b̄u,u−1 , 0, . . . , 0) is a linear combination of the vectors in Propositions 4.1 and 4.2, so the claim will follow. We start with expressing ¯b̄u,1 explicitly by writing: ¯b̄ = u,1 u−1 X u−1 X σi σr µr,m pi,r + r=2 i=1 u−1 X σi σj µ1,m qi,j + mr i,j=1 for some pi,r , qi,j , r ∈ Ru−1 . Now, Equation 5.2 gives that under the projection Pu−1 ¯ σ2 , . . . , σu−1 7→ 0, ¯b̄u,1 7→ 0. It follows that ¯b̄u,1 ∈ i=2 σi Ru−1 ⊆ Ā. In ¯ , so we can write: particular, r ∈ Ā ¯b̄ = u,1 u−1 X u−1 X σi σr µr,m pi,r + r=2 i=1 u−1 X σi σj µ1,m qi,j + u−1 X σi mri i=1 i,j=1 for some pi,r , qi,j , ri ∈ Ru−1 . Observe now that by dividing r1 by σ2 , . . . , σu−1 (with residue) we can write Pu−1 r1 = r1′ + i=2 σi ri′ where r1′ depends only on x1 . Therefore, by replacing r1 by r1′ and ri by ri + σ1 ri′ for 2 ≤ i ≤ n, we can assume that r1 depends only on x1 . Similarly, by dividing q1,1 by σ2 , . . . , σu−1 , we can assume that q1,1 depends only on x1 . Now, by replacing ~b with: ~b − − u−1 X u−1 X r=2 i=1 u−1 X u−1 X σi µr,m pi,r (σr ~e1 − σ1~er ) σj µ1,m qi,j (σi~e1 − σ1~ei ) − u−1 X σ1 µ1,m q1,j (σj ~e1 − σ1~ej ) j=2 i=2 j=1 − u−1 X mri (σi~e1 − σ1~ei ) i=2 we can assume that ¯b̄u,1 is a polynomial which depends only on x1 . On the Pu−1 other hand, we already saw that Equation 5.2 yields that ¯b̄u,1 ∈ i=2 σi Ru−1 , so we can actually assume that ¯b̄u,1 = 0. 20 We continue in this manner by induction. In the 1 ≤ v ≤ u − 1 stage we assume that ¯b̄u,1 = . . . ¯b̄u,v−1 = 0. Then we write: ¯b̄ = u,v u−1 X u−1 X u−1 X σi σr µr,m pi,r + σi σj µv,m qi,j + mr i,j=1 v6=r=1 i=1 for some pi,r , qi,j , r ∈ Ru−1 . The condition ¯b̄u,1 = . . . = ¯b̄u,v−1 = 0 and Equation 5.2 give that σv ¯b̄u,v +σv+1¯b̄u,v+1 +. . .+σu−1¯b̄u,u−1 = 0 and thus, under Pu−1 ¯ . In the projection σv+1 , . . . , σu−1 7→ 0, ¯b̄u,v 7→ 0, so ¯b̄u,v ∈ i=v+1 σi Ru−1 ⊆ Ā ¯ particular, r ∈ Ā, so we can write: ¯b̄ = u,v u−1 X u−1 X σi σr µr,m pi,r + u−1 X σi σj µv,m qi,j + σi mri i=1 i,j=1 v6=r=1 i=1 u−1 X for some pi,r , qi,j , ri ∈ Ru−1 . Now, as we explained previously, by dividing pi,r , qi,j , ri for 1 ≤ i, j, r ≤ v by σv+1 , . . . , σu−1 , we can assume that these polynomials depend only on x1 , . . . , xv . Thus, by replacing ~b with: ~b − u−1 X X u−1 σi µr,m pi,r (σr ~ev − σv ~er ) − u−1 X u−1 X σj µv,m qi,j (σi~ev − σv ~ei ) − u−1 X v u−1 X X σi µv,m qi,j (σj ~ev − σv ~ej ) i=1 j=v+1 i=v+1 j=1 − σr µr,m pi,r (σi~ev − σv ~ei ) r=1 i=v+1 r=v+1 i=1 − v−1 X X u−1 mri (σi~ev − σv ~ei ) i=v+1 we can assume that ¯b̄u,v is a polynomial which depends only on x1 , . . . , xv , without changing the assumption that ¯b̄u,w = 0 for w < v. But we saw that in Pu−1 this situation Equation 5.2 yields that ¯b̄u,v ∈ i=v+1 σi Ru−1 , so we can actually assume that ¯b̄u,v = 0, as required. Here is the second step of the technical lemma’s proof: Proposition 5.13. Let α = In + A ∈ J̃m,u−1 such that for every v < u, āu,v ∈ σu2 H̄m . 
Then, there exists δ ∈ IAm ∩ J̃m,u−1 such that for every v 6= u, the (u, v)-th entry of αδ −1 belongs to σu2 H̄m . Proof. So let α = In + A ∈ J̃m,u−1 such that for every v < u, , āu,v ∈ σu2 H̄ m . We remined that by Equation 5.1,for every v > u we have: āu,v ∈ P u−1 2 2 σu r=1 Āσr Ūr,m + σu Ūu,m + ĀŌm + Ōm . Hence, we can write explicitly: āu,v = σu u−1 u XX σi σr µr,m pr,i + σu2 µu,m q + r=1 i=1 u X i=1 21 mσi ri + m2 s ! 2 for some: pr,i , q, ri , s ∈ Ru . Clearly, as ĀŌm ⊇ ĀŌm , by dividing s by σi for 1 ≤ i ≤ u (with residue), we can assume that s ∈ Z. Consider now the following element: m2 IAm ∋ (In + σv Eu,u − σu Eu,v ) = In + σv µv,m2 Eu,u − σu µv,m2 Eu,v = δ ′ . By the computation in the proof of Proposition 5.2, we obtain that: 2 µv,m2 ∈ σv2 Uv,m + σv Om + Om and thus (we remind that v > u): σv µv,m2 σu µv,m2  2 ∈ σv σv2 Uv,m + σv Om + Om ⊆ J˜m,u−1,u  2 2 ∈ σu σv Uv,m + σv Om + Om ⊆ J˜m,u−1,v . 2 ′ In addition, the determinant of δ ′ is xm v . Therefore, δ ∈ J̃m,u−1 . Observe now that as v > u, under the projection σu+1 , . . . , σn 7→ 0, xv 7→ 1, and δ is therefore maped to: δ̄ ′ = In − m2 σu Eu,v . Thus, if we multiply α from the right by δ ′s we obtain that the value of the entries in the u-th row under the projection σu+1 , . . . , σn 7→ 0 does not change, besides the value of the entry in the v-th colmun, which changes to (see Equation 5.1 for the ideal which contains āu,u ): ! u−1 X Āσr Ūr,m + σu2 Ūu,m + ĀŌm āu,v − sm2 σu (1 + āu,u ) ∈ σu + σu2 = σu r=1 u X r=1 u−1 X Āσr Ūr,m + ĀŌm + Āσr Ūr,m + r=1 σu2 Ūu,m 2 Ōm ! + ĀŌm ! .  Pu Pu−1 Hence, we can assume that āu,v ∈ σu i=1 σi fi + σu2 r=1 σr Ūr,m + Ōm = P Pu−1 u−1 σu i=1 σi fi + σu2 H̄m , for some fi ∈ r=1 σr Ūr,m + Ōm . Define now (the coefficient of ~ev is the value of the (u, v)-th entry):   Iu−1 0  0  Pu−1   δv =  −σv σu f1 · · · −σv σu fu−1 1 σu i=1 σi fi ~ev  ∈ J̃m,u−1 . 0 0 In−u By proposition 4.1, we obviously have: δv ∈ IAm . In addition, as v > u, under the projection σu+1 , . . . , σn 7→ 0 we have:   Iu−1 0  P 0   u−1 ev  . 1 σu δ̄v =  0 i=1 σi fi ~ 0 0 In−u 22 Thus, by multiplying α from the right by δ̄v−1 we obtain that the value of the entries in the u-th row under the projection σu+1 , . . . , σn 7→ 0 does not change, besides the value of the entry in the v-th colmun, which changes to: ! ! u u−1 X X 2 2 2 Āσr Ūr,m + AŌm + Ōm σi fi (1 + āu,u ) ∈ σu H̄m + σu āu,v − σu r=1 i=1 = σu2 H̄m . Qn Thus, defininig δ = v=u+1 δv finishes the proof of the proposition, and hence, also the proof of the technical lemma. 5.3 Finishing Lemma 3.1’s proof We remind that we have a constant 1 ≤ u ≤ n. We remind also that by Corollary 5.9, it suffices to show that given α ∈ J̃m,u−1 there exist β ∈ IAm ∩ Jm,u and γ ∈ ISLn−1,u (σu Hm ) ∩ Jm,u such that γαβ ∈ Ãu . So let α = In + A ∈ J̃m,u−1 . By the above technical lemma, there exists δ ∈ IAm ∩ J̃m,u−1 ⊆ IAm ∩ Jm,u such that for every v 6= u, the (u, v)-th entry of αδ −1 belongs to σu2 H̄m . Thus, by replacing α with αδ −1 , with out loss of generality, we can assume that we have āu,v ∈ σu2 H̄m for every v 6= u. I.e. for every v 6= u one can write: āu,v = σu2 b̄u,v for some b̄u,v ∈ H̄m . Now, for every v 6= u define the matrix:   σ1 b̄u,v (σv ~eu − σu~ev )  σ2 b̄u,v (σv ~eu − σu~ev )    δv = In +   ∈ Jm,u ..   . 
σn b̄u,v (σv ~eu − σu~ev ) which is equals, by direct computation, to the multiplication of the matrices:   0 Jm,u ∋ εv,k = In +  σk b̄u,v (σv ~eu − σu~ev )  ← k-th row 0 for k 6= u, v and the matrix (the following is an example for v > u):   0  σu b̄u,v (σv ~eu − σu~ev )  ← u-th row    0 Jm,u ∋ ηv = In +     σv b̄u,v (σv ~eu − σu~ev )  ← v-th row 0 Qn i.e. δv = ηv · u,v6=k=1 εv,k (observe that the matrices εv,k commute, so the product is well defined). One can see that by Propositions 4.1 and 4.2, εv,k ∈ IAm for every k 6= u, v. Moreover, by Proposition 4.3, ηv ∈ IAm . Hence, 23 Pn δv ∈ IAm ∩ Jm,u . Now, as for every 1 ≤ i ≤ n we have j=1 ai,j σj = 0 (by the Qn condition A~σ = ~0), α · u6=v=1 δv is equals to:       σ1 b̄u,v (σv ~eu − σu~ev ) a1,1 · · · a1,n n      .. ..  Y I +  σ2 b̄u,v (σv ~eu − σu~ev )  I +   n   n  . .. .     . u6 = v=1 an,1 · · · an,n σn b̄u,v (σv ~eu − σu~ev )     σ1 b̄u,v (σv ~eu − σu~ev ) a1,1 · · · a1,n n    ..  + X  σ2 b̄u,v (σv ~eu − σu~ev )  . = In +  ...    . . ..   u6 = v=1 an,1 · · · an,n σn b̄u,v (σv ~eu − σu~ev ) Qn It is easy to see now that if we denote α · u6=v=1 δv = In + C, then for every v 6= u, c̄u,v = 0, when ci,j is the (i, j)-th entry of C. Hence, we also have: c̄u,u σu = n X c̄u,v σ̄v = 0 =⇒ c̄u,u = 0. v=1 Thus, we can write α · following properties: Qn u6=v=1 δv = In + C̄ when the matrix C̄ has the • The entries of the u-th row of C̄ are all 0. • As ai,v ∈ J˜m,u−1,v for 5.1 Pevery i, v, by the computation for Equation  u−1 2 2 for every we have: āi,v ∈ σu r=1 Āσr Ūr,m + σu Ūu,m + ĀŌm + Ōm i, v 6= u. Hence, for every i, v 6= u we have: ! u−1 X 2 2 Āσr Ūr,m + σu Ūu,m + ĀŌm + Ōm + σu ĀH̄m c̄i,v ∈ σu r=1 =  2 . σu ĀH̄m + Ōm Qn Qu 2 Now, as det(δv ) = 1 for every v 6= u, det(α · u6=v=1 δv ) = det(α) = i=1 xsi i m . However, as the entries of C̄ have the above properties, Qthis determinant is mapped to 1 under the projection σu 7→ 0. Thus, det(α · nu6=v=1 δv ) is of the 2 form xsuu m . Now, set i0 6= u, and denote: ζ = In + σu µu,m2 Ei0 ,i0 − σi0 µu,m2 Ei0 ,u = (In + σu Ei0 ,i0 − σi0 Ei0 ,u ) m2 ∈ IAm . By the computation in the proof of Proposition 5.2, we obtain that: 2 µu,m2 ∈ σu2 Uu,m + σu Om + Om and thus: σu µu,m2 σi0 µu,m2   2 2 ⊆ Jm,u,i0 ⊆ σu ĀH̄m + Ōm ∈ σu σu2 Uu,m + σu Om + Om  2 2 ∈ σi0 σu Uu,m + σu Om + Om ⊆ Jm,u,u 24 2 so ζ ∈ IAm ∩ Jm,u . In addition det (ζ) = xm u . Therefore, α · writen as In + C̄, has the following properties: Q v6=u δv ζ −su , • The entries of the u-th row of C̄ are all 0.  2 , so we can write • For every i, v 6= u we have: c̄i,v ∈ σu ĀH̄m + Ōm 2 c̄i,v = σu di,v for some di,v ∈ ĀH̄m + Ōm . Pu Pu−1 • For every 1 ≤ i ≤ n we have: k=1 σk c̄i,k = 0, so c̄i,u = − k=1 σk di,k .  • det In + C̄ = 1. I.e.:   i=u 0 P c̄i,j = − u−1 σ d j=u k i,k k=1   σu di,j i, j 6= u  2 for some di,j ∈ ĀH̄mQ+ Ōm , and det In + C̄ = 1. Define now β = v6=u δv ζ −su and γ −1 by (γ −1 i,j is the (i, j)-th entry of γ −1 . Notice that the sums in the u-th column run until n and not until u − 1 as in In + C̄, as we need the following matrix to be an element of IA (Φn ). However, clearly, this addition does not changes the value of the determinant, and it stays to be 1):   i=u 0 P −1 γ i,j = − nu6=k=1 σk di,k j = u   σu di,j i, j 6= u.  m 2 ⊆ Hm and det γ −1 = So β ∈ IA  ∩ Jm,u . In addition, as di,j ∈ ĀH̄m + Ōm det In + C̄ = 1, γ ∈ ISLn−1,u (σu Hm ). Moreover, γ ∈ Jm,u . Hence, we obtained β ∈ IAm ∩ Jm,u and γ ∈ ISLn−1,u (σu Hm ) ∩ Jm,u such that γαβ = In , i.e. 
γαβ ∈ Ãu , as required. 6 Remarks and problems for further research We will prove now Theorem 1.3, which asserts that C (Φn ) is abelian for every n ≥ 4. But before, let us have the following proposition, which is slightly more general than Lemma 2.1. in [BER], but proven by similar arguments: α β Proposition 6.1. Let 1 → G1 → G2 → G3 → 1 be a short exact sequence of groups. Assume also that G1 is finitely generated. Then: α̂ β̂ 1. The sequence Ĝ1 → Ĝ2 → Ĝ3 → 1 is also exact. α̂ 2. The kernel ker(Ĝ1 → Ĝ2 ) is central in Ĝ1 . Proof. (of Theorem 1.3) By Proposition 6.1, the commutative exact diagram: 1 → IA (Φn ) → Aut (Φn ) ց ↓ Aut(Φ̂n ) 25 → GLn (Z) → 1 ↓ → GLn (Ẑ) . gives rise to the commutative exact diagram: \ \ \ IA (Φn ) → Aut (Φn ) → GL n (Z) ց ↓ ↓ Aut(Φ̂n ) → GLn (Ẑ) → 1 \ Now, as n ≥ 4, by the CSP for GLn (Z), the map GL n (Z) → GLn (Ẑ) is injec\ tive, so one obtains by diagram chasing, that C (IA (Φn ) , Φn ) = ker(IA (Φn ) → \ Aut(Φ̂n )) is mapped onto C (Φn ) = ker(Aut (Φn ) → Aut(Φ̂n )) through the map \ \ IA (Φn ) → Aut (Φn ). In particular, as by Theorem 1.2 C (IA (Φn ) , Φn ) is cen\ tral in IA (Φn ) for every n ≥ 4, it is also abelian, and thus C (Φn ) is an image of abelian group, and therfore abelian, as required. Problem 6.2. Is C (Φn ) not finitely generated? trivial? We proved in [Be2] that C (IA (Φn ) , Φn ) is not finitely generated for every n ≥ 4. This may suggest that also C (Φn ) is not finitely generated, or at list, \ not trivial. Moreover, if C (IA (Φn ) , Φn ) was not central in IA (Φn ), we could use the fact that IA (Φn ) is finitely generated for every n ≥ 4 [BM], and by the second part of Proposition 6.1 we could derive that the image of C (IA (Φn ) , Φn ) \ in Aut (Φn ) is not trivial. However, we showed that C (IA (Φn ) , Φn ) is central \ \ \ in IA (Φn ), so it is possible that C (IA (Φn ) , Φn ) ⊆ ker(IA (Φn ) → Aut (Φn )) and thus C (Φn ) is trivial. We saw in [Be2] that for every i there is a natural surjective map  ±1 ±1 \ \ ρ̂i : IA (Φn ) ։ GLn−1 Z[x i ], σi Z[xi ] . These maps enabled us to show in [Be2] that for every n ≥ 4, C (IA (Φn ) , Φn ) can be written as C (IA (Φn ) , Φn ) = (C (IA (Φn ) , Φn ) ∩ni=1 ker ρ̂i ) ⋊ n Y Ci i=1 where Ci ∼ = ∼ =  ±1 ±1 ±1 \ \ ker(GLn−1 Z[x i ], σi Z[xi ] → GLn−1 (Z[xi ]))  ±1 \ \ ker(SLn−1 Z[x±1 i ] → SLn−1 (Z[xi ])). \ are central in IA (Φn ). Here we showed that also C (IA (Φn ) , Φn ) ∩ni=1 ker ρ̂i lie \ in the center of IA (Φn ) but we still do not know to determine whether: Q Problem 6.3. Is C (IA (Φn ) , Φn ) = ni=1 Ci or it contains more elements? It seems that having the answer to Problem 6.3 will help to solve Problem 6.2. 26 References [A] M. Asada, The faithfulness of the monodromy representations associated with certain families of algebraic curves, J. Pure Appl. Algebra 159 (2001), 123–147. [Ba] S. Bachmuth, Automorphisms of free metabelian groups, Trans. Amer. Math. Soc. 118 (1965) 93-104. [Be1] D. E-C. Ben-Ezra, The congruence subgroup problem for the free metabelian group on two generators. Groups Geom. Dyn. 10 (2016), 583–599. [Be2] D. E-C. Ben-Ezra, The IA-congruence kernel of high rank free Metabelian groups, in preparation. [Bi] J. S. Birman, Braids, links, and mapping class groups, Princeton University Press, Princeton, NJ, University of Tokyo Press, Toyko, 1975. [Bo1] M. Boggi, The congruence subgroup property for the hyperelliptic modular group: the open surface case, Hiroshima Math. J. 39 (2009), 351–362. [Bo2] M. 
Boggi, A generalized congruence subgroup property for the hyperelliptic modular group, arXiv:0803.3841v5. [BER] K-U. Bux, M. V. Ershov, A. S. Rapinchuk, The congruence subgroup property for Aut (F2 ): a group-theoretic proof of Asada’s theorem, Groups Geom. Dyn. 5 (2011), 327–353. [BL] D. E-C. Ben-Ezra, A. Lubotzky, The congruence subgroup problem for low rank free and free metabelian groups, J. Algebra (2017), http://dx.doi.org/10.1016/j.jalgebra.2017.01.001. [BLS] H. Bass, M. Lazard, J.-P. Serre, Sous-groupes d’indice fini dans SLn (Z), (French) Bull. Amer. Math. Soc. 70 (1964) 385–392. [BM] S. Bachmuth, H. Y. Mochizuki, Aut (F ) → Aut (F/F ′′ ) is surjective for free group F of rank ≥ 4, Trans. Amer. Math. Soc. 292 (1985), 81–101. [DDH] S. Diaz, R. Donagi, D. Harbater, Every curve is a Hurwitz space, Duke Math. J. 59 (1989), 737–746. [FM] B. Farb, D. Margalit, A primer on mapping class groups, Princeton Mathematical Series, 49. Princeton University Press, Princeton, NJ, 2012. [KN] M. Kassabov, M. Nikolov, Universal lattices and property tau, Invent. Math. 165 (2006), 209–224. [L] A. Lubotzky, Free quotients and the congruence kernel of SL2 , J. Algebra 77 (1982), 411–418. 27 [Ma] W. Magnus, On a theorem of Marshall Hall, Ann. of Math. 40 (1939), 764–768. [Mc] D. B. McReynolds, The congruence subgroup problem for pure braid groups: Thurston’s proof, New York J. Math. 18 (2012), 925–942. [Mel] O. V. Mel´nikov, Congruence kernel of the group SL2 (Z), (Russian) Dokl. Akad. Nauk SSSR 228 (1976), 1034–1036. [Men] J. L. Mennicke, Finite factor groups of the unimodular group, Ann. of Math. 81 (1965), 31–37. [NS] N. Nikolov, D. Segal, Finite index subgroups in profinite groups, C. R. Math. Acad. Sci. Paris 337 (2003), 303–308. [Rom] N. S. Romanovskiı̆, On Shmel´kin embeddings for abstract and profinite groups, (Russian) Algebra Log. 38 (1999), 598-612, 639-640, translation in Algebra and Logic 38 (1999), 326–334. [RS] V. N. Remeslennikov, V. G. Sokolov, Some properties of a Magnus embedding, (Russian) Algebra i Logika 9 (1970), 566–578, translation in Algebra and Logic 9 (1970), 342–349. [Su] A. A. Suslin, The structure of the special linear group over rings of polynomials, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 41 (1977), 235–252, 477. Institute of Mathematics The Hebrew University Jerusalem, ISRAEL 91904 [email protected] 28
Enabling adaptive scientific workflows via trigger detection ∗ Maher Salloum Janine C. Bennett Ali Pinar Ankit Bhagatwala Jacqueline H. Chen Sandia National Laboratories, Livermore, CA arXiv:1508.04731v1 [] 19 Aug 2015 {mnsallo, jcbenne, apinar, abhagat, jhcehn} @sandia.gov ABSTRACT Next generation architectures necessitate a shift away from traditional workflows in which the simulation state is saved at prescribed frequencies for post-processing analysis. While the need to shift to in situ workflows has been acknowledged for some time, much of the current research is focused on static workflows, where the analysis that would have been done as a post-process is performed concurrently with the simulation at user-prescribed frequencies. Recently, research efforts are striving to enable adaptive workflows, in which the frequency, composition, and execution of computational and data manipulation steps dynamically depend on the state of the simulation. Adapting the workflow to the state of simulation in such a data-driven fashion puts extremely strict efficiency requirements on the analysis capabilities that are used to identify the transitions in the workflow. In this paper we build upon earlier work on trigger detection using sublinear techniques to drive adaptive workflows. Here we propose a methodology to detect the time when sudden heat release occurs in simulations of turbulent combustion. Our proposed method provides an alternative metric that can be used along with our former metric to increase the robustness of trigger detection. We show the effectiveness of our metric empirically for predicting heat release for two use cases. 1. INTRODUCTION Concurrent analysis frameworks have been developed to process raw simulation output as it is computed, decoupling the analysis from I/O. Operations sharing primary resources of the simulation are considered in situ, while in transit processing involves asynchronous data transfers to secondary ∗ This work was funded by the Laboratory Directed Research and Development (LDRD) program of Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Copyright 20XX ACM X-XXXXX-XX-X/XX/XX ...$15.00. resources. Both in situ [15, 6, 8] and in transit [14, 1, 2] workflows perform analyses as the simulation is run and produce results, which are typically much smaller than the raw data, mitigating the effects of limited disk bandwidth and capacity. Concurrent analyses are often performed at prespecified frequencies, which is viable for analyses that are not too expensive – in terms of runtime (with respect to a simulation time step), memory footprint, and output size. However, for many other analyses that are resource intense, prescribed frequencies will not suffice because the scientific phenomenon being simulated typically does not behave linearly (e.g., combustion, climate, astrophysics). 
When the prescribed I/O or analysis frequency is high enough to capture the events of interest, the costs incurred are too great. On the other hand, a cost-effective, lower analysis frequency may miss the scientific events that the simulation is intended to capture. An alternative approach is to perform expensive analyses (denoted Ae ) and I/O in an adaptive fashion, driven by the data itself. A domain-agnostic approach is presented in [12, 11], based on entropy of information in the data. Our previous work in [3] provides a framework for making data-driven control-flow decisions that can leverage the scientists' intuitions, even when the algorithms to capture those intuitions would otherwise be too expensive to compute. Our methodology involves a user-defined indicator function that is computed and measured in situ at a relatively regular and high frequency (i.e., greater than the frequency with which the I/O or Ae would be prescribed). Along with the indicator, the application scientist defines an associated trigger, a function that returns a boolean value indicating that the indicator has met some property, for example a threshold value. While our methodology is intuitive and conceptually quite simple, the challenges lie in defining indicators and triggers that capture the appropriate scientific information, while remaining cost efficient in terms of runtime, memory footprint, and I/O requirements, so that they can be deployed at the high frequency that is required.

In [3] we show that chemical explosive mode analysis, denoted CEMA, can be used to devise a noise-tolerant indicator for heat release (the quantity of interest that the scientists would like to capture), thus making it a good candidate to drive adaptive workflows. However, exhaustive computation of CEMA values dominates the simulation time. To overcome this bottleneck, we proposed a quantile sampling approach with provable error/confidence bounds. These bounds depend only on the number of samples and are independent of the problem size. We also designed an indicator, referred to as the P-indicator, based on the quantile sampling approach. Our experiments on homogeneous charge compression ignition (HCCI) and reactivity controlled compression ignition (RCCI) simulations showed that the proposed method can detect rapid increases in heat release and is computationally efficient.

In this paper, we propose an alternative indicator, referred to as the C-indicator, and an associated trigger function. The proposed technique is based on the coefficient of variation and is similar in essence to our earlier technique that used quantiles, in that it tries to detect shrinking in the range of the upper percentiles of the distribution of CEMA values. The coefficient of variation among the top quantiles decreases as the range of the values shrinks, and can serve as a detector for the subsequent heat release that the scientists wish to capture. The new technique based on the coefficient of variation is not proposed as a replacement for our former metric. Rather, we believe that a set of trigger detection mechanisms, when used collectively, can provide more robust detections, as collectively they will be less prone to false positives. Our experiments on HCCI and RCCI simulations show that the proposed method can efficiently detect rapid increases in heat release.

2. COMBUSTION AS A USE CASE FOR TRIGGER DETECTION

We demonstrate our approach applied to a combustion use case, using S3D [7], a direct numerical simulation of combustion in turbulence. The combustion simulations in our use case pertain to a class of internal combustion (IC) engine concepts. We are interested in two specific techniques called homogeneous-charge compression ignition (HCCI) [4] and reactivity-controlled compression ignition (RCCI) [9, 5]. In both cases, heat release starts in the form of small kernels at arbitrary locations. Eventually, multiple kernels ignite as the overall heat release reaches a global maximum and subsequently declines. Since these simulations are computation and storage intense, we want to run the simulation at a coarser grid resolution and save data less frequently during the early build-up phase. When the heat release events occur, we want to run the simulation at the finest grid granularity possible, and store the data as frequently as possible. Therefore, it is imperative to be able to predict the start of the heat release event using an indicator and trigger that serve to inform the application to adjust its grid resolution and I/O frequency accordingly; see Figure 1.

Figure 1: The minimum (blue), maximum (red) and mean (black) heat release values for each time step in the simulation. Early in the simulation, we want to run at a coarser grid and save data less frequently. When heat release occurs, we want to run at the finest grid and save the data as frequently as possible. The vertical lines define a range of time steps within which we would like to make this transition, as identified by a domain expert.

3. DESIGNING A NOISE-RESISTANT INDICATOR AND TRIGGER

In this section we describe the design of an indicator and trigger for heat release for our combustion use case. To provide context, we begin by discussing the intuitions that informed our design.
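Throughout this section, the indicator and trigger play the roles sketched in the introduction: a cheap scalar summary computed in situ every few time steps, and a boolean test on its history. The following is a minimal sketch of that control flow; the synthetic indicator values and the commented hook into the solver are hypothetical stand-ins (not code from S3D or from [3]), and the threshold-crossing test anticipates the trigger defined in Section 3.3.

```python
import numpy as np

def crosses_from_below(history, tau):
    """Trigger: True when the indicator crosses the threshold tau from below."""
    return len(history) >= 2 and history[-2] < tau <= history[-1]

# Toy driver: synthetic indicator values stand in for an in situ CEMA summary.
rng = np.random.default_rng(0)
tau = 0.05
history = []
for step in range(200):
    # advance_simulation_one_step()  # hypothetical hook into the solver
    value = 0.0005 * step + 0.003 * rng.standard_normal()  # slowly rising indicator
    history.append(value)
    if crosses_from_below(history, tau):
        print(f"trigger fired at step {step}: refine grid, raise I/O frequency")
        break
```

In an adaptive workflow the two actions taken when the trigger fires (refining the grid and raising the I/O frequency) are exactly the transitions marked by the vertical lines in Figure 1.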
3.1 Chemical Explosive Mode Analysis

Chemical explosive mode analysis (CEMA) is known to be a reliable technique to predict incipient heat release. Here we provide a brief description and refer to [10, 13] for details. The conservation equations for reacting species can be written as Dy/Dt = ω(y) + s(y). The vector y represents temperature and reacting species mass fractions, ω is the reaction source term and s is the mixing term. Then, the Jacobian is Jω + Js, where Jω = ∂ω(y)/∂y and Js = ∂s(y)/∂y. The eigen-decomposition of the chemical Jacobian Jω can be used to infer chemical properties of the mixture. Let λe be the eigenvalue with the largest real part. λe is defined as a chemical explosive mode (CEM) if Re(λe) > 0, which indicates that the point will undergo ignition. If it has undergone ignition, we have Re(λe) < 0. The presence of a CEM indicates the propensity of a mixture to ignite.

Our CEMA-based indicator is based on global trends of CEMA over time. Consider Figure 2, which provides a summary of the trends of CEMA values across all time steps in a simulation. At timestep t, let C(t) be the sorted (in nondecreasing order) array of CEMA values on the underlying mesh. For α ∈ (0, 1], the α-percentile is the entry C(t)_{⌈αN⌉}. More specifically, it is the value in C(t) that is greater than at least ⌈αN⌉ values in C(t). We denote this value by pα(t).

Figure 2: Percentile plot of CEMA values.
The blue curves correspond to p0.01,0.02,...,0.1, the green curves to p0.2,0.3,...,0.9 and the red curves to p0.91,0.92,...,1. We notice that as the simulation progresses, the distance between the higher percentiles (red curves) decreases and then suddenly increases. This is illustrated in the plot by the spread in the red curves that occurs between the dashed lines defined by the domain expert that indicate the desired transition phase. Our goal is to capture this empirical observation with a metric, and use it to predict heat release.

We introduce an indicator function, the C-indicator, that quantifies the distribution of the top quantiles of CEMA values over time. The C-indicator is equal to the coefficient of variation (COV) of the top percentiles of CEMA values, i.e. the ratio of their standard deviation to their mean. Formally, quantiles are defined as values taken at regular intervals from the inverse of the cumulative distribution function of a random variable. For a given data set, quantiles are used to divide the data into equal sized sets after sorting, and the quantiles are the values on the boundary between consecutive subsets. A special case is dividing into 100 equal groups, when we can refer to quantiles as percentiles. This paper focuses on percentiles with numbers in the [0, 1] range (although all techniques presented here can be generalized for any quantiles). For example, the 0.5 percentile will refer to the median of the data set.

We define our indicator using two parameters: α and β. We want to detect whether the range covered by the top quantiles shrinks; α represents the lower end of the top percentiles, whereas β represents the top percentile considered. Therefore, the range of top percentiles we measure is [pα(t), pβ(t)]. In our indicator, we choose α < β (typically in the range [0.90, 0.99]). We measure the spread at time t by the COV of the p_s(t) values for α ≤ s ≤ β, which results in the following C-indicator:

C_{\alpha,\beta}(t) = \sqrt{ \frac{1}{N-1} \sum_{s=\alpha}^{\beta} \left( \frac{p_s}{\mu} - 1 \right)^{2} }    (1)

where

\mu = \frac{1}{N} \sum_{s=\alpha}^{\beta} p_s    (2)

and N is the number of percentile values considered between α and β. For instance, if α = 0.90 and β = 0.99, we would obtain N = 10.

Figure 3 illustrates percentile plots for heat release (top row) and CEMA (middle row). In the percentile plots, the lowest blue curve and the highest red curve correspond to the 1 and 100 percentiles (p0.01 and p1), respectively. The blue curves correspond to p0.01,0.02,...,0.1, the green curves to p0.2,0.3,...,0.9 and the red curves to p0.91,0.92,...,1. The C-indicator evaluated for α = 0.92 and β = 0.99 is shown in the bottom row of Figure 3. Results are generated for four test cases described in Table 1. The vertical dotted lines were identified by a domain expert who, via examination of heat release and CEMA percentile plots, visually located the range of time steps in the simulation where the mesh resolution and I/O frequency should be increased. We refer to this range of time steps as the “true” trigger range we wish to identify with the C-indicator and trigger functions. Note, for the RCCI cases, there are two ignition ranges. To simplify the following exposition, we focus on the second rise in the heat release rate profiles, as this is the ignition stage of interest to the scientists. However, we note that our approach is robust in identifying the first ignition stage as well.
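As a concrete illustration of Eqs. (1)-(2), the sketch below computes the C-indicator from a flat array of CEMA values for one time step. The optional sampling path mirrors the quantile-sampling idea summarized in Section 4 but is only our stand-in; the synthetic field and all function names are ours, not the authors' code.

```python
import numpy as np

def c_indicator(cema_values, alpha=0.92, beta=0.99, step=0.01,
                n_samples=None, rng=None):
    """Coefficient of variation of the top percentiles of CEMA values (Eqs. 1-2).

    If n_samples is given, percentiles are estimated from a uniform random
    sample of the field instead of the full array (the sublinear variant).
    """
    x = np.asarray(cema_values).ravel()
    if n_samples is not None:
        if rng is None:
            rng = np.random.default_rng()
        x = x[rng.integers(0, x.size, size=n_samples)]
    n_levels = int(round((beta - alpha) / step)) + 1   # N in Eq. (2)
    levels = alpha + step * np.arange(n_levels)        # s = alpha, ..., beta
    p = np.percentile(x, 100.0 * levels)               # p_s(t)
    mu = p.mean()                                      # Eq. (2)
    return float(np.sqrt(np.sum((p / mu - 1.0) ** 2) / (n_levels - 1)))  # Eq. (1)

# Example on a synthetic CEMA field for one time step.
field = np.random.default_rng(1).normal(5000.0, 2000.0, size=451_584)
print(c_indicator(field))                        # exact percentiles
print(c_indicator(field, n_samples=20 * 1600))   # roughly 20 samples per process on 1600 processes
```

The trigger of Section 3.3 is then just a threshold test on the time series of these values, as in the loop sketched after Section 2.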
Heat Release Rate Percentiles Designing a Noise-Resistant CEMA-Based Indicator 10 8 6 4 2 0 2 CEMA Percentiles 3.2 HCCI, T=15 ×10 9 10 8 6 4 2 0 0 100 200 300 400 ×10 4 2 HCCI, T=40 ×10 9 4 RCCI, case 6 ×10 10 2 3 1 1 0.5 0 0 100 200 300 400 ×10 4 4 0 0 20 40 60 80 100 ×10 4 4 1.5 3 3 1 1 2 2 0.5 0.5 1 1 0 0 0 0 100 200 300 400 0 100 200 300 400 0 20 40 60 80 100 0.2 0.2 0.2 0.1 0.1 0.1 0.1 0 200 Time Step 400 0 0 200 400 Time Step 0 0 20 40 60 80 100 ×10 4 0 0 20 40 60 80 100 0.2 0 RCCI, case 7 ×10 10 1.5 2 1.5 Cα=0.92, β=0.99 Our empirical observation is consistent with the underlying physics, and we refer to [3] for a detailed discussion on the underlying physics. 0 50 100 0 0 Time Step 50 100 Time Step Figure 3: Plots showing the (top row) percentiles of the heat release rate, (middle row) percentiles of CEMA, and (bottom row) the C-indicator, as indicated. The blue curves correspond to p0.01,0.02,...,0.1 , the green curves to p0.2,0.3,...,0.9 and the red curves to p0.91,0.92,...,1 . The C-indicator shown is evaluated for α0 = 0.92 and β = 0.99. The vertical dotted lines crossing the images indicate a window of acceptable “true” trigger time steps, as identified by a domain expert. For the RCCI cases, the trigger time ranges are based on the High Temperature Heat Release (HTHR), i.e., the second peak in the Heat Release Rate (HRR) profiles. 3.3 Defining a Trigger In addition to defining a noise-resistant indicator function, we also need to define a trigger function that returns a boolean value, capturing whether a property of the indicator has been met. Looking at Figure 3, we notice that across all experiments from Table 1, the C-indicator is increasing during the true trigger time step windows. Therefore, we seek to find a value τC ∈ (0, 1), such that Cα,β (t) crosses τC from below, as the simulation time t progresses. Figure 4 plots the trigger time steps as a function of τC for Cα=0.92,β=0.99 (t). This plot shows that the viable range of τC is [0.01, 0.05] for the four use cases described in Table 1. The horizontal dashed lines indicate the true trigger range identified by our domain expert. We consider those values of τC that fall within the horizontal dashed lines to be good τC values for our trigger, with those values of τC falling below the dashed lines still considered viable. We note the plots for other values of α and β look similar and have been omitted due to lack of space in this text. 4. COMPUTING INDICATORS AND TRIGGERS EFFICIENTLY: A SUBLINEAR APPROACH The previous section showed that C-indicator and trigger are robust to noise fluctuations and act as a precursor to rapid heat release in combustion simulations. In [3] we show the computational cost of computing CEMA-based indicators can be prohibitive for large-scale simulations (up to 60 times the cost of a simulation time-step depending on the simulation parameters). To mitigate the cost, we introduced a quantile sampling approach that comes with provable bounds on accuracy as a function of the number of samples. Most importantly, the required number of sam- Table 1: Four Combustion Use Cases analyzed in this study. The “true” trigger time ranges are estimated based on 95 − 100th percentiles of the heat release rate. The computed time ranges were evaluated using our quantile sampling approach. The C-trigger and P-trigger were computed over 50 realizations of experiments with 20 samples per process. 
Problem Instance HCCI, T=15 HCCI, T=40 RCCI, case 6 RCCI, case 7 Number of Grid Points 451,584 451,584 2,560K 2,560K–10,240K Number of Species 28 28 116 116 “True” Trigger Time Range 250-315 175-225 38-50 42-58 C-trigger Detection 235 - 265 180 - 220 34 - 38 32.5 - 35.5 P-trigger Detection [3] 250 - 262 213 - 220 28 - 45 35 - 50 Total Processes 1600 1600 6400 6400 α = 0.92, β = 0.99 400 HCCI, T=15 HCCI, T=40 RCCI, case 6 RCCI, case 7 350 Trigger Time Step 300 250 200 150 100 50 0 0 0.05 0.1 0.15 τC Figure 4: This figure plots the trigger time steps as a function of τC for Cα=0.92,β=0.99 (t). There is a range of viable values of τC ∈ [0.0, 0.05] that predict early stage heat release. ples for a specified accuracy is independent of the size of the problem, hence our sampling based algorithms offer excellent scalability. We summarize the quantile sampling method here and refer the reader to [3] for analytical and empirical proofs of convergence. Consider an array A ∈ RN , in sorted order. Our aim is to estimate the α-percentile of A. (We use pα to denote the percentiles.) Note that this is exactly the entry AdαN e . Here is a simple sampling procedure. 1. Sample k independent, uniform indices r1 , r2 , . . . , rk in {1, 2, . . . , N }. b the sorted array [A(r1 ), A(r2 ), . . . , A(rk )]. Denote by A b as the estimate, pbα . 2. Output the α-percentile of A We performed a series of experiments examining the variation in the trigger time steps as a function of the number of samples used per process. The data for Figure 5 was generated via 50 realizations of the C-indicator with α = 0.92 and β = 0.99, with τC drawn from [0.01, 0.05]. The horizontal dashed lines in this figure define the range of true trigger time steps within which we would like to make the workflow transition (as identified by a domain expert). This plot demonstrates that, even across the range of τC values, with a small number of samples per process, our quantile sampling approach can accurately estimate the true trigger time steps as defined by the domain expert. The accuracy of our quantile-based sampling predictions is also shown in Table 1, which lists the the triggers predicted by the C-indicator and Figure 5: Plots illustrating the variability of the trigger time steps predicted by the C-indicator and trigger as a function of the number of samples per process. The data for these plots was generated via 50 realizations of the C-indicator with α = 0.92 and β = 0.99, and τC drawn from [0.01, 0.05].The horizontal dashed lines define the range of time steps within which we would like to make the workflow transition (as identified by a domain expert. P-indicator that we used in our previous work [3]. The Pp −p indicator is computed as Pα,β,γ = pαβ −pγγ , for α = 0.94, β = 0.98, γ = 0.01, and the trigger threshold chosen as [0.725, 0.885]. The results show both methods are accurate. We are currently investigating how we can improve our results by using two metrics concurrently. Note that using both metrics will not induce an additional burden, since the two metrics can be computed from the same set of samples. 5. CONCLUSION We propose a new indicator and trigger for making datadriven control-flow decisions in situ. Using a provably robust sampling-based approach, of CEMA values (which, when computed in full, can cost up to 60 times a simulation time step). Our experiments show that our proposed indicator, based on the coefficient of variation, can efficiently predict rapid increases in heat release. 
We believe the new metric can be deployed collectively with a previously defined mechanism, to provide more robust detections, since together they will be less prone to false positives. We note that using both metrics will not induce an additional burden, since the two metrics can be computed from the same set of samples. Our future work aims to explore deployment of both metrics jointly in production simulation runs. 6. REFERENCES [1] H. Abbasi, G. Eisenhauer, M. Wolf, K. Schwan, and S. Klasky. Just In Time: Adding Value to The IO Pipelines of High Performance Applications with JITStaging. In Proc. of 20th International Symposium on High Performance Distributed Computing (HPDC’11), June 2011. [2] J. C. Bennett, H. Abbasi, P.-T. Bremer, R. Grout, A. Gyulassy, T. Jin, S. Klasky, H. Kolla, M. Parashar, V. Pascucci, P. Pebay, D. Thompson, H. Yu, F. Zhang, and J. Chen. Combining in-situ and in-transit processing to enable extreme-scale scientific analysis. In J. Hollingsworth, editor, SC ’12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, Salt Lake Convention Center, Salt Lake City, UT, USA, November 10–16, 2012, pages 49:1–49:9, pub-IEEE:adr, 2012. IEEE Computer Society Press. [3] J. C. Bennett, A. Bhagatwala, J. H. Chen, C. Seshadhri, A. Pinar, and M. Salloum. Trigger detection for adaptive scientific workflows using percentile sampling. Technical report, 2015. arXiv:1506.08258. [4] A. Bhagatwala, J. H. Chen, and T. Lu. Direct numerical simulations of SACI/HCCI with ethanol. Comb. Flame, 161:1826–1841, 2014. [5] A. Bhagatwala, R. Sankaran, S. Kokjohn, and J. H. Chen. Numerical investigation of spontaneous flame propagation under RCCI conditions. Comb. Flame, Under review. [6] J.-M. F. Brad Whitlock and J. S. Meredith. Parallel In Situ Coupling of Simulation with a Fully Featured Visualization System. In Proc. of 11th Eurographics Symposium on Parallel Graphics and Visualization (EGPGV’11), April 2011. [7] J. H. Chen, A. Choudhary, B. de Supinski, M. DeVries, E. R. Hawkes, S. Klasky, W. K. Liao, K. L. Ma, J. Mellor-Crummey, N. Podhorski, R. Sankaran, S. Shende, and C. S. Yoo. Terascale direct numerical simulations of turbulent combustion using S3D. Computational Science and Discovery, 2:1–31, 2009. [8] N. Fabian, K. Moreland, D. Thompson, A. Bauer, P. Marion, B. Gevecik, M. Rasquin, and K. Jansen. The paraview coprocessing library: A scalable, general purpose in situ visualization library. In Proc. of IEEE Symposium on Large Data Analysis and Visualization (LDAV), pages 89 –96, October 2011. [9] S. L. Kokjohn, R. M. Hanson, D. A. Splitter, and R. D. Reitz. Fuel reactivity controlled compression ignition (rcci): A pathway to controlled high-efficiency clean combustion. Int. J. Engine. Res., 12:209–226, 2011. [10] T. Lu, C. S. Yoo, J. H. Chen, and C. K. Law. Three-dimensional direct numerical simulation of a turbulent lifted hydrogen flame in heated coflow: a chemical explosive mode analysis. J. Fluid Mech., 652:45–64, 2010. [11] K. Myers, E. Lawrence, M. Fugate, J. Woodring, J. Wendelberger, and J. Ahrens. An in situ approach for approximating complex computer simulations and identifying important time steps. Technical report, 2014. arXiv:1409.0909v1. [12] B. Nouanesengsy, J. Woodring, J. Patchett, K. Myers, and J. Ahrens. ADR visualization: A generalized framework for ranking large-scale scientific data using analysis-driven refinement. In Large Data Analysis and Visualization (LDAV), 2014 IEEE 4th Symposium on, pages 43–50. 
IEEE, 2014. [13] R. Shan, C. S. Yoo, J. H. Chen, and T. Lu. Computational diagnostics for n-heptane flames with chemical explosive mode analysis. Comb. Flame, 159:3119–3127, 2012. [14] V. Vishwanath, M. Hereld, and M. Papka. Toward simulation-time data analysis and i/o acceleration on leadership-class systems. In Proc. of IEEE Symposium on Large Data Analysis and Visualization (LDAV), October 2011. [15] H. Yu, C. Wang, R. W. Grout, J. H. Chen, and K.-L. Ma. In-situ visualization for large-scale combustion simulations. IEEE Computer Graphics and Applications, 30:45–57, 2010.
Estimating operator norms using covering nets arXiv:1509.05065v1 [quant-ph] 16 Sep 2015 Fernando G.S.L. Brandão ∗ Aram W. Harrow † Abstract We present several polynomial- and quasipolynomial-time approximation schemes for a large class of generalized operator norms. Special cases include the 2 → q norm of matrices for q > 2, the support function of the set of separable quantum states, finding the least noisy output of entanglement-breaking quantum channels, and approximating the injective tensor norm for a map between two Banach spaces whose factorization norm through `n 1 is bounded. These reproduce and in some cases improve upon the performance of previous algorithms by BrandãoChristandl-Yard [BCY11] and followup work, which were based on the Sum-of-Squares hierarchy and whose analysis used techniques from quantum information such as the monogamy principle of entanglement. Our algorithms, by contrast, are based on brute force enumeration over carefully chosen covering nets. These have the advantage of using less memory, having much simpler proofs and giving new geometric insights into the problem. Net-based algorithms for similar problems were also presented by Shi-Wu [SW12] and Barak-Kelner-Steurer [BKS13], but in each case with a run-time that is exponential in the rank of some matrix. We achieve polynomial or quasipolynomial runtimes by using the much smaller nets that exist in `1 spaces. This principle has been used in learning theory, where it is known as Maurey’s empirical method. 1 Introduction Given a n × m matrix M , its operator norm is given by kM k = maxx∈Cm kM xk2 /kxk2 , with kxk2 = P 1 ( i |xi |2 ) 2 the Euclidean norm. The operator norm is also given by the square root of the largest eigenvalue of M † M and thus can be efficiently computed. There are numerous ways of generalizing the operator norm, e.g. by considering tensors instead of matrices, by changing the Euclidean norm to another norm, or by considering other vector spaces instead of Cm . Although such generalizations are very useful in applications, they can be substantially harder to compute than the basic operator norm, and in many cases we still do not have a good grasp of the computational complexity of computing, or even only approximating, them. In some cases quasipolynomial algorithms are known, usually based on semidefinite programming (SDP) hierarchies, and in other cases quasipolynomial hardness results are known. These are partially overlapping so that some problems have sharp bounds on their complexity and for others there are exponential gaps between the best upper and lower bounds. As we will discuss below, the complexity of these problems is not only a basic question in the theory of algorithms, but also is closely related to the unique games conjecture and the power of multiprover quantum proof systems. In this paper we give new algorithms for several variants of the basic operator norm of interest in quantum information theory, theoretical computer science, and the theory of Banach spaces. Unlike most past work which was based on SDP hierarchies, our algorithms simply enumerate over a carefully chosen net of points. This yields run-times that often match the SDP hierarchies and sometimes improve upon them. Besides improved performance, our algorithms have the advantage of being based on simple geometric properties of spaces we are optimizing over, which may help explain which types of norms are amenable to quasipolynomial optimization. 
In particular we consider the following four optimization problems in this work: Optimization over Separable States: An important problem in quantum information theory is to optimize a linear function over the set of separable (i.e. non-entangled) states, defined as bipartite ∗ (a) Quantum Architectures and Computation Group, Microsoft Research, Redmond, WA and (b) Department of Computer Science, University College London WC1E 6BT. email: [email protected] † Center for Theoretical Physics, Massachusetts Institute of Technology. email: [email protected] 1 density matrices that can be written as a convex combination of tensor product states. This problem is closely related to the task of determining if a given quantum state is entangled or not (called the quantum separability problem) and to the computation of several other quantities of interest in quantum information, including the optimal acceptance probability of quantum Merlin-Arthur games with unentangled proofs, optimal entanglement witnesses, mean-field ground-state energies, and measures of entanglement; see [HM13] for a review of many of these connections. Given an operator M acting on the bipartite vector space Cd1 ⊗ Cd2 the support function of M on the set of separable states is given by hSep(d1 ,d2 ) (M ) := max α∈Dd1 ,β∈Dd2 tr[M (α ⊗ β)], (1) with Dd the set of density matrices on Cd (d × d positive semidefinite matrices of unit trace). Our goal is to n approximate hSep(d1 ,d2 ) (M ). For M ∈ L(Cd ), define hSepn (d) (M ) = max α1 ,...,αn ∈Dd tr[M (α1 ⊗ · · · ⊗ αn )]. (2) The first result on the complexity of computing hSep(d1 ,d2 ) was negative: Gurvits showed that the problem is NP-hard for sufficiently small additive error (inverse polynomial in d1 d2 ) [Gur03]. Then [HM13] showed there is no exp(O(log2−Ω(1) (d1 d2 ))) time algorithm even for a constant error additive approximation of the quantity, assuming the exponential time hypothesis (ETH1 ). This left open the question whether there are quasipolynomial-time algorithms (i.e. of time exp(polylog(d1 , d2 ))). In [BCY11] it was shown that this is indeed the case at least for a class of linear functions: namely those corresponding to quantum measurements that can be implemented by local operations and one-directional classical communication (one-way LOCC or 1-LOCC). For thisparticular class of measurements the problem can be solved with error δ in time exp O δ −2 log(d1 ) log(d2 ) . The proof was based on showing that the hierarchy of semidefinite programs for the problem introduced in 2004 by Doherty, Parrilo and Spedalieri [DPS04] (which is an application of the more general Sum-of-Squares (SoS) hierarchy, also known as the Lasserre hierarchy, to the separability problem) converges quickly. The approach of [BCY11] was to use ideas from quantum information theory (monogamy of entanglement, entanglement measures, hypothesis testing, etc) to find good bounds on the quality of the SoS hierarchy. Since then several follow-up work gave different proofs of the result, but always using quantum information-theoretic ideas [BH13, LW14, BC11, Yan06]. A corollary of [BCY11] and the other results on 1-LOCC M is that hSep(d1 ,d2 ) (M ) can also be approxi1 mated for a different class of operators M : those with small Hilbert-Schmidt norm kM kHS := tr(M † M ) 2 . Ref. [BaCY11] showed that also in this case there is a quasipolynomial-time algorithm for estimating Eq. (1). 
An interesting subsequent development was the work of Shi and Wu [SW12] (see also [BKS13]), who gave a different algorithm for the problem based on enumerating over nets. It was left as an open question whether a similar approach could be given for the case of one-way LOCC measurements (which is more relevant both physically 2 and in terms of applications; see again [HM13]). Estimating the Output Purity of Quantum Channels: Another important optimization problem in quantum information theory consists of determining how much noise a quantum channel introduces. A quantum channel models a general physical evolution and is given mathematically by a completely positive trace preserving map Λ : Dd1 → Dd2 . One way to measure the level of noise of the channel is to compute the maximum over states of the output Schatten-α norm, for a given α > 1: kΛk1→α := max kΛ(ρ)kα , ρ∈Dd1 1 The (3) ETH is the conjecture that 3-SAT instances of length n require time 2Ω(n) to solve. This is a plausible conjecture for deterministic, randomized or quantum computation, and each version yields a corresponding lower bound on the complexity of estimating hSep . 2 The one-way LOCC norm gives the optimal distinguishably of two multipartite quantum states when only local measurements can be done, and the parties can coordinate by one-directional communication. See [MWW09] for a discussion of its power. 2 −1 1 with kZkα = tr(|Z|α ) α . The quantity kΛk1→α varies from one, for an ideal channel, to d2−1+α for the depolarizing channel mapping all states to the maximally mixed state. This optimization problem has been extensively studied, in particular because for α ≈ 1 it is related to the Holevo capacity of the channel, whose regularization gives the classical capacity of the channel (i.e. how many reliable bits can be transmitted per use of the channel). It was shown in [HM13] that, assuming ETH, there is no algorithm that runs in time exp(O(log2−Ω(1) (d))) and can decide if kΛk1→α is one or smaller than δ (for any fixed δ > 0 and α > 1) for a general quantum channel Λ : Dd → Dd . On the algorithmic side, nothing better than exhaustive search over the input space (taking time exp(Ω(d1 ))) is known. An interesting subclass of quantum channels, lying somewhere between classical channels and fully quantum channels, are the so-called entanglement-breaking channels, which are the channels that cannot be used to distribute entanglement. Any entanglement-breaking quantum channel Λ can be written as [HSR04]: X Λ(ρ) := tr(Xi ρ)Yi , (4) i P with Yi ≥ 0, tr(Yi ) = 1 quantum states and Xi ≥ 0, and i Xi = I a quantum measurement. Because of their simpler form, one can expect that there are more efficient algorithms for computing the maximum output norm of entanglement-breaking channels. However until now no algorithm better than exhaustive search was known either (apart from the case α = ∞ where the Sum-of-Squares hierarchy can be used and analyzed using [BCY11]). Computing p → q Norms: Given a d1 × d2 matrix A we define its p → q norm by kAkp→q kAxkq , := max x∈Cd2 kxkp kxkp := d2 X !1/p p |x| (5) i=1 Such norms have many different applications, such as in hypercontractive inequalities and determining if a graph is a small-set expander [BBH+ 12], to oblivious routing [BV11] and robust optimization [Ste05]. However we do not have a complete understanding of the complexity of computing them. For 2 < q ≤ p or q ≤ p < 2, it is NP-hard to approximate them to any constant factor [BV11]. 
In the regime q > p (the one relevant for hypercontractivity and small-set expansion) the only known hardness result is that to obtain any (multiplicative) constant-factor approximation for the 2 → 4 norm of a n × n matrix is as hard as solving 3-SAT with Õ(log2 (n)) variables [BBH+ 12]. On the algorithmic side, besides the 2 → 2 and 2 → ∞ norms being exactly computable in polynomial time, Ref. [BBH+ 12] showed that one can use the Sum-of-Squares hierarchy to compute in time exp(O(log2 (n)ε−2 )) a number X s.t. kAk42→4 ≤ X ≤ kAk42→4 + εkAk22→2 kAk22→∞ . (6) Whether similar approximations can be obtained for 2 → q norms for other values of q was left as an open problem. Computing the Operator Norm between Banach Spaces: These problems are all special cases of the following general question. Given a map T : A → B between Banach spaces A, B, can we approximately compute the following operator norm? kT kA→B := sup x6=0 1.1 kT xkB kxkA (7) Summary of Results In this paper we give new algorithmic results for the four problems discussed above. They can be summarized as follows. 3 Separable-state optimization by covering nets: We give a different algorithm for optimizing linear functions over separable states (corresponding to one-way LOCC measurements) based on enumerating over covering nets (see Algorithm 1). The complexity of the algorithm matches the time complexity of [BCY11] (see Theorem 2). The proof does not use information theory in any way, nor the SoS hierarchy. Instead the main technical tool is a matrix version of the Hoeffding bound (see Lemma 3). It gives new geometric insight into the problem and gives arguably the simplest and most self-contained proof of the result to date. It also gives an explicit rounding (as does [BH13] but in contrast to [BCY11, LW14, BC11, Yan06]). For particular subclasses of one-way LOCC measurements our algorithm improves the run time of [BCY11]. One example is the case where Bob’s measurement outcomes are low rank, in which we find O(ε−2 ) a poly(d2 )d1 -time algorithm. Generalization to arbitrary operator norms: Computing hSep is mathematically equivalent to computing the 1 → ∞ norm of a quantum channel, or more precisely the S1 → S∞ norm where Sα denotes the Schatten-α norm. This perspective will help us generalize the scope of our algorithm, to estimating the S1 → B norm for a general Banach space B. The analysis of this algorithm is based on tools from asymptotic geometric analysis, and we will see that its efficiency depends on properties of B known as the Rademacher type and the modulus of uniform smoothness. Besides generalizing the scope of the algorithm, this also gives more of a geometric explanation of its performance. We focus on two special cases of the problem: 1. maximum output norm: A particular case of the generalization is the problem of computing the maximum output purity of a quantum entanglement-breaking channel (measured in the Schatten-α O(ε−2 ) norms). We prove that for any α > 1 one can compute kΛk1→α in time poly(d2 )d1 to within additive error ε. (see Corollary 16). In contrast known hardness results [HM10, HM13] show that no such algorithm exists for general quantum channels (under the exponential time hypothesis). Previously the entanglement-breaking case was not known to be easier. 2. matrix 2 → q norms: As a second particular case of the general framework we extend the approximation of [BBH+ 12] to the 2 → 4 norm, given in Eq. (6), to the 2 → q norms for all q ≥ 2 (see Corollary 17). 
Operator norms between Banach spaces: This framework can be further generalized to estimating the operator norm of any linear map from A → B for Banach spaces A, B. Here we have replaced S1 with any finite-dimensional Banach space A whose norm can be computed efficiently. When applied to an operator Λ, the approximation error scales with the A → `n1 → B factorization norm, which is the minimum of kΛ1 k`n1 →B kΛ2 kA→`n1 such that Λ = Λ1 Λ2 . Factorization norms have applications to communication complexity [LS07, LS09], Banach space theory [Pie07], and machine learning [LRS+ 10], and here we argue that they help explain what makes the class of 1-LOCC measurements uniquely tractable for algorithms. In Section 4 we describe an algorithm for this general norm estimation problem, which to our knowledge previously had no efficient algorithms. This problem equivalently can be viewed as computing the injective tensor norm of two Banach spaces. We remark that this generalization is not completely for free, so we cannot simply derive all our other algorithms from this final one. In the case where A = S1d (which Pn corresponds to all of our specific applications), we are able to easily sparsify the input; i.e. given i=1 Ai ⊗ Bi , we can reduce n to be poly(d) without loss of generality. For general input spaces A we do not know if this is possible. Also, the case of hSep is much simpler, and so it may be helpful to read it first. 1.2 Comparison with prior work As discussed in the introduction, previous algorithms for separable-state optimization (and as a corollary, the 2 → 4 norm) have been obtained using SDP hierarchies. Our algorithms generally match or improve upon their parameters, but with the added requirement for the separable-state problem that the input be presented in a more structured form. 4 Several parallels between LP/SDP hierarchies and net-based algorithms have been developed for other problems. The first example of this was Ref. [DKLP06a] which gave both types of algorithms for the problem of maximizing a polynomial over the simplex, improving on a result implicit in the 1980 proof of the finite de Finetti theorem by Diaconis and Freedman [DF80]. Besides the separable-state approximation problem that we study, hierarchies and nets have been found to have similar performance in finding approximate Nash equilibria [LMM03, Har15] and in estimating the value of free two-prover games [AIM14, BH13]. The state-of-the-art run-time for solving Unique Games and Small Set Expansion have also been achieved using both hierarchies and covering-nets. These parallels are summarized in the table: Problem nets hierarchies/information theory maxx∈∆n p(x) approximate Nash free games unique games small-set expansion separable states [DKLP06a] [LMM03, ALSV13a] [AIM14] [ABS10] [ABS10] [SW12, BKS13], this work [DF80, DKLP06a] [Har15] [BH13, Cor 4] [BRS11] [BBH+ 12, §10] [BaCY11, BH13, BKS13, LW14, LS14] Table 1: We briefly describe these problems here. Full descriptions can be found in the references in the table. In maxx∈∆n p(x), ∆n is the n-dimensional probability simplex and p(x) is a low-degree polynomial. “Approximate Nash” refers to the problem of finding a pair of strategies in a two-player non-cooperative game for which no player can improve their welfare by more than ε. “Free games” refers to two-prover oneround proof systems where the questions asked are independent; the computational problem is to estimate the largest possible acceptance probability. 
“Unique games” describes instead proof systems with “unique” constraints; i.e. for each question pair and each answer given by one of the provers, there is exactly one correct answer possible for the other prover. Small-set expansion asks, given a graph G and parameters ε, δ > 0, whether all subsets with a δ fraction of the vertices have a ≥ 1 − ε fraction of edges leaving the set or whether there exists one with a ≤ ε fraction of edges leaving the set. Finally “separable states” refers to estimating hSep(n,n) as we will discuss elsewhere in the paper. It can also be though of as estimating maxkxk2 =1} p(x) for some low-degree polynomial p(x). While this paper focuses on the particular problems where we can improve upon the state-of-the-art algorithms, we hope to be a step towards more generally understanding the connections between these two methods. In almost every case above, the best covering-net algorithms achieve nearly the same complexity as the best analyses of SDP hierarchies. There are a few exceptions. Ref. [BBH+ 12] shows O(1) rounds of the SoS hierarchy can certify a small value for the Khot-Vishnoi integrality gap instances of the unique games problem, but we do not know how to achieve something similar using nets. A more general example is in [BKS13], which shows that the SoS hierarchy can approximate hSep (M ) in quasipolynomial time when M is entrywise nonnegative. The closest related paper to this work is [SW12] by Shi and Wu (as well as Appendix A of [BKS13]), which also used enumeration over ε-nets to approximate hSep . Here we explain their results in our language. Shi and Wu [SW12] have two algorithms: one when M has low Schmidt rank (i.e. factorizes as S1 → d1 ×d2 `r2 → S∞ for small r) and one where M has low rank, which we can interpret as a `r2 → S∞ factorization d1 ×d2 (here S∞ refers to the space of d1 × d2 -dimensional matrices with norm given by the largest singular value). These correspond to their Theorems 5 and 8 respectively. In both cases they construct ε-nets for the `r2 unit ball of size ε−O(r) (here, `2 could be replaced with any norm; see Lemma 9.5 of [LT91]). In both cases, their results can be improved to yield multiplicative approximations, using ideas from [BKS13]. Appendix A of Barak, Kelner and Steurer [BKS13] considers fully symmetric 4-index tensors M ∈ (Rn )⊗4 , so that when viewed as n2 × n2 matrices their rank and Schmidt rank are the same; call them r. Their algorithm is similar to that of [SW12], although they observe additionally (using different terminology) that for any self-adjoint operator T : A∗ → A (i.e. satisfying hT (X), Y i = hX, T (Y )i) the A∗ → `2 → A norm is 5 equal to the A∗ → A norm. This means that constructing an ε-net for B(`2 ) actually yields a multiplicative approximation of the A∗ → A norm (here the S1 → S∞ norm). Achieving a multiplicative approximation is stronger than what our algorithms achieve, but it is at the cost of a runtime that can be exponential in the input size even for a constant-factor approximation. By contrast, our algorithms yield nontrivial approximations in polynomial or quasipolynomial time. 1.3 Notation d d Define the sets of d × d real and complex semidefinite matrices by S+ , H+ respectively. For complex vector spaces V, W , define L(V, W ) to be the set of linear operators from V to W , L(V ) := L(V, V ) and H(V ), H+ (V ) to be respectively the Hermitian and positive-semidefinite operators on V . 
P For α ≥ 1 define the `α , Sα metrics on vectors and matrices respectively by kxk`α = ( i |xi |α )1/α and kXkSα = (tr |X|α )1/α . Denote the corresponding normed spaces by `dα , Sαd . Where it is clear from context we will refer to both norms by k · kα . We use k · k without subscript to denote the operator norm for matrices (i.e. kXk = kXkS∞ ) and the Euclidean norm for vectors (i.e. kxk = kxk`2 ). We use Õ(f (x)) to mean O(f (x) poly log(f (x))) and say that f (x) is “quasipolynomial” in x if f ≤ O(exp(poly log(x))). For a normed space V , define B(V ) = {v ∈ V : kvk ≤ 1}. Two important special cases are the probability simplex ∆n := B(`n1 ) ∩ Rn≥0 and the set of density matrices (also called “quantum states”) d = conv{vv † : v ∈ B(`d2 )}. Here v † is the conjugate transpose of v. For k a positive Dd := B(S1d ) ∩ H+ integer, define also   ei1 + . . . + eik : i1 , . . . , ik ∈ [n] ⊂ ∆n , (8) ∆n (k) := k where ei is the vector in Rn with a 1 in position i and zeros elsewhere. For a convex set K define the support function hK (x) := supy∈K hx, yi. For matrices h, i refers to the Hilbert-Schmidt inner product hX, Y i := tr(X † Y ). Banach spaces are normed vector spaces with an additional condition (completeness, i.e. convergence of Cauchy sequences) that is relevant only in the infinite dimensional case. In this work we will consider only finite-dimensional Banach spaces. 2 Warmup: algorithm for bipartite separability In this section we describePa simple version of our algorithm. It contains all the main ideas which we will n d1 d2 P later generalize. Let M = i=1 Xi ⊗ Yi , where Xi ∈ S+ , Yi ∈ S+ , i Xi ≤ I, and each Yi ≤ I. In quantum information language, M is a 1-LOCC measurement, meaning it can be implemented with local operations and one-way classical communication 3 . In later sections we will see that M can also be interpreted in a (mathematically) more natural way as a bounded map from S1 to S∞ . The goal of our algorithm is to approximate hSep(d1 ,d2 ) (M ), where we define the set of separable states as Sep(d1 , d2 ) := conv{α ⊗ β : α ∈ Dd1 , β ∈ Dd2 }. (9) There have been several recent proofs [BCY11, BH13, LW14], each based on quantum information theory, that SDP hierarchies can estimate hSep(d1 ,d2 ) (M ) to error εkM k in time exp(O(log2 (d)/ε2 )). Similar techniques also appeared in [BKS13, LS14] for different classes of operators M . The role of the 1LOCC conditions in these proofs was typically not completely obvious, and indeed it entered the proofs of [BCY11, BH13, LW14] in three different ways. We now give another interpretation of it that is arguably more geometrically natural. 3 Conventionally these have P i Xi = I, but our formulation is essentially equivalent. 6 Begin by observing that hSep(d1 ,d2 ) (M ) = = = max α∈Dd1 ,β∈Dd2 max α∈Dd1 ,β∈Dd2 tr[(α ⊗ β)M ], n X tr[αXi ] tr[βYi ] i=1 max kpkY . (10) p∈SX In the last step we have defined SX := {p ∈ ∆n : ∃α ∈ Dd1 , pi = tr[αXi ] ∀i ∈ [n]}, and kakY := n X ai Yi . (11) i=1 The basic algorithm is the following: Algorithm 1 (Basic algorithm for computing hSep (M ) for one-way LOCC M = P i Xi ⊗ Yi ). d2 d1 . , {Yi }ni=1 ⊂ H+ Input: {Xi }ni=1 ⊂ H+ Output: States α ∈ Dd1 and β ∈ Dd2 . 1. Enumerate over all p ∈ ∆n (k), with k = 9 ln(d2 )/δ 2 . (a) For each p, check (using Lemma 5) whether there exists q ∈ S with kp − qkY ≤ δ/2. (b) If so, compute kqkY . 2. Let q be such that the kqkY is the P maximum, and P let α ∈ Dd1 be the state for which qi = tr[Xi α]. Output this α and β satisfying tr[β i qi Yi ] = k i qi Yi k. 
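To make the net ∆n(k), the norm k·kY of Eq. (11), and the Schatten norms from the notation section concrete, the following short Python sketch (our illustration, not part of the paper; all names are ours) builds exactly the objects that Algorithm 1 enumerates. The SDP feasibility test of step 1(a) (Lemma 5) is omitted here.

```python
import numpy as np
from itertools import combinations_with_replacement

def schatten_norm(X, alpha):
    """Schatten-alpha norm ||X||_{S_alpha} = (tr |X|^alpha)^{1/alpha}."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.max(s) if np.isinf(alpha) else np.sum(s**alpha)**(1.0 / alpha)

def delta_n_k(n, k):
    """The net Delta_n(k) = {(e_{i1}+...+e_{ik})/k}; it has C(n+k-1, k) = n^{O(k)} points."""
    for idx in combinations_with_replacement(range(n), k):
        p = np.zeros(n)
        for i in idx:
            p[i] += 1.0 / k
        yield p

def norm_Y(a, Ys):
    """||a||_Y = || sum_i a_i Y_i ||  (operator norm), cf. Eq. (11)."""
    return np.linalg.norm(sum(ai * Yi for ai, Yi in zip(a, Ys)), 2)

# outer loop of Algorithm 1, without the feasibility check of step 1(a):
# best = max(norm_Y(p, Ys) for p in delta_n_k(n, k))
```

Enumerating ∆n(k) costs C(n+k−1, k) = n^{O(k)} iterations, which for k = O(log(d2)/δ²) is the source of the quasipolynomial run time in Theorem 2 below.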
The main result of this section is: Pn P Theorem 2. Let M = i=1 Xi ⊗ Yi be  such that i Xi ≤ I, Xi ≥ 0, 0 ≤ Yi ≤ I. Algorithm 1 runs in time poly(d1 , d2 , n) exp O δ −2 log(n) log(d2 ) and outputs α ∈ Dd1 and β ∈ Dd2 such that hSep (M ) ≥ tr[M (α ⊗ β)] ≥ hSep (M ) − δ, (12) For n = poly(d1 , d2 ) this is the same running time as found in [BCY11] (while for n  poly(d1 , d2 ) it is an improvement). Later in this section we will show how we can always modify the measurement to have n = poly(d1 , d2 ) only incurring in a small error. But before that, we now show that Theorem 2 follows easily from two simple lemmas. One of the lemmas is a consequence of the the well-known matrix Hoeffding bound. Lemma 3 (Matrix Hoeffding Bound [Tro10]). Suppose Z1 , . . . , Zk are independent random d × d Hermitian matrices satisfying E[Zi ] = 0 and kZi k ≤ λ. Then # " k kδ 2 1X Zi ≥ δ ≤ d · e− 8λ2 . (13) Pr k i=1 This is a special case of Theorem 2.8 from [Tro10]): −2 Our first lemma shows that one can restrict the optimization to a net of size nO(log(d2 )δ ) : Lemma 4. For any p ∈ ∆n there exists q ∈ ∆n (k) with r kp − qkY ≤ 7 9 ln(d2 ) . k (14) Proof. Sample i1 , . . . , ik according to p and set q = (ei1 +. . .+eik )/k. Define Ȳ := Observe that E[Zj ] = 0 and kZj k ≤ 1. Then Lemma 3 implies that k 1X kp − qkY = Zj ≤ δ. k j=1 with positive probability if k > 8 ln(d)/δ 2 . Setting δ = q ∈ ∆n (k) satisfying Eq. (14). Pn i=1 pi Yi and Zj = Ȳ −Yij . (15) p 9 ln(d2 )/k we find that there exists a choice of The second lemma shows that one can decide efficiently if an element of the net is a valid solution. A similar result is in [SW12]. Lemma 5. Given p ∈ ∆n and ε > 0, we can decide in time poly(d1 , d2 , n) whether the following set is nonempty SX ∩ {q : kp − qkY ≤ ε}. (16) Proof. Both are convex sets, defined by semidefinite constraints. So we can test for feasibility with a SDP of size poly(d1 , d2 , n). Indeed this is manifest for SX in Eq. (11), while {q : kp − qkY ≤ ε} can be written as ( ) X X {q : kp − qkY ≤ ε} = (q1 , . . . , qn ) : qi ≥ 0, −εI ≤ pi Yi − qi Yi ≤ εI . (17) i i We are ready to prove Theorem 2: Proof of Theorem 2. Whatever the output x is, x ≤ hSep (M )+δ/2. On the other hand, let q = arg maxq∈S kqkY , so that kqkY = hSep (M 0 ). By Lemma 4, there exists p ∈ ∆n (k) with kp − qkY ≤ δ/2. Thus our algorithm will output a value that is ≥ hSep (M ) − δ. We conclude that the algorithm achieves an additive error of δ 2 in time poly(d1 , d2 )nO(log(d2 )/δ ) . 2.1 Sparsification We now consider the case where n  poly(d1 , d2 ). It turns out that we can modify the algorithm such that its running time is polynomial in n by first sparsifying the number of local terms of the measurement. This results in the following theorem. Pn P Theorem 6. Let M = i=1 Xi ⊗ Y i be such that i Xi ≤ I, Xi ≥ 0, 0 ≤ Yi ≤ I. Algorithm 8 runs in time poly(n) exp O δ −2 log d1 log(d1 d2 ) and outputs α ∈ Dd1 and β ∈ Dd2 such that hSep (M ) ≥ tr(M (α ⊗ β)) ≥ hSep (M ) − δ, (18) The key element of the theorem is the following Lemma. Pn Lemma 7. Given a 1-LOCC measurement M = i=1 Xi ⊗ Yi and some ε > 0 there exists a 1-LOCC 0 P n measurement M 0 = j=1 Xj0 ⊗ Yj0 with kM − M 0 k ≤ ε and n0 ≤ poly(d1 , d2 )/ε2 . If the decomposition of M is explicitly given then M 0 and its decomposition can be found in time poly(d1 , d2 , n) using a randomized algorithm. The modified algorithm is the following: 8 Algorithm 8 (Algorithm for computing hSep (M ) for one-way LOCC M = P i Xi ⊗ Yi ). Input: {Xi }ni=1 , {Yi }ni=1 . 
Output: States α ∈ Dd1 and β ∈ Dd2 . 1. Use Lemma 7 to replace M = Pn i=1 Xi ⊗ Yi with M 0 = Pn0 i=1 Xi0 ⊗ Yi0 satisfying kM − M 0 k ≤ δ/2. 2. Run Algorithm 1 on M 0 . The proof of correctness is straightforward. Proof of Theorem 6. Whatever the output x is, x ≤ hSep (M 0 ) ≤ hSep (M ) + δ/2. On the other hand, let q = arg maxq∈S kqkY , so that kqkY = hSep (M 0 ). By Lemma 4, there exists p ∈ ∆n (k) with kp − qkY ≤ δ/2. Thus our algorithm will output a value that is ≥ hSep (M 0 ) − δ/2 ≥ hSep (M ) − δ. We conclude that the 2 algorithm achieves an additive error of δ in time poly(n)(d1 d2 )O(log(d2 )/δ ) . It remains only to prove Lemma 7. This requires a careful use of the matrix Hoeffding bound (Lemma ??). The details are in Appendix A. 2.2 Multipartite We now consider the generalization of the problem to the multipartite case. We consider measurements on a l-partite vector space Cd1 ⊗ . . . ⊗ Cdl . Following Li and Smith [LS14], we define P the class of fully one-way d1 , LOCC measurements on Cd1 ⊗ . . . ⊗ Cdl recursively as all measurements M = i Xi ⊗ Mi , where Xi ∈ H+ P d2 ···dl d2 dl , and each Mi is a fully one-way LOCC measurement in C ⊗ . . . ⊗ C . i Xi ≤ I, Mi ∈ H+ Ref. [LS14] recently strengthened the result of [BH13] (from parallel one-way LOCC to fully one-way LOCC measurement) and proved that the SoS hierarchy approximates hSep(d1 ,...,dl ) (M ) := max α1 ∈Dd1 ,...,αl ∈Ddl tr[(α1 ⊗ . . . ⊗ αl )M ] (19) to within additive error δ in time exp(O(log2 (d)l3 /δ 2 )), with d := maxi∈[l] di . Here we show that our previous algorithm for the bipartite case can be extended to the multipartite setting to give the same run time. Theorem 9. Algorithm 10 above runs in time exp(O(l3 ln2 (d)/δ 2 ) and outputs states αi , i ∈ [l], satisfying hSep(d1 ,...,dl ) (M ) ≥ tr[M (α1 ⊗ . . . ⊗ αl )] ≥ hSep(d1 ,...,dl ) (M ) − δ. 9 (20) Algorithm 10 (Algorithm for computing hSep(d1 ,...,dl ) (M ) for fully one-way LOCC M ). (m) dm such that Input: {Xi1 ,...,im : m ∈ [l], i1 ∈ [n1 ], . . . , im ∈ [nm ]} ⊂ H+ M= n1 X (1) Xi1 ⊗ i1 =1 n2 X (2) Xi1 ,i2 ⊗ · · · ⊗ i2 =1 nm X (m) Xi1 ,i2 ,...,im im =1 Output: States αi ∈ Ddi , i ∈ [l]. 1. Use Lemma 7 to replace M = Pn1 i1 =1 (1) Xi1 ⊗Mi1 with M 0 = Pn01 (1) 0 0 i1 =1 (Xi1 ) ⊗Mi1 satisfying kM −M 0 k ≤ (m) δ/2l. Here Mi1 is a shorthand for the collection {Xi1 ,...,im } for m ≥ 2 and likewise for Mi0 . Redefine (m) M, {Xi1 ,...,im }, {ni } appropriately. 2. Initialize the variables α1 , . . . , αl to ∅. 3. Enumerate over all p ∈ ∆n (k), with k = 9l2 ln(d)/δ 2 . For each p, (a) Check (using Lemma 5) whether there exists q ∈ SX (1) with k P i (pi − qi )Mi k ≤ δ/2l. (b) If no such q exists then do not evaluate this value of p any further. Otherwise let β1 be the (1) density matrix found in the SDP in Lemma 5 satisfying qi = tr[β1 Xi ]. P (m0 −1) (m0 ) (c) For m0 ∈ {2, . . . , m}, i2 ∈ [n2 ], . . . , im0 ∈ [nm0 ], define X̃i2 ,...,im0 := i1 qi1 Xi1 ,i2 ,...,im0 . (m0 ) (d) Recursively call Algorithm 10 on input {X̃i1 ,...,im0 }. Denote the output by β2 , . . . , βl . (e) If tr[M (β1 ⊗ · · · ⊗ βl )] > tr[M (α1 ⊗ · · · ⊗ αl )] then replace α1 , . . . , αl with β1 , . . . , βl . 2.3 The need for an explicit decomposition The input P to our algorithm is not only a 1-LOCC measurement M but an explicit decomposition of the form M = i Xi ⊗ Yi with each Xi ≥ 0. Previous algorithms for hSep were mostly based on the SoS hierarchy (or its restriction to the separability problem also known as k-extendible hierarchy) [DPS04]. 
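Section 2.3 notes that some decomposition M = Σ_i Xi ⊗ Yi can always be obtained from the operator Schmidt decomposition. The sketch below (our illustration; variable names are ours) computes such a decomposition by realigning M and taking an SVD. As remarked above, the resulting terms need not be positive semidefinite.

```python
import numpy as np

def operator_schmidt_decomposition(M, d1, d2):
    """Write a (d1*d2) x (d1*d2) matrix M as sum_r A_r (x) B_r via realignment + SVD.
    The terms A_r, B_r are generally not positive semidefinite."""
    # M[(i,k),(j,l)] -> R[(i,j),(k,l)], then SVD of the realigned matrix R
    R = M.reshape(d1, d2, d1, d2).transpose(0, 2, 1, 3).reshape(d1 * d1, d2 * d2)
    U, s, Vh = np.linalg.svd(R, full_matrices=False)
    keep = [r for r in range(len(s)) if s[r] > 1e-12]
    As = [s[r] * U[:, r].reshape(d1, d1) for r in keep]
    Bs = [Vh[r, :].reshape(d2, d2) for r in keep]
    return As, Bs

# sanity check on a random M
d1, d2 = 2, 3
rng = np.random.default_rng(0)
M = rng.standard_normal((d1 * d2, d1 * d2))
As, Bs = operator_schmidt_decomposition(M, d1, d2)
print(np.allclose(M, sum(np.kron(A, B) for A, B in zip(As, Bs))))   # True
```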
Running these requires only knowledge of M and not its decomposition. The decomposition appears in the analysis of [BCY11, LW14, BC11, BH13, Yan06], but not the algorithm. On the other hand, previous algorithms did not yield an explicit rounding, i.e. a separable state σ with tr M σ ≈ hSep (M ). The only exception to this [BH13] also required an explicit decomposition in order to produce a rounding. P In general any bipartite measurement M can be written in the form i Xi ⊗ Yi , with individual terms that are not necessarily positive semidefinite. Finding some P such decomposition is straightforward, e.g. using the operator Schmidt decomposition or even writing M = ijkl Mijkl |ii hj| ⊗ |ki hl|. Our algorithm can be readily modified toPincorporate non-positive Xi (along the lines of Section 4), but the run-time will then include a factor of i kXi k1 in the exponent. In general this will be O(1) only if M is close to 1-LOCC and the decomposition is close to the correct one. P This raises an interesting i Xi ⊗ Yi that (apP open question: given M , find a decomposition M = proximately) minimizes i kXi k1 . We are not aware of nontrivial algorithms or hardness results for this problem. 10 3 Generalized algorithm for arbitrary norms An important step in the algorithm of the previous section was the identity, hSep(d1 ,d2 ) (M ) = max kpkY , (21) p∈SX P valid for any one-way LOCC M = i Xi ⊗ Yi . This equation suggests ways of generalizing the algorithm. In this section we consider the setting where the operators {Y1 , . . . Yn } belong to some Banach space B with norm k · kB . In analogy with Eq. (11), given Y = {Y1 , . . . Yn } we define the (B, Y ) norm in Rn as n X kakB,Y := . ai Yi i=1 (22) B The goal is then to estimate max kpkB,Y , (23) p∈SX where, as before, SX is given by Eq. (11). Also this generalization is of interest in quantum information theory. As we discuss more in the next subsection, it includes as a particular case the well-studied problem of computing the maximum output α-norms of an entanglement-breaking channel. Consider a general entanglement-breaking quantum channel Λ : Dd1 → Dd2 given by [HSR04]: X Λ(ρ) := tr(Xi ρ)Yi , (24) i with Yi ≥ 0, tr(Yi ) = 1, Xi ≥ 0, and P i Xi = I. Then max kΛ(ρ)kα = max kpkSα ,Y . ρ∈Dd1 (25) p∈SX In order to find an algorithm for computing Eq. (23), we need to replace the quantum Hoeffding bound (Lemma 3) by more sophisticated concentration bounds. Since in Lemma 5 all we needed was a bound in expectation, the right concept will turn out to be the Rademacher type-γ constant of the space B, which we now define: Definition 11. We say a Banach space B has Rademacher type-γ constant C if for every Z1 , . . . , Zk ∈ B and Rademacher random variables ε1 , . . . , εk (i.e. independent and uniformly distributed on ±1) , E ε1 ,...,εk k X i=1 γ ≤ Cγ εi Zi B k X γ kZi kB . (26) i=1 √ It is known that Schatten-α spaces with norm kXkα := tr(|X|α )1/α have type-2 constant α − 1 for α ≥ 2 [BCL94], and type-α constant 1 for every α ∈ [1, 2] [KPT00, Thm 3.3]. For a reader unfamiliar with the type-γ constant, we suggest verifying that the type-2 constant of `2 is 1. A more nontrivial calculation√is using the Hoeffding bound or its operator version to verify that the n type-2 constant of `n∞ or S∞ is O( log n). (This also follows from the fact that the S∞ and Slog(n) norms are within a constant multiple of each other on the space of n-dimensional matrices.) 
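As a complement to the verification suggested above, the following small computation (ours, not from the paper) exhibits the standard example witnessing the matching Ω(√(log n)) lower bound on the type-2 constant of ℓ∞^n: take n = 2^k and let the j-th coordinate of (Z_1, ..., Z_k) run over all sign patterns, so that every sign vector ε is matched exactly on some coordinate.

```python
import numpy as np
from itertools import product

k = 10
n = 2 ** k
# Z_i in l_infty^n: Z[i, j] is the i-th entry of the j-th sign pattern
Z = np.array(list(product([-1.0, 1.0], repeat=k))).T     # shape (k, n), entries +-1

# For every sign vector eps some coordinate j equals eps, so
# ||sum_i eps_i Z_i||_infty = k, while sum_i ||Z_i||_infty^2 = k.
rng = np.random.default_rng(0)
eps = rng.choice([-1.0, 1.0], size=k)
lhs = np.max(np.abs(eps @ Z))                  # equals k for every eps
rhs = np.sum(np.max(np.abs(Z), axis=1) ** 2)   # equals k
print(lhs, rhs, np.sqrt(lhs**2 / rhs))         # type-2 constant >= sqrt(k) = sqrt(log2 n)
```

Plugging this family into Definition 11 with γ = 2 gives k² ≤ C²·k, i.e. C ≥ √(log2 n), consistent with the O(√(log n)) upper bound quoted above.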
For sparsification (the analogue of Lemma 7) we will actually need a slightly stronger condition than a bound on the type-γ constant: Definition 12. The modulus of uniform smoothness of a Banach space B is defined to be the function   kx + τ ykB + kx − τ ykB − 1 : kxkB = kykB = 1 . (27) ρB (τ ) := sup 2 11 By the triangle inequality, ρB (τ ) ≤ τ for all B. But when limτ →0 ρBτ(τ ) = 0 then we say that B is uniformly smooth. For example, if B = `2 then ρB (τ ) = τ 2 /2, whereas ρ`1 (τ ) = τ . More generally [BCL94] 2 (building on [TJ74]) proved that ρSα (τ ) ≤ α−1 2 τ for α > 1. We say that B has modulus of smoothness of γ power type γ if ρB (τ ) ≤ Cτ for some constant C. This implies (using an easy induction on k) that the type-γ constant is ≤ C, and indeed this was how the type-γ constant was bounded in [TJ74, BCL94]. The algorithm for approximating the optimization problem given by Eq. (23) is the following: Algorithm 13 (Algorithm for computing maxp∈SX kpkB,Y for B of type-γ constant C and modulus of uniform smoothness ρB (τ ) ≤ sτ 2 , with X := {Xi } and Y := {Yi }). Input: {Xi }ni=1 , {Yi }ni=1 Output: p ∈ S 1. Use Lemma 22 to replace X = {Xi } and Y = {Yi } with X 0 := {Xi0 } and Y 0 := {Yi0 }. 2. Enumerate over all p ∈ ∆n (k), with k = 2C γ δγ maxi kYi kγB 1/(γ−1) . (a) For each p, check (using Lemma 21) whether there exists q ∈ S with kp − qkB,Y ≤ δ. (b) If so, compute kpkY . 3. Output p such that kpkB,Y is the maximum. We have: Theorem 14. Let B be a Banach space with norm k · kB . Suppose the type-γ constant of B is C and that 2 there is s > 0 such that the modulus of uniform smoothness satisfies ρB (τ ) ≤ sτP . Suppose one can compute k · kB in time T . Consider {Xi }ni=1 with Xi d × d matrices satisfying Xi ≥ 0, Xi ≤ I, and {Yi }ni=1 with Yi ∈ B. Algorithm 13 runs in time    γ  γ−1 −1 poly(T, d, s) exp O Cδ max kYi kB log(d) (28) i and outputs p such that max kpkB,Y ≥ kpkB,Y ≥ max kpkB,Y − δ, p∈SX p∈SX (29) p d2 . Then the type-2 constant is O( log(d2 )), maxi kYi k ≤ 1, and Theorem As an example, suppose B is S∞ 2 shows one can compute maxp∈SX kpkY in time exp(O(δ −2 log(d1 ) log(d1 d2 ))). In the next subsection we discuss a few particular cases of the theorem worth emphasizing. Then we prove the theorem. 3.1 3.1.1 Consequences of Theorem 14 Restricted one-way LOCC measurements The next lemma shows that for subclasses of one-way LOCC measurements one has a PTAS for computing hSep . The class include in particular one-way LOCC measurements in which Bob’s measurements are low rank. P P Corollary 15. Let M = i Xi ⊗ Yi be such that Xi ≥ 0, i Xi ≤ I and kYi k2 ≤ r. Then one can compute α ∈ Dd1 and β ∈ Dd2 such that hSep (M ) ≥ tr(M (α ⊗ β)) ≥ hSep (M ) − δ O(δ −2 r) in time d1 . 12 (30) Proof. We use Theorem 14 and Algorithm 13 to estimate the optimal p and then find α and β by semidefinite programming. P If instead we use the multipartitePversion of the algorithm (see Algorithm 10), we find that for M = i Xi ⊗ Yi1 ⊗ . . . ⊗ Yil , with Xi ≥ 0, i Xi ≤ I and kYi k2 ≤ r, we can compute α ∈ Dd and β1 , . . . , βl ∈ Dd such that hSep (M ) ≥ tr(M α ⊗ β1 ⊗ . . . ⊗ βl ) ≥ hSep (M ) − δ (31) in time dO(δ 3.1.2 −2 3 l r) . Maximum output norm of entanglement-breaking channels The next corollary shows that for all α > 1, there is a PTAS for computing the maximum output Schatten-α norm of an entanglement-breaking channel. Corollary 16. Let Λ : Dd1P→ Dd2 be an entanglement-breaking channel with decomposition Λ(ρ) := P tr(X ρ)Y i i (where Xi ≥ 0, i i Xi = I, Yi ∈ Dd2 ). O (δ −2 α) 1. 
For every α ≥ 2 one can compute in time poly(d2 )d1 a number r such that max kΛ(ρ)kα ≥ r ≥ max kΛ(ρ)kα − δ, ρ∈Dd1 (32) ρ∈Dd1  O 2. For every 1 < α ≤ 2 one can compute in time poly(d2 )d1 1 (αδ−α ) α−1  max kΛ(ρ)kα ≥ r ≥ max kΛ(ρ)kα − δ, ρ∈Dd1 ρ∈Dd1 a number r such that (33) √ Proof. Part 1 follows from Theorem 14 and the fact that Sα , with α ≥ 2, has type-2 constant α − 1 2 [BCL94] and ρSα (τ ) ≤ α−1 2 τ for α > 1. Part 2, in turn, follows from Theorem 14 and the fact that for Sα , with α ≥ 2, has type-α constant one [KPT00, Thm 3.3]. We note that computing maximum output α-norms for general quantum channels is harder.  In particular it was shown in [HM10, HM13] that there is no algorithm that run in time exp O log2−ε d for any ε > 0 and can decide if maxρ kΛ(ρ)k is one or smaller than δ (for any fixed δ > 0) for a general quantum channel Λ : Dd → Dd , unless the exponential time hypothesis (ETH) is wrong (meaning there is a subexponential time algorithm for 3-SAT). The result of [HM10] is one example of many that found dΘ̃(log d) upper or lower bounds for related optimization problems [LMM03, BKW14, HM10]. In a few cases [ALSV13b, SW12, DKLP06b] poly-time approximate schemes (PTASs) are known. Our results here fall into this second class. We hope that the geometric perspective from our paper can lead to a better understanding of what distinguishes these cases. What is known about hardness results for entanglement-breaking channels? Using the results of [BBH+ 12] one can show that to determine if maxρ kΛ(ρ)k is ≥ C/d or ≤ c/d (for any two constants C > c > 0) cannot be done in time exp O log2−ε d assuming ETH. So one cannot hope to find a polynomial-time algorithm for a multiplicative approximation of the maximum output norm. Note that the complexity of the algorithm blows up when α → 1. This is not only an artifact of the proof. Computing the quantity for α close to one allow us to estimate the von Neumann minimum output entropy of the channel. However to estimate it we need a number of samples of order O(d) and so the net-based approach we explore in this paper does not lead to efficient algorithms. 13 3.1.3 Hypercontractive norms Our third corollary concerns the problem of computing hypercontractive norms, in particular computing the 2 → s norm of a d × d matrix A, for s > 2, defined as kAk2→s := max kAxks . (34) kxk2 =1 This norms are important in several applications, e.g. bounding the mixing time of Markov chains and determining if a graph is a small-set expander [BBH+ 12]. In [BBH+ 12] it was also shown that to compute any constant-factor multiplicative approximation to the 2 → 4 norm of a n × n matrix is as hard as solving 3-SAT with O(log2 (n)) variables. In Appendix B we extend the approach of [BBH+ 12] to show hardness results for multiplicatively approximating all 2 → q norms, for even q ≥ 4. In [BBH+ 12] it was shown that the result of [BCY11] implies that for any d × d matrix A the Sum-of−2 Squares hierarchy computes in time dO(log(d)δ ) an additive approximation x s.t. kAk42→4 ≤ x ≤ kAk42→4 + δkAk22→2 kAk22→∞ , (35) where kAk2→2 is the largest singular value of A and kAk2→∞ the largest 2-norm of any row of A. Using Theorem 14 we can improve this algorithm in two ways: First we can compute an approximation to k.k2→s for any s > 2. Second the running time for fixed error is polynomial, instead of quasipolynomial. Corollary 17. For any s ≥ 2 one can compute in time dO(sδ −2 ) a number x such that kAk22→s ≥ x ≥ kAk22→s − δkAk22→2 . P Proof. Let Xi := A† |ii hi| A/||A||22→2 . 
Note Xi ≥ 0 and i Xi ≤ I. We can write X s/2 s/2 kAks2→s = max hψ| A† |ii hi| A |ψi = ||A||s2→2 max kpks/2 . |ψi∈`2 Since `s has type-2 constant √ p∈SX i p∈SX  (37) s − 1, by Theorem 14 we can estimate max kpks/2 = in time exp O sδ −2 log(d) (36) kAk22→s kAk22→2 (38) with additive error δ. Although the corollary above gives an approximation for every s > 2 that can be computed in polynomial time for every fixed error, it gives a worse approximation to the 2 → 4 than [BBH+ 12] (given by Eq. (35)). We now show a second corollary that strictly improves the result of [BBH+ 12] for 2 → 4 and generalizes it to 2 → s norms for every even ≥ 4. Corollary 18. For any even s ≥ 4 one can compute in time dO(sδ kAks2→s ≥x≥ kAks2→s − −2 ) a number x such that s−2 δkAk22→2 kAk2→∞ . (39) Proof. Define  † ⊗ 2s −1 A† |ii hi| A A |ii hi| A Xi := and Yi := . ||A||22→2 ||A||22→∞ P Observe that Xi , Yi ≥ 0, i Xi ≤ I and Yi ≤ I. Additionally X s−2 kAks2→s = kAk22→2 kAk2→∞ hSeps/2 (n) ( Xi ⊗ Yi ) i This last term can be approximated to additive error ε in time exp(O(s3 /ε2 )) using the multipartite results of Section 3.1.1. 14 (40) (41) 3.2 Proof of Theorem 14 The proof of Theorem 14 will follow from three lemmas, the first showing that it is enough to search over a net of small size, the second showing that one can decide membership of {q : kp − qkB,Y ≤ δ} efficiently (assuming that k.kB can be computed efficiently), and the third giving a sparsification for the number of {Xi }ni and {Yi }ni=1 . We first show how the type-γ constant gives a concentration bound. This uses a standard argument. Lemma 19 (Symmetrization Lemma). Suppose we are given p ∈ ∆n , Zi , . . . , Zn elements of a Banach space B with norm k · kB , and ε1 , . . . , εk Rademacher distributed random variables. Then for every γ ≥ 1 E i1 ,...,ik ∼p⊗k k 1X Zi − E Zi k j=1 j i∼p γ ≤2 E E i1 ,...,ik ∼p⊗k ε1 ,...,εk B k 1X ε j Z ij k j=1 γ . (42) B Proof. E i1 ,...,ik ∼p⊗k k 1X (Zi − E Zi ) k j=1 j i∼p γ = E i1 ,...,ik ∼p⊗k B ≤ = E k 1X (Zi − E [Zi0 ]) k j=1 j i0j ∼p j E i1 ,...,ik ∼p⊗k i01 ,...,i0k ∼p⊗k E k X 1 k E E E i1 ,...,ik ∼p⊗k ε1 ,...,εk (43) B γ (Zij − Zi0j ) j=1 E i1 ,...,ik ∼p⊗k i01 ,...,i0k ∼p⊗k ε1 ,...,εk ≤2 γ (44) B 1 k k 1X εj Zij k j=1 k X j=1 γ εj (Zij − Zi0j ) (45) B γ . (46) B (47) Then we have the following generalization of Lemma 5: Lemma 20. Let the Banach space B have type-γ constant C. Then for any p ∈ ∆n there exists q ∈ Nk with  kp − qkB,Y ≤ 2C γ γ E kYi kB k γ−1 i∼p 1/γ . (48) Proof. Sample i1 , . . . , ik according to p and set q = (ei1 + . . . + eik )/k. Then Definition 11 and Lemma 20 15 give γ  E i1 ,...,ik ∼p⊗k kp − qkB,Y ≤ E i1 ,...,ik ∼p⊗k = kp − qkγB,Y k 1X Yi − E Yi k j=1 j i∼p E i1 ,...,ik ∼p⊗k ≤ 2 E E i1 ,...,ik ∼p⊗k ε1 ,...,εk γ ≤ = 2C kγ E i1 ,...,ik ∼p⊗k k X 1 k k X B γ . εj Yij j=1 Yij γ B γ B i=1 2C γ γ E kYi kB . k γ−1 i∼p (49) The first inequality follows from the convexity of x 7→ xγ , the second inequality from Lemma 19, and the third from the fact that B has type-γ constant C. The next lemma is an analogue of Lemma 5: Lemma 21. Let the Banach space B be such that k.kB can be computed in time T . Given p ∈ ∆n and ε > 0, we can decide in time poly(T, d, n) whether the following set is nonempty SX ∩ {q : kp − qkB,Y ≤ ε} (50) Proof. Since we have an efficient algorithm for k · kB we can efficiently test membership in the set {q : kp − qkB,Y ≤ ε}. Thus we can determine if Eq. (50) is nonempty using the ellipsoid algorithm [GLS93]. 
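A quick numerical sanity check of Lemma 20 (ours; the instance is synthetic and the names are illustrative): take B = ℓ_4^d, which by the facts quoted earlier has type-2 constant C = √3, draw the Y_i from the unit ball of ℓ_4, and compare the average sampling error with the bound 2C(E_{i∼p} kY_i k²_B / k)^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n, k, C = 50, 40, 300, np.sqrt(3.0)
Ys = rng.standard_normal((n, d))
Ys /= np.linalg.norm(Ys, ord=4, axis=1, keepdims=True)   # ||Y_i||_4 = 1
p = rng.dirichlet(np.ones(n))

def norm_BY(a):
    """||a||_{B,Y} = || sum_i a_i Y_i ||_4 for B = l_4^d."""
    return np.linalg.norm(a @ Ys, ord=4)

errs = []
for _ in range(200):
    idx = rng.choice(n, size=k, p=p)              # sample i_1, ..., i_k according to p
    q = np.bincount(idx, minlength=n) / k         # q = (e_{i1} + ... + e_{ik}) / k
    errs.append(norm_BY(p - q))

norms2 = np.linalg.norm(Ys, ord=4, axis=1) ** 2
bound = 2 * C * np.sqrt((p @ norms2) / k)         # Lemma 20 with gamma = 2
print(np.mean(errs), bound)                       # observed average error vs. the bound
```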
We now state an analogous sparsification result of Lemma 7 for the more general case we consider in this section. The proof is in Appendix A. LemmaP22. Suppose Λ is a map from d P × d Hermitian matrices to a Banach space B and is given by n n Λ(ρ) = i=1 hXi , ρiYi where each Xi ≥ 0, i=1 Xi ≤ I and each kYi kB ≤ 1. Suppose that B has modulus Pk 0 0 0 of smoothness ρB (τ ) ≤ sτ 2 . Then there exists Λ0 such that Λ0 (ρ) = i=1 hXi , ρiYi where each Xi ≥ 0, Pk 0 0 2 2 i=1 Xi ≤ I and each kYi kB ≤ 1. Additionally k ≤ cd (d + s)/δ for some constant c > 0, max k(Λ0 − Λ)(ρ)kB ≤ δ ρ∈Dd (51) and Λ0 can be found efficiently. With the lemmas in hand the proof of Theorem 14 follows along the same lines as Theorem 6. 4 Algorithm for injective tensor norm In this section we present one further generalization, this time on the input space. While this final generalization does not have natural applications in quantum information (to our knowledge), it does give perspective on why it is natural to consider 1-LOCC measurements and entanglement-breaking channels. First, we introduce some more definitions. Suppose that k · kA and k · kB are two norms. For Λ an operator from A → B define the operator norm kΛkA→B := sup kΛ(a)kB . a∈B(A) 16 (52) Define the injective tensor norm A ⊗inj B by kxkA⊗inj B = sup ha ⊗ b, xi. (53) a∈B(A∗ ) b∈B(B∗ ) Here A∗ is the space of functions from A to R, and kãkA∗ := supa∈B(A) ã(a). For example, if A = B = `2 then A ⊗inj B is the usual operator norm for matrices, i.e. largest singular value. More generally A∗ ⊗inj B is isomorphic to the operator norm on maps from A → B. Finally if A, B, C are Banach spaces then define the factorization norm A → B → C for x ∈ L(A, C) by kΛkA→C→B = inf Λ1 ∈L(C,B) Λ2 ∈L(A,C) Λ=Λ1 Λ2 kΛ1 kC→B kΛ2 kA→C . (54) We can now (informally) state our generalized estimation theorem. Given an operator Λ ∈ B(A → `1 → B) we can estimate kΛkA→B efficiently. For example, consider hSep , which we considered in Section 2. In our new notation hSep (M ) = kM kS∞ ⊗inj S∞ = kM̂ kS1 →S∞ , (55) where M̂ is the map defined by M̂ (X) = trA [M (X ⊗ I)]. The requirement that M is 1-LOCC is roughly equivalent to the requirement that kM̂ kS1 →`1 →S∞ ≤ 1. (56) Theorem 23. Suppose A, B are d-dimensional Banach spaces. Suppose kΛkA→`n1 →B ≤ 1 and that Pn a good ∗ ∗ ∗ ∗ ∈ A and y , . . . , y ∈ B are given such that Λ = factorization is known; i.e. x , . . . , x 1 n n 1 i=1 yi xi , Pn ∗ supa∈A i=1 |xi (a)| ≤ 1 and maxi kyi kB ≤ 1. Suppose further that algorithms exist for computing the A and B norms running in times TA , TB respectively. Let λ denote the type-γ constant of B. Then we can estimate kΛkA→B to accuracy ε in time TA TB poly(d)nc(λ/δ) γ γ−1 . (57) The algorithm follows similar lines to the earlier algorithms. It lacks only the sparsification step since we do not know how to extend Lemma 22 to this case. Algorithm 24 (Algorithm for computing kΛkA→B ). Input: {xi }ni=1 , {yi }ni=1 Output: p ∈ S γ 1. Enumerate over all p ∈ Nk , with k = (2λ/δ) γ−1 . (a) For each p, check whether there exists q ∈ S with kp − qkB,Y ≤ δ. (b) If so, compute kpkY . 2. Output p such that kpkB,Y is the maximum. The proof of Theorem 23 is almost the same as that of Theorem 14. The only new ingredient is checking whether p ∈ SX . This is equivalent to asking whether ∃a ∈ B(A) such that pi = x∗i (a). This is a convex program which can be decided in time poly(d)TA using the ellipsoid algorithm along with our assumption that k · kA can be computed in time TA . 
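Eq. (56) says that a 1-LOCC measurement factorizes through ℓ1^n with norm at most 1: the factorization is simply Λ2(ρ) = (tr(X1 ρ), ..., tr(Xn ρ)) followed by Λ1(p) = Σ_i p_i Y_i, so that kΛ1 k_{ℓ1→S∞} = max_i kY_i k ≤ 1 and kΛ2 k_{S1→ℓ1} ≤ 1. The sketch below (ours; the random instance is illustrative) spot-checks these two factor norms on random density matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_psd(d):
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return G @ G.conj().T

d1, d2, n = 4, 5, 6
Xs = [rand_psd(d1) for _ in range(n)]
scale = np.linalg.norm(sum(Xs), 2)
Xs = [X / scale for X in Xs]                      # X_i >= 0, sum_i X_i <= I
Ys = [rand_psd(d2) for _ in range(n)]
Ys = [Y / np.linalg.norm(Y, 2) for Y in Ys]       # 0 <= Y_i <= I

# Lambda_1 : l_1^n -> S_inf,  p   |-> sum_i p_i Y_i
print(max(np.linalg.norm(Y, 2) for Y in Ys))      # ||Lambda_1||_{l_1 -> S_inf} <= 1

# Lambda_2 : S_1 -> l_1^n,    rho |-> (tr(X_i rho))_i ; spot-check on density matrices
worst = 0.0
for _ in range(1000):
    rho = rand_psd(d1)
    rho /= np.trace(rho).real
    worst = max(worst, sum(abs(np.trace(X @ rho)) for X in Xs))
print(worst)                                      # stays <= 1
```

Since the X_i are positive semidefinite and sum to at most the identity, the bound on kΛ2 k_{S1→ℓ1} in fact holds on the whole S1 unit ball, not just on density matrices; the loop above is only an empirical spot-check.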
17 Acknowledgments We thank Jop Briet, Pablo Parrilo and Ben Recht for interesting discussions. FGSLB is supported by EPSRC. AWH was funded by NSF grants CCF-1111382 and CCF-1452616, ARO contract W911NF-12-10486 and a Leverhulme Trust Visiting Professorship VP2-2013-041. Part of this work was done while A.W. was visiting UCL. A Sparsification In this appendix we prove two Lemmas about sparsification: one (Lemma 7) for the problem of hSep and the second (Lemma 22) for the estimate S1 → B norms. While the former is a special case of the latter, it is also far more self-contained (requiring only the operator Hoeffding bound), so we recommend reading it first. Proof of Lemma 7. Assume initially that each kYi k = 1. This is possible because we can always drop terms with Yi = 0 and then rewrite M as n X Yi . (58) kYi kXi ⊗ kY ik i=1 P Redefining Xi appropriately we see that i Xi ≤ I still holds. Pn i ⊗Yi ) i . Sample i1 , . . . , in0 according to p and and Wi = Xip⊗Y Now write M = i=1 pi Wi with pi = tr(X tr(M ) i define n0 n0 X X W ij Xi /pi A= and B= . (59) 0 n n0 j=1 j=1 We would like to guarantee that kA − M k ≤ δ kB − Id1 k ≤ δ (60a) (60b) for some δ to be chosen later. We can use Lemma 3 here. To do so, note that kWi k ≤ tr Wi ≤ tr M ≤ d1 d2 tr M kXi /pi k ≤ ≤ tr M ≤ d1 d2 , using the assumption that kYi k = 1 tr Yi Now we find that the probability that Eq. (60) fails to hold is     n0 δ 2 n0 δ 2 ≤ d1 d2 exp − 2 2 + d1 exp − 2 2 . 8d1 d2 8d1 d2 (61a) (61b) (62) Taking n0 = 8d21 d22 log(2d1 d2 )/δ 2 we have that (60) holds with positive probability. Fix the corresponding i1 , . . . , in0 . Choose M 0 = A/(1 + δ). Together with Eq. (60) this means that M 0 is a valid 1-LOCC measurement. By Eq. (60a) we can achieve our result by choosing δ = ε/3. Indeed kM 0 − M k = ≤ ≤ ≤ A −M 1+δ A − A + kA − M k 1+δ   1 (1 + δ) + δ 1− 1+δ 2δ + δ 2 ≤ ε. 18 (63) We now turn to the proof of Lemma 22, covering the case of general Banach spaces with bounded modulus of smoothness. We will need the following Azuma-type inequality from Naor [Nao12], who attributes it to Pisier. We will state a weaker Hoeffding-type formulation that suffices for our purposes. Lemma 25 (Theorem 1.5 of [Nao12]). Suppose X1 , . . . , Xk are independent random variables on B(B) for B a Banach space with ρB (τ ) ≤ sτ 2 . Then # " k 2 1X (64) Xi ≥ δ ≤ es+2−ckδ . Pr k i=1 Proof of Lemma Pn 22. First we introduce notation. For a matrix X, define the map X̂ by X̂(A) := hX, Ai. Thus Λ = i=1 Yi X̂i . Pn As in Lemma 7 we first drop terms with Yi = 0 and rewrite Λ = i=1 kYYiikB · kYi kB X̂i . Redefine Xi , Yi appropriately and assume from now on that each kYi kB = 1. Define pi = tr[Xi ]/d. Note that p ∈ Rn+ and kpk1 ≤ 1. Sample i1 , . . . , ik according to p and let them take P Pk Xi value 0 with probability 1 − i pi . Set Λ00 = j=1 Yj0 X̂j0 where Yj0 = Yij and Xj0 = kpij . (Set Xj0 = Yj0 = 0 j if ij = 0.) These mean that E[Λ00 ] = Λ. Pchoices n 0 Let X̄ := i=1 Xi and observe that 0 ≤ X̄ ≤ I. Additionally E[Xj ] = X̄/k. Thus if we define 0 Zj := kXj − X̄ then E[Zj ] = 0 and kZj k ≤ d. The operator Hoeffding bound (Lemma 3) implies that Pk Pk k k1 j=1 Zj k ≤ δ with probability ≥ 1 − d exp(−kδ 2 /8d2 ). When this occurs we have k j=1 Xj0 − X̄k ≤ δ and thus k X Xj0 ≤ (1 + δ)I. (65) j=1 Next we attempt to bound the LHS of Eq. (51). First we can relax Dd to B(S1 ) and obtain max k(Λ00 − Λ)(ρ)kB ≤ kΛ00 − ΛkS1 →B = kΛ00 − E[Λ00 ]kS1 →B . 
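The importance-sampling estimator used in the proof of Lemma 7 above can be run directly. The sketch below (ours; a synthetic instance with illustrative names) samples terms with probability p_i = tr(X_i ⊗ Y_i)/tr(M) and averages W_{i_j} = X_{i_j} ⊗ Y_{i_j}/p_{i_j}; the average concentrates around M, with the rigorous sample-size bound being the one in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_psd(d):
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return G @ G.conj().T

d1, d2, n = 3, 3, 1000
Xs = [rand_psd(d1) for _ in range(n)]
scale = np.linalg.norm(sum(Xs), 2)
Xs = [X / scale for X in Xs]                             # sum_i X_i <= I
Ys = [rand_psd(d2) for _ in range(n)]
Ys = [Y / np.linalg.norm(Y, 2) for Y in Ys]              # ||Y_i|| = 1
terms = [np.kron(X, Y) for X, Y in zip(Xs, Ys)]
M = sum(terms)

p = np.array([np.trace(T).real for T in terms])
p /= p.sum()                                             # p_i = tr(X_i (x) Y_i) / tr(M)

for k in (200, 2000, 20000):
    idx = rng.choice(n, size=k, p=p)
    A = sum(terms[i] / (p[i] * k) for i in idx)          # (1/k) sum_j W_{i_j}, E[A] = M
    print(k, np.linalg.norm(A - M, 2) / np.linalg.norm(M, 2))
```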
(66) ρ∈Dd This formulation allows to apply the symmetrization trick (Lemma 19) to obtain E max k(Λ00 − Λ)(ρ)kB ≤ 2 i1 ,...,ik ∼p⊗k ρ∈Dd E E i1 ,...,ik ∼p⊗k ε1 ,...,εk k X̂i 1X εj Yij j k j=1 pij (67) S1 →B We will bound this last quantity for any fixed i1 , . . . , ik . For ρ ∈ B(S1 ), define qj := hXij , ρi/kpij . Denote the set of feasible q by SX,~i where this notation emphasizes the dependence on both X and i1 , . . . , ik . Then P j |qj | ≤ 1, each |qj | ≤ d/k and k X̂i 1X εj Yij j k j=1 p ij {z } | =:Λ0ε = max q∈SX,~i k X j=1 εj Yij qj . B S1 →B Now let us fix ρ (or equivalently q). Observe that kYij qj kB ≤ d/k. Then Lemma 25 implies that   k X ckδ 2 Pr  εj Yij qj ≥ δ  ≤ es+2− d2 . ε1 ,...,εk (68) j=1 B 19 (69) According to Lemma II.2 of [HLSW04] there exists a net of pure states ρ1 , . . . , ρm ∈ Dd such that m ≤ 2d 10P and for any pure state ρ, we have minl kρ − ρl k1 ≤ 1/2. Say that ε1 , . . . , εk is a good sequence if k j εj Yij hXij , ρl i/kpij k ≤ δ for all l ∈ [m]. By Eq. (69) and the union bound the probability that 2 2 ε1 , . . . , εk is bad (i.e. not good) is ≤ 102d es+2−ckδ /d . For a bad sequence we still have that Eq. (68) is ≤ d by the triangle inequality. For a good sequence, let α denote Eq. (68) and let β be the corresponding maximum with ρ restricted to the set {ρ1 , . . . , ρm }. By our assumption that the sequence is good we have β ≤ δ. Observe that α = maxρ∈B(S1 ) kΛ0ε (ρ)kB and by convexity (and symmetry of the k · kB norm) this max is achieved for ρ a pure state. Let ρl satisfy kρ − ρl k1 ≤ 1/2. Then 1 kΛ0ε (ρ)kB ≤ kΛ0ε (ρl )kB + kΛ0ε (ρ − ρl )kB ≤ β + α · . 2 (70) Maximizing the LHS over ρ we obtain α ≤ β + α/2, or equivalently α ≤ 2β ≤ 2δ. Thus Eq. (67) is ≤ 4δ + 2d102d es+2− ckδ 2 d2 . (71) Redefining c, this is ≤ 5δ when k ≥ cd3 /δ 2 . Since Eq. (67) controls the expectation with respect to i1 , . . . , ik , we conclude that for at least half of the i1 , . . . , ik , the LHS of Eq. (51) is ≤ 10δ. Since Eq. (65) holds with high probability (≥ 1 − d exp(−cd/8)) it follows that there exists a sequence of i1 , . . . , ik that simultaneously fulfills P both criteria. Fix this choice. Finally we choose Λ0 = Λ00 /(1 + δ) so that the normalization condition on j Xj0 is satisfied. This increases the error by at most a further factor of δ. We conclude the proof by redefining δ to be 11δ. B Hardness of computing 2 → q norms In this section we extend the hardness results of [BBH+ 12] (Theorem 9.4, part 2) for estimating the 2 → 4 norm to general 2 → q norms for even q ≥ 4. The next lemma is an extension from Lemma 9.5 from [BBH+ 12]. Lemma 26. Let M ∈ L(Cd ⊗ Cd ) satisfy 0 ≤ M ≤ I. Assume that either (case Y ) hSep(d,d) (M ) = 1 or (case N ) hSep(d,d) (M ) ≤ 1 − δ. Let k be a positive integer and q ≥ 4 an even positive integer. Then there exists a matrix A of size d4kq × d2kq such that in case Y , kAk2→q = 1, and in case N , kAk2→q ≤ (1 − δ/2)k . Moreover, A can be constructed efficiently from M . Proof. Consider the following operator 1/2 1/2 1/2 1/2 N := (MA1 B1 ⊗ . . . ⊗ MAq/2 Bq/2 )PA1 ,...,Aq/2 ⊗ PB1 ,...,Bq/2 (MA1 B1 ⊗ . . . ⊗ MAq/2 Bq/2 ), (72) with PA1 ,...,Aq/2 the projector onto the symmetric subspace over A1 , ..., Aq/2 . We will first relate hSepq/2 (d2 ) (N ) to hSep(d,d) (M ), and then relate hSepq/2 (d2 ) (N ) to kAk2→q for a matrix A of size d4kq × d2kq . First we show that in case Y , hSepq/2 (d2 ) (N ) = 1. Indeed since there are unit vectors x, y ∈ Cd satisfying MAB (x ⊗ y) = x ⊗ y, we have hSepq/2 (d2 ) (N ) = max v1 ,...,vq/2 ∈Cd2 (v1 ⊗ . . . 
⊗ vq/2 )∗ N (v1 ⊗ . . . ⊗ vq/2 ) ≥ (x⊗q/2 ⊗ y ⊗q/2 )∗ N (x⊗q/2 ⊗ y ⊗q/2 ) = (x⊗q/2 ⊗ y ⊗q/2 )∗ PA1 ,...,Aq/2 ⊗ PB1 ,...,Bq/2 (x⊗q/2 ⊗ y ⊗q/2 ) = 1 In case N we show that hSepq/2 (d2 ) (N ) ≤ 1 − δ/2. Note that PA1 ,...,Aq/2 ≤ PA1 A2 ⊗ IA3 ...Aq/2 . 20 (73) Then hSepq/2 (d2 ) (N ) = max v1 ,...,vq/2 ≤ ∈Cd2 (v1 ⊗ . . . ⊗ vq/2 )∗ N (v1 ⊗ . . . ⊗ vq/2 ) 1/2 1/2 1/2 1/2 max (v1 ⊗ v2 )∗ (MA1 B1 ⊗ MA2 B2 )PA1 A2 ⊗ PB1 B2 (MA1 B1 ⊗ MA2 B2 )(v1 ⊗ v2 ) v1 ,v2 ∈Cd2 ≤ 1 − δ/2, (74) where the last inequality follows from Lemma 9.6 of [BBH+ 12]. To construct a matrix A of size d4kq ×d2kq s.t. kAk2→q = hSepq/2 (d2 ) (N ) we follow the proof of Lemma 9.5 of [BBH+ 12], the only difference being that we apply Wick’s theorem to PA1 ,...Aq/2 , i.e. there is a measure µ over unit vectors s.t.  Z d + q/2 − 1 PA1 ,...,Aq/2 = µ(dv)(vv ∗ )⊗q/2 . (75) q/2 The basic idea of the Lemma is to use the product test of [HM10] to force v1 , . . . , vq/2 to be product states. Our proof can be summarized as saying that q/2 copies can enforce this more effectively than 2 copies (assuming q/2 ≥ 2), and therefore we obtain soundness at least as sharp as in [BBH+ 12]. This analysis may be wasteful, since using more copies should improve the effectiveness of the product test. The main result of this section is the following analogue of Theorem 9.4, part 2, of [BBH+ 12]: Theorem 27. Let φ be a 3-SAT instance with n variables and O(n) clauses and q ≥ 4 an even integer. Determining whether φ is satisfiable can be reduced in polynomial time to determining whether kAk2→q ≥ C √ or kAk2→q ≤ c where 0 ≤ c < C and A is an m × m matrix, where m = exp(q n polylog(n) log(C/c)). √ This gives nontrivial hardness for super-constant q, in fact up to Õ( log d), but not yet all the way up to O(log d), where multiplicative approximations are known to be easy. Proof. Corollary 14 of [HM10] gives a reduction from determining satisfiability of φ to distinguishing between hSep(d,d) (M ) =√1 and hSep(d,d) (M ) ≤ 1/2, with 0 ≤ M ≤ I that can be constructed in time poly(d) from φ with d = exp( n polylog(n)). Applying Lemma 26 gives the result. References [ABS10] Sanjeev Arora, Boaz Barak, and David Steurer. Subexponential algorithms for unique games and related problems. In FOCS, pages 563–572, 2010. [AIM14] S. Aaronson, R. Impagliazzo, and D. Moshkovitz. AM with multiple Merlins. In Computational Complexity (CCC), 2014 IEEE 29th Conference on, pages 44–55, June 2014, arXiv:1401.6848. [ALSV13a] Noga Alon, Troy Lee, Adi Shraibman, and Santosh Vempala. The approximate rank of a matrix and its algorithmic applications: Approximate rank. In Proceedings of the 45th Annual ACM Symposium on Symposium on Theory of Computing, STOC ’13, pages 675–684, 2013. [ALSV13b] Noga Alon, Troy Lee, Adi Shraibman, and Santosh Vempala. The approximate rank of a matrix and its algorithmic applications: Approximate rank. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pages 675–684. ACM, 2013. [BaCY11] Fernando G.S.L. Brandão, Matthias Christandl, and Jon Yard. A quasipolynomial-time algorithm for the quantum separability problem. In Proceedings of the 43rd annual ACM symposium on Theory of computing, STOC ’11, pages 343–352, 2011, arXiv:1011.2751. 21 [BBH+ 12] Boaz Barak, Fernando G.S.L. Brandão, Aram W. Harrow, Jonathan Kelner, David Steurer, and Yuan Zhou. Hypercontractivity, sum-of-squares proofs, and their applications. In STOC ’12, STOC ’12, pages 307–326, 2012, arXiv:1205.4484. [BC11] F.G.S.L. Brandão and M. Christandl. 
Detection of multiparticle entanglement: Quantifying the search for symmetric extensions, 2011, arXiv:1105.5720. [BCL94] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Inventiones mathematicae, 115(1):463–482, 1994. [BCY11] F. G. S. L. Brandão, M. Christandl, and J. Yard. Faithful squashed entanglement. Commun. Math. Phys., 306(3):805–830, 2011, arXiv:1010.1750. [BH13] Fernando G. S. L. Brandão and Aram W. Harrow. Quantum de Finetti theorems under local measurements with applications. In Proceedings of the 45th annual ACM Symposium on theory of computing, STOC ’13, pages 861–870, 2013, arXiv:1210.6367. [BKS13] Boaz Barak, Jonathan Kelner, and David Steurer. Rounding sum-of-squares relaxations. In STOC ’14, STOC ’14, 2013, arXiv:1312.6652. [BKW14] M. Braverman, Y. K. Ko, and Omri Weinstein. Approximating the best Nash equilibrium in no(log(n)) -time breaks the Exponential Time Hypothesis, 2014. ECCC TR14-092. [BRS11] Boaz Barak, Prasad Raghavendra, and David Steurer. Rounding semidefinite programming hierarchies via global correlation. In FOCS, 2011, arXiv:1104.4680. [BV11] Aditya Bhaskara and Aravindan Vijayaraghavan. Approximating matrix p-norms. In Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms, pages 497–511. SIAM, 2011, arXiv:1001.2613. [DF80] P. Diaconis and D. Freedman. Finite exchangeable sequences. Annals of Probability, 8:745–764, 1980. [DKLP06a] Etienne De Klerk, Monique Laurent, and Pablo A Parrilo. A PTAS for the minimization of polynomials of fixed degree over the simplex. Theoretical Computer Science, 361(2):210–225, 2006. [DKLP06b] Etienne De Klerk, Monique Laurent, and Pablo A Parrilo. A PTAS for the minimization of polynomials of fixed degree over the simplex. Theoretical Computer Science, 361(2):210–225, 2006. [DPS04] Andrew C. Doherty, Pablo A. Parrilo, and Federico M. Spedalieri. Complete family of separability criteria. Phys. Rev. A, 69:022308, Feb 2004, arXiv:quant-ph/0308032. [GLS93] M. Grötschel, L. Lovász, and A. Schrijver. Geometric algorithms and combinatorial optimization. Springer-Verlag, 1993. [Gur03] Leonid Gurvits. Classical deterministic complexity of Edmonds’ problem and quantum entanglement. In STOC ’03, pages 10–19, 2003, arXiv:quant-ph/0303055. [Har15] Aram W. Harrow. Finding approximate Nash equilibria using LP hierarchies, 2015. in preparation. [HLSW04] P. Hayden, D. W. Leung, P. W. Shor, and A. J. Winter. Randomizing quantum states: Constructions and applications. Commun. Math. Phys., 250:371–391, 2004, arXiv:quant-ph/0307104. [HM10] Aram W. Harrow and Ashley Montanaro. An efficient test for product states, with applications to quantum Merlin-Arthur games. In FOCS ’10, pages 633–642, 2010, arXiv:1001.0017. 22 [HM13] Aram W. Harrow and Ashley Montanaro. Testing product states, quantum Merlin-Arthur games and tensor optimization. J. ACM, 60(1):3:1–3:43, February 2013, arXiv:1001.0017. [HSR04] M. Horodecki, P.W. Shor, and M.B. Ruskai. General entanglement breaking channels. Rev. Math. Phys, 15(3):629–641, 2004, arXiv:quant-ph/0012127. [KPT00] Mikio Kato, Lars-Erik. Persson, and Yasuji Takahashi. Clarkson type inequalities and their relations to the concepts of type and cotype. Collectanea Mathematica, 51(3):327–346, 2000. [LMM03] Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In Proceedings of the 4th ACM Conference on Electronic Commerce, EC ’03, pages 36–41. ACM, 2003. 
[LRS+10] Jason D. Lee, Ben Recht, Ruslan R. Salakhutdinov, Nathan Srebro, and Joel Tropp. Practical large-scale optimization for max-norm regularization. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1297–1305. Curran Associates, Inc., 2010.
[LS07] Nati Linial and Adi Shraibman. Lower bounds in communication complexity based on factorization norms. In Proceedings of the 39th Symposium on Theory of Computing (STOC), pages 699–708, 2007.
[LS09] Nati Linial and Adi Shraibman. Lower bounds in communication complexity based on factorization norms. Random Structures & Algorithms, 34(3):368–394, 2009.
[LS14] Ke Li and Graeme Smith. Quantum de Finetti theorem measured with fully one-way LOCC norm, 2014, arXiv:1408.6829.
[LT91] Michel Ledoux and Michel Talagrand. Probability in Banach spaces. Isoperimetry and processes. Springer-Verlag, 1991.
[LW14] Ke Li and Andreas Winter. Relative entropy and squashed entanglement. Commun. Math. Phys., 326(1):63–80, 2014, arXiv:1210.3181.
[MWW09] William Matthews, Stephanie Wehner, and Andreas Winter. Distinguishability of quantum states under restricted families of measurements with an application to quantum data hiding. Commun. Math. Phys., 291(3):813–843, 2009, arXiv:0810.2327.
[Nao12] Assaf Naor. On the Banach-space-valued Azuma inequality and small-set isoperimetry of Alon-Roichman graphs. Combinatorics, Probability and Computing, 21:623–634, 2012, arXiv:1009.5695.
[Pie07] A. Pietsch. History of Banach Spaces and Linear Operators. Birkhäuser, 2007.
[Ste05] Daureen Steinberg. Computation of matrix norms with applications to robust optimization. Master's thesis, Technion, 2005. Available on A. Nemirovski's website http://www2.isye.gatech.edu/~nemirovs/.
[SW12] Yaoyun Shi and Xiaodi Wu. Epsilon-net method for optimizations over separable states. In Proceedings of the 39th International Colloquium Conference on Automata, Languages, and Programming - Volume Part I, ICALP'12, pages 798–809, Berlin, Heidelberg, 2012. Springer-Verlag, arXiv:1112.0808.
[TJ74] Nicole Tomczak-Jaegermann. The moduli of smoothness and convexity and the Rademacher averages of the trace classes S_p (1 ≤ p < ∞). Studia Mathematica, 50(2):163–182, 1974.
[Tro10] J. A. Tropp. User-friendly tail bounds for sums of random matrices, 2010, arXiv:1004.4389.
[Yan06] D. Yang. A simple proof of monogamy of entanglement. Phys. Lett., 360(1):249, 2006, arXiv:quant-ph/0604168.

INJECTIVE MODULES AND TORSION FUNCTORS arXiv:1508.07407v2 [] 12 Oct 2016 PHAM HUNG QUY AND FRED ROHRER Abstract. A commutative ring is said to have ITI with respect to an ideal a if the a-torsion functor preserves injectivity of modules. Classes of rings with ITI or without ITI with respect to certain sets of ideals are identified. Behaviour of ITI under formation of rings of fractions, tensor products and idealisation is studied. Applications to local cohomology over non-noetherian rings are given. Introduction Let R be a ring1. It is an interesting phenomenon that the behaviour of injective Rmodules is related to noetherianness of R. For example, by results of Bass, Matlis and Papp ([14, 3.46; 3.48]) the following statements are both equivalent to R being noetherian: (i) Direct sums of injective R-modules are injective; (ii) Every injective R-module is a direct sum of indecomposable injective R-modules. In this article, we investigate a further property of the class of injective R-modules, dependent on an ideal a ⊆ R, that is shared by all noetherian rings without characterising them: We say that R has ITI with respect to a if the a-torsion submodule of any injective R-module is again injective2. It is well-known that ITI with respect to a implies that every a-torsion R-module has an injective resolution whose components are a-torsion modules. We show below (1.1) that these two properties are in fact equivalent. Our interest in ITI properties stems from the study of the theory of local cohomology (i.e., the right derived cohomological functor of the a-torsion functor). Usually, local cohomology is developed over a noetherian ring (e.g., [4]), although its creators were obviously interested in a more general theory – see [7]. While some deeper theorems about local cohomology might indeed rely on noetherianness, setting up its basic machinery does not need such a hypothesis at all. The ITI property with respect to the supporting ideal serves as a convenient substitute for noetherianness, and thus identifying non-noetherian rings 2010 Mathematics Subject Classification. Primary 13C11; Secondary 13D45. Key words and phrases. ITI, non-noetherian ring, injective module, torsion functor, local cohomology, weakly proregular ideal. PHQ was partially supported by NAFOSTED (Vietnam), grant no. 101.04-2014.25. An earlier version of this article was cited as Bad behaviour of injective modules. 1Throughout the following rings are understood to be commutative. 2ITI stands for “injective torsion of injectives”. 1 INJECTIVE MODULES AND TORSION FUNCTORS 2 with ITI is a first step in extending applicability of local cohomology beyond noetherian rings, in accordance with natural demands from modern algebraic geometry and homological algebra.3 Besides the approach via ITI, there is also the extension of local cohomology to non-noetherian rings via the notion of weak proregularity. This originates in [7], was studied by several authors ([1], [17], [20]), and is quite successful from the point of view of applications. However – and in contrast to ITI – it applies only to supporting ideals of finite type, and when working in a non-noetherian setting, it seems artificial to restrict ones attention a priori to ideals of finite type. The first section contains mostly positive results. Besides giving a new proof of the fact that noetherian rings have ITI with respect to every ideal (1.16), we prove that absolutely flat rings have ITI with respect to every ideal (1.12) and that 1-dimensional local domains (e.g. 
valuation rings of height 1) have ITI with respect to ideals of finite type (1.19). We also show that ITI properties are preserved by formation of rings of fractions (1.8) and discuss further necessary or sufficient conditions for ITI. Unfortunately, there are a lot of nice rings without ITI, even with respect to some principal ideals. In the second section, we provide several such examples (2.2, 2.3, 2.9) and also show that ITI may be lost by natural constructions such as idealisation (2.10) or base change (2.5), even when performed on noetherian rings. The goal of the third section is twofold. First, we show that ITI with respect to an ideal a of finite type is strictly stronger than weak proregularity of a (3.1, 3.2). Second, to round off, we sketch how to swap ITI for noetherianness in some basic results on local cohomology. Our study of rings with or without ITI left us with at least as much questions as we found answers; some of them are pointed out in the following. We consider this work as a starting point in the search for non-noetherian rings with ITI and hope this naturally arising and interesting class of rings will be further studied and better understood. General notation and terminology follows Bourbaki’s Éléments de mathématique; concerning local cohomology we follow Brodmann and Sharp ([4]). 1. Rings with ITI Let R be a ring and let a ⊆ R be an ideal. We choose for every R-module M an injective hull eR (M) : M ֌ ER (M) with eR (M) = IdM if M is injective; for basics on injective hulls we refer the reader to [2, X.1.9]. The a-torsion functor Γa is defined as the subfunctor of S the identity functor on the category of R-modules with Γa (M) = n∈N (0 :M an ) for each R-module M. For basics on torsion functors we refer the reader to [4], but not without 3Concerning “theorems that might indeed rely on noetherianness”, let us mention that in 3.7 we exhibit rings with ITI with respect to every ideal but without Grothendieck vanishing. INJECTIVE MODULES AND TORSION FUNCTORS 3 a word of warning that over non-noetherian rings these functors may behave differently than over noetherian ones – see [18].4 (1.1) Proposition The following statements are equivalent: (i) Γa (M) is injective for any injective R-module M; (ii) Every a-torsion R-module has an injective resolution whose components are a-torsion modules; (iii) ER (Γa (ER (M))) = Γa (ER (M)) for every R-module M; (iv) Γa (ER (Γa (M))) = ER (Γa (M)) for every R-module M. Proof. The equivalences “(i)⇔(iii)” and “(ii)⇔(iv)” are immediate. “(iii)⇒(iv)”: Let M be an R-module. Then, Γa (M) ⊆ Γa (ER (Γa (M))) ⊆ ER (Γa (M)). Taking injective hulls yields ER (Γa (M)) ⊆ ER (Γa (ER (Γa (M)))) ⊆ ER (Γa (M)). As the R-module in the middle equals Γa (ER (Γa (M))) by (iii) we get the desired equality. “(iv)⇒(iii)”: Let M be an R-module. Then, Γa (ER (M)) ⊆ ER (Γa (ER (M))) ⊆ ER (M). Taking a-torsion submodules yields Γa (ER (M)) ⊆ Γa (ER (Γa (ER (M)))) ⊆ Γa (ER (M)). As the R-module in the middle equals ER (Γa (ER (M))) by (iv) we get the desired equality.  We say that R has ITI with respect to a if the statements (i)–(iv) in 1.1 hold. It is well-known that noetherian rings have ITI with respect to every ideal ([4, 2.1.4]); we give a new proof of this fact below (1.16). But let us begin by exhibiting first examples – albeit silly ones – of possibly non-noetherian rings with ITI properties. (1.2) Examples A) Every ring has ITI with respect to nilpotent ideals. 
B) If R is a 0-dimensional local ring, then proper ideals of finite type are nilpotent, and hence R has ITI with respect to ideals of finite type. N C) If K is a field, then K[(Xi )i∈N ]/hXi Xj | i, j ∈ iK[(Xi)i∈N ] is a non-noetherian 0dimensional local ring whose proper ideals are all nilpotent, hence it has ITI with respect to every ideal. • 4Although some of the following can be expressed in the language of torsion theories we avoid this mainly because Γa is not necessarily a radical if a is not of finite type – see 2.12. INJECTIVE MODULES AND TORSION FUNCTORS 4 Next we look at a necessary condition for R to have ITI with respect to a which proves to be very useful in producing rings without ITI later on. Moreover, the question about its sufficiency is related to the question whether ITI properties are preserved along surjective epimorphisms. (1.3) Proposition If R has ITI with respect to a, then ER (R/a) is an a-torsion module. Proof. Immediately from 1.1 (iv), since R/a is an a-torsion module.  (1.4) Lemma If a, b ⊆ R are ideals with b ⊆ a and M is an R-module with bM = 0 such that ER (M) is an a-torsion module, then ER/b (M) is an a/b-torsion module. Proof. Straightforward on use of the canonical isomorphism HomR (R/b, ER (M)) ∼ = ER/b (M) ([4, 10.1.16]). of R/b-modules  (1.5) Question It would be useful to know whether the converse of 1.3 holds, i.e.: (A) Suppose ER (R/a) is an a-torsion module. Does then R have ITI with respect to a? For example, if b ⊆ R is an ideal with b ⊆ a and ER (R/a) is an a-torsion module, then ER/b ((R/b)/(a/b)) is an a/b-torsion module by 1.4. So, if Question (A) could be positively answered and R has ITI with respect to a, then R/b would have ITI with respect to a/b. In particular, ITI properties would be preserved along surjective epimorphisms. • While we do not know whether or not ITI properties are preserved along surjective epimorphisms (let alone arbitrary epimorphisms), we show now that they are preserved along flat epimorphisms, hence in particular by formation of rings of fractions.5 (1.6) Proposition Let h : R → S be a flat epimorphism of rings. If R has ITI with respect to a, then S has ITI with respect to aS. Proof. Let I be an injective S-module, let b ⊆ S be an ideal, and let f : b → ΓaS (I) be a morphism of S-modules. Then, f induces a morphism of R-modules b ∩ R → Γa (I ↾R ) = ΓaS (I) ↾R . The R-module I ↾R is injective by [14, 3.6A], hence so is Γa (I ↾R ) by our hypothesis on R. So, there exists y ∈ Γa (I) such that if x ∈ b ∩ R, then f (h(x)) = xy. Let x ∈ b. Since h is a flat epimorphism, b = (b ∩ R)S by [15, IV.2.1], hence x is an S-linear combination of elements of b ∩ R, and thus f (x) = xy. Now, ΓaS (I) is injective by Baer’s criterion, and the claim is proven.  (1.7) Corollary Let h : R → S be a flat epimorphism of rings. If R has ITI with respect to every ideal (of finite type), then S has ITI with respect to every ideal (of finite type). 5This result could raise hope to obtain non-noetherian rings with ITI by considering flat epimorphisms with noetherian source. Alas, Lazard showed that flat epimorphisms with noetherian source have noetherian target ([15, IV.2.3]). INJECTIVE MODULES AND TORSION FUNCTORS 5 Proof. Immediately by [15, IV.2.1] and 1.6.  (1.8) Corollary Let T ⊆ R be a subset. If R has ITI with respect to a, then T −1 R has ITI with respect to T −1 a; if R has ITI with respect to every ideal (of finite type), then T −1 R has ITI with respect to every ideal (of finite type). Proof. Immediately by 1.6 and 1.7.  
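As a concrete illustration of the definitions at the start of this section (a standard example added here, not taken from the text): let R = Z and a = pZ for a prime p, and apply Γa to the injective Z-module Q/Z. Then

Γ_{pZ}(Q/Z) = ⋃_{n∈N} (0 :_{Q/Z} p^n Z) = Z[1/p]/Z,

the Prüfer p-group. Being divisible, it is again an injective Z-module, so this instance of statement (i) of 1.1 for the noetherian ring Z can be verified by hand.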
(1.9) Questions Since ITI properties are inherited by rings of fractions, it is natural to ask for a converse, i.e., whether they can be delocalised in one of the following ways: (B) If Rp has ITI with respect to ap for every p ∈ Spec(R), does then R have ITI with respect to a? (C) Let (fi)i∈I be a generating family of the ideal R. If Rfi has ITI with respect to afi for every i ∈ I, does then R have ITI with respect to a? The following special case of Question (B) deserves to be mentioned separately. (D) Do pointwise noetherian6 rings have ITI with respect to every ideal? •
6A ring R is pointwise noetherian if Rp is noetherian for every p ∈ Spec(R). For facts on pointwise noetherian rings, including examples of non-noetherian but pointwise noetherian rings, we suggest [9].
Now we turn to a first class of rings with ITI properties, namely, the absolutely flat ones. Recall that R is absolutely flat if it fulfills the following equivalent conditions ([14, 3.71; 4.21]): (i) Every R-module is flat; (ii) Every monogeneous R-module is flat; (iii) Rm is a field for every maximal ideal m ⊆ R; (iv) R is reduced and dim(R) ≤ 0; (v) R is von Neumann regular. Since products of infinitely many fields are absolutely flat and non-noetherian, this provides a class of interesting non-noetherian rings with ITI. (In view of Question (D) in 1.9 we may note that absolutely flat rings are pointwise noetherian.)
(1.10) Lemma If the R-module R/a is flat, then a is idempotent.
Proof. Applying the exact functor • ⊗R R/a to the exact sequence 0 → a ֒→ R → R/a yields an exact sequence 0 → a/a² → R/a −h→ R/a with h = IdR/a. So, we get a = a² and thus the claim. □
(1.11) Proposition If the R-module R/a is flat, then R has ITI with respect to a.
Proof. Let M be an a-torsion R-module. The ideal a is idempotent by 1.10, hence aM = 0, and thus we can consider M canonically as an R/a-module. Now, we consider the injective R/a-module ER/a(M) and its scalar restriction ER/a(M) ↾R, which is an a-torsion module. Moreover, the canonical morphism R → R/a being flat, ER/a(M) ↾R is injective by [14, 3.6A]. Scalar restriction to R of eR/a(M) : M → ER/a(M) being a monomorphism of R-modules M → ER/a(M) ↾R, we thus get a monomorphism of R-modules ER(M) → ER/a(M) ↾R. Therefore, ER(M) is an a-torsion module, and the claim is proven. □
(1.12) Corollary Absolutely flat rings have ITI with respect to every ideal.
Proof. Immediately by 1.11. □
(1.13) Corollary Let m ⊆ R be a maximal ideal such that the R-module R/m is injective or that Rm is a field. Then, R has ITI with respect to m.
Proof. Both conditions are equivalent by [14, 3.72], and they are equivalent to R/m being flat by [10, Lemma 1]. The claim follows then from 1.11. □
Next we study the relation between the property of an R-module M of being an a-torsion module and the assassin of M, i.e., the set AssR(M) of primes associated with M. If R is noetherian, it is well-known that M is an a-torsion module if and only if its assassin is contained in Var(a) (i.e., the set of primes containing a). Since assassins are not well-behaved over non-noetherian rings, we also consider the so-called weak assassin AssfR(M). Recall that a prime ideal p ⊆ R is weakly associated with M if it is a minimal prime of the annihilator of an element of M. For basics about weak assassins we refer the reader to [3, IV.1 Exercice 17], [12] and [21].
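As an elementary illustration of these two notions (a toy example included only to unpack the definitions), take R = ℤ and M = ℤ/6ℤ:
\[
(0:_{\mathbb Z}\overline{2}) = 3\mathbb Z,\qquad
(0:_{\mathbb Z}\overline{3}) = 2\mathbb Z,\qquad
(0:_{\mathbb Z}\overline{1}) = 6\mathbb Z .
\]
The minimal primes of these annihilators are $2\mathbb Z$ and $3\mathbb Z$, so $\operatorname{Ass}^{\mathrm f}_{\mathbb Z}(M)=\{2\mathbb Z,3\mathbb Z\}$; since $2\mathbb Z=(0:_{\mathbb Z}\overline 3)$ and $3\mathbb Z=(0:_{\mathbb Z}\overline 2)$ are themselves annihilators of elements, they are also associated primes, and indeed $\operatorname{Ass}_{\mathbb Z}(M)=\operatorname{Ass}^{\mathrm f}_{\mathbb Z}(M)$, as must be the case over a noetherian ring. Both sets are contained in $\operatorname{Var}(6\mathbb Z)$, and $M$ is a $6\mathbb Z$-torsion module.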
These considerations lead to a new proof of the fact that noetherian rings have ITI with respect to every ideal, and to the result that 1-dimensional local domains (e.g., valuation rings of height 1) have ITI with respect to ideals of finite type.
(1.14) Proposition Let M be an R-module. We consider the following statements: (1) M is an a-torsion module; (2) AssfR(M) ⊆ Var(a); (3) AssR(M) ⊆ Var(a). We have (1)⇒(2)⇒(3); if a is of finite type, we have (1)⇔(2)⇒(3); if R is noetherian, we have (1)⇔(2)⇔(3).
Proof. Recall that AssR(M) ⊆ AssfR(M), with equality if R is noetherian ([3, IV.1 Exercice 17]). Hence, (2) implies (3), and the converse holds if R is noetherian. If M is an a-torsion module and p ∈ AssfR(M), there exists x ∈ M such that p is a minimal prime of (0 :R x); since M is an a-torsion module, there exists n ∈ ℕ with aⁿx = 0, thus we get aⁿ ⊆ (0 :R x) ⊆ p, and therefore a ⊆ p. This shows (1)⇒(2). Suppose now that a is of finite type. If AssfR(M) ⊆ Var(a), then for x ∈ M every minimal prime of (0 :R x) lies in Var(a), hence a ⊆ √(0 :R x), and thus, a being of finite type, there exists n ∈ ℕ with aⁿx = 0. This shows (2)⇒(1). □
(1.15) Proposition If every R-module M with AssR(M) ⊆ Var(a) is an a-torsion module, then R has ITI with respect to a.
Proof. Every R-module has the same assassin as its injective hull ([14, 3.57]). So, if M is an a-torsion R-module, then AssR(ER(M)) = AssR(M) ⊆ Var(a) by 1.14, hence ER(M) is an a-torsion module, and thus 1.1 yields the claim. □
(1.16) Corollary Noetherian rings have ITI with respect to every ideal.
Proof. Immediately from 1.14 and 1.15. □
(1.17) Proposition If AssfR(M) = AssfR(ER(M)) for every R-module M, then R has ITI with respect to ideals of finite type.
Proof. Replacing AssR by AssfR in the proof of 1.15 yields the claim. □
(1.18) Proposition Local rings whose non-maximal prime ideals are of finite type have ITI with respect to ideals of finite type.
Proof. Let R be such a ring and let m denote its maximal ideal. If m is of finite type, then R is noetherian by Cohen's theorem ([8, 0.6.4.7]) and the claim follows from 1.16. So, suppose m is not of finite type. Let M be an R-module. By 1.17, it suffices to show AssfR(M) = AssfR(ER(M)). If M = 0, this is fulfilled. We suppose M ≠ 0 and assume AssfR(M) ⊊ AssfR(ER(M)). By [22, 1.8], a prime ideal of R of finite type is associated with M if and only if it is weakly associated with M. Therefore, our assumptions imply m ∈ AssfR(ER(M)) \ AssfR(M). So, there exists x ∈ ER(M) such that m is a minimal prime of (0 :R x), and as M ֒→ ER(M) is essential there exists r ∈ R with rx ∈ M \ 0. The ideal (0 :R rx) ⊆ R has a minimal prime p which is weakly associated with M, so that p ≠ m, but it contains (0 :R x), yielding the contradiction p = m. □
(1.19) Corollary 1-dimensional local domains have ITI with respect to ideals of finite type.
Proof. Immediately from 1.18. □
(1.20) Questions The successful application of 1.17 in the proof of 1.18 lets us ask for more: (E) For which rings R do we have AssfR(M) = AssfR(ER(M)) for every R-module M? By [14, 3.57], one class of such rings, containing the noetherian ones, are those over which assassins and weak assassins coincide: (F) For which rings R do we have AssR(M) = AssfR(M) for every R-module M?
One may note that assassins and weak assassins do not necessarily coincide over a 1and valuation ν, dimensional local domain: If R is a valuation ring with value group Q INJECTIVE MODULES AND TORSION FUNCTORS 8 and a := {x ∈ R | ν(x)2 > 2}, then AssR (R/a) is empty and AssfR (R/a) contains the maximal ideal of R.7 • We end this section with two examples, showing that the hypotheses in 1.14 cannot be omitted. (1.21) Examples A) Let K be a field. We consider the absolutely flat ring R := K N , the ideal b ⊆ R whose elements are precisely the elements of R of finite support, the R-module M := R/b, and f = (fn )n∈N ∈ R with f0 = 0 and fn = 1 for every n > 0. Then, f ∈ R \ 0 is idempotent, and hence the principal ideal a := hf iR is not nilpotent. Yassemi showed in [21, Section 1, Example] that AssR (M) = ∅ and in particular AssR (M) ⊆ Var(a). If M is an a-torsion module and x = (xn )n∈N ∈ R, then setting x′0 = 0 and x′n = xn for n > 0 it follows x′ := (x′n )n∈N ∈ b, and so there exists n0 ∈ such that if n ≥ n0 , then xn = 0 – a contradiction. This show that M is not an a-torsion module. In particular, the implication (3)⇒(1) in 1.14 does not necessarily hold, even if a is principal. N B) Let R be a 0-dimensional local ring whose maximal ideal m is not nilpotent, e.g. the ring constructed below in 2.2 c). Then, R is not an m-torsion module, but AssfR (R) ⊆ Spec(R) = Var(m). In particular, the implication (2)⇒(1) in 1.14 does not necessarily hold. • 2. Rings without ITI In this section, we exhibit several examples of rings without the ITI property. The first bunch of examples relies on the following observation about local rings with idempotent maximal ideal. (2.1) Proposition If R is a local ring whose maximal ideal m is idempotent, then the following statements are equivalent: (i) R is a field; (ii) R is noetherian; (iii) R has ITI with respect to m; (iv) ER (R/m) is an m-torsion module. Proof. By 1.3 and 1.16 it suffices to show that (iv) implies (i). Suppose ER (R/m) is an m-torsion module and let x ∈ ER (R/m) \ 0. By idempotency of m we have mx = 0, hence hxiR ∼ = R/m is simple. The canonical injection R/m ֒→ ER (R/m) being essential we have (R/m) ∩ hxiR 6= 0, hence, by simplicity, x ∈ R/m. Thus, R/m = ER (R/m) is injective, and therefore R is a field by [14, 3.72].  Recall that a valuation ring is a local domain whose set of (principal) ideals is totally ordered with respect to inclusion, and that a Bezout ring is a domain whose ideals of finite type are principal. A quotient of a valuation ring or of a Bezout ring has the property that 7This example was suggested by Neil Epstein via MathOverflow.net. INJECTIVE MODULES AND TORSION FUNCTORS 9 the set of its ideals is totally ordered with respect to inclusion or that its ideals of finite type are principal, respectively. Moreover, valuation rings are Bezout rings, and Bezout rings are coherent rings whose local rings are valuation rings ([3, VI.1.2; VII.1 Exercice 20; VII.2 Exercices 12, 14, 17]). (2.2) Proposition Let K be a field, let Q denote the additive monoid of positive rational numbers, let R := K[Q] denote the algebra of Q over K, let {eα | α ∈ Q} denote its canonical basis, and let m := heα | α > 0iR and a := heα | α > 1iR . Furthermore, set S := Rm , n := mm , b := am , T := S/b and p := n/b. 
a) R is a 1-dimensional Bezout ring that does not have ITI with respect to its maximal ideal m.8 b) S is a 1-dimensional valuation ring that has ITI with respect to non-maximal ideals, but does not have ITI with respect to its maximal ideal n. c) T is a 0-dimensional non-coherent local ring that has ITI with respect to non-maximal ideals, but does not have ITI with respect to its maximal ideal p. Proof. First, we note that m is an idempotent maximal ideal of R and the only prime ideal of R containing a. Hence, S and T are local rings with idempotent maximal ideals n and p, respectively, and T ∼ = R/a is 0-dimensional. So, there is a surjective = (R/a)m/a ∼ morphism of rings R → T , mapping m onto p. The ring R is the union of the increasing family (K[e 1 ])n∈N of principal subrings, hence a Bezout ring, and it is integral over n! its 1-dimensional subring K[e1 ] by [5, 12.4], hence 1-dimensional (cf. [11, Example 31]). Therefore, S is a 1-dimensional valuation ring, and in particular not a field. Moreover, p = (0 :T e11 + b) is not of finite type, hence T is not coherent ([14, 4.60]), and in particular not noetherian. Thus, the negative claims about ITI follow from 1.4 and 2.1. For an ideal c ⊆ S we set α(c) := inf{α ∈ Q | eα ∈ c}. We have α(n) = 0, and if c, d ⊆ S are ideals with α(c) > α(d) then c ⊆ d. Since ideals of S of finite type are of the form heα iS with α ∈ Q, it follows that an ideal c ⊆ S is of finite type if and only if eα(c) ∈ c. Now, let c ⊆ S be an ideal and let n ∈ . If α ∈ Q with eα ∈ cn , then there are α1 , . . . , αn ∈ Q with eα1 , . . . , eαn ∈ c and eα = eα1 +···+αn , implying α = α1 + · · · + αn ≥ nα(c). Therefore, α(cn ) ≥ nα(c). Conversely, if β ∈ Q with β > nα(c), then nβ > α(c), implying e β ∈ c and n thus eβ ∈ cn . Therefore, α(cn ) ≤ nα(c). It follows α(cn ) = nα(c). Next, let c, d ⊆ S be non-maximal, non-zero ideals and let n ∈ , so that α(cn ) 6= 0 and α(dn ) 6= 0. There exist k, m ∈ with α(cnk ) = kα(cn ) ≥ α(dn ) and α(dnm ) = mα(dn ) ≥ α(cn ), implying cnk ⊆ dn and dnm ⊆ cn , and therefore Γc = Γd (cf. [18, 2.2]). It follows that S has ITI with respect to non-maximal ideals if and only if it has ITI with respect to some non-maximal ideal, and by 1.19 this is fulfilled. N N 8In N [2, X.1 Exercice 27 f)] the reader gets the cruel task to show that R is a valuation domain – but its element 1 + e1 is neither invertible nor contained in m. INJECTIVE MODULES AND TORSION FUNCTORS 10 N Finally, if c ⊆ S is a non-maximal ideal then α(c) > 0, hence there exists n ∈ with α(cn ) = nα(c) > 1 and therefore cn ⊆ b. Thus, non-maximal ideals of T are nilpotent, so that T has ITI with respect to non-maximal ideals by 1.2 A).  We draw some conclusions from these examples. While a 1-dimensional local domain has ITI with respect to ideals of finite type (1.19), it need not have so with respect to every ideal (2.2 b)). While a reduced 0-dimensional ring (i.e., an absolutely flat ring) has ITI with respect to every ideal (1.12), an arbitrary 0-dimensional ring need not have so (2.2 c)). John Greenlees conjectured (personal communication, 2010) that a ring with bounded a-torsion9 for some ideal a has ITI with respect to a; in particular, domains would have ITI with respect to every ideal. Example 2.2 b) disproves this conjecture. (2.3) Corollary The polynomial algebra K[(Xi )i∈I ] in infinitely many indeterminates (Xi )i∈I over a field K does not have ITI with respect to hXi | i ∈ IiK[(Xi)i∈I ] . Proof. Let U := K[(Xi )i∈I ] and let q := hXi | i ∈ IiU . 
Assume that U has ITI with respect to q. Then, Uq has ITI with respect to qq by 1.8. Furthermore, in the notations of 2.2, there is a surjective morphism of rings U → K[Q] mapping q onto m. This morphism induces a surjective morphism of rings Uq → S mapping qq onto n. Now, 1.3 and 1.4 imply that ES (S/n) is an n-torsion module, and thus S has ITI with respect to n by 2.1. This contradicts 2.2 b).  One may note that R = K[(Xi )i∈N ] is not only a coherent domain, but in fact a filtering inductive limit of noetherian rings with flat monomorphisms (cf. [6, p. 48]), and that its hXi | i ∈ iR -adic topology is separated. (In 2.11 we will give a direct proof of the fact that ER (R/hXi | i ∈ iR ) is not a hXi | i ∈ iR -torsion module.) N N N (2.4) Question Inspired by 2.3 we ask the following question: (G) Do polynomial algebras in (countably) infinitely many indeterminates over fields have ITI with respect to (monomial) ideals of finite type? • As an application of 2.1, we show that ITI is not necessarily preserved by tensor products. In particular, it is not stable under base change. (2.5) Proposition Let p be a prime number, let K be a non-perfect field of characteristic p, and let L be its perfect closure. Then, L ⊗K L has ITI with respect to ideals of finite type, but does not have ITI with respect to its maximal ideal Nil(L ⊗K L).10 9A N ring R is of bounded a-torsion for some ideal a if there exists n ∈ with Γa (R) = (0 :R an ). is also the standard example showing that noetherianness is not necessarily preserved by tensor products (cf. [8, I.3.2.5]). 10This INJECTIVE MODULES AND TORSION FUNCTORS 11 Proof. It is well-known that L ⊗K L is a non-noetherian 0-dimensional local ring with maximal ideal Nil(L ⊗K L) ([8, I.3.2.5]). So, the first claim is clear by 1.2 B), and by 2.1, it suffices to show that Nil(L ⊗K L) is idempotent. Let f ∈ Nil(L ⊗K L) \ 0. There exist Pm m m ∈ and families (xi )m i=0 and (yi )i=0 in L \ 0 with f = i=0 xi ⊗ yi . Since L is perfect, p there exist for every i ∈ [0, m] elements vi , wi ∈ L with vi = xi and wip = yi . It follows P Pm p p p p f= m  i=0 vi ⊗ wi = ( i=0 vi ⊗ wi ) and therefore f ∈ Nil(L ⊗K L) as desired. N Up to now, we saw only rings without ITI with respect to ideals not of finite type. The second bunch of examples relies on an observation about valuation rings with maximal ideal of finite type (2.8). (2.6) Lemma Let a, b ⊆ R be ideals and suppose the canonical injection b ֒→ R is essential. Then, ER (b) is an a-torsion module if and only if a is nilpotent. Proof. As b ֒→ R is essential, R ⊆ ER (R) = ER (b). If ER (b) is a a-torsion module, then so is R, hence a is nilpotent. The converse is clear.  (2.7) Corollary Let R be a quotient of a valuation ring, let x ∈ R \ 0, and let a ⊆ R be an ideal. Then, ER (hxiR ) is an a-torsion module if and only if a is nilpotent. Proof. The set of principal ideals of R is totally ordered with respect to inclusion, hence the canonical injection hxiR ֒→ R is essential, and thus 2.6 yields the claim.  (2.8) Proposition If R is a valuation ring whose maximal ideal m is of finite type, then the following statements are equivalent: (i) R is noetherian; (ii) R has ITI with respect to m; (iii) ER (R/m) is an m-torsion module. Proof. By 1.3 and 1.16, it suffices to show that (iii) implies (i). Suppose (iii) holds and assume R is non-noetherian. By [3, VI.3.6 Proposition 9], a valuation ring with maximal ideal m of finite type is noetherian if and only if its m-adic topology is separated. 
Hence, T there exists x ∈ ( n∈N mn ) \ 0. We consider the local ring R := R/xm, its maximal ideal m := m/xm, and x := x + xm ∈ R. If there exists n ∈ with mn = 0, then mn ⊆ xm ⊆ mn+1 , and Nakayama’s Lemma yields the contradiction mn = 0. As (0 :R x) = m and therefore R/m ∼  = hxiR , we get a contradiction from 1.4 and 2.7. N Z (2.9) Proposition Let p be a prime number, let R := [(Xi )i∈N ] be the polynomial algebra in indeterminates (Xi )i∈N over , let a := hpj−i Xj − Xi | i, j ∈ , i < jiR , let m := hpiR + hXi | i ∈ iR , let S := R/a, and let n := m/a. Furthermore, denote by Yi the canonical image of Xi in S for i ∈ , let T := Sn /hpY0iSn , and let p := nn /hpY0 iSn . a) Sn is a 2-dimensional valuation ring that does not have ITI with respect to its principal maximal ideal nn . N Z N N INJECTIVE MODULES AND TORSION FUNCTORS 12 b) T is a 1-dimensional local ring that does not have ITI with respect to its principal maximal ideal p. N Proof. We claim first that if i, m, n, r, s ∈ , then pm Yir and pn Yis are comparable with respect to divisibility in S. Indeed, without loss of generality, we can suppose r < s, so if m ≤ n, then the claim is clear. Otherwise, pn Yis = pm Yir Yis−1−r Yi+m−n and thus the and f ∈ S, then claim holds. Keeping in mind the definition of a, we see that if b ∈ P r with i ≥ b and a finite family (ak )k=0 in with f = rk=0 ak Yik . In there exist i ∈ particular, if f, g ∈ S, then there exist i ∈ and finite families (ak )rk=0 and (bk )sk=0 in Pr Ps P k k with f = k=0 ak Yi and g = k=0 bk Yi . Moreover, for f = rk=0 ak pnk Yik ∈ S \ 0 with i ∈ and finite families (nk )rk=0 in and (ak )rk=0 in \ hpi there exists k0 := min{k ∈ [0, r] | ak 6= 0}, and pnk0 Yik0 divides pnk Yik for every k ∈ [0, r] with ak 6= 0. In particular, for f ∈ Sn \ 0 there exist a unit u of Sn and i, k, n ∈ with f = upn Yik , and we have f ∈ nn if and only if (n, k) 6= (0, 0). It follows that any two elements of Sn are comparable with respect to divisibility. / a. Assume aX0n ∈ a. There exists Next, we show that if a ∈ \0 and n ∈ , then aX0n ∈ m ∈ with aX0n ∈ a′ := hpj−i Xj − Xi | i, j ∈ [0, m], i < jiR′ , where R′ := [(Xi )m i=0 ]. If j−i m−i j−i m−j i, j ∈ [0, m] with i < j, then p Xj − Xi = (p Xm − Xi ) − p (p Xm − Xj ), hence m−i ′ m−i a = hp Xm −Xi | i ∈ [0, m−1]iR′ . Factoring out hp Xm −Xi | i ∈ [1, m−1]iR′ we get aU n = f (pm V −U) in the polynomial algebra [U, V ] with f ∈ [U, V ]\0, a contradiction, and thus our claim. From this we get pX0 ∈ / a and thus c := hXi | i ∈ iR /a 6= 0. Moreover, it also follows that S is a domain. Indeed, if f, g ∈ S \ 0 with f g = 0, then there exist i, k, l, m, n ∈ , a, b ∈ \ hpi, and f ′ , g ′ ∈ c with f = pm Yik (a + f ′ ) and g = pn Yil (b + g ′). Furnishing R with its canonical -graduation and S with the induced -graduation, we get abpm+n Yik+l = 0, hence abpr X0s ∈ a for some r, s ∈ , and finally the contradiction ab = 0. So, S is a domain, and hence Sn is a valuation ring. then Yi = pYi+1 , hence p divides Yi in S, and therefore nn = If i ∈ hpiSn + hYi | i ∈ iSn = hpiSn is principal. Thus, p = hpiT is principal, too. Now, let q ∈ Spec(Sn ) \ {0, nn }, so that p ∈ / q. Then, there exists an i ∈ with Yi ∈ q, since elen k ments of nn are of the form up Yi with a unit u of Sn , i, k, n ∈ and (n, k) 6= (0, 0). This implies pn Yi+n = Yi ∈ q and thus Yi+n ∈ q for every n ∈ , and also Yi−k = pk Yi ∈ q for every k ∈ [0, i]. It follows q = cn , hence Spec(Sn ) = {0, cn , nn }, and therefore dim(Sn ) = 2. 
then Yi = pn Yi+n , hence pn divides Yi in S, and therefore Furthermore, if i, n ∈ T 0 6= pY0 ∈ cn ⊆ n∈N nnn . Thus, T is 1-dimensional and Sn is non-noetherian, so a) follows from 2.8. If n ∈ ∗ , then pn divides Y0 = pn Yn , so if pY0 divides pn , then Yn is a unit of Sn , which is a contradiction. Therefore, hpY0 iSn contains no power of p, and hence p is not nilpotent. Moreover, Y0 ∈ / hpY0 iSn , so denoting by Z the canonical image of Y0 in T it N Z Z N N N Z N Z Z N Z N Z Z Z N N N N N N Z N N N N N INJECTIVE MODULES AND TORSION FUNCTORS 13 follows p = (0 :T Z), hence T /p ∼ = hZiT , and thus 1.3 and 2.7 imply that T does not have ITI with respect to p.  Consider again the ring T from 2.9. By 1.17 there exists a T -module M with AssfT (M) 6= AssfT (ET (M)), and we can explicitly describe such a module. Indeed, T has precisely one non-maximal prime and ET (T /p) is not a p-torsion module, so 1.14 implies {p} = AssfT (T /p) $ AssfT (ET (T /p)) = Spec(T ). We draw from the examples in 2.2 and 2.9 the conclusion that while a 1-dimensional local domain has ITI with respect to ideals of finite type (1.19), an arbitrary 1-dimensional local ring need not have so. Further examples of non-noetherian valuation rings whose maximal ideal is of finite type can be found in [16, p. 79, Remark] and [11, Example 32]. Based on 2.9 we show now that the idealisation of a module over a noetherian ring need not have ITI. (Recall that for a ring R and an R-module M, the idealisation of M is the R-algebra with underlying R-module R ⊕ M and with multiplication defined by (r, x) · (s, y) = (rs, ry + sx).) (2.10) Proposition We use the notations from 2.9 and denote by U the idealisation of the hpi -module obtained from hYi | i ∈ iSn /hpY0 iSn by scalar restriction. Then, U is a local ring that does not have ITI with respect to its principal maximal ideal. Z N N N Proof. For i ∈ we denote by Zi the canonical image of Yi in M := hYi | i ∈ iSn /hpY0 iSn . If i, j ∈ , then Yi Yj = Y0 Yi+j = pY0 Yi+j+1 in Sn . Therefore, an element m of M \ 0 has P the form m = kj=1 vj Zij with a strictly increasing sequence (ij )kj=1 in and a family ik −ij k Zik for j ∈ [1, k − 1], it follows that m can be (vj )j=1 in hpi \ {0}. Since Zij = p written in the form m = vZi with i ∈ and v ∈ hpi \ {0}. Suppose that i is chosen minimally with this property, and assume v ∈ hpiZhpi . Then, v = v ′ pn with v ′ ∈ hpi \ hpi and n > 0. If n > i, we get the contradiction vZi = v ′ pn Zi = v ′ pn−i−1 pZ0 = 0. If n ≤ i, we get vZi = v ′ pn Zi = v ′ Zi−n , contradicting the minimality of i. Therefore, v ∈ hpi \ hpi. and The above shows that an element of U \ 0 has the form (upn , vZi ) with i, n ∈ u, v ∈ ( hpi \ hpi) ∪ {0} such that (u, v) 6= (0, 0). Using this, it is readily checked that U is a local ring with maximal ideal q = h(p, 0)iU , that the morphism of U-modules U → U with 1 7→ (0, Z0) induces an isomorphism of U-modules U/q ∼ = h(0, Z0)iU , and T T T n n n that n∈N q = n∈N h(p , 0)iU = n∈N (hp i ⊕ M) = 0 ⊕ M = h(0, Zi ) | i ∈ iU 6= 0. and In particular, q is not nilpotent. Now, we consider (upn , vZi ) ∈ U \ 0 with i, n ∈ n −1 u, v ∈ ( hpi \ hpi) ∪ {0}. If u 6= 0, then (up , vZi )(0, u Zn ) = (0, Z0 ), and if u = 0, then v 6= 0, and therefore (upn , vZi )(pi , 0) = (0, Z0 ). From this it follows that the canonical injection h(0, Z0)iU ֒→ U is essential, thus 1.3 and 2.6 yield the claim.  N Z Z Z N N Z Z Z N N N We end this section by considering again a 0-dimensional local ring R with maximal ideal m. 
If m is nilpotent, then R has ITI with respect to every ideal (1.2 A)). If m is INJECTIVE MODULES AND TORSION FUNCTORS 14 idempotent, then R has ITI with respect to every ideal if and only if it is a field (2.1). All examples of 0-dimensional local rings without ITI constructed so far had an idempotent maximal ideal (2.2, 2.5). We present now a further 0-dimensional local ring R; its maximal ideal m is neither nilpotent nor idempotent, but T-nilpotent11, and its m-adic topology is separated. But still R does not have ITI with respect to m. (2.11) Lemma Let K be a field, let A := K[(Xi )i∈N ] be the polynomial algebra in countably infinitely many indeterminates (Xi )i∈N , let n := hXi | i ∈ iA and let a := h{Xi Xj | i, j ∈ , i 6= j} ∪ {Xii+1 | i ∈ }iA . Then, there exists f ∈ EA (A/n) \ Γn (EA (A/n)) with af = 0. N M N N denote the set of monomials in A. We furnish E := HomK (A, K) with Proof. Let its canonical structure of A-module. Its elements are families in K indexed by . For we have sg = (αst )t∈M . The morphism of A-modules A → E g = (αt )t∈M ∈ E and s ∈ mapping 1 to (αt )t∈M with α1 = 1 and αt = 0 for t 6= 1 has kernel n and hence induces a monomorphism of A-modules A/n ֌ E by means of which we consider A/n as a sub-A-module of E. / {Xii | i ∈ }. Let f = (αt )t∈M with αt = 1 for t ∈ {Xii | i ∈ } and αt = 0 for t ∈ with i 6= j and t ∈ , then Xi Xj f (t) = f (Xi Xj t) = 0. If i ∈ and t ∈ , If i, j ∈ i+1 i+1 n then Xi f (t) = f (Xi t) = 0. This implies af = 0. If n ∈ , then Xn ∈ nn and Xnn f (1) = f (Xnn ) = 1. Thus, if n ∈ , then nn f 6= 0. Furthermore, X1 f (1) = f (X11 ) = 1, and for t ∈ \ {1} we have X1 f (t) = f (X1 t) = 0, so that X1 f is a non-zero element of A/n. Therefore, the canonical injection A/n ֒→ A/n + hf iA is essential, and so it follows f ∈ EA (A/n) as desired.  N M M M M N N N N N M (2.12) Proposition Let K be a field, let A, n and a be as in 2.11, let R := A/a, denote by Yi the canonical image of Xi in R, and let m := hYi | i ∈ iR . Then, R is for i ∈ a 0-dimensional local ring; its maximal ideal m is neither nilpotent nor idempotent, but T -nilpotent; its m-adic topology is separated; the m-torsion functor Γm is not a radical 12; R does not have ITI with respect to m. N N N N Proof. We set P := {Xi Xj | i, j ∈ , i 6= j} and Q := {Xii+1 | i ∈ }. The ideal m is it follows m ⊆ Nil(R) ⊆ maximal since R/m ∼ = K. As Yi is nilpotent for every i ∈ m, hence m = Nil(R). Thus, R is a 0-dimensional local ring with maximal ideal m. If n∈ and mn = 0, then Ynn = 0, hence Xnn ∈ a – a contradiction; therefore, m is not nilpotent. If m is idempotent, then Y1 ∈ m2 = hYi2 | i > 1iR , hence X1 is a polynomial in P ∪ Q ∪ {Xi2 | i > 1}, thus a polynomial in P ∪ {X0 } ∪ {Xi2 | i > 0} – a contradiction; Q therefore, m is not idempotent. Assume now there is a family (fi )i∈N in m with ni=0 fi 6= 0 N N N Q ideal a of R is T-nilpotent if for every family (xi )i∈N in a there exists n ∈ with ni=0 xi = 0. 12A radical is a subfunctor F of Id Mod(R) with F (M/F (M )) = 0 for every R-module M . 11An INJECTIVE MODULES AND TORSION FUNCTORS N 15 for every n ∈ . Without loss of generality, we can suppose all the fi are monomials in with i 6= j, there exists k ∈ such that all the {Yj | j ∈ }. As Yi Yj = 0 for i, j ∈ Qk+1 fi are monomials in Yk , yielding the contradiction i=0 fi = 0. Thus, m is T-nilpotent. If n ∈ , then mn = hYin | i ≥ niR , so in elements of mn there occurs no Yi with i < n. 
T T It follows that in elements of n∈N mn there occurs no Yi at all, hence n∈N mn = 0 and thus R is m-adically separated. It is readily seen that 1 + Γm (R) is a non-zero element of Γm (R/Γm (R)), hence Γm is not a radical. Finally, the A-module (0 :EA (A/n) a) is not an n-torsion module by 2.11, hence by base ring independence of torsion functors and [4, 10.1.16] we see that ER (R/m) ∼ = (0 :EA (A/n) a) is not an m-torsion module. Thus, 1.3 implies that R does not have ITI with respect to m.  N N N N If, in 2.11 and 2.12, we take K instead of a field to be a selfinjective ring, then the conclusions still hold, except that R need not be a 0-dimensional local ring and that m need not be a maximal ideal of R. 3. Weak proregularity, and applications to local cohomology In this section we clarify the relation between ITI and weak proregularity, and then sketch some basic results on local cohomology for rings with ITI, but omit proofs. Results under ITI hypotheses on the closely related higher ideal transformation functors can be found in [19, 2.3]. Let R be a ring and let a ⊆ R be an ideal. The right derived cohomological functor of the a-torsion functor Γa is denoted by (Hai )i∈Z , and Hai is called the i-th local cohomology functor with respect to a. There is a canonical isomorphism of functors HomR (R/an , •) that can be canonically extended to an isomorphism of Γa (•) ∼ = lim −→n∈N ExtiR (R/an , •))i∈Z ([4, 1.3.8]). δ-functors (Hai (•))i∈Z ∼ = (lim −→n∈N Suppose now that a is of finite type and let a = (ai )ni=1 be a generating family of a. Čech cohomology with respect to a yields an exact δ-functor, denoted by (Ȟ i (a, •))i∈Z . There is a canonical isomorphism Γa (•) ∼ = Ȟ 0 (a, •) that can be canonically extended to a morphism of δ-functors γa : (Hai (•))i∈Z → (Ȟ i (a, •))i∈Z ([4, 5.1]). The sequence a is called weakly proregular if γa is an isomorphism. (By [20, 3.2] this is equivalent to the usual – but more technical – definition of weak proregularity ([1, Corrections], [17, 4.21], cf. [7, Exposé II, Lemme 9]).) The ideal a is called weakly proregular if it has a weakly proregular generating family. By [17, 6.3], this is the case if and only if every finite generating family of a is weakly proregular. Ideals in noetherian rings are weakly proregular ([4, 5.1.20]), but the converse need not hold – see below for examples. Local cohomology with respect to a weakly proregular ideal a of finite type behaves quite well, and so we may ask about the relation between this notion and ITI with respect to a. The next two results clarify this. INJECTIVE MODULES AND TORSION FUNCTORS 16 (3.1) Proposition There exist a ring R and a weakly proregular ideal a such that R does not have ITI with respect to a. Proof. The 2-dimensional domain Sn from 2.9 does not have ITI with respect to its principal maximal ideal nn = hpiSn . But p is regular, and hence nn is weakly proregular by [17, 4.22].  The proof of the next result makes use of Koszul homology and cohomology. We briefly recall our notations and refer the reader to [2, X.9] and [4, 5.2] for details. Let a = (ai )ni=1 be a finite sequence in R. We denote by K• (a) the Koszul complex with respect to a and by K • (a) := HomR (K• (a), R) the Koszul cocomplex with respect to a. We define functors K• (a, ) := K• (a) ⊗R and K • (a, ) := HomR (K• (a), ), and for i ∈ we set we set au = (aui )ni=1 . Hi (a, ) := Hi (K• (a, )) and H i (a, ) := H i (K • (a, )). 
For u ∈ For u, v ∈ with u ≤ v there is a morphism of functors K• (au , ) → K• (av , ), and these morphisms give rise to an inductive system of functors (K• (au , ))u∈N . We denote its inductive limit by K• (a∞ , ), and we set Hi (a∞ , ) := Hi (K• (a∞ , )). N N Z (3.2) Proposition Let R be a ring and let a ⊆ R be an ideal of finite type. If R has ITI with respect to a, then a is weakly proregular. Proof. Let a = (ai )ni=1 be a finite generating family of a. Let i ∈ 5.2.5] that there is a canonical isomorphism of functors Z. It follows from [4, Ȟ i (a, •) ∼ = Hn−i (a∞ , •). (1) Since inductive limits are exact, there is a canonical isomorphism of functors (H (au , •)). Hi (a∞ , •) ∼ = lim −→ i (2) N u∈ N If u ∈ , then the components of K• (au ) and K • (au ) are free of finite type, hence there are ∼ HomR (K • (au ), R) of complexes and canonical isomorphisms K• (au ) = HomR (K • (au ), R) ⊗R ∼ = HomR (K • (au ), ) of functors ([2, II.2.7 Proposition 13; II.4.2 Proposition 2]). Therefore, there is a canonical isomorphism of functors lim (H (au , )) ∼ H (HomR (K • (au ), )). = lim −→ i −→ i (3) N u∈ N u∈ If I is an injective R-module, then HomR (•, I) is exact and thus commutes with formation of homology, so that there is a canonical isomorphism of R-modules (4) lim H (HomR (K • (au ), I)) ∼ HomR (H i (au , R), I). = lim −→ −→ i N u∈ N u∈ INJECTIVE MODULES AND TORSION FUNCTORS 17 N If u ∈ , then H i (au , R) is an a-torsion module ([2, X.9.1 Proposition 1 Corollaire 2]). Hence, there is a canonical isomorphism of functors (5) HomR (H i (au , R), Γa (•)). lim HomR (H i (au , R), •) ∼ = lim −→ −→ N N u∈ u∈ Now, if I is an injective R-module, then so is Γa (I) by our hypothesis, and thus assembling the above, we get canonical isomorphisms of R-modules Ȟ i (a, I) (1)−(4) ∼ = (5) lim HomR (H n−i (au ), I) ∼ = −→ N u∈ lim HomR (H n−i (au ), Γa (I)) −→ N (1)−(4) ∼ = Ȟ i (a, Γa (I)). u∈ Since the components of nonzero degree of the Čech cocomplex with respect to a of an a-torsion module are zero, it follows that Ȟ i (a, •) is effaceable for i ∈ ∗ , and thus a is weakly proregular.  N The results from Section 1 together with 3.2 imply, for example, that ideals of finite type in absolutely flat rings or in 1-dimensional local domains are weakly proregular. The statement about absolutely flat rings can be proven directly by first noting that idempotent elements are proregular and thus generate weakly proregular ideals ([20, 2.7]), and then using the fact that an ideal of finite type in an absolutely flat rings is generated by an idempotent ([13, 4.23]). Interestingly, the same observation shows that the principal ideal a in Example 1.21 A) is weakly proregular. Finally, let us point out that 3.2 together with 1.16 yields a new proof of the fact that ideals in noetherian rings are weakly proregular. Now we turn to basic results on local cohomology for rings with ITI. The interplay between Γa and local cohomology functors is described by the following result, proven analogously to [4, 2.1.7] on use of 1.1. (3.3) Proposition Suppose R has ITI with respect to a, let i > 0 and let M be an R-module. Then, Hai (Γa (M)) = 0, and the canonical morphism Hai (M) → Hai (M/Γa (M)) is an isomorphism. In case a is of finite type, this follows immediately from 3.2 and the definitions of weak proregularity and of Čech cohomology. Using the notion of triad sequence we get the important Comparison Sequence. 
Z (3.4) Proposition Let n ∈ , let b ∈ R and b := hbiR , and suppose R has ITI with respect to a and with respect to b. Then, there is an exact sequence of functors n 0 −→ Hb1 ◦ Han−1 −→ Ha+b −→ Γb ◦ Han −→ 0. INJECTIVE MODULES AND TORSION FUNCTORS 18 In [20, 3.5], Schenzel gets the same conclusion under the hypothesis that a is of finite type and that a, b and a + b are weakly proregular. The Comparison Sequence allows for an inductive proof of an ITI variant of Hartshorne’s Vanishing Theorem ([4, 3.3.3]), but this also follows immediately from 3.2. The next two results concern the behaviour of local cohomology under change of rings: the Base Ring Independence Theorem and the Flat Base Change Theorem. Both can be proven analogously to [4, 4.2.1; 4.3.2], relying on a general variant of the vanishing result [4, 4.1.3]. The latter can be obtained in the noetherian case on use of the Mayer-Vietoris Sequence, and in general on use of the Comparison Sequence. (3.5) Proposition Let R → S be a morphism of rings. Suppose a is of finite type and R has ITI with respect to ideals of finite type. Then, there is a canonical isomorphism of δ-functors i (HaS (•) ↾R )i∈Z ∼ = (Hai (• ↾R ))i∈Z . (3.6) Proposition Let R → S be a flat morphism of rings. Suppose a is coherent and S has ITI with respect to extensions of coherent ideals of R.13 Then, there is a canonical isomorphism of δ-functors i (• ⊗R S))i∈Z . (Hai (•) ⊗R S)i∈Z ∼ = (HaS Under the hypothesis that a and aS are weakly proregular, the conclusion of the Base Ring Independence Theorem is shown to hold in [17, 6.5], while the conclusion of the Flat Base Change Theorem follows immediately from the definitions of weak proregularity and Čech cohomology. Up to now, we saw that several basic results on local cohomology can be generalised by replacing noetherianness with ITI properties (and maybe some coherence hypotheses). But alas!, this does not work for Grothendieck’s Vanishing Theorem ([4, 6.1.2]). We end this article with a positive and a negative result in this direction for absolutely flat rings. Keep in mind that absolutely flat rings have ITI with respect to every ideal by 1.12 (hence their ideals of finite type are weakly proregular by 3.2) and that they are moreover coherent. (3.7) Proposition Let R be an absolutely flat ring. a) If a ⊆ R is an ideal of finite type, then Hai = 0 for every i > 0. b) If R is non-noetherian, then there exist an ideal a ⊆ R not of finite type, an R-module M and i > dim(M) such that Hai (M) 6= 0. 13The hypothesis on S is fulfilled if it has ITI with respect to ideals of finite type. INJECTIVE MODULES AND TORSION FUNCTORS 19 Proof. An ideal a ⊆ R is idempotent, hence Γa (•) ∼ = HomR (R/a, •), and thus Hai = 0 if and only if the R-module R/a is projective. In case a is of finite type, the R-module R/a is flat and of finite presentation, hence projective. This shows a). Assume now R is non-noetherian and Hai = 0 for every i > 0. By the above, this implies projectivity of every monogeneous R-module, thus by [2, VIII.8.2 Proposition 4] the contradiction that R is semisimple (and in particular noetherian). This shows b).  Acknowledgement: For their various help during the writing of this article we thank Markus Brodmann, Paul-Jean Cahen, Neil Epstein, Shiro Goto, John Greenlees, Thomas Preu, Rodney Sharp, and Kazuma Shimomoto. We are grateful to the referee for his careful reading. References [1] L. Alonso Tarrı́o, A. Jeremı́as López, J. Lipman, Local homology and cohomology on schemes. Ann. Sci. École Norm. Sup. 
(4) 30 (1997), 1–39. Corrections available at http://www.math.purdue.edu/~lipman/papers/homologyfix.pdf [2] N. Bourbaki, Éléments de mathématique. Algèbre. Chapitre 8. Springer, 2012; Chapitre 10. Masson, 1980. [3] N. Bourbaki, Éléments de mathématique. Algèbre commutative. Chapitres 1 à 4. Masson, 1985; Chapitres 5 à 7. Herman, 1975. [4] M. P. Brodmann, R. Y. Sharp, Local cohomology (second edition). Cambridge Stud. Adv. Math. 139. Cambridge Univ. Press, 2013. [5] R. Gilmer, Commutative semigroup rings. Chicago Lectures in Math. Univ. Chicago Press, 1984. [6] S. Glaz, Commutative coherent rings. Lecture Notes in Math. 1371. Springer, 1989. [7] A. Grothendieck, Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux (SGA 2). Séminaire de géométrie algébrique du Bois Marie 1962. North-Holland, Amsterdam, 1968. [8] A. Grothendieck, J. A. Dieudonné, Éléments de géométrie algébrique. I: Le langage des schémas (seconde édition). Grundlehren Math. Wiss. 166. Springer, 1971. [9] W. Heinzer, J. Ohm, Locally noetherian commutative rings. Trans. Amer. Math. Soc. 158 (1971), 273– 284. [10] Y. Hirano, On rings whose simple modules are flat. Canad. Math. Bull. 37 (1994), 361–364. [11] H. C. Hutchins, Examples of commutative rings. Polygonal Publ. House, 1981. [12] J. Iroz, D. E. Rush, Associated prime ideals in non-noetherian rings. Canad. J. Math. 36 (1984), 344– 360. [13] T. Y. Lam, A first course in noncommutative rings (second edition). Grad. Texts in Math. 131. Springer, 2001. [14] T. Y. Lam, Lectures on modules and rings. Grad. Texts in Math. 189. Springer, 1999. [15] D. Lazard, Autour de la platitude. Bull. Soc. Math. France 97 (1969), 81–128. [16] H. Matsumura, Commutative ring theory. Translated from the Japanese. Cambridge Stud. Adv. Math. 8. Cambridge Univ. Press, 1986. [17] M. Porta, L. Shaul, A. Yekutieli, On the homology of completion and torsion. Algebr. Represent. Theory 17 (2014), 31–67. INJECTIVE MODULES AND TORSION FUNCTORS 20 [18] F. Rohrer, Torsion functors with monomial support. Acta Math. Vietnam. 38 (2013), 293–301. [19] F. Rohrer, Quasicoherent sheaves on toric schemes. Expo. Math. 32 (2014), 33–78. [20] P. Schenzel, Proregular sequences, local cohomology, and completion. Math. Scand. 92 (2003), 161– 180. [21] S. Yassemi, Coassociated primes of modules over a commutative ring. Math. Scand. 80 (1997), 175– 187. [22] S. Yassemi, Weakly associated primes under change of rings. Comm. Algebra 26 (1998), 2007–2018. Department of Mathematics, FPT University, 8 Ton That Thuyet, 10307 Hanoi, Vietnam E-mail address: [email protected] Grosse Grof 9, 9470 Buchs, Switzerland E-mail address: [email protected]
arXiv:1712.04747v1 [] 13 Dec 2017
On the Capacity of Wireless Powered Cognitive Relay Network with Interference Alignment
Sultangali Arzykulov∗, Galymzhan Nauryzbayev, Theodoros A. Tsiftsis∗ and Mohamed Abdallah‡
∗ School of Engineering, Nazarbayev University, Astana, Kazakhstan
‡ Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
Email: {sultangali.arzykulov, theodoros.tsiftsis}@nu.edu.kz, [email protected], [email protected]
Abstract
In this paper, a two-hop decode-and-forward cognitive radio system with deployed interference alignment is considered. The relay node is energy-constrained and scavenges energy from the interference signals. In the literature, there are two main energy harvesting protocols, namely, time-switching relaying and power-splitting relaying. We first demonstrate how to design the beamforming matrices for the considered primary and secondary networks. Then, the system capacity under perfect and imperfect channel state information scenarios, considering different portions of the time-switching and power-splitting protocols, is estimated.
Keywords: Decode-and-forward (DF), energy harvesting (EH), capacity, cognitive radio (CR), interference alignment (IA).
I. INTRODUCTION
Cognitive radio (CR) is a promising paradigm that can resolve the problem of spectrum scarcity [1]. A cognitive radio network (CRN) consists of primary users (PUs), who are the licensed users of the spectrum, and secondary users (SUs), who can access the spectrum only when they do not cause harmful interference to the PUs [2]. SUs can access the licensed spectrum via three paradigms: interweave, underlay and overlay [2]. Interference alignment (IA) is a powerful technique with the potential to provide higher degrees of freedom (DoFs), i.e., interference-free signaling dimensions at the receivers [3]–[7]. The promising performance of IA can also be attained in CR by suppressing interference at the PUs. By using this technique, SUs can coexist with primary networks (PNs), achieving higher data rates without causing harmful interference to the PUs [8]. Many previous studies investigated the impact of IA on the performance of underlay CR [9]–[11]. A multiple-input multiple-output (MIMO) CRN with a cooperative relay was presented in [12], where the DoFs of the network were increased by using IA. The authors in [13] and [14] introduced energy harvesting (EH) as a promising technology to prolong the lifetime of battery-powered wireless devices. There are two main EH architectures, namely, the time-switching (TS) and power-splitting (PS) protocols [15]. In [16], the authors proposed a common framework to jointly study IA and simultaneous wireless information and power transfer (SWIPT) in MIMO networks, where the users were dynamically chosen as EH terminals to improve the performance of wireless power transfer (WPT) and information transmission. The work in [17] considered a spectrum sensing policy for EH-based CR to ensure that SUs can harvest energy from the PUs' signals in the TS mode. In [18], the same system was investigated with optimal information and energy cooperation methods between SUs and PUs, where SUs harvest energy from PUs and then use that energy to transmit their own and the PUs' signals. Both [17] and [18] assumed that all nodes in the CR have perfect channel state information (CSI); in practice, however, it is more common to deal with imperfect CSI due to channel estimation errors [19].
Unlike the existing studies and to the best of our knowledge, no work has been addressed to jointly study these three important areas, namely, CRN, WPT and IA. Therefore, this work focuses on studying the underlay IA-based CRN where an energy-restricted relay node operates in the time-switching relaying (TSR) and power-splitting relaying (PSR) modes. The performance of primary receivers and the EH-based relay of the secondary network (SN) under interference environment is analyzed. Interference at the PUs and the relay is cancelled by the well-known IA technique. The network capacity for these protocols is evaluated with different settings of TS/PS portions and under various CSI scenarios. II. S YSTEM M ODEL We consider a system model consisting of two pairs of PUs and three SUs. Within the PN, each transmitter communicates to its corresponding receiver and causes interference to another primary receiver and secondary relay node. It is assumed that there is no temperature constraint at the SUs which may imply N1 1 M1 1 ... ... T1 R1 M2 1 N2 1 ... ... T2 R2 NS 1 NR 1 ND 1 ... ... ... S R D direct links interfering links Fig. 1. The IA- and EH-based CRN with two PUs and one SU sharing the spectrum simultaneously. interference at the primary receivers. The SN consists of a source (S), a relay (R) and a destination (D) nodes. S communicates with D through the assistance of the energy-constrained R that operates in a halfduplex mode and decodes and forwards (DF) the signal of S to D within two time slots. To support this, R harvests energy from the interference signals, while S and D are assumed to have external power sources. Furthermore, D is assumed to be located far from PUs and does not receive any interference. Each node of the network is assumed to be deployed with multiple antennas as shown in Fig.1, where solid lines indicate the direct channels while the dotted lines denote the interfering links. We also assume that the channels remain constant during a transmission block time T , but vary independently from one block to another. [k] Channel links between nodes are defined as follows. For the PN case, Hj,i ∈ CNj ×Mi , ∀i, j ∈ {1, 2} denotes the channel between receiver jth (Rj ) and transmitter ith (Ti ), where superscript k indicates a certain time period (TP) when the data transmission occurs. Nj and Mi are the numbers of antennas at Rj and Ti , respectively. For the SN case, HR,S and HD,R denote the channel links related to the S-R and R-D transmissions while the inter-network channels are given by Hj,R ∈ CNj ×NR , Hj,S ∈ CNj ×NS and HR,i ∈ CNR ×Mi , where NS , NR and ND denote the numbers of antennas at S, R and D, respectively. Each entry of any matrix H is assumed to be independent and identically distributed (i.i.d.) random variables according to CN (0, 1), where CN (0, 1) denotes the complex normal distribution with zero mean and unit variance. Also, note that each channel link is characterized by the corresponding distance and path-loss exponent denoted by dm,n and τm,n , ∀m ∈ {1, 2, R, D}, ∀n ∈ A = {1, 2, S, R}, respectively. Moreover, a global CSI is also assumed. We assume that the SN and PN nodes exploit the IA technique to mitigate the interference at R and Ri , respectively. Thus, any transmit node l with power Pl utilizes a transmit BF matrix Vl ∈ C(Ml or Nl )×fl , with trace{Vl VlH } = 1, ∀l ∈ A, where fl is the number of the transmitted data streams. 
At the same time, all receive nodes (except D) exploit the receive BF matrix Ut ∈ CNt ×ft , ∀t ∈ {1, 2, R}, where ft is the number of data streams to be decoded at the corresponding receiver. For simplicity, we assume that each node is deployed with N antennas (Mi = Nj = NS = NR = ND = N). The primary receivers, within two TPs, obtain the following signal [k] yj = s s Pj [k]H [k] [k] Pi [k]H [k] [k] [k] + T[k] Hj,j Vj sj + Hj,i Vi si +ñj , k τj,j Uj τj,i Uj |{z} dj,j dj,i {z } interference from SN | {z } | [k] [k] H {1, 2}, (1) interference from PN, i 6= j desired signal where ñj = Uj ∈ [k] nj is the effective zero-mean additive white Gaussian noise (AWGN) vector at the [k] [k]H output of the beamformer, with E{ñj ñj } = σñ2 j I, where E{·} denotes an expectation operator. Since we assume that sl is the vector containing the symbols drawn from i.i.d. Gaussian input signalling and chosen from a desired constellation, we have E{sl sH l } = I, with l ∈ A. Meeting all these conditions satisfies the average power constraint at each transmit node. Regarding the SN, its interference to PN can be defined as r  [k]H PS    dτj,S Uj Hj,S VS sS , if k = 1, j,S (2) T[k] = r  [k]H P  R   τj,R Uj Hj,R VR sR , if k = 2. dj,R During the S-R transmission, the received signal at R can be written as s s 2 PS H Pi X H [1] yR = UR HR,i Vi si +ñR , τR,S UR HR,S VS sS + τR,i dR,S dR,i i=1 {z } | {z } | desired signal (3) interference from PN where ñR = UH R nR is the effective noise after receive beamforming at the relay. Then, R decodes the desired signal sS and forwards the information signal to D. Hence, D receives the following signal yD = s PR τD,R HD,R VR sR + nD , dD,R (4) 2 where nD is the AWGN noise vector, with E{nD nH D } = σD I. The interference is assumed to be completely eliminated if the following conditions are satisfied at Rj as [20], [21] [k]H [k] [k] Hj,i Vi = 0, ∀i, j ∈ {1, 2}, ∀i 6= j,   Hj,S VS , if k = 1, [k]H [k] [k] Uj J = 0, where J =  H V , if k = 2, j,R R   [k]H [k] [k] rank Uj Hj,j Vj = fj , ∀j ∈ {1, 2}, Uj (5a) (5b) (5c) and at R as [1] UH R HR,i Vi = 0, ∀i ∈ {1, 2},  rank UH R HR,S VS = fS . (6a) (6b) A. Beamforming Design [1] [2] The existing interfering signals need to be orthogonalized to Uj and Uj at Rj during two TPs to satisfy conditions (5). Then, the interference at R needs to be orthogonalized to UR , as R receives the interference only during the S-R transmission. To decode the desired signal from the received signal, the interference space should be linearly independent from the desired signal space. Thus, the precoding matrices need to be designed in such a way that all interference in each receiver span to each others. Thereby, [1] [1] the interference at R1 , R2 and R, in the first TP, can be spanned as span(H1,2 V2 ) = span(H1,S VS ), [1] [1] [1] [1] span(H2,1 V1 ) = span(H2,S VS ) and span(HR,1 V1 ) = span(HR,2 V2 ), where span(A) denotes the [1] [1] vector space spanned by the column vectors of A. From these definitions, precoding matrices V1 , V2 and VS can be derived as [22] [1] [1] V2 = (HR,2 )−1 HR,1 V1 , [1] (7a) [1] VS = (H2,S )−1 H2,1 V1 , [1] [1] (7b) [1] where V1 = eig (A) and A = (HR,1 )−1 HR,2 (H1,2 )−1 H1,S (H2,S )−1 H2,1 ; eig (A) are the eigenvectors of A. Now, we derive the corresponding receive BF matrices as   [k] [k] [k] Uj = null [Hj,i Vi ]H , j 6= i,   [1] UR = null [HR,1 V1 ]H . (8a) (8b) During the R-D transmission, S stays silent while R establishes its own communication. 
Therefore, it is clear that the BF matrices design follows the same steps given by (7)–(8), and hence they are omitted for the sake of brevity. B. Imperfect CSI The model for imperfect CSI can be written as [4], [23] Ĥ = H + E, (9) where Ĥ is the observed mismatched channel, H ∼ CN (0, I) represents the real channel matrix and E is the error matrix which represents an inaccuracy degree in the estimated CSI. It is also assumed that E is independent of H. Considering the nominal signal-to-noise ratio (SNR), θ, E is described as E ∼ CN (0, λI) with λ = ψθ−κ , (10) where λ is an error variance, κ ≥ 0 and ψ > 0 determine various CSI scenarios. Finally, the real channel matrix, conditioning on Ĥ, [24], can be described as H= λ I) is independent of Ĥ. where H̃ ∼ CN (0, 1+λ 1 Ĥ + H̃, 1+λ (11) III. T IME -S WITCHING R ELAYING The time used for the S-D information transmission is given by T , and the time fraction devoted for EH purposes is given by αT , with 0 ≤ α ≤ 1. The remaining time (1−α)T is formed by two equal time phases to support the S-R and R-D transmissions [15]. At the same time, the PN adopts its one-hop transmission policy according to the SN time frame architecture as follows. The first data transmission occurs during the (1 + α)T /2 time because within this TP the network transmission is performed by the primary transmitters and the source node only. The remaining time (1 − α)T /2 is dedicated for the second PN transmission when the source node is replaced by the relay in the network transmission. Hence, the received signal at R during the EH phase can be written as s s 2 X PS Pi [1] yR = τR,S HR,S VS sS + τR,i HR,i Vi si + nR . dR,S dR,i i=1 (12) Now, neglecting the power harvested from the noise, the harvested energy at R is derived as [15] ! 2 X 2 Pi PS [1] 2 T SR , (13) HR,i Vi EH = ηαT τR,S ||HR,S VS || + τ dR,S d R,i i=1 R,i where k · k denotes the Euclidean norm. T SR The relay transmit power relates to the harvested energy as PRT SR = EH /((1 − α)T /2) and can be further rewritten as PRT SR 2αη = 1−α 2 X Pi PS 2 ||H V || + R,S S τR,S τ dR,S d R,i i=1 R,i [1] HR,i Vi 2 ! , (14) where η (0 < η < 1) is the EH conversion efficiency. During the S-R information transmission, taking into account imperfect CSI given by (11) and receive BF matrices, the information signal at the relay can be written as s s     2 X PS H Pi H 1 1 [1] IT ĤR,S + H̃R,S VS sS + ĤR,i + H̃R,i Vi si + ñR . yR = τR,S UR τR,i UR 1+λ 1+λ dR,S dR,i i=1 (15) After some manipulation, the corresponding signal-to-interference-noise ratio (SINR) at R can be expressed as PS 2 ||UH τR,S R ĤR,S VS || (1+λ)2 dR,S γR = where IP N = Pi τR,i dR,i noise power. P2 i=1 PS τR,S dR,S 2 2 ||UH R H̃R,S VS || + IP N + σñR , (16) [1] 2 2 ||UH R H̃R,i Vi || indicates the interference from the PN, and σñR stands for the Since it is assumed that the PN does not interfere the destination node, the signal received at D can be written as yD = s PRT SR τD,R dD,R   1 ĤD,R + H̃D,R VR sR + nD . 1+λ (17) Then, the respective received SINR at D is calculated as τ γD = T SR PR D,R (1+λ)2 dD,R T SR PR τD,R dD,R ||ĤD,RVR ||2 2 ||H̃D,R VR ||2 + σD , (18) 2 where σD is the noise power. 
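Before turning to the primary receivers, the closed-form design of Section II-A can be checked numerically. The following sketch (Python/NumPy, with illustrative values N = 4 antennas and f = 2 data streams, V1 taken simply as the first f eigenvectors of A, and path loss omitted since it only rescales powers; it is not the authors' simulation code) builds the precoders of (7), the receive filters of (8), and verifies the zero-forcing conditions (5a), (5b) and (6a) for the first time period:

import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
N, f = 4, 2                      # antennas per node and data streams (illustrative)

def cn(rows, cols):
    # i.i.d. CN(0,1) matrix
    return (rng.standard_normal((rows, cols))
            + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

# channels H[rx,tx] for the first time period
H11, H12, H1S = cn(N, N), cn(N, N), cn(N, N)
H21, H22, H2S = cn(N, N), cn(N, N), cn(N, N)
HR1, HR2, HRS = cn(N, N), cn(N, N), cn(N, N)

inv = np.linalg.inv
# precoders of (7): V1 from the eigenvectors of A, then V2 and VS
A = inv(HR1) @ HR2 @ inv(H12) @ H1S @ inv(H2S) @ H21
V1 = np.linalg.eig(A)[1][:, :f]
V2 = inv(HR2) @ HR1 @ V1
VS = inv(H2S) @ H21 @ V1
V1, V2, VS = (V / np.linalg.norm(V) for V in (V1, V2, VS))   # trace{V V^H} = 1

# receive filters of (8): orthogonal complements of the aligned interference
U1 = null_space((H12 @ V2).conj().T)    # N x (N - f)
U2 = null_space((H21 @ V1).conj().T)
UR = null_space((HR1 @ V1).conj().T)

# zero-forcing checks, cf. (5a), (5b) and (6a): every residual should be ~1e-15
checks = {"U1^H H12 V2": U1.conj().T @ H12 @ V2,
          "U1^H H1S VS": U1.conj().T @ H1S @ VS,
          "U2^H H21 V1": U2.conj().T @ H21 @ V1,
          "U2^H H2S VS": U2.conj().T @ H2S @ VS,
          "UR^H HR1 V1": UR.conj().T @ HR1 @ V1,
          "UR^H HR2 V2": UR.conj().T @ HR2 @ V2}
for name, residual in checks.items():
    print(f"{name}: {np.linalg.norm(residual):.2e}")
# desired links keep full rank, cf. (5c) and (6b)
print("rank U1^H H11 V1 =", np.linalg.matrix_rank(U1.conj().T @ H11 @ V1))
print("rank UR^H HRS VS =", np.linalg.matrix_rank(UR.conj().T @ HRS @ VS))

Under this construction HR,2 V2 and HR,1 V1 span the same subspace by definition, H2,S VS coincides with H2,1 V1, and each column of H1,S VS is proportional to the corresponding column of H1,2 V2; this is why a single (N − f)-dimensional receive subspace suppresses both interfering links at each receiver.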
Finally, the received SINR for the primary receivers Rj is derived as τ [k] γj where B [k] = Pj τ j,j dj,j [k]H ||Uj [k] [k] H̃j,j Vj ||2 + = Pj j,j dj,j (1+λ)2 [k]H ||Uj [k] [k] Ĥj,j Vj ||2 B [k] + F [k] + σñ2 j [k]H [k] [k] 2 Pi H̃j,i Vi ||i6=j τj,i ||Uj dj,i [k] , (19) is the intra-network interference of the PN due to the CSI mismatch while the inter-network interference from the SN is expressed by   [k]H  S  Pτj,S ||Uj H̃j,S VS ||2 , if k = 1, d j,S F [k] = T SR  P   Rτj,R ||U[k]H H̃j,RVR ||2, if k = 2. j (20) dj,R IV. P OWER -S PLITTING R ELAYING The PSR protocol exploits the time T divided into two equal parts to support the S-R and R-D information transmissions [25]. In the first TP, R utilizes a portion of the received signal power for EH purposes, ρ, while the rest of the power, (1 − ρ), is dedicated for the S-R data transmission [26]. Thus, IT yR p = 1−ρ s PS H τR,S UR dR,S  ! s    2 X 1 1 Pi [1] H ĤR,S + H̃R,S VS sS + ĤR,i + H̃R,i Vi si + ñR . τR,i UR 1+λ 1+λ dR,i i=1 the relay utilizes the energy harvested from the received signal given by s s 2 X ρPS ρPi √ [1] EH yR = ρnR . τR,S HR,S VS sS + τR,i HR,i Vi si + dR,S dR,i i=1 (24) (21) EH By assuming that the received signal yR at R is used only for WPT and the noise power is neglected, the instantaneous harvested energy can be expressed as [15] P SR EH ηρT = 2 2 X Pi PS 2 τR,S ||HR,S VS || + τ dR,S d R,i i=1 R,i [1] HR,i Vi 2 ! , (22) where 0 < ρ < 1 is the signal power portion dedicated for EH purposes. The relay transmit power as a P SR function of the harvested energy is given by PRP SR = 2EH /T , and it can be then written as [15] ! 2 X 2 P P i S [1] 2 . HR,i Vi PRP SR = ηρ τR,S ||HR,S VS || + τ dR,S d R,i i=1 R,i (23) Using (11), the same signal, but with the power portion of (1 − ρ) and with the receive BF matrix UR applied, can be received at the information decoder terminal as shown in (24) at the top of the next page. Now, by using (24), the SINR of the S-R link can be derived as γR = where IP N = Pi (1−ρ) τR,i dR,i P2 i=1 PS (1−ρ) 2 ||UH τR,S R ĤR,S VS || (1+λ)2 dR,S PS (1−ρ) H 2 τR,S ||UR H̃R,S VS || dR,S + IP N + σñ2 R , (25) [1] 2 ||UH R H̃R,i Vi || denotes the interference from the PN. Then, the corresponding SINR at D is calculated as γD = 2 where σD is the noise power. P SR PR τD,R dD,R (1+λ)2 P SR PR τD,R D,R dD,R ||ĤD,RVR ||2 ||H̃ 2 VR ||2 + σD , (26) Finally, the SINR of the primary users can be calculated as τ [k] γj = where B [k] = Pj τ j,j dj,j [k]H ||Uj [k] [k] H̃j,j Vj ||2 + Pj j,j dj,j (1+λ)2 [k]H ||Uj [k] [k] Ĥj,j Vj ||2 B [k] + F [k] + σñ2 j [k]H [k] [k] 2 Pi H̃j,i Vi ||i6=j τj,i ||Uj dj,i [k] , (27) is the intra-network interference of the PN due to the CSI mismatch while the inter-network interference from the SN is expressed by   [k]H  S  Pτj,S ||Uj H̃j,S VS ||2 , if k = 1, dj,S [k] F = P SR    PRτj,R ||U[k]H H̃j,R VR ||2 , if k = 2. j (28) dj,R V. E RGODIC C APACITY For the PN receivers, the general derivation of the instantaneous capacity can be written as [27] Cj = log2 (1 + γj ), (29) where γj is the instantaneous SINR at Rj . Regarding the SN operating in the DF mode, the end-to-end capacity at D is related to the weakest link of the S-R and R-D transmissions and can be written as [28] CD = 1 log2 (1 + min (γR , γD )) , 2 (30) where γR and γD denote the instantaneous SINR at R and D, respectively. Now, we derive an expression for the capacity of the TSR-based system. 
Using (16), (18) and (30), the capacity of the destination node can be written as CD = 1−α log2 (1 + min (γR , γD )) . 2 (31) Regarding the PN, the capacity of Rj can be written as Cj = where E [k] =    1+α , when k = 1, 2   1−α , otherwise, 2 2 X k=1 [k] E [k] log2 (1 + γj ), j ∈ {1, 2}, with k ∈ {1, 2}. (32) Using (25), (26) and (30), the capacity of the destination node for the PSR-based system can be written as CD = 1 log2 (1 + min (γR , γD )) . 2 (33) Regarding the PN, the capacity of Rj can be written using (27) as 2 Cj = 1X [k] log2 (1 + γj ), j ∈ {1, 2}. 2 k=1 (34) We evaluate the capacity of the optimized TSR and PSR-based systems as a function of SNR. To begin with, we find the optimal values of α and ρ given by α∗ and ρ∗ with η = 0.8 for the corresponding TSR and PSR protocols by solving the following dCD /dα = 0 and dCD /dρ = 0. VI. S IMULATION R ESULTS In this section, we present numerical examples for the capacity expressions derived above. The adopted system parameters are as follows: the channel distances and path loss exponents are identical and given by d = 3 m and τ = 2.7, respectively; the EH conversion efficiency η = 0.8; the fixed transmit powers are assumed to be equal (P1 = P2 = PS ). The calculated optimal EH time fraction α and power-splitting factor ρ are taken as 0.19 and 0.75, accordingly. Finally, the values of (κ, ψ) such as (1.5, 15), (1, 10), (0, 0.001) are considered to describe various CSI mismatch scenarios. Fig. 2 presents an insight into how the CSI mismatch affects the obtainable capacity of the PUs and the SU of the considered system model. When κ = 0, it can be noticed that ψ has a significant impact on the system capacity. At ψ = 0.001 and SNR of 20 dB, the capacity loss of the PUs equals 1.85 bit/s/Hz while the capacity losses of the destination node for the TSR and PSR protocols are given by 0.09 and 0.17 bit/s/Hz, respectively. Hence, it is obvious that the CSI mismatch severely degrades the performance of the PUs which can be explained by the higher level of the interference due to the number of the transmitting nodes. In general, the capacity of the destination node of the PSR-based system always outperforms the capacity of the TSR protocol even the former relaying method experiences worse performance degradation than the latter one. When κ 6= 0, it can be noticed that the capacity demonstrates a poor performance in the low SNR region. On the other side, the dependence on the SNR results in the capacity growth when 2,6 4 8 perfect CSI 2,4 PU Capacity (bits/s/Hz) Capacity (bits/s/Hz) = TSR 2,2 2,0 6 18,0 18,5 4 19,0 19,5 20,0 = PU TSR PSR PSR 3 = 2 1 2 0 0 0 2 4 6 8 10 12 14 16 18 20 0 2 4 6 8 SNR (dB) 10 12 14 16 18 20 SNR (dB) (a) Perfect CSI vs. CSI mismatch (κ = 0, ψ = 0.001). (b) CSI mismatch: (κ = 1.5, ψ = 15) vs. (κ = 1, ψ = 10). Fig. 2. Capacity vs. SNR of the primary user and the destination node operating in the TSR and PSR protocols under different CSI scenarios. 8 Perfect CSI Capacity (bit/s/Hz) 7 PSR 6 5 4 3 2 1 TSR 0 0,0 0,2 0,4 0,6 0,8 1,0 EH time fraction Power splitting factor Fig. 3. Capacity vs. the EH time fraction and power splitting factor for the TSR and PSR at 20 dB, respectively. SNR increases. Being κ large makes the curve slope sharper. Fig. 3 illustrates some simulated results for the capacities as a function of α and ρ for various CSI acquisition scenarios. The results for the TSR and PSR systems are obtained from (31) and (33), respectively. 
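The optimization of α and ρ mentioned above can also be carried out numerically. The sketch below grid-searches the end-to-end capacity expressions (31) and (33) instead of solving dC_D/dα = 0 and dC_D/dρ = 0 in closed form, and it deliberately strips the model down to deterministic scalar channel gains with perfect CSI and no primary interference; the gains g_sr, g_rd and the noise power are placeholders, so the resulting α* and ρ* are not the values reported in the paper.

```python
import numpy as np

# Grid search for the EH time fraction and power-splitting factor that maximize
# the DF end-to-end capacity, under a simplified deterministic scalar model.
eta, sigma2, P_S = 0.8, 1e-2, 1.0
g_sr, g_rd = 1.0, 1.0            # placeholder channel power gains (path loss absorbed)

def c_tsr(alpha):
    g_R = P_S * g_sr / sigma2                          # S-R SNR (interference ignored)
    P_R = 2 * eta * alpha / (1 - alpha) * P_S * g_sr   # relay power, eq. (14) structure
    g_D = P_R * g_rd / sigma2                          # R-D SNR
    return (1 - alpha) / 2 * np.log2(1 + min(g_R, g_D))      # eq. (31)

def c_psr(rho):
    g_R = (1 - rho) * P_S * g_sr / sigma2              # signal scaled by (1 - rho), eq. (25)
    P_R = eta * rho * P_S * g_sr                       # relay power, eq. (23) structure
    g_D = P_R * g_rd / sigma2
    return 0.5 * np.log2(1 + min(g_R, g_D))                   # eq. (33)

grid = np.linspace(0.01, 0.99, 981)
alpha_star = max(grid, key=c_tsr)
rho_star = max(grid, key=c_psr)
print(f"alpha* ~ {alpha_star:.2f}  (C_D = {c_tsr(alpha_star):.2f} bit/s/Hz)")
print(f"rho*   ~ {rho_star:.2f}  (C_D = {c_psr(rho_star):.2f} bit/s/Hz)")
```

The same grid-search pattern extends directly to averaging over random channel realizations when the ergodic rather than the instantaneous capacity is of interest.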
For the case of the PSR-based system, no sufficient portion of power allocated for EH will consequently result in poor capacity. At the other extreme, being ρ too large which results in too much unnecessarily harvested energy at the cost of the power level of the received signal leading to poor capacity. Similarly, this justification applies to α in the TSR-based system. VII. C ONCLUSION In this paper, we analyzed the capacity of different wireless powered IA-based DF CRN considering the EH protocols, namely, TSR and PSR. The three special scenarios of the imperfect CSI given by (1.5, 15), (1, 10), (0, 0.001) were studied to analyse the impact of the CSI quality on the system capacity. The presented results demonstrated that an optimal selection of the power splitting factor in the PSR protocol and the EH time factor in the TSR protocol were found to be important in achieving the best performance. Finally, the optimized PSR-based system was shown to have the best performance over the optimized TSR system. R EFERENCES [1] S. Haykin, “Cognitive radio: Brain-empowered wireless communications,” IEEE J. Sel. Areas Commun., vol. 23, no. 2, pp. 201–202, Apr. 2005. [2] A. Goldsmith, S. Jafar,L. Maric and S. Srinivasa, “Breaking spectrum gridlock with cognitive radios: an information theoretic perspective,” Proc. IEEE, vol. 97, no. 5, pp. 894–914, May 2009. [3] G. Nauryzbayev and E. Alsusa, “Enhanced Multiplexing Gain Using Interference Alignment Cancellation in Multi-cell MIMO Networks,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 357–369, Jan. 2016. [4] G. Nauryzbayev and E. Alsusa, “Interference Alignment Cancellation in Compounded MIMO Broadcast Channels with General Message Sets,” IEEE Trans. Commun., vol. 63, no. 10, pp. 3702–3712, Oct. 2015. [5] G. Nauryzbayev, S. Arzykulov, E. Alsusa and T. A. Tsiftsis, “A closed-form solution to implement interference alignment and cancellation scheme for the MIMO three-user X-channel model,” ICSPCS 2016, pp. 1-6, Dec. 2016. [6] G. Nauryzbayev, S. Arzykulov, E. Alsusa and T. A. Tsiftsis, “An alignment-based interference cancellation scheme for network-MIMO systems,” ICSPCS, pp. 1-6, Dec. 2016. [7] V. R. Cadambe, S. A. Jafar, “Interference alignment and degrees of freedom of the K-user interference channel,” IEEE Trans. Inf. Theory, vol. 54, no. 8, pp. 3425–3441, Aug. 2008. [8] M. Amir, A. El-Keyi and M. Nafie, “Constrained Interference Alignment and the Spatial Degrees of Freedom of MIMO Cognitive Networks,” IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 2994–3004, May 2011. [9] Y. Yi, J. Zhang, Q. Zhang and T. Jiang, “Exploring frequency diversity with interference alignment in cognitive radio networks,” IEEE GLOBECOM, pp. 5579–5583, Dec. 2012. [10] T. Xu, L. Ma and G. Sternberg, “Practical interference alignment and cancellation for MIMO underlay cognitive radio networks with multiple secondary users,” IEEE GLOBECOM, pp. 1031–1036, Dec. 2013. [11] N. Zhao, F. Yu, H. Sun and M. Li, “Adaptive Power Allocation Schemes for Spectrum Sharing in Interference-Alignment-Based Cognitive Radio Networks,” IEEE Trans. Veh. Technol., vol. 65, no. 5, pp. 3700–3714, May 2016. [12] J. Tang, S. Lambotharan and S. Pomeroy, “Interference cancellation and alignment techniques for multiple-input and multiple-output cognitive relay networks,” IET Signal Process., vol. 7, no. 3, pp. 188–200, May 2013. [13] X. Chen, Z. Zhang, H. Chen and H. Zhang, “Enhancing wireless information and power transfer by exploiting multi-antenna techniques,” IEEE Commun. Mag., vol. 53, no. 4, pp. 
133–141, Apr. 2015. [14] Z. Ding et al., “Application of smart antenna technologies in simultaneous wireless information and power transfer,” IEEE Commun. Mag., vol. 53, no. 4, pp. 86–93, Apr. 2015. [15] A. Nasir, X. Zhou, S. Durrani, and R. Kennedy, “Relaying protocols for wireless energy harvesting and information processing,” IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3622–3636, Jul. 2013. [16] N. Zhao, F. Yu and V. Leung, “Simultaneous Wireless Information and Power Transfer in Interference Alignment Networks,” IWCMC14, pp. 1–5, Aug. 2014. [17] S. Park, H. Kim and D. Hong, “Cognitive radio networks with energy harvesting,” IEEE Trans. Wireless Commun., vol. 12, no. 3, pp. 1386–1397, Mar. 2013. [18] G. Zheng, Z. Ho, E. A. Jorswieck and B. Ottersten, “Information and energy cooperation in cognitive radio networks,” IEEE Trans. Signal Proces., vol. 62, no. 9, pp. 2290–2303, Sept. 2014. [19] F. Wang and X. Zhang, “Resource Allocation for Multiuser Cooperative Overlay Cognitive Radio Networks with RF Energy Harvesting Capability,” IEEE GLOBECOM, pp. 1–6, 2016. [20] G. Nauryzbayev and E. Alsusa, “Identifying the Maximum DoF Region in the Three-cell Compounded MIMO Network,” IEEE WCNC, pp. 1–5, Apr. 2016. [21] G. Nauryzbayev, E. Alsusa, and J. Tang, “An Alignment Based Interference Cancellation Scheme for Multi-cell MIMO Networks,” IEEE VTC, pp. 1–5, May 2015. [22] H. Sung, S. H. Park, K. J. Lee and I. Lee, “Linear precoder designs for K-user interference channels,” IEEE Trans. Wireless Commun., vol. 9, no. 1, pp. 291–301, Jan. 2010. [23] P. Aquilina and T. Ratnarajah, “Performance Analysis of IA Techniques in the MIMO IBC With Imperfect CSI,” IEEE Trans. Commun., vol. 63, no. 4, pp. 1259–1270, Apr. 2015. [24] S. Kay, “Fundamentals of statistical signal processing: Estimation theory” in Englewood Cliffs NJ: Prentice-Hall 1993. [25] G. Nauryzbayev, K. M. Rabie, M. Abdallah and B. Adebisi, “Ergodic Capacity Analysis of Wireless Powered AF Relaying Systems over α - µ Fading Channels,” in Proc. IEEE GLOBECOM, pp. 1–6, Dec. 2017. [26] S. Arzykulov, G. Nauryzbayev, T. A. Tsiftsis and M. Abdallah, “Error Performance of Wireless Powered Cognitive Relay Network with Interference Alignment,” in Proc. PIMRC, pp. 1–5, Oct. 2017. [27] J. Tang and S. Lambotharan, “Interference Alignment Techniques for MIMO Multi-Cell Interfering Broadcast Channels,” IEEE Trans. Commun., vol. 61, no. 1, pp. 164–175, Jan. 2013. [28] S. Arzykulov, G. Nauryzbayev and T. A. Tsiftsis, “Underlay Cognitive Relaying System Over α - µ Fading Channels,” IEEE Commun. Lett., vol. 21, no. 1, pp. 216–219, Jan. 2017
Collaborative vehicle routing: a survey Margaretha Gansterera,∗, Richard F. Hartla arXiv:1706.05254v1 [cs.MA] 13 Jun 2017 a University of Vienna, Department of Business Administration, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria Abstract In horizontal collaborations, carriers form coalitions in order to perform parts of their logistics operations jointly. By exchanging transportation requests among each other, they can operate more efficiently and in a more sustainable way. Collaborative vehicle routing has been extensively discussed in the literature. We identify three major streams of research: (i) centralized collaborative planning, (ii) decentralized planning without auctions, and (ii) auction-based decentralized planning. For each of them we give a structured overview on the state of knowledge and discuss future research directions. Keywords: Logistics, Collaborations, Vehicle routing 1. Introduction The transportation industry is highly competitive and companies need to aim for a maximum level of efficiency in order to stay in business. Fierce competition brings prices down and therefore profit margins have declined to an extremely low level. To increase efficiency, these companies can establish collaborations, where parts of their logistics operations are planned jointly. By collaborative vehicle routing we refer to all kinds of cooperations, which are intended to increase the efficiency of vehicle fleet operations.1 By increasing efficiency, collaborations also serve ecological goals. It is well known ∗ Corresponding author Email addresses: [email protected] (Margaretha Gansterer), [email protected] (Richard F. Hartl) 1 We use the terms collaboration and cooperation interchangeable. In the literature, there is an agreement that collaboration is a strong type of cooperation. However, the boundary between them is vague (Cruijssen et al., 2007c). Preprint submitted to Elsevier June 19, 2017 that transportation is one of the main contributors of CO2 emissions (Ballot and Fontane, 2010). Thus, public authorities are encouraging companies to collaborate. They not only aim at reduced emissions of harmful substances, but also on reduced road congestion, and noise pollution. Moreover, collaborations in logistics have been shown to increase service levels, gain market shares, enhance capacities, and reduce the negative impacts of the bullwhip effect (Audy et al., 2012). Thus, it is not surprising that collaborative vehicle routing is an active research area of high practical importance. Related reviews by Verdonck et al. (2013) and Cruijssen et al. (2007c) exist. Both are dealing with transportation collaborations. However, Verdonck et al. (2013) focus on the operational planning of road transportation carriers (i.e. the owners and operators of transportation equipment) only. The perspective of collaborating shippers (i.e. the owners of the shipments) is not taken into account. Furthermore, they do not consider studies on centralized planning (i.e. collaboration in case of full information). We observe that about 45% of the related literature refers to central planning situations. This is an important aspect of collaborative vehicle routing, where a centralized authority is in charge of allocating requests such that requirements of all collaborators are met. Furthermore, we identify two classes of decentralized settings, which are auction-based and non-auction-based collaborations. Cruijssen et al. (2007c) give an overview on different types of horizontal collaboration, i.e. 
the levels of integration among collaborators. They do not consider operational planning problems. We find that almost 60% of the related articles were published in the last 3 years. These articles have not been covered in both of the existent reviews. The review by Guajardo and Rönnqvist (2016) deals with cost allocation in collaborative transportation, which is also an important aspect in collaborative vehicle routing. Because of this very recent survey, we can keep the cost allocation part short. Given the high volume of recent literature on collaborative vehicle routing it is now appropriate to provide a review on the state of knowledge. The contribution of our survey is threefold: 1. We also consider centralized collaborative planning. 2. We survey the literature of the last few years. 3. We give a new and broader classification of articles. The remainder of our survey is organized as follows. The research methodology used is described in Section 2. Classifications and definitions are pro2 vided in Section 3. Centralized collaborative planning is surveyed in Section 4. Sections 5 and 6 give overviews on decentralized planning with and without auctions, respectively. Each section closes with a discussion on future research directions. A summarizing conclusion is given in Section 7. 2. Research methodology In our review, we focus on studies where operations research models and solution techniques are applied. Pure empirical studies, not focusing on the operational planning problems, are not considered. However, readers interested in these empirical studies are referred to, e.g., Cruijssen et al. (2007b), Lydeka and Adomavičius (2007), Ballot and Fontane (2010), Schmoltzi and Wallenburg (2011). We also do not consider studies, where the main focus is on general design of coalitions rather, while the transportation planning problems of collaborators are neglected (e.g. Tate, 1996; Verstrepen et al., 2009; Voruganti et al., 2011; Audy et al., 2012; Guajardo and Rönnqvist, 2015). Regarding application fields, we limit our survey to studies on road transportation. Collaborations in rail (e.g. Kuo et al., 2008), naval (e.g. Agarwal and Ergun, 2010), and air (e.g. Ankersmit et al., 2014) transportation lead to interesting planning problems, but these studies do not fit within the scope of this survey. We initially did not omit any study based on its type (primary, secondary; journal articles, proceedings, etc.) or its year of publication. With this scope in mind, we searched library databases, where the major journals in operations research, operations management, management science etc. are covered. Used search terms were basically combinations of • ”collaboration”, ”cooperation”, ”coalition”, ”alliance” and • ”transportation”, ”routing”, ”logistics”, ”freight”, ”carrier”, ”shipper”. By this, we identified a first set of relevant studies. At this point we decided to only consider journal publications, in order to survey a reasonable number of articles. For each of the remaining papers we screened the reference list and added articles that had not been find in the first step. Finally, we applied a descendancy approach, which goes from old information to new information, by screening all articles that cite one of the papers that we found relevant. We proceeded until we converged to a final set of publications. 3 3. Classifications and definitions In our review, we distinguish between centralized and decentralized collaborative planning. 
In Figures 1- 3, we provide a generalized illustrations of collaborative and non-collaborative settings. In the non-collaborative setting (Figure1), each participant i, i ∈ (A, ..., N ) maximizes his individual profit Pi . This profit depends on his set of requests Ri , the payments pi (Ri ) that he gets for his requests Ri , and his costs ci (Ri ). The capacity usage Capi of a participant i is limited by his available capacity Li . B A B B N Figure 1: Generalized illustration of a non-collaborative setting with profit (Pi ) maximizing participants i, i ∈ (A, ..., N ), with limited capacity Li . Capi is the capacity usage. In case of centralized planning (Figure 2), the total profit is maximized jointly (which is denoted as ideal model by Schneeweiss, 2003). In decentralized settings (Figure 3), collaborators agree on a mechanism for exchanging subsets of their requests by revealing no or only limited information. R̄ is the set of requests that have been offered for exchange. In our study, the research stream on decentralized planning is further split up in non-auction-based and auction-based studies. All papers are classified based on the model formulation of the routing problem used. Here, the following categories can be identified. 4 A B B B N Figure 2: Generalized illustration of centralized planning with profit (Pi ) maximizing participants i, i ∈ (A, ..., N ), with limited capacity Li . Capi is the capacity usage. • Vehicle routing problems (VRP), which give the optimal sets of routes for fleets of vehicles in order to visit a given set of customers (e.g. Gendreau et al., 2008). • Arc routing problems (ARP), which assume that customers are located on arcs that have to be traversed. The capacitated ARP is the arcbased counterpart to the node-based VRP (e.g. Wøhlk, 2008). • Inventory routing problems (IRP), which combine VRP with inventory management (e.g. Bertazzi et al., 2008). • Lane covering problems (LCP), which aim at finding a set of tours covering all lanes with the objective of minimizing the total travel cost (e.g. Ghiani et al., 2008). A lane is the connection between the pickup and the delivery node of a full truckload (FTL) request. • Minimum cost flow problems (MCFP), which aim at sending goods through a network in the cheapest possible way (e.g. Klein, 1967). • Assignment problems (AP), where vehicles are assigned to requests, such that total costs are minimized (e.g. Munkres, 1957). We also indicate whether models assume customers to have delivery time windows (TW) or not. Another frequently appearing extension are pickup 5 A B B B N Figure 3: Generalized illustration of decentralized planning with profit (Pi ) maximizing participants i, i ∈ (A, ..., N ), with limited capacity Li . Capi is the capacity usage. R̄ is the subset of requests that have been offered for exchange. 6 and delivery (PD) requests. This means that goods have to be picked up at some node and to be delivered to another node, where pickup and delivery locations do not necessarily coincide with a depot. Articles can be further classified based on the investigated types of shipment. These can either be FTL or less than truckload (LTL). FTL can of course be seen as a special case of LTL, where the size of customer orders is equal to the vehicle’s capacity. Hence, LTL models are applicable for FTL settings as well. 
However, FTL is often used in the transportation of a single product, whereas LTL is usually used to transport multiple products in small volumes from depots to customers or from customers to customers. A typical application area for LTL is parcel delivery (Dai and Chen, 2012; Parragh et al., 2008). Figure 4 shows an example of non-collaborative and of collaborative vehicle routes of LTL carriers. An example of collaboration between FTL carriers is displayed in Figure 5. Carrier A drives from customer c to customer a, and from customer a to customer b, while carrier B is serving lanes b-c and b-d. By collaboration the carriers can avoid three empty return trips (Adenso-Dı́az et al., 2014a). It should be mentioned that b-c can of course also be used by carrier A in a non-collaborative setting. In this case this carrier has only one empty return trip. However, the collaboration with carrier B still improves the situation of carrier A. Players in transportation collaborations might be carriers (also denoted as freight forwarders, logistics service providers, or third party logistics providers) and shippers. Carriers are assumed to be the owners and operators of transportation equipment, while shippers own or supply the shipments. Joint routing planning is typically assumed to be done by carriers. When shippers consider collaboration, they identify attractive bundles of lanes, helping carriers to reduce empty trips in return of better rates (Ergun et al., 2007b). We observe that the huge majority of papers focuses on carrier-related cooperations. We agree with Cruijssen et al. (2007a), that from the planning perspective it does not matter whether carriers or shippers are in charge of the process. However, in decentralized settings the issue of information asymmetries has to be taken into account. Shippers and carriers typically do not having the same level of information. We therefore find it useful to distinguish whether carriers or shippers are the players in a collaboration. In the papers surveyed, a variety of both, exact and heuristic solution methodologies are represented. An overview on these methodologies is given in Figure 6. 7 No collaboration + Pickup - A3 - A1 + A1 - + + C2 C1 - + B2 - A2 + - A3 A + A2 C - A1 - + C2 C1 - C2 - - A1 + A3 + - A B2 + C A3 C5 C4 + C5 - C4 A3 B2 B1 A3 + C1 - C3 + A2 B1 C3 + Collaboration - B3 - + C2 + B B3 C1 + Depot B1 + B2 - Delivery A2 - + B3 - B3 + C3 C3 B - B1 + C5 C4 + - C5 C4 Figure 4: Example for non-collaborative and collaborative vehicle routes of three LTL carriers with pickup and delivery requests (Gansterer and Hartl, 2016a). 8 No collaboration Carrier A Carrier B a b d c Collaboration a b d c Figure 5: Example for non-collaborative and collaborative vehicle routes of two FTL carriers. Dotted arcs are empty return trips (Adenso-Dı́az et al., 2014a) 9 10 Methods Others Iterated local search: Pérez-Bernabeu et al. (2015), Quintero-Araujo et al. (2016) Tabu search: Bailey et al. (2011), Xu et al. (2016) Evolutionary algorithms: Liu et al. (2010b), Dao et al. (2014), Sanchez et al. (2016), Gansterer and Hartl (2017) Figure 6: Overview on solution methodologies. Simulation: Dahl and Derigs (2011), Cuervo et al. (2016), Quintero-Araujo et al. (2016) Multi-agent-systems: Dai and H.Chen (2011), Sprenger and Mönch (2014) Greedy heuristics: Ergun et al. (2007a), Ergun et al. (2007b), Liu et al. (2010a), Bailey et al. 
(2011), Sprenger and Mönch (2012) Local search-based: Dahl and Derigs (2011), Nadarajah and Bookbinder (2013) Metaheuristics Ant colony optimization: Sprenger and Mönch (2012) Adaptive large neighborhood search: Wang et al. (2014), Wang and Kopfer (2014), Wang and Kopfer (2015), Li et al. (2016), Schopka and Kopfer (2017) Greedy randomized search procedure: Adenso-Dı́az et al. (2014a) Lagrangian relaxation and Benders decomposition-based: Weng and Xu (2014) Branch-and-cut: Berger and Bierwirth (2010), Hernández and Peeta (2011), Hernández et al. (2011), Hernández and Peeta (2014), Fernández et al. (2016) Branch-and-price: Dai and Chen (2012), Kuyzu (2017) Matheuristics Exact Column generation: Kuyzu (2017) Lagrangian relaxation and decomposition: Dai and Chen (2012), Dai et al. (2014), Chen (2016) An important aspect of collaborative operations is how to share the gained benefits. It is shown in Guajardo and Rönnqvist (2016) that most problems in collaborative transportation use sharing methods based on cooperative game theory. The authors identify more than 40 different methods, which they categorize as traditional or ad hoc concepts. However, they show that in the huge majority of studies, one of the following three methods is used: • the well-known Shapley value (Shapley, 1953), which is generally the most applied method (e.g. Kimms and Kozeletskyi, 2016; Vanovermeire and Sörensen, 2014; Engevall et al., 2004). • proportional methods, where each carrier j gets a share αj of the total profit (e.g. Özener et al., 2013; Berger and Bierwirth, 2010; Frisk et al., 2010 • the nucleolus method initially defined by Schmeidler (1969) (e.g. Agarwal and Ergun, 2010; Guajardo and Jörnsten, 2015; Göthe-Lundgren et al., 1996). In the literature, there are basically two streams of research: (i) articles that focus on the transportation problems, while profit sharing is not taken into account, and (ii) articles dealing with profit sharing, while the transportation problem is neglected. There are only a few studies (for instance Krajewska et al., 2008), where both aspects are combined. Thus, our survey aims at building bridges between these two worlds. In our sections on future research, we provide several suggestions how profit sharing aspects should be integrated into the collaborative planning processes. 4. Centralized collaborative planning If collaborative decisions are made by a central authority having full information, this is referred to as centralized collaborative planning. An example for such a central authority might be an online platform providing services for collaborative decision making (Dai and Chen, 2012). It is obvious that under full information, the decision maker has to tackle a standard optimization problem, since the collaborative aspect is diminished by information disclosure. Thus, each transportation planning problem might be interpreted as a collaborative transportation planning problem that a decision maker with full information has to solve. However, to have in this review a reasonable 11 scope, we only survey studies that contribute to the collaborative aspect of transportation planning. An example for such a contribution is, for instance, given in Wang et al. (2014). In this study the central authority does not have full power to find an optimal solution by simply exchanging requests. It rather has to decide on the degree of collaboration taking both outsourcing and request exchange into consideration. 
It can be observed that that there are two streams of research in this area: • the assessment of the potential benefits of centralized collaborative planning versus non-cooperative settings. In non-cooperative settings players do not show any kind of collaborative efforts. We refer to this research stream as Collaboration Gain Assessment (CGA). • innovative models or innovative solutions approaches for centralized collaborative planning. These studies will be denoted as Methodological Contributions (MC). Table 1 gives an overview on studies contributing to centralized collaborative planning. We classify them according to the categories given above, and highlight their main characteristics. 12 13 Reference Adenso-Dı́az et al. (2014a) Buijs et al. (2016) Cruijssen et al. (2007a) Dai and Chen (2012) Ergun et al. (2007b) Fernández et al. (2016) Hernández and Peeta (2011) Krajewska et al. (2008) Kuyzu (2017) Lin (2008) Liu et al. (2010a) Montoya-Torres et al. (2016) Nadarajah and Bookbinder (2013) Pérez-Bernabeu et al. (2015) Quintero-Araujo et al. (2016) Sanchez et al. (2016) Soysal et al. (2016) Sprenger and Mönch (2012) Wang et al. (2014) Weng and Xu (2014) Yilmaz and Savasaneril (2012) Focus CGA MC CGA MC MC MC MC CGA MC CGA MC CGA MC CGA CGA CGA CGA CGA MC MC CGA x x x x Shipper x x x x x x x x x x x x x x x x x x Carrier Model VRP VRP VRP VRP LCP ARP MCFP VRP LCP VRP ARP VRP VRP VRP VRP VRP IRP VRP VRP ARP AP Shipment FTL LTL LTL LTL FTL LTL LTL LTL FTL LTL FTL LTL LTL LTL LTL LTL LTL LTL LTL LTL LTL x x x x x x x x x TW Table 1: References and basic characteristics for collaboration with full information. x x x x x x x x PD x x 4.1. Collaboration gain assessment One of the first studies to systematically assess the potentials of collaborative vehicle routing was presented by Cruijssen et al. (2007a). The authors consider a system with multiple companies, each having a separate set of distribution orders. Goods are picked up at a single distribution center and delivered to customer sites. Both, a non-cooperative setting, where each company solves the planning problem independently, and a cooperative setting, where routes are planned jointly are investigated. It is shown that joint route planning can achieve synergy values of up to 30%. Many other studies confirm the observation of Cruijssen et al. (2007a), that centralized collaborative planning has the potential to improve total profits by around 20-30% of the non-cooperative solution (e.g. MontoyaTorres et al., 2016; Soysal et al., 2016). A real-world setting, where a local courier service of a multi-national logistics company is investigated by Lin (2008). It is shown that the cooperative strategy, where courier routes are planned jointly, outperforms the non-cooperative setting by up to 20% of travel cost. Joint route planning is generally considered to be done by carriers (e.g. Dai and Chen, 2012; Buijs et al., 2016; Liu et al., 2010a), but also shippers can be involved in joint route planning, as long as they have direct control over the flows of goods (Cruijssen et al., 2007a). However, merging FTL lanes is mostly assumed to be done by shippers (e.g. Adenso-Dı́az et al., 2014a; Ergun et al., 2007b; Kuyzu, 2017). Here again one might argue, that also carriers can be involved in this type of horizontal collaboration, as it is considered by, e.g., Liu et al. (2010a). Horizontal collaborations not only follow economical but also ecological goals like reduced road congestion, noise pollution, and emissions of harmful substances. 
Thus, public authorities are encouraging companies to collaborate. The city of Zurich, for instance, is funding a research project aiming at improved cooperation between different transport companies by an IT-based collaboration platform (Schmelzer, 2014). In this spirit, MontoyaTorres et al. (2016) quantify the effect of collaborative routing in the field of city logistics. In order to solve real-world instances from the city of Bogotá, the centralized problem is decomposed into an assignment and a routing part. By this, the non-cooperative solution can be improved by 25.6% of the travel distance. Many other recent studies account for ecological aspects. Pérez-Bernabeu et al. (2015), for instance, examine different VRP scenarios and show that co14 operations can contribute to a noticeable reduction of expected travel costs as well as of greenhouse gas emissions. The VRP with time windows (VRPTW) and with carbon footprint as a constraint is proposed by Sanchez et al. (2016). Using this model, the reduction of carbon emissions in a collaborative setting, where different companies pool resources, is investigated. The authors find that the total greenhouse gas emissions can be reduced by 60%, while cost savings were nearly 55%. Soysal et al. (2016) model and analyse the IRP in a collaborative environment, which accounts for perishability, energy use (CO2 emissions), and demand uncertainty. According to their experiments, the cost benefit from cooperation varies in a range of about 4-24%, while the aggregated total emission benefit varies in a range of about 8-33%. Collaboration potentials in stochastic systems has also been assessed by Sprenger and Mönch (2012). The authors investigate a real-world scenario found in the German food industry, where products are sent from manufacturers to customers via intermediate distribution centers. They are the first to show that the cooperative strategy clearly outperforms the noncooperative algorithms in a dynamic and stochastic logistics system. A large-scale VRPTW is obtained for the delivery of the orders, capacity constraints, maximum operating times for the vehicles, and outsourcing options. This problem is decomposed into rich VRP sub problems and solved by an algorithm based on ant colony systems. The proposed heuristics is tested in a rolling horizon setting using discrete event simulation. Quintero-Araujo et al. (2016) discuss the potential benefits of collaborations in supply chains with stochastic demands. A simheuristic approach is used to compare cooperative and non-cooperative scenarios. The authors find costs reduction around 4% with values rising up to 7.3%. Yilmaz and Savasaneril (2012) study the collaboration of small shippers in the presence of uncertainty. This problem focuses on markets where shippers have random transportation requests for small-volume shipments. The AP, where the coalition decides where to assign an arriving shipper and when to dispatch a vehicle, is proposed and modeled as a Markov decision process. Its performance is compared to a naive and a myopic strategy. The authors find, for instance, that when it is costly to pickup and deliver the shipments to consolidation points, the naive policy is outperformed by the policy of the coalition. While the majority of papers finds that horizontal collaborations can improve the non-cooperative solution by around 20-30%, some authors report collaboration profits outside of this range (e.g. Krajewska et al., 2008; 15 Sanchez et al., 2016; Quintero-Araujo et al., 2016). 
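The percentage gains quoted in this section are relative cost synergies. A common way to express them, written here in the notation of Section 3 (this formalization is our addition for clarity, not taken verbatim from any single cited paper), is

```latex
\mathrm{synergy} \;=\; \frac{\sum_{i} c_i(R_i)\;-\;c_{\mathrm{central}}\!\left(\textstyle\bigcup_{i} R_i\right)}{\sum_{i} c_i(R_i)},
```

where c_i(R_i) is the stand-alone routing cost of participant i and c_central is the cost of the jointly optimized plan over the pooled request set; a value of 0.3 corresponds to the 30% savings reported by Cruijssen et al. (2007a).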
Adenso-Dı́az et al. (2014a) contribute to this issue by investigating the impact of coalition sizes. Their computational experiments show that benefits are marginally decreasing with the size of the partnership. 4.2. Methodological contributions In centralized planning problems, there are several decisions that have to be taken. Typically, not only the routing but also the assignment of customers to depots has to be considered. In order to approximate optimal solutions even for large real world instances, many authors propose decomposition strategies (e.g. Dai and Chen, 2012; Nadarajah and Bookbinder, 2013; Buijs et al., 2016). While a popular assumption is that in horizontal collaborations the VRP is the underlying planning problem, also collaborative ARP or MCFP have been investigated. Fernández et al. (2016) introduce the collaboration uncapacitated ARP. This yields to a profitable ARP, where carriers have customers that they are not willing or allowed to share, and others that can be exchanged with collaborators. The model is formulated as integer linear program and solved through a branch-and-cut algorithm. The optimal hub routing problem of merged tasks is investigated by Weng and Xu (2014). This problem allows all requests to pass up to two hubs within limited distance. The underlying problem is formulated as multi-depot ARP. Solutions are generated using two heuristics based on Lagrangian relaxation and Benders decomposition. The time-dependent centralized multiple carrier collaboration problem is introduced by Hernández and Peeta (2011). The authors assume a setting where carriers either provide or consume collaborative capacity. Capacities are time-dependent but known a priori, and demand is fixed. The problem is modeled as a binary multi-commodity MCFP and solved using a branch-and-cut algorithm. Liu et al. (2010a) define the multi-depot capacitated ARP aiming for a solution with minimized empty movements of truckload carriers. A two-phase greedy algorithm is presented to solve practical large-scale problems. Not only carriers, but also shippers can conduct horizontal collaborations. In this case, they jointly identify sets of lanes that can be submitted to a carrier as attractive bundles. The goal is to offer tours with little or no asset repositioning to carriers. In return, they can get more favorable rates from the carriers. Ergun et al. (2007b) define the shipper collaboration problem, which is formulated as LCP. Solutions are generated by a greedy algorithm. 16 In Kuyzu (2017) the LCP model is extended by a constraint on the number of partners with whom the collaborative tours must be coordinated. Column generation and Branch-and-price approaches are developed for the solution of the resulting LCP variant. In collaborative vehicle routing, typically horizontal cooperations are considered. These refer to collaborative practices among companies acting at the same levels in a market (Cruijssen et al., 2007c). Vertical cooperation on the other hand, indicate hierarchical relationships, meaning that one player is the client of the other. Wang et al. (2014) were the first to present a combination of horizontal and vertical cooperation. They extend the pickup and delivery problem with time windows (PDPTW) to a combination of integrated (vertical) and collaborative (horizontal) transportation planning, where both subcontracting and collaborative request exchange are taken into account. 
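As a concrete illustration of the lane-covering idea behind the shipper collaboration problem just described, the toy heuristic below greedily chains full-truckload lanes into closed tours so that empty repositioning stays short. It is a simplified stand-in, not the algorithm of Ergun et al. (2007b), and the coordinates are invented.

```python
import math

# Toy greedy lane-covering sketch: chain FTL lanes (origin -> destination) into
# closed tours while keeping empty repositioning legs short. Illustrative data only.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# lanes as (origin, destination) coordinate pairs
lanes = [((0, 0), (4, 0)), ((4, 0), (4, 3)), ((4, 3), (0, 0)),
         ((1, 5), (6, 5)), ((6, 5), (1, 5))]

unused = set(range(len(lanes)))
tours = []
while unused:
    tour = [unused.pop()]
    while True:
        end = lanes[tour[-1]][1]                       # current end of the chain
        nxt = min(unused, key=lambda j: dist(end, lanes[j][0]), default=None)
        # stop if no lane is left or closing the tour is cheaper than extending it
        if nxt is None or dist(end, lanes[nxt][0]) > dist(end, lanes[tour[0]][0]):
            break
        unused.remove(nxt)
        tour.append(nxt)
    tours.append(tour)

def tour_cost(tour):
    """Loaded distance plus the empty legs needed to close the tour."""
    loaded = sum(dist(*lanes[j]) for j in tour)
    empty = sum(dist(lanes[a][1], lanes[b][0]) for a, b in zip(tour, tour[1:] + tour[:1]))
    return loaded + empty

print("tours:", tours, "total cost:", round(sum(map(tour_cost, tours)), 2))
```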
The centralized planning problem is solved using adaptive large neighborhood search (ALNS) and an ALNS-based iterative heuristic. 4.3. Future research directions The area of CGA has been extensively researched. The cost advantages of centralized collaborations have been quantified in several studies, most of them finding potential benefits of 20-30%. Also ecological goals, like reduction of emissions, have been taken into account. However, most of these studies assume deterministic scenarios. Literature assessing collaboration potentials, when the central authority faces uncertainties, is scarce. Also collaboration gains in more complex, e.g. multi-modal, multi-depot transportation systems have yet to be investigated. Centralized authorities typically face huge and highly complex optimization problems, since they have to plan operations for several interconnected fleets. Thus, sophisticated solution techniques are required. There is a vast field of problems and methods that have not been investigated so far from a collaborative perspective. It could, for instance, be investigated, how a central authority exchanges requests among collaborators, while trying not to redistribute too much. This would lead to a 2-objective problem, which minimizes (i) total cost and (ii) deviation from the decentralized solution. A related question is how the central authority can motivate participants to reveal their data. These incentives might be provided by using smart profit sharing mechanisms or, e.g., side payments. To answer these questions, studies investigating how much information has to be revealed in order to achieve 17 reasonable collaboration profits would be helpful. Finally, since central decision makers face huge optimization problems, the application of solution methods for large scale VRP (e.g. Kytöjoki et al., 2007) are supposed to further improve solution quality. For this purpose, advanced processing methods like parallel computing should be taken into account (Ghiani et al., 2003). Also machine learning concepts might be valuable tools to, e.g., tune parameters (Birattari, 2009) or to automatically identify problem structures in collaborative settings. 5. Decentralized planning without auctions If players are not willing to give full information to a central planner, decentralized approaches are needed. In such a decentralized setting collaborators might cooperate individually or supported by a central authority, which does not have full information. Articles in this area contribute either to the issue of • selecting appropriate collaboration partners, we refer to this as Partner Selection (PS), • requests that should be offered to collaboration partners, which is referred to as Request Selection (RS), • methods for exchanging requests, which is denoted Request Exchange (RE). In Table 2, we categorize all papers dealing with decentralized non-auctionbased approaches and give their main characteristics. 18 19 Reference Main contribution Adenso-Dı́az et al. (2014b) PS Bailey et al. (2011) RS Cuervo et al. (2016) PS Dahl and Derigs (2011) RE Dao et al. (2014) PS Ergun et al. (2007a) RS Hernández et al. (2011) RS Hernández and Peeta (2014) RS Liu et al. (2010b) RS Özener et al. (2011) RE Sprenger and Mönch (2014) RE Wang and Kopfer (2014) RE Wang et al. 
(2014) RE Wang and Kopfer (2015) RE Shipper Carrier Model x VRP x VRP x VRP x VRP x AP x LCP x MCFP x MCFP x ARP x LCP x VRP x VRP x VRP x VRP Shipment FTL FTL LTL LTL FTL FTL LTL LTL FTL FTL LTL LTL LTL FTL x x x x x x x x x x x x x x x x x PD x x TW Table 2: References and basic characteristics for non-auction-based decentralized collaboration 5.1. Partner selection The benefit of collaborations of course depends on the partners that form the coalition and the characteristics of their operations. Potential partners might have different requirements, which have to be considered in the joint operational plan (Cuervo et al., 2016). Thus, Adenso-Dı́az et al. (2014b) propose an a priori index that could be used to roughly predict synergies between potential partners, without the necessity of solving any optimization model. This index is based on the transportation demands of the participating companies. However, average order size and the number of orders seem to be the most influential characteristic on the coalitions profit (Cuervo et al., 2016). A model integrating partner selection and collaborative transportation scheduling is developed by Dao et al. (2014). The mathematical model basically accounts for transportation times, costs, and capabilities of potential partners. 5.2. Request selection Carriers have to decide which of their requests should be offered to collaboration partners. Typically, carriers do not want to offer all their requests, but to keep some of them to be served with their private fleet. An intuitive solution would be to let the carriers solve a team orienteering problem, and put those requests into the pool, that do not appear in the optimal tour (Archetti et al., 2014). A combination of the request selection and the routing problem of collaborative truckload carriers is introduced by Liu et al. (2010b). It is assumed that carriers receive different kinds of requests, which they have to allocate to either their internal fleet or to an external collaborative carrier. The objective is to make a selection of tasks and to route the private vehicles by minimizing a total cost function. The authors develop a memetic algorithm to solve the problem. However, Gansterer and Hartl (2016b) show that in auction-based exchanges, the best request evaluation criteria take geographical aspects into account. It can be assumed, that these criteria are effective in non-auction-based settings as well. Of course, carriers not only decide on requests they want to offer, but on requests they want to acquire. Again, solving a team orienteering problem, which gives the set of valuable requests, would be an intuitive approach. However, this comes with a high computational effort since various different restrictions have to be considered. A request might be valuable for different players in the decentralized system. Thus, a coordinating authority has to 20 find a feasible assignment of requests to carriers. An efficient method to reduce empty backhauls by adding pickup and delivery tasks of partners is proposed by Bailey et al. (2011). Two optimization models are developed, where one is formulated as an integer program, and the other is formulated as a mixed integer program. A greedy heuristic and tabu search, are used to solve these problems. A numerical analysis based on real-world freight data indicates that the percentage of cost savings can be as high as 27%. In Hernández and Peeta (2014) a carrier seeks to collaborate with other carriers by acquiring capacity to service excess demand. 
Carriers first allocate requests to their private resources and then find the cost minimizing transport option for excess demand. The problem is addressed from a static perspective, formulated as a MCFP, and solved using a branch-and-cut algorithm. In Hernández et al. (2011) dynamic capacities are assumed. In vertical collaborations, beneficial requests or tours have to be selected for collaboration partners. Ergun et al. (2007a) investigate vertical collaboration between shippers and carriers. The authors present optimization technology that can be used by shippers to identify beneficial tours with little truck repositioning. Timing considerations are a key focus of their study. The effectiveness of their algorithms is shown based on real-world data. 5.3. Request exchange Once collaboration partners and requests have been selected, players have to decide on the exchange mechanism. Due to the inherent complexity, it is not common to exchange sets of unconnected requests, but parts of existing tours. This complexity can be overcome by auction-based systems. We refer to Section 6, where auction-based mechanisms are discussed. However, in non-auctioned-based frameworks, it is reasonable to trade requests being packed in vehicle routes. Wang and Kopfer (2014) propose such an routebased exchange mechanism. Carriers can iteratively generate and submit new routes based on the feedback information from an agent. This information is deduced from the dual values of a linear relaxation of a set partitioning problem. An extension including subcontracting is presented in Wang et al. (2014). The problem becomes even more complex, if dynamics of carrier coalitions are considered. Two rolling horizon planning approaches, which yield considerably superior results than isolated planning, are proposed by Wang and Kopfer (2015). 21 In the FTL market, typically lanes rather than requests are exchanged. Özener et al. (2011) consider settings in which several carriers collaborate by means of bilateral lane exchanges with and without side payments. With regard to applicability of request exchange mechanisms for realworld collaborations, some decision support systems have been developed. Dahl and Derigs (2011) present an empirical analysis of the effectiveness of a collaborative decision support system in an express carrier network. They assume carriers to solve a dynamic PDPTW by a local search-based heuristic. They show that with the support of a real-time decision support system based on an adequate compensation scheme, the network is able to perform close to the level obtainable by centralized planning. A decision support system for cooperative transportation planning where several manufacturing companies share their fleets to reduce transportation costs is presented by Sprenger and Mönch (2014). 5.4. Future research directions Decentralized non-auction-based systems have advantages and disadvantages. They are generally assumed to be less complex than auction-based approaches, since there is, for instance, no need for a bidding procedure. This might be seen as an advantage, but this comes at the price that none of the players has structured information on the collaborators’ preferences. This of course leads to relatively low collaboration profits. To overcome this drawback, there are attempts to still get some information by, for instance, doing multiple rounds of exchanges in order to approximate mutual preferences (e.g. Wang and Kopfer, 2014). 
An interesting research direction might be to compare the performance of such mechanisms with auction-based systems. Also the value of sharing information with collaboration partners has not been investigated so far. 6. Auction-based decentralized planning The decentralized exchange of requests can be organized through auctions (e.g. Ledyard et al., 2002), where collaborators submit requests to a common pool. Due to the necessity of a trading mechanism, auctions are generally supposed to be more complex than their conventional (i.e. nonauction-based) counterparts. However, auctions have more potential, since the trading mechanism can be used to indirectly share information of collaborators’ preferences. 22 Table 3: References and basic characteristics for auction-based decentralized planning Reference Ackermann et al. (2011) Berger and Bierwirth (2010) Chen (2016) Dai and H.Chen (2011) Dai et al. (2014) Gansterer and Hartl (2016b) Gansterer and Hartl (2017) Krajewska and Kopfer (2006) Li et al. (2016) Li et al. (2015) Schopka and Kopfer (2017) Xu et al. (2016) Model VRP VRP VRP VRP VRP VRP VRP VRP VRP VRP VRP LCP Shipper Carrier x x x x x x x x x x x x Shipment LTL LTL LTL LTL LTL LTL LTL LTL LTL FTL LTL FTL TW x x x x PD x x x x x x x x x x x x In horizontal collaborations, auctions are used to exchange requests. Thus, collaborators typically have both the roles of buyers and of sellers. A central authority, which is in charge of coordinating the auction process, is called the auctioneer. In combinatorial auctions, requests are not traded individually but are combined to bundles (Pekeč and Rothkopf, 2003). This is of particular importance in vehicle routing, where a request might not be attractive unless it is combined with other ones. An example for this can be seen in the upper part of Figure 4. Carrier B would probably not be interested in the individual requests C4 or C5, while the bundle (C4, C5) seems to be attractive. A bidding carrier receives the full bundle if the bidding price is accepted. If the bid is rejected, none of the items contained in the package is transferred to this carrier. This eliminates the risk of obtaining only a subset of requests which does not fit into the current request portfolio. Table 3 lists papers on auction-based decentralized planning, and highlights their basic characteristics. An early study on auction-based horizontal transportation collaboration is presented by Krajewska and Kopfer (2006). They are the first to design an auction-based exchange mechanism for collaborating carriers. Ackermann et al. (2011) discuss various goals for a combinatorial request exchange in freight logistics. They give a complete modeling proposal and show the crucial points when designing such a complex system. Berger and Bierwirth 23 (2010) divide the auction process into 5 phases: 1. Carriers decide which requests to put into the auction pool. 2. The auctioneer generates bundles of requests and offers them to the carriers. 3. Carriers place their bids for the offered bundles. 4. Winner Determination Problem: Auctioneer allocates bundles to carriers based on their bids. 5. Profit sharing: collected profits are distributed among the carriers. In the first phase, participating collaborators can decide either on selffulfillment, i.e. they plan and execute their transportation requests with their own capacities, or to offer some of them to other carriers. 
Aiming at network profit maximization, carriers should try to offer requests that are valuable for other network participants. Otherwise, the auction mechanism will not yield improved solutions. However, the identification of requests that are valuable for collaborators is not trivial since the actors do not want to reveal sensitive information. Different selection decisions are illustrated in Figure 7, where carrier A selects traded requests based on his own preferences, while the other carriers offer requests that have a higher probability to be attractive for their collaborators. In such a system, there is of course a high risk of strategic behavior, since carriers might increase individual benefits by offering unprofitable requests, and achieving very attractive ones in return. The first auction phase is investigated by Gansterer and Hartl (2016b). The authors show that the best request evaluation criteria take geographical aspects into account. They clearly dominate pure profit-based strategies. Schopka and Kopfer (2017) investigate pre-selection strategies, which they classify as (i) request potential or (ii) tour potential valuation strategies. Li et al. (2016) assume that carriers in combinatorial auctions have to solve a PDPTW with reserved requests for deciding which requests should be submitted to the auction pool. In the second phase, the requests in the pool are grouped into bundles. These are then offered to participating carriers. If it is assumed that carriers can get a set of requests that exceeds their capacities (outsourcing option), the generation of bundles can be moved to the carriers themselves. The auctioneer could then offer the set of requests without grouping them to bundles, while the carriers give their bids on self-created packages of requests. The obvious drawback of this approach is that the auctioneer cannot guarantee 24 + A1 - + + C2 C1 - - - A1 A3 + B2 A2 + A3 + - A B2 B + C1 A1 B2 B3 C4 C5 B1 B1 A2 B3 + - B3 - C C2 Auction pool + C3 C3 - + C5 + C4 - C5 C4 Figure 7: Carriers submit request to the pool. Carrier A selects requests based on marginal profits, while carriers B and C take geographical information into account (Gansterer and Hartl, 2016a). to find a feasible assignment of bundles to carriers. This the reason why an outsourcing option has to be included. A simple method to overcome strict capacities (no outsourcing), is to assume that the auctioneer assigns at most one bundle per carrier. Since carriers give their bids on bundles, they can easily communicate whether they are able to handle a specific bundle or not. However, from a practical point of view, offering all possible bundles is not manageable, since the number of bundles grows exponentially with the number of requests that are in the pool. An intuitive approach to reduce the auction’s complexity is to limit the number of requests that are traded. Li et al. (2015) do not allow the carriers to submit more than one request to the auction pool. Xu et al. (2016) show effective auction mechanisms for the truckload carrier collaboration problem with bilateral lane exchange. Again carriers offer only one lane, which is the one with the highest marginal cost. In the multi-agent framework presented by Dai and H.Chen (2011), there is only one request traded per auction round. Thus, it is a non-combinatorial auction, where carriers act as auctioneers when they want to outsource a request to other 25 carriers, whereas they act as bidders when they want to acquire a request from other carriers. 
However, limiting the number of offered items, obviously decreases the probability to find good solutions. Gansterer and Hartl (2017) show that, without a loss in solution quality, the set of offered bundles can be efficiently reduced to a relatively small subset of attractive ones. They develop a proxy function for assessing the attractiveness of bundles under incomplete information. This proxy is then used in a genetic algorithms-based framework that aims at producing attractive and feasible bundles. With only a little loss in solution quality, instances can be solved in a fraction of the computational time compared to the situation where all possible bundles are evaluated. Complexity can also be reduced by performing multi-round auctions. These are generally intended to offer subsets of the traded items in multiple rounds. Previously gained information can be used to compose the setting for the next round. By this, the bidders are never faced with the full complexity of the auction pool. A multi-round price-setting based combinatorial auction approach is proposed by Dai et al. (2014). In each round of the auction, the auctioneer updates the price for serving each request based on Lagrangian relaxation. Each carrier determines its requests to be outsourced and the requests to be acquired from other carriers by solving a request selection problem based on the prices. Regarding the last auction phase, i.e. profit sharing, we refer to a broad survey presented in Guajardo and Rönnqvist (2016). Chen (2016) propose an alternative to combinatorial auctions for carrier collaboration, which is combinatorial clock-proxy exchange. This exchange has two phases. The clock phase is an iterative exchange based on Lagrangian relaxation. In the proxy phase, the bids that each carrier submits are determined based on the information observed in the clock phase. Future research directions. Auctions can be powerful mechanisms for increasing collaboration profits. However, each of the 5 auction phases bears a complex and at least partly unsolved decision problem in itself. To make auctions efficiently applicable to real-world settings, many challenging questions still have to be answered. For instance, the strong relationship between the five auction-phases (Berger and Bierwirth, 2010) has not been investigated so far. The majority of studies focuses on one of the five decision phases, while an integrated and practically usable framework is still missing. Also the real26 istic aspect that carriers might behave strategically, opens many interesting research questions. In particular, the influence of strategic behavior in the request selection or in the bidding phase are yet to be investigated. At this point, effective profit sharing mechanisms are needed, since these have the potential to impede strategic behavior. So far, only relatively simple auction procedures have been investigated. It might be worth to adopt more complex mechanisms like, e.g., multi-round value-setting auctions (Dai et al., 2014). Also, in the literature there is no structured assessment of the potential of auctions in comparison to optimal solutions. These can of course only be guaranteed, if the auctioneer gets full information. Thus, it is worthwhile to investigate the value of information, i.e. the assessment of types or levels of information that increase solution quality. 7. Conclusion Collaborative vehicle routing is an active research area of high practical importance. 
In this review paper, we have given a structured overview and classification of the related literature. We identified three major streams of research, which are (i) centralized planning, (ii) non-auction-based decentralized planning, and (iii) auction-based decentralized planning. Literature was further classified based on the underlying planning problem and the collaboration setting. We discussed recent developments and proposed future work directions, which, for instance, are:

• the application of collaborative frameworks to more complex, e.g. multimodal, transportation systems,
• the investigation of strategic behavior, and effective profit sharing mechanisms to avoid it,
• a comparative study assessing the advantages of auction-based compared to non-auction-based systems,
• the assessment of the value of information in decentralized exchange mechanisms.

In order to produce comparable results, an open access repository of related benchmark instances would be helpful. This would enable a structured investigation of performance gaps between centralized and decentralized approaches. Publicly available data instances are provided by (i) Fernández et al. (2016) (http://or-brescia.unibs.it/instances), (ii) Wang et al. (2014), Wang and Kopfer (2015) (http://www.logistik.uni-bremen.de/), and (iii) Gansterer and Hartl (2017) (https://tinyurl.com/y85hcpry).

Acknowledgements

This work is supported by the Austrian Science Fund (FWF), project number P27858-G27.

References

Ackermann, H., Ewe, H., Kopfer, H., Küfer, K., 2011. Combinatorial auctions in freight logistics. In: Böse, J., Hu, H., Carlos, C., Shi, X., Stahlbock, R., Voss, S. (Eds.), Computational Logistics. Vol. 6971 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 1–17.

Adenso-Díaz, B., Lozano, S., Garcia-Carbajal, S., Smith-Miles, K., 2014a. Assessing partnership savings in horizontal cooperation by planning linked deliveries. Transportation Research Part A: Policy and Practice 66, 268 – 279.

Adenso-Díaz, B., Lozano, S., Moreno, P., 2014b. Analysis of the synergies of merging multi-company transportation needs. Transportmetrica A: Transport Science 10 (6), 533–547.

Agarwal, R., Ergun, Ö., 2010. Network design and allocation mechanisms for carrier alliances in liner shipping. Operations Research 58 (6), 1726–1742.

Ankersmit, S., Rezaei, J., Tavasszy, L., 2014. The potential of horizontal collaboration in airport ground freight services. Journal of Air Transport Management 40, 169 – 181.

Audy, J.-F., Lehoux, N., D'Amours, S., Rönnqvist, M., 2012. A framework for an efficient implementation of logistics collaborations. International Transactions in Operational Research 19 (5), 633–657. URL http://dx.doi.org/10.1111/j.1475-3995.2010.00799.x

Bailey, E., Unnikrishnan, A., Lin, D.-Y., 2011. Models for minimizing backhaul costs through freight collaboration. Transportation Research Record 2224, 51–60.

Ballot, E., Fontane, F., 2010. Reducing transportation CO2 emissions through pooling of supply networks: perspectives from a case study in French retail chains. Production Planning & Control 21 (6), 640–650.

Berger, S., Bierwirth, C., 2010. Solutions to the request reassignment problem in collaborative carrier networks. Transportation Research Part E: Logistics and Transportation Review 46, 627–638.

Bertazzi, L., Savelsbergh, M., Speranza, M. G., 2008. Inventory Routing. Springer US, Boston, MA, pp. 49–72.

Birattari, M., 2009. Tuning Metaheuristics.

Buijs, P., Alvarez, J. A. L., Veenstra, M., Roodbergen, K.
J., 2016. Improved collaborative transport planning at dutch logistics service provider fritom. Interfaces 46 (2), 119 – 132. Chen, H., 2016. Combinatorial clock-proxy exchange for carrier collaboration in less than truck load transportation. Transportation Research Part E: Logistics and Transportation Review 91, 152 – 172. Cruijssen, F., Bräysy, O., Dullaert, W., Fleuren, H., Salomon, M., 2007a. Joint route planning under varying market conditions. International Journal of Physical Distribution & Logistics Management 37 (4), 287–304. Cruijssen, F., Cools, M., Dullaert, W., 2007b. Horizontal cooperation in logistics: Opportunities and impediments. Transportation Research Part E: Logistics and Transportation Review 43 (2), 129 – 142. Cruijssen, F., Dullaert, W., Fleuren, H., 2007c. Horizontal cooperation in transport and logistics: A literature review. Transportation Journal 46 (3), 22–39. Cuervo, D. P., Vanovermeire, C., Sörensen, K., 2016. Determining collaborative profits in coalitions formed by two partners with varying characteristics. Transportation Research Part C: Emerging Technologies 70, 171 – 184. 29 Dahl, S., Derigs, U., 2011. Cooperative planning in express carrier networks an empirical study on the effectiveness of a real-time decision support system. Decision Support Systems 51 (3), 620 – 626. Dai, B., Chen, H., 2012. Mathematical model and solution approach for carriers collaborative transportation planning in less than truckload transportation. International Journal of Advanced Operations Management 4, 62–84. Dai, B., Chen, H., Yang, G., 10 2014. Price-setting based combinatorial auction approach for carrier collaboration with pickup and delivery requests. Operational Research 14 (3), 361–386. Dai, B., H.Chen, 2011. A multi-agent and auction-based framework and approach for carrier collaboration. Logistics Research 3 (2-3), 101–120. Dao, S. D., Abhary, K., Marian, R., 2014. Optimisation of partner selection and collaborative transportation scheduling in virtual enterprises using {GA}. Expert Systems with Applications 41 (15), 6701 – 6717. Engevall, S., Gthe-Lundgren, M., Vrbrand, P., 2004. The heterogeneous vehicle-routing game. Transportation Science 38 (1), 71–85. Ergun, Ö., Kuyzu, G., Savelsbergh, M., 05 2007a. Reducing truckload transportation costs through collaboration. Transportation Science 41 (2), 206– 221. Ergun, Ö., Kuyzu, G., Savelsbergh, M., 2007b. Shipper collaboration. Computers & Operations Research 34 (6), 1551 – 1560, part Special Issue: Odysseus 2003 Second International Workshop on Freight Transportation Logistics. Fernández, E., Fontana, D., Speranza, M. G., 2016. On the collaboration uncapacitated arc routing problem. Computers & Operations Research 67, 120 – 131. Frisk, M., Goethe-Lundgren, M., Joernsten, K., Rönnqvist, M., 2010. Cost allocation in collaborative forest transportation. European Journal of Operational Research 205, 448–4587. 30 Gansterer, M., Hartl, R. F., 2016a. Combinatorial auctions in collaborative vehicle routing. IFORS News (10(4)), 15–16. Gansterer, M., Hartl, R. F., 2016b. Request evaluation strategies for carriers in auction-based collaborations. OR Spectrum 38(1), 3–23. Gansterer, M., Hartl, R. F., 2017. Bundle generation in combinatorial transportation auctions. Working paper. URL http://prolog.univie.ac.at/research/PaperMG/Bundles.pdf Gendreau, M., Potvin, J.-Y., Bräumlaysy, O., Hasle, G., Løkketangen, A., 2008. Metaheuristics for the Vehicle Routing Problem and Its Extensions: A Categorized Bibliography. 
Springer US, Boston, MA, pp. 143–169. Ghiani, G., Guerriero, F., Laporte, G., Musmanno, R., 2003. Real-time vehicle routing: Solution concepts, algorithms and parallel computing strategies. European Journal of Operational Research 151 (1), 1 – 11. Ghiani, G., Manni, E., Triki, C., 2008. The lane covering problem with time windows. Journal of Discrete Mathematical Sciences and Cryptography 11 (1), 67–81. Göthe-Lundgren, M., Jörnsten, K., Värbrand, P., 1996. On the nucleolus of the basic vehicle routing game. Mathematical Programming 72 (1), 83–100. Guajardo, M., Jörnsten, K., 2015. Common mistakes in computing the nucleolus. European Journal of Operational Research 241 (3), 931 – 935. Guajardo, M., Rönnqvist, M., 2015. Operations research models for coalition structure in collaborative logistics. European Journal of Operational Research 240 (1), 147 – 159. Guajardo, M., Rönnqvist, M., 2016. A review on cost allocation methods in collaborative transportation. International Transactions in Operational Research 23 (3), 371–392. Hernández, S., Peeta, S., 2011. Centralized time-dependent multiple-carrier collaboration problem for less-than-truckload carriers. Transportation Research Record: Journal of the Transportation Research Board 2263, 26–34. 31 Hernández, S., Peeta, S., 2014. A carrier collaboration problem for less-thantruckload carriers: characteristics and carrier collaboration model. Transportmetrica A: Transport Science 10 (4), 327–349. Hernández, S., Peeta, S., Kalafatas, G., 2011. A less-than-truckload carrier collaboration planning problem under dynamic capacities. Transportation Research Part E: Logistics and Transportation Review 47 (6), 933 – 946. Kimms, A., Kozeletskyi, I., 2016. Shapley value-based cost allocation in the cooperative traveling salesman problem under rolling horizon planning. EURO Journal on Transportation and Logistics 5 (4), 371–392. Klein, M., 1967. A primal method for minimal cost flows with applications to the assignment and transportation problems. Management Science 14 (3), 205–220. Krajewska, M., Kopfer, H., 2006. Collaborating freight forwarding enterprises. OR Spectrum 28(3), 301–317. Krajewska, M. A., Kopfer, H., Laporte, G., Ropke, S., Zaccour, G., 11 2008. Horizontal cooperation among freight carriers: request allocation and profit sharing. The Journal of the Operational Research Society 59 (11), 1483–1491. Kuo, A., Miller-Hooks, E., Zhang, K., Mahmassani, H., 2008. Train slot cooperation in multicarrier, international rail-based intermodal freight transport. Transportation Research Record: Journal of the Transportation Research Board 2043, 31–40. Kuyzu, G., 2017. Lane covering with partner bounds in collaborative truckload transportation procurement. Computers & Operations Research 77, 32 – 43. Kytöjoki, J., Nuortio, T., Bräysy, O., Gendreau, M., 2007. An efficient variable neighborhood search heuristic for very large scale vehicle routing problems. Computers & Operations Research 34 (9), 2743 – 2757. Ledyard, J., Olson, M., Porter, D., Swanson, J., Torma, D., 2002. The first use of a combined-value auction for transportation services. Interfaces 32 (5), 4–12. 32 Li, J., Rong, G., Feng, Y., 2015. Request selection and exchange approach for carrier collaboration based on auction of a single request. Transportation Research Part E: Logistics and Transportation Review 84, 23 – 39. Li, Y., Chen, H., Prins, C., 2016. Adaptive large neighborhood search for the pickup and delivery problem with time windows, profits, and reserved requests. 
European Journal of Operational Research 252 (1), 27 – 38. Lin, C., 2008. A cooperative strategy for a vehicle routing problem with pickup and delivery time windows. Computers & Industrial Engineering 55 (4), 766 – 782. Liu, R., Jiang, Z., Fung, R. Y., Chen, F., Liu, X., 2010a. Two-phase heuristic algorithms for full truckloads multi-depot capacitated vehicle routing problem in carrier collaboration. Computers & Operations Research 37 (5), 950 – 959, disruption Management. Liu, R., Jiang, Z., Liu, X., Chen, F., 2010b. Task selection and routing problems in collaborative truckload transportation. Transportation Research Part E: Logistics and Transportation Review 46 (6), 1071 – 1085. Lydeka, Z., Adomavičius, B., 2007. Cooperation among the competitors in international cargo transportation sector: Key factors to success. Engineering Economics 51 (1), 80–90. Montoya-Torres, J. R., Muñoz-Villamizar, A., Vega-Mejia, C. A., 2016. On the impact of collaborative strategies for goods delivery in city logistics. Production Planning & Control 27 (6), 443–455. Munkres, J., 1957. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics 5 (1), 32–38. URL http://www.jstor.org/stable/2098689 Nadarajah, S., Bookbinder, J., 2013. Less-than-truckload carrier collaboration problem: Modeling framework and solution approach. Journal of Heuristics 19, 917–942. Özener, O. Ö., Ergun, Ö., Savelsbergh, M., 2011. Lane-exchange mechanisms for truckload carrier collaboration. Transportation Science 45 (1), 1–17. 33 Özener, O. Ö., Ergun, Ö., Savelsbergh, M., 2013. Allocating cost of service to customers in inventory routing. Operations Research 61 (1), 112–125. Parragh, S., Dörner, K., Hartl, R., 2008. A survey on pickup and delivery problems. part ii: Transportation between pickup and delivery locations. Journal für Betriebswirtschaft 58, 21–51. Pekeč, A., Rothkopf, M., 2003. Combinatorial auction design. Management Science 49 (11), 1485–1503. Pérez-Bernabeu, E., Juan, A. A., Faulin, J., Barrios, B. B., 2015. Horizontal cooperation in road transportation: a case illustrating savings in distances and greenhouse gas emissions. International Transactions in Operational Research 22 (3), 585–606. Quintero-Araujo, C. L., Gruler, A., Juan, A. A., 2016. Quantifying Potential Benefits of Horizontal Cooperation in Urban Transportation Under Uncertainty: A Simheuristic Approach. Springer International Publishing, Cham, pp. 280–289. Sanchez, M., Pradenas, L., Deschamps, J.-C., Parada, V., 2016. Reducing the carbon footprint in a vehicle routing problem by pooling resources from different companies. NETNOMICS: Economic Research and Electronic Networking 17 (1), 29–45. Schmeidler, D., 1969. The nucleolus of a characteristic function game. SIAM Journal on Applied Mathematics 17, 1163–1170. Schmelzer, H., 2014. Cooperational platform for urban logistics in Zurich (accessed January 2017). URL https://blog.zhaw.ch/mobine/category/1/city-logistik/ Schmoltzi, C., Wallenburg, C. M., 07 2011. Horizontal cooperations between logistics service providers: Motives, structure, performance. International Journal of Physical Distribution & Logistics Management 41 (6), 552–575. Schneeweiss, C., 2003. Distributed Decision Making. Schopka, K., Kopfer, H., 2017. Pre-selection Strategies for the Collaborative Vehicle Routing Problem with Time Windows. Springer International Publishing, Cham, pp. 231–242. 34 Shapley, L., 1953. A value for n-person games. 
Annals of Mathematical Studies 28, 307–317. Soysal, M., Bloemhof-Ruwaard, J. M., Haijema, R., van der Vorst, J. G., 2016. Modeling a green inventory routing problem for perishable products with horizontal collaboration. Computers & Operations Research, –. Sprenger, R., Mönch, L., 2012. A methodology to solve large-scale cooperative transportation planning problems. European Journal of Operational Research 223 (3), 626 – 636. Sprenger, R., Mönch, L., 2014. A decision support system for cooperative transportation planning: Design, implementation, and performance assessment. Expert Systems with Applications 41 (11), 5125 – 5138. Tate, K., 1996. The elements of a successful logistics partnership. International Journal of Physical Distribution & Logistics Management 26 (3), 7–13. Vanovermeire, C., Sörensen, K., 2014. Integration of the cost allocation in the optimization of collaborative bundling. Transportation Research Part E: Logistics and Transportation Review 72, 125 – 143. Verdonck, L., Caris, A., Ramaekers, K., Janssens, G. K., 2013. Collaborative logistics from the perspective of road transportation companies. Transport Reviews 33 (6), 700–719. Verstrepen, S., Cools, M., Cruijssen, F., Dullaert, W., 2009. A dynamic framework for managing horizontal cooperation in logistics. International Journal of Logistics Systems and Management 3 4 (5), 228–248. Voruganti, A., Unnikrishnan, A., Waller, S., 2011. Modeling carrier collaboration in freight networks. Transportation Letters 3 (1), 51–61. Wang, X., Kopfer, H., 2014. Collaborative transportation planning of lessthan-truckload freight. OR Spectrum 36, 357–380. Wang, X., Kopfer, H., 2015. Rolling horizon planning for a dynamic collaborative routing problem with full-truckload pickup and delivery requests. Flexible Services and Manufacturing Journal 27 (4), 509–533. 35 Wang, X., Kopfer, H., Gendreau, M., 2014. Operational transportation planning of freight forwarding companies in horizontal coalitions. European Journal of Operational Research 237 (3), 1133 – 1141. Weng, K., Xu, Z.-H., 2014. Flow merging and hub route optimization in collaborative transportation. Journal of Applied Mathematics 2014. Wøhlk, S., 2008. A Decade of Capacitated Arc Routing. Springer US, Boston, MA, pp. 29–48. Xu, S. X., Huang, G. Q., Cheng, M., 2016. Truthful, budget-balanced bundle double auctions for carrier collaboration. Transportation Science 0 (0), null. Yilmaz, O., Savasaneril, S., 2012. Collaboration among small shippers in a transportation market. European Journal of Operational Research 218 (2), 408 – 415. 36
arXiv:1302.2676v3 [] 25 Feb 2014

CONVEX BODIES AND MULTIPLICITIES OF IDEALS

KIUMARS KAVEH AND A. G. KHOVANSKII

Dedicated to Viktor Matveyevich Buchstaber for the occasion of his 70th birthday

Abstract. We associate convex regions in Rn to m-primary graded sequences of subspaces, in particular m-primary graded sequences of ideals, in a large class of local algebras (including analytically irreducible local domains). These convex regions encode information about Samuel multiplicities. This is in the spirit of the theory of Gröbner bases and Newton polyhedra on one hand, and the theory of Newton-Okounkov bodies for linear systems on the other hand. We use this to give a new proof, as well as a generalization, of a Brunn-Minkowski inequality for multiplicities due to Teissier and Rees-Sharp.

Introduction

The purpose of this note is to employ, in the local case, techniques from the theory of semigroups of integral points and Newton-Okounkov bodies (for the global case) and to obtain new results as well as new proofs of some previously known results about multiplicities of ideals in local rings. Let R = OX,p be the local ring of a point p on an n-dimensional irreducible algebraic variety X over an algebraically closed field k. Let m denote the maximal ideal of R and let a be an m-primary ideal, i.e. a is an ideal containing a power of the maximal ideal m. Geometrically speaking, a is m-primary if its zero set (around p) is the single point p itself. Let f1, . . . , fn be n generic elements in a. The multiplicity e(a) of the ideal a is the intersection multiplicity, at the origin, of the hypersurfaces Hi = {x | fi(x) = 0}, i = 1, . . . , n (it can be shown that this number is independent of the choice of the fi). According to Hilbert-Samuel's theorem, the multiplicity e(a) is equal to:

n! lim_{k→∞} dim_k(R/a^k) / k^n.

(This result is analogous to Hilbert's theorem on the Hilbert function and degree of a projective variety.) More generally, let R be an n-dimensional Noetherian local domain over k (where k is isomorphic to the residue field R/m and m is the maximal ideal). Let a be an m-primary ideal of R. Since a contains a power of the maximal ideal m, R/a is finite dimensional regarded as a vector space over k. The Hilbert-Samuel function of the m-primary ideal a is defined by:

Ha(k) = dim_k(R/a^k).

Date: February 26, 2014.
2010 Mathematics Subject Classification. Primary: 13H15, 11H06; Secondary: 13P10.
Key words and phrases. multiplicity, local ring, Hilbert-Samuel function, convex body.
The first author is partially supported by a Simons Foundation Collaboration Grant for Mathematicians (Grant ID: 210099) and a National Science Foundation grant (Grant ID: 1200581). The second author is partially supported by the Canadian Grant N 156833-12.

For large values of k, Ha(k) coincides with a polynomial of degree n called the Hilbert-Samuel polynomial of a. The Samuel multiplicity e(a) of a is defined to be the leading coefficient of Ha(k) multiplied by n!. It is well-known that the Samuel multiplicity satisfies a Brunn-Minkowski inequality [Teissier77, Rees-Sharp78]. That is, for any two m-primary ideals a, b in R we have:

(1) e(a)^{1/n} + e(b)^{1/n} ≥ e(ab)^{1/n}.

More generally we define multiplicity for m-primary graded sequences of subspaces. That is, a sequence a1, a2, . . . of k-subspaces in R such that for all k, m we have ak am ⊂ ak+m, and a1 contains a power of the maximal ideal m (Definition 5.2).
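As a quick illustration of these definitions (a worked example added here, not part of the original argument), take R = k[x, y](0), the ring of polynomials in two variables localized at the origin, and a = m = (x, y). The monomials x^i y^j with i + j < k form a basis of R/m^k, so

Hm(k) = dim_k(R/m^k) = k(k+1)/2,

and hence

e(m) = 2! lim_{k→∞} k(k+1)/(2k^2) = 1,

in agreement with the geometric description: two generic elements of m define smooth curves meeting transversally at the origin.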
We recall that if a, b are two k-subspaces of R, ab denotes the k-span of all the xy where x ∈ a and y ∈ b. In particular, a graded sequence a• where each ak is an m-primary ideal, is an m-primary graded sequence of subspaces. We call such a• an m-primary graded sequence of ideals. For an m-primary graded sequence of subspaces we define multiplicity e(a• ) to be: dimk (R/ak ) (2) e(a• ) = n! lim . k→∞ kn (It is not a priori clear that the limit exists.) We will use convex geometric arguments to prove the existence of the limit in (2) and a generalization of (1) to m-primary graded sequences of subspaces, for a large class of local domains R. Let us briefly discuss the convex geometry part of the story. Let C be a closed strongly convex cone with apex at the origin (i.e. C is a convex cone and does not contain any line). We call a closed convex set Γ ⊂ C, a C-convex region if for any x ∈ Γ we have x + C ⊂ Γ. We say that Γ is cobounded if C \ Γ is bounded. It is easy to verify that the set of cobounded C-convex regions is closed under addition (Minkowski sum of convex sets) and multiplication with a positive real number. For a cobounded C-convex region Γ we call the volume of the bounded region C \ Γ the covolume of Γ and denote it by covol(Γ). Also we refer to C \Γ as a C-coconvex body. (Instead of working with convex regions one can alternatively work with coconvex bodies.) In [Khovanskii-Timorin-a, Khovanskii-Timorin-b], similar to convex bodies and their volumes (and mixed volumes), the authors develop a theory of convex regions and their covolumes (and mixed covolumes). Moreover they prove an analogue of the Alexandrov-Fenchel inequality for mixed covolumes (see Theorem 2.3). The usual Alexandrov-Fenchel inequality is an important inequality about mixed volumes of convex bodies in Rn and generalizes the classical isoperimetric inequality and the Brunn-Minkowski inequality. In a similar way, the result in [Khovanskii-Timorin-a] implies a Brunn-Minkowski inequality for covolumes, that is, for any two cobounded C-convex regions Γ1 , Γ2 where C is an n-dimensional cone, we have: (3) covol(Γ1 )1/n + covol(Γ2 )1/n ≥ covol(Γ1 + Γ2 )1/n . We associate convex regions to m-primary graded sequences of subspaces (in particular m-primary ideals) and use the inequality (3) to prove the Brunn-Minkowski inequality for multiplicities. To associate a convex region to a graded sequence of subspaces we need a valuation on the ring R. We will assume that there is a valuation v on R with values in Zn (with respect to a total order on Zn respecting 2 addition) such that the residue field of v is k, and moreover the following conditions (i)-(ii) hold 1. We call such v a good valuation on R (Definition 7.3): (i) Let S = v(R \ {0}) ∪ {0} be the value semigroup of (R, v). Let C = C(S) be the closure of the convex hull of S. It is a closed convex cone with apex at the origin. We assume that C is a strongly convex cone. Let ℓ : Rn → R be a linear function. For any a ∈ R let ℓ≥a denote the half-space {x | ℓ(x) ≥ a}. Since the cone C = C(S) associated to the semigroup S is assumed to be strongly convex we can find a linear function ℓ such that the cone C lies in ℓ≥0 and it intersects the hyperplane ℓ−1 (0) only at the origin. (ii) We assume there exists r0 > 0 and a linear function ℓ as above such that for any f ∈ R, if ℓ(v(f )) ≥ kr0 for some k > 0 then f ∈ mk . Let Mk = v(mk \ {0}) denote the image of mk under the valuation v. 
The condition (ii) in particular implies that for any k > 0 we have Mk ∩ ℓ≥kr0 = S ∩ ℓ≥kr0. As an example, let R = k[x1, . . . , xn](0) be the algebra of polynomials localized at the maximal ideal (x1, . . . , xn). Then the map v which associates to a polynomial its lowest exponent (with respect to some term order) defines a good valuation on R and the value semigroup S coincides with the semigroup Zn≥0, that is, the semigroup of all the integral points in the positive orthant C = Rn≥0. In the same fashion any regular local ring has a good valuation, as well as the local ring of a toroidal singularity (Example 7.5 and Theorem 7.6). More generally, in Section 7 we see that an analytically irreducible local domain R has a good valuation (Theorem 7.7 and Theorem 7.8; see also [Cutkosky-a, Theorem 4.2 and Lemma 4.3]). A local ring R is said to be analytically irreducible if its completion is an integral domain. Regular local rings and local rings of toroidal singularities are analytically irreducible. (We should point out that in the first version of the paper we had addressed only the case where R is a regular local ring or the local ring of a toroidal singularity.)

Given a good Zn-valued valuation v on the domain R, we associate the (strongly) convex cone C = C(R) ⊂ Rn to the domain R which is the closure of the convex hull of the value semigroup S. Then to each m-primary graded sequence of subspaces a• in R we associate a convex region Γ(a•) ⊂ C, such that the set C \ Γ(a•) is bounded (Definition 7.11). The main result of the manuscript (Theorem 7.12) is that the limit in (2) exists and:

(4) e(a•) = n! covol(Γ(a•)).

The equality (4) and the Brunn-Minkowski inequality for covolumes (see (3) or Corollary 2.4) are the main ingredients in proving a generalization of the inequality (1) to m-primary graded sequences of subspaces (Corollary 7.14). We would like to point out that the construction of Γ(a•) is an analogue of the construction of the Newton-Okounkov body of a linear system on an algebraic variety (see [Okounkov03], [Okounkov96], [Kaveh-Khovanskii12], [Lazarsfeld-Mustaţă09]). In fact, the approach and results in the present paper are analogous to the approach and results in [Kaveh-Khovanskii12] regarding the asymptotic behavior of Hilbert functions of a general class of graded algebras. In the present manuscript we also deal with certain graded algebras (i.e. m-primary graded sequences of subspaces) but instead of the dimension of graded pieces we are interested in the codimension (i.e. the dimension of R/ak); that is why in our main theorem (Theorem 7.12) the covolume of a convex region appears as opposed to the volume of a convex body ([Kaveh-Khovanskii12, Theorem 2.31]). Also our Theorem 7.12 generalizes [Kaveh-Khovanskii12, Corollary 3.2], which gives a formula for the degree of a projective variety X in terms of the volume of its corresponding Newton-Okounkov body, because the Hilbert function of a projective variety X can be regarded as the difference derivative of the Hilbert-Samuel function of the affine cone over X at the origin and hence has the same leading coefficient. On the other hand, the construction of Γ(a) generalizes the notion of the Newton diagram of a power series (see [Kushnirenko76] and [Arnold-Varchenko-Guseinzade85, Section 12.7]).

1 In [Kaveh-Khovanskii12] a valuation v with values in Zn and residue field k is called a valuation with one-dimensional leaves (see Definition 7.1).

To a monomial ideal in a polynomial ring (or a power series ring), i.e.
an ideal generated by monomials, one can associate its (unbounded) Newton polyhedron. It is the convex hull of the exponents of the monomials appearing in the ideal. The Newton diagram of a monomial ideal is the union of the bounded faces of the Newton polyhedron. One can see that for a monomial ideal a, the convex region Γ(a) coincides with its Newton polyhedron (Theorem 6.5). The main theorem in this manuscript (Theorem 7.12) for the case of monomial ideals recovers the local case of the well-known theorem of Bernstein-Kushnirenko, about computing the multiplicity at the origin of a system f1 (x) = · · · = fn (x) = 0 where the fi are generic functions from m-primary monomial ideals (see Section 6 and [Arnold-Varchenko-Guseinzade85, Section 12.7]). Another immediate corollary of (4) is the following: let a be an m-primary ideal in R = k[x1 , . . . , xn ](0) . Fix a term order on Zn and for each k > 0 let in(ak ) denote the initial ideal of the ideal ak (generated by the lowest terms of elements of ak ). Then the sequence of numbers e(in(ak )) kn is decreasing and converges to e(a) as k → ∞ (Corollary 7.16). The Brunn-Minkowski inequality proved in this paper is closely related to the more general Alexandrov-Fenchel inequality for mixed multiplicities. Take mprimary ideals a1 , . . . , an in a local ring R = OX,p of a point p on an n-dimensional algebraic variety X. The mixed multiplicity e(a1 , . . . , an ) is equal to the intersection multiplicity, at the origin, of the hypersurfaces Hi = {x | fi (x) = 0}, i = 1, . . . , n, where each fi is a generic function from ai . Alternatively one can define the mixed multiplicity as the polarization of the Hilbert-Samuel multiplicity e(a), i.e. it is the unique function e(a1 , . . . , an ) which is invariant under permuting the arguments, is multi-additive with respect to product of ideals, and for any m-primary ideal a the mixed multiplicity e(a, . . . , a) coincides with e(a). In fact, in the above the ai need not be ideals and it suffices for them to be m-primary subspaces. The Alexandrov-Fenchel inequality is the following inequality among the mixed multiplicities of the ai : (5) e(a1 , a1 , a3 , . . . , an )e(a2 , a2 , a3 , . . . , an ) ≥ e(a1 , a2 , a3 , . . . , an )2 When n = dim R = 2 it is easy to see that the Brunn-Minkowski inequality (1) and the Alexandrov-Fenchel inequality (5) are equivalent. By a reduction of dimension theorem for mixed multiplicities one can get a proof of the Alexandrov-Fenchel 4 inequality (5) from the Brunn-Minkowski inequality (1) for dim(R) = 2. The Brunn-Minkowski inequality (1) was originally proved in [Teissier77, Rees-Sharp78]. In a recent paper [Kaveh-Khovanskii] we give a simple proof of the AlexandrovFenchel inequality for mixed multiplicities of ideals using arguments similar to but different from those of this paper. This then implies an Alexandrov-Fenchel inequality for covolumes of convex regions (in a similar way that in [Kaveh-Khovanskii12] and in [Khovanskii88] the authors obtain an alternative proof of the usual AlexandrovFenchel inequality for volumes of convex bodies from similar inequalities for intersection numbers of divisors on algebraic varieties). We would like to point out that the Alexandrov-Fenchel inequality in [Khovanskii-Timorin-a] for covolumes of coconvex bodies is related to an analogue of this inequality for convex bodies in higher dimensional hyperbolic space (or higher dimensional Minkowski space-time). 
From this point of view, the AlexandrovFenchel inequality has been proved for certain coconvex bodies in [Fillastre13]. After the first version of this note was completed we learned about the recent papers [Cutkosky-a, Cutkosky-b] and [Fulger] which establish the existence of limit (2) in more general settings. We would also like to mention the paper of Teissier [Teissier78] which discusses Newton polyhedron of a power series, and notes the relationship/analogy between notions from local commutative algebra and convex geometry. Also we were notified that, for ideals in a polynomial ring, ideas similar to construction of Γ(a• ) (see Definition 7.11) appears in [Mustaţă02] were the highest term of polynomials is used instead of a valuation. Moreover, in [Mustaţă02, Corollary 1.9] the Brunn-Minkowski-inequality for multiplicities of graded sequences of m-primary ideals is proved for regular local rings using Teissier’s Brunn-Minkowski (1). Finally as the final version of this manuscript was being prepared for publication, the preprint of D. Cutksoky [Cutkosky-c] appeared in arXiv.org in which the author uses similar methods to prove Brunn-Minkowski inequality for graded sequences of m-primary ideals in local domains. And few words about the organization of the paper: Section 1 recalls basic background material about volumes/mixed volumes of convex bodies. Section 2 is about convex regions and their covolumes/mixed covolumes, which we can think of as a local version of the theory of mixed volumes of convex bodies. In Sections 3 and 4 we associate a convex region to a primary sequence of subsets in a semigroup and prove the main combinatorial result required later (Definition 4.7 and Theorem 4.10). In Section 5 we recall some basic definitions and facts from commutative algebra about multiplicities of m-primary ideals (and subspaces) in local rings. The next section (Section 6) discusses the case of monomial ideals and the BernsteinKushnirenko theorem. Finally in Section 7, using a valuation on the ring R, we associate a convex region Γ(a• ) to an m-primary graded sequence of subspaces a• and prove the main results of this note (Theorem 7.12 and Corollary 7.14). Acknowledgement. The first author would like to thank Dale Cutkosky, Vladlen Timorin and Javid Validashti for helpful discussions. We are also thankful to Bernard Teissier, Dale Cutkosky, Francois Fillastre and Mircea Mustaţă for informing us about their interesting papers [Teissier78], [Cutkosky-a, Cutkosky-b], [Fillastre13] and [Mustaţă02]. 5 1. Mixed volume of convex bodies The collection of all convex bodies in Rn is a cone, that is, we can add convex bodies and multiply a convex body with a positive number. For two convex bodies ∆1 , ∆2 ⊂ Rn , their (Minkowski) sum ∆1 + ∆2 is {x + y | x ∈ ∆1 , y ∈ ∆2 }. Let vol denote the n-dimensional volume in Rn with respect to the standard Euclidean metric. The function vol is a homogeneous polynomial of degree n on the cone of convex bodies, i.e. its restriction to each finite dimensional section of the cone is a homogeneous polynomial of degree n. In other words, for any collection of convex bodies ∆1 , . . . , ∆r , the function: P∆1 ,...,∆r (λ1 , . . . , λr ) = vol(λ1 ∆1 + · · · + λr ∆r ), is a homogeneous polynomial of degree n in λ1 , . . . , λr . By definition the mixed volume V (∆1 , . . . , ∆n ) of an n-tuple (∆1 , . . . , ∆n ) of convex bodies is the coefficient of the monomial λ1 · · · λn in the polynomial P∆1 ,...,∆n (λ1 , . . . , λn ) divided by n!. 
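For concreteness, here is a small two-dimensional example (ours, added for illustration): let ∆1 = [0, 1]^2 be the unit square and ∆2 the triangle with vertices (0, 0), (1, 0), (0, 1). A direct computation of the Minkowski sum gives

vol(λ1 ∆1 + λ2 ∆2) = λ1^2 + 2 λ1 λ2 + (1/2) λ2^2,

so V(∆1, ∆2) = 2/2! = 1, while vol(∆1) = 1 and vol(∆2) = 1/2. In particular V(∆1, ∆1) V(∆2, ∆2) = 1/2 ≤ 1 = V(∆1, ∆2)^2, consistent with the Alexandrov-Fenchel inequality stated below.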
This definition implies that mixed volume is the polarization of the volume polynomial, that is, it is the unique function on the n-tuples of convex bodies satisfying the following: (i) (Symmetry) V is symmetric with respect to permuting the bodies ∆1 , . . . , ∆n . (ii) (Multi-linearity) It is linear in each argument with respect to the Minkowski sum. The linearity in first argument means that for convex bodies ∆′1 , ∆′′1 , ∆2 , . . . , ∆n , and real numbers λ′ , λ′′ ≥ 0 we have: V (λ′ ∆′1 + λ′′ ∆′′1 , ∆2 , . . . , ∆n ) = λ′ V (∆′1 , ∆2 , . . . , ∆n ) + λ′′ V (∆′′1 , ∆2 , . . . , ∆n ). (iii) (Relation with volume) On the diagonal it coincides with the volume, i.e. if ∆1 = · · · = ∆n = ∆, then V (∆1 , . . . , ∆n ) = vol(∆). The following inequality attributed to Alexandrov and Fenchel is important and very useful in convex geometry (see [Burago-Zalgaller88]): Theorem 1.1 (Alexandrov-Fenchel). Let ∆1 , . . . , ∆n be convex bodies in Rn . Then V (∆1 , ∆1 , ∆3 , . . . , ∆n )V (∆2 , ∆2 , ∆3 , . . . , ∆n ) ≤ V (∆1 , ∆2 , . . . , ∆n )2 . In dimension 2, this inequality is elementary. We call it the generalized isoperimetric inequality, because when ∆2 is the unit ball it coincides with the classical isoperimetric inequality. The celebrated Brunn-Minkowski inequality concerns volume of convex bodies in Rn . It is an easy corollary of the Alexandrov-Fenchel inequality. (For n = 2 it is equivalent to the Alexandrov-Fenchel inequality.) Theorem 1.2 (Brunn-Minkowski). Let ∆1 , ∆2 be convex bodies in Rn . Then vol(∆1 )1/n + vol(∆2 )1/n ≤ vol(∆1 + ∆2 )1/n . 2. Mixed covolume of convex regions Let C be a strongly convex closed n-dimensional cone in Rn with apex at the origin. (A convex cone is strongly convex if it does not contain any lines through the origin.) We are interested in closed convex subsets of C which have bounded complement. Definition 2.1. We call a closed convex subset Γ ⊂ C a C-convex region (or simply a convex region when the cone C is understood from the context) if for any x ∈ Γ and y ∈ C we have x + y ∈ Γ. Moreover, we say that a convex region Γ is cobounded 6 if the complement C \ Γ is bounded. In this case the volume of C \ Γ is finite which we call the covolume of Γ and denote it by covol(Γ). One also refers to C \ Γ as a C-coconvex body. The collection of C-convex regions (respectively cobounded regions) is closed under the Minkowski sum and multiplication by positive scalars. Similar to the volume of convex bodies, one proves that the covolume of convex regions is a homogeneous polynomial [Khovanskii-Timorin-a]. More precisely: Theorem 2.2. Let Γ1 , . . . , Γr be cobounded C-convex regions in the cone C. Then the function PΓ1 ,...,Γr (λ1 , . . . , λr ) = covol(λ1 Γ1 + · · · + λr Γr ), is a homogeneous polynomial of degree n in the λi . As in the case of convex bodies, one uses the above theorem to define mixed covolume of cobounded regions. By definition the mixed covolume CV (Γ1 , . . . , Γn ) of an n-tuple (Γ1 , . . . , Γn ) of cobounded convex regions is the coefficient of the monomial λ1 · · · λn in the polynomial PΓ1 ,...,Γn (λ1 , . . . , λn ) divided by n!. That is, mixed covolume is the unique function on the n-tuples of cobounded regions satisfying the following: (i) (Symmetry) CV is symmetric with respect to permuting the regions Γ1 , . . . , Γn . (ii) (Multi-linearity) It is linear in each argument with respect to the Minkowski sum. (iii) (Relation with covolume) For any cobounded region Γ ⊂ C: CV (Γ, . . . , Γ) = covol(Γ). 
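The following example (ours, added for illustration) may help fix these notions: let C = R^2_{≥0} and, for a > 0, put Γa = {(x, y) ∈ C : x + y ≥ a}. Each Γa is a cobounded C-convex region, C \ Γa is the triangle with legs of length a, and covol(Γa) = a^2/2. Moreover Γa + Γb equals Γc with c = a + b, so covol(λ1 Γa + λ2 Γb) = (λ1 a + λ2 b)^2/2 and CV(Γa, Γb) = ab/2. In particular

covol(Γa)^{1/2} + covol(Γb)^{1/2} = (a + b)/√2 = covol(Γa + Γb)^{1/2},

so the Brunn-Minkowski inequality for covolumes stated next holds with equality for this homothetic family.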
The mixed covolume satisfies an Alexandrov-Fenchel inequality [Khovanskii-Timorin-a]. Note that the inequality is reversed compared to the Alexandrov-Fenchel for mixed volumes of convex bodies. Theorem 2.3 (Alexandrov-Fenchel for mixed covolume). Let Γ1 , . . . , Γn be cobounded C-convex regions. Then: CV (Γ1 , Γ1 , Γ3 , . . . , Γn )CV (Γ2 , Γ2 , Γ3 , . . . , Γn ) ≥ CV (Γ1 , Γ2 , Γ3 , . . . , Γn )2 . The (reversed) Alexandrov-Fenchel inequality implies a (reversed) Brunn-Minkowski inequality. (For n = 2 it is equivalent to the Alexandrov-Fenchel inequality.) Corollary 2.4 (Brunn-Minkowski for covolume). Let Γ1 , Γ2 be cobounded C-convex regions. Then: covol(Γ1 )1/n + covol(Γ2 )1/n ≥ covol(Γ1 + Γ2 )1/n . 3. Semigroups of integral points In this section we recall some general facts from [Kaveh-Khovanskii12] about the asymptotic behavior of semigroups of integral points. Let S ⊂ Zn × Z≥0 be an additive semigroup. Let π : Rn × R → R denote the projection onto the second factor, and let Sk = S ∩ π −1 (k) be the set of points in S at level k. For simplicity, assume S1 6= ∅ and that S generates the whole lattice Zn+1 . Define the function HS by: HS (k) = #Sk . We call HS the Hilbert function of the semigroup S. We wish to describe the asymptotic behavior of HS as k → ∞. 7 Let C(S) be the closure of the convex hull of S ∪ {0}, that is, the smallest closed convex cone (with apex at the origin) containing S. We call the projection of the convex set C(S) ∩ π −1 (1) to Rn (under the projection onto the first factor (x, 1) 7→ x), the Newton-Okounkov convex set of the semigroup S and denote it by ∆(S). In other words, [ {x/k | (x, k) ∈ Sk }. ∆(S) = k>0 (From the fact that S is a semigroup one can show that ∆(S) is a convex set.) If C(S) ∩ π −1 (0) = {0} then ∆(S) is compact and hence a convex body. The Newton-Okounkov convex set ∆(S) is responsible for the asymptotic behavior of the Hilbert function of S (see [Kaveh-Khovanskii12, Corollary 1.16]): Theorem 3.1. The limit lim k→∞ HS (k) , kn exists and is equal to vol(∆(S)). 4. Primary sequences in a semigroup and convex regions In this section we discuss the notion of a primary graded sequence of subsets in a semigroup and describe its asymptotic behavior using Theorem 3.1. In section 7 we will employ this to describe the asymptotic behavior of the Hilbert-Samuel function of a graded sequence of m-primary ideals in a local domain. Let S ⊂ Zn be an additive semigroup containing the origin. Without loss of generality we assume that S generates the whole Zn . Let as above C = C(S) denote the cone of S i.e. the closure of convex hull of S = S ∪ {0}. We also assume that C is a strongly convex cone, i.e. it does not contain any lines through the origin. For two subsets I, J ⊂ S, the sum I + J is the set {x + y | x ∈ I, y ∈ J }. For any integer k > 0, by the product k ∗ I we mean I + · · · + I (k times). Definition 4.1. A graded sequence of subsets in S is a sequence I• = (I1 , I2 , . . .) of subsets such that for any k, m > 0 we have Ik + Im ⊂ Ik+m . Example 4.2. Let I ⊂ S. Then the sequence I• defined by Ik = k ∗ I is clearly a graded sequence of subsets. Let I•′ , I•′′ be graded sequences of subsets. Then the sequence I• = I•′ + I•′′ defined by Ik = Ik′ + Ik′′ , is also a graded sequence of subsets which we call the sum of sequences I•′ and I•′′ . Let ℓ : Rn → R be a linear function. For any a ∈ R let ℓ≥a (respectively ℓ>a ) denote the half-space {x | ℓ(x) ≥ a} (respectively {x | ℓ(x) > a}), and similarly for ℓ≤a and ℓ<a . 
By assumption the cone C = C(S) associated to the semigroup S is strongly convex. Thus we can find a linear function ℓ such that the cone C = C(S) lies in ℓ≥0 and it intersects the hyperplane ℓ−1 (0) only at the origin. Let us fix such a linear function ℓ. We will be interested in graded sequences of subsets I• satisfying the following condition: 8 Definition 4.3. We say that a graded sequence of subsets I• is primary if there exists an integer t0 > 0 such that for any integer k > 0 we have: (6) Ik ∩ ℓ≥kt0 = S ∩ ℓ≥kt0 . Remark 4.4. One verifies that if ℓ′ is another linear function such that C lies in ℓ′≥0 and it intersects the hyperplane ℓ′−1 (0) only at the origin then it automatically satisfies (6) with perhaps a different constant t′0 > 0. Hence the condition of being a primary graded sequence does not depend on the linear function ℓ. Nevertheless when we refer to a primary graded sequence I• , choices of a linear function ℓ and an integer t0 > 0 are implied. Proposition 4.5. Let I• be a primary graded sequence. Then for all k > 0, the set S \ Ik is finite. Proof. Since C intersects ℓ−1 (0) only at the origin it follows that for any k > 0 the set C ∩ ℓ<kt0 is bounded which implies that S ∩ ℓ<kt0 is finite. But by (6),  S \ Ik ⊂ S ∩ ℓ<kt0 and hence is finite. Definition 4.6. Let I• be a primary graded sequence. Define the function HI• by: HI• (k) = #(S \ Ik ). (Note that by Proposition 4.5 this number is finite for all k > 0.) We call it the Hilbert-Samuel function of I• . To a primary graded sequence of subsets I• we can associate a C-convex region Γ(I• ) (see Definition 2.1). This convex set encodes information about the asymptotic behavior of the Hilbert-Samuel function of I• . Definition 4.7. Let I• be a primary graded sequence of subsets. Define the convex set Γ(I• ) by [ Γ(I• ) = {x/k | x ∈ Ik }. k>0 (One can show that Γ(I• ) is an unbounded convex set in C.) Proposition 4.8. Let I• be a primary graded sequence. Then Γ = Γ(I• ) is a C-convex region in the cone C, i.e. for any x ∈ Γ, x + C ⊂ Γ. Moreover, the region Γ is cobounded i.e. C \ Γ is bounded. Proof. Let ℓ and t0 be as in Definition 4.3. From the definitions it follows that the region Γ contains C ∩ ℓ≥t0 . Thus (C \ Γ) ⊂ (C ∩ ℓ<t0 ) and hence is bounded. Next let x ∈ Γ. Since x + C ⊂ C, Γ contains the set (x + C) ∩ ℓ≥t0 . But the convex hull  of x and (x + C) ∩ ℓ≥t0 is x + C. Thus x + C ⊂ Γ because Γ is convex. The following is an important example of a primary graded sequence in a semigroup S. Proposition 4.9. Let C be an n-dimensional strongly convex rational polyhedral cone in Rn and let S = C ∩ Zn . Also let I ⊂ S be a subset such that S \ I is finite. Then the sequence I• defined by Ik := k ∗ I is a primary graded sequence and Γ(I• ) = conv(I). 9 Proof. Since S \ I is finite there exists t1 > 0 such that S ∩ ℓ≥t1 ⊂ I. Put M1 = S ∩ ℓ≥t1 . Because C is a rational polyhedral cone, M1 is a finitely generated semigroup. Let v1 , . . . , vs be semigroup generators for M1 . Let t0 > 0 P be bigger s than all the ℓ(vi ). For k > 0 take x P ∈ S ∩ ℓ≥kt0 ⊂PM1 . Then x = i=1 ci vi for cP kt0 ≤ ℓ(x) = c ℓ(v ) ≤ ( c )t . This implies that i ∈ Z≥0 . Thus P i i i 0 i i k ≤ i ci and hence ( i ci ) ∗ M1 ⊂ k ∗ M1 . It follows that x ∈ k ∗ M1 . That is, S ∩ ℓ≥kt0 = (k ∗ M1 ) ∩ ℓ≥kt0 ⊂ (k ∗ I) ∩ ℓ≥kt0 and hence S ∩ ℓ≥kt0 = (k ∗ I) ∩ ℓ≥kt0 as required. The assertion Γ(I• ) = conv(I• ) follows from the observation that conv(k ∗ I) = k conv(I).  The following is our main result about the asymptotic behavior of a primary graded sequence. 
Theorem 4.10. Let I• be a primary graded sequence. Then HI• (k) lim k→∞ kn exists and is equal to covol(Γ(I• )). Proof. Let t0 > 0 be as in Definition 4.3. Then for all k > 0 we have S ∩ ℓ≥kt0 = Ik ∩ ℓ≥kt0 . Moreover take t0 to be large enough so that the finite set I1 ∩ ℓ<t0 generates the lattice Zn (this is possible because S and hence I1 generate Zn ). Consider S̃ = {(x, k) | x ∈ Ik ∩ ℓ<kt0 }. T̃ = {(x, k) | x ∈ S ∩ ℓ<kt0 }. S̃ and T̃ are semigroups in Zn × Z≥0 and we have S̃ ⊂ T̃ . From the definition it follows that both of the groups generated by S̃ and T̃ are Zn+1 . Also the NewtonOkounkov bodies of S̃ and T̃ are: ∆(S̃) = Γ(I• ) ∩ ∆(t0 ), ∆(T̃ ) = ∆(t0 ), where ∆(t0 ) = C ∩ ℓ≤t0 . Since S ∩ ℓ≥kt0 = Ik ∩ ℓ≥kt0 , we have: S \ Ik = T̃k \ S̃k , Here as usual S̃k = {(x, k) | (x, k) ∈ S̃} (respectively T̃k ) denotes the set of points in S̃ (respectively T̃ ) at level k. Hence HI• (k) = #T̃k − #S̃k . By Theorem 3.1 we have: lim #S̃k = vol(∆(S̃)), kn lim #T̃k = vol(∆(t0 )). kn k→∞ k→∞ Thus #(S \ Ik ) = vol(∆(t0 )) − vol(∆(S̃)). kn On the other hand, we have: lim k→∞ ∆(t0 ) \ ∆(S̃) = C \ Γ(I• ), and hence vol(∆(t0 )) − vol(∆(S̃)) = covol(Γ(I• )). This finishes the proof. 10  Definition 4.11. For a primary graded sequence I• we denote n! limk→∞ HI• (k)/k n by e(I• ). Motivated by commutative algebra, we call it the multiplicity of I• . We have just proved that e(I• ) = n! covol(Γ(I• )). Note that it is possible for a primary graded sequence to have multiplicity equal to zero. The following additivity property is straightforward from definition. Proposition 4.12. Let I•′ , I•′′ be primary graded sequences. We have: Γ(I•′ ) + Γ(I•′′ ) = Γ(I•′ + I•′′ ). Proof. From definition it is clear that Γ(I•′ + I•′′ ) ⊂ Γ(I•′ ) + Γ(I•′′ ). We need to show the reverse inclusion. Let I• denote I•′ + I•′′ . For k, m > 0 take x ∈ Ik′ and ′′ y ∈ Im . Then (x/k) + (y/m) = (mx + ky)/km ∈ (1/km)Ikm . This shows that (x/k) + (y/m) ∈ Γ(I• ) which finishes the proof.  Let I1,• , . . . , In,• be n primary graded sequences. Define the function PI1,• ,...,In,• : Nn → N by: PI1,• ,...,In,• (k1 , . . . , kn ) = e(k1 ∗ I1,• + · · · + kn ∗ In,• ). Theorem 4.13. The function PI1,• ,...,In,• is a homogeneous polynomial of degree n in k1 , . . . , kn . Proof. Follows immediately from Proposition 4.12, Theorem 4.10 and Theorem 2.2.  Definition 4.14. Let I1,• , . . . , In,• be primary graded sequences. Define the mixed multiplicity e(I1,• , . . . , In,• ) to be the coefficient of k1 · · · kn in the polynomial PI1,• ,...,In,• divided by n!. From Theorem 4.10 and Proposition 4.12 we have the following corollary: Corollary 4.15. e(I1,• , . . . , In,• ) = n! CV (Γ(I1,• ), . . . , Γ(In,• )), where as before CV denotes the mixed covolume of cobounded regions. From Theorem 2.3 and Corollary 2.4 we then obtain: Corollary 4.16 (Alexandrov-Fenchel inequality for mixed multiplicity in semigroups). For primary graded sequences I1,• , . . . , In,• in the semigroup S we have: e(I1,• , I1,• , I3,• , . . . , In,• )e(I2,• , I2,• , I3,• , . . . , In,• ) ≥ e(I1,• , I2,• , I3,• , . . . , In,• )2 . Corollary 4.17 (Brunn-Minkowski inequality for multiplicities in semigroups). Let I• , J• be primary graded sequences in the semigroup S. We have: e(I• )1/n + e(J• )1/n ≥ e(I• + J• )1/n . 5. Multiplicities of ideals and subspaces in local rings Let R be a Noetherian local domain of Krull dimension n over a field k, and with maximal ideal m. We also assume that the residue field R/m is k. Example 5.1. 
Let X be an irreducible variety of dimension n over k, and let p be a point in X. Then the local ring R = OX,p (consisting of rational functions on X which are regular in a neighborhood of p) is a Noetherian local domain of Krull dimension n over k. The ideal m consists of functions which vanish at p. 11 If a, b ⊂ R are two k-subspaces then by ab we denote the k-span of all the xy where x ∈ a and y ∈ b. Note that if a, b are ideals in R then ab coincides with the product of a and b as ideals. Definition 5.2. (i) A k-subspace a in R is called m-primary if it contains a power of the maximal ideal m. (ii) A graded sequence of subspaces is a sequence a• = (a1 , a2 , . . .) of k-subspaces in R such that for all k, m > 0 we have ak am ⊂ ak+m . We call a• an mprimary sequence if moreover a1 is m-primary. It then follows that every ak is m-primary and hence dimk (R/ak ) is finite. (If each ak is an m-primary ideal in R we call a• an m-primary graded sequence of ideals.) When k is algebraically closed, an ideal a in R = OX,p is m-primary if the subvariety it defines around p coincides with the single point p itself. Example 5.3. Let a be an m-primary subspace. Then the sequence a• defined by ak = ak is an m-primary graded sequence of subspaces. Let a• , b• be m-primary graded sequences of subspaces. Then the sequence c• = a• b• defined by ck = ak bk , is also an m-primary graded sequence of subspaces which we call the product of a• and b• . Definition 5.4. Let a• be an m-primary graded sequence of subspaces. Define the function Ha• by: Ha• (k) = dimk (R/ak ). We call it the Hilbert-Samuel function of a• . The Hilbert-Samuel function Ha (k) of an m-primary subspace a is the Hilbert-Samuel function of the sequence a• = (a, a2 , . . .). That is, Ha (k) = dimk (R/ak ). Remark 5.5. For an m-primary ideal a it is well-known that, for sufficiently large values of k, the Hilbert-Samuel function Ha coincides with a polynomial of degree n called the Hilbert-Samuel polynomial of a ([Samuel-Zariski60]). Definition 5.6. Let a• be an m-primary graded sequence of subspaces. We define the multiplicity e(a• ) to be: Ha• (k) . kn (It is not a priori clear that the limit exists.) The multiplicity e(a) of an m-primary ideal a is the multiplicity of its associated sequence (a, a2 , . . .). That is: e(a• ) = n! lim k→∞ Ha (k) . kn (Note that by Remark 5.5 the limit exists in this case.) e(a) = n! lim k→∞ The notion of multiplicity comes from the following basic example: Example 5.7. Let a be an m-primary subspace in the local ring R = OX,p of a point p in an irreducible variety X over an algebraically closed filed k. Let f1 , . . . , fn be generic elements in a. Then the multiplicity e(a) is equal to the intersection multiplicity at p of the hypersurfaces Hi = {x | fi (x) = 0}, i = 1, . . . , n. 12 In Section 7 we use the material in Section 4 to give a formula for e(a• ) in terms of covolume of a convex region. One can also define the notion of mixed multiplicity for m-primary ideals as the polarization of the Hilbert-Samuel multiplicity e(a), i.e. it is the unique function e(a1 , . . . , an ) which is invariant under permuting the arguments, is multi-additive with respect to product, and for any m-primary ideal a the mixed multiplicity e(a, . . . , a) coincides with e(a). In fact one can show that in the above definition of mixed multiplicity the ai need not be ideals and it suffices for them to be m-primary subspaces. 
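Anticipating the geometric description recalled next, here is a small example we add: in R = k[x, y](0) take the m-primary ideals a1 = (x, y^2) and a2 = (x^2, y). Each has multiplicity 2 (two generic members of a1 have the form αx + βy^2 + higher-order terms, and two such curves meet at the origin with multiplicity 2). However, generic members f1 ∈ a1 and f2 ∈ a2 have linear parts proportional to x and to y respectively, so they define smooth curves meeting transversally at the origin, and the mixed multiplicity is e(a1, a2) = 1. Multi-additivity then gives e(a1 a2) = e(a1, a1) + 2 e(a1, a2) + e(a2, a2) = 2 + 2 + 2 = 6, which is indeed the multiplicity of a1 a2 = (x^3, xy, y^3); the case n = 2 of inequality (5) reads 4 = e(a1, a1) e(a2, a2) ≥ e(a1, a2)^2 = 1 here.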
Similar to multiplicity we have the following geometric meaning for the notion of mixed multiplicity when R = OX,p is the local ring of a point p on an n-dimensional algebraic variety X. Take m-primary subspaces a1 , . . . , an in R. The mixed multiplicity e(a1 , . . . , an ) is equal to the intersection multiplicity, at the origin, of the hypersurfaces Hi = {x | fi (x) = 0}, i = 1, . . . , n, where each fi is a generic function from the ai . 6. Case of monomial ideals and Newton polyhedra In this section we discuss the case of monomial ideals. It is related to the classical notion of Newton polyhedron of a power series in n variables. We will see that our Theorem 4.10 in this case immediately recovers (and generalizes) the local version of celebrated theorem of Bernstein-Kushnirenko ([Kushnirenko76] and [Arnold-Varchenko-Guseinzade85, Section 12.7]). Let R be the local ring of an affine toric variety at its torus fixed point. The algebra R can be realized as follows: Let C ⊂ Rn be an n-dimensional strongly convex rational polyhedral cone with apex at the origin, that is, C is an n-dimensional convex cone generated by a finite number of rational vectors and it does not contain any lines through the origin. Consider the semigroup algebra over k of the semigroup of integral points S = C ∩ Zn . In other words, consider P the algebra α of Laurent polynomials consisting of all the f of the form f = α∈C∩Zn cα x , where we have used the shorthand notation x = (x1 , . . . , xn ), α = (a1 , . . . , an ) and xα = xa1 1 · · · xann . Let R be the localization of this Laurent polynomial algebra at the maximal ideal m generated by non-constant monomials. (Similarly instead of R we can take its completion at the maximal ideal m which is an algebra of power series .) Definition 6.1. Let a be an m-primary monomial ideal in R, that is, an m-primary ideal generated by monomials. To a we can associate a subset I(a) ⊂ C ∩ Zn by I(a) = {α | xα ∈ a}. The convex hull Γ(a) of I(a) is usually called the Newton polyhedron of the monomial ideal a. It is a convex unbounded polyhedron in C, moreover it is a C-convex region. The Newton diagram of a is the union of bounded faces of its Newton polyhedron. Remark 6.2. It is easy to see that if a is an ideal in R then I = I(a) is a semigroup ideal in S = C ∩ Zn , that is, if x ∈ I and y ∈ S then x + y ∈ I. Let a be an m-primary monomial ideal. Then for any k > 0 we have I(ak ) = k ∗ I(a). It follows from Proposition 4.9 that I• defined by Ik = k ∗ I(a) is a 13 primary graded sequence in S = C ∩ Zn and the convex region Γ(I• ) associated to I• coincides with the Newton polyhedron Γ(a) = conv(I(a)) defined above. More generally, let a• be an m-primary graded sequence of monomial ideals in R. Associate a graded sequence I• = I(a• ) in S to a• by: Ik = I(ak ) = {α | xα ∈ ak }. Clearly for any k we have k ∗ I(a1 ) = I(ak1 ) ⊂ I(ak ). From the above we then conclude the following: Proposition 6.3. The graded sequence I• = I(a• ) is a primary graded sequence in the sense of Definition 4.3. Let Γ(a• ) denote the convex region associated to the primary graded sequence I• = I(a• ) (Definition 4.7). We make the following important observation that a• 7→ Γ(a• ) is additive with respect to the product of graded sequences of monomial ideals. Proposition 6.4. Let a• , b• be m-primary graded sequences of monomial ideals in R. Then I(a• b• ) = I(a• ) + I(b• ). It follows from Proposition 4.12 that: Γ(a• b• ) = Γ(a• ) + Γ(b• ). 
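To fix ideas, here is a concrete case of Definition 6.1 that we add: in two variables, with C = R^2_{≥0}, take a = (x^3, xy, y^2). Then I(a) consists of the lattice points α with α ≥ (3, 0), (1, 1) or (0, 2) componentwise, the Newton polyhedron is Γ(a) = conv{(3, 0), (1, 1), (0, 2)} + R^2_{≥0}, and the Newton diagram consists of the two segments joining (0, 2) to (1, 1) and (1, 1) to (3, 0). The complement C \ Γ(a) is the quadrilateral with vertices (0, 0), (3, 0), (1, 1), (0, 2), whose area is 5/2, so Theorem 6.5 below gives e(a) = 2! · 5/2 = 5.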
From Proposition 6.3, Proposition 6.4 and Theorem 4.10 we readily obtain the following. Theorem 6.5. Let a• be an m-primary graded sequence of monomial ideals in R. Then: e(a• ) = n! covol(Γ(a• )). In particular, if a is an m-primary monomial ideal then: e(a) = n! covol(Γ(a)). Here Γ(a) is the Newton polyhedron of a i.e. the convex hull of I(a). Theorem 6.6. Let a1 , . . . , an be m-primary monomial ideals in R. Then the mixed multiplicity e(a1 , . . . , an ) is given by: e(a1 , . . . , an ) = n! CV (Γ(a1 ), . . . , Γ(an )), where as before CV denotes the mixed covolume of cobounded convex regions. Remark 6.7. Using Theorem 4.13 one can define the mixed multiplicity of mprimary graded sequences of monomial ideals. Then Theorem 6.6 can immediately be extended to mixed multiplicities of m-primary graded sequences of monomial ideals. One knows that the mixed multiplicity of an n-tuple (a1 , . . . , an ) of m-primary subspaces in R gives the intersection multiplicity, at the origin, of hypersurfacs Hi = {x | fi (x) = 0}, i = 1, . . . , n, where each fi is a generic element from the subspace ai . Theorem 6.6 then gives the following corollary. Corollary 6.8 (Local Bernstein-Kushnirenko theorem). Let a1 , . . . , an be m-primary monomial ideals in R. Consider a system of equations f1 (x) = · · · = fn (x) = 0 where each fi is a generic element from ai . Then the intersection multiplicity at the origin of this system is equal to n! covol(Γ(a1 ), . . . , Γ(an )). 14 Remark 6.9. (i) When R is the algebra of polynomials k[x1 , . . . , xn ](0) localized at the origin (or the algebra of power series localized at the origin), i.e. the case corresponding to the local ring of a smooth affine toric variety, Corollary 6.8 is the local version of the classical Bernstein-Kushnirenko theorem ([Arnold-Varchenko-Guseinzade85, Section 12.7]). (ii) As opposed to the proof above, the original proof of the Kushnirenko theorem is quite involved. (iii) Corollary 6.8 has been known to the second author since the early 90’s (cf. [Khovanskii92]), although as far as the authors know it has not been published. 7. Main results Let R be a domain over a field k. Equip the group Zn with a total order respecting addition. Definition 7.1 (Valuation). A valuation v : R \ {0} → Zn is a function satisfying: (1) For all 0 6= f, g ∈ R, v(f g) = v(f ) + v(g). (2) For all 0 6= f, g ∈ R with f + g 6= 0 we have v(f + g) ≥ min(v(f ), v(g)). (One then shows that when v(f ) 6= v(g), v(f + g) = min(v(f ), v(g)).) (3) For all 0 6= λ ∈ k, v(λ) = 0. We say that v has one-dimensional leaves if whenever v(f ) = v(g), there exists λ ∈ k with v(g + λf ) > v(g). From definition S = v(R \ {0}) ∪ {0} is an additive subsemigroup of Zn which we call the value semigroup of (R, v). Any valuation on R extends to the field of fractions K of R by defining v(f /g) = v(f ) − v(g). The set Rv = {0 6= f ∈ K | v(f ) ≥ 0} ∪ {0} is a local subring of K called the valuation ring of v. Also mv = {0 6= f ∈ K | v(f ) > 0} ∪ {0} is the maximal ideal in Rv . The field Rv /mv is called the residue field of v. One can see that v has one-dimensional leaves if and only if the residue field of v is k. Definition 7.2. For a subspace a in R define I = I(a) ⊂ S by: I = {v(f ) | f ∈ a \ {0}}. Similarly, for a graded sequence of subspaces a• in R, define I• = I(a• ) by: Ik = I(ak ) = {v(f ) | f ∈ ak \ {0}}. 
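For monomial ideals and the lowest-exponent valuation of Example 7.5 below, the sets I(ak) of Definition 7.2 are exactly the exponent sets used in Section 6, so Theorem 6.5 can be checked mechanically. The following Python sketch (ours; the function name and the restriction to two variables are illustrative assumptions) computes the covolume of the Newton polyhedron of an m-primary monomial ideal in k[x, y](0) from the exponents of its generators:

def covolume_2d(exponents):
    # exponents: list of (i, j) pairs, one per monomial generator x^i y^j.
    # The ideal is assumed m-primary, i.e. it contains pure powers of x and y.
    pts = sorted(set(p for p in exponents
                     if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                                for q in exponents)))

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    # Lower-left convex chain of the surviving exponents: the Newton diagram.
    chain = []
    for p in pts:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
            chain.pop()
        chain.append(p)

    # The complement of the Newton polyhedron is the polygon (0,0) -> chain;
    # its area is computed by the shoelace formula.
    poly = [(0, 0)] + chain
    s = sum(poly[i][0]*poly[(i+1) % len(poly)][1]
            - poly[(i+1) % len(poly)][0]*poly[i][1] for i in range(len(poly)))
    return abs(s) / 2

print(covolume_2d([(3, 0), (1, 1), (0, 2)]))   # 2.5, so e(a) = 2! * 5/2 = 5
print(covolume_2d([(2, 0), (0, 3)]))           # 3.0, so e(a) = 6

On a = (x^3, xy, y^2) this returns 5/2, matching the worked example above.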
For the rest of the paper we assume that R is a Noetherian local domain of dimension n such that R is an algebra over a field k isomorphic to the residue field R/m, where m is the maximal ideal of R. Moreover, we assume that R has a good valuation in the following sense: Definition 7.3. We say that a Zn -valued valuation v on R with one-dimensional leaves is good if the following hold: (i) The value semigroup S = v(R \ {0}) ∪ {0} generates the whole lattice Zn , and its associated cone C(S) is a strongly convex cone (recall that C(S) is the closure of convex hull of S). It implies that there is a linear function ℓ : Rn → R such that C(S) lies in ℓ≥0 and intersects ℓ−1 (0) only at the origin. (ii) We assume there exists r0 > 0 and a linear function ℓ as above such that for any f ∈ R if ℓ(v(f )) ≥ kr0 for some k > 0 then f ∈ mk . 15 The condition (ii) in particular implies that for any k > 0 we have: I(mk ) ∩ ℓ≥kr0 = S ∩ ℓ≥kr0 . In other words, the sequence M• given by Mk = I(mk ) is a primary graded sequence in the value semigroup S. The following is a generalization of Proposition 6.3. Proposition 7.4. Let v be a good valuation on R. Let a• be an m-primary graded sequence of subspaces in R. Then the associated graded sequence I• = I(a• ) is a primary graded sequence in the value semigroup S in the sense of Definition 4.3. Proof. Let m > 0 be such that mm ⊂ a1 . Then for any k > 0 we have mkm ⊂ ak which then implies that Mkm ⊂ Ik . This proves the claim.  Example 7.5. As in Section 6 let R be the local ring of an affine toric variety at its torus fixed point: Take C ⊂ Rn to be an n-dimensional strongly convex rational polyhedral cone with apex at the origin. Consider of Laurent P the algebra α c x . Then R is polynomials consisting of all the f of the form f = n α α∈C∩Z the localization of this algebra at the maximal ideal generated by non-constant monomials. Take a total order on Zn which respects addition and such that the semigroup S = C ∩ Zn is well-ordered. We also require that if ℓ(α) > ℓ(β) then α > β, for any α, β ∈ Zn . Such a total order can be constructed as follows: pick linear functions ℓ2 , . . . , ℓn on Rn such that ℓ, ℓ2 , . . . , ℓn are linearly independent, and for each i the cone C lies in (ℓi )≥0 . Given α, β ∈ Zn define α > β if ℓ(α) > ℓ(β), or ℓ(α) = ℓ(β) and ℓ2 (α) > ℓ2 (β), and so on. Now one defines a (lowest term) P valuation v on the algebra R with values in S = C ∩ Zn as follows: For f = α∈S cα xα put: v(f ) = min{α | cα 6= 0}. Clearly v extends to the field of fractions of Laurent polynomials and in particular to R. Similarly v can be defined for formal power series and formal Laurent series. It is easy to see that v is a valuation with one-dimensional leaves on R. Let us show that it is moreover a good valuation. Take P 0 6= f ∈ R. Without loss of generality we can take f to be a Laurent series f = α∈S cα xα . Applying Proposition 4.9 to the sequence I• , where Ik = {α | xα ∈ mk }, we know that there exists r0 > 0 with the following property: if for some α ∈ S we have ℓ(α) ≥ kr0 then xα ∈ mk . On the other hand, if α ≤ β then ℓ(α) ≤ ℓ(β). Thus ℓ(β) ≥ kr0 and hence xβ ∈ mk . It follows that if ℓ(v(f )) ≥ kr0 then all the nonzero monomials in f lie in mk and hence f ∈ mk . This proves the claim that v is a good valuation on R. The arguments in Example 7.5 in particular show the following: Theorem 7.6. If R is a regular local ring then R has a good valuation. Proof. The completion R of R is isomorphic to an algebra of formal power series over the residue field k. 
The above construction gives a good valuation on R. One verifies that the restriction of this valuation to R is still a good valuation.  More generally, one has: Theorem 7.7. Suppose R is an analytically irreducible local domain (i.e. the completion of R has no zero divisors). Moreover, suppose that there exists a regular local ring S containing R such that S is essentially of finite type over R, R and 16 S have the same quotient field k and the residue field map R/mR → S/mS is an isomorphism. Then R has a good valuation. Proof. By Theorem 7.6, S has a good valuation. By the linear Zariski subspace theorem in [Hubl01, Theorem 1] or [Cutkosky-a, Lemma 4.3] v|R is a good valuation for R too.  Using Theorem 7.7 and as in [Cutkosky-a, Theorem 5.2] we have the following: Theorem 7.8. Let R be an analytically irreducible local domain over k. Then R has a good valuation. Proposition 7.9. Let I = I(a) be the subset of integral points associated to an m-primary subspace a in R. Then we have: dimk (R/a) = #(S \ I). Proof. Take m > 0 with mm ⊂ a and let r0 and ℓ be as in Definition 7.3. If ℓ(v(f )) > mr0 then f ∈ mm ⊂ a. Thus the set of valuations of elements in R \ a is bounded. In particular S \ I is finite. Let {v1 , . . . , vr } = S \ I. Let B = {b1 , . . . , br } ⊂ R be such that v(bi ) = vi , i = 1, . . . , r. WePclaim that no linear combination of b1 , . . . , br lies in a. By contradiction suppose i ci bi = a ∈ a. P Then v( i ci bi ) is equal to v(bj ) for some j. This implies that v(bj ) should lie in I which contradicts the choice of the vi . Thus the image of B in R/a is a linearly independent set. Among the set of elements in R that are not in the span of a and B take f with maximum v(f ). If v(f ) = v(b) for some b ∈ B, then we can subtract a multiple of b from f getting an element g with v(g) > v(f ) which contradicts the choice of f . Similarly v(f ) can not lie in I otherwise we can subtract an element of a from f to arrive at a similar contradiction. This shows that the set of images of elements of B in R/a is a k-vector space basis for R/a which proves the proposition.  Corollary 7.10. Let a• be an m-primary graded sequence of subspaces in R and put I• = I(a• ). We then have: e(a• ) = e(I• ). Definition 7.11. To the sequence of subspaces a• we associate a C-convex region Γ(a• ), which is the convex region Γ(I• ) associated to the primary sequence I• = I(a• ). The convex region Γ(a• ) depends on the choice of the valuation v. By definition the convex region Γ(a) associated to an m-primary subspace a is the convex region associated to the sequence of subspaces (a, a2 , a3 , . . .). Theorem 7.12. Let a• be an m-primary graded sequence of subspaces in R. Then: Ha• (k) = n! covol(Γ(a• )). k→∞ kn In particular, if a is an m-primary ideal we have e(a) = n! covol(Γ(a)). e(a• ) = n! lim The following superadditivity follows from Proposition 4.12. Note that I(a• ) + I(b• ) ⊂ I(a• b• ) (cf. Proposition 6.4). Proposition 7.13. Let a• , b• be two m-primary graded sequences of subspaces in R. We have: Γ(a• ) + Γ(b• ) ⊂ Γ(a• b• ). 17 From Theorem 7.12, Proposition 7.13 and Corollary 2.4 we readily obtain: Corollary 7.14. (Brunn-Minkowski for multiplicities) Let a• , b• be two m-primary graded sequences of subspaces in R. Then: e(a• )1/n + e(b• )1/n ≥ e(a• b• )1/n . (7) Remark 7.15. By Theorem 7.8 and Corollary 7.14 we obtain the Brunn-Minkowski inequality (7) for an analytically irreducible local domain R. 
But in fact the assumption that R is analytically irreducible is not necessary: Suppose R is not necessarily analytically irreducible. First by a reduction theorem the statement can be reduced to dim R = n = 2. In dimension 2, the inequality (7) implies that the mixed multiplicity of ideals e(·, ·), regarded as a bilinear form on the (multiplicative) semigroup of m-primary graded sequences of ideals, is positive semidefinite restricted to each local analytic irreducible component. But the sum of positive semidefinite forms is again positive semidefinite which implies that Corollary 7.14 should hold for R itself. As another corollary of Theorem 7.12 we can immediately obtain inequalities between the multiplicity of an m-primary ideal, multiplicity of its associated initial ideal and its length. Let R be a regular local ring of dimension n with a good valuation (as in Example 7.5 and Theorem 7.6). Corollary 7.16 (Multiplicity of an ideal versus multiplicity of its initial ideal). Let a be an m-primary ideal in R and let in(a) denote the initial ideal of a, that is, the monomial ideal in the polynomial algebra localized at the origin k[x1 , . . . , xn ](0) corresponding to the semigroup ideal I(a). We have: e(a) ≤ e(in(a)) ≤ n! dimk (R/a). k More generally, if in(a ) denote the monomial ideal in k[x1 , . . . , xn ](0) corresponding to the semigroup ideal I(ak ) then the sequence of numbers e(in(ak )) kn is decreasing and converges to e(a) as k → ∞. Proof. From definition one shows that Γ(in(a)) is the convex hull of I(a) (see Theorem 6.5). It easily follows that I(a) ⊂ Γ(in(a)) ⊂ Γ(a). We now notice that dimk (R/a) is the number of integral points in S \ I(a) that in turn is bigger than or equal to the volume of Rn≥0 \ Γ(in(a)) and hence the volume of Rn≥0 \ Γ(a). More generally, from the definition of Γ(a) we have an increasing sequence of convex regions: Γ(in(a)) ⊂ (1/2)Γ(in(a2 )) ⊂ · · · ⊂ Γ(a) = ∞ [ (1/k)Γ(in(ak )). k=1 Now from Theorem 7.12 we have e(a) = n! covol(Γ(a)) and for each k, e(in(ak )) = n! covol(Γ(in(ak ))). This finishes the proof.  The inequality e(a) ≤ n! dimk (R/a) is a special case of an inequality of Lech [Lech64, Theorem 3]. See also Lemma 1.3 in [Fernex-Ein-Mustaţă04]. 18 References [Arnold-Varchenko-Guseinzade85] Arnold, V. I.; Gusein-Zade, S. M.; Varchenko, A. N. Singularities of differentiable maps. Modern Birkhäuser Classics. Birkhäuser/Springer, New York, 2012. [Burago-Zalgaller88] Burago, Yu. D.; Zalgaller, V. A. Geometric inequalities. Translated from the Russian by A. B. Sosinskiı̆. Grundlehren der Mathematischen Wissenschaften, 285. Springer Series in Soviet Mathematics (1988). [Cutkosky-a] Cutkosky, S. D. Multiplicities Associated to Graded Families of Ideals. arXiv:1206.4077. To appear in Journal of Algebra and Number Theory. [Cutkosky-b] Cutkosky, S. D. Multiplicities of graded families of linear series and ideals. arXiv:1301.5613. [Cutkosky-c] Cutkosky, S. D. Asymptotic multiplicities. arXiv:1311.1432. [Fernex-Ein-Mustaţă04] de Fernex, T.; Ein, L.; Mustaţă, M. Multiplicities and log canonical threshold. J. Algebraic Geom. 13 (2004), no. 3, 603615. [Fillastre13] Fillastre, F.; Fuchsian convex bodies: basics of BrunnMinkowski theory. Geom. Funct. Anal. 23 (2013), no. 1, 295–333. [Fulger] Fulger, M. Local volumes on normal algebraic varieties. arXiv:1105.2981. [Hubl01] Hübl, R. Completions of local morphisms and valuations. Math. Z. 236 (2001), no. 1, 201–214. [Kaveh-Khovanskii12] Kaveh, K.; Khovanskii, A. G. 
Newton-Okounkov bodies, semigroups of integral points, graded algebras and intersection theory. Annals of Mathematics, 176 (2012), 1–54. [Kaveh-Khovanskii] Kaveh, K.; Khovanskii, A. G. On mixed multiplicities of ideals. arXiv:1310.7979. [Khovanskii88] Khovanskii, A. G. Algebra and mixed volumes. Appendix 3 in: Burago, Yu. D.; Zalgaller, V. A. Geometric inequalities. Translated from the Russian by A. B. Sosinskii. Grundlehren der Mathematischen Wissenschaften, 285. Springer Series in Soviet Mathematics (1988). [Khovanskii92] Khovanskii, A. G. Newton polyhedron, Hilbert polynomial and sums of finite sets. (Russian) Funktsional. Anal. i Prilozhen. 26 (1992), no. 4, 57–63, 96; translation in Funct. Anal. Appl. 26 (1992), no. 4, 276–281. [Khovanskii-Timorin-a] Khovanskii, A. G.; Timorin, V. Alexandrov-Fenchel inequality for coconvex bodies. arXiv:1305.4484. [Khovanskii-Timorin-b] Khovanskii, A. G.; Timorin, V. On the theory of coconvex bodies. arXiv:1308.1781. [Kushnirenko76] Kushnirenko, A. G. Polyèdres de Newton et nombres de Milnor. (French) Invent. Math. 32 (1976), no. 1, 1–31. [Lazarsfeld-Mustaţă09] Lazarsfeld, R.; Mustaţă, M. Convex bodies associated to linear series. Ann. Sci. Ec. Norm. Super. (4) 42 (2009), no. 5, 783–835. [Lech64] Lech, C. Inequalities related to certain couples of local rings. Acta Math. 112 (1964), 69–89. [Mustaţă02] Mustaţă, M. On multiplicities of graded sequences of ideals. J. of Algebra 256 (2002), 229–249. [Okounkov96] Okounkov, A. Brunn-Minkowski inequality for multiplicities. Invent. Math. 125 (1996), no. 3, 405–411. [Okounkov03] Okounkov, A. Why would multiplicities be log-concave? The orbit method in geometry and physics (Marseille, 2000), 329–347, Progr. Math., 213, Birkhäuser Boston, Boston, MA, 2003. [Rees-Sharp78] Rees, D.; Sharp, R. Y. On a theorem of B. Teissier on multiplicities of ideals in local rings. J. London Math. Soc. (2) 18 (1978), no. 3, 449–463. [Samuel-Zariski60] Samuel, P.; Zariski, O. Commutative algebra. Vol. II. Reprint of the 1960 edition. Graduate Texts in Mathematics, Vol. 29. [Teissier77] Teissier, B. Sur une inégalité pour les multiplicités, (Appendix to a paper by D. Eisenbud and H. Levine). Ann. Math. 106 (1977), 38–44. [Teissier78] Teissier, B. Jacobian Newton Polyhedra and equisingularity. arXiv:1203.5595.
Department of Mathematics, School of Arts and Sciences, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, PA 15260, U.S.A. E-mail address: [email protected]
Department of Mathematics, University of Toronto, Toronto, Canada; Moscow Independent University; Institute for Systems Analysis, Russian Academy of Sciences E-mail address: [email protected]
1 Broadcast Channels with Privacy Leakage PSfrag replacements Constraints arXiv:1504.06136v3 [] 28 May 2017 Ziv Goldfeld, Student Member, IEEE, Gerhard Kramer, Fellow, IEEE, and Haim H. Permuter, Senior Member, IEEE Abstract—The broadcast channel (BC) with one common and two private messages with leakage constraints is studied, where leakage rate refers to the normalized mutual information between a message and a channel symbol string. Each private message is destined for a different user and the leakage rate to the other receiver must satisfy a constraint. This model captures several scenarios concerning secrecy, i.e., when both, either or neither of the private messages are secret. Inner and outer bounds on the leakage-capacity region are derived when the eavesdropper knows the codebook. The inner bound relies on a Marton-like code construction and the likelihood encoder. A Uniform Approximation Lemma is established that states that the marginal distribution induced by the encoder on each of the bins in the Marton codebook is approximately uniform. Without leakage constraints the inner bound recovers Marton’s region and the outer bound reduces to the UVW-outer bound. The bounds match for semi-deterministic (SD) and physically degraded (PD) BCs, as well as for BCs with a degraded message set. The leakage-capacity regions of the SD-BC and the BC with a degraded message set recover past results for different secrecy scenarios. A Blackwell BC example illustrates the results and shows how its leakage-capacity region changes from the capacity region without secrecy to the secrecy-capacity regions for different secrecy scenarios. Index Terms—Broadcast channel, Marton’s inner bound, Privacy Leakage, Secrecy, Physical-layer Security. I. I NTRODUCTION Public and confidential messages are often transmitted over the same channel. However, the underlying principles for constructing codes without and with secrecy are different. Without secrecy constraints, codes should use all available channel resources to reliably convey information to the destinations. Confidential messages, on the other hand, require that some channel resources are allocated to preserve security. We study relationships between the coding strategies and the fundamental limits of communication with and without secrecy. To this end we simultaneously account for secret and non-secret transmissions over a two-user broadcast channel (BC) by means of privacy leakage constraints (Fig. 1). Z. Goldfeld and H. H. Permuter were supported in part by European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n◦ 337752, in part by the Israel Science Foundation and in part by the Cyber Security Research Center within the Ben-Gurion University of the Negev. G. Kramer was supported by an Alexander von Humboldt Professorship endowed by the German Federal Ministry of Education and Research. Z. Goldfeld and H. H. Permuter are with the Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel ([email protected], [email protected]). G. Kramer is with the Institute for Communications Engineering, Technical University of Munich, Munich D-80333, Germany ([email protected]). Channel (M0,M1,M2) Enc f (n) Y1 (n) Dec φ1 X W Y1 ,Y2 |X Y2 (1) (M̂0 , M̂1 ) ℓ2 (cn ) ≤ nL2 (2) (n) Dec φ2 (M̂0 , M̂2 ) ℓ1 (cn ) ≤ nL1 Fig. 
1: A BC with a common message and privacy leakage constraints ℓ1 (cn ) , Icn (M1 ; Y2 ) ≤ nL1 and ℓ2 (cn ) , Icn (M2 ; Y1 ) ≤ nL2 , where Icn denotes that the mutual information term are taken with respect to  the distribution (n) (n) induced by the code cn = f (n) , φ1 , φ2 . A. Past Work Information theoretic secrecy was introduced by Shannon [1] who studied communication between a source and a receiver in the presence of an eavesdropper. Wyner modeled secret communication over noisy channels (also known as physical layer security) when he introduced the degraded wiretap channel (WTC) and derived its secrecy capacity [2]. Csiszár and Körner [3] extended Wyner’s result to a general BC where the source also transmits a common message to both users. The development of wireless communication, whose inherent open nature makes it vulnerable to security attacks, has inspired a growing interest in the fundamental limits of secure communication. Multiuser settings with secrecy were extensively treated in the literature. Broadcast and interference channels with two confidential messages were studied in [4], where inner and outer bounds on the secrecy-capacity region of both problems were derived. The secrecy-capacity region for the semideterministic (SD) BC was established in [5]. The capacity region of a SD-BC where only the message of the stochastic user is kept secret from the deterministic user was derived in [6]. The opposite case, i.e., when the message of the deterministic user is confidential, was solved in [7]. Secret cooperative communication was considered in [8], where the authors derive inner and outer bounds on the rate-equivocation region of the relay-BC (RBC) with one or two confidential messages. Gaussian multiple-input multiple-output (MIMO) BCs and WTCs were studied in [9]–[14], while [15]–[17] focused on BCs with an eavesdropper as an external entity from which all messages are kept secret. Many of the aforementioned achievability results were derived by combining Marton’s coding for BCs [18], [19] and 2 Wyner’s wiretap coding [2], [3]. Marton coding usually uses a joint typicality encoder (JTE) whose success is guaranteed by invoking the Mutual Covering Lemma (MCL) [20, Lemma 8.1]. However, the JTE and the MCL have a cumbersome security analysis. Several past works avoid the complications by performing the security analysis without conditioning on the random codebook. This significantly simplifies the derivations, but one would like to have security even if the codebooks are known by the eavesdropper. B. Model We study a two-user BC over which a common message for both users and a pair of private messages, each destined for a different user, are transmitted. A limited amount of rate of each private message may be leaked to the opposite receiver. The leaked rate is quantified as the normalized mutual information between the message of interest and the channel output sequence at the opposite user. Setting either leakage to zero or infinity reduces the problem to the case where the associated message is confidential or non-confidential, respectively. Thus, our problem setting specializes to all four scenarios concerning secrecy: when both, either or neither of the private messages are secret. We derive inner and outer bounds on the leakagecapacity region of the BC. The inner bound relies on a leakageadaptive coding scheme that accounts for the codebook being known to the eavesdropper. 
The derived bounds are tight for SD-BCs, physically degraded (PD) BCs, and BCs with a degraded message set, thus characterizing their leakage-capacity regions. Furthermore, we derive a condition for identifying the privacy leakage threshold above which the inner bound saturates. Various past results are captured as special cases. By taking the leakage thresholds to infinity, our inner bound recovers Marton’s inner bound with a common message [21], which is tight for every BC with a known capacity region. Making the leakage constraint inactive in our outer bound recovers the UVW-outer bound [22] or the New-Jersey outer bound [23]. These bounds are at least as good as previously known bounds (see [24]–[26]). The leakage-capacity region of the SD-BC reduces to each of the regions in [5], [6], [21] and [27] by discarding the common message and choosing the leakage constraints appropriately. The capacity result also recovers the optimal regions for the BC with confidential messages [3] and the BC with a degraded message set (without secrecy) [28]. Finally, a Blackwell BC (BW-BC) [29], [30] illustrates the results and visualizes the transition of the leakage-capacity region from the capacity region without secrecy to the secrecy-capacity regions for different secrecy scenarios. C. Organization This paper is organized as follows. Section II establishes notation and preliminary definitions. In Section III we discuss the need for replacing the JTE with the likelihood encoder and state a Uniform Approximation Lemma. Section IV describes the BC with privacy leakage constraints, states inner and outer bounds on the leakage-capacity region and characterize the optimal regions of several special cases. In Section V we discuss past results that are captured within our framework and Section VI visualizes the results by means of a BW-BC example. Finally, Section VIII summarizes the main achievements and insights of this work. II. N OTATIONS AND P RELIMINARY D EFINITIONS A. Notations We use the following notations. As customary N is the set of natural numbers (which does not include 0), while R denotes the reals. We further define R+ = {x ∈ R|x ≥ 0} and R++ = R \ {0}. Given two real  numbers a, b, we denote by [a : b] the set of integers n ∈ N ⌈a⌉ ≤ n ≤ ⌊b⌋ . Calligraphic letters such are X denote sets, the complement of X is denoted by X c , while |X | stands for its cardinality. X n denoted the n-fold Cartesian product of X . An element of X n is denoted by xn = (x1 , x2 , . . . , xn ); whenever the dimension n is clear from the context, vectors (or sequences) are denoted by boldface letters, e.g., x. A substring of x ∈ X n is denoted by xji = (xi , xi+1 , . . . , xj ), for 1 ≤ i ≤ j ≤ n; when i = 1, the subscript is omitted. We also define xn\i = (x1 , . . . , xi−1 , xi+1 , . . . , xn ). Let X , F , P be a probability space, where X is the sample space, F is the σ-algebra and Pis the probability measure. Random variables over X , F , P are denoted by uppercase letters, e.g., X, with conventions for random vectors similar to those for deterministic vectors. The probability of an event A ∈ F is denoted by P(A), while P(A B ) denotes the conditional probability of A given Bn . We use 1A to denote the indicator function of A. The set of all probability mass functions (PMFs) on a finite set X is denoted by P(X ), i.e., ) ( X P (x) = 1] . (1) P(X ) = P : X → [0, 1] x∈X PMFs are denoted by the uppercase letters such as P or Q, with a subscript that identifies the random variable and its possible conditioning. 
For example, for a discrete probability  space X , F , P and two correlated random variables X and Y over that space, we use PX , PX,Y and PX|Y to denote, respectively, the marginal PMF of X, the joint PMF of (X, Y ) and the conditional PMF of X given Y . In particular, PX|Y represents the stochastic matrix whose  elements are given by PX|Y (x|y) = P X = x|Y = y . Expressions such as PX,Y = PX PY |X are to be understood as PX,Y (x, y) = PX (x)PY |X (y|x), for all (x, y) ∈ X × Y. Accordingly, when three random variables X, Y and Z satisfy PX|Y,Z = PX|Y , they form a Markov chain, which we denote by X − Y − Z. We omit subscripts if the arguments of a PMF are lowercase versions of the random variables. The support of a PMF P and the expectation of  arandom variable X ∼ P are denoted by supp(P ) and EP X , respectively; when the distribution of Xis clear from the context we write its expectation simply as E X . Similarly, HP and IP denote entropy and mutual information that are calculated with respect to an underlying PMF P . For a discrete measurable space (X , F ), a PMF Q ∈ P(X ) gives rise to a probability measure on (XP , F ), which we denote by PQ ; accordingly, PQ A) = x∈A Q(x), 3 for every A ∈ F . For a random vector X n , if the entries of X n are drawn in an independent and identically distributed (i.i.d.) manner according Qn to PX , then for every x ∈ X n we have PX n (x) = i=1 PX (xi ) and we write n n n (x). Similarly, PX n (x) = PX Qnif for every (x, y) ∈ X × Y we have PY n |X n (y|x) = i=1 PY |X (yi |xi ), then we write PY n |X n (y|x) = PYn|X (y|x). The conditional product PMF PYn|X given a specific sequence x ∈ X n is denoted by PYn|X=x . Let X and Y be finite sets. The empirical PMF νx of a sequence x ∈ X n is N (x|x) (2) n Pn where N (x|x) = i=1 1{xi =x} . We use Tδn (PX ) to denote the set of letter-typical sequences of length n with respect to the PMF PX ∈ P(X ) and the positive number δ [31, Chapter 3], i.e., we have n o Tδn (PX ) = x ∈ X n νx (x)−PX (x) ≤ δPX (x), ∀x ∈ X . (3) Furthermore, for a joint PMF PX,Y ∈ P(X × Y) a δ > 0 and a fixed sequence y ∈ Y n , we define n o Tδn (PX,Y |y) = x ∈ X n (x, y) ∈ Tδn (PX,Y ) . (4) νx (x) , Another notion used throughout this work is information density. Let (X ×Y, F , PX,Y ) be a probability space, where X and Y are arbitrary sets. The information density iP : X ×Y → R++ of PX,Y is given by iP (x; y) = log dPX,Y (x, y) dPX PY (5a) dP is the Radon-Nikodym derivative of P with respect where dQ to Q and PX and PY are the marginal probability measures induced by PX,Y on X and Y, respectively. If X and Y are discrete and PX,Y ∈ P(X × Y), then (5a) simplifies as iP (x; y) = log PX,Y (x, y) . PX (x)PY (y) Definition 2 (Fidelity): Let (X , F ) be a measurable space and P and Q be two probability measures on F , such that P ≪ Q, i.e., P is absolutely continuous with respect to Q. The fidelity between P and Q is s dP . (7a) F(P, Q) = EQ dQ If the sample space X is countable and P, Q ∈ P(X ), then (7a) reduces to Xp P (x)Q(x). (7b) F(P, Q) = x∈X Fidelity satisfies F(P, Q) ∈ [0, 1] and is related to the TV as follows [32, Lemma 1]. Lemma 1 (Fidelity and Total Variation): For any two probability measures P and Q over the same measurable space (X , F ), we have p 1 − F(P, Q) ≤ ||P − Q||TV ≤ 1 − F 2 (P, Q). (8) Via Jensen’s inequality, the right-most inequality in (8) extends to the expected values of the fidelity and the TV between two conditional distributions as follows [32, Lemma 2], [33, Lemma 7]. 
Lemma 2 (Extension to Expected Values): Let (Ω, G, µ) be a probability space, (X , F ) be a measurable space and P and Q be two transition probability kernels from (Ω, G) to (X , F ).1 We have r  2 Eµ P − Q TV ≤ 1 − Eµ F P, Q . (9)   By virtue of Lemma 2, if Pn n∈N and Qn n∈N are two sequences of Markov kernels2 then  (10a) Eµn F Pn , Qn −−−−→ 1 n→∞ implies Eµn Pn − Qn (5b) Whenever the underlying distribution is clear from the context, we drop the subscript P from iP . −−−−→ 0. TV n→∞ (10b) III. U NIFORM D ISTRIBUTION A PPROXIMATION L EMMA A. Marton Coding B. Measures of Distribution Proximity We measure the proximity between two distributions by using total variation (TV). Definition 1 (Total Variation): Let (X , F ) be a measurable space and P and Q be two probability measures on F . The total variation between P and Q is ||P − Q||TV = sup P (A) − Q(A) . (6a) A∈F If the sample space X is countable and P, Q ∈ P(X ), then (6a) reduces to 1X P (x) − Q(x) . (6b) ||P − Q||TV = 2 x∈X We also consider the fidelity between two distributions. A Marton code involves two independent codebooks from which a pair of codewords is usually selected by means of a JTE [19]. A standard tool for the encoding error probability analysis is the MCL [20, Lemma 8.1]. While the JTE and the MCL are convenient for analysing reliability, security (equivocation or leakage) analysis seems cumbersome. Several past works employ Marton coding without conditioning the security analysis on the random codebook. Attempting to repeat the steps from these derivations while 1 A transition probability kernel between two measurable spaces (Ω, G) and (X , F ) is a mapping κ : Ω × F → [0, 1] such that: (i) ω 7→ κ(ω, A) is a G-measurable function for every A ∈ F ; (ii) A 7→ κ(ω, A) is a probability measure on (X , F ) for every ω ∈ Ω. 2 The formal definition is in accordance with Lemma 2 where we re place (Ω, G, µ), (X , F ), P and Q with the sequences (Ωn , Gn , µn ) n ,  (Xn , Fn ) n , {Pn }n and {Qn }n , respectively. 4 conditioning the equivocation on the codebook turns out to be problematic. The principal difficulty is showing that the marginal distribution of an index chosen by the JTE is approximately uniform.3 More precisely, let the output index pair of the encoder be (I, J); the corresponding alphabets are In and Jn . Several existing proofs rely on the following relations holding true: H(I|Cn ) ≥ log |In | − nδn H(J|Cn ) ≥ log |Jn | − nδn , (11) where Cn is the random codebook4 and limn→∞ δn = 0. Proving these inequalities while using the JTE is cumbersome. A potential proof would rely on analysing the output distribution of the JTE. However, the structure of this distribution quickly makes the analysis intractable. ; B. Likelihood Encoder Our coding scheme also uses a Marton code. We circumvent the problems with the JTE by replacing it with a likelihood encoder for Marton codebooks [7], [34], [35]. A similar encoding rule was used in [32], [36] under the name stochastic mutual information encoder. This encoder induces a probability distribution over the possible pairs of indices (or, equivalently, codewords). Given two independently generated bins, the probability of each codeword pair is proportional to the ratio of their joint probability (under the coding distribution) to the product of the marginal distributions. Namely, if ui and vj are the i-th and j-th codewords for each bin, respectively, and QU,V is the coding distribution (the codebooks are i.i.d. 
samples of its marginals QU and QV ), then the encoder chooses (i, j) with probability proportional to QnU,V (ui , vj ) . QnU (ui )QnV (vj ) Nonetheless, as can be seen in the proof of Lemma 3 (Section VII-A), the derivation is valid for random variables with general alphabets. Fix Q  W,U,V ∈P(W × U × V) and for every n ∈  N define  In , 1 : 2nS1 , Jn , 1 : 2nS2 and Kn = 1 : 2nT , where S1 , S2 , T ∈ R+ . Let W ∼ QnW and fix w ∈ W n with  (n) QnW (w) > 0. Let BU (w) , Ui (w) i∈In be a random codebook that comprises |In | vectors of length n that are i.i.d. according to QnU|W =w . Furthermore, for every k ∈ Kn let  (n) BV (k, w) , Vj,k (w) j∈Jn be a random codebook with i.i.d. codewords according to QnV |W =w . The codebooks in the o n (n) (n) set BV (w) , BV (k, w) are conditionally indepenk∈Kn dent of one another given W = w. Forn any w ∈ W n with o (n) (n) QnW (w) > 0 we also define Bn (w) , BU (w), BV (w)  and finally we set Bn , W, Bn (W) . (n) λ(Bn ) = QnW (w) C. Setup and Statement of the Lemma For notational convenience we formulate the setup and state the result in terms of random variables with finite alphabets. Y QnU|W ui (w) w i∈In (12) Thus, the further the joint distribution is from the product of the marginals the more favorable the corresponding pair of codewords is. Replacing the JTE with the likelihood encoder comes at no cost in reliability. This is because, like the JTE, if the sum of the bin rates is greater than I(U ; V ), then the likelihood encoder chooses jointly typical codeword pairs with high probability [32, Theorem 3]. The leakage analysis, on the other hand, tremendously simplifies. This allows to derive the achievability result for the BC with privacy leakage constraints. Key to the leakage analysis is that the marginal distribution of the indices at the encoder’s output are indeed approximately uniform. This relation is formulated in the next subsection and the proof is provided in Section VII-A. (n) A realization of BU (w) or BV (k, w), k ∈ Kn ,  (n) , ui (w) i∈In and is denoted by BU (w)  (n) BV (k, w) , vj,k (w) j∈Jn , respectively. In accordance o n (n) (n) to the above, we also set BV (w) , BV (k, w) = k∈K on n  (n) (n) vj,k (w) (j,k)∈Jn ×Kn , Bn (w) , BU (w), BV (w) and  Bn , w, Bn (w) . Letting Bn denote the collection of all possible realization of Bn , the above construction induces a PMF λ ∈ P(Bn ) on Bn that is given by  Y  QnV |W vj,k (w) w . (j,k)∈Jn ×Kn (13) Now, let K be a random variable independent of Bn and uniformly distributed over Kn . For each Bn ∈ Bn and k ∈ Kn , the index pair (i, j) ∈ In × Jn is drawn according to  iQn ui (w);vj,k (w) w 2 (B ,k)  , (14) PI,Jn (i, j) = P 2iQn uℓ̄ (w);vj̄,k (w) w (ℓ̄,j̄)∈In ×Jn where iQn (u; v|w) = log QnU,V |W (u, v|w) QnU|W (u|w)QnV |W (v|w) . (15) (B ,k) PI,Jn describes our likelihood encoder. Finally, on account of (13)-(14) we set 1 (B ,k) PBn ,K,I,J (Bn , k, i, j) , λ(Bn ) P n (i, j), (16) |Kn | I,J which induces a probability measure PP . 3 Without the conditioning, uniformity follows by symmetry. conditioning on Cn is not present in many existing works. Instead, the relations (11) were replaced with their unconditioned versions H(I) = log |I| and H(J) = log |J |. Although these relations are true, an unconditioned analysis does not imply achievability when the codebooks are known to the eavesdropper. 4 The The following lemma specifies sufficient conditions on the sizes of the index sets for approximating the induced marginal distribution of I with a uniform distribution over In . 
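Before stating the lemma, we remark that the encoding rule (12)-(14) is straightforward to simulate. The following Python sketch is an added illustration: the joint PMF, blocklength and bin sizes are arbitrary placeholders, and the conditioning variable W and the extra index k are omitted for brevity. It draws i.i.d. codebooks from the marginals and selects an index pair with probability proportional to the exponentiated information density, in the spirit of (14).

```python
# Illustration only: a simplified likelihood encoder without the
# conditioning sequence w or the inner-bin index k.
import numpy as np

rng = np.random.default_rng(0)
Q_UV = np.array([[0.4, 0.1],
                 [0.1, 0.4]])                       # Q_{U,V}(u, v), binary U, V
Q_U, Q_V = Q_UV.sum(axis=1), Q_UV.sum(axis=0)

n, num_i, num_j = 50, 8, 8                          # blocklength and bin sizes
U_book = rng.choice(2, size=(num_i, n), p=Q_U)      # codebook of u-codewords
V_book = rng.choice(2, size=(num_j, n), p=Q_V)      # codebook of v-codewords

# Log-weights: the information density of (15) for each codeword pair,
# log2 Q_UV^n(u_i, v_j) - log2 Q_U^n(u_i) - log2 Q_V^n(v_j).
log_w = np.zeros((num_i, num_j))
for i in range(num_i):
    for j in range(num_j):
        u, v = U_book[i], V_book[j]
        log_w[i, j] = np.sum(np.log2(Q_UV[u, v])
                             - np.log2(Q_U[u]) - np.log2(Q_V[v]))

# Normalize (stably) and draw one index pair, mimicking (14).
p = np.exp2(log_w - log_w.max())
p /= p.sum()
flat = rng.choice(num_i * num_j, p=p.ravel())
i_sel, j_sel = divmod(int(flat), num_j)
print(i_sel, j_sel)
```

Because the weights favor codeword pairs whose letter-by-letter statistics match Q_{U,V}, jointly typical pairs are overwhelmingly preferred, which is the intuition behind the reliability claim of [32, Theorem 3] mentioned above.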
To state (U) the result let pIn be the uniform distribution over In and note 5   (n) (n) for the WY1 ,Y2 |X BC with A code cn = f (n) , φ1 , φ2 that for every Bn ∈ Bn X 1 PI|Bn (i|Bn ) = |Kn | (j,k) ∈Jn ×Kn  2iQn ui (w);vj,k (w) w P 2iQn uℓ̄ (w);vj̄,k (w) (ℓ̄,j̄) ∈In ×Jn w . (17) Lemma 3 (Uniform Approximation Lemma): For any QW,U,V ∈ P(W × U × V) if  (18a) S2 + min S1 , T > IQW,U,V (U ; V |W ) then (U) EBn PI|Bn − pIn −−−−→ 0. TV n→∞ (18b) The Lemma is proven in Section VII-A via an analysis of the expected fidelity between the induced marginal distribution of I and the uniform distribution. Inspired by ideas from [32], we employ the Cauchy-Schwarz inequality and Jensen’s inequality to show that the expected fidelity converges to 1 with the blocklengh. The result of the lemma then follows by (10). privacy leakage constraints induces a PMF P (cn ) on M0 × M1 × M2 × X n × Y1n × Y2n × M̂01 × M̂02 , that is given by    (2) (1) P (cn ) m0 , m1 , m2 , x, y1 , y2 , m̂0 , m̂1 , m̂0 , m̂2 Y 1 = f (n) (x|m0 , m1 , m2 )WYn1 ,Y2 |X (y1 , y2 |x) (n) j=0,1,2 Mj  (j) × 1T . (n) j=1,2 (m̂0 ,mj )=φj (yj ) (19) The induced PMF gives rise to the probability measure PP (cn ) , which we abbreviate by Pcn . Similarly, we use the shorthand Icn instead of IP (cn ) to denote a mutual information expression taken with respect to P (cn ) . Definition 4 (Average Error Probability): The average error probability for an (n, R0 , R1 , R2 ) code cn is   o [ n  (j) Pe (cn ) , Pcn  M̂0 , M̂j 6= (M0 , Mj )  (20) j=1,2 IV. B ROADCAST C HANNELS WITH P RIVACY L EAKAGE C ONSTRAINTS A. Problem Setting  The X , Y1 , Y2 , WY1 ,Y2 |X : X → P(Y1 × Y2 ) BC with privacy leakage constraints is illustrated in Fig. 1. The channel has one sender and two receivers. The sender randomly chooses a triple (m0 , m uniformly and inde   indices   1 , m2 ) of pendently from the set 1 : 2nR0 × 1 : 2nR1 × 1 : 2nR2 and maps them to a sequence x ∈ X n , which is the channel input (the mapping may be random). The sequence x is transmitted over a BC with transition probability WY1 ,Y2 |X . The output sequence yj ∈ Yjn , where j = 1, 2, is received by decoder  (j) j. Decoder j produces a pair of estimates m̂0 , m̂j of (m0 , mj ). Remark 1 (Specific Classes of BCs): We sometimes specialize to the following classes of BCs: • • • Semi-Deterministic BCs: A BC is SD if its channel transition matrix factors as WY1 ,Y2 |X = 1{Y1 =y1 (X)} WY2 |X , where y1 : X → Y1 and WY2 |X : X → P(Y2 ). Physically-Degraded BCs: A BC is PD if its channel transition matrix factors as WY1 ,Y2 |X = WY1 |X WY2 |Y1 , where WY1 |X : X → P(Y1 ) and WY2 |Y1 : Y1 → P(Y2 ). Deterministic BCs: A BC is deterministic if its channel transition matrix factors as WY1 ,Y2 |X = 1{Y1 =y1 (X)}∩{Y2 =y2 (X)} , where yj : X → Yj , for j = 1, 2. Definition 3 (Code): An (n, R0 , R1 , R2 ) code cn for the BC with leakage constraints has:   (n) 1) Three message sets Mj , 1 : 2nRj , j = 0, 1, 2. (n) (n) (n) 2) A stochastic encoder f (n) : M0 × M1 × M2 → n P(X ). (n) (n) 3) Two decoding functions, φj : Yjn → M̂0j , where (n) (n) (n) M̂0j , M0 × Mj , for j = 1, 2.  (j) (n) where M̂0 , M̂j = φj (Yj ), for j = 1, 2. Definition 5 (Information Leakage Rate): The information leakage rate of M1 to receiver 2 under an (n, R0 , R1 , R2 ) code cn is 1 (21a) ℓ1 (cn ) , Icn (M1 ; Y2 ). n Similarly, the information leakage rate of M2 to receiver 1 under cn is 1 ℓ2 (cn ) , Icn (M2 ; Y1 ). (21b) n Definition 6 (Achievable Rates): Let (L1 , L2 ) ∈ R2+ . 
A rate triple (R0 , R1 , R2 ) ∈ R3+ is (L1 , L2 )-achievable if for any ǫ > 0 there exists a sufficiently large n ∈ N and an (n, R0 , R1 , R2 ) code cn such that Pe (cn ) ≤ ǫ ℓ1 (cn ) ≤ L1 + ǫ (22a) (22b) ℓ2 (cn ) ≤ L2 + ǫ. (22c) Definition 7 (Leakage-Capacity Region): The (L1 , L2 )leakage-capacity region C(L1 , L2 ) is the closure of the set of the (L1 , L2 )-achievable rates. Remark 2 (Inactive Leakage Constraints): Setting Lj = Rj , for j = 1, 2, makes (22b)-(22c) inactive and reduces the BC with privacy leakage constraints to the classic BC with a common message. This is a simple consequence of the nonnegativity of entropy, which implies that Icn (M1 ; Y2 ) ≤ nR1 and Icn (M2 ; Y1 ) ≤ nR2 always hold. To simplify notation we write Lj → ∞, j = 1, 2 to refer to leakage threshold values under which (22b)-(22c) are satisfied by default. B. Leakage-Capacity Results This section states inner and outer bounds on the (L1 , L2 )leakage-capacity region C(L1 , L2 ) of a BC. The bounds match for SD-BCs, BCs with a degraded message set and PD-BCs, 6 which characterizes the leakage-capacity regions for these three cases. We start with the inner bound. In the following, the transition probability WY1 ,Y2 |X describing the BC stays fixed unless stated otherwise. When specifying to particular instances of BCs (see Remark 1), we explicitly mention the corresponding structure of WY1 ,Y2 |X . Theorem 1 (Inner Bound): Let RI (L1 , L2 ) be the closure of the union of rate triples (R0 , R1 , R2 ) ∈ R3+ satisfying: n o R0 ≤ min I(U0 ; Y1 ), I(U0 ; Y2 ) (23a) R1 ≤ I(U1 ; Y1 |U0 )−I(U1 ; U2 , Y2 |U0 )+L1 (23b) n o R0 + R1 ≤ I(U1 ; Y1 |U0 )+min I(U0 ; Y1 ),I(U0 ; Y2 ) (23c) R2 ≤ I(U2 ; Y2 |U0 )−I(U2 ; U1 , Y1 |U0 )+L2 (23d) n o R0 + R2 ≤ I(U2 ; Y2 |U0 )+min I(U0 ; Y1 ),I(U0 ; Y2 ) (23e) X Rj ≤ I(U1 ; Y1 |U0 ) + I(Un2 ; Y2 |U0 ) o (23f) j=0,1,2 − I(U1 ; U2 |U0 ) + min I(U0 ;Y1 ), I(U0 ;Y2 ) where the union is over all PMFs QU0 ,U1 ,U2 ,X ∈ P(U0 × U1 × U2 × X ), each inducing a joint distribution QU0 ,U1 ,U2 ,X WY1 ,Y2 |X . The following inclusion holds: RI (L1 , L2 ) ⊆ C(L1 , L2 ). (24) The proof of Theorem 1 is given in Section VII-B and uses a leakage-adaptive Marton-like code construction. Rate-splitting is first used to decompose each private message Mj , j = 1, 2, into a public part M0j and a private part Mjj . A Marton codebook with an extra layer of bins is then constructed while treating (M0 , M10 , M20 ) as a public message and Mjj , for j = 1, 2, as private message j. The double-binning of the private messages permits joint encoding (outer layer) and controlling the total rate leakage to the other user (inner layer). In contrast to the classic Marton coding scheme [19] that employes a JTE, we execute joint encoding by means of the likelihood encoder from (14). Doing so doesn’t affect the reliability analysis (as the likelihood encoder chooses jointly typical pairs of codewords with high probability), but it is of consequence for analysing the leakage rate. The leakage analysis takes into account the rate leaked due to the decoding of the public message by both users. Also, additional leakage occurs due to the joint encoding process, which introduces correlation between the private message codewords. We account for the latter by relating the bin sizes in the inner and outer coding layers to the rate of the public parts M10 and M20 . The leakage analysis relies heavily on the structure of the likelihood encoder that lets us establish several crucial properties of our random coding experiment. 
The main challenge is showing that the induced marginal distribution describing the choice of the private message codewords is approximately uniform. This follows by virtue of the Uniform Approximation Lemma (Lemma 3). Remark 3 (Relation to Marton’s Region): In [21, Theorem 1] Gelfand and Pinsker generalized Marton’s inner bound [18] to include a common message. An alternative form of Gelfand and Pinsker’s inner bound was given in [37, Theorem 5] (see also [38]). This region is the best known inner bound on the capacity region of the BC with a common message. RI (∞, ∞) recovers the Gelfand-Pinsker region since (23b) and (23d) are redundant. A full discussion of the special cases of RI (L1 , L2 ) is given in Section V-D. The following corollary states a sufficient condition on the leakage thresholds L1 and L2 to become inactive in the bounds from (23) when R0 = 0 (i.e., no common message is present). To state the result, let R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) denote the set of rate pairs (R1 , R2 ) ∈ R2+ satisfying (23) with R0 = 0 when the mutual information terms are calculated with respect to QU0 ,U1 ,U2 ,X WY1 ,Y2 |X . Accordingly, [ R̃I (L1 , L2 ) , R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) (25) QU0 ,U1 ,U2 ,X is the region obtained by setting R0 = 0 in RI (L1 , L2 ). Corollary 2 (Inactive Leakage Constraints): Let QU0 ,U1 ,U2 ,X ∈ P(U0 × U1 × U2 × X ). For j = 1, 2 define Lj ⋆ (QU0 ,U1 ,U2 ,X ) n o = min I(U0 ; Y1 ), I(U0 ; Y2 ) + I(Uj ; Uj̄ , Yj̄ |U0 ), (26) where j̄ = j + (−1)j+1 . The following implications hold: 1) If L1 ≥ L⋆1 (QU0 ,U1 ,U2 ,X ) then R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) = R̃I (∞, L2 , QU0 ,U1 ,U2 ,X ). 2) If L2 ≥ L⋆2 (QU0 ,U1 ,U2 ,X ) then R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) = R̃I (L1 , ∞, QU0 ,U1 ,U2 ,X ). 3) If Lj ≥ L⋆j (QU0 ,U1 ,U2 ,X ), for j = 1, 2, then R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) = R̃I (∞, ∞, QU0 ,U1 ,U2 ,X ). For the proof of Corollary 2 see Section VII-C. According to the above, if any of the leakage thresholds Lj , j = 1, 2 surpasses the critical value from (26), then the corresponding inner bound remains unchanged if Lj is further increased, and is therefore equivalent to the region where Lj → ∞. Remark 4 (Application of Corollary 2): Corollary 2 specifies a condition for L1 and/or L2 being inactive for each input probability. Getting a condition for the inactivity of the thresholds with respect to the entire region R̃I (L1 , L2 ) from (25) is a more challenging task. Identifying such a condition involves identifying which input distributions achieve the boundary of R̃I (L1 , L2 ). In some communication scenarios this is possible, e.g., for the MIMO Gaussian BC with or without secrecy requirements the boundary achieving distributions are Gaussian vectors [39]–[43]. However, the structure of the optimizing distribution is unknown in general. The merit of Corollary 2 becomes clear when explicitly calculating R̃I (L1 , L2 ). One can then identify the optimizing distribution, e.g., by means of an analytical characterization or via an exhaustive search. In turn, one can calculate the maximum of L⋆j (QU0 ,U1 ,U2 ,X ) over those distributions. Denoting by L⋆j this maximal value, if Lj < L⋆j then increasing Lj will further shrink the region. If, on the other hand, Lj ≥ L⋆j , then the region remains unchanged even if Lj grows. This idea is demonstrated in Section VI where we calculate the (L1 , L2 )leakage-capacity region of the Blackwell BC. Next, we state an outer bound on C(L1 , L2 ). A proof of Theorem 3 is given in Section VII-D. 
7 Theorem 3 (Outer Bound): Let RO (L1 , L2 ) be the closure of the union of rate triples (R0 , R1 , R2 ) ∈ R3+ satisfying: n o R0 ≤ min I(W ; Y1 ), I(W ; Y2 ) (27a) R1 ≤ I(U ; Y1 |W, V ) − I(U ; Y2 |W, V ) + L1 (27b) R1 ≤ I(U ; Y1 |W ) − I(U ; Y2 |W ) + L1 (27c) n o R0 + R1 ≤ I(U ; Y1 |W ) + min I(W ; Y1 ),I(W ; Y2 ) (27d) R2 ≤ I(V ; Y2 |W, U ) − I(V ; Y1 |W, U ) + L2 R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) + L2 n o R0 + R2 ≤ I(V ; Y2 |W ) + min I(W ; Y1 ),I(W ; Y2 ) X Rj ≤ I(U ; Y1 |W, V ) + n I(V ; Y2 |W ) o j=0,1,2 + min I(W ; Y1 ), I(W ; Y2 ) X Rj ≤ I(U ; Y1 |W ) + I(V n ; Y2 |W, U ) o j=0,1,2 + min I(W ; Y1 ), I(W ; Y2 ) (27e) (27f) (27g) (27h) (27i) where the union is over all PMFs QW,U,V QX|U,V ∈ P(W × U × V × X ), each inducing a joint distribution QW,U,V QX|U,V WY1 ,Y2 |X . RO (L1 , L2 ) is convex and the following inclusion holds: C(L1 , L2 ) ⊆ RO (L1 , L2 ). (28) Remark 5 (Relation to UVW-Outer Bound): The best known outer bounds on the capacity region of a BC with a common message are the UVW-outer bound [22, Bound 2] and the New-Jersey outer bound [23] which are equivalent. The region RO (∞, ∞) recovers the UVW-outer bound since (27b)-(27c) and (27e)-(27f) are redundant. The inner and outer bounds in Theorems 1 and 3 are tight for SD-BCs and give rise to the following theorem. Theorem 4 (Leakage-Capacity - SD-BC): The (L1 , L2 )leakage-capacity region CSD (L1 , L2 ) of a SD-BC 1{Y1 =y1 (X)} WY2 |X is the closure of the union of rate triples (R0 , R1 , R2 ) ∈ R3+ satisfying: n o R0 ≤ min I(W ; Y1 ), I(W ; Y2 ) (29a) R1 ≤ H(Y1 |W, V, Y2 ) + L1 (29b) n o R0 + R1 ≤ H(Y1 |W ) + min I(W ; Y1 ),I(W ; Y2 ) (29c) R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) + L2 (29d) n o R0 + R2 ≤ I(V ; Y2 |W ) + min I(W ; Y1 ),I(W ; Y2 ) (29e) X ≤ H(Y1 |W, V ) + I(V n ; Y2 |W ) o (29f) j=0,1,2 + min I(W ; Y1 ), I(W ; Y2 ) where the union is over all PMFs QW,V,X ∈ P(W × V × X ), each inducing a joint distribution QW,V,X 1{Y1 =y1 (X)} WY2 |X . Furthermore, CSD (L1 , L2 ) is convex. The direct part of Theorem 4 follows from Theorem 1 by taking U0 = W , U1 = Y1 and U2 = V , while Theorem 3 is used for the converse. See Section VII-E for the details. Remark 6 (SD-BC Result - Special Cases): All four cases of the SD-BC concerning secrecy (i.e., when neither, either or both messages are secret) are solved and their solutions are retrieved from CSD (L1 , L2 ) by inserting the appropriate values of Lj , j = 1, 2. This property of CSD (L1 , L2 ) is discussed in Section V-D. The inner and outer bounds in Theorems 1 and 3 also match when the message set is degraded, i.e., when M2 = 0 and there is only one private message. Theorem 5 (Leakage-Capacity - Degraded Message Set): The L1 -leakage-capacity region CDM (L1 ) of a BC with a degraded message set (M2 = 0) and a privacy leakage constraint is the closure of the union of rate pairs (R0 , R1 ) ∈ R2+ satisfying: n o R0 ≤ min I(W ; Y1 ), I(W ; Y2 ) (30a) R1 ≤ I(U ; Y1 |W ) − I(U ; Y2 |W ) + L1  R0 + R1 ≤ I(U ; Y1 |W ) + min I(W ; Y1 ),I(W ; Y2 ) (30b) (30c) where the union is over all PMFs QW,U QX|U ∈ P(W × U × X ), each inducing a joint distribution QW,U QX|U WY1 ,Y2 |X . Furthermore, CDM (L1 ) is convex. Proof: The direct part follows by setting R2 = 0, U0 = W , U1 = U and U2 = 0 in Theorem 1. For the converse we show that RO (L1 , L2 ) ⊆ CDM (L1 ). Clearly, (30a), (30b) and (30c) coincide with (27a), (27c) and (27d), respectively. Dropping the rest of the inequalities from (27) completes the proof. 
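To illustrate how a single member of the union in Theorem 5 is evaluated, the following Python sketch (an added illustration; the binary alphabets, the PMFs Q_{W,U} and Q_{X|U}, the toy channel W_{Y1,Y2|X} and the value L1 = 0.1 are all made-up placeholders) computes the right-hand sides of (30a)-(30c) from the induced joint distribution Q_{W,U} Q_{X|U} W_{Y1,Y2|X}. The region CDM(L1) is then the closure of the union of the resulting rate sets over all admissible PMFs.

```python
# Illustration only: all distributions below are placeholder choices.
import numpy as np
from itertools import product

def H(p):
    """Entropy in bits of a probability array; zero entries are ignored."""
    p = p.ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy alphabets: |W| = |U| = |X| = |Y1| = |Y2| = 2.
Q_WU  = np.array([[0.3, 0.2],
                  [0.1, 0.4]])            # Q_{W,U}(w, u)
Q_XgU = np.array([[0.9, 0.2],
                  [0.1, 0.8]])            # Q_{X|U}(x | u): columns sum to 1
W_ch  = np.zeros((2, 2, 2))               # W_{Y1,Y2|X}(y1, y2 | x)
W_ch[:, :, 0] = [[0.80, 0.10], [0.05, 0.05]]
W_ch[:, :, 1] = [[0.05, 0.05], [0.10, 0.80]]

# Joint PMF P(w, u, x, y1, y2); axes 0..4 correspond to W, U, X, Y1, Y2.
P = np.zeros((2, 2, 2, 2, 2))
for w, u, x, y1, y2 in product(range(2), repeat=5):
    P[w, u, x, y1, y2] = Q_WU[w, u] * Q_XgU[x, u] * W_ch[y1, y2, x]

def marg(keep):
    """Marginal of P over the axes in `keep`."""
    drop = tuple(a for a in range(P.ndim) if a not in keep)
    return P.sum(axis=drop)

def I(A, B, C=()):
    """Conditional mutual information I(A; B | C) in bits."""
    A, B, C = set(A), set(B), set(C)
    return H(marg(A | C)) + H(marg(B | C)) - H(marg(A | B | C)) - H(marg(C))

W_ax, U_ax, X_ax, Y1_ax, Y2_ax = 0, 1, 2, 3, 4
L1 = 0.1
R0_bound   = min(I({W_ax}, {Y1_ax}), I({W_ax}, {Y2_ax}))                    # (30a)
R1_bound   = I({U_ax}, {Y1_ax}, {W_ax}) - I({U_ax}, {Y2_ax}, {W_ax}) + L1   # (30b)
R0R1_bound = I({U_ax}, {Y1_ax}, {W_ax}) + R0_bound                          # (30c)
print(R0_bound, R1_bound, R0R1_bound)
```

Sweeping such evaluations over a grid of input PMFs yields a numerical inner approximation of CDM(L1).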
Remark 7 (Degraded Message Set Result - Special Cases): The BC with a degraded message set and a privacy leakage constraint captures the BC with confidential messages [3] and the BC with a degraded message set [28]. The former is obtained by taking L1 = 0, while L1 → ∞ recovers the latter. Setting L1 = 0 or L1 → ∞ into CDM (L1 ) recovers the capacity regions of these special cases (see Section V-E for more details). We next characterize the leakage-capacity region of a PDBC WY1 |X WY2 |Y1 with privacy leakage constraints and without a common message (M0 = 0). Since X − Y1 − Y2 forms a Markov chain, it is impossible to achieve non-trivial leakage constraints on the message M2 . Accordingly, the leakagecapacity region of the PD-BC (where X − Y1 − Y2 ) is defined only through L1 . Corollary 6 (Leakage-Capacity - PD-BC): The L1 -leakagecapacity region CPD (L1 ) of a PD-BC WY1 |X WY2 |Y1 without a common message is the closure of the union over the same domain as CDM (L1 ) of rate pairs (R1 , R2 ) ∈ R2+ satisfying (30), while recasting R0 as R2 and noting that  min I(W ; Y1 ), I(W ; Y2 ) = I(W ; Y2 ). The proof of Corollary 6 is similar to that of Theorem 5 and is omitted. Remark 8 (Cardinality Bounds): Cardinality bounds for the auxiliary random variables in Theorems 1, 3, 4 and 5 can be derived using the perturbation method [20, Appendix C] or techniques such as in [22] and [44]. The computability of the derived regions is not in the scope of this work. V. S PECIAL C ASES A. The Gelfand-Pinsker Inner Bound Theorem 1 generalizes the Gelfand-Pinsker region for the BC with a common message [21, Theorem 1] to the case 8 with privacy leakage constraints. In other words, RI (∞, ∞) recovers the result from [21], which is tight for every BC (without secrecy) whose capacity region is known. B. UVW-Outer Bound The New-Jersey outer bound was derived in [23] and shown to be at least as good as the previously known bounds. A simpler version of this outer bound was established in [22] and was named the UVW-outer bound. The UVW-outer bound is given by RO (∞, ∞). C. Liu-Marić-Spasojević-Yates Inner Bound In [4] an inner bound on the secrecy-capacity region of a BC WY1 ,Y2 |X with two confidential messages (each destined for one of the receivers and kept secret from the other) was characterized as the set of rate pairs (R1 , R2 ) ∈ R2+ satisfying: R1 ≤ I(U1 ; Y1 |U0 ) − I(U1 ; U2 , Y2 |U0 ) R2 ≤ I(U2 ; Y2 |U0 ) − I(U2 ; U1 , Y1 |U0 ) (31a) (31b) where the union is over all PMFs QU0 ,U1 ,U2 QX|U1 ,U2 ∈ P(U0 × U1 × U2 × X ), each inducing a joint distribution QU0 ,U1 ,U2 QX|U1 ,U2 WY1 ,Y2 |X . This inner bound is tight for SD-BCs [5] and MIMO Gaussian BCs [11]. Setting R0 = 0 in RI (0, 0) recovers (31). D. SD-BCs with and without Secrecy The SD-BC 1{Y1 =y1 (X)} WY2 |X without a common message, i.e., when R0 = 0, is solved when both, either or neither private messages are secret (see [5], [6], [27] and [21], respectively). Setting Lj = 0, for j = 1, 2, reduces the SD-BC with privacy leakage constraints to the problem where Mj is secret. Taking Lj → ∞ results in a SD-BC without a leakage constraint on Mj . We use Theorem 4 to obtain the leakagecapacity region of the SD-BC without a common message. 
Corollary 7 (Leakage-Capacity - SD-BC without M0 ): The 0 (L1 , L2 )-leakage-capacity region CSD (L1 , L2 ) of a SD-BC 1{Y1 =y1 (X)} WY2 |X without a common message is the closure of the union over the domain stated in Theorem 4 of rate pairs (R1 , R2 ) ∈ R2+ satisfying: R1 ≤ H(Y1 |W, V, Y2 ) + L1 (32a) n o R1 ≤ H(Y1 |W ) + min I(W ; Y1 ), I(W ; Y2 ) (32b) R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) + L2 (32c) n o R2 ≤ I(V ; Y2 |W ) + min I(W ; Y1 ),I(W ; Y2 ) (32d) R1 + R2 ≤ H(Y1 |W, V ) + I(V n ; Y2 |W ) o (32e) + min I(W ; Y1 ), I(W ; Y2 ) . 1) Neither Message is Secret: If L1 , L2 → ∞, the SD-BC with privacy leakage constraints reduces to the classic case 0 (∞, ∞) by choosing without secrecy [21]. We recover CSD W = 0 so that (32) becomes R1 ≤ H(Y1 ) (33a) R2 ≤ I(V ; Y2 ) (33b) R1 + R2 ≤ H(Y1 |V ) + I(V ; Y2 ) (33c) This agrees with the discussion in Section V-A since Marton’s inner bound is tight for SD-BCs. 2) Only M1 is Secret: The SD-BC where M1 is a secret is obtained by taking L1 = 0 and L2 → ∞. The secrecycapacity region was derived in [27, Corollary 4] and is the closure of the union over the same domain as (33) of rate pairs (R1 , R2 ) ∈ R2+ satisfying: R1 ≤ H(Y1 |V, Y2 ) (34a) R2 ≤ I(V ; Y2 ). (34b) 0 To see that CSD (0, ∞) and (34) match, first note that when L1 = 0, (32b) is redundant due to (32a). The sum rate bound (32e) also becomes inactive as it is implied by adding (32a) 0 and (32d). Setting W = 0 in CSD (0, ∞) now recovers (34). Remark 9 (Relation to Optimal Coding Scheme): The optimal code for the SD-BC with a secret message M1 employs no public message and relies on double-binning the codebook of M1 , while M2 is transmitted at maximal rate and no binning of its codebook is performed. The optimality of W = 0 in 0 CSD (0, ∞) corresponds to the absence of the public messages. Furthermore, referring to the bounds in Section VII-B, inserting L1 = 0 and L2 → ∞ into our code construction results in (68a) and (87b) becoming inactive since (86b) is the dominant constraint. Consequently, the redundancy used for correlating the transmission and ensuring security (i.e., the double-binning) is present only in the M1 codebook. 3) Only M2 is Secret: The SD-BC where M2 is secret is obtained by taking L1 → ∞ and L2 = 0. The secrecy-capacity region is the closure of the union of rate pairs (R1 , R2 ) ∈ R2+ satisfying: R1 ≤ H(Y1 ) (35a) R1 ≤ H(Y1 |W ) + I(W ; Y2 ) R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) (35b) (35c) where the union is over all PMFs QW,V,X ∈ P(W × V × X ), each inducing a joint distribution QW,V,X 1{Y1 =y1 (X)} WY2 |X [6, Theorem 1]. Using Corollary 7, the bounds (32) become n o R1 ≤ H(Y1 |W ) + min I(W ; Y1 ), I(W ; Y2 ) (36a) R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) R1 + R2 ≤ H(Y1 |W, V ) + I(V n ; Y2 |W ) (36b) o (36c) + min I(W ; Y1 ), I(W ; Y2 ) . and (36c) is redundant by adding (36a) and (36b). The regions from (35) and (36) thus coincide. The effect of L1 → ∞ and L2 = 0 on the bins in our coding scheme (Section VII-B) is analogous to the one described in Section V-D2. In contrast to Section V-D2, however, here the achievability of (36) requires a common message. Since L2 = 0, (60c) implies that the public message is a portion of M1 only. Keeping in mind that the public message is decoded by both receivers, unless R20 = 0 (i.e., unless the public message contains no information about M2 ) the secrecy constraint will be violated. 9 4) Both Messages are Secret: Taking L1 = L2 = 0 recovers the SD-BC where both messages are secret. 
The secrecy-capacity region for this case was found in [5, Theorem 1] and is the closure of the union of rate pairs (R1 , R2 ) ∈ R2+ satisfying: Encoder X 0 R1 ≤ H(Y1 |W, V, Y2 ) (37a) 1 R2 ≤ I(V ; Y2 |W ) − I(V ; Y1 |W ) (37b) 2 where the union is over all PMFs QW,V QX|V ∈ P(W × V × X ), each inducing a joint distribution QW,V QX|V 1{Y1 =y1 (X)} WY2 |X . The region (37) coincides 0 0 with CSD (0, 0). Restricting the union in CSD (0, 0) to encompass only PMFs that satisfy the Markov relation W − V − X does not shrink the region. This is since in the proof of Theorem 3 we define Vq , (M2 , Wq ), and therefore, Xq − Vq − Wq forms a Markov chain for every q ∈ [1 : n]. Remark 10 (Relation to Optimal Coding Scheme): The coding scheme that achieves (37) uses double-binning for the codebooks of both private messages. To ensure confidentiality, the rate bounds of each message includes the penalty term I(U1 ; U2 |U0 ). Note that without the confidentiality constraints, Marton’s coding scheme [18] requires only that the sum-rate has that penalty term. This is evident from our scheme by setting L1 = L2 = 0 in (60c), (86b) and (87b), which makes (68a) redundant. 0 1 R12 I(M1 ; Y2 ) ≤ nL1 Y2 Decoder 2 Fig. 2: Blackwell BC with privacy leakage constraints. R0 + R1 ≤ I(X; Y1 |W ) + I(W ; Y2 ) R0 + R1 ≤ I(X; Y1 ) Consider the BC with leakage constraints in which M2 = 0; its leakage-capacity region CDM (L1 ) is stated in Theorem 5. We show that CDM (L1 ) recovers the secrecy-capacity region of the BC with confidential messages [3] and the capacity region of the BC with a degraded message set (without secrecy) [28]. 1) BCs with Confidential Messages: The secrecy-capacity region of the BC with confidential messages was derived in [3] and is the union over the same domain as in Theorem 5 of rate pairs (R0 , R1 ) ∈ R2+ satisfying: n o R0 ≤ min I(W ; Y1 ), I(W ; Y2 ) (38a) (38b) Inserting L1 = 0 into the result of Theorem 5 produces (38). Our code construction (Section VII-B) with L1 = 0 and U2 = 0 reduces to a superposition code for which the outer codebook (that is associated with the confidential message) is binned. This is a secrecy-capacity achieving coding scheme for the BC with confidential messages. Remark 11 (Wiretap Channel): The BC with confidential messages captures the WTC by setting M0 = 0. Thus, the WTC is also a special case of the BC with privacy leakage constraints. 2) BCs with a Degraded Message Set: If L1 → ∞, we get the BC with a degraded message set [28]. Inserting L1 → ∞ into CDM (L1 ) and setting U = X we recover the union of rate pairs (R0 , R1 ) ∈ R+ satisfying: n o R0 ≤ min I(W ; Y1 ), I(W ; Y2 ) (39a) (39b) (39c) where the union is over all PMFs QW,X ∈ P(V × X ), each induces a joint distribution QW,X WY1 ,Y2 |X . CDM (L1 ) in (39) matches [37, Theorem 7] which establishes the union over all PMFs QT,U,X ∈ P(T ×U ×X ) of rate pairs (R0 , R1 ) ∈ R+ with n o R0 ≤ min I(T ; Y1 ), I(T ; Y2 ) (40a) R0 + R1 ≤ I(X; Y1 |T, U ) + I(T, U ; Y2 ) R0 + R1 ≤ I(X; Y1 ) E. BCs with One Private Message R1 ≤ I(U ; Y1 |W ) − I(U ; Y2 |W ). Decoder 1 Y1 0 I(M2 ; Y1 ) ≤ nL2 1 (40b) (40c) as an outer bound on the capacity region of interest. The RHS of (40a) can be bounded as n o n o min I(T ; Y1 ), I(T ; Y2 ) ≤ min I(T, U ; Y1 ), I(T, U ; Y2 ) (41) and relabeling W = (T, U ) matches (39). VI. E XAMPLE Suppose the channel from the transmitter to receivers 1 and 2 is the BW-BC without a common message as illustrated in Fig. 2 [29], [30]. 
VI. EXAMPLE
Suppose the channel from the transmitter to receivers 1 and 2 is the BW-BC without a common message, as illustrated in Fig. 2 [29], [30]. Using Corollary 7, the (L1, L2)-leakage-capacity region of a deterministic BC (DBC) is the following.
Corollary 8 (Leakage-Capacity - Deterministic BC): The (L1, L2)-leakage-capacity region C_D(L1, L2) of the DBC 1_{ {Y1=y1(X)} ∩ {Y2=y2(X)} } without a common message is the union of rate pairs (R1, R2) ∈ R^2_+ satisfying:
R1 ≤ min{ H(Y1), H(Y1|Y2) + L1 }    (42a)
R2 ≤ min{ H(Y2), H(Y2|Y1) + L2 }    (42b)
R1 + R2 ≤ H(Y1, Y2)    (42c)
where the union is over all input PMFs QX ∈ P(X). The proof of Corollary 8 is relegated to Appendix A.
For the BW-BC, we parametrize the input PMF QX ∈ P({0, 1, 2}) in Corollary 8 as
QX(0) = α,  QX(1) = β,  QX(2) = 1 − α − β,    (43)
where α, β ∈ R_+ and α + β ≤ 1. Using (43), the (L1, L2)-leakage-capacity region C_BW(L1, L2) of the BW-BC is described as the union of rate pairs (R1, R2) ∈ R^2_+ satisfying:
R1 ≤ min{ Hb(β), (1 − α) Hb(β/(1 − α)) + L1 }    (44a)
R2 ≤ min{ Hb(α), (1 − β) Hb(α/(1 − β)) + L2 }    (44b)
R1 + R2 ≤ Hb(α) + (1 − α) Hb(β/(1 − α))    (44c)
where the union is over all α, β ∈ R_+ with α + β ≤ 1.
Fig. 3 illustrates C_BW(L1, L2) for three cases. In Fig. 3(a), L2 → ∞ while L1 ∈ {0, 0.05, 0.1, 0.4}. The blue (inner) line corresponds to L1 = 0 and is the secrecy-capacity region of a BW-BC where M1 is secret [27, Fig. 5]. The red (outer) line corresponds to L1 = 0.4 (which is large enough to be thought of as L1 → ∞) and depicts the capacity region of the classic BW-BC. As L1 grows, the inner (blue) region converges to coincide with the outer (red) region. Fig. 3(b) considers the opposite case, i.e., where L1 → ∞ and L2 ∈ {0, 0.05, 0.1, 0.4}, and is analogous to Fig. 3(a). In Fig. 3(c) we choose L1 = L2 = L, where L ∈ {0, 0.05, 0.1, 0.4}, and we demonstrate the impact of two leakage constraints on the region. When L = 0, one obtains the secrecy-capacity region of the BW-BC when both messages are confidential [5]. In each case, the capacity region grows with L and saturates at the red (outer) region, for which neither message is secret.
Focusing on the symmetric case in Fig. 3(c), we note that the saturation of the region at L = 0.4 is implied by Corollary 2. For the Blackwell BC with L1 = L2 = L, and some α, β ∈ R_+ with α + β ≤ 1, we denote by L⋆(α, β) the threshold from (26), which reduces to
L⋆(α, β) = I(Y1; Y2) = Hb(β) − (1 − α) Hb(β/(1 − α)).    (45)
As explained in Remark 4, for each leakage value L, Corollary 2 (along with some numerical calculations) can be used to tell whether a further increase of L will induce a larger region or not. Accordingly, for each L ∈ {0, 0.05, 0.1, 0.4}, we have calculated the maximum of L⋆(α, β) over the distributions that achieve the boundary points of the capacity region C_BW(L, L). Denoting the value of the maximal L⋆ that corresponds to the allowed leakage L ∈ {0, 0.05, 0.1, 0.4} by L⋆(L), we have
L⋆(0) = L⋆(0.05) = 0.15897,  L⋆(0.1) = 0.20101,  L⋆(0.4) = 0.38317.    (46)
Observing that L⋆(0.4) ≤ 0.4, Corollary 2 and Remark 4 imply that increasing L beyond 0.4 will not change the leakage-capacity region. Evidently, C_BW(L, L) saturates at L = 0.4. For L ∈ {0, 0.05, 0.1}, however, L⋆(L) > L, and consequently C_BW(L′, L′) ⊊ C_BW(L, L) for L, L′ ∈ {0, 0.05, 0.1} with L′ < L.
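The region (44) and the threshold (45) are simple enough to evaluate numerically. The sketch below is ours, not part of the paper: it scans a grid of input PMFs and reports the maximal sum rate for the symmetric case L1 = L2 = L; the grid resolution and helper names are arbitrary choices.

```python
import numpy as np

def Hb(p):
    """Binary entropy in bits; Hb(0) = Hb(1) = 0 by continuity."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bw_bounds(alpha, beta, L1, L2):
    """Rate bounds (44a)-(44c) of C_BW(L1, L2) for the input PMF (43)."""
    r1 = min(Hb(beta), (1 - alpha) * Hb(beta / (1 - alpha)) + L1)
    r2 = min(Hb(alpha), (1 - beta) * Hb(alpha / (1 - beta)) + L2)
    rsum = Hb(alpha) + (1 - alpha) * Hb(beta / (1 - alpha))
    return r1, r2, rsum

def L_star(alpha, beta):
    """Threshold (45): L*(alpha, beta) = I(Y1; Y2) for the Blackwell BC."""
    return Hb(beta) - (1 - alpha) * Hb(beta / (1 - alpha))

# Scan the symmetric case L1 = L2 = L and report the best achievable sum rate.
for L in (0.0, 0.05, 0.1, 0.4):
    best = 0.0
    for alpha in np.linspace(0.01, 0.98, 200):
        for beta in np.linspace(0.01, 0.99 - alpha, 200):
            r1, r2, rsum = bw_bounds(alpha, beta, L, L)
            best = max(best, min(r1 + r2, rsum))
    print(f"L = {L:4.2f}:  max (R1 + R2) ~ {best:.4f}")
```

Refining the grid reproduces, up to discretization error, the linear growth and subsequent saturation of the sum rate discussed next.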
[Fig. 3: (L1, L2)-leakage-capacity region of the BW-BC for three cases: (a) L1 = L and L2 → ∞; (b) L1 → ∞ and L2 = L; (c) L1 = L2 = L. Each panel plots R2 [bits/use] versus R1 [bits/use] for L ∈ {0, 0.05, 0.1, 0.4}.]
The variation of the sum of rates R1 + R2 as a function of L is shown by the blue curve in Fig. 4; the red dashed vertical lines correspond to the values of L considered in Fig. 3. Note that for 0 ≤ L ≤ 0.09818, (44c) is inactive, and therefore R1 + R2 is bounded by the summation of (44a) and (44b). Thus, for 0 ≤ L ≤ 0.09818, the sum of rates R1 + R2 increases linearly with L. For L > 0.09818, the bound in (44c) is no longer redundant, and because it is independent of L, the sum rate saturates.
[Fig. 4: The sum-rate capacity (R1 + R2 [bits/use]) versus the allowed leakage L [bits/use] for L1 = L2 = L.]
The regions in Fig. 3 are a union of rectangles or pentagons, each corresponding to a different input PMF QX ∈ P({0, 1, 2}). In Fig. 5 we illustrate a typical structure of these rectangles and pentagons for a fixed QX at the extreme values of L1 and L2. When both L1 and L2 are sufficiently large, the leakage constraints degenerate and the classic BW-BC is obtained. Its capacity region (the red (outer) line in, e.g., Fig. 3(c)) is a union of the pentagons depicted in Fig. 5. The secrecy-capacity region for L1 = 0 and L2 → ∞ (depicted by the blue line in Fig. 3(a)) is a union of the red rectangles in Fig. 5. Similarly, when L2 = 0 and L1 → ∞ the secrecy-capacity region is a union of the blue rectangles in Fig. 5. Finally, if L1 = L2 = 0 and both messages are secret, the secrecy-capacity region of the BW-BC is the union of the dark rectangles in Fig. 5, i.e., the intersection of the blue and the red regions. Fig. 5 highlights that as L1 and/or L2 decrease, the underlying pentagons/rectangles (the union of which produces the admissible rate region) shrink, which results in a smaller region.
[Fig. 5: The pentagons/rectangles whose union produces the capacity region of a BW-BC for different secrecy cases (R1-axis marked at H(Y1|Y2) and H(Y1), R2-axis at H(Y2|Y1) and H(Y2)): the outer pentagon corresponds to the case without secrecy; the red and blue rectangles correspond to L1 = 0 and L2 = 0, respectively; the inner rectangle corresponds to L1 = L2 = 0.]
VII. PROOFS
A. Proof of Lemma 3
We derive sufficient conditions for
E_{Bn}[ F( P_{I|Bn}, p^(U)_{In} ) ] → 1  as n → ∞,    (47)
which implies Lemma 3 using (10). First, for each Bn ∈ Bn the fidelity between the induced and the desired (uniform) distribution is
F( P_{I|Bn=Bn}, p^(U)_{In} ) = [ Σ_ℓ Σ_{(j,k)} ( (1/(|In||Kn|)) · 2^{i(uℓ(w); vj,k(w) | w)} / Σ_{(ℓ̄,j̄)} 2^{i(uℓ̄(w); vj̄,k(w) | w)} )^{1/2} ]^2    (48)
where, as in (17), the information density is taken with respect to Q^n_{U,V}. Now, by the Cauchy-Schwarz inequality we have the following bound for {aj,k}, {bj,k} ⊂ R_+:
Σ_{j,k} √(aj,k · bj,k) ≤ ( Σ_{j,k} aj,k )^{1/2} · ( Σ_{j,k} bj,k )^{1/2}.    (49)
Using this on each of the summands from the right-hand side (RHS) of (48) with
aj,k = (1/(|In||Kn|)) · 2^{i(uℓ(w); vj,k(w) | w)} / Σ_{(ℓ̄,j̄)} 2^{i(uℓ̄(w); vj̄,k(w) | w)}    (50a)
and
bj,k = 2^{i(uℓ(w); vj,k(w) | w)} / Σ_{(j̄,k̄)} 2^{i(uℓ(w); vj̄,k̄(w) | w)}    (50b)
we obtain
F( P_{I|Bn=Bn}, p^(U)_{In} ) ≥ Σ_{(ℓ,j,k)} ( 2^{i(uℓ(w); vj,k(w) | w)} / (|In||Kn|) ) · ( Σ_{(ℓ̄,j̄)} 2^{i(uℓ̄(w); vj̄,k(w) | w)} )^{−1/2} · ( Σ_{(j̄,k̄)} 2^{i(uℓ(w); vj̄,k̄(w) | w)} )^{−1/2}.
(51) For any w ∈ W n with QnW (w) > 0, we evaluate the conditional expectation of the fidelity given W = w as given in (52) at the top of this page. First note that with respect to the notation from Section III, we have     (U) (U) EBn |W=w F PI|Bn , pIn = EBn (w) F PI|Bn , pIn . (53) Now consider the following justifications for the steps of (52): (a) uses (51) and the symmetry of the random codebook; (b) is the law of total expectation; 12   (U) EBn (w) F PI|Bn , pIn (a)  1 1  ≥ |In | 2 |Jn ||Kn | 2 EBn (w) 2i U1 (w);V1,1 (w) w   i 1 1 = |In | 2 |Jn ||Kn | 2 EU1 (w),V1,1 (w)  2 (b)    X 2i Uℓ̄ (w);Vj̄,1 w (ℓ̄,j̄) U1 (w),V1,1 (w) w  − 12    X U1 (w);Vj̃,k̃ (w) 2i (j̃,k̃)  − 21       − 1      2 X  X 2i U1 (w);Vj̃,k̃ (w) 2i Uℓ̄ (w);Vj̄,1 (w) w   × EBn (w)|U1 (w),V1,1 (w)     (ℓ̄,j̄) (j̃,k̃)  − 21   (c) 1 1 i U1 (w);V1,1 (w) w i U1 (w);V1,1 (w) w  2 2 ≥ |In | |Jn ||Kn | EU1 (w),V1,1 (w) 2 2 + |In ||Jn | − 1  × 2i  (d) i(U;V|w)  −1 i(U;V|w) − 21  −1 U1 (w);V1,1 (w) w > EQnU |W =w QnV |W =w 2 1 + |In ||Jn | 2 1 + |Jn ||Kn | 2   −1 i(U;V|w) − 21 −1 i(U;V|w) − 21  (e) 1 + |Jn ||Kn | 2 2 = EQnU,V |W =w 1 + |In ||Jn | µ(Cn ) = Y mp ∈Mp  Y QnU0 u0 (mp ) j=1,2 Y (j) mp ,mjj ,wj ,ij  ∈Mp ×Mjj ×Wj ×Ij (c) uses Jensen’s inequality for the two-valued convex function 1 f : (x, y) 7→ (xy)− 2 and the relation  (54) EBn (w)|U1 (w),V1,1 (w) 2i Uℓ̄ (w);Vj̄,k̄ (w) w = 1 which holds for any (ℓ̄, j̄, k̄) 6= (1, 1, 1) (see [32], [45] for a similar derivation); (d) is by increasing each term in the parenthesis by 1; (e) is because U1 (w), V1,1 (w)) ∼ QnU|W =w QnV |W =w . Taking an expectation over W of both sides of (52), while making use of the law of total expectation and of the monotonicity of expectation, gives   (U) EBn F PI|Bn , pIn   (U) = EW EBn |W F PI|Bn , pIn  −1 i(U;V|W) − 21 n 1 + |In ||Jn | 2 ≥ EQW,U,V   −1 i(U;V|W) − 21 . (55) × 1 + |Jn ||Kn | 2 Finally, note that 1 EQn 2iQW,U,V (U;V|W) = IQW,U,V (U ; V |W ). (56) n W,U,V Therefore, by the weak law of large number for any ζ > 0 i(U;V|w) − 12   w   − 12          + |Jn ||Kn | − 1    (j) QnUj |U0 uj m(j) p , mjj , wj , ij u0 (mp ) − 12   (52) (61) there exists a sequence {δn }n∈N with limn→∞ δn = 0, such that ! 1 PQnW,U,V iQn (U;V|W)−IQW,U,V (U;V |W) > ζ ≤ δn , n W,U,V (57) for all n ∈ N. Combining (55) and (57) we see that as long as  (58) S2 + min S1 , T > IQW,U,V (U ; V |W ) + ζ then   (U) EBn F PI|Bn , pIn ր 1 (59) as n → ∞. The relation (10) now establishes the result of Lemma 3. B. Proof of Theorem 1 Fix n ∈ N, (L1 , L2 ) ∈ R2+ , ǫ, δ > 0, a PMF QU0 ,U1 ,U2 ,X ∈ P(U0 × U1 × U2 × X ) and denote QU0 ,U1 ,U2 ,X,Y1 ,Y2 , QU0 ,U1 ,U2 ,X WY1 ,Y2 |X . In the following we omit the blocklength n from our notations of the ets of indices, e.g., we write (n) M0 instead of M0 . Furthermore, we assume that quantities nR of the form 2 , where n ∈ N and R ∈ R+ , are integers.5 5 Otherwise simple modifications of some of the subsequent expressions using floor and ceiling operations are required. 13 Message Splitting: Split each message mj ∈ Mj , j = 1, 2, into a pair of messages denoted by (mj0 , mjj ). The triple mp , (m0 , m10 , m20 ) is referred to as a public message while mjj , j = 1, 2, serves as private message j. The rates associated with mj0 and mjj , j = 1, 2, are denoted by Rj0 and Rjj , while the corresponding alphabets are Mj0 and Mjj , respectively. The partial rates Rj0 and Rjj , j = 1, 2, satisfy Rj = Rj0 + Rjj 0 ≤ Rj0 ≤ Rj (60a) (60b) Rj0 ≤ Lj . 
(60c) Let Mj0 and Mjj be independent random variables uniformly distributed over Mj0 and Mjj , respectively. We use the notations Mp , (M0 , M10 , M20 ), Mp , M0 × M10 × M20 and Rp , R0 + R10 + R20 . Note that Mp is uniformly distributed over Mp and that |Mp | = 2nRp . Moreover, let (W1 , W2 ) be a pair of independent random variables,  where Wj , j = 1, 2, is uniformly distributed over Wj = 1 : 2nR̃j and independent of (M0 , M1 , M2 ) (which implies their independence of (Mp , M11 , M22 ) as well).  (n) Codebook C n : Let C0 , U0 (mp ) mp ∈Mp be a random public message codebook that comprises 2nRp i.i.d. random vectors U0 (mp ), each distributed according to QnU0 . A real (n) (n) ization of C0 is denoted by C0 , u0 (mp ) mp ∈Mp . (n) a public message codebook C0 . For every (n) m ∈ Mp and j = 1, 2, let Cj (mp ) , p  where Uj (mp , mjj , wj , ij ) (mjj ,wj ,ij )∈Mjj ×Wj ×Ij ,  ′  (mjj , wj , ij ) ∈ Mjj × Wj × Ij and Ij , 1 : 2nRj , be a random codebook of private messages j, consisting of conditionally independent random vectors each distributed according to QnUj |U0 =u0 (mp ) . A Fix (n) (n) realization of Cj (mp ) is denoted by Cj (mp ) ,  uj (mp , mjj , wj , ij ) (mjj ,wj ,ij )∈Mjj ×Wj ×Ij . o n (n) (n) We denote Cj , Cj (mp ) , and its realiza(n) mp ∈Mp is denotedoby Cn = tion by Cj . Ao random codebook n n (n) (n) (n) (n) (n) (n) denotes a C0 , C1 , C2 , while Cn = C0 , C1 , C2 fixed codebook (a possible outcome of Cn ). Denoting the set of all possible realizations of Cn by Cn , the above codebook construction induces a PMF µ ∈ P(Cn ) over the codebook ensemble. For every Cn ∈ Cn , we have (61) at the top of this page. For a fixed codebook Cn ∈ Cn we now describe its associated encoding function f (Cn ) and decoding functions (C ) φj n , for j = 1, 2. Encoder f (Cn ) : Fix a codebook Cn ∈ Cn . To transmit the message pair (m0 , m1 , m2 ) the encoder transforms it into the triple mp , m11 , m22 ), and draws Wj uniformly from Wj , j = 1, 2; denote the realization of Wj by wj ∈ Wj . Given (mp , m11 , m22 , w1 , w2 ), a pair of indices (i1 , i2 ) ∈ I1 × I2 is randomly selected by the likelihood encoder according to (C ) PLE n (i1 , i2 |mp , m11 , m22 , w1 , w2 ) ,  2iQn u1 (mp ,m11 ,w1 ,i1 );u2 (mp ,m22 ,w2 ,i2 ) u0 (mp )  P iQn u1 (mp ,m11 ,w1 ,i′1 );u2 (mp ,m22 ,w2 ,i′2 ) u0 (mp ) 2 (i′1 ,i′2 ) ∈I1 ×I2 (62) where iQn stands for the information density with respect to the conditional product distribution QnU1 ,U2 |U0 (and its (C ) marginals). The structure of PLE n adheres to the setup of Lemma 3 from Section III and, in particular, to the stochastic choice of indices therein as described in (14). Replacing the (C ) commonly used joint typicality encoder with PLE n , we are able to establish several important properties of the chosen codewords and their induced distribution. Let (i1 , i2 ) be the selected pair of indices. The channel input sequence is randomly generated according to the conditional product distribution QnX|U0 =u0 (mp ),U1 =u1 (mp ,m11 ,w1 ,i1 ),U2 =u2 (mp ,m22 ,w2 ,i2 ) . (C ) Decoder φj n : Decoder j = 1, 2 operates in two stages. First, it searches for a unique m̂p ∈ Mp such that   (63) u0 (m̂p ), yj ∈ Tδn (QU0 ,Yj ). (C ) If no such unique index is found, set φj n = (1, 1). Otherwise, having m̂p ∈ Mp , Decoder j = 1, 2 proceeds by looking for a unique pair (m̂jj , ŵj ) ∈ Mjj × Wj for which there exists an index îj ∈ Ij such that   u0 (m̂p ), uj (m̂p , m̂jj , ŵj , îj ), yj ∈ Tδn (QU0 ,Uj ,Yj ). 
(64) Recall that each mp ∈ Mp specifies a triple (m0 , m10 , m20 ) ∈ M0 × M10 × M20 . If the second stage is also executed successfully the decoder has a triple (m̂p , m̂jj , ŵj ) ∈ Mp × Mjj × Wj with m̂p and (m̂jj , ŵj ) being the unique indices satisfying (63) and (64), respectively. In this case we  (Cn ) set φj (yj ) = m̂0 , m̂j , where m̂j is assembled from (C ) (m̂j0 , m̂jj ); otherwise, set φj n = (1, 1).  Induced Code and Joint Distribution: The triple (C ) (C ) defined with respect to the codebook f (Cn ) , φ1 n , φ2 n Cn ∈ Cn constitutes an (n, R0 , R1 , R2 ) code cn for the BC with privacy leakage constraints. Thus, for every codebook Cn ∈ Cn , the induced joint distribution is given in (65) at the top of this page, where the random variables U0 , U1 and U2 are the chosen codewords at the conclusion of the encoding process (from which the input X to the BC is generated). Taking the random codebook generation into account, we also have (66) at the top of this page, where µ ∈ P(Cn ) is described in (61). The PMF P induces a probability measure P , PP , with respect to which the subsequent analysis is performed. Specifically, all the multi-letter information measures in the sequel are taken with respect to P from (66), while single-letter information terms are calculated with respect to QU0 ,U1 ,U2 ,X,Y1 ,Y2 . Average Error Probability Analysis: The output se(C ) quences of PLE n from (62) are jointly typical with high probability as long as the sum of the rates of the product bin is greater than the mutual information between the coding 14    P (Cn ) mp ,m11 , m22 , w1 , w2 , i1 , i2 , u0 , u1 , u2 , x, y1 , y2 , m̂0 , m̂1 , m̂0 , m̂2 = (C ) 2−n(Rp +R11 +R11 +R̃1 +R̃2 ) PLE n (i1 , i2 |mp , m11 , m22 , w1 , w2 )1 u0 =u0 (mp ) ∩ × QnX|U0 ,U1 ,U2 (x|u0 , u1 , u2 )WYn1 ,Y2 |X (y1 , y2 |x)1 T j=1,2 T j=1,2    uj =uj (mp ,mjj ,wj ,ij ) (Cn ) m̂0 ,m̂j =φj (65) (yj )    P Cn , mp , m11 ,m22 , w1 , w2 , i1 , i2 , u0 , u1 , u2 , x, y1 , y2 , m̂0 , m̂1 , m̂0 , m̂2    = µ(Cn )P (Cn ) mp , m11 , m22 , w1 , w2 , s1 , s2 , u0 , u1 , u2 , x, y1 , y2 , m̂0 , m̂1 , m̂0 , m̂2 random variables [32, Theorem 3]. The rest of the error probability analysis goes through via classic joint typicality arguments. The details of the analysis are relegated to Appendix B, where it is shown that ′ EPe (Cn ) ≤ η(n, δ, δ ) ′ ′ (67) ′ where δ ∈ (0, δ) and limn→∞ η(n, δ, δ ) = 0 for all 0 < δ < δ, if R1′ + R2′ > I(U1 ; U2 |U0 ) R0 + R10 + R20 < I(U0 ; Y1 ) − τδ (68a) (68b) R0 + R10 + R20 < I(U0 ; Y2 ) − τδ (68c) R11 + R̃1 + R1′ < I(U1 ; Y1 |U0 ) − τδ R22 + R̃2 + R2′ < I(U2 ; Y2 |U0 ) − τδ (68d) (68e) with τδ → 0 as δ → 0. Furthermore, setting ηn , η(n, δn , δn′ ) where {δn }n∈N and {δn′ }n∈N are sequences that converge sufficiently slowly to zero as n grows, we have limn→∞ ηn = 0. To clarify, the δ ′ that appears in (67) and in upper bounds below is a consequence of the Conditional Typicality Lemma [20, Section 2.5]. This lemma considers conditioning on sequences that are jointly letter-typical with respect to a slightly smaller gap than the original δ. This smaller gap is δ ′ . Properties for Leakage Analysis: In contrast to previous works, we do not analyse the expected leakages of the random code. Instead, we establish certain properties that the random code possesses and then extract a specific sequence of codes that satisfies these properties as well as reliability. It is then shown that the extracted sequence of codes admits the leakage constraints. 
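As an aside, the mutual-information quantities entering (68) (and the related bounds used in the leakage analysis that follows) can be computed directly for a candidate distribution. The sketch below is ours and purely illustrative: the alphabet sizes and the randomly drawn Q_{U0,U1,U2}, Q_{X|U0,U1,U2} and W_{Y1,Y2|X} are assumptions, not objects from the paper.

```python
import numpy as np

def H(p):
    """Entropy in bits of a (joint) PMF given as a numpy array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def marg(p, keep):
    """Marginal of p over the axes listed in `keep`."""
    axes = tuple(i for i in range(p.ndim) if i not in keep)
    return p.sum(axis=axes)

# Toy joint PMF over (U0, U1, U2, X, Y1, Y2); all ingredients are illustrative.
rng = np.random.default_rng(0)
QU = rng.random((2, 2, 2)); QU /= QU.sum()                         # Q_{U0,U1,U2}
QX = rng.random((2, 2, 2, 3)); QX /= QX.sum(-1, keepdims=True)     # Q_{X|U0,U1,U2}
Wch = rng.random((3, 2, 2)); Wch /= Wch.sum((-2, -1), keepdims=True)  # W_{Y1,Y2|X}
p = np.einsum('abc,abcx,xyz->abcxyz', QU, QX, Wch)                 # joint PMF

# axis labels: 0=U0, 1=U1, 2=U2, 3=X, 4=Y1, 5=Y2
def I_cond(A, B, C=()):
    """I(A; B | C) for disjoint sets of axes A, B, C."""
    hC = H(marg(p, tuple(C))) if C else 0.0
    return (H(marg(p, tuple(C) + tuple(A))) + H(marg(p, tuple(C) + tuple(B)))
            - H(marg(p, tuple(C) + tuple(A) + tuple(B))) - hC)

print('I(U1;U2|U0)    =', I_cond((1,), (2,), (0,)))
print('I(U0;Y1)       =', I_cond((0,), (4,)))
print('I(U0;Y2)       =', I_cond((0,), (5,)))
print('I(U1;Y1|U0)    =', I_cond((1,), (4,), (0,)))
print('I(U2;Y2|U0)    =', I_cond((2,), (5,), (0,)))
print('I(U1;Y2|U0,U2) =', I_cond((1,), (5,), (0, 2)))
print('I(U1;U2,Y2|U0) =', I_cond((1,), (2, 5), (0,)))
```

A routine of this kind can be used to check whether a given rate tuple satisfies the constraints for a particular choice of auxiliaries.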
By symmetry, we consider only the properties required for the analysis of the rate-leakage from M1 to the 2nd receiver. The corresponding derivations for M2 follows similar lines and the resulting rate constraints match up to changing some indices. We first need a decodability property. Specifically, Decoder 2 should be able to decode (W1 , I1 ) with a low error probability based on (Mp , M11 , M22 , W2 , I2 , Y2 ). We consider a decoding rule based on a joint typicality test: Decoder 2 searches for a unique pair (w̌1 , ǐ1 ) ∈ W1 × I1 such that   u0 (mp ), u1 (mp , m11 , w̌1 , ǐ1 ),u2 (mp , m22 , w2 , i2 ), y2 ∈ Tδn (QU0 ,U1 ,U2 ,Y2 ). (69) (66) For a fixed codebook Cn ∈ Cn (which specifies a code cn ), (Leak) let P1 (Cn ) denote the probability that Decoder 2 fails in this decoding process. As explained in Appendix B, we have (Leak) EP1 (Cn ) ≤ κ(n, δ, δ ′ ) ′ (70) ′ where δ ∈ (0, δ) and limn→∞ κ(n, δ, δ ) = 0 for all 0 < δ ′ < δ, if R̃1 + R̃1 < I(U1 ; Y2 |U0 , U2 ) − ξδ (71a) R1′ (71b) < I(U1 ; U2 , Y2 |U0 ) − ξδ with ξδ → 0 as δ → 0. Again, by allowing δ and δ ′ from (70) to converge to zero sufficiently slow with n, κ(n, δ, δ ′ ) may be replaced by a κn with limn→∞ κn = 0. We are now ready to state Lemmas 4-6. Proofs are given in Appendices C-E. Lemma 4: If (71) is valid with ξδ → 0 as δ → 0, then there exists ζ1 (n, δ, δ ′ ), where δ ′ ∈ (0, δ), such that H(W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 , Cn ) ≤ nζ1 (n, δ, δ ′ ) (72) and limn→∞ ζ1 (n, δ, δ ′ ) = 0 for all 0 < δ ′ < δ. Furthermore, setting ζ1,n , ζ1 (n, δn , δn′ ) where {δn }n∈N and {δn′ }n∈N are sequences that decay sufficiently slow to zero as n grows, we have limn→∞ ζ1,n = 0. Lemma 5: There exist ζ2 (n, δ, δ ′ ) that satisfies the same properties as ζ1 (n, δ, δ ′ ) from Lemma 4, such that I(U1 ; Y2 |U0 , U2 , Cn ) ≤ nI(U1 ; Y2 |U0 , U2 ) + nζ2 (n, δ, δ ′ ). (73) Lemma 6: There exists ζ3 (n, δ, δ ′ ) that satisfies the same properties as ζ1 (n, δ, δ ′ ) from Lemma 4, such that I(U1 ; M22 , W2 , I2 |Mp , Cn ) ≤ nI(U1 ; U2 |U0 ) + nζ3 (n, δ, δ ′ ). (74) The Uniform Approximation Lemma from Section III further implies that if  (75) R2′ + min R1′ , R22 + R̃2 > I(U1 ; U2 |U0 ) + δ then there exist ζ4,n with limn→∞ ζ4,n , such that (U) ECn PMp ,M11 ,W1 ,I1 |Cn −pMp ×M11 ×W1 ×I1 (U) TV ≤ ζ4,n (76) where pMp ×M11 ×W1 ×I1 is the uniform distribution on Mp × M11 × W1 × I1 . To see this, observe that by symmetry we 15 have and (U) pMp ×M11 ×W1 ×I1 ECn PMp ,M11 ,W1 ,I1 |Cn − X 1 = |Mp ||M11 ||W1 |  R1′ + min R2′ , R11 + R̃1 > I(U1 ; U2 |U0 ) + δ TV (80c) in accordance with (71) and (75), respectively. This implies that results analog to those of Lemmas 4-6 and (76) hold for M2 . (mp ,m11 ,w1 ) ∈Mp ×M11 ×W1 (U) × ECn PI1 |Mp =mp ,M11 =m11 ,W1 =w1 ,Cn − pI1 = ECn PI1 |Mp =1,M11 =1,W1 =1,Cn − (U) pI 1 TV TV . (77) Note that (Mp , M11 , W1 ) = (1, 1, 1) fixes a single u1 -bin ′ (comprising 2nR1 codewords), while the pair (M22 , W2 ) (of total rate R22 + R̃2 ) uniformly chooses a u2 -bin (comprising ′ 2nR2 codewords). Lemma 3 now gives the desired relation because bins are generated conditionally independent given (C ) U0 and the chosen codeword pair is drawn according to PLE n from (62) which adheres to the structure of (14). We now invoke the Selection Lemma [46, Lemma 5] to extract a specific sequence of codes that satisfies several desired properties. We restate this lemma next.  Lemma 7 (Selection Lemma): Let An n∈N be a sequence of random variables, n o where An takes values in An . Let (1) (2) (J) fn , fn , . . . 
, fn be a collection of J < ∞ sequences n∈N (i) of bounded functions fn : An → R+ , j ∈ [1 : J ]. If Efn(j) (An ) −−−−→ 0, n→∞ ∀j ∈ [1 : J ], n→∞ ∀j ∈ [1 : J ]. Consider the sequence of random codes functions6  Cn fn(1)(Cn) , Pe (Cn ) (78b) n∈N , the (79a) fn(2)(Cn) , H(W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 , Cn = Cn ) (79b) 1 fn(3)(Cn) , I(U1 ; Y2 |U0 , U2 , Cn = Cn ) n − I(U1 ; Y2 |U0 , U2 ) (79c) 1 fn(4)(Cn) , I(U1 ; M22 , W2 , I2 |Mp , Cn = Cn ) n − I(U1 ; U2 |U0 ) (79d) (U) fn(5)(Cn) , PMp ,M11 ,W1 ,I1 |Cn =Cn − pMp ×M11 ×W1 ×I1 TV (79e) (6) (7) (8) as well as the functions fn , fn and fn that correspond to (2) (3) (4) fn , fn and fn , respectively, with respect to the analysis for M2 . We also impose constraints on the rates that arise from repeating the above steps for M2 . Namely, we set R̃2 + R̃2 < I(U2 ; Y1 |U0 , U1 ) − ξ(δ) (80a) R2′ (80b) < I(U2 ; U1 , Y1 |U0 ) − ξ(δ) ECn fn(j) (Cn ) −−−−→ 0, j ∈ [1 : 7]. n→∞ (81) Lemma 7 now implies the existence of a sequence of codebooks Cn n∈N (each inducing an (n, R1 , R1 , R2 ) code cn ) and another sequence of numbers {ηn }n∈N with limn→∞ ηn = 0, such that for j ∈ [1 : 7] we have fn(j) (Cn ) ≤ ηn , ∀n ∈ N. (82) Leakage Analysis of M1 Under C n : All subsequent information measures are calculated with respect to P (Cn ) from (65). We emphasize this by using HCn and ICn as the notation of such entropy or mutual information terms, respectively. (78a) then there exists a sequence {an }n∈N , where an ∈ An for every n ∈ N, such that fn(j) (an ) −−−−→ 0, Replacing δ and δ ′ in the definitions of ηj (n, δ, δ ′ ), for j = 1, 2, 3, with {δn }n∈N and {δn′ }n∈N that decay to zero sufficiently slow, we have (5) First, because fn (Cn ) −−−−→ 0 and by the continuity of n→∞ entropy, there exists a sequence {θn }n∈N with limn→∞ θn = 0, such that  HCn (M11 , W1 , I1 |Mp ) − log |M11 ||W1 ||I1 | ≤ θn (83) for every n ∈ N. Next, since 1 1 (84) IC (M1 ; Y2 ) = R1 − HCn (M1 |Y2 ) n n n we can upper bound the leakage of M1 to the second receiver by lower bounding the conditional entropy term from the RHS of (84). We have ℓ1 (cn ) = HCn (M1 |Y2 ) (a) ≥ HCn (M11 |Mp , M22 , W2 , I2 , Y2 ) = HCn (M11 , Y2 |Mp , M22 , W2 , I2 ) − HCn (Y2 |Mp , M22 , W2 , I2 ) (b) ≥ HCn (M11 , Y2 |Mp , M22 , W2 , I2 ) − HCn (Y2 |U0 , U2 ) = HCn (M11 , W1 , I1 , Y2 |Mp , M22 , W2 , I2 ) − HCn (W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 ) − H(Y2 |U0 , U2 ) = HCn (M11 , W1 , I1 |Mp , M22 , W2 , I2 ) + HCn (Y2 |Mp , M11 , W1 , I1 , M22 , W2 , I2 ) − HCn (W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 ) − HCn (Y2 |U0 , U2 ) (c) 6 We (1) slightly abuse notation in the definition of fn because Pe is actually a function of the code cn rather than the codebook Cn . However, since Cn uniquely defines cn we prefer this presentation for the sake of simplicity. = HCn (M11 , W1 , I1 |Mp , M22 , W2 , I2 ) − HCn (W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 ) − ICn (U1 ; Y2 |U0 , U2 ) 16 = HCn (M11 , W1 , I1 |Mp ) − ICn (U1 ; M22 , W2 , I2 |Mp ) − HCn (W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 ) − ICn (U1 ; Y2 |U0 , U2 ) (85) where: (a) is because conditioning cannot increase entropy and since M1 corresponds to the pair (M10 , M11 ) while Mp = (M0 , M10 , M20 ); (b) follows because U0 and U2 are specified by (Mp , M22 , W2 , I2 ) and since conditioning cannot increase entropy; (c) uses the deterministic relations stated in (b) along with U1 being determined by (Mp , M11 , W1 , I1 ) and the Markov relation Y2 −(U0 , U1 , U2 )−(Mp , M11 , W1 , I1 , M22 , W2 , I2 ). 
Inserting (82) (for j ∈ [2 : 4]), (83) and R11 = R1 − R10 into (85) further gives n o − I(U1 ; U2 |U0 )+ min I(U0 ;Y1 ), I(U0 ;Y2 ) . (89e) To prove the first claim, assume that L1 ≥ L⋆1 (QU0 ,U1 ,U2 ,X ). Consequently, the term inside the positive part function from the RHS of (89a) is non-negative as it satisfies I(U1 ;Y1 |U0 ) − I(U1 ; U2 , Y2 |U0 ) + L1 n o ≥ I(U1 ; Y1 |U0 ) + min I(U0 ; Y1 ), I(U0 ; Y2 ) , (90) which makes (89a) inactive due to (89b), and therefore, R̃O (L1 , L2 , QU0 ,U1 ,U2 ,X ) = R̃O (∞, L2 , QU0 ,U1 ,U2 ,X ). An analogous argument with respect to L2 proves the second claim (essentially by showing that if L2 ≥ L⋆2 then (89c) is inactive due (89d)). The third claim follows by combining both preceding arguments. D. Proof of Theorem 3 HCn (M1 |Y2 ) ≥ n R1 − R10 + R̃1 + R1′ − I(U1 ; U2 , Y2 |U0 ) − 3ηn − θn (a)  ≥ nR1 − n L1 + 3ηn + θn  where (a) follows by taking R̃1 + R1′ − R10 > I(U1 ; U2 , Y2 |U0 ) − L1 (86a) R1′ (86b) + L1 − R10 > I(U1 ; U2 |U0 ). The bound in (86b) ensures the feasibility of an R̃1 > 0 that satisfies (71a) and (86a) simultaneously. The corresponding rate bounds for the analysis of ℓ2 (cn ) are R̃2 + R2′ − R20 > I(U2 ; U1 , Y1 |U0 ) − L2 (87a) R2′ (87b) + L2 − R20 > I(U1 ; U2 |U0 ). Recalling that ηn and θn can be made arbitrarily small with n, there exists n0 (ǫ) ∈ N, such that for all n > n0 (ǫ) Pe (cn ) ≤ ǫ ℓ1 (cn ) ≤ L1 + ǫ (88a) (88b) ℓ2 (cn ) ≤ L2 + ǫ. (88c) as required. Our last step is to apply FME on (68), (75), (80) and (86)(87), while using (60) and the non-negativity of the involved terms, to eliminate Rj0 , Rj′ and R̃j , for j = 1, 2. Since all the above linear inequalities have constant coefficients, the FME can be performed by a computer program, e.g., by the FME-IT software [47]. This shows the sufficiency of (23). C. Proof of Corollary 2 Fix (L1 , L2 ) ∈ R2+ and QU0 ,U1 ,U2 ,X ∈ P(U0 ×U1 ×U2 ×X ). The rate bounds describing R̃I (L1 , L2 , QU0 ,U1 ,U2 ,X ) are: R1 ≤ I(U1 ; Y1 |U0 )−I(U1 ; U2 , Y2 |U0 )+L1 (89a) n o R1 ≤ I(U1 ; Y1 |U0 )+min I(U0 ; Y1 ),I(U0 ; Y2 ) (89b) R2 ≤ I(U2 ; Y2 |U0 )−I(U2 ; U1 , Y1 |U0 )+L2 (89c) n o R2 ≤ I(U2 ; Y2 |U0 )+min I(U0 ; Y1 ),I(U0 ; Y2 ) (89d) R1 + R2 ≤ I(U1 ; Y1 |U0 ) + I(U2 ; Y2 |U0 ) We show that given an (L1 , L2 )-achievable rate triple (R0 , R1 , R2 ), there is a PMF QW,U,V,X ∈ P(W ×U ×V ×X ), such that (27) holds when the information measures are calculated with respect to QW,U,V,X WY1 ,Y2 |X . Due to the symmetric structure of the rate bounds defining RO (L1 , L2 ), we present only the derivation of (27a)-(27d) and (27h). The other inequalities from (27) are established by similar arguments. Since (R0 , R1 , R2 ) is (L1 , L2 )-achievable, for every ǫ > 0 there is a sufficiently large n ∈ N and an (n, R0 , R1 , R2 ) code cn for which (22) holds. We note that all subsequent entropy and mutual information terms are calculated with respect to the PMF from (19) that is specified by cn . Fix ǫ > 0 and find the corresponding blocklength n ∈ N. By Fano’s inequality we have (j) H(M0 , Mj |Yjn ) ≤ 1 + nǫRj , nδn,ǫ , j = 1, 2. (91)  (1) (2) Define δn,ǫ = max δn,ǫ , δn,ǫ . Next, by (22b), we write n(L1 + ǫ) ≥ I(M1 ; Y2n ) = I(M1 ; M0 , M2 , Y2n ) − I(M1 ; M0 , M2 |Y2n ) (a) ≥ I(M1 ; Y2n |M0 , M2 ) − H(M0 , M2 |Y2n ) (b) ≥ I(M1 ; Y2n |M0 , M2 ) − nδn,ǫ (92) where (a) uses the independence of M1 and (M0 , M2 ) and the non-negativity of entropy, while (b) is by (91). (92) implies I(M1 ; Y2n |M0 , M2 ) ≤ nL1 + n(ǫ + δn,ǫ ). (93) Similarly, we have I(M1 ; Y2n |M0 ) ≤ nL1 + n(ǫ + δn,ǫ ). 
The common message rate R0 satisfies nR0 = H(M0 ) (a) ≤ I(M0 ; Y1n ) + nδn,ǫ n X I(M0 ; Y1,i |Y1i−1 ) + nδn,ǫ = i=1 (94) 17 ≤ (b) ≤ n X I(M0 , Y1i−1 ; Y1,i ) + nδn,ǫ i=1 n X I(Wi ; Y1,i ) + nδn,ǫ (95a) (95b) i=1 n where (a) uses (91) and (b) defines Wi , (M0 , Y1i−1 , Y2,i+1 ). n n By reversing the roles of Y1 and Y2 and repeating similar steps, we also have nR0 ≤ ≤ n X i=1 n X where (a) follows from (91) and (b) follows by the definition of (Wi , Ui ). Moreover, consider n(R0 + R1 ) = H(M1 |M0 ) + H(M0 ) (a) ≤ I(M1 ; Y1n |M0 ) + I(M0 ; Y2n ) + nδn,ǫ n h i X n n I(M1 , Y2,i+1 ; Y1,i |M0 , Y1i−1 ) + I(M0 ; Y2,i |Y2,i+1 ) ≤ i=1 n I(M0 , Y2,i+1 ; Y2,i ) + nδn,ǫ I(Wi ; Y2,i ) + nδn,ǫ . (96a) (96b) + nδn,ǫ = n I(Ui ; Y1,i |Wi ) + I(Y2,i+1 ; Y1,i |M0 , Y1i−1 ) i=1 i=1 For R1 , it follows that n h X i n + I(M0 ; Y2,i |Y2,i+1 ) + nδn,ǫ n h X n I(Ui ; Y1,i |Wi ) + I(Y1i−1 ; Y2,i |M0 , Y2,i+1 ) = (b) nR1 i=1 = H(M1 |M0 , M2 ) (a) ≤ I(M1 ; Y1n |M0 , M2 ) − I(M1 ; Y2n |M0 , M2 ) + nL1 + nξn,ǫ n h (b) X n I(M1 ; Y1i , Y2,i+1 |M0 , M2 ) = i=1 i n − I(M1 ; Y1i−1 , Y2,i |M0 , M2 ) + nL1 + nξn,ǫ n h i X I(M1 ; Y1,i |M2 , Wi ) − I(M1 ; Y2,i |M2 , Wi ) = i n + I(M0 ; Y2,i |Y2,i+1 ) + nδn,ǫ n h i (c) X I(Ui ; Y1,i |Wi ) + I(Wi ; Y2,i ) + nδn,ǫ (100) ≤ i=1 where (a) is by (91), (b) is Csiszár’s sum identity, while (c) uses the definition of (Wi , Ui ). i=1 (c) = n h X i=1 + nL1 + nξn,ǫ i I(Ui ; Y1,i |Wi , Vi ) − I(Ui ; Y2,i |Wi , Vi ) + nL1 + nξn,ǫ (97) where (a) uses (91) and (92) and ξn,ǫ = 2δn,ǫ + ǫ, (b) follows from a telescoping identity [48, Eqs. (9) and (11)], and (c) uses Ui , (M1 , Wi ) and Vi , (M2 , Wi ). R1 is also upper bounded as To bound the sum R0 + R1 + R2 , we start by writing H(M1 |M0 , M2 ) (a) ≤ I(M1 ; Y1n |M0 , M2 ) + nδn,ǫ n X I(M1 ; Y1,i |M0 , M2 , Y1i−1 ) + nδn,ǫ = i=1 ≤ nR1 = H(M1 |M0 ) (b) (a) ≤ I(M1 ; Y1n |M0 ) − I(M1 ; Y2n |M0 ) + nL1 + nξn,ǫ n h i (b) X n n I(M1 ; Y1i , Y2,i+1 |M0 ) − I(M1 ; Y1i−1 , Y2,i |M0 ) = i=1 + nL1 + nξn,ǫ n h i X (c) I(Ui ; Y1,i |Wi ) − I(Ui ; Y2,i |Wi ) + nL1 + nξn,ǫ = i=1 (98) where (a) is by (91) and (94), (b) uses a telescoping identity, while (c) follows by the definition of (Wi , Ui ). For the sum R0 + R1 , we have i=1 n h X i=1 i n I(Ui ; Y1,i |Wi , Vi ) + I(Y2,i+1 ; Y1,i |M0 , M2 , Y1i−1 ) + nδn,ǫ (103) where (a) uses (91) and (b) follows by the definition of (Wi , Ui , Vi ). 
Moreover, we have H(M2 |M0 ) (a) ≤ I(M2 ; Y2n |M0 ) + nδn,ǫ n h i (b) X n n I(M2 ; Y2,i |M0 , Y1i−1 ) − I(M2 ; Y2,i+1 |M0 , Y1i ) = i=1 n h (c) X n I(M2 ; Y2,i+1 |M0 , Y1i−1 ) + I(Vi ; Y2,i |Wi ) = (a) i=1 n I(M1 , Y2,i+1 ; Y1,i |M0 , M2 , Y1i−1 ) + nδn,ǫ + nδn,ǫ n(R0 + R1 ) = H(M0 , M1 ) ≤ I(M0 , M1 ; Y1n ) + nδn,ǫ n (b) X I(Wi , Ui ; Y1,i ) + nδn,ǫ ≤ = n X i=1 (99) n − I(M2 ; Y1,i , Y2,i+1 |M0 , Y1i−1 ) i + I(M2 ; Y1,i |M0 , Y1i−1 ) + nδn,ǫ 18 nR2 ≤ nR2 ≤ n(R0 + R2 ) ≤ n(R0 + R2 ) ≤ n(R0 + R1 + R2 ) ≤ n(R0 + R1 + R2 ) ≤ n h i X I(Vi ; Y2,i |Wi , Ui ) − I(Vi ; Y1,i |Wi , Ui ) + nL2 + nξn,ǫ i=1 n h X i=1 n X = I(Wi , Vi ; Y2,i ) + nδn,ǫ i I(Ui ; Y1,i |Wi ) + I(Vi ; Y2,i |Wi , Ui ) + I(Wi ; Y1,i ) + 3nδn,ǫ i=1 n h X i I(Ui ; Y1,i |Wi ) + I(Vi ; Y2,i |Wi , Ui ) + I(Wi ; Y2,i ) + 3nδn,ǫ i I(M2 ; Y1,i |M0 , Y1i−1 ) + nδn,ǫ (104) n(R1 + R2 ) n h X I(Ui ; Y1,i |Wi , Vi ) + I(Vi ; Y2,i |Wi ) − I(Vi ; Y1,i |Wi ) ≤ n h X i n + I(M2 , Y2,i+1 ; Y1,i |M0 , Y1i−1 ) + 2nδn,ǫ I(Ui ; Y1,i |Wi , Vi ) + I(Vi ; Y2,i |Wi ) i=1 (108e) (108f) and where: (a) follows from (91); (b) is a telescoping identity; (c) is by the mutual information chain rule and the definition of (Vi , Ui ); (d) uses the mutual information chain rule again. Combining (103) and (104) yields = (108d) i=1 n h X i=1 i=1 (108c) i I(Vi ; Y2,i |Wi ) + I(Wi ; Y1,i ) + nδn,ǫ n h X I(Vi ; Y2,i |Wi ) − I(Vi ; Y1,i |Wi ) + (108b) i=1 n h X i=1 (d) i I(Vi ; Y2,i |Wi ) − I(Vi ; Y1,i |Wi ) + nL2 + nξn,ǫ (108a) i n + I(Y2,i+1 ; Y1,i |M0 , Y1i−1 ) + 2nδn,ǫ. (105a) Applying Csiszár’s sum identity on the last term in (105a) gives n h X I(Ui ; Y1,i |Wi , Vi ) + I(Vi ; Y2,i |Wi ) n(R1 + R2 ) = n(R0 + R1 + R2 ) ≤ n h X I(Ui ; Y1,i |Wi , Vi ) + I(Vi ; Y2,i |Wi ) i=1 i + I(Wi ; Y2,i ) + 3nδn,ǫ , (107) respectively. By repeating similar steps, we obtain bounds related to the remaining rate bounds in (27) as given in (108a)-(108f) at the top of this page. The bounds are rewritten by introducing a time-sharing random variable Q that is uniformly distributed over the set [1 : n] and is independent of all the other random variables whose distribution is described in (19). For instance, the bound (97) is rewritten as n i 1 Xh R1 ≤ I(Uq ; Y1,q |Wq , Vq ) − I(Uq ; Y2,q |Wq , Vq ) n q=1 + L1 + ξn,ǫ h P Q = q I(UQ ; Y1,Q |WQ , VQ , Q = q) = i=q i − I(UQ ; Y2,Q |WQ , VQ , Q = q) + L1 + ξn,ǫ n X ≤ I(UQ ; Y1,Q |WQ , VQ , Q) − I(UQ ; Y2,Q |WQ , VQ , Q) + L1 + nξn,ǫ (109) i=1 i n + I(Y1i−1 ; Y2,i |M0 , Y2,i+1 ) + 2nδn,ǫ . (105b) Combining (95a) with (105a) and (96a) with (105b) yields n h X I(Ui ; Y1,i |Wi , Vi ) n(R0 + R1 + R2 ) ≤ i=1 i + I(Vi ; Y2,i |Wi ) + I(Wi ; Y1,i ) + 3nδn,ǫ (106) Denote Y1 , Y1,Q , Y2 , Y2,Q , W , (WQ , Q), U , (UQ , Q) and V , (VQ , Q). We thus have the bounds from (27) with the added terms δn,ǫ and ξn,ǫ , which can be made arbitrarily small by increasing the blocklength n while decreasing ǫ. To complete the converse proof note that since the channel is memoryless and without feedback, and because Uq = (M1 , Wq ) and Vq = (M2 , Wq ), the chain (Y1,q , Y2,q ) − Xq − (Uq , Vq ) − Wq (110) is Markov for every q ∈ [1 : n]. This implies that (Y1 , Y2 ) − 19 X − (U, V ) − W forms a Markov chain, which establishes Theorem 3. E. Proof of Theorem 4 The direct part of Theorem 4 follows by setting U0 = W , U1 = Y1 and U2 = V into CSD (L1 , L2 ), which establishes its inclusion in RI (L1 , L2 ). For the converse we prove the reverse inclusion, i.e., RO (L1 , L2 ) ⊆ CSD (L1 , L2 ). 
First we remove the restriction from RO (L1 , L2 ) that X − (U, V ) − W forms a Markov chain; this can only increase the region. Fix a PMF QW,U,V,X ∈ P(W×U ×V×X ), which induces a joint distribution QW,U,V,X 1{Y1 =y1 (X)} WY2 |X , and let QW,V,Y1 ,X WY2 |X be its marginal PMF of (W, V, Y1 , X, Y2 ). Each of the bounds defining RO (L1 , L2 ) are evaluated with respect to QW,U,V,X 1{Y1 =y1 (X)} WY2 |X , while the information terms from CSD (L1 , L2 ) are taken with respect to QW,V,Y1 ,X WY2 |X . We start by noting that (29a) and (27a) are the same. Next, the RHS of (27b) is upper bounded by the RHS of (29b) since R1 ≤ I(U ; Y1 |W, V ) − I(U ; Y2 |W, V ) + L1 = H(Y1 |W, V ) − H(Y1 |W, V, U ) − I(U ; Y2 |W, V ) + L1 (a) ≤ H(Y1 |W, V ) − I(Y1 ; Y2 |W, V, U ) − I(U ; Y2 |W, V ) + L1 = H(Y1 |W, V ) − I(U, Y1 ; Y2 |W, V ) + L1 (b) ≤ H(Y1 |W, V, Y2 ) + L1 (111) where (a) is by the non-negativity of entropy and (b) is because conditioning cannot increase entropy. For (27d) we clearly have n o R0 + R1 ≤ I(U ; Y1 |W ) + min I(W ; Y1 ), I(W ; Y2 ) n o ≤ H(Y1 |W ) + min I(W ; Y1 ), I(W ; Y2 ) , (112) which coincides with (29c). Furthermore, inequalities (29d) and (29e) are the same as (27f) and (27g), respectively. For the sum of rates, the RHS of (29f) upper bounds that of (27h) because I(U ; Y1 |W, V ) ≤ H(Y1 |W, V ). (113) Removing the other bounds from (27) can only increase RO (L1 , L2 ), which shows its inclusion in CSD (L1 , L2 ). This characterizes CSD (L1 , L2 ) as the (L1 , L2 )-leakage-capacity region of the SD-BC. VIII. S UMMARY AND message that comprises the public parts and the common message was constructed. To correlate the codewords for the private parts, we used the likelihood encoder. Its simple structure enabled a rigorous analysis of performance for the proposed scheme. Theorem 1 fixes a weakness of previous work by letting the eavesdropper know the codebook. The main tool needed was the likelihood encoder (Lemma 3). Our results include various past works as special cases. Large leakage thresholds reduce our inner and outer bounds to Marton’s inner bound with a common message [21] and the UVW-outer bound [22], respectively. The leakage-capacity region of the SD-BC without a common message recovers the capacity regions where both [5], either [6], [27], or neither [21] private message is secret. The result for the BC with a degraded message set and a privacy leakage constraint captures the capacity regions for the BC with confidential messages [3] and the BC with a degraded message set (without secrecy) [28]. Furthermore, we derived conditions on the allowed leakage values that differentiates whether a further increase of each leakage threshold induces a larger inner bound or not. The conditions effectively let one (numerically) calculate privacy leakage threshold values above which the inner bound saturates. This idea was visualized by means of a BW-BC example that showed the transition of the leakage-capacity region from secrecy-capacity regions for different scenarios to the capacity region without secrecy. C ONCLUDING R EMARKS We considered the BC with privacy leakage constraints. Under this model, all four scenarios concerning secrecy (i.e., when both, either or neither of the private messages are secret) are special cases by appropriate choices for the leakage thresholds. Inner and outer bounds on the leakage-capacity region were derived and shown to be tight for SD and PD BCs, as well as for BCs with a degraded message set. 
The coding strategy that achieved the inner bound is based on a Marton-like codebook construction with a common message supplemented by an extra layer of binning. Splitting each private message into a public and a private part, a public ACKNOWLEDGEMENTS The authors would like to thank the Associate Editor and the anonymous reviewers for helping to improve the presentation of this paper. We also kindly thank Ido B. Gattegno for his work on the FME-IT software [47] that assisted us with technical details of proofs. A PPENDIX A P ROOF OF C OROLLARY 8 0 The region CD (L1 , L2 ) is obtained from CSD (L1 , L2 ) by setting W = 0 and V = Y2 , which implies that CD (L1 , L2 ) ⊆ 0 CSD (L1 , L2 ). For the converse, the RHS of (32a) is upper bounded by R1 ≤ H(Y1 |W, V, Y2 ) + L1 ≤ H(Y1 |Y2 ) + L1 . (114) For (32c), we have I(V ; Y2 |W ) − I(V ;Y1 |W ) + L2 ≤ I(V ; Y1 , Y2 |W ) − I(V ; Y1 |W ) + L2 = I(V ; Y2 |W, Y1 ) + L2 ≤ H(Y2 |Y1 ) + L2 . (115) The RHSs of (32b) and (32d) are clearly upper bounded as n o H(Yj |W ) + min I(W ; Y1 ), I(W ; Y2 ) ≤ H(Yj ), j = 1, 2. (116) Finally, (42c) is implied by (32e) since R1 + R2  ≤ H(Y1 |W, V ) + I(V ; Y2 |W ) + min I(W ; Y1 ), I(W ; Y2 ) 20 E= D0 = n n o  U0 (1), U1 (1, 1, 1, I1 ), U2 (1, 1, 1, I2 ) ∈ / Tδn′ (QU0 ,U1 ,U2 ) o  U0 (1), U1 (1, 1, 1, I1 ), U2 (1, 1, 1, I2 ), Y1 , Y2 ∈ Tδn (QU0 ,U1 ,U2 ,Y1 ,Y2 ) (j) D0 (mp ) = (j) D1 (mjj , wj , ij ) = n n (119a) o  U0 (mp ), Yj ∈ Tδn (QU0 ,Yj ) (119b) o  U0 (1), Uj (1, mjj , wj , ij ), Yj ∈ Tδn (QU0 ,Uj ,Yj ) = H(Y1 , Y2 ). (119c) [2] ≤ H(Y1 |W, V ) + I(W, V ; Y2 ) ≤ H(Y1 , Y2 |W, V ) + I(W, V ; Y1 , Y2 ) 4) For Pj , j = 1, 2, we have [2] Pj (117) (a) ≤ X 2−n [2] I(U0 ;Yj )−τj (δ) m̃p 6=1 A PPENDIX B E RROR P ROBABILITY A NALYSIS FOR THE P ROOF T HEOREM 1 ≤ 2nRp 2−n [1] 1) By [32, Theorem 3], P0 → 0 as n → ∞ if R1′ + R2′ > I(U1 ; U2 |U0 ). (121) 2) The Conditional Typicality Lemma [20, Section 2.5] [2] implies that P0 → 0 as n grows. More precisely, there exists a function β(n, δ, δ ′ ) with limn→∞ β(n, δ, δ ′ ) = 0 [2] for any 0 < δ ′ < δ, such that P0 ≤ β(n, δ, δ ′ ). ′ Furthermore, replacing δ and δ with properly chosen decaying sequences {δn }n∈N and {δn′ }n∈N , respectively, and setting βn , β(n, δn , δn′ ), we have limn→∞ βn = 0. [1] [0] 3) The definitions in (119) clearly give Pj = Pj = 0, for j = 1,o2 andn every n ∈ N. This o is since n (j) (j) D0 (1)c ∩ D0 = D1 (1, 1, Ij )c ∩ D0 = ∅, for j = 1, 2. in Section VII-B, we slightly abuse notation in writing EPe (Cn ) because Pe is actually a function of the code cn rather than the codebook Cn . We favor this notation for its simplicity and remind the reader that Cn uniquely defines cn . [2] I(U0 ;Yj )−τj (δ) [2] Rp −I(U0 ;Yj )+τj (δ) = 2n OF By the symmetry of the codebook construction with respect to (Mp , M11 , W1 , M22 , W2 ) and due to their uniformity, we may assume that (Mp , M11 , W1 , M22 , W2 ) = (1, 1, 1, 1, 1). Encoding errors: Fix any δ ′ ∈ (0, δ). An encoding error event is described as given in (118) at the top of this page. Decoding errors: To account for decoding errors, define (119) from the top of this page, where j = 1, 2. For any event A from the σ-algebra over which P is defined, denote P1 = P A Mp = 1, M11 = 1, W1 = 1, M22 = 1, W2 = 1 . By the union bound, the expected error probability is bounded as in (120), given at the top of the next page.7 Note that with respect to the notation in (120), [k] [1] P0 is the probability of an encoding error, while Pj , for k ∈ [0 : 3], are the decoding errors of Decoder j. 
We proceed with the following steps: 7 As (118)    (122) where (a) follows since U0 (m̃p ) is independent of Yj , [2] for any m̃p 6= 1. Thus, for Pj to vanish as n → ∞, we take: [2] Rp < I(U0 ; Yj ) − τj (δ), j = 1, 2. (123) [2] where τj (δ) → 0 as δ → 0. [3] 5) For Pj , j = 1, 2, we have [3] Pj (a) X [3] 2−n I(Uj ;Yj |U0 )−τj (δ) ≤ 2n(Rjj +Rj +R̃j ) 2−n I(Uj ;Yj |U0 )−τj (δ) ≤ (m̃jj ,w̃j )6=(1,1), ĩj ∈Ij ′ = 2n [3] [3] Rjj +R′j +R̃j −I(Uj ;Yj |U0 )+τj (δ)    (124) where (a) follows since Uj (1, m̃jj , w̃j , ĩj ) is independent of Yj , for any (m̃jj , w̃j ) 6= (1, 1) and ĩj ∈ Ij , while both of them are drawn conditioned on U0 (1). [3] We have Pj → 0 as n → ∞ if [3] Rjj + Rj′ + R̃j < I(Uj ; Yj |U0 ) − τj (δ), [3] j = 1, 2, (125) where, as before, τj (δ) → 0 as δ → 0. Summarizing the above results, while substituting Rp = R0 + R10 + R20 and setting o n [k] τδ , max τj (δ) j=1,2, (126) k=2,3 we find that EPe (Cn ) ≤ η(n, δ, δ ′ ), ∀n ∈ N, (127) where limn→∞ η(n, δ, δ ′ ) = 0 for all 0 < δ ′ < δ, if the conditions in (68) are met. As mentioned before, if we replace δ and δ ′ with properly chosen sequences {δn }n∈N and {δn′ }n∈N , respectively, that decay sufficiently slowly to zero 21 EPe (Cn )   c ≤ P1  E ∪ D0 ∪    [   j=1,2   (j) D0 (1)c ∪    [  m̃p 6=1     (j) (j) D0 (m̃p ) ∪ D1 (1, 1, Ij )c ∪       [ (j)  D0 (m̃jj , w̃j , Ij )     (m̃jj ,w̃j )6=(1,1)       X  [ (j)  (j) (j)  P1 D (1)c ∩ D0 + P1 D (1, 1, Ij )c ∩ D0 + P1  ≤ P1 E + P1 D0c ∩ E + D0 (m̃p ) 0 1 | {z } | {z } j=1,2  | | {z } {z } m̃p 6=1 [1] [2] [0] [1] P0 P0 {z } | Pj Pj   c [2] PJ    [   (j) D1 (m̃jj , w̃j , ĩj )  + P1     (m̃jj ,w̃j )6=(1,1),  (120) ĩj ∈Ij | and set ηn , η(n, δn , δn′ ), we have limn→∞ ηn = 0. A. Leakage Associated Errors (1) This subsection shows how (71) ensures Eλm11 (Cn ) → 0 as n → ∞, for any m11 ∈ M11 . As before, by the symmetry of the underlying random code with respect to the messages, we have (1) Eλ(1) m11 (Cn ) = Eλ1 (Cn ), ∀m11 ∈ M11 , (129a) where ξ(δ) → 0 as δ → 0, results in a vanishing probability of the event that this u1 -sequence satisfies the typicality test from (69). Furthermore, if w̃1 and ĩ1 are both incorrect, U1 (1, 1, w̃1 , ĩ1 ) conditionally independent of is U2 (1, 1, 1, I2 ), Y2 given U0 (1). The search space is ′ now of size 2n(R̃1 +R1 ) , and therefore, taking R̃1 + R1′ < I(U1 ; U2 , Y2 |U0 ) − ξ(δ) [3] } implies a vanishing probability of this second error event. Finally, note that the error event where W1 = 1 is correct but ĩ1 is wrong has arbitrarily small probability if R1′ < I(U1 ; U2 , Y2 |U0 ) − ξ(δ) (the structure of the mutual information term is the same as in (129b) because an incorrect ĩ1 produces the same statistical relations as an incorrect pair (w̃1 , ĩ1 )). Evidently, the latter constraint is redundant due to (129b). (128) and we may further assume that (Mp , W1 , M22 , W2 ) = (1, 1, 1, 1). By arguments similar to those presented in the encoding and decoding error probability analysis, one can verify that (71) implies the existence of a function κ(n, δ) with (1) limn→∞ κ(n, δ) = 0 for any δ > 0, such that Eλ1 (Cn ) ≤ κ(n, δ). Furthermore, replacing δ with a sequence {δn }n∈N that decays sufficiently slow to zero as n grows and setting κn , κ(n, δn ), we have κn → 0 as n → ∞. This essentially follows by the law of large numbers and the Conditional Typicality Lemma that ensure the joint typicality of the transmitted sequences and the outputs. 
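As an aside for readers implementing such arguments: the typicality tests used throughout (e.g., in (63), (64), (69) and the events of (119)) amount to checking that the joint empirical PMF of the sequences is δ-close to the target PMF. A minimal letter-typicality check might look as follows; the per-letter relative-deviation criterion used here is one common convention and is our choice, not necessarily the exact definition in (3).

```python
import numpy as np
from itertools import product

def empirical_pmf(seqs, alphabet_sizes):
    """Joint empirical PMF of k parallel length-n sequences."""
    n = len(seqs[0])
    counts = np.zeros(alphabet_sizes)
    for t in range(n):
        counts[tuple(s[t] for s in seqs)] += 1
    return counts / n

def is_letter_typical(seqs, Q, delta):
    """True if the joint empirical PMF is within relative delta of Q on every
    symbol tuple (and puts no mass where Q is zero)."""
    P = empirical_pmf(seqs, Q.shape)
    ok = True
    for idx in product(*(range(s) for s in Q.shape)):
        if Q[idx] == 0:
            ok &= (P[idx] == 0)
        else:
            ok &= (abs(P[idx] - Q[idx]) <= delta * Q[idx])
    return ok

# toy check: a pair (u, y) drawn i.i.d. from Q is typical with high probability
rng = np.random.default_rng(2)
Q = np.array([[0.4, 0.1], [0.1, 0.4]])
flat = rng.choice(4, size=20000, p=Q.ravel())
u, y = flat // 2, flat % 2
print(is_letter_typical([u, y], Q, delta=0.1))
```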
If w̃1 is incorrect but I1 is the true index chosen by the likelihood encoder, U1 (1, 1, w̃1 , I1 ) is conditionally independent Y2 given (U0 (1), U2 (1, 1, 1, I2 )). The correlation between U0 (1), U1 (1, 1, w̃1 , I1 ) and U2 (1, 1, 1, I2 ) is a consequence of the likelihood encoder’s operation. Since the search space in this case is of size 2nR̃1 , taking R̃1 < I(U1 ; Y2 |U0 , U2 ) − ξ(δ) {z Pj (129b) A PPENDIX C P ROOF OF L EMMA 4 (Leak) Recall that P1 (Cn ) denotes the error probability in decoding (W1 , I1 ) from (Mp , M11 , M22 , W2 , I2 , Y2 ) by means of the typicality test from (69) with respect to the fixed code Cn ∈ Cn . The analysis in Appendix B shows that as long as (71) holds, we have (Leak) EP1 (Cn ) ≤ κ(n, δ, δ ′ ) (130) where limn→∞ κ(n, δ, δ ′ ) = 0 for all 0 < δ ′ < δ. As a consequence, we have H(W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 , Cn ) ≤ H(W1 , I1 |Mp , M11 , M22 , W2 , I2 , Y2 ) (a) ≤ 1 + n · κ(n, δ, δ ′ )n(R̃1 + R1′ ) (131) where (a) is because conditioning cannot increase entropy, while (b) uses Fano’s inequality and (130). Setting ζ1 (n, δ, δ ′ ) , n1 + κ(n, δ, δ ′ )(R̃1 + R1′ ) completes the proof. A PPENDIX D P ROOF OF L EMMA 5 Define the indicator function E = 1A , where o n (132) A = (U0 , U1 , U2 , Y2 ) ∈ / Tδn (QU0 ,U1 ,U2 ,Y2 )    and note that P E = 1 ≤ P1 E + P1 D0c ∩ E c , where E and D0 are defined in (118) and (119a), respectively, from 22 H(Y1 |U0 , U1 , U2 , E = 0, Cn ) (a) = H(Y1 |U0 , U1 , U2 , E = 0) X PU0 ,U1 ,U2 |E (u0 , u1 , u2 |0)H(Y2 |U0 = u0 , U1 = u1 , U2 = u2 , E = 0) = (u0 ,u1 ,u2 ) ∈Tδn (QU0 ,U1 ,U2 ) = X PU0 ,U1 ,U2 |E (u0 , u1 , u2 |0) = X PU0 ,U1 ,U2 |E (u0 , u1 , u2 |0) X PU0 ,U1 ,U2 |E (u0 , u1 , u2 |0) ≥ n· X H(Y2,i |U0,i = u0,i , U1,i = u1,i , U2,i = u2,i ) X νu0 ,u1 ,u2 (u0 , u1 , u2 )H(Y2 |U0 = u0 , U1 = u1 , U2 = u2 ) (u0 ,u1 ,u2 ) ∈U0 ×U1 ×U2 (u0 ,u1 ,u2 ) ∈Tδn (QU0 ,U1 ,U2 ) (c) n X i=1 (u0 ,u1 ,u2 ) ∈Tδn (QU0 ,U1 ,U2 ) = H(Y2,i |U0 = u0 , U1 = u1 , U2 = u2 , Y2i−1 , E = 0) i=1 (u0 ,u1 ,u2 ) ∈Tδn (QU0 ,U1 ,U2 ) (b) n X PU0 ,U1 ,U2 |E (u0 , u1 , u2 |0)(1 − δ)H(Y2 |U0 , U1 , U2 ) (u0 ,u1 ,u2 ) ∈Tδn (QU0 ,U1 ,U2 ) = n(1 − δ)H(Y2 |U0 , U1 , U2 ) (137) Appendix B. The analysis in Appendix B shows the existence of a function β̃(n, δ, δ ′ ), such that    P E = 1 = P1 E + P1 D0c ∩ E c ≤ β̃(n, δ, δ ′ ) (133) where 0 < δ ′ < δ and limn→∞ β̃(n, δ, δ ′ ) = 0 for all such values of δ and δ ′ . Furthermore, limn→∞ β̃(n, δn , δn′ ) = 0 for sequences {δn }n∈N and {δn′ }n∈N that decay sufficiently slow to zero with n. We now expand the mutual information term from the LHS of (73) as follows I(U1 ; Y2 |U0 , U2 , Cn ) ≤ I(U1 , E; Y2 |U0 , U2 , Cn ) = I(E; U2 |U0 , , U2 , Cn ) + I(U1 ; Y2 |U0 , U2 , E, Cn ) (a) ≤ 1+ 1 X j=0  P E = j I(U1 ; Y2 |U0 , U2 , E = j, Cn ). (134) where (a) is because E is binary and the entropy function is non-negative. Note that  P E = 1 I(U1 ; Y2 |U0 , U2 , E = 1, Cn )  ≤ P E = 1 H(Y2 |E = 1, Cn ) ≤ β̃(n, δ, δ ′ ) · n log |Y2 |. (135) where (a) uses (133). For the mutual information term conditioned on E = 0, we first have H(Y2 |U0 , U2 , E = 0, Cn ) ≤ H(Y2 |U0 , U2 , E = 0) X QU0 ,U2 |E (u0 , u2 |0)H(Y2 |U0 = u0 , U2 = u2 , E = 0) = (u0 ,u2 ) ∈Tδn (QU0 ,U2 ) ≤ nH(Y2 |U0 , U2 ) (136) where the last inequality is because for every (u0 , u2 ) ∈ Tδn (QU0 ,U2 ) the support of the conditional PMF PY2 |U0 =u0 ,U2 =u2 ,E=0 is upper bounded by the size of the conditional typical set Tδn (QU0 ,U2 ,Y2 |u0 , u2 ), which is upper bounded by 2nH(Y2 |U0 ,U2 )(1+δ) . 
This step also relies on the entropy being maximized by the uniform distribution and the logarithm being a monotonically increasing function. For the other (subtracted) entropy term, we have (137) given at the top of this page, where (a) is because Y2 − (U0 , U1 , U2 ) − Cn forms a Markov chain, (b) follows since given (U0,i , U1,i , U2,i ), Y2,i is independent of all other random variables, while (c) is by the definition of letter-typical sequences from (3). Inserting (135), (136) and (137) into (134) gives I(U1 ; Y2 |U0 , U2 , Cn ) ≤ nI(U1 ; Y2 |U0 , U2 ) + nζ2 (n, δ, δ ′ ) (138) where 1 ζ2 (n, δ, δ ′ ) = + δH(Y2 |U0 , U1 , U2 ) + β̃(n, δ, δ ′ ) (139) n as needed. A PPENDIX E P ROOF OF L EMMA 6 Rewriting the mutual information term of interest as a difference of entropies, we have I(U1 ; M22 , W2 , I2 |Mp , Cn ) = H(U1 |Mp , Cn ) − H(U1 |Mp , M22 , W2 , I2 , Cn ). (140) Since U0 is defined by (Mp , Cn ) we clearly have, H(U1 |Mp , Cn ) ≤ H(U1 |U0 , Cn ). (141) 23 n D̃ , ∃(m̃p , m̃22 , w̃2 , ĩ2 ) 6= (Mp , M22 , W2 , I2 ), Next, since (Mp , M22 , W2 , I2 , Cn ) determines both U0 and U2 , we write the subtracted entropy term as H(U1 |Mp , M22 , W2 , I2 , Cn ) = H(U1 |U0 , U2 , Mp , M22 , W2 , I2 , Cn ) = H(U1 |U0 , U2 , Cn )−I(U1 ; Mp , M22 , W2 , I2 |U0 , U2 , Cn ) ≥ H(U1 |U0 , U2 , Cn ) − H(Mp , M22 , W2 , I2 |U0 , U2 , Cn ). (142) We now upper bound H(Mp , M22 , W2 , I2 |U0 , U2 , Cn ) by a vanishing term times the blocklength n. Let F = 1D̃ be the indicator function to the event D̃ defined in (143) at the top of this page. Standard error probability analysis of random codes shows that    ′ P F = 1 = P D̃ ≤ 2n Rp +R22 +R̃2 +R2 −H(U0 ,U2 )+α̃(δ) (144) where α̃(δ) → 0 as δ → 0. Consequently, taking Rp + R22 + R̃2 + R2′ < H(U0 , U2 ) − α̃(δ) (145)  results in P F = 1 ≤ κ̃(n, δ) with limn→∞ κ̃(n, δ) = 0 for every δ > 0. Next, note that (145) holds on account of (68c) and (68e) (adding (68c) and (68e) results in a tighter bound on the same rates) and consider the following: H(Mp , M22 , W2 , I2 |U0 , U2 , Cn ) ≤ H(Mp , M22 , W2 , I2 |U0 , U2 ) (a) ≤ 1 + H(Mp , M22 , W2 , I2 |U0 , U2 , F )  = 1 + P F = 0 H(M  p , M22 , W2 , I2 |U0 , U2 , F = 0) + P F = 1 H(Mp , M22 , W2 , I2 |U0 , U2 , F = 1) (b) ≤ 1 + H(Mp , M22 , W2 , I2 |U  0 , U2 , F = 0) + P F = 1 · n(Rp + R22 + R̃2 + R2′ ) i (c) h ≤ 2 1 + n · κ̃(n, δ)(Rp + R22 + R̃2 + R2′ ) (1) = nζ̃3 (n, δ) (146) where 1 + κ̃(n, δ)(Rp + R22 + R̃2 + R2′ ). (147) n In the above derivation (a) follows because the uniform distribution maximizes entropy and since F is binary, (b) upper bounds the first entropy term by the logarithm of the support size, while (c) uses Fano’s inequality. Inserting (141), (142) and (146) into (140) gives (1) ζ̃3 (n, δ) , (1) I(U1 ;M22 ,W2 , I2 |Mp ,Cn) ≤ I(U1 ;U2 |U0 ,Cn)+nζ̃3 (n,δ). (148) To complete the proof, it suffices to show that there exists (2) a function ζ̃3 (n, δ, δ ′ ) that satisfies the same properties as ′ ζ3 (n, δ, δ ) from the statement of Lemma 6 for which (2) I(U1 ; U2 |U0 , Cn ) ≤ nI(U1 ; U2 |U0 ) + nζ̃3 (n, δ, δ ′ ). (149) U0 (m̃p ) = U0 and U2 (m̃p , m̃22 , w̃2 , ĩ2 ) = U2 o (143) This can be established by arguments similar to those presented in the proof of Lemma 5 and we therefore omit the details. Combining (148) with (149) and setting ζ3 (n, δ, δ ′ ) , (1) (2) ζ̃3 (n, δ) + ζ̃3 (n, δ, δ ′ ) completes the proof. R EFERENCES [1] C. E. Shannon. Communication theory of secrecy systems. Bell Sys. Techn., 28(4):656715, Oct. 1949. [2] A. D. Wyner. The wire-tap channel. Bell Sys. Techn., 54(8):1355–1387, Oct. 
1975. [3] I. Csiszár and J. Körner. Broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 24(3):339–348, May 1978. [4] R. Liu, I. Maric, P. Spasojević, and R. D. Yates. Discrete memoryless interference and broadcast channels with confidential messages: Secrecy rate regions. IEEE Trans. Inf. Theory, 54(6):2493–2507, Jun. 2008. [5] Y. Zhao, P. Xu, Y. Zhao, W. Wei, and Y. Tang. Secret communications over semi-deterministic broadcast channels. In Fourth Int. Conf. Commun. and Netw. in China (CHINACOM), Xi’an, China, Aug. 2009. [6] W. Kang and N. Liu. The secrecy capacity of the semi-deterministic broadcast channel. In Proc. Int. Symp. Inf. Theory, Seoul, Korea, Jun.Jul. 2009. [7] Z. Goldfeld, G. Kramer, H. H. Permuter, and P. Cuff. Strong secrecy for cooperative broadcast channels. IEEE Trans. Inf. Theory, 63(1):469– 495, Jan. 2017. [8] E. Ekrem and S. Ulukus. Secrecy in cooperative relay broadcast channels. IEEE Trans. Inf. Theory, 57(1):137–155, Jan. 2011. [9] R. Liu and H. Poor. Secrecy capacity region of a multiple-antenna Gaussian broadcast channel with confidential messages. IEEE Trans. Inf. Theory, 55(3):1235–1249, Mar. 2009. [10] T. Liu and S. Shamai. A note on the secrecy capacity of the multipleantenna wiretap channel. IEEE Trans. Inf. Theory, 6(6):2547–2553, Jun. 2009. [11] R. Liu, T. Liu, H. V. Poor, and S. Shamai. Multiple-input multipleoutput Gaussian broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 56(9):4215–4227, Sep. 2010. [12] A. Khisti and G. W. Wornell. Secure transmission with multiple antennas - part II: The MIMOME channel. IEEE Trans. Inf. Theory, 56(11):5515– 5532, Nov. 2010. [13] E. Ekrem and S. Ulukus. The secrecy capacity region of the Gaussian MIMO multi-receiver wiretap channel. IEEE Trans. Inf. Theory, 57(4):2083–2114, Apr. 2011. [14] F. Oggier and B. Hassibi. The secrecy capacity of the MIMO wiretap channel. IEEE Trans. Inf. Theory, 57(8):4961–4972, Aug. 2011. [15] E. Ekrem and S. Ulukus. Secrecy capacity of a class of broadcast channels with an eavesdropper. EURASIP J. Wireless Commun. and Netw., 2009(1):1–29, Mar. 2009. [16] G. Bagherikaram, A. Motahari, and A. Khandani. Secrecy capacity region of Gaussian broadcast channel. In 43rd Annual Conf. on Inf. Sci. and Sys. (CISS) 2009, pages 152–157, Baltimore, MD, US, Mar. 2009. [17] M. Benammar and P. Piantanida. Secrecy capacity region of some classes of wiretap broadcast channels. IEEE Trans Inf. Theory, 61(10):5564–5582, Oct. 2015. [18] K. Marton. A coding theorem for the discrete memoryless broadcast channel. IEEE Trans. Inf. Theory, 25(3):306–311, May 1979. [19] A. El Gamal and E. C. Van Der Meulen. A proof of Marton’s coding theorem for the discrete memoryless broadcast channel. IEEE Trans. Inf. Theory, 27(1):120–122, Jan. 1981. [20] A. El Gamal and Y.-H. Kim. Network Information Theory. Cambridge University Press, 2011. [21] S. I. Gelfand and M. S. Pinsker. Capacity of a broadcast channel with one deterministic component. Prob. Pered. Inf. (Problems of Inf. Transm.), 16(1):17–25, Jan-Mar 1980. [22] C. Nair. A note on outer bounds for broadcast channel. Presented at Int. Zurich Seminar, Jan. 2011. Available on ArXiV at http://arXiv.org/abs/1101.0640. [23] G. Kramer Y. Liang and S. Shamai. Capacity outer bounds for broadcast channels. In IEEE Inf. Theory Workshop (ITW-2008), Porto, Portugal, 2008. 24 [24] Y. Liang. Multiuser communications with relaying and user cooperation. PhD thesis, Dept. of Electrical and Computer Eng., Univ. 
Illinois at Urbana-Champaign, Illinois, 2005.
[25] C. Nair and A. El Gamal. An outer bound to the capacity region of the broadcast channel. IEEE Trans. Inf. Theory, 53(1):350–355, Jan. 2007.
[26] C. Nair and V. W. Zizhou. On the inner and outer bounds for 2-receiver discrete memoryless broadcast channels. In Inf. Theory and Applic. Workshop, San Diego, California, US, Jan. 27-Feb. 1 2008.
[27] Z. Goldfeld, G. Kramer, and H. H. Permuter. Cooperative broadcast channels with a secret message. In Proc. Int. Symp. Inf. Theory (ISIT-2015), pages 1342–1346, Hong Kong, China, Jun. 2015.
[28] J. Körner and K. Marton. General broadcast channels with degraded message sets. IEEE Trans. Inf. Theory, 23(1):60–64, Jan. 1977.
[29] E. C. van der Meulen. Random coding theorems for the general discrete memoryless broadcast channel. IEEE Trans. Inf. Theory, IT-21(2):180–190, May 1975.
[30] S. I. Gelfand. Capacity of one broadcast channel. Probl. Peredachi Inf., 13(3):106–108, Jul./Sep. 1977.
[31] J. L. Massey. Applied Digital Information Theory. ETH Zurich, Zurich, Switzerland, 1980-1998.
[32] M. H. Yassaee. One-shot achievability via fidelity. In Proc. Int. Symp. Inf. Theory (ISIT-2015), pages 301–305, Hong Kong, China, Jun. 2015.
[33] M. H. Yassaee, M. R. Aref, and A. Gohari. Achievability proof via output statistics of random binning. IEEE Trans. Inf. Theory, 60(11):6760–6786, Nov. 2014.
[34] E. Song, P. Cuff, and V. Poor. The likelihood encoder for lossy compression. IEEE Trans. Inf. Theory, 62(4):1836–1849, Apr. 2016.
[35] Z. Goldfeld, P. Cuff, and H. H. Permuter. Wiretap channels with random states non-causally available at the encoder. Submitted for publication to IEEE Trans. Inf. Theory, 2016. https://arxiv.org/abs/1608.00743.
[36] M. H. Yassaee, M. R. Aref, and A. Gohari. A technique for deriving one-shot achievability results in network information theory. In Proc. Int. Symp. Inf. Theory (ISIT-2013), Istanbul, Turkey, Jul. 2013. Full version available on ArXiv at http://arxiv.org/abs/1303.0696.
[37] Y. Liang and G. Kramer. Rate regions for relay broadcast channels. IEEE Trans. Inf. Theory, 53(10):3517–3535, Oct. 2007.
[38] Y. Liang, G. Kramer, and H. V. Poor. On the equivalence of two achievable regions for the broadcast channel. IEEE Trans. Inf. Theory, 57(1):95–100, Jan. 2011.
[39] H. Weingarten, Y. Steinberg, and S. Shamai. The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory, 52(9):3936–3964, Sep. 2006.
[40] R. Liu, T. Liu, H. V. Poor, and S. Shamai. Multiple-input multiple-output Gaussian broadcast channels with confidential messages. IEEE Trans. Inf. Theory, 56(9):4215–4227, Sep. 2010.
[41] E. Ekrem and S. Ulukus. Capacity region of Gaussian MIMO broadcast channels with common and confidential messages. IEEE Trans. Inf. Theory, 58(9):5669–5680, Sep. 2012.
[42] Y. Geng and C. Nair. The capacity region of the two-receiver vector Gaussian broadcast channel with private and common messages. IEEE Trans. Inf. Theory, 60(4):2087–2104, Apr. 2014.
[43] Z. Goldfeld and H. H. Permuter. MIMO Gaussian broadcast channels with common, private and confidential messages. Submitted to IEEE Trans. Inf. Theory, 2016.
[44] A. Gohari, C. Nair, and V. Anantharam. Improved cardinality bounds on the auxiliary random variables in Marton's inner bound. In Proc. Int. Symp. Inf. Theory (ISIT-2013), Istanbul, Turkey, Jul. 2013.
[45] J. Hou and G. Kramer. Effective secrecy: Reliability, confusion and stealth. In IEEE Int. Symp. Inf. Theory, Honolulu, HI, USA, Jun.-Jul.
Ziv Goldfeld (S’13) received his B.Sc. (summa cum laude) and M.Sc. (summa cum laude) degrees in Electrical and Computer Engineering from Ben-Gurion University, Israel, in 2012 and 2014, respectively. He is currently a student in the direct Ph.D. program for honors students in Electrical and Computer Engineering at the same institution. Between 2003 and 2006, he served in the intelligence corps of the Israeli Defense Forces. Ziv is a recipient of several awards, among them the Dean’s List Award, the Basor Fellowship, the Lev-Zion Fellowship, the IEEEI-2014 Best Student Paper Award, a Minerva Short-Term Research Grant (MRG), and a Feder Family Award in the national student contest for outstanding research work in the field of communications technology.
Gerhard Kramer (S’91-M’94-SM’08-F’10) received the Dr. sc. techn. (Doktor der technischen Wissenschaften) degree from the Swiss Federal Institute of Technology (ETH), Zurich, in 1998. From 1998 to 2000, he was with Endora Tech AG, Basel, Switzerland, as a Communications Engineering Consultant. From 2000 to 2008, he was with Bell Labs, Alcatel-Lucent, Murray Hill, NJ, as a Member of Technical Staff. He joined the University of Southern California (USC), Los Angeles, in 2009. Since 2010, he has been a Professor and Head of the Institute for Communications Engineering at the Technical University of Munich (TUM), Munich, Germany. Dr. Kramer served as the 2013 President of the IEEE Information Theory Society. He has won several awards for his work and teaching, including an Alexander von Humboldt Professorship in 2010 and a Lecturer Award from the Student Association of the TUM Electrical and Computer Engineering Department in 2015. He has been a member of the Bavarian Academy of Sciences and Humanities since 2015.
Haim H. Permuter (M’08-SM’13) received his B.Sc. (summa cum laude) and M.Sc. (summa cum laude) degrees in Electrical and Computer Engineering from Ben-Gurion University, Israel, in 1997 and 2003, respectively, and the Ph.D. degree in Electrical Engineering from Stanford University, California, in 2008. Between 1997 and 2004, he was an officer at a research and development unit of the Israeli Defense Forces. Since 2009 he has been with the Department of Electrical and Computer Engineering at Ben-Gurion University, where he is currently an associate professor. Prof. Permuter is a recipient of several awards, among them the Fulbright Fellowship, the Stanford Graduate Fellowship (SGF), the Allon Fellowship, and the U.S.-Israel Binational Science Foundation Bergmann Memorial Award. Haim is currently serving on the editorial board of the IEEE Transactions on Information Theory.
Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter I. Corke ∗†
arXiv:1705.05116v1 [cs.RO] 15 May 2017
∗ FZ, JL, MM, PIC are with the Australian Centre for Robotic Vision (ACRV), Queensland University of Technology (QUT), Brisbane, Australia. [email protected]
† This research was conducted by the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016). Additional computational resources and services were provided by the HPC and Research Support Group at QUT.

Abstract
This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.

1. Introduction
Recent work has demonstrated robotic tasks based directly on real image data using deep learning, for example robotic grasping [2]. However, these methods require large-scale real-world datasets, which are expensive, slow to acquire and limit the general applicability of the approach. To reduce the cost of real dataset collection, we used simulation to learn robotic planar reaching skills using the DeepMind DQN [3]. The DQN showed impressive results in simulation, but exhibited brittleness when transferred to a real robot and camera [4]. By introducing a bottleneck to separate the DQN into perception and control modules for independent training, the skills learned in simulation (Fig. 1A) were easily adapted to real scenarios (Fig. 1B) by using just 1418 real-world images [5]. However, there is still a performance drop compared to the control module network with ideal perception.
To reduce the performance drop, we propose fine-tuning the combined network to improve hand-eye coordination. Preliminary studies show that a naive fine-tuning using Q-learning does not give the desired result [5]. To tackle the problem, we introduce a novel end-to-end fine-tuning method using weighted losses in this work, which significantly improves the performance of the combined network.

Figure 1. A technique to improve hand-eye coordination for better performance when transferring deep visuo-motor policies for a planar reaching task from simulated (A) to real environments (B).

Figure 2. A modular neural network is used to predict Q-values given some raw pixel inputs. It is composed of perception and control modules. The perception module, which consists of three convolutional layers and a fully connected (FC) layer, extracts the physically relevant information (Θ in the bottleneck) from a single image. The control module predicts action Q-values given Θ. The action with a maximum Q-value is executed. (In the diagram, an 84×84 image I passes through 7×7, 4×4 and 3×3 convolutions with 64 filters each and an FC layer to the 5-unit bottleneck Θ; the control module uses FC layers of 400, 300 and 9 units to output the Q-values; the perception loss Lp is applied at the bottleneck and the task loss Lq at the Q-values.) The architecture is similar to that in [5], but has an additional end-to-end fine-tuning process using weighted perception and task losses. Note that the values in Θ are normalized to the interval [0, 1].

2. Methodology
We consider the planar reaching task, which is defined as controlling a 3 DoF robot arm (Baxter robot’s left arm) so that in operational space its end-effector position x ∈ R² moves to the position of the target x∗ in a vertical plane (ignoring orientation). The reaching controller adjusts the robot configuration (joint angles q ∈ R³) to minimize the error between the robot’s current and target position, i.e., ‖x − x∗‖. At each time step one of 9 possible actions a ∈ A is chosen to change the robot configuration: 3 per joint – increasing or decreasing by a constant amount (0.04 rad) or leaving it unchanged. An agent is required to learn to reach using only raw-pixel visual inputs I from a monocular camera and their accompanying rewards r. The network has the same architecture and training method as in [5], but with an additional end-to-end fine-tuning using weighted losses, as shown in Fig. 2.
The perception network is first trained to estimate the scene configuration Θ = [x∗ q] ∈ R⁵ from a raw-pixel image I using the quadratic loss function
Lp = (1/(2m)) Σ_{j=1}^{m} ‖y(I^j) − Θ^j‖²,
where y(I^j) is the prediction of Θ^j for I^j and m is the number of samples. The control network is trained using K-GPS [5], where network weights are updated using the Bellman equation, which is equivalent to the loss function
Lq = (1/(2m)) Σ_{j=1}^{m} ( Q(Θ^j_t, a^j_t) − (r^j_t + γ max_{a^j_{t+1}} Q(Θ^j_{t+1}, a^j_{t+1})) )²,
where Q(Θ^j_t, a^j_t) is the sum of future expected rewards Σ_{k=0}^{∞} γ^k r^j_{t+k} when taking action a^j_t in state Θ^j_t, and γ is a discount factor applied to future rewards.
After separate training for perception and control individually, an end-to-end fine-tuning is conducted for the combined network (perception + control) using weighted task (Lq) and perception (Lp) losses. The control network is updated using only Lq, while the perception network is updated using the weighted loss
L = β Lp + (1 − β) L^BN_q,
where L^BN_q is a pseudo-loss which reflects the loss of Lq in the bottleneck (BN), and β ∈ [0, 1] is a balancing weight. From the backpropagation algorithm [1], we can infer that δL = β δLp + (1 − β) δL^BN_q, where δL is the gradient resulting from L, and δLp and δL^BN_q are the gradients resulting from Lp and L^BN_q respectively (the latter being equivalent to the gradient resulting from Lq in the perception module).
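To make the weighted-loss update concrete, the following is a minimal PyTorch-style sketch of one fine-tuning step. It is an illustration rather than the authors' released code: module and variable names (perception_net, control_net, q_targets, etc.) are assumptions, the Bellman targets q_targets are assumed to be precomputed, and the optimizer is assumed to hold the parameters of both modules. The control module receives only the task-loss gradient, while the perception module receives β·δLp + (1 − β)·δLq as described above.

import torch

def finetune_step(perception_net, control_net, optimizer,
                  images, theta_targets, q_targets, actions, beta=0.8):
    # One end-to-end fine-tuning step with weighted losses (illustrative sketch).
    optimizer.zero_grad()

    theta = perception_net(images)             # bottleneck estimate, shape (B, 5)
    q_values = control_net(theta)              # Q-values, shape (B, 9)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)

    l_q = 0.5 * torch.mean((q_taken - q_targets) ** 2)    # task loss Lq
    l_p = 0.5 * torch.mean((theta - theta_targets) ** 2)  # perception loss Lp

    # Backpropagate the task loss first; keep the graph for the perception loss.
    l_q.backward(retain_graph=True)

    # Scale the task-loss gradients inside the perception module by (1 - beta).
    for p in perception_net.parameters():
        if p.grad is not None:
            p.grad.mul_(1.0 - beta)

    # Accumulate the weighted perception-loss gradients (beta * dLp).
    (beta * l_p).backward()

    optimizer.step()   # control module: dLq only; perception module: weighted sum
    return l_p.item(), l_q.item()

The two backward passes are only one way to realize the weighting; any implementation that scales the task-loss gradient entering the bottleneck by (1 − β) and adds β times the perception-loss gradient is equivalent.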
3. Experiments and Results
We evaluated the feasibility of the proposed approach using the metrics of Euclidean distance error d (between the end-effector and the target) and average accumulated reward R̄ (a bigger accumulated reward means a faster and closer reach to the target) in 400 simulated trials. For comparison, we evaluated three networks: Initial, Fine-tuned and CR. Initial is a combined network without end-to-end fine-tuning, which is labelled as EE2 in [5] (comprising FT75 and CR). FT75 and CR are the selected perception and control modules which have the best performance individually. Fine-tuned is obtained by fine-tuning Initial using the proposed approach. CR works as a baseline indicating the performance upper limit.
In fine-tuning, β = 0.8; we used a learning rate between 0.01 and 0.001, a mini-batch size of 64 and 256 for task and perception losses respectively, and an exploration probability of 0.1 for K-GPS. These parameters were empirically selected. To make sure that the perception module remembers the skills for both simulated and real scenarios, the 1418 real samples were also used to obtain δLp. Similar to FT75, 75% of the samples in a mini-batch were from real scenarios, i.e., at each weight-updating step, 192 extra real samples were used in addition to the 64 simulated samples in the mini-batch for δLq.
Results are summarized in Fig. 3 and Table 1. dmed and dQ3 are the median and third quartile of d. The error distance in pixels in the 84×84 input image is also listed. We can see that Fine-tuned achieved a much better performance (22.4% smaller dmed and 96.2% bigger R̄) than Initial. The fine-tuned performance is even very close to that of the control module (CR), which controls the arm using the ground-truth Θ as sensing input. We also did the same evaluations in 20 real-world trials on Baxter, and achieved similar results. The experimental results show the feasibility of the proposed fine-tuning approach. Improved hand-eye coordination in modular deep visuo-motor policies is possible due to fine-tuning with weighted losses. The adaptation to real scenarios can still be kept by presenting (a mix of simulated and) real samples to compute the perception loss.

Figure 3. Box-plots of the Euclidean distance errors d [cm] of the different networks (Initial, Fine-tuned and CR), with median values displayed. The crosses represent outliers.

Table 1. Planar Reaching Performance
Nets         dmed [cm]   dmed [pixels]   dQ3 [cm]   dQ3 [pixels]   R̄
Initial      4.598       1.929           6.150      2.581          0.319
Fine-tuned   3.568       1.497           4.813      2.020          0.626
CR           3.449       1.447           4.330      1.817          0.761

References
[1] Y. LeCun. A theoretical framework for back-propagation. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proceedings of the 1988 Connectionist Models Summer School, pages 21–28, CMU, Pittsburgh, Pa, 1988. Morgan Kaufmann.
[2] S. Levine, P. P. Sampedro, A. Krizhevsky, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. In International Symposium on Experimental Robotics (ISER), 2016.
[3] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[4] F. Zhang, J. Leitner, M. Milford, B. Upcroft, and P. Corke. Towards vision-based deep reinforcement learning for robotic motion control. In Australasian Conference on Robotics and Automation (ACRA), 2015.
[5] F. Zhang, J. Leitner, B. Upcroft, and P. Corke. Transferring vision-based robotic reaching skills from simulation to real world. Technical report, 2017.
FEAST: An Automated Feature Selection Framework for Compilation Tasks Pai-Shun Ting arXiv:1610.09543v1 [] 29 Oct 2016 Department of Electrical Engineering and Computer Science University of Michigan Ann Arbor, Michigan 48109, USA Email: [email protected] Chun-Chen Tu Department of Statistics University of Michigan Ann Arbor, Michigan 48109, USA Email: [email protected] Pin-Yu Chen IBM T. J. Watson Research Center Yorktown Heights, New York 10598, USA Email: [email protected] Ya-Yun Lo Adobe Systems San Francisco, California 94103, USA Email: [email protected] Shin-Ming Cheng Department of Computer Science and Information Engineering National Taiwan University of Sciecne and Technology Taipei 106, Taiwan Email: [email protected] Abstract—Modern machine-learning techniques greatly reduce the efforts required to conduct high-quality program compilation, which, without the aid of machine learning, would otherwise heavily rely on human manipulation as well as expert intervention. The success of the application of machine-learning techniques to compilation tasks can be largely attributed to the recent development and advancement of program characterization, a process that numerically or structurally quantifies a target program. While great achievements have been made in identifying key features to characterize programs, choosing a correct set of features for a specific compiler task remains an ad hoc procedure. In order to guarantee a comprehensive coverage of features, compiler engineers usually need to select excessive number of features. This, unfortunately, would potentially lead to a selection of multiple similar features, which in turn could create a new problem of bias that emphasizes certain aspects of a program’s characteristics, hence reducing the accuracy and performance of the target compiler task. In this paper, we propose FEAture Selection for compilation Tasks (FEAST), an efficient and automated framework for determining the most relevant and representative features from a feature pool. Specifically, FEAST utilizes widely used statistics and machine-learning tools, including LASSO, sequential forward and backward selection, for automatic feature selection, and can in general be applied to any numerical feature set. This paper further proposes an automated approach to compiler parameter assignment for assessing the performance of FEAST. Intensive experimental results demonstrate that, under the compiler parameter assignment task, FEAST can achieve comparable results with about 18% of features that are automatically selected from the entire feature pool. We also inspect these selected features and discuss their roles in program execution. Index Terms—Compiler optimization, feature selection, LASSO, machine learning, program characterization I. I NTRODUCTION Program characterization, a process to numerically or structurally quantify a target program, allows modern machinelearning techniques to be applied to compiler tasks, since most machine-learning methods assume numerical inputs for both training and testing data. Program characterization is usually achieved by extracting from the target program a set of static features and/or a set of dyanamic features. Static features can be obtained directly from the source code or intermediate TABLE I: List of all original 56 static features from cTuning Compiler Collection [1]. 
ft1 ft2 ft3 ft4 ft5 ft6 ft7 ft8 ft9 ft10 ft11 # basic blocks in the method # basic blocks with a single successor # basic blocks with two successors ft29 # basic blocks with more then two successors # basic blocks with a single predecessor # basic blocks with two predecessors # basic blocks with more then two predecessors # basic blocks with a single predecessor and a single successor # basic blocks with a single predecessor and two successors # basic blocks with a two predecessors and one successor # basic blocks with two successors and two predecessors ft32 ft30 ft31 ft33 ft34 ft35 ft36 ft37 ft38 ft39 ft12 # basic blocks with more then two successors and more then two ft40 ft13 # basic blocks with # instructions less then 15 ft41 ft14 ft15 ft16 ft17 ft18 ft19 ft20 ft21 ft22 ft23 ft24 ft25 ft26 ft27 ft28 # basic blocks with # instructions in the interval [15, 500] # basic blocks with # instructions greater then 500 # edges in the control flow graph # critical edges in the control flow graph # abnormal edges in the control flow graph # direct calls in the method # conditional branches in the method # assignment instructions in the method # binary integer operations in the method # binary floating point operations in the method #instructions in the method Average of # instructions in basic blocks Average of # phi-nodes at the beginning of a basic block Average of arguments for a phi-node predecessors # basic blocks with no phi nodes ft42 ft43 ft44 ft45 ft46 ft47 ft48 ft49 ft50 ft51 ft52 ft53 ft54 ft55 ft56 # basic blocks with phi nodes in the interval [0, 3] # basic blocks with more then 3 phi nodes # basic block where total # arguments for all phi-nodes is in greater then 5 # basic block where total # arguments for all phi-nodes is in the interval [1, 5] # switch instructions in the method # unary operations in the method # instruction that do pointer arithmetic in the method # indirect references via pointers (”*” in C) # times the address of a variables is taken (”&” in C) # times the address of a function is taken (”&” in C) # indirect calls (i.e. done via pointers) in the method # assignment instructions with the left operand an integer constant in the method # binary operations with one of the operands an integer constant in the method # calls with pointers as arguments # calls with the # arguments is greater then 4 # calls that return a pointer # calls that return an integer # occurrences of integer constant zero # occurrences of 32-bit integer constants # occurrences of integer constant one # occurrences of 64-bit integer constants # references of a local variables in the method # references (def/use) of static/extern variables in the method # local variables referred in the method # static/extern variables referred in the method # local variables that are pointers in the method # static/extern variables that are pointers in the method # unconditional branches in the method representation of the target program, while the procurement of dynamic features usually requires actually executing the target program to capture certain run-time behavior. With current intensive research on program characterization, new features, both static and dynamic, are continuously being proposed. An example of a static feature set is shown in Table I, which lists all the 56 original static features extracted by Milpost GCC from cTuning Compiler Collection [1], with many of them being different yet non-independent features. For example. ft. 8 implies ft. 2 and ft. 
5, meaning that these features are correlated. In a compiler task, determining which features to use or to include for program characterization is of considerable importance, since different features can have different effects, and hence different resulting performance, on a specific compiler task. Intuitively, including as many features as possible for program characterization seems to be a reasonable approach when considering feature selection. This, however, essentially increases the dimensionality of the feature space, and thus potentially introduces extra computational overhead. Also, some features may not be relevant to the target compiler task, and therefore behave as noise in program characterization, harming the resulting performance. Furthermore, many features, though different, capture very similar characteristics of a program. The similarities among features can produce a bias that overemphasizes certain aspects of a program’s characteristics, and consequently lead to an inaccurate result for a target compiler task. Due to the aforementioned reasons, many machine-learning-aided compiler tasks still heavily rely on expert knowledge for determining an appropriate set of features to use. This, unfortunately, hinders the full automation of compiler tasks using machine learning, which is originally deemed a tool to lower the involvement of field expertise and engineer intervention.
This paper proposes FEAture Selection for compilation Tasks (FEAST), an automated framework of feature selection for a target compiler task. FEAST incorporates into its framework a variety of feature selection methods, including the well-known LASSO [2] and sequential forward and backward selection. Given a compiler task and a list of feature candidates, FEAST first samples a small set of available programs as training data, and then uses the integrated feature selection methods to choose the M most appropriate or relevant features for this specific compiler task. The remaining programs can then be handled using only the chosen features.
To demonstrate the feasibility and practicality of FEAST, we assess its performance on a proposed method for the assignment of compiler parameters. Modern compilers usually embed a rich set of tuning parameters to provide hints and guidance for their optimization processes. To obtain an optimal compiled executable program, exhaustive trials over all combinations of tuning parameters of the utilized compiler are required. This is, in general, excessively time consuming and hence infeasible due to its combinatorial nature. As with many other compiler tasks, in practice, expert intervention is frequently triggered and heuristics based on expertise and experience are adopted as a general guideline for tuning parameters [3]. Unfortunately, fine-tuning a compiler’s parameters may require multiple compilation processes, and can take up to weeks or months to complete for a moderate program size, entailing a huge burden on software engineering. In this work, as a test case for FEAST, we develop an automated method for assigning “good” parameters to a pool of programs using machine-learning techniques. We then show that using FEAST, the dimension of the feature space can be greatly reduced to a small set of relevant features, while maintaining comparable resulting performance.

Fig. 1: Flow diagrams of the proposed methods.
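To make the combinatorial cost of exhaustive tuning concrete, the following small Python sketch enumerates and times every on/off combination of a handful of compiler flags. It is purely illustrative: the flag list, the gcc invocation and the benchmark binary are placeholders and are not the compiler configuration used in this paper; the point is only that the number of compile-and-run cycles grows as 2 to the number of flags.

import itertools
import subprocess
import time

FLAGS = ["-funroll-loops", "-ftree-vectorize", "-finline-functions"]  # placeholder flags

def compile_and_time(source, enabled_flags):
    # Compile with the chosen flags, then time one run of the resulting binary.
    subprocess.run(["gcc", "-O2", *enabled_flags, source, "-o", "bench.out"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench.out"], check=True)
    return time.perf_counter() - start

def exhaustive_search(source):
    best_time, best_flags = float("inf"), None
    # Every subset of FLAGS: 2**len(FLAGS) compilations and runs.
    for mask in itertools.product([False, True], repeat=len(FLAGS)):
        enabled = [f for f, on in zip(FLAGS, mask) if on]
        t = compile_and_time(source, enabled)
        if t < best_time:
            best_time, best_flags = t, enabled
    return best_flags, best_time

With tens of binary parameters (the experiments in Sec. III use 192 parameter combinations), this brute-force loop quickly becomes impractical, which is exactly the cost the proposed assignment schemes avoid for untrained programs.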
This paper is organized as follows. Sec. II details the proposed FEAST framework as well as the automatic compiler parameter assignment process. Sec. III presents experimental results with detailed analysis. Related work is discussed in Sec. IV. Finally, Sec. V draws some conclusions and outlines future work.

II. FEAST AND COMPILER PARAMETER ASSIGNMENT
This section details the mechanisms of FEAST, and presents the proposed compiler parameter assignment method.

A. FEAST
Given K training programs and a set of p numerical features, FEAST assumes a linear model relating the resulting performance to the feature values:
y = Xβ + β0·1,   (1)
where y ∈ R^{K×1} is the compiled programs’ performance vector, whose i-th entry denotes the performance (measured in some pre-defined metric) of the i-th program in the set of K training programs; X ∈ R^{K×p} is a matrix whose (i, j)-th entry denotes the value of the j-th extracted feature of the i-th program; β ∈ R^{p×1} and β0 ∈ R are the coefficients describing the linear relationship between the features and the resulting performance; and 1 is a K × 1 vector with all elements equal to 1.
FEAST utilizes three widely used feature selection methods, namely LASSO, sequential forward selection (SFS) and sequential backward selection (SBS), all of which pick the M most influential features out of the total p features. Specifically, the elastic net approach for LASSO adopted in FEAST selects features by first solving the optimization problem [2]:
min_β̃ (1/K)‖y − X̃β̃‖₂² + λ‖β̃‖₁,   (2)
where X̃ = [X 1]. The first p elements of the solution β̃ are the coefficient estimates whose magnitudes directly reflect the influence of the corresponding features. The M selected features are chosen as those with the coefficients of largest magnitude. SFS and SBS are other well-known feature selection methods. For SFS, we sequentially or greedily select the most relevant feature from the training programs until we have selected M features. For SBS, we sequentially exclude the most irrelevant feature until there are M features left. We omit algorithmic descriptions of SFS and SBS, and refer interested readers to [4] for implementation details.

B. Compiler Parameter Assignment Algorithms
This section discusses the proposed compiler parameter assignment algorithm and the application of FEAST to this task. Given a compiler with many parameters to set, finding an optimal assignment of parameters that can best compile a target program is a vexing problem. This paper proposes a machine-learning algorithm that automatically assigns “good” parameters to target programs based on the known optimal assignments for the training programs.
The proposed method works in two schemes: an active training scheme and a passive training scheme (see Fig. 1). In the active training scheme, the users can actively acquire the optimal compiler parameters for a subset of programs, while in the passive training scheme, the users are given as prior knowledge a set of programs whose optimal compiler parameters are known. A practical example of the active training scheme is a company that has no prior knowledge about compiler parameter assignment and a very limited budget which only allows it to select a small set of programs for full tuning; the remaining large set of programs, however, has to be quickly and efficiently compiled. For the passive training scheme, a company has a small set of programs with well-tuned compiler parameters, and it would like to quickly find good compiler parameters for other programs.
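The LASSO ranking step of Sec. II-A can be sketched in a few lines. The snippet below uses scikit-learn's Lasso as one convenient way to obtain the coefficient estimates of Eq. (2); the library choice and the regularization strength are illustrative assumptions, not prescribed by the paper.

import numpy as np
from sklearn.linear_model import Lasso

def select_features_lasso(X, y, M, lam=0.1):
    # X: K x p matrix of static feature values, y: length-K performance vector.
    # Fit y ~ X*beta + beta0 with an L1 penalty (the intercept plays the role of beta0).
    model = Lasso(alpha=lam, fit_intercept=True)
    model.fit(X, y)
    # Rank features by |beta_j| and keep the M largest in magnitude.
    ranking = np.argsort(-np.abs(model.coef_))
    return ranking[:M]

# Example: keep the 10 most influential of the 56 static features.
# selected = select_features_lasso(X, y, M=10)

SFS and SBS can be realized analogously by greedily adding (or removing) the feature that most improves (or least degrades) a cross-validated fit until M features remain.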
Note that, in the active training scheme, our proposed method can also automatically choose a good set of candidate programs for full tuning. The remaining section is dedicated to detailing the two preceding schemes with FEAST application to them respectively. Active training scheme. Since acquirement of dynamic features can be very expensive due to the potential need for multiple compilations and iterative tuning, we opt to use Algorithm 1 Active Training Scheme Input: n programs with p static features, number K of training programs for optimization Output: compiler parameter assignment for each untrained program Procedures: 1) Partition n programs into K clusters by K-means clustering 2) For each cluster select one program having the least sum of distances to other programs in the same cluster as a training program. 3) Find the set of optimal compiler parameters of the selected K programs. 4) Use FEAST to perform feature selection on the selected K programs by regressing their optimal execution time with respect to their static features. 5) Repartition the untrained programs based on the similarities computed by the selected features. 6) For each untrained program, assign the compiler parameters of the closest trained program to it based on the selected features. numerical static features in the proposed compiler parameter assignment task. Also, we use the execution time of a compiled program as the measure of performance for that program. Given n programs with p static features, we are granted to choose K ≤ n programs as training samples and acquire the optimal compiler parameters for each training program that optimize the execution time. In order to choose the K programs, we first compute the similarity of each program pair based on the Euclidean distance between the corresponding static feature vectors, and partition the programs into K clusters using K-means clustering. For each cluster, we select one program that has the least average distance to other programs in the same cluster as a training program. Exhaustive trials are then conducted for the training programs to obtain their optimal compiler parameters and the associated execution time. Given the K training programs as well as their execution time, FEAST can then select M features that are most influential to the training programs’ performance (execution time in this case) . We then leverage the selected features to recompute similarities and repartition the programs. Lastly, each untrained program is assigned by the optimal parameters of the most similar training program. See Alg. 1 for detailed algorithm. Passive training scheme. Different from the active training scheme, in addition to n programs with p static features, we are also given K pre-selected training programs. Therefore, the methodology of the passive training scheme is similar to that of the active training scheme, except that the clustering procedures described in active training scheme are no longer required. See Alg. 2 for detailed algorithm. Algorithm 2 Passive Training Scheme Input: n programs with p static features, K given training programs Output: compiler parameter assignment for each untrained program Procedures: 1) Find the set of optimal compiler parameters of the K given training programs. 2) Use FEAST to perform feature selection on the selected K programs by regressing their optimal execution time with respect to their static features. 3) Compute distances between programs based on the selected features. 
4) For each untrained program, assign the compiler parameters of the closest trained program based on the selected features.

III. PERFORMANCE EVALUATION OF COMPILER PARAMETER ASSIGNMENT
We test our implementations using the PolyBench benchmark suite [5], which consists of n = 30 programs. The programs are characterized using p = 56 static features from the cTuning Compiler Collection [1]. For the two proposed training schemes, K trained programs are used for feature selection and compiler parameter assignment, and for each feature selection method (LASSO, SFS, SBS) we select the M = 10 most relevant features.

A. Performance Comparison of the Active and Passive Training Schemes
Fig. 2 shows the program execution time of the active and passive training schemes. The minimal execution time refers to the optimal performance over the 192 possible combinations of compiler parameters of every program. We also show the results for the case where no tunable compiler parameters are enabled. The minimal-time and no-tuning-parameter results are regarded as baselines for comparing the proposed FEAST methods, as their execution time does not depend on the number K of training programs. Furthermore, to validate the utility of FEAST, we also calculate the execution time using all features, i.e., the case where FEAST is disabled. For the three feature selection methods introduced in Sec. II-A (LASSO, SFS, and SBS), we select the top M = 10 important features and use these selected features to compute program similarities and assign compiler parameters. In this setting, we only compare the execution time of the untrained programs, and we omit the computation time required to obtain the optimal compiler parameters associated with the training programs. We will consider the overall execution time of the trained and untrained programs shortly. In Fig. 2, each untrained program is executed once and we sum up the execution time to get the overall execution time under various values of K of trained programs. It is observed that the execution time of both training schemes converges to the minimal execution time as K increases. The curve of the passive training scheme is smoother than that of the active training scheme due to the fact that the former is an averaged result over 1000 trials of randomly selected training programs.

Fig. 2: Program execution time versus the number K of trained programs for (a) the active training scheme and (b) the passive training scheme, comparing no feature selection, Lasso, forward selection, backward selection, the minimal time, and no tuning parameters. Minimal time refers to exhaustive parameter optimization on every program. It can be observed that the execution time of the three feature selection algorithms integrated in FEAST is comparable to that of using all features, which strongly suggests that important features affecting program execution are indeed identified by FEAST.

Fig. 3: Comparison of the active and passive training schemes for Lasso and SFS (execution time versus the number K of trained programs). Error bars represent standard deviation. The average execution time of active training is smaller than that of passive training. The variations in the active training scheme are caused by the random initialization of the K-means clustering procedure, whereas the variations in the passive training scheme are caused by randomness in the selection of training programs.
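The similarity computation and nearest-trained-program assignment used throughout this evaluation (and specified in Algorithms 1 and 2) can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the function and variable names are assumptions, X is the n x p matrix of static feature values, X_sel its restriction to the FEAST-selected columns, and trained_params maps a trained program index to its tuned compiler parameters.

import numpy as np
from sklearn.cluster import KMeans

def pick_training_programs(X, K, seed=0):
    # Active scheme (Alg. 1, steps 1-2): one representative per K-means cluster,
    # chosen as the member with the least summed distance to its cluster mates.
    labels = KMeans(n_clusters=K, random_state=seed, n_init=10).fit_predict(X)
    reps = []
    for c in range(K):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(X[members][:, None, :] - X[members][None, :, :], axis=-1)
        reps.append(int(members[np.argmin(dists.sum(axis=1))]))
    return reps

def assign_parameters(X_sel, trained_idx, trained_params):
    # Shared assignment step: each untrained program inherits the parameters of
    # the closest trained program under the selected features.
    trained_idx = list(trained_idx)
    assignment = {}
    for i in range(X_sel.shape[0]):
        if i in trained_idx:
            continue
        d = np.linalg.norm(X_sel[trained_idx] - X_sel[i], axis=1)
        assignment[i] = trained_params[trained_idx[int(np.argmin(d))]]
    return assignment

In the passive scheme the training indices are given, so only assign_parameters is needed; in the active scheme pick_training_programs chooses them first and the selected programs are then fully tuned.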
The trends in Fig. 2 can be explained as follows: when adopting passive training, every program has an equal chance to be selected as a training program. As K grows, the set of available optimal compiler parameters for training programs increases as well, resulting in the decrease in average total execution time. Active training, on the other hand, adopts K-means clustering when selecting the training programs. Given a fixed K, only K programs can be selected for training and compiler parameter optimization. Therefore, for small K (e.g., K = 1 or 2), we might select programs whose optimal compiler parameters do not fully benefit the other untrained programs. The execution time of the cases with FEAST enabled is shown to be comparable to that of using all features, which strongly suggests that FEAST can successfully select important features affecting program execution, leading to dimensionality reduction while still attaining satisfactory execution time reduction.
To gain more insight into the performance of the active and passive training schemes, we compare the execution time of LASSO and SFS in Fig. 3. The comparison for SBS is omitted since in practice SBS is computationally inefficient due to its sequential feature removal nature, especially for high-dimensional data. It is observed in Fig. 3 that the average execution time of active training is smaller than that of passive training, since for active training we are able to select representative programs as training samples for optimization. These results indicate that the execution time can be further reduced when we have the freedom to select K representative programs from clustering as training samples for compiler parameter assignment.

Fig. 4: Time reduction for the active and passive training schemes using LASSO feature selection, shown as contours over the number K of trained programs and the number of program executions Nexec for (a) the active and (b) the passive training scheme. The parameter Nexec specifies the number of times a program is executed. The contour indicates a phase transition in time reduction. The figure suggests that with Nexec large enough, the proposed compiler parameter assignment method can provide time savings when considering the overall execution time.

B. Overall Execution Time Comparison
In this section, we consider the overall execution time, including the time overhead imposed by the proposed algorithms as well as the execution time of every untrained program.
To this end, we introduce a parameter Nexec, the number of executions per program. The motivation behind introducing Nexec is as follows: for programs such as matrix operations, users may execute these core programs multiple times (i.e., Nexec times). Therefore, as will be demonstrated by the analysis detailed in this section, the time overhead introduced by the proposed algorithms will be compensated as Nexec increases, since the time spent in optimizing the training programs comprises relatively small portion of the overall time cost with large Nexec. Time Reduction (TR) can therefore be computed using the following formula: K TR = Nexec · Tnull − Nexec · Tauto − Texhaustive , where Nexec denotes the number of executions for each program, Tnull represents the total time to run every program with all tunable compiler parameters disabled, Tauto denotes the total time to run every program compiled by using the proK posed compiler parameter assignment method, and Texhaustive denotes the computation time for finding the optimal compiler parameters for K training programs. Fig. 4 shows the contour plot of the overall time reduction metric for both training schemes with LASSO. Results using SFS and SBS are similar, and are omitted in this paper. If TR is positive, it indicates the overall execution time is smaller than that with all tunable compiler parameters disabled. It can be observed that for each K, TR becomes positive when Nexec exceeds certain threshold value, implying that if a user uses the compiled programs repeatedly, the proposed method could potentially provide great time saving. Also for each K, the preceding threshold of active training is lower than that of passive training, since programs to exhaustively optimize are actively chosen by the proposed method. C. Features Selected by FEAST To investigate the key features affecting compiler execution time, we inspect the selected features from FEAST. Table II lists the top 10 selected features using various feature selection methods integrated in FEAST. The features are selected with 3-fold cross validation method for regressing the optimal execution time of all 30 programs with respect to their static program features. Based on the selected features, we can categorize the selected features into three groups: 1) Control Flow Graph (CFG), 2) assignments/operations, and 3) phi node. CFG features describe a programs’ control flow, which can be largely influenced by instruction branches, such as “if-else”, “if- if” and “for loop” statements. The selected CFG features are reasonable as in our program testing dataset, for loop contributes to the major part of programs’ control flow. In addition, assignment operations are essential to matrix operations, and hence possess discriminative power for distinguishing programs. Lastly, Phi node is a special operation of Intermediate Representation (IR). It is designed for Static Single Assignment form (SSA) which enables a compiler to perform further optimization, and hence it is an important factor for program execution. IV. R ELATED W ORK Recent rapid development of program characterization, a process to quantify programs, allows the application of modern machine-learning techniques to the field fo code compilation and optimization. These machine-learning techniques provide powerful tools that are widely used in various aspects of compilation procedures. 
For example, Buse and Weimer use static features to identify ”hot paths” (executional paths that are most frequently invoked) of a target program by applying logistic regression [6] without ever profiling the program, Kulkarni et al. build an evolving neural-network model that uses static features to help guide the inlining heuristics in compilation process [7], and Wang and O’Boyle exploit the use of artificial neural networks (ANNs) as well as support vector machine (SVM) for automatic code parallelization on multicore systems [8], among others. Many existing applications of machine-learning-enabled compiler tasks use features, either static or dynamic, selected by the designers, and hence heavily rely on field expertise. This work provides a comprehensive solution to this problem by using modern statistical methods to select appropriate features for a specific case. There has also been a vast amount of research dedicated to designing suitable features for target applications. For example, it is shown in [9] that, for compiler tasks that cannot afford the time cost for procurement of dynamic features, carefully designed graph-based static features can achieve accuracy and performance comparable to dynamic features in some applications, regardless the fact that it is prevailingly believed dynamic features are preferred due to insightful information they provide. Another example is the compiler parameters assignment task by Park et al. [12]. In their work, an SVM-based supervised training algorithm is used to train a set of support vectors that can help estimate the performance or reaction of an unknown program to a set of compiler parameters. They use newly-defined graph-based static features for training, which achieves high performance comparable to that using dynamic features, but without the need to invoke multiple compilations and profiling. While in general, it is possible to design dedicated features for specific tasks, the applicability of these dedicated features to other applications remain questionable. In the scenario where there are excessive number of numerical features that may or may not fit a target task, FEAST can help in selecting the most meaningful and influential features. As to the task of compiler parameter tuning, recent work has been focused on automation of parameter assignment process. Compiler parameter tuning has long been a crucial problem which attracts a vast amount of attention. Recent trends and efforts on exploiting the power of modern machine-learning techniques have achieved tremendous success in compiler parameter tuning. Stephenson and Amarasinghe demonstrate the potential of machine learning on automatic compiler parameter tuning by applying ANNs and SVM to predict a suitable loop-unrolling factor for a target program [10]. This work is relatively restricted, since it deals with a single compiler parameter. Agakov et al. propose a computeraided compiler parameter tuning method by identifying similar programs using static features [3], where certain level of expert intervention is still required. Cavazos et al. characterize programs with dynamic features, and use logistic regression, a classic example of conventional machine-learning algorithm, to predict good compiler parameter assignment [11]. While providing state-of-the-art performance, [11] requires dynamic features, which can be expensive to acquire. In [12], graphbased features are used along with SVM for performance prediction given a compilation sequence. 
This work uses dedicated features for the machine-learning task, and further implicitly utilizes excessive number of candidate assignments of compiler parameters in order to find a good assignment, resulting in a non-scalable algorithm. On the other hand, the proposed compiler parameter assignment algorithm is a comprehensive assignment algorithm that does not require dynamic features or dedicated static features. Furthermore, the training data that need full optimization can be set fixed. The good assignment for an unseen target program is directly derived from that of trained programs, implying its scalability with the number of potential assignments. V. C ONCLUSIONS AND F UTURE W ORK In this work, we propose FEAST, an automated framework for feature selection in compiler tasks that incorporates with well-known feature selection methods including LASSO, sequential forward and backward selection. We demonstrate the feasibility and applicability of FEAST by testing it on a proposed method for the task of compiler parameter assignment. The experimental results show that the three feature selection methods integrated in FEAST can select a representative small subset of static features that, when used in the compiler parameter assignment task, can achieve non-compromised performance. We also validate the effectiveness of the proposed methods by experimentally demonstrating significant overall execution time reduction of our method in a practical scenario where each program is required to run multiple times. Lastly, we discussed the roles of the features selected by FEAST, which provides deep insights into compilation procedures. In summary, our contributions are two-fold: 1) We integrate into FEAST with various modern machinelearning and optimization techniques for feature selection for compilation tasks. 2) We demonstrate the applicability of FEAST by experimentally showing that it can achieve comparable performance in compiler parameter assignment tasks with a very small set of selected static features. For future work, we are interested in exploring the inherent structural dependencies of codes in each program as additional features for compiler parameter assignment. We are also interested in integrating the proposed compiler parameter assignment algorithm with recently developed automated community detection algorithms, such as AMOS [13], to automatically cluster similar programs for the proposed passive and active training schemes. R EFERENCES [1] “cTuning Compiler Collection.” [Online]. Available: http://ctuning.org/wiki/index.php?title=CTools:CTuningCC [2] H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005. [3] F. Agakov, E. Bonilla, J. Cavazos, B. Franke, G. Fursin, M. F. O’Boyle, J. Thomson, M. Toussaint, and C. K. Williams, “Using machine learning to focus iterative optimization,” in Proceedings of the International Symposium on Code Generation and Optimization. IEEE Computer Society, 2006, pp. 295–305. [4] M. Dash and H. Liu, “Feature selection for classification,” Intelligent data analysis, vol. 1, no. 1, pp. 131–156, 1997. [5] “Polybench benchmark suite.” [Online]. Available: http://web.cse.ohiostate.edu/∼ pouchet/software/polybench/ [6] R. P. Buse and W. Weimer, “The road not taken: Estimating path execution frequency statically,” in Proceedings of the 31st International Conference on Software Engineering. 
IEEE Computer Society, 2009, pp. 144–154. [7] S. Kulkarni, J. Cavazos, C. Wimmer, and D. Simon, “Automatic construction of inlining heuristics using machine learning,” in Code Generation and Optimization (CGO), 2013 IEEE/ACM International Symposium on. IEEE, 2013, pp. 1–12. [8] Z. Wang and M. F. O’Boyle, “Mapping parallelism to multi-cores: a machine learning based approach,” in ACM Sigplan notices, vol. 44, no. 4. ACM, 2009, pp. 75–84. [9] J. Demme and S. Sethumadhavan, “Approximate graph clustering for program characterization,” ACM Transactions on Architecture and Code Optimization (TACO), vol. 8, no. 4, p. 21, 2012. [10] M. Stephenson and S. Amarasinghe, “Predicting unroll factors using supervised classification,” in International symposium on code generation and optimization. IEEE, 2005, pp. 123–134. [11] J. Cavazos, G. Fursin, F. Agakov, E. Bonilla, M. F. O’Boyle, and O. Temam, “Rapidly selecting good compiler optimizations using performance counters,” in International Symposium on Code Generation and Optimization (CGO’07). IEEE, 2007, pp. 185–197. [12] E. Park, J. Cavazos, and M. A. Alvarez, “Using graph-based program characterization for predictive modeling,” in Proceedings of the Tenth International Symposium on Code Generation and Optimization. ACM, 2012, pp. 196–206. [13] P.-Y. Chen, T. Gensollen, and A. O. Hero III, “Amos: An automated model order selection algorithm for spectral graph clustering,” arXiv preprint arXiv:1609.06457, 2016. TABLE II: Top 10 selected features from various methods integrated in FEAST. The number in the bracket indicates the feature ranking for each method. LASSO Number of basic blocks with a single predecessor and a single successor (6) Number of basic blocks with a single predecessor and two successors (7) Number of basic blocks with more then two successors and more than two predecessors (8) Number of basic blocks with number of instructions in the interval [15, 500] (5) Number of assignment instructions in the method (9) SFS Number of binary integer operations in the method (1) Number of direct calls in the method (9) Number of binary floating point operations in the method (2) Number of basic blocks with phi nodes in the interval [0,3] (4) Number of basic block where total number of arguments for all phi-nodes is greater than 5 (10) Number of assignment instructions in the method (10) Average of number of phi-nodes at the beginning of a basic block (10) Number of basic blocks with more than 3 phi nodes (4) Number of basic block where total number of arguments for all phi-nodes is greater than 5 (7) Number of switch instructions in the method (3) Number of binary integer operations in the method (1) Number of unary operations in the method (2) Number of unary operations in the method (3) Number of basic blocks in the method (3) SBS Number of basic blocks with a two predecessors and one successor (8) Number of basic blocks with a two predecessors and one successor (7) Number of conditional branches in the method (5) Number of basic blocks with two successors and two predecessors (2) Number of instructions in the method (9) Number of basic blocks with more then two successors and more than two predecessors (6) Number of basic blocks with number of instructions greater then 500 (4) Number of basic blocks with more than 3 phi nodes (5) Number of basic block where total number of arguments for all phi-nodes is in greater than 5 (8) Number of assignment instructions with the left operand an integer constant in the method (6) Number of binary operations 
with one of the operands an integer constant in the method (1)
Below all subsets for Minimal Connected Dominating Set∗ arXiv:1611.00840v1 [] 2 Nov 2016 Daniel Lokshtanov† Michal Pilipczuk‡ Saket Saurabh§ Abstract A vertex subset S in a graph G is a dominating set if every vertex not contained in S has a neighbor in S. A dominating set S is a connected dominating set if the subgraph G[S] induced by S is connected. A connected dominating set S is a minimal connected dominating set if no proper subset of S is also a connected dominating set. We prove that there exists a constant ǫ > 10−50 such that every graph G on n vertices has at most O(2(1−ǫ)n ) minimal connected dominating sets. For the same ǫ we also give an algorithm with running time 2(1−ǫ)n · nO(1) to enumerate all minimal connected dominating sets in an input graph G. 1 Introduction In the field of enumeration algorithms, the following setting is commonly considered. Suppose we have some universe U and some property Π of subsets of U . For instance, U can be the vertex set of a graph G, whereas Π may be the property of being an independent set in G, or a dominating set of G, etc. Let F be the family of all solutions: subsets of U satisfying Π. Then we would like to find an algorithm that enumerates all solutions quickly, optimally in time |F| · nO(1) , where n is the size of the universe. Such an enumeration algorithm could be then used as a subroutine for more general problems. For instance, if one looks for an independent set of maximum possible weight is a vertex-weighted graph, if suffices to browse through all inclusion-wise maximal independent sets (disregarding the weights) and pick the one with the largest weight. If maximal independent sets can be efficiently enumerated and their number turns out to be small, such an algorithm could be useful in practice. The other motivation for enumeration algorithms stems from extremal problems for graph properties. Suppose we would like to know what is, say, the maximum possible number of inclusion-wise maximal independent sets in a graph on n vertices. Then it suffices to find an enumeration algorithm for maximal independent sets, and bound its (exponential) running time in terms of n. The standard approach for the design of such an enumeration algorithm is to construct a smart branching procedure. The run of such a branching procedure can be viewed as a tree where the nodes correspond to moments when the algorithm branches into two or more subprocedures, fixing different choices for the shape of a solution. Then the leaves of such a search tree correspond to the discovered solutions. By devising smart branching rules one can limit the number of leaves of the ∗ The research of S. Saurabh leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 306992. The research of Mi. Pilipczuk is supported by Polish National Science Centre grant UMO-2013/11/D/ST6/03073. Mi. Pilipczuk is also supported by the Foundation for Polish Science via the START stipend programme. † Department of Informatics, University of Bergen, Norway, [email protected]. ‡ Institute of Informatics, University of Warsaw, Poland, [email protected]. § Institute of Mathematical Sciences, India, [email protected], and Department of Informatics, University of Bergen, Norway, [email protected]. 1 search tree, which both estimates the running time of the enumeration algorithm, and provides a combinatorial upper bound on the number of solution. 
For instance, the classic proof of Moon and Moser [9] that the number of maximal independent sets in an n-vertex graph is at most 3n/3 , can be easily turned into an algorithm enumerating this family in time 3n/3 · nO(1) . However, the analysis of branching algorithms is often quite nontrivial. The technique usually used, called Measure&Conquer, involves assigning auxiliary potential measures to subinstances obtained during branching, and analyzing how the potentials change during performing the branching rules. Perhaps the most well-known result obtained using Measure&Conquer is the O(1.7159n )-time algorithm of Fomin et al. [4] for enumerating minimal dominating sets. Note that in particular this implies an O(1.7159n ) upper bound on the number of minimal dominating sets. We refer to the book of Fomin and Kratsch [5] for a broader discussion of branching algorithms and the Measure&Conquer technique. The main limitation of such branching strategies is that, without any closer insight, they can only handle properties that are somehow local. This is because pruning unnecessary branches is usually done by analyzing specific local configurations in the graph. For this reason, it is difficult to add requirements of global nature to the framework. To give an example, consider the family of minimal connected dominating sets of a graph: a subset of vertices S is a minimal connected dominating set if it induces a connected subgraph, is a dominating set, and no its proper subset has both these properties. While the number of minimal dominating sets of an n-vertex graph is bounded by O(1.7159n ) by the result of Fomin et al. [4], for the number of minimal connected dominating sets no upper bound of the form O(cn ) for any c < 2 was known prior to this work. The question about the existence of such an upper bound was asked by Golovach et al. [6], and then re-iterated by Kratsch [1] during the recent Lorentz workshop “Enumeration Algorithms using Structure”. We remark that the problem of finding a minimum-size connected dominating set admits an algorithm with running time O(1.9407n ) [3], but the method does not generalize to enumerating minimal connected dominating sets. Our results. We resolve this question in affirmative by proving the following theorem. Theorem 1. There is a constant ǫ > 10−50 such that every graph G on n vertices has at most O(2(1−ǫ)n ) minimal connected dominating sets. Further, there is an algorithm that given as input a graph G, lists all minimal connected dominating sets of G in time 2(1−ǫ)n · nO(1) . Note that we give not only an improved combinatorial upper bound, but also a corresponding enumeration algorithm. The improvement is minuscule, however our main motivation was just to break the trivial 2n upper bound of enumerating all subsets. In many places our argumentation could be improved to yield a slightly better bound at the cost of more involved analysis. We choose not to do it, as we prefer to keep the reasoning as simple as possible, while the improvements would not decrease our upper bound drastically anyway. The main purpose of this work is to show the possibility of achieving an upper bound exponentially smaller than 2n , and thus to investigate what tools could be useful for the treatment of requirements of global nature in the setting of extremal problems for graph properties. For the proof of Theorem 1, clearly it is sufficient to bound the number of minimal connected dominating sets of size roughly n/2. 
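For readers who want to experiment with the objects studied here, the following brute-force Python sketch enumerates the minimal connected dominating sets of a small graph directly from the definitions. It is exponential in n and is emphatically not the algorithm behind Theorem 1; graphs are assumed to be given as adjacency lists (a dict mapping each vertex to the set of its neighbors).

from itertools import combinations

def is_dominating(G, S):
    # Every vertex is in S or has a neighbor in S.
    return all(v in S or G[v] & S for v in G)

def is_connected(G, S):
    # Depth-first search inside the induced subgraph G[S].
    if not S:
        return False
    seen, stack = set(), [next(iter(S))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(G[v] & S)
    return seen == S

def minimal_connected_dominating_sets(G):
    found = []
    for k in range(1, len(G) + 1):
        for combo in combinations(G, k):
            S = set(combo)
            if is_connected(G, S) and is_dominating(G, S):
                # S is minimal iff no smaller connected dominating set is
                # contained in it; all of those were found at earlier sizes.
                if not any(T < S for T in found):
                    found.append(S)
    return found

# Example: the path 0-1-2-3, given as {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
# has {1, 2} as its unique minimal connected dominating set.

Enumerating all 2^n subsets this way is exactly the trivial bound that Theorem 1 improves upon.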
The starting point is the realization that any vertex u in a minimal connected dominating set S serves one of two possible roles. First, u can be essential for domination, which means that there is some v not in S such that u is the only neighbor of v in S. Second, u can be essential for connectivity, in the sense that after removing u, the subgraph induced by S would become disconnected. Therefore, if we suppose that the vertices essential for domination 2 form a small fraction of S, we infer that almost every vertex of G[S] is a cut-vertex of this graph. It is not hard to convince oneself that then almost every vertex of S has degree at most 2 in G[S]. All in all, regardless whether the number of vertices essential for domination is small or large, a large fraction of all the vertices of the graph has at most 2 neighbors in S. Intuitively, in an “ordinary” graph the number of sets S with this property should be significantly smaller than 2n . We prove that this is indeed the case whenever the graph is “robustly dense” in the following sense: it has a spanning subgraph where almost all vertices have degrees not smaller than some constant ℓ, but no vertex has degree larger than some (much larger) constant h. For this proof we use the probabilistic method: we show that if S is sampled at random, then the probability that many vertices are adjacent to at most 2 vertices of S is exponentially small. The main tool is Chernoff-like concentration of independent random variables. The remaining case is when the spanning subgraph as described above cannot be found. We attempt at constructing it using a greedy procedure, which in case of failure discovers a different structure in the graph. We next show that such a structure can be also used to design an algorithm for enumerating minimal connected dominating sets faster than 2n , using a more direct branching strategy. The multiple trade-offs made in this part of the proof are the main reason for why our improvement over the trivial 2n upper bound is so small. 2 Preliminaries All graphs considered in this paper are simple, i.e., they do not have self-loops or multiple edges connecting the same pair of vertices. For a graph G, by V (G) and E(G) we denote the vertex and edge sets of G, respectively. The neighborhood of a vertex v in a graph G is denoted by NG (v), and consists of vertices adjacent to v. The degree of v, denoted by d(v), is defined the cardinality of its neighborhood. For a subset S ⊆ V (G) and vertex v ∈ V (G) the S-degree of v, denoted dS (v), is defined to be the number of vertices in S adjacent to v. A proper coloring of a graph G with c colors is a function φ : V (G) → {1, . . . , c} such that for every edge uv ∈ E(G) we have φ(u) 6= φ(v). For a proper coloring φ of G and integer i ≤ c, the i-th color class of φ is the set Vi = φ−1 (i). The subgraph of G induced by a vertex subset S ⊆ V (G) is denoted by G[S] and defined to be the graph with vertex set S and edge set {uv ∈ E(G) : u, v ∈ S}. For a vertex v ∈ V (G), the graph G − v is simply G[V (G) \ {v}]. A subset I of vertices is independent if it induced an edgeless graph, that is, a graph with no edges. A cutvertex in a connected graph G is a vertex v such that G − v is disconnected. We denote exp(t) = et . The probability of an event A is denoted by Pr[A] and the expected value of a random variable X is denoted by E[X]. We use standard concentration bounds for sums of independent random variables. 
In particular, the following variant of the Hoeffding’s bound [8], given by Grimmett and Stirzaker [7, p. 476], will be used. Theorem 2 (Hoeffding’s bound). Suppose X1 , X2 , . . . , Xn are independent random variables such that ai ≤ Xi ≤ bi for all i. Let X = Σni=1 Xi . Then: ! −2t2 . Pr[X − E[X] ≥ t] ≤ exp Σni=1 (bi − ai )2 For enumeration, we need the following folklore claim. Lemma 3. Let U be a universe of size n and let F ⊆ 2U be a family of its subsets that is closed under taking subsets (X ⊆ Y and Y ∈ F implies X ∈ F), and given a set X it can be decided in polynomial time whether X ∈ F. Then F can be enumerated in time |F| · nO(1) . 3 Proof. Order the elements of U arbitrarily as e1 , e2 , . . . , en , and process them in this order while keeping some set X ∈ F, initially set to be the empty set. When considering the next ei , check if X ∪ {ei } ∈ F. If this is not the case, just proceed further with X kept. Otherwise, output X ∪ {ei } as the next discovered set from F, and execute two subprocedures: in the first proceed with X, and in the second proceed with X ∪ {ei }. It can be easily seen that every set of F is discovered by the procedure, and that some new set of F is always discovered within a polynomial number of steps (i.e., this is a polynomial-delay enumeration algorithm). Thus, the total running time is |F| · nO(1) . 3 Main case distinction The first step in our proof is to try to find a spanning subgraph of the considered graph G, which has constant maximum degree, but where only a small fraction of vertices have really small degrees. This is done by performing a greedy construction procedure. Obviously, such a spanning subgraph may not exist, but then we argue that the procedure uncovers some other structure in the graph, which may be exploited by other means. The form of the output of the greedy procedure constitutes the main case distinction in our proof. Lemma 4. There is an algorithm that given as input a graph G, together with integers ℓ and h such that 1 ≤ ℓ ≤ h, and a real δ with 0 ≤ δ ≤ 1, runs in polynomial time and outputs one of the following two objects: 1. A subgraph G′ of G with V (G′ ) = V (G), such that • every vertex in G′ has degree at most h, and • less than δ · n vertices in G′ have degree less than ℓ. 2. A partition of V (G) into subsets L, H and R such that • |L| ≥ δ · n, • every vertex in L has strictly less than ℓ neighbors outside H, and • |H| ≤ 2ℓ h · n. Proof. The algorithm takes as input ℓ, h and δ and computes a subgraph G′ of G as follows. Initially V (G′ ) = V (G) and E(G′ ) = ∅. As long as there is an edge uv ∈ E(G) \ E(G′ ) such that (a) both u and v have degree strictly less than h in G′ , and (b) at least one of u and v has degree strictly less than ℓ in G′ , the algorithm adds the edge uv to E(G′ ). When the algorithm terminates, G′ is a subgraph of G with V (G′ ) = V (G), such that every vertex in G′ has degree at most h. Let L be the set of vertices that have degree strictly less than ℓ in G′ . If |L| < δ · n then the algorithm outputs G′ , as G′ satisfies the conditions of case 1. Suppose now that |L| ≥ δ · n. Let H be the set of vertices of degree exactly h in G′ , and let R be V (G) \ (L ∪ H). Clearly L, H, and R form a partition of V (G). Consider any vertex u ∈ L. There can not exist an edge uv ∈ E(G) \ E(G′ ) with v ∈ / H, since such an edge would be added to ′ E(G ) by the algorithm. Thus every vertex v ∈ NG (u) \ H is also a neighbor of u in G′ . Since the degree of u in G′ is less than ℓ, we conclude that |NG (u) \ H| < ℓ. 
′ Finally, we show that |H| ≤ 2ℓ h · n. To that end, we first upper bound |E(G )|. Consider the potential function X φ(G′ ) = max(ℓ − dG′ (v), 0). v∈V (G′ ) 4 Initially the potential function has value nℓ. Each time an edge is added to G′ by the algorithm, the potential function decreases by (at least) 1, because at least one endpoint of the added edge has degree less than ℓ. Further, when the potential function is 0, there are no vertices of degree less than ℓ, and so the algorithm terminates. Thus, the algorithm terminates after at most nℓ iterations, yielding |E(G′ )| ≤ nℓ. Hence, the sum of the degrees of all vertices in G′ is at most 2nℓ. Since every vertex in H has degree h, it follows that |H| ≤ 2ℓ h · n. 1 To prove Theorem 1, we apply Lemma 4 with ℓ = 14, h = 3 · 105 and δ = 60 . There are two ′ ′ possible outcomes. In the first case we obtain a subgraph G of G with V (G ) = V (G), such that 1 every vertex in G′ has degree at most 3 · 105 , and at most 60 · n vertices in G′ have degree less than 14. We handle this case using the following Lemma 5, proved in Section 4. Lemma 5. Let G be a graph on n vertices that has a subgraph G′ with V (G′ ) = V (G) and the 1 · n vertices in G′ following properties: every vertex in G′ has degree at most 3 · 105 , and less than 60 −26 have degree less than 14. Then G has at most O(2n·(1−10 ) ) minimal connected dominating sets. Further, there is an algorithm that given as input G and G′ , enumerates the family of all minimal −26 connected dominating sets of G in time 2n·(1−10 ) · nO(1) . 1 In the second case we obtain a partition of V (G) into L, H, and R such that |L| ≥ 60 · n, every 3 vertex in L has strictly less than 14 neighbors outside H, and |H| ≤ 104 · n. This case is handled by the following Lemma 6, which we prove in Section 5. Lemma 6. Let G be a graph on n vertices that has a partition of V (G) into L, H and R such 1 · n, every vertex in L has strictly less than 14 neighbors outside H, and |H| ≤ 1014 · n. that |L| ≥ 60 −50 Then G has at most 2n·(1−10 ) minimal connected dominating sets. Further, there is an algorithm that given as input G together with the partition (L, H, R), enumerates the family of all minimal −50 connected dominating sets of G in time 2n·(1−10 ) · nO(1) . Together, Lemmas 5 and 6 complete the proof of Theorem 1. 4 Robustly dense graphs In this section we bound the number of minimal connected dominating sets in a graph G that satisfies case 1 of Lemma 4, that is, we prove Lemma 5. In particular, we assume that G has a 1 subgraph G′ such that all vertices of G′ have degree at most h = 3 · 105 , and less than δn = 60 n vertices of G′ have degree less than ℓ = 14. For a set S, we say that a vertex v has low S-degree if dS (v) ≤ 2. We define the set L(S) = {v ∈ V (G) : dS (v) ≤ 2} to be the set of vertices in G of low S-degree. Our bound consists of two main parts. In the first part we give an upper bound on the 1 · n. In the second part we show that for any minimal number of sets S in G such that |L(S)| ≥ 20 4 1 connected dominating set S of G of size at least 10 n, we have |L(S)| ≥ 20 ·n. Together the two parts immediately yield an upper bound on the number of (and an enumeration algorithm for) minimal connected dominating sets in G. We begin by proving the first part using a probabilistic argument. 1 ·n Lemma 7. Let H be a graph on n vertices of maximum degree at most h, such that at most 60 n − 2 n 4 vertices have degree less than ℓ ≥ 14. Then there are at most h · 2 · e 1800h subsets S of V (H) 1 · n. 
such that |L(S)| ≥ 20 Proof. To prove the lemma, it is sufficient to show that if S ⊆ V (H) is selected uniformly at 1 · n is upper bounded as follows. random, then the probability that |L(S)| is at least 20 5    1 n  Pr |L(S)| > · n ≤ h2 · exp − 20 1800h4 (1) Let H 2 be the graph constructed from H by adding an edge between every pair of vertices in H that share a common neighbor. Since H has maximum degree at most h, H 2 has maximum degree at most h(h − 1) ≤ h2 − 1, and therefore H 2 can be properly colored with h2 colors [2]. Let φ : V (H) → {1, . . . , h2 } be a proper coloring of H 2 , and let V1 , V2 , . . . , Vh2 be the color classes of φ. Two vertices in the same color class of φ have empty intersection of neighborhoods in H. Thus, when S ⊆ V (H) is picked at random, we have that dS (u) and dS (v) are independent random variables whenever u and v are in the same color class of φ. 1 ) · n by Let Q be the set of vertices in G of degree at least ℓ. We have that |Q| ≥ (1 − 60 Q 2 assumption. For each i ≤ h we set Vi = Vi ∩ Q. Next we upper bound, for each i ≤ h2 , the 1 probability that |L(S) ∩ ViQ | > 40h 2 · n. For every vertex v ∈ V (H), define the indicator variable Xv which is set to 1 if dS (v) ≤ 2 and Xv is set to 0 otherwise. We have that   d(v) + d(v) + d(v) 0 1 2 . Pr[Xv = 1] = 2d(v) The right hand side is non-increasing with increasing d(v), so since we assumed d(v) ≥ ℓ, we have 2 (ℓ)+(ℓ)+( ℓ) that Pr[Xv = 1] ≤ 0 21ℓ 2 ≤ 2ℓ ℓ . In particular this holds for every v ∈ Q. Thus, for every P i ≤ h2 we have that |L(S) ∩ ViQ | = v∈V Q Xv — that is, |L(S) ∩ ViQ | is a sum of |ViQ | independent i indicator variables, each taking value 1 with probability at most (Theorem 2) yields ℓ2 . 2ℓ Thus, Hoeffding’s inequality !   2 2 ℓ n 2n n  Pr |L(S) ∩ ViQ | ≥ ℓ · |ViQ | + ≤ exp − . ≤ exp − 2 60h2 1800h4 3600h4 |ViQ |  (2) The union bound over the h2 color classes of φ, coupled with equation (2), yields that    ℓ2 1 n  Pr |L(S) ∩ Q| ≥ ℓ |Q| + · n ≤ h2 · exp − . 2 60 1800h4  n we have that Hence, with probability at least 1 − h2 · exp − 1800h 4 |L(S) ∩ Q| ≤ ℓ2 2 1 ·n≤ · n, |Q| + ℓ 2 60 60 where the last inequality holds due to ℓ ≥ 14. Since |L(S)| ≤ |L(S) ∩ Q| + |V (H) \ Q| and 1 1 n it follows that in this case, |L(S)| ≤ 20 · n. This proves equation (1) and the |V (H) \ Q| ≤ 60 statement of the Lemma. Note that the statement of Lemma 7 requires H to have at maximum degree at most h, such 1 · n vertices have degree less than ℓ. What we obtain from Lemma 4 is a subgraph that at most 60 ′ G of the input graph G with these properties. We will apply Lemma 7 to H = G′ and transfer the conclusion to G, since G′ is a subgraph of G. We now turn to proving the second part, that for any minimal connected dominating set S of 1 4 n, we have |L(S)| ≥ 20 · n. The first step of the proof is to show that any graph G of size at least 10 where almost every vertex is a cut vertex must have many vertices of degree 2. 6 Lemma 8. Let α > 0 be a constant. Suppose that H is a connected graph on n vertices in which at least (1 − α)n vertices are cutvertices. Then at least (1 − 7α)n vertices of H have degree equal to 2. Proof. Let X be the set of those vertices of H that are not cutvertices. By the assumption we have |X| ≤ αn. Let T be any spanning tree in H, and let L1 be the set of leaves of T . No leaf of T is a cutvertex of H, hence L1 ⊆ X. Let L3 be the set of those vertices of T that have degree at least 3 in T . 
It is well-known that in any tree, the number of vertices of degree at least 3 is smaller than the number of leaves. Therefore, we have the following: |L3 | < |L1 | ≤ |X| ≤ αn. (3) Let R be the closed neighborhood of L1 ∪L3 ∪X in T , that is, the set consisting of L1 ∪L3 ∪X and all vertices that have neighbors in L1 ∪ L3 ∪ X. Since T is a tree, it can be decomposed into a set of paths P, where each path connects two vertices of L1 ∪ L3 and all its internal vertices have degree 2 in T . Contracting each of these paths into a single edge yields a tree on the vertex set L1 ∪L3 , which means that the number of the paths in P is less than |L1 ∪ L3 |. Note that the closed neighborhood of L1 ∪ L3 in T contains at most 2 of the internal vertices on each of the paths from P: the first and the last one. Moreover, each vertex of X \ (L1 ∪ L3 ) introduces at most 3 vertices to R: itself, plus two its neighbors on the path from P on which it lies. Consequently, by equation (3) we have: |R| ≤ |L1 | + |L3 | + 2|L1 ∪ L3 | + 3|X| ≤ 7αn. (4) We now claim that every vertex u that does not belong to R, in fact has degree 2 in H. By the definition of R we have that u has degree 2 in T , both its neighbors v1 and v2 in T also have degree 2 in T , and moreover u, v1 , and v2 are all cutvertices in H. Aiming towards a contradiction, suppose u has some other neighbor w in H, different than v1 and v2 . Then the unique path from u to w in T passes either through v1 or through v2 ; say, through v1 . However, then the removal of v1 from H would not result in disconnecting H. This is because the removal of v1 from T breaks T into 2 connected components, as the degree of v1 in T is equal to 2, and these connected components are adjacent in H due to the existence of the edge uw. This is a contradiction with the assumption that v1 and v2 are cutvertices. From equation (4) and the claim proved above it follows that at least (1 − 7α)n vertices of G have degree equal to 2. We now apply Lemma 8 to subgraphs induced by minimal connected dominating sets. Lemma 9. Let S be a minimal connected dominating set of a graph G on n vertices, such that 4 1 |S| ≥ 10 n. Then |L(S)| ≥ 20 n. Proof. For n ≤ 2 the claim is trivial, so assume n ≥ 3, which in particular implies |S| ≥ 2. Aiming 1 n. By minimality, we have that for every vertex v, towards a contradiction, suppose |L(S)| < 20 the set S \ {v} is not a connected dominating set of G. Let Scut = {v ∈ S : G[S] − v is disconnected}. Consider a vertex v in S \ Scut . We have that S \ {v} can not dominate all of V (G) because otherwise S \ {v} would be a connected dominating set. Let u be a vertex in of G not dominated by S \ {v}. Because G[S] is connected and |S| ≥ 2, vertex v has a neighbor in S, so in particular u 6= v and hence u ∈ / S. Further, since S is a connected dominating set, u has a neighbor in S, and this neighbor can only be v. Hence dS (u) = 1 and so u ∈ L(S). Re-applying this argument for every v ∈ S \ Scut yields |L(S)| ≥ |S \ Scut |. 7 1 1 n, it follows that |S \ Scut | ≤ 20 n. From the argument above and the assumption |L(S) < 20 4 1 1 Since |S| ≥ 10 n, we have that |S \ Scut | ≤ 8 |S|. It follows that |Scut | ≥ 8 |S|. By Lemma 8 applied 1 n. Each of these to G[S], the number of degree 2 vertices in G[S] is at least (1 − 78 )|S| = 81 |S| ≥ 20 vertices belongs to L(S), which yields the desired contradiction. We are now in position to wrap up the first case, giving a proof of Lemma 5. P⌊ 4n 1 ⌋ n n(1− 100 ) 4 10 Proof of Lemma 5. 
There are at most i=0 subsets of V (G) of size at most 10 · n. i ≤2 4 Thus, the family of all minimal connected dominating sets of size at most 10 · n can be enumerated 1 4 in time 2n(1− 100 ) · nO(1) by enumerating all sets of size at most 10 · n, and checking for each set in polynomial time whether it is a minimal connected dominating set. 4 Consider now any minimal connected dominating set S in G with |S| ≥ 10 · n. By Lemma 9, 1 we have that |L(S)| ≥ 20 n. Since every vertex of degree at most 2 in G has degree at most 2 in 1 n holds also in G′ . However, by Lemma 7 applied to G′ , there are G′ , it follows that |L(S)| ≥ 20 n 1 at most 2n · e− 1800h4 subsets S of V (G′ ) = V (G) such that |L(S)| ≥ 20 · n (in G′ ). Substituting −26 h = 3 · 105 in the above upper bound yields that there are at most 2n·(1−10 ) minimal connected 4 n, yielding the claimed upper bound on the number of minimal dominating sets of size at least 10 connected dominating sets. −26 4 n in time 2n·(1−10 ) · To enumerate all minimal connected dominating sets of G of size at least 10 1 nO(1) , it is sufficient to list all sets S such that |L(S)| ≥ 20 · n, and for each such set determine in polynomial time whether it is a minimal connected dominating set. Note that the family of sets S 1 1 1 · n is closed under subsets: if |L(S)| ≥ 20 · n and S ′ ⊆ S then |L(S ′ )| ≥ 20 · n. such that |L(S)| ≥ 20 1 Since it can be tested in polynomial time for a set S whether |L(S)| ≥ 20 · n, the family of all sets −26 with |L(S)| ≥ 1 · n can be enumerated in time 2n·(1−10 ) nO(1) by the algorithm of Lemma 3, 20 completing the proof. 5 Large sparse induced subgraph In this section we bound the number of minimal connected dominating sets in any graph G for which case 2 of Lemma 4 occurs, i.e., we prove Lemma 6. Let us fix some integer ℓ ≥ 1. Our enumeration algorithm will make decisions that some vertices are in the constructed connected dominating set, and some are not. We incorporate such decisions in the notion of extensions. For disjoint vertex sets I and O (for in and out), we define an (I, O)-extension to be a vertex set S that is disjoint from I ∪ O and such that I ∪ S is a connected dominating set in G. An (I, O)extension S is said to be minimal if no proper subset of it is also an (I, O)-extension. The following simple fact will be useful. Lemma 10. There is a polynomial-time algorithm that, given a graph G and disjoint vertex subsets I, O, and S, determines whether S is a minimal (I, O)-extension in G. Proof. The algorithm checks whether I ∪ S is a connected dominating set in G and returns “no” if not. Then, for each v ∈ S the algorithm tests whether I ∪ (S \ {v}) is a connected dominating set of G. If it is a connected dominating set for any choice of v, the algorithm returns “no”. Otherwise, the algorithm returns that S is a minimal (I, O)-extension. The algorithm clearly runs in polynomial time, and if the algorithm returns that S is not a minimal (I, O)-extension in G, then this is correct, as the algorithm also provides a certificate. We now prove that if S is not a minimal (I, O) extension in G, then the algorithm returns “no”. If S is not an (I, O)-extension at all, the algorithm detects it and reports no. If it is an 8 (I, O)-extension, but not a minimal one, then there exists an (I, O)-extension S ′ ( S. Let v be any vertex in S \ S ′ . We claim that X = I ∪ (S \ {v}) is a connected dominating set of G. Indeed, X dominates V (G) because I ∪S ′ does. 
Furthermore, G[X] is connected because G[I ∪S ′ ] is connected and every vertex in X \ (I ∪ S ′ ) has a neighbor in (I ∪ S ′ ). Hence I ∪ (S \ {v}) is a connected dominating set of G and the algorithm correctly reports “no”. This concludes the proof. Observe that for any minimal connected dominating set X, and any I ⊆ X and O disjoint from X, we have that X \ I is a minimal (I, O)-extension. Thus one can use an upper bound on the number of minimal extensions to upper bound the number of minimal connected dominating sets. Recall that case 2 of Lemma 4 provides us with a partition (L, H, R) of the vertex set. To upper bound the number of minimal connected dominating sets, we will consider each of the 2n−|L| possible partitions of H ∪ R into two sets I and O, and upper bound the number of minimal (I, O)-extensions. This is expressed in the following lemma. Lemma 11. Let G be a graph and (L, H, R) be a partition of the vertex set of G such that |L| ≥ 10|H|ℓ, and every vertex in L has less than ℓ neighbors in L ∪ R. Then, for every parti− tion (I, O) of H ∪ R, there are at most 2|L| · e |L| 2−10ℓ ·100ℓ3 all minimal (I, O)-extensions can be listed in time 2|L| minimal (I, O)-extensions. Furthermore, − ·e |L| 210ℓ ·100ℓ3 · nO(1) . We now prepare ground for the proof of Lemma 11. The first step is to reduce the problem essentially to the case when L is independent. For this, we shall say that a partition of V (G) into L, H, and R is a good partition if: • |L| ≥ 10|H|, • L is an independent set, and • every vertex in L has less than ℓ neighbors in R. Towards proving Lemma 11, we first prove the statement assuming that the input partition of V (G) is a good partition. Lemma 12. Let G be a graph and (L, H, R) be a good partition of V (G). Then, for every partition − (I, O) of H ∪ R, there are at most 2|L| · e |L| 210ℓ ·100ℓ2 minimal (I, O)-extensions. Furthermore, all − minimal (I, O)-extensions can be listed in time 2|L| · e |L| 210ℓ ·100ℓ2 · nO(1) . We will prove Lemma 12 towards the end of this section, now let us first prove Lemma 11 assuming the correctness of Lemma 12. Proof of Lemma 11 assuming Lemma 12. Observe that we may find an independent set L′ in G[L] of size at least |L| ℓ . Indeed, since every vertex of L has less than ℓ neighbors in L ∪ R, any inclusion|L| ′ wise maximal independent set L′ in G[L] has size at least |L| ℓ . Therefore |L | ≥ ℓ ≥ 10|H|, and hence L′ , H, R′ = R ∪ (L′ \ L) is a good partition of V (G). ′ Further, for a fixed partition of R ∪ H into I and O, consider each of the 2|L\L | partitions of H ∪ R′ into I ′ and O′ such that I ⊆ I ′ and O ⊆ O ′ . For every minimal (I, O)-extension S, we have that S ∩ L′ is a minimal (I ′ , O′ )-extension, where I ′ = I ∪ (S \ L′ ) and O ′ = O ∪ (L \ (L′ ∪ S)). Thus, by Lemma 12 applied to the good partition (L′ , H, R′ ) of V (G), and the partition (I ′ , O ′ ) of H ∪ R′ , we have that the number of minimal (I, O)-extensions is upper bounded by |L\L′ | 2 |L′ | ·2 − ·e |L′ | 210ℓ 100ℓ2 9 − ≤ 2|L| · e |L| 210ℓ 100ℓ3 . Further, by the same argument, the minimal (I, O)-extensions can be enumerated within the claimed running time, using the enumeration provided by Lemma 12 as a subroutine. The next step of the proof of Lemma 12 is to make a further reduction, this time to the case when also H ∪ R is independent. Since the partition into vertices taken and excluded from the constructed connected dominating set is already fixed on H ∪ R, this amounts to standard cleaning operations within H ∪ R. 
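For concreteness, the greedy extraction of the independent set L' used in the proof of Lemma 11 can be sketched as follows (Python; the adjacency-list dictionary and the function name are again only illustrative). Since every vertex of L has fewer than ℓ neighbours in L, each chosen vertex excludes at most ℓ vertices of L, which gives |L'| ≥ |L|/ℓ.

def greedy_maximal_independent_set(adj, L):
    # Returns an inclusion-wise maximal independent set of G[L].
    # adj: dict vertex -> set of neighbours in G; L: set of vertices.
    L = set(L)
    chosen, blocked = set(), set()
    for v in L:
        if v not in blocked:
            chosen.add(v)
            # v and its neighbours inside L can no longer be chosen.
            blocked |= ({v} | adj[v]) & L
    return chosen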
We shall say that a good partition (L, H, R) of V (G) is an excellent partition if G[H ∪ R] is edgeless. Lemma 13. There exists an algorithm that given as input a graph G, together with a good partition (L, R, H) of V (G), and a partition (I, O) of R ∪ H, runs in polynomial time, and outputs a graph G′ with V (G) ∩ V (G′ ) ⊇ L, an excellent partition (L, R′ , H ′ ) of V (G′ ), and a partition (I ′ , O′ ) of R′ ∪ H ′ , with the following property. For every set S ⊆ L, S is a minimal (I, O)-extension in G if and only if S is a minimal (I ′ , O ′ )-extension in G′ . Proof. The algorithm begins by setting G′ = G, H ′ = H, R′ = R, I ′ = I and O′ = O. It then proceeds to modify G′ , at each step maintaining the following invariants: (i) (L, H ′ , R′ ) is a good partition of the vertex set of G′ , and (ii) for every set S ⊆ L, S is a minimal (I, O)-extension in G if and only if S is a minimal (I ′ , O ′ )-extension in G′ . If there exists an edge uv with u ∈ O′ and v ∈ I ′ , the algorithm removes u from G′ , from O′ , and from R′ or H ′ depending on which of the two sets it belongs to. Since u is anyway dominated by I ′ and removing u can only decrease |H ′ | (while keeping |L| the same), the invariants are maintained. If there exists an edge uv with both u and v in O′ , the algorithm removes the edge uv from G′ . Since neither u nor v are part of I ′ ∪ S for any S ⊆ L, it follows that the invariants are preserved. Finally, if there exists an edge uv with both u and v in I ′ , the algorithm contracts the edge uv. Let w be the vertex resulting from the contraction. The algorithm removes u and v from I ′ and from R′ or H ′ , depending on which of the two sets the vertices are in, and adds w to I ′ . If at least one of u and v was in H ′ , w is put into H ′ , otherwise w is put into R′ . Note that |H ′ | may decrease, but can not increase in such a step. Thus (L, R′ , H ′ ) remains a good partition and invariant (i) is preserved. Further, since u and v are always in the same connected component of G′ [I ′ ∪ S] for any S ⊆ L, invariant (ii) is preserved as well. The algorithm proceeds performing one of the three steps above as long as there exists at least one edge in G′ [R′ ∪ H ′ ]. When the algorithm terminates no such edge exists, thus (L, H ′ , R′ ) forms an excellent partition of V (G′ ). Lemma 13 essentially allows us to assume in the proof of Lemma 12 that (L, H, R) is an excellent partition of V (G). To complete the proof, we distinguish between two subcases: either there |L| are at most |L| 10 vertices in R of degree less than 10ℓ, or there are more than 10 such vertices. Let us shortly explain the intuition behind this case distinction. If there are at most |L| 10 vertices in R of degree less than 10ℓ, then it is possible to show that H ∪R is small compared to L, in particular that minimal (I, O)-extension can not pick more than |H ∪ R| |H ∪ R| ≤ 3|L| 10 . We then show that any |L|  vertices from L. This gives a 0.3|L| upper bound for the number of minimal (I, O)-extensions, which is significantly lower than 2|L| . On the other hand, if there are more than |L| 10 vertices in R of degree less than 10ℓ, then one can find a large subset R′ of R of vertices of degree at most 10ℓ, such that no two vertices in R′ have a common neighbor. For each vertex v ∈ R′ , every minimal (I, O)-extension must contain at least one neighbor of v. Thus, there are only 2d(v) − 1, rather than 2d(v) possibilities for how a minimal (I, O)extension intersects the neighborhood of v. 
Since all vertices in R′ have disjoint neighborhoods,  10ℓ |R′ | on the number of minimal (I, O)-extensions. this gives an upper bound of 2|L| · 2 210ℓ−1 10 We now give a formal treatment of the two cases. We begin with the case that there are at most |L| 10 vertices in R of degree less than 10ℓ. Lemma 14. Let G be a graph, and I and O be disjoint vertex sets such that I is nonempty and both G[I ∪ O] and G− (I ∪ O) are edgeless. Then every minimal (I, O)-extension S satisfies |S| ≤ |I ∪ O|. Proof of Lemma 14. We will need the following simple observation about the maximum size of an independent set of internal nodes in a tree. Claim 1. Let T be a tree and S be a set of non-leaf nodes of T such that S is independent in T . Then |S| ≤ |V (T ) \ S|. Proof. Root the tree T at an arbitrary vertex. Construct a vertex set Z by picking, for every s ∈ S, any child z of s and inserting z into Z; this is possible since no vertex of S is a leaf. Every vertex in T has a unique parent, so no vertex is inserted into Z twice, and hence |Z| = |S|. Further, since S is independent, Z ⊆ V (T ) \ S. The claim follows. y We proceed with the proof of the lemma. Let X = V (G) \ (I ∪ O) and let S ⊆ X be a minimal (I, O)-extension. Since I ∪ S is a connected dominating set and I ∪ O is independent, it follows that every vertex in O has a neighbor in S. Hence G[I ∪ S ∪ O] is connected. Let T be a spanning tree of G[I ∪ S ∪ O]. We claim that every node in S is a non-leaf node of T . Suppose not, then G[I ∪ S \ {v}] is connected, every vertex in O has a neighbor in S \ {v}, v has a neighbor in I (since G[I ∪ S] is connected and I is nonempty), and every vertex in X \ S has a neighbor in I. Hence S \ {v} would be an (I, O)-extension, contradicting the minimality of S. We conclude that every node in S is a non-leaf node of T . Applying Claim 1 to S in T concludes the proof. The next lemma resolves the first subcase, when there are at most |L| 10 vertices in R of degree less than 10ℓ. The crucial observation is that in this case, a minimal (I, O)-extension S must be of size significantly smaller than |L|/2, due to Lemma 14. Lemma 15. Let G be a graph, (L, H, R) be an excellent partition of V (G), and (I, O) be a partition of H ∪ R. If at most |L| 10 vertices in R have degree less than 10ℓ in G, then there are at most |L| 2|L| · 2− 10 minimal (I, O)-extensions. Further, the family of all minimal (I, O)-extensions can be |L| enumerated in time 2|L| · 2− 10 · nO(1) . Proof. First, note that |H| ≤ |L| 10 , because (L, H, R) is an excellent partition. Partition R into Rbig and Rsmall according to the degrees: Rbig contains all vertices in R of degree at least 10ℓ, while Rsmall contains the vertices in R of degree less than 10ℓ. Since every vertex in L has at most ℓ neigh|L| 3|L| bors in R, it follows that |Rbig | ≤ |L| 10 . By assumption |Rsmall | ≤ 10 . It follows that |R ∪ H| ≤ 10 . Now, I ∪ O = R ∪ H, and therefore, by Lemma 14 every minimal (I, O)-extension has size at most |I ∪ O| ≤ 3|L| 10 . Hence the number of different minimal (I, O)-extensions is at most 3|L| ⌋  10 X |L| ⌊ i=0 i |L| ≤ 20.882|L| ≤ 2|L| · 2− 10 . To enumerate the sets within the given time bound it is sufficient to go through all subsets S of L of size at most 3|L| 10 and check whether S is a minimal (I, O)-extension in polynomial time using the algorithm of Lemma 10. We are left with the case when at least |L| 10 vertices in R have degree less than 10ℓ in G. 11 Lemma 16. 
Let G be a graph, (L, H, R) be an excellent partition of V (G), and (I, O) be a partition of H ∪ R. If at least |L| 10 vertices in R have degree less than 10ℓ in G, then there are at − most 2|L| · e |L| 210ℓ ·100ℓ2 minimal (I, O)-extensions. The family of all minimal (I, O)-extensions can − be enumerated in time 2|L| · e |L| 210ℓ ·100ℓ2 · nO(1) . Proof. We assume that |I ∪ O| ≥ 2, since otherwise the claim holds trivially. Let Rsmall be the set of vertices in R of degree less than 10ℓ; by assumption we have |Rsmall | ≥ |L| 10 . Recall that vertices in R have only neighbors in L, and every vertex of L has less than ℓ neighbors in R. Hence, for each vertex r in Rsmall there are at most 10ℓ · (ℓ − 1) other vertices in Rsmall that share a common neighbor with r. Compute a subset R′ of Rsmall as follows. Initially R′ is empty and all vertices in Rsmall are unmarked. As long as there is an unmarked vertex r ∈ Rsmall , add r to R′ and mark r ′ as well as all vertices in Rsmall that share a common neighbor with r. Terminate when all vertices in Rsmall are marked. Clearly, no two vertices in the set R′ output by the procedure described above can share any common neighbors. Further, for each vertex added to R′ , at most 10ℓ · (ℓ − 1) + 1 ≤ 10ℓ2 vertices |L| small | ≥ 100ℓ are marked. Hence, |R′ | ≥ |R10ℓ 2 2. Observe that if a subset S of L is an (I, O)-extension, then every vertex in I ∪ O must have a neighbor in S. This holds for every vertex in O, because I ∪ S needs to dominate this vertex, but there are no edges between O and I. For every vertex in I this holds because G[I ∪ S] has to be connected, and I ∪ O is an independent set of size at least 2. Consider now a subset S of L picked uniformly at random. We upper bound the probability that every vertex in I ∪O has a neighbor in S. This probability is upper bounded by the probability that every vertex in R′ has a neighbor in S. For each vertex r in R′ , the probability that none of its neighbors is in S is 2−d(r) ≥ 2−10ℓ . Since no two vertices in R′ share a common neighbor, the events “r has a neighbor in S” for r ∈ R′ are independent. Therefore, the probability that every vertex in R′ has a neighbor in S is upper bounded by ′ −10ℓ · |L| 100ℓ2 (1 − 2−10ℓ )|R | ≤ e−2 − =e |L| 210ℓ ·100ℓ2 . The upper bound on the number of minimal (I, O)-extensions follows. To enumerate all the minimal (I, O)-extensions within the claimed time bound, it is sufficient to enumerate all sets S ⊆ L such that every vertex in R′ has at least one neighbor in S, and check in polynomial time using Lemma 10 whether S is a minimal (I, O)-extension. The family of such subsets of L is closed under taking supersets, so to enumerate them we can use the algorithm of Lemma 3 applied to their complements. We can now wrap up the proof of Lemma 12. Proof of Lemma 12. Let G be a graph and (L, H, R) be a good partition of V (G). Consider a partition of H ∪ R into two sets I and O. By Lemma 13, we can obtain in polynomial time a graph G′ with V (G) ∩ V (G′ ) ⊇ L, as well as an excellent partition (L, R′ , H ′ ) of V (G′ ), and a partition (I ′ , O′ ) of R′ ∪ H ′ , such that every subset S of L is a minimal (I, O)-extension in G if and only if it is a minimal (I ′ , O ′ )-extension in G′ . Thus, from now on, we may assume without loss of generality that L, H and R is an excellent partition of V (G). We distinguish between two cases: either there are at most |L| 10 vertices in R of degree less than |L| 10ℓ, or there are more than 10 such vertices. 
In the first case, by Lemma 15, there are at most |L| 2|L| ·2− 10 minimal (I, O)-extensions. Further, the family of all minimal (I, O)-extensions can be enu|L| − merated in time 2|L| ·2− 10 ·nO(1) . In the second case, by Lemma 16, there are at most 2|L| ·e 12 |L| 210ℓ ·100ℓ2 minimal (I, O)-extensions, and the family of all minimal (I, O)-extensions can be enumerated in − time 2|L| · e |L| 210ℓ ·100ℓ2 − · nO(1) . Since e |L| 210ℓ ·100ℓ2 |L| ≥ 2− 10 , the statement of the lemma follows. As argued before, establishing Lemma 12 concludes the proof of Lemma 11. We can now use Lemma 11 to complete the proof of Lemma 6, and hence also of our main result. Proof of Lemma 6. To list all minimal connected dominating sets of G it is sufficient to iterate over each of the 2n−|L| partitions of H ∪ R into I and O, for each such partition enumerate all minimal (I, O)-extensions S using Lemma 11 with ℓ = 14, and for each minimal extension S check whether I ∪ S is a minimal connected dominating set of G. Observe that |L| ≥ 1 1 · n ≥ 10 · 14 · 4 · n ≥ 10 · 14 · |H|, 60 10 and that therefore Lemma 11 is indeed applicable with ℓ = 14. Hence, the total number of minimal connected dominating sets in G is upper bounded by − 2n−|L| · 2|L| · e |L| 210ℓ 100ℓ3 − ≤ 2n · 2 n 60·210ℓ 100ℓ3 −50 ) ≤ 2n(1−10 . The running time bound for the enumeration algorithm follows from the running time bound of the enumeration algorithm of Lemma 11 in exactly the same way. Acknowledgements. The authors gratefully thank the organizers of the workshop Enumeration Algorithms Using Structure, held in Lorentz Center in Leiden in August 2015, where this work was initiated. References [1] Hans L. Bodlaender, Endre Boros, Pinar Heggernes, and Dieter Kratsch. Open problems of the Lorentz workshop “Enumeration Algorithms using Structure”, 2015. Utrecht University Technical Report UU-CS-2015-016. URL: http://www.cs.uu.nl/research/techreps/repo/CS-2015/2015-016.pdf. [2] Reinhard Diestel. Graph Theory, 4th Edition, volume 173 of Graduate texts in mathematics. Springer, 2012. [3] Fedor V. Fomin, Fabrizio Grandoni, and Dieter Kratsch. Solving Connected Dominating Set faster than 2n . Algorithmica, 52(2):153–166, 2008. [4] Fedor V. Fomin, Fabrizio Grandoni, Artem V. Pyatkin, and Alexey A. Stepanov. Combinatorial bounds via measure and conquer: Bounding minimal dominating sets and applications. ACM Trans. Algorithms, 5(1), 2008. [5] Fedor V. Fomin and Dieter Kratsch. Exact Exponential Algorithms. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2010. [6] Petr A. Golovach, Pinar Heggernes, and Dieter Kratsch. Enumerating minimal connected dominating sets in graphs of bounded chordality. Theor. Comput. Sci., 630:63–75, 2016. [7] Geoffrey Grimmett and David Stirzaker. Probability and random processes. Oxford university press, 2001. [8] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American statistical association, 58(301):13–30, 1963. [9] John W. Moon and Leo Moser. On cliques in graphs. Israel Journal of Mathematics, 3:23–28, 1965. 13
Diversity Handling In Evolutionary Landscape Maumita Bhattacharya School of Computing & Mathematics Charles Sturt University, Australia [email protected] Abstract The search ability of an Evolutionary Algorithm (EA) depends on the variation among the individuals in the population [3, 4, 8]. Maintaining an optimal level of diversity in the EA population is imperative to ensure that progress of the EA search is unhindered by premature convergence to suboptimal solutions. Clearer understanding of the concept of population diversity, in the context of evolutionary search and premature convergence in particular, is the key to designing efficient EAs. To this end, this paper first presents a comprehensive analysis of the EA population diversity issues. Next we present an investigation on a counter-niching EA technique [4] that introduces and maintains constructive diversity in the population. The proposed approach uses informed genetic operations to reach promising, but un-explored or under-explored areas of the search space, while discouraging premature local convergence. Simulation runs on a number of standard benchmark test functions with Genetic Algorithm (GA) implementation shows promising results. 1. Introduction Implementation of EA requires preserving a population while converging to a solution, which maintains a degree of population diversity [7, 8, 9, 12, 13, 14, 15, and 16] to avoid premature convergence to sub-optimal solutions. It is difficult to precisely characterize the possible extent of premature convergence as it may occur in EA due to various reasons. The primary causes are algorithmic features like high selection pressure and very high gene flow among population members. Selection pressure pushes the evolutionary process to focus more and more on the already discovered better performing regions or “peaks” in the search space and as a result population diversity declines, gradually reaching a homogeneous state. On the other hand unrestricted recombination results in high gene flow which spreads genetic material across the population pushing it to a homogeneous state. Variation introduced through mutation is unlikely to be adequate to escape local optimum or optima [17]. While premature convergence [17] may be defined as the phenomenon of convergence to sub-optimal solutions, geneconvergence means loss of diversity in the process of evolution. Though, the convergence to a local or to the global optimum cannot necessarily be concluded from gene convergence, maintaining a certain degree of diversity is widely believed to help avoid entrapment in non-optimal solutions [3, 4]. In this paper we present a comprehensive analysis on population diversity in the context of efficiency of evolutionary search. We then present an investigation on a counter niching-based evolutionary algorithm that aims at combating gene-convergence (and premature convergence in turn) by employing intelligent introduction of constructive diversity [4]. The rest of the paper is organized as follows: Section 2 presents an analysis of diversity issues and the EA search process; Section 3 introduces the problem space for our proposed algorithm. Sections 4, 5 and 6 present the proposed algorithm, simulation details and discussions on the results respectively. Finally, Section 7 presents some concluding remarks. 2. Population diversity and evolutionary search In the context of EA, diversity may be described as the variation in the genetic material among individuals or candidate solutions in the EA population. 
This in turn may also mean variation in the fitness value of the individuals in the population. Two major roles played by population diversity in EA are: Firstly, diversity promotes exploration of the solution space to locate a single good solution by delaying convergence; secondly, diversity helps to locate multiple optima when more than one solution is present [8, 15 and 16]. Besides the role of diversity regarding premature convergence in static optimization problems, diversity also seems to be beneficial in non-stationary environments. If the genetic material in the population is too similar, i.e., has converged towards single points in the search space, all future individuals will be trapped at that single point even though the optimal solution has moved to another location in the fitness landscape. However, if the population is diverse, the mechanism of recombination will continue to generate new candidate solutions making it possible for the EA to discover new optima. The search ability of a typical evolutionary algorithm depends on the variation among the individuals or candidate solutions in the population. The variation is introduced by the recombination operator to recombine existing solutions, and the mutation operator introducing noise by applying random variation to the individual's genome. However, as the algorithm progresses, loss of diversity or loss of genetic variation in the population results in low exploration, pushing the algorithm to converge prematurely to a local optimum or non-optimal solution. The following sub-section presents an analysis of the impact of population diversity on premature convergence based on the concepts presented in [13]. 2.1 Effect of population diversity on premature convergence Let r  X =  X1,K, X N    ∈ S N  be a population of individuals Y in the solution space S N , where the r population size is N ; X (0) be the initial population; H is a schema, i.e., a hyperplane of the solution space S . H may be represented by its defining components (defining alleles) and their corresponding values as H ai1,K , aik , where K (1 ≤ K ≤ chromosome length ) . ( ) Leung et al. in [13] have proposed the following measures related to population diversity in canonical genetic algorithm. r Degree of population diversity, δ X : Defined as the number of distinct components in the vector N r ∑ X i ; and Degree of population maturity, µ X : i =1 r r Described as µ X = l − δ X or the number of lost alleles. With probability of mutation, p (m ) = 0 and r r X (0 ) = X 0 , according to Leung [13] the following postulates hold true: For each solution,   r Y ∈ H ai1,K, a  r  ; X 0  , there exists a n ≥ 0   iµ  X 0      ( ) ( ) ( ) ( ) r r r Probabilit y Y ∈ X (n ) / X (0 ) = X 0 > 0 . Conversely, for each solution,  r  Y ∉ H ai1 ,K, a  r  ; X 0  , and every n ≥ 0 such   iµ  X 0      r r r that Probabilit y Y ∈ X (n ) / X (0 ) = X 0 = 0 . such { that { } } It is obvious from the above postulates that a canonical genetic algorithm’s search ability is confined r δ  X  to the minimum schema with 2 different individuals. Hence, greater the degree of population r diversity, µ X , greater is the search ability of the genetic algorithm. Conversely, a small degree of population diversity will mean limited search ability, r reducing to zero search ability with µ X = 0 . ( ) ( ) 2.2 Enhanced EAs to combat diversity issues No mechanism in a standard EA guarantees that the population will remain diverse throughout the run [17]. 
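For bit-string encodings the measures of Section 2.1 are inexpensive to compute. The sketch below (Python) follows one common reading, adopted here only for illustration: a gene position contributes to the degree of population diversity if both alleles are still present in the population at that position, and the degree of maturity is then the number of lost alleles; the function name is ours.

def degree_of_population_diversity(population):
    # population: list of equal-length 0/1 gene sequences.
    length = len(population[0])
    delta = sum(1 for j in range(length)
                if len({ind[j] for ind in population}) > 1)
    mu = length - delta            # number of lost (converged) alleles
    return delta, mu

# Example: positions 0 and 3 are converged, positions 1 and 2 still vary.
pop = [(1, 0, 1, 1), (1, 1, 0, 1), (1, 0, 0, 1)]
print(degree_of_population_diversity(pop))   # (2, 2)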
Although there is a wide coverage of the fitness landscape at initialization due to the random initialization of individuals’ genomes, selection quickly eliminates the least fit solutions, which implies that the population will converge towards similar points or even single points in the search space. Since the standard EA has limitations to maintain population diversity, several models have been proposed by the EA community which either maintain or reintroduce diversity in the EA population [1, 2, 4, 5, 6, 10, 11, 14 and 18]. The key researches can be broadly categorized as follows [15]: a) Complex population structures to control gene flow, e.g., the diffusion model, the island model, the multinational EA and the religion model. b) Specialized operators to control and assist the selection procedure, e.g., crowding, deterministic crowding, and sharing are believed to maintain diversity in the population. c) Reintroduction of genetic material, e.g., random immigrants and mass extinction models are aimed at reintroduction of diversity in the population. d) Dynamic Parameter Encoding (DPE), which dynamically resizes the available range of each parameter by expanding or reducing the search window. e) Diversity guided or controlled genetic algorithms that use a diversity measure to assess and control the survival probability of individuals and the process of exploration and exploitation. Figure 1 summarizes the major methods proposed to directly or indirectly control EA population diversity. Manual tuning of constants/functions Measure based control Parameter control based methods Self-adaptive control Polulation structure based control In the second case, i.e., to locate multiple good solutions, alternative and different solutions may have to be considered before accepting one final solution as the optimum. An algorithm that can keep track of multiple optima simultaneously should be able to find multiple optima in the same run by spreading out the search. On the other hand, maintaining genetic diversity in the population can be primarily beneficial in the first case; the problem of entrapment in local optima. Crowding Replacement schemes Deterministic and probabilistic crowding Remarks: The issue is - how much genetic diversity in the population is optimum? Cellular EA Spatial population topologies Patch work model Religion based EA Selection based techniques Sharing Diversity control based EA Direct/indirect control of population diversity Forking EA Search space division Shifting balace EA Multinational EA Random immigrants EA Mass exinction Extinction model SOC extinction model Restart and phase based techniques CHC algorithm Diversity guided EA Fig.1. Direct/indirect control of population diversity in EA. 3. Understanding the problem space Before we present our proposed approach, which aims at achieving constructive diversity, it is important to understand the problem space we are dealing with. For optimization problems the main challenge is often posed by the topology of the fitness landscape, in particular its ruggedness in terms of local optima. The target optimization problems for our approach are primarily multimodal. Genetic diversity of the population is particularly important in case of multimodal fitness landscape. Evolutionary algorithms are required to avoid and escape local optima or basins of attraction to reach the optimum in a multimodal fitness landscape. Over the years, several new and enhanced EAs have been suggested to improve performance [1, 2, 4, 5, 6, 10, 11 and 14]. 
The objectives of such research are twofold; firstly, to avoid stagnation in local optimum in order to find the global optimum; secondly, to locate multiple good solutions if the application requires so. Recombination in a fully converged population cannot produce solutions that are different from the parents; leave alone better than the parents. However, a very high diversity actually deteriorates performance of the recombination operator. Offspring generated combining two parents approaching two different peaks is likely to be placed somewhere between the two peaks; hindering the search process from reaching either of the peaks. This makes the recombination operator less efficient for fine-tuning the solutions to converge at the end of the run. Hence, the optimal level of diversity is somewhere between fully converged and highly diverse. Various diversity measures (such as Euclidean distance among candidate solutions, fitness distance and so on) may be used to analyze algorithms to evaluate their diversity maintaining capabilities. In the following sections we investigate the functioning and performance of our proposed Counter Niching-based Evolutionary Algorithm [4]. 4. Counter Niching EA: Landscape information, informed operator and constructive diversity To attain the objective of introducing constructive diversity in the population, the proposed technique first extracts information about the population landscape before deciding on introduction of diversity through informed mutation. The aim is to identify locally converging regions or donor communities in the landscape whose redundant less fit members (or individuals) could be replaced by more promising members sampled in un-explored or under-explored sections of the decision space. The existence of such communities is purely based on the position and spread of individuals in the decision space at a given point in time. Once such regions are identified, random sampling is done on yet to be explored sections of the landscape. Best representatives found during such sampling, now replace the worst members of the identified donor regions. Best representatives are the ones that are fitness wise the fittest and spatially the farthest. Here, average Euclidean distance from representatives of all already considered regions (stored in a memory array) is the measure for spatial distance. Regular mutation and recombination takes place in the population as a whole. The basic framework is as depicted in Figure 2. The Counter Niching based EA Identify Donor Regions and perform informed operation Find Communities or Clusters in population (Procedure GRID_NICHING) (Procedure INFORMED_OP) Fig.2. The COUNTER-NICHING based EA framework The task described above is carried out by the following three procedures: Procedure COUNTER NICHING EA: This is the main algorithm. Procedure GRID NICHING: This procedure is called by COUNTER NICHING EA and is used to identify tendency of genotypic convergence. Procedure INFORMED OP: This procedure is second in order to be called by COUNTER NICHING EA. This procedure is responsible of genetic operations including informed mutation. Figure 3 presents the procedure COUNTER_NICHING_EA. For details on the procedures GRID NICHING and INFORMED OP, we refer to our previous work on [4]. 5. Simulations 5.1 Test functions The proposed algorithm is tested on a set of commonly used benchmark test functions to validate its efficacy. 
The standard benchmark test function set used in the simulation runs consists of minimization of seven analytical functions as follows: Ackley’s Path Function ( f ack ( x ) ), Griewank’s Function f gri ( x ) , Rastrigin’s Function f rtg ( x ) , Generalized Rosenbrock’s function f ros (x ) , Axis parallel HyperEllipsoidal Function or Weighted Sphere Model f elp (x ) , Schwefel Function 1.2 f sch − 1.2 (x ) and a rotated Rastrigin Function f rrtg (x ) . Algorithm 1: Procedure COUNTER NICHING EA 1: begin 2: t = 0 3: Initialize population P(t) 4: Evaluate population P(t) 5: while (not<termination condition>) 6: begin t=t+1 7: (* Perform pseudo-niching of the population*) 8: Call Procedure GRID_NICHING (* Perform informed genetic operations *) 9: Call Procedure INFORMED_OP 10: Create new population using an elitist selection mechanism 11: Evaluate P(t) 14: end while 15: end Fig.3. The COUNTER-NICHING based EA framework. 5.2 Algorithms considered for comparison The algorithms used in the comparison are as follows: (a) the “standard EA” (SEA), (b) the self organized criticality EA (SOCEA), (c) the cellular EA (CEA), and (d) the diversity guided EA (DGEA). The SEA uses Gaussian mutation with zero mean and variance σ 2 = 1 + t + 1 . The SOCEA is a standard EA with non-fixed and non-decreasing variance σ 2 = POW (10 ) , where POW (α ) is the power-law distribution. The CEA uses a 20x20 grid with wrapped edges. The grid size corresponds to the 400 individuals used in the other algorithms. The CEA uses Gaussian mutation with variance σ 2 = POW (10 ) . Finally, the DGEA uses the Gaussian mutation operator with variance σ 2 = POW (1) . The reader is referred to [15] for additional information on the methods. 5.3 Experiment set-up Simulations were carried out to apply the proposed COUNTER NICHING based EA with real-valued encoding with parameters N (population size) =300, p m (mutation probability) =0.01 and pr (recombination probability) =0.9. In case of the algorithms used for comparison as mentioned in Section 4.2, namely, (i) SEA (Standard EA), (ii) SOCEA (Self-organized criticality EA), (iii) CEA (The Cellular EA), and (iv) DGEA (Diversity guided EA), experiments were performed using real-valued encoding, a population size of 400 individuals, binary tournament selection. Probability of mutating an entire genome was p m = 0.75 and probability for crossover was p r = 0.9. All the test functions were considered in 20, 50 and 100 dimensions. Reported results were averaged over 30 independent runs, maximum number of generations in each run being only 500, as against 1000 generations in used [15] for the same set of test cases for the 20 dimensional scenarios. The comparison algorithms use 50 times the dimensionality of the test problems as the terminating generation number in general, while the COUNTER NICHING EA uses 500, 1000 and 2000 generations for the 20, 50 and 100 dimensional problem variants respectively. 6. Results and discussions This section presents the empirical results obtained by the COUNTER NICHING EA algorithm when tackling the seven test problems mentioned in Section 5.1 with dimensions 20, 50 and 100. 6.1 General performance of COUNTER NICHING EA Table 1 presents the error values, ( f (x ) − f (x )* ) where, f (x )* is the optimum. Each column corresponds to a test function. The error values have been presented for the three dimensions of the problems considered, namely 20, 50 and 100. 
As each test problem was simulated over 30 independent runs, we have recorded results from each run and sorted the results in ascending order. Table 1 presents results from the representative runs: 1st (Best), 7th, 15th (Median), 22nd and 30th (Worst), Mean and Standard Deviation (Std). The main performance measures used are the following: “A” Performance: Mean performance or average of the best-fitness function found at the end of each run. (Represented as ‘Mean’ in Table 1). “SD” Performance: Standard deviation performance. (Represented as ‘Std.’ in Table 1). “B” Performance: Best of the fitness values averaged as mean performance. (Represented as ‘Best’ in Table 1). As can be observed COUNTER NICHING EA has demonstrated descent performance in majority of the test cases. However, as can be seen from the highlighted segment (highlighted in bold) of Table 1, the proposed algorithm was not very efficient in handling the higher dimensional cases (50 and 100 dimensional cases in this example) for the rotated Rastrigin Function f rrtg (x ) . Keeping in mind the concept of No Free Lunch Theorem, this is acceptable as no single algorithm can be expected to perform favorably for all possible test cases. The chosen benchmark test functions represent a wide variety of test cases. 6.2 Comparative performance of COUNTER NICHING EA Simulation results obtained with COUNTER NICHING EA in comparison to SEA, SOCEA, CEA, and DGEA (see Section 5.2 for descriptions of these algorithms) are presented in Table 2. Results reported in this case, for COUNTER NICHING EA were averaged over 50 independent runs. These simulation results ascertain COUNTER NICHING EA’s superior performance as regards to solution precision in all the test cases particularly for lower dimensional instances. This may be attributed to COUNTER NICHING EA’s ability to strike a better balance between exploration and exploitation. However, the proposed algorithm’s performance deteriorates with increasing dimensions. Also, the algorithm could not handle the high dimensional versions of the high epistatis rotated Rastrigin function to any satisfactory level. For the reported results as shown in Table 2, the 100 dimensional scenarios of the test problems used 5000 generations for each of the compared algorithm, namely, SEA, SOCEA, CEA and DGEA. On the other hand, COUNTER NICHING EA used only 2000 generations to reach the reported results. 6.3 An analysis of population diversity for COUNTER NICHING EA In the next phase of our experiments, we have investigated COUNTER NICHING EA’s performance in terms of maintaining constructive diversity. There are various measures of diversity available. The “distance-to-average-point” measure used in [15] is relatively robust with respect to population size, dimensionality of problem and the search range of each Table 1. Error Values Achieved on the Test Functions with Simulation Runs for COUNTER NICHING EA. Dimensions of Each Function Considered are 20, 50 and 100. f ack f f f f f f gri 1st (Be st) 7th 2 0 D 15th (M edi an) 22n d 30th (W orst ) Me an Std. 1st (Be st) 7th 5 0 D 15th (M edi an) 22n d 30th (W orst ) Me an Std. 
1st (Be st) 7th 1 0 0 D 15th (M edi an) 22n d 1.00 E61 1.11 E61 1.11 E61 4.3E -62 rtg ros elp sch − rrtg 1.01 E60 1.01 E60 1.11 E60 2.01 E60 2.01 E60 2.89 E60 2.01E50 3.89 E-6 4.41 E62 4.96 E62 1.11 E61 1.13 1E61 1.21 0E61 2.01E50 3.89 E-6 2.71E50 3.9E -6 1.95 E61 3.11 E61 5.61 E62 8.71 E62 2.02 E61 3.30 E61 1.92 E60 2.01 E60 2.91 E60 2.91 E60 2.91E50 3.93 E-6 2.91E50 3.99 E-6 1.12 E61 5.02 E62 1.21 E61 1.12 E60 2.92 E60 2.72E50 3.9E -6 8.33 E62 0.56 E29 0.71 E29 0.71 E29 1.64 E62 1.00 E30 1.01 E30 1.01 E30 8.6E -62 4.62 E61 1.01 E30 1.01 E30 1.10 E30 4.22E51 3.88 -8 1.00 E30 1.01 E30 1.10 E30 4.71 E61 1.21 E29 1.41 E29 1.90 E29 2.21E20 9.01 2.40E20 9.01 2.90E20 9.11 0.91 E29 0.99 E29 1.91 E30 1.99 E30 1.81 E30 1.99 E30 1.92 E29 1.98 E29 1.51 E30 1.92 E30 2.91E20 9.22 2.91E20 9.24 0.73 E29 1.55 E30 1.11 E30 4.41 E31 1.90 E-9 1.91 E29 3.16 E30 2.09 E-9 1.11 E30 3.66 E31 2.09 E-8 2.90E20 9.12 3.16E21 0.09 8 1.00 E-9 1.11 E30 4.76 E31 1.20 E-9 2.09E5 10.5 2 1.01 E-9 1.51 E-9 1.92 E-9 2.91 E-9 2.92 E-8 2.59E5 10.6 6 1.12 E-9 1.72 E-9 1.99 E-9 2.99 E-9 2.99 E-8 3.29E5 11.0 9 1.36 E-9 1.86 E-9 2.21 E-9 3.21 E-9 3.21 E-8 3.79E5 11.6 1 30th (W orst ) Me an 1.36 E-9 1.92 E-9 2.92 E-9 3.92 E-9 3.90 E-8 3.98E5 11.7 9 1.13 E-9 1.81 E-9 2.01 E-9 3.03 E-9 3.01 E-8 3.69E5 11.5 0 Std. 1.61 E10 2.71 E10 3.88 E10 6.91 E10 5.81 E-9 7.48E6 0.52 41 variable. Hence, we have used this measure of diversity in our investigation. The “distance-toaverage-point” measure for N dimensional numerical problems can be described as below [15]. P 2 N 1 diversity (P ) = ⋅ ∑ ∑  sij − s j  (1)  L ⋅ P i = 1 j = 1 where, L is the length of the diagonal or range in the search space S ⊆ ℜ N , P is the population, P is the population size, N is the dimensionality of the problem, sij is the j ’th value of the i ’th individual, and s j is the j ’th value of the average point s . It is assumed that each search variable s k is in a finite range, s k _ min ≤ s k ≤ s k _ max . Table 3 depicts the average diversity for four test problems with COUNTER NICHING EA simulation runs. The values reported in Table 3, averages the value of the diversity measure in equation (1) calculated at each generation where there has been an improvement in average fitness over 500, 1000 and 2000 generations for the 20, 50 and 100 dimensional cases respectively. Final values were averaged over 100 runs. To eliminate the noise in the initial generations of a run, diversity calculation does not start until the generation, since which a relatively steady improvement in fitness has been observed. Table 3 shows that the COUNTER NICHING EA does not necessarily maintain very high average population diversity. However, EA’s requirement is not to maintain very high average population diversity but to maintain an optimal level of population diversity. The high solution accuracy obtained by COUNTER NICHING EA proves that the algorithm is successful in this respect. 6.4 Statistical significance of comparative analysis Finally, a t-test (at 0.05 level of significance; 95% confidence) was applied in order to ascertain if differences in the “A” performance for the best average Table 2. Average Fitness Comparison for SEA, SOCEA, The CEA, DGEA, and COUNTER NICHING EA*. Dimensions of Each Function considered are 20, 50 and 100. ‘-’ Appears Where the Corresponding Data is Not Available. 
Function f ack ( x ) SEA SOCEA CEA DGEA C_EA* 2.494 0.633 0.239 3.36E-5 1.08E61 1.171 0.930 0.642 7.88E-8 4.6E-62 11.12 2.875 1.250 3.37E-8 1.21E61 8292.3 2 406.490 149.05 6 8.127 1.0E-60 - - - - 2.9E-60 20D f gri (x ) 20D f rtg (x ) 20D f ros (x ) 20D f elp (x ) 20D f sch − 1.2 ( - - - - 2.7E-50 - - - - 3.9E-6 2.870 1.525 0.651 2.52E-4 1.01E29 1.616 1.147 1.032 1.19E-3 1.01E30 44.674 22.460 14.224 1.97E-6 2.01E30 41425. 674 4783.246 1160.0 78 59.789 1.91E29 - - - - 1.00E30 f sch − 1.2 ( - - - - 2.9E-20 - - - - 9.1 2.893 2.220 1.140 9.80E-4 1.00E-9 2.250 1.629 1.179 3.24E-3 1.80E-9 106.21 2 86.364 58.380 6.56E-5 2.00E-9 91251. 300 30427.63 6053.8 70 880.324 3.00E-9 - - - - 2.99E-8 f sch − 1.2 ( - - - - 3.7E-5 - - - 11.51 20D f rrtg (x ) 20D f ack ( x ) 50D f gri (x ) 50D f rtg (x ) 50D f ros (x ) 50D f elp (x ) 50D 50D f rrtg (x ) 50D f ack ( x ) 100D f gri (x ) 100D f rtg (x ) 100D f ros (x ) 100D f elp (x ) 100D 100D f rrtg (x ) 100D - fitness function are statistically significant when compared with the one for the other techniques used for comparison. The P -values of the two-tailed t-test are given in Table 4. As can be observed, the difference in “A” performance of COUNTER NICHING EA is statistically significant for majority of the techniques across the test functions in their three different dimensional versions. Table 3. Average Population Diversity Comparison For COUNTER NICHING EA (Fixed Run). An average of 100 Runs Have Been Reported In Each Case. 20D 50D 100D f ack ( x ) 0.001350 0.001811 0.002001 f gri (x ) 0.001290 0.001725 0.002099 f rtg (x ) 0.003000 0.003550 0.004015 f ros (x ) 0.001718 0.002025 0.002989 7. Conclusions In this paper we investigated the issues related to population diversity in the context of the evolutionary search process. We established the association between population diversity and the search ability of a typical evolutionary algorithm. Then we presented an investigation on an intelligent mutation based EA that tries to achieve optimal diversity in the search landscape. The framework basically incorporates two key processes. Firstly, the population’s spatial information is obtained with a pseudo-niching algorithm. Secondly, the information is used to identify potential local convergence and community formations. Then diversity is intelligently introduced with informed genetic operations, aiming at two objectives: (a) Promising samples from unexplored regions are introduced replacing redundant less fit members of over-populated communities. (b) While local entrapment is discouraged, representative members are still preserved to encourage exploitation. While the current focus of the research was to introduce and maintain population diversity to avoid local entrapment, this Counter Niching-based algorithm can also be adapted to serve as an inexpensive alternative for niching GA, to identify multiple solutions in multimodal problems as well as to suit the diversity requirements of a dynamic environment. Table 4. The P -values of the t-test with 99degrees of freedom. Dimensions of Each Function considered are 20, 50 and 100. ‘-’ Appears Where the Corresponding Data is Not Available. 
C_EA*SEA C_EA*SOCEA C_EA*CEA C_EA*DGEA f ack (x ) 20D 0.1144 0.4263 0.625 0.9954 f gri (x ) 20D 0.2793 0.3349 0.4231 0.9998 f rtg (x ) 20D 0.0009 0.0901 0.2636 0.9999 f ros (x ) 20D 0 0 0 0.0044 f ack ( x ) 50D 0.0903 0.217 0.4198 0.9873 f gri (x ) 50D 0.2037 0.2843 0.3098 0.9725 f rtg (x ) 50D 0 0 0.0002 0.9989 f ros (x ) 50D 0 0 0 0 f ack ( x ) 100D 0.0891 0.1363 0.2857 0.975 f gri (x ) 100D 0.1337 0.2019 0.2776 0.9546 f rtg (x ) 100D 0 0 0 0 f ros (x ) 100D 0 0 0 0 Function References [1] ADRA, S. F. AND FLEMING, P. J. 2011. Diversity management in evolutionary many-objective optimization. IEEE Trans. Evol. Comput. 15, 2, 183– 195. [2] ARAUJO, L. AND MERELO, J. J. 2011. Diversity through multiculturality: Assessing migrant choice policies in an island model. IEEE Trans. Evol. Comput. 15, 4, 456–468. [3] Bhattacharya, M., “An Informed Operator Approach to Tackle Diversity Constraints in Evolutionary Search”, Proceedings of The International Conference on Information Technology, ITCC 2004, Vol. 2, IEEE Computer Society Press, ISBN 0-76952108-8,pp. 326-330. [4] Bhattacharya, M., “Counter-niching for Constructive Population Diversity”, in Proceedings of the 2008 IEEE Congress on Evolutionary Computation (CEC 2008), Hong Kong, IEEE Press, ISBN: 978-1-42441823-7, pp. 4174-4179. [5] CHOW, C. K. AND YUEN, S. Y. 2011. An evolutionary algorithm that makes decision based on the entire previous search history. IEEE Trans. Evol. Comput. 15, 6, 741–769. [6] CURRAN, D. AND O’RIORDAN, C. 2006. Increasing population diversity through cultural learning. Adapt. Behav. 14, 4, 315–338. [7] FRIEDRICH, T., HEBBINGHAUS, N., AND NEUMANN, F. 2007. Rigorous analyses of simple diversity mechanisms. In Proceedings of the Genetic and Evolutionary Computation Conference. 1219– 1225. [8] FRIEDRICH, T., OLIVETO, P. S., SUDHOLT, D., AND WITT, C. 2008. Theoretical analysis of diversity mechanisms for global exploration. In Proceedings of the Genetic and Evolutionary Computation Conference. 945–952. [9] GALV´AN-L´OPEZ, E.,MCDERMOTT, J., O’NEILL, M., AND BRABAZON, A. 2010. Towards an understanding of locality in genetic programming. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation. 901–908. [10] GAO, H. AND XU, W. 2011. Particle swarm algorithm with hybrid mutation strategy. Appl. Soft Comput. 11, 8, 5129–5142. [11] JIA, D., ZHENG, G., AND KHAN, M. K. 2011. An effective memetic differential evolution algorithm based on chaotic local search. Inform. Sci. 181, 15, 3175–3187. [12] K. A. De Jong, “An Analysis of the Behavior of a Class of Genetic Adaptive Systems”, PhD thesis, University of Michigan, Ann Arbor, MI, Dissertation Abstracts International 36(10), 5140B, University Microfilms Number 76-9381, 1975. [13] Leung, Y., Gao, Y. and Xu, Z. B., "Degree of Population Diversity-A Perspective on Premature Convergence in Genetic Algorithms and its Markov Chain Analysis",IEEE Transactions on Neural Networks, volume 8,no. 5, pp. 1165-1176,1997. [14] LIANG, Y. AND LEUNG, K.-S. 2011. Genetic algorithm with adaptive elitist-population strategies for multimodal function optimization. Appl. Soft Comput. 11, 2, 2017–2034. [15] R. K. Ursem, “Diversity-Guided Evolutionary Algorithms”, Proceedings of Parallel Problem Solving from Nature VII (PPSN-2002), 2002, pp. 462-471. [16] R. Thomsen and P. Rickers, “Introducing Spatial Agent-Based Models and Self-Organised Criticality to Evolutionary Algorithms” Master’s thesis, University of Aarhus, Denmark, 2000. [17] T. B¨ack, D. B. Fogel, Z. 
Michalewicz, and others (eds.), Handbook on Evolutionary Computation, IOP Publishing Ltd and Oxford University Press, 1997. [18] Bhattacharya, M., "Meta Model Based EA for Complex Optimization", International Journal of Computational Intelligence, vol. 4, no. 1, 2008.
Model-Powered Conditional Independence Test Rajat Sen1,* , Ananda Theertha Suresh2,* , Karthikeyan Shanmugam3,* , Alexandros G. Dimakis1 , and Sanjay Shakkottai1 arXiv:1709.06138v1 [stat.ML] 18 Sep 2017 1 The University of Texas at Austin 2 Google, New York 3 IBM Research, Thomas J. Watson Center September 20, 2017 Abstract We consider the problem of non-parametric Conditional Independence testing (CI testing) for continuous random variables. Given i.i.d samples from the joint distribution f (x, y, z) of continuous random vectors X, Y and Z, we determine whether X ⊥ ⊥ Y |Z. We approach this by converting the conditional independence test into a classification problem. This allows us to harness very powerful classifiers like gradient-boosted trees and deep neural networks. These models can handle complex probability distributions and allow us to perform significantly better compared to the prior state of the art, for high-dimensional CI testing. The main technical challenge in the classification problem is the need for samples from the conditional product distribution f CI (x, y, z) = f (x|z)f (y|z)f (z) – the joint distribution if and only if X ⊥ ⊥ Y |Z. – when given access only to i.i.d. samples from the true joint distribution f (x, y, z). To tackle this problem we propose a novel nearest neighbor bootstrap procedure and theoretically show that our generated samples are indeed close to f CI in terms of total variational distance. We then develop theoretical results regarding the generalization bounds for classification for our problem, which translate into error bounds for CI testing. We provide a novel analysis of Rademacher type classification bounds in the presence of non-i.i.d near-independent samples. We empirically validate the performance of our algorithm on simulated and real datasets and show performance gains over previous methods. 1 Introduction Testing datasets for Conditional Independence (CI) have significant applications in several statistical/learning problems; among others, examples include discovering/testing for edges in Bayesian networks [15, 27, 7, 9], causal inference [23, 14, 29, 5] and feature selection through Markov Blankets [16, 31]. Given a triplet of random variables/vectors (X, Y, Z), we say that X is conditionally independent of Y given Z (denoted by X ⊥ ⊥ Y |Z), if the joint distribution fX,Y,Z (x, y, z) factorizes as fX,Y,Z (x, y, z) = fX|Z (x|z)fY |Z (y|z)fZ (z). The problem of Conditional Independence Testing (CI Testing) can be defined as follows: Given n i.i.d samples from fX,Y,Z (x, y, z), distinguish between the two hypothesis H0 : X ⊥ ⊥ Y |Z and H1 : X ⊥ 6 ⊥ Y |Z. In this paper we propose a data-driven Model-Powered CI test. The central idea in a model-driven approach is to convert a statistical testing or estimation problem into a pipeline that utilizes the power of supervised learning models like classifiers and regressors; such pipelines can then leverage recent advances in classification/regression in high-dimensional settings. In this paper, we take such a model-powered approach (illustrated in Fig. 1), which reduces the problem of CI testing to Binary Classification. Specifically, the key steps of our procedure are as follows: (i) Suppose we are provided 3n i.i.d samples from fX,Y,Z (x, y, z). We keep aside n of these original samples in a set U1 (refer to Fig. 1). 
The remaining 2n of the original samples are processed through our first module, the nearest-neighbor bootstrap (Algorithm 1 in our paper), which produces n simulated samples stored in * Equal Contribution 1 3n Original Samples XY Z x 1 y 1 z1 U1 n Original Samples . . . .. . 1 Training Set Dr G ĝ (Trained Classifier) y3n z3n Shu✏e U20 XY Z ` XY Z . . . 2n Original Samples 1 + .. . x3n ` XY Z . . . Nearest Neighbor Bootstrap 0 + .. . 0 n samples close to f CI Test Set De L(ĝ, De ) (Test Error) Figure 1: Illustration of our methodology. A part of the original samples are kept aside in U1 . The rest of the samples are used in our nearest neighbor boot-strap to generate a data-set U20 which is close to f CI in distribution. The samples are labeled as shown and a classifier is trained on a training set. The test error is measured on a test set there-after. If the test-error is close to 0.5, then H0 is not rejected, however if the test error is low then H0 is rejected. U20 . In Section 3, we show that these generated samples in U20 are in fact close in total variational distance (defined in Section 3) to the conditionally independent distribution f CI (x, y, z) , fX|Z (x|z)fY |Z (y|z)fZ (z). (Note that only under H0 does the equality f CI (.) = fX,Y,Z (.) hold; our method generates samples close to f CI (x, y, z) under both hypotheses). (ii) Subsequently, the original samples kept aside in U1 are labeled 1 while the new samples simulated from the nearest-neighbor bootstrap (in U20 ) are labeled 0. The labeled samples (U1 with label 1 and U20 labeled 0) are aggregated into a data-set D. This set D is then broken into training and test sets Dr and De each containing n samples each. (iii) Given the labeled training data-set (from step (ii)), we train powerful classifiers such as gradient boosted trees [6] or deep neural networks [17] which attempt to learn the classes of the samples. If the trained classifier has good accuracy over the test set, then intuitively it means that the joint distribution fX,Y,Z (.) is distinguishable from f CI (note that the generated samples labeled 0 are close in distribution to f CI ). Therefore, we reject H0 . On the other hand, if the classifier has accuracy close to random guessing, then fX,Y,Z (.) is in fact close to f CI , and we fail to reject H0 . For independence testing (i.e whether X ⊥ ⊥ Y ), classifiers were recently used in [19]. Their key observation was that given i.i.d samples (X, Y ) from fX,Y (x, y), if the Y coordinates are randomly permuted then the resulting samples exactly emulate the distribution fX (x)fY (y). Thus the problem can be converted to a two sample test between a subset of the original samples and the other subset which is permuted - Binary classifiers were then harnessed for this two-sample testing; for details see [19]. However, in the case of CI testing we need to emulate samples from f CI . This is harder because the permutation of the samples needs to be Z dependent (which can be high-dimensional). One of our key technical contributions is in proving that our nearest-neighbor bootstrap in step (i) achieves this task. The advantage of this modular approach is that we can harness the power of classifiers (in step (iii) above), which have good accuracies in high-dimensions. Thus, any improvements in the field of binary classification imply an advancement in our CI test. Moreover, there is added flexibility in choosing the best classifier based on domain knowledge about the data-generation process. 
Finally, our bootstrap is also efficient owing to fast algorithms for identifying nearest-neighbors [24]. 1.1 Main Contributions (i) (Classification based CI testing) We reduce the problem of CI testing to Binary Classification as detailed in steps (i)-(iii) above and in Fig. 1. We simulate samples that are close to f CI through a novel nearest-neighbor bootstrap (Algorithm 1) given access to i.i.d samples from the joint distribution. The 2 problem of CI testing then reduces to a two-sample test between the original samples in U1 and U20 , which can be effectively done by binary classifiers. (ii) (Guarantees on Bootstrapped Samples) As mentioned in steps (i)-(iii), if the samples generated by the bootstrap (in U20 ) are close to f CI , then the CI testing problem reduces to testing whether the data-sets U1 and U20 are distinguishable from each other. We theoretically justify that this is indeed true. Let φX,Y,Z (x, y, z) denote the distribution of a sample produced by Algorithm 1, when it is supplied with 2n i.i.d samples from fX,Y,Z (.). In Theorem 1, we prove that dT V (φ, f CI ) = O(1/n1/dz ) under appropriate smoothness assumptions. Here dz is the dimension of Z and dT V denotes total variational distance (Def. 1). (iii) (Generalization Bounds for Classification under near-independence) The samples generated from the nearest-neighbor bootstrap do not remain i.i.d but they are close to i.i.d. We quantify this property and go on to show generalization risk bounds for the classifier. Let us denote the class of function encoded by the classifier as G. Let R̂ denote the probability of error of the optimal classifier ĝ ∈ G trained on the training set (Fig. 1). We prove that under appropriate assumptions, we have    q √ r0 − O(1/n1/dz ) ≤ R̂ ≤ r0 + O(1/n1/dz ) + O V n−1/3 + 2dz /n with high probability, upto log factors. Here r0 = 0.5(1 − dT V (f, f CI )), V is the VC dimension [30] of the class G. Thus when f is equivalent to f CI (H0 holds) then the error rate of the classifier is close to 0.5. But when H1 holds the loss is much lower. We provide a novel analysis of Rademacher complexity bounds [4] under near-independence which is of independent interest. (iv) (Empirical Evaluation) We perform extensive numerical experiments where our algorithm outperforms the state of the art [32, 28]. We also apply our algorithm for analyzing CI relations in the protein signaling network data from the flow cytometry data-set [26]. In practice we observe that the performance with respect to dimension of Z scales much better than expected from our worst case theoretical analysis. This is because powerful binary classifiers perform well in high-dimensions. 1.2 Related Work In this paper we address the problem of non-parametric CI testing when the underlying random variables are continuous. The literature on non-parametric CI testing is vast. We will review some of the recent work in this field that is most relevant to our paper. Most of the recent work in CI testing are kernel based [28, 32, 10]. Many of these works build on the study in [11], where non-parametric CI relations are characterized using covariance operators for Reproducing Kernel Hilbert Spaces (RKHS) [11]. KCIT [32] uses the partial association of regression functions relating X, Y , and Z. RCIT [28] is an approximate version of KCIT that attempts to improve running times when the number of samples are large. KCIPT [10] is perhaps most relevant to our work. In [10], a specific permutation of the samples is used to simulate data from f CI . 
An expensive linear program needs to be solved in order to calculate the permutation. On the other hand, we use a simple nearest-neighbor bootstrap, and further we provide theoretical guarantees on the closeness of the generated samples to f^CI in terms of total variational distance. Finally, the two-sample test in [10] is based on a kernel method [3], while we use binary classifiers for the same purpose.

There has also been recent work on entropy estimation [13] using nearest-neighbor techniques (used for density estimation); this can subsequently be used for CI testing by estimating the conditional mutual information I(X; Y |Z). Binary classification has recently been used for two-sample testing, in particular for independence testing [19]. Our analysis of the generalization guarantees of classification is aimed at recovering guarantees similar to [4], but in a non-i.i.d setting. In this regard (non-i.i.d generalization guarantees), there has been recent work on proving Rademacher complexity bounds for β-mixing stationary processes [21]. This work also falls in the category of machine learning reductions, where the general philosophy is to reduce various machine learning settings, such as multi-class regression [2], ranking [1], reinforcement learning [18] and structured prediction [8], to binary classification.

2 Problem Setting and Algorithms

In this section we describe the algorithmic details of our CI testing procedure. We first formally define our problem. Then we describe our bootstrap algorithm for generating the data-set that mimics samples from f^CI. We give detailed pseudo-code for our CI testing process, which reduces the problem to binary classification. Finally, we suggest further improvements to our algorithm.

Problem Setting: The problem setting is that of non-parametric Conditional Independence (CI) testing given i.i.d samples from the joint distribution of random variables/vectors [32, 10, 28]. We are given 3n i.i.d samples from a continuous joint distribution fX,Y,Z(x, y, z), where x ∈ R^dx, y ∈ R^dy and z ∈ R^dz. The goal is to test whether X ⊥⊥ Y |Z, i.e., whether fX,Y,Z(x, y, z) factorizes as

fX,Y,Z(x, y, z) = fX|Z(x|z) fY|Z(y|z) fZ(z) =: f^CI(x, y, z).

This is essentially a hypothesis testing problem with H0 : X ⊥⊥ Y |Z versus the alternative H1 that X is not conditionally independent of Y given Z.

Note: For notational convenience, we will drop the subscripts when the context is evident. For instance, we may use f(x|z) in place of fX|Z(x|z).

Nearest-Neighbor Bootstrap: Algorithm 1 is a procedure to generate a data-set U′ consisting of n samples, given a data-set U of 2n i.i.d samples from the distribution fX,Y,Z(x, y, z). The data-set U is broken into two equally sized partitions U1 and U2. Then, for each sample in U1, we find the nearest neighbor in U2 in terms of the Z coordinates. The Y-coordinates of the sample from U1 are exchanged with the Y-coordinates of its nearest neighbor (in U2); the modified sample is added to U′.

Algorithm 1 DataGen - Given data-set U = U1 ∪ U2 of 2n i.i.d samples from f(x, y, z) (|U1| = |U2| = n), returns a new data-set U′ having n samples.
1: function DataGen(U1, U2, 2n)
2:   U′ = ∅
3:   for u in U1 do
4:     Let v = (x′, y′, z′) ∈ U2 be the sample such that z′ is the 1-Nearest Neighbor (1-NN) of z (in ℓ2 norm) in the whole data-set U2, where u = (x, y, z)
5:     Let u′ = (x, y′, z) and U′ = U′ ∪ {u′}.
6:   end for
7: end function

One of our main results is that the samples in U′ generated by Algorithm 1 mimic samples coming from the distribution f^CI; a minimal Python sketch of this procedure is given below.
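The listing below is a minimal NumPy/SciPy sketch of this nearest-neighbor bootstrap. It is illustrative only and is not the released CCIT package referenced in Section 4; the function name nn_bootstrap and the row layout (each sample stored as the concatenation [X | Y | Z]) are assumptions made for the example.

import numpy as np
from scipy.spatial import cKDTree

def nn_bootstrap(U1, U2, dx, dy):
    # Sketch of Algorithm 1 (DataGen): for every sample u = (x, y, z) in U1,
    # find the sample in U2 whose Z-block is the 1-nearest neighbor (l2 norm)
    # of z, and replace the Y-block of u with the neighbor's Y-block.
    # Rows of U1 and U2 are assumed to be laid out as [X | Y | Z].
    Z1 = U1[:, dx + dy:]                      # Z-blocks of the samples in U1
    Z2 = U2[:, dx + dy:]                      # Z-blocks of the samples in U2
    tree = cKDTree(Z2)                        # k-d tree for fast 1-NN queries
    _, nn_idx = tree.query(Z1, k=1)           # index of the 1-NN in U2 for each z in U1
    U_prime = U1.copy()
    U_prime[:, dx:dx + dy] = U2[nn_idx, dx:dx + dy]   # swap in the neighbor's Y-block
    return U_prime

Building the k-d tree and answering the n nearest-neighbor queries takes roughly O(n log n) time on average in low dimensions, which is the efficiency point made above with reference to [24].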
Suppose u = (x, y, z) ∈ U1 be a sample such that fZ (z) is not too small. In this case z 0 (the 1-NN sample from U2 ) will not be far from z. Therefore given a fixed z, under appropriate smoothness assumptions, y 0 will be close to an independent sample coming from fY |Z (y|z 0 ) ∼ fY |Z (y|z). On the other hand if fZ (z) is small, then z is a rare occurrence and will not contribute adversely. CI Testing Algorithm: Now we introduce our CI testing algorithm, which uses Algorithm 1 along with binary classifiers. The psuedo-code is in Algorithm 2 (Classifier CI Test -CCIT). Algorithm 2 CCITv1 - Given data-set U of 3n i.i.d samples from f (x, y, z), returns if X ⊥ ⊥ Y |Z. function CCIT(U, 3n, τ, G) Partition U into three disjoint partitions U1 , U2 and U3 of size n each, randomly. Let U20 = DataGen(U2 , U3 , 2n) (Algorithm 1). Note that |U20 | = n. Create Labeled data-set D := {(u, ` = 1)}u∈U1 ∪ {(u0 , `0 = 0)}u0 ∈U20 Divide data-set D into train and test Pset Dr and De respectively. Note that |Dr | = |De | = n. Let ĝ = argming∈G L̂(g, Dr ) := |D1r | (u,`)∈Dr 1{g(u) 6= l}. This is Empirical Risk Minimization for training the classifier (finding the best function in the class G). 7: If L̂(ĝ, De ) > 0.5 − τ , then conclude X ⊥ ⊥ Y |Z, otherwise, conclude X ⊥ 6 ⊥ Y |Z. 8: end function 1: 2: 3: 4: 5: 6: 4 In Algorithm 2, the original samples in U1 and the nearest-neighbor bootstrapped samples in U20 should be almost indistinguishable if H0 holds. However, if H1 holds, then the classifier trained in Line 6 should be able to easily distinguish between the samples corresponding to different labels. In Line 6, G denotes the space of functions over which risk minimization is performed in the classifier. We will show (in Theorem 1) that the variational distance between the distribution of one of the samples in U20 and f CI (x, y, z) is very small for large n. However, the samples in U20 are not exactly i.i.d but close to i.i.d. Therefore, in practice for finite n, there is a small bias b > 0 i.e. L̂(ĝ, De ) ∼ 0.5 − b, even when H0 holds. The threshold τ needs to be greater than b in order for Algorithm 2 to function. In the next section, we present an algorithm where this bias is corrected. Algorithm with Bias Correction: We present an improved bias-corrected version of our algorithm as Algorithm 3. As mentioned in the previous section, in Algorithm 2, the optimal classifier may be able to achieve a loss slightly less that 0.5 in the case of finite n, even when H0 is true. However, the classifier is expected to distinguish between the two data-sets only based on the Y, Z coordinates, as the joint distribution of X and Z remains the same in the nearest-neighbor bootstrap. The key idea in Algorithm 3 is to train a classifier only using the Y and Z coordinates, denoted by ĝ 0 . As before we also train another classier using all the coordinates, which is denoted by ĝ. The test loss of ĝ 0 is expected to be roughly 0.5 − b, where b is the bias mentioned in the previous section. Therefore, we can just subtract this bias. Thus, when H0 is true L̂(ĝ 0 , De0 ) − L̂(ĝ, De ) will be close to 0. However, when H1 holds, then L̂(ĝ, De ) will be much lower, as the classifier ĝ has been trained leveraging the information encoded in all the coordinates. Algorithm 3 CCITv2 - Given data-set U of 3n i.i.d samples, returns whether X ⊥ ⊥ Y |Z. function CCIT(U, 3n, τ, G) Perform Steps 1-5 as in Algorithm 2. Let Dr0 = {((y, z), `)}(u=(x,y,z),`)∈Dr . Similarly, let De0 = {((y, z), `)}(u=(x,y,z),`)∈De . 
These are the training and test sets without the X-coordinates. P 4: Let ĝ = argming∈G L̂(g, Dr ) := |D1r | (u,`)∈Dr 1{g(u) 6= l}. Compute test loss: L̂(ĝ, De ). P 5: Let ĝ 0 = argming∈G L̂(g, Dr0 ) := |D10 | (u,`)∈Dr0 1{g(u) 6= l}. Compute test loss: L̂(ĝ 0 , De0 ). 1: 2: 3: r 6: 7: 3 6 ⊥ Y |Z, otherwise, conclude X ⊥ ⊥ Y |Z. If L̂(ĝ, De ) < L̂(ĝ 0 , De0 ) − τ , then conclude X ⊥ end function Theoretical Results In this section, we provide our main theoretical results. We first show that the distribution of any one of the samples generated in Algorithm 1 closely resemble that of a sample coming from f CI . This result holds for a broad class of distributions fX,Y,Z (x, y, z) which satisfy some smoothness assumptions. However, the samples generated by Algorithm 1 (U2 in the algorithm) are not exactly i.i.d but close to i.i.d. We quantify this and go on to show that empirical risk minimization over a class of classifier functions generalizes well using these samples. Before, we formally state our results we provide some useful definitions. Definition 1. The total variational distance between two continuous probability distributions f (.) and g(.) defined over a domain X is, dT V (f, g) = supp∈B |Ef [p(X)] − Eg [p(X)]| where B is the set of all measurable functions from X → [0, 1]. Here, Ef [.] denotes expectation under distribution f . We first prove that the distribution of any one of the samples generated in Algorithm 1 is close to f CI in terms of total variational distance. We make the following assumptions on the joint distribution of the original samples i.e. fX,Y,Z (x, y, z): Smoothness assumption on f (y|z): We assume a smoothness condition on f (y|z), that is a generalization of boundedness of the max. eigenvalue of Fisher Information matrix of y w.r.t z. 5 Assumption 1. For z ∈ Rdz , a such that ka − zk2 ≤ 1 , the generalized curvature matrix Ia (z) is, # " ! Z f (y|z) ∂2 δ 2 log f (y|z 0 ) Z=z log Ia (z)ij = =E − f (y|z)dy ∂zi0 ∂zj0 f (y|z 0 ) δzi0 δzj0 z 0 =a 0 (1) z =a dz We require that for all z ∈ R and all a such that ka − zk2 ≤ 1 , λmax (Ia (z)) ≤ β. Analogous assumptions have been made on the Hessian of the density in the context of entropy estimation [12]. Smoothness assumptions on f (z): We assume some smoothness properties of the probability density function f (z). The smoothness assumptions (in Assumption 2) is a subset of the assumptions made in [13] (Assumption 1, Page 5) for entropy estimation. Definition 2. For any δ > 0, we define G(δ) = P (f (Z) ≤ δ). This is the probability mass of the distribution of Z in the areas where the p.d.f is less than δ. Definition 3. (Hessian Matrix) Let Hf (z) denote the Hessian Matrix of the p.d.f f (z) with respect to z i.e Hf (z)ij = ∂ 2 f (z)/∂zi ∂zj , provided it is twice continuously differentiable at z. Assumption 2. The probability density function f (z) satisfies the following: (1) f (z) is twice continuously differentiable and the Hessian matrix Hf satisfies kHf (z)k2 ≤ cdz almost everywhere, where cdz is only dependent on the dimension. R (2) f (z)1−1/d dz ≤ c3 , ∀d ≥ 2 where c3 is a constant. Theorem 1. Let (X, Y 0 , Z) denote a sample in U20 produced by Algorithm 1 by modifying the original sample (X, Y, Z) in U1 , when supplied with 2n i.i.d samples from the original joint distribution fX,Y,Z (x, y, z). Let φX,Y,Z (x, y, z) be the distribution of (X, Y 0 , Z). 
Under smoothness assumptions (1) and (2), for any  < 1 , n large enough, we have: dT V (φ, f CI ) ≤ b(n) s    1 1 β c3 ∗ 21/dz Γ(1/dz ) βG (2cdz 2 ) dz +2 2 + exp − nγ , + c  + G 2c  . d d d z z z 2 4 (nγdz )1/dz dz 4 2 Here, γd is the volume of the unit radius `2 ball in Rd . Theorem 1 characterizes the variational distance of the distribution of a sample generated in Algorithm 1 with that of the conditionally independent distribution f CI . We defer the proof of Theorem 1 to Appendix A. Now, our goal is to characterize the misclassification error of the trained classifier in Algorithm 2 under both H0 and H1 . Consider the distribution of the samples in the data-set Dr used for classification in Algorithm 2. Let q(x, y, z|` = 1) be the marginal distribution of each sample with label 1. Similarly, let q(x, y, z|` = 0) denote the marginal distribution of the label 0 samples. Note that under our construction,  CI f (x, y, z) if H0 holds q(x, y, z|` = 1) = fX,Y,Z (x, y, z) = 6= f CI (x, y, z) if H1 holds q(x, y, z|` = 0) = φX,Y,Z (x, y, z) (2) where φX,Y,Z (x, y, z) is as defined in Theorem 1. Note that even though the marginal of each sample with label 0 is φX,Y,Z (x, y, z) (Equation (2)), they are not exactly i.i.d owing to the nearest neighbor bootstrap. We will go on to show that they are actually close to i.i.d and therefore classification risk minimization generalizes similar to the i.i.d results for classification [4]. First, we review standard definitions and results from classification theory [4]. Ideal Classification Setting: We consider an ideal classification scenario for CI testing and in the process define standard quantities in learning theory. Recall that G is the set of classifiers under consideration. Let CI q̃ be our ideal distribution for q given by q̃(x, y, z|` = 1) = fX,Y,Z (x, y, z), q̃(x, y, z|` = 0) = fX,Y,Z (x, y, z) and q̃(` = 1) = q̃(` = 0) = 0.5. In other words this is the ideal classification scenario for testing CI. Let L(g(u), `) be our loss function for a classifying function g ∈ G, for a sample u , (x, y, z) with true label `. In our algorithms the loss function is the 0 − 1 loss, but our results hold for any bounded loss function s.t. |L(g(u), `)| ≤ |L|. For a distribution q̃ and a classifier g let Rq̃ (g) , Eu,`∼q̃ [L(g(u), `)] be the expected risk 6 of the function g. The risk optimal classifier gq̃∗ under q̃ is given by gq̃∗ , arg ming∈G Rq̃ (g). Similarly for P 1 a set of samples S and a classifier g, let RS (g) , |S| u,`∈S L(g(u), `) be the empirical risk on the set of samples. We define gS as the classifier that minimizes the empirical loss on the observed set of samples S that is, gS , arg ming∈G RS (g). If the samples in S are generated independently from q̃, then standard results from the learning theory states that with probability ≥ 1 − δ, r r V 2 log(1/δ) ∗ Rq̃ (gS ) ≤ Rq̃ (gq̃ ) + C + , (3) n n where V is the VC dimension [30] of the classification model, C is an universal constant and n = |S|. Guarantees under near-independent samples: Our goal is to prove a result like (3), for the classification problem in Algorithm 2. However, in this case we do not have access to i.i.d samples because the samples in U20 do not remain independent. We will see that they are close to independent in some sense. This brings us to one of our main results in Theorem 2. Theorem 2. Assume that the joint distribution f (x, y, z) satisfies the conditions in Theorem 1. Further assume that f (z) has a bounded Lipschitz constant. 
Consider the classifier ĝ in Algorithm 2 trained on the set Dr . Let S = Dr . Then according to our definition gS = ĝ. For  > 0 we have: (i) Rq (gS ) − Rq (gq∗ ) ≤ γn !  ! ! r 1/3 r d √ log(n/δ) 1 4 z log(n/δ) + on (1/) , C|L| + G() , V + log + δ n n with probability at least 1 − 8δ. Here V is the V.C. dimension of the classification function class, G is as defined in Def. 2, C is an universal constant and |L| is the bound on the absolute value of the loss. (ii) Suppose the loss is L(g(u), `) = 1g(u)6=` (s.t |L| ≤ 1). Further suppose the class of classifying functions is such that Rq (gq∗ ) ≤ r0 + η. Here, r0 , 0.5(1 − dT V (q(x, y, z|1), q(x, y, z|0))) is the risk of the Bayes optimal classifier when q(` = 1) = q(` = 0). This is the best loss that any classifier can achieve for this classification problem [4]. Under this setting, w.p at least 1 − 8δ we have:  b(n)  b(n) 1 1 1 − dT V (f, f CI ) − ≤ Rq (gS ) ≤ 1 − dT V (f, f CI ) + + η + γn 2 2 2 2 where b(n) is as defined in Theorem 1. We prove Theorem 2 as Theorem 3 and Theorem 4 in the appendix. In part (i) of the theorem we prove that generalization bounds hold even when the samples are not exactly i.i.d. Intuitively, consider two sample inputs ui , uj ∈ U1 , such that corresponding Z coordinates zi and zj are far away. Then we expect the resulting samples u0i and u0j (in U20 ) to be nearly-independent. By carefully capturing this notion of spatial near-independence, we prove generalization errors in Theorem 3. Part (ii) of the theorem essentially implies that the error of the trained classifier will be close to 0.5 (l.h.s) when f ∼ f CI (under H0 ). On the other hand under H1 if dT V (f, f CI ) > 1 − γ, the error will be less than 0.5(γ + b(n)) + γn which is small. 4 Empirical Results In this section we provide empirical results comparing our proposed algorithm and other state of the art algorithms. The algorithms under comparison are: (i) CCIT - Algorithm 3 in our paper where we use XGBoost [6] as the classifier. In our experiments, for each data-set we boot-strap the samples and run our algorithm B times. The results are averaged over B bootstrap runs1 . (ii) KCIT - Kernel CI test from [32]. We use the Matlab code available online. (iii) RCIT - Randomized CI Test from [28]. We use the R package that is publicly available. 1 The python package for our implementation can be found here (https://github.com/rajatsen91/CCIT). 7 4.1 Synthetic Experiments We perform the synthetic experiments in the regime of post-nonlinear noise similar to [32]. In our experiments X and Y are dimension 1, and the dimension of Z scales (motivated by causal settings and also used in [32, 28]). X and Y are generated according to the relation G(F (Z) + η) where η is a noise term and G is a non-linear function, when the H0 holds. In our experiments, the data is generated as follows: (i) when X ⊥ ⊥ Y |Z, then each coordinate of Z is a Gaussian with unit mean and variance, X = cos(aT Z + η1 ) and Y = cos(bT Z + η2 ). Here, a, b ∈ Rdz and kak = kbk = 1. a,b are fixed while generating a single dataset. η1 and η2 are zero-mean Gaussian noise variables, which are independent of everything else. We set V ar(η1 ) = V ar(η2 ) = 0.25. (ii) when X ⊥ 6 ⊥ Y |Z, then everything is identical to (i) except that Y = cos(bT Z + cX + η2 ) for a randomly chosen constant c ∈ [0, 2]. In Fig. 2a, we plot the performance of the algorithms when the dimension of Z scales. For generating each point in the plot, 300 data-sets were generated with the appropriate dimensions. 
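For concreteness, a minimal sketch of this post-nonlinear-noise generator is shown below. The function name gen_post_nonlinear and the use of NumPy's default random generator are illustrative assumptions; the exact seeding used in the paper's experiments is not reproduced here.

import numpy as np

def gen_post_nonlinear(n, dz, conditionally_independent=True, rng=None):
    # Generate n samples (X, Y, Z) with each coordinate of Z ~ N(1, 1),
    # X = cos(a'Z + eta1) and Y = cos(b'Z [+ c*X] + eta2), Var(eta) = 0.25.
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal(dz); a /= np.linalg.norm(a)   # fixed unit vectors per data-set
    b = rng.standard_normal(dz); b /= np.linalg.norm(b)
    Z = rng.normal(loc=1.0, scale=1.0, size=(n, dz))
    eta1 = rng.normal(scale=0.5, size=n)
    eta2 = rng.normal(scale=0.5, size=n)
    X = np.cos(Z @ a + eta1)
    if conditionally_independent:            # H0: X independent of Y given Z
        Y = np.cos(Z @ b + eta2)
    else:                                    # H1: Y also depends on X
        c = rng.uniform(0.0, 2.0)
        Y = np.cos(Z @ b + c * X + eta2)
    return X, Y, Z

A single H1 data-set of dimension dz = 20 with n = 1000 samples (the sample size used in Fig. 2a) is obtained as X, Y, Z = gen_post_nonlinear(1000, 20, conditionally_independent=False).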
Half of them are according to H0 and the other half are from H1 Then each of the algorithms are run on these data-sets, and the ROC AUC (Area Under the Receiver Operating Characteristic curve) score is calculated from the true labels (CI or not CI) for each data-set and the predicted scores. We observe that the accuracy of CCIT is close to 1 for dimensions upto 70, while all the other algorithms do not scale as well. In these experiments the number √ of bootstraps per data-set for CCIT was set to B = 50. We set the threshold in Algorithm 3 to τ = 1/ n, which is an upper-bound on the expected variance of the test-statistic when H0 holds. 4.2 Flow-Cytometry Dataset We use our CI testing algorithm to verify CI relations in the protein network data from the flow-cytometry dataset [26], which gives expression levels of 11 proteins under various experimental conditions. The ground truth causal graph is not known with absolute certainty in this data-set, however this dataset has been widely used in the causal structure learning literature. We take three popular learned causal structures that are recovered by causal discovery algorithms, and we verify CI relations assuming these graphs to be the ground truth. The three graph are: (i) consensus graph from [26] (Fig. 1(a) in [22]) (ii) reconstructed graph by Sachs et al. [26] (Fig. 1(b) in [22]) (iii) reconstructed graph in [22] (Fig. 1(c) in [22]). For each graph we generate CI relations as follows: for each node X in the graph, identify the set Z consisting of its parents, children and parents of children in the causal graph. Conditioned on this set Z, X is independent of every other node Y in the graph (apart from the ones in Z). We use this to create all CI conditions of these types from each of the three graphs. In this process we generate over 60 CI relations for each of the graphs. In order to evaluate false positives of our algorithms, we also need relations such that X⊥ 6 ⊥ Y |Z. For, this we observe that if there is an edge between two nodes, they are never CI given any other conditioning set. For each graph we generate 50 such non-CI relations, where an edge X ↔ Y is selected at random and a conditioning set of size 3 is randomly selected from the remaining nodes. We construct 50 such negative examples for each graph. In Fig. 2, we display the performance of all three algorithms based on considering each of the three graphs as ground-truth. The algorithms are given access to observational data for verifying CI and non-CI relations. In Fig. 2b we display the ROC plot for all three algorithms for the data-set generated by considering graph (ii). In Table 2c we display the ROC AUC score for the algorithms for the three graphs. It can be seen that our algorithm outperforms the others in all three cases, even when the dimensionality of Z is fairly low (less than 10 in all cases). An interesting thing to note is that the edges (pkc-raf), (pkc-mek) and (pka-p38) are there in all the three graphs. However, all three CI testers CCIT, KCIT and RCIT are fairly confident that these edges should be absent. These edges may be discrepancies in the ground-truth graphs and therefore the ROC AUC of the algorithms are lower than expected. 8 1.0 1.0 CCIT RCIT KCIT 0.8 True Positives ROC AUC 0.9 0.8 0.7 0.6 0.4 CCIT RCIT KCIT 0.2 0.6 05 20 50 70 100 Dimension of Z (a) 150 0.00.0 0.2 0.4 0.6 False Positives (b) 0.8 1.0 Algo. 
Graph (i) Graph (ii) Graph (iii) CCIT RCIT KCIT 0.6848 0.6448 0.6528 0.7778 0.7168 0.7416 0.7156 0.6928 0.6610 (c) Figure 2: In (a) we plot the performance of CCIT, KCIT and RCIT in the post-nonlinear noise synthetic data. In generating each point in the plots, 300 data-sets are generated where half of them are according to H0 while the rest are according to H1 . The algorithms are run on each of them, and the ROC AUC score is plotted. In (a) the number of samples n = 1000, while the dimension of Z varies. In (b) we plot the ROC curve for all three algorithms based on the data from Graph (ii) for the flow-cytometry dataset. The ROC AUC score for each of the algorithms are provided in (c), considering each of the three graphs as ground-truth. 5 Conclusion In this paper we present a model-powered approach for CI tests by converting it into binary classification, thus empowering CI testing with powerful supervised learning tools like gradient boosted trees. We provide an efficient nearest-neighbor bootstrap which makes the reduction to classification possible. We provide theoretical guarantees on the bootstrapped samples, and also risk generalization bounds for our classification problem, under non-i.i.d near independent samples. In conclusion we believe that model-driven data dependent approaches can be extremely useful in general statistical testing and estimation problems as they enable us to use powerful supervised learning tools. References [1] Maria-Florina Balcan, Nikhil Bansal, Alina Beygelzimer, Don Coppersmith, John Langford, and Gregory Sorkin. Robust reductions from ranking to classification. Learning Theory, pages 604–619, 2007. [2] Alina Beygelzimer, John Langford, Yuri Lifshits, Gregory Sorkin, and Alex Strehl. Conditional probability tree estimation analysis and algorithms. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 51–58. AUAI Press, 2009. [3] Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Schölkopf, and Alex J Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57, 2006. [4] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: probability and statistics, 9:323–375, 2005. [5] Eliot Brenner and David Sontag. Sparsityboost: A new scoring function for learning bayesian network structure. arXiv preprint arXiv:1309.6820, 2013. [6] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM, 2016. 9 [7] Jie Cheng, David Bell, and Weiru Liu. Learning bayesian networks from data: An efficient approach based on information theory. On World Wide Web at http://www. cs. ualberta. ca/˜ jcheng/bnpc. htm, 1998. [8] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine learning, 75(3):297–325, 2009. [9] Luis M De Campos and Juan F Huete. A new approach for learning belief networks using independence criteria. International Journal of Approximate Reasoning, 24(1):11–37, 2000. [10] Gary Doran, Krikamol Muandet, Kun Zhang, and Bernhard Schölkopf. A permutation-based kernel conditional independence test. In UAI, pages 132–141, 2014. [11] Kenji Fukumizu, Francis R Bach, and Michael I Jordan. Dimensionality reduction for supervised learning with reproducing kernel hilbert spaces. 
Journal of Machine Learning Research, 5(Jan):73–99, 2004. [12] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Breaking the bandwidth barrier: Geometrical adaptive entropy estimation. In Advances in Neural Information Processing Systems, pages 2460–2468, 2016. [13] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016. [14] Markus Kalisch and Peter Bühlmann. Estimating high-dimensional directed acyclic graphs with the pc-algorithm. Journal of Machine Learning Research, 8(Mar):613–636, 2007. [15] Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009. [16] Daphne Koller and Mehran Sahami. Toward optimal feature selection. Technical report, Stanford InfoLab, 1996. [17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [18] John Langford and Bianca Zadrozny. Reducing t-step reinforcement learning to classification. In Proc. of the Machine Learning Reductions Workshop, 2003. [19] David Lopez-Paz and Maxime Oquab. arXiv:1610.06545, 2016. Revisiting classifier two-sample tests. arXiv preprint [20] Colin McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989. [21] Mehryar Mohri and Afshin Rostamizadeh. Rademacher complexity bounds for non-iid processes. In Advances in Neural Information Processing Systems, pages 1097–1104, 2009. [22] Joris Mooij and Tom Heskes. Cyclic causal discovery from continuous equilibrium data. arXiv preprint arXiv:1309.6849, 2013. [23] Judea Pearl. Causality. Cambridge university press, 2009. [24] V Ramasubramanian and Kuldip K Paliwal. Fast k-dimensional tree algorithms for nearest neighbor search with application to vector quantization encoding. IEEE Transactions on Signal Processing, 40(3):518–531, 1992. [25] Bero Roos. On the rate of multivariate poisson convergence. Journal of Multivariate Analysis, 69(1):120– 134, 1999. [26] Karen Sachs, Omar Perez, Dana Pe’er, Douglas A Lauffenburger, and Garry P Nolan. Causal proteinsignaling networks derived from multiparameter single-cell data. Science, 308(5721):523–529, 2005. [27] Peter Spirtes, Clark N Glymour, and Richard Scheines. Causation, prediction, and search. MIT press, 2000. 10 [28] Eric V Strobl, Kun Zhang, and Shyam Visweswaran. Approximate kernel-based conditional independence tests for fast non-parametric causal discovery. arXiv preprint arXiv:1702.03877, 2017. [29] Ioannis Tsamardinos, Laura E Brown, and Constantin F Aliferis. The max-min hill-climbing bayesian network structure learning algorithm. Machine learning, 65(1):31–78, 2006. [30] Vladimir N Vapnik and A Ya Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In Measures of Complexity, pages 11–30. Springer, 2015. [31] Eric P Xing, Michael I Jordan, Richard M Karp, et al. Feature selection for high-dimensional genomic microarray data. In ICML, volume 1, pages 601–608. Citeseer, 2001. [32] Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Kernel-based conditional independence test and application in causal discovery. arXiv preprint arXiv:1202.3775, 2012. 
11 A Guarantees on Bootstrapped Samples In this section we prove that the samples generated in Algorithm 1, through the nearest neighbor bootstrap, are close to samples generated from f CI (x, y, z) = fX|Z (x|z)fY |Z (y|z)fZ (z). The closeness is characterized in terms of total variational distance as in Theorem 1. Suppose 2n i.i.d samples from distribution f (x, y, z) are supplied to Algorithm 1. Consider a typical sample (X, Y, Z) ∼ f (x, y, z), which is modified to produce a typical sample in U20 (refer to Algorithm 1) denoted by (X, Y 0 , Z). Here, Y 0 are the Y -coordinates of a sample (X 0 , Y 0 , Z 0 ) in U2 such that Z 0 is the nearest neighbor of Z. Let us denote the marginal distribution of a typical sample in U20 by φX,Y,Z (x, y, z), i.e (X, Y 0 , Z) ∼ φX,Y,Z (x, y, z). Now we are at a position to prove Theorem 1. Proof of Theorem 1. Let fZ 0 |z (z 0 ) denote the conditional p.d.f of the variable Z 0 (that is the nearest neighbor of sample Z in U2 ), conditioned on Z = z. Therefore, the distribution of the new-sample is given by, Z φX,Y,Z (x, y, z) = fX|Z (x|z)fZ (z) fY |Z (y|z 0 )fZ 0 |z (z 0 )dz 0 . (4) CI We want to bound the total variational distance between φX,Y,Z (x, y, z) and fX,Y,Z (x, y, z). We have the following chain: 2 ∗ dT V (φ, f CI ) = Z fX|Z (x|z)fY |Z (y|z)fZ (z) − fX|Z (x|z)fZ (z) x,y,z Z  Z = x,y,z ≤ fX|Z (x|z)fY |Z (y|z)fZ (z) Z Z x,y,z Z 0 = Z ≤ x,z,kz 0 −zk2 ≤ Z fX|Z (x|z)fZ (z)fZ 0 |z (z 0 ) Z  fY |Z (y|z 0 )fZ 0 |z (z 0 )dz 0 dxdydz fZ 0 |z (z 0 )dz 0 dxdydz fY |Z (y|z 0 ) fZ 0 |z (z 0 )dz 0 dxdydz fY |Z (y|z) ! fY |Z (y|z) − fY |Z (y|z 0 ) dy dz 0 dxdz ! fY |Z (y|z) − fY |Z (y|z 0 ) dy dz 0 dxdz+ fX|Z (x|z)fZ (z)fZ 0 |z (z 0 )dz 0 dxdz 2 x,z,kz 0 −zk2 > ≤ Z fX|Z (x|z)fZ (z)fZ 0 |z (z ) x,z,z 0 fY |Z (y|z 0 ) 1− fY |Z (y|z) 1− fX|Z (x|z)fY |Z (y|z)fZ (z) Z Z x,z,kz 0 −zk2 ≤ fX|Z (x|z)fZ (z)fZ 0 |z (z 0 ) Z ! fY |Z (y|z) − fY |Z (y|z 0 ) dy dz 0 dxdz+ 2 ∗ P (kz 0 − zk2 > ) (5) By Pinsker’s inequality, we have: Z y s Z 1 f (y|z) fY |Z (y|z) − fY |Z (y|z ) dy ≤ log f (y|z)dy 2 y f (y|z 0 ) 0 By Taylor’s expansion with second-order residual, we have: Z 1 f (y|z) log f (y|z)dy = (z 0 − z)T Ia (z)(z 0 − z) 0 f (y|z ) 2 12 (6) (7) for some a = λz + (1 − λ)z 0 where 0 ≤ λ ≤ 1. Under Assumption 1 and  < 1 we have, (z 0 − z)T Ia (z)(z 0 − z) ≤ βkz 0 − zk22 . (8) Then, (8), (7), (6) and (5) imply: r 2 ∗ dT V (φ, f CI )≤ β E[kz 0 − zk2 1kz0 −zk2 ≤ ] + 2P (kz 0 − zk2 > ) 4 (9) We now bound both terms separately. Let Z1 , Z2 ..., Zn be distributed i.i.d according to f (z). Then, fZ 0 |z (·) is the pdf of the nearest neighbor of z among Z1 , . . . , Zn . A.0.1 Bounding the first term In this section we will use d in place of dz for notational simplicity. Let γd be the volume of the unit `2 ball in dimension d. Let S = {z : f (z) ≥ 2 ∗ cd 2 }. This implies, that for z ∈ S: f (z) − cd 2 ≥ f (z)/2 (10) Let Z 0 = argminZ1 ,Z2 ...,Zn kZi − zk2 be the random variable which is the nearest neighbor to a point z among n i.i.d samples Zi drawn from the distribution whose pdf is f (z) that satisfies assumption 2. Let r(z) = ||z − z 0 ||2 . Let F (r) be the CDF of the random variable R. Since R is a non-negative random variable, Z Z Z  ER [r(z)1r≤ ] = rdF (r) = [rF (r)]0 − F (r)dr ≤ P (R > r)dr (11) r≤ r≤ r≤ For any r ≤ , observe that Pr(R > r) = Pr(@i : zi ∈ B(z, r)) n = (1 − Pr(Z ∈ B(z, r))) ≤ exp(−nPr(Z ∈ B(z, r))) (12) We have the following chain to bound Pr(Z ∈ B(z, r)). Let a = λz + (1 − λ)t. 
Z d |Pr(Z ∈ B(z, r)) − f (z)γd r | ≤ (f (t) − f (z))dt t∈B(z,r) Z = t∈B(z,r) (∇f (z)T .(t − z) + (t − z)T Hf (a)(t − z))dt ≤ max kHf (t)k2 t∈B(z,r) Z t∈B(z,r) kt − zk22 dt ≤ cd γd rd+2 (13) By putting together (11),(12) and(13), we have: Z  −nγd r d (f (z)−cd r 2 ) ER [r(z)1r≤ ]≤1z∈S e dr + 1z∈S c r≤ Z  d 1 ≤ 1z∈S e− 2 nγd f (z)r dr + 1z∈S c r≤ ! e−t −1+1/d ≤ 1z∈S t dt + 1z∈S c 1 1/d t≤ 12 nγd f (z)d d[ 2 nγd f (z)]   21/d Γ(1/d) + 1z∈S c ≤ 1z∈S d(nγd f (z))1/d Z 13 Therefore, the first term in bounded by: E[kz 0 − zk2 1kz0 −zk2 ≤ ] ≤ EZ [ER [r(z)1r≤ ]]   21/d c Γ(1/d) + 1z∈S ≤ EZ d(nγd f (z))1/d  c3 ∗ 21/d ≤ Γ(1/d) +  ∗ G 2cd 2 1/d (nγd ) d (14) (15) (16) (17) A.0.2 Bounding the second term We now bound the second term as follows: Pr(||z − z 0 ||2 > ) ≤ EZ [Pr(R > )]    nγd f (z)d ≤ EZ 1z∈S exp − + Pr(z ∈ S c ) 2   ≤ exp −nγd cd d+2 /2 + G 2cd 2 Substituting in (9), we have: s 2 ∗ dT V (g, f CI ) ≤   β c3 ∗ 21/d Γ(1/d) βG (2cd 2 ) d+2 2 + + 2 exp −nγ c  /2 + 2G 2c  d d d 4 (nγd )1/d d 4 Substitute dz in place of d to recover Theorem 1. B Generalization Error Bounds on Classification In this section, we will prove generalization error bounds for our classification problem in Algorithm 2. Note the samples in U20 are not i.i.d, so standard risk bounds do not hold. We will leverage a spatial near independence property to provide generalization bounds under non-i.i.d samples. In what follows, we will prove the results for any bounded loss function L(g(u), `) ≤ |L|. Let S , Dr i.e., the set of training samples. For 1 ≤ i ≤ 3, let Zi , {z : (x, y, z) ∈ Ui }. Let Z , Z1 ∪ Z2 . Observe that Rq (gS ) ≤ Rq (gq∗ ) + 2 sup(RS (g) − Rq (g)), (18) g∈G and hence in the rest of the section we upper bound supg∈G (RS (g) − Rq (g)). To this end, we define conditional risk RS (q|Z) as 1 X RS (g|Z) , E[L(g(u), `)|Z]. n (u,`)∈S By triangle inequality, sup(RS (g) − Rq (g)) ≤ sup(RS (g) − RS (g|Z)) + sup(RS (g|Z) − Rq (g)). g∈G g∈G g∈G We first bound the second term in the right hand side of Equation (19) in the next lemma. Lemma 1. With probability at least 1 − δ, r r V 2 log(1/δ) sup(RS (g|Z) − Rq (g)) ≤ |L|C + |L| , n n g∈G where V is the VC dimension of the classification model. 14 (19) Proof. For a sample u = (x, y, z), observe that Z → z → u forms a Markov chain. Hence, E[L(g(u), `)|Z] = E[L(g(u), `)|Z]. Let h(Z) , E[L(g(u), `)|Z] (20) Hence, RS (g|Z) − Rq (g) = 1 X h(Z) − Eq [h(Z)]. n z∈Z The above term is the average of n independent random variables h(Z) and hence we can apply standard tools from learning theory [4] to obtain r r Vh 2 log(1/δ) sup(RS (g|Z) − Rq (g)) ≤ |L|C + |L| , n n g∈G where Vh is the VC dimension of the class of models of h. The lemma follows from the fact that VC dimension of h is smaller than the VC dimension of the underlying classification model. We next bound the first term in the RHS of Equation (19). Proof is given in Appendix B.1. Lemma 2. Let  > 0. If the Hessian of the density f (z)and the Lipscitz constant of the same is bounded, then with probability at least 1 − 7δ, !  ! r 1/3 r d √ log(n/δ) 1 4 log(n/δ) + on (1/) sup(RS (g) − Rn (g|Z)) ≤ |L| V + log + δ n n g∈G + |L|G(). (21) Lemmas 1 and 2, together with Equations (18) and (19) yield the following theorem. Theorem 3. Let  > 0. If the Hessian and the Lipscitz constant of f (z) is bounded, then with probability at least 1 − 8δ, !  ! ! r 1/3 r d √ 1 log(n/δ) 4 log(n/δ) + on (1/) ∗ V + log + + G() , (22) Rq (ĝ) ≤ Rq (gq ) + c|L| δ n n where c is a universal constant and ĝ is the minimizer in Step 6 of Algorithm 2. 
B.1 Proof of Lemma 2 We need few definitions to prove Lemma 2. For a point z, let Bn (z) be a ball around it such that 2 Pr (Z ∈ Bn (z)) = Z∼f (z) log nδ , αn . n Intuitively, with high probability the nearest neighbor of each sample z lies in Bn (z). We formalize it in the next lemma. Lemma 3. With probability > 1 − δ, the nearest neighbor of each sample z ∈ Z2 in Z3 lie in B(z). Proof. The probability that none of Z3 appears in B(z) is 1 − (1 − αn )n ≤ δ/n. The lemma follows by the union bound. We now bound the probability that the the nearest neighbor balls Bn () intersect for two samples. 15 Lemma 4. Let  > 0. If the Hessian of the density (f (z)) is bounded by c and the Lipschitz constant is bounded by γ, then for any given z1 such that f (z1 ) ≥  and a sample z2 ∼ f , Pr (Bn (z1 ) ∩ Bn (z2 ) 6= ∅) ≤ βn , 4d αn (1 + on (1/)). z2 ∼f Proof. Let rn (z) denote the radius of Bn (z). Let B(z, r) denote the ball of radius r around z and V (z, r) be its volume. We can rewrite βn as = Pr(Bn (z1 ) ∩ Bn (z2 ) 6= ∅) = Pr(Bn (z1 ) ∩ Bn (z2 ) 6= ∅, 3rn (z1 ) ≥ rn (z2 )) + Pr(Bn (z1 ) ∩ Bn (z2 ) 6= ∅, 3rn (z1 ) < rn (z2 )). We first bound the first term. Note that Z f (z 0 )dz 0 = αn , z 0 ∈Bn (z) Hence, by Taylor’s series expansion and the bound on Hessian yields, Z  αn = f (z 0 )dz 0 = V (z, rn (z)) f (z) + O(rn2 (z)c) . z 0 ∈Bn (z) Similarly,  Pr(z 0 ∈ V (z, 4rn (z))) = V (z, 4rn (z)) f (z) + O(9rn2 (z)c) = 4d αn (1 + on (1/)), where the last equality follows from the fact that V (z, 4rn (z))/V (z, rn (z)) = 4d in d dimensions. Then the first term can be bounded as Pr(Bn (z1 ) ∩ Bn (z2 ) 6= ∅, 3rn (z1 ) ≥ rn (z2 )) = Pr(z2 ∈ B(z1 , rn (z1 ) + rn (z2 )), 3rn (z1 ) ≥ rn (z2 )) ≤ Pr(z2 ∈ B(z1 , 4rn (z1 )), 3rn (z1 ) ≥ rn (z2 )) ≤ Pr(z2 ∈ B(z1 , 4rn (z1 ))) ≤ 4d αn (1 + on (1/)). To bound the second term, observe that if Bn (z1 ) ∩ Bn (z2 ) 6= ∅ and 3rn (z1 ) < rn (z2 ). There exists a point z 0 on the line joining z1 and z2 at distance 3rn (z1 ) from z1 such that Pr(z 00 ∈ B(z 0 , 3rn (z1 ))) < αn . As before bound on the Hessian yields, αn > V (z 0 , 3rn (z1 ))(f (z 0 ) − O(9rn2 (z1 )c)). Hence, f (z 0 ) < 3−d (f (z) + O(rn2 (z)c)) + O(9rn2 (z1 )c). However, f (z) >  and f (z 0 ) ≥ f (z) − 3rn (z1 )γ and rn (z1 ) → 0. Hence, a contradiction. Thus Pr(Bn (z1 ) ∩ Bn (z2 ) 6= ∅, 3rn (z1 ) < rn (z2 )) = 0. Consider the graph on indices [n], such that two indices are connected if and only if Bn (zi ) ∩ Bn (zj ) 6= ∅, f (z1 ) ≥ , f (z2 ) ≥  . Let ∆(Z1n ) be the maximum degree of the resulting graph. We first show that the maximum degree of this graph is small. 16 Lemma 5. With probability ≥ 1 − δ, ∆(Z1n ) ≤ 4nβn . Proof. For index 1, by Lemma 4 that probability of j points intersect is at most j   X n i βn (1 − βn )n−i . i i=0 Hence, the degree of vertex 1 is dominated by a binomial distribution with parameters n and βn . The lemma follows from the union bound and the Chernoff bound. Let k > 2∆(Z1n ) and S0 , S1 , S2 . . . Sk be k independent sets of the above graph such that maxt≥1 |St | ≤ 2n/k. Note that such independent sets exists by Lemma 10. We set the exact value of k later. Let S0 contains all indices such that f (zi ) < . Lemma 6. With probability > 1 − δ, r 1 |S0 | ≤ nG() + n log . δ Proof. Observe that |S0 | is the sum of n independent random variables and changing any of them changes S0 by at most 1. The lemma follows by McDiarmid’s inequality. We can upper bound the LHS in Equation (21) as sup(RS (g) − Rn (g|Z)) ≤ g∈G k X |St | sup t=1 g∈G n 1 X |S0 | |L|. 
(L(g(ui ), `i ) − h(Zi )) + |St | n i∈St Let N (Zi ) denote the number of elements of Z3 that are in B(Zi ) and Let Ai be always true if Zi ∈ Z1 and otherwise Ai be the event such that nearest neighbor of samples in N (Zi ) > 0. We first show the following inequality. Lemma 7. With probability ≥ 1 − δ, for all sets St . 1 X 1 X (L(g(ui ), `i ) − h(Zi )) = sup 1Ai (L(g(ui ), `i ) − h(Zi )) . sup g∈G |St | g∈G |St | i∈St i∈St Proof. Let Xi = (L(g(ui ), `i ) − h(Zi )) and Observe that LHS can be written as 1 X 1 X 1 X sup Xi = sup Xi 1Ai + sup (Xi − Xi 1Ai ). g∈G |St | g∈G |St | g∈G |St | i∈St i∈St i∈St If the conditions of Lemma 3 hold, the nearest sample of Zi ’s lie within Bn (Zi ). Hence, with probability ≥ 1 − δ, the second term is 0. Hence the lemma. Let r be defined as follows. r(Zi , Ni ) , E[1Ai (L(g(ui ), `i ) − h(Zi )) |N (Zi ) = Ni ]. Observe that " E # X i∈St = X i∈St = X i∈St = X 1Ai (L(g(ui ), `i ) − h(Zi )) |N (Z1 ), . . . N (Zn ) E [1Ai (L(g(ui ), `i ) − h(Zi )) |N (Z1 ), . . . N (Zn )] E [1Ai (L(g(ui ), `i ) − h(Zi )) |N (Zi )] r(Zi , Ni ). (23) i∈St 17 Hence, we can split the term as 1 X 1Ai (L(g(ui ), `i ) − h(Zi )) sup g∈G |St | i∈St 1 X 1 X ≤ sup 1Ai (L(g(ui ), `i ) − r(Zi , N (Zi ))) + sup 1Ai (r(Zi , N (Zi )) − h(Zi )) g∈G |St | g∈G |St | i∈St (24) i∈St Given {Zi , N (Zi )}, the first term in the RHS of the Equation (24) is a function of |St | independent random variables as 1Ai ∗ L(g(ui ), `i ) are mutually independent given {Zi , N (Zi )}. Thus we can use standard tools from VC dimension theory and state that with probability ≥ 1 − δ, the first term in the RHS of Equation (24) can be upper bounded as s s V log(1/δ) 1 X 1Ai (L(g(ui ), `i ) − r(Zi , N (Zi ))) ≤ |L|C sup + |L| . |S | |S | |St | t t g∈G i∈St conditioned on {Zi , N (Zi )} To bound the second term in the RHS of Equation (24), observe that unlike the first term, the N (Zi )s are dependent on each other. However note that N (Z1 ), . . . N (Z|St | ) are distributed according to multinomial distribution with parameters n and αn . However, if we replace them by independent Poisson distributed N (Zi )s we expect the value not to change. We formalize it by total variation distance. By Lemma 9, the total variation distance between a multinomial distribution and product of Poisson distributions is O(|St |αn ), and hence any bound holds in the second distribution holds in the first one with an additional penalty of O (|St |αn |L|) . Under the new independent sampling distribution, again the samples are independent and we can use standard tools from VC dimension and hence, with probability ≥ 1 − δ, the term is upper bounded by s s V log(1/δ) |L|C + |L| . |St | |St | Hence, summing over all the bounds, we get  s  s k 1 X |S0 | V |St |  log δ sup(RS (g) − Rn (g|Z)) ≤ |L|O  + +c + αn |St | n n |S | |S t t| g∈G t=1   s ! r r 1 √ log δ k 1  ≤ |L|O G() + log + c V + αn max |St | + t n δ n r  s ! r 1 √ log δ k 1 n . ≤ |L|O  log + c V + αn + n δ k n 2/3 conditioned on Zi , N (Zi ) Choose k = nαn + 8nβn , and note that the conditioning can be removed as the term on the r.h.s are constants.This yields the result. The error probability follows by the union bound. Theorem 4. Assume the conditions for Theorem 3. Suppose the loss is L(g(u), `) = 1g(u)6=` (s.t |L| ≤ 1). Further suppose the class of classifying function is such that Rq (gq∗ ) ≤ r0 + η. Here, r0 , 0.5(1 − dT V (q(x, y, z|1), q(x, y, z|0))) is the risk of the Bayes optimal classifier when P(` = 1) = P(` = 0). 
This is the best loss that any classifier can achieve for this classification problem [4]. Under this setting, w.p at least 1 − 8δ we have:  b(n)  b(n) 1 1 1 − dT V (f, f CI ) − ≤ Rq (gS ) ≤ 1 − dT V (f, f CI ) + + η + γn 2 2 2 2 18 Proof. Assume the bounds of Theorem 3 holds which happens w.p at least 1 − 8δ. From Theorem 3 we have that Rq (gS ) ≤ Rq (gq∗ ) + γn . (25) Also, note that from Theorem 1 we have the following: dT V (q(x, y, z|1), q(x, y, z|0)) = dT V (φ, f ) ≤ dT V (φ, f CI ) + dT V (f CI , f ) ≤ b(n) + dT V (f CI , f ) (26) Under our assumption we have Rq (gq∗ ) ≤ r0 + η. Combining this with (25) and (26) we get the r.h.s. For, the l.hs note that Rq (gq∗ ) ≥ r0 as the bayes optimal classifier has the lowest risk. We can now use (26) to prove the l.h.s. C Tools from probability and graph theory Lemma 8 (McDiarmid’s inequality [20]). Let X1 , X2 , . . . Xm be m independent random variables and f be a function from xn1 → R such that changing any one of the Xi s changes the function f at most by ci , then   −22 Pr(f − E[f ] ≥ ) ≤ exp Pm 2 . i=1 ci Lemma 9 (Special case of Theorem 1 in [25]). Let fm be the multinomial distribution with parameters n Pk and p1 , p2 , . . . pk , 1 − i=1 pi , and fp be the product of Poisson distributions with mean npi for i ≤ 1 ≤ k, then k X dT V (fm , fs ) ≤ 8.8 pi . i=1 Lemma 10. For a graph with maximum degree ∆, there exists a set of independent sets S1 , S2 , . . . Sk such that k ≥ 2∆ and max |Si | ≤ 2n/k. 1≤i≤k Proof. We show that the following algorithm yields a coloring (and hence independent sets) with the required property. Let 1, 2, . . . k be k colors, where k > 2∆. We arbitrarily order the nodes, and sequentially color nodes with a currently least used color from among the ones not used by its neighbors. Consider the point in time when i nodes have been colored, and we evaluate the options for the (i + 1)th node. The number of possible choices of color for that node is c ≥ k − ∆. Out these c colors, the average number of nodes belonging to each color at this point is at-most i/c. Therefore by pigeonholing, the minimum is less than the average; thus the number of nodes belonging to chosen color is no larger than i/c ≤ i/(k − ∆). Hence at the end when all n nodes are colored, each color has been used no more than (n − 1)/(k − ∆) + 1 < 2n/k. 19
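The colouring argument in the proof of Lemma 10 is constructive, and the Python sketch below transcribes it directly: nodes are processed in an arbitrary fixed order and each one receives a currently least-used colour not already taken by a coloured neighbour. The adjacency-set encoding of the graph and the random test graph at the end are assumptions made only for this illustration.

```python
# Greedy balanced colouring from the proof of Lemma 10 (sketch).
import random
from collections import defaultdict

def balanced_greedy_coloring(adj, k):
    """adj: dict node -> set of neighbours; k: number of colours, k > 2 * max degree."""
    color = {}
    used = [0] * k                    # how many nodes currently carry each colour
    for v in adj:                     # arbitrary fixed order of the nodes
        forbidden = {color[u] for u in adj[v] if u in color}
        # among the colours not used by coloured neighbours, pick a least-used one
        c = min((c for c in range(k) if c not in forbidden), key=lambda c: used[c])
        color[v] = c
        used[c] += 1
    return color

# Random test graph with maximum degree <= 3, coloured with k = 8 > 2 * 3.
random.seed(1)
n, delta, k = 60, 3, 8
adj = defaultdict(set)
for v in range(n):
    for u in random.sample(range(n), 2):
        if u != v and len(adj[v]) < delta and len(adj[u]) < delta:
            adj[v].add(u)
            adj[u].add(v)
for v in range(n):
    adj[v]                            # materialise isolated vertices as keys

col = balanced_greedy_coloring(adj, k)
assert all(col[v] != col[u] for v in adj for u in adj[v])   # proper colouring
sizes = [list(col.values()).count(c) for c in range(k)]
print("largest colour class:", max(sizes), "   bound 2n/k =", 2 * n / k)
```

Since at least k − ∆ colours are available at every step, the pigeonhole argument of the proof bounds every colour class by (n − 1)/(k − ∆) + 1 < 2n/k, which the printed figures reflect.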
7
arXiv:1604.01922v1 [math.AT] 7 Apr 2016 ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS ANTONIO DÍAZ RAMOS Abstract. We give a new proof of Quillen’s conjecture for solvable groups via a geometric and explicit method. For p-solvable groups, we provide both a new proof using the Classification of Finite Simple Groups and an asymptotic version without employing it. 1. Introduction Let G be a finite group, let p be a prime and let Ap (G) be the poset consisting of the non-trivial elementary abelian p-groups of G ordered by inclusion. The homotopy properties of the topological realization |Ap (G)| were first studied in [8]. There, Quillen introduced the following conjecture, where we denote by Op (G) the largest normal p-subgroup of G: Conjecture 1.1 (Quillen’s conjecture). If |Ap (G)| is contractible then Op (G) 6= 1. We start discussing some of the known progress on this conjecture as well as we briefly comment on the methods employed so far. If the implication 1.1 holds for G we say that G satisfies QC. Let us introduce the following notion. Definition 1.2. Let G be a finite group of p-rank r. We say that G has Quillen e r−1 (|Ap (G)|; Q) 6= 0. dimension at p if Op (G) = 1 ⇒ H This notion was introduced by Ashchbacher and Smith in [2, p.474] and we e ∗ (|Ap (G)|; Q) denote it by QDp . Note that r − 1 is the top dimension for which H can possibly be non-zero. As contractibility leads to zero homology it is clear that: QDp holds for G⇒ QC holds for G. Quillen observed the following: (1) QDp for all K ⋊ Frp with K solvable p′ -group ⇒ QDp for all G solvable, where Frp is an elementary abelian group of order pr acting on K and K ⋊Frp is their semidirect product. In fact, the left hand side of the implication above was proven by Quillen himself [8, Corollary 12.2], leading to the solution of the conjecture in the solvable case. This is an inductive proof built on Cohen-Macaulay posets. Alperin supplied an alternative approach that examines a minimal counterexample via coprime action results [9, Theorem 8.2.9]. A further observation of Quillen is: (2) QDp for all K ⋊ Frp with K p′ -group ⇒ QDp for all G p-solvable. Note that the solvability requirement on K has been dropped. In this case, the left hand side, and hence QC for p-solvable groups, holds by an extension of the Date: January 18, 2018. Supported by MICINN grant RYC-2010-05663. Partially supported by MEC grant MTM201341768-P and Junta de Andalucı́a grant FQM-213. 1 2 ANTONIO DÍAZ RAMOS aforementioned argument of Alperin via the Classification of the Finite Simple Groups (CFSG). See [3, Theorem 1], [9, Theorem 8.2.12] and [2, Theorem 0.5]. As contractiblity produces zero homology with any trivial coefficients, it makes sense to include in the above discussion homology with (trivial) coefficients in any abelian group A. In this work, we investigate certain homology classes that we e c (|Ap (K ⋊ Fr )|; A) of term constructible classes and that form a subgroup H r−1 p e r−1 (|Ap (K ⋊ Fr )|; A), see Definition 4.3 for details. H p Definition 1.3. Consider the semidirect product K ⋊ Frp with K a p′ -group and let A be any abelian group. We say that K ⋊ Frp has constructible Quillen dimension c e r−1 at p and A if Op (K ⋊ Frp ) = 1 ⇒ H (|Ap (K ⋊ Frp )|; A) 6= 0. We denote this condition by QDpc,A . By the expression QDpc we mean that QDpc,A holds for some abelian group A. Note that for the semidirect product K ⋊ Frp : Op (K ⋊ Frp ) = 1 ⇔ the action of Frp on K is faithful. 
(3) As before, we have that: (4) QDpc for all K ⋊ Frp with K solvable p′ -group ⇒ QC for all G solvable and (5) QDpc for all K ⋊ Frp with K p′ -group ⇒ QC for all G p-solvable. A constructible class for the group K ⋊ Frp is defined by a choice of an element of A for each Sylow p-subgroup of this group, a· = (aS )S∈Sylp (K⋊Frp ) , such that certain homogeneous linear equations are satisfied. Non-trivial solutions of these c e r−1 equations gives rise to non-trivial elements in H (|Ap (K ⋊ Frp )|; A). So, roughly speaking, we have removed a subdivision when computing constructible homology classes. This means that we only need to look at the top two layers of this poset, i.e., Sylow subgroups and their hyperplanes. We show below that constructible classes suffice to prove Quillen’s conjecture in the p-solvable case. We start with the following findings. Theorem 1.4. For the elementary abelian group Frp or order pr we have: c,Z (a) QDp q holds for K ⋊ Frp with K a q-group and q 6= p. (b) QDpc,Z2 holds for K ⋊ Frp with K an abelian p′ -group. Here, ZD = Z/DZ. We succinctly outline the families a· utilised for this result. For case (a), we choose a· to take the constant value 1 ∈ Zq . Letting a· to be the characteristic function of an appropriately selected subset of Sylow p-subgroups gives (b). For the next result, we show that the aforementioned linear equations have at least a non-trivial solution. Theorem 1.5. QDpc,Z holds for K ⋊ Frp with K a p′ -group if |K| = q1 e1 · · · ql el satisfies that r < qi for all i = 1, . . . , l. In view of the previous discussion, Theorem 1.5 is in fact an asymptotic version of Quillen’s conjecture for p-solvable groups. If K is solvable, its Fitting subgroup is self-centralizing and has abelian Frattini quotient. These two techniques are of interest here because they preserve faithfulness and because of Equation (3). Hence, the solvable case can be studied via the abelian case above together with the following fact that constructible classes behave extremely well with respect to quotients. ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 3 Theorem 1.6. Consider K ⋊Frp with K a p′ -group and let N EK be an Frp -invariant e c (|Ap (K/N ⋊ Fr )|; A) 6= 0 then subgroup of K. Let A be any abelian group. If H r−1 p c r e )|; A) = 6 0. (|A (K ⋊ F H p p r−1 From here and Theorem 1.4(b) we obtain the next result. Theorem 1.7. QDpc,Z2 holds for K ⋊ Frp with K a solvable p′ -group. This result together with Equation (4) has the following immediate consequence. Theorem 1.8. Quillen’s conjecture holds for solvable groups. For general p′ -group K, the Frattini quotient of its generalized Fitting subgroup is the direct product of an abelian group with simple groups. Theorem 1.6 again reduces the problem to building constructible classes on this quotient. As in Theorem 1.4(b), we choose a· to be the characteristic function of a certain subset of Sylow p-subgroups, and we use the CFSG to single out this subset. Theorem 1.9. QDpc,Z2 holds for K ⋊ Frp with K a p′ -group. Now employing Equation (5) we obtain the p-solvable case. Theorem 1.10. Quillen’s conjecture holds for p-solvable groups. Going back to the initial discussion, we point out that Alperin’s arguments for the solvable and p-solvable cases are aimed to build a homology sphere in top dimension r − 1. This sphere is built by appropriately choosing points to form r 0spheres, and the considering its join. 
Here, our constructible homology class in top dimension r − 1 is made out of conical pieces of that dimension glued along a graph, see Examples 3.4, 4.9 and 5.7. The weight of our arguments lies in the geometry of the situation, and we do not use neither induction nor minimal counterexample reasoning. For more details on this comparison, please see Remarks 5.4 and 6.8. The methods explained here are open in at least a couple of directions: First, one e c (|Ap (G)|; A) for any finite group can also define the constructible subgroup H r−1 G of rank r and any abelian group A. Nevertheless, the situation is more complicated as, to start with, the coprime action Propositions 2.2 and 2.3 do not hold in general. Second, it is easy to sharpen Theorem 5.3 in several ways. For instance, the conditions ci normalizes CK (H) and [ci , cj ] ∈ CK (H) for all i and j, are also enough to have a 2-system. Remark 1.11. By [8, Proposition 2.4], which is the reversed implication of Cone ∗ (|Ap (K ⋊ Frp )|; A) 6= 0 forces the action of Frp on K jecture 1.1, the condition H to be faithful. In particular, if the thesis of Theorem 5.6 holds, we deduce that Frp acts faithfully on K and on K/N . Acknowledgements: I am really grateful to Stephen D. Smith for his time and generosity. He read an earlier version of this work and contributed many interesting and clarifying comments, specially those concerning buildings and the relation to Alperin’s solution. He also pointed out some flaws in the examples and in the analysis of fixed points in Section 6 that are now fixed. Organization of the paper: In Section 2 we introduce notation and basic facts about the semidirects products we are interested in. In Section 3 we define, for each element a ∈ A, a chain in top dimension r − 1 for |Ap (Frp )|. Then, in Section 4, we add up these chains to form a chain in top dimension for |Ap (K ⋊ Frp )|. We 4 ANTONIO DÍAZ RAMOS also formulate the linear equations that must be satisfied in order for this chain to c e r−1 become a cycle in H (|Ap (K ⋊ Frp )|; A). Case (a) of Theorem 1.4 and Theorem 1.5 are also proven. In Section 5, we introduce some tool needed to prove later in the same section the abelian case Theorem 1.4(b). Afterwards, we prove Theorem 1.6. In the final Section 6, we prove Theorems 1.7, 1.8, 1.9 and 1.10. 2. Preliminaries Throughout the paper, p denotes a prime and A denotes an abelian group. For a poset P, its realization |P| is the topological realization of its order complex ∆(P), i.e., of the simplicial complex whose n-simplices are the compositions p0 < p1 < e ∗ (|P|; A) via the simplicial . . . < pn in P. In particular, we compute the homology H complex C∗ (∆(P); A). Here, Cn (∆(P); A) has one generator for each n-simplex of ∆(P ) and C−1 (∆(P); A) has as generator the empty composition Pn ∅. The differential d : Cn (∆(P); A) → Cn−1 (∆(P); A) is given as usual by d = i=0 (−1)i di , where di forgets the i-th vertex of a given simplex. If G is a finite group and p is a prime, let Ap (G) denote the Quillen poset of G at the prime p [8]. This poset consists of the non-trivial elementary abelian p-subgroups of G ordered by inclusion. We will pay special attention to Ap (G) for G a semidirect product G = K ⋊ H, where H is an abelian p-group acting on the p′ -group K. Note that in this situation Sylp (G) = K H = {kHk −1 , k ∈ K} and | Sylp (G)| = |K|/|CK (H)|. We introduce the following notation. Definition 2.1. Let G be the semidirect product G = K ⋊ H, where H is an abelian p-group and K is a p′ -group. 
For any subgroup I ≤ G, we define N (I) as the subset of Sylow p-subgroups of G that contain I and we also set N (I) = |N (I)|. The next two results are basic coprime action properties about the semidirect products we are concerned with. Proposition 2.2. Let G be a semidirect product G = K ⋊ H, with H an abelian p-group and K a p′ -group. Let I be a subgroup of H. Then: (1) I is the only conjugate of I that lies in H. (2) N (I) = |CK (I)|/|CK (H)|. (3) If N E K is H-invariant then CK/N (H) = CK (H)N/N . Proof. For the first claim, we reproduce here for convenience the proof in [7, Lemma 1.2]. So let g I be a conjugate of I that lies in H. Then we have g = kh for some k ∈ K and h ∈ H and, as H is abelian, we have g I = k I. Now let i ∈ I. Then i−1 (k i) ∈ H and i−1 (k i) = k i k −1 ∈ K. As K ∩ H = {1} we obtain i = k i, k ∈ CK (I) and g I = k I = I. For the second claim, consider the action of CK (I) on N (I) given by k · J = k J for k ∈ CK (I) and J ∈ N (I). This action is easily seen to be transitive by the previous argument. Moreover, the isotropy group of J = H is exactly CK (H). The second claim follows. The last item is standard and can be found in [7, Lemma 2.2(c)].  From here on, we will only consider the case with H an elementary abelian pgroup of rank r, Frp . Then we will think of H as an Fp -vector space whose elements are r-tuples of elements from Fp : (x1 , . . . , xr ) ∈ H with xi ∈ Fp . Consequently, we call hyperplanes of H those subgroups I of H with |H : I| = p. We say that a collection of hyperplanes is linearly independent if their corresponding onedimensional subspaces are independent in the dual vector space H ∗ . ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 5 Proposition 2.3. Let G be a semidirect product G = K ⋊ H, with H = Frp and K a p′ -group. If H acts faithfully on K then there exists r linearly independent hyperplanes of H, H1 , . . . , Hr , that satisfy N (Hi ) > 1. Proof. A proof may be found in [7, Lemma 3.1] but we include here an alternative argument: By [1, Exercise 8.1], we have K = hCK (I)|I hyperplane of Hi. Denote by I the set of hyperplanes I of H that satisfy CK (H) ( CK (I), i.e., that, by Proposition 2.2(2), satisfy N (I) > 1. The intersection ∩I∈I I acts trivially on K because K is generated by {CK (I)}I∈I . Hence this intersection must be trivial as the action is faithful. Now observe that the dimension of ∩I∈I I is exactly r minus the maximal number of linearly independent hyperplanes in I.  For the case H = Frp , we also introduce the notation [i1 , . . . , il ] to denote a sequence of elements from {1, . . . , r} with no repetition, and we say that two sequences are equal if they have the same length and the same elements in the same order. Moreover, we denote by Slr the set of all such sequences which are of length l. For a sequence [i1 , . . . , il ] ∈ Slr , we define the following subspace of H: (6) H[i1 ,...,il ] = {(x1 , . . . , xr ) ∈ H|xi1 = . . . = xil = 0}. Thus H∅ = H, H[i] is the hyperplane with zero i-th coordinate and H[i1 ,...,il ] = H[i1 ] ∩ . . . ∩ H[il ] ∼ = Fr−l p . In addition, we also have (7) H[i1 ,...,il ] = H[j1 ,...,jl ] ⇔ {i1 , . . . , il } = {j1 , . . . , jl }, where on the right-hand side the equality is between (unordered) sets. We define the signature ǫ[i1 ,...,il ] as (8) ǫ[i1 ,...,il ] = (−1)n+m , where n is the number of transpositions we need to apply to the sequence [i1 , . . . , il ] to arrange it in increasing order [j1 , . . . , jl ], and m is the number of entries in which [j1 , . . . 
, jl ] differ from [1, . . . , l]. It is immediate that the parity of n does not depend on the particular choice of transpositions and so ǫ[i1 ,...,il ] is well defined. As an example consider [1, 4, 3] ∈ S34 . Then we have ǫ[1,4,3] = (−1)1+2 = −1. We will need the following property of the signature ǫ· : r Lemma 2.4. Let [i1 , . . . , ir−1 ] and [j1 , . . . , jr−1 ] be sequences in Sr−1 such that [i1 , . . . , ir−2 ] = [j1 , . . . , jr−2 ] and ir−1 6= jr−1 . Then ǫ[i1 ,...,ir−1 ] + ǫ[j1 ,...,jr−1 ] = 0. Proof. Write ǫ[i1 ,...,ir−1 ] = (−1)ni +mi and ǫ[j1 ,...,jr−1 ] = (−1)nj +mj as in (8). Without loss of generality we may assume that ir−1 < jr−1 and so we can order the set {1, . . . , r} as follows: (9) 1 < 2 < . . . < ir−1 < . . . < jr−1 < . . . < r. Denote by n the number of transpositions needed to arrange [i1 , . . . , ir−2 ] in increasing order. Then, from (9), ni = n + r − 1 − ir−1 and nj = n + r − jr−1 . It is also clear from (9) that mi = r − jr−1 and that mj = r − ir−1 . Hence we get (nj + mj ) − (ni + mi ) = 1 and we are done.  6 ANTONIO DÍAZ RAMOS 3. A chain for elementary abelian p-groups Let H = Frp be an elementary abelian p-subgroup of rank r and let A be an abelian group. We consider the poset Ap (H) of dimension r − 1 and its order complex ∆(Ap (H)). We shall explicitly define a chain in the abelian group Cr−1 (∆(Ap (H)); A), i.e., a chain in top dimension r − 1. We start defining, for [i1 , . . . , il ] ∈ Slr , the following l-simplex in ∆(Ap (H)): (10) σ[i1 ,...,il ] = H[i1 ,...,il ] < H[i1 ,...,il−1 ] < . . . < H[i1 ,i2 ] < H[i1 ] < H. Remark 3.1. Note that, by repeated application of (7), we have that σ[i1 ,...,il ] = σ[j1 ,...,ij ] ⇔ [i1 , . . . , il ] = [j1 , . . . , jl ]. Now, for any a ∈ A, we define the following chain in Cr−1 (∆(Ap (H)); A) : X ǫ[i1 ,...,ir−1 ] · a · σ[i1 ,...,ir−1 ] . (11) ZH,a = r [i1 ,...,ir−1 ]∈Sr−1 The next proposition exhibits the key feature of the chain ZH,a . Proposition 3.2. The differential d(ZH,a ) ∈ Cr−2 (∆(Ap (H)); A) is given by X d(ZH,a ) = (−1)r−1 ǫ[i1 ,...,ir−1 ] · a · τ[i1 ,...,ir−1 ] , r [i1 ,...,ir−1 ]∈Sr−1 where τ[i1 ,...,ir−1 ] = H[i1 ,...,ir−1 ] < H[i1 ,...,ir−2 ] < . . . < H[i1 ,i2 ] < H[i1 ] . Remark 3.3. Again by (7), we have τ[i1 ,...,il ] = τ[j1 ,...,ij ] ⇔ [i1 , . . . , il ] = [j1 , . . . , jl ]. Proof. The differential d : Cr−1 (∆(Ap (H)); A) → Cr−2 (∆(Ap (H)); A) is given by Pr−1 the alternate sum i=0 (−1)i di . Note that, by the rank of the subspaces of H, di (σ[i1 ,...,ir−1 ] ) = dj (σ[j1 ,...,jr−1 ] ) ⇒ i = j. We show below that di (ZH,a ) = 0 for 0 ≤ i ≤ r − 2. Then the theorem follows. We start considering d0 . Note that d0 (σ[i1 ,...,ir−1 ] ) = H[i1 ,...,ir−2 ] < . . . < H[i1 ,i2 ] < H[i1 ] < H, and that, by repeated application of (7), we have d0 (σ[i1 ,...,ir−1 ] ) = d0 (σ[j1 ,...,jr−1 ] ) ⇔ [i1 , . . . , ir−2 ] = [j1 , . . . , jr−2 ]. Assume that such condition holds for the sequences [i1 , . . . , ir−1 ] and [j1 , . . . , jr−1 ]. Then they must be identical but for the last terms ir−1 and jr−1 . Moreover, if the given sequences are different, we must have ir−1 6= jr−1 . Then, by Lemma 2.4, ǫ[i1 ,...,ir−1 ] and ǫ[j1 ,...,jr−1 ] have opposite signs and the corresponding summands cancel each other in the expression for d0 (ZH,a ). Hence d0 (ZH,a ) = 0. Now we consider di for 0 < i ≤ r − 2. In this case we have: di (σ[i1 ,...,ir−1 ] ) = H[i1 ,...,ir−1 ] < . . . < Ĥ[i1 ,...,ir−1−i ] < . . . < H[i1 ,i2 ] < H[i1 ] < H, where the term H[i1 ,...,ir−1−i ] is missing. Again by (7), if the sequences [i1 , . . . 
, ir−1 ] and [j1 , . . . , jr−1 ] have the same differential di , then they are identical but for the entries ir−i−1 , ir−i and jr−i−1 , jr−i , and these entries satisfy {ir−i−1 , ir−i } = {jr−i−1 , jr−i }. So, if the two given sequences are not the same one, then they differ by a transposition. Hence ǫ[i1 ,...,ir−1 ] and ǫ[j1 ,...,jr−1 ] have opposite signs and their corresponding summands cancel each other in the expression for di (ZH,a ). Thus di (ZH,a ) = 0.  ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 7 Example 3.4. For r = 3 we have S23 = {[1, 2], [2, 1], [1, 3], [3, 1], [2, 3], [3, 2]} and the signatures are +1, −1, −1, +1, +1 and −1 respectively. Within ∆(Ap (F3p )) we have the following cover relations involving the six 2-simplexes σ[i1 ,i2 ] , [i1 , i2 ] ∈ S23 : ❡❡❡❡⑧ H[1] < H ❡❡❡❡❡❡ ⑧⑧⑧ ❡ ❡ ❡ ❡ ❳ ❳ H[1,2] < H[1] < H ❳❳❳❳❳ ⑧⑧♠♠♠ H[1,2] < H ❳❳♠❳♠⑧❳♠ ♠♠⑧♠⑧⑧ ❳❳ H[1,2] < H[1] ♠ ♠ ♠ ♠♠❳♠❳❳❳⑧⑧⑧ H[2,1] < H[2] < H ❳ ✆ H[2] < H ❳ ⑧⑧ ❳❳❳❳❳❳✆❳✆✆ ⑧ ✆ H[2,1] < H[2] ⑧⑧ ✆✆ ♠♠ H[1,3] < H ⑧⑧❳❳❳❳ H[1,3] < H[1] < H ❳ ✆ ❳❳❳✆❳✆❳♠♠♠ ♠ ❳❳❳ H ♠✆♠✆✆♠ [1,3] < H[1] ♠ ♠ ♠ ♠♠❳♠❳❳✆❳✆✆ H[3,1] < H[3] < H ❳ H ② [3] < H ✆ ❳❳❳❳❳❳❳②❳②② ✆ ② H ✆ [3,1] < H[3] ②② ✆❳✆ ② ✆ ❳ ② H[2,3] < H[2] < H ❳❳❳❳②❳② ♠♠♠ H[2,3] < H ②②♠♠❳♠❳♠❳♠❳♠❳❳ H ② [2,3] < H[2] ② ♠ ②♠②♠♠♠ ② ♠ H <H <H H <H . [3,2] [3] [3,2] [3] The different signs for the signatures cause ZH,a to have boundary d(ZH,a ) supported on the six 1-simplexes τ[i1 ,i2 ] with [i1 , i2 ] ∈ S23 . So the chain ZH,a has support in the right-hand side coneshaped 2-dimensional simplicial complex. Remark 3.5. The figure above suggest that ZH,a is a sort of cone construction. In fact, it is straightforward that ZH,a is the join of the vertex H with the standard apartment arising from the basis of the one-dimensional subspaces Fp vi , where the vector vi = (0, . . . , 0, 1, 0, . . . , 0) has the 1 in the i-th position for i = 1, . . . , r. 4. A chain for semidirect products Let H = Frp be an elementary abelian p-subgroup of rank r that acts on the p -group K and consider the semidirect product G = K ⋊ H. We consider the poset Ap (G) of dimension r − 1 and its order complex ∆(Ap (G)). We shall define a chain Cr−1 (∆(Ap (G)); A) in top dimension r − 1 using the chain defined in Section 3. For each Sylow p-subgroup S ∈ Sylp (G) choose kS ∈ K with S = kS H and choose an element aS of the abelian group A. Then consider the conjugated chain ′ ZS,aS := kS (ZH,aS ), (12) where ZH,aS ∈ Cr−1 (∆(Ap (H)); A) ⊆ Cr−1 (∆(Ap (G)); A) was defined in Equation (11) and kS acts by conjugation on Cr−1 (∆(Ap (G)); A) and hence transforms ZH,aS somewhere inside this chain space. The next lemma shows that ZS,aS does not depend on the conjugating element kS : Lemma 4.1. Let k1 , k2 ∈ K with k2 (ZH,a ). k1 H = k2 H and let a ∈ A. Then k1 Proof. By Equations (11) and (10), it is enough to see that k1 H[i1 ,...,ir−1 ] < k1 H[i1 ,...,ir−2 ] < . . . < k1 H[i1 ,i2 ] < k1 H[i1 ] < k1 H (ZH,a ) = 8 ANTONIO DÍAZ RAMOS and k2 H[i1 ,...,ir−1 ] < k2 H[i1 ,...,ir−2 ] < . . . < k2 H[i1 ,i2 ] < k2 H[i1 ] < k2 H are equal for each sequence [i1 , . . . , ir−1 ] of length r − 1. But note that for any 1 ≤ l ≤ r−1, the subgroups k1 H[i1 ,...,il ] and k2 H[i1 ,...,il ] are K-conjugates of H[i1 ,...,il ] lying in k1 H = k2 H. Hence, by Proposition 2.2(1), these two subgroups must be equal. The result follows.  Now, for a· = (aS )S∈Sylp (G) elements of A, and (kS )S∈Sylp (G) elements from K satisfying kS H = S, we define the following element of Cr−1 (∆(Ap (G)); A): X X kS (ZH,aS ). 
ZS,aS = (13) ZG,a· = S∈Sylp (G) S∈Sylp (G) The lemma above shows that ZG,a· does not depend on the particular choice of the elements kS ’s. Proposition 4.2. The differential d(ZG,a· ) ∈ Cr−2 (∆(Ap (G)); A) is given by   X X ǫ[i1 ,...,ir−1 ] · aS ′ · kS τ[i1 ,...,ir−1 ] , d(ZG,a· ) = (−1)r−1 S ′ ∈N (kS H[i1 ] ) ([i1 ,...,ir−1 ],S)∈T r × Sylp (G) and where T is certain subset of Sr−1 τ[i1 ,...,ir−1 ] = kS H[i1 ,...,ir−1 ] < kS H[i1 ,...,ir−2 [ < . . . < kS H[i1 ,i2 ] < kS H[i1 ] . P kS Proof. We have d(ZG,a· ) = S∈Sylp (G) d( ZH,as ) and, by Proposition 3.2, the kS differential of kS ZH,as is given by d(kS ZH,as ) = (−1)r−1 X ǫ[i1 ,...,ir−1 ] · aS · kS τ[i1 ,...,ir−1 ] , r [i1 ,...,ir−1 ]∈Sr−1 where τ[i1 ,...,ir−1 ] = H[i1 ,...,ir−1 ] < H[i1 ,...,ir−2 ] < . . . < H[i1 ,i2 ] < H[i1 ] . r × Sylp (G) the following equivalence To study d(ZG,a· ), we define on the set Sr−1 relation: ([i1 , . . . , ir−1 ], S) ∼ ([j1 , . . . , jr−1 ], S ′ ) ⇔ kS τ[i1 ,...,ir−1 ] = kS′ τ[j1 ,...,jr−1 ] . r × Sylp (G) to be any set of representatives for the equivalence We also set T ⊆ Sr−1 classes of ∼. Now we fix ([i1 , . . . , ir−1 ), S) ∈ T and study its ∼-equivalence class. So assume that ([i1 , . . . , ir−1 ], S) ∼ ([j1 , . . . , jr−1 ], S ′ ). Then, in the first place, we have kS H[i1 ] = kS′ H[j1 ] < kS′ H. In addition, for any 1 ≤ l ≤ r − 1, the −1 subgroup kS′ kS H[i1 ,...,il ] = H[j1 ,...,jl ] is a subgroup of H that is K-conjugated to H[i1 ,...,il ] ≤ H. Hence, by Proposition 2.2(1), we must have H[i1 ,...,il ] = H[j1 ,...,jl ] . Using (7) we obtain then that [i1 , . . . , ir−1 ] = [j1 , . . . , jr−1 ]. So we have proven the left to right implication of the next claim: ([i1 , . . . , ir−1 ], S) ∼ ([j1 , . . . , jr−1 ], S ′ ) ⇔ [i1 , . . . , ir−1 ] = [j1 , . . . , jr−1 ] and kS H[i1 ] < kS′ H, and the right to left implication is a direct consequence of Proposition 2.2(1) again. It turns out then that the signature is constant along the ∼-equivalence class of ([i1 , . . . , ir−1 ], S) and the formula in the statement follows.  ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 9 Note that the chain ZG,a· is a cycle, i.e., d(ZG,a· ) = 0, if and only if X (14) aS = 0 S∈N (k H[i] ) for all i ∈ {1, . . . , r} and all K-conjugates k H[i] of H[i] , and it is non-trivial if and only if (15) aS 6= 0 for some S ∈ Sylp (G). Definition 4.3. If equations (14) are satisfied we say that [ZG,a· ] is a constructible e r−1 (|Ap (K ⋊ H)|; A). class in H c e r−1 It is immediate that constructible classes form a subgroup H (|Ap (K ⋊H)|; A) e r−1 (|Ap (K ⋊H)|; A) as claimed in the introduction. For r = 1, i.e., for H = Fp , of H and considering the “hyperplane” ∅ ∈ ∆(Ap (K ⋊ H)), Equations (14) and (15) just e c (|Ap (K ⋊ H)|; A) = H e 0 (|Ap (K ⋊ H)|; A). say that H 0 Remark 4.4. If for some a ∈ A we choose aS = a for all S ∈ Sylp (G) then the Equation for acyclicity of ZG,a· , (14), simplifies to: (16) N (H[i] ) · a = 0 for all i ∈ {1, . . . , r}. Corollary 4.5. Let G be a semidirect product G = K ⋊ H with H = Frp and K a p′ -group. Assume there are r linearly independent hyperplanes H1 , . . . , Hr such c e r−1 that D > 1 divides the numbers N (H1 ), . . . , N (Hr ). Then H (|Ap (G)|; ZD ) 6= 0. Proof. By a suitable change of basis in the Fp -vector space H, we may assume that Hi = H[i] for i = 1, . . . , r. Set A = Z/DZ = ZD . Then, if we choose a = 1 ∈ A as in Remark 4.4, we have N (Hi ) · 1 = 0 mod D for all i and it follows that ZG,a· is c e r−1 a non-trivial homology class in H (|Ap (G)|; A).  Corollary 4.6. 
Let G be a semidirect product G = K ⋊ H with H = Frp acting e c (|Ap (G)|; Zq ) 6= 0. faithfully on the q-group K for a prime q 6= p. Then H r−1 Proof. By Proposition 2.3, there exist r linearly independent hyperplanes H1 , . . . , Hr that satisfy N (Hi ) > 1. Being K a q-group, this implies that 1 < q divides N (Hi ) c e r−1 for all i. Then, by Corollary 4.5, H (|Ap (G)|; Zq ) 6= 0.  This proves Theorem 1.4(a) of the introduction. Equations (14) form a system Pr of homogeneous linear equations over the abelian group A. It consists of i=1 |K|/|CK (H[i] )| equations over |K|/|CK (H)| variables and the entries of the matrix associated to the system are either 0 or 1. Assume that A is a ring which is a Principal Ideal Domain (P.I.D.). Then, if there are fewer equations than variables: r r X X |K| 1 |K| (17) < ⇔ < 1, |CK (H[i] )| |CK (H)| |N (H[i] )| i=1 i=1 the Smith-Normal form of the matrix associated to the system shows that there exists at least a non-trivial solution. Corollary 4.7. Let G be a semidirect product G = K ⋊ H with H = Frp acting faithfully on the p′ -group K with |K| = q1e1 · · · qlel . If r < qi for i = 1, . . . , l, then c e r−1 (|Ap (G)|; Z) 6= 0. H 10 ANTONIO DÍAZ RAMOS Proof. After a suitable reordering, we may assume that q1 < . . . < qr and hence the hypothesis on r reads as r < q1 . As the integers Z form a P.I.D., we must check that Equation (17) holds. By Proposition 2.3 and a suitable change of basis in the Fp -vector space H we may assume that Hi = H[i] for i = 1, . . . , r with |N (H[i] )| > 1. Because these numbers divide |K| we have |N (H[i] )| ≥ q1 and Pr r 1  i=1 |N (H[i] )| ≤ q1 < 1. This result proves Theorem 1.5 of the introduction. In view of Equations (14), it is worth considering the following graph. Definition 4.8. For the semidirect product G = K ⋊ H with K a p′ -group and H ∼ = Frp , let G be the graph with vertices all K-conjugates of H together with all K-conjugates of the hyperplanes H[i] , and with edges all the inclusions among them. By Proposition 2.2(1), the vertices that are K-conjugates of H have degree equal to r. A vertex which is K-conjugate to H[i] has degree N (H[i] ). Moreover, Equations (14) can be interpreted as assigning the value aS with S ∈ Sylp (G) to the vertex S ∈ Sylp (G) and, for each vertex k H[i] with k ∈ K, equalizing to zero the sum of the values associated to its neighbours. Example 4.9. Consider G = C3 × C3 ⋊ F2 × F2 with the generators acting by (x1 , x2 ) 7→ (−x1 , x2 ) and (x1 , x2 ) 7→ (x1 , −x2 ) respectively. Set H1 = F2 × 0 and H2 = 0 × F2 . Then we have CK (H1 ) = 0 × C3 , CK (H2 ) = C3 × 0, CK (H) = 1 and N (H1 ) = N (H2 ) = 3. The graph G is shown below, where upper-case letters correspond to conjugates of H and lower-case letters to conjugates of hyperplanes. By Corollary 4.6, there exists a non-trivial homology class ZG in the top dimension homology group H1c (∆(A2 (G)); Z3 ). Moreover, by Corollary 4.7, as there are 6 e 1c (∆(A2 (G)); Z) 6= 0. equations and 9 variables, we also have H a b A B C D E F d e c G H I f 5. D-systems and quotients Under some hypothesis, we have constructed above non-trivial classes in top homology that are constant solutions (Remark 4.4) to the Equations (14). In this section, we construct solutions which are characteristic functions. We first specialize Definition 2.1. Definition 5.1. Let G be a semidirect product G = K ⋊ H with H = Frp and K a p′ -group. Let S ⊆ Sylp (G) be a non-empty subset of Sylow p-subgroups. 
For any subgroup I ≤ G, we define NS (I) as the subset of Sylow p-subgroups of S that contain I and we also set NS (I) = |NS (I)|. If D > 1 is an integer, we say that S is a D-system if D divides NS (k H[i] ) for all i and all k ∈ K. ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 11 Theorem 5.2. Let G be a semidirect product G = K ⋊ H with H = Frp and K a e c (|Ap (G)|; ZD ) 6= 0. p′ -group. If S is a D-system then H r−1 Proof. Form the chain ZG,a· as in Equation (13), where we choose aS = 1 for S ∈ S and 0 otherwise. This is a not trivial chain and the Equations (14) for ZG,a· being a cycle become NS (k H[i] ) = 0 mod D, for all i ∈ {1, . . . , r} and all K-conjugates k H[i] of H[i] . This holds by hypothesis.  Next we formulate a group theoretical condition that implies the existence of a 2-system. Theorem 5.3. Let G be a semidirect product G = K ⋊ H with H = Frp and K a p′ -group. Assume there are elements ci ∈ CK (H[i] ) \ CK (H) for i = 1, . . . , r such [ci , cj ] = 1 for all i and j. Then there exists a 2-system for G and hence e c (|Ap (G)|; Z2 ) 6= 0. H r−1 δ1 δ2 δr Proof. Let S = {c1 c2 ...cr H, with δi ∈ {0, 1} } ⊆ Sylp (G). We claim that S is a 2-system: consider k H[i] for some i ∈ {1, . . . , r} and some k ∈ K. If NS (k H[i] ) = 0 then there is nothing to prove. Assume then that the set NS (k H[i] ) is not empty δ1 δ2 δr and choose one of its elements: c1 c2 ...cr H ∈ NS (k H[i] ) for some values δi ’s in {0, 1}. Then by Proposition 2.2(1) we have k δ1 δ2 c2 ...cδrr H[i] = c1 δ1 δ2 c2 ...cδrr H[i] < c1 H. Assume first that δi = 0. Because [ci , cj ] = 1 for all j, we have: δ δ δ i−1 i+1 ci+1 ...cδrr ci c11 ...ci−1 ( δi−1 δi+1 δ1 ...ci−1 ci ci+1 ...cδrr H) = c1 δi−1 δi+1 δ1 ...ci−1 ci+1 ...cδrr ci H = c1 H. δi−1 δi+1 δ δ ci c11 ...ci−1 ci+1 ...crr From here we deduce that ( H[i] ) is a subgroup of the last two subgroups in the display above. So, by Proposition 2.2(1) again, we must have: δ δ δ i−1 i+1 ci+1 ...cδrr ci c11 ...ci−1 ( δ1 H[i] ) = c1 δ δ i−1 i+1 ...ci−1 ci ci+1 ...cδrr δ1 H[i] = c1 δ δ i−1 i+1 ...ci−1 ci+1 ...cδrr ci H[i] . As ci centralizes H[i] , the right-hand side group in the equation above is equal to δ δ δ i−1 i+1 c11 ...ci−1 ci+1 ...cδrr H[i] . Hence, δ δ δ i−1 i+1 δ ci c11 ...ci−1 ci+1 ...crr ( H) ∈ NS (k H[i] ). Moreover, this δi−1 δi+1 δ ci+1 ...cδrr c11 ...ci−1 H because then we would K-conjugate of H cannot be equal to obtain ci ∈ CK (H), a contradiction. So, we have shown, in the case δi = 0, that δ δ c11 ...ci i ...cδrr δ1 H ∈ NS (k H[i] ) ⇒ c1 1−δi ...ci ...cδrr H ∈ NS (k H[i] ), and both K-conjugates of H are different. The case δi = 1 is proven analogously using conjugation by c−1 instead of by ci . It follows that NS (k H[i] ) is an even i number.  Remark 5.4. The set S defined in the proof of Theorem 5.3 has size 2r : Consider non-trivial elements hi ∈ ∩j6=i H[j] . Then H = hh1 , . . . , hr i. Moreover, if we have ǫi ǫ c11 ...cǫrr H = H with ǫi ∈ {−1, 0, 1}, we obtain, by Proposition 2.2(1), that ci hi = hi for all i. If there exists i0 with ǫi0 6= 0, we deduce that [ci0 , hi0 ] = 1 and then, as H = hH[i0 ] , hi0 i, that ci0 ∈ CK (H), a contradiction. 12 ANTONIO DÍAZ RAMOS P δ1 δ2 δr Consider then the chain ZG,b· , where bS = (−1) i δi if S = c1 c2 ...cr H ∈ S c e r−1 (|Ap (G)|; Z) and bS = 0 otherwise. It represents a homology class [ZG,b· ] ∈ H c e is exactly the cycle (|A (G)|; Z ). In fact, Z which is a lift of [ZG,a· ] ∈ H p 2 G,b· r−1 corresponding to the join of the r 0-spheres {h1 , c1 h1 }, . . . , {hr , cr hr }. Corollary 5.5. 
Let G be a semidirect product G = K ⋊ H with H = Frp acting faithfully on the abelian p′ -group K. Then there exists a 2-system for G and hence c Hr−1 (|Ap (G)|; Z2 ) 6= 0. Proof. By Proposition 2.3, there exist r linearly independent hyperplanes H1 , . . . , Hr that satisfy N (Hi ) = |CK (Hi )|/|CK (H)| > 1. By a suitable change of basis in the Fp -vector space H, we may assume that Hi = H[i] for i = 1, . . . , r. As K is abelian, any choice of elements ci ∈ CK (H[i] ) \ CK (H) 6= ∅ for i = 1, . . . , r satisfies the hypothesis of Theorem 5.3.  This prove Theorem 1.4(b). We finish this section by proving Theorem 1.6 of the introduction. Theorem 5.6. Let G be a semidirect product G = K ⋊ H with H = Frp and K a p′ -group and let N E K be a normal H-invariant subgroup of K. Consider the quotient Ḡ = G/N and let A be an abelian group. Then every class [ZḠ,ā· ] ∈ c e c (|Ap (G)|; A). Moreover, a Hr−1 (|Ap (Ḡ)|; A) can be lifted to a class [ZG,a ] ∈ H r−1 non-trivial class lifts to a non-trivial class. Proof. Consider the projection homomorphism π : G → Ḡ. Because its kernel is the p′ -order subgroup N , there is an induced surjective application: Sylp (π) : Sylp (G) → Sylp (Ḡ). Moreover, for π(S) ∈ Sylp (Ḡ), we have Sylp (π)−1 (π(S)) = {n S, n ∈ N } by Proposition 2.2(3). We have the collection of elements of A, ā· = (āS̄ )S̄∈Sylp (Ḡ) , and we define a· = (aS )S∈Sylp (G) by aS = āπ(S) . Now choose a K-conjugate k H[i] of H[i] . Then the equality Sylp (π)(N (k H[i] )) = N (π(k) π(H[i] )) follows from Proposition 2.2(3). Equation (14) for ZG,a at k H[i] become: X X X aS = āπ(S) = 0. S∈N (k H[i] ) π(S)∈N (π(k) π(H[i] )) S ′ ∈N (k H[i] )∩Syl−1 p (π)(π(S))  n k Note that N (k H[i] ) ∩ Syl−1 p (π)(π(S)) = { S, n ∈ N ∩ CK ( H[i] )} and that this k set has order m = |CN ( H[i] )|/|CN (S)|. As N is normal in K, the denominator |CN (S)| does not depend on the Sylow p-subgroup S ∈ N (k H[i] ) and it is clear that m does not depend neither. Hence, we may rewrite the equation above as X āπ(S) = 0, m· π(S)∈N (p(k) π(H[i] )) and this equation holds because ZḠ,ā is a cycle. It is immediate from the construction that non-trivial classes lifts to non-trivial classes. ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 13 Example 5.7. Consider G = C5 × C5 × C5 ⋊ F2 × F2 × F2 with the generators acting by sending (x1 , x2 , x3 ) to (−x1 , x2 , x3 ), (x1 , −x2 , x3 ) and (x1 , x2 , −x3 ) respectively. Set H1 = 0 × F2 × F2 , H2 = F2 × 0 × F2 and H3 = F2 × F2 × 0. Then we have CK (H1 ) = C5 × 0 × 0, CK (H2 ) = 0 × C5 × 0, CK (H3 ) = 0 × 0 × C5 , N (Hi ) = 25 and CK (H) = 1. Although Corollaries 4.6 and 4.7 apply in this case, we focus on the construction in Corollary 5.5. Set c1 = (1, 0, 0), c2 = (0, 1, 0) and c3 = (0, 0, 1). Then the elements ci ∈ CK (Hi ) \ CK (H) satisfy the hypothesis of Theorem 5.3. The 2-system constructed in the proof of this result is exactly S = {H, c1 H, c2 H, c3 H, c1 c2 H, c1 c3 H, c2 c3 H, c1 c2 c3 H} and has size 8. The subgraph of the graph G (Definition 4.8) containing S and its neighbours is the following. 
c1 c2 c3 c1 c2 c1 c2 c2 c1 c3 c2 c3 c1 c2 c3 c3 c1 c2 H ❊ ❊❘ ❳H ❘ ❙ ❳H ❳ ❯ ❲❙❲❙❲❙❲❤❲❤❤①❳H ❑H ❤H ✐H ❥H ❳❳❳❳❑❯❳❑❯❑❯❑❯❯❯❯ ❲①❙❣❲❣❲❳❣❲❳❣❲❳❣❳❣❳❣❳❣❳❳❳❦❳❳❦❳❦❳❦❳❦❳❦❳❳❳❳❳❳ ❤❤❙ q✂q❥✂q✂❥❥❥❥❥✐✐✐❊✐❊❊✐✐✐❤❤❤❊❤❊❘❤❊❘❤❘❤ ❤ ❙ ① ❘ ❤ q ❤ ❘ ❳ ❣ ❤ ① ❳ q ❙ ❦ ❳ ❲ ❊ ❊ ❘ ❣ ❑❳ ❯❯ ❤ ❤ ❣❘❣❘①❣① ❳❳ ❳❳ ❳❳❳ ❙ ❲❲ ❦ q❥q❥q❥✂❥✂✐✂❥✐✐✐✐❤✐❤✐❤❤❤❤❤❤❤❤❊❊❤❊❤❤❤❣❤❣❊❣❊❣❊❣① ① ❘❘❘❘ ❦❙❦❙❦❙❦❙❦❙❦❙❲❲❲❲❲❲❲❳❲❳❳❳❳❳❳❳❳❳❳❳❳❳❳❑❳❑❳❳❑❳❳❑❳❳❯❳❯❳❯❳❯❳❯❳❯❳❯ q ❥ q ❥ ✐ ❣ ❤ ❤ ❣ ❦ ❤ ① ✂ ✐ ❤ ❥ q H1 H2 H3 c1 H2 H3 H1 H3 c3 H1 H2 H3 c1 c3 H2 c2 c3 H1 e 2c (|Ap (G)|; Z2 ) has The homology class [ZG,a· ] ∈ H support in the 2-dimensional simplicial complex on the right-hand side. The eight faces of the hollow octahedron are in bijection with the set S. For each element of S, its corresponding face is decomposed as in Example 3.4, that is, via its barycentric subdivision (shown in one of the faces of the figure). 6. Quillen’s conjecture In this section we prove the results of the introduction directly related to Quillen’s conjecture. We start with Theorem 1.7. Theorem 6.1. QDpc,Z2 holds for K ⋊ H with H = Frp and K a solvable p′ -group. Proof. As Op (K ⋊ H) = 1 we have that the action of H on K is faithful. Because K ⋊ H is solvable, its Fitting group F (K ⋊ H) = F (K) is self-centralizing [1, 31.10] and hence H also acts faithfully on the nilpotent group F (K). Now, the Frattini quotient F (K)/Φ(F (K)) is abelian and, by a result of Burnside [4, 5.1.4], H acts e c (|Ap (F (K)/Φ(F (K)) ⋊ H)|; Z2 ) 6= faithfully on it. Next, by Corollary 5.5, H r−1 e c (|Ap (F (K) ⋊ H)|; Z2 ) 6= 0. Finally, 0. Then by Theorem 5.6 we also have H r−1 e c (|Ap (F (K) ⋊ H)|; Z2 ) ≤ H e c (|Ap (K ⋊ H)|; Z2 ) finishes the the inclusion H r−1 r−1 proof.  Next we prove Theorem 1.8 of the introduction. Theorem 6.2. Quillen’s conjecture holds for solvable groups. Proof. We reproduce Quillen’s argument that reduces this result to the configuration of Theorem 6.1: Let G be a solvable group with Op (G) = 1. Let r be the p-rank of G and choose H ∈ Ap (G) of rank r. By Hall-Highman [6, Lemma 1.2.3] we have that K = Op′ (G) is self centralizing and hence H acts faithfully on this c e r−1 solvable group K. By Theorem 6.1 we have that H (|Ap (K ⋊ H)|; Z2 ) 6= 0. The e e inclusion Hr−1 (|Ap (K ⋊ H)|; Z2 ) ≤ Hr−1 (|Ap (G)|; Z2 ) finishes the proof.  14 ANTONIO DÍAZ RAMOS Next, we focus on the p-solvable case. We start describing fixed points for actions on direct products. We say that the group H acts by permutations on the direct product Y = X1 × . . . × Xm if H acts on Y and each h ∈ H permutes the groups {X1 , . . . , Xm } among themselves. This is the case, for instance, if each Xi is quasisimple, by the Krull-Schmidt Theorem [5, 3.22]. Lemma 6.3. Let H be an elementary abelian p-group acting by permutations on the m-fold direct product Y = X ×. . .×X and transitively permuting these components. Then the following hold: (1) If H ′ = NH (X) and H ′′ satisfies H = H ′ × H ′′ then Y h CY (H) = { x with x ∈ CX (H ′ )}. h∈H ′′ (2) For I ≤ H, H ′ = NH (X), I ′ = NI (X), and k equal to the number of orbits for the permutation action of I on the components of Y , we have: h i h i ′ ′ ′ CY (I) > CY (H) ⇔ k = 1 and CX (I ) > CX (H ) or k > 1 and CX (I ) > 1 . Proof. Set H ′ = NH (X) and choose H ′′ ≤ H such that H = H ′ × H ′′ . Then we have Y = h1 X × h2 X × . . . × hm X, where H ′′ = {h1 = 1, h2 , . . . , hm }. Moreover, CY (H) = CY (H ′ ) ∩ CY (H ′′ ) and we Q centralizers separately. Q compute these two For the former, we have CY (H ′ ) = h∈H ′′ Ch X (H ′ ) = h∈H ′′ h CX (H ′ ). 
For the ′′ latter, notice that Q H hregularly permutes the components of Y . The map X′′ → Y given by x 7→ h∈H ′′ x is a homomorphism which image is exactly CY (H ) (cf. [5, Lemma 3.27]). The expression in (1) follows. Regarding (2), write Y = Y1 · · · Yk , where each Yl is an orbit for the permutation action of I on the components of Y . Then CY (I) = CY1 (I) · · · CYk (I). For the action of I on each orbit Yl and with the notation of (1), we have I ′ = I ∩ H ′ and we may choose I ′′ and H ′′ such that I ′′ = I ∩ H ′′ . Moreover, if J ≤ H ′′ satisfies H ′′ = I ′′ × J, then we may write J = {j1 , . . . , jk } with Yl the I-orbit of jl X: Y i jl Yl = ( X). i∈I ′′ ′ | Note that k = |J| = |H ′′ : I ′′ | = |H:H |I:I ′ | . The result follows from inspection of the expressions in (1) for the centralizers CY (H) and CYl (I) for l = 1, . . . k.  The next result is of independent interest and it is the only place where we use the Classification of the Finite Simple Groups (CFSG). Proposition 6.4. Let H be an elementary abelian p-group acting on a direct product Y of m copies of a nonabelian simple p′ -group X and transitively permuting these components. Let H1 , . . . , Hn be linearly independent hyperplanes of H such that CY (Hi ) > CY (H) for all i. Then there exist elements ci ∈ CY (Hi ) \ CY (H) such that [ci , cj ] = 1 for all i, j. Proof. Write H ′ = NH (X), Hi′ = NHi (X) and choose H ′′ and Hi′′ for each i such that H = H ′ × H ′′ and Hi = Hi′ × Hi′′ . As Hi′ ≤ H ′ and |H : Hi | = p, we have: (1) either H ′ = Hi′ and |H ′′ |/|Hi′′ | = p, (2) or |H ′ : Hi′ | = p and |H ′′ | = |Hi′′ |. ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 15 In case (1), we have CX (H ′ ) = CX (Hi′ ) and, by Lemma 6.3(2), the permutation action of Hi on Y has exactly p orbits and |CX (Hi′ )| > 1. In case (2), Lemma 6.3(2) implies that CX (Hi′ ) > CX (H ′ ). If CH (X) = NH (X), then also CHi (X) = NHi (X) and CX (H ′ ) = CX (Hi′ ) = X, a contradiction. Otherwise, AutH (X) = NH (X)/CH (X) is a non-trivial group of outer automorphisms of X. But: (18) If P is a p-group and 1 6= P ≤ Out(X) for a nonabelian simple p′ -group X, then X is of Lie type and P is cyclic and consists of field automorphisms. The proof of the claim (18) is an exhaustive check via the CFSG. Details are provided in [9, Theorem 8.2.12(2)] and the same argument is used in [2]. Hence, AutH (X) is cyclic of order p. Then, either |CH (X) : CHi (X)| = |NHi (X) : CHi (X)| = p or these two indexes are equal to 1. In the former case, AutHi (X) is a non-trivial subgroup of AutH (X) and hence AutHi (X) = AutH (X). Then H ′ = Hi′ + CH (X) and CX (H ′ ) = CX (Hi′ ), contradiction again. So the latter case must hold, i.e., CH (X) = CHi (X) = NHi (X). We deduce that CX (Hi′ ) = X and, from (18), that CX (H ′ ) = X σ for a field automorphism σ of X of order p. To sum up, we conclude that for each hyperplane Hi : (1) either Hi has p permutation orbits on Y and CX (Hi′ ) = CX (H ′ ) > 1, (2) or Hi transitively permutes Y and CX (Hi′ ) = X > CX (H ′ ) = X σ . Now we construct the elements ci in the statement.QIf all hyperplanes are of type (1) then pick c ∈ CX (H ′ ) with c 6= 1 and set ci = h∈H ′′ h c for all i. Otherwise, i some hyperplane is of type (2), X is of Lie type and we notice that: There exists c ∈ X σ and d ∈ X \ X σ such that [c, d] = 1. The elements c and d may be chosen as elements in the maximal torus of X satQ isfying σ(c) = c and σ(d) 6= d. For Hi of type (1), set ci = h∈H ′′ h c. For Hi of i Q  type (2), set ci = h∈H ′′ h d. 
These elements satisfy [ci , cj ] = 1 for all i, j. Remark 6.5. Under the hypotheses of Proposition 6.4, it also holds that n ≤ m. Theorem 6.6. QDpc,Z2 holds for K ⋊ Frp with K a p′ -group. Proof. As Op (K ⋊ H) = 1 we have that the action of H on K is faithful. Now, the generalized Fitting subgroup F ∗ (K ⋊ H) = F ∗ (K) is self-centralizing [1, 31.13] and hence H acts faithfully on it. Here, F ∗ (K) = F (K)E(K) with [E(K), F (K)] = 1 and where E(K) is the layer, which is a central product of quasisimple groups. Moreover, By [5, 3.15(v)], Φ(F ∗ (K)) = Φ(F (K))Z(E(K)) and by [5, 3.5(v)], F (K) ∩ E(K) = Z(E(K)). Then by [5, 3.9(ii)], it follows that F ∗ (K)/Φ(F ∗ (K)) = A × B, where A is elementary abelian and B is a direct product of nonabelian simple p′ -groups. By Burnside [4, 5.1.4], H acts faithfully on D = A × B, and we are going to show that the hypothesis of Theorem 5.3 holds for D ⋊ H. Then the same argumentsQas in Theorem 6.1 finish the proof. Write B = j Bj , where each Bj is an H-orbit for the permutation action of H on Q the direct factors of B. For any subgroup I ≤ H we have CD (I) = CA (I) × j CBj (I). By Proposition 2.3, there are r linearly independent hyperplanes of H, H1 , . . . , Hr such that |CD (Hi )|/|CD (H)| > 1. Hence, for each i, either |CA (Hi )|/|CA (H)| > 1 or |CBj (Hi )|/|CBj (H)| > 1 for some j. In the former case, let c′i ∈ CA (Hi ) \ CA (H) by any element in this non-empty set and set 16 ANTONIO DÍAZ RAMOS ci = (c′i , 1) ∈ CD (Hi ) \ CD (H). To choose ci for the latter case, consider, for a fixed orbit Bj , the following set: Bj = {i ∈ {1, . . . , r} such that |CBj (Hi )|/|CBj (H)| > 1}. Let Bj = X × r2 X × . . . rm X, where X is a simple nonabelian p′ -group and r1 = 1, r2 , . . . , rm are representatives for the set H/NH (X) of size m. By Proposition 6.4, there are elements c′′i ∈ CBj (Hi ) \ CBj (H) for each i ∈ Bj such that [c′′i , c′′i′ ] = 1 for all i, i′ ∈ Bj . Then define c′i = (1, 1, . . . , 1, c′′i , 1, . . . , 1) ∈ CB (Hi ) \ CB (H), where we place c′i in the position of the orbit Bj . Define ci = (1, c′i ) ∈ CD (Hi ) \ CD (H). It is straightforward that the chosen elements ci ’s satisfy the hypothesis of Theorem 5.3, i.e., they commute pairwise.  r1 We finally prove Theorem 1.10 Theorem 6.7. Quillen’s conjecture holds for p-solvable groups. Proof. As Hall-Highman [6, Lemma 1.2.3] is valid for p-soluble groups, the argument employed in the proof of Theorem 6.2 can be used here but for replacing Theorem 6.1 by Theorem 6.6.  Final Remark 6.8. Here we give more details on Alperin’s approach compared to the approach on this work. The reductions for the former route are as follows, where K is a p′ -group: G p-solvable G solvable /o /o o/(2)/o /o /o /o // K ⋊ Fr p O CFSG  /o (1) /o /o // K ⋊ Frp , K solvable Alperin /o o/ o/ // Join of r 0-spheres Here, CFSG refers to the argument (18) already used in the proof of Proposition 6.4. The rightmost reduction on the bottom line consists of Alperin’s coprime action arguments leading to a minimal counterexample for which the join of spheres may be considered. Our approach can be summarized as follows: G p-solvable G solvable /o /o (2) /o /o o/ o/ // K ⋊ Frp , K = A × S o/ /o /o /o o/ // K ⋊ Frp /o /o /o 5.6 *j 5.3+CFSG *j *j *j *j ** 4t 44 5.2 t 4 4t 4t 4t 5.3+abelian t 4 /o (1) /o /o /o // K ⋊ Fr , K = A /o /o // K ⋊ Frp , K solvable /o 5.6 p Here, A is an abelian p′ -group and S is a p′ -group that is a direct product of nonabelian simple groups. 
Theorem 5.6 lifts classes from the quotient, and it is a tool that was not available before. Theorem 5.3 implies the existence of a 2-system, and it is a generalization to possibly non-solvable groups of the join of spheres construction (see Remark 5.4). References [1] M. Aschbacher, Finite group theory, Second edition. Cambridge Studies in Advanced Mathematics, 10. Cambridge University Press, Cambridge, 2000. [2] M. Aschbacher, S. Smith, On Quillen’s conjecture for the p-groups complex, Ann. of Math. (2) 137 (1993), no. 3, 473-529. [3] J.L. Alperin, A Lie approach to finite groups, GroupsCanberra 1989, 19, Lecture Notes in Math., 1456, Springer, Berlin, 1990. ON QUILLEN’S CONJECTURE FOR p-SOLVABLE GROUPS 17 [4] D. Gorenstein, Finite groups, Second edition. Chelsea Publishing Co., New York, 1980. [5] D. Gorenstein, R. Lyons, R. Solomon, The classification of the finite simple groups. Number 2. Part I. Chapter G, General group theory. Mathematical Surveys and Monographs, 40.2. American Mathematical Society, Providence, RI, 1996. [6] P. Hall, G. Higman, On the p-length of p-soluble groups and reduction theorems for Burnside’s problem, Proc. London Math. Soc. (3) 6 (1956), 1-42. [7] T. Hawkes, I.M. Isaacs, On the poset of p-subgroups of a p-solvable group, J. London Math. Soc. (2) 38 (1988), no. 1, 77-86. [8] D. Quillen, Homotopy properties of the poset of nontrivial p-subgroups of a group, Adv. Math. 28 (1978), no. 2, 101-128. [9] S.D. Smith, Subgroup complexes, Mathematical Surveys and Monographs, 179. American Mathematical Society, Providence, RI, 2011. Departamento de Álgebra, Geometrı́a y Topologı́a, Universidad de Málaga, Apdo correos 59, 29080 Málaga, Spain. E-mail address: [email protected]
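As a supplement to Example 4.9, the system of equations (14) for that group can be checked mechanically. The identifications used below are ours and are only implicit in the example: the nine Sylow 2-subgroups of G = (C3 × C3) ⋊ (F2 × F2) correspond to pairs (a, b) ∈ Z3 × Z3, the three conjugates of H1 (respectively H2) are indexed by the first (respectively second) coordinate, and a conjugate of a hyperplane lies in exactly the three Sylow subgroups sharing that coordinate. With A = Z3, Equations (14) then require every row sum and every column sum of the 3 × 3 array (aS) to vanish modulo 3, and the constant family of Corollary 4.6 is one non-trivial solution.

```python
# Brute-force check of Equations (14) for Example 4.9, under the (assumed)
# identification of Sylow 2-subgroups with pairs (a, b) in Z3 x Z3 and of
# hyperplane conjugates with the rows and columns of that 3 x 3 array.
from itertools import product

q = 3

def is_cycle(a):                      # a: dict (row, col) -> value in Z_q
    rows_ok = all(sum(a[(i, j)] for j in range(q)) % q == 0 for i in range(q))
    cols_ok = all(sum(a[(i, j)] for i in range(q)) % q == 0 for j in range(q))
    return rows_ok and cols_ok

# Corollary 4.6: the constant family a_S = 1 satisfies (14) and is non-trivial.
const = {(i, j): 1 for i in range(q) for j in range(q)}
assert is_cycle(const)

# Count all solutions of the homogeneous system (14) over Z3 by brute force.
cells = list(product(range(q), repeat=2))
count = sum(is_cycle(dict(zip(cells, vals)))
            for vals in product(range(q), repeat=q * q))
print("solutions of (14) over Z3:", count)
```

The count confirms that many non-zero families exist besides the constant one, in line with the remark in Example 4.9 that the system has only 6 equations in 9 variables.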
4
The Refinement Calculus of Reactive Systems∗ Viorel Preoteasa Iulia Dragomir Stavros Tripakis arXiv:1710.03979v2 [cs.LO] 8 Feb 2018 February 9, 2018 Abstract The Refinement Calculus of Reactive Systems (RCRS) is a compositional formal framework for modeling and reasoning about reactive systems. RCRS provides a language which allows to describe atomic components as symbolic transition systems or QLTL formulas, and composite components formed using three primitive composition operators: serial, parallel, and feedback. The semantics of the language is given in terms of monotonic property transformers, an extension to reactive systems of monotonic predicate transformers, which have been used to give compositional semantics to sequential programs. RCRS allows to specify both safety and liveness properties. It also allows to model input-output systems which are both non-deterministic and non-input-receptive (i.e., which may reject some inputs at some points in time), and can thus be seen as a behavioral type system. RCRS provides a set of techniques for symbolic computer-aided reasoning, including compositional static analysis and verification. RCRS comes with a publicly available implementation which includes a complete formalization of the RCRS theory in the Isabelle proof assistant. Contents 1 Introduction 1.1 Novel contributions of this paper and relation to our prior work . . . . . . . . . . . . . . . . . 3 4 2 Preliminaries 5 3 Language 3.1 An Algebra of Components . . . . . . . . . . . . 3.2 Symbolic Transition System Components . . . . 3.2.1 General STS Components . . . . . . . . . 3.2.2 Variable Name Scope . . . . . . . . . . . 3.2.3 Stateless STS Components . . . . . . . . 3.2.4 Deterministic STS Components . . . . . . 3.2.5 Stateless Deterministic STS Components 3.3 Quantified Linear Temporal Logic Components . 3.3.1 QLTL . . . . . . . . . . . . . . . . . . . . 3.3.2 QLTL Components . . . . . . . . . . . . . 3.4 Well Formed Composite Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 6 7 7 9 10 10 11 11 12 13 14 ∗ The bulk of this work was done while Preoteasa and Dragomir were at Aalto University. Preoteasa is currently at Space Systems Finland. Dragomir is currently at Verimag. Tripakis is affiliated with Aalto University and the University of California at Berkeley. This work was partially supported by the Academy of Finland and the U.S. National Science Foundation (awards #1329759 and #1139138). Dragomir was supported during the writing of this paper by the Horizon 2020 Programme Strategic Research Cluster (grant agreement #730080 and #730086). 1 4 Semantics 4.1 Monotonic Property Transformers . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 Some Commonly Used MPTs . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Operators on MPTs: Function Composition, Product, and Fusion . . . 4.1.3 Novel Operators Used in Semantical Definition of Feedback . . . . . . . 4.1.4 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
4.2 Subclasses of MPTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Relational MPTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Guarded MPTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.3 Other subclasses and overview . . . . . . . . . . . . . . . . . . . . . . . 4.3 Semantics of Components as MPTs . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 Example: Two Alternative Derivations of the Semantics of Diagram Sum 4.3.2 Characterization of Legal Input Traces . . . . . . . . . . . . . . . . . . . 4.3.3 Semantic Equivalence and Refinement for Components . . . . . . . . . . 4.3.4 Compositionality Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . of Fig. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 15 15 17 17 18 19 19 20 20 21 22 23 24 24 5 Symbolic Reasoning 5.1 Symbolic Transformation of STS Components to QLTL Components . . . . . . . . . . . . . . 5.2 Symbolic Transformations of Special Atomic Components to More General Atomic Components 5.3 Symbolic Computation of Serial Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 Symbolic Serial Composition of Two QLTL Components . . . . . . . . . . . . . . . . . 5.3.2 Symbolic Serial Composition of Two General STS Components . . . . . . . . . . . . . 5.3.3 Symbolic Serial Composition of Two Stateless STS Components . . . . . . . . . . . . 5.3.4 Symbolic Serial Composition of Two Deterministic STS Components . . . . . . . . . . 5.3.5 Symbolic Serial Composition of Two Stateless Deterministic STS Components . . . . 5.3.6 Symbolic Serial Composition of Two Arbitrary Atomic Components . . . . . . . . . . 5.3.7 Correctness of Symbolic Serial Composition . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Symbolic Computation of Parallel Composition . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 Symbolic Parallel Composition of Two QLTL Components . . . . . . . . . . . . . . . 5.4.2 Symbolic Parallel Composition of Two General STS Components . . . . . . . . . . . . 5.4.3 Symbolic Parallel Composition of Two Stateless STS Components . . . . . . . . . . . 5.4.4 Symbolic Parallel Composition of Two Deterministic STS Components . . . . . . . . . 5.4.5 Symbolic Parallel Composition of Two Stateless Deterministic STS Components . . . 5.4.6 Symbolic Parallel Composition of Two Arbitrary Atomic Components . . . . . . . . . 5.4.7 Correctness of Symbolic Parallel Composition . . . . . . . . . . . . . . . . . . . . . . . 5.5 Symbolic Computation of Feedback Composition for Decomposable Deterministic STS Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.1 Decomposable Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.2 Symbolic Feedback of a Decomposable Deterministic STS Component . . . . . . . . . 5.5.3 Symbolic Feedback of a Decomposable Stateless Deterministic STS Component . . . . 5.5.4 Correctness of Symbolic Feedback Composition . . . . . . . . . . . . . . . . . . . . . . 5.6 Closure Properties of MPT Subclasses w.r.t. Composition Operators . . . . . . . . . . . . . . 5.7 Symbolic Simplification of Arbitrary Composite Components . . . . . . . . . . . . . . . . . . 5.7.1 Deterministic and Algebraic Loop Free Composite Components . . . . . . . . . . . . . 
1 Introduction

This paper presents the Refinement Calculus of Reactive Systems (RCRS), a comprehensive framework for compositional modeling of and reasoning about reactive systems. RCRS originates from the precursor theory of synchronous relational interfaces [78, 79], and builds upon the classic refinement calculus [10]. A number of conference publications on RCRS exist [66, 31, 67, 65, 33]. This paper collects some of these results and extends them in significant ways. The novel contributions of this paper and their relation to our previous work are presented in §1.1.

The motivation for RCRS stems from the need for a compositional treatment of reactive systems. Generally speaking, compositionality is a divide-and-conquer principle. As systems grow in size, they grow in complexity, and dealing with them in a monolithic manner becomes unmanageable. Compositionality comes to the rescue, and it takes many forms.

Many industrial-strength systems have for many years employed mechanisms for compositional modeling. An example is the Simulink tool from the MathWorks. Simulink is based on the widespread notation of hierarchical block diagrams. Such diagrams are both intuitive and naturally compositional: a block can be refined into sub-blocks, sub-sub-blocks, and so on, creating hierarchical models of arbitrary depth. This allows the user to build large models (many thousands of blocks) while at the same time managing their complexity (at any level of the hierarchy, only a few blocks may be visible).

But Simulink's compositionality has limitations, despite its hierarchical modeling approach. Even relatively simple problems, such as modular code generation (generating code for a block independently from its context), require techniques not always available in standard code generators [53, 52]. Perhaps more serious, and more relevant in the context of this paper, is Simulink's lack of formal semantics, and the consequent lack of rigorous analysis techniques that can leverage the advances in the fields of computer-aided verification and programming languages.

RCRS provides a compositional formal semantics for Simulink in particular, and for hierarchical block diagram notations in general, by building on well-established principles from the formal methods and programming language domains. In particular, RCRS relies on the notion of refinement (and its counterpart, abstraction), both of which are fundamental in system design. Refinement is a binary relation between components, and ideally characterizes substitutability: the conditions under which some component can replace another component without compromising the behavior of the overall system. RCRS refinement is compositional in the sense that it is preserved by composition: if A′ refines A and B′ refines B, then the composition of A′ and B′ refines the composition of A and B. RCRS can be viewed as a refinement theory.
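To make the substitutability reading of refinement concrete, the following is a minimal sketch in Python, with hypothetical names and a small finite domain standing in for the trace-based semantics developed later. It represents a component by its set of legal inputs and its input-output relation, uses a finite-relational rendering of the refinement condition that the paper formalizes in §4.2, and checks on one example that refinement is preserved by serial composition. This is purely illustrative and is not the RCRS formalization.

```python
# Illustrative sketch only: components as finite input/output relations with
# an explicit set of legal inputs. Names and domains are hypothetical.
from itertools import product

DOM = range(-3, 4)  # small finite domain standing in for input/output values

class Comp:
    def __init__(self, legal, rel):
        # legal: set of legal inputs; rel: set of (input, output) pairs
        self.legal = set(legal)
        self.rel = set(rel)

    def outputs(self, x):
        return {y for (a, y) in self.rel if a == x}

def refines(abstract, concrete):
    # "concrete refines abstract": every input legal for the abstract component
    # stays legal, and on those inputs the concrete outputs are a subset.
    return (abstract.legal <= concrete.legal and
            all(concrete.outputs(x) <= abstract.outputs(x)
                for x in abstract.legal))

def serial(c1, c2):
    # Serial composition: an input is legal only if it is legal for c1 and
    # every possible c1-output is legal for c2 (cf. the composition of Section 4).
    legal = {x for x in c1.legal if c1.outputs(x) <= c2.legal}
    rel = {(x, z) for x in legal
                  for y in c1.outputs(x)
                  for z in c2.outputs(y)}
    return Comp(legal, rel)

# Abstract specs: A outputs some y with y >= x; B outputs some z with z <= y.
A = Comp(DOM, {(x, y) for x, y in product(DOM, DOM) if y >= x})
B = Comp(DOM, {(y, z) for y, z in product(DOM, DOM) if z <= y})

# More deterministic implementations.
A1 = Comp(DOM, {(x, x) for x in DOM})   # picks y = x
B1 = Comp(DOM, {(y, y) for y in DOM})   # picks z = y

assert refines(A, A1) and refines(B, B1)
# Refinement is preserved by composition: A1 ; B1 refines A ; B.
assert refines(serial(A, B), serial(A1, B1))
print("compositional refinement check passed on this finite example")
```

RCRS develops this refinement relation in full generality in §4 and §5, at the level of monotonic property transformers and symbolic contracts.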
It can also be viewed as a behavioral type system, similar to type systems for programming languages, but targeted to reactive systems. By behavioral we mean a type system that can capture not just data types of input and output ports of components (bool, int, etc.), but also complete specifications of the behavior of those components. As discussed more extensively in [79], such a behavioral type system has advantages over a full-blown verification system, as it is more lightweight: for instance, it allows type checking, which does not require the user to provide a formal specification of the properties that the model must satisfy, the model must simply type-check. To achieve this, it is essential, as argued in [79], for the framework to be able to express non-inputreceptive (also called non-input-enabled or non-input-complete) components, i.e., components that reject some input values. RCRS allows this. For example, a square-root component can be described√in RCRS alternatively as: (1) a non-input-receptive component C1 with input-output contract x ≥ 0 ∧ y = x (where x, y are the input √ and output variables, respectively); or (2) an input-receptive component C2 with contract x ≥ 0 → y = x. Simple examples of type-checking in the RCRS context are the following: Connecting the non-input-receptive square-root component C1 to a component which outputs x = −1 results in a type error (incompatibility). Connecting C1 to a non-deterministic component which outputs an arbitrary value for x (this can be specified by the formula/contract true) also results in a type error. Yet, in both of the above cases C2 will output a non-deterministically chosen value, even though the input constraint is not satisfied. RCRS allows both non-input-receptive and non-deterministic components. This combination results in a 3 game-theoretic interpretation of the composition operators, like in interface theories [22, 79]. Refinement also becomes game-theoretic, as in alternating refinement [7]. Game-theoretic composition can be used for an interesting form of type inference. For example, if we connect the non-input-receptive square-root component C1 above to a non-deterministic component C3 with input-output contract x ≥ u + 1 (where x is the output of C3 , and u its input), and apply the (game-theoretic) serial composition of RCRS, we obtain the condition u ≥ −1 on the external input of the overall composition. The constraint u ≥ −1 represents the weakest condition on u which ensures compatibility of the connected components. In a nutshell, RCRS consists of the following elements: 1. A modeling language (syntax), which allows to describe atomic components, and composite components formed by a small number of primitive composition operators (serial, parallel, and feedback). The language is described in §3. 2. A formal semantics, presented in §4. Component semantics are defined in terms of monotonic property transformers (MPTs). MPTs are extensions of monotonic predicate transformers used in theories of programming languages, and in particular in refinement calculus [10]. Predicate transformers transform sets of post-states (states reached by the program after its computation) into sets of pre-states (states where the program begins). Property transformers transform sets of a component’s output traces (infinite sequences of output values) into sets of input traces (infinite sequences of input values). Using this semantics we can express both safety and liveness properties. 3. 
A set of symbolic reasoning techniques, described in §5. In particular, RCRS offers techniques to • compute the symbolic representation of a composite component from the symbolic representations of its sub-components; • simplify composite components into atomic components; • reduce checking refinement between two components to checking satisfiability of certain logical formulas; • reduce input-receptiveness and compatibility checks to satisfiability; • compute the legal inputs of a component symbolically. We note that these techniques are for the most part logic-agnostic, in the sense that they do not depend on the particular logic used to represent components. 4. A toolset, described briefly in §6. The toolset consists mainly of: • a full implementation of the RCRS theory in the Isabelle proof assistant [61]; • a translator of Simulink diagrams into RCRS code. Our implementation is open-source and publicly available from http://rcrs.cs.aalto.fi. 1.1 Novel contributions of this paper and relation to our prior work Several of the ideas behind RCRS originated in the theory of synchronous relational interfaces [78, 79]. The main novel contributions of RCRS w.r.t. that theory are: (1) RCRS is based on the semantic foundation of monotonic property transformers, whereas relational interfaces are founded on relations; (2) RCRS can handle liveness properties, whereas relational interfaces can only handle safety; (3) RCRS has been completely formalized and most results reported in this and other RCRS papers have been proven in the Isabelle proof assistant; (4) RCRS comes with a publicly available toolset (http://rcrs.cs.aalto.fi) which includes the Isabelle formalization, a Translator of Simulink hierarchical block diagrams [31, 33], and a Formal Analyzer which performs, among other functions, compatibility checking, refinement checking, and automatic simplification of RCRS contracts. 4 RCRS was introduced in [66], which focuses on monotonic property transformers as a means to extend relational interfaces with liveness properties. [66] covers serial composition, but not parallel nor feedback. It also does not cover symbolic reasoning nor the RCRS implementation. Feedback is considered in [67], whose aim is in particular to study instantaneous feedback for non-deterministic and non-input-receptive systems. The study of instantaneous feedback is an interesting problem, but beyond the scope of the current paper. In this paper we consider non-instantaneous feedback, i.e., feedback for systems without same-step cyclic dependencies (no algebraic loops). [31] presents part of the RCRS implementation, focusing on the translation of Simulink (and hierarchical block diagrams in general) into an algebra of components with three composition primitives, serial, parallel, and feedback, like RCRS. As it turns out, there is not a unique way to translate a graphical notation like Simulink into an algebraic formalism like RCRS. The problem of how exactly to do it and what are the trade-offs is an interesting one, but beyond the scope of the current paper. This problem is studied in depth in [31] which proposes three different translation strategies and evaluates their pros and cons. [31] leaves open the question whether the results obtained by the different translations are equivalent. [64] settles this question, by proving that a class of translations, including the ones proposed in [31] are semantically equivalent for any input block diagram. 
[65] also concerns the RCRS implementation, discussing solutions to subtle typing problems that arise when translating Simulink diagrams into RCRS/Isabelle code. In summary, the current paper does not cover the topics covered in [31, 64, 65, 67] and can be seen as a significantly revised and extended version of [66]. The main novel contributions with respect to [66] are the following: (1) a language of components (§3); (2) a revised MPT semantics (§4), including in particular novel operators for feedback (§4.1.3) and a classification of MPT subclasses (§4.2); (3) a new section of symbolic reasoning (§5). 2 Preliminaries Sets, types. We use capital letters X, Y , Σ, . . . to denote types or sets, and small letters to denote elements of these types x ∈ X, y ∈ Y , etc. We denote by B the type of Boolean values true and false. We use ∧, ∨, ⇒, and ¬ for the Boolean operations. The type of natural numbers is denoted by N, while the type of real numbers is denoted by R. The Unit type contains a single element denoted (). Cartesian product. For types X and Y , X × Y is the Cartesian product of X and Y , and if x ∈ X and y ∈ Y , then (x, y) is a tuple from X × Y . The empty Cartesian product is Unit. We assume that we have only flat products X1 × . . . × Xn , and then we have (X1 × . . . × Xn ) × (Y1 × . . . × Ym ) = X1 × . . . × Xn × Y1 × . . . × Ym Functions. If X and Y are types, X → Y denotes the type of functions from X to Y . The function type constructor associates to the right (e.g., X → Y → Z = X → (Y → Z)) and the function interpretation associates to the left (e.g., f (x)(y) = (f (x))(y)). In order to construct functions we use lambda notation, e.g., (λx, y : x + y + 1) : N → N → N. Similarly, we can have tuples in the definition of functions, e.g., (λ(x, y) : x + y + 2) : (N × N) → N. The composition of two functions f : X → Y and g : Y → Z, is a function denoted g ◦ f : X → Z, where (g ◦ f )(x) = g(f (x)). Predicates. A predicate is a function returning Boolean values, e.g., p : X → Y → B, p(x)(y) = (x = y). We define the smallest predicate ⊥ : X → B where ⊥(x) = false for all x ∈ X. The greatest predicate is > : X → B, with >(x) = true for all x ∈ X. We will often interpret predicates as sets. A predicate p : X → B can be viewed as the set of all x ∈ X such that p(x) = true. For example, viewing two predicates p, q as sets, we can write p ⊆ q, meaning that for all x, p(x) ⇒ q(x). 5 Relations. A relation is a predicate with at least two arguments, e.g., r : X → Y → B. For such a relation r, we denote by in(r) : X → B the predicate in(r)(x) = (∃y : r(x)(y)). If the relation r has more than two arguments, then we define in(r) similarly by quantifying over the last argument. We extend point-wise all operations on Booleans to operations on predicates and relations. For example, if r, r0 : X → Y → B are two relations, then r ∧ r0 and r ∨ r0 are the relations given by (r ∧ r0 )(x)(y) = r(x)(y) ∧ r0 (x)(y) and (r ∨ r0 )(x)(y) = r(x)(y) ∨ r0 (x)(y). We also introduce the order on relations r ⊆ r0 = (∀x, y : r(x)(y) ⇒ r0 (x)(y)). The composition of two relations r : X → Y → B and r0 : Y → Z → B is a relation (r ◦ r0 ) : X → Z → B, where (r ◦ r0 )(x)(z) = (∃y : r(x)(y) ∧ r0 (y)(z)). Infinite sequences. If Σ is a type, then Σω = (N → Σ) is the set of all infinite sequences over Σ, also called traces. For a trace σ ∈ Σω , let σi = σ(i) be the i-th element in the trace. Let σ i ∈ Σω denote the suffix of σ starting from the i-th step, i.e., σ i = σi σi+1 · · · . 
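These conventions for predicates and relations translate directly into executable form. The following is a small illustrative sketch (Python; the helper names are ours, not RCRS notation) that encodes predicates as Boolean-valued functions and relations in curried form, together with the in(·) operator, relational composition, and point-wise lifting, over a small finite domain (so the existential quantifiers are bounded).

```python
# Illustrative sketch of the preliminaries: predicates and relations as
# Boolean-valued functions, with in(r) and relational composition.
# Helper names are ours, not part of RCRS.

DOM = range(5)

# Predicates p : X -> B, e.g. "is even".
even = lambda x: x % 2 == 0

# Relations in curried form r : X -> Y -> B.
r = lambda x: lambda y: y == x + 1          # "y is the successor of x"
s = lambda y: lambda z: z >= 2 * y          # "z is at least twice y"

def in_(r, domain=DOM):
    # in(r)(x) = (exists y : r(x)(y)), with the quantifier bounded to a
    # finite domain in this sketch.
    return lambda x: any(r(x)(y) for y in domain)

def compose(r, s, domain=DOM):
    # (r o s)(x)(z) = (exists y : r(x)(y) and s(y)(z))
    return lambda x: lambda z: any(r(x)(y) and s(y)(z) for y in domain)

def conj(r, s):
    # point-wise lifting of a Boolean operation to relations
    return lambda x: lambda y: r(x)(y) and s(x)(y)

rs = compose(r, s)
assert even(4) and in_(r)(3)     # 3 has a successor in the domain
assert rs(1)(4)                  # 1 -> 2 -> 4, since 4 >= 2*2
assert not rs(1)(3)              # 3 < 2*2
assert conj(r, s)(1)(2)          # both r and s hold of (1, 2)
```

In the actual formalization the same definitions are used with unbounded quantifiers, and infinite sequences and properties over them are treated in the same style.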
We often view a pair of traces (σ, σ 0 ) ∈ Σω × Σ0ω as being also a trace of pairs (λi : (σi , σi0 )) ∈ (Σ × Σ0 )ω . Properties. A property is a predicate p over a set of infinite sequences. Formally, p ∈ (Σω → B). Just like any other predicate, a property can also be viewed as a set. In particular, a property can be viewed as a set of traces. 3 Language 3.1 An Algebra of Components We model systems using a simple language of components. The grammar of the language is as follows: component atomic component STS component ::= ::= ::= composite component ::= atomic component | composite component STS component | QLTL component GEN STS component | STATELESS STS component | DET STS component | DET STATELESS STS component component ; component | component k component | fdbk(component) The elements of the above grammar are defined in the remaining of this section, where examples are also given to illustrate the language. In a nutshell, the language contains atomic components of two kinds: atomic components defined as symbolic transition systems (STS component), and atomic components defined as QLTL formulas over input and output variables (QLTL component). STS components are split in four categories: general STS components, stateless STS components, deterministic STS components, and deterministic stateless STS components. Semantically, the general STS components subsume all the other more specialized STS components, but we introduce the specialized syntax because symbolic compositions of less general components become simpler, as we shall explain in the sequel (see §5). Also, as it turns out, atomic components of our framework form a lattice, shown in Fig. 5, from the more specialized ones, namely, deterministic stateless STS components, to the more general ones, namely QLTL components. The full definition of this lattice will become apparent once we provide a symbolic transformation of STS to QLTL components, in §5.1. Apart from atomic components, the language also allows to form composite components, by composing (atomic or other composite) components via three composition operators: serial ;, parallel k, and feedback fdbk, as depicted in Fig. 1. The serial composition of two components C, C 0 is formed by connecting the output(s) of C to the input(s) of C 0 . Their parallel composition is formed by “stacking” the two components on top of each other without forming any new connections. The feedback of a component C is obtained by connecting the first output of C to its first input. Our language is inspired by graphical notations such as Simulink, and hierarchical block diagrams in general. But our language is textual, not graphical. An interesting question is how to translate a graphical 6 x x y C y C 0 z u (a) Serial composition: C ; C 0 y x1 C x2 .. . v C0 (b) Parallel composition: C k C 0 y1 C y2 .. . (c) Feedback composition: fdbk(C) Figure 1: The three composition operators of RCRS. block diagram into a term in our algebra. We will not address this question here, as the issue is quite involved. We refer the reader to [31], which includes an extensive discussion on this topic. Suffice it to say here that there are generally many possible translations of a graphical diagram into a term in our algebra (or generally any algebra that contains primitive serial, parallel, and feedback composition operators). These translations achieve different tradeoffs in terms of size, readability, computational properties, and so on. See [31] for details. 
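As an illustration of this grammar (and only as an illustration; the actual RCRS implementation is an Isabelle formalization, see §6), the component algebra can be mirrored as a small abstract syntax tree. The class and field names below are hypothetical, and the example atomic components are made up.

```python
# A small AST mirroring the component grammar of Section 3.1 (illustrative
# sketch only; the actual RCRS implementation is an Isabelle formalization).
from dataclasses import dataclass

class Component: ...

@dataclass
class STS(Component):        # general STS component sts(x, y, s, init_exp, trs_exp)
    inputs: tuple
    outputs: tuple
    states: tuple
    init_exp: str
    trs_exp: str

@dataclass
class QLTL(Component):       # QLTL component qltl(x, y, phi)
    inputs: tuple
    outputs: tuple
    formula: str

@dataclass
class Serial(Component):     # C ; C'
    first: Component
    second: Component

@dataclass
class Parallel(Component):   # C || C'
    top: Component
    bottom: Component

@dataclass
class Feedback(Component):   # fdbk(C)
    body: Component

# A composite term: fdbk(A ; (B || C)), with hypothetical atomic components.
A = STS(("x",), ("y", "z"), ("s",), "s = 0", "y = s and z = x and s' = x")
B = QLTL(("y",), ("v",), "G (v = y)")
C = QLTL(("z",), ("w",), "F (w > z)")
term = Feedback(Serial(A, Parallel(B, C)))
```

A translator in the spirit of [31] would produce terms of essentially this shape from a hierarchical block diagram.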
1 1 In z Add UnitDelay 1 Out Figure 2: A Simulink diagram modeling the 1-step delayed sum of its input In. Each atomic block as well as the entire system can be formalized as STS components (see §3.2). The entire system can also be formalized as a composite component (see below). As an example, consider the Simulink diagram shown in Fig. 2. This diagram can be represented in our language as a composite component Sum defined as Sum = fdbk(Add ; UnitDelay ; Split) where Add, UnitDelay, and Split are atomic components (for a definition of these atomic components see §3.2). Here Split models the “fan-out” element in the Simulink diagram (black bullet) where the output wire of UnitDelay splits in two wires going to Out and back to Add.1 3.2 Symbolic Transition System Components We introduce all four categories of STS components and at the same time provide syntactic mappings from specialized STS components to general STS components. 3.2.1 General STS Components A general symbolic transition system component (general STS component) is a transition system described symbolically, with Boolean expressions over input, output, state, and next state variables defining the initial states and the transition relation. When we say “Boolean expression” (here and in the definitions that follow) we mean an expression of type B, in some arbitrary logic, not necessarily restricted to propositional logic. For example, if x is a variable of numerical type, then x > 0 is a Boolean expression. In the definition that follows, s0 denotes the primed, or next state variable, corresponding to the current state variable s. Both 1 Note that the Simulink input and output ports In and Out are not explicitly represented in Sum. They are represented implicitly: In corresponds to the second input of Add, which carries over as the unique external input of Sum (thus, Sum is an “open system” in the sense that it has open, unconnected, inputs); Out corresponds to the second output of Split, which carries over as the unique external output of Sum. 7 can be vectors of variables. For example, if s = (s1 , s2 ) then s0 = (s01 , s02 ). We assume that s0 has the same type as s. Definition 1 (STS component). A (general) STS component is a tuple sts(x : Σx , y : Σy , s : Σs , init exp : B, trs exp : B) where x, y, s are input, output and state variables (or tuples of variables) of types Σx , Σy , Σs , respectively, init exp is a Boolean expression on s (in some logic), and trs exp is a Boolean expression on x, y, s, s0 (in some logic). Intuitively, an STS component is a non-deterministic system which for an infinite input sequence σx ∈ Σω x produces as output an infinite sequence σy ∈ Σω y . The system starts non-deterministically at some state σs (0) satisfying init exp. Given first input σx (0), the system non-deterministically computes output σy (0) and next state σs (1) such that trs exp holds (if no such values exist, then the input σx (0) is illegal, as discussed in more detail below). Next, it uses the following input σx (1) and state σs (1) to compute σy (1) and σs (2), and so on. We will sometimes use the term contract to refer to the expression trs exp. Indeed, trs exp can be seen as specifying a contract between the component and its environment, in the following sense. At each step in the computation, the environment must provide input values that do not immediately violate the contract, i.e., for which we can find values for the next state and output variables to satisfy trs exp. 
Then, it is the responsibility of the component to find such values, otherwise it is the component’s “fault” if the contract is violated. This game-theoretic interpretation is similar in spirit with the classic refinement calculus for sequential programs [10]. We use Σx , Σy , Σs in the definition above to emphasize the types of the input, output and the state, and the fact that, when composing components, the types should match. However, in practice we often omit the types, unless they are required to unambiguously specify a component. Also note that the definition does not fix the logic used for the expressions init exp and trs exp. Indeed, our theory and results are independent from the choice of this logic. The choice of logic matters for algorithmic complexity and decidability. We will return to this point in §5. Finally, for simplicity, we often view the formulas init exp and trs exp as semantic objects, namely, as predicates. Adopting this view, init exp becomes the predicate init : Σs → B, and trs exp the predicate trs exp : Σs → Σx → Σs → Σy → B. Equivalently, trs exp can be interpreted as a relation trs exp : (Σs × Σx ) → (Σs × Σy ) → B. Throughout this paper we assume that init exp is satisfiable, meaning that there is at least one valid initial state. Examples. In the examples provided in this paper, we often specify systems that have tuples as input, state and output variables, in different equivalent ways. For example, we can introduce a general STS component with two inputs as sts((n : N, x : R), s : R, y : R, s > 0, s0 > s ∧ y + s = xn ), but also as sts((n, x) : N × R, s : R, y : R, s > 0, s0 > s ∧ y + s = xn ), or sts(z : N × R, y : R, s : R, s > 0, s0 > s ∧ y + s = snd(z)fst(z) ), where fst and snd return the first and second elements of a pair. Let us model a system that at every step i outputs the input received at previous step i − 1 (assume that the initial output value is 0). This corresponds to Simulink’s commonly used UnitDelay block, which is also modeled in the diagram of Fig. 2. This block can be represented by an STS component, where a state variable s is needed to store the input at moment i such that it can be used at the next step. We formally define this component as UnitDelay = sts(x, y, s, s = 0, y = s ∧ s0 = x). We use first-order logic to define init exp and trs exp. Here init exp is s = 0, which initializes the state variable s with the value 0. The trs exp is y = s ∧ s0 = x, that is at moment i the current state variable s which stores the input x received at moment i − 1 is outputed and its value is updated. As another example, consider again the composite component Sum modeling the diagram of Fig. 2. Sum could also be defined as an atomic STS component: Sum = sts(x, y, s, s = 0, y = s ∧ s0 = s + x). 8 In §5 we will show how we can automatically and symbolically simplify composite component terms such as fdbk(Add ; UnitDelay ; Split), to obtain syntactic representations of atomic components such as the one above. These examples illustrate systems coming from Simulink models. However, our language is more general, and able to accommodate the description of other systems, such as state machines à la nuXmv [16], or input/output automata [54]. In fact, both UnitDelay and Sum are deterministic, so they could also be defined as deterministic STS components, as we will see below. Our language can capture non-deterministic systems easily. An example of a non-deterministic STS component is the following: C = sts(x, y, s, s = 0, x + s ≤ y). 
For an input sequence σx ∈ Nω , C outputs a non-deterministically chosen sequence σy such that the transition expression x + s ≤ y is satisfied. Since there is no formula in the transition expression tackling the next state variable, this is updated also non-deterministically with values from N. Our language can also capture non-input-receptive systems, that is, systems which disallow some input values as illegal. For instance, a component performing division, but disallowing division by zero, can be specified as follows: x Div = sts((x, y), z, (), true, y 6= 0 ∧ z = ). y Note that Div has an empty tuple of state variables, s = (). Such components are called stateless, and are introduced in the sequel. Even though RCRS is primarily a discrete-time framework, we have used it to model and verify continuoustime systems such as those modeled in Simulink (see §6). We do this by discretizing time using a time step parameter ∆t > 0 and applying Euler numerical integration. Then, we can model Simulink’s Integrator block in RCRS as an STS component parameterized by ∆t: Integrator∆t = sts x, y, s, s = 0, y = s ∧ s0 = s + x · ∆t  More complex dynamical system blocks can be modeled in a similar fashion. For instance, Simulink’s Transfer Fcn block, with transfer function s2 + 2 0.5s2 + 2s + 1 can be modeled in RCRS as the following STS component parameterized by ∆t: TransferFcn∆t where = sts x, y, (s1 , s2 ), s1 = 0 ∧ s2 = 0, trs) trs = (y = −8 · s1 + 2 · x) ∧ (s01 = s1 + (−4 · s1 − 2 · s2 + x) · ∆t) ∧ (s02 = s2 + s1 · ∆t) 3.2.2 Variable Name Scope We remark that variable names in the definition of atomic components are local. This holds for all atomic components in the language of RCRS (including STS and QLTL components, defined in the sequel). This means that if we replace a variable with another one in an atomic component, then we obtain a semantically equivalent component. For example, the two STS components below are equivalent (the semantical equivalence symbol ≡ will be defined formally in Def. 20, once we define the semantics): sts((x, y), z, s, s > 0, z > s + x + y) ≡ sts((u, v), w, t, t > 0, w > t + u + v) 9 3.2.3 Stateless STS Components A special STS component is one that has no state variables: Definition 2 (Stateless STS component). A stateless STS component is a tuple C = stateless(x : Σx , y : Σy , io exp : B) where x, y are the input and output variables, and io exp is a Boolean expression on x and y. Stateless STS components are special cases of general STS components, as defined by the mapping stateless2sts: stateless2sts(C) = sts(x, y, (), true, io exp). Examples. A trivial stateless STS component is the one that simply transfers its input to its output (i.e., a “wire”). We denote such a component by Id, and we formalize it as Id = stateless(x, y, y = x). Another simple example is a component with no inputs and a single output, which always outputs a constant value c (of some type). This can be formalized as the following component parameterized by c: Constc = stateless((), y, y = c). Component Add from Fig. 2, which outputs the sum of its two inputs, can be modeled as a stateless STS component: Add = stateless((x, y), z, z = x + y). Component Split from Fig. 2 can also be modeled as a stateless STS component: Split = stateless(x, (y, z), y = x ∧ z = x). The Div component introduced above is stateless, and therefore can be also specified as follows: x . Div = stateless (x, y), z, y 6= 0 ∧ z = y The above examples are not only stateless, but also deterministic. 
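To make the execution intuition of Definition 1 concrete, here is a small simulation sketch (Python; the helper names are ours, and candidate state and output values are drawn from finite sets, whereas the framework itself reasons symbolically). It runs a general STS component over a finite input prefix, resolving non-determinism arbitrarily, and reports an illegal input when no next state and output satisfy the transition predicate. The second component is a made-up non-input-receptive example in the spirit of Div.

```python
# Illustrative simulator for general STS components (Definition 1), with the
# transition relation given as a Python predicate trs(s, x, s_next, y).
# Candidate values are enumerated from small finite sets; this is a sketch,
# not the symbolic machinery RCRS actually uses.
from itertools import product

def run_sts(init, trs, inputs, state_dom, out_dom):
    states = [s for s in state_dom if init(s)]
    if not states:
        raise ValueError("no initial state satisfies init")
    s = states[0]
    outputs = []
    for k, x in enumerate(inputs):
        choices = [(sn, y) for sn, y in product(state_dom, out_dom)
                   if trs(s, x, sn, y)]
        if not choices:
            return outputs, f"input {x!r} illegal at step {k}"
        s, y = choices[0]   # pick one transition (non-determinism resolved arbitrarily)
        outputs.append(y)
    return outputs, "ok"

INTS = range(-10, 11)

# UnitDelay = sts(x, y, s, s = 0, y = s and s' = x)
unit_delay = dict(init=lambda s: s == 0,
                  trs=lambda s, x, sn, y: y == s and sn == x)

print(run_sts(unit_delay["init"], unit_delay["trs"], [3, 5, 7], INTS, INTS))
# -> ([0, 3, 5], 'ok')

# A made-up non-input-receptive stateless component: it requires x >= 0.
nonneg = dict(init=lambda s: True,
              trs=lambda s, x, sn, y: x >= 0 and y == x)

print(run_sts(nonneg["init"], nonneg["trs"], [2, -1, 4], [()], INTS))
# -> ([2], 'input -1 illegal at step 1')
```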
We introduce deterministic STS components next. 3.2.4 Deterministic STS Components Deterministic STS components are those which, for given current state and input, have at most one output and next state. Syntactically, they are introduced as follows: Definition 3 (Deterministic STS component). A deterministic STS component is a tuple det(x : Σx , s : Σs , a : Σs , inpt exp : B, next exp : Σs , out exp : Σy ) where x, s are the input and state variables, a ∈ Σs is the initial value of the state variable, inpt exp is a Boolean expression on s and x defining the legal inputs, next exp is an expression of type Σs on x and s defining the next state, and out exp is an expression of type Σy on x and s defining the output. Deterministic STS components are special cases of general STS components, as defined by the mapping det2sts: det2sts(C) = (x, y, s, s = a, inpt exp ∧ s0 = next exp ∧ y = out exp) where y is a new variable name (or tuple of new variable names) of type Σy . Note that a deterministic STS component has a separate expression inpt exp to define legal inputs. A separate such expression is not needed for general STS components, where the conditions for legal inputs are part of the expression trs exp. For example, compare the definition of Div as a general STS above, and as a stateless deterministic STS below (see §3.2.5). 10 Examples. As mentioned above, all three components, UnitDelay, Add, and Split from Fig. 2, as well as Div and Const, are deterministic. They could therefore be specified in our language as deterministic STS components, instead of general STS components:  UnitDelay = det x, s, 0, true, x, s  Constc = det (), (), (), true, (), c  Add = det (x, y), (), (), true, (), x + y  Split = det x, (), (), true, (), (x, x) x Div = det (x, y), (), (), y 6= 0, (), y The component Sum modeling the entire system is also deterministic, and could be defined as a deterministic STS component: Sum = det(x, s, 0, true, s + x, s). Note that these alternative specifications for each of those components, although syntactically distinct, will turn out to be semantically equivalent by definition, when we introduce the semantics of our language, in §4. 3.2.5 Stateless Deterministic STS Components STS components which are both deterministic and stateless can be specified as follows: Definition 4 (Stateless deterministic STS component). A stateless deterministic STS component is a tuple C = stateless det(x : Σx , inpt exp : B, out exp : Σy ) where x is the input variable, inpt exp is a Boolean expression on x defining the legal inputs, and out exp is an expression of type Σy on x defining the output. Stateless deterministic STS components are special cases of both deterministic STS components, and of stateless STS components, as defined by the mappings stateless det2det(C) = det(x, (), (), inpt exp, (), out exp) stateless det2stateless(C) = stateless(x, y, inpt exp ∧ y = out exp) where y is a new variable name or a tuple of new variable names. Examples. Many of the examples introduced above are both deterministic and stateless. They could be specified as follows:  Id = stateless det x, true, x  Constc = stateless det (), true, c  Add = stateless det (x, y), true, x + y  Split = stateless det x, true, (x, x) x Div = stateless det (x, y), y 6= 0, y 3.3 Quantified Linear Temporal Logic Components Although powerful, STS components have limitations. In particular, they cannot express liveness properties [4]. 
To remedy this, we introduce another type of components, based on Linear Temporal Logic (LTL) [63] and quantified propositional LTL (QPTL) [76, 48], which extends LTL with ∃ and ∀ quantifiers over propositional variables. In this paper we use quantified first-order LTL (which we abbreviate as QLTL). QLTL further extends QPTL with functional and relational symbols over arbitrary domains, quantification 11 of variables over these domains, and a next operator applied to variables.2 We need this expressive power in order to be able to handle general models (e.g., Simulink) which often use complex arithmetic formulas, and also to be able to translate STS components into semantically equivalent QLTL components (see §5.1). 3.3.1 QLTL QLTL formulas are generated by the following grammar. We assume a set of constants and functional symbols (0, 1, . . ., true, false, +, . . .), a set of predicate symbols (=, ≤, <, . . .), and a set of variable names (x, y, z, . . .). Definition 5 (Syntax of QLTL). A QLTL formula ϕ is defined by the following grammar: term ::= ϕ ::= x | y | ... | (variable names) 0 | 1 | . . . | true | . . . | (constants) term + term | . . . | (functional symbol application) # term (next applied to a term) term = term | (term ≤ term) | . . . | (atomic QLTL formulas) ¬ϕ | (negation) ϕ∨ψ | (disjunction) ϕUψ| (until) ∀x : ϕ (forall) As in standard first order logic, the bounded variables of a formula ϕ are the variables in scope of the universal quantifier ∀, and the free variables of ϕ are those that are not bounded. The logic connectives ∧, ⇒ and ⇔ can be expressed with ¬ and ∨. Quantification is over atomic variables. The existential quantifier ∃ can be defined via the universal quantifier usually as ¬∀¬. The primitive temporal operators are next for terms (# ) and until ( U ). As is standard, QLTL formulas are evaluated over infinite traces, and ϕ U ψ intuitively means that ϕ continuously holds until some point in the trace where ψ holds. Formally, we will define the relation σ |= ϕ (σ satisfies ϕ) for a QLTL formula ϕ over free variables x, y, . . ., and an infinite sequence σ ∈ Σω , where Σ = Σx × Σy × . . ., and Σx , Σy , . . . are the types (or domains) of variables x, y, . . .. As before we assume that σ can be written as a tuple of sequences (σx , σy , . . .) where ω σ x ∈ Σω x , σy ∈ Σy , . . .. The semantics of a term t on variables x, y, . . . is a function from infinite sequences to infinite sequences hhtii : Σω → Σω t , where Σ = Σx × Σy × . . ., and Σt is the type of t. When giving the semantics of terms and formulas we assume that constants, functional symbols, and predicate symbols have the standard semantics. For example, we assume that +, ≤, . . . on numeric values have the semantics of standard arithmetic. Definition 6 (Semantics of QLTL). Let x be a variable, t, t0 be terms, ϕ, ψ be QLTL formulas, P be a predicate symbol, f be a functional symbol, c be a constant, and σ ∈ Σω be an infinite sequence. hhxii(σ) hhcii(σ) hhf (t, t0 )ii(σ) hh# tii(σ) σ σ σ σ σ |= P (t, t0 ) |= ¬ϕ |= ϕ ∨ ψ |= ϕ U ψ |= (∀x : ϕ) := σx := (λi : c) := (λi : f (hhtii(σ)(i), hht0 ii(σ)(i))) := hhtii(σ 1 ) := := := := := P (hhtii(σ)(0), hht0 ii(σ)(0)) ¬ σ |= ϕ σ |= ϕ ∨ σ |= ψ (∃n ≥ 0 : (∀ 0 ≤ i < n : σ i |= ϕ) ∧ σ n |= ψ) (∀σx ∈ Σω x : (σx , σ) |= ϕ) Other temporal operators can be defined as follows. Eventually (F ϕ = true U ϕ) states that ϕ must hold in some future step. Always (G ϕ = ¬F ¬ϕ) states that ϕ must hold at all steps. 
The next operator for 2 A logic similar to the one that we use here is presented in [47], however in [47] the next operator can be applied only once to variables, and the logic from [47] uses also past temporal operators. 12 formulas X can be defined using the next operator for terms # . The formula X ϕ is obtained by replacing all occurrences of the free variables in ϕ by their next versions (i.e., x is replaced by # x, y by # y, etc.). For example the propositional LTL formula X (x ∧ X y ⇒ G z) can be expressed as (# x = true ∧ (# # y = true) ⇒ G (# z = true)). We additionally introduce the operator: ϕ L ψ := ¬(ϕ U ¬ψ). Intuitively, ϕ L ψ holds if whenever ϕ holds continuously up to some step n − 1, ψ must hold at step n. Two QLTL formulas ϕ and ψ are semantically equivalent, denoted ϕ ⇐⇒ ψ, if ∀σ : (σ |= ϕ) ⇐⇒ (σ |= ψ). Lemma 1. Let ϕ be a QLTL formula. Then: 1. (∃x : G ϕ) ⇐⇒ G (∃x : ϕ) when ϕ does not contain temporal operators. 2. ϕ L ϕ ⇐⇒ G ϕ 3. true L ϕ ⇐⇒ G ϕ 4. ϕ L true ⇐⇒ true 5. ϕ L false ⇐⇒ false 6. ∀y : (ϕ L ψ) ⇐⇒ (∃y : ϕ) L ψ, when ϕ does not contain temporal operators and y is not free in ψ. The proof of the above result, as well as of most results that follow, is omitted. All omitted proofs have been formalized and proved in the Isabelle proof assistant, and are available as part of the public distribution of RCRS from http://rcrs.cs.aalto.fi. In particular, all results contained in this paper can be accessed from the theory RCRS Overview.thy – either directly in that file or via references to the other RCRS files. Examples. Using QLTL we can express safety, as well as liveness requirements. Informally, a safety requirement expresses that something bad never happens. An example is the formula thermostat = G (180◦ ≤ t ∧ t ≤ 220◦ ), which states that the thermostat-controlled temperature t stays always between 180◦ and 220◦ . A liveness requirement informally says that something good eventually happens. An example is the formula F (t > 200◦ ) stating that the temperature t is eventually over 200◦ . A more complex example is a formula modeling an oven that starts increasing the temperature from an initial value of 20◦ until it reaches 180◦ , and then keeps it between 180◦ and 220◦ . oven = (t = 20◦ ∧ ((t < # t ∧ t < 180) U thermostat)). In this example the formula t < # t specifies that the temperature increases from some point to the next. 3.3.2 QLTL Components A QLTL component is an atomic component where the input-output behavior is specified by a QLTL formula: Definition 7 (QLTL component). A QLTL component is a tuple qltl(x : Σx , y : Σy , ϕ), where x, y are input and output variables (or tuples of variables) of types Σx , Σy , and ϕ is a QLTL formula over x and y. Intuitively a QLTL component C = qltl(x, y, ϕ) represents a system that takes as input an infinite ω sequence σx ∈ Σω x and produces as output an infinite sequence σy ∈ Σy such that (σx , σy ) |= ϕ. If there is no σy such that (σx , σy ) |= ϕ is true, then input σx is illegal for C, i.e., C is not input-receptive. There could be many possible σy for a single σx , in which case the system is non-deterministic. As a simple example, we can model the oven as a QLTL component with no input variables and the temperature as the only output variable: qltl((), t, oven) 13 3.4 Well Formed Composite Components Not all composite components generated by the grammar introduced in §3.1 are well formed. 
Two components C and C 0 can be composed in series only if the number of outputs of C matches the number of inputs of C 0 , and in addition the input types of C 0 are the same as the corresponding output types of C. Also, fdbk can be applied to a component C if the type of the first output of C is the same as the type of its first input. Formally, for every component C we define below Σin (C) - the input type of C, Σout (C) - the output type of C, and wf (C) - the well-formedness of C, by induction on the structure of C: Σin (sts(x : Σx , y : Σy , s : Σs , init, trs)) Σin (stateless(x : Σx , y : Σy , trs)) Σin (det(x : Σx , s : Σs , a, inpt, next, out : Σy )) Σin (stateless det(x : Σx , inpt, out : Σy )) Σin (qltl(x : Σx , y : Σy , ϕ)) Σin (C ; C 0 ) Σin (C k C 0 ) Σin (fdbk(C)) Σout (sts(x : Σx , y : Σy , s : Σs , init, trs)) Σout (stateless(x : Σx , y : Σy , trs)) Σout (det(x : Σx , s : Σs , a, inpt, next, out : Σy )) Σout (stateless det(x : Σx , inpt, out : Σy )) Σout (qltl(x : Σx , y : Σy , ϕ)) Σout (C ; C 0 ) Σout (C k C 0 ) Σout (fdbk(C)) wf (sts(x, y, s, init, trs)) = wf (stateless(x, y, trs)) = wf (det(x, s, a, inpt, next, out)) = wf (stateless det(x, inpt, out)) = wf (qltl(x, y, ϕ)) = wf (C ; C 0 ) = wf (C k C 0 ) = wf (fdbk(C)) = = = = = = = = = = = = = = = = = Σx Σx Σx Σx Σx Σin (C) Σin (C) × Σin (C 0 ) X2 × · · · × Xn provided Σin (C) = X1 × · · · × Xn for some n ≥ 1 Σy Σy Σy Σy Σy Σout (C 0 ) Σout (C) × Σout (C 0 ) Y2 × · · · × Yn provided Σout (C) = Y1 × · · · × Yn for some n ≥ 1 true true true true true wf (C) ∧ wf (C 0 ) ∧ Σout (C) = Σin (C 0 ) wf (C) ∧ wf (C 0 ) wf (C) ∧ Σin (C) = X × X1 · · · × Xn ∧ Σout (C) = X × Y1 · · · × Ym , for some n, m ≥ 0. In the definition above, both n and m are natural numbers. If n = 0 then X1 × · · · × Xn denotes the Unit type. Note that atomic components are by definition well-formed. The composite components considered in the sequel are required to be well-formed too. We note that the above well-formedness conditions are not restrictive. Components that do not have matching inputs and outputs can still be composed by adding appropriate switching components which reorder inputs, duplicate inputs, and so on. An example of such a component is the component Split, introduced earlier. As another example, consider the diagram in Fig. 3: This diagram can be expressed in our language as the composite component: A ; Switch1 ; (B k Id) ; Switch2 ; (C k Id) 14 A B C Figure 3: Another block diagram. where Switch1 = stateless det((x, y), true, (x, y, x)) Switch2 = stateless det((u, v, x), true, (u, x, v)) 4 Semantics In RCRS, the semantics of components is defined in terms of monotonic property transformers (MPTs). This is inspired by classical refinement calculus [10], where the semantics of sequential programs is defined in terms of monotonic predicate transformers [28]. Predicate transformers are functions that transform sets of post-states (states reached after the program executes) into sets of pre-states (states from which the program begins). Property transformers map sets of output traces (that a component produces) into sets of input traces (that a component consumes). In this section we define MPTs formally, and introduce some basic operations on them, which are necessary for giving the semantics of components. The definitions of some of these operations (e.g., product and fusion) are simple extensions of the corresponding operations on predicate transformers [10, 9]. 
Other operations, in particular those related to feedback, are new (§4.1.3). The definition of component semantics is also new (§4.3). 4.1 Monotonic Property Transformers ω A property transformer is a function S : (Σω y → B) → (Σx → B), where Σx , Σy are input and output types of the component in question. Note that x is the input and y is the output. A property transformer has a weakest precondition interpretation: it is applied to a set of output traces Q ⊆ Σω y , and returns a set of , such that all traces in P are legal and, when fed to the component, are guaranteed to input traces P ⊆ Σω x produce only traces in Q as output. Interpreting properties as sets, monotonicity of property transformers simply means that these functions are monotonic with respect to set inclusion. That is, S is monotonic if for any two properties q, q 0 , if q ⊆ q 0 then S(q) ⊆ S(q 0 ). For an MPT S we define its set of legal input traces as legal(S) = S(>), where > is the greatest predicate extended to traces. Note that, because of monotonicity, and the fact that q ⊆ > holds for any property q, we have that S(q) ⊆ legal(S) for all q. This justifies the definition of legal(S) as a “maximal” set of input traces for which a system does not fail, assuming no restrictions on the post-condition. An MPT S is said to be input-receptive if legal(S) = >. 4.1.1 Some Commonly Used MPTs Definition 8 (Skip). Skip is defined to be the MPT such that for all q, Skip(q) = q. Skip models the identity function, i.e., the system that accepts all input traces and simply transfers them unchanged to the output (this will become more clear when we express Skip in terms of assert or update transformers, below). Note that Skip is different from Id, defined above, although the two are strongly related: Id is a component, i.e., a syntactic object, while Skip is an MPT, i.e., a semantic object. As we shall see in §4.3, the semantics of Id is defined as Skip. Definition 9 (Fail). Fail is defined to be the MPT such that for all q, Fail(q) = ⊥. 15 Recall that ⊥ is the predicate that returns false for any input. Thus, viewed as a set, ⊥ is the empty set. Consequently, Fail can be seen to model a system which rejects all inputs, i.e., a system such that for any output property q, there are no input traces that can produce an output trace in q. Definition 10 (Assert). Let p ∈ Σω → B be a property. The assert property transformer {p} : (Σω → B) → (Σω → B) is defined by {p}(q) = p ∧ q. The assert transformer {p} can be seen as modeling a system which accepts all input traces that satisfy p, and rejects all others. For all the traces that it accepts, the system simply transfers them, i.e., it behaves as the identity function. To express MPTs such as assert transformers syntactically, let us introduce some notation. First, we can use lambda notation for predicates, as in λ(σ, σ 0 ) : (σ = σ 0 ) for some predicate p : Σω → Σω → B which returns true whenever it receives two equal traces. Then, instead of writing {λ(σ, σ 0 ) : (σ = σ 0 )} for the corresponding assert transformer {p}, we will use the slightly lighter notation {σ, σ 0 | σ = σ 0 }. ω Definition 11 (Demonic update). Let r : Σω x → Σy → B be a relation. The demonic update property ω ω transformer [r] : (Σy → B) → (Σx → B) is defined by [r](q) = {σ | ∀σ 0 : r(σ)(σ 0 ) ⇒ σ 0 ∈ q}. That is, [r](q) contains all input traces σ which are guaranteed to result into an output trace in q when fed into the (generally non-deterministic) input-output relation r. 
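The transformers introduced so far can be prototyped by representing properties as finite sets, with small finite universes standing in for the sets of infinite traces used in the actual semantics. The following sketch (hypothetical helper names) implements Skip, Fail, assert, and demonic update, and checks a few direct consequences of the definitions.

```python
# Illustrative finite-universe sketch of some basic property transformers.
# The universes below stand in for sets of infinite traces; names are ours.

U_IN  = {"a", "b", "c"}          # "input trace" universe
U_OUT = {0, 1, 2}                # "output trace" universe

def skip(q):                     # Skip(q) = q (input and output universes coincide)
    return set(q)

def fail(q):                     # Fail(q) = bottom (the empty property)
    return set()

def assert_(p):                  # {p}(q) = p and q
    return lambda q: set(p) & set(q)

def demonic(r):                  # [r](q) = {x | for all y: r(x, y) implies y in q}
    return lambda q: {x for x in U_IN
                      if all(y in q for y in U_OUT if r(x, y))}

def legal(S, top):               # legal(S) = S(top of the output universe)
    return S(set(top))

# A non-deterministic relation: "a" can produce 0 or 1, "b" produces 2,
# and "c" has no successors at all.
r = lambda x, y: (x == "a" and y in (0, 1)) or (x == "b" and y == 2)
S = demonic(r)

assert S({0, 1}) == {"a", "c"}   # "c" is included vacuously: it has no successors
assert S({2}) == {"b", "c"}
assert legal(S, U_OUT) == U_IN   # a demonic update alone accepts every input
assert legal(assert_({"a", "b"}), U_IN) == {"a", "b"}
assert skip({"a"}) == {"a"} and fail({"a"}) == set()
```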
The term “demonic update” comes from the refinement calculus literature [10]. Similarly to assert, we introduce a lightweight notation for the demonic update. If r is an expression in σ and σ 0 , then [σ ; σ 0 | r] = [λ(σ, σ 0 ) : r]. For example, [σx , σy ; σz | ∀i : σz (i) = σx (i) + σy (i)] is the system which produces as output the sequence σz = (λi : σx (i) + σy (i)), where σx and σy are the input sequences. If e is an expression in σ, then [σ ; e] is defined to be [σ ; σ 0 | σ 0 = e], where σ 0 is a new variable different from σ and which does not occur free in e. For example, [σ ; (λi : σ(i) + 1)] = [σ ; σ 0 | σ 0 = (λi : σ(i) + 1)]. The following lemma states that Skip can be defined as an assert transformer, or as a demonic update transformer. Lemma 2. Skip = [σ ; σ 0 | σ = σ 0 ] = {>} = {σ | true}. In general Skip, Fail, and other property transformers are polymorphic with respect to their input and output types. In Skip the input and output types must be the same. Fail, on the other hand, may have an input type and a different output type. ω Definition 12 (Angelic update). Let r : Σω x → Σy → B be a relation. The angelic update property ω ω transformer {r} : (Σy → B) → (Σx → B) is defined by {r}(q) = {σ | ∃σ 0 : r(σ)(σ 0 ) ∧ σ 0 ∈ q}. An input sequence σ is in {r}(q) if there exists an output sequence σ 0 such that r(σ)(σ 0 ) and σ 0 ∈ q. Notice the duality between the angelic and demonic update transformers. Consider, for example, a relation r = {(σ, σ 0 ), (σ, σ 00 )}. If q = {σ 0 , σ 00 }, then {r}(q) = [r](q) = {σ}. If q = {σ 0 } then {r}(q) = {σ}, while [r](q) = ∅. We use a lightweight notation for the angelic update transformer, similar to the one for demonic update. If r is an expression in σ and σ 0 , then {σ ; σ 0 | r} = {λ(σ, σ 0 ) : r}. Lemma 3. Assert is a particular case of angelic update: {p} = {σ ; σ 0 | p(σ) ∧ σ = σ 0 }. 16 4.1.2 Operators on MPTs: Function Composition, Product, and Fusion As we shall see in §4.3, the semantics of composition operators in the language of components will be defined by the corresponding composition operators on MPTs. We now introduce the latter operators on MPTs. First, we begin by the operators that have been known in the literature, and are recalled here. In §4.1.3 we introduce some novel operators explicitly designed in order to handle feedback composition. Serial composition of MPTs (and property transformers in general) is simply function composition. Let ω ω ω S : (Σω y → B) → (Σx → B) and T : (Σz → B) → (Σy → B) be two property transformers. Then ω ω S ◦ T : (Σz → B) → (Σx → B), is the function composition of S and T , i.e., ∀q : (S ◦ T )(q) = S(T (q)). Note that serial composition preserves monotonicity, so that if S and T are MPTs, then S ◦ T is also an MPT. Also note that Skip is the neutral element for serial composition, i.e., S ◦ Skip = Skip ◦ S = S. To express parallel composition of components, we need a kind of Cartesian product operation on property transformers. We define such an operation below. Similar operations for predicate transformers have been proposed in [9]. ω ω ω Definition 13 (Product). Let S : (Σω y → B) → (Σx → B) and T : (Σv → B) → (Σu → B). The product of ω ω ω ω S and T , denoted S ⊗ T : (Σy × Σv → B) → (Σx × Σu → B), is given by 0 ω 0 0 0 (S ⊗ T )(q) = {(σ, σ 0 ) | ∃p : Σω y → B, p : Σv → B : p × p ⊆ q ∧ σ ∈ S(p) ∧ σ ∈ T (p )} where (p × p0 )(σy , σv ) = p(σy ) ∧ p0 (σv ). Lemma 4. For arbitrary S and T , S ⊗ T is monotonic. 
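Serial composition and the product of Definition 13 can be prototyped in the same finite-universe style (again a sketch with hypothetical names, not the RCRS implementation). The product is computed literally as in Definition 13, by enumerating the candidate sub-properties p and p′.

```python
# Illustrative finite-universe sketch of serial composition (function
# composition) and the product of two property transformers (Definition 13).
from itertools import chain, combinations

UX, UY = {0, 1}, {"p", "q"}       # "trace" universes of the two components

def demonic(r, dom, cod):         # [r](q) over explicit universes
    return lambda q: {x for x in dom if all(y in q for y in cod if r(x, y))}

def serial(S, T):                 # serial composition is function composition
    return lambda q: S(T(q))

def powerset(u):
    u = list(u)
    return [set(c) for c in
            chain.from_iterable(combinations(u, k) for k in range(len(u) + 1))]

def product(S, T, uy, uv):        # (S (x) T)(q), computed as in Definition 13
    def ST(q):
        res = set()
        for p in powerset(uy):
            for p2 in powerset(uv):
                if all((a, b) in q for a in p for b in p2):   # p x p2 subset of q
                    res |= {(s, t) for s in S(p) for t in T(p2)}
        return res
    return ST

S = demonic(lambda x, y: y == ("p" if x == 0 else "q"), UX, UY)   # deterministic map
T = demonic(lambda x, y: True, UX, UY)                            # fully non-deterministic

# Skip is the neutral element of serial composition:
skip = lambda q: q
assert serial(S, skip)({"p"}) == S({"p"}) == {0}

# Product: which input pairs guarantee that both outputs land in q?
ST = product(S, T, UY, UY)
q = {("p", "p"), ("p", "q")}      # first output must be "p", second is unconstrained
assert ST(q) == {(0, 0), (0, 1)}
```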
The neutral element for the product composition is the Skip MPT that has Unit as input and output type. In order to define a feedback operation on MPTs, we first define two auxiliary operations: Fusion and IterateOmega. Fusion is an extension of a similar operator introduced previously for predicate transformers in [9]. IterateOmega is a novel operator introduced in the sequel. ω Definition 14 (Fusion). If S = {Si }i∈I , Si : (Σω y → B) → (Σx → B) is a collection of MPTs, then the ω ω fusion of S is the MPT Fusioni∈I (Si ) : (Σy → B) → (Σx → B) defined by \ \ (Fusioni∈I (Si ))(q) = {σ | ∃p : I → Σω pi ⊆ q ∧ σ ∈ Si (pi )} y →B: i i The Fusion operator satisfies the following property. Lemma 5. For I 6= ∅ we have Fusioni∈I ({pi } ◦ [ri ]) = { \ i∈I 4.1.3 pi } ◦ [ \ ri ]. i∈I Novel Operators Used in Semantical Definition of Feedback The IterateOmega operator is defined as follows: Definition 15 (IterateOmega). IterateOmega(S) = Fusionn∈N (S n ◦ [σ ; σ 0 | ∀i : i + 1 < n ⇒ σi = σi0 ]) The feedback operator consists of connecting the first output of an MPT S with its first input. Formally, feedback is defined as follows. ω Definition 16 (Feedback). Let S : (Σu × Σω y → B) → (Σu × Σx → B) be an MPT. The feedback operator on S, denoted Feedback(S), is given by the MPT Feedback(S) = {σx ; σu , σy , σx } ◦ IterateOmega([σu , σy , σx ; σu , σx , σx ] ◦ (S ⊗ Skip)) ◦ [σu , σy , σx ; σy ] 17 Example. As an example we show how to derive Feedback(S) for S = [σu , σx ; 0 · σx + 0 · σu , 0 · σx + 0 · σu ], where σ + σ 0 = (λi : σ(i) + σ 0 (i)), and 0 · σ is 0 concatenated with σ. For now, we note that S is the semantics of the composite component Add ; UnitDelay ; Split, which corresponds to the inner part of the diagram of Fig. 2, before applying feedback. We will complete the formal definition of the the semantics of this diagram in §4.3. For now, we focus on deriving Feedback(S), in order to illustrate how the Feedback operator works. Let T = [σu , σy , σx ; σu , σx , σx ] ◦ (S ⊗ Skip) = [σu , σy , σx ; 0 · σx + 0 · σu , 0 · σx + 0 · σu , σx ] Then, we have T ◦T ... Tn = = [σu , σy , σx ; 0 · σx + 0 · 0 · σx + 0 · 0 · σu , 0 · σx + 0 · 0 · σx + 0 · 0 · σu , σx ] [σu , σy , σx ; 0 · σx + . . . + 0n · σx + 0n · σu , 0 · σx + . . . + 0n · σx + 0n · σu , σx ] where 0n is a finite sequence of n 0s. We also have = T n ◦ [σ ; σ 0 | ∀i : i + 1 < n ⇒ σi = σi0 ] [σu , σy , σx ; σ | ∀i : i + 1 < n ⇒ σ(i) = (Σj<i σx (j), Σj<i σx (j), σx (i))] Then Feedback(S) = = = = {σx ; σu , σy , σx } ◦ IterateOmega(T ) ◦ [σu , σy , σx ; σy ] {σx ; σu , σy , σx } ◦ Fusionn∈N (T n ◦ [σ ; σ 0 | ∀i : i + 1 < n ⇒ σi = σi0 ]) ◦ [σu , σy , σx ; σy ] {σx ; σu , σy , σx } ◦ [σu , σy , σx ; σ | ∀i : σ(i) = (Σj<i σx (j), Σj<i σx (j), σx (i))] ◦ [σu , σy , σx ; σy ] [σx ; σy | ∀i : σy (i) = Σj<i σx (j)] Finally we obtain Feedback(S) = [σx ; σy | ∀i : σy (i) = Σj<i σx (j)] (1) This is the system that outputs the trace (λi : Σj<i σx (j)) for input trace σx . 4.1.4 Refinement A key element of RCRS, as of other compositional frameworks, is the notion of refinement, which enables substitutability and other important concepts of compositionality. Semantically, refinement is defined as follows: ω Definition 17 (Refinement). Let S, T : (Σω y → B) → (Σx → B) be two MPTs. We say that T refines S (or that S is refined by T ), written S v T , if and only if ∀q : S(q) ⊆ T (q). All operations introduced on MPTs preserve the refinement relation: Theorem 1. If S, T, S 0 , T 0 are MPTs of appropriate types such that S v S 0 and T v T 0 , then 1. 
S ◦ T v S 0 ◦ T 0 and S ⊗ T v S 0 ⊗ T 0 and Fusion(S, T ) v Fusion(S 0 , T 0 ) ([9, 10]) 2. IterateOmega(S) v IterateOmega(S 0 ) 3. Feedback(S) v Feedback(S 0 ) 18 4.2 Subclasses of MPTs Monotonic property transformers are a very rich and powerful class of semantic objects. In practice, the systems that we deal with often fall into restricted subclasses of MPTs, which are easier to represent syntactically and manipulate symbolically. We introduce these subclasses next. 4.2.1 Relational MPTs Definition 18 (Relational property transformers). A relational property transformer (RPT) S is an MPT of the form {p} ◦ [r]. We call p the precondition of S and r the input-output relation of S. Relational property T transformers T correspond to conjunctive transformers [10]. A transformer S is conjunctive if it satisfies S( i∈I qi ) = i∈I S(qi ) for all (qi )i∈I and I 6= ∅. Fail, Skip, any assert transformer {p}, and any demonic update transformer [r], are RPTs. Indeed, Fail can be written as {σ | false} ◦ [σ ; σ 0 | true]. Skip can be written as {σ | true} ◦ [σ ; σ]. The assert transformer {p} can be written as the RPT {p} ◦ [σ ; σ]. Finally, the demonic update transformer [r] can be written as the RPT {σ | true} ◦ [r]. Angelic update transformers are generally not RPTs: the angelic update transformer {σ ; σ 0 | true} is not an RPT, as it is not conjunctive. Examples. Suppose we wish to specify a system that performs division. Here are two possible ways to represent this system with RPTs: S1 = {>} ◦ [σx , σy ; σz | ∀i : σy (i) 6= 0 ∧ σz (i) = σx (i) ] σy (i) S2 = {σx , σy | ∀i : σy (i) 6= 0} ◦ [σx , σy ; σz | σz (i) = σx (i) ] σy (i) Although S1 and S2 are both relational, they are not equivalent transformers. S1 is input-receptive: it accepts all input traces. However, if at some step i the input σy (i) is 0, then the output σz (i) is arbitrary (nondeterministic). In contrast, S2 is non-input-receptive as it accepts only those traces σy that are guaranteed to be non-zero at every step, i.e., those that satisfy the condition ∀i : σy (i) 6= 0. Theorem 2 (RPTs are closed under serial, product and fusion compositions). Let S = {p} ◦ [r] and S 0 = {p0 } ◦ [r0 ] be two RPTs, with p, p0 , r and r0 of appropriate types. Then S ◦ S 0 = {σ | p(σ) ∧ (∀σ 0 : r(σ)(σ 0 ) ⇒ p0 (σ 0 ))} ◦ [r ◦ r0 ] and and S ⊗ S 0 = {σx , σy | p(σx ) ∧ p0 (σy )} ◦ [σx , σy ; σx0 , σy0 | r(σx )(σx0 ) ∧ r0 (σy )(σy0 )] Fusion(S, S 0 ) = {p ∧ p0 } ◦ [r ∧ r0 ] RPTs are not closed under Feedback. For example, we have Feedback([σx , σz ; σx , σx ]) = {σ ; σ 0 | true} which is a non-relational angelic update transformer as we said above. Next theorem shows that the refinement of RPTs can be reduced to proving a first order property. Theorem 3. For p, p0 , r, r0 of appropriate types we have. {p} ◦ [r] v {p0 } ◦ [r0 ] ⇐⇒ (p ⊆ p0 ∧ (∀σx : p(σx ) ⇒ r0 (σx ) ⊆ r(σx ))) 19 4.2.2 Guarded MPTs Relational property transformers correspond to systems that have natural syntactic representations, as the composition {p} ◦ [r], where the predicate p and the relation r can be represented syntactically in some logic. Unfortunately, RPTs are still too powerful. In particular, they allow system semantics that cannot be implemented. For example, consider the RPT Magic = [σ ; σ 0 | false]. It can be shown that for any output property q (including ⊥), we have Magic(q) = >. Recall that, viewed as a set, > is the set of all traces. 
This means that, no matter what the post-condition q is, Magic somehow manages to produce output traces satisfying q no matter what the input trace is (hence the name “magic”). In general, an MPT S is said to be non-miraculous (or to satisfy the law of excluded miracle) if S(⊥) = ⊥. We note that in [28], sequential programs are modeled using predicate transformers that are conjunctive and satisfy the low of excluded miracle. We want to further restrict RPTs so that miraculous behaviour does not arise. Specifically, for an RPT S = {p} ◦ [r] and an input sequence σ, if there is no σ 0 such that r(σ)(σ 0 ) is satisfied, then we want σ to be illegal, i.e., we want p(σ) = false. We can achieve this by taking p to be in(r). Recall that if r : X → Y → B, then in(r)(x) = (∃y : r(x)(y)). Taking p to be in(r) effectively means that p and r are combined into a single specification r which can also restrict the inputs. This is also the approach followed in the theory of relational interfaces [79]. Definition 19 (Guarded property transformers). The guarded property transformer (GPT) of a relation r is an RPT, denoted {r], defined by {r] = {in(r)} ◦ [r]. It can be shown that an MPT S is a GPT if and only if S is conjunctive and non-miraculous [10]. Fail, Skip, and any assert property transformer are GPTs. Indeed, Fail = {⊥] and Skip = {σ ; σ | >]. The assert transformer can be written as {p} = {σ ; σ | p(σ)]. The angelic and demonic update property transformers are generally not GPTs. The angelic update property transformer is not always conjunctive in order to be a GPT. The demonic update property transformer is not in general a GPT because is not always non-miraculous (Magic(⊥) = > = 6 ⊥). The demonic update transformer [r] is a GPT if and only if in(r) = > and in this case we have [r] = {r]. Theorem 4 (GPTs are closed under serial and parallel compositions). Let S = {r] and S 0 = {r0 ] be two GPTs with r and r0 of appropriate types. Then  S ◦ S 0 = {σx ; σz | in(r)(σx ) ∧ ∀σy : r(σx )(σy ) ⇒ in(r0 )(σy ) ∧ (r ◦ r0 )(σx , σz )] and S ⊗ S 0 = {σx , σy ; σx0 , σy0 | r(σx )(σx0 ) ∧ r0 (σy )(σy0 )] GPTs are not closed under Fusion neither Feedback. Indeed, we have already seen in the previous section that Feedback applied to the GPT [σx , σz ; σx , σx ] is not an RPT, and therefore not a GPT either. For the fusion operator, we have Fusion([x ; 0], [x ; 1]) = [⊥], which is not a GPT. A corollary of Theorem 3 is that refinement of GPTs can be checked as follows: Corollary 1. {r] v {r0 ] ⇐⇒ (in(r) ⊆ in(r0 ) ∧ (∀σx : in(r)(σx ) ⇒ r0 (σx ) ⊆ r(σx ))) 4.2.3 Other subclasses and overview The containment relationships among the various subclasses of MPTs are illustrated in Fig. 4. In addition to the subclasses discussed above, we introduce several more subclasses of MPTs in the sections that follow, when we assign semantics (in terms of MPTs) to the various atomic components in our component language. For instance, QLTL components give rise to QLTL property transformers. Similarly, STS components, stateless STS components, etc., give rise to corresponding subclasses of MPTs. The containment relationships between these classes will be proven in the sections that follow. For ease of reference, we provide some forward links to these results also here. The fact that QLTL property transformers are GPTs follows by definition 20 of the semantics of QLTL components: see §4.3, equation (2). 
The fact that STS property transformers are a special case of QLTL property transformers follows from the transformation of an STS component into a semantically equivalent QLTL component: see §5.1 and Theorem 7. The inclusions for subclasses of STS property transformers follow by definition of the corresponding components (see also Fig. 5). Monotonic Property Transformers Relational Property Transformers Guarded Property Transformers QLTL Property Transformers STS Property Transformers Stateless STS Property Transformers Deterministic STS Property Transformers Figure 4: Overview of the property transformer classes and their containment relations. 4.3 Semantics of Components as MPTs We are now ready to define the semantics of our language of components in terms of MPTs. Let C be a well formed component. The semantics of C, denoted [[C]], is a property transformer of the form: [[C]] : ((Σout (C))ω → B) → ((Σin (C))ω → B). We define [[C]] by induction on the structure of C. First we give the semantics of QLTL components and composite components: [[qltl(x, y, ϕ)]] 0 = {σx ; σy | (σx , σy ) |= ϕ] (2) 0 [[C ; C ]] = [[C]] ◦ [[C ]] (3) [[C k C 0 ]] = [[C]] ⊗ [[C 0 ]] (4) [[fdbk(C)]] = Feedback([[C]]) (5) The semantics of QLTL components satisfies the following property: Lemma 6. If ϕ is a LTL formula on variables x and y, we have: [[qltl(x, y, (∃y : ϕ) ∧ ϕ)]] = [[qltl(x, y, ϕ)]] To define the semantics of STS components, we first introduce some auxiliary notation. ω Consider an STS component C = sts(x, y, s, init exp, trs exp). We define the predicate runC : Σω s × Σx × ω Σy → B as runC (σs , σx , σy ) = (∀i : trs exp(σs (i), σx (i))(σs (i + 1), σy (i))). ω ω Intuitively, if σx ∈ Σω x is the input sequence, σy ∈ Σy is the output sequence, and σs ∈ Σs is the sequence of values of state variables, then runC (σs , σx , σy ) holds if at each step of the execution, the current state, current input, next state, and current output, satisfy the trs exp predicate. 21 We also formalize the illegal input traces of STS component C as follows: illegalC (σx ) = (∃σs , σy , k : init exp(σs (0)) ∧ (∀i < k : trs exp(σs (i), σx (i))(σs (i + 1), σy (i))) ∧ ¬in(trs exp)(σs (k), σx (k))) Essentially, illegalC (σx ) states that there exists some point in the execution where the current state and current input violate the precondition in(trs exp) of predicate trs exp, i.e., there exist no output and next state to satisfy trs exp for that given current state and input. Then, the semantics of an STS component C is given by: [[C]] = {¬illegalC } ◦ [σx ; σy | (∃σs : init exp(σs (0)) ∧ runC (σs , σx , σy ))] (6) We give semantics to stateless and/or deterministic STS components using the corresponding mappings from general STS components. If C is a stateless STS, C 0 is a deterministic STS, and C 00 is a stateless deterministic STS, then: [[C]] 0 [[C ]] 00 [[C ]] = = = [[stateless2sts(C)]] (7) 0 [[det2sts(C )]] (8) 00 [[stateless det2det(C )]] (9) Note that the semantics of a stateless deterministic STS component C 00 is defined by converting C 00 into a deterministic STS component, by Equation (9) above. Alternatively, we could have defined the semantics of C 00 by converting it into a stateless STS component, using the mapping stateless det2stateless. In order for our semantics to be well-defined, we need to show that regardless of which conversion we choose, we obtain the same result. Indeed, this is shown by the lemma that follows: Lemma 7. 
For a stateless deterministic STS C 00 we have: [[stateless det2det(C 00 )]] = [[stateless det2stateless(C 00 )]] (10)

Observe that, by definition, the semantics of QLTL components are GPTs. The semantics of STS components are defined as RPTs. However, they will be shown to be GPTs in §5.1. Therefore, the semantics of all atomic RCRS components are GPTs. This fact, and the closure of GPTs w.r.t. parallel and serial composition (Theorem 4), ensure that we stay within the GPT realm as long as no feedback operations are used. In addition, as we shall prove in Corollary 2, components with feedback are also GPTs, as long as they are deterministic and do not contain algebraic loops. An example of a component whose semantics is not a GPT is: C = fdbk(stateless((x, z), (y1 , y2 ), y1 = x ∧ y2 = x)) Then, we have [[C]] = Feedback([σx , σz ; σx , σx ]). As stated earlier, Feedback([σx , σz ; σx , σx ]) is equal to {σ ; σ 0 | true}, which is neither a GPT nor an RPT. The problem with C is that it contains an algebraic loop: the first output y1 of the internal stateless component to which feedback is applied directly depends on its first input x. Dealing with such components is beyond the scope of this paper, and we refer the reader to [67].

4.3.1 Example: Two Alternative Derivations of the Semantics of Diagram Sum of Fig. 2 To illustrate our semantics, we provide two alternative derivations of the semantics of the Sum system of Fig. 2. First, let us consider Sum as a composite component: Sum = fdbk(Add ; UnitDelay ; Split), where Add = stateless det((u, x), true, u + x), UnitDelay = det(x, s, 0, true, x, s), and Split = stateless det(x, true, (x, x)). We have [[Sum]] = [[fdbk(Add ; UnitDelay ; Split)]] = Feedback([[Add]] ◦ [[UnitDelay]] ◦ [[Split]]). For Add, UnitDelay and Split, all inputs are legal, so illegalC = ⊥ for all C ∈ {Add, UnitDelay, Split}. After simplifications, we get: [[Add]] = [σu , σx ; σu + σx ], [[UnitDelay]] = [σx ; 0 · σx ], and [[Split]] = [σx ; σx , σx ]. The semantics of Sum is then given by
[[Sum]] = Feedback([[Add]] ◦ [[UnitDelay]] ◦ [[Split]])
= Feedback([σu , σx ; σu + σx ] ◦ [σx ; 0 · σx ] ◦ [σx ; σx , σx ])
= Feedback([σu , σx ; 0 · σx + 0 · σu , 0 · σx + 0 · σu ])
= {Using (1)} [σx ; σy | ∀i : σy (i) = Σj<i σx (j)]
We obtain: [[Sum]] = [σx ; σy | ∀i : σy (i) = Σj<i σx (j)]. (11)

Next, let us assume that the system has been characterized already as an atomic component: SumAtomic = sts(x, y, s, s = 0, y = s ∧ s0 = s + x). The semantics of SumAtomic is given by [[SumAtomic]] = {¬illegalSumAtomic } ◦ [σx ; σy | ∃σs : σs (0) = 0 ∧ runSumAtomic (σs , σx , σy )] where illegalSumAtomic = ⊥ because there are no restrictions on the inputs of SumAtomic, and runSumAtomic (σs , σx , σy ) = (∀i : σy (i) = σs (i) ∧ σs (i + 1) = σs (i) + σx (i)). We have [[SumAtomic]] = [σx ; σy | ∃σs : σs (0) = 0 ∧ (∀i : σy (i) = σs (i) ∧ σs (i + 1) = σs (i) + σx (i))], which is equivalent to (11).

4.3.2 Characterization of Legal Input Traces The following lemma characterizes legal input traces for various types of MPTs:

Lemma 8. The set of legal input traces of an RPT {p} ◦ [r] is p: legal({p} ◦ [r]) = p. The set of legal input traces of a GPT {r] is in(r): legal({r]) = in(r). The set of legal input traces of an STS component C = sts(x, y, s, init, r) is equal to ¬illegalC : legal([[C]]) = ¬illegalC . The set of legal input traces of a QLTL component C = qltl(x, y, ϕ) is: legal([[C]]) = {σx | σx |= ∃y : ϕ}.

4.3.3 Semantic Equivalence and Refinement for Components Definition 20.
Two components C and C 0 are (semantically) equivalent, denoted C ≡ C 0 , if [[C]] = [[C 0 ]]. Component C is refined by component C 0 , denoted C  C 0 , if [[C]] v [[C 0 ]]. The relation ≡ is an equivalence relation, and  is a preorder relation (i.e., reflexive and transitive). We also have (C  C 0 ∧ C 0  C) ⇐⇒ (C ≡ C 0 ) 4.3.4 Compositionality Properties Several desirable compositionality properties follow from our semantics: Theorem 5. Let C1 , C2 , C3 , and C4 be four (possibly composite) components. Then: 1. (Serial composition is associative:) (C1 ; C2 ) ; C3 ≡ C1 ; (C2 ; C3 ). 2. (Parallel composition is associative:) (C1 k C2 ) k C3 ≡ C1 k (C2 k C3 ). 3. (Parallel composition distributes over serial composition:) If [[C1 ]] and [[C2 ]] are GPTs and [[C3 ]] and [[C4 ]] are RPTs, then (C1 k C2 ) ; (C3 k C4 ) ≡ (C1 ; C3 ) k (C2 ; C4 ). 4. (Refinement is preserved by composition:) If C1  C2 and C3  C4 , then: (a) C1 ; C3  C2 ; C4 (b) C1 k C3  C2 k C4 (c) fdbk(C1 )  fdbk(C2 ) In addition to the above, requirements that a component satisfies are preserved by refinement. Informally, if C satisfies some requirement ϕ and C  C 0 then C 0 also satisfies ϕ. Although we have not formally defined what requirements are and what it means for a component to satisfy a requirement, these concepts are naturally captured in the RCRS framework via the semantics of components as MPTs. In particular, since our components are generally open systems (i.e., they have inputs), we can express requirements using Hoare triples of the form p{C}q, where C is a component, p is an input property, and q is an output property. Then, p{C}q holds iff the outputs of C are guaranteed to satisfy q provided the inputs of C satisfy p. Formally: p{C}q ⇐⇒ p ⊆ [[C]](q). Theorem 6. C  C 0 iff ∀p, q : p{C}q ⇒ p{C 0 }q. Theorem 6 shows that refinement is equivalent to substitutability. Substitutability states that a component C 0 can replace another component C in any context, i.e., ∀p, q : p{C}q ⇒ p{C 0 }q. 24 5 Symbolic Reasoning So far we have defined the syntax and semantics of RCRS. These already allow us to specify and reason about systems in a compositional manner. However, such reasoning is difficult to do “by hand”. For example, if we want to check whether a component C is refined by another component C 0 , we must resort to proving the refinement relation [[C]] v [[C 0 ]] of their corresponding MPTs, [[C]] and [[C 0 ]]. This is not an easy task, as MPTs are complex mathematical objects. Instead, we would like to have computer-aided, and ideally fully automatic techniques. In the above example of checking refinement, for instance, we would like ideally to have an algorithm that takes as input the syntactic descriptions of C and C 0 and replies yes/no based on whether [[C]] v [[C 0 ]] holds. We say “ideally” because we know that in general such an algorithm cannot exist. This is because we are not making a-priori any restrictions on the logics used to describe C and C 0 , which means that the existence of an algorithm will depend on decidability of these logics. In this section, we describe how reasoning in RCRS can be done symbolically, by automatically manipulating the formulas used to specify the components involved. As we shall show, most problems can be reduced to checking satisfiability of first-order formulas formed by combinations of the formulas of the original components. This means that the problems are decidable whenever the corresponding first-order logics are decidable. 
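To make this reduction concrete, the sketch below (plain Python using the Z3 SMT solver; an illustration only, not the RCRS toolset itself, which is built on Isabelle, see §6) shows how two such questions about a hypothetical stateless contract, validity of the component and input-receptiveness, become satisfiability queries.

    from z3 import Reals, And, Not, Exists, ForAll, Solver, sat, unsat

    x, y = Reals("x y")

    # Hypothetical stateless contract on input x and output y (an illustrative
    # assumption; the same contract x >= 0 and y >= x reappears in §5.10).
    contract = And(x >= 0, y >= x)

    def satisfiable(f):
        s = Solver()
        s.add(f)
        return s.check() == sat

    def valid(f):
        # A first-order formula is valid iff its negation is unsatisfiable.
        s = Solver()
        s.add(Not(f))
        return s.check() == unsat

    # Component validity ([[C]] != Fail): the contract has at least one model.
    print(satisfiable(Exists([x, y], contract)))            # True

    # Input-receptiveness: every input admits some legal output.
    print(valid(ForAll([x], Exists([y], contract))))        # False: inputs x < 0 are illegal

    # Legal-input check for the concrete input x = 2.
    print(satisfiable(And(Exists([y], contract), x == 2)))  # True

Whether such queries are decidable depends, as noted above, on the logic in which the contracts are written; here quantified linear real arithmetic suffices.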
5.1 Symbolic Transformation of STS Components to QLTL Components Our framework allows the specification of several types of atomic components, some of which are special cases of others, as summarized in Fig. 5. In §3, we have already shown how the different types of STS components are related, from the most specialized deterministic stateless STS components, to the general STS components. By definition, the semantics of the special types of STS components is defined via the semantics of general STS components (see §4). In this subsection, we show that STS components can be viewed as special cases of QLTL components. qltl sts stateless det stateless det Figure 5: Lattice of atomic components: lower types are special cases of higher types. Specifically, we show how an STS component can be transformed into a semantically equivalent QLTL component. This transformation also shows that STS property transformers are a special case of QLTL property transformers, as already claimed in Fig. 4. Note that this containment is not obvious simply by looking at the definitions of these MPT subclasses (c.f. §4.3), as QLTL property transformers are defined as GPTs (equation 2), whereas STS property transformers are defined as RPTs (equation 6). Although RPTs are generally a superclass, not a subclass, of GPTs, the transformation proposed below shows that the RPTs obtained from STS components can indeed be captured as GPTs. The transformation of STS into QLTL components also enables us to apply several algorithms which are available for QLTL components to STS components as well. Consider an STS component C = sts(x, y, s, init, trs). We can transform C into a QLTL component using the mapping sts2qltl: sts2qltl(C) = qltl(x, y, (∀s, y : init ⇒ (ϕ L ϕ0 )) ∧ (∃s : init ∧ G ϕ)) where ϕ = trs[s0 := # s] and ϕ0 = (∃s0 , y : trs). The theorem that follows demonstrates the correctness of the above transformation: 25 (12) Theorem 7. For any STS component C = sts(x, y, s, init, trs) s.t. init is satisfiable, C ≡ sts2qltl(C). Example. It is instructive to see how the above transformation specializes to some special types of STS components. In particular, we will show how it specializes to stateless STS components. Let C = stateless(x, y, trs) and let C 0 = stateless2sts(C) = sts(x, y, (), true, trs). Applying the sts2qltl transformation to C 0 , for which s = () and init = true, we obtain:  sts2qltl(C 0 ) = qltl x, y, (∀y : (ϕ L ϕ0 )) ∧ G ϕ) where ϕ = trs[() := # ()] = trs and ϕ0 = (∃y : trs). Using the properties of Lemma 1, and the fact that semantically equivalent LTL formulas result in semantically equivalent QLTL components, we can simplify sts2qltl(C 0 ) further: sts2qltl(C 0 ) = {Definition of sts2qltl}  qltl x, y, (∀y : (trs L (∃y : trs))) ∧ G trs ≡ {Lemma 1, trs does not contain temporal operators, and y is not free in (∃y : trs)}  qltl x, y, ((∃y : trs) L (∃y : trs)) ∧ G trs ≡ {Lemma 1}  qltl x, y, G (∃y : trs) ∧ G trs ≡ {Lemma 1 and trs does not contain temporal operators}  qltl x, y, (∃y : G trs) ∧ G trs ≡ {Lemma 6}  qltl x, y, G trs Note that in the above derivation we use the equivalence symbol ≡, in addition to the equality symbol =. Recall that ≡ stands for semantical equivalence of two components (c.f. §4.3.3). On the other hand, = for components means syntactic equality. By definition of the semantics, if two QLTL formulas are equivalent, then the corresponding QLTL components are equivalent. 
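Viewed operationally, the transformation of equation (12) is a purely syntactic rewrite of the component's formulas. The following minimal sketch assumes a plain-string representation of formulas purely for illustration; the actual implementation manipulates Isabelle terms rather than strings.

    from collections import namedtuple

    # Hypothetical plain-text representation of atomic components.
    STS = namedtuple("STS", "x y s init trs")
    QLTL = namedtuple("QLTL", "x y phi")

    def sts2qltl(c):
        """Syntactic rendering of equation (12):
        qltl(x, y, (forall s, y : init => (phi L phi')) and (exists s : init and G phi)),
        where phi = trs[s' := # s] and phi' = (exists s', y : trs)."""
        phi = c.trs.replace(c.s + "'", "# " + c.s)   # naive textual substitution of s' by # s
        phi_prime = "(exists {0}', {1} : {2})".format(c.s, c.y, c.trs)
        formula = ("(forall {0}, {1} : {2} => ({3} L {4})) and "
                   "(exists {0} : {2} and G ({3}))").format(c.s, c.y, c.init, phi, phi_prime)
        return QLTL(c.x, c.y, formula)

    # The SumAtomic component of §4.3.1, sts(x, y, s, s = 0, y = s and s' = s + x):
    print(sts2qltl(STS("x", "y", "s", "s = 0", "y = s and s' = s + x")).phi)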
Based on the above, we define the transformation stateless2qltl of a stateless component C = stateless(x, y, trs), into a QLTL component as follows:  stateless2qltl(C) = qltl x, y, G trs 5.2 Symbolic Transformations of Special Atomic Components to More General Atomic Components Based on the lattice in Fig. 5, we define all remaining mappings from more special atomic components to more general atomic components, by composing the previously defined mappings sts2qltl, stateless2qltl, stateless2sts, det2sts, stateless det2stateless and stateless det2det, as appropriately. For mapping stateless deterministic STS components to QLTL components, we have two possibilities: stateless det → det → sts → qltl and stateless det → stateless → qltl. We choose the transformation stateless det → stateless → qltl because it results in a simpler formula: stateless det2qltl(C) = stateless2qltl(stateless det2stateless(C)). Examples. Consider the following STS components: C1 C2 UnitDelay  = stateless x, y, y > x  = stateless x, (), x > 0  = det x, s, 0, true, x, s 26 Then:  stateless2qltl(C1 ) = qltl x, y, G y > x stateless2qltl(C2 ) = qltl x, (), G x > 0 det2qltl(UnitDelay)  = qltl x, y, (∀s, y : s = 0 ⇒ (true ∧ # s = x ∧ y = s) L (∃s0 , y : true ∧ s0 = x ∧ y = s)) ∧  (∃s : s = 0 ∧ G (true ∧ # s = x ∧ y = s)) Because (∃s0 , y : true ∧ s0 = x ∧ y = s)) ⇐⇒ true, using Lemma 1 and logical simplifications we obtain:  det2qltl(UnitDelay) ≡ qltl x, y, (∃s : s = 0 ∧ G (# s = x ∧ y = s))  ≡ qltl x, y, (∃s : s = 0 ∧ G (# y = x ∧ y = s))  ≡ qltl x, y, (∃s : s = 0 ∧ G (y = s) ∧ G (# y = x))  ≡ qltl x, y, (∃s : y = 0 ∧ G (y = s) ∧ G (# y = x))  ≡ qltl x, y, (∃s : G (y = s)) ∧ y = 0 ∧ G (# y = x)  ≡ qltl x, y, G (∃s : y = s) ∧ y = 0 ∧ G (# y = x)  ≡ qltl x, y, y = 0 ∧ G (# y = x) 5.3 Symbolic Computation of Serial Composition Given a composite component C formed as the serial composition of two components C1 and C2 , i.e., C = C1 ; C2 , we would like to compute a new, atomic component Ca , such that Ca is semantically equivalent to C. Because atomic components are by definition represented syntactically (and symbolically), being able to reduce composite components into atomic components means that we are able to symbolically compute composition operators. We start by defining the symbolic serial composition of components of the same type. 5.3.1 Symbolic Serial Composition of Two QLTL Components Let C = qltl(x, y, ϕ) and C 0 = qltl(y, z, ϕ0 ) such that C ; C 0 is well formed. Then their symbolic serial composition, denoted serial(C, C 0 ), is the QLTL component defined by  serial(C, C 0 ) = qltl x, z, (∀y : ϕ ⇒ (∃z : ϕ0 )) ∧ (∃y : ϕ ∧ ϕ0 ) (13) Note that in the above definition (as well as the ones that follow) we assume that the output variable of C and the input variable of C 0 have the same name (y) and that the names x, y and z are distinct. In general, this may not be the case. This is not a problem, as we can always rename variables such that this condition is met. Note that variable renaming does not change the semantics of components (c.f. §3.2.2). The intuition behind the formula in (13) is as follows. The second conjunct ∃y : ϕ ∧ ϕ0 ensures that the both contracts ϕ and ϕ0 of the two components are enforced in the composite contract. The reason we use ∃y : ϕ ∧ ϕ0 instead of just the conjunction ϕ ∧ ϕ0 is that we want to eliminate (“hide”) internal variable y. (Alternatively, we could also have chosen to output y as an additional output, but would then need an additional hiding operator to remove y.) 
The first conjunct ∀y : ϕ ⇒ (∃z : ϕ0 ) is a formula on the input variable x of the composite component (since all other variables y and z are quantified). This formula restricts the legal inputs of C to those inputs for which, no matter which output C produces, this output is guaranteed to be a legal input for the downstream component C 0 . For an extensive discussion of the intuition and justification behind this definition, see [79]. 27 5.3.2 Symbolic Serial Composition of Two General STS Components Let C = sts(x, y, s, init, trs) and C 0 = sts(y, z, t, init0 , trs0 ) be two general STS components such that C ; C 0 is well formed. Then:  serial(C, C 0 ) = sts x, z, (s, t), init∧init0 , (∃s0 , y : trs)∧(∀s0 , y : trs ⇒ (∃t0 , z : trs0 ))∧(∃y : trs∧trs0 ) (14) 5.3.3 Symbolic Serial Composition of Two Stateless STS Components Let C = stateless(x, y, trs) and C 0 = stateless(y, z, trs0 ) be two stateless STS components such that C ; C 0 is well formed. Then  serial(C, C 0 ) = stateless x, z, (∀y : trs ⇒ (∃z : trs0 )) ∧ (∃y : trs ∧ trs0 ) (15) 5.3.4 Symbolic Serial Composition of Two Deterministic STS Components Let C = det(x, s, a, p, next, out) and C 0 = det(y, t, b, p0 , next0 , out0 ) be two deterministic STS components such that their serial composition C ; C 0 is well formed. Then: serial(C, C 0 ) = det(x, (s, t), (a, b), p ∧ p0 [y := out], (next, next0 [y := out]), out0 [y := out]) (16) where e[z := e0 ] denotes the substitution of all free occurences of variable z by expression e0 in expression e. 5.3.5 Symbolic Serial Composition of Two Stateless Deterministic STS Components Finally, let C = stateless det(x, p, out) and C 0 = stateless det(y, p0 , out0 ) be two stateless deterministic STS components such that their serial composition C ; C 0 is well formed. Then: serial(C, C 0 ) = stateless det(x, p ∧ p0 [y := out], out0 [y := out]) 5.3.6 (17) Symbolic Serial Composition of Two Arbitrary Atomic Components In general, we define the symbolic serial composition of two atomic components C and C 0 by using the mappings of less general components to more general components (Fig. 5), as appropriate. For example, if C is a deterministic STS component and C 0 is a stateless STS component, then serial(C, C 0 ) = serial(det2sts(C), stateless2sts(C 0 )). Similarly, if C is a QLTL component and C 0 is a deterministic STS component, then serial(C, C 0 ) = serial(C, det2qltl(C 0 )). Formally, assume that atm, atm 0 ∈ {qltl, sts, stateless, det, stateless det} are the types of the components C and C 0 , and common = atm ∨ atm 0 is the least general component type that is more general than atm and atm 0 as defined in Fig. 5. Then serial(C, C 0 ) = serial(atm2common(C), atm 0 2common(C 0 )). 5.3.7 Correctness of Symbolic Serial Composition The following theorem demonstrates that our symbolic computations of serial composition are correct, i.e., that the resulting atomic component serial(C, C 0 ) is semantically equivalent to the original composite component C ; C 0 : Theorem 8. If C and C 0 are two atomic components, then C ; C 0 ≡ serial(C, C 0 ). 28 Examples. 
Consider the following STS components: C1 = stateless(u, (x, y), true) and C2 = stateless det((x, y), y 6= 0, x/y). Then:
serial(C1 , C2 ) = serial(C1 , stateless det2stateless(C2 ))
= serial(stateless(u, (x, y), true), stateless((x, y), z, y 6= 0 ∧ z = x/y))
= stateless(u, z, (∀x, y : true ⇒ (∃z : y 6= 0 ∧ z = x/y)) ∧ (∃x, y : true ∧ y 6= 0 ∧ z = x/y))
≡ stateless(u, z, false)
As we can see, the composition results in a stateless STS component with input-output formula false. The semantics of such a component is Fail, indicating that C1 and C2 are incompatible. Indeed, in the case of C1 ; C2 , the issue is that C2 requires its second input, y, to be non-zero, but C1 cannot guarantee that. The reason is that the input-output formula of C1 is true, meaning that, no matter what its input u is, C1 may output any value for x and y, non-deterministically. This violates the input requirements of C2 , causing an incompatibility. We will return to this point in §5.8. We also note that this type of incompatibility is impossible to prevent by controlling the input u.

In the example that follows, we see a case where the two components are not incompatible, because the input requirements of the downstream component can be met by strengthening the input assumptions of the upstream component. Consider the following QLTL components: C3 = qltl(x, y, G (x ⇒ F y)) and C4 = qltl(y, (), G F y). Then:
serial(C3 , C4 ) = serial(qltl(x, y, G (x ⇒ F y)), qltl(y, (), G F y))
= qltl(x, (), (∀y : (G (x ⇒ F y)) ⇒ G F y) ∧ (∃y : G (x ⇒ F y) ∧ G F y))
≡ qltl(x, (), G F x)
In this example, the downstream component C4 requires its input y to be infinitely often true (G F y). This can be achieved only if the input x of the upstream component is infinitely often true, which is exactly the condition derived by the serial composition of C3 and C4 (G F x). Notice that C3 does not impose any a priori requirements on its input. However, its input-output relation is the so-called request-response property, which can be expressed as: whenever the input x is true, the output y will eventually become true afterwards (G (x ⇒ F y)). This request-response property implies that in order for y to be infinitely often true, x must be infinitely often true. Moreover, this is the weakest possible condition that can be enforced on x in order to guarantee that the condition on y holds.

5.4 Symbolic Computation of Parallel Composition Given a composite component formed as the parallel composition of two components, C = C1 k C2 , we would like to compute an atomic component Ca such that Ca is semantically equivalent to C. We show how this can be done in this subsection. The symbolic computation of parallel composition follows the same pattern as the one for serial composition (§5.3).

5.4.1 Symbolic Parallel Composition of Two QLTL Components Let C = qltl(x, y, ϕ) and C 0 = qltl(u, v, ϕ0 ). Then their symbolic parallel composition, denoted parallel(C, C 0 ), is the QLTL component defined by parallel(C, C 0 ) = qltl((x, u), (y, v), ϕ ∧ ϕ0 ) (18) As in the case of symbolic serial composition, we assume that the variable names x, y, u, v are all distinct. If this is not the case, then we rename variables as appropriate.

5.4.2 Symbolic Parallel Composition of Two General STS Components Let C = sts(x, y, s, init, trs) and C 0 = sts(u, v, t, init0 , trs0 ).
Then: parallel(C, C 0 ) = sts (x, u), (y, v), (s, t), init ∧ init0 , trs ∧ trs0 5.4.3  (19) Symbolic Parallel Composition of Two Stateless STS Components Let C = stateless(x, y, trs) and C 0 = stateless(u, v, trs0 ). Then parallel(C, C 0 ) = stateless (x, u), (y, v), trs ∧ trs0 5.4.4  (20) Symbolic Parallel Composition of Two Deterministic STS Components Let C = det(x, s, a, p, next, out) and C 0 = det(u, t, b, p0 , next0 , out0 ). Then:  parallel(C, C 0 ) = det (x, u), (s, t), (a, b), p ∧ p0 , (next, next0 ), (out, out0 ) 5.4.5 (21) Symbolic Parallel Composition of Two Stateless Deterministic STS Components Let C = stateless det(x, p, out) and C 0 = stateless det(u, p0 , out0 ). Then: parallel(C, C 0 ) = stateless det (x, u), p ∧ p0 , (out, out0 ) 5.4.6  (22) Symbolic Parallel Composition of Two Arbitrary Atomic Components Similar to the symbolic serial composition, we define the symbolic parallel composition of two atomic components C and C 0 by using the mappings of less general components to more general components (Fig. 5), as appropriate. Formally, assume that atm, atm 0 ∈ {qltl, sts, stateless, det, stateless det} are the types of the components C and C 0 , and common = atm ∨ atm 0 is the least general component type that is more general than atm and atm 0 as defined in Fig. 5. Then parallel(C, C 0 ) = parallel(atm2common(C), atm 0 2common(C 0 )). 5.4.7 Correctness of Symbolic Parallel Composition The following theorem demonstrates that our symbolic computations of parallel composition are also correct, i.e., that the resulting atomic component parallel(C, C 0 ) is semantically equivalent to the original composite component C k C 0 : Theorem 9. If C and C 0 are two atomic components, then C k C 0 ≡ parallel(C, C 0 ). 30 5.5 5.5.1 Symbolic Computation of Feedback Composition for Decomposable Deterministic STS Components Decomposable Components We provide a symbolic closed-form expression for the feedback composition of a deterministic STS component, provided such a component is decomposable. Intuitively, decomposability captures the fact that the first output of the component, y1 , does not depend on its first input, x1 . This ensures that the feedback composition (which connects y1 to x1 ) does not introduce any circular dependencies. Definition 21 (Decomposability). Let C be a deterministic STS  det (x1 , . . . , xn ), s, a, p, next, (e1 , . . . , em )  or a stateless deterministic STS stateless det (x1 , . . . , xn ), p, (e1 , . . . , em ) . C is called decomposable if x1 is not free in e1 . component component e1 x1 x2 , . . . , x n • e2 ... em e1 x2 , . . . , x n (a) • x1 e2 ... em (b) Figure 6: (a) Decomposable deterministic component; (b) the same component after applying feedback, connecting its first output to x1 . Decomposability is illustrated in Fig. 6a. The figure shows that expression e1 depends only on inputs x2 , . . . , x n . 5.5.2 Symbolic Feedback of a Decomposable Deterministic STS Component For a decomposable deterministic STS component C = det((x1 , . . . , xn ), s, a, p, next, (e1 , . . . , em )), its symbolic feedback composition, denoted feedback(C), is the deterministic STS component defined by  feedback(C) = det (x2 , . . . , xn ), s, a, p[x1 := e1 ], next[x1 := e1 ], (e2 [x1 := e1 ], . . . , em [x1 := e1 ]) (23) Thus, computing feedback symbolically consists in removing the first input of the component and replacing the corresponding variable x1 by the expression of the first output, e1 , everywhere where x1 appears. 
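The following minimal sketch illustrates this substitution step for equation (23), using sympy expressions as a stand-in for the symbolic terms; the representation and function name are assumptions made for illustration only, not the RCRS/Isabelle implementation.

    import sympy as sp

    def feedback_det(inputs, state, init, p, next_exprs, out_exprs):
        """Sketch of equation (23): feedback for a decomposable deterministic STS
        component det(inputs, state, init, p, next, out). The first input x1 is
        removed and the first output expression e1 is substituted for x1."""
        x1, e1 = inputs[0], out_exprs[0]
        assert x1 not in e1.free_symbols, "component must be decomposable"
        sub = {x1: e1}
        return (inputs[1:], state, init,
                p.subs(sub),
                [n.subs(sub) for n in next_exprs],
                [e.subs(sub) for e in out_exprs[1:]])

    # Example: fdbk(Add ; UnitDelay ; Split) from Fig. 2; the serial part is
    # simplified to det((x, y), s, 0, true, x + y, (s, s)) as in §5.7, and
    # applying feedback yields det(y, s, 0, true, s + y, s).
    x, y, s = sp.symbols("x y s")
    print(feedback_det([x, y], [s], [sp.Integer(0)], sp.true, [x + y], [s, s]))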
The feedback operator is illustrated in Fig. 6b.

5.5.3 Symbolic Feedback of a Decomposable Stateless Deterministic STS Component For a decomposable stateless deterministic STS component C = stateless det((x1 , . . . , xn ), p, (e1 , . . . , em )), feedback(C) is the stateless deterministic STS component defined by feedback(C) = stateless det((x2 , . . . , xn ), p[x1 := e1 ], (e2 [x1 := e1 ], . . . , em [x1 := e1 ])) (24)

5.5.4 Correctness of Symbolic Feedback Composition Theorem 10. If C is a decomposable deterministic STS component, then fdbk(C) ≡ feedback(C)

Providing closed-form symbolic computations of feedback composition for general components, including possibly non-deterministic STS and QLTL components, is an open problem, beyond the scope of the current paper. We remark that the straightforward idea of adding to the contract the equality constraint x = y, where y is the output connected in feedback to input x, does not work.3 In fact, even obtaining a semantically consistent compositional definition of feedback for non-deterministic and non-input-receptive systems is a challenging problem [67]. Nevertheless, the results that we provide here are sufficient to cover the majority of cases in practice. In particular, the symbolic operator feedback can be used to handle Simulink diagrams, provided these diagrams do not contain algebraic loops, i.e., circular dependencies (see §5.7).

5.6 Closure Properties of MPT Subclasses w.r.t. Composition Operators In addition to providing symbolic computation procedures, the results of the above subsections also prove closure properties of the various MPT subclasses of RCRS with respect to the three composition operators. These closure properties are summarized in Tables 1 and 2. In a nutshell, both serial and parallel composition preserve the most general type of the composed components, according to the lattice in Fig. 5. For instance, the serial (or parallel) composition of two stateless STS components is a stateless STS component; the serial (or parallel) composition of a stateless STS component and a general STS component is a general STS component; and so on. Feedback preserves the type of its component (deterministic or stateless deterministic).

; and k         | qltl | sts | stateless | det | stateless det
qltl            | qltl | qltl | qltl      | qltl | qltl
sts             | qltl | sts  | sts       | sts  | sts
stateless       | qltl | sts  | stateless | sts  | stateless
det             | qltl | sts  | sts       | det  | det
stateless det   | qltl | sts  | stateless | det  | stateless det

Table 1: Closure properties of serial and parallel compositions. The table is to be read as follows: given atomic components C1 , C2 of types as specified in a row/column pair, the serial or parallel composition of C1 and C2 is an atomic component of type as specified in the corresponding table entry.

fdbk
det and decomposable            | det
stateless det and decomposable  | stateless det

Table 2: Closure properties of feedback composition.

5.7 Symbolic Simplification of Arbitrary Composite Components The results of the previous subsections show how to simplify into an atomic component the serial or parallel composition of two atomic components, or the feedback composition of an atomic decomposable component. We can combine these techniques in order to provide a general symbolic simplification algorithm: the algorithm takes as input an arbitrarily complex composite component, and returns an equivalent atomic component. The algorithm is shown in Fig. 7. The algorithm fails only in case it encounters the feedback of a non-decomposable component. Recall that decomposability implies determinism (c.f.
§5.5.1), which means that the test “C 00 is decomposable” means that C 00 is of the form det((x1 , . . .), s, a, p, next, (e1 , . . .)) or stateless det((x1 , . . .), p, (e1 , . . .)) and x1 is not free in e1 .

atomic(C):
  if C is atomic then return C
  else if C is C 0 ; C 00 then return serial(atomic(C 0 ), atomic(C 00 ))
  else if C is C 0 k C 00 then return parallel(atomic(C 0 ), atomic(C 00 ))
  else if C is fdbk(C 0 ) then
    C 00 := atomic(C 0 )
    if C 00 is decomposable then return feedback(C 00 )
    else fail
  else /* impossible by definition of syntax */ fail

Figure 7: Simplification algorithm for arbitrary composite components.

3 One of several problems of this definition is that it does not preserve refinement. For example, the stateless component with contract x 6= y refines the stateless component with contract true. Adding the constraint x = y to both contracts yields the components with contracts x = y and false, respectively, where the latter no longer refines the former. For a more detailed discussion, see [67].

We note that in practice, our RCRS implementation on top of Isabelle performs more simplifications in addition to those performed by the procedure atomic. For instance, our implementation may be able to simplify a logical formula φ into an equivalent but simpler formula φ0 (e.g., by eliminating quantifiers from φ), and consequently also simplify a component, say, qltl(x, y, φ) into an equivalent but simpler component qltl(x, y, φ0 ). These simplifications very much depend on the logic used in the components. Describing the simplifications that our implementation performs is outside the scope of the current paper, as it belongs in the realm of computational logic. It suffices to say that our tool is not optimized for this purpose, and could leverage specialized tools and relevant advances in the field of computational logic.

5.7.1 Deterministic and Algebraic Loop Free Composite Components In order to state and prove correctness of the algorithm, we extend the notion of determinism to a composite component. We also introduce the notion of algebraic loop free components, which capture systems with no circular and instantaneous input-output dependencies. A (possibly composite) component C is said to be deterministic if every atomic component of C is either a deterministic STS component or a stateless deterministic STS component. Formally, C is deterministic iff determ(C) is true, where determ(C) is defined inductively on the structure of C:
determ(det(x, s, a, p, n, e)) = true
determ(stateless det(x, p, e)) = true
determ(sts(x, y, s, init, trs)) = false
determ(stateless(x, y, trs)) = false
determ(C ; C 0 ) = determ(C) ∧ determ(C 0 )
determ(C k C 0 ) = determ(C) ∧ determ(C 0 )
determ(fdbk(C)) = determ(C)
Notice that this notion of determinism applies to a generally composite component C, i.e., a syntactic term in our algebra of components, involving atomic components possibly composed via the three composition operators. This notion of determinism is the generalization of the syntactically deterministic STS components, which are atomic. This notion of determinism is also distinct from any semantic notion of determinism (which we have not introduced at all in this paper, as it is not needed). For a deterministic component we define its output-input dependency relation. Let C be deterministic, and let Σin (C) = X1 × . . . × Xn and Σout (C) = Y1 × . . . × Ym . The relation OI(C) ⊆ {1, . . . , m} × {1, . . . , n} is defined inductively on the structure of C: OI(det((x1 , . . .
, xn ), s, a, p, next, (e1 , . . . , em ))) = OI(stateless det((x1 , . . . , xn ), p, (e1 , . . . , em ))) = OI(C ; C 0 ) = OI(C k C 0 ) = OI(fdbk(C)) = {(i, j) | xj is free in ei } {(i, j) | xj is free in ei } OI(C 0 ) ◦ OI(C) OI(C) ∪ {(i + m, j + n) | (i, j) ∈ OI(C 0 )} where Σin (C) = X1 × . . . × Xn and Σout (C) = Y1 × . . . × Ym {(i, j) | i > 0 ∧ j > 0 ∧ ((i + 1, j + 1) ∈ OI(C) ∨ ((i + 1, 1) ∈ OI(C) ∧ (1, j + 1) ∈ OI(C)))} The intuition is that (i, j) ∈ OI(C) iff the i-th output of C depends on its j-th input. The OI relation is preserved by the symbolic operations, as shown by the following lemma: Lemma 9. If C and C 0 are deterministic STS components, then OI(C ; C 0 ) OI(C k C 0 ) = OI(serial(C, C 0 )) = OI(parallel(C, C 0 )). If C is also decomposable, then OI(fdbk(C)) = OI(feedback(C)). We introduce now the the notion of algebraic loop free component. Intuitively, a (possibly composite) deterministic component C is algebraic loop free if, whenever C contains a subterm of the form fdbk(C 0 ), the first output of C 0 does not depend on its first input. This implies that whenever a feedback connection is formed, no circular dependency is introduced. It also ensures that the simplification algorithm will never fail. Formally, for a component C such that determ(C) is true, loop-free(C) is defined inductively on the structure of C: loop-free(C) = loop-free(C) = loop-free(C ; C 0 ) = loop-free(C k C 0 ) = loop-free(fdbk(C)) = 5.7.2 true if C = det(x, s, a, p, next, out) true if C = stateless det(x, p, out) loop-free(C) ∧ loop-free(C 0 ) loop-free(C) ∧ loop-free(C 0 ) loop-free(C) ∧ (1, 1) 6∈ OI(C) Correctness of the Simplification Algorithm Theorem 11. Let C be a (possibly composite) component. 1. If C does not contain any fdbk operators then atomic(C) does not fail and returns an atomic component such that atomic(C) ≡ C. 2. If determ(C) ∧ loop-free(C) is true then atomic(C) does not fail and returns an atomic component such that atomic(C) ≡ C. Moreover, atomic(C) is a deterministic STS component and OI(C) = OI(atomic(C)). Proof. The first part of the theorem is a consequence of the fact that the symbolic serial and parallel compositions are defined for all atomic components and return equivalent atomic components, by Theorems 8 and 9. 34 For the second part, since we have a recursive procedure, we prove its correctness by asumming the correctness of the recursive calls. Additionally, the termination of this procedure is ensured by the fact that all recursive calls are made on “smaller” components. Specifically: we assume that both determ(C) and loop-free(C) hold; and we prove that atomic(C) does not fail, atomic(C) is a deterministic STS component, atomic(C) ≡ C, and OI(C) = OI(atomic(C)). We only consider the case C = fdbk(C 0 ). All other cases are similar, but simpler. Because determ(C) and loop-free(C) hold, we have that determ(C 0 ) and loop-free(C 0 ) also hold, and in addition (1, 1) 6∈ OI(C 0 ). Using the correctness assumption for the recursive call we have that atomic(C 0 ) does not fail, C 00 = atomic(C 0 ) is a deterministic STS component, C 00 ≡ C 0 , and OI(C 00 ) = OI(C 0 ). Because C 00 is a deterministic STS component and (1, 1) 6∈ OI(C 0 ) = OI(C 00 ), C 00 is decomposable. From this we have that D := feedback(C 00 ) is defined. Therefore, atomic(C) returns D and does not fail. It remains to show that D has the desired properties. 
By the definition of feedback(C 00 ) and the fact that C 00 is a decomposable deterministic STS component, D is also a deterministic STS component. We also have: D = feedback(C 00 ) ≡ fdbk(C 00 ) ≡ fdbk(C 0 ) = C where feedback(C 00 ) ≡ fdbk(C 00 ) follows from Theorem 10 and fdbk(C 00 ) ≡ fdbk(C 0 ) follows from C 00 ≡ C 0 and the semantics of fdbk. Finally, using Lemma 9 and OI(C 00 ) = OI(C 0 ), we have OI(D) = OI(feedback(C 00 )) = OI(fdbk(C 00 )) = OI(fdbk(C 0 )) = OI(C) Corollary 2. If a component C does not contain any fdbk operators or if determ(C) ∧ loop-free(C) is true, then [[C]] is a GPT. Note that condition determ(C) ∧ loop-free(C) is sufficient, but not necessary, for [[C]] to be a GPT. For example, consider the following components:  Constfalse = stateless det x, true, false  And = stateless det (x, y), true, (x ∧ y, x ∧ y) C = Constfalse ; fdbk(And) Constfalse outputs the constant false. And is a version of logical and with two identical outputs (we need two copies of the output, because one will be eliminated once we apply feedback). C is a composite component, formed by first connecting the first output of And in feedback to its first input, and then connecting the output of Constfalse to the second input of And (in reality, to the only remaining input of fdbk(And)). Observe that C has algebraic loops, that is, loop-free(C) does not hold. Yet it can be shown that [[C]] is a GPT (in particular, we can show that C ≡ Constfalse ). Example. The simplification algorithm applied to the component from Fig. 2 results in atomic(Sum) = atomic(fdbk(Add ; UnitDelay ; Split)) = det(y, s, 0, s + y, s). 35 To see how the above is derived, let us first calculate atomic(Add ; UnitDelay ; Split): atomic(Add ; UnitDelay ; Split) = serial(atomic(Add ; UnitDelay), Split) = serial(serial(Add, UnitDelay), Split) = {Expanding Add and UnitDelay and choosing suitable variable names} serial(serial(stateless det((x, y), true, x + y), det(z, s, 0, true, z, s)), Split) = {Computing inner serial} serial(det((x, y), s, 0, true, x + y, s), Split) = {Expanding Split and choosing suitable variable names} serial(det((x, y), s, 0, true, x + y, s), stateless det(z, true, (z, z))) = {Computing remaining serial} det((x, y), s, 0, true, x + y, (s, s)) We calculate now atomic(fdbk(det((x, y), s, 0, true, x + y, (s, s)))): atomic(fdbk(det((x, y), s, 0, true, x + y, (s, s)))) = {x is not free in s} feedback(det((x, y), s, 0, true, x + y, (s, s))) = {Computing feedback} det(y, s, 0, true, s + y, s) 5.8 Checking Validity and Compatibility Recall the example given in §5.3.7, of the serial composition of components C1 and C2 , resulting in a component with input-output relation false, implying that [[C1 ; C2 ]] = Fail. When this occurs, we say that C1 and C2 are incompatible. We would like to catch such incompatibilities. This amounts to first simplifying the serial composition C1 ; C2 into an atomic component C, and then checking whether [[C]] = Fail. In general, we say that a component C is valid if [[C]] 6= Fail. Given a component C, we can check whether it is valid, as follows. First, we simplify C to obtain an atomic component C 0 = atomic(C). If C 0 is a QLTL component of the form qltl(x, y, ϕ) then C 0 is valid iff ϕ is satisfiable. The same is true if C 0 is a stateless STS component of the form stateless(x, y, ϕ). If C 0 is a general STS component then we can first transform it into a QLTL component and check satisfiability of the resulting QLTL formula. Theorem 12. 
If C is an atomic component of the form qltl(x, y, ϕ) or stateless(x, y, ϕ), then [[C]] 6= Fail iff ϕ is satisfiable. As mentioned in the introduction, one of the goals of RCRS is to function as a behavioral type system for reactive system modeling frameworks such as Simulink. In the RCRS setting, type checking consists in checking properties such as compatibility of components, as in the example C1 ; C2 of §5.3.7. When components are compatible, by computing new (stronger) input preconditions like in the example C3 ; C4 of §5.3.7, RCRS can be seen as enabling to perform behavioral type inference. Indeed, the new derived condition G F x in the above example can be seen as an inferred type of the composite component C3 ; C4 . We note that the decidability of the satisfiability question for ϕ depends on the logic used and the domains of the variables in the formula. For instance, although ϕ can be a QLTL formula, if it restricts the set of 36 constants to true, has no functional symbols, and only equality as predicate symbol, then it is equivalent to a QPTL formula,4 for which we can use available techniques [76]. 5.9 Checking Input-Receptiveness and Computing Legal Inputs Symbolically Given a component C, we often want to check whether it is input-receptive, i.e., whether legal([[C]]) = >, or equivalently, [[C]](>) = >. More generally, we may want to compute the legal input values for C, which is akin to type inference as discussed above. To do this, we will provide a symbolic method to compute legal([[C]]) as a formula legal(C). Then, checking that C is input-receptive amounts to checking that the formula legal(C) is valid, or equivalently, checking that ¬legal(C) is unsatisfiable. We assume that C is atomic (otherwise, we first simplify C using the algorithm of §5.7). Definition 22. Given an atomic component C, we define legal(C), a formula characterizing the legal inputs of C. legal(C) is defined based on the type of C:  legal qltl(x, y, ϕ) = (∃y : ϕ) (25)  0 legal sts(x, y, s, init, trs) = (∀s, y : init ⇒ (ϕ L ϕ )) (26)  legal stateless(x, y, trs) = G (∃y : trs) (27)  legal det(x, s, a, p, next, out) = (∀s, y : s = a ⇒ (# s = next ∧ y = out) L p) (28)  (29) legal stateless det(x, p, out) = G p where ϕ = trs[s0 := # s] and ϕ0 = (∃s0 , y : trs). The next theorem shows that legal correctly characterizes the semantic predicate legal: Theorem 13. If C is an atomic component , then legal([[C]]) = {σx | σx |= legal(C)}. It follows from Theorem 13 that a component C is input-receptive iff the formula legal(C) is valid. 5.10 Checking Refinement Symbolically We end this section by showing how to check whether a component refines another component. Again, we will assume that the components in question are atomic (if not, they can be simplified using the atomic procedure). Theorem 14. Let C1 = sts(x, y, s, init, r1 ), C10 = sts(x, y, s, init0 , r10 ), C2 = stateless(x, y, r2 ), C20 = stateless(x, y, r20 ), C3 = qltl(x, y, ϕ), and C30 = qltl(x, y, ϕ0 ). Then: 1. C1 is refined by C10 if the formula (init0 ⇒ init) ∧ ((∃s0 , y : r1 ) ⇒ (∃s0 , y : r10 )) ∧ ((∃s0 , y : r1 ) ∧ r10 ⇒ r) is valid. 2. C2 is refined by C20 if and only if the formula   (∃y : r2 ) ⇒ (∃y : r20 ) ∧ (∃y : r2 ) ∧ r20 ⇒ r2 is valid. 4 For example, the atomic QLTL formula # # x = # y can be translated into the LTL formula X X x ⇔ X y, and the formula # # # x = true into X X X x. 37 3. C3 is refined by C30 if and only if the formula   (∃y : ϕ) ⇒ (∃y : ϕ0 ) ∧ ((∃y : ϕ) ∧ ϕ0 ) ⇒ ϕ is valid. 
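As a concrete illustration of how condition 2 of Theorem 14 can be discharged mechanically, the following sketch (Python with the Z3 SMT solver; an illustration using assumed toy contracts, not part of the RCRS toolset) checks refinement of stateless components by testing validity of the corresponding first-order formula.

    from z3 import Reals, And, Implies, Exists, ForAll, Not, Solver, unsat

    x, y = Reals("x y")

    def refines_stateless(r, r2):
        """Condition 2 of Theorem 14: stateless(x, y, r) is refined by
        stateless(x, y, r2) iff the formula below is valid (negation unsat)."""
        cond = ForAll([x, y],
                      And(Implies(Exists([y], r), Exists([y], r2)),
                          Implies(And(Exists([y], r), r2), r)))
        s = Solver()
        s.add(Not(cond))
        return s.check() == unsat

    # Hypothetical contracts for illustration: the refining component picks the
    # single output y = x + 1, which satisfies y >= x whenever the input x >= 0 is legal.
    r  = And(x >= 0, y >= x)
    r2 = (y == x + 1)
    print(refines_stateless(r, r2))   # True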
As the above theorem shows, checking refinement amounts to checking validity (or equivalently, satisfiability of the negation) of first-order formulas formed by the various symbolic expressions in the component specifications. The exact logic of these formulas depends on the logics used by the components. For example, if C3 and C30 both use quantifier-free LTL for φ and φ0 , then in order to check refinement we need to check satisfiability of a first-order QLTL formula. Examples. Consider again the QLTL component C = qltl((), t, oven), introduced in §3.3, where oven = (t = 20 ∧ ((t < # t ∧ t < 180) U thermostat)) thermostat = G (180 ≤ t ∧ t ≤ 220) Let us introduce a refined version C 0 of C: C0 = sts((), t, (s, sw), init, trs) where init = s = 20 ∧ sw = on trs (t = s) ∧ = (if sw = on then s < s0 < s + 5 else (if s > 10 then s − 5 < s0 < s else s0 = s)) ∧ (if sw = on ∧ s > 210 then sw0 = off else (if sw = off ∧ s < 190 then sw0 = on else sw0 = sw)) C 0 is an STS component with no input variables, output variable t, and state variables s and sw, recording the current temperature of the oven, and the on/off status of the switch, respectively. When sw is on, the temperature increases nondeterministically by up to 5 units, otherwise the temperature decreases nondeterministically by up to 5 units. When the temperature exceeds 210, the switch is turned off; when the temperature is below 190, the switch is turned on; otherwise sw remains unchanged. The output t is always equal to the current state s. Initially the temperature is 20, and sw is on. Using Theorem 14, and the properties of sts2qltl we have: C  C0 ⇐⇒ C  sts2qltl(C 0 ) ⇐⇒ qltl((), t, oven)  qltl((), t, (∀s, sw, t : init ⇒ (ϕ L ϕ0 )) ∧ (∃s, sw : init ∧ G ϕ)) ⇐⇒ where ϕ = trs[s0 , sw0 := # s, # sw] and ϕ0 = (∃s0 , sw0 , t : trs) {Using Lemma 1, because ϕ0 ⇐⇒ true} qltl((), t, oven)  qltl((), t, (∃s, sw : init ∧ G ϕ)) ⇐⇒ {Using Theorem 14}   (∃t : oven) ⇒ (∃t, s, sw : init ∧ G ϕ) ∧ ((∃t : oven) ∧ (∃s, sw : init ∧ G ϕ)) ⇒ oven is valid ⇐⇒ {Because (∃t : oven) ⇐⇒ true and (∃t, s, sw : init ∧ G ϕ) ⇐⇒ true}  (∃s, sw : init ∧ G ϕ) ⇒ oven is valid Thus, checking whether C 0 refines C amounts to checking whether the QLTL formula (∃s, sw : init∧G ϕ) ⇒ oven is valid. This indeed holds for this example and can be shown using logical reasoning. 38 The above example is relatively simple in the sense that in the end refinement reduces to checking implication between the corresponding contracts. Indeed, this is always the case for input-receptive systems, as in the example above. However, refinement is not equivalent to implication in the general case of noninput-receptive systems. For example, consider the components: C1 = stateless(x, y, x ≥ 0 ∧ y ≥ x) C2 = stateless(x, y, x ≤ y ≤ x + 10) Using Theorem 14, we have: C1  C2 ⇐⇒ {Theorem 14} ((∃y : x ≥ 0 ∧ y ≥ x) ⇒ (∃y : x ≤ y ≤ x + 10)) ∧ ((∃y : x ≥ 0 ∧ y ≥ x) ∧ x ≤ y ≤ x + 10 ⇒ x ≥ 0 ∧ y ≥ x) is valid ⇐⇒ {Arithmetic and logical reasoning} true Note that the second and third parts of Theorem 14 provide necessary and sufficient conditions, while the first part only provides a sufficient, but generally not necessary condition. Indeed, the condition is generally not necessary in the case of STS components with state, as state space computation is ignored by the condition. This can be remedied by transforming STS components into equivalent QLTL components and then applying the second part of the theorem. 
An alternative which may be more tractable, particularly in the case of finite-state systems, is to use techniques akin to strategy synthesis in games, such as those proposed in [79] for finite-state relational interfaces. Another limitation of the first part of Theorem 14 is that it requires the two STS components to have the same state space, i.e., the same state variable s. This restriction can be lifted using the well-known idea of data refinement [44, 8].

Theorem 15. Let C1 = sts(x, y, s, init, r), C2 = sts(x, y, t, init0 , r0 ) be two STS components, and D a (data refinement) expression on variables s and t. Let p = (∃s0 , y : r) and p0 = (∃t0 , y : r0 ). If the formulas (∀t : init0 ⇒ (∃s : D ∧ init)), (∀t, x, s : D ∧ p ⇒ p0 ), and (∀t, x, s, t0 , y : D ∧ p ∧ r0 ⇒ (∃s0 : D[t, s := t0 , s0 ] ∧ r)) are valid, then C1 is refined by C2 .

6 Toolset and Case Studies The RCRS framework comes with a toolset, illustrated in Fig. 8. The toolset is publicly available under the MIT license and can be downloaded from http://rcrs.cs.aalto.fi. The toolset is described in tool papers [32, 33] ([32] contains additional material over [33], specifically, a six-page appendix describing a demo of the RCRS toolset). In summary, the toolset consists of:
• A full implementation of the RCRS theory in Isabelle [61]. The implementation consists of 22 theory files and a total of 27588 lines of Isabelle code. A detailed description of the implementation can be found in the file document.pdf available in the public distribution of RCRS.
• A formal Analyzer, which is a set of procedures implemented on top of Isabelle and the functional programming language SML. The Analyzer performs compatibility checking, automatic contract simplification, and other functions.

Figure 8: The RCRS toolset.

• A formalization of Simulink, characterizing basic Simulink blocks as RCRS components and implementing those as a library of RCRS/Isabelle. At the time of writing this paper, 48 of Simulink’s blocks can be handled.
• A Translator: a Python program translating Simulink hierarchical block diagrams into RCRS code.

We implemented in Isabelle a shallow embedding [14] of the language introduced in §3.
The advantage of a shallow embedding is that all datatypes of Isabelle are available for specification of components, and we can use the existing Isabelle mechanism for renaming bound variables in compositions. The disadvantage of this shallow embedding is that we cannot express Algorithm 7 within Isabelle (hence the “manual” proof that we provide for Theorem 11). A deep embedding, in which the syntax of components is defined as a datatype of Isabelle, is possible, and is left as an open future work direction. We implemented Algorithm 7 in SML, the meta-language of Isabelle. The SML program takes as input a component C and returns not only a simplified atomic component atomic(C), but also a proved Isabelle theorem of the fact C ≡ atomic(C). The simplification program, as well as a number of other procedures to perform compatibility checking, validity checking, etc., form what we call the Analyzer in Fig. 8. The Translator takes as input a Simulink model and produces an RCRS/Isabelle theory file containing: (1) the definition of all atomic and composite components representing the Simulink diagram; and (2) embedded bottom-up simplification procedures and the corresponding correctness theorems. By running this theory file in Isabelle, we obtain an atomic component corresponding to the top-level Simulink model, equivalent to the original composite component. As a special case, if the Simulink diagram contains inconsistencies (e.g., division by zero), these are detected by obtaining Fail as the top-level atomic component. The error can be localized by finding earlier points in the Simulink hierarchy (subsystems) which already resulted in Fail. As mentioned earlier, how to obtain a composite component from a graphical block diagram is an interesting problem. This problem is studied in depth in [31], where several translation strategies are proposed. These various strategies all yield semantically equivalent components, but with different trade-offs in terms of size, readability, effectiveness of the simplification procedures, and so on. The Translator implements all these translation strategies, allowing the user to explore these trade-offs. Further details on the translation problem are provided in [31, 65]. A proof that the translation strategies yield semantically equivalent components is provided in [64]. This proof, which has been formalized in Isabelle, is a non-trivial result: the entire formalization of the translation algorithms and the proof is 13579 lines of Isabelle code. We have used the RCRS toolset on several case studies, including a real-life benchmark provided by the Toyota motor company. The benchmark involves a Fuel Control System (FCS) described in [45, 46]. FCS aims at controlling the air mass and injected fuel in a car engine such that their ratio is always optimal. This problem has important implications on lowering pollution and costs by improving the engine performance. Toyota has made several versions of FCS publicly available as Simulink models at https: //cps-vo.org/group/ARCH/benchmarks. We have used the RCRS toolset to process two of the three Simulink models in the FCS benchmark suite (the third model contains blocks that are currently not implemented in the RCRS component library). A 40 typical model in this set has a 3-layer hierarchy with a total of 104 Simulink block instances (97 basic blocks and 7 subsystems), and 101 connections out of which 8 are feedbacks. 
Each basic Simulink block is modeled in our framework by an atomic STS component (possibly stateless). These atomic STS components are created once, and form part of the RCRS implementation, which is reused for different Simulink models. The particular FCS diagram is translated into RCRS using the Translator, and simplified within Isabelle using our SML simplification procedure. After simplification, we obtain an atomic deterministic STS component with no inputs, 7 outputs, and 14 state variables. Its contract (which is 8337 characters long) includes a condition on the state variables, in particular, that a certain state variable must always be non-negative (as its value is fed into a square-root block). This condition makes it not immediately obvious that the whole system is valid (i.e., not Fail). However, we can show after applying the transformation sts2qltl that the resulting formula is satisfiable, which implies that the original model is consistent (i.e., no connections result in incompatibilities, there are no divisions by zero, etc.). This illustrates the use of RCRS as a powerful static analysis tool. More details on the FCS case study are provided in [31, 65, 32]. 7 Related Work Several formal compositional frameworks exist in the literature. Most closely related to RCRS are the frameworks of FOCUS [15], input-output automata [54], reactive modules [6], interface automata [22], and Dill’s trace theory [29]. RCRS shares with these frameworks many key compositionality principles, such as the notion of refinement. At the same time, RCRS differs and complements these frameworks in important ways. Specifically, FOCUS, IO-automata, and reactive modules, are limited to input-receptive systems, while RCRS is explicitly designed to handle non-input-receptive specifications. The benefits of non-inputreceptiveness are discussed extensively in [79] and will not be repeated here. Interface automata are a low-level formalism whereas RCRS specifications and reasoning are symbolic. For instance, in RCRS one can naturally express systems with infinite state-spaces, input-spaces, or output-spaces. (Such systems can even be handled automatically, provided the corresponding logic they are expressed in is decidable.) Both interface automata and Dill’s trace theory use a single form of asynchronous parallel composition, whereas RCRS has three primitive composition operators (serial, parallel, feedback) with synchronous semantics. Our work adapts and extends to the reactive system setting many of the ideas developed previously in a long line of research on correctness and compositionality for sequential programs. This line of research goes back to the works of Floyd, Hoare, Dijkstra, and Wirth, on formal program semantics, weakest preconditions, program development by stepwise refinement, and so on [34, 43, 27, 81]. It also goes back to game-theoretic semantics of sequential programs as developed in the original refinement calculus [10], as well as to contractbased design [57]. Many of the concepts used in our work are in spirit similar to those used in the above works. For instance, an input-output formula φ used in an atomic component in our language can be seen as a contract between the environment of the component and the component itself: the environment must satisfy the contract by providing to the component legal inputs, and the component must in turn provide legal outputs (for those inputs). 
On the other hand, several of the concepts used here come from the world of reactive systems and as such do not have a direct correspondence in the world of sequential programs. For instance, this is the case with feedback composition. RCRS extends refinement calculus from predicate to property transformers. Extensions of refinement calculus to infinite behaviors have also been proposed in the frameworks of action systems [11], fair action systems [12], and Event-B [3]. These frameworks use predicate (not property) transformers as their semantic foundation; they can handle certain property patterns (e.g., fairness) by providing proof rules for these properties, but they do not treat liveness and LTL properties in general [12, 82, 42]. The Temporal Logic of Actions [49] can be used to specify liveness properties, but does not distinguish between inputs and outputs, and as such cannot express non-input-receptive components.

Our specifications can be seen as “rich” behavioral types [50, 22]. Indeed, our work is closely related to programming languages and type theory, specifically refinement types [35], behavioral types [60, 51, 26], and liquid types [69]. Behavioral type frameworks have also been proposed in reactive system settings. In the SimCheck framework [70], Simulink blocks are annotated with constraints on input and output variables, much like stateless components in RCRS. RCRS is more general, as it also allows the specification of stateful components. RCRS is also a more complete compositional framework, with composition operators and refinement, which are not considered in [70]. Other behavioral type theories for reactive systems have been proposed in [23, 18, 30]. Compared to RCRS, these works are less general. In particular, [23, 30] are limited to specifications which separate the assumptions on the inputs from the guarantees on the outputs, and as such cannot capture input-output relations. [18] considers a synchronous model which allows one to specify legal values of inputs and outputs at the next step, given the current state. This model does not allow capturing relations between inputs and outputs within the same step, which RCRS allows.

Our work is related to formal verification frameworks for hybrid systems [5]. Broadly speaking, these can be classified into frameworks following a model-checking approach, which typically use automata-based specification languages and state-space exploration techniques, and those following a theorem-proving approach, which typically use logic-based specifications. More closely related to RCRS are the latter, among which are CircusTime [17], KeYmaera [37], and the PVS-based approach of [2]. CircusTime can handle a larger class of Simulink diagrams than the current implementation of RCRS. In particular, CircusTime can handle multi-rate diagrams, where different parts of the model work at different rates (periods). On the other hand, CircusTime is based on predicate (not property) transformers, and as such cannot handle liveness properties. KeYmaera is a theorem prover based on differential dynamic logic [62], which is itself based on dynamic logic [39]. The focus of both KeYmaera and the work in [2] is verification, not compositionality. For instance, these works do not distinguish between inputs and outputs and do not investigate considerations such as input-receptiveness.
The work of [68] distinguishes inputs and outputs, but provides a system model where the output relation is separated from the transition relation, and where the output relation is assumed to be total, meaning that there exists an output for every combination of input and current state. This does not allow the specification of non-input-receptive stateless components, such as, for example, the Div component from §3.

Our component algebra is similar to the algebra of flownomials [77] and to the relational model for nondeterministic dataflow [41]. In [20], graphs and graph operations which can be viewed as block diagrams are represented by algebraic expressions and operations, and a complete equational axiomatization of the equivalence of the graph expressions is given. This is then applied to flow-charts as investigated in [72]. The translation of block diagrams in general and Simulink in particular has been treated in a large number of papers, with various goals, including verification and code generation (e.g., see [80, 56, 19, 52, 73, 13, 83, 84, 85, 58]). Although we share several of the ideas of the above works, our main goal here is not to formalize the language of block diagrams, nor their translation to other formalisms, but to provide a complete compositional framework for reasoning about reactive systems.

RCRS is naturally related to compositional verification frameworks, such as [38, 1, 55, 74, 40, 24, 25]. In particular, compositional verification frameworks often make use of a refinement relation such as trace inclusion or simulation [74, 40]. However, the focus of these frameworks is different from that of RCRS. In compositional verification, the focus is to “break down” a large (and usually computationally expensive) verification task into smaller (and hopefully easier) subtasks. For this purpose, compositional verification frameworks employ several kinds of decomposition rules. An example of such a rule is the so-called precongruence rule (i.e., preservation of refinement by composition): if P1 refines Q1, and P2 refines Q2, then the composition P1 ∥ P2 refines Q1 ∥ Q2. This, together with preservation of properties by refinement, allows us to conclude that P1 ∥ P2 satisfies some property φ, provided we can prove that Q1 ∥ Q2 satisfies φ. The latter might be a simpler verification task, if Q1 and Q2 are smaller than P1 and P2. The essence of compositional verification is in finding such abstract versions Q1 and Q2 of the concrete processes in question, P1 and P2, and employing decomposition rules like the one above in the hope of making verification simpler. RCRS can also be used for compositional verification: indeed, RCRS provides both the precongruence rule and preservation of properties by refinement. Note that, in traditional settings, precongruence is not always powerful enough, and for this reason most compositional verification frameworks employ more complex decomposition rules (e.g., see [59]). In settings which allow non-input-receptive components, such as ours, there are indications that the precongruence rule is sufficient for compositional verification purposes [75], although more work is required to establish this in the specific context of RCRS. Such work is beyond the scope of the current paper. We also note that, beyond compositional verification with precongruence, RCRS provides a behavioral type theory which allows one to state system properties such as compatibility, something that is typically not available in compositional verification frameworks.
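Schematically, the two ingredients of this reasoning style mentioned above, the precongruence rule and preservation of properties by refinement, can be displayed as:

    if P1 refines Q1 and P2 refines Q2, then P1 ∥ P2 refines Q1 ∥ Q2;
    if P refines Q and Q satisfies φ, then P satisfies φ.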
Refinement can be seen as the inverse of abstraction, and as such our framework is related to general frameworks such as abstract interpretation [21]. Several abstractions have been proposed in reactive system settings, including relational abstractions for hybrid systems, which are related to Simulink [71]. The focus of these works is verification, and abstraction is used as a mechanism to remove details from the model that make verification harder. In RCRS, the simplification procedure that we employ can be seen as an abstraction process, as it eliminates internal variable information. However, RCRS simplification is an exact abstraction, in the sense that it does not lose any information: the final system is equivalent to the original one, and not an over- or under-approximation, as is usually the case with typical abstractions for verification purposes.

8 Conclusion

We presented RCRS, a compositional framework for modeling and reasoning about reactive systems. This paper focuses on the theory and methodology of RCRS, its formal semantics, and techniques for symbolic and computer-aided reasoning. RCRS is an ongoing project, and a number of problems remain open. Future work directions include:
• An extension of the framework to systems with algebraic loops, which necessitates handling instantaneous feedback. Here, the preliminary ideas of [67] can be helpful in defining the semantics of instantaneous feedback. However, [67] does not provide solutions on how to obtain a symbolic closed-form expression for the feedback of general components.
• Extension of the results of §5.5 to general components, possibly non-deterministic or non-decomposable.
• An extension of the framework to acausal systems, i.e., systems without a clear distinction of inputs and outputs [36].
• The development of better symbolic reasoning techniques, such as simplification of logical formulas, decision procedures, etc.

References

[1] M. Abadi and L. Lamport. Conjoining specifications. ACM Trans. Program. Lang. Syst., 17(3):507–535, 1995.
[2] E. Ábrahám-Mumm, M. Steffen, and U. Hannemann. Verification of hybrid systems: Formalization and proof rules in PVS. In 7th Intl. Conf. Engineering of Complex Computer Systems (ICECCS 2001), pages 48–57, 2001.
[3] J.-R. Abrial. Modeling in Event-B: System and Software Engineering. Cambridge University Press, New York, NY, USA, 1st edition, 2010.
[4] B. Alpern and F. B. Schneider. Defining liveness. Information Processing Letters, 21(4):181–185, 1985.
[5] R. Alur, C. Courcoubetis, N. Halbwachs, T. Henzinger, P. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine. The algorithmic analysis of hybrid systems. Theoretical Computer Science, 138:3–34, 1995.
[6] R. Alur and T. Henzinger. Reactive modules. Formal Methods in System Design, 15:7–48, 1999.
[7] R. Alur, T. Henzinger, O. Kupferman, and M. Vardi. Alternating refinement relations. In CONCUR’98, volume 1466 of LNCS. Springer, 1998.
[8] R. J. Back. Correctness preserving program refinements: proof theory and applications, volume 131 of Mathematical Centre Tracts. Mathematisch Centrum, Amsterdam, 1980.
[9] R.-J. Back and M. Butler. Exploring summation and product operators in the refinement calculus, pages 128–158. Springer Berlin Heidelberg, Berlin, Heidelberg, 1995.
[10] R.-J. Back and J. von Wright. Refinement Calculus: A Systematic Introduction. Springer, 1998.
[11] R.-J. Back and J. Wright. Trace refinement of action systems. In B. Jonsson and J.
Parrow, editors, CONCUR ’94: Concurrency Theory, volume 836 of Lecture Notes in Computer Science, pages 367–384. Springer Berlin Heidelberg, 1994. [12] R.-J. Back and Q. Xu. Refinement of fair action systems. Acta Informatica, 35(2):131–165, 1998. [13] P. Boström. Contract-Based Verification of Simulink Models. In S. Qin and Z. Qiu, editors, Formal Methods and Software Engineering, volume 6991 of Lecture Notes in Computer Science, pages 291–306. Springer Berlin Heidelberg, 2011. [14] R. J. Boulton, A. Gordon, M. J. C. Gordon, J. Harrison, J. Herbert, and J. V. Tassel. Experience with embedding hardware description languages in HOL. In IFIP TC10/WG 10.2 Intl. Conf. on Theorem Provers in Circuit Design, pages 129–156. North-Holland Publishing Co., 1992. [15] M. Broy and K. Stølen. Specification and development of interactive systems: focus on streams, interfaces, and refinement. Springer, 2001. [16] R. Cavada, A. Cimatti, M. Dorigatti, A. Griggio, A. Mariotti, A. Micheli, S. Mover, M. Roveri, and S. Tonetta. The nuxmv symbolic model checker. In A. Biere and R. Bloem, editors, Computer Aided Verification: 26th International Conference, CAV 2014,, pages 334–342, Cham, 2014. Springer. [17] A. L. C. Cavalcanti, A. Mota, and J. C. P. Woodcock. Simulink timed models for program verification. In Z. Liu, J. C. P. Woodcock, and H. Zhu, editors, Theories of Programming and Formal Methods Essays Dedicated to Jifeng He on the Occasion of His 70th Birthday, volume 8051 of Lecture Notes in Computer Science, pages 82–99. Springer, 2013. [18] A. Chakrabarti, L. de Alfaro, T. Henzinger, and F. Mang. Synchronous and bidirectional component interfaces. In CAV, LNCS 2404, pages 414–427. Springer, 2002. [19] C. Chen, J. S. Dong, and J. Sun. A formal framework for modeling and validating Simulink diagrams. Formal Aspects of Computing, 21(5):451–483, 2009. [20] B. Courcelle. A representation of graphs by algebraic expressions and its use for graph rewriting systems. In H. Ehrig, M. Nagl, G. Rozenberg, and A. Rosenfeld, editors, Graph-Grammars and Their Application to Computer Science, 3rd International Workshop, Warrenton, Virginia, USA, December 2-6, 1986, volume 291 of Lecture Notes in Computer Science, pages 112–132. Springer, 1986. [21] P. Cousot and R. Cousot. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In 4th ACM Symp. POPL, 1977. [22] L. de Alfaro and T. Henzinger. Interface automata. In Foundations of Software Engineering (FSE). ACM Press, 2001. [23] L. de Alfaro and T. Henzinger. Interface theories for component-based design. In EMSOFT’01. Springer, LNCS 2211, 2001. 44 [24] W. de Roever, H. Langmaack, and A. E. Pnueli. Compositionality: The Significant Difference. LNCS. Springer, 1998. [25] W.-P. de Roever, F. de Boer, U. Hanneman, J. Hooman, Y. Lakhnech, M. Poel, and J. Zwiers. Concurrency Verification: Introduction to Compositional and Non-compositional Methods. Cambridge University Press, 2012. [26] K. Dhara and G. Leavens. Forcing behavioral subtyping through specification inheritance. In ICSE’96: 18th Intl. Conf. on Software Engineering, pages 258–267. IEEE Computer Society, 1996. [27] E. Dijkstra. Notes on structured programming. In O. Dahl, E. Dijkstra, and C. Hoare, editors, Structured programming, pages 1–82. Academic Press, London, UK, 1972. [28] E. Dijkstra. Guarded commands, nondeterminacy and formal derivation of programs. Comm. ACM, 18(8):453–457, 1975. [29] D. Dill. 
Trace theory for automatic hierarchical verification of speed-independent circuits. MIT Press, Cambridge, MA, USA, 1987. [30] L. Doyen, T. Henzinger, B. Jobstmann, and T. Petrov. Interface theories with component reuse. In 8th ACM & IEEE International conference on Embedded software, EMSOFT, pages 79–88, 2008. [31] I. Dragomir, V. Preoteasa, and S. Tripakis. Compositional Semantics and Analysis of Hierarchical Block Diagrams. In SPIN, pages 38–56. Springer, 2016. [32] I. Dragomir, V. Preoteasa, and S. Tripakis. The Refinement Calculus of Reactive Systems Toolset. CoRR, abs/1710.08195, 2017. [33] I. Dragomir, V. Preoteasa, and S. Tripakis. The Refinement Calculus of Reactive Systems Toolset. In TACAS, 2018. [34] R. Floyd. Assigning meanings to programs. In In. Proc. Symp. on Appl. Math. 19, pages 19–32. American Mathematical Society, 1967. [35] T. Freeman and F. Pfenning. Refinement Types for ML. SIGPLAN Not., 26(6):268–277, May 1991. [36] P. Fritzson. Principles of Object-Oriented Modeling and Simulation with Modelica 3.3: A Cyber-Physical Approach. Wiley, 2 edition, 2014. [37] N. Fulton, S. Mitsch, J.-D. Quesel, M. Völp, and A. Platzer. KeYmaera X: An axiomatic tactical theorem prover for hybrid systems. In A. P. Felty and A. Middeldorp, editors, CADE, volume 9195 of LNCS, pages 527–538. Springer, 2015. [38] O. Grumberg and D. Long. Model checking and modular verification. ACM Trans. Program. Lang. Syst., 16(3):843–871, 1994. [39] D. Harel, J. Tiuryn, and D. Kozen. Dynamic Logic. MIT Press, 2000. [40] T. Henzinger, S. Qadeer, and S. Rajamani. You assume, we guarantee: Methodology and case studies. In CAV’98, volume 1427 of LNCS. Springer-Verlag, 1998. [41] T. T. Hildebrandt, P. Panangaden, and G. Winskel. A relational model of non-deterministic dataflow. Mathematical Structures in Computer Science, 14(5):613–649, 10 2004. [42] T. S. Hoang and J.-R. Abrial. Reasoning About Liveness Properties in Event-B. In Proceedings of the 13th International Conference on Formal Methods and Software Engineering, ICFEM’11, pages 456–471, Berlin, Heidelberg, 2011. Springer-Verlag. 45 [43] C. A. R. Hoare. An axiomatic basis for computer programming. Comm. ACM, 12(10):576–580, 1969. [44] C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1(4), Dec. 1972. [45] X. Jin, J. Deshmukh, J. Kapinski, K. Ueda, and K. Butts. Benchmarks for model transformations and conformance checking. In 1st Intl. Workshop on Applied Verification for Continuous and Hybrid Systems (ARCH), 2014. [46] X. Jin, J. V. Deshmukh, J. Kapinski, K. Ueda, and K. Butts. Powertrain Control Verification Benchmark. In Proceedings of the 17th International Conference on Hybrid Systems: Computation and Control, HSCC’14, pages 253–262. ACM, 2014. [47] Y. Kesten, Z. Manna, and A. Pnueli. Temporal verification of simulation and refinement. In J. W. de Bakker, W. P. de Roever, and G. Rozenberg, editors, A Decade of Concurrency Reflections and Perspectives: REX School/Symposium, pages 273–346. Springer, 1994. [48] Y. Kesten and A. Pnueli. Complete Proof System for QPTL. Journal of Logic and Computation, 12(5):701, 2002. [49] L. Lamport. The temporal logic of actions. ACM Trans. Program. Lang. Syst., 16(3):872–923, May 1994. [50] E. Lee and Y. Xiong. System-level types for component-based design. In EMSOFT’01: 1st Intl. Workshop on Embedded Software, pages 237–253. Springer, 2001. [51] B. Liskov and J. Wing. A behavioral notion of subtyping. ACM Trans. Program. Lang. Syst., 16(6):1811– 1841, 1994. [52] R. Lublinerman, C. 
Szegedy, and S. Tripakis. Modular code generation from synchronous block diagrams – modularity vs. code size. In 36th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’09), pages 78–89. ACM, Jan. 2009. [53] R. Lublinerman and S. Tripakis. Modularity vs. reusability: Code generation from synchronous block diagrams. In Design, Automation, and Test in Europe (DATE’08), pages 1504–1509. ACM, Mar. 2008. [54] N. Lynch and M. Tuttle. An introduction to input/output automata. CWI Quarterly, 2:219–246, 1989. [55] K. McMillan. A compositional rule for hardware design refinement. In Computer Aided Verification (CAV’97), volume 1254 of LNCS. Springer-Verlag, 1997. [56] B. Meenakshi, A. Bhatnagar, and S. Roy. Tool for Translating Simulink Models into Input Language of a Model Checker. In Formal Methods and Software Engineering, volume 4260 of LNCS, pages 606–620. Springer, 2006. [57] B. Meyer. Applying “Design by Contract”. Computer, 25(10):40–51, 1992. [58] S. Minopoli and G. Frehse. SL2SX Translator: From Simulink to SpaceEx Verification Tool. In 19th ACM International Conference on Hybrid Systems: Computation and Control (HSCC), 2016. [59] K. S. Namjoshi and R. J. Trefler. On the completeness of compositional reasoning methods. ACM Trans. Comput. Logic, 11(3), 2010. [60] O. Nierstrasz. Regular types for active objects. SIGPLAN Not., 28(10):1–15, 1993. [61] T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL — A Proof Assistant for Higher-Order Logic. LNCS 2283. Springer, 2002. [62] A. Platzer. Differential dynamic logic for hybrid systems. J. Autom. Reas., 41(2):143–189, 2008. 46 [63] A. Pnueli. The Temporal Logic of Programs. In 18th Annual Symposium on Foundations of Computer Science, Providence, Rhode Island, USA, 31 October - 1 November 1977, pages 46–57. IEEE Computer Society, 1977. [64] V. Preoteasa, I. Dragomir, and S. Tripakis. A nondeterministic and abstract algorithm for translating hierarchical block diagrams. CoRR, abs/1611.01337, 2016. [65] V. Preoteasa, I. Dragomir, and S. Tripakis. Type Inference of Simulink Hierarchical Block Diagrams in Isabelle. In 37th IFIP WG 6.1 International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE), 2017. [66] V. Preoteasa and S. Tripakis. Refinement calculus of reactive systems. In Embedded Software (EMSOFT), 2014 International Conference on, pages 1–10, Oct 2014. [67] V. Preoteasa and S. Tripakis. Towards Compositional Feedback in Non-Deterministic and Non-InputReceptive Systems. In 31st Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2016. [68] G. Reissig, A. Weber, and M. Rungger. Feedback refinement relations for the synthesis of symbolic controllers. IEEE Trans. Automat. Contr., 62(4):1781–1796, 2017. [69] P. M. Rondon, M. Kawaguci, and R. Jhala. Liquid types. In 29th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’08, pages 159–169, New York, NY, USA, 2008. ACM. [70] P. Roy and N. Shankar. SimCheck: An expressive type system for Simulink. In C. Muñoz, editor, 2nd NASA Formal Methods Symposium (NFM 2010), NASA/CP-2010-216215, pages 149–160, Langley Research Center, Hampton VA 23681-2199, USA, Apr. 2010. NASA. [71] S. Sankaranarayanan and A. Tiwari. Relational abstractions for continuous and hybrid systems. In Computer Aided Verification: 23rd International Conference, CAV 2011, pages 686–702. Springer, 2011. [72] H. Schmeck. Algebraic characterization of reducible flowcharts. 
Journal of Computer and System Sciences, 27(2):165 – 199, 1983. [73] V. Sfyrla, G. Tsiligiannis, I. Safaka, M. Bozga, and J. Sifakis. Compositional translation of Simulink models into synchronous BIP. In Industrial Embedded Systems (SIES), 2010 International Symposium on, pages 217–220, July 2010. [74] N. Shankar. Lazy compositional verification. In COMPOS’97: Revised Lectures from the International Symposium on Compositionality: The Significant Difference, pages 541–564, London, UK, 1998. Springer-Verlag. [75] A. Siirtola, S. Tripakis, and K. Heljanko. When do we not need complex assume-guarantee rules? ACM Trans. Embed. Comput. Syst., 16(2):48:1–48:25, Jan. 2017. [76] A. P. Sistla, M. Y. Vardi, and P. Wolper. The Complementation Problem for Büchi Automata with Applications to Temporal Logic. Theoretical Computer Science, 49:217–237, 1987. [77] G. Stefănescu. Network Algebra. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2000. [78] S. Tripakis, B. Lickly, T. Henzinger, and E. Lee. On Relational Interfaces. In Proceedings of the 9th ACM & IEEE International Conference on Embedded Software (EMSOFT’09), pages 67–76. ACM, 2009. [79] S. Tripakis, B. Lickly, T. A. Henzinger, and E. A. Lee. A Theory of Synchronous Relational Interfaces. ACM Trans. Program. Lang. Syst., 33(4):14:1–14:41, July 2011. 47 [80] S. Tripakis, C. Sofronis, P. Caspi, and A. Curic. Translating Discrete-time Simulink to Lustre. ACM Trans. Embed. Comput. Syst., 4(4):779–818, Nov. 2005. [81] N. Wirth. Program development by stepwise refinement. Comm. ACM, 14(4):221–227, 1971. [82] D. Yadav and M. Butler. Verification of liveness properties in distributed systems. In Contemporary Computing, volume 40 of Communications in Computer and Information Science, pages 625–636. Springer Berlin Heidelberg, 2009. [83] C. Yang and V. Vyatkin. Transformation of Simulink models to IEC 61499 Function Blocks for verification of distributed control systems. Control Engineering Practice, 20(12):1259–1269, 2012. [84] C. Zhou and R. Kumar. Semantic Translation of Simulink Diagrams to Input/Output Extended Finite Automata. Discrete Event Dynamic Systems, 22(2):223–247, 2012. [85] L. Zou, N. Zhany, S. Wang, M. Franzle, and S. Qin. Verifying Simulink diagrams via a Hybrid Hoare Logic Prover. In Embedded Software (EMSOFT), Sept 2013. 48
Neural Models for Key Phrase Detection and Question Generation

arXiv:1706.04560v2 [cs.CL] 14 Sep 2017

Sandeep Subramanian1,*,†, Tong Wang2, Xingdi Yuan2, Saizheng Zhang1,†, Adam Trischler2, and Yoshua Bengio1
1 MILA, Université de Montréal
2 Microsoft Maluuba
* Corresponding author ([email protected]).
† Research conducted during internship at Microsoft Maluuba.

Abstract

We propose a two-stage neural model to tackle question generation from documents. Our model first estimates the probability that word sequences in a document compose “interesting” answers using a neural model trained on a question-answering corpus. We thus take a data-driven approach to interestingness. Predicted key phrases then act as target answers that condition a sequence-to-sequence question generation model with a copy mechanism. Empirically, our neural key phrase detection model significantly outperforms an entity-tagging baseline system and existing rule-based approaches. We demonstrate that the question generator formulates good quality natural language questions from extracted key phrases, and a human study indicates that our system’s generated question-answer pairs are competitive with those of an earlier approach. We foresee our system being used in an educational setting to assess reading comprehension and also as a data augmentation technique for semi-supervised learning.

1 Introduction

Many educational applications can benefit from automatic question generation, including vocabulary assessment Brown, Frishkoff, and Eskenazi (2005), writing support Liu, Calvo, and Rus (2012), and assessment of reading comprehension Mitkov and Ha (2003); Kunichika et al. (2004). Formulating questions that test for certain skills at certain levels requires significant human effort that is difficult to scale, e.g., to massive open online courses (MOOCs). Despite their applications, the majority of existing models for automatic question generation rely on rule-based methods that likewise do not scale well across different domains and/or writing styles. To address this limitation, we propose and compare several neural models for automatic question generation. We focus specifically on the assessment of reading comprehension. In this domain, question generation typically involves two inter-related components: first, a system to identify interesting entities or events (key phrases) within a passage or document Becker, Basu, and Vanderwende (2012); second, a question generator that constructs questions in natural language that ask specifically about the given key phrases. Key phrases thus act as the “correct” answers for generated questions. This procedure ensures that we can assess a student’s performance against a ground-truth target.

We formulate key phrase detection as modeling the probability of potential answers conditioned on a given document, i.e., P(a|d). Inspired by successful work in question answering, we propose a sequence-to-sequence model that generates a set of key-phrase boundaries. This model can flexibly select an arbitrary number of key phrases from a document. To teach it to assign high probability to interesting answers, we train it on human-selected answers from large-scale, crowd-sourced question-answering datasets. We thus take a purely data-driven approach to the concept of interestingness, working from the premise that crowdworkers tend to select entities or events that interest them when they formulate their own comprehension questions.
If this premise is correct, then the growing collection of crowd-sourced question-answering datasets Rajpurkar et al. (2016); Trischler et al. (2016) can be harnessed to learn models for key phrases of interest to human readers. Given a set of extracted key phrases, we approach question generation by modeling the conditional probability of a question given a document-answer pair, i.e., P(q|a, d). For this we use a sequence-to-sequence model with attention Bahdanau, Cho, and Bengio (2014) and the pointer-softmax mechanism Gulcehre et al. (2016). This component is also trained on a QA dataset by maximizing the likelihood of questions in the dataset. Empirically, our proposed model for key phrase detection outperforms two baseline systems by a significant margin. We support these quantitative findings with qualitative examples of generated question-answer pairs given documents.

2 Related Work

2.1 Question Generation

Automatic question generation systems are often used to alleviate (or even eliminate) the burden of human generation of questions to assess reading comprehension Mitkov and Ha (2003); Kunichika et al. (2004). Various NLP techniques have been adopted in these systems to improve generation quality, including parsing Heilman and Smith (2010a); Mitkov and Ha (2003), semantic role labeling Lindberg et al. (2013), and the use of lexicographic resources like WordNet Miller (1995); Mitkov and Ha (2003). However, the majority of proposed methods resort to simple rule-based techniques such as slot-filling with templates Lindberg et al. (2013); Chali and Golestanirad (2016); Labutov, Basu, and Vanderwende (2015) or syntactic transformation heuristics Agarwal and Mannem (2011); Ali, Chali, and Hasan (2010) (e.g., subject-auxiliary inversion Heilman and Smith (2010a)). These techniques can be inadequate to capture the diversity of natural language questions. To address this limitation, end-to-end-trainable neural models have recently been proposed for question generation in both vision Mostafazadeh et al. (2016) and language. For the latter, Du, Shao, and Cardie used a sequence-to-sequence model with an attention mechanism derived from the encoder states. Yuan et al. proposed a similar architecture but in addition improved model performance through policy gradient techniques. Wang, Yuan, and Trischler proposed a generative model that learns jointly to generate questions and answers based on documents.

2.2 Key Phrase Detection

Meanwhile, a highly relevant aspect of question generation is to identify which parts of a given document are important or interesting for asking questions. Existing studies formulate key phrase extraction as a two-step process. In the first step, lexical features (e.g., part-of-speech tags) are used to extract a key phrase candidate list of certain types Liu et al. (2011); Wang, Zhao, and Huang (2016); Le, Nguyen, and Shimazu (2016); Yang et al. (2017). In the second step, ranking models are often used to select a key phrase. Medelyan, Frank, and Witten; Lopez and Romary used bagged decision trees, while Lopez and Romary used a Multi-Layer Perceptron (MLP) and Support Vector Machine (SVM) to label the candidates in a binary fashion. Mihalcea and Tarau; Wan and Xiao; Le, Nguyen, and Shimazu scored key phrases using PageRank. Heilman and Smith asked crowdworkers to rate the acceptability of computer-generated natural language questions as quiz questions, and Becker, Basu, and Vanderwende solicited quality ratings of text chunks as potential gaps for Cloze-style questions.
These studies are closely related to our proposed work by the common goal of modeling the distribution of key phrases given a document. The major difference is that previous studies begin with a prescribed list of candidates, which might significantly bias the distribution estimate. In contrast, we adopt a dataset that was originally designed for question answering, where crowdworkers presumably tend to pick entities or events that interest them most. We postulate that the resulting distribution, learned directly from data, is more likely to reflect the true importance and appropriateness of answers. Recently, Meng et al. proposed a generative model for key phrase prediction with an encoder-decoder framework, which is able to both generate words from a vocabulary and point to words from the document. Their model achieved state-of-the-art results on multiple scientific publication keyword extraction datasets. This model shares ideas similar to our key phrase extractor, i.e., using a single neural model to learn the probabilities that words are key phrases.1 Yang et al. used a rule-based method to extract potential answers from unlabeled text, then generated questions conditioned on documents and the extracted answers using a pre-trained question generation model, and combined model-generated questions with human-generated questions for training question answering models. Experiments showed that question answering models can benefit from the augmented data provided by their approach.

Footnote 1: In scientific publications, keywords might be absent from abstracts, whereas in the SQuAD dataset all answers are extractive spans in documents. Due to the different nature of the two tasks, it is unfair to directly compare the two models by evaluating on either dataset.

3 Model Description

3.1 Key Phrase Detection

In this section, we describe a simple baseline as well as the two proposed neural models for extracting key phrases (answers) from documents.

3.1.1 Entity Tagging Baseline

Our baseline model (ENT) predicts all entities tagged by spaCy2 as key phrases. This is motivated by the fact that over 50% of the answers in SQuAD are entities (Table 2 in Rajpurkar et al. (2016)). These include dates (September 1967), numeric entities (3, five), people (William Smith), locations (the British Isles) and other entities (Buddhism).

Footnote 2: https://spacy.io/docs/usage/entity-recognition

3.1.2 Neural Entity Selection

The baseline model above naïvely selects all entities as candidate answers. One pitfall is that it exhibits high recall at the expense of precision (Table 1). We first attempt to address this with a neural entity selection model (NES) that selects a subset of entities from a list of candidates. Our neural entity selection model takes a document (i.e., a sequence of words) D = (w^d_1, ..., w^d_{n_d}) and a list of n_e entities, given as a sequence of (start, end) locations within the document, E = ((e^start_1, e^end_1), ..., (e^start_{n_e}, e^end_{n_e})). The model is then trained on the binary classification task of predicting whether an entity overlaps with any of the gold answers. Specifically, we maximize Σ_{i=1}^{n_e} log P(e_i | D). We parameterize P(e_i | D) using a neural model that first embeds each word w^d_i in the document as a distributed vector using a word embedding lookup table. We then encode the document using a bidirectional Long Short-Term Memory (LSTM) network into annotation vectors (h^d_1, ..., h^d_{n_d}). We then compute P(e_i | D) using a three-layer multilayer perceptron (MLP) that takes as input the concatenation of three vectors ⟨h^d_{n_d}; h^d_avg; h_{e_i}⟩. Here, h^d_avg and h^d_{n_d} are the average and the final state of the annotation vectors, respectively, and h_{e_i} is the average of the annotation vectors corresponding to the i-th entity (i.e., h^d_{e^start_i}, ..., h^d_{e^end_i}). During inference, we select the top k entities with highest likelihood as given by our model. We use k = 6 in our experiments, as determined by hyper-parameter search.
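As a rough illustration of the scorer just described (this is our own sketch, not the authors' code; the class name NESScorer and the layer sizes are assumptions), the computation of P(e_i | D) could be written in PyTorch as:

import torch
import torch.nn as nn

class NESScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.enc = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(          # three-layer MLP over [h_last; h_avg; h_entity]
            nn.Linear(6 * hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, doc_ids, ent_start, ent_end):
        h, _ = self.enc(self.emb(doc_ids))               # (1, n_d, 2*hidden) annotation vectors
        h_last = h[:, -1]                                # final annotation vector
        h_avg = h.mean(dim=1)                            # average annotation vector
        h_ent = h[:, ent_start:ent_end + 1].mean(dim=1)  # average over the entity span
        logit = self.mlp(torch.cat([h_last, h_avg, h_ent], dim=-1))
        return torch.sigmoid(logit)                      # P(entity overlaps a gold answer | D)

# Toy usage: score one candidate entity span in a 12-token document of word ids.
model = NESScorer(vocab_size=1000)
doc = torch.randint(0, 1000, (1, 12))
print(model(doc, ent_start=3, ent_end=5).item())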
3.1.3 Pointer Networks

While a significant fraction of answers in SQuAD are entities, extracting interesting aspects of a document requires looking beyond entities. Many documents of interest may be entity-less, or sometimes an entity tagger may fail to recognize some of the entities. To this end, we build a neural model that is trained from scratch to extract all of the answer key phrases in a particular document. We parameterize this model as a pointer network Vinyals, Fortunato, and Jaitly (2015) that is trained to point sequentially to the start and end locations of all key phrase answers. As in our entity selection model, we first encode the document into a sequence of annotation vectors (h^d_1, ..., h^d_{n_d}). A decoder LSTM is then trained to point to all of the start and end locations of answers in the document from left to right, conditioned on the annotation vectors, via an attention mechanism. We add a special termination token to the document, which the decoder is trained to attend on when it has generated all key phrases. This provides the flexibility to learn the number of key phrases the model should extract from a particular document. This is in contrast to the work of Meng et al. (2017), where a fixed number of key phrases was generated per document.

A pointer network is an extension of sequence-to-sequence models Sutskever, Vinyals, and Le (2014), where the target sequence consists of positions in the source sequence. We encode the source document into a sequence of annotation vectors h^d = (h^d_1, ..., h^d_{n_d}) using an embedding lookup table followed by a bidirectional LSTM. The decoder also consists of an embedding lookup, shared with the encoder, followed by a unidirectional LSTM with an attention mechanism. We denote the decoder’s annotation vectors as (h^p_1, h^p_2, ..., h^p_{2n_a−1}, h^p_{2n_a}), where n_a is the number of answer key phrases, h^p_1 and h^p_2 correspond to the start and end annotation vectors for the first answer key phrase, and so on. We parameterize P(w^d_i = start | h^p_1 ... h^p_j, h^d) and P(w^d_i = end | h^p_1 ... h^p_j, h^d) using the dot-product attention mechanism Luong, Pham, and Manning (2015) between the decoder and encoder annotation vectors, P(w^d_i | h^p_1 ... h^p_j, h^d) = softmax(W_1 h^p_j · h^d), where W_1 is an affine transformation matrix. The inputs at each step of the decoder are the words from the document that correspond to the start and end locations pointed to by the decoder. During inference, we employ a decoding strategy that greedily picks the best location from the softmax vector at every step, then post-processes the results to remove duplicate key phrases. Since the output sequence is relatively small, we observed similar performance when using greedy decoding and beam search.
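The pointing step itself is simple; the following sketch (ours, with assumed dimensions, not the authors' code) computes the distribution over document positions for one decoder state using the dot-product attention above:

import torch
import torch.nn.functional as F

def point_step(decoder_state, encoder_states, W1):
    """decoder_state: (d_dec,); encoder_states: (n_d, d_enc); W1: (d_enc, d_dec)."""
    scores = encoder_states @ (W1 @ decoder_state)   # one score per document position
    return F.softmax(scores, dim=0)                  # P(position | decoder state, document)

n_d, d_enc, d_dec = 10, 256, 256
probs = point_step(torch.randn(d_dec), torch.randn(n_d, d_enc), torch.randn(d_enc, d_dec))
start = int(probs.argmax())       # greedy choice of the next start (or end) location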
3.2 Question Generation

The question generation model adopts a sequence-to-sequence framework Sutskever, Vinyals, and Le (2014) with an attention mechanism Bahdanau, Cho, and Bengio (2014) and a pointer-softmax decoder Gulcehre et al. (2016). It takes a document D = (w^d_1, ..., w^d_{n_d}) and an answer A = (w^a_1, ..., w^a_{n_a}) as input, and outputs a question Q = (ŵ^q_1, ..., ŵ^q_{n_q}).

An input word w^{d,a}_i is first embedded by concatenating its word and character-level embeddings e_i = ⟨e^w_i; e^ch_i⟩. Character-level information e^ch_i is captured with the final states of a BiLSTM on the character sequences of w_i. The concatenated embeddings are subsequently encoded with another BiLSTM into annotation vectors h^d_i. To take better advantage of the extractive nature of answers in documents, we encode the answer by extracting the document encodings at the answer word positions. Specifically, we encode the hidden states of the document that correspond to the answer phrase with another condition aggregation BiLSTM. We use the final state h^a of this as an encoding of the answer.

The RNN decoder employs the pointer-softmax module Gulcehre et al. (2016). At each step of the question generation process, the decoder decides adaptively whether to (a) generate from a decoder vocabulary or (b) point to a word in the source sequence (and copy over). The pointer-softmax decoder thus has two components: a pointer attention mechanism and a generative decoder. In the pointing decoder, recurrence is implemented with two cascading LSTM cells c_1 and c_2:

    s^(t)_1 = c_1(y^(t−1), s^(t−1)_2),    (1)
    s^(t)_2 = c_2(v^(t), s^(t)_1),        (2)

where s^(t)_1 and s^(t)_2 are the recurrent states, y^(t−1) is the embedding of the decoder output from the previous time step, and v^(t) is the context vector (to be defined shortly in Equation (3)).

Table 1: Model evaluation on key phrase extraction.

                        Validation                        Test
  Models        F1_MS    Prec.    Rec.          F1_MS    Prec.    Rec.
  SQuAD
    H&S         0.292    0.252    0.403           –        –        –
    ENT         0.347    0.295    0.547         0.308    0.249    0.523
    NES         0.362    0.375    0.380         0.334    0.335    0.354
    PtrNet      0.404    0.448    0.387         0.352    0.387    0.337
  NewsQA
    ENT         0.183    0.125    0.479         0.187    0.127    0.491
    PtrNet      0.435    0.467    0.427         0.452    0.480    0.444

At each time step t, the pointing decoder computes a distribution α^(t) over the document word positions (i.e., a document attention, Bahdanau, Cho, and Bengio (2014)). Each element is defined as α^(t)_i = f(h^d_i, h^a, s^(t−1)_1), where f is a two-layer MLP with tanh and softmax activation, respectively. The context vector v^(t) used in Equation (2) is the sum of the document encoding weighted by the document attention:

    v^(t) = Σ_{i=1}^{n} α^(t)_i h^d_i.    (3)

The generative decoder, on the other hand, defines a distribution over a prescribed decoder vocabulary with a two-layer MLP g: o^(t) = g(y^(t−1), s^(t)_2, v^(t), h^a). The switch scalar s^(t) at each time step is computed by a three-layer MLP h: s^(t) = h(s^(t)_2, v^(t), α^(t), o^(t)). The first two layers of h use tanh activation and the final layer uses sigmoid. Highway connections are present between the first and the second layer.3 Finally, the resulting switch is used to interpolate the pointing and the generative probabilities for predicting the next word: P(ŵ_t) ∼ s^(t) α^(t) + (1 − s^(t)) o^(t).
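To make the interpolation concrete, the toy sketch below (ours, not the authors' code) forms the joint distribution over "copy a document word" and "generate a vocabulary word" and reads off the emitted token:

import torch

doc_words = ["the", "yuan", "dynasty", "was", "founded", "."]
vocab = ["what", "who", "when", "is", "?", "<eos>"]

alpha = torch.softmax(torch.randn(len(doc_words)), dim=0)   # document attention
o = torch.softmax(torch.randn(len(vocab)), dim=0)           # generative distribution
s = torch.sigmoid(torch.randn(()))                          # switch scalar in (0, 1)

joint = torch.cat([s * alpha, (1 - s) * o])                 # one distribution over both choices
k = int(joint.argmax())
word = doc_words[k] if k < len(doc_words) else vocab[k - len(doc_words)]
print(word)    # the emitted token, either copied from the document or generated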
4 Experiments

4.1 Dataset

We conduct our experiments on the SQuAD Rajpurkar et al. (2016) and NewsQA Trischler et al. (2016) corpora. Both of these are machine comprehension datasets consisting of over 100k crowdsourced question-answer pairs. SQuAD contains 536 paragraphs from Wikipedia while NewsQA was created on 12,744 news articles. Simple preprocessing is performed, including lower-casing and word tokenization using NLTK. The test split of SQuAD is hidden from the public, so we take 5,158 question-answer pairs (self-contained in 23 Wikipedia articles) from the training set as a validation set, and use the official development data to report test results. We use NewsQA only to evaluate our key phrase detection models in a transfer setting.

Footnote 3: We also attach the entropy of the softmax distributions to the input of the final layer, postulating that this guides the switching mechanism by indicating the confidence of pointing vs generating. We observed an improvement in question quality with this technique.

Figure 1: A comparison of key phrase extraction methods. Red phrases are extracted by the pointer network, violet by H&S, green by the baseline, brown correspond to SQuAD gold answers, and cyan indicates an overlap between the pointer model and SQuAD gold questions. The last paragraph is an exception where lyndon b. johnson and april 20 are extracted by H&S as well as the baseline model.

4.2 Implementation Details

All models were trained using stochastic gradient descent with a minibatch size of 32 using the ADAM optimization algorithm.

4.2.1 Key Phrase Detection

Key phrase detection models used pretrained word embeddings of 300 dimensions, generated using a word2vec extension Ling et al. (2015) trained on the English Gigaword 5 corpus. We used bidirectional LSTMs of 256 dimensions (128 forward and backward) to encode the document and an LSTM of 256 dimensions as our decoder in the pointer network model. A dropout of 0.5 was used at the outputs of every layer in the network.

4.2.2 Question Generation

In question generation, the decoder vocabulary uses the top 2000 words sorted by their frequency in the gold questions in the training data. The word embedding matrix is initialized with the 300-dimensional GloVe vectors Pennington, Socher, and Manning (2014). The dimensionality of the character representations is 32. The number of hidden units is 384 for both of the encoder/decoder RNN cells. Dropout is applied at a rate of 0.3 to all embedding layers as well as between the hidden states in the encoder/decoder RNNs across time steps.

4.3 Quantitative Evaluation of Key Phrase Extraction

Since each key phrase is itself a multi-word unit, we believe that a naïve word-level F1 that considers an entire key phrase as a single unit is not well suited to evaluating these models. We thus propose an extension of the SQuAD F1 evaluation metric (for a single answer span) to multiple spans within a document, called the multi-span F1 score. The metric is calculated as follows. Given a predicted phrase ê_i and a gold phrase e_j, we first construct a pairwise, token-level F1 score matrix of elements f_{i,j} between the two phrases ê_i and e_j. Max-pooling along the gold-label axis essentially assesses the precision of each prediction, with partial matches accounted for by the pairwise F1 (identical to the evaluation of a single answer in SQuAD) in the cells: p_i = max_j(f_{i,j}). Analogously, recall for label e_j can be defined by max-pooling along the prediction axis: r_j = max_i(f_{i,j}). The multi-span F1 score is defined from the mean precision p̄ = avg(p_i) and recall r̄ = avg(r_j):

    F1_MS = 2·p̄·r̄ / (p̄ + r̄).

Existing evaluations (e.g., that of Meng et al.) can be seen as the above computation performed on the matrix of exact match scores between predicted and gold key phrases. By using token-level F1 scores between phrase pairs, we allow fuzzy matches.
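A direct implementation of this metric is short; the sketch below follows the description above (whitespace tokenization and the function names are our choices):

from collections import Counter

def token_f1(pred, gold):
    """Standard SQuAD-style token-overlap F1 between two phrases."""
    p, g = pred.split(), gold.split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)

def multi_span_f1(predictions, golds):
    # f[i][j]: token-level F1 between predicted phrase i and gold phrase j
    f = [[token_f1(p, g) for g in golds] for p in predictions]
    precisions = [max(row) for row in f]                      # max over golds, per prediction
    recalls = [max(f[i][j] for i in range(len(predictions)))  # max over predictions, per gold
               for j in range(len(golds))]
    p_bar = sum(precisions) / len(precisions)
    r_bar = sum(recalls) / len(recalls)
    return 2 * p_bar * r_bar / (p_bar + r_bar) if p_bar + r_bar else 0.0

print(multi_span_f1(["the british isles", "1967"], ["british isles", "september 1967"]))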
4.4 Human Evaluation of QA pairs

While key phrase extraction has a fairly well-defined quantitative evaluation metric, evaluating generated text as in question generation is a harder problem. Instead of using automatic evaluation metrics such as BLEU, ROUGE, METEOR or CIDEr, we performed a human evaluation of our generated questions in conjunction with the answer key phrases. We used two different evaluation approaches: an ambitious one that compares our generated question-answer pairs to human-generated ones from SQuAD, and another that compares our model with Heilman and Smith (2010a) (henceforth referred to as H&S).

Comparison to human generated questions - We presented annotators with documents from the SQuAD official development set and two sets of question-answer pairs, one from our model (machine generated) and the other from SQuAD (human generated). Annotators are tasked with identifying which of the question-answer pairs is machine generated. The order in which the question-answer pairs appear in each example is randomized. The annotators are free to use any criterion of their choice to make a distinction, such as poor grammar, the answer phrase not correctly answering the generated question, uninteresting answer phrases, etc.

Implicit comparison to H&S - To compare our system to existing methods (H&S)4, we use human-generated SQuAD question-answer pairs to set up an implicit comparison. Human annotators are presented with a document and two question-answer pairs – one that comes from the SQuAD official development set and another from either our system or H&S (at random). Annotators are not made aware of the fact that there are two different models generating QA pairs. The annotators are once again tasked with identifying which QA pair is “human” generated. We evaluate the accuracy with which annotators can distinguish human and machine when using both of these models.

Comparison to H&S - In a more direct evaluation strategy, we present annotators with documents from the SQuAD official development set but, instead of a human-generated question-answer pair, we use one generated by the H&S model and one from ours. We then ask annotators which one they prefer.

4.5 Results and Discussion

Our evaluation of the key phrase extraction systems is presented in Table 1. We compare answer phrases extracted by H&S, our baseline entity tagger, the neural entity selection module and the pointer network. As expected, the entity tagging baseline achieved the best recall, likely by overgenerating candidate answers. The NES model, on the other hand, exhibits a much larger advantage in precision and consequently outperforms the entity tagging baseline by notable margins in F1. This trend persists in the comparison between the NES model and the pointer-network model. The H&S model exhibits high recall and lacks precision, similar to the baseline entity tagger. This is not very surprising, since the model has not been exposed to the SQuAD answer phrase distribution. Qualitatively, we observe that the entity-based models have a strong bias towards numeric types, which often fail to capture interesting information in an article. In addition, we also notice that the entity-based systems tend to select the central topical entity as the answer, which can contradict the distribution of interesting answers selected by humans.
For example, given a Wikipedia article on Kenya and the fact that agriculture is the second largest contributor to kenya 's gross domestic product ( gdp ), the entity-based systems propose kenya as a key phrase and ask what country is nigeria 's second largest contributor to ?5 Given the same information, the pointer model picked agriculture as the answer and asked what is the second largest contributor to kenya 's gross domestic product ?

Footnote 4: http://www.cs.cmu.edu/~ark/mheilman/questions/

Qualitative results with the question generation and key phrase extraction modules are presented in Table 2 and contrast H&S, our system, and human-generated QA pairs from SQuAD.

H&S - Key phrases selected by this model appear to be different from the PtrNet and human-generated ones; for example, they may start with prepositions such as “of”, “by” and “to”, or be very large noun-phrases such as that student motivation and attitudes towards school are closely linked to student-teacher relationships. In addition, their key phrases as seen in Figure 1 (document 1) do not seem “interesting” and appear to contain somewhat arbitrary phrases such as “this theory”, “some studies”, “a person”, etc. Their question generation module appears to produce a few ungrammatical sentences, e.g., the first time – what was the yuan dynasty that non-native chinese people ruled all of china ?

Our system - Since our key phrase extraction module was trained on SQuAD, the selected key phrases more closely resemble gold SQuAD answers. However, some of these answers don't answer the questions generated about them, e.g., eicosanoids and cytokines — what are bacteria produced by ? (first document in Table 2). Our model is sometimes able to effectively parse coreferent entities; e.g., to generate the mongol empire - the yuan dynasty is considered to be the continuation of what ?, the model had to resolve the pronoun it to yuan dynasty in “it is generally considered to be the continuation of the mongol empire” (third document in Table 2).

Comparison to human generated questions - We presented 14 annotators with a total of 740 documents, each containing 2 question-answer pairs. We observed that annotators were able to identify the machine-generated question-answer pairs 77.8% of the time, with a standard deviation of 8.34%.

Implicit comparison to H&S - We presented 2 annotators with the same 100 documents, 45 of which come from our model and 55 from H&S; all examples are paired with SQuAD gold questions and answers. The first annotator labeled 30 (66.7%) correctly between ours and gold, and labeled 45 (81.8%) correctly between H&S and gold; the second annotator labeled 9 (20%) correctly between ours and gold, and labeled 13 (23.6%) correctly between H&S and gold. Neither of the annotators has substantial prior knowledge of the SQuAD dataset. The experiment shows that both annotators have a harder time distinguishing our generated question-answer pairs from gold than distinguishing H&S pairs from gold.

Comparison to H&S - We presented 2 annotators with the same 200 examples, each containing a document with question-answer pairs generated by both our model and H&S's model. The first annotator chose 107 (53.5%) of the question-answer pairs generated by our model as the preferred choice, while the second annotator chose 90 (45%) from our model. This experiment shows that, without being given the ground-truth question-answer pairs, humans consider both models' outputs to be equally good.
5 Conclusion

We proposed a two-stage framework to tackle the problem of question generation from documents. First, we use a question answering corpus to train a neural model to estimate the distribution of key phrases that are interesting to question-asking humans. We proposed two neural models, one that ranks entities proposed by an entity tagging system, and another that points to key-phrase start and end boundaries with a pointer network. When compared to an entity tagging baseline, the proposed models exhibit significantly better results. We adopt a sequence-to-sequence model to generate questions conditioned on the key phrases selected in the framework's first stage. Our question generator is inspired by an attention-based translation model, and uses the pointer-softmax mechanism to dynamically switch between copying a word from the document and generating a word from a vocabulary. Qualitative examples show that the generated questions exhibit both syntactic fluency and semantic relevance to the conditioning documents and answers, and appear useful for assessing reading comprehension in educational settings. In future work we will investigate fine-tuning the complete framework end to end. Another interesting direction is to explore abstractive key phrase detection.

Footnote 5: Since the answer word kenya cannot appear in the generated question, the decoder produced a similar word nigeria instead.

References

Agarwal, M., and Mannem, P. 2011. Automatic gap-fill question generation from text books. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, 56–64. Association for Computational Linguistics.
Ali, H.; Chali, Y.; and Hasan, S. A. 2010. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation, 58–67.
Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.
Becker, L.; Basu, S.; and Vanderwende, L. 2012. Mind the gap: learning to choose gaps for question generation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 742–751. Association for Computational Linguistics.
Brown, J. C.; Frishkoff, G. A.; and Eskenazi, M. 2005. Automatic question generation for vocabulary assessment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, 819–826. Association for Computational Linguistics.
Chali, Y., and Golestanirad, S. 2016. Ranking automatically generated questions using common human queries. In The 9th International Natural Language Generation conference, 217.
Du, X.; Shao, J.; and Cardie, C. 2017. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106.
Gulcehre, C.; Ahn, S.; Nallapati, R.; Zhou, B.; and Bengio, Y. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148.
Heilman, M., and Smith, N. A. 2010a. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 609–617. Association for Computational Linguistics.
Heilman, M., and Smith, N. A. 2010b. Rating computer-generated questions with mechanical turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, 35–40.
Association for Computational Linguistics. Kunichika, H.; Katayama, T.; Hirashima, T.; and Takeuchi, A. 2004. Automated question generation methods for intelligent english learning systems and its evaluation. In Proc. of ICCE. Labutov, I.; Basu, S.; and Vanderwende, L. 2015. Deep questions without deep understanding. In ACL (1), 889–898. Le, T. T. N.; Nguyen, M. L.; and Shimazu, A. 2016. Unsupervised keyphrase extraction: Introducing new kinds of words to keyphrases. 29th Australasian Joint Conference, Hobart, TAS, Australia, December 5-8, 2016. Lindberg, D.; Popowich, F.; Nesbit, J.; and Winne, P. 2013. Generating natural language questions to support learning on-line. ENLG 2013 105. Ling, W.; Dyer, C.; Black, A. W.; and Trancoso, I. 2015. Two/too simple adaptations of word2vec for syntax problems. In HLT-NAACL, 1299–1304. Liu, Z.; Chen, X.; Zheng, Y.; and Sun, M. 2011. Automatic keyphrase extraction by bridging vocabulary gap. the Fifteenth Conference on Computational Natural Language Learning. Liu, M.; Calvo, R. A.; and Rus, V. 2012. G-asks: An intelligent automatic question generation system for academic writing support. D&D 3(2):101–124. 9 Lopez, P., and Romary, L. 2010. Humb: Automatic key term extraction from scientific articles in grobidp. the 5th International Workshop on Semantic Evaluation. Luong, M.-T.; Pham, H.; and Manning, C. D. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. Medelyan, O.; Frank, E.; and Witten, I. H. 2009. Human-competitive tagging using automatic keyphrase extraction. Empirical Methods in Natural Language. Meng, R.; Zhao, S.; Han, s.; He, D.; Brusilovsky, P.; and Chi, Y. 2017. Deep keyphrase generation. In the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Mihalcea, R., and Tarau, P. 2004. Textrank: Bringing order into text. Empirical Methods in Natural Language. Miller, G. A. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Mitkov, R., and Ha, L. A. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing-Volume 2, 17–22. Association for Computational Linguistics. Mostafazadeh, N.; Misra, I.; Devlin, J.; Mitchell, M.; He, X.; and Vanderwende, L. 2016. Generating natural questions about an image. arXiv preprint arXiv:1603.06059. Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), 1532–1543. Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 3104–3112. Trischler, A.; Wang, T.; Yuan, X.; Harris, J.; Sordoni, A.; Bachman, P.; and Suleman, K. 2016. Newsqa: A machine comprehension dataset. 2nd Workshop on Representation Learning for NLP. Vinyals, O.; Fortunato, M.; and Jaitly, N. 2015. Pointer networks. In Advances in Neural Information Processing Systems, 2692–2700. Wan, X., and Xiao, J. 2008. Single document keyphrase extraction using neighborhood knowledge. AAAI. Wang, T.; Yuan, X.; and Trischler, A. 2017. A joint model for question answering and question generation. 1st Workshop on Learning to Generate Natural Language. 
Wang, M.; Zhao, B.; and Huang, Y. 2016. Ptr: Phrase-based topical ranking for automatic keyphrase extraction in scientific publications. 23rd International Conference, ICONIP 2016. Yang, Z.; Hu, J.; Salakhutdinov, R.; and Cohen, W. W. 2017. Semi-supervised qa with generative domain-adaptive nets. In the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Yuan, X.; Wang, T.; Gulcehre, C.; Sordoni, A.; Bachman, P.; Subramanian, S.; Zhang, S.; and Trischler, A. 2017. Machine comprehension by text-to-text neural question generation. 2nd Workshop on Representation Learning for NLP. 10 Table 2: Qualitative examples of detected key phrases and generated questions. Doc. inflammation is one of the first responses of the immune system to infection . the symptoms of inflammation are redness , swelling , heat , and pain , which are caused by increased blood flow into tissue . inflammation is produced by eicosanoids and cytokines , which are released by injured or infected cells . eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation , and leukotrienes that attract certain white blood cells ( leukocytes ) . . . H&S Q-A by eicosanoids and cytokines — who is inflammation produced by ? Q-A leukotrienes — what can attract certain white blood cells ? Q-A inflamation — what is one of the first responses the immune system has to infection ? of the first responses of the immune system to infection — what is inflammation one of ? PtrNet eicosanoids and cytokines — what are bacteria produced by ? Gold SQuAD Doc. eicosanoids and cytokines — what compounds are released by injured or infected cells , triggering inflammation ? research shows that student motivation and attitudes towards school are closely linked to student-teacher relationships . enthusiastic teachers are particularly good at creating beneficial relations with their students . their ability to create effective learning environments that foster student achievement depends on the kind of relationship they build with their students . useful teacher-to-student interactions are crucial in linking academic success with personal achievement . here , personal success is a student ’s internal goal of improving himself , whereas academic success includes the goals he receives from his superior . a teacher must guide his student in aligning his personal goals with his academic goals . students who receive this positive influence show stronger self-confidenche and greater personal and academic success than those without these teacher interactions . H&S Q-A research — what shows that student motivation and attitudes towards school are closely linked to studentteacher relationships ? to student-teacher relationships — what does research show that student motivation and attitudes towards school are closely linked to ? useful teacher-to-student interactions — what are crucial in linking academic success with personal achievement ? that student motivation and attitudes towards school are closely linked to student-teacher relationships — what does research show to ? PtrNet Q-A student-teacher relationships — what are the student motivation and attitudes towards school closely linked to ? teacher-to-student interactions — what is crucial in linking academic success with personal achievement ? enthusiastic teachers — who are particularly good at creating beneficial relations with their students ? 
a teacher — who must guide his student in aligning his personal goals ? Gold SQuAD Q-A Doc. student-teacher relationships — ’what is student motivation about school linked to ? aligning his personal goals with his academic goals . — what should a teacher guide a student in ? beneficial — what type of relationships do enthusiastic teachers cause ? student motivation and attitudes towards school — what is strongly linked to good student-teacher relationships ? the yuan dynasty was the first time that non-native chinese people ruled all of china . in the historiography of mongolia , it is generally considered to be the continuation of the mongol empire . mongols are widely known to worship the eternal heaven . . . H&S Q-A the first time — what was the yuan dynasty that nonnative chinese people ruled all of china ? Q-A the mongol empire — the yuan dynasty is considered to be the continuation of what ? Q-A non-native chinese people — the yuan was the first time all of china was ruled by whom ? the yuan dynasty — what was the first time that nonnative chinese people ruled all of china ? PtrNet worship the eternal heaven — what are mongols widely known to do in historiography of mongolia ? Gold SQuAD Doc. the eternal heaven — what did mongols worship ? on july 31 , 1995 , the walt disney company announced an agreement to merge with capital cities/abc for $ 19 billion . . . . . in 1998 , abc premiered the aaron sorkin-created sitcom sports night , centering on the travails of the staff of a sportscenter-style sports news program ; despite earning critical praise and multiple emmy awards , the series was cancelled in 2000 after two seasons . H&S Q-A an agreement to merge with capital cities/abc for $19 billion — what did the walt disney company announce on july 31 , 1995 ? the walt disney company — what announced an agreement to merge with capital cities/abc for $19 billion on july 31 , 1995 ? PtrNet Q-A 2000 — in what year was the aaron sorkin-created sitcom sports night cancelled ? Q-A july 31 , 1995 — when was the disney and abc merger first announced ? walt disney company — who announced an agreement to merge with capital cities/abc for $ 19 billion ? Gold SQuAD 11 sports night — what aaron sorkin created show did abc debut in 1998 ?
National Research Council Canada Conseil national de recherches Canada Institute for Information Technology Institut de technologie de l’information ERB-1099 NRC-44953 JohnnyVon: Self-Replicating Automata in Continuous Two-Dimensional Space Arnold Smith, National Research Council Canada Peter Turney, National Research Council Canada Robert Ewaschuk, University of Waterloo August 29, 2002 Note: This report originally contained colour figures. If you have a black-and-white copy of the report, you are missing important information. © Copyright 2002 by National Research Council of Canada Permission is granted to quote short excerpts and to reproduce figures and tables from this report, provided that the source of such materials is fully acknowledged. NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Table of Contents Abstract........................................................................................................................... 3 1 Introduction ............................................................................................................. 3 2 Related Work .......................................................................................................... 4 3 4 2.1 Self-Replicating Cellular Automata .................................................................. 4 2.2 Mobile Automata.............................................................................................. 5 2.3 Physical Models of Self-Replication ................................................................. 5 JohnnyVon .............................................................................................................. 5 3.1 Informal Description......................................................................................... 6 3.2 Definitions........................................................................................................ 6 3.3 Fields............................................................................................................... 7 3.4 Physics ............................................................................................................ 9 3.4.1 Brownian Motion .................................................................................... 10 3.4.2 Viscosity ................................................................................................ 10 3.4.3 Attractive Force...................................................................................... 10 3.4.4 Repulsive Force ..................................................................................... 10 3.4.5 Straightening Force................................................................................ 11 3.4.6 Spring Dampening ................................................................................. 11 3.5 Splitting.......................................................................................................... 11 3.6 States ............................................................................................................ 12 3.7 Time .............................................................................................................. 13 3.8 Implementation .............................................................................................. 14 Experiments .......................................................................................................... 
14 4.1 Seeded Replication........................................................................................ 14 4.2 Spontaneous Replication............................................................................... 21 5 Interpretation of Experiments................................................................................. 22 6 Limitations and Future Work.................................................................................. 23 7 Applications........................................................................................................... 24 8 Conclusion ............................................................................................................ 24 References ................................................................................................................... 25 Smith, Turney, Ewaschuk 2 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Abstract JohnnyVon is an implementation of self-replicating automata in continuous twodimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines but their external relationships are governed by a simulated physics that includes brownian motion, viscosity, and springlike attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary “seed” pattern is put in a “soup” of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life. 1 Introduction John von Neumann is well known for his work on self-replicating cellular automata [10]. His ultimate goal, however, was to design self-replicating physical machines, and cellular automata were simply the first step towards the goal. Before his untimely death, he had sketched some of the other steps. One step was to move away from the discrete space of cellular automata to the continuous space of classical physics (pages 91-99 of [10]). Following the path sketched by von Neumann, we have developed JohnnyVon, an implementation of self-replicating automata in continuous two-dimensional space. JohnnyVon consists of a virtual “soup” of two types of particles that drift about in a simulated liquid. The particles are automata with discrete internal states that are regulated by finite state machines. Although the particles are internally discrete, the external relationships among the particles are continuous. Force fields mediate the interactions among the particles, enabling them to form and break bonds with one another. A pattern encoding an arbitrary string of bits can be assembled from these two types of particles, by bonding them into a chain. When a soup of separate individual particles is “seeded” with an assembled pattern, the pattern will replicate itself by assembling the separate particles into a new chain.1 The design of JohnnyVon was inspired by DNA and RNA. The individual automata are intended to be like codons and the assembled patterns are like strands of DNA or RNA. 
The simulated physics in JohnnyVon corresponds (very roughly) to the physics inside cells. The design was also influenced by our interest in nanotechnology. The automata in JohnnyVon can be seen as tiny nanobots, floating in a liquid vat, assembling structures in a manufacturing plant. Another source of guidance in our design was, of course, the research on self-replicating cellular automata, which has thrived and matured greatly since von Neumann's pioneering work. In particular, although the broad outline of JohnnyVon is derived from physics and biology, the detailed design of the system borrows much from automata theory. The basic entities in JohnnyVon are 1 The copies are mirror images; however, that is not a problem. This point is discussed in Section 5. Smith, Turney, Ewaschuk 3 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space essentially finite automata, although they move in a continuous space and are affected by smoothly-varying force fields. We discuss related work in Section 2. JohnnyVon is most closely related to research on self-replicating cellular automata. A significant difference is that the automata in JohnnyVon are mobile. There is some related work on mobile automata that move in a two-dimensional space, but this work does not investigate self-replication and does not involve a continuous space. In many respects, JohnnyVon is most similar to the work of Lionel and Roger Penrose, who created self-replicating machines using pieces of plywood [4]. JohnnyVon is described in detail in Section 3. The motion of the automata is determined by a simulated physics that includes brownian motion, viscosity, and spring-like attractive and repulsive forces. The behaviours and interactions of the automata are determined by a small number of internal states and state transition rules. We present two experiments in Section 4. First, we show that an arbitrary seed structure can assemble copies of itself, using individual automata as building blocks. Second, we show that a soup of separate individual automata can spontaneously form selfreplicating structures, although we have deliberately designed JohnnyVon so that this is a relatively rare event. Section 5 is our interpretation of the experiments. Section 6 is concerned with limitations of JohnnyVon and future work. Some potential applications of this line of research are given in Section 7 and we conclude in Section 8. 2 Related Work JohnnyVon is related to research in self-replicating automata, mobile automata, and physical models of self-replication. 2.1 Self-Replicating Cellular Automata Since von Neumann’s pioneering work [10], after a hiatus, research in self-replicating cellular automata is now flourishing [2], [5], [7], [8], [9]. Most of this work has involved two-dimensional cellular automata. A two-dimensional grid of cells forms a discrete space, which is infinite and unbounded in the abstract, but is necessarily finite in a computer implementation. The cells are (usually identical) finite state machines, in which a cell’s state depends on the states of its neighbours, according to a set of (deterministic) state transition rules. The system begins with each cell in an initial state (chosen by the user) and the states change synchronously in discrete time steps. With carefully selected transition rules, it is possible to create self-replicating patterns. The initial states of the cells are “seeded” with a certain pattern (usually the pattern is a small loop). 
Over time, the pattern spreads from the seed to nearby cells, eventually filling the available space. Although this work has influenced and guided us, JohnnyVon is different in several ways. The automata in JohnnyVon are (essentially) finite state machines, but they are mobile, rather than being locked in a grid. The automata move in a continuous twodimensional space, rather than a discrete space (but time in JohnnyVon is still discrete). The states of the automata are mainly discrete and finite, but each automaton has a position and a velocity, and the force fields around the tips of each particle have smooth Smith, Turney, Ewaschuk 4 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space gradients, all of which are represented with floating point numbers.2 The movements of the automata are governed by a simple simulated physics. We claim that these differences from self-replicating cellular automata make JohnnyVon more realistic, and thus more suitable for guiding work on self-replicating nanotechnology and research on the origins of life and related issues in theoretical biology. Aside from the increased realism, JohnnyVon is interesting simply because it is significantly different from cellular automata. 2.2 Mobile Automata There has been other research on mobile automata (e.g., bio-machines [3] and generalized mobile automata [11]), combining Turing machines with cellular automata. Turing machines move from cell to cell in an N-dimensional grid space, changing the states of the cells and possibly interacting with each other. However, unlike JohnnyVon, this work has used a discrete space. As far as we know, JohnnyVon is the first system that demonstrates self-replication with mobile automata in a continuous space. 2.3 Physical Models of Self-Replication Lionel Penrose, with the help of his son, Roger, made actual physical models of selfreplicating machines, using pieces of plywood [4]. His models are similar to JohnnyVon in several ways. Both involve simple units that can be assembled into self-replicating patterns. In both, the units move in a continuous space. Another shared element is the harnessing of random motion for replication. JohnnyVon uses simulated brownian motion and Penrose required the plywood units to be placed in a box that was then shaken back and forth. Penrose described both one-dimensional and two-dimensional models, in which motion is restricted to a line or to a plane. An obvious difference between the Penrose models and JohnnyVon is that the former are physical whereas the latter is computational. One advantage of a computational model is that experiments are perfectly repeatable, given the same initial conditions and the same random number seed. Another advantage is the ability to rapidly modify the computational model, to explore alternative models. A limitation of the Penrose models is that the basic units are all identical, so they cannot use binary encoding. They could encode information by length (the number of units in an assembled pattern), but the mechanism for ensuring that length is replicated faithfully is built in to the physical structure of the units. Thus altering the length involves building new units. On the other hand, JohnnyVon can encode an arbitrary binary string without making any changes to the basic units. 3 JohnnyVon We begin the description of JohnnyVon with an informal discussion of the model. 
We then define some terminology, followed with an outline of the attractive and repulsive fields that govern the interactions among the automata. The fourth subsection sketches the simulated physics and the fifth subsection explains how the automata decide when 2 We discuss in Section 3.5 the extent to which the automata in JohnnyVon are finite state machines. Smith, Turney, Ewaschuk 5 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space to split apart. The sixth subsection considers the states of the automata and the seventh subsection discusses the treatment of time in JohnnyVon. The final subsection is concerned with the implementation of JohnnyVon. 3.1 Informal Description The design of JohnnyVon was based on the idea that strings (chains) of particles, of arbitrary length, should be capable of forming spontaneously, and once formed, they should be relatively stable. Each particle is a T-shaped structure. Particles form strings by bonding together at the tips of the horizontal arms of the T structures. Strings replicate by attracting randomly floating particles to the tips of the vertical arms of the T structures and holding them in place until they join together to form a replica of the original string. Bonds between particles in a string can be broken apart by brownian motion (due to random collisions with virtual molecules of the containing fluid) and by jostling from other particles. Particles can also bond to form a string by chance, without a seed string, if two T structures meet under suitable conditions. Strings that are randomly broken or formed can be viewed as mutations. We intentionally designed JohnnyVon so that mutations are relatively rare, although they are possible. Faithful replication is intended to be much more common than mutation. The attractive fields around particles have limited ranges, which can shrink or expand. This is one of the mechanisms that we use to ensure faithful replication. The fields shrink when we want to discourage bonding that could cause mutations and the fields expand when we want to encourage bonding that should lead to faithful replication. A mechanism is needed to recognize when a string has attracted and assembled a full copy of itself. Without this, each seed string would attract a single copy, but the seed and its copy would remain bonded together forever. Therefore the automata send signals to their neighbours in the string, to determine whether a full copy has been assembled. When the right signal is received, a particle releases the bond on the vertical arm of the T structure and pushes its corresponding particle away. 3.2 Definitions The following definitions will facilitate our subsequent discussion. To better understand these definitions, it may be helpful to look ahead to Table 1 and the figures in Section 4. Codon: a T-shaped object that can encode one bit of information. There are two types of codons, type 0 codons and type 1 codons. Container: the space that contains the codons. Codons move about in a twodimensional continuous space, bounded by a grey box. The centers of the codons are confined to the interior of the grey box. Liquid: a virtual liquid that fills the container. The trajectory of a codon is determined by brownian motion (random drift due to the liquid) and by interaction with other codons and the walls of the container. The liquid has a viscosity that dampens the momentum of the codons. Soup: liquid with codons in it. Field: an attractive or repulsive area associated with a codon. 
The range of a field is indicated by a coloured circle. In addition to attracting or repelling, a field can also exert Smith, Turney, Ewaschuk 6 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space a straightening force, which twists the codons to align their arms linearly. The fields behave somewhat like springs. Arm: a black line in a codon that begins in the middle of the codon and ends in the center of a coloured field. Tip: the outer end of an arm, where the red, blue, green, and purple fields are centered. Middle: the inner ends of the arms, where the three arms join together. This is not the geometrical center of the codon, but it is treated as the center of mass in the physical simulation. Red (blue, purple, green) arm: an arm that ends in a red (blue, purple, green) field. Bond: two codons can bond together when the field of one codon intersects the field of another. Not all fields can bond. This is described in detail later. Red (blue, purple, green) neighbour: the codon that is bonded to the red (blue, purple, green) arm of a given codon. Single strand: a chain of codons that are red and blue neighbours of each other. Double strand: two single strands that are purple and green neighbours of each other. Small (large) field: a field may be in one of two possible states, small or large. These terms refer to the radius of the circle that delimits the range of the field. Free codon: a codon with no bonds. Time: the number of steps that have been executed since the initialization of JohnnyVon. The initial configuration is called step 0 (or time 0). 3.3 Fields The codons have attractive and repulsive fields, as shown in Table 1. These fields determine how codons interact with each other to form strands. There are five types of fields, which we have named according to the colours that we use to display the codons in JohnnyVon’s user interface (purple, green, blue, red, yellow). All five fields have two possible states, called large and small, according to the radius of the circle that delimits the range of the field. All fields in a free codon are small. Table 2 gives the state transition rules for the field states. Fields switch between small and large as bonds are formed and broken. The interactions among the fields are listed in Table 3. Fields can pull codons together, push them apart, or twist them to align their arms. Smith, Turney, Ewaschuk 7 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Table 1. The two types of codons and their field states. The fields of a free codon are always small. The fields become large only when codons bond together, as described in the next table. A codon’s fields may be in a mixture of states (one field may be small and another large). Note that the circles are not drawn to scale, since the small fields would be invisibly small at this scale. Type 0 codons (purple codons) Type 1 codons (green codons) All fields in their small states All fields in their large states Table 2. State transitions in fields. Fields can change from small to large or vice versa, but they never change colour. Current field state Next field state small red large red small blue large blue large red small red large blue small blue small green large green small purple large purple large green small green large purple small purple small yellow large yellow If a double strand is ready to split, then the yellow fields of all of the codons in the double strand become large (this is described in more detail later). 
large yellow small yellow If a yellow field has been large for 150 time units, then it returns to its small state. Smith, Turney, Ewaschuk Transition rules If a small blue field touches a small red field and the arms of their respective codons are aligned linearly to within ± /256 radians, then both fields switch to their large states and their codons are designated as being bonded together. As long as the two fields continue to intersect, at any angle, they remain in the bonded state, and any third field that intersects with the two fields will be ignored. If jostling causes a large red field to lose contact with its bonded large blue field, then both fields switch to their small states and their codons are no longer designated as being bonded. If a codon’s red or blue fields are bonded, then its green or purple field switches to its large state. If neither of a codon’s red or blue fields are bonded, then its green or purple field becomes small. 8 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Table 3. The behaviour of the fields. Fields have no effect on each other unless their circles intersect. If the behaviour of a pair of fields is not listed in this table, it means that pair of fields has no interaction (they ignore each other). The designations “Field 1” and “Field 2” in this table are arbitrary, since the relationships between the fields are symmetrical. Field 1 Field 2 Interaction between fields small red small blue If a small blue field touches a small red field and the arms of their respective codons are aligned linearly to within ± /256 radians, then both fields switch to their large states and their codons are designated as being bonded together. As long as the two fields continue to intersect, at any angle, they remain bonded, and any third field that intersects with the two fields will be ignored. large red large blue When a large red field is designated as bonded with a large blue field, an attractive force pulls the tip of the red arm towards the tip of the blue arm and a straightening force twists the codons to align their arms linearly. small purple small green small purple large green large purple small green When a purple field touches a green field and the arms of their respective codons are aligned linearly to within ± /3 radians, they are designated as being bonded. As long as the two fields continue to intersect, at any angle, they remain bonded, and any third field that intersects with the two fields will be ignored. An attractive force pulls the tip of the purple arm towards the tip of the green arm and a straightening force twists the codons to align their arms linearly. When a small purple field bonds with a small green field, their bond is typically quickly ripped apart by brownian motion. The bonds between two large fields or one large field and one small field are more robust; they can withstand interference from brownian motion. large purple large green A large purple field and a large green field cannot initiate a new bond; they can only maintain an existing bond. If they do not have an existing bond, carried over from before they became large, then they ignore each other. Otherwise, as long as the two fields continue to intersect, at any angle, they remain bonded, and any third field that intersects with the two fields will be ignored. An attractive force pulls the tip of the purple arm towards the tip of the green arm and a straightening force twists the codons to align their arms linearly. 
large yellow large yellow When two large yellow fields intersect, a repulsive force pushes them apart. The repulsive force stops acting when the fields no longer intersect or when the fields switch to their small states. However, when the repulsive force stops acting, the codons will continue to move apart, until their momentum has been dissipated by the viscosity of the liquid. For yellow fields, unlike the other fields, there is nothing that corresponds to designated bonded pairs. Thus, if there are three or more intersecting large yellow fields, they will all repel each other. 3.4 Physics JohnnyVon runs in a sequence of discrete time steps. Each codon has a position (x-axis location, y-axis location, and angle) and a velocity (x-axis velocity, y-axis velocity, and angular velocity) in two-dimensional space. Although time is measured in whole Smith, Turney, Ewaschuk 9 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space numbers, position and velocity are represented with floating point numbers. (We discuss time in more detail in Section 3.7.) Each codon has one unit of mass. The internal state changes of a codon are triggered by the presence and state of neighbouring codons. One codon “senses” another when it comes within range of one of its force fields. It could be said that the force fields are also sensing fields, and when one of these fields expands, its sensing ability expands equally. We think of the container as holding a thin layer of liquid, so although the space is twodimensional, codons are allowed to slide over one another (it could be called a 2.5D space). This simplifies computation, since we do not need to be concerned with detecting collisions between codons. It also facilitates replication, since free codons can move anywhere in the container, so it is not possible for them to get trapped behind a wall of strands. It is interesting to note that strands are emergent structures that depend only on the local interactions of individual codons. There is no data structure that represents strands; each codon is treated separately and interacts only with its immediate neighbours. 3.4.1 Brownian Motion Codons move in a virtual liquid. Brownian motion is simulated by applying a random change to each codon’s (linear and angular) velocity at each time step. This random velocity change may be thought of as the result of a collision with a molecule of the liquid, but we do not explicitly model the liquid’s molecules in JohnnyVon. 3.4.2 Viscosity We implement a simple model of viscosity in the virtual liquid. With each time step, a codon’s velocity is adjusted towards zero by multiplying the velocity by a fractional factor. One factor is applied for x-axis and y-axis velocity (linear viscosity) and another factor is applied for angular velocity (angular viscosity). 3.4.3 Attractive Force When two codons are bonded, an attractive force pulls their bonded arms together. This force acts like a spring joining the tips of the bonded arms. The strength of the spring force increases linearly with the distance between the tips, until the distance is greater than the sum of the radii of the fields, at which point the bond is broken. The spring force modifies both the linear and angular velocities of the bonded codons. (The angular velocity is modified because the force acts on the codon tip, rather than on the center of mass.) 3.4.4 Repulsive Force The repulsive force also acts like a spring, joining the centers of the yellow fields, pushing the codons apart. 
The strength of the spring force decreases linearly with the distance between the centers of the yellow fields, until the distance is greater than the sum of the radii of the fields, at which point the force ceases. The spring force modifies both the linear and angular velocities of the bonded codons. Smith, Turney, Ewaschuk 10 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space 3.4.5 Straightening Force When two codons are bonded, a straightening force twists them to align their bonded arms linearly. This force is purely rotational; it has no linear component. The two bonded codons rotate about their middles, so that their bonded arms lie along the line that joins their middles. The straightening force for a given codon is linearly proportional to the angle between the bonded arm of the given codon and the line joining the middle of the given codon to the middle of the other codon. 3.4.6 Spring Dampening The motion due to the attractive and straightening forces is dampened, in a way that is similar to the viscosity that is applied to brownian motion. The dampening prevents unlimited oscillation. No dampening is applied to the repulsive force, since oscillation is not a problem with repulsion. The linear velocities of a bonded pair of codons are dampened towards the average of their linear velocities, by a fractional factor (linear dampening). The angular velocities of a bonded pair of codons are dampened towards zero, by another fractional factor (angular dampening). 3.5 Splitting When a complete double strand forms, the yellow fields switch to their large states and split the double strand into two single strands. The decision to split is controlled by a purely local process. Each codon has an internal state that is determined by the states of its neighbours. When a codon enters a certain internal state, its yellow field switches to the large state. The splitting is determined by a combination of two state variables, the strand_location_state and the splitting_state. The strand_location_state has three possible values: 0= Initial state: I do not think I am at the end of a (possibly incomplete) double strand. 1= I think I might be located at the end of a (possibly incomplete) double strand. 2= My green or purple neighbour also thinks it might be at the end of a (possibly incomplete) double strand. An incomplete double strand occurs when a single strand has partially replicated. On one side, there is the original single strand, and, attached to it, there are one or more single codons or shorter single strands. The state transition rules for strand_location_state are designed so that a codon can only be in state 2 when it is at one of the two extreme ends of a (complete or incomplete) double strand. The splitting_state also has three possible values: x= Initial state: I am not ready to split. y= I am ready to split. z= I am now splitting. A codon’s yellow field switches to the large yellow state when splitting_state becomes z. The following tables give the rules for state transitions. Table 4 lists the rules for strand_location_state and Table 5 provides the rules for splitting_state. Smith, Turney, Ewaschuk 11 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Table 4. Transition rules for strand_location_state. When strand_location_state is 2, the given codon must actually be at one end of a (possibly incomplete) double strand. 
During the replication process, if a strand has not yet fully replicated, there will be gaps in the strand, and the codons situated at the edges of these gaps will be stuck in state 1 until the gaps are filled, at which time they will switch to state 0. Transition rules for strand_location_state Current state Next state 0 1 If (I have exactly one red or blue neighbour) and (I have a purple or green neighbour), then I switch from state 0 to state 1. 1 0 If (I do not have exactly one red or blue neighbour) or (I do not have a green or purple neighbour), then I switch from state 1 to state 0. 1 2 If (I have exactly one red or blue neighbour) and (I have a green or purple neighbour) and (my green or purple neighbour is in state 1 or 2), then I switch from state 1 to state 2. 2 0 If (I do not have exactly one red or blue neighbour) or (I do not have a green or purple neighbour) or (my green or purple neighbour is in state 0), then I switch from state 2 to state 0. Table 5. Transition rules for splitting_state. A strand begins with all codons in the x state. When the strand is complete, one end of the strand (the end with no red neighbour) enters the y state, and the y state then spreads down the strand to the other end (the end with no blue neighbour). If the double strand is incomplete, the codons next to the gap will have their strand_location_state set to 1, which will block the spread of the y state. When the y state spreads all the way to the other end, in either of the two single strands, the double strand must be complete. Therefore, when the y state reaches the other end (the end with no blue neighbour), the end codon enters the z state, and the z state spreads back down to the first end (the end with no red neighbour). Transition rules for splitting_state Current state Next state x y If [(my strand_location_state is 2) and (my green or purple neighbour’s strand_location_state is 2) and (I have no red neighbour)] or [(my strand_location_state is not 1) and (my green or purple neighbour’s strand_location_state is not 1) and (my red neighbour’s splitting_state is y)], then I switch from state x to state y. y z If [(my strand_location_state is 2) and (my green or purple neighbour’s strand_location_state is 2) and (I have no blue neighbour)] or [(my strand_location_state is not 1) and (my green or purple neighbour’s strand_location_state is not 1) and (my blue neighbour’s splitting_state is z)], then I switch from state y to state z. z x If [(I have no red neighbour) and (I have been in state z for 150 time units)] or [my red neighbour is in state x], then I switch from state z to state x. 3.6 States The state of a codon in JohnnyVon is represented by a vector, rather than a scalar. The state vector of a codon has 16 elements: • 3 floating point variables for position (x-axis location, y-axis location, angle) • 3 floating point variables for velocity (x-axis velocity, y-axis velocity, angular velocity) Smith, Turney, Ewaschuk 12 NRC/ERB-1099. 
JohnnyVon: Self-Replicating Automata in Continuous 2D Space • 5 binary variables for field size (blue, red, green, purple, and yellow field size) • 3 whole-number-valued variables (one for each arm of the codon) for “pointers” that identify the codon to which a given arm is bonded (if any) • 1 three-valued variable for strand_location_state • 1 three-valued variable for splitting_state The seven-dimensional finite-valued sub-vector, consisting of the five binary variables and the two three-valued variables, is the part of each codon that corresponds to the traditional notion of a finite state machine. This seven-dimensional sub-vector has a total of 25 × 32 = 288 possible states.3 Chou and Reggia demonstrated the emergence of self-replicating structures in twodimensional cellular automata, using 256 states per cell [1]. The state values were divided into “data fields”, which were treated separately. In other words, Chou and Reggia used state vectors, like JohnnyVon, rather than state scalars. We agree with Chou and Reggia that state vectors facilitate the design of state machines. The remaining nine variables in the state vector (position, velocity, bond pointers) represent information about external relationships among codons, rather than the internal states of the codons. For example, the absolute position of a codon is not important; the interactions of the codons are determined by their relative positions. These nine variables are analogous to the grid in cellular automata. As far as internal states alone are concerned, the codons are finite state machines. It is their external relationships that make the codons significantly different from cellular automata. 3.7 Time For the discrete elements in the state vector, there is a natural relation between changes in state and increments of time, when each unit of time is one step in the execution of JohnnyVon. For the continuous elements in the state vector (position and velocity), the time scale is somewhat arbitrary. The physical rules that are used to update the position and velocity are continuous functions of continuous time. In a computational simulation of a continuous process, it is necessary to sample the process at a succession of discrete intervals. In JohnnyVon, the parameter timestep_duration determines the temporal resolution of the simulation. The parameter may be seen as determining how finely continuous time is sliced into discrete intervals, or, equivalently, it may be seen as determining how much action takes place from one step of the simulation to the next. Changing the value of the parameter is equivalent to rescaling the magnitudes of the forces. A small value for timestep_duration yields a fine temporal resolution (i.e., a small amount of action between steps) and a large value yields a coarse temporal resolution. If the value is too small, the simulation will be computationally inefficient; many CPU cycles will be wasted on making the simulation unnecessarily precise. On the other hand, if the value is too large, the simulation may become unstable; the behaviour of the objects in the simulation may be a very poor approximation to the intended continuous physical dynamics. 3 Some combinations of states are not actually possible. See Tables 2 to 5. Smith, Turney, Ewaschuk 13 NRC/ERB-1099. 
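To make the role of timestep_duration concrete, here is a minimal sketch (in Python, not the actual Java implementation) of how one discrete update of a free codon's motion could combine the brownian kick of Section 3.4.1, the viscosity damping of Section 3.4.2, and the time slicing of Section 3.7. The parameter names, numeric values, and the forward-Euler integration are illustrative assumptions, not details taken from JohnnyVon's source code.

```python
import random
from dataclasses import dataclass

@dataclass
class Codon:
    # Position (x, y, angle) and velocity (vx, vy, omega), as in Section 3.4.
    x: float = 0.0
    y: float = 0.0
    angle: float = 0.0
    vx: float = 0.0
    vy: float = 0.0
    omega: float = 0.0

def step_free_codon(c, timestep_duration,
                    brownian_sigma=0.01,
                    linear_viscosity=0.95,
                    angular_viscosity=0.90):
    """One hypothetical update for a codon with no bonds: brownian kick,
    viscosity damping towards zero, then forward-Euler position integration.
    All numeric values are placeholders, not JohnnyVon's actual parameters."""
    # Brownian motion: random change to linear and angular velocity (3.4.1).
    c.vx += random.gauss(0.0, brownian_sigma)
    c.vy += random.gauss(0.0, brownian_sigma)
    c.omega += random.gauss(0.0, brownian_sigma)
    # Viscosity: adjust velocities towards zero by fractional factors (3.4.2).
    c.vx *= linear_viscosity
    c.vy *= linear_viscosity
    c.omega *= angular_viscosity
    # A finer timestep_duration means less action per simulation step (3.7).
    c.x += c.vx * timestep_duration
    c.y += c.vy * timestep_duration
    c.angle += c.omega * timestep_duration
```

Halving timestep_duration in this sketch halves the motion produced per step, which is why comparing runs requires normalizing by the product of the step count and timestep_duration, as discussed below.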
JohnnyVon: Self-Replicating Automata in Continuous 2D Space The actual value that we use for timestep_duration has no meaning outside of the context of the (arbitrary) values that we chose for the magnitudes of the various physical parameters (force field strengths, viscosity, brownian motion, etc.). We set timestep_duration by adjusting it until it seemed that we had found a good balance between computational efficiency and physical accuracy. When comparing different runs of JohnnyVon, with different values for timestep_duration, we found it useful to normalize time by multiplying the number of steps by timestep_duration. For example, if you halve the value of timestep_duration, then half as much action takes place from one time step to the next, so it takes twice as many time steps for a certain amount of action to occur. Therefore, as JohnnyVon runs, it reports both the number of time steps and the normalized time (the product of the step number and timestep_duration). However, in the following experiments, we only report the number of steps, since the normalized time has no meaning when taken out of context. 3.8 Implementation JohnnyVon is implemented in Java. The source code is available under the GNU General Public License (GPL) at http://extractor.iit.nrc.ca/johnnyvon/. We originally implemented JohnnyVon in C++. The current version is in Java because we found it difficult to make the C++ version portable across different operating systems. Informal testing suggests that the Java version runs at about 75% of the speed of the C++ version. We believe that the slight loss of speed in the Java version is easily offset by the gain in portability and maintainability. 4 Experiments In our first experiment, we seed a soup of free codons with a pattern and show that the pattern is replicated. In the second experiment, we show that a soup of free codons, given sufficient time, will spontaneously generate self-replicating patterns. 4.1 Seeded Replication Figures 1 to 7 show a typical run of JohnnyVon with a seed strand of eight codons and a soup of 80 free codons. Over many runs, with different random number seeds, we have observed that the seed strand reliably replicates. Smith, Turney, Ewaschuk 14 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 1. Step 250. This screenshot shows JohnnyVon near the start of a run, after 250 steps have passed. A soup of free codons (randomly located) has been seeded with a single strand of eight codons (placed near the center). The strand of eight codons encodes the binary string “00011001” (0 is purple, 1 is green). In the strand, the red fields are covering the corresponding blue fields of the red neighbours. Smith, Turney, Ewaschuk 15 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 2. Step 6,325. Six codons have bonded with the seed strand, but they have not yet formed any red-blue bonds. Smith, Turney, Ewaschuk 16 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 3. Step 18,400. Eight codons have bonded with the seed strand, but these eight codons have not yet formed a complete strand. One red-blue bond is missing. These bonds can only form when the red and blue arms meet linearly to within ± /256 radians. This happens very rarely when the codons are drifting freely, but it happens dependably when the codons are held in position by the purple-green bonds. Smith, Turney, Ewaschuk 17 NRC/ERB-1099. 
JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 4. Step 22,000. The eight codons formed their red-blue bonds, making a complete strand of eight codons. This caused the yellow fields in the double strand to switch to their large states, breaking the bonds between the two single strands and pushing them apart. In this screenshot, the yellow fields are still large. After a few more time units have passed, they will return to their small states. Note that the seed strand encodes “00011001”, but the daughter strand encodes “01100111”. This is discussed in Section 5. Smith, Turney, Ewaschuk 18 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 5. Step 25,950. We now have two single strands, and they have started to form bonds with the free codons. Smith, Turney, Ewaschuk 19 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 6. Step 30,850. The daughter strand has replicated itself, producing a granddaughter. The original seed strand and the granddaughter encode the same bit string. Smith, Turney, Ewaschuk 20 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space Figure 7. Step 127,950. There are only a few free codons left. Eventually they will bond with the strands, leaving a stable soup of partially completed double strands. The elapsed real-time from step 250 (Figure 1) to step 127,950 (Figure 7) was approximately 45 minutes. 4.2 Spontaneous Replication JohnnyVon was intentionally designed so that life (self-replicating patterns) can arise from non-life (free codons without a seed), but only rarely. It is difficult for red-blue bonds to form, due to the narrow angle at which the arms must meet (± /256 radians – see Table 2). These bonds are very unlikely to form unless the codons are held in position by green-purple bonds. However, given sufficient time, two free codons will eventually Smith, Turney, Ewaschuk 21 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space come into contact in such a way that a red-blue bond is formed and a self-replicating strand of length two is created. Figure 8 shows an example. Figure 8. Step 164,450. A strand of length two has spontaneously formed from a soup of 88 free codons. Very shortly after forming, it replicated. The elapsed real-time from step 0 to step 164,450 was about 15 minutes. This is less real-time per step than the previous experiment because there are fewer calculations when there are no bonds between the codons. 5 Interpretation of Experiments The first experiment shows that a pattern containing arbitrary information can replicate itself. Note that all codon interactions in JohnnyVon are local; no global control system is needed. (This is also true of the various implementations of self-replicating cellular automata.) Smith, Turney, Ewaschuk 22 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space It is apparent in Figure 7 that (approximately) half of the single strands are mirror images of the original seed strand (in Figure 1). More precisely, let X be an arbitrary sequence of 0s and 1s that we want to encode. Let n(X) be the string that results when every 0 in X is replaced with 1 and every 1 in X is replaced with 0. Let r(X) be the string that results when the order of the characters in X is reversed. When a strand with the pattern X replicates, the resulting new strand will have the pattern r(n(X)). 
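As a concrete check of this transformation (a short sketch of our own; the helper names are not part of JohnnyVon), the eight-codon seed from Figure 1 maps to the daughter pattern reported in Figure 4, and applying the transformation twice recovers the seed, matching the granddaughter in Figure 6:

```python
def n(x):
    """Replace every 0 in x with 1 and every 1 with 0."""
    return x.translate(str.maketrans("01", "10"))

def r(x):
    """Reverse the order of the characters in x."""
    return x[::-1]

seed = "00011001"              # seed strand in Figure 1
daughter = r(n(seed))
print(daughter)                # 01100111, the daughter strand in Figure 4
print(r(n(daughter)) == seed)  # True: the granddaughter matches the seed
```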
Therefore, if we seed a soup of free codons with a pattern X, then the final result will consist of about 50% strands with the pattern X and 50% with the pattern r(n(X)). Penrose anticipated this problem [4]. He suggested it could be avoided by making the pattern symmetrical. Let c(X, Y) be the string that results when the string Y is concatenated to the end of string X. Let g(X) be c(X, r(n(X))). Note that g(X) is equal to its negative mirror image, r(n(g(X))). That is, if g(X) replicates, the resulting string is exactly g(X) itself. The function g(X) enables us to encode any arbitrary string X in such a way that replication will not alter the pattern. 100% of the final strands will be copies of g(X). The second experiment shows that self-replicating patterns can spontaneously arise. The strands in this case are of length two, but it is possible in principle for mutations to extend the length of the strands (although we have not observed this). Strands of length two have an evolutionary advantage over longer strands, since they can replicate faster. On rare occasions, when running JohnnyVon with a seed of length eight (as in Section 4.1), a strand of length two has spontaneously appeared. The length-two strand quickly out-replicates the length-eight strand and soon predominates. We have intentionally designed JohnnyVon so that its most likely behaviour is to faithfully replicate a given seed strand. However, we have allowed a small possibility of red-blue bonds forming without a seed pattern, which allows both spontaneous generation of life and mutation of existing strands. (The probability of mutation can be increased or decreased by adjusting the red-blue bonding angle above or below its current value of ± /256 radians.) Since there is selection (for rapid replication), JohnnyVon fully supports evolution: there is inheritance, mutation, and selection. Cellular automata can also support self-replication [2], [5], [8], [10], evolution [6], and spontaneous generation of life from non-life [1]. The novelty in JohnnyVon is that these three features appear in a computer simulation that includes continuous space and virtual physics. We believe that this is an important step towards building physical machines with these features. 6 Limitations and Future Work One area we intend to look at is the degree to which the internal codon states can be simplified while still exhibiting the basic features of stability and self-replication. We make no claim that we have found the simplest codon structure that will exhibit the intended behaviours. JohnnyVon contains only genotypes (genetic code) with no phenotypes (bodies). The only evolutionary selection that JohnnyVon currently supports is selection for shorter strands, since they can replicate faster than longer strands. In order to support more interesting selection, we would like to introduce phenotypes. In natural life, DNA can be read in two different ways. During reproduction, DNA is copied verbatim, but during growth, DNA is read as instructions for building proteins. We would like to introduce this Smith, Turney, Ewaschuk 23 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space same distinction into a future version of JohnnyVon. One approach would be to add new “protein” particles to complement the existing codon particles. Free protein particles would bond to a strand of codons, which would act as a template for assembling the proteins. 
Once a string of proteins has been assembled, it would separate from the codon strand and then it would fold into a shape, analogous to the way that real proteins fold. To achieve interesting evolution, the environment could be structured so that certain protein shapes have an evolutionary advantage, which somehow results in increased replication for the corresponding codon strands. Another limitation of JohnnyVon is the simplistic virtual physics. Many of our simplifications were designed to make the computation tractable. For example, electrostatic attraction and repulsion in the real world have an infinite range, but all of the fields in JohnnyVon have quite limited ranges (relative to the size of the container). Our codons can only interact when their fields are in contact with one another, so it is not necessary to calculate the forces between every pair of codons. This significantly reduces the computation, especially when there are many free codons. The trajectory of a free codon is determined solely by brownian motion and viscosity. However, it is likely possible to significantly increase the realism of JohnnyVon without sacrificing speed. This is another area for future work. It may be that the direction taken will depend on the application. The changes that would make JohnnyVon more realistic for a biologist, for example, may be different from the changes that would be appropriate for a nanotechnologist. Finally, it may be worthwhile to develop a 3D version of JohnnyVon. The current 2.5D space might be insufficiently realistic for some applications. 7 Applications JohnnyVon was designed with nanotechnology in mind. We hope that it may some day be possible to implement the codons in JohnnyVon (or some distant descendant of JohnnyVon) as nanomachines. We imagine that the two types of codons could be mass produced by some kind of macroscopic manufacturing process, and then sprinkled in to a vat of liquid. A seed strand could be dropped in the vat, and the nanomachines would quickly replicate the seed. This imaginary scenario might never become reality, but the success of the experiments in Section 4 lends some plausibility to this project. JohnnyVon may also contribute to theoretical biology, by increasing our understanding of natural life. As Penrose mentioned, models of this kind may help us to understand the origins of life on Earth [4]. JohnnyVon was designed to allow life to arise from non-life (as we saw in Section 4.2). Of course, we have no idea whether this model is anything like the actual origin of life on Earth, but it seems possible. Finally, we should not overlook the entertainment value of JohnnyVon. We believe it would make a great screen saver. 8 Conclusion JohnnyVon includes the following features: • automata that move in a continuous 2.5D space • self-replication of seed patterns • spontaneous generation of life (self-replication) from non-life (free codons) Smith, Turney, Ewaschuk 24 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space • evolution (inheritance, mutation, and selection) • virtual physics (brownian motion, viscosity, attraction, repulsion, dampening) • the ability to encode arbitrary bit strings in self-replicating patterns • local interactions (no global control structures) JohnnyVon is the first computational simulation to combine all of these features. Von Neumann sketched a path that begins with self-replicating cellular automata and ends with self-replicating physical machines. 
We agree with von Neumann that, at some point along this path, it is necessary to move away from discrete space models, towards continuous space models. JohnnyVon is such a step. References [1] Chou, H.-H., and Reggia, J.A. (1997). Emergence of self-replicating structures in a cellular automata space. Physica D, 110, 252-276. [2] Langton, C.G. (1984). Self-reproduction in cellular automata. Physica D, 10, 134144. [3] Lerena, P., and Courant, M. (1996). Bio-machines. In Proceedings of Artificial Life V (Poster), Nara, Japan. [4] Penrose, L.S. (1959). Self-reproducing machines. Scientific American, 200 (6), 105114. [5] Reggia, J.A., Lohn, J.D., and Chou, H.-H. (1998). Self-replicating structures: Evolution, emergence and computation. Artificial Life, 4 (3), 283-302. [6] Sayama, H. (1998). Constructing Evolutionary Systems on a Simple Deterministic Cellular Automata Space. Ph.D. Dissertation, Department of Information Science, Graduate School of Science, University of Tokyo. [7] Sayama, H. (1998). Introduction of structural dissolution into Langton's selfreproducing loop. In C. Adami, R.K. Belew, H. Kitano, and C.E. Taylor, eds., Artificial Life VI: Proceedings of the Sixth International Conference on Artificial Life, 114-122. Los Angeles, California: MIT Press. [8] Sipper, M. (1998). Fifty years of research on self-replication: An overview. Artificial Life, 4 (3), 237-257. [9] Tempesti, G., Mange, D., and Stauffer, A. (1998). Self-replicating and self-repairing multicellular automata. Artificial Life, 4 (3), 259-282. Smith, Turney, Ewaschuk 25 NRC/ERB-1099. JohnnyVon: Self-Replicating Automata in Continuous 2D Space [10] von Neumann, J. (1966). Theory of Self-Reproducing Automata. Edited and completed by A.W. Burks. Urbana, IL: University of Illinois Press. [11] Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media. Smith, Turney, Ewaschuk 26
Semidefinite Programming and Nash Equilibria in Bimatrix Games arXiv:1706.08550v2 [math.OC] 27 Feb 2018 Amir Ali Ahmadi and Jeffrey Zhang ∗ Abstract We explore the power of semidefinite programming (SDP) for finding additive -approximate Nash equilibria in bimatrix games. We introduce an SDP relaxation for a quadratic programming formulation of the Nash equilibrium (NE) problem and provide a number of valid inequalities to improve the quality of the relaxation. If a rank-1 solution to this SDP is found, then an exact NE can be recovered. We show that for a strictly competitive game, our SDP is guaranteed to return a rank-1 solution. Furthermore, we prove that if a rank-2 solution to our SDP is found, 5 -NE can be recovered for any game, or a 31 -NE for a symmetric game. We propose then a 11 two algorithms based on iterative linearization of smooth nonconvex objective functions that are designed so that their global minima coincide with rank-1 solutions. Empirically, we demonstrate that these algorithms often recover solutions of rank at most two and  close to zero. We then show how our SDP approach can address two (NP-hard) problems of economic interest: finding the maximum welfare achievable under any NE, and testing whether there exists a NE where a particular set of strategies is not played. Finally, we show the connection between our SDP and the first level of the Lasserre/sum of squares hierarchy. 1 Introduction A bimatrix game is a game between two players (referred to in this paper as players A and B) defined by a pair of m × n payoff matrices A and B. Let 4m and 4n denote the m-dimensional and n-dimensional simplices m 4m = {x ∈ R | xi ≥ 0, ∀i, m X n xi = 1}, 4n = {y ∈ R | yi ≥ 0, ∀i, i=1 n X yi = 1}. i=1 These form the strategy spaces of player A and player B respectively. For a strategy pair (x, y) ∈ 4m × 4n , the payoff received by player A (resp. player B) is xT Ay (resp. xT By). In particular, if the players pick vertices i and j of their respective simplices (also called pure strategies), their payoffs will be Ai,j and Bi,j . One of the prevailing solution concepts for bimatrix games is the notion of Nash equilibrium. At such an equilibrium, the players are playing mutual best responses, i.e., a payoff maximizing strategy against the opposing player’s strategy. In our notation, a Nash equilibrium for the game (A, B) is a pair of strategies (x∗ , y ∗ ) ∈ 4m × 4n such that x∗T Ay ∗ ≥ xT Ay ∗ , ∀x ∈ 4m , and x∗T By ∗ ≥ x∗T By, ∀y ∈ 4n .1 ∗ The authors are partially supported by the DARPA Young Faculty Award, the Young Investigator Award of the AFOSR, the CAREER Award of the NSF, the Google Faculty Award, and the Sloan Fellowship. 1 In this paper we assume that all entries of A and B are between 0 and 1, and argue at the beginning of Section 2 why this is without loss of generality for the purpose of computing Nash equilibria. 1 Nash [29] proved that for any bimatrix game, such pairs of strategies exist (in fact his result more generally applies to games with a finite number of players and a finite number of pure strategies). While existence of these equilibria is guaranteed, finding them is believed to be a computationally intractable problem. More precisely, a result of Daskalakis, Goldberg, and Papadimitriou [12] implies that computing Nash equilibria is PPAD-complete (see [12] for a definition) even when the number of players is 3. This result was later improved by Chen and Deng [9] who showed the same hardness result for bimatrix games. 
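As a concrete illustration of the definition above, the following sketch (assuming only numpy; it is not taken from any of the cited works) checks whether a given pair of mixed strategies is a Nash equilibrium. Since each player's payoff is linear in that player's own strategy, it suffices to compare against pure-strategy deviations; this observation is made precise in Section 2.1.

import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    # (x, y) is a Nash equilibrium iff neither player gains by deviating
    # to any pure strategy.
    payoff_A = x @ A @ y
    payoff_B = x @ B @ y
    best_pure_A = np.max(A @ y)   # best pure response of player A to y
    best_pure_B = np.max(x @ B)   # best pure response of player B to x
    return payoff_A >= best_pure_A - tol and payoff_B >= best_pure_B - tol

# Matching pennies with payoffs rescaled to [0, 1]; the unique equilibrium
# is uniform play by both players.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = 1.0 - A
x = y = np.array([0.5, 0.5])
print(is_nash(A, B, x, y))        # True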
These results motivate the notion of an approximate Nash equilibrium, a solution concept in which players receive payoffs “close” to their best response payoffs. More precisely, a pair of strategies (x∗ , y ∗ ) ∈ 4m × 4n is an (additive) -Nash equilibrium for the game (A, B) if x∗T Ay ∗ ≥ xT Ay ∗ − , ∀x ∈ 4m , and x∗T By ∗ ≥ x∗T By − , ∀y ∈ 4n . Note that when  = 0, (x∗ , y ∗ ) form an exact Nash equilibrium, and hence it is of interest to find -Nash equilibria with  small. Unfortunately, approximation of Nash equilibria has also proved to be computationally difficult. Cheng, Deng, and Teng have shown in [10] that, unless PPAD ⊆ P, there cannot be a fully polynomial-time approximation scheme for computing Nash equilibria in bimatrix games. There have, however, been a series of constant factor approximation algorithms for this problem [14, 13, 20, 37], with the current best producing a .3393 approximation via an algorithm by Tsaknakis and Spirakis [37]. We remark that there are exponential-time algorithms for computing Nash equilibria, such as the Lemke-Howson algorithm [25, 34]. There are also certain subclasses of the problem which can be solved in polynomial time, the most notable example being the case of zero-sum games (i.e. when B = −A). This problem was shown to be solvable via linear programming by Dantzig [11], and later shown to be polynomially equivalent to linear programming by Adler [2]. Aside from computation of Nash equilibria, there are a number of related decision questions which are of economic interest but unfortunately NP-hard. Examples include deciding whether a player’s payoff exceeds a certain threshold in some Nash equilibrium, deciding whether a game has a unique Nash equilibrium, or testing whether there exists a Nash equilibrium where a particular set of strategies is not played [16]. Our focus in this paper is on understanding the power of semidefinite programming2 (SDP) for finding approximate Nash equilibria in bimatrix games or providing certificates for related decision questions. The goal is not to develop a competitive solver, but rather to analyze the algorithmic power of SDP when applied to basic problems around computation of Nash equilibria. Semidefinite programming relaxations have been analyzed in the past for an array of intractable problems in computational mathematics (most notably in combinatorial optimization [17], [26] and systems theory [8]), but to our knowledge not for computation of Nash equilibria in general bimatrix games. SDPs have appeared however elsewhere in the literature on game theory for finding equilibria, e.g. by Stein for exchangeable equilibria in symmetric games [36], by Parrilo, Laraki, and Lasserre for Nash equilibria in zero-sum polynomial games [31, 21], or by Parrilo and Shah for zero-sum stochastic games [35]. 1.1 Organization and Contributions of the Paper In Section 2, we formulate the problem of finding a Nash equilibrium in a bimatrix game as a nonconvex quadratically constrained quadratic program and pose a natural SDP relaxation for it. In Section 3, we show that our SDP is exact when the game is strictly competitive; see Definition 3.3). 2 The unfamiliar reader is referred to [38] for the theory of SDPs and a description of polynomial-time algorithms for them based on interior point methods. 2 In Section 4, we establish a number of bounds on the quality of the approximate Nash equilibria that can be read off of feasible solutions to our SDP. 
We show that if the SDP has a rank-k solution with nonzero eigenvalues λ1 , . . . , λk , then one can recover from it an -Nash equilibrium Pk with  ≤ m+n i=2 λi (Theorem 4.5). We then present an improved analysis in the rank-2 case 2 5 which shows how one can recover a 11 -Nash equilibrium from the SDP solution (Theorem 4.12). We further prove that for symmetric games (i.e., when B = AT ), a 13 -Nash equilibrium can be recovered in the rank-2 case (Theorem 4.16). We do not currently know of a polynomial-time algorithm to find rank-2 solutions to our SDP. If such an algorithm were found, it would, together with our analysis, improve the best known approximation bound for symmetric games. This motivates us to design, in Section 5, two continuous but nonconvex objective functions for our SDP whose global minima coincide with minimum-rank solutions. We prove an upper bound on  based on the value of one of these objective functions (Theorem 5.6) and provide a heuristic for minimizing them both that is based on iterative linearization. We show empirically that these approaches produce  very close to zero (on average in the order of 10−3 ). In Section 6, we show how our SDP formulation can be used to provide certificates for certain (NP-hard) questions of economic interest about Nash equilibria. These are the problems of testing whether the maximum welfare achievable under any Nash equilibrium exceeds some threshold, and whether a set of strategies is played in every Nash equilibrium. In Section 7, we show that the SDP analyzed in this paper dominates the first level of the Lasserre hierarchy (Proposition 7.1). Some directions for future research are discussed in Section 8. 2 The Formulation of our SDP Relaxation In this section we present an SDP relaxation for the problem of finding Nash equilibria in bimatrix games. This is done after a straightforward reformulation of the problem as a nonconvex quadratically constrained quadratic program. Throughout the paper the following notation is used. · · · · · · · · · · · · · · Ai, refers to the i-th row of a matrix A. A,j refers to the j-th column of a matrix A. ei refers to the elementary vector (0, . . . , 0, 1, 0, . . . , 0)T with the 1 being in position i. 4k refers to the k-dimensional simplex. 1m refers to the m-dimensional vector of one’s. 0m refers to the m-dimensional vector of zero’s. Jm,n refers to the m × n matrix of one’s. A  0 denotes that the matrix A is positive semidefinite (psd), i.e., has nonnegative eigenvalues. A ≥ 0 denotes that the matrix A is nonnegative, i.e., has nonnegative entries. A  B denotes that A − B  0. Sk×k denotes the set of symmetric k × k matrices. Tr(A) denotes the trace of a matrix A, i.e., the sum of its diagonal elements. A ⊗ B denotes the Kronecker product of matrices A and B. vec(M ) denotes the vectorized version of a matrix M , and diag(M ) denotes the vector containing its diagonal entries. We also assume that all entries of the payoff matrices A and B are between 0 and 1. This can be done without loss of generality because Nash equilibria are invariant under certain affine transformations in the payoffs. In particular, the games (A, B) and (cA + dJm×n , eB + f Jm×n ) 3 have the same Nash equilibria for any scalars c, d, e, and f , with c and e positive. This is because x∗T Ay ≥ xT Ay ⇔ c(x∗T Ay ∗ ) + d ≥ c(xT Ay ∗ ) + d ⇔ c(x∗T Ay ∗ ) + d(x∗T Jm×n y ∗ ) ≥ c(xT Ay ∗ ) + d(xT Jm×n y ∗ ) ⇔ x∗T (cA + dJm×n )y ∗ ≥ xT (cA + dJm×n )y Identical reasoning applies for player B. 
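A quick numerical sanity check of this invariance (a sketch only, assuming numpy; the scalars and the random game below are arbitrary): player A's best-response gaps are scaled by the positive constant c, so their signs, and hence the set of equilibria, are unchanged.

import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5
A = rng.random((m, n))
x = rng.dirichlet(np.ones(m))      # arbitrary mixed strategies
y = rng.dirichlet(np.ones(n))

c, d = 2.5, -0.7                   # any c > 0 and any d
J = np.ones((m, n))
A_shifted = c * A + d * J

gap = A @ y - x @ A @ y                          # entries e_i^T A y - x^T A y
gap_shifted = A_shifted @ y - x @ A_shifted @ y

# Since x^T J y = e_i^T J y = 1 on the simplices, the d-term cancels and
# the shifted gaps are exactly c times the original gaps.
assert np.allclose(gap_shifted, c * gap)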
2.1 Nash Equilibria as Solutions to Quadratic Programs Recall the definition of a Nash equilibrium from Section 1. An equivalent characterizaiton is that a strategy pair (x∗ , y ∗ ) ∈ 4m × 4n is a Nash equilibrium for the game (A, B) if and only if x∗T Ay ∗ ≥ eTi Ay ∗ , ∀i ∈ {1, . . . , m}, x∗T By ∗ ≥ x∗T Bei , ∀i ∈ {1, . . . , n}. (1) The equivalence can be seen by noting that because the payoff from playing any mixed strategy is a convex combination of payoffs from playing pure strategies, there is always a pure strategy best response to the other player’s strategy. We now treat the Nash problem as the following quadratic programming (QP) feasibility problem: min x∈Rm ,y∈Rn 0 subject to xT Ay ≥ eTi Ay, ∀i ∈ {1, . . . , m}, xT By ≥ xT Bej , ∀j ∈ {1, . . . , n}, xi ≥ 0, ∀i ∈ {1, . . . , m}, yi ≥ 0, ∀j ∈ {1, . . . , n}, m X xi = 1, (2) i=1 n X yi = 1. i=1 Similary, a pair of strategies x∗ ∈ 4m and y ∗ ∈ 4n form an -Nash equilibrium for the game (A, B) if and only if x∗T Ay ∗ ≥ eTi Ay ∗ − , ∀i ∈ {1, . . . , m}, x∗T By ∗ ≥ x∗T Bei − , ∀i ∈ {1, . . . , n}. Observe that any pair of simplex vectors (x, y) is an -Nash equilibrium for the game (A, B) for any  that satisfies  ≥ max{max eTi Ay − xT Ay, max xT Bei − xT By}. i i We use the following notation throughout the paper: · A (x, y) := max eTi Ay − xT Ay, i · B (x, y) := max xT Bei − xT By, i · (x, y) := max{A (x, y), B (x, y)}, and the function parameters are later omitted if they are clear from the context. 4 2.2 SDP Relaxation The QP formulation in (2) lends itself to a natural SDP relaxation. We define a matrix   X P M := , Z Y and an augmented matrix  X P 0  := Z Y M x y  x y , 1 with X ∈ S m×m , Z ∈ Rn×m , Y ∈ S n×n , x ∈ Rm , y ∈ Rn and P = Z T . The SDP relaxation can then be expressed as min M0 ∈Sm+n+1,m+n+1 subject to 0 (SDP1) Tr(AZ) ≥ eTi Ay, ∀i ∈ {1, . . . , m}, (3) Tr(BZ) ≥ xT Bej , ∀j ∈ {1, . . . , n}, m X xi = 1, (4) (5) i=1 n X yi = 1, (6) i=1 M0ij ≥ 0 ∀i, j ∈ {1, . . . , m + n + 1}, (7) M0m+n+1,m+n+1 0 (8) = 1, M  0. (9) We refer to the constraints (3) and (4) as the relaxed Nash constraints and the constraints (5) and (6) as the unity constraints. This SDP is motivated by the following observation. Proposition 2.1. Let M0 be any rank-1 feasible solution to SDP1. Then the vectors x and y from its last column constitute a Nash equilibrium for the game (A, B). Proof. We know that x and y are in the simplex from the constraints (5), (6), and (7). If the matrix M0 is rank-1, then it takes the form  T     T xx xy T x x x yxT yy T y  = y  y  . 1 1 xT yT 1 (10) Then, from the relaxed Nash constraints we have that eTi Ay ≤ Tr(AZ) = Tr(AyxT ) = Tr(xT Ay) = xT Ay, xT Aei ≤ Tr(BZ) = Tr(ByxT ) = Tr(xT By) = xT By. The claim now follows from the characterization given in (1). Remark 2.1. Because a Nash equilibrium always exists, there will always be a matrix of the form (10) which is feasible to SDP1. Thus we can disregard any concerns about SDP1 being feasible, even when we add valid inequalities to it in Section 2.3. 5 Remark 2.2. It is intuitive to note that the submatrix P = Z T of the matrix M0 corresponds to a probability distribution over the strategies, and that seeking a rank-1 solution to our SDP can be interpreted as making P a product distribution. The following theorem shows that SDP1 is a weak relaxation and stresses the necessity of additional valid constraints. Theorem 2.2. Consider a bimatrix game with payoff matrices bounded in [0, 1]. 
Then  for any x two vectors x ∈ 4m and y ∈ 4n , there exists a feasible solution M0 to SDP1 with y  as its last 1 column. Proof. Consider any x, y, γ > 0, and the matrix    T   x x γJ 0 m+n,m+n m+n y  y  + . 0Tm+n 0 1 1 This matrix is the sum of two nonnegative psd matrices and is hence nonnegative and psd. By assumption x and y are in the simplex, and so constraints (5) − (9) of SDP1 are satisfied. To check that constraints (3) and (4) hold, note that since A and B are nonnegative, as long as the matrices A and B are not the zero matrices, the quantities Tr(AZ) and Tr(BZ) will become arbitrarily large as γ increases. Since eTi Ay and xT Bei are bounded by 1 by assumption, we will have that constraints (3) and (4) hold for γ large enough. In the case where A or B is the zero matrix, the Nash constraints are trivially satisfied for the respective player. 2.3 Valid Inequalities In this subsection, we introduce a number of valid inequalities to improve upon the SDP relaxation in SDP1. These inequalities are justified by being valid if the matrix returned by the SDP is rank-1. The terminology we introduce here to refer to these constraints is used throughout the paper. Constraints (11) and (12) will be referred to as the row inequalities, and (13) and (14) will be referred to as the correlated equilibrium inequalities. Proposition 2.3. Any rank-1 solution M0 to SDP1 must satisfy the following: m X Xi,j = j=1 n X j=1 n X n X Pi,j = xi , ∀i ∈ {1, . . . , m}, (11) Zi,j = yi , ∀i ∈ {1, . . . , n}. (12) j=1 Yi,j = m X j=1 Ai,j Pi,j ≥ n X j=1 j=1 m X m X j=1 Bj,i Pj,i ≥ Ak,j Pi,j , ∀i, k ∈ {1, . . . , m}, (13) Bj,k Pj,i , ∀i, k ∈ {1, . . . , n}. (14) j=1 Proof. Recall from (10) that if M0 is rank-1, it is of the form  T     T xx xy T x x x yxT yy T y  = y  y  . 1 1 xT yT 1 6 To show (11), observe that m X Xi,j = j=1 m X xi xj = xi , ∀i ∈ {1, . . . , m}. j=1 An identical argument works for the remaining matrices P, Z, and Y . To show (13) and (14), observe that a pair (x, y) is a Nash equilibrium if and only if ∀i, xi > 0 ⇒ eTi Ay = xT Ay = max eTi Ay, i ∀i, yi > 0 ⇒ xT Bei = xT By = max xT Bei . i This is because the Nash conditions require that xT Ay, a convex combination of eTi Ay, be at least eTi Ay for all i. Indeed, if xi > 0 but eTi Ay < xT Ay, the convex combination must be less than max xT Ay. i For each i such that xi = 0 or yi = 0, inequalities (13) and (14) reduce to 0 ≥ 0, so we only need to consider strategies played with positive probability. Observe that if M0 is rank-1, then n X Ai,j Pi,j = xi j=1 m X n X Ai,j yj = xi eTi Ay ≥ xi eTk Ay = j=1 Bj,i Pj,i = yi j=1 m X n X Ak,j Pi,j , ∀i, k j=1 Bj,i xj = yi xT Bei ≥ yi xT Bek = j=1 m X Bj,i Pj,k , ∀i, k. j=1 Remark 2.3. There are two ways to interpret the inequalities in (13) and (14): the first is as a relaxation of the constraint xi (eTi Ay − eTj Ay) ≥ 0, ∀i, j, which must hold since any strategy played with positive probability must give the best response payoff. The other interpretation is to have the distribution over outcomes defined by P be a correlated equilibrium [4]. This can be imposed by a set of linear constraints on the entries of P as explained next. Suppose the players have access to a public randomization device which prescribes a pure strategy to each of them (unknown to the other player). The distribution over the assignments can be given by a matrix P , where Pi,j is the probability that strategy i is assigned to player A and strategy j is assigned to player B. 
This distribution is a correlated equilibrium if both players have no incentive to deviate from the strategy prescribed, that is, if the prescribed pure strategies a and b satisfy n n X X Ai,j P rob(b = j|a = i) ≥ Ak,j P rob(b = j|a = i), j=1 m X j=1 Bi,j P rob(a = i|b = j) ≥ i=1 m X Bi,k P rob(a = i|b = j). i=1 If we interpret the P submatrix in our SDP as the distribution over the assignments by the P public device, then because of our row constraints, P rob(b = j|a = i) = xi,j whenever xi 6= 0 i P (otherwise the above inequalities are trivial). Similarly, P (a = i|b = j) = yi,j for nonzero yj . j Observe now that the above two inequalities imply (13) and (14). Finally, note that every Nash equilibrium generates a correlated equilibrium, since if P is a product distribution given by xy T , then P rob(b = j|a = i) = yj and P (a = i|b = j) = xi . 7 2.3.1 Implied Inequalities In addition to those explicitly mentioned in the previous section, there are other natural valid inequalities which are omitted because they are implied by the ones we have already proposed. We give two examples of such inequalities in the next proposition. We refer to the constraints in (15) below as the distribution constraints. The constraints in (16) are the familiar McCormick inequalities [27] for box-constrained quadratic programming.   x Proposition 2.4. Let z := . Any rank-1 solution M0 to SDP1 must satisfy the following: y m X m X Xi,j = i=1 j=1 n X m X Zi,j = i=1 j=1 n X n X Yi,j = 1. (15) i=1 j=1 Mi,j ≤ zi , ∀i, j ∈ {1, . . . , m + n}, Mi,j + 1 ≥ zi + zj , ∀i, j ∈ {1, . . . , m + n}. (16) Proof. The distribution constraints follow immediately from the row constraints (11) and (12), along with the unity constraints (5) and (6). The first McCormick inequality is immediate as a consequence of (11) and (12), as all entries of M are nonnegative. To see why the second inequality holds, consider whichever submatrix X, Y, P , or Z that contains Mi,j . Suppose that this submatrix is, e.g., P . Then, since P is nonnegative, 0≤ m X n X (11) Pk,l = k=1,k6=i l=1,l6=j m X (12) (xk − Pk,j ) = (1 − xi ) − (yj − Pi,j ) = Pi,j + 1 − xi − yj . k=1,k6=i The same argument holds for the other submatrices, and this concludes the proof. 2.3.2 The Effect of Valid Inequalities on an Example Game Consider the following  0.42 0.54  A = 0.53 0.19 0.08 randomly-generated 5 × 5 bimatrix game:   0.46 0.03 0.77 0.33 0.63 0.19 0.03 0.71 0.06 0.56 0.66 0.92   0.43 0.17 0.85 0.30 , B = 0.99 0.29  0.94 0.55 0.56 0.59 0.38 0.37 0.64 0.61 0.40 0.35 0.35 0.92 0.09 0.26 0.43 0.58 0.90 0.22 0.97 0.43 0.78 0.53  0.33 0.43  0.72 . 0.92 0.89 Figure 1 demonstrates the shrinkage in the feasible set of our SDP, projected onto the first two pure strategies of player A, as valid inequalities are added. Subfigure (a) depicts the Nash equilibria and the feasible set of SDP1 (recall from 2.2 that the feasible region without valid inequalities is the projection of the entire simplex). The row and distribution constraints are added for subfigure (b), and the correlated equilibrium constraints are further added for subfigure (c). Subfigure (d) depicts the true projection of the convex hull of Nash equilibria. 2.4 Strengthened SDP Relaxation We now write out our new SDP with all constraints in matrix  X P 0  M := Z Y xT y T 8 one place. Recall the representation of the  x y , 1 Figure 1: Reduction in the size of our spectrahedral outer approximation to the convex hull of Nash equilibria through the addition of valid inequalities. 
with P = Z T . The improved SDP is now: min M0 ∈S (m+n+1)×(m+n+1) subject to 0 (SDP2) M0  0, (17) M0ij ≥ 0, i, j ∈ {1, . . . , m M0m+n+1,m+n+1 = 1, m n X X xi = i=1 + n + 1}, yi = 1, (18) (19) (20) i=1 Tr(AZ) − eTi Ay ≥ 0, ∀i ∈ {1, . . . , m}, (21) Tr(BZ) − xT Bei ≥ 0, ∀i ∈ {1, . . . , n}, m n X X Xi,j = Zj,i = xi , ∀i ∈ {1, . . . , m}, (22) j=1 j=1 n X m X j=1 n X Yi,j = Zi,j = yi , ∀i ∈ {1, . . . , n}, (23) (24) j=1 Ai,j Pi,j ≥ n X j=1 j=1 m X m X Bj,i Pj,i ≥ Ak,j Pi,j , ∀i, k ∈ {1, . . . , m}, (25) Bj,k Pj,i , ∀i, k ∈ {1, . . . , n}. (26) j=1 j=1 Constraints (21) and (22) are the relaxed Nash constraints, constraints (23) and (24) are the row constraints, and constraints (25) and (26) are the correlated equilibrium constraints. 9 3 Exactness for Strictly Competitive Games In this section, we show that SDP1 recovers a Nash equilibrium for any zero-sum game, and that SDP2 recovers a Nash equilibrium for any strictly competitive game (see Definition 3.3 below). Both these notions represent games where the two players are in direct competition, but strictly competitive games are more general and for example, allow both players to have nonnegative payoff matrices. These classes of games are solvable in polynomial time via linear programming. Nonetheless, it is reassuring to know that our SDPs recover these important special cases. Definition 3.1. A zero-sum game is a game in which the payoff matrices satisfy A = −B. Theorem 3.2. For a zero-sum game, the vectors x and y from the last column of any feasible solution M0 to SDP1 constitute a Nash equilibrium. Proof. Recall that the relaxed Nash constraints (3) and (4) read Tr(AZ) ≥ eTi Ay, ∀i ∈ {1, . . . , m}, Tr(BZ) ≥ xT Bej , ∀j ∈ {1, . . . , n}. Since B = −A, the latter statement is equivalent to Tr(AZ) ≤ xT Aej , ∀j ∈ {1, . . . , n}. In conjunction these imply eTi Ay ≤ Tr(AZ) ≤ xT Aej , ∀i ∈ {1, . . . , m}, j ∈ {1, . . . , n}. (27) We claim that any pair x ∈ 4m and y ∈ 4n which satisfies the above condition is a Nash equilibrium. To see that xT Ay ≥ eTi Ay, ∀i ∈ {1, . . . , m}, observe that xT Ay is a convex combination of xT Aej , which are at least eTi Ay by (27). To see that xT By ≥ xT Bej ⇔ xT Ay ≤ xT Aej , ∀j ∈ {1, . . . , n}, observe that xT Ay is a convex combination of eTi Ay, which are at most xT Aej by (27). Definition 3.3. A game (A, B) is strictly competitive if for all x, x0 ∈ 4m , y, y 0 ∈ 4n , xT Ay − x0T Ay 0 and x0T By 0 − xT By have the same sign. The interpretation of this definition is that if one player benefits from changing from one outcome to another, the other player must suffer. Adler, Daskalakis, and Papadimitriou show in [3] that the following much simpler characterization is equivalent. Theorem 3.4 (Theorem 1 of [3]). A game is strictly competitive if and only if there exist scalars c, d, e, and f, with c > 0, e > 0, such that cA + dJm×n = −eB + f Jm×n . One can easily show that there exist strictly competitive games for which not all feasible solutions to SDP1 have Nash equilibria as their last columns (see Theorem 2.2). However, we show that this is the case for SDP2. Theorem 3.5. For a strictly competitive game, the vectors x and y from the last column of any feasible solution M0 to SDP2 constitute a Nash equilibrium. To prove Theorem 3.5 we need the following lemma, which shows that feasibility of a matrix M0 in SDP2 is invariant under certain transformations of A and B. Lemma 3.6. Let c, d, e, and f be any set of scalars with c > 0 and e > 0. 
If a matrix M0 is feasible to SDP2 with input payoff matrices A and B, then it is also feasible to SDP2 with input matrices cA + dJm×n and eB + f Jm×n . 10 Proof. It suffices to check that constraints (21),(22),(25), and (26) of SDP2 still hold, as only the relaxed Nash and correlated equilibrium constraints use the matrices A and B. We only show that constraints (21) and (25) still hold because the arguments for constraints (22) and (26) are identical. First recall that due to constraints (20) and (24) of SDP2, Tr(Jm×n Z) = 1. To check that the relaxed Nash constraints hold, observe that for scalars c > 0 and d, and for all i ∈ {1, . . . , m}, Tr(AZ) − eTi Ay ≥ 0 ⇔ cTr(AZ) + d − c(eTi Ay) − d ≥ 0 ⇔ cTr(AZ) + d(Tr(Jm×n Z)) − c(eTi Ay) − d(Tr(Jm×n Z)) ≥ 0 ⇔ Tr((cA + dJm×n )Z) − eTi (cA + dJm×n )y ≥ 0. P Now recall from constraint (23) of SDP2 (keeping in mind that P = Z T ) that nj=1 (Jm×n )i,j Pi,j = xi . To check that the correlated equilibrium constraints hold, observe that for scalars c > 0, d, and for all i, k ∈ {1, . . . , m}, n X Ai,j Pi,j ≥ j=1 ⇔c ⇔c n X n X Ai,j Pi,j + d (Jm×n )i,j Pi,j ≥ c j=1 ⇔ n X Ak,j Pi,j j=1 n X Ai,j Pi,j + dxi ≥ c j=1 n X j=1 n X (cAi,j + dJm×n )k,j Pi,j ≥ j=1 j=1 n X j=1 n X Ak,j Pi,j + dxi Ak,j Pi,j n X +d (Jm×n )k,j Pi,j j=1 (cAi,j + dJm×n )k,j Pi,j . j=1 Proof (of Theorem 3.5). Let A and B be the payoff matrices of the given strictly competitive game and let M0 be a feasible solution to SDP2. Since the game is strictly competitive, we know that cA + dJm×n = −eB + f Jm×n for some scalars c > 0, e > 0, d, f . Consider a new game with input matrices à = cA + dJm×n and B̃ = eB − f Jm×n . By Lemma 3.6, M0 is still feasible to SDP2 with input matrices à and B̃. Furthermore, since the constraints of SDP1 are a subset of the constraints of SDP2, M0 is also feasible to SDP1. Now notice that since à = −B̃, Theorem 3.2 implies that the vectors x and y in the last column form a Nash equilibrium to the game (Ã, B̃). Finally recall from Section 2 that Nash equilibria are invariant to scaling and shifting of the payoff matrices, and hence (x, y) is a Nash equilibrium to the game (A, B). Remark 3.1. We later show in Proposition 5.1 that using the trace of M as the objective function to SDP2 will guarantee that SDP2 returns a rank-1 solution. 4 Bounds on  for General Games In this section, we provide upper bounds on the  returned by SDP2 for an arbitrary bimatrix game. Since the problem of computing a Nash equilibrium in such a game is PPAD-complete, it is unlikely that one can find rank-1 solutions to this SDP in polynomial time. In Section 5, we design objective functions (such as variations of the nuclear norm) that empirically do very well in finding 11 low-rank solutions to SDP2. Nevertheless, it is of interest to know if the solution returned by SDP2 is not rank-1, whether one can recover an -Nash equilibrium from it and have a guarantee on . Recall our notation for the matrices   X P M := , Z Y and  X M0 :=  Z xT P Y yT  x y . 1 Throughout this section, any matrices X, Z, P = Z T and Y or vectors x and y are assumed to be taken from a feasible solution to SDP2. The ultimate results of this section are Theorems 4.5, 4.12, and 4.16. To work towards them, we need a number of preliminary lemmas which we present in Section 4.1. 4.1 Lemmas Towards Bounds on  We first observe the following connection between the approximate payoffs Tr(AZ) and Tr(BZ), and (x, y), as defined in Section 2.1. Lemma 4.1. 
Consider a feasible solution M0 to SDP2 and the vectors x and y and the matrix Z from that solution. Then (x, y) ≤ max{Tr(AZ) − xT Ay, Tr(BZ) − xT By}. Proof. Note that since Tr(AZ) ≥ eTi Ay and Tr(BZ) ≥ xT Bei from constraints (21) and (22) of SDP2, we have A ≤ Tr(AZ) − xT Ay and B ≤ Tr(BZ) − xT By. We thus are interested in the difference of the two matrices P = Z T and xy T . These two matrices can be interpreted as two different probability distributions over the strategy outcomes. The matrix P is the probability distribution from the SDP which generates the approximate payoffs Tr(AZ) and Tr(BZ), while xy T is the product distribution that would have resulted if the matrix had been rank-1. We will see that the difference of these distributions is key in studying the  which results from the SDP. Hence, we first take steps to represent this difference. Lemma 4.2. Consider any feasible matrix M0 to SDP2, and its submatrix M. Let the matrix M be given by an eigendecomposition M= k X λi vi viT =: i=1 k X i=1    T a ai , λi i bi bi (28) soPthat the eigenvectors vi ∈ Rm+n are partitioned into ai ∈ Rm and bi ∈ Rn . Then for all Pn m i, j=1 (ai )j = j=1 (bi )j . 12 Proof. We know from the distribution constraints from Section 2.3.1 that k X (15) λi 1Tm ai aTi 1m = 1, i=1 k X (29) (15) λi 1Tm ai bTi 1n = 1, (30) i=1 k X (15) λi 1Tn bi aTi 1m = 1, i=1 k X (31) (15) λi 1Tn bi bTi 1n = 1. (32) i=1 Then by subtracting terms we have (29) − (30) = (31) − (32) = k X λi 1Tm ai (aTi 1m − bTi 1n ) = 0, i=1 k X λi 1Tn bi (aTi 1m − bTi 1n ) = 0. (33) (34) i=1 By subtracting again these imply (33) − (34) = k X λi (1Tm ai − 1Tn bi )2 = 0. (35) i=1 As all λi are nonnegative due to positive semidefiniteness of M, the only way for this equality to hold is to have 1Tm ai = 1Tn bi , ∀i. This is equivalent to the statement of the claim. Pn P From Lemma 4.2, we can let si := m j=1 (bi )j , and furthermore we assume without j=1 (ai )j = loss of generality that each si is nonnegative. Note that from the row constraint (11) we have xi = m X j=1 Xij = k X m X λl (al )i (al )j = k X λj sj (al )i . (36) j=1 l=1 j=1 Hence, x= k X λi si ai . (37) λi si bi . (38) i=1 Similarly, y= k X i=1 Finally note from the distribution constraint (15) that this implies k X λi s2i = 1. i=1 13 (39) Lemma 4.3. Let M= k X i=1    T a ai λi i , bi bi be a feasible solution to SDP2, such that the eigenvectors of M are partitioned into ai and bi with Pn Pm (b )j = si , ∀i. Then (a ) = i i j j=1 j=1 T P − xy = k X k X λi λj (sj ai − si aj )(sj bi − si bj )T . i=1 j>i Proof. Using equations (37) and (38) we can write P − xy T = k X λi ai bTi − ( i=1 = k X k X k X λi si ai )( λj sj bj )T i=1 j=1 λi ai (bi − si i=1 (39) = λj sj bj )T j=1 k X k X λi ai ( i=1 = k X λj s2j bi − si j=1 k X k X k X λj sj bj )T j=1 λi λj ai sj (sj bi − si bj )T i=1 j=1 = k X k X λi λj (sj ai − si aj )(sj bi − si bj )T , i=1 j>i where the last line follows from observing that terms where i and j are switched can be combined. We can relate  and P − xy T with the following lemma. Lemma 4.4. Consider any feasible solution M0 to SDP2 and the matrix P − xy T . Then ≤ kP − xy T k1 , 2 where k · k1 here denotes the entrywise L-1 norm, i.e., the sum of the absolute values of the entries of the matrix. Proof. Let D := P − xy T . From Lemma 4.1, A ≤ Tr(AZ) − xT Ay = Tr(A(Z − yxT )). If we then hold D fixed and restrict that A has entries bounded in [0,1], the quantity Tr(ADT ) is maximized when ( 1 Di,j ≥ 0 Ai,j = . 
0 Di,j < 0 The resulting quantity Tr(ADT ) will then be the sum of all nonnegative elements of D. Since the sum of all elements in D is zero, this quantity will be equal to 12 kDk1 . The proof for B is identical, and the result follows from that  is the maximum of A and B . 14 4.2 Bounds on  We provide a number of bounds on (x, y), where x and y are taken from the last column of a feasible solution to SDP2. The first is a theorem stating that solutions which are “close” to rank-1 provide small . Theorem 4.5. Let M0 be a feasible solution to SDP2. Suppose M is rank-k and its eigenvalues are λ1 ≥ λ2 ≥ ... ≥ λk ≥ 0. Then, the x and y from the last column of M0 constitute an -NE to Pk the game (A, B) with  ≤ m+n i=2 λi . 2 Proof. By the Perron Frobenius theorem (see e.g. [28, Chapter 8.3]), the eigenvector corresponding to λ1 can be assumed to be nonnegative, and hence s1 = ka1 k1 = kb1 k1 . (40)   a We further note that for all i, since i is a vector of length m + n with 2-norm equal to 1, we bi must have   √ ai ≤ m + n. (41) bi 1 Since si is the sum of the elements of ai and bi , we know that √ m+n si ≤ min{kai k1 , kbi k1 } ≤ . 2 (42) This then gives us m+n , (43) 4 with the first inequality following from (42) and the second from (41). Finally note that a consequence of the nonnegativity of k · k1 and (41) is that for all i, j, s2i ≤ kai k1 kbi k1 ≤ kai k1 kbj k1 + kbi k1 kaj k1 ≤ (kai k1 + kbi k1 )(kaj k1 + kbj k1 ) =   ai bi 1 Now we let D := P − xy T and upper bound 21 kDk1 using Lemma 4.3. k k 1 1 XX kDk1 = k λi λj (sj ai − si aj )(sj bi − si bj )T k1 2 2 i=1 j>i ≤ 1 2 k X k X i=1 j>i k ≤ kλi λj (sj ai − si aj )(sj bi − si bj )T k1 k 1 XX λi λj ksj ai − si aj k1 ksj bi − si bj k1 2 i=1 j>i k ≤ k 1 XX λi λj (sj kai k1 + si kaj k1 )(sj kbi k1 + si kbj k1 ) 2 i=1 j>i (40),(43) ≤ k 1X λ1 s21 λj (sj + kaj k1 )(sj + kbj k1 ) 2 j=2 15   aj bj (41) ≤ m + n. 1 (44) k k 1 XX m+n m+n + + s2i + si sj kai k1 kbj k1 + si sj kaj k1 kbi k1 ) λi λj (s2j 2 4 4 i=2 j>i k (41),(44),(42) X m+n λ1 s21 λi 2 ≤ i=2 + 1 2 k X k X λi λj i=2 j>i AMGM3 ≤ m+n 2 (si + s2j ) + λi λj si sj (m + n) 4 k k k X s2i + s2j s2i + s2j m + n XX m+n λ1 s21 + ) λi + λi λj ( 2 2 4 2 i=2 i=2 j>i k = k i=2 = = ≤ i=2 j>i k k k k k i=2 i=2 j>i i=2 j>i X X X X 3(m + n) X m+n λ1 s21 λi + ( λi s2i λj + λi λj s2j ) 2 8 k k k k k X X X X 3(m + n) X m+n λ1 s21 λi + ( λj λi s2i + λi λj s2j ) 2 8 i=2 j=2 = k k k i=2 j=2 i=2 i=2 j>i k k X X m+n 3(m + n) λ1 s21 (1 − λ1 s21 ) λi + λi 2 8 i=2 m+n (3 + λ1 s21 ) 8 (39) ≤ 1<i<j X X m+n 3(m + n) X λ1 s21 λi + ( λj s2j ) λi 2 8 (39) = k X m+n 3(m + n) X X λ1 s21 λi + λi λj (s2i + s2j ) 2 8 i=2 k X λi i=2 k m+nX λi . 2 i=2 4.3 Bounds on  in the Rank-2 Case We now give a series of bounds on  which hold for feasible solutions to SDP2 that are rank-2 (note that due to the row inequalities, M will have the same rank as M0 ). This is motivated by our ability to show stronger (constant) bounds in this case, and the fact that we often recover rank-2 (or rank-1) solutions with our algorithms in Section 5. As it will be important in our study of this particular case, we first recall the definition of a completely positive factorization/rank of a matrix. Definition 4.6. A matrix M is completely positive (CP) if it admits a decomposition M = U U T for some nonnegative matrix U . Definition 4.7. The CP-rank of an n × n CP matrix M is the smallest k for which there exists a nonnegative n × k matrix U such that M = U U T . 3 AMGM is used to denote the arithmetic-mean-geometric-mean inequality. 
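Before specializing to the rank-2 case, the following short sketch shows how the quantities appearing in these bounds can be evaluated from a candidate solution matrix; the matrix is assumed to come from some SDP solver as a numpy array, and the helper names are ours. It computes epsilon_A and epsilon_B directly from their definitions in Section 2.1 together with the bound of Lemma 4.4.

import numpy as np

def epsilons(A, B, x, y):
    # eps_A, eps_B, and eps as defined in Section 2.1.
    eps_A = np.max(A @ y) - x @ A @ y
    eps_B = np.max(x @ B) - x @ B @ y
    return eps_A, eps_B, max(eps_A, eps_B)

def l1_bound(M_prime, m, n):
    # Lemma 4.4: eps <= ||P - x y^T||_1 / 2, with P, x, y read off the
    # (m+n+1) x (m+n+1) matrix M' in the block layout used throughout.
    x = M_prime[:m, m + n]
    y = M_prime[m:m + n, m + n]
    P = M_prime[:m, m:m + n]
    return 0.5 * np.abs(P - np.outer(x, y)).sum(), x, y

# Rank-1 solution built from the uniform equilibrium of matching pennies:
# both eps and the Lemma 4.4 bound are zero, as expected.
A = np.array([[1.0, 0.0], [0.0, 1.0]]); B = 1.0 - A
v = np.concatenate([[0.5, 0.5], [0.5, 0.5], [1.0]])
M_prime = np.outer(v, v)
bound, x, y = l1_bound(M_prime, 2, 2)
print(epsilons(A, B, x, y), bound)

For any matrix feasible to SDP2, the eps returned by epsilons is at most the value returned by l1_bound.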
16 Theorem 4.8 (see e.g. [19] or Theorem 2.1 in [6]). A rank-2, nonnegative, and positive semidefinite matrix is CP and has CP-rank 2. It is also known (see e.g., Section 4 in [19]) that the CP factorization of a rank-2 CP matrix can be found to arbitrary accuracy in polynomial time. With these preliminaries in mind, we present lemmas which are similar to Lemmas 4.2 and 4.3, but provide a decomposition which is specific to the rank-2 case. Lemma 4.9. Suppose that a feasible solution M0 to SDP2 is rank-2. Then there exists a decompo   T    T c c a a , where σ1 and σ2 are nonnegative, σ1 + σ2 = 1, a, c ∈ 4m , + σ2 sition M = σ1 d d b b and b, d ∈ 4n . Proof. Since the matrix M is rank-2, nonnegative, and positive semidefinite, from Theorem 4.8 we T T can decompose the matrix M into  0  v1 v1 + 0v2 v2 where v1 and v2 are nonnegative. We can then a c partition v1 and v2 into vectors 0 and 0 as we did in Lemma 4.2. Furthermore, we note that b d because hold, by repeating the proof of Lemma 4.2, we find Pstill Pm 0 (15) Pn 0 constraints Pmthe 0 distribution n that i=1 ai = i=1 bi , and i=1 ci = i=1 d0i . By letting m m X X a0 b0 c0 d0 0 2 σ1 = ( ai ) , σ2 = ( c0i )2 , a = √ , b = √ , c = √ , d = √ , σ1 σ1 σ2 σ2 i=1 i=1 we have found constants σ1 and σ2 , and vectors a, b, c and d which satisfy    T    T a a c c M = σ1 + σ2 , b b d d and a, b, c, and d are simplex vectors. Note that we can assume σ1 and σ2 are both positive, as otherwise we are in the rank-1 case. Finally, to show that σ1 + σ2 = 1, recall that the sum of all    T    T c c a a elements in M is 4, as are the sum of the elements of and . b b d d Lemma 4.10. Suppose a feasible solution M0 to SDP2 is rank-2, and that    T    T a a c c M = σ1 + σ2 , b b d d where σ1 and σ2 are nonnegative, σ1 + σ2 = 1, a, c ∈ 4m , and b, d ∈ 4n . Then, P − xy T = σ1 σ2 (a − c)(b − d)T . Proof. Recall that in our notation, P = σ1 abT + σ2 cdT , x = σ1 a + σ2 c, y = σ1 b + σ2 d. Then, P − xy T = σ1 abT + σ2 cdT − (σ1 a + σ2 c)(σ1 b + σ2 d)T = σ1 abT + σ2 cdT − σ12 abT − σ1 σ2 adT − σ1 σ2 cbT − σ22 cdT = σ1 (1 − σ1 )abT − σ1 σ2 adT − σ1 σ2 cbT + σ2 (1 − σ2 )cdT = σ1 σ2 abT − σ1 σ2 adT − σ1 σ2 cbT + σ2 σ1 cdT = σ1 σ2 (a − c)(b − d)T . 17 With this decomposition we can present a series of constant additive factor approximations for the rank-2 case. To do so we apply Lemma 4.4 to the decomposition in Lemma 4.10. All the following proofs will use the notation with σ, a, b, c, and d as in Lemma 4.10. Theorem 4.11. If a feasible solution M0 to SDP2 is rank-2, then the x and y from the last column of M0 constitute a 12 −Nash equilibrium. Proof. We can use Lemma 4.10 to represent D := P − xy T = σ1 σ2 (a − c)(b − d)T , where σ1 , σ2 ≥ 0, σ1 + σ2 = 1, a, c ∈ 4m , and b, d ∈ 4n . Then we can use Lemma 4.4 to get that ≤ kDk1 1 1 = kσ1 σ2 (a − c)(b − d)T k1 ≤ σ1 σ2 ka − ck1 kb − dk1 . 2 2 2 Since we have kak1 = kbk1 = kck1 = kdk1 = 1, we get the following bound for : 1 1 σ1 σ2 ka − ck1 kb − dk1 ≤ σ1 σ2 · 2 · 2 = 2σ1 σ2 . 2 2 Since σ1 and σ2 sum to one and are nonnegative, we know that σ1 σ2 ≤  ≤ 12 . 1 4, and hence we have Theorem 4.12. If a feasible solution M0 to SDP2 is rank-2, then either the x and y from its last 5 5 column constitute a 11 -NE, or a 11 -NE can be recovered from M0 in polynomial time. Proof. We consider 3 cases, depending on whether A (x, y) and B (x, y) are greater than or less than .4. If A ≤ .4, B ≤ .4, then (x, y) is already a .4-Nash equilibrium. Now consider the case when A ≥ .4, B ≥ .4. 
Since A ≤ Tr(A(P − xy T )T ) and B ≤ Tr(B(P − xy T )T ), we have σ1 σ2 (a − c)T A(b − d) ≥ .4, σ1 σ2 (a − c)T B(b − d) ≥ .4. Since A, a, b, c, and d are all nonnegative and σ1 σ2 ≤ 41 , aT Ab + cT Ad ≥ (a − c)T A(b − d) ≥ 1.6, and the same inequalities hold for for player B. In particular, since A and B have entries bounded in [0,1] and a, b, c, and d are simplex vectors, all the quantities aT Ab, cT Ad, aT Bb, and cT Bd are at most 1, and consequently at least .6. Hence (a, b) and (c, d) are both .4-Nash equilibria. Now suppose that (x, y) is a .4-NE for one player (without loss of generality player A) but not for the other (without loss of generality player B). Then A ≤ .4, and B ≥ .4. Let y ∗ be a best response for player B to x, and let p = 1+B1−A . Consider the strategy profile (x̃, ỹ) := (x, py + (1 − p)y ∗ ). This can be interpreted as the outcome (x, y) occurring with probability p, and the outcome (x, y ∗ ) happening with probability 1 − p. In the first case, player A will have A (x, y) = A and player B will have B (x, y) = B . In the second outcome, player A will have A (x, y ∗ ) at most 1, while player B will have B (x, y ∗ ) = 0. Then under this strategy profile, both players have the same upper bound for , which equals B p = 1+BB−A . To find the worst case for this value, let B = .5 5 (note from Theorem 4.11 that B ≤ 12 ) and A = .4, and this will return  = 11 . We now show a stronger result in the case of symmetric games. Definition 4.13. A symmetric game is a game in which the payoff matrices A and B satisfy B = AT . 18 Definition 4.14. A Nash equilibrium strategy (x, y) is said to be symmetric if x = y. Theorem 4.15 (see Theorem 2 in [29]). Every symmetric bimatrix game has a symmetric Nash equilibrium. For the proof of Theorem 4.16 below we modify SDP2 so that we are seeking a symmetric solution. Theorem 4.16. Suppose the constraints x = y and X = P = Y are added to SDP2. Then if a feasible solution M0 to this new SDP is rank-2, either the x and y from its last column constitute a symmetric 31 -NE, or a symmetric 13 -NE can be recovered from M0 in polynomial time. Proof. If (x, y) is already a symmetric 13 -NE, then the claim is established. Now suppose that (x, y) does not constitute a 13 -Nash equilibrium. Observe that since x = y, we must have a = b and c = d. Then following the same reasoning as in the proof of Theorem 4.12, we have 1 σ1 σ2 (a − c)T A(a − c) ≥ . 3 1 Since A, a, and c are all nonnegative, and σ1 σ2 ≤ 4 , we get 4 aT Aa + cT Ac ≥ (a − c)T A(a − c) ≥ . 3 In particular, at least one of aT Aa and cT Ac is at least 32 . Since the maximum possible payoff is 1, at least one of (a, a) and (c, c) is a (symmetric) 13 -Nash equilibrium. 5 Algorithms for Lowering the Rank In this section, we present heuristics which aim to find low-rank solutions to SDP2 and present some empirical results. Recall that our SDP2 in Section 2.4 did not have an objective function. Hence, we can encourage low-rank solutions by choosing certain objective functions, for example the nuclear norm (i.e. trace) of the matrix M, which is a general heuristic for rank minimization [33, 15]. This simple objective function is already guaranteed to produce a rank-1 solution in the case of strictly competitive games (see Proposition 5.1 below). For general games, however, one can design better objective functions in an iterative fashion (see Section 5.1). Proposition 5.1. For a strictly competitive game, any optimal solution to SDP2 with Tr(M) as the objective function must be rank-1. Proof. 
Let    X X P M := , M0 :=  Z Z Y xT P Y yT  x y , 1 with P = Z T , be a feasible solution to SDP2. In the case of strictly competitive games, from Theorem 3.5 we know that that (x, y) is a Nash equilibrium. Then because the matrix M0 must be    T x x psd, by applying the Schur complement (see, e.g. [7, Sect. A.5.5]), we have that M  , y y  T  xx xy T + P for some psd matrix P and some Nash equilibrium (x, y). and therefore M = T yx yy T Given this expression, the objective value is then xT x+y T y+Tr(P). As (x, y) is a Nash equilibrium, the choice of P = 0 results in a feasible solution. Since the zero matrix has the minimum possible    T x x trace among all psd matrices, the solution will be the rank-1 matrix . y y 19 5.1 Linearization Algorithms The algorithms we present in this section are based on iterative linearization of certain nonconvex objective functions. Motivated by the next proposition, we design two continuous (nonconvex) objective functions that, if minimized exactly, would guarantee rank-1 solutions. We will then linearize these functions iteratively. Proposition 5.2. Let the matrices X and Y and vectors x and y be taken from a feasible solution M0 to SDP2. Then the matrix M0 is rank-1 if and only if Xi,i = x2i and Yi,i = yi2 for all i.   x Proof. Necessity of the condition is trivial; we argue sufficiency. Denote the vector z = . First y recall from the row constraints of Section 2.3 that M will have the same rank as M0 , as the last column p is a linear combination of the columns of X and Y . Since M is psd, we have that Mi,j ≤ Mi,i Mj,j , which implies Mi,j ≤ zi zj by the assumption of the proposition. Further P Pm+n recall that a consequence of the unity constraint (20) is that m+n zi zj = 4, and that we i=1 Pm+nj=1 Pm+n require from the distribution constraints from Section 2.3.1 that i=1 j=1 Mi,j = 4. Now we can see that in order to have the equality 4= m+n X m+n X Mi,j ≤ i=1 j=1 m+n X m+n X zi zj = 4, i=1 j=1 we must have Mi,j = zi zj for each i and j. Consequently M is rank-1. We focus now on two nonconvex objectives that as a consequence of the above proposition would return rank-1 solutions: Pm+n p Proposition 5.3. All optimal solutions to SDP2 with the objective function Mi,i or i=1 T T Tr(M) − x x − y y are rank-1. Proof. We show that each of these objectives has a specific lower bound which is achieved if and only if the matrix is rank-1.    T p p x x Observe that since M  , we have Xi,i ≥ xi and Yi,i ≥ yi , and hence y y m+n X m n X X p Mi,i ≥ xi + yi = 2. i=1 Further note that i=1 i=1  T    T    T   x x x x x x Tr(M) − ≥ − = 0. y y y y y y We can see that the lower bounds are achieved if and only if Xi,i = x2i and Yi,i = yi2 for all i, which by Proposition 5.2 happens if and only if M is rank-1. We refer to our two objective functions in Proposition 5.3 as the “square root objective” and the “diagonal gap objective” respectively. While these are both nonconvex, we will attempt to iteratively minimize them by linearizing them through a first order Taylor expansion. For example, at iteration k of the algorithm, m+n Xq (k) Mi,i ' i=1 m+n Xq (k−1) Mi,i i=1 1 (k) (k−1) + q (Mi,i − Mi,i ). (k−1) 2 Mi,i 20 Note that for the purposes of minimization, this reduces to minimizing Pm+n i=1 (k) q 1 Mi,i . (k−1) Mi,i In similar fashion, for the second objective function, at iteration k we can make the approximation  (k)T  (k)  (k−1)T  (k−1)T  (k−1)T  (k)  (k−1) x x x x x x x Tr(M) − ' Tr(M) − −2 ( − ). 
y y y y y y y  (k−1)T  (k) x x Once again, for the purposes of minimization this reduces to minimizing Tr(M)−2 . y y This approach then leads to the following two algorithms.4 Algorithm 1 Square Root Minimization Algorithm 1: 2: 3: Let x(0) = 1m , y (0) = 1n , k = 1. while !convergence do P Pn q 1 Solve SDP2 with m X + i,i i=1 i=1 (k−1) xi timal solution by M0∗ . 4: Let x(k) = diag(X ∗ ), y (k) = diag(Y ∗ ). 5: Let k = k + 1. 6: end while q 1 Yi,i (k−1) yi as the objective, and denote the op- Algorithm 2 Diagonal Gap Minimization Algorithm 1: 2: 3: 4: 5: 6: Let x(0) = 0m , y (0) = 0n , k = 1. while !convergence do  (k−1)T  (k) x x Solve SDP2 with Tr(X)+Tr(Y )−2 as the objective, and denote the optimal y y solution by M0∗ . Let x(k) = x∗ , y (k) = y ∗ . Let k = k + 1. end while Remark 5.1. Note that the first iteration of both algorithms uses the nuclear norm (i.e. trace) of M as the objective. The square root algorithm has the following property. Theorem 5.4. Let M0(1) , M0(2) , . . . be the sequence of optimal matrices obtained from the square root algorithm. Then the sequence m+n X q (k) { Mi,i } (45) i=1 is nonincreasing and is lower bounded by two. If it reaches two at some iteration t, then the matrix M0(t) is rank-1. Proof. Observe that for any k > 1, m+n Xq (k) Mi,i i=1 4 (k) (k−1) q q m+n m+n m+n X q (k−1) Mi,i 1 X 1 X Mi,i (k−1) (k−1) q q ≤ ( + Mi,i ) ≤ ( + Mi,i ) = Mi,i , (k−1) (k−1) 2 2 Mi,i Mi,i i=1 i=1 i=1 An algorithm similar to Algorithm 2 is used in [18]. 21 where the first inequality follows from the arithmetic-mean-geometric-mean inequality, and the Pm+n M(k) (k) q i,i and hence achieves a no second follows from that Mi,i is chosen to minimize i=1 (k−1) Mi,i M(k−1) . larger value than the feasible solution This shows that the sequence is nonincreasing. The proof of Proposition 5.3 already shows that the sequence is lower bounded by two, and Proposition 5.3 itself shows that reaching two is sufficient to have the matrix be rank-1. The diagonal gap algorithm has the following property. 0 0 Theorem 5.5. Let M (1) , M (2) , . . . be the sequence of optimal matrices obtained from the diagonal gap algorithm. Then the sequence (k) {Tr(M  (k)T  (k) x x )− } y y (46) is nonincreasing and is lower bounded by zero. If it reaches zero at some iteration t, then the matrix M0(t) is rank-1. Proof. Observe that Tr(M(k) ) −  (k)T  (k)  (k)T  (k)  (k)  (k−1)  (k)  (k−1) x x x x x x x x ≤ Tr(M(k) ) − +( − )T ( − ) y y y y y y y y  (k)T  (k−1)  (k−1)T  (k−1) x x x x (k) = Tr(M ) − 2 + y y y y  (k−1)T  (k−1)  (k−1)T  (k−1) x x x x ≤ Tr(M(k−1) ) − 2 + y y y y  (k−1)T  (k−1) x x = Tr(M(k−1) ) − , y y  (k−1)T  (k−1) x x y y 0(k−1) and hence achieves a no larger value than the feasible solution M . This shows that the sequence is nonincreasing. The proof of Proposition 5.3 already shows that the sequence is lower bounded by zero, and Proposition 5.3 itself shows that reaching zero is sufficient to have the matrix be rank-1. 0 where the second inequality follows from that M (k) is chosen to minimize Tr(M(k−1) )−2 Our last theorem further quantifies how making the objective of the diagonal gap algorithm small makes  small. The proof is similar to the proof of Theorem 4.5. Theorem 5.6. Let M0 be a feasible solution to SDP2. Then, the x and y from the last column of M0 constitute an -NE to the game (A, B) with  ≤ 3(m+n) (Tr(M0 ) − xT x − y T y). 8 Proof. Let k be the rank of M0 with the eigenvalues of M0given by λ1 , . . . , λk and the eigenvectors P Pn a v1 , . . . 
, vk partitioned as in Lemma 4.2 so that vi = i with m j=1 (ai )j = j=1 (bi )j = si , ∀i. bi P Then we have Tr(M0 ) = ki=1 λi , and xT x + y T y (37),(38) = k k k X X X λi si vi )T ( λi si vi ) = λ2i s2i . ( i=1 i=1 22 i=1 We now get the following chain of inequalities (the first one follows from Lemma 4.4 and the proof of Theorem 4.5): k k 1 XX λi λj (sj kai k1 + si kaj k1 )(sj kbi k1 + si kbj k1 ) 2 ≤ i=1 j>i (40),(43) ≤ k k m+n 1 XX m+n + s2i + si sj kai k1 kbj k1 + si sj kaj k1 kbi k1 ) λi λj (s2j 2 4 4 i=1 j>i (44) ≤ k k 1 XX m+n 2 (si + s2j ) + λi λj si sj (m + n) λi λj 2 4 i=1 j>i AM GM ≤ k k s2i + s2j s2i + s2j m + n XX + ) λi λj ( 2 4 2 i=1 j>i k = k 3(m + n) X X λi λj (s2i + s2j ) 8 i=1 j>i = = k k k k i=1 j>i i=1 j>i X X X 3(m + n) X ( λi s2i λj + λi λj s2j ) 8 k k k k X X X 3(m + n) X ( λj λi s2i + λi λj s2j ) 8 j=1 1<i<j i=1 j>i k = 3(m + n) X X ( λi λj s2j ) 8 i=1 (39) = k 3(m + n) X ( λi (1 − λi s2i )) 8 3(m + n) = ( 8 5.2 j6=i i=1 k X k X i=1 i=1 λi − λ2i s2i ) = 3(m + n) (Tr(M0 ) − xT x − y T y). 8 Numerical Experiments We tested Algorithms 1 and 2 on games coming from 100 randomly generated payoff matrices with entries bounded in [0, 1] of varying sizes. Below is a table of statistics for 20 × 20 matrices; the data for the rest of the sizes can be found in Appendix A.5 We can see that our algorithms return approximate Nash equilibria with fairly low  (recall the definition from Section 2.1). We ran 20 iterations of each algorithm on each game. Using the SDP solver of MOSEK [1], each iteration takes on average under 4 seconds to solve on a standard personal machine with a 3.4 GHz processor and 16 GB of memory. 5 The code that produced these results is publicly available at aaa.princeton.edu/software. The function nash.m computes an approximate Nash equilibrium using one of our two algorithms as specified by the user. 23 Table 1: Statistics on  for 20 × 20 games after 20 iterations. Algorithm Max Mean Median StDev Square Root 0.0198 0.0046 0.0039 0.0034 Diagonal Gap 0.0159 0.0032 0.0024 0.0032 The histograms below show the effect of increasing the number of iterations on lowering  on 20 × 20 games. For both algorithms, there was a clear improvement of the  by increasing the number of iterations. Figure 2: Distribution of  over numbers of iterations for the square root algorithm (left) and the diagonal gap algorithm (right). 6 Bounding Payoffs and Strategy Exclusion In addition to finding -additive Nash equilibria, our SDP approach can be used to answer certain questions of economic interest about Nash equilibria without actually computing them. For instance, economists often would like to know the maximum welfare (sum of the two players’ payoffs) achievable under any Nash equilibrium, or whether there exists a Nash equilibrium in which a given subset of strategies (corresponding, e.g., to undesirable behavior) is not played. Both these questions are NP-hard for bimatrix games [16]. In this section, we show how our SDP can be applied to these problems and given some numerical experiments. 6.1 Bounding Payoffs When designing policies that are subject to game theoretic behavior by agents, economists would often like to find one with a good socially optimal outcome, which usually corresponds to an equilibrium giving the maximum welfare. Hence, given a game, it is of interest to know the highest achievable welfare under any Nash equilibrium. 
To address this problem, we begin as we did in Section 2.1 by posing the question of maximizing the welfare under any Nash equilibrium as a quadratic program. Since the feasible set of this program is the set of Nash equilibria, the constraints are the same as those in the formulation in (2), though the objective function is now the welfare: 24 max x,y xT Ay + xT By subject to xT Ay ≥ eTi Ay, ∀i ∈ {1, . . . , m}, xT By ≥ xT Bej , ∀j ∈ {1, . . . , n}, xi ≥ 0, ∀i ∈ {1, . . . , m}, yj ≥ 0, ∀j ∈ {1, . . . , n}, m X xi = 1, (47) i=1 n X yi = 1. i=1 The SDP relaxation of this quadratic program will then be given by max M0 ∈Sm+n+1,m+n+1 subject to Tr(AZ) + Tr(BZ) (17) − (26). (SDP3) (48) One can easily see that the optimal value of this SDP is an upper bound on the welfare achievable under any Nash equilibrium. To test the quality of this upper bound, we tested this SDP on a random sample of one hundred 5 × 5 and 10 × 10 games6 . The results are in Figures 3, which show that the bound returned by SDP3 was exact in a large number of the experiments. Figure 3: The quality of the upper bound on the maximum welfare obtained by SDP3 on 100 5 × 5 games (left) and 100 10 × 10 games (right). 6 The matrices were randomly generated with uniform and independent entries in [0,1]. The computation of the upper bounds on the maximum payoffs was done with the function nashbound.m, which computes an SDP lower bound on the problem of minimizing a quadratic function over the set of Nash equilibria of a bimatrix game. This code is publicly available at aaa.princeton.edu/software. The exact computation of the maximum payoffs was done with the lrsnash software [5], which computes extreme Nash equilibria. For a definition of extreme Nash equilibria and for understanding why it is sufficient for us to compare against extreme Nash equilibria (both in Section 6.1 and in Section 6.2), see Appendix B. 25 6.2 Strategy Exclusion The strategy exclusion problem asks, given a subset of strategies S = (Sx , Sy ), with Sx ⊆ {1, . . . , m} and Sy ⊆ {1, . . . , n}, is there a Nash equilibrium in which no strategy in S is played with positive probability. We will call a set S “persistent” if the answer to this question is negative, i.e. at least one strategy in S is played with positive probability in every Nash equilibrium. One application of the strategy exclusion problem is to understand whether certain strategies can be discouraged in the design of a game, such as reckless behavior in a game of chicken or defecting in a game of prisoner’s dilemma. In these particular examples these strategy sets are persistent and cannot be discouraged. A quadratic program which can address the strategy exclusion problem is as follows: X X min xi + yi x∈4m ,y∈4n subject to i∈Sx i∈Sy T x Ay ≥ eTi Ay, ∀i ∈ {1, . . . , m}, (49) xT By ≥ xT Bej , ∀j ∈ {1, . . . , n}. Observe that by design, S is persistent if and only if this quadratic program has a positive optimal value. The SDP relaxation of this problem is given by X min M0 ∈Sm+n+1,m+n+1 subject to i∈S xi + X yi (SDP4) i∈S (17) − (26). (50) Our approach for the strategy exclusion problem would be to declare that a strategy set is persistent if and only if SDP4 has positive optimal value. Note that since the optimal value of SDP4 is a lower bound for that of (49), SDP4 carries over the property that if a set S is not persistent, then the SDP for sure returns zero. Thus, when using SDP4 on a set which is not persistent, our algorithm will always be correct. 
For a persistent set, however, correctness is not guaranteed. While we can be certain that a set is persistent if SDP4 returns a positive optimal value (again, because the optimal value of SDP4 is a lower bound for that of (49)), there is still the possibility that for a persistent set SDP4 will have optimal value zero.

To test the performance of SDP4, we generated 100 random games of size 5 × 5 and 10 × 10 and computed all their extreme Nash equilibria.^7 We then, for every strategy set S of cardinality one and two, checked whether that set of strategies was persistent, first by checking among the extreme Nash equilibria, then through SDP4. The results are presented in Tables 2 and 3. As motivated by the discussion above, we separately show the performance on all instances and the performance on persistent input instances. As can be seen, SDP4 was quite effective for the strategy exclusion problem. In particular, for 10 × 10 games, we have a perfect identification rate.

^7 The exact computation of the Nash equilibria was done again with the lrsnash software [5], which computes extreme Nash equilibria. To understand why this suffices for our purposes see Appendix B.

Table 2: Performance of SDP4 on 5 × 5 games

                            |S| = 1    |S| = 2
Number of Total Sets        1000       4500
Number Correct              996        4465
Percent Correct             99.6%      99.2%
Number of Persistent Sets   22         1478
Number Correct              18         1443
Percent Correct             81.8%      97.6%

Table 3: Performance of SDP4 on 10 × 10 games

                            |S| = 1    |S| = 2
Number of Total Sets        2000       19000
Number Correct              2000       19000
Percent Correct             100%       100%
Number of Persistent Sets   11         841
Number Correct              11         841
Percent Correct             100%       100%

7 Connection to the Sum of Squares/Lasserre Hierarchy

In this section, we clarify the connection of the SDPs we have proposed in this paper to those arising in the sum of squares/Lasserre hierarchy. We start by briefly reviewing this hierarchy.

7.1 Sum of Squares/Lasserre Hierarchy

The sum of squares/Lasserre hierarchy^8 gives a recipe for constructing a sequence of SDPs whose optimal values converge to the optimal value of a given polynomial optimization problem. Recall that a polynomial optimization problem (pop) is a problem of minimizing a polynomial over a basic semialgebraic set, i.e., a problem of the form

\[
\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad g_i(x) \ge 0, \;\; \forall i \in \{1, \dots, m\}, \tag{51}
\]

where f and the g_i are polynomial functions. In this section, when we refer to the k-th level of the Lasserre hierarchy, we mean the optimization problem

\[
\gamma^k_{\mathrm{sos}} := \max_{\gamma, \sigma_i} \; \gamma \quad \text{subject to} \quad f(x) - \gamma = \sigma_0(x) + \sum_{i=1}^{m} \sigma_i(x)\, g_i(x), \tag{52}
\]

where \(\sigma_i\) is sos for all i ∈ {0, . . . , m}, and \(\sigma_0\) and the products \(g_i \sigma_i\) have degree at most 2k for all i ∈ {1, . . . , m}. Here, the notation “sos” stands for sum of squares. We say that a polynomial p is a sum of squares if there exist polynomials q_1, . . . , q_r such that \(p = \sum_{i=1}^{r} q_i^2\).

There are two primary properties of the Lasserre hierarchy which are of interest. The first is that any fixed level of this hierarchy gives an SDP of size polynomial in n. The second is that, if the set {x ∈ R^n | g_i(x) ≥ 0} is Archimedean (see, e.g., [24] for a definition), then \(\lim_{k \to \infty} \gamma^k_{\mathrm{sos}} = p^*\), where p* is the optimal value of the pop in (51). The latter statement is a consequence of Putinar’s positivstellensatz [32], [22].

7.2 The Lasserre Hierarchy and SDP1

One can show, e.g. via the arguments in [23], that the feasible sets of the SDPs dual to the SDPs underlying the hierarchy we summarized above produce an arbitrarily tight outer approximation to the convex hull of the set of Nash equilibria of any game.
The downside of this approach, however, is that the higher levels of the hierarchy can get expensive very quickly. This is why the approach we took in this paper was instead to improve the first level of the hierarchy. The next proposition formalizes this connection. 8 The unfamiliar reader is referred to [22, 30, 24] for an introduction to this hierarchy and the related theory of moment relaxations. 27 Proposition 7.1. Consider the problem of minimizing any quadratic objective function over the set of Nash equilibria of a bimatrix game. Then, SDP1 (and hence SDP2) gives a lower bound on this problem which is no worse than that produced by the first level of the Lasserre hierarchy. Proof. To prove this proposition we show that the first level of the Lasserre hierarchy is dual to a weakened version of SDP1. Explicit parametrization of first level of the Lasserre hierarchy. Consider the formulation of the Lasserre hierarchy in (52) with k = 1. Suppose we are minimizing a quadratic function  T   x x f (x, y) = y  C y  1 1 over the set of Nash equilibria as described by the linear and quadratic constraints in (2). If we apply the first level of the Lasserre hierarchy to this particular pop, we get max Q,α,χ,β,ψ,η γ  T    T   m x x x x X subject to y  C y  − γ = y  Q y  + αi (xT Ay − eTi Ay) 1 1 1 1 i=1 n X + βi (xT By − xT Bei ) i=1 + m X χi x i + i=1 n X (53) ψi yi i=1 m n X X + η1 ( xi − 1) + η2 ( yi − 1), i=1 i=1 Q  0, α, χ, β, ψ ≥ 0, where Q ∈ Sm+n+1×m+n+1 , α, χ ∈ Rm , β, ψ ∈ Rn , η ∈ R2 . By matching coefficients of the two quadratic functions on the left and right hand sides of (53), this SDP can be written as max γ γ,α,β,χ,ψ,η subject to H  0, α, β, χ, ψ ≥ 0, (54) where P Pm Pn   0 (− m i=1 αi )A + (− i=1 βi )B Pi=1 βi B,i − χ − η1 1m Pn 1  Pm m T  (− i=1 αi )A + (− i=1 βi )B 0 H := i=1 αi Ai, − ψ − η2 1n +C. Pn Pm 2 T T T T T β B − χ − η 1 α A − ψ − η 1 2η + 2η − 2γ 1 2 1 m 2 n i=1 i ,i i=1 i i, (55) Dual of a weakened version of SDP1. With this formulation in mind, let us consider a weakened version of SDP1 with only the relaxed Nash constraints, unity constraints, and nonnegativity constraints on x and y in the last column (i.e., the nonegativity constraint is not applied to the entire matrix). Let the objective be Tr(CM0 ). To write this new SDP in standard form, let 28    0 B −B,i A 0 1 0 0 , 0 −ATi,  , Bi :=  B T 2 −B T 0 0 −Ai, 0 ,i     0 0 1m 0 0 0 1 1  0 0 0 , S2 := 0 0 1n  . S1 := 2 1T 0 −2 2 0 1T −2 n m  0 1 T A Ai := 2 0 1 2 Let Ni be the matrix with all zeros except a i = m + n + 1). Then this SDP can be written as at entry (i, m + n + 1) and (m + n + 1, i) (or a 1 if min0 Tr(CM0 ) (SDP0) subject to M0  0, (56) M 0 (57) 0 (58) 0 (59) 0 (60) 0 (61) (62) Tr(Ni M ) ≥ 0, ∀i ∈ {1, . . . , m + n}, Tr(Ai M ) ≥ 0, ∀i ∈ {1, . . . , m}, Tr(Bi M ) ≥ 0, ∀i ∈ {1, . . . , n}, Tr(S1 M ) = 0, Tr(S2 M ) = 0, Tr(Nm+n+1 ) = 1. We now create dual variables for each constraint; we choose αi and βi for the relaxed Nash constraints (58) and (59), η1 and η2 for the unity constraints (60) and (61), χ for the nonnegativity of x (57), ψ for the nonnegativity of y (57), and γ for the final constraint on the corner (62). These variables are chosen to coincide with those used in the parametrization of the first level of the Lasserre hierarchy, as can be seen more clearly below. We then write the dual of the above SDP as max α,β,λ,γ subject to γ m X αi Ai + i=1 n X i=1 βi Bi + 2 X ηi Si + i=1 m X Ni+n χi + i=1 n X Ni ψi + γNm+n+1  C, i=1 α, β, χ, ψ ≥ 0. 
which can be rewritten as max α,β,χ,ψ,γ γ subject to G  0, α, β, χ, ψ ≥ 0, (63) where P Pm Pn   0 (− m i=1 αi )A + (− i=1 βi )B Pi=1 βi B,i − χ − η1 1m Pn 1  Pm m T  (− i=1 αi )A + (− i=1 βi )B 0 G := i=1 αi Ai, − ψ − η2 1n +C. Pn Pm 2 T T T T T β B − χ − η 1 α A − ψ − η 1 2η + 2η − 2γ 1 m 2 n 1 2 i=1 i ,i i=1 i i, We can now see that the matrix G coincides with the matrix H in the SDP (54). Then we have (53)opt = (54)opt = (63)opt ≤ SDP 0opt ≤ SDP 1opt , 29 where the first inequality follows from weak duality, and the second follows from that the constraints of SDP0 are a subset of the constraints of SDP1. Remark 7.1. The Lasserre hierarchy can be viewed in each step as a pair of primal-dual SDPs: the sum of squares formulation which we have just presented, and a moment formulation which is dual to the sos formulation [22]. All our SDPs in this paper can be viewed more directly as an improvement upon the moment formulation. Remark 7.2. One can see, either by inspection or as an implication of the proof of Theorem 2.2, that in the case where the objective function corresponds to maximizing player A’s and/or B’s payoffs9 , SDPs (54) and (63) are infeasible. This means that for such problems the first level of the Lasserre hierarchy gives an upper bound of +∞ on the maximum payoff. On the other hand, the additional valid inequalities in SDP2 guarantee that the resulting bound is always finite. 8 Future Work Our work leaves many avenues of further research. Are there other interesting subclasses of games (besides strictly competitive games) for which our SDP is guaranteed to recover an exact Nash equilibrium? Can the guarantees on  in Section 4 be improved in the rank-2 case (or the general case), by improving our analysis or for example by using the correlated equilibrium constraints (which we did not use)? Is there a polynomial time algorithm that is guaranteed to find a rank-2 solution to SDP2? Such an algorithm, together with our analysis, would improve the best known approximation bound for symmetric games (see Theorem 4.16). Can SDPs in a higher level of the Lasserre hierarchy be used to achieve better  guarantees? What are systematic ways of adding valid inequalities to these higher-order SDPs by exploiting the structure of the Nash equilibrium problem? For example, since any strategy played with positive probability must give the same payoff, one can add a relaxed version of the cubic constraints xi xj (eTi Ay − eTj Ay) = 0, ∀i, j ∈ {1, . . . , m} to the SDP underlying the second level of the Lasserre hierarchy. What are other valid inequalities for the second level? Finally, our algorithms were specifically designed for two-player one-shot games. This leaves open the design and analysis of semidefinite relaxations for repeated games or games with more than two players. References [1] MOSEK reference manual, 2013. Version 7. Latest version available at http://www.mosek. com/. [2] I. Adler. The equivalence of linear programs and zero-sum games. International Journal of Game Theory, 42(1):165–177, 2013. [3] I. Adler, C. Daskalakis, and C. H. Papadimitriou. A note on strictly competitive games. In International Workshop on Internet and Network Economics, pages 471–474. Springer, 2009. 9 This would be the case, for example, in the maximum social welfare problem of Section 6.1, where the matrix of the quadratic form in the objective function is given by   0 −A − B 0 0 0 . C = −A − B 0 0 0 30 [4] R. J. Aumann. Subjectivity and correlation in randomized strategies. 
Journal of Mathematical Economics, 1(1):67–96, 1974. [5] D. Avis, G. D. Rosenberg, R. Savani, and B. Von Stengel. Enumeration of Nash equilibria for two-player games. Economic Theory, 42(1):9–37, 2010. [6] A. Berman and N. Shaked-Monderer. Completely positive matrices. World Scientific, 2003. [7] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004. [8] S. P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear matrix inequalities in system and control theory, volume 15. SIAM, 1994. [9] X. Chen and X. Deng. Settling the complexity of two-player Nash equilibrium. In FOCS, volume 6, page 47th, 2006. [10] X. Chen, X. Deng, and S.-H. Teng. Computing Nash equilibria: Approximation and smoothed complexity. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06), pages 603–612. IEEE, 2006. [11] G. B. Dantzig. A proof of the equivalence of the programming problem and the game problem. Activity analysis of production and allocation, 13:330–338, 1951. [12] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1):195–259, 2009. [13] C. Daskalakis, A. Mehta, and C. Papadimitriou. A note on approximate Nash equilibria. In International Workshop on Internet and Network Economics, pages 297–306. Springer, 2006. [14] C. Daskalakis, A. Mehta, and C. Papadimitriou. Progress in approximate Nash equilibria. In Proceedings of the 8th ACM conference on Electronic commerce, pages 355–358. ACM, 2007. [15] M. Fazel. Matrix rank minimization with applications. PhD thesis, PhD thesis, Stanford University, 2002. [16] I. Gilboa and E. Zemel. Nash and correlated equilibria: Some complexity considerations. Games and Economic Behavior, 1(1):80–93, 1989. [17] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995. [18] S. Ibaraki and M. Tomizuka. Rank minimization approach for solving BMI problems with random search. In American Control Conference, 2001. Proceedings of the 2001, volume 3, pages 1870–1875. IEEE, 2001. [19] V. Kalofolias and E. Gallopoulos. Computing symmetric nonnegative rank factorizations. Linear Algebra and its Applications, 436(2):421–435, 2012. [20] S. C. Kontogiannis, P. N. Panagopoulou, and P. G. Spirakis. Polynomial algorithms for approximating Nash equilibria of bimatrix games. In International Workshop on Internet and Network Economics, pages 286–296. Springer, 2006. [21] R. Laraki and J. B. Lasserre. Semidefinite programming for min–max problems and games. Mathematical Programming, 131(1-2):305–332, 2012. 31 [22] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001. [23] J. B. Lasserre. Convex sets with semidefinite representation. Mathematical programming, 120(2):457–477, 2009. [24] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In Emerging Applications of Algebraic Geometry, pages 157–270. Springer, 2009. [25] C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12(2):413–423, 1964. [26] L. Lovász. On the shannon capacity of a graph. IEEE Transactions on Information theory, 25(1):1–7, 1979. [27] G. P. McCormick. Computability of global solutions to factorable nonconvex programs: Part I : Convex underestimating problems. 
Mathematical Programming, 10(1):147–175, 1976. [28] C. D. Meyer. Matrix analysis and applied linear algebra, volume 2. SIAM, 2000. [29] J. Nash. Non-cooperative games. Annals of mathematics, pages 286–295, 1951. [30] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming, 96(2):293–320, 2003. [31] P. A. Parrilo. Polynomial games and sum of squares optimization. In Proceedings of the 45th IEEE Conference on Decision and Control, pages 2855–2860. IEEE, 2006. [32] M. Putinar. Positive polynomials on compact semi-algebraic sets. Indiana University Mathematics Journal, 42(3):969–984, 1993. [33] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010. [34] R. Savani and B. Stengel. Hard-to-solve bimatrix games. Econometrica, 74(2):397–429, 2006. [35] P. Shah and P. A. Parrilo. Polynomial stochastic games via sum of squares optimization. In Decision and Control, 2007 46th IEEE Conference on, pages 745–750. IEEE, 2007. [36] N. D. Stein. Exchangeable equilibria. PhD thesis, Massachusetts Institute of Technology, 2011. [37] H. Tsaknakis and P. G. Spirakis. An optimization approach for approximate Nash equilibria. In International Workshop on Web and Internet Economics, pages 42–56. Springer, 2007. [38] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996. A Statistics on  from Algorithms in Section 5 Below are statistics for the  recovered in 100 random games of varying sizes using the algorithms of Section 5. Table 4: Statistics on  for 5 × 5 games after 20 iterations. Algorithm Max Mean Median StDev Square Root 0.0702 0.0040 0.0004 0.0099 Diagonal Gap 0.0448 0.0027 0 0.0061 32 Table 5: Statistics on  for 10 × 5 games after 20 iterations. Algorithm Max Mean Median StDev Square Root 0.0327 0.0044 0.0021 0.0064 Diagonal Gap 0.0267 0.0033 0.0006 0.0053 B Table 6: Statistics on  for 10 × 10 games after Algorithm Max Mean Median Square Root 0.0373 0.0058 0.0039 Diagonal Gap 0.0266 0.0043 0.0026 20 iterations. StDev 0.0065 0.0051 Table 7: Statistics on  for 15 × 10 games after Algorithm Max Mean Median Square Root 0.0206 0.0050 0.0034 Diagonal Gap 0.0212 0.0038 0.0025 20 iterations. StDev 0.0045 0.0039 Table 8: Statistics on  for 15 × 15 games after Algorithm Max Mean Median Square Root 0.0169 0.0051 0.0042 Diagonal Gap 0.0159 0.0038 0.0029 20 iterations. StDev 0.0039 0.0034 Table 9: Statistics on  for 20 × 15 games after Algorithm Max Mean Median Square Root 0.0152 0.0046 0.0035 Diagonal Gap 0.0119 0.0032 0.0022 20 iterations. StDev 0.0036 0.0027 Table 10: Statistics on  for 20 × 20 games after Algorithm Max Mean Median Square Root 0.0198 0.0046 0.0039 Diagonal Gap 0.0159 0.0032 0.0024 20 iterations. StDev 0.0034 0.0032 Lemmas for Extreme Nash Equilibria The results reported in Section 6 were found using the lrsnash [5] software which computes extreme Nash equilibria (see definition below). In particular the true maximum welfare and the persistent strategy sets were found in relation to extreme Nash equilibria only. We show in this appendix why this is sufficient for the claims we made about all Nash equilibria. Definition B.1. An extreme Nash equilibrium is a Nash equilibrium which cannot be expressed as a convex combination of other Nash equilibria. Lemma B.2. All Nash equilibria are convex combinations of extreme Nash equilibria. 33 Proof. 
It suffices to show that any extreme point of the convex hull of the set of Nash equilibria must be an extreme Nash equilibrium, as any point in a convex set can be written as a convex combination of its extreme points. Suppose for the purpose of contradiction that this was not the case, i.e. there is a point x which is an extreme point of the convex hull of Nash equilibria but is not an extreme Nash equilibrium. Then either it is not a Nash equilibrium, or it is a Nash equilibrium which is not extreme. In both cases, x can be written as a convex combination of other Nash equilibria, and so cannot be an extreme point for the convex hull. For the former case it is because its membership in the convex hull must be due to an expression of it as a convex combination of Nash equilibria, and in the latter it is due to the definition of extreme Nash equilibria. The next lemma shows that checking extreme Nash equilibria are sufficient for the maximum welfare problem. Lemma B.3. For any bimatrix game, there exists an extreme Nash equilibrium giving the maximum welfare among all Nash equilibria.    i Pr x̃ x Proof. Consider any Nash equilibrium (x̃, ỹ), and let it be written as = i=1 λi i for some ỹ y  1  r x x set of extreme Nash equilibria 1 , . . . , r and λ ∈ 4r . Observe that for any i, j, y y xiT Ay j ≤ xjT Ay j , xiT By j ≤ xiT By i , (64) from the definition of a Nash equilibrium. Now note that r r X X x̃T (A + B)ỹ = ( λi xi )T (A + B)( λi y i ) = i=1 r X r X i=1 λi λj xiT (A + B)y j i=1 j=1 = r X r X = = j λi λj x Ay + i=1 j=1 r X r (64) X ≤ iT r X r X λi λj xiT By j i=1 j=1 r X r X λi λj xjT Ay j + λi λj xiT By i i=1 j=1 i=1 j=1 r r X X λi xiT Ay i + λi xiT By i i=1 i=1 r X λi xiT (A + B)y i . i=1 In particular, since each (xi , y i ) is an extreme Nash equilibrium, this tells us for any Nash equilibrium (x̃, ỹ) there must be an extreme Nash equilibrium which has at least as much welfare. Similarly for the results for persistent sets in Section 6.2, there is no loss in restricting attention to extreme Nash equilibria. Lemma B.4. For a given strategy set S, if every extreme Nash equilibrium plays at least one strategy in S with positive probability, then every Nash equilibrium plays at least one strategy in S with positive probability. 34 Proof. Let S be a persistent set of strategies. Since all Nash equilibria are composed of nonnegative entries, and every extreme Nash equilibrium has positive probability on some entry in S, any convex combination of extreme Nash equilibria must have positive probability on some entry in S. Acknowledgments. We would like to thank Ilan Adler, Costis Daskalakis, Georgina Hall, Ramon van Handel, and Robert Vanderbei for insightful exchanges. 35
arXiv:1605.02778v1 [cs.CR] 9 May 2016 Calculational Design of Information Flow Monitors (extended version) Mounir Assaf David A. Naumann Stevens Institute of Technology Stevens Institute of Technology a high branch, yielding false positives in cases like this: if inhi then outlo := 0 else outlo := 0. Another technique which has been investigated extensively is to rely on static analysis to determine which locations might have been updated in executions that do not follow the same branch as the actual execution. The monitor tags all such locations when the control join point is reached. These techniques provide monitoring that is provably sound with respect to idealized semantics that ignores covert channels like timing (e.g., [9], [15], [42], [52], [56]). These and related techniques have been investigated and implemented but had quite limited practical impact. One I. I NTRODUCTION obvious reason is the difficulty of specifying policies with Runtime monitoring can serve to test a program’s security or sufficient flexibility to capture security goals without excessive to ensure its security by detecting violations. Monitoring is a restriction. Another impediment to practical use is that keeping good fit for access control policies, which are safety properties. track of possible alternate control paths has high performance A run either does or does not satisfy the policy, and a monitor cost. Lowering precision to reduce cost can result in intolerably can be precise in the sense of raising an alert only when the many false positives. It is an active area of research to improve run is poised to violate the policy. Information flow policies are monitors for better performance, better precision, and more about dependency, e.g., an untrusted (resp. secret) input should subtle policies. (See Section VI for related work.) not influence a trusted (resp. public) output. Formal definitions Another impediment to practical IF monitoring is that, if of information flow (IF) security are “hyperproperties” [25] enough is at stake to motivate paying the costs of policy involving multiple runs. Suppose an observer classified as “low” specification, performance degradation, and possible false knows the code, the set of possible secret inputs, and the low positives, there should be high assurance of correctness. The input (from which they can deduce the possible runs and low complexity of monitoring grows with the complexity of the outputs). A policy specifies what can be learned about the programming language and the monitoring techniques. While secret upon observing a particular low output. Learning means there are machine-checked correctness proofs for theoretical determining a smaller set of possible values of the secret. How models, there are few for practical implementations. The proofs is a monitor, acting only on the actual execution, to detect known to us have been done “from scratch”, rather than building violations of a property defined with respect to all (pairs of) on and reusing prior results (though of course one can identify runs? Remarkably, this was shown to be possible [42]. In this common techniques). Such proofs are not easily maintained as paper we show how to design such monitors systematically. the monitored language and platform evolves. 
A popular way to monitor dependency is to tag secret This paper addresses the impediments related to the precision data and propagate tags whenever tagged data is involved of monitors as well as the complexity of their design and in computing other data. If an output is not tagged, we might correctness proofs. conclude that in all possible runs, the output would have the For safety properties, the theory of abstract interpretation same value, i.e., nothing has been learned about the secret. The [29] is well established and widely used to guide and validate conclusion is wrong, owing to information channels besides the design of static analyses [30]. Like the best theories data flow. The most pervasive and exploitable such channel in engineering, abstract interpretation allows designs to be is control flow. If some branch condition depends on a secret, derived from their specification instead of merely helping the low observer may learn the secret from the absence of an to justify them after the fact [27]. Abstract interpretation observable action that happens in the other branch. underlies the static analysis part of some IF monitors [16], Owing to the possibility of such implicit flow, sound [46], but the monitor design and justification remains ad hoc. monitors are not in general precise. A simple technique Abstract interpretation has also been used for static analysis is to raise an alert if a low assignment is attempted in of noninterference [40]. Abstract—Fine grained information flow monitoring can in principle address a wide range of security and privacy goals, for example in web applications. But it is very difficult to achieve sound monitoring with acceptable runtime cost and sufficient precision to avoid impractical restrictions on programs and policies. We present a systematic technique for design of monitors that are correct by construction. It encompasses policies with downgrading. The technique is based on abstract interpretation which is a standard basis for static analysis of programs. This should enable integration of a wide range of analysis techniques, enabling more sophisticated engineering of monitors to address the challenges of precision and scaling to widely used programming languages. Chudnov et al. [23] suggest that an IF monitor should be viewed as computing an abstract interpretation to account for alternate runs vis-à-vis the monitored run. The key observation is that typical IF policies are 2-safety [25]: a violation has the form of a pair of runs, so the monitored run (major run) need only be checked with respect to each alternate (minor run) individually. What needs to be checked about the minor run is a safety property, defined in terms of the major run (and thus fully known to the monitor only upon completion of the major run). This view offers a path to more sophisticated monitoring and systematic development of monitors for real world languages and platforms, and modular machine checked correctness proofs. But the paper [23] is devoid of Galois connections or other trappings of abstract interpretation! It offers only a rational reconstruction of an existing monitor for the simple while language, augmented with downgrading and intermediate release policies. interpretation, especially collecting semantics. For commands we choose standard denotational semantics, because it facilitates streamlined notations in what follows. Sec. III presents the ideal monitor and shows how extant security policies are defined in terms of this semantics. Sec. 
IV defines a Galois connection for the lattice of relational formulas. That connection induces a specification of a monitor that approximates the ideal monitor. We sketch the derivation, from that specification, of a generic monitor. Sec. V derives several monitors, by refining the generic monitor to use different abstract interpretations for the minor runs. These include a new purely dynamic monitor as well as improvement on prior monitoring techniques. Sec. VI discusses related work. Sec. VII discusses ideas for monitoring richer languages and for gaining precision by leveraging existing static analyses. An appendix, providing detailed proofs for all results, can be found in the end of this technical report. Contribution: ideal monitor We reformulate the idea of “tracking set” in Chudnov et al. [23] as a novel variation of the standard notion of collecting semantics [28], which serves as specification for—and basis for deriving—static analyses. We generalize collecting semantics to depend on the major run, in an ideal monitor which we prove embodies checking of noninterference for the major execution. Contribution: derived monitors We derive several monitors from the monitoring semantics, using techniques of abstract interpretation to show the monitors are correct by construction. That is, the definitions are obtained by calculation, disentangling routine steps from inventive steps and design choices, inspired by Cousot [27]. We identify two main ways in which a monitor can glean information from the major run and the abstractly interpreted collective minor runs, accounting for existing monitors as well as showing the way to further advances that can be made in precision and efficiency. II. BACKGROUND A. Language syntax and standard semantics To expose the main ideas it suffices to work with the simple imperative language with integer variables. The only nonstandard features are the annotation commands, assert and assume. These use relational formulas as in Chudnov et al. [23], and are explained in due course. Program syntax e ::= n | id | e1 ⊕ e2 | b b ::= e1 < e2 | e1 = e2 | ¬b | b1 ∧ b2 c ::= id := e | c1 ; c2 | if b then c1 else c2 | while b do c | skip | assume Φ | assert Φ Φ ::= Ae | Bb | Bb ⇒ Ae (basic formulas) | Φ, Φ Expressions are integer-valued. They include constants n, variables id, binary operators (indicated by ⊕), and boolean expressions b. A state is a mapping from variables id to values v ∈ Z. For σ ∈ States we define the denotation JeKσ of an expression e as usual. For example, JidKσ , σ(id). Boolean expressions evaluate to either integer 0 or 1, e.g., J¬bKσ is 1 if JbKσ = 0. We omit the details, which are standard, and for simplicity we assume that every expression has a value in every state. We define the set of outcomes States⊥ , States ∪ {⊥} where ⊥ is distinct from proper states (representing divergence). We denote by 4 the approximation partial order over the flat domain States⊥ , and 4̇ its lifting to functions over outcomes. Therefore, the denotation of a command c is a function A key abstraction used in our monitors is one for relational formulas as in [23], here formulated as a Galois connection. In the cited paper, the derived monitor exhibits ad hoc features that reflect implementation details, e.g., simple agreement relations are represented both by taint tags on variables and by formulas. Here we refrain from dwelling on implementation and instead explore how some existing static analyses can be used in monitoring with little or no change. 
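Concretely, the expression fragment of this language and its evaluation in a state can be rendered in a few lines of Python; the representation below is our own sketch (in particular, negation ¬b is encoded here as the comparison b = 0), not part of the paper.

```python
from dataclasses import dataclass
from typing import Dict, Union

State = Dict[str, int]          # a state sigma maps variables to integer values

@dataclass(frozen=True)
class Const:
    val: int

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class BinOp:                    # '+', '-', '*', '<', '=', 'and'; booleans are 0/1
    op: str
    left: 'Expr'
    right: 'Expr'

Expr = Union[Const, Var, BinOp]

def ev(e: Expr, sigma: State) -> int:
    """[[e]]sigma: the denotation of an expression in a state.
    Negation ¬b can be encoded as BinOp('=', b, Const(0))."""
    if isinstance(e, Const):
        return e.val
    if isinstance(e, Var):
        return sigma[e.name]
    l, r = ev(e.left, sigma), ev(e.right, sigma)
    return {'+': l + r, '-': l - r, '*': l * r,
            '<': int(l < r), '=': int(l == r),
            'and': int(bool(l) and bool(r))}[e.op]

# Example: [[public + 1]] in the state {public: 0, secret: 1} evaluates to 1.
print(ev(BinOp('+', Var('public'), Const(1)), {'public': 0, 'secret': 1}))
```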
For example, one of our monitors uses an interval analysis, known to have good performance in practice. Other analyses, like constant propagation could as well be incorporated, as we discuss. As in standard static analysis, the notion of reduced product [26], [29], [35] serves to share information between different analyses, increasing their precision and efficiency. These first steps are a proof of principle. In the future, solid theoretical underpinnings can enable aggressive engineering of monitors while retaining high assurance. Outline: Sec. II introduces the simple language used to present our ideas, and reviews key notions from abstract JcK ∈ States⊥ → States⊥ For background on denotational semantics, see [58]. The clause JcK⊥ , ⊥ indicates that JcK is ⊥-strict for all c and we ignore ⊥ in the subsequent cases. The annotation commands act like skip. In the clause for if/else, we confuse 2 1 and 0 with truth and falsity in the metalanguage, to avoid writing JbKσ = 1 etc. Throughout this paper, we denote the least fixpoint of a monotonic function f ∈ A → A that is v-greater than x ∈ A by lfpv x f. Relational formulas are closed under conjunction, written Φ, Ψ as a reminder that sometimes we abuse notation and treat a relational formula as a set of basic formulas. Standard semantics of commands Usually, abstract interpretation-based static analyses introduce a collecting semantics (also known as a static semantics [28, Section 4]) aimed at formalizing the possible behaviours of a program wrt. a property of interest. This serves as a starting point for the derivation of sound approximate representations of program behaviours. The collecting semantics lifts the standard semantics to apply to arbitrary sets of proper states, ignoring the ⊥ outcome that indicates divergence because — like most work on information flow monitoring — we aim for termination-insensitive security [19]. JcK⊥ , ⊥ Jid := eKσ , σ[id 7→ JeKσ] C. Collecting semantics and abstract interpretation J−K Jc1 ; c2 Kσ , Jc2 K ◦ Jc1 Kσ ( Jc1 Kσ if JbKσ Jif b then c1 else c2 Kσ , Jc2 Kσ if ¬JbKσ Jassume ΦKσ , σ Jassert ΦKσ , σ Jwhile b cKσ , (lfp4̇ (λσ.⊥) F)(σ)( ρ where F(w)(ρ) , w ◦ JcKρ JskipKσ , σ ⦃c⦄ ∈ P(States) → P(States) ⦃c⦄Σ , {JcKσ | σ ∈ Σ and JcKσ 6= ⊥} if ¬JbKρ otherwise The powerset P(States), with set inclusion as a partial order, is a complete lattice. The collecting semantics can be given a direct definition that makes explicit the fixpoint computation over a set of states, rather than relying on the underlying fixpoint of the functional F used in the standard semantics of while. B. Relational formulas Relational formulas relate two states. The agreement formula Ax says the two states have the same value for x. In relational logics, initial and final agreements indicate which variables are “low”, as in the low-indistinguishability relations used to define noninterference [1]. In this paper we internalize specifications using annotation commands as in [23]. Given a command c, consider the command assume Ax, Ay; c; assert Az σpτ |= Ae iff. JeKσ = JeKτ ⦃−⦄ Collecting semantics grdb grdb (Σ) , {τ ∈ Σ | JbKτ = true} ⦃id := e⦄Σ = {σ[id 7→ JeKσ] | σ ∈ Σ} ⦃c1 ; c2 ⦄Σ = ⦃c2 ⦄ ◦ ⦃c1 ⦄Σ (1) This expresses that the final value of z may depend on the initial values of x and y but not on other variables. If, from two states that agree on x and y, executions of c lead to different values of z, the assertion will fail. 
Assumptions at intermediate points in the program serve to specify downgrading, similar to explicit code annotations in some work [21], [48], [49]. Although we do not model intermediate output as such, one may model an output channel as a variable, say out; the policy that it is low can be specified by asserting agreement over what is assigned to out. Relational formulas also feature a “holds in both” operator: Two states σ and τ satisfy Bb iff. they both evaluate the conditional expression b to 1. The third basic form is conditional agreement [2], [23], Bb ⇒ Ae, which can be used to encode multilevel security policies as well as to encode conditional downgrading (e.g., [11], [20]). Semantics of relational formulas (2) ⦃if b then c1 else c2 ⦄Σ = ⦃c1 ⦄ ◦ grdb (Σ) ∪ ⦃c2 ⦄ ◦ grd¬b (Σ) ⦃assume Φ⦄Σ = Σ ⦃assert Φ⦄Σ = Σ ⦃skip⦄Σ = Σ   ⦃while b c⦄Σ = grd¬b lfp⊆ Σ ⦃if b then c else skip⦄ Lemma 1 . The displayed equations define the same semantics as Equation (2). 1 The proof is by structural induction on commands. γ −− −− (A; v), comA Galois connection, written (C; ≤) ← −− α→ prises partially ordered sets with monotonic functions α, γ such that α(x) v y iff. x ≤ γ(y) for all x ∈ C, y ∈ A. In case C is P(States) and A is some lattice of abstract states, a sound approximation for command c is t ∈ A → A such that the following holds (writing v̇ for the pointwise lift of v): σpτ |= Φ σpτ |= Bb iff. JbKσ and JbKτ α ◦ ⦃c⦄ ◦ γ v̇ t σpτ |= (Bb ⇒ Ae) iff. σpτ |= Bb implies σpτ |= Ae (3) 1 While this formulation of the collecting semantics “is sometimes taken as standard in abstract interpretation works” [22], Cachera and Pichardie [22] provide the first precise proof relating it to a small-step operational semantics. σpτ |= Φ, Ψ iff. σpτ |= Φ and σpτ |= Ψ 3 The best abstract transformer for c is α ◦ ⦃c⦄ ◦ γ. It is not computable, in general, but it serves to specify the abstract interpretation t. The idea is to derive an abstract semantics ⦃ − ⦄] so that, for all c, ⦃c⦄] is a sound approximation of c and can be implemented efficiently. monitoring semantics continues tracking all minor states that also evaluate the guard to true. As for the minor states that evaluate the conditional guard to false, they are propagated through the else-branch by the collecting semantics. Notice that the monitor semantics of conditionals ignores annotation commands in non-executed branches. We revisit this later. III. I DEAL M ONITOR Ideal monitor We introduce a concrete monitoring semantics which serves as basis to define the security property by interpreting annotation commands with respect to both the actual execution (major run) and all possible alternatives (minor runs). Readers familiar with Chudnov et al. [23] may see this as a principled account of their notion of “tracking set”, adapted to denotational semantics. Sections IV and V derive monitors as abstract interpretations of this ideal monitor. The main difference between the collecting semantics and the ideal monitor is that the ideal monitor is parametrised by the current state σ of the major run – we call this a major state. The ideal monitor is responsible for interpreting annotation commands in order to track and verify the relational formulas satisfied by all minor states τ in the tracking set Σ, wrt. the major state σ. The ideal monitor also has to signal security violations due to assertion failures. We use the term fault, denoted by . We define P (States) , P(States) ∪ { }. 
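A direct reading of these clauses, reusing the expression evaluator ev sketched in Section II-A above, is the following check of σ p τ |= Φ; the tuple encoding of basic formulas is ours.

```python
# Basic relational formulas: ('A', e) is Ae, ('B', b) is Bb, ('BimpA', b, e) is Bb => Ae.
# A set of basic formulas is read conjunctively. `ev` is the evaluator from Section II-A.

def holds(phi, sigma, tau):
    """sigma p tau |= phi for a single basic formula."""
    kind = phi[0]
    if kind == 'A':                       # agreement on an expression
        return ev(phi[1], sigma) == ev(phi[1], tau)
    if kind == 'B':                       # "holds in both"
        return bool(ev(phi[1], sigma)) and bool(ev(phi[1], tau))
    if kind == 'BimpA':                   # conditional agreement Bb => Ae
        b, e = phi[1], phi[2]
        return (not holds(('B', b), sigma, tau)) or holds(('A', e), sigma, tau)
    raise ValueError(kind)

def sat(Delta, sigma, tau):
    """sigma p tau |= Delta, reading the set Delta of basic formulas conjunctively."""
    return all(holds(phi, sigma, tau) for phi in Delta)
```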
Therefore, the ideal monitor snd(σ, Σ) , Σ LcMσ , Lid := eMσ Σ , {Jid := eKτ | τ ∈ Σ} ◦ Lc1 Mσ Σ LskipMσ Σ , Σ Lassume ΦMσ Σ , {τ ∈ Σ | σpτ |= Φ} ( Σ if ∀τ ∈ Σ, σpτ |= Φ Lassert ΦMσ Σ , otherwise Lif(b then c1 else c0 Mσ Σ , Lc1 Mσ ◦ grdbσ Σ t ⦃c0 ⦄ ◦ grd¬b Σ ⦃c1 ⦄ ◦ grdb Σ t Lc0 Mσ ◦ grdσ¬b Σ Lwhile b do cMσ Σ , snd  if JbKσ otherwise   4̇×v̇ lfpλ(σ,Σ).(⊥,∅) G (σ, Σ) ( if ¬JbKσ (σ, ⦃while b do c⦄Σ) G(w)(σ, Σ) , w (JcKσ, Lif b then c else skipMσ Σ) otherw. is applied to the initial major state and maps an input tracking set Σ to an output set Σ0 or fault . We also introduce only one rule (LcMσ , ) to mean that the ideal monitor maps fault to fault. In order to use the framework of abstract interpretation, we provide the set P (States) with a lattice structure. To this end, we lift set inclusion, the natural partial order over the powerset P(States), to the set P (States). Therefore, we let be the top element of the set P (States) and we denote by v the lifting of set inclusion ⊆ to the set P (States): Σ v Σ0 iff. (Σ0 = grdσb grdbσ (Σ) , {τ ∈ Σ | JbKτ = true} Lc1 ; c2 Mσ Σ , Lc2 MJc1 Kσ LcM ∈ States⊥ → P (States) → P (States) ∀Σ, Σ0 ∈ P (States), L−M− For assume Φ, the ideal monitor reduces the initial tracking set Σ to the set of minor states τ whose pairing with the major state σ satisfy the relational formula Φ. The monitor rules out all alternative executions on the same control path that do not comply with the assumption Φ. These are termed “assumption failures” in [23]. For assert Φ, the ideal monitor checks whether all minor states in the initial tracking set Σ satisfy the relational formula Φ when paired with the current major state σ. If so, the monitor returns Σ. Otherwise, the monitor concludes that one of the alternative executions — that satisfies all assumptions encountered so far — falsifies the assertion when paired with the major state. So the semantics signals a security violation by returning fault . For while loops, the ideal monitor is defined as the least fixpoint of a functional G that formalises the simultaneous evaluation on both the major state and the tracking set. Erasing operations related to tracking sets in the functional G yields the functional F used in the standard denotational semantics. Along the iteration of the major state on the loop body, the ideal monitor iterates the tracking set on a conditional — ensuring only minor states that have not yet reached a fixpoint go through an additional iteration of the loop body. When the major state reaches a fixpoint (i.e. ¬JbKσ), the tracking set is fed to the ∨ Σ ⊆ Σ0 ) Let t denote the lifting of set union to the set P (States). The ideal monitor relies on the collecting semantics for branching commands. Therefore, we also lift the collecting semantics to the set P (States), by letting ⦃c⦄ , . This guarantees that both the ideal monitor and the collecting semantics are monotonic. The ideal monitor can be seen as a hybrid monitor comprised of a dynamic part and a static part. It directly handles the dynamic part, but delegates the static part to the collecting semantics. The dynamic part consists of tracking the minor states that follow the same execution path as the major state, whereas the static part consists in tracking the minor states that follow a different execution path. The monitor semantics of conditional commands best illustrates the intertwining between the dynamic and static part of this ideal monitor. 
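Before turning to conditionals, the dynamic part just described can be sketched in a few lines of Python for the straight-line cases (assignment, assume, assert). The list representation of tracking sets and the function names are ours; ev and sat are the helpers sketched in Section II, and the update of the major state itself is left to the standard semantics.

```python
FAULT = 'fault'          # the outcome representing a security violation

def m_assign(x, e, sigma, Sigma):
    """L x := e M_sigma: run the assignment on every minor state in the tracking set."""
    if Sigma == FAULT:
        return FAULT
    return [dict(tau, **{x: ev(e, tau)}) for tau in Sigma]

def m_assume(Phi, sigma, Sigma):
    """L assume Phi M_sigma: drop the minor states that, paired with the major
    state sigma, do not satisfy Phi (they are no longer relevant alternatives)."""
    if Sigma == FAULT:
        return FAULT
    return [tau for tau in Sigma if sat(Phi, sigma, tau)]

def m_assert(Phi, sigma, Sigma):
    """L assert Phi M_sigma: fault if some remaining minor state falsifies Phi."""
    if Sigma == FAULT:
        return FAULT
    return Sigma if all(sat(Phi, sigma, tau) for tau in Sigma) else FAULT
```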
When the major state evaluates the conditional guard to true, the 4 In summary, the idea is that from an initial state σ, a monitored execution evaluates JcKσ and in parallel should evaluate an abstraction of LcMσ , written LcM]σ . The monitored execution yields σ 0 , with JcKσ = σ 0 , if it can guarantee that LcM]σ States 6= . Otherwise, there is a potential security violation. This parallel evaluation can be formalized, as it is in the functional G for the monitor semantics of loops, but to streamline notation in the rest of the paper we focus on what is returned by the ideal monitor. collecting semantics before exiting the loop — ensuring all minor states reach their fixpoint. Like the collecting semantics, the ideal monitor is defined over concrete executions. Therefore, there is no loss of information, in the sense that the ideal monitor returns a fault iff. there is a minor state that falsifies the assertion. The semantics of the ideal monitor is the concrete specification for a more abstract monitor whose transfer functions are computable. We will rely on the framework of abstract interpretation in order to derive sound monitors by approximating both the concrete collecting semantics and the ideal monitor. The ideal monitor and the collecting semantics are equivalent for annotation-free commands, as long as the major run terminates. IV. L ATTICE OF R ELATIONAL F ORMULAS The derived monitors use abstract interpretations based on relational formulas. This section defines the abstraction and uses it to derive a generic monitor that is refined in Section V. First we define the lattice of relational formulas. To make it finite, expressions in relational formulas are restricted to those that occur in the program to be monitored, as well as their negations to facilitate precision in the monitors we derive. A set of formulas is interpreted conjunctively. To streamline notation we confuse a conjunctive formula, say “Ax, Ay”, with the set {Ax, Ay}. That is why the lattice is defined in terms of basic formulas. Lemma 2 . For all annotation-free commands c, all σ ∈ States such that JcKσ 6= ⊥, and all sets Σ ⊆ States, it holds that LcMσ Σ = ⦃c⦄Σ In fact LcMσ Σ ⊆ ⦃c⦄Σ as long as c is assertion-free. The proof is by structural induction. If the major run diverges, the least fixpoint of the functional G yields an undefined outcome ⊥, paired with an empty set ∅ of minor states. Assuming a set in of variables considered low inputs, and a set out considered low outputs, let us use the notation Ain and Aout to abbreviate the conjunction of basic agreements for these variables. Theorem 1 states that, for c annotated following the pattern of Equation (1), the ideal monitor, parametrised with a major state σ1 , does not result in a fault iff. the standard notion of termination-insensitive noninterference (TINI) holds for c and σ1 . Lattice of relational formulas L P (L) Assumption: for a given command c, let L be a finite set of basic relational formulas that is closed under negation of boolean expressions and which contains at least Bb, Ae, and Bb ⇒ Ae for every b and e that occur in c. We use the powerset P(L) as a lattice ordered by ⊇ with ∅ on top and intersection as join: (P(L); ⊇, L, ∅, ∩, ∪). Let P (L) be P(L) ∪ { }. Let v] be the lifting of the partial order ⊇ such that is the top element of the lattice P (L): Theorem 1 . Let in and out be two sets of variables and c be an annotation-free command. Let command ĉ be defined as assume Ain; c ; assert Aout. 
For all σ1 , σ10 ∈ States such that JcKσ1 = σ10 , we have LĉMσ1 States 6= iff. (P (L); v] , L, , t] , u] ) We also let t] (resp. u] ) denote the lifting of set intersection ∩ (resp. the lifting of set union ∪) to the lattice P (L). ∀σ2 , σ20 ∈ States, JcKσ2 = σ20 ∧ σ1 =in σ2 =⇒ σ10 =out σ20 The notation elides dependence of L on c because c should be the main program to be monitored; a fixed L will be used in the context of the monitor semantics which is recursively applied to sub-programs of c. The monitor will maintain an over-approximation of the relational formulas satisfied by the major state σ and every minor state τ of interest. A set of formulas is interpreted to mean all the formulas hold for every such pair (σ, τ ). The empty set indicates no relations are known, whereas serves to indicate that some required relation fails to hold. Given a major state σ, we aim to define an approximation of a tracking set Σ, in order to account for relational formulas satisfied by the major state σ and each minor state τ ∈ Σ. Subsequently, we lift this abstraction in order to approximate the monitoring collecting semantics and obtain sound computable abstract transfer functions of a monitor tracking relational formulas. We formalise this abstraction of the Here we write =in to indicate agreement on the variables in. The security property in [23] allows intermediate annotations, but disallows them in high branches, to ensure robustness of declassification etc. In an Appendix we provide an alternative ideal monitor and an alternative collecting semantics, both of which fault if an assertion or assumption occurs in a high conditional — one for which some minor states do not agree with the major state on its conditional guard. Strong conjecture: for terminating executions, the security property in [23] holds iff. the alternative ideal monitor does not fault. The alternative ideal monitor is a simple variation, but with the notational complication of threading an additional parameter through the definitions. So we do not use it in the body of the paper. However, for each of the derived monitors, there is a very similar one derived from the alternate ideal monitor, and therefore sound with respect to the security property in [23]. 5 tracking set Σ as a function ασ that is parametrised by a state σ: ασ ∈ P (States) → P (L) ( ασ (Σ) , {Φ | ∀τ ∈ Σ, σpτ |= Φ} In the process of deriving a sound abstract monitoring semantics LcM]σ approximating the monitoring collecting semantics LcMσ , we also have to derive a sound abstract static semantics ⦃c⦄] ∈ P (L) → P (L) approximating the static collecting semantics ⦃c⦄ ∈ P (States) → P (States). Equation (3) provides a notion of soundness for static semantics, whereas Equation (4) provides a notion of soundness for monitoring semantics. While approximating the monitoring collecting semantics, we will find good ways for the abstract static and monitoring semantics to interact. In particular, the abstract static analyses we propose will account for that interaction by additional parameters. Additionally, we will also prove soundness results in Lemmas 5, 6 and 9 that embody not only the soundness condition of Equation (3), but also variations on that property that take into account this interaction between the dynamic and static analyses. The derivation of a sound abstract monitoring semantics is by structural induction over commands. 
As an example, consider the case of a conditional command c , if b then c1 else c0 , an initial state σ and a final state σ 0 such that σ 0 = JcKσ. Let us consider the case where the guard evaluates to true (JbKσ = 1), so that σ 0 = Jc1 Kσ. Then, assuming a sound approximation ⦃c⦄] of the static collecting semantics, we can derive a generic abstract monitoring semantics by successive approximation, beginning as follows: if Σ = otherwise The associated concretisation function γσ is also parametrised by a major state σ. The concretisation of a set ∆ ∈ P(L) of relational formulas yields a set Σ of minor states, such that every minor state τ ∈ Σ and the major state σ satisfy all relational formulas Φ ∈ ∆. γσ ∈ P (L) → P (States) ( γσ (∆) , {τ ∈ States | ∀Φ ∈ ∆, σpτ |= Φ} if ∆ = otherwise Lemma 3 . For all σ ∈ States, the pair (ασ , γσ ) is a Galois γσ −− −− − − (P (L); v] ). That is, connection: (P (States); v) ← −− → ασ ∀Σ ∈ P (States), ∀∆ ∈ P (L): ασ (Σ) v ∆ ⇐⇒ Σ v] γσ (∆) Figure 1 illustrates the best abstraction of the monitoring collecting semantics LcMσ . If a set Σ of minor states is abstracted wrt. a major state σ by a set ∆ of relational formulas (Σ v γσ (∆)), then the resulting set Σ0 = LcMσ Σ of minor states is abstracted wrt. the resulting major state σ 0 = JcKσ by the set ασ0 ◦ LcMσ ◦ γσ (∆) of relational formulas (Σ0 v γσ0 (ασ0 ◦ LcMσ ◦ γσ (∆))). This best abstract transformer P (L) ασ0 ◦LcMσ ◦γσ ασ0 ◦ Lif b then c1 else c0 Mσ = ασ0 Lc1 Mσ ◦ grdbσ ◦ ◦ γσ (∆) γσ (∆) t ⦃c0 ⦄ ◦ grd¬b ασ 0 ◦ Lc1 Mσ ◦ ◦ grdbσ ◦ σ σ ⦃c0 ⦄ ◦ grd¬b ◦ γσ (∆) v Hασ0 , Lc1 Mσ , and ⦃c1 ⦄ are monotone, γσ 0 extensive: λΣ.Σ v̇ γσ ασ 0 ασ 0 ◦ Lc1 Mσ ασ0 P (States) Figure 1. ◦ LcMσ ◦ Best abstract transformer ] γσ v̇ LcM]σ where σ 0 = JcKσ (4) γσ v̇ γσ0 ◦ LcM]σ and ασ ◦ ⦃c0 ⦄ ◦ γσ ◦ grdbσ ◦ ασ ◦ γσ (∆) t] ◦ grd¬b The above derivation is a routine use of abstract interpretation techniques. The last step uses a soundness property for the static part of the monitor that is similar to Equation (3). So far, we relied on an approximation of the monitoring collecting semantics of command c1 and an approximation of the static collecting semantics of command c0 . To continue this generic derivation, we need to derive a sound approximation for the guard operators grdbσ and grd¬b . Lemma 4 Soundness conditions. Consider any c, σ, σ 0 such that σ 0 = JcKσ. Equation (4) is equivalent to each of the following: ◦ ◦ ] Note that we denote by v̇ (resp. v̇ ) the pointwise lifting to functions of the partial order v over P (States) (resp. of the partial order v] over P (L)). LcMσ γσ ασ I ασ is ◦ γσ (∆) ] v HLc1 Mσ is sound by induction hypothesis of Equation (4)I Lc1 M]σ ◦ ασ ◦ grdbσ ◦ γσ (∆) t] ασ0 ◦ ⦃c0 ⦄ ◦ γσ ◦ ασ ◦ grd¬b ◦ γσ (∆) ] v] HThe static analysis is sound: ασ0 ◦ ⦃c0 ⦄ ◦ γσ v̇ ⦃c0 ⦄] I Lc1 M]σ ◦ ασ ◦ grdbσ ◦ γσ (∆) t] ⦃c0 ⦄] ◦ ασ ◦ grd¬b ◦ γσ (∆) (5) is not computable in general. An abstract interpretation LcM]σ is sound if it satisfies the following condition: ασ0 ◦ ◦ ◦ ] P (States) LcMσ  γσ (∆) t] ] JcK γσ (∆) = HSince ασ0 is additive: ασ0 (Σ ∪ Σ0 ) = ασ0 (Σ) t] ασ0 (Σ0 )I P (L) ασ0 γσ ◦ ασ0 ◦ ] LcMσ v̇ LcM]σ ◦ ασ 6 The approximation of grdbσ proceeds as follows: ασ ◦ grdbσ first one treats high branching commands pessimistically by forgetting all known formulas. The second one relies on an approximation of the modified variables in order to determine which relational formulas cannot be falsified, a usual technique in hybrid monitors. 
The third one deduces new relational formulas by comparing the results of an interval value analysis to the current values in the monitored execution. γσ (∆) ◦ = ασ ({τ ∈ γσ (∆) | JbKτ = 1}) = Hsince the major state evaluates b to trueI ασ ({γσ (∆) ∩ {τ ∈ States | σpτ |= Bb}) = ασ ({γσ (∆) ∩ γσ ({Bb})) = Hγσ is multiplicative: γσ (∆ u] ∆0 ) = γσ (∆) u γσ (∆0 )I ασ v] Hασ ◦ ◦ A. Purely-Dynamic Monitor ] γσ (∆ u {Bb}) γσ is reductive: ασ ] ∆ u {Bb} ◦ Let us start by deriving a purely-dynamic monitor tracking relational formulas. This monitor is an instance of EM mechanisms [55]. Thus, it must observe only the execution steps of the major state. In particular, this monitor should not look aside [52], meaning that it does not rely on information about minor states that do not follow the same execution path as the major state. This scenario corresponds to approximating the collecting semantics ⦃c⦄ by an abstract static semantics D⦃c⦄] that returns the top element of the lattice P(L), providing no information about minor states that do not follow the same execution path as the major state. Definition 1 introduces such an abstract static semantics. We ensure it is strict by mapping the bottom element L of the lattice P(L) to itself. ] γσ v̇ λ∆.∆I (6) As for the approximation of grdb , we have: ασ ◦ grdb γσ (∆) v] Hασ is monotone and grdb v̇λΣ.ΣI ◦ ασ ] ◦ γσ (∆) v Hsince ασ ◦ γσ is reductiveI ∆ (7) To sum up, we rewrite the approximations of operators grdbσ and grdb of Equations (6) and (7) in the intermediate abstraction of conditionals obtained in Equation (5), so we can conclude: ασ0 Lif b then c1 else c0 Mσ ◦ γσ (∆)   ] v Lc1 M]σ (∆ u] {Bb}) t] ⦃c0 ⦄] ∆ Definition 1 . The abstract static semantics D⦃c⦄] for a purelydynamic monitor is given by: ◦ (8) D⦃c⦄] ∈ P (L) → P (L)   if ∆ =  ] D⦃c⦄ ∆ , L if ∆ = L   ∅ otherwise This equation yields one of the generic approximations of conditional commands. It says that when the conditional guard is true, monitor the then branch and statically analyse the else branch. In the following section, we specialise this generic approximation by relying on different approximations of the static collecting semantics. Given the importance of relational formulas as an abstraction, it should be no surprise that some monitors rely on entailments among formulas. Precision can be improved by providing the monitor with strong means of logical deduction, but there is a cost in performance. This engineering trade-off can be left open, by just specifying what we need. We use the notation ∆ ⇒] Φ, which is either true or false, as follows (cf. [23]). Approximate entailment Lemma 5 soundness of D⦃ − ⦄] . For all c, σ, σ 0 ∈ States, ] it holds that: ασ0 ◦ ⦃c⦄ ◦ γσ v̇ D⦃c⦄] . Mapping L to L is sound because the bottom element L is concretised to the empty set of states (since L is closed under negation, thus it contains Bb and B¬b for some b). A detailed derivation of the purely dynamic monitor, as well as subsequent derivations, are found in the Appendix. Figure 2 presents the derived monitor. In it we use the notation ∆⇒] Φ for the approximate implication specified at the end of Section IV. The semantics for conditional commands is obtained from the generic derivation for conditionals, presented in Equation (8), by unfolding the definition of the static analysis D⦃c⦄] . The abstract monitoring semantics of any command maps fault to fault (LcM]σ , ). For assignments id := e, the monitor invalidates all relational formulas that involve variable id. 
It also deduces that all resulting minor states agree with the major state on variable id, if ∆ ⇒] Ae. For a sequence c1 ; c2 , the initial state σ is used to monitor c1 , and then the resulting major state Jc1 Kσ is used to monitor c2 . For assumptions, the monitor adds the assumed formula to the input set. For asserts, the monitor returns fault if it cannot determine that the known ∆ implies the asserted Φ. ∆ ⇒] Φ Assumption: For any ∆, Φ, if ∆ ⇒] Φ then the implication is valid. That is, for all σ, τ ∈ States, if σpτ |= ∆ then σpτ |= Φ. V. M ONITOR D ERIVATION We derive three different monitors from the ideal monitoring semantics. By relying on the framework of abstract interpretation, these monitors are correct by construction. Unless it is clear from context, we will differentiate the three abstract semantics of these monitors by prefixing them with the letters D, M and I. These three monitors illustrate different ways of reasoning about relational formulas in high branching commands. The 7 LcM]σ , LskipM]σ ∆ , ∆ ( Lid := eM]σ ∆ ( Lassert ΦM]σ ∆ , , {Φ ∈ ∆ | id 6∈ fv(Φ)} u ] ] ∆ u {Φ} if ∆ ⇒ Φ otherwise ] Aid if ∆ ⇒] Ae ∅ otherwise Lc1 ; c2 M]σ ∆ , Lc2 M]Jc1 Kσ ◦ Lc1 M]σ ∆ Lassume ΦM]σ ∆ , ∆ u] {Φ} (  Lc1 M]σ (∆ u] {Bb})     ∅ Lif b then c1 else c0 M]σ ∆ , (  Lc0 M]σ (∆ u] {B¬b})     ∅ if ∆ ⇒] Ab otherwise if ∆ ⇒] Ab otherwise if JbKσ if ¬JbKσ   ] v̇ ] Lwhile b do cM]σ ∆ , snd (lfp4̇× G )(σ, ∆) λ(σ,∆).(⊥,L)  σ, if ¬JbKσ and ∆ =    σ, ∆ u] {B¬b} if ¬JbKσ and ∆ ⇒] Ab where G ] (w] )(σ, ∆) ,  σ, {B¬b} otherwise if ¬JbKσ     ] w JcKσ, Lif b then c else skipM]σ ∆ otherwise Figure 2. Purely-dynamic monitor DL−M]− , as abstract semantics derived from the ideal monitor For conditionals, if the monitor is unable to determine that all minor states agree with the major state on the value of the conditional guard, it must treat it as a “high conditional”. So the purely-dynamic monitor conservatively forgets all known relational formulas. Similarly to the standard denotational semantics and the ideal monitor semantics, the abstract monitoring semantics of loops is defined as a fixpoint of an abstract functional G ] . This abstract fixpoint behaves as a finite sequence of conditionals. To each iteration of the major state through the loop body corresponds a simultaneous iteration of the monitor on a conditional. It is important that the monitor treats each iteration of the loop body as a conditional, in order to soundly track formulas satisfied by minor states that exit the loop before the major state. When the major state exits the loop, the monitor relies on the static analysis in order to account for minor states that may continue iterating through the loop. 0 1 2 3 4 5 6 7 8 9 Listing 1. ασ0 ◦ LcMσ ◦ γσ v̇ Example program all variables as high. Thus, at the assignment to variable y, our monitor deduces that all minor states agree with the major state on the value of y, which means that no security violation is raised since the relational assertion is satisfied. Notice that if this example program asserts Apublic instead of Ay, our monitor would always signal a violation , whereas a NSU approach would not signal a fault when the conditional guard evaluates to false (secret ≤ 0). Theorem 2 . The abstract monitoring semantics DLcM]σ of the purely-dynamic monitor is sound. 
For all commands c, for all σ, σ 0 ∈ States such that σ 0 = JcKσ, it holds that: ] / / σ0 , [secret → 1; public → 0], ∆0 = ∅ assume Apublic ; / / [secret → 1; public → 0], {Apublic} i f ( s e c r e t > 0 ) then { public := public + 1; } / / [secret → 1; public → 1], {Apublic, B(secret > 0)} else { skip ; } / / [secret → 1; public → 1], ∅ y : = 0 ; / / [secret → 1; public → 1, y → 0], {Ay} assert Ay ; [secret → 1; public → 1, y → 0], {Ay} B. Hybrid Monitor with the Modified Variables DLcM]σ To achieve more precision, we need non-trivial static analysis to provide information about minor runs that do not follow the same execution path as the major run (“high branches”). The next monitor relies on an over-approximation of variables that are modified by a command. Similarly to the No-Sensitive Upgrade (NSU) approach [59, Section 3.2] [8], the purely-dynamic monitor of Figure 2 is relatively efficient, at the cost of precision (i.e., it rejects many secure executions). However, notice that both purely-dynamic approaches are incomparable. Indeed, consider for instance the program in Listing 1 where only variable secret is high. An NSU approach would stop the program when the assignment to public is executed, signaling a possible security violation. In contrast, our purely dynamic monitor simply forgets all known relational formulas at the merge point of the conditional; in the case of simple agreements, this is tantamount to labelling Modifiable variables Mod Assumption: Mod ∈ Com → P(V ar) satisfies the following: for all c, id, if there is σ with JcKσ 6= ⊥ and JcKσ(id) 6= σ(id) then id ∈ Mod(c). For our simple language, the obvious implementation is to 8 return the set of assignment targets. Instead of conservatively forgetting about all known relational formulas at the merge point of high conditionals, the monitor will be able to retain information about variables that are modified in neither conditional branches, similarly to existing hybrid monitors [42], [52]. Guided by the soundness condition, we derive an abstract static semantics that leverages modified variables. leverages the abstract static analysis of Definition 2 and Lemma 6, in order to treat more precisely high branching commands. Although we avoid relying on explicit tracking of a security context, for reasons of precision, that is useful for another purpose: enforcing security policies such as robust declassification [60]. Recall the strong conjecture following Theorem 1. One can replay the derivation of all three of our monitors, starting from the alternative ideal monitor in Appendix A, with minimal changes. We thus obtain three alternative monitors that track a security context and are conjectured to be sound for the semantics in [23]. Definition 2 . For all commands c and c0 , the abstract static semantics M⦃c⦄]c0 for modified variables is given by: M⦃c⦄]c0 ∈ P (L) → P (L)     L ] M⦃c⦄c0 ∆ ,   Φ ∈ ∆ | ∀id ∈ fv(Φ),    id 6∈ Mod(c) ∪ Mod(c0 ) C. Hybrid Monitor with Intervals if ∆ = if ∆ = L We now derive a hybrid monitor that relies on a static analysis approximating the range of values each variable may take. This allows to infer agreements even for locations that are modified in high branches. otherwise The soundness condition is similar to Lemma 5, adapted to the extra parameter. Interval analysis ⦃ − ⦄],Int Assumptions: StatesInt is a set of abstract environments mapping variables to intervals (and StatesInt , StatesInt ∪ { }). 
⦃c⦄],Int ∈ StatesInt → StatesInt is an interval static For a given set ∆ of relational formulas that hold between analysis satisfying: αInt ◦ ⦃c⦄ ◦ γ Int ≤ ˙ ],Int ⦃c⦄],Int , with an initial major state σ and every initial minor state τ ∈ Σ, (αInt , γ Int ) being the Galois connection enabling its derivation: γ Int this abstract static semantics deduces a set ∆0 of relational ← −− −− −→ − (StatesInt ; ≤],Int ) (P (States); v) 0 0 − − formulas that hold between an output major state σ = Jc Kσ αInt and every output minor state τ 0 = JcKτ . Intuitively, the set ∆0 is deduced from ∆ by keeping only the relational formulas that Interval static analysis is standard [28]; we present one in cannot be falsified since their free variables are not modified. full detail in the long version of the paper. Figure 3 introduces the abstract semantics MLcMσ of the Unlike the previous hybrid monitor relying on the modified hybrid monitor relying on a static analysis of modified variables. variables, this monitor reasons on the values variables may Most transfer functions are essentially the same as the ones take, in order to establish relational formulas in the case of introduced for the purely-dynamic monitor. Thus, we refer high branching commands. Similarly to the condition stated in to Figure 2 and redefine only the ones that are different, Lemmas 5 and 6, the abstract static analysis ⦃c⦄] must satisfy: namely conditionals and loops. The main difference compared ] ασ0 ◦ ⦃c⦄ ◦ γσ v̇ ⦃c⦄] to the purely-dynamic monitor resides in the treatment of high branchings. For a high conditional, the abstract static analysis Let us derive such an abstract static analysis by relying on the of modified variables enables the hybrid monitor to deduce interval analysis ⦃c⦄],Int . Observe that that if a relational formula Φ holds before the conditional command and if its free variables are modified in neither ασ0 ◦ ⦃c⦄ ◦ γσ ] conditional branches, then Φ also holds after the execution v̇ Hsince ασ0 , ⦃c⦄ are monotone, γ Int ◦ αInt is extensiveI of the conditional command. This behaviour is similar to the ασ0 ◦ γ Int ◦ αInt ◦ ⦃c⦄ ◦ γ Int ◦ αInt ◦ γσ treatment of high conditionals by existing hybrid information ] flow monitors [42], [52]. However, our hybrid monitor does not v̇ Hsince ασ0 , γ Int are monotone, ⦃c⦄],Int is soundI rely on labelling the program counter with a security context in ασ0 ◦ γ Int ◦ ⦃c⦄],Int ◦ αInt ◦ γσ (9) order to track implicit flows. This is similar to Besson et al.’s Consequently, we can leverage an interval static analysis approach [16]; it facilitates better precision, as they show and to derive a monitor tracking relational formulas, provided we see in the next subsection. that we derive an interface between the two abstractions. Theorem 3 soundness of ML−M]− . The hybrid monitor First, we have to approximate the operator αInt ◦ γσ which MLcM]σ is sound in the sense of Equation (4): for all c, σ: translates relational formulas — that holds wrt. a major state ] σ — to interval constraints over variables. Second, we need to ασ0 ◦ LcMσ ◦ γσ v̇ MLcM]σ where σ 0 = JcKσ approximate the operator ασ0 ◦ γ Int which translates interval The derivation proof of this monitor is similar to the constraints over variables to relational formulas — that holds derivation of the previous purely-dynamic monitor. It mostly wrt. a major state σ 0 . Lemma 6 soundness of M⦃ − ⦄]− . For all c, c0 , σ, σ 0 such ] that σ 0 = Jc0 Kσ, it holds that: ασ0 ◦ ⦃c⦄ ◦ γσ v̇ M⦃c⦄]c0 . 
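The static ingredient of the modified-variables monitor is easy to picture. Continuing the toy encoding of the earlier sketch (expressions and formulas as tuples, fv as defined there), the following is our own illustration, with commands of the simple while language encoded as nested tuples; mod is the obvious syntactic over-approximation mentioned above (the set of assignment targets), and static_mod implements the formula filter of Definition 2 on proper ∆.

```python
# Sketch under our own AST encoding (not the paper's):
# ("skip",), ("assign", x, e), ("seq", c1, c2), ("if", b, c1, c0),
# ("while", b, c), ("assume", phi), ("assert", phi).

def mod(c):
    """Mod(c): over-approximation of the variables a command may modify,
    here simply all assignment targets occurring in c."""
    tag = c[0]
    if tag == "assign":
        return frozenset([c[1]])
    if tag == "seq":
        return mod(c[1]) | mod(c[2])
    if tag == "if":
        return mod(c[2]) | mod(c[3])
    if tag == "while":
        return mod(c[2])
    return frozenset()                       # skip, assume, assert

def static_mod(delta, c, c_other):
    """Definition 2 on proper Delta: keep exactly the formulas whose free
    variables are modified in neither command."""
    killed = mod(c) | mod(c_other)
    return frozenset(phi for phi in delta if not (fv(phi[1]) & killed))
```

Plugged into the generic approximation of Equation (8), this yields the high-conditional case of Figure 3: the monitored branch's result is joined (t], that is, intersected) with the formulas that static_mod preserves, so agreements on variables written in neither branch survive the merge.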
9 ] Lif b  then ( c1 else c0 Mσ ∆ ,  Lc1 M]σ (∆ u] {Bb})     Lc M] (∆ u] {Bb}) t] {Φ ∈ ∆ | fv(Φ) ∩ (Mod(c ) ∪ Mod(c )) = ∅} 1 0 ( 1 σ  Lc0 M]σ (∆ u] {B¬b})     Lc M] (∆ u] {B¬b}) t] {Φ ∈ ∆ | fv(Φ) ∩ (Mod(c ) ∪ Mod(c )) = ∅} 0 σ 1 0 if ∆ ⇒] Ab otherwise if ∆ ⇒] Ab otherwise if JbKσ if ¬JbKσ   ] v̇ ] Lwhile b do cM]σ ∆ , snd (lfp4̇× G )(σ, ∆) λ(σ,∆).(⊥,L)  σ,    σ, ∆ u] {B¬b}  ] G ] (w] )(σ, ∆) , ]  σ, {Φ ∈ ∆ | fv(Φ) ∩ Mod(c) = ∅} t ∆ u {B¬b}     ] ] w JcKσ, Lif b then c else skipMσ ∆ Figure 3. if ¬JbKσ and ∆ = if ¬JbKσ and ∆ ⇒] Ab otherwise if ¬JbKσ otherwise Hybrid monitor ML−M]− using modified variables These two operators αInt ◦ γσ and ασ ◦ γ Int are similar defines a Granger’s reduced product for the Cartesian product to Granger’s reduced product operators [26], [36] over the of intervals and relational formulas, in Lemma 8. Cartesian product of both intervals and relational formulas, as A Granger’s reduced product tointσ toforσ we shall explain. Since combining different static analyses independently by relying on a Cartesian product does not yield optimal results in tointσ (ı, ) , ı toforσ ( , ∆) , ∆ general, abstract interpretation relies on a notion of a reduced products [29], in order to enable the sharing of information tointσ (ı, ∆) , u],Int tointσ (ı, {Φ}) between the different abstractions and gain more precision. Φ∈∆ Let (ασInt , γσInt ) be the Galois connection associated with tointσ (ı, {Ae}) , ı u],Int grd],Int with v , JeKσ the Cartesian product of both abstractions: e=v (ı) Int Int Int Int γσ (ı, ∆) , γ (ı) u γσ (∆) and ασ (Σ) , (α (Σ), ασ (Σ)). tointσ (ı, {Bb}) , ı u],Int grd],Int (ı) max([a, b]) , b b Definition 3 . A Granger’s reduced product for the Cartesian abstraction StatesInt × P (L) is a pair of operators tointσ (ı, {Bb ⇒] Ae}) , ı min([a, b]) , a tointσ ∈ StatesInt × P (L) → StatesInt and toforσ ∈ StatesInt × P (L) → P (L), parametrised by a state σ ∈ toforσ (ı, ∆) , ∆ u] {Ax | min(ı(x)) = max(ı(x)) = σ(x)} States and satisfying two conditions: Int Int • Soundness: γσ (tointσ (ı, ∆), ∆) = γσ (ı, ∆) and Int Int γσ (ı, toforσ (ı, ∆)) γσ (ı, ∆) ],Int • Reduction: tointσ (ı, ∆) ≤ ı and toforσ (ı, ∆) v] ∆ Lemma 8 . For all σ ∈ States, the pair of operators tointσ , toforσ is a Granger’s reduced product. Lemma 7 . For all σ ∈ States, any pair of operators that is a The proof derives the definition from the required properties. Granger’s reduced product (tointσ , toforσ ) for the Cartesian Int The operator tointσ reduces an interval environment ı by abstraction States × P (L) provides a sound approximation accounting for the additional constraint that a set of relational of the interface between intervals and relational formulas: formulas must hold wrt. to a state σ. For instance, reducing ı ˙ ],Int λ∆. tointσ ( , ∆) and αInt ◦ γσ ≤ to account for a set of constraints ∆ amounts to computing the ] intersection over the reduced interval environments wrt. each ασ ◦ γ Int v̇ λı. toforσ (ı, ) formula Φ ∈ ∆. Also, reducing ı to account for the constraint The proof of this result applies to any abstraction and is that Ae holds wrt. to a major state σ amounts to reducing ı to not limited to an interval analysis. In a nutshell, for any off- account for the additional constraint that expression e evaluates the-shelf static analysis, we can define a Granger’s reduced to a particular value JeKσ that is determined by the major product for its Cartesian product with the relational formulas in state σ. 
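Before turning to the abstract guard operator, here is how the two reduction operators can be pictured on a toy interval environment, a dictionary from variables to (lo, hi) pairs. This is again a sketch with simplifications of our own: only agreements on plain variables are used to reduce intervals, every other formula is ignored (which is sound, just less precise than the operators of Lemma 8), and the translation back only re-establishes agreements on single variables.

```python
# Sketch only: our own simplification of toint_sigma / tofor_sigma.

NEG_INF, POS_INF = float("-inf"), float("inf")

def meet(i1, i2):
    """Abstract meet of two intervals (may become empty, i.e. lo > hi)."""
    return (max(i1[0], i2[0]), min(i1[1], i2[1]))

def toint(sigma, iota, delta):
    """Reduce iota with the relational formulas delta, read with respect to
    the major state sigma: an agreement Ax pins x to the value sigma[x]."""
    out = dict(iota)
    for phi in delta:
        if phi[0] == "A" and phi[1][0] == "var":
            x = phi[1][1]
            out[x] = meet(out.get(x, (NEG_INF, POS_INF)), (sigma[x], sigma[x]))
    return out

def tofor(sigma, iota, delta):
    """Translate interval facts back into agreements: whenever iota pins x to
    the single value sigma(x), every minor state agrees with sigma on x."""
    gained = {("A", ("var", x)) for x, (lo, hi) in iota.items()
              if lo == hi == sigma.get(x)}
    return frozenset(delta) | gained         # u] is set union on formula sets
```

On the program of Listing 2, toint turns Aseed into the singleton interval [3, 3] before the statically analysed branch, and once the branch's interval analysis maps seed to [4, 4], tofor re-establishes Aseed against the major state, in which seed is also 4.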
This reduction can be achieved by using an abstract ],Int that over-approximates the concrete operator order to interface this analysis with our monitor, and guarantee operator grdb b soundness by Equation (9). Therefore, we introduce in the grd . The soundness of grd],Int guarantees the soundness of b following a pair of operators (tointσ , toforσ ), that we prove operator tointσ , and the abstract meet with the initial interval 10 environment also guarantees the reduction condition stated in Definition 3. The operator toforσ reduces interval constraints over variables to a set of relational formulas that hold wrt. a major state σ. Whenever an interval environment ı maps a variable x to a singleton value that matches σ(x), we can deduce that all the minor states abstracted by ı agree with σ on the value of x. Notice that toforσ does not iterate over the finite lattice P(L), though in theory that can be done in order to determine 0 all relational formulas that hold. A smarter way to improve the 1 precision of toforσ would take hints from the monitored branch, 2 3 by trying to prove that some particular relational formulas are 4 satisfied wrt. the major state. This would improve precision 5 6 for the statically analysed branch. 7 Having derived a sound approximation of the interface 8 between intervals and relational formulas, we can resume the 9 derivation started in Equation (9), using Lemmas 7 and 8, to 10 obtain a sound abstract static semantics. 11 12 Definition 4 . For all σ, σ 0 ∈ States, the abstract static 13 14 semantics I⦃c⦄]σ,σ0 based on ⦃c⦄],Int is given by: I⦃c⦄]σ,σ0 ∈ P (L) → P (L)     L ] I⦃c⦄σ,σ0 ∆ ,  λı. toforσ0 (ı, ) ◦    ⦃c⦄],Int ◦ tointσ ( , ∆) if ∆ = if ∆ = L analysis determines that all minor states agree with the major state on the value of variable seed — at the merge point of the conditional, which corresponds to labelling this variable as low. Our hybrid monitor is similar in spirit to the hybrid monitor of Besson et al. [16] that is also able to deduce that variable seed does not convey any knowledge about sensitive data, by relying on a constant propagation static analysis. / / σ0 , [seed → 3; secret conf → 1 . . .], ∆0 = ∅ assume Aseed ; / / [seed → 3; secret conf → 1 . . .], {Aseed} a : = secret base ; i f ( s e c r e t c o n f ) then { b : = secret number ; / / [seed → 3 . . .], {Aseed, Bsecret conf} / / Complicated hash computation on t h e seed , a & b . r : = seed ∗ a ∗ b ; seed : = 1 + seed ; / / [seed → 4 . . .], {Aseed, Bsecret conf} } else { / / value-sensitivity: seed is initialised to a singleton interval  / / ı , seed → [3, 3]; r → [−∞, +∞] . . . / / Complicated hash computation on t h e seed & a .   r : = seed ∗ a ∗ 4 2 ; / / seed → [3, 3]; r → [−∞, +∞] .. . seed : = 1 + seed ; / / seed → [4, 4]; r → [−∞, +∞] . . . } / / [seed → 4 . . .], {Aseed} assert Aseed ; / / [seed → 4 . . .], {Aseed} Listing 2. Example program inspired by [47] VI. R ELATED W ORK Abstract interpretation has been used for static analysis of noninterference in a number of works including otherwise Kovàcs et al. [40], where security is explicitly formulated as 2-safety. Giacobazzi and Mastroeni show how abstract Lemma 9 soundness of I⦃ − ⦄]−,− . For all c, c0 and σ, σ 0 interpretation can be used to reason about downgrading policies in States it holds that: and observational power of the attacker [34]. Here we focus ] ] on related work that addresses the challenge areas for IF 0 0 ασ0 ◦ ⦃c⦄ ◦ γσ v̇ I⦃c⦄σ,σ0 where σ = Jc Kσ monitoring identified in Sec. 
I: expressive policy, precision Figure 4 introduces the abstract monitoring semantics we versus performance, and assurance of correctness. To facilitate expression of the range of practical IF policies, derive for the hybrid monitor relying on intervals. Most abstract transfer functions are similar to the ones introduced for the researchers have proposed language based approaches that use purely-dynamic monitor. Thus, we refer to Figure 2 and redefine types and other program annotations to label channels and for only the ones that are different, namely conditionals and loops. downgrading directives [6], [11], [20], [21], [48], [53], [57], The main difference concerns branching commands, since we [60]. In various ways, policies can refer to meaningful events and conditions in terms of program control and data state rely on the novel abstract static analysis of Definition 4. (including instrumentation to express policy [20]). Relational ] Theorem 4 . The abstract monitoring semantics ILcMσ of the Hoare logic features assertions that express agreement or “low hybrid monitor, introduced in Figure 4, is sound: For all σ, σ 0 ∈ indistinguishability”, enabling direct specification of conditional 0 States such that σ = JcKσ, it holds that: and partial dependency properties [1], [4], [14], [50]. A strength ] ] of the logic approach is that it can offer precise reasoning about ◦ ◦ 0 ασ LcMσ γσ v̇ ILcMσ complex data and control structures [3], [12], [50]. For endThis monitor is sensitive to runtime values. It deduces to-end semantics of IF policies, epistemic formulations are new relational formulas for high branching commands, by effective [5]–[7], [10] and have been connected with relational comparing the major state with the results of an interval static logic [11]. Such a connection is evident in the IF property analysis. Consider for instance the program in Listing 2, that of [23], although it is not formalized there. is inspired by Müller and al. [47]. Despite being modified in The generality and expressiveness of relational logic comes at conditional branches that depend on a high guard, variable the cost that it does not inherently enforce desirable constraints seed does not leak sensitive data. Unlike the hybrid flow- on policy. For example, consider the policy that the initial sensitive monitors of Le Guernic et al. [42] and Russo and value of card number is secret, but the low four digits may Sabelfeld [52], as well as the ones we introduce previously be released upon successful authentication between merchant in Sections V-A and V-B, our monitor relying on an interval and customer. The relevant condition could be asserted at a 11 ] Lif b  then ( c1 else c0 Mσ ∆ ,  Lc1 M]σ (∆ u] {Bb}) if ∆ ⇒] Ab    ],Int  Lc M] (∆ u] {Bb}) t] λı. tofor ],Int ◦ grd ◦ tointσ ( , ∆) otherwise Jc1 Kσ (ı, ) ◦ ⦃c0 ⦄ ¬b ( 1 σ ] ]  Lc0 Mσ (∆ u {B¬b}) if ∆ ⇒] Ab      Lc M] (∆ u] {B¬b}) t] λı. tofor (ı, ) ◦ ⦃c ⦄],Int ◦ toint ( , ∆) otherwise Jc0 Kσ 0 σ σ 1 if JbKσ if ¬JbKσ   ] 4̇×v̇ Lwhile b do cM]σ ∆ , snd (lfpλ(σ,∆).(⊥,L) G ] )(σ, ∆)  σ,    σ, ∆ u] {B¬b} G ] (w] )(σ, ∆) ,  σ, ∆ t] λı. toforσ (ı, ) ◦ ⦃while b do c⦄],Int     ] w JcKσ, Lif b then c else skipM]σ ∆ Figure 4. 
◦  tointσ ( , ∆) u] {B¬b} if ¬JbKσ and ∆ = if ¬JbKσ and ∆ ⇒] Ab otherwise if ¬JbKσ otherwise Hybrid monitor IL−M]− with an interval analysis point in the code where the release takes place, together with work relies on the framework of abstract interpretation for assumed agreement on the expression ccnum%10000, but the the systematic design and derivation of security monitors. We policy analyst could also assert that ccnum refers to its initial hope our work will pave the way for the formal verification value at this point — assert B(ccnum = oldcc) where oldcc of practical security monitors, by reusing some of the recent is a variable set to the initial value of ccnum and not changed. developments in formal verification of abstract interpretation Concerning precision of purely dynamic monitors, the basic analysers [32], [38], [51]. technique of No Sensitive Upgrade (NSU) [8], [59] has been VII. D ISCUSSION AND C ONCLUSION refined [9], [17] and several implementations exist [18], [24], In this paper we propose an ideal monitor as a variation of [37], [54]. Experience suggesting NSU is too imprecise — a collecting semantics. We prove that this monitor enforces rejecting many secure executions — led Hedin et al. [37] to noninterference, for policies expressed using relational formuaugment their monitor with static analysis of modified locations las, to which many other policy formalisms can be translated. that heuristically suggests label upgrades, while relying on We also conjecture (following Theorem 1) a relation with the NSU for soundness. Hybrid monitors typically feature static security property proposed in [23] and taking into account analysis of modified variables. For the simple while language, intermediate assumptions (for downgrading) and assertions (for a simple approximation is to find all assignment targets [42], modularity and intermediate output). We would like to prove [52]. For more complex data structure this requires memory that conjecture, and strengthen it to account for intermediate abstraction [46]. For more precision it is better to take into assertions in divergent major runs. We believe this can be done account the actual low values [41]. The term “value-sensitive” using either transition semantics or trace based denotational is used by Hedin et al. [37] and Bello et al. [13] for the use semantics. of low values in the major run to determine the observable The ideal monitor is a specification that serves for deriving modifications. monitors that are sound by construction. We derive three As our third monitor shows, there is a second important way monitors that illustrate various ways of tracking low indisin which precision can be gained by sensitivity to low values. tinguishability and other relational formulas between states for This is also done by Besson et al. [16] who propose a generic high branchings. Although we provide a systematic approach hybrid monitor for quantitative information flow by tracking the by which precision can be fine tuned, we do not systematically knowledge about sensitive data that is stored in each program evaluate the precision of the derived monitors. Several notions variable. Their monitor is also parametrised by a static analysis. of evaluation have been proposed in the literature, using terms The monitors we derive by relying on abstract interpretation such as permissiveness and transparency. Bielova and Rezk [19] share some similarities with their monitors. Indeed, they also disentangle these notions. 
In their terms, precision in our model a purely-dynamic monitor by a static analysis that replies sense — allowing more secure executions — is termed “true with top, forcing the monitor to treat high branching commands transparency”. pessimistically. Additionally, their monitors do not rely on A benefit of monitoring, relative to type systems or other labelling the program counter with a security context. static analyses for security, is the potential to leverage runFor formal assurance of IF monitor correctness, several time values for precision. Almost all prior work uses valueworks provide detailed formal proofs [37], [54]; some are sensitivity, if at all, as a means of improving precision to machine checked [15], [33], including Besson et al. [16]. Our determine modifiable locations. An exception is Besson et 12 al. [16] who rely on constant propagation to delimit implicit flows based on actual values. A similar result is provided by our value-sensitive hybrid monitor using interval analysis. We rely on a classical notion in abstract interpretation, reduced product [29], to formalize the interactions between both the dynamic and the static part of the monitor. Lemma 7 and Equation (9) in particular are key results. By defining a Granger’s reduced product, we can immediately leverage other off-the-shelf static analyses, such as polyhedra [31], trace partitioning [45], and constant propagation. Existing monitors already incorporate such complex static analyses that are spawned during or before the monitored execution [16], [43], [44]. Two challenges remain for the adoption of IF monitors: scaling them to complex and richer languages, and lowering the incurred overhead. We believe this paper makes a dent wrt. the first dimension, by linking the design of information flow monitors to the design of static analyses by the wellestablished theory of abstract interpretation [28]. As to the second challenge, we would like to investigate what static information can be pre-computed and how a monitor can take advantage of such information (beyond the easy case of modified variables in a toy language). Acknowledgments: Thanks to Anindya Banerjee, Andrey Chudnov, and the anonymous reviewers for helpful feedback. The authors were partially supported by NSF award CNS1228930. R EFERENCES [1] T. Amtoft and A. Banerjee, “Information Flow Analysis in Logical Form.” Static Analysis Symposium, pp. 100–115, 2004. [2] ——, “Verification condition generation for conditional information flow,” in ACM Workshop on Formal Methods in Security Engineering, 2007, pp. 2–11. [3] T. Amtoft, J. Hatcliff, and E. Rodrı́guez, “Precise and automated contract-based reasoning for verification and certification of information flow properties of programs with arrays,” in European Symposium on Programming, ser. LNCS, vol. 6012, 2010. [4] T. Amtoft, J. Hatcliff, E. Rodrı́guez, Robby, J. Hoag, and D. Greve, “Specification and checking of software contracts for conditional information flow,” in Formal Methods, ser. LNCS, vol. 5014, 2008. [5] A. Askarov and A. C. Myers, “Attacker control and impact for confidentiality and integrity,” Logical Methods in Computer Science, vol. 7, no. 3, 2011. [6] A. Askarov and A. Sabelfeld, “Gradual release: Unifying declassification, encryption and key release policies,” in IEEE Symposium on Security and Privacy, 2007. [7] ——, “Tight enforcement of information-release policies for dynamic languages,” in IEEE Computer Security Foundations Symposium, 2009. [8] T. H. Austin and C. 
Flanagan, “Efficient purely-dynamic information flow analysis,” in ACM Workshop on Programming Languages and Analysis for Security, vol. 44, no. 8, Aug. 2009, pp. 20–31. [9] ——, “Permissive Dynamic Information Flow Analysis,” in ACM Workshop on Programming Languages and Analysis for Security. ACM, 2010, pp. 1–12. [10] M. Balliu, M. Dam, and G. Le Guernic, “Epistemic temporal logic for information flow security,” in ACM Workshop on Programming Languages and Analysis for Security (PLAS), 2011. [11] A. Banerjee, D. A. Naumann, and S. Rosenberg, “Expressive Declassification Policies and Modular Static Enforcement,” in IEEE Symposium on Security and Privacy. IEEE, 2008, pp. 339–353. [12] B. Beckert, D. Bruns, V. Klebanov, C. Scheben, P. H. Schmitt, and M. Ulbrich, “Information flow in object-oriented software,” in LogicBased Program Synthesis and Transformation LOPSTR, ser. LNCS, no. 8901, 2014. 13 [13] L. Bello, D. Hedin, and A. Sabelfeld, “Value sensitivity and observable abstract values for information flow control,” in Logic for Programming, Artificial Intelligence, and Reasoning (LPAR), 2015, pp. 63–78. [14] N. Benton, “Simple relational correctness proofs for static analyses and program transformations,” in ACM Symposium on Principles of Programming Languages, 2004. [15] L. Beringer, “End-to-end multilevel hybrid information flow control,” in Asian Symposium on Programming Languages and Systems (APLAS), ser. LNCS, 2012, vol. 7705, pp. 50–65. [16] F. Besson, N. Bielova, and T. Jensen, “Hybrid information flow monitoring against web tracking,” in IEEE Computer Security Foundations Symposium. IEEE, 2013, pp. 240–254. [17] A. Bichhawat, V. Rajani, D. Garg, and C. Hammer, “Generalizing permissive-upgrade in dynamic information flow analysis,” in ACM Workshop on Programming Languages and Analysis for Security (PLAS), 2014. [18] ——, “Information flow control in WebKit’s JavaScript bytecode,” in Principles of Security and Trust (POST), 2014, pp. 159–178. [19] N. Bielova and T. Rezk, “A Taxonomy of Information Flow Monitors,” in POST, 2016, to appear. [20] N. Broberg and D. Sands, “Flow Locks: Towards a Core Calculus for Dynamic Flow Policies,” in European Symposium on Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 180–196. [21] N. Broberg, B. van Delft, and D. Sands, “Paragon for Practical Programming with Information-Flow Control,” in Asian Symposium on Programming Languages and Systems, 2013, pp. 217–232. [22] D. Cachera and D. Pichardie, “A Certified Denotational Abstract Interpreter,” in Interactive Theorem Proving (ITP), 2010, pp. 9–24. [23] A. Chudnov, G. Kuan, and D. A. Naumann, “Information Flow Monitoring as Abstract Interpretation for Relational Logic,” in IEEE Computer Security Foundations Symposium. IEEE, 2014, pp. 48–62. [24] A. Chudnov and D. A. Naumann, “Inlined information flow monitoring for JavaScript,” in ACM SIGSAC conference on Computer and Communications Security, 2015, pp. 629–643. [25] M. R. Clarkson and F. B. Schneider, “Hyperproperties.” Journal of Computer Security, vol. 18, no. 6, pp. 1157–1210, 2010. [26] A. Cortesi, G. Costantini, and P. Ferrara, “A Survey on Product Operators in Abstract Interpretation.” Festschrift for Dave Schmidt, vol. 129, pp. 325–336, 2013. [27] P. Cousot, “The calculational design of a generic abstract interpreter,” in Calculational System Design, M. Broy and R. Steinbrüggen, Eds. NATO ASI Series F. IOS Press, Amsterdam, 1999, vol. 173, pp. 421–506. [28] P. Cousot and R. 
Cousot, “Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints,” in ACM Symposium on Principles of Programming Languages, New York, New York, USA, Jan. 1977, pp. 238–252. [29] ——, “Systematic design of program analysis frameworks,” in ACM Symposium on Principles of Programming Languages. ACM, 1979, pp. 269–282. [30] P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival, “Combination of abstractions in the ASTRÉE static analyzer,” in Asian Computing Science Conference (ASIAN), 2006, pp. 272–300. [31] P. Cousot and N. Halbwachs, “Automatic Discovery of Linear Restraints Among Variables of a Program.” in ACM Symposium on Principles of Programming Languages, 1978, pp. 84–96. [32] D. Darais, M. Might, and D. V. Horn, “Galois transformers and modular abstract interpreters: reusable metatheory for program analysis,” in Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2015. [33] A. A. de Amorim, N. Collins, A. DeHon, D. Demange, C. Hritcu, D. Pichardie, B. C. Pierce, R. Pollack, and A. Tolmach, “A verified information-flow architecture,” in ACM Symposium on Principles of Programming Languages, 2014, pp. 165–178. [34] R. Giacobazzi and I. Mastroeni, “Adjoining classified and unclassified information by abstract interpretation,” Journal of Computer Security, vol. 18, no. 5, 2010. [35] P. Granger, “Static analysis of arithmetical congruences,” International Journal of Computer Mathematics, 1989. [36] ——, “Improving the Results of Static Analyses Programs by Local Decreasing Iteration.” in Foundations of Software Technology and Theoretical Computer Science, vol. 652, 1992, pp. 68–79. [37] D. Hedin, L. Bello, and A. Sabelfeld, “Value-sensitive hybrid information flow control for a javascript-like language,” in IEEE Computer Security Foundations Symposium, 2015, pp. 351–365. [38] J.-H. Jourdan, V. Laporte, S. Blazy, X. Leroy, and D. Pichardie, “A Formally-Verified C Static Analyzer,” in ACM Symposium on Principles of Programming Languages. New York, New York, USA: ACM Press, 2015, pp. 247–259. [39] M. Kindahl, “The galois connection in interval analysis,” Uppsala University, Sweden, Tech. Rep., 1994, http://user.it.uu.se/∼matkin/interval.ps.gz. [40] M. Kovács, H. Seidl, and B. Finkbeiner, “Relational abstract interpretation for the verification of 2-hypersafety properties,” in ACM SIGSAC conference on Computer and Communications Security. New York, New York, USA: ACM Press, 2013, pp. 211–222. [41] G. Le Guernic, “Precise dynamic verification of confidentiality,” in Proceedings of the 5th International Verification Workshop in connection with IJCAR, 2008. [42] G. Le Guernic, A. Banerjee, T. P. Jensen, and D. A. Schmidt, “Automatabased confidentiality monitoring,” in Asian Computing Science Conference (ASIAN). Springer-Verlag, Dec. 2006. [43] P. Mardziel, S. Magill, M. Hicks, and M. Srivatsa, “Dynamic enforcement of knowledge-based security policies,” in IEEE Computer Security Foundations Symposium. IEEE, 2011, pp. 114–128. [44] ——, “Dynamic enforcement of knowledge-based security policies using probabilistic abstract interpretation,” Journal of Computer Security, vol. 21, no. 4, pp. 463–532, Jan. 2013. [45] L. Mauborgne and X. Rival, “Trace partitioning in abstract interpretation based static analyzers,” in European Symposium on Programming. Berlin, Heidelberg: Springer-Verlag, Apr. 2005, pp. 5–20. [46] S. Moore and S. 
Chong, “Static Analysis for Efficient Hybrid InformationFlow Control,” in IEEE Computer Security Foundations Symposium. IEEE, 2011, pp. 146–160. [47] C. Müller, M. Kovács, and H. Seidl, “An Analysis of Universal Information Flow Based on Self-Composition,” in IEEE Computer Security Foundations Symposium. IEEE, 2015, pp. 380–393. [48] A. C. Myers, “JFlow: Practical Mostly-Static Information Flow Control.” in ACM Symposium on Principles of Programming Languages, 1999, pp. 228–241. [49] A. C. Myers, N. Nystrom, L. Zheng, and S. Zdancewic, Jif: Java Information Flow, May 2001, software release. http://www.cs.cornell.edu/jif. [50] A. Nanevski, A. Banerjee, and D. Garg, “Dependent type theory for verification of information flow and access control policies,” ACM Trans. Program. Lang. Syst., vol. 35, no. 2, 2013. [51] D. Pichardie, “Modular Proof Principles for Parameterised Concretizations,” in Construction and Analysis of Safe, Secure, and Interoperable Smart Devices Second International Workshop, CASSIS , Revised Selected Papers, G. Barthe, B. Grégoire, M. Huisman, and J.-L. Lanet, Eds. Springer Berlin Heidelberg, 2006, pp. 138–154. [52] A. Russo and A. Sabelfeld, “Dynamic vs. Static Flow-Sensitive Security Analysis,” in IEEE Computer Security Foundations Symposium. IEEE, 2010, pp. 186–199. [53] A. Sabelfeld and D. Sands, “Declassification: Dimensions and principles,” Journal of Computer Security, vol. 17, no. 5, Oct. 2009. [54] J. Santos and T. Rezk, “An information flow monitor-inlining compiler for securing a core of javascript,” in ICT Systems Security and Privacy Protection, ser. IFIP Advances in Information and Communication Technology, 2014, vol. 428. [55] F. B. Schneider, “Enforceable security policies,” ACM Transactions on Information and System Security, vol. 3, no. 1, pp. 30–50, Feb. 2000. [56] P. Shroff, S. F. Smith, and M. Thober, “Dynamic dependency monitoring to secure information flow,” in IEEE Computer Security Foundations Symposium, 2007, pp. 203–217. [57] D. Volpano, C. Irvine, and G. Smith, “A Sound Type System for Secure Flow Analysis,” Journal of Computer Security, vol. 4, no. 2-3, pp. 167– 187, 1996. [58] G. Winskel, The Formal Semantics of Programming Languages: an Introduction. Cambridge, 1993. [59] S. A. Zdancewic, “Programming languages for information security,” Ph.D. dissertation, Cornell University, 2002. [60] S. Zdancewic and A. C. Myers, “Robust declassification,” in IEEE Computer Security Foundations Workshop, 2001, pp. 15–23. 14 A PPENDIX A A LTERNATIVE I DEAL M ONITOR The alternative ideal monitor relies on an alternative collecting semantics. In a nutshell, both collecting semantics are instrumented to keep track of a boolean a, that signals if annotation commands are allowed in the current context. Intuitively, in a low context, the boolean a is set to true, meaning that annotation commands are allowed. Otherwise, the boolean a is set to false, signifying that the current context is high, and disallowing annotation commands. How does the monitoring semantics determine the security context? It simply checks what happens for branching commands: Are we already in a high security context? if not, are there some states that follow a conditional branch that is different from the one taken by the major state? If there are some states that follow a different control path, this means that both conditional branches ought to be treated as high branches. 
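In other words, the flag passed to the two branches is a conjunction. The small sketch below is our own restatement of the condition a ∧ a′ used in Figure 5, with grd_b(Σ) = {τ ∈ Σ | JbKτ = true} as in Figure 6; it also covers the complementary low case stated next.

```python
def branch_flag(a, guard_major, minor_states, guard):
    """Our paraphrase of a /\ a' in Figure 5: annotations remain allowed in
    the branches only if the context is already low (a) and no minor state
    takes a branch different from the major state's (a')."""
    if guard_major:                                        # major run takes then
        strays = [t for t in minor_states if not guard(t)]   # grd_{not b}(Sigma)
    else:
        strays = [t for t in minor_states if guard(t)]        # grd_b(Sigma)
    return a and not strays
```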
If the conditional is in a low security context and no minor states follow a different conditional branch, then both conditional branches are to be treated as low branches. In high conditional branches, both the monitoring semantics and the collecting semantics return an error if an annotation command is encountered. This way, the monitoring semantics implicitly signals an alignment failure, as proposed in [23]. Figures 5 and 6 introduce the alternative ideal monitor and the alternative collecting semantics. We conjecture that this alternative ideal monitor returns an error iff. the monitor that is defined in [23] in terms of a tracking set results in either an alignment failure or an assertion failure. This conjecture remains to be proved. LcMann=a , σ LskipMann=a Σ,Σ σ Lc1 ; c2 Mann=a Σ , Lc2 Mann=a σ Jc1 Kσ Lassert ◦ Lid := eMann=a Σ , {Jid := eKτ | τ ∈ Σ} σ ( {τ ∈ Σ | σpτ |= Φ} if a = true Lassume ΦMann=a Σ, σ if a = f alse Lc1 Mann=a Σ σ ΦMann=a Σ σ , ( Σ if ∀τ ∈ Σ, σpτ |= Φ and a = true otherwise ( Lif b then c1 else c0 Mann=a Σ σ 0 0 ∧a ◦ Lc1 Mann=a grdbσ Σ t ⦃c0 ⦄ann=a ∧ a σ , ann=a ∧ a0 ◦ ∧ a0 ⦃c1 ⦄ grdb Σ t Lc0 Mann=a σ ( (grd¬b (Σ) = ∅) if JbKσ with a0 , (grdb (Σ) = ∅) if ¬JbKσ Lwhile b do cMann=a Σ , snd σ  grd¬b Σ ¬b ◦ grdσ Σ ◦ if JbKσ otherwise   4̇×v̇ lfpλ(σ,Σ).(⊥,∅) G a (σ, Σ) ( if ¬JbKσ σ, ⦃while b do c⦄ann=a Σ with G (w) , λ(σ, Σ). ann=a Σ) otherwise w (JcKσ, Lif b then c else skipMσ a Figure 5. ⦃skip⦄ann=a Σ = Σ ⦃id := e⦄ann=a Σ = {τ [id 7→ JeKτ ] | τ ∈ Σ} ⦃if b then c1 else c2 ⦄ann=a Σ = ⦃c1 ⦄ann=a ( ⦃assert Φ⦄ann=a Σ = Alternative ideal monitor Σ if a if ¬ a ◦ grdb (Σ) t ⦃c2 ⦄ann=a ◦ ⦃c1 ; c2 ⦄ann=a Σ = ⦃c2 ⦄ann=a grd¬b (Σ) ⦃c1 ⦄ann=a Σ ( Σ if a ann=a ⦃assume Φ⦄ Σ= if ¬ a ◦   ann=a ⦃while b do c⦄ann=a Σ = grd¬b lfpv Σ ⦃if b then c else skip⦄ grdb (Σ) , {τ ∈ Σ | JbKτ = true} Figure 6. Alternative collecting semantics 15 A PPENDIX B TABLE OF S YMBOLS States , V ar * Z mappings from vars to integers ⊥ undefined outcome States⊥ , States ∪ {⊥} set of outcomes σ ∈ States proper state σ̂ ∈ States⊥ outcome security violation, or fault P(States) powerset of States P (States) , P(States) ∪ { } Σ ∈ P(States) set of states Σ̂ ∈ P (States) set of states or fault Φ, Ψ ∈ L relational formulas L the set of relational formulas ∆ ∈ P(L) a set of relational formulas P (L) , P(L) ∪ { } ˆ ∈ P (L) a set of relational formulas of fault ∆ JcK ∈ States⊥ → States⊥ denotational sem. of com. JeK ∈ States → Z denotational sem. of exp. ⦃c⦄ ∈ P (States) → P (States) collecting sem. LcMσ ∈ P (States) → P (States) ideal monitor sem. DLcM] purely-dynamic monitor. MLcM] hybrid mon. with modified vars ILcM] hybrid mon. with modified intervals D⦃c⦄] static analysis for purely-dynamic mon. M⦃c⦄]c0 static analysis for hybrid mon. with modified vars I⦃c⦄]σ,σ0 static analysis for hybrid mon. with intervals v set inclusion lifted to P (States) t set union lifted to P (States) u set intersection lifted to P (States) v] “includes” (⊇) lifted to P (L) t] set intersection lifted to P (L) u] set union lifted to P (L) 4 approximation order ασ ∈ P (States) → P (L) abstraction function param. by σ γσ ∈ P (L) → P (States) concretisation function param. by σ αB ∈ States ×P (States) → States ×P (L) generalised ασ γ B ∈ States ×P (L) → States ×P (States) generalised γσ 16 A PPENDIX C C-A   b = grd¬b Σ ∪ gn⦃c⦄◦grd (Σ) [ = (for all Σ, gnΣ = (⦃c⦄ ◦ grdb )(k) (Σ) ) BACKGROUND Collecting Semantics Lemma 1 . The displayed equations define the same semantics as Equation (2). 0≤k≤n−1 grd b ◦ lfp⊆ ∅ λX. 
Σ ∪ ⦃c⦄ grd (X) = lfp⊆ ∅ λX.Σ ∪ ⦃if b then c else skip⦄X Indeed, let the sequence (fnΣ )n≥0 be defined as: = {σ[id 7→ JeKσ] | σ ∈ Σ and Jid := eKσ 6= ⊥} f0Σ , ∅ = {σ[id 7→ JeKσ] | σ ∈ Σ} Σ fn+1 , Σ ∪ ⦃if b then c else skip⦄fnΣ 2 – Case: skip 1 – Case: loops. 1.1 – Let us first prove the following intermediate result: Therefore, by induction on n ∈ N, it holds that fn = gn : - f0Σ = g0Σ = ∅. - let n ∈ N, such that fnΣ = gnΣ . Then: ⦃while b do c⦄Σ = grd lfp⊆ ∅ λX. Σ b  1.2 – Let us now prove that : ⦃id := e⦄Σ , {Jid := eKσ | σ ∈ Σ and Jid := eKσ 6= ⊥}  Σ gn+1 Σ = yn+1 Proof. The proof of equivalence of both collecting semantics is by structural induction on commands. We feature in this proof one simple case (assignments) as well as the most interesting case that is the case of while loops. 1 – Case: assignments ¬b ¬b Σ gn+1 = Σ ∪ ⦃c⦄ ◦ grdb (gn )  ∪ ⦃c⦄ ◦ grd (X) . = (grd¬b (gn ) ⊆ gn ⊆ gn+1 ) Σ ∪ ⦃c⦄ ◦ grdb (gn ) ∪ grd¬b (gn ) Indeed, let the sequence (xΣ n )n≥0 be defined as: = Σ ∪ ⦃if b then c else skip⦄gn (n) xΣ (⊥)(σ) ∈ States | σ ∈ Σ} n , {F Notice that for all σ ∈ Σ, the sequence (F (n) (⊥)(σ))n≥0 converges and is equal to the evaluation of the while loop in the state σ (Jwhile b do cKσ = F (∞) (⊥)(σ)), by definition of the denotational semantics of loops. Let also the sequences (ynΣ )n≥0 and (gnΣ )n≥0 be defined as:  ynΣ , grd¬b gnΣ  Σ gn+1 , Σ ∪ ⦃c⦄ ◦ grdb gnΣ g0Σ , ∅ = (By induction hypothesis) = Σ ∪ ⦃if b then c else skip⦄fnΣ Σ fn+1 1.3 – Finally, ⦃while b do c⦄Σ   b ◦ grd (X) = grd¬b lfp⊆ ⦃c⦄ λX. Σ ∪ ∅ = grd¬b lfp⊆ ∅ λX. Σ ∪ ⦃if b then c else skip⦄X Then, it holds that: = grd¬b Σ ∀Σ ∈ P(States), ∀n ∈ N, xΣ n = yn .  lfp⊆ Σ ⦃if b then c else skip⦄ Indeed, the proof proceeds by induction on n. Σ - xΣ 0 = ∅ = y0 Σ - Let n ∈ N such that: ∀Σ ∈ P(States), xΣ n = yn . Then: (n+1) xΣ (⊥)(σ) ∈ States | σ ∈ Σ} n+1 = {F = grd¬b (Σ) ∪ {F (n) (⊥)(JcKσ) ∈ States | σ ∈ grdb (Σ)} = grd¬b (Σ) ∪ {F (n) (⊥)(τ ) ∈ States | τ ∈ ⦃c⦄ ◦ grdb (Σ)} b ◦grd = grd¬b (Σ) ∪ x⦃c⦄ n (Σ) = (By induction hypothesis) b grd¬b (Σ) ∪ yn⦃c⦄◦grd (Σ) b = (By definition of yn⦃c⦄◦grd (Σ) )   b grd¬b (Σ) ∪ grd¬b gn⦃c⦄◦grd (Σ) 17  A PPENDIX D M ONITORING C OLLECTING S EMANTICS Lemma 2 . For all annotation-free commands c, all σ ∈ States such that JcKσ 6= ⊥, and all sets Σ ⊆ States, it holds that Finally, we deduce that: Lwhile b do cMσ Σ   = grd¬b lfp⊆ ⦃if b then c else skip⦄ Σ LcMσ Σ = ⦃c⦄Σ = ⦃while b do c⦄Σ Proof. The proof is by structural induction on commands. 1 – Cases skip and assignments stem from the definition of the collecting semantics and the monitoring semantics. 2 – Case: sequence c1 ; c2 Let σ, σ 0 ∈ States, such that Jc1 ; c2 Kσ = σ 0 . Then in particular, we know that Jc1 Kσ ∈ States, meaning that it terminates. Lc1 ; c2 Mσ Σ = Lc2 MJc1 Kσ ◦ Notice that if c is an assertion-free command, we have LcMσ Σ ⊆ ⦃c⦄Σ. The proof is exactly the same, and the only difference proceeds by noticing that Lassume ΦMσ Σ ⊆ ⦃assume Φ⦄Σ. Let us now prove the soundness of the monitoring semantics wrt. TINI. Theorem 1 . Let in and out be two sets of variables and c be an annotation-free command. Let command ĉ be defined as assume Ain; c ; assert Aout. For all σ1 , σ10 ∈ States such that JcKσ1 = σ10 , we have LĉMσ1 States 6= iff. Lc1 Mσ Σ = (By induction twice, since Jc1 Kσ ∈ States) ⦃c2 ⦄ ◦ ⦃c1 ⦄Σ ∀σ2 , σ20 ∈ States, JcKσ2 = σ20 ∧ σ1 =in σ2 =⇒ σ10 =out σ20 = ⦃c1 ; c2 ⦄Σ 3 – Case: conditionals Let σ, σ 0 ∈ States such that Jif b then c1 else c2 Kσ = σ 0 Let us also assume that JbKσ = true. Then: LcMσ Σ = Lc1 Mσ ◦ Proof. 
Let σ1 , σ10 ∈ States such that JcKσ1 = σ10 . Assume LĉMσ1 States 6= . Let σ2 , σ20 ∈ States and assume σ1 =in σ2 , and prove σ10 =out σ20 . Therefore, we have: grdb (Σ) ∪ ⦃c2 ⦄ ◦ grd¬b (Σ) (By induction on c1 ) ⦃c1 ⦄Σ ◦ grdb (Σ) ∪ ⦃c2 ⦄ ◦ grd¬b (Σ) Lassume Ain; c; assert AoutMσ1 States = Lc; assert AoutMσ1 = ⦃c⦄Σ = Lassert AoutMσ10 4 – Case: loops Let σ, σ 0 ∈ States such that Jwhile b do cKσ = σ 0 . Since the loop terminates, there exists a smallest k ∈ N∗ such that F (k) (σ) = σ 0 . The natural k is intuitively the number of executed iterations that must be executed before exiting the loop. Therefore: ◦ 6= (by assumption) ◦ Lassume AinMσ1 States LcMσ1 ◦ Lassume AinMσ1 States Notice that σ2 ∈ Lassume AinMσ1 States, since σ1 =in σ2 . Therefore, σ20 ∈ LcMσ1 ◦ Lassume AinMσ1 States, since the monitoring semantics is equivalent to the collecting semantics for annotation-free commands (Lemma 2), and the collecting semantics is the lifting of the denotational semantics over a set of states (Lemma 1 and σ20 ∈ States). This means that Lassert AoutMσ10 {σ20 } = 6 , by monotonicity of the monitoring semantics. Therefore: Lwhile b do cMσ Σ   = snd G (k) (λ(σ, Σ).(⊥, ∅))(σ, Σ) = (Σ0 , L(if b then c1 else skip)(k−1) Mσ Σ)   grd¬b lfp⊆ Σ0 ⦃if b then c else skip⦄ σ10 =out σ20 Notice that we write (c)(k−1) as a shorthand for the sequence of commands c; c; c; c; . . ., where c is sequentially composed with itself k − 1 times. Additionally, by using the same proof as for conditionals, we have: – Case TINI =⇒ LĉMσ1 States 6= : Assume for all σ2 , σ20 ∈ States such that JcKσ2 = σ20 , then σ1 =in σ2 =⇒ σ10 =out σ20 . Prove LĉMσ1 States 6= . Note that: Σ0 = ⦃(if b then c1 else skip)(k−1) ⦄Σ ∀τ ∈ Lassume AinMσ1 States, σ1 =in τ (k−1) = ⦃if b then c1 else skip⦄ Σ Therefore, ∀τ 0 ∈ LcM ◦ Lassume AinMσ1 States, ∃τ ∈ Lassume AinMσ1 States such that τ 0 = JcKτ and σ =in τ . Thus, we deduce by assumption that ∀τ 0 ∈ LcM ◦ Lassume AinMσ1 States, σ 0 =out τ 0 . Consequently, it holds that: Therefore, the fixpoint over Σ0 can be formulated as a fixpoint over Σ: lfp⊆ Σ0 ⦃if b then c else skip⦄ = lfp⊆ Σ ⦃if b then c else skip⦄ Lassert AoutMσ0 18 ◦ LcM ◦ Lassume AinMσ1 States 6= . Lemma 10 The monitoring semantics is monotone. For all major states σ̂ ∈ States⊥ , it holds that: ∀Σ̂, Σ̂0 ∈ P (L), Σ̂ v Σ̂0 =⇒ LcMσ̂ Σ̂ v LcMσ̂ Σ̂0 Proof. The monitoring semantics is monotone, since the collecting semantics is monotone, and the monitoring semantics of both annotations is also monotone. Notice in particular that the monotonicity of assert annotations stems from the extension of the partial order ⊆ to let be the top element of P (L). 19 A PPENDIX E A BSTRACT D OMAIN OF R ELATIONAL F ORMULAS – ασ Lemma 3 . For all σ ∈ States, the pair (ασ , γσ ) is a Galois γσ −− −− − − (P (L); v] ). That is, connection: (P (States); v) ← −− → ασ ∀Σ ∈ P (States), ∀∆ ∈ P (L): ασ (Σ) v ∆ ⇐⇒ Σ v] γσ (∆) LcMσ0 =⇒ ∀τ ∈ Σ, τ ∈ γσ (∆) =⇒ Σ ⊆ γσ (∆) – Case Σ ⊆ γσ (∆) =⇒ ∆ ⊆ ασ (Σ): Σ ⊆ γσ (∆) =⇒ ∀τ ∈ Σ, τ ∈ γσ (∆) =⇒ ∀τ ∈ Σ, ∀Φ ∈ ∆, σpτ |= Φ =⇒ ∀Φ ∈ ∆, Φ ∈ ασ (Σ) =⇒ ∆ ⊆ ασ (Σ) Lemma 4 Soundness conditions. Consider any c, σ, σ 0 such that σ 0 = JcKσ. Equation (4) is equivalent to each of the following: ] LcMσ v̇ LcM]σ ◦ ασ Proof. Let σ0 , σ ∈ Σ such that σ = JcKσ0 . The best abstraction of the state transformer LcMσ0 consists in concretising an abstract state ∆, applying LcMσ0 and then abstracting again using ασ : ασ ◦ LcMσ0 ◦ γσ0 . 
Therefore, the monitoring abstract semantics is sound if it holds that: LcMσ0 ◦ ◦ ◦ γσ0 ◦ ασ0 v̇ γσ γσ ◦ LcM]σ0 ◦ ] LcMσ0 v̇ ασ =⇒ (ασ ασ ◦ LcM]σ0 ◦ ◦ ασ0 ασ0 being extensive) ασ0 ◦ ◦ ◦ γσ γσ reductive) ] LcMσ0 v̇ LcM]σ0 ◦ ◦ LcM]σ0 ] γσ0 v̇ LcM]σ0 . Let us now prove the equivalence of the 3 conditions. 20 ◦ ασ0 ασ0 – ασ ◦ ] LcMσ0 v̇ LcM]σ0 ◦ ασ0 =⇒ (By monotony) ασ ασ =⇒ ∀τ ∈ Σ, ∀Φ ∈ ∆, σpτ |= Φ ◦ LcM]σ0 LcM]σ0 ◦ LcMσ0 ◦ LcMσ0 ◦ ] γσ0 v̇ LcM]σ0 =⇒ (By monotony and ασ0 ασ (Σ) ⊇ ∆ =⇒ ∆ ⊆ {Φ | ∀τ ∈ Σ, σpτ |= Φ} ασ ◦ ασ – Case ασ (Σ) ⊇ ∆ =⇒ Σ ⊆ γσ (∆): ◦ γσ0 v̇ γσ ◦ =⇒ (By monotony) ασ (Σ) ⊇ ∆ ⇐⇒ Σ ⊆ γσ (∆). ασ0 ◦ LcMσ0 v̇ Proof. Notice first that if Σ = , then ασ ( ) v] ∆ implies ∆ = , therefore Σ v γσ (∆). Additionally, v γσ (∆) also implies ∆ = , therefore ασ (Σ) v] ∆. Also, if ∆ = , then both Σ v γσ (∆) and ασ (Σ) v] ∆ are equivalent since they both hold. Let us now assume Σ ∈ States and ∆ ∈ L, and prove that: and γσ0 v̇ γσ =⇒ (By monotony and γσ0 γσ (∆) , {τ ∈ States | ∀Φ ∈ ∆, σpτ |= Φ}. LcM]σ ◦ LcMσ0 γσ ∈ P (L) → P (States) ◦ LcMσ0 =⇒ (By monotony) ασ (Σ) , {Φ | ∀τ ∈ Σ, σpτ |= Φ}. γσ v̇ γσ0 ◦ – ασ ∈ P (States) → P (L) ◦ ] γσ0 v̇ LcM]σ0 LcMσ0 =⇒ ((ασ , γσ ) is a Galois connection) Let us recall the definitions of ασ and γσ : LcMσ ◦ ◦ ] γσ0 v̇ LcM]σ0 ◦ ασ0 ◦ γσ0 being reductive) ◦ γσ0 A PPENDIX F F-A 2.1 – Note that ∀Φ ∈ ∆, such that id 6∈ fv(Φ), it holds that [23, Lemma 2 in Section II.B]: M ONITOR D ERIVATION Purely-Dynamic Monitor σpτ |= Φ =⇒ σ 0 pτ [id 7→ JeKτ ] |= Φ. We first start by proving the soundness of the abstract static analysis that the purely-dynamic monitor relies on. Lemma 5 soundness of D⦃ − ⦄] . For all c, σ, σ 0 ∈ States, ] it holds that: ασ0 ◦ ⦃c⦄ ◦ γσ v̇ D⦃c⦄] . Let us recall the definition of this abstract static semantics: Therefore, ∀Φ ∈ ∆ such that id 6∈ fv(Φ), it holds that: Φ ∈ ασ0  τ ∈ γσ (∆) ∧ (∆ ⇒] Ae) ⦃c⦄ ∈ P (L) → P (L)   if ∆ =  ⦃c⦄] ∆ , L if ∆ = L   ∅ otherwise =⇒ σ[id 7→ JeKσ]pτ [id 7→ JeKτ ] |= Aid =⇒ (since σ 0 = σ[id 7→ JeKσ]) σ 0 pτ [id 7→ JeKτ ] |= Aid It is worthwhile to note that this proof explicitly uses the fact that σ 0 is the result of evaluation of the assignment id:=e on σ. This means that if we were to derive an abstract static semantics tracking relational formulas, we would not be able to deduce Ae, even if ∆ ⇒] Ae. This is because we have no way of relating the abstraction of relational formulas wrt. σ to an abstraction of relational formulas wrt. σ 0 , without additional information. Therefore, if ∆ ⇒] Ae, then it holds that: Proof. Let σ, σ 0 ∈ States. Notice first that ⦃c⦄] = , therefore ◦ ⦃c⦄ ◦ γσ ( ) v] ⦃c⦄] . Let us now assume that ∆ ∈ P(L). If ∆ = L, and since γσ (L) = ∅ (L contains at least an expression e and its negation ¬e, therefore L is concretised to the empty set), we have: ασ0 ◦ ⦃c⦄ ◦ γσ (L) = ασ0 ◦ Aid ∈ ασ0 ⦃c⦄∅ = ασ0 (∅) = L ◦ ασ0 ⦃c⦄ ◦ γσ (∆) v] ∅ ◦ LcMσ ◦ Lid := eMσ ◦ ασ0 ] γσ v̇ DLcM]σ ⦃c⦄ ◦ γσ ( ) = ◦ LskipMσ ◦ ◦ Lid := eMσ = ασ0 ◦ = ασ0 ◦ ◦ γσ (∆) ⊇∆ ασ 0 ◦ ◦ ◦ ⦃c2 ⦄ ◦ grd¬b Lc1 Mσ ◦ grdb ◦ γσ (∆) γσ (∆) ∪  γσ (∆) ◦ ◦ γσ (∆) ∩ ⦃c2 ⦄ ◦ grd¬b ◦ Lc1 Mσ ◦ grdb ◦ ◦ γσ (∆) γσ (∆) ⊇ (by monotonicity and γσ α σ0 ◦ Lc1 Mσ ◦ γσ ◦ ασ grdb ασ ◦ ◦ ασ being extensive) grdb ◦ γσ (∆) ⊇ (by induction hypothesis) Lc1 M]σ γσ (∆) Lid := eMσ grdb ◦ We will now treat both branches separately before merging them. 3.1 – Then-branch: 2 – Case: assignments Let σ, σ 0 ∈ States such that σ 0 = Jid := eKσ. 
Then: ◦ Lc1 Mσ ◦ ασ 0 , LskipM]σ ∆ ασ0 Lif b then c1 else c2 Mσ = ασ0 , LcM]σ γσ (∆) = ασ ◦ = ασ0 1 – Case: skip ασ if ∆ ⇒] Ae otherwise 3 – Case: conditionals Let σ, σ 0 ∈ States such that σ 0 = Jif b then c1 else c2 Kσ. Let us consider the case that JbKσ = true. Then: Proof. Let us first rule out the error case: ασ0 γσ (∆) ◦ , Lid := eM]σ ∆ Theorem 2 . The abstract monitoring semantics DLcM]σ of the purely-dynamic monitor is sound. For all commands c, for all σ, σ 0 ∈ States such that σ 0 = JcKσ, it holds that: ◦ {τ [id 7→ JeKτ ] | ∀Φ ∈ ∆, σpτ |= Φ}. ( Aid ⊇ {Φ ∈ ∆ | id 6∈ fv(Φ)} ∪ ∅ Let us now prove the soundness of the abstract semantics of the purely-dynamic monitor we derive. ασ0 ◦ 2.3 – Finally: Additionally, we can always approximate an element of P(L) by the top element of P(L). Therefore, it holds that: ασ0 {τ [id 7→ JeKτ ] | ∀Φ ∈ ∆, σpτ |= Φ}. 2.2 – Otherwise, it holds that: ] ασ0 ◦ {τ | ∀Φ ∈ ∆, σpτ |= Φ} ◦ Note that grdb {τ [id 7→ JeKτ ] | ∀Φ ∈ ∆, σpτ |= Φ} ασ 21 ◦ ◦ ◦ ◦ γσ (∆) γσ (∆) ⊆ γσ (∆), therefore: grdb ◦ γσ (∆) ⊇ ασ ◦ γσ (∆) ⊇ ∆ Let σ1 = Jc1 Kσ and σ2 = Jc2 Kσ1 . Then: Additionally, since JbKσ = true, and also: ∀τ ∈ grdb ασ2 γσ (∆), JbKτ = true ◦ ◦ grdb ◦ ◦ grdb ασ2 ◦ Lc1 Mσ ◦ grdb ◦ ◦ ⦃c2 ⦄ ◦ grd¬b ασ ◦ ⦃c2 ⦄ ◦ γσ ◦ ασ ◦ ◦ = γσ (∆) ⊇ Lc1 M]σ (∆ ∪ {Bb}). Lc1 Mσ ◦ γσ (∆) ασ1 is extensive) Lc2 Mσ1 ◦ γσ1 ◦ ασ1 ◦ ◦ Lc1 Mσ ◦ γσ (∆) Lc1 M]σ ∆ Lassume ΦMσ ασ ◦ ◦ γσ (∆) {τ ∈ γσ (∆) | σpτ |= Φ} ⊇ ∆ ∪ {Φ} , Lassume ΦM]σ ∆ 6 – Case: assertions ⊇ (by monotonicity and γσ ασ0 ◦ 5 – Case: assumptions γσ (∆) ◦ γσ (∆) Lc2 Mσ1 , Lc1 ; c2 M]σ ∆ 3.2 – Else-branch: ασ0 ◦ Lc2 M]σ1 Therefore, in the then-branch we have: ασ0 ◦ ◦ ⊇ (By induction hypothesis) γσ (∆). γσ (∆) ⊇ ∆ ∪ {Bb} ◦ ◦ ⊇ (γσ1 To sum up, we obtain an approximation of grdb : ασ Lc1 ; c2 Mσ = ασ2 then it holds that: ∀τ ∈ grdb ◦ γσ (∆), σpτ |= Bb. Notice that we explicitly use the assumption that the major state evaluates to true. Therefore: Bb ∈ ασ ◦ ◦ ασ being extensive) grd¬b ◦ ασ γσ (∆) Since the major state σ is assumed to evaluate to true, it holds that: ( L if ∆ ⇒] Ab ασ ◦ grd¬b ◦ γσ (∆) ⊇ ∆ otherwise Lassert ΦMσ ◦ γσ (∆) ( γσ (∆) if ∀τ ∈ γσ (∆), σpτ |= Φ = ασ ◦ otherwise ( ∆ ∪ {Φ} if ∆ ⇒] Φ ⊇ otherwise ◦ 7 – Case: loops Therefore, in the else branch we have: ασ0 ⦃c2 ⦄ ◦ grd¬b   ˙ ⊆ Lwhile (e) do cMσ Σ , snd (lfp4× G)(σ, Σ) λ(σ,Σ).(⊥,∅) γσ (∆) ⦃c2 ⦄ γσ ◦ ασ ◦ grd¬b ◦ γσ (∆) ⊇α ( L if ∆ ⇒] Ab ] ◦ ⊇ ⦃c2 ⦄ λ∆. ∆ otherwise ( L if ∆ ⇒] Ab = ∅ otherwise ◦ σ0 ◦ ◦ with: ◦ G(w) , λ(σ, Σ). ( if ¬JbKσ σ, ⦃while b do c⦄Σ w (JcKσ, Lif b then c else skipMσ Σ) otherwise 3.3 – Finally, we merge both approximations of the thenbranch and the else-branch: and: C , States ×P (States) ∪ {(⊥, ∅), (>, )} ασ0 ◦ Lif b then c1 else c2 Mσ ⊇ ασ0 ◦ Lc1 Mσ ασ0 ◦ grdb ◦ ◦ A , States ×P (L) ∪ {(⊥, L), (>, )} γσ (∆) γσ (∆) ∩ ⦃c2 ⦄ ◦ grd¬b ◦ γσ (∆) ( L if ∆ ⇒] Ab ] ⊇ Lc1 Mσ (∆ ∪ {Bb}) ∩ ∅ otherwise ( Lc1 M]σ (∆ ∪ {Bb}) if ∆ ⇒] Ab = ∅ otherwise ◦ αB ∈ C → A   if σ, Σ = ⊥, ∅ ⊥, L B α (σ, Σ) , >, if σ, Σ = >,   σ, ασ (Σ) otherwise γB ∈ A → C   if σ, ∆ = ⊥, L ⊥, ∅ B γ (σ, ∆) , >, if σ, ∆ = >,   σ, γσ (∆) otherwise , (when JbKσ = true) Lif b then c1 else c2 M]σ ∆ The case where JbKσ = f alse is symmetric. 4 – Case: sequences. 22 1 – First, we prove that (αB , γ B ) is a Galois connection: γB −− −− − − (A ; 4 ×v] ) (C ; 4 × v) ← −− → B α Let (σ, Σ) ∈ C , and (σ , ∆) ∈ A . 1.1 – Assume αB (σ, Σ) 4 × v] (σ 0 , ∆). Prove (σ, Σ) 4 × v γ B (σ 0 , ∆). If (σ, Σ) = (⊥, ∅), then it holds that (σ, Σ) 4 × v γ B (σ 0 , ∆). 
Otherwise, if (σ, Σ) = (>, ), then (σ 0 , ∆) = (>, ), and it holds that (σ, Σ) 4 × v γ B (σ 0 , ∆). Otherwise, if (σ, Σ) ∈ States ×P (States), then either σ = σ 0 , or σ 0 = >. If σ 0 = top, then ∆ = and it holds that (σ, Σ) 4 × v γ B (σ 0 , ∆). If σ = σ 0 , then ασ (Σ) ⊇ ∆ =⇒ Σ v γσ (∆) since (ασ , γσ ) is a Galois connection. Therefore, it also holds that (σ, Σ) 4 × v γ B (σ 0 , ∆). 1.2 – Assume (σ, Σ) 4 × v γ B (σ 0 , ∆). Prove αB (σ, Σ) 4 × v] (σ 0 , ∆). If (σ 0 , ∆) = (⊥, L), then (σ, Σ) = (⊥, ∅), thus it holds that B α (σ, Σ) 4 × v] (σ 0 , ∆). Otherwise, if (σ 0 , ∆) = (>, ), then it holds that B α (σ, Σ) 4 × v] (σ 0 , ∆). Otherwise, if (σ 0 , ∆) ∈ States ×P (L), then σ 4 σ 0 implies that either σ = ⊥, or σ = σ 0 . If σ = ⊥, then Σ = ∅ and it holds that αB (σ, Σ) 4 × v] (σ 0 , ∆). If σ = σ 0 , then Σ v γσ (∆) =⇒ ασ (Σ)v] ∆ since (ασ , γσ ) is a Galois connection. Therefore, it holds that: αB (σ, Σ) 4 × v] (σ 0 , ∆). v̇ ◦ B 2 – Approximating the fixpoint lfp4× λ(σ,Σ).(⊥,∅) G γ (σ, ∆). 2.1 – Applying the fixpoint transfer theorem: 0 αB ◦ v̇ ◦ B (lfp4̇× λ(σ,Σ).(⊥,∅) G) γ (σ, ∆) 4 × v] (assuming G ] is a sound approximating of G) ] v̇ ] (lfp4̇× λ(σ,∆).(⊥,L) G )(σ, ∆) 2.2 – Deriving a sound approximation of G ] : αB G(w) ◦ γ B (σ, ∆)   σ, ⦃while b do c⦄γσ (∆) if ¬JbKσ B ◦ =α w JcKσ,    Lif b then c else skipMσ ◦ γσ (∆) oth.   σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ = αB ◦ w JcKσ,    Lif b then c else skipMσ ◦ γσ (∆) oth.   σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ ] 4 × v αB ◦ w ◦ γ B ◦ αB ◦ JcKσ,    Lif b then c else skipMσ ◦ γσ (∆) oth.  σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ    αB ◦ w ◦ γ B ◦ JcKσ, 4 × v]  αJcKσ ◦ Lif b then c else skipMσ ◦     γσ (∆) oth. ◦ 23  σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ    w] ◦ JcKσ, 4 × v]  αJcKσ ◦ Lif b then c else skipMσ ◦     γσ (∆) oth.   σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ ] 4 × v w] ◦ JcKσ,    Lif b then c else skipM]σ ∆ oth. (Notice that the derivation above is still generic.) (Specialising it with the abstract static semantics now.)   σ, if ¬JbKσ ∧ ∆ =      σ, ∆ ∪ {B¬b} if ¬JbKσ ∧ (∆ ⇒] Ab)  4 × v] σ, {B¬b} otherwise if ¬JbKσ   ] ◦  w JcKσ,     Lif b then c else skipM] ∆ otherwise σ 3 – Finally, ] 4̇×v̇ snd(lfpλ(σ,∆).(⊥,L) G ] )(σ, ∆) Lwhile b do cM]σ ∆ , with: G ] , λw] .λ(σ, ∆).   σ, if ¬JbKσ ∧ ∆ =      if ¬JbKσ ∧ (∆ ⇒] Ab) σ, ∆ ∪ {B¬b} σ, {B¬b} otherwise if ¬JbKσ   ] ◦  w JcKσ,     Lif b then c else skipM] ∆ otherwise σ F-B Hybrid Monitor with the Modified Variables Proof. Let Σ ∈ States and ∆ ∈ L. Let us derive a hybrid monitoring semantics relying on a static analysis over-approximating the set of variables that may be modified by a command c. We will consider only the case of branching instructions, since the derivation of the abstract monitoring semantics of the other commands is similar to the one in Theorem 2. 1 – Case : conditionals Let σ, σ 0 ∈ States such that σ 0 = Jif b then c1 else c2 Kσ. Let us also assume that JbKσ = true, which means that σ 0 = Jc1 Kσ. Then, similarly to Theorem 3 we have: We start by first proving an intermediate result, introduced in Lemma 11. We denote by σV the restriction of the state σ to the set V of variables. Lemma 11 Lemma 11 . ∀σ, τ, τ 0 ∈ States, ∀Φ ∈ L, ∀V ⊆ V ar, if: 1) σpτ |= Φ 2) τV = τV0 3) fv(Φ) ⊆ V then, σpτ 0 |= Φ Proof. The proof of this lemma is straightforward, by structural induction on relational formulas as well as a structural induction on expressions, by remarking that ∀e ∈ Exp, (fv(e) ⊆ V =⇒ JeKτ = JeKτ 0 ). 
ασ0 with = ασ0 ⦃c⦄ ◦ γσ (L) = ασ0 ◦ ασ 0 ◦ ⦃c⦄ ◦ γσ ( ) = ασ0 ◦ ◦ ⦃c⦄ = ασ0 ( ) = ασ0 0 ασ0 ◦ ] v 0 γσ ({Φ ∈ ∆ | fv(Φ) ∩ Mod(c) = ∅}) σ 0 = Jc0 Kσ, σ 0 =V ar\ Mod(c0 ) σ, and by defining ∆0 as ∆0 , {Φ ∈ ∆ | fv(Φ) ∩ Mod(c) = ∅}) {Φ ∈ ∆0 | fv(Φ) ∩ Mod(c0 ) = ∅} = {Φ ∈ ∆ | fv(Φ) ∩ (Mod(c) ∪ Mod(c0 )) = ∅} Theorem 3 soundness of ML−M]− . The hybrid monitor MLcM]σ is sound in the sense of Equation (4): for all c, σ: LcMσ ◦ ] γσ v̇ MLcM]σ ◦ γσ (∆) ∩ ⦃c2 ⦄ grd¬b ◦ ◦ γσ (∆) Lc1 Mσ ◦ grdb ◦ γσ (∆) v] Lc1 M]σ (∆ ∪ {Bb}). if ∆ ⇒] Ab otherwise Lif b then c1 else c2 Mσ ◦ α Lc1 Mσ ◦ Lc1 M]σ (∆ ◦ grdb ◦ ∪ {Bb}) ∩ γσ (∆) γσ (∆) ∩ ⦃c2 ⦄ grd¬b ◦ ◦ ◦ γσ (∆)   L  Φ ∈ ∆ | ∀id ∈ fv(Φ),   id 6∈ Mod(c1 ) ∪ Mod(c2 ) if ∆ ⇒] Ab 2 – Case: loops We rely on the generic derivation of loops in Theorem 2, and specialize it with the static analysis relying on the modified variables. Therefore, we continue the derivation from the generic approximation obtained in Theorem 2: , M⦃c⦄]c0 ∆ ◦ grdb otherwise  ]  if ∆ ⇒] Ab Lc1 Mσ (∆ ∪ {Bb}) = Lc1 M]σ (∆ ∪ {Bb}) ∩   {Φ ∈ ∆ | fv(Φ) ∩ (Mod(c1 ) ∪ Mod(c2 )) = ∅} otherwise v (By applying Lemma 11, since for all σ, σ 0 such that ασ0 ◦ σ0 for all τ such that τ = JcKτ, τ =V ar\ Mod(c) τ ) ] ◦ v ασ 0 ασ being extensive) v] (By applying Lemma 11, since for all τ ∈ γσ (∆), 0 ◦ ◦ ⦃c2 ⦄ ◦ grd¬b ◦ γσ (∆)   L  ] v Φ ∈ ∆ | ∀id ∈ fv(Φ),   id 6∈ Mod(c1 ) ∪ Mod(c2 ) ] ◦ ◦ σ0 Finally, ⦃c⦄ ◦ γσ (∆) (By monotonicity of ασ0 , and γσ ασ0 ◦ γσ ◦ ασ ◦ ⦃c⦄ ◦ γσ (∆) Lc1 Mσ ασ0 ⦃c⦄∅ = ασ0 (∅) = L Additionally, assuming ∆ 6= L and ∆ 6= : ασ0 ◦ γσ (∆) γσ (∆) ∪  ◦ γσ (∆) ◦ As for the non-executed branch, we have a more precise abstract static semantics by Lemma 6: otherwise Also, ασ0 grdb ◦ as well as: if ∆ = if ∆ = L Proof. Let a command c’ and σ, σ 0 ∈ States such that σ 0 = Jc0 Kσ. Then: ◦ ◦ ⦃c2 ⦄ ◦ grd¬b α defined as: M⦃c⦄]c0 ∈ P (L) → P (L)     L ] M⦃c⦄c0 ∆ ,   Φ ∈ ∆ | ∀id ∈ fv(Φ),    id 6∈ Mod(c) ∪ Mod(c0 ) ασ0 Lif b then c1 else c2 Mσ = ασ0 Lc1 Mσ Lemma 6 soundness of M⦃ − ⦄]− . For all c, c0 , σ, σ 0 such ] that σ 0 = Jc0 Kσ, it holds that: ασ0 ◦ ⦃c⦄ ◦ γσ v̇ M⦃c⦄]c0 . M⦃c⦄]c0 ◦ where σ 0 = JcKσ 24 αB G(w) ◦ γ B (σ, ∆)   σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ 4 × v] w] ◦ JcKσ,    Lif b then c else skipM]σ ∆ oth. ◦ 4 ×v]  σ, if ¬JbKσ ∧ ∆ =      σ, ∆ ∪ {B¬b} if ¬JbKσ ∧ (∆ ⇒] Ab)    σ, {Φ ∈ ∆ | fv(Φ) ∩ Mod(c) = ∅} ∩   ∆ ∪ {B¬b} otherwise if ¬JbKσ      w] ◦ JcKσ,     Lif b then c else skipM]σ ∆ otherw. Therefore:   ] 4̇×v̇ Lwhile b do cM]σ ∆ , snd (lfpλ(σ,∆).(⊥,L) G ] )(σ, ∆) with: G ] , λw] .λ(σ, ∆).  σ, if ¬JbKσ ∧ ∆ =      σ, ∆ ∪ {B¬b} if ¬JbKσ ∧ (∆ ⇒] Ab)    σ, {Φ ∈ ∆ | fv(Φ) ∩ Mod(c) = ∅} ∩   ∆ ∪ {B¬b} otherwise if ¬JbKσ      w] ◦ JcKσ,     Lif b then c else skipM]σ ∆ otherw. 25 F-C Proof. Let ∆ ∈ P (L). Then: Hybrid Monitor with Intervals The abstract semantics of an interval analysis, inspired by [39], is presented in Figure 7. αInt γσ (∆) ◦ = αInt γ Int ( ) ∩ γσ (∆) grd],Int ( ), b = (By definition of γσInt ) blw([a, b]) , [−∞, b] αInt abv([a, b]) , [a, +∞] ◦ γσInt ( , ∆) = (By the soundness condition of a Granger’s pair) 2 )) 1 )) grde],Int (ı) , appblw(ı(e (ı) u] appabv(ı(e (ı) e1 e2 1 ≤e2 αInt ◦ γσInt (tointσ ( , ∆), ∆) = (By definition of ασInt ) blw(ı(e2 )−1) 1 )+1) grd],Int (ı) u] appabv(ı(e (ı) e1 <e2 (ı) , appe1 e2 fst ασInt ◦ Similarly, let ı ∈ StatesInt . 
Then: ασ ],Int ],Int ] grd],Int b1 ∨b2 (ı) , grdb1 (ı) t grdb2 (ı) ◦  γ Int (ı) = ασ γ Int (ı) ∩ γσ ( ) appix (ı) , ı[x 7→ ı(x) u] i] u ı(e1 − e2 ) , ı(e1 ) − ı(e2 ) ⦃c⦄ ⦃skip⦄],Int ı , ı , ⦃c1 ; c2 ⦄],Int ı , ⦃c2 ⦄],Int ],Int ⦃id := e⦄ ı , ı[id 7→ ı(e)] ⦃assert Φ⦄],Int ı , ı ⦃c1 ⦄],Int ◦ ◦ ⦃c1 ⦄],Int ı = snd ασInt ◦ γσInt (ı, toforσ (ı, )) Φ∈∆ ],Int ⦃assume Φ⦄ ı,ı Indeed, ⦃if b then c1 else c2 ⦄],Int ı , grd],Int (ı) t] ⦃c2 ⦄],Int b γσInt (ı, ) Proof. The error cases are straightforward: tointσ (ı, ) , ı and toforσ ( , ∆) , ∆. Let us restrict ourselves to ı ∈ StatesInt and ∆ ∈ P(L). 1 – tointσ is sound. 1.1 – Case : tointσ (ı, ∆). We will prove that:   γσInt u] tointσ (ı, {Φ}) , ∆ = γσInt (ı, ∆) ı(e1 + e2 ) , ı(e1 ) + ı(e2 ) ],Int ◦ Lemma 8 . For all σ ∈ States, the pair of operators tointσ , toforσ is a Granger’s reduced product. 1 )−i appı(e (ı) e2 appin (ı) , >Int ı(n) , [n, n] = snd ασInt v] toforσ (ı, ) 2) 1) appie1 +e2 (ı) , appi−ı(e (ı) u] appi−ı(e (ı) e1 e2 , γσInt is reductive) = tointσ ( , ∆) ],Int ] grd],Int b1 (ı) u grdb2 (ı) ] ◦ fst (tointσ ( , ∆), ∆) ],Int ],Int grd],Int ¬(e1 =e2 ) (ı) , grd(e1 <e2 )∧(e2 >e1 ) (ı) , grdb1 ∧b2 (ı) , 2) appi+ı(e (ı) e1 γσInt (tointσ ( , ∆), ∆) ≤],Int (Since ασInt ] ı(e1 ) 2) grde],Int (ı) , appı(e e1 (ı) u appe2 (ı) 1 =e2 appie1 −e2 (ı)  ◦ γσInt (ı, ∆) = γ Int (ı) ∩ γσ (∆) grd],Int ¬b (ı) = (∆ is interpreted conjunctively by γσ ) ⦃while (e) c⦄],Int ı , γ Int (ı) ∩ γσ ({Φ}) Φ∈∆   ] ],Int grd],Int lfpv ı ⦃if b then c1 else skip⦄ ¬b = ∩ γ Int (ı) ∩ γσ ({Φ}) Figure 7. = ∩ γσInt (ı, {Φ}) Φ∈∆ Abstract semantics of an interval analysis Φ∈∆ Φ∈∆ = (By cases 1.2 and 1.3, proven below) Lemma 7 . For all σ ∈ States, any pair of operators that is a Granger’s reduced product (tointσ , toforσ ) for the Cartesian abstraction StatesInt × P (L) provides a sound approximation of the interface between intervals and relational formulas: αInt ασ ◦ ◦ ],Int ˙ γσ ≤ ∩ γσInt Φ∈∆ Int = ∩ γ Φ∈∆ (tointσ (ı, {Φ}) , {Φ}) (tointσ (ı, {Φ})) ∩ γσ ({Φ}) Φ∈∆ = (∆ is interpreted conjunctively by γσ ) ∩ γ Int (tointσ (ı, {Φ})) ∩ γσ (∆) λ∆. tointσ ( , ∆) and Φ∈∆ Int ] γ Int v̇ λı. toforσ (ı, ) = (γ 26 is multiplicative)  γ u tointσ (ı, {Φ}) ∩ γσ (∆) Φ∈∆   = γσInt u] tointσ (ı, {Φ}) , ∆ Int  ] As for the non-executed branch, we have a more precise abstract static semantics by Lemma 9: ασ0 ◦ ⦃c2 ⦄ ◦ grd¬b ◦ γσ (∆)   if ∆ =  ] v L if ∆ ⇒] Ab   λı. toforσ0 (ı, ∅) ◦ ⦃c2 ⦄],Int Φ∈∆ , γσInt (toint(ı, ∆), ∆) with toint(ı, ∆) , u] tointσ (ı, {Φ}) Φ∈∆ 1.2 – Case : tointσ (ı, {Ae}) ◦ grd],Int ¬b ◦ γσ (∆) ◦ tointσ (>Int , ∆)oth. Finally, γσInt (ı, {Ae}) = γ Int (ı) ∩ γσ ({Ae}) ασ0 = γ Int (ı) ∩ {τ ∈ States | σpτ |= Ae} v = γ Int (ı) ∩ {τ ∈ States | JeKτ = JeKσ} =γ Int =γ Int e=JeKσ (ı) ∩ grd e=JeKσ (ı) ∩ grd (States) ◦ γ Int (ı) γσ ({Ae})  = γ Int ı u],Int grd],Int e=JeKσ (ı) ∩ γσ ({Ae})   = γσInt ı u],Int grd],Int e=JeKσ (ı), {Ae} 1.3 – Case: tointσ (ı, {Bb}) Similarly to case 1.2, we have: Theorem 4 . The abstract monitoring semantics ILcM]σ of the hybrid monitor, introduced in Figure 4, is sound: For all σ, σ 0 ∈ States such that σ 0 = JcKσ, it holds that: ◦ αB ] γσ v̇ ILcM]σ Proof. Let Σ ∈ States and ∆ ∈ L. Let us derive a hybrid monitoring semantics relying on an interval static analysis. We will consider only the case of branching instructions, since the derivation of the abstract monitoring semantics of the other commands is similar to the one in Theorem 2. 1 – Case : conditionals Let σ, σ 0 ∈ States such that σ 0 = Jif b then c1 else c2 Kσ. Let us also assume that JbKσ = true. 
Then, similarly to Theorem 3 we have: ασ0 ◦ Lif b then c1 else c2 Mσ = ασ0 ◦ Lc1 Mσ ασ0 ◦ ◦ grdb ◦ ◦ γσ (∆) ◦ γσ (∆) ◦ Lc1 Mσ ◦ grdb ◦ ασ0 ◦ Lc1 M]σ (∆ grdb ◦ γσ (∆) ∩ ⦃c2 ⦄ ◦ grd¬b ◦ γσ (∆) G(w) ◦ γ B (σ, ∆)   σ, ασ ◦ ⦃while b do c⦄ ◦ γσ (∆) if ¬JbKσ 4 × v] w] ◦ JcKσ,    Lif b then c else skipM]σ ∆ oth. ◦ 4 ×v]  σ, if ¬JbKσ ∧ ∆ =      σ, ∆ ∪ {B¬b} if ¬JbKσ ∧ (∆ ⇒] Ab)    σ, λı. tofor (ı, ∅) ◦ ⦃while b do c⦄],Int ◦ σ  Int  toint (> , ∆) ∩ ∆ ∪ {B¬b} otherwise if ¬JbKσ σ      w] ◦ JcKσ,     Lif b then c else skipM]σ ∆ otherw.   ] 4̇×v̇ Lwhile b do cM]σ ∆ , snd (lfpλ(σ,∆).(⊥,L) G ] )(σ, ∆) as well as: ασ0 Lc1 Mσ ◦ Therefore: γσ (∆) ∩ ⦃c2 ⦄ ◦ grd¬b ασ0 ◦ 2 – Case : loops We rely on the generic derivation of loops in Theorem 2, and specialize it with the static analysis relying on the modified variables. Therefore, we continue the derivation from the generic approximation obtained in Theorem 2: γσInt (ı, {Be})   = γσInt ı u],Int grd],Int (ı), {Bb} b LcMσ Lif b then c1 else c2 Mσ v ∪ {Bb}) ∩   if ∆ ⇒] Ab L ],Int ◦ λı. toforσ0 (ı, ∅) ◦ ⦃c2 ⦄   ],Int Int ◦ grd¬b tointσ (> , ∆) otherwise  ] if ∆ ⇒] Ab Lc1 Mσ (∆ ∪ {Bb})    Lc M] (∆ ∪ {Bb}) ∩ 1 σ v]  λı. toforσ0 (ı, ∅) ◦ ⦃c2 ⦄],Int ◦    ◦ tointσ (>Int , ∆) otherwise grd],Int ¬b  ◦ ] ] = (By soundness of grd],Int e=JeKσ ) ],Int Int Int ◦ γ (ı) ∩ γ grde=JeKσ (ı) ∩ ασ0 ◦ γσ (∆) v] Lc1 M]σ (∆ ∪ {Bb}). with: 27 G ] , λw] .λ(σ, ∆).  σ, if ¬JbKσ ∧ ∆ =     σ, ∆ ∪ {B¬b} if ¬JbKσ ∧ (∆ ⇒] Ab)    σ, λı. tofor (ı, ∅) ◦ ⦃while b do c⦄],Int ◦ σ  Int  toint (> , ∆) ∩ ∆ ∪ {B¬b} otherwise if ¬JbKσ σ    ]   w ◦ JcKσ,     Lif b then c else skipM]σ ∆ otherw. 28
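To make the toint_σ reduction of this appendix concrete: for an atomic formula A x, stating that the variable x still agrees with the initial memory σ, the interval of x can be narrowed to the singleton [σ(x), σ(x)], which is the case toint_σ(ı, {Ae}) = ı ⊓ grd_{e=⟦e⟧σ}(ı) specialised to a variable. The Python sketch below is a minimal illustration under that assumption; the dictionary encoding of interval states and the function names meet and toint are mine, not the paper's, and the converse reduction tofor_σ is omitted.

```python
import math

# Minimal interval-domain sketch (illustrative encodings, not from the paper):
# intervals are (lo, hi) pairs and an abstract state maps variables to intervals.

def meet(i1, i2):
    lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
    return (lo, hi) if lo <= hi else None    # None stands for the empty interval (bottom)

def toint(istate, sigma, delta):
    """Refine an interval state with the relational facts in delta: a formula
    'A x' says x holds its value in the initial memory sigma, so x's interval
    is met with the singleton [sigma[x], sigma[x]]."""
    out = dict(istate)
    for phi in delta:
        if phi.startswith("A "):
            x = phi[2:]
            out[x] = meet(out.get(x, (-math.inf, math.inf)), (sigma[x], sigma[x]))
    return out

# Example: the monitor knows x agrees with the initial memory (sigma[x] = 3),
# so the interval of x collapses from [0, 10] to [3, 3].
print(toint({"x": (0, 10), "y": (-math.inf, math.inf)},
            {"x": 3, "y": 7},
            {"A x"}))
```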
ON THE MAXIMUM LIKELIHOOD ESTIMATOR FOR THE GENERALIZED EXTREME-VALUE DISTRIBUTION

AXEL BÜCHER AND JOHAN SEGERS

arXiv:1601.05702v3 [] 15 Mar 2017. Date: March 16, 2017.

Abstract. The vanilla method in univariate extreme-value theory consists of fitting the three-parameter Generalized Extreme-Value (GEV) distribution to a sample of block maxima. Despite claims to the contrary, the asymptotic normality of the maximum likelihood estimator has never been established. In this paper, a formal proof is given using a general result on the maximum likelihood estimator for parametric families that are differentiable in quadratic mean but whose supports depend on the parameter. An interesting side result concerns the (lack of) differentiability in quadratic mean of the GEV family.

Key words. Differentiability in quadratic mean; M-estimator; maximum likelihood; empirical process; Fisher information; Generalized Extreme-Value distribution; Lipschitz condition; support.

1. Introduction

Asymptotic normality of maximum likelihood estimators in regular parametric models is a classic subject, but when the support of the distribution depends on the parameter, the mathematics are not routine. The family of univariate, three-parameter Generalized Extreme-Value distributions is a case in point. Its cumulative distribution function $G_\theta$ at a parameter vector $\theta = (\gamma, \mu, \sigma) \in \mathbb{R} \times \mathbb{R} \times (0, \infty)$ is given by
$$G_{\gamma,\mu,\sigma}(x) = G_{\gamma,0,1}\bigl((x - \mu)/\sigma\bigr), \qquad x \in \mathbb{R}, \qquad (1.1)$$
where
$$G_{\gamma,0,1}(z) =
\begin{cases}
\exp(-e^{-z}) & \text{if } \gamma = 0,\\
\exp\bigl\{-(1 + \gamma z)^{-1/\gamma}\bigr\} & \text{if } \gamma \neq 0 \text{ and } 1 + \gamma z > 0,\\
0 & \text{if } \gamma > 0 \text{ and } z \le -1/\gamma,\\
1 & \text{if } \gamma < 0 \text{ and } z \ge -1/\gamma.
\end{cases}$$
The support of $G_\theta$ is an interval, $S_\theta = \{x \in \mathbb{R} : \sigma + \gamma(x - \mu) > 0\}$, whose endpoints depend on $\theta$. The above parameterization, due to von Mises (1936) and Jenkinson (1955), generalizes and unifies the Fréchet/Gumbel/Weibull trichotomy in Fisher and Tippett (1928) across all signs of the shape parameter $\gamma$.

Fitting a generalized extreme-value distribution to a sample of annual maxima is the earliest statistical method in extreme-value theory. Various inference procedures have been explored, including quantile or probability matching (Gumbel, 1958), the probability weighted moment method (Hosking et al., 1985), and the maximum likelihood method (Prescott and Walden, 1980; Hosking, 1985). The present paper revisits the maximum likelihood estimator and its asymptotic distribution.

In general, deriving large-sample asymptotics of the maximum likelihood estimator for a distribution family with varying support is a difficult problem. The classical regularity conditions of Cramér (1946) are not fulfilled, and the same is true for the weaker Lipschitz conditions stemming from empirical process theory (van der Vaart, 1998, Theorem 5.39). To the best of our knowledge, a general theory does not exist.

Smith (1985) was the first to consider maximum likelihood estimation in a large class of non-regular parametric families on the real line. More precisely, he studied densities which can be written as
$$f(x; \mu, \phi) = (x - \mu)^{\alpha - 1}\, g(x - \mu; \phi), \qquad x \in (\mu, \infty),$$
where $\mu$ is a location parameter, $\phi$ is a parameter vector, $\alpha = \alpha(\phi)$ is a smooth function of $\phi$, and $g$ is a known function. The formulation is general enough to include location versions of the Weibull, Gamma, Beta and log-Gamma distributions. Depending on the value of $\alpha$, Smith (1985) shows that the maximum likelihood estimator for $\mu$ either converges at the rate $n^{-1/2}$ with a normal limit, or converges faster than $n^{-1/2}$ with a non-normal limit.
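To fix ideas, Smith's family can be instantiated directly. The snippet below evaluates $f(x;\mu,\phi) = (x-\mu)^{\alpha-1} g(x-\mu;\phi)$ for a location-shifted Gamma example, taking $g(x;\phi) = e^{-x}/\Gamma(\phi)$ and $\alpha(\phi) = \phi$ so that the density integrates to one; this particular normalization and the function names are assumptions made for the illustration and are not spelled out in the text.

```python
import math

def smith_density(x, mu, phi, alpha, g):
    """Smith's non-regular form f(x; mu, phi) = (x - mu)**(alpha(phi) - 1) * g(x - mu; phi),
    supported on (mu, infinity)."""
    if x <= mu:
        return 0.0
    return (x - mu) ** (alpha(phi) - 1.0) * g(x - mu, phi)

# Location-shifted Gamma instance (normalization assumed for the illustration):
# g(x; phi) = exp(-x) / Gamma(phi) and alpha(phi) = phi.
gamma_g = lambda x, phi: math.exp(-x) / math.gamma(phi)
gamma_alpha = lambda phi: phi

# Crude Riemann check that the density integrates to roughly one (mu = 2, phi = 2.5).
h = 0.001
mass = sum(smith_density(2.0 + h * k, 2.0, 2.5, gamma_alpha, gamma_g)
           for k in range(1, 40000)) * h
print(round(mass, 3))
```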
The above class of densities, however, does not include the three-parameter Generalized Extreme-Value distribution. Still, after a reparameterization, the formulation does include the case −1/2 < γ < 0. Smith (1985, p. 88) claims that similar arguments will work for γ > 0 too, but details are omitted, while the case γ = 0 is not mentioned. It could be argued that the case γ < 0 treated in Smith (1985) is indeed the most difficult one. However, we believe that his proof contains a gap which cannot be easily remedied. Uniformity in certain convergence statements is claimed but not proven, and we doubt that it can be done with the techniques used in the article. We justify our point of view at the end of Section 2. Our contribution consists of a general result on the asymptotic normality of the maximum likelihood estimator for parametric models whose support may depend on the parameter. We apply the result to the three-parameter GEV family. An interesting side result concerns the differentiability in quadratic mean of that family; for the Gumbel case, see Marohn (1994, 2000). As in Smith (1985), we need to show that certain limit relations hold uniformly over specific subsets of the parameter space. To do so, we use empirical process machinery borrowed from van der Vaart (1998). The approach requires exercising control on the entropy of certain function classes. We obtain such control via carefully formulated Lipschitz conditions. Checking these conditions for the three-parameter GEV family turns out to be surprisingly tedious. The set-up in our paper is that of independent random samples drawn from a distribution within the parametric family itself. For the GEV family, Dombry (2015) considers the more realistic setting of triangular arrays of block maxima extracted from independent random variables sampled from a distribution in the domain of attraction of a GEV distribution. He shows consistency of the maximum likelihood estimator, and our Proposition 3.1 below is not far from being a special case of his Theorem 2. In Bücher and Segers (2016), we go one step further and also establish asymptotic normality, even ON THE MLE FOR THE GEV 3 for block maxima extracted from time series. In that paper, however, we do not consider the full three-parameter GEV family but focus on the two-parameter Fréchet sub-family. The support of the latter is equal to the positive half-line for all values of the parameter vector. The complications that motivate the present paper do therefore not arise. Asymptotic normality of the maximum likelihood estimator in general parametric families with parameter-dependent support is asserted in Section 2. The proof is given in Appendix A. We specialize the theory to the GEV family in Section 3 and we provide an outline for possible applications to other parametric families in Section 4. The reasonings and calculations needed to work out the results for the GEV family are sufficiently complicated to fill Appendices B to E. 2. Maximum likelihood estimator of a support-determining parameter Let (Pθ : θ ∈ Θ) be a family of distributions on some measurable space (X , A), where Θ ⊂ Rk . Suppose that each Pθ has a density pθ with respect to some common dominating measure µ. The model (Pθ : θ ∈ Θ) is said to be differentiable in quadratic mean at an inner point θ0 ∈ Θ if there exists a measurable function `˙θ0 : X → Rk such that Z n 1 √ √ o2 √ pθ0 +h − pθ0 − hT `˙θ0 pθ0 dµ = o(khk2 ), h → 0. (2.1) 2 X The function `˙θ0 is referred to as the score vector. 
Its components are square-integrable with respect to Pθ0 and have mean zero. The covariance matrix Iθ0 = Pθ0 `˙θ0 `˙Tθ0 is called Fisher information matrix. Differentiability in quadratic mean with invertible Fisher information matrix implies an asymptotic expansion of the log-likelihood ratio statistic implying local asymptotic normality of the associated sequence of statistical experiments. For more on the statistical implications of this property, see for instance Le Cam (1986) and van der Vaart (1998, Chapters 7–8). In Marohn (2000), local asymptotic normality is exploited to construct asymptotically efficient tests of the Gumbel hypothesis. Let X1 , . . . , Xn be an i.i.d. sample from P Pθ0 . By definition, any global maximizer of the R̄-valued function θ 7→ Pn log pθ = ni=1 log pθ (Xi ) is called a maximum likelihood estimator. Here, it is implicitly assumed that the set of global maximizers is nonempty. This is easily satisfied in most situations where Θ is compact. Otherwise, a compactification argument often works (van der Vaart, 1998, Chapter 5) or one can restrict attention to local maximizers instead. √ ). Under “regularity conditions”, the asymptotic distribution of n(θ̂n −θ0 ) is Nk (0, Iθ−1 0 Various sets of sufficient conditions have been proposed in the literature: Cramér’s classical conditions require the existence of and bounds on the third-order derivatives of θ 7→ `θ (x) = log pθ (x). Theorem 5.39 in van der Vaart (1998) only demands differentiability in quadratic mean plus a Lipschitz condition on θ 7→ `θ (x), for all x in the support of Pθ0 and all θ in a neighbourhood of θ0 . However, if the support, {x : pθ (x) > 0}, of Pθ depends on θ, an approach based on smoothness of θ 7→ `θ (x) is not possible. Indeed, for any neighbourhood of θ0 , we may find θ in that neighbourhood and x ∈ X such that pθ0 (x) > 0 but pθ (x) = 0 and thus `θ (x) = −∞. For the same reason, empirical processes indexed by θ in a neighbourhood of θ0 will have unbounded trajectories with probability tending to one. 4 AXEL BÜCHER AND JOHAN SEGERS But weak convergence of such empirical processes in the space of bounded functions is a crucial ingredient in the proofs of many theorems on the asymptotics of M-estimators. A possible way to circumvent these problems consists of replacing the criterion function `θ by   pθ + pθ0 , (2.2) mθ = 2 log 2pθ0 a real-valued function on {x : pθ0 (x) > 0} satisfying mθ (x) ≥ −2 log(2) for all such x. Using Corollary 5.53 in van der Vaart (1998), the function mθ can be used to obtain √ the OP (1/ n) rate of convergence of the maximum likelihood estimator for the GEV parameter θ, see Proposition D.1 below. The method of proof does not allow to obtain the asymptotic distribution of the estimator, however. √ Yet, if the rate of convergence OP (1/ n) has been established, then our next proposition provides alternative conditions (on `θ ) which can be used to prove asymptotic normality of the maximum likelihood estimator. In doing so, we follow the well-known three-step strategy for deriving the asymptotic distribution of M-estimators (see, e.g., van de Geer, 2009): (1) prove consistency, then (2) derive the rate of convergence and finally (3) derive the exact limiting distribution. In Section 3, the three steps are worked out for the GEV family by an application of the following proposition for step (3). The support of Pθ is Sθ = {x : pθ (x) > 0}, a subset of X which may vary with θ ∈ Θ. 
Let Uε (θ) = {θ0 ∈ Θ : kθ − θ0 k < ε} and put \ S̄(ε) = S̄(θ0 , ε) = Sθ = {x : pθ (x) > 0 for all θ such that kθ − θ0 k < ε}. θ∈Uε (θ0 ) The set S̄(ε) is the intersection of the supports of the distributions Pθ for all θ in an ε-ball centered at θ0 . The following proposition claims the asymptotic normality of the maximum likelihood estimator under a Lipschitz condition on the function θ 7→ `θ (x) for x and θ in a certain range. Proposition 2.1. Let (Pθ : θ ∈ Θ) be a parametric model on (X , A), with Θ ⊂ Rk , and let θ0 be an inner point of Θ. Let X1 , X2 , . . . be a sequence of independent and identically distributed random elements in X with common distribution Pθ0 and let θ̂n be a maximum likelihood estimator based on X1 , . . . , Xn . Assume the following conditions: (a) The model is differentiable in quadratic mean at θ0 with score vector `˙θ0 and nonsingular Fisher information matrix Iθ0 . (b) We have  Pθ0 X \ S̄(ε) = o(ε2 ), ε ↓ 0. (2.3) (c) There exist `˙ ∈ L2 (Pθ0 ) and ε0 > 0 such that, for every 0 < ε < ε0 , ˙ |`θ1 (x) − `θ2 (x)| ≤ `(x)kθ for all x ∈ S̄(2ε) and θ1 , θ2 ∈ Uε (θ0 ). 1 − θ2 k √ (d) We have n(θ̂n − θ0 ) = OP (1) as n → ∞. Then n X √ −1 1 √ n(θ̂n − θ0 ) = Iθ0 `˙θ0 (Xi ) + oP (1) Nk (0, Iθ−1 ), n → ∞. 0 n i=1 (2.4) (2.5) ON THE MLE FOR THE GEV 5 Condition (b) controls the size of that part of the support of Pθ0 that is not contained in the support of Pθ for some θ in a neighbourhood of θ0 . Condition (c) is a Lipschitz condition on θ 7→ `θ (x) in which the range of x and θ is specified in such a way that pθ0 (x) > 0 for all θ0 in a neighbourhood of θ. In view of (b) and (d), the part of the range of x that has been left out is asymptotically negligible. The proof of Proposition 2.1 adapts arguments from the proofs of Theorems 5.23, 5.39 and 7.2 and Lemma 19.31 in van der Vaart (1998). It is given in detail in Appendix A. The role of uniformity. Theorem 3(i) in Smith (1985) states the asymptotic normality of the maximum likelihood estimator for a certain class of real-valued parametric families whose supports depend in a smooth way on the parameter. As our paper may have the appearance of needlessly repeating known results, we feel obliged to explain our motives. The matter is not easy to explain, and we invite the reader to consult Smith’s article while reading this remark. The pages in Smith (1985) we will be needing are 71, 75–76, 80, and 82, and it will be convenient to discuss them in more or less reverse order. The proof of Theorem 3(i), stated on page 80, comprises only three lines at the bottom of page 82. The heart of the argument is the claim that the second-order derivatives of the log-likelihood are asymptotically constant in a sequence of shrinking neighbourhoods of the true parameter. We agree with the author that if the claim on the second-order derivatives is true, then the asymptotic normality can be proven too, via a second-order Taylor expansion of the log-likelihood around the true parameter. Another good example of this technique is the proof of Theorem 5.41 in van der Vaart (1998). As a justification of the assertion on the asymptotic constancy of the second-order derivatives of the log-likelihood, the author refers to the proof of his Theorem 1. Studying the proof of that theorem, we find the assertion in Lemma 4, items I(i), II(i), and III, on page 75. Only for item I(i), a proof is given (page 76), while the proof of the other items is said to be similar. The proof of item I(i) in Lemma 4 relies on Assumption 7 on page 71. 
This assumption concerns the L1 -continuity of the second-order derivatives of the log-likelihood of a single observation as a function of the parameter vector. It is here that we wish to formulate an objection. After careful reflection, we are convinced that Assumption 7 is insufficient to conclude that the supremum of the sample averages on line 4 of page 76 converges to zero. Indeed, the expectation of the supremum of a collection of random variables is in general larger than the supremum of the expectations of those random variables. In fact, the difference is fundamental and lies at the heart of what makes proving uniform laws of large numbers and uniform central limit theorems so challenging. Controlling expectations of suprema is the topic of maximal inequalities. In van der Vaart (1998, Section 19.6), such control is exercised by bracketing integrals of the function classes over which suprema are to be taken. Bracketing integrals appear in the proof of our Proposition 3.3. We find bounds on such integrals by relying on the Donsker theory in Chapter 19 in van der Vaart (1998). The complexity of the function classes under consideration is tempered by means of uniform Lipschitz conditions. This explains the appearance of our Lipschitz condition (2.4). 6 AXEL BÜCHER AND JOHAN SEGERS 3. Maximum likelihood estimator for the GEV family Let X1 , . . . , Xn be an independent random sample from the Generalized ExtremeValue distribution Gθ0 in (1.1) with unknown parameter θ0 = (γ0 , µ0 , σ0 ), to be estimated. Expressions for the log-density `θ (x) = log pθ (x) and the score vector `˙θ (x) = (∂γ `θ (x), ∂µ `θ (x), ∂σ `θ (x))T are given in Appendix B. Consider the maximum likelihood estimator, θ̂n , defined by maximizing the logP likelihood θ 7→ n−1 ni=1 `θ (Xi ) = Pn `θ . As argued by Dombry (2015), it is not guaranteed that the log-likelihood attains a unique, global maximum. For that reason, he rather defines any local maximizer of the log-likelihood over Θ0 = R × R × (0, ∞) as a maximum likelihood estimator. His main result then states that, for block maxima constructed from an underlying i.i.d. series, one can always find a strongly consistent local maximizer, as long as γ0 is strictly larger than −1. Going into his proofs, we see how that local maximizer is constructed: by restricting the parameter space to some arbitrary compact set Θ ⊂ (−1, ∞) × R × (0, ∞) which contains θ0 as an interior point, it is guaranteed that the [−∞, ∞)-valued, continuous function θ 7→ Pn `θ attains its global maximum over Θ. That global maximum is then shown to be a local maximum over Θ0 = R × R × (0, ∞), eventually, and to be strongly consistent. In the subsequent parts of this paper, we could also work along Dombry’s lines, redefining maximum likelihood estimators as local rather than global maxima. However, we prefer to keep track of the above-mentioned construction within his and our proofs. For the following proposition concerning consistency, we therefore consider the restriction of the parameter space to an arbitrarily large compact set Θ ⊂ (−1, ∞)×R×(0, ∞) that contains the true value θ0 ∈ (−1, ∞) × R × (0, ∞) in its interior. In practice, maximizers produced by numerical algorithms are local maximizers anyway, so that the restriction of the parameter space to an arbitrarily large compact set is hardly a restriction, at least if γ0 is known to be larger than −1; see Dombry (2015, Remark 4) for the case γ0 ≤ −1. Proposition 3.1 (Consistency). Let X1 , X2 , . . . 
be an independent random sample from the Gθ0 distribution, with θ0 = (γ0 , µ0 , σ0 ) ∈ (−1, ∞) × R × (0, ∞). For any compact set Θ ⊂ (−1, ∞) × R × (0, ∞) such that θ0 is in the interior of Θ and for any estimator sequence θ̂n such that Pn `θ̂n = maxθ∈Θ Pn `θ , such maximizers always existing, we have θ̂n → θ0 almost surely as n → ∞. Proof of Proposition 3.1. Since an extreme-value distribution is in its own domain of attraction, Proposition 3.1 is in fact a combination of Theorem 2 and Proposition 2 in Dombry (2015) with block size sequence m(n) ≡ 1 and with am = σ0 and bm = µ0 . However, such a choice for m(n) is prohibited in the cited theorem because of its condition (5), requiring that m(n)/ log n → ∞ as n → ∞. Therefore, we need to pass through the proof of the theorem and adapt where necessary. First, the hypothesis m(n) → ∞ is used in Lemmas 2 and 3 to apply the domain-ofattraction condition that F m (am x + bm ) → Ḡγ0 (x) as m → ∞, where Ḡγ = Gγ,0,1 . But in our case, F = Gγ0 ,µ0 ,σ0 , and thus F (σ0 x + µ0 ) = Ḡγ0 (x) already, without having to pass through the limit. Further, as explained in Remark 3 in Dombry (2015), the condition m(n)/ log n → ∞ appears only in the proof of Lemma 4 in the same paper. Writing `¯γ = `γ,0,1 and ON THE MLE FOR THE GEV Zi = (Xi − µ0 )/σ0 , the claim of that lemma is that Z n 1 X¯ `¯γ0 (z) dḠγ0 (z), lim `γ0 (Zi ) = n→∞ n R 7 almost surely. i=1 But the random variables Z1 , Z2 , . . . are independent and identically distributed with common distribution Ḡγ0 , so that the claim follows from the strong law of large numbers. Note that the random variables Ei = (1 + γZi )−1/γ have a unit Exponential distribution and that `¯γ0 (Zi ) = (1 + γ) log(Ei ) − Ei , the absolute value of which has a finite expectation.  To prove the asymptotic normality of the maximum likelihood estimator, we apply Proposition 2.1. To that end, we need to check a number of conditions, one of which, differentiability in quadratic mean, is interesting in its own right. Proposition 3.2 (Differentiability in quadratic mean). The three-parameter GEV family {G(γ,µ,σ) : (γ, µ, σ) ∈ Θ0 = R × R × (0, ∞)} is differentiable in quadratic mean at θ0 = (γ0 , µ0 , σ0 ) if and only if γ0 > −1/2. In that case, the score vector `˙θ0 (x) is equal to the gradient of the map θ 7→ `θ (x) at θ = θ0 for x such that σ0 + γ0 (x − µ0 ) > 0 and equal to 0 otherwise. The proof of Proposition 3.2 requires showing (2.1). For every real x such that σ0 + γ0 (x − µ0 ) 6= 0, the order of thepintegrand in (2.1) is o(khk2 ) as h → ∞ due to the differentiability of the map θ 7→ pθ (x) at θ = θ0 for such x. This asymptotic relation is true pointwise in x, and it remains to be shown that it can be integrated over x. The most delicate case occurs when −1/2 < γ0 ≤ −1/3 because in that case, the first-order √ partial derivatives of the map θ 7→ pθ (x) are not continuous at the boundary situation σ + γ(x − µ) = 0. A detailed proof is given in Appendix C. Proposition 3.3 (Asymptotic normality). Let X1 , X2 , . . . be independent and identically distributed random variables with common GEV law Gθ0 , with θ0 = (γ0 , µ0 , σ0 ) ∈ (−1/2, ∞)×R×(0, ∞). Then, for any compact parameter set Θ ⊂ (−1/2, ∞)×R×(0, ∞), any sequence of maximum likelihood estimators over Θ is strongly consistent and asymptotically normal: √ n(θ̂n − θ0 ) = 1 √ Iθ−1 0 n n X `˙θ0 (Xi ) + oP (1) N3 (0, Iθ−1 ), 0 n → ∞. i=1 Proof. Strong consistency follows from Proposition 3.1. 
The proof of Proposition 3.3 therefore consists of checking the four conditions (a)–(d) of Proposition 2.1. Property (a) is just Proposition 3.2. The rate of convergence (d) is the topic of Proposition D.1, the proof of which consists of an application of Corollary 5.53 in van der Vaart (1998) to the modified criterion function mθ in (2.2); a Lipschitz property of the latter function is established in Lemma D.3. The remaining points (b) and (c) are treated in Appendix E: the condition (b) on the support is given in Lemma E.1, while the Lipschitz condition (c) is given in Lemma E.2.  Remark 3.4. If γ0 = −1/2, then (d/dx)pθ (x) converges to a negative constant as x increases to the upper endpoint of the support, and the results in Woodroofe (1972) 8 AXEL BÜCHER AND JOHAN SEGERS suggest that the maximum likelihood estimator is still asymptotically Normal, but at a rate faster than n−1/2 ; see also Smith (1985, Theorem 3(ii)). For −1 < γ0 < −1/2, the results in Woodroofe (1974) suggest that the maximum likelihood estimator converges weakly to a certain non-Gaussian limit and at a rate depending on γ0 ; see also Smith (1985, Theorem 3(iii)). Since at γ0 ≤ −1/2, the GEV family is not differentiable in quadratic mean, our Proposition 2.1 does not apply, and we do not pursue the matter further. 4. Maximum likelihood estimators for other parametric families The result of Proposition 2.1 is general enough to be applicable for a wide variety of parametric families with support depending on the parameter. Deriving all the details may be quite cumbersome and is beyond the scope of the present paper but, nevertheless, we would like to review a couple of parametric families on the real line where Proposition 2.1 may come in handy. The Generalized Pareto distribution with parameter θ = (γ, σ) ∈ R × (0, ∞) is defined by the cdf  1 − e−x/σ if γ = 0 and x > 0,    1 − (1 + γx/σ)−1/γ if γ 6= 0 and x ∈ S , θ Gθ (x) =  0 if x ≤ 0,    1 if γ < 0 and x ≥ −σ/γ, where Sθ = (0, ∞) for γ ≥ 0 and Sθ = (0, −σ/γ) for γ < 0. Using the notation in (B.1) below, its density can be written as pθ (x) = σ −1 uγ (x/σ)1+γ 1(x > 0, σ + γx > 0), x ∈ R, a function which is similar to but simpler than the one of the GEV family. Along similar lines as for that family, the results from Appendix B can be used to derive differentiability in quadratic mean at any point (γ0 , σ0 ) ∈ (−1/2, ∞) × (0, ∞) (see also Marohn, 1994, for the case γ0 = 0). Also, the Lipschitz condition on `θ (Condition (c) of Proposition 2.1) and on mθ (needed to prove Condition (d) of Proposition 2.1 via an application of Corollary 5.53 in van der Vaart, 1998; see also the proof of Proposition D.1 below) can be checked similarly to how we proceeded for the GEV family. Asymptotic normality of the MLE based on high-threshold exceedances was proved with different techniques in Drees et al. (2004). Following Smith (1985), we may also consider parametric families of densities on the real line of the form pθ (x) = (x − µ)α(φ)−1 g(x − µ; φ), x ∈ (µ, ∞), where θ = (µ, φ) ∈ Θ = R × Φ ⊂ R1+q are parameters and where g : (0, ∞) × Φ → [0, ∞) and α : Φ → (0, ∞) are known, smooth functions. Among other families (see Smith, 1985 for the three-parameter gamma, beta and log-gamma distributions), we obtain for instance the three-parameter Weibull family by choosing Φ = (0, ∞)2 , g(x; φ1 , φ2 ) = φ1 φ2 exp(−φ2 xφ1 ), α(φ1 , φ2 ) = φ1 . 
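For concreteness, the Weibull choice can be checked numerically: with this $g$ and $\alpha$, the product $(x-\mu)^{\alpha(\phi)-1} g(x-\mu;\phi)$ is the familiar three-parameter Weibull density with shape $\phi_1$, and the restriction $\alpha(\phi) > 2$ invoked below amounts to $\phi_1 > 2$. The following sketch is only a numerical sanity check of that identification; the function name and the crude Riemann integration are ad hoc choices of this sketch.

```python
import math

def weibull_smith_density(x, mu, phi1, phi2):
    """Smith form with g(x; phi1, phi2) = phi1 * phi2 * exp(-phi2 * x**phi1) and
    alpha(phi) = phi1, i.e. a three-parameter Weibull density with location mu
    and shape phi1."""
    if x <= mu:
        return 0.0
    y = x - mu
    return y ** (phi1 - 1.0) * phi1 * phi2 * math.exp(-phi2 * y ** phi1)

# Sanity check (mu = 1, phi1 = 2.5 > 2, phi2 = 0.8): a crude numerical integral of
# the density up to x0 should agree with the closed-form Weibull cdf
# 1 - exp(-phi2 * (x0 - mu)**phi1).
mu, phi1, phi2, x0, h = 1.0, 2.5, 0.8, 2.2, 1e-4
steps = int((x0 - mu) / h)
numeric = sum(weibull_smith_density(mu + h * k, mu, phi1, phi2)
              for k in range(1, steps + 1)) * h
print(round(numeric, 3), round(1.0 - math.exp(-phi2 * (x0 - mu) ** phi1), 3))
```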
ON THE MLE FOR THE GEV 9 Under regularity conditions on α and g, Proposition 2.1 can be applied on the restricted parameter set Θ2 = R × Φ2 = R × {φ : α(φ) > 2}; this corresponds to case (i) of Theorem 3 of Smith (1985). Consider Condition (b) of Proposition 2.1. In Smith (1985), it is assumed that c(φ) = lim g(x; φ)/α(φ) ∈ (0, ∞) x↓0 exists (note that c(φ) = φ2 > 0 for the Weibull family), which implies that g(x; φ0 ) may be bounded by a constant K0 for sufficiently small x. As a consequence, Condition (b) in Proposition 2.1 is met: clearly, Sθ = (µ, ∞), so that S̄(ε) = (µ0 + ε, ∞), and thus, since α0 = α(φ0 ) > 2, Z µ0 +ε  (x − µ0 )α0 −1 g(x − µ0 ; φ0 ) dx Pθ0 R \ S̄(ε) = µ0 Z ε y α0 −1 dy = O(εα0 ) = o(ε2 ), ε ↓ 0. ≤ K0 0 Let us sketch how to arrive at the other conditions of Proposition 2.1. Since `θ (x) = {α(φ) − 1} log(x − µ) + log g(x − µ; φ), Condition (c) of Proposition 2.1 can obviously be deduced from smoothness properties of the map φ 7→ α(φ) on Φ2 and of the map (x, φ) 7→ h(x, φ) = log g(x; φ) on (0, ∞)×Φ2 . Note that α(φ) − 1 |∂µ `θ (x)| ≤ + |∂x h(x − µ; φ)| x−µ and |∂φ `θ (x)| ≤ |∂φ α(φ) log(x − µ)| + |∂φ h(x − µ; φ)|, showing that the inequality α(φ) > 2 must be used again to control the integrals of the squared score functions near the lower endpoint. Condition (a) of Proposition 2.1 can be conveniently shown by using Lemma 7.6 in van der Vaart (1998): if the Fisher information Iθ = Pθ `˙θ `˙Tθ exists and ispcontinuous in θ, we only need to show that, for each x ∈ R, the function θ 7→ sθ (x) = pθ (x) on Θ2 is continuously differentiable. Using the relations ∂µ sθ (x) = ∂φ sθ (x) = 0 for x < µ and the identities ∂µ sθ (x) = 1 sθ (x) ∂µ `θ (x), 2 ∂φ sθ (x) = 1 sθ (x) ∂φ `θ (x), 2 for x > µ, this follows from continuous differentiability of α and h, provided that α(φ) > 3. For 2 < α(φ) ≤ 3, ∂µ sθ (x) will not converge to 0 as x ↓ µ; this is similar to the case −1/2 < γ ≤ −1/3 for the GEV family and may require more sophisticated arguments (see the proof of Proposition 3.2). Finally, Condition (d) of Proposition 2.1 may be deduced from Corollary 5.53 in van der Vaart (1998). By following the arguments that lead to (D.3) in Lemma D.3 (with a = 1/2), the necessary Lipschitz condition on mθ can again be conveniently reformulated in terms of α and h. 10 AXEL BÜCHER AND JOHAN SEGERS Appendix A. Proof of Proposition 2.1 For 0 < ε < ε0 and θ ∈ Uε (θ0 ), define a real-valued function `θ,ε on Sθ0 by `θ,ε (x) = `θ (x) 1{x ∈ S̄(2ε)}. The Lipschitz condition (2.4) on ` implies that ˙ kθ1 − θ2 k |`θ1 ,ε (x) − `θ2 ,ε (x)| ≤ `(x) (A.1) for all θ1 and θ2 in Uε (θ0 ) and for all x ∈ Sθ0 . We claim that, for any C > 0, we have Pn `θ̂n = sup Pn `θ ≥ θ∈Θ sup √ Pn `θ ≥ kθ−θ0 k<C/ n sup √ kθ−θ0 k<C/ n Pn `θ,C/√n − oP (1/n). (A.2) √ Only the last inequality is non-trivial. Write, for an arbitrary θ with kθ − θ0 k < C/ n, √ Pn `θ = Pn `θ,C/√n + Pn `θ 1{ · ∈ Sθ0 \ S̄(2C/ n)} √ sup √ Pn `θ0 1{ · ∈ Sθ0 \ S̄(2C/ n)} . ≥ Pn `θ,C/√n − kθ0 −θ0 k<C/ n The supremum on the second line is oP (rn ) for any sequence rn ↓ 0: for all η > 0, # " √ P rn−1 sup √ Pn `θ0 1{ · ∈ Sθ0 \ S̄(2C/ n)} > η kθ0 −θ0 k<C/ n √ ≤ P{∃ i = 1, . . . , n : Xi ∈ X \ S̄(2C/ n)} √  ≤ nPθ0 X \ S̄(2C/ n) = o(1), n → ∞, (A.3) by Assumption (2.3). Equation (A.2) has thus been proved. Now, let us show that, for fixed C > 0 and all converging sequences hn → h ∈ Rk with khn k < C for all n, we have √ n(`θ0 +hn /√n,C/√n − `θ0 ,C/√n ) → hT `˙θ0 in L2 (Pθ0 ). 
(A.4) For that purpose, write the left-hand side as √ √ √ An1 − An2 = n(`θ0 +hn /√n − `θ0 ) − n(`θ0 +hn /√n − `θ0 ) 1{ · ∈ Sθ0 \ S̄(2C/ n)}. Because of differentiability in quadratic mean, Assumption (a), the term An1 converges to hT `˙θ0 in Pθ0 -probability; see the proof of Theorem 5.39 in van der Vaart (1998). Moreover, An2 converges to zero in Pθ0 -probability: for all η > 0, we have √ Pθ0 (|An2 | > η) ≤ Pθ0 {Sθ0 \ S̄(2C/ n)} = o(1), n → ∞. Hence, the convergence in (A.4) holds in Pθ0 -probability. In view of the Lipschitzproperty of `θ,C/√n in (A.1) and the fact that Pθ0 `˙2 < ∞ by Assumption (c), convergence in Pθ0 -probability can be strengthened to L2 (Pθ0 ) convergence by applying the dominated convergence theorem. Indeed, for any subsequence of the right hand-side of (A.4), we may choose a sub-subsequence along which the convergence holds almost surely. The dominated convergence theorem implies convergence in L2 (Pθ0 ) along that sub-subsequence. The claim follows since the subsequence we have started with was arbitrary. ON THE MLE FOR THE GEV 11 Next, we shall show that, for any converging sequence hn → h with khn k < C for all n,  1 n Pθ0 `θ0 +hn /√n,C/√n − `θ0 ,C/√n → − hT Iθ0 h, n → ∞. (A.5) 2 √ Recall the empirical process Gn = n(Pn −Pθ0 ). In view of (A.4), computing means and variances, we have √ √ √ −` √ ) − hT `˙ } = o (1), Bn (hn ) = Gn { n(` n → ∞. (A.6) P θ θ0 +hn / n,C/ n θ0 ,C/ n 0 We can rewrite Bn as Bn (hn ) = n Pn (`θ0 +hn /√n,C/√n − `θ0 ,C/√n ) − Gn hT `˙θ0 − nPθ0 (`θ0 +hn /√n,C/√n − `θ0 ,C/√n ) = n Pn (`θ0 +hn /√n − `θ0 ) − Gn hT `˙θ0 √ − n Pn (`θ0 +hn /√n − `θ0 ) 1{ · ∈ X \ S̄(2C/ n)} − nPθ0 (`θ0 +hn /√n,C/√n − `θ0 ,C/√n ). P n √ Note that n Pn (`θ0 +hn /√n − `θ0 ) = i=1 log(pθ0 +hn / n /pθ0 )(Xi ) is a likelihood ratio statistic. Differentiability in quadratic mean, Assumption (a), implies 1 n Pn (`θ0 +hn /√n − `θ0 ) − Gn hT `˙θ0 = − hT Iθ0 h + oP (1), n → ∞; 2 see van der Vaart (1998, Theorem 7.2). Moreover, by Assumption (b), h i √ P n Pn (`θ0 +hn /√n − `θ0 ) 1{ · ∈ X \ S̄(2C/ n)} ≥ η √  ≤ n Pθ0 X \ S̄(2C/ n) = o(1), n → ∞. The last four displays imply (A.5). We can now follow the lines of the proof of Theorem 5.23 in van der Vaart (1998) to prove the following reinforcement of (A.6): for any random sequence h̃n such that kh̃n k < C almost surely for all n, we have Bn (h̃n ) = oP (1), n → ∞. (A.7) To see this, note that it follows from (A.6) that Bn (h) = oP (1) for any fixed h ∈ Rk with khk < C. As a consequence, the finite-dimensional distributions of the stochastic process Bn converge indeed to 0 in probability. It remains to show asymptotic tightness of the sequence Bn in the space `∞ ({h : khk < C}) equipped with the supremum distance; note that by the Lipschitz property (A.1), the trajectories of Bn are indeed bounded almost surely. Obviously, the sequence of linear processes h 7→ Gn hT `˙θ0 = hT Gn `˙θ0 is asymptotically tight, so that it suffices to show the same property for the processes √ h 7→ Mn (h) = n Gn (`θ0 +h/√n,C/√n − `θ0 ,C/√n ). This can be done along the lines of the proof of Lemma 19.31 in van der Vaart (1998), relying on a result for empirical processes indexed by sequences of function classes. More precisely, define a sequence of function classes on Sθ0 through √ Mn = { n(`θ0 +h/√n,C/√n − `θ0 ,C/√n ) : khk < C}. 12 AXEL BÜCHER AND JOHAN SEGERS By the Lipschitz property (A.1), the functions in Mn are bounded by the envelope function C `˙ ∈ L2 (Pθ0 ). 
From Example 19.7 in van der Vaart (1998), we obtain the following bound on the bracketing number N[ ] ε, Mn , L2 (Pθ0 )): for some constant K > 0 not depending on n, we have, for all sufficiently small ε > 0,  k  ˙ L (P ) , N[ ] ε, Mn , L2 (Pθ0 ) ≤ K ε−1 (2C 2 ) k`k 2 θ0 where k denotes the dimension of the Euclidean space of which Θ is a subset; here we used the property that the diameter of the ball {h ∈ Rk : khk < C} is equal to 2C. As a consequence, if δn ↓ 0, the bracketing integrals converge to zero: Z δn q   log N[ ] ε, Mn , L2 (Pθ0 ) dε → 0, n → ∞. J[ ] δn , Mn , L2 (Pθ0 ) = 0 ˙ 2 1{C `˙ > ε√n} → 0 (n → ∞) is satisfied because the The Lindeberg condition Pθ0 (C `) envelope function C `˙ belongs to L2 (Pθ0 ). Asymptotic tightness of Bn then follows from Theorem 19.28 in van der Vaart (1998). Equation (A.7) is thus proven. To complete the proof of Proposition 2.1, we can now proceed similarly to the proof of Theorem 5.23 in van der Vaart (1998), with some additional effort needed to get rid of the constant C. In view of (A.5), the convergence property (A.7) can be rewritten as  1 n Pn `θ0 +h̃n /√n,C/√n − `θ0 ,C/√n = − h̃Tn Iθ0 h̃n + h̃Tn Gn `˙θ0 + oP (1), 2 n → ∞, (A.8) where h̃n denotes an arbitrary random sequence in Rk with kh̃n k < C. √ Gn `˙θ0 . For C > 0, let An,C denote the event Define ĥn1 = n(θ̂n − θ0 ) and ĥn2 = Iθ−1 0 {max(kĥn1 k, kĥn2 k) < C}, and set h̃nj = ĥnj 1(An,C ), for j ∈ {1, 2}. Inserting both tilde-expressions into (A.8), we get, as n → ∞, 1 n Pn (`θ0 +h̃n1 /√n,C/√n − `θ0 ,C/√n ) = − h̃Tn1 Iθ0 h̃n1 + h̃Tn1 Gn `˙θ0 + oP (1), 2 1 Gn `˙θ0 1(An,C ) + oP (1). n Pn (`θ0 +h̃n2 /√n,C/√n − `θ0 ,C/√n ) = Gn `˙Tθ0 Iθ−1 0 2 Subtracting the second equation from the first one yields n Pn (`θ0 +h̃n1 /√n,C/√n − `θ0 +h̃n2 /√n,C/√n ) = −Qn 1(An,C ) + oP (1),   1 ˙θ T Iθ ĥn1 − I −1 Gn `˙θ . where Qn = ĥn1 − Iθ−1 G ` n 0 0 0 θ0 0 2 We will show that Qn = oP (1), n → ∞. n → ∞, (A.9) Since the Fisher information matrix Iθ0 is positive definite, this will imply that n √ 1 X˙ √ n(θ̂n − θ0 ) − Iθ−1 `θ0 (Xi ) = ĥn1 − Iθ−1 Gn `˙θ0 = oP (1), 0 0 n n → ∞, i=1 which is the first part of (2.5). The asymptotic normality then follows from the multivariate central limit theorem. ON THE MLE FOR THE GEV 13 It remains to show (A.9). Note that Qn ≥ 0, since Iθ0 is positive definite. By the maximization property (A.2), we have, as n → ∞, Rn = n Pn `θ̂n ≥ n Pn `θ0 +h̃n2 /√n,C/√n − oP (1) = n Pn `θ0 +h̃n1 /√n,C/√n + Qn 1(An,C ) − oP (1). Isolating Qn 1(An,C ), we find 0 ≤ Qn = Qn 1(An,C ) + Qn 1(Acn,C ) ≤ Rn − n Pn `θ0 +h̃n1 /√n,C/√n + Qn 1(Acn,C ) + oP (1), n → ∞. √ Note that `θ̂n (x) 1{x ∈ S̄(2C/ n)} 1(An,C ) = `θ0 +h̃n1 /√n,C/√n (x) 1(An,C ). Therefore, Rn = Rn 1(An,C ) + Rn 1(Acn,C )  √ √  = n Pn `θ̂n 1{ · ∈ S̄(2C/ n)} + n Pn `θ̂n 1{ · ∈ / S̄(2C/ n)} 1(An,C ) + Rn 1(Acn,C ) = n Pn `θ0 +h̃n1 /√n,C/√n 1(An,C ) + oP (1) + Rn 1(Acn,C ), n → ∞; the oP (1) term appears because of the argument in (A.3). Substituting the expansion for Rn into the upper bound for Qn yields, as n → ∞, 0 ≤ Qn ≤ n Pn `θ0 +h̃n1 /√n,C/√n 1(An,C ) + Rn 1(Acn,C ) − n Pn `θ0 +h̃n1 /√n,C/√n + Qn 1(Acn,C ) + oP (1) = {Rn + Qn − n Pn `θ0 +h̃n1 /√n,C/√n } 1(Acn,C ) + oP (1). Hence, for any ε > 0,  lim sup P(|Qn | > ε) ≤ lim sup P Acn,C , n→∞ n→∞ which can be made arbitrary small by increasing C, using Assumption (d). This finishes the proof of (A.9) and thus of Proposition 2.1.  Appendix B. 
Density and score functions of the GEV family The density, log-density, score functions and Fisher information matrix of the GEV family can of course be found in many articles and textbooks. Here we present some of these objects in a form which is convenient for later analysis. In the notation of Section 2, the state space (X , A, µ) is the real line equipped with its Borel sets and the Lebesgue measure. The probability density function of the GEV distribution Pθ with parameter θ = (γ, µ, σ) ∈ R × R × (0, ∞) is given by pθ (x) = σ −1 e−u uγ+1 1(1 + γz > 0), x ∈ R, where x−µ , σ  Z u = uγ (z) = exp − z = zµ,σ (x) = 0 z 1 dt 1 + γt  ( (1 + γz)−1/γ = e−z if γ 6= 0, if γ = 0. (B.1) The expression uγ (z) is convex and decreasing in z and is increasing in γ. See Figure 1 for graphs of the functions z 7→ uγ (z) for γ ∈ {−0.5, 0, 0.5}. 14 AXEL BÜCHER AND JOHAN SEGERS uγ(z) = (1 + γz)(−1 γ) uγ(z) γ = 0.5 γ=0 γ = − 0.5 1 −2 −1 0 1 2 z Figure 1. Graph of z 7→ uγ (z) in (B.1) for γ ∈ {−0.5, 0, 0.5} and z ∈ [−2, 2]. The support, Sθ , of the distribution is (defined as) the open interval   (−∞, µ − σ/γ) Sθ = {x : pθ (x) > 0} = {x ∈ R : σ + γ(x − µ) > 0} = R   (µ − σ/γ, ∞) if γ < 0, if γ = 0, if γ > 0. The log-density is given by `θ (x) = − log(σ) − u + (γ + 1) log(u), x ∈ Sθ , Rz with log(u) = − 0 (1 + γt)−1 dt. The partial derivatives of the map θ 7→ `θ (x) at θ such that pθ (x) > 0 are given by z ∂γ `θ (x) = (1 − u) ∂γ log(u) − , (B.2) 1 + γz γ+1−u , (B.3) ∂µ `θ (x) = σ(1 + γz) (1 − u)z − 1 ∂σ `θ (x) = , (B.4) σ(1 + γz) with ∂γ log(u) given by    1 1 z   Z z log(1 + γz) − if γ 6= 0,  t γ γ 1 + γz 0 ≤ ∂γ log(u) = dt = (B.5) 2 2  0 (1 + γt)  z if γ = 0. 2 For γ > −1/2, these partial derivatives have expectation zero and finite second moments with respect to Pθ . For such θ, the Fisher information matrix Iθ ∈ R3×3 is equal to the covariance matrix of the score vector `˙θ = (∂γ `θ , ∂µ `θ , ∂σ `θ )T , the three entries of which are viewed as elements of L2 (Pθ ). Explicit expressions for Iθ are given in Prescott and Walden (1980), but we will not be needing those here. The ON THE MLE FOR THE GEV 15 only properties of Iθ that are relevant for our current study are that Iθ is symmetric, positive definite and non-singular and that the map θ 7→ Iθ is continuous. We continue with a number of properties of the GEV densities and score functions. Lemma B.1. Let γ and z be such that 1 + γz > 0. Let u = uγ (z) be as in (B.1). If γz > 0, then  2 z   ,    2     z2   (B.6) ∂γ log(u) ≤ ,   z 1 1 + γz   log(1 + γz) ≤   γ 1 + γz   1     log(1 + γz). γ2 If −1 < γz < 0, then also z2 . 1 + γz As a consequence, for any γ and z such that 1 + γz > 0, we have ∂γ log(u) ≤ 0 ≤ ∂γ log(u) ≤ z2 . 1 + γz (B.7) (B.8) Proof. If γ = 0 or z = 0, the stated inequalities are clearly satisfied, so suppose that γ 6= 0 and z 6= 0. Substituting γt = v in (B.5), we find Z γz 1 v ∂γ log(u) = 2 dv. γ 0 (1 + v)2 If γz > 0, since v 7→ v/(1 + v) is increasing in v ≥ 0 and v 7→ 1/(1 + v) is decreasing in v ≥ 0, we obtain the bounds in the first display of the lemma: not only Z γz 1 z2 ∂γ log(u) ≤ 2 v dv = , γ 0 2 but also Z z 1 γz 1 z2 dv ≤ . 1 + γz γ 0 1 + v 1 + γz If γz < 0, then we have γz = − |γz| and Z |γz| Z 1 v |z| |γz| 1 |z|2 dv ≤ dv = . ∂γ log(u) = 2 γ 0 (1 − v)2 |γ| 0 (1 − v)2 1 − |γz| ∂γ log(u) ≤  Lemma B.2. Let θ and x be such that 1 + γz > 0, where z = (x − µ)/σ. We have ( −1 σ (1 + |γ|) u1+γ if z ≤ 0, |∂µ `θ (x)| ≤ σ −1 (1 + |γ|) uγ if z ≥ 0. Proof. 
We have (1 + γz)−1 = uγ and thus ∂µ `θ (x) = σ −1 (γ + 1 − u)uγ . If z ≤ 0, then u ≥ 1, so that |γ + 1 − u| ≤ u + |γ| ≤ (1 + |γ|)u. If z ≥ 0, then 0 < u ≤ 1, so that |γ + 1 − u| ≤ 1 + |γ|.  16 AXEL BÜCHER AND JOHAN SEGERS Lemma B.3. Let θ and x be such that 1 + γz > 0, where z = (x − µ)/σ. We have  z+1   if z ≥ 0, |∂σ `θ (x)| ≤ σ(1 + γz)  σ −1 1 + u log(u) umax(γ,0) if z ≤ 0. Proof. If z ≥ 0, then 0 < u ≤ 1, so that |(1 − u)z − 1| ≤ z +1, yielding the stated bound. If z ≤ 0, then u ≥ 1, so that |(1 − u)z − 1| ≤ u |z| + 1. Further, (1 + γz)−1 = uγ as well as Z u Z u uγ − 1 |z| γ−1 max(γ,0) t−1 dt = umax(γ,0) log(u). (B.9) t dt ≤ u = = 1 + γz γ 1 1 u|z|+1 The stated bound now follows from |∂σ `θ (x)| ≤ σ −1 (1+γz) .  Lemma B.4. Let θ and x be such that 1 + γz > 0, where z = (x − µ)/σ. If γ ≥ 0 and z ≥ 0, then, with γ −2 log(1 + γz) and γ −1 being defined as +∞ for γ = 0,      2 1 z 1 . (B.10) , log(1 + γz) , min z, |∂γ `θ (x)| ≤ max min 2 γ2 γ If γ ≤ 0 and z ≥ 0, then |∂γ `θ (x)| ≤ max(z 2 , z) . 1 + γz (B.11) If γ ≥ 0 and z ≤ 0, then |∂γ `θ (x)| ≤ u1+γ max{(log u)2 , log u}. (B.12) |∂γ `θ (x)| ≤ u max{(log u)2 , log u}. (B.13) If γ ≤ 0 and z ≤ 0, Proof. If γ = 0, then u = e−z and |∂γ `θ (x)| = (1−e−z )z 2 /2−z, and all stated inequalities are satisfied. In the remainder of the proof, we assume therefore that γ 6= 0. Suppose first that z ≥ 0, so that 0 < u ≤ 1. Since |a − b| ≤ max(a, b) for nonnegative numbers a, b, we have   z |∂γ `θ (x)| ≤ max ∂γ log(u), . 1 + γz Use (B.6) to find (B.10) and use (B.7) to find (B.11). Next suppose z ≤ 0, so that u ≥ 1. If γ < 0, then, by (B.6) and (B.9), 1 |z| ∂γ log(u) ≤ (− ) log(1 + γz) ≤ (log u)2 . γ 1 + γz We find, again using (B.9), as stated in (B.13), that   |z| |∂γ `θ (x)| ≤ max u ∂γ log(u), ≤ u max{(log u)2 , log u}. 1 + γz Finally, suppose z ≤ 0 and γ > 0. By (B.7) and (B.9)  2 2 |z| z2 ≤ u−γ uγ log(u) = uγ (log u)2 . ∂γ log(u) ≤ = (1 + γz) 1 + γz 1 + γz ON THE MLE FOR THE GEV We obtain, again using (B.9),  |∂γ `θ (x)| ≤ max u ∂γ log(u), |z| 1 + γz  17 ≤ u1+γ max{(log u)2 , log u}, which is (B.12).  Lemma B.5. Fix a ∈ [1/2, 1). For any x ∈ R, the function θ 7→ paθ (x) is continuously differentiable on (−a/(1 + a), ∞) × R × (0, ∞) and the partial derivatives are continuous in (x, θ) ∈ R × (−a/(1 + a), ∞) × R × (0, ∞). Proof. Fix θ0 = (γ0 , µ0 , σ0 ) with γ0 > −a/(1 + a) and choose x0 ∈ R. If σ0 + γ0 (x0 − µ0 ) > 0, then also σ + γ(x − µ) > 0 for all (x, θ) in a neighbourhood of (x0 , θ0 ) and thus paθ (x) = σ −a e−au ua(γ+1) for such (x, θ). All functions arising in the expression of paθ (x) are continuously differentiable in the three parameters and are continuous in (x, θ). Since ∂θk paθ (x) = a paθ (x) ∂θk `θ (x), the formulas for the score function in (B.2), (B.3), and (B.4) imply that   z a −ua a(γ+1) a ∂γ pθ (x) = (1 − u) ∂γ log(u) − e u , 1 + γz σ a γ + 1 − u a −ua a(γ+1) ∂µ paθ (x) = e u , σ(1 + γz) σ a (1 − u)z − 1 a −ua a(γ+1) e u . ∂σ paθ (x) = σ(1 + γz) σ a If σ0 + γ0 (x0 − µ0 ) < 0, then also σ + γ(x − µ) < 0 for all (x, θ) in a neighbourhood of (x0 , θ0 ) and thus paθ (x) = 0 for all such (x, θ), whence the partial derivatives vanish too. The difficult case is σ0 + γ0 (x0 − µ0 ) = 0, that is, if γ0 6= 0 and x0 = µ0 − σ0 /γ0 . We need to show that, for every k ∈ {1, 2, 3}, lim (x,θ)→(x0 ,θ0 ) σ+γ(x−µ)>0 ∂θk paθ (x) = 0 where (θ1 , θ2 , θ3 ) = (γ, µ, σ). Recall u = (1 + γz)−1/γ with z = (x − µ)/σ. First, suppose that γ0 > 0. 
Then u → ∞ as (x, θ) → (x0 , θ0 ) and convergence to zero is assured by the exponential factor e−au in each of the three partial derivatives. Second, suppose that γ0 < 0. Then u → 0 as (x, θ) → (x0 , θ0 ). Using the bound in (B.8), we see that the limit behaviour of the three partial derivatives is dominated by the factor − a −a−1 (1 + γz) γ as 1 + γz → 0. The exponent must be positive eventually: −(a/γ0 ) − a − 1 > 0. But this is equivalent to −a/(1 + a) < γ0 < 0.  Appendix C. Differentiability in quadratic mean of the GEV family √ Proof of Proposition 3.2. Let sθ = pθ . We distinguish between three cases: γ0 > −1/3, −1/2 < γ0 ≤ −1/3, and γ0 ≤ −1/2. 18 AXEL BÜCHER AND JOHAN SEGERS Case γ0 > −1/3. Lemma B.5 implies that for all x ∈ R, the function θ 7→ sθ (x) is continuously differentiable in a neighbourhood of θ0 . Differentiability in quadratic mean then follows from an application of Lemma 7.6 in van der Vaart (1998). Note that the existence and the continuity of the Fisher information matrix in a neighbourhood of θ0 has been derived in Prescott and Walden (1980). √ Case −1/2 < γ0 ≤ −1/3. Recall sθ = pθ . The conditions of Lemma 7.6 in van der Vaart (1998) are no longer fulfilled, but an adaptation of that proof still yields differentiability in quadratic mean. Since the unit ball in R3 is compact, it suffices to show that, if hn → h in R3 and if tn ↓ 0 as n → ∞, then 2 Z  sθ0 +tn hn (x) − sθ0 (x) 1 T ˙ − hn `θ0 (x) sθ0 (x) dx = 0. lim n→∞ R tn 2 It is sufficient to show the same equality with hTn `˙θ0 (x) sθ0 (x) replaced by hT `˙θ0 (x) sθ0 (x); to see why, use the elementary inequality (a+b)2 ≤ 2a2 +2b2 , the fact that the score vector has Pθ0 -square integrable components, and the assumption that hn → h as n → ∞. For every x ∈ R except x = µ0 − σ0 /γ0 , the integrand of the resulting integral converges to zero by differentiability of the map θ 7→ sθ (x) at θ = θ0 . What remains is to show convergence of the integral itself. To this end, we apply Proposition 2.29 in van der Vaart (1998) with p = 2; the condition to check is that 2  Z  Z  1 T ˙ sθ0 +tn hn (x) − sθ0 (x) 2 h `θ0 (x) sθ0 (x) dx. (C.1) dx ≤ lim sup tn n→∞ R 2 R R The right-hand side in (C.1) is equal to (1/4)hT ( `˙θ0 `˙Tθ0 pθ0 )h = (1/4)hT Iθ0 h. We need to control the left-hand side. Write hn = (hn1 , hn2 , hn3 )T and θn = θ0 + tn hn = (γn , µn , σn ). Without loss of generality, assume that n is large enough such that −1/2 < γn < 0. The upper endpoints of the supports of the GEV distributions with parameter vectors θ0 , θn , (γ0 , µn , σn ) and (γn , µ0 , σ0 ) are equal to µ0 − σ0 /γ0 , µn − σn /γn , µn − σn /γ0 and µ0 − σ0 /γn , respectively. Let αn and βn be the minimum and the maximum of these four endpoints, respectively. Write sθ +t h − sθ0 . fn = 0 n n tn Since fn (x) = 0 for x > βn , we have Z βn Z Z αn 2 2 fn (x)2 dx. fn (x) dx + fn (x) dx = R −∞ αn We will show that the limit superior of the first term on the right-hand side is bounded by the right-hand side of (C.1), while the second term on the right-hand side converges to zero. To bound the integral of fn (x)2 from −∞ to αn , we can proceed as in the proof of Lemma 7.6 in van der Vaart (1998). See in particular the display on top of page 96. Each point x ∈ (−∞, αn ) is an element of the support of sθ for each θ on the line segment connecting θ0 and θn ; this was the reason for introducing the additional parameter vectors (γ0 , µn , σn ) and (γn , µ0 , σ0 ) in the previous paragraph. 
Let ṡθ (x) denote the gradient ON THE MLE FOR THE GEV 19 of the map θ 7→ sθ (x), for x in the support of Pθ . Note that ṡθ (x) = (1/2)sθ (x) `˙θ (x). Since the map u 7→ sθ0 +utn hn (x), for u ∈ [0, 1], is continuously differentiable, we have R1 fn (x) = u=0 ṡθ0 +utn hn (x)T hn du. Applying the Cauchy–Schwarz inequality and the Fubini theorem, we find Z αn Z 1 Z αn 2 2 ṡθ0 +utn hn (x)T hn du dx fn (x) dx ≤ −∞ x=−∞ Z 1 ≤ 1 4 u=0 hTn Iθ0 +utn hn hn du. u=0 By continuity of the map θ 7→ Iθ , the right-hand side converges to (1/4)hT Iθ0 h. Rβ To show that αnn fn (x)2 dx converges to zero, observe that  fn (x)2 ≤ 2t−2 n pθn (x) + pθ0 (x) . Moreover, there exists c0 > 0 such that αn ≥ βn − c0 tn for all sufficiently large n. As a consequence, it is sufficient to show that, whenever θ̃n → θ0 and un ↓ 0, we have lim u−2 n Pθ̃n [ω̃n − un , ω̃n ) = 0, n→∞ (C.2) with ω̃n the upper endpoint of the support of Pθ̃n . Writing θ̃n = (γ̃n , µ̃n , σ̃n ), we have "  #  µ̃n − σ̃n /γ̃n − un − µ̃n −1/γ̃n Pθ̃n [ω̃n − un , ω̃n ) = 1 − exp − 1 + γ̃n σ̃n n o = 1 − exp − (|γ̃n | un /σ̃n )1/|γ̃n | ≤ (|γ̃n | un /σ̃n )1/|γ̃n | . On the last line, we used the elementary inequality 1 − exp(−z) ≤ z for all z ∈ R. Since 1/ |γ̃n | → 1/ |γ0 | > 2, equation (C.2) follows. Case γ0 ≤ −1/2. We will show that for h = (0, t, 0)T with t ↓ 0, we have lim inf t−2 Pθ0 +h {pθ0 = 0} > 0. t↓0 (C.3) Here, {pθ0 = 0} is short-hand for {x ∈ R : pθ0 (x) = 0} = [µ0 − σ0 /γ0 , ∞). Restricting the integral on the left-hand side in (2.1) to the set {pθ0 = 0} then shows that the asymptotic relation in (2.1) cannot hold. We show (C.3). The upper endpoint of Pθ0 +h is equal to µ0 +t−σ0 /γ0 , which is larger than the one of Pθ0 . The mass assigned by Pθ0 +h to the difference of the two supports is given by # "   (µ0 − σ0 /γ0 ) − µ0 − t −1/γ0 Pθ0 +h [µ0 − σ0 /γ0 , ∞) = 1 − exp − 1 + γ0 σ0 n o = 1 − exp − (|γ0 | t/σ0 )1/|γ0 | . Since 0 < 1/ |γ0 | ≤ 2 and since 1 − exp(−u) = (1 + o(1)) u as u → 0, the inequality (C.3) follows.  20 AXEL BÜCHER AND JOHAN SEGERS Appendix D. Rate of convergence √ To apply Proposition 2.1, the OP (1/ n) rate of convergence of the maximum likelihood estimator needs to be established first. Proposition D.1 (Rate of convergence). Let X1 , X2 , . . . be independent and identically distributed random variables with common GEV law Gθ0 , with θ0 = (γ0 , µ0 , σ0 ) ∈ (−1/2, ∞)×R×(0, ∞). Then, for any compact parameter set Θ ⊂ (−1/2, ∞)×R×(0, ∞), √ any sequence of maximum likelihood estimators over Θ satisfies θ̂n − θ0 = OP (1/ n) as n → ∞. Proof of Proposition D.1. We apply Corollary 5.53 in van der Vaart (1998) to the function mθ in (2.2). To do so, we need to check a number of things: • θ̂n = θ0 + oP (1) as n → ∞: this is okay by Proposition 3.1. • Pn mθ̂n ≥ Pn mθ0 − OP (n−1 ): By concavity of the logarithm, we have   p Pn mθ̂n ≥ Pn log θ̂n + Pn log 1 ≥ 0 = Pn mθ0 . pθ0 • There exists ṁ ∈ L2 (Pθ0 ) such that |mθ1 (x) − mθ2 (x)| ≤ ṁ(x)kθ1 − θ2 k for Pθ0 almost all x and all θ1 and θ2 in a neighbourhood of θ: this Lipschitz condition is the topic of Lemma D.3. • The map θ 7→ Pθ0 mθ admits a second-order Taylor expansion at the point of maximum θ0 with non-singular second derivative: this is established in Lemma D.4. √  The cited corollary now yields n(θ̂n − θ0 ) = OP (1) as n → ∞, as required. Lemma D.2 (Relative errors). Let (µ0 , σ0 ) ∈ R × (0, ∞) and ε ∈ (0, 1]. Let (µ, σ) ∈ R × (0, ∞) be such that |µ − µ0 | /σ0 ≤ ε and |σ/σ0 − 1| ≤ ε. Let x ∈ R and write z0 = (x − µ0 )/σ0 and z = (x − µ)/σ. If |z0 | ≥ 2, then |z/z0 − 1| ≤ 2ε. 
Proof. Since (1 + b)(1 − c) − 1 = b − c + bc for b, c ∈ R, we have    (µ − µ0 )/σ0 σ0 z −1 = 1+ −1 1− −1 z0 σ (x − µ0 )/σ0 σ0 σ0 µ − µ0 ε ε2 µ − µ0 ≤ −1 + + −1 ≤ε+ + ≤ 2ε. σ 2σ0 σ 2σ0 2 2  Lemma D.3 (Lipschitz condition). Let mθ be as in (2.2) with pθ the GEV density function. For fixed θ0 = (γ0 , µ0 , σ0 ) ∈ (−1/2, ∞) × R × (0, ∞), there exists ṁ such that Pθ0 ṁ2 < ∞ and such that |mθ1 − mθ2 | 1{pθ0 >0} ≤ ṁ kθ1 − θ2 k (D.1) for all θ1 and θ2 in a neighborhood of θ0 . Proof of Lemma D.3. The function ṁ can be constructed along the following lines. First, fix a ∈ [1/2, 1), to be determined later in terms of γ0 . Since |log x − log y| ≤ |x − y| / min(x, y) and since the map z 7→ |(x + z)a − (y + z)a | is decreasing, we have, ON THE MLE FOR THE GEV 21 on {x : pθ0 (x) > 0}, 1 |mθ1 − mθ2 | = |log(pθ1 + pθ0 ) − log(pθ2 + pθ0 )| 2 1 = |log{(pθ1 + pθ0 )a } − log{(pθ2 + pθ0 )a }| a 1 |(pθ1 + pθ0 )a − (pθ2 + pθ0 )a | ≤ × a min{(pθ1 + pθ0 )a , (pθ2 + pθ0 )a } ≤ pa − pa 1 × θ1 a θ2 . a pθ0 Suppose we can find a nonnegative function ṗa such that, for some neighbourhood V of θ0 , we have, for each k ∈ {1, 2, 3}, sup |∂θk paθ (x)| ≤ ṗa (x), θ∈V x ∈ {pθ0 > 0}. (D.2) Then we find, on {pθ0 > 0}, 1 3 ṗa |mθ1 − mθ2 | ≤ kθ1 − θ2 k. 2 a paθ0 Hence, the Lipschitz condition (D.1) is satisfied, with ṁ = (6/a) ṗa p−a θ0 , provided that Z 2 Pθ0 [(ṗa p−a ṗ2a (x) pθ1−2a (x) dx < ∞. (D.3) θ0 ) ] = 0 {pθ0 >0} We will split the domain {pθ0 > 0} into certain intervals and we will use the previous construction on each of these intervals separately, with possibly different values of a. For bounded intervals, a simplication may occur. Recall Lemma B.5 and let a ∈ [1/2, 1) be large enough such that γ0 > −a/(1 + a). Let V be a compact neighbourhood of θ0 within (−a/(1 + a), ∞) × R × (0, ∞) and let I be a bounded interval. Since continuous functions are uniformly bounded on compacta, we have sup max |∂θk paθ (x)| < ∞. (D.4) (x,θ)∈I×V k∈{1,2,3} Hence, on any bounded interval of the support of Pθ0 on which p1−2a is integrable, we θ0 −a may choose ṁ equal to a constant times pθ0 . Such intervals need therefore not be looked into further. For a = 1/2, a choice which will occur often, the condition that p1−2a is θ0 integrable on bounded intervals is trivially satisfied. To further control the partial derivatives of paθ , we will use the identity ∂θk paθ (x) = a paθ (x) ∂θk `θ (x), k ∈ {1, 2, 3}. paθ We will seek bounds for the functions and |∂θk `θ | separately. To facilitate writing, let us say that positive functions A and B of (x, θ) satisfy A.B if there exists c(θ0 ) > 0 such that A(x, θ) ≤ c(θ0 ) B(x, θ) for all (x, θ) in the proper range. Here c(θ0 ) is a positive constant whose value may depend on θ0 . For instance, for all θ = (γ, µ, σ) in a compact neighbourhood of θ0 within R × R × (0, ∞), we have 1/σ . 1. 22 AXEL BÜCHER AND JOHAN SEGERS The relation . is transitive (and reflexive, but not anti-symmetric) and behaves well under multiplication of positive quantities. I. Case γ0 > 0. Fix θ0 = (γ0 , µ0 , σ0 ) ∈ (0, ∞) × R × (0, ∞) and set a = 1/2 in (D.2) and (D.3). Fix ε ∈ (0, 1/4] and consider the following neighbourhood Vε of θ0 :   σ0 σ0 , . Vε = [(1 − ε)γ0 , (1 + ε)γ0 ] × [µ0 − εσ0 , µ0 + εσ0 ] × 1+ε 1−ε The support of Pθ0 is (µ0 − σ0 /γ0 , ∞). In view of (D.4), it is sufficient to construct ṗ1/2 (x) for x ≥ µ0 + 2σ0 , i.e., z0 = (x − µ0 )/σ0 ≥ 2. For such x and for θ ∈ Vε , we have pθ (x) > 0 as well as |z/z0 − 1| ≤ 2ε by Lemma D.2. In particular, 1 ≤ z0 /2 ≤ z ≤ 2z0 . Put u = (1 + γz)−1/γ ; note that 0 < u < 1. 
By Lemmas B.2, B.3 and B.4, all three score functions ∂θk `θ (x) can be bounded in absolute value by a multiple of 1 + log(1/u), uniformly in θ ∈ Vε and for all x ≥ µ0 + 2σ0 . Further, the density can be bounded by pθ (x) = 1 −u γ+1 e u . uγ+1 . σ Hence pθ (x) |∂θk `θ (x)| . (1 + log(1/u))uγ+1 . uγ+1/2 = (1 + γz)−1−1/2γ . Since z > z0 /2 and 0 < γ0 /2 < γ < 2γ0 , the supremum over θ ∈ Vε of the previous upper bound is easily seen to be integrable over z0 ≥ 2. II. Case γ0 ∈ (−1/2, 0). Fix ε ∈ (0, 1/12) sufficiently small such that −1/2 < γ0 − 2ε < γ0 + 2ε < 0. Consider the following neighbourhood Vε of θ0 :   σ0 σ0 , . Vε = [γ0 − ε, γ0 + ε] × [µ0 − εσ0 , µ0 + εσ0 ] × 1+ε 1−ε The support of Pθ0 is (−∞, µ0 + σ0 / |γ0 |). We split this set into two intervals: (−∞, µ0 − 2σ0 ], (µ0 − 2σ0 , µ0 + σ0 / |γ0 |). II.1. The interval (µ0 − 2σ0 , µ0 + σ0 / |γ0 |). We follow the construction leading to (D.2) and (D.3). To this end, we choose a = a(γ0 ) ∈ (1/2, 1) in such a way that |γ0 | 1 <a< . 1 − |γ0 | 2(1 − |γ0 |) Define ṗa (x) = sup max |∂θk paθ (x)| . θ∈V k∈{1,2,3} Our choice of a entails that γ0 > −a/(1 + a). Hence, in view of (D.4), the function ṗa is bounded on the interval (µ0 − 2σ0 , µ0 + σ0 / |γ0 |). We need to show that the integral in (D.3) is finite when we restrict the domain to (µ0 −2σ0 , µ0 +σ0 / |γ0 |). It is then sufficient to show that the function p1−2a is integrable on that set. Write down the integral and θ0 ON THE MLE FOR THE GEV 23 make a change of variable z0 = (x − µ0 )/σ0 to obtain that Z 1/|γ0 | Z µ0 +σ0 /|γ0 | n o 1−2a σ02a exp −(1 − 2a)(1 + γ0 z0 )−1/γ0 pθ0 (x) dx = −2 µ0 −2σ0 × (1 + γ0 z0 )(1−2a)(1/|γ0 |−1) dz0 Z 1/|γ0 | . (1 + γ0 z0 )(1−2a)(1/|γ0 |−1) dz0 . −2 The proportionality constant in the last inequality only depends on θ0 . The right-hand side is finite since the exponent is larger than −1 by our choice of a. II.2. The interval (−∞, µ0 − 2σ0 ]. We construct ṁ(x) for x ≤ µ0 − 2σ0 , i.e., for z0 = (x − µ0 )/σ0 ≤ −2. We will again use the construction leading to (D.2) and (D.3), this time choosing a = 1/2. In that case, it is sufficient to construct ṗ1/2 such that it is square-integrable on (−∞, µ0 − 2σ0 ) with respect to the Lebesgue measure. For x ≤ µ0 −2σ0 and for θ ∈ Vε we have pθ (x) > 0 and, by Lemma D.2, |z/z0 − 1| ≤ 2ε and therefore 2z0 ≤ z ≤ z0 /2 ≤ −1. Since z is negative, we have u ≥ 1. The density satisfies 1 pθ (x) = e−u uγ+1 . e−u u. σ By Lemmas B.2, B.3 and B.4, the three score functions ∂θk `θ (x) can all be bounded by a multiple of u2 , the proportionality constant neither depending on x ≤ µ0 − 2σ0 nor on θ ∈ Vε . Hence, for all k ∈ {1, 2, 3}, all θ ∈ Vε and all x ≤ µ0 − 2σ0 , pθ (x) |∂θk `θ (x)|2 . e−u u5 . e−u/2 (using that supu≥1 um e−u/2 < ∞ for any scalar m). Since γ ≥ γ0 − ε and z ≤ z0 /2 < 0, we have u ≥ {1 + (γ0 − ε)z0 /2}−1/(γ0 −ε) . Inserting this bound into the last display yields a function which is integrable over z0 ∈ (−∞, −2). III. Case γ0 = 0. For fixed ε ∈ (0, 1/6], consider the following compact neighbourhood of θ0 = (0, µ0 , σ0 ):   σ0 σ0 Vε = [−ε, ε] × [µ0 − σ0 ε, µ0 + σ0 ε] × , . 1+ε 1−ε Partition this set in two pieces, according to the sign of γ: Vε,+ = {θ ∈ Vε : γ ≥ 0}, Vε,− = {θ ∈ Vε : γ ≤ 0}. The support of the Gumbel distribution is R, which we will decompose into three intervals: (−∞, µ0 − σ0 /ε], [µ0 − σ0 /ε, µ0 + σ0 /ε], [µ0 + σ0 /ε, ∞). The middle interval is bounded. By (D.4) with a = 1/2, we only need to consider the cases z0 ≥ 1/ε and z0 ≤ −1/ε, where z0 = (x − µ0 )/σ0 . 
Put z = (x − µ)/σ and note that |z/z0 − 1| ≤ 2ε by Lemma D.2. III.1. Case z0 ≥ 1/ε. We have z0 ≥ 1/ε ≥ 6 and 4 ≤ (1 − 2ε)z0 ≤ z ≤ (1 + 2ε)z0 . Moreover, 0 < u < 1. We will write the supremum over θ ∈ Vε as the maximum of the suprema over θ ∈ Vε,+ and θ ∈ Vε,− . 24 AXEL BÜCHER AND JOHAN SEGERS III.1.1. Case θ ∈ Vε,+ . In this case, always 1 + γz > 1. The bounds on the scores in Lemmas B.2, B.3, and B.4 imply that |∂γ `θ (x)| . z 2 . z0 2 , |∂µ `θ (x)| . 1, |∂σ `θ (x)| . z . z0 . Further, we have pθ (x) . (1 + γz)−1/γ−1 ≤ (1 + γz)−1/γ ≤ (1 + εz)−1/ε ≤ {1 + ε(1 − 2ε)z0 }−6 . It follows that, for each k ∈ {1, 2, 3}, sup pθ (x) |∂θk `θ (x)|2 . {1 + ε(1 − 2ε)z0 }−6 z04 . θ∈Vε,+ The right-hand side is integrable over z0 ∈ [1/ε, ∞). III.1.2. Case θ ∈ Vε,− . If −ε ≤ γ ≤ −1/z, then 1 + γz ≤ 0 and thus pθ (x) = 0. So suppose γ > −1/z. By Lemmas B.2, B.3, and B.4, the scores can be bounded as follows: max |∂θk `θ (x)| . k∈{1,2,3} z2 . 1 + γz (D.5) Moreover, the density is bounded by pθ (x) . (1 + γz)−1/γ−1 . Hence, for k ∈ {1, 2, 3}, since γ 7→ (1 + γz)−1/γ is increasing in γ on {γ : 1 + γz}, sup pθ (x) |∂θk `θ (x)|2 . z 4 (1 + γz)−1/γ−3 θ∈Vε,+ = z 4 {(1 + γz)−1/γ }1+3γ ≤ z 4 e−(1+3γ)z ≤ z 4 e−z/2 . e−z/4 , as supz≥0 z 4 e−z/4 < ∞. Bounding z in e−z/4 from below by z0 /2 yields a function which is integrable over z0 ∈ [1/ε, ∞). III.2. Case z0 ≤ −1/ε. We have z0 ≤ −1/ε ≤ −6 and, by Lemma D.2, (1 + 2ε)z0 ≤ z ≤ (1 − 2ε)z0 ≤ −4. Moreover, u > 1. III.2.1. Case θ ∈ Vε,+ . If γ ≥ 1/ |z|, then pθ (x) = 0. Assume 0 ≤ γ < 1/ |z|, so that 1 + γz > 0. By (B.12), |∂γ `θ (x)| ≤ u1+γ {(log u)2 + log u)} . u1+2ε . (D.6) Further, by Lemmas B.2 and B.3, |∂µ `θ (x)| . u1+γ ≤ u1+ε , |∂σ `θ (x)| . u1+γ {log(u) + 1} . u1+2ε . (D.7) The density is bounded by pθ (x) . e−u u1+γ . In total, since supu≥1 um e−u/2 < ∞ for any scalar m, pθ (x) |∂θk `θ (x)|2 . e−u u(1+γ)+2(1+2ε) . e−u u3+5ε . e−u/2 . Since γ 7→ (1 + γz)−1/γ is increasing in γ, a lower bound for u is given by u ≥ e−z = e|z| ≥ e|z0 |/2 . Plugging this bound into e−u/2 yields a function which is integrable over z0 ∈ (−∞, −1/ε]. ON THE MLE FOR THE GEV 25 III.2.2. Case θ ∈ Vε,− . By Lemmas B.2, B.3 and B.4, |∂γ `θ (x)| . u1+ε , |∂µ `θ (x)| . u1+ε , |∂σ `θ (x)| . u log(u). The density is bounded by pθ (x) . e−u u1+γ . We get sup pθ (x) |∂θk `θ (x)|2 . e−u u3+3ε . e−u/2 . θ∈Vε,− Further, u can be bounded from below by (1 + (−ε)(1 − 2ε)z0 )−1/(−ε) . Inserting this into e−u/2 yields an integrable function in z0 ∈ (−∞, −1/ε].  Lemma D.4. Let Pθ denote the three-parameter GEV distribution with parameter θ and let mθ be as in (2.2). For fixed θ0 , the map θ 7→ Pθ0 mθ attains a unique maximum over (−1/2, ∞) × R × (0, ∞) at θ = θ0 . If moreover γ0 > −1/2, then whenever ht → h in Rk as t → 0, we have  1 t → 0. t−2 Pθ0 mθ0 +tht − mθ0 → − hT Iθ0 h, 4 Proof. The fact that θ0 is the unique maximizer of the map θ 7→ Pθ0 follows from Lemma 5.35 in van der Vaart (1998) applied to the mixture densities (pθ + pθ0 )/2. The second-order Taylor expansion can now be shown along similar lines as in the proof of Theorem 5.39 in van der Vaart (1998). In fact, the same technique was used to show (A.5) in the proof of Proposition 2.1.  Appendix E. Remaining steps for the proof of Proposition 3.3 In view of the outline of the proof given right after the statement of the Proposition 3.3, it remains to check the condition on the support in (2.3) and the Lipschitz property (2.4). This is the content of the present section. Lemma E.1. 
For any θ0 ∈ (−1/2, ∞) × R × (0, ∞), the three-parameter GEV family satisfies Condition (2.3). Proof. First, consider θ0 ∈ Θ0,− = (−∞, 0) × R × (0, ∞) such that γ0 > −1/2. Let ω(θ) = µ − σ/γ = µ + σ/ |γ| and ω0 = ω(θ0 ). Since θ 7→ ω(θ) is increasing in each component of θ ∈ Θ0,− , we have h− (ε) := inf{ω(θ) : θ ∈ Uε (θ0 )} = ω(γ0 − ε, µ0 − ε, σ0 − ε) whence S̄(ε) = (−∞, h− (ε)). The function t 7→ h− (t) is continuously differentiable on a neighbourhood of t = 0 and with negative derivative at t = 0. Hence, we can find constants 0 < b1 < c1 and t1 > 0 such that ω0 − t · c1 σ0 / |γ0 | ≤ h− (t) ≤ ω0 − t · b1 σ0 / |γ0 | , As a consequence of (E.1), for sufficiently small ε > 0, X \ S̄(ε) ⊂ [ω0 − εc1 σ0 / |γ0 | , ∞). t ∈ [0, t1 ]. (E.1) 26 AXEL BÜCHER AND JOHAN SEGERS Therefore, using the substitution u = {1 + γ0 (x − µ0 )/σ0 }1/|γ0 | , Z (εc1 )1/|γ0 | Z ω0 e−u du = O(ε1/|γ0 | ) = o(ε2 ) pθ0 (x) dx = Pθ0 (X \ S̄(ε)) ≤ ω0 −c1 εσ0 /|γ0 | 0 as ε ↓ 0, since −1/2 < γ0 < 1. Now, consider θ0 ∈ Θ0,+ = (0, ∞) × R × (0, ∞), i.e., γ0 > 0. Then, for any ε < min(σ0 , γ0 ) and any θ ∈ Uε (θ0 ), we have ω(θ) = µ − σ σ0 − ε ≤ µ0 + ε − =: h+ (ε). γ γ0 + ε whence S̄(ε) = (h+ (ε), ∞). The function t 7→ h+ (ε) is continuously differentiable on a neighbourhood of t = 0, with positive derivative at t = 0. As a consequence, we can find constants 0 < b2 < c2 and t2 > 0 such that ω0 + t · b2 σ0 /γ0 ≤ h+ (t) ≤ ω0 + t · c2 σ0 /γ0 , t ∈ [0, t2 ]. (E.2) Hence, for sufficiently small ε > 0, we obtain that X \ S̄(ε) ⊂ (−∞, ω0 + ε · c2 σ0 /γ0 ]. Therefore, using the substitution u = {1 + γ0 (x − µ0 )/σ0 }−1/γ0 , Z ω0 +εc2 σ0 /γ0 Z ∞ Pθ0 (X \ S̄(ε)) ≤ pθ0 (x) dx = e−u du = exp{−(c2 ε)−1/γ0 }, (c2 ε)−1/γ0 ω0 which clearly is of the order o(ε2 ) as ε ↓ 0, since γ0 > 0. Finally, consider θ0 ∈ Θ0 with γ0 = 0. Then, for any ε < σ0 /2, we have ω(θ) = µ − σ0 − ε σ ≤ µ0 + ε − =: h0,+ (ε), γ ε θ ∈ Uε (θ0 ) ∩ Θ0,+ , ω(θ) = µ + σ0 − ε σ ≥ µ0 − ε + =: h0,− (ε), |γ| ε θ ∈ Uε (θ0 ) ∩ Θ0,− . as well as As a consequence, for sufficiently small ε, S̄(ε) = (h0,+ (ε), h0,− (ε)). Clearly, we can find constants d1 , d2 > 0 such that h0,+ (ε) ≤ −d1 /ε and h0,− (ε) ≥ d2 /ε for all sufficiently small ε. Hence, Z −d1 /ε Z ∞ Pθ0 (X \ S̄(ε)) ≤ pθ0 (x) dx + pθ0 (x) dx −∞  = exp − exp d2 /ε  µ0 d1 + σ0 ε σ0     d2 µ0 + 1 − exp − exp − + , σ0 ε σ0 which can be easily seen to be of the order o(ε2 ) as ε ↓ 0, since 1 − exp(−y) = (1 + o(1))y as y → 0.  Lemma E.2. For any θ0 ∈ (−1/2, ∞) × R × (0, ∞), the three-parameter GEV family satisfies Condition (2.4) with Θ = (−1/2, ∞) × R × (0, ∞) (and hence also with any compact subset containing θ0 in its interior). ON THE MLE FOR THE GEV 27 Proof. Throughout, let k · k denote the maximum norm on R3 . The proof will be split up in three cases, according to the sign of γ0 . For some suitable ε0 > 0 to be specified below, and for any x ∈ Sθ0 , define a set of admissible parameter vectors θ as  Θ0 (x) = θ ∈ Uε0 (θ0 ) : if θ0 ∈ Θ satisfies kθ0 − θ0 k ≤ 2kθ − θ0 k, then x ∈ Sθ0 . Note that, if θ ∈ Θ0 (x), then the entire ball with center θ0 and radius kθ − θ0 k is a subset of Θ0 (x). In other words, Θ0 (x) is the union of all neighbourhoods Uε (θ0 ) for those ε ∈ (0, ε0 ] such that x ∈ Sθ0 for every θ0 ∈ U2ε (θ0 ). [The multiplicative constant 2 is not essential and could have been replaced by an arbitrary constant c0 > 1.] It follows from the definition of Θ0 (x) that θ 7→ `θ (x) is continuously differentiable on Θ0 (x). 
Hence, we may define ˙ `(x) = 3 sup{k`˙θ (x)k : θ ∈ Θ0 (x)}, x ∈ Sθ0 , (E.3) and, by the mean-value theorem, immediately obtain that ∀x ∈ Sθ0 : ∀θ1 , θ2 ∈ Θ0 (x) : ˙ kθ1 − θ2 k. |`θ1 (x) − `θ2 (x)| ≤ `(x) (E.4) [The constant 3 appears because of the use of the max-norm.] It can be seen easily that (E.4) implies (2.4). The proof will be finished once we will have constructed ε0 and will have showed that the function in (E.3) is square-integrable with respect to Pθ0 . I. Case γ0 ∈ (0, ∞). Write ω0 = µ0 − σ0 /γ0 and recall that Sθ0 = (ω0 , ∞). For x ∈ Sθ0 , write z0 = z0 (x) = (x − µ0 )/σ0 . Note that z0 > −1/γ0 . For a parameter vector θ with γ > 0, we have Sθ = (ω(θ), ∞) where ω(θ) = µ − σ/γ. A monotonicity and differentiability argument similar to that yielding (E.2) shows that there exist positive constants b0 and t0 with b0 t0 ≤ 1/2 and t0 < min(σ0 , γ0 )/2 such that for all t ∈ [0, t0 ],  σ0 − t  ≤ ω0 + (σ0 /γ0 ) tb0 (4/3), h+ (t) = sup ω(θ) = µ0 + t − γ0 + t  ≥ ω + (σ /γ ) tb . θ∈Ut (θ0 ) 0 0 0 0 Partition the interval Sθ0 into three sub-intervals: Sθ0 = (ω0 , ω0 + (σ0 /γ0 ) t0 b0 ) ∪ [ω0 + (σ0 /γ0 ) t0 b0 , µ0 + 2σ0 ] ∪ (µ0 + 2σ0 , ∞). Choose 0 < ε0 < min{1/4, t0 /2, σ0 /(2γ0 )} small enough such that σ/σ0 , γ/γ0 ∈ [5/6, 7/6], (σ/γ)/(σ0 /γ0 ) ∈ [3/4, 4/3], for all θ ∈ Uε0 (θ0 ). ˙ We will provide an upper bound for `(x) in (E.3) for each x in each piece of the partition separately. First, suppose that x ∈ Sθ0 is such that x = ω0 + (σ0 /γ0 )y with 0 < y < t0 b0 . If θ is such that t := kθ − θ0 k satisfies t < ε0 ≤ t0 /2 and 2tb0 ≥ y, then h+ (2t) ≥ ω0 + (σ0 /γ0 )(2tb0 ) ≥ x and thus θ 6∈ Θ0 (x). As a consequence, Θ0 (x) ⊂ Uy/(2b0 ) (θ0 ). But for θ ∈ Uy/(2b0 ) (θ0 ), we have ω(θ) ≤ ω0 + (σ0 /γ0 )(y/2)(4/3) = ω0 + (σ0 /γ0 )(2/3)y and thus x − ω(θ) ≥ (σ0 /γ0 )(1/3)y. For such θ, writing z = (x − µ)/σ, we find, using y = 1 + γ0 z0 , that 1 + γz = (γ/σ)(x − ω(θ)) ≥ (γ/σ)(σ0 /γ0 )(1/3)y ≥ (1/4)(1 + γ0 z0 ). In addition, x < ω0 + (σ0 /γ0 )(1/2) = µ0 − (σ0 /γ0 )(1/2) < µ0 − ε0 ≤ µ and thus z < 0. Apply (B.3), (B.4) and (B.12) to arrive at a Pθ0 square-integrable bound. 28 AXEL BÜCHER AND JOHAN SEGERS Second, suppose that x ∈ Sθ0 is such that ω0 +(σ0 /γ0 ) t0 b0 ≤ x ≤ µ0 +2σ0 . The partial derivatives of `θ (x) with respect to the three components of θ being continuous functions of (θ, x) in the compact domain {θ : kθ − θ0 k ≤ ε0 } × [ω0 + (σ0 /γ0 ) t0 b0 , µ0 + 2σ0 ], they are also uniformly bounded and thus square-integrable with respect to Pθ0 . Third, suppose that x ∈ Sθ0 is such that z0 ≥ 2. Let θ ∈ Uε0 (θ0 ) and write z = (x − µ)/σ. By Lemma D.2 we have |(z/z0 ) − 1| ≤ 1/2 and thus z0 /2 ≤ z ≤ 3z0 /2. The inequalities in Lemmas B.2, B.3 and B.4 then combine into a Pθ0 square-integrable ˙ upper bound for `. II. Case γ0 ∈ (−1/2, 0). Write ω0 = µ0 + σ0 / |γ0 | and recall that Sθ0 = (−∞, ω0 ). For x ∈ Sθ0 , write z0 = z0 (x) = (x − µ0 )/σ0 . Note that z0 < 1/ |γ0 |. For a parameter vector θ with γ < 0, we have Sθ = (−∞, ω(θ)) where ω(θ) = µ+σ/ |γ|. A monotonicity and differentiability argument similar to that yielding (E.1) shows that there exist positive constants b0 and t0 with b0 t0 ≤ 1/2 and t0 < min(σ0 , |γ0 |)/2 such that for all t ∈ [0, t0 ],  σ0 − t  ≤ ω0 − (σ0 / |γ0 |) tb0 , h− (t) = inf ω(θ) = µ0 − t − γ0 − t  ≥ ω − (σ / |γ |) tb (4/3). θ∈Ut (θ0 ) 0 0 0 0 Partition the interval Sθ0 into three sub-intervals: Sθ0 = (−∞, µ0 − 2σ0 ) ∪ [µ0 − 2σ0 , ω0 − (σ0 / |γ0 |)t0 b0 ] ∪ (ω0 − (σ0 / |γ0 |)t0 b0 , ω0 ). 
Choose 0 < ε0 < min{1/4, t0 /2, σ0 /(2 |γ|0 ), σ0 /7} small enough such that σ/σ0 , γ/γ0 ∈ [5/6, 7/6], (σ/γ)/(σ0 /γ0 ) ∈ [3/4, 4/3], for all θ ∈ Uε0 (θ0 ). ˙ We will provide an upper bound for `(x) in (E.3) for each x in each piece of the partition separately. First, consider x ∈ (ω0 − (σ0 / |γ0 |)t0 b0 , ω0 ), which can be rewritten as x = ω0 − (σ0 / |γ0 |)y for some 0 < y < t0 b0 . Then Θ0 (x) ⊂ Uy/(2b0 ) (θ0 ), for, if θ ∈ Uε0 (θ0 ) ⊂ Ut0 /2 (θ0 ) satisfies t = kθ − θ0 k ≥ y/(2b0 ), then there exists θ0 with kθ0 − θ0 k ≤ 2t such that ω(θ0 ) = h− (2t) ≤ ω0 − (σ0 / |γ0 |)2tb0 ≤ x. Now, for θ ∈ Uy/(2b0 ) (θ0 ), we have ω(θ) − x ≥ ω0 − σ0 / |γ0 | (y/2)(4/3) − x = ω0 − (2/3)(ω0 − x) − x = (1/3)(ω0 − x). In addition, x > ω0 − σ0 / |γ0 | /2 = µ0 + σ0 / |γ0 | > µ0 + ε0 > µ, whence z = (x − µ)/σ > 0. Therefore, as a consequence of Lemmas B.2, B.3 and B.4, we obtain the upper bound σ 1 (4/3)σ0 1 1 = ≤ . k`˙θ (x)k . 1 + γz |γ| ω(θ) − x (1/3) |γ0 | ω0 − x Since γ0 ∈ (−1/2, 0) implies 1/ |γ0 | > 2, the bound can be seen to be square-integrable with respect to Pθ0 . Second, consider x ∈ [µ0 − 2σ0 , ω0 − (σ0 / |γ0 |)t0 b0 ]. The partial derivatives of `θ (x) with respect to the three components of θ being continuous functions of (θ, x) in the compact domain {θ : kθ − θ0 k ≤ ε0 } × [µ0 − 2σ0 , ω0 − (σ0 / |γ0 |)t0 b0 ], they are also uniformly bounded and thus square-integrable with respect to Pθ0 . Third, consider x ∈ (−∞, µ0 − 2σ0 ). Let θ ∈ Uε0 (θ0 ) and write z = (x − µ)/σ. By Lemma D.2, we have |(z/z0 ) − 1| ≤ 1/2 and thus 3z0 /2 ≤ z ≤ z0 /2 ≤ −1. The inequalities in Lemmas B.2, B.3 and B.4 then combine into a Pθ0 square-integrable ˙ upper bound for `. ON THE MLE FOR THE GEV 29 III. Case γ0 = 0. Recall that Sθ0 = R. Find t0 > 0 small enough such that (1 + t0 )t0 ≤ σ0 /4. Then we have (3/4)σ0 /t ≤ (σ0 /t) − t − 1 ≤ σ0 /t, for t ∈ (0, t0 ]. It follows that, for all t ∈ (0, t0 ], σ0 − t h0,+ (t) = sup ω(θ) = µ0 + t − t θ∈Ut (θ0 )∩Θ0,+  σ0 − t h0,− (t) = inf ω(θ) = µ0 − t + t θ∈Ut (θ0 )∩Θ0,−  ≤ µ0 − (3/4)σ0 /t, ≥ µ0 − σ0 /t, (E.5) ≤ µ0 + σ0 /t, ≥ µ0 + (3/4)σ0 /t. (E.6) Choose 0 < ε0 < min{1/4, t0 /2} small enough such that σ/σ0 ∈ [5/6, 7/6] whenever θ ∈ Uε0 (θ0 ). Partition the real line into three sub-intervals: R = (−∞, µ0 − σ0 /ε0 ) ∪ [µ0 − σ0 /ε0 , µ0 + σ0 /ε0 ] ∪ (µ0 + σ0 /ε0 , ∞). ˙ We will provide an upper bound for `(x) in (E.3) for each x in each piece of the partition separately. Write x = µ0 + σ0 z0 . III.1 – Case x < µ0 − σ0 /ε0 . We have z0 < −1/ε0 . For any θ ∈ Θ0 (x) ⊂ Uε0 (θ0 ), we have x < µ0 − σ0 /ε0 < µ0 − ε0 ≤ µ; note that ε20 < t20 < σ0 . Therefore, z = zθ (x) = (x − µ)/σ < 0 and thus u = uγ (z) > 1, see Figure 1. We claim that Θ0 (x) ⊂ U1/(2|z0 |) (θ0 ). To prove this, it suffices to show that we can find a point θ0 such that kθ0 − θ0 k = 2/(2 |z0 |) = 1/ |z0 | and such that x ∈ / Sθ0 . But this is easy: just set θ0 = (1/ |z0 | , µ0 , σ0 ) and note that 1 + (1/ |z0 |)z0 = 1 − 1 = 0. Let θ ∈ Θ0 (x) ⊂ U1/(2|z0 |) (θ0 ). Write z = (x−µ)/σ. Our choice of ε0 and the fact that |z0 | > 1/ε0 > 2 imply that |z/z0 − 1| ≤ 1/2; see Lemma D.2. Consider three subcases: γ > 0, γ = 0, and γ < 0. • Suppose γ > 0. By (E.5), we have ω(θ) ≤ µ0 −(3/4)σ0 (2 |z0 |) = µ0 +(3/2)σ0 z0 < µ0 + σ0 z0 = x. Hence, x ∈ Sθ . As γ > 0 and z < 0, the bounds in (D.6) and (D.7) then imply that k`˙θ (x)k . u1+2ε0 . We need to bound u from above. Recall that uγ (z) is decreasing in z and increasing in γ. Moreover, (3/2)z0 ≤ z < 0 and 0 < γ < 1/(2 |z0 |). We find −2|z0 |  −2|z0 | 1 3 z0 = 1 − (3/4) = 16−z0 . 
1≤u≤ 1+ 2 |z0 | 2 For arbitrary ρ > 1, the function z0 7→ ρ−z0 is integrable with respect to the standard Gumbel density z0 7→ exp(−e−z0 ) e−z0 . • Suppose γ = 0. The expressions for the score functions are ∂γ `θ (x) = (1 − e−z )z 2 /2 − z, ∂µ `θ (x) = (1 − e−z )/σ, and ∂σ `θ (x) = ((1 − e−z )z − 1)/σ. Since (3/2)z0 ≤ z ≤ (1/2)z0 < 0 for all θ considered, these expressions can be easily bounded by Pθ0 -square integrable functions. • Suppose γ < 0. Since z < 0, the components of the score vector can be bounded by a multiple of u max{z, (log u)2 }. But since uγ (z) is increasing in γ and since γ < 0, we can bound u by its value at γ = 0, which is e−z . Now continue as for the case γ = 0. 30 AXEL BÜCHER AND JOHAN SEGERS III.2 – Case µ0 −σ0 /ε0 ≤ x ≤ µ0 +σ0 /ε0 . This case is trivial, since the three components of the score vector `˙θ (x) are continuous and thus uniformly bounded on the closure of the bounded domain {(x, θ) : |x − µ0 | ≤ σ0 /ε0 , θ ∈ Θ0 (x)}. III.3 – Case x > µ0 + σ0 /ε0 . This case is partially similar to the case x < µ0 − σ0 /ε0 . For θ ∈ Θ0 (x) ⊂ Uε0 (θ0 ), we have x > µ0 + ε0 > µ and thus z = (x − µ)/σ > 0 and 0 < u = uγ (z) < 1. Moreover, z0 > 1/ε0 . We claim that Θ0 (x) ⊂ U1/(2z0 ) (θ0 ). Indeed, the point θ0 = (−1/z0 , µ0 , σ0 ) is such that kθ − θ0 k = 1/z0 = 2/(2z0 ) and still x ∈ / Sθ0 . Let θ ∈ Θ0 (x) ⊂ U1/(2z0 ) (θ0 ) and write z = (x−µ)/σ. Again, we have |z/z0 − 1| ≤ 1/2. Consider three subcases: γ > 0, γ = 0, and γ < 0. • Suppose γ > 0. Since 0 < u < 1 and 1 + γz > 1, all three components of the score vector can be bounded by a constant multiple of z 2 and thus by a constant multiple of z02 . Now it suffices to observe that all moments of the Gumbel distribution are finite. • Suppose γ = 0. Then apply the same reasoning as for the subcase γ = 0 in the case III.1 above. • Suppose γ < 0. By (E.6), we have ω(θ) ≥ µ0 + (3/4)σ0 (2z0 ) = µ0 + (3/2)σ0 z0 ≤ µ0 + σ0 z0 = x, so that x ∈ Sθ . Since 0 > γ > −1/(2z0 ) ≥ −3/(4z) > −1/z, the bound in (D.5) applies. We need to find a square-integrable bound on z 2 /(1+γz). But this is immediate, since the denominator is larger than 1 − 3/4 = 1/4 and the numerator is smaller than a multiple of z02 .  Acknowledgments The authors would like to thank two anonymous referees and an Associate Editor for their constructive comments on an earlier version of this manuscript. The research by A. Bücher has been supported by the Collaborative Research Center “Statistical modeling of nonlinear dynamic processes” (SFB 823, Project A7) of the German Research Foundation, which is gratefully acknowledged. Parts of this paper were written when A. Bücher was a visiting professor at TU Dortmund University. J. Segers gratefully acknowledges funding by contract “Projet d’Actions de Recherche Concertées” No. 12/17-045 of the “Communauté française de Belgique” and by IAP research network Grant P7/06 of the Belgian government (Belgian Science Policy). References Bücher, A. and J. Segers (2016). Maximum likelihood estimation for the Fréchet distribution based on block maxima extracted from a time series. Bernoulli . To appear. Cramér, H. (1946). Mathematical Methods of Statistics. Princeton: Princeton University Press. Dombry, C. (2015). Existence and consistency of the maximum likelihood estimators for the extreme value index within the block maxima framework. Bernoulli 21 (1), 420–436. Drees, H., A. Ferreira, and L. de Haan (2004). On maximum likelihood estimation of the extreme value index. The Annals of Applied Probability 14 (3), 1179–1201. 
ON THE MLE FOR THE GEV 31 Fisher, R. A. and L. H. C. Tippett (1928). On the estimation of the frequency distributions of the largest or smallest member of a sample. Proceedings of the Cambridge Philosophical Society 24, 180–190. Gumbel, E. J. (1958). Statistics of extremes. New York: Columbia University Press. Hosking, J. R. M. (1985). Algorithm as 215: Maximum-likelihood estimation of the parameters of the generalized extreme-value distribution. Journal of the Royal Statistical Society. Series C (Applied Statistics) 34 (3), 301–310. Hosking, J. R. M., J. R. Wallis, and E. F. Wood (1985). Estimation of the generalized extreme-value distribution by the method of probability-weighted moments. Technometrics 27 (3), 251–261. Jenkinson, A. F. (1955). The frequency distribution of the annual maximum (or minimum) values of meteorological elements. Quarterly Journal of the Royal Meteorological Society 81 (348), 158–171. Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer Series in Statistics. New York: Springer-Verlag. Marohn, F. (1994). On testing the Exponential and Gumbel distribution. In Extreme Value Theory and Applications, pp. 159–174. Kluwer Academic Publishers. Marohn, F. (2000). Testing extreme value models. Extremes 3 (4), 363–384. Prescott, P. and A. T. Walden (1980). Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. Biometrika 67 (3), 723–724. Smith, R. L. (1985). Maximum likelihood estimation in a class of nonregular cases. Biometrika 72 (1), 67–90. van de Geer, S. (2009). Empirical Processes in M-Estimation. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. van der Vaart, A. (1998). Asymptotic Statistics. Cambridge: Cambridge University Press. von Mises, R. (1936). La distribution de la plus grande de n valeurs. In Selected Papers, Volume II, pp. 271–294. Providence, R.I., 1954: American Mathematical Society. Woodroofe, M. (1972). Maximum likelihood estimation of a translation parameter of a truncated distribution. The Annals of Mathematical Statistics 43 (1), 113–122. Woodroofe, M. (1974). Maximum likelihood estimation of translation parameter of truncated distribution II. The Annals of Statistics 2 (3), 474–488. Ruhr-Universität Bochum, Fakultät für Mathematik, Universitätsstr. 150, 44780 Bochum, Germany E-mail address: [email protected] Université catholique de Louvain, Institut de Statistique, Biostatistique et Sciences Actuarielles, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium E-mail address: [email protected]
Collaborative Visual Area Coverage using Unmanned Aerial Vehicles arXiv:1612.02065v1 [] 6 Dec 2016 Sotiris Papatheodorou, Anthony Tzes and Yiannis Stergiopoulos Abstract— This article addresses the visual area coverage problem using a team of Unmanned Aerial Vehicles (UAVs). The UAVs are assumed to be equipped with a downward facing camera covering all points of interest within a circle on the ground. The diameter of this circular conic-section increases as the UAV flies at a larger height, yet the quality of the observed area is inverse proportional to the UAV’s height. The objective is to provide a distributed control algorithm that maximizes a combined coverage-quality criterion by adjusting the UAV’s altitude. Simulation studies are offered to highlight the effectiveness of the suggested scheme. Index Terms— Cooperative Control, Autonomous Systems, Unmanned Aerial Vehicles I. I NTRODUCTION Area coverage over a planar region by ground agents has been studied extensively when the sensing patterns of the robots are circular [1, 2]. Most of these techniques are based on a Voronoi or similar partitioning [3, 4] of the region of interest. However there have been studies concerning arbitrary sensing patterns [5, 6] where the partitioning is not Voronoi based [7]. Both convex and non-convex domains have been examined [8]. Many algorithms have been developed for the mapping by UAVs [9–11] relying mostly in Voronoi-based tessellations. Extensive work has also been done in area monitoring by UAVs equipped with cameras [12, 13]. In these pioneering research efforts, there is no maximum allowable height that can be reached by the UAVs and for the case where there is overlapping of the covered areas, this is considered an advantage compared to the same area viewed by a single camera. In this paper the persistent coverage problem of a planar region by a network of UAVs is considered. The UAVs are assumed to have downwards facing visual sensors with a circular sensing footprint, while no assumptions are made about the type of the sensor. The covered area as well as the coverage quality of that area depend on the altitude of each UAV. A partitioning scheme of the sensed region, similar to [7], is employed and a gradient based control law is developed. This control law leads the network to a locally optimal configuration with respect to a combined coverage-quality criterion, while also guaranteeing that the UAVs remain within a desired range of altitudes. This article, compared to [13] assumes the flight of the UAVs within a This work has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Grant Agreement No.644128, AEROWORKS. The authors are with the Electrical & Computer Engineering Department, University of Patras, Rio, Achaia 26500, Greece. Corresponding author’s email: [email protected] given regime and attempts to provide visual information of an area using a single UAV-camera. The problem statement, along with the definition of the coverage-quality criterion and the sensed space partitioning scheme are presented in Section II. In Section III, the distributed control law is derived and its main properties are explained. In Section IV, the partitioning scheme and the control law are examined for two particular coverage quality functions. Finally, in Section V, simulations are provided to show the efficiency of the proposed technique. II. P ROBLEM S TATEMENT A. Main assumptions - Preliminaries Let Ω ⊂ R2 be a compact convex region under surveillance. 
Assume a swarm of n UAVs, each positioned at the spatial coordinates Xi = [xi , yi , zi ]T , i ∈ In , where In = {1, . . . , n}. We also define the vector qi = [xi yi ]T , qi ∈ Ω to note the projection of the center of each UAV on the ground. The minimum and maximum altitudes each UAV can fly to max ], i ∈ I . are zmin and zmax respectively, thus zi ∈ [zmin n i i , zi i The simplified UAV’s kinodynamic model is q̇i = ui,q , qi ∈ Ω, ui,q ∈ R2 , żi = ui,z , zi ∈ [zmin , zmax ], ui,z ∈ R. i i (1) where [ui,q , ui,z ] is the corresponding ‘thrust’ control input for each robot (node). The minimum altitude zmin is used to i ensure the UAVs will fly above ground obstacles, whereas the maximum altitude zmax guarantees that they will not fly i out of range of their base station. In the sequel, all UAVs are assumed to have common minimum zmin and maximum zmax altitudes. As far as the sensing performance of the UAVs (nodes) is concerned, all members are assumed to be equipped with identical downwards pointing sensors with conic sensing patterns. Thus the region of Ω sensed by each node is a disk defined as Cis (Xi , a) = {q ∈ Ω :k q − qi k≤ zi tan a} , i = 1, . . . , n, (2) where a is half the angle of the sensing cone. As shown in Figure 1, the higher the altitude of a UAV, the larger the area of Ω surveyed by its sensor. The coverage quality of each node is in the general case a function fi (q) : R2 → [0, 1] which is dependent on the node’s position Xi as well as its altitude constraints zmin , zmax and the angle a of its sensor. The higher the value of fi (q), the better the coverage quality of point q ∈ R2 , with fi = 1 when zi = zmin and fi = 0 when zi = zmax . It is assumed that as the altitude of a node increases, the visual quality of its sensed area decreases. The properties of fi are examined in Section IV along with some example functions. For each point q ∈ Ω, an importance weight is assigned via the space density function φ : Ω → R+ , encapsulating any a priori information regarding the region of interest. Thus the coverage-quality objective is 4 H = Z max fi (q) φ (q) dq. Ω i∈In (3) In the sequel, we assume φ (q) = 1, ∀q ∈ Ω but the expressions can be easily altered to take into account any a priori weight function. Fig. 2: Examples of an empty cell (node 8) and one consisting of two disjoint regions (node 1). III. S PATIALLY D ISTRIBUTED C OORDINATION A LGORITHM Fig. 1: UAV-visual area coverage concept B. Sensed space partitioning The assignment of responsibility regions to the nodes is done in a manner similar to [7]. Only the subset of Ω sensed by the nodes is partitioned. Each node is assigned a cell 4 Wi = q ∈ Ω : fi (q) ≥ f j (q), j 6= i (4) Based on the nodes kinodynamics (1), their sensing performance (2) and the coverage criterion (5), a gradient based control law is designed. The control law utilizes the partitioning (4) and result in monotonous increase of the covered area. Theorem 1: In a UAV visual network consisting of nodes with sensing performance as in (2), governed by the kinodynamics in (1) and the space partitioning described in Section II-B, the control law  Z Z ∂ fi (q) dq + ni fi (q) dq + ui,q = αi,q  ∂ qi Wi ∂Wi ∩∂ O  Z j6=i H = Z ∑ i∈In Wi fi (q) φ (q) dq. (5) (6) ∂Wi ∩∂W j  with the equality holding true only at the boundary ∂Wi , so that the cells Wi comprise a complete tessellation of the sensed region. Definition 1: We define the neighbors Ni of node i as 4 Ni = j 6= i : Csj ∩Cis 6= 0/ . 
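To make the sensing footprint in (2), the cell assignment in (4) and the neighbor set Ni concrete, the following toy computation assigns grid points of a discretized region to nodes. The node positions, altitudes, half-angle and the placeholder quality function are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Toy sketch: discretize a rectangular Omega and assign every ground point to the
# sensing node of best quality, mimicking the cells W_i; points sensed by nobody
# form the neutral region O.  All numbers below are illustrative assumptions.
a = np.deg2rad(20.0)
z_min, z_max = 0.3, 2.3
nodes = np.array([[0.5, 0.5, 0.8],   # (x_i, y_i, z_i) for each UAV
                  [0.9, 0.6, 1.2],
                  [0.7, 1.0, 0.9]])

def quality(node, q):
    """Placeholder f_i: 1 at z_min, 0 at z_max inside the sensing disk, 0 outside it."""
    x, y, z = node
    inside = np.hypot(q[..., 0] - x, q[..., 1] - y) <= z * np.tan(a)
    f = (z_max - z) / (z_max - z_min)          # decreasing in altitude
    return np.where(inside, f, 0.0)

gx, gy = np.meshgrid(np.linspace(0.0, 2.0, 200), np.linspace(0.0, 2.0, 200))
grid = np.stack([gx, gy], axis=-1)
quals = np.stack([quality(n, grid) for n in nodes])                # (n, 200, 200)
owner = np.where(quals.max(axis=0) > 0, quals.argmax(axis=0), -1)  # -1 marks O

# Neighbors N_i: nodes whose sensing disks overlap with that of node i.
neighbors = [[j for j in range(len(nodes)) if j != i and
              np.hypot(*(nodes[i, :2] - nodes[j, :2]))
              <= (nodes[i, 2] + nodes[j, 2]) * np.tan(a)]
             for i in range(len(nodes))]
print(neighbors)
print(np.bincount((owner + 1).ravel()))   # grid points in O, W_0, W_1, W_2
```

In this sketch, points where every quality value vanishes form the neutral region O, and the neighbor test simply checks whether two sensing disks of radii zi tan a and zj tan a overlap.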
The neighbors of node i are those nodes that sense at least a part of the region that node i senses. It is clear that only the nodes in Ni need to be considered when creating Wi . Remark 1: The aforementioned partitioning is a complete S tessellation of the sensed region i∈In Cis , although it is not a complete tessellation of Ω. Let us denote the neutral region S not assigned by the partitioning scheme as O = Ω \ i∈In Wi . Remark 2: The resulting cells Wi are compact but not necessarily convex. It is also possible that a cell Wi consists of multiple disjoint regions, such as the cell of node 1 shown in yellow in Figure 2. It is also possible that the cell of a node is empty, such as node 8 in Figure 2. Its sensing circle ∂C8s is shown in a dashed black line. By utilizing this partitioning scheme, the network’s coverage performance can be written as  υii ni ( fi (q) − f j (q)) dq ∑ Z ui,z Z = αi,z  tan(a) fi (q) dq + ∂Wi ∩∂ O Z ∑ j6=i Wi ∂ fi (q) dq + ∂ zi   νii · ni ( fi (q) − f j (q)) dq (7) ∂Wi ∩∂W j where αi,q , αi,z are positive constants and ni the outward pointing normal vector of Wi , maximizes the performance criterion (5) monotonically along the nodes’ trajectories, leading in a locally optimal configuration. Proof: Initially we evaluate the time derivative of the optimization criterion H   dH ∂H ∂H =∑ q̇i + żi . dt ∂ zi i∈In ∂ qi . The usage of a gradient based control law in the form ui,q = αi,q ∂H ∂H , ui,z = αi,z ∂ qi ∂ zi will result in a monotonous increase of H . By using the Leibniz integral rule [14] we obtain  ∂H ∂ qi Z = ∑ i∈In υii ni fi (q) dq + υii ni fi (q) dq + = Z Wi ∂Wi Z ∂ fi (q) dq ∂ qi ∂ fi (q) dq + ∂ qi Z Wi ∂Wi Z υ ij n j f j (q) dq +  ∑ Z Wj ∂W j ∂Wi ∩∂ O ∑ ∂ f j (q)  dq ∂ qi ∑ j6=i 4 Since ∂ f j (q) ∂ qi ∂H ∂ qi ∂q , q ∈ ∂W j , i, j ∈ In . ∂ qi j6=i (8) = 0 we obtain Z = υii ni fi (q) dq + Wi ∂Wi Z ∑ j6=i Z υ ij ∂ fi (q) dq + ∂ qi n j f j (q) dq υ ij n j f j (q) dq. ∂W j ∩∂Wi The evaluation of υii on ∂Wi ∩ ∂ O can be found in the Appendix, whereas its evaluation on ∂W j ∩ ∂Wi depends on the choice of fi (q) and is not examined in this section. Because the boundary ∂Wi ∩ ∂W j is common among nodes i and j, it holds true that υ ij = υii when evaluated over it and that n j = −ni . Finally the sums and the integrals within then can be combined, producing the final form of the planar control law Z Z ∂ fi (q) ∂H = ni fi (q) dq + dq + ∂ qi ∂ qi ∂Wi ∩∂ O Wi Z whose three terms indicate how a movement of node i affects the boundary of its cell, its whole cell and the boundaries of the cells of other nodes. It is clear that only the cells W j which have a common boundary with Wi will be affected and only at that common boundary. The boundary ∂Wi can be decomposed in disjoint sets as [ (∂Wi ∩ ∂W j )}. (9) j6=i These sets represent the parts of ∂Wi that lie on the boundary of Ω, the boundary of the node’s sensing region and the parts that are common between the boundary of the cell of node i and those of other nodes. This decomposition can be seen in Figure 3 with the sets ∂Wi ∩ ∂ Ω, ∂Wi ∩ ∂ O and ∂Wi ∩ S j6=i ∂W j appearing in solid red, green and blue respectively. ni fi (q) dq + ∂Wi ∩∂W j ∂W j ∂Wi = {∂Wi ∩ ∂ Ω} ∪ {∂Wi ∩ ∂ O} ∪ { υii Z where υ ij stands for the Jacobian matrix with respect to qi of the points q ∈ ∂W j , υ ij (q) = Wi Z   j6=i common boundary ∂W j ∩∂Wi of node i with any other node j is affected by the movement of node i, ∂∂H qi can be simplified as Z Z ∂H ∂ fi (q) = υii ni fi (q) dq + dq + ∂ qi ∂ qi  υii ni ( fi (q) − f j (q)) dq. 
∑ j6=i ∂W j ∩∂Wi Similarly, by using the same ∂Wi decomposition and 4 defining ν ij (q) = ∂∂zqi , q ∈ ∂W j , i, j ∈ In , the altitude control law is Z Z ∂H ∂ fi (q) tan(a) fi (q) dq + = dq + ∂ zi ∂ zi ∂Wi ∩∂ O Z ∑ j6=i Wi νii · ni ( fi (q) − f j (q)) dq ∂W j ∩∂Wi where the evaluation of νii (q)·ni on ∂Wi ∩∂ O and ∂W j ∩∂Wi can be found in the Appendix. Remark 3: The cell Wi of node i is affected only by its neighbors Ni thus resulting in a distributed control law. The finding of the neighbors Ni depends on their coordinates X j , j ∈ Ni and does not correspond to the classical 2DDelaunay neighbors. The computation of the Ni set demands node i to be able to communicate with all nodes within a sphere centered around Xi and radius ric   2  2 ric = max 2zi tan a, zi + zmin tan2 a + z− zmin , o (zi + zmax )2 tan2 a + (z− zmax )2 . Fig. 3: ∂Wi -decomposition into disjoint sets At q ∈ ∂ Ω it holds that υii = 02×2 since we assume the region of interest is static. Additionally, since only the Figure 4 highlights the case where nodes 2, 3 and 4 are at zmin , z1 and zmin respectively. These are the worst case scenario neighbors of node 1 , the farthest of which dictates the communication range r1c . Remark 4: When zi = zmax , both the planar and altitude control laws are zero because fi (qi ) = 0. This results in the UAV being unable to move any further in the future and additionally its contribution to the coverage-quality objective Fig. 4: Ni neighbor set being zero. However this degenerate case is of little concern, as shown in Sections III-A and III-C. A. Stable altitude The altitude control law ui,z moves each UAV towards an altitude in which ui,z = 0 which corresponds to an equilibrium point for that particular node. We call this the stable altitude zstb i and it is the solution with respect to zi of the equation: C. Degenerate cases ui,z = 0 ⇒ R ∂Wi ∩∂ O ∑ tan(a) fi (q) dq + R j6=i ∂Wi ∩∂W j R ∂ fi (q) Wi ∂ zi dq + νii · ni ( fi (q) − f j (q)) dq = 0. Both integrals over ∂Wi are non-negative since their integrands are non-negative, whereas the integral over Wi is negative since coverage quality decreases as altitude increases. The stable altitude is not common among nodes as it depends on one’s neighbors Ni and is not constant over time since the neighbors change over time. When the integrals over ∂Wi are both zero, the resulting control law has a negative value. This will lead to a reduction of the node’s altitude and in time the node will reach zstb = zmin . Once the node reaches zmin its altitude control law will be 0 until the integral over ∂Wi stops being zero. The planar control law however is unaffected, so the node’s performance in the future is not affected. This situation may arise in a node with several neighboring nodes at lower altitude that result in ∂Cis ∩ ∂Wi = 0. / When the integral over Wi is zero, the resulting control law has a positive value. This will lead to an increase of the node’s altitude and in time the node will reach zstb = zmax and as shown in Remark 4 the node will be immobilized from this time onwards. However this situation will not arise in practice as explained in Section III-C. When the integral over Wi and at least one of the integrals over ∂Wi are non-zero, then zstb ∈ zmin , zmax . B. Optimal altitude It is useful to define an optimal altitude zopt as the altitude a node would reach if: 1) it had no neighbors (Ni = ∅), and 2) no part of Wi on ∂ Ω, (∂ Ω ∩ ∂Cis = ∅). 
This optimal altitude is the solution with respect to zi of the equation Z Z tan(a) fi (q) dq + ∂Cis Wi Additionally, let us denote the sensing region of a node i at s zopt as Ci,opt and Hopt the value of the criterion when all nodes are located at zopt . If Ω = R2 and because the planar control law ui,q results in the repulsion of the nodes, the network will reach a state in which no node will have neighbors and all nodes will be at zopt . In that state, the coverage-quality criterion (5) will have attained it’s maximum possible value Hopt for that particular network configuration and coverage quality function fi . This network configuration will be globally optimal. When Ω is a convex compact subset of R2 , it is possible for the network to reach a state where all the nodes are at s zopt only if n Ci,opt disks can be packed inside Ω. This state will be globally optimal. If that is not the case, the nodes will converge at some altitude other than zopt and in general different among nodes. It should be noted that although the nodes do not reach zopt , the network configuration is locally optimal. ∂ fi (q) dq = 0. ∂ zi It is possible due to the nodes’ initial positions that the sensing disk of some node i is completely contained within the sensing disk of another node j, i.e. Cis ∩Csj = Cis . In such a case, it is not guaranteed that the control law will result in separation of the nodes’ sensing regions and thus it is possible that the nodes do not reach zopt . Instead, node j may converge to a higher altitude and node i to a lower altitude than zopt , while their projections on the ground qi and q j remain stationary. Because the region covered by node i is also covered by node j, the network’s performance is impacted negatively. Since this degenerate case may only arise at the network’s initial state, care must be taken to avoid it during the robots’ deployment. Such a degenerate case is shown in Figure 5 where the sensing disk of node 4 is completely contained within that of node 3. Another case of interest is when some node i is not assigned a region of responsibility, i.e. Wi = 0. / This is due to the existence of other nodes at lower altitude that cover all of Cis with better quality than node i. This is the case with node 8 in Figure 2. This situation is resolved since the nodes at lower altitude will move away from node i and once node i has been assigned a region of responsibility it will also move. It should be noted that the coverage objective H remains continuous even when node i changes from being assigned no region of responsibility to being assigned some region of responsibility. In order for a node to reach zmax , as explained in Section III-A, the integral over Wi of its altitude control law must be zero, that is its cell must consist of just a closed curve without its interior. In order to have Wi = ∂Wi , a second node j must be directly below node i at an infinitesimal distance. However just as node i starts moving upwards the integral over Wi will stop being zero thus changing the stable altitude to some value zstb < zmax . In other words, in order for a node to reach zmax , the configuration described must happen at an altitude infinitesimally smaller than zmax . So in practice, if 1 1 0.8 0.8 0.6 0.6 fip fiu all nodes are deployed initially at an altitude smaller than zmax , no node will reach zmax in the future. 0.4 0.4 0.2 0.2 0 −0.15 −0.1 −0.05 0 x 0.05 0.1 0.15 0 −0.15 Fig. 6: Coverage quality functions: [Right] for different values of b. Fig. 
5: Sensed space partitioning using the coverage quality functions fiu [Left] and fip [Right]. IV. C OVERAGE QUALITY FUNCTIONS The choice of coverage quality function affects both the sensed space partitioning (4) and the control law (7). The function fi , i ∈ In is required to have the following properties 1) fi (q) = 0, ∀q ∈ / Cis , 2) fi (q) ≥ 0, ∀q ∈ ∂Cis , 3) fi (q) is first order differentiable with respect to qi and zi , or ∂ ∂fiq(q) and ∂ ∂fiz(q) exist within Cis , i i 4) fi (q) is symmetric around the z-axis, 5) fi (q) is a decreasing function of zi , 6) fi (q) is a non-increasing function of k q − qi k, 7) fi (qi ) = 1 when zi = zmin and fi (qi ) = 0 when zi = zmax . The need for the first property is clear since the coverage quality with respect to node i should be zero outside Cis . The second guarantees that the integrals of the control law over some part of ∂Cis are not zero and the third is required so that the integrals over Wi can be defined. The fourth property is due to the fact that the sensing patterns are rotation invariant. Two indicative coverage quality functions are examined in the sequel along with the properties of the resulting partitioning schemes and control laws. A. Uniform coverage quality The simplest coverage quality function that can be chosen is the uniform one, i.e. the coverage quality is the same for all points in Cis . This function can effectively model downward facing cameras [15, 16] that provide uniform quality in the whole image. This function will be referred to as fiu from now on and is defined as   2 2 2   zi − zmin − zmax − zmin  , q ∈ Cis fiu (q) = max − zmin )4 (z    0, q∈ / Cis It should be noted that the above definition of a uniform coverage quality function is not unique, just the simplest of the possible ones. A projection of this function on the y−z planecan be seen T in the left portion of Figure 6 for a node at Xi = 0, 0, zmin . It can be shown that fiu (q) = 0 when k q − qi k> zi tan a, i.e. when q is outside the sensing disk. −0.1 −0.05 0 x 0.05 0.1 fiu (q) [Left], 0.15 fip (q) This choice of quality function simplifies the space partitioning since ∂W j ∩ ∂Wi is either an arc of ∂Ci if zi < z j or of ∂C j if zi > z j . In the case where zi = z j , ∂W j ∩ ∂Wi is chosen arbitrarily as the line segment defined by the two intersection points of ∂Ci and ∂C j . The resulting cells consist of circular arcs and line segments. If the sensing disk of a node i is contained within the sensing disk of another node j, i.e. Cis ∩Csj = Cis , then Wi = Cis and W j = Csj \ Cis . An example partitioning with all of the above cases shown can be seen in Figure 5 [Left], where the boundaries of the sensing disks ∂Cis are in red and the boundaries of the cells ∂Wi in black. Nodes 1 and 2 are at the same altitude so the arbitrary partitioning scheme is used. The sensing disk of node 3 contains the sensing disk of node 4 and nodes 5, 6 and 7 illustrate the general case. The control law is also significantly simplified. Since ∂ fi (q) ∂ qi = 0, the corresponding integral of the control law (7) is 0. Additionally, υii and νii ni are evaluated as ( I2 , zi ≤ z j υii = (10) 02 , zi > z j ( tan(a), zi ≤ z j i νi · ni = (11) 0, zi > z j Because both υii and νii · ni are zero when zi > z j ⇒ fiu < f ju , the integrals over ∂Wi ∩ ∂W j are zero for both the planar and altitude control laws when fiu < f ju and as a result need only be evaluated when fiu > f ju . 
The control law essentially maximizes the volume contained by all the cylinders defined by fiu , i ∈ In , under the constraints imposed by the network and area of interest. While the control law in the general case guarantees that zi < zmax , ∀i ∈ In , this particular one also guarantees that zi > zmin , ∀i ∈ In . If zi = zmin , then it follows that ∂ ∂fiz(q) = 0, i resulting in the integral over Wi being zero. Since fiu is positive and only arcs of ∂Wi where fiu > f ju are integrated over, the remaining part of the control law is positive, resulting in the node moving to a higher altitude. B. Decreasing coverage quality A more realistic assumption for some types of visual sensors would be for the coverage quality to decrease as distance from the node increases, to take into account lens distortion for example. The coverage quality of a point q ∈ Cis is maximum directly below the node and decreases as k q − qi k increases. One such function is an inverted paraboloid whose maximum value depends on zi and could be defined as #  " h i  2 2  1− 1−b (x − xi ) + (y − yi ) fiu , p fi (q) = [zi tan(a)]2   0, q ∈ Cis q∈ / Cis where b ∈ (0, 1) is the coverage quality on ∂Ci as a percentage of the coverage quality on qi and fiu is the uniform coverage quality function defined previously and is used to set the maximum value of the paraboloid. A projection of this function on the y − z plane can be seen in the right portion of Figure 6 for a node at Xi = [0, 0, zmax ]T and various bvalues; b ∈ {1, 0.5, 0.2, 0} corresponds to blue, red, green and black colored curves, respectively. It can be shown that fip (q) = 0 when k q − qi k> zi tan a, i.e. when q is outside the sensing disk. Additionally, when k q − qi k= zi tan a, fip = b fiu . For the space partitioning it is necessary to solve fip (q) ≥ p f j (q) in order to find part of ∂W j ∩Wi . The resulting solution on the x−y plane is a disk Ci,i j when zi 6= z j and the halfplane defined by the perpendicular bisector of qi and q j when zi = z j . Finally, the cell of node i with  respectto node j is Wi = Cis ∩Ci,i j if zi < z j and Wi = Cis \ Csj ∩Ci,i j if zi > z j . Figure 7 shows the intersection of two paraboloids fip and f jp with zi < z j and the boundaries ofthe resulting cells above them.  The red line shows ∂Ci,i j ∩ Cis ∩Csj , i.e. the boundary of the intersection disk constrained within the sensing disks of the two nodes, since only for q ∈ Cis ∩Csj can fip (q) = f jp (q). The green arcs represent ∂Wi ∩ ∂Cis ∩W j while the blue ones represent ∂Wi ∩ ∂ O. Thus the boundary of the cell of any node i can be decomposed into disjoint sets as ( ∂Wi = (∂Wi ∩ ∂ O) [  )  (∂Wi ∩ ∂Ci,i j ) ∩ (∂Wi \ ∂Ci,i j ∩W j ) j∈Ni . sensing disks ∂Cis are in red and the cells Wi in black. The sensing disk of node 3 contains the sensing disk of node 4 and nodes 5, 6 and 7 illustrate the general case. For the evaluation of υii and νii · ni the partitioning of ∂Wi explained previously will be used. Since ∀q ∈ ∂Ci,i j ⇒ fip (q) = f jp (q), the integral over ∂Ci,i j is zero. For the remaining part of ∂Wi ∩W j which is ∂Wi \ ∂Ci,i j ∩W j , the evaluation of υii and νii · ni is the same as in the case of the uniform coverage function fiu . For the same reasons as in the uniform coverage quality, the integrals over ∂Wi ∩ ∂W j are zero for both the planar and altitude control laws when fip < f jp and as a result need only be evaluated when fip > f jp . 
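A compact numerical sketch of the two example quality functions may also help. The formulas follow the definitions above as far as they can be reconstructed from the text, while the parameter values are illustrative assumptions.

```python
import numpy as np

a = np.deg2rad(20.0)          # half sensing-cone angle (illustrative)
z_min, z_max = 0.3, 2.3       # altitude limits (illustrative)

def f_uniform(q, qi, zi):
    """Uniform quality f_i^u: constant on the disk, 1 at z_min and 0 at z_max."""
    r = np.hypot(q[..., 0] - qi[0], q[..., 1] - qi[1])
    val = ((zi - z_min) ** 2 - (z_max - z_min) ** 2) ** 2 / (z_max - z_min) ** 4
    return np.where(r <= zi * np.tan(a), val, 0.0)

def f_paraboloid(q, qi, zi, b=0.5):
    """Decreasing quality f_i^p: equals f_i^u at q_i and b*f_i^u on the disk boundary."""
    r = np.hypot(q[..., 0] - qi[0], q[..., 1] - qi[1])
    shape = 1.0 - (1.0 - b) * r ** 2 / (zi * np.tan(a)) ** 2
    return np.where(r <= zi * np.tan(a), shape * f_uniform(q, qi, zi), 0.0)

qi, zi = np.array([0.0, 0.0]), 1.0
q = np.stack([np.linspace(0.0, 0.5, 6), np.zeros(6)], axis=-1)
print(f_uniform(q, qi, zi))      # constant inside the disk, 0 outside
print(f_paraboloid(q, qi, zi))   # decays towards b * f_uniform at the disk boundary
```

The printed values illustrate the boundary behaviour used in the partitioning: the paraboloid-like quality drops to b times the uniform quality on the boundary of the sensing disk.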
The control law essentially leads to the maximization of the volume contained by all the paraboloid-like functions fip , i ∈ In , under the constraints imposed by the network and area of interest. This control law also guarantees that that zi > zmin , ∀i ∈ In in addition to zi < zmax , ∀i ∈ In . Because fip is positive and only arcs of ∂Wi where fip > f jp are integrated over, the integral over ∂Wi is always positive when zi = zmin . It can ∂ f (q) also be shown that ∂pzi > 0, ∀q ∈ Wi when zi = zmin and thus the integral over Wi of ui,z will also be positive. As a result the altitude control law is always positive when zi = zmin resulting in the altitude increase of node i. V. S IMULATION S TUDIES Simulation results of the proposed control law using the uniform coverage quality function fiu are presented in this section. The region of interest Ω is the same as the one used in [3] for consistency. All nodes are identical with a half sensing cone angle a = 20◦ and zi ∈ [0.3, 2.3], ∀i ∈ In . The boundaries of the nodes’ cells are shown in black and the boundaries of their sensing disks in red. Remark 5: When using fiu (q) it is possible to observe jittering on the cells of some nodes i and j. This can happen when zi = z j and the arbitrary boundary ∂Wi ∩ ∂W j is used. Once the altitude of one of the nodes changes slightly, the boundary between the cells will change instantaneously from a line segment to a circular arc. The coverage-quality objective H however will present no discontinuity when this happens. A. Case Study I Fig. 7: ∂Wi -decomposition when fip is used. If the sensing disk of a node i is contained within the sensing disk of another node j, i.e. Cis ∩Csj = Cis , then Wi ⊂ Cis and W j = Csj \Wi . It is important to note that in the general case, ∂W j ∩ ∂Wi consists of circular arcs from both ∂Ci,i j and ∂Cis or ∂Csj (depending on the relationship of zi and z j ). An example partitioning with all of the above cases shown can be seen in Figure 5 [Right], where the boundaries of the In this simulation three nodes start grouped as seen in Figure (8) [Left]. Since the region of interest is large enough s for three optimal disks Ci,opt to fit inside, all the nodes converge at the optimal altitude zopt . As it can be seen in Figure 11,  the area covered by the network is equal to S s and the coverage-quality criterion has reached A C i∈In  i  s Hopt = A i∈In Ci,opt . However since all nodes converged at zopt , the addition of more nodes will result in significantly better performance coverage and quality wise, as is clear from Figure 8 [Right] and Figure 11 [Left]. Figure 9 shows a graphical representation of the coverage quality at the initial and final stages of the simulation. It is essentially a plot of all fiu (q) inside the region of interest. The volume of the S Fig. 8: Initial [Left] and final [Right] network configuration and space partitioning. cylinders in Figure 9 [Right] is the maximum possible. The trajectories of the UAVs in R3 can be seen in Figure 10 in blue and their projections on the region of interest in black. Fig. 9: Initial [Left] and final [Right] coverage quality. why H never reaches Hopt , as seen in Figure 15. It can be clearly seen though from Figure 12 [Right] and Figure 15 [Left] that the network covers a significant portion of Ω with better quality than Case Study I. The volume of the cylinders in Figure 13 [Right] has reached a local optimum. 
The trajectories of the UAVs in R3 can be seen in Figure 14 in blue and their projections on the region of interest in black. It can be seen from the trajectories that the altitude of some nodes was not constantly increasing. This is expected behavior since nodes at lower altitude will increase the stable altitude of nodes at higher altitude they share sensed regions with. Once they no longer share sensed regions, or share a smaller portion, the stable altitude of the upper node will decrease, leading to a decrease in their altitude. Fig. 12: Initial [Left] and final [Right] network configuration and space partitioning. z 1 0.5 0 2 2.5 1.5 2 1.5 1 1 0.5 y 0.5 0 x 0 80 60 60 40 Fig. 13: Initial [Left] and final [Right] coverage quality. 1 z 100 80 (%) 100 H H opt Acov (%) Fig. 10: Node trajectories (blue) and their projections on the sensed region (black). 0.5 40 0 20 20 2 2.5 1.5 0 0 5 10 15 0 0 5 T ime (s) 10 2 15 1.5 1 T ime (s) 1 0.5 A( A (Ω) s i∈In Ci S Fig. 11: ) [Left] and H Hopt y [Right]. B. Case Study II A network of nine nodes, identical to those in Case Study I, is examined in this simulation with an initial configuration as seen in Figure 12 [Left]. The region Ω is not large enough s to contain these nine Ci,opt disks and so the nodes converge opt at different altitudes below z  . This is why the covered area S s never reaches A i∈In Ci , which is larger than A (Ω) and 0.5 0 0 x Fig. 14: Node trajectories (blue) and their projections on the sensed region (black). VI. C ONCLUSIONS Area coverage by a network of UAVs has been studied in this article by use of a combined coverage-quality metric. A partitioning scheme based on coverage quality is employed to assign each UAV an area of responsibility. The proposed 140 100 120 80 (%) 80 H H opt Acov (%) 100 60 60 40 40 20 20 0 0 5 10 T ime (s) 15 0 0 20 A ( i∈In Cis ) A (Ω) 5 S Fig. 15: [Left] and 10 T ime (s) H Hopt 15 20 [Right]. R EFERENCES control law leads the network to a locally optimal configuration which provides a compromise between covered area and coverage quality. It also guarantees that the altitude of all UAVs will remain within a predefined range, thus avoiding potential obstacles while also keeping the UAVs below their maximum operational altitude. Simulation studies are presented to indicate the efficiency of the proposed control algorithm. APPENDIX The parametric equation of the boundary of the sensing disk Cis (Xi , a) defined in (2) is     x xi + zi tan(a) cos(k) γ(k) : = , k ∈ [0, 2π) y yi + zi tan(a) sin(k) We will now evaluate ni , υii (q) and νii (q) on ∂Wi ∩ ∂ O which is always an arc of the circle γ(k). The normal vector ni is given by   cos(k) ni = , k ∈ [0, 2π) sin(k) It can be easily shown that " ∂x ∂ yi ∂y ∂ yi ∂x ∂ xi ∂y ∂ xi υii (q) = # = I2 and similarly that " νii (q) = ∂x ∂ zi ∂y ∂ zi #  = When fi (q) < f j (q), due to the partition scheme, the common sensed region between nodes i and j will be assigned to node j. As a result, ∂Wi ∩ ∂W j will be an arc of ∂Csj which will not change when node i moves. Thus ni , υii (q) and νii (q) are both zero in this case. It is thus concluded that for the integrals over ∂Wi ∩∂W j of node i, only arcs where fi (q) > f j (q) need to be considered when integrating. tan(a) cos(k) tan(a) sin(k)  . It is now clear that νii (q) · ni = tan(a). The evaluation of ni , υii (q) and νii (q) on ∂W j ∩ ∂Wi depends on the choice of f (q) and the partitioning scheme described in Section II-B. 
When fi (q) = f j (q), the evaluation of ni , υii (q) and νii (q) is irrelevant since the corresponding integral will be 0 due to the fi (q) − f j (q) term. When fi (q) > f j (q), due to the partition scheme, the common sensed region between nodes i and j will be assigned to node i. As a result, ∂Wi ∩ ∂W j will be an arc of ∂Cis and will change when node i moves. Thus the evaluation of ni , υii (q) and νii (q) will be the same as their evaluation over ∂Wi ∩ ∂ O described previously. [1] J. Cortés, S. Martinez, and F. Bullo, “Spatially-distributed coverage optimization and control with limited-range interactions,” ESAIM: Control, Optimisation and Calculus of Variations, vol. 11, no. 4, pp. 691–719, 2005. [2] L. Pimenta, V. Kumar, R. Mesquita, and G. Pereira, “Sensing and coverage for a network of heterogeneous robots,” in Proc. of the 47th Conference on Decision and Control, Cancun, Mexico, 2008, pp. 3947–3952. [3] Y. Stergiopoulos and A. Tzes, “Convex Voronoi-inspired space partitioning for heterogeneous networks: A coverage-oriented approach,” IET Control Theory and Applications, vol. 4, no. 12, pp. 2802–2812, 2010. [4] O. Arslan and D. E. Koditschek, “Voronoi-based coverage control of heterogeneous disk-shaped robots,” in Proc. of the IEEE International Conference on Robotics and Automation, (ICRA 2016), 2016, pp. 4259–4266, 2016. [5] Y. Stergiopoulos and A. Tzes, “Spatially distributed area coverage optimisation in mobile robotic networks with arbitrary convex anisotropic patterns,” Automatica, vol. 49, no. 1, pp. 232–237, 2013. [6] Y. Kantaros, M. Thanou, and A. Tzes, “Distributed coverage control for concave areas by a heterogeneous robot-swarm with visibility sensing constraints,” Automatica, vol. 53, pp. 195–207, 2015. [7] Y. Stergiopoulos and A. Tzes, “Cooperative positioning/orientation control of mobile heterogeneous anisotropic sensor networks for area coverage,” in Proc. IEEE International Conference on Robotics & Automation, Hong Kong, China, 2014, pp. 1106–1111. [8] Y. Stergiopoulos, M. Thanou, and A. Tzes, “Distributed collaborative coverage-control schemes for non-convex domains,” IEEE Transactions on Automatic Control, vol. 60, no. 9, pp. 2422–2427, 2015. [9] A. Renzaglia, L. Doitsidis, E. Martinelli, and E. Kosmatopoulos, “Muti-robot three-dimensional coverage of unknwon areas,” The International Journal of Robotics Research, vol. 31, no. 6, pp. 738–752, 2012. [10] A. Breitenmoser, J. Metzger, R. Siegwart, and D. Rus, “Distributed coverage control on surfaces in 3D space,” in Proc. of the 2010 IEEE International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 2010, pp. 5569–5576. [11] M. Thanou and A. Tzes, “Distributed visibility-based coverage using a swarm of uavs in known 3D-terrains,” in Proc. of the International Symposium on Communications, Control and Signal Processing (ISCCSP 2014), Athens, Greece, 2014, pp. 458–461. [12] M. Schwager, B. J. Julian, and D. Rus, “Optimal coverage for multiple hovering robots with downward facing cameras,” in Proc. of the IEEE International Conference on Robotics and Automation, (ICRA 2009), 2009, pp. 3515–3522. [13] M. Schwager, B. J. Julian, M. Angermann, and D. Rus, “Eyes in the sky: Decentralized control for the deployment of robotic camera networks,” Proceedings of the IEEE, vol. 99, no. 9, pp. 1541–1561, 2011. [14] H. Flanders, “Differentiation under the integral sign,” American Mathematical Monthly, vol. 80, no. 6, pp. 615–627, 1973. [15] C. Di Franco and G. 
Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method

Xu Sun, Xuancheng Ren, Shuming Ma, Bingzhen Wei, Wei Li, and Houfeng Wang

arXiv:1711.06528v1 [cs.LG] 17 Nov 2017

Abstract—We propose a simple yet effective technique to simplify the training and the resulting model of neural networks. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction in the computational cost. Based on the sparsified gradients, we further simplify the model by eliminating the rows or columns that are seldom updated, which reduces the computational cost both in training and in decoding, and potentially accelerates decoding in real-world applications. Surprisingly, experimental results demonstrate that most of the time we only need to update fewer than 5% of the weights at each back propagation pass. More interestingly, the accuracy of the resulting models is actually improved rather than degraded, and a detailed analysis is given. The model simplification results show that we could adaptively simplify the model, which could often be reduced by around 9x, without any loss of accuracy or even with improved accuracy.

Index Terms—neural network, back propagation, sparse learning, model pruning.

1 INTRODUCTION

Neural network learning is typically slow, and back propagation usually dominates the computational cost during the learning process. Back propagation entails a high computational cost because it needs to compute full gradients and update all model parameters in each learning step. It is not uncommon for a neural network to have a massive number of model parameters.

In this study, we propose a minimal effort back propagation method, which we call meProp, for neural network learning. The idea is that we compute only a very small but critical portion of the gradient information, and update only the corresponding minimal portion of the parameters in each learning step. This leads to sparsified gradients, such that only highly relevant parameters are updated, while other parameters stay untouched. The sparsified back propagation leads to a linear reduction in the computational cost.

On top of meProp, we further propose to simplify the trained model by eliminating the less relevant parameters discovered during meProp, so that the computational cost of decoding can also be reduced. We name the method meSimp (minimal effort simplification). The idea is that we record which portion of the parameters is updated at each learning step in meProp, and gradually remove the parameters that are less updated. This leads to a simplified model that costs less in computation during decoding, while meProp can only speed up the training of the neural networks.

• X. Sun, X. Ren, S. Ma, B. Wei, W. Li, and H. Wang are with the School of Electronics Engineering and Computer Science, Peking University, China, and the MOE Key Laboratory of Computational Linguistics, Peking University, China. E-mail: {xusun, renxc, shumingma, weibz, liweitj47, wanghf}@pku.edu.cn. The first two authors contributed equally to this work. This work is a substantial extension of the work presented at ICML 2017 [1]. The codes are available at https://github.com/jklj077/meProp.
One of the motivations for such a method is that, if we suppose back propagation can determine the importance of input features, then with meProp the essential features are well-trained and the non-essential features are less-trained, so that the robustness of the models can be improved and overfitting can be reduced. As the essential features play a more important role in the final model, there is a chance that the parameters related to non-essential features can be eliminated, which leads to the idea of meSimp. For a classification task, there are essential features that are decisive in the classification, non-essential features that are helpful but can also be distractions, and irrelevant features that are not useful at all. For example, when classifying a picture as a taxi, the taxi sign is one of the essential features, and the color yellow, which is often the color of a taxi, is one of the non-essential features. Overfitting often occurs when the non-essential features are given too much importance in the model, while meProp intentionally focuses on training the probably essential features to lessen the risk of overfitting.

To realize our approaches, we need to answer four questions. The first question is how to find the highly relevant subset of the parameters from the current sample in stochastic learning. We propose a top-k search method to find the most important parameters. Interestingly, experimental results demonstrate that most of the time we only need to update fewer than 5% of the weights at each back propagation pass. This does not result in a larger number of training iterations. The proposed method is general-purpose and it is independent of specific models and specific optimizers (e.g., Adam and AdaGrad).

Fig. 1. An illustration of meProp.

The second question is whether or not this minimal effort back propagation strategy would hurt the accuracy of the trained models. We show that our strategy does not degrade the accuracy of the trained model, even when a very small portion of the parameters is updated. More interestingly, our experimental results reveal that our strategy actually improves the model accuracy in most cases. Based on our experiments, we find that this is probably because the minimal effort update does not modify weakly relevant parameters in each update, in accordance with our assumption, which makes overfitting less likely, similar to the dropout effect.

The third question is whether or not the decoding cost of the model can be reduced, as meProp can only shorten the training time. Based on meProp, we further apply the technique of meSimp. From our observations, the simplifying strategy can indeed shrink the final model, usually by around 9x, without any loss of accuracy. It also supports our assumption that, in fact, many learned features are not essential to the final correct prediction.

The final question is whether or not the size of the simplified models needs to be set explicitly in advance. In most previous work, the final model size is pre-configured as desired or using heuristic rules, making it hard to simplify models with multiple layers, because naturally each layer should have a different dimension, since it captures a different level of abstraction. In practice, we find that meSimp can adaptively reduce the size of the hidden layers, and automatically decide which features are essential for the task at different abstraction levels, resulting in a model with different hidden layer sizes.
The contributions of this work are as follows:

• We propose a minimal effort back propagation technique for neural network learning, which can automatically find the most important features. Only a small subset of the full gradient is computed to update the model parameters, and it is used to determine whether the related parameters should be kept in the final model.

• Applying the technique to training simplification (meProp), we find that the strategy actually improves the accuracy of the resulting models rather than degrading it, even though fewer than 5% of the weights are updated at each back propagation pass most of the time. The technique does not entail a larger number of training iterations, and can reduce the training time substantially.

• Most importantly, applying the technique to model simplification (meSimp) could potentially reduce the time of decoding. With the ability to adaptively simplify each layer of the model to keep only the essential features, the resulting model can be reduced to around one ninth of its original size, which equals an around 9x reduction in decoding cost, with no loss of accuracy or even with improved accuracy. It is worth mentioning that, when applied to models with multiple layers, given a single hyper-parameter, meSimp can simplify each hidden layer to a different extent, alleviating the need to set different hyper-parameters for different layers.

• The minimal effort back propagation technique can be applied to different types of deep learning models (MLP and LSTM), can be applied with various optimization methods (Adam and AdaGrad), and works on diverse tasks (natural language processing and image recognition).

2 PROPOSED METHOD

We propose a simple yet effective technique for neural network learning. The forward propagation is computed as usual. During back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are "quantized" so that only the top-k components in terms of magnitude are kept. Based on the technique, we further propose to simplify the resulting models by removing the rows that are seldom updated, according to the top-k indices. The model is simplified in such a way that only actively updated rows are kept. We first present the proposed methods, and then describe the implementation details.

2.1 Simplified Back Propagation (meProp)

Forward propagation of neural network models, including feedforward neural networks, RNNs, and LSTMs, consists of linear transformations and non-linear transformations. For simplicity, we take a computation unit with one linear transformation and one non-linear transformation as an example:

y = Wx    (1)

z = σ(y)    (2)

where W ∈ R^{n×m}, x ∈ R^m, y ∈ R^n, z ∈ R^n, m is the dimension of the input vector, n is the dimension of the output vector, and σ is a non-linear function (e.g., relu, tanh, and sigmoid).

Fig. 2. An illustration of the computational flow of meProp.

During back propagation, we need to compute the gradient of the parameter matrix W and the input vector x:

∂z/∂W_{ij} = σ′_i x_j^T    (1 ≤ i ≤ n, 1 ≤ j ≤ m)    (3)

∂z/∂x_i = Σ_j W_{ij}^T σ′_j    (1 ≤ j ≤ n, 1 ≤ i ≤ m)    (4)

where σ′ ∈ R^n and σ′_i means ∂z_i/∂y_i. We can see that the computational cost of back propagation is directly proportional to the dimension of the output vector n.

The proposed meProp uses approximate gradients by keeping only the top-k elements based on the magnitude values. That is, only the top-k elements with the largest absolute values are kept.
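In code, this selection is just a masking operation: everything except the k largest-magnitude entries of the gradient vector is zeroed out. The snippet below is a minimal sketch using NumPy; the helper name topk_mask is ours and is not part of our released implementation.

import numpy as np

def topk_mask(v, k):
    # Keep the k largest-magnitude entries of v and zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out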
For example, suppose a vector v = ⟨1, 2, 3, −4⟩; then top_2(v) = ⟨0, 0, 3, −4⟩. We denote the indices of vector σ′(y)'s top-k values as S = {t_1, t_2, ..., t_k} (1 ≤ k ≤ n), and the approximate gradients of the parameter matrix W and input vector x are:

∂z/∂W_{ij} ← σ′_i x_j^T if i ∈ {t_1, t_2, ..., t_k} else 0    (5)

∂z/∂x_i ← Σ_j W_{ij}^T σ′_j if j ∈ {t_1, t_2, ..., t_k} else 0    (6)

As a result, only k rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction (k divided by the vector dimension) in the computational cost. The algorithm is described in Algorithm 1.

Figure 1 is an illustration of meProp for a single computation unit of neural models. The original back propagation uses the full gradient of the output vectors to compute the gradient of the parameters. The proposed method selects the top-k values of the gradient of the output vector, and back propagates the loss through the corresponding subset of the total model parameters.

Algorithm 1 Backward Propagation Simplification for a Computation Unit
1: z ← σ(Wx)    ⊲ Forward propagation
2: σ′ ← ∂z/∂y    ⊲ Gradient of z w.r.t. y = Wx
3: S ← {t_1, t_2, ..., t_k}    ⊲ Indices of the k largest derivatives in σ′ by magnitude
4: ∂z/∂W_{ij} ← σ′_i x_j^T if i ∈ S else 0
5: ∂z/∂x_i ← Σ_j W_{ij}^T σ′_j if j ∈ S else 0

As for a complete neural network framework with a loss L, the original back propagation computes the gradient of the parameter matrix W as:

∂L/∂W = ∂L/∂y · ∂y/∂W    (7)
The second is that the reduction in dimension could lead to performance degradation. Surprisingly, from our experimental results, our top-k gradient based method does not deteriorate the model. Instead, with an appropriate configuration, the resulting smaller model often performs better than the baseline large model, or even the baseline model of the similar size. As a matter of fact, after pruning the performance does drop. However, with the following training, the performance is regained. In what follows, we will briefly introduce the inspiration of the proposed method, and how the model simplification is done. In the experiments of meProp, we discover an interesting phenomenon that during training, apart from the active paths with top-k gradients, there are some inactive paths that are not activated at all for any of the examples. We call 5 these paths universal inactive paths. These neurons are not updated at all during training, and their parameter values remain the same as their initialized values, and we have every reason to believe that they would not be effective for new samples as well. However, the number of those paths may not be enough to bring a substantial contraction to the model. Algorithm 2 Model Simplification for A Computation Unit 1: Initialize W , t ← 0, c ← 0 2: while training do 3: Draw (x, y) from training data 4: z ← σ(W x) ⊲ Forward propagation ′ ∂z T ⊲ Gradient of W w.r.t. z 5: ∂W ← topk (σ )x 6: S ← {t1 , t2 , ..., tk } ⊲ indices of k largest derivatives ′ of σ in magnitude 7: ci ← increase if i in S ⊲ Record the top-k indices ∂z 8: Update W with ∂W 9: if t mod m = 0 then ⊲ Prune inactive paths 10: θ ← m × prune rate 11: for all i where ci < θ do 12: Remove row i from W 13: end for 14: c←0 ⊲ Reset c 15: end if 16: t←t+1 17: end while Based on the previous findings, we generalize the idea of universal inactive paths, and prune the paths that are less updated, that is, the paths we eliminate are the paths that are not active for a number of samples. To realize the idea, we keep a record c of how many times the index is in the top-k indices S , during the back propagation at the same time as meProp. After several training steps m, we take out the less active paths that are not updated for a number of samples, e.g. 90% prune rate, which results in a simplified model. The record is cleared at each pruning action. By doing that iteratively, the model size will approach near stable in the end. Algorithm 2 describes the method for a computation unit, and an illustration is shown in Figure 4. An important hyper-parameter for the method is the pruning threshold. When determining the threshold, the model size and the number of examples between pruning actions should be taken into account. As shown in Algorithm 2, the threshold could be parameterized by prune interval, that is, how many samples between pruning, and prune rate, that is, how active the path should be if it is not to be eliminated. Note that the layer sizes are determined adaptively in a multi-layer setting, and only one threshold is needed for a model with multiple layers to have different layer sizes. Because the top-k indices of different layers at different iterations intersect differently in back propagation. For some layers, the top-k indices are similar, hence results in a larger layer size, compared to k . For other layers, the top-k indices are quite different at each iteration, so that the intersection happens more often, which means ci is lower, hence the resulting layer size is smaller. 
How the layer is simplified depends on how the learning is done, which is in accordance with our intuition. forward propaga!on (original) back propaga!on (meProp) model simplifica!on ac!veness collected from mul!ple examples forward propaga!on (simplified) inac!ve neurons are eliminated back propaga!on (simplified & meProp) Fig. 4. An illustration of model simplification (k = 2). The figure shows the three main stages of the simplified model training. First, as the upper shows, the model is trained using meProp, for several iterations, and a record of the “activeness” of the paths is kept, indicated by the shades of the neurons. Second, as the middle shows, the model is simplified based on the collected record, and the inactive paths are eliminated. Third, as the lower shows, we train the simplified model also using meProp. We repeat the procedure until the goal of the training is met. In a deep neural network, it’s worth noticing that when simplifying a hidden layer, the respective columns in the “next” layer could also be removed, as the values in the columns represent the connection between the eliminated inputs and the outputs, which is no longer effective. That could reduce the model even further. However, we have not include that in our implementation yet. There are some extra considerations for LSTM models. In an LSTM, there is a lasting linear memory, and four gates controlling the modifying of the memory cells. It makes sense only if the pruning is for the memory cells instead of the gates, which are the computation units defined previously, because there is coherence between the memory and 6 the gates. Otherwise, the pruning would cause chaos and mismatch of dimensions, as each gate is of its own size, and the memory is of another dimension if it is set to the union of the gates. For LSTM models, we treat an LSTM module as a whole unit for model simplification, instead of treating each gate in an LSTM module as a unit for simplification. However, the top-k gradient selection takes place as the level of gates rather than memory cells. In practice, we still obtain the top-k indices from the gates, but we merge the top-k indices records of the gates into one record, and the pruning is for memory cells, so that the related gates are pruned as well. For model simplification, we also propose a kind of cycle mechanism. During our experiments, we find that at the time of the simplification, there is a drop in performance, but it recovers quickly within the following training, and may even supersede the performance before the simplification. It makes us wonder whether the training after simplification is critical to the performance improvement. We propose to divide the training procedure into several stages, and in each stage, we first conduct the training of model simplification, and then conduct the normal training. At the start of each stage, we also reinitialize the optimizer, if there is historical information of the gradients stored. The reason for such operation is that after model simplification, the dynamics of how the neurons interacted with each other changed, and the previous gradient information may interfere with the new dynamics of the simplified network. We find this cycle mechanism could improve the resulting model’s performance even further on some tasks. 2.3 Implementation We have coded two neural network models, including an LSTM model for part-of-speech (POS) tagging, and a feedforward NN model (MLP) for transition-based dependency parsing and MNIST image recognition. 
We use the optimizers with automatically adaptive learning rates, including Adam [2] and AdaGrad [3]. In our implementation, we make no modification to the optimizers, although there are many zero elements in the gradients. Most of the experiments on CPU are conducted on the framework coded in C# on our own. This framework builds a dynamic computation graph of the model for each sample, making it suitable for data in variable lengths. A typical training procedure contains three parts: forward propagation, back propagation, and parameter update. We also have an implementation based on the PyTorch framework for GPU based experiments. To focus on the method itself, the results of GPU based experiments will be presented in appendices. 2.3.1 Where to apply top-k selection The proposed method aims to reduce the complexity of the back propagation by reducing the elements in the computationally intensive operations. In our preliminary observations, matrix-matrix or matrix-vector multiplication consumed more than 90% of the time of back propagation. In our implementation, we apply meProp only to the back propagation from the output of the multiplication to its inputs. For other element-wise operations (e.g., activation functions), the original back propagation procedure is kept, because those operations are already fast enough compared with matrix-matrix or matrix-vector multiplication operations. If there are multiple hidden layers, the top-k sparsification needs to be applied to every hidden layer, because the sparsified gradient will again be dense from one layer to another. That is, in meProp the gradients are sparsified with a top-k operation at the output of every hidden layer. While we apply meProp to all hidden layers using the same k of top-k , usually the k for the output layer could be different from the k for the hidden layers, because the output layer typically has a very different dimension compared with the hidden layers. For example, there are 10 tags in the MNIST task, so the dimension of the output layer is 10, and we use an MLP with the hidden dimension of 500. Thus, the best k for the output layer could be different from that of the hidden layers. 2.3.2 Choice of top-k algorithms In our C# implementation, instead of sorting the entire vector, we use the well-known min-heap based top-k selection method, which is slightly changed to focus on memory reuse. The algorithm has a time complexity of O(n log k) and a space complexity of O(k). PyTorch comes with a GPU implementation of a certain paralleled top-k algorithm, which we are not sure how the operation is done exactly. 3 E XPERIMENTS To demonstrate that the proposed method is generalpurpose, we perform experiments on different models (LSTM/MLP), various training methods (Adam/AdaGrad), and diverse tasks. Transition-based Dependency Parsing (Parsing): Following prior work, we use English Penn TreeBank (PTB) [4] for evaluation. We follow the standard split of the corpus and use sections 2-21 as the training set (39,832 sentences, 1,900,056 transition examples),1 section 22 as the development set (1,700 sentences, 80,234 transition examples) and section 23 as the final test set (2,416 sentences, 113,368 transition examples). The evaluation metric is unlabeled attachment score (UAS). We implement a parser using MLP following [5], which is used as our baseline. 
Part-of-Speech Tagging (POS-Tag): We use the standard benchmark dataset in prior work [6], which is derived from the Penn Treebank corpus, and use sections 0-18 of the Wall Street Journal (WSJ) for training (38,219 examples), and sections 22-24 for testing (5,462 examples). The evaluation metric is per-word accuracy. A popular model for this task is the LSTM model [7],2 which is used as our baseline. MNIST Image Recognition (MNIST): We use the MNIST handwritten digit dataset [8] for evaluation. MNIST consists of 60,000 28×28 pixel training images and additional 10,000 test examples. Each image contains a single numerical digit (0-9). We select the first 5,000 images of the training images as the development set and the rest as the 1. A transition example consists of a parsing context and its optimal transition action. 2. In this work, we use the bi-directional LSTM (Bi-LSTM) as the implementation of LSTM. 7 TABLE 1 meProp results based on LSTM/MLP models and Adam optimizers. Time means averaged time per iteration. Iter means the number of iterations to reach the optimal score on development data. The model of this iteration is then used to obtain the test score. As we can see, applying meProp can substantially speedup the back propagation with improved accuracy. Parsing (Adam) MLP (h=500) meProp (k=20) POS-Tag (Adam) LSTM (h=500) meProp (k=10) MNIST (Adam) MLP (h=500) meProp (k=80) Iter 10 6 Iter 3 4 Iter 13 14 Backprop time (s) 9,077.7 488.7 (18.6x) Backprop time (s) 16,167.3 435.6 (37.1x) Backprop time (s) 169.5 28.7 ( 5.9x) training set. The evaluation metric is per-image accuracy. We use the MLP model as the baseline. Following common practice, we use ReLU [9] as the activation function of the hidden layers. 3.1 Experimental Settings We set the dimension of the hidden layers to 500 for all the tasks. For Parsing, the input dimension is 48 (features) × 50 (dim per feature) = 2400, and the output dimension is 25. For POS-Tag, the input dimension is 1 (word) × 50 (dim per word) + 7 (features) × 20 (dim per feature) = 190, and the output dimension is 45. For MNIST, the input dimension is 28 (pixels per row) × 28 (pixels per column) × 1 (dim per pixel) = 784, and the output dimension is 10. Based on the development set and prior work, we set the mini-batch size to 10,000 (transition examples), 1 (sentence), and 10 (images) for Parsing, POS-Tag, and MNIST, respectively. Using 10,000 transition examples for Parsing follows [5]. As discussed in Section 2, the optimal k of topk for the output layer could be different from the hidden layers, because their dimensions could be very different. For Parsing and MNIST, we find using the same k for the output and the hidden layers works well, and we simply do so. For another task, POS-Tag, we find the the output layer should use a different k from the hidden layers. For simplicity, we do not apply meProp to the output layer for POS-Tag, because in this task we find the computational cost of the output layer is almost negligible compared with other layers. In the experiments of model simplification, we use the Adam optimizer for all the tasks, for the sake of simplicity. In addition, we also apply the cycle mechanism in the reported results. Note that, to simulate the real scenario, we run each configuration 5 times with different random seeds, and choose the best model on development set to report. The hyperparameters are tuned based on the development data. 
For the Adam optimization method, we find the default hyperparameters work well on development sets, which are as follows: the learning rate α = 0.001, and β1 = 0.9, β2 = 0.999, ǫ = 1 × 10−8 . The experiments on CPU are conducted on a computer with the INTEL(R) Xeon(R) 3.0GHz CPU. The experiments on GPU are conducted on NVIDIA GeForce GTX 1080. 3.2 Experimental Results of meProp In this experiment, the LSTM is based on one hidden layer and the MLP is based on two hidden layers (experiments Dev UAS (%) 90.48 89.91 Dev Acc (%) 97.20 97.14 Dev Acc (%) 98.72 98.36 Test UAS (%) 89.80 89.84 (+0.04) Test Acc (%) 97.22 97.25 (+0.03) Test Acc (%) 98.20 98.27 (+0.07) on more hidden layers will be presented later). We conduct experiments on different optimization methods, including AdaGrad and Adam. Since meProp is applied to the linear transformations (which entail the major computational cost), we report the linear transformation related backprop time as Backprop Time. It does not include non-linear activations, which usually have only less than 2% computational cost. The total time of back propagation, including nonlinear activations, is reported as Overall Backprop Time. Table 1 shows the results based on different models and different optimization methods. In the table, meProp means applying meProp to the corresponding baseline model, h = 500 means that the hidden layer dimension is 500, and k = 20 means that meProp uses top-20 elements (among 500 in total) for back propagation. Note that, for fair comparisons, all experiments are first conducted on the development data and the test data is not observable. Then, the optimal number of iterations is decided based on the optimal score on development data, and the model of this iteration is used upon the test data to obtain the test scores. As we can see, applying meProp can substantially speed up the back propagation. It provides a linear reduction in the computational cost. Surprisingly, results demonstrate that we can update only fewer than 5% of the weights at each back propagation pass for the natural language processing tasks. This does not result in a larger number of training iterations. More surprisingly, the accuracy of the resulting models is actually improved rather than decreased. The main reason could be that the minimal effort update does not modify weakly relevant parameters, which makes overfitting less likely, similar to the dropout effect. 3.2.1 Result Analysis of meProp Changing Optimizer TABLE 2 meProp: Results using AdaGrad optimizers. We can see that meProp also works with AdaGrad optimizers, indicating that meProp is independent of optimizers. Parsing (AdaGrad) MLP (h=500) meProp (k=20) POS-Tag (AdaGrad) LSTM (h=500) meProp (k=5) MNIST (AdaGrad) MLP (h=500) meProp (k=10) Iter 11 8 Iter 4 4 Iter 8 16 Test UAS (%) 88.92 88.95 (+0.03) Test Acc (%) 96.93 97.25 (+0.32) Test Acc (%) 97.52 98.00 (+0.48) 8 100 98 97.9 97.8 97.6 0 meProp MLP 20 40 60 80 Backprop Ratio (%) 100 98 98 97 96 96 Accuracy (%) Accuracy (%) Accuracy (%) 98.1 97.7 MNIST: Change h/k MNIST: Topk vs Random MNIST: Reduce Overfitting 98.2 94 92 90 Topk meProp Random meProp 88 86 0 5 10 Backprop Ratio (%) 95 94 93 92 0 15 meProp MLP 91 2 4 6 8 Backprop/Hidden Ratio (%) 10 Fig. 5. Accuracy vs. meProp’s backprop ratio (left). Results of top-k meProp vs. random meProp (middle). Results of top-k meProp vs. baseline with the hidden dimension h (right). TABLE 3 meProp: Results based on the same k and h. 
It can be concluded that meProp does not rely on redundant neurons, as the model of the small hidden dimension works much worse. Parsing (Adam) MLP (h=20) meProp (k=20) POS-Tag (Adam) LSTM (h=5) meProp (k=5) MNIST (Adam) MLP (h=20) meProp (k=20) Iter 18 6 Iter 7 5 Iter 15 17 Test UAS (%) 88.37 90.01 (+1.64) Test Acc (%) 96.40 97.12 (+0.72) Test Acc (%) 95.77 98.01 (+2.24) It is important to see whether meProp can be applied with different optimizers, because the minimal effort technique sparifies the gradient, which affects the update of the parameters. For the AdaGrad learner, the learning rate is set to α = 0.01, 0.01, 0.1 for Parsing, POS-Tag, and MNIST, respectively, and ǫ = 1 × 10−6 . As shown in Table 2, the results are consistent among AdaGrad and Adam. The results demonstrate that meProp is independent of specific optimization methods. For simplicity, the following experiments use Adam. Varying Backprop Ratio In Figure 5 (left), we vary the k of top-k meProp to compare the test accuracy on different ratios of meProp backprop. For example, when k=5, it means that the backprop ratio is 5/500=1%. The optimizer is Adam. As we can see, meProp achieves consistently better accuracy than the baseline. Top-k vs. Random It will be interesting to check the role of top-k elements. Figure 5 (middle) shows the results of top-k meProp vs. random meProp. The random meProp means that random elements (instead of top-k ones) are selected for back propagation. As we can see, the top-k version works better than the random version. It suggests that top-k elements contain the most important information of the gradients. Varying Hidden Dimension We still have a question: does the top-k meProp work well simply because the original model does not require that big dimension of the hidden layers? For example, the meProp (topk=5) works simply because the LSTM works well with the hidden dimension of 5, and there is no need TABLE 4 meProp: Varying the number of hidden layers on the MNIST task. The experiments demonstrate that meProp can also be applied to traditional deep models. Layers 2 3 4 5 Method MLP (h=500) meProp (k=25) MLP (h=500) meProp (k=25) MLP (h=500) meProp (k=25) MLP (h=500) meProp (k=25) Test Acc (%) 98.10 98.20 (+0.10) 98.21 98.37 (+0.16) 98.10 98.15 (+0.05) 98.05 98.21 (+0.16) to use the hidden dimension of 500. To examine this, we perform experiments on using the same hidden dimension as k , and the results are shown in Table 3. As we can see, however, the results of the small hidden dimensions are much worse than those of meProp. In addition, Figure 5 (right) shows more detailed curves by varying the value of k . In the figure, different k gives different backprop ratio for meProp and different hidden dimension ratio for LSTM/MLP. As we can see, the answer to that question is negative: meProp does not rely on redundant hidden layer elements. Adding More Hidden Layers Another question is whether or not meProp relies on shallow models with only a few hidden layers. To answer this question, we also perform experiments on more hidden layers, from 2 hidden layers to 5 hidden layers. We find setting the dropout rate to 0.1 works well for most cases of different numbers of layers. For simplicity of comparison, we set the same dropout rate to 0.1 in this experiment. Table 4 shows that adding the number of hidden layers does not hurt the performance of meProp. 
Adding Dropout Since we have observed that meProp can reduce overfitting of deep learning, a natural question is that if meProp is reducing the same type of overfitting risk as dropout. Thus, we use development data to find a proper value of the dropout rate on those tasks, and then further add meProp to check if further improvement is possible. Table 5 shows the results. As we can see, meProp can achieve further improvement over dropout. In particular, meProp has an improvement of 0.46 UAS on Parsing. The results suggest that the type of overfitting that meProp 9 TABLE 5 meProp: Adding the dropout technique. As the results show, meProp could further improve the performance on top of dropout, suggesting that meProp is reducing a different type of overfitting, comparing to dropout. Parsing (Adam) MLP (h=500) meProp (k=40) POS-Tag (Adam) LSTM (h=500) meProp (k=20) MNIST (Adam) MLP (h=500) meProp (k=25) Dropout 0.5 0.5 Dropout 0.5 0.5 Dropout 0.2 0.2 Test UAS (%) 91.53 91.99 (+0.46) Test Acc (%) 97.20 97.31 (+0.11) Test Acc (%) 98.09 98.32 (+0.23) TABLE 6 Results of simple unified top-k meProp based on a whole mini-batch (i.e., unified sparse patterns). The optimizer is Adam. Mini-batch Size is 50. Layers 2 5 Method MLP (h=500) meProp (k=30) MLP (h=500) meProp (k=50) Test Acc (%) 97.97 98.08 (+0.11) 98.09 98.36 (+0.27) reduces is probably different from that of dropout. Thus, a model should be able to take advantage of both meProp and dropout to reduce overfitting. Speedup on GPU For implementing meProp on GPU, the simplest solution is to treat the entire mini-batch as a “big training example”, where the top-k operation is based on the averaged values of all examples in the mini-batch. In this way, the big sparse matrix of the mini-batch will have consistent sparse patterns among examples, and this consistent sparse matrix can be transformed into a small dense matrix by removing the zero values. We call this implementation as simple unified top-k . This experiment is based on PyTorch. Despite its simplicity, Table 6 shows the good performance of this implementation, which is based on the minibatch size of 50. We also find the speedup on GPU is less significant when the hidden dimension is low. The reason is that our GPU’s computational power is not fully consumed by the baseline (with small hidden layers), so that the normal back propagation is already fast enough, making it hard for meProp to achieve substantial speedup. For example, supposing a GPU can finish 1000 operations in one cycle, there could be no speed difference between a method with 100 and a method with 10 operations. Indeed, we find MLP (h=64) and MLP (h=512) have almost the same GPU speed even on forward propagation (i.e., without meProp), while theoretically there should be an 8x difference. With GPU, the forward propagation time of MLP (h=64) and MLP (h=512) is 572ms and 644ms, respectively. This provides evidence for our hypothesis that our GPU is not fully consumed with the small hidden dimensions. Thus, the speedup test on GPU is more meaningful for the heavy models, such that the baseline can at least fully consume the GPU’s computational power. To check this, we test the GPU speedup on synthetic data of matrix multiplication with a larger hidden dimension. Indeed, Table 7 shows that meProp achieves much higher speed than TABLE 7 Acceleration results on the matrix multiplication synthetic data using GPU. The batch size is 1024. 
Method Baseline (h=8192) meProp (k=8) meProp (k=16) meProp (k=32) meProp (k=64) meProp (k=128) meProp (k=256) meProp (k=512) Backprop time (ms) 308.00 8.37 (36.8x) 9.16 (33.6x) 11.20 (27.5x) 14.38 (21.4x) 21.28 (14.5x) 38.57 ( 8.0x) 69.95 ( 4.4x) TABLE 8 Acceleration results on MNIST using GPU. Method MLP (h=8192) meProp (k=8) meProp (k=16) meProp (k=32) meProp (k=64) meProp (k=128) meProp (k=256) meProp (k=512) Overall backprop time (ms) 17,696.2 1,501.5 (11.8x) 1,542.8 (11.5x) 1,656.9 (10.7x) 1,828.3 ( 9.7x) 2,200.0 ( 8.0x) 3,149.6 ( 5.6x) 4,874.1 ( 3.6x) the traditional backprop with the large hidden dimension. Furthermore, we test the GPU speedup on MLP with the large hidden dimension [10]. Table 8 shows that meProp also has substantial GPU speedup on MNIST with the large hidden dimension. In this experiment, the speedup is based on Overall Backprop Time (see the prior definition). Those results demonstrate that meProp can achieve good speedup on GPU when it is applied to heavy models. Finally, there are potentially other implementation choices of meProp on GPU. For example, another natural solution is to use a big sparse matrix to represent the sparsified gradient of the output of a mini-batch. Then, the sparse matrix multiplication library can be used to accelerate the computation. This could be an interesting direction of future work. 3.3 Experimental Results of meSimp In this experiment, we only simplify the hidden layers of the model, and we use Adam optimizer for all the tasks. We set the cycle to 10 for all the tasks, that is, we first train the model using meSimp for 5 epochs, then train the model normally for 5 epochs, and repeat the procedure till the end. Table 9 shows the model simplificatoin results based on different models. In the table, meProp means applying meProp to the corresponding baseline model, and meSimp means applying model compression on top of meProp. h = 500 means that the dimension of the model’s hidden layers is 500, k = 20 means that in back propagation we propagate top-20 elements, and prune = 0.08 means that the dimension which is updated less than 8% times during an statistical interval is dropped. As we can see, our method is capable of reducing the models to a relatively small size, while maintaining the performance if not improving. The hidden layers of the models are reduced by around 10x, 8x, and 3x for Parsing, POS-Tag, and MNIST respectively. That means when the simplified model is deployed, it could achieve more than 10 TABLE 9 meSimp results based on LSTM/MLP models. Iter means the number of iterations to reach the optimal score on development data. The model of this iteration is then used to obtain the test score. Dim means the dimension of the model of this iteration. For LSTM, it is the average of each direction; for MLP, it is the average of hidden layers. It can be drawn from the results that meSimp could reduce the model to a smaller size, often around 10%, while maintaining the performance if not improving. Parsing MLP (h=500) meSimp (k=20, prune=0.08) POS-Tag LSTM (h=500) meSimp (k=20, prune=0.08) MNIST MLP (h=500) meSimp (k=160, prune=0.10) Iter 10 10 Iter 3 3 Iter 13 14 Dim 500 51 (10.2%) Dim 500 60 (12.0%) Dim 500 154 (30.8%) TABLE 10 meSimp: The dimensions of the resulting models. The results confirm that meSimp is suitable for deep models, and could adaptively determine the proper sizes of different layers. 
Parsing #Average #Hidden meSimp(k=20, prune=0.08) 51 51 POS-Tag #Average #Forward #Backward meSimp (k=20, prune=0.08) 60 57 63 MNIST #Average #First #Second meSimp (k=160, prune=0.10) 154 149 159 10x, 8x, and 3x computational cost reduction respectively in decoding, with a similar or better performance. The reason could be that the minimal effort update captures important features, so that the simplified model is enough to fit the data, while without minimal effort update, the model of a similar size treats each feature equally at start, limiting its ability to learn from the data. We will show the experimental results in section 3.3.1. The results show that the simplifying method is effective in reducing the model size, thus bringing a substantial reduction of the computational cost of decoding in real-world task. More importantly, the accuracy of the original model is kept, or even more often improved. This means model simplifying could make it more probable to deploy a deep learning system to a computation constrained environment. 3.3.1 Result Analysis of meSimp Adaptively setting the size of the hidden layers It’s worth noticing that meSimp is also able to automatically determine the appropriate size of the resulting model of deep neural networks (Table 10). At the beginning, we conduct the experiments on a neural network with a single hidden layer, that is, Parsing, and we get promising result, as the model size is reduced to 10.2% of its original size. The result of Paring makes us wondering whether meProp could also simplify deeper networks, so we continue to run experiments on different models. In the experiments of POS-Tag, the LSTM is based on Bi-LSTM, that is, there is a forward LSTM and a backward LSTM regarding to the input sequences, which means it is often very deep in time series (horizontal). As shown in Table 10, the forward and backward LSTMs indeed gets different dimensions, 60 and 57 respectively. We further conduct experiments on an MLP with 2 hidden layers (vertical), and the result shows that the first hidden layer and the second hidden layer are Dev UAS (%) 90.48 90.23 Dev Acc (%) 97.20 97.19 Dev Acc (%) 98.72 98.46 Test UAS (%) 89.80 90.11 (+0.31) Test Acc (%) 97.22 97.25 (+0.03) Test Acc (%) 98.20 98.31 (+0.11) TABLE 11 meSimp: Results based on the same k and h for Parsing. We report the results of 5 different runs of the baseline model. It’s clear that the simplified model constantly surpasses the traditionally-trained model of the same size, indicating that meSimp may enable a more efficient and effective learning. Method meSimp (h=51) MLP (h=51) MLP (h=51) MLP (h=51) MLP (h=51) MLP (h=51) Dev UAS (%) 90.23 89.92 89.90 89.80 89.63 89.62 Test UAS (%) 90.11 89.64 89.56 89.56 89.45 89.44 again of different dimensions, which confirms that meSimp could adaptively adjust the hidden layer size in a multilayer setting. We also need to remind the readers that meSimp does not need to specify different hyper-parameters for different layers, while most of the previous work ([11], [12], [13]), if different layer sizes are pursued, a different hyperparameter need to be set separately for each hidden layer, limiting their ability to simplifying the models adaptively. Comparing with the models of similar sizes One natural and important question is how does the simplified model perform compared to the model of a similar size. If the simplified models perform not as well as the normally trained models of similar sizes, the simplifying method may be redundant and unnecessary. 
Results in Table 11 shed light on that question. We train baseline models of sizes similar to the sizes of simplified models, and report the results in Table 11. It can be shown that, our simplified models perform better than the models trained of similar sizes, especially on the Parsing task. The results shows that the model simplifying training is not unnecessary, as a simplified model achieves better accuracy than a model trained using a small dimension. An attempt at revealing why the minimal effort technique works From the back propagation simplification and model simplification results, we could see that approaches based on active paths, which are measured by the back propagation, are effective in reducing overfitting. One of our hypothesis is that for a neural network, for each example, only a small part of the neurons is needed to produce the correct results, and gradients are good identifiers to detect 11 TABLE 12 MNIST: minimal effort activation. Dim means the averaged active dimension of hidden layers across examples. 10-15 means during epoch 10 to epoch 15, we apply the minimal effort activation technique. The results show that for an example, a smaller number of neurons is enough to generate the correct prediction, and that by only training the highly-related neurons, the performance could be improved. MNIST Iter Dim Test Acc (%) MLP (h=500) 18 500 98.18 meAct (threshold=0.004, 10-20) 20 99 98.42 (+0.24) meAct (threshold=0.004, 15-20) 18 99 98.32 (+0.14) MNIST: Accuracy on Dev. 98.6 98.4 Accuracy (%) 98.2 98.0 97.8 97.6 97.4 97.2 97.0 meAct (prune=0.04, 10~20) baseline 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Epoch MNIST: Wei ht Update Average Absolute Update per Parameter 1.1 1e−4 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 meAct (prune=0.04, 10~20) baseline 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Epoch Fig. 6. Change of accuracy (upper), and Average update of parameters (lower) in active path finding. To isolate the impact of meAct, we fix the random seed, which means the initialization of parameters and the shuffle are the same between meAct and baseline, so the lines coincide with each other during Epoch 1-10, in which the training is exactly the same. As we can see, after meAct is applied, the accuracy rises, which indicates training focused on the most relevant neurons could reduce overfitting, and the update drops, which suggests in the later of normal training, most of the update is caused by fitting of the noises, making the already trained neurons constantly changing. the decisive neurons. Too many neurons are harmful to the model, because the extra neurons may be trained to fit the noise in the training examples. To examine the hypothesis, we design an new algorithm, which we call meAct (minimal effort activation), which only activates the active path, with respect to each example, in the forward propagation, and the experimental results are consistent with our hypothesis. To realize the idea, for each example, we only activate the paths with the largest accumulated absolute gradients, and the number of the chosen paths is controlled by threshold. Specifically, we accumulate the absolute gradients of each layer’s output for each example, denoted by gi (xj ), where i is the neuron’s index in a layer, and j is the example’s identifier. 
For a layer, if a neuron’s accumulated absolute gradients accounts for less than a specified percentage of the sum of the gradients ofPall the neurons of the layer, that is, n gi (xj ) < threshold × i=1 gi (xj ), where n is the number of the neurons in the layer, and 0 < threshold < 1, the neuron is considered inactive. The paths out of active paths are deactivated and we use the previous activation values at the last encounter in the training as their outputs, thus the effort in activation is minimized. As the sparse activation is done in forward propagation, the back propagation is sparsified as well, because obviously, the deactivate neurons contribute none to the results, meaning their gradients are zeros, which requires no computation. Note that the method does not reduce the size of the model, and for each example, we obtain its own active paths. During test, the forward propagation is done normally, as we wouldn’t know the active paths of these unseen examples. From results shown in Table 12, we can see that, for the MNIST task, on average, fewer than 100 neurons are adequate to achieve good results or even better results. From the results shown in Figure 6, we can see that, during minimal effort activation, the accuracy rises from the baseline, which shows that the accuracy could benefit from the training focused more on the related neurons. To see how the accuracy improvement is acquired, we further investigate the change of the parameters during training. Normally, the gradient is used to represent the change. However, because we use Adam as the optimizer, where the update is not done by directly using the gradient, we consider the change before and after the Adam update rule as the change. As there are many iterations in an epoch and many parameters for a model, we average the change of allPthe parameters at each iteration, that is P t n |δ j | j j=1 i=1 i , where |δi | means the absolute update = n×t change of parameter i at iteration j , n means the number of the parameters, and t means the number of training iterations, and report the average absolute change per parameter per iteration as update. We use the absolute of the update of a parameter, because we would like to see how much the parameters have been modified during the training process, not just the change between the start and the end. As shown in Figure 6, we can see that the update dropped sharply when meAct is applied, meaning the accuracy improvement is achieved by very little change of the parameters, while the update of normal training is still high, more than 5x of the update of meAct, suggesting that many update is redundant and unnecessary, which could be the result of the model trying to adapt to the noise in the data. As there should be no regular pattern in the noise, it requires more subtle update of all the parameters to fit the noise, which is much harder and often affects the training of the essential features, thus leading to a lower accuracy than our method which tries to focus only on the essential features for each example. The results confirm our initial hypothesis that for an example, only a few neurons are required, and minimal effort technique provides a simple yet effective way to train and extract the helpful neurons. 12 3.4 Related Systems on the Tasks The POS tagging task is a well-known benchmark task, with the accuracy reports from 97.2% to 97.4% ([14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24]). Our method achieves 97.31% (Table 5). 
For the transition-based dependency parsing task, existing approaches typically can achieve the UAS score from 91.4 to 91.5 ([25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35]). As one of the most popular transition-based parsers, MaltParser [29] has 91.5 UAS. [5] achieves 92.0 UAS using neural networks. Our method achieves 91.99 UAS (Table 5). For MNIST, the MLP based approaches can achieve 98–99% accuracy, often around 98.3% ([8], [36], [37]). Our method achieves 98.37% (Table 4). With the help from convolutional layers and other techniques, the accuracy can be improved to over 99% ([38], [39]). Our method can also be improved with those additional techniques, which, however, are not the focus of this paper. As shown in [40], meProp could improve the accuracy of a convolutional neural network baseline by 0.2%, reaching 99.27%. 4 R ELATED WORK 4.1 Training Simplification [41] proposed a direct adaptive method for fast learning, which performs a local adaptation of the weight update according to the behavior of the error function. [42] also proposed an adaptive acceleration strategy for back propagation. Dropout [43] is proposed to improve training speed and reduce the risk of overfitting. Sparse coding is a class of unsupervised methods for learning sets of over-complete bases to represent data efficiently [44]. [45] proposed a sparse autoencoder model for learning sparse over-complete features. Our proposed method is quite different compared with those prior studies on back propagation, dropout, and sparse coding. The sampled-output-loss methods [46] are limited to the softmax layer (output layer) and are only based on random sampling, while our method does not have those limitations. The sparsely-gated mixture-of-experts [47] only sparsifies the mixture-of-experts gated layer and it is limited to the specific setting of mixture-of-experts, while our method does not have those limitations. There are also prior studies focusing on reducing the communication cost in distributed systems ([10], [48]), by quantizing each value of the gradient from 32-bit float to only 1-bit. Those settings are also different from ours. 4.2 Model Simplification [11] proposed to reduce the parameters in RNN family models by masking out the parameters below a heuristically calculated threshold. Their method is mainly designed to make large scale RNN family models smaller while maintaining a similar performance, so that the models can be feasibly deployed. Their model is limited to the RNN family models, and they can not reduce the training time of the model. [13] propose to distill an expressive but cumbersome model into a smaller model by mimicking the target of the cumbersome model, while adjusting the temperature. They claim that their method can transfer the knowledge learned by the cumbersome model to the simpler one. And their method doesn’t presume a specific model. However, the final model size should be preconfigured, while our method could adaptively learn the size of the model. [49] proposed to automatically choose the size of the model parameters by iteratively adding and removing zero units. However, their method is rather complicated and limited to CNN with batch normalization. [12] propose a densesparse-dense model to first eliminate units with small absolute values, then reactivate the units, and re-train them, so that the model can achieve better performance. Their model does not really reduce the size of the model, as their purpose is to improve the performance. 
Note that our work could adaptively choose the size of a layer in deep neural networks with a single simplifying configuration, as shown in Table 10. But most of the previous work, either the related hyper parameter controlling the final model size needs to be set for each layer ([11], [12]), or the model size needs to be set directly ([13]). Those methods will lead to trivial hyper-parameter tuning if different hidden layer sizes are pursued, as different layers, representing different levels of abstraction naturally should be of different sizes, which we do not know in advance. Our work eliminates the needs for that, as with the same hyper parameter, meSimp will automatically determine the size needed to represent the useful information for each hidden layer, thus leading to varied dimensions of the hidden layers. 5 C ONCLUSIONS We propose a minimal effort back propagation technique to simplify the training (meProp), and to simplify the resulting model (meSimp). The minimal effort technique adopts the top-k selection based back propagation to determine the most relevant features, which leads to very sparsified gradients to compute for the given training sample. Experiments show that meProp can reduce the computational cost of back propagation by one to two orders of magnitude via updating only fewer than 5% parameters, and yet improve the model accuracy in most cases. We further propose to remove the seldom updated parameters to simplify the resulting model for the purpose of reducing the computational cost of the decoding. Experiments reveal that the model size could be reduced to around one ninth of the original models, leading to around 9x computational cost reduction in decoding for two natural language processing tasks with improved accuracy. More importantly, meSimp could automatically decide the appropriate sizes for different hidden layers, alleviating the need for hyper-parameter tuning. ACKNOWLEDGMENTS This work was supported in part by National Natural Science Foundation of China (No. 61673028), and an Okawa Research Grant (2016). This work is a substantial extension of the work presented at ICML 2017 [1]. 13 R EFERENCES [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] X. Sun, X. Ren, S. Ma, and H. Wang, “meProp: Sparsified back propagation for accelerated deep learning with reduced overfitting,” in Proceedings of ICML’17, 2017, pp. 3299–3308. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” CoRR, vol. abs/1412.6980, 2014. J. C. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011. M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini, “Building a large annotated corpus of english: The penn treebank,” Computational linguistics, vol. 19, no. 2, pp. 313–330, 1993. D. Chen and C. D. Manning, “A fast and accurate dependency parser using neural networks,” in Proceedings of EMNLP’14, 2014, pp. 740–750. M. Collins, “Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms,” in Proceedings of EMNLP’02, 2002, pp. 1–8. S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. R. H. R. 
Hahnloser, R. Sarpeshkar, M. Mahowald, R. J. Douglas, and H. S. Seung, “Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit.” Nature, vol. 405, no. 6789, pp. 947–951, 2000. N. Dryden, T. Moon, S. A. Jacobs, and B. V. Essen, “Communication quantization for data-parallel training of deep neural networks,” in Proceedings of the 2nd Workshop on Machine Learning in HPC Environments, 2016, pp. 1–8. S. Narang, G. Diamos, S. Sengupta, and E. Elsen, “Exploring sparsity in recurrent neural networks,” in ICLR’17, 2017. S. Han, J. Pool, S. Narang, H. Mao, S. Tang, E. Elsen, B. Catanzaro, J. Tran, and W. J. Dally, “Dsd: Regularizing deep neural networks with dense-sparse-dense training flow.” in ICLR’17, 2017. G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network.” CoRR, vol. abs/1503.02531, 2015. K. Toutanova and C. D. Manning, “Enriching the knowledge sources used in a maximum entropy part-of-speech tagger,” in Proceedings of EMNLP-ACL 2000, 2000, pp. 63–70. K. Toutanova, W. Ammar, and H. Poon, “Model selection for type-supervised learning with application to pos tagging.” in Proceedings of CoNLL’15, 2015, pp. 332–337. X. Sun, “Towards shockingly easy structured classification: A search-based probabilistic online learning framework,” CoRR, vol. abs/1503.08381, 2015. L. Shen, G. Satta, and A. K. Joshi, “Guided learning for bidirectional sequence classification,” in Proceedings of ACL’07, 2007, pp. 760–767. X. Sun, “Structure regularization for structured prediction,” in NIPS’14, 2014, pp. 2402–2410. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. P. Kuksa, “Natural language processing (almost) from scratch,” Journal of Machine Learning Research, vol. 12, pp. 2493–2537, 2011. X. Sun, “Structure regularization for structured prediction: Theories and experiments,” CoRR, vol. abs/1411.6243, 2014. Z. Huang, W. Xu, and K. Yu, “Bidirectional lstm-crf models for sequence tagging,” CoRR, vol. abs/1508.01991, 2015. X. Sun, “Asynchronous parallel learning for neural networks and structured models with dense features,” in Proceedings of COLING’16, 2016, pp. 192–202. Y. Tsuruoka, J. Tsujii, and S. Ananiadou, “Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty,” in Proceedings of ACL-AFNLP’09, 2009, pp. 477–485. Y. Tsuruoka, Y. Miyao, and J. Kazama, “Learning with lookahead: Can history-based models rival globally optimized models?” in Proceedings of CoNLL’11, 2011, pp. 238–246. H. Noji, Y. Miyao, and M. Johnson, “Using left-corner parsing to encode universal structural constraints in grammar induction,” in Proceedings of the EMNLP’16, 2016, pp. 33–43. Y. Miyao, R. Sætre, K. Sagae, T. Matsuzaki, and J. Tsujii, “Taskoriented evaluation of syntactic parsers and their representations,” in Proceedings of ACL’08, 2008, pp. 46–54. T. Matsuzaki, Y. Miyao, and J. Tsujii, “Probabilistic CFG with latent annotations,” in Proceedings of ACL’05, 2005, pp. 75–82. [28] T. Matsuzaki and J. Tsujii, “Comparative parser performance analysis across grammar frameworks through automatic tree conversion using synchronous grammars,” in Proceedings of COLING’08, 2008, pp. 545–552. [29] J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kübler, S. Marinov, and E. Marsi, “Maltparser: A language-independent system for data-driven dependency parsing,” Natural Language Engineering, vol. 13, no. 2, pp. 95–135, 2007. [30] J. 
Nivre, “Algorithms for deterministic incremental dependency parsing,” Computational Linguistics, vol. 34, no. 4, pp. 513–553, 2008. [31] J. Cross and L. Huang, “Incremental parsing with minimal features using bi-directional LSTM,” in Proceedings of ACL’16, 2016. [32] L. Huang and K. Sagae, “Dynamic programming for linear-time incremental parsing,” in Proceedings of ACL’10, 2010, pp. 1077– 1086. [33] Y. Zhang and J. Nivre, “Analyzing the effect of global learning and beam-search on transition-based dependency parsing,” in Proceedings of COLING’12, 2012, pp. 1391–1400. [34] Y. Zhang and S. Clark, “Syntactic processing using the generalized perceptron and beam search,” Computational Linguistics, vol. 37, no. 1, pp. 105–151, 2011. [35] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. E. Hinton, “Grammar as a foreign language,” in NIPS’15, 2015, pp. 2773–2781. [36] P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Proceedings of ICDR’03, 2003, pp. 958–962. [37] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, “Deep, big, simple neural nets for handwritten digit recognition,” Neural Computation, vol. 22, no. 12, pp. 3207–3220, 2010. [38] D. C. Ciresan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” in Proceedings of CVPR’12, 2012, pp. 3642–3649. [39] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the best multi-stage architecture for object recognition?” in Proceeding of ICCV’09, 2009, pp. 2146–2153. [40] B. Wei, X. Sun, X. Ren, and J. Xu, “Minimal effort back propagation for convolutional neural networks,” CoRR, vol. abs/1709.05804, 2017. [41] M. Riedmiller and H. Braun, “A direct adaptive method for faster backpropagation learning: The rprop algorithm,” in Proceedings of IEEE International Conference on Neural Networks 1993, 1993, pp. 586–591. [42] T. Tollenaere, “Supersab: fast adaptive back propagation with good scaling properties,” Neural networks, vol. 3, no. 5, pp. 561– 573, 1990. [43] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014. [44] B. A. Olshausen and D. J. Field, “Natural image statistics and efficient coding,” Network: Computation in Neural Systems, vol. 7, no. 2, pp. 333–339, 1996. [45] M. Ranzato, C. S. Poultney, S. Chopra, and Y. LeCun, “Efficient learning of sparse representations with an energy-based model,” in NIPS’06, 2006, pp. 1137–1144. [46] S. Jean, K. Cho, R. Memisevic, and Y. Bengio, “On using very large target vocabulary for neural machine translation,” in Proceedings of ACL/IJCNLP’15, 2015, pp. 1–10. [47] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean, “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,” CoRR, vol. abs/1701.06538, 2017. [48] F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu, “1-bit stochastic gradient descent and its application to data-parallel distributed training of speech dnns,” in Proceedings of INTERSPEECH’14, 2014, pp. 1058–1062. [49] G. Philipp and J. G. Carbonell, “Nonparametric neural networks,” in ICLR’17, 2017.
Provable learning of Noisy-or Networks Sanjeev Arora∗ Rong Ge† Tengyu Ma ‡ Andrej Risteski§ arXiv:1612.08795v1 [cs.LG] 28 Dec 2016 December 30, 2016 Abstract Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Finding parameters with the maximum likelihood is NP-hard even in very simple settings. In recent years, provably efficient algorithms were nevertheless developed for models with linear structures: topic models, mixture models, hidden markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied for learning the single-layer noisy or network, which is a textbook example of a Bayes net, and used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in future, is analysis of tensor decomposition in presence of systematic error (i.e., where the noise/error is correlated with the signal, and doesn’t decrease as number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity our analysis is stated assuming that the network parameters were chosen from a probability distribution but the method seems more generally applicable. ∗ Princeton University, Computer Science Department. [email protected] Duke University, Computer Science Department. [email protected] ‡ Princeton University, Computer Science Department. [email protected] § Princeton University, Computer Science Department. [email protected] † 1 Introduction Unsupervised learning is important and potentially very powerful because of the availability of the huge amount of unlabeled data — often several orders of magnitudes more than the labeled data in many domains. Latent variable models, a popular approach in unsupervised learning, model the latent structures in data: the “structure”corresponds to some hidden variables, which probabilistically determine the values of the visible coordinates in data. Bayes nets model the dependency structure of latent and observable variables via a directed graph. Learning parameters of a latent variable model given data samples is often seen as a canonical definition of unsupervised learning. Unfortunately, finding parameters with the maximum likelihood is NP-hard even in very simple settings. However, in practice many of these models can be learnt reasonably well using algorithms without polynomial runtime guarantees, such as expectation-maximization algorithm, Markov chain Monte Carlo, and variational inference. Bridging this gap between theory and practice is an important research goal. Recently it has become possible to use matrix and tensor decomposition methods to design polynomial-time algorithms to learn some simple latent variable models such as topic models [AGM12, AGH+ 13], sparse coding models [AGMM15, MSS16], mixtures of Gaussians [HK13, GHK15], hidden Markov models [MR05], etc. These algorithms are guaranteed to work if the model parameters satisfy some conditions, which are reasonably realistic. 
In fact, matrix and tensor decomposition are a natural tool to turn to since they appear to be a sweet spot for theory whereby non-convex NP-hard problems can be solved provably under relatively clean and interpretable assumptions. But the above-mentioned recent results suggest that such methods apply only to solving latent variable models that are linear: specifically, they need the marginal of the observed variables conditioned on the hidden variables to depend linearly on the hidden variables. But many settings seem to call for nonlinearity in the model. For example, Bayes nets in many domains involve highly nonlinear operations on the latent variables, and could even have multiple layers. The study of neural networks also runs into nonlinear models such as restricted Boltzmann machines (RBM) [Smo86, HS06]. Can matrix factorization (or related tensor factorization) ideas help for learning nonlinear models? This paper takes a first step by developing methods to apply tensor factorization to learn possibly the simplest nonlinear model, a single-layer noisy-or network. This is a direct graphical model with hidden variable d ∈ {0, 1}m , and observation node s ∈ {0, 1}n . The hidden variables d1 , . . . , dm are independent and assumed to have Bernoulli distributions. The conditional distribution Pr[s|d] is parameterized by a non-negative weight matrix W ∈ Rn×m . We use W i to denote the i-row of W . Conditioned on d, the observations s1 , . . . , sn are assume to be independent with distribution Pr [si = 0 | d] = m Y exp(−Wij dj ) = exp(−hW i , di) . (1.1) j=1 We see that 1 − exp(−Wji dj ) can be thought of as the probability that dj activates symptom si , and si is activated if one of dj ’s activates it — which explains the name of the model, noisy-or. It follows that the conditional distribution s | d is Pr[s | d] = n Y i=1 s 1−si 1 − exp(−hW i , di) i exp(−hW i , di) . One canonical use of this model is to model the relationship between diseases and symptoms, as in the classical human-constructed tool for medical diagnosis called Quick Medical Reference 1 (QMR-DT) by (Miller et al.[MPJM82], Shwe et al. [SC91]) This textbook example ([JGJS99]) of a Bayes net captures relationships between 570 binary disease variables (latent variables) and 4075 observed binary symptom variables, with 45, 470 directed edges, and the Wij ’s are small integers.1 The name “noisy-or ”derives from the fact that Q the probability that the OR of m independent binary variables y1 , y2 , . . . , ym is 1 is exactly 1 − j (Pr[yj = 0]). Noisy-or models are implicitly using this expression; specifically, for the i-th symptom we are considering the OR of m events where the jth event is “Disease j does not cause symptom i” and its probability is exp(−Wij dj ). Treating these events as independent leads to expression (1.1). The parameters of the QMR-DT network were hand-estimated by consulting human experts, but it is an interesting research problem whether such networks can be created in an automated way using only samples of patient data (i.e., the s vectors). Previously there were no approaches for this that work even heuristically at the required problem size (n = 4000). (This learning problem should not be confused with the simpler problem of infering the latent variables given the visible ones, which is also hard but has seen more work, including reasonable heuristic methods [JGJS99]). Halpern et al.[HS13, JHS13] have designed some algorithms for this problem. 
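For illustration, drawing samples from this model is straightforward; the helper below (the name and the use of NumPy are ours, chosen for exposition) generates symptom vectors according to equation (1.1) and is convenient for sanity-checking the recovery procedure developed below.

import numpy as np

def sample_noisy_or(W, rho, num_samples, seed=0):
    # W: (n, m) non-negative weight matrix; rows index symptoms, columns diseases.
    # Each disease d_j is an independent Bernoulli(rho) variable, and, given d,
    # Pr[s_i = 0 | d] = exp(-<W^i, d>) independently over symptoms i.
    rng = np.random.default_rng(seed)
    n, m = W.shape
    d = (rng.random((num_samples, m)) < rho).astype(float)    # latent diseases
    p_zero = np.exp(-d @ W.T)                                  # Pr[s_i = 0 | d]
    s = (rng.random((num_samples, n)) >= p_zero).astype(int)   # observed symptoms
    return s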
However, their first paper [HS13] assumes the graph structure is given. The second paper [JHS13] requires the Bayes network to be quartet-learnable, which is a strong assumption on the structure of the network. Finally, the problem of finding a “best-fit” Bayesian network according to popular metrics2 has been shown to be NP-complete by [Chi96] even when all of the hidden variables are also observed. Our algorithm and analysis. Our algorithm uses Taylor expansion —on a certain correlation measure called PMI, whose use in this context is new—to convert the problematic exponential into an infinite sum, where we can ignore all but the first two terms. This brings the problem into the realm of tensor decomposition but with several novel twists having to do with systematic error (see the overview in Section 2). Our recovery algorithm makes several assumptions about W , the matrix of connection weights, listed in Section 3. We verified some of these on the QMR-DT network, but the other assumptions are asymptotic in nature. Thus the cleanest description of our algorithm is in a clean average-case setting. First, we assume all latent variables are iid Bernoulli Ber(ρ) for some ρ, which should be thought of as small (In the QMR-DT application, ρ is like O(1/m).) Next we assume that the ground truth W ∈ Rn×m is created by nature by picking its entries in iid fashion using the following random process: ( 0, with probability 1 − p Wij = f Wij , with probability p fij ’s are upper bounded by νu for some constant νu and are identically distributed according where W to a distribution D which satisfies that for some constant νl > 0, h i fij2 ) ≤ 1 − νl . exp(−W (1.2) E fij ∼D W fij is bounded away from 0. We will assume that The condition (1.2) intuitively requires that W p ≤ 1/3 and νu = O(1), νl = Ω(1). (Again, these are realistic for QMR-DT setting). 1 We thank Randolph Miller and Vanderbilt University for providing the current version of this network for our research. 2 Researchers resort to these metrics as when the graph structure is unknown, multiple structures may have the same likelihood, so maximum likelihood is not appropriate. 2 Theorem 1.1 (Informally stated). There exists a polynomial time algorithm (Algorithm 1) that, given polynomially many samples from the noisy OR network described in the previous paragraph, e √pm) relative error in ℓ2 -norm in each column. recovers the weight matrix W with O(ρ Recall that we mostly thought of the prior of the diseases ρ as being on the order O(1/m). This √ means that even if p is on the order of 1, our relative error bound equals to O(1/ m) ≪ 1. 2 Preliminaries and overview We denote by 0 the all-zeroes vector and 1 the all-ones vector. A+ will denote the Moore-Penrose pseudo-inverse of a matrix A, and for symmetric matrices A, we use A−1/2 as a shorthand for (A+ )1/2 . The least non-zero singular value of matrix A is denoted σmin (A). For matrices A, B we define the Kronecker product ⊗ as (A ⊗ B)ijkl = Aij Bkl . A useful identity is that (A⊗B)·(C ⊗D) = (AC)⊗(BD) whenever the matrix multiplications are defined. Moreover, Ai will denote the i-th column of matrix A and Ai the i-th row of matrix A. We write A . B if there exists a universal constant c such that A ≤ cB and we define & similarly. The pointwise mutual information of two binary-valued random variables x and y is E[xy] P M I2(x, y) , log E[x] . Note that it is positive iff E[x, y] > E[x] E[y] and thus is used as a E[y] measure of correlation in many fields. 
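As a small illustration, the plug-in estimate of PMI2 from N joint observations of two binary variables is simply the following (the helper name is ours):

import numpy as np

def empirical_pmi2(x, y):
    # x, y: length-N arrays of 0/1 observations of the two binary variables.
    # Plug-in estimate of PMI2(x, y) = log( E[xy] / (E[x] E[y]) ).
    # Assumes all three empirical means are strictly positive.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.log(np.mean(x * y)) - np.log(np.mean(x)) - np.log(np.mean(y)))

In the algorithm below this quantity is estimated entrywise for the absence indicators 1 - s_i.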
This concept can be extended in more than one way to a triple of boolean variables x, y, z and we use P M I3(x, y, z) , log E[xy] E[yz] E[zx] . [xyz] E E[x] E[y] E[z] (2.1) (We will sometimes shorten PMI3 and PMI2 to PMI when this causes no confusion.) 2.1 The Algorithm in a nutshell Our algorithm is given polynomially many samples from the model (each sample describing which symptoms are or are not present in a particular patient). It starts by computing the following matrix n × n PMI and and n × n × n tensor PMIT, which tabulate the correlations among all pairs and triples of symptoms (specifically, the indicator random variable for the symptom being absent): PMIij , P M I2(1 − si , 1 − sj ). PMITi,j,k , P M I3(1 − si , 1 − sj , 1 − sj ) (2.2) (2.3) The next proposition makes the key observation that the above matrix and tensor are close to rank m, which we recall is much smaller than n. Here and elsewhere, for a matrix or vector A we use exp(A) for the matrix or vector obtained by taking the entries-wise exponential. For convenience, we define F, G ∈ Rn×m as F , 1 − exp(−W ) G , 1 − exp(−2W ) . 3 (2.4) (2.5) Proposition 2.1 (Informally stated). Let Fk , Gk denote the kth columns of the above F, G. Then,   PMI ≈ ρ F F ⊤ + ρGG⊤ PMIT ≈ ρ m X k=1 | = {z m X Fk Fk⊤ + ρ2 | } m X k=1 m X Gk G⊤ k (2.6) k=1 k=1 Fk ⊗ Fk ⊗ Fk + ρ :=S ρ  Gk ⊗ Gk ⊗ Gk . {z :=E (2.7) (2.8) } The proposition is proved later (with precise statement) in Section A by computing the moments by marginalization and using Taylor expansion to approximate the log of the moments, and ignoring terms ρ3 and smaller. (Recall that ρ is the probability that a patient has a particular disease, which should be small, of the order of O(1/n). The dependence of the final error upon ρ appears in Section 3.) Since the tensor PMIT can be estimated to arbitrary accuracy given enough samples, the natural idea to recover the model parameters W is to use Tensor Decomposition. This is what our algorithm does as well, except the following difficulties have to be overcome. Difficulty 1: Suppose in equation (2.8) we view the first summand S, which is rank m with components Fk ’s as the signal term. In all previous P polynomial-time algorithms for tensor decomposition, the tensor is required to have the form m k=1 Fk ⊗ Fk ⊗ Fk + noise. To make our problem fit this template we could consider the second summand E as the “noise”, especially since it is multiplied by ρ ≪ 1 which tends to make E have smaller norm than S. But this is naive and incorrect, since E is a very structured matrix: it is more appropriate viewed as systematic error. (In particular this error doesn’t go down in norm as the number of samples goes to infinity.) In order to do tensor decomposition in presence of such systematic error, we will need both a delicate error analysis and a very robust tensor decomposition algorithm. These will be outlined in Section 2.3. Difficulty 2: To get our problem into a form suitable for tensor decomposition requires a whitening step, which uses the robust estimate of the whitening matrix from the second moment matrix. In this case, the whitening matrix has to be extracted out of the PMI matrix, which itself suffers from a systematic error. This also is not handled in previous works, and requires a delicate control of the error. See Section 2.4 for more discussion. 
Difficulty 3: There is another source of inexactness in equation (2.8), namely the approximation is only true for those entries with distinct indices — for example, the diagonal entry PMIii has completely different formula from that for PMIij when i 6= j. This will complicate the algorithm, as described in Subsections 2.3 and 2.4. The next few Subsections sketch how to overcome these difficulties, and the details appear in the rest of the paper. 2.2 Recovering matrices in presence of systematic error In this Section we recall the classical method of approximately recovering a matrix given noisy estimates of its entries. We discuss how to adapt that method to our setting where the error in the estimates is systematic and does not go down even with many more samples. The next section sketches an extension of this method to tensor decomposition with systematic error. In the classical setting, there is an unknown n × n matrix S of rank m and we are given S + E where E is an error matrix. The method to recover S is to compute the best rank-m approximation to S +E. The quality of this approximation was studied by Davis and Kahan [DK70] and Wedin [Wed72], and many subsequent authors. The quality of the recovery depends upon the 4 ratio ||E||/σm (S), where σm (·) denotes m-th largest singular value and || · || denotes the spectral norm. To make this familiar lemma fit our setting more exactly, we will phrase the problem as trying to recover a matrix S given noisy estimate SS ⊤ + E. Now one can only recover S up to rotation, and the following lemma describes the error in the Davis-Kahan recovery. It also plays a key role in the error analysis of the usual algorithm for tensor decomposition. b the subspace of the top m eigenvectors of SS ⊤ and Lemma 2.2. In the above setting, let K, K SS ⊤ + E. Let ε be such that kEk ≤ ε · σm (SS ⊤ ). Then IdK − IdKb . ε where Id is the identity transformation on the subspace in question. The Lemma thus treats ||E||/σm (SS ⊤ ) as the definition of noise/signal ratio. Before we generalize the definition and the algorithm to handle systematic error it is good to get some intuition, from looking at (2.6): PMI ≈ ρ(F F T + ρGGT ). Thinking of the first term as signal and the second as error, let’s check how bad is the noise/signal ratio defined in Davis-Kahan. The “signal”is σm (F F ⊤ ), which is smaller than n since the trace of F F ⊤ is of the order of mn in our probabilistic model for the weight matrix. The “noise” is the norm of ρGG⊤ , which is large since the Gk ’s are nonnegative vectors with P entries of the order of 1, and therefore the quadratic form h √1n 1, ρGG⊤ √1n 1i can be as large as ρ k hGk , √1n 1i2 ≈ ρmn. Thus the Davis-Kahan noise/signal ratio is ρm, and so when ρm ≪ 1, it allows recovering the subspace of F with error O(ρm). Note that this is a vacuous bound since ρ needs to be at least 1/m so that the hidden variable d contains 1 non-zero entry in average. We’ll argue that this error is too pessimistic and we can in fact drive the estimation error down to close to ρ. Definition 2.3 (spectral boundedness). Let n ≥ m. Let E ∈ Rn×n be a symmetric matrix and S ∈ Rn×m . Then, we say E is τ -spectrally bounded by S if E  τ (SS ⊤ + σm (SS ⊤ ) · Idn ) (2.9) The smallest such τ is the “error/signal ratio” for this recovery problem. This definition differs from Davis-Kahan’s because of the τ SS ⊤ term on the right hand side of (2.9). This allows, for any unit vector x, the quadratic form value xT Ex to be as large as τ (xT SS ⊤ x + σm (SS ⊤ )). 
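For small instances the error/signal ratio of Definition 2.3 can be evaluated numerically; the helper below (ours, for illustration only, not part of the recovery algorithm) returns the smallest admissible tau via a Cholesky-whitened eigenvalue computation.

import numpy as np

def spectral_boundedness_ratio(E, S):
    # Smallest tau such that E <= tau * (S S^T + sigma_m(S S^T) * Id_n),
    # i.e. the error/signal ratio of Definition 2.3.  E is symmetric and
    # S in R^{n x m} has full column rank.
    n, m = S.shape
    SS = S @ S.T
    sigma_m = np.linalg.eigvalsh(SS)[-m]            # m-th largest eigenvalue of S S^T
    L = np.linalg.cholesky(SS + sigma_m * np.eye(n))
    L_inv = np.linalg.inv(L)
    # E <= tau * (L L^T)  iff  lambda_max(L^{-1} E L^{-T}) <= tau.
    return float(np.linalg.eigvalsh(L_inv @ E @ L_inv.T)[-1])

For example, a value such as the tau reported below for the QMR-DT weight matrix can be checked this way by taking E = GG^T and S = F.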
Thus for example the 1 vector no longer causes a large noise/signal ratio since both quadtratic forms F F ⊤ and GG⊤ have large values on it. This new error/signal ratio is no larger than the Davis-Kahan ratio, but can potentially be much smaller. Now we show how to do a better analysis of the Davis-Kahan recovery in terms of it. The proof of this theorem appears in Section 4. Theorem 2.4 (matrix perturbation theorem for systematic error). Let n ≥ m. Let S ∈ Rn×m be of full rank. Suppose positive semidefinite matrix E ∈ Rn×n is ε-spectrally bounded by S ∈ Rn×m b the subspace of the top m eigenvectors of SS ⊤ and SS ⊤ + E. Then, for ε ∈ (0, 1). Let K, K IdK − IdKb . ε . Finally, we should consider what this new definition of noise/signal ratio achieves. The next proposition (whose proof appears in Section B) shows that that under the generative model for W √ sketched earlier, τ = O(log n). Therefore, ρG is Õ(ρ)-bounded by F , and the recovery error of the subspace of F from F F ⊤ + ρGG⊤ is Õ(ρ) (instead of O(ρm) using Davis-Kahan). Proposition 2.5. Under the generative model for W , w.h.p, the matrix G = 1 − exp(−2W ) is τ -spectrally bounded by F = 1 − exp(−W ), with τ = Õ(1). 5 Empirically, we can compute the τ value for the weight matrix W in the QMR-DT dataset [SC91], which is a textbook application of noisy OR network. For the QMR-DT dataset, τ is under 6. This implies that the recovery error of the subspace of F guaranteed by Theorem 2.4 is bounded by O(τ ρ) ≈ ρ, whereas the error bound by Davis-Kahan is O(ρm). 2.3 Tensor decomposition with systematic error Now we extend the insight from the matrix case to tensor recovery under systematic error. In turns out condition (2.9) is also a good measure of error/signal for the tensor recovery problem of (2.8). Specifically, if G is τ -bounded by F , then we can recover the components Fk ’s from the PMIT with √ column-wise error O(ρτ 3/2 m). This requires a non-trivial algorithm (instead of SVD), and the additional gain is that we can recover Fk ’s individually, instead of only obtaining the subspace with the PMI matrix. First we recall the prior state of the art for the error analysis of tensor decomposition with Davis-Kahan type bounds. The best error bounds involve measuring the magnitude of the noise matrix Z in a new way. For any n1 × n2 × n3 tensor T , we define the k·k{1}{2,3} norm as X xi yjk Tijk . (2.10) kT k{1}{2,3} := sup x∈Rn1 ,y∈Rn2 n3 kxk=1,kyk=1 i∈[n1 ] (j,k)∈[n2 ]×[n3 ] Note that this norm is in fact the spectral norm of the flattening of the tensor (into a n1 × n2 n3 dimensional matrix). This norm is larger than the injective norm3 , but recently [MSS16] shows that ε-error in this norm implies O(ε)-error in the recovery guarantees of the components, whereas if one uses injective norm, the guarantees often pick up an dimension-dependent factor [AGH+ 14]. We define k·k{2}{1,3} norm similarly. As is customary in tensor decomposition, the theorem is stated for tensors of a special form, where the components {ui }, {vi }, {wi } are orthonormal families of vectors. This can be ensured without loss of generality using a procedure called whitening that uses the 2nd moment matrix. Theorem 2.6 (Extension of [MSS16, Theorem 10.2]). There is a polynomial-time algorithm (Algorithm 2 later) which has the following guarantee. Suppose tensor T is of the form T = r X i=1 ui ⊗ vi ⊗ wi + Z where {ui }, {vi }, {wi } are three collections of orthonormal vectors in Rd , and kZk{1}{2,3} ≤ ε, kZk{2}{1,3} ≤ ε. 
Then, it returns {(ũi , ṽi , w̃i )} in polynomial time that is O(ε)-close to {(ui , vi , wi )} in ℓ2 norm up to permutation. 4 But in our setting the noise tensor has systematic error. An analog of Theorem 2.4 in this setting is complicated because even the whitening step is nontrivial. Recall also the inexactness in Proposition 2.1 due to the diagonal terms, which we earlier called Difficulty 3. We address this difficulty in the algorithm by setting up the problem using a sub-tensor of the PMI tensor. Let Sa , Sb , Sc be a uniformly random equipartition of the set of indices [n]. Let ak = Fk,Sa , 3 bk = Fk,Sb , ck = Fk,Sc , The injective norm of the tensor T is defined as kT k{1}{2,3} := supx∈Rn1 ,y∈Rn2 z∈Rn3 kxk=1,kyk=1,kzk=1 4 (2.11) P i∈[n1 ],j∈[n2 ],k∈[n3 ] xi yj zk Tijk . Precisely, here we meant that there exists a permutation π such that for every i, max{kũπ(i) − ui k, kṽπ(i) − vi k, kw̃π(i) − wi k} ≤ O(ε) 6 where Fk,S denotes the restriction of vector Fk to subset S. Moreover, let γk = Gk,Sa , δk = Gk,Sb , θk = Fk,Sc . (2.12) Then, since the sub-tensor PMITSa ,Sb ,Sc only contains entries with distinct indices, we can use Taylor expansion (see Lemma A.1) to obtain that X X γk ⊗ δk ⊗ θk + higher order terms . PMITSa ,Sb ,Sc = ρ ak ⊗ bk ⊗ ck + ρ2 k∈[m] k∈[m] Here the second summand on the RHS corresponds to the second order term in the Taylor expansion. It turns out that the higher order terms are multiplied by ρ3 and thus have negligible Frobenius norm, and therefore discussion below will focus on the first two summands. For simplicity, let T = PMITSa ,Sb ,Sc . Our goal is to recover the components ak , bk , ck from the approximate low-rank tensor T . The first step is to whiten the components ak ’s, bk ’s and ck ’s. Recall that ak = Fk,Sa is a non-negative vector. This implies the matrix A = [a1 , . . . , am ] must have a significant contribution in the direction of the vector 1, and thus is far away from being well-conditioned. For the purpose of this section, we assume for simplicity that we can access the covariance matrix defined by the vector ak ’s, X (2.13) Q̄a := AA⊤ = ak a⊤ k . k∈[m] Similarly we assume the access of Q̄b and Q̄c which are defined analogously. In Section 2.4 we discuss how to obtain approximately these three matrices. Then, we can compute the whitened tensor by applying transformation 1/2 , (Q̄+ )1/2 , (Q̄+ )1/2 along the three modes of the tensor T , ) (Q̄+ c a b 1/2 1/2 1/2 ⊗ (Q̄+ ·T =ρ (Q̄+ ⊗ (Q̄+ c ) a) b ) k∈[m] + ρ2 | X 1/2 1/2 1/2 bk ⊗ (Q̄+ ck (Q̄+ ak ⊗ (Q̄+ c ) a) b ) X 1/2 1/2 1/2 (Q̄+ γk ⊗ (Q̄+ δk ⊗ (Q̄+ θk +negligible terms a) c ) b ) k∈[m] {z } :=Z 1/2 a ’s are orthonormal Now the first summand is a low rank orthogonal tensor, since (Q̄+ k a) vectors. However, the term Z is a systematic error and we use the following Lemma to control its k·k{1}{2,3} norm. Lemma 2.7. Let n ≥ m and A, B, C ∈ Rn×m be full rank matrices and let Γ, ∆, Θ ∈ Rd×ℓ . Let γi , δi , θi be the i-th column of Γ, ∆, Θ, respectively. Let Q̄a = AA⊤ , Q̄b = BB ⊤ , Q̄c = CC ⊤ . Suppose ΓΓ⊤ (and ∆∆⊤ , ΘΘ⊤ ) is τ -spectrally bounded by A (and B, C respectively), then, X 1/2 1/2 1/2 (Q̄+ γi ⊗ (Q̄+ δi ⊗ (Q̄+ θi c ) a) b ) i∈[ℓ] ≤ (2τ )3/2 . {1}{2,3} Lemma 2.7 shows that to give an upper bound on the k·k{1}{2,3} norm of the error tensor Z, it suffices to show that the square of the components of the error, namely, ΓΓ⊤ , ∆∆⊤ , ΘΘ⊤ are 7 τ -spectrally bounded by the components of the signal A, B, C respectively. This will imply that kZk{1}{2,3} ≤ (2τ )3/2 ρ2 . 
Recall that A and Γ are two sub-matrices of F and G. We have shown that GG⊤ is τ -spectrally bounded by F in Proposition 2.5. It follows straightforwardly that the random sub-matrices also have the same property. Proposition 2.8. In the setting of this section, under the generative model for W , w.h.p, we have that ΓΓ⊤ is τ -spectrally bounded by A with τ = O(log n). The same is true for the other two modes. Using Proposition 2.8 and Lemma 2.7, we have that kZk{1}{2,3} . ρ2 log3/2 (n) . 1/2 ⊗ (Q̄+ )1/2 ⊗ (Q̄+ )1/2 · T , we can recover the comThen using Theorem 2.6 on the tensor (Q̄+ c a) b + 1/2 + 1/2 + 1/2 ponents (Q̄a ) ak ’s, (Q̄b ) bk ’s, and (Q̄c ) ck ’s. This will lead us to recover ak ,bk and ck , and finally to recover the weight matrix W . 2.4 Robust whitening In the previous subsection, we assumed the access to Q̄a , Q̄b , Q̄c (defined in (2.13)) which turns out to be highly non-trivial. A priori, using equation (2.6), noting that A = [F1,Sa , . . . , Fm,Sa ], we have PMISa ,Sa /ρ ≈ Q̄a + error . However, this approximation can be arbitrarily bad for the diagonal entries of PMI since equation (2.6) only works for entries with distinct indices. (Recall that this is why we divided the indices set into Sa , Sb , Sc and studied the asymmetric tensor in the previous subsection). Moreover, the diagonal of the matrix Q̄a contributes to its spectrum significantly and therefore we cannot get meaningful bounds (in spectral norm) by ignoring the diagonal entries. This issue turns out to arise in most of the previous tensor papers and the solution was to compute AA⊤ by using the asymmetric moments AB ⊤ , BC ⊤ , CA⊤ , AA⊤ = (AB ⊤ )(CB ⊤ )+ (CA⊤ ) . Typically AB ⊤ , BC ⊤ , CA⊤ can be estimated with arbitrarily small error (as number of samples go to infinity) and therefore the equation above leads to accurate estimate to AA⊤ . However, in our case the errors in the estimate PMISa ,Sb ≈ AB ⊤ , PMISb ,Sc ≈ BC ⊤ , PMISc ,Sa ≈ CA⊤ are systematic. Therefore, we need to use a more delicate analysis to control how the error accumulates in the estimate, Q̄a ≈ PMISa ,Sb · PMI−1 Sb ,Sc · PMISc ,Sa . Here again, to get an accurate bound, we need to understand how the error in PMISa ,Sb − AB ⊤ behaves relatively compared with AB ⊤ in a direction-by-direction basis. We generalized Definition 2.3 to capture the asymmetric spectral boundedness of the error by the signal. Definition 2.9 (Asymmetric spectral boundedness). Let n ≥ m and B, C ∈ Rn×m . We say a matrix E ∈ Rn×n is ε-spectrally bounded by (B, C) if E can be written as: ⊤ E = B∆1 C ⊤ + B∆⊤ 2 + ∆3 C + ∆4 . (2.14) Here ∆1 ∈ Rm×m , ∆2 , ∆3 ∈ Rn×m and ∆4 ∈ Rn×n are matrices whose spectral norms are bounded by: k∆1 k ≤ ε, k∆2 k ≤ εσmin (C), k∆3 k ≤ εσmin (B) and k∆4 k ≤ εσmin (B)σmin (C). 8 Let K be the column subspace of B and H be the column subspace of C. Then we have ∆1 = B + E(C ⊤ )+ , ∆2 = B + EIdH ⊥ , ∆3 = IdK ⊥ E(C ⊤ )+ , ∆4 = IdK ⊥ EIdH ⊥ . Intuitively, they measure the relative relationship between E and B, C in different subspaces. For example, ∆1 is the relative perturbation in the column subspace of K and row subspace of H. When B = C, this is equivalent to the definition in the symmetric setting (this will be clearer in the proof of Theorem 2.4). Theorem 2.10 (Robust whitening theorem). Let n ≥ m and A, B, C ∈ Rn×m . Σab , Σbc , Σca ∈ Rn×n are of the form, Suppose Σab = AB ⊤ + Eab , Σbc = BC ⊤ + Ebc , and Σca = CA⊤ + Eca . where Eab , Ebc , Eca are ε-spectrally bounded by (A, B), (B, C), (C, A) respectively. 
Then, the matrix matrix + Qa = Σab [Σ⊤ bc ]m Σca + ⊤ is a good approximation of AA⊤ in the sense that Qa = Σab [Σ⊤ bc ]m Σca − AA is O(ε)-spectrally bounded by A. Here [Σ]m denotes the best rank-m approximation of Σ. The theorem is non-trivial even if the we have an absolute error assumption, that is, even if kEbc k ≤ τ σmin (B)σmin (C), which is stronger condition than Ebc is τ -spectrally bounded by ⊤ + ⊤ (B, C). Suppose we establish bounds on kΣab − AB ⊤ k, kΣ+⊤ bc − (BC ) k and kΣab − AB k + ⊤ individually, and then putting them together in the obvious way to control the error Σab [Σbc ]m Σca − AB ⊤ (BC ⊤ )+ CA⊤ . Then the error will be too large for us. This is because standard matrix  ⊤ −1 ⊤ −1 2 . perturbation theory gives that kΣ−⊤ bc − (BC ) k can be bounded by O kEbc kk(BC ) k ε/[σmin (B)σmin (C)], which is tight. Then we multiply the error with the norm of the rest of the (B)σmax (C) . That is, we will loss a condition number of two terms, the error will be roughly ε · σσmax min (B)σmin (C) B, C, which can be dimension dependent for our case. + The fix to this problem is to avoid bounding each term in Σab [Σ⊤ bc ]m Σca individually. To do this, we will take the cancellation of these terms into account. Technically, we re-decompose the + + + ⊤ + product Σab [Σ⊤ bc ]m Σca into a new product of three matrices (Σab B )(B[Σbc ]m C)(C Σca ), and then bound the error in each of these terms instead. See Section C for details. 1/2 a ’s are indeed approximately As a corollary, we conclude that the whitened vectors (Q+ i a) orthonormal. 1/2 A contains approximately Corollary 2.11. In the setting of Theorem 2.10, we have that (Q+ a) orthonormal vectors as columns, in the sense that 1/2 1/2 k(Q+ AA⊤ (Q+ − Idk . ε . a) a) Therefore we have found an approximate whitening matrix for A even though we do not have access to the diagonal entries. 3 Main Algorithms and Results As sketched in Section 2, our main algorithm (Algorithm 1) uses tensor decomposition on the PMI tensor. In this section, we describe the different steps and how the fit together. Subsequently, all steps will be analyzed in separate sections. 9 Algorithm 1 Learning Noisy-Or Networks via Decomposing PMI Tensor Inputs: N samples generated from a noisy-or network, disease prior ρ c. Outputs: Estimate of weight matrix W d PMIT \ using equation (E.1). 1. Compute the empirical PMI matrix and tensor PMI, 2. Choose a random equipartition Sa , Sb , Sc of [n]. \ via Algorithm 4 for the partitioning 3. Obtain approximate whitening matrices for PMIT Sa , Sb , Sc 4. Run robust tensor-decomposition Algorithm 3 to obtain vectors âi , b̂i , ĉi , i ∈ [m] 1/3 â , 1 − ( 1−ρ )1/3 b̂ , 1 − ( 1−ρ )1/3 ĉ . 5. Let Yi be the concatenation of the three vectors 1 − ( 1−ρ i i i ρ ) ρ ρ (Recall that âi , b̂i , ĉi are of dimension n/3 each.) c , where 6. Return W ci,j W ( − log((Yi )j ), if (Yi )j > exp(−νu ) : = exp(−νu ), otherwise Theorem 3.1 (Main theorem, random weight matrix). Suppose the true W is generated from the random model in Section 1 with ρpm ≤ c for some sufficiently small constant c. Then given c in polynomial N = poly(n, 1/p/, 1/ρ) number of examples, Algorithm 1 returns a weight matrix W time that satisfies ci − Wi k2 ≤ O(η e √pn) , ∀i ∈ [m], kW  √ where η = Õ mpρ . √ Note that the column ℓ2 norm of Wi is on the order of pn, and thus η can be thought of as the relative error in ℓ2 norm. Note also that Pr[si = 0] = 1 − Pr[si = 1] ≈ 1 − pmρ, so ρpm = o(1) is necessary purely for sample complexity reasons. 
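To make steps 3 and 4 of Algorithm 1 concrete, the whitening-matrix construction of Theorem 2.10 and the subsequent whitening transform can be sketched as follows. This is an illustrative NumPy sketch of the computation, not a substitute for Algorithm 4 and Algorithm 3, whose analyses appear in the later sections.

import numpy as np

def robust_whitening_matrix(Sigma_ab, Sigma_bc, Sigma_ca, m):
    # Q_a = Sigma_ab [Sigma_bc^T]_m^+ Sigma_ca  (Theorem 2.10), where [.]_m is
    # the best rank-m approximation.  Q_a approximates A A^T even though the
    # diagonal entries of the PMI matrix are not usable.
    U, s, Vt = np.linalg.svd(Sigma_bc.T)
    pinv_rank_m = Vt[:m].T @ np.diag(1.0 / s[:m]) @ U[:, :m].T
    return Sigma_ab @ pinv_rank_m @ Sigma_ca

def whitening_transform(Q, m):
    # (Q^+)^{1/2} from the top-m eigenpairs of the symmetrized Q; applying it
    # to one mode of the PMI tensor makes the corresponding components of the
    # signal approximately orthonormal.
    Q_sym = (Q + Q.T) / 2.0
    w, V = np.linalg.eigh(Q_sym)
    w_top, V_top = w[-m:], V[:, -m:]          # top-m eigenvalues, assumed positive
    return V_top @ np.diag(1.0 / np.sqrt(w_top)) @ V_top.T

Here Sigma_ab, Sigma_bc, Sigma_ca stand for the corresponding off-diagonal blocks of the empirical PMI matrix; the exact scaling and the error control are the subject of Theorem 2.10 and Section C.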
Finally, we can also state a result with a slightly weaker guarantee, but with only deterministic assumptions on the weight matrix W . Recall that F = 1 − exp(−W ) and G = 1 − exp(−2W ). We will also define third and fourth-order terms H = 1 − exp(−3W ), L = 1 − exp(−4W ). We also define the incoherence of a matrix F . Roughly speaking, it says that the left singular vectors of F don’t correlate with any of the natural basis vector much more than the average. Definition 3.2 (Incoherence:). Let Fp∈ Rn×m have singular value decomposition F = U ΣV ⊤ . We say F is µ-incoherent if maxi kUi k ≤ µm/n. where Ui is the i-th row of U . We assume the weight matrix W satisfies the following deterministic assumptions, 1. GG⊤ , HH ⊤ , LL⊤ is τ -spectrally p bounded by F for τ ≥ 1. e n/m). 2. F is µ-incoherent with µ ≤ O( 3. If maxi kFi k0 ≤ pn, with high probability over the choice of a subset Sa , |Sa | = n/3, σmin (FSa ) & √ np and ρpm ≤ c for some sufficiently small constant c. Theorem 3.3 (Main theorem, deterministic weight matrix). Suppose the matrix W satisfies the c in polynomial conditions 1-3 above. Given polynomial number of samples, Algorithm 1 returns W time, s.t. ci − Wi k2 ≤ O(η e √np) . ∀i ∈ [m], kW 10 for η = √ mpρτ 3/2 √ √ Since the ℓ2 norm of Wi is on the order of np, the relative error in ℓ2 -norm is as most mρτ 3/2 , which mirrors the randomized case above. The proofs of Theorems uses the overall strategy of Section 2, and is deferred to Section D. We give a high level outline that demonstrates how the proofs depends on the machinery built in the subsequent sections. Both Theorem 3.1 and Theorem 3.3 are similarly proved – the only technical difference being how the third and higher order terms are bounded. (Because of generative model assumption, for Theorem 3.1 we can get a more precise control on them.) Hence, we will not distinguish between them in the coming overview. Overall, we will follow the approach outlined in Section 2. Let us step through Algorithm 1 line by line: 1. The overall goal will be to recover the leading terms of the PMI tensor. Of course, we get samples only, so can merely get an empirical version of it. In Section E, we show that the simple plug-in estimator does the job – and does so with polynomially many samples. 2. Recall Difficulty 3 from Section 2 : the PMI tensor and matrix expression is only accurate on the off-diagonal entries. In order to address this, in Section 2.3 we passed to a sub-tensor of the original tensor by partitioning the symptoms into three disjoint sets, and considering the induced tensor by this partition. 3. In order to apply the robust tensor decomposition algorithm from Section 5, we need to first calculate whitening matrices. This is necessarily complicated by the fact that the diagonals of the PMI matrix are not accurate, as discussed in Section 2.4. Section C gives guarantees on the procedure for calculating the whitening matrices. 4. This is main component of the algorithm: the robust tensor decomposition machinery. In Section 5, the conditions and guarantees for the success of the algorithm are formalized. There, we deal with the difficulties layed out in Section 2.2 : namely that we have a substantial systematic error that we need to handle. (Both due to higher-order terms, and due to the missing diagonal entries) 5. This step, along with Step 6, is a post-processing step – which allows us to recover the weight matrix W after we have recovered the leading terms of the PMI tensor. 
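A minimal sketch of this post-processing (steps 5 and 6) is given below. It assumes, purely for simplicity of presentation, that Sa, Sb, Sc are the three consecutive thirds of [n]; with a random equipartition the rows must additionally be permuted back to the original symptom order.

import numpy as np

def recover_weights(a_hat, b_hat, c_hat, rho, nu_u):
    # a_hat, b_hat, c_hat: (n/3, m) arrays whose k-th columns are the recovered
    # tensor components, i.e. estimates of (rho/(1-rho))^{1/3} F_{k,Sa} etc.
    # Undo the scaling, reassemble Y_k ~ exp(-W_k), and take -log entrywise.
    scale = ((1.0 - rho) / rho) ** (1.0 / 3.0)
    Y = 1.0 - scale * np.vstack([a_hat, b_hat, c_hat])    # estimate of exp(-W)
    Y = np.clip(Y, np.exp(-nu_u), 1.0)                     # truncation of step 6, read as capping weights at nu_u
    return -np.log(Y)                                      # (n, m), entries in [0, nu_u]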
We also give a short quantitative sense of the guarantee of the algorithm. (The reader can find the full proof in Section D.) To get quantitative bounds, we will first need a handle on spectral properties of the random model: these are located in Section B. As we mentioned above, the main driver of the algorithm is step 4, which uses our robust tensor decomposition machinery in Section 5. To apply the machinery, we first need to show that the second (and higher) order terms of the PMI tensor are spectrally bounded. This is done by applying Proposition B.4, which roughly shows the higher-order terms are O(ρ log n)-spectrally bounded by ρF F ⊤ . The whitening matrices are calculated using machinery in Section C. We can apply these tools since the random model gives rise to a O(1)-incoherent F matrix as shown in Lemma B.1. To get a final sense of what the guarantee is, the l2 error which step 4 gives, via Theorem 5.4 √ roughly behaves like σmax τ 3/2 , where σmax is the spectral norm of the whitening matrices and τ is the spectral boundedness parameter. But, by Lemma D.1 σmax is approximately the spectral 11 norm of ρF F ⊤ – which on the other hand by Lemma B.1 is on the order of mnp2 ρ. Plugging in these values, we get the theorem statement. 4 Finding the Subspace under Heavy Perturbations In this section, we show even if we perturb a matrix SS ⊤ with an error whose spectral norm might be much larger than σmin (SS ⊤ ), as long as E is spectrally bounded the top singular subspace of S is still preserved. We defer the proof of the asymmetric case (Theorem 2.10) to Section C. We note that such type of perturbation bounds, often called relatively perturbation bounds, have been studied in [Ips98, Li98a, Li98b, Li97]. The results in these papers either require the that signal matrix is full rank, or the perturbation matrix has strong structure. We believe our results are new and the way that we phrase the bound makes the application to our problem convenient. We recall Theorem 2.4, which was originally stated in Section 2. Theorem 2.4 (matrix perturbation theorem for systematic error). Let n ≥ m. Let S ∈ Rn×m be of full rank. Suppose positive semidefinite matrix E ∈ Rn×n is ε-spectrally bounded by S ∈ Rn×m b the subspace of the top m eigenvectors of SS ⊤ and SS ⊤ + E. Then, for ε ∈ (0, 1). Let K, K IdK − IdKb . ε . Proof. We can assume ε ≤ 1/10 since otherwise the statement is true (with a hidden constant 10). Since E is a positive semidefinite matrix, we write E = RR⊤ where R = E 1/2 . Since A has full column rank, we can write R = AS + B where S ∈ Rm×n and the columns of B are in the subspace K ⊥ . (Specifically, we can choose S = A+ R and B = R − AA+ R = IdK ⊥ B.) By the definition of spectral boundedness, we have   ⊤ ⊤ ⊤ ⊤ BB = IdK ⊥ RR IdK ⊥  IdK ⊥ ε AA + σm (AA )Idn IdK ⊥ . = εσm (AA⊤ )IdK ⊥ . Therefore, we have that kBk2 ≤ εσmin (AA⊤ ). Moreover, we also have IdK RR⊤ IdK  εAA⊤ + εσmin IdK , It follows that ASS ⊤ A⊤ ≤ 2εAA⊤ . which implies Let P = Idm + SS ⊤ 1/2 kSS ⊤ k ≤ ε. . Then we write AA⊤ + E as, AA⊤ + E = AA⊤ + RR⊤ = AA⊤ + (AS + B)(AS + B)⊤ = A(Id + SS ⊤ )A⊤ + ASB ⊤ + BS ⊤ A⊤ + BB ⊤ = (AP + BS ⊤ P −1 )(AP + BS ⊤ P −1 )⊤ + BB ⊤ − BS ⊤ P −2 SB ⊤ (4.1) b = (AP + BS ⊤ P −1 ). Let K ′ be the column span of A. b We first prove that K b is close to K ′ . Let A Note that BB ⊤ − BS ⊤ P −2 SB ⊤ . kBk2 + kBk2 S ⊤ P −2 S . kBk2 . εσmin (AA⊤ ) . 12 (since P = Id + SS ⊤  SS ⊤ )  bA b⊤ ) = σmin (A) b 2 = σmin (AP ) − kBS ⊤ P −1 k 2 ≥ (1 − O(ε))σmin (A)2 . 
Moreover, we have σmin (A Therefore, using Wedin’s Theorem (Lemma F.2) on equation (4.1), we have that kIdKb − IdK ′ k . ε . Next we show K ′ and K are also close. We have p b − AP k ≤ kBS ⊤ P −1 k ≤ ε σmin (A)2 kA (4.2) (since kSk . √ ε, kBk . √ ε) b is close to the Therefore, by Wedin’s Theorem, K ′ , as the span of top m left singular vectors of A, span of the top left singular vector of AP , namely, K kIdK − IdK ′ k . ε . (4.3) Therefore using equation (4.2) and (4.3) and triangle inequality, we complete the proof. 5 Robust Tensor Decomposition with Systematic Error In this section we discuss how to robustly find the tensor decomposition even in presence of systematic error. We first illustrate the main techniques in an easier setting of orthogonal tensor decomposition (Section 5.1), then we describe how it can be generalized to the general setting that we require for our algorithm (Section 5.2). 5.1 Warm-up: Approximate Orthogonal Tensor Decomposition We start with decomposing an orthogonal tensor with systematic error. The algorithm we use here is a slightly more general version of an algorithm in [MSS16]. Algorithm 2 Robust orthogonal tensor decomposition Inputs: Tensor T ∈ Rd×d×d , number δ, ε ∈ (0, 1). Outputs: Set S = {(ãi , b̃i , c̃i )} 1. S = ∅ 2. For s = 1 to O(d1+δ log d) 3. 4. 5.  Draw g ∼ N (0, Idn ), and compute M = Idn ⊗ Idn ⊗ g⊤ · T .5 Compute the top left and right singular vectors u, v ∈ Rd of M . Let z = (u⊤ ⊗v ⊤ ⊗Idn )·T . If (u⊤ ⊗ v ⊤ ⊗ z ⊤ ) · T ≥ 1 − ζ, where ζ = O(ε), and u is 1/2-far away from any of ui ’s with (ui , vi , wi ) ∈ S, then add (u, v, w) to S. 6. Return S Theorem 5.1 (Stronger version of Theorem 2.6). Suppose {ui }, {vi }, {wi } are three collection ε-approximate orthonormal vectors. Suppose tensor T is of the form T = r X i=1 5 ui ⊗ vi ⊗ wi + Z Recall that product of two tensor (A ⊗ B ⊗ C) · (E ⊗ D ⊗ F ) = AE ⊗ BD ⊗ CF 13 with kZk{2}{1,3} ≤ τ and kZk{1}{2,3} ≤ τ . Then, with probability at least 0.9, Algorithm 2 returns S = {(ũi , ṽi , w̃i )} which is guaranteed to be O((τ + ε)/δ)-close to {(ui , vi , wi )} in ℓ2 -norm up to permutation. Proof Sketch of Theorem 5.1. The Theorem is a direct extension of [MSS16, Theorem 10.2] to asymmetric and approximate orthogonal case. We only provide a proof sketch here. We start by writing m   X ⊤ hg, wi iui vi⊤ + (Idn ⊗ Idn ⊗ g⊤ ) · Z ·T = M = Id ⊗ Id ⊗ g | {z } i=1 :=Mg {z } | (5.1) :=Ms Since kZk{2}{1,3} ≤ τ and kZk{1}{2,3} ≤ τ , [MSS16, Theorem 6.5] implies that with probability at least 1 − d2 over the choice of g, p (Idn ⊗ Idn ⊗ g⊤ ) · Z ≤ 2 log d · τ √ Let t = 2 log d. We have that with probability 1/(d1+δ logO(1) d), hg, w1 i ≥ (1 + δ/3)t and hg, wj i ≤ t for every j 6= 1. We condition on these events. Let ūi be a set of orthonormal vectors such that Eu = [u1 , . . . , um ] − [ū1 , . . . , ūm ] satisfies kEu k ≤ ε (we can take ūi ’s to be the whitening of uP i ’s). Similarly define v̄i ’s. Then we have that the Pterm (defined in equation (5.1)) can be written ′ ′ as i hg, w1 iūi v̄i + E where kEk . ε. Let M̄S = i hg, w1 iūi v̄i . Then M̄S has top singular value hg, w1 i ≥ (1 + δ/3)t, and second singular value at most t. Moreover, the term Mg + E ′ has spectral norm bounded by O(τ + ε). Thus by Wedin’s Theorem (Lemma F.2), the top left and right singular vectors u, v of MS + Mg = M̄S + Mg + E ′ are O((τ + ε)/δ)-close to ū1 and v̄1 respectively. They are also O((τ + ε)/δ)-close to u1 , v1 since u1 is close to ū1 . Moreover, we have (u⊤ ⊗ v ⊤ ⊗ Id) · T is O(τ /δ)-close to w1 . 
Therefore, with probability 1/(d1+δ logO(1) d), each round of the for loop in Algorithm 2 will find u1 , v1 , w1 . Line 5 is used to verify if the resulting vectors are indeed good using the injective norm as a test. It can be shown that if the test is passed then (u, v, z) is close to one of the component. Therefore, after d1+δ logO(1) d iterations, with high probability, we can find all of the components. 5.2 General tensor decomposition In many previous works, general tensor decomposition is reduced to orthogonal tensor decomposition via a whitening procedure. However, here in our setting we cannot estimate the exact whitening matrix because of the systematic error. Therefore we need a more robust version of approximate whitening matrix, which we define below: Definition 5.2. Let r ≤ d. A collection of r vectors {a1 , . . . , ar } is ε-approximately orthonormal if the matrix A with ai as columns satisfies A⊤ A − Id ≤ ε (5.2) Definition 5.3. Let d ≥ r and A = [a1 , . . . , ar ] ∈ Rd×r . A PSD matrix Q ∈ Rd×d is an εapproximate whitening matrix for A if (Q+ )1/2 A is ε-approximately orthonormal . With this in mind, we can state the guarantee on the tensor decomposition algorithm (Algorithm 3). 14 Algorithm 3 Tensor decomposition with systematic error Inputs: Tensor T ∈ Rn1 ×n2 ×n3 and ε-approximate whitening matrices Qa , Qb , Qc ∈ Rd×d . Outputs: {âi , bˆi , ĉi }i∈[r] 1/2 ⊗ (Q+ )1/2 ⊗ (Q+ )1/2 · T 1. Compute T̃ = (Q+ c a) b 2. Run orthogonal tensor decomposition (Algorithm 2) with input T̃ , and obtain {ăi , b̆i , c̆i } 1/2 1/2 1/2 3. Return: {Qa ăi , Qb b̆i , Qc c̆i } Theorem 5.4. Let d ≥ r, and A, B, C ∈ Rd×r be full rank matrices. Let Γ, ∆, Θ ∈ Rd×ℓ . Let ai , bi , ci , γi , δi , θi be the columns of A, B, C, Γ, ∆, Θ respectively. Suppose tensor T is of the form T = r X i=1 ai ⊗ bi ⊗ ci + ℓ X i=1 γi ⊗ δi ⊗ θi + E (5.3) Suppose matrices Qa ∈ Rd×d , Qb ∈ Rd×d , Qc ∈ Rd×d are ε-approximate whitening matrices for 1/2 1/2 1/2 A, B, C, and suppose Γ, ∆, Θ are τ -spectrally bounded by Qa , Qb , Qc , respectively. Then, Algorithm 3 returns âi , b̂i , ĉi that are O(η)-close to ai , bi , ci in Õ(d4+δ ) time with   η . max(kQa k , kQb k , kQc k)1/2 · τ 3/2 + σ −3/2 kEk{1,2}{3} + ε · 1/δ where σ = min(σmin (Qa ), σmin (Qb ), σmin (Qc )). Note that in our model, the matrix E has very small spectral norm as it is the third order term in ρ (and ρ = O(1/n)). The spectral boundedness of Γ, ∆, Θ are discussed in Section B. Therefore we can expect the RHS to be small. In order to prove this theorem, we show after we apply whitening operation using the approximate whitening matrices, the tensor is still close to an orthogonal tensor. To do that, we need the following lemma which is a useful technical consequence of the condition (2.9). Lemma 5.5. Suppose F is τ -spectrally bounded by g. Then, kG⊤ (F F ⊤ )+ Gk ≤ 2τ . (5.4) Proof. Let K be the column span of F . Let Q = F F ⊤ . Multiplying (Q+ )1/2 on both sides of equation (2.9), we obtain that (Q+ )1/2 GG⊤ (Q+ )1/2  τ (IdK + σm (Q)Q+ )  τ (IdK + σm (Q)kQ+ kIdK ) It follows that k(Q+ )1/2 Gk kG⊤ (Q+ )1/2 (Q+ )1/2 Gk ≤ 2τ . ≤ √  2τ IdK 2τ , which in turns implies that kG⊤ (F F ⊤ )+ Gk = We also need to bound the {1, 2}{3} norm of the following systematic error tensor. This is important because we want to bound the spectral norm of the perturbation after the whitening operation. 15 Lemma 5.6 (Variant of [MSS16, Theorem 6.1]). Let Γ, ∆, Θ ∈ Rd×ℓ . Let γi , δi , θi be the i-th column of Γ, ∆, Θ, respectively. 
Then, X i∈[ℓ] γi ⊗ δi ⊗ θi ≤ kΓk · kΘk · k∆k1→2 ≤ kΓk · kΘk · k∆k (5.5) {1,2}{3} Proof of Lemma 5.6. Using the definition of k·k{1,2}{3} we have that X i∈[] γi ⊗ δi ⊗ θi = X (γi ⊗ δi )θi⊤ (5.6) i∈[ℓ] {1,2}{3} ≤ = X 1/2 (γi ⊗ δi )(γi ⊗ δi ) X 1/2 θi θi⊤ i∈[ℓ] i∈[ℓ] (by Cauchy-Schwarz inequality) 1/2 X (γi γi⊤ ) i∈[ℓ] ⊗ (δi δi⊤ ) kΘk Next observe that we have that for any i, δi δi⊤  (max kδi k2 )Id and therefore, (γi γi⊤ ) ⊗ (δi δi⊤ )  γi γi⊤ ⊗ (max kδi k2 )Id . (5.7) It follows that X i∈[r] γi ⊗ δi ⊗ θi ≤ {1,2}{3} X i∈[r] 1/2 2 γi γi⊤ ⊗ (max kδi k )Id kΘk = kΓk · kΘk · k∆k1→2 . With this in mind, we prove the main theorem: 1/2 A, B̃ = (Q+ )1/2 B, C̃ = (Q+ )1/2 C. Moreover, let Γ̃ = Proof of Theorem 5.4. Let à = (Q+ a) c b 1/2 Γ and define ∆, ˜ (Q+ ) Θ̃ similarly. Let ã , b̃ , c̃ , γ̃ , δ̃ , θ̃ be their columns. Then we have that T̃ 1 i i i i i a as defined in Algorithm 3 satisfies T̃ = r X i=1 ãi ⊗ b̃i ⊗ c̃i + ℓ X i=1 γ̃i ⊗ δ̃i ⊗ θ̃i + Ẽ (5.8) 1/2 ⊗ (Q+ )1/2 ⊗ (Q+ )1/2 · E. We will show that T̃ meets the condition of Thewhere Ẽ = (Q+ c a) b 1/2 A is orem 2.6. Since Qa is an ε-approximate whitening matrix of A, by Definition, à = (Q+ a) ε-approximately orthonormal . Similarly, B̃, C̃ are ε-approximately orthonormal . √ Γ is τ -spectrally bounded by Qa , hence by Lemma 5.5, we have that kΓ̃k ≤ 2τ . Similarly, √ kΘk, k∆k ≤ 2τ . Applying Lemma 5.6, we have, ℓ X i=1 γ̃i ⊗ δ̃i ⊗ θ̃i {1,2}{3} 16 ≤ (2τ )3/2 (5.9) −3/2 kEk 1/2 k · k(Q+ )1/2 k · k(Q+ )1/2 kkEk Moreover, we have kẼk{1,2}{3} ≤ k(Q+ {1,2}{3} ≤ σ {1,2}{3} , c c a) where σ = min{σmin (Qa ), σmin (Qb ), σmin (Qc )}.PTherefore, using Theorem 2.6 (with ai , bi , ci there replaced by ãi , b̃i , c̃i , and Z there replaced by ℓi=1 γ̃i ⊗ δ̃i ⊗ θ̃i + Ẽ), we have that a set of vectors {ăi , b̆i , c̆i } that are ε-close to {ãi , b̃i , c̃i } with ε = (2τ )3/2 + σ −3/2 kẼk{1,2}{3} . Therefore, we obtain that kai − Q1/2 ăi k ≤ kQa k1/2 ε. Similarly we can control the error for bi and ci and complete the proof. 6 Conclusions We have presented theoretical progress on the longstanding open problem of presenting a polynomial-time algorithm for learning noisy-or networks given sample outputs from the network. In particular it is enouraging that linear algebraic methods like tensor decomposition can play a role. Earlier there were no good approaches for this problem; even heuristics fail for realistic sizes like n = 1000. Can sample complexity be reduced, say to subcubic? (Cubic implies more than one billion examples for networks with 1000 outputs.) Possibly this requires exploiting some hierarchichal structure –e.g. groupings of diseases and symptoms— in practical noisy-OR networks but exploring such possibilities using the current version of QMR-DT is difficult because it has been scrubbed of labels for diseases and symptoms.) Various more practical versions of our algorithm are also easy to conceive and will be tested in the near future. This could be somewhat analogous to topic modeling, for which discovery of provable polynomial-time algorithms soon led to very efficient algorithms. Acknowledgments: We thank Randolph Miller and Vanderbilt University for providing the current version of this network for our research. References [AGH+ 13] Sanjeev Arora, Rong Ge, Yonatan Halpern, David M Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. A practical algorithm for topic modeling with provable guarantees. In ICML (2), pages 280–288, 2013. [AGH+ 14] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. 
Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014. [AGM12] Sanjeev Arora, Rong Ge, and Ankur Moitra. Learning topic models–going beyond svd. In Foundations of Computer Science (FOCS), 2012 IEEE 53rd Annual Symposium on, pages 1–10. IEEE, 2012. [AGMM15] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. In Proceedings of The 28th Conference on Learning Theory, pages 113–149, 2015. [Chi96] David Maxwell Chickering. Learning bayesian networks is np-complete. In Learning from data, pages 121–130. Springer, 1996. 17 [DK70] Chandler Davis and William Morton Kahan. The rotation of eigenvectors by a perturbation. iii. SIAM Journal on Numerical Analysis, 7(1):1–46, 1970. [GHK15] Rong Ge, Qingqing Huang, and Sham M Kakade. Learning mixtures of gaussians in high dimensions. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, pages 761–770. ACM, 2015. [HK13] Daniel Hsu and Sham M Kakade. Learning mixtures of spherical gaussians: moment methods and spectral decompositions. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 11–20. ACM, 2013. [HS06] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. [HS13] Yoni Halpern and David Sontag. Unsupervised learning of noisy-or bayesian networks. In Uncertainty in Artificial Intelligence, page 272. Citeseer, 2013. [Ips98] Ilse CF Ipsen. Relative perturbation results for matrix eigenvalues and singular values. Acta numerica, 7:151–201, 1998. [JGJS99] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183– 233, 1999. [JHS13] Yacine Jernite, Yonatan Halpern, and David Sontag. Discovering hidden variables in noisy-or networks using quartet tests. In Advances in Neural Information Processing Systems, pages 2355–2363, 2013. [Li97] Ren-Cang Li. Relative perturbation theory. iii. more bounds on eigenvalue variation. Linear algebra and its applications, 266:337–345, 1997. [Li98a] Ren-Cang Li. Relative perturbation theory: I. eigenvalue and singular value variations. SIAM Journal on Matrix Analysis and Applications, 19(4):956–982, 1998. [Li98b] Ren-Cang Li. Relative perturbation theory: Ii. eigenspace and singular subspace variations. SIAM Journal on Matrix Analysis and Applications, 20(2):471–492, 1998. [MPJM82] Randolph A Miller, Harry E Pople Jr, and Jack D Myers. Internist-i, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine, 307(8):468–476, 1982. [MR05] Elchanan Mossel and Sébastien Roch. Learning nonsingular phylogenies and hidden markov models. In Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 366–375. ACM, 2005. [MSS16] T. Ma, J. Shi, and D. Steurer. Polynomial-time Tensor Decompositions with Sum-ofSquares. ArXiv e-prints, October 2016. [SC91] Michael Shwe and Gregory Cooper. An empirical analysis of likelihood-weighting simulation on a large, multiply connected medical belief network. Computers and Biomedical Research, 24(5):453–475, 1991. [Smo86] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986. 18 [Ste77] GW Stewart. 
On the perturbation of pseudo-inverses, projections and linear least squares problems. SIAM review, 19(4):634–662, 1977. [Ste90] Gilbert W Stewart. Matrix perturbation theory. 1990. [Wed72] Per-Åke Wedin. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111, 1972. A Formal expression for the PMI tensor In this section we formally derive the expressions for the PMI tensors and matrices, which we only informally did in Section 2. As a notational convenience for l ∈ N, we will denote by P̃l the matrix which has as columns the vectors 1 − exp(−lWk ), k ∈ [m]. Furthermore, for a subset Sa ⊆ [n], we will introduce the notation ⊤  X X  = (1 − exp(−lWk )Sa ) (1 − exp(−lWk )Sa )⊤ (P̃l )k,Sa (P̃l )k,Sa Pl,Sa = k∈[m] k∈[m] These matrices will appear naturally in the expressions for the higher-order terms in the Taylor expansion for the PMI matrix and tensor. We first compute the formally the moments of the noisy-or model. Lemma A.1. We have log Pr[si = 0] = X k∈[m] ∀i 6= j log Pr [si = 0 ∧ sj = 0] = ∀ distinct i, j, k ∈ [n], log Pr [si = 0 ∧ sj = 0 ∧ sℓ = 0] = X k∈[m] X k∈[m] log (1 − ρ(1 − exp(Wik ))) log (1 − ρ(1 − exp(Wik + Wjk ))) log (1 − ρ(1 − exp(Wik + Wjk + Wℓk ))) Proof of Lemma A.1. We only give the proof for the second equation. The rest can be shown analogously. h i log Pr [si = 0 ∧ sj = 0] = E [Pr[si = 0|d] · Pr[sj = 0|d]] = E exp(−(Wi + Wj )⊤ d) Y = E [exp(−(Wik + Wjk )dk ))] k∈[m] = Y k∈[m] (1 − ρ(1 − exp(−(Wik + Wjk )))) . With this in mind, we give the expression for the PMI tensor along with all the higher-order terms. Proposition A.2. For any equipartition Sa , Sb , Sc of [n], the restriction of the PMI tensor PMITSa ,Sb ,Sc satisfies, for any L ≥ 2, PMITSa ,Sb ,Sc L X X ρ Fk,Sa ⊗Fk,Sb ⊗Fk,Sc + (−1)l+1 = 1−ρ k∈[m] l=2 19 1 l  ρ 1−ρ l ! X k∈[m] (P̃l )k,Sa ⊗(P̃l )k,Sb ⊗(P̃l )Sc +EL (A.1) where kEL k{1,2},{3} ≤ (mn)3 L  1− ρ 1−ρ  L ρ 1−ρ L Proof. The proof will proceed by Taylor expanding the log terms. Towards that, using Lemma A.1, we have : PMITijl = X (1 − ρ(1 − exp(−Wik − Wjk ))) (1 − ρ(1 − exp(−Wik − Wlk ))) (1 − ρ(1 − exp(−Wjk − Wlk ))) log (1 − ρ(1 − exp(−Wik − Wjk − Wlk ))) (1 − ρ(1 − exp(−Wik ))) (1 − ρ(1 − exp(−Wjk ))) (1 − ρ(1 − exp(−Wlk ))) k∈[m] By the Taylor expansion of log(1 − x), we get that PMIijl = ∞ X 1 X t − ρ (((1 − exp(−Wik − Wjk )))t + ((1 − exp(−Wik − Wlk )))t + ((1 − exp(−Wjk − Wlk )))t − t t=1 k∈[m] (1 − exp(−Wik ))t − (1 − exp(−Wjk ))t − (1 − exp(−Wlk ))t − (1 − exp(−Wik − Wjk − Wlk ))t ) Furthermore, note that ((1 − exp(−Wik − Wjk )))t + ((1 − exp(−Wik − Wlk )))t + ((1 − exp(−Wjk − Wlk )))t − (1 − exp(−Wik ))t − (1 − exp(−Wjk ))t − (1 − exp(−Wlk ))t − (1 − exp(−Wik − Wjk − Wlk ))t = t   X t (−1)l (1 − exp(−lWik )) (1 − exp(−lWjk )) (1 − exp(−lWlk )) l l=1 by simple regrouping of the terms. By exchanging l and t, we get PMITSa ,Sb ,Sc   X  t1 t (1 − exp(−lWk ))Sa ⊗ (1 − exp(−lWk ))Sb ⊗ (1 − exp(−lWk ))Sc ρ = (−1) t l l=1 t≥l k∈[m]  l ! X ∞ X 1 ρ (1 − exp(−lWk ))Sa ⊗ (1 − exp(−lWk ))Sb ⊗ (1 − exp(−lWk ))Sc = (−1)l+1 l 1−ρ ∞ X X l+1 l=1 k∈[m] (A.2) where the last equality holds by noting that X 1 t  1  ρ l ρt = t l l 1−ρ t≥l The term corresponding to t = 1 is easily seen to be ρ X Fk,Sa ⊗ Fk,Sb ⊗ Fk,Sc 1−ρ k∈[m] therefore we to show the statement of the lemma, we only need bound the contribution of the terms with l ≥ L. 20 Toward that, note that ∀l, kk1 − exp(−lWk )k ≤ n. 
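The moment identities of Lemma A.1 are easy to check empirically. The following numpy sketch samples from the noisy-or model of Section 1 and compares the empirical value of log Pr[s_i = 0 ∧ s_j = 0] with the closed form; the sizes, the prior ρ, and the uniform distribution used for the nonzero weights are illustrative placeholders rather than the regime analyzed in the paper:

import numpy as np

rng = np.random.default_rng(0)
m, n, rho, p = 50, 200, 0.05, 0.1            # illustrative sizes, not the analyzed regime
# random weight matrix as in Section 1: each entry is nonzero with probability p
W = rng.uniform(0.5, 1.0, size=(n, m)) * (rng.random((n, m)) < p)

N = 200_000                                   # number of samples
d = rng.random((N, m)) < rho                  # diseases d_k ~ Bernoulli(rho), independent
prob_on = 1.0 - np.exp(-(d.astype(float) @ W.T))   # Pr[s_i = 1 | d] = 1 - exp(-<W_i, d>)
s = rng.random((N, n)) < prob_on              # symptoms

i, j = 0, 1
empirical = np.log(np.mean(~s[:, i] & ~s[:, j]))
closed_form = np.sum(np.log(1.0 - rho * (1.0 - np.exp(-(W[i] + W[j])))))
print(empirical, closed_form)                 # should agree up to sampling error

With the moment identities in hand we return to the proof of Proposition A.2; recall the bound just noted, that ‖1 − exp(−lWk)‖ ≤ n for every l.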
Hence, we have by Lemma 5.6, m X k=1 (1 − exp(−lWk ))Sa ⊗ (1 − exp(−lWk ))Sb ⊗ (1 − exp(−lWk ))Sc {1,2},{3} ≤ (mn)3 Therefore, subadditivity of the {12}, {3} norm gives  l  ρ ∞ X  1−ρ  X (1 − exp(−lWk ))Sa ⊗ (1 − exp(−lWk ))Sb ⊗ (1 − exp(−lWk ))Sc (−1)l+1   l l=L k∈[m]  ∞ X  ≤ (mn)  3 l=L ρ 1−ρ l l   l ∞  (mn)3 ρ  (mn)3 X = ≤ L 1−ρ L 1− l=L ρ 1−ρ  L ρ 1−ρ {1,2},{3} L which gives us what we need. A completely analogous proof gives a similar expression for the PMI matrix: Proposition A.3. For any subsets Sa , Sb of [n], s.t. Sa ∩ Sb = ∅, the restriction of the PMI matrix PMISa ,Sb satisfies, for any L ≥ 2, PMISa ,Sb L X X ρ ⊤ + (−1)l+1 Fk,Sa Fk,S = b 1−ρ k∈[m] l=2 1 l  ρ 1−ρ where kEL k{1,2},{3} ≤ B (mn)2 L  1− l ! X (P̃l )k,Sa ((P̃l )k,Sb )⊤ + EL (A.3) k∈[m] ρ 1−ρ  L ρ 1−ρ L Spectral properties of the random model The goal of this section is to prove that the random model specified in Section 1 satisfies the incoherence property 3.2 on the weight matrix and the spectral boundedness property of the PMI tensor. (Recall, the former is required for the whitening algorithm, and the later for the tensor decomposition algorithm.) Before delving into the proofs, we will need a few simple bounds on the singular values of Pl . Lemma B.1. Let Sa ⊆ [n], s.t. |Sa | = Ω(n). With probability 1 − exp(− log2 n) over the choice of W , and for all l = O(poly(n)), σmin (Pl,Sa ) & np and σmax (Pl,Sa ) . mnp2 21 Proof. Let us proceed to the lower bound first. If we denote by L the matrix which has as columns (P̃l )k,Sa , k ∈ [m], it’s clear that Pl,Sa = LL⊤ . Since σmin (LL⊤ ) = σmin (L⊤ L) we will proceed to bound the smallest eigenvalue of L⊤ L. Note that X L⊤ L = (1 − exp(−lW k ))(1 − exp(−lW k ))⊤ k∈Sa Since the matrices (1 − exp(−lW k ))(1 − exp(−lW k ))⊤ are independent, the bound will follow from a matrix Bernstein bound. Denoting i h Q = E (1 − exp(−lW k ))(1 − exp(−lW k ))⊤ by a simple calculation we have h i2  i  h Q = p2 E 1 − exp(−lW̃ ) 11⊤ + pE (1 − exp(−lW̃ ))2 − p2 E [1 − exp(−lw̃)]2 Idm (B.1) where 1 is the all-ones vector of size m, and Idm is the identity of the same size. Furthermore, W̃ is a random variable following the distribution D of all the W̃i,j . Note that (1.2) together with the assumption ν = Ω(1) gives = Ω(p)  Pσmin (Q) i i ⊤ = |S | · Id , and i −1/2 Let Z = Q (1 − exp(−lWi )). Then we have that E a m i∈Sa Z (Z ) with high probability, kZ i k2 ≤ 1/σmin (Q) · kF i k2 . m and it’s a sub-exponential random variable. Moreover, # " 2  X X X Z i (Z i )⊤ ≤ mn . .m E Z i (Z i )⊤ r2 = E i i Therefore, by Bernstein inequality we have that w.h.p, X i∈Sa Z i (Z i )⊤ − nIdm . It follows that X i∈Sa p r 2 log n + max kZ i k2 log n = p mn log n + m log n .  p  mn log n Idm  nIdm . Z i (Z i )⊤  n − O which in turn implies that Pl,Sa  nQ. But this immediately implies σmin (Pl,Sa ) & np with high probability. Union bounding over all l, we get the first part of the lemma. The upper bound will be proven by a Chernoff bound. Note that the matrices (1 − exp(−lWk ))Sa (1 − exp(−lWk ))⊤ Sa , k ∈ [m] 2 are independent. Furthermore, k(1−exp(−lWk ))Sa (1−exp(−lWk ))⊤ Sa k ≤ pn with high probability, 2 ⊤ and the variable k(1 − exp(−lWk ))Sa (1 − exp(−lWk ))Sa k is sub-exponential. 
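Before assembling the Bernstein bound, the scales asserted by Lemma B.1 can be checked numerically. The following numpy sketch draws W from the random model (with an assumed uniform distribution for the nonzero weights), restricts to a random subset Sa of the rows, and compares the extreme nonzero eigenvalues of P_{1,Sa} with np and mnp^2; the agreement is only up to constant factors, which is all the lemma claims:

import numpy as np

rng = np.random.default_rng(1)
m, n, p = 100, 3000, 0.05                     # illustrative sizes with n >> m
W = rng.uniform(0.5, 1.0, size=(n, m)) * (rng.random((n, m)) < p)

l = 1                                         # any fixed l = O(poly(n)) behaves the same way
F = 1.0 - np.exp(-l * W)                      # columns are the vectors 1 - exp(-l W_k)
Sa = rng.choice(n, size=n // 3, replace=False)
L = F[Sa, :]                                  # |S_a| x m restriction
sv = np.linalg.svd(L, compute_uv=False)
# the nonzero spectrum of P_{l,Sa} = L L^T coincides with the spectrum of L^T L
print("smallest nonzero eigenvalue:", sv[-1] ** 2, " vs  n p     =", n * p)
print("largest eigenvalue:         ", sv[0] ** 2, " vs  m n p^2 =", m * n * p ** 2)

Returning to the upper bound in the proof of Lemma B.1: the summands are independent, each is bounded in norm by pn with high probability, and each is sub-exponential.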
Finally, # "m X 2 ((1 − exp(−lWk ))Sa (1 − exp(−lWk ))⊤ r2 = E Sa ) k=1 ≤ m X k=1 i h 2 E ((1 − exp(−lWk )Sa )(1 − exp(−lWk ))⊤ ) Sa h i 2 2 ≤ m k1 − exp(−lWk )Sa k2 E ((1 − exp(−lWk ))Sa (1 − exp(−lWk ))⊤ Sa ) ≤ mn p 22 Similarly as in the lower bound,  h i2 i  h E[Pl,Sa ] = p2 E 1 − exp(−lW̃ ) 11⊤ + pE (1 − exp(−lW̃ ))2 − p2 E [1 − exp(−lw̃)]2 Id|Sa | where 1 is the all-ones vector of size |Sa |, and Id|Sa | is the identity of the same size. Again, W̃ is a random variable following the distribution D of all the W̃i,j . This immediately gives p Pl  E[Pl ] + r log nId|Sa |  mnp2 + mn2 p2 log nId|Sa |  O(mnp2 )Id|Sa | A union bound over all values of l gives the statement of the Lemma. B.1 Incoherence of matrix F First, we proceed to show the incoherence property 3.2 on the weight matrix. Lemma B.2. Suppose n is a multiple of 3. Let F = U ΣV be the singular value decomposition of F . Let Sa , Sb , Sc be a uniformly random equipartition of the rows of [n]. Suppose F is µ-incoherent with µ ≤ n/(m log n). Then, with high probability over the choice of and Sa , Sb , Sc , we have for every i ∈ {a, b, c}, r  1 µm Si ⊤ Si U − Id . log n . U 3 n Proof. Let S = Sa . Then, since U ⊤ U = Idm , we have # " X 1 (U i )(U i )⊤ = · Idm . E 3 i∈S By the assumption on the row norms of U , kU i (U i )⊤ k2 = kU i k2 ≤ µ m n . By the incoherence assumption, we have that maxi kU i k2 ≤ µm/n. We also note that U i ’s are negatively associated random variables. Therefore by the matrix Chernoff inequality for negatively associated random variables, we have with high probability, r i h µm log n S ⊤ S S ⊤ S . (U ) U − E (U ) U . n But, an analogous argument holds for Sb , Sc as well – so by a union bound over k, we complete the proof. Lemma B.3. Suppose n & m log n. Under the generative assumption in Section 1 for W , we have that we have that F = 1 − exp(−W ) is O(1)-incoherent. Proof. We have that F F ⊤ = U Σ2 U T and therefore, kF i k2 = Σ2i,i kU i k2 . This in turn implies that kU i k2 ≤ 1 1 Fi = 2 kF i k2 . min Σ2ii σmin (F ) √ 2 Since F i ≤ pm+ pm ≤ 2pm with high probability, we only need to bound σmin (F ) from below. 2 (F ) = σ ⊤ ⊤ Note that σmin min (F F ). Therefore it suffices to control σmin (F F ). But by Lemma B.1 2 we have σmin (F ) & np. Therefore, we have that m 1 kF i k2 = O( ) kU i k2 ≤ 2 n σmin (F ) 23 B.2 Spectral boundedness The main goal of the section is to show that the bias terms in the PMI tensor are spectrally bounded by the PMI matrix (which we can estimate from samples). Furthermore, we show that we can calculate an approximate whitening matrix for the leading terms of the PMI tensor using the machinery in Section . The main proposition we will show is the following: Proposition B.4.   Let W be sampled according to the random model in Section 1 with ρ =  log n 1 . Let Sa ⊆ [n], |Sa | = Ω(n). If RSa is the matrix that has as columns o log n , p = ω √ mn    l 1/3 ρ the vectors 1l 1−ρ (P̃l )j,Sa , l ∈ [2, L], j ∈ [m] for L = O(poly(n)), and A is the matrix that 1/3  ρ (P̃1 )j,Sa for j ∈ [m], with high probability it holds that RSa RS⊤a has as columns the vectors 1−ρ is O(ρ2/3 log n)-spectrally bounded by A. The main element for the proposition is the following Lemma: Lemma B.5. For any set Sa ⊆ [n], |Sa | = Ω(n), with high probability over the choice of W , for ⊤ is O(log n)-spectrally bounded by P ′ every ℓ, ℓ′ = O(poly(n)) Pl,Sa Pl,S l ,Sa . a Before proving the Lemma, let us see how the proposition follows from it: Proof of B.4. 
We have RSa RS⊤a = L X l=2 1 l  ρ 1−ρ l !2/3 Pl,Sa By Lemma B.5, we have that ∀l > 1, P̃l,Sa is τ -spectrally bounded by P̃1,Sa , for some τ = O(log n). Hence, RSa RS⊤a  ∞ X l=2 1 l  ρ 1−ρ l !2/3 τ (P1,Sa + σmin (P1,Sa )) - ρ4/3 τ (P1,Sa + σmin (P1,Sa )) ρ 2/3 ) P1,Sa , the claim of the Proposition follows. Since AA⊤ = ( 1−ρ It is clear analogous statements hold for Sb and Sc . Finally, we proceed to show Lemma B.5. For notational convenience, we will denote by Jm×n the all ones matrix with dimension m × n. (We will omit the dimensions when clear from the context.) This statement will immediately follow from the following two lemmas: Lemma B.6. For any set Sa ⊆ [n], |Sa | = Ω(n), with probability 1 − exp(− log2 n) over the choice of W , for all ℓ ≤ O(poly(n)), 5 Pl,Sa  10np log nId + mp2 J 2 24 Lemma B.7. For any set Sa ⊆ [n], |Sa | = Ω(n), with probability 1 − exp(− log2 n) over the choice of W , ∀ℓ = poly(n), Pl,Sa + 6np log nId - mp2 J Before showing these lemmas, let us see how Lemma B.5 is implied by them: Proof of Lemma B.5. Let κ be the constant in B.7, s.t. Pl,Sa + 6np log nId - mp2 J. Putting the bounds from Lemmas B.6 and B.7 together along with a union bound, we have that with high probability, ∀l, l′ = O(poly(n))   15 5 Pl,Sa − κPl′ ,Sa  10np log n + κnp log n Id  O(mp log n)Id 2 2 But, note that σmin (Pl′ ) = Ω(np), by Lemma B.1. Hence, Pl − 52 κPl′ ,Sa  r log nσmin (Pl′ ), for some sufficiently large constant r. This implies 5 Pl − r log nPl′ ,Sa  Pl − κPl′  r log nσmin (Pl′ ,Sa ) 2 from which the statement of the lemma follows. We proceed to the first lemma: Proof of Lemma B.6. To make the notation less cluttered, we will drop l and Sa and use P = Pl,Sa . Furthermore, we will drop Sa when referring to columns of P̃ so we will denote P̃k = P̃k,Sa . Let’s denote by e = √ 1 1 . Let’s furthermore denote Id1 = ee⊤ , and Id−1 = Id − ee⊤ . Note |Sa | first that trivially, since Id1 + Id−1 = Id, P = (Id1 + Id−1 )P (Id1 + Id−1 ) (B.2) Furthermore, it also holds that 1 1 0  (2Id−1 − Id1 )P (2Id−1 − Id1 ) 2 2 1 = Id1 P Id1 + 4Id−1 P Id−1 − Id−1 P Id1 − Id1 P Id−1 4 where the first inequality holds since 1 1 (2Id−1 − Id1 )P (2Id−1 − Id1 ) = 2 2  1 (2Id−1 − Id1 )P̃ 2  1 (2Id−1 − Id1 )P̃ 2 ⊤ From this we get that Id−1 P Id1 + Id1 P Id−1  1 Id1 P Id1 + 4Id−1 P Id−1 4 (B.3) We proceed to upper bound both the terms on the RHS above. More precisely, we will show that Id1 P Id1 ≤ 2mp2 nJ (B.4) Id−1 P Id−1  2np log nId 25 (B.5) Let us proceed to showing (B.4). The LHS can be rewritten as   Id1 P Id1 = ee⊤ e⊤ P̃ P̃ ⊤ e Note that m X 1 e⊤ P̃ P̃ ⊤ e = n k=1 h1, P̃k i2 ! All the terms h1, P̃k i2 are independent and satisfy and E[h1, P̃k i2 ] ≤ (E[h1, P̃k i])2 ≤ p2 n2 . By Chernoff, we have that m X p h1, P̃k i2 ≤ mp2 n2 + mp2 n2 log n ≤ 2mp2 n2 k=1 log n ). with probability at least 1 − exp(− log2 n), where the second inequality holds because p = ω( √ mn Hence, e⊤ P̃ P̃ ⊤ e ≤ 2mp2 n with high probability. We proceed to (B.5), which will be shown by a Bernstein bound. Towards that, note that E[(Id−1 P̃k (Id−1 P̃k )⊤ ] = Id−1 E[P̃k (P̃k )⊤ ]Id−1   = Id−1 p2 E[1 − exp(W̃ )]2 11⊤ + (p − p2 )E[(1 − exp(W̃ ))2 ]Id Id−1  pId where W̃ is a random variable following the distribution D of all the W̃i,j . The second line can be seen to follow from the independence of the coordinates of P̃k according to our model. 
Furthermore, with high probability kId−1 P̃k k22 ≤ kP̃k k22 ≤ np and the random variable kId−1 P̃k k2 is sub-exponential Finally, # "m # "m 2 X X 2 (Id−1 P̃k )(Id−1 Γk )P̃ ≤ P̃k r2 = E (I−1 P̃k )(Id−1 P̃k )⊤ E 2 k=1 k=1 ≤ npmkE[(Id−1 P̃k )(Id−1 P̃k )⊤ ]k ≤ np2 m Therefore, applying a matrix Bernstein bound, we get Id−1 P Id−1  mpId + np log nId + r log nId  2np log nId with high probability. Combining this with (B.2) and (B.3), we get 5 P  Id1 P Id1 + 5Id−1 P Id−1 4 5  mp2 J + 10np log nId 2 Let us proceed to the second inequality, which essentially follows the same strategy: 26 Proof of Lemma B.7. Similarly as in the proof of Lemma B.6, for notational convenience, let’s denote by P̃ the matrix which has column k the vector (1 − exp(lWk )). Reusing the notation from Lemma B.7, we have that P = (Id1 + Id−1 )P (Id1 + Id−1 ) (B.6) and 1 1 0  ( Id1 + 2Id−1 )P ( Id1 + 2Id−1 ) 2 2 1 ′ = Id−1 P Id1 + Id1 P Id−1 + Id1 P Id1 + 4Id−1 P Id−1 4 for similar reasons as before. From this we get that 1 Id−1 P Id1 + Id1 P Id−1  − Id1 P Id1 − 4Id−1 P Id−1 4 (B.7) Putting (B.6) and (B.7) together, we get that 3 P + 3Id−1 P Id−1  Id1 P Id1 4 (B.8) We will proceed to show an upper bound Id−1 P Id−1  2np log nId on second term of the LHS. We will do this by a Bernstein bound as before. Namely, analogously as in Lemma B.7, Id−1 P Id−1 = m X k=1 (Id−1 P̃k )(Id−1 P̃k )⊤ and E[(Id−1 P̃k )(Id−1 P̃k )⊤ ]  pId and and kId−1 P̃k k22 ≤ kP̃k k22 ≤ np are satisfied so "m # 2 X (Id−1 P̃k )(Id−1 P̃k )⊤ ≤ np2 m r2 = E k=1 Therefore, applying a matrix Bernstein bound, we get Id−1 P Id−1  mpId + np log nId + r  2np log nId with high probability. Plugging this in in (B.8), we get   3 3 3 Id1 P Id1 = ee⊤ P̃ P̃ ⊤ ee⊤ = ee⊤ e⊤ P̃ P̃ ⊤ e 4 4 4 P  m 2 with the goal of applying Chernoff, we will lower Since we have e⊤ P̃ P̃ ⊤ e = n1 k=1 hP̃k , 1i P + 6np log nId  bound E[h1, Ak i2 ]. More precisely, we will show E[h1, P̃k i]2 = Ω(n2 p2 ). In order to do this, we have X X E[(P̃k )2j ] + E[h1, P̃k i2 ] = E[(P̃k )j ]E[(P̃k )j ′ ] j6=j ′ ;j,j ′∈Sa j∈Sa = X j∈Sa ≥ X j∈Sa ˜ )2 ] + pE[(1 − exp(−lW pE[(1 − exp(−W̃ )2 ] + 2 2 = Ω(n p ) 27 X j6=j ′ ;j,j ′∈Sa X j6=j ′ ;j,j ′∈Sa ˜ )]2 p2 E[1 − exp(−lW p2 E[1 − exp(−W̃ )]2 where W̃ is a random variable following the distribution D of all the W̃i,j . and the last inequality holds because of (1.2). p So by Chernoff, we get that e⊤ P̃ P̃ ⊤ e = n1 (Ω(mn2 p2 ) − n2 p2 m) = Ω(mnp2 ) with high probability. Altogether, this means P + 6np log nId % mp2 J C Robust whitening Algorithm 4 Obtaining whitening matrices d Inputs: Random partitioning Sa , Sb , Sc of [n]. Empirical PMI matrix PMI. d×d Outputs: Whitening matrices Qa , Qb , Qc ∈ R 1. Output + d Sc ,Sa , d Sa ,S (PMI d S ,S )⊤ PMI Qa = ρ−1/3 PMI b b c d S ,Sc (PMI d + )⊤ PMI d Sa ,S , Qb = ρ−1/3 PMI Sc ,Sa b b ⊤ d d Sc ,Sa (PMI d+ Qc = ρ−1/3 PMI Sa ,Sb ) PMISb ,Sc ⊤ d d Sa ,S (PMI d+ In this section, we show the formula Qa = ρ−1/3 PMI Sb ,Sc ) PMISc ,Sa computes an b ⊤ approximation of the true whitening matrix AA , so that the error is ε-spectrally bounded by A. We recall Theorem 2.10. Theorem 2.10. Let n ≥ m and A, B, C ∈ Rn×m . Suppose Σab , Σbc , Σca ∈ Rn×n are of the form, Σab = AB ⊤ + Eab , Σbc = BC ⊤ + Ebc , and Σca = CA⊤ + Eca . where Eab , Ebc , Eca are ε-spectrally bounded by (A, B), (B, C), (C, A) respectively. Then, the matrix matrix + Qa = Σab [Σ⊤ bc ]m Σca ⊤ + is a good approximation of AA⊤ in the sense that Qa = Σab [Σ⊤ bc ]m Σca − AA is O(ε)-spectrally bounded by A. Here [Σ]m denotes the best rank-m approximation of Σ. 
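As a concrete reading of this formula, the following numpy sketch forms Qa = Σab [Σ⊤bc]+m Σca from noiseless blocks Σab = AB⊤, Σbc = BC⊤, Σca = CA⊤ and checks that it reproduces AA⊤ exactly; the pseudo-inverse of the best rank-m approximation is computed through a truncated SVD. This is only an illustration of the algebra behind Algorithm 4: the ρ^(−1/3) prefactor and the sample-based PMI blocks are omitted, and the dimensions are placeholders.

import numpy as np

def rank_m_pinv(S, m_):
    # pseudo-inverse of the best rank-m approximation [S]_m
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return Vt[:m_].T @ np.diag(1.0 / s[:m_]) @ U[:, :m_].T

rng = np.random.default_rng(2)
n, m = 60, 8
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, m))
C = rng.standard_normal((n, m))

Sigma_ab, Sigma_bc, Sigma_ca = A @ B.T, B @ C.T, C @ A.T
Q_a = Sigma_ab @ rank_m_pinv(Sigma_bc.T, m) @ Sigma_ca      # Theorem 2.10, noiseless case
print(np.linalg.norm(Q_a - A @ A.T) / np.linalg.norm(A @ A.T))   # ~ 1e-12

With noisy blocks Σab = AB⊤ + Eab, Σbc = BC⊤ + Ebc, Σca = CA⊤ + Eca the same formula is applied, and Theorem 2.10 quantifies how far the resulting Qa can drift from AA⊤.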
Towards proving Theorem 2.10, an intermediate step is to understand the how the space of singular vectors of BC ⊤ are aligned with the noisy version Σbc . The following explicitly represent BC ⊤ + E as the form B ′ R(C ′ )⊤ + ∆′ . Here the crucial benefit to do so is that the resulting ∆′ is small in every direction. In other words, we started with a relative error guarantees on E and the Lemma below converts to it an absolute error guarantees on ∆′ (though the signal term changes slightly). Lemma C.1. Suppose B, C are n × m matrices with n ≥ m. Suppose a matrix E is ε-spectrally bounded by (B, C), then BC ⊤ + E can be written as BC ⊤ + E = (B + ∆B )RBC (C + ∆C )⊤ + ∆′BC , 28 where ∆B , ∆C , ∆′BC are small and RBC is close to identity in the sense that, k∆B k ≤ O(εσmin (B)) k∆C k ≤ O(εσmin (C)) k∆′BC k ≤ O(εσmin (B)σmin (C)) kRBC − Idk ≤ O(ε) Proof. The key intuition is if the perturbation is happening in the span of columns of B and C, they cannot change the subspace. By Definition 2.9, we can write E as ⊤ E = B∆1 C ⊤ + B∆⊤ 2 + ∆3 C + ∆4 . Now since k∆1 k ≤ ε < 1, we know (Id − ∆1 ) is invertible, so we can write (BC ⊤ + E) = (B + ∆3 (Id + ∆1 )−1 )(Id + ∆1 )(C + ∆2 (Id − ∆1 )−⊤ )⊤ − ∆3 (Id − ∆1 )−1 ∆⊤ 2 + ∆4 . This is already in the desired form as we can let ∆B = ∆3 (Id + ∆1 )−1 , RBC = (Id + ∆1 ), ∆C = ∆2 (Id − ∆1 )−⊤ , and ∆′BC = −∆3 (Id − ∆1 )−1 ∆⊤ 2 + ∆4 . By Weyl’s Theorem we know −1 ε σmin (B). Other terms can be σmin (Id + ∆1 ) ≥ 1 − ε, therefore k∆B k ≤ k∆3 kσmin (Id + ∆) ≤ 1−ε bounded similarly. Now we prove that the top m approximation of BC ⊤ + E has similar column/row spaces as BC ⊤ . Let UB be the column span of B, UB′ be the column span of (B + ∆B ), and UB′′ be the top m left singular subspace of (BC ⊤ + E). Similarly we can define UC , UC′ , UC′′ to be the column spans of C, C + ∆C and the top m right singular subspace of (BC ⊤ + E). For B + ∆B , we can apply Weyl’s Theorem and Wedin’s Theorem. By Weyl’s Theorem we know σmin (B + ∆B ) ≥ σmin (B) − k∆B k ≥ (1 − O(ε))σmin (B). By Wedin’s Theorem we know UB′ is O(ε)-close to UB . Similar results apply to C + ∆C . Now we know σmin ((B + ∆B )RBC (C + ∆C )⊤ )) ≥ σmin (B + ∆B )σmin (RBC )σmin (C + ∆C ) ≥ Ω(σmin (B)σmin (C). Therefore we can again apply Wedin’s Theorem, considering (B+∆B )RBC (C+ ∆C )⊤ ) as the original matrix and ∆′BC as the perturbation. As a result, we know UB′′ is O(ε) close to UB′ , UC′′ is O(ε) close to UB′ . The distance between UB , UB′′ (and UC , UC′′ ) then follows from triangle inequality. As a direct corollary of Lemma C.1, we obtain that the BC ⊤ and BC ⊤ + E have similar subspaces of singular vectors. Corollary C.2. In the setting of Lemma C.1, let [BC ⊤ + E]m be the best rank-m approximation of BC ⊤ + E. Then, the span of columns of [BC ⊤ + E]m is O(ε)-close to the span of columns of B, span of rows of [BC ⊤ + E]m is O(ε)-close to the span of columns of C. Furthermore, we can write [BC ⊤ + E]m = (B + ∆B )RBC (C + ∆C )⊤ + ∆BC . Here ∆B , ∆C and RBC as defined in Lemma C.1 and ∆BC satisfies k∆BC k ≤ O(εσmin (B)σmin (C)). Proof. Since [BC ⊤ + E]m is the best rank-m approximation, because (B + ∆B )RBC (C + ∆C )⊤ ) is a rank m matrix, in particular we have kBC ⊤ + E − [BC ⊤ + E]m k ≤ kBC ⊤ − (B + ∆B )RBC (C + ∆C )⊤ )kk∆′BC k. 29 Therefore k∆BC k = k[BC ⊤ + E]m − (B + ∆B )RBC (C + ∆C )⊤ )k ≤ kBC ⊤ + E − [BC ⊤ + E]m k + kBC ⊤ + E − (B + ∆B )RBC (C + ∆C )⊤ )k ≤ 2k∆′BC k. + In order to fix this problem, we notice that the matrix [Σ⊤ bc ]m is multiplied by Σab on the left ⊤ ⊤ + and Σca on the right. 
Assuming Σab = AB , Σca = CA , we should expect [Σ⊤ bc ]m to “cancel” with the B ⊤ factor on the left and the C factor on the right, giving us AA⊤ . Therefore, we should really ⊤ + measure the error of the middle term [Σ⊤ bc ]m after left multiplying with B and right multiplying with C. We formalize this in the following lemma: + ⊤ + Lemma C.3. Suppose Σbc is as defined in Theorem 2.10, let ∆ = [Σ⊤ bc ]m − [CB ] , then we have kB ⊤ ∆Ck = O(ε), k∆Ck ≤ O( ε ), σmin (B) ε ), σmin (C) ε k∆k ≤ O( ). σmin (B)σmin (C) kB ⊤ ∆k ≤ O( We will first prove Theorem 2.10 assuming Lemma C.3. Proof of Theorem 2.10. By Lemma C.1, we know Σab can be written as (A + ∆1A )RAB (B + ∆1B )⊤ + ∆AB . Similarly Σca can be written as (C + ∆3C )RCA (A + ∆3A )⊤ + ∆CA . Here the ∆ terms and R terms are bounded as in Lemma C.1. + Now let us write the matrix Σab [Σ⊤ bc ]m Σca as     (A + ∆1A )RAB (B + ∆1B )⊤ + ∆AB ([CB ⊤ ]+ + ∆BC ) (C + ∆3C )RCA (A + ∆3A )⊤ + ∆CA + We can now view Σab [Σ⊤ bc ]m Σca as the product of three terms, each term is the sum of two matrices. Therefore we can expand the product into 8 terms. In each of the three pairs, we will call the first matrix the main matrix, and the second matrix the perturbation. In the remaining proof, we will do calculations to show the product of the main terms is close to AA⊤ , and all the other 7 terms are small. Before doing that, we first prove several Claims about PSD matrices Claim C.4. If k∆k ≤ ε, then A∆A⊤  εAA⊤ . If kΓk ≤ εσmin (A), then 2 (A)Id . εAA⊤ + εσmin 1 ⊤ 2 (AΓ + ΓA⊤ )  Proof. Both inequalities can be proved by consider the quadratic form. We know for any x, x⊤ A∆A⊤ x ≤ k∆kkA⊤ xk2 ≤ εx⊤ AA⊤ x, so the first part is true. For the second part, for any x we can apply Cauchy-Schwartz inequality √ 1 2 x⊤ (AΓ⊤ + ΓA⊤ )x = h εA⊤ x, ε−1/2 Γ⊤ xi ≤ εkA⊤ xk2 + ε−1 kΓ⊤ xk2 = x⊤ (εAA⊤ + εσmin (A)Id)x. 2 30 Now, we will first prove the product of three main matrices is close to AA⊤ :   Claim C.5. We have (A + ∆1A )RAB (B + ∆1B )⊤ (CB ⊤ )+ (C + ∆3C )RCA (A + ∆3A )⊤ = AA⊤ + EA , where EA is O(ε)-spectrally bounded by AA⊤ . Proof. We will first prove the middle part of the matrix (B +∆1B )⊤ (CB ⊤ )+ (C +∆3C ) is O(ε) close to identity matrix Id. Here we observe that both B, C have full column rank so (CB ⊤ )+ = (B ⊤ )+ C + . Therefore we can rewrite the product as (Id + B + ∆1B )⊤ (Id + C + ∆3C ). Since k∆1B k ≤ O(εσmin (B)) by Lemma C.1 (and similarly for C), we know kB + ∆1B k ≤ O(ε). Therefore the middle part is O(ε) bAB = RAB (B + ∆1 )⊤ (CB ⊤ )+ (C + ∆3 )RCA is O(ε)-close close to Id. Now since ε ≪ 1 we know R B C to Id. bAB (A + ∆3 )⊤ , for this matrix we know Now we are left with (A + ∆1A )R A bAB (A + ∆3A )⊤ − AA⊤ = A(R bAB − Id)A⊤ + ∆1A R bAB A⊤ + AR bAB (∆3A )⊤ + ∆1A R bAB (∆3A )⊤ . (A + ∆1A )R 3 ⊤  bAB − Id)A⊤  O(ε)AA⊤ (Claim C.4); the fourth term ∆1 R b The first term A(R A AB (∆A ) 2 (A))Id (by the norm bounds of ∆1 and ∆3 . For the cross terms, we can bound them O(εσmin A A using the second part of Claim C.4. Next we will try to prove the remaining 7 terms are small. We partition them into three types depending on how many ∆ factors they have. We proceed to bound them in each of these cases. For the terms with only one ∆, we claim: Claim C.6. The three terms ∆AB (CB ⊤ )+ (C + ∆3C )RCA (A + ∆3A )⊤ , (A + ∆1A )RAB (B + ∆1B )⊤ ∆AB (C + ∆3C )RCA (A + ∆3A )⊤ , (A + ∆1A )RAB (B + ∆1B )⊤ (CB ⊤ )+ ∆CA are all O(ε) spectrally bounded by AA⊤ . Proof. For the first term, note that both B, C have full column rank, and hence (CB ⊤ )+ = (B ⊤ )+ C + . 
Therefore the first term can be rewritten as [∆AB (B ⊤ )+ ][(Id + C + ∆3C )RCA ](A + ∆3A )⊤ . By Lemma C.1, we have spectral norm bounds for ∆AB , ∆3C , ∆3A , RCA . Therefore we know k∆AB (B ⊤ )+ k ≤ O(εσmin (A)) and [(Id + C + ∆3C )RCA ] is O(ε) close to Id. Therefore 2 (A)) is trivially O(ε) spectrally bounded, and k[∆AB (B ⊤ )+ ][(Id + C + ∆3C )RCA ](∆3A )⊤ k ≤ O(εσmin [∆AB (B ⊤ )+ ][(Id + C + ∆3C )RCA ]A⊤ is O(ε) spectrally bounded by Claim C.4. The third term is exactly symmetric. b BC = (B+∆1 )⊤ ∆BC (C + For the second part, we will first prove the middle part of the matrix ∆ B ∆3C ) has spectral norm O(ε). This can be done y expanding it to the sum of 4 terms, and use appropriate spectral norm bounds on ∆BC and its products with B ⊤ and C from Lemma C.3. b BC RCA (A + ∆3 )⊤ is O(ε) spectrally bounded by the first part Now we can show (A + ∆1A )RAB ∆ A of Claim C.4. Next we try to bound the terms with two ∆ factors. Claim C.7. The three terms ∆AB ∆BC (C + ∆3C )RCA (A + ∆3A )⊤ , ∆AB (CB ⊤ )+ ∆CA ,  (A + ∆1A )RAB (B + ∆1B )⊤ ∆BC ∆CA are all O(ε2 ) spectrally bounded by AA⊤ . Proof. For the first term, notice that k∆BC (C + ∆3C )k is bounded by O(ε/σmin (B)) by Lemma C.3, and k∆AB k = O(εσmin (A)σmin (B)). Therefore we know k∆AB ∆BC (C + ∆3C )RCA k ≤ O(ε2 σmin (A)), so by Claim C.4 we know this term is O(ε2 ) spectrally bounded by AA⊤ . Third term is symmetric. 31 2 (A)), For the second term, by Lemma C.1 we can directly bound its spectral norm by O(ε2 σmin so it is trivially O(ε2 ) spectrally bounded by AA⊤ . Finally, for the product ∆AB ∆BC ∆CA , we can get the spectral norm for the three factors by 2 (A)) which is trivially O(ε3 ) Lemma C.1 and Lemma C.3. As a result k∆AB ∆BC ∆CA k ≤ O(ε3 σmin ⊤ spectrally bounded by AA . Combining the bound for all of the terms we get the theorem. With that, we now try to prove Lemma C.3 We first prove a simpler version where the perturbation is simply bounded in spectral norm Lemma C.8. Suppose B, C are n × m matrices and n ≥ m. Let R be an n × n matrix such that kR − Idk ≤ ε, and E is a perturbation matrix with kEk ≤ εσmin (B)σmin (C) and (CRB ⊤ + E) is also of rank m. Now let ∆ = (CRB ⊤ + E)+ − (CRB ⊤ )+ , then when ε ≪ 1 we have kB ⊤ ∆Ck = O(ε), k∆Ck ≤ O( ε ), σmin (B) ε ), σmin (C) ε k∆k ≤ O( ). σmin (B)σmin (C) kB ⊤ ∆k ≤ O( Proof. We first give the proof for kB ⊤ ∆Ck. Other terms are similar. Let UB be the column span of B, and UB′ be the row span of (CRB ⊤ + E). Similarly let UC be the column span of C and UC′ be the column span of (CRB ⊤ + E). By Wedin’s theorem, we know UB′ is O(ε) close to UB and UC′ is O(ε) close to UC . As a result, suppose the SVD of B is UB DB VB⊤ , we know σmin (B ⊤ UB′ ) = σmin (VB DB UB⊤ UB′ ) ≥ (1 − O(ε))σmin (B). The same is true for C: σmin (C ⊤ UC′ ) ≥ (1 − O(ε))σmin (C). By the property of pseudoinverse, the column span of (CRB ⊤ + E)+ is UB′ , and the row span of (CRB ⊤ + E)+ is UC′ , further, (CRB ⊤ + E)+ = UB′ [(UC′ )⊤ (CRB ⊤ + E)UB′ ]−1 UC′ , therefore we can write B ⊤ (CRB ⊤ + E)+ C = B ⊤ UB′ [(UC′ )⊤ (CRB ⊤ + E)UB′ ]−1 (UC′ )⊤ C. Note that now the three matrices are all n×n and invertible! We can write B ⊤ UB′ = ((B ⊤ UB′ )−1 )−1 (and do the same thing for (UC′ )⊤ C. Using the fact that P −1 Q−1 = (QP )−1 , we have B ⊤ (CRB ⊤ + E)+ C = (((UC′ )⊤ C)−1 (UC′ )⊤ (CRB ⊤ + E)UB′ (B ⊤ UB′ )−1 )−1 = (R + ((UC′ )⊤ C)−1 (UC′ )⊤ EUB′ (B ⊤ UB′ )−1 )−1 =: (R + X)−1 . Here we defined X = ((UC′ )⊤ C)−1 (UC′ )⊤ EUB′ (B ⊤ UB′ )−1 . 
The spectral norm of X can be bounded by kXk ≤ k((UC′ )⊤ C)−1 kkEkk(B ⊤ UB′ )−1 k −1 −1 ′ = kEkσmin (B ⊤ UB′ )σmin (C ⊤ CB ) ≤ O(ε). We can write B ⊤ ∆C = B ⊤ (CRB ⊤ + E)+ C − Id = (Id + (R − Id + X))−1 − Id, and we now know k(R − Id + X)k ≤ O(ε), as a result kB ⊤ ∆Ck ≤ O(ε) as desired. 32 For the term kB ⊤ ∆k, by the same argument we have B ⊤ (CRB ⊤ + E)+ = ((UC′ )⊤ (CRB ⊤ + E)UB′ (B ⊤ UB′ )−1 )−1 (UC′ )⊤ = ((UC′ )⊤ CR + (UC′ )⊤ EUB′ (B ⊤ UB′ )−1 )−1 (UC′ )⊤ = ((UC′ )⊤ C(R + X))−1 (UC′ )⊤ = (R + X)−1 ((UC′ )⊤ C)−1 (UC′ )⊤ . On the other hand, we know B ⊤ (CRB ⊤ )+ = R−1 C + = R−1 ((UC )⊤ C)−1 UC⊤ . We can match the three factors: kR−1 − (R + X)−1 k ≤ O(ε), k((UC )⊤ C)−1 − ((UC′ )⊤ C)−1 k ≤ O(ε/σmin (C)), kUC − UC′ k ≤ O(ε), kR−1 k ≤ 1 + O(ε) k((UC )⊤ C)−1 k = O(1/σmin (C)) kUC k = 1. Here, first and third bound are proven before. The second bound comes if we consider the SVD of C = UC DC VC⊤ and notice that k(UC′ )⊤ UC − Idk ≤ O(ε). We can write ∆1 = R−1 − (R + X)−1 , ∆2 = ((UC )⊤ C)−1 − ((UC′ )⊤ C)−1 , ∆3 = UC − UC′ , then we have B ⊤ ∆ = B ⊤ (CRB ⊤ + E)+ − B ⊤ (CRB ⊤ )+ = (R−1 − ∆1 )(((UC )⊤ C)−1 − ∆2 )(UC − ∆3 )⊤ − R−1 ((UC )⊤ C)−1 UC⊤ . Expanding the last equation, we get 7 terms and all of them can be bounded by O(ε/σmin (C)). The bounds on k∆Ck and k∆k can be proved using similar techniques. Finally we are ready to prove the main Lemma C.3: ⊤ , we can write the matrix before pseudoinverse Proof of Lemma C.3. Using Lemma C.1, let E = Ebc as [CB ⊤ + E]m = (C + ∆C )RBC (B + ∆B )⊤ + ∆BC . We can then apply Lemma C.8 on (C + ∆C )RBC (B + ∆B )⊤ + ∆BC . As a result, we know if we ⊤ + let ∆′ = [CB ⊤ + E]+ m − ((C + ∆C )RBC (B + ∆B ) ) , we have the desired bound if we left multiply ⊤ with (B + ∆B ) or right multiply with (C + ∆C ). We will now show how to prove the first bound, all the other bounds can be proved using the same strategy: First, we can write ′ ⊤ ′ B ⊤ ∆′ C = −(B + ∆B )⊤ ∆′ (C + ∆C ) + (B + ∆B )⊤ ∆′ C + ∆⊤ B ∆ (C + ∆C ) − ∆B ∆ ∆C . All the four terms on the RHS can be bounded by Lemma C.8 so we know kB ⊤ ∆′ Ck ≤ O(ε). On the other hand, let ∆′′ = ((C + ∆C )RBC (B + ∆B )⊤ )+ − (C ⊤ B)+ = ∆ − ∆′ . We will prove kB ⊤ ∆′′ Ck ≤ O(ε) and then the bound on kB ⊤ ∆Ck follows from triangle inequality. For B ⊤ ∆′′ C, we know it is equal to −1 (C + ∆C )+ C − Id B ⊤ [(B + ∆B )⊤ ]+ RAB −1 (C + ∆C )+ C − Idk ≤ O(ε) Claim C.9. kB ⊤ [(B + ∆B )⊤ ]+ RAB 33 −1 this follows Proof. We will show all three factors in the first term are O(ε) close to Id. For RAB + immediately from Lemma C.1. For (C + ∆C ) C, we know (C + ∆C )+ C − Id = −(C + ∆C )+ ∆C . −1 Therefore its spectral norm bound is bounded by k∆C kσmin (C + ∆C ) = O(ε) (where the bound on k∆C k comes from Lemma C.1). With the claim we have now proven kB ⊤ ∆′′ Ck ≤ O(ε), therefore kB ⊤ ∆Ck ≤ kB ⊤ (∆′ + ∆′′ )Ck ≤ kB ⊤ ∆′ Ck + kB ⊤ ∆′′ Ck ≤ O(ε). C.1 Spectrally Boundedness and Incoherence Here we will show under mild incoherence conditions (defined below), if an error matrix E is ε-spectrally bounded by F F ⊤ , then the partial matrices satisfy the requirement of Theorem 2.10. q Theorem C.10. If F is µ-incoherent for µ ≤ n/m log2 n, then when n ≥ Ω(m log2 m), with high probability over the random partition of F into A, B, C, we know σmin (A) ≥ σmin (F )/3 (same is true for B, C). As a corollary, if E is ε-spectrally bounded by F . Let a, b, c be the subsets corresponding to A, B, C, and let Ea,b be the submatrix of E whose rows are in set a and columns are in set b. 
Then Ea,b (also Eb,c , Ec,a ) is O(ε)-spectrally bounded by the corresponding asymmetric matrices AB ⊤ (BC ⊤ , CA⊤ ). Proof. Consider the singular value decomposition of F : F = U DV ⊤ . Here U is a n × m matrix whose columns are orthonormal, V is an m × m orthonormal matrix and D is a diagonal matrix whose smallest diagonal entry is σmin (F ). Consider the following way of partitioning the matrix: for each row of F , we put it into A, B or C with probability 1/3 independently. Now, let Xi = 1 if row i is in the matrix A, and 0 otherwise. Then Xi ’s are Bernoulli random variables with probability 1/2. Suppose S is the set of rows in A, let UA be U restricted to rows in A, then we have A = UA DV ⊤ . We will show with high probability σmin (A) ≥ 1/3. P The key observation here is the expectation of UA⊤ UA = ni=1 Xi Ui Ui⊤ , where Ui is the i-th row of U (represented as a column vector). Since Xi ’s are Bernoulli random variables, we know n n X 1 1X Ui Ui⊤ = Id. Xi Ui Ui⊤ ] = E[UA⊤ UA ] = E[ 3 3 i=1 i=1 Therefore we can hope to use matrix concentration to prove that UA⊤ UA is close to its expectation. Let Mi = Xi Ui Ui⊤ − 1/3Ui Ui⊤ . Clearly E[Mi ] = 0. By the Incoherence assumption, we know kUi k ≤ 1/ log n. Therefore we know kMi k ≤ O(1/ log n). Also, we can bound the variance n n n X X X ⊤ ⊤ 2 ⊤ Ui Ui⊤ k ≤ O(1/ log 2 n). Xi Ui Ui Ui Ui ]k ≤ max kUi k k Mi Mi ]k ≤ kE[ kE[ i=1 i=1 i=1 Here the last inequality is because Pn ⊤ ⊤ i=1 Xi Ui Ui Ui Ui 34  kUi k2 Ui Ui⊤ . Therefore by Matrix Bernstein’s inequality we know with high probability k When this happens we know kUA⊤ UA k ≥ σmin (E[UA⊤ UA ]) −k n X i=1 Pn i=1 Mi k ≤ 1/6. Mi k ≥ 1/6. p Hence we have σmin (UA ) ≥ 1/6 > 1/3, and σmin (A) ≥ σmin (UA )σmin (D) ≥ σmin (F )/3. Note that matrices B, C have exactly the same distribution as A so the bounds for B,C follows from union bound. For the corollary, if a matrix E is ε spectrally bounded, we can write it as F ∆1 F ⊤ + F ∆⊤ 2 + 2 (F ). This can be done by ∆2 F ⊤ + ∆4 , where k∆1 k ≤ ε, k∆2 k ≤ εσmin (F ) and k∆4 k ≤ εσmin considering different projections of E: let U be the span of columns of F , then F ∆1 F ⊤ term ⊤ corresponds to ProjU E ProjU ; F ∆⊤ 2 term corresponds to ProjU E ProjU ⊥ ; ∆2 F term corresponds to ProjU ⊥ E ProjU ; ∆4 term corresponds to ProjU ⊥ E ProjU ⊥ . The spectral bounds are necessary for E to be spectrally bounded. ⊤ Now for Ea,b , we can write it as A∆1 B ⊤ + A(∆2 )⊤ b + (∆2 )a B + (∆4 )a,b , where we also take the corresponding submatrices of ∆’s. Since the spectral norm of a submatrix can only be smaller, we know k∆1 k ≤ ε, k(∆2 )b k ≤ εσmin (F ) ≤ 3εσmin (B), k(∆2 )a k ≤ εσmin (F ) ≤ 3εσmin (A) and 2 (F ) ≤ 9εσ k(∆2 )a,b k ≤ εσmin min (A)σmin (B). Therefore by Definition 2.9 we know Ea,b is 9ε ⊤ spectrally bounded by AB . D Proof of Theorem 3.1 and Theorem 3.3 In this section, we provide the full proof of Theorem 3.1. We start with a simple technical Lemma. Lemma D.1. If Q is an ε-approximate whitening matrix for A, then kQk ≤ 1 ⊤ 1−ε kAA k 1 ⊤ 1−ε kAA k, σmin (Q) ≥ Proof. By the definition of approximate-whitening, we have 1 − ε ≤ σmin ((Q+ )1/2 A⊤ A(Q+ )1/2 ), σmax ((Q+ )1/2 A⊤ A(Q+ )1/2 ) ≤ 1 + ε which implies that 1 − ε ≤ σmin ((Q+ )1/2 AA⊤ (Q+ )1/2 ), σmax ((Q+ )1/2 AA⊤ (Q+ )1/2 ) ≤ 1 + ε by virtue of the fact that (Q+ )1/2 AA⊤ (Q+ )1/2 = semidefinite-order notation, we get that (Q+ )1/2 A⊤ A(Q+ )1/2 ⊤ . 
Rewriting in (1 − ε)Id  (Q+ )1/2 AA⊤ (Q+ )1/2  (1 + ε)Id Multiplying on the left and right by Q1/2 , we get (1 − ε)Q  AA⊤  (1 + ε)Q This directly implies 1 ⊤ 1+ε AA Q 1 ⊤ 1−ε AA which is equivalent to the statement of the lemma. Towards proving Theorem 3.1, we will first prove the following proposition, which shows that we recover the exp(−W ) matrix correctly: 35 Proposition D.2 (Recovery of exp(−W )). Under the random generative model defined in Section 1, if the number of samples satisfies N = poly(n, 1/p/, 1/ρ) √ the vectors W̃i , i ∈ [m] in Algorithm 1 are O(η np)-close to exp(−Wi ) where √ η = Õ ( mpρ) Proof. The proof will consist of checking the conditions for Algorithms 4 and 3 to work, so that we can apply Theorems 5.4 and 2.10. To get a handle on the PMI tensor, by Proposition A.2, for any equipartition Sa , Sb , Sc of [n], we can it as PMITSa ,Sb ,Sc = L X ρ X (−1)l+1 Fk,Sa ⊗ Fk,Sb ⊗ Fk,Sc + 1−ρ k∈[m] l=2 1 l  ρ 1−ρ l ! X (P̃l )k,Sa ⊗ (P̃l )k,Sb ⊗ (P̃l )k,Sc + EL k∈[m] 1 1 We can choose L = poly(log(n, , )) to ensure ρ p   √ kEL k = o p5/2 ρ7/3 mn2 (D.1) Having an explicit form for the tensor, we proceed to check the spectral boundedness condition for Algorithm 4. Let Sa , Sb , Sc be a random equipartition. Let RSa be the matrix that has as columns the vectors  l !1/3 ρ 1 (P̃l )j,Sa , for all l ∈ [2, L], j ∈ [m] and let A be the matrix that has as columns the l 1−ρ 1/3  ρ Fj,Sa , for all j ∈ [m]. Since L is polynomially bounded in n, by Proposition B.4 vectors 1−ρ we have that with high probability RSa is τ -spectrally bounded by A, for a τ = O(ρ2/3 log n). Analogous statements hold for Sb , Sc . Next, we verify the conditions for calculating approximate whitening matrices (Algorithm 4). Towards applying Theorem C.10, note that if R[n] is the matrix that has as columns the vectors  l !1/3 ρ 1 (P̃l )j , for all l ∈ [2, L], j ∈ [m], and D is the matrix that has as columns the l 1−ρ 1/3  ρ Fj , for all j ∈ [m], R[n] then is τ -spectrally bounded by D for τ = O(ρ2/3 log n). vectors 1−ρ Furthermore, the matrix F is O(1)-incoherent with high probability by Lemma B.1. Hence, we can apply Theorem C.10, the output of Algorithm 4 are matrices Qa , Qb , Qc which are τ -approximate whitening matrices for A, B, C respectively. Next, we will need bounds on min(σmin (Qa ), σmin (Qb ), σmin (Qc )), max(σmax (Qa ), σmax (Qb ), σmax (Qc )) to plug in the guarantee of Algorithm 4. 36 By Lemma D.1, we have σmax (Qa ) ≤ 1 1 kAA⊤ k . (1 + τ )kAA⊤ k, σmin (Qa ) ≥ σmin (AA⊤ ) & (1 − τ )σmin (AA⊤ ) 1−τ 1+τ However, for the random model, applying Lemma B.1, ⊤ σmin (AA ) ≥  ρ 1−ρ 2/3 np & ρ 2/3 ⊤ np, σmax (AA ) ≤  ρ 1−ρ 2/3 mnp2 . ρ2/3 mnp2 (D.2) Analogous statements hold for B and C. Finally, we bound the error due to empirical estimates. Since ρpm = o(1), Pr[si = 0 ∧ sj = 0 ∧ sk = 0] ≥ 1 − Pr[si = 1] − Pr[sj = 1] − Pr[sk = 1] ≥ 1 − 3pmρ = Ω(1) Hence, by Corollary E.3, with a number of samples as stated in the theorem, √ ˆ Sa ,S ,Sc − PMITSa ,S ,Sc k{1,2},{3} . p5/2 ρ5/2 mn2 kPMIT b b (D.3) as well. With that, invoking Theorem 5.4 (with kEk{1,2},{3} taking into account both the EL term above, and the above error due to sampling), the output of Algorithm 3 will produce vectors vi , i ∈ [m], 1/3  ρ (1 − exp(−Wi )), for s.t. vi is O(η ′ )-close to 1−ρ   ˆ Sa ,S ,Sc − PMITSa ,S ,Sc k{1,2},{3} ) η ′ . max(kQa k , kQb k , kQc k)1/2 · τ 3/2 + σ −3/2 (kEL k{1,2},{3} + kPMIT b b where σ = min(σmin (Qa ), σmin (Qb ), σmin (Qc )). 
Plugging in the estimates from (D.1), (D.2), (D.3) as well as τ = O(ρ2/3 log n), we get: q √ 1/2 3/2 max(kQa k , kQb k , kQc k) τ . mnp2 ρ2/3 (ρ2/3 log n)3/2 = ρ4/3 mnp log3/2 n √ 1 3/2 ) kEL k{1,2},{3} . ρ4/3 mnp log3/2 n σ −3/2 kEL k{1,2},{3} . ( ρnp √ −3/2 ˆ Sa ,S ,Sc − PMITSa ,S ,Sc k{1,2},{3} . ρ4/3 mnp log3/2 n σ kPMIT b b 1/3  ρ (1 − exp(Wi )), for all i ∈ [m]. which implies the vectors (âi , b̂i , ĉi ) are O(η ′ )-close to 1−ρ  1/3 However, this directly implies that 1−ρ (âi , b̂i , ĉi ) are O(η ′ /ρ1/3 )-close to 1 − exp(Wi ), ρ i ∈ [m], which in turn implies (ãi , b̃i , c̃i ) are O(η ′ /ρ1/3 ) close to exp(Wi ). This implies the statement of the Lemma. Given that, we prove the main theorem. The main issue will be to ensure that taking log of the values √ Proof of Theorem 3.1. By Proposition D.2, the vectors Yi , i ∈ [m] in Algorithm 1 are O(η np) √ close to exp(−Wi ) with η = Õ mpρ . Let (Yi′ )j = (Yi )j if (Yi )j > exp(−νu ) and otherwise (Yi )′j = exp(−νu ). Then we have that kYi′ − Wi k ≤ kYi − Wi k. By the Lipschitzness of log(·) in the region [νi , ∞] 37 we have that It follows that ci )j − (Wi )j | = | log(Y ′ )j − log(Wi )j | . |(Yi )′ − (Wi )j | |(W i j ci − Wi k = klog Yi′ − log Wi k . kYi′ − Wi k kW √ Therefore recalling kYi′ − Wi k ≤ kYi − Wi k ≤ O(η np) we complete the proof. Proof of Theorem 3.3. The proof will follow the same outline as the proof of Theorem 3.1. The difference is that since we only have a guarantee on the spectral boundedness of the second and third-order term, we will need to bound the higher-order terms in a different manner. Given that we have no information on them in this scenario, we will simply bound them in the obvious manner. We proceed to formalize this. The sample complexity is polynomial for the same reasons as in the proof of Theorem 3.1, so we will not worry about it here. We only need to check the conditions for Algorithms 4 and 3 to work, so that we can apply Theorems 5.4 and 2.10. Towards that, first we claim that we can write the PMI tensor for any equipartition Sa , Sb , Sc of [n] as PMITSa ,Sb ,Sc =  2 ! X 1 ρ ρ X Fk,Sa ⊗ Fk,Sb ⊗ Fk,Sc − Gk,Sa ⊗ Gk,Sb ⊗ Gk,Sc 1−ρ 2 1−ρ k∈[m] k∈[m]  3 ! X ρ 1 Hk,Sa ⊗ Hk,Sb ⊗ Hk,Sc + E + 3 1−ρ (D.4) k∈[m] where kEk{1,2},{3} ≤ ρ4 m(np)3/2 . Towards achieving this, first we claim that Proposition 5.6 implies that for any subsets Sa , Sb , Sc , m X k=1 (1 − exp(−lWk ))Sa ⊗ (1 − exp(−lWk ))Sb ⊗ (1 − exp(−lWk ))Sc {1,2},{3} ≤ m(np)3/2 Indeed, if we put γk = (1 − exp(−lWk ))Sa , δk = (1 − exp(−lWk ))Sb , θk = (1 − exp(−lWk ))Sc , P √ then we have k k γk γk⊤ k ≤ mnp, and similarly for δk . Since maxk kθk k ≤ (np)1/2 , the claim immediately follows. Hence, (D.4) follows.   l 1/3 ρ 1 Next, let RSa be the matrix that has as columns the vectors l 1−ρ (P̃l )j,Sa , l ∈ 1/3  ρ (P̃1 )j,Sa for j ∈ [m] for [2, L], j ∈ [m] and A is the matrix that has as columns the vectors 1−ρ some L = O(poly(n)), similarly as in the proof of Theorem 3.1. We claim that RSa RS⊤a is τ spectrally bounded by ρF F ⊤ .   l 1/3 √ ρ 1 Indeed, for any l > 2, we have k l 1−ρ P̃l k . ρl/3 kP̃l k . ρl/3 mnp Hence, RSa RS⊤a  ρ2/3 GG⊤ + ρ4/3 HH ⊤ + ρ2 LL⊤ +  3ρ 2/3 τ (F F ⊤ ⊤ + σmin (F F )) + ρ X 8/3 38 ρ2l/3 mnp2 l≥4 mnp2 - ρ2/3 τ (F F ⊤ + σmin (F F ⊤ )) (D.5) where the first inequality holds since HH ⊤ , GG⊤ , LL⊤ are τ -spectrally bounded bounded by F and the second since σmin (F F ⊤ ) & np and τ ≥ 1. Let τ ′ = ρ2/3 τ . 
Since we are assuming the matrix F is O(1)-incoherent, we can apply Theorem C.10, and claim the output of Algorithm 4 are matrices Qa , Qb , Qc which are τ -approximate whitening matrices for A, B, C respectively. By Lemma D.1, we have again 1 1 . (1 + τ ′ )kAA⊤ k, σmin (Qa ) ≥ & (1 − τ ′ )σmin (AA⊤ ) σmax (Qa ) ≤ 1 − τ′ 1 + τ′ Then, applying Theorem 5.4, we get that we recover vectors (âi , b̂i , ĉi ) are O(η ′ )-close to 1/3 ρ (1 − exp(Wi )), for all i ∈ [m]. for 1−ρ   3/2 η ′ . max(kQa k , kQb k , kQc k)1/2 · τ ′ + σ −3/2 kEk{1,2},{3} √ Recall that τ ′ = ρ2/3 τ , and kQa k ≤ ρ2/3 σmax (F ) . ρ2/3 mnp and kEk{1,2},{3} ≤ ρ4 m(np)3/2 and σ & ρ2/3 np, we obtain that ! 4 m(np)3/2 √ √ ρ . ρ4/3 mnpτ 3/2 η ′ . ρ1/3 mnp (τ ρ2/3 )3/2 + 2/3 3/2 (ρ np)  where the last inequality holds since ρ3 m = o(1) = o(τ ). 1/3 (â , b̂ , ĉ ) are O(η ′ /ρ1/3 ) = O(η)-close to 1 − However, this directly implies that ( 1−ρ i i i ρ ) exp(−Wi ), i ∈ [m], which in turn implies (ãi , b̃i , c̃i ) are O(η) close to exp(−Wi ). Argument for recovering Wi from exp(−Wi ) is then exactly the same as the one in Theorem 3.1. E Sample complexity and bias of the PMI estimator Finally, we consider the issue of sample complexity. The estimator we will use for the PMI matrix will simply be the plug-in estimator, namely: ˆ i,j = log P̂r[si = 0 ∧ sj = 0] PMI ˆ j = 0] P̂r[si = 0]Pr[s (E.1) Notice that this estimator is biased, but as the number of samples grows, the bias tends to zero. Formally, we can show: Lemma E.1. If the number of samples N satisfies 1 1 log m N≥ mini6=j {Pr[si = 0 ∧ sj = 0} δ2 ˆ i,j − PMIi,j | ≤ δ, ∀i 6= j. with high probability |PMI Proof. Denoting ∆i,j = P̂r[si = 0 ∧ sj = 0] − Pr[si = 0 ∧ sj = 0] and ∆i = P̂r[si = 0] − Pr[si = 0], we get that Pr[si = 0 ∧ sj = 0] + ∆i,j ˆ i,j = log P̂r[si = 0 ∧ sj = 0] = log PMI ˆ (Pr[s P̂r[si = 0]Pr[sj = 0] i = 0] + ∆i )(Pr[sj = 0] + ∆j )       ∆i,j ∆i ∆j = PMIi,j + log 1 + − log 1 + − log 1 + Pr[si = 0 ∧ sj = 0] Pr[si = 0] Pr[sj = 0] 39 Furthermore, we have that 2 3x 2x 2+x ≤ log(1 + x) ≤ √x , x+1 for x ≥ 0, which implies that when x ≤ 1, ≤ log(1 + x) ≤ x. From this it follows that if   ∆i ∆i,j , max ≤δ max max i Pr[si = 0] i,j Pr[si = 0 ∧ sj = 0] we have δ ˆ i,j ≤ PMIi,j + δ ≤ PMI 3 Note that it suffices to show that if N > 1−4pmax1 mρmax δ12 log m, we have   ∆i ∆i > (1 + δ) ∨ < (1 − δ) ≤ exp(− log2 m) Pr Pr[si = 0] Pr[si = 0] PMIi,j − and Pr   ∆i,j ∆i,j > (1 + δ) ∨ < (1 − δ) ≤ exp(− log2 m) Pr[si = 0 ∧ sj = 0] Pr[si = 0 ∧ sj = 0] since this implies (E.2) (E.3)   ∆i,j ∆i max max , max ≤δ i,j Pr[si = 0 ∧ sj = 0] i Pr[si = 0] with high probability by a simple union bound. Both (E.2) and (E.3) will follow by a Chernoff bound. Indeed, consider (E.2) first. We have by Chernoff s ! # " log N Pr[si = 0] ≤ exp(− log2 N ) Pr ∆i > 1 + N Pr[si = 0] Hence, if N > 2 1 1 Pr[si =0] δ2 log m, we get that 1 − δ ≤ ∆i Pr[si =0] ≤ 1 + δ with probability at least 1 − exp(log m). The proof of (E.3) is analogous – the only difference being that the requirement is that N > 1 1 Pr[si =0] δ2 log m which gives the statement of the lemma. Virtually the same proof as above shows that: Lemma E.2. If the number of samples N satisfies N≥ 1 1 log m mini6=j6=k {Pr[si = 0 ∧ sj = 0 ∧ sk = 0} δ2 ˆ i,j,k − PMITi,j,k | ≤ δ, ∀i 6= j 6= k. with high probability |PMIT As an immediate corollary, we get: Corollary E.3. 
If the number of samples N satisfies N≥ 1 1 log m mini6=j6=k {Pr[si = 0 ∧ sj = 0 ∧ sk = 0} δ2 1 1 log m mini6=j6=k {Pr[si = 0 ∧ sj = 0 ∧ sk = 0} δ2 with high probability for any equipartition Sa , Sb , Sc N≥ ˆ Sa ,S ,Sc − PMITSa ,S ,Sc k{1,2},{3} . n3 δ kPMIT b b 40 F Matrix Perturbation Toolbox In this section we discuss standard matrix perturbation inequalities. Many results in this section b = A + E, the perturbation in individual singular can be found in Stewart and Sun [Ste77]. Given A values can be bounded by Weyl’s theorem: b = A+E, we know σk (A)−kEk ≤ σk (A) b ≤ σk (A)+kEk. Theorem F.1 (Weyl’s theorem). Given A For singular vectors, the perturbation is bounded by Wedin’s Theorem: Lemma F.2 (Wedin’s theorem; Theorem 4.1, p.260 in [Ste90]). Given matrices A, E ∈ Rm×n with m ≥ n. Let A have the singular value decomposition   Σ1 0 A = [U1 , U2 , U3 ]  0 Σ2  [V1 , V2 ]⊤ . 0 0 b = A + E, with analogous singular value decomposition. Let Φ be the matrix of canonical Let A b1 , and Θ be the matrix of canonical angles angles between the column span of U1 and that of U between the column span of V1 and that of Vb1 . Suppose that there exists a δ such that min |[Σ1 ]i,i − [Σ2 ]j,j | > δ, i,j and min |[Σ1 ]i,i | > δ, i,i then k sin(Φ)k2 + k sin(Θ)k2 ≤ 2 kEk2 . δ2 Perturbation bound for pseudo-inverse When we have a lowerbound on σmin (A), it is easy to get bounds for the perturbation of pseudoinverse. Theorem F.3 (Theorem 3.4 in [Ste77]). Consider the perturbation of a matrix A ∈ Rm×n : B = A + E. Assume that rank(A) = rank(B) = n, then √ kB † − A† k ≤ 2kA† kkB † kkEk. Note that this theorem is not strong enough when the perturbation is only known to be τ spectrally bounded in our definition. 41
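The bounds collected in this toolbox are straightforward to exercise numerically. The following numpy sketch checks Weyl's theorem and the pseudo-inverse bound of Theorem F.3 on a random, well-conditioned instance; the sizes and noise level are arbitrary illustrative choices:

import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 10                        # A is m x n with m >= n and full column rank
A = rng.standard_normal((m, n))
E = 1e-3 * rng.standard_normal((m, n))
B = A + E

# Weyl's theorem: |sigma_k(B) - sigma_k(A)| <= ||E|| for every k
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
print(np.max(np.abs(sB - sA)), "<=", np.linalg.norm(E, 2))

# Theorem F.3 (rank(A) = rank(B) = n): ||B^+ - A^+|| <= sqrt(2) ||A^+|| ||B^+|| ||E||
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
lhs = np.linalg.norm(Bp - Ap, 2)
rhs = np.sqrt(2) * np.linalg.norm(Ap, 2) * np.linalg.norm(Bp, 2) * np.linalg.norm(E, 2)
print(lhs, "<=", rhs)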
What Is The Collision Probability And How To Compute It arXiv:1711.07060v2 [] 14 Jan 2018 Richard Altendorfer and Christoph Wilkmann Abstract—We revisit the computation of probability of collision in the context of automotive collision avoidance (the estimation of a potential collision is also referred to as conflict detection in other contexts). After reviewing existing approaches to the definition and computation of collision probability we argue that the fundamental quantity that is required to answer a question like ”What is the probability of collision within the next three seconds?” is a collision probability rate. We derive a general expression for the distribution of the collision probability rate and demonstrate that it exactly reproduces distributions obtained by large-scale Monte-Carlo simulations. This expression is valid for arbitrary prediction models including process noise. We derive an approximation for the distribution of the collision probability rate that can be computed on an embedded platform. In order to efficiently sample this probability rate distribution for determination of its characteristic shape an adaptive method to obtain the sampling points is proposed. The probability of collision is then obtained by one-dimensional numerical integration over the time period of interest. We also argue that temporal collision measures such as time-to-collision should not be calculated as separate or even prerequisite quantities but that they are properties of the distribution of the collision probability rate. we derive a general expression valid for arbitrary prediction models including process noise. Inclusion of process noise is crucial for collision avoidance systems since it allows to encode the uncertainty in the relative motion of the host and the colliding vehicle. This uncertainty is particularly relevant for predictions over several seconds where it is unknown whether the colliding vehicle keeps its motion, accelerates or slows down, or whether the host vehicle driver perceives the risk and slows down, for example. The basis of our derivations are the distributions pt (x, y, ẋ, ẏ, ẍ, ÿ), t ∈ ∆T of the predicted relative state ξ − (t) of the colliding vehicle. The predicted state ξ − (t) is a nonstationary stochastic process. In the remainder of this paper the time dependence of ξ − (t) and its elements will be suppressed, however the temporal dependence of probability distributions will be indicated by p → pt where appropriate. In the next section Monte-Carlo simulations of two collision scenarios are performed and it is shown that the result is naturally represented by a collision probability rate. I. INTRODUCTION II. T HE COLLISION PROBABILITY GROUND TRUTH : LARGE - SCALE M ONTE -C ARLO SIMULATIONS The implementation of a collision mitigation or collision avoidance system requires in general the computation of a collision probability in order to assess the criticality of the current traffic situation as well as its evolution in the shortterm future. Sometimes, in a separate calculation, see e. g. [1],[2] an estimate of the time-to-collision (TTC) is computed. There are two fundamentally different approaches to computing a collision probability that are known to the authors: 1) probability of the spatial overlap of the host vehicle with the colliding vehicle’s probability distribution, see e. g. [3], [4], and 2) probability of penetrating a boundary around the host vehicle, see e. g. [5],[6]. 
There is currently no satisfying way to compute the collision probability over a time period: there is a heuristic proposal to pick the maximal collision probability over that period as the collision probability for that time period [5],[1], and there is a rather involved calculation valid only for piecewise linear trajectories penetrating a line interval to directly compute the collision probability over a time period while ignoring process noise [6]. In the following we will derive an expression of the probability of penetrating a boundary around the host vehicle in a time period ∆T = [t1 , t2 ]. This will be the result of the temporal integration of the probability rate for which R. Altendorfer is with Driver Assistance Systems, ZF TRW, Germany [email protected] C. Wilkmann is with RWTH [email protected] Aachen, Germany In order to obtain ground truth data for the collision probability Monte-Carlo simulations are performed. The target vehicle on a possibly colliding path with the host vehicle is modeled by the state vector ξ = (x y ẋ ẏ ẍ ÿ)> and the dynamical system as specified in appendix B. Accelerations are deliberately included in the state in order to demonstrate that accelerated motions can be handled. The target vehicle is chosen to be detected by a radar sensor mounted at the middle of the front bumper of the host vehicle with standard radar measurements also specified in app. B. The starting point for an individual simulation is a sample point in state space ξi− where the target vehicle is some distance away from the host vehicle - either directly in front or coming from the right side, see fig. 1. This sample point is drawn from a multivariate Gaussian distribution characterized by its mean vector and covariance matrix. The covariance matrix is taken from steady state1 at this mean vector using the discrete algebraic Riccati equation (25) as detailed in app. B. An instance ξi− of an initial state of the target vehicle is − drawn as a sample of N (ξ − ; µ− ξ , P∞ ). This state is predicted using the stochastic state equation (22) until it crosses the host vehicle boundary or a certain time limit is exceeded. The time until the crossing is recorded and a new simulation with a new sample of initial conditions is started. Examples of colliding 1 Strictly speaking there is no steady state at those points since the system is non-linear and the relative speed is not zero. Nevertheless the solution of the Riccati equation is still representative if the filter settles within a smaller time period than the time period in which the state changes significantly. trajectories starting from an initial position in front of the host vehicle are depicted in fig. 1. We have performed simulations of Ntraj = 1 · 107 trajectories for the two starting points. The result is represented by a histogram of the number of collisions that occur within a histogram bin, i. e. time interval, with respect to time. Hence simulating colliding trajectories naturally leads to a collision probability rate. An example is given in fig. 2 where the bins are normalized by the total number of trajectories Ntraj and the chosen bin width of dt = 0.05s to obtain a collision probability rate. In addition, the collision probability rate integrated by simple midpoint quadrature from 0 to time t is shown. In this example the probability of collision with the target vehicle reaches a threshold of 40%, for example, within the first 4.88s. 
The asymptotic value of the collision probability as t → ∞ indicates the overall probability of collision over all times. The main contribution of this paper will be to derive formulae that reproduce this collision probability and collision probability rate. In the following two sections we will review existing approaches to computing a collision probability. III. C OLLISION PROBABILITY FROM 2D SPATIAL OVERLAP This is the probability of the spatial overlap between the host vehicle and the colliding vehicle as proposed in [3], [4]. In [3] the variables of the relevant probability distribution are either defined by the relative two-dimensional position and relative orientation (x, y, ψ), or by the distribution of the difference of independent probability distributions of global two-dimensional position and orientation as in [4]. It is not explained how higher order derivatives necessary for prediction such as ẋ, ẏ, ... have been dealt with (e. g. by marginalization). Then in [3] the collision probability is obtained as the integral over the pdf of relative position and orientation over a collision volume D ZZZ PC (t) = p (x, y, ψ) dxdydψ (1) x,y,ψ∈D which in turn is approximated as the convolution of independent probability distributions of global two-dimensional position of both host and colliding vehicle - orientation is ignored. Note that even this two-dimensional integral, i. e. the cumulative distribution function of a bivariate Gaussian, cannot be solved in closed form; however, numerical approximation schemes exist [7]. By further ignoring the x−y-covariance the multivariate Gaussian decouples so that integration can be factored into one-dimensional Gaussian distributions [3]. In [4], the collision probability is computed as the integral of the product of the two global distributions. A problem of deriving a collision probability from 2D spatial overlap is that this approach directly yields a collision probability for a specific time, see fig. 3. Hence it does not allow to answer the question ”What is the probability of collision within the next three seconds?” because integration of the collision probability over time does not yield a collision probability over a certain time period as already pointed out in [6]. In particular, time is not a random variable that can be marginalized over and the integral over a time interval ∆T : R P (t)dt as in [8] has dimension of time and does not ∆T C constitute a probability. A rather heuristic proposal to solve this problem has been to pick the maximal collision probability over a time period as the collision probability for that period [5],[1]. Another issue is that the collision probability is determined by the presence of the colliding object anywhere inside the entire area of the host vehicle. It could, for example be in the center of the host vehicle, which would be a catastrophic event and much too late for collision avoidance. What we are actually interested in is the probability of the colliding object touching and/or penetrating the boundary of the host vehicle. This requires a different approach than integration over state space as in eq. (1). Some existing approaches that consider a boundary instead of a state space volume for the computation of a collision probability are reviewed in the next section. IV. C OLLISION PROBABILITY AT BOUNDARY A probabilistic approach to computing the probability of penetrating a boundary - instead of the probability of a spatial overlap - has been proposed in [6]. 
IV. COLLISION PROBABILITY AT BOUNDARY

A probabilistic approach to computing the probability of penetrating a boundary - instead of the probability of a spatial overlap - has been proposed in [6]. Their method is based on the probability density of a so-called time-to-go, which is the time to cross a straight, axis-aligned boundary assuming a constant velocity model. The derived collision probability refers to a time period and not just a time instant. It is only applicable to straight paths or combinations of piecewise straight paths and does not take into account more complex geometries such as a rectangle. Another limitation is that process noise is not considered. This method is an extension of the approach discussed in [5] in the sense that the uncertainty of the initial condition is included.

A different, semi-probabilistic approach is taken in [2] where the time-to-collision of a 2D constant velocity model with respect to the host vehicle's front boundary at x = 0 is computed. This TTC is then inserted into the prediction equation to arrive at

(x + tẋ, y + tẏ, ẋ, ẏ)ᵀ → (0, y − (x/ẋ)ẏ, ẋ, ẏ)ᵀ   (2)

Then the second component of this vector is singled out and interpreted as a probabilistic expression. Its distribution as a function of the initial condition is determined by Monte-Carlo simulation and integrated over the host vehicle's front boundary to obtain a collision probability at the TTC. As in the previous approaches, more complex geometries as well as process noise are not considered.

The approaches discussed above all rely on a time-to-go or TTC as a prerequisite quantity - either probabilistic or non-probabilistic. As we will show in the next section, such a temporal collision measure is not necessary for the computation of a collision probability. Instead, we show that the fundamental quantity to compute the collision probability is the collision probability rate.

Fig. 1. Samples of simulated colliding trajectories for vehicles initially coming (a) from the front, (µx, µy) = (10, 0) m, and (b) from the front right, (µx, µy) = (10, 10) m. The number of drawn trajectories colliding with the edges of the host vehicle indicates the ratio of collisions with the respective edge compared to the total number of collisions. (Axes: x [m] versus y [m]; shown are the shape of the initial x-y covariance and sample collision trajectories.)

Fig. 2. Collision probability rate as a function of time for (µx, µy) = (10, 0) m based upon N_traj = 1·10⁷ trajectories. Also shown is the collision probability obtained by integrating over time.

V. COLLISION PROBABILITY RATE AT BOUNDARY

We have seen that simulating colliding trajectories naturally gives us a probability rate and that a collision probability rate allows us to perform temporal integration to arrive at a collision probability for an extended period of time. The expression for the collision probability rate will be derived on the basis of a theorem on boundary crossings of stochastic vector processes. For the sake of lucidity of the arguments we restrict ourselves to one of the four straight boundaries of the host vehicle, see fig. 4; the extension to the other boundaries is straightforward.
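A minimal sketch of this temporal integration step (our illustration; times and rates stand for any sampling of the collision probability rate derived below):

import numpy as np

def collision_probability(times, rates, t1, t2):
    """Integrate a sampled collision probability rate over Delta_T = [t1, t2]
    (simple trapezoidal quadrature of dP_C^+/dt)."""
    times = np.asarray(times)
    rates = np.asarray(rates)
    mask = (times >= t1) & (times <= t2)
    return np.trapz(rates[mask], times[mask])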
A. Derivation of collision probability rate

We start with the prediction of the pdf of the state vector (see eq. (22) in app. B) of a colliding object from an initial condition at t = 0 to a future time t, where process noise (parametrized by its covariance matrix Q) is explicitly incorporated:

prediction: p_0(x, y, ẋ, ẏ, ẍ, ÿ) ↦^{t,Q} p_t(x, y, ẋ, ẏ, ẍ, ÿ)   (3)

Fig. 3. Example of collision probability over time. (Axes: P(t) over t [s]; shown are P(t) from 2D overlap and TTC_x.)

In order to cast the following expressions into a more readable format we define a probability distribution with fewer variables by marginalization over the accelerations (see app. A for the marginalization of Gaussian densities):

p_t(x, y, ẋ, ẏ) := ∫_{ẍ∈ℝ} ∫_{ÿ∈ℝ} p_t(x, y, ẋ, ẏ, ẍ, ÿ) dẍ dÿ.   (4)

What we are looking for is an expression for

dP_C^+(Γ_front, t)/dt   (5)

i.e. the collision probability rate dP_C^+/dt with dimension [s⁻¹] at time t for the front boundary Γ_front. The superscript + is used to denote that this probability rate refers to boundary crossings from outside to inside. We start with the probability of the colliding object being inside an infinitesimally thin strip at the boundary Γ_front (see fig. 4):

dP_C^+(Γ_front, t) = ∫_{y∈I_y} ∫_{ẋ≤0} ∫_{ẏ∈ℝ} p_t(x_0, y, ẋ, ẏ) dx dy dẋ dẏ

Here, since we are only interested in colliding trajectories, i.e. trajectories that cross the boundary from outside to inside, we do not fully marginalize over ẋ but restrict the x-velocity to negative values at the boundary. A collision probability rate can now be obtained by dividing the unintegrated differential dx by dt; in that way the "flow" of the target vehicle through the host vehicle boundary at x_0 with velocity ẋ ≤ 0 is described:

dP_C^+(Γ_front, t)/dt = − ∫_{y∈I_y} ∫_{ẋ≤0} ∫_{ẏ∈ℝ} p_t(x_0, y, ẋ, ẏ) ẋ dy dẋ dẏ   (6)

Here, since the velocity is restricted to negative values, a minus sign is required to obtain a positive rate. This intuitive derivation can be corroborated as well as generalized in a mathematically rigorous way by invoking a result on crossings of a surface element by a stochastic vector process, described in the following box.

Fig. 4. Horizontal view of the host vehicle rectangle with local Cartesian coordinate system and coordinate origin at the middle of the front boundary, characterized by x = 0 and y ∈ [y_L, y_R] = I_y. (The figure shows the front, left, right and rear boundaries and an infinitesimally thin strip of width dx at x_0.)

Probability rate and probability for boundary crossings of stochastic vector processes

Here two expressions are given for the probability rate and the probability for crossings of a surface by a stochastic vector process; they were stated in [9] and generalized in [10]. Let ζ(t) be a continuously differentiable n-dimensional vector stochastic process with values x ∈ ℝⁿ. The probability densities p_t(x) and p_t(ẋ, x) exist, where ẋ ∈ ℝⁿ are the values of ζ̇(t). Let the region S ∈ ℝⁿ be bounded by the smooth surface ∂S defined by the smooth function g as ∂S = {x : g(x) = 0} and let Γ ⊆ ∂S be a subset of that surface. Let n_Γ(x) be the surface normal at x directed towards the interior of the region.
Then the entry rate dP^+/dt and the exit rate dP^−/dt through Γ at time t are given by

dP^±/dt (Γ, t) = ∫_{x∈Γ} E{⟨n_Γ(x), ζ̇(t)⟩^± | ζ(t) = x} p_t(x) ds(x)   (7)

where ⟨·, ·⟩ is the scalar product, ds(x) is an infinitesimal surface element of Γ at x, (·)^+ := max(·, 0) and (·)^− := −min(·, 0). The probability of entries and exits through the surface subset Γ within a time interval ∆T = [t1, t2] is given by

P^±(Γ, t1, t2) = ∫_{t1}^{t2} dP^±/dt (Γ, t) dt   (8)

These results hold for general non-stationary processes.

In order to apply eq. (7) to the front boundary Γ_front as in fig. 4 we need to perform the following identifications (unlike in the box above we now no longer distinguish between a stochastic process and its values):

ζ(t) = (x, y)ᵀ,  Γ_front = {(x, y) : x − x_0 = 0 ∧ y ∈ I_y},  g(x) = x − x_0,  n_{Γ_front}(x) = (−1, 0)ᵀ,  ds(x) = dy   (9)

Hence we obtain for the intermediate expectation operator

E{⟨n_{Γ_front}(x), ζ̇(t)⟩^+ | ζ(t) = x} = − ∫_{ẋ≤0} ∫_{ẏ∈ℝ} ẋ p_t(ẋ, ẏ | x, y) dẋ dẏ   (10)

and the collision entry rate dP_C^+/dt as in eq. (7) becomes

dP_C^+/dt (Γ_front, t) = − ∫_{y∈I_y} ( ∫_{ẋ≤0} ∫_{ẏ∈ℝ} ẋ p_t(ẋ, ẏ | x_0, y) dẋ dẏ ) p_t(x_0, y) dy
                       = − ∫_{y∈I_y} ∫_{ẋ≤0} ∫_{ẏ∈ℝ} ẋ p_t(x_0, y, ẋ, ẏ) dy dẋ dẏ   (11)

which is identical to eq. (6). Because of the simple projection of the velocity onto the surface normal for the front boundary (⟨n_{Γ_front}(x), ζ̇(t)⟩^+ = −ẋ for ẋ ≤ 0), this expression could be further simplified by marginalization over ẏ.

The computation above applies to the front boundary of the host vehicle. In order to cover all four boundaries of the host vehicle, the collision probabilities of the four boundaries can be added since the crossing events through the different boundaries are mutually exclusive. Hence the total collision probability rate is given by

dP_C^+/dt (Γ_host vehicle, t) = dP_C^+/dt (Γ_front, t) + dP_C^+/dt (Γ_right, t) + dP_C^+/dt (Γ_left, t) + dP_C^+/dt (Γ_rear, t)   (12)

The collision probability over a certain time period can then be obtained by one-dimensional numerical integration over time as in eq. (8).

The derivation above relied on a couple of assumptions. Here we comment on whether these assumptions pose any restrictions on the application of the derived collision probability rate to more general scenarios.

a) Rectangular boundary: The boundary of the host vehicle is modeled as a rectangle; however, by the theorem stated in the box above the formula can be applied to any subsets of smooth surfaces, including higher dimensional ones for three-dimensional objects, for example.

b) Point distribution: The relative state of the colliding object is modeled by a point distribution. In many ADAS applications the object state is indeed modeled by a single reference point and possibly additional attributes such as width and length. This is partly due to the fact that the estimation of extended objects is still an active area of research; for an overview see [11]. On the other hand, our approach can be executed in parallel for distributions of points representative of the colliding vehicle's geometry [12] or for individual point distributions of a Gaussian mixture model, for example.

c) State vector: The state vector ξ of the colliding object needs to contain the 2D relative position (x y)ᵀ and the 2D relative velocity (ẋ ẏ)ᵀ. In many ADAS applications the target vehicle dynamics is modeled directly in relative coordinates. For state vectors that do not contain the 2D relative velocity but other quantities such as the velocity over ground (see e.g. [13]), a probabilistic transformation to relative velocities must be performed first.

B. Implementation

For further computations - especially in the Gaussian case - it will be convenient to marginalize over ẏ and rewrite eq. (11) in terms of a conditional probability:

dP_C^+/dt (Γ_front, t) = −p_t(x_0) ∫_{ẋ≤0} ∫_{y∈I_y} ẋ p_t(ẋ, y | x_0) dẋ dy   (13)

For general distribution functions the integral in eq. (13) cannot be computed in closed form and numerical integration methods must be used. Even in the bivariate Gaussian case there is no explicit solution known to the authors. However, by a Taylor-expansion with respect to the off-diagonal element of the inverse covariance matrix of p(y, ẋ | x_0), as detailed in app. C, the integral can be factorized into one-dimensional Gaussians and solved in terms of the standard normal one-dimensional cumulative distribution function Φ. To zeroth order the integration yields:

dP_C^+/dt (Γ_front, t) = −N(x_0; µ_x, σ_x) · [ µ_{ẋ|x_0} Φ(−µ_{ẋ|x_0}/σ̃_{ẋ|x_0}) − σ̃²_{ẋ|x_0} N(0; µ_{ẋ|x_0}, σ̃_{ẋ|x_0}) ] · [ Φ((y_R − µ_{y|x_0})/σ̃_{y|x_0}) − Φ((y_L − µ_{y|x_0})/σ̃_{y|x_0}) ] + O(Σ^{-1}_{12})   (14)

Here, if Σ ∈ ℝ^{2×2} is the covariance matrix of p(ẋ, y | x_0), then σ̃_{ẋ|x_0} = √(|Σ|/Σ_{yy}) and σ̃_{y|x_0} = √(|Σ|/Σ_{ẋẋ}); see app. C, where the integration has also been carried out to first order in Σ^{-1}_{12}. Expression (14) can be computed on an embedded platform using the complementary error function available in the C math library. (Φ is related to the error function erf and the complementary error function erfc by Φ(x) = ½ erfc(−x/√2) = ½ − ½ erf(−x/√2).)
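A compact sketch of the zeroth-order approximation (14) (our illustration; the marginal density N(x_0; µ_x, σ_x) and the conditional moments µ_{ẋ|x_0}, σ̃_{ẋ|x_0}, µ_{y|x_0}, σ̃_{y|x_0} are assumed to have been obtained from the predicted Gaussian, e.g. via the conditioning formulas of app. A):

from math import erf, exp, pi, sqrt

def std_normal_cdf(z):
    # Phi(z) = 0.5 * erfc(-z / sqrt(2)) = 0.5 + 0.5 * erf(z / sqrt(2))
    return 0.5 + 0.5 * erf(z / sqrt(2.0))

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def collision_rate_front_zeroth_order(p_x0, mu_xdot, sig_xdot, mu_y, sig_y, y_left, y_right):
    """Zeroth-order factorized approximation of eq. (14) for the front boundary.
    p_x0              : N(x0; mu_x, sigma_x), marginal density of the x-position at the boundary
    mu_xdot, sig_xdot : conditional mean / std of xdot given x = x0
    mu_y, sig_y       : conditional mean / std of y given x = x0
    """
    # integral of xdot over xdot <= 0: mu * Phi(-mu/sigma) - sigma^2 * N(0; mu, sigma)
    vel_term = mu_xdot * std_normal_cdf(-mu_xdot / sig_xdot) \
               - sig_xdot ** 2 * normal_pdf(0.0, mu_xdot, sig_xdot)
    # probability mass of y inside the front-boundary interval [y_left, y_right]
    lat_term = std_normal_cdf((y_right - mu_y) / sig_y) - std_normal_cdf((y_left - mu_y) / sig_y)
    return -p_x0 * vel_term * lat_term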
In the next section an extensive numerical study using the above formulae and Monte-Carlo simulations is presented.

VI. NUMERICAL STUDY

A numerical study has been carried out to address the following questions:
• Is the expression for calculating the collision probability rate from eq. (11) consistent with the results from large scale Monte-Carlo simulations?
• How does the approximation (14) perform in comparison with the numerical integration of the derived expression for the collision probability rate?
• Can the computational effort be reduced by increasing ∆t while still accurately calculating the collision probability rate?

A. Is the expression of collision probability rate corroborated by Monte-Carlo simulation?

In order to address the first question, large scale Monte-Carlo simulations as described in sec. II have been performed. Collision probability rates were calculated based on 1·10⁷ sample trajectories for each of the two initial conditions N_i(ξ⁻; µ_{ξ,i}⁻, P_∞⁻), where µ_{ξ,i}⁻ is shown in table I and P_∞⁻ is calculated using eq. (25) with the matrices defined in appendix B. The two initial conditions (i ∈ {f, fr}) describe a starting point directly in front of the host vehicle, and in front to the right at an angle of 45 degrees with respect to the host vehicle.

TABLE I
MEAN OF INITIAL CONDITIONS FOR MONTE-CARLO SIMULATIONS

                       i = f      i = fr
µ_x⁻  [m]              10         10
µ_y⁻  [m]              0          10
µ_ẋ⁻  [m/s]            −2         −2
µ_ẏ⁻  [m/s]            0.1        −1.6
µ_ẍ⁻  [m/s²]           −0.2       −0.001
µ_ÿ⁻  [m/s²]           0.0        −0.01

TABLE II
NUMBER OF COLLISIONS AT HOST VEHICLE BOUNDARIES FOR 1·10⁷ SIMULATED TRAJECTORIES WITH DIFFERENT INITIAL CONDITIONS

Γ_j        i = f          i = fr
front      4.46·10⁶       1.40·10⁶
right      36874          2.88·10⁶
left       6196           2
rear       0              0
Σ          4.50·10⁶       4.28·10⁶

Table II shows the number of collisions divided into the respective boundaries of the host vehicle where the impact or boundary crossing occurred for the two different simulations. The resulting histograms of the collision probability rates are shown in fig. 5 together with the numerical integration of the collision probability rate as well as the difference between the simulation and the calculation. The difference is calculated by evaluating the collision probability rate at the same times as the mid points of the histogram bins. As can be seen in fig. 5, the numerical integration of the collision probability rate accurately reproduces the collision probability rate from the Monte-Carlo simulations. In order to illustrate the increase in accuracy as a function of the number of simulated trajectories, fig. 6 shows the differences between simulation and numerical integration with an increasing number of simulated trajectories for collisions at the right side of the host vehicle in the front scenario.
B. Does the approximation by Taylor-expansion accurately reproduce the exact result?

In order to be able to compute the collision probability rate efficiently on an embedded platform, an approximation of the exact expression (eq. (11)) was derived in eq. (14). Fig. 7 shows the differences between this approximation, as well as a higher-order approximation where the pdf is Taylor-expanded to linear order with respect to the off-diagonal element of the inverse covariance matrix around 0 (see app. C), and the numerical integration. As can be seen, the higher-order approximation reduces the error to a large extent while it can still be calculated efficiently on an embedded platform using the complementary error function.

C. An adaptive method to sample the collision probability rate over ∆T

Above, the approximations of the exact expression for calculating the collision probability rate were evaluated at small time increments of ∆t = 0.05 s. Thus, the calculation over the entire time period of interest (e.g. 8 s as used above) and for every relevant object could induce a substantial computational burden. In order to reduce this effort, we propose an adaptive method to sample the collision probability rate function with variable - in general larger - time increments ∆t over the time period of interest while still capturing the characteristics of this function, in particular its shape around the maximum. The sampling starting point is based upon the non-probabilistic TTCs for single, straight boundaries using a one-dimensional constant acceleration model. Those TTCs for penetrating the front, left, and right boundaries can then be used as initial conditions for the start of the sampling iteration of the collision probability rate. (Due to the low probability of penetration, the non-probabilistic TTC for the rear boundary is not considered for the determination of the sampling starting point.) To reproduce the collision probability rate without substantial loss of information but with lower computational effort, the following algorithm is proposed (a sketch of this sampling loop is given after the list):
• Calculate the times of penetrating the front, left and right boundaries based upon the non-probabilistic TTCs described above.
• Calculate the collision probability rate for each time. Pick the time with the maximum collision probability rate as a starting point.
• Move left and right from this starting point with equally spaced ∆t1 > ∆t and calculate the collision probability rate at these time points. Stop on each side if the collision probability rate has reached a lower threshold dP_C^+/dt|_low.
• While moving left and right, check if the slope of the collision probability rate has changed its sign.
• On every slope sign change, calculate the collision probability rate around this time interval with decreased ∆t2 < ∆t1.

Examples of this implementation can be found in fig. 8 for the front and front-right scenarios. It can be seen that the collision probability rate as well as the integrated collision probability over a certain time period can be determined with considerably fewer sampling points while still capturing the shape of the functions to be approximated.
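A minimal sketch of this adaptive sampling loop (our illustration; rate stands for any evaluator of the collision probability rate, e.g. the approximation (14), and the seed times, increments and threshold are placeholders):

def adaptive_rate_samples(rate, seed_times, dt1=0.5, dt2=0.2,
                          rate_low=0.01, t_min=0.0, t_max=8.0):
    """Sample rate(t) adaptively: coarse steps dt1 away from the best seed time,
    refined steps dt2 around detected slope sign changes."""
    # start at the seed time (non-probabilistic TTC) with the largest rate
    t0 = max(seed_times, key=rate)
    samples = {t0: rate(t0)}

    for direction in (+1, -1):                     # walk right, then left
        prev_t, prev_r = t0, samples[t0]
        slope_sign = 0
        t = t0 + direction * dt1
        while t_min <= t <= t_max:
            r = rate(t)
            samples[t] = r
            new_sign = 1 if r > prev_r else -1
            if slope_sign and new_sign != slope_sign:
                # slope sign change: refine between the last two coarse samples
                tt = min(prev_t, t) + dt2
                while tt < max(prev_t, t):
                    samples[tt] = rate(tt)
                    tt += dt2
            slope_sign = new_sign
            if r < rate_low:                       # stop once the rate is negligible
                break
            prev_t, prev_r = t, r
            t += direction * dt1

    times = sorted(samples)
    return times, [samples[t] for t in times]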
VII. WHAT IS THE TTC?

There are various approaches to computing the TTC. In [2], the TTC is computed as the mean of the time distribution of reaching the x_0 boundary of the car as a function of the initial conditions assuming a constant speed model; process noise is not considered. This is also presented in [1]; in addition, the time distribution for reaching the x_0 boundary as a function of the initial conditions assuming a constant acceleration model is calculated by Monte-Carlo simulation and its mean values depending upon the initial condition setup are given - again, process noise for this motion model is not considered. However, these simple temporal quantities do not fully capture the characteristics of horizontal plane collision scenarios, as will be shown below.

Fig. 5. The histogram resulting from Monte-Carlo simulation is shown together with the numerical integration of the expression for the collision probability rate for the front (a) and front-right (b) scenario. The differences between simulation and numerical integration are calculated by evaluating the numerical integration at the same times as the mid points of the histogram bins and are shown in (c) and (d).

Fig. 6. The collision probability rate for the right side of the host vehicle for the front scenario, comparing the results from Monte-Carlo simulation with an increasing number of simulated trajectories - (a) 1·10⁵, (b) 1·10⁶, (c) 1·10⁷ - with the numerical integration of the expression for the collision probability rate, eq. (11). The process noise PSD for both coordinates is q̃x = q̃y = 1.0125 m² s⁻⁵.

In fig. 9 the collision probability rate is plotted together with the constant acceleration motion TTCs for the x- and y-directions for an initial position at the front, right side. Both TTCs in x- and y-direction are significantly different from the time of the maximum of the collision probability rate. Since the bulk of the colliding trajectories go through two sides - front and right (see also fig. 1b) - only a collision model that takes into account the full geometry of the host vehicle can yield accurate results.
In fig. 10 the collision probability rate is plotted together with the constant acceleration motion TTCs (i.e. their mean values) for the x-direction for an initial position that is straight in front of the vehicle; hence almost all trajectories pass through the front boundary. Nevertheless, the maximum of the probability rate occurs before the mean of the TTC. This difference increases as the process noise increases, as can be seen in fig. 11. This is due to the fact that the time of the maximum is strongly influenced by the factor p_t(x_0) in eq. (13); an increased level of process noise leads to a faster spreading of p_t(x_0) and hence the maximum is reached earlier.

Fig. 7. Differences between numerical integration and two approximations of the exact expression of the collision probability rate (eq. (11)), for the front (a) and front-right (b) scenario. The first approximation (solid line) ignores the non-diagonal entries of the covariance matrix while the second approximation (dashed line) Taylor-expands the pdf to linear order with respect to the off-diagonal elements of the inverse covariance matrix around 0.

Fig. 8. Examples for reducing the number of calculations to determine the collision probability rate and the integrated collision probability. (a) and (c) show the results for the front scenario and (b) and (d) for the front-right scenario. The parameters used in these examples are ∆t1 = 0.5 s, ∆t2 = 0.2 s and dP_C^+/dt|_low = 0.01. In doing so, the number of calculations of the collision probability rate could be reduced from 120 (using a fixed sampling increment of ∆t = 0.05 s) to 13 for the front scenario and to 16 for the front-right scenario, respectively.

The above discussion shows that temporal collision characteristics are encoded by the distribution of the collision probability rate, which incorporates the full geometry of the host vehicle as well as process noise during prediction.
A quantity called TTC could then be defined as one of the characteristic properties of this distribution, such as the mode or the mean or the median, or as a property of the integrated collision probability rate, e.g. the time when the collision probability exceeds a certain threshold.

Fig. 9. Collision probability rate and mean of constant acceleration TTC for an initial condition at the front, right side of the vehicle: (x, y) = (10, 10) m. The process noise PSD for both coordinates is q̃x = q̃y = 0.0405 m² s⁻⁵.

Fig. 10. Collision probability rate and mean of constant acceleration TTC for an initial condition almost in front of the vehicle: (x, y) = (10, 0) m. The process noise PSD for both coordinates is q̃x = q̃y = 0.0101 m² s⁻⁵.

Fig. 11. Collision probability rate and mean of constant acceleration TTC for an initial condition almost in front of the vehicle: (x, y) = (10, 0) m. The process noise PSD for both coordinates has been increased to q̃x = q̃y = 0.1013 m² s⁻⁵.

VIII. CONCLUSIONS

As detailed in our literature review, a common approach to computing a collision probability is via temporal collision measures such as time-to-collision or time-to-go. In this paper, however, we have pursued a different approach, namely the derivation of a collision probability rate without temporal collision measures as an intermediate or prerequisite quantity. A collision probability rate then affords the provision of a collision probability over an extended period of time by temporal integration. We have also shown that computations of TTCs using assumptions based on the shape of trajectories or the prediction dynamics (disregard of process noise), or on simple
xr ξ= (15) xm Hence the mean vector µ and covariance matrix Σ can be partitioned into     Σrr Σrm µr µ= , Σ= (16) µm Σ> rm Σmm The following two well-known results on multivariate Gaussians are used in this paper: a) Marginalization: The probability density of ξ marginalized with respect to xm is Z p (xr ) = p (ξ) dxm = N (xr ; µr , Σrr ) (17) xm b) Conditioning: The probability density of ξ conditioned on xm is covariance matrix R(tk ). The measurement function h is given by typical radar measurements (r, φ, ṙ), i. e.  p x2 + y 2 y   h(ξ) = arctan x  x ẋ+y ẏ √ 2 2 x +y (24) ∂h ∂ξ p (ξ|xm ) = p (xr |xm ) = N xr ; µr|m , Σr|m  (18) with µr|m = µr + Σrm Σ−1 mm (xm − µm ) (19) > Σr|m = Σrr − Σrm Σ−1 mm Σrm (20) and H = is its linearization. − The covariance matrix P∞ at steady state can be determined by the the discrete algebraic Riccati equation using the matrices defined above: − − > P∞ = F P∞ F − > − > −F P∞ H HP∞ H +R −1 − > HP∞ F +Q B. Dynamical system The vehicle kinematics is characterized by a sixdimensional state vector > ξ = x y ẋ ẏ ẍ ÿ (21) (25) C. Evaluation of the 2D integral for the collision probability rate The integral Z Z ẋ pt (y, ẋ|x0 ) dydẋ (26) The dynamical model is a discrete-time equivalent white noise jerk model (see e. g. [14]) ẋ≤0 y∈Iy in eq. (13) for the collision probability rate cannot be computed in closed form if the covariance matrix of pt (y, ẋ|x0 ) is not diagonal. Instead of a full numerical 2D integration where scheme we Taylor-expand the 2D pdf with respect to the off  ∆t2k diagonal element of the inverse covariance matrix around 0 to 1 0 ∆tk 0 0 2  2  a certain order and then integrate. For a general 2D Gaussian ∆t k  0 1 0 ∆tk 0 2   pdf p(x1 , x2 ) = N (ξ; µ, Σ) with ξ = (x1 , x2 )> and mean µ  1 0 ∆tk 0  F (∆tk ) = 0 0  and covariance matrix Σ the Taylor-expansion to linear order 0 0 0 1 0 ∆tk    with respect to Σ−1 12 reads 0 0 0 0 1 0      q q 0 0 0 0 0 1 N (ξ; µ, Σ) = N x1 ; µ1 , Σ̃11 N x2 ; µ2 , Σ̃22    q with ∆tk = tk+1 −tk . The corresponding predicted covariance − −1 matrix is denoted by P (tk+1 ). The process noise ν(tk ) · −Σ12 (x1 − µ1 )N x1 ; µ1 , Σ̃11 is modeled by a white, mean-free Gaussian process with    q covariance matrix Q · (x2 − µ2 )N x2 ; µ2 , Σ̃22  5  ∆t4 ∆t3 ∆tk k k q̃ 0 q̃x 0 q̃x 0  20 x 8 6 2   ∆t5 ∆t4 ∆t3 +O (Σ−1 k k k  0 12 ) q̃ 0 q̃ 0 q̃  ξ − (tk+1 ) = F (∆tk )ξ(tk ) + ν(tk )  4  ∆tk q̃  Q(∆tk ) =  8 x  0  3  ∆tk  6 q̃x 0 20 y ∆t3 k q̃x 3 0 ∆t4 k 8 q̃y 0 ∆t3 k q̃y 6 0 ∆t2 k q̃x 2 0 (22) y 8 ∆t2 k q̃x 2 0 ∆t3 k 3 q̃y 0 0 ∆tk q̃x ∆t2 k q̃y 2 0 6 0 y     q̃ y  2  0  ∆tk q̃y ∆t2 k where q̃x , q̃y are the jerk power spectral densities (PSD) in xand y-direction of the underlying continuous noise process. Discrete-time equivalent noise – as opposed to discrete-time counterpart noise – has been deliberately chosen because the power spectral density as characterizing noise parameter is independent of ∆tk . Hence ∆tk in prediction can be changed arbitrarily without adjusting the power spectral density. The measurement model is z(tk ) = h(ξ − (tk )) + r(tk ) (23) where z(tk ) is the measurement and the measurement noise r(tk ) is modeled by a white, mean-free Gaussian process with Σ12 with Σ−1 12 = − |Σ| and Σ̃11 = the following integral |Σ| Σ22 , Σ̃22 = |Σ| Σ11 . This leads to Zx1u Zx2u x1 p(x1 , x2 )dx1 dx2 = x1l x2l !  
#x1u q x1 − µ1 p − Σ̃11 N x1 ; µ1 , Σ̃11 · 2Σ̃11 x1l " !#x2u 1 x2 − µ2 · erf p 2 2Σ̃22 x2l " !#x1u   q Σ̃11 x1 − µ1 −1 −Σ12 x1 Σ̃11 N x1 ; µ1 , Σ̃11 − erf p · 2 2Σ̃11 x1l   x2u q  2 · Σ̃22 N x2 ; µ2 , Σ̃22 + O (Σ−1 12 ) " µ1 erf 2 x2l We expand with respect to the off-diagonal entry of the inverse covariance since this is the element that prevents factorization into one-dimensional Gaussians. We have checked that Taylor-expansion with respect to the off-diagonal entry of the covariance matrix leads to less accurate results to first order. R EFERENCES [1] J. Jansson and F. Gustafsson, “A framework and automotive application of collision avoidance decision making,” Automatica, vol. 44, no. 9, pp. 2347–2351, 2008. [2] M. M. Muntzinger, S. Zuther, and K. Dietmayer, “Probability estimation for an automotive pre-crash application with short filter settling times,” in Proceedings of IEEE Intelligent Vehicles Symposium, 2009, pp. 411– 416. [3] J. Jansson, “Collision avoidance theory: With application to automotive collision mitigation,” Ph.D. dissertation, Linköping University Electronic Press, 2005. [4] A. Lambert, D. Gruyer, and G. Saint Pierre, “A fast monte carlo algorithm for collision probability estimation,” in Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on. IEEE, 2008, pp. 406–411. [5] M. Prandini, J. Hu, J. Lygeros, and S. Sastry, “A probabilistic approach to aircraft conflict detection,” IEEE Transactions on intelligent transportation systems, vol. 1, no. 4, pp. 199–220, 2000. [6] P.-J. Nordlund and F. Gustafsson, Probabilistic conflict detection for piecewise straight paths. Linköping University Electronic Press, 2008. [7] A. Genz, “Numerical computation of rectangular bivariate and trivariate normal and t probabilities,” Statistics and Computing, vol. 14, no. 3, pp. 251–260, 2004. [8] A. Eidehall and L. Petersson, “Statistical threat assessment for general road scenes using monte carlo sampling,” IEEE Transactions on intelligent transportation systems, vol. 9, no. 1, pp. 137–147, 2008. [9] Y. K. Belyaev, “On the number of exits across the boundary of a region by a vector stochastic process,” Theory of Probability & Its Applications, vol. 13, no. 2, pp. 320–324, 1968. [10] G. Lindgren, “Model processes in nonlinear prediction with application to detection and alarm,” The Annals of Probability, pp. 775–792, 1980. [11] L. Mihaylova, A. Y. Carmi, F. Septier, A. Gning, S. K. Pang, and S. Godsill, “Overview of bayesian sequential monte carlo methods for group and extended object tracking,” Digital Signal Processing, vol. 25, pp. 1–16, 2014. [12] P. Broßeit, M. Rapp, N. Appenrodt, and J. Dickmann, “Probabilistic rectangular-shape estimation for extended object tracking,” in Intelligent Vehicles Symposium (IV), 2016 IEEE. IEEE, 2016, pp. 279–285. [13] R. Altendorfer, “Observable dynamics and coordinate systems for automotive target tracking,” in Proceedings of IEEE Intelligent Vehicles Symposium, 2009, pp. 741–746. [14] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation. Wiley, 2001.
C OMBATING A DVERSARIAL ATTACKS U SING S PARSE R EPRESENTATIONS arXiv:1803.03880v1 [stat.ML] 11 Mar 2018 Soorya Gopalakrishnan∗, Zhinus Marzi∗, Upamanyu Madhow & Ramtin Pedarsani Department of Electrical and Computer Engineering University of California, Santa Barbara Santa Barbara, CA 93106, USA {soorya,zhinus_marzi,madhow,ramtin}@ucsb.edu A BSTRACT It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs). In this paper, we make the case that sparse representations of the input data are a crucial tool for combating such attacks. For linear classifiers, we show that a sparsifying front end is provably effective against `∞ -bounded attacks, reducing output distortion due to the attack by a factor of roughly K/N where N is the data dimension and K is the sparsity level. We then extend this concept to DNNs, showing that a “locally linear” model can be used to develop a theoretical foundation for crafting attacks and defenses. Experimental results for the MNIST dataset show the efficacy of the proposed sparsifying front end. 1 I NTRODUCTION It has been less than five years since Szegedy et al. (2014) and Goodfellow et al. (2015) pointed out the vulnerability of deep networks to tiny, carefully designed adversarial perturbations, but there is now widespread recognition that understanding and combating such attacks is a crucial challenge in machine learning security. It was conjectured by Goodfellow et al. (2015) (see also later work by Moosavi-Dezfooli et al. (2016); Poole et al. (2016); Fawzi et al. (2017)) that the vulnerability arises not because deep networks are complicated and nonlinear, but because they are “too linear.” We argue here that this intuition is spot on, using it to develop a systematic, theoretically grounded, framework for design of both attacks and defenses, and making the case that sparse input representations are a critical tool for defending against `∞ -bounded perturbations. e x + Adversary x ΨT HK (·) Defense Ψ x̂ y(·) Classifier Figure 1: Defense against adversarial attacks via a sparsifying front end Figure 1 depicts a classifier attacked by an adversary which can corrupt the input x by adding a perturbation e satisfying kek∞ ≤ , and a defense based on a sparsifying front end, which exploits the rather general observation that input data must be sparse in some basis in order to avoid the curse of dimensionality. Specifically, we assume that input data x ∈ RN has a K-sparse representation (K  N ) in an orthonormal basis Ψ: ΨT x 0 ≤ K. The front end enforces sparsity in domain Ψ via function HK (·), retaining the K coefficients largest in magnitude and zeroing out the rest. The intuition behind why sparsity can help is quite clear: small perturbations can add up to a large output distortion when the input dimension is large, and by projecting to a smaller subspace, we limit the damage. Indeed, many recently proposed defenses implicitly use some notion of sparsity, such as ∗ Joint first authors. 1 JPEG compression (Das et al., 2017; Guo et al., 2018), PCA (Bhagoji et al., 2017), and projection onto GAN-based generative models (Ilyas et al., 2017; Samangouei et al., 2018). Our goal here is to provide a theoretically grounded framework which permits a systematic pursuit of sparsity as a key tool, perhaps the key tool, for robust machine learning. 
We first motivate our approach by rigorous results for linear classifiers, and then show how the approach extends to general neural networks via a "locally linear" model.

2 LINEAR CLASSIFIERS

For a linear classifier, y(x) = wᵀx, and hence the distortion caused by the adversary is ∆ = |wᵀx̂ − wᵀx|. Denote the support of the K-sparse representation of x by S_K(x) = supp(H_K(Ψᵀx)). We say that we are in a high signal-to-noise ratio (SNR) regime when the support does not change due to the perturbation: S_K(x) = S_K(x + e). The high SNR regime can be characterized as follows:

Proposition 1. For sparsity level K, a sufficient condition for high SNR is λ/ε > 2M, where λ is the magnitude of the smallest non-zero entry of H_K(Ψᵀx), and M = max_j ||ψ_j||₁. (Here ψ_j denotes the jth column of Ψ, i.e. Ψ = [ψ₁, ψ₂, ..., ψ_N].)

We note that the high SNR condition is easier to satisfy for bases with sparser, or more localized, basis functions (smaller M). The distortion now becomes ∆ = |eᵀ P_K(w, x)|, where P_K(w, x) = Σ_{k∈S_K(x)} ψ_k ψ_kᵀ w is the projection of w onto the K-dimensional subspace spanned by S_K(x). We now consider two settings:
• Semi-white box: Here the perturbations are designed based on knowledge of w alone, so that e_SW = ε sgn(w) and ∆_SW = ε |sgn(wᵀ) P_K(w, x)|. We note that the attack is aligned with w.
• White box: Here the adversary has knowledge of both w and the front end. Assuming high SNR, the optimal perturbation is e_W = ε sgn(P_K(w, x)), which yields ∆_W = ε ||P_K(w, x)||₁. Thus, instead of aligning with w, e_W is aligned with the projection of w onto the subspace that x lies in.

In order to understand how well the sparsifying front end works, we take an ensemble average over randomly chosen classifiers w, and show (Marzi et al., 2018) that, relative to no defense, the attenuation in output distortion provided by the sparsifying front end is a factor of K/N for a semi-white box attack, and is at least O(K polylog(N)/N) for the white box attack. We do not state these theorems formally here due to lack of space. A practical take-away from the calculations involved is that, in order for the defense to be effective against a white box attack, not only do we need K ≪ N, but we also need the individual basis functions to be localized (small in ℓ1 norm).

3 NEURAL NETWORKS

We skip a lot of details, but the key idea is to extend the intuition from linear classifiers to general neural networks using the concept of a "locally linear" representation. The change in slope in a ReLU function, or the selection of the maximum in a max pooling function, can be modeled as an input-dependent switch. If we fix these switches, the transfer function from the input of the network to, say, the inputs to an output softmax layer, is linear. Specifically, consider a multilayer (deep) network with L classes. Using the locally linear model, each of the outputs of the network (prior to softmax) can be written as

y_i = w_eq^{(i)ᵀ} x − b_eq^{(i)},  i = 1, ..., L,

where y = [y₁, y₂, ..., y_L]ᵀ. The softmax layer computes p_i = S_i(y) = e^{y_i} / Σ_{j=1}^{L} e^{y_j}. Applying the theory developed for linear classifiers to w_eq^{(i)}, we see that a sparsifying front end will attenuate the distortion going into the softmax layer. And of course, the adversary can use the locally linear model to devise attacks analogous to those for linear classifiers, as follows.
Semi-white box and white box attacks: Assume that x belongs to class t, with label t known to the adversary (a pessimistic but realistic assumption, given that the attacker can run the input through a high-accuracy network prior to devising its attack). The adversary can sidestep the nonlinearity of the softmax layer, since its goal is simply to make y_i > y_t for some i ≠ t. Thus, the adversary can consider L − 1 binary classification problems, and solve for perturbations aiming to maximize y_i − y_t for each i ≠ t. We now apply the semi-white and white box attacks to each pair, with w_eq = w_eq^{(i)} − w_eq^{(t)} being the equivalent locally linear model from the input to y_i − y_t. After computing the distortions for each pair, the adversary applies its attack budget to the worst-case pair for which the distortion is the largest:

max_{i,e} y_i(x + e) − y_t(x + e),  s.t. ||e||∞ ≤ ε

FGSM attack: For binary classification, the standard FGSM attack (Goodfellow et al., 2015) can be shown to be equivalent to the semi-white box attack using the locally linear model. However, it does not have such a nice interpretation for more than two classes: it attacks along the gradient of the cost function (typically cross-entropy), and hence does not take as direct an approach to confounding the network as the semi-white box attack. As expected (and verified by experiments), it performs worse (does less damage) than the semi-white box attack.

4 EXPERIMENTS

Setup: We consider two inference tasks on the MNIST dataset (LeCun et al., 1998): binary classification of digit pairs via linear SVM, and multi-class classification via a four layer CNN. The CNN consists of two convolutional layers (containing 20 and 40 feature maps, both with 5x5 local receptive fields) and two fully connected layers (containing 1000 neurons each, with dropout) (Nielsen, 2015). For the sparsifying front end, we use the Cohen-Daubechies-Feauveau 9/7 wavelet (Cohen et al., 1992) and retrain the network with sparsified images for various values of ρ = K/N. We perturb images with FGSM, semi-white box and white box attacks and report on classification accuracies.

Table 1: Classification accuracies (in %) for 3 vs. 7 discrimination via linear SVM, and 10-class classification via CNN. For linear SVM, ε = 0.12 and ρ = 2%. For the CNN, ε = 0.25, ρ = 3%.

                          Linear SVM                      Four layer CNN
                          Semi-white box   White box      FGSM     Semi-white box   White box
No defense                0                0              19.45    8.87             8.87
Sparsifying front end     97.31            94.62          89.75    88.76            84.04

Results: For the binary classification task, we begin with the digits 3 and 7. Without the front end, an attack with ε = 0.12 completely overwhelms the classifier, reducing accuracy from 98.2% to 0%. We find ρ = 2% to be the best choice for the 3 versus 7 scenario, and report the accuracies obtained in Table 1. Results for other digit pairs show a similar trend; insertion of the front end greatly improves resilience to adversarial attacks. The optimal value of ρ lies between 1-5%, with ρ = 2% working well for all scenarios. For multi-class classification, the attacks use ε = 0.25. We find that the locally linear attack is stronger than FGSM; it degrades performance from 99.38% to 8.87% when no defense is present. Again, the sparsifying front end (ρ = 3%) improves network robustness, increasing accuracy to 84.04% in the worst-case scenario.
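To make the front end H_K(·) concrete, the following sketch (our illustration, not the authors' released code) sparsifies an image in a biorthogonal wavelet basis by keeping the fraction ρ of coefficients largest in magnitude; 'bior4.4' is PyWavelets' Cohen-Daubechies-Feauveau 9/7 wavelet, and the decomposition level is a placeholder.

import numpy as np
import pywt

def sparsify(image, rho=0.03, wavelet="bior4.4", level=3):
    """Keep only the fraction rho of wavelet coefficients largest in magnitude,
    zero out the rest, and reconstruct the image (the front end H_K)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(rho * arr.size))
    threshold = np.sort(np.abs(arr).ravel())[-k]   # magnitude of the K-th largest coefficient
    arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)
    coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs_sparse, wavelet)

# e.g. defended_input = sparsify(x.reshape(28, 28), rho=0.03) for an MNIST image x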
5 C ONCLUSIONS We have emphasized here the value of locally linear modeling for design of attacks and defenses, and the connection between sparsity and robustness. We believe that these results just scratch the surface, and hope that they stimulate the community towards developing a comprehensive design framework, grounded in theoretical fundamentals, for robust neural networks. Important topics for future work include developing better sparse generative models, as well as discriminative approaches to sparsity (e.g., via sparsity of weights within the neural network). Promising results on the latter approach have been omitted here due to lack of space. 2 The reported values of  are for images normalized to [0, 1]. 3 R EFERENCES Arjun Nitin Bhagoji, Daniel Cullina, Chawin Sitawarin, and Prateek Mittal. Enhancing robustness of machine learning systems via data transformations. arXiv preprint arXiv:1704.02654, 2017. Albert Cohen, Ingrid Daubechies, and J-C Feauveau. Biorthogonal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 45(5):485–560, 1992. Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E Kounavis, and Duen Horng Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900, 2017. Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Classification regions of deep neural networks. arXiv preprint arXiv:1705.09552, 2017. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015. Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICLR), 2018. Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, and Alexandros G Dimakis. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196, 2017. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, and Ramtin Pedarsani. Sparsity-based defense against adversarial attacks on linear classifiers. arXiv preprint arXiv:1801.04695, 2018. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582, 2016. Michael A Nielsen. Neural Networks and Deep Learning. Determination Press, 2015. Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems (NIPS), pp. 3360–3368, 2016. Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations (ICLR), 2018. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014. 4
Partial Consistency with Sparse Incidental Parameters Jianqing Fan1 , Runlong Tang2 , and Xiaofeng Shi1 arXiv:1210.6950v3 [] 2 Aug 2017 1 2 Princeton University Johns Hopkins University March 14, 2018 Abstract Penalized estimation principle is fundamental to high-dimensional problems. In the literature, it has been extensively and successfully applied to various models with only structural parameters. As a contrast, in this paper, we apply this penalization principle to a linear regression model with a finite-dimensional vector of structural parameters and a high-dimensional vector of sparse incidental parameters. For the estimators of the structural parameters, we derive their consistency and asymptotic normality, which reveals an oracle property. However, the penalized estimators for the incidental parameters possess only partial selection consistency but not consistency. This is an interesting partial consistency phenomenon: the structural parameters are consistently estimated while the incidental ones cannot. For the structural parameters, also considered is an alternative two-step penalized estimator, which has fewer possible asymptotic distributions and thus is more suitable for statistical inferences. We further extend the methods and results to the case where the dimension of the structural parameter vector diverges with but slower than the sample size. A data-driven approach for selecting a penalty regularization parameter is provided. The finite-sample performance of the penalized estimators for the structural parameters is evaluated by simulations and a real data set is analyzed. Keywords: Structural Parameters, Sparse Incidental Parameters, Penalized Estimation, Partial Consistency, Oracle Property, Two-Step Estimation, Confidence Intervals 1 Introduction Since the pioneering papers by Tibshirani (1996) and Fan and Li (2001), the penalized estimation methodology for exploiting sparsity has been studied extensively. For example, Zhao and Yu (2006) 1 provides an almost necessary and sufficient condition, namely Irrepresentable Condition, for the LASSO estimator to be strong sign consistent. Fan and Lv (2011) shows that an oracle property holds for the folded concave penalized estimator with ultrahigh dimensionality. For an overview on this topic, see Fan and Lv (2010). All the aforementioned papers consider models only with the so-called structural parameters, which are related to every data point. By contrast, we consider in this paper another type of models where there are not only the structural parameters but also the so-called incidental parameters, each of which is related to only one data point. Specifically, suppose data {X i , Yi }ni=1 are from the following linear model: Yi = µ?i + X Ti β ? + i , (1.1) where the vector of incidental parameters µ? = (µ?1 , · · · , µ?n )T is sparse, the vector of structural parameters β ? = (β1? , · · · , βd? )T is of main interest, and for each i, X i is a d-dimensional covariate vector, and i is a random error. Let ν = (µ?T , β ?T )T . Then, in model (1.1), a different data point (X i , Yi ) depends on a different subset of ν, that is, µ?i and β ? . Model (1.1) arises as a working model for estimation from Fan et al. (2012b), which considers a large-scale hypothesis testing problem under arbitrary dependence of test statistics. By Principal Factor Approximation, a method proposed by Fan et al. 
(2012b), the dependent test statistics Z = (Z1 , · · · , Zp )T ∼ N (µ, Σ) can be decomposed as Zi = µi +bTi W +Ki , where µ = (µ1 , · · · , µp )T and bi is the ith row of the first k unstandardized principal components, denoted by B, of Σ and K = (K1 , · · · , Kp )T ∼ N (0, A) with A = Σ − BB T . The common factor W drives the dependence among the test statistics. This realized but unobserved factor is critical for False Discovery Proportion (FDP) estimation and power improvements by removing the common factor {bTi W } from the test statistics. Hence, an important goal is to estimate W with given {bi }ni=1 . In many applications on large-scale hypothesis testing, the parameters {µi }pi=1 are sparse. For example, genome-wide association studies show that the expression level of gene CCT8 is highly related to the phenotype of Down Syndrome. It is of interest to test the association between each of millions of SNP’s and the CCT8 gene expression level. In the framework of Fan et al. (2012b), each µi stands for such an association. That is, if µi = 0, the ith SNP is not associated with the CCT8 gene expression level; otherwise, it is associated. Since most of the SNP’s are not associated the CCT8 gene expression level, it is reasonable to assume {µi }pi=1 are sparse. Replacing Zi , µi , bi , W , k, p, and Ki with Yi , µ?i , X i , β ? , d, n, and i respectively, we obtain model (1.1) formally. It is of interest to study this model independently with simplifications. 2 Although model (1.1) emerges from a critical component of estimating FDP in Fan et al. (2012b), it stands with its own interest. For example, in some applications, there are only few signals (nonzero µ?i ’s) and what is interesting is to learn about β ? , which reflects the relationship between the covariates and response. For another example, those few nonzero µ?i ’s might be some measurement or recording errors of the responses {Yi }. In these cases, model (1.1) is suitable for modeling data with contaminated responses and a method producing a reliable estimator for β ? is essentially a robust replacement for ordinary least squares estimate, which is sensitive to outliers. Several models with structural and incidental parameters have first been studied in a seminal paper by Neyman and Scott (1948), which points out the inconsistency of the maximum likelihood estimators (MLE) of structural parameters in the presence of a large number of incidental parameters and provides a modified MLE. However, their method does not work for model (1.1) due to no exploration of the sparsity of incidental parameters. Kiefer and Wolfowitz (1956) shows the consistency of the MLE of the structural parameters when the incidental parameters are assumed to be from a common distribution. That is, they eliminate the essential high-dimensional issue of the incidental parameters by randomizing them. In contrast, this paper considers deterministic incidental parameters and handles the high-dimensional issue by penalization with a sparsity assumption. Basu (1977) considers the elimination of nuisance parameters via marginalizing and conditioning methods and Moreira (2009) solves the incidental parameter problem with an invariance principle. For a review of the incidental parameter problems in statistics and economics, see Lancaster (2000). Without loss of generality, suppose the first s incidental parameters {µ?i }si=1 are nonvanishing and the remaining are zero. 
Then, model (1.1) can be written in a matrix form as Y = Xν + , where  X= Is X T1,s 0 0 X Ts+1,n I n−s  , X Ti,j = (X i , X i+1 , · · · , X j )T , I k is a k × k identity matrix, 0 is a generic block of zeros and ν = (µ?1 , · · · , µ?s , β T , µ?s+1 , · · · , µ?n )T . Although this is a sparse high-dimensional problem, the matrix X does not satisfy the sufficient conditions of the theoretical results in Zhao and Yu (2006) and Fan and Lv (2011) due to the inconsistency of the estimation of the incidental parameters in ν and the penalty should not be simply placed on all parameters. For details, see Supplement C. 3 In this paper, we investigate a penalized estimator of (µ? , β ? ) defined by (µ̂, β̂) = argmin n X (Yi − µi − (µ,β)∈Rn+d i=1 X Ti β)2 + n X pλ (|µi |), (1.2) i=1 where pλ is a penalty function with a regularization parameter λ. Since only the incidental parameters are sparse, the penalty is imposed on them. An iterative algorithm is proposed to numerically compute the estimators. The estimator β̂ possesses consistency, asymptotic normality and an oracle property. On the other hand, the nonvanishing elements of µ? cannot be consistently estimated even if β ? were known. So, there is a partial consistency phenomenon. Penalized estimation (1.2) is a one-step method. For the estimation of β ? , We also propose a twostep method whose first step is designed to eliminate the influence of the data with large incidental parameters. The estimator β̃ from the two-step method has fewer possible asymptotic distributions than β̂ and thus is more suitable for constructing confidence regions for β ? . It is asymptotically equivalent to the one-step estimator β̂ when the sizes of the nonzero incidental parameters are small enough, that is, when the incidental parameters are really sparse. Also, the two-step method improves the convergence rate and efficiency over the one-step method for challenging situations where large nonzero incidental parameters increase the asymptotic covariance or even reduce the convergence rate for the one-step method. The rest of the paper is organized as follows. In Section 2, the model and penalized estimation method are formally introduced and the corresponding penalized estimators are characterized. In Section 3, asymptotic properties of the penalized estimators are derived; a penalized two-step estimator is proposed and its theoretical properties are obtained; we also provide a data-driven approach for selecting the regularization parameter. In Section 4, we consider the case where the number of covariates grows with but slower than the sample size. In Section 5, we present simulation results and analyze a read data set. Section 6 concludes this paper with a discussion and all the proofs and some theoretical results are relegated to the appendix and supplements. 2 Model and Method The matrix form of model (1.1) is given by Y = µ? + Xβ ? + , (2.1) where Y = (Y1 , Y2 , · · · , Yn )T , X = (X 1 , X 2 , · · · , X n )T , and  = (1 , 2 , · · · , n )T . The covariates {X i }ni=1 are independent and identically distributed (i.i.d.) copies of X 0 ∈ Rd , which is a random 4 vector with mean zero and a covariance matrix ΣX > 0. They are independent of the random errors {i }, which are i.i.d. copies of 0 , which is a random variable with mean zero and variance σ 2 > 0. Denote an  bn and an  bn if an = o(bn ) and bn = o(an ), respectively. There is an assumption on the covariates and random errors. 
Assumption (A): There exist positive sequences κn ≪ √n and γn ≪ √n such that

P( max_{1≤i≤n} ‖X i‖_2 > κn ) → 0 and P( max_{1≤i≤n} |εi| > γn ) → 0, as n → ∞,   (2.2)

where ‖·‖_2 stands for the l2 norm on R^d.

Suppose there are three types of incidental parameters in model (1.1) or (2.1): for simplicity of indexing, the first s1 incidental parameters µ?1 , · · · , µ?s1 are large in the sense that |µ?i | ≫ max{κn , γn } for 1 ≤ i ≤ s1 ; the next s2 ones, µ?s1+1 , · · · , µ?s , are nonzero and bounded by γn , with s = s1 + s2 ; the last n − s ones, µ?s+1 , · · · , µ?n , are zero. Note that it is unknown to us which µ?i 's are large, bounded or zero. The sparsity of µ? is understood as s1 + s2 ≪ n, i.e. s1 + s2 = o(n). Denote the vectors of the three types of incidental parameters by µ?1 , µ?2 , and µ?3 , respectively.

The penalized estimation (1.2) can be written as

(µ̂, β̂) = argmin_{(µ,β)} L(µ, β),   L(µ, β) = ‖Y − µ − Xβ‖_2^2 + Σ_{i=1}^n pλ(|µi |).   (2.3)

The penalty function pλ can be the soft (i.e. L1 or LASSO), hard, SCAD or a general folded concave penalty function (Fan and Li, 2001). For simplicity, we next consider only the soft penalty function, that is, pλ(|µi |) = 2λ|µi |. The cases with the hard and SCAD penalties can be treated in a similar way. By Lemma D.1 in Supplement D, a necessary and sufficient condition for (µ̂, β̂) to be a minimizer of L(µ, β) is that β̂ = (X^T X)^{-1} X^T (Y − µ̂), Yi − µ̂i − X_i^T β̂ = λ Sign(µ̂i ) for i ∈ Î0^c and |Yi − X_i^T β̂| ≤ λ for i ∈ Î0 , where Sign(·) is the sign function and Î0 = {1 ≤ i ≤ n : µ̂i = 0}.

Numerically, the special structure of L(µ, β) suggests a marginal descent algorithm for the minimization problem in (2.3), which iteratively computes µ^(k) = argmin_{µ∈R^n} L(µ, β^(k−1)) and β^(k) = argmin_{β∈R^d} L(µ^(k), β) until convergence. The advantage of this algorithm is that both minimization problems have analytic solutions: they are, respectively, the soft-threshold estimator applied to the residuals {Yi − X_i^T β^(k−1)} and the ordinary least-squares estimator with responses Y − µ^(k).

In this section and the next, the number of covariates d is assumed to be a fixed integer. The case where d diverges to infinity will be considered in Section 4. For the case where d is finite, we make the following assumption on λ.

Assumption (B): The regularization parameter λ satisfies

κn ≪ λ, αγn ≤ λ, and λ ≪ min{µ? , √n},   (2.4)

where κn and γn are defined in (2.2), α is a constant greater than 2, and µ? = min_{1≤i≤s1} |µ?i |.

For simplicity, abbreviate "with probability going to one" to "wpg1". A stopping rule for the above algorithm is based on the successive difference ‖β^(k+1) − β^(k)‖_2 . By Proposition D.2 in Supplement D, wpg1, the iterative algorithm stops at the second iteration, provided the initial estimator is bounded wpg1. Suppose {β^(k)} has a theoretical limit β^(∞), with corresponding limit estimator µ^(∞). Then, (µ^(∞), β^(∞)) is a solution of the following system of nonlinear equations:

β = (X^T X)^{-1} X^T (Y − µ),   (2.5)

and, with the soft threshold applied to each component,

µi = (|Yi − X_i^T β| − λ)_+ Sign(Yi − X_i^T β), i = 1, · · · , n,   (2.6)

where (·)_+ returns the maximum of its argument and zero. By Lemma D.3 in Supplement D, a necessary and sufficient condition for (µ̂, β̂) to be a minimizer of L(µ, β) is that it is a solution to equations (2.5) and (2.6). Hence, (µ^(∞), β^(∞)) is a minimizer of L(µ, β) and can also be denoted as (µ̂, β̂).
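To make the marginal descent iteration concrete, the following Python/NumPy sketch implements the soft-penalty case of (2.3). It is an illustration only, not the authors' code; the function names, the iteration cap and the tolerance are our own choices.

    import numpy as np

    def soft_threshold(r, lam):
        # Componentwise soft-thresholding (|r| - lam)_+ * sign(r), cf. (2.6).
        return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

    def penalized_fit(Y, X, lam, max_iter=50, tol=1e-10):
        # Marginal descent for ||Y - mu - X beta||_2^2 + 2*lam*sum_i |mu_i|.
        XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)      # (X^T X)^{-1} X^T
        beta = XtX_inv_Xt @ Y                           # OLS start (mu = 0)
        mu = np.zeros(len(Y))
        for _ in range(max_iter):
            mu = soft_threshold(Y - X @ beta, lam)      # mu-step: threshold the residuals
            beta_new = XtX_inv_Xt @ (Y - mu)            # beta-step: OLS on adjusted responses
            if np.linalg.norm(beta_new - beta) < tol:   # stopping rule on ||beta^(k+1) - beta^(k)||_2
                beta = beta_new
                break
            beta = beta_new
        return mu, beta

In line with Proposition D.2 in Supplement D, the beta-iterates typically stabilize after the second pass, so the loop is cheap in practice.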
Note that β̂ is also the minimizer of the profiled loss function L̃(β) = L(µ(β), β), where µ(β) as a function of β is given by (2.6) . Interestingly, this profiled loss function is a criterion function equipped with the famous Huber loss function (see Huber (1964) and Huber (1973)). Pn T Specifically, the profiled loss function can be expressed as L̃(β) = i=1 ρ(Yi − X i β), where ρ(x) = x2 I(|x| ≤ λ) + (2λx − λ2 )I(|x| > λ) is exactly the Huber loss function, which is optimal in a minimax sense. This equivalence between the penalized estimation and Huber’s robust estimation indicates that the penalization principle is versatile and can naturally produce an important loss function in robust statistics. This equivalence also provides a formal endorsement of the least absolute deviation robust regression (LAD) in Fan et al. (2012b) and indicates that it is better to use all data with LAD regression rather than 90% of them. It is worthwhile to note that the penalized estimation is only formally equal to the Huber’s. Our model (2.1) considers deterministic sparse incidental parameters µ?i ’s, while the model in Huber’s works assumes random contamination as in Kiefer and Wolfowitz (1956). Recently, there appear a few papers on robust regression in 6 high-dimensional settings, see, for example, Chen et al. (2010), Lambert-Lacroix and Zwald (2011), ? and Bean et al. (2012). Portnoy and He (2000) provide a high level review of literature on robust statistics. From the equations (2.5) and (2.6), β̂ is a solution to ϕn (β) = 0, where ϕn (β) = β − (X T X)−1 X T (Y − µ(β)). (2.7) In general, this is a Z-estimation problem. The following theoretical analysis is based on this characterization of β̂. At the end of this section, we provide for further analysis some notations and an expansion of P P P P P ϕn (β). Let S = ni=1 X i X Ti , SS = i∈S X i X Ti , SµS = i∈S X i µ?i , SS = i∈S X i i , S = ni=1 X i P and SS = i∈S X i , where S is a subset of {1, 2, · · · , n}. It is straightforward to show ϕn (β) = (SS10 + SS11 + SS12 )(β − β ? ) − (SµS11 + SµS12 ) − (SS10 + SS11 + SS12 ) − λ(SS20 + SS21 + SS22 − SS30 − SS31 − SS32 ), (2.8) where the index sets S10 = {s + 1 ≤ i ≤ n : |X Ti (β ? − β) + i | ≤ λ}, S11 = {1 ≤ i ≤ s1 : |µ?i + X Ti (β ? − β) + i | ≤ λ} and S12 = {s1 + 1 ≤ i ≤ s : |µ?i + X Ti (β ? − β) + i | ≤ λ}; S20 , S21 and S22 are defined similarly except that the absolute operation is omitted and “≤” is replaced by “>”; S30 , S31 and S32 , are defined similarly with S20 , S21 and S22 except that “> λ” is replaced by “< −λ”. Note that all these index sets depend on β. 3 Asymptotic Properties In this section, we consider the asymptotic properties of the penalized estimators β̂ and µ̂. Assumption (A), together with Assumption (B), enables the penalized estimation method to distinguish the large incidental parameters from others, and thus simplifies the asymptotic properties of the index sets Sij ’s in (2.8) in the sense that they become independent of β wpg1. Denote a hypercube of β ? by BC (β ? ) = {β ∈ Rd : |βj − βj? | ≤ C, 1 ≤ j ≤ d} with a constant C > 0. Lemma 3.1 (On Index Sets Sij ’s). Under Assumptions (A) and (B), for every C > 0 and every ? , S ? ? ? β ∈ BC (β ? ), wpg1, S10 = S10 11 = ∅, S12 = S12 , S20 = ∅, S21 = S21 , S22 = ∅, S30 = ∅, S31 = S31 ? = {s + 1, s + 2, · · · , n}, S ? = {s + 1, s + 2, · · · , s}, and S32 = ∅, where the limit index sets S10 1 12 ? = {1 ≤ i ≤ s : µ? > 0} and S ? = {1 ≤ i ≤ s : µ? < 0}. 
S21 1 1 31 i i 7 By Lemma 3.1, wpg1, the solution β̂ to (2.7) has an analytic expression: −1 µ  ? + SS ? ) β̂ = β ? + (SS10 [SS ? + (SS10 ? + SS ? ) + λ(SS ? − SS ? )], 12 21 31 12 12 (3.1) from which, we derive asymptotic properties of β̂. Some analysis needs the following assumption. Assumption (C): There exists some constant δ > 0 such that EkX 0 k2+δ < ∞ and kµ?2 k2 /kµ?2 k2+δ 2  Ps ? 2+δ 1/(2+δ) . diverges to infinity, where kµ?2 k2+δ = i=s1 +1 |µi | The following result shows the existence of a unique consistent estimator of β ? . Theorem 3.2 (Existence and Consistency of β̂). Under Assumptions (A) and (B), if either s2 = o(n/(κn γn )) or Assumption (C) holds, then, for every fixed C > 0, wpg1, there exists a unique P estimator β̂ n ∈ BC (β ? ) such that ψn (β̂ n ) = 0 and β̂ n −→ β ? . In Theorem 3.2, there are two different kinds of sufficient conditions: on is on s2 , which is the size of bounded incidental parameters µ?2 , and the other is Assumption (C), which is about the norms of µ?2 . They come from different analysis approaches on the term SµS ? in (3.1). One does 12 not imply the other. For details, see Supplement E. Specially, if s2 = O(nα2 ) for some α2 ∈ (0, 1) and κn γn  n(1−α2 ) , then β̂ is consistent by Theorem 3.2. Next, we consider the asymptotic distributions of the consistent estimator β̂ n obtained in The? = {1 ≤ i ≤ s : µ? > 0} orem 3.2. Without loss of generality, we assume the sizes of index sets S21 1 i ? = {1 ≤ i ≤ s : µ? < 0} are asymptotically equivalent to as and (1 − a)s with a constant and S31 1 1 1 i a ∈ (0, 1). Similar to Theorem 3.2, there are two different sets of conditions on µ?2 corresponding to two different analysis approaches. Denote ∼ as the asymptotic equivalence and Dn = kµ?2 k2 . Theorem 3.3 (Asymptotic Distributions on β̂ n ). Under Assumptions (A) and (B), suppose s2  √ n/(κn γn ) holds or Assumption (C) and Dn2 /n = o(1) hold. √ d n(β̂ n − β ? ) −→ N (0, σ 2 Σ−1 X ); [main case] √ d + (2) If s1 ∼ bn/λ2 , then n(β̂ n − β ? ) −→ N (0, (b + σ 2 )Σ−1 X ), for every constant b ∈ R ; √ d (3) If s1  n/λ2 , then rn (β̂ n − β ? ) −→ N (0, Σ−1 X ), where rn ∼ n/(λ s1 ). (1) If s1  n/λ2 , then When the incidental parameters are really sparse, the size s1 of large incidental parameters is small and the size s2 or the magnitude Dn of bounded incidental parameters is also small so that the conditions of case (1) tends to hold. This case is of most interest and we denote it as the main case. The other cases are presented to provide a relatively complete picture of the asymptotic distributions 8 of β̂. In fact, Theorem E.1 in Supplement E shows more possible asymptotic distributions. Note that the constant a does not appear in the limit distributions of Theorem 3.3 due to cancelation and √ that the sub- n consistency emerges in case (3) when s1 is large, because for this case the impact of the large incidental parameters is too big to be handled efficiently by the penalized estimation. For case (2), in one direction, as b → 0, its condition and limit distribution become those of case (1); in the other direction, as b increases, it approaches case (3). This boundary phenomenon was in spirit similar to that in Tang et al. (2012). Specially, if λ  nα1 , κn γn  nα2 , s1  n1−α1 and √ d s2  n1/2−α2 , for some α1 ∈ (0, 1) and α2 ∈ (0, 1/2), then n(β̂ n − β ? ) −→ N (0, σ 2 Σ−1 X ) by the main case of Theorem 3.3. Remark 1 (An Oracle Property). Suppose an oracle tells the true µ? . Then, with the adjusted (O) responses Y −µ? , the oracle estimator of β ? 
is given by β̂ = (XX T )−1 X T (Y −µ? ). The limiting √ (O) distribution of n(β̂ n − β ? ) is N (0, σ 2 Σ−1 X ). Comparing this with the main case of Theorems 3.3, it follows that the penalized estimator β̂ n enjoys an oracle property. Although mainly interested in the estimation of β ? , we also obtain the soft-threshold estimator µ̂ of µ? : for each i, µ̂i = µi (β̂) = (|Yi − X Ti β̂| − λ)+ sgn(Yi − X Ti β̂). (3.2) Denote E = {µ̂i 6= 0, for i = 1, 2, · · · , s1 ; and µ̂i = 0, for i = s1 + 1, s1 + 2, · · · , n}. P Theorem 3.4 (Partial Selection Consistency on µ̂). Under Assumptions (A) and (B), if β̂ −→ β ? , then P (E) → 1. Theorem 3.4 shows that, wpg1, the indexes of µ?1 and µ?3 are estimated correctly, but those of µ?2 wrongly. We call this a partial selection consistency phenomenon. 3.1 Two-Step Estimation Theorems 3.3 shows that the penalized estimator β̂ n has multiple different limit distributions, which complicates the application of these theorems in practice. In addition, the convergence rate √ of β̂ n is less than the optimal rate n in the challenging cases where the impact of large incidental parameters is substantial. To address these issues, we propose the following two-step estimation method: firstly, we apply the penalized estimation (2.3) and let Iˆ0 = {1 ≤ i ≤ n : µ̂i = 0}; secondly, we define the two-step estimator as β̃ = (X TIˆ X Iˆ0 )−1 X TIˆ Y Iˆ0 , 0 0 9 (3.3) where X Iˆ0 consists of X i ’s whose indexes are in Iˆ0 and Y Iˆ0 consists of the corresponding Yi ’s. The following theorem shows that β̃ is consistent and Asymptotic Gaussian. Theorem 3.5 (Consistency and Asymptotic Normality on β̃). Suppose Assumptions (A) and P (B) hold. If either s2 = o(n/(κn γn )) or Assumption (C) holds, then β̃ −→ β ? . If either s2 = √ √ d o( n/(κn γn )) holds or Assumptions (C) and Dn2 /n = o(1) hold, then n(β̃−β ? ) −→ N (0, σ 2 Σ−1 X ). Comparing Theorem 3.5 with Theorem 3.3, we see that β̂ has three possible asymptotic distributions but β̃ has only one since for β̃ the conditions on s1 disappear. It is because the two-step method identifies and removes large incidental parameters by exploiting the partial selection consistency property of µ̂ in Theorem 3.4. Further, the two-step estimator improves the convergence rate to the optimal one over the one-step estimator for the challenging case with s1  n/λ2 . Because of these advantages, we suggest to use the two-step method to make statistical inferences. When the incidental parameters are sparse in the sense that the size or the magnitude of the √ bounded incidental parameters are small, i.e. s2 = o( n/(κn γn )) or Dn2 /n = o(1), it follows, √ d by Theorem 3.5, n(β̃ − β ? ) −→ N (0, σ 2 Σ−1 X ), from which a confidence region with asymptotic confidence level 1 − α is given by √ 1/2 {β ∈ Rd : σ −1 nkΣX (β̃ − β)k2 ≤ qα (χd )}, (3.4) where qα (χd ) is the upper α-quantile of χd , the square root of the chi-squared distribution with degrees of freedom d. For each component βj? of β ? , an asymptotic 1 − α confidence interval is given by −1/2 [β̃j ± n−1/2 σΣX −1/2 where ΣX (j, j)zα/2 ], (3.5) (j, j) is the square root of the (j, j) entry of Σ−1 X and zα/2 is the upper α/2-quantile of N (0, 1). The confidence region (3.4) and interval (3.5) involve unknown parameters ΣX and σ. They can be estimated by Σ̂X = (1/n)X T X and σ̂ = #(Iˆ0 )−1/2 kY Iˆ0 − X TIˆ β̃k2 . 0 (3.6) By Law of Large Numbers, Σ̂X is consistent. By lemma E.3 in Supplement, σ̂ is also consistent. 
Hence, after replacing ΣX and σ in the confidence region (3.4) and interval (3.5) with Σ̂X and σ̂, the resulting confidence region and interval have the asymptotic confidence level 1 − α.

3.2 Theoretical and Data-Driven Regularization Parameters

By Assumption (B), the theoretical regularization parameter λ depends on κn and γn , which are also crucial to the boundary conditions of the asymptotic properties of the penalized estimators β̂ and β̃. By Assumption (A), κn and γn are determined by the distributions of X 0 and ε0 , respectively. It is of interest to derive κn and γn explicitly for some typical cases of covariates and errors. When the covariates are bounded by CX > 0 and the random errors follow N(0, σ^2), we can take κn = √d · CX and γn = (2σ^2 log(n))^{1/2}; they satisfy (2.2) in Assumption (A), and the specification of λ in (2.4) of Assumption (B) becomes α(2σ^2 log(n))^{1/2} ≤ λ ≪ min{µ? , √n}. When X 0 and ε0 follow N(0, ΣX) and N(0, σ^2), respectively, denote by σX^2 the maximum of the diagonal elements of ΣX ; we can take κn = (2dσX^2 log(n))^{1/2} and γn = (2σ^2 log(n))^{1/2}, and (2.4) becomes (log(n))^{1/2} ≪ λ ≪ min{µ? , √n}. A case with exponentially tailed random variables is considered in Supplement E.2.

Although the theoretical specification of λ guarantees the desired asymptotic properties, a data-driven specification is of interest in practice. A popular way to specify λ is multi-fold cross-validation. The validation set, however, needs to be as little contaminated as possible. We propose the following procedure to identify a data-driven regularization parameter:

Step 1: On the training and testing data sets.
1. Apply ordinary least squares (OLS) to all the data and obtain the residuals ε̂i^(OLS) = Yi − X_i^T β̂^(OLS) for each i.
2. Identify the set of "pure" data corresponding to the npure smallest values in {|ε̂i^(OLS)|}.
3. Compute the updated OLS estimator β̂^(OLS,2) with the "pure" data and obtain the updated residuals {ε̂i^(OLS,2)} for each i.
4. Identify the updated "pure" data set corresponding to the npure smallest {|ε̂i^(OLS,2)|} and label the remaining data as the "contaminated" data set.
5. Randomly select a subset of the updated "pure" data set as a testing set and merge the remaining "pure" data and the "contaminated" data into a training set.

Step 2: On the range [λL , λU ] of the regularization parameter.
1. Compute the standard deviation σ̂pure of the residuals of the "pure" data set.
2. Set λL = αl σ̂pure and λU = αu σ̂pure , where αl < αu are positive constants.

Step 3: On the data-driven regularization parameter.
1. For each grid point of λ in the interval [λL , λU ], apply a penalized method to the training set, obtain the estimator β̂_{λ,train}, and compute the corresponding test error σ̂^2_{λ,test} = Σ_{i ∈ testing set} (Yi − X_i^T β̂_{λ,train})^2.
2. Identify the data-driven regularization parameter λopt , which minimizes σ̂^2_{λ,test} among the grid points.

This simple data-driven procedure can certainly be improved further. For example, in Step 1, sub-steps 3 and 4 can be repeated several times to obtain a better "pure" data set. In Step 2, the range for λ can also be obtained from quantiles of {|ε̂i^(OLS,2)|}. We can also combine quantities based on σ̂pure and quantiles of {|ε̂i^(OLS,2)|} to determine [λL , λU ]. The good performance of this data-driven regularization parameter will be demonstrated in Subsection 5.2.
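As an illustration only (not the authors' implementation), the three steps above can be coded directly. In the sketch below, the function name select_lambda, the grid size, the random seed and the inner soft-penalty fit are our own choices; npure = n/2 and a testing set of size npure/2 follow the settings used later in Section 5.

    import numpy as np

    def select_lambda(Y, X, alpha_l=2.0, alpha_u=7.0, n_grid=20, seed=0):
        # Data-driven regularization parameter following Steps 1-3 above.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        n_pure = n // 2                                  # n_pure = n/2

        def ols(Yv, Xv):
            return np.linalg.solve(Xv.T @ Xv, Xv.T @ Yv)

        # Step 1: "pure"/"contaminated" split, testing and training sets.
        resid = np.abs(Y - X @ ols(Y, X))                # |residuals| from OLS on all data
        pure = np.argsort(resid)[:n_pure]                # n_pure smallest residuals
        beta2 = ols(Y[pure], X[pure])                    # updated OLS on the "pure" data
        resid2 = np.abs(Y - X @ beta2)
        pure2 = np.argsort(resid2)[:n_pure]              # updated "pure" set; the rest is "contaminated"
        test = rng.choice(pure2, size=n_pure // 2, replace=False)
        train = np.setdiff1d(np.arange(n), test)         # remaining "pure" data merged with "contaminated"

        # Step 2: range [lambda_L, lambda_U] from the residual scale of the "pure" data.
        sigma_pure = np.std(Y[pure2] - X[pure2] @ beta2)
        lam_grid = np.linspace(alpha_l * sigma_pure, alpha_u * sigma_pure, n_grid)

        # Step 3: minimize the test error of the penalized fit over the grid.
        def penalized_beta(Yv, Xv, lam, n_iter=10):      # soft-penalty marginal descent
            beta = ols(Yv, Xv)
            for _ in range(n_iter):
                r = Yv - Xv @ beta
                mu = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
                beta = ols(Yv - mu, Xv)
            return beta

        test_err = [np.sum((Y[test] - X[test] @ penalized_beta(Y[train], X[train], lam)) ** 2)
                    for lam in lam_grid]
        return lam_grid[int(np.argmin(test_err))]

The returned λopt can then be plugged into the penalized estimation of Section 2.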
4 Diverging number of structural parameters In Sections 2 and 3, we have considered model (2.1) under the assumption that the number of covariates d is a fixed integer. However, when there are a moderate or large number of covariates, it is appropriate to assume that d diverges to infinity with the sample size. In this section, we consider model (2.1) with the assumption that d → ∞ and d  n. Since the number of covariates grows orderly slower than the sample size, we chose to continue use the penalized estimation (2.3) for (µ? , β ? ) and the penalized two-step estimation (3.3) for β ? . The corresponding estimators are still denoted as (µ̂, β̂) and β̃, but we should keep it in mind that their dimensions diverge to infinity with n. The characterizations of β̂ in Lemmas D.1 and D.3 are still valid since they are finite-sample results. The iteration algorithm also wpg1 stops at the second iteration, which is shown by Proposition F.3 in Supplement F. As before, it is critical to properly specify the regularization parameter λ for the case with a diverging number of covariates. Assumption (B’) The regularization parameter λ satisfies √ dκn  λ, αγn ≤ λ, and λ  µ? , where κn and γn are defined in (2.2) and α > 2. 12 (4.1) Comparing Assumption (B’) with Assumption (B), the main difference in formation is that κn is √ changed to dκn . In fact, κn in (4.1) also depends on d, which will be shown in Supplement E.2. This difference is caused by the assumption that d diverges to ∞. Lemma 4.1 (On Index Sets Sij ’s). Under Assumptions (A) and (B’), the conclusion of Lemma 3.1 holds. Thus, wpg1, still valid is the crucial analytic expression of β̂ (3.1), from which we derive its theoretical properties. They are similar to those in the previous section, with additional technical complexity caused by the diverging dimension d. Denote k·kF,d = d−1/2 k·kF , where k·kF is the Frobenius norm, and κX = d−1 Pd 4 1/2 , j=1 (E[X0j ]) the average of the square root of the fourth marginal moments of X 0 . We make the following assumptions on ΣX and κX . Assumption (D): kΣ−1 X kF,d is bounded. Assumption (E): κX is bounded. Theorem 4.2 (Existence and Consistency on β̂). Suppose Assumptions (A), (B’), (D) and (E) hold. If there exists rd , a sequence of positive numbers depending on d, such that d3 /n → 0, √ √ (rd d)2 /n → 0, s1 = o(n/(rd dκn λ)) and s2 = o(n/(rd dκn γn )), then, for every fixed C > 0, wpg1, P there exists a unique estimator β̂ ∈ BC (β ? ) such that ψn (β̂) = 0 and rd kβ̂ − β ? k2 −→ 0. Next, we consider the asymptotic distribution on β̂. Since the dimension of β̂ diverges to infinity, following Fan and Lv (2011), it is more appropriate to study its linear maps. Let An be a q × d matrix, where q is a fixed integer, Gn = An ATn with the largest eigenvalue λmax (Gn ), and GX,n = T 2 An Σ−1 X An . Denote by λmin (ΣX ) the smallest eigenvalue of ΣX , σX,max = max1≤j≤d Var[X0j ], 2 σX,min = min1≤j≤d Var[X0j ] and γX,max = max1≤j≤d E|X0j |3 . Abbreviate “with respect to” by “wrt”. We assume further Assumption (D’): λmin (ΣX ) is bounded away from zero, which implies Assumption (D). Assumption (D”): kΣX kF,d is bounded. Assumption (F): kAn kF and λmax (Gn ) are bounded and GX,n converges to a q × q symmetric matrix GX wrt k·kF . Assumption (G): σX,max > 0; σX,max and γX,min are bounded from above and σX,min is bounded away from zero. 13 Similar to the main case of Theorem 3.3, a properly scaled β̂ n is asymptotically Gaussian. Theorem 4.3 (Asymptotic Distribution on β̂). 
Suppose Assumptions (A), (B’), (D’), (D”), (E), √ √ √ √ (F) and (G) hold. If d5 log d = o(n), s1 = o( n/(λ dκn )) and s2 = o( n/( dκn γn )), then √ d nAn (β̂ − β ? ) −→ N (0, σ 2 GX ). The penalized estimator µ̂ obtained by (3.2) is partially consistent. Theorem 4.4 (Partial Selection Consistency on µ̂). Suppose Assumptions (A) and (B’)hold and √ β̂ is a consistent estimator of β ? wrt rd k·k2 . If rd ≥ 1/ d, then P (E) → 1. We can construct the penalized two-step estimator β̃ through (3.3) with µ̂. This two-step estimator is consistent by Theorem F.4 in Supplement F and its asymptotic distribution, as an extension of the main case in Theorem 3.5, is given by the following theorem. Theorem 4.5 (Asymptotic Distribution on β̃). Suppose all the assumptions and conditions of The√ d orem 4.3 hold except that the condition on s1 is not required. Then nAn (β̃−β ? ) −→ N (0, σ 2 GX ). From Theorems 4.3 and 4.5, Wald-type asymptotic confidence regions of β ? are availabe. For example, a confidence region based on β̃ with asymptotic confidence level 1 − α is given by √ −1/2 {β ∈ Rd : σ −1 nkGX,n An (β̃ − β)k2 ≤ qα (χq )}. (4.2) −1 Since GX,n involves the unknown ΣX , we estimate it by ĜX,n = An Σ̂X ATn . On the other hand, σ is estimated by σ̂ in (3.6) as before. After plugging ĜX,n and σ̂ into (4.2), we obtain √ −1/2 {β ∈ Rd : σ̂ −1 nkĜX,n An (β̃ − β)k2 ≤ qα (χq )}. (4.3) By Lemma A.5 in the appendix, the consistency of σ̂ is assured. Then, Theorem A.6 in the appendix guarantees the asymptotic validity of the confidence region (4.3). 5 Numerical Evaluations and Real Data Analysis In this section, we evaluate the finite-sample performance of the penalized estimation through simulations and use it to analyze a real data set. The model for simulations is given by, for ? +  . For simplicity, the deterministic sparse i = 1, · · · , n, Yi = µ?i + Xi,1 β1? + · · · + Xi,50 β50 i incidental parameters {µ?i } are generated as i.i.d. copies of µ: µ is 0, U [−c, c] and W (c + Exp(τ )) with probabilities p0 , p1 , and p2 respectively, where U [−c, c] is a uniform random variable in [−c, c], 14 W takes values 1 and −1 with probabilities pw and 1 − pw , respectively, and Exp(τ ) follows an exponential distribution with mean 1/τ > 0. Note that c can be viewed as a contamination parameter. The larger c is, the more contaminated the data. On the other hand, pw determines ? = 1; the asymmetry of the incidental parameters. The regression coefficients β1? = · · · = β50 i.i.d. {(Xi,1 , · · · , Xi,50 )} ∼ N (0, ΣX ), where ΣX (i, j) = 2 exp(−|i − j|)), which is a Toeplitz matrix and the constant 2 is used to inflate the covariance a little; the covariates are independent of i.i.d. {i } ∼ N (0, 1); and n = 500; p0 = 0.8, p1 = 0.1, p2 = 0.1, c is 0.5, 1, 3 or 5, pw is 0.5 or 0.75 and τ = 1. 5.1 Performance of Penalized Methods The following methods for estimating β ? are evaluated. (i) Oracle method (O): an oracle knows the index set S of zero µ?i ’s. Its performance is used as a benchmark. (ii) Ordinary least squares method (OLS): all µ?i ’s are thought as zeros. (iii) Four penalized least squares (PLS) methods, namely, PLS with soft penalty (PLS.Soft or S), PLS with hard penalty (PLS.Hard or H), two-step PLS with soft penalty (PLS.Soft.TwoStep or S.TS) and two-step PLS with hard penalty (PLS.Hard.TwoStep or P P (O) H.TS). More specifically, the oracle estimator of β ? is given by β̂ = ( i∈S X i X Ti )−1 i∈S X i Yi . The hard penalty function is pλ (|t|) = λ2 −(|t|−λ)2 {|t| < λ} (see Fan and Li (2001)). 
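As a point of reference only, the simulation design described above can be generated with a few lines of Python/NumPy. This is a schematic re-implementation rather than the code used for the reported results; the function name simulate and its default arguments are ours.

    import numpy as np

    def simulate(n=500, d=50, c=3.0, p0=0.8, p1=0.1, p2=0.1, p_w=0.75, tau=1.0, seed=0):
        # One data set from the Section 5 design: Y_i = mu*_i + X_i^T beta* + eps_i.
        rng = np.random.default_rng(seed)
        # Incidental parameters: 0 w.p. p0, U[-c, c] w.p. p1, W*(c + Exp(tau)) w.p. p2.
        kind = rng.choice(3, size=n, p=[p0, p1, p2])
        mu = np.zeros(n)
        mu[kind == 1] = rng.uniform(-c, c, size=(kind == 1).sum())
        m2 = (kind == 2).sum()
        signs = rng.choice([1.0, -1.0], size=m2, p=[p_w, 1.0 - p_w])
        mu[kind == 2] = signs * (c + rng.exponential(1.0 / tau, size=m2))  # Exp(tau) has mean 1/tau
        # Covariates: N(0, Sigma_X) with Sigma_X(i, j) = 2 exp(-|i - j|), a Toeplitz matrix.
        idx = np.arange(d)
        Sigma_X = 2.0 * np.exp(-np.abs(idx[:, None] - idx[None, :]))
        X = rng.multivariate_normal(np.zeros(d), Sigma_X, size=n)
        beta = np.ones(d)                                 # beta*_1 = ... = beta*_d = 1
        eps = rng.normal(size=n)                          # N(0, 1) errors
        return mu + X @ beta + eps, X, mu, beta           # Y, X, mu*, beta*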
Each method is evaluated by the square root of the empirical mean squared error (RMSE). Every penalized method is evaluated on a grid of values of the regularization parameter λ, ranging from .5 to 5 in steps of .25.

The sequence plot of Figure 1 shows 500 realized incidental parameters µ?i 's with c = 3 and pw = 0.75. They are used in the data generation of the simulations. The scatter plot of Figure 1 shows the responses Yi 's against the first covariate Xi1 's of a generated data set, and the red stars stand for the contaminated sample points, the ones with nonzero µ?i 's. With fifty covariates, it is usually difficult to identify the contaminated data points graphically. With the above incidental parameters, the six methods are evaluated by simulations with 1000 replications. Since ΣX is a Toeplitz matrix with equal diagonal elements, the asymptotic variances of the estimators of β1? and β2? are different and representative of the estimators of the other βi? 's. So, we only report simulation results on the estimation of β1? and β2? . Figure 2 shows the RMSE's of the six estimators of β1? ; the RMSE's for β2? are similar. As expected, the oracle method has the smallest RMSE and OLS the largest. The RMSE of each PLS method, as a function of λ, forms a convex curve, which achieves a minimal RMSE significantly below the green line of OLS and close to the cyan line of O.

[Figure 1 about here: left panel "Sequence Plot of Incidental Parameters" (µ?i against index), right panel "Marginal Data Plot" (Y against X1).]

Figure 1: The sequence plot on the left shows 500 incidental parameters µ?i 's with c = 3 and pw = 0.75. The non-zero µ?i 's are in red. The scatter plot on the right shows the responses Yi 's against the first covariate Xi1 's of a data set generated with those 500 incidental parameters.
The red stars stand for the contaminated sample points, the ones with nonzero µ?i 's.

More specifically, the RMSE of PLS.Hard achieves its minimal value when λ is around 2.75. On the other hand, the RMSE of PLS.Soft decreases a little until λ is around .75, then increases and stays above the RMSE of PLS.Hard. This reflects the fact that a large λ in a soft-threshold method usually causes bias. PLS.Hard.TwoStep has very similar performance to PLS.Hard for all λ. PLS.Soft.TwoStep has performance similar to PLS.Soft when λ is small. However, as λ becomes large, PLS.Soft.TwoStep moves closer to PLS.Hard than to PLS.Soft. This is because PLS.Soft.TwoStep and PLS.Hard.TwoStep produce similar estimates when λ is large. The minimal RMSE of PLS.Soft is slightly larger than those of the other PLS methods.

Table 1 reports the minimal RMSE's of the estimators of β1? and β2? with the corresponding optimal λ's and biases. The biases are negligible compared with the RMSE's. The optimal λ's for PLS.Soft and for the other PLS methods are around .75 and 2.5, respectively. This indicates that the simple soft-threshold method tends to work best with a small λ, due to the bias issue. Denote the empirical relative efficiency (ERE) of an estimator A with respect to another estimator B as RMSE(B)/RMSE(A). Then, for the estimation of β1? , the ERE's of PLS.Soft, PLS.Hard, PLS.Soft.TwoStep and PLS.Hard.TwoStep with respect to O are around 78%, 87%, 87% and 87%, respectively; the ERE's of the PLS methods with respect to OLS are around 176%, 196%, 196% and 196%, respectively. The ERE's for β2? are similar. Thus, in terms of ERE (and RMSE), the PLS methods perform closely to O and significantly better than OLS.

[Figure 2 about here: "RMSE's for beta1 with Deterministic Sparse Incidental Parameters", RMSE against λ for the six estimators.]

Figure 2: RMSE's of the O (Oracle), OLS, S (PLS.Soft), H (PLS.Hard), S.TS (PLS.Soft.TwoStep) and H.TS (PLS.Hard.TwoStep) estimators of β1? with the incidental parameters shown in Figure 1. The top and bottom solid horizontal lines show the RMSE's for OLS and O, respectively. The other four horizontal lines indicate the minimal RMSE's for the four PLS methods, and the corresponding four best λ's are shown by the vertical lines.

From Table 1, we can also see that the RMSE's of the estimators of β1? are always smaller than those of β2? . This is because the first covariate is less correlated with the other covariates than the second one. In order to examine the performance of the methods with general incidental parameters, not just those in Figure 1, we generate µ? randomly in each iteration. The iteration number for each simulation is again 1000. Figure 3 shows the RMSE's of the six estimators of β1? under two settings: pw = 0.5 and c = 1 or 5. Each plot in Figure 3 presents a pattern similar to that of Figure 2. When pw is fixed at 0.5, the RMSE of each non-oracle estimator of β1? increases as the contamination parameter c increases from 1 to 5. This indicates that each non-oracle estimator performs worse as the data become more contaminated. However, the PLS estimators are more robust than OLS, which is very sensitive to the change of c. We have also done simulations with pw = 0.75; the RMSE's of the estimators of β1? are similar to those with pw = 0.5, so the corresponding plots are similar to those in Figure 3.
In other words, the RMSE's of all estimators are stable with respect to pw , which means that the magnitudes of the nonzero incidental parameters matter most, not their signs.

                      O      OLS      S       H     S.TS    H.TS     S.P     H.P     LAD
Bias(β̂1)×10^4        .8     17.8     1.3     .6      5.9    20.4     9.5    11.5    10.4
RMSE(β̂1)×10^2       4.0      8.4     5.1     4.6     4.6     4.6     5.4     5.0     5.4
λ                                    .75    2.75    2.25     2.5    2.45    2.47
Bias(β̂2)×10^4      -2.3    -46.7    24.6    43.5   -21.8    10.0    32.3    13.9    -7.3
RMSE(β̂2)×10^2       4.4      9.0     5.3     4.9     5.0     4.9     5.8     5.5     5.9
λ                                    .75    2.75    2.75     2.5    2.45    2.47

Table 1: RMSE's of the Oracle, OLS and LAD estimators and the minimal RMSE's of the penalized estimators of β1? and β2? with the corresponding optimal or data-driven λ's and biases, when the incidental parameters shown in Figure 1 are used. For a data-driven method, the data-driven λ is different in each iteration, so the reported λ's for S.P and H.P are averages. The standard deviations of the data-driven λ's of S.P and H.P are 0.37 and 0.34, respectively.

[Figure 3 about here: "RMSE's for beta1 with Random Sparse Incidental Parameters", RMSE against λ, two panels for (c, pw) = (1, 0.5) and (5, 0.5).]

Figure 3: Similar to Figure 2, these plots show the RMSE's of the O (Oracle), OLS, S (PLS.Soft), H (PLS.Hard), S.TS (PLS.Soft.TwoStep) and H.TS (PLS.Hard.TwoStep) estimators of β1? with randomly generated µ? under two settings with pw = 0.5 and c = 1 or 5.

RMSE(β̂1)×10^2          O      OLS      S       H     S.TS    H.TS     S.P     H.P     LAD
(pw, c) = (.5, .5)    4.06    4.06    3.92    3.95    3.90    3.96    4.01    4.38    4.85
   λ                                  1.25    3.5     2.75    2.5     2.08    2.15
(pw, c) = (.5, 1)     4.06    4.58    4.11    4.27    4.21    4.27    4.01    4.59    4.75
   λ                                  2       3.75    4.25    3.5     2.20    2.24
(pw, c) = (.5, 3)     4.12    6.47    4.81    4.99    4.81    4.80    5.64    5.19    5.49
   λ                                  1       2.5     2       2.25    2.42    2.45
(pw, c) = (.5, 5)     4.07    8.50    4.96    4.63    4.66    4.64    6.36    4.87    5.52
   λ                                  1       2.25    2.75    3       2.52    2.58
(pw, c) = (.75, .5)   4.13    4.02    3.91    3.88    3.98    3.91    4.08    4.41    4.79
   λ                                  1.5     5       3.25    4.5     2.08    2.14
(pw, c) = (.75, 1)    4.17    4.41    4.19    4.15    4.15    4.20    4.16    4.59    4.98
   λ                                  3.25    3       4       3.5     2.15    2.23
(pw, c) = (.75, 3)    4.15    5.99    4.91    4.93    4.80    5.02    5.35    5.06    5.52
   λ                                  1       2.25    2       2.5     2.44    2.47
(pw, c) = (.75, 5)    3.97    8.41    5.01    4.66    4.75    4.66    6.22    4.90    5.76
   λ                                  1.5     2.25    2.5     3       2.55    2.60

Table 2: Similar to Table 1, this table shows the RMSE's and minimal RMSE's of the nine estimators of β1? under eight settings on randomly generated µ? with pw = 0.5, 0.75 and c = 0.5, 1, 3, 5. The standard deviations of the data-driven λ's of S.P and H.P for different settings are between 0.2 and 0.45.

We also note that some penalized methods perform closely to, or even outperform, the oracle one when c is small, as shown in the plot with c = 1. This happens because O ignores all the contaminated data points, even those with very light contamination, whereas the penalized methods exploit the information in such points. Table 2 contains the RMSE's of the estimators of β1? under eight settings with pw = 0.5 or 0.75 and c = 0.5, 1, 3 or 5. For each pw , as c increases from 0.5 to 5, the RMSE of O (multiplied by 10^2) stays almost constant around 4, that of OLS increases from about 4 to 8.5, and those of the PLS methods grow from about 4 to 5, which confirms the robustness of the PLS estimators. When c ≤ 1 is small relative to the standard deviation of the random error (σ = 1), the data points are only slightly contaminated, and the OLS and PLS methods perform similarly to O.
However, when c ≥ 3 is large, which means the data are more contaminated, the RMSE's of OLS become significantly larger, but the PLS methods still perform closely to O.

5.2 Performance of Data-Driven Penalized Methods

The previous simulations have shown that the PLS methods with optimal λ's achieve good RMSE's compared with those of the Oracle and OLS methods. In practice, however, these optimal λ's are unknown. One approach to obtaining a data-driven λ has been introduced in Subsection 3.2. Since, as shown in the previous simulation results, the two-step PLS methods perform similarly to the one-step PLS methods, i.e. PLS.Soft and PLS.Hard, only the latter are studied by simulations with data-driven λ's; they are denoted as PLS.Soft.Prac (S.P) and PLS.Hard.Prac (H.P), respectively. For estimating the data-driven λ's, αl = 2 and αu = 7. The size of the pure data set npure is n/2 and that of the testing data set is npure/2.

Simulations are first run with the deterministic sparse incidental parameters shown in Figure 1. We can see in Table 1 that the RMSE's of the estimators of β1? from PLS.Soft.Prac and PLS.Hard.Prac are around 5.4 and 5.0, slightly larger than the optimal values 5.1 and 4.6, respectively. However, they are still significantly smaller than the RMSE of OLS, which is 8.4. The observations for the estimators of β2? are similar.

As before, we also evaluate the performance of the data-driven PLS methods with random sparse incidental parameters. Table 2 shows that, for a given pw , when c is small, such as 0.5 and 1, the RMSE's of PLS.Soft.Prac and PLS.Hard.Prac are close to those of PLS.Soft and PLS.Hard with the optimal λ's. In these cases, PLS.Soft.Prac performs slightly better than PLS.Hard.Prac, and even better than PLS.Soft with the optimal λ and the Oracle method. On the other hand, for a given pw , when c is large, such as 3 and 5, the RMSE's of PLS.Soft.Prac and PLS.Hard.Prac are greater than those of PLS.Soft and PLS.Hard, respectively, but still less than those of OLS. In these cases, the RMSE's of PLS.Soft.Prac are larger than those of PLS.Hard.Prac, which indicates the bias issue of the soft-threshold method. Thus, the data-driven regularization parameter works well with the penalized estimation. When the data are only slightly contaminated, the soft penalty is preferred; otherwise, the hard penalty is recommended.

Tables 1 and 2 also contain the RMSE's of the least absolute deviation regression method (LAD) used in Fan et al. (2012b), applied here to all sample points rather than only to the part with small residuals. Generally speaking, in both the deterministic and the random incidental parameter cases, LAD performs similarly to the PLS methods with data-driven λ's. More specifically, when c ≤ 1 is small, PLS.Soft.Prac outperforms LAD; otherwise, LAD performs better. For all the cases, LAD is dominated by PLS.Hard.Prac. These observations confirm that LAD is an effective robust method and that the penalized methods improve upon it.

5.3 Data-Driven Confidence Intervals

We next investigate the finite-sample performance of the asymptotic confidence interval (CI) (3.5) for βj? with j = 1, 2 based on the PLS two-step methods. Since CI (3.5) is based on the properties of the penalized two-step estimator with the soft penalty, we focus on PLS.TS.Soft with a data-driven regularization parameter λ. The choice of λ in Subsection 3.2 is designed to achieve minimal RMSE and is usually not suitable for constructing confidence intervals.
We propose to first obtain σ̂pure as in the data-driven procedure in Subsection 3.2 and then simply set the data-driven λ to be five times σ̂pure . Since σ̂pure tends to underestimate σ, this data-driven λ is usually not large relative to σ. Denote this method as PLS.TwoStage.Soft.Prac or S.TS.P. After plugging in σ̂ and σ̂_j^{-1}, the square root of the (j, j)th element of Σ̂_X^{-1}, and replacing n by m = #(Î0) in the theoretical CI (3.5), we obtain the data-driven CI [β̃j ± m^{-1/2} σ̂ σ̂_j^{-1} z_{α/2}], where β̃j is the PLS.TwoStage.Soft.Prac estimator of βj? for each j.

This data-driven CI is compared with CI's based on the Oracle and OLS methods. More specifically, denote the Oracle and OLS estimators of βj? as β̂j^(O) and β̂j^(OLS), respectively. Then, the corresponding CI's are given by [β̂j^(O) ± m_o^{-1/2} σ̂^(O) σ̂_j^{-1} z_{α/2}] and [β̂j^(OLS) ± n^{-1/2} σ̂^(OLS) σ̂_j^{-1} z_{α/2}], where m_o is the number of zero incidental parameters and σ̂^(O) and σ̂^(OLS) are the estimators of σ from the O and OLS methods, respectively.

The simulation settings are the same as the previous ones with deterministic sparse incidental parameters, except for the following changes. (a) The number of covariates d is reduced from 50 to 5. This is because when d = 50 and the nominal level is 95%, even the empirical coverage rate (CR) of the oracle confidence interval for β1? becomes 93.5%, not very close to 95%. (b) The iteration number is increased from 1000 to 10000 to improve the accuracy of the CR's. (c) The probabilities of nonzero incidental parameters (p1 , p2) are set to (0.01, 0.01), (0.03, 0.03) and (0.05, 0.05); the contamination parameter c is increased to 10. In order to achieve a good second-order asymptotic approximation, we can either increase the sample size or enlarge the signal-to-noise ratio. Here we adopt the latter. Table 3 reports the empirical coverage rates (CR) and average lengths (AL) of the CI's of β1? and β2? from the O, OLS and PLS.TS.Soft.Prac methods under three different settings on the incidental parameters. For the oracle method, these three settings are the same and thus only one set of simulation results is presented.

                         β1?                          β2?
(p1, p2)            O      OLS    S.TS.P        O      OLS    S.TS.P
CR  (.01, .01)    .950    .944     .951       .948    .945     .945
    (.03, .03)            .948     .948               .946     .949
    (.05, .05)            .955     .946               .947     .946
AL  (.01, .01)    .133    .200     .135       .142    .213     .144
    (.03, .03)            .279     .137               .297     .146
    (.05, .05)            .382     .139               .407     .149

Table 3: Coverage rates (CR) and average lengths (AL) of 95% confidence intervals for β1? and β2? from the O, OLS and PLS.TwoStage.Soft.Prac methods under three settings on deterministic sparse incidental parameters.

Table 3 shows that the CR's of all methods under all settings are close to the nominal level .95. OLS treats the deterministic incidental parameters as random ones and achieves excellent CR's. However, the AL's of OLS are significantly larger than those of O and PLS.TS.Soft.Prac, especially when there are more non-zero incidental parameters. On the other hand, the AL's of PLS.TS.Soft.Prac are only slightly larger than those of O. This means that PLS.TS.Soft.Prac has excellent efficiency in terms of AL's, given its excellent CR's. Also note that the AL's for β1? are less than those for β2? . This is because the asymptotic variance of β̂1 is less than that of β̂2 when the covariance matrix ΣX is a Toeplitz matrix.
Simulations with random incidental parameters under the same settings have also been done; the results are similar to those in Table 3, with slightly inflated AL's for OLS and PLS.TS.Soft.Prac due to the randomness of the incidental parameters.

5.4 Real Data Analysis

We implement the penalized estimation with the soft penalty in the method for estimating the false discovery proportion of a multiple testing procedure proposed by Fan et al. (2012b), for investigating the association between the expression level of gene CCT8, which is closely related to Down Syndrome phenotypes, and thousands of SNPs. The data set consists of three populations: 60 Utah residents (CEU), 45 Japanese and 45 Chinese (JPTCHB), and 60 Yoruba (YRI). More details on the data set can be found in Fan et al. (2012b). In the testing procedure of Fan et al. (2012b), a filtered least absolute deviation regression (LAD) is used to estimate the loading factors from the 90% of the cases (SNPs) whose test statistics are small, and thus the resulting estimator is statistically biased. We upgrade this step with S.P described in Subsections 3.2 and 5.2 and re-estimate the number of false discoveries V(t) and the false discovery proportion FDP(t) as functions of −log10(t), where t is a thresholding value.

[Figure 4 about here: for each population (CEU, JPTCHB, YRI), panels showing R(t), the estimated V(t), and the estimated FDP(t) against −log10(t), with curves for LAD and S.P.]

Figure 4: Discovery number R(t), estimated false discovery number V(t) and estimated false discovery proportion FDP(t) as functions of the threshold t for populations CEU, JPTCHB and YRI. The x-axis is −log10(t).

Population      t               R(t)    FDP̂(t) with LAD    FDP̂(t) with S.P
CEU             6.12 × 10^-4      4           .810               .845
JPTCHB          1.51 × 10^-9      5           .153               .373
YRI             2.54 × 10^-9      2           .227               .308

Table 4: Discovery numbers R(t) and estimated false discovery proportions FDP̂(t) from the methods with LAD and S.P for specific values of the threshold t.

Figure 4 shows the number of total discoveries R(t), V̂(t) and FDP̂(t) from the procedures using filtered LAD and S.P. It is clear that V̂(t) and FDP̂(t) with S.P are uniformly larger than, but reasonably close to, those with filtered LAD. Table 4 contains R(t) and FDP̂(t) with filtered LAD and S.P for several specific thresholds. The estimated FDP's with S.P for CEU and YRI are slightly larger than those with LAD, and FDP̂ for JPTCHB with S.P is more than double that with filtered LAD. This suggests that the estimation of FDP with filtered LAD might tend to be optimistic.
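For completeness, the inference pipeline used in Subsections 5.2-5.4 (one-step soft-penalized fit, two-step refit (3.3), and the plug-in confidence interval (3.5)-(3.6) with n replaced by m = #(Î0)) can be sketched as follows. This is an illustrative reading rather than the authors' code; in Subsection 5.3 the input lam is set to five times σ̂pure, and the helper names are ours.

    import numpy as np
    from scipy import stats

    def two_step_ci(Y, X, lam, alpha=0.05, n_iter=10):
        # Two-step estimator (3.3) and per-coordinate confidence intervals (3.5),
        # with sigma and Sigma_X estimated as in (3.6) and n replaced by m = #(I_0 hat).
        n = X.shape[0]

        def ols(Yv, Xv):
            return np.linalg.solve(Xv.T @ Xv, Xv.T @ Yv)

        # Step 1: one-step soft-penalized fit to locate I_0 hat = {i : mu_hat_i = 0}.
        beta = ols(Y, X)
        for _ in range(n_iter):
            r = Y - X @ beta
            mu = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
            beta = ols(Y - mu, X)
        I0 = np.flatnonzero(mu == 0.0)

        # Step 2: refit OLS on the observations with mu_hat_i = 0, cf. (3.3).
        beta_tilde = ols(Y[I0], X[I0])

        # Plug-in standard errors, cf. (3.6).
        m = I0.size
        sigma_hat = np.sqrt(np.sum((Y[I0] - X[I0] @ beta_tilde) ** 2) / m)
        Sigma_X_hat = (X.T @ X) / n
        se = sigma_hat * np.sqrt(np.diag(np.linalg.inv(Sigma_X_hat)) / m)
        z = stats.norm.ppf(1.0 - alpha / 2.0)
        return beta_tilde, np.column_stack([beta_tilde - z * se, beta_tilde + z * se])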
6 Conclusion and Discussion This paper considers the estimation of structural parameters with a finite or diverging number of covariates in a linear regression model with the presence of high-dimensional sparse incidental parameters. By exploiting the sparsity, we propose an estimation method penalizing the incidental parameters. The penalized estimator of the structural parameters is consistent and asymptotically Gaussian and achieves an oracle property. On the contrary, the penalized estimator of the incidental parameters possesses only partial selection consistency but not consistency. Thus, the structural parameters are consistently estimated while the incidental parameters not, which presents a partial consistency phenomenon. Further, in order to construct better confidence regions for the structural parameters, we propose a two-step estimator, which has fewer possible asymptotic distributions and can be asymptotically even more efficient than the one-step penalized estimator when the size and magnitude of nonzero incidental parameters are substantially large. Simulation results show that the penalized methods with best regularization parameters achieve significantly smaller mean square errors than the ordinary least squares method which ignores the incidental parameters. Also provided is a data-driven regularization parameter, with which the 24 penalized estimators continue to significantly outperform ordinary least squares when the incidental parameters are too large to be neglected. In terms of average length together with excellent coverage rates, the advantage of the confidence intervals based on the two-step estimator with an alternative data-driven regularization parameter is verified by simulations. A data set on genomewide association is analyzed with a multiple testing procedure equipped with a data-driven penalized method and false discovery proportions are estimated. In econometrics, a fixed effect panel data model is given by, for 1 ≤ i ≤ n and 1 ≤ t ≤ T , Yit = µ?i + X Tit β ? + it , (6.1) where µ?i ’s are unknown fixed effects. When T diverges, the fixed effects can be consistently estimated. When T is finite and greater than or equal to 2, although the fixed effects can no longer be consistently estimated, they can be removed by a within-group transformation: for each i, Yit − Ȳi = (X it − X̄ i )T β ? + it − ¯i , where Ȳi , X̄ i and ¯i are the averages of Yit ’s, X it ’s and it ’s, respectively. When T is equal to 1, however, the within-group transformation fails. Note that, with T = 1, Model (6.1) becomes Model (1.1) so that the proposed penalized estimations provide a solution under the sparsity assumption on the fixed effects. Although this paper only illustrates the partial consistency phenomenon of a penalized estimation method for a linear regression model, such a phenomenon shall universally exist for a general parametric model, which contains both a structural parameter and a high-dimensional sparse incidental parameter. For example, consider a panel data logistic regression model: P (Yit = 1|X it ) = (1 + exp{−(µ?i + X Tit β ? )})−1 . When T is finite, the fixed effects µ?i ’s cannot be removed by the within-group transformation as in the panel data linear model (6.1). However, the proposed penalized estimations can still provide a solution. 
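As a concrete illustration of this last point, a soft-penalized fit of the logistic model with T = 1 can be computed by proximal gradient descent on the penalized negative log-likelihood. The sketch below is only schematic: the step-size rule, iteration count and function names are our choices, and no theoretical guarantees from this paper are claimed for it.

    import numpy as np

    def penalized_logistic(y, X, lam, n_iter=500):
        # Proximal gradient for sum_i [log(1 + exp(eta_i)) - y_i eta_i] + lam * sum_i |mu_i|,
        # where eta_i = mu_i + X_i^T beta (panel logistic model with T = 1).
        n, d = X.shape
        mu, beta = np.zeros(n), np.zeros(d)
        # Step size from the Lipschitz constant of the smooth part: L = 0.25 * s_max^2,
        # with s_max the largest singular value of [I_n, X].
        s_max = np.linalg.norm(np.hstack([np.eye(n), X]), 2)
        step = 1.0 / (0.25 * s_max ** 2)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(mu + X @ beta)))    # fitted probabilities
            g = p - y                                     # gradient with respect to eta
            beta = beta - step * (X.T @ g)                # gradient step for beta
            mu = mu - step * g                            # gradient step for mu, then soft-threshold:
            mu = np.sign(mu) * np.maximum(np.abs(mu) - step * lam, 0.0)
        return mu, beta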
Further, if the structural parameter has a dimension diverging faster than the sample size and is sparse, it is expected that the partial consistency phenomenon will continue to appear when sparsity penalty is imposed on both the structural and incidental parameters. Acknowledgement The authors thank the Editor, an associate editor, and three referees for their many helpful comments that have resulted in significant improvements in the article. The research was partially supported by NSF grants DMS-1206464 and DMS-0704337 and NIH Grants R01-GM100474-01 and R01-GM072611-08. 25 A Appendix In this appendix, we provide the proofs of the theoretical results in Section 4. The proofs of the results in Sections 2 and 3 are in Supplements D and E. Denote Sk,l = S{k,k+1,··· ,l} and Sk,l = S{k,k+1,··· ,l} . Let B = {maxs+1≤i≤n kX i k2 ≤ κn } and T D = ni=1 {−γn ≤ i ≤ γn }. Then P (B) → 1 and P (D) → 1 by (2.2). Proof of Lemma 4.1. We first consider Si0 ’s, then Si1 ’s, and finally Si2 ’s with i = 1, 2, 3. Consider ? }. Note that P (A) ≥ P (A|B)P (B) and P (B) → 1. It S10 , S20 and S30 . Let A = {S10 = S10 √ suffices to show that P (A|B) → 1. By λ  dκn , it follows P (A|B) ≥ P ({s + 1 ≤ i ≤ n : √ √ ? |B) ≥ P ({s + 1 ≤ i ≤ −λ + maxs+1≤i≤n kX i k2 dC ≤ i ≤ λ − maxs+1≤i≤n kX i k2 dC} ⊃ S10 √ √ ? ? ) ≥ P (D) → 1. Thus, wpg1, S n : −λ + κn dC ≤ i ≤ λ − κn dC} ⊃ S10 10 = S10 . From ? , it follows that, wpg1, S S10 ∪ S20 ∪ S30 = S10 20 = S30 = ∅. Consider S21 , S31 and S11 . Recall √ that µ? = min{|µ?i | : 1 ≤ i ≤ s1 } and note that λ − µ? + dCκn < −γn when n is large. Let ? and S ?c ? S211 = S21 S21 212 = S21 S21 . We will show P (S211 = S21 ) → 1 and P (S212 = ∅) → 1. ? ) → 1. Denote A = {S ? Then P (S21 = S21 1 211 ⊃ S21 }. On the event B, S211 ⊃ {1 ≤ i ≤ s1 : √ i > λ − µ? + dCκn and µ?i > 0} ⊃ {1 ≤ i ≤ s1 : i > −γn and µ?i > 0}. Then, P (A1 ) ≥ ? )P (B) → 1 · 1 = 1. It follows that, P (A1 |B)P (B) ≥ P ({1 ≤ i ≤ s1 : i > −γn and µ?i > 0} ⊃ S21 ? . Note that S ? ? wpg1, S211 ⊃ S21 211 ⊂ S21 . Then, wpg1, S211 = S21 . Denote A2 = {S212 = ∅}. √ On the event B, S212 ⊂ {1 ≤ i ≤ s1 : i > λ + µ? − dCκn and µ?i < 0}, which contains {1 ≤ i ≤ s1 : i > γn }. Then, P (A2 ) ≥ P (A2 |B)P (B) ≥ P ({1 ≤ i ≤ s1 : i > γn } = ∅)P (B) →= 1. ? ) → 1. Similarly, we can show, wpg1, S ? Then, wpg1, S212 = ∅. Thus, P (S21 = S21 31 = S31 . ? ∪ S ? . Then, wpg1, S Note that S11 , S21 and S31 are disjoint and their union is S21 11 = ∅. 31 √ ? }. Note that −λ − µ? + dCκ < −γ and Consider S12 , S22 and S32 . Denote A = {S12 = S12 n n i √ λ − µ?i − dCκn > γn when n is large for s1 + 1 ≤ i ≤ s. On the event B, S12 ⊃ {s1 + 1 ≤ i ≤ √ √ s : −λ − µ?i + dCκn ≤ i ≤ λ − µ?i − dCκn }, which contains {s1 + 1 ≤ i ≤ s : −γn ≤ i ≤ γn }. ? )P (B) → 1. Thus, Then, P (A) ≥ P (A|B)P (B) ≥ P ({s1 + 1 ≤ i ≤ s : −γn ≤ i ≤ γn } = S12 ? . Note that S , S ? wpg1, S12 = S12 12 22 and S32 are disjoint and their union is S12 . Then, wpg1, S22 = S32 = ∅. 2 = (1/d) Before proceeding to the proofs of Theorems 4.2 to A.6, we denote σ̄X P P 2 and σ̄XX = (1/d2 ) dk=1 dl=1 Var[X0k X0l ] and make the following assumptions. 2 is bounded. Assumption (E1): σ̄X 26 Pd j=1 Var[X0j ] 2 Assumption (E2): σ̄XX is bounded. Assumption (E) in Section 4 implies Assumptions (E1) and (E2) by Cauchy-Schwarz inequality. For simplicity, we adopt the notation ., which means the left hand side is bounded by a constant times the right, where the constant does not affect related analysis. Below are three lemmas needed for proving Theorems 4.2 to A.6. Their proofs are in Supplement F. 
Suppose that M and E are matrices and k·k is a matrix norm and that {An } is a sequence of random d × d matrices and A a deterministic d × d matrix, and denote Σ̂n = (1/n)Sn , the sample covariance matrix. Lemma A.1 (Stewart (1969)). If kIk = 1 and kM −1 kkEk < 1, then Lemma A.2. If kA−1 kF,d k(M + E)−1 − M −1 k kM −1 kkEk ≤ . kM −1 k 1 − kM −1 kkEk √ P P −1 is bounded, An −→ A, and rd ≥ 1/ d, then A−1 n −→ A , where the convergence in probability is wrt rd k·kF . P Lemma A.3. If Assumption (E2) holds and rd2 d4 /n → 0, then Σ̂n −→ ΣX wrt rd k·kF . Proof of Theorem 4.2. By the proof of Lemma 4.1, wpg1, the solution β̂ n to ϕn (β) = 0 on BC (β ? ) is explicitly given by β̂ n = β ? + T0−1 (T1 + T2 + T3 − T4 ), where T0 = (1/n)Ss1 +1,n , ? ? and T4 = (λ/n)SS ? . Then, rd kβ̂ − β k2 ≤ T1 = (1/n)SµS ? , T2 = (1/n)Ss1 +1,n , T3 = (λ/n)SS21 n 31 12 √ P kT0−1 kF,d 4i=1 rd dkTi k2 . We will show that kT0−1 kF,d is bounded by a positive constant wpg1 √ P and rd dkTi k2 −→ 0 for i = 1, 2, 3, 4. Then, rd kβ̂ n − β ? k2 = oP (1). Consider T0 . By Lemma P A.3, kT0 − ΣX kF,d −→ 0 under Assumption (E2) and the condition d3 /n → 0. Then, by Lemma P −1 A.2, together with Assumption (D), kT0−1 − Σ−1 X kF,d −→ 0. This implies that, wpg1, kT0 kF,d is √ √ bounded by a positive constant. Consider T1 . Wpg1, rd dkT1 k2 ≤ rd ds2 κn γn /n = o(1) for s2 = √ P o(n/(rd dκn γn )). Consider T2 . For any δ > 0, P (kT2 k2 > δ) ≤ (1/δ 2 )P k(1/n) ni=s1 +1 X i i k22 ≤ √ Pd 2 /(nδ 2 ), where σ̄ 2 = (1/d) 2 2 2 2 2 2 dσ 2 σ̄X j=1 σj . Thus, P (rd dkT2 k2 > δ) ≤ rd d σ σ̄X /(nδ ) → 0 by AsX √ √ sumption (E1) and (rd d)2 /n → 0. Consider T3 and T4 . Wpg1, rd dkT3 k2 ≤ rd dλs1 κn /n = o(1) √ √ for s1 = o(n/(rd dλκn )). Similarly, rd dkT4 k2 = oP (1). The next lemma is needed for proving Theorem 4.3 and its proof is in Supplement F. Suppose 2 {ξ i } are i.i.d. copies of ξ 0 , a d-dimensional random vector with mean zero. Denote σξ,max = 2 max1≤j≤d Var[ξ0j ], σξ,min = min1≤j≤d Var[ξ0j ] and γξ,max = max1≤j≤d E|ξ0j |3 . Lemma A.4. Suppose σξ,max and γξ,max are bounded from above and σξ,max is bounded from zero. √ √ √ P If d = o( n), then (1/ n) ni=1 ξ i = OP ( d log d) wrt k·k2 . 27 Proof of Theorem 4.3. We reuse the notations Ti ’s in the proof of Theorems 4.2, from which, √ √ nAn (β̂ n − β ? ) = V1 + V2 + V3 − V4 , where Vi = B n Ti for i = 1, 2, 3, 4 and B n = nAn T0−1 . d It is sufficient to show that V2 −→ N (0, σ 2 GX ) and other Vi ’s are oP (1). Consider V1 . We √ have kV1 k2 ≤ ndkAn kF kT0−1 kF,d kT1 k2 . By Assumption (F), kAn kF is bounded. By Lemmas A.2 and A.3 and Assumption (D), for d = o(n1/3 ), wpg1, kT0−1 kF,d is bounded. We have, wpg1, p √ √ kT1 k2 ≤ s2 κn γn /n. Then, kV1 k2 . d/ns2 κn γn , Thus, kV1 k2 = oP (1) for s2 = o( n/( dκn γn )). √ √ nAn (T0−1 − Σ−1 Consider V2 . We have V2 = V21 + V22 , where V21 = nAn Σ−1 X T2 and V22 = X )T2 . p Pn √ First, note that V21 = (n − s1 )/n i=s1 +1 Z n,i , where Z n,i = (1/ n − s1 )An Σ−1 X X i i . On one Pn hand, for every δ > 0, i=s1 +1 EkZ n,i k22 {kZ n,i k2 > δ} ≤ (n − s1 )EkZ n,0 k42 /δ 2 , and EkZ n,0 k42 = 2 −1 T 1 2 E40 E(X T0 Σ−1 X An An ΣX X 0 ) , (n−s1 )2 −2 d 4 2 which is ≤ (n−s 2 E0 λmax (Gn )λmin (ΣX )κX . Then, by As1) P √ sumptions (D’), (E) and (F) and for d = o( n), ni=s1 +1 EkZ n,i k22 {kZ n,i k2 > δ} → 0. On the other P T 2 hand, ni=s1 +1 Cov(Z n,i ) = σ 2 An Σ−1 X An → σ GX by Assumption (F). Thus, by central limit thed orem (see Proposition 2.27 in van der Vaart (1998)), V21 −→ N (0, σ 2 GX ). Next, consider V22 . Note −1/2 k√nT k . 
By Assumption (F), kA k that kV22 k2 ≤ kAn kF (d log(d))1/2 kT0−1 −Σ−1 2 2 n F X kF (d log(d)) 5 is O(1); by Lemmas A.2 and A.3, (d log(d))1/2 kT0−1 −Σ−1 X kF is oP (1) for d log(d) = o(n); by Lemma √ √ P A.4, (d log(d))−1/2 k nT2 k2 = (d log(d))−1/2 k √1n Ss1 +1,n k2 is OP (1) for d = o( n). Then, V22 −→ 0. d Thus, by slutsky’s lemma, V2 −→ N (0, σ 2 GX ). Consider V3 and V4 . First consider V3 . By noting √ √ √ √ √ that s1 = o( n/(λ dκn )), wpg1, kV3 k2 ≤ ndkAn kF kT0−1 kF,d kT3 k2 . dλs1 κn / n → 0. Thus, kV3 k2 = oP (1). In the same way, kV4 k2 = oP (1). T1 Proof of Theorem 4.4. By the definition of E, we have P (E) = T1 T2 T3 , where T1 = P ( si=1 {|µ?i + T T X Ti (β ? −β̂)+i | > λ}), T2 = P ( si=s1 +1 {|µ?i +X Ti (β ? −β̂)+i | ≤ λ}) and T3 = P ( ni=s+1 {|X Ti (β ? − β̂) + i | ≤ λ}). We will show that each Ti converges to one. Then, P (E) → 1. Denote C = {rd kβ̂ − β ? k2 ≤ 1}. Then P (C) → 1 since β̂ is a consistent estimator of β ? wrt rd k·k2 . Consider S T1 . We have 1 − T1 ≤ T11 + T12 , where T11 = P ( i∈S ? {|µ?i + X Ti (β ? − β̂) + i | ≤ λ}) and 21 S T12 = P ( i∈S ? {|µ?i +X Ti (β ? − β̂)+i | ≤ λ}). It is sufficient to show that both T11 and T12 converge √31 S to zero. By dκn  λ  µ? , T11 ≤ P ( i∈S ? {i ≤ λ − µ? + kX i k2 · kβ̂ − β ? k2 }, C) + P (C c ), which 21 √ S ? c is ≤ P ( i∈S ? {i ≤ λ − µ + dκn }) + P (C ) ≤ s1 P {0 ≤ −γn } + P (C c ) −→ 0. Similarly, T12 → 0. 21 √ T Thus T1 → 1. Consider T2 and T3 . By αγn ≤ λ and dκn  λ, T2 ≥ P ( si=s1 {−λ−µ?i +(1/rd )κn ≤ √ √ T i ≤ λ − µ?i − (1/rd )κn }, C), which is ≥ P ( si=s1 {−λ − µ?i + dκn ≤ i ≤ λ − µ?i − dκn }, C) ≥ T P ( si=s1 {−γn ≤ i ≤ γn }, C) → 1. Then T2 → 1. Similarly, T3 → 1. Proof of Theorem 4.5. Note that √ 1/2 nAn ΣX (β̃ − β ? ) = R̃1 + R̃2 + V1 + V2 , where R̃1 = 28 √ nAn R1 , R̃2 = √ nAn R2 , R1 = (X TIˆ X Iˆ0 )−1 X TIˆ Y Iˆ0 {Iˆ0 6= I0 }, R2 = −(X TI0 X I0 )−1 X TI0 Y I0 {Iˆ0 6= I0 }, 0 0 and Vi ’s are defined in the proof of Theorem 4.3. Since P (kR̃1 k2 = 0) ≥ P {Iˆ0 = I0 } → 1, we have R̃1 = oP (1). Similarly, R̃2 = oP (1). By the proof of Theorem 4.3, V1 = oP (1) and d V2 −→ N (0, σ 2 GX ). Therefore, the desired result follows by Slutsky’s lemma. Lemma A.5 (Consistency on σ̂). Suppose the assumptions and conditions of Theorem 4.2 hold √ P with rd ≥ d. If s2 = o(n/γn2 ), then σ̂ −→ σ. √ Proof of Lemma A.5. Since the assumptions and conditions of Theorem 4.2 hold with rd ≥ d, √ the penalized estimators β̂ and β̃ are consistent estimators of β ? wrt dk·k2 by Theorems 4.2 and F.4 in Supplement F. Let A = {Iˆ0 = I0 }. Then A occurs wpg1 by Theorem 4.4. Note that σ̂ 2 = T A + σ̂ 2 Ac , where T = (n − s1 )−1 kY I0 − X TI0 β̃k22 . It suffices to show that Pn P6 P ? T 2 −1 T −→ σ 2 . Note that T = i=s1 +1 [X i (β − β̃)] , T2 = (n − i=1 Ti , where T1 = (n − s1 ) P P P s1 )−1 ni=s1 +1 2i , T3 = 2(n − s1 )−1 ni=s1 +1 X Ti (β ? − β̃)i , T4 = (n − s1 )−1 si=s1 +1 µ?2 i , T5 = 2(n − P P P s1 )−1 si=s1 +1 µi X Ti (β ? − β̃) and T6 = 2(n−s1 )−1 si=s1 +1 µ?i i . It is clear that T2 −→ σ 2 . Thus, it √ is sufficient to show other Ti ’s are oP (1). For every η > 0, wpg1, dkβ ? − β̃k2 ≤ η. By Assumption √ T 2 1 Pn 2 . η 2 . For every dkβ ? − β̃k2 )2 ≤ 2η 2 d1 EkX T0 k22 = 2η 2 σ̄X (E1), wpg1, |T1 | ≤ d1 n−s kX k ( i 2 i=s +1 1 1 √ T 1 Pn kX  k η > 0, wpg1, |T3 | ≤ 2 √1d n−s dkβ ? − β̃k2 ≤ 4η √1d EkX T0 0 k2 = 4σησ̄X . η. For i 2 i i=s +1 1 1 √ √ 1 s γ κ dkβ ? − s2 = o(n/γn2 ), |T4 | ≤ (n − s1 )−1 s2 γn2 → 0. For s2 = o( dn/(γn κn )), |T5 | ≤ 2 √1d n−s 2 n n 1 P 1 1 β̃k2 ≤ 2η √1d n−s s2 γn κn −→ 0. 
For s2 = o(n/γn ), wpg1, |T6 | ≤ 4 n−s γn s2 E|0 | → 0. 1 1 Theorem A.6 (Asymptotic Distributions on β̂ and β̃ with ĜX,n ). Under the assumptions and √ −1/2 d conditions of Theorem 4.3, if d8 (log(d))2 = o(n), then nĜX,n An (β̂ − β ? ) −→ N (0, σ 2 I q ). Similarly, under the assumptions and conditions of Theorem 4.5, If d8 (log(d))2 = o(n), then √ −1/2 d nĜX,n An (β̃ − β ? ) −→ N (0, σ 2 I q ). −1/2 Note that a stronger requirement on d is required to handle ĜX,n in Theorem A.6.. Below is a lemma needed for proving Theorem A.6. Lemma A.7 (Wihler (2009)). Suppose A and B are m × m symmetric positive-semidefinite matrices. Then, for p > 1, kA1/p − B 1/p kpF ≤ m(p−1)/2 kA − BkF . Specifically, for p = 2, kA1/2 − B 1/2 kF ≤ (m1/2 kA − BkF )1/2 . Proof of Theorem A.6. We only show the result on β̂. since the result on β̃ can be obtained in a sim√ −1/2 ilar way. We reuse the notations Ti ’s in the proof of Theorems 4.2, from which, nĜX,n An (β̂ n − √ −1/2 √ −1/2 −1/2 β ? ) = M + R, where M = nGX,n An (β̂ n − β ? ) and R = n(ĜX,n − GX,n )An (β̂ n − β ? ). By 29 d P Theorem 4.3, M −→ N (0, σ 2 GX ). Then, it is sufficient to show that R −→ 0 wrt k·k2 . We have √ −1/2 −1/2 R = R1 + R2 + R3 − R4 , where Ri = B n Ti for i = 1, 2, 3, 4 and B n = n(ĜX,n − GX,n )An T0−1 . We will show each Ri converges to zero in probability, which finishes the proof. Before that, −1/2 −1/2 −1/2 −1/2 we first establish an inequality for kĜX,n − GX,n kF . By Lemma A.7, kĜX,n − GX,n kF ≤ −1 √ P 1/2 . Note that, by Lemma A.3, kΣ̂ − Σ k −→ ( qkĜX,n − G−1 0 for d4 = o(n). Then, by n X F X,n kF ) −1 P 2 Lemma A.2, kĜX,n − GX,n kF ≤ kAn k2F kΣ̂n − Σ−1 X kF . kAn kF kΣ̂n − ΣX kF −→ 0. Thus, by −1 2 Lemma A.2, kĜX,n − G−1 X,n kF . kĜX,n − GX,n kF . kAn kF kΣ̂n − ΣX kF . Since q is a fixed in−1/2 √ −1/2 teger, it follows kĜX,n − GX,n kF . kAn kF ( qkΣ̂n − ΣX kF )1/2 . kAn kF (kΣ̂n − ΣX kF )1/2 . √ √ −1/2 −1/2 Consider R1 . Note that kR1 k2 ≤ n dkĜX,n − GX,n kF kAn kF kT0−1 kF,d kT1 k2 , which is . √ n(dkΣ̂n − ΣX kF )1/2 kAn k2F kT0−1 kF,d kT1 k2 . By Lemmas A.2 and A.3, dkΣ̂n − ΣX kF = oP (1) for d6 = o(n). By Assumption (F), kAn kF is bounded. By Lemmas A.2 and A.3 and Assumption (D), for d = o(n1/3 ), wpg1, kT0−1 kF,d is bounded. Also note that, wpg1, kT1 k2 ≤ √ √ s2 κn γn /n. Then, kR1 k2 . s2 κn γn / n. Thus, kR1 k2 = oP (1) for s2 = o( n/(κn γn )). Consider √ −1/2 −1/2 R2 . Note that kR2 k2 ≤ kĜX,n − GX,n kF kAn kF kT0−1 kF k nT2 k2 , which is . (d2 log(d)kΣ̂n − √ ΣX kF )1/2 kAn k2F kT0−1 kF,d (d log(d))−1/2 k nT2 k2 . By Lemmas A.2 and A.3, d2 log(d)kΣ̂n −ΣX kF is oP (1) for d8 (log(d))2 = o(n). By Assumption (F), kAn kF is O(1). By Lemmas A.2 and A.3 and As√ sumption (D), for d = o(n1/3 ), wpg1, kT0−1 kF,d is bounded. By Lemma A.4, (d log(d))−1/2 k nT2 k2 = √ P (d log(d))−1/2 k √1n Ss1 +1,n k2 is OP (1) for d = o( n). Thus, R2 −→ 0. Consider R3 and R4 . √ √ −1/2 −1/2 By s1 = o( n/(λκn )), wpg1, kR3 k2 ≤ nkĜX,n − GX,n kF kAn kF kT0−1 kF kT3 k2 , which is . √ √ n(dkΣ̂n − ΣX kF )1/2 kAn k2F kT0−1 kF,d kT3 k2 . λs1 κn / n → 0. Thus, kR3 k2 = oP (1). In the same way, kR4 k2 = oP (1). B Supplementary Materials Additional materials for Sections 1 to 4 can be found in the file of supplementary materials. References Debabrata Basu. On the elimination of nuisance parameters. Journal of the American Statistical Association, 72:355–366, 1977. Derek Bean, Peter Bickel, Noureddine El Karoui, Chinghway Lim, and Bin Yu. Penalized robust regression in high-dimension. 2012. 30 Louis H.Y. Chen, Larry Goldstein, and Qi-Man Shao. 
Normal Approximation by Stein's Method. Springer-Verlag, 2010.

Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.

Jianqing Fan and Jinchi Lv. A selective overview of variable selection in high dimensional feature space. Statistica Sinica, 20:101–148, 2010.

Jianqing Fan and Heng Peng. On non-concave penalized likelihood with diverging number of parameters. The Annals of Statistics, 32:928–961, 2004.

Jianqing Fan, Yuan Liao, and Martina Mincheva. High dimensional covariance matrix estimation in approximate factor models. 2011.

Jianqing Fan, Yingying Fan, and Emre Barut. Adaptive robust variable selection. 2012a.

Jianqing Fan, Yang Feng, and Xin Tong. A road to classification in high dimensional space: the regularized optimal affine discriminant. Journal of the Royal Statistical Society, Series B, 2012b.

Peter J. Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35:73–101, 1964.

Peter J. Huber. Robust regression: Asymptotics, conjectures and Monte Carlo. The Annals of Statistics, 1:799–821, 1973.

Johannes Jahn. Introduction to the Theory of Nonlinear Optimization. Springer Berlin Heidelberg, 2007.

J. Kiefer and J. Wolfowitz. Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. The Annals of Mathematical Statistics, 27:887–906, 1956.

Michael R. Kosorok. Bootstrapping the Grenander estimator. In Beyond Parametrics in Interdisciplinary Research: Festschrift in Honor of Professor Pranab K. Sen, pages 282–292. Institute of Mathematical Statistics, Hayward, CA, 2008.

S. Lambert-Lacroix and L. Zwald. Robust regression through the Huber's criterion and adaptive lasso penalty. Electronic Journal of Statistics, 5:1015–1053, 2011.

Tony Lancaster. The incidental parameter problem since 1948. Journal of Econometrics, 95:391–413, 2000.

Marcelo J. Moreira. A maximum likelihood method for the incidental parameter problem. The Annals of Statistics, 37(6A):3660–3696, 2009.

Jerzy Neyman and Elizabeth L. Scott. Consistent estimates based on partially consistent observations. Econometrica, 16:1–32, 1948.

Stephen Portnoy and Xuming He. A robust journey in the new millennium. Journal of the American Statistical Association, 95:1331–1335, 2000.

Albert N. Shiryaev. Probability. Springer-Verlag, second edition, 1995.

G. W. Stewart. On the continuity of the generalized inverse. SIAM Journal on Applied Mathematics, 17:33–45, 1969.

Runlong Tang, Moulinath Banerjee, and Michael R. Kosorok. Likelihood based inference for current status data on a grid: A boundary phenomenon and an adaptive inference procedure. The Annals of Statistics, 40(1):45–72, 2012.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1996.

Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.

Aad W. van der Vaart and Jon A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.

Thomas P. Wihler. On the Hölder continuity of matrix functions for normal matrices. Journal of Inequalities in Pure and Applied Mathematics, 10, 2009.

Peng Zhao and Bin Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7(Nov):2541–2563, 2006.
32 Supplementary Materials for Paper: “Partial Consistency with Sparse Incidental Parameters” by Jianqing Fan, Runlong Tang and Xiaofeng Shi C Supplement for Section 1 In this supplement, we first show the method proposed by Neyman and Scott (1948) does not work for model (1.1) and then explain which assumptions or conditions for the consistent results of the penalized methods in Zhao and Yu (2006), Fan and Peng (2004) and Fan and Lv (2011) are not satisfied for model (1.1). Although the modified equations of maximum likelihood method proposed by Neyman and Scott (1948) could handle “a number of important cases” with incidental parameters, unfortunately, it does not work for model (1.1). More specifically, consider the simplest case of model (1.1) with d = 1: Yi = µ?i + Xi β ? + i , for i = 1, 2, · · · , n, where {i } are i.i.d. copies of N (0, σ 2 ). Using the notations of Neyman and Scott (1948), the √ likelihood function for (Xi , Yi ) is pi = pi (β, σ, µi |Xi , Yi ) = ( 2πσ)−1 exp{−(2σ 2 )−1 (Yi −µ?i −Xi β)2 }, √ and the log-likelihood function is log pi = − log( 2πσ) − (2σ 2 )−1 (Yi − µ?i − Xi β)2 . Then, the score functions are 1 ∂ log pi = 2 (Yi − µ?i − Xi β)Xi , ∂β σ ∂ log pi 1 1 φi2 = = + 3 (Yi − µ?i − Xi β)2 , ∂σ σ σ ∂ log pi 1 ωi = = 2 (Yi − µ?i − Xi β). ∂µi σ φi1 = From the equation ωi = 0, we have µ̂i = Yi − Xi β. Plugging this µ̂i into φi1 and φi2 (replacing µi with µ̂i ), we obtain φi1 = 0 and φi2 = 1/σ. Then, Ei1 = Eφi1 = 0 and Ei2 = Eφi1 = 1/σ. Thus, Ei1 and Ei2 do only depend on the structural parameters (β ? and σ). However, we then have Φi1 = φi1 − Ei1 = 0 and Φi2 = φi2 − Ei2 = 0. This means Fn1 = Fn2 = 0, independent of structural parameters! Consequently, the estimation equations degenerate to two 0 = 0 equations, which means the modified equation of maximum likelihood method does not work for model (1.1). i Next, we explicitly explain which assumptions or conditions for the consistent results of the penalized methods in Zhao and Yu (2006), Fan and Peng (2004) and Fan and Lv (2011) are not valid for model (1.1). Zhao and Yu (2006) derive strong sign consistency for lasso estimator. However, their consistency results Theorems 3 and 4 do not apply to model (1.1), since the above specific design matrix X does not satisfy their regularity condition (6) on page 2546. More specifically, with model (1.1),     0 0 I X 1 s 1,s a.s.  n  −→ , C11 =  n X T Pn X X T 0 Σ 1,s i i=1 X i where ΣX is the covariance matrix of the covariates. This means that some of the eigenvalues of n goes to 0 as n → ∞. Then the regularity condition (6), which is C11 n αT C11 α ≥ a positive constant , for all α ∈ Rs+d such that kαk22 = 1, does not hold any more. Thus the consistency results Theorems 3 and 4 in Zhao and Yu (2006) is not applicable for model (1.1). Fan and Peng (2004) show the consistency with Euclidean metric of a penalized likelihood estimator when the dimension of the sparse parameter increases with the sample size in Theorem 1 on Page 935. Under their framework, the log-likelihood function of the data point Vi = (X i , Yi ) for each i from model (1.1) with random errors being i.i.d. copies of N (0, σ 2 ) is given by log fn (Vi , µi , β) ∝ − 1 (Yi − µi − X Ti β)2 , 2σ 2 where ∝ means “proportional to”. As we can see that log-likelihood functions with different i’s might different since µi ’s might be different for different i’s. This violates a condition that all the data points are i.i.d. from a structural density in Assumption (G) on Page 934. 
This violation might not be essential, however, since we could consider the log-likelihood function for all the data directly. That is, we consider Ln (µ, β) = n X log fn (Vi , µi , β) ∝ − n 1 X (Yi − µi − X Ti β)2 . 2σ 2 i=1 i=1 Then, the Fisher information matrix for (µ, β) is given by   σ −2 I n 0 , In+d (µ, β) =  0 nσ −2 Σ2X ii where In is the n × n identity matrix. Then, the Fisher information for one data point is   −1 σ −2 I n 0 1 n . In+d (µ, β) =  n 0 σ −2 Σ2 X It is clear that the minimal eigenvalue λmin (In+d (µ, β)/n) = n−1 σ 2 → 0 as n → ∞. This violates the condition that the minimal eigenvalue should be lower bounded from 0 in Assumption (F) on Page 934. Thus, the consistency result Theorem 1 in Fan and Peng (2004) can not be applied to model (1.1). Fan and Lv (2011) “consider the variable selection problem of nonpolynomial dimensionality in the context of generalized linear models” by taking the penalized likelihood approach with folded-concave penalties. Theorem 3 on page 5472 of Fan and Lv (2011) shows that there exists a consistent estimator of the unknown parameters with the Euclidean metric under certain conditions. In Condition 4 on page 5472, there is a condition on a minimal eigenvalue min λmin [X TI Σ(XI δ)X I ] ≥ cn, δ∈N0 where X I consists of the first s + d columns of the design matrix X. With model (1.1), this condition becomes λmin [X TI X I ] ≥ cn, which is n λmin [(1/n)X TI X I ] = λmin [C11 ] ≥ c, n is the matrix defined in Zhao and Yu (2006) and c is a positive constant. Since the minwhere C11 n ] converges to 0, the above condition does not hold. Thus, the consistency imal eigenvalue λmin [C11 result Theorem 3 of Fan and Lv (2011) is not applicable for model (1.1). D Supplement for Section 2 In this supplement, we provide Lemmas D.1 and D.3, Proposition D.2 and their proofs. Before that, there are two graphs, Figures 5 and 6, illustrating the incidental parameters and the step of updating the responses in the iteration algorithm with d = 1. iii n−s γn λ µ∗ {µ i∗ : s1 + 1 ≤ i ≤ s} s2 0 ∞ {µi∗ : 1 ≤ i ≤ s1} s1 −∞ − λ − µ∗ −γn Figure 5: An illustration of three types of µ?i ’s, that is, large µ?1 , bounded µ?2 and zero µ?3 . The negative half of the real line is folded at 0 under the positive half for convenience. For the penalized least square method with a soft penalty function and under the assumption of fixed d, the √ specification of the regularization parameter λ is that κn  λ, αγn ≤ λ, and λ  min{µ? , n}. Y 0 X f ( x ) = xβ ( k −1) Figure 6: An illustration for the updating of responses with d = 1. The solid black line is a fitted regression line. The dashed black lines are the corresponding shifted regression lines. The circle and diamond points are the original data points. The circle and triangle points are the updated data points. That is, the diamond points are drawn onto the shifted regression lines. iv Lemma D.1. A necessary and sufficient condition for (µ̂, β̂) to be a minimizer of L(µ, β) is that β̂ = (X T X)−1 X T (Y − µ̂), for i ∈ Iˆ0c , Yi − µ̂i − X Ti β̂ = λSign(µ̂i ), |Yi − X Ti β̂| ≤ λ, for i ∈ Iˆ0 , where Sign(·) is a sign function and Iˆ0 = {1 ≤ i ≤ n : µ̂i = 0}. Proof of Lemma D.1. By subdifferential calculus (see, for example, Theorem 3.27 in Jahn (2007)), a necessary and sufficient condition for (µ̂, β̂) to be a minimizer of L(µ, β) is that zero is in the subdifferential of L at (µ̂, β̂), which means that, for each i, β̂ = (X T X)−1 X T (Y − µ̂), Yi − µ̂i − X Ti β̂ = λSign(µ̂i ), |Yi − X Ti β̂| ≤ λ, if µ̂i 6= 0, if µ̂i = 0. 
Thus, the conclusion of Lemma D.1 follows. Proposition D.2. Suppose Assumptions (A) and (B) hold and there exist positive constants C1 and C2 such that kβ ? k2 < C1 and kβ (0) k2 < C2 wpg1. If s1 λ/n = O(1) and s2 γn /n = o(1), then, for every K ≥ 1 and k ≤ K, wpg1 as n → ∞, kβ (K+1) − β (K) k2 ≤ O((s1 /n)K ), and √ kβ (k) k2 ≤ 2 dC1 + C2 . Remark 2. For any prespecified critical value in the stopping rule, Proposition D.2 implies that the algorithm stops at the second iteration wpg1. In practice, the sample size n might not be large enough for the two-iteration estimator to have a decent performance so that more iterations are usually needed to activate the stopping rule. By Proposition D.2, K iterations will make the distance kβ (K+1) −β (K) k2 of the small order (s1 /n)K . When s1 /n is small, the algorithm converges quickly, which has been verified by our simulations. √ Proof of Proposition D.2. First, we show that, wpg1, kβ (1) k2 is bounded by 2 dC1 + C2 . For each k ≥ 1, we have Sβ (k) = SµS11 + SµS12 + SS1 β ? + SS1 + SS2 ∪S3 β (k−1) + λ(SS2 − SS3 ), v where Si = ∪3j=1 Sij (β (k−1) ) for i = 1, 2, 3 and Sij ’s are defined at the end of Section 2. Denote Ak−1 as the event ? ? ? ? ? {S11 (β (k−1) ) = ∅, S12 (β (k−1) ) = S12 , S1 (β (k−1) ) = S10 ∪ S12 ; S2 (β (k−1) ) = S21 ; S3 (β (k−1) ) = S31 }, ? ’s are defined at the beginning of Section 3. where Sij By Lemma 3.1, P (A0 ) → 1. Thus, wpg1, β (1) = T0−1 T1 + T0−1 T2 + T0−1 T3 + T0−1 T4 (β (0) ) + T0−1 T5 , where T0 = S/n, T1 = SµS ? /n, T2 = Ss1 +1,n β ? /n, T3 = Ss1 +1,n /n, T4 (β (0) ) = S1,s1 β (0) /n and T5 = 12 √ −1 −1 −1 ? − SS ? )λ/n. We will show that, wpg1, kT (SS21 0 T1 k2 ≤ C2 /4, kT0 T2 k2 ≤ 2 dC1 , kT0 T3 k2 ≤ 31 C2 /4, kT0−1 T4 (β (0) )k2 ≤ C2 /4 and kT0−1 T5 k2 ≤ C2 /4. Then, wpg1, kβ (1) k2 ≤ 5 X √ kT0−1 Ti k2 ≤ 2 dC1 + C2 . i=1 On T0−1 T1 . For s2 γn /n = o(1), wpg1, 1 s2 1 kT0−1 T1 k2 ≤ k( S)−1 kF k SµS ? k2 ≤ 4kΣ−1 γn → 0. X kF EkX 0 k2 12 n n n Thus, wpg1, kT0−1 T1 k2 ≤ C2 /4. On T0−1 T2 . Wpg1, √ 1 1 kT0−1 T2 k2 ≤ k( S)−1 Ss1 +1,n kF kβ ? k2 ≤ 2kI d kF C1 = 2 dC1 . n n On T0−1 T3 . Wpg1, 1  P kT0−1 T3 k2 ≤ 2kΣ−1 X kF k Ss1 +1,n k2 −→ 0. n Thus, wpg1, kT0−1 T3 k2 ≤ C2 /4. On T0−1 T4 (β (0) ). For s1 /n = o(1), kT0−1 T4 (β (0) )k2 ≤ s1 √ s1 1 −1 1 P k( S) S1,s1 kF kβ (0) k2 ≤ 2 dC2 −→ 0. n n s1 n Thus, wpg1, kT0−1 T4 (β (0) )k2 ≤ C2 /4. On T0−1 T5 . For s1 λ/n = O(1), wpg1, kT0−1 T5 k2 ≤ 2kΣ−1 X kF s1 λ 1 1 P ? k2 + k ? k2 ) −→ 0. (k SS21 SS31 n s1 s1 Thus, wpg1, kT0−1 T5 k2 ≤ C2 /4. vi Next, consider kβ 2 − β 1 k2 . Since β (1) is bounded wpg1, by Lemma 3.1, A1 occurs wpg1. Then, β (2) = T0−1 T1 + T0−1 T2 + T0−1 T3 + T0−1 T4 (β (1) ) + T0−1 T5 , where T4 (β (1) ) = (1/n)S1,s1 β (1) . Thus, wpg1, β (2) − β (1) = S−1 S1,s1 (β (1) − β (0) ). It follows that, for s1 = o(n), wpg1, √ √ kβ (2) − β (1) k2 ≤ kS−1 S1,s1 kF kβ (1) − β (0) k2 ≤ (2 ds1 /n)(4 dC1 + 2C2 ) → 0. Then, wpg1, β (2) = β (1) , which means that, wpg1, the iteration algorithm stops at the second iteration. Finally, for any K ≥ 1, repeat the above arguments. Then, with at least probability pn,K = TK P ( k=0 Ak ), which increases to one by Lemma 3.1, we have √ √ kβ (K+1) − β (K) k2 ≤ (2 ds1 /n)K (4 dC1 + 2C2 ) = O((s1 /n)K ) → 0, √ and kβ (k) k2 ≤ 2 dC1 + C2 for all k ≤ K. Lemma D.3. A necessary and sufficient condition for (µ̂, β̂) to be a minimizer of L(µ, β) is that it is a solution to equations (2.5) and (2.6). Proof of Lemma D.3. First, we show a solution of (2.5) and (2.6) satisfies the necessary and sufficient condition in Lemma D.1. 
Denote a solution of (2.5) and (2.6) as (µ̂, β̂). Then β̂ = (X T X)−1 X T (Y − µ̂), which is exactly the first condition in Lemma D.1, and, for each i = 1, 2, · · · , n, (µ̂, β̂) satisfies one of three cases: |Yi − X Ti β̂| ≤ λ and µ̂i = 0; Yi − X Ti β̂ > λ and µ̂i = Yi − X Ti β̂ − λ; Yi − X Ti β̂ < −λ and µ̂i = Yi − X Ti β̂ + λ. If (µ̂, β̂) satisfies the first case, it satisfies the third condition in Lemma D.1. If (µ̂, β̂) satisfies the second case, then µ̂i > 0 and Yi − µ̂i − X Ti β̂ = λ = λSign(µ̂i ), which means that the second case satisfies the second condition in Lemma D.1. Similarly, the third case also satisfies the second condition in Lemma D.1. Thus (µ̂, β̂) satisfies the necessary and sufficient condition in Lemma D.1. In the other direction, suppose (µ̂, β̂) satisfies the necessary and sufficient condition in Lemma D.1. Then, the first condition in Lemma D.1 exactly (2.5). For each i, (µ̂, β̂) satisfies one of three cases: µ̂i = 0 and |Yi − X Ti β̂| ≤ λ; µ̂i > 0 and Yi − µ̂i − X Ti β̂ = λ; µ̂i < 0 and Yi − µ̂i − X Ti β̂ = −λ. If (µ̂, β̂) satisfies the first case, it satisfies the first case in (2.6). If (µ̂, β̂) satisfies the second case, vii then µ̂i = Yi − X Ti β̂ − λ and Yi − X Ti β̂ > λ, which means that (µ̂, β̂) satisfies the second case of (2.6). Similarly, If (µ̂, β̂) satisfies the third case, then it satisfies the third case of (2.6). Thus, (µ̂, β̂) satisfies (2.5) and (2.6). E Supplement for Section 3 In this supplement, we provide the proofs of the theoretical results in Section 3. Before that, we point out that those two different sufficient conditions in Theorem 3.2 come from the different analysis on the term SµS ? . Each of the two different sufficient conditions does not imply the other. 12 Specifically, on one hand, suppose the absolute values of µ?i ’s are all equal for i = s1 +1, s2 +2, · · · , s. P (2+δ)/2 ? 2+δ Then, kµ?2 k2+δ = s2 |µs | and si=s1 +1 |µ?i |2+δ = s2 |µ?s |2+δ . Thus Assumption (C) holds 2 automatically since s2 → ∞. This means that Assumption (C) holds at least when the absolute magnitudes of µ?i ’s are similar to each other. For this case, there still exists a consistent estimator even if n/(κn γn )  s2  n. On the other hand, suppose µ?s = γn and the other µ?i ’s are all equal to P = [γn2 +(s2 −1)c2 ](2+δ)/2 and si=s1 +1 |µ?i |2+δ = γn2+δ +(s2 −1)c2+δ . a constant c > 0. Then, kµ?2 k2+δ 2 If s2  γn2  n/(κn γn ), the previous two terms are both asymptotically equivalent to γn2+δ . Thus Assumption (C) fails but the other sufficient condition holds. Proof of Lemma 3.1. The proof is the similar to that of Lemma 4.1 and omitted. Proof of Theorems 3.2. By Lemma 3.1, wpg1, the solution β̂ n to ϕn (β) = 0 on BC (β ? ) is explicitly given by β̂ n = β ? + T0−1 (T1 + T2 + T3 − T4 ), ? and T4 = (λ/n)SS ? . where T0 = (1/n)Ss1 +1,n , T1 = (1/n)SµS ? , T2 = (1/n)Ss1 +1,n , T3 = (λ/n)SS21 31 12 P P We will show that T0 −→ Σ−1 X > 0 with the Frobenius norm and Ti −→ 0 with the Euclidean norm for i = 1, 2, 3, 4. Thus, by Slutsky’s lemma (see, for example, Lemma 2.8 on page 11 of van der Vaart (1998)), β̂ n is a consistent estimator of β ? . P On T0−1 . By law of large number, T0 −→ ΣX > 0. Then, by continuous mapping theorem, P T0−1 −→ Σ−1 X > 0. On T1 : Approach One. Suppose s2 = o(n/(κn γn )). Then, kT1 k2 ≤ s s 1 X 1 X kX i µ?i k2 = kX i k2 · |µ?i | ≤ s2 κn γn /n = o(1). n n i=s1 +1 i=s1 +1 viii On T1 : Approach Two. Under Assumption (C), it follows (ΣX ?2 −1/2 Sµ ? i=s1 +1 µi ) S12 Ps d −→ N (0, Id ). 
In fact, Assumption (C) implies the Lyapunov condition for sequence of random vectors (see, e.g. Proposition 2.27 on page 332 of (van der Vaart, 1998)). More specifically, recall the Lyapunov condition is that there exists some constant δ > 0 such that s X s X Ek(ΣX i=s1 +1 −1/2 µ?2 X i µ?i k2+δ → 0. j ) 2 j=s1 +1 Then, by Assumption (C), s X Ek(ΣX i=s1 +1 s X 1 −2 µ?2 X i µ?i k2+δ ≤( j ) 2 j=s1 +1 s X − µ?2 j ) 2+δ 2 j=s1 +1 s X − 2+δ −→ 0, |µ?i |2+δ λmin2 EkX0 k2+δ 2 i=s1 +1 where λmin > 0 is the minimum eigenvalue of ΣX . Then, s s X X 1 µ 1 ?2 1/2 −1/2 µ kT1 k2 = k SS ? k2 ≤ k(ΣX µi ) kF k(ΣX µ?2 SS ? k2 i ) 12 n 12 n i=s1 +1 = 1 ( n s X i=s1 +1 1/2 1/2 µ?2 kΣX kF OP (1) ≤ i ) i=s1 +1 1 1 (s2 γn2 )1/2 OP (1) ≤ √ γn OP (1) = oP (1). n n On T2 . By law of large number, T2 = oP (1). √ On T3 and T4 . By noting λ  n, √ s1 1 1 λ ? ? k2 ≤ √ OP (1) = oP (1). kT3 k2 = kλ SS21 k2 = λ k √ SS21 n n s1 n Thus T3 = oP (1). In the same way, we can show that T4 = oP (1) holds. In Theorem, one condition is Dn /n = o(1). It turns out we can consider other conditions on Dn and derive more possible asymptotic distributions for β̂ n . Theorem E.1 (Asymptotic Distributions on β̂ n : more cases). Under Assumptions (A), (B) and (C), for all constants b, c ∈ R+ , √ d (1) when s1  n/λ2 and Dn2 /n = o(1), n(β̂ n − β ? ) −→ N (0, σ 2 Σ−1 X ); [main case] √ d (2) when s1  n/λ2 and Dn2 /n ∼ c, n(β̂ n − β ? ) −→ N (0, (c + σ 2 )Σ−1 X ); d √ d √ (3) when s1  n/λ2 and Dn2 /n → ∞, rn (β̂ n − β ? ) −→ N (0, Σ−1 X ), where rn ∼ n/Dn  √ d (4) when s1 ∼ bn/λ2 and Dn2 /n = o(1), n(β̂ n − β ? ) −→ N (0, (b + σ 2 )Σ−1 X ); √ d (5) when s1 ∼ bn/λ2 and Dn2 /n ∼ c, n(β̂ n − β ? ) −→ N (0, (b + c + σ 2 )Σ−1 X ); (6) when s1 ∼ bn/λ2 and Dn2 /n → ∞, rn (β̂ n − β ? ) −→ N (0, Σ−1 X ), where rn ∼ n/Dn  ix n; n; d (7) when s1  n/λ2 and Dn2 /n = o(1) or Dn2 /n ∼ c, rn (β̂ n − β ? ) −→ N (0, Σ−1 X ), where rn ∼ √ √ n/(λ s1 )  n; √ √ √ (8) when s1  n/λ2 and Dn2 /n → ∞, letting rn ∼ min{ bn/(λ s1 ), n/Dn }  n, √ √ d bn/(λ s1 )  n/Dn , then rn (β̂ n − β ? ) −→ N (0, Σ−1 X ); √ √ d (8b) if bn/(λ s1 ) ∼ n/Dn , then rn (β̂ n − β ? ) −→ N (0, (1 + b)Σ−1 X ); √ √ d (8c) if bn/(λ s1 )  n/Dn , then rn (β̂ n − β ? ) −→ N (0, bΣ−1 X ). (8a) if Theorem 3.3 groups the results according to the asymptotic magnitude of s1 given an upper bound of the diverging speed of s2 . Alternatively, Theorem E.1 groups the results according to the asymptotic magnitudes of s1 and Dn2 . Since both s1 and Dn2 have three cases, Theorem E.1 basically contains nine cases. For the last case, there are further three cases on the relationship √ √ between bn/(λ s1 ) and n/Dn . As in Theorem 3.3, the first case of Theorem E.1 is denoted as the main case since for this case the incidental parameters are sparse in the sense that the size and magnitude of the nonzero incidental parameters µ?1 and µ?2 are well controlled. Note that √ s2 = o( n/(κn γn )) implies Dn2 /n = o(1). which means that, under Assumption (C), the cases (1), (4) and (7) of Theorem E.1 actually imply the three results of Theorem 3.3. As in Theorem 3.3, √ the convergence rate of β̂ n becomes less than n when s1  n/λ2 or Dn2 /n → ∞, that is, when the size and magnitude of the nonzero incidental parameters are large; the boundary phenomenon also appears. Proof of Theorems 3.3 and E.1. It is sufficient to provide the proof for the case where the sizes of ? = {1 ≤ i ≤ s : µ? > 0} and S ? = {1 ≤ i ≤ s : µ? < 0} are both asymptotically index sets S21 1 1 31 i i s1 /2 and b = 2. 
From the proof of Theorems 3.2, wpg1, µ  ? ? β̂ = β ? + S−1 s1 +1,n [SS ? + Ss1 +1,n + λ(SS21 − SS31 )]. 12 Let rn be a sequence going to infinity. Then, rn (β̂ n −β ? ) = T0−1 (V1 +V2 +V3 −V4 ), where V1 = rn T1 , V2 = rn T2 , V3 = rn T3 , V4 = rn T4 and Ti ’s are defined in the proof of Theorem 3.2. Next we derive the asymptotic properties of T0 and Vi ’s, from which the desired results follow by Slutsky’s lemma. P On T0 . By the proof of Theorem 3.2, T0−1 −→ Σ−1 X √ √ On V1 : Approach One. If rn = n and s2 = o( n/(κn γn )), then s 1 µ 1 X 1 1 kT1 k2 = krn SS ? k2 ≤ rn kX i k2 · |µ?i | ≤ rn s2 κn γn = √ s2 κn γn = o(1). 12 n n n n i=s1 +1 x √ n and s2 = o( n/(κn γn )), then T1 = oP (1). √ On V1 : Approach Two. If rn = n, then Thus, if rn = √ n or rn  √ 1 Dn 1 µ Dn 1 µ T1 = rn SµS ? = rn SS ? = √ S ?, n 12 n Dn 12 n Dn S12 P 1/2 . There are three cases on D /√n or D 2 /n. If D 2 /n → 0, where Dn = kµ?2 k2 = ( si=s1 +1 µ?2 n n n i ) √ d P 2 2 then T1 −→ 0. If Dn /n → 1, then T1 −→ N (0, ΣX ). If Dn /n → ∞, it means that rn = n is too √ p √ d fast. Let rn ∼ n/Dn = n n/Dn2  n. Then T1 −→ N (0, ΣX ); √ √ √ d P On V2 . If rn = n, then T2 −→ N (0, σ 2 ΣX ). Thus, if rn  n, T2 −→ 0; if rn  n; P T2 −→ ∞. On V3 and V4 . First consider T3 . Denote #(·) as the size function. If rn = r s1 /2 1 1 ? = λ p S ?. T3 = λrn SS21 ? ) S21 n n #(S21 √ n, then p p P ? ) = s /2. There are three cases on λ s /(2n). If λ s /(2n) → 0, then T −→ Note that #(S21 0. 1 1 1 3 p p d Note that λ s1 /(2n) → 0 is equivalent to s1 = o(2n/λ2 ). If λ s1 /(2n) → 1, then T3 −→ p p N (0, ΣX ). Note that λ s1 /(2n) → 1 is equivalent to s1 ∼ 2n/λ2 . If λ s1 /(2n) → ∞, it means p √ √ √ √ √ rn = n is too large. Let rn ∼ n/(λ (s1 /2)) = n 2n/(λ s1 )  n. With this rate rn , p d T3 −→ N (0, ΣX ). Note that λ s1 /2n → ∞ is equivalent to s1  O(2n/λ2 ). In the same way, T4 can be analyzed and parallel results can be obtained. Proof of Theorem 3.4. The proof is similar to that of Theorem 4.4 and omitted. E.1 Supplement for Subsection 3.1 The following Theorem implies Theorem 3.5 since it contains more details. Theorem E.2 (Consistency and Asymptotic Normality on β̃). Suppose Assumptions (A) and (B) √ P hold. If either s2 = o(n/(κn γn )) or Assumption (C) holds, then β̃ −→ β ? . If s2 = o( n/(κn γn )), √ d then n(β̃ − β ? ) −→ N (0, σ 2 Σ−1 X ). On the other hand, under Assumption (C), (1) if Dn2 /n = o(1), then (2) if Dn2 /n ∼ c, then √ d n(β̃ − β ? ) −→ N (0, σ 2 Σ−1 X ); [main case] √ d + n(β̃ − β ? ) −→ N (0, (c + σ 2 )Σ−1 X ), for every constant c ∈ R ; d (3) if Dn2 /n → ∞, then rn (β̃ − β ? ) −→ N (0, Σ−1 X ) where rn ∼ n/Dn  xi √ n. Proof of Theorem E.2. Denote I0 = {s1 + 1, s1 + 2, · · · , s = s1 + s2 , s + 1, · · · , n}. Note that √ s2 = o( n/(κn γn )) ensures that β̂ is consistent by Theorem 3.2. By Theorem 3.4, P {Iˆ0 = I0 } goes to 1. Then, β̃ = R1 + R2 + T0−1 (T1 + T2 ), where R1 = (X TIˆ X Iˆ0 )−1 X TIˆ Y Iˆ0 {Iˆ0 6= I0 } and R2 = −(X TI0 X I0 )−1 X TI0 Y I0 {Iˆ0 6= I0 } and Ti ’s are 0 0 defined in the proof of Theorem 3.2. The proof for the consistency is similar to that of Theorem 3.2 and is omitted. Next we show the asymptotic normality. We have, rn (β̃ − β ? ) = rn R1 + rn R2 + T0−1 (V1 + V2 ), √ where Vi ’s are defined in the proof of Theorem E.1. Since P ( nR1 = 0) ≥ P {Iˆ0 = I0 } → 1, we √ √ have nR1 = oP (1). Similarly, nR2 = oP (1). From the analysis on Vi ’s in the proof of Theorem E.1, the asymptotic distributions follows by Slutsky’s lemma. Lemma E.3 (Consistency on σ̂). 
Suppose Assumptions (A) and (B) hold and either s2 = o(n/(κn γn )) P or Assumption (C) holds. If s2 = o(n/γn2 ), then σ̂ −→ σ. Proof of Lemma E.3. When Assumption (C) or s2 = o(n/(κn γn )) holds, the penalized estimators β̂ and β̃ are consistent estimators of β ? by Theorems 3.2 and 3.5. Denote C = {Iˆ0 = I0 }. By Theorem 3.4, C occurs wpg1. Then, σ̂ 2 = T C + σ̂ 2 C c , where T = an kY I0 − X TI0 β̃k22 and an = 1/(n − s1 ). It P P P is sufficient to show T −→ σ 2 . We have T = 6i=1 Ti , where T1 = an ni=s1 +1 [X Ti (β ? − β̃)]2 , T2 = Ps P P P ? T an ni=s1 +1 2i , T3 = 2an ni=s1 +1 X Ti (β ? − β̃)i , T4 = an si=s1 +1 µ?2 i , T5 = 2an i=s1 +1 µi X i (β − P P P β̃) and T6 = 2an si=s1 +1 µ?i i . It is straightforward to show that T2 −→ σ 2 and each other Ti −→ 0 P under the condition s2 = o(n/γn2 ) and by noting that β̃ −→ β ? . Then σ̂ is a consistent estimator of σ. E.2 Supplement for Subsection 3.2 In this supplement, we consider a special case with exponentially tailed covariates and errors. For convenience, we first introduce the definition of Orlicz norm and related inequalities. For a strictly increasing and convex function ψ with ψ(0) = 0, the Orlicz norm of a random variable Z with respect to ψ is defined as kZkψ = inf{C > 0 : Eψ(|Z|/C) ≤ 1}. Then, for each x > 0, P (|Z| > x) ≤ 1/ψ(x/kZkψ ). xii (E.1) (See Page 96 of van der Vaart and Wellner (1996)). Next, we introduce a lemma on Orlicz norm with ψ1 . Suppose {Zi }ni=1 is a sequence of random variables and {Z i }ni=1 is a sequence of d-dimensional random vectors with Z i = (Zi1 , Zi2 , · · · , Zid )T . From Lemma 8.3 on Page 131 of Kosorok (2008), we have the following extension. Lemma E.4. If for each 1 ≤ i ≤ n and 1 ≤ j ≤ d, x2 1 x2 1 } and P (|Zij | > x) ≤ c exp{− · }, P (|Zi | > x) ≤ c exp{− · 2 ax + b 2 ax + b with a, b ≥ 0 and c > 0, then k max |Zi k|ψ1 ≤ K{a(1 + c) log(1 + n) + 1≤i≤n p p b(1 + c) log(1 + n)}, p p √ k max kZ i k2 kψ1 ≤ K{a d(1 + cd) log(1 + n) + bd(1 + cd) log(1 + n)}. 1≤i≤n where K is a universal constant which is independent of a, b, c, {Zi } and {Z i }. Proof of Lemma E.4. The proof for random variables {Zi } is the same to the proof of Lemma 8.3 on Page 131 of Kosorok (2008). For random vectors {Z i }, d X √ √ 1 x2 P (kZ i k2 ≥ x) ≤ P ( max |Zij | > x/ d) ≤ P (|Zij | > x/ d) ≤ c0 exp{− 0 }, 1≤j≤d 2 a x + b0 j=1 √ where a0 = a d, b0 = bd and c0 = cd. Then, by the result on random variables, the desired result on random vectors follows. Now, suppose, for every x > 0, 1 x2 1 x2 P (|i | > x) ≤ c1 exp{− · } and P (|Xij | > x) ≤ c2 exp{− · }, 2 a1 x + b1 2 a2 x + b2 (E.2) with ai , bi ≥ 0 and ci > 0 for i = 1, 2. By Lemma E.4, it follows k max |i k|ψ1 ≤ K{a1 (1 + c2 ) log(1 + n) + 1≤i≤n p b1 (1 + c1 ) p log(1 + n)}, p p √ k max kX i k2 kψ1 ≤ K{a2 d(1 + c2 d) log(1 + n) + b2 d(1 + c2 d) log(1 + n)}. 1≤i≤n p Thus, from the inequality (E.1), if a1 > 0, let γn  log(n); otherwise, let γn  log(n). p Similarly, if a2 > 0, let κn  log(n); otherwise, let κn  log(n). Then, such γn and κn satisfy the condition (2.2). Suppose both a1 and a2 are positive, which means both i and Xij ’s have exponential tails. As before, set κn = γn = log(n)τn . For this case, the regularization parameter √ specification (2.4) becomes log(n)τn  λ  min{µ? , n}. xiii At the end of this supplement, we simply list explicit expressions of κn under different assumptions on the covariates for the case with a diverging number of covariates, which are the extension of the results in Section 3.2. 
The magnitude of κn becomes larger than that for the case with d √ fixed while γn keeps the same. Specifically, if X 0 is bounded with CX > 0, then κn = dCX . q 2 [(3/2) log(d) + log(n)]. If the If X 0 follows a Gaussian distribution N (0, ΣX ), then κn = 2dσX P Orlicz norm kX0j kψ exists for 1 ≤ j ≤ d and their average (1/d) dj=1 kX0j kψ is bounded, then κn  dψ −1 (n); for instance, if ψ = ψp with p ≥ 1, then κn  d(log(n))1/p . Finally, if the data {X i } satisfies the right inequality of (E.2) with a2 > 0, that is, each component of X i is sub-exponentially tailed, then κn  d3/2 log(n). It is worthwhile to note that these expressions of κn depend on a factor involving the diverging number of covariates d, which will influence the specification of the regularization parameter and the sufficient conditions of all the theoretical results in Section 4. F Supplement for Section 4 In this supplement, we provide Proposition D.2 and its proof, the proofs of the lemmas in the appendix and some additional results. We first extend Proposition D.2 to the case with d → ∞ and d  n. Before that, we list two simple lemmas for a diverging d. Suppose {ξ i } is a sequence of i.i.d. copies of ξ 0 , a d-dimensional P random vector with mean zero. Denote σ̄ξ2 = (1/d) dj=1 Var[ξ0j ]. Lemma F.1. Suppose σ̄ξ2 is bounded. If d/n = o(1), then n k 1X P ξ i k2 −→ 0. n i=1 Lemma F.2. Suppose σ̄ξ2 is bounded. If d/n = o(1), then n 1X P kξ i k2 − P kξ 0 k2 −→ 0. n i=1 Suppose the specification of the regularization parameter is given by dκn  λ, αγn ≤ λ, and λ  µ? , (F.1) where α is a constant greater than 2. Proposition F.3. Suppose assumptions (D) and (G) hold and the regularization parameter satisfies √ √ (F.1). Suppose there exist constants C1 and C2 such that kβ ? k2 < C1 d and kβ (0) k2 < C2 d xiv √ √ wpg1. If the regularization parameter satisfies (2.4), s1 λκn /(n d) = o(1) and s2 κn γn /(n d) = o(1), then, for every K ≥ 1, with at least probability pn,K which increases to one as n → ∞, √ kβ (K+1) − β (K) k2 ≤ O(( ds1 κ2n /n)K d) and kβ (k) k2 ≤ (2C1 + C2 )d for all k ≤ K. Specifically, wpg1, the iterative algorithm stops at the second iteration. Proof of Proposition F.3. Reuse the notations in the proof of Lemma D.2. First, we show that, wpg1, kβ (1) k2 ≤ (2C1 + C2 )d. For each k ≥ 1, Sβ (k) = SµS11 + SµS12 + SS1 β ? + SS1 + SS2 ∪S3 β (k−1) + λ(SS2 − SS3 ), Since the regularization parameter satisfies (F.1), it is easy to check that the conclusion of Lemma 4.1 continues to hold, which implies P (A0 ) → 1. Thus, wpg1, β (1) = T0−1 T1 + T0−1 T2 + T0−1 T3 + T0−1 T4 (β (0) ) + T0−1 T5 . We will show that, wpg1, kT0−1 T1 k2 ≤ (C2 /4)d, kT0−1 T2 k2 ≤ 2C1 d, kT0−1 T3 k2 ≤ (C2 /4)d, kT0−1 T4 (β (0) )k2 ≤ (C2 /4)d, kT0−1 T5 k2 ≤ (C2 /4)d. Thus, wpg1, kβ (1) k2 ≤ 5 X kT0−1 Ti k2 ≤ (2C1 + C2 )d. i=1 √ On T0−1 T1 . Under Assumption (D), for s2 κn γn /(n d) = o(1), wpg1, 1 1 s2 kT0−1 T1 k2 ≤ k( S)−1 kF k SµS ? k2 ≤ 2kΣ−1 X kF,d √ κn γn d → 0. n n 12 n d Thus, wpg1, kT0−1 T1 k2 ≤ C2 d/4. On T0−1 T2 . Wpg1, 1 1 kT0−1 T2 k2 ≤ k( S)−1 Ss1 +1,n kF kβ ? k2 n n √ √ 1 1 ≤ kI d kF C1 d + k( S)−1 S1,s1 kF C1 d n n √ 1 1 −1 ≤ C1 d + k( S) kF k S1,s1 kF C1 d, n n xv and s 1 1X s1 1 kX i k22 ≤ κ2n . k S1,s1 kF = n n n i=1 Thus, Under Assumption (D), for s1 κ2n /n = o(1), wpg1, √ ds1 2 √ −1 −1 kT0 T2 k2 ≤ C1 d + 2kΣX kF,d κn C1 d ≤ 2C1 d. n On T0−1 T3 . Under assumptions (D) and (G), for log(d)/n = o(1), wpg1, √ 1 p 1 1 d√ d log(d)k( S)−1 kF,d (d log(d))−1/2 k √ Ss1 +1,n k2 n n n p d log(d) P √ ≤ 2kΣ−1 X kF,d OP (1) −→ 0. 
n kT0−1 T3 k2 = Thus, wpg1, kT0−1 T3 k2 ≤ C2 d/4. On T0−1 T4 (β (0) ). Under Assumption (D), for s1 κ2n /n, wpg1, √ 1 1 dk( S)−1 kF,d k S1,s1 kF kβ (0) k2 n n √ s1 2 √ P −1 ≤ d2kΣX kF,d κn C2 d −→ 0. n kT0−1 T4 (β (0) )k2 ≤ Thus, wpg1, kT0−1 T4 (β (0) )k2 ≤ C2 d/4. √ On T0−1 T5 . Under Assumption (D), for s1 κn λ/(n d) = o(1), wpg1, √ λ 1 ? k2 + kSS ? k2 ) dk( S)−1 kF,d (kSS21 31 n n √ λ ≤ d2kΣ−1 X kF,d s1 κn ≤ C2 d/4. n kT0−1 T5 k2 ≤ Next, consider kβ 2 − β 1 k2 . Since β (1) ≤ (2C1 + C2 )d wpg1, the conclusion of Lemma 4.1 holds, which implies A1 occurs wpg1. Then, β (2) = T0−1 T1 + T0−1 T2 + T0−1 T3 + T0−1 T4 (β (1) ) + T0−1 T5 , where T4 (β (1) ) = 1 S1,s1 β (1) . n Thus, wpg1, β (2) − β (1) = S−1 S1,s1 (β (1) − β (0) ). xvi Thus, for d3/2 s1 κ2n /n = o(1), wpg1, √ 1 1 dk S−1 kF,d k S1,s1 kF kβ (1) − β (0) k2 n n √ s1 2 −1 ≤ 2kΣX kF,d d κn (2C1 + C2 )d . d3/2 s1 κ2n /n → 0. n kβ (2) − β (1) k2 ≤ Thus, wpg1, β (2) = β (1) , which means that, wpg1, the iteration algorithm stops at the second iteration. T For any K ≥ 1, repeating the above arguments, with at least probability pn,K = P ( K k=0 Ak ), which increases to one, we have β (k) ≤ (2C1 + C2 )d for k ≤ K and √ s1 2 K √ kβ (K+1) − β (K) k2 ≤ (2kΣ−1 κn ) (2C1 + C2 )d . ( ds1 κ2n /n)K d → 0. X kF,d d n This completes the proof. Next, we provide the proofs of Lemmas A.2, A.3 and A.4 in the appendix. √ P Proof of Lemma A.2. Let E = An − A. Note that rd ≥ 1/ d. Then, rd kEkF −→ 0 implies P kEkF,d −→ 0. Thus, wpg1, kEkF,d is bounded by a constant C > 0. By Lemma A.1, −1 −1 kA−1 n − A kF,d ≤ kA kF,d kEkF,d kA−1 kF,d kEkF,d ≤ C2 . −1 1 − CkEkF,d 1 − kA kF,d kEkF,d Therefore, rd kEkF P −→ 0. 1 − CkEkF,d −1 2 rd kA−1 n − A kF ≤ C This completes the proof. Proof of Lemma A.3. For any δ > 0, we have P (kΣ̂n − ΣX kF > δ) ≤ d X d X d2 k=1 l=1 δ n P( 2 1X d4 1 2 Xik Xil − σkl )2 ≤ σ̄ . n n δ 2 XX i=1 2 r 2 d4 /(nδ 2 ) = o(1) by Assumption (E2) and for r 2 d4 /n → 0. Thus, P (rd kΣ̂n − ΣX kF > δ) ≤ σ̄XX d d Thus, Σ̂n is a consistent estimator of ΣX wrt rd k·kF . Proof of Lemma A.4. Let αd = n √ d log d and C1 ≥ √ d 2σξ,max . Then n X 1 X 1 X ξij αd C1 P (k √ ξ i k2 > αd C1 ) ≤ P (| √ | > √ ), σ n n σj d i=1 j=1 i=1 j xvii where σj is the standard deviation of ξ0j . By Berry and Esseen Theorem (see, for example, P375 √ P in Shiryaev (1995)), there exists a constant C2 > 0 such that P (k(1/ n) ni=1 ξ i k2 > αd C1 ) ≤ T1 + 2T2 , where T1 = d X P (|N (0, 1)| > j=1 αd C1 √ ), σj d T2 = d X C2 E|ξ0j |3 √ . σj3 n j=1 By noting d2 = o(n), √ σξ,max d αd C1 αd C1 √ ) < 2d √ ) → 0, φ( T1 ≤ P (|N (0, 1)| > α d C1 σξ,max d σξ,max d j=1 d X T2 ≤ d X C2 γξ,max C2 γξ,max √ = d 3 √ → 0. 3 σ n σmin n j=1 ξ,min √ P Therefore, k(1/ n) ni=1 ξ i k2 = OP (αd ). Next result is on the consistency of the penalized two-step estimator β̃. Theorem F.4 (Consistency on β̃). Suppose the assumptions and conditions of Theorem 4.2 hold. √ P If rd ≥ 1/ d, then β̃ −→ β ? wrt rd k·k2 . P Proof of Theorems F.4. By Theorem 4.2, β̂ −→ β ? wrt rd k·k2 . By Theorem 4.4, P {Iˆ0 = I0 } → 1 √ for rd ≥ 1/ d, where I0 = {s1 + 1, s1 + 2, · · · , s = s1 + s2 , s + 1, · · · , n}. Then, wpg1, β̃ − β ? = R1 + R2 + T0−1 T1 + T0−1 T2 , where R1 = (X TIˆ X Iˆ0 )−1 X TIˆ Y Iˆ0 {Iˆ0 6= I0 }, R2 = −(X TI0 X I0 )−1 X TI0 Y I0 {Iˆ0 6= I0 } and Ti ’s are 0 0 defined in the proof of Theorem 4.2. Then, √ √ rd kβ̃ − β ? k2 ≤ rd kR1 k2 + rd kR2 k2 + kT0−1 kF,d rd dkT1 k2 + kT0−1 kF,d rd dkT2 k2 . Since P (kR1 k2,d = 0) ≥ P {Iˆ0 = I0 } → 1, we have R1 = oP (1). Similarly, R2 = oP (1). 
By the proof √ P P of Theorem 4.2, kT0−1 kF,d is bounded and rd dkTi k2 −→ 0 for i = 1, 2. Thus, β̃ −→ β ? wrt rd k·k2 √ and rd ≥ 1/ d. Finally, we provide some additional results on the asymptotic distributions of β̂ and β̃ with a √ different scaling. Specifically, the scaling in Section 4 is nAn . Next, we consider another natural √ 1/2 scaling nAn ΣX . xviii Theorem F.5 (Asymptotic Distribution on β̂). Suppose assumptions (D’), (D”), (E), (F) and √ √ (G) hold. If d6 log d = o(n), s1 = o( n/(λdκn )) and s2 = o( n/(dκn γn )), then √ d 1/2 nAn ΣX (β̂ n − β ? ) −→ N (0, σ 2 G). Theorem F.6 (Asymptotic Distribution on β̃). Suppose the assumptions and conditions of Theo√ rem F.5 hold except the condition s1 = o( n/(λdκn )). Then √ d 1/2 nAn ΣX (β̃ − β ? ) −→ N (0, σ 2 G). By Theorems F.5 and F.6, Wald-type confidence regions can be constructed. In order to validate these confidence regions with estimated σ and ΣX , we need Lemma A.5 and the following result. Theorem F.7 (Asymptotic Distributions on β̂ and β̃ with Σ̂n ). Suppose the assumptions and conditions of Theorem F.5 hold. If d9 (log(d))2 = o(n), then √ 1/2 d nAn Σ̂n (β̂ − β ? ) −→ N (0, σ 2 G). Similarly, suppose the assumptions and conditions of Theorem F.6 hold. If d9 (log(d))2 = o(n), then √ 1/2 d nAn Σ̂n (β̃ − β ? ) −→ N (0, σ 2 G). Remark 3. A comparison of the assumptions and conditions of Theorem F.7 with those of Theorems F.5 and F.6 reveals that a much stronger requirement on d is needed to ensure Σ̂n is a good estimator of ΣX . Precisely, the former require that d9 (log(d))2 = o(n) and the latter d6 log(d) = o(n). This stronger requirement on d is a price paid for estimating ΣX . Remark 4. The condition on the dimension d in Theorems 4.3 and 4.5 is d5 log(d) = o(n), slightly weaker than the condition d6 log(d) = o(n) in Theorems F.5 and F.6. Accordingly, The condition on the dimension d in Theorem A.6 is d8 (log(d))2 = o(n), slightly weaker than the condition √ d9 (log(d))2 = o(n) in Theorem F.7. This means that the scaling nAn is slightly better than the √ 1/2 scaling nAn ΣX in terms of the condition on d. Further, the former scaling is more suitable for constructing confidence regions for some entries of β ? . At the end of this supplement, we provide the proofs of the above theorems. Proof of Theorems F.5. Reuse the notations Ti ’s in the proof of Theorems 4.2, from which, √ 1/2 nAn ΣX (β̂ n − β ? ) = V1 + V2 + V3 − V4 , xix where Vi = B n Ti for i = 1, 2, 3, 4 and B n = √ d 1/2 nAn ΣX T0−1 . We will show V2 −→ N (0, σ 2 G) and other Vi ’s are oP (1), from which the desired result follows by applying Slutsky’s lemma. √ 1/2 On V1 . We have kV1 k2 ≤ ndkAn kF kΣX kF,d kT0−1 kF,d kT1 k2 . By Assumption (F), kAn kF is 1/2 bounded. By Assumption (D”), kΣX kF,d is bounded. By Lemmas A.2 and A.3 and Assumption (D), for d = o(n1/3 ), wpg1, kT0−1 kF,d is bounded. Further, wpg1, kT1 k2 ≤ kV1 k2 . 1 n s2 κn γn . Then, √1 s2 dκn γn , n where . means that the left side is bounded by a constant times the right √ side, as noted at the beginning of the appendix. Thus, kV1 k2 = oP (1) for s2 = o( n/(dκn γn )). On V2 . We have V2 = V21 + V22 , where V21 = √ −1/2 nAn ΣX First, consider V21 . We have V21 = T2 , V22 = 1/2 nAn ΣX (T0−1 − Σ−1 X )T2 . p P (n − s1 )/n ni=s1 +1 Z n,i , where Z n,i = √ On one hand, for every δ > 0, √ 1 −1/2 An ΣX X i i . 
n − s1 Pn 2 i=s1 +1 EkZ n,i k2 {kZ n,i k2 > δ} ≤ (n − s1 )EkZ n,0 k42 /δ 2 and 1 −1/2 −1/2 E4 E(X T0 ΣX ATn An ΣX X 0 )2 (n − s1 )2 0 1 ≤ E4 λmax (Gn )λmin (ΣX )−1 E(X T0 X 0 )2 (n − s1 )2 0 EkZ n,0 k42 = d ≤ 1X d2 4 1/2 2 E40 λmax (Gn )λmin (ΣX )−1 ( (EX0j ) ) . 2 (n − s1 ) d j=1 P √ Thus, by assumptions (D’), (E) and (F), ni=s1 +1 EkZ n,i k22 {kZ n,i k2 > δ} → 0 for d = o( n). On P the other hand, ni=s1 +1 Cov(Z n,i ) = σ 2 An ATn → σ 2 G. Thus, by central limit theorem (see, for d example, Proposition 2.27 in van der Vaart (1998)), V21 −→ N (0, σ 2 G). Next, consider V22 . We have 1/2 −1/2 √ kV22 k2 ≤ kAn kF kΣX kF,d d(log(d))1/2 kT0−1 − Σ−1 k nT2 k2 . X kF (d log(d)) 1/2 By Assumption (F), kAn kF is O(1); By Assumption (D”), kΣX kF,d is O(1); by Lemmas A.2 6 and A.3, d(log(d))1/2 kT0−1 − Σ−1 X kF is oP (1) for d log(d) = o(n); By Lemma A.4, together with √ √ Assumption (G), (d log(d))−1/2 k nT2 k2 = (d log(d))−1/2 k √1n Ss1 +1,n k2 is OP (1) for d = o( n). P d Thus, V22 −→ 0. By slutsky’s lemma, V2 −→ N (0, σ 2 G). √ On V3 and V4 . First consider V3 . By noting that s1 = o( n/(λdκn )), wpg1, kV3 k2 ≤ √ √ 1/2 d nkAn kF kΣX kF,d kT0−1 kF,d kT3 k2 . dλs1 κn / n → 0. Thus, kV3 k2 = oP (1). In the same way, kV4 k2 = oP (1). This completes the proof. xx √ 1/2 Proof of Theorem F.6. From the proof of Theorem F.4, we have nAn ΣX (β̃ − β ? ) = R̃1 + R̃2 + √ √ 1/2 1/2 V1 + V2 , where R̃1 = nAn ΣX R1 , R̃2 = nAn ΣX R2 , and Ri ’s and Vi ’s are defined in the proofs of Theorems F.4 and F.5. Since P (kR̃1 k2 = 0) ≥ P {Iˆ0 = I0 } → 1, we have R̃1 = oP (1). d Similarly, R̃2 = oP (1). By the proof of Theorem F.5, V1 = oP (1) and V2 −→ N (0, σ 2 G). Thus, the asymptotic distribution of β̃ is Gaussian by Slutsky’s lemma. Proof of Theorem F.7. We only show the result on β̂. since the result on β̃ can be obtained in a similar way. We reuse the definitions of Ti ’s in the proof of Theorems 4.2, from which, √ where M = √ 1/2 nAn Σ̂n (β̂ n − β ? ) = M + R, 1/2 nAn ΣX (β̂ n − β ? ) and R = √ 1/2 nAn (Σ̂n d 1/2 − ΣX )(β̂ n − β ? ). By Theorem F.5, P M −→ N (0, σ 2 G). Then, it is sufficient to show that R −→ 0 wrt k·k2 . We have R = R1 + R2 + R3 − R4 , where Ri = B n Ti for i = 1, 2, 3, 4 and B n = √ 1/2 nAn (Σ̂n 1/2 − ΣX )T0−1 . We will show each Ri converges to zero in probability, which finishes the proof. 1/2 1/2 On R1 . By Lemma A.7, kΣ̂n − ΣX kF ≤ (d1/2 kΣ̂n − ΣX kF )1/2 . Then, kR1 k2 ≤ ≤ √ √ 1/2 1/2 nkAn kF kΣ̂n − ΣX kF kT0−1 kF kT1 k2 ndkAn kF (kΣ̂n − ΣX kF,d )1/2 kT0−1 kF,d kT1 k2 . By Assumption (F), kAn kF is bounded. By Lemma A.3, kΣ̂n − ΣX kF,d = oP (1) for d = o(n1/3 ). By Lemmas A.2 and A.3 and Assumption (D), for d = o(n1/3 ), wpg1, kT0−1 kF,d is bounded. We have, wpg1, kT1 k2 ≤ √ o( n/(dκn γn )). 1 n s2 κn γn . Then, kR1 k2 . √1 s2 dκn γn . n Thus, kR1 k2 = oP (1) for s2 = On R2 . We have √ 1/2 1/2 kR2 k2 ≤ kAn kF d(log(d))1/2 kΣ̂n − ΣX kF kT0−1 kF,d (d log(d))−1/2 k nT2 k2 , and 1/2 1/2 d(log(d))1/2 kΣ̂n − ΣX kF ≤ (d5/2 log(d)kΣ̂n − ΣX kF )1/2 . By Assumption (F), kAn kF is O(1); by Lemma A.3, d5/2 log(d)kΣ̂n −ΣX kF = oP (1) for d9 (log(d))2 = 6 o(n); by Lemmas A.2 and A.3, d(log(d))1/2 kT0−1 − Σ−1 X kF is oP (1) for d log(d) = o(n); by Lemma √ √ P A.4, (d log(d))−1/2 k nT2 k2 = (d log(d))−1/2 k √1n Ss1 +1,n k2 is OP (1) for d = o( n). Thus, R2 −→ 0. xxi √ On R3 and R4 . First consider R3 . By noting that s1 = o( n/(λdκn )), wpg1, √ √ 1/2 1/2 kR3 k2 ≤ d nkAn kF (kΣ̂n − ΣX kF,d )1/2 kT0−1 kF,d kT3 k2 . dλs1 κn / n → 0. Thus, kR3 k2 = oP (1). In the same way, kR4 k2 = oP (1). 
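To make the procedure analyzed in Supplements D and F concrete, the following is a minimal numerical sketch of the iterative penalized least-squares algorithm: the µ-update is the soft-thresholding characterization of Lemma D.1 (equivalently, equations (2.5)–(2.6) via Lemma D.3), and the β-update is least squares on the adjusted responses, as in the iteration analyzed in Propositions D.2 and F.3. The function names, the simulated design, and the choice of λ below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    """Elementwise soft-thresholding: the mu-update of Lemma D.1 for fixed beta."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def penalized_ls_incidental(X, Y, lam, n_iter=10, tol=1e-8):
    """Alternating minimization of (1/2)||Y - mu - X beta||_2^2 + lam * ||mu||_1.

    Returns (mu_hat, beta_hat); the beta-update is least squares on the
    adjusted responses Y - mu, mirroring the iterative algorithm of the paper.
    """
    n, d = X.shape
    beta = np.zeros(d)                              # beta^(0)
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)      # (X'X)^{-1} X'
    for _ in range(n_iter):
        mu = soft_threshold(Y - X @ beta, lam)      # update incidental parameters
        beta_new = XtX_inv_Xt @ (Y - mu)            # update structural parameters
        if np.linalg.norm(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = beta_new
    return mu, beta

# Illustrative data: a few large mu*'s, a few bounded ones, the rest zero.
rng = np.random.default_rng(0)
n, d, s1, s2 = 500, 3, 5, 10
beta_star = np.array([1.0, -2.0, 0.5])
mu_star = np.zeros(n)
mu_star[:s1] = 50.0                                 # "large" incidental parameters
mu_star[s1:s1 + s2] = 2.0                           # "bounded" incidental parameters
X = rng.normal(size=(n, d))
Y = mu_star + X @ beta_star + rng.normal(size=n)
lam = 3.0 * np.sqrt(np.log(n))                      # illustrative tuning only
mu_hat, beta_hat = penalized_ls_incidental(X, Y, lam)
print(beta_hat)
```

In line with Proposition D.2, when s1/n is small the iterates typically coincide after the second pass, so the stopping rule is activated almost immediately.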
References

Jianqing Fan and Jinchi Lv. Non-concave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory, 57:5467–5484, 2011.

Jianqing Fan and Heng Peng. On non-concave penalized likelihood with diverging number of parameters. The Annals of Statistics, 32:928–961, 2004.

Michael R. Kosorok. Introduction to Empirical Processes and Semiparametric Inference. Springer, New York, 2008.

Jerzy Neyman and Elizabeth L. Scott. Consistent estimates based on partially consistent observations. Econometrica, 16:1–32, 1948.

Albert N. Shiryaev. Probability. Springer-Verlag, second edition, 1995.

Aad W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.

Peng Zhao and Bin Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7(Nov):2541–2563, 2006.
ADMM for the SDP relaxation of the QAP*

Danilo Elias Oliveira†, Henry Wolkowicz‡, and Yangyang Xu§

arXiv:1512.05448v1 [math.OC] 17 Dec 2015

December 18, 2015

Abstract

The semidefinite programming (SDP) relaxation has proven to be extremely strong for many hard discrete optimization problems. This is in particular true for the quadratic assignment problem (QAP), arguably one of the hardest NP-hard discrete optimization problems. There are several difficulties that arise in efficiently solving the SDP relaxation, e.g., increased dimension; inefficiency of the current primal-dual interior point solvers in terms of both time and accuracy; and difficulty and high expense in adding cutting plane constraints. We propose using the alternating direction method of multipliers (ADMM) to solve the SDP relaxation. This first order approach allows for inexpensive iterations, a method of cheaply obtaining low rank solutions, as well as a trivial way of adding cutting plane inequalities. When compared to current approaches and current best available bounds we obtain remarkable robustness, efficiency and improved bounds.

Keywords: Quadratic assignment problem, semidefinite programming relaxation, alternating direction method of multipliers, large scale.

Classification code: 90C22, 90B80, 90C46, 90-08

Contents

1 Introduction
2 A New Derivation for the SDP Relaxation
3 A New ADMM Algorithm for the SDP Relaxation
  3.1 Lower bound
  3.2 Feasible solution of QAP
  3.3 Low-rank solution
  3.4 Different choices for V, V̂
4 Numerical experiments
5 Concluding Remarks
Index

* This work is partially supported by NSERC and AFOSR.
† Dept. of Combinatorics and Optimization, University of Waterloo.
‡ Dept. of Combinatorics and Optimization, University of Waterloo. Research supported by The Natural Sciences and Engineering Research Council of Canada and by AFOSR. Email: [email protected]
§ Institute for Mathematics and its Applications (IMA), University of Minnesota.

List of Tables

1 QAP Instances I and II. Requested tolerance 1e−5.

1 Introduction

The quadratic assignment problem (QAP), in the trace formulation, is

\[ p^*_X := \min_{X \in \Pi_n} \langle AXB - 2C,\, X \rangle, \tag{1.1} \]

where A, B ∈ S^n are real symmetric n × n matrices, C is a real n × n matrix, ⟨·,·⟩ denotes the trace inner product, ⟨Y, X⟩ = trace Y Xᵀ, and Π_n denotes the set of n × n permutation matrices. A typical objective of the QAP is to assign n facilities to n locations while minimizing total cost. The assignment cost is the sum of costs using the flows in A_ij between a pair of facilities i, j multiplied by the distance in B_st between their assigned locations s, t, adding on the location cost of a facility i in a position s given in C_is. It is well known that the QAP is an NP-hard problem and that problems with size as moderate as n = 30 still remain difficult to solve. Solution techniques rely on calculating efficient lower bounds. An important tool for finding lower bounds is the work in [13] that provides a semidefinite programming (SDP) relaxation of (1.1). The methods of choice for SDP are based on a primal-dual interior-point (p-d i-p) approach.
These methods cannot solve large problems, have difficulty in obtaining high accuracy solutions and cannot properly exploit sparsity. Moreover, it is very expensive to add on nonnegativity and cutting plane constraints. The current state for finding bounds and solving QAP is given in e.g., [1, 2, 4, 7, 9]. In this paper we study an alternating direction method of multipliers (ADMM), for solving the SDP relaxation of the QAP. We compare this with the best known results given in [9] and with the best known bounds found at SDPLIB [5]. and with a p-d i-p methods based on the so-called HKM direction. We see that the ADMM method is significantly faster and obtains high accuracy solutions. In addition there are advantages in obtaining low rank SDP solutions that provide better feasible approximations for the QAP for upper bounds. Finally, it is trivial to add nonnegativity and rounding constraints while iterating so as to obtain significantly stronger bounds and also maintain sparsity during the iterations. We note that previous success for ADMM for SDP in presented in [12]. A detailed survey article for ADMM can be found in [3]. 2 A New Derivation for the SDP Relaxation We start the derivation from the following equivalent quadratically constrained quadratic problem min hAXB − 2C, Xi X s.t. Xij Xik = 0, Xji Xki = 0, ∀i, ∀j 6= k, 2 Xij − Xij = 0, ∀i, j, n n X X 2 2 Xij − 1 = 0, ∀j, Xij − 1 = 0, ∀i. i=1 (2.1) j=1 Remark 2.1. Note that the quadratic orthogonality constraints X > X = I, XX > = I, and the linear row and column sum constraints Xe = e, X > e = e can all be linearly represented using linear combinations of those in (2.1). In addition, the first set of constraints, the elementwise orthogonality of the row and columns of X, are referred to as the gangster constraints. They are particularly strong constraints and enable many of the other constraints to be redundant. In fact, after the facial reduction done below, many of these constraints also become redundant. (See the definition of the index set J below.) 2 The Lagrangian for (2.1) is L0 (X, U, V, W, u, v) =hAXB − 2C, Xi + n X X (i) Ujk Xij Xik + i=1 j6=k + n X uj j=1 n X ! 2 Xij −1 n X X (i) Vjk Xji Xki + X i=1 j6=k 2 Wij (Xij − Xij ) i,j   n n X X 2 + vi  Xij − 1 . i=1 i=1 j=1 The dual problem is a maximization of the dual functional d0 , max d0 (U, V, W, u, v) := min L0 (X, U, V, W, u, v). (2.2) X To simplify the dual problem, we homogenize the X terms in L0 by multiplying a unit scalar x0 to degree-1 terms and adding the single constraint x20 = 1 to the Lagrangian. We let L1 (X, x0 , U, V, W, w0 , u, v) =hAXB − 2x0 C, Xi + n X X (i) Ujk Xij Xik + i=1 j6=k + n X j=1 uj n X ! 2 Xij −1 n X X (i) Vjk Xji Xki + i=1 j6=k X 2 Wij (Xij − x0 Xij ) i,j   n n X X 2 + vi  Xij − 1 + w0 (x20 − 1). i=1 i=1 j=1 This homogenization technique is the same as that in [13]. The new dual problem is max d1 (U, V, W, w0 , u, v) := min L1 (X, x0 , U, V, W, w0 , u, v). X,x0 (2.3) Note that d1 ≤ d0 . Hence, our relaxation still yields a lower bound to (2.1). In fact, the relaxations give the same lower bound. This follows from strong duality of the trust region subproblem as shown in [13]. Let x = vec(X), y = [x0 ; x], and w = vec(W ), where x, w is the vectorization, columnwise, of X and W , respectively. 
Then L1 (X, x0 , U, V, W, w0 , u, v) = y > [LQ + B1 (U ) + B2 (V ) + Arrow(w, w0 ) + K1 (u) + K2 (v)] y − e> (u + v) − w0 , where K1 (u) = blkdiag(0, u ⊗ I), K2 (v) = blkdiag(0, I ⊗ v),   w0 − 21 w> Arrow(w, w0 ) = − 12 w Diag(w) and B1 (U ) = blkdiag(0, Ũ ), B2 (V ) = blkdiag(0, Ṽ ). Here, Ũ and Ṽ are n × n block matrices. Ũ has zero diagonal blocks and the (j, k)-th off-diagonal block to (1) (n) be the diagonal matrix Diag(Ujk , . . . , Ujk ) for all j 6= k, and Ṽ has zero off-diagonal blocks and the i-th   (i) (i) 0 V12 · · · V1n  (i) (i)  0 · · · V2n   V21 diagonal block to be  .. ..  .. . Hence, the dual problem (2.3) is  .. .  . . .  (i) (i) Vn1 Vn2 · · · 0 max − e> (u + v) − w0 (2.4) s.t. LQ + B1 (U ) + B2 (V ) + Arrow(w, w0 ) + K1 (u) + K2 (v)  0. 3 Taking the dual of (2.4), we have the SDP relaxation of (2.1): min hLQ , Y i s.t. GJ (Y ) = E00 , diag(Ȳ ) = y0 , n X trace(Ỹii ) = 1, ∀i, Ỹii = I, (2.5) i=1 Y  0, where Ỹij is an n × n matrix for each (i, j), and we have assumed the block structure  Y = y00 y0 y0> Ȳ  ; Ȳ made of n × n block matrices Ỹ = (Ỹij ). (2.6) The index set J and the gangster operator GJ are defined properly below in Definition 2.1. (By abuse of notation this is done after the facial reduction which results in a smaller J.) Remark 2.2. If one more feasible quadratic constraint q(X) can be added to (2.1) and q(X) cannot be linearly represented by those in (2.1), the relaxation following the same derivation as above can be tighter. We conjecture that no more such q(X) exists, and thus (2.5) is the tightest among all Lagrange dual relaxation from a quadratically constrained program like (2.1). However, this does not mean that more linear inequality constraints cannot be added, i.e., linear cuts. Theorem 2.1 ( [13]). The matrix Y is feasible for (2.5) if, and only if, it is feasible for (3.1). 2 As above, let x = vec X ∈ Rn be the vectorization of X by column. Y is the original matrix variable of   > 1 1 the SDP relaxation before the facial reduction. It can be motivated from the lifting Y = . vec X vec X The SDP relaxation of QAP presented in [13] uses facial reduction to guarantee strict feasibility. The SDP obtained is p∗R := minR hLQ , V̂ RV̂ > i (2.7) s.t. GJ (V̂ RV̂ > ) = E00 R  0, where the so-called gangster operator, GJ , fixes all elements indexed by J and zeroes out all others,     0 −vec(C)> 1 0 LQ = , V̂ = 1 −vec(C) B⊗A ne V ⊗ V (2.8) with e being the vector of all ones, of appropriate dimension and V ∈ Rn×(n−1) being a basis matrix of the " # In−1 2 orthogonal complement of e, e.g., V = . We let Y = V̂ RV̂ > ∈ Sn +1 . −e Lemma 2.1 ( [13]). The matrix R̂ defined by " # 1 0 (n−1)2 +1 R̂ := ∈ S++ 1 0 n2 (n−1) (nIn−1 − En−1 ) ⊗ (nIn−1 − En−1 ) is (strictly) feasible for (2.7). 2 2 Definition 2.1. The gangster operator GJ : Sn +1 → Sn +1 and is defined by  Yij if (i, j) ∈ J or (j, i) ∈ J GJ (Y )ij = 0 otherwise 4 By abuse of notation, we let the same symbol denote the projection onto R|J| . We get the two equivalent primal constraints: GJ (V̂ RV̂ > ) = E00 ∈ Sn 2 +1 GJ (V̂ RV̂ > ) = GJ (E00 ) ∈ R|J| . ; 2 Therefore, the dual variable for the first form is Y ∈ Sn +1 . However, the dual variable for the second form 2 is y ∈ R|J| with the adjoint now yielding Y = GJ∗ (y) ∈ Sn +1 obtained by symmetrization and filling in the missing elements with zeros. The gangster index set, J is defined to be (00) union the set of of indices i < j in the matrix Ȳ in (2.6) corresponding to: 1. the off-diagonal elements in the n diagonal blocks; 2. 
the diagonal elements in the off-diagonal blocks except for the last column of off-diagonal blocks and also not the (n−2), (n−1) off-diagonal block. (These latter off-diagonal block constraints are redundant after the facial reduction.) We note that the gangster operator is self-adjoint, GJ∗ = GJ . Therefore, the dual of (2.7) can be written as the following. d∗Y := max hE00 , Y i (= Y00 ) Y (2.9) > > s.t. V̂ GJ (Y )V̂  V̂ LQ V̂ Again by abuse of notation, using the same symbol twice, we get the two equivalent dual constraints: V̂ > GJ (Y )V̂  V̂ > LQ V̂ ; V̂ > GJ∗ (y)V̂  V̂ > LQ V̂ . 2 As above, the dual variable for the first form is Y ∈ Sn +1 and for the second form is y ∈ R|J| . We have used G ∗ for the second form to emphasize that only the first form is self-adjoint. Lemma 2.2 ( [13]). The matrices Ŷ ,Ẑ, with M > 0 sufficiently large, defined by " # n 0 (n−1)2 +1 (n−1)2 +1 Ŷ := M . , Ẑ := V̂ > LQ V̂ − V̂ > GJ (Ŷ )V̂ ∈ S++ ∈ S++ 0 In ⊗ (In − En ) and are (strictly) feasible for (2.9). 3 A New ADMM Algorithm for the SDP Relaxation We can write (2.7) equivalently as min hLQ , Y i, s.t. GJ (Y ) = E00 , Y = V̂ RV̂ > , R  0. R,Y (3.1) The augmented Lagrange of (3.1) is LA (R, Y, Z) = hLQ , Y i + hZ, Y − V̂ RV̂ > i + β kY − V̂ RV̂ > k2F . 2 (3.2) Recall that (R, Y, Z) are the primal reduced, primal, and dual variables respectively. We denote (R, Y, Z) n as the current iterate. We let Srn + denote the matrices in S+ with rank at most r. Our new algorithm is an application of the alternating direction method of multipliers ADMM, that uses the augmented Lagrangian in (3.2) and performs the following updates for (R+ , Y+ , Z+ ): R+ = arg min LA (R, Y, Z), (3.3a) Y+ = arg min LA (R+ , Y, Z), (3.3b) R∈Srn + Y ∈Pi Z+ = Z + γ · β(Y+ − V̂ R+ V̂ > ), 5 (3.3c) where the simplest case for the polyhedral constraints Pi is the linear manifold from the gangster constraints: P1 = {Y ∈ Sn 2 +1 : GJ (Y ) = E00 } We use this notation as we add additional simple polyhedral constraints. The second case is the polytope: P2 = P1 ∩ {0 ≤ Y ≤ 1}. Let V̂ be normalized such that V̂ > V̂ = I. Then if r = n, the R-subproblem can be explicitly solved by R+ = arg minR0 hZ, Y − V̂ RV̂ > i + β2 kY − V̂ RV̂ > k2F 2 arg minR0 Y − V̂ RV̂ > + β1 Z F  = arg minR0 R − V̂ > Y + β1 Z V̂    = PS+ V̂ > Y + β1 Z V̂ , = 2 (3.4) F where S+ denotes the SDP cone, and PS+ is the projection to S+ . For any symmetric matrix W , we have > PS+ (W ) = U+ Σ+ U+ , where (U+ , Σ+ ) contains the positive eigenpairs of W and (U− , Σ− ) the negative eigenpairs. If i = 1 in (3.3b), the Y -subproblem also has closed-form solution: Y+ = arg min hLQ , Y i + hZ, Y − V̂ R+ V̂ > i + GJ (Y )=E00 LQ + Z β GJ (Y )=E00   LQ + Z > =E00 + GJ c V̂ R+ V̂ − β = arg min β kY − V̂ R+ V̂ > k2F 2 2 Y − V̂ R+ V̂ > + F (3.5) The advantage of using ADMM is that its complexity only slightly increases while we add more constraints to (2.7) to tighten the SDP relaxation. If 0 ≤ V̂ RV̂ > ≤ 1 is added in (2.7), then we have constraint 0 ≤ Y ≤ 1 in (3.1) and reach to the problem p∗RY := min hLQ , Y i, s.t. GJ (Y ) = E00 , 0 ≤ Y ≤ 1, Y = V̂ RV̂ > , R  0. R,Y (3.6) The ADMM for solving (3.6) has the same R-update and Z-update as those in (3.3), and the Y -update is changed to    LQ + Z  > c Y+ = E00 + min 1, max 0, GJ V̂ R+ V̂ − . (3.7) β With nonnegativity constraint, the less-than-one constraint is redundant but makes the algorithm converge faster. 3.1 Lower bound If we solve (2.7) or (3.1) exactly or to a very high accuracy, we get a lower bound of the original QAP. 
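As a concrete illustration of the updates (3.4), (3.7) and (3.3c) just described, here is a minimal sketch of one ADMM pass, written assuming V̂ has orthonormal columns and that the gangster index set J is supplied as a Boolean mask; the identifiers (proj_psd, admm_step, gangster_mask) are our own illustrative names, not the authors' code.

```python
import numpy as np

def proj_psd(W):
    """Projection onto the PSD cone: keep the nonnegative eigenpairs of W."""
    W = (W + W.T) / 2.0
    vals, vecs = np.linalg.eigh(W)
    pos = vals > 0
    return (vecs[:, pos] * vals[pos]) @ vecs[:, pos].T

def admm_step(R, Y, Z, Vhat, LQ, gangster_mask, beta, gamma=1.618):
    """One pass of (3.4), (3.7) and (3.3c) for problem (3.6).

    gangster_mask[i, j] == True marks the entries fixed by G_J(Y) = E_00
    (mask assumed symmetric); E_00 has a single 1 in the (0, 0) position.
    """
    # R-update (3.4): project Vhat' (Y + Z/beta) Vhat onto the PSD cone.
    R = proj_psd(Vhat.T @ (Y + Z / beta) @ Vhat)
    # Y-update (3.7): gangster entries are fixed, all others clipped to [0, 1].
    VRV = Vhat @ R @ Vhat.T
    Y = np.clip(VRV - (LQ + Z) / beta, 0.0, 1.0)
    Y[gangster_mask] = 0.0
    Y[0, 0] = 1.0
    # Z-update (3.3c).
    Z = Z + gamma * beta * (Y - VRV)
    return R, Y, Z
```

Iterating this step yields the triple (R^out, Y^out, Z^out) referred to below; γ = 1.618 is the dual step length used later in the numerical experiments of Section 4.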
However, the problem size of (2.7) or (3.1) can be extremely large, and thus having an exact or highly accurate solution may take extremely long time. In the following, we provide an inexpensive way to get a lower bound from the output of our algorithm that solves (3.1) to a moderate accuracy. Let (Rout , Y out , Z out ) be the output of the ADMM for (3.6). 6 Lemma 3.1. Let R := {R  0}, Z := {Z : V̂ > Z V̂  0}. Y := {Y : GJ (Y ) = E00 , 0 ≤ Y ≤ 1}, Define the ADMM dual function g(Z) := min {hLQ + Z, Y i}. Y ∈Y Then the dual problem of ADMM (3.6) is defined as follows and satisfies weak duality. d∗Z := max g(Z) ≤ p∗R . Z∈Z Proof. The dual problem of (3.6) can be derived as d∗Z := max Z min R∈R,Y ∈Y hLQ , Y i + hZ, Y − V̂ RV̂ > i = max min hLQ , Y i + hZ, Y i + min hZ, −V̂ RV̂ > i Y ∈Y Z R∈R = max min hLQ , Y i + hZ, Y i + min hV̂ > Z V̂ , −Ri Y ∈Y Z R∈R = max min hLQ + Z, Y i, Z∈Z Y ∈Y = max g(Z) Z∈Z Weak duality follows in the usual way by exchanging the max and min. For any Z ∈ Z, we have g(Z) is a lower bound of (3.6) and thus of the original QAP. We use the dual  function value of the projection g PZ (Z out ) as the lower bound, and next we show how to get PZ (Z̃) for any symmetric matrix Z̃. Let V̂⊥ be the orthonormal basis of the null space of V̂ . Then V̄ = (V̂ , V̂⊥ ) is an orthogonal matrix. Let   W11 W12 V̄ > Z V̄ = W = , and we have W21 W22 V̂ > Z V̂  0 ⇔ V̂ > Z V̂ = V̂ > V̄ W V̄ > V̂ = W11  0. Hence, PZ (Z̃) = arg min kZ − Z̃k2F Z∈Z = arg min kV̄ W V̄ > − Z̃k2F W11 0 = arg min kW − V̄ > Z̃ V̄ k2F W11 0  = PS− (W̃11 ) W̃12 W̃21 W̃22  , where S− denotes the negative semidefinite cone, and we have assumed V̄ > Z̃ V̄ =  W̃11 W̃21 W̃12 W̃22  . Note that PS− (W11 ) = −PS+ (−W11 ). 3.2 Feasible solution of QAP Let (Rout , Y out , Z out ) be the output of the ADMM for (3.6). Assume the largest eigenvalue and the corresponding eigenvector of Y are λ and v. We let X out be the matrix reshaped from the second through the last elements of the first column of λvv > . Then we solve the linear program maxhX out , Xi, s.t. Xe = e, X > e = e, X ≥ 0 X by simplex method that gives a basic optimal solution, i.e., a permutation matrix. 7 (3.8) 3.3 Low-rank solution Instead of finding a feasible solution through (3.8), we can directly get one by restricting R to a rank-one matrix, i.e., rank(R) = 1 and R ∈ S+ . With this constraint, the R-update can be modified to   Z > V̂ , (3.9) R+ = PS+ ∩R1 V̂ Y + β where R1 = {R : rank(R) = 1} denotes the set of rank-one matrices. For a symmetric matrix W with largest eigenvalue λ > 0 and corresponding eigenvector w, we have PS+ ∩R1 = λww> . 3.4 Different choices for V, Vb The matrix Vb is essential in the steps of the algorithm, see e.g., (3.4). A sparse Vb helps in the projection if one is using a sparse eigenvalue code. We have compared several. One is based on applying a QR algorithm to the original simple V from the definition of V̂ in (2.8). The other two are based on the approach in [10] and we present the most successful here. The orthogonal V we use is      1         1  I n ⊗ √1 I n ⊗ 1  h i  1     b 2 c    2     c b 2 −1 4 −1  . . . Vb  V =        −1  0(n−2b n c),b n c    2 2 0(n−4b n c),b n c 4 n×n−1 4 i.e., the block matrix consisting of t blocks formed from Kronecker products along with one block Vb to complete the appropriate size so that V > V = In−1 , V > e = 0. We take advantage of the 0, 1 structure of the Kronecker blocks and delay the scaling for the normalization till the end. 
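The payoff of this structure is that products with K = V ⊗ V never require the Kronecker product to be formed explicitly: by the identity (B ⊗ A) vec(X) = vec(A X B^T), one product with V ⊗ V costs two small dense multiplications. A minimal sketch (column-major vec, matching vec X above; the function names are ours):

import numpy as np

def kron_VV_matvec(V, xbar):
    # Apply (V kron V) to a vector of length (n-1)^2 without forming the Kronecker product.
    k = V.shape[1]                                   # k = n - 1
    Xbar = xbar.reshape(k, k, order='F')             # column-major mat(xbar)
    return (V @ Xbar @ V.T).reshape(-1, order='F')   # vec(V Xbar V^T)

def kron_VV_rmatvec(V, y):
    # Apply (V kron V)^T = V^T kron V^T to a vector of length n^2.
    n = V.shape[0]
    Ymat = y.reshape(n, n, order='F')
    return (V.T @ Ymat @ V).reshape(-1, order='F')

These two routines are all that the implicit evaluation below requires from V, so a Lanczos-type eigensolver can obtain the one (or few) needed eigenpairs of W from matrix-vector products alone.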
The main work in the low rank projection part of the algorithm is to evaluate one (or a few) eigenvalues of W = Vb > (Y + β1 Z)V̂ to obtain the update R+ .   1 ρ w> Y + Z= . w W̄ β We let K := V ⊗ V, √ α = 1/ 2, 1 v = √ e, 2n x=   x1 . x̄ The structure for Vb in (2.8) means that we can evaluate the product for W x as  α v 0 K >  ρ w w> W̄  α v  0 x K = = = =  >    α 0 ρ w> αx1 v K w W̄ x1 v + K x̄    > > ραx1 + w (x1 v + K x̄) α v 0 K> αx1 w + W̄ (x1 v + K x̄)   2 ρα x1 + αw> (x1 v + K x̄) + v > αx1 w + W̄ (x1 v + K x̄)  K > αx1 w + W̄ (x1 v + K x̄)   2  ρα x1 + αw> + v > W̄ (x1 v + K x̄) + v > (αx1 w)  . K > αx1 w + W̄ (x1 v + K x̄) We emphasize that V ⊗ V = (V̄ ⊗ V̄ )/(D ⊗ D), where V̄ denotes the unscaled V , D is the diagonal matrix of scale factors to obtain the orthogonality in V , and / denotes the MATLAB division on the right, multiplication by the inverse on the right. Therefore, we can evaluate   K > W̄ K = (V ⊗ V )> W̄ (V ⊗ V ) = (V̄ ⊗ V̄ )> (D ⊗ D)\W̄ /(D ⊗ D) (V̄ ⊗ V̄ ). 8 4 Numerical experiments We illustrate our results in Table 1 on the forty five QAP instances I and II, see [5, 6, 9]. The optimal solutions are in column 1 and current best known lower bounds from [9] are in column 3 marked bundle. The p-d i-p lower bound is given in the column marked HKM-FR. (The code failed to find a lower bound on several problems marked −1111.) These bounds were obtained using the facially reduced SDP relaxation and exploiting the low rank (one and two) of the constraints. We used SDPT3 [11].1 Our ADMM lower bound follows in column 4. We see that it is at least as good as the current best known bounds in every instance. The percent improvement is given in column 7. We then present the best upper bounds from our heuristics in column 5. This allows us to calculate the percentage gap in column 6. The CPU seconds are then given in the last columns 8 − 9 for the high and low rank approaches, respectively. The last two columns are the ratios of CPU times. Column 10 is the ratio of CPU times for the 5 decimal and 12 decimal tolerance for the high rank approach. All the ratios for the low rank approach are approximately 1 and not included. The quality of the bounds did not change for these two tolerances. However, we consider it of interest to show that the higher tolerance can be obtained. The last column 11 is the ratio of CPU times for the 12 decimal tolerance of the high rank approach in column 8 with the CPU times for 9 decimal tolerance for the HKM approach. We emphasize that the lower bounds for the HKM approach were significantly weaker. We used MATLAB version 8.6.0.267246 (R2015b) on a PC Dell Optiplex 9020 64-bit, with 16 Gig, running Windows 7. We heuristically set γ = 1.618 and β = n3 in ADMM. We used two different tolerances 1e − 12, 1e − 5. Solving the SDP to the higher accuracy did not improve the bounds. However, it is interesting that the ADMM approach was able to solve the SDP relaxations to such high accuracy, something the p-d i-p approach has great difficulty with. We provide the CPU times for both accuracies. Our times are significantly lower than those reported in [4, 9], e.g., from 10 hours to less than an hour. We emphasize that we have improved bounds for all the SDP instances and have provably found exact solutions six of the instances Had12,14,16,18, Rou12, Tai12a. This is due to the ability to add all the nonnegativity constraints and rounding numbers to 0, 1 with essentially zero extra computational cost. 
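For concreteness, the lower bound of Section 3.1 (column 4) and the rounded feasible solution of Section 3.2 (column 5) can be extracted from the ADMM output roughly as follows. This is an illustrative NumPy/SciPy sketch under stated assumptions, not the code used for the experiments: J_mask is the boolean gangster mask, Vperp an orthonormal basis of the null space of Vhat (so that [Vhat, Vperp] is orthogonal), and linear_sum_assignment stands in for the simplex method on the assignment LP (3.8). The closed form in dual_bound uses the fact that the set Y pins the J entries to E00 and leaves every other entry free in [0, 1], so the minimum of <LQ + Z, Y> is attained entrywise.

import numpy as np
from scipy.optimize import linear_sum_assignment

def project_onto_Z(Ztil, Vhat, Vperp):
    # P_Z of Section 3.1: make the Vhat-part of Z negative semidefinite.
    Vbar = np.hstack([Vhat, Vperp])
    W = Vbar.T @ Ztil @ Vbar
    k = Vhat.shape[1]                                # k = (n-1)^2 + 1
    lam, U = np.linalg.eigh((W[:k, :k] + W[:k, :k].T) / 2)
    W[:k, :k] = (U * np.minimum(lam, 0)) @ U.T       # P_{S-}(W11) = -P_{S+}(-W11)
    return Vbar @ W @ Vbar.T

def dual_bound(LQ, Z, J_mask):
    # g(Z) = min over Y in Y of <LQ + Z, Y>, evaluated entrywise.
    M = LQ + Z
    return M[0, 0] + np.sum(np.minimum(M[~J_mask], 0.0))

def round_to_permutation(Yout, n):
    # Section 3.2: leading eigenpair of Y, then the assignment LP (3.8).
    lam, U = np.linalg.eigh((Yout + Yout.T) / 2)
    v = U[:, -1]
    if v[0] < 0:
        v = -v                                       # fix the sign of the eigenvector
    Xout = (lam[-1] * v[0] * v[1:]).reshape(n, n, order='F')   # entries 1..n^2 of the first column of lam*v*v^T
    rows, cols = linear_sum_assignment(-Xout)        # max <Xout, X> over permutation matrices
    X = np.zeros((n, n))
    X[rows, cols] = 1.0
    return X

By Lemma 3.1, dual_bound(LQ, project_onto_Z(Zout, Vhat, Vperp), J_mask) is a valid lower bound on the QAP optimal value regardless of how accurately (3.6) was solved.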
In addition, the rounding appears to improve the upper bounds as well. This was the case for both using tolerance of 12 or only 5 decimals in the ADMM algorithm. 1 We do not include the times as they were much greater than for the ADMM approach, e.g., hours instead of minutes and a day instead of an hour. 9 Esc16a Esc16b Esc16c Esc16d Esc16e Esc16g Esc16h Esc16i Esc16j Had12 Had14 Had16 Had18 Had20 Kra30a Kra30b Kra32 Nug12 Nug14 Nug15 Nug16a Nug16b Nug17 Nug18 Nug20 Nug21 Nug22 Nug24 Nug25 Nug27 Nug28 Nug30 Rou12 Rou15 Rou20 Scr12 Scr15 Scr20 Tai12a Tai15a Tai17a Tai20a Tai25a Tai30a Tho30 1. opt value 2. Bundle [9] LowBnd 3. HKM-FR LowBnd 4. ADMM LowBnd 5. feas UpBnd 6. ADMM %gap 7. ADMM vs Bundle %Impr LowBnd 8 Tol5 cpusec HighRk 9 Tol5 cpusec LowRk 10 Tol12/5 cpuratio HighRk 11 HKM cpuratio Tol 9 68 292 160 16 28 26 996 14 8 1652 2724 3720 5358 6922 149936 91420 88700 578 1014 1150 1610 1240 1732 1930 2570 2438 3596 3488 3744 5234 5166 6124 235528 354210 725522 31410 51140 110030 224416 388214 491812 703482 1167256 1818146 88900 59 288 142 8 23 20 970 9 7 1643 2715 3699 5317 6885 136059 81156 79659 557 992 1122 1570 1188 1669 1852 2451 2323 3440 3310 3535 4965 4901 5803 223680 333287 663833 29321 48836 94998 222784 364761 451317 637300 1041337 1652186 77647 50 276 132 -12 13 11 909 -21 -4 1641 2709 3678 5287 6848 -1111 -1111 -1111 530 960 1071 1528 1139 1622 1802 2386 2386 3396 -1111 -1111 -1111 -1111 -1111 221161 323235 642856 23973 42204 83302 215637 349586 441294 619092 1096657 -1111 -1111 64 290 154 13 27 25 977 12 8 1652 2724 3720 5358 6922 143576 87858 85775 568 1011 1141 1600 1219 1708 1894 2507 2382 3529 3402 3626 5130 5026 5950 235528 350217 695181 31410 51140 106803 224416 377101 476525 671675 1096657 1706871 86838 72 300 188 18 32 28 996 14 14 1652 2724 3720 5358 6930 169708 105740 103790 632 1022 1306 1610 1356 1756 2160 2784 2706 3940 3794 4060 5822 5730 6676 235528 367782 765390 38806 58304 138474 224416 412760 546366 750450 1271696 1942086 102760 11.76 3.42 21.25 31.25 17.86 11.54 1.91 14.29 75.00 0.00 0.00 0.00 0.00 0.12 17.43 19.56 20.31 11.07 1.08 14.35 0.62 11.05 2.77 13.78 10.78 13.29 11.43 11.24 11.59 13.22 13.63 11.85 0.00 4.96 9.68 23.55 14.01 28.78 0.00 9.19 14.20 11.20 15.00 12.94 17.91 7.35 0.68 7.50 31.25 14.29 19.23 0.70 21.43 12.50 0.54 0.33 0.56 0.77 0.53 5.01 7.33 6.90 1.90 1.87 1.65 1.86 2.50 2.25 2.18 2.18 2.42 2.47 2.64 2.43 3.15 2.42 2.40 5.03 4.78 4.32 6.65 4.51 10.73 0.73 3.18 5.13 4.89 4.74 3.01 10.34 2.30e+01 3.87e+00 1.09e+01 2.14e+01 3.02e+01 4.24e+01 4.91e+00 1.37e+02 8.95e+01 1.02e+01 3.23e+01 1.75e+02 4.49e+02 3.85e+02 5.88e+03 4.36e+03 3.57e+03 2.60e+01 7.15e+01 9.10e+01 1.81e+02 9.35e+01 2.31e+02 4.16e+02 4.76e+02 1.41e+03 2.07e+03 1.20e+03 3.12e+03 5.11e+03 4.11e+03 7.36e+03 2.76e+01 3.12e+01 1.67e+02 4.40e+00 1.38e+01 1.53e+03 1.79e+00 2.74e+01 6.50e+01 1.28e+02 3.09e+02 1.25e+03 2.83e+03 4.02 4.55 8.09 3.69 4.29 4.27 3.53 4.30 4.80 1.08 1.69 3.15 6.00 12.15 149.32 170.57 200.26 1.04 1.87 3.31 3.06 3.19 4.34 5.47 11.56 15.32 21.82 29.64 39.23 78.18 83.38 133.38 0.93 2.70 10.31 1.17 2.41 9.61 0.90 2.35 4.52 10.10 38.48 142.55 164.86 4.14 2.15 4.53 4.87 4.80 2.72 2.33 2.39 3.83 1.06 1.19 1.04 2.22 4.20 2.22 3.01 4.28 6.61 5.06 5.90 3.28 6.23 3.63 2.43 3.75 1.68 1.39 3.29 1.65 1.58 2.17 1.76 0.98 8.68 10.90 2.40 1.84 1.15 1.04 14.69 7.31 14.32 5.58 10.51 4.74 9.37 8.08 4.88 10.22 8.79 8.63 10.60 8.76 7.93 5.91 10.46 12.51 13.28 14.53 1111.11 1111.11 1111.11 5.93 8.43 7.79 12.24 11.83 13.13 15.23 14.35 14.95 13.90 1111.11 1111.11 1111.11 
1111.11 1111.11 6.90 9.46 16.08 5.79 10.75 17.96 6.70 10.34 12.04 15.85 1111.11 1111.11 1111.11 Table 1: QAP Instances I and II. Requested tolerance 1e − 5. 5 Concluding Remarks In this paper we have shown the efficiency of using the ADMM approach in solving the SDP relaxation of the QAP problem. In particular, we have shown that we can obtain high accuracy solutions of the SDP relaxation in less significantly less cost than current approaches. In addition, the SDP relaxation includes the nonnegativity constraints at essentially no extra cost. This results in both a fast solution and improved lower and upper bounds for the QAP. In a forthcoming study we propose to include this in a branch and bound framework and implement it in a parallel programming approach, see e.g., [8]. In addition, we propose to test the possibility of using warm starts in the branching/bounding process and test it on the larger test sets such as used in e.g., [7]. 10 Index J, gangster index set, 5 GJ , gangster operator, 4 vec, 3 R̂, 4 Ŷ , 5 Ẑ, 5 Srn + , 5 ∗ dZ , 7 d∗Y , 5 e, ones vector, 4 g(Z), 7 p∗R , 4 p∗X , 2 p∗RY , 6 2 P1 = {Y ∈ Sn +1 : GJ (Y ) = E00 }, 6 P2 = P1 ∩ {0 ≤ Y ≤ 1}, 6 R := {R  0}, 7 Y := {Y : GJ (Y ) = E00 , 0 ≤ Y ≤ 1}, 7 Z := {Z : V̂ > Z V̂  0}, 7 QAP, quadratic assignment problem, 2 SDP, semidefinite programmming, 2 alternating direction method of multipliers, 2, 5 augmented Lagrange, 5 dual function, 7 facial reduction, 4 gangster constraints, 2, 6 gangster index set, J, 5 gangster operator, GJ , 4 lifting, 4 linear cuts, 4 ones, 4 optimal optimal optimal optimal value value value value ADMM relaxation, p∗RY , 6 QAP, p∗X , 2 dual SDP relaxation, d∗Y , 5 primal SDP relaxation, p∗R , 4 primal-dual interior-point, p-d i-p, 2 quadratic assignment problem, 2 semidefinite programmming, 2 strictly feasible pair, (R̂, Ŷ , Ẑ), 4, 5 trace inner product, hY, Xi = trace Y X > , 2 11 References [1] K.M. Anstreicher. Recent advances in the solution of quadratic assignment problems. Math. Program., 97(1-2, Ser. B):27–42, 2003. ISMP, 2003 (Copenhagen). [2] K.M. Anstreicher and N.W. Brixius. A new bound for the quadratic assignment problem based on convex quadratic programming. Math. Program., 89(3, Ser. A):341–357, 2001. [3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Machine Learning, 3(1):1–122, 2011. [4] S. Burer and R.D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Math. Program., 103(3, Ser. A):427–444, 2005. [5] R.E. Burkard, S. Karisch, and F. Rendl. QAPLIB – a quadratic assignment problem library. European J. Oper. Res., 55:115–119, 1991. www.opt.math.tu-graz.ac.at/qaplib/. [6] R.E. Burkard, S.E. Karisch, and F. Rendl. QAPLIB—a quadratic assignment problem library. J. Global Optim., 10(4):391–403, 1997. [7] E. de Klerk and R. Sotirov. Exploiting group symmetry in semidefinite programming relaxations of the quadratic assignment problem. Math. Program., 122(2, Ser. A):225–246, 2010. [8] R. Jain and P. Yao. A parallel approximation algorithm for positive semidefinite programming. In 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science—FOCS 2011, pages 463–471. IEEE Computer Soc., Los Alamitos, CA, 2011. [9] F. Rendl and R. Sotirov. Bounds for the quadratic assignment problem using the bundle method. Math. Program., 109(2-3, Ser. B):505–524, 2007. [10] H. Sun, N. Wang, T.K. Pong, and H. Wolkowicz. 
Eigenvalue, quadratic programming, and semidefinite programming bounds for a cut minimization problem. Comput. Optim. Appl., 2015. To appear (accepted Aug. 2015).
[11] K.C. Toh, M.J. Todd, and R.H. Tütüncü. SDPT3—a MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw., 11/12(1-4):545–581, 1999. Interior point methods.
[12] Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming. Math. Program. Comput., 2(3-4):203–230, 2010.
[13] Q. Zhao, S.E. Karisch, F. Rendl, and H. Wolkowicz. Semidefinite programming relaxations for the quadratic assignment problem. J. Comb. Optim., 2(1):71–109, 1998. Semidefinite programming and interior-point approaches for combinatorial optimization problems (Fields Institute, Toronto, ON, 1996).

arXiv:cs/0504050v2 [cs.LO] 15 Jan 2006 Under consideration for publication in Theory and Practice of Logic Programming 1 Mapping Fusion and Synchronized Hyperedge Replacement into Logic Programming∗ IVAN LANESE and UGO MONTANARI Dipartimento di Informatica, Università di Pisa Largo Bruno Pontecorvo,3 56127 Pisa, Italia (e-mail: {lanese,ugo}@di.unipi.it) submitted 27 December 2003; revised 14 April 2005; accepted 5 January 2006 Abstract In this paper we compare three different formalisms that can be used in the area of models for distributed, concurrent and mobile systems. In particular we analyze the relationships between a process calculus, the Fusion Calculus, graph transformations in the Synchronized Hyperedge Replacement with Hoare synchronization (HSHR) approach and logic programming. We present a translation from Fusion Calculus into HSHR (whereas Fusion Calculus uses Milner synchronization) and prove a correspondence between the reduction semantics of Fusion Calculus and HSHR transitions. We also present a mapping from HSHR into a transactional version of logic programming and prove that there is a full correspondence between the two formalisms. The resulting mapping from Fusion Calculus to logic programming is interesting since it shows the tight analogies between the two formalisms, in particular for handling name generation and mobility. The intermediate step in terms of HSHR is convenient since graph transformations allow for multiple, remote synchronizations, as required by Fusion Calculus semantics. To appear in Theory and Practice of Logic Programming (TPLP). KEYWORDS: Fusion Calculus, graph transformation, Synchronized Hyperedge Replacement, logic programming, mobility 1 Introduction In this paper we compare different formalisms that can be used to specify and model systems which are distributed, concurrent and mobile, as those that are usually found in the global computing area. Global computing is becoming very important because of the great development of networks which are deployed on huge areas, first of all Internet, but also other kinds of networks such as networks for wireless communications. In order to build and program these networks one needs to deal with issues such as reconfigurability, synchronization and transactions at a suitable level of abstraction. Thus powerful ∗ Work supported in part by the European IST-FET Global Computing project IST-2001-33100 PROFUNDIS and the European IST-FET Global Computing 2 project IST-2005-16004 Sensoria. 2 Ivan Lanese and Ugo Montanari formal models and tools are needed. Until now no model has been able to emerge as the standard one for this kind of systems, but there are a lot of approaches with different merits and drawbacks. An important approach is based on process calculi, like Milner’s CCS and Hoare’s CSP. These two calculi deal with communication and synchronization in a simple way, but they lack the concept of mobility. An important successor of CCS, the πcalculus (Milner et al. 1992), allows to study a wide range of mobility problems in a simple mathematical framework. We are mainly interested in the Fusion Calculus (Parrow and Victor 1998; Victor 1998; Gardner and Wischik 2000; Gardner and Wischik 2004), which is an evolution of π-calculus. The interesting aspect of this calculus is that it has been obtained by simplifying and making more symmetric the π-calculus. 
One of the known limitations of process-calculi when applied to distributed systems is that they lack an intuitive representation because they are equipped with an interleaving semantics and they use the same constructions for representing both the agents and their configurations. An approach that solves this kind of problems is based on graph transformations (Ehrig et al. 1999). In this case the structure of the system is explicitly represented by a graph which offers both a clean mathematical semantics and a suggestive representation. In particular we represent computational entities such as processes or hosts with hyperedges (namely edges attached to any number of nodes) and channels between them with shared nodes. As far as the dynamic aspect is concerned, we use Synchronized Hyperedge Replacement with Hoare synchronization (HSHR) (Degano and Montanari 1987). This approach uses productions to specify the behaviour of single hyperedges, which are synchronized by exposing actions on nodes. Actions exposed by different hyperedges on the same node must be compatible. In the case of Hoare synchronization all the edges must expose the same action (in the CSP style). This approach has the advantage, w.r.t. other graphical frameworks such as Double Pushout (Ehrig et al. 1973) or Bigraphs (Jensen and Milner 2003), of allowing a distributed implementation since productions have a local effect and synchronization can be performed using a distributed algorithm. We use the extension of HSHR with mobility (Hirsch et al. 2000; Hirsch and Montanari 2001; König and Montanari 2001; Ferrari et al. 2001; Lanese 2002), that allows edges to expose node references together with actions, and nodes whose references are matched during synchronization are unified. For us HSHR is a good step in the direction of logic programming (Lloyd 1993). We consider logic programming as a formalism for modelling concurrent and distributed systems. This is a non-standard view of logic programming (see Bruni et al. 2001 for a presentation of our approach) which considers goals as processes whose evolution is defined by Horn clauses and whose interactions use variables as channels and are managed by the unification engine. In this framework we are not interested Mapping Fusion and SHR into Logic Programming 3 only in refutations, but in any partial computation that rewrites a goal into another. In this paper we analyze the relationships between these three formalisms and we find tight analogies among them, like the same parallel composition operator and the use of unification for name mobility. However we also emphasize the differences between these models: • the Fusion Calculus is interleaving and relies on Milner synchronization (in the CCS style); • HSHR is inherently concurrent since many actions can be performed at the same time on different nodes and uses Hoare synchronization; • logic programming is concurrent, has a wide spectrum of possible controls which are based on the Hoare synchronization model, and also is equipped with a more complex data management. We will show a mapping from Fusion Calculus to HSHR and prove a correspondence theorem. Note that HSHR is a good intermediate step between Fusion Calculus and logic programming since in HSHR hyperedges can perform multiple actions at each step, and this allows to build chains of synchronizations. This additional power is needed to model Milner synchronization, which requires synchronous, atomic routing capabilities. 
To simplify our treatment we consider only reduction semantics. The interleaving behaviour is imposed with an external condition on the allowed HSHR transitions. Finally we present the connections between HSHR and logic programming. Since the logic programming paradigm allows for many computational strategies and is equipped with powerful data structures, we need to constrain it in order to have a close correspondence with HSHR. We define to this end Synchronized Logic Programming (SLP), which is a transactional version of logic programming. The idea is that function symbols are pending constraints that must be satisfied before a transaction can commit, as for zero tokens in zero-safe nets (Bruni and Montanari 2000). In the mapping from HSHR to SLP edges are translated into predicates, nodes into variables and parallel composition into AND composition. This translation was already presented in the MSc. thesis of the first author (Lanese 2002) and in Lanese and Montanari (2002). Fusion Calculus was mapped into SHR with Milner synchronization (a simpler task) in Lanese and Montanari (2004a) where Fusion LTS was considered instead of Fusion reduction semantics. The paper Lanese and Montanari (2002) also contains a mapping of Ambient calculus into HSHR. This result can be combined with the one here, thus obtaining a mapping of Ambient calculus into SLP. An extensive treatment of all the topics in this paper can also be found in the forthcoming Ph.D. thesis of the first author (Lanese 2006). Since logic programming is not only a theoretical framework, but also a well developed programming style, the connections between Fusion, HSHR and logic pro- 4 Ivan Lanese and Ugo Montanari gramming can be used for implementation purposes. SLP has been implemented in Lanese (2002) through meta-interpretation. Thus we can use translations from Fusion and HSHR to implement them. In particular, since implementations of logic programming are not distributed, this can be useful mainly for simulation purposes. In Section 2 we present the required background, in particular we introduce the Fusion Calculus (2.1), the algebraic representation of graphs and the HSHR (2.2), and logic programming (2.3). Section 3 is dedicated to the mapping from Fusion Calculus to HSHR. Section 4 analyzes the relationships between HSHR and logic programming, in particular we introduce SLP (4.1), we prove the correspondence between it and HSHR (4.2) and we give some hints on how to implement Fusion Calculus and HSHR using Prolog (4.3). In Section 5 we present some conclusions and traces for future work. Finally, proofs and technical lemmas are in Appendix A. 2 Background Mathematical notation. We use T σ to denote the application of substitution σ to T (where T can be a term or a set/vector of terms). We write substitutions as sets of pairs of the form t/x, denoting that variable x is replaced by term t. We also denote with σ1 σ2 the composition of substitutions σ1 and σ2 . We denote with σ −1 (x) the set of elements mapped to x by σ. We use | − | to denote the operation that computes the number of elements in a set/vector. Given a function f we denote with dom(f ) its domain, with Im(f ) its image and with f |S the restriction of f to the new domain S. We use on functions and substitutions set theoretic operations (such as ∪) referring to their representation as sets of pairs. Similarly, we apply them to vectors, referring to the set of the elements in the vector. In particular, \ is set difference. 
Given a set S we denote with S ∗ the set of strings on S. Also, given a vector ~v and an integer i, ~v [i] is the i-th element of ~v . Finally, a vector is given by listing its elements inside angle brackets h−i. 2.1 The Fusion Calculus The Fusion Calculus (Parrow and Victor 1998; Victor 1998) is a calculus for modelling distributed and mobile systems which is based on the concepts of fusion and scope. It is an evolution of the π-calculus (Milner et al. 1992) and the interesting point is that it is obtained by simplifying the calculus. In fact the two action prefixes for input and output communication are symmetric, whereas in the π-calculus they are not, and there is just one binding operator called scope, whereas the πcalculus has two (restriction and input). As shown in Parrow and Victor (1998), the π-calculus is syntactically a subcalculus of the Fusion Calculus (the key point is that the input of π-calculus is obtained using input and scope). In order to have these properties fusion actions have to be introduced. An asynchronous version of Fusion Calculus is described in Gardner and Wischik (2000), Gardner and Wischik (2004), where name fusions are handled explicitly as messages. Here we follow the approach by Parrow and Victor. Mapping Fusion and SHR into Logic Programming 5 We now present in details the syntax and the reduction semantics of Fusion Calculus. In our work we deal with a subcalculus of the Fusion Calculus, which has no match and no mismatch operators, and has only guarded summation and recursion. All these restrictions are quite standard, apart from the one concerning the match operator, which is needed to have an expansion lemma. To extend our approach to deal with match we would need to extend SHR by allowing production applications to be tagged with a unique identifier. We leave this extension for future work. In our discussion we distinguish between sequential processes (which have a guarded summation as topmost operator) and general processes. We assume to have an infinite set N of names ranged over by u, v, . . . , z and an infinite set of agent variables (disjoint w.r.t. the set of names) with meta-variable X. Names represent communication channels. We use φ to denote an equivalence relation on N , called fusion, which is represented in the syntax by a finite set of equalities. Function n(φ) returns all names which are fused, i.e. those contained in an equivalence class of φ which is not a singleton. Definition 1 The prefixes are defined by: α : : = u~x (Input) u~x (Output) φ (Fusion) Definition 2 The agents are defined by: S::= P ::= P i αi .Pi (Guarded sum) 0 (Inaction) S (Sequential Agent) P1 |P2 (Composition) (x)P (Scope) rec X.P X (Recursion) (Agent variable) The scope restriction operator is a binder for names, thus x is bound in (x)P . Similarly rec is a binder for agent variables. We will only consider agents which are closed w.r.t. both names and agent variables and where in rec X.P each occurrence of X in P is within a sequential agent (guarded recursion). We use recursion to define infinite processes instead of other operators (e.g. replication) since it simplifies the mapping and since their expressive power is essentially the same. We use infix + for binary sum (which thus is associative and commutative). Given an agent P , functions fn, bn and n compute the sets fn(P ), bn(P ) and 6 Ivan Lanese and Ugo Montanari n(P ) of its free, bound and all names respectively. Processes are agents considered up to structural axioms defined as follows. 
Definition 3 (Structural congruence) The structural congruence ≡ between agents is the least congruence satisfying the α-conversion law (both for names and for agent variables), the abelian monoid laws for composition (associativity, commutativity and 0 as identity), the scope laws (x)0 ≡ 0, (x)(y)P ≡ (y)(x)P , the scope extrusion law P |(z)Q ≡ (z)(P |Q) where z∈ / fn(P ) and the recursion law rec X.P ≡ P {rec X.P/X}. Note that fn is also well-defined on processes. In order to deal with fusions we need the following definition. Definition 4 (Substitutive effect ) A substitutive effect of a fusion φ is any idempotent substitution σ : N → N having φ as its kernel. In other words xσ = yσ iff xφy and σ sends all members of each equivalence class of φ to one representative in the class1 . The reduction semantics for Fusion Calculus is the least relation satisfying the following rules. Definition 5 (Reduction semantics for Fusion Calculus) (~z)(R|(· · · + u~x.P )|(u~y .Q + . . . )) → (~z)(R|P |Q)σ where |~x| = |~y | and σ is a substitutive effect of {~x = ~y } such that dom(σ) ⊆ ~z. (~z)(R|(· · · + φ.P )) → (~z)(R|P )σ where σ is a substitutive effect of φ such that dom(σ) ⊆ ~z. P ≡ P ′ , P ′ → Q′ , Q′ ≡ Q P →Q 2.2 Synchronized Hyperedge Replacement Synchronized Hyperedge Replacement (SHR) (Degano and Montanari 1987) is an approach to (hyper)graph transformations that defines global transitions using local productions. Productions define how a single (hyper)edge can be rewritten and the conditions that this rewriting imposes on adjacent nodes. Thus the global transition is obtained by applying in parallel different productions whose conditions are compatible. What exactly compatible means depends on which synchronization model we use. In this work we will use the Hoare synchronization model (HSHR), which requires that all the edges connected to a node expose the same action on it. For a general definition of synchronization models see Lanese and Montanari (2004b). We use the extension of HSHR with mobility (Hirsch et al. 2000; Hirsch and Montanari 2001; König and Montanari 2001; Ferrari et al. 2001; Lanese 2002), that 1 Essentially σ is a most general unifier of φ, when it is considered as a set of equations. Mapping Fusion and SHR into Logic Programming 7 allows edges to expose node references together with actions, and nodes whose references are matched during synchronization are unified. We will give a formal description of HSHR as labelled transition system, but first of all we need an algebraic representation for graphs. An edge is an atomic item with a label and with as many ordered tentacles as the rank rank(L) of its label L. A set of nodes, together with a set of such edges, forms a graph if each edge is connected, by its tentacles, to its attachment nodes. We will consider graphs up to isomorphisms that preserve2 nodes, labels of edges, and connections between edges and nodes. Now, we present a definition of graphs as syntactic judgements, where nodes correspond to names and edges to basic terms of the form L(x1 , . . . , xn ), where the xi are arbitrary names and rank(L) = n. Also, nil represents the empty graph and | is the parallel composition of graphs (merging nodes with the same name). Definition 6 (Graphs as syntactic judgements) Let N be a fixed infinite set of names and LE a ranked alphabet of labels. A syntactic judgement (or simply a judgement) is of the form Γ ⊢ G where: 1. Γ ⊆ N is the (finite) set of nodes in the graph. 2. 
G is a term generated by the grammar G : : = L(~x) | G1 |G2 | nil where ~x is a vector of names and L is an edge label with rank(L) = |~x|. We denote with n the function that given a graph G returns the set n(G) of all the names in G. We use the notation Γ, x to denote the set obtained by adding x to Γ, assuming x ∈ / Γ. Similarly, we write Γ1 , Γ2 to state that the resulting set of names is the disjoint union of Γ1 and Γ2 . Definition 7 (Structural congruence and well-formed judgements) The structural congruence ≡ on terms G obeys the following axioms: (AG1) (G1 |G2 )|G3 ≡ G1 |(G2 |G3 ) (AG2) G1 |G2 ≡ G2 |G1 (AG3) G|nil ≡ G The well-formed judgements Γ ⊢ G over LE and N are those where n(G) ⊆ Γ. Axioms (AG1),(AG2) and (AG3) define respectively the associativity, commutativity and identity over nil for operation |. Well-formed judgements up to structural axioms are isomorphic to graphs up to isomorphisms. For a formal statement of the correspondence see Hirsch (2003). We will now present the steps of a SHR computation. 2 In our approach nodes usually represent free names, and they are preserved by isomorphisms. 8 Ivan Lanese and Ugo Montanari Definition 8 (SHR transition) Let Act be a set of actions. For each action a ∈ Act, let ar(a) be its arity. A SHR transition is of the form: Λ,π Γ ⊢ G −−→ Φ ⊢ G′ where Γ ⊢ G and Φ ⊢ G′ are well-formed judgements for graphs, Λ : Γ → (Act×N ∗ ) is a total function and π : Γ → Γ is an idempotent substitution. Function Λ assigns to each node x the action a and the vector ~y of node references exposed on x by the transition. If Λ(x) = (a, ~y ) then we define actΛ (x) = a and nΛ (x) = ~y . We require that ar(actΛ (x)) = | nΛ (x)|, namely the arity of the action must equal the length of the vector. We define: • n(Λ) = {z|∃x.z ∈ nΛ (x)} set of exposed names; • ΓΛ = n(Λ) \ Γ set of fresh names that are exposed; • n(π) = {x|∃x′ 6= x.xπ = x′ π} set of fused names. Substitution π allows to merge nodes. Since π is idempotent, it maps every node into a standard representative of its equivalence class. We require that ∀x ∈ n(Λ).xπ = x, i.e. only references to representatives can be exposed. Furthermore we require Φ ⊇ Γπ ∪ ΓΛ , namely nodes are never erased. Nodes in ΓInt = Φ \ (Γπ ∪ ΓΛ ) are fresh internal nodes, silently created in the transition. We require that no isolate, internal nodes are created, namely ΓInt ⊆ n(G′ ). Note that the set of names Φ of the resulting graph is fully determined by Γ, Λ, π and G′ thus we will have no need to write its definition explicitly in the inference rules. Notice also that we can write a SHR transition as: Λ,π Γ ⊢ G −−→ Γπ, ΓΛ , ΓInt ⊢ G′ . We usually assume to have an action ǫ ∈ Act of arity 0 to denote “no synchronization”. We may not write explicitly π if it is the identity, and some actions if they are (ǫ, hi). Furthermore we use Λǫ to denote the function that assigns (ǫ, hi) to each node in Γ (note that the dependence on Γ is implicit). We derive SHR transitions from basic productions using a set of inference rules. Productions define the behaviour of single edges. Definition 9 (Production) A production is a SHR transition of the form: Λ,π x1 , . . . , xn ⊢ L(x1 , . . . , xn ) −−→ Φ ⊢ G where all xi , i = 1 . . . n are distinct. Productions are considered as schemas and so they are α-convertible w.r.t. names in {x1 , . . . , xn } ∪ Φ. Mapping Fusion and SHR into Logic Programming 9 We will now present the set of inference rules for Hoare synchronization. 
The intuitive idea of Hoare synchronization is that all the edges connected to a node must expose the same action on that node. Definition 10 (Rules for Hoare synchronization) Λ,π (par) Γ ⊢ G1 −−→ Φ ⊢ G2 (Γ ∪ Φ) ∩ (Γ′ ∪ Φ′ ) = ∅ Λ∪Λ′ ,π∪π ′ Γ, Γ′ ⊢ G1 |G′1 −−−−−−−→ Φ, Φ′ ⊢ G2 |G′2 Λ,π (merge) Λ′ ,π ′ Γ′ ⊢ G′1 −−−→ Φ′ ⊢ G′2 Γ ⊢ G1 −−→ Φ ⊢ G2 ∀x, y ∈ Γ.xσ = yσ ∧ x 6= y ⇒ actΛ (x) = actΛ (y) Λ′ ,π ′ Γσ ⊢ G1 σ −−−→ Φ′ ⊢ G2 σρ where σ : Γ → Γ is an idempotent substitution and: (i). ρ = mgu({(nΛ (x))σ = (nΛ (y))σ|xσ = yσ} ∪ {xσ = yσ|xπ = yπ}) where ρ maps names to representatives in Γσ whenever possible (ii). ∀z ∈ Γ.Λ′ (zσ) = (Λ(z))σρ (iii). π ′ = ρ|Γσ Λǫ ,id (idle) Γ ⊢ G −−−→ Γ ⊢ G Λ,π (new) Γ ⊢ G1 −−→ Φ ⊢ G2 x∈ / Γ ~y ∩ (Γ ∪ Φ ∪ {x}) = ∅ Λ∪{(x,a,~ y)},π Γ, x ⊢ G1 −−−−−−−−−→ Φ′ ⊢ G2 A transition is obtained by composing productions, which are first applied on disconnected edges, and then by connecting the edges by merging nodes. In particular rule (par) deals with the composition of transitions which have disjoint sets of nodes and rule (merge) allows to merge nodes (note that σ is a projection into representatives of equivalence classes). The side condition requires that we have the same action on merged nodes. Definition (i) introduces the most general unifier ρ of the union of two sets of equations: the first set identifies (the representatives of) the tuples associated to nodes merged by σ, while the second set of equations is just the kernel of π. Thus ρ is the merge resulting from both π and σ. Note that (ii) Λ is updated with these merges and that (iii) π ′ is ρ restricted to the nodes of the graph which is the source of the transition. Rule (idle) guarantees that each edge can always make an explicit idle step. Rule (new) allows adding to the source graph an isolated node where arbitrary actions (with fresh names) are exposed. Λ,π Λ,π We write P (Γ ⊢ G −−→ Φ ⊢ G′ ) if Γ ⊢ G −−→ Φ ⊢ G′ can be obtained from the productions in P using Hoare inference rules. We will now present an example of HSHR computation. 10 Ivan Lanese and Ugo Montanari Fig. 1. Star graph x S v y w S S S z Fig. 2. Productions ε,<> y y r,<w> y y C C C S w z x x ε,<> C x r,<w> x Example 1 (Hirsch et al. 2000 ) We show now how to use HSHR to derive a 4 elements ring starting from a one element ring, and how we can then specify a reconfiguration that transforms the ring into the star graph in Figure 1. We use the following productions: (x,ǫ,hi),(y,ǫ,hi) x, y ⊢ C(x, y) −−−−−−−−−→ x, y, z ⊢ C(x, z)|C(z, y) (x,r,hwi),(y,r,hwi) x, y ⊢ C(x, y) −−−−−−−−−−−−→ x, y, w ⊢ S(y, w) that are graphically represented in Figure 2. Notice that Λ is represented by decorating every node x in the left hand with actΛ (x) and nΛ (x). The first rule allows to create rings, in fact we can create all rings with computations like: x ⊢ C(x, x) → x, y ⊢ C(x, y)|C(y, x) → → x, y, z ⊢ C(x, y)|C(y, z)|C(z, x) → → x, y, z, v ⊢ C(x, y)|C(y, z)|C(z, v)|C(v, x) In order to perform the reconfiguration into a star we need rules with nontrivial actions, like the second one. This allows to do: (x,r,hwi),(y,r,hwi),(z,r,hwi),(v,r,hwi) x, y, z, v ⊢ C(x, y)|C(y, z)|C(z, v)|C(v, x) −−−−−−−−−−−−−−−−−−−−−−−−→ → x, y, z, v, w ⊢ S(x, w)|S(y, w)|S(z, w)|S(v, w) Note that if an edge C is rewritten into an edge S, then all the edges in the ring must use the same production, since they must synchronize via action r. They must agree also on nΛ (x) for every x, thus all the newly created nodes are merged. The whole transition is represented in Figure 3. 
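The reconfiguration step can also be read operationally: every C-edge applies the second production, the Hoare condition forces the same action r on every shared node, and the unification of the exposed references merges all of the freshly created nodes into one centre. The toy Python sketch below replays exactly this step for the 4-element ring; it is only an illustration of the synchronization-and-unification mechanism, not the general HSHR inference system, and all identifiers in it are ours.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def ring_to_star(ring):
    # ring: list of edges ('C', (x, y)); every edge applies the production exposing (r, w)
    # on both attachment nodes and rewriting itself to S(y, w), with w fresh per instance.
    uf, exposed, result = UnionFind(), {}, []
    for i, (_, (x, y)) in enumerate(ring):
        w = 'w%d' % i                      # fresh node created by this production instance
        result.append(('S', (y, w)))
        for node in (x, y):                # expose (r, w) on both attachment nodes
            if node in exposed:            # Hoare: the same action r meets itself here,
                uf.union(exposed[node], w) # and the exposed references are unified
            else:
                exposed[node] = w
    # read the result modulo the unification computed during synchronization
    return [(lab, tuple(uf.find(a) for a in nodes)) for lab, nodes in result]

ring = [('C', ('x', 'y')), ('C', ('y', 'z')), ('C', ('z', 'v')), ('C', ('v', 'x'))]
print(ring_to_star(ring))    # four S-edges sharing one merged centre node, as in Figure 1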
11 Mapping Fusion and SHR into Logic Programming Fig. 3. Ring creation and reconfiguration to star x x x x C C S C v x C C C C y C C S C S y z y w S y v z z It is easy to show that if we can derive a transition T , then we can also derive every transition obtainable from T by applying an injective renaming. Lemma 1 Let P be a set of productions and σ an injective substitution. Λ,π P (Γ ⊢ G −−→ Φ ⊢ G′ ) iff: Λ′ ,π ′ P (Γσ ⊢ Gσ −−−→ Φσ ⊢ G′ σ) where Λ′ (xσ) = (Λ(x))σ and xσπ ′ = xπσ. Proof By rule induction. 2.3 Logic programming In this paper we are not interested in logic computations as refutations of goals for problem solving or artificial intelligence, but we consider logic programming (Lloyd 1993) as a goal rewriting mechanism. We can consider logic subgoals as concurrent communicating processes that evolve according to the rules defined by the clauses and that use unification as the fundamental interaction primitive. A presentation of this kind of use of logic programming can be found in Bruni et al. (2001). In order to stress the similarities between logic programming and process calculi we present a semantics of logic programming based on a labelled transition system. Definition 11 We have for clauses (C) and goals (G) the following grammar: C ::=A ← G G : : = G, G | A |  where A is a logic atom, “,” is the AND conjunction and  is the empty goal. We can assume “,” to be associative and commutative and with unit . The informal semantics of A ← B1 , . . . , Bn is “for every assignment of the variables, if B1 , . . . , Bn are all true, then A is true”. A logic program is a set of clauses. Derivations in logic programming are called SLD-derivations (from “Linear resolution for Definite clauses with Selection function”). We will also consider partial SLD-derivations. 12 Ivan Lanese and Ugo Montanari Definition 12 (Partial SLD-derivation) Let P be a logic program. We define a step of a SLD-resolution computation using the following rules: H ← B1 , . . . , B k ∈ P θ = mgu({A = Hρ}) atomic goal P A− → (B1 , . . . , Bk )ρθ where ρ is an injective renaming of variables such that all the variables in the clause variant (H ← B1 , . . . , Bk )ρ are fresh. θ P P θ G− →F θ G, G′ − → F, G′ θ conjunctive goal We will omit P if P is clear from the context. A partial SLD-derivation of P ∪ {G} is a sequence (possibly empty) of steps of SLD-resolution allowed by program P with initial goal G. 3 Mapping Fusion Calculus into Synchronized Hyperedge Replacement In this section we present a mapping from Fusion Calculus to HSHR. This mapping is quite complex since there are many differences between the two formalisms. First of all we need to bridge the gap between a process calculus and a graph transformation formalism, and this is done by associating edges to sequential processes and by connecting them according to the structure of the system. Moreover we need to map Milner synchronization, which is used in Fusion Calculus, into Hoare synchronization. In order to do this we define some connection structures that we call amoeboids which implement Milner synchronization using Hoare connectors. Since Hoare synchronization involves all the edges attached to a node while Milner one involves just pairs of connectors, we use amoeboids to force each node to be shared by exactly two edges (one if the node is an interface to the outside) since in that case the behaviour of Hoare and Milner synchronization is similar. 
An amoeboid is essentially a router (with no path-selection abilities) that connects an action with the corresponding coaction. This is possible since in HSHR an edge can do many synchronizations on different nodes at the same time. Finally, some restrictions have to be imposed on HSHR in order to have an interleaving behaviour as required by Fusion Calculus. We define the translation on processes in the form (~x)P where P is the parallel composition of sequential processes. Notice that every process can be reduced to the above form by applying the structural axioms: recursive definitions which are not inside a sequential agent have to be unfolded once and scope operators which are not inside a sequential agent must be taken to the outside. We define the translation also in the case (~x)P is not closed w.r.t. names (but it must be closed w.r.t. process variables) since this case is needed for defining productions. In the form (~x)P we assume that the ordering of names in (~x) is fixed, dictated Mapping Fusion and SHR into Logic Programming 13 by some structural condition on their occurrences in P . For our purposes, it is also convenient to express process P in (~x)P as P = P ′ σ, where P ′ is a linear agent, i.e. every name in it appears once. We assume that the free names of P ′ are fresh, namely fn(P ′ ) ∩ fn(P ) = ∅, and again structurally ordered. The corresponding vector is called fnarray(P ′ ). The decomposition P = P ′ σ highlights the role of amoeboids. In fact, in the translation, substitution σ is made concrete by a graph consisting of amoeboids, which implement a router for every name in fn(P ). More precisely, we assume the existence of edge labels mi and n of ranks i = 2, 3, . . . and 1 respectively. Edges labelled by mi implement routers among i nodes, while n edges “close” restricted names ~x in (~x)P ′ σ. Finally, linear sequential processes S in P ′ must also be given a standard form. In fact, they will be modelled in the HSHR translation by edges labelled by LS , namely by a label encapsulating S itself. However in the derivatives of a recursive process the same sequential process can appear with different names an unbound number of times. To make the number of labels (and also of productions, as we will see in short) finite, for every given process, we choose standard names x1 , . . . , xn and order them structurally: S = Ŝ(x1 , . . . , xn )ρS with S1 = S2 ρ implying Ŝ1 = Ŝ2 and ρS1 = ρρS2 . We can now define the translation from Fusion Calculus to HSHR. The translation is parametrized by the nodes in the vectors ~v and w ~ we choose to represent the names in ~x and fnarray(P ′ ). We denote with x∈S Gx the parallel composition of graphs Gx for each x ∈ S. Definition 13 (Translation from Fusion Calculus to HSHR)  J(~x)P ′ σK~v ,w~ = Γ ⊢ JP ′ K{w/ ~ fnarray(P ′ )}|JσK{~v /~x}{w/ ~ fnarray(P ′ )}| x∈~v n(x) where: |~v | = |~x|, |w| ~ = | fnarray(P ′ )|, ~v ∩ w ~ =∅ and with: Γ = fn((~x)P ′ σ), ~v , w. ~ J0K = nil JSK = LŜ (x1 , . . . , xn )ρS with n = | fn(Ŝ)| JP1 |P2 K = JP1 K|JP2 K  JσK = x∈Im(σ) mk+1 (x, σ −1 (x)) where k = |σ −1 (x)|. In the above translation, graph JP ′ K consists of a set of disconnected edges, one for each sequential process of (~x)P ′ σ. The translation produces a graph with three kinds of nodes. The nodes of the first kind are those in w. ~ Each of them is adjacent to exactly two edges, one representing a sequential process of P ′ , and the other an amoeboid. 
Also the nodes in ~v are adjacent to two edges, an amoeboid and an n 14 Ivan Lanese and Ugo Montanari Fig. 4. Amoeboids for σ r y m2 w m3 z x edge. Finally the nodes in fn((~x)P ′ σ) are adjacent only to an amoeboid. As mentioned above, translation JσK builds an amoeboid for every free name x of P ′ σ: it has k + 1 tentacles, where k are the occurrences of x in P ′ σ, namely the free names of P ′ mapped to it. Notice that the choice of the order within σ −1 (x) is immaterial, since we will see that amoeboids are commutative w.r.t. their tentacles. However, to make the translation deterministic, σ −1 (x) could be ordered according to some fixed precedence of the names. Example 2 (Translation of a substitution) Let σ = {y/x, y/z, r/w}. The translation of σ is in Figure 4. Example 3 (Translation of a process) Let us consider the (closed) process (uz)uz.0| rec X.(x)ux.(ux.0|X). We can write it in the form (~x)P as: (uzy)uz.0|uy.(uy.0| rec X.(x)ux.(ux.0|X)) Furthermore we can decompose P into P ′ σ where: P ′ = u1 z1 .0|u2 y1 .(u3 y2 .0| rec X.(x)u4 x.(u5 x.0|X)) σ = {u/u1, z/z1 , u/u2, y/y1 , u/u3 , y/y2 , u/u4 , u/u5}. We can now perform the translation. We choose ~v = (u, z, y) and w ~ = (u1 , z1 , u2 , y1 , u3 , y2 , u4 , u5 ): J(~x)P ′ σK~v ,w~ = u, z, y, u1, z1 , u2 , y1 , u3 , y2 , u4 , u5 ⊢ Lx1 x2 .0 (u1 , z1 )|Lx1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)) (u2 , y1 , u3 , y2 , u4 , u5 )| m6 (u, u1 , u2 , u3 , u4 , u5 )|m2 (z, z1 )|m3 (y, y1 , y2 )|n(u)|n(z)|n(y) Now we define the productions used in the HSHR system. We have two kinds of productions: auxiliary productions that are applied to amoeboid edges and process productions that are applied to process edges. Before showing process productions we need to present the translation from Fusion Calculus prefixes into HSHR transition labels. Definition 14 The translation from Fusion Calculus prefixes into HSHR transition labels is the following: JαK = (Λ, π) where if α = u~x then Λ(u) = (inn , ~x), Λ(x) = (ǫ, hi) if x 6= u with n = |~x|, π = id if α = u~x then Λ(u) = (outn , ~x), Λ(x) = (ǫ, hi) if x 6= u and n = |~x|, π = id 15 Mapping Fusion and SHR into Logic Programming if α = φ then Λ = Λǫ and π is any substitutive effect of φ. We will write Ju~xK and Ju~xK as (u, inn , ~x) and (u, outn , ~x) respectively. Definition 15 (Process productions) We have a process production for each prefix at the top level of a linear standard P sequential process (which has {x1 , . . . , xn } as free names). Let i αi .Pi be such a process. Its productions can be derived with the following inference rule: JPj ξK~v ,w~ = Γ ⊢ G Jαj K = (Λ, π) Jαj K x1 , . . . , xn ⊢ LP i αi .Pi (x1 , . . . , xn ) −−−→ Γ, Γ′ ⊢ G|JξK|JπK|  x∈Γ′′ n(x) if ~v ∪ w, ~ {x1 , . . . , xn } and fn(Pj )ξ are pairwise disjoint with ξ injective renaming from fn(Pj ) to fresh names, Γ′ = x1 , . . . , xn and Γ′′ = Γ′ \ (n(Λ) ∪ n(π) ∪ fn(Pj )). We add some explanations on the derivable productions. Essentially, if αj .Pj is a possible choice, the edge labelled by the process can have a transition labelled by Jαj K to something related to JPj K~v ,w~ . We use Pj ξ instead of Pj (and then we add the translation of ξ) to preserve the parity of the number of amoeboid edges on each path (see Definition 17). The parameter ~v of the translation contains fresh nodes for restricted names that are taken to the top level during the normalization of Pj while w ~ contains the free names in the normalization of Pj (note that some of them may be duplicated w.r.t. Pj , if this one contains recursion). 
If αj is a fusion φ, according to the semantics of the calculus, a substitutive effect π of it should be applied to Pj , and this is obtained by adding the amoeboids JπK in parallel. Furthermore, Γ ⊢ G must be enriched in other two ways: since nodes can never be erased, nodes which are present in the sequential process, i.e. the nodes in Γ′ , must be added to Γ. Also “close” n edges must be associated to forgotten nodes (to forbid further transitions on them and to have them connected to exactly two edges in the result of the transition), provided they are not exposed, i.e. to nodes in Γ′′ . Note that when translating the RHS (~x)P σ of productions we may have names in P σ which occur just once. Since they are renamed by σ and ξ, they will produce in the translation some chains of m2 connectors of even length, which, as we will see shortly, are behaviourally equivalent to simple nodes. For simplicity, in the examples we will use the equivalent productions where these connectors have been removed and the nodes connected by them have been merged. Example 4 (Translation of a production) Let us consider firstly the simple agent x1 x2 .0. The only production for this agent (where ~v = w ~ = hi) is: (x1 ,out1 ,hx2 i) x1 , x2 ⊢ Lx1 x2 .0 (x1 , x2 ) −−−−−−−−−→ x1 , x2 ⊢ n(x1 ) where we closed node x1 but not node x2 since the second one is exposed on x1 . Let us consider a more complex example: x1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)). The process x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)ξ where xi ξ = x′i can be transformed 16 Ivan Lanese and Ugo Montanari into: (y)y1 y2 .0|y3 y4 .(y5 y6 .0| rec X.((x)y7 x.(y8 x.0|X)))σ where σ = {x′3 /y1 , x′4 /y2 , x′5 /y3 , y/y4 , x′6 /y5 , y/y6 , x′5 /y7 , x′6 /y8 }. Its translation (with ~v = hyi and w ~ = hy1 , y2 , y3 , y4 , y5 , y6 , y7 , y8 i) is: x′3 , x′4 , x′5 , x′6 , y1 , y2 , y3 , y4 , y5 , y6 , y7 , y8 , y ⊢ Lx1 x2 .0 (y1 , y2 )|Lx1 x2 .(x3 x4 .0| rec X.((x)x5 x.(x6 x.0|X))) (y3 , y4 , y5 , y6 , y7 , y8 )| m2 (x′3 , y1 )|m2 (x′4 , y2 )|m3 (x′5 , y3 , y7 )|m3 (x′6 , y5 , y8 )|m3 (y, y4 , y6 )|n(y) Thus the production is: x1 , x2 , x3 , x4 , x5 , x6 ⊢ Lx1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)) (x1 , x2 , x3 , x4 , x5 , x6 ) (x1 ,in1 ,hx2 i) −−−−−−−−→ x1 , x2 , x3 , x4 , x5 , x6 , y3 , y4 , y5 , y6 , y7 , y8 , y, x′5 , x′6 ⊢ Lx1 x2 .0 (x3 , x4 )|Lx1 x2 .(x3 x4 .0| rec X.((x)x5 x.(x6 x.0|X))) (y3 , y4 , y5 , y6 , y7 , y8 )| m3 (x′5 , y3 , y7 )|m3 (x′6 , y5 , y8 )|m3 (y, y4 , y6 )|n(y)|m2 (x5 , x′5 )|m2 (x6 , x′6 )|n(x1 ) where for simplicity we collapsed y1 with x3 and y2 with x4 . We will now show the productions for amoeboids. Definition 16 (Auxiliary productions) We have auxiliary productions of the form: (x1 ,inn ,~ y1 ),(x2 ,outn ,~ y2 ) Γ ⊢ mk (Γ) −−−−−−−−−−−−−−−→ Γ, ~y1 , ~y2 ⊢ mk (Γ)|  i=1...|~ y1 | m2 (~y1 [i], ~y2 [i]) We need such a production for each k and n and each pair of nodes x1 and x2 in Γ where Γ is a chosen tuple of distinct names with k components and ~y1 and ~y2 are two vectors of fresh names such that |~y1 | = |~y2 | = n. Note that we also have the analogous production where x1 and x2 are swapped. In particular, the set of productions for a mk edge is invariant w.r.t. permutations of the tentacles, modelling the fact that its tentacles are essentially unordered. We have no productions for edges labelled with n, which thus forbid any synchronization. The notion of amoeboid introduced previously is not sufficient for our purposes. 
In fact, existing amoeboids can be connected using m2 edges and nodes that are no more used can be closed using n edges. Thus we present a more general definition of amoeboid for a set of nodes and we show that, in the situations of interest, these amoeboids behave exactly as the simpler mi edges. Definition 17 (Structured amoeboid ) Given a vector of nodes ~s, a structured amoeboid M (~s) for the set of nodes S containing all the nodes in ~s is any connected graph composed by m and n edges that satisfies the following properties: • its set of nodes is of the form S ∪ I, with S ∩ I = ∅; • nodes in S are connected to exactly one edge of the amoeboid; Mapping Fusion and SHR into Logic Programming 17 • nodes in I are connected to exactly two edges of the amoeboid; • the number of edges composing each path connecting two distinct nodes of S is odd. Nodes in S are called external, nodes in I are called internal. We consider equivalent all the amoeboids with the same set S of external nodes. The last condition is required since each connector inverts the polarity of the synchronization, and we want amoeboids to invert it. Note that m|S| (~s) is an amoeboid for S. Lemma 2 If M (~s) is a structured amoeboid for S, the transitions for M (~s) which are non idle and expose non ǫ actions on at most two nodes x1 , x2 ∈ S are of the form:   Λ,id S, I ⊢ M (~s) −−−→ S, I, I ′ , ~y1 , ~y2 ⊢ M (~s)| i=1...|~y1 | M (~y1 [i], ~y2 [i])| M̃ (∅) where Λ(x1 ) = (inn , ~y1 ) and Λ(x2 ) = (outn , ~y2 ) (non trivial actions may be exposed also on some internal nodes) and ~y1 and ~y2 are two vectors of fresh names such that |~y1 | = |~y2 | = n. Here M̃ (∅) contains rings of m2 connectors connected only to fresh nodes which thus are disconnected from the rest of the graph. We call them pseudoamoeboids. Furthermore we have at least one transition of this kind for each choice of x1 , x2 , ~y1 and ~y2 . Proof See Appendix A. Thanks to the above result we will refer to structured amoeboids simply as amoeboids. We can now present the results on the correctness and completeness of our translation. Theorem 1 (Correctness) For each closed fusion process P and each pair of vectors ~v and w ~ satisfying the ′ constraints of Definition 13, if P → P then there exist Λ, Γ and G such that Λ,id ~′) JP K~v ,w~ −−−→ Γ ⊢ G. Furthermore Γ ⊢ G is equal to JP ′ Kv~′ ,w~′ (for some v~′ and w up to isolated nodes, up to injective renamings, up to equivalence of amoeboids (Γ ⊢ G can have a structured amoeboid where JP ′ Kv~′ ,w~′ has a simple one) and up to pseudoamoeboids. Proof The proof is by rule induction on the reduction semantics. See Appendix A. 18 Ivan Lanese and Ugo Montanari Theorem 2 (Completeness) Λ,π For each closed fusion process P and each pair of vectors ~v and w ~ if JP K~v ,w~ −−→ Γ ⊢ G with a HSHR transition that uses exactly two productions for communication or one production for a fusion action (plus any number of auxiliary productions) ~ ′ ) up to isolate then P → P ′ and Γ ⊢ G is equal to JP ′ Kv~′ ,w~′ (for some v~′ and w nodes, up to injective renamings, up to equivalence of amoeboids (Γ ⊢ G can have a structured amoeboid where JP ′ Kv~′ ,w~′ has a simple one) and up to pseudoamoeboids. Proof See Appendix A. These two theorems prove that the allowed transitions in the HSHR setting correspond to reductions in the Fusion Calculus setting. Note that in HSHR we must consider only transitions where we have either two productions for communication or one production for a fusion action. 
This is necessary to model the interleaving behaviour of Fusion Calculus within the HSHR formalism, which is concurrent. Conversely, one can consider the Fusion Calculus counterpart of all the HSHR transitions: these correspond to concurrent executions of many fusion reductions, and one can give a semantics for Fusion Calculus with that behaviour. However, in that case the notion of equivalence of amoeboids is no longer valid, since different amoeboids allow different degrees of concurrency, and we thus need to constrain them. The simplest case is to have only simple amoeboids, that is, no concurrency inside a single channel, but there is no way to force normalization of amoeboids to happen before undesired transitions can occur. The opposite case (all the processes can interact in pairs, also on the same channel) can be realized, but it requires more complex auxiliary productions.
Note that the differences between the final graph of a transition and the translation of the final process of a Fusion Calculus reduction are not important, since the two graphs have essentially the same behaviours (see Lemma 1 for the effect of an injective renaming and Lemma 2 for the characterization of the behaviour of a complex amoeboid; isolated nodes and pseudoamoeboids are not relevant since different connected components evolve independently). Thus the previous results can be extended from transitions to whole computations.
Note that in the HSHR model the behavioural part of the system is represented by productions while the topological part is represented by graphs. Thus we have a convenient separation between the two aspects.
Example 5 (Translation of a transition)
We will now show an example of the translation. Let us consider the process:
(uxyzw)(Q(x, y, z)|uxy.R(u, x)|uzw.S(z, w))
Note that it is already in the form (~x)P . It can do the following transition:
(uxyzw)(Q(x, y, z)|uxy.R(u, x)|uzw.S(z, w)) → (uxy)(Q(x, y, z)|R(u, x)|S(z, w)){x/z, y/w}
We can write P in the form:
(Q(x1 , y1 , z1 )|u1 x2 y2 .R(u2 , x3 )|u3 z2 w1 .S(z3 , w2 ))σ
where:
σ = {x/x1 , y/y1 , z/z1 , u/u1 , x/x2 , y/y2 , u/u2 , x/x3 , u/u3 , z/z2 , w/w1 , z/z3 , w/w2 }.
A translation of the starting process is:
u, x, y, w, z, x1 , y1 , z1 , u1 , x2 , y2 , u2 , x3 , u3 , z2 , w1 , z3 , w2 ⊢ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 )|Lx1 x2 x3 .R(x4 ,x5 ) (u1 , x2 , y2 , u2 , x3 )|Lx1 x2 x3 .S(x4 ,x5 ) (u3 , z2 , w1 , z3 , w2 )|m4 (u, u1 , u2 , u3 )|m4 (x, x1 , x2 , x3 )|m3 (y, y1 , y2 )|m4 (z, z1 , z2 , z3 )|m3 (w, w1 , w2 )|n(u)|n(x)|n(y)|n(w)|n(z)
A graphical representation is in Figure 5.
Fig. 5. (uxyzw)(Q(x, y, z)|uxy.R(u, x)|uzw.S(z, w)).
We have the following process productions:
(y1 ,out2 ,hy2 ,y3 i)
y1 , y2 , y3 , y4 , y5 ⊢ Lx1 x2 x3 .R(x4 ,x5 ) (y1 , y2 , y3 , y4 , y5 ) −−−−−−−−−−−→ y1 , y2 , y3 , y4 , y5 ⊢ LR(x1 ,x2 ) (y4 , y5 )|n(y1 )
(y1 ,in2 ,hy2 ,y3 i)
y1 , y2 , y3 , y4 , y5 ⊢ Lx1 x2 x3 .S(x4 ,x5 ) (y1 , y2 , y3 , y4 , y5 ) −−−−−−−−−−→ y1 , y2 , y3 , y4 , y5 ⊢ LS(x1 ,x2 ) (y4 , y5 )|n(y1 )
In order to apply (suitable variants of) these two productions concurrently we have to synchronize their actions.
This can be done since in the actual transition actions are exposed on nodes u1 and u3 respectively, which are connected to the same m4 edge. Thus the synchronization can be performed (see Figure 6) and we obtain as final graph:
u, x, y, w, z, x1 , y1 , z1 , u1 , x2 , y2 , u2 , x3 , u3 , z2 , w1 , z3 , w2 ⊢ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 )|LR(x1 ,x2 ) (u2 , x3 )|n(u1 )|LS(x1 ,x2 ) (z3 , w2 )|n(u3 )|m4 (u, u1 , u2 , u3 )|m4 (x, x1 , x2 , x3 )|m3 (y, y1 , y2 )|m4 (z, z1 , z2 , z3 )|m3 (w, w1 , w2 )|m2 (x2 , z2 )|m2 (y2 , w1 )|n(u)|n(x)|n(y)|n(w)|n(z)
which is represented in Figure 7.
Fig. 6. Graph with actions.
Fig. 7. Resulting graph.
The amoeboids connect the following tuples of nodes: (u, u1 , u2 , u3 ), (x, x1 , x2 , x3 , z2 , z, z1 , z3 ), (w, w1 , w2 , y2 , y, y1 ). Thus, if we connect these sets of nodes with simple amoeboids instead of with complex ones, we have, up to injective renamings, a translation of (uxy)(Q(x, y, x)|R(u, x)|S(x, y)) as required.
Example 6 (Translation of a transition with recursion)
We will show here an example that uses recursion. Let us consider the closed process (uz)uz| rec X.(x)ux.(ux.0|X). The translation of this process, as shown in Example 3, is:
u, z, y, u1 , z1 , u2 , y1 , u3 , y2 , u4 , u5 ⊢ Lx1 x2 .0 (u1 , z1 )|Lx1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)) (u2 , y1 , u3 , y2 , u4 , u5 )|m6 (u, u1 , u2 , u3 , u4 , u5 )|m2 (z, z1 )|m3 (y, y1 , y2 )|n(u)|n(z)|n(y)
We need the productions for two sequential edges (for the first step): x1 x2 .0 and x1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)). The productions are the ones of Example 4 (we write them here in a suitably α-converted form):
(u1 ,out1 ,hz1 i)
u1 , z1 ⊢ Lx1 x2 .0 (u1 , z1 ) −−−−−−−−−→ u1 , z1 ⊢ n(u1 )
(u2 ,in1 ,hy1 i)
u2 , y1 , u3 , y2 , u4 , u5 ⊢ Lx1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)) (u2 , y1 , u3 , y2 , u4 , u5 ) −−−−−−−→ u2 , y1 , u3 , y2 , u4 , u5 , w1 , w2 , w3 , w4 , w5 , w6 , y ′ , u′4 , u′5 ⊢ Lx1 x2 .0 (u3 , y2 )|Lx1 x2 .(x3 x4 .0| rec X.((x)x5 x.(x6 x.0|X))) (w1 , w2 , w3 , w4 , w5 , w6 )|m3 (u′4 , w1 , w5 )|m3 (u′5 , w3 , w6 )|m3 (y ′ , w2 , w4 )|n(y ′ )|m2 (u4 , u′4 )|m2 (u5 , u′5 )|n(u2 )
By using these two productions and a production for m6 (the other edges stay idle) we have the following transition:
u, z, y, u1 , z1 , u2 , y1 , u3 , y2 , u4 , u5 ⊢ Lx1 x2 .0 (u1 , z1 )|Lx1 x2 .(x3 x4 .0| rec X.(x)x5 x.(x6 x.0|X)) (u2 , y1 , u3 , y2 , u4 , u5 )|m6 (u, u1 , u2 , u3 , u4 , u5 )|m2 (z, z1 )|m3 (y, y1 , y2 )|n(u)|n(z)|n(y)
(u1 ,out1 ,hz1 i)(u2 ,in1 ,hy1 i)
−−−−−−−−−−−−−−−−−→
u, y, z, x, u1 , z1 , u2 , y1 , u3 , y2 , u4 , u5 , w1 , w2 , w3 , w4 , w5 , w6 , y ′ , u′4 , u′5 ⊢ n(u1 )|Lx1 x2 .0 (u3 , y2 )|Lx1 x2 .(x3 x4 .0| rec X.((x)x5 x.(x6 x.0|X))) (w1 , w2 , w3 , w4 , w5 , w6 )|m3 (u′4 , w1 , w5 )|m3 (u′5 , w3 , w6 )|m3 (y ′ , w2 , w4 )|n(y ′ )|m2 (u4 , u′4 )|m2 (u5 , u′5 )|n(u2 )|m6 (u, u1 , u2 , u3 , u4 , u5 )|m2 (z1 , y1 )|m2 (z, z1 )|m3 (y, y1 , y2 )|n(u)|n(z)|n(y)
The resulting graph is, up to injective renaming and equivalence of amoeboids, a translation of:
(uyy ′ )(uy.0|uy ′ .(uy ′ .0| rec X.(x)ux.(ux.0|X)))
as required.
We end this section with a simple schema of the correspondence between the two models.

Fusion               HSHR
Closed process       Graph
Sequential process   Edge
Prefix execution     Production
Reduction            Transition
Name                 Amoeboid
0                    Nil

As shown in the table, we represent (closed) processes by graphs where edges are sequential processes and amoeboids model names. The inactive process 0 is the empty graph nil. From a dynamic point of view, Fusion reductions are modelled by HSHR transitions obtained by composing productions that represent prefix executions.

4 Mapping Hoare SHR into logic programming

We will now present a mapping from HSHR into a subset of logic programming called Synchronized Logic Programming (SLP). The idea is to compose this mapping with the previous one, obtaining a mapping from Fusion Calculus into logic programming.

4.1 Synchronized Logic Programming

In this subsection we present Synchronized Logic Programming. SLP has been introduced because logic programming allows for many execution strategies and for complex interactions. Essentially, SLP is obtained from standard logic programming by adding a mechanism of transactions. The approach is similar to the zero-safe nets approach (Bruni and Montanari 2000) for Petri nets. In particular, we consider function symbols as resources that can be used only inside a transaction. A transaction can thus end only when the goal contains just predicates and variables. During a transaction, which is called a big-step in this setting, each atom can be rewritten at most once. If a transaction cannot be completed, then the computation is not allowed. A computation is thus a sequence of big-steps. This synchronized flavour of logic programming corresponds to HSHR since:
• used goals correspond to graphs (goal-graphs);
• clauses in programs correspond to HSHR productions (synchronized clauses);
• resulting computations model HSHR computations (synchronized computations).
Definition 18 (Goal-graph)
We call goal-graph a goal which has no function symbols (constants are considered as functions of arity 0).
Definition 19 (Synchronized program)
A synchronized program is a finite set of synchronized rules, i.e. definite program clauses such that:
• the body of each rule is a goal-graph;
• the head of each rule is A(t1 , . . . , tn ) where ti is either a variable or a single function symbol (of arity at least 1) applied to variables. If it is a variable then it also appears in the body of the clause.
Example 7
q(f (x), y) ← p(x, y): synchronized rule;
q(f (x), y) ← p(x, f (y)): not synchronized, since p(x, f (y)) is not a goal-graph;
q(g(f (x)), y) ← p(x, y): not synchronized, since it contains nested functions;
q(f (x), y, f (z)) ← p(x): not synchronized, since y is an argument of the head predicate but it does not appear in the body;
q(f (x), f (z)) ← p(x): synchronized, even if z does not appear in the body.
In the mapping, the transaction mechanism is used to model the synchronization of HSHR, where edges can be rewritten only if the synchronization constraints are satisfied. In particular, a clause A(t1 , . . . , tn ) ← B1 , . . . , Bn will represent a production where the head predicate A is the label of the edge in the left-hand side, and the body B1 , . . . , Bn is the graph in the right-hand side. Term ti in the head represents the action occurring at xi , if A(x1 , . . . , xn ) is the edge matched by the production.
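As a concrete illustration (ours, not part of the original development), the production for the simple agent x1 x2 .0 given in Example 4 could be rendered as a Prolog clause as follows; the predicate name seq_out_nil is a purely illustrative choice, and the precise translation scheme is the one given in Definition 23 below.

% Hedged sketch: the production
%   x1, x2 |- L_{x1 x2 .0}(x1, x2)  --(x1, out1, <x2>)-->  x1, x2 |- n(x1)
% written as a clause. The function symbol out1 carries, as its first
% argument, the node corresponding to x1 after the transition and, as its
% second argument, the transmitted node x2; the body is the translation of
% the right-hand-side graph and contains no function symbols.
seq_out_nil(out1(X1, X2), X2) :- n(X1).

When an atom is resolved against such a clause, the node in the first position gets bound to an out1 term; any other atom sharing that node must then expose a term with the same function symbol in order to be unifiable, which is how Hoare synchronization will be enforced by unification in the mapping below.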
Intuitively, the first condition of Definition 19 says that the result of a local rewriting must be a goal-graph. The second condition forbids synchronizations with structured actions, which are not allowed in HSHR (this would correspond to allow an action in a production to synchronize with a sequence of actions from a computation of an adjacent subgraph). Furthermore it imposes that we cannot disconnect from a node without synchronizing on it 3 . Now we will define the subset of computations we are interested in. Definition 20 (Synchronized Logic Programming) Given a synchronized program P we write: θ G1 ⇒ G2 θ′ iff G1 −→* G2 and all steps performed in the computation expand different atoms of G1 , θ′ |n(G1 ) = θ and both G1 and G2 are goal-graphs. θ We call G1 ⇒ G2 a big-step and all the → steps in a big-step small-steps. A SLP computation is: G1 ⇒∗ G2 i.e. a sequence of 0 or more big-steps. 3 This condition has only the technical meaning of making impossible some rewritings in which an incorrect transition may not be forbidden because its only effect is on the discarded variable. Luckily, we can impose this condition without altering the power of the formalism, because we can always perform a special f oo action on the node we disconnect from and make sure that all the other edges can freely do the same action. For example we can rewrite q(f (x), y, f (z)) ← p(x) as q(f (x), f oo(y), f (z)) ← p(x), which is an allowed synchronized rule. An explicit translation of action ǫ can be used too. 24 Ivan Lanese and Ugo Montanari 4.2 The mapping We want to use SLP to model HSHR systems. As a first step we need to translate graphs, i.e. syntactic judgements, to goals. In this translation, edge labels are mapped into SLP predicates. Goals corresponding to graphs will have no function symbols. However function symbols will be used to represent actions. In the translation we will lose the context Γ. Definition 21 (Translation for syntactic judgements) We define the translation operator J−K as: JΓ ⊢ L(x1 , . . . , xn )K = L(x1 , . . . , xn ) JΓ ⊢ G1 |G2 K = JΓ ⊢ G1 K, JΓ ⊢ G2 K JΓ ⊢ nilK =  Sometimes we will omit the Γ part of the syntactic judgement. We can do this because it does not influence the translation. For simplicity, we suppose that the set of nodes in the SHR model coincides with the set of variables in SLP (otherwise we need a bijective translation function). We do the same for edge labels and names of predicates, and for actions and function symbols. Definition 22 Let Γ ⊢ G and Γ′ ⊢ G′ be graphs. We define the equivalence relation ∼ = in the following way: Γ ⊢ G ∼ = Γ′ ⊢ G′ iff G ≡ G′ . Observe that if two judgements are equivalent then they can be written as: Γ, Γunused ⊢ G Γ, Γ′unused ⊢ G where Γ = n(G). Theorem 3 (Correspondence of judgements and goal-graphs) The operator J−K defines an isomorphism between judgements (defined up to ∼ =) and goal-graphs. Proof The proof is straightforward observing that the operator J−K defines a bijection between representatives of syntactic judgements and representatives of goal-graphs and the congruence on the two structures is essentially the same. We now define the translation from HSHR productions to definite clauses. Definition 23 (Translation from productions to clauses) We define the translation operator J−K as: Λ,π JL(x1 , . . . , xn ) −−→ GK = L(a1 (x1 π, ~y1 ), . . . , an (xn π, ~yn )) ← JGK if Λ(xi ) = (ai , ~yi ) for each i ∈ {1, . . . , n} and if ai 6= ǫ. If ai = ǫ we write simply xi π instead of ǫ(xi π). 
25 Mapping Fusion and SHR into Logic Programming The idea of the translation is that the condition given by an action (x, a, ~y ) is represented by using the term a(xπ, ~y ) as argument in the position that corresponds to x. Notice that in this term a is a function symbol and π is a substitution. During unification, x will be bound to that term and, when other instances of x are met, the corresponding term must contain the same function symbol (as required by Hoare synchronization) in order to be unifiable. Furthermore the corresponding tuples of transmitted nodes are unified. Since x will disappear we need another variable to represent the node that corresponds to x. We use the first argument of a to this purpose. If two nodes are merged by π then their successors are the same as required. Observe that we do not need to translate all the possible variants of the rules since variants with fresh variables are automatically built when the clauses are applied. Notice also that the clauses we obtain are synchronized clauses. The observable substitution contains information on Λ and π. Thus given a transition we can associate to it a substitution θ. We have different choices for θ according to where we map variables. In fact in HSHR nodes are mapped to their representatives according to π, while, in SLP, θ cannot do the same, since the variables of the clause variant must be all fresh. The possible choices of fresh names for the variables change by an injective renaming the result of the big-step. Definition 24 (Substitution associated to a transition) Λ,π Let Γ ⊢ G −−→ Φ ⊢ G′ be a transition. We say that the substitution θρ associated to this transition is: θρ = {(a(xπρ, ~y ρ)/x|Λ(x) = (a, ~y), a 6= ǫ} ∪ {xπρ/x}|Λ(x) = (ǫ, hi)} for some injective renaming ρ. We will now prove the correctness and the completeness of our translation. Theorem 4 (Correctness) Let P be a set of productions of a HSHR system as defined in definitions 9 and 10. Let P be the logic program obtained by translating the productions in P according to Definition 23. If: Λ,π P (Γ ⊢ G −−→ Φ ⊢ G′ ) then we can have in P a big-step of Synchronized Logic Programming: θρ JΓ ⊢ GK ⇒ T for every ρ such that xρ is a fresh variable unless possibly when x ∈ Γ∧Λ(x) = (ǫ, hi). Λ,π In that case we may have xρ = x. Furthermore θρ is associated to Γ ⊢ G −−→ Φ ⊢ G′ and T = JΦ ⊢ G′ Kρ. Finally, used productions translate into the clauses used in the big-step and are applied to the edges that translate into the predicates rewritten by them. Proof The proof is by rule induction. See Appendix A. 26 Ivan Lanese and Ugo Montanari Theorem 5 (Completeness) Let P be a set of productions of a HSHR system. Let P be the logic program obtained by translating the productions in P according to Definition 23. If we have in P a big-step of logic programming: θ JΓ ⊢ GK ⇒ T Λ,π then there exist ρ, θ′ , Λ, π, Φ and G′ such that θ = θρ′ is associated to Γ ⊢ G −−→ Φ ⊢ G′ . Furthermore T = JΦ ⊢ G′ Kρ and P Λ,π (Γ ⊢ G −−→ Φ ⊢ G′ ). Proof See Appendix A. Example 8 We continue here Example 5 by showing how that fusion computation can be translated into a Synchronized Logic Programming computation. 
(uxyzw)(Q(x, y, z)|uxy.R(u, x)|uzw.S(z, w)) → (uxy)(Q(x, y, z)|R(u, x)|S(z, w)){x/z, y/w} Remember that a translation of the starting process is: u, x, y, w, z, x1 , y1 , z1 , u1 , x2 , y2 , u2 , x3 , u3 , z2 , w1 , z3 , w2 ⊢ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 )|Lx1 x2 x3 .R(x4 ,x5 ) (u1 , x2 , y2 , u2 , x3 )| Lx1 x2 x3 .S(x4 ,x5 ) (u3 , z2 , w1 , z3 , w2 )|m4 (u, u1 , u2 , u3 )|m4 (x, x1 , x2 , x3 )| m3 (y, y1 , y2 )|m4 (z, z1 , z2 , z3 )|m3 (w, w1 , w2 )|n(u)|n(x)|n(y)|n(w)|n(z) We have the following productions: (y1 ,out2 ,hy2 ,y3 i) y1 , y2 , y3 , y4 , y5 ⊢ Lx1 x2 x3 .R(x4 ,x5 ) (y1 , y2 , y3 , y4 , y5 ) −−−−−−−−−−−→ y1 , y2 , y3 , y4 , y5 ⊢ LR(x1 ,x2 ) (y4 , y5 )|n(y1 ) (y1 ,in2 ,hy2 ,y3 i) y1 , y2 , y3 , y4 , y5 ⊢ Lx1 x2 x3 .S(x4 ,x5 ) (y1 , y2 , y3 , y4 , y5 ) −−−−−−−−−−→ y1 , y2 , y3 , y4 , y5 ⊢ LS(x1 ,x2 ) (y4 , y5 )|n(y1 ) that corresponds to the clauses (we directly write suitably renamed variants): Lx1 x2 x3 .R(x4 ,x5 ) (out2 (u′1 , x′2 , y2′ ), x′2 , y2′ , u′2 , x′3 ) ← LR(x1 ,x2 ) (u′2 , x′3 )|n(u′1 ) ′′′ ′′′ ′′′ ′′′ ′′′ ′′′ ′′′ ′′′ ′′′ Lx1 x2 x3 .S(x4 ,x5 ) (in2 (u′′′ 3 , z2 , w1 ), z2 , w1 , z3 , w2 ) ← LS(x1 ,x2 ) (z3 , w2 )|n(u3 ) plus the clause obtained from the auxiliary production: m4 (u′′ , out2 (u′′1 , x′′2 , y2′′ ), u′′2 , in2 (u′′3 , z2′′ , w1′′ )) ← m4 (u′′ , u′′1 , u′′2 , u′′3 ), m2 (x′′2 , z2′′ ), m2 (y2′′ , w1′′ ) We obtain the big-step represented in Figure 8. The observable substitution of the big-step is {out2 (u′1 , x2 , y2 )/u1 , in2 (u′′3 , z2 , w1 )/u3 }. This is associated to the wanted HSHR transition with ρ = {u′1 /u1 , u′′3 /u3 } and by applying ρ to the final graph of Mapping Fusion and SHR into Logic Programming 27 Fig. 8. Big-step for a Fusion transition LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 ), Lx1 x2 x3 .R(x4 ,x5 ) (u1 , x2 , y2 , u2 , x3 ), Lx1 x2 x3 .S(x4 ,x5 ) (u3 , z2 , w1 , z3 , w2 ), m4 (u, u1 , u2 , u3 ), m4 (x, x1 , x2 , x3 ), m3 (y, y1 , y2 ), m4 (z, z1 , z2 , z3 ), m3 (w, w1 , w2 ), n(u), n(x), n(y), n(w), n(z) ′ out2 (u′1 ,x2 ,y2 )/u1 ,x2 /x′2 ,y2 /y2 ,u2 /u′2 ,x3 /x′3 −−−−−−−−−−−−−−−−−−−−−−−−−−−−−→ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 ), LR(x1 ,x2 ) (u2 , x3 ), n(u′1 ), Lx1 x2 x3 .S(x4 ,x5 ) (u3 , z2 , w1 , z3 , w2 ), m4 (u, out2 (u′1 , x2 , y2 ), u2 , u3 ), m4 (x, x1 , x2 , x3 ), m3 (y, y1 , y2 ), m4 (z, z1 , z2 , z3 ), m3 (w, w1 , w2 ), n(u), n(x), n(y), n(w), n(z) u/u′′ ,u′ /u′′ ,x2 /x′′ ,y2 /y ′′ ,u2 /u′′ ,in2 (u′′ ,z ′′ ,w′′ )/u3 1 2 2 3 2 −−−−−− −−1−−−−− −−−−2−−−−− −−−−− −−−−1−−−→ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 ), LR(x1 ,x2 ) (u2 , x3 ), n(u′1 ), Lx1 x2 x3 .S(x4 ,x5 ) (in2 (u′′3 , z2′′ , w1′′ ), z2 , w1 , z3 , w2 ), ′ m4 (u, u1 , u2 , u′′3 ), m2 (x2 , z2′′ ), m2 (y2 , w1′′ ), m4 (x, x1 , x2 , x3 ), m3 (y, y1 , y2 ), m4 (z, z1 , z2 , z3 ), m3 (w, w1 , w2 ), n(u), n(x), n(y), n(w), n(z) u′′ /u′′′ ,z2 /z ′′ ,w1 /w′′ ,z2 /z ′′′ ,w1 /w′′′ ,z3 /z ′′′ ,w2 /w′′′ 3 2 1 3 −− −−3−−−−−2−−−−−1−−−−− −−−−− −−−−− −−−−−2→ LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 ), LR(x1 ,x2 ) (u2 , x3 ), n(u′1 ), LS(x1 ,x2 ) (z3 , w2 ), n(u′′3 ), m4 (u, u′1 , u2 , u′′3 ), m2 (x2 , z2 ), m2 (y2 , w1 ), m4 (x, x1 , x2 , x3 ), m3 (y, y1 , y2 ), m4 (z, z1 , z2 , z3 ), m3 (w, w1 , w2 ), n(u), n(x), n(y), n(w), n(z) the HSHR transition we obtain: LQ(x1 ,x2 ,x3 ) (x1 , y1 , z1 )|LR(x1 ,x2 ) (u2 , x3 )|n(u′1 )|LS(x1 ,x2 ) (z3 , w2 )|n(u′′3 )| m4 (u, u′1 , u2 , u′′3 )|m4 (x, x1 , x2 , x3 )|m3 (y, y1 , y2 )|m4 (z, z1 , z2 , z3 )|m3 (w, w1 , w2 )| m2 (x2 , z2 )|m2 (y2 , w1 )|n(u)|n(x)|n(y)|n(w)|n(z) that, translated, becomes the final goal of the big-step as required. 
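For readers who want to experiment, the three clauses used in this big-step can be written directly in Prolog syntax. This is only a sketch: the predicate and function names seq_r, seq_s, m4, m2, r, s and n are our own illustrative renderings of the labels Lx1 x2 x3 .R(x4 ,x5 ) , Lx1 x2 x3 .S(x4 ,x5 ) , m4 , m2 , LR(x1 ,x2 ) , LS(x1 ,x2 ) and n used above, and plain SLD resolution over these clauses does not by itself enforce the big-step (transactional) discipline of Definition 20, for which a meta-interpreter along the lines of the one recalled in Section 4.3 is needed.

% Hedged Prolog rendering of the clauses of Example 8 (names are illustrative).
% Prefix u x y . R(u, x): exposes the action out2 on its first attachment node.
seq_r(out2(U1, X2, Y2), X2, Y2, U2, X3) :- r(U2, X3), n(U1).

% Prefix u z w . S(z, w): exposes the action in2 on its first attachment node.
seq_s(in2(U3, Z2, W1), Z2, W1, Z3, W2) :- s(Z3, W2), n(U3).

% Auxiliary clause for an m4 amoeboid: it matches complementary out2/in2
% actions on two of its tentacles and creates two m2 connectors between the
% corresponding transmitted nodes.
m4(U, out2(U1, X2, Y2), U2, in2(U3, Z2, W1)) :-
    m4(U, U1, U2, U3), m2(X2, Z2), m2(Y2, W1).

In the big-step above, binding the shared node u1 (respectively u3 ) both to the out2 (respectively in2 ) term introduced by the sequential clause and to the corresponding argument of the m4 clause head is what realizes the Hoare synchronization on the amoeboid, and the observable substitution of the big-step is precisely the restriction of the computed substitution to the variables of the initial goal-graph, as required by Definition 20.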
We end this section with a simple schema of the correspondence between the two models.

HSHR             SLP
Graph            Goal
Edge             Atomic goal
Parallel comp.   And comp.
Production       Clause
Transition       Big-step
Node             Variable
Nil              Empty goal
Action           Function symbol

Essentially, the correspondence is given by the homomorphism between graphs and goals, with edges mapped to atomic goals, nodes to variables, parallel composition to And composition and nil to the empty goal. Dynamically, HSHR transitions are modelled by big-steps, which are transactional applications of clauses that model productions. Finally, HSHR actions are modelled by function symbols.

4.3 Using Prolog to implement Fusion Calculus

The theorems seen in the previous sections can be used for implementation purposes. As far as Synchronized Logic Programming is concerned, a simple meta-interpreter is presented in Lanese (2002). The idea is to use Prolog's ability to dynamically change the clause database: a set of clauses and a goal are inserted into it, and the possible synchronized computations of a given length are computed. This can be directly used to simulate HSHR transitions. In order to simulate Fusion Calculus processes we have to implement amoeboids using a bounded number of different connectors (note that m2 , m3 and n are enough) and to implement in the meta-interpreter the condition under which productions can be applied in a single big-step. This can be easily done. Furthermore, this decreases the possible choices of applicable productions and thus improves efficiency w.r.t. the general case.

5 Conclusion

In this paper we have analyzed the relationships between three different formalisms, namely Fusion Calculus, HSHR and logic programming. The correspondence between HSHR and the chosen transactional version of logic programming, SLP, is complete and quite natural. Thus we can consider HSHR as a "subcalculus" of (synchronized) logic programming. The mapping between Fusion Calculus and HSHR is instead more involved, because it has to deal with many important differences:
• process calculi features vs graph transformation features;
• interleaving models vs concurrent models;
• Milner synchronization vs Hoare synchronization.
Hoare synchronization was necessary since our aim was to eventually map Fusion Calculus to logic programming. If the aim is just to compare Fusion Calculus and SHR, it is possible to use SHR with Milner synchronization, achieving a much simpler and complete mapping, which considers the LTS of Fusion Calculus instead of reductions (see Lanese and Montanari 2004a). We think that the present work can suggest several interesting lines of development, dictated by the comparison of the three formalisms studied in the paper. First, our implementation of routers in terms of amoeboids is rather general and abstract, and shows that Fusion Calculus names are a rather high-level concept. They abstract out the behaviour of an underlying network of connections which must be open and reconfigurable. Had we chosen π-calculus instead (see a translation of π-calculus to Milner SHR in (Hirsch and Montanari 2001)), we would have noticed important differences. For instance, fusions are also considered in the semantics of open π-calculus by Davide Sangiorgi (Sangiorgi 1993), but in that work not all names can be fused: newly extruded names cannot be merged with previously generated names. This is essential for specifying nonces and session keys for secure protocols.
Instead, Fusion Calculus does not provide equivalent constructs. Looking at our translation, we can conclude that logic programming does not offer this feature, either. Thus logic programming is a suitable counterpart of Fusion Calculus, but it should be properly extended for matching open π-calculus and security applications. In a similar line of thought, we observe that we have a scope restriction operator in the Fusion Calculus, but no restriction is found in our version of HSHR. We think this omission simplifies our development, since no restriction exists in ordinary logic programming, either. However versions of SHR with restriction have been considered (Hirsch and Montanari 2001; Ferrari et al. 2001; Lanese 2002). Also (synchronized) logic programming can be smoothly extended with a restriction operator (Lanese 2002). More importantly, Fusion Calculus is equipped with an observational abstract semantics based on (hyper) bisimulation. We did not consider a similar concept for SHR or logic programming, since we considered it outside the scope of the paper. Furthermore our operational correspondence between HSHR and SLP is very strong and it should respect any reasonable abstract semantics. The mapping from Fusion Calculus into HSHR deals only with closed terms, thus no observations can be considered. However a bisimulation semantics of SHR has been considered in (König and Montanari 2001), and an observational semantics of logic programming is discussed in (Bruni et al. 2001). Another comment concerns concurrency. To prove the equivalence of Fusion Calculus and of its translation into HSHR we had to restrict the possible computations of the latter. On the contrary, if all computations were allowed, the same translation would yield a concurrent semantics of Fusion Calculus, that we think is worth studying. For instance in the presence of concurrent computations not all equivalent amoeboids would have the same behaviour, since some of them would allow for more parallelism than others. Finally we would like to emphasize some practical implication of our work. In fact, logic programming is not only a model of computation, but also a well developed programming paradigm. Following the lines of our translation, implementations of languages based on Fusion Calculus and HSHR could be designed, allowing to exploit existing ideas, algorithms and tools developed for logic programming. References Bruni, R. and Montanari, U. 2000. Zero-safe nets: Comparing the collective and individual token approaches. Information and Computation 156, 1-2, 46–89. Bruni, R., Montanari, U., and Rossi, F. 2001. An interactive semantics of logic programming. Theory and Practice of Logic Programming 1, 6, 647–690. Degano, P. and Montanari, U. 1987. A model for distributed systems based on graph rewriting. Journal of the ACM (JACM) 34, 2, 411–449. Ehrig, H., Kreowski, H.-J., Montanari, U., and Rozenberg, G., Eds. 1999. Hand- 30 Ivan Lanese and Ugo Montanari book of Graph Grammars and Computing by Graph Transformation, Vol.3: Concurrency, Parallelism, and Distribution. World Scientific. Ehrig, H., Pfender, M., and Schneider, H. J. 1973. Graph grammars: an algebraic approach. In Proc. of IEEE Conference on Automata and Switching Theory. IEEE Computer Society, 167–180. Ferrari, G. L., Montanari, U., and Tuosto, E. 2001. A LTS semantics of ambients via graph synchronization with mobility. In Proc. of ICTCS’01. LNCS, vol. 2202. Springer, 1–16. Gardner, P. and Wischik, L. 2000. Explicit fusions. 
In Mathematical Foundations of Computer Science. 373–382. Gardner, P. and Wischik, L. 2004. Strong bisimulation for the explicit fusion calculus. In Proc. of FoSSaCS’04. LNCS, vol. 2987. Springer, 484–498. Hirsch, D. 2003. Graph transformation models for software architecture styles. Ph.D. thesis, Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina. Hirsch, D., Inverardi, P., and Montanari, U. 2000. Reconfiguration of software architecture styles with name mobility. In Proc. of COORDINATION 2000. LNCS, vol. 1906. Springer, 148–163. Hirsch, D. and Montanari, U. 2001. Synchronized hyperedge replacement with name mobility. In Proc. of CONCUR’01. LNCS, vol. 2154. Springer, 121–136. Jensen, O. H. and Milner, R. 2003. Bigraphs and transitions. SIGPLAN Not. 38, 1, 38–49. König, B. and Montanari, U. 2001. Observational equivalence for synchronized graph rewriting. In Proc. of TACS’01. LNCS, vol. 2215. Springer, 145–164. Lanese, I. 2002. Process synchronization in distributed systems via Horn clauses. M.S. thesis, Computer Science Department, University of Pisa, Pisa, Italy. Downloadable from http://www.di.unipi.it/~lanese/tesi.ps. Lanese, I. 2006. Synchronization strategies for global computing models. Ph.D. thesis, Computer Science Department, University of Pisa, Pisa, Italy. Forthcoming. Lanese, I. and Montanari, U. 2002. Software architectures, global computing and graph transformation via logic programming. In Proc SBES’2002 - 16th Brazilian Symposium on Software Engineering. Anais, 11–35. Lanese, I. and Montanari, U. 2004a. A graphical fusion calculus. In Proc. of the Workshop of the COMETA Project on Computational Metamodels. Electronic Notes in Theoretical Computer Science, vol. 104. Elsevier Science, 199–215. Lanese, I. and Montanari, U. 2004b. Synchronization algebras with mobility for graph transformations. In Proc. of FGUC’04 – Foundations of Global Ubiquitous Computing. Electronic Notes in Theoretical Computer Science, vol. 138. Elsevier Science, 43–60. Lloyd, J. W. 1993. Foundations of Logic Programming, Second Extended Edition. Springer. Milner, R., Parrow, J., and Walker, D. 1992. A calculus of mobile processes. Information and Computation 100, 1–77. Palamidessi, C. 1990. Algebraic properties of idempotent substitutions. In Proc. of ICALP’90. LNCS, vol. 443. Springer, 386–399. Parrow, J. and Victor, B. 1998. The fusion calculus: Expressiveness and symmetry in mobile processes. In Proc. of LICS ’98. IEEE, Computer Society Press. Sangiorgi, D. 1993. A theory of bisimulation for the pi-calculus. In Proc. of CONCUR’93. LNCS, vol. 715. Springer, 127–142. Victor, B. 1998. The fusion calculus: Expressiveness and symmetry in mobile processes. Ph.D. thesis, Dept. of Computer Systems, Uppsala University, Sweden. Mapping Fusion and SHR into Logic Programming 31 Appendix A Proofs We have here the proofs that are missing in the main part and some lemmas used in these proofs. Lemmas are just before the proofs that use them. Proof of Lemma 2 Notice that all the auxiliary productions perform two non trivial actions, and because of Hoare synchronization and because each node is shared by at most two edges, synchronizing edges form chains. There are two alternatives: each chain either starts and begins on an external node, or it is a cycle that contains only internal nodes. Exactly one chain must be of the first type. In fact if we have no chain of that kind we have only trivial actions on nodes in S. 
Also, if we have more than one, we have more than two non trivial actions on nodes in S. Thanks to the last condition of Definition 17 this chain contains an odd number of connectors, which must be m connectors. One can easily check by induction on the (odd) length of the chain that the transition creates an amoeboid M (~y1 [i], ~y2 [i]) for each component of vectors ~y1 and ~ y2 of fresh names and that these vectors are exposed on the external nodes together with an inn and an outn action, where n = |~y1 | = |~y2 |. Let us now consider the other kind of chains: these chains produce rings of m2 edges connected only to fresh nodes, which thus correspond to isolate subgraphs. Furthermore, they affect only the labels of internal nodes as required. Lemma 3 Given a set of amoeboids  for σ and a substitutive effect θ of a fusion φ = {xi = yi |i = 1 . . . n} then JσK| i=1...n M (xi , yi ) is a set of amoeboids for σθ. Proof Since we are working up to injective renamings we only have to prove that two  names are connected by JσK| i=1...n M (xi , yi ) iff they are merged by σθ and that all the paths connecting external nodes in the final graph have odd length. By definition two names are connected by JσK iff they are merged by σ. Assume that two names x and y are merged by σθ. Then their images along σ are merged by θ. This means that we have in φ a chain of equalities from xσ to yσ. Thus we have amoeboids connecting the amoeboids of xσ and of yσ, thus x and y are connected. Assume now that x and y are connected. Then there exists an amoeboid connecting the amoeboids for xσ and for yσ. Thus θ merges xσ and yσ as required. Finally note that all the paths between external nodes are created by connecting existing paths via new amoeboids. In particular, each new path is composed by n old paths and n − 1 new amoeboids. Thus its length is the sum of 2n − 1 odd lengths and thus it is odd. Proof of Theorem 1 The proof is by rule induction on the reduction semantics. Let us consider the reduction rule: (~z)(R|(· · · + u~x.P )|(u~y .Q + . . . )) → (~z)(R|P |Q)σ 32 Ivan Lanese and Ugo Montanari where |~x| = |~y | and σ is a substitutive effect of {~x = ~y } such that dom(σ) ⊆ ~z. In order for that process to be in the standard form we just need to make R sequential by unfolding recursion and taking bound names to the outside if required. For simplicity, we show just the case where R is already sequential, the other being essentially equal. Let P1 be (· · · + u~x.P ) and Q1 be (u~y .Q + . . . ). The translation of the LHS has the form:  Γ ⊢ LR̂′ (~r)|LP̂ ′ (~ p1 )|LQ̂′ (~q1 )|JθK| x∈~v n(x) 1 1 where R′ = R̂′ ρR′ , P1′ = P̂1′ ρP1′ , Q′1 = Q̂′1 ρQ′1 and R′ θ = R, P1′ θ = P1 , Q′1 θ = Q1 . We also use (u′1~x′ .P ′ )θ = u~x.P and (u′2 ~y ′ .Q′ )θ = u~y.Q. We have for LP̂ ′ and LQ̂′ the two following productions: 1 1 (u′1 ,in|~ x′ ) x′ | ,~ p~1 ⊢ LP̂ ′ (~ p1 ) −−−−−−−−→ p~1 , ~x′ , ~vP ⊢ JP K~vp ,w~ P | 1 (u′2 ,out|~ y′ ) y′ | ,~  q1 ⊢ LQ̂′ (~ ~ q1 ) −−−−−−−−−→ ~q1 , ~y ′ , ~vQ ⊢ JQK~vQ ,w~ Q | 1 x∈Γ′′ P  x∈Γ′′ Q n(x) n(x) The choice of the parameters ~vP and ~vQ is not important since they are fresh names and we work up to injective renamings. The choice of w ~ P and w ~ Q is important instead, since they have to correspond to the nodes in the LHS that are still used by the process. The choice performed in Definition 15 ensures that. Note that JθK contains an amoeboid for a set U with u′1 , u′2 ∈ U . Thus from Lemma 2 the two above productions can synchronize via the amoeboid. 
Using rule (idle) for the other edges, we obtain as a result: Γ, ΓInt ⊢ LR̂′ (~r)|JP K~vP ,w~ P |  x∈Γ′′ P   n(x)|JQK~vQ ,w~ Q | x∈Γ′′ n(x)|JθK| x∈~v n(x)| Q   M (~x′ [i], ~y ′ [i])| M̃ (∅) i=1...|~ x′ | Note that the fusion π of the transition is the identity since each synchronization involves two edges, and at least one of them is an amoeboid which exposes fresh names. Furthermore amoeboids expose each fresh name twice. Thus no names in Γ are merged. Also, in the final graph each node is shared by two edges. In fact, the only connections whose cardinality is not preserved by productions are the ones between nodes whose references are exposed, namely ~x′ and ~y′ , and the corresponding process edges, but these nodes are connected to the new M amoeboids created by the auxiliary productions.   In particular, x∈Γ′′ n(x)| x∈Γ′′ n(x)|JθK is an amoeboid for θ|fn(R′ |P ′ |Q′ ) . FurQ P  thermore thanks to Lemma 3 by adding in parallel i=1...|~x′ | M (~x′ [i], ~y ′ [i]) we obtain an amoeboid for θ|fn(R′ |P ′ |Q′ ) σ. Thus we can rewrite the final graph up to pseudoamoeboids as: Γ, ΓInt ⊢ LR̂′ (~r)|JP K~vP ,w~ P |JQK~vQ ,w~ Q |Jθ|fn(R′ |P ′ |Q′ ) σK|  x∈~ v n(x) Mapping Fusion and SHR into Logic Programming 33 The RHS of the fusion rule is (~z)(R|P |Q)σ, which has to be normalized into (~z~vP ~vQ )(R|P̃ |Q̃)θ1 σ with (~vP )P̃ ≡ P and (~vQ )Q̃ ≡ Q. Furthermore we can choose the names (since we are reasoning up to injective renamings) in such a way that θ1 = θ|fn(R′ |P ′ |Q′ ) (plus an injective renaming on names in ~vp and ~vQ which corresponds to an equivalence on the resulting amoeboids). Thus the translation of the RHS is equivalent to: Γ, ΓInt′ ⊢ LR̂′ (~r)|JP K~vP ,w~ P |JQK~vQ ,w~ Q |Jθ1 σK|  x∈~ v n(x) The correctness of the rule follows. Let us consider now rule: (~z)(R|(· · · + φ.P )) → (~z)(R|P )σ where σ is a substitutive effect of φ such that dom(σ) ⊆ ~z. We use essentially the same technique as before. Let us suppose R already sequential. Let P1 be · · · + φ.P . The translation of the LHS has the form: Γ ⊢ LR̂′ (~r)|LP̂ ′ (~ p1 )|JθK| 1  x∈~ v n(x) where R′ = R̂′ ρR′ , P1′ = P̂1′ ρP1′ and R′ θ = R, P1′ θ = P1 . We also use (φ′ .P ′ )θ = φ.P . We have for LP̂ ′ the following production: 1 Λ ǫ ~p1 , ~vP ⊢ JP K~vP ,w~ P |JσK| p~1 ⊢ LP̂ ′ (~ p1 ) −→ 1  x∈Γ′′ n(x) For ~vP and w ~ P the considerations for the preceding rule are still valid. Using rule (idle) for the other edges, we obtain as a result: Γ, ΓInt ⊢ LR̂′ (~r)|JP K~vP ,w~ P |JσK|  x∈Γ′′ P n(x)|JθK|  x∈~ v n(x) Note that the fusion part of the transition label is an identity since we have only trivial synchronizations. Also, in the final graph each node is shared by two edges since the production preserves the cardinality of connected edges for each node.  In particular, x∈Γ′′ n(x)|JθK is an amoeboid for θ|fn(R′ |P ′ ) . Furthermore, note P  that JσK has the form i=1...n m2 (xi , yi ) and thus thanks to Lemma 3 by adding it in parallel we obtain an amoeboid for θ|fn(R′ |P ′ ) σ. Thus we can rewrite the final graph as: Γ, ΓInt ⊢ LR̂′ (~r)|JP K~vP ,w~ P |Jθ|fn(R′ |P ′ ) σK|  x∈~ v n(x) The RHS of the fusion rule is (~z)(R|P )σ, which has to be normalized into (~z~vP )(R|P̃ )θ1 σ with (~vP )P̃ ≡ P . Furthermore we can choose the names (since we are reasoning up to injective renamings) in such a way that θ1 = θ|fn(R′ |P ′ ) (plus an injective renaming on names in ~vP which corresponds to an equivalence on the resulting amoeboids). 
Thus the translation of the RHS is equivalent to: 34 Ivan Lanese and Ugo Montanari Γ, ΓInt′ ⊢ LR̂′ (~r)|JP K~vP ,w~ P |Jθ1 σK|  x∈~ v n(x) The correctness of the rule follows. Consider now the rule: P ≡ P ′ , P ′ → Q′ , Q′ ≡ Q P →Q Equivalent agents are converted into the same representative (up to α-conversion) before being translated, thus the translation of P and P ′ are equal up to injective renamings. Similarly for the translation of Q and Q′ , thus the thesis follows. Lemma 4 Let θ1 and θ2 be idempotent substitutions. Let eqn(θ1 ) = {x = y|x/y ∈ θ1 }. Then mgu(eqn(θ1 ) ∪ eqn(θ2 )) = θ1 mgu(eqn(θ2 )θ1 ). Proof See Palamidessi (1990). Lemma 5 Given a graph Γ ⊢ G and one or zero productions for each edge in G let: Λ1 ,π1 Γ ⊢ G −−−−→ Φ1 ⊢ G′1 Λ2 ,π2 Γ ⊢ G −−−−→ Φ2 ⊢ G′2 be two transitions obtained by applying the chosen production for each edge (and using the (idle) rule if no production is chosen). Then there exists an injective renaming σ such that: • • • • Λ1 (x) = σ(Λ2 (x)) if x is not an isolated node; π1 = σπ2 ; Φ1 = σ(Φ2 ); G′1 = σ(G′2 ). Proof The proof is a simple rule induction if one proves that derivations can be done in a standard way, namely by applying to the axioms first rules (par), then rule (merge) and finally rules (new). This can be proved by showing that one can exchange the order of rules and that one can substitute two applications of (merge) with substitutions σ and σ ′ with just one application with substitution σ ′ σ. We have many cases to consider, but they are not so interesting. As examples we will show the detailed proof for commutation of rule (merge) and (par) and for composition of two different rules (merge). Let us consider the first case. Suppose we have a part of a derivation of the form: Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 Λ1 ,π1 Γσ ⊢ G1 σ −−−−→ Φ1 ⊢ G2 σρ Λ′ ,π ′ Γ′ ⊢ G′1 −−−→ Φ′ ⊢ G′2 Λ1 ∪Λ′ ,π1 ∪π ′ Γσ, Γ′ ⊢ G1 σ|G′1 −−−−−−−−→ Φ1 , Φ′ ⊢ G2 σρ|G′2 (A1) Mapping Fusion and SHR into Logic Programming 35 where for readability we have not written explicitly the side conditions (see Definition 10). Then we must also have a derivation for the same transition obtained applying rule (par) first and then rule (merge): Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 Λ′ ,π ′ Γ′ ⊢ G′1 −−−→ Φ′ ⊢ G′2 Λ∪Λ′ ,π∪π ′ Γ, Γ′ ⊢ G1 |G′1 −−−−−−−→ Φ, Φ′ ⊢ G2 |G′2 Λ′′ ,π ′′ (Γ, Γ′ )σ ⊢ (G1 |G′1 )σ −−−−→ Φ′′ ⊢ (G2 |G′2 )σρ′ (A2) We have to prove that this derivation is allowed and that the resulting transition is the one derived also by derivation A1. The first step is allowed iff (Γ∪Φ)∩(Γ′ ∪Φ′ ) = ∅. Since the first derivation is allowed by hypothesis, then (Γσ ∪ Φ1 ) ∩ (Γ′ ∪ Φ′ ) = ∅. Thus the only problem is when a name which is after renamed (by σ or ρ) creates a conflict. In that case thanks to Lemma 1 we can suppose to start with a different name. The final result of the derivation is not changed by that since the name disappears. Thus the first step is legal. For the second step we need ∀x, y ∈ Γ, Γ′ .xσ = yσ ∧ x 6= y ⇒ actΛ∪Λ′ (x) = actΛ∪Λ′ (y). Note that since (Γ ∪ Φ) ∩ (Γ′ ∪ Φ′ ) = ∅ and σ : Γ → Γ, we have that σ is the identity on Γ′ , thus the only x, y such that xσ = yσ ∧ x 6= y are in Γ, thus the condition is satisfied since it was satisfied in derivation A1. Furthermore we have (Γ, Γ′ )σ = Γσ, Γ′ and (G1 |G′1 )σ = G1 σ|G′1 . We must now consider ρ′ . 
We have: ρ′ = mgu({(nΛ∪Λ′ (x))σ = (nΛ∪Λ′ (y))σ|xσ = yσ} ∪{xσ = yσ|x(π ∪π ′ ) = y(π ∪π ′ )}) For what already said we have ρ′ = mgu({(nΛ (x))σ = (nΛ (y))σ|xσ = yσ}∪{xσ = yσ|xπ = yπ} ∪ {xσ = yσ|xπ ′ = yπ ′ }) = ρπ ′ where ρ is the one used in derivation A1. In particular, Λ′′ (x) = Λ1 (x) if x ∈ Γσ and Λ′′ (x) = Λ′ (x)π ′ = Λ′ (x) since π ′ is the identity on representatives (see Definition 8). Thus Λ′′ = Λ1 ∪ Λ′ . Furthermore π ′′ = ρ′ |Γσ,Γ′ = π1 ∪ π ′ . Also, Φ′′ = Φ1 , Φ′ since it is determined by the first part of transition. Finally (G2 |G′2 )σρ′ = G2 σρ′ |G′2 σρ′ = G2 σρ|G′2 π ′ = G2 σρ|G′2 . This proves that case. We will now consider the composition of two (merge) rules, with substitutions σ and σ ′ respectively. Suppose we have a derivation of the form: Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 Λ′ ,π ′ Γσ ⊢ G1 σ −−−→ Φ′ ⊢ G2 σρ Λ′′ ,π ′′ Γσσ ′ ⊢ G1 σσ ′ −−−−→ Φ′′ ⊢ G2 σρσ ′ ρ′ (A3) We want to be able to derive the same transition using just one inference step, with substitution σσ ′ . We have: Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 Λ1 ,π1 Γσσ ′ ⊢ G1 σσ ′ −−−−→ Φ1 ⊢ G2 σσ ′ ρ1 (A4) 36 Ivan Lanese and Ugo Montanari First of all we have to prove that the step is allowed. The required condition is that ∀x, y ∈ Γ.xσσ ′ = yσσ ′ ∧ x 6= y ⇒ actΛ (x) = actΛ (y). We have two cases. If xσ = yσ then the thesis follows from the analogous condition of the first step of derivation A3. Otherwise we can rewrite the condition as ∀x, y ∈ Γ.xσσ ′ = yσσ ′ ∧ xσ 6= yσ ⇒ actΛ (x) = actΛ (y). Note that actΛ′ (xσ) = actΛ (x) thus we can rewrite the condition as ∀x′ , y ′ ∈ Γσ.x′ σ ′ = y ′ σ ′ ∧ x′ 6= y ′ ⇒ actΛ′ (x′ ) = actΛ′ (y ′ ) what is the condition for the second step of derivation A3. The main step now is to prove that σσ ′ ρ1 = σρσ ′ ρ′ . We have: ρ = mgu({(nΛ (x))σ = (nΛ (y))σ|xσ = yσ} ∪ {xσ = yσ|xπ = yπ}) ρ′ = mgu({(nΛ′ (x))σ ′ = (nΛ′ (y))σ ′ |xσ ′ = yσ ′ } ∪ {xσ ′ = yσ ′ |xπ ′ = yπ ′ }) ρ1 = mgu({(nΛ (x))σσ ′ = (nΛ (y))σσ ′ |xσσ ′ = yσσ ′ } ∪ {xσσ ′ = yσσ ′ |xπ = yπ}) In particular we have: 1 σσ ′ ρ1 = = σσ ′ mgu({(nΛ (x))σσ ′ = (nΛ (y))σσ ′ |xσσ ′ = yσσ ′ ∧ x 6= y}∪ 2 ∪ {xσσ ′ = yσσ ′ |xπ = yπ}) = = σ mgu({(nΛ (x))σ = (nΛ (y))σ|xσσ ′ = yσσ ′ ∧ x 6= y}∪ 3 ∪ {xσ = yσ|xπ = yπ} ∪ eqn(σ ′ )) = = σ mgu({(nΛ (x))σ = (nΛ (y))σ|xσ = yσ ∧ x 6= y}∪ 4 ∪{(nΛ (x))σ = (nΛ (y))σ|xσσ ′ = yσσ ′ ∧xσ 6= yσ}∪{xσ = yσ|xπ = yπ}∪eqn(σ ′ )) = 5 = σ mgu(eqn(ρ) ∪ {(nΛ (x))σ = (nΛ (y))σ|xσσ ′ = yσσ ′ ∧ xσ 6= yσ} ∪ eqn(σ ′ )) = 6 = σρ mgu({(nΛ (x))σρ = (nΛ (y))σρ|xσσ ′ = yσσ ′ ∧ xσ 6= yσ} ∪ eqn(σ ′ )ρ) = 7 = σρ mgu({nΛ′ (xσ) = nΛ′ (yσ)|xσσ ′ = yσσ ′ ∧ xσ 6= yσ} ∪ eqn(σ ′ ) ∪ eqn(ρ)) = 8 = σρσ ′ mgu({(nΛ′ (xσ))σ ′ = (nΛ′ (yσ))σ ′ |xσσ ′ = yσσ ′ ∧ xσ 6= yσ} ∪ eqn(ρ)σ ′ ) = = σρσ ′ mgu({(nΛ′ (x′ ))σ ′ = (nΛ′ (y ′ ))σ ′ |x′ σ ′ = y ′ σ ′ ∧ x′ 6= y ′ }∪ 9 ∪ eqn(π ′ )σ ′ ∪ eqn(ρ \ π ′ )σ ′ ) = = σρσ ′ mgu({(nΛ′ (x′ ))σ ′ = (nΛ′ (y ′ ))σ ′ |x′ σ ′ = y ′ σ ′ ∧ x′ 6= y ′ }∪ 10 ∪ {xσ ′ = yσ ′ |xπ ′ = yπ ′ } ∪ eqn(ρ \ π ′ )σ ′ ) = 11 = σρσ ′ ρ′ mgu(eqn(ρ \ π ′ )σ ′ ρ′ ) = = σρσ ′ ρ′ We add some explanations for that (long) sequence of equations. Step 1 is just the definition of ρ1 . Step 2 is allowed by Lemma 4. Step 3 is a simple mathematical transformation. Step 4 applies the definition of ρ. Step 5 is Lemma 4 again. Step 6 creates a new ρ on the outside using idempotence, then it brings it inside using Lemma 4 and uses idempotence again to delete it where it is not necessary. It uses the definition of Λ′ too. Step 7 is Lemma 4 again. Steps 8 and 9 are trivial mathematics. Step 10 is another application of Lemma 4. 
Finally, step 11 is justified since the names in the domain of ρ \ π ′ are neither in the domain of σ ′ (since Mapping Fusion and SHR into Logic Programming 37 otherwise they would be in π ′ ) nor in the domain of ρ′ (since ρ′ is computed after having applied ρ, which is idempotent). Thus an allowed mgu is a subset of (ρ \ π ′ )σ ′ ρ′ which can be deleted by idempotence. Proof of Theorem 2 Let us first consider the case of two productions for communication actions. In order to apply them we need two sequential process edges to be rewritten. Each production needs to be synchronized with at least another one since each node is shared by exactly two edges. Since process edges are connected only through amoeboids we can have a synchronization only if the two actions done by process edges are equal to two actions allowed by amoeboids. Thanks to Lemma 2, they must be two complementary actions, i.e. an inn and an outn . Furthermore they have to be done on the same amoeboid, that is on two names merged by the substitution σ corresponding to the amoeboids. Thus P can be decomposed in the form (~x)P ′ σ where P ′ = P1′ |P2′ |Q′ with P1′ and P2′ sequential processes which are translated into the rewritten edges. We must have P1′ = · · · + u1 ~x′ .P1′′ and P2′ = u2 ~y ′ .P2′′ + . . . (or swapped) with ′ |~x | = |~y ′ |. Furthermore σ merges u1 and u2 thus we have a transition P → P ′ that corresponds to the synchronized execution of the two prefixes. The productions to be applied are thus forced except for the ones inside the amoeboids, but the only difference among the choices (as shown by Lemma 2) amounts to pseudoamoeboids and exchanges between equivalent amoeboids in the result. From Lemma 5 we know that the result of a transition is determined up to injective renamings (and actions on isolated nodes) by the starting graph and the productions chosen. Thus the transition that corresponds to P → P ′ for Theorem Λ,id 1 is equal up to injective renamings to a transition that differs from JP K~v ,w~ −−−→ Γ′ ⊢ G only for pseudoamoeboids and substitutions of equivalent amoeboids. The thesis follows. The other case is analogous. Lemma 6 Let A1 , . . . , An be a goal. We want to build a big-step where the clause unified with Ai is Hi ← Bi , if any (some Ai may not be replaced, in that case as notation we use Bi = Ai ). As a notational convention we use xi,1 , . . . , xi,ni to denote the arguments of Ai and ′ ai,j (x′i,j , ~yi,j ) to denote the jth argument of Hi if it is a complex term and x′i,j if it is a variable (note that we have different names for the same variable, one for each ′ occurrence). All these are undefined if Ai is not replaced, ai,j and ~yi,j are undefined also if the jth argument of Hi is a variable. Let θr be the mgu of the following set of equations: ′ ′ = ~yp,q |xi,j = xp,q } ∪ {xi,j = x′i,j |the jth argument of Hi is x′i,j } {x′i,j = x′p,q , ~yi,j We will denote xθr with [x]. θ We will have a big-step of the form A1 , . . . , An ⇒ G1 , . . . , Gn iff ∀i, p ∈ {1, . . . , n}.∀j ∈ {1, . . . , ni }.∀q ∈ {1, . . . , np }.xi,j = xp,q ⇒ ai,j = ap,q . 38 Ivan Lanese and Ugo Montanari Furthermore we have: ′ ′ ′ ])/x |a yi,j [l]∨(vk = θ = {ai,j ([x′i,j ], [y~i,j i,j i,j is defined}∪{[vk ]/vk |(vk = xi,j ∨vk = ~ xi,j ∧ ai,j is undefined )) ∧ [vk ] 6= vk } G1 , . . . , Gn = (B1 , . . . , Bn )θr ′ ′ ][l] = [y~ where [y~i,j i,j [l]]. The big-step is determined (up to injective renamings) by the choice of the clauses and of the atoms they are applied to. 
Proof We will prove a more general result by induction on the number of “considered” atoms, that is we consider an increasing chain of derivations, and considered atoms are the ones that, if expanded in the complete derivation, have already been expanded. For simplicity, atoms are considered in numeric order, i.e. at step m atoms A1 , . . . , Am−1 have already been considered, and atom Am becomes considered. θ We will prove that a computation of the form A1 , . . . , An − →* G1 , . . . , Gn where we substitute only atoms in the starting goal and where the atoms generated by considered atoms do not contain function symbols exists iff: ∀i, p ∈ {1, . . . , m}.∀j ∈ {1, . . . , ni }.∀q ∈ {1, . . . , np }.xi,j = xp,q ⇒ ai,j = ap,q and that furthermore: ~′ ]′ )/xi,j |ai,j is defined ∧ i ≤ m} ∪ {[vk ]′ /vk |(vk = x′ ∨ vk = • θ = {ai,j ([x′i,j ]′ , [yi,j i,j ′ ~yi,j [l] ∨ (vk = xi,j ∧ ai,j is undefined )) ∧ [vk ]′ 6= vk }; • G1 , . . . , Gn = (B1 , . . . , Bm )θr′ , (Am+1 , . . . , An )θ; where θr′ is the mgu of the subset of eqn(θ) containing only equalities between variables occurring in considered atoms and where [x]′ = xθr′ . Note that if all the atoms are considered we obtain the thesis. Base case, m = 1) We have the following transition: θ 1 (B1 , A2 , . . . , An )θ1 A1 , . . . , An −→ where θ1 is an mgu of {A1 = H1 }. This transition exists iff θ1 exists. We have: ′ θ1 = mgu({x1,j = a1,j (x′1,j , ~y1,j )|a1,j is defined} ∪ {x1,j = x′1,j |a1,j is undefined}). Note that if x1,j = x1,q ; a1,j = a1,q then the mgu does not exist (if ai,j and a1,q are both defined, otherwise they have to be both undefined since if just one of them is defined then a function symbol will remain in the considered part against the hypothesis). If the condition is satisfied we have: ′ θ1 = mgu({x1,j = a1,j (x′1,j , ~y1,j )|a1,j is defined}∪ ′ ′ ∪ {a1,q (x′1,q , ~y1,q ) = a1,j (x′1,j , ~y1,j )|x1,j = x1,q } ∪ {x1,j = x′1,j |a1,j is undefined}) = ′ )|a1,j is defined}∪ = mgu({x1,j = a1,j (x′1,j , ~y1,j ′ ′ ∪ {x′1,q = x′1,j , ~y1,q =~ y1,j |x1,j = x1,q } ∪ {x1,j = x′1,j |a1,j is undefined}) Note that the last part is exactly eqn(θr′ ), thus θr′ is its mgu. Thus we have: ′ θ1 = mgu({x1,j = a1,j (x′1,j , ~y1,j )|a1,j is defined} ∪ {[vk ]′ = vk |[vk ]′ 6= vk }). Mapping Fusion and SHR into Logic Programming 39 To have the real mgu we just need to apply θr′ to the equations in the first part (note that x1,j is unified with some other variable only if a1,j is undefined thus the domain variables of the first part are not changed). This proves the first part since this mgu exists and the second one since it has the wanted form. The third part follows from the observation that θ1 |n(B1 ) = θr′ |n(B1 ) . θg Inductive case, m ⇒ m + 1) Assume that A1 , . . . , An −→* G1 , . . . , Gn is a logic computation where we substitute only atoms in the starting goal, where atoms A1 , . . . , Am+1 are considered and where the atoms generated by them do not contain function symbols. Let us take the subcomputation where only the first m atoms have been considered. By inductive hypothesis we have that this part of the computation exists iff: ∀i, p ∈ {1, . . . , m}.∀j ∈ {1, . . . , ni }.∀q ∈ {1, . . . , np }.xi,j = xp,q ⇒ ai,j = ap,q and that: ′ ′ ])/x |a • θ = {ai,j ([x′i,j ], [y~i,j i,j i,j is defined ∧ i ≤ m} ∪ {[vk ]/vk |(vk = xi,j ∨ vk = ′ ~yi,j [l] ∨ (vk = xi,j ∧ ai,j is undefined )) ∧ [vk ] 6= vk }; • G1 , . . . , Gn = (B1 , . . . , Bm )θr , (Am+1 , . . . , An )θ. We will now consider the atom Am+1 . 
If it is not substituted then the thesis follows trivially (note that Am+1 does not contain variables substituted with a complex term by θ, since this can happen only if we have xi,j = xm+1,q with i < m + 1, xi,j defined and xm+1,q undefined, and this is forbidden; thus we have Am+1 θ = Am+1 θr ). Let us consider the case in which it is substituted. We will have a small step of the form: θ′ (G1 , . . . , Gm , Am+1 , . . . , An )θ −→ ( G1 , . . . , Gm+1 , Am+2 , . . . , An )θθ′ where θ′ = mgu({Am+1 θ = Hm+1 }) (note that we can assume that θ is also applied to Gm+1 since we can assume dom(θ) ∩ n(Gm+1 ) = ∅). We have: θ′ = mgu({Am+1 θ = Hm+1 }) = ′ = mgu({xm+1,j θ = am+1,j (x′m+1,j , ~ym+1,j )|am+1,j is defined}∪ ∪ {xm+1,j θ = x′m+1,j |am+1,j is undefined}) For each binding we must consider two cases: either the variable xm+1,j appears in already considered atoms (we call it an old variable) or it does not (we call it a new variable). In the second case θ is the identity on that variable. In the first case if am+1,j is defined then the mgu exists iff we have am+1,j = ap,q where ap,q is the function symbol in the binding for xm+1,j in θ. Note that if ap,q is undefined then also am+1,j must be undefined otherwise the function symbol remains in the final goal. 40 Ivan Lanese and Ugo Montanari Thus we will have: θ′ = mgu({Am+1 θ = Hm+1 }) = ′ ′ ′ ]) = a = mgu({ap,q ([x′p,q ], [y~p,q ym+1,j )|am+1,j is defined ∧ m+1,j (xm+1,j , ~ ∧ xp,q = xm+1,j ∧ xm+1,j is old} ∪ {[xm+1,j ] = x′m+1,j |am+1,j is undefined}∪ ′ ∪ {xm+1,j = am+1,j (x′m+1,j , ~ym+1,j )|am+1,j is defined ∧ xk+1,j is new} = ′ ′ ]=~ = mgu({[x′p,q ] = x′m+1,j , [y~p,q ym+1,j |am+1,j is defined ∧ ∧ xp,q = xm+1,j ∧ xm+1,j is old} ∪ {[xm+1,j ] = x′m+1,j |am+1,j is undefined}∪ ′ )|am+1,j is defined ∧ xm+1,j is new} ∪ {xm+1,j = am+1,j (x′m+1,j , ~ym+1,j where we use [−] to denote the representative of the equivalence class according to θ. We can reorder this substitution into: ′ mgu({xm+1,j = am+1,j (x′m+1,j , ~ym+1,j )|am+1,j is defined ∧ xm+1,j is new}∪ ′ ′ ]=~ ym+1,j |ak+1,j is defined ∧ ∪ {[x′p,q ] = x′m+1,j , [y~p,q ∧ xp,q = xm+1,j ∧ xm+1,j is old} ∪ {[xm+1,j ] = x′m+1,j |am+1,j is undefined} Note that the second part is a renaming that does not involve variables in the domain of the first part. Thus we can define new equivalence classes [−]′ according to this substitution θr′ and put the substitution in the resolved form: ′ mgu({xm+1,j = am+1,j ([x′m+1,j ]′ , [~ym+1,j ]′ )|am+1,j is defined ∧ xm+1,j is new}∪ ∪ {[vk ]′ = vk |[vk ]′ 6= vk }) Thus the mgu exists and the first part of the thesis is proved. Let us consider the substitution θg = θθ′ . Note that the renaming part of θ′ substitutes variables according to the equivalence between variables in the Hm+1 and representatives of the corresponding variables in Am+1 thus the composed substitution θr′′ maps each variable to the representative of the equivalence class that is defined by the union of the two sets of equations as required. Furthermore bindings with complex terms (the first part of the substitution) have disjoint domains and thus the union of them is made. Bindings coming from θ′ have already the wanted representatives in the image, while to bindings in θ the renaming is applied, mapping variables into the representatives of their equivalence classes. Thus θg has the wanted form w.r.t. the equivalence classes determined by all the equivalences on variables. We have: (G1 , . . . , Gm+1 , Am+2 , . . . , An )θg = (G1 , . . . , Gm+1 )θr′′ , (Am+2 , . . . 
, An )θg as required since (dom(θg ) \ dom(θr′′ )) ∩ n(G1 , . . . , Gm+1 ) = ∅. Note that this result does not depend on the order of application of clauses and that after having chosen which clauses to apply and to which atoms it is deterministic up to an injective renaming (which depends on the choice of names for new variables and on the choice of representatives for the equivalence classes). Mapping Fusion and SHR into Logic Programming 41 Proof of Theorem 4 The proof is by rule induction. Axioms) Assume we have a production: Λ,π x1 , . . . , xn ⊢ s(x1 , . . . , xn ) −−→ Φ ⊢ G where: Λ(xi ) = (ai , ~yi ) ∀i ∈ {1 . . . n} and π : {x1 , . . . , xn } → {x1 , . . . , xn } is an idempotent substitution. Then we have in P a clause: s(a1 (xi π, ~y1 ), . . . , an (xn π, ~yn )) ← JΦ ⊢ GK where we have xi π instead of ai (xi π, ~yi ) if ai = ǫ. We can have an applicable variant of this clause for each injective renaming ρ that maps each variable to a fresh one. Let us consider the goal: Js(x1 , . . . , xn )K = s(x1 , . . . , xn ) It unifies with the clause variant: s(a1 (xi π, ~y1 ), . . . , an (xn π, ~yn ))ρ ← JΦ ⊢ GKρ with mgu θρ : θρ = {ai (xi πρ, ~yi ρ)/xi |ai 6= ǫ} ∪ {xi πρ/xi |ai = ǫ}. Note that θρ = θρ |n(s(x1 ,...,xn )) . We can see that θρ is associated to x1 , . . . , xn ⊢ Λ,π s(x1 , . . . , xn ) −−→ Φ ⊢ G as required. Observe also that the result of the computation is: T = JΦ ⊢ GKρθρ = JΦ ⊢ GKρ because n(Gρ) ∩ dom(θρ ) = ∅. This proves the thesis. To deal with the missing possibilities for ρ w.r.t. the theorem statement note that bindings in the last part of θρ can also be resolved also as xi /xi πρ. In that case we have no binding for xi in θρ . This is equivalent to defining xi ρ = xi , and this covers the missing cases for ρ. Note finally that we used as clause the translation of the production and that we applied it to the translation of the rewritten edge. Rule (par)) Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 Λ′ ,π ′ Γ′ ⊢ G′1 −−−→ Φ′ ⊢ G′2 (Γ ∪ Φ) ∩ (Γ′ ∪ Φ′ ) = ∅ Λ∪Λ′ ,π∪π ′ Γ, Γ′ ⊢ G1 |G′1 −−−−−−−→ Φ, Φ′ ⊢ G2 |G′2 By inductive hypothesis we have: θρ • JΓ ⊢ G1 K ⇒ T Λ,π where θρ is associated to Γ ⊢ G1 −−→ Φ ⊢ G2 and T = JΦ ⊢ G2 Kρ; θρ′ ′ • JΓ′ ⊢ G′1 K ⇒ T ′ Λ′ ,π ′ where θρ′ ′ is associated to Γ′ ⊢ G′1 −−−→ Φ′ ⊢ G′2 and T ′ = JΦ′ ⊢ G′2 Kρ′ . In both cases by inductive hypothesis we used as clauses the translations of the productions used in the proof of the HSHR transition applied to the translations of the edges on which the productions were applied. Thanks to Lemma 1 we can assume that the sets of variables used in the two 42 Ivan Lanese and Ugo Montanari computations are disjoint. Thanks to Lemma 6 we know that these big-steps exist iff xi,j = xp,q ⇒ ai,j = ap,q . Since the used variable sets are disjoint the same condition guarantees the existence of a big-step of the form: θρ θρ′ ′ JΓ, Γ′ ⊢ G1 |G′1 K −−−→* T, T ′ where we used as clauses the unions of the clauses used in the two smaller computations, applied to the same predicates. We have that θρ θρ′ ′ is associated to: Λ∪Λ′ ,π∪π ′ Γ, Γ′ ⊢ G1 |G′1 −−−−−−−→ Φ, Φ′ ⊢ G2 |G′2 . We also have θρ θρ′ ′ = (θθ′ )ρρ′ and thus: T, T ′ = JΦ ⊢ G2 Kρ, JΦ′ ⊢ G′2 Kρ′ = JΦ, Φ′ ⊢ G2 |G′2 Kρρ′ as required. Rule (merge)) Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 ∀x, y ∈ Γ.xσ = yσ ∧ x 6= y ⇒ actΛ (x) = actΛ (y) Λ′ ,π ′ Γσ ⊢ G1 σ −−−→ Φ′ ⊢ G2 σρg where σ : Γ → Γ is an idempotent substitution and: (i). ρg = mgu({(nΛ (x))σ = (nΛ (y))σ|xσ = yσ} ∪ {xσ = yσ|xπ = yπ}) where ρg maps names to representatives in Γσ whenever possible (ii). ∀z ∈ Γ.Λ′ (zσ) = (Λ(z))σρg (iii). 
π ′ = ρg |Γσ where we used ρg instead of ρ to avoid confusion with the injective renaming ρ. For inductive hypothesis we have: θρ JΓ ⊢ G1 K ⇒ T Λ,π where θρ is associated to Γ ⊢ G1 −−→ Φ ⊢ G2 and T = JΦ ⊢ G2 Kρ and we used as clauses the translations of the productions used in the proof of the HSHR transition applied to the translations of the edges on which the productions were applied. Thanks to Lemma 6 we have that this computation exists iff: xi,j = xp,q ⇒ ai,j = ap,q and that: ′ ′ • θr = mgu({x′i,j = x′p,q , ~yi,j = ~yp,q |xi,j = xp,q }∪ ′ {xi,j = xi,j |the jth argument of Hi is x′i,j }) ′ ′ ′ ])/x |a yi,j [l] ∨ (vk = • θ = {ai,j ([x′i,j ], [y~i,j i,j i,j is defined} ∪ {[vk ]/vk |(vk = xi,j ∨ vk = ~ xi,j ∧ ai,j is undefined )) ∧ [vk ] 6= vk } • T = (B1 , . . . , Bn )θr where JΓ ⊢ G1 K = A1 , . . . , An and we used the naming conventions defined in Lemma 6. In particular ρ maps each variable to its primed version. We have: JΓ ⊢ G1 Kσ = JΓσ ⊢ G1 σK We want to find a big-step that uses the same clauses of the previous one applied to the same atoms, but starting from this new goal. We will use an overline to Mapping Fusion and SHR into Logic Programming 43 denote the components of the new big-step. Thanks to Lemma 6 such a big-step exists iff: xi,j σ = xp,q σ ⇒ ai,j = ap,q but if xi,j = xp,q this has already been proved, otherwise this is guaranteed by the applicability conditions of the rule. We have to prove that θ ρ is associated to: Λ′ ,π ′ Γσ ⊢ G1 σ −−−→ Φ′ ⊢ G2 σρg . From Lemma 6 we have: ~′ ]′ )/xi,j σ|ai,j is defined} ∪ {[vk ]′ /vk |(vk = x′ ∨ vk = ~y ′ [l] ∨ θρ = {ai,j ([x′i,j ]′ , [yi,j i,j i,j (vk = xi,j ∧ ai,j is undefined )) ∧ [vk ]′ 6= vk } where [−]′ maps each variable to the representative of the equivalence class given by: ′ ′ {x′i,j = x′p,q , ~yi,j = ~yp,q |xi,j σ = xp,q σ}∪ ∪ {xi,j σ = x′i,j |the jth argument of Hi is x′i,j } Note that xi,j σ is never unified with a primed variable unless ai,j is undefined. Since Λ′ (xi,j σ) = (Λ(xi,j ))σρg we need to prove that θr = θr ρ−1 σρg ρ, that is that [x′ ]′ = [y ′ ]′ iff xσρg = yσρg . From the hypothesis and from Lemma 4 we have that σρg = mgu({nΛ (x) = nΛ (y)|xσ = yσ} ∪ {x = y|xπ = yπ} ∪ eqn(σ)). The first and the third part are equal (adding primes) to the equations for [−]′ . As far as the second part is concerned, ρ maps variables merged by π to the same primed variable, thus these equations become trivial. Λ′ ,π ′ This proves that θρ is associated to Γσ ⊢ G1 σ −−−→ Φ′ ⊢ G2 σρg as required. By hypothesis and using Lemma 6 we have: JΦ ⊢ G2 Kρ = (B1 , . . . , Bn )θr The result of the new computation is: (B1 , . . . , Bn )θr Thanks to the properties of ρ we have: (B1 , . . . , Bn )θr = (B1 , . . . , Bn )θr ρ−1 σρg ρ = JΦ ⊢ G2 Kρρ−1 σρg ρ = JΦ′ ⊢ G2 σρg Kρ. Rule (idle)) Λǫ ,id Γ ⊢ G −−−→ Γ ⊢ G The proof is analogous to the proof for axioms, using as clause corresponding to the production the translation of the (idle) rule. Rule (new)) Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 x∈ / Γ ~y ∩ (Γ ∪ Φ ∪ {x}) = ∅ Λ∪{(x,a,~ y)},π Γ, x ⊢ G1 −−−−−−−−−→ Φ′ ⊢ G2 For inductive hypothesis we have in P the following big-step of Synchronized Logic Programming: θρ JΓ ⊢ G1 K −→* T where θρ is associated to: Λ,π Γ ⊢ G1 −−→ Φ ⊢ G2 44 Ivan Lanese and Ugo Montanari and T = JΦ ⊢ G2 Kρ but because x ∈ / n(G1 ) we have that θρ is also associated to: Λ∪{(x,a,~ y)},π Γ, x ⊢ G1 −−−−−−−−−→ Φ′ ⊢ G2 The thesis follows. Proof of Theorem 5 The translation JΓ ⊢ GK can be written in the form A1 , . . . , An where the Ai s are translations of single edges. 
We want to associate a HSHR production to each Ai . θ For edges associated to atoms that are rewritten in JΓ ⊢ GK ⇒ T we use an instance of the axiom that corresponds to the clause used to rewrite it, otherwise the rule obtained from rule (idle) applied to that edge and to nodes that are all distinct. Let T1 , . . . , Tn be the heads of these rules. We choose the names in the following way: for each first occurrence of a variable in A1 , . . . , An we use the same name for the node in the corresponding position, we use new names for all other occurrences. Note that there exists a substitution σ such that (T1 , . . . , Tn )σ = A1 , . . . , An and that σ is idempotent. Note also that for each i all names in Ti are distinct. For each i we can choose an instance Ri of the associated rule with head Ti such that for each i, j, i 6= j we have n(Ri ) ∩ n(Rj ) = ∅. As notation we use: Λi ,πi Ri = Γi ⊢ Gi −−−→ Φi ⊢ G′i Since all the rules have a disjoint set of names we can apply n − 1 times rule (par) in order to have: S S   S S i Λi , i πi − − − − − −−→ i Φi ⊢ i G′i Γ ⊢ G i i i i Now we want to apply rule (merge) with substitution σ. We can do it since σ is S idempotent. We have to verify that xσ = yσ ∧ x 6= y ⇒ ( i Λi )(x) = (a, ~v ) ∧ S ~ This happens thanks to Lemma 6. Thus we obtain a rule of ( i Λi )(x) = (a, w). the form: Λ,π n(A1 , . . . , An ) ⊢ A1 , . . . , An −−→ Φ ⊢ G′ Thanks to Theorem 4 we can have in P the following big-step of Synchronized Logic Programming: θρ′ Jn(A1 , . . . , An ) ⊢ A1 , . . . , An K ⇒ T ′ for every ρ that satisfies the freshness conditions. Furthermore θρ′ is associated to Λ,π n(A1 , . . . , An ) ⊢ A1 , . . . , An −−→ Φ ⊢ G′ and T ′ = JΦ ⊢ G′ Kρ. Finally we used as clauses the translations of the productions used in the proof of the HSHR rewriting. Note that Jn(A1 , . . . , An ) ⊢ A1 , . . . , An K = JΓ ⊢ GK. Since the result of a big-step is determined up to injective renaming by the starting goal and the used clauses (see Lemma 6) then we must have θ = θρ′ ρ′ and T = T ′ ρ′ for some injective renaming ρ′ . Note that ρρ′ satisfies the freshness conditions (since variables generated in logic ′′ programming are always fresh) and we also have θ = θρρ ′. Λ,π Thus we have that θ is associated to Γ ⊢ G −−→ Φ ⊢ G′ and that T = T ′ ρ = JΦ ⊢ G′ Kρρ′ . This proves the thesis.
arXiv:1711.11566v1 [cs.LG] 30 Nov 2017 Hybrid VAE: Improving Deep Generative Models using Partial Observations Sergey Tulyakov∗ Snap Research [email protected] Andrew Fitzgibbon, Sebastian Nowozin Microsoft Research {awf,Sebastian.Nowozin}@microsoft.com Abstract Deep neural network models trained on large labeled datasets are the state-of-theart in a large variety of computer vision tasks. In many applications, however, labeled data is expensive to obtain or requires a time consuming manual annotation process. In contrast, unlabeled data is often abundant and available in large quantities. We present a principled framework to capitalize on unlabeled data by training deep generative models on both labeled and unlabeled data. We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets. We call our method Hybrid VAE (H-VAE) as it contains both the generative and the discriminative parts. We validate H-VAE on three large-scale datasets of different modalities: two face datasets: (MultiPIE, CelebA) and a hand pose dataset (NYU Hand Pose). Our qualitative visualizations further support improvements achieved by using partial observations. 1 Introduction Understanding the world from images or videos requires reasoning about ambiguous and uncertain information. For example, when an object is occluded we receive only partial information about it, making our resulting inferences about the object class, shape, location, or material uncertain. To represent this uncertainty in a coherent manner we can use probabilistic models. A key distinction is between generative and discriminative probabilistic models, see Lasserre et al. [2006]. Generative models represent a joint distribution p(d, h) over an observation d and a quantity h that we would like to infer. We can inspect a generative model by drawing samples (d, h) ∼ p(d, h) (see Fig. 1), and we can make predictions by conditioning, evaluating p(h|d). In contrast, discriminative models directly model the distribution p(h|d), always assuming that d is observed. We can make predictions but no longer inspect the internals of the model through sampling. Discriminative models often outperform generative models on prediction tasks where a large amount of labeled data is available. Conversely, generative models have the advantage that in principle they can make use of abundant unlabeled data, but in practice there are computational challenges. Today, the majority of popular computer vision models are discriminative, but recently deep learning revolutionized how we build generative models and perform inference in them. In particular, current works on generative adversarial networks (GANs) (Goodfellow et al. [2014], Nowozin et al. [2016]) and variational autoencoders (VAEs) (Kingma and Welling [2014], Rezende et al. [2014], Doersch [2016]) allow for rich and tractable generative models. In the current study, we extend the generative VAE framework to represent a joint distribution p(d, h) and derive a generative-discriminative hybrid making use of abundant unlabeled data. ∗ The work was done while Sergey Tulyakov was at Microsoft Research, Cambridge, UK 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
Figure 1: Samples (d, h) ∼ p(d, h) of two generative models trained on the CelebA dataset (top), and on the MultiPIE dataset (bottom). Note how the models consistently sample both the image and the corresponding pose. To derive our hybrid model we start with a generative VAE model of the form pθ (d, h), where θ are neural network parameters. Since labeled data is costly, we assume that only a small subset of the training instances is labeled {(d, h)}, while a much larger training subset contains a collection of unlabeled observations {(d)} R only. To allow learning from the unlabeled set we consider the marginal likelihood pθ (d) = pθ (d, h) dh and derive a tractable variational lower bound. Interestingly, through a particular choice in the derivation of this lower bound we can create an auxiliary discriminative neural network model q(h|d). With the help of the bound the maximum likelihood learning objective for our generative model now becomes the sum of the full likelihood and the marginal likelihood. The benefit of this hybrid approach is that it allows learning from partial observations in a principled manner and scales to realistic computer vision applications. In summary, our contributions are: • Deriving a principled hybrid variational autoencoder model that allows for highdimensional continuous output labels. • Using unsupervised data as data-driven regularization for large scale deep learning models. • Experimentally validating the improved generative model performance in terms of better likelihoods and improved sample quality for facial landmarks. 2 Full Variational Autoencoder Framework We extend the deep VAE framework originally presented in Kingma and Welling [2014], Rezende et al. [2014], Doersch [2016] to the case of pairs of observations. This extension is technically straightforward and, like the VAE approach, has three components: first, a probabilistic model formulated as an infinite latent mixture model; second, an efficient approximate maximum likelihood learning procedure; and third, an effective variance reduction method that allows effective maximum likelihood training using backpropagation. For the probabilistic model, we are interested in representing a distribution p(d, h). Here d is an image, and h is an encoding of a continuous image label, e.g. a set of locations of the markers for face alignments. We define an infinite mixture model using an additional latent variable z as Z p(d, h) = pθ (d, h|z) p(z) dz. (1) The conditional distribution pθ (d, h|z) is described by a neural network and has parameters θ to be learned from a training data set. In practice this is implemented by outputting the parameters of a multivariate Normal distribution, N (µθ (z), Σθ (z)) so that the conditional likelihood pθ (d, h|z) can be computed easily. The distribution p(z) is fixed to be a multivariate standard Normal distribution, p(z) = N (0, I). The above model is expressive, because it corresponds to an infinite Gaussian mixture model and hence can approximate complicated distributions. To learn the parameters of the model using maximum likelihood, Kingma and Welling [2014], Rezende et al. [2014] introduce a tractable lower-bound on the log-likelihood. Consider the loglikelihood of a single joint training sample (d, h). 
Using variational Bayesian bounding techniques (see Doersch [2016]) we can lower bound the log-likelihood via an auxiliary model q(z|d, h) by log p(d, h) ≥ Ez∼qω (z|d,h) [log pθ (d, h|z)] − DKL (qω (z|d, h)kp(z)) =: LF (θ, d, h), 2 (2) q(z|d, h) p(d, h|z) FC FC h FC p(h|z) µ(h) FC z (h) d p(d|z) Convolutional layers Deconvolutional layers Figure 2: The architecture of the F-VAE deep generative neural network model we use to implement Equation (2). Here FC denotes a fully-connected neural network. where DKL is the Kullback-Leibler divergence between the variational distribution and the prior, which for the case of two Normal distributions has a simple analytic form. To optimize the bound (2) we can use stochastic gradient descent, approximating the expectation using a few samples, perhaps using only a single sample. A naive sampling approach would incur a high variance in the estimated gradients; the reparametrization trick (see Kingma and Welling [2014], Rezende et al. [2014]) allows significant variance reduction in estimating (2). In the following, we refer to the models trained using eq. 2 as Full VAE or F-VAE, as they use only fully observed data for training. Fig. 2 shows the architecture of the F-VAE model. 3 Hybrid Learning: Using Unlabeled Data Obtaining a large amount of {di } is possible for many computer vision tasks; however, it may be expensive to collect large amounts of paired data (di , hi ) because it involves some procedure for ground truth collection or manual labelling of images. In the case of abundant unlabeled data where only the image part is observed, we would like to train our model from both the expensive labeled set and the partially observed data. Given an unlabeled image d, we consider the marginal likelihood p(d) of the image as Z Z log p(d) = log pθ (d, h|z)p(z) dz dh. (3) The above is a difficult high-dimensional integration problem. In what follows, we drop neural network parameters θ and w for brevity of notation. Using a variational Bayes bounding technique we derive a tractable lower bound on this marginal log-likelihood by introducing one auxiliary model, q(h|d) and reusing q(z|d, h) introduced in the Full VAE framework, (2), Z Z p(d, h|z) p(z) log q(h|d) q(z|d, h) dz dh. (4) q(h|d) q(z|d, h) Replacing the integrals with expectations and moving the logarithm inside gives the variational lower bound on the log-likelihood:  h    p(d, h|z) p(z) i p(d, h|z) p(z) log p(d) = log Eh Ez ≥ Eh Ez log + log q(h|d) q(z|d, h) q(h|d) q(z|d, h) h h i i = Eh Ez log p(d, h|z) − log q(h|d) − DKL (q(z|d, h)kp(z)) . (5) Here, Eh is shorthand for Eh∼q(h|d) , and likewise Ez stands for Ez∼q(z|d,h) . We rewrite (5) by recognizing the entropy term H(q(h|d)) = −Ez [log q(d|h)], giving h i log p(d) ≥ Eh Ez [log pθ (d, h|z)] − DKL (q(z|d, h)kp(z)) + H(q(h|d)) =: LP (θ, d). (6) In our case, the entropy H(q(h|d)) is available as a simple analytic form because we use a multivariate Normal distribution for q(h|d). Note that q(h|d) represents essentially a discriminative model, implemented using a deep neural network. 3 q(h|z) FC q(h|d) d h d µ(h) h q(z|d, h) q(z|d, h) z p(d, h|z) (z) d z p(d, h|z) d h (a) Full VAE d h Convolutional layers (c) Architecture of q(h|d) (b) Hybrid VAE Figure 3: Extending the variational autoencoder (VAE) model to the case of hybrid labeled/unlabeled observations. 
(a) the standard VAE model extended to the paired observation case: an encoder q(z|d, h) and a decoder p(d, h|z) mapping to/from a latent code z; (b) the hybrid VAE model introducing a discriminative variational model q(h|d); (c) the architecture of the discriminative network.

We now combine LF and LP into one learning objective. For this, we assume we have a dataset {(di, hi)}i=1,...,n of fully-observed samples and another dataset {(dj)}j=1,...,m of partially-observed data, so that only d is observed. Typically m ≫ n because it is easier to obtain unlabeled images. Because both LF and LP are log-likelihood bounds for a single instance, one principled way to combine the two learning objectives is to simply sum them over all instances,

L1(θ) := Σ_{i=1}^{n} LF(θ, di, hi) + Σ_{j=1}^{m} LP(θ, dj). (7)

While (7) is a valid log-likelihood bound, we found that empirically learning is faster when the relative contribution of each sum is weighted equally. We achieve this through the learning objective

L(θ) := (1/n) Σ_{i=1}^{n} LF(θ, di, hi) + (1/m) Σ_{j=1}^{m} LP(θ, dj). (8)

We optimize (8) using minibatch stochastic gradient descent, sampling one separate minibatch for each sum per iteration. We consider L(θ) in (8) to be a hybrid learning objective, as it couples together generative and discriminative models. Hence the name of the model: Hybrid VAE (H-VAE). We show in the experimental section that hybrid learning using (8) greatly improves the log-likelihood on the hold-out set of the generative model. Additionally, we show that the use of a large set of partially observed instances prevents overfitting of the generative models. Fig. 3a and Fig. 3b outline the components of the F-VAE and H-VAE frameworks respectively.

3.1 Modeling Depth Images

Compared to natural images, depth images have the additional property that depth values could be unobserved. This is either because a pixel is outside the operating range of the camera or because the pixel is invalidated by the depth engine. To model this effect accurately, we proceed in two steps: for each pixel, we first compute a probability of being observed, p(b|z), where b is a probability map with one probability bu ∈ [0, 1] for each position u. If bu = 1 then the normal continuous model p(du|z) is used, but if bu = 0, then the depth value is set to du = ∅, a symbolic "unobserved" value. Formally this corresponds to enlarging the domain of depth observations to R ∪ {∅} and using the probability model p(d|z) = ∫ p(d|b, z) p(b|z) db. We can implement this simple model efficiently as a summation over two maps because there are only two states for each pixel.

4 Experiments

4.1 Implementation Details

An H-VAE model has three neural networks as shown in Fig. 3b. Each of these networks uses either convolutional or deconvolutional subnetworks, and these parts closely follow the encoder-decoder architecture proposed in Radford et al. [2015]. We visualize the network architectures in Fig. 2 and Fig. 3c. Every convolutional layer doubles the number of channels, while shrinking the width and height by a factor of two. Deconvolutional layers perform the opposite operation: they reduce the number of channels by a factor of two, but double the width and height by a factor of two. Given a pair (d, h), the encoding network q(z|d, h) independently processes d using the convolutional subnetwork, while h is processed by the three-layer fully-connected neural network (FC-NN).
These two outputs are then concatenated and passed through another FC-NN, producing a diagonal multivariate Normal distribution over z. The decoder network p(d, h|z) first processes z using an FC-NN, the pipeline then split, and the deconvolutional subnetwork is used to produce a diagonal multivariate Normal distribution over d. The distribution over h is computed by yet another FCNN, again as a diagonal multivariate Normal. We use ReLU activations (Nair and Hinton [2010]) throughout all the networks and every FC-NN hidden layer has 256 units. In every stochastic layer, we use unbiased estimates for the expectations by averaging three samples from the corresponding distribution. We implement the pipeline in Chainer (Tokui et al. [2015]) and train it end-to-end using SGD with learning rate 0.01 and momentum 0.9. 4.2 Datasets Previous work on generative models used data sets such as MNIST (LeCun et al. [1998]) and SVHN (Netzer et al. [2011]) for evaluating their models. In our work, however, these datasets do not allow us to show the benefits of the proposed H-VAE approach, as they include categorical labels only, whereas our method extends the standard VAE framework to deal with continuous annotation h. We evaluate our method on two up-to-date datasets used in the computer vision community. To simulate partially observed samples, we shuffle the dataset and split it into fully- and partiallyobserved subsets and separate a holdout test set, not available during training. MultiPIE. The MultiPIE (Gross et al. [2010]) dataset consists of face images of 337 subjects taken under different pose, illumination and expressions. The pose range contains 15 discrete views, capturing a face profile-to-profile. Illumination changes were modeled using 19 flashlights located in different places of the room. The database has been extensively used in the community for face alignment (Xiong and De la Torre [2013], Zhu and Ramanan [2012], Tulyakov et al. [2017a]). For our purposes we use only the views annotated either with 68 or 66-points markup, in total producing 47250 images. We drop the inner mouth corner points, so that all the images have the same 66-point markup. Images were cropped around the landmarks and downscaled to 48×48 size. We reserve 2K (d, h) pairs from the MultiPIE dataset for testing purposes. CelebA face dataset. The CelebA (Liu et al. [2015]) is a large scale face attributes dataset containing more that 200K celebrity images annotated with 40 attributes (such as eyeglasses, pointy nose etc.) and 5 landmark locations (eyes, nose, mouth corners). It contains more than 10K distinct identities. As in Lamb et al. [2016] we center and crop all images around the face and resize to 64 × 64. We use the provided landmarks as continuous labels h. The testing set consists of 10K fully observed instances. NYU Hand Pose dataset. The NYU Hand Pose Dataset (Tompson et al. [2014]) contains 72757 training frames of RGBD data and 8252 testing set frames. For every frame, the RGBD data from 3 kinects is available. In our experiments we use only depth frames captured from the front view. We resize depth maps to 128 × 128 and preprocess the depth values using the code from Oberweger et al. [2015]. 4.3 Quantitative Evaluation A commonly accepted comparison procedure for generative modeling is evaluation of the negative log-likelihood (NLL) on the testing data (Gneiting and Raftery [2007], Lamb et al. [2016], Kingma et al. [2016]). We compute the NLL using Eq. 
(2) on the same fully observed testing data for each model. This metric, however, is difficult to interpret, requiring visual samples from the model to assess their quality. Therefore, we additionally report the task-loss Etask . The task-loss in the case of face alignment is the average point-to-point Euclidean distance, normalized by the interocular distance (Tulyakov and Sebe [2015], Trigeorgis et al. [2016]). For hand pose experiment we report 5 Table 1: Comparison of F-VAE and H-VAE models on three datasets: (a) MultiPIE, (b) CelebA, (c) NYU Hand Pose. The number of fully observed samples used for training is denoted by n, while m corresponds to the total number of partially observed instances. We report the task-loss Etask and the negative log-likelihood of the fully observed testing data computed using eq. 2. 5k 30k 5k 30k 6RXUFH Etask - - -23.00 -27.33 -31.25 0.1162 0.1457 0.0935 0.0845 5HFRQVWUXFWLRQ -20.43 -25.75 -28.29 -32.36 5k 5k 15k 15k 100k 150k 15k 100k 6WHS 6WHS − log p(d, h) 0.1425 0.1420 0.1181 0.0929 6WHS -3.49 -7.40 -7.19 -8.15 -6.90 -6.98 -5.90 -9.08 6WHS n m Etask F-VAE 500 500 5k 5k - m 15k 30k 50k 100k − log p(d, h) − log p(d, h) 10k 30k - - 29.31 26.27 H-VAE - n F-VAE 5k 15k 30k Etask H-VAE F-VAE m H-VAE n (c) NYU Hand Pose (b) CelebA (a) MultiPIE 10k 10k 10k 30k 30k 30k 10k 20k 30k 10k 20k 30k 4.54 4.82 4.56 4.82 4.56 4.56 29.31 25.89 21.13 19.20 18.42 14.11 6WHS 5HFRQVWUXFWLRQ 7DUJHW (a) Samples the first row (red) for every example are obtained using the F-VAE(5K) model, the bottom row (green) is produced by the H-VAE(5K, 30K) model. Both models were trained on the MultiPIE dataset. Source Step1 Step 2 Step 3 Step 4 Step 6 Step 7 Step 8 Step 8 Step 9 Step 10 Target (b) Interpolating depth images using the H-VAE(30K, 30K) model trained on the NYU Hand Pose dataset. Figure 4: Selected examples showing interpolations. For every interpolation, the source and the target images are taken from the testing set, projected to and reconstructed from the latent space, interpolation is performed from the source to the target. the average L2 distance between the ground truth and the prediction. We report the task-loss by evaluating the mean of q(h|d) trained using the hybrid objective (8). This metric is not available for the F-VAE models. For brevity reasons, we denote F-VAE(n) as a F-VAE model trained using n fully observed samples, and similarly H-VAE(n, m) trained with n fully- and m partially-observed instances. Table 1a compares multiple F-VAE models against the proposed H-VAE models on the MultiPIE dataset. Clearly, using a hybrid learning objective with partially observed data helps drastically improve the likelihood: the H-VAE(500, 30K) model outperforms the F-VAE(5K). Similarly, the H-VAE(5K, 30K) model shows better NLL as compared to F-VAE(30K). The same holds for the CelebA dataset, as seen in table 1b. The F-VAE(5K) model is not able to accurately learn the distribution. Instead, it overfits, leading to the worse NLL. In contrast, the H-VAE(5K, 15K) has comparable results with the F-VAE(15K), advocating for the use of inexpensive partial observations. Additionally, H-VAE(30K, 150K) outperforms F-VAE(150K). Similar results can be observed on the NYU Hand Pose dataset (Table 1c), where the H-VAE(10k, 10k) models shows NLL comparable to F-VAE(30K), and clearly the model having the most of fully observed and partially observed data scores best. 
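For concreteness, the face-alignment task-loss used above (the mean point-to-point Euclidean distance normalized by the inter-ocular distance) could be computed as in the following sketch. The NumPy implementation and the outer-eye-corner indices are our own illustrative assumptions following the common 68-point markup; they are not details taken from the paper.

```python
import numpy as np

def normalized_landmark_error(pred, gt, left_eye=36, right_eye=45):
    """Mean point-to-point Euclidean distance between predicted and ground-truth
    landmarks, normalized by the inter-ocular distance (the task-loss of Sec. 4.3).

    pred, gt : arrays of shape (num_points, 2).
    left_eye, right_eye : indices of the outer eye corners; the values (36, 45)
    follow the common 68-point markup and are an assumption for illustration.
    """
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    per_point = np.linalg.norm(pred - gt, axis=1)            # distance per landmark
    interocular = np.linalg.norm(gt[left_eye] - gt[right_eye])
    return per_point.mean() / interocular
```

Averaging this quantity over the test set would correspond to the Etask values reported in Table 1 for the face datasets; the hand-pose experiment instead uses the plain average L2 distance.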
4.4 Qualitative Evaluation One of the key benefits of generative modeling is the ability to analyze the learned distribution by performing sampling. Since z ∼ N (0, I) one can sample z and then sample (d, h) ∼ p(d, h|z) to get an image and its label. Fig. 1 shows several examples of that. Note how a generative model can consistently represent pose and image information. The CelebA dataset consists of public images of 6 celebrities, and therefore is biased towards smiling faces. The bottom half of Fig. 1 shows samples from the distribution learned using the MultiPIE dataset. Since the MultiPIE dataset consists of multiple poses, illuminations and expressions, the model is able to encode them. Interestingly, one can project two images onto the z-space by sampling z ∼ q(z|d, h) and analyze the structure of the learned space by linearly interpolating between the two points. Source-target interpolation on the MultiPIE dataset is given in Fig. 4a. The models were trained on the MultiPIE dataset. For this example, the top row is obtained using the F-VAE(5K) model that saw only 5K labeled examples during training. The bottom row is produced using the model trained on 5K fully observed samples and 30K of partial observations (H-VAE(5K, 30K)). Clearly, partial observations significantly improve image quality, making the texture less blurry, and better representing facial expressions. Interestingly, both models gradually transform between images (i.e. open a mouth or rotate a face), indicating that by exploring the latent representation one can generate plausible face trajectories. It also shows that, in general, useful semantics are linearized in the hidden representation of the model. Similar interpolation on the NYU Hand Pose dataset is given in Fig. 4b. Interestingly, gesture transformation is encoded in the representation. Traversing a line transforms to bending or unbending fingers in the image space. 5 Related work We first discuss existing state-of-the-art deep generative models. Since our formulation is joint in terms of images and their labels we review previous attempts to create such a joint representation. Additionally, as our model is a hybrid consisting of discriminative and generative parts we outline previous related works in this area. 5.1 Deep Generative Models Current research focuses on two classes of models, generative adversarial networks and variational autoencoders. Generative adversarial networks (GAN) were proposed recently in the machine learning community as neural architectures for generative models (Goodfellow et al. [2014]). In a GAN two networks are trained together, competing against each other: the generator tries to produce realistic samples, e.g. images; the adversary tries to distinguish generated samples from training samples. Formally, this yields a challenging min − max optimization problem and a variety of techniques have been proposed to stabilize learning in GANs, see Salimans et al. [2016], Sønderby et al. [2016], Metz et al. [2016]. Despite this difficulty, multiple extensions of the original work appeared in the literature. Deep convolutional generative adversarial network in Radford et al. [2015] and Denton et al. [2015] show surprisingly photo-realistic and sharp image samples as compared to previous works. The authors provide multiple architectural guidelines that improve the overall quality of the sampled images. 
As a result, the models are able to produce samples for trajectories in the latent space, showing the internal structure of the learned space. The work in Nowozin et al. [2016] shows that GANs can be viewed as a special case of a more general variational divergence estimation approach. Further examples of GANs include generating images of birds from textual descriptions, Reed et al. [2016], styling images, Ulyanov et al. [2016], and video generation, Vondrick et al. [2016], Tulyakov et al. [2017b]. Variational autoencoders were introduced independently by two groups (Kingma and Welling [2014], Rezende et al. [2014]). VAEs maximize a variational lower bound on the log-likelihood of the data. Similarly to GANs, this is a recently emerged and rapidly evolving area of generative modeling. Since the original works, there have been many extensions introduced. The work in Kingma et al. [2014] extends the VAE framework to semi-supervised learning. In Maaløe et al. [2015] the idea of semi-supervised learning is exploited further by introducing auxiliary variables that improve the variational approximation. The Deep Recurrent Attentive Writer (DRAW), Gregor et al. [2015], employs recurrent neural networks acting as encoder and decoder, in a way mimicking the foveation of the human eye. Inverse Autoregressive Flow (IAF), presented in Kingma et al. [2016], and auxiliary variables (Maaløe et al. [2015]) are two approaches to further improve the quality of the variational approximation and the quality of the images generated by the model. Another recent attempt at improving inference is to consider hybrid GAN-VAE models as in Wu et al. [2016].

5.2 Hybrid Models

Learning a probabilistic model from different levels of annotation has been proposed earlier in Navaratnam et al. [2007]; in particular, using Gaussian processes (GP) the authors report improved person tracking performance. However, while GPs are analytically tractable, they do not scale well and the work is limited to very small data sets. In Lasserre et al. [2006] the authors consider the problem of combining a generative model p(d, h) with a discriminative model pd(h|d). They achieve this in a satisfying manner by creating a new model whose likelihood function can smoothly balance between the training objectives of the generative and discriminative models. However, the proposed coupling prior of Lasserre et al. [2006] is not useful in the context of neural networks, because Euclidean distance in the parameter vector of two neural networks does not measure useful differences in the function realized by the neural network. The model is shown to work well for discrete labels h in the empirical study (Druck et al. [2007]). Whereas the above work addresses semi-supervised learning with the goal of improving the predictive performance of p(h|d), our main interest is in improving the performance of the generative model p(d, h). Moreover, our work shows how to handle the marginal likelihood over h in an efficient and tractable manner.

6 Conclusions

We demonstrated a scalable and practical way to learn rich generative models of multiple output modalities. Compared to an ordinary deep generative model, our hybrid VAE model does not require fully labelled observations for all samples. Because the hybrid VAE model derives from the principled variational autoencoder model, it can represent complex distributions of images and pose and could be easily adapted to other modalities.
Our experiments demonstrate that when mixing fully labelled with unlabelled data the hybrid learning greatly improves over the standard generative model which can use only the fully labelled data. The improvement is both in terms of test set log-likelihood and in the quality of image samples generated by the model. We believe that hybrid generative models such as our hybrid VAE model address one of the key limitation of deep learning: the requirement of having large scale labelled data sets. We hope that hybrid VAE model will enable large scale learning from unsupervised data for a variety of computer vision tasks. References Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015. Carl Doersch. Tutorial on Variational Autoencoders. arXiv, pages 1–23, 2016. URL http://arxiv.org/ abs/1606.05908. Gregory Druck, Chris Pal, Andrew McCallum, and Xiaojin Zhu. Semi-supervised classification with hybrid generative/discriminative methods. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 280–289. ACM, 2007. Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672–2680. 2014. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015. 8 Ralph Gross, Iain Matthews, Jeffrey Cohn, Takeo Kanade, and Simon Baker. Multi-pie. Image and Vision Computing, 28(5):807–813, 2010. Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. In ICLR, 2014. Diederik P Kingma, Danilo J Rezende, Shakir Mohamed, and Max Welling. Semi-Supervised Learning with Deep Generative Models. In NIPS, 2014. Diederik P Kingma, Tim Salimans, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In NIPS, 2016. Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016. Julia A. Lasserre, Christopher M. Bishop, and Thomas P. Minka. Principled hybrids of generative and discriminative models. In CVPR, 2006. Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, pages 3730–3738, 2015. Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary Deep Generative Models. In NIPS workshop, 2015. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. Ramanan Navaratnam, Andrew W Fitzgibbon, and Roberto Cipolla. The joint manifold model for semisupervised multi-valued regression. In CVPR. IEEE, 2007. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016. 
Markus Oberweger, Paul Wohlhart, and Vincent Lepetit. Hands deep in deep learning for hand pose estimation. arXiv preprint arXiv:1502.06807, 2015. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016. D J Rezende, S Mohamed, and D Wierstra. Stochastic backpropagation and approximate inference in deep generative models. 2014. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016. Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016. Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework for deep learning. In NIPS, 2015. Jonathan Tompson, Murphy Stein, Yann Lecun, and Ken Perlin. Real-time continuous pose recovery of human hands using convolutional networks. ACM Transactions on Graphics, 33, August 2014. George Trigeorgis, Patrick Snape, Mihalis A Nicolaou, Epameinondas Antonakos, and Stefanos Zafeiriou. Mnemonic descent method: A recurrent process applied for end-to-end face alignment. In CVPR, 2016. Sergey Tulyakov and Nicu Sebe. Regressing a 3d face shape from a single image. In ICCV, pages 3748–3755. IEEE, 2015. 9 Sergey Tulyakov, Lszl Jeni, Jeffrey Cohn, and Nicu Sebe. Viewpoint-consistent 3d face alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 09 2017a. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. Mocogan: Decomposing motion and content for video generation. arXiv preprint arXiv:1707.04993, 2017b. Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417, 2016. Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. arXiv preprint arXiv:1609.02612, 2016. Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. arXiv preprint arXiv:1610.07584, 2016. Xuehan Xiong and Fernando De la Torre. Supervised descent method and its applications to face alignment. In CVPR, pages 532–539, 2013. Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In CVPR, pages 2879–2886, 2012. 10
Needed Computations Shortcutting Needed Steps Sergio Antoy Jacob Johannsen Steven Libby Computer Science Dept. Portland State University Oregon, U.S.A. [email protected] Dept. of Computer Science Aarhus University Denmark [email protected] Computer Science Dept. Portland State University Oregon, U.S.A. [email protected] We define a compilation scheme for a constructor-based, strongly-sequential, graph rewriting system which shortcuts some needed steps. The object code is another constructor-based graph rewriting system. This system is normalizing for the original system when using an innermost strategy. Consequently, the object code can be easily implemented by eager functions in a variety of programming languages. We modify this object code in a way that avoids total or partial construction of the contracta of some needed steps of a computation. When computing normal forms in this way, both memory consumption and execution time are reduced compared to ordinary rewriting computations in the original system. 1 Introduction Rewrite systems are models of computations that specify the actions, but not the control. The object of a computation is a graph referred to as an expression. The actions are encoded by rules that define how to replace (rewrite) one expression with another. The goal of a computation is to reach an expression, called a normal form, that cannot be further rewritten. In the computation of an expression, a rewrite system does not tell which subexpression should be replaced to reach the goal. Example 1. Consider the following rewrite system. The syntax is Curry [17]. loop = loop snd (-,y) = y (1) A computation of snd (loop,0) terminates with 0 if the second rule of (1) is ever applied, but goes on forever without making any progress if only the first rule is applied. In a computation a strategy is a policy or algorithm that defines both which subexpression should be replaced and its replacement. The intended goal of a strategy is to efficiently produce a normal form of an expression when it exists. A practical strategy, called needed, is known for the class of the strongly sequential term rewriting systems [18]. This strategy relies on the fact that, in every reducible expression e, there exists a redex, also called needed, that is reduced in any computation of e to a normal form. The needed strategy is defined and implemented as follows: given an expression e, while e is reducible, reduce an arbitrarily chosen, needed redex of e. In the systems considered in this paper, finding a needed redex is easy without look-ahead [3]. This strategy is normalizing: if an expression e has a normal form, repeatedly reducing arbitrary needed redexes will terminate with that normal form. This strategy is also optimal in the number of reduced redexes for graph (not term) rewriting. The above outline shows that implementing a needed strategy is a relatively straightforward task. Surprisingly, however, it is possible to shortcut some of the needed steps in the computation. This paper shows how this shortcutting can be introduced into an implementation of a needed strategy. A. Middeldorp and F. van Raamsdonk (Eds.): 8th International Workshop on Computing with Terms and Graphs (TERMGRAPH 2014) EPTCS 183, 2015, pp. 18–32, doi:10.4204/EPTCS.183.2 c S. Antoy, J. Johannsen, & S. Libby This work is licensed under the Creative Commons Attribution License. S. Antoy, J. Johannsen, & S. Libby 19 Terminology and background information are recalled in Sect. 2. 
The compilation scheme and its properties are in Sect. 3 and 4. The transformation that allows shortcutting needed redexes, and its properties, are in Sect. 5 and 6. Sect. 7 presents two benchmarks and sketches further opportunities to shortcut needed steps. Sect. 8 discusses the application of our work to the implementation of functional logic languages. Related work and conclusion are in Sect. 9 and 10, respectively. 2 Preliminaries A rewrite system is a pair (Σ∪X , R) in which Σ = C ]D is a signature partitioned into constructors and operations (or functions), X is a denumerable set of variables, and R is a set of rewrite rules defined below. Without further mention, we assume that the symbols of the signature have a type, and that any expression over the signature is well typed. An expression is a single-rooted, directed, acyclic graph defined in the customary way [11, Def. 2]. An expression e is a constructor form (or value) if, and only if, every node of e is labeled by a constructor symbol. Constructor forms are normal forms, but not vice versa. For example, head([ ]) where head is the usual operation that returns the first element of a (non-empty) list, is a normal form, but not a constructor form. In a constructor-based system, such expressions are regarded as failures or exceptions rather than results of computations. Likewise, a head-constructor form is an expression whose root node is labeled by a constructor symbol. A rule is a graph with two roots abstracting the left- and right-hand sides respectively. The rules follow the constructor discipline [23]. Each rule’s left-hand side is a pattern, i.e., an operation symbol applied to zero or more expressions consisting of only constructor symbols and variables. Rules are left-linear, i.e., the left-hand side is a tree. The objects of a computation are graphs rather than terms. Sharing some subexpressions of an expression is a requirement of functional logic programming [13, 14, 15, 21]. Incidentally, this sharing ensures that needed redexes are never duplicated during a computation. The difference between our graphs and ordinary terms concerns only the sharing of subexpressions. A computation of an expression t is a possibly infinite sequence t = t0 → t1 → . . . such that ti → ti+1 is a rewrite step [11, Def. 23]. For all i, ti is a state of the computation of t. Given a rewrite system R, an expression of R is an expression over the signature of R. When s is a signature symbol and n is a natural number, s/n denotes that n is the arity of s. When t and u are expressions and v is a variable, [u/v] is the substitution that maps v to u, and t[u/v] is the application of = [u/v] to t. The reflexive closure of the rewrite relation “→” is denoted “→ ”. Each operation in D is inductively sequential; that is, its rewrite rules are organized into a hierarchical structure called a definitional tree [1] which we informally recall below. An example of a definitional tree is shown in (3). In a definitional tree of an operation f , there are up to 3 kinds of nodes called branch, rule and exempt. Each kind contains a pattern of f and other items of information depending on the kind. A rule node with pattern π contains a rule of f whose left-hand side is equal to π modulo renaming nodes and variables. An exempt node with pattern π contains no other information. There is no rule of f whose left-hand side is equal to π. A branch node with a pattern π contains children that are subtrees of the definitional tree. At least one child is a rule node. 
The children are obtained by “narrowing” pattern π. Let x be any variable of π, which is called inductive. For each constructor c/m of the type of x, there is a child whose pattern is obtained from π by instantiating x with c(x1 , . . . xm ), where xi is a fresh variable. 20 Needed Computations Shortcutting Needed Steps An operation f /n is inductively sequential [1] if there exists a definitional tree whose root has pattern f (x1 , . . . xn ), where xi is a fresh variable, and whose leaves contain all, and only, the rules of f . A rewrite system is inductively sequential if all of its operations are inductively sequential. Inductively sequential operations can be thought of as “well designed” with respect to evaluation. To compute a needed redex of an expression e rooted by an operation f , match to e the pattern π of a maximal (deepest in the tree) node N of a definitional tree of f . If N is an exempt node, e has no constructor normal form, and the computation can be aborted. If N is a rule node, e is a redex and can be reduced by the rule in N. If N is a branch node, let x be the inductive variable of π and t the subexpression of e to which x is matched. Then, recursively compute a needed redex of t. The inductively sequential systems are the intersection [16] of the strongly sequential systems [18] and the constructor-based systems [23]. The following notion [8] for inductively sequential systems is key to our work. We abuse the word “needed” because we will show that our notion extends the classic one [18]. Our notion is a binary relation on nodes, or equivalently on the subexpressions rooted by these nodes, since they are in a bijection. Definition 1. Let R be an inductively sequential system, e an expression of R rooted by a node p, and n a node of e. Node n is needed for e, and similarly is needed for p, if, and only if, in any computation of e to a head-constructor form, the subexpression of e at n is derived to a head-constructor form. A node n (and the redex rooted by n, if any) of a state e of a computation in R is needed if, and only if, it is needed for some outermost operation-rooted subexpression of e. Our “needed” relation is interesting only when both nodes are labeled by operation symbols. If e is an expression whose root node p is labeled by an operation symbol, then p is trivially needed for p. This holds whether or not e is a redex and even when e is already a normal form, e.g., head([ ]). In particular, any expression that is not a value has pairs of nodes in the needed relation. Finally, our definition is concerned with reaching a head-constructor form, not a normal form. Our notion of need generalizes the classic notion [18]. Also, since our systems follow the constructor discipline [23] we are not interested in expressions that do not have a value. Lemma 1. Let R be an inductively sequential system and e an expression of R derivable to a value. If e0 is an outermost operation-rooted subexpression of e, and n is both a node needed for e0 and the root of a redex r, then r is a needed redex of e in the sense of [18]. Proof. Since e0 is an outermost operation-rooted subexpression of e, any node in any path from the root of e to the root of e0 , except for the root of e0 , is labeled by constructor symbols. Hence, e can be derived to a value only if e0 is derived to a value and e0 can be derived to a value only if e0 is derived to a headconstructor form. 
By assumption, in any derivation of e0 to a head-constructor form r is derived to a head-constructor form, hence it is reduced. Thus, r is a needed redex of e according to [18]. Lemma 2. Let R be an inductively sequential system, e an expression of R, e1 , e2 and e3 subexpressions of e such that ni is the root of ei and the label of ni is an operation, for i = 1, 2, 3. If n3 is needed for n2 and n2 is needed for n1 , then n3 is needed for n1 . Proof. By hypothesis, if e3 is not derived to a constructor-rooted form, e2 cannot be derived to a constructor-rooted form, and if e2 is not derived to a constructor-rooted form, e1 cannot be derived to a constructor-rooted form. Thus, if e3 is not derived to a constructor-rooted form, e1 cannot be derived to a constructor-rooted form. S. Antoy, J. Johannsen, & S. Libby 21 compile T 01 case T of 02 when branch(π, o, T¯ ) then 03 ∀ Ti ∈ T¯ compile Ti 04 output “H(${π}) = H(${π[H(π|o )]o })” 05 when rule(π, l → r) then 06 case r of 07 when operation-rooted then 08 output “H(${l}) = H(${r})” 09 when constructor-rooted then 10 output “H(${l}) = ${r}” 11 when variable then 12 for each constructor c/n of the sort of r 13 let l 0 → r0 = (l → r)[c(x1 , . . . xn )/r] 14 output “H(${l 0 }) = ${r0 }” 15 output “H(${l}) = H(${r})” 16 when exempt(π) then 17 output “H(${π}) = abort” Figure 1: Procedure compile takes a definitional tree of an operation f of R and produces the set of rules of H that pattern match f -rooted expressions. 3 Compilation For simplicity and abstraction, we present the object code, CR , of R as a constructor-based graph rewriting system as well. CR has only two operations called head and norm, and denoted H and N, respectively. The constructor symbols of CR are all, and only, the symbols of R. The rules of CR have a priority established by the textual order. A rule reduces an expression t only if no other preceding rule could be applied to reduce t. These semantics are subtle, since t could become reducible by the preceding rule only after some internal reduction. However, all our claims about computations in CR are stated for an innermost strategy. In this case, when a rule is applied, no internal reduction is possible, and the semantics of the priority are straightforward. Operation H is defined piecemeal for each operation of R. Each operation of R contributes a number of rules dispatched by pattern matching with textual priority. The rules of H contributed by an operation with definitional tree T are generated by the procedure compile defined in Fig. 1. The intent of H is to take an expression of R rooted by an operation and derive an expression of R rooted by a constructor by performing only needed steps. The expression “${x}” embedded in a string, denotes interpolation as in modern programming languages, i.e., the argument x is replaced by a string representation of its value. The notation t[u] p stands for an expression equal to t, in which the subexpression identified by p is replaced by u. In procedure compile, the notation is used to “wrap” an application of H around the subexpression of the pattern at o, the inductive node. An example is the last rule of (4). The loop at statement 12 is for collapsing rules, i.e., rules whose right-hand side is a variable. When this variable matches an expression rooted by a constructor of R, no further application of H is required; Otherwise, H is applied to the contractum. Symbol “abort” is not considered an element of the signature of CR . 
If any redex is reduced to “abort”, 22 Needed Computations Shortcutting Needed Steps the computation is aborted since it can be proved that the expression object of the computation has no constructor normal form. Example 2. Consider the rules defining the operation that concatenates two lists, denoted by the infix identifier “++”: []++y = y (x:xs)++y = x:(xs++y) (2) The definitional tree of operation “++” is pictorially represented below. The only branch node of this tree is the root. The inductive variable of this branch, boxed in the representation, is x. The rule nodes of this tree are the two leaves. There are no exempt nodes in this tree since operation “++” is completely defined. x ++y []++y  y (x:xs)++y (3)  x:(xs++y) Applying procedure compile to this tree produces the following output: H([]++[]) = [] H([]++(y:ys)) = (y:ys) H([]++y) = H(y) H((x:xs)++y) = x:(xs++y) H(x++y) = H(H(x)++y) compile line #14 compile line #14 compile line #15 compile line #10 compile line #04 (4) Operation N of the object code is defined by one rule for each symbol of R. In the following metarules, c/m stands for a constructor of R, f /n stands for an operation of R, and xi is a fresh variable for every i. N(c(x1 , . . . xm )) = c(N(x1 ), . . . N(xm )) N( f (x1 , . . . xn )) = N(H( f (x1 , . . . xn ))) (5) Example 3. The rules of N for the list constructors and the operation “++” defined earlier are: N([]) = [] N(x:xs) = N(x):N(xs) N(x++y) = N(H(x++y)) (6) Definition 2. The rewrite system consisting of the H rules generated by procedure compile for the operations of R and the N rules generated according to (5) for all the symbols of R is the object code of R and is denoted CR . Example 4. We show the computation of [1]++[2] in both R and CR . We use the desugared notation for list expressions to more easily match the patterns of the rules of “++”. (1:[])++(2:[]) → 1:([]++(2:[])) → 1:2:[] and (7) S. Antoy, J. Johannsen, & S. Libby 23 N((1:[])++(2:[])) → N(H((1:[])++(2:[]))) → N(1:([]++(2:[])) → N(1):N([]++(2:[])) (8) → 1:N(H([]++(2:[]))) → 1:N(2:[]) → 1:N(2):N([]) → 1:2:[] Computation (8) is longer than (7). If all the occurrences of N and H are ”erased” from the states of (8), a concept formalized shortly, and repeated states of the computation are removed, the remaining steps are the same as in (7). The introduction and removal of occurrences of N and H in (8), which lengthen the computation, represent the control, what to rewrite and when to stop. These activities occur in (7) too, but are in the mind of the reader rather than explicitly represented in the computation. 4 Compilation Properties CR , the object code of R, correctly implements R. Computations performed by CR produce the results of corresponding computations in R as formalized below. Furthermore, CR implements a needed strategy, because every reduction performed by CR is a needed reduction in R. In this section, we prove these properties of the object code. Let Expr be the set of expressions over the signature of CR output by compile on input a rewrite system R. The erasure function E : Expr → Expr is inductively defined by: E (H(t)) = E (t) E (N(t)) = E (t) E (s(t1 , . . .tn )) = s(E (t1 ), . . . E (tn )) for s/n ∈ ΣR (9) Intuitively, the erasure of an expression t removes all the occurrences of H and N from t. The result is an expression over the signature of R. Lemma 3. Let R be an inductively sequential system and H the head function of CR . For any operationrooted expression t of R, H(t) is a redex. Proof. 
Let f /n be the root of t, and T the definitional tree of f input to procedure compile. The pattern at the root of T is f (x1 , . . . xn ), where each xi is a variable. Procedure compile outputs a rule of H with left-hand side H( f (x1 , . . . xn )). Hence this rule, or a more specific one, reduces t. Comparing graphs modulo a renaming of nodes, as in the next proof, is a standard technique [11] due to the fact that any node created by a rewrite is fresh. Lemma 4. Let R be an inductively sequential system and H the head function of CR . Let t be an operation-rooted expression of R, and H(t) be reduced by a step resulting from the application of a rule r originating from statement 04 of procedure compile. The argument of the inner application of H in the contractum is both operation-rooted and needed for t. Proof. Let T be a definitional tree of the root of t. Let π be the pattern of the branch node n of T from which rule r originates and let o be the inductive node of π. Since r rewrites t and π is the left-hand side of r modulo a renaming of variables and nodes, there exists a graph homomorphism σ such that t = σ (π). Our convention on the specificity of the rules defining H establishes that no rule textually 24 Needed Computations Shortcutting Needed Steps preceding r in the definition of H rewrites t. Since procedure compile traverses T in post-order, every rule of H originating from a node descendant of n in T textually precedes r in the definition of H. Let q = σ (π|o ). For each constructor symbol c/n of R of the sort of π|o , there is a rule of H with argument π[c(x1 , . . . , xn )]|o , where x1 , . . . , xn are fresh variables, and this rule textually precedes r in the definition of H. Therefore, the label of q is not a constructor symbol, otherwise this rule would be applied to t instead of r. Since the step of t is innermost, q cannot be labeled by H either. Thus, the only remaining possibility is that q is labeled by an operation. We now prove that q is needed for t. If n1 and n2 are disjoint nodes (neither is an ancestor of the other) of T , then the patterns of n1 and n2 are not unifiable. This is because they have different constructors symbols at the node of the inductive variable of the closest (deepest) common ancestor. Thus, since t = σ (π), only a rule of R stored in a rule node of T below n can rewrite (a descendant of) t at the root, if any such a rule exists. All these rules have a constructor symbol at the node matched by o, whereas t has an operation symbol at q, the node matched by o. Therefore, t cannot be reduced (hence reduced to a head-constructor form) unless t|q is reduced to a head-constructor form. Thus, q is needed for t. Example 5. The situation depicted by the previous lemma can be seen in the evaluation of t = ([1]++[2])++[3]. According to (4), H(t) → H(H([1]++[2])++[3]). The argument of the inner application of H is both operation-rooted and needed for t. Lemma 5. Let R be an inductively sequential system and H the head function of CR . Let t be an operation-rooted expression of R and let A denote an innermost finite or infinite computation H(t) = e0 → e1 → . . . in CR . 1. For every index i in A, E (ei ) → E (ei+1 ) in R. = 2. If A terminates (it neither aborts nor is infinite) in an expression u, then u is a head-constructor form of R. Proof. Claim 1: Let l → r be the rule of H applied in the step ei → ei+1 . There are 3 cases for the origin of l → r. If l → r originates from statement 04 of compile, then E (ei ) = E (ei+1 ) and the claim holds. 
Otherwise l → r originates from one of statements 08, 10, 14 or 15. In all these cases, a subexpression of ei of the form H(w) is replaced by either H(u) (statements 08 and 15) or u (statements 10 and 14), in which w is an instance of the left-hand side of a rule of R and u is the corresponding right-hand side. Thus, in this case too, the claim holds. Claim 2: If A aborts or does not terminate, the claim is vacuously true. So, consider the last step of A. This step cannot originate from the application of a rule that places H at the root of the contractum, since another step would become available. Hence the rule of the last step is generated by statement 10 or 14 of procedure compile. In both cases, the contractum is a head constructor form. If A denotes a computation H(t) = e0 → e1 → . . . in CR , then, by Lemma 5, we denote E (e0 ) → E (e1 ) → . . . with E (A) and—with a slight abuse—we regard it as a computation in R. Some expression of E (A) may be a repetition of the previous one, rather than the result of a rewrite step. However, it is more practical to silently ignore these duplicates than filtering them out at the expenses of a more complicated definition. We will be careful to avoid an infinite repetition of the same expression. We extend the above viewpoint to computations of N(t) in CR , where t is any expression of R. = = Theorem 1. Let R be an inductively sequential system and H the head function of CR . Let t be an operation-rooted expression of R and let A denote an innermost finite or infinite computation H(t) = e0 → e1 → . . . in CR . Every step of E (A) is needed. S. Antoy, J. Johannsen, & S. Libby 25 Proof. We prove that for every index i such that ei is a state of A, every argument of an application of H in ei is needed for E (ei ). Preliminarily, we define a relation “≺” on the nodes of the states of E (A) as follows. Let p and q be nodes of states E (ei ) and E (e j ) of E (A) respectively. We define p ≺ q iff i < j or i = j and the expression at q is a proper subexpression of the expression at p in E (ei ). Relation “≺” is a well-founded ordering with minimum element the root of t. The proof of the theorem is by induction on “≺”. Base case: Directly from the definition of “need”, since t is rooted by an operation of R. Induction case: Let q be the root of the argument of an application of H in e j for j > 0. We distinguish whether q is the root of the argument of an application of H in e j−1 . If it is, then the claim is a direct consequence of the induction hypothesis. If it is not, e j−1 → e j is an application of a rule r generated by one of the statements 04, 08 or 15 of procedure compile. For statement 04, there is a node p of E (e j ) that by the induction hypothesis is needed for E (e j ) and matches the pattern π of the branch node of a definitional tree from which rule r originates. Let q be the node of the subexpression of e j rooted by p matched by π at o. By Lemma 4, q is needed for p. Since p is needed for E (e j ), by Lemma 2, q is needed for E (e j ) and the claim holds. For statements 08 and 15, q is the root of the contractum of the redex matched by r which by the induction hypothesis is needed for E (e j−1 ). Node q is still labeled by an operation, hence it is needed for E (e j ) directly by the definition of “need”. Corollary 1. Let R be an inductively sequential system. Let t be an expression of R and let A denote an innermost finite or infinite computation N(t) = e0 → e1 → . . . in CR . Every step of E (A) is needed. Proof. 
Operation N of CR applied to an expression t of R applies operation H to every outermost operation-rooted subexpression of t. All these expressions are needed by Def. 1. The claim is therefore a direct consequence of Th. 1. Corollary 2. Let R be an inductively sequential system. For all expressions t and constructor forms u of ∗ ∗ R, t → u in R if, and only if, N(t) → u in CR modulo a renaming of nodes. Proof. Let A denote some innermost computation of N(t). Observe that if A terminates in a constructor form u of R, then every innermost computation of N(t) terminates in u because the order of the reductions is irrelevant. Therefore, we consider whether A terminates normally. Case 1: A terminates normally. If ∗ ∗ N(t) → u, then by Lemma 5, point 1, t → u. Case 2: A does not terminate normally. We consider whether A aborts. Case 2a: A aborts. Suppose N(t) = e0 → e1 → . . . → ei , and the step of ei reduces a redex r to “abort”. By Theorem 1, r is needed for ei , but there is no rule in R that reduces r, hence t has no constructor form. Case 2b: A does not terminates. Every step of E (A) is needed. The complete tree unraveling [9, Def. 13.2.9] of the rules of R and the states of E (A), gives an orthogonal term rewriting system and a computation of the unraveled t. Since redexes are innermost, in this computation an infinite number of needed redexes are reduced. The hypernormalization of the needed strategy [9, Sect. 9.2.2] shows that hence t has no constructor form. The object code CR for a rewrite system R is subjectively simple. Since innermost reductions suffice for the execution, operations H and N can be coded as functions that take their argument by-value. This is efficient in most programming languages. Corollary 2, in conjunction with Theorem 1, shows that CR is a good object code: it produces the value of an expression t when t has such value, and it produces this value making only steps that must be made by any rewrite computation. One could infer that there cannot be a substantially better object code, but this is not true. The next section discusses why. 26 5 Needed Computations Shortcutting Needed Steps Transformation We transform the object code to avoid totally, or partially, constructing certain contracta. The transformation consists of two phases. The first phase replaces certain rules of H. Let r be a rule of H in which H is recursively applied to a variable, say x, as in the third rule of (4). Rule r is replaced by the set Sr of rules obtained as follows. A rule rf is in Sr , iff rf is obtained from r by instantiating x with f (x1 , . . . xn ), where f /n is an operation of R, x1 , . . . xn are fresh variables, and the sorts of f (x1 , . . . xn ) and x are the same. If a rule in Sr still applies H to another variable, it is again replaced in the same way. Example 6. The following rule originates from instantiating y for “++” in the third rule of (4). H([]++(u++v)) = H(u++v) (10) The first phase of the transformation ensures that H is always applied to an expression rooted by some operation f of R. The second phase introduces, for each operation f of R, a new operation, denoted H f . This operation is the composition of H with f , and then replaces every occurrence of the composition of H with f with H f . Example 7. The second phase transforms (10) into: H++ ([],u++v) = H++ (u,v) (11) After the second phase, operation H can be eliminated from the object code since it is no longer invoked. 
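The effect of the object code and of its transformation is easiest to see on the running example. The following standalone Haskell sketch is our own transcription (the datatype Expr, the integer constructor Lit, and the final catch-all equations are additions, and well-sorted arguments are assumed): h and n transcribe the rules of (4) and (6), while hCat plays the role of the specialised operation H_++ obtained after the two phases, with (11) as its third equation.

```haskell
-- Our encoding of the object code for "++" and of its transformed version.
data Expr = Nil | Cons Expr Expr | Cat Expr Expr | Lit Int
  deriving Show

-- H for "++": one equation per rule of (4), tried in textual order.
h :: Expr -> Expr
h (Cat Nil Nil)         = Nil                    -- H([]++[])     = []
h (Cat Nil (Cons y ys)) = Cons y ys              -- H([]++(y:ys)) = (y:ys)
h (Cat Nil y)           = h y                    -- H([]++y)      = H(y)
h (Cat (Cons x xs) y)   = Cons x (Cat xs y)      -- H((x:xs)++y)  = x:(xs++y)
h (Cat x y)             = h (Cat (h x) y)        -- H(x++y)       = H(H(x)++y)
h e                     = e                      -- head-constructor forms: nothing to do

-- N, as in (6).
n :: Expr -> Expr
n Nil         = Nil
n (Cons x xs) = Cons (n x) (n xs)
n (Lit k)     = Lit k
n e@(Cat _ _) = n (h e)

-- Transformed object code: the composition of H with "++" becomes hCat; its
-- last equation no longer allocates a node labeled "++" (compare with the last
-- equation of h).  The third equation is (11).
hCat :: Expr -> Expr -> Expr
hCat Nil         Nil         = Nil
hCat Nil         (Cons y ys) = Cons y ys
hCat Nil         (Cat u v)   = hCat u v              -- rule (11)
hCat (Cons x xs) y           = Cons x (Cat xs y)
hCat (Cat u v)   y           = hCat (hCat u v) y     -- phase-1 instance of the last rule of (4)
hCat x           y           = Cat x y               -- unreachable for well-sorted arguments

nT :: Expr -> Expr
nT Nil         = Nil
nT (Cons x xs) = Cons (nT x) (nT xs)
nT (Lit k)     = Lit k
nT (Cat x y)   = nT (hCat x y)

main :: IO ()
main = do
  print (n  (Cat (Cons (Lit 1) Nil) (Cons (Lit 2) Nil)))   -- mirrors computation (8)
  print (nT (Cat (Cons (Lit 1) Nil) (Cons (Lit 2) Nil)))   -- same value, fewer allocations
```

Both calls print the representation of the list 1:2:[]; the transformed version reaches it without allocating the operation-rooted node that the last equation of h builds and immediately pattern matches.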
We denote the transformed CR with TR and the outcome of the first phase on CR with CR0 . The mapping τ, from expressions of CR to expressions of TR , formally defines the transformation:  if t = H( f (t1 , . . .tn )); H f (τ(t1 ), . . . τ(tn )), if t = s(t1 , . . .tn ), with s symbol of R; (12) τ(t) = s(τ(t1 ), . . . τ(tn )),  v, if t = v, with v variable. TR is more efficient than CR because, for any operation f of R, the application of H f avoids the allocation of a node labeled by f . This node is also likely to be pattern matched later. Example 8. Consider the usual length–of–a–list operation: length [] = 0 length (-:xs) = 1+length xs The compilation of (13), where we omit rules irrelevant to the point we are making, produces: (13) H(length([])) = 0 H(length(-:xs)) = H(1+length(xs)) (14) ··· The transformation of (14), where again we omit rules irrelevant to the point we are making, produces: Hlength ([]) = 0 Hlength (-:xs)) = H+ (1,length(xs)) (15) ··· Below, we show the traces of a portion of the computations of N(length [7]) executed by CR (left) and TR (right), where the number 7 is an irrelevant value. The rules of “+” are not shown. Intuitively, they evaluate the arguments to numbers, and then perform the addition. S. Antoy, J. Johannsen, & S. Libby N(H(length [7]) → N(H(1+length []) → N(H(1+H(length [])) → N(H(1+0)) → N(1) →1 27 N(Hlength ([7]) → N(H+ (1,length []) → N(H+ (1,Hlength ([]))) → N(H+ (1,0)) → N(1) →1 CR constructs the expression rooted by the underlined occurrence of “+”, and later pattern matches it. The same expression is neither constructed nor pattern matched by TR . The transformation increases the size of a program. Certain rules are replaced by sets of rules. The number of rules in a replacing set is the number of constructors of some type. A coarse upper bound of the size of the transformed program is a constant factor of the size of the original program. Modern computers have gigabytes of memory. We believe that the growth in size could become a problem only in extreme cases, and likely would not be the most serious problem in those cases. 6 Transformation Properties We show that both phases of the transformation described in the previous section preserve the object code computations. Lemma 6. Let R be an inductively sequential system. Every step of CR is a step of CR0 and vice versa, modulo a renaming of nodes. Proof. Every rule of CR0 is an instance of a rule of CR . Hence every step of CR0 is a step of CR . For the converse, let t → u be a step of CR where some rule r is applied. It suffices to consider the case in which t is the redex and the rule r applied in the step is not in CR0 . Let v be the variable “wrapped” by H in r. Rule r is output by statement either 04 or 15 of procedure compile. We show that in both cases the match of v, say s, is an operation-rooted subexpression of t. If r is output by statement 04, this property is ensured by Lemma 4. If r is output by statement 15, and the match of v were constructor-rooted, then some rule output by statement 14 of procedure compile, which textually precedes r and is tried first, would match t. Therefore, let f /n be the root of s. By the definition of phase 1 of the transformation, rule r[ f (x1 , . . . xn )/v] is in CR0 . Therefore, modulo a renaming of nodes, t → u in CR0 Corollary 3. Let R be an inductively sequential system. For every operation-rooted expression t and + + head-constructor form u of R, H(t) → u in CR0 if, and only if, τ(H(t)) → u in TR modulo a renaming of nodes. Proof. 
Preliminarily, we show that for any s, H(t) → s in CR0 iff τ(H(t)) → τ(s) in TR . Assume H(t) → s in CR0 . There exists a rule l → r of CR0 and a match (graph homomorphism) σ such that H(t) = σ (l) and s = σ (r). From the definition of phase 2 of the transformation, τ(l) → τ(r) is a rule of TR . We show that this rule reduces τ(H(t)) to τ(s). Since τ is the identity on variables, and σ is the identity on non variables, σ ◦ τ = τ ◦ σ . Thus τ(H(t)) = τ(σ (l)) = σ (τ(l)) → σ (τ(r)) = τ(σ (r)) = τ(s). The converse is similar because there a bijection between the steps of CR0 and TR . Now, we prove the main claim. First, the claim just proved holds also when H(t) is in a context. Then, + + an induction on the length of H(t) → u in CR0 shows that τ(H(t)) → τ(u) in TR . Since by assumption u is an expression of R, by the definition of τ, τ(u) = u. 28 Needed Computations Shortcutting Needed Steps Finally, we prove that object code and transformed object code execute the same computations. + Theorem 2. Let R be an inductively sequential system. For all expressions t and u of R, N(t) → u in CR + if, and only if, N(t) → u in TR . Proof. In the computation of N(t) in CR , by the definition of τ, each computation of H(s) in CR , for some expression s, is transformed into a computation of τ(H(s)) in TR . By Lemma 5, the former ends in a head-constructor form of R. Hence, by Corollary 3, τ(H(s)) ends in the same head-constructor form of + R. Thus, N(t) → u in TR produces the same result. The converse is similar. 7 Benchmarking Our benchmarks use integer values. To accommodate a built-in integer in a graph node, we define a kind of node whose label is a built-in integer rather than a signature symbol. An arithmetic operation, such as addition, retrieves the integers labeling its argument nodes, adds them together, and allocates a new node labeled by the result of the addition. Our first benchmark evaluates length(l1 ++ l2 ), where length is the operation defined in (13). In the table below, we compare the same rewriting computation executed by CR and TR . We measure the number of rewrite and shortcut steps executed, the number of nodes allocated, and the number of node labels compared by pattern matching. The ratio between the execution times of TR and CR varies with the implementation language, the order of execution of some instructions, and other code details that would seem irrelevant to the work being performed. Therefore, we measure quantities that are language and code independent. The tabular entries are in units per 10 rewrite steps of CR , and are constant functions of this value except for very short lists. For lists of one million elements, the number of rewrite steps of CR is two million. length (l1 ++ l2 ) CR TR OR rewrite steps 10 6 6 shortcut steps 0 4 4 node allocations 20 16 12 node matches 40 26 18 The column labeled OR refers to object code that further shortcuts needed steps using the same idea behind the transformation. For example, in the second rule of (14), both arguments of the addition in the right-hand side are needed. This information is known at compile-time, therefore the compiler can wrap an application of H around the right operand of “+” in the right-hand side of the rule. H(length(-:xs)) = H(1+H(length(xs))) (16) The composition of H with length is replaced by Hlength during the second phase. 
The resulting rule is: Hlength (-:xs)) = H+ (1,Hlength (xs)) (17) Of course, there is no need to allocate a node for expression 1, the first argument of the addition, every time rule (15) or (17) is applied. A single node can be shared by the entire computation. However, since the first argument of the application of H+ is constant, this application can be specialized or partially evaluated as follows: S. Antoy, J. Johannsen, & S. Libby 29 Hlength (-:xs)) = H+1 (Hlength (xs)) (18) The application of rule (18) allocates no node of the contractum. In our benchmarks, we ignore any optimization that is not directly related to shortcutting. Thus CR , TR and OR needlessly allocate this node every time these rules are applied. The number of shortcut steps of TR and OR remain the same because, loosely speaking, OR shortcuts a step that was already shortcut by TR , but the number of nodes allocated and matched further decreases. The effectiveness of TR to reduce node allocations or pattern matching with respect to CR varies with the program and the computation. Our second benchmark computes the n-th Fibonacci number for a relatively large value of n. The program we compile is: fib 0 = 0 fib 1 = 1 (19) fib n = fib (n-1) + fib (n-2) To keep the example simple, we assume that pattern matching is performed by scanning the rules in textual order. Therefore, the last rule is applied only when the argument of fib is neither 0 nor 1. fib (n) CR TR OR rewrite steps 10 8 8 shortcut steps 0 2 2 node allocations 24 22 10 node matches 44 26 16 The tabular entries are in units per 10 rewrite steps of CR and are constant functions of this value except for very small arguments of fib. For n = 32, the number of steps of CR is about 17.5 million. With respect to CR , TR avoids the construction of the root of the right-hand side of the third rule of (19). OR transforms the right-hand side of this rule into: H+ (Hfib (H- (n,1)),Hfib (H- (n,2))) (20) since every node that is not labeled by the variable or the constants 1 and 2 is needed. In this benchmark, there is also no need to allocate a node for either 1 or 2 every time (20) is constructed/executed. With this further optimization, the step would allocate no new node for the contractum, and the relative gains of our approach would be even more striking. 8 Functional Logic Programming Our work is motivated by the implementation of functional logic languages. The graph rewriting systems modeling functional logic programs are a superset of the inductively sequential ones. A minimal extension consists of a single binary operation, called choice, denoted by the infix symbol “?”. An expression x ? y reduces non-deterministically to x or y. There are approaches [5, 6] for rewriting computations involving the choice operation that produce all the values of an expression without ever making a nondeterministic choice. These approaches are ideal candidates to host our compilation scheme. Popular functional logic languages allow variables, called extra variables, which occur in the righthand side of a rewrite rule, but not in the left-hand side. Computations with extra variables are executed by narrowing instead of rewriting. Narrowing simplifies encoding certain programming problems into 30 Needed Computations Shortcutting Needed Steps programs [4]. Since our object code selects rules in textual order, and instantiates some variables of the rewrite system, narrowing with our object code is not straightforward. 
However, there is a technique [7] that transforms a rewrite system with extra variables into an equivalent system without extra variables. Loosely speaking, “equivalent”, in this context, means a system with the same input/output relation. In conjunction with this technique, our compiler generates code suitable for narrowing computations.

9 Related Work

The redexes that we reduce are needed to obtain a constructor-rooted expression, therefore they are closely related to the notion of root-neededness of [22]. However, we are interested only in normal forms that are constructor forms. In contrast to a computation according to [22], our object code may abort the computation of an expression e if no constructor normal form of e is reachable, even if e has a needed redex. This is a very desirable property in our intended domain of application since it saves useless rewrite steps, and in some cases may lead to the termination of an infinite computation.

Machines for graph reduction have been proposed [10, 20] for the implementation of functional languages. While there is a commonality of intent, these efforts differ from ours in two fundamental aspects. Our object code is easily translated into a low-level language like C or assembly, whereas these machines have instructions that resemble those of an interpreter. There is no explicit notion of need in the computations performed by these machines. Optimizations of these machines are directed toward their internal instructions, rather than toward the needed steps of a computation by rewriting, a problem that is less dependent on any particular mechanism used to compute a normal form.

Our compilation scheme has similarities with deforestation [24], but is complementary to it. Both anticipate rule application to avoid the construction of expressions that would be quickly taken apart and disposed. This occurs when a function producing one of these expressions is nested within a function consuming the expression. However, our expressions are operation-rooted, whereas in deforestation they are constructor-rooted. These techniques can be used independently of each other and jointly in the same program.

A compilation scheme similar to ours is described in [2]. This effort makes no claims of correctness, of executing only needed steps, or of shortcutting needed steps. Transformations of rewrite systems for compilation purposes are described in [12, 19]. These efforts are more operational than ours. A compilation with the same intent as ours is described in [8]. The compilation scheme is different. This effort does not claim to execute only needed steps, though it shortcuts some of them. Shortcutting is obtained by defining ad-hoc functions, whereas we present a formal, systematic way through specializations of the head function.

10 Conclusion

Our work addresses rewriting computations for the implementation of functional logic languages. We presented two major results. The first result is a compilation scheme for inductively sequential graph rewriting systems. The object code generated by our scheme has very desirable properties: it is simple, consisting of only two functions that take arguments by value; it is theoretically efficient, executing only needed steps; and it is complete, in that it produces the value, when it exists, of any expression. The two functions of the object code are easily generated from the signature of the rewrite system and a traversal of the definitional trees of its operations. S. Antoy, J. Johannsen, & S.
Libby 31 The second result is a transformation of the object code that shortcuts some rewrite steps. Shortcutting avoids partial or total construction of the contractum of a step by composing one function of the object code with one operation symbol of the rewrite system signature. This avoids the construction of a node and in some cases and its subsequent pattern matching. Benchmarks show that the savings in node allocation and matching can be substantial. Future work will rigorously investigate the extension of our compilation technique to rewrite systems with the choice operation and extra variables, as discussed in Sect. 8, as well as systematic opportunities to shortcut needed steps in situations similar to that discussed in Sect. 7. Acknowledgments This material is based upon work partially supported by the National Science Foundation under Grant No. CCF-1317249. This work was carried out while the second author was visiting the University of Oregon. The second author wishes to thank Zena Ariola for hosting this visit. The authors wish to thank Olivier Danvy for insightful comments and the anonymous reviewers for their careful reviews. References [1] S. Antoy (1992): Definitional Trees. In H. Kirchner & G. Levi, editors: Proceedings of the Third International Conference on Algebraic and Logic Programming, Springer LNCS 632, Volterra, Italy, pp. 143–157. Available at http://dx.doi.org/10.1007/bfb0013825. [2] S Antoy (1993): Normalization by Leftmost Innermost Rewriting. In: Proceedings of the Third International Workshop on Conditional Term Rewriting Systems, Springer-Verlag, London, UK, pp. 448–457. Available at http://dx.doi.org/10.1007/3-540-56393-8_36. [3] S. Antoy (2005): Evaluation Strategies for Functional Logic Programming. Journal of Symbolic Computation 40(1), pp. 875–903. Available at http://dx.doi.org/10.1016/j.jsc.2004.12.007. [4] S. Antoy (2010): Programming with Narrowing. Journal of Symbolic Computation 45(5), pp. 501–522. Available at http://dx.doi.org/10.1016/j.jsc.2010.01.006. [5] S. Antoy (2011): On the Correctness of Pull-Tabbing. TPLP 11(4-5), pp. 713–730. Available at http: //dx.doi.org/10.1017/S1471068411000263. [6] S. Antoy, D. Brown & S. Chiang (2006): Lazy Context Cloning for Non-deterministic Graph Rewriting. In: Proceedings of the 3rd International Workshop on Term Graph Rewriting, Termgraph’06, Vienna, Austria, pp. 61–70. Available at http://dx.doi.org/10.1016/j.entcs.2006.10.026. [7] S. Antoy & M. Hanus (2006): Overlapping Rules and Logic Variables in Functional Logic Programs. In: Proceedings of the Twenty Second International Conference on Logic Programming, Springer LNCS 4079, Seattle, WA, pp. 87–101. Available at http://dx.doi.org/10.1007/11799573_9. [8] S. Antoy & A. Jost (2013): Are needed redexes really needed? In: Proceedings of the 15th Symposium on Principles and Practice of Declarative Programming, PPDP ’13, ACM, New York, NY, USA, pp. 61–71. Available at http://doi.acm.org/10.1145/2505879.2505881. [9] M. Bezem, J. W. Klop & R. de Vrijer (eds.) (2003): Term Rewriting Systems. Cambridge University Press. Available at http://dx.doi.org/10.1145/979743.979772. [10] G. L. Burn, S. L. Peyton Jones & J. D. Robson (1988): The Spineless G-machine. In: Proceedings of the 1988 ACM Conference on LISP and Functional Programming, ACM, pp. 244–258. Available at http: //doi.acm.org/10.1145/62678.62717. [11] R. Echahed & J. C. Janodet (1997): On constructor-based graph rewriting systems. Technical Report 985-I, IMAG. 
Available at ftp://ftp.imag.fr/pub/labo-LEIBNIZ/OLD-archives/PMP/ c-graph-rewriting.ps.gz. 32 Needed Computations Shortcutting Needed Steps [12] W. Fokkink & J. van de Pol (1997): Simulation as a correct transformation of rewrite systems. In: In Proceedings of 22nd Symposium on Mathematical Foundations of Computer Science, LNCS 1295, Springer, pp. 249–258. Available at http://dx.doi.org/10.1.1.41.8118. [13] J. C. González Moreno, F. J. López Fraguas, M. T. Hortalá González & M. Rodrı́guez Artalejo (1999): An Approach to Declarative Programming Based on a Rewriting Logic. The Journal of Logic Programming 40, pp. 47–87. Available at http://dx.doi.org/10.1016/S0743-1066(98)10029-8. [14] M. Hanus (1994): The Integration of Functions into Logic Programming: From Theory to Practice. Journal of Logic Programming 19&20, pp. 583–628. Available at http://dx.doi.org/10.1.1.226.8638. [15] M. Hanus (2013): Functional Logic Programming: From Theory to Curry. In: Programming Logics - Essays in Memory of Harald Ganzinger, Springer LNCS 7797, pp. 123–168. Available at http://dx.doi.org/ 10.1007/978-3-642-37651-1_6. [16] M. Hanus, S. Lucas & A. Middeldorp (1998): Strongly sequential and inductively sequential term rewriting systems. Information Processing Letters 67(1), pp. 1–8. Available at http://dx.doi.org/10.1016/ S0020-0190(98)00016-7. [17] M. Hanus (ed.) (2012): Curry: An Integrated Functional Logic Language (Vers. 0.8.3). Available at http: //www.curry-language.org. [18] G. Huet & J.-J. Lévy (1991): Computations in orthogonal term rewriting systems. In J.-L. Lassez & G. Plotkin, editors: Computational logic: essays in honour of Alan Robinson, MIT Press, Cambridge, MA. Part I, pp. 395–414 and Part II, pp. 415–443. [19] J. F. T. Kamperman & H. R. Walters (1996): Simulating TRSs by Minimal TRSs a Simple, Efficient, and Correct Compilation Technique. Technical Report CS-R9605, CWI. [20] R. Kieburtz (1985): The G-machine: A fast, graph-reduction evaluator. In: Functional Programming Languages and Computer Architecture, LNCS 201, Springer, pp. 400–413. Available at http://dx.doi.org/ 10.1007/3-540-15975-4_50. [21] F. J. López-Fraguas, E. Martin-Martin, J. Rodrı́guez-Hortalá & J. Sánchez-Hernández (2014): Rewriting and narrowing for constructor systems with call-time choice semantics. TPLP 14(2), pp. 165–213. Available at http://dx.doi.org/10.1017/S1471068412000373. [22] A. Middeldorp (1997): Call by Need Computations to Root-stable Form. In: Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’97, ACM, New York, NY, USA, pp. 94–105. Available at http://dx.doi.org/10.1145/263699.263711. [23] M. J. O’Donnell (1977): Computing in Systems Described by Equations. Springer LNCS 58. Available at http://dx.doi.org/10.1007/3-540-08531-9. [24] Philip Wadler (1988): Deforestation: Transforming Programs to Eliminate Trees. Theor. Comput. Sci. 73(2), pp. 231–248. Available at http://dx.doi.org/10.1016/0304-3975(90)90147-A.
1 Caching with Partial Adaptive Matching Jad Hachem, Nikhil Karamchandani, Sharayu Moharir, Suhas Diggavi arXiv:1705.03852v3 [] 12 Dec 2017 Abstract We study the caching problem when we are allowed to match each user to one of a subset of caches after its request is revealed. We focus on non-uniformly popular content, specifically when the file popularities obey a Zipf distribution. We study two extremal schemes, one focusing on coded server transmissions while ignoring matching capabilities, and the other focusing on adaptive matching while ignoring potential coding opportunities. We derive the rates achieved by these schemes and characterize the regimes in which one outperforms the other. We also compare them to information-theoretic outer bounds, and finally propose a hybrid scheme that generalizes ideas from the two schemes and performs at least as well as either of them in most memory regimes. I. I NTRODUCTION In modern content distribution networks, caching is a technique that places popular content at nodes close to the end users in order to reduce the overall network traffic. In [3], a new “coded caching” technique was introduced for broadcast networks. This technique places different content in each cache, and takes advantage of these differences to send a common coded broadcast message to multiple users at once. This was shown not only to greatly reduce the network load in comparison with traditional uncoded techniques, but also to be approximately optimal in general. In [3] as well as many other works in the literature [4], [5], [6], [7], [8], a key assumption is that users are pre-fixed to specific caches; see also [9], [10] for a survey of related works. More precisely, each user connects to a specific cache before it requests a file from the content library. This assumption was relaxed in [11], [12] where the system is allowed to choose a matching of users to caches after the users make their requests, while respecting a per-cache load constraint. In particular, after each user requests a file, any user could be matched to any cache as long as no cache had more than one user connected to it. In this adaptive matching setup, it was shown under certain request distributions that a coded delivery, while approximately optimal in Shorter versions of the results in this paper have appeared in IEEE ISIT 2017 [1] and IEEE ITW 2017 [2]. 2 the pre-fixed matching case, is unnecessary. Indeed, it is sufficient to simply store complete files in the caches, and either connect a user to a cache containing its file or directly serve it from the server. The above dichotomy indicates a fundamental difference between the system with completely pre-fixed matching and the system with full adaptive matching. In this paper, we take a first step towards bridging the gap between these two extremes. Our contributions are the following. • We introduce a “partial adaptive matching” setup in which users can be matched to any cache belonging to a subset of caches. This can arise when, for instance, only some caches are close enough to a user to ensure a potential reliable connection. To make matters simple, we assume that the caches are partitioned into clusters of fixed and equal size, and each user can be matched to any cache within a single cluster, as illustrated in Fig. 1 and described precisely in Section II. 
This setup generalizes both setups considered above: on one extreme, if each cluster consisted of only a single cache, then the setup becomes the pre-fixed matching setup of [3]; on the other extreme, if all caches belonged to a single cluster, then we get back the total adaptive matching setup from [11], [12]. • Through this setup, we observe a dichotomy between coded caching and adaptive matching by considering two schemes: one scheme exclusively uses coded caching without taking advantage of adaptive matching; the other scheme focuses only on adaptive matching and performs uncoded delivery. We show that the scheme prioritizing coded caching is more efficient when both the cache memory and the cluster size are small, while the scheme prioritizing adaptive matching is more efficient in the opposite case (see Theorems 3 and 8). • We derive information-theoretic cut-set bounds on the optimal expected rate. These bounds show that the coded caching scheme is approximately optimal in the small memory regime and adaptive matching is approximately optimal in the large memory regime (see Theorem 4). • For a subclass of file popularities, we propose a new hybrid scheme that combines ideas from both the coded delivery and the adaptive matching schemes, and that performs better than both in most memory regimes (see Theorem 5). In this paper, we consider the relevant case when some files are much more popular than others. In particular, we consider a class of popularity distributions where file popularity follows a power law, specifically a Zipf law [13]. This includes the special case when all files are equally 3 Server W1 · · · WN files cache cluster users Fig. 1. Illustration of the setup considered in this paper. The squares represent K = 12 caches, divided into three clusters of size d = 4 caches each, and the circles represent users at these clusters. Dashed arrows represent the matching phase, and solid arrows the delivery phase. Unmatched users are in gray. popular, but also encompasses more skewed popularities. We observe a difference in the behavior of the schemes for two subclasses of Zipf distributions, namely when the Zipf popularities are heavily skewed (the steep Zipf case) and when they are not heavily skewed (the shallow Zipf case). The rest of this paper is organized as follows. Section II precisely describes the problem setup and introduces the clustering model. Section III provides a preliminary discussion that prepares for the main results. We divide the main results into Section IV, which focuses on the shallow Zipf case, and Section V, which focuses on the steep Zipf case. Detailed proofs are given in the appendices. II. P ROBLEM S ETUP Consider the system depicted in Fig. 1. A server holds N files W1 , . . . , WN of size F bits each. There are K caches of capacity M F bits, equivalently M files, each. The caches are divided into K/d clusters of size d each, where d is assumed to divide K. For every n ∈ {1, . . . , N } and every c ∈ {1, . . . , K/d}, there are un (c) users accessing cluster c and requesting file Wn . We refer to the numbers {un (c)}n,c as the request profile and will often represent the request profile as a vector u for convenience. In this paper, we focus on the case where the numbers un (c) are independent Poisson random variables with parameter ρdpn , where ρ ∈ (0, 1/2) is some fixed constant and p1 , . . . , pN is the popularity distribution of the files, with pn ≥ 0 and p1 + · · · + pN = 1. 
Thus pn represents the probability that a fixed user will request file Wn . We particularly focus on the case where the files follow a Zipf law, i.e., pn ∝ n−β where β ≥ 0 is the Zipf parameter. Note that the expected total number of users in the system is ρK. 4 As with standard coded caching setups, a placement phase occurs before the request profile is revealed during which content related to the files is placed in the caches, and a delivery phase occurs after the request profile is known during which a broadcast message is sent to all users to satisfy their demands. In our setup, in addition to the usual placement and delivery phases, there is an intermediate phase that we call the matching phase. The matching phase occurs before the delivery phase but after the request profile has been revealed. During the matching phase, each user is matched to a single cache within its cluster, with the constraint that no more than one user can be matched to a cache. If there are fewer caches than users in one cluster, then some users will be unmatched. For a given request profile u, let Ru denote the rate of the broadcast message required to deliver to all users their requested files. For any cache memory M , our goal is to minimize the expected rate R̄ = Eu [Ru ]. Specifically, we are interested in R̄∗ defined as the smallest R̄ over all possible strategies. Furthermore, we assume that there are more files than caches, i.e., N ≥ K, which is the case of most interest. We also, for analytical convenience, focus on the case where the cluster size d grows at least as fast as log K. More precisely, we assume d ≥ [2(1 + t0 )/α] log K, (1) where α = − log(2ρe1−2ρ ) and t0 > 0 is some constant. Note that α > 0. Other than analytical convenience, the reason for such a lower bound on d is that, when d is too small, the Poisson request model adopted in this paper is no longer suitable. Indeed, if for example d = 1, then with high probability a significant fraction of users will not be matched to any cache, leading to a rate proportional to K even with infinite cache memory. Finally, we will frequently use the helpful notation [x]+ = max{x, 0} for all real numbers x. III. P RELIMINARY D ISCUSSION The setup we consider is a generalization of the pre-fixed matching setup (when d = 1) and the maximal adaptive matching setup (when d = K). From the literature, we know that different strategies are required for these two extremes: one using a coded delivery when d = 1, and one using adaptive matching when d = K.1 Therefore, there must be some transition in the suitable strategy as the cluster size d increases from one to K. 1 The request model used in the literature when d = 1 is usually not the Poisson model used here. Instead, a multinomial model is used in which the total number of users is always fixed. As mentioned at the end of Section II, the Poisson model is not suitable in that case. However, the results from the literature are still very relevant to this paper. 5 The goal of this paper is to study this transition. To do that, we first adapt and apply the strategies suitable for the two extremes to our intermediate case. These strategies will exclusively focus on one of coded delivery and adaptive matching, and we will hence refer to them as “Pure Coded Delivery” (PCD) and “Pure Adaptive Matching” (PAM). 
In particular, PCD will perform an arbitrary matching and apply the coded caching scheme from [5], [6], whereas PAM will apply a matching scheme similar to [11], [12] independently on each cluster and serve unmatched requests directly, ignoring any coding opportunities. We then compare PCD and PAM in various regimes and evaluate them against information-theoretic outer bounds. Regardless of the value β of the Zipf parameter, we find that PCD tends to perform better than PAM when the cache memory M is small, while PAM is superior to PCD when M is large. The particular threshold of M where PAM overtakes PCD obeys an inverse relation with the cluster size d. Thus when d is small, PCD is the better choice for most memory values, whereas when d is large, PAM performs better for most memory values. This observation agrees with previous results on the two extremes d = 1 and d = K, and it is illustrated in Figs. 2 and 4 and made precise in the theorems that follow.

While most of the analysis assumes general values for K, N ≥ K, d (except for (1)), and M, it will nevertheless be useful to sometimes compare PCD and PAM under the restriction that these parameters all scale as powers of K. This can provide some high-level insights into the different regimes where PCD or PAM dominates, while ignoring sub-polynomial factors such as log N, thus simplifying the analysis. During this polynomial-scaling-with-K analysis (which we will call poly-K analysis as a shorthand) we will assume that

N = K^ν, ν ≥ 1;       (2a)
d = K^δ, δ ∈ (0, 1];  (2b)
M = K^µ, µ ∈ [0, 1].  (2c)

We stress again that our results do not make these assumptions, but that this representation provides a useful way to understand the results. To proceed, we will separately consider two regimes for the Zipf popularity: a shallow Zipf case in which β ∈ [0, 1), and a steep Zipf case where β > 1. (The case β = 1 is a special case that usually requires separate handling. We skip it in this paper, and analyzing it is part of our ongoing work.)

IV. THE SHALLOW ZIPF CASE (β < 1)

A. Comparing PCD and PAM when β < 1

The next theorem gives the rate achieved by PCD.

Theorem 1. When β ∈ [0, 1), the PCD scheme can achieve for all M an expected rate of

R̄^PCD = min{ ρK, [N/M − 1]^+ + K^{−t0}/√(2π) }.

Theorem 1 can be proved by directly applying any suitable coded caching strategy [5], [6], [7] along with an arbitrary matching phase. The additional K^{−t0} term represents the expected number of users that will not be matched to any cache and must hence be served directly from the server. The derivation of this term is done in Lemma 1 in Appendix A. The next theorem gives the rate achieved by PAM.

Theorem 2. When β ∈ [0, 1), the PAM scheme can achieve an expected rate of

R̄^PAM = ρK                                 if M = O(N/d);
R̄^PAM = min{ ρK, KM e^{−zdM/N} }           if M = Ω(N/d),

where z = (1 − β)ρ h((1 + ρ)/(2ρ)) > 0 with h(x) = x log x + 1 − x.

Theorem 2 can be proved using a similar argument to [11]: the idea is to replicate each file across the caches in each cluster, and match each user to a cache containing its requested file. The detailed proof is given in Appendix B. Notice that PAM can achieve a rate of o(1) when dM > Ω(N log N). Recall that we have imposed a service constraint of one user per cache in our setup. If we instead allow multiple users to access the same cache, then it can be shown that a rate of o(1) can be achieved if and only if dM > (1 − o(1))N. Consequently, the cache service constraint increases this memory threshold by at most a logarithmic factor.
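For concrete parameter values it can be convenient to evaluate the two bounds side by side. The helper below is our own aid, not part of the paper: constants hidden by the O(·) and Ω(·) notation are set to one, the Ω(N/d) condition of Theorem 2 is approximated by the test dM ≥ N, and log denotes the natural logarithm.

```haskell
-- Expected rates of Theorems 1 and 2 for concrete parameters (illustrative only).
ratePCD :: Double -> Double -> Double -> Double -> Double -> Double
ratePCD k nFiles m rho t0 =                      -- arguments: K, N, M, rho, t0
  min (rho * k) (max (nFiles / m - 1) 0 + k ** (-t0) / sqrt (2 * pi))

ratePAM :: Double -> Double -> Double -> Double -> Double -> Double -> Double
ratePAM k nFiles d m rho beta                    -- arguments: K, N, d, M, rho, beta
  | d * m < nFiles = rho * k
  | otherwise      = min (rho * k) (k * m * exp (-z * d * m / nFiles))
  where
    hFun x = x * log x + 1 - x                   -- h(x) = x log x + 1 - x
    z      = (1 - beta) * rho * hFun ((1 + rho) / (2 * rho))

-- Example: with K = N = 10^4, d = 100, rho = 0.3, beta = 0.5, t0 = 1,
-- ratePCD 1e4 1e4 m 0.3 1 lies below ratePAM 1e4 1e4 100 m 0.3 0.5 for small m,
-- and the order reverses as m grows, locating the threshold M0 discussed next.
```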
The rates of PCD and PAM are illustrated in Fig. 2 for the β ∈ [0, 1) case. We can see that there is a memory threshold M0, with M0 = Ω(N/d) and M0 = O((N/d) log N), such that PCD performs better than PAM for M < M0 while PAM is superior to PCD for M > M0. Using a poly-K analysis, we can ignore the log N term and obtain the following result, illustrated in Fig. 3.

Fig. 2. Rates achieved by PCD, PAM, and HCM when β ∈ [0, 1), along with information-theoretic lower bounds. HCM is a hybrid scheme described in Section IV-C, and the lower bounds are presented in Appendix D. This plot is not numerically generated but is drawn approximately for illustration purposes.

Fig. 3. The scheme among PCD and PAM that performs better than the other when β ∈ [0, 1), in terms of polynomial scaling in K. Here N = K^ν, d = K^δ, and M = K^µ. Notice that the values in the figure are all independent of β; the behavior is therefore the same for all β < 1.

Theorem 3. When β ∈ [0, 1), and considering only a polynomial scaling of the parameters with K, PCD outperforms PAM in the regime µ ≤ ν − δ, while PAM outperforms PCD in the opposite regime, where N = K^ν, d = K^δ, and M = K^µ.

Note that in some cases PCD and PAM perform equally well, such as when µ = ν. However, these are usually edge cases and most of the regimes in Theorem 3 are such that one scheme strictly outperforms the other. Interestingly, under the poly-K analysis, the memory regime where PAM becomes superior to PCD is the regime where PAM achieves a rate of o(1), for any d.

B. Approximate Optimality

In this section, we compare the achievable rates of PCD and PAM schemes to information-theoretic lower bounds and identify regimes in which PCD or PAM is approximately optimal, where approximate optimality is defined below.

Definition 1 (Approximate Optimality). We say that a scheme is approximately optimal if it can achieve an expected rate R̄ such that R̄ ≤ C · R̄^* + o(1), where R̄^* is the optimal expected rate and C is some constant.

For β ∈ [0, 1), we show the approximate optimality of PCD in the small memory regime and that of PAM in the large memory regime. When M > Ω((N/d) log N), it follows from Theorem 2 that R̄^PAM = o(1), and thus PAM is trivially approximately optimal. The following theorem states the approximate optimality of PCD when M < O(N/d).

Theorem 4. When β ∈ [0, 1) and M < (1 − e^{−1}/2)N/(2d), and for N ≥ 10, the rate achieved by PCD is within a constant factor of the optimum,

R̄^PCD / R̄^* ≤ C ≜ 96 / ((1 − β)ρ(1 − e^{−1}/2)^2).

Note that the constant C is independent of K, d, N, and M. Theorem 4 can be proved by first reducing the β ∈ [0, 1) case to a uniform-popularities setup, and then deriving cut-set lower bounds on the optimal expected rate. Proof details are given in Appendix D.

C. A Hybrid Coding and Matching (HCM) Scheme

So far, we have seen that the two memory regimes M < O(N/d) and M > Ω((N/d) log N) require very different schemes: the former requires PCD and the latter requires PAM. The PCD scheme prioritizes coded delivery at the expense of losing the benefits of adaptive matching, while the PAM scheme leverages adaptive matching but limits itself to an uncoded delivery. In this section, we introduce a universal scheme that generalizes ideas from both PCD and PAM. It is a hybrid scheme that combines the benefits of adaptive matching within clusters with the coded caching gains across clusters.
For this reason, we call this scheme Hybrid Coding and Matching (HCM). The main idea of HCM is to partition files and caches into colors, and then apply a coded caching scheme within each color while performing adaptive matching across colors. More precisely, each color consists of a subset of files as well as a subset of the caches of each cluster. When a user requests a file, the user is matched to an arbitrary cache in its cluster, as long as the cache has the same color as the requested file. For each color, a coded transmission is then performed to serve all the matched users requesting a file from said color. Unmatched users are served directly by the server. This allows us to take advantage of adaptive matching within each cluster as well as obtain coded caching gains across the clusters. The rate achieved by HCM is given in the following theorem. It is illustrated in Fig. 2 along with the rates of PCD and PAM for comparison.

Theorem 5. For any β ∈ [0, 1), HCM can achieve a rate of

R̄^HCM = min{ ρK, N/M − χ + K^{−t}/√(2π) }   if M ≤ ⌊N/χ⌋;
R̄^HCM = K^{−t}/√(2π)                         if M ≥ ⌈N/χ⌉,

where χ = ⌊αd/(2(1 + t) log K)⌋, for any t ∈ [0, t0].

While the expression for R̄^HCM given in the theorem is rigorous, we can approximate it here for clarity as

R̄^HCM ≈ min{ ρK, [N/M − Θ(d/log K)]^+ } + o(1).

The proof of Theorem 5 is given in detail in Appendix E, where we provide a rigorous explanation of the HCM scheme. We will next compare HCM to PCD and PAM. Notice from Fig. 2 that HCM is strictly better than PCD for all memory values. In fact, there is an additive gap between them of about d/log K for most memory values, and an arbitrarily large multiplicative gap when M > (N/d) log K where HCM achieves a rate of o(1). Consequently, HCM is approximately optimal in the regime where PCD is, namely when M < N/2d. Furthermore, HCM is significantly better than PAM in the M < N/d regime: there is a multiplicative gap of up to about K/d between their rates in that regime. Moreover, HCM achieves a rate of o(1) when M > (N/d) log K. It is thus trivially approximately optimal in that regime, which includes the regime where PAM is.

Fig. 4. Rates achieved by PCD and PAM in the β > 1 case. Again, this plot is not numerically generated but is drawn approximately for illustration purposes.

V. THE STEEP ZIPF CASE (β > 1)

A. Comparing PCD and PAM when β > 1

When β > 1, we restrict ourselves to the case where d is some polynomial in K for convenience. The following theorems give the rates achieved by PCD and PAM, illustrated in Fig. 4.

Theorem 6. When β > 1, the PCD scheme can achieve an expected rate of

R̄^PCD = K^{1/β}                                        if 0 ≤ M < 1;
R̄^PCD = [(KM)^{1/β}/M − 1]^+ + K^{−t0}/√(2π)           if 1 ≤ M < N^β/K;
R̄^PCD = [N/M − 1]^+ + K^{−t0}/√(2π)                    if M ≥ N^β/K.

Much like Theorem 1, Theorem 6 follows from directly applying the coded caching strategy from [5], [6]. Again, the K^{−t0} term represents the expected number of unmatched users, derived in Lemma 1 in Appendix A.

Theorem 7. When β > 1, the PAM scheme can achieve an expected rate of

R̄^PAM = O( min{ K^{1/β}, K/(dM)^{β−1} } )   if M = O(N/d);
R̄^PAM = o(1)                                 if M = Ω((N log N)/d).

The proof of Theorem 7, given in Appendix C, follows along the same lines as [12] and involves a generalization from d = K to any polynomial d = K^δ, 0 < δ ≤ 1. The idea is to replicate the files across the caches in the cluster, placing more copies for the more popular files, and match the users accordingly.
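The case distinctions of Theorems 6 and 7 are easy to misread, so the following helper (again our own, with all asymptotic constants set to one and the conditions of the theorems replaced by plain inequalities) spells them out.

```haskell
-- Rates of Theorems 6 and 7 for beta > 1 (illustrative only; asymptotic
-- constants are set to 1 and the O/Ω conditions become plain inequalities).
ratePCDsteep :: Double -> Double -> Double -> Double -> Double -> Double
ratePCDsteep k nFiles m beta t0                  -- arguments: K, N, M, beta, t0
  | m < 1                  = k ** (1 / beta)
  | m < nFiles ** beta / k = max ((k * m) ** (1 / beta) / m - 1) 0 + unmatched
  | otherwise              = max (nFiles / m - 1) 0 + unmatched
  where unmatched = k ** (-t0) / sqrt (2 * pi)

ratePAMsteep :: Double -> Double -> Double -> Double -> Double -> Double
ratePAMsteep k nFiles d m beta                   -- arguments: K, N, d, M, beta
  | m >= nFiles * log nFiles / d = 0             -- o(1) regime of Theorem 7
  | otherwise = min (k ** (1 / beta)) (k / (d * m) ** (beta - 1))

-- Scanning m for fixed K, N, and d reproduces the qualitative behavior of Fig. 4.
```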
11 µ ν νβ − 1 PAM µ=ν−δ (M = N/d) PCD µ = (1 − βδ)/(β − 1) ν−1 δ 0 1 − ν(β − 1) 1/β 1 Fig. 5. The scheme among PCD and PAM that performs better than the other when β > 1 and ν < 1/(β − 1), in terms of polynomial scaling in K. Here N = K ν , d = K δ , and M = K µ . As with the β ∈ [0, 1) case, we notice that PCD is the better choice when M is small, while PAM is the better choice when M is large. In fact, by comparing the rate expressions in Theorems 6 and 7 using a poly-K analysis, we obtain the following theorem describing the regimes for which either of PCD or PAM is superior to the other. The theorem is illustrated in Fig. 5 and proved at the end of this subsection. Theorem 8. When β > 1, and considering only a polynomial scaling of the parameters with K, PCD outperforms PAM in the regime µ ≤ min {ν − δ, (1 − βδ)/(β − 1)} , while PAM outperforms PCD in the opposite regime, where N = K ν , d = K δ , and M = K µ . When comparing Theorems 3 and 8, we notice that the case β > 1 has the added constraint µ < (1 − βδ)/(β − 1) for the regime where PCD is superior to PAM, indicating that there are values of d for which PAM is better than PCD for a larger memory regime under β > 1 as compared to β ∈ [0, 1). This is represented in Fig. 5 by the additional line segment joining points (1 − ν(β − 1), νβ − 1) and (1/β, 0). As β approaches one from above, this line segment tends toward the segment joining points (1, ν − 1) and (1, 0). With it, the regime in which PCD is better than PAM grows until it becomes exactly the regime shown in Fig. 3 for β ∈ [0, 1). In other words, when β > 1 and as β → 1+ , the regimes in which PCD or PAM are respectively the better choice become the same regimes as in the β ∈ [0, 1) case. This seemingly continuous transition suggests that, when β = 1, the system should behave similarly to β ∈ [0, 1), i.e., 12 Fig. 3, at least under a poly-K analysis. Proof of Theorem 8: Recall that we are only focusing on a poly-K analysis. We will define σ PCD and σ PAM to be the exponents of K in R̄PCD and R̄PAM , respectively, i.e., R̄PCD = Θ(K σ PCD ) and similarly for PAM. Our goal is to compare σ PCD to σ PAM . We can break the proof down into two main cases plus one trivial case. It can help the reader to follow these cases in Fig. 5. The trivial case is when the total cluster memory dM is large, specifically µ+δ > min{ν, 1/(β− 1)}. From Theorem 7, the PAM rate is then o(1), hence σ PAM = 0. Therefore, PCD cannot perform better than PAM in this case. In what follows, we assume µ + δ < min{ν, 1/(β − 1)}. We can write the exponents of the rates of PCD and PAM as σ PCD = min {[1 − (β − 1)µ]/β , ν − µ} ; σ PAM = min {1/β , 1 − (β − 1)(δ + µ)} . Notice that we always have σ PCD ≤ [1 − (β − 1)µ]/β ≤ 1/β, and hence it is sufficient to compare σ PCD to the second term in the minimization in σ PAM . In other words, σ PCD < σ PAM ⇐⇒ σ PCD < 1 − (β − 1)(δ + µ). Furthermore, we can write σ PCD as   [1 − (β − 1)µ]/β PCD σ =  ν − µ if µ ≤ νβ − 1; if µ ≥ νβ − 1. For this reason, we split the analysis into a small and a large memory regimes, with the threshold µ ≶ νβ − 1. Large memory: µ > νβ − 1: This case is only possible when ν < 1/(β − 1) because we always have µ ≤ ν. Here, PCD achieves σ PCD = ν − µ. The constraints on µ imply: µ<ν−δ =⇒ 1 − (β − 1)(δ + µ) > 1 − ν(β − 1); µ > νβ − 1 =⇒ ν − µ < ν − (νβ − 1) = 1 − ν(β − 1). Combining the two inequalities yields 1 − (β − 1)(δ + µ) > 1 − ν(β − 1) > ν − µ, and hence σ PCD < σ PAM . 
13 Small memory: µ < νβ − 1: In this case, PCD always achieves σ PCD = [1 − (β − 1)µ]/β. Using some basic algebra, we can show that [1 − (β − 1)µ] /β < 1 − (β − 1)(δ + µ), i.e., σ PCD < σ PAM , if and only if µ < (1 − βδ)/(β − 1). B. Approximate Optimality 1 When β > 1 we know from Theorem 7 that R̄PAM = o(1) is achieved for M > Ω(min{N log N, K β−1 }/d), and thus PAM is trivially approximately optimal in that regime as per Definition 1. When the memory is smaller, proving approximate optimality of PCD is more difficult than in the β ∈ [0, 1) case. Indeed, applying a similar technique here to the shallow-Zipf case is insufficient. This is partly because the number of distinct requested files is close to the number of caches K when β < 1, but becomes much smaller than K when β > 1. A PPENDIX A E XPECTED N UMBER OF U NMATCHED U SERS In this appendix, we will derive upper bounds on the expected number of unmatched users when using PCD, stated in Lemma 1 below. The proof of this lemma requires Lemma 2, also stated below, which gives a more general result on the number of unmatched users. Lemma 1. When using PCD, the expected number of unmatched users is no greater than √ K −t0 / 2π. Lemma 2. If Y ∼ Poisson(γm) users must be matched with m ≥ 1 caches, where γ ∈ (0, 1), then the expected number of unmatched users U = [Y − m]+ is bounded by m 1 E[U ] ≤ √ · m · γe1−γ . 2π Lemma 2 is proved in Appendix F. Proof of Lemma 1: In PCD, at each cluster c we are attempting to match a number of users Y (c) ∼ Poisson(ρd) to exactly d caches. Let U (c) denote the number of unmatched users at P cluster c, and let U 0 = c U (c) be the total number of unmatched users. The matching in PCD is arbitrary, and so any user can be matched to any cache. Consequently, U (c) = [Y (c) − d]+ and we can apply Lemma 2 directly to obtain 0 E[U ] = K/d X c=1 E[U (c)] ≤ d   K 1 1 · √ · d · ρe1−ρ = √ exp log K + d log ρe1−ρ . d 2π 2π (3) 14 Note that the function x 7→ xe1−x is strictly increasing for x ∈ (0, 1). Since ρ ∈ (0, 1/2), we thus get   log ρe1−ρ < log 2ρe1−2ρ = −α < log(1) = 0. Applying this to (3), we obtain   (a) 1 1 1 E[U 0 ] ≤ √ exp log K + d log 2ρe1−2ρ ≤ √ exp {log K − (1 + t0 ) log K} = √ K −t0 , 2π 2π 2π where (a) uses (1). This concludes the proof. A PPENDIX B D ETAILS OF PAM S CHEME FOR β ∈ [0, 1) (P ROOF OF T HEOREM 2) First, note that it is always possible to unicast from the server to each user the file that it requested. Since the expected number of users is ρK, we always have R̄PAM ≤ ρK. In what follows, we focus on the regime M = Ω(N/d). Recall that the number of requests for file n at cluster c is un (c), a Poisson variable with parameter ρdpn . In the placement phase, we perform a proportional placement. Specifically, since each cache can store M files and each cluster consists of d caches, we replicate each file Wn on dn = pn dM caches per cluster. In the matching phase, we first construct a fractional matching of users to caches, and then show that this implies the existence of an integral matching. We construct the fractional matching by dividing each file Wn into dn equal parts, and then mapping each request for file Wn to dn requests, one for each of its parts. Each user now connects to the dn caches containing file Wn and retrieves one part from each cache. This leads to a fractional matching where the total data served by a cache k in cluster c is less than one file if X un (c) ≤ 1, dn W ∈W n (4) k where Wk is the set of files stored on cache k. 
Let h(x) = x log x + 1 − x be the Cramér transform of a unit Poisson random variable. Using the Chernoff bound and the arguments used in the proof of [11, Proposition 1], we have that ( ) X un (c) Pr > 1 ≤ e−zdM/N , dn W ∈W n k where z = (1 − β)ρh(1 + (1 − ρ)/2ρ) > 0. (5) 15 To find a matching between the set of requests and the caches, we serve all requests for files that are stored on caches for which (4) is violated via the server. For the remaining files, there exists a fractional matching between the set of requests and the caches such that each request is allocated only to caches in the corresponding cluster, and the total data served by each cache is not more than one unit. By the total unimodularity of adjacency matrix, the existence of a fractional matching implies the existence of an integral matching [14]. We use the Hungarian algorithm to find a matching between the remaining requests and the caches in the corresponding cluster. Let τn be the probability that at least one of the caches storing file Wn does not satisfy (4). By the union bound, it follows that τn ≤ (KM/N )e−zdM/N . By definition, R̄ PAM ≤ N X n=1 τn ≤ KM e−zdM/N , (6) which concludes the proof of the theorem. While the above was enough to prove the theorem, we will next provide an additional upper bound on the PAM rate, thus obtaining a tighter expression. Let Gc be the event that the total number of requests at cluster c is less than d, and let T G = c Gc . Using the Chernoff bound, we have E[G] ≥ 1 − K −zd e , d where z is as defined above. Conditioned on Gc , the number of files that need to be fetched from the server to serve all requests in the cluster is at most d. The rest of this proof is conditioned on G. Let Ec be the event that all caches in cluster c satisfy (4). Using (5) and the union bound, we have that Pr{Ec |G} ≥ Pr{Ec } ≥ 1 − de−zdM/N , where z is as defined above. Conditioned on Ec , all the requests in cluster c can be served by the caches. Therefore, E[R|G] ≤ K/d X c=1  d · Pr Ec G ≤ Kde−zdM/N , where A denotes the complement of A for any event A. It follows that E[R] = E[R|G] Pr{G} + E[R|G] Pr{G} ≤ E[R|G] + N Pr{G} ≤ Kde−zdM/N + N K −zd e . (7) d 16 Using (6) and (7), we obtain R̄ PAM   N K −zd −zdM/N −zdM/N , e ≤ min KM e , Kde + d which is a tighter bound on the PAM rate and implies Theorem 2. A PPENDIX C D ETAILS OF PAM S CHEME FOR β > 1 (P ROOF OF T HEOREM 7) At a high level, the PAM strategy consists in storing complete files in the caches, replicating the files across different caches, and then matching the users to the cache that contains their requested file. Users that cannot be matched to a cache containing their file are served directly from the server. The above describes PAM strategies very generally; there are many possible schemes for placement and matching within this class of strategies. In this paper, we adopt for β > 1 a strategy that performs a knapsack storage (KS) placement phase that is based on the knapsack problem, and a match least popular (MLP) matching phase in which matching is done for the least popular files first. We refer to this PAM scheme as KS+MLP. A. Excess Users Whatever the strategy, if there are more than d users at a particular cluster, then the excess users must be unmatched. From Lemma 2, we know that the expected total number of these excess users across all caches is O(K −t0 ), which is o(1). For the proof of Theorem 7, we will generally use asymptotic notation for the expected rate. 
Thus by always serving these users directly from the server, their contribution to the rate is always o(1). For this reason, we will make the simplifying assumption in what follows that there are no more than d users at each cluster. B. Placement Phase: Knapsack Storage We split the KS policy into two parts. In the first part, we determine how many copies of each file will be stored per cluster. In the second part, we determine which caches in each cluster will store each file. 17 1) KS Part 1: The first part of the knapsack storage policy determines how many caches in each cluster store each file by solving a fractional knapsack problem. The parameters of the fractional knapsack problem are a value vn and a weight wn associated with each file Wn , defined as follows. • The value vn of file Wn is the probability that Wn is requested by at least one user in a cluster, vn = 1 − (1 − pn )d . • The weight wn of file Wn represents the number of caches in which Wn will be stored, should the policy decide to store it. If we decide to store a file, we would like to make sure that all requests for that file can be served by the caches, so that it need not be transmitted by the server. To ensure this, we fix wn to be large enough so that, with probability going to one as K → ∞, the number of requests for Wn is no larger than wn . We thus choose the following values for wn : wn =    d         1+ if n = 1;  p1 2 ρdpn   d4p1 (log d)2 e       1  if 2 ≤ n ≤ N1 ; if N1 < n ≤ N2 ; if N2 < n ≤ N , where N1 and N2 are defined as d1/β ; N1 = p1 (log d)2/β (8a) N2 = d(1+1/β)/2 . (8b) Using the above parameters vn and wn , we solve the following knapsack problem: maximize x1 ,...,xN subject to N X v n xn n=1 N X n=1 wn xn ≤ dM ; 0 ≤ xn ≤ 1, ∀n. (9) Then, the number of copies of file Wn that will be present in each cluster is cn = bxn c wn . Note that cn is hence either zero or wn . 18 Algorithm 1 The Match Least Popular (MLP) matching policy for a fixed cluster. Require: Number of requests un for file Wn , for each n, at the cluster Ensure: Matching of users to caches 1: Set Kn ⊆ {1, . . . , d} to be the set of caches containing file Wn , for each n 2: Loop over all files from least to most popular: 3: for n ← N, N − 1, . . . , 1 do 4: Loop over all requests for file n: 5: for v ← 1, . . . , un do 6: if Kn 6= ∅ then 7: Pick k ∈ Kn uniformly at random 8: Match a user requesting file Wn to cache k 9: Cache k is no longer available: for all n0 ∈ {1, . . . , N } do 10: Kn0 ← Kn0 \ {k} 11: end for 12: 13: 14: 15: end if end for end for 2) KS Part 2: The second part of the knapsack storage policy is to determine which caches store each file. We will focus on one arbitrary cluster, but the same placement is done in each cluster. To do that, define the multiset S containing exactly cn copies of each file index n. Let us order the elements of S in increasing order, and call the resulting ordered list (n1 , . . . , nbdM c ). Then, for each r, we store file Wnr in cache ((r − 1) mod d) + 1 of the cluster. C. Matching and Delivery Phases: Match Least Popular In the matching phase, we use the Match Least Popular (MLP) policy, the key idea of which is to match users to caches starting with the users requesting the least popular files. Algorithm 1 gives the precise description of MLP. At the end of Algorithm 1, some users will be unmatched, particularly those for which the condition on line 6 fails. Any file requested by an unmatched user will be broadcast directly from the server. 19 D. 
Expected Rate Achieved by KS+MLP Lemma 3. Let X be a Poisson random variable with mean µ, and let  ∈ (0, 1) be arbitrary. Then, Pr {X ≥ (1 + )µ} ≤ e−µh(1+) , where h(x) = x log x + 1 − x. Proof: The lemma follows from the Chernoff bound. Lemma 4. Recall that un (c) denotes the number of users requesting file Wn from cluster c. Consider an arbitrary cluster c. Let E1 denote the event that  p1  un (c) ≤ 1 + dpn for all 1 ≤ n ≤ N1 ; and 4 un (c) ≤ 2p1 (log d)2 for all N1 < n ≤ N2 , where N1 and N2 are as defined in (8). Then, 2 Pr {E1 } = 1 − N e−Ω((log d) ) . Proof: In what follows, we will ignore for simplicity the index c when it is clear from the context, e.g., we will write un = un (c). Recall that un is a Poisson random variable with mean ρdpn . • For n ≤ N1 , we have dpn ≥ p1 (log d)2 . Therefore, using Lemma 3, we have for n ≤ N1 o n  p1  2 dpn ≤ e−Ω(dpn ) ≤ e−Ω((log d) ) . Pr un > 1 + 4 • For N1 < n ≤ N2 , we have dpn < p1 (log d)2 . By defining ũ to be a Poisson variable with parameter ρp1 (log d)2 > ρdpn , we obtain (b)   (a) 2 Pr un > 2p1 (log d)2 < Pr ũ > 2p1 (log d)2 ≤ e−Ω((log d) ) , In the above, (a) uses the fact that the function λ 7→ Pr{X(λ) > a} defined on λ ∈ [0, a), where X(λ) is a Poisson variable with parameter λ, is an increasing function of λ, and (b) follows from Lemma 3. The statement of the lemma is then obtained by a union bound over all the files. 20 Lemma 5. Let R = {n : xn = 1}, where xn is the solution to the fractional knapsack problem solved in Appendix C-B1. Let E2 denote the event that, in a given cluster, the MLP policy matches all requests for all files in R to caches. Then, 2 Pr{E2 } = 1 − N e−Ω((log d) ) . Proof: Since the MLP policy matches requests to caches starting from the least popular files, we first focus on requests for files less popular than file WN2 . Because the files follow a Zipf distribution, we can write pN2 = p1 N2−β = p1 . (β+1)/2 d Every file less popular than N2 is stored at most once across all the caches in the cluster. Therefore, under MLP, a request for a file n > N2 will remain unmatched only if the cache storing that file is matched to another request for another file n0 > N2 . Under the KS placement policy, the cumulative popularity of all files no more popular than file N2 stored on a particular cache is less than   β+1 pN2 + pN2 +d + pN2 +2d + · · · = O p1 d− 2 . Each unmatched request for file n > N2 corresponds to the event that there are at least two requests for the M files less popular than WN2 stored on a cache. Therefore, by the Chernoff bound (Lemma 3), the probability that a particular request for a file n > N2 remains unmatched is at most e−Ω(d) . By the union bound, the probability that at least one request for file n ∈ R such that n > N2 is not matched by the MLP policy is at most de−Ω(d) . Next, we focus on the files n ∈ {2, . . . , N2 }. Note that if the KS policy decides to store file n, it stores it on wn caches. Therefore, N2 X n=2 xn w n ≤ N1 l X n=2 N1 h X N2 m X   p1  dpn + 4p1 (log d)2 1+ 2 n=N +1 1 N2 i X   p1  ≤ 1+ dpn + 1 + 4p1 (log d)2 + 1 2 n=2 n=N1 +1   p1 ≤ 1+ d(1 − p1 ) + 4p1 (log d)2 N2 + N2 2   p1  ≤ 1+ d(1 − p1 ) + Θ d(1+1/β)/2 (log d)2 2 ≤ d, 21 for d large enough. Therefore, if files are stored according to KS Part 2, each cache stores at most one file from the set {2, . . . , N2 }. Let Kn be the set of caches storing file n in a given cluster. Let E3,k be the event that cache k ∈ Kn is matched to a user requesting a file n > N2 . 
A cache will be matched to a user requesting a file less popular than N2 only if at least one of the files that it stores, among those that are less popular than N2 , is requested at least once. Since there are at most M such files on each cache, Pr{E3,k } ≤ 1 − 1 − O d−(β+1)/2 d . For a given constant 0 <  < 1, there exists a d() such that Pr{E3,k } ≤  for all d ≥ d(). For each file n, N1 < n ≤ N2 , let E4,n denote the event that more than 2p1 (log d)2 of the d4p1 (log d)2 e caches in Kn are matched to users requesting some file n0 > N2 . By the Chernoff bound for negatively associated random variables [14], 2 Pr{E4,n } = e−Ω((log d) ) . 2 From Lemma 4, we know that with probability at least 1 − N e−Ω((log d) ) , there are less than (1 + p1 /4)dpn requests for each file 2 ≤ n ≤ N1 . Therefore, with probability at least 2 1 − N e−Ω((log d) ) , all requests for files in R such that 2 ≤ n ≤ N1 are matched to caches by the MLP policy. Finally, we now focus on the requests for the most popular file W1 . Recall that if the KS policy decides to store this file, it will be stored on all caches in each cluster. Since we are assuming that the total number of requests at each cluster is no greater than d, then even if all the users requesting files other than W1 are matched, the remaining caches can still be used to serve all requests for W1 . E. Expected Rate 2 From Lemma 5, we know that, for d large enough, with probability at least 1 − N e−Ω((log d) ) , in a given cluster all requests for the files cached by the KS+MLP policy are matched to caches. e be the number of files not in R (i.e., that are not cached) that are requested at least once. Let N By the union bound over the K/d clusters, 2 2 e ] + K (1 − Pr{E2 }) ≤ E[N e ] + N K e−Ω((log d)2 ) . R̄PAM ≤ E[N d d 22 After solving the fractional knapsack problem, defined in (9), as a function of N , K, d, β, and M , we can determine the set R. For a given R, we then have X  e] = E[N 1 − (1 − pn )K . n∈R / We hence obtain the following bound on the expected rate:   ! −β K X n N K 2 −Ω((log d)2 ) R̄PAM ≤ 1− 1− e . + AN d n∈R / When N and d are polynomial in K, then the second term is o(1), and solving the fractional knapsack problem yields the result of Theorem 7. A PPENDIX D A PPROXIMATE O PTIMALITY (P ROOF OF T HEOREM 4) In this section, we focus on the case β ∈ [0, 1) to prove Theorem 4. The key idea here is to show that this case can be reduced to a uniform-popularities case. We will therefore first derive lower bounds for the uniform-popularities setup (β = 0), and then use that result to derive bounds for the more general case. A. The Uniform-Popularities Case (β = 0) When β = 0, we have the following lower bounds. Lemma 6. Let s ∈ {1, . . . , K/d}. If N ≥ 10, we have the following lower bound on R̄∗ :   e−1 sdM 1 ∗ − . R̄ ≥ ρsd 1 − 4 2 N Before we prove Lemma 6, notice that the lower bounds that it gives are very similar to the ones in [3]. In fact, by writing the inequality of Lemma 6 as   ρ(1 − e−1 /2) sdM ∗ R̄ ≥ d · max s 1 − , s∈[K/d] 4 (1 − e−1 /2)N we can use the same argument as in [3] to show   ρ(1 − e−1 /2) 1 (1 − e−1 /2)N K R̄ ≥ d· min − 1, 4 12 dM d   ρ(1 − e−1 /2) (1 − e−1 /2)N = min − d, K . 48 M ∗ (10) Proof of Lemma 6: First, consider the following hypothetical scenario. Let there be a single cache of size M 0 , and suppose that a request profile u is issued from users all connected to 23 this one cache. 
Moreover, assume that we allow the designer to set the cache contents after the request profile is revealed; thus both the placement and delivery take place with knowledge of u. If we send a single message to serve those requests, and denote its rate by R0 (M 0 , u), then a cut-set bound shows that R0 (M 0 , u) + M 0 ≥ γ(u), (11) where γ(u) is the total number of distinct files requested in u. Since all users share the same resources, if a file can be decoded by one user then it can be decoded by all. Thus the number of requests for each file is irrelevant for (11), as long as it is nonzero. Furthermore, since both the placement and the delivery are made after the request profile is revealed, the identity of the requested files is irrelevant. Indeed, if R0 (M 0 , u1 ) > R0 (M 0 , u2 ) for some u1 and u2 such that γ(u1 ) = γ(u2 ), then a simple relabling of the files in u1 can make it equivalent to u2 , and thus the same rate can be achieved. Consequently, (11) can be rephrased using only the number of distinct requested files, e 0 , y) + M 0 ≥ y, R(M (12) e 0 , y) is the rate required to serve requests for y distinct files from one cache of where R(M e 0 , γ(u)) for all u. Additionally, it can be seen that memory M 0 . Note that R0 (M 0 , u) = R(M e 0 , y) increases as y increases: if y1 < y2 , then we can always add y2 − y1 users to request R(M e 0 , y1 ) ≤ R(M e 0 , y2 ). new files, and thus achieve a rate of R(M Let us now get back to our original problem. For convenience, define R̄y = Eu [Ru |γ(u) = y] for every y. Suppose we choose s ∈ {1, . . . , K/d} different clusters, and we observe the system over B instances. Over this period, a certain number of users will connect to these clusters and request files; all other users are ignored. If we denote the resulting request profiles as u1 , . . . , uB , P then the rate required to serve all requests is B i=1 Rui . Suppose we relax the problem and allow the users to co-operate. Suppose also that we allow the placement to take place after all B request profiles are made. This can only reduce the required rate. Furthermore, this is now an equivalent problem to the hypothetical scenario described at the beginning of the proof. Therefore, if we denote by ūB the request profile cumulating u1 , . . . , uB , we have B X i=1 e Rui ≥ R0 (sdM, ūB ) = R(sdM, γ(ūB )). 24 By averaging over the cumulative request profile, we obtain the following bound on any achievable expected rate R̄: h i h h ii e e B R̄ ≥ EūB R(sdM, γ(ūB )) = EYsB EūB R(sdM, γ(ūB )) γ(ūB ) = YsB h i e = EYsB R(sdM, YsB ) , (13) where YsB is a random variable denoting the number of distinct files requested after B instances at s clusters. Since (13) holds for all achievable rates R̄, it also holds for R̄∗ . Let us choose B = dN/ρsde. Using Chernoff bounds, we obtain some probabilistic bounds on the number of requested files YsB . These bounds are given in the following lemma for convenience; the lemma is proved in Appendix F. Lemma 7. If Y denotes the number of distinct requested files by users at s clusters over B = dN/ρsde instances, then, for all  > 0,  −1 −1 Pr Y ≤ (1 − e−1 − )N ≤ e−N D(e +||e ) . e = (1 − e−1 − )N . We will now use Lemma 7 to obtain bounds on the expected rate. 
Define N From (13), we have N h i X e e Pr{YsB = y} · R(sdM, y) YsB ) = dN/ρsde R̄∗ ≥ EYsB R(sdM, y=1 ≥ N X e y=N (a) Pr{YsB e e e) = y} · R(sdM, y) ≥ R(sdM, N N X Pr{Y = y} e y=N (b)   (c)  e e ) 1 − e−N D(e−1 +||e−1 ) ≥ (1 − o(1)) (1 − e−1 − )N − sdM , ≥ R(sdM, N e where (a) uses the fact that R(sdM, y) can only increase with the number of requested files y, (b) follows from Lemma 7, and (c) is due to (12). For a fixed , the 1 − o(1) factor approaches 1 as N grows. More generally, we can lower- bound it by some constant for a large enough N . For example, if  = e−1 /2, then the term is larger than 1/2 as long as N ≥ 10. Therefore, we have   −1   1 − e 2 N − sdM 1 e−1 sdM ∗ ≥ ρsd 1 − − R̄ ≥ 2 dN/ρsde 4 2 N as long as N ≥ 10. More generally, we can approach   1 sdM (1 − e−1 ) N − sdM ∗ −1 R̄ ≥ ≥ ρsd 1 − e − , dN/ρsde 2 N if we allow N to be sufficiently large. 25 B. The General Shallow Zipf Case (β ∈ [0, 1)) First, notice that the popularity of each file is pn ≥ p N = N −β (a) (1 − β)N −β 1−β ≥ , = 1−β AN N N (14) where AN is defined in Lemma 8 stated below, and (a) follows from the lemma. Lemma 8. Let m ≥ 1 be an integer and let β ∈ [0, 1). Define Am = Pm n=1 n−β . Then, m1−β − 1 ≤ (1 − β)Am ≤ m1−β . Lemma 8 is proved in Appendix F. Consider now the following relaxed setup. Suppose that, for every file n, there are ũn (c) users requesting file n from cluster c, where  ũn (c) ∼ Poisson (1 − β)ρd N  . Since (1 − β)ρd/N ≤ pn ρd for all n by (14), the optimal expected rate for this relaxed setup can only be smaller than the rate from the original setup. Indeed, we can retrieve the original setup by simply creating Poisson ((pn − (1 − β)/N )ρd) additional requests for file n at each cluster. Our relaxed setup is now exactly a uniform-popularities setup except that ρ is replaced by ρ0 = (1 − β)ρ, which is still a constant. Consequently, the information-theoretic lower bounds obtained in Lemma 6, and inequality (10) that follows it, can be directly applied here, giving the following lemma. Lemma 9. When N ≥ 10, the optimal expected rate R̄∗ can be lower-bounded by   (1 − e−1 /2)N (1 − β)ρ(1 − e−1 /2) ∗ min − d, K . R̄ ≥ 48 M When M < (1 − e−1 /2)N/2d, the bound in Lemma 9 can be further lower-bounded by   (1 − β)(1 − e−1 /2)2 ρ N ∗ R̄ ≥ · min , ρK . (15) 96 M Furthermore, the rate achieved by PCD is upper-bounded by     K −t0 N N PCD −1+ √ ≤ min ρK, . R̄ ≤ min ρK, M M 2π Consequently, combining (15) with (16) gives us the result of Theorem 4. (16) 26 A PPENDIX E D ETAILS OF HCM S CHEME (P ROOF OF T HEOREM 5) In this section, we are mostly interested in the case where K is larger than some constant. Specifically, we assume log K ≥ 2gα, (17) where g = (31−β − 1)/41−β > 0 is a constant. In the opposite case, we can achieve a constant rate by simply unicasting to each user the file that it requested. Let t ∈ [0, t0 ], and let χ = bαgd/(2(1 + t) log K)c. We will partition the set of files into χ colors. For each color x ∈ {1, . . . , χ}, define Wx as the set of files colored with x. We choose to color the files in an alternating fashion. More precisely, we choose for each x ∈ {1, . . . , χ} Wx = {Wn : n ≡ x (mod χ)} . Notice that |Wx | = bN/χc or dN/χe. We can now define the popularity of a color x as P Px = Wn ∈Wx pn . The following proposition, proved in Appendix F, gives a useful lower bound for Px . Proposition 1. For each x ∈ {1, . . . , χ}, we have Px ≥ g/χ. The significance of the above proposition is that the colors will essentially behave as though they are all equally popular. 
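As a numerical illustration of the alternating coloring and of Proposition 1 (with arbitrary example parameters; the proposition itself is stated for the paper's choice of χ), the following C sketch builds Wx = {Wn : n ≡ x (mod χ)}, computes each color popularity Px, and compares the smallest one against g/χ.

#include <stdio.h>
#include <math.h>

/* Numerical illustration of the alternating file coloring used by HCM and of the
 * lower bound P_x >= g/chi from Proposition 1. Parameter values are arbitrary. */

int main(void)
{
    const int    N = 5000, chi = 20;     /* number of files and number of colors */
    const double beta = 0.7;             /* Zipf exponent, here beta in [0, 1)   */
    const double g = (pow(3.0, 1.0 - beta) - 1.0) / pow(4.0, 1.0 - beta);

    double A = 0.0;                      /* Zipf normalisation A_N               */
    for (int n = 1; n <= N; n++) A += pow(n, -beta);

    double Pmin = 1.0;
    for (int x = 1; x <= chi; x++) {
        double Px = 0.0;
        for (int n = x; n <= N; n += chi)   /* W_x = { W_n : n = x (mod chi) }   */
            Px += pow(n, -beta) / A;
        if (Px < Pmin) Pmin = Px;
    }
    printf("min_x P_x = %.4f   g/chi = %.4f\n", Pmin, g / chi);
    return 0;
}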
Next, we partition the caches of each cluster into the same χ colors. We choose this coloring in such a way that the number of caches associated with a particular color is proportional to the popularity of that color. Specifically, exactly bdPx c caches in every cluster will be colored with x. This will leave some caches colorless; they are ignored for the entirety of the scheme for analytical convenience. We can now describe the placement, matching, and delivery phases of HCM. Consider a particular color x. This color consists of |Wx | files and bdPx c K/d caches in total. The idea is to perform a Maddah-Ali–Niesen scheme [3], [15] on each color separately, while matching each user to a cache of the same color of its requested files. The scheme can be described more formally with the following three steps. • In the placement phase, for each color x we perform a Maddah-Ali–Niesen placement of the files Wx in the caches colored with x. 27 • In the matching phase, each user is matched to a cache in its cluster of the same color as the file that the user has requested. Thus if the user is at cluster c and requests a file from Wx , it is matched to an arbitrary cache from cluster c colored with color x. For each cluster-color pair, if there are more users than caches, then some users must be unmatched. • In the delivery phase, for each color x we perform a Maddah-Ali–Niesen delivery for the users requesting files from Wx . Next, each unmatched user is served with a dedicated unicast message. The resulting overall message sent from the server is a concatenation of the messages sent for each color as well as all the unicast messages intended for unmatched users. Suppose that the broadcast message sent for color x has a rate of Rx . Suppose also that the number of unmatched users is U 0 . Then, the total achieved expected rate will be ( ) χ X R̄HCM = min ρK, E[Rx ] + E[U 0 ] , (18) x=1 since ρK can always be achieved by simply unicasting to every user its requested file. From [15], we know that we can always upper-bound the rate for color x by +  |Wx | −1 , Rx ≤ M for all M > 0. Because |Wx | = bN/χc or dN/χe for all x, we obtain   N  −1 if M ≤ bN/χc;  M   X Rx ≤ 0 if M ≥ dN/χe;     x   (N mod χ) dN/χe − 1 otherwise. M (19) All that remains is to find an upper bound for E[U 0 ]. Let Y (c, x) represent the number of users at cluster c requesting a file from color x. Since there are bdPx c caches at cluster c with color x, then exactly U (c, x) = [Y (c, x) − bdPx c]+ users will be unmatched. Thus we can write U 0 as 0 U = K/d χ X X U (c, x). c=1 x=1 Before we proceed, it will be helpful to state the following two results, proved in Appendix F. Proposition 2. If Y is a Poisson variable with parameter λ, then, for all integers m ≥ 1, the function λ 7→ Pr{Y = m} is increasing in λ as long as λ < m. 28 Proposition 3. Let Y be a Poisson random variable with parameter λ, and let m ≥ λ. Define U = [Y − m]+ , i.e., U = 0 if Y < m and U = Y − m if Y ≥ m. Then, E[U ] ≤ m Pr{Y = m}. Notice that Y (c, x) ∼ Poisson(ρd · Px ), and that the Y (c, x) users must be matched to bdPx c. e = [Ye (c, x) − bdPx c]+ . Since For convenience, we define Ye (c, x) ∼ Poisson(2ρ · bdPx c) and U we have ρdPx ≤ 2ρ bdPx c , i.e., the Poisson parameter of Ye (c, x) is at least the Poisson parameter of Y (c, x), then ∞ ∞ X (a) X e (c, x)], E[U (c, x)] = y · Pr{Y (c, x) = y} ≤ y · Pr{Ye (c, x) = y} = E[U y=bdPx c y=bdPx c where (a) uses Proposition 2. 
e (c, x) in order to upper-bound the This allows us to apply Proposition 3 on Ye (c, x) and U expectation of U (c, x) by  e (c, x)] ≤ √1 · bdPx c · 2ρe1−2ρ bdPx c . E[U (c, x)] ≤ E[U 2π 0 Consequently, we get the upper bound on E[U ], 0 E[U ] = K/d χ X X c=1 K/d χ X X 1 bdPx c √ bdPx c 2ρe1−2ρ E[U (c, x)] ≤ 2π x=1 c=1 x=1 χ χ  bdPx c 1 XK 1 X 1−2ρ bdPx c =√ · bdPx c 2ρe ≤√ Px · K 2ρe1−2ρ . 2π x=1 d 2π x=1 Isolating part of the term in the sum, bdPx c   K 2ρe1−2ρ = exp log K + log 2ρe1−2ρ bdPx c = exp {log K − α bdPx c}     αdg αdPx (a) ≤ exp log K − ≤ exp log K − 2 2χ   (b) αdg 2(1 + t) ≤ exp log K − · log K = exp {log K − (1 + t) log K} 2 αdg = K −t , where (a) uses Proposition 1, and (b) uses the definition of χ combined with byc ≤ y. We obtain the final upper bound on the expected number of unmatched users, χ 1 X K −t 0 E[U ] ≤ √ Px K −t = √ . 2π x=1 2π (20) Finally, we combine (19) and (20) in (18) to obtain the rate expression in Theorem 5, thus completing its proof. 29 A PPENDIX F E XTRA P ROOFS Proof of Lemma 2: Before we prove Lemma 2, we need the following result that pertains to Poisson variables in general, proved later in the appendix. Proposition 4. If Y is a Poisson variable with parameter λ, then for all integers m ≥ 1, E [Y |Y ≥ m] = m Pr{Y = m|Y ≥ m} + λ. We can now prove Lemma 2. By using Proposition 3 (stated in Appendix E) with λ = γm, we have E[U ] ≤ m Pr{Y = m} = m · (γm)m e−γm . m! Using Stirling’s approximation, we have √ √ 1 m! ≥ 2πmm+ 2 e−m ≥ 2πmm e−m , which yields m 1 (γm)m e−γm √ = √ · m · γe1−γ , E[U ] ≤ m · 2πmm e−m 2π thus concluding the proof. Proof of Proposition 1: Choose any x ∈ {1, . . . , χ}. Starting with the definition of Px , we have Px = X Wn ∈Wx |Wx |−1 |Wx |−1 |Wx |−1 1 X 1 X χ−β X −β −β pn = (kχ + x) ≥ (kχ + χ) = (k + 1)−β AN k=0 AN k=0 AN k=0 A|Wx | (a) −β |Wx |1−β − 1 (b) −β (N/χ − 1)1−β − 1 ≥χ · ≥χ · AN N 1−β N 1−β "   1−β  1−β # 1 χ 1−β  χ 1−β (c) 1 gα gα = · 1− − ≥ 1− − χ N N χ 2 log K 2 log K " # 1−β  1−β (d) 1 1 1 g ≥ 1− − = , χ 4 4 χ = χ−β · where (a) uses Lemma 8, (b) uses the fact that |Wx | ≥ bN/χc ≥ N/χ − 1 for all x, (c) uses the definition of χ as well as N ≥ d and t ≥ 0, and (d) uses (17). Proof of Proposition 2: Define fm (λ) = Pr{Y = m} when Y is Poisson with parameter λ, i.e., fm (λ) = λm e−λ /m!. Then, 0 fm (y) =  λm−1 e−λ 1 mλm−1 e−λ − λm e−λ = (m − λ) . m! m! 30 0 Consequently, fm (y) > 0 if and only if λ < m, and hence Pr{Y = m} increases with λ as long as λ < m. Proof of Proposition 3: Define V such that V = 0 if Y < m and V = 1 if Y ≥ m. Using the tower property of expectation, E[U ] = E [E [U |V ]] = Pr{V = 0}E [U |V = 0] + Pr{V = 1}E [U |V = 1] (a) = 0 + Pr{V = 1}E [Y − m|V = 1] = Pr{Y ≥ m}E [Y − m|Y ≥ m] (b) = Pr{Y ≥ m} (E[Y |Y ≥ m] − m) = Pr{Y ≥ m} (m Pr{Y = m|Y ≥ m} + λ − m) (c) ≤ Pr{Y ≥ m} · m Pr{Y = m|Y ≥ m} = m Pr{Y = m}, where (a) uses the definition of U given the different values of V , (b) uses Proposition 4, and (c) uses λ ≤ m. Proof of Proposition 4: First notice that λ Pr{Y = m − 1} = λ · λm e−λ λm−1 e−λ = · m = m Pr{Y = m}. (m − 1)! m! (21) We can now write the conditional expectation as E [Y |Y ≥ m] = ∞ X y=m y· ∞ X Pr{Y = y} 1 λy e−λ = y· Pr{Y ≥ m} Pr{Y ≥ m} y=m y! ∞ X 1 λy−1 e−λ 1 = λ = λ · Pr{Y ≥ m − 1} Pr{Y ≥ m} y=m (y − 1)! Pr{Y ≥ m} λ Pr{Y = m − 1} + λ Pr{Y ≥ m} (a) m Pr{Y = m} + λ Pr{Y ≥ m} = Pr{Y ≥ m} Pr{Y ≥ m} = m Pr{Y = m|Y ≥ m} + λ, = where (a) uses (21) Proof of Lemma 7: Recall that we are considering s clusters and B = dN/ρsde instances of the problem. 
Let Un denote the number of requests for file n by users in these clusters across the B instances. We can see that Un is a Poisson variable with parameter   ρd ρsd N λ= · sB = · . N N ρsd Note that λ ≥ 1. Let Zn be equal to one if Un ≥ 1, i.e., if file n was requested at least once, and equal to zero otherwise. Thus Zn is a Bernoulli variable with parameter Pr{Un ≥ 1} = 1 − e−λ . Then, the P total number of distinct requested files can be written as Y = n Zn . 31 Let  > 0 be arbitrary, and define η = 1 − e−1 − . We can now use the Chernoff bound to write, for every t > 0, Pr {Y ≤ ηN } ≤ e tηN  −tY ·E e  =e tηN · N Y   E e−tZn . n=1 The expression inside the product is   E e−tZn = e−λ · 1 + (1 − e−λ ) · e−t = e−λ (1 − e−t ) + e−t ≤ e−1 (1 − e−t ) + e−t = e−1 + e−t (1 − e−1 ), where the inequality is due to λ ≥ 1 and the fact that the function λ 7→ e−λ (1 − e−t ) decreases as λ increases, for all t > 0. Consequently,  N Pr {Y ≤ ηN } ≤ etη · e−1 + e−t (1 − e−1 ) . By choosing t such that et = 1 − e−1 e−1 +  · , e−1 1 − (e−1 + ) we get h iN −1 −1 −1 −1 Pr {Y ≤ ηN } ≤ e−D(e +||e ) = e−N D(e +||e ) , which concludes the proof. Proof of Lemma 8: To prove the lemma, we will relate the sum Am = P n n−β with the corresponding integral, which can be evaluated as a closed-form expression. Let f be any decreasing function defined on the interval [k, l] for some integers k and l. Then, we can bound the integral of f by l X n=k+1 f (n) ≤ Z l k f (x) dx ≤ l−1 X f (n). n=k Rearranging the inequalities, we get the equivalent statement that Z l Z l l X f (l) + f (x) dx ≤ f (n) ≤ f (k) + f (x) dx. k Recall that Am = Pm n=1 −β n (22) k n=k . Thus we can apply (22) with f (x) = x−β , k = 1, and l = m. Since we know that Z 1 m x−β dx = m1−β − 1 , 1−β 32 this implies, using (22), m−β + m1−β − 1 m1−β − 1 ≤ Am ≤ 1 + 1−β 1−β 1−β 1−β m m −1 ≤ Am ≤ , 1−β 1−β which concludes the proof. R EFERENCES [1] J. Hachem, N. Karamchandani, S. Moharir, and S. Diggavi, “Coded caching with partial adaptive matching,” in 2017 IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 2423–2427. [2] ——, “Caching with partial matching under Zipf demands,” in 2017 IEEE Information Theory Workshop (ITW), October 2017. [3] M. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014. [4] U. Niesen and M. A. Maddah-Ali, “Coded caching with nonuniform demands,” IEEE Transactions on Information Theory, vol. 63, no. 2, pp. 1146–1158, Feb 2017. [5] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “On the average performance of caching and coded multicasting with random demands,” in Proc. IEEE ISWCS, Aug. 2014. [6] J. Zhang, X. Lin, and X. Wang, “Coded caching under arbitrary popularity distributions,” in Proc. ITA, Feb. 2015. [7] J. Hachem, N. Karamchandani, and S. N. Diggavi, “Coded caching for multi-level popularity and access,” IEEE Transactions on Information Theory, vol. 63, no. 5, pp. 3108–3141, May 2017. [8] ——, “Multi-level coded caching,” in Proc. IEEE ISIT, Jun. 2014. [9] M. A. Maddah-Ali and U. Niesen, “Coding for caching: fundamental limits and practical challenges,” IEEE Communications Magazine, vol. 54, no. 8, pp. 23–29, August 2016. [10] G. Paschos, E. Bastug, I. Land, G. Caire, and M. Debbah, “Wireless caching: technical misconceptions and business barriers,” IEEE Communications Magazine, vol. 54, no. 8, pp. 16–22, August 2016. [11] M. Leconte, M. Lelarge, and L. 
Massoulié, “Bipartite graph structures for efficient balancing of heterogeneous loads,” SIGMETRICS Perform. Eval. Rev., vol. 40, no. 1, pp. 41–52, Jun. 2012. [12] S. Moharir and N. Karamchandani, “Content replication in large distributed caches,” arXiv:1603.09153 [cs.NI], Mar. 2016. [13] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, “Web caching and Zipf-like distributions: evidence and implications,” in Proc. IEEE INFOCOM, 1999, pp. 126–134. [14] A. Schrijver, Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003. [15] M. A. Maddah-Ali and U. Niesen, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Netw., vol. 23, no. 4, pp. 1029–1040, Aug 2015.
MDMP: Managed Data Message Passing

Adrian Jackson
EPCC
The University of Edinburgh
Mayfield Road
Edinburgh EH9 3JZ, UK
[email protected]

Pär Strand
Plasma physics and fusion energy
Earth and Space Sciences
Chalmers University
Hösalsvägen 9, 4th floor
Göteborg, Sweden
[email protected]

arXiv:1304.7615v1 [cs.DC] 29 Apr 2013

ABSTRACT
MDMP is a new parallel programming approach that aims to provide users with an easy way to add parallelism to programs, optimise the message passing costs of traditional scientific simulation algorithms, and enable existing MPI-based parallel programs to be optimised and extended without requiring the whole code to be re-written from scratch. MDMP utilises a directives based approach to enable users to specify what communications should take place in the code, and then implements those communications for the user in an optimal manner using both the information provided by the user and data collected from instrumenting the code and gathering information on the data to be communicated. In this paper we present the basic concepts and functionality of MDMP and discuss the performance that can be achieved using our prototype implementation of MDMP on some simple benchmark cases.

Keywords
Parallel Languages, Message Passing, MPI

1. INTRODUCTION
There are numerous new programming languages, libraries, and techniques that have been developed over the past few years to either simplify the process of developing parallel programs or provide additional functionality that traditional parallel programming techniques (such as MPI[7] or OpenMP[21]) do not provide. These include programming extensions such as Co-Array FORTRAN[20] (CAF), UPC[2], and new languages such as Chapel[3], OpenMPD[16] or XMP[8].

However, these approaches often have not had a focus on optimising the parallelisation overheads (the communication costs) associated with distributed memory parallelisation, and existing parallel codes, many of which have very large code bases, would be prohibitively expensive to re-implement in a new parallel programming language.

The challenge of optimising parallel communications is becoming increasingly important as we approach Exascale-type high performance computers (HPC), where it is looking increasingly likely that the ratio between the computational power of a node of the computer and the relative performance of the network is going to make communications increasingly expensive when compared to the cost of calculations. Furthermore, the rise of multi-core and many-core computing on the desktop, and the related drop in single core performance, means that many more developers are going to need to exploit parallel programming to utilise the computational resources they have access to, where in the past they could have relied on increases in the serial performance of the hardware they were using to maintain program performance.

Therefore, we have devised a new parallel programming approach, called Managed Data Message Passing (MDMP)[12], which is based on the MPI library but provides a new method for parallelising programs. MDMP follows the directives based approach, favoured by OpenMP and other parallel programming techniques; the directives are translated into MDMP library function calls or code snippets, which in turn utilise communication library calls (such as MPI) to provide the actual parallel communication functionality. Using a directives based approach enables us to reduce the complexity, and therefore the development cost, of writing parallel programs, especially for the novice HPC programmer. However, the novel aspect of MDMP is that it allows users to specify the communication patterns required in the program but devolves the responsibility for scheduling and carrying out the communications to the MDMP functionality. MDMP instruments data accesses for data being communicated to optimise when communications happen and therefore better overlap communication and computation than is easily possible with traditional MPI programming.

Furthermore, by taking the directive approach MDMP can be incrementally added to a program that is already parallelised with MPI, replacing or extending parts of the existing MPI parallelisation without requiring any changes to the rest of the code. Users can start by replacing one part of the current communication in the code, evaluate the performance impacts, and replace further communications as required.

In this paper we outline work others have undertaken in creating new parallel programming techniques and in optimising communications. We describe the basic issues we are looking to tackle with MDMP, and go on to describe the basic features and functionality of MDMP, outlining the performance benefits and costs of such an approach, and highlighting the scenarios where MDMP can provide reduced communication costs for the types of communication patterns seen in some scientific simulation codes, using our prototype implementation of MDMP (which implements MDMP as library calls rather than directives).

2. RELATED WORK
Recent evaluation of the common programming languages used in large scale parallel simulation code has found the majority are still implemented using MPI, with a minority also including a hybrid parallelisation through the addition of OpenMP (or in a small number of cases SHMEM) alongside the MPI functionality[25]. This highlights to us the key requirement for any new parallel programming language or technique of being easily integrated with existing MPI-based parallel codes.

There are a wide range of studies evaluating the performance of a range of different parallel languages, including Partitioned Global Address Space (PGAS) languages, on different application domains and hardware[24, 13, 23, 1]. These show that there are many approaches that can provide performance improvements for parallel programs, compared to standard parallelisation techniques, on a given architecture or set of architectures.

Existing parallel programming languages or models for distributed memory systems provide various features to describe parallel programs and to execute them efficiently. For instance, XMP provides features similar to both CAF and HPF[19], allowing users to use either global or local view programming models, and providing easy to program functionality through the use of compiler directives for parallel functionality. Likewise, OpenMPD provided easy to program directives based parallelisation for message passing functionality, extending an OpenMP like approach to a distributed memory supercomputer. However, both of these approaches generally require the rewriting of existing codes, or parts of existing codes, into a new language, which we argue is prohibitively expensive for most existing computational simulation applications and therefore has limited the take-up of these different parallel programming languages or techniques by end user applications.
Furthermore, both only target parts of the problem we are aiming to tackle, namely improving programmability and optimising performance. XMP and OpenMPD both aim to make parallel programming simpler, but have no direct features for optimising communications in the program (although they can enable users to implement different communication methods and therefore choose the most efficient method for themselves). PGAS languages, and other new languages, may provide lower cost communications or new models of communications to enable different algorithms to be used for a given problem, or may provide a simpler programming model, but none seems to offer both as a solution for parallel programming. Also, crucially, they do not expect to work with existing MPI programs, negating the proposed benefits for the largest part of current HPC usage.

There has also been significant work undertaken looking at optimising communications in MPI programs. A number of authors have looked at compiler based optimisations to provide automatic overlapping of communications and computation in existing parallel programs[10, 6, 11]. These approaches have shown that performance improvements can be obtained, generally evaluated against kernel benchmarks such as the NAS parallel benchmarks, by transforming user specified blocking communication code to non-blocking communication functionality, and using static compiler analysis to determine where the communications can be started and finished. Furthermore, other authors have looked at communication patterns or models in MPI based parallel programs and suggested code transformations that could be undertaken to improve communication and computation overlap[4]. However, these approaches are what we would class as coarse-grained communication optimisation. They use only static compiler analysis to identify the communication patterns, and identify the outer bounds of where communications can occur to try and start and finish bulk non-blocking operations in the optimal places. They do not address the fundamental separation of communication and computation into different phases that such codes generally employ. Our work, outlined in this paper, is looking at fine-grained communication optimisations, where individual communication calls are intermingled with computation to truly mix communication and computation.

There has also been work on both offline and runtime identification and optimisation of MPI communications, primarily for collective communication[9, 14, 5], as well as other auto-tuning techniques such as optimising MPI library variables[22] or individual library routines[17]. All these approaches have informed the way we have constructed MDMP. We believe that the work we have undertaken is unique as it brings together simple message passing programming, fine-grained communication optimisation, and the potential for runtime auto-tuning of communication patterns into a single parallel programming tool.

3. COMMUNICATION OPTIMISATION
Whilst there is a very wide range of communication and computational patterns in parallel programs, a large proportion of common parallel applications use regular domain decomposition techniques coupled with halo communications to exploit parallel resources.
As shown in Figure3, which is a representation of a Jacobi-style stencil based simulation method, many simulations undertake a set of calculations that iterate over a n-dimensional array, with a set of communications to neighbouring processes every iteration of the simulation. The core computational kernel of a simple Jacobi style simulation, as illustrated in the previous paragraph, can be implemented as shown in Figure 3 (undertaking a 2d simulation). It is evident from the above code that, whilst it has been optimised to use non-blocking communications, the communication and computation parts of the simulation are performed separately, with no opportunity to overlap commu- for (iter=1;iter<=maxiter; iter++){ MPI_Irecv(&old[0][1], NP, MPI_FLOAT, prev, 1, MPI_COMM_WORLD, &requests[0]); MPI_Irecv(&old[MP+1][1], NP, MPI_FLOAT, next, 2, MPI_COMM_WORLD, &requests[1]); MPI_Isend(&old[MP][1], NP, MPI_FLOAT, next, 1, MPI_COMM_WORLD, &requests[2]); MPI_Isend(&old[1][1], NP, MPI_FLOAT, prev, 2, MPI_COMM_WORLD, &requests[3]); MPI_Waitall(4, requests, statuses); for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1] - edge[i][j]); } } for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ old[i][j]=new[i][j]; } } } Figure 2: MPI implementation of a simple 2d Jacobi style computation, implemented in C using MPI Figure 1: Representation of a common communication pattern for parallel simulations nications and computations. In practice this means that the application will only be using the communication network to send and receive data in short bursts, leaving it idle whilst computation is being performed. Many large scale HPC resources are used by a large number of running applications at any one time, which may help to ensure that the overall usage of the interconnect is high, even though individual applications often utilise it in a bursty manner. However, that still will not be true of the part of the network dedicated to the individual application, only to the load of the network overall. Furthermore, when considering the very largest HPC resources in the world, and including the proposed Exascale resources, there are often only a handful of applications utilising the resource at any one time. Therefore, enabling applications to effectively utilise the network, especially the spare resources that the current separated communication and computation patterns engender, is likely to be beneficial to overall application performance and resource utilisation (provided that the cost of doing this is not significant). It is possible to split the sends and receives in the previous example and place them around the computation rather than just before the computation, using the non-blocking functionality, to further ensure that more optimal communications are occurring. However, this is still does not allow for overlapping communication and computation because the computational is still occurring in a single block, with communications outside this block. fications, as shown in the code example in Figure 3 (which implements a strategy of sending data as soon as it has been computed). Whilst the code implemented in Figure 3 will enable the mixing of communication and computation, ensuring that data is sent as soon as it is ready to be communicated and potentially ensuring better utilisation of the communication network, it has come at the cost of considerable code mutilation, requiring developers to undertake significant code optimisations. 
As well as the damage to the readability and maintainability of the code that this causes, it also means that a code has been significantly changed for a potentially architecturally dependent optimisation, i.e. an optimisation that may be beneficial on one or more current HPC systems but may not be beneficial on other or future HPC systems. We are proposing MDMP as a mechanism for implementing such optimisations without the requirement to significantly change users codes, or the need to tailor codes to a specific platform, as the MDMP functionality can implement communications in the most optimal form for the application and hardware currently being used. 4. MDMP To re-iterate the challenges for MDMP that we have previously discussed, MDMP is designed to address the following issues: • Work with existing MPI based codes • Provide framework for optimisation communications For developers to ensure that communications and computations are truly mixed would require further code modi- • Simplify parallel development MDMP uses a directives based approach, relying on the compiler to implement the actual message passing functionality based on the users’ instructions. Compiler directives are used, primarily, to address the third point above, namely ease of use. We provide functionality that can be easily enabled and disabled in an application, hides some of the complexities of current MPI programming (such as providing message tags, error variables, communicators, etc...) that often complicate development for new users of MPI, and also provides some flexibility to the user over the type and level of message optimisation used. for (iter=1;iter<=maxiter; iter++){ requestnum = 0; for (j=0;j<NP;j++){ MPI_Irecv(&tempprev[j], 1, MPI_FLOAT, prev, 1, MPI_COMM_WORLD, &requests[requestnum]); requestnum++; MPI_Irecv(&tempnext[j], 1, MPI_FLOAT, next, 2, MPI_COMM_WORLD, &requests[requestnum]); requestnum++; } for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1] - edge[i][j]); if(i == MP){ MPI_Isend(&new[i][j], 1, MPI_FLOAT, next, 1, MPI_COMM_WORLD, &requests[requestnum]); requestnum++; }else if(i == 1){ MPI_Isend(&new[i][j], 1, MPI_FLOAT, prev, 2, MPI_COMM_WORLD, &requests[requestnum]); requestnum++; } } } for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ old[i][j]=new[i][j]; } } MPI_Waitall(requestnum, requests, statuses); if(prev != MPI_PROC_NULL){ for (j=1;j<NP+1;j++){ old[0][j] = tempprev[j-1]; old[MP+1][j] = tempnext[j-1]; } } if(next != MPI_PROC_NULL){ for (j=1;j<NP+1;j++){ old[MP+1][j] = tempnext[j-1]; } } } Figure 3: Intermingled MPI implementation of a simple 2d Jacobi style computation The MDMP directives are translated into code snippets and library calls by the MDMP-enabled compiler, either directly in the equivalent non-blocking MPI calls (which simply mimics the communication that would have been implemented directly by the user) or to further optimised MPI communications or other another communication library as appropriate on the particular hardware being used. This enables MDMP to target different communication libraries transparently to the developer for a given HPC system. Also, crucially the ability to target MPI communications means that MDMP functionality can be added to existing MPIparallelised programs, either as additional functionality or to replace existing MPI functionality, without requiring the program to be completely changed into a new programming language or utilise a new message-passing (or other) communication library. 
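As a rough illustration of how such a directive can hide this boilerplate, the sketch below shows one possible expansion of send and recv directives into thin wrappers around non-blocking MPI calls; the names (mdmp_send, mdmp_recv, mdmp_region_finished) are hypothetical and are not the actual MDMP API. The example must be run with at least two MPI processes.

#include <mpi.h>
#include <stdio.h>

/* Hypothetical sketch of what "#pragma send(buf, count, dest)" and
 * "#pragma recv(buf, count, source)" could expand to: non-blocking MPI calls in
 * which the tag, communicator, error handling, and request management are supplied
 * by the runtime rather than by the programmer. Names are illustrative only. */

#define MDMP_MAX_REQUESTS 1024
#define MDMP_DEFAULT_TAG  0

static MPI_Request mdmp_requests[MDMP_MAX_REQUESTS];
static int         mdmp_nrequests = 0;

static void mdmp_send(float *buf, int count, int dest)
{
    if (mdmp_nrequests < MDMP_MAX_REQUESTS)
        MPI_Isend(buf, count, MPI_FLOAT, dest, MDMP_DEFAULT_TAG,
                  MPI_COMM_WORLD, &mdmp_requests[mdmp_nrequests++]);
}

static void mdmp_recv(float *buf, int count, int source)
{
    if (mdmp_nrequests < MDMP_MAX_REQUESTS)
        MPI_Irecv(buf, count, MPI_FLOAT, source, MDMP_DEFAULT_TAG,
                  MPI_COMM_WORLD, &mdmp_requests[mdmp_nrequests++]);
}

/* possible expansion of the end of a communicating region */
static void mdmp_region_finished(void)
{
    MPI_Waitall(mdmp_nrequests, mdmp_requests, MPI_STATUSES_IGNORE);
    mdmp_nrequests = 0;
}

int main(int argc, char **argv)
{
    int   rank;
    float out = 1.0f, in = 0.0f;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) mdmp_send(&out, 1, 1);   /* no tags, communicators, or statuses */
    if (rank == 1) mdmp_recv(&in, 1, 0);    /* exposed to the programmer           */
    mdmp_region_finished();

    if (rank == 1) printf("received %f\n", in);
    MPI_Finalize();
    return 0;
}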
However, simply using directives for programming message passing will not optimise the communication that are undertaken by a program. Therefore, MDMP provides not only directives to specify the communications to be undertaken in the program but also directives to specify communication regions. Communication regions define the areas of code where the data that is to be sent and received is worked on, and where communications occur, so that MDMP can, at runtime, examine the data access patterns and undertake communications at the optimal time to intermingle communications and computations and therefore better utilise the communication network. The optimisation of communications is based on runtime functionality that monitors the reads and writes of data that has been specified as communication data (data that will be sent or received). As any data monitoring entails some runtime overheads the communication region specifies the scope of the data monitoring to ensure it is only performed where required (i.e. where communications are occurring). Any data that is specified by the users and being involved in send or receives is tracked so each read and writing in a communication region is recorded and the number of reads and writes that have occurred when the send or receive happens is evaluated. This data, the number of reads and writes that have occurred for a particular piece of data when it comes to be sent or written over by a receive, can then be used in any subsequent iterations of the computation to launch the communication of that data once it is ready to be communicated. Communications are triggered for any given piece of data as follows: #pragma commregion for (iter=1;iter<=maxiter; iter++){ #pragma recv(old[0][0], NP, prev) #pragma recv(old[MP+1][1], NP, next) #pragma send(old[MP][1], NP, next) #pragma send(old[1][1], NP, prev) for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ new[i][j]=0.25*(old[i-1][j]+old[i+1][j]+ old[i][j-1]+old[i][j+1] - edge[i][j]); } } for (i=1;i<MP+1;i++){ for (j=1;j<NP+1;j++){ old[i][j]=new[i][j]; } } } #pragma commregionfinished Figure 4: MDMP implementation of a simple 2d Jacobi style computation with optimised communication • last write occurs (sends) • last read and/or write occurs (receives) Using this functionality we can implement a communication pattern that intermingles communication and computation for the example code shown in Figure 3, as shown in Figure 4. When compiled with an MDMP-enabled compiled, the code in Figure 4 will be processed by the compiler and nonblocking sends and receives inserted where the send and recv directives are placed. The compile then looks through the code associated with the communicating region (between commregion and commregionfinished) and replaces any variable reads or writes linked to those sends and receives by MDMP code which will perform the reads and writes and also record those reads and writes occurring. Compiler based code analysis for data accesses will be straightforward for many applications, however we recognise that there will be a number of scenarios, such as when pointers are heavily used in C or FORTRAN, or possibly where preprocessing or function pointers or conditional function calls are used, where it will not be possible for the compiler to access where the data accesses for a particular send or recv occur. In that situation MDMP will revert to simply inserting the basic MPI function calls required to undertake the specified communication and not perform the optimise message functionality. 
Whilst this negates the possibility of optimising the communications, it will not add any overheads to the program compared to the standard MPI performance a developer would experience, and it does still leave scope for the MDMP functionality to target communication libraries other than MPI to enable optimisation for users would requiring them to modify their code, if such functionality is available. Furthermore, whilst we are not investigating such functionality at the moment, the design of MDMP means that it can also undertake auto-tuning or other runtime activities to optimise communication performance for users beyond the intermingling communication optimisations we have already discussed. For instance, MDMP could implement additional helper threads that enable progression of communications whilst the main program is undertaking calculations, albeit at the cost of utilising a computational core for that purpose. It could also evaluate different communication optimisations at runtime to auto-tune the performance of the program whilst it is running. A difference between the MPI functionality that a developer would add to a code like the one we have been considering and the functionality that MDMP implements is that where intermingling of communications is undertaken MDMP will be sending lots of single element (or small numbers of elements) messages between processes rather than a single message with all the data in it. In general, MPI performs best when small numbers of large messages are used, rather than large numbers of small messages. This is because in the case that large numbers of small message are sent the communication costs are dominated by the latency costs of each message, whereas using small numbers of large messages reduces the overall number of message latencies that are incurred. We recognise the fact the the MDMP functionality may not be optimal in terms of the overall message costs associated with communications but we are assuming that this penalty will be negated by the benefits associated with more consistent use of the network and less concentrated nature of the communication and computation patterns. However, this is something that is investigated in our performance analysis of MDMP, and as with other previously discussed potential problems with MDMP if it is impacting performance the optimised message passing can be disabled at compile time. We also recognise that MDMP functionality does not come without a cost to the performance of the program. MDMP is adding additional computational requirements above those specified in the user program, and also requires additional memory to store data associated with the communications (such as the counters that record the reads and writes to variables). The premise behind the optimised message passing functionality we are aiming for is that communications are much more expensive than computations for an application on a modern HPC machine, and this relationship is likely to get worse for future HPC machines. If this is the case then adding additional computational requirements can be acceptable provided the communication costs are reduced through this addition of extra computation. We have evaluated the performance impact of MDMP, and the communication verses computation trade-off on current HPC architectures where MDMP becomes beneficial, through benchmarking of our software, described in the next section. 
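As an illustration of the per-variable tracking described in Section 4, and of the extra state it requires, the following fragment sketches one possible shape of the instrumentation that could be generated for a tracked send buffer. All names are ours (not the real MDMP runtime), request completion is omitted, and the sketch assumes the write count per iteration is stable from one iteration to the next.

#include <mpi.h>

/* Illustrative shape of MDMP-style write tracking for one tracked send buffer.
 * Each instrumented write bumps a counter; once the number of writes recorded in
 * the previous iteration is reached, the data is taken to be in its final state
 * and the non-blocking send is started immediately, intermingling it with the
 * remaining computation. MPI_Wait handling is elsewhere and omitted here. */

typedef struct {
    float      *data;        /* buffer that will eventually be sent              */
    int         count;       /* number of elements to send                       */
    int         dest;        /* destination rank                                 */
    long        writes;      /* writes observed so far in this iteration         */
    long        last_writes; /* writes observed before the send, last iteration  */
    MPI_Request req;
} mdmp_tracked_t;

/* What an instrumented write "t->data[i] = v;" could be replaced with. */
static void mdmp_write(mdmp_tracked_t *t, int i, float v)
{
    t->data[i] = v;
    t->writes++;
    if (t->last_writes > 0 && t->writes == t->last_writes)
        MPI_Isend(t->data, t->count, MPI_FLOAT, t->dest, 0,
                  MPI_COMM_WORLD, &t->req);
}

/* Placed where the original send appeared: learns the trigger point on the first
 * iteration and acts as a fallback if the recorded count was never reached. */
static void mdmp_send_point(mdmp_tracked_t *t)
{
    if (t->last_writes == 0 || t->writes < t->last_writes)
        MPI_Isend(t->data, t->count, MPI_FLOAT, t->dest, 0,
                  MPI_COMM_WORLD, &t->req);
    t->last_writes = t->writes;
    t->writes = 0;
}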
However, we are still working on minimising the memory requirements for MDMP, as this will be important to ensure MDMP is usable on current and future HPC systems. Furthermore, we should remember that MDMP can be used as a simpler programming alternative to MPI with the optimised message passing functionality turned off at compile time, thereby removing all of these overheads if they are not beneficial for a given application or HPC platform.

5. EXPERIMENTAL SETUP
We evaluated the performance of MDMP compared to standard C and MPI codes. We undertook our evaluation using a range of common large scale HPC platforms and a set of simple, kernel style, benchmarks. We have only evaluated the functionality using 2 nodes on each system, primarily testing the communications between a pair of communicating processors, one on each node.

Table 1: C STREAM benchmark with MDMP, inside a communicating region (times in seconds)

Operation    Original    MDMP        Optimised MDMP
Int Assign   0.000008    0.000172    0.000076
Db Assign    0.000010    0.000163    0.000084
Db Copy      0.000009    0.000295    0.000151
Db Scale     0.000016    0.000296    0.000152
Db Add       0.000031    0.000447    0.000228
Db Triad     0.000042    0.000439    0.00238

5.1 Computing Resources
We used three different large scale HPC machines to benchmark performance. The first was a Cray XE6, HECToR, the UK National Supercomputing Service, which consists of 2816 nodes, each containing two 16-core 2.3 GHz Interlagos AMD Opteron processors per node, giving a total of 32 cores per node, with 1 GB of memory per core. This configuration provides a machine with 90,112 cores in total, 90 TB of main memory, and a peak performance of over 800 TFlop/s. We used the PGI FORTRAN compiler on HECToR, compiling with the -fastsse optimisation flag.

The second was a Bullx B510, HELIOS, which is based on Intel Xeon processors. A node contains 2 Intel Xeon E5-2680 2.7 GHz processors giving 16 cores and 64 GB memory. HELIOS is composed of 4410 nodes, providing a total of 70,560 cores and a peak performance of over 1.2 PFlop/s. The network is built using Infiniband QDR non-blocking technology and is arranged using a fat-tree topology. We used the Intel FORTRAN compiler on HELIOS, compiling with the -O2 optimisation flag.

The final resource was a BlueGene/Q, JUQUEEN, at Forschungszentrum Juelich. JUQUEEN is an IBM BlueGene/Q system based on the IBM POWER architecture. There are 28 racks composed of 28,672 nodes giving a total of 458,752 compute cores and a peak performance of 5.9 PFlop/s. Each node has an IBM PowerPC A2 processor running at 1.6 GHz and containing 16 SMT cores, each capable of running 4 threads, and 16 GB of SDRAM-DDR3 memory. IBM's FORTRAN compiler, xlf90, was used on JUQUEEN, compiling with the -O2 optimisation flag.

6. PERFORMANCE RESULTS
We have been evaluating MDMP functionality using a number of different benchmarks. Initially we tested the performance impact of instrumenting data reads and writes on a non-communicating code, the STREAMS [18] benchmark, with the results presented in the first subsection below. After this we evaluated the communications performance of MDMP versus communication implemented directly with MPI; the results of these evaluations are in the second subsection below. Each of the STREAMS benchmark tests was repeated 10 times and an average runtime calculated. For the communications benchmarks each operation was run 100 times, and each benchmark was repeated 3 times with the average time taken.
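The timing methodology described above can be reproduced with a simple MPI_Wtime harness; the sketch below is our own illustration of that methodology, not the benchmark code used for the results reported here.

#include <mpi.h>
#include <stdio.h>

/* Illustrative timing harness matching the methodology described above: each
 * operation is run 100 times per benchmark run, each benchmark is repeated 3
 * times, and the mean time per operation is reported. The operation below is a
 * placeholder for the MPI or MDMP call being measured. */

#define INNER 100
#define OUTER 3

static void operation_under_test(void)
{
    /* placeholder: the real benchmarks time MPI/MDMP communication calls here */
}

int main(int argc, char **argv)
{
    int    rank;
    double total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int run = 0; run < OUTER; run++) {
        MPI_Barrier(MPI_COMM_WORLD);              /* start all processes together */
        double t0 = MPI_Wtime();
        for (int i = 0; i < INNER; i++)
            operation_under_test();
        total += (MPI_Wtime() - t0) / INNER;
    }

    if (rank == 0)
        printf("mean time per operation: %g s\n", total / OUTER);

    MPI_Finalize();
    return 0;
}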
6.1 Reference Implementation

Whilst we have, in previous sections of this paper, outlined the principles of MDMP and how it is designed to work, we do not yet have a full compiler-based implementation of this functionality. We have designed and implemented the runtime functionality that any compiler would add to a code when encountering MDMP pragmas, but have not yet implemented the compiler functionality. Therefore, for this performance evaluation we are using benchmarks where the MDMP functionality has been implemented directly in the benchmark. We have implemented two versions of MDMP: the first version implements all the required functionality within function calls to the MDMP library. This includes data stores and lookups for all the data marked as being communicated within an MDMP communicating region. The second version implements exactly the same functionality but uses pre-processor macros to insert the required code directly into the source code, thereby removing the need for function calls at every point MDMP is used. In the benchmark results this is named Optimised MDMP. Work is currently ongoing to implement a compiler-based solution, utilising the LLVM [15] compiler infrastructure, to enable us to target all the main HPC computer languages with a single, full reference implementation.

6.2 STREAM Benchmark Results

STREAMS is often used to evaluate the memory bandwidth of computer hardware, and was therefore chosen as it will highlight any impact on the memory access and update efficiency of computations when MDMP is added to a code. The performance of the STREAMS benchmark was evaluated on all three of the hardware platforms available to us, although we only present the results from the Cray XE6 in the following tables because, although the performance of the benchmark varies between machines, the relative performance difference between the original implementation of STREAMS and our MDMP implementations does not change significantly between these platforms. We report the results from a single process running on an otherwise empty node of the Cray XE6. Table 1 (where Int stands for Integer and Db stands for Double) outlines the performance of the two versions of MDMP versus the original STREAMS code when the full communicating functionality of MDMP is enabled and we have forced the MDMP library to treat each variable in the arrays being processed as if it were being communicated (i.e. we are fully tracking all the reads and writes to these array entries even though no communications are occurring).

Table 2: C STREAM benchmark with MDMP, outside a communicating region (times in seconds)

  Operation    Original    MDMP        Optimised MDMP
  Int Assign   0.000003    0.000103    0.000017
  Db Assign    0.000007    0.000079    0.000021
  Db Copy      0.000009    0.000118    0.000025
  Db Scale     0.000016    0.000126    0.000020
  Db Add       0.000030    0.000188    0.000025
  Db Triad     0.000038    0.000194    0.000025

We can see from Table 1 that the MDMP functionality does have a significant impact on the overall runtime of all the benchmarks, increasing the runtime of the benchmark by around an order of magnitude. We would expect any benchmark like this, where no communications are involved and therefore there is no optimisation for MDMP to perform, to be detrimentally impacted by the additional functionality added in the MDMP implementation. However, we can see that the optimised implementation of MDMP does not have as significant an impact as the original MDMP implementation.
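To illustrate the difference between the two reference implementations described in Section 6.1, the sketch below shows how a tracked write might be recorded either through a call into the MDMP library or through a pre-processor macro that inlines the same bookkeeping at the point of use. The names used here (mdmp_record_write, mdmp_write_count, MDMP_WRITE) are hypothetical and are not taken from the actual MDMP runtime interface.

    #include <stddef.h>

    /* Hypothetical per-element bookkeeping kept by the runtime. */
    extern long mdmp_write_count[];

    /* Version 1: every tracked write goes through a library function call. */
    void mdmp_record_write(double *var, size_t index, double value)
    {
        var[index] = value;
        mdmp_write_count[index]++;   /* runtime later decides if the data is ready to send */
    }

    /* Version 2 ("Optimised MDMP"): a macro inserts the same bookkeeping
     * directly at the call site, avoiding a function call per access. */
    #define MDMP_WRITE(var, index, value)        \
        do {                                     \
            (var)[(index)] = (value);            \
            mdmp_write_count[(index)]++;         \
        } while (0)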
As this is the first optimisation we have applied to the MDMP functionality, we are hopeful that there is further scope for optimising the performance of MDMP and reducing the computational impact of the MDMP functionality. Furthermore, it is worth re-iterating that in this benchmark we are forcing MDMP to mark and track all the data in the STREAMS benchmark as if it is to be communicated. MDMP is not designed to be beneficial for scenarios such as this; it is primarily designed to be useful in scenarios where a small amount of data (compared to the overall amount computed upon) is sent each iteration. In a scenario such as the one this benchmark mimics (where all the data used in the computation is also communicated), MDMP should simply compile down to the basic MPI calls, as they would be more efficient in this scenario. MDMP is designed to enable users to try different communication strategies, such as simply using the plain MPI calls, or varying the amount of communication and computation intermingling, which lets users experiment and evaluate which strategy will give them the best performance for their application and use case. Indeed, such functionality could also be built into the MDMP runtime, enabling auto-tuning of the choice of communication optimisation on the fly.

We also ran the same benchmark with the code marked as outside a communicating region. In this scenario, whilst the MDMP functionality has been enabled for all the variables in the calculation, the absence of a communicating region disables, at runtime, any data tracking associated with the variables. Table 2 presents the results for this benchmark. We can see that the cost of the MDMP functionality has been substantially reduced; indeed, if we use the optimised functionality, where the MDMP function calls have been removed and replaced with pre-processed code, the MDMP performance is extremely close to that of the plain benchmark code. This confirms that the MDMP functionality can be constructed in such a way as not to have a significant adverse impact on the key computational kernels of a code outside the places where communications occur. However, we can see from the results that if communications are present there is a significant performance impact on the data that is tracked by the MDMP functionality. Our assumption is that the computational cost associated with MDMP can be more than offset by the reduction in communication costs for a program, but clearly this depends on the ratio between communications and computations for a given kernel, and on the relative cost (in terms of overall runtime) of a communication versus a computation. We evaluate the performance impact versus the communication cost savings in the next subsection, where we analyse some communication benchmarks.

6.3 Message Passing Results

We have constructed four simple benchmarks to evaluate MDMP against MPI. The first is a PingPong benchmark, where a process sends a message to another process, which copies the received data from its receive buffer into its send buffer and sends it back to the first process, which performs the same copying process and sends it back again. This pattern is repeated many times and the time for the communications is recorded. The benchmark can send a range of message sizes. For the reference MPI benchmark only a single message is sent each iteration of the benchmark, containing the full amount of data to be sent.
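For reference, a minimal sketch of the plain MPI ping-pong pattern used as the baseline is shown below. The element count, tag and iteration count are placeholders rather than the exact benchmark parameters; in the MDMP version described next, it is the copy between the two buffers that is instrumented so that each element can be sent as soon as it has been written.

    #include <mpi.h>
    #include <string.h>

    #define N     1000   /* elements per message (placeholder) */
    #define ITERS 100    /* exchanges per benchmark run        */

    void pingpong(int rank)
    {
        static double sendbuf[N], recvbuf[N];
        int partner = (rank == 0) ? 1 : 0;

        for (int it = 0; it < ITERS; it++) {
            if (rank == 0) {
                MPI_Send(sendbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
                MPI_Recv(recvbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                /* copy the returned data into the send buffer for the next exchange */
                memcpy(sendbuf, recvbuf, sizeof(sendbuf));
            } else {
                MPI_Recv(recvbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                /* copy the received data into the send buffer before replying */
                memcpy(sendbuf, recvbuf, sizeof(sendbuf));
                MPI_Send(sendbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
            }
        }
    }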
For the MDMP version, the send and recv functionality specifies the single message to be sent and received, and performs the send and receive on the first iteration of the benchmark. On subsequent iterations the MDMP functionality identifies when each element of the message data is ready to be sent (through tracking the data copying process between the send and receive buffers) and sends individual elements as soon as they are ready to go. This means that, for a run of the benchmark using a message of 1000 elements in size, the MPI version will send one message between processes whereas the MDMP version will send 1000 messages (apart from on the first iteration, where it will only send one message).

The second benchmark, called SelectivePingPong, alters the basic PingPong benchmark we have already described by performing the same functionality but only sending a portion of the overall data owned by a process in the messages. It is possible to vary both the overall size of the data each process has and the amount of that data that is sent; for instance, each process could own an array that is 100 elements long but only the first 10 and last 10 elements are sent in the PingPong messages. This benchmark is designed to investigate the performance impact of varying the overall data in a computation and the amount that is being communicated via MDMP.

The third benchmark, called DelayPingPong, also alters the basic PingPong benchmark, by adding a delay in the loop that copies the data from the receive buffer to the send buffer. This delay is variable and is designed to simulate some level of computational work being undertaken during what would be the main computational loop for a computational kernel using MDMP. The delay is performed by a routine which iterates through a loop adding an integer to a double a specified number of times (delay elements).

The final benchmark, SelectiveDelayPingPong, combines the second and third benchmarks, meaning the PingPong process can contain both a user-defined delay in the data copy loop and a selective amount of data to be transferred.

Figure 5a demonstrates the cost of MDMP compared to plain MPI where there is no scope for communication and computation overlap. The runtime for MDMP increases more or less linearly as the size of the data to be transferred increases, whereas the runtime for MPI stays relatively constant. However, if we examine Figure 5b we can see that MDMP begins to see some benefits over MPI when the delay added to the data copy routine is increased. On JUQUEEN and HECToR, MDMP is faster than MPI when the delay is around 1000 and 800 delay elements respectively, although on HELIOS MPI is always faster than MDMP (albeit with a smaller gap in performance between the two methods). If not all the data that is copied between buffers is sent, as in the case of the SelectivePingPong benchmark shown in Figure 6a, then the overall difference in performance between MPI and MDMP is reduced in comparison to the normal PingPong benchmark, although MDMP is still more costly than MPI. Finally, the combined benchmark, with results shown in Figure 6b, where 1024 overall data elements are processed and either 1 or 32 elements are sent with variable amounts of delay, highlights where MDMP can improve performance. When only one element is being sent, all that is required is 16 floating point adds between communications (16 delay elements; strictly, 1023 * 16 delay elements in total, as the delay is applied per array element) to enable MDMP to optimise the communications.
If 32 elements are being sent then around 32 floating point adds are required for the communication hiding that MDMP performs to provide a performance benefit. Whilst these benchmarks are useful in enabling us to evaluate MDMP performance, we recognise that a more realistic benchmark that evaluates MDMP against real kernel computations would also be valuable, as it would enable us to evaluate the overall impact of MDMP on cache, memory, and processor usage for real applications. We are in the process of undertaking such benchmarks at the moment but unfortunately do not have these results in time for this paper submission.

7. CONCLUSIONS

We have outlined a novel approach to message passing programming on distributed memory HPC architectures and demonstrated that, given a reasonable ratio of computation to communication, MDMP can reduce the overall cost of communications and improve application performance. We are aware that MDMP presents performance risks for parallel programs, including impacting cache and memory usage, and consuming additional memory. However, we believe the ability to enable and disable MDMP optimisations, and the potential benefits to ease of use and programmability from MDMP, make this approach a sensible one to investigate for future message passing programming. We are currently working on a full compiler implementation of MDMP, including a formal MDMP language definition, and more involved benchmarks to evaluate MDMP in much more detail.

8. ACKNOWLEDGMENTS

Part of this work was supported by an e-Science grant from Chalmers University. Part of this work was supported by the Nu-FuSE project, which is funded by EPSRC through G8 Multilateral Research Funding, Nu-FuSE grant EP/J004839/1.

9. REFERENCES

[1] F. Blagojević, P. Hargrove, C. Iancu, and K. Yelick. Hybrid pgas runtime support for multicore nodes. In Proceedings of the Fourth Conference on Partitioned Global Address Space Programming Model, PGAS ’10, pages 3:1–3:10, New York, NY, USA, 2010. ACM.
[2] W. W. Carlson, J. M. Draper, and D. E. Culler. S-246, 187 introduction to upc and language specification.
[3] B. Chamberlain, D. Callahan, and H. Zima. Parallel programmability and the chapel language. Int. J. High Perform. Comput. Appl., 21(3):291–312, Aug. 2007.
[4] A. Danalis, K. yong Kim, L. Pollock, and M. Swany. Transformations to parallel codes for communication-computation overlap. In Supercomputing 2005, page 58, 2005.
[5] A. Faraj, X. Yuan, and D. Lowenthal. Star-mpi: self tuned adaptive routines for mpi collective operations. In Proceedings of the 20th annual international conference on Supercomputing, ICS ’06, pages 199–208, New York, NY, USA, 2006. ACM.
[6] L. Fishgold, A. Danalis, L. Pollock, and M. Swany. An automated approach to improve communication-computation overlap in clusters. In Proceedings of the 20th international conference on Parallel and distributed processing, IPDPS ’06, pages 290–290, Washington, DC, USA, 2006. IEEE Computer Society.
[7] MPI Forum. MPI: A message-passing interface standard. Available at: http://www.mpi-forum.org.
[8] XcalableMP Specification Working Group. Specification of XcalableMP, version 1.1, November 2012. Available from: http://www.xcalablemp.org/spec/xmp-spec-1.1.pdf.
[9] T. Hoefler and T. Schneider. Runtime detection and optimization of collective communication patterns.
In Proceedings of the 21st international conference on Parallel architectures and compilation techniques, PACT ’12, pages 263–272, New York, NY, USA, 2012. ACM.
[10] C. Hu, Y. Shao, J. Wang, and J. Li. Automatic transformation for overlapping communication and computation. In J. Cao, M. Li, M.-Y. Wu, and J. Chen, editors, Network and Parallel Computing, volume 5245 of Lecture Notes in Computer Science, pages 210–220. Springer Berlin Heidelberg, 2008.
[11] C. Iancu, W. Chen, and K. Yelick. Performance portable optimizations for loops containing communication operations. In Proceedings of the 22nd annual international conference on Supercomputing, ICS ’08, pages 266–276, New York, NY, USA, 2008. ACM.
[12] A. Jackson and P. Strand. MDMP: Managed Data Message Passing. In Exascale Applications and Software Conference 2013, April 2013.
[13] H. Jin, R. Hood, and P. Mehrotra. A practical study of upc using the nas parallel benchmarks. In Proceedings of the Third Conference on Partitioned Global Address Space Programing Models, PGAS ’09, pages 8:1–8:7, New York, NY, USA, 2009. ACM.
[14] A. Knüpfer, D. Kranzlmüller, and W. Nagel. Detection of collective mpi operation patterns. In D. Kranzlmüller, P. Kacsuk, and J. Dongarra, editors, Recent Advances in Parallel Virtual Machine and Message Passing Interface, volume 3241 of Lecture Notes in Computer Science, pages 259–267. Springer Berlin Heidelberg, 2004.
[15] C. Lattner and V. Adve. Llvm: A compilation framework for lifelong program analysis & transformation. In Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, CGO ’04, pages 75–, Washington, DC, USA, 2004. IEEE Computer Society.
[16] J. Lee, M. Sato, and T. Boku. Openmpd: A directive-based data parallel language extension for distributed memory systems. In Parallel Processing Workshops, 2008. ICPP-W ’08. International Conference on, pages 121–128, 2008.
[17] Y. Li, J. Dongarra, and S. Tomov. A note on auto-tuning gemm for gpus. In Proceedings of the 9th International Conference on Computational Science: Part I, ICCS ’09, pages 884–892, Berlin, Heidelberg, 2009. Springer-Verlag.
[18] J. D. McCalpin. Memory bandwidth and machine balance in current high performance computers. IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter, pages 19–25, Dec. 1995.
[19] A. Müller and R. Rühl. Extending high performance fortran for the support of unstructured computations. In Proceedings of the 9th international conference on Supercomputing, ICS ’95, pages 127–136, New York, NY, USA, 1995. ACM.
[20] R. W. Numrich and J. Reid. Co-arrays in the next fortran standard. SIGPLAN Fortran Forum, 24(2):4–17, Aug. 2005.
[21] OpenMP Architecture Review Board. OpenMP Fortran application program interface, version 1.1. Available from: http://www.openmp.org.
[22] S. Pellegrini, T. Fahringer, H. Jordan, and H. Moritsch. Automatic tuning of mpi runtime parameter settings by using machine learning. In Proceedings of the 7th ACM international conference on Computing frontiers, CF ’10, pages 115–116, New York, NY, USA, 2010. ACM.
[23] H. Shan, F. Blagojević, S.-J. Min, P. Hargrove, H. Jin, K. Fuerlinger, A. Koniges, and N. J. Wright. A programming model performance study using the nas parallel benchmarks. Sci. Program., 18(3-4):153–167, Aug. 2010.
[24] T. Wen, J. Su, P. Colella, K. Yelick, and N. Keen. An adaptive mesh refinement benchmark for modern parallel programming languages. In Proceedings of the 2007 ACM/IEEE conference on Supercomputing, SC ’07, pages 40:1–40:12, New York, NY, USA, 2007. ACM.
[25] M. B. Xu Guo. PRACE deliverable D7.4.3: Tier-0 applications and systems usage, May 2012.

Figure 5: Runtime of PingPong and DelayPingPong benchmarks using up to 1024 data elements. (a) PingPong benchmark; (b) DelayPingPong benchmark (1024 elements).
Figure 6: Runtime of benchmarks with 1024 data elements, varying the number of selected elements or the number of delay elements. (a) SelectivePingPong; (b) SelectiveDelayPingPong.
arXiv:1710.07198v1 [] 19 Oct 2017 Dress like a Star: Retrieving Fashion Products from Videos Noa Garcia George Vogiatzis Aston University, UK Aston University, UK [email protected] [email protected] Abstract This work proposes a system for retrieving clothing and fashion products from video content. Although films and television are the perfect showcase for fashion brands to promote their products, spectators are not always aware of where to buy the latest trends they see on screen. Here, a framework for breaking the gap between fashion products shown on videos and users is presented. By relating clothing items and video frames in an indexed database and performing frame retrieval with temporal aggregation and fast indexing techniques, we can find fashion products from videos in a simple and non-intrusive way. Experiments in a large-scale dataset conducted here show that, by using the proposed framework, memory requirements can be reduced by 42.5X with respect to linear search, whereas accuracy is maintained at around 90%. Figure 1: Fashion items in a popular TV show. difficult and it involves time-consuming searches. To help in the task of finding fashion products that appear in multimedia content, some websites, such as Film Grab4 or Worn on TV 5 , provide catalogs of items that can be seen on films and TV shows, respectively. These websites, although helpful, still require some effort before actually buying the fashion product: users need to actively remember items from videos they have previously seen and navigate through the platform until they find them. On the contrary, we propose an effortless, non-intrusive and fast computer vision tool for searching fashion items in videos. This work proposes a system for retrieving clothing products from large-scale collections of videos. By taking a picture of the playback device during video playback (i.e. an image of the cinema screen, laptop monitor, tablet, etc.), the system identifies the corresponding frame of the video sequence and returns that image augmented with the fashion items in the scene. In this way, users can find a product as soon as they see it by simple taking a photo of the video sequence where the product appears. Figure 1 shows a frame 1. Introduction Films and TV shows are a powerful marketing tool for the fashion industry, since they can reach thousands of millions of people all over the world and impact on fashion trends. Spectators may find clothing appearing in movies and television appealing and people’s personal style is often influenced by the multimedia industry. Also, online videosharing websites, such as YouTube1 , have millions of users generating billions of views every day2 and famous youtubers3 are often promoting the latest threads in their videos. Fashion brands are interested in selling the products that are advertised in movies, television or YouTube. However, buying clothes from videos is not straightforward. Even when a user is willing to buy a fancy dress or a trendy pair of shoes that appear in the latest blockbuster movie, there is often not enough information to complete the purchase. Finding the item and where to buy it is, most of the times, 1 https://www.youtube.com/ 2 https://www.youtube.com/yt/press/statistics.html 4 http://filmgarb.com/ 3 Users 5 https://wornontv.net who have gained popularity from their videos on YouTube 1 of a popular TV show along with its fashion items. 
Recently, many computer vision applications for clothing retrieval [14, 11] and style recommendation [24, 21] have been proposed. Our system is close to clothing retrieval approaches in that the aim is to retrieve a product from an image. Instead of retrieving products directly as in standard clothing retrieval, we propose to first retrieve frames from the video collection. There are several reasons for that. Firstly, in standard clothing retrieval users usually provide representative images of the object of interest (e.g. dresses in front view, high-heeled shoes in side view, etc.). In a movie, the view of the object of interest cannot be chosen, and items might be partially or almost completely occluded, such as the red-boxed dress in Figure 1. Secondly, standard clothing retrieval requires to select a bounding box around the object of interest. This is undesirable in clothing video retrieval as it may distract user’s attention from the original video content. Finally, performing frame retrieval instead of product retrieval in videos allows users to get the complete list of fashion items in a scene, which usually matches the character’s style, including small accessories such as earrings, watches or belts. Some of these items are usually very small, sometimes almost invisible, and would be impossible to detect and recognize using standard object retrieval techniques. Retrieving frames from videos is a challenging task in terms of scalability. An average movie of two hours duration may contain more than 200,000 frames. That means that with only five movies in the video dataset, the number of images in the collection might be over a million. To overcome scalability issues, we propose the combination of two techniques. Firstly, frame redundancy is exploited by tracking and summarizing local binary features [7, 20] into a key feature. Secondly, key features are indexed in a kd-tree for fast search of frames. Once the scene the query image belongs to is identified, the clothing items associated to that scene are returned to the user. The contributions of this paper are: • Introduction of the video clothing retrieval task and collection of a dataset of videos for evaluation. • Proposal of a fast and scalable video clothing retrieval framework based on frame retrieval and indexing algorithms. • Exhaustive evaluation of the framework, showing that similar accuracy to linear search can be achieved while the memory requirements are reduced by a factor of 42.5. This paper is structured as follows: related work is summarized in Section 2; the details of the proposed system are explained in Section 3; experiment setup and results are detailed in Section 4 and 5, respectively; finally, the conclusions are summarized in Section 6. 2. Related Work Clothing Retrieval. Clothing retrieval is the field concerned with finding similar fashion products given a visual query. In the last few years, it has become a popular field within the computer vision community. Proposed methods [14, 25, 13, 11, 12, 15] are mainly focused on matching realworld images against online catalog photos. In this crossscenario problem, some methods [25, 13, 15] train a set of classifiers to find items with similar attributes. Other approaches [14, 12, 11] model the differences across domains by using a transfer learning matrix [14] or a deep learning architecture [12, 11]. All these methods, however, perform clothing retrieval on static images, in which a representative and recognizable part of the item of interest is contained. 
In the scenario of clothing retrieval from films and television, fashion items are not always shown in a representative and recognizable way across the whole duration of the video due to object occlusions and changes in camera viewpoints. Given a snapshot of a video, an alternative solution is to first, identify the frame it belongs to and then, find the products associated to it. Scene Retrieval. Scene retrieval consists on finding similar frames or scenes from a video collection according to a query image. Early work, such as Video Google [22] and others [18, 9, 8], retrieve frames from video datasets by applying image retrieval methods and processing each frame independently. More recent scene retrieval approaches use temporal aggregation methods to improve scalability and reduce memory requirements. [2, 5] extract hand-crafted local features from frames, track them along time and aggregate them into a single vector. Other proposals [26, 3, 4] produce a single scene descriptor, which improves efficiency but involves a more difficult comparison against the static query image. Recently, Araujo et al. [4] showed that, in scene retrieval, methods based on hand-crafted features outperform convolutional neural networks. Similarly to [2], our approach is based on the aggregation of local features along temporal tracks. Instead of using SIFT features [16], we provide evidence that binary features [7, 20] are more appropriate for the task and that they can be easily indexed in a kd-tree for fast search. 3. Video Clothing Retrieval Framework To retrieve fashion products from movies, television and YouTube, we propose a framework based on frame retrieval and fast indexing techniques. Since the number of frames explodes with the number of movies available in a collection, a key feature of this framework is its scalability. The system overview is shown in Figure 2. There are three main modules: product indexing, training phase and query phase. The product indexing and the training phase are done offline, whereas the query phase is performed online. Product Indexing Offline Training phase Fashion Items Indexed Database Video Collection Feature Extraction Feature Tracking Shot Detection Key Feature Computation Key Feature Indexing Video Frames Online Output Query phase Retrieved Frame Feature Extraction Key Feature Search Shot Search Fashion Items Frame Search Query Image Figure 2: System overview. First, during the product indexing, fashion items and frames are related in an indexed database. Then, frames are indexed by using key features. Finally, in the query phase, query images are used to retrieve frames and find associated fashion items. In the product indexing, fashion items and frames from the video collection are related in an indexed database. This process can be done manually (e.g. with Amazon Mechanical Truck6 ) or semi-automatically with the support of a standard clothing retrieval algorithm. In the training phase, features are extracted and tracked along time. Tracks are used to detect shot boundaries and compute aggregated key features, which are indexed in a kd-tree. In the query phase, features extracted from the query image are used to find their nearest key features in the kd-tree. With those key features, the shot and the frame the query image belong to are retrieved. Finally, the fashion products associated with the retrieved frame are returned by the system. 3.1. 
Feature Extraction and Tracking Local features are extracted from every frame in the video collection and tracked across time by applying descriptor and spatial filters. The tracking is performed in a bidirectional way so features within a track are unique (i.e. each feature can only be matched with up to two features: one in the previous frame and one in the following frame). 6 https://www.mturk.com As local features, instead of the popular SIFT [16], we use binary features for two main reasons. Firstly, because Hamming distance for binary features is faster to compute than Euclidean distance for floating-points vectors. Secondly, because we find that binary features are more stable over time than SIFT, as can be seen in Figure 3. Convolutional neural network (CNN) features from an intermediate layer of a pre-trained network were also studied. However, their results were not satisfactory as the resulting tracked features quickly started to get confused when the dataset increased. 3.2. Shot Detection and Key Feature Computation Consecutive frames that share visual similarities are grouped into shots. The boundaries of different shots are detected when two consecutive frames have no common tracks. Each shot contains a set of tracks, each track representing the trajectory of a particular feature along time. We define a key feature as the aggregation of all the features in the same track into a single vector. Subsequently, each shot is then represented by a set of key features, similarly to how frames are represented by a set of features. For each track, a key feature is computed by using majorities [10]. Figure 4: Visual similarities between frames. Left: Query image. Middle: Ground truth frame. Right: Retrieved frame similar to ground truth frame, thus, Visual Match. FIFO queue we first explore the closest nodes to the root, because the corresponding bits exhibit more variability by construction of the tree. 3.4. Search Figure 3: Trajectories of sample tracks along a sequence of frames. Left: Binary features. Right: SIFT features. Binary features are more constant over time than SIFT features. 3.3. Key Feature Indexing A popular method for searching in binary feature space is FLANN [17]. FLANN uses multiple, randomly generated hierarchical structures and it is not obvious how it can be deployed across multiple CPUs in order to provide the scalability we are looking for. On the other hand, kd-trees are a formalism that has been shown to be highly parallelizable in [1]. In this work, we modify the basic kd-tree to handle binary features and index our key features. In a kd-tree each decision node has an associated dimension, a splitting value and two child nodes. If the value of a query vector in the associated dimension is greater than the splitting value, the vector is assigned to the left child. Otherwise, the vector is associated to the right child. The process is repeated at each node during the query phase until a leaf node is reached. In our case, as we are dealing with binary features, each decision node has an associated dimension, dim, such that all query vectors, v, with v[dim] = 1 belong to the left child, and all vectors with v[dim] = 0 belong to the right child. The value dim is chosen such that the training data is split more evenly in that node, i.e. its entropy is maximum. Note that this criterion is similar to the one used in the ID3 algorithm [19] for the creation of decision trees, but where the splitting attribute is chosen as the one with smallest entropy. 
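A minimal sketch of the split-bit selection just described is given below, assuming the binary descriptors are stored as fixed-size byte arrays. It simply picks the bit whose 0/1 split over the training descriptors is closest to even, i.e. the maximum-entropy dimension; the function and variable names are illustrative and are not taken from the paper's implementation.

    #include <stddef.h>
    #include <stdlib.h>

    #define DESC_BYTES 32   /* e.g. a 256-bit BRIEF descriptor */

    /* Return the bit position whose 0/1 split over the n descriptors is
     * closest to 50/50; this is the test used at the decision node. */
    int choose_split_bit(const unsigned char desc[][DESC_BYTES], size_t n)
    {
        int best_bit = 0;
        long best_balance = -1;   /* larger means closer to an even split */

        for (int bit = 0; bit < DESC_BYTES * 8; bit++) {
            size_t ones = 0;
            for (size_t i = 0; i < n; i++)
                ones += (desc[i][bit / 8] >> (bit % 8)) & 1u;

            long balance = (long)n - labs((long)(2 * ones) - (long)n);
            if (balance > best_balance) {
                best_balance = balance;
                best_bit = bit;
            }
        }
        return best_bit;   /* query vectors with this bit set go to the left child */
    }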
Leaf nodes have as many as SL indices pointing to the features that ended up in that node. A first-in first-out (FIFO) queue keeps record of the already visited nodes to backtrack B times and explore them later. We use a FIFO queue to ensure that even if some of the bits in a query vector are wrong, the vector can reach its closest neighbours by exploring unvisited nodes latter. With the In the query phase, binary features are extracted from an input image and assigned to its nearest set of key features by searching down the kd-tree. Each key feature votes for the shot it belongs to. The set of frames contained in the most voted shot are compared against the input image by brute force, i.e. distances between descriptors in the query image and descriptors in the candidate frames are computed. The frame with minimum distance is retrieved. Shots are commonly groups of a few hundreds of frames, thus the computation can be performed very rapidly when applying the Hamming distance. Finally, all the fashion vectors associated with the retrieved frame are returned to the user. 4. Experiments Experimental Details. To evaluate our system a collection of 40 movies with more than 80 hours of video and up to 7 million frames is used. Query images are captured by a webcam (Logitech HD Pro Webcam C920) while movies are being played on a laptop screen. To provide a ground truth for our experiments, the frame number of each captured image is saved in a text file. Movies and queries have different resolutions and aspect ratios, thus all the frames and images are scaled down to 720 pixels in width. Binary features are computed by using ORB detector [20] and BRIEF extractor [7]. In the tracking module, only matches with a Hamming distance less than 20 and a spatial distance less than 100 pixels are considered, whereas in the key feature computation algorithm, only tracks longer than 7 frames are used. The default values for the kd-tree are set at SL = 100 and B = 50. Evaluation criteria. The aim of the system is to find the fashion products in the query scene, so we only retrieve one frame per query image. The returned frame may not be the same frame as the one in the annotated ground truth but one visually similar. As long as the retrieved frame shares strong similarities with the ground truth frame, we consider it as a Visual Match, as shown in Figure 4. To measure the visual similarities between two frames, we match SURF [6] BF KT KF Ours 85M 85M 25M 2M 2.53GB 2.53GB 762MB 61MB 0.90 0.91 0.92 0.94 0.92 0.93 B = 100 0.96 0.93 0.94 B = 250 0.97 0.93 0.94 Indexed Features Memory B = 10 (b) Figure 5: Precision of the evaluation method: (a) TPR and FPR curves; (b) Three ground truth frames (left column) and retrieved frames (right column) along with their score. features. The linear comparison between two frames that do not present any noise or perspective distortion is an easy task that almost all kinds of features can perform correctly. If the matching score between SURF features of two dataset frames is greater than a threshold, τ , they are considered to be visually similar. To measure the precision of this evaluation method, we manually annotate either if a pair of frames (i.e. ground truth frame and retrieved frame) is a Visual Match or not, along with its score. 
For different values of τ the True Positive Rate (TPR) as well as the False Positive Rate (FPR) are computed as: TPR = TP TP + FN FPR = FP FP + TN (1) where T P is the number of true positives (Visual Match with score > τ ), F P is the number of false positives (No Match with score > τ ), T N is the number of true negatives (No Match with score ≤ τ ) and F N is the number of false negatives (Visual Match with score ≤ τ ). Figure 5 shows both the TPR and the FPR computed with the annotations of 615 pairs of frames and 406 different values of τ . Among all the possible values, we chose τ = 0.15, with T P R = 0.98 and F P R = 0. Finally, the accuracy for a set of query images is computed as: Acc = No. Visual Matches Total No. Queries (2) 5. Experimental Results To evaluate the proposed framework, two different experiments are performed. In the first one, our frame retrieval system is compared against other retrieval systems. In the second one, the performance of the frame retrieval method when scaling up the video collection is evaluated in terms of accuracy. 5.1. Retrieval Performance First, we compare our system against other retrieval frameworks. Accuracy (a) B = 50 0.98 Table 1: Comparison between different systems on a single movie. Accuracy is shown for four backtracking steps, B. Our system achieves comparable results by using 42.5x less memory than BF and KT and 12.5x less memory than KF. BF. Brute Force matcher, in which query images are matched against all frames and all features in the database. Brute Force system is only used as an accuracy benchmark, since each query take, in average, 46 minutes to be processed. Temporal information is not used. KT. Kd-Tree search, in which all features from all frames are indexed using a kd-tree structure, similarly to [1]. Temporal information is not used. KF. Key Frame extraction method [23], in which temporal information is used to reduce the amount of frames of each shot into a smaller set of key frames. Key frames are chosen as the ones at the peaks of the distance curve between frames and a reference image computed for each shot. For each key frame, features are extracted and indexed in a kd-tree structure. We compare these three systems along with the method proposed here, using a single movie, The Devil Wears Prada, consisting of 196,572 frames with a total duration of 1 hour 49 minutes and 20 seconds. To be fair in our comparison, we use binary features in each method. The results of this experiment are detailed in Table 1. The performance is similar for the four systems. However, the amount of processed data and the memory requirements are drastically reduced when the temporal information of the video collection is used. In the case of our system, by exploiting temporal redundancy between frames, the memory is reduced by 42.5 times with respect to BF and KT and by 12.5 times with respect to the key frame extraction technique. Theoretically, that means that when implemented in a distributed kd-tree system as the one in [1], where the authors were able to process up to 100 million images, our system might be able to deal with 4,250 million frames, i.e. more than Title N. Frames N. Features N. Shots N. Key Features N. 
Queries Accuracy The Help 210387 101M 1726 2.2M 813 0.98 Intolerable Cruelty 179234 86M 1306 2M 544 0.97 Casablanca 147483 71M 881 1.5M 565 0.96 Witching & Bitching 163069 66M 4193 0.8M 588 0.74 Pirates of the Caribbean 3 241127 108M 3695 1.7M 881 0.74 Captain Phillips 190496 59M 7578 0.6M 618 0.67 7M 3040M 116307 58M 25142 0.87 Total Table 2: Summary of the results from the experiments on the dataset containing 40 movies. Due to space restrictions, only three of the best and worst accuracies are shown, along with the total results for the whole database. Pirates of the Caribbean 3 perform worst, as fewer descriptors can be found in those kinds of dimly lit images. Figure 6 shows the evolution of accuracy when the size of the database increases for five different movies. It can be seen that most of the movies are not drastically affected when the number of frames in the database is increased from 200,000 to 7 million. For example, both Intolerable Cruelty and 12 Years a Slave movies maintain almost a constant accuracy for different sizes of the collection. Even in the worst case scenario, The Devil Wears Prada movie, the loss in accuracy is less than a 8.5%. This suggests that our frame retrieval system is enough robust to handle largescale video collections without an appreciable loss in performance. Figure 6: Accuracy vs Database size for 5 different movies. 20,000 movies and 40,000 hours of video. 5.2. Large-scale Dataset In this experiment, we explore the scalability of our framework by increasing the dataset up to 40 movies and 7 million frames. The collection is diverse and it contains different movie genres such as animation, fantasy, adventure, comedy or drama, to ensure that a wide range of movies can be indexed and retrieved with the proposed algorithm. Table 2 shows the results of this experiment. It can be seen that, by using our temporal aggregation method, the amount of data is reduced from 7 million frames and 3,040 million features to only 116,307 shots and 58 million key features. Even so, the total number of key features in the 40 movie collection is still smaller than the 80 million features that, in average, a single movie contains. The total accuracy over the 40 movies is 0.87, reaching values of 0.98 and 0.97 in The Help and Intolerable Cruelty movies, respectively. Movies with very dark scenes such as Captain Phillips and 6. Conclusions This work proposes a framework to perform video clothing retrieval. That is, given a snapshot of a video, identify the video frame and retrieve the fashion products that appear in that scene. This task would help users to easily find and purchase items shown in movies, television or YouTube videos. We propose a system based on frame retrieval and fast indexing. Experiments show that, by using our temporal aggregation method, the amount of data to be processed is reduced by a factor of 42.5 with respect to linear search and accuracy is maintained at similar levels. The proposed system scales well when the number of frames is increased from 200,000 to 7 million. Similar to [1] our system can easily be parallelised across multiple CPUs. Therefore, the encouraging experimental results shown here indicate that our method has the potential to index fashion products from thousands of movies with high accuracy. In future work we intend to further increase the size of our dataset in order to identify the true limits of our approach. 
Furthermore it is our intention to make all data publicly available so that other researchers can investigate this interesting retrieval problem. References [1] M. Aly, M. Munich, and P. Perona. Distributed kd-trees for retrieval from very large image collections. British Machine Vision Conference, 2011. 4, 5, 6 [2] A. Anjulan and N. Canagarajah. Object based video retrieval with local region tracking. Signal Processing: Image Communication, 22(7), 2007. 2 [3] A. Araujo, J. Chaves, R. Angst, and B. Girod. Temporal aggregation for large-scale query-by-image video retrieval. In Image Processing (ICIP), 2015 IEEE International Conference on, pages 1519–1522. IEEE, 2015. 2 [4] A. Araujo and B. Girod. Large-scale video retrieval using image queries. IEEE Transactions on Circuits and Systems for Video Technology, 2017. 2 [5] A. Araujo, M. Makar, V. Chandrasekhar, D. Chen, S. Tsai, H. Chen, R. Angst, and B. Girod. Efficient video search using image queries. In Image Processing (ICIP), 2014 IEEE International Conference on, pages 3082–3086. IEEE, 2014. 2 [6] H. Bay, T. Tuytelaars, and L. Van Gool. SURF : Speeded Up Robust Features. European Conference on Computer Vision, pages 404–417, 2006. 4 [7] M. Calonder, V. Lepetit, C. Strecha, and P. Fua. BRIEF : Binary Robust Independent Elementary Features. ECCV, 2010. 2, 4 [8] D. Chen, N.-M. Cheung, S. Tsai, V. Chandrasekhar, G. Takacs, R. Vedantham, R. Grzeszczuk, and B. Girod. Dynamic selection of a feature-rich query frame for mobile video retrieval. In Image Processing (ICIP), 2010 17th IEEE International Conference on, pages 1017–1020. IEEE, 2010. 2 [9] O. Chum, J. Philbin, M. Isard, and A. Zisserman. Scalable near identical image and shot detection. In Proceedings of the 6th ACM international conference on Image and video retrieval, pages 549–556. ACM, 2007. 2 [10] C. Grana, D. Borghesani, M. Manfredi, and R. Cucchiara. A fast approach for integrating orb descriptors in the bag of words model. In IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2013. 3 [11] M. Hadi Kiapour, X. Han, S. Lazebnik, A. C. Berg, and T. L. Berg. Where to buy it: Matching street clothing photos in online shops. In ICCV, 2015. 2 [12] J. Huang, R. S. Feris, Q. Chen, and S. Yan. Cross-domain image retrieval with a dual attribute-aware ranking network. In Proceedings of the IEEE International Conference on Computer Vision, pages 1062–1070, 2015. 2 [13] Y. Kalantidis, L. Kennedy, and L.-J. Li. Getting the look: clothing recognition and segmentation for automatic product suggestions in everyday photos. In Proceedings of the 3rd ACM conference on International conference on multimedia retrieval, pages 105–112. ACM, 2013. 2 [14] S. Liu, Z. Song, G. Liu, C. Xu, H. Lu, and S. Yan. Street-toshop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3330–3337. IEEE, 2012. 2 [15] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, 2016. 2 [16] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 2004. 2, 3 [17] M. Muja and D. G. Lowe. Fast matching of binary features. Proceedings of the 2012 9th Conference on Computer and Robot Vision, 2012. 4 [18] D. Nistér and H. Stewénius. Scalable recognition with a vocabulary tree. CVPR, 2006. 2 [19] J. R. Quinlan. Induction of decision trees. 
Machine learning, 1(1):81–106, 1986. 4 [20] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: an efficient alternative to SIFT or SURF. In ICCV, 2011. 2, 4 [21] E. Simo-Serra, S. Fidler, F. Moreno-Noguer, and R. Urtasun. Neuroaesthetics in fashion: Modeling the perception of fashionability. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 869–877, 2015. 2 [22] J. Sivic and A. Zisserman. Video Google: a text retrieval approach to object matching in videos. ICCV, 2003. 2 [23] Z. Sun, K. Jia, and H. Chen. Video key frame extraction based on spatial-temporal color distribution. In Intelligent Information Hiding and Multimedia Signal Processing, 2008. IIHMSP’08 International Conference on, pages 196– 199. IEEE, 2008. 5 [24] A. Veit, B. Kovacs, S. Bell, J. McAuley, K. Bala, and S. Belongie. Learning visual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the IEEE International Conference on Computer Vision, pages 4642–4650, 2015. 2 [25] D. Wei, W. Catherine, B. Anurag, P.-m. Robinson, and S. Neel. Style finder: fine-grained clothing style recognition and retrieval, 2013. 2 [26] C.-Z. Zhu and S. Satoh. Large vocabulary quantization for searching instances from videos. In Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, page 52. ACM, 2012. 2
Scheduling to Minimize Total Weighted Completion Time via Time-Indexed Linear Programming Relaxations arXiv:1707.08039v1 [] 25 Jul 2017 Shi Li ∗ Abstract We study approximation algorithms for scheduling problems with the objective of minimizing total weighted completion time, under identical and related machine models with job precedence constraints. We give algorithms that improve upon many previous 15 to 20-year-old state-ofart results. A major theme in these results is the use of time-indexed linear programming relaxations. These are natural relaxations for their respective problems, but surprisingly are not studied in the literature. We also consider the scheduling problem of minimizing total weighted completion time on unrelated machines. The recent breakthrough result of [Bansal-Srinivasan-Svensson, STOC 2016] gave a (1.5 − c)-approximation for the problem, based on some lift-and-project SDP relaxation. Our main result is that a (1.5 − c)-approximation can also be achieved using a natural and considerably simpler time-indexed LP relaxation for the problem. We hope this relaxation can provide new insights into the problem. 1 Introduction Scheduling jobs to minimize total weighted completion time is a well-studied topic in scheduling theory, operations research and approximation algorithms. A systematic study of this objective under many different machine models (e.g, identical, related and unrelated machine models, job shop scheduling, precedence constraints, preemptions) was started in late 1990s and since then it has led to great progress on many fundamental scheduling problems. In spite of these impressive results, the approximability of many problems is still poorly understood. Many of the state-of-art results that were developed in late 1990s or early 2000s have not been improved since then. Continuing the recent surge of interest on the total weighted completion time objective [4, 19, 37], we give improved approximation algorithms for many scheduling problems under this objective. The machine models we study in this paper include identical machine model with job precedence constraints, with uniform and non-uniform job sizes, related machine model with job precedence constraints and unrelated machine model. A major theme in our results is the use of time-indexed linear programming relaxations. Given the time aspect of scheduling problems, they are natural relaxations for their respective problems. However, to the best of our knowledge, many of these relaxations were not studied in the literature and thus their power in deriving improved approximation ratios was not well-understood. Compared to other types of relaxations, solutions to these relaxations give fractional scheduling of jobs on machines. Many of our improved results were obtained by using the fractional scheduling to identify the loose analysis in previous results. ∗ Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY, USA 1 1.1 Definitions of Problems and Our Results We now formally describe the problems we study in the paper and state our results. In all of these problems, we have a set J of n jobs, P a set M of m machines, each job j ∈ J has a weight wj ∈ Z>0 , and the objective to minimize is j∈J wj Cj , where Cj is the completion time of the job j. We consider non-preemptive schedules only. So, a job must be processed on a machine without interruption. For simplicity, this global setting will not be repeated when we define problems. 
Scheduling on Identical Machines with Job Precedence Constraints In this problem, each job j ∈ J has a processing time (or size) pj ∈ Z>0 . The m machines are identical; each job j must be scheduled on one of the m machines non-preemptively; namely, j must be processed during a time interval of length pj on some machine. The completion time of j is then the right endpoint of this interval. Each machine at any time can only process at most one job. Moreover, there are precedence constraints given by a partial order “≺”, where a constraint j ≺ j ′ requires that job j ′ can only start after job j is completed. Using the popular three-field notation introduced by P Graham et al. [17], this problem is described as P prec j wj Cj . For the related problem P |prec|Cmax , i.e, the problem with the same setting but with the makespan objective, the seminal work of Graham [16] gives a 2-approximation algorithm, based on a simple machine-driven list-scheduling algorithm. In the algorithm, the schedule is constructed in real-time. As time goes, each idle machine shall pick any available job to process (a job is available if it is not scheduled but all its predecessors are completed.), if such a job exists; otherwise, it remains idle until some job becomes available. On the negative side, Lenstra and Rinnooy Kan [23] proved a (4/3 − ǫ)-hardness of approximation for P |prec|Cmax . Under some stronger version of the Unique Game Conjecture (UGC) introduced by Bansal and Khot [1], Svensson [39] showed that P |prec|Cmax is hard to approximate within a factor of 2 − ǫ for any ǫ > 0. With precedence constraints, the weighted completion time objective is more general than makespan: one can create a dummy job of size 0 and weight 1 that must be processedPafter all jobs in J, which have weight 0. Thus, the above negative results carry over to P prec j wj Cj . Indeed, Bansal and Khot [1] showed that the problem with even one machine is already hard to approximate within a factor of 2 − ǫ, P under their stronger version of UGC. However, no better hardness results are known for P prec j wj Cj , compared to those for P |prec|Cmax . On the positive side, by combining the Plist-scheduling algorithm of Graham [16] with a convex programming relaxation for P prec j Cj , Hall et al. [18] gave a 7-approximation for jw P P P prec j wj Cj . For the special case 1|prec| j wj Cj of the problem where there is only 1 machine, Hall et al. [18] gave a 2-approximation, which matches the (2 − ǫ)-hardness assuming the stronger version of UGC due to [1]. Later, Munier,PQueyranne and Schulz ([27], [29]) gave the current best 4-approximation algorithm for P prec j wj Cj , using a convex programming relaxation similar to that in Hall et al. [18] and in Charkrabarti et al. [6]. The convex programming gives a completion time vector (Cj )j∈J , and the algorithm of [27] runs a job-driven list-scheduling algorithm using the order of jobs determined by the values Cj − pj /2. In the algorithm, we schedule jobs j one by one, according to the non-increasing order of Cj − pj /2; at any iteration, we schedule ej − pj , C ej ] with the minimum C ej , subject to the precedence constraints and the j at an interval (C m-machine constraint. It has been a long-standing open problem to improve this factor of 4 (see the discussion after Open Problem 9 in [33]). Munier, Queyranne and P Schulz [27] also considered an important special case of the problem, denoted as P prec, pj = 1 j wj Cj , in which all jobs have size pj = 1. 
They showed that the 2 approximation ratio of their algorithm becomes 3 for the special case, which has not been improved since then. On the negative side, the 2 − ǫ strong UGC-hardness result of Bansal and Khot also applies to this special case. P In this paper, we improve the long-standing approximation ratios of 4 and 3 for P prec j wj Cj P and P prec, pj = 1 j wj Cj due to Munier, Queyranne and Schulz [27, 29]: P Theorem 1.1. There is a 2+ 2 ln 2+ ǫ < (3.387+ ǫ)-approximation algorithm for P prec j wj Cj , for every ǫ > 0. √ P Theorem 1.2. There is a 1 + 2 < 2.415-approximation algorithm for P prec, pj = 1 j wj Cj . Scheduling on Related Machines with Job Precedence Constraints Then we consider the scheduling problem on related machines. We have all the input parameters in the problem P P |prec| j wj Cj . Additionally, each machine i ∈ M is given a speed si > 0 and the time of processing job j on machine i is pj /si (so the m machines are not identical any more). A job j must be scheduled on some machine i during P an interval of length pj /si . Using the three-field notation, the problem is described as Q|prec| j wj Cj . Chudak and Shmoys [12] gave the current best O(log m) approximation algorithm for the prob√ lem, improving upon the previous O( m)-approximation due to Jaffe [20]. Using a general framework of Hall et al. [18] and Queyranne and Sviridenko [30], that converts an algorithm for a scheduling problem with makespan objective to an algorithm for the correspondent problem P with weighted completion time objective, Chudak and Shmoys reduced the problem Q|prec| j wj Cj to Q|prec|Cmax . In their algorithm for Q|prec|Cmax , we partition the machines into groups, each containing machines of similar speeds. By solving an LP relaxation, we assign each job to a group of machines. Then we can run a generalization of the Graham’s machine-driven list scheduling problem, that respect the job-to-group assignment. The O(log m)-factor comes from the number O(log m) of machine groups. On the negative side, all the hardness results for P |prec|Cmax carry over to both Q|prec|Cmax P and Q|prec| j wj Cj . Recently Bazzi and Norouzi-Fard [5] showed that assuming the hardness of some optimization problem on k-partite graphs, both problems are hard to be approximated within any constant. In this paper, we give a slightly better approximation ratio than O(log m) due to Chudak and P Shmoys [12], for both Q|prec|Cmax and Q|prec| j wj Cj : Theorem P 1.3. There are O(log m/ log log m)-approximation algorithms for both Q|prec|Cmax and Q|prec| j wj Cj . Scheduling on Unrelated Machines Finally, we consider the classic scheduling problem to minimize total weighted completion time on unrelated machines (without precedence constraints). In this problem we are given a number pi,j ∈ Z>0 for every i ∈ M, P j ∈ J, indicating the time needed to process job j on machine i. This problem is denoted as R|| j wj Cj . For this problem, there are many classic 3/2-approximation algorithms, based on a weak timeindexed LP relaxation [32] and a convex-programming relaxation ([36], [34]). These algorithms are all based on independent rounding. Solving some LP (or convex programming) relaxation gives yi,j values, where each yi,j indicates the fraction of job j that is assigned to machine i. Then the algorithms randomly and independently assign each job j to a machine i, according to the 3 distribution {yi,j }i . 
Under this job-to-machine assignment, the optimum scheduling can be found by applying the Smith rule on individual machines. Improving the 3/2-approximation ratio had been a long-standing open problem (see Open Problem 8 in [33]). The difficulty of improving the ratio comes from the fact that no independent rounding algorithm can give a better-than-3/2 approximation for R||Σ_j w_j C_j, as shown by Bansal, Srinivasan and Svensson [4]. This lower bound is irrespective of the relaxation used: even if the fractional solution is already a convex combination of optimum integral schedules, independent rounding can only give a 3/2-guarantee. To overcome this barrier, [4] introduced a novel dependence rounding scheme, which guarantees strong negative correlation between the events that jobs are assigned to the same machine i. Combining this with their lifted SDP relaxation for the problem, Bansal, Srinivasan and Svensson gave a (3/2 − c)-approximation algorithm for R||Σ_j w_j C_j, where c = 1/(108 × 20000). This solves the long-standing open problem in the affirmative.

Besides the slightly improved approximation ratio, our main contribution for this problem is that the (1.5 − c)-approximation ratio can also be achieved using the following natural time-indexed LP relaxation:

  min   Σ_j w_j Σ_{i,s} x_{i,j,s} (s + p_{i,j})                                 (LP_{R||wC})
  s.t.  Σ_{i,s} x_{i,j,s} = 1                          ∀ j                      (1)
        Σ_{j, s ∈ (t − p_{i,j}, t]} x_{i,j,s} ≤ 1      ∀ i, t                   (2)
        x_{i,j,s} = 0                                  ∀ i, j, s > T − p_{i,j}  (3)
        x_{i,j,s} ≥ 0                                  ∀ i, j, s                (4)

In the above LP, T is a trivial upper bound on the makespan of any reasonable schedule (T = Σ_j max_{i : p_{i,j} ≠ ∞} p_{i,j} suffices). i, j, s and t are restricted to elements in M, J, {0, 1, 2, · · · , T − 1} and [T] respectively. x_{i,j,s} indicates whether job j is processed on machine i with starting time s. The objective to minimize is the weighted completion time Σ_j w_j Σ_{i,s} x_{i,j,s}(s + p_{i,j}). Constraint (1) requires every job j to be scheduled. Constraint (2) says that on every machine i at any time point t, only one job is being processed. Constraint (3) says that if job j is scheduled on i, then it cannot be started after T − p_{i,j}. Constraint (4) requires all variables to be nonnegative.

Theorem 1.4. The LP relaxation (LP_{R||wC}) for R||Σ_j w_j C_j has an integrality gap of at most 1.5 − c, where c = 1/6000. Moreover, there is an algorithm that, given a valid fractional solution x to (LP_{R||wC}), outputs a random valid schedule with expected cost at most (1.5 − c) Σ_j w_j Σ_{i,s} x_{i,j,s}(s + p_{i,j}), in time polynomial in the number of non-zero variables of x. (Here we assume x is given as a sequence of (i, j, s, x_{i,j,s})-tuples with non-zero x_{i,j,s}.)

The above algorithm leads to a (1.5 − c)-approximation for R||Σ_j w_j C_j immediately if T is polynomially bounded. In Section 6, we shall show how to handle the case when T is super-polynomial.

1.2 Our Techniques

A key technique in many of our results is the use of time-indexed LP relaxations. For the identical machine setting, we have variables x_{j,t} indicating whether job j is scheduled in the time interval (t − p_j, t]; we can visualize x_{j,t} as a rectangle of height x_{j,t} with horizontal span (t − p_j, t]. With this visualization, it is straightforward to express the objective function, and to formulate the machine-capacity constraints and the precedence constraints. For the unrelated machine model, the LP we use is (LP_{R||wC}). (We used starting points to index intervals, as opposed to ending points; this is only for simplicity of describing the algorithm.)
Each xi,j,s can be viewed as a rectangle of height xi,j,s on machine i with horizontal span (s, s + pi,j ]. The rectangle structures allow us to recover the previous state-of-art results, and furthermore to derive the improved approximation results by identifying the loose parts in these algorithms and analysis. P P |prec| j wj Cj Let us first consider the scheduling problem on identical machines with job precedence constraints. The 4-approximation algorithm of Munier, Queyranne, and Schulz [27, 29] used a convex programming that only contains the completion time variables {Cj }j∈J . After obtaining the vector C, we run the job-driven list scheduling algorithm, by considering jobs j in increasing order of Cj − pj /2. To analyze the expected completion time of j ∗ in the output schedule, focus on the schedule S constructed by the algorithm at the time j ∗ was inserted. Then, we consider the total length of busy and idle slots in S before the completion of j ∗ separately. The length of busy slots can be bounded by 2Cj ∗ , using the m-machine constraint. The length of idle slots can also be bounded by 2Cj ∗ , by identifying a chain of jobs that resulted in the idle slots. More generally, they showed that if jobs are considered in increasing order of Cj − (1 − θ)pj for θ ∈ [0, 1/2] in the list scheduling algorithm, the factor for idle slots can be improved to 1/(1 − θ) but the factor for busy slots will be increased to 1/θ. Thus, θ = 1/2 gives the best trade-off. The rectangle structure allows us to exam the tightness of the above factors more closely: though the 1/θ factor for busy slots is tight for every individual θ ∈ [0, 1/2], it can not be tight for every such θ. Roughly speaking, the 1/θ factor is tight for a job j ∗ only when j ∗ has small pj ∗ , and all the other jobs j considered before j ∗ in the list scheduling algorithm has large pj and Cj − θpj is just smaller than Cj ∗ − θpj ∗ . However in this case, if we decrease θ slightly, these jobs j will be considered after j ∗ and thus the bound can not be tight for all θ ∈ [0, 1/2]. We show that even if we choose θ uniformly at random from [0, 1/2], the factor for busy time slots remains 2, as opposed R 1/2 2 R 1/2 dθ = 2 ln 2, to θ=0 2θ dθ = ∞. On the other hand, this decreases the factor for idle slots to θ=0 1−θ thus improving the approximation factor to 2 + 2 ln 2. The idea of choosing a random point for each job j and using them to decide the order in the list-scheduling algorithm has been studied before under the name “α-points” [15, 18, 28, 9]. The novelty of our result is the use of the rectangle structure to relate different θ values. In contrast, solutions to the convex programming of [27] and the weak time-indexed LP relaxation of [32] lack such a structure. P P |prec, pj = 1| j wj Cj When jobs have uniform length, the approximation ratio of the algorithm of [27] improves to 3. In this case, the θ parameter in the above algorithm becomes useless since all jobs have the same length. Taking the advantage of the uniform job length, the factor for idle time slots improves 1, while the factor for busy slots remains 2. This gives an approximation factor of 3 for the special case. To improve the factor of 3, we use another randomized procedure to decide the order of jobs in the list scheduling algorithm. For every θ ∈ [0, 1], let Mjθ be the first time when we scheduled θ fraction of job j in the fractional solution. Then we randomly choose θ ∈ [0, 1] and consider jobs the increasing order of Mjθ in the list-scheduling algorithm. 
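The values M_j^θ just defined are simple prefix-sum thresholds of the fractional solution. A small illustrative sketch (the dictionary interface for x is an assumption, with x[(j, t)] the fraction of job j completing at time t):

```python
def M_theta(x, j, theta, T):
    """First time by which at least a theta-fraction of job j is scheduled.
    Assumes the x-values of job j over t = 1..T sum to 1 and theta is in (0, 1]."""
    total = 0.0
    for t in range(1, T + 1):
        total += x.get((j, t), 0.0)
        if total >= theta:
            return t
    return T  # unreachable when the fractions of j sum to 1
```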
This algorithm can recover the factor of 1 for total length of idle slots and 2 for total length of busy slots. We again use the rectangle structure to discover the loose part in the analysis. With uniform 5 job size, the idle slots before a job j are caused only by the precedence constraints: if the total length of idle slots before the completion time of j is a, then there is a precedence-chain of a jobs ending at j; in other words, j is at depth at least a in the precedence graph. In order for the factor 1 for idle slots to be tight, we need to have a ≈ Cj . We show that if this happens, the factor for busy time slots shall be much better than 2. Roughly speaking, the factor of 2 for busy time slots is tight only if j is scheduled evenly among [0, 2Cj ]. However, if j is at depth-a in the dependence graph, it can not be scheduled before time a ≈ Cj with any positive fraction. A quantification of √ this argument allows us to derive the improved approximation ratio 1 + 2 for this special case. P Q|prec| j wj Cj Our O(log m/ log log m)-approximation for related machine scheduling is a simple one. As mentioned earlier, by losing a constant factor in the approximation ratio, we can convert the problem of minimizing the weighted completion time to that of minimizing the makespan, i.e, the problem Q|prec|Cmax . To minimize the makespan, the algorithm of Chudak and Shmoys [12] partitions machines into O(log m) groups according to their speeds. Based on their LP solution, we assign each job j to a group of machines. Then we run the machine-driven list-scheduling algorithm, subject to the precedence constraint, and the constraint that each job can only be scheduled to a machine in its assigned group. The final approximation ratio is the sum of two factors: one from grouping machines with different speeds into the same group, which is O(1) in [12], and the other from the number of different groups, which is O(log m) in [12]. To improve the ratio, we make the speed difference between machines in the same group as large as Θ(log m/ log log m), so that we only have O(log m/ log log m) groups. Then, both factors become O(log m/ log log m), leading to an O(log m/ log log m)-approximation for the problem. One remark is that in the algorithm of [12], the machines in the same group can be assumed to have the same speed, since their original speeds only differ by a factor of 2. In our algorithm, we have to keep the original speeds of machines, to avoid a multiplication of the two factors in the approximation ratio. P R|| j wj Cj Then we sketch how we use our time-indexed LP to recover the (1.5−c)-approximation of [4] (with much better constant c), for the scheduling problem on unrelated machines to minimize P total weighted completion time, namely R|| j wj Cj . The dependence rounding procedure of [4] is the key component leading to a better than 1.5 P approximation for R|| j wj Cj . It takes as input a grouping scheme: for each machine i, the jobs are partitioned into groups with total fractional assignment on i being at most 1. The jobs in the same group for i will have strong negative correlation towards being assigned to i. To apply the theorem, they first solve the lift-and-project SDP relaxation for the problem, and construct a grouping scheme based on the optimum solution to the SDP relaxation. For each machine i, the grouping algorithm will put jobs with similar Smith-ratios in the same group, as the 1.5approximation ratio is caused by conflicts between these jobs. 
With the strong negative correlation, the approximation ratio can be improved to (1.5 − c) for a tiny constant c = 1/(108 × 20000). We show that the natural time-indexed relaxation (LPR||wC ) for the problem suffices to give a (1.5 − c)-approximation. To apply the dependence rounding procedure, we need to construct a grouping for every machine i. In our recovered 1.5-approximation algorithm for the problem using P (LPR||wC ), the expected completion time of j is at most x (s + 1.5pi,j ), i.e, the average i,j,s i,s starting time of j plus 1.5 times the average length of j in the LP solution. This suggests that a job j is bad only when its average starting time is very small compared to its average length in the LP solution. Thus, for each machine i, the bad jobs are those with a large weight of scheduling 6 intervals near the beginning of the time horizon. If these bad intervals for two bad jobs j and j ′ have large overlap, then they are likely to be put into the same group for i. To achieve this, we construct a set of disjoint basic blocks {(2a , 2a+1 ] : a ≥ −2} in the time horizon. A bad job will be assigned to a random basic block contained in its scheduling interval and two bad jobs assigned to the same basic block will likely to be grouped together. Besides the improved approximation ratio, we believe the use of (LPR||wC ) will shed light on getting an approximation ratio for the problem that is considerably better than 1.5, as it is simpler than the lift-and-project SDP of [4]. Another useful property of our algorithm is that the rounding procedure is oblivious to the weights of the jobs; this may be useful when we consider some variants of the problem. Finally, we remark that Theorems 1.1, 1.2 and 1.3 can be easily extended to handle job arrival times. However, to deliver the key ideas more efficiently, we chose not to consider arrival times. 1.3 Other Related Work There is a vast literature on approximating algorithms for scheduling problems to minimize the total weighted completion time. Here we only discuss the ones that are most relevant to our results; we refer readers to [8] for a more comprehensive overview. When there are no precedence constraints, the P problems of minimizing total weighted completion time on identical and related machines P (P || j wj Cj and Q|| j wj Cj ) admit PTASes ([38, 10]). For the problem of scheduling P jobs on unrelated machines with job arrival times to minimize weighted completion time (R|rj | j wj Cj ), many classic results give 2-approximation algorithms ([36, 32, 22]); recently Im and Li [19] gave a 1.8687-approximation for the problem, solving a long-standing open problem. Skutella [37] gave √ √ a e/( e − 1) ≈ 2.542-approximation algorithm for the single-machine scheduling problem with precedence constraints and job release times, improving upon the previous e ≈ 2.718-approximation [31]. Makespan is an objective closely related to weighted completion time. As we mentioned, for P |prec|Cmax , the Graham’s list scheduling algorithm gives a 2-approximation, which is the best possible under a stronger version of UGC [1, 39]. For the special case of the problem P m|prec, pj = 1|Cmax where there are constant number of machines and all jobs have unit size, the recent breakthrough result of Levey  and Rothvoss [26] gave a (1 + ǫ)-approximation with running time exp exp Om,ǫ (log2 log n) , via the LP hierarchy of the natural LP relaxation for the problem. On the negative side, it is not even known whether P m|prec, pj = 1|Cmax is NP-hard or not. 
For the problem R||Cmax, i.e., the scheduling of jobs on unrelated machines to minimize the makespan, the classic result of Lenstra, Shmoys and Tardos [24] gives a 2-approximation, which remains the best algorithm for the problem. Considerable effort has been devoted to the special case where each job j has a size p_j and p_{i,j} ∈ {p_j, ∞} for every i ∈ M (the restricted assignment model) [13, 40, 7, 21].

When jobs have arrival times, the flow time of a job, which is its completion time minus its arrival time, is a more suitable measure of quality of service. There is a vast literature on scheduling algorithms with flow-time related objectives [25, 11, 14, 35, 3, 2]; since these objectives are much harder to approximate than completion-time related objectives, most of these works cannot handle precedence constraints and need to allow preemption of jobs.

Organization The proofs of Theorems 1.1 to 1.4 are given in Sections 2 to 5 respectively. Throughout this paper, we assume the weights and lengths of jobs are integers. Let T be the maximum makespan of any "reasonable" schedule. For the problems P|prec|Σ_j w_j C_j and R||Σ_j w_j C_j, we first assume T is polynomial in n. By losing a 1 + ǫ factor in the approximation ratio, we can handle the case where T is super-polynomial. This is shown in Section 6.

2 Scheduling on Identical Machines with Job Precedence Constraints

In this section we give our (2 + 2 ln 2 + ǫ)-approximation for the problem of scheduling precedence-constrained jobs on identical machines, namely P|prec|Σ_j w_j C_j. We solve (LP_{P|prec|wC}) and run the job-driven list-scheduling algorithm of [27] with a random order of jobs.

2.1 Time-Indexed LP Relaxation for P|prec|Σ_j w_j C_j

In the identical machine setting, we do not need to specify which machine each job is assigned to; it suffices to specify a scheduling interval (t − p_j, t] for every job j. A folklore result says that a set of intervals can be scheduled on m machines if and only if their congestion is at most m, i.e., the number of intervals covering any time point is at most m. Given such a set of intervals, there is a simple greedy algorithm to produce the assignment of intervals to machines. Thus, in our LP relaxation and in the list-scheduling algorithm, we focus on finding a set of intervals with congestion at most m.

We use (LP_{P|prec|wC}) for both P|prec|Σ_j w_j C_j and P|prec, p_j = 1|Σ_j w_j C_j. Let T = Σ_j p_j be a trivial upper bound on the makespan of any reasonable schedule. In the LP relaxation, we have a variable x_{j,t} indicating whether job j is scheduled in (t − p_j, t], for every j ∈ J and t ∈ [T]. Throughout this and the next section, t and t′ are restricted to be integers in [T], and j, j′ and j∗ are restricted to be jobs in J.

  min   Σ_j w_j Σ_t x_{j,t} · t                                        (LP_{P|prec|wC})
  s.t.  Σ_t x_{j,t} = 1                               ∀ j                       (5)
        Σ_{j, t ∈ [t′, t′ + p_j)} x_{j,t} ≤ m         ∀ t′                      (6)
        Σ_{t < t′ + p_{j′}} x_{j′,t} ≤ Σ_{t < t′} x_{j,t}    ∀ j, j′, t′ : j ≺ j′   (7)
        x_{j,t} = 0                                   ∀ j, t < p_j              (8)
        x_{j,t} ≥ 0                                   ∀ j, t                    (9)

The objective function is Σ_j w_j Σ_t x_{j,t} · t, i.e., the total weighted completion time over all jobs. Constraint (5) requires every job j to be scheduled. Constraint (6) requires that at every time point t′, at most m jobs are being processed. Constraint (7) requires that for every j ≺ j′ and t′, j′ completes before t′ + p_{j′} only if j completes before time t′. A job j cannot complete before p_j (Constraint (8)) and all variables are non-negative (Constraint (9)). We solve (LP_{P|prec|wC}) to obtain x ∈ [0, 1]^{J×[T]}.
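The folklore greedy mentioned above, which turns any set of intervals with congestion at most m into an explicit assignment to m machines, can be realized by a single left-to-right sweep. The sketch below is illustrative only; the interface (a list of half-open intervals (l, r] and the machine count m) is an assumption.

```python
import heapq

def intervals_to_machines(intervals, m):
    """Assign half-open intervals (l, r] of congestion <= m to m machines so that
    intervals placed on the same machine do not overlap.
    intervals: list of (l, r) pairs. Returns a list of machine indices."""
    order = sorted(range(len(intervals)), key=lambda k: intervals[k][0])
    free = list(range(m))            # idle machines
    heapq.heapify(free)
    busy = []                        # heap of (finish_time, machine)
    assignment = [None] * len(intervals)
    for k in order:
        l, r = intervals[k]
        # release machines whose current interval ends by time l
        while busy and busy[0][0] <= l:
            _, i = heapq.heappop(busy)
            heapq.heappush(free, i)
        i = heapq.heappop(free)      # nonempty because the congestion is at most m
        assignment[k] = i
        heapq.heappush(busy, (r, i))
    return assignment
```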
Let C = t xj,t t be the completion time of P j j in the LP solution. Thus, the value of the LP is j wj Cj . For every θ ∈ [0, 1/2], we define Mjθ = Cj − (1 − θ)pj . Our algorithm is simply the following: choose θ uniformly at random from (0, 1/2], and output the schedule returned by job-driven-list-scheduling(M θ ) (described in Algorithm 1). 8 Algorithm 1 job-driven-list-scheduling(M ) Input: a vector M ∈ RJ≥0 used to decide the order of scheduling, s.t. if j ≺ j ′ , then Mj < Mj ′ e C e ∈ RJ Output: starting and completion time vectors S, ≥0 1: for every j ∈ J in non-decreasing order of Mj , breaking ties arbitrarily ej ′ , or t ← 0 if {j ′ ≺ j} = ∅ 2: let t ← maxj ′ ≺j C 3: find the minimum s ≥ t such that we can schedule j in interval (s, s + pj ], without increasing the congestion of the schedule to m + 1 ej ← s + pj , and schedule j in (Sej , C ej ] 4: Sej ← s, C e C) e 5: return (S, We first make a simple observation regarding the C vector, which follows from the constraints in the LP. ′ Claim 2.1. For every pair of jobs j, j ′ such that j ≺ j ′ , we have Cj + pj ′ ≤ ! Cj . X X X X P roof. Cj + p j ′ = xj,t′ t′ + pj ′ = 1− xj,t′ + pj ′ = xj,t′ + pj ′ ≤ = X t  1 − X t′ ≥pj ′ t′ X t′ <t+pj ′  xj ′ ,t′  + pj ′ = (t′ − pj ′ )xj ′ ,t′ + pj ′ = t t′ ,t≤t′ X t′ X t,t′ ≥t+pj ′ xj ′ ,t′ + pj ′ = t′ <t X t′ xj ′ ,t′ − pj ′ + pj ′ = Cj ′ . t′ xj ′ ,t′  t : t ≤ t′ − p j ′ + pj ′ The inequality used Constraint (7); some of the equalities used Constraint (5) and (8) and the definitions of Cj and Cj ′ . Indeed, our analysis does not use the full power of Constraint (7), except for the above claim which is implied by the constraint. Thus, we could simply use Cj + pj ′ ≤ Cj ′ (along with the definitions of Cj ’s) to replace Constraint (7) in the LP. However, in the algorithm for the problem with unit job lengths (described in Section 3), we do need Constraint (7). To have a unified LP for both problems, we chose to use Constraint (7). Our algorithm does not use x-variables, but we need them in the analysis. 2.2 Analysis Our analysis is very similar to that in [27]. We fix a job j ∗ from now on and we shall upper bound ej ∗ ] E[C ej ∗ is determined and will not be changed . Notice that once j ∗ is scheduled by the algorithm, C Cj ∗ later. Thus, we call the schedule at the moment the algorithm just scheduled j ∗ the final schedule. We can then define idle and busy points and slots w.r.t this final schedule. We say a time point τ ∈ (0, T ] is busy if the congestion of the intervals at τ is m in the schedule (in other words, all the m machines are being used at τ in the schedule); we say τ is idle otherwise. We say a left-open-right-closed interval (or slot) (τ, τ ′ ] (it is possible that τ = τ ′ , in which case the interval is empty) is idle (busy, resp.) if all time points in (τ, τ ′ ] are idle (busy, resp.). ej ∗ respectively, w.r.t the Then we analyze the total length of busy and idle time slots before C final schedule. For a specific θ ∈ (0, 1/2], the techniques in [27] can bound the total length of 9 C C ∗ ∗ j idle slots by 1−θ and the total length of busy slots by θj . Thus choosing θ = 1/2 gives the best 4-approximation, which is the best using this analysis. Our improvement comes from the bound on ej ∗ is the total length of busy time slots. We show that the expected length of busy slots before C Cj ∗ at most 2Cj ∗ , which is much better than the bound Eθ∼R (0,1/2] θ = ∞ given by directly applying C ∗ the bound for every θ. 
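For concreteness, here is a direct (unoptimized) sketch of job-driven-list-scheduling(M) from Algorithm 1, assuming integer job sizes and that M respects the precedence order (M_j < M_{j′} whenever j ≺ j′), as the algorithm requires. The dictionary interface is hypothetical.

```python
from collections import defaultdict

def job_driven_list_scheduling(M, p, preds, m):
    """Sketch of Algorithm 1: schedule jobs one by one in non-decreasing order of M[j],
    keeping the congestion of the chosen intervals at most m.
    p[j]: integer size; preds[j]: set of direct predecessors of j; m: #machines.
    Returns (S, C): dicts of starting and completion times."""
    load = defaultdict(int)                   # load[u] = number of jobs occupying slot (u-1, u]
    S, C = {}, {}
    for j in sorted(p, key=lambda j: M[j]):   # ties broken arbitrarily
        t = max((C[q] for q in preds[j]), default=0)
        s = t
        # earliest s >= t such that every unit slot of (s, s + p[j]] has load < m
        while any(load[u] >= m for u in range(s + 1, s + p[j] + 1)):
            s += 1
        for u in range(s + 1, s + p[j] + 1):
            load[u] += 1
        S[j], C[j] = s, s + p[j]
    return S, C
```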
We remark that the θj bound for each individual θ is tight and thus can not be improved; our improvement comes from considering all possible θ’s together. Bounding the Expected Length of Idle Slots We first bound the total length of idle slots ej ∗ , the completion time of job j ∗ in the schedule produced by the algorithm. Lemma 2.2 before C and 2.3 are established in [27] and we include their proofs for completeness. Lemma 2.2. Let j ∈ J be a job in the final schedule with Sej > 0. Then we can find a job j ′ such that ej ′ , Sej ] is busy, • either j ′ ≺ j and (C • or Mj ′ ≤ Mj , Sj ′ < Sj and (Sj ′ , Sej ] is busy. Proof. Recall that busy and idle slots are defined w.r.t the final schedule, i.e, the schedule at the moment we just scheduled j ∗ . We first assume (Sej − 1, Sej ] is idle. Then before the iteration for j, the interval (Sej − 1, Sej + pj ] is available for scheduling and thus scheduling j at (Sej − 1, Sej − 1 + pj ] ej ′ . will not violate the congestion constraint. Therefore, it must be the case that Sej = maxj ′ ≺j C ′ e e e e So, there is a job j ≺ j such that Cj ′ = Sj . Since (Cj ′ , Sj ] = ∅, the first property holds. So we can assume (Sej − 1, Sej ] is busy. Let (τ, τ ′ ] be the maximal busy slot that contains ej ′ , Sej ] ej ′ ≥ τ , then the first property holds since (C (Sej − 1, Sej ]. If there is a job j ′ ≺ j such that C ′ is busy. So, we can assume that no such job j exists. ej < Sej , As τ is the starting point of a maximal busy slot, there is a job j1 with Sj1 = τ < Sej . If C 1 e e e e then there is a job j2 with Cj1 = Sj2 < Sj , as (Cj1 , Cj1 + 1] is busy. If Cj2 < Sj then there is job ej = Sej < Sej . We can repeat this process to find a job jk such that τ ≤ Sj < Sej ≤ C ej . j3 with C 2 3 k k ′ ′ Let j = jk ; we claim that Mj ′ ≤ Mj . Otherwise, j is considered before j in the list scheduling ej ] is available. Since we assumed that there algorithm. At the iteration for j, the interval (Sej ′ , C ej ′′ ≥ τ , j should be started at Sj ′ , a contradiction. Thus, Mj ′ ≤ Mj are no jobs j ′′ ≺ j such that C ′ and j satisfies the second property of the lemma. Applying Lemma 2.2 repeatedly, we can identify a chain of jobs whose scheduling intervals cover ej ∗ , which can be used to bound the total length of these slots. This leads all the idle slots before C to the following lemma from [27]: ej ∗ is at most Lemma 2.3. The total length of idle time slots before C Cj ∗ 1−θ . Proof. Let j0 = j ∗ ; if Sj0 > 0, we apply Lemma 2.2 for j = j0 to find a job j ′ and let j1 = j ′ . If Sj1 > 0, then we apply the lemma again for j = j1 to find a job j ′ and let j2 = j ′ . We can repeat the process until we reach a job jk with Sjk = 0. For convenience, we shall revert the sequence so that j ∗ = jk and Sj0 = 0. Thus, we have found a sequence j0 , j1 , · · · , jk = j ∗ of i jobs such that ej , Sj Sj = 0, and for each ℓ = 0, 1, 2, · · · , k − 1, either (i) jℓ ≺ jℓ+1 and C is busy, or (ii) 0 ℓ Mjℓ ≤ Mjℓ+1 , Sjℓ < Sjℓ+1 and (Sjℓ , Sjℓ+1 ] is busy. 10 ℓ+1 We say ℓ is of type-1 if ℓ satisfies (i); otherwise, we say ℓ is of type-2 (ℓ must satisfy (ii)). The interval (0, Sj ∗ ] can be broken into k intervals: (Sj0 , Sj1 ], (Sj1 , Sj2 ], · · · , (Sjk−1 , Sjk ]. If some ℓ is of ej , Sj ] is busy. Let L be the set of type-1 type-2, then (Sjℓ , Sjℓ+1 ] is busy; if ℓ is of type-1, then (C ℓ ℓ+1 [ ej ]. With this ∗ indices ℓ ∈ [0, k − 1]. 
Then, all the idle slots in (0, Sj ] are contained in (Sjℓ , C ℓ ℓ∈L ∗ observation, we can bound the total length of idle slots in (0, Sj ] by X ℓ∈L pj ℓ ≤ X ℓ∈L k−1 Mjθ∗ 1 X θ Cj ∗ 1 (Mjθℓ+1 − Mjθℓ ) ≤ = − pj ∗ . (Mjℓ+1 − Mjθℓ ) ≤ 1−θ 1−θ 1−θ 1−θ ℓ=0     1 1 1 The first inequality holds since pjℓ = 1−θ Mjθℓ+1 − Cjℓ − Mjθℓ ≤ 1−θ Cjℓ+1 − pjℓ+1 − Mjθℓ ≤ 1−θ  Mjθℓ , due to Claim 2.1. The second inequality is by the fact that Mjℓ ≤ Mjℓ+1 for every ℓ ∈ [0, k−1] and the third inequality is by jk = j ∗ and Mjθ0 ≥ 0. The equality is by the definition of Mjθ∗ . Thus, ej ∗ ] is at most Cj∗ . the total length of idle slots in (0, C 1−θ ej ∗ , over all choices of θ, is at most Thus, the expected length of idle slots before C Z 1/2 θ=0 Cj ∗ 2dθ = 1−θ  1 2 ln 1−θ 1/2 θ=0  Cj ∗ = (2 ln 2)Cj ∗ . (10) Bounding the Expected Length of Busy Slots We now proceed to bound the total length of ej ∗ . This is the key to our improved approximation ratio. For every θ ∈ [0, 1/2], busy slots before C θ ej ∗ }. Thus, if θ < θ ′ , we have Jθ ⊇ Jθ′ . For every θ ∈ [0, 1/2] and j ∈ J0 , define let Jθ = {j : Mj ≤ C ′ θj = sup {θ ∈ [0, P1/2] : j ∈ Jθ }; this is well-defined since j ∈ J′ 0 . For any subset J ⊆ J of jobs, we ′ define p(J ) = j∈J ′ pj to be the total length of all jobs in J . ej ∗ is at most Lemma 2.4. For a fixed θ ∈ (0, 1/2], the total length of busy slots before C ej ∗ ] is at most Proof. The total length of busy time slots in (0, C scheduled so far, which is at most 1 m X j∈J:Mjθ ≤Mjθ∗ pj ≤ 1 m X pj = j∈J:Mjθ ≤Cj ∗ 1 m 1 m p(Jθ ). times the total length of jobs 1 p(Jθ ). m The key lemma for our improved approximation ratio is an upper bound on the above quantity when θ is uniformly selected from (0, 1/2]: Z 1/2 p(Jθ )dθ ≤ mCj ∗ . Lemma 2.5. θ=0 Proof. Notice that we have Z Z 1/2 p(Jθ )dθ = θ=0 1/2 X θ=0 j∈J 0 pj 1j∈Jθ dθ = X j∈J0 pj Z 1/2 θ=0 1j∈Jθ dθ = X θ j pj . j∈J0 P Thus, it suffices to prove that j∈J0 θj pj ≤ mCj ∗ . To achieve this, we construct a set of axisparallel rectangles. For each j ∈ J0 and t such that xj,t > 0, we place a rectangle with height xj,t 11 and horizontal span (t − pj , t − pj + 2θj pj ]. The total area of all the rectangles for j is exactly 2θj pj . Notice that θj ≤ 1/2 and thus (t − pj , t − pj + 2θj pj ] ⊆ (t − pj , t]. P P θ Notice that t xj,t (t − pj + θj pj ) = Cj − (1 − θj )pj = Mj j ≤ Cj ∗ , t xj,t = 1, and t − pj + θj pj is the mass center2 of the rectangle for (j, t). Thus, the mass center of the union of all rectangles for j is at most Cj ∗ . This in turn implies that the mass center of the union of all rectangles over all j ∈ J0 and t, is at most Cj ∗ . Notice that for every t ∈ (0, T ], the total height of all rectangles covering t is at most m, by Constraint (6), and the fact that (t − pj , t − pj + 2θj pj ] ⊆ (t − pj , t] for every j ∈ J0 and t. Therefore, the total area of rectangles for all j P ∈ J0 and t is at most 2mCj ∗ (otherwise, the mass center will be larger than Cj ∗ ). So, we have j∈J0 2θj pj ≤ 2mCj ∗ , which finishes the proof of the lemma. ej ∗ is at Thus, by Lemma 2.4 and Lemma 2.5, the expected length of busy time slots before C most Z 1/2 Z p(Jθ ) 2 1/2 2dθ = p(Jθ )dθ ≤ 2Cj ∗ . (11) m m θ=0 θ=0 Thus, by Inequalities (10) and (11), we have h i ej ∗ ≤ (2 ln 2)Cj ∗ + 2Cj ∗ = (2 + 2 ln 2)Cj ∗ . E C Thus, we have proved the 2+2 ln 2+ǫ ≤ (3.387+ǫ)-approximation ratio for our algorithm, finishing the proof of Theorem 1.1. Remarks One might wonder if choosing a random θ from [0, θ ∗ ] for a different θ ∗ can improve R θ∗ 1 the approximation ratio. 
For θ ∗ ≤ 1/2, the ratio we can obtain is θ1∗ + θ1∗ θ=0 1−θ dθ = θ1∗ + θ1∗ ln θ1∗ ; that is, the first factor (for busy time slots) will be increased to 1/θ ∗ and the second factor (for idle time slots) will be decreased to θ1∗ ln θ1∗ . This ratio is minimized when θ ∗ = 1/2. If θ ∗ > 1/2, however, the first factor does not improve to 1/θ ∗ , as the proof of Lemma 2.5 used the fact that θj ≤ 1/2 for each j. Thus, using our analysis, the best ratio we can get is 2 + 2 ln 2. 3 Scheduling Unit-Length Jobs on Identical Machines with Job Precedence Constraints √ P In this section, we give our (1 + 2)-approximation j wj Cj . Again, P algorithm for P prec, pj = 1 we solve (LPP|prec|wC ) to obtain x; define Cj = t xj,t for every j ∈ J. For this special case, we define the random M -vector differently. In particular, it depends on the P values of x variables. For every j ∈ J and θ ∈ (0, 1], define Mjθ to be the minimum t such that tt′ =1 xj,t′ ≥ θ. Notice that R1 P Cj = θ=0 Mjθ dθ. Our algorithm for P prec, pj = 1 j wj Cj chooses θ uniformly at random from θ (0, 1], and then call job-driven-list-scheduling(M ) and output the returned schedule. For every j ∈ J, we define aj to be the largest a such that there exists a sequence of a jobs j1 ≺ j2 ≺ j3 ≺ · · · ≺ ja = j. Thus, aj is the “depth” of j in the precedence graph. Claim 3.1. For every j ∈ J and t < aj , we have xj,t = 0. 2 Here, we use mass center for the horizontal coordinate of the mass center, since we are not concerned with the vertical positions of rectangles. 12 Proof. By the definition of aj , there is a sequence of a := aj jobs j1 ≺ j2 ≺ · · · ≺ ja such that ja = j. Let ta = t. If xja ,ta > 0, then by Constraint (7), there is an ta−1 ≤ ta − 1 such that xja−1 ,ta−1 > 0. Then, there is an ta−2 ≤ ta−1 − 1 ≤ ta − 2 such that xja−2,ta−2 > 0. Repeating this process, there will be an t1 ≤ ta − (a − 1) such that xj1 ,t1 > 0. This contradicts the fact that ta = t ≤ a − 1. Again, we fix a job j ∗ and focus on the schedule at the moment the algorithm just scheduled ej ∗ ]/Cj ∗ (recall that C ej ∗ is the j ∗ ; we call this schedule the final schedule. We shall bound E[C ∗ completion time of j in the schedule we output), by bounding the total length of idle and busy ej ∗ separately. Recall that a time point is busy if all the m slots in the final schedule before C machines are processing some jobs at that time, and idle otherwise. The next lemma gives this bound for a fixed θ. The first (resp. second) term on the right side bounds the total length of busy ej ∗ . The clean bound aj ∗ on the total length of idle slots comes from the (resp. idle) slots before C unit-job size property. o n ej ∗ ≤ 1 j : Mjθ ≤ Mjθ∗ + aj ∗ . Lemma 3.2. C m ej ∗ ] is at most 1/m times the total number of jobs Proof. The total length of busy slotsnin (0, C o 1 j : Mjθ ≤ Mjθ∗ . scheduled so far, which is at most m ej ∗ ]. We start from j1 = j ∗ . For every We now bound the total length of idle slots in (0, C ℓ = 1, 2, 3 · · · , let jℓ+1 ≺ jℓ be the job that is scheduled the last in the final schedule; if jℓ+1 does not exists, then we let a = ℓ and break the loop. Thus, we constructed a chain ja ≺ ja−1 ≺ ja−2 ≺ · · · ≺ j1 = j ∗ of jobs. By the definition of aj ∗ , we have a ≤ aj ∗ . We shall show that for every idle unit slot (t − 1, t] in (0, Cej ∗ ], some job in the chain is scheduled ej > t (ℓ exists since in (t − 1, t]. Assume otherwise; let ℓ ∈ [a] be the largest index such that C ℓ ej ∗ ). 
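The depths a_j just introduced are longest-chain lengths in the precedence DAG and can be computed by a memoized recursion over the predecessors; a minimal sketch (iterative topological ordering would avoid the recursion-depth limit for very long chains):

```python
def depths(preds):
    """a_j = length of the longest precedence chain ending at j.
    preds[j]: set of direct predecessors of j (the jobs form a DAG)."""
    a = {}
    def depth(j):
        if j not in a:
            a[j] = 1 + max((depth(q) for q in preds[j]), default=0)
        return a[j]
    for j in preds:
        depth(j)
    return a
```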
Then either jℓ+1 does not exist, or C ej ej ∗ − 1, C ej ∗ ] and t < C j1 = j ∗ is scheduled in (C ≤ t − 1. ℓ+1 Consider the iteration when jℓ is considered in the list scheduling algorithm. Before the iteration, (t − 1, t] is available for scheduling. Since jℓ is scheduled after t, there must be a job j ′ ≺ jℓ such ej ′ ≥ t. Thus, we would have set jℓ+1 = j ′ , a contradiction. that C ej ∗ are covered by the scheduling intervals of jobs in the chain. Thus, the idle slots before C ej ∗ is at most a ≤ aj ∗ . Overall, we have C ej ∗ ≤ So, nthe total number of idle slots before C o 1 m j : Mjθ ≤ Mjθ∗ + aj ∗ . R1 We shall use g(θ) = Mjθ∗ for every θ ∈ (0, 1]. Notice that Cj ∗ = θ=0 g(θ)dθ. For simplicity, let g(0) = limθ→0+ g(θ); so, g(0) will be the smallest t such that xj ∗ ,t > 0. By Claim 3.1, we have g(θ) X xj,t . This is the total volume of job g(0) ≥ aj ∗ . For every j ∈ J and θ ∈ [0, 1], define hj (θ) := t=1 P j scheduled in (0, g(θ)]. Thus, we have j∈J hj (θ) ≤ g(θ). Noticing that Mjθ ≤ Mjθ∗ if and only if ej ∗ ] E[C ej ∗ ≤ g(0) + 1 P hj (θ) ≥ θ. So, by Lemma 3.2, we have C j∈J 1hj (θ)≥θ . Thus, we can bound Cj ∗ m by the superior of R1 1 P g(0) + m j∈J θ=0 1hj (θ)≥θ dθ (12) R1 θ=0 g(θ)dθ 13 subject to • g : [0, 1] → [1, ∞) is piecewise linear, left-continuous and non-decreasing, (12.1) • ∀j ∈ J, hj : [0, 1] → [0, 1] is piecewise linear, left-continuous and non-decreasing, X • hj (θ) ≤ mg(θ), ∀θ ∈ [0, 1]. (12.2) (12.3) j∈J .Z 1 To recover the 3-approximation ratio of [27], we know that g(0) g(θ)dθ ≤ 1; this correθ=0  Z . Z 1 1 X 1 e sponds to the fact aj ∗ ≤ Cj ∗ . It is not hard to show 1hj (θ)≥θ dθ g(θ)dθ ≤ 2. m θ=0 θ=0 j∈J The tight factor 2 can be achieved when hj (θ) = θ for every j ∈ J and θ ∈ [0, 1] and g(θ) = nθ/m. This corresponds to the following case: by the time θ-fraction of job j ∗ was completed, exactly θ faction of every job j ∈ J was completed. However, the two bounds can not be tight simultaneously: the first bound being tight requires g to be a constant function, where the second bound being tight requires g to be linear in θ. This is where√we obtain our improved approximation ratio. Before formally proving that (12) is at most 1 + 2, we give the combination of g and {hj }j∈J achieving the bound; the way we prove the upper bound is by showing that this combination√is the worst possible. In the worst case, we have hj (θ) = max {α, θ} for every θ ∈ [0, 1], where α = 2−1. n 1 P g(θ) = m j∈J hj (θ) = m max {α, θ} for every θ ∈ [0, 1]. In this case, the numerator of (12) is R R1 2 )n 1 1 P n g(0) + m = (α + 1) m ; the denominator is θ=0 g(θ)dθ = (1+α j∈J θ=0 1hj (θ)≥θ dθ 2m . Thus, (12) √ α+1 2. in this case is (1+α 2 )/2 = 1 + Computing the superior of (12) We now compute the superior of (12) subject to Constraints (12.1), (12.2) and (12.3). The functions g and hj ’s defined may have other stronger properties (e.g, they are piecewise constant functions); however in the process, we only focus on the properties described above. In the following, j in a summation is over all jobs in J. First, we can add the following constraint without changing the superior of (12). 1 X hj (θ), ∀θ ∈ (θ ∗ , 1]. (13) ∃θ ∗ ∈ [0, 1], such that g(θ) = g(0), ∀θ ∈ [0, θ ∗ ], and g(θ) = m j ′ To we take n see this, o any g and {hj }j satisfying Constraints (12.1) to (12.3). Let g (θ) = P 1 max g(0), m for every θ ∈ [0, 1]. Then g ′ : [0, 1] → [0, ∞) is piecewise-linear, leftj hj (θ) continuous and non-decreasing because of Property (12.2); so g′ satisfies Property (12.1). 
Obviously g′ satisfies Property (12.3). We define θ ∗ = sup {θ ∈ [0, 1] : g ′ (θ)P = g ′ (0)}. Since g′ is left-continuous 1 ∗ and monotone non-decreasing, we have g′ (θ) = g ′ (0) ≥ m j hj (θ) for every θ ∈ [0, θ ] and P 1 ′ ∗ ′ g′ (θ) = m j hj (θ) > g (0) for every θ ∈ (θ , 1]. Thus, g satisfies Constraint (13). Changing ′ g to g will not decrease (12): the numerator does not change and the denominator can only o n 1 P decrease since g(θ) ≥ max g(0), m j hj θ = g′ (θ) for every θ ∈ [0, 1]. Thus, we can impose Constraint (13), without changing the superior of (12). Then, we can make the following constraint on {hj }j : For every j ∈ J, hj is a constant over [0, θ ∗ ]. 14 (14) ∗ , 1]. Then For every j, we define h′j (θ) = hj (θ ∗ ) if θ ∈ [0, θ ∗ ] and h′j (θ) = hj (θ) if θ ∈ (θP each h′j satisfies Constraint (12.2). Moreover, {h′j }j satisfies Constraint (12.3) since j h′j (θ) = P ∗ ∗ ∗ ′ j hj (θ ) ≤ mg(θ ) = mg(θ) for every θ ∈ [0, θ ]. Moreover, if we change hj to hj for every j ∈ J, the denominator of (12) does not change and the numerator can only increase. Finally, we can assume X hj (θ) = mg(θ), ∀θ ∈ [0, θ ∗ ]. (15) j P Notice that the functions {hj }j and g are constant functions over [0, θ ∗ ] and j hj (θ) = mg(θ) for θ ∈ (θ ∗ , 1]. If the constraint is not satisfied, we can find some j such that hj is not right-continuous at θ ∗ . Then, we increase hj (θ) simultaneously for all θ ∈ [0, θ ∗ ] until the constraint is satisfied or hj becomes right-continuous at θ ∗ . This process can be repeated until the constraint becomes satisfied. 1 P Thus, with all the constraints, we have g(θ) = m j hj (θ) for every θ ∈ [0, 1]. So, (12) becomes  R1 P  R1 1 P 1 P h (0) + 1 dθ j j θ=0 hj (θ)≥θ j hj (0) + m j θ=0 1hj (θ)≥θ dθ m = . R1 P R1 1 P j θ=0 hj (θ)dθ j θ=0 hj (θ)dθ m To bound the quantity, it suffices to upper bound R1 h(0) + θ=0 1h(θ)≥θ dθ sup , R1 h h(θ)dθ θ=0 (16) where the superior is over all piecewise-linear, left-continuous and monotone non-decreasing functions h : [0, 1] → [0, 1]. Z 1 R1 h(θ)dθ ≥ Claim 3.3. Let α = h(0) and β = θ=0 1h(θ)≥θ dθ. Then 0 ≤ α ≤ β ≤ 1 and θ=0 β2 α2 +β− . 2 2 Proof. β ≥ α because h(θ) ≥ θ holds for every θ ∈ [0, α]. To prove the second part, we can assume that h(θ) h(θ) to α for every θ ∈ (0, α]; this does not R 1= α if θ ∈ [0, α]: otherwise, we canRchange 1 change θ=0 1h(θ)≥θ dθ and can only decrease θ=0 h(θ)dθ. Also, we can assume that h(θ) ≤ θ for every θ R∈ [α, 1]. Otherwise, for every θ such Rthat h(θ) > θ, we decrease h(θ) to θ. This does not 1 1 change θ=0 1h(θ)≥θ dθ and can only decrease θ=0 h(θ)dθ. Z 1 Z 1 Z 1  1 α2 (θ − h(θ))dθ. h(θ)dθ = α2 + θ − (θ − h(θ)) dθ = + − 2 2 θ=α θ=α θ=0 Rb 2 For interval [a, b] ⊆ [θ ∗ , 1] such that h(a) = a, we have that θ=a (θ − h(θ))dθ ≤ (b−a) 2 . Since R1 R1 (1−β)2 . Thus, the above quantity is at least 1 ≤ 1 − β, we have that h(θ)<θ 2 θ=α θ=α (θ − h(θ)) ≤ 1 2 + α2 2 − (1−β)2 2 = α2 2 +β− β2 2 . Thus, (16) is at most sup0≤α≤β≤1 α+β 2 α2 +β− β2 2 = sup0≤α≤β≤1 2(α+β) 2β−(α+β)(β−α) . Scaling both α and β Thus, we can assume β = 1 and (16) becomes supα∈[0,1] 2(1+α) 1+α2 . √ √ √ 2(1+ 2−1) 2(1+α) For α ∈ [0, 1], 1+α2 is maximized at α∗ = 2 − 1 and the maximum value is 1+(√2−1)2 = 2 + 1. √ P This finishes the proof of the ( 2 + 1)-approximation for P |prec, pj = 1| j wj Cj (Theorem 1.2). up can only increase 2(α+β) 2β−(α+β)(β−α) . 15 4 Scheduling on Related Machines with Job Precedence Constraints P In this section, we give our O(log m/ log log m)-approximation for Q|prec|Cmax and Q|prec| j wj Cj , proving Theorem 1.3. 
This slightly improves the previous best O(log m)-approximation, due to Chudak and Shmoys [12]. Our improvement comes from a better tradeoff between two contributing factors. As in [12], we can convert the objective of minimizing total weighted completion time to minimizing makespan, losing a factor of 16. We now describe the LP used in [12] and state the theorem for the reduction. Throughout this section, i is restricted to machines in M , and j and j ′ are restricted to jobs in J. min X xi,j = 1 i pj X xi,j ∀j (17) ≤ Cj ∀j (18) X xi,j ′ ≤ Cj ′ ∀j, j ′ , j ≺ j ′ (19) i si (LPQ|prec|Cmax ) 1 X pj xi,j ≤ D si j si i Cj + p j ′ D Cj ≤ D xi,j , Cj ≥ 0 ∀i (20) ∀j (21) ∀j, i (22) (LPQ|prec|Cmax ) is a valid LP relaxation for Q|prec|Cmax . In the LP, xi,j indicates whether job j is scheduled on machine i. D is the makespan of the schedule, and Cj is the completion time of j in the schedule. Constraint (17) requires every job j to be scheduled. Constraint (18) says that the completion time of j is at least the processing time of j on the machine it is assigned to. Constraint (19) says that if j ≺ j ′ , then Cj ′ is at least Cj plus the processing time of j ′ on the machine it is assigned to. Constraint (20) says that the makespan D is at least the total processing time of all jobs assigned to i, for every machine i. Constraint (21) says that the makespan D is at least the completion time of any job j. Constraint (22) requires the x and C variables to be non-negative. The value of (LPQ|prec|Cmax ) provides a lower bound on the makespan of any valid schedule. However, even if we require each xi,j ∈ {0, 1}, the optimum solution to the integer programming is not necessarily a valid solution to the scheduling problem, since it does not give a scheduling interval for each job j. Nevertheless, we can use the LP relaxation to obtain our O(log m/ log log m)approximation P for Q|prec|Cmax . Using the following theorem from [12], we can extend the result to Q|prec| j wj Cj : Theorem 4.1 ([12]). Suppose there is an efficient algorithm A that can round a fractional solution to (LPQ|prec|Cmax ) to a valid solution to the correspondent Q|prec|CP max instance, losing only a factor of α. Then there is a 16α-approximation for the problem Q|prec| j wj Cj . Thus, from now on, we focus on the objective of minimizing the makespan; our goal is to design an efficient rounding algorithm as stated in Theorem 4.1 with α = O(log m/ log log m). We assume that m is big enough. For the given instance of Q|prec|Cmax , we shall first pre-processing the instance as in [12] so that it contains only a small number of groups. In the first stage of the 16 pre-processing step, we discard all the machines whose speed is at most 1/m times the speed of the fastest machine. Since there are m machines, the total speed for discarded machines is at most the speed of the fastest machine. In essence, the fastest machine can do the work of all the discarded machines; this will increase the makespan by a factor of 2. Formally, let i∗ be the machine with the fastest speed. For every discarded machine i and any job j such that xi,j > 0, we shall increase xi∗ ,j by xi,j and change this xi,j to 0. By scaling D by a factor of 2, the LP solution remains feasible. P x To see this, notice that the modification to the fractional solution can only ∗decrease for each j. The only constraint we need to check is Constraint (20) for i = i . 
Since pj i si,j i P P P pj xi,j pj xi,j D i discarded,j si∗ ≤ i discarded,j msi ≤ i discarded m ≤ D, moving the scheduling of jobs from ∗ discarded machines to i shall only increase the processing time of jobs on i∗ by D. Thus, we assume all machines have speed larger than 1/m times the speed of the fastest machine. By scaling speeds of machines uniformly, we assume all machines i have speed i ∈ [1, m), and |M | ≤ m. In the second stage of the pre-processing step, we partition the machines into groups, where each group contains machines with similar speeds. Let γ = log m/log log m.  Then group Mk contains machines with speed in [γ k−1 , γ k ), where k = 1, 2, · · · , K := logγ m = O(log m/ log log m). We remark that that unlike [12], we can not round down the speed of each machine i to the nearest power of γ. If we do so, we will lose a factor of (log m/ log log m) and finally we can only obtain an O((log m/ log log m)2 )-approximation. Instead, we keep the speeds of machines unchanged. M ′ ⊆ M of machines, we definePs(M ′ ) = P We now define some useful notations. For a subset ′ ′ i∈M ′ xi,j i∈M ′ si to be the total speed of machines in M ; for M ⊆ M and j ∈ J, let xM ′ ,j = ′ be the total fraction of job j assigned to machines in M . P For any job j, let ℓj be the largest integer ℓ such that K k=ℓ xMk ,j ≥ 1/2. That is, the largest ℓ such that at least 1/2 fraction of j is assigned to machines in groups ℓ to K. Then, let kj be the index k ∈ [ℓj , K] that maximizes s(Mk ). That is, kj is the index of the group in groups ℓj to K with the largest total speed. Later in the machine-driven list scheduling algorithm, we shall constrain that job j can only be assigned to machines in group kj . The following claim says that the time of processing j on any machine in Mkj is not too large, compared to processing time of j in the LP solution. X pj xi′ ,j pj . ≤ 2γ Claim 4.2. For every j ∈ J, and any machine i ∈ Mkj , we have si s i′ ′ i ∈M Proof. Notice that X xi′ ,j ≥ s i′ ′ i ∈M sum has X K X xMk ,j < 1/2 by our definition of ℓj . Thus, k=ℓj +1 Sℓj Mk i′ ∈ k=1 1 −ℓ j . si′ ≥ γ xi′ ,j 1 ≥ · γ −ℓj . This is true since s i′ 2 X Sℓj Mk i′ ∈ k=1 ℓj X xMk ,j > 1/2. Then, k=1 xi′ ,j ≥ 1/2 and every i′ in the Since i is in group kj ≥ ℓj , i′ has speed at least γ ℓj −1 and thus follows. X pj Claim 4.3. ≤ 2KD. s(Mkj ) 1 si ≤ γ 1−ℓj . Then the claim j∈J Proof. Focus on each job j ∈ J. Noticing that K X k=ℓj 17 xMk ,j ≥ 1/2, and kj is the index of the group with the maximum total speed, we have K K X X xMk ,j xMk ,j 1 ≥ ≥ . s(Mk ) s(Mk ) 2s(Mkj ) k=1 k=ℓj Summing up the above inequality scaled by 2pj , over jobs j, we have X j∈J K K k=1 k=1 K X X 1 X X X xM ,j pj k D = 2KD. pj xMk ,j ≤ 2 ≤2 pj =2 s(Mkj ) s(Mk ) s(Mk ) j∈J j∈J k=1 P To see the last inequality, we notice that j∈J pj xMk ,j is the total size of jobs assigned to group k, P s(Mk ) is the total speed of all machines in Mk and D P is the makespan. Thus, we have j∈J pj xMk ,j ≤ s(Mk )D. Formally, Constraint (20) P says j∈J pj xi,j ≤ si D for every i ∈ Mk . Summing up the inequalities over all i ∈ Mk gives j∈J pj xMk ,j ≤ s(Mk )D. With the kj values, we can run the machine-driven list-scheduling algorithm in [12]. The algorithm constructs the schedule in real time. Whenever a job completes (or at the beginning of the algorithm), for each idle machine i, we attempt to schedule an unprocessed job j on i subject to two constraints: (i) machine i can only pick a job j if i ∈ Mkj and (ii) all the predecessors of j are completed. 
If no such job j exists, machine i remains idle until a new job is competed. We use S to denote this final schedule; Let ij ∈ Mkj be the machine that process j in the schedule constructed by our algorithm. The following simple observation is the key to prove our O(log m/ log log m)-approximation. Similar observations were made and used implicitly in [12], and in [16] for the problem on identical machines. However, we think stating the observation in our way makes the analysis cleaner and more intuitive. We say a time point t is critical, if some job starts or ends at t. To avoid ambiguity, we exclude these critical time points from our analysis (we only have finite number of them). At any non-critical time point t in the schedule, we say a job j is minimal if all its predecessors are completed but j itself is not completed yet. Observation 4.4. At any non-critical time point t in S, either all the minimal jobs j are being processed, or there is a group k such that all machines in Mk are busy. Proof. All the minimum jobs at t are ready for processing. If some such job j is not processed at t, it must be the case that all machines in Mkj are busy. As time goes in S, we maintain the precedence graph over J ′ , the set of jobs that are not completed yet: we have an edge from j ∈ J ′ to j ′ ∈ J ′ if j ≺ j ′ . At any time point, the weight of a job j is the time needed to complete the rest of job j on ij , i.e, the size of the unprocessed part of job j, divided by sij . If at t, all minimum jobs are being processed, then the weights of all minimal jobs are being decreased at a rate of 1. Thus, the length of the longest path of in the precedence graph is being decreased at a rate of 1. The total length of the union of these time points is at most length of the longest path in the precedence graph at time 0, which is at most X X pj xi,j X pj ≤ 2γD, ≤ max 2γ max H H s ij si j∈H j∈H i∈M 18 where H is over all precedence chains of jobs. The first inequality is by Claim 4.2 and the second inequality is by Constraints (19) and (21) in the LP. If not all the minimal jobs are being processed at time t, then there must be a group k such that all machines in group k are busy, by Observation 4.4. The total length of the union of all these points is at most P X pj X j:kj =k pj = ≤ 2KD, s(Mk ) s(Mkj ) k j∈J by Claim 4.3. Thus, our schedule has makespan at most 2(γ + K)D = O(log m/ log log m)D, leading to an O(log m/ log log m)-approximation for Q|prec|CmaxP . Combining this with Theorem 4.1, we obtain an O(log m/ log log m)-approximation for Q|prec| j wj Cj , finishing the proof of Theorem 1.3. Indeed, as shown in [12], this factor is tight if we use (LPQ|prec|Cmax ). 5 (1.5−c)-Approximation Algorithm for Unrelated Machine Scheduling Based on Time-Indexed LP In this section, we prove Theorem 1.4, using the dependence rounding scheme of [4] as a black-box. We first describe the rounding algorithm and then give the analysis. 5.1 Rounding Algorithm P xi,j,s (s+ Let x be a feasible solution to (LPR||wC ) as stated in Theorem 1.4. Again, we use Cj = i,sP pi,j ) to denote the completion time of j in the LP solution; thus the value of the LP is j wj Cj . P LetP yi,j = s xi,j,s for every pair i, j to denote the fraction of job j that is scheduled on machine i; so i yi,j = 1 for every j. It is convenient to define a rectangle Ri,j,s for every xi,j,s > 0: Ri,j,s has height xi,j,s , with horizontal span being (s, s + pi,j ]. We say Ri,j,s is the rectangle for j on machine i at time s. 
We say a rectangle covers a time point (resp. a time interval), if its horizontal span covers the time point (resp. the time interval). P We can recover the classic 1.5-approximation for R|| j wj Cj using (LPR||wC ). For each job j ∈ J, randomly choose a rectangle for j: the probability of choosing Ri,j,s is xi,j,s , i.e, its height. We shall assign job j to i and let τj be a random number in (s, s + pi,j ]. Then, all jobs j assigned to machine i will be scheduled in increasing order of their τj values. To see this is a 1.5-approximation, fix a job j ∗ ∈ J and condition on the event that j ∗ →i (indicating that j ∗ is assigned to machine i) e and time of j ∗ inithe schedule returned by the algorithm. Notice that h τj ∗ . Let Cj ∗ ibe thehcompletion P ej ∗ |j ∗ →i, τj ∗ ≤ E E C pi,j |τj ∗ + pi,j ∗ . If for some j 6= j ∗ , the selected rectangle ∗ j6=j :j→i,τj ≤τj ∗ for j is Ri,j,s and τj ≤ τj ∗ , then we say the pi,j term in summation inside E[·] on the right side is contributed by the rectangle Ri,j,s. We then consider the contribution of each rectangle Ri,j,s , j 6= j ∗ to the expectation on the right side. The probability that we choose the rectangle Ri,j,s for j is xi,j,s . Under this condition, the probability that τj ≤ τj ∗ is exactly fraction of the portion of Ri,j,s that is before τj ∗ . When this happens, the contribution made by this Ri,j,s is exactly pi,j . Thus, the contribution of Ri,j,s to the expectation is exactly the area of the portion of Ri,j,s before τj ∗ . Since the total height of rectangles on i covering hany time pointi is at most from hP 1, the total contribution i ∗ e ∗ ∗ ∗ ∗ all rectangles on i is at most τj . Thus E Cj |j →i, τj ≤ E + pi,j ∗ ≤ j6=j ∗ :j→i,τj ≤τj ∗ pi,j |τj 19 τj ∗ + pi,j ∗ . Notice that conditioned on choosing rectangle Ri,j ∗ ,s for j ∗ , the expected value of h i P P ej ∗ ≤ τj ∗ is s + pi,j ∗ /2. Thus, E C i,s xi,j ∗ ,s (s + pi,j ∗ /2 + pi,j ∗ ) = i,s xi,j ∗ ,s (s + 1.5pi,j ∗ ) ≤ P 1.5 i,s xi,j ∗ ,s (s + pi,j ∗ ) = 1.5Cj ∗ . This recovers the 1.5-approximation ratio. As [4] already showed, we can not beat 1.5 if the assignments of jobs to machines are independent. This lower bound is irrespective of the LP we use: even if the fractional solution is a convex combination of integral solutions for the problem, independently assigning jobs to machines with probabilities {yi,j }i,j can only lead to a 1.5-approximation. To overcome this barrier, [4] used an elegant dependence rounding scheme, that is formally stated in the following theorem, which we shall apply as a black-box: Theorem 5.1 ([4]). Let ζ = 1/108. Consider a bipartite graph G = (M ∪J, E) between the set M of machines and the set J of jobs. Let y ∈ [0, 1]E be fractional values on the edges satisfying y(δ(j)) = 1 for every job j ∈ J. For each machine i ∈ M , select any family of disjoint Ei1 , Ei2 , · · · , Eiκi ⊆ δ(i) subsets of edges incident to i such that y(Eiℓ ) ≤ 1 for ℓ = 1, · · · , κi . Then, there exists a randomized polynomial-time algorithm that outputs a random subset of the edges E ∗ ⊆ E satisfying (a) For every j ∈ J , we have |E ∗ ∩ δ(j)| = 1 with probability 1; (b) For every e ∈ E, Pr[e ∈ E ∗ ] = ye ; (c) For every i ∈ M and all e 6= e′ ∈ δ(i): (   (1 − ζ) · ye ye′ Pr e ∈ E ∗ , e′ ∈ E ∗ ≤ y e y e′ if ∃ℓ ∈ [κi ], e, e′ ∈ Eiℓ , otherwise. P In the theorem, δ(u) is the set of edges incident to the vertex u in G, and y(E ′ ) = e∈E ′ ye for every E ′ ⊆ E. We shall apply the theorem with G = (M ∪ J, E), with E = {(i, j) : yi,j > 0} and y values being our y values. 
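Before moving on to the groupings, the recovered 1.5-approximation described earlier in this section can be sketched as follows; the rectangle sampling and the per-machine ordering by τ_j follow the description above, while the dictionary interface (x, p) is an assumption made for illustration. This is the independent baseline only, not the dependence rounding algorithm.

```python
import random

def rectangle_rounding(x, p):
    """Sketch of the recovered 1.5-approximation from (LP_{R||wC}):
    for each job j, pick a rectangle R_{i,j,s} with probability x[(i, j, s)],
    draw tau_j uniformly from (s, s + p[(i, j)]], and schedule the jobs assigned
    to each machine in increasing order of tau_j.
    Returns dict job -> (machine, completion time)."""
    jobs = {j for (_, j, _) in x}
    choice = {}
    for j in jobs:
        rects = [(i, s, xv) for (i, j2, s), xv in x.items() if j2 == j and xv > 0]
        i, s, _ = random.choices(rects, weights=[r[2] for r in rects], k=1)[0]
        choice[j] = (i, s + random.uniform(0.0, p[(i, j)]))   # (machine, tau_j)
    schedule = {}
    machines = {i for (i, _) in choice.values()}
    for i in machines:
        mine = sorted((tau, j) for j, (i2, tau) in choice.items() if i2 == i)
        t = 0.0
        for _, j in mine:
            t += p[(i, j)]
            schedule[j] = (i, t)
    return schedule
```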
In the theorem, we can specify a grouping for edges incident to every machine i ∈ M subject to the constraint that the total y-value of all edges in a group is at most 1. The theorem says that we can select a subset E ∗ ⊆ E of edges respecting the marginal probabilities {yi,j }(i,j)∈E , and satisfying the property that exactly one edge incident to any job j is selected, and the negative correlation. The key to the improved approximation ratio in [4] is that for two distinct edges e, e′ in the same group for i, their correlation is “sufficiently negative”. The key to apply Theorem 5.1 is to define a grouping for each machine i. Notice that our analysis for the Pindependent rounding algorithm suggests that the expected completion time for ∗ j is at most i,s xi,j ∗ ,s (s + 1.5pi,j ∗ ); that is, there is no 1.5-factor before s. Thus, for the 1.5approximation ratio to be tight, for most rectangles Ri,j ∗ ,s , s should be very small compared to pi,j ∗ . Intuitively, we define our random groupings such that, if such rectangles for j and for j ′ on machine i overlap a lot, then j and j ′ will have a decent probability to be grouped together in the grouping for i. Defining the Groupings for Machines Our groupings for the machines are random. We choose a τi,j value for every job j on every machine i such that yi,j > 0 (as opposed to choosing only one τj value for a job j as in the recovered 1.5-approximation): choose si,j at random such P that Pr[si,j = s] = xi,j,s /yi,j ; this is well-defined since s xi,j,s = yi,j . Then we let τi,j be a random real number in (si,j , si,j + pi,j ]. 20 Recall P that the expected completion time of j in the schedule given by the independence rounding is i,s xi,j,s (s + 1.5pi,j ). As there is no 1.5-factor before s, we can afford to “shift” a rectangle Ri,j,s to the right side by a small constant times s and use the shifted rectangles to sample {si,j }i,j ej ] can still be bounded by and {τi,j } values. On one hand, after the shifting of rectangles for j, E[C 1.5Cj . On the other hand, the shifting of rectangles for j will benefit the other jobs. Formally, for every machine i and job j with yi,j > 0, we define φi,j = 1 X xi,j,s s yi,j s the average starting time of rectangles for job j on machine i, and θi,j = 0.2(si,j + φi,j ) + 0.4yi,j pi,j . to be the shifting parameter for job j on i. Namely, we shall use the values of {τi,j + θi,j }i,j to decide the order of scheduling jobs. We shall distinguish between good jobs and bad jobs. Informally, we say a job j is good on a machine i, if using the independent rounding algorithm, we can already prove a better than 1.5-factor on the completion time of j, conditioned on that j is assigned to i. Formally, Definition 5.2. Given a job j and machine i with yi,j > 0, we say j is good on i if φi,j + yi,j pi,j ≥ 0.01pi,j . Otherwise, we say job j is bad on i. ej ∗ |j ∗ →i] ≤ Returning to the recovered 1.5-approximation algorithm, we can show that E[C ∗ (1.5 − Ω(1))(φi,j ∗ + pi,j ∗ ), if j is good on i. Either φi,j ∗ ≥ 0.005pi,j ∗ or yi,j ∗ ≥ 0.005. In the former ej ∗ |j ∗ →i] ≤ φi,j ∗ + 1.5pi,j ∗ , which is at most (1.5 − Ω(1))(φi,j ∗ + pi,j ∗ ). In the latter case, we have E[C case, we total area of the portions of rectangles {Ri,j,s : j 6= j ∗ , s} before τj ∗ is smaller than τj ∗ by yi,j ∗ pi,j ∗ /2, in expectation over all τj ∗ . This also can save a constant factor. The formal argument will be made in the proof of Lemma 5.9, where we shall not use the strong negative correlation of the dependence rounding scheme.  
Now we are ready to define the groupings Eiℓ i∈M,j∈[κ ] in Theorem 5.1. Till this end, we fix i a machine i and show how to construct the grouping for i. If a job j is good on i, then (i, j) is not in any group. It suffices to focus on bad jobs on i; keep in mind that for these jobs j, both φi,j /pi,j and yi,j are tiny; thus si,j /pi,j and θi,j /pi,j will also be tiny with high probability. Definition 5.3. A basic block is a time interval (2a , 2a+1 ] ⊆ (0, T ], where a ≥ −2 is an integer. Definition 5.4. For a bad job j on machine i (thus yi,j > 0), we say the edge (i, j) is assigned to a basic block (2a , 2a+1 ], denoted as j (5.4a) (2a , 2a+1 ] i a, if ⊆ (10φi,j , pi,j ], and (5.4b) si,j + θi,j ≤ 2a , and (5.4c) τi,j ∈ (2a , 2a+1 ]. 21 Property (5.4a) requires the block to be inside (10φi,j , pi,j ] and Property (5.4c) requires τi,j to be inside the block. Property (5.4b) requires that for the rectangle for j starting at si,j , after we shift it by θi,j distance, it still contains (2a , 2a+1 ]. With Property (5.4a) and the definition of φi,j , we can prove the following lemma: P yi,j ≤ 10/9. Lemma 5.5. For every machine i and a basic block (2a , 2a+1 ], we always have i j a P Proof. Focus on a job j that is bad on i. We have s≤10φi,j xi,j,s ≥ 9yi,j /10, since otherwise we P shall have s xi,j,s s > 10φi,j · (yi,j /10) = φi,j yi,j , contradicting the definition of φi,j . Thus, yi,j ≤ P i (10/9) s≤10φi,j xi,j,s. If j a, then (2a , 2a+1 ] ⊆ (10φi,j , pi,j ] by Property (5.4a). Thus, (2a , 2a+1 ] is P P xi,j,s ≤ yi,j ≤ (10/9) covered by (s, s + pi,j ] for every s ≤ 10φi,j . Thus, we have i i j j a 10/9. The second inequality used the fact that the total height of rectangles machine i is at most 1. a,s≤10φi,j covering (2a , 2a+1 ] on For every basic block (2a , 2a+1 ], we partition the set of edges assigned to (2a , 2a+1 ] into at most 10 sets, each containing edges with total weight at most 1/8. This is possible since the total weight of all edges assigned to (2a , 2a+1 ] is at most 10/9 by Lemma 5.5, and every bad job j on i has yi,j < 0.01: we can keep adding edges to a set until the total weight is at least 1/9 and then we start constructing the next set; the number of sets we constructed is at most (10/9)/(1/9) = 10 and each set has a total weight of at most 1/9 + 0.01 ≤ 1/8. Now, we can randomly drop at most 2 sets so that we have at most 8 sets remaining. Then we create a group for i containing the edges in the remaining sets. Thus, the total y-value of the edges in this group is at most 1. So, we have defined the grouping for i; recall that for each basic block (2a , 2a+1 ] ⊆ (10φi,j , pi,j ], we may create a group. For a bad job j on i, (i, j) may not be assigned to any group. This may happen if (i, j) is not assigned to any basic block, or if (i, j) is assigned to a basic block, but we dropped the set containing (i, j) when constructing the group for the basic block. Obtaining the Final Schedule With the groupings for the machines, we can now apply Theorem 5.1 to obtain a set E ∗ of edges that satisfies the properties of the theorem. If (i, j) ∈ E ∗ , then we assign j to i; j is assigned to exactly one machine since E ∗ contains exactly one edge incident to j. For all the jobs j assigned to i, we schedule them according to the increasing order of τi,j + θi,j ; ej be the completion time of with probability 1, no two jobs j have the same τi,j + θi,j value. Let C j in this final schedule. 
5.2 Analysis of Algorithm i Notations and Simple Observations From now on, we use j ∼j ′ to denote the event that (i, j) and (i, j ′ ) are assigned to the same group for i, in the grouping scheme of Theorem 5.1. We use i j→i to denote the event that j is assigned to the machine i. Recall that j a indicates the event that (i, j) is assigned to the basic block (2a , 2a+1 ]. From the way we construct the groups, the following observation is immediate: Observation 5.6. Let (2a , 2a+1 ] be a basic block, j 6= j ′ be bad jobs on i. We have h i i i i Pr j ∼j ′ j a, j ′ a ≥ 1 − 2/10 = 0.8. 22 The following observation will be used in the analysis: Observation 5.7. Let j 6= j ′ be bad jobs on i. Then i h i Pr τi,j + θi,j < τi,j ′ + θi,j ′ j ∼j ′ = 1/2. Proof. Fix a basic block (2a , 2a+1 ] such that (2a , 2a+1 ] ∈ (10φi,j , pi,j ] and (2a , 2a+1 ] ∈ (10φi,j ′ , pi,j ′ ] (i.e, Property (5.4a) for both j and j ′ ). Fix a si,j such that si,j + θi,j ≤ 2a (i.e, Property (5.4b)). Condition on this si,j , τi,j is uniformly distributed in (si,j , si,j +pi,j ], and thus τi,j +θi,j is uniformly distributed in (si,j +θi,j , si,j +pi,j +θi,j ] ⊇ i (2a , 2a+1 ]. Thus, conditioned on si,j and j a, τi,j + θi,j is uniformly distributed in (2a , 2a+1 ]. i This holds even if we only condition on j a, since the statement holds for any si,j satisfying si,j + θi,j ≤ 2a . ′ hThe same holds i for (i, j ). For simplicity, denote by e the event τi,j + θi,j < τi,j ′ + θi,j ′ . Thus, Pr e j i a, j ′ i i a and j ′ i a are independent). Conditioned on h i i i i j a, the event is independent of the event e. Thus, Pr e j ∼j ′ , j a, j ′ a = 1/2. i h i This holds for every a; thus, Pr e j ∼j ′ = 1/2. i i a, j ′ a = 1/2 (notice that the events j i j ∼j ′ We define hi,j (τ ) = P s∈[τ −pi,j ,τ ) xi,j,s to be the total height of rectangles for j h that yi,ji,j pi,j is the probability density function for τi,j . Let on i that Ai,J ′ (τ ) = cover τ . It is easy to see Rτ P ′ ′ j∈J ′ τ ′ =0 hi,j (τ )dτ be the total area of the parts of rectangles {Ri,j,s }j∈J ′ ,s that are before τ ; we simply use Ai,j (τ ) for Ai,{j} (τ ). Notice that Ai,J (τ ) ≤ τ for every i and τ ∈ [0, T ]. ′ It is also convenient to define a set of “shifted” rectangles. For every i, j, s, we define Ri,j,s to be ′ the rectangle Ri,j,s shifted by 0.2(s + φi,j )+ 0.4yi,j pi,j units of time to the right. That is, Ri,j,s is the rectangle of height xi,j,s with horizontal span (1.2s + 0.2φi,j + 0.4yi,j pi,j , 1.2s + 0.2φi,j + 0.4yi,j pi,j + pi,j ].3 Notice that 0.2(s + φi,j ) + 0.4yi,j pi,j is the definition of θi,j if si,j = s. To distinguish these rectangles from the original rectangles, we call the new rectangles R′ -rectangles and the original ones R-rectangles. Similarly, we define h′i,j (τ ) to be the total height of R′ -rectangles for j on i that covers τ . Thus, ′ Rτ hi,j variable τi,j + θi,j . Let A′i,j (τ ) = τ ′ =0 h′i,j (τ ′ )dτ ′ to be the total yi,j pi,j is the PDF for the random n o ′ area of the parts of rectangles Ri,j,s that are before τ . Notice that for every i ∈ M and ′ j∈J ,s τ ∈ [0, T ], A′i,J (τ ) ≤ Ai,J (τ ) ≤ τ since we only shift rectangles to the right. For two functions f : [0, T ] → R≥0 and F : [0, T ] → R≥0 , define f ⊗F = Z T f (τ )F (τ )dτ. τ =0 Bounding Expected Completion Time Job by Job To analyze the approximation ratio h i of ∗ ∗ e the algorithm, we fix a job j and a machine i such that yi,j ∗ > 0. We shall bound E Cj ∗ |j →i . 
It  P xi,j∗ ,s  1 1 suffices to bound it by 1.5 − 6000 s yi,j ∗ (s + pi,j ∗ ) = 1.5 − 6000 (φi,j ∗ + pi,j ∗ ). The following lemma gives a comprehensive upper bound that takes all parameters into account: 3 We slightly increase T so that even after shifting, all rectangles are inside the interval (0, T ]. 23 h i ej ∗ j ∗ →i is at most Lemma 5.8. Let I : [0, T ] → [0, T ] be the identity function. Then E C 1.4φi,j ∗ + (1.5 − 0.1yi,j ∗ ) pi,j ∗ − i h h′i,j ∗ ζ X i Pr j ∼j ∗ yi,j pi,j . ⊗ (I − A′i,J ) − yi,j ∗ pi,j ∗ 2 ∗ (23) j6=j h If we throw i away all the negative terms, then we get an 1.4φi,j ∗ + 1.5pi,j ∗ upper bound on ∗ ej ∗ j →i , which is at most 1.5(φi,j ∗ + pi,j ∗ ). If either φi,j ∗ or yi,j ∗ is large, then we can prove E C a better than 1.5 factor; this coincides with our definition of good jobs. For a bad job j ∗ on i, we shall show that the absolute value of the third and fourth term in (23) is large. The third term is large if many rectangles are shifted by a large amount. The fourth term is where we use the strong negative correlation of Theorem 5.1 from [4] to reduce the final approximation ratio. Proof of Lemma 5.8. For notational convenience, let ej denote the event that τi,j +θi,j ≤ τi,j ∗ +θi,j ∗ , for every j ∈ J. h i 1 X 1 ej ∗ ∗ →i × C Pr [j ∗ →i, j→i, ej ] × pi,j E 1 = j Pr[j ∗ →i] yi,j ∗ j i i h  h X 1 i i Pr ej , j ∼j ∗ × Pr j ∗ →i, j→i ej , j ∼j ∗ = yi,j ∗ j6=j ∗     i i ∗ ∗ ∗ + Pr ej , j 6∼j × Pr j →i, j→i ej , j 6∼j × pi,j + pi,j ∗   i  i 1 X h i ∗ ∗ ≤ Pr ej , j ∼j (1 − ζ)yi,j ∗ yi,j + Pr ej , j 6∼j yi,j ∗ yi,j pi,j + pi,j ∗ yi,j ∗ j6=j ∗ h i X X i = Pr [ej ] yi,j pi,j − ζ Pr ej , j ∼j ∗ yi,j pi,j + pi,j ∗ h i ej ∗ j ∗ →i = E C j6=j ∗ = X j6=j ∗ j6=j ∗ Pr [ej ] yi,j pi,j − i h ζ X i Pr j ∼j ∗ yi,j pi,j + pi,j ∗ . 2 ∗ (24) j6=j ej ∗ = The second equality is due to the fact that C P j→i:ej pi,j , conditioned on j ∗ →i. The only i i inequality is due to the third property of E ∗ in Theorem 5.1: conditioned on j ∼j ∗ (j 6∼j ∗ resp.), the probability that j ∗ →i, j→i is at most (1 − ζ)yi,j ∗ yi,j (yi,j ∗ yi,j resp.), independent of the τ and θ values (thus, independent of ej ). The last equality is due to Observation 5.7. We focus on the first term of (24): X Pr[ej ]yi,j pi,j = j6=j ∗ = X Pr [τi,j + θi,j ≤ τi,j ∗ + θi,j ∗ ] yi,j pi,j = j6=j ∗ h′i,j ∗ yi,j ∗ pi,j ∗ ⊗I − h′i,j ∗ yi,j ∗ pi,j ∗ ⊗ (I − A′i,J ) − h′ ∗ h′i,j ∗ ⊗ A′i,J\j ∗ yi,j ∗ pi,j ∗ h′i,j ∗ yi,j ∗ pi,j ∗ ⊗ A′i,j ∗ . (25) h′ To see the second equality, we notice that τi,j ∗ + θi,j ∗ has PDF y ∗i,jp ∗ , τi,j + θi,j has PDF y ∗i,jpi,j i,j i,j i,j and the two random quantities are independent if j 6= j ∗ . Thus, for a fixed τi,j ∗ + θi,j ∗ = τ , the 24 A′ (τ ) i,j ′ probability that τi,j + θi,j ≤ τ is exactly yi,j pi,j ; thus contribution of j is exactly Ai,j (τ ). Summing up over all j 6= j ∗ gives the equality. The third equality is by A′i,J\j ∗ ≡ I − (I − A′i,J ) − A′i,j ∗ . The first term of (25) is X xi,j ∗ ,s Z s+pi,j∗ (τ + 0.2(s + φi,j ∗ ) + 0.4yi,j ∗ pi,j ∗ )dτ h′i,j ∗ ⊗I = yi,j ∗ pi,j ∗ yi,j ∗ τ =s pi,j ∗ s X xi,j ∗ ,s = (s + 0.5pi,j ∗ + 0.2(s + φi,j ∗ ) + 0.4yi,j ∗ pi,j ∗ ) = 1.4φi,j ∗ + 0.5pi,j ∗ + 0.4yi,j ∗ pi,j ∗ . yi,j ∗ s Now focus on the third term of (25) : h′i,j ∗ ⊗ A′i,j ∗ = yi,j ∗ pi,j ∗ yi,j ∗ pi,j ∗ Z T τ =0 h′i,j ∗ (τ ) yi,j ∗ pi,j ∗ Z T τ ′ =0 h′i,j ∗ (τ ′ ) yi,j ∗ pi,j ∗ 1τ ′ <τ dτ dτ ′ = . yi,j ∗ pi,j ∗ 2 (26) The first equality holds since A′i,j ∗ is the integral of h′i,j ∗ . 
The second equality holds since the probability that τ ′ < τ , where τ and τ ′ are i.i.d random variables and are not equal almost surely, is 1/2. Thus, by applying the above two equalities to (25), we have X j6=j ∗ Pr [ej ] yi,j pi,j = 1.4φi,j ∗ + 0.5pi,j ∗ − h′i,j ∗ yi,j ∗ pi,j ∗ ⊗ (I − A′i,J ) − 0.1yi,j ∗ pi,j ∗ . h i ej ∗ |j ∗ →i is at most Applying the above equality to (24), we that E C 1.4φi,j ∗ + (1.5 − 0.1yi,j ∗ ) pi,j ∗ − h′i,j ∗ yi,j ∗ pi,j ∗ ⊗ (I − A′i,J ) − i h ζ X i Pr j ∼j ∗ yi,j pi,j . 2 ∗ j6=j This is exactly (23). With the lemma, we shall analyze good jobs and bad jobs separately. For good jobs, the bound follows directly from the definition: h i ej ∗ j ∗ →i ≤ 1.4991(φi,j ∗ + pi,j ∗ ). Lemma 5.9. If j ∗ is good on i, then E C h i ej ∗ j ∗ →i ≤ 1.4φi,j ∗ + (1.5 − 0.1yi,j ∗ ) pi,j ∗ , by throwing the two negative terms Proof. We have E C in (23). Since j ∗ is good on i, we have φi,j ∗ + yi,j ∗ pi,j ∗ ≥ 0.01pi,j ∗ . h i ej ∗ j ∗ →i ≤ 1.5(φi,j ∗ + pi,j ∗ ) − (0.1φi,j + 0.1yi,j ∗ pi,j ∗ ) E C ≤ 1.5(φi,j ∗ + pi,j ∗ ) − 0.01φi,j ∗ − 0.09(φi,j ∗ + yi,j ∗ pi,j ∗ ) ≤ 1.5(φi,j ∗ + pi,j ∗ ) − 0.01φi,j ∗ − 0.09 × 0.01pi,j ∗ ≤ 1.4991(φi,j ∗ + pi,j ∗ ). Thus, it remains to consider the case where j ∗ is bad on i. The rest of the section is devoted to the proof of the following lemma: h i  ej ∗ j ∗ →i ≤ 1.5 − 1 (φi,j ∗ + pi,j ∗ ). Lemma 5.10. If j ∗ is bad on i, then E C 6000 25 It suffices to give a lower bound on the sum of the absolute values of negative terms in (23): 0.1yi,j ∗ pi,j ∗ + i h h′i,j ∗ pi,j ∗ ζ X i Pr j ∼j ∗ yi,j pi,j ≥ . ⊗ (I − A′i,J ) + yi,j ∗ pi,j ∗ 2 6000 ∗ (27) j6=j h i j ∗ →i ≤ 1.4φi,j ∗ + 1.5 −   1 1 ej ∗ ∗ If the above inequality holds, then we have E C 6000 pi,j ≤ 1.5 − 6000 (φi,j ∗ + pi,j ∗ ), implying Lemma 5.10. To prove (27), we construct a set of configurations with total weight 1, and lower bound the left-side configuration by configuration. We define a configuration U to be a set of pairs in J × {0, 1, 2, · · · , T − 1} such that for every two distinct pairs (j, s), (j ′ , s′ ) ∈ U , the two intervals (s, s + pi,j ] and (s′ , s′ + pi,j ′ ] are disjoint.4 For the sake of the description, we also view (j, s) as the interval (s, s + pi,j ] associated with the job j. Recall that the total height of all R-rectangles on i covering any time point is at most 1. It is a folklore we can find a set of configurations, each configuration U with a zU > 0, such P result that P that U zU = 1 and U ∋(s,j) zU = xi,j,s for every j and s. With the decomposition of the R-rectangles on i into a convex combination of configurations, we can now analyze the contribution of each configuration to the left of (27). For any configuration U , we define a function ℓU : [0, T ] → R≥0 as follows: ℓeU (τ ) = X (j,s)∈U :1.2s+0.2φi,j +0.4yi,j pi,j ≤τ min {τ − (1.2s + 0.2φi,j + 0.4yi,j pi,j ), pi,j } . The definition comes from the following process. Focus on the intervals {(s, s + pi,j ) : (j, s) ∈ U }. We then shift each interval (s, s + pi,j ] to the right by 0.2s + 0.2φi,j + 0.4yi,j pi,j ; notice that this is exactly the definition of θi,j when si,j = s. Then ℓU (τ ) is exactly the total length of the subintervals of the shifted intervals before time point τ . Recalling that A′i,J (τ ) is the total area of the ′ parts of the R′ -rectangles on i before time point τ , and each Ri,j,s is obtained by shifting Ri,j,s by 0.2s + 0.2φi,j + 0.4yi,j pi,j to the right, the following holds: X zU ℓU . (28) A′i,J ≡ U Let cU (j) = {s : (j, s) ∈ U } be the number of pairs in U for the job j. 
Now, we can define the contribution of U to the bound to be   ′ X  ζ hi,j ∗ i Pr[j ∼j ∗ ]pi,j cU (j) . ⊗ I − ℓU + DU := zU 0.1pi,j ∗ cU (j ∗ ) + yi,j ∗ pi,j ∗ 2 ∗ j6=j Claim 5.11. The left side of (27) is exactly P U DU . Proof. Indeed, the three terms in (27) is respectively the sum over all U of each of theP three terms in the definition of DU . For the first and the third term, thePequality comes from yi,j = U zU cU (j) for every j. The equality for the second term comes from U zU (I − ℓU ) = I − A′i,J . 4 Notice that this configuration does not necessarily correspond to a valid scheduling on i, since it may contain two pairs with the same j. 26 Lower Bound the Contribution of Each U Now we fix some configuration U such that zU > 0. Let a be the largest integer such that 2a+1 ≤ 0.9pi,j ∗ ; thus a ≥ −2. We can focus on the basic block (2a , 2a+1 ]. Notice that the length of the basic block is 2a ≥ 0.9pi,j ∗ /4, by the definition of a. The following simple observations are useful in establishing our bounds. ′ a Observation 5.12. If s ≤ 18φi,j ∗ , then Ri,j ∗ ,s will cover (2 , pi,j ∗ ]. ′ a a Proof. The rectangle Ri,j ∗ ,s will cover (2 , pi,j ∗ ] if s + 0.2(s + φi,j ∗ ) + 0.4yi,j pi,j ∗ ≤ 2 , which is s ≤ a ∗ (2 −0.2φi,j ∗ −0.4yi,j pi,j ∗ )/1.2. Since j is bad on i, we have 0.2φi,j ∗ +0.4yi,j ∗ pi,j ∗ ≤ 0.4×0.01pi,j ∗ = 0.004pi,j ∗ . Thus, (2a − 0.2φi,j ∗ − 0.4yi,j pi,j ∗ )/1.2 ≥ (2a − 0.004pi,j ∗ )/1.2 ≥ (0.9/4 − 0.004)pi,j ∗ /1.2 ≥ 0.9/4−0.004 1 a ∗ ∗ ∗ ∗ ∗ 1.2 0.01 φi,j ≥ 18φi,j . Thus, if s ≤ 18φi,j , we have s + 0.2(s + φi,j ) + 0.4yi,j pi,j ≤ 2 . Observation 5.13. For every τ ∈ (2a , pi,j ∗ ], we have h′i,j ∗ (τ ) ≥ 17 ∗ 18 yi,j . ′ Proof. This comes from Observation 5.12. Ri,j ∗ ,s will cover τ if s ≤ 18φi,j ∗ . By the definition of ∗ φi,j and Markov inequality, the sum of xi,j ∗ ,s over all such s is at least 17 18 yi,j . Observation 5.14. If τ ′ is not contained in the interior of any interval in U and τ ≥ τ ′ , then τ − ℓU (τ ) ≥ min {0.2τ ′ , τ − τ ′ }. Proof. Consider the definition of ℓU via the shifting of intervals. Since τ ′ is not contained in the interior of any interval in U , an interval in U is either to the left of τ ′ or to the right of τ ′ . By the way we shifting intervals, any interval to the right of τ ′ will be shifted by at least 0.2τ ′ distance to the right. Thus, the sub-intervals of the intervals in U from max {τ ′ , τ − 0.2τ ′ } to τ will be shifted to the right of τ . The observation follows. Observation 5.15. If τ is covered by some interval (s, s + pi,j ] in U , then τ − ℓU (τ ) ≥ min {0.2(s + φi,j ) + 0.4yi,j pi,j , τ − s} . Proof. The interval (s, s + pi,j ] will be shifted by 0.2(s + φi,j ) + 0.4yi,j pi,j distance to the right. So the sub-interval (max {τ − 0.2(s + φi,j ) − 0.4pi,j , s} , τ ] will be shifted to the right of τ . The observation follows. Equipped with these observations, we can analyze the contribution of U case by case: Case 1: (2a , 0.92pi,j ∗ ] is not a sub-interval of any interval in U . In this case, there is τ ′ ∈ (2a , 0.92pi,j ∗ ) that is not in the interior of any interval in U . By Observation 5.14, any time point in τ ∈ (0.95pi,j ∗ , pi,j ∗ ] has τ − ℓU (τ ) ≥ min {τ − τ ′ , 0.2τ ′ } ≥ min{0.95pi,j ∗ − 0.92pi,j ∗ , 0.2 × 2a } ≥ min {0.03pi,j ∗ , 0.2 × 0.9pi,j ∗ /4} = 0.03pi,j ∗ . By Observation 5.13, any such τ has h′ value at least 17 ∗ 18 yi,j . Thus, the contribution of U is h′i,j ∗ 17yi,j ∗ ⊗ (I − ℓU ) ≥ zU × × (0.03pi,j ∗ ) × (0.05pi,j ∗ ) yi,j ∗ pi,j ∗ 18yi,j ∗ pi,j ∗ zU pi,j ∗ . 
≥ 0.0014zU pi,j ∗ ≥ 6000 DU ≥ zU × Case 2: (2a , 0.92pi,j ∗ ] is covered by some interval (s, s + pi,j ] in U , and 0.2(φi,j + s) + 0.4yi,j pi,j ≥ 0.002 × 2a . Focus on any τ ∈ (1.01 × 2a , 0.92pi,j ∗ ] ⊆ (2a , pi,j ∗ ]. By Observation 5.13, we have 27 ∗ h′ (τ ) ≥ 17 18 yi,j . By Observation 5.15, we have τ − ℓU (τ ) ≥ min{0.2(φi,j + s) + 0.4yi,j pi,j , τ − s} ≥ min {0.002 × 2a , 0.01 × 2a } = 0.002 × 2a . Thus, the contribution of U is h′i,j ∗ 17yi,j ∗ ⊗ (I − ℓU ) ≥ zU × × (0.002 × 2a ) × (0.92pi,j ∗ − 1.01 × 2a ) DU ≥ zU × yi,j ∗ pi,j ∗ 18yi,j ∗ pi,j ∗   zU pi,j ∗ 17 ≥ × 0.002 × 0.9/4 × (0.92 − 1.01 × 0.9/2) zU pi,j ∗ ≥ 0.00019zU pi,j ∗ ≥ , 18 6000 where we used 0.9pi,j ∗ /4 ≤ 2a ≤ 0.9pi,j ∗ /2. Case 3: (2a , 0.92pi,j ∗ ] is covered by some interval (s, s + pi,j ] in U , and 0.2(φi,j + s) + 0.4yi,j pi,j < 0.002 × 2a . If j = j ∗ , then DU ≥ zU × 0.1pi,j ∗ cU (j ∗ ) ≥ zU × 0.1pi,j ∗ . So, we assume j 6= j ∗ . This is where we use the strong negative correlation between j and j ∗ . We shall lower bound Pr[j ∗ i a] i and Pr[j a] separately. 1 φi,j ∗ /4 ≥ 10φi,j ∗ , by the fact that j ∗ is bad on i. Thus, Notice that 2a ≥ 0.9pi,j ∗ /4 ≥ 0.9 × 0.01 a a+1 ′ a a+1 ], (2 , 2 ] ⊆ (10φi,j ∗ , pi,j ∗ ], implying Property (5.4a) for j ∗ . If si,j ∗ = s and Ri,j ∗ ,s covers (2 , 2 17 then Property (5.4b) holds. By Observation 5.12, this happens with probability at least 18 . Thus, we have h i 17 2a 17 0.9 i · ≥ · ≥ 0.21. Pr j ∗ a ≥ 18 pi,j ∗ 18 4 h i i Now, we continue to bound Pr j a . To do this, we need to first prove that j is bad on i. Indeed, φi,j + yi,j pi,j ≤ 5(0.2(φi,j + s) + 0.4yi,j pi,j ) < 5 × 0.002 × 2a ≤ 0.01pi,j , since (s, s + pi,j ] ⊇ (2a , 0.92pi,j ∗ ] ⊇ (2a , 2a+1 ], which is of length at least 2a . This implies that j is bad on i. a+1 ≥ 2a+1 + 0.01 × 2a . Then, s ≤ 0.002 × 2a /0.2 = 0.01 × 2a and s + pi,j > 0.92pi,j ∗ ≥ 0.92 0.9 × 2 a+1 a This implies that pi,j ≥ 2 . Also, 2 ≥ 0.2φi,j /0.002 = 100φi,j ≥ 10φi,j . Thus, Property (5.4a) holds. For every s′ ≤ 50φi,j , we have 1.2s′ + 0.2φi,j + 0.4yi,j pi,j ≤ 60.2φi,j + 0.4yi,j pi,j ≤ 60.2 0.2 (0.2(φi,j + a ≤ 2a . Thus, Property (5.4b) holds if s s) + 0.4yi,j pi,j ) ≤ 60.2 × 0.002 × 2 ≤ 50φ i,j i,j . Notice that 0.2 i a Pr[si,j ≤ 50φi,j ] ≥ 0.98. Under this condition, the probability of j a is at least p2i,j ≥ h i i pi,j ∗ pi,j ∗ Overall, Pr j a ≥ 0.98 × 0.9 4 pi,j ≥ 0.22 pi,j . Thus, by Observation 5.6, h i h i pi,j ∗ pi,j ∗ i i i Pr[j ∼j ∗ ] ≥ 0.8 Pr j ∗ a Pr j a ≥ 0.8 × 0.21 × 0.22 ≥ 0.036 . pi,j pi,j z p i 0.9 pi,j ∗ 4 pi,j . i,j Thus, the contribution of U is DU ≥ zU 2ζ Pr[j ∼j ∗ ]pi,j ≥ 0.018ζzU pi,j ∗ = U6000 . zU pi,j ∗ . By Claim 5.11, the left side of (27) Thus, for every U , the contribution of U is at least 6000 P P zU pi,j∗ pi,j ∗ is U DU ≥ U 6000 = h6000 . Thisifinishes the proof of Lemma 5.10.  ej ∗ |j ∗ →i ≤ 1.5 − 1 (φi,j ∗ + pi,j ∗ ). Deconditioning on j ∗ →i, we So, we always have E C 6000 h i   ej ∗ ≤ 1.5 − 1 P yi,j ∗ (φi,j ∗ + pi,j ∗ ) = 1.5 − 1 Cj . This finishes the proof of have E C i 6000 6000 Theorem 1.4. 28 ∗ 6 Handling Super-Polynomial T P In this section, we show how to handle the case when T is super-polynomial in n for P |prec| j wj Cj P and R|prec| j wj Cj . The way P we handle super-polynomial T is the same as that in [19]. [19] considers the problem R|rj | j wj Cj , i.e, the unrelated machine job scheduling with job arrival times. They showed how to efficiently obtain a (1 + ǫ) approximate LP solution that only contains polynomial number of non-zero variables. 
Since the problem they considered is more general than P R|| j wj Cj and their LP is alsoP a generalization of our (LPR||wC ), their technique can be directly applied to our algorithm for R|| P j wj Cj . Due to the precedence constraints, the technique does not directly apply to P |prec| j wj Cj . However, with a trivial modification, it can handle the precedence constraints as well. We omit the detail here since the analysis will be almost identical to that in [19]. References [1] Nikhil Bansal and Subhash Khot. Optimal long code test with one free bit. In Proceedings of the 2009 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’09, pages 453–462. IEEE Computer Society, 2009. [2] Nikhil Bansal and Janardhan Kulkarni. Minimizing flow-time on unrelated machines. In Proceedings of the Forty-seventh Annual ACM Symposium on Theory of Computing, STOC ’15, pages 851–860. ACM, 2015. [3] Nikhil Bansal and Kirk Pruhs. The geometry of scheduling. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS ’10, pages 407–414. IEEE Computer Society, 2010. [4] Nikhil Bansal, Aravind Srinivasan, and Ola Svensson. Lift-and-round to improve weighted completion time on unrelated machines. In Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC ’16, pages 156–167. ACM, 2016. [5] Abbas Bazzi and Ashkan Norouzi-Fard. Towards Tight Lower Bounds for Scheduling Problems, pages 118–129. Springer Berlin Heidelberg, 2015. [6] Soumen Chakrabarti, Cynthia A. Phillips, Andreas S. Schulz, David B. Shmoys, Cliff Stein, and Joel Wein. Improved scheduling algorithms for minsum criteria, pages 646–657. Springer Berlin Heidelberg, 1996. [7] Deeparnab Chakrabarty, Sanjeev Khanna, and Shi Li. On (1, ǫ)-restricted asgsignment makespan minimization. In Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2015), 2015. [8] C. Chekuri and S. Khanna. Approximation algorithms for minimizing average weighted completion time. Handbook of Scheduling: Algorithms, Models, and Performance Analysis. CRC Press, Inc., Boca Raton, FL, USA, 2004. [9] C. Chekuri, R. Motwani, B. Natarajan, and C. Stein. Approximation techniques for average completion time scheduling. SIAM J. Comput., 31(1):146–166, January 2002. 29 [10] Chandra Chekuri and Sanjeev Khanna. A PTAS for Minimizing Weighted Completion Time on Uniformly Related Machines, pages 848–861. Springer Berlin Heidelberg, 2001. [11] Chandra Chekuri and Sanjeev Khanna. Approximation schemes for preemptive weighted flow time. In Proceedings of the Thiry-fourth Annual ACM Symposium on Theory of Computing, STOC ’02, pages 297–305. ACM, 2002. [12] Fabián A. Chudak and David B. Shmoys. Approximation algorithms for precedenceconstrained scheduling problems on parallel machines that run at different speeds. In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’97, pages 581–590. Society for Industrial and Applied Mathematics, 1997. [13] Tomáš Ebenlendr, Marek Krčál, and Jiřı́ Sgall. Graph balancing: A special case of scheduling unrelated parallel machines. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’08, pages 483–490. Society for Industrial and Applied Mathematics, 2008. [14] Naveen Garg and Amit Kumar. Minimizing average flow-time: Upper and lower bounds. In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’07, pages 603–613. IEEE Computer Society, 2007. [15] Michel X. Goemans. 
Improved approximation algorthims for scheduling with release dates. In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’97, pages 591–598. Society for Industrial and Applied Mathematics, 1997. [16] R. L. Graham. Bounds on multiprocessing timing anomalies. SIAM JOURNAL ON APPLIED MATHEMATICS, 17(2):416–429, 1969. [17] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discrete Math., 4:287–326, 1979. [18] Leslie A. Hall, Andreas S. Schulz, David B. Shmoys, and Joel Wein. Scheduling to minimize average completion time: Off-line and on-line approximation algorithms. Math. Oper. Res., 22(3):513–544, August 1997. [19] Sungjin Im and Shi Li. Better unrelated machine scheduling for weighted completion time via random offsets from non-uniform distributions. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, pages 138–147, 2016. [20] Jeffrey M. Jaffe. Efficient scheduling of tasks without full use of processor resources. Theoretical Computer Science, 12(1):1 – 17, 1980. [21] Klaus Jansen and Lars Rohwedder. On the configuration-LP of the restricted assignment problem. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’17, pages 2670–2678. Society for Industrial and Applied Mathematics, 2017. [22] VS Anil Kumar, Madhav V Marathe, Srinivasan Parthasarathy, and Aravind Srinivasan. Minimum weighted completion time. In Encyclopedia of Algorithms, pages 544–546. Springer, 2008. 30 [23] J. K. Lenstra and A. H. G. Rinnooy Kan. Complexity of scheduling under precedence constraints. Oper. Res., 26(1):22–35, February 1978. [24] J. K. Lenstra, D. B. Shmoys, and É. Tardos. Approximation algorithms for scheduling unrelated parallel machines. Math. Program., 46(3):259–271, February 1990. [25] Stefano Leonardi and Danny Raz. Approximating total flow time on parallel machines. In Proceedings of the Twenty-ninth Annual ACM Symposium on Theory of Computing, STOC ’97, pages 110–119. ACM, 1997. [26] Elaine Levey and Thomas Rothvoss. A (1+epsilon)-approximation for makespan scheduling with precedence constraints using LP hierarchies. In Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing, STOC ’16, pages 168–177. ACM, 2016. [27] Alix Munier, Maurice Queyranne, and Andreas S. Schulz. Approximation Bounds for a General Class of Precedence Constrained Parallel Machine Scheduling Problems, pages 367–382. Springer Berlin Heidelberg, 1998. [28] Cynthia Phillips, Clifford Stein, and Joel Wein. Minimizing average completion time in the presence of release dates. Mathematical Programming, 82(1):199–223, Jun 1998. [29] Maurice Queyranne and Andreas S. Schulz. Approximation bounds for a general class of precedence constrained parallel machine scheduling problems. SIAM J. Comput., 35(5):1241– 1253, May 2006. [30] Maurice Queyranne and Maxim Sviridenko. Approximation algorithms for shop scheduling problems with minsum objective. Journal of Scheduling, 5(4):287–305, 2002. [31] Andreas S. Schulz and Martin Skutella. Random-based scheduling: New approximations and LP lower bounds. In Proceedings of the International Workshop on Randomization and Approximation Techniques in Computer Science, RANDOM ’97, pages 119–133. Springer-Verlag, 1997. [32] Andreas S. Schulz and Martin Skutella. Scheduling unrelated machines by randomized rounding. SIAM J. Discret. Math., 15(4):450–469, April 2002. 
[33] Petra Schuurman and Gerhard J. Woeginger. Polynomial time approximation algorithms for machine scheduling: Ten open problems, 1999. [34] Jay Sethuraman and Mark S. Squillante. Optimal scheduling of multiclass parallel machines. In Proceedings of the Tenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’99, pages 963–964. Society for Industrial and Applied Mathematics, 1999. [35] René A. Sitters. Approximation and online algorithms. chapter Minimizing Average Flow Time on Unrelated Machines, pages 67–77. Springer-Verlag, 2009. [36] Martin Skutella. Convex quadratic and semidefinite programming relaxations in scheduling. J. ACM, 48(2):206–242, March 2001. 31 [37] Martin Skutella. A 2.542-approximation for precedence constrained single machine scheduling with release dates and total weighted completion time objective. Operations Research Letters, 44(5):676 – 679, 2016. [38] Martin Skutella and Gerhard J. Woeginger. A ptas for minimizing the weighted sum of job completion times on parallel machines. In Proceedings of the Thirty-first Annual ACM Symposium on Theory of Computing, STOC ’99, pages 400–407. ACM, 1999. [39] Ola Svensson. Conditional hardness of precedence constrained scheduling on identical machines. In Proceedings of the Forty-second ACM Symposium on Theory of Computing, STOC ’10, pages 745–754. ACM, 2010. [40] Ola Svensson. Santa claus schedules jobs on unrelated machines. In Proceedings of the Fortythird Annual ACM Symposium on Theory of Computing, STOC ’11, pages 617–626. ACM, 2011. 32
A Novel Method for Stock Forecasting based on Fuzzy Time Series Combined with the Longest Common/Repeated Sub-sequence He-Wen Chen, Zih-Ci Wang, Shu-Yu Kuo, Yao-Hsin Chou Dept. of Computer Science and Information Engineering National Chi-Nan University No.1 University Rd., Puli 54561, Taiwan Email : [email protected] Abstract—Stock price forecasting is an important issue for investors since extreme accuracy in forecasting can bring about high profits. Fuzzy Time Series (FTS) and Longest Common/Repeated Sub-sequence (LCS/LRS) are two important issues for forecasting prices. However, to the best of our knowledge, there are no significant studies using LCS/LRS to predict stock prices. It is impossible that prices stay exactly the same as historic prices. Therefore, this paper proposes a state-ofthe-art method which combines FTS and LCS/LRS to predict stock prices. This method is based on the principle that history will repeat itself. It uses different interval lengths in FTS to fuzzify the prices, and LCS/LRS to look for the same pattern in the historical prices to predict future stock prices. In the experiment, we examine various intervals of fuzzy time sets in order to achieve high prediction accuracy. The proposed method outperforms traditional methods in terms of prediction accuracy and, furthermore, it is easy to implement. Keywords-TAIEX forecasting; fuzzy time series; longest common/repeated subsequence I. INTRODUCTION The stock market plays an important role in any country’s economic system because it houses investment targets which are easy to buy/sell and have complete information that is available to the public. The properties mentioned above cause numerous people to invest in the stock market in the hopes of making a profit. In this world, there are many stock markets, but only Taiwan’s stock market is special; it is not only affected by economically powerful countries, such as America, China, Japan, European countries, etc., but also features a great system where stock prices cannot be higher/less than 7% when compared with the previous day’s prices. Moreover, people from all around the world are able to invest in the Taiwanese stock market. Several research studies have focused on issues that are pertinent to the stock market, such as price forecasting, deciding on trading points and stock selection. Price forecasting is typically the foundation of all of these studies. If the results of forecasting systems are made accurate, then decisions regarding trading points and stock selection will result in the acquisition of greater income. Thus numerous mathematical tools have been applied, such as Evolution computation (EC) [1], linear and multi-linear regression (LR, MLR) [2], Artificial neural networks (ANNs) [3], FTS [4], etc. In addition, some researchers [5] recommend a method which uses LCS/LRS to predict stock prices. This method is usually applied to a DNA sequence in order to recognize the particular gene fragment. Most researchers believe that history repeats itself. And so, as a hypothesis, LCS/LRS is a powerful method that is able to identify repeated fragments in the past, and provide more information for forecasting in the future. However, it is difficult to match historic prices accurately, particularly in terms of the second decimal place, because prices are always changing over time. Fortunately, a mathematical tool called “FTS” can be used to solve the mapping issue as it can blur the price and so results in higher success rates in terms of mapping. 
However, FTS has its own disadvantages. One relates to interval design and the other relates to one/higher order fuzzy logical relationships. Previous research has used the highest and lowest prices of historic data to serve as the domain range of FTS, but if the future price is higher or lower than the domain range, then this causes prediction inaccuracy. We therefore use the percentage of rises and falls in price as a domain range, this problem can be avoided. One/higher order fuzzy logical relationships also cannot represent the actual patterns shown in history. This research therefore proposes a novel concept, which combines the features of “LCS/LRS” and “FTS”, and forecasts Taiwan’s stock exchange capitalization weighted stock index (TAIEX) and Taiwan’s Top 50 Exchange Traded Funds (ETF) in order to approximately represent trends in Taiwan’s stock market. This combination allows us to discard the drawbacks of FTS. The experimental results show that the proposed method is more effective than the FTS, and, further, demonstrates the drawbacks of FTS. This article defines LCS/LRS and introduces some related works in Sec. 0 and 0, respectively. Sec. IV discusses the proposed method. Subsequently, some results of FTS and the proposed method are analyzed in Sec. 0, and we provide a conclusion in Sec. VI. II. BACKGROUND LCS/LRS is a string tool. The original LCS is given certain sequences and identifies the common aspects in all of these sequences. For instance, from two sequences S1=abcde and S2=bdscjkde, the aim is to identify the common part which underlies S2, “de”; an algorithm is usually used to compare the common part in multiple gene sequences. And the target of the original LRS is to locate the repeated subsequence of a sequence; for example, S is abcklesabcgkels and the repeated sequence is abc. In this research, the definition of the repeated subsequence is similar to the original LRS. The principle is that by giving another sequence and order numbering, we are able to identify the repeated subsequences as well as the next letter in the sequence, for example, S1=abcklesabcgkels, S2=jeusabc and the order is 3; the repeated subsequences are “abc,k” and “abc,g” respectively. The number of repeated subsequences is influenced by the number of the order, for instance, if the number of the order is 4, from example above, there will be no subsequence similar to “sabc”. The mapping is as follows: Given a sequence to represent the historical data, which is S1=“AiAi…Ai”, where i  {1, 2, … , k}, and k is the number interval. And another sequence S2=“… ”, the algorithm sets a counter to 1 at first, and mapping starts from the last state, which is where the superscript is the same as the counter. It then identifies all of the states that are the same as where the i values are the same. The counter is then added by 1, and the returned LCS/LRS must correspond to , and so on. The algorithm stops when the counter is equal to the number of the order or the returned LCS/LRS is NULL. For example, the number of the interval is 3, and the history data is S1=“A1A3A2A2A1A3A1”. Another sequence is S2=“A2A1A3”. The counter=1, and the number of the order is 3. The algorithm starts from the last state at S1 which is “A3” and returns two LCS/LRS which are “A3A2” and “A3A1” in the first round, respectively. After this, the counter is added by 1 and also returns two LCS/LRS, which are “A1A3A2” and “A1A3A1” in the second round, respectively. 
In the third round, the counter is equal to 3 and only one LCS/LRS is returned, which is “A2A1A3A1” and stops the algorithm. As a result, the order 1 LCS/LRS values are “A3A2” and “A3A1”, the order 2 LCS/LRS values are “A1A3A2” and “A1A3A1”, and the order 3 LCS/LRS value is “A2A1A3A1”, respectively. III. RELATED WORK Forecasting issues can be solved using several methods, which are called soft computing. Among them, researchers often apply FTS in order to predict stock prices since stock data is a time series stream. The concept of FTS for forecasting was first proposed by Song and Chissom [6] in 1993. After that, FTS was improved by high-order relationships [7], combined with EC [4, 7] and two-factors [4, 8], which adds other indexes to help with forecasting, etc. Moreover, Seng and Cheng [9] point out an issue that may reduce the accuracy of forecasting. Traditional FTS maps the price to the interval, which is defined by the user, and uses one/higher order logical relationships in order to represent the historical data. For example, A1  (A1,A3), and A2  (A1,A3) where the left/right hand side of the arrow are called “left/right hand side” (LHS/RHS); if the state is A1, then the next state may be forecast as a number that is a weight combined with A1 and A3. As a result, it is neither A1 nor A3, and the result is not a pattern in history. Thus, they changed the mapping size of the LHS until the RHS is only matched by a subsequence. For example, the results of FTS are A2A1A1 and A1A3A2A3. The experimental results [9] show that the accuracy of forecasting improves the forecasting of the University of Alabama’s enrollments as a result of using the new mapping technique. However, the method may still be invalid for the stock market because stock price data are more complex. But it is nevertheless a good concept, which is somewhat similar to LCS/LRS. However, this being said, LCS/LRS are more powerful tools to explore past data. Furthermore, most researchers make the decisions to use the highest and lowest prices in the historical data for the domain range of FTS, which is a little unreasonable because price mapping may cause fewer relationships of LHS/RHS. Thus, mapping should be relative rather than absolute, such as the percentage of difference on price between today with yesterday. For example, there are four situations that may cause the mapping of a FTS logical relationship to lose its accuracy, which are when the price is in a continuous rise/fall, and when it rises/falls at first and then falls/rises during the forecasting period. Take Figure 1 and Figure 2 for example, the days from 1 to 15 are the training days and the forecasting period begins after day 16. The situation presented in Figure 1 is one in which all prices rise at first then one or two rise as usual and another falls. The situation presented in Figure 2 is the reverse of Figure 1. Each day’s price is represented by the blue line in Figure 1, and before day 15, is “1,2,4,3,4,6,5,7,8,10,9,12,13,13,15”, and after day 16 is “16,18,17,18,19” As a result, any interval design of FTS will cause the forecasting to be limited to the lowest and highest price in history; which means that it cannot exactly forecast the price after day 16. 
The orange line in Figure 1 represents the price falling after day 16; the trend before day 15 may cause the FTS to forecast the future price as rising when the price is actually falling, and as a result no interval design or logical relationship mapping of FTS can avoid a reduced forecasting rate. The two situations presented in Figure 2 are the reverse of Figure 1; however, they reduce the forecasting rate in the same way as those shown in Figure 1. The experimental results show that these situations really do reduce the forecasting rate of FTS; this will be discussed further in Sec. V.

Figure 1. The price rises at first, then continues to rise or fall

Figure 2. The price falls at first, then continues to fall or rise

In conclusion, this research combines, for the first time, the features of LCS/LRS with the concept of fuzzy logic in order to explore historical data, and uses a relative (rather than absolute) relationship for mapping in order to improve performance. The LCS/LRS technique is able to explore the entire set of historical data and so avoids the situations shown in Figure 1 and Figure 2; it can also provide information about real patterns occurring in history. Moreover, FTS increases the mapping rate of LCS/LRS, so that the combination of LCS/LRS and FTS returns longer LCS/LRS values. Finally, relative logical relationship mapping also increases the forecasting rate during the forecasting period.

IV. PROPOSED METHOD

In this section, the universe design is defined first. It is relative to the daily range of Taiwan's stock prices and increases the mapping rate of LCS/LRS. Subsequently, the algorithm returns the forecast value by computing the relationship between the LCS/LRS and its mapping. The algorithm consists of the following six steps:

1) The universe design: The universe of discourse refers to the extent of change of prices, where U = [Dmin, Dmax], Dmin is -7% and Dmax is +7%. These are the limits placed on the percentage rise and fall allowed for stock fluctuation every day, which prevents the forecasted percentage from exceeding the bounds of the corresponding fuzzy set. In order to make seven intervals, the set is divided into seven equal parts, u1, u2, ..., u6 and u7; the fourteen percentage points are divided by seven to obtain the bounds of each part. This makes u1 = [-7, -5), u2 = [-5, -3), u3 = [-3, -1), ..., u6 = [+3, +5), and u7 = [+5, +7).

2) Transform prices to pulses: Historical data are transformed into the rises and falls of prices in percentage form, and the calculation is shown in (1):

Change_{t+1} = (Price_{t+1} - Price_t) / Price_t x 100%    (1)

where Price_t is the stock price at date t. For example, if Price_t is 55.5 and Price_{t+1} is 58.6, the change extent is 5.58%.

3) Fuzzify the historical data: In order to fuzzify the change extents, these are classified into the corresponding sets (defined by Step 1). If the change extent is 5.58%, it belongs to u7 and is therefore fuzzified into A7, as shown in TABLE I. The interval of each forecasting day is defined in this way.

TABLE I. THE MAPPING METHOD OF EACH DAY'S PRICE
Date        Change extent   Universe   Interval
20141226    +5.58%          u7         A7
20141227    +0.65%          u4         A4
20141228    -1.31%          u3         A3
20141229    +1.20%          u5         A5
20141230    -0.55%          u4         A4
20141231    +1.02%          u5         A5

4) Search the LCS: Search for patterns in the training set that match those found during the testing period, where the testing period is the month before the predicted day. The match length ranges from one up to the longest length that can be matched; a match of length one is called degree 1. Accordingly, this search can result in many different lengths of common substrings. For example, assume that the searched-for pattern during the testing period is A3A5A4A5. The set A5 is searched first, and the length of the matched pattern is one, so it is called degree 1. Continuing the search, the longest length, A4A5, also called degree 2 in Figure 3, can be obtained.

Figure 3. The method of finding the same pattern

5) Construct fuzzy logical relationships: Based on the relationships of all degrees, a fuzzy logical relationship is constructed, which yields a fuzzy logical relationship group for every degree. The percentage of rise or fall assigned to an interval is the midpoint of that interval, and the forecasted percentage of a degree is the average of the interval midpoints, weighted by the number of occurrences of each next interval. The forecasted percentage of degree 1 is shown in (2), where Ni is the number of times the next interval is interval i, Mi is the midpoint of interval i, and n is the number of divided intervals, which ranges between two and thirty-five in our proposed model:

Forecasted percent = (sum_{i=1}^{n} Ni x Mi) / (sum_{i=1}^{n} Ni)    (2)

TABLE II. THE TIMES OF THE NEXT SET AND THE FORECASTED PERCENTAGE OF EACH DEGREE
Pattern     Next interval: A1   A2   A3    A4    A5    A6   A7   Forecasted percent
A5                         30   70   150   250   100   80   50   0.082
A4A5                       25   50   100   200   80    60   40   0.162
A5A4A5                     10   20   60    150   50    30   25   0.318
A3A5A4A5                   0    0    10    100   20    10   5    0.620

TABLE II provides an example. In other words, the next interval after A5 is A1 in 30 cases, A2 in 70 cases, etc., and the forecasted percentage for degree 1 is 0.082. The counts of each degree are strictly decreasing, because longer patterns are harder to find than shorter ones.

6) Return the forecasted price: Use the forecasted percentage of rises and falls of each degree to calculate the forecasted price of that degree, as shown in (3). Thus, there is a different forecasted price for every degree, and the average of all of these forecasted prices is the final forecasted price:

Forecasted price = Previous price x (1 + Forecasted percent)    (3)

TABLE III. THE FORECASTED PRICE OF EACH DEGREE
Pattern     Next interval: A1   A2   A3    A4    A5    A6   A7   Forecasted price
A5                         30   70   150   250   100   80   50   108.2
A4A5                       25   50   100   200   80    60   40   116.2
A5A4A5                     10   20   60    150   50    30   25   131.8
A3A5A4A5                   0    0    10    100   20    10   5    162.0

Assume that the testing period ends with ...A3A5A4A5, that the longest matched length is four, and that the previous price is 100 NT. The prices of all degrees are shown in TABLE III, and the final forecasted price is their average, 129.55.

V. EXPERIMENT AND ANALYSIS

There remain some issues that should be settled first, which relate to the interval range and how much historical data should be used. There is another proper tool to calculate the forecast accuracy rate, the mean absolute percentage error (MAPE), defined in (5):

MAPE = (1/n) sum_t |Actual_t - Forecast_t| / Actual_t x 100%    (5)

For example:

TABLE IV. THE COMPARISON OF RMSE AND MAPE
Today price   Next-day actual price (rise 1%)   Predicted price (rise 2%)   RMSE   MAPE
1000          1010                              1020                        10     0.99%
10000         10100                             10200                       100    0.99%

From TABLE IV we can see that the two instances both have a 1% error. Although the second today-price is larger than the first, their error rates should be the same, which is exactly what MAPE reports; however, simply because the second price level is higher, the RMSE becomes larger.
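The Table IV example can be checked with a few lines of Python; the two helper functions below simply implement (4) and (5) and are only meant to illustrate the difference between the two measures.

import math

def rmse(actual, forecast):
    # root mean square error, Eq. (4)
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

def mape(actual, forecast):
    # mean absolute percentage error, Eq. (5), in percent
    n = len(actual)
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / n

# the two one-day cases of TABLE IV: a 1% rise forecast as a 2% rise
print(rmse([1010], [1020]), mape([1010], [1020]))      # 10.0  and about 0.99
print(rmse([10100], [10200]), mape([10100], [10200]))  # 100.0 and about 0.99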
So when the RMSE is big, we cannot conclude with any certainty whether it really is a loss of prediction or whether the sample is too big. A. Experiment 1 How to distinguish intervals in order to achieve high prediction accuracy has been a commonly researched topic in this area since the number of interval affects the forecasted result in FTS. In this experiment, we separate U in equal length from two to thirty-one and apply an ETF from July 2014 to March 2015. Subsequently, the RMSE is calculated for each day. The parameters of the method affect the results of the experiment, and so also affect the best result. The parameters are trained in the following experiment. The number of intervals, which can obtain the best accuracy rate in FTS, is a perplexing and confusing issue for several researchers. Therefore, the numbers of intervals are discussed in experiment 1. This experiment applies ETF from July 2014 to March 2015. The trends of ETF are able to represent Taiwan’s stock market, because the constituent stock of ETF contains Taiwan's top 50 stocks. In order to draw a comparison with Chen’s method [4], the proposed forecast prices in November and December in the TAIEX from 1990 to 2004 are shown in experiments 2 and 3. Figure 4. The RMSE of the different intervals in ETF from July 2014 to March 2015 Different lengths of historical data can receive diverse results using the proposed method, and so this parameter of lengths of historical data is decided by experiment 2. The best parameter obtained from experiments 1 and 2 is used in experiment 3. In order to draw a comparison with Chen’s method [4], we use the root mean square error (RMSE) forecast accuracy rate. RMSE is defined as (4), where n denotes the number of dates that need to be forecasted. √ ∑ ( ) (4) Figure 5. The MAPE of the different intervals in ETF from July 2014 to March 2015 Figure 6. The RMSE of the different intervals of TAIEX from 1990 to 2004 Based on Fig. 7, 8, 9 and 10, the trend shows that odd numbers present lower RMSE values with respect to even numbers, because the bounds of odd numbers can stride across both negative and positive percentages, which makes predictions uncertain. Moreover, when the interval is 3, it can reach the smallest RMSE value in the Fig.4. Because the pulses in ETF are small, the rises and falls of stock prices move up and down mostly between . Given small fluctuations in ETF, this can also explain how the average longest common substring occurs when the interval is 3 in Fig. 11. B. Experiment 2 We use different lengths of training period to predict stock price in November and December 2004 in TAIEX; where this data was taken forward from 2004. Furthermore, we test 2004 because it is the last forecasted year in Chen’s method [4]. When the length is 1, this means that the training period only took place in 2003. When the length of the training period is longer, the RMSE lowers because more historical data exist for forecasting price. So this experiment shows that it is better to use historical data for as long as possible. C. Experiment 3 Chen’s method [4] is an academic authority in the field of fuzzy logic. Chen proposed several meaningful fuzzy methods for forecasting price in TAIEX, and these reformed methods have been shown to obtain better accuracy rates than previous methods. There is no denying that Chen is the most important scholar in the field of fuzzy forecasting, and so the method proposed here is compared with his method as proposed in 2013. Figure 13. 
The average length of the different intervals from July 2014 to March 2015

Figure 14. The RMSE of 2004 with different lengths of training period

The results are shown in TABLE V and TABLE VI, which compare the proposed method with Chen's model from 1990 to 2004. Although the RMSE values are much smaller than those obtained using Chen's method only in 2000 and 2004, the calculations made by the proposed method are much easier to run than theirs, and the results obtained in the other years are nearly the same as those obtained from their method.

TABLE V. A COMPARISON OF THE RMSE VALUES FOR THE DIFFERENT METHODS
Methods             1999     2000     2001     2002    2003    2004    AVG.
[4] Factor1         102.34   131.25   113.62   65.77   52.23   56.16   86.89
[4] Factor2         102.11   131.3    113.83   66.45   52.83   54.17   86.78
[4] Factor3         103.52   131.36   112.55   66.23   53.2    55.36   87.03
Ours (interval=3)   102.8    117.58   114.64   65.92   52.55   51.28   84.12
Note: factor 1 is Dow Jones, factor 2 is NASDAQ, and factor 3 is M1b.

TABLE VI. A COMPARISON OF THE RMSE VALUES FOR THE DIFFERENT METHODS
Methods             1990-1993   1993-1996   1996-1999   1999-2002   2001-2004
[4] Factor1         92.32       75.26       101.7       103.25      71.95
[4] Factor2         92.33       74.75       101.375     103.42      71.82
[4] Factor3                                             103.42      71.84
Ours (interval=3)   91.47       73.12       105.6       100.23      71.1
Note: factor1 is Dow Jones, factor2 is NASDAQ, and factor3 is M1b.

Figure 12. The MAPE of the different intervals of TAIEX from 1990 to 2004

Chen's method performs poorly in 1990 and 2000. Fig. 10 and Fig. 11 show the TAIEX prices in 1990 and 2000; it is obvious that they meet the condition we mentioned in Sec. III. The prices in both 1990 and 2000 fall continuously from January to October; in one year the price keeps falling in November and December, while in the other it keeps rising in November and December. These two situations cause the RMSE values of those years to be larger than those of other years.

Figure 10. The price of TAIEX in 1990

Figure 11. The price of TAIEX in 2000

VI. CONCLUSION AND FUTURE WORK

Our research uses LCS/LRS to identify historical stock price trends. However, because prices cannot be exactly matched, we fuzzify these prices into a fuzzy set. The traditional application of FTS uses the highest and lowest prices in the historical data as the upper and lower bounds of the fuzzy set, which can cause the predicted price to overflow the boundary and lose forecast accuracy. The proposed method therefore uses the percentage of rises and falls of the stock price as fuzzy intervals. Taiwan's stock markets, such as ETF and TAIEX, have a limit of 7% imposed on rising/falling, and so our proposal need not take this problem into consideration. The experimental results show that when the interval is three and of equal length, performance is better than with other numbers of intervals, and that odd numbers of intervals present lower RMSE values than even numbers. Our proposed method is easy to implement and still achieves good results. In the future, certain parameters could be adjusted. First, perhaps we can use the percentage of rises and falls over every two or three days, etc. Second, in our proposed method every degree is assigned an equal weight, but perhaps this weight can be adjusted; namely, the results may be better if higher degrees are assigned more weight. Finally, when referring to historical data, more recent data might be given more weight than older data.

REFERENCES

[1] S. S. Lam, "A genetic fuzzy expert system for stock market timing," in Proceedings of the IEEE Conference on Evolutionary Computation- IEEE CEC, Seoul, May 2001, pp. 410-417.
[2] D. E. Koulouriotis, I. E. Diakoulakis, D. M. Emiris, and C. D.
Zopounidis, "Development of dynamic cognitive networks as complex systems approximators: Validation in financial time series," Applied Soft Computing- Appl. Soft Comput., vol. 5, pp. 157-179, 2005. [3] G. Armano, M. Marchesi, and A. Murru, "A hybrid genetic-neural architecture for stock indexes forecasting," Information Sciences- Inf. Sci., vol. 170, pp. 3-33, 2004. [4] S. M. Chen, G. M. T. Manalu, J.-S. Pan, and H.-C. Liu, "Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization techniques," IEEE Transactions on Cybernetics- IEEE Trans. Cybern., vol. 43, pp. 11021117, 2013. [5] P. F. o. Marteau, "Time warp edit distance with stiffness adjustment for time series matching," IEEE Transactions on Pattern Analysis and Machine Intelligence- IEEE Trans. Pattern Anal. Mach. Intel., vol. 31, pp. 306-317, 2009. [6] Q. Song and B. S. Chissom, "Fuzzy time series and its models," Fuzzy Sets and Systems- Fuzzy Sets Syst., vol. 54, pp. 269-277, 1993. [7] S. M. Chen and N. Y. Chung, "A new method to forecast enrollments using fuzzy time series and genetic algorithm," International Journal of Intelligent Systems- Int. J. Intell. Syst., vol. 2, pp. 234-244, 2006. [8] C. D. Chen and S. M. Chen, "A new method to forecast the TAIEX based on fuzzy time series," in Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics- IEEE SMC, San Antonio, Texas, U. S. A., October 2009, pp. 3550-3555. [9] T. L. Sheng and Y. C. Cheng, "Deterministic fuzzy time series model for forecasting enrollments," Computers and Mathematics with ApplicationsComput. Math. Appl., vol. 53, pp. 1904-1920, 2007.
6D Pose Estimation using an Improved Method based on Point Pair Features Joel Vidal, Chyi-Yeu Lin Robert Martí Department of Mechanical Engineering National Taiwan University of Science and Technology Taipei, Taiwan e-mail: [email protected]; [email protected] Computer Vision and Robotics Group University of Girona Girona, Spain e-mail: [email protected] Abstract—The Point Pair Feature [4] has been one of the most successful 6D pose estimation method among model-based approaches as an efficient, integrated and compromise alternative to the traditional local and global pipelines. During the last years, several variations of the algorithm have been proposed. Among these extensions, the solution introduced by Hinterstoisser et al. [6] is a major contribution. This work presents a variation of this PPF method applied to the SIXD Challenge datasets presented at the 3rd International Workshop on Recovering 6D Object Pose held at the ICCV 2017. We report an average recall of 0.77 for all datasets and overall recall of 0.82, 0.67, 0.85, 0.37, 0.97 and 0.96 for hinterstoisser, tless, tudlight, rutgers, tejani and doumanoglou datasets, respectively. Keywords- 6D pose estimation; 3D object recognition; object detection, 3D computer vision, robotics; range image I. challenging set of datasets presented recently at the SIXD Challenge 2017 [1] organized at the 3rd International Workshop on Recovering 6D Object Pose at ICCV 2017. II. THE POINT PAIR FEATURES APPROACH The proposed method follows the basic structure of the Point Pair Feature(PPF) approach defined by Drost et al. [4] which consists of two stages: global modeling and local matching. The main idea behind this approach is to find, for each scene point, the corresponding model point and their rotation angle, defining a transformation from two points, their normal and the rotation around the normal. This correspondence is found by using a four-dimensional feature (fig. 1), defined between every pair of two points and their normals, so that each model point is defined by all the pairs created by itself and all the other model points. INTRODUCTION 3D Object Recognition and specially 6D pose estimation problems are crucial steps in object manipulation tasks. During the last decades, 3D data and feature-based methods have gained reputation with a wide range of presented model-based approaches [13]-[17]. In general, the model-based methods are divided in two main pipelines: Global and Local. Global approaches [14][17] describe the whole object or its parts using one global description. Local approaches [13] describe the object by using local descriptions around specific points. Global descriptions usually require the segmentation of the target object or its parts and they tend to ignore local details which may be discriminative. These characteristics make global approaches not robust against occlusion and highly cluttered scenes. On the other hand, local approaches are usually more sensitive to sensor noise due to their local nature and they tend to show a lower performance on symmetric object or objects with repetitive features. On this direction, the Point Pair Features approach, proposed by Drost et al. [4], has been proved [9] one of most successful methods showing strong attributes as a compromise solution that integrates benefits of both local and global approaches. Among several proposed extensions of the method, Hinterstoisser et al. 
[6] analyzed some of the weakest points and proposed an extended solution that provided a significant improvement in presence of sensor noise and background clutter. This paper proposes a new variation of this method and test its performance against the Figure 1. Point Pair Feature definition Initially, on the global modeling stage, the input model data is preprocessed by subsampling the data. Then, a 4dimensional lookup table storing the model pairs is constructed by using the discretized PPF as an index (fig. 2). This table will provide a constant access to all the model correspondence reference points and their rotation angles for each cell pointed by a discretized PPF feature obtained from a scene pair. Figure 2. Example of global modeling. Three pairs of points are stored in the 4D Lookup table using their discretized PPF feature as an index. During the local matching stage, the input data is preprocessed using the same techniques as the modeling part. For each given scene point, all the possible PPF are discretized and used as an index of the lookup table, obtaining a set of pairs of model points and rotation angles representing all the possible corresponding candidates. Each of these candidates casts a vote on a table, in a Hough-like voting scheme, where each value represents a hypothesis transformation defined by a model point and a rotation angle (fig. 3). Then, the peak value will be extracted as the best candidate for this scene point correspondence. Finally, all the hypotheses obtained from the scene points are clustered and a set of postprocessing steps are applied to extract the best one. (a) (b) (c) Figure 4. Scheme of the neigbhour cheking to account for sensor noise during quantization. (a) On dimension representation of the noise effect during quantization of a PPF value. (b) Solution proposed by Hinterstoisser at al. [6] checking the 80 niegbors of the 4D table. (c) Our proposed solution, checking up to 15 neigbors of the 4D table. Figure 3. Example of local matching. For a point of the scene, a given pair is matched using the 4D lookup table obtaining two corresponding candidates, which defines two hypothesis transformations. Each candidate cast a vote on the voting table. III. IMPROVEMENTS Following the analysis and reasoning introduced by [6], we propose a set of new improvements for the basic PPF method. On the preprocessing part, during the subsampling step, the point pairs with angles between normals larger than 30 degrees are kept, but using a local clustering approach instead. In addition, after the clustering, an additional filtering step is proposed to remove non-discriminative pairs between neighboring clusters. In practice, this clustering area size is the same as the subsampling step. To improve the run-time performance of the method during the local matching stage, only the points pairs with a distance smaller than the model diameter are checked by using a kd-tree structure. Following the ideas proposed by [6] the system avoids casting a vote twice for the same discretized PPF and rotation angle and also checks all the PPF index neighbors to account for sensor noise. Instead of checking all 80 neighbors in the lookup table, we propose a more efficient solution, by only voting for those neighbors that have big chances to be affected by noise checking the quantization error (fig. 4). After clustering the hypothesis, a simplified viewdependent re-scoring process is used on the most voted 500 hypothesis. 
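The quantization-aware neighbour voting of Fig. 4 can be illustrated with the following sketch; it is not the implementation used in this work, and the margin threshold is an arbitrary value chosen for illustration.

import numpy as np
from itertools import product

def ppf(p1, n1, p2, n2):
    # the four-dimensional point pair feature of Drost et al. [4]
    d = p2 - p1
    def angle(a, b):
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(c, -1.0, 1.0))
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

def bins_to_vote(f, dist_step, angle_step, margin=0.2):
    # discretize the feature and also return the neighbouring bins whose
    # quantization error is small enough to be attributed to sensor noise;
    # 'margin' is an illustrative threshold, not a value from the paper
    steps = np.array([dist_step, angle_step, angle_step, angle_step])
    q = f / steps
    base = np.floor(q).astype(int)
    candidates = []
    for dim in range(4):
        vals = [base[dim]]
        frac = q[dim] - base[dim]
        if frac < margin:            # close to the lower bin boundary
            vals.append(base[dim] - 1)
        elif frac > 1.0 - margin:    # close to the upper bin boundary
            vals.append(base[dim] + 1)
        candidates.append(vals)
    # the base bin plus at most 2^4 - 1 = 15 neighbours
    return [tuple(v) for v in product(*candidates)]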
In this process, the hypotheses are reordered based on how well they fit to the scene data. In addition, to improve the score robustness, a Projective ICP [11] refinement is performed on the first 200. Finally, two filtering postprocessing steps are applied to discard special ambiguous cases such as planes and partially matching surfaces. The first step checks for non-consistent points removing hypothesis that are partially fitting the scene but do not have enough consistence with the scene points. A second step checks the overlapping rate of the object silhouette with respect to the scene edges in order to filter well-fitting objects with non-matching boundaries. Figure 5. Some of the models used in the datasets TABLE I. DATASETS MODELS AND RGB-D TEST IMAGES Datasets Models Test images Hinterstoisser et al. [5][2] 15 18273 T-Less [7] 30 10080 TUD Light 3 23914 Rutgers APC [10] 14 5964 Tejani et al. [12] 6 2067 Doumanoglou et al. [3] 2 177 IV. DATASETS The SIXD Challenge 2017 [1] proposed a set of datasets for evaluating the task of 6D localization of a single instance of a single object. The mentioned datasets, shown in Table 1, are hinterstoisser, tless, tudlight, rutgers, tejani and doumanoglou. Each dataset contains a set of 3D objects models and RGB-D test images. The proposed scenes cover a wide range of cases with a variety of objects in different poses and environments including multiple instances, clutter and occlusion. The six datasets contain a total of 68 different object models (fig. 5) and 60475 test images. Notice that rutgers, tejani and doumanoglou are reduced versions and the doumanoglou's models are also included in tejani. V. RESULTS The proposed method was evaluated using the Visible Surface Discrepancy (VSD) [8] metric setting the parameters δ, τ and t to 15 mm, 20 mm and 0.35, respectively. All datasets were evaluated using the same parameters for all models and scenes. Figure 6. Recall rate for each dataset and total average Specifically, the used subsampling leaf was defined as 0.05 of the model diameter and the PPF quantization was set to 20 divisions for distance and 15 divisions for angles. (a) (b) (c) (d) (e) (f) Figure 7. Exemples of results for all datasets: (a) hinterstoisser, (b) tless, (c) tudlight, (d) rutgers, (e) tejani and (f) doumanoglou. The obtained overall recall rate for each dataset and the total average recall are shown in fig. 6. In addition, the individual recall rate for each of the models for all datasets is shown in fig. 8. The results show a high and consistent performance across most datasets with an average recall rate of 0.77. The tejani and doumanoglou datasets shows the highest overall recognition rate with a recall value of 0.97 and 0.96 respectively. Tudlight, hinterstoisser and tless shows a recall rate of 0.85, 0.82 and 0.67. Finally, with the lowest recognition performance, the rutgers dataset shows an overall recall rate of 0.37. Rutgers is a challenging dataset as their CAD object models with few features are easily mismatched with the bookshelf-like structure scenes (fig. 7d), making more challenging the extraction of contour information and increasing the false negative cases. Figure 7 shows an example of the results obtained for all datasets. VI. CONCLUSION This work proposes a new improved variation of the PPF approach and tests its performance against the recently released set of datasets introduced at the SIXD Challenge 2017 [1] including 68 object models and 60475 test images. 
The proposed method introduces a novel subsampling step with normal clustering and neighbor pairs filtering, in addition to a faster kd-tree neighbor search and a more efficient solution to decrees the effect of the sensor noise during the quantization step. Finally, the method uses several postprocesing steps for re-scoring, refining and filtering the final hypothesis. The obtained results, using the VSD [8] metric, show a high and consistent performance across most datasets with an average recall of 0.77, with the exception of rutgers dataset, which shows a significant lower rate. (a) (b) (c) (d) (e) (f) Figure 8. Recall rate for each model of all the datasets: (a) hinterstoisser, (b) tless, (c) tudlight, (d) rutgers, (e) tejani and (f) doumanoglou. REFERENCES [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] SIXD Challenge 2017. http://cmp.felk.cvut.cz/ sixd/challenge_2017/. Accessed: 2017-9-28. E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and C. Rother, “Learning 6D Object Pose Estimation Using 3D Object Coordinates,” In Proceedings of the European Conference on Computer Vision (ECCV), 2014. A. Doumanoglou, R. Kouskouridas, S. Malassiotis, and T. K. Kim, “Recovering 6d object pose and predicting next-best view in the crowd,” In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. B. Drost, M. Ulrich, N. Navab, and S. Ilic, “Model globally, match locally: Efficient and robust 3d object recognition,” In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010, pp. 998–1005. S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab, “Model Based Training, Detection and Pose Estimation of Texture-Less 3D Objects in Heavily Cluttered Scenes,” Springer Berlin Heidelberg, Berlin, Heidelberg, 2013, pp. 548–562. S. Hinterstoisser, V. Lepetit, N. Rajkumar, and K. Konolige, “Going Further with Point Pair Features,” In Proceedings of the European Conference on Computer Vision (ECCV), 2016. T. Hodan, P. Haluza, . Obdrlek, J. Matas, M. Lourakis, and X. Zabulis, “T-LESS: An RGB-D Dataset for 6d Pose Estimation of Texture-Less Objects,” In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Mar. 2017, pp. 880–888. T. Hodan, J. Matas, and S. Obdrzalek, “On Evaluation of 6D Object Pose Estimation,” In ECCV Workshop, 2016 L. Kiforenko, B. Drost, F. Tombari, N. Krger, and A. Glent Buch, “A performance evaluation of point pair features,” Computer Vision and Image Understanding, Sept. 2017. C. Rennie, R. Shome, K. E. Bekris, and A. F. D. Souza, “A dataset for improved rgbd-based object detection and pose estimation for warehouse pick-and-place,” IEEE Robotics and Automation Letters, July 2016, 1(2):1179–1185. S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” In Proceedings Third International Conference on 3-D Digital Imaging and Modeling, 2001, pp. 145–152. A. Tejani, D. Tang, R. Kouskouridas and T.-K. Kim, “Latent-class hough forests for 3d object detection and pose estimation,” In European Conference on Computer Vision, 2014, pp. 462–477. Y.Guo, M. Bennamoun, F. Sobel, M. Lu, J. Wan, “3D Object Recognition in Cluttered Scenes with Local Surface Features: A Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(11) :2270-2287. R. Rusu, G. Bradski, R. Thibaux, and J. Hsu, “Fast 3D recognition and pose using the viewpoint feature histogram,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2010, pp. 2155–2162. R. 
Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, “Shape distributions,” ACM Trans. Graphics, vol. 21, no. 4, pp. 807–832, 2002. E. Wahl, G. Hillenbrand, and G. Hirzinger, “Surflet-pair-relation histograms: A statistical 3D-shape representation for rapid classification,” In Proceedings Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 2003), 2003, pp. 474–481. R. Schnabel, R. Wahl, and R. Klein, “Efficient RANSAC for point-cloud shape detection,” Computer Graphics Forum, vol. 26, pp. 214–226, 2007.
Scene-Specific Pedestrian Detection Based on Parallel Vision arXiv:1712.08745v1 [] 23 Dec 2017 Wenwen Zhang, Kunfeng Wang, Member, IEEE, Hua Qu, Jihong Zhao, and Fei-Yue Wang, Fellow, IEEE

Abstract—As a special type of object detection, pedestrian detection in generic scenes has made significant progress when trained with large amounts of manually labeled data. However, models trained on generic datasets perform poorly when they are used directly in specific scenes. With their particular viewpoints, illumination and backgrounds, datasets from specific scenes differ substantially from generic-scene datasets. To make generic pedestrian detectors work well in a specific scene, labeled data from that scene are needed to adapt the models to it. Labeling data manually, however, costs a great deal of time and money; for every new specific scene, large numbers of images must be labeled again. Moreover, manual annotations are not pixel-accurate, and different annotators produce different labels. In this paper, we propose an ACP-based method: with the help of augmented reality, we build a virtual world of the specific scene and make virtual pedestrians walk wherever they could plausibly appear, thereby addressing the lack of labeled data. The results show that data from the virtual world are helpful for adapting generic pedestrian detectors to specific scenes.

Fig. 1: Examples of surveillance equipment in specific scenes: (a) CMU indoor vision research, (b) sub-open scene (subway station), (c) MIT surveillance dataset from a traffic scene, (d) Town Center scene.

I. INTRODUCTION

Detecting pedestrians in video sequences from specific scenes is very important for the surveillance of traffic scenes and other public places such as squares or supermarkets [1]. It can be used to analyze abnormal pedestrian behavior in order to manage crowds or prevent unexpected events. As Fig. 1 shows, many scenes (specific scenes) are equipped with static cameras to record the events that happen there, after which the content of the captured videos is interpreted and useful information is extracted from the video sequences.

This work was partly supported by National Natural Science Foundation of China under Grant 61533019, Grant 71232006, and Grant 91520301. Wenwen Zhang is with School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China, and also with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]). Kunfeng Wang (Corresponding author) is with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with Qingdao Academy of Intelligent Industries, Qingdao 266000, China (e-mail: [email protected]). Hua Qu is with School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China. Jihong Zhao is with Department of Communication and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China.
Fei-Yue Wang is with The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China, and also with the Research Center for Computational Experiments and Parallel Systems Technology, National University of Defense Technology, Changsha 410073, China (e-mail: [email protected]). In this process, pedestrian detection is the prerequisite for the subsequent event identification. In recent years, with the improvement of computing power, machine learning, especially deep learning [2] has been playing a critical role in many topics of computer vision and makes a great progress, such as in image classification and object detection. As a special object detection, pedestrian detection in generic scenes have achieved a great progress with large dataset trained detectors , while the pedestrian detectors in generic scenes work bad when they are directly used for pedestrian detection in specific scenes because of different geometry information, viewpoint, illuminations, resolutions, background and so on [3] between specific scenes and generic scenes. Although the generic pedestrian datasets presents intra-class variations, each of the datasets still has its own inherent bias for the different methods collecting the data. For example, since the INRIA [4] dataset is taken from the personal digital images collections, many of the people in the dataset are intent to make poses facing the cameras [5]. This may be different from the natural pedestrian poses and actions from real-life situations. For the dataset of Daimler and Caltech [6], the pedestrians are captured from the on-board cameras from vehicles, with different angles and view-points. In fact, the objective of labeled dataset is to provide the learned classifier with large inner-class variations of pedestrian Fig. 2: The process of rebuilding virtual scene and generating synthetic data with accurate labeling information. A. scene rebuilding and simulate pedestrians in scene(red box): (1) rebuild scene from geometry information (2) simulate pedestrians (3) generate synthetic data from virtual scene. B. Labeling information Generated(blue box): (1) get the location information of pedestrian in photo through vertex render from 3d model. (2) obtain the final bounding box through computing. and non-pedestrian so that the resulting classifier can be generalizable to never-before-seen test dataset [5]. However, the pedestrians of generic scene and specific scenes with different angles, viewpoints and backgrounds are in different variable spaces, the pedestrian detectors in generic scenes works bad if they are used to the specific scenes directly. For appearance-based pedestrian detectors, a large amount of data is important to train the appearance models.A large amount of data happens to be lacked for the high cost to obtain. It is necessary to adapt pedestrian detection models in generic scenes to specific scenes in order to achieve a better result, so called ‘domain adaption’ [5]! So far, there’s much work done in applying the specific scene pedestrian detector to generic scenes. Wang et al. [7] uses transfer learning to transfer the generic pedestrian detectors to the specific scenes with little target labeled data. Hottori at al. [8] make the 3d models as the proxy of the real world and train hundreds of classifiers in the scene of each possible position. 
Besides those, some researchers combine the synthetic and real data to train the specific pedestrian detector to overcome the lack of data. Different from Hattori et al.’s idea, we directly rebuild the virtual world and simulate the pedestrian in the virtual scene with real background, and then collect a large amount of labeled synthetic data. In this paper, our main goal is to solve the problem of lacking labeled data in the specific scene based on the parallel vision theory. Parallel Vision theory was proposed by Wang et al. [9], [10], [11] based on ACP (Artificial systems, Computational experiments, and Parallel execution) theory [12], [13], [14], [15] attempting to solve the vision problems in the real world. Parallel Vision System follow the idea of ”We can only under- stand what we create ”. For parallel vision, A(CP) references to building the virtual world corresponding to the related real world, then collecting data from the virtual world; (A)C(P) references to obtain knowledge of the real world through Computing Experiments, finally applying the knowledge to both virtual world and real world to evaluate the effectiveness of the algorithm, namely Parallel Execution in (AC)P. Based on the idea of parallel vision, we mainly focus on building the virtual scene and generating a large amount of synthetic data from the virtual scene, and then we execute the Computational experiments with the generated data for pedestrian detection.With the virtual scene, we can generate much data for Computational experiments with labeling information, what’s more, the virtual scene can free researchers from the annoying process of preparing data and make them focus on algorithms. The process of data generating is showed in Fig. 2, more details can be found in Section 3. Our contributions is proposing a method for training pedestrian detectors in specific scenes when new surveillance equipments are set up or there is no labeled data in a specific scene. And there are two purpose in this paper (1): building virtual scene for specific scenes to generate synthetic data with labeling information and (2): validating that the synthetic data is helpful to train a specific scene pedestrian detector, namely Computing Experiment in ACP theory. The remainder of this paper is organized as follows:Section II describes the related works. The virtual world are built, together with the process of generation of dataset and ground truth bounding boxes in Section III. In Section IV, experimental results show that the synthetic data from virtual world is useful in training the pedestrian detectors in the real world dataset. At last but not the least, Section V is the conclusion. II. R ELATED W ORK As a part of ACP method,the virtual scene, also as virtual world is important. Changing the motivation and appearance of target objects can generate a large mount of labeled data just as collecting from the real world! In such way, data-driven method can easily achieve the hoped goal. As described by Bainbridge [16], based on Video Game and Computer Game, the virtual world can simulate the complex physic world in vision and offers a new space for scientific research. In fact, as part of parallel vision, the idea of using synthetic data generated from 3D models and virtual scenes for object detection is not new. So far, there’s much work been done based on the virtual world. Here, we mainly talk about some applications related to our work. 
Virtual Scene In order to generate synthetic data, the first step is to build up the virtual scene. With the development of GPU computing power, computer game and 3d model, building up a virtual scene based game engine and 3d model software like 3ds max, blender and so on becomes easy. Prendinger et al. [17] built Virtual Living Lab for traffic system simulation and analysis of driver behaviors based on OpenStreenMap, CityEngine and Unity 3D et al. Computer Game and 3d model softwares. In the Virtual Living Lab, they generated the traffic road net based on the free map data,and sense the context based on the interaction between the agent on board and the agent at the roadside. Karamouzas and Overmars [18] use the virtual scene to evaluate the proposed pedestrian motivation model. The virtual scene is also used for camera networks research [19], [20]. In our work, the main goal is to apply parallel vision concepts in specific scene, it is important to rebuild the same scene of the specific scenes, so we directly put variable pedestrians in the image scene based on the technology of augment reality, which is similar to the the work done by Hattori et al. [8], synthetic data generated based on the background of the specific scene. 3D Model for Detection For the similarities between real life object and virtual object in appearance and motivation, many researchers attempt to build a virtual proxy to stand for the real life object and achieve good results. Brooks [21]described a model-based vision system to model image features. Dhome et al. [22]presents a method to estimate the spatial attitude of articulated object from a single perspective image. There’s much work done in modeling human body shape [23], pedestrian pose [24], hand gesture[25] and so on. Hattori et al. [8] use 3ds max model to train pedestrian detectors at each possible pixel in specific scene with static camera, and Marin et al. [26] train a pedestrian detector in mobile scenario using virtual model. In our artificial scene, we also model the pedestrians from 3ds Max. First, the model from modern 3d model software is similar to the real life pedestrians; what’s more, with the synthetic data, we can easily obtain large mount of groundtruth labeled data automatically. (a) (b) Fig. 3: Output different formated image with solid color(black or white) background for following labeling work. (a) output with black-white object and background, (b) output with different color for different pedestrian to show the overlapping relationship. Scene Adaption In order to train a powerful model of pedestrian detection,generic data are collected through all possible way to cover all the possible shape and appearance of pedestrian.However, all the existent dataset are not satisfy the principal and goal, for the variable scenes with variable viewpoint, lighting conditions and so on [7]. In the specific scene, more effort needs to adapt the pedestrian model to a specific scene. So adapting the pre-trained models to a new domain has been an active area of research [7], [27]. Our work is also to adapting a generic pedestrian detector to a specific scene without real data, similar to their work [8], but we are not train many detectors in each possible position. We train the generic pedestrian detector with synthetic data, and make sure that it is possible to use synthetic data generated from virtual world as a proxy of the real life. III. S YNTHETIC DATASET A ND M ETHODS Fig. 
2 gives a pictorial illustration of the process of generating the synthetic dataset and labeling information from the virtual scene. And we evaluate the synthetic data with DPM [28] and Faster R-CNN [29] methods. A. Synthetic Dataset In this paper, we use 3ds max to rebuild the virtual scene with known geometry information. With the virtual scene in 3D model, then we place the pedestrians in target scene and simulate pedestrians walking in the scene illustrated in red box of Fig. 2. 3D simulation With the geometry information and background of the specific scene, we can easily rebuild the virtual scene, together with the location of obstacles in 3ds max. In the built virtual scene, we simulate 3d pedestrians in the scene with different appearance walking around the virtual scene with arbitrary possible direction, walking speed and styles. In 3d simulation, we try our best to make the artificial scene photo realistic using the technology of AR. So we render pedestrians walking in different directions, walking around nearby, talking with each other within a group of two or three, making a call alone and so on. With different configurations, we can easily generate variable data from virtual scene to learn a pedestrian model for the truth. Ground-Truth Data An important property of synthetic data is that we can conveniently obtain the ground truth data for the purpose of training the model and validating the model. In our work, we obtain the labeling information from 3d information of virtual scene directly! Although many researchers realize that it is possible to use synthetic data to train the real life model, and prove this method is effective in a way, they ignore the importance of generate the labels from the synthetic data. In Sun et al. [30], they generate synthetic images that there is only one object in each image. To get the labeling information, a parallel set of images was also generated which share the same rendering setting as the synthetic image generated for train the model except that the background is always white. The parallel generated images with white background are only used in automatically calculating the bounding box of nonwhite pixels, aka the target object bounding box in the image. It is easy for only one object in the image, and many other works [31], [32] related with virtual object also generate labels with the similar method. The method is effective for one object in the image, but it does not work for images of multi-objects. For Multi-objects in the image, if the objects are not blocked each other, it is easy to label each object through search the connected t domain in the image [33]. But there is no effective method for the blocked objects. It is impossible to obtain labels from image processing. At the first time, we also try the exist method to generate labels from the synthetic data, but we failed finally. In the virtual world, we can easily obtain each parameter in the scene. Along with the render process, the points in the world XW mapping to the camera coordinates XC , then the image physics coordinates system Xscreen and final in the image coordinates Ximg . The hole process can be display as equation. 
From the world coordinate system to the camera coordinate system:

$$\begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = M_1 \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}$$

From the camera coordinate system to the physical image coordinate system:

$$z_c \begin{pmatrix} x_u \\ y_u \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_c \\ y_c \\ z_c \\ 1 \end{pmatrix}$$

Finally, the physical image coordinate system is converted to the (pixel) image coordinate system:

$$u = x_u/d_x + u_0, \qquad v = y_u/d_y + v_0.$$

The final labeling information comes from the image coordinates, while the occlusion relationships between pedestrians follow from the z-coordinates in the camera frame. In our work, we generate the synthetic data from the virtual scene and, in parallel, a set of vertex coordinates for each pedestrian. From the pedestrians' vertex coordinates, we can easily obtain the labeling information, as shown in the red box of Fig. 2; a sketch of this projection and bounding-box computation is given below.
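To make the ground-truth generation concrete, the following is a minimal sketch (not the authors' actual tooling) of how per-pedestrian 2D bounding boxes can be derived from 3D vertices using the transformations above. The camera parameters, array shapes and function names here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def project_vertices(vertices_w, R, t, f, dx, dy, u0, v0):
    """Project Nx3 world-coordinate vertices of one pedestrian to pixel coordinates.

    vertices_w : (N, 3) array of world coordinates (assumed layout, for illustration).
    R, t       : camera rotation (3x3) and translation (3,) extrinsics.
    f, dx, dy, u0, v0 : intrinsics as in the equations above.
    Returns (N, 2) pixel coordinates and (N,) camera-frame depths z_c.
    """
    # World -> camera: x_c = R x_w + t
    verts_c = vertices_w @ R.T + t
    z_c = verts_c[:, 2]
    # Camera -> physical image plane: x_u = f * x_c / z_c, y_u = f * y_c / z_c
    x_u = f * verts_c[:, 0] / z_c
    y_u = f * verts_c[:, 1] / z_c
    # Physical image plane -> pixels: u = x_u / dx + u0, v = y_u / dy + v0
    u = x_u / dx + u0
    v = y_u / dy + v0
    return np.stack([u, v], axis=1), z_c

def bounding_box(pixels, width, height):
    """Axis-aligned box (x_min, y_min, x_max, y_max) of the projected vertices, clipped to the image."""
    u, v = pixels[:, 0], pixels[:, 1]
    return (max(u.min(), 0), max(v.min(), 0),
            min(u.max(), width - 1), min(v.max(), height - 1))
```

Occlusion between pedestrians can then be resolved in principle by comparing the per-vertex depths z_c, which is how the blocking relationships mentioned above are determined.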
B. Object Detection Methods

The synthetic data, together with labels, are easy to obtain from the 3D model. In order to validate the effectiveness of the synthetic data, we work with the Faster R-CNN [29] and DPM (Deformable Part-based Model) [34] pedestrian detection models: one an appearance-based method from the CNN family and one an edge-based method from the HOG–SVM family.

DPM. The DPM model combines the PS (pictorial structure) model [35], which captures the relationships between parts, with the HOG (Histogram of Oriented Gradients) feature [4], which captures the appearance of each part; it mainly describes the shape of the detected object and achieved great success in the PASCAL VOC object detection challenge. The authors of DPM, Pedro Felzenszwalb et al., were awarded a “lifetime achievement” prize by the VOC organizers, which reflects the importance of the DPM detection method. The code can be found at https://github.com/rbgirshick/voc-dpm.

Faster R-CNN. Different from DPM, Faster R-CNN [29] is an appearance-based method built on CNNs. Recently, CNN methods have made great progress on the vision problems mentioned above. As one instance of deep learning, Faster R-CNN combines Fast R-CNN [36] with an RPN (Region Proposal Network) to form a unified object detection model and performs well. The Python code can be found at https://github.com/rbgirshick/py-faster-rcnn.

DPM and Faster R-CNN describe the detected object in two different ways, and both perform well in object detection. We evaluate our proposed idea on both methods, and the results show that the synthetic data are useful for object detection.

IV. EXPERIMENTAL EVALUATION

A. Dataset

In order to validate the effectiveness of the virtual scene for generating data for pedestrian detection, we evaluate it in the different scenes shown in Fig. 4.

Fig. 4: The evaluation scenes with their corresponding geometric information and the pedestrians simulated in them: Town Center (top), Atrium (middle) and PETS 2009 (bottom).

Town Center Dataset [37]: In our work, we mainly work on the Town Center dataset, a video dataset of a semi-crowded town center with a resolution of 1920*1080 and a frame rate of 25 fps. In order to reduce the computing time and make processing convenient, we down-sample the videos to a standardized resolution of 640*360, together with the ground-truth labels, as in Hattori et al. [8]. The original Town Center dataset only provides the validation set, which we use as the test set in our experiments.

Atrium Dataset [38]: The Atrium pedestrian dataset was filmed at École Polytechnique de Montréal. It offers a view from the inside of the building, with pedestrians moving around and crossing each other, which makes it useful for evaluating performance on pedestrians. The movie resolution is 800x600. For the evaluation, 4540 frames (30 fps) of the movie were annotated.

PETS 2009 Dataset [39]: The PETS 2009 dataset consists of videos (at a resolution of 720x576) of a campus at Kyushu University including a number of pedestrians. While the dataset consists of videos captured from 8 different views, we use only the 8th camera view for our experiments. The ground-truth pedestrian annotations were labeled by ourselves with the labelImg tool from GitHub (labelImg: https://github.com/tzutalin/labelImg).

B. Baselines

We evaluate the efficiency of the virtual scenes and compare against the following baselines.

D-V/S: D-V and D-S stand for the DPM pedestrian detection model trained with the pascal voc 2007 dataset and with the synthetic dataset generated from the virtual world built by us, respectively.

F-V/S: Likewise, F-V stands for the Faster R-CNN pedestrian detection model trained with the person data from pascal voc 2007, and F-S stands for the Faster R-CNN model trained with the synthetic data generated from the virtual scene.

All of the above share the same test sets, taken from the specific-scene pedestrian detection datasets: Town Center, Atrium and PETS 2009. In addition to the experiments above, we also train and test the Faster R-CNN and DPM models with the person data from pascal voc 2007.

C. Results

We compare the Faster R-CNN and DPM models on all selected specific-scene datasets using the pascal voc 2007 evaluation metric for pedestrian detection. The PR curves of pedestrian detection on the selected specific datasets are summarized in Fig. 5; the precision-recall curves show that both the Faster R-CNN and DPM models work better when trained with the synthetic data generated from our virtual scene of Town Center than when trained with the generic pascal voc 2007 dataset. From the AP of the different methods on the different evaluation datasets, we can conclude that, in a specific scene, it is useful to train the model with synthetic data generated from the virtual world.

Table I shows the increment of AP obtained with synthetic data from the virtual scene, compared to the generic pascal voc 2007 data, in each evaluation scene and for each method. In the Town Center and Atrium scenes, the synthetic data improve the models by a large margin. In PETS 2009, the pedestrians in the scene are quite similar to those in pascal voc, so the generic DPM and Faster R-CNN models already achieve good results; nevertheless, the scene-specific models trained with synthetic data still achieve results that are at least no worse.

V. CONCLUSION

We propose a parallel vision approach to pedestrian detection by building the virtual world to generate a large amount of synthetic data and adapting the generic model to specific scenes. The experimental results show that synthetic data make it possible to learn a scene-specific detector without real labeled data. In this paper, the objects of concern are only pedestrians, but real scenes may contain pedestrians, cars, and trucks, making object detection more difficult. In future work, we will therefore make the virtual world more complex and more similar to the real scenes. Of course, the virtual world is useful not only for pedestrian detection, but also for other vision problems, such as segmentation, tracking, and pose estimation.
We will do more computer vision research in virtual scenes and make the virtual world truly become a vision laboratory.

TABLE I: AP increment in each evaluation scene.

                Town Center            Atrium                 PETS 2009
                voc    syn    incre    voc    syn    incre    voc    syn    incre
Faster R-CNN    45.2%  79.2%  34%      47.5%  67%    12.5%    89.5%  90.4%  0.9%
DPM             30.8%  34.2%  3.6%     45.4%  56.2%  10.8%    81%    85.3%  4.3%

Fig. 5: Precision-recall curves with the PASCAL VOC 2007 metric in the different scenes: (a) Town Center, (b) Atrium, (c) PETS 2009 (recall on the x-axis, precision on the y-axis; the legends report the AP of FS, FV, DS and DV for each scene).

REFERENCES
[1] K. Wang, Y. Liu, C. Gou, and F.-Y. Wang, “A multi-view learning approach to foreground detection for traffic surveillance applications,” IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 4144–4158, 2016.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
[3] M. Wang and X. Wang, “Automatic adaptation of a generic pedestrian detector to a specific traffic scene,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 3401–3408.
[4] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005, pp. 886–893.
[5] K. K. Htike and D. Hogg, “Adapting pedestrian detectors to new domains: A comprehensive review,” Engineering Applications of Artificial Intelligence, vol. 50, pp. 142–158, 2016.
[6] P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp. 743–761, 2012.
[7] M. Wang, W. Li, and X. Wang, “Transferring a generic pedestrian detector towards specific scenes,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 3274–3281.
[8] H. Hattori, V. Naresh Boddeti, K. M. Kitani, and T. Kanade, “Learning scene-specific pedestrian detectors without real data,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3819–3827.
[9] K. Wang, C. Gou, and F.-Y. Wang, “Parallel vision: An ACP-based approach to intelligent vision computing,” Acta Automatica Sinica, vol. 42, no. 10, p. 1490, 2016.
[10] K. Wang, C. Gou, N. Zheng, J. M. Rehg, and F.-Y. Wang, “Parallel vision for perception and understanding of complex scenes: Methods, framework, and perspectives,” Artificial Intelligence Review, pp. 1–31, 2016.
[11] K. Wang, Y. Lu, Y. Wang, Z. Xiong, and F.-Y. Wang, “Parallel imaging: A new theoretical framework for image generation,” Pattern Recognition and Artificial Intelligence, vol. 30, no. 7, pp. 577–587, 2017.
[12] F.-Y. Wang, “Parallel system methods for management and control of complex systems,” Control and Decision, vol. 19, no. 5, pp. 485–489, 2004.
[13] F.-Y. Wang, X. Wang, L. Li, and L. Li, “Steps toward parallel intelligence,” IEEE/CAA Journal of Automatica Sinica, vol. 3, no. 4, pp.
345–348, 2016. [14] X. Liu, X. Wang, W. Zhang, J. Wang, and F.-Y. Wang, “Parallel data: From big data to data intelligence,” Pattern Recognition and Artificial Intelligence, vol. 30, no. 8, pp. 673–682, 2017. [15] L. Li, Y. Lin, N. Zheng, and F.-Y. Wang, “Parallel learning: A perspective and a framework,” IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 3, pp. 389–395, 2017. [16] W. S. Bainbridge, “The scientific research potential of virtual worlds,” Science, vol. 317, no. 5837, pp. 472–476, 2007. [17] H. Prendinger, K. Gajananan, A. B. Zaki, A. Fares, R. Molenaar, D. Urbano, H. van Lint, and W. Gomaa, “Tokyo virtual living lab: Designing smart cities based on the 3D Internet,” IEEE Internet Computing, vol. 17, no. 6, pp. 30–38, 2013. [18] I. Karamouzas and M. Overmars, “Simulating and evaluating the local behavior of small pedestrian groups,” IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 3, pp. 394–406, 2012. [19] F. Qureshi and D. Terzopoulos, “Smart camera networks in virtual reality,” Proceedings of the IEEE, vol. 96, no. 10, pp. 1640–1656, 2008. [20] W. Starzyk and F. Z. Qureshi, “Software laboratory for camera networks research,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 3, no. 2, pp. 284–293, 2013. [21] R. A. Brooks, “Symbolic reasoning among 3-D models and 2-D images,” Artificial Intelligence, vol. 17, no. 1-3, pp. 285–348, 1981. [22] M. Dhome, A. Yassine, and J.-M. Lavest, “Determination of the pose of an articulated object from a single perspective view,” in BMVC, 1993, pp. 1–10. [23] A. Broggi, A. Fascioli, P. Grisleri, T. Graf, and M. Meinecke, “Modelbased validation approaches and matching techniques for automotive vision based pedestrian detection,” in Computer Vision and Pattern Recognition-Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on. IEEE, 2005, pp. 1–1. [24] B. Heisele, “3D human models applied to pedestrian pose classification,” Aug. 16 2016, uS Patent 9,418,467. [25] J. Molina, J. A. Pajuelo, M. Escudero-Violo, J. Bescs, and J. M. Martnez, “A natural and synthetic corpus for benchmarking of hand gesture recognition systems,” Machine Vision and Applications, vol. 25, no. 4, pp. 943–954, 2014. [26] J. Marin, D. Vázquez, D. Gerónimo, and A. M. López, “Learning appearance in virtual scenarios for pedestrian detection,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010, pp. 137–144. [27] X. Wang, M. Wang, and W. Li, “Scene-specific pedestrian detection for static video surveillance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 2, pp. 361–374, 2014. [28] P. F. Felzenszwalb, R. B. Girshick, D. Mcallester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010. [29] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards realtime object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99. [30] B. Sun and K. Saenko, “From virtual to reality: Fast adaptation of virtual object detectors to real domains,” in BMVC, vol. 1, no. 2, 2014, p. 3. [31] X. Peng, B. Sun, K. Ali, and K. Saenko, “Learning deep object detectors from 3D models,” in IEEE International Conference on Computer Vision, 2015, pp. 1278–1286. [32] E. Tzeng, C. Devin, J. Hoffman, C. Finn, X. Peng, S. Levine, K. Saenko, and T. 
Darrell, “Towards adapting deep visuomotor representations from simulated to real environments,” Computer Science, 2015. [33] L. Di Stefano and A. Bulgarelli, “A simple and efficient connected components labeling algorithm,” in Image Analysis and Processing, 1999. Proceedings. International Conference on. IEEE, 1999, pp. 322– 327. [34] D. Forsyth, “Object detection with discriminatively trained part-based models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2014. [35] M. A. Fischler and R. A. Elschlager, “The representation and matching of pictorial structures,” IEEE Transactions on Computers, vol. C-22, no. 1, pp. 67–92, 1973. [36] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440–1448. [37] B. Benfold and I. Reid, “Stable multi-target tracking in real-time surveillance video,” in Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011, pp. 3457–3464. [38] J. P. Jodoin, G. A. Bilodeau, and N. Saunier, “Urban tracker: Multiple object tracking in urban mixed traffic,” in Applications of Computer Vision, 2014, pp. 885–892. [39] J. Ferryman and A. Shahrokni, “An overview of the PETS 2009 challenge,” IEEE, 2009.
1 FAST Adaptive Smoothing and Thresholding for Improved Activation Detection in Low-Signal fMRI Israel Almodóvar-Rivera and Ranjan Maitra arXiv:1702.00111v2 [stat.ME] 2 Feb 2017 Abstract Functional Magnetic Resonance Imaging is a noninvasive tool used to study brain function. Detecting activation is challenged by many factors, and even more so in low-signal scenarios that arise in the performance of high-level cognitive tasks. We provide a fully automated and fast adaptive smoothing and thresholding (FAST) algorithm that uses smoothing and extreme value theory on correlated statistical parametric maps for thresholding. Performance on simulation experiments spanning a range of low-signal settings is very encouraging. The methodology also performs well in a study to identify the cerebral regions that perceive only-auditory-reliable and only-visual-reliable speech stimuli as well as those that perceive one but not the other. Index Terms AM-FAST, AR-FAST, Adaptive Segmentation, AFNI, BIC, circulant matrix, CNR, Cluster Thresholding, Gumbel distribution, SNR, SPM, SUMA, reverse Weibull distribution I. I NTRODUCTION UNCTIONAL Magnetic Resonance Imaging (fMRI) [1]–[9] studies the spatial characteristics and extent of brain function while at rest or, more commonly, while performing tasks or responding to external stimuli. In the latter scenario, which is the setting for this paper, the imaging modality acquires voxel-wise Blood-Oxygen-Level-Dependent (BOLD) measurements [10], [11] at rest and during stimulation or performance of a task. After pre-processing, a statistical model such as a general linear model [4], [12] is fit to the time course sequence against the expected BOLD response [13]–[15]. Statistical Parametric Mapping (SPM) [16] techniques provide voxel-wise test statistics summarizing the association between the time series response at that voxel and the expected BOLD response [3]. The map of test statistics is then thresholded to identify significantly activated voxels [17]–[19]. The analysis of fMRI datasets is challenged [20]–[23] by factors such as scanner, inter- and intrasubject variability, voluntary/involuntary or stimulus-correlated motion and also the several-seconds delay in the BOLD response while the neural stimulus passes through the hemodynamic filter [23]–[25]. Preprocessing steps [22], [26] mitigate some of these effects, but additional challenges are presented by the fact that an fMRI study is expected to have no more than 1-3% activated voxels [8], [27]. Also, many activation studies involving high-level cognitive processes have low contrast-to-noise ratios (CNR), throwing up more issues as illustrated next. F A. Activation Detection during Noisy Audiovisual Speech The most important way in which humans communicate is through speech [28]–[30], which the brain is particularly adept at understanding, even when this form of communication is degraded by noise. It is I. Almodóvar-Rivera is with the Department of Biostatistics and Epidemiology at the University of Puerto Rico, Medical Science Campus, San Juan, Puerto Rico, USA. R.Maitra is with the Department of Statistics, Iowa State University, Ames, Iowa, USA. This research was supported in part by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health (NIH) under its Award No. R21EB016212, I. 
Almodóvar-Rivera also acknowledges receipt of a fellowship from Iowa State University’s Alliance for Graduate Education and the Professoriate (AGEP) program for underrepresented graduate students in STEM fields. The content of this paper however is solely the responsibility of the authors and does not represent the official views of either the NIBIB or the NIH. believed [31] that the ability to understand noisy speech may be explained by the capacity of the brain for multisensory integration of input information, with the information coming in independently from the visual and the auditory modalities integrated to reduce noise and allow for more accurate perception [32], [33]. Recently, [31] studied the role of the superior temporal sulcus (STS) in the perception of noisy speech, through fMRI and behavioral experiments and established increased connectivity between the STS and the auditory or the visual cortex depending on whichever modality was more reliable, that is, less noisy. [31] reported their fMRI analyses at the level of regions of interest (ROIs) drawn on the STS and the auditory and visual cortices. However, the full benefit of fMRI can be realized only if we are able to go beyond statements on functionality at the ROI level and delve into understanding cerebral function at the voxel level. Further, for using fMRI in a clinical setting, it may also be worthwhile to reliably detect activation at the voxel level and in single subjects. All these are potential scenarios with low CNRs where reliable activation detection methods are sorely needed. We return to this application in Section IV. B. Background and Current Practices Many thresholding methods [34]–[39] address the issue of multiple testing in determining significance of test statistics but generally ignore the spatial resolution of fMRI. This shortcoming is often accounted for by spatially smoothing the fMR images in the pre-processing stage, but such non-adaptive smoothing reduces both the adaptive spatial resolution and the number of available independent tests for detecting activation [40]. Iterative adaptive smoothing and segmentation methods in the form of propagationseparation (PS) [40] and adaptive-segmentation (AS) [41] have been developed. While operationally different, PS and AS are broadly similar, essentially yielding a segmentation of the SPM into activated and inactivated voxels. PS results (approximately) in a random t-field and uses Random Field Theory to segment the volume while AS uses multi-scale testing to determine activated segments. [41] argue for AS, because of its more general development and fewer model assumptions. AS also requires no heuristic corrections for spatial correlation, and attractively, provides decisions at prescribed significance levels. [41] demonstrated better performance of AS over PS in an auditory experiment. However, AS requires pre-specified bandwidth sequences and ignores correlation within the SPM. In this paper, we therefore develop a fully automated and computationally speedy adaptive smoothing and thresholding approach that accounts for correlation and obviates the need for setting any parameter beyond the significance level. The remainder of this paper is organized as follows. In Section II, we provide theoretical and implementation details of our algorithm. We evaluate performance in a large-scale simulation experimental study in Section III. Both visual and numerical summaries of performance are presented. 
In Section IV, we illustrate applicability of our FAST algorithms on the dataset of Section I-A. We end with some discussion. II. T HEORY AND M ETHODS A. Preliminaries Let Y i be the time series vector of the observed BOLD response at the ith voxel obtained after preprocessing for registration and motion correction. A common approach relates Y i to the expected BOLD response via the general linear model Y i = Xβ i + i , (1) where i is a pth-order auto-regressive (AR) Gaussian error vector with AR coefficients φi=(φi1 , φi2 , . . . , φip ) and marginal variance σi2 . Without loss of generality (w.l.o.g.), assume that the design matrix X has the intercept in the first column, the expected BOLD response for the k stimulus levels in the next k columns, and polynomial terms for the drift parameter in the remaining m columns. Thus, β is a coefficient vector of length d = k + m + 1. We assume that the image volume has n voxels: thus i = 1, 2, . . . , n in (1). The parameters (β̂ i , σ̂i2 , φ̂i )s are usually estimated via generalized least squares or restricted maximum likelihood. A typical analysis approach then applies (voxel-wise) hypothesis tests with the null hypothesis specifying no activation owing to the stimulus or task. SPMs of the form Γ = {c0 β̂ i }i∈V with appropriate contrasts c0 β i are then formulated at each voxel. Many researchers use models that assume independence or AR(1) errors: others pre-whiten the time series before fitting (1) under independence. Incorrect model specifications can yield less precise SPMs [42]– [45]. In this paper, we used Bayes Information Criterion (BIC) [46], [47] which measures a fitted model’s quality by trading its complexity against its fidelity to the data. Tests on the SPM Γ identify voxels that are activated related to the experiment. Specifically, voxel i ∈ V is declated to be activated if a suitable test rejects the hypothesis H0 : c0 β i = 0 i.e., if c0 β̂ i significantly deviates from zero. Our objective in this paper is to develop an approach that adaptively and automatically smooths and thresholds the SPM while accounting for the spatial correlation in the SPM and the fact that these sequence of thresholds yield SPMs from truncated distributions. Before detailing our algorithm, we provide some theoretical development. B. Theoretical Development We assume that the SPM is t-distributed with degrees of freedom large enough to be well-approximated via the standard normal distribution. We also assume a homogeneous correlation structure for the SPM, which is reasonable given our choice of a Gaussian kernel for smoothing. We have the Theorem 1. Let X ∼ Nn (0, R) where X = (X1 , . . . , Xn )0 and R is a circulant correlation matrix. 1 1 Writing 1 = (1, 1, . . . , 1)0 , we let ρ = R− 2 1 be the sum of the elements in the first row of R− 2 . Further, let X(n) be the maximum value of X, that is, X(n) = max {X1 , X2 , . . . , Xn }. Then the cumulative distribution function (c.d.f.) Fn (x) of X(n) is given by Fn (x) = P (X(n) ≤ x) = [Φ(ρx)]n , where Φ(·) is the c.d.f. of the standard normal random variable. Proof: From the definition of X(n) , we have Fn (x) = P (X ≤ x1) = P (R−1/2 X ≤ xR−1/2 1) = P (Y ≤ ρx1), where Y ∼ Nn (0, I n ) = P (Yi ≤ ρx ∀ i = 1, 2, . . . , n) = [Φ(ρx)]n . (2) In the limit, we are led to the following Corollary 2. Let X and X(n) be as in Theorem 1. 
Then the limiting distribution of $X_{(n)}$ belongs to the domain of attraction of the Gumbel distribution, and satisfies

$$\lim_{n\to\infty} F_n(a_n x + b_n) = \exp\{-e^{-x}\}, \qquad (3)$$

where $a_n = \rho/[n\phi(b_n)]$ and $b_n = \Phi^{-1}(1 - 1/n)\,\rho$, with $\phi(\cdot)$ being the standard normal probability density function (p.d.f.).

Proof: For $Y \sim N_n(0, I_n)$, the limiting distribution of $Y_{(n)}$ satisfies $\lim_{n\to\infty} [\Phi(a_n x + b_n)]^n = \exp\{-e^{-x}\}$, with $a_n = 1/[n\phi(b_n)]$ and $b_n = \Phi^{-1}(1 - 1/n)$ [48]. The result follows from Theorem 1.

Theorem 1 and Corollary 2 provide the wherewithal for choosing a threshold using the limiting distribution of the maximum of correlated normal random variables with circulant correlation structure. Note, however, that the first thresholding step results in truncated (and correlated) random variables that are under consideration for thresholding in subsequent steps. We account for this added complication by deriving the limiting distribution of the maximum of a correlated sample from a right-truncated normal distribution. We do so by first noting that if $Y_1, Y_2, \ldots, Y_n$ are independent identically distributed random variables from the standard normal p.d.f. truncated at $\eta$, then each $Y_i$ has p.d.f. $\phi^{\bullet}_{\eta}(\cdot)$ and c.d.f. $\Phi^{\bullet}_{\eta}(\cdot)$:

$$\phi^{\bullet}_{\eta}(x) = \frac{\phi(x)}{\Phi(\eta)}\, I(x < \eta); \qquad \Phi^{\bullet}_{\eta}(x) = \frac{\Phi(x)}{\Phi(\eta)}\, I(x < \eta). \qquad (4)$$

Then $Y_{(n)} = \max\{Y_i,\, i = 1, 2, \ldots, n\}$ has c.d.f. $G^{\bullet}_{\eta}(x) = [\Phi(x)/\Phi(\eta)]^{n}\, I(x < \eta)$, with limiting distribution given by

Theorem 3. Let $Y_1, Y_2, \ldots, Y_n$ be a sample from (4). Then the limiting distribution of $Y_{(n)}$ belongs to the domain of attraction of the reverse Weibull distribution and satisfies

$$\lim_{n\to\infty} G^{\bullet}_{\eta}(a^{\bullet}_n x + b^{\bullet}_n) = \exp\{-(-x)^{\tau}\}\, I(x \le 0) \qquad (5)$$

for some $\tau > 0$. Here $a^{\bullet}_n = \eta - \Phi^{\bullet\,-1}_{\eta}(1 - 1/n)$ and $b^{\bullet}_n = 0$.

Proof: Note that $\eta = \sup\{x \mid \Phi^{\bullet}_{\eta}(x) < 1\}$. From Theorem 10.5.2 in [49], a sufficient condition for $Y_{(n)}$ to be in the domain of attraction is to demonstrate that

$$\lim_{x\to\eta} \frac{(\eta - x)\,\phi^{\bullet}_{\eta}(x)}{1 - \Phi^{\bullet}_{\eta}(x)} = \tau$$

for some $\tau > 0$ [50]. In our case, the limit holds because $\eta < \infty$. Then, using L'Hôpital's rule, we have

$$\lim_{x\to\eta} \frac{(\eta - x)\,\phi^{\bullet}_{\eta}(x)}{1 - \Phi^{\bullet}_{\eta}(x)} = \lim_{x\to\eta} \frac{(\eta - x)\,\frac{d}{dx}\phi^{\bullet}_{\eta}(x) - \phi^{\bullet}_{\eta}(x)}{-\phi^{\bullet}_{\eta}(x)} = \lim_{x\to\eta} \frac{(\eta - x)\,\phi'(x)/\Phi(\eta) - \phi(x)/\Phi(\eta)}{-\phi(x)/\Phi(\eta)} = 1.$$

Thus the right-truncated normal distribution satisfies the reverse Weibull condition and converges to the reverse Weibull distribution with $\tau \equiv 1$ in (5). The constants in the theorem are as per extreme value theory [48], [49].

Theorem 4. Let $X$ be a random vector from the $N_n(0, R)$ density, but right-truncated in each coordinate at $\eta$, with $R$ being a circulant correlation matrix. Let $\rho = R^{-\frac12}\mathbf{1}$ be the sum of the elements in the first row of $R^{-\frac12}$. Also, let $X_{(n)}$ be defined as before. Then the c.d.f. $F^{\bullet}_{\eta}(x)$ of $X_{(n)}$ is given by $F^{\bullet}_{\eta}(x) = P(X_{(n)} \le x) = [\Phi(\rho x)/\Phi(\eta)]^{n}\, I(x < \eta)$.

Proof: Proceeding as in the proof of Theorem 1 yields

$$F^{\bullet}_{\eta}(x) = P(X \le x\mathbf{1}) = P(R^{-1/2}X \le xR^{-1/2}\mathbf{1}) = P(Y \le \rho x \mathbf{1}),$$

where the $Y_i$s are i.i.d. with c.d.f. $\Phi^{\bullet}_{\rho\eta}(y)$, so that

$$F^{\bullet}_{\eta}(x) = G^{\bullet}_{\rho\eta}(\rho x) = [\Phi(\rho x)/\Phi(\eta)]^{n}\, I(x < \eta). \qquad (6)$$

Corollary 5. Let $X$ and $X_{(n)}$ be as in Theorem 4. Then the limiting distribution of $X_{(n)}$ belongs to the domain of attraction of the reverse Weibull distribution, and satisfies

$$\lim_{n\to\infty} F^{\bullet}_{\eta}(a^{\bullet}_n x + b^{\bullet}_n) = \exp\{-(-x)^{\alpha}\}\, I(x \le 0), \qquad (7)$$

where $a^{\bullet}_n = (\eta - \Phi^{\bullet\,-1}_{\rho\eta}(1 - 1/n))\,\rho$ and $b^{\bullet}_n = 0$.

Proof: From Theorem 3, the limiting distribution of $Y_{(n)}$ satisfies (5). Then $F^{\bullet}_{\eta}(x) = G^{\bullet}_{\rho\eta}(\rho x)$ from Theorem 4, and the result follows.

Having developed our results, we are now ready to propose our fast adaptive smoothing and thresholding algorithm. C.
Fast Adaptive Smoothing and Thresholding Our proposed algorithm adaptively and, in sequence, smooths and identifies activated regions by thresholding. We estimate the amount of smoothing robustly or from the correlation structure, which we assume is well-approximated by a 3D Gaussian kernel. Thus, under the null hypothesis (of no activation anywhere), we assume that the SPM Γ ∼ N (0, R) where R = S h with h the smoothing parameter of S h −1 given in terms of its full-width-at-half-maximum. Let Γ(−h) ∼ S h 2 Γ. Then estimate h by maximizing the loglikelihood function 1 1 n (8) `(h | Γ(−h) ) = − log(2π) − log |S h | − Γ0(−h) Γ(−h) . 2 2 2 Note that Γ(−h) and |Sh | are speedily computers using Fast Fourier transforms. Starting with the SPM Γ, obtained as discussed in Section II-A, we propose the algorithm: 1) Initial Setup. At this point, assume that ζi ≡ 0 ∀ i, where ζi is the activation status of the ith voxel. (0) That is, assume that all voxels are inactive. Set ζi ≡ ζi . Also denote Γ(0) = Γ, and n0 = n, where (k) nk denotes the number of voxels for which ζi = 0. 2) Iterative Steps. For k = 1, 2, . . . , iterate as follows: a) Smoothing. We smooth Γ(k−1) in one of two ways: i) Adaptive Maximum Likelihood (AM-FAST, pronounced ăm-fast): Maximize (8) given Γ(k−1) 1 to obtain hk . Obtain Γ(k) by smoothing Γ(k−1) with S hk . Also obtain ρk = S hk − 2 1. ii) Adaptive Robust Smoothing (AR-FAST, pronounced ahr-fast): Use robust methods [51] on 1 Γ(k−1) to get Γ(k) . Now, maximize (8) given Γ(k) to obtain hk and ρk = S hk − 2 1. b) Adaptive Thresholding. This consists of two steps: i) For k = 1, use Corollary 2 to obtain an and bn , otherwise (i.e. for k > 1) use Corollary 5 to obtain a•nk−1 . In both cases, we use the spatial correlation estimated by S hk−1 . ii) From the Gumbel (for k = 1) or reverse Weibull distributions (for k > 1), get ( an0 ιGα + bn0 for k = 1 ηk = (9) a•nk−1 ιW otherwise. α where ιGα and ιW α are the upper-tail α-values for the Gumbel and the reverse Weibull (with τ = 1) distributions, respectively. (k) (k−1) iii) Set all ζi = 1 if ζi = 0 and if the ith coordinate of Γ(k) exceeds ηk . Also, update Pn (k) nk = i=1 ζi . 3) Stopping criterion. Let J(ζ (k) , ζ (k−1) ), be the Jaccard Index [52], [53] of the activation maps in successive iterations (i.e. between the (k − 1)th and kth iterations). If J(ζ (k) , ζ (k−1) ) ≤ J(ζ (k+1) , ζ (k) ), then the algorithm terminates and ζ (k) is the final activation map. Comments: A few comments are in order: a) Two-sided tests: Our development here assumes one-sided tests where (w.l.o.g.) large values are the extremal values of the SPM. Two-sided tests can be easily accommodated by simply replacing the α in (9) by α/2 and by using the absolute value of the components of Γ(k) to decide on activation. b) Comparison with AS: AS [41] also provides an adaptive weighted smoothing approach based on multi-scale testing. Both AM-FAST and AR-FAST have similarities with AS, in that they also smooth and threshold iteratively. However, there are a few fundamental differences. For one, the AS approach has a set user-specified sequence of bandwidths that smooths Γ(k) at each step. In contrast, we use likelihood maximization (for AM-FAST) or robust (for AR-FAST) methods to optimally determine h at each step. AS also performs thresholding in a similar manner as us, but they use a general Fréchet extreme value distribution. Their development ignores both the spatial context and also the fact that thresholding results in subsequent truncated decisions. 
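The adaptive thresholding step 2(b) above can be made concrete with a few lines of code. The sketch below simply transcribes the constants stated in Corollaries 2 and 5 and the cut-offs of (9); it is an illustrative reading rather than the authors' implementation, and the way ρ and the remaining voxel count are supplied is an assumption.

```python
import numpy as np
from scipy.stats import norm

def gumbel_threshold(n, rho, alpha):
    """First cut-off eta_1 = a_n * iota_G(alpha) + b_n from the Gumbel limit (Corollary 2, as stated)."""
    b_n = norm.ppf(1.0 - 1.0 / n) * rho
    a_n = rho / (n * norm.pdf(b_n))
    iota_g = -np.log(-np.log(1.0 - alpha))   # upper-tail alpha quantile of the standard Gumbel
    return a_n * iota_g + b_n

def weibull_threshold(n_remaining, rho, eta_prev, alpha):
    """Subsequent cut-offs eta_k from the reverse Weibull limit with tau = 1 (Corollary 5, as stated)."""
    # Quantile of the right-truncated normal with c.d.f. Phi(y)/Phi(rho*eta_prev): Phi^{-1}(q * Phi(rho*eta_prev)).
    q = 1.0 - 1.0 / n_remaining
    trunc_quantile = norm.ppf(q * norm.cdf(rho * eta_prev))
    a_n = (eta_prev - trunc_quantile) * rho
    iota_w = np.log(1.0 - alpha)             # upper-tail alpha quantile of the reverse Weibull with tau = 1
    return a_n * iota_w                      # b_n = 0 in Corollary 5
```

In the iterative algorithm, gumbel_threshold would supply η1 and weibull_threshold the subsequent ηk, with n replaced at each iteration by the number of voxels still labeled inactive and ρ recomputed from the current smoothing bandwidth.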
Our development represents the procedure more accurately because we account for both the correlation structure (resulting in the initial cut-off decided as per the Gumbel distribution) and the truncation (with the subsequent cut-offs being determined by the reverse Weibull distribution). Finally, our method is completely data-driven, with termination declared only when the Jaccard index indicates no substantial changes in the detected activation.

III. Performance Evaluations

This section reports performance of the FAST algorithm relative to the AS and also the more popular cluster-thresholding algorithms. We use a 2D setup to make it computationally possible to evaluate performance across many settings.

A. Experimental Setup

Our simulation setup was a modified version of the digitized 128×128 2D Hoffman phantom [54] often used in Positron Emission Tomography. The phantom (Figure 1) has 3465 in-brain pixels, representing two types (say, A and B) of anatomic structures. 138 of the B-type pixels from two contiguous regions were deemed to be truly activated. The phantom provided the values (see Figure 1) for β_i = (β_i0, β_i1, β_i2) in (1) as per the location of the ith pixel in the phantom.

Fig. 1: The phantom used in our simulation experiments. Putative anatomic regions (A and B) are in shades of grey, while truly activated pixels are in red. The table on the right lists the β_i s used in our simulations.

Region        β_i0    β_i1    β_i2
Background       0       0        0
Brain A       4500       0  -155.32
Brain B       6000       0  -155.32
Activated     6000     600  -155.32

As in (1), the design matrix X had the intercept in the first column. For the second column, the input stimulus time series was set to alternate as 16 on-off blocks of 6 time-points each. The first block signaled rest, with input stimulus 0, while the second block was 1 to signify activation. These blocks alternated, yielding values at 96 time-points. These temporal values were convolved with the hemodynamic response function [7] to provide the second column of X. The third column of X represented linear drift and was set to t (t = 1, 2, . . . , 96). Also, as per (1), AR(p) errors were simulated for different p and for different structures. Specifically, for each p, we considered AR coefficients for a range of φ ≡ φ_i s with coefficients (φ_1, φ_2, . . . , φ_p) that were, with lag, (a) all equal, (b) decreasing, (c) increasing, (d) first decreasing and then increasing and (e) first increasing and then decreasing. To ensure stationary solutions, we restricted Σ_{j=1}^{p} φ_j = 0.9. Thus, for AR(1), we have φ_1 ≡ 0.9 for all cases. For p = 2, 3, 4, we have φ_i ≡ 0.9/p for the equal AR coefficients scenario. Table I contains the φ-values for the other cases. Finally, we chose σ to be such that they corresponded to very low to moderate CNR settings. Specifically, we set σ_0 = 240, 300, 400 yielding CNR = 1.0, 1.5, and 2.0. By design, our SNRs were 10 times our CNRs. We simulated realizations of time series images using (1) and the setup of Figure 1 and the AR structures in Table I. For each pixel, we fitted (1) with different AR(p), p = 0, 1, 2, 3, 4, 5, and chose the model that provided the minimum BIC. SPMs were generated as discussed in Section II-A. Figure 2 provides sample realizations of SPMs obtained using our simulation setup and using AR(4) time series. We also performed a larger-scale simulation study, where we replicated our experiment 25 times for each simulation setting.
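To make the design concrete, the following sketch (Python; purely illustrative and not the authors' R code, which is linked in the Supplementary Materials) simulates one activated pixel's time series under this setup. The double-gamma HRF shape and the use of σ_0 as the AR innovation standard deviation are assumptions made only for illustration.

# Hedged, illustrative sketch of the simulation design described above.
import numpy as np
from scipy.stats import gamma

def hrf(t):
    # A generic double-gamma hemodynamic response; the paper only cites "the
    # hemodynamic response function [7]", so this parameterization is assumed.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

rng = np.random.default_rng(1)
T = 96                                              # 16 on-off blocks of 6 time-points
stim = np.tile(np.r_[np.zeros(6), np.ones(6)], 8)   # rest block first, then task
task = np.convolve(stim, hrf(np.arange(30)))[:T]    # HRF-convolved stimulus column
X = np.column_stack([np.ones(T), task, np.arange(1, T + 1)])  # intercept, task, drift

beta = np.array([6000.0, 600.0, -155.32])           # "Activated" row of Figure 1
phi = np.array([0.3, 0.25, 0.20, 0.15])             # AR(4), decreasing case of Table I
sigma0 = 400.0                                      # one of the noise settings used

eps = np.zeros(T)
for t in range(T):                                  # AR(p) errors, simulated recursively
    eps[t] = sum(phi[j] * eps[t - 1 - j] for j in range(len(phi)) if t > j) \
             + rng.normal(0.0, sigma0)
y = X @ beta + eps                                  # simulated voxel time series as in (1)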
Activation detection was done with AM-FAST and AR-FAST and also the commonly-used (in fMRI) cluster-thresholding (CT) at [39]'s suggested significance level (α = 0.001) with a second-order neighborhood and number of voxels determined by [55]'s 3dClustSim function. The R package AnalyzeFMRI [56] was used to perform cluster-wise thresholding. We also performed AS [41] (henceforth, TP11) using the R package fmri [57]. In each case, performance was evaluated in terms of the Jaccard Index [52], [53] between the estimated activation map and the truth as per Figure 1.

TABLE I: φs for the AR(p) scenarios used in our simulations.

p   Decreasing                       Increasing
2   (0.6, 0.3)                       (0.3, 0.6)
3   (0.4, 0.3, 0.2)                  (0.2, 0.3, 0.4)
4   (0.3, 0.25, 0.20, 0.15)          (0.15, 0.20, 0.25, 0.3)
5   (0.3, 0.25, 0.20, 0.10, 0.05)    (0.05, 0.10, 0.20, 0.25, 0.30)

p   Decreasing-Increasing            Increasing-Decreasing
2   (0.6, 0.3)                       (0.3, 0.6)
3   (0.4, 0.1, 0.4)                  (0.1, 0.7, 0.1)
4   (0.4, 0.05, 0.05, 0.4)           (0.05, 0.4, 0.4, 0.05)
5   (0.25, 0.15, 0.1, 0.15, 0.25)    (0.1, 0.15, 0.4, 0.15, 0.1)

Fig. 2: SPM for a sample simulation setting, for the five settings (from left to right) of equal, increasing, decreasing, increasing-decreasing and decreasing-increasing AR coefficients. The three rows (from top to bottom) correspond to simulation settings with CNR of 1.0, 1.5 and 2.0, respectively.

B. Results

Figure 3 provides activation maps detected using AM-FAST (with α = 0.025), CT and TP11 for each case of Figure 2. AR-FAST results were essentially indistinguishable from AM-FAST and so are not displayed. It is encouraging to note that AM-FAST visually tracks very closely to the activation regions of Figure 1. Both CT and TP11 also identify the true regions as activated but they also identify several other pixels as activated, with more false positives detected at lower CNR levels. At moderate CNR=2.0, CT does very well under the equal AR coefficients case but not as well as for some of the other cases. TP11 however has more false positives even for CNR=2.0. In summary, however, AM-FAST performs quite creditably at all CNR levels.

Fig. 3: Activation maps obtained using AM-FAST (top row), CT (middle) and TP11 (bottom row) from the SPMs of Figure 2 at each CNR: (a) CNR=1.0, (b) CNR=1.5, (c) CNR=2.0. Columns are as in Figure 2.

Figure 4 displays performance of our algorithms and its competitors in the large-scale simulation study. For low CNRs, both CT and TP11 perform poorly, but AM-FAST and AR-FAST perform quite well. As with Figure 3, both CT and TP11 improve with increased CNR: indeed CT is marginally the best performer for when CNR=2.0 and with no autocorrelation (p = 0). Interestingly, for CNR=2.0, CT does worse than AM-FAST or AR-FAST for the decreasing AR coefficients case but not necessarily for the other cases. AM-FAST and AR-FAST perform very similarly at all settings, however AM-FAST is computationally faster than AR-FAST. The poorer performance of CT relative to the FAST algorithms is not surprising because 3dClustSim and other such functions are not very adaptive in their execution. TP11 is, however, somewhat more adaptive, through the choice of the sequence of smoothing parameters that is left to the user. Specifying this sequence appropriately may, however, require considerable skill, and the default values provided in the fmri package do not appear to be adequate enough for lower CNR situations.
On the other hand, our approach determines the optimal smoothing from the SPM at each iteration through likelihood maximization (AM-FAST) or robust methods (AR-FAST). Interestingly though, we have found that α plays a role in the FAST algorithms, with smaller values performing marginally better with higher CNRs and higher values performing substantially better for lower CNRs. As a via media, we use α = 0.025 in our application. In summary, however, both FAST algorithms perform very well and over a wide range of low-to-moderate CNR settings.

Fig. 4: Summary of performance of the AM-FAST, AR-FAST, CT and TP11 algorithms for the different simulation settings (rows: equal, decreasing, increasing, increasing-decreasing and decreasing-increasing AR coefficients; columns: CNR = 1.00, 1.50, 2.00; vertical axis: Jaccard Index; horizontal axis: AR(0) through AR(5)). The subscript on the FAST legends represents the value of α = 0.001, 0.01, 0.05 used in the experiments.

IV. Activation During Perception of Noisy Speech

The dataset is from [31] and also provided by [55] as data6 to illustrate the many features of the AFNI software. In this experiment the subject was presented with audiovisual speech in both auditory and visual formats. There were two types of stimuli: (1) audio-reliable, where the subject could clearly hear the spoken word but the visual of the female speaker was degraded, and (2) visual-reliable, where the subject could clearly see the female speaker vocalizing the word but the audio was of reduced quality. There were three experimental runs, each consisting of a randomized design of 10 blocks, equally divided into blocks of audio-reliable and visual-reliable stimuli. An echo-planar imaging sequence (TR=2s) was used to obtain T2*-weighted images with volumes of 80 × 80 × 33 (voxel dimensions = 2.75 × 2.75 × 3.0 mm³) over 152 time-points. Our goal was to determine activation in areas that responded to the audio task (H0: β_a = 0), areas that responded to the visual task (H0: β_v = 0) and finally the contrast of visual against audio (H0: β_v − β_a = 0). We fitted AR models at each voxel using p = 0, 1, 2, 3, 4, 5 and chose p minimizing the BIC. After obtaining the SPM, we applied AM-FAST with results displayed via AFNI and Surface Mapping (SUMA) in Figure 5.

Fig. 5: Activation regions in AFNI data6 using SPM based on AR(p̂) of (a) Visual-reliable stimulus, (b) Audio-reliable stimulus and (c) the difference contrast between Visual-reliable and Audio-reliable stimuli.

Note that most of the activation occurs in Brodmann areas 18 and 19 (BA18 and BA19), which comprise the occipital cortex and the extrastriate (or peristriate) cortex. In humans with normal sight, the extrastriate cortex is a visual association area, where feature-extraction, shape recognition, attentional, and multimodal integrating functions take place. We also observed activated regions in the superior temporal sulcus (STS). Recent studies have shown increased activation in this area related to voices versus environmental sounds, stories versus nonsensical speech, moving faces versus moving objects, biological motion and so on [58].
Although a detailed analysis of the results of this study is beyond the purview of this paper, we note that AM-FAST finds interpretable results even when applied to a single subject high-level cognition experiment. V. D ISCUSSION In this paper, we have proposed a new fully automated and adaptive smoothing and thresholding algorithm called FAST that has the ability to perform activation detection in a range of low-signal regions. Two variants AM-FAST and AR-FAST, have been proposed that show similar performance. We recommend AM-FAST because it is computationally faster than AR-FAST. Our methodology accounts for both spatial correlation structure and uses more accurate extreme value theory in its development. Simulation experiments indicate good performance over a range of low-SNR and low-CNR settings. Developing FAST for more sophisticated time series and spatial models, as well as increased use of diagnostics in understanding activation and cognition are important research areas and directions that would benefit from further attention. S UPPLEMENTARY M ATERIALS The datasets and R code used in this paper and minimal examples for using the code are publicly available at https://github.com/maitra/FAST-fMRI. R EFERENCES [1] J. W. Belliveau, D. N. Kennedy, R. C. McKinstry, B. R. Buchbinder, R. M. Weisskoff, M. S. Cohen, J. M. Vevea, T. J. Brady, and B. R. Rosen, “Functional mapping of the human visual cortex by magnetic resonance imaging,” Science, vol. 254, pp. 716–719, 1991. [2] K. K. Kwong, J. W. Belliveau, D. A. Chesler, I. E. Goldberg, R. M. Weisskoff, B. P. Poncelet, D. N. Kennedy, B. E. Hoppel, M. S. Cohen, R. Turner, H.-M. Cheng, T. J. Brady, and B. R. Rosen, “Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation,” Proceedings of the National Academy of Sciences of the United States of America, vol. 89, pp. 5675–5679, 1992. [3] P. A. Bandettini, A. Jesmanowicz, E. C. Wong, and J. S. Hyde, “Processing strategies for time-course data sets in functional MRI of the human brain,” Magnetic Resonance in Medicine, vol. 30, pp. 161–173, 1993. [4] K. J. Friston, A. P. Holmes, K. J. Worsley, J.-B. Poline, C. D. Frith, and R. S. J. Frackowiak, “Statistical parametric maps in functional imaging: A general linear approach,” Human Brain Mapping, vol. 2, pp. 189–210, 1995. [5] A. M. Howseman and R. W. Bowtell, “Functional magnetic resonance imaging: imaging techniques and contrast mechanisms,” Philosophical Transactional of the Royal Society, London, vol. 354, pp. 1179–94, 1999. [6] W. D. Penny, K. J. Friston, J. T. Ashburner, S. J. Kiebel, and T. E. Nichols, Eds., Statistical Parametric Mapping: The Analysis of Functional Brain Images, 1st ed. Academic Press, 2006. [7] M. A. Lindquist, “The statistical analysis of fMRI data,” Statistical Science, vol. 23, no. 4, pp. 439–464, 2008. [8] N. A. Lazar, The Statistical Analysis of Functional MRI Data. Springer, 2008. [9] F. G. Ashby, Statistical Analysis of fMRI Data. MIT Press, 2011. [10] S. Ogawa, T. M. Lee, A. S. Nayak, and P. Glynn, “Oxygenation-sensitive contrast in magnetic resonance image of rodent brain at high magnetic fields,” Magnetic Resonance in Medicine, vol. 14, pp. 68–78, 1990. [11] S. Ogawa, T. M. Lee, A. R. Kay, and D. W. Tank, “Brain magnetic resonance imaging with contrast dependent on blood oxygenation,” Proceedings of the National Academy of Sciences, USA, vol. 87, no. 24, pp. 9868–9872, 1990. [12] K. J. Worsley, C. H. Liao, J. Aston, V. Petre, G. H. Duncan, F. Morales, and A. C. 
Evans, “A general statistical analysis for fmri data,” NeuroImage, vol. 15, pp. 1–15, 2002. [13] K. J. Friston, P. Fletcher, O. Josephs, A. Holmes, M. Rugg, and R. Turner, “Event-related fMRI: characterizing differential responses,” Neuroimage, vol. 7, no. 1, pp. 30–40, 1998. [14] G. H. Glover, “Deconvolution of impulse response in event-related bold fmri,” NeuroImage, vol. 9, pp. 416–429, 1999. [15] R. B. Buxton, K. Uludağ, D. J. Dubowitz, and T. T. Liu, “Modeling the hemodynamic response to brain activation,” Neuroimage, vol. 23, pp. S220–S233, 2004. [16] K. J. Friston, C. D. Frith, P. F. Liddle, R. J. Dolan, A. A. Lammertsma, and R. S. J. Frackowiak, “The relationship between global and local changes in PET scans,” Journal of Cerebral Blood Flow and Metabolism, vol. 10, pp. 458–466, 1990. [17] K. J. Friston, A. P. Holmes, K. J. Worsley, J.-P. Poline, C. D. Frith, and R. S. Frackowiak, “Statistical parametric maps in functional imaging: a general linear approach,” Human brain mapping, vol. 2, no. 4, pp. 189–210, 1994. [18] K. J. Worsley and K. J. Friston, “Analysis of fMRI time-series revisitedagain,” Neuroimage, vol. 2, no. 3, pp. 173–181, 1995. [19] C. R. Genovese, N. A. Lazar, and T. Nichols, “Thresholding of statistical maps in functional neuroimaging using the false discovery rate,” Neuroimage, vol. 15, no. 4, pp. 870–878, 2002. [20] J. V. Hajnal, R. Myers, A. Oatridge, J. E. Schweiso, J. R. Young, and G. M. Bydder, “Artifacts due to stimulus-correlated motion in functional imaging of the brain,” Magnetic Resonance in Medicine, vol. 31, pp. 283–291, 1994. [21] B. Biswal, A. E. DeYoe, and J. S. Hyde, “Reduction of physiological fluctuations in fMRI using digital filters.” Magnetic Resonance in Medicine, vol. 35, no. 1, pp. 107–113, January 1996. [Online]. Available: http://view.ncbi.nlm.nih.gov/pubmed/8771028 [22] R. P. Wood, S. T. Grafton, J. D. G. Watson, N. L. Sicotte, and J. C. Mazziotta, “Automated image registration. ii. intersubject validation of linear and non-linear models,” Journal of Computed Assisted Tomography, vol. 22, pp. 253–265, 1998. [23] R. P. Gullapalli, R. Maitra, S. Roys, G. Smith, G. Alon, and J. Greenspan, “Reliability estimation of grouped functional imaging data using penalized maximum likelihood,” Magnetic Resonance in Medicine, vol. 53, pp. 1126–1134, 2005. [24] R. Maitra, S. R. Roys, and R. P. Gullapalli, “Test-retest reliability estimation of functional mri data,” Magnetic Resonance in Medicine, vol. 48, pp. 62–70, 2002. [25] R. Maitra, “Initializing partition-optimization algorithms,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 6, pp. 144–157, 2009. [Online]. Available: http://doi.ieeecomputersociety.org/10.1109/TCBB.2007.70244 [26] Z. S. Saad, D. R. Glen, G. Chen, M. S. Beauchamp, R. Desai, and R. W. Cox, “A new method for improving functional-to-structural mri alignment using local pearson correlation,” NeuroImage, vol. 44, pp. 839–848, 2009. [27] E. E. Chen and S. L. Small, “Test-retest reliability in fMRI of language: Group and task effects,” Brain and Language, vol. 102, no. 2, pp. 176–85, 2007. [Online]. Available: http://dx.doi.org/10.1016/j.bandl.2006.04.015 [28] K. D. Kryter, The handbook of hearing and the effects of noise: Physiology, psychology, and public health. Academic Press, 1994. [29] M. D. Hauser, The evolution of communication. MIT press, 1996. [30] S. Dupont and J. Luettin, “Audio-visual speech modeling for continuous speech recognition,” IEEE transactions on multimedia, vol. 2, no. 3, pp. 141–151, 2000. 
[31] A. R. Nath and M. S. Beauchamp, “Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech,” The Journal of Neuroscience, vol. 31, no. 5, p. 1704 1714, 2011. [32] W. H. Sumby and I. Pollack, “Visual contribution to speech intelligibility in noise,” The journal of the acoustical society of america, vol. 26, no. 2, pp. 212–215, 1954. [33] B. E. Stein and M. A. Meredith, The merging of the senses. The MIT Press, 1993. [34] Y. Benjamini, A. M. Krieger, and D. Yekutieli, “Adaptive linear step-up procedures that control the false discovery rate,” Biometrika, vol. 93, no. 3, pp. 491–507, 2006. [35] R. Heller, D. Stanley, D. Yekutieli, N. Rubin, and Y. Benjamini, “Cluster-based analysis of fMRI data,” NeuroImage, vol. 33, no. 2, pp. 599–608, Nov. 2006. [Online]. Available: http://dx.doi.org/10.1016/j.neuroimage.2006.04.233 [36] Y. Benjamini and R. Heller, “False discovery rates for spatial signals,” Journal of the American Statistical Association, vol. 102, no. 480, pp. 1272–1281, 2007. [37] M. Smith and L. Fahrmeir, “Spatial Bayesian variable selection with application to functional Magnetic Resonance Imaging,” Journal of the American Statistical Association, vol. 102, no. 478, pp. 417–431, 2007. [Online]. Available: http://pubs.amstat.org/doi/abs/10.1198/016214506000001031 [38] S. M. Smith and T. E. Nichols, “Threshold-free cluster enhancement: Addressing problems of smoothing, threshold dependence and localisation in cluster inference,” Neuroimage, vol. 44, pp. 83–98, 2009. [39] C.-W. Woo, A. Krishnan, and T. D. Wager, “Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations,” Neuroimage, vol. 91, p. 412419, 2014. [40] K. Tabelow, J. Polzehl, H. U. Voss, and V. Spokoiny, “Analyzing fMRIexperiments with structural adaptive smoothing procedures,” NeuroImage, vol. 33, no. 1, pp. 55–62, 2006. [41] J. Polzehl, H. U. Voss, and K. Tabelow, “Structural adaptive segmentation for statistical parametric mapping,” NeuroImage, vol. 52, no. 2, pp. 515–523, 2010. [42] M. M. Monti, “Statistical analysis of fMRI time-series: A critical review of the glm approach,” Frontiers in Human Neuroscience, vol. 5, no. 00028, pp. 1–13, 2011. [Online]. Available: http://www.frontiersin.org/Journal/Abstract.aspx?s=537&name=human neuroscience&ART DOI=10.3389/fnhum.2011.00028 [43] W.-L. Luo and T. E. Nichols, “Diagnosis and exploration of massively univariate neuroimaging models,” Neuroimage, vol. 19, no. 3, pp. 1014–1032, 2003. [44] J. M. Loh, M. A. Lindquist, and T. D. Wager, “Residual analysis for detecting mis-modeling in fMRI,” Statistica Sinica, pp. 1421–1448, 2008. [45] M. Lindquist, J. Loh, L. Atlas, and T. Wager, “Modeling the hemodynamic response function in fMRI: Efficiency, bias and mismodeling,” Neuroimage, vol. 45, no. 1, p. S187S196, 2009. [46] G. Schwarz, “Estimating the dimensions of a model,” Annals of Statistics, vol. 6, pp. 461–464, 1978. [47] R. H. Shumway and D. S. Stoffer, Time Series Analysis and Its Applications, second edition ed. Springer, 2006. [48] S. I. Resnick, Extreme values, regular variation and point processes. Springer, 2013. [49] H. A. David and H. N. Nagaraja, Order Statistics. Hoboken, New Jersey: John Wiley and Sons, Inc., 2003. [50] R. Von Mises, “La distribution de la plus grande de n valeurs,” Rev. math. Union interbalcanique, vol. 1, no. 1, 1936. [51] D. Garcia, “Robust smoothing of gridded data in one and higher dimensions with missing values,” Computational statistics & data analysis, vol. 54, no. 4, pp. 
1167–1178, 2010. [52] P. Jaccard, “Ètude comparative de la distribution florale dans une portion des alpes et des jura,” Bulletin del la Sociètè Vaudoise des Sciences Naturelles, vol. 37, p. 547579, 1901. [53] R. Maitra, “A re-defined and generalized percent-overlap-of-activation measure for studies of fMRI reproducibility and its use in identifying outlier activation maps,” Neuroimage, vol. 50, no. 1, pp. 124–135, 2010. [54] E. J. Hoffman, P. D. Cutler, W. M. Digby, and J. C. Mazziotta, “3-d phantom to simulate cerebral blood flow and metabolic images for pet,” IEEE Transactions on Nuclear Science, vol. 37, pp. 616–620, 1990. [55] R. W. Cox, “Afni: software for analysis and visualization of functional magnetic resonance neuroimages,” Computers and Biomedical research, vol. 29, no. 3, pp. 162–173, 1996. [56] C. Bordier, M. Dojat, and P. L. de Micheaux, “Temporal and spatial independent component analysis for fmri data sets embedded in the AnalyzeFMRI R package,” Journal of Statistical Software, vol. 44, no. 9, pp. 1–24, 2011. [Online]. Available: http://www.jstatsoft.org/v44/i09/ [57] K. Tabelow and J. Polzehl, “Statistical parametric maps for functional mri experiments in R: The package fmri,” Journal of Statistical Software, vol. 44, no. 11, pp. 1–21, 2011. [Online]. Available: http://www.jstatsoft.org/v44/i11/ [58] E. Grossman and R. Blake, “Brain activity evoked by inverted and imagined biological motion,” Vision research, vol. 41, no. 10, pp. 1475–1482, 2001.
Dynamic Space Efficient Hashing Tobias Maier and Peter Sanders Karlsruhe Institute of Technology, Karlsruhe, Germany arXiv:1705.00997v1 [] 2 May 2017 Abstract grammers to know tight bounds on the maximum number of inserted elements. This is typically not realistic. For example, a frequent application of hash tables aggregates information about data elements by their key. Whenever the exact number of unique keys is not known a priori, we have to overestimate the initial capacity to guarantee good performance. Dynamic space efficient data structures are necessary to guarantee both good performance and low overhead independent of the circumstances. To visualize this, assume the following scenario. During a word count benchmark, we know an upper bound nmax to the number of unique words. Therefore, we construct a hash table with at least nmax cells. If an instance only contains 0.7 · nmax unique words, no static hash table can fill ratios greater than 70%. Thus, dynamic space efficient hash tables are required to achieve guaranteed nearoptimal memory usage. In scenarios where the final size is not known, the hash table has to grow closely with the actual number of elements. This cannot be achieved efficiently with any of the current techniques used for hashing and migration. Many libraries – even ones that implement space efficient hash tables – offer some kind of growing mechanism. However, all existing implementations either lose their space efficiency or suffer from degraded performance once the table grows above its original capacity. Growing is commonly implemented either by creating additional hash tables – decreasing performance especially for lookups or by migrating all elements to a new table – losing the space efficiency by multiplying the original size. To avoid the memory overhead of full table migrations, during which both the new and the old table coexist, we propose an in-place growing technique that can be adapted to most existing hashing schemes. However, frequent migrations with small relative size changes remain necessary to stay space efficient at all times. To avoid both of these pitfalls we propose a variant of (multi-way) bucket cuckoo hashing [7, 8]. A We consider space efficient hash tables that can grow and shrink dynamically and are always highly space efficient, i.e., their space consumption is always close to the lower bound even while growing and when taking into account storage that is only needed temporarily. None of the traditionally used hash tables have this property. We show how known approaches like linear probing and bucket cuckoo hashing can be adapted to this scenario by subdividing them into many subtables or using virtual memory overcommitting. However, these rather straightforward solutions suffer from slow amortized insertion times due to frequent reallocation in small increments. Our main result is DySECT (Dynamic Space Efficient Cuckoo Table) which avoids these problems. DySECT consists of many subtables which grow by doubling their size. The resulting inhomogeneity in subtable sizes is equalized by the flexibility available in bucket cuckoo hashing where each element can go to several buckets each of which containing several cells. Experiments indicate that DySECT works well with load factors up to 98%. With up to 2.7 times better performance than the next best solution. 
1 {t.maier,sanders}@kit.edu Introduction Dictionaries represented as hash tables are among the most frequently used data structures and often play a critical role in achieving high performance. Having several compatible implementations, which perform well under different conditions and can be interchanged freely, allows programmers to easily adapt known solutions to new circumstances. One aspect that has been subject to much investigation is space efficiency [3, 4, 7, 8, 17, 19]. Modern space efficient hash tables work well even when filled to 95% and more. To reach filling degrees like this, the table has to be initialized with the correct final capacity, thereby, requiring pro1 technique where each element can be stored in one of several associated constant sized buckets. When all of them are full, we move an element into one of its other buckets to make space. To solve the problem of efficient migration, we split the table into multiple subtables, each of which can grow independently of all others. Because the buckets associated with one element are spread over the different subtables, growing one subtable alleviates pressure from all others by allowing moves from a dense subtable to the newly-grown subtable. Doubling the size of one subtable increases the overall size only by a small factor while moving only a small number of elements. This makes the size changes easy to amortize. The size and occupancy imbalance between subtables (introduced by one subtable growing) is alleviated using displacement techniques common to cuckoo hashing. This allows our table to work efficiently at fill rates exceeding 95%. We begin our paper by presenting some previous work (Section 2). Then we go into some notations (Section 3) that are necessary to describe our main contribution DySECT (Section 4). In Section 5 we show our in-place migration techniques. Afterwards, we test all hash tables on multiple benchmarks (Section 6) and draw our conclusion (Section 7) 2 often highly non-trivial. Cuckoo hashing can be naturally generalized into two directions in order to make it more space efficient: allowing H choices [8] or extending cells in the table to buckets that can store B elements. We will summarize this under the term bucket cuckoo hashing. Further adaptations of cuckoo hashing include:multiple concurrent implementations either powered by bucket locking, transactional memory [15], or fully lock-less [18]; a de-amortization technique that provides provable worst case guarantees for insertions [1, 13]; and a variant that minimizes page-loads in a paged memory scenario [6]. Some non-cuckoo space efficient hash tables continue to use linear probing variants. Robin Hood hashing is a technique that was originally introduced in 1985 [2]. The idea behind Robin Hood hashing is to move already stored elements during insertions in a way that minimizes the longest possible search distance. Robin Hood hashing has regained some popularity in recent years, mainly for its interesting theoretical properties and the possibility to reduce the inherent variance of linear probing. All these publications show that there is a clear interest in developing hash tables that can be more and more densely filled. Dynamic hash tables on the other hand seem to be considered a solved problem. One paper that takes on the problem of dynamic hash tables was written by Dietzfelbinger at al. [5]. It predates cuckoo hashing, and much of the attention for space efficient hashing. 
All memory bounds presented are given without tight constant factors. The lack of implementations and theory about dense dynamic hash tables is where we pick up and offer a fast hash table implementation that supports dynamic growing with tight space bounds. Related Work The use of hash tables and other hashing based algorithms has a long history in computer science. The classical methods and results are described in all major algorithm textbooks [14]. Over the last one and a half decades, the field has regained attention, both from theoretical and the practical point of view. The initial innovation that sparked this attention was the idea that storing an element in the less filled of two “random” chains leads to incredibly well balanced loads. This concept is called the power of two choices [16]. It led to the development of cuckoo hashing [19]. Cuckoo hashing extends the power of two choices by allowing to move elements within the table to create space for new elements (see Section 3.2 for a more elaborated explanation). Cuckoo hashing revitalized research into space efficient hash tables. Probabilistic bounds for the maximum fill degree [3, 4] and expected displacement distances [9, 10] are 3 Preliminaries A hash table is a data structure for storing keyvalue-pairs (hkey, datai) that offers the following functionality: insert – stores a given key-value pair or returns a reference to it, if it is already contained; find – given a key returns an reference to said element if it was stored, and ⊥ otherwise; and erase – removes a previously inserted element (if present). Throughout this paper n denotes the number of 2 elements and m the number of cells (m > n) in a hash table. We define the load factor as δ = n/m. Tables can usually only operate efficiently up to a certain maximum load factor. Above that, operations get slower or have a possibility to fail. When implementing a hash table one has to decide between storing elements directly in the hash table – Closed Hashing – or storing pointers to elements – Open Hashing. This has an immediate impact on the amount of memory required (closed : m · |element| and open: m · |pointer| + n · |element|). For large elements (i.e., much larger then the size of a pointer), one can use a non-space efficient hash table with open hashing to reduce the relevant memory factor. Therefore, we restrict ourselves to the common and more interesting case of elements whose size is close to that of a pointer. For our experiments we use 128bit elements (64bit keys and 64bit values). In this case, open hashing introduces a significant memory overhead (at least 1.5×). For this reason, we only consider closed hash tables. Their memory efficiency is directly dependent on the table’s load. To reach high fill degrees with closed hashing tables, we have to employ open addressing techniques. This means that elements are not stored in predetermined cells, but can be stored in one of several possible places (e.g. linear probing, or cuckoo hashing). 3.1 instantiated table can grow arbitrarily large over its original capacity while remaining smaller than α · nmax · size(element) + O(1) at all times. One problem for many implementations of space efficient hash tables is the migration. During a normal full table migration, both the original table and the new table are allocated. This requires mnew + mold cells. Therefore, a normal full table migration is never more than 2-space efficient. 
The only option for performing a full table migration with less memory is to increase the memory in-place (see Section 5). Similar to static α-space efficiency, we will mostly talk about the minimum load factor δmin = α1 instead of α. 3.2 Cuckoo Hashing Cuckoo hashing is a technique to resolve hash conflicts in a hash table using open addressing. Its main draw is that it guarantees constant lookup times even in densely filled tables. The distinguishing technique of cuckoo hashing is that H hash functions (h1 , ..., hH ) are used to compute H independent positions. Each element is stored in one of its positions. Even if all positions are occupied one can often move elements to create space for the current element. We call this process displacing elements. Bucket cuckoo hashing is a variant where the cells of the hash table are grouped into buckets of size B (m/B buckets). Each element assigned to one bucket can be stored in any of the bucket’s cells. Using buckets one can drastically increase the number of elements that can be displaced to make room for a new one, thus decreasing the expected length of displacement paths. Find and erase operations have a guaranteed constant running time. Independent from the table’s density, there are H buckets – H · B cells – that have to be searched to find an element. During an insert the element is hashed to H buckets. We store the element in the bucket with the most free space. When all buckets are full we have to move elements within the table such that a free cell becomes available. To visualize the problem of displacing elements, one can think of the directed graph implicitly defined by the hash table. Each bucket corresponds to a node and each element induces an edge between the bucket it is stored in and its H − 1 alternate α-Space Efficient Hash Tables Static. We call a hashing technique α-space efficient when it can work effectively using at most α· ncurr · size(element) + O(1) memory. In this case we define working efficiently as having average insertion 1 times in O( 1−δ ). This is a natural estimation for insertion times, since it is the expected number of fully random probes needed to hit an empty cell (1 − δ is the fraction of empty cells). In many closed hashing techniques (e.g. linear probing, cuckoo hashing) cells are the same size as elements. Therefore, being α-space efficient is the same as operating with a load factor of δ = α−1 . Because of this, we will mostly talk about the load factor of a table instead of its memory usage. Dynamic. The definition of a space efficient hashing technique given above is specifically targeted for statically sized hash tables. We call an implementation dynamically α-space efficient if an 3 4.2 buckets. To insert an element into the hash table we have to find a path from one of its associated buckets to a bucket that has free capacity. Then we move elements along this path to make room in the initial bucket. The two common techniques to find such paths are random walks and breadth first searches. 4 As soon as the (overall) table contains enough elements such that the memory constraint can be kept during a subtable migration, we grow one subtable by migrating it into a table twice its size. We migrate subtables in order from first to last. This ensures that no subtable can be more than twice as large as any other. Assume that we have j large subtables (2s) than −1 m = (T + j) · s. 
When δmin · n > m + 2s we can grow the first subtable while obeying the size constraint (the newly allocated table will have 2s cells). Doubling the size of a subtable increases the global number of cells from mold = (T + j) · s to mnew = mold +s = (T +j+1)·s (grow factor T T+j+1 +j ). Note that all subsequent growing operations migrate one of the smaller tables until all tables have the same size. Therefore, each grow until then increases the overall capacity by the same absolute amount (smaller relative to the current size). The cost of growing a subtable is amortized by all insertions since the last subtable migration. There are δmin ·s = Ω(s) insertions between two migrations. One migration takes Θ(s) time. Apart from being amortized, the migration is cache efficient since it accesses cells in a linear fashion. Even in the target table cells are accessed linearly. We assign elements to buckets by using bits from their hash value. In the grown table we use exactly one more bit than before (double the number of buckets). This ensures that all elements from one original bucket are split between two buckets in the target table. Therefore no bucket can overflow and no displacements are necessary. In the implicit graph model of the cuckoo table (Section 3.2), growing a subtable is equivalent to splitting each node that represents a bucket within that subtable. The resulting graph becomes more sparse, since the edges (elements) are not doubled, making it easier to insert subsequent elements. DySECT (Dynamic Space Efficient Cuckoo Table) A commonly used growing technique is to double the size of a hash table by migrating all its elements into a table with twice its capacity. This is of course not memory efficient. The idea behind our dynamic hashing scheme is to double only parts of the overall data structure. This increases the space in part of our data structure without changing the rest. We then use cuckoo displacement techniques to make this additional memory reachable from other parts of the hash table. 4.1 Growing Overview Our DySECT hash table consists of T subtables (shown in Figure 1) that in turn consist of buckets, which can store B elements each. Each element has H associated buckets – similar to cuckoo hashing – which can be in the same or in different subtables. T , B, and H are constant that will not change during the lifetime of the table. Additionally, each table is initialized with a minimum fill ratio δmin . −1 The table will never exceed δmin · n cells once it begins to grow over its initial size. To find a bucket associated with an element e, we compute e’s hash value using the appropriate hash function hi (e). The hash is then used to compute the subtable and the bucket within that subtable. To make this efficient we use powers of two for the number of subtables (T = 2t ), as well as for the number of buckets per subtable (subtable size s = 2x · B). Since the number of subtables is constant, we can use the first t bits from the hashed key to find the appropriate subtable. From the remaining bits we compute the bucket within that subtable using a bitmask (hi (e) & (2x − 1) = hi (e) mod 2x ). 4.3 Shrinking If shrinking is necessary it can work similarly to growing. We replace a subtable with a smaller one by migrating elements from one to the other. During this migration we join elements from two buckets into one. Therefore it is possible for a bucket to overfill. 
We reinsert these elements at the end of 4 T subtables s cells 2·s cells j-th table bucket with B cells one element with H associated buckets Figure 1: Schematic Representation of a DySECT Table. the migration. Obviously, this can only affect at most half the migrated elements. When automatically triggering the size reduction, one has to make sure that the migration cost is amortized. Therefore, a grow operation cannot immediately follow a shrink operation. When shrinking is enabled we propose to shrink one subtable when −1 δmin · n < m − s0 elements (s0 size of a large table, mnew = mold − s0 /2). Alternatively, one could implement a shrink to size operation that is explicitly called by the user. 4.4 and the bigger tables have 2n/(T + j) elements. For each table there are about Hn/T elements that have an associated bucket within that table. This shows that having more hash functions can lead to a better balance. For two hash functions (H = 2) and only one grown table (j = 1) this means that ≈ 2n/(T + 1) elements should be stored in the first table to achieve a balanced bucket distribution. Therefore, nearly all elements associated with a bucket in the first table (≈ 2n/T ) have to be stored there. This is one reason why H = 2 does not work well in practice. Difficulties for the Analysis of DySECT There are two factors specific to DySECT impacting its performance: inhomogeneous table resolution and element imbalance. Imbalance through Size Changes. In addition to the problem of inhomogeneous tables there is an inherent balancing problem introduced by resizing subtables. It is clear that a newly grown table Imbalance through Inhomogeneous Table is not filled as densely as other tables. Since we Resolution. By growing subtables individually double the table size, grown tables can only be filled we introduce a size imbalance between subtables. to about 50%. Large subtables contain more buckets but the number of elements hashed to a large subtable is not genAssume the global table is filled close to 100% erally higher than the number of elements that are when the first table grows. Now there is capacity hashed to a small subtable. This makes it difficult for s new elements but this capacity is only in the to spread elements evenly among buckets. Imbal- first table, elements that are not hashed to the first anced bucket fill ratios can lead to longer insertion table, automatically trigger displacements leading times. to slow insertions. Notice that repeated insert and Assume there are n elements in a hash table with erase operations help to equalize this imbalance, T subtables, j of which have size 2s the others have because elements are more likely inserted into the size s. If elements are spread equally among buckets sparser areas, and more likely to be deleted from then all small tables have around n/(T +j) elements, denser areas. 5 4.5 Implementation Details tables that can grow using an in-place full table migration. If we grow these tables in small increments, they can grow while enforcing a strict size constraint. To explain these techniques, we first have to explain how to use memory overcommitting and virtual memory to create a piece of memory that can grow in-place. Note that this technique violates best programming practices and is not fully portable to some systems. The idea is the following: the operating system will – if configured to do so – allow memory allocations larger than the machine’s main memory, with the anticipation that not all allocated memory will actually be used. 
Only memory pages that are actually used will be mapped from virtual to physical memory pages. Thus, for the purpose of space efficiency the memory is not yet used. Initializing parts of this memory is similar to allocating and initializing new memory. For our experiments (Section 6) we use three hash functions (H = 3) and a bucket size of (B = 8). These values have consistently outperformed other options both in maximum load factor and in insert performance (see Appendix A). T is set to 256 subtables for all our tests. To find displacement opportunities we use breadth first search. In our tests it performed better than random walks, since it better uses the read cache lines from one bucket. The hash table itself is implemented as a constant sized array of pointers to subtables. We have to lookup the corresponding pointer whenever a subtable is accessed. This does not impact performance much since all subtable pointers will be cached – at least if the hash table is a performance bottleneck. Reducing the Number of Computed Hash Functions. Evaluating hash functions is expensive, therefore, reducing the number of hash functions computed per operation can increase the performance of the table. The hash function we use computes 64bit hash values (i.e. xxHash1 ). We split the 64bit hash value into two 32bit values. All common bucket hash table sizes can be addressed using 32 bits (up to 232 buckets 235 ≈ 34 billion elements consuming 512GiB memory). When H > 2 we can use double hashing [11, 12] to further reduce the number of computed hash functions. Double hashing creates an arbitrary number of hash values using only two original hash functions h0 and h00 . The additional values are linear combinations computed from the original two values, hi (key) = h0 (key) + i · h00 (key). Combining both of these techniques, we can reduce the number of computed hash functions to one 64bit hash function. This is especially important during large displacements where each encountered element has to be rehashed to find its alternative buckets. 5 5.1 Improving DySECT Accessing a DySECT subtable usually takes one indirection. The pointer to the subtable has to be read from an array of pointers before accessing the actual subtable. Instead of using an array of pointers, we can implement the subtables as sections within one large allocation (size u). We choose u larger than the actual main memory, to allow all possible table sizes. This has the advantage that the offset for each table can be computed quickly (ti = Tu · i), without looking it up from a table. The added advantage is that we can grow subtables in-place. To increase the size of a subtable, it is enough to initialize a consecutive section of the table (following the original subtable). Once this is done, we have to redistribute the table’s elements. This allows us to grow a subtable without the space overhead of reallocation. Therefore, we can grow earlier, staying closer to the minimum load factor δmin . The in-place growing mechanism is easy in this case, since the subtable size is doubled. (Ab)Using Virtual Memory In this section we show how one can use virtual 5.2 Implementing other size constrained tables memory and memory overcommitting, to eliminate the indirections from a DySECT hash table. The Similarly to the technique above, we can implement same technique also allows us to implement hash any hash table using a large allocation, initializing 1 xxhash.com only as much memory as the table initially needs. 
The used hash table size can be increased in-place by initializing more memory. To use this additional memory for the hash table, we have to perform an in-place migration. To implement fast in-place migration, we need the correct addressing technique. There are two natural ways to map a hash value h(e) to a cell in the table (size s). Most programmers would use a slow modulo operation (h(e) mod s). This is the same as using the least significant digits when addressing a table whose size is a power of two. A better way is to use a scale factor (⌊h(e) · s/max(h)⌋). This is similar to using the most significant bits to address a table whose size is a power of two. The second method has two important advantages: it is faster to compute and it helps to make the migration cache efficient. When we use the second method the elements in the hash table are close to being sorted by their hash value (in the absence of collisions they would be sorted). The main idea of all our in-place migration techniques is the following. If we use a scale factor for our mapping – in the new table – most elements will be mapped to a position that is larger than their position in the old table. Therefore, rehashing elements starting from the back of the original table creates very few conflicts. Elements that are mapped to a position earlier than their current position are buffered and reinserted at the end of the migration. When using this technique, both the old and the new table are accessed linearly in reverse order (from back to front), making the migration cache efficient and easy to implement. For a table that was initialized with a min load factor δmin we trigger growing once the table is loaded more than (δmin + 1)/2. We then increase the capacity m to δmin^{−1} · n. Repeated migrations with small growing amounts are still inefficient, since each element has to be moved. This blueprint can be used to implement in-place growing variants of most if not all common hashing techniques. We used these same ideas to implement variants of linear probing, robin hood hashing, and bucket cuckoo hashing, although some variants have their own optimized migration. Robin Hood hashing can be adapted such that the table is truly sorted by hash value (without much overhead), making the migration faster than repeated reinsertions. Bucket cuckoo hashing has a somewhat more complicated migration technique, since each element has multiple possible positions, and one bucket can overflow. The best strategy here is to try to reinsert each element with the hash function (h1, ..., hH) that was previously used to store it.

6 Experiments

There are many factors that impact hash table performance. To show that our ideas work in practice we use both micro-benchmarks and practical experiments. All reported numbers are averaged by running each experiment five times. The experiments were executed on a server with two Intel Xeon E5-2670 CPUs (2.3GHz base frequency) and 128GB RAM (using gcc 6.2.0 and Ubuntu 14.04).2 To put the performance of our DySECT table into perspective, we implement and test several other options for space efficient hashing using the method described in Section 5.2. We use our own implementations, since no hash table found online supports our strict space-efficiency constraint.
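As a concrete illustration of the Section 5.2 blueprint (and not the authors' C++ implementation), the following sketch shows scale-factor addressing and a back-to-front in-place migration for a minimal linear-probing table; the class and function names are ours, and capacity handling is deliberately simplified.

# Hedged sketch: scale-factor addressing and back-to-front in-place migration
# for a minimal linear-probing table, following the Section 5.2 description.
HASH_MAX = 2**64

def cell(h, size):
    # Scale-factor mapping floor(h * size / max(h)); keeps entries roughly sorted
    # by hash value, so most entries move to a larger index when the table grows.
    return h * size // HASH_MAX

class LinProbTable:
    def __init__(self, capacity):
        self.cap = capacity
        self.slots = [None] * capacity              # each slot: (hash, key, value) or None

    def _insert_raw(self, entry):
        i = cell(entry[0], self.cap)
        while self.slots[i] is not None:            # linear probing
            i = (i + 1) % self.cap
        self.slots[i] = entry

    def insert(self, key, value):
        self._insert_raw((hash(key) % HASH_MAX, key, value))

    def grow_inplace(self, new_cap):
        # Extending the slot list stands in for committing more (virtual) memory.
        old_cap, buffered = self.cap, []
        self.slots.extend([None] * (new_cap - old_cap))
        self.cap = new_cap
        for i in range(old_cap - 1, -1, -1):         # rehash from back to front
            entry = self.slots[i]
            if entry is None:
                continue
            self.slots[i] = None
            if cell(entry[0], new_cap) >= i:
                self._insert_raw(entry)              # usually lands at or after i
            else:
                buffered.append(entry)               # would move backwards: defer
        for entry in buffered:                       # reinsert deferred entries last
            self._insert_raw(entry)

Because both the old and the new positions are visited in decreasing index order, the scan stays essentially sequential in memory, which is the cache-efficiency argument made above.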
With the technique described in Section 5.2, we implement and test hash tables with linear probing, robin hood hashing, and bucket cuckoo hashing (similar to DySECT we choose B = 8 and H = 3 see Appendix A for experiments with other parameter settings). For each table, we implemented an individually tuned cache efficient in-place migration algorithm. Without Virtual Memory/ Memory Overcommitting. All implementations described above work with the trick described in Section 5. The usefulness of this technique is arguable, since abusing the concept of virtual memory in this way is problematic not only from a software design perspective. It directly violates best practices, and reduces portability to many systems. The only table that can achieve dynamic α-space efficiency without this technique is our DySECT hash table. It is notable that this implementation is never significantly worse than DySECT with overcommitting (this variant is displayed using a dashed line). For each competitor table, we also implemented a variant that uses subtables combined with normal 2 Experiments on a desktop machine yielded similar results. 7 migrations (small grow factor, similar to in-place variants). Elements are first hashed to subtables and then hashed within that table. They cannot move between subtables. These variants are not strictly space efficient. The subtables are generally small (n/T ), therefore migrations will usually not violate the size constraint (too much). There can be larger subtables, since imbalances between subtables cannot be regulated. Throughout this section, we display these variants with dashed lines (similar to the DySECT variant without memory overcommitting). already decently filled can have an extremely long search distance. This leads to a high running time variance on find operations. Unsuccessful finds perform really badly, since all cells until the next free cell have to be probed. Their performance is much more related to the filling degree of the hash table. Robin Hood hashing performs somewhat similar to linear probing. It worsens the successful find performance by moving previously inserted elements from their original position, in order to achieve better unsuccessful find performance on highly filled tables. Overall, Robin Hood hashing is objectively worse than both DySECT and classic cuckoo hash6.1 Influence of Fill Ratio (Static Ta- ing. Cuckoo hashing and its variants like DySECT have guaranteed constant running times for all find ble Size) operations – independent of their success and the The following test was performed by initializing a ta- table’s filling degree. ble with m ≈ 25 000 000 cells (non-growing). Then elements are inserted until there is a failing inser- 6.2 Influence of Fill Ratio (Dynamic tion. At different stages, we measure the running Table Size) time of new insertion (Figure 2), and find (Figure 3 and 4) operations (averaged over 1000 operations). In this test 20 000 000 elements are inserted into Finds are measured using either randomly selected an initially empty table. The table is initialized elements from within the table (successful), or by expecting 50 000 elements, thus growing is necessary searching random elements from the whole key space to fit all elements. The tables are configured to (unsuccessful). We omit testing multi table variants guarantee a load factor of at least δmin at all times. of the competitor tables. 
They are not suitable for Figure 5 shows the performance in relation to the this test since forcing a static size limits the possi- load factor. Insertion times are computed as average bility to react to size imbalances between subtables of all 20 000 000 insertions. They are normalized (in the absence of displacements). similar to Figure 2 (divided by 1−δ1min ). As to be expected, the insertion performance of We see that DySECT performs by far the best depends highly on the fill degree of the table. There- even with less filled tables at 85% load. Here we 1 fore, we show it normalized with 1−δ which is the achieve a speedup of 1.6 over the next best soluexpected number of fully random probes to find a tion (299ns vs. linear probing 479ns). On denser free cell and thus a natural estimate for the running instances with 97.5% load, we can increases these time. We see that – up to a certain point – the inser- speedup to 2.7 (1580ns vs Cuckoo with subtables 1 tion time behaves proportional to 1−δ for all tables. 4210ns). With growing load, we see the insertion Close to the capacity limit of the table, the inser- times of our competitors degrade. The combination tion time increases sharply. DySect has a smaller of long insertion times, and frequent growing phases capacity limit than cuckoo due inhomogeneous table slows them down. There are only few insertions beresolution (see Section 4.4). tween two growing phases. Making the amortization Figure 3 and 4 show the performance of find op- of each growing phase challenging since each growerations. Linear probing performs relatively well on ing phase has to move all elements. Any growing successful find operations, up to a fill degree of over technique that uses a less cache efficient migration 95%. The reason for this is that many elements algorithm, would likely perform significantly worse. were inserted into the table when the table was still DySECT however remains close to O( 1−δ1min ) even relatively empty. They have very short search dis- for fill degrees up to 97.5%. This is possible, because tances, thus improving find performance. Successful only very few elements are actually touched during find performance can still be an issue in applica- each subtable migration (≈ Tn ). tions. An element that is inserted when the table is We also measured the performance of find opera8 80 DySECT Cuckoo Lin Prob Robin Hood 40 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 20 time x (1 − δ) [ns] 60 ● 0.80 0.85 0.90 0.95 1.00 load factor δ 400 200 time [ns] 300 DySECT Cuckoo Lin Prob Robin Hood 100 200 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 100 time [ns] 300 400 Figure 2: Insertions into a Static Table. Here we show the influence from the load factor, on the performance of insertions. To make insertion time more readable, we normalize it with top · (1 − δ). 0.80 0.85 0.90 0.95 1.00 0.80 load factor δ 0.85 0.90 0.95 1.00 load factor δ Figure 3: Performance of Successful Finds. Figure 4: Performance of Unsuccessful Finds. DySECT’s find performance is independent from DySECT’s find performance is independent from the load factor. the operations success. tions on the created tables, they are similar to the performance on the static table in Section 6.1 (see our case is a hash of the contained word. 
This is a Figure 3 and 4), therefore, we omit displaying them common application, in which static hash tables can never be space efficient, since the final size of the for space reasons. hash table is usually unknown. Here we use the first block of the CommonCrawl dataset (commoncrawl. 6.3 Word Count – a Practical use org/the-data/get-started) and compute a word count of the contained words, using our hash taCase bles. The chosen block has 4.2GB and contains Word count and other aggregation algorithms are around 240 000 000 words, with around 20 000 000 some of the most common use cases for hash tables. unique words. For the test, we hash each word to Data is aggregated according to its key, which in a 64 bit key and insert it together with a counter. 9 250 150 DySECT Cuckoo Lin Prob Robin Hood ● 0.85 0.90 ● ● ● ● ● ● ● ● ● ● 50 ● ● 0 time x (1 − δmin) [ns] ● 0.95 1.00 enforced min load δmin 600 200 400 ● DySECT Cuckoo Lin Prob Robin Hood ● ● ● ● ● ● ● ● ● ● ● ● 0 Time per operation [ns] 800 Figure 5: Insertions into a dynamic growing table enforcing a minimum load factor δmin . 0.85 0.90 0.95 1.00 enforced min load δmin Figure 6: Word Count Benchmark. The benchmark behaves like a mix of insert and find operations. DySECT’s performance is nearly independent form the load factor. Subsequent accesses to the same word increase this counter. Similar to the growing benchmark, we start with an empty table initialized for 50 000 elements. The performance results can be seen in Figure 6. We do not use any normalization since each word is repeated 12 times (on average). This means that most operations will actually behave more like successful find operations instead of inserting an element. When using our DySECT table, the running time seems to be nearly independent from the fill degree. We experience little to no slowdown until around 97%. The tables using full table migration however become very inefficient on high load degrees. For high load factors, the performance closely resembles that of the insertion benchmark (Figure 5). This indicates that insert performance can dominate running times even in find intensive workloads. To confirm this insight, we conducted some exper- iments with mixed operations (insert and find; insert and erase). They showed, that on a hash table with δmin = 0.95 DySECT outperforms linear probing for any workload containing more than 5% insertions (see Section 6.4). 6.4 Mixed Workloads In this test, we show how the hash tables behave under mixed workloads. For this test we fixed the minimum load factor to δ = 0.95. The test starts with a filled table containing 15 000 000 elements (95% filled). On this table we perform 10 000 000 operations mixed between insert and find/erase operations. As one might expect, the running time of a mixed work load can be estimated with a linear combination of the used operations. In the insert/find benchmark (Figure 7), 10 max load δ 3000 2000 ● ● ● ● 1000 ● ● ● ● 0.998 0.997 0.997 0.99 0.989 20 40 60 80 216 3000 2000 ● ● 1000 ● ● ● 0 ● ● 60 70 80 90 100 fraction of insert operations [%] Workloads combining insertions with DySECT outperforms all other hash tables. This shows that fast insertions are important even in find intensive workloads. 
6.4 Mixed Workloads

In this test, we show how the hash tables behave under mixed workloads. For this test we fixed the minimum load factor to δmin = 0.95. The test starts with a filled table containing 15 000 000 elements (95% filled). On this table we perform 10 000 000 operations mixed between insert and find/erase operations. As one might expect, the running time of a mixed workload can be estimated as a linear combination of the running times of the used operations.

In the insert/find benchmark (Figure 7), DySECT outperforms all other hash tables. This shows that fast insertions are important even in find intensive workloads. This is even more accentuated by the fact that all performed finds are successful finds, which usually perform better on hash tables using linear probing. The measurements with deletions (Figure 8) show that deletions in linear probing tables are significantly slower than those in cuckoo tables. This makes sense, since linear probing tables have to move elements to fix their invariants, while cuckoo tables have guaranteed constant-time deletions.

Figure 7: Workloads combining insertions with finds.
Figure 8: Workloads combining insertions with erase.

6.5 Investigating Maximum Load Bounds

We designed the following experiment to give an indication of the theoretical load bounds DySECT is able to support. To do this, we configured DySECT to grow only if there is an unsuccessful insertion – in our other tests the table grows in anticipation once the size constraint allows it. To get even closer to the theoretical limits, we use T = 4096 subtables such that each growing step has a smaller relative growth factor. Additionally, we increase the number of probes used to find a successful displacement path to 16384 (from 1024). We test different combinations of bucket size (B) and number of hash functions (H). The test consists of inserting 20 000 000 elements into a previously empty table initialized with 50 000 cells. Whenever the table has to grow, we note its effective load (during the migration). The maximum load bound depends on the number of large subtables. Therefore, we show the achieved load factors over the table capacity (m). To give a perspective on static table performance, we show dashed lines with the performance of a cuckoo hash table. We measured cuckoo's performance by initializing a table with 20 000 000 cells and filling it until there was an error (using the same search distance of 16384).

Figure 9: Maximum load bounds for different parameterizations (B/H) over a varying table size. Cuckoo's bound is indicated by a dashed line.

Figure 9 clearly shows the cyclical nature of DySECT's maximum load bound. The table can be filled the most when its capacity is close to a power of two. Then all subtables have the same size and the table behaves similar to a static cuckoo table of the same size. On some sizes DySECT even seems to outperform cuckoo hashing. We are not sure why this is the case. We suspect that the bound measured for cuckoo hashing is not tight (cuckoo hashing bounds are only measured on one table size). This effect does not appear for the 8/3 parameterization that we use throughout the paper. Both parameterizations that use only two hash functions (8/2 and 4/2) have a higher dependency on the table size. The reason for this is that additional hash functions help to reduce the imbalance introduced by the varied subtable resolution described in Section 4.4. Overall, we reach load bounds that are close to those of static cuckoo hashing.
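The measurement procedure of this experiment can be illustrated with the following Python sketch: elements are inserted, and whenever an insertion fails the effective load is recorded before the table is allowed to grow. The TinyTable class is a deliberately crude stand-in (a fixed-capacity open-addressing table with a bounded probe count), not DySECT itself; its parameters and the growth factor are illustrative assumptions.

```python
import random

class TinyTable:
    """Crude fixed-capacity stand-in for a space-efficient hash table:
    linear probing with a bounded probe count, so insertions can fail."""
    def __init__(self, capacity, max_probes=128):
        self.capacity = capacity
        self.max_probes = max_probes
        self.slots = [None] * capacity
        self.size = 0

    def insert(self, key):
        h = hash(key) % self.capacity
        for i in range(self.max_probes):
            idx = (h + i) % self.capacity
            if self.slots[idx] is None:
                self.slots[idx] = key
                self.size += 1
                return True
        return False  # unsuccessful insertion triggers a grow step

def measure_max_loads(n_elements=200_000, start_capacity=5_000):
    """Insert random keys; whenever an insertion fails, record the effective
    load and replace the table by a slightly larger, fully rehashed one."""
    table = TinyTable(start_capacity)
    keys, loads = [], []
    for _ in range(n_elements):
        key = random.getrandbits(64)
        if not table.insert(key):
            loads.append(table.size / table.capacity)  # load at the grow step
            pending, cap = keys + [key], table.capacity
            while True:                                # grow until everything fits
                cap = int(cap * 1.1) + 1
                table = TinyTable(cap)
                if all(table.insert(k) for k in pending):
                    break
        keys.append(key)
    return loads
```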
When very high space efficiency is desired, it is up to 2.7 times better than simple solutions. For future work, a theoretical analysis of DySECT looks interesting. We expect that techniques previously used to analyze bucket cuckoo hashing will be applicable in principle. However, the already very complex calculations have to be generalized to take all possible ratios of small versus large subtables into account. Even for the static case and classical bucket cuckoo hashing, it is a fascinating open question whether the observed proportionality of insertion time to 1/(1 − δ) can be proven. Previous results on insertion time show much more conservative bounds [8, 9, 10, 7]. On the practical side, DySECT looks interesting for concurrent hashing [15, 18] since it grows only small parts of the table at a time.

References

[1] Yuriy Arbitman, Moni Naor, and Gil Segev. De-amortized cuckoo hashing: Provable worst-case performance and experimental results. In International Conference on Automata, Languages and Programming (ICALP), number 5555 in LNCS, pages 107–118. Springer, 2009.
[2] Pedro Celis, Per-Ake Larson, and J. Ian Munro. Robin hood hashing. In 26th Symposium on Foundations of Computer Science (FOCS), pages 281–288, Oct 1985.
[3] Luc Devroye and Pat Morin. Cuckoo hashing: Further analysis. Information Processing Letters, 86(4):215–219, 2003.
[4] Martin Dietzfelbinger, Andreas Goerdt, Michael Mitzenmacher, Andrea Montanari, Rasmus Pagh, and Michael Rink. Tight thresholds for cuckoo hashing via XORSAT. In 27th International Conference on Automata, Languages and Programming (ICALP), pages 213–225, 2010.
[5] Martin Dietzfelbinger, Anna Karlin, Kurt Mehlhorn, Friedhelm Meyer auf der Heide, Hans Rohnert, and Robert E. Tarjan. Dynamic perfect hashing: Upper and lower bounds. SIAM Journal on Computing, 23(4):738–761, 1994.
[6] Martin Dietzfelbinger, Michael Mitzenmacher, and Michael Rink. Cuckoo hashing with pages. In 19th European Symposium on Algorithms (ESA), number 6942 in LNCS, pages 615–627. Springer, 2011.
[7] Martin Dietzfelbinger and Christoph Weidling. Balanced allocation and dictionaries with tightly packed constant size bins. Theoretical Computer Science, 380(1):47–68, 2007.
[8] Dimitris Fotakis, Rasmus Pagh, Peter Sanders, and Paul Spirakis. Space efficient hash tables with worst case constant access time. Theory of Computing Systems, 38(2):229–248, 2005.
[9] Nikolaos Fountoulakis, Konstantinos Panagiotou, and Angelika Steger. On the insertion time of cuckoo hashing. SIAM Journal on Computing, 42(6):2156–2181, 2013.
[10] Alan Frieze, Páll Melsted, and Michael Mitzenmacher. An analysis of random-walk cuckoo hashing. SIAM Journal on Computing, 40(2):291–308, 2011.
[11] Leo J. Guibas and Endre Szemeredi. The analysis of double hashing. Journal of Computer and System Sciences, 16(2):226–274, 1978.
[12] Adam Kirsch and Michael Mitzenmacher. Less hashing, same performance: Building a better bloom filter. In Yossi Azar and Thomas Erlebach, editors, 14th European Symposium on Algorithms (ESA), number 4168 in LNCS, pages 456–467. Springer, 2006.
[13] Adam Kirsch and Michael Mitzenmacher. Using a queue to de-amortize cuckoo hashing in hardware. In 45th Annual Allerton Conference on Communication, Control, and Computing, volume 75, 2007.
[14] Donald E. Knuth. The Art of Computer Programming, Volume 3: (2nd Ed.) Sorting and Searching. Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA, 1998.
[15] Xiaozhou Li, David G. Andersen, Michael Kaminsky, and Michael J. Freedman. Algorithmic improvements for fast concurrent cuckoo hashing. In 9th European Conference on Computer Systems, EuroSys '14, pages 27:1–27:14. ACM, 2014.
[16] M. Mitzenmacher. The power of two choices in randomized load balancing. IEEE Transactions on Parallel and Distributed Systems, 12(10):1094–1104, Oct 2001.
[17] Michael Mitzenmacher. Some open questions related to cuckoo hashing. In Amos Fiat and Peter Sanders, editors, 17th European Symposium on Algorithms (ESA), volume 5757 of LNCS, pages 1–10. Springer, 2009.
[18] N. Nguyen and P. Tsigas. Lock-free cuckoo hashing. In 2014 IEEE 34th International Conference on Distributed Computing Systems (ICDCS), pages 627–636, June 2014.
[19] Rasmus Pagh and Flemming Friche Rodler. Cuckoo hashing. Journal of Algorithms, 51(2):122–144, 2004.

A Parameterization of DySECT

In this section, we show additional measurements using the experiment described in Section 6.2. We insert 20 000 000 elements into a previously empty table – using a dynamic table size. We show the measurements for different parameterizations of DySECT and cuckoo tables. First we show different combinations of B and H; here we also show their find performance. Then we show different displacement techniques (using B = 8 and H = 3). For random walk displacements we test an optimistic and a pessimistic variant. The measurements show that the parameterization chosen in Section 4.5 has the best maximum fill bounds and the best insert performance. Both 4/3 and 8/2 might achieve better performance on sparser tables with find-heavy workloads.

Figure 10: Insertions with different B/H parameterizations (left) and different displacement techniques using 8/3 (right). For each displacement, we perform up to 1024 probes for an empty bucket. Measurements end when errors occur.
Figure 11: Finds with different B/H parameterizations (successful left, unsuccessful right). Solid lines represent DySECT. Dashed lines represent bucket cuckoo hashing.
arXiv:1601.05380v1, 20 Jan 2016

THE NON-ABELIAN TENSOR SQUARE OF RESIDUALLY FINITE GROUPS

R. BASTOS AND N. R. ROCCO

Abstract. Let m, n be positive integers and p a prime. We denote by ν(G) an extension of the non-abelian tensor square G ⊗ G by G × G. We prove that if G is a residually finite group satisfying some non-trivial identity f ≡ 1 and for every x, y ∈ G there exists a p-power q = q(x, y) such that [x, y^ϕ]^q = 1, then the derived subgroup ν(G)′ is locally finite (Theorem A). Moreover, we show that if G is a residually finite group in which for every x, y ∈ G there exists a p-power q = q(x, y) dividing p^m such that [x, y^ϕ]^q is left n-Engel, then the non-abelian tensor square G ⊗ G is locally virtually nilpotent (Theorem B).

2010 Mathematics Subject Classification: 20F45, 20E26, 20F40. Key words and phrases: residually finite groups; Engel elements.

1. Introduction

The non-abelian tensor square G ⊗ G of a group G, as introduced by Brown and Loday [BL84, BL87], is defined to be the group generated by all symbols g ⊗ h, g, h ∈ G, subject to the relations

g g1 ⊗ h = (g^{g1} ⊗ h^{g1})(g1 ⊗ h)   and   g ⊗ h h1 = (g ⊗ h1)(g^{h1} ⊗ h^{h1})

for all g, g1, h, h1 ∈ G. This is a particular case of the non-abelian tensor product G ⊗ H of groups G and H, defined under the assumption that G and H act on one another and on themselves by conjugation. We observe that the defining relations of the tensor square can be viewed as abstractions of commutator relations; thus in [Roc91] the following related construction is considered. Let G and H be groups and ϕ : H → H^ϕ an isomorphism (H^ϕ is an isomorphic copy of H, where h ↦ h^ϕ for all h ∈ H). Define the group ν(G) by

ν(G) := ⟨ G, G^ϕ | [g1, g2^ϕ]^{g3} = [g1^{g3}, (g2^{g3})^ϕ] = [g1, g2^ϕ]^{g3^ϕ}, g_i ∈ G ⟩.

The motivation for studying ν(G) is the commutator connection:

Proposition 1.1. ([Roc91, Proposition 2.6]) The map Φ : G ⊗ G → [G, G^ϕ], defined by g ⊗ h ↦ [g, h^ϕ] for all g, h ∈ G, is an isomorphism.

From now on we identify the tensor square G ⊗ G with the subgroup [G, G^ϕ] of ν(G) (see for instance [NR12] for more details). It is a natural task to study structural aspects of ν(G) and G ⊗ G for different classes of infinite groups G. Moravec [Mor08] proves that if G is locally finite then so is G ⊗ G (and, consequently, also ν(G)). In the article [BR] the authors establish some structural results concerning ν(G) when G is a finite-by-nilpotent group. In the present paper we are concerned with the non-abelian tensor square of residually finite groups. According to the solution of the Restricted Burnside Problem (Zelmanov, [Zel91a, Zel91b]), every residually finite group of finite exponent is locally finite. Another interesting result in this context, due to Shumyatsky [Shu05], states that if G is a residually finite group satisfying a non-trivial identity and generated by a normal commutator-closed set of p-elements, then G is locally finite. A subset X of a group G is called commutator-closed if [x, y] ∈ X whenever x, y ∈ X. In a certain way, our results can be viewed as generalizations of some of the above results.

Theorem A. Let p be a prime, m a positive integer, and G a residually finite group satisfying some non-trivial identity f ≡ 1. Suppose that for every x, y ∈ G there exists a positive integer q = q(x, y) dividing p^m such that the tensor [x, y^ϕ]^q = 1. Then the derived subgroup ν(G)′ is locally finite.

Let G be a group. For elements x, y of G we define [x, _1 y] = [x, y] and [x, _{i+1} y] = [[x, _i y], y] for i ≥ 1, where [x, y] = x^{-1}y^{-1}xy.
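For concreteness, unrolling this recursive definition for the first few values of i gives (a worked illustration of the definition just stated, not part of the original text):

```latex
[x,{}_{1}y] = [x,y] = x^{-1}y^{-1}xy, \qquad
[x,{}_{2}y] = [[x,y],y], \qquad
[x,{}_{3}y] = [[[x,y],y],y].
```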
An element y ∈ G is called left n-Engel if for any x ∈ G we have [x, _n y] = 1. The group G is called a left n-Engel group if [x, _n y] = 1 for all x, y ∈ G. Another result that was deduced following the solution of the Restricted Burnside Problem is that any residually finite n-Engel group is locally nilpotent (J. S. Wilson, [Wil91]). Later, Shumyatsky proved that if G is a residually finite group in which every commutator is left n-Engel in G, then the derived subgroup G′ is locally nilpotent [Shu99b, Shu00]. In the present paper we establish the following related result.

Theorem B. Let m, n be positive integers and p a prime. Suppose that G is a residually finite group in which for every x, y ∈ G there exists a positive integer q = q(x, y) dividing p^m such that the element [x, y^ϕ]^q is left n-Engel in ν(G). Then the non-abelian tensor square G ⊗ G is locally virtually nilpotent.

A natural question arising in the context of Theorem B is whether the theorem remains valid when q is allowed to be an arbitrary positive integer, rather than a p-power (see Proposition 4.4 in Section 4 for details).

The paper is organized as follows. Section 2 presents terminology and some preparatory results that are later used in the proofs of our main results. In Section 3 we describe some important ingredients of what is often called Lie methods in group theory. These are essential in the proof of Theorem B. Section 4 contains the proofs of the main theorems.

2. Preliminary results

The following basic properties are consequences of the defining relations of ν(G) and the commutator rules (see [Roc91, Section 2] for more details).

Lemma 2.1. The following relations hold in ν(G), for all g, h, x, y ∈ G.
(i) [g, h^ϕ]^{[x, y^ϕ]} = [g, h^ϕ]^{[x, y]};
(ii) [g, h^ϕ, x^ϕ] = [g, h, x^ϕ] = [g, h^ϕ, x] = [g^ϕ, h, x^ϕ] = [g^ϕ, h^ϕ, x] = [g^ϕ, h, x];
(iii) If h ∈ G′ (or if g ∈ G′) then [g, h^ϕ][h, g^ϕ] = 1;
(iv) [g, [h, x]^ϕ] = [[h, x], g^ϕ]^{-1};
(v) [[g, h^ϕ], [x, y^ϕ]] = [[g, h], [x, y]^ϕ].

Lemma 2.2. Let G be a group. Then X = {[a, b^ϕ] | a, b ∈ G} is a normal commutator-closed subset of ν(G).

Proof. By definition, [a, b^ϕ]^{y^ϕ} = [a, b^ϕ]^y = [a^y, (b^y)^ϕ] for every a, b, y ∈ G. It follows that X is a normal subset of ν(G). Now, it remains to prove that X is a commutator-closed subset of ν(G). Choose arbitrary elements a, b, c, d ∈ G and consider the elements [a, b^ϕ], [c, d^ϕ] ∈ X. By Lemma 2.1 (v), [[a, b^ϕ], [c, d^ϕ]] = [[a, b], [c, d]^ϕ] ∈ X. The result follows. □

The epimorphism ρ : ν(G) → G, given by g ↦ g, h^ϕ ↦ h, induces the derived map ρ′ : [G, G^ϕ] → G′, [g, h^ϕ] ↦ [g, h] for all g, h ∈ G. In the notation of [Roc94, Section 2], let µ(G) denote the kernel of ρ′. In particular, [G, G^ϕ]/µ(G) ≅ G′.

The next lemma is a particular case of Theorem 3.3 in [Roc91].

Lemma 2.3. Let G be a group. Then the derived subgroup ν(G)′ = ([G, G^ϕ] · G′) · (G′)^ϕ, where "·" denotes an internal semi-direct product.

Lemma 2.4. Let m, n be positive integers and x, y elements in a group G. If [x, y^ϕ]^m is a left n-Engel element in ν(G), then the power [x, y]^m is left n-Engel in G.

Proof. By assumption, [z, _n [x, y^ϕ]^m] = 1 for every z ∈ G. It follows that 1 = ρ([z, _n [x, y^ϕ]^m]) = [z, _n [x, y]^m] for every z ∈ G, i.e., [x, y]^m is left n-Engel in G, and the lemma follows. □

The remainder of this section will be devoted to describing certain finiteness conditions for residually finite and locally graded groups.
Recall that a group is locally graded if every non-trivial finitely generated subgroup has a proper subgroup of finite index. Interesting classes of groups (e.g. locally finite groups, locally nilpotent groups, residually finite groups) are locally graded. We need the following result, due to Shumyatsky [Shu05].

Theorem 2.5. Let G be a residually finite group satisfying some non-trivial identity f ≡ 1. Suppose G is generated by a normal commutator-closed set X of p-elements. Then G is locally finite.

Next, we extend this result to the class of locally graded groups.

Lemma 2.6. Let p be a prime. Let G be a locally graded group satisfying some non-trivial identity f ≡ 1. Suppose that G is generated by finitely many elements of finite order and for every x, y ∈ G there exists a p-power q = q(x, y) such that [x, y]^q = 1. Then G is finite.

Proof. Let R be the finite residual of G, i.e., the intersection of all subgroups of finite index in G. If R = 1, then G is residually finite. Since the set of all commutators is a normal commutator-closed subset of p-elements, G′ is locally finite (Theorem 2.5). Moreover, G/G′ is generated by finitely many elements of finite order. It follows that G/G′ is finite and so G′ is finitely generated. Consequently, G is finite. So we can assume that R ≠ 1. By the same argument, G/R is finite and thus R is finitely generated. As R is locally graded, R contains a proper subgroup of finite index in G, which gives a contradiction. □

It is easy to see that a quotient of a residually finite group need not be residually finite (see for instance [Rob96, 6.1.9]). The same holds for the class of locally graded groups. However, the next result gives a sufficient condition for a quotient to be locally graded.

Lemma 2.7. (Longobardi, Maj, Smith, [LMS95]) Let G be a locally graded group and N a normal locally nilpotent subgroup of G. Then G/N is locally graded.

3. On Lie Algebras Associated with Groups

Let L be a Lie algebra over a field K. We use the left normed notation: thus if l1, l2, ..., ln are elements of L, then [l1, l2, ..., ln] = [...[[l1, l2], l3], ..., ln]. We recall that an element a ∈ L is called ad-nilpotent if there exists a positive integer n such that [x, _n a] = 0 for all x ∈ L. When n is the least integer with the above property we say that a is ad-nilpotent of index n. Let X ⊆ L be any subset of L. By a commutator of elements in X we mean any element of L that can be obtained from elements of X by means of repeated operations of commutation with an arbitrary system of brackets including the elements of X. Denote by F the free Lie algebra over K on countably many free generators x1, x2, .... Let f = f(x1, x2, ..., xn) be a non-zero element of F. The algebra L is said to satisfy the identity f = 0 if f(l1, l2, ..., ln) = 0 for any l1, l2, ..., ln ∈ L. In this case we say that L is PI. Now, we recall an important theorem of Zelmanov [Zel90, Theorem 3] that has many applications in group theory.

Theorem 3.1. Let L be a Lie algebra generated by l1, l2, ..., lm. Assume that L is PI and that each commutator in the generators is ad-nilpotent. Then L is nilpotent.

Let G be a group and p a prime. In what follows,

D_i = D_i(G) = ∏_{j·p^k ≥ i} (γ_j(G))^{p^k}

denotes the i-th dimension subgroup of G in characteristic p. These subgroups form a central series of G known as the Zassenhaus-Jennings-Lazard series (this can be found in Shumyatsky [Shu00, Section 2]). Set L(G) = ⊕_i D_i/D_{i+1}.
Then L(G) can naturally be viewed as a Lie algebra over the field F_p with p elements. The subalgebra of L(G) generated by D_1/D_2 will be denoted by L_p(G). The nilpotency of L_p(G) has a strong influence on the structure of a finitely generated group G. The following result is due to Lazard [Laz65].

Theorem 3.2. Let G be a finitely generated pro-p group. If L_p(G) is nilpotent, then G is p-adic analytic.

Let x ∈ G, and let i = i(x) be the largest integer such that x ∈ D_i. We denote by x̃ the element xD_{i+1} ∈ L(G). Now, we can state one condition for x̃ to be ad-nilpotent.

Lemma 3.3. (Lazard, [Laz54, page 131]) For any x ∈ G we have (ad x̃)^q = ad(\widetilde{x^q}).

Remark 3.4. We note that q in Lemma 3.3 does not need to be a p-power. In fact, it is easy to see that if p^s is the maximal p-power dividing q, then x̃ is ad-nilpotent of index at most p^s.

The following result is an immediate corollary of [WZ92, Theorem 1].

Lemma 3.5. Let G be any group satisfying a group-law. Then L(G) is PI.

For a deeper discussion of applications of Lie methods to group theory we refer the reader to [Shu00].

4. Proof of the main results

By a well-known theorem of Schmidt [Rob96, 14.3.1], the class of locally finite groups is closed with respect to extension. By Lemma 2.3, ν(G)′ = ([G, G^ϕ] · G′) · (G′)^ϕ. From this and Proposition 1.1 we conclude that the non-abelian tensor square G ⊗ G is locally finite if and only if ν(G)′ is locally finite. The next result will be needed in the proof of Theorem A.

Proposition 4.1. Let G be a group in which G′ is locally finite. The following properties are equivalent.
(a) [G, G^ϕ] is locally finite;
(b) ν(G)′ is locally finite;
(c) For every x, y ∈ G there exists a positive integer m = m(x, y) such that [x, y^ϕ]^m = 1.

Proof. (a) ⇒ (b): It is sufficient to note that ν(G)′ = ([G, G^ϕ] · G′) · (G′)^ϕ (Lemma 2.3). (b) ⇒ (c): Since ν(G)′ is locally finite, it follows that every element in ν(G)′ has finite order. In particular, for every x, y ∈ G the element [x, y^ϕ] has finite order. (c) ⇒ (a): Let us first prove that [G, G^ϕ] is locally finite. Let W be a finitely generated subgroup of [G, G^ϕ]. Since the factor group [G, G^ϕ]/µ(G) is isomorphic to G′, it follows that W is central-by-finite. By Schur's Lemma [Rob96, 10.1.4], the derived subgroup W′ is finite. Since W/W′ is an abelian group generated by finitely many elements of finite order, it follows that W/W′ is also finite. Thus [G, G^ϕ] is locally finite. Now, Lemma 2.3 shows that ν(G)′ = ([G, G^ϕ] · G′) · (G′)^ϕ. Therefore ν(G)′ is locally finite, by [Rob96, 14.3.1]. □

We are now in a position to prove Theorem A.

Theorem A. Let p be a prime and G a residually finite group satisfying some non-trivial identity f ≡ 1. Suppose that for every x, y ∈ G there exists a p-power q = q(x, y) such that [x, y^ϕ]^q = 1. Then the derived subgroup ν(G)′ is locally finite.

Proof. The proof is completed by showing that G′ is locally finite (Proposition 4.1). Set X = {[a, b^ϕ] | a, b ∈ G}. By Lemma 2.2, X is a normal commutator-closed subset of p-elements in ν(G). Since [G, G^ϕ]/µ(G) ≅ G′, we conclude that the derived subgroup G′ is residually finite satisfying the identity f ≡ 1. In the same manner we can see that for every x, y there exists a p-power q = q(x, y) such that [x, y]^q = 1. Theorem 2.5 now shows that G′ is locally finite. The proof is complete. □

Remark 4.2. For more details concerning residually finite groups in which the derived subgroup is locally finite see [Shu99a, Shu05].
Now we will deal with Theorem B: Let m, n be positive integers and p a prime. Suppose that G is a residually finite group in which for every x, y ∈ G there exists a p-power q = q(x, y) dividing p^m such that [x, y^ϕ]^q is left n-Engel in ν(G). Then the non-abelian tensor square G ⊗ G is locally virtually nilpotent.

We denote by N the class of all finite nilpotent groups. The following result is a straightforward corollary of [Wil91, Lemma 2.1].

Lemma 4.3. Let G be a finitely generated residually-N group. For each prime p, let R_p be the intersection of all normal subgroups of G of finite p-power index. If G/R_p is nilpotent for each p, then G is nilpotent.

The next result will be helpful in the proof of Theorem B.

Proposition 4.4. Let m, n be positive integers and p a prime. Suppose that G is a residually finite group in which for every x, y ∈ G there exists a p-power q = q(x, y) dividing p^m such that [x, y]^q is left n-Engel. Then (G′)^{p^m} is locally nilpotent. Moreover, the derived subgroup G′ is locally virtually nilpotent.

Proof. Let K be the derived subgroup of G. Choose an arbitrary commutator x ∈ G and let q = q(x) be a positive integer dividing p^m such that x^q is left n-Engel. It suffices to prove that the normal closure of x^q in G, ⟨(x^q)^h | h ∈ G⟩, is locally nilpotent; in particular, x^q ∈ HP(G). Let b_1, ..., b_t be finitely many elements in G. Let h_i = (x^q)^{b_i}, i = 1, ..., t, and H = ⟨h_1, ..., h_t⟩. We only need to show that H is soluble (Gruenberg's Theorem [Rob96, 12.3.3]). As a consequence of Lemma 4.3, we can assume that H is residually-p for some prime p. Let L = L_p(H) be the Lie algebra associated with the Zassenhaus-Jennings-Lazard series H = D_1 ≥ D_2 ≥ ··· of H. Then L is generated by h̃_i = h_i D_2, i = 1, 2, ..., t. Let h̃ be any Lie commutator in the h̃_i and h the group commutator in the h_i having the same system of brackets as h̃. Since for any group commutator h in h_1, ..., h_t there exists a positive integer q dividing p^m such that h^q is left n-Engel, Lemma 3.3 shows that any Lie commutator in h̃_1, ..., h̃_t is ad-nilpotent. On the other hand, since for every commutator x = [a_1, b_1] there exists a positive integer q = q(x) dividing p^m such that x^q is left n-Engel, G satisfies the identity

f = [z, _n [x_1, y_1], _n [x_1, y_1]^p, _n [x_1, y_1]^{p^2}, _n [x_1, y_1]^{p^3}, ..., _n [x_1, y_1]^{p^m}] ≡ 1

and therefore, by Lemma 3.5, L satisfies some non-trivial polynomial identity. Now Zelmanov's Theorem 3.1 implies that L is nilpotent. Let Ĥ denote the pro-p completion of H. Then L_p(Ĥ) = L is nilpotent and Ĥ is a p-adic analytic group by Theorem 3.2. Clearly H cannot have a free subgroup of rank 2 and so, by Tits' Alternative [Tit72], H is virtually soluble. As H is residually-p we have that H is soluble.

Choose arbitrarily finitely many commutators [a_1, b_1], ..., [a_t, b_t] and let M = ⟨[a_1, b_1], ..., [a_t, b_t]⟩. Since MK^{p^m} is residually finite and K^{p^m} is locally nilpotent, we conclude that (MK^{p^m})/K^{p^m} is also locally graded (Lemma 2.7). Now, looking at the quotient (MK^{p^m})/K^{p^m}, the claim is immediate from Lemma 2.6. As M/(M ∩ K^{p^m}) is finite we have that M ∩ K^{p^m} is finitely generated. In particular, M ∩ K^{p^m} is nilpotent and M is virtually nilpotent. This completes the proof. □

Remark 4.5. The above proposition is no longer valid if the assumption of residual finiteness of G is dropped. In [DK99] G. S. Deryabina and P. A.
Kozhevnikov showed that there exists an integer N > 1 such that for every odd number n > N there is a group G with commutator subgroup G′ not locally finite and satisfying the identity f = [x, y]^n ≡ 1. In particular, G′ cannot be locally virtually nilpotent. For more details concerning groups in which certain powers are left Engel elements see [Bas15, BS15, BSTT13, STT16].

The proof of Theorem B is now easy.

Proof of Theorem B. Let M be a finitely generated subgroup of [G, G^ϕ]. Clearly, there exist finitely many elements a_1, ..., a_s, b_1, ..., b_s ∈ G such that M ≤ ⟨[a_i, b_i^ϕ] | i = 1, ..., s⟩ = H. It suffices to prove that H is virtually nilpotent. Since for every a, b ∈ G there exists a p-power q = q(a, b) dividing p^m such that [a, b^ϕ]^q is left n-Engel, we deduce that the power [a, b]^q is left n-Engel (Lemma 2.4). That G′ is locally virtually nilpotent follows from Proposition 4.4. Therefore, since µ(G) is the kernel of the derived map ρ′, it follows that H/(H ∩ µ(G)) is virtually nilpotent. As µ(G) is a central subgroup of ν(G) we have that H is virtually nilpotent, which proves the theorem. □

We conclude this paper by formulating an open problem related to the results described here.

Problem 1. Let m, n be positive integers. Suppose that G is a residually finite group in which for every x, y ∈ G there exists a positive integer q = q(x, y) dividing m such that [x, y^ϕ]^q is left n-Engel in ν(G). Is it true that G ⊗ G is locally virtually nilpotent?

References

[Bas15] R. Bastos, On residually finite groups with Engel-like conditions, to appear in Comm. Alg., preprint available at arXiv:1505.04468.
[BS15] R. Bastos and P. Shumyatsky, On profinite groups with Engel-like conditions, J. Algebra 427 (2015), 215–225.
[BR] R. Bastos and N. R. Rocco, Non-abelian tensor square of finite-by-nilpotent groups, preprint available at arXiv:1509.05114.
[BSTT13] R. Bastos, P. Shumyatsky, A. Tortora and M. Tota, On groups admitting a word whose values are Engel, Int. J. Algebra Comput. 23 (2013), no. 1, 81–89.
[BL84] R. Brown and J.-L. Loday, Excision homotopique en basse dimension, C. R. Acad. Sci. Paris Sér. I 298 (1984), no. 15, 353–356.
[BL87] R. Brown and J.-L. Loday, Van Kampen theorems for diagrams of spaces, Topology 26 (1987), 311–335.
[DK99] G. S. Deryabina and P. A. Kozhevnikov, The derived subgroup of a group with commutators of bounded order can be non-periodic, Comm. Algebra 27 (1999), 4525–4530.
[Laz54] M. Lazard, Sur les groupes nilpotents et les anneaux de Lie, Ann. Sci. École Norm. Sup. 71 (1954), 101–190.
[Laz65] M. Lazard, Groupes analytiques p-adiques, IHES Publ. Math. 26 (1965), 389–603.
[LMS95] P. Longobardi, M. Maj and H. Smith, A note on locally graded groups, Rend. Sem. Mat. Univ. Padova 94 (1995), 275–277.
[Mor08] P. Moravec, The exponents of nonabelian tensor products of groups, J. Pure Appl. Algebra 212 (2008), 1840–1848.
[NR12] I. N. Nakaoka and N. R. Rocco, A survey of non-abelian tensor products of groups and related constructions, Bol. Soc. Paran. Mat. 30 (2012), 77–89.
[Rob96] D. J. S. Robinson, A course in the theory of groups, 2nd edition, Springer-Verlag, New York, 1996.
[Roc91] N. R. Rocco, On a construction related to the non-abelian tensor square of a group, Bol. Soc. Brasil Mat. 22 (1991), 63–79.
[Roc94] N. R. Rocco, A presentation for a crossed embedding of finite solvable groups, Comm. Algebra 22 (1994), 1975–1998.
[Shu99a] P. Shumyatsky, Groups with commutators of bounded order, Proc. Amer. Math. Soc. 127 (1999), 2583–2586.
[Shu99b] P. Shumyatsky, On residually finite groups in which commutators are Engel, Comm. Algebra 27 (1999), 1937–1940.
[Shu00] P. Shumyatsky, Applications of Lie ring methods to group theory, in Nonassociative algebra and its applications, eds. R. Costa, A. Grishkov, H. Guzzo Jr. and L. A. Peresi, Lecture Notes in Pure and Appl. Math., Vol. 211 (Dekker, New York, 2000), pp. 373–395.
[Shu05] P. Shumyatsky, Elements of prime power order in residually finite groups, Int. J. Algebra Comput. 15 (2005), 571–576.
[STT16] P. Shumyatsky, A. Tortora and M. Tota, On locally graded groups with a word whose values are Engel, to appear in Proc. Edinb. Math. Soc., preprint available at arXiv:1305.3045v2.
[Tit72] J. Tits, Free subgroups in linear groups, J. Algebra 20 (1972), 250–270.
[Wil91] J. S. Wilson, Two-generator conditions for residually finite groups, Bull. London Math. Soc. 23 (1991), 239–248.
[WZ92] J. S. Wilson and E. I. Zelmanov, Identities for Lie algebras of pro-p groups, J. Pure Appl. Algebra 81 (1992), 103–109.
[Zel90] E. I. Zelmanov, On the restricted Burnside problem, in Proceedings of the International Congress of Mathematicians (1990), pp. 395–402.
[Zel91a] E. Zel'manov, The solution of the restricted Burnside problem for groups of odd exponent, Math. USSR Izv. 36 (1991), 41–60.
[Zel91b] E. Zel'manov, The solution of the restricted Burnside problem for 2-groups, Math. Sb. 182 (1991), 568–592.

Departamento de Matemática, Universidade de Brasília, Brasília-DF, 70910-900, Brazil
E-mail address: [email protected]

Departamento de Matemática, Universidade de Brasília, Brasília-DF, 70910-900, Brazil
E-mail address: [email protected]
DYNAMIC ASSIGNMENT IN MICROSIMULATIONS OF PEDESTRIANS

Authors: Tobias Kretz1, Karsten Lehmann1,2, Ingmar Hofsäß1, Axel Leonhardt1
Affiliation1: PTV Group
Address1: Haid-und-Neu-Straße 15, D-76131 Karlsruhe, Germany
Affiliation2: init AG
Address2: Käppelestraße 4-6, D-76131 Karlsruhe, Germany
Email (corresponding author): [email protected]

ABSTRACT

A generic method for dynamic assignment used with microsimulation of pedestrian dynamics is introduced. As pedestrians – unlike vehicles – do not move on a network but on areas, they can in principle choose among an infinite number of routes. To apply assignment algorithms one has to select for each OD pair a finite (realistically a small) number of relevant representatives from these routes. This geometric task is the main focus of this contribution. The main task is to find for an OD pair the relevant routes to be used with common assignment methods. The method is demonstrated for one single OD pair and exemplified with an example.

INTRODUCTION

Finding and evaluating the user equilibrium network load for road networks (traffic assignment) is one of the core tasks of traffic planning. Various algorithms to solve this problem have been developed over the years (Wardrop, 1952) (Beckmann, McGuire, & Winsten, 1956) (LeBlanc, Morlok, & Pierskalla, 1975) (Bar-Gera, 2002) (Gentile & Nökel, 2009). For readers who are not familiar with the concept of iterated assignment, a short summary: the basic idea is to simulate (or compute) a scenario multiple times, always computing the assignment on the routes based on the results of one or more or all previous simulations (iteration steps). The aim is to arrive at a user-equilibrium route assignment, which is – Wardrop's principle – an assignment in which no driver (or pedestrian) can achieve a smaller travel time by changing the route. This implies that the travel times of all used routes of an origin-destination pair are equal. The efficient computation of the assignment from previous simulation runs, such that the equilibrium is reached with as few iterations as possible, is a demanding task, and progress has been made throughout recent decades since Wardrop stated his principle.

The main hindrance for the application of said algorithms to pedestrian traffic is that all these algorithms are formulated for application with discrete networks, i.e. they compute flows on a node-edge graph. Pedestrians, on the contrary, can and do move continuously in two spatial dimensions. Their walking area can have holes (obstacles), but this does not change the fact that – even when loops are excluded – there are infinitely many possible paths for a pedestrian to walk from an origin to a destination.

There are two ways to face the resulting challenge: either one develops an entirely different assignment algorithm for pedestrian traffic – this was done for example in (Hoogendoorn & Bovy, 2004a) (Hoogendoorn & Bovy, 2004b) (van Wageningen-Kessels, Daamen, & Hoogendoorn, 2014) – or one extracts from the pedestrian walking area in a well-grounded, suitable way a graph structure which can be used with existing assignment methods. This is what is attempted in this contribution. In the remainder, the conditions and restrictions for the application will be discussed and the task that is attempted to be accomplished in this contribution will be clearly defined. This is followed by the section which defines the main ideas of the method. Finally an example of application is given.
PRELUDE: NAVIGATING WITH DISTANCE MAPS

Models of pedestrian dynamics usually rest on the assumption that pedestrians predominantly move into the direction of the (spatially) shortest path toward their destination. This basic direction is then modified by other effects, one of which has to be the influence of other pedestrians. For the computation of the direction of the shortest path there are a number of methods, one of which is distance maps. The method of distance maps or static potentials has been particularly popular for modeling pedestrian dynamics, in particular in use with cellular automata models (Burstedde, Klauck, Schadschneider, & Zittartz, 2001) (Kirchner & Schadschneider, 2002) (Klüpfel, 2003) (Nishinari, Kirchner, Namazi, & Schadschneider, 2004) (Kretz & Schreckenberg, 2009) (Schadschneider, Klüpfel, Kretz, Rogsch, & Seyfried, 2009). The reason for this is probably that the data structure of the distance map fits well to the spatial representation of cellular automata models: a regular, usually rectangular grid. FIGURE 1 visualizes an example of a distance map.

FIGURE 1: The left figure shows a distance map with the brightness of a spot directly proportional to the value of the grid point, i.e. the distance of that spot to the destination area. The destination area is shown as a green-black checkerboard, the obstacle is marked with diagonal red lines on white ground. In the central figure the brightness is computed modulo some distance value d. The figure on the right side shows an excerpt of the geometry: shown is the area around the upper left corner of the obstacle with the field of negative gradients (white) of the distance field.

It is a general mathematical property of gradients that they are orthogonal to lines of equal value (equi-potential lines) of the field from which they are derived. Therefore the negative gradients give the direction which for a given step distance most reduces the distance to the destination. Following the negative gradients, pedestrians are also guided around the obstacle. In cellular automata models the distance maps are usually used such that pedestrians move with higher probability to a cell which is closer to the destination. In other modeling approaches which do not directly operate on space, but rather on velocity or acceleration, as for example the force-based models do, pedestrians follow the negative gradient of the distance map at their current position. "Follow" here means that this direction is used as preferred or desired or base direction and may be subject to modifications as a consequence of the influence of, for example, other pedestrians or limited acceleration ability. A standard method for the computation of the distance maps is the Fast Marching Method (Kimmel & Sethian, 1998). For other methods see (Kretz, Bönisch, & Vortisch, Comparison of Various Methods for the Calculation of the Distance Potential Field, 2010), (Schultz, Kretz, & Fricke, 2010).
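As a concrete illustration of this idea, the following Python sketch computes a distance map on a grid with obstacles and derives the preferred walking direction from it. It is our own simplified stand-in: Dijkstra's algorithm on an 8-neighbour grid replaces the Fast Marching Method, and a discrete steepest-descent step replaces the continuous negative gradient.

```python
import heapq, math

def distance_map(walkable, destination_cells):
    """Distance of every walkable grid cell to the destination area.

    walkable: 2D list of bools (False = obstacle).
    destination_cells: iterable of (y, x) cells of the destination area.
    Dijkstra with 8-neighbour steps (diagonal cost sqrt(2)) stands in for
    the Fast Marching Method.
    """
    h, w = len(walkable), len(walkable[0])
    dist = [[math.inf] * w for _ in range(h)]
    heap = []
    for (y, x) in destination_cells:
        dist[y][x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and walkable[ny][nx]:
                    nd = d + math.hypot(dy, dx)
                    if nd < dist[ny][nx]:
                        dist[ny][nx] = nd
                        heapq.heappush(heap, (nd, ny, nx))
    return dist

def preferred_direction(dist, y, x):
    """Discrete analogue of the negative gradient: step towards the
    neighbouring cell with the smallest distance to the destination."""
    best, best_d = (0, 0), dist[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(dist) and 0 <= nx < len(dist[0]) and dist[ny][nx] < best_d:
                best, best_d = (dy, dx), dist[ny][nx]
    return best
```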
PROBLEM ILLUSTRATION

Imagine a walking geometry as shown in FIGURE 2. A pedestrian following the above mentioned principle of walking into the direction of the shortest path would pass the obstacle on the left side (reader's perspective), as shown in the left figure. In this case it is easy to make pedestrians pass the obstacle on either side by introducing on each side of the obstacle an intermediate destination area. With the intermediate areas and the routes which include them, it is possible to route pedestrians locally into the direction of the shortest path, but still make a given fraction of pedestrians detour: first the pedestrians would head toward one of the two intermediate destinations and, as soon as it is reached, proceed to the final destination.

FIGURE 2: Walking area (black), obstacle (diagonal red lines on white ground), origin area at the upper sides (red and black diagonal checkerboard pattern) and destination at the lower ends (green and black checkerboard pattern). Left figure: The yellow line shows the path a pedestrian (set into the simulation at some arbitrary coordinate on the origin area) would follow if a distance map is used to determine his basic direction. The route data simply would be some information (e.g. an area ID) identifying the destination area. Right figure: Compared to the left side figure, the black-blue areas mark intermediate destination areas. There are two routes, one leading over each of the intermediate destination areas. (The intermediate destination along the shorter (left) path is not necessary; it has been added for illustration.)

In FIGURE 2 the intermediate destination area on the left side was not necessary. Having a route directly leading from the origin to the destination area would have the same effect. However, it is important to note that the lower intermediate destination area does not change the path of the pedestrians on that route compared to the case without any intermediate destination area. This is not in general the case. In general it is difficult to shape the intermediate destinations such that the path on the principally shortest route is not distorted compared to the case without intermediate destination. FIGURE 3 shows such a case. It is concluded that the intermediate destination areas cannot in general be of trivial shape (rectangles).

FIGURE 3: Example with necessarily non-trivial geometry for the intermediate destination areas. Walking area (black), obstacle (diagonal red stripes on white ground), two origin areas to the left (red-black diagonal checkerboard), destination area to the right (green-black checkerboard), and shortest paths (yellow and orange). Compare FIGURE 4.

The basic idea is now that the intermediate destination areas need to be shaped along the equi-distance (equi-potential) lines of the destination area distance map (static potential). This is shown in FIGURE 5.

FIGURE 4: For the example of FIGURE 3 simple rectangular areas are used as intermediate destination areas (light and dark blue checkerboard pattern). With none of the three variants are the shortest paths as shown in FIGURE 3 reproduced.

FIGURE 5: The left side shows the distance map of the destination area. Brighter pixels show larger distance. However, at about half the maximum distance the brightness is set back to zero to better display one distinct equi-distance line. The right side shows the geometry with two intermediate destination areas added which are shaped along that particular equi-distance line with their left side. The exact shape of the right side is not relevant.
With these intermediate destination areas the shortest paths of FIGURE 3 are reproduced necessarily and exactly, up to the limited precision in the numerical definition of the intermediate destination areas.

METHOD FORMULATION

In this section a method will be formulated that computes the geometry of intermediate destination areas for routes which represent the main routing alternatives for pedestrians. A note in advance: the method will be proposed here only insofar as it is necessary for the concluding example of application. The complete method includes two more elements to deal with certain more difficult geometries. As a consequence of the strictly limited extent of this publication we have omitted these parts. The publication of the full method is currently in preparation (Kretz, Lehmann, & Hofsäss, 2014). The method will be defined and explained visually and not in a mathematical language.

The first step is to compute a distance map for the destination (respectively for all destinations; this option is implied in the following) for which alternative routes ought to be computed. Next the modulo variant of this potential, as displayed in the middle of FIGURE 1, is computed. This is not only a useful visualization to see the properties of the distance map clearly, but its structure is also an important element of the method. FIGURE 6 shows for a simple example geometry the same potential for various modulo values d. FIGURE 7 shows a modification of FIGURE 6. The gradual color change from black to white has been omitted. Instead, one range from black to white is shown in one and the same color and the next range in a different one. In this way areas which are within a range of distances to destination are grouped.

The coloring of FIGURE 7 (left and center figure) is the key for the further steps in the method. To be precise: all regions which share the property of the magenta colored region have to be identified. These are regions which are unique for their range of distances to destination, but which have two unconnected directly neighboring regions which are closer to the destination than they are. Relevant origin positions of a pedestrian in a simulation of the scenario of FIGURE 7 are the light and dark orange and the magenta regions as well as the light and dark cyan regions which are further away from the destination than the magenta area. If a pedestrian's origin position is on a cyan area which is closer to the destination than the closest orange area, no route alternatives are generated.

FIGURE 6: One potential displayed with three different modulo values d. The ratio of the modulo values of the middle and the left display is 2, the one of the right vs. the left one is 8. The destination is colored as black and green checkerboard, diagonal red lines on white ground depict an obstacle.

FIGURE 7: The destination is shown as green and black checkerboard, an obstacle is colored with diagonal red lines on white ground. Light and dark cyan mark regions which are simply connected and which lie within a range of distances to destination and which have exactly one simply connected area as direct neighbor which is closer to destination. Light and dark orange mark regions where there is more than one (here: exactly two) unconnected (split by an obstacle) region within a range of distances to destination.
The magenta region is simply connected and has more than one (here: exactly two) directly neighboring areas which are closer to the destination than the magenta region itself.

In almost all simulation models of pedestrian dynamics the base direction is set into the direction of the shortest path. This implies that it is determined by the origin position whether a pedestrian passes the (red) obstacle to the right or to the left side. If we want to choose freely whether a pedestrian passes the obstacle to the left or the right side, the magenta region in FIGURE 7 is the critical one. Once a pedestrian has passed it, the shortest path paradigm will take the pedestrian to the destination. Therefore the route choice whether the obstacle is passed to the left or to the right side is modeled by making the two light orange regions immediately neighboring the magenta region intermediate destination areas. Each of the two is assigned to one route, generating a route choice.

Obviously not only the shape of the additional intermediate destination areas depends on the parameter d, but first and foremost whether they exist at all. The right figure of FIGURE 7 shows no additional intermediate destination areas because the value of parameter d has been chosen too large. The choice of the value of parameter d at first might appear to be arbitrary and its existence therefore a major downside of the method. However, simulation models of pedestrian dynamics usually contain elements to deal with obstacles. These elements work the better – i.e. they capture a larger share of the effect of the obstacle – the smaller the obstacle is. Therefore it is not desired to have a litter box create a route alternative; one rather wants such small obstacles to be handled by the operational pedestrian dynamics model, and the value of d should be chosen at least large enough that such small objects do not create route alternatives. Second, for the purpose of computational efficiency and limited computation resources, in large simulation projects it can be desirable not to consider every existing routing alternative, but only the major ones. Then one may set the value of parameter d to even larger values to maintain a feasible simulation project size.

The left figure of FIGURE 7 suggests that there is some arbitrariness in selecting the regions directly neighboring the magenta region as intermediate destination areas. Why not two of the dark orange or the other two light orange regions? The answer can be found by considering pedestrians who have an origin position somewhere on the dark orange or light orange regions. It is desired to be able to also assign such pedestrians a route which leads over an intermediate destination area on the opposite side of the obstacle. If that intermediate destination area were moved toward the destination, some pedestrians would approach the intermediate destination from the back side. This implies an unrealistic trajectory and is therefore not desired. In fact, because any intermediate destination area has a finite depth, there can be a small area where this occurs although the region closest to the magenta region has been made an intermediate destination area. This problematic region is the larger, the larger the value of parameter d is. Realizing this, it is important to realize as well that only the shape of the front side of an intermediate destination is relevant. The depth can be reduced to the point where it is still guaranteed that any pedestrian heading to the intermediate destination does not have to slow down to step on it for at least one time step. Nevertheless, for the remainder of this contribution we will not modify the shape of intermediate destination areas in this way. The unmodified shape simply gives a clearer impression of the idea.
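The identification of the critical ("magenta") regions and their closer neighbours can be illustrated with the following Python sketch. It is our own simplified reading of the step described above, working on a discretized distance map with 4-connectivity and treating "closer" as "exactly one distance band closer"; none of this is taken from the authors' implementation.

```python
import math
from collections import deque

def band_components(dist, d):
    """Group walkable cells into connected components of equal distance band.

    dist: 2D list; dist[y][x] is the distance to the destination, or
    math.inf for obstacle cells. d: band width (the modulo value).
    Returns (component id grid, list of band indices per component).
    """
    h, w = len(dist), len(dist[0])
    comp = [[-1] * w for _ in range(h)]
    bands = []
    for y in range(h):
        for x in range(w):
            if comp[y][x] != -1 or math.isinf(dist[y][x]):
                continue
            cid, band = len(bands), int(dist[y][x] // d)
            bands.append(band)
            comp[y][x] = cid
            queue = deque([(y, x)])
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                    if (0 <= ny < h and 0 <= nx < w and comp[ny][nx] == -1
                            and not math.isinf(dist[ny][nx])
                            and int(dist[ny][nx] // d) == band):
                        comp[ny][nx] = cid
                        queue.append((ny, nx))
    return comp, bands

def critical_regions(dist, d):
    """Map each critical component (the 'magenta' region of FIGURE 7) to the
    set of unconnected, one-band-closer components that touch it; those
    neighbours are the candidates for intermediate destination areas."""
    comp, bands = band_components(dist, d)
    h, w = len(dist), len(dist[0])
    closer = {}
    for y in range(h):
        for x in range(w):
            c = comp[y][x]
            if c == -1:
                continue
            for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                if 0 <= ny < h and 0 <= nx < w and comp[ny][nx] != -1:
                    n = comp[ny][nx]
                    if bands[n] == bands[c] - 1:   # neighbour is one band closer
                        closer.setdefault(c, set()).add(n)
    return {c: ns for c, ns in closer.items() if len(ns) >= 2}
```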
This makes clear that only an additional a posteriori analysis under consideration of the starting area can show which routes could be dropped.

FIGURE 8: Compared to the previous example a second obstacle has been added. Right side figure: Computation of the upstream intermediate destination areas with respect to one of the downstream intermediate destination areas. The region where pedestrians' origin positions do not generate route alternatives is shown brown. It is a virtual obstacle into which, in the further recursions of the algorithm, no potential will spread anymore.

FIGURE 9: Shown here are the edge of the destination area (green), the edges of the obstacles (red), the edge of some origin area (gray), the edges of the downstream intermediate destination areas (cyan and yellow) and the upstream intermediate destination areas (cyan, yellow, magenta). Upstream and downstream intermediate destination areas which follow in a sequence on one route are shown in the same color (cyan or yellow). Furthermore this figure shows the routes in blue. The origin area is marked with a gray dot, the destination area with green dots. All intermediate destination areas are marked with the color that corresponds to the color of the edge of the area where they are attached. The dots only mark the area which is an intermediate destination area. Their position on the area has no impact on the movement of pedestrians in the simulation.

Nested route choices

FIGURE 9 raises the question whether the intermediate destination areas whose edges are colored magenta could be dropped together with their routes. Obviously the two routes do not actually add an alternative path choice. So is it sufficient to always consider just the routing alternative which is currently closest to the destination (in FIGURE 8 marked by the dark orange areas in the lower figure) for further processing and omit any more remote one (the light orange areas in FIGURE 8)? The answer is "no", and the reason for this is that there are cases where route alternatives are nested; or in other words: it cannot necessarily be decided what is "upstream" and what is "downstream". FIGURE 10 demonstrates this with an example. It can – of course – always be decided which of two points is spatially closer to a destination. However, in terms of travel time this is different. One can easily imagine, in the third and fourth figure of FIGURE 10, where there has to be a jamming crowd of pedestrians to make the indicated path the quickest to the destination although it implies and includes a spatial detour.

FIGURE 10: In this example the route choices are nested. The destination is on the right side. The first computation of the potential (upper left figure) brings up two route alternatives, respectively two local maxima of distance to destination. These are manifest by the existence of the two regions marked with magenta in the upper right figure. The lower figures show two different possible trajectories (black; from red to green) of pedestrians where the two magenta regions are passed in different sequence.

EXAMPLE APPLICATION

The geometry of FIGURE 10, but with two additional bottlenecks on the straight connection between origin and destination, is used for a demonstration; see FIGURE 11 for details. With this geometry, these routes and a demand of 5,000 pedestrians per hour, an iterated assignment is carried out using PTV Viswalk (Kretz, Hengst, & Vortisch, 2008) for all operational aspects.
This is by no means required, as the method is very generic and can be combined with various operational models of pedestrian dynamics.

The geometry of this example is comparatively close to a node-link network. Therefore the route choices can relatively easily be determined "by hand" without the help of the proposed algorithm. Therefore the choice of this geometry may appear to be not a good choice. However, if one is able to reproduce the results of the algorithm without any tool, one can confirm that these results make sense. Other examples with a different focus are published elsewhere (Kretz, Lehmann, & Friderich, 2013) (Kretz, Lehmann, & Hofsäss, 2013).

FIGURE 11: Example geometry with calculated intermediate destinations (blue) and routes with their IDs (yellow). Note the bottlenecks along the straight connection between origin (red) and destination (green). From left to right the model extends over 106 m. The corridors are 4 m wide, respectively 2 m at the two bottlenecks. Note that there are two different intermediate destinations on the upper arc because one is computed based on the final destination and one based on the intermediate destination on the lower arc.

As initial distribution we set in one run a heavy load on the routes along the globally shortest connection (routes 1, 3, and 4) and in a second run an equal distribution on all routes. After each iteration step, the routes with the maximum and the minimum average travel time are selected. Based on the number of possible routes and the time difference between the maximum and minimum average travel time, the load (i.e. the probability for a pedestrian to choose a route) on the route with the maximum travel time is decreased and the load on the route with the minimum travel time is increased by the same amount; this amount is computed from these quantities with a free parameter δ, which has been set to δ = 1.0 for this example. To avoid oscillations, a further reduction factor has been applied if from one iteration to the other the route with the smallest travel time has become the route with the longest and the route with the longest travel time has become the route with the smallest travel time. We do not elaborate on these details because the geometric aspects are in the focus of this paper. For the same reason we have chosen this simple assignment algorithm, which only makes use of the immediately previous iteration's results instead of making use of all results.

As a side note: in reality pedestrians do not choose their route exclusively based on travel time. One would rather apply a generalized cost approach. With generalized costs it is possible – at least in principle – to consider other aspects such as ground surface, light conditions, social aspects or landmarks, which presumably lead to observed route choice behavior which obviously is not travel time based, at least not exclusively (Graessle & Kretz, 2011). We stick here with the simpler idea to base route choice purely on travel time, as the geometrical aspects are the focus of this work and a discussion of the details of the generalized costs would clearly exceed the scope and available extent of this contribution. It furthermore needs to be clear that the results of an iterated assignment approach are not connected to local conditions at spots and times when pedestrians in reality can and do decide about a route or a part of a route.
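To summarize the reassignment step described above, the following Python sketch shifts route choice probability from the slowest to the fastest route after each iteration. It is our own illustration: the exact expression for the shifted amount is an assumption of this sketch and not the formula used in the study, and the minimal residual load mirrors the treatment of zero-load routes described below.

```python
def update_route_shares(shares, travel_times, delta=1.0, min_share=0.01):
    """One step of a simple travel-time based reassignment.

    shares:       dict route_id -> probability of choosing that route
    travel_times: dict route_id -> average travel time of the last iteration
    The shifted amount grows with the travel-time gap and shrinks with the
    number of routes; this exact rule is an assumption, not the paper's own.
    """
    slowest = max(travel_times, key=travel_times.get)
    fastest = min(travel_times, key=travel_times.get)
    gap = travel_times[slowest] - travel_times[fastest]
    mean_t = sum(travel_times.values()) / len(travel_times)
    shift = delta * (gap / mean_t) / len(shares)        # assumed update rule
    shift = min(shift, shares[slowest] - min_share)     # keep a minimal load
    if shift > 0:
        shares[slowest] -= shift
        shares[fastest] += shift
    return shares

# Usage: simulate with the current shares, measure average travel times per
# route, call update_route_shares, and repeat until the travel times of all
# routes differ by less than a chosen threshold.
```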
Routes with a value of zero get a small load during simulation to take them into account if they get effective again in later iteration steps. The average travel time of these routes is only taken into account if it is the minimum average travel time. The iteration is continued until either only one route remains, the average travel time on all routes differ only by a predefined value from each other or the number of iterations exceeds a predetermined value, which was set here to half a second. Dynamic Assignment in Microsimulations of Pedestrians FIGURE 12: The upper figure shows the weighted average travel time and the lower figure the weighted standard deviations of travel times evolve in the course of iterations. The blue line shows the evolution when the initial distribution is mainly on the globally shortest path and only one percent is assigned to each of the three detouring routes. The red line shows the evolution when the initial distribution is equal on all routes. FIGURE 13: The upper figure shows the evolution of route choice ratios for each route and for both cases of initial distribution (lines for equal distribution are marked with a cross). The figure on the lower right shows the evolution of travel times for each route over the course of iterations. FIGURE 13 and FIGURE 13: The upper figure shows the evolution of route choice ratios for each route and for both cases of initial distribution (lines for equal distribution are marked with a cross). The figure on the lower right shows the evolution of travel times for each route over the course of iterations.show the results for an iterated assignment in the geometry of FIGURE 11 with a demand of 16,000 pedestrians per hour. Travel times for all pedestrians arriving at the destination in the time interval 300..900 sec were considered. It can be seen that the total average travel time at equilibrium is the same (88.3 sec) Kretz, Lehmann, Hofsäss, Leonhardt 13 independent of the route choice ratios at the first iteration. Similar holds for the route choice ratios of the three detouring routes 2 (9.9% resp. 10.3%), 5 (14.2% resp. 14.4%) and 6 (0%). However the three routes 1, 3, and 4, which all lead along the globally shortest path, end up with largely different route choice ratios for different ratios in the initial iteration. However, in sum the route choice ratios do not depend on the initial ratios. This is shown in FIGURE 14 where the sum of route choice ratios and the average of travel times of route 1, 3, and 4 are shown next to the ones of routes 2, 5, and 6. That the route choice ratios on routes 1, 3 and 4 end up differently for different initial values while the sum is the same is a clear indication that the introduction of the intermediate destination does not have a significant effect on local movement behavior. Because if it had, the travel times on these three routes actually would be different and the route choice ratios for the three routes should be mutually different, but each of them should be identical for different initial ratios. For only if the travel times are identical the route choice ratios cannot be determined uniquely. FIGURE 14: Route choice ratio (top) and travel times (bottom) with routes 1, 3, and 4 averaged or added, respectively. Dynamic Assignment in Microsimulations of Pedestrians CONCLUSIONS We have sketched the core of an algorithm that produces routes with intermediate destination areas to be applied with pedestrian simulation. 
The routes which are computed mark the most relevant routing alternatives in any given walking geometry. Having reduced the infinitely many trajectories by which a pedestrian can move from origin to destination to a small set of routes it is possible to apply iterated assignment methods to compute the user equilibrium of any given scenario (walking geometry and OD matrix). We have demonstrated the capability of the approach with an example. In our opinion it is less questionable that the method can produce good results in arbitrary scenarios in principle but it rather remains to be shown that the method is computationally feasible also for large scale, real life planning work. The proposed method of iterated assignment is not meant to replace earlier one-shot assignment methods (Pettré, Grillon, & Thalmann, 2007) (Kirik, Yurgel'yan, & Krouglov, 2009) (Kretz, Pedestrian Traffic: on the Quickest Path, 2009) (Guy, et al., 2010), (Kretz, et al., 2011), (Kemloh Wagoum, Seyfried, & Holl, 2012). One-shot assignment has to build on current information, be it global or local (only what a pedestrian can perceive in his environment) and instantaneous or with a delay or extrapolation into the near future. Iterated assignment and the idea of a dynamic equilibrium on the other hand rest on the assumption that pedestrians have information of the load of the walking geometry (or road network) throughout the full time of their presence in the system. This includes – for the early phase of their trip – knowledge of a relatively far future. This can only be justified in situations which repeat similarly on a regular base. Obviously both modeling approaches have their justification for shares of all possible planning problems. The question is how to handle scenarios which are a combination of both. One can easily imagine that scenarios are repeating only approximately and therefore one has to imagine the system to be near equilibrium but with large shares of pedestrians actively and continuously searching for better options based on what they perceive currently. The problem is that one-shot and iterated assignment are not easily compatible. Thus this remains a puzzle for the future. However, we can very well imagine that the geometric part of this contribution can be combined with a one-shot assignment approach. LITERATURE Bar-Gera, H. (2002). Origin-based algorithm for the traffic assignment problem. Transportation Science, pp. 398-417. DOI: 10.1287/trsc.36.4.398.549 Beckmann, M., McGuire, C., & Winsten, C. (1956). Studies in the Economics of Transportation. Yale University Press. Burstedde, C., Klauck, K., Schadschneider, A., & Zittartz, J. (2001). Simulation of pedestrian dynamics using a two-dimensional cellular automaton. Physica A, pp. 507-525. DOI: 10.1016/S03784371(01)00141-8 arxiv.org/abs/cond-mat/0102397 Gentile, G., & Nökel, K. (2009). Linear User Cost Equilibrium: the new algorithm for traffic assignment in VISUM. Proceedings of European Transport Conference 2009, p. eprint. Graessle, F., & Kretz, T. (2011). An Example of Complex Pedestrian Route Choice. Pedestrian and Evacuation Dynamics (pp. 767--771). Gaithersburg: Springer. DOI: 10.1007/978-1-4419-97258_71 arxiv.org/abs/1001.4047 Kretz, Lehmann, Hofsäss, Leonhardt 15 Guy, S., Chhugani, J., Curtis, S., Dubey, P., Lin, M., & Manocha, D. (2010). Pledestrians: A least-effort approach to crowd simulation. SIGGRAPH 2010 (pp. 119-128). Eurographics Association. DOI: 10.2312/SCA/SCA10/119-128 Hoogendoorn, S., & Bovy, P. (2004). 
Dynamic user-optimal assignment in continuous time and space. Transportation Research Part B: Methodological, pp. 571-592. DOI: 10.1016/j.trb.2002.12.001 Hoogendoorn, S., & Bovy, P. (2004). Pedestrian route-choice and activity scheduling theory and models. Transportation Research Part B: Methodological, pp. 169-190. DOI: 10.1016/S01912615(03)00007-9 Kemloh Wagoum, A., Seyfried, A., & Holl, S. (2012). Modeling the dynamic route choice of pedestrians to assess the criticality of building evacuation. Advances in Complex Systems, 15(7). DOI: 10.1142/S0219525912500294 arxiv.org/abs/1103.4080 Kimmel, R., & Sethian, J. (1998). Computing geodesic paths on manifolds. PNAS, 95(15), pp. 8431-8435. Kirchner, A., & Schadschneider, A. (2002). Simulation of evacuation processes using a bionics-inspired cellular automaton model for pedestrian dynamics. Physica A, pp. 260-276. DOI: 10.1016/S03784371(02)00857-9 arxiv.org/abs/cond-mat/0203461 Kirik, E., Yurgel'yan, T., & Krouglov, D. (2009). The shortest time and/or the shortest path strategies in a CA FF pedestrian dynamics model. Journal of Siberian Federal University. Mathematics & Physics, 2(3), pp. 271-278. Klüpfel, H. (2003). A Cellular Automaton Model for Crowd Movement and Egress Simulation. Duisburg: Universität Duisburg-Essen. Kretz, T. (2009). Pedestrian Traffic: on the Quickest Path. Journal of Statistical Mechanics: Theory and Experiment, p. P03012. DOI: 10.1088/1742-5468/2009/03/P03012 arxiv.org/abs/0901.0170 Kretz, T., & Schreckenberg, M. (2009). F.A.S.T. - Floor field- and Agent-based Simulation Tool. In E. Chung, & A.-G. Dumont, Transport simulation: Beyond traditional approaches. Lausanne: EPFL press. arxiv.org/abs/physics/0609097 Kretz, T., Bönisch, C., & Vortisch, P. (2010). Comparison of Various Methods for the Calculation of the Distance Potential Field. Pedestrian and Evacuation Dynamics 2008, (pp. 335-346). DOI: 10.1007/978-3-642-04504-2_29 arxiv.org/abs/0804.3868 Kretz, T., Große, A., Hengst, S., Kautzsch, L., Pohlmann, A., & Vortisch, P. (2011). Quickest Paths in Simulations of Pedestrians. Advances in Complex Systems, 14(5), pp. 733-759. DOI: 10.1142/S0219525911003281 arxiv.org/abs/1107.2004 Kretz, T., Hengst, S., & Vortisch, P. (2008). Pedestrian Flow at Bottlenecks-Validation and Calibration of Vissim's Social Force Model of Pedestrian Traffic and its Empirical Foundations. 8th International Symposium on Transport Simulation (p. on CD). Surfer's Paradise, Queensland, Australia: Monash University. arxiv.org/abs/0805.1788 Kretz, T., Lehmann, K., & Friderich, T. (2013). Selected Applications of a Dynamic Assignment Method for Microscopic Simulation of Pedestrians. European Transport Conference. Association for European Transport. abstracts.aetransport.org/paper/index/id/125/confid/1 Kretz, T., Lehmann, K., & Hofsäss, I. (2013). Pedestrian Route Choice by Iterated Equilibrium Search. Traffic and granular Flow. Springer (to be published). Kretz, T., Lehmann, K., & Hofsäss, I. (2014). User Equilibrium Route Assignment for Microscopic Pedestrian Simulation. (in review). arxiv.org/abs/1401.0799 Dynamic Assignment in Microsimulations of Pedestrians LeBlanc, L., Morlok, E., & Pierskalla, W. (1975). An efficient approach to solving the road network equilibrium traffic assignment problem. Transportation Research, pp. 309-318. DOI: 10.1016/0041-1647(75)90030-1 Nishinari, K., Kirchner, A., Namazi, A., & Schadschneider, A. (2004). Extended Floor Field CA Model for Evacuation Dynamics. IEICE Trans. Inf. & Syst., pp. 726-732. 
arxiv.org/abs/cond-mat/0306262
Pettré, J., Grillon, H., & Thalmann, D. (2007). Crowds of moving objects: Navigation planning and simulation. IEEE International Conference on Robotics and Automation, (pp. 3062-3067).
Schadschneider, A., Klüpfel, H., Kretz, T., Rogsch, C., & Seyfried, A. (2009). Fundamentals of Pedestrian and Evacuation Dynamics. In A. Bazzan, & F. Klügl (Eds.), Multi-Agent Systems for Traffic and Transportation Engineering (pp. 124-154). Hershey, PA, USA: Information Science Reference. DOI: 10.4018/978-1-60566-226-8.ch006
Schultz, M., Kretz, T., & Fricke, H. (2010). Solving the Direction Field for Discrete Agent Motion. Cellular Automata - 9th International Conference on Cellular Automata for Research and Industry, ACRI 2010, (pp. 489-495). DOI: 10.1007/978-3-642-15979-4_52 arxiv.org/abs/1008.3990
van Wageningen-Kessels, F., Daamen, W., & Hoogendoorn, S. (2014). Pedestrian Evacuation Optimization - Dynamic Programming in Continuous Time and Space. In M. Boltes, M. Chraibi, A. Schadschneider, & A. Seyfried, Traffic and Granular Flow 13. Jülich: Springer.
Wardrop, J. (1952). Road Paper. Some theoretical aspects of road traffic research. ICE Proceedings: Engineering Divisions, (pp. 325-362). DOI: 10.1680/ipeds.1952.11259
F-THRESHOLDS, INTEGRAL CLOSURE AND CONVEXITY MATTEO VARBARO arXiv:1607.01147v1 [] 5 Jul 2016 To Winfried Bruns on his 70th birthday A BSTRACT. The purpose of this note is to revisit the results of [HV] from a slightly different perspective, outlining how, if the integral closures of a finite set of prime ideals abide the expected convexity patterns, then the existence of a peculiar polynomial f allows to compute the F-jumping numbers of all the ideals formed by taking sums of products of the original ones. The note concludes with the suggestion of a possible source of examples falling in such a framework. 1. P ROPERTIES A, A+ AND B FOR A FINITE SET OF PRIME IDEALS Let S be a standard graded polynomial ring over a field k and let m be a positive integer. Fix homogeneous prime ideals of S: p1 , p2 , . . . , pm . For any σ = (σ1 , . . . , σm ) ∈ Nm and k = 1, . . . , m, denote by I σ := pσ1 1 · · · pσmm and ek (σ ) := max{ℓ : I σ ⊆ pk }. (ℓ) (e (σ )) k . Since Spk is a regular local ring with maximal Obviously we have I σ ⊆ m k=1 pk ℓ ideal (pk )pk , we have that (pk )pk is integrally closed in Spk for any ℓ ∈ N. Therefore (ℓ) T pk = (pk )ℓpk ∩ S is integrally closed in S for any ℓ ∈ N. Eventually we conclude that (ek (σ )) k=1 pk Tm is integrally closed in S, so: Iσ (1) ⊆ m \ (ek (σ )) pk . k=1 Definition 1.1. We say that p1 , . . . , pm satisfy condition A if Iσ = m \ (ek (σ )) pk ∀ σ ∈ Nm . k=1 If Σ ⊆ Nm , denote by I(Σ) := ∑σ ∈Σ I σ and by Σ ⊆ Qm the convex hull of Σ ⊆ Qm . Lemma 1.2. For any Σ ⊆ Nm , I(Σ) ⊇ ∑v∈Σ I ⌈v⌉ , where ⌈v⌉ := (⌈v1 ⌉, . . . , ⌈vm ⌉) for v = (v1 , . . ., vm ) ∈ Qm . Proof. Since S is Noetherian, we can assume that Σ = {σ 1 , . . ., σ N } is a finite set. Take v ∈ Σ. Then there exist nonnegative rational numbers q1 , . . . , qN such that N N v = ∑ qi σ i and ∑ qi = 1. i=1 i=1 1 2 MATTEO VARBARO Let d be the product of the denominators of the qi ’s and σ = d · v ∈ Nm . Clearly: (I ⌈v⌉ )d = I d·⌈v⌉ ⊆ I σ . N i Setting ai = dqi , notice that σ = ∑N i=1 ai σ and ∑i=1 ai = d. Therefore (I ⌈v⌉ )d ⊆ I σ ⊆ I(Σ)d . This implies that I ⌈v⌉ is contained in the integral closure of I(Σ).  From the above lemma, so, I(Σ) ⊇ ∑v∈Σ I ⌈v⌉ . In particular: (2) p1 , . . . , pm satisfy condition A =⇒ I(Σ) ⊇ ∑ v∈Σ m \ (ek (⌈v⌉)) pk k=1 ! . Definition 1.3. We say that p1 , . . . , pm satisfy condition A+ if ! ∑ I(Σ) = v∈Σ m \ (ek (⌈v⌉)) pk ∀ Σ ⊆ Nm k=1 Remark 1.4. If p1 , . . . , pm satisfy condition A+, then they satisfy A as well (for σ ∈ Nm , just consider the singleton Σ = {σ }). Lemma 1.5. Let σ 1 , . . . , σ N be vectors in Nm , and a1 , . . . , aN ∈ N. Then ! N ek ∑ ai σ i i=1 N = ∑ ai ek (σ i ) ∀ k = 1, . . . , m. i=1 i Proof. Set σ = ∑N i=1 ai σ , and notice that   N  N N  i ai (∑N a e (σ i )) (e (σ i )) ai (a e (σ i )) ⊆ ∏ pk k ⊆ ∏ pk i k ⊆ pk i=1 i k , Iσ = ∏ Iσ i=1 i=1 i=1 i so the inequality ek (σ ) ≥ ∑N i=1 ai ek (σ ) follows directly from the definition. i For the other inequality, for each i = 1, . . ., N choose fi ∈ I σ such that its image in e (σ i )+1 Spk is not in (pk )pkk . Then the class fi is a nonzero element of degree ek (σ i ) in the associated graded ring G of Spk . Being G a polynomial ring (in particular a domain), the ai N i element ∏N i=1 f i is a nonzero element of degree ∑i=1 ai ek (σ ) in G. Therefore N i (∑N i=1 ai ek (σ )+1) ∏ fiai ∈ Ipσk \ pk . i=1 i This means that ek (σ ) ≤ ∑N i=1 ai ek (σ ).  Consider the function e : Nm → Nm defined by σ 7→ e(σ ) := (e1 (σ ), . . ., em (σ )). From the above lemma we can extend it to a Q-linear map e : Qm → Qm . 
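To make the convexity conditions tangible, the following is a minimal Python sketch that checks them numerically in the simplest possible situation, two prime ideals generated by single variables in S = k[x, y] (the situation of example (i) discussed later), where e is the identity map, symbolic powers coincide with ordinary powers, and the integral closure of a monomial ideal is read off from its Newton polyhedron. The choice of Σ, the sampling of the convex hull and all function names are purely illustrative.

# Minimal sketch of conditions A/A+ for p1 = (x), p2 = (y) in k[x, y]: here
# e(sigma) = sigma, symbolic powers are ordinary powers, and x^a1 y^a2 lies in
# the integral closure of a monomial ideal iff (a1, a2) dominates a point of
# the Newton polyhedron.  Sigma and all names are illustrative.
from fractions import Fraction
from math import ceil

Sigma = [(4, 0), (0, 4)]                 # exponent vectors generating I(Sigma)

def in_closure(a):
    """Is x^a[0] y^a[1] integral over I(Sigma)?  With two generators this asks
    for t in [0, 1] with t*Sigma[0] + (1 - t)*Sigma[1] <= a componentwise."""
    (p1, p2), (q1, q2) = Sigma
    lo, hi = Fraction(0), Fraction(1)
    for p, q, ai in ((p1, q1, a[0]), (p2, q2, a[1])):
        if p == q:
            if p > ai:
                return False
            continue
        bound = Fraction(ai - q, p - q)
        if p > q:
            hi = min(hi, bound)
        else:
            lo = max(lo, bound)
    return lo <= hi

def in_Aplus_rhs(a):
    """Membership in the right-hand side of condition A+: does a dominate the
    componentwise ceiling of some v in the convex hull of Sigma?"""
    for k in range(65):                  # sample the segment at rational points
        t = Fraction(k, 64)
        v = [t * Sigma[0][i] + (1 - t) * Sigma[1][i] for i in range(2)]
        if all(a[i] >= ceil(v[i]) for i in range(2)):
            return True
    return False

pts = [(a, b) for a in range(6) for b in range(6)]
assert all(in_closure(pt) == in_Aplus_rhs(pt) for pt in pts)
print(sorted(pt for pt in pts if in_closure(pt) and sum(pt) == 4))

For Σ = {(4, 0), (0, 4)} the two membership tests agree on every exponent tested; the closure of I(Σ) = (x^4, y^4) is the strictly larger ideal generated by all monomials of degree 4 (for instance x^2 y^2 belongs to it), in line with the description given by condition A+.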
Given Σ ⊆ Nm , the above map sends Σ to the convex hull PΣ ⊆ Qm of the set {e(σ ) : σ ∈ Σ} ⊆ Qm . In particular we have the following: F-THRESHOLDS, INTEGRAL CLOSURE AND CONVEXITY 3 Proposition 1.6. The prime ideals p1 , . . ., pm satisfy condition A+ if and only if ! m \ (⌈vk ⌉) ∑ I(Σ) = (v1 ,...,vm )∈PΣ If Σ ⊆ Nm and s ∈ N, define Σs ∀ Σ ⊆ Nm pk k=1 := {σ i1 + . . . + σ is : σ ik ∈ Σ}. Then I(Σ)s = I(Σs). Furthermore Σs = s · Σ, i.e. PΣs = s · PΣ. So: Proposition 1.7. If p1 , . . . , pm satisfy condition A+, then ! I(Σ)s = ∑ m \ (⌈svk ⌉) pk (v1 ,...,vm )∈PΣ ∀ Σ ⊆ Nm , s ∈ N. k=1 We conclude this section by stating the following definition: Definition 1.8. We say that p1 , . . . , pm satisfy condition B if there exists a polynomial T ht(pk ) such that in≺ ( f ) is a square-free monomial for some term order ≺ on S. f∈ m k=1 pk 2. G ENERALIZED TEST IDEALS AND F - THRESHOLDS Let p > 0 be the characteristic of k, I be an ideal of S and m be the homogeneous maximal ideal of S. For all e ∈ N, define νe (I) := max{r ∈ N : I r 6⊆ m[q] := (gq : g ∈ m)}, q = pe . The F-pure threshold of I is then νe (I) . e→∞ pe fpt(I) := lim e e The pe -th root of I, denoted by I [1/p ] , is the smallest ideal J ⊆ S such that I ⊆ J [p ] . By the flatness of the Frobenius over S the q-th root is well defined. If λ is a positive real number, then it is easy to see that  [1/pe ]  [1/pe+1 ] ⌈λ pe ⌉ ⌈λ pe+1 ⌉ I ⊆ I . The generalized test ideal of I with coefficient λ is defined as: [1/pe ]  ⌈λ pe ⌉ . τ (λ · I) := I e≫0 Note that τ (λ · I) ⊇ τ (µ · I) whenever λ ≤ µ . By [BMS, Corollary 2.16], ∀ λ ∈ R>0 , ∃ ε ∈ R>0 such that τ (λ · I) = τ (µ · I) ∀ µ ∈ [λ , λ + ε ). A λ ∈ R>0 is called an Fjumping number for I if τ ((λ − ε ) · I) ) τ (λ · I) ∀ ε ∈ R>0 . [ τ = (1) )[ λ1 τ 6= (1) )[ λ2 ... )[ λn ... ◮ λ -axis (1) ) τ (λ1 · I) ) τ (λ2 · I) ) . . . ) τ (λn · I) ) . . . 4 MATTEO VARBARO The λi above are the F-jumping numbers. Notice that λ1 = fpt(I). Theorem 2.1. If p1 , . . ., pm satisfy conditions A and B, then ∀ λ ∈ R>0 we have σ τ (λ · I ) = m \ (⌊λ ek (σ )⌋+1−ht(pk )) pk ∀ σ ∈ Nm . k=1 If p1 , . . . , pm satisfy conditions A+ and B, then ∀ λ ∈ R>0 we have ! τ (λ · I(Σ)) = ∑ (v1 ,...,vm )∈PΣ m \ (⌊λ vk ⌋+1−ht(pk )) pk ∀ Σ ⊆ Nm . k=1 Proof. The first part immediately follows from [HV, Theorem 3.14], for if p1 , . . . , pm satisfy conditions A and B, then I σ obviously enjoys condition (⋄+) of [HV] ∀ σ ∈ Nm . Concerning the second part, Proposition 1.7 implies that I(Σ) enjoys condition (∗) of [HV] ∀ Σ ⊆ Nm whenever p1 , . . . , pm satisfy conditions A+ and B. Therefore the conclusion follows once again by [HV, Theorem 4.3].  3. W HERE GO FISHING ? Let k be of characteristic p > 0. So far we have seen that, if we have graded primes p1 , . . . , pm of S enjoying A and B, then we can compute lots of generalized test ideals. If they enjoy A+ and B, we get even more. That looks nice, but how can we produce p1 , . . . , pm like these? Before trying to answer this question, let us notice that, as explained in [HV], the ideals p1 , . . . , pm of the following examples satisfy conditions A+ and B: (i) S = k[x1 , . . . , xm ] and pk = (xk ) for all k = 1, . . ., m. (ii) S = k[X ], where X is an m × n generic matrix (with m ≤ n) and pk = Ik (X ) is the ideal generated by the k-minors of X for all k = 1, . . . , m. (iii) S = k[Y ], where Y is an m × m generic symmetric matrix and pk = Ik (Y ) is the ideal generated by the k-minors of Y for all k = 1, . . ., m. 
(iv) S = k[Z], where Z is a (2m + 1) × (2m + 1) generic skew-symmetric matrix and pk = P2k (Z) is the ideal generated by the 2k-Pfaffians of Z for all k = 1, . . . , m. Even for a simple example like (i), Theorem 2.1 is interesting: it gives a description of the generalized test ideals of any monomial ideal. In my opinion, a class to look at to find new examples might be the following: fix f ∈ S a homogeneous polynomial such that in≺ ( f ) is a square-free monomial for some term order ≺ (better if lexicographical) on S, and let C f be the set of ideals of S defined, recursively, like follows: (a) ( f ) ∈ C f ; (b) If I ∈ C f , then I : J ∈ C f for all J ⊆ S; (c) If I, J ∈ C f , then both I + J and I ∩ J belong to C f . If f is an irreducible polynomial, C f consists of only the principal ideal generated by f , but otherwise things can get interesting. Let us give two guiding examples: (i) If u := x1 · · · xm , then the associated primes of (u) are (x1 ), . . ., (xm ). Furthermore all the ideals of S = k[x1 , . . . , xm ] generated by variables are sums of the principal F-THRESHOLDS, INTEGRAL CLOSURE AND CONVEXITY 5 ideals above, and all square-free monomial ideals can be obtained by intersecting ideals generated by variables. Therefore, any square-free monomial ideal belongs to Cu , and one can check that indeed: Cu = {square-free monomial ideals of S}. (ii) Let X = (xi j ) be an m × n matrix of variables, with m ≤ n. For positive integers a1 < . . . < ak ≤ m and b1 < . . . < bk ≤ n, recall the standard notation for the corresponding k-minor:   xa1 b1 xa1 b2 · · · xa1 bk .. ..  . .. [a1, . . . , ak |b1 , . . . , bk ] := det  ... . . . xak b1 xak b2 · · · xak bk For i = 0, . . . , n − m, let δi := [1, . . ., m|i + 1, . . ., m + i]. Also, for j = 1, . . . , m − 1 set g j := [ j + 1, . . ., m|1, . . ., m − j] and h j := [1, . . ., m − j|n − m + j + 1, . . ., n]. Let ∆ be the product of the δi ’s, the g j ’s and the h j ’s: ∆ := n−m m−1 i=0 j=1 ∏ δi · ∏ g j h j . By considering the lexicographical term order ≺ extending the linear order x11 > x12 > . . . x1n > x21 > . . . > x2n > . . . > xm1 > . . . > xmn , we have that in(∆) = n−m m−1 i=0 j=1 ∏ in(δi) · ∏ in(g j ) in(h j ) = ∏ xi j i∈{1,...,m} j∈{1,...,n} is a square-free monomial. Since each (δi ) belongs to C∆ , the height-(n − m + 1) complete intersection J := (δ0 , . . . , δn−m ) is an ideal of C∆ too. Notice that the ideal Im (X ) generated by all the maximal minors of X is a height-(n − m + 1) prime ideal containing J. So Im (X ) is an associated prime of J, and thus an ideal of C∆ by definition. With more effort, one should be able to show that the ideals of minors Ik (X ) stay in C∆ for any size k. The ideals of C f have quite strong properties. First of all, C f is a finite set by [Sc]. Then, all the ideals in C f are radical. Even more, Knutson proved in [Kn] that they have a square-free initial ideal! In order to produce graded prime ideals p1 , . . . , pm satisfying conditions A (or even A+) and B, it seems natural to seek for them among the prime ideals in C f . This is because, at least, f is a good candidate for the polynomial needed for condition B: if f = f1 · · · fr is the factorization of f in irreducible polynomials, then for each A ⊆ {1, . . ., r} the ideal JA := ( fi : i ∈ A) ⊆ S is a complete intersection of height |A|. If p is an associated prime ideal of JA , then f obviously belongs to p|A| ⊆ p(|A|) . So such a p satisfies B. 6 MATTEO VARBARO Question 3.1. Does the ideal p above satisfy condition A? 
Even more, is it true that for prime ideals p as above ps = p(s) for all s ∈ N? If the above question admitted a positive answer, Theorem 2.1 would provide the generalized test ideals of p. A typical example, is when JA = (δ0 , . . . , δn−m ) and p = Im (X ) (see (ii) above), in which case it is well-known that Im (X )s = Im (X )(s) for all s ∈ N (e.g. see [BV, Corollary 9.18]. Remark 3.2. Unfortunately, it is not true that p satisfies B for all prime ideal p ∈ C f : for example, consider f = ∆ in the case m = n = 2, that is ∆ = x21 (x11 x22 − x12 x21 )x21 . Notice that (x21 , x11 x22 − x12 x21 ) = (x21 , x11 x22 ) = (x21 , x11 ) ∩ (x21 , x22 ), so p = (x21 , x11 ) + (x21 , x22 ) = (x21 , x11 , x22 ) ∈ C∆ . However ∆ ∈ / p(3) . Problem 3.3. Find a large class of prime ideals in C f (or even characterize them) satisfying condition B. If p1 , . . . , pm are prime ideals satisfying A+, then (by definition) ∑ pi = ∑ pi i∈A ∀ A ⊆ {1, . . . , m}. i∈A If p1 , . . ., pm are in C f , then the above equality holds true because ∑i∈A pi , belonging to C f , is a radical ideal. Problem 3.4. Let P f be the set of prime ideals in C f . Is it true that P f satisfies condition A+? If not, find a large subset of P f satisfying condition A+. R EFERENCES [BMS] M. Blickle, M. Mustaţă, K.E. Smith, Discreteness and rationality of F-thresholds, Michigan Math. J. 57 (2008), 43–61. [BV] W. Bruns, U. Vetter, Determinantal rings, Lecture Notes in Mathematics 1327, Springer-Verlag, Berlin, 1988. [HV] I.B. Henriques, M. Varbaro, Test, multiplier and invariant ideals, Adv. Math. 287 (2016), 704–732. [Kn] A. Knutson, Frobenius splitting, point-counting, and degeneration, available at http://arxiv.org/abs/0911.4941 (2009). [Sc] K. Schwede, F-adjunction, Algebra & Number Theory 3 (2009), 907–950. D IPARTIMENTO DI M ATEMATICA , U NIVERSIT À E-mail address: [email protected] DEGLI S TUDI DI G ENOVA , I TALY
1 Cache-Aided Private Information Retrieval with Partially Known Uncoded Prefetching: arXiv:1712.07021v1 [] 18 Dec 2017 Fundamental Limits Yi-Peng Wei Karim Banawan Sennur Ulukus Department of Electrical and Computer Engineering University of Maryland, College Park, MD 20742 [email protected] [email protected] [email protected] Abstract We consider the problem of private information retrieval (PIR) from N non-colluding and replicated databases, when the user is equipped with a cache that holds an uncoded fraction r of the symbols from each of the K stored messages in the databases. This model operates in a two-phase scheme, namely, the prefetching phase where the user acquires side information and the retrieval phase where the user privately downloads the desired message. In the prefetching phase, the user receives r N uncoded fraction of each message from the nth database. This side information is known only to the nth database and unknown to the remaining databases, i.e., the user possesses partially known side information. We investigate the optimal normalized download cost D∗ (r) in the retrieval phase as a function of K, N , r. We develop lower and upper bounds for the optimal download cost. The bounds match in general for the cases of very low caching ratio (r ≤ 1 N K−1 ) and very high caching ratio (r ≥ K−2 N 2 −3N +KN ). We fully characterize the optimal download cost caching ratio tradeoff for K = 3. For general K, N , and r, we show that the largest gap between the achievability and the converse bounds is 5 32 . I. I NTRODUCTION In today’s communication networks, the end-users are equipped with large memories, and the data transmitted in the network has shifted from real-time generated data like voice to pregenerated content like movies. These two factors together have enabled caching techniques, which store data in user cache a priori in order to reduce the peak-hour network traffic load. In This work was supported by NSF Grants CNS 13-14733, CCF 14-22111, CNS 15-26608 and CCF 17-13977. 2 the meanwhile, privacy has become an important consideration for users, who wish to download data from publicly accessable databases as privately and as efficiently as possible. This is studied under the subject of private information retrieval (PIR). In this paper, we combine the caching and PIR approaches, and consider the problem of PIR for a cache-enabled end-user. The PIR problem considers the privacy of the requested message by a user from distributed databases. In the classical setting of PIR [1], there are N non-communicating databases, each storing the same set of K messages. The user wishes to download one of these K messages without letting the databases know the identity of the desired message. A feasible scheme is to download all the K messages from a database. However, this results in excessive download cost since it results in a download that is K times the size of the desired message. The goal of the PIR problem is to construct an efficient retrieval scheme such that no database knows which message is retrieved. The PIR problem has originated in computer science [1]–[5] and has drawn significant attention in information theory [6]–[11] in recent years. Recently, Sun and Jafar [12] have characterized the optimal normalized download cost for  1 1 the classical PIR problem to be D = 1 + + · · · + , where L is the message size and K−1 L N N D is the total number of downloaded bits from the N databases. 
Since the work of Sun-Jafar [12], many interesting variants of the classical PIR problem have been investigated, such as, PIR from colluding databases, robust PIR, symmetric PIR, PIR from MDS-coded databases, PIR for arbitrary message lengths, multi-round PIR, multi-message PIR, PIR from Byzantine databases, secure symmetric PIR with adversaries, cache-aided PIR, PIR with private side information (PSI), PIR for functions, storage constrained PIR, and their several combinations [13]–[35]. Also recently, Maddah-Ali and Niesen [36] have proposed a theoretical framework to study the tradeoff between the cache memory size of users and the network traffic load for a two-phase scheme. In the prefetching phase, when the network traffic is low, the server allocates data to the user’s cache memory. In the retrieval phase, when the network traffic is high, the server delivers the messages according to the users’ requests. By jointly designing the prefetching of the cache content during the low traffic period and the delivery of the requested content during the high traffic period, the coded-caching technique proposed in [36] achieves a global caching gain significantly reducing the peak-time traffic load. The concept of coded-caching has been applied to many different scenarios, such as, decentralized networks, device to device networks, 3 random demands, online settings, general cache networks, security constraints, finite file size constraints, broadcast channels and their several combinations [37]–[55]. The caching technique not only reduces the traffic load but can also help the user to privately retrieve the desired file more efficiently by providing additional side information. The interplay between side information and the PIR problem has been studied recently in [27], [29]–[32]. We first recall that the achievability scheme proposed in [12] is based on three principles: database symmetry, message symmetry, and side information utilization. The side information in [12] comes from the undesired bits downloaded from the other (N − 1) databases. Side information plays an important role in the PIR problem; for instance, when N = 1 (single database), i.e., when no side information is available, the normalized download cost is K, which is the largest possible. Caching can improve PIR download cost by providing useful side information. Reference [27] is the first to study the cache-aided PIR problem. In [27], the user has a memory of size KLr bits and can store an arbitrary function of the K messages, where 0 ≤ r ≤ 1 is the caching ratio. [27] considers the case when the cached content is fully known to all N databases,  1 and determines the optimal normalized download cost to be (1 − r) 1 + N1 + · · · + N K−1 . Although the result is pessimistic since it implies that the user cannot utilize the cached content to further reduce the download cost, [27] reveals two new dimensions for the cache-aided PIR problem. The first one is the databases’ awareness of the side information at its initial acquisition. Different from [27], [29]–[31] study the case when the databases are unaware of the cached side information, and [32] studies the case when the databases are partially aware of the cached side information. The second one is the structure of the side information. Instead of storing an arbitrary function of the K messages, [29], [31], [32] consider caching M full messages out of total K messages, and [30] considers storing an r fraction of each message in uncoded form. 
This paper is closely related to [30]. In [30], the databases are assumed to be completely unaware of the side information. However, this may be practically challenging to implement. Here, we consider a more natural model which uses the same set of databases for both prefetching and retrieval phases. Therefore, different from [30], here each database gains partial knowledge about the side information, that is the part it provides during the prefetching phase. Our aim is to determine if there is a rate loss due to this partial knowledge with respect to the fully unknown case in [30], and characterize this rate loss as a function of K, N and r. 4 In this work, we consider PIR with partially known uncoded prefetching. We consider the PIR problem with a two-phase scheme, namely, prefetching phase and retrieval phase. In the prefetching phase, the user caches an uncoded The nth database is aware of these KLr N r N fraction of each message from the nth database. bit side information, while it has no knowledge about the cached bits from the other (N − 1) databases. We aim at characterizing the optimal tradeoff between the normalized download cost D(r) L and the caching ratio r. For the outer bound, we explicitly determine the achievable download rates for specific K + 1 caching ratios. Download rates for any other caching ratio can be achieved by memory-sharing between the nearest explicit points. Hence, the outer bound is a piece-wise linear curve which consists of K line segments. For the inner bound, we extend the techniques of [12], [30] to obtain a piece-wise linear curve which also consists of K line segments. We show that the inner and the outer bounds match exactly at three line segments for any K. Consequently, we characterize the optimal tradeoff for the very low (r ≤ 1 ) N K−1 and the very high (r ≥ K−2 ) N 2 −3N +KN caching ratios. As a direct corollary, we fully characterize the optimal download cost caching ratio tradeoff for K = 3 messages. For general K, N and r, we show that the worst-case gap between the inner and the outer bounds is 5 . 32 II. S YSTEM M ODEL We consider a PIR problem with N non-communicating databases. Each database stores an identical copy of K statistically independent messages, W1 , . . . , WK . Each message is L bits long, H(W1 ) = · · · = H(WK ) = L, H(W1 , . . . , WK ) = H(W1 ) + · · · + H(WK ). (1) The user (retriever) has a local cache memory which can store up to KLr bits, where 0 ≤ r ≤ 1, and r is called the caching ratio. There are two phases in this system: the prefetching phase and the retrieval phase. In the prefetching phase, for each message Wk , the user randomly and independently chooses Lr bits out of the L bits to cache. The user caches the Lr bits of each message by prefetching the same amount of bits from each database, i.e., the user prefetches KLr N bits from each database. ∀n ∈ [N], where [N] = {1, 2, . . . , N}, we denote the indices of the cached bits from the nth 5 database by Hn and the cached bits from the nth database by the random variable Zn . Therefore, P the overall cached content Z is equal to (Z1 , . . . , ZN ), and H(Z) = N n=1 H(Zn ) = KLr. We S further denote the indices of the cached bits by H. Therefore, we have H = N n=1 Hn , where Hn1 ∩ Hn2 = ∅, if n1 6= n2 . Since the user caches a subset of the bits from each message, this is called uncoded prefetching. Here, we consider the case where database n knows Hn , but it does not know H \ Hn . We refer to Z as partially known prefetching. 
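A minimal Python sketch of this prefetching phase follows; the cached content is tracked simply as (message, bit-index) pairs per database, and the function name and data layout are illustrative rather than taken from the paper.

# Minimal sketch of the prefetching phase: for each of the K messages the user
# caches Lr bit positions chosen uniformly at random and obtains an equal share
# (Lr/N positions per message) from each of the N databases, so that database n
# learns H_n but not the rest of H.  Layout and names are illustrative.
import random

def prefetch(K, N, L, r, seed=0):
    rng = random.Random(seed)
    per_db = int(L * r / N)                  # cached bits per message per database
    H = [[] for _ in range(N)]               # H[n]: (message, bit index) pairs from database n
    for k in range(K):
        positions = rng.sample(range(L), per_db * N)   # the Lr cached positions of W_k
        for n in range(N):
            for i in positions[n * per_db:(n + 1) * per_db]:
                H[n].append((k, i))
    return H

# K = 3 messages of L = 8 bits, N = 2 databases, caching ratio r = 1/4: every
# database supplies one bit of every message, as in the motivating example below.
H = prefetch(K=3, N=2, L=8, r=0.25)
print([len(h) for h in H])                    # -> [3, 3]

The pairs held by database n play the role of Z_n in the example of Section IV, e.g., Z1 = (a1, b1, c1) and Z2 = (a2, b2, c2) for K = 3 and N = 2.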
In the retrieval phase, the user privately generates an index θ ∈ [K], and wishes to retrieve message Wθ such that it is impossible for any individual database to identify θ. Note that during the prefetching phase, the desired message is unknown a priori. Therefore, the cached bit indices H are independent of the desired message index θ. Note further that the cached bit indices H are independent of the message contents. Therefore, for random variables θ, H, and W1 , . . . , WK , we have H (θ, H, W1 , . . . , WK ) = H (θ) + H (H) + H(W1 ) + · · · + H(WK ). [θ] [θ] (2) [θ] The user sends N queries Q1 , . . . , QN to the N databases, where Qn is the query sent to the nth database for message Wθ . The queries are generated according to H, which are independent of the realizations of the K messages. Therefore, [θ] [θ] I(W1 , . . . , WK ; Q1 , . . . , QN ) = 0. (3) To ensure that individual databases do not know which message is retrieved, we need to satisfy the following privacy constraint, ∀n ∈ [N], ∀θ ∈ [K], [1] [θ] [θ] (Q[1] n , An , W1 , . . . , WK , Hn ) ∼ (Qn , An , W1 , . . . , WK , Hn ), (4) where A ∼ B means that A and B are identically distributed. [θ] [θ] After receiving the query Qn , the nth database replies with an answering string An , which [θ] is a function of Qn and all the K messages. Therefore, ∀θ ∈ [K], ∀n ∈ [N], H(An[θ] |Qn[θ] , W1 , . . . , WK ) = 0. [θ] [θ] (5) After receiving the answering strings A1 , . . . , AN from all the N databases, the user needs 6 to decode the desired message Wθ reliably. By using Fano’s inequality, we have the following reliability constraint   [θ] [θ] [θ] [θ] H Wθ |Z, H, Q1 , . . . , QN , A1 , . . . , AN = o(L), o(L) L where o(L) denotes a function such that (6) → 0 as L → ∞. For a fixed N, K, and caching ratio r, a pair (D(r), L) is achievable if there exists a PIR scheme for message of size L bits long with partially known uncoded prefetching satisfying the privacy constraint (4) and the reliability constraint (6), where D(r) represents the expected number of downloaded bits (over all the queries) from the N databases via the answering strings [θ] [θ] [θ] [θ] A1:N , where A1:N = (A1 , . . . , AN ), i.e., D(r) = N X n=1  H A[θ] n . (7) In this work, we aim at characterizing the optimal normalized download cost D ∗ (r) corresponding to every caching ratio 0 ≤ r ≤ 1, where ∗ D (r) = inf   D(r) : (D(r), L) is achievable , L (8) which is a function of the caching ratio r. III. M AIN R ESULTS We provide a PIR scheme for general K, N and r, which achieves the following normalized download cost, D̄(r). Theorem 1 (Outer bound) In the cache-aided PIR with partially known uncoded prefetching, let s ∈ {1, 2, · · · , K − 1}, for the caching ratio rs , where  K−2 rs = s−1 K−2 s−1  + PK−1−s i=0 ,  (N − 1)i+1 K−1 s+i (9) the optimal normalized download cost D ∗ (rs ) is upper bounded by, D ∗ (rs ) ≤ D̄(rs ) = PK−1−s K (N − 1)i+1 i=0 s+1+i .  P K−1−s K−1 K−2 i+1 + (N − 1) i=0 s−1 s+i  (10) 7 Moreover, if rs < r < rs+1 , and α ∈ (0, 1) such that r = αrs + (1 − α)rs+1 , then D ∗ (r) ≤ D̄(r) = αD̄(rs ) + (1 − α)D̄(rs+1 ). (11) The proof of Theorem 1 is provided in Section IV. The outer bound in Theorem 1 is a piecewise linear curve, which consists of K line segments. These K line segments intersect at the points rs . We characterize an inner bound (converse bound), which is denoted by D̃(r), for the optimal normalized download cost D ∗ (r) for general K, N, r. 
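A minimal Python sketch of this outer bound is given below; it evaluates the corner points of (9)-(10), appends the degenerate points r = 0 and r = 1, and interpolates between neighbouring corner points by memory-sharing as in (11). Exact rational arithmetic is used for readability only, and the function names are illustrative.

# Minimal sketch of the outer bound of Theorem 1: corner points (r_s, Dbar(r_s))
# from (9)-(10), the degenerate points r = 0 (the no-cache cost of [12]) and
# r = 1, and memory-sharing (11) in between.
from fractions import Fraction
from math import comb

def corner_points(K, N):
    pts = [(Fraction(0), sum(Fraction(1, N**j) for j in range(K)))]
    for s in range(1, K):
        den = comb(K - 2, s - 1) + sum((N - 1)**(i + 1) * comb(K - 1, s + i)
                                       for i in range(K - s))
        num = sum((N - 1)**(i + 1) * comb(K, s + 1 + i) for i in range(K - s))
        pts.append((Fraction(comb(K - 2, s - 1), den), Fraction(num, den)))
    pts.append((Fraction(1), Fraction(0)))
    return pts

def outer_bound(K, N, r):
    pts = corner_points(K, N)
    for (r0, d0), (r1, d1) in zip(pts, pts[1:]):
        if r0 <= r <= r1:
            alpha = (r1 - r) / (r1 - r0)          # memory-sharing weight of (11)
            return alpha * d0 + (1 - alpha) * d1
    raise ValueError("the caching ratio must lie in [0, 1]")

print(corner_points(4, 2))                 # (0, 15/8), (1/8, 11/8), (1/3, 5/6), (1/2, 1/2), (1, 0)
print(outer_bound(3, 2, Fraction(1, 3)))   # -> 5/6, as achieved by the scheme of Table III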
Theorem 2 (Inner bound) In the cache-aided PIR with partially known uncoded prefetching, the normalized download cost is lower bounded as, ∗ D (r) ≥ D̃(r) = = (1 − r) max i∈{2,··· ,K+1} max i∈{2,··· ,K+1} K+1−i X j=0 K+1−i X j=0  K−i  1 1 XK +1−i−j −r 1− Nj N j=0 Nj 1 − (K + 2 − i)r. Nj (12) (13) The proof of Theorem 2 is provided in Section V. The inner bound in Theorem 2 is also a piece-wise linear curve, which consists of K line segments. Interestingly, these K line segments intersect at the points as follows, r̃i = 1 N K−i , i = 1, · · · , K − 1. (14) The outer bounds provided in Theorem 1 and the inner bounds provided in Theorem 2 match for some caching ratios r as summarized in the following corollary. Corollary 1 (Optimal tradeoff for very low and very high caching ratios) In the cache- aided PIR with partially known uncoded prefetching, for very low caching ratios, i.e., for r ≤ 1 , N K−1 the optimal normalized download cost is given by,   1 1 + · · · + K−1 − Kr. D (r) = 1 + N N ∗ On the other hand, for very high caching ratios, i.e., for r ≥ K−2 , N 2 −3N +KN (15) the optimal normalized 8 download cost is given by,   1 + 1 − 2r, N ∗ D (r) =  1 − r, K−2 N 2 −3N +KN 1 N ≤r≤ 1 N . (16) ≤r≤1 Proof: From (9) and (14), we have r1 = r̃1 = rK−2 = rK−1 = r̃K−1 = 1 N K−1 N2 , K−2 , − 3N + KN 1 . N For the outer bound of the case of very low caching ratios, from (10), we have PK−2 K  (N − 1)i+1 i=0 2+i D̄(r1 ) = K−2 PK−2  K−1 + (N − 1)i+1 i=0 0 1+i  P K−2 K 1 (N − 1)i+2 i=0 2+i (N −1) = N K−1  1 K N − 1 − K(N − 1) (N −1) = N K−1 K N − KN + K − 1 = . N K − N K−1 (17) (18) (19) (20) (21) (22) (23) For the inner bound of the case of very low caching ratios, from (12), by choosing i = 2 and using r = r1 , we have  K−2  1 1 X K −1−j D̃(r1 ) ≥ (1 − r1 ) − r1 1 − j N N j=0 Nj j=0     − 1 + N1K 1 − N1K 1 1 1 K−K N − = 1 − K−1 1 − 2 N N K−1 N 1 − N1 1 − N1      1 1 1 1 K 1  1 − K−1 = 1 − K − K−1 K − −1+ K N N N N N 1 − N1 K−1 X = N K − KN + K − 1 = D̄(r1 ). N K − N K−1 (24) (25) (26) (27) Thus, since D̃(r1 ) ≤ D̄(r1 ) by definition, (27) implies D̃(r1 ) = D̄(r1 ). For the outer bound of the case of very high caching ratios, from (10), we have  P1 K i+1 N i=0 K−1+i (N − 1)   D̄(rK−2) = P1 K−2 K−1 N K−3 + i=0 K−2+i (N − 1)i+1 N (28) 9 = N 2 + KN − 2N − K + 1 , N 2 − 3N + KN (29) and for the inner bound of the case of very high caching ratios, from (12) by choosing i = K and using r = rK−2 ,  0  1 X 1 1 X1−j D̃(rK−2) ≥ (1 − rK−2 ) − rK−2 1 − j N N j=0 N j j=0 1 − 2rK−2 N N 2 + KN − 2N − K + 1 = = D̄(rK−2) N 2 − 3N + KN =1+ (30) (31) (32) implying D̃(rK−2 ) = D̄(rK−2). Finally, from (10), D̄(rK−1 ) = N −1 , N and from (12) by choosing i = K +1 and using r = rK−1, D̃(rK−1 ) ≥ N −1 = D̄(rK−1 ) N (33) implying D̃(rK−1 ) = D̄(rK−1). Therefore, D̃(r) = D̄(r) at r = r1 , r = rK−2 and r = rK−1 . In addition to that D̃(0) = D̄(0) and D̃(1) = D̄(1). Since both D̄(r) and D̃(r) are linear functions of r, and since D̃(0) = D̄(0) and D̃(r1 ) = D̄(r1 ), we have D̃(r) = D̄(r) = D ∗ (r) for 0 ≤ r ≤ r1 . This is the very low caching ratio region. In addition, since D̃(rK−2) = D̄(rK−2), D̃(rK−1) = D̄(rK−1) and D̃(1) = D̄(1), we have D̃(r) = D̄(r) = D ∗ (r) for rK−2 ≤ r ≤ 1. This is the very high caching ratio region.  We use the example of K = 4, N = 2 to illustrate Corollary 1 (see Figure 1). In this case, r1 = r̃1 = 81 , rK−2 = 13 , and rK−1 = r̃K−1 = 21 . Therefore, we have exact results for 0 ≤ r ≤ (very low caching ratios) and 1 3 1 8 ≤ r ≤ 1 (very high caching ratios). 
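The inner bound is just as easy to evaluate numerically. The following minimal Python sketch uses the simplified form (13) and confirms, for K = 4 and N = 2, that the bound takes the achievable values 11/8, 5/6 and 1/2 at r = 1/8, 1/3 and 1/2, while at the medium caching ratio r = 1/4 it equals 1 and lies below the achievable curve; the names are illustrative.

# Minimal sketch of the inner bound of Theorem 2, using the simplified form (13),
# together with a spot check of Corollary 1 for K = 4, N = 2.
from fractions import Fraction

def inner_bound(K, N, r):
    return max(sum(Fraction(1, N**j) for j in range(K + 2 - i)) - (K + 2 - i) * r
               for i in range(2, K + 2))

K, N = 4, 2
for r, expected in [(Fraction(1, 8), Fraction(11, 8)),   # very low caching ratio regime
                    (Fraction(1, 3), Fraction(5, 6)),    # very high caching ratio regime
                    (Fraction(1, 2), Fraction(1, 2))]:
    assert inner_bound(K, N, r) == expected
print(inner_bound(K, N, Fraction(1, 4)))                 # -> 1, below the outer bound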
We have a gap between the achievability and the converse for medium caching ratios in 1 8 ≤ r ≤ 13 . More specifically, line segments connecting (0, 15 ) and ( 81 , 11 ); connecting ( 13 , 65 ) and ( 12 , 21 ); and connecting ( 21 , 12 ) and 8 8 (1, 0) are tight. For the case K = 3, we have exact tradeoff curve for any N, r as shown in the following corollary. Corollary 2 (Optimal tradeoff for K = 3) In the cache-aided PIR with partially known uncoded prefetching with K = 3 messages, the optimal download cost caching ratio tradeoff is 10 2 (0,15/8) Achievability Converse 1.8 1.6 (1/8, 11/8) 1.4 D L 1.2 1 (1/4, 1) (1/3, 5/6) 0.8 0.6 (1/2, 1/2) 0.4 0.2 0 (1,0) 0 0.2 0.4 r 0.6 0.8 1 Fig. 1. Inner and outer bounds for K = 4, N = 2. given explicitly as,   1 + N1 + N12 − 3r,    D ∗ (r) = 1 + N1 − 2r,     1 − r, 0≤r≤ 1 N2 1 N2 ≤r≤ 1 N 1 N ≤r≤1 . (34) Proof: The proof follows from the proof of Corollary 1. Note that in this case, from (17) and (18), r1 = rK−2 = 0 ≤ r ≤ r1 = 1 N2 1 ; N2 and from (19), r2 = rK−1 = 1 . N Thus, we have a tight result for (very low caching ratios) and a tight result for rK−2 = r1 = 1 N2 ≤ r ≤ 1, i.e., a tight result for all 0 ≤ r ≤ 1. We have three segments in this case: [0, r1 ], [r1 , r2 ] and [r2 , 1] with three different line expressions for the exact result as given in (9)-(10) and written explicitly in (34).  IV. ACHIEVABLE S CHEME In this section, we present an achievable scheme for the outer bounds provided in Theorem 1. Our achievable scheme is based on [12], [27], [30]. We first provide achievable schemes for the caching ratios rs in (9) by applying the principles in [12]: 1) database symmetry, 2) message symmetry within each database, and 3) exploiting undesired messages as side information. For 11 an arbitrary caching ratio r 6= rs , we apply the memory-sharing scheme in [27]. Since the cached content is partially known by the databases, the achievable scheme is different from that in [30]. We first use the case of K = 3, N = 2 to illustrate the main ideas of our achievability scheme. A. Motivating Example: K = 3 Messages and N = 2 Databases We use ai , bi , and ci to denote the bits of messages W1 , W2 and W3 , respectively. We assume that the user wants to retrieve message W1 privately without loss of generality. 1) Caching Ratio r1 = 41 : We choose the message size as 8 bits. In the prefetching phase, for caching ratio r1 = 41 , the user caches 2 bits from each message. Therefore, the user caches 1 bit from each database for each message. Therefore, Z1 = (a1 , b1 , c1 ) and Z2 = (a2 , b2 , c2 ). In the retrieval phase, for s = 1, we first mix 1 bit of side information with the desired bit. Therefore, the user queries a3 + b2 and a4 + c2 from database 1. Note that database 1 knows that the user prefetches Z1 . Therefore, the user does not use side information Z1 to retrieve information from database 1. To keep message symmetry, the user further queries b3 + c3 from database 1. Similarly, the user queries a5 + b1 , a6 + c1 and b4 + c4 from database 2. Then, the user exploits the side information b4 + c4 to query a7 + b4 + c4 from database 1 and the side information b3 + c3 to query a8 + b3 + c3 from database 2. After this step, no more side information can be used and the message symmetry is attained for each database. Therefore, the PIR scheme ends here. The decodability of message W1 can be shown easily, since the desired bits are either mixed with cached side information or the side information obtained from other databases. 
Overall, the user downloads 8 bits. Therefore, the normalized download cost is 1. We summarize the queries in Table. I. TABLE I s DB1 DB2 s=1 Q UERY TABLE FOR K = 3, N = 2, r1 = 14 . a 3 + b2 a 5 + b1 a4 + c2 a6 + c1 b3 + c3 b4 + c4 a7 + b4 + c4 a8 + b3 + c3 Z1 = (a1 , b1 , c1 ) Z2 = (a2 , b2 , c2 ) 12 2) Caching Ratio r2 = 21 : We choose the message size as 4 bits. In the prefetching phase, for caching ratio r2 = 21 , the user caches 2 bits from each message. Therefore, the user caches 1 bit from each database for each message. Therefore, Z1 = (a1 , b1 , c1 ) and Z2 = (a2 , b2 , c2 ). In the retrieval phase, for s = 2, we first mix 2 bits of side information with the desired bit. Therefore, the user queries a3 + b2 + c2 from database 1. Similarly, the user queries a4 + b1 + c1 from database 2. After this, no more side information can be used and the message symmetry is attained for each database. Therefore, the PIR scheme ends here. Overall, the user downloads 2 bits. Therefore, the normalized download cost is 21 . We summarize the queries in Table. II. TABLE II Q UERY TABLE FOR K = 3, N = 2, r2 = 21 . s DB1 DB2 s=2 a3 + b2 + c2 a4 + b1 + c1 Z1 = (a1 , b1 , c1 ) Z2 = (a2 , b2 , c2 ) 3) Caching Ratio r = 31 : We choose the message size as 12 bits. In the prefetching phase, for caching ratio r = 13 , the user caches 4 bits from each message. Therefore, the user caches 2 bits from each database for each message. Therefore, Z1 = (a1 , a2 , b1 , b2 , c1 , c2 ) and Z2 = (a3 , a4 , b3 , b4 , c3 , c4 ). In the retrieval phase, we combine the achievable schemes in Section IV-A1 and IV-A2 as shown in Table III. The normalized download cost is 65 . By applying [27, Lemma 1] and taking α = 32 , we can show that D̄( 13 ) = D̄( 23 · 41 + 13 · 12 ) = 32 D̄( 14 )+ 31 D̄( 12 ) = 32 ·1+ 13 · 21 = 56 . TABLE III s DB1 DB2 s=1 Q UERY TABLE FOR K = 3, N = 2, r = 13 . a 5 + b3 a 7 + b1 a6 + c3 a8 + c1 b5 + c5 b6 + c6 a9 + b6 + c6 a10 + b5 + c5 a11 + b4 + c4 a12 + b2 + c2 Z1 = (a1 , a2 , b1 , b2 , c1 , c2 ) Z2 = (a3 , a4 , b3 , b4 , c3 , c4 ) s=2 13 B. Achievable Scheme We first present the achievable scheme for the caching ratios rs given in (9). Then, we apply the memory-sharing scheme provided in [27] for the intermediate caching ratios. 1) Achievable Scheme for the Caching Ratio rs : For fixed K and N, there are K − 1 nondegenerate corner points (in addition to degenerate caching ratios r = 0 and r = 1). The caching ratios, rs , corresponding to these non-degenerate corner points are indexed by s, which represents the number of cached bits used in the side information mixture at the first round of the querying. For each s ∈ {1, 2, . . . , K − 1}, we choose the length of the message to be L(s) for the corner point indexed by s, where  K−1−s X K − 1 K−2 (N − 1)i+1 N. + L(s) = N s+i s−1 i=0  (35) In the prefetching phase, for each message the user randomly and independently chooses   K−2 N K−2 bits to cache, and caches bits from each database for each message. Therefore, s−1 s−1 the caching ratio rs is equal to rs = K−2 s−1 PK−1−s K−1 (N i=0 s+i N N K−2 s−1  +  − 1)i+1 N . (36) In the retrieval phase, the user applies the PIR scheme as follows: 1) Initialization: Set the round index to t = s + 1, where the tth round involves downloading sums of every t combinations of the K messages. 2) Exploiting side information: If t = s + 1, for the first database, the user forms queries by mixing s undesired bits cached from the other N − 1 databases in the prefetching phase to form one side information equation. 
Each side information equation is added to one bit from the uncached portion of the desired message. Therefore, for the first database, the  user downloads K−1 (N −1) equations in the form of a desired bit added to a mixture of s s cached bits from other messages. On the other hand, if t > s + 1, for the first database, the  user exploits the K−1 (N −1)t−s side information equations generated from the remaining t−1 (N − 1) databases in the (t − 1)th round. 3) Symmetry across databases: The user downloads the same number of equations with the  same structure as in step 2 from every database. Consequently, the user decodes K−1 (N − t−1 14 1)t−s desired bits from every database, which are done either using the cached bits as side information if t = s+1, or the side information generated in the (t−1)th round if t > s+1. 4) Message symmetry: To satisfy the privacy constraint, the user should download the same  amount of bits from other messages. Therefore, the user downloads K−1 (N − 1)t−s t undesired equations from each database in the form of sum of t bits from the uncached portion of the undesired messages. 5) Repeat steps 2, 3, 4 after setting t = t + 1 until t = K. 6) Shuffling the order of queries: By shuffling the order of queries uniformly, all possible queries can be made equally likely regardless of the message index. Since the desired bits are added to the side information which is either obtained from the cached bits (if t = s + 1) or from the remaining (N − 1) databases in the (t − 1)th round when t > s + 1, the user can decode the uncached portion of the desired message by canceling out the side information bits. In addition, for each database, each message is queried equally likely with the same set of equations, which guarantees privacy as in [12]. Therefore, the privacy constraint in (4) and the reliability constraint in (6) are satisfied. We now calculate the total number of downloaded bits for the caching ratio rs in (36). For the round t = s + 1, we exploit s cached bits to form the side information equation. Therefore, each download is a sum of s + 1 bits. For each database, we utilize the side information cached from other N − 1 databases. In addition to the message symmetry step enforcing symmetry across K  K messages, we download s+1 (N − 1) bits from a database. Due to the database symmetry step,  K in total, we download s+1 (N − 1)N bits. For the round t = s + i > s + 1, we exploit s + i − 1 undesired bits downloaded from the (t − 1)th round to form the side information equation. Due  K to message symmetry and database symmetry, we download s+1+i (N − 1)i+1 N bits. Overall, the total number of downloaded bits is, D(rs ) = K−1−s X  i=0  K (N − 1)i+1 N. s+1+i (37) By canceling out the undesired side information bits using the cached bits for the round t =  s + 1, we obtain K−1 (N − 1)N desired bits. For the round t = s + i > s + 1, we decode s  K−1 (N −1)i+1 N desired bits by using the side information obtained in (t−1)th round. Overall, s+i 15 we obtain L(s) − N K−2 s−1  desired bits. Therefore, the normalized download cost is, D(rs ) D̄(rs ) = = L(s) N PK−1−s K (N − 1)i+1 N i=0 s+1+i  P K−1−s K−1 K−2 (N − 1)i+1 N + i=0 s+i s−1  . (38) 2) Achievable Scheme for the Caching Ratios not Equal to rs : For caching ratios r which are not exactly equal to (36) for some s, we first find an s such that rs < r < rs+1 . We choose 0 < α < 1 such that r = αrs + (1 − α)rs+1. 
By using the memory-sharing scheme in [27, Lemma 1], we achieve the following normalized download cost, D̄(r) = αD̄(rs ) + (1 − α)D̄(rs+1 ). (39) V. C ONVERSE P ROOF In this section, we derive an inner bound for the cache-aided PIR with partially known uncoded prefetching. We extend the techniques in [12], [30] to our problem. The main difference between this proof and that in [30] is the usage of privacy constraint given in (4). Lemma 1 (Interference lower bound) [30, Lemma 1] For the cache-aided PIR with partially known uncoded prefetching, the interference from undesired messages within the answering strings D(r) − L(1 − r) is lower bounded by,   [k−1] [k−1] D(r) − L(1 − r) + o(L) ≥ I Wk:K ; H, Q1:N , A1:N |W1:k−1, Z (40) for all k ∈ {2, . . . , K}. The proof of Lemma 1 is similar to [30, Lemma 1]. In the following lemma, we prove an inductive relation for the mutual information term on the right hand side of (40). Lemma 2 (Induction lemma) For all k ∈ {2, . . . , K}, the mutual information term in Lemma 1 can be inductively lower bounded as,   [k−1] [k−1] I Wk:K ; H, Q1:N , A1:N |W1:k−1, Z  L(1 − r) − o(L) 1 − N 1  [k] [k] ≥ I Wk+1:K ; H, Q1:N , A1:N |W1:k , Z + + (K − k + 1)Lr. (41) N N N 16 Lemma 2 is a generalization of [12, Lemma 6] and [30, Lemma 2], and it reduces to [12, Lemma 6] when r = 0. Compared to [30, Lemma 2], the lower bound in (41) is increased by (K−k+1)Lr , N since the cached content is partially known by the databases. Proof: We start with the left hand side of (41),   [k−1] [k−1] I Wk:K ; H, Q1:N , A1:N |W1:k−1 , Z   [k−1] [k−1] = I Wk:K ; H, Q1:N , A1:N , Z|W1:k−1 − I(Wk:K ; Z|W1:k−1). (42) For the first term on the right hand side of (42), we have   [k−1] [k−1] I Wk:K ; H, Q1:N , A1:N , Z|W1:k−1   1 [k−1] [k−1] = NI Wk:K ; H, Q1:N , A1:N , Z|W1:k−1 N N  1 X ≥ I Wk:K ; Hn , Qn[k−1] , An[k−1], Zn |W1:k−1 N n=1 " N # N X  1 X I (Wk:K ; Hn , Zn |W1:k−1) + I Wk:K ; Qn[k−1] , An[k−1]|W1:k−1 , Hn , Zn = N n=1 n=1 " N # N X X  1 I (Wk:K ; Zn |W1:k−1 ) + I Wk:K ; Qn[k−1] , An[k−1] |W1:k−1, Hn , Zn ≥ N n=1 n=1 " # N X  1 (K − k + 1)Lr = N× I Wk:K ; Qn[k−1] , An[k−1] |W1:k−1, Hn , Zn + N N n=1 = (4) = (2),(3) = (5) = ≥ N  1 X (K − k + 1)Lr I Wk:K ; Qn[k−1] , An[k−1] |W1:k−1, Hn , Zn + N N n=1 N  (K − k + 1)Lr 1 X [k] I Wk:K ; Q[k] + n , An |W1:k−1 , Hn , Zn N N n=1 N  1 X (K − k + 1)Lr [k] + I Wk:K ; A[k] n |W1:k−1 , Hn , Zn , Qn N N n=1 N  1 X (K − k + 1)Lr [k] H A[k] + n |W1:k−1 , Hn , Zn , Qn N N n=1 N  (K − k + 1)Lr 1 X  [k] [k] [k] H An |W1:k−1 , H, Z, Q1:N , A1:n−1 + N N n=1 N  1 X  (K − k + 1)Lr [k] [k] I Wk:K ; A[k] |W , H, Z, Q , A + 1:k−1 1:n−1 n 1:N N N n=1  (K − k + 1)Lr 1  [k] [k] = + I Wk:K ; A1:N |W1:k−1 , H, Z, Q1:N N N (5) = (43) (44) (45) (46) (47) (48) (49) (50) (51) (52) (53) (54) 17  (K − k + 1)Lr 1  [k] [k] + I Wk:K ; H, Q1:N , A1:N |W1:k−1, Z (55) N N  o(L) 1  (6) (K − k + 1)Lr [k] [k] + I Wk:K ; Wk , H, Q1:N , A1:N |W1:k−1, Z − (56) = N N N  i 1 h (K − k + 1)Lr [k] [k] I (Wk:K ; Wk |W1:k−1, Z) + I Wk:K ; H, Q1:N , A1:N |W1:k , Z + = N N o(L) − (57) N  o(L) (K − k + 1)Lr L(1 − r) 1  [k] [k] = + + I Wk+1:K ; H, Q1:N , A1:N |W1:k , Z − , (58) N N N N (2),(3) = where (44) and (46) follow from the non-negativity of mutual information, (47) is due to the fact that from the nth database, the user prefetches KLr N bits, (49) follows from the privacy constraint, [k] (50) and (55) follow from the independence of Wk:K and Qn , (51) and (53) follow from the [k] [k] fact that the answering string An is a deterministic function of (W1:K , Qn ), (52) follows from 
conditioning reduces entropy, and (56) follows from the reliability constraint. For the second term on the right hand side of (42), we have I(Wk:K ; Z|W1:k−1) = H (Z|W1:k−1 ) − H(Z|W1:K ) = (K − k + 1) Lr (59) (60) where (60) follows from the uncoded nature of the cached bits. Combining (42), (58) and (60) yields (41).  Now, we are ready to derive the general inner bound for arbitrary K, N, r. To obtain this bound, we use Lemma 1 to find K lower bounds by varying the index k in the lemma from k = 2 to k = K, and by using the non-negativity of mutual information for the Kth bound. Next, we inductively lower bound each term of Lemma 1 by using Lemma 2 (K − k + 1) times to get K explicit lower bounds. Lemma 3 For fixed N, K and r, we have D(r) ≥ L(1 − r) K+1−k X j=0 where k = 2, . . . , K + 1.  K−k  1 1 X K +1−k−j − Lr 1 − + o(L), Nj N j=0 Nj (61) 18 Proof: We have (40) D(r) ≥ L(1 − r) + I  [k−1] [k−1] Wk:K ; H, Q1:N , A1:N |W1:k−1 , Z  − o(L) L(1 − r) 1 ≥ L(1 − r) + − (K − k + 1)Lr + (K − k + 1)Lr N N  1  [k] [k] + I Wk+1:K ; H, Q1:N , A1:N |W1:k , Z − o(L) N      (41) 1 1 (K − k) Lr ≥ L(1 − r) 1 + + 2 + − Lr (K − k + 1) + N N N N  1  [k+1] [k+1] + 2 I Wk+2:K ; H, Q1:N , A1:N |W1:k+1 , Z − o(L) N (41) (62) (63) (64) (41) ≥ ... (65)  K−k  K+1−k X 1 (41) 1 X K +1−k−j − Lr 1 − + o(L), ≥ L(1 − r) Nj N j=0 Nj j=0 (66) where (62) follows from Lemma 1, and the remaining steps follow from the successive application of Lemma 2.  We conclude the converse proof by dividing by L and taking the limit as L → ∞. Then, for k = 2, · · · , K + 1, we have ∗ D (r) ≥ (1 − r) K+1−k X j=0  K−k  1 1 X K +1−k−j −r 1− . Nj N j=0 Nj (67) Since (67) gives K intersecting line segments, the normalized download cost is lower bounded by their maximum value as follows ∗ D (r) ≥ max (1 − r) i∈{2,··· ,K+1} K+1−i X j=0  K−i  1 XK +1−i−j 1 −r 1− . Nj N j=0 Nj (68) VI. F URTHER E XAMPLES A. K = 4 Messages, N = 2 Databases For K = 4 and N = 2, we present achievable PIR schemes for caching ratios r1 = Table IV, r2 = 1 3 in Table V, and r3 = 1 2 1 8 in in Table VI. The PIR schemes aim to retrieve message W1 , where we use ai to denote its bits. The achievable normalized download costs for these caching ratios are in Figure 1. 11 5 , 8 6 and 21 , respectively. The plot of the inner and outer bounds can be found 19 TABLE IV Q UERY TABLE FOR K = 4, N = 2 AND r1 = 81 . s DB2 a 3 + b2 a 6 + b1 a4 + c2 a7 + c1 a 5 + d2 a 8 + d1 b3 + c3 b5 + c5 b4 + d 3 b6 + d 5 c4 + d4 c6 + d6 a9 + b5 + c5 a12 + b3 + c3 a10 + b6 + d5 a13 + b4 + d3 a11 + c6 + d6 a14 + c4 + d4 b7 + c7 + d7 b8 + c8 + d8 a15 + b8 + c8 + d8 a16 + b7 + c7 + d7 Z1 = (a1 , b1 , c1 , d1 ) Z2 = (a2 , b2 , c2 , d2 ) s=1 DB1 TABLE V Q UERY TABLE FOR K = 4, N = 2, r2 = 13 . s DB2 a5 + b3 + c3 a8 + b1 + c1 a 6 + d 3 + b4 a 9 + d 1 + b2 a7 + c4 + d4 a10 + c2 + d2 b5 + c5 + d5 b6 + c6 + d6 a11 + b6 + c6 + d6 a12 + b5 + c5 + d5 Z1 = (a1 , a2 , b1 , b2 , c1 , c2 , d1 , d2 ) Z2 = (a3 , a4 , b3 , b4 , c3 , c4 , d3 , d4 ) s=2 DB1 TABLE VI Q UERY TABLE FOR K = 4, N = 2, r3 = 12 . s DB1 DB2 s=3 a3 + b2 + c2 + d2 a4 + b1 + c1 + d1 Z1 = (a1 , b1 , c1 , d1 ) Z2 = (a2 , b2 , c2 , d2 ) 20 2 Achievability Converse 1.8 1.6 1.4 D L 1.2 1 0.8 0.6 0.4 0.2 0 0 0.2 0.4 r 0.6 0.8 1 Fig. 2. Inner and outer bounds for K = 5, N = 2. B. K = 5, K = 10 and K = 100 Messages, N = 2 Databases For N = 2, we show the numerical results for the inner and outer bounds for K = 5, K = 10 and K = 100 in Figures 2, 3 and 4. For fixed N as K grows, the gap between the achievable bound and converse bound increases. 
This observation will be made precise in Section VII.

[Fig. 3. Inner and outer bounds for K = 10, N = 2.]

[Fig. 4. Inner and outer bounds for K = 100, N = 2.]

VII. GAP ANALYSIS

In this section, we analyze the gap between the achievable bounds given in (10) and the converse bounds given in (12). We first observe that, for a fixed number of databases N, the achievable normalized download cost increases as the number of messages K increases. In addition to this monotonicity, the achievable normalized download cost for K + 1 messages has a special relationship with the achievable normalized download cost for K messages. We first use an example to illustrate this property. For N = 2 and K = 3, 4, 5, the achievable bounds are shown in Figure 5. The achievable bound for K = 5 lies above the achievable bound for K = 4, which in turn lies above the achievable bound for K = 3. Denoting by r_s^(K) the caching ratio with a total of K messages and parameter s (see (9)), we observe that (r_1^(5), D̄(r_1^(5))) falls on the line connecting (r_0^(4), D̄(r_0^(4))) and (r_1^(4), D̄(r_1^(4))). This observation is general: (r_s^(K+1), D̄(r_s^(K+1))) falls on the line connecting (r_{s-1}^(K), D̄(r_{s-1}^(K))) and (r_s^(K), D̄(r_s^(K))). We summarize this result in the following lemma.

[Fig. 5. Outer bounds for N = 2, K = 3, K = 4 and K = 5.]

Lemma 4 (Monotonicity of the achievable bounds) In cache-aided PIR with partially known uncoded prefetching, for a fixed number of databases N, if the number of messages K increases, then the achievable normalized download cost increases. Furthermore, we have

r_s^(K+1) = α r_{s-1}^(K) + (1 − α) r_s^(K),   (69)

D̄(r_s^(K+1)) = α D̄(r_{s-1}^(K)) + (1 − α) D̄(r_s^(K)),   (70)

where 0 ≤ α ≤ 1.

The proof of Lemma 4 is similar to [30, Lemma 4]. After establishing the monotonicity of the achievable bounds, we show that, as K → ∞, the achievable bounds admit the asymptotic upper bound given in the following lemma. With this asymptotic upper bound, we conclude that the worst-case gap is 5/32.

Lemma 5 (Asymptotics and the worst-case gap) In cache-aided PIR with partially known uncoded prefetching, as K → ∞, the outer bound is upper bounded by,

D̄(r) ≤ N/(N − 1) · (1 − r)^2.   (71)

Hence, the worst-case gap is 5/32.

The proof of Lemma 5 is provided in Section X-A.

[Fig. 6. Outer bounds for N = 2, K = 12 for different cache-aided PIR models.]

VIII. COMPARISONS WITH OTHER CACHE-AIDED PIR MODELS

In this section, we compare the normalized download costs of different cache-aided PIR models subject to the same memory size constraint. We first use an example with N = 2 and K = 12 (see Figure 6) to show the relative normalized download costs of the different models. In [29], [31], [32], the user caches M full messages out of the total K messages. In order to compare with the other cache-aided PIR schemes, we use M/K as the caching ratio. Since the PIR schemes in [29], [31], [32] are only reported for the corner points, we use dotted lines to connect their corner points. For [27], [30] and this work, since memory-sharing achieves the download costs between the corner points, we use solid lines to connect the corner points.
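Concretely, the solid curves are traced by the memory-sharing rule (39) between adjacent corner points. A minimal sketch follows; the corner points listed are those of Section VI for K = 4, N = 2, together with the endpoints r = 0 (the classical PIR cost 1 + 1/N + ... + 1/N^(K−1) = 15/8 here) and r = 1 (no download needed). The helper name is illustrative.

```python
def memory_share(r, corners):
    """Memory-sharing interpolation (39): the download cost at caching ratio r
    is the convex combination of the costs at the two adjacent corner points
    r_s <= r <= r_{s+1}; `corners` is a list of (r_s, D_s) pairs."""
    corners = sorted(corners)
    for (r_lo, d_lo), (r_hi, d_hi) in zip(corners, corners[1:]):
        if r_lo <= r <= r_hi:
            alpha = (r_hi - r) / (r_hi - r_lo)     # weight on the lower corner, cf. (39)
            return alpha * d_lo + (1 - alpha) * d_hi
    raise ValueError("r is outside the range covered by the corner points")

# K = 4, N = 2 corner points from Section VI, plus the endpoints r = 0 and r = 1.
corners = [(0.0, 15 / 8), (1 / 8, 11 / 8), (1 / 3, 5 / 6), (1 / 2, 1 / 2), (1.0, 0.0)]
print(memory_share(0.25, corners))   # cost for a caching ratio between r_1 and r_2
```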
We first compare references [29], [31], [32], in which the user caches M full messages out of the K messages and the databases are (partially) unaware of the cached content. In [31], [32], the user wishes to protect not only the privacy of the desired messages but also the privacy of the cached messages. Note that the other works ([27], [29], [30] and this work) only consider protecting the privacy of the desired messages. Since its privacy constraint is less restrictive, reference [29] achieves a lower normalized download cost than references [31], [32]. The main difference between [32] and [31] is that in [31] the databases are totally unaware of the M cached messages, whereas in [32] the nth database is aware of the messages cached from the nth database. Interestingly, these two models result in the same normalized download cost: although the nth database's awareness of some cached messages might increase the download cost, the user no longer needs to protect the privacy of these known messages from the nth database, which might reduce the download cost.

We then compare references [27], [30] and this work. The main difference between these three works is the level of awareness of the side information cached by the user. Reference [27] considers that all the databases are aware of the side information the user cached. In contrast, reference [30] considers that all the databases are unaware of the side information. This work considers that the nth database is aware of the side information cached from the nth database. Reference [30, Corollary 1] quantifies the resulting unawareness gain; therefore, reference [30] achieves a lower normalized download cost than [27]. The same proof technique as in [30, Corollary 1] also establishes a partial-unawareness gain; therefore, this work also achieves a lower normalized download cost than [27]. Since these three works consider only the privacy of the desired message, unlike [31], [32], reference [30] achieves a lower normalized download cost than this work.

IX. CONCLUSION

In this paper, we studied the cache-aided PIR problem from N non-communicating and replicated databases, when the cache stores uncoded bits that are partially known to the databases. We determined inner and outer bounds for the optimal normalized download cost D*(r) as a function of the total number of messages K, the number of databases N, and the caching ratio r. Both inner and outer bounds are piece-wise linear functions in r (for fixed N, K) that consist of K line segments. The bounds match in two specific regimes: the very low caching ratio regime, i.e., r ≤ 1/N^(K−1), and the very high caching ratio regime, where r ≥ (K − 2)/(N^2 − 3N + KN). As a direct corollary of this result, we characterized the exact tradeoff between the download cost and the caching ratio for K = 3. For general K, N, and r, we showed that the largest gap between the achievability and the converse bounds is 5/32. The achievable scheme extends the greedy scheme in [12] so that it starts by exploiting the cached bits as side information. For fixed K, N, there are K − 1 non-degenerate corner points. These points differ in the number of cached bits that contribute to generating one side information equation. The achievability for the remaining caching ratios is obtained by memory-sharing between the two adjacent corner points that enclose that caching ratio r.
For the converse, we extend the induction-based techniques in [12], [30] to account for the availability of uncoded and partially prefetched side information at the retriever. The converse proof hinges on developing K lower bounds on the length of the undesired portion of the answer string. By applying induction on each bound separately, we obtain the piece-wise linear inner bound. X. A PPENDIX A. Proof of Lemma 5 Proof: From (10), we rewrite D̄(rs ) as D̄(rs ) = = Let λ = s . K PK−1−s K (N − 1)i+1 i=0 s+1+i  PK−1−s K−1 K−2 + (N − 1)i+1 i=0 s−1 s+i PK−1−s K )(N −1)i+1 (s+1+i i=0 PK−1−s K−1 ψ1 (N, K, s) ( s+i )(N −1)i+1 i=0  (K−2 s−1 ) PK−1−s K−1 +1 ( s+i )(N −1)i+1 i=0 = ψ2 (N, K, s) + 1 (72) . (73) We first upper bound ψ1 (N, K, s), ψ1 (N, K, s) = = ≤ PK−1−s K (N − 1)i+1 i=0 s+1+i PK−1−s K−1 (N − 1)i+1 i=0 s+i PK−1−s K K−1 (N − 1)i+1 i=0 s+1+i s+i  PK−1−s K−1 (N − 1)i+1 i=0 s+i  PK−1−s K K−1 (N − 1)i+1 1 i=0 s s+i = . PK−1−s K−1 λ (N − 1)i+1 i=0 s+i  (74) (75) (76) We then upper bound the reciprocal of ψ2 (N, K, s) as, K−1−s X 1 = ψ2 (N, K, s) i=0 K−1 s+i = (N − 1)  (N − 1)i+1  K−2 i=0 K−1−s X (K − 1)(K − 1 − s)(K − 2 − s) · · · (K − i − s) (N − 1)i (78) s(s + 1)(s + 2) · · · (s + i) K(K − s)i (N − 1)i i+1 s i=0 i (1−λ)K−1  (N − 1) X (1 − λ)(N − 1) . = λ λ i=0 ≤ (N − 1) (77) s−1 K−1−s X (79) (80) 26 When λ > 1 − 1 (1−λ)(N −1) , N λ < 1. As K → ∞, 1 ψ2 (N,K,s) is upper bounded by i ∞  N − 1 X (1 − λ)(N − 1) 1 ≤ lim K→∞ ψ2 (N, K, s) λ i=0 λ N −1 · λ 1− = 1 (1−λ)(N −1) λ = N −1 . Nλ − (N − 1) (81) (82) Now, we lower bound (78) by keeping the first ǫK terms in the sum for any ǫ such that 0 < ǫ < 1 − λ, ǫK X (K − 1)(K − 1 − s)(K − 2 − s) · · · (K − i − s) 1 ≥ (N − 1) (N − 1)i ψ2 (N, K, s) s(s + 1)(s + 2) · · · (s + i) i=0 ≥ (N − 1) = (N − 1) ǫK X (K − 1)(K − ǫK − s)i i=0 ǫK X (N − 1)i (84) − (λ + ǫ))i (N − 1)i . (λ + ǫ)i+1 (85) (s + ǫK)i+1 (1 − 1 )((1 K i=0 (83) As K → ∞, for any 0 < ǫ < 1 − λ, we have i ∞  N − 1 X (1 − (λ + ǫ))(N − 1) 1 ≥ lim K→∞ ψ2 (N, K, s) λ + ǫ i=0 λ+ǫ = N −1 . N(λ + ǫ) − (N − 1) (86) (87) From (86) and (81), as K → ∞, by picking ǫ → 0, we have ψ2 (N, K, s) → N λ − 1. N −1 (88) Furthermore, as K → ∞, rs converges to rs → r = lim K−2 s−1  K−2 s−1 PK−1−s K−1 (N i=0 s+i  + − 1)i+1 ψ2 (N, K, s) = lim K→∞ ψ2 (N, K, s) + 1   Nλ − (N − 1) 1 1 = =1− 1− . Nλ N λ K→∞ Note that if λ = 1 − N1 , then r = 0, while if λ = 1, then r = in the region of 0 ≤ r ≤ 1 , N 1 . N (89) (90) (91) Since we now consider the gap without loss of generality, we consider λ > 1 − N1 . We express λ as λ= 1 − N1 . 1−r (92) 27 Continuing (73), by using (76), (88) and (92), we have the following upper bound on D̄(r) D̄(r) ≤ 1 λ N λ N −1 1 = 2 λ   N 1 = 1− (1 − r)2 . N N −1 (93) Now, we compare the inner bound in (12) with the outer bound derived in (93). Note that the inner bound in (12) consists of K line segments, and these K line segments intersect at the following K − 1 points given by, r̃i = 1 , Ni i = 1, · · · , K − 1. (94) As i increases, r̃i concentrates to r = 0. Therefore, for these K line segments, we only need to consider small number of them for the worst-gap analysis. Denote the gap between the inner and the outer bounds by ∆(N, K, r). We note that the gap ∆(N, ∞, r) is a piece-wise convex function for 0 ≤ r ≤ 1 since it is the difference between a convex function D̄(r) and a piecewise linear function. Hence, the maximizing caching ratio for the gap exists exactly at the corner points r̃i and it suffices to examine the gap at these corner points. 
For the outer bound, by plugging (94) into (93), we have   2  1 − ( N1 )i 1 1 N 1− i . 1− i D̄(r̃i ) ≤ = N −1 N N 1 − N1 (95) Furthermore, for the inner bound, we have      (i − 1) 1 1 1 1 i+ D̃(r̃i ) =(1 − ri ) 1 + + · · · + i − ri 1 − + · · · + i−1 N N N N N      (i − 1) 1 1 1 1 i+ + ···+ i + 1− + · · · + i−1 = − ri 1 + N N N N N   1 1 + 1+ +···+ i N N   1 1 +···+ i = − ri (i + 1) + 1 + N N 1 i+1 1 − (N ) 1 − ( N1 )i+1 i + 1 = − r (i + 1) = − i Ni 1 − N1 1 − N1 (96) (97) (98) (99) Consequently, we can upper bound the asymptotic gap at the corner point r̃i as   1 − ( N1 )i 1 ∆(N, ∞, r̃i ) = D̄(r̃i ) − D̃(r̃i ) ≤ i i − N 1 − N1 (100) 28 Hence, ∆(N, ∞, r̃i ) is monotonically decreasing in N. Therefore,   1 − ( 12 )i 1 ∆(N, K, r) ≤ ∆(2, ∞, r) ≤ max i i − i 2 1 − 21 (101) For the case N = 2, we note that all the inner bounds after the 7th corner point are concentrated around r = 0 since r̃i ≤ 1 128 for i ≥ 7. Therefore, it suffices to characterize the gap only for the first 7 corner points. Considering the 7th corner point which corresponds to r̃6 = 1 ) = 1.9297. Hence, ∆(2, ∞, r) ≤ 0.07, for r ≤ D̄(r) ≤ 2 trivially for all r, and D̃( 128 1 , 128 1 . 127 and Now, we focus on calculating the gap at r̃i , i = 1, · · · , 7. Examining all the corner points, we see that r= 1 8 is the maximizing caching ratio for the gap (corresponding to i = 3), and ∆(2, ∞, 81 ) ≤ 5 , 32 which is the worst-case gap.  R EFERENCES [1] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan. Private information retrieval. Journal of the ACM, 45(6):965–981, 1998. [2] W. Gasarch. A survey on private information retrieval. In Bulletin of the EATCS, 2004. [3] C. Cachin, S. Micali, and M. Stadler. Computationally private information retrieval with polylogarithmic communication. In International Conference on the Theory and Applications of Cryptographic Techniques. Springer, 1999. [4] R. Ostrovsky and W. Skeith III. A survey of single-database private information retrieval: Techniques and applications. In International Workshop on Public Key Cryptography, pages 393–411. Springer, 2007. [5] S. Yekhanin. Private information retrieval. Communications of the ACM, 53(4):68–73, 2010. [6] N. B. Shah, K. V. Rashmi, and K. Ramchandran. One extra bit of download ensures perfectly private information retrieval. In IEEE ISIT, June 2014. [7] G. Fanti and K. Ramchandran. Efficient private information retrieval over unsynchronized databases. IEEE Journal of Selected Topics in Signal Processing, 9(7):1229–1239, October 2015. [8] T. Chan, S. Ho, and H. Yamamoto. Private information retrieval for coded storage. In IEEE ISIT, June 2015. [9] A. Fazeli, A. Vardy, and E. Yaakobi. Codes for distributed PIR with low storage overhead. In IEEE ISIT, June 2015. [10] R. Tajeddine and S. El Rouayheb. Private information retrieval from MDS coded data in distributed storage systems. In IEEE ISIT, July 2016. [11] H. Sun and S. A. Jafar. The capacity of private information retrieval. In IEEE Globecom, December 2016. [12] H. Sun and S. A. Jafar. The capacity of private information retrieval. IEEE Transactions on Information Theory, 63(7):4075– 4088, July 2017. [13] H. Sun and S. Jafar. The capacity of robust private information retrieval with colluding databases. 2016. Available at arXiv:1605.00635. [14] H. Sun and S. Jafar. The capacity of symmetric private information retrieval. 2016. Available at arXiv:1606.08828. [15] K. Banawan and S. Ulukus. The capacity of private information retrieval from coded databases. IEEE Transactions on Information Theory. 
Submitted September 2016. Also available at arXiv:1609.08138. 29 [16] H. Sun and S. Jafar. Optimal download cost of private information retrieval for arbitrary message length. 2016. Available at arXiv:1610.03048. [17] Q. Wang and M. Skoglund. Symmetric private information retrieval for MDS coded distributed storage. 2016. Available at arXiv:1610.04530. [18] H. Sun and S. Jafar. Multiround private information retrieval: Capacity and storage overhead. 2016. Available at arXiv:1611.02257. [19] R. Freij-Hollanti, O. Gnilke, C. Hollanti, and D. Karpuk. Private information retrieval from coded databases with colluding servers. 2016. Available at arXiv:1611.02062. [20] H. Sun and S. Jafar. Private information retrieval from MDS coded data with colluding servers: Settling a conjecture by Freij-Hollanti et al. 2017. Available at arXiv: 1701.07807. [21] R. Tajeddine, O. W. Gnilke, D. Karpuk, R. Freij-Hollanti, C. Hollanti, and S. El Rouayheb. Private information retrieval schemes for coded data with arbitrary collusion patterns. 2017. Available at arXiv:1701.07636. [22] K. Banawan and S. Ulukus. Multi-message private information retrieval: Capacity results and near-optimal schemes. IEEE Transactions on Information Theory. Submitted February 2017. Also available at arXiv:1702.01739. [23] Y. Zhang and G. Ge. A general private information retrieval scheme for MDS coded databases with colluding servers. 2017. Available at arXiv: 1704.06785. [24] Y. Zhang and G. Ge. Multi-file private information retrieval from MDS coded databases with colluding servers. 2017. Available at arXiv: 1705.03186. [25] K. Banawan and S. Ulukus. The capacity of private information retrieval from Byzantine and colluding databases. IEEE Transactions on Information Theory. Submitted June 2017. Also available at arXiv:1706.01442. [26] Q. Wang and M. Skoglund. Secure symmetric private information retrieval from colluding databases with adversaries. 2017. Available at arXiv:1707.02152. [27] R. Tandon. The capacity of cache aided private information retrieval. 2017. Available at arXiv: 1706.07035. [28] Q. Wang and M. Skoglund. Linear symmetric private information retrieval for MDS coded distributed storage with colluding servers. 2017. Available at arXiv:1708.05673. [29] S. Kadhe, B. Garcia, A. Heidarzadeh, S. El Rouayheb, and A. Sprintson. Private information retrieval with side information. 2017. Available at arXiv:1709.00112. [30] Y.-P. Wei, K. Banawan, and S. Ulukus. Fundamental limits of cache-aided private information retrieval with unknown and uncoded prefetching. 2017. Available at arXiv:1709.01056. [31] Z. Chen, Z. Wang, and S. Jafar. The capacity of private information retrieval with private side information. 2017. Available at arXiv:1709.03022. [32] Y.-P. Wei, K. Banawan, and S. Ulukus. The capacity of private information retrieval with partially known private side information. 2017. Available at arXiv:1710.00809. [33] H. Sun and S. A. Jafar. The capacity of private computation. 2017. Available at arXiv:1710.11098. [34] M. Mirmohseni and M. A. Maddah-Ali. Private function retrieval. 2017. Available at arXiv:1711.04677. [35] M. Abdul-Wahid, F. Almoualem, D. Kumar, and R. Tandon. Private information retrieval from storage constrained databases–coded caching meets PIR. 2017. Available at arXiv:1711.05244. [36] M. A. Maddah-Ali and U. Niesen. Fundamental limits of caching. IEEE Transactions on Information Theory, 60(5):2856– 2867, May 2014. 30 [37] M. A. Maddah-Ali and U. Niesen. 
Decentralized coded caching attains order-optimal memory-rate tradeoff. IEEE/ACM Transactions on Networking, 23(4):1029–1040, August 2015. [38] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen. Online coded caching. IEEE/ACM Transactions on Networking, 24(2):836– 845, April 2016. [39] A. Sengupta, R. Tandon, and T. C. Clancy. Fundamental limits of caching with secure delivery. IEEE Transactions on Information Forensics and Security, 10(2):355–370, February 2015. [40] M. Ji, G. Caire, and A. F. Molisch. Fundamental limits of caching in wireless D2D networks. IEEE Transactions on Information Theory, 62(2):849–869, February 2016. [41] H. Ghasemi and A. Ramamoorthy. Improved lower bounds for coded caching. IEEE Transactions on Information Theory, 63(7):4388–4413, July 2017. [42] M. Ji, A. M. Tulino, J. Llorca, and G. Caire. Order-optimal rate of caching and coded multicasting with random demands. IEEE Transactions on Information Theory, 63(6):3923–3949, June 2017. [43] R. Timo and M. Wigger. Joint cache-channel coding over erasure broadcast channels. In IEEE ISWCS, August 2015. [44] K. Shanmugam, M. Ji, A. M. Tulino, J. Llorca, and A. G. Dimakis. Finite-length analysis of caching-aided coded multicasting. IEEE Transactions on Information Theory, 62(10):5524–5537, October 2016. [45] J. Zhang and P. Elia. Fundamental limits of cache-aided wireless BC: Interplay of coded-caching and CSIT feedback. IEEE Transactions on Information Theory, 63(5):3142–3160, May 2017. [46] M. Gregori, J. Gómez-Vilardebó, J. Matamoros, and D. Gündüz. Wireless content caching for small cell and D2D networks. IEEE Journal on Selected Areas in Communications, 34(5):1222–1234, May 2016. [47] M. M. Amiri and D. Gündüz. Fundamental limits of coded caching: Improved delivery rate-cache capacity tradeoff. IEEE Transactions on Communications, 65(2):806–815, February 2017. [48] C. Tian and J. Chen. Caching and delivery via interference elimination. In IEEE ISIT, July 2016. [49] K. Wan, D. Tuninetti, and P. Piantanida. On the optimality of uncoded cache placement. In IEEE ITW, September 2016. [50] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr. The exact rate-memory tradeoff for caching with uncoded prefetching. In IEEE ISIT, June 2017. [51] A. A. Zewail and A. Yener. Fundamental limits of secure device-to-device coded caching. In IEEE Asilomar Conference on Signals, Systems, and Computers, October 2016. [52] N. Naderializadeh, M. A. Maddah-Ali, and A. S. Avestimehr. On the optimality of separation between caching and delivery in general cache networks. 2017. Available at arXiv:1701.05881. [53] S. S. Bidokhti, M. Wigger, and A. Yener. Benefits of cache assignment on degraded broadcast channels. 2017. Available at arXiv:1702.08044. [54] L. Tang and A. Ramamoorthy. Low subpacketization schemes for coded caching. 2017. Available at arXiv:1706.00101. [55] L. Xiang, D. W. K. Ng, R. Schober, and V. W. S. Wong. Cache-enabled physical layer security for video streaming in backhaul-limited cellular networks. IEEE Transactions on Wireless Communications, November 2017.
Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems arXiv:1612.01216v2 [math.OC] 17 Mar 2017 Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines ∗† March 21, 2017 Abstract Decentralized optimization algorithms have received much attention due to the recent advances in network information processing. However, conventional decentralized algorithms based on projected gradient descent are incapable of handling high dimensional constrained problems, as the projection step becomes computationally prohibitive to compute. To address this problem, this paper adopts a projection-free optimization approach, a.k.a. the Frank-Wolfe (FW) or conditional gradient algorithm. We first develop a decentralized FW (DeFW) algorithm from the classical FW algorithm. The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an inexact FW algorithm. Using a diminishing step size rule and letting t be the iteration number, we show that the DeFW algorithm’s convergence rate is O(1/t) for convex objectives; is O(1/t2 ) for strongly√convex objectives with the optimal solution in the interior of the constraint set; and is O(1/ t) towards a stationary point for smooth but non-convex objectives. We then show that a consensus-based DeFW algorithm meets the above guarantees with two communication rounds per iteration. Furthermore, we demonstrate the advantages of the proposed DeFW algorithm on low-complexity robust matrix completion and communication efficient sparse learning. Numerical results on synthetic and real data are presented to support our findings. 1 Introduction Recently, algorithms for tackling high-dimensional optimizations have been sought for the ‘big-data’ challenge [3]. As these data/measurements are dispersed over clouds of networked machines, it is important to consider decentralized algorithms that can allow the agents/machines to co-operate and leverage the aggregated computation power [4]. This paper considers decentralized algorithm for tackling a constrained optimization problem with N agents: N 1 X min F (θ) s.t. θ ∈ C, where F (θ) := fi (θ) , (1) N θ∈Rd i=1 ∗ The work of H.-T. Wai is supported by NSF CCF-1553746, NSF CCF-1531050 and J. Lafond by Direction Générale de l’Armement and the labex LMH (ANR-11-LABX-0056-LMH). Preliminary versions of this work are presented at ICASSP 2016 [1] and GlobalSIP 2016 [2]. † H.-T. Wai and A. Scaglione are with the School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281, USA. E-mails: {htwai,Anna.Scaglione}@asu.edu. J. Lafond was with Institut MinesTelecom, Telecom ParisTech, CNRS LTCI, Paris, France. Email: [email protected]. E. Moulines is with CMAP, Ecole Polytechnique, Palaiseau, France. Email: [email protected]. 1 and fi (θ) is a continuously differentiable (possibly non-convex) function held by the ith agent and C is a closed and bounded convex set in Rd . Typically, the function fi (θ) models the loss function over the private data held by agent i, and C corresponds to a regularization constraint imposed on the optimization variable θ such as sparsity or low rank of an array of values. Problem (1) covers a number of applications in control theory, signal processing and machine learning, including system identification [5], matrix completion [6] and sparse learning [7, 8]. To tackle (1), the agents communicate on a network described as a graph G = (V, E), where V = {1, ..., N } is the set of agents and E ⊆ V × V describes the connectivity. 
As G is not fully connected, it is useful to apply decentralized algorithms that relies on near-neighbor information exchanges. In light of this, various authors have proposed to tackle (1) through decentralized algorithms that are built on the average consensus protocol [9, 10]. For example, [11–18] studied the decentralized gradient methods; [19, 20] considered the Newton-type methods; [21–25] considered the decentralized primal-dual algorithms. [26, 27] are built on the successive convex approximation framework. The convergence properties of these algorithms were investigated extensively, especially for convex objectives [4, 12–25]; for non-convex objectives, a few recent results can be found in [11, 25–29]. However, most prior work are projection-based such that each iteration of the above algorithms necessitates a projection step onto the constraint set C, or solving a sub-problem with similar complexity. While the projection step entails a modest complexity for problems with moderate dimension, it may become computationally prohibitive for high dimensional problems, thereby rendering the above methods impractical for our scenario of interest. To address the issue above, this paper focuses on a decentralized projection-free algorithm. We extend the Frank-Wolfe (FW) algorithm [30] to operate with near-neighbor information exchange, leading to a decentralized FW (DeFW) algorithm. The FW (a.k.a. conditional gradient) algorithm has been recently popularized due to its efficacy in handling high-dimensional constrained problems. Examples of its applications include optimal control [31], matrix completion [32], image and video colocation [33], electric vehicle charging [34] and traffic assignment [35]; see the overview article [36]. From the algorithmic perspective, the FW algorithm replaces the costly projection step in PG based algorithms with a constrained linear optimization, which often admits an efficient solution. In a centralized setting, the convergence of FW algorithm has been studied for convex problems [30,36], yet little is known for non-convex problems [37–39]. Our contributions are as follows. We first describe abstractly the DeFW algorithm as a variation of the FW algorithm with inexact iterates and gradients. We then analyze its convergence — for convex objectives, the sub-optimality (of objective values) of the iterates produced by the proposed algorithm is shown to converge as O(1/t) with t being the iteration number, and is O(1/t2 ) for strongly convex objectives when the optimal solution is in the interior of C; for nonconvex objectives, we demonstrate that the proposed algorithm has limit points that are stationary p points of (1), and they can be found at the rate of O( 1/t). We then show that a consensusbased implementation of the proposed algorithm with fixed number of communication rounds of near-neighbor information exchanges achieves all the above guarantees. To our knowledge, this is the first decentralized FW algorithm. Moreover, our convergence rate in the non-convex setting is comparable to that of a centralized projected gradient method [40]. Lastly, we present examples on communication-efficient LASSO and decentralized matrix completion. The rest of this paper is organized as follows. Section 2 develops the DeFW algorithm. We summarize the main theoretical results of convergence for convex and non-convex objective functions. A consensus-based implementation of the DeFW algorithm will then be presented in Section 3. 
Ap- 2 plications of the DeFW algorithm are discussed in Section 4. Finally, Section 5 presents numerical results to support our findings. 1.1 Notations & Mathematical Preliminaries For any d ∈ N, we define [d] as the set {1, ..., d}. We use boldfaced lower-case letters to denote vectors and boldfaced upper-case letters to denote matrices. For a vector x (or a matrix X), the notation [x]i (or [X]i,j ) denotes its ith element (or (i, j)th element). The vectorization of a matrix X ∈ Rm1 ×m2 is denoted by vec(X) = [x1 ; x2 ; . . . ; xm2 ] ∈ Rm1 m2 such that xi is the ith column of X. The vector ei ∈ Rd is the ith unit vector such that [ei ]j = 0 for all j 6= i and [ei ]i = 1. For some positive finite constants C1 , C2 , C3 , C2 ≤ C3 and non-negative functions f (t), g(t), the notations f (t) = O(g(t)), f (t) = Θ(g(t)) indicate f (t) ≤ C1 g(t), C2 g(t) ≤ f (t) ≤ C3 g(t) for sufficiently large t, respectively. Let E be a Euclidean space embedded in Rd and the Euclidean norm is denoted by k · k2 . The binary operator h·, ·i denotes the inner product on E. In addition, E is equipped with a norm k · k and the corresponding dual norm k · k? . Let G, L, µ be non-negative constants. Consider a function f : Rd → R, the function f is G-Lipschitz if for all θ, θ 0 ∈ E, |f (θ) − f (θ 0 )| ≤ Gkθ − θ 0 k ; (2) the function f is L-smooth if for all θ, θ 0 ∈ E, f (θ) − f (θ 0 ) ≤ h∇f (θ 0 ), θ − θ 0 i + L kθ − θ 0 k22 , 2 (3) the above is equivalent to k∇f (θ 0 ) − ∇f (θ)k2 ≤ Lkθ 0 − θk2 ; the function f is µ-strongly convex if for all θ, θ 0 ∈ E, µ (4) f (θ) − f (θ 0 ) ≤ h∇f (θ), θ − θ 0 i − kθ − θ 0 k22 ; 2 moreover, f is convex if the above is satisfied with µ = 0. Consider Problem (1), its constraint set C ⊆ E is convex and bounded with the diameter defined as: ρ := max kθ − θ 0 k, ρ̄ := max kθ − θ 0 k2 , (5) 0 0 θ,θ ∈C θ,θ ∈C note that ρ is defined with respect to (w.r.t.) the norm k · k while ρ̄ is defined w.r.t. the Euclidean norm. When the objective function F is µ-strongly convex with µ > 0, the optimal solution to (1) is unique and denoted by θ ? , we also define δ := min ks − θ ? k2 , s∈∂C (6) where ∂C is the boundary set of C. If δ > 0, the solution θ ? is in the interior of C. 2 Decentralized Frank-Wolfe (DeFW) We develop the decentralized Frank-Wolfe (DeFW) algorithm from the classical FW algorithm [30]. Let t ∈ N be the iteration number and the initial point θ0 ∈ C is feasible. Recall the definition 3 Algorithm 1 Decentralized Frank-Wolfe (DeFW). 1: Input: Initial point θ1i for i = 1, . . . , N . 2: for t = 1, 2, . . . do 3: Consensus: approximate the average iterate: θ̄ti ← NetAvgit ({θtj }N j=1 ), ∀ i ∈ [N ] . 4: Aggregating: approximate the average gradient: ∇it F ← NetAvgit ({∇fj (θ̄tj )}N j=1 ), ∀ i ∈ [N ] . 5: Frank-Wolfe Step: update i θt+1 ← (1 − γt )θ̄ti + γt ait where ait ∈ arg minh∇it F , ai, a∈C for all agent i ∈ [N ] and γt ∈ (0, 1] is a step size. end for i , ∀ i ∈ [N ]. 7: Return: θ̄t+1 6: F (θ) := (1/N ) PN i=1 fi (θ), the centralized FW algorithm for problem (1) proceeds by: at−1 ∈ arg min h∇F (θt−1 ), ai , a∈C θt = θt−1 + γt−1 (at−1 − θt−1 ) , (7a) (7b) where γt−1 ∈ (0, 1] is a step size to be determined. Observe that θt is a convex combination of θt−1 and at−1 which are both feasible, therefore θt ∈ C as C is a convex set. When the step size is chosen as γt = 2/(t + 1), the FW algorithm is known to converge at a rate of O(1/t) if F is L-smooth and convex [36]. 
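As a point of reference, the centralized recursion (7a)–(7b) takes only a few lines to implement. The sketch below uses the l1-ball as a concrete constraint set (anticipating the LO examples at the end of this section) and a least-squares toy objective as a stand-in for F; the function names, problem sizes and random data are illustrative only.

```python
import numpy as np

def lo_oracle_l1(grad, R):
    """LO over the l1-ball {theta : ||theta||_1 <= R}: the minimizer of <grad, a>
    is a signed, 1-sparse vertex supported on the largest-magnitude coordinate."""
    k = np.argmax(np.abs(grad))
    a = np.zeros_like(grad)
    a[k] = -R * np.sign(grad[k])
    return a

def frank_wolfe(grad_F, theta0, R, T=200):
    """Centralized FW recursion (7a)-(7b) with step size gamma_t = 2/(t+1)."""
    theta = theta0.copy()
    for t in range(1, T + 1):
        a_t = lo_oracle_l1(grad_F(theta), R)    # (7a): linear optimization over C
        gamma = 2.0 / (t + 1)
        theta = theta + gamma * (a_t - theta)   # (7b): stays feasible by convexity of C
    return theta

# Toy instance: F(theta) = 0.5 * ||A theta - y||_2^2 subject to ||theta||_1 <= 5.
rng = np.random.default_rng(0)
A, y = rng.standard_normal((50, 20)), rng.standard_normal(50)
theta_hat = frank_wolfe(lambda th: A.T @ (A @ th - y), np.zeros(20), R=5.0)
print(np.linalg.norm(A @ theta_hat - y))
```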
A main feature of the FW algorithm is that the linear optimization1 (LO) (7a) can be solved more efficiently than computing a projection, leading to a projection-free algorithm. At the end of this section, we will illustrate a few examples of C with efficient LO computations. Our next endeavor is to extend the FW algorithm to a decentralized setting via mimicking (7) with only near-neighbor information exchanges. Doing so requires replacing the centralized gradient/iterate ∇F (θt ), θt in (7) with local approximations, as similar to the strategy in [14, 24]. In the following, we offer a high-level description of the proposed DeFW algorithm and discuss the convergence properties of it. Details regarding the implementation will be postponed to Section 3. Let θti denotes an auxillary iterate kept by agent i at iteration t. Define the average iterate: P i θ̄t := N −1 N (8) i=1 θt and the local iterate θ̄ti as an approximation of the average iterate above, also kept by agent i. We require θ̄ti to track θ̄t with an increasing accuracy. Let {∆pt }t≥1 be a non-negative, decreasing sequence with ∆pt → 0, we assume 1 Notice that (7a) is a convex optimization problem with a linear objective. 4 Assumption 1 ({∆pt }t≥1 ) For all t ≥ 1, it holds that maxi∈[N ] kθ̄ti − θ̄t k2 ≤ ∆pt . (9) To compute (7a), ideally each agent has to access the global gradient, ∇F (θ̄t ). However, just the local function fi (·) is available and agent i can only compute the local gradient ∇fi (θ̄ti ). Therefore, we also need to track the average gradient, P j ∇t F := N −1 N (10) j=1 ∇fj (θ̄t ) , by the local approximation ∇it F . Note that ∇t F is close to ∇F (θ̄t ) when each of the function fi (θ) is smooth and θ̄ti is close to θ̄t . Let {∆dt }t≥1 be a non-negative, decreasing sequence with ∆dt → 0, we assume: Assumption 2 ({∆dt }t≥1 ) For all t ≥ 1, it holds that maxi∈[N ] k∇it F − ∇t F k2 ≤ ∆dt . (11) Naturally, from the local approximation ∇it F , the ith agent can compute the update direction i ait = arg mina∈C h∇it F , ai and update θt+1 similarly as in (7b). To summarize, a sketch of the DeFW algorithm can be found in Algorithm 1. Under Assumptions 1-2, for each agent i, line 5 in Algorithm 1 can be regarded as performing an inexact FW update on θ̄t , whose convergence can be characterized below. For convex objective functions, we have: Theorem 1 Set the step size as γt = 2/(t + 1). Suppose that each of fi is convex and L-smooth. Let Cp and Cg be two positive constants. Under Assumptions 1-2 [∆pt = Cp /t, ∆dt = Cg /t], we have 8ρ̄(Cg + LCp ) + 2Lρ̄2 F (θ̄t ) − F (θ ? ) ≤ , (12) t+1 for all t ≥ 1, where θ ? is an optimal solution to (1). Furthermore, if F is µ-strongly convex and the optimal solution θ ? lies in the interior of C, i.e., δ > 0 (cf. (6)), we have F (θ̄t ) − F (θ ? ) ≤ (4ρ̄(Cg + LCp ) + Lρ̄2 )2 9 · , 2 2δ µ (t + 1)2 (13) for all t ≥ 1. The proof can be found in Appendix A. We remark that θ̄t is always feasible. For strongly convex objective functions, the conditions (12), (13) imply that the sequence {θ̄t }t≥1 converges to an optimal solution of (1). Furthermore, as the consensus error, maxi∈[N ] kθ̄ti − θ̄t k2 , decay to zero (cf. Assumption 1), the local iterates {θ̄ti }t≥1 share similar convergence guarantee as {θ̄t }t≥1 . For non-convex objective functions, we study the convergence of the FW/duality gap: gt := maxh∇F (θ̄t ), θ̄t − θi . θ∈C (14) From the definition, when gt = 0, the iterate θ̄t will be a stationary point to (1). Thus we may regard gt as a measure of the stationarity of the iterate θ̄t . 
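Note that gt is a by-product of the FW step: the maximizer in (14) is exactly the output of the LO over C applied to ∇F(θ̄t), so no extra optimization is needed to monitor stationarity. For the l1-ball used in the sketch above, the gap even admits a closed form; a minimal illustration with made-up numbers follows.

```python
import numpy as np

def fw_gap_l1(grad, theta, R):
    """FW/duality gap (14) specialized to the l1-ball of radius R:
    g_t = <grad, theta> + R * ||grad||_inf, since the inner maximization over C
    is attained at a signed 1-sparse vertex of the ball."""
    return float(grad @ theta + R * np.max(np.abs(grad)))

# Made-up numbers: the gap is non-negative and vanishes only at a stationary point.
print(fw_gap_l1(np.array([0.5, -2.0, 1.0]), np.array([1.0, 0.0, 0.0]), R=5.0))  # 10.5
```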
Also, define the set of stationary point to (1) as:  C ? = θ ∈ C : max h∇F (θ), θ − θi = 0 . (15) θ∈C We consider the following technical assumption: 5 Assumption 3 The set C ? is non-empty. Moreover, the function F (θ) takes a finite number of values over C ? , i.e., the set F (C ? ) = {F (θ) : θ ∈ C ? } is finite. Verifying Assumption 3 may be hard in practice. Meanwhile, it is reasonable to assume that (1) has a finite number of stationary points since the set C is bounded. In the latter case, Assumption 3 is satisfied. We now have: Theorem 2 Set the step size as γt = 1/tα for some α ∈ (0, 1]. Suppose each of fi is L-smooth and G-Lipschitz (possibly non-convex). Let Cp , Cg be two positive constants. Under Assumption 1-2 [∆pt = Cp /tα , ∆dt = Cg /tα ], it holds that: 1. for all T ≥ 6 that are even, if α ∈ [0.5, 1), min t∈[T /2+1,T ]  gt ≤ 1 T 1−α · 1−α · (1 − (2/3)1−α )  2 (16) Gρ + (Lρ̄ /2 + 2ρ̄(Cg + LCp )) log 2 ; if α ∈ (0, 0.5), min t∈[T /2+1,T ]  gt ≤ 1 1−α · · α T (1 − (2/3)1−α ) (Lρ̄2 /2 + 2ρ̄(Cg + LCp ))(1 − (1/2)1−2α )  . Gρ + 1 − 2α (17) 2. additionally, under Assumption 3 and α ∈ (0.5, 1], the sequence of objective values {F (θ̄t )}t≥1 converges, {θ̄t }t≥1 has limit points and each limit point is in C ? . The proof can √ be found in Appendix B. Note that setting α = 0.5 gives the quickest convergence rate of O(1/ T ). It is worth mentioning that our results are novel compared to prior work on non-convex FW even in a centralized setting (N = 1, ∆pt = 0, ∆dt = 0). For instance, [37] requires that the local minimizer is unique; [38] gives the same convergence rate but uses an adaptive step size. We remark that the local iterates {θ̄ti }t≥1 share similar convergence property as {θ̄t }t≥1 due to Assumption 1. Lastly, we survey some relevant examples of the constraint set C where the LO in the DeFW algorithm (cf. line 5 in Algorithm 1) can be computed efficiently: 1. When C is the `1 ball, C = {θ ∈ Rd : kθk1 ≤ R}, ait = −R · ek , where k ∈ arg max [∇it F ]j . (18) j∈[d] The solution above amounts to finding the coordinate index of ∇it F with the maximum magnitude. Importantly, this solution is only 1-sparse. Consequently, the tth iterate θ̄t will be at most tN -sparse. The worst-case complexity of computing ait is O(d); in comparison, the worst-case complexity for the projection into an `1 ball is O(d log d)2 . 2 There exists a randomized, accelerated algorithm for projection in [41] with an expected complexity of O(d). 6 2. When C is the trace norm ball, C = {θ ∈ Rm1 ×m2 : kθkσ,1 ≤ R}, where kθkσ,1 is the sum of the singular values of θ. Let u1 , v1 be the top-1 left/right singular vector of ∇it F , we have ait = −R · u1 v1> . (19) Importantly, at a target solution accuracy of δ, the top singular vectors can be computed with a complexity of O(max{m1 , m2 } log(1/δ)) using the power/Lanczos method if kvec(∇it F )k0 = O(max{m1 , m2 }). In comparison, the projection onto the trace norm ball requires a complexity of O(max{m1 m22 , m2 m21 } log(1/δ)) for computing the full SVD of an m1 × m2 matrix [42]. For more examples of C with efficient LO computations, the interested readers are referred to [36] for an overview. 3 Consensus-based DeFW algorithm This section demonstrates how an average consensus (AC) based scheme can generate local approximations θ̄ti , ∇it F at the desirable sub-linear accuracies (cf. Assumptions 1-2) using as few as two communication rounds per iteration. Specifically, the following discussions are based on the static AC [9, 10]. 
While the exact details are left for a future work, we believe that it is possible to extend our protocol to a time varying network’s setting. We consider an undirected graph G = (V, E) and assign a non-negative, sym×N metric weighted adjacency matrix W ∈ RN that describes the local communication between the + N agents. The matrix satisfies Wij := [W ]ij > 0 iff (i, j) ∈ E and it is doubly stochastic, i.e., W 1 = W > 1 = 1. Moreover, Assumption 4 The second largest (in magnitude) eigenvalue of W is strictly less than one, i.e., |λ2 (W )| < 1. The existence of such matrix W is guaranteed if G is connected. For each round of the AC update, the agents take a weighted average of the values from its neighbors according to W . We now state the following fact regarding W . P Fact 1 Let x1 , ..., xN ∈ Rd be a set of vectors and xavg := N −1 N i=1 xi be their average. Suppose W is a doubly stochastic, non-negative matrix. The output after performing one round of AC update: P xi = N (20) j=1 Wij · xj must satisfy v v uN uN uX uX t 2 kxi − xavg k2 ≤ |λ2 (W )| · t kxi − xavg k22 , i=1 (21) i=1 where λ2 (W ) is the second largest eigenvalue of W . The fact above can be verified from linear algebra. Together with Assumption 4, the above implies that each AC update (20) moves the vectors closer to the average xavg . Repeatedly applying (21) shows the well known fact that AC computes the average xavg at a linear rate. 7 Let us consider the near-neighbor computation of θ̄ti in line 3 of Algorithm 1. Here, the consensus step is computed by: P j θ̄ti = N (22) j=1 Wij · θt , i.e., we apply one round of the AC update. Since Wij = 0 if (i, j) ∈ / E, the above operation is achievable by information exchanges with the near-neighbors of agent i. Now, for some α ∈ (0, 1], we define t0 (α) as the smallest integer such that λ2 (W ) ≤  t (α) α 1 0 · . t0 (α) + 1 1 + (t0 (α))−α (23) Notice that t0 (α) is upper bounded by: t0 (α) ≤ d(|λ2 (W )|−1/(1+α) − 1)−1 e , (24) which is finite under Assumption 4. The following lemma can be easily proven: Lemma 1 Set the step size γt = 1/tα in the DeFW algorithm for some α ∈ (0, 1], then θ̄ti in (22) satisfies Assumption 1: maxi∈[N ] kθ̄ti − θ̄t k2 ≤ ∆pt = Cp /tα , ∀ t ≥ 1 , Cp := (t0 (α))α · (25) √ N ρ̄ . (26) The proof is postponed to Appendix C, which relies on the observation that θti is a convex combii i nation of θ̄t−1 and ait−1 . In particular, θ̄t−1 is already O(1/(t − 1)α )-close to the network average from the last iteration and the update direction ait−1 is weighted by a decaying step size γt−1 . In comparison to what we were able to establish above, the near-neighbor computation of ∇it F in line algorithm is less straightforward. Unlike the computation of θ̄t , computing P4N of the DeFW i α i −1 N i=1 ∇fi (θ̄t ) to an accuracy of O(1/t ) by only communicating the local gradient ∇fi (θ̄t ) requires ∼ log t rounds of updates when the AC protocol is employed. The main issue is that the local gradient ∇fi (θ̄ti ) computed by the ith agent is different from the local gradient computed at the other agent, even when θ̄ti is close to θ̄tj for j 6= i. We propose an approach that is inspired by the fast stochastic average gradient (SAGA) method [43] which re-uses the gradient approximation ∇it−1 F from the last iteration3 . Define the following surrogate of local gradient at iteration t: i ∇it F := ∇it−1 F + ∇fi (θ̄ti ) − ∇fi (θ̄t−1 ), (27) for all i ∈ [N ]. When t = 1, we set ∇i1 F = ∇fi (θ̄1i ). We now apply the AC update to the gradient surrogate. 
In line 4 of Algorithm 1, the aggregating step is computed by: P j ∇it F = N (28) j=1 Wij · ∇t F , i.e., using just one round of AC update on ∇it F . Below we show that the average gradient is preserved by ∇it F . Moreover, ∇it F has an approximation error similar to Lemma 1: 3 After the submission, the authors notice that a similar technique is adopted in [13, 18, 27] under the name of ‘gradient tracking’ for various decentralized methods. We provide a rate analysis with non-asymptotic constants. 8 Lemma 2 Set the step size γt = 1/tα in the DeFW algorithm for some α ∈ (0, 1]. Suppose that each of fi is L-smooth, θ̄ti is updated according to (22), then ∇it F in (28) satisfies P PN i −1 i N −1 N (29) i=1 ∇t F = N i=1 ∇fi (θ̄t ), ∀ t ≥ 1 , and Assumption 2: maxi∈[N ] k∇it F − ∇t F k2 ≤ ∆dt = Cg /tα , ∀ t ≥ 1 , √ Cg := (t0 (α))α · 2 N (2Cp + ρ̄)L . (30) (31) The proof can be found in Appendix D. Similar intuition as in Lemma 1 was used in the proof. In i ). The particular, we observe that ∇tt F is a linear combination of ∇it−1 F and ∇fi (θ̄ti ) − ∇fi (θ̄t−1 former is O(1/(t − 1)α )-close to the average, and the latter decays to zero as enforced by the step size γt . Finally, the conditions on ∆pt , ∆dt necessitated by Theorem 1 & 2 can be satisfied by (22) & (28). Corollary 1 Under Assumption 4, the results in Theorem 1 & 2 hold when line 3, line 4 in the DeFW algorithm (Algorithm 1) are computed by (22), (28) respectively. In other words, the consensus-based DeFW algorithm converges for both convex and non-convex problems, while using a constant number of communication rounds per iteration. As a final remark, recall from Theorem 2 that for non-convex objectives, the best rate of convergence can be achieved if we set α = 0.5. However, from Lemma 1 & 2, we notice that the approximation error also decays the slowest when α = 0.5. This presents a potential tradeoff in the choice of α. From our numerical experience, we find that setting α = 0.75 yields a good performance in practice. 4 Applications In this section, we study two applications of the DeFW algorithm to illustrate its flexibility and efficacy. 4.1 Example I: Decentralized Matrix Completion Consider a setting when the network of agents obtain incomplete observations of a matrix θtrue of dimension m1 × m2 with m1 , m2  0. The ith agent has corrupted observations from the training set Ωi ⊂ [m1 ] × [m2 ] that are expressed as: Yk,l = [θtrue ]k,l + Zk,l , ∀ (k, l) ∈ Ωi . (32) To recover a low-rank θtrue , we consider the following trace-norm constrained matrix completion (MC) problem: N X X min f˜i ([θ]k,l , Yk,l ) s.t. kθkσ,1 ≤ R , (33) θ∈Rm1 ×m2 i=1 (k,l)∈Ωi where f˜i : R2 → R is a loss function picked by agent i according to the observations he/she received. Notice that (33) is also related to the low rank subspace system identification problem described 9 in [5], where Y with [Y ]k,l = Yk,l , θtrue are modeled as the measured system response and the ground truth low rank response; also see [44] for a related work. Similar MC problems have been considered in [45–48], where [45] studied a consensus-based optimization method similar to ours and [46–48] studied the parallel computation setting where the agents are working synchronously in a fully connected network. Compared to our approach, these work assume that the rank of θtrue is known in advance and solve the MC problem via matrix factorization. In addition, [45,46] required that each local observation set Ωi only have entries taken from a disjoint subset of the columns/rows only. 
Our approach does not have any restrictions above. We consider two different observation models. When Zk,l is the i.i.d. Gaussian noise of variance 2 σi , we choose f˜i (·, ·) to be the square loss function, i.e., f˜i ([θ]k,l , Yk,l ) := (1/σi2 ) · (Yk,l − [θ]k,l )2 . (34) This yields the classical MC problem in [6]. The next model considers the sparse+low rank matrix completion in [49], where the observations are contaminated with a sparse noise. Here, we model Zk,l as a sparse noise in the sense that there are a few number of entries in Ωi where Zk,l is non-zero. We choose f˜i (·, ·) to be the negated Gaussian loss, i.e.,   ([θ] − Y )2  k,l k,l f˜i ([θ]k,l , Yk,l ) := 1 − exp − , (35) σi where σi > 0 controls the robustness to outliers for the data obtained at the ith agent. Here, f˜i (·, ·) is a smoothed `0 loss [50] with enhanced robustness to outliers in the data. Notice that the resultant MC problem (33) is non-convex. Note that (33) is a special case of problem (1) with C being the trace-norm ball. The consensusbased DeFW algorithm can be applied on (33) directly. The projection-free nature of the DeFW algorithm leads to a low complexity implementation (33). Lastly, several remarks on the communication and storage cost of the DeFW algorithm are in order: • The SAGA-like gradient surrogate ∇it F (27) is supported only on ∪N i=1 Ωi since for all i ∈ [N ], the local gradient X ∇fi (θ̄ti ) = (36) f˜i0 ([θ̄ti ]k,l , Yk,l ) · ek (e0l )> (k,l)∈Ωi is supported on Ωi , where θ̄ti is defined in (22). In the above, ek (e0l ) is the kth (lth) canonical basis vector for Rm1 (Rm2 ) and f˜i0 (θ, y) is the derivative of f˜i (θ, y) taken with respect to θ. N Consequently, the average ∇it F is supported only on ∪N i=1 Ωi . As | ∪i=1 Ωi |  m1 m2 , the amount of information exchanged during the aggregating step (Line 4 in DeFW) is low. • The update direction ait is a rank-one matrix composed of the top singular vectors of ∇it F (cf. (19)). Since every iteration in DeFW adds at most N distinct pair of singular vectors to θ̄t , the rank of θ̄ti is upper bounded by tN if we initialize by θ̄0i = 0. We can reduce the communication cost in Line 3 in DeFW by exchanging these singular vectors. Note that (tN ) · (m1 + m2 ) entries are stored/exchanged instead of m1 · m2 . • When the agents are only concerned with predicting the entries of θtrue in the subset Ξ ⊂ [m1 ] × [m2 ], instead of propagating the singular vectors as described above, the consensus i step can be carried out by exchanging only the entries of θt+1 in Ξ ∪ ∪N i=1 Ωi without affecting the operations  of the DeFW algorithm. In this case, the storage/communication cost is |Ξ ∪ ∪N Ω |. i i=1 10 4.2 Example II: Communication Efficient DeFW for LASSO Let (yi , Ai ) be the available data tuple at agent i ∈ [N ] such that Ai ∈ Rm×d and yi ∈ Rm . The data yi is a corrupted measurement of an unknown parameter θtrue : yi = Ai θtrue + zi , (37) N (0, σ 2 I) where zi ∼ are independent noise vectors. Furthermore, we assume m  d such that the > matrix Ai Ai is rank-deficient. However, the parameter θtrue is s-sparse such that s = kθtrue k0  d. This motivates us to consider the following distributed LASSO problem: min θ∈Rd N X 1 i=1 2 kyi − Ai θk22 s.t. kθk1 ≤ R , (38) Notice that the above is a special case of (1) with fi (θ) = (1/2)kyi − Ai θk22 and C is an `1 -ball in Rd with radius R. We assume that (38) has an optimal solution θ ? that is sparse. 
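For concreteness, the observation model (37) and the local gradients entering (38) can be set up as follows. The problem sizes, noise level and random seed in this sketch are illustrative placeholders and not the values used in the experiments of Section 5.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m, s = 50, 500, 20, 10                   # illustrative sizes with m << d

# Common s-sparse ground truth, cf. (37).
theta_true = np.zeros(d)
theta_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)

# Agent i privately holds (y_i, A_i) with y_i = A_i theta_true + z_i.
data = []
for _ in range(N):
    A_i = rng.standard_normal((m, d))
    y_i = A_i @ theta_true + 0.1 * rng.standard_normal(m)
    data.append((y_i, A_i))

def local_grad(theta, y_i, A_i):
    """Gradient of the local loss f_i(theta) = 0.5 * ||y_i - A_i theta||_2^2 in (38)."""
    return A_i.T @ (A_i @ theta - y_i)

print(local_grad(np.zeros(d), *data[0]).shape)   # agent 0's local gradient, shape (500,)
```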
The settings above also correspond to identifying a linear system described by a sparse parameter θtrue , where Ai , yi are the input, output of the system, respectively; see [51] for a related formulation on the identification of switched linear systems. A number of decentralized algorithms are easily applicable to (38). For example, the decentralized projected gradient (DPG) algorithm in [15] is described by — at iteration t, PN PN j,P G  i,P G j,P G , (39) θt+1 = PC − αt ∇fi j=1 Wij θt j=1 Wij θt where αt ∈ (0, 1] is a diminishing step size and Wij is the weighted adjacency matrix described in Section 3. For convex problems, the DPG algorithm is shown to converge to an optimal solution √ ? θ of (38) at a rate of O(1/ t) [12]. Let us focus on the communication efficiency of the DPG algorithm, which is important when the network between agents is limited in bandwidth. To this end, we define the communication cost as the number of non-zero real numbers exchanged per agent. As seen from (39), at each iteration the ith agent exchanges its current iterate θti,P G with the neighboring agents. From the computation step shown, θti,P G may contain as high as O(d) non-zeros and the per-iteration communication cost will be O(d). Despite the high communication cost, the per-iteration computation complexity of (39) is also high, i.e., at O(d log d) [41]. We notice that [7, 8] have considered distributed sparse recovery algorithm with focus on the communication efficiency. However, their algorithms are based on the iterative hard thresholding (IHT) formulation [52] that requires a-priori knowledge on the sparsity level of θtrue . Our consensus-based DeFW algorithm in Section 3 may also be applied directly to (38). However, similar issue as the DPG algorithm may arise during the aggregating step, since the gradient surrogate (27) may also have O(d) non-zeros. Lastly, another related work is [53] which applies coding to ‘compress’ the message exchanged in the consensus-based algorithms. This section proposes a sparsified DeFW algorithm for solving (38). The modified algorithm applies a novel ‘sparsification’ procedure to reduce communication cost during the iterations, which is enabled by the structure of the DeFW algorithm. To describe the sparsified DeFW algorithm, we first argue that the consensus step in the consensus-based DeFW should remain unchanged as it already has a low communication cost. From (18) and (22), we see that θti is at most (t − 1)N + 1sparse since ait is always a 1-sparse vector4 (cf. (18)). As such, the communication cost of this step is bounded by tN . 4 As pointed out by [36], this observation also leads to an interesting sparsity-accuracy trade-off when applying FW on `1 constrained problems. 11 Our focus is to improve the communication efficiency of aggregating step. Here, the key idea is that only the largest magnitude coordinate in ∇it F is sought when computing ait (cf. (18)). As long as the largest magnitude coordinate in ∇it F is preserved, the updates in the DeFW algorithm can remain unaffected. This motivates us to ‘sparsify’ the gradient information at each iteration before exchanging them with the neighboring agents. Let Ωt ⊆ [d] be the coordinates of the gradient information to be exchanged at iteration t. The agents exchange the following gradient surrogate in lieu of (27):  P d i F := ∇f (θ̄ i ) ∇ 1Ωt , where 1Ωt = k∈Ωt ek , (40) i t t and denotes the Hadamard/element-wise product. 
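A minimal sketch of the sparsification step (40) is given below: each agent zeroes out all gradient coordinates outside Ωt before any exchange takes place. The helper name, the toy gradient and the choice of Ωt,i (here, the largest-magnitude coordinates, anticipating the 'extreme coordinate' rule introduced next) are illustrative.

```python
import numpy as np

def sparsify_grad(local_grad, omega_t):
    """Sparsified gradient surrogate (40): zero out every coordinate of the local
    gradient outside Omega_t before it is exchanged over the network."""
    mask = np.zeros_like(local_grad)
    mask[list(omega_t)] = 1.0
    return local_grad * mask                     # Hadamard product with 1_{Omega_t}

# Toy example: agent i contributes its p_t largest-magnitude coordinates to Omega_t
# (the 'extreme coordinate' rule introduced below); random selection works similarly.
g_i = np.array([0.1, -3.0, 0.02, 1.5, -0.4])
p_t = 2
omega_ti = np.argsort(np.abs(g_i))[-p_t:]
print(sparsify_grad(g_i, omega_ti))              # [ 0.  -3.   0.   1.5  0. ]
```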
Let `t = dCl + log(t)/ log |λ−1 2 (W )|e where Cl is some finite constant and λ2 (W ) is the second largest eigenvalue of the weight matrix W , the sparsified DeFW algorithm computes the approximate gradient average ∇it F in line 4 of Algorithm 1 by: ∇it F = PN j=1 [W `t ] ij [ · ∇jt F . (41) Note that (41) requires `t rounds of AC updates to be performed at iteration t, i.e., a logarithmically increasing number of rounds of AC updates. The update direction ait can then be computed by sorting the vector ∇it F . As ∇it F is |Ωt |-sparse, this update direction can be computed in O(|Ωt |) time. We pick the coordinate set Ωt in a decentralized manner. Consider the following decomposition: S (42) Ωt = N i=1 Ωt,i , where Ωt,i ⊂ [d] is picked by agent i at iteration t. The coordinate set Ωt needs to be known by all agents before (41). This can be achieved with low communication overhead, e.g., by forming a spanning tree on the graph G and broadcasting the required indices in Ωt to all agents; see [54]. Set pt as the maximum desirable cardinality of Ωt,i , agent i chooses the coordinate set using one of the following two schemes: • (Random coordinate) Each agent selects Ωt,i by picking pt coordinates uniformly (with replacement) from [d]. • (Extreme coordinate) Each agent selects Ωt,i as the pt largest magnitude coordinates of the vector ∇fi (θ̄ti ). For the random coordinate selection scheme, the following lemma shows that the gradient approximation error can be controlled at a desirable rate with an appropriate choice of pt . Let ξt := (1 − (1 − 1/d)pt N ), we have: Lemma 3 Set  > 0 and `t = dCl + log(t)/ log |λ−1 2 (W )|e. Let pt ≥ C0 t for some C0 < ∞. With 2 probability at least 1 − π /6, the following holds for all θ ∈ C: ξt−1 ∇it F N 1 X − ∇fi (θ̄ti ) N i=1 for all t ≥ 1 and i ∈ [N ]. 12 ∞  dplog(t2 /)  =O , tN (43) The proof can be found in Appendix E. Note that the above is given in terms of ξt−1 ∇it F instead of ∇it F . However, the result remains relevant as the LO (7a) in the DeFW algorithm is scale invariant, i.e., arg mina∈C h∇it F , ai = arg mina∈C hα∇it F , ai for any α > 0. In other words, performing the FW stepPwith ∇it F is equivalent to doing so with ξt−1 ∇it F . As ξt−1 ∇it F is an O(1/t) approximation j to N −1 N j=1 ∇fj (θ̄t ), Assumption 2 is satisfied with ∆dt = O(1/t). Lastly, we conclude that Corollary 2 The sparsified DeFW algorithm using random coordinate selection, i.e., with line 3 & 4 in Algorithm 1 replaced by (22) & (41), respectively, generates iterates that satisfy the guarantees in Theorem 1 (with high probability). Under strong convexity and interior optimal point assumption, the communication complexity is O(N · (1/δ) · log(1/δ)) to reach a δ-optimal solution to (38). In the above, the first statement is a consequence of Lemma 3. √ The second statement can be verified by noting that reaching a δ-optimal solution requires O(1/ δ) iterations and the communication cost is O(N t log t) at iteration t, as the agents exchange an O(N t)-sparse vector for Θ(log t) times. 5 Numerical Experiments We perform numerical experiments to verify our theoretical findings on the DeFW algorithm. The following discussions will focus on the two applications described in Section 4 using synthetic and real data. To simulate the decentralized optimization setting, we artificially construct a network of N = 50 agents, where the underlying communication network G is an Erdos-Renyi graph with connectivity of 0.1. 
For the AC steps (22), (28) & (41), the doubly stochastic matrix W is calculated according to the Metropolis-Hastings rule in [55]. 5.1 Decentralized Matrix Completion This section considers the decentralized matrix completion problem, where the goal is to predict missing entries of an unknown matrix through corrupted partial measurements. We consider two datasets — the first dataset is synthetically generated where the unknown matrix θtrue is rank-K P > /K where y , x have y x and has dimensions of m1 × m2 ; the matrix is generated as θtrue = K i i i i i=1 i.i.d. N (0, 1) entries and different settings of m1 , m2 , K will be experimented. The second dataset is the movielens100k dataset [56]. The unknown matrix θtrue records the movie ratings of m1 = 943 users on m2 = 1682 movies; and a total of 105 entries in θtrue are available as integers ranging from 1 to 5. The datasets are divided into training and testing sets and the mean square error (MSE) on the testing set is evaluated as: P 2 (44) MSE = |Ωtest |−1 (k,l)∈Ωtest [θtrue ]k,l − [θ̂]k,l , where θ̂ denotes the estimated θ produced by the algorithm. For the synthetic dataset, the training (testing) set contains 20% (80%) entries which are selected randomly. For movielens100k, the training (testing) set contains 80 × 103 (20 × 103 ) entries. The training data of the two datasets are equally partitioned into N = 50 parts; for movielens100k, each agent holds 1600 entries. We evaluate the performance of the proposed consensus-based DeFW algorithm applied to square loss and negated Gaussian loss, as described in Section 4.1. Unless otherwise specified, we fix the number of AC rounds applied at ` = 1 such that the agents only exchange information once per iteration. As the negated Gaussian loss is non-convex, we set the 13 10-1 100 Test MSE Objective value / Consensus Error 101 10-1 10-2 10-3 Sq. Loss Obj. 10-4 1 10 10-2 Obj. (Sq., DPG) Obj. (Gau., Cen.) Obj. (Sq., Cen.) Obj. (Gau., DeFW) Obj. (Sq., DeFW) Consensus Err (Gau.) Consensus Err (Sq.) Duality (Gau., DeFW) 102 103 Iteration number 104 101 4000 6000 DPG w/ Sq. Loss Gau. Loss (Cen.) Sq. Loss (Cen.) Gau. Loss (DeFW) Sq. Loss (DeFW) Qing et al. 102 103 8000 10000 Iteration number 104 101 100 10-1 10-2 0 DPG DeFW 2000 Iteration number Figure 1: Results on noiseless synthetic data with m1 = 100, m2 = 250 and rank K = 5. (TopLeft) Objective value and consensus error of θ̄ti against iteration number t, the objective values are evaluated by F (θ̄t ). (Top-Right) Worst-case MSE (among agents) against iteration number on the testing set. (Bottom) Objective value (sq. loss) against iteration number t for DeFW and DPG. The legend ‘Gau.’, ‘Sq.’ denote the consensus-based DeFW algorithm applied to (33) with the negated Gaussian and square loss, respectively. step size as γt = t−0.75 . The centralized FW algorithm for both losses will also be compared (cf. (7)); as well as the decentralized algorithm √ in [45] (labeled as ‘Qing et al.’) and the DPG algorithm [15] with step size set to αt = 0.1N/( t + 1) applied to square loss. Our first example considers the noiseless synthetic dataset of problem dimension m1 = 100, m2 = 250, K = 5. The results are shown in Fig. 1. Here, for the DeFW/DPG algorithms, we set the trace-norm radius to R = 1.2kθtrue kσ,1 ; and the algorithm in [45] is supplied with the true rank K of θtrue . Notice that for this set of data, the minimum of (33) can be achieved by θ = θtrue ∈ C with a zero objective value. 
From the top-left plot, for the DeFW algorithm applied to the convex square loss function, we observe an O(1/t2 ) trend for the objective values, corroborating with our analysis in Theorem 1; for the non-convex negated Gaussian loss function, the objective value and the FW/duality gap gt also decay with t, the latter indicates the convergence to a stationary point. Moreover, the consensus error of θ̄ti for DeFW applied to the two objective functions decay at the 14 Objective value 700 600 0.25 10-1 0.20 30 10 100 500 10-2 10-3 400 0 50 104 Obj. (Sq., DPG) Obj. (Gau., Cen.) 103 Obj. (Sq., Cen.) Obj. (Gau., DeFW) 102 Obj. (Sq., DeFW) Consensus Err (Gau.) 1 Consensus Err (Sq.) 10 Duality (Gau., DeFW) Consensus Err/Duality gap Test MSE 800 0.15 0.10 10-4 0.050 10-5 1000 2000 3000 4000 5000 Iteration number DPG w/ Sq. Loss Gau. Loss (Cen.) Sq. Loss (Cen.) Gau. Loss (DeFW) Square Loss (DeFW) Qing et al. 1000 2000 3000 4000 5000 Iteration number Figure 2: Results on sparse-noise contaminated synthetic data with m1 = 100, m2 = 250 and rank K = 5. (Left) Objective value and consensus error of θ̄ti against the DeFW iteration number t. Notice that the consensus error (in purple and yellow) / duality gap (in black) are plotted in a logarithmic scale (cf. the right y-axis) while the objective values are plotted in a linear scale; (Right) MSE against the DeFW iteration number t on the testing set. rate predicted by Lemma 1. On the other hand, the top-right plot compares mean square error (MSE) of the predicted matrix θ for the testing set. Here, we also compare the result with the algorithm in [45]. We observe that the MSE performance of the DeFW algorithms approach their centralized counterpart as the iteration number grows, yet the algorithm in [45] achieves the best performance in this setting, notice that the true rank of θtrue is provided to this algorithm. From the bottom plot, the DPG method applied to square loss function converges at a relatively fast rate in the beginning, but was overtaken by DeFW as the iteration number grows. It is worth mentioning that the DeFW algorithms have a consistently better MSE performance than DPG. The second example considers adding noise to the observations for the same synthetic data case in Fig. 1. In particular, we adopt the same setting as the previous example but include a sparse noise in the observations — here, each Zk,l = pk,l · Z̃k,l where pk,l is Bernoulli with P (pk,l = 1) = 0.2 and Z̃k,l ∼ N (0, 5) (cf. (32)). The convergence results are compared in Fig. 2. For the left plot, we observe similar convergence behaviors for the DeFW algorithms applied to different objective functions as in the previous example. On the right plot, we observe that the DeFW algorithm based on negated Gaussian loss achieves the lowest MSE, demonstrating its robustness to outlier noise. We also see that the algorithm in [45] performs poorly on this dataset. Another interesting discovery is that the algorithm in [45] seems to fail when the rank of θtrue is high, even when the true rank is known and the observations are noiseless. In Fig. 3, we show the MSE against iteration number of the algorithms when the synthetic data is noiseless and generated with m1 = 100, m2 = 250, K = 10. As seen, [45] fails to produce a low MSE, while DeFW offers a reasonable performance. The next example evaluates the objective value and test MSE on synthetic, noiseless data against the average runtime per agent. We focus on comparing the DeFW and DPG algorithms. 15 DPG w/ Sq. 
Loss Gaussian Loss (Cen.) Square Loss (Cen.) Gaussian Loss (DeFW) Square Loss (DeFW) Qing et al. Test MSE 0.20 0.15 0.10 0.05 0 1000 2000 3000 Iteration number 4000 5000 102 101 Test MSE Objective value Figure 3: Convergence of test MSE against iteration number on the testing set on noise-free synthetic data with m1 = 100, m2 = 250 and rank K = 10. 100 10-1 10-2 0 Sq. loss, DPG Sq. loss, DeFW 50 100 150 Running Time (s) 200 0.30 0.25 0.20 0.15 0.10 0.05 0.000 Sq. loss, DPG Sq. loss, DeFW Gau. loss, DeFW 50 100 150 Running Time (s) 200 Figure 4: Results on noiseless synthetic data with m1 = 200, m2 = 1000 and rank K = 5. (Left) Objective value against running time. (Right) Worst-case MSE (among agents) against running time. In Fig. 4, DeFW demonstrates a significant advantage over DPG since the former does not require the projection computation. In fact, the average running time per iteration of DeFW is five times faster than DPG. We also expect the complexity advantages to widen as the problem size grows. Lastly, we consider the dataset movielens100k. We set R = 105 and focus on the test MSE evaluated against the iteration number for the proposed DeFW algorithm. The numerical results are presented in Fig. 5, where we also compare the case when we apply multiple (` = 1, 3) rounds of AC updates per iteration to speed up the algorithm. The left plot in Fig. 5 considers the noiseless scenario. As seen, the proposed DeFW algorithm applied on different loss functions converge to a reasonable MSE that is attained by the centralized FW algorithm. We see that the DeFW with negated Gaussian loss has a slower convergence compared to the square loss which is possibly attributed to the non-convexity of the loss function. Moreover, the algorithms achieve much faster convergence if we allow ` = 3 AC rounds of network information exchange per iteration. The right plot in Fig. 5 considers when the observations are contaminated with a sparse noise of the same model as Fig. 2. We observe that the negated Gaussian loss implementations attain the best MSE as the non-convex loss is more robust against the sparse noise. Interestingly, the DeFW algorithm with ` = 3 AC rounds has even outperformed its centralized counterpart. We suspect that this is caused by the DeFW algorithm converging to a different local minima for the non-convex problem. 16 100 3.5 10-1 Duality / Consensus Error 4.0 Test MSE 3.0 2.5 2.0 1.5 1.0 0 2000 4000 6000 8000 10000 Iteration number 10-3 10-4 10-5 10-6 0 100 3.5 10-1 Duality / Consensus Error 4.0 3.0 Test MSE 10-2 2.5 2.0 1.5 1.0 0 2000 4000 6000 8000 10000 Iteration number Gau. Loss (Cen.) Sq. Loss (Cen.) Gau. Loss (DeFW, 1AC) Sq. Loss (DeFW, 1AC) 2000 4000 6000 8000 10000 Iteration Number 10-2 10-3 10-4 10-5 10-6 0 Gau. Loss (DeFW., 3AC) Sq. Loss (DeFW, 3AC) Duality gap (1AC) Duality gap (3AC) 2000 4000 6000 8000 10000 Iteration Number Consensus Err. (Gau., 1AC) Consensus Err. (Gau., 3AC) Consensus Err. (Sq., 1AC) Consensus Err. (Sq., 3AC) Figure 5: Convergence of the DeFW algorithm on movielens100k. (Top) Noiseless observations; (Bottom) sparse-noise contaminated observations. Note that the duality gap and consensus error are drawn in a logarithmic scale in the right plots. 5.2 Communication-efficient LASSO This section conducts numerical experiments on the decentralized sparse learning problem. We focus on the sparsified DeFW algorithm in Section 4.2 that has better communication efficiency. We evaluate the performance on both synthetic and benchmark data. 
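Before describing the data, we recall why the sparsified DeFW iterations are inexpensive here. Assuming, as the radius choices in this section suggest, that the constraint set in (38) is the ℓ1-norm ball of radius R, the Frank-Wolfe linear step has a closed-form, 1-sparse solution; the following is a minimal sketch (our illustration, not the authors' code).

```python
import numpy as np

def lmo_l1_ball(grad, R):
    # Frank-Wolfe linear step over {a : ||a||_1 <= R}: the minimizer of <grad, a>
    # is the signed vertex along the largest-magnitude gradient coordinate,
    #   a = -R * sign(grad[k]) * e_k,  k = argmax_j |grad[j]|.
    k = int(np.argmax(np.abs(grad)))
    a = np.zeros_like(grad)
    a[k] = -R * np.sign(grad[k])
    return a

g = np.array([0.3, -2.0, 0.7])
print(lmo_l1_ball(g, R=1.0))   # -> [0. 1. 0.]
```

Together with the sparsified gradient averaging described in Section 4.2, this keeps the per-iteration information that agents need to form and exchange small.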
For the synthetic data, we randomly generate each Ai as a (m = 20)×(d = 10000) matrix with N (0, 1) elements (cf. (37)) and θtrue is a random sparse vector with kθtrue k0 = 50 such that the non-zero elements are also N (0, 1). The observation noise zi has a variance of σ 2 = 0.01. For benchmark data, we test our method on 17 105 103 102 101 100 DeFW (extreme) DeFW (rand) DPG PG-EXTRA BHT 105 Objective value 104 Objective value 106 DeFW (extreme) DeFW (rand) DPG PG-EXTRA BHT 104 103 102 101 200 400 600 800 Iteration number 1000 100 4 10 105 106 Communication Cost 107 Figure 6: Convergence of the objective value on LASSO with synthetic dataset. (Left) against the iteration number. (Right) against the communication cost (i.e., total number of values transmitted/received in the network during AC updates). In the legend, ‘DeFW (extreme)’ refers to the extreme coordinate selection and ‘DeFW (rand)’ refers to the random coordinate selection scheme. sparco7 [57], which is a commonly used dataset for benchmarking sparse recovery algorithms. For sparco7, we have Ai ∈ R12×2560 as the local measurement matrix and θtrue is a sparse vector with kθtrue k0 = 20. The sparsified DeFW algorithm is implemented with pt = d2 + αcomm · te, `t = dlog(t) + 1e with extreme or random coordinate selection. We compare the algorithms of PG-EXTRA [16] (with fixed step size α = 1/d), DPG [15] (with step size αt = 1/t) and BHT [7]. DeFW, PG-EXTRA and DPG are set to solve the convex problem (38) with R = 1.1kθtrue k1 . BHT is a communication efficient decentralized version of IHT and is supplied with the true sparsity level in our simulations. The first example in Fig. 6 shows the convergence of the algorithms on the synthetic data, where we compare the objective value against the number of iterations and the communication cost, i.e., total number of values sent during the distributed optimization. We set αcomm = 0.05 for the DeFW algorithms. From the left plot, we observe that DeFW and PG-EXTRA algorithms have similar iteration complexity while ‘DeFW (rand)’ seems to have the fastest convergence. Meanwhile, BHT demands a high number of iterations for convergence. On the other hand, in the right plot, the DeFW algorithms demonstrate the best communication efficiency at low accuracy, while they lose to BHT at higher accuracy. Lastly, ‘DeFW (extreme)’ achieves a better accuracy at the beginning (i.e., less communication cost paid) but is overtaken by ‘DeFW (rand)’ as the communication cost grows. We then compare the performance on sparco7, where we show the convergence of objective value against the communication cost in Fig. 7. We set αcomm = 0.025 for the sparsified DeFW algorithms. At low accuracy, the DeFW algorithms offer the best communication cost-accuracy trade-off, i.e., it performs the best at an accuracy of above ∼ 10−2 . Moreover, ‘DeFW (extreme)’ seems to be perform better than ‘DeFW (rand)’ in this example. Nevertheless, the BHT algorithm 18 Objective value 10-1 10-2 10-3 104 DeFW (extreme) DeFW (rand) DPG PG-EXTRA BHT 105 106 Communication Cost Figure 7: Convergence of the objective value against the communication cost on LASSO with sparco7 dataset. In the legend, ‘DeFW (extreme)’ refers to the extreme coordinate selection and ‘DeFW (rand)’ refers to the random coordinate selection scheme. achieves the best performance when the communication cost paid is above 3 × 105 . 
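For reference, the synthetic measurement model used in Fig. 6 can be generated along the following lines. This is a sketch with the sizes stated at the start of this subsection (toy sizes are used in the usage line), and the function name is ours.

```python
import numpy as np

def make_lasso_data(N=50, m=20, d=10000, k=50, noise_var=0.01, seed=0):
    # Each agent i holds (A_i, y_i) with A_i an m x d standard Gaussian matrix and
    # y_i = A_i @ theta_true + z_i, where theta_true is k-sparse with N(0,1) entries
    # on a random support and z_i has variance noise_var, as described above.
    rng = np.random.default_rng(seed)
    theta_true = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    theta_true[support] = rng.standard_normal(k)
    data = []
    for _ in range(N):
        A = rng.standard_normal((m, d))
        z = np.sqrt(noise_var) * rng.standard_normal(m)
        data.append((A, A @ theta_true + z))
    return theta_true, data

# Toy sizes for a quick check; the experiment above uses N=50, m=20, d=10000, k=50.
theta_true, data = make_lasso_data(N=5, d=1000, k=10)
A0, y0 = data[0]
print(A0.shape, y0.shape, int(np.count_nonzero(theta_true)))
```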
Lastly, we comment that although BHT has the lowest communication cost at high accuracy, its computational complexity is high as the former requires a large number of iterations to reach a reasonable accuracy (cf. left plot of Fig. 6). The sparsified DeFW offers a better balance of the communication and computation complexity. 6 Conclusions & Open Problems In this paper, we have studied a decentralized projection-free algorithm for constrained optimization, which we called the DeFW algorithm. Importantly, we showed that the DeFW algorithm converges for both convex and non-convex loss functions and the respective convergence rates are analyzed. The efficacy of the proposed algorithm is demonstrated through tackling two problems related to machine learning, with the advantages over previous state-of-the-art demonstrated through numerical experiments. Future directions of study include developing an asynchronous version of the DeFW algorithm. 7 Acknowledgement The authors would like to thank the anonymous reviewers for providing constructive comments on our paper. A Proof of Theorem 1 The proof of Theorem 1 follows from our recent analysis on online/stochastic FW algorithm [39]. Using line 5 of Algorithm 1, we observe that: P i θ̄t+1 = θ̄t + γt (N −1 N (45) i=1 at − θ̄t ) . 19 Define ht := F (θ̄t ) − F (θ ? ) where θ ? is an optimal solution to (1). From the L-smoothness of F and the boundedness of C, we have: ht+1 N γt X i Lρ̄2 ≤ ht + hat − θ̄t , ∇F (θ̄t )i + γt2 , N 2 (46) i=1 where ρ̄ was defined in (5). Observe the following chain for the inner product: for each i ∈ [N ], we have hait − θ̄t , ∇F (θ̄t )i ≤ hait − θ̄t , ∇it F i + ρ̄k∇it F − ∇F (θ̄t )k2 ≤ ha − θ̄t , ∇it F i + ρ̄ · k∇it F − ∇F (θ̄t )k2 , ∀ a ∈ C (47) ≤ ha − θ̄t , ∇F (θ̄t )i + 2ρ̄ · k∇it F − ∇F (θ̄t )k2 , ∀ a ∈ C, and used the fact ait ∈ arg mina∈C ha, ∇it F i where we have added and subtracted ∇it F in the first inequality; P i in the second inequality. Recalling that ∇t F = N −1 N i=1 ∇fi (θ̄t ), k∇it F − ∇F (θ̄t )k2 ≤ k∇it F − ∇t F k2 + k∇t F − ∇F (θ̄t )k2 P i ≤ ∆dt + N −1 N i=1 k∇fi (θ̄t ) − ∇fi (θ̄t )k2 P i ≤ ∆dt + L · N −1 N i=1 kθ̄t − θ̄t k2 (48) ≤ ∆dt + L · ∆pt , where the third inequality is due to the L-smoothness of {fi }N i=1 . Recalling that ∆pt = Cp /t, ∆dt = Cg /t and substituting the results above into the inequality (46) implies: ht+1 ≤ ht + γt hāt − θ̄t , ∇F (θ̄t )i + γt2 Cg + LCp Lρ̄2 + 2ρ̄γt , 2 t (49) where āt ∈ C is the minimizer of the linear optimization (7a) using ∇F (θ̄t ), i.e., āt ∈ arg minha, ∇F (θ̄t )i . a∈C (50) Case 1: When F is convex, we observe hāt − θ̄t , ∇F (θ̄t )i ≤ hθ ? − θ̄t , ∇F (θ̄t )i ≤ −ht , (51) where the first inequality is due to the optimality of āt and the last inequality stems from the convexity of F . Plugging the above into (49) yields ht+1 ≤ (1 − γt )ht + γt2 2ρ̄(Cg + LCp ) Lρ̄2 + γt . 2 t (52) As γt = 2/(t + 1), from a high-level point of view, the above inequality behaves similarly to ht+1 ≤ (1 − (1/t))ht + O(1/t2 ). Consequently, applying [58, Lemma 4] yields a O(1/t) convergence rate for ht . In fact, this is a deterministic version of the case analyzed by [39, Theorem 10]. In particular, setting α = 1, K = 2 in [39, (56)] and using an induction argument yield ht ≤ 2 · (4ρ̄(Cg + LCp ) + Lρ̄2 )/(t + 1), ∀ t ≥ 1 . 20 (53) Case 2: For the case when F is µ-strongly convex and θ ? lies in the interior of C with distance δ > 0 (cf. (6)). Using [39, Lemma 6], we have p hθ̄t − āt , ∇F (θ̄t )i ≥ 2µδ 2 ht . 
(54) Plugging the above into (49) gives ht+1 ≤ p p p 2ρ̄(Cg + LCp ) Lρ̄2 ht ( ht − γt 2µδ 2 ) + γt2 + γt . 2 t (55) Compared to the case analyzed in (52), when ht is decreased, the decrement in ht+1 is increased, leading to a faster convergence. This is a deterministic version of the case analyzed in [39, Theorem 7]. Setting α = 1, K = 2 in [39, (48)] and using an induction argument yields ht ≤ B B.1 (4ρ̄(Cg + LCp ) + Lρ̄2 )2 9 · , ∀t≥1. 2 2δ µ (t + 1)2 (56) Proof of Theorem 2 Convergence rate Let us recall the definition of the FW gap: gt := max h∇F (θ̄t ), θ̄t − θi = h∇F (θ̄t ), θ̄t − āt i , θ∈C (57) where we have used the definition of āt in (50) from the previous proof. For simplicity, we shall assume that T is an even integer in the following. From the L-smoothness of F , we have: F (θ̄t+1 ) ≤ F (θ̄t ) + h∇F (θ̄t ), θ̄t+1 − θ̄t i + L kθ̄t+1 − θ̄t k22 . 2 (58) Observe that: θ̄t+1 − θ̄t = N −1 PN t i=1 γt (ai − θ̄ti ) . (59) As ait , θ̄ti ∈ C, we have kθ̄t+1 − θ̄t k2 ≤ γt ρ̄. Using (47) and (48) from the previous proof of Theorem 1, the inequality (58) can be bounded as: F (θ̄t+1 ) ≤ F (θ̄t ) − γt h∇F (θ̄t ), θ̄t − āt i + 2γt ρ̄ · (∆dt + L · ∆pt ) + γt2 Lρ̄2 /2 = F (θ̄t ) − γt gt + 2γt ρ̄ · (∆dt + L · ∆pt ) + (60) Lρ̄2 γt2 2 . From the definition, we observe that gt ≥ 0. Now, summing the two sides of (60) from t = T /2 + 1 to t = T gives: T X γt gt ≤ t=T /2+1 + T X  F (θ̄t ) − F (θ̄t+1 )  t=T /2+1 T X t=T /2+1  (61) Lρ̄2  2γt ρ̄ · (∆dt + L · ∆pt ) + γt2 . 2 21 Canceling duplicated terms in the first term of the right hand side above gives: T X γt gt ≤ F (θ̄T /2+1 ) − F (θ̄T +1 ) t=T /2+1 (62) T X + Lρ̄2  . 2γt ρ̄ · (∆dt + L · ∆pt ) + γt2 2 t=T /2+1 As gt , γt ≥ 0, we can lower bound the left hand side as: T X  γt gt ≥ min t∈[T /2+1,T ] t=T /2+1   gt · T X  γt , (63) t=T /2+1 and observe that for all T ≥ 6 and α ∈ (0, 1), T X γt ≥ t=T /2+1  2 1−α  T 1−α  · 1− = Ω(T 1−α ) . 1−α 3 (64) Define the constant C := Lρ̄2 /2 + 2ρ̄(Cg + LCp ). When α ≥ 0.5, using the fact that γt = t−α , ∆pt = Cp /tα , ∆dt = Cg /tα , the right hand side of (62) is bounded above by: G·ρ+C · PT −2α t=T /2+1 t ≤ G · ρ + C · log 2 , (65) note that the series is converging as wePare summing from t = T /2 + 1 to t = T . Dividing the above term by the lower bound (64) to Tt=T /2+1 γt yields (16). On the other hand, when α < 0.5, we notice that T X t−2α ≤ t=T /2+1 Z T t−2α dt = T /2 21−2α − 1  T 1−2α . 1 − 2α 2 (66) Therefore, the right hand side of (62) is bounded above by Gρ + C T X t=T /2+1  1 − (1/2)1−2α  1−2α ·T . t−2α ≤ Gρ + C 1 − 2α Dividing the above term by the lower bound (64) to B.2 PT t=T /2+1 γt (67) yields (17). Convergence to stationary point of (1) Recall that the set of stationary points to (1) is defined as: C ? := {θ̄ ∈ C : maxθ∈C h∇F (θ̄), θ̄ − θi = 0} . We state the following Nurminskii’s sufficient condition: 22 (68) Theorem 3 [59, Theorem 1] Consider a sequence {θ̄t }t≥1 in a compact set C. Suppose that the following hold5 : A.1 limt→∞ kθ̄t+1 − θ̄t k = 0. A.2 Let θ be a limit point of {θ̄t }t≥1 and {θst }t≥1 be a subsequence that converges to θ. If θ ∈ / C?, then for any t and some sufficiently small  > 0, there exists a finite s such that kθ̄s − θ̄st k >  and s > st . A.3 Let θ be a limit point of {θ̄t }t≥1 and {θst }t≥1 be a subsequence that converges to θ. If θ ∈ / C?, then for any t and some sufficiently small  > 0, we can define τt := min s s.t. kθ̄s − θ̄st k >  s>st (69) where τt is finite. Also, there exists a continuous function W (θ̄) that takes a finite number of values in C ? 
with lim sup W (θ̄τt ) < lim W (θ̄st ) . (70) t→∞ t→∞ Then the sequence {W (θ̄t )}t≥1 converges and the limit points of the sequence {θ̄t }t≥1 belongs to the set C ? . We apply the above theorem to prove that every limit point of {θ̄t }t≥1 are in C ? . First, A.1 can be easily verified since N γt X i γt ρ̄ kθ̄t+1 − θ̄t k ≤ (71) kat − θ̄t k ≤ N N i=1 and we have γt → 0 as t → ∞. As C is compact, there exists a convergent subsequence {θ̄st }t≥1 of the sequence of iterates generated by the DeFW algorithm. Let θ be the limit point of {θ̄st }t≥1 and θ ∈ / C ? . We shall verify A.2 by contradiction. In particular, fix a sufficiently small  > 0 and assume that the following holds: kθ̄s − θ̄st k ≤ , ∀ s > st , ∀ t ≥ 1 . (72) As {θ̄st }t≥1 converges to θ, the assumption (72) implies that for some sufficiently large t and any s > st , we have θ̄s ∈ B2 (θ), i.e., the ball of radius 2 centered at θ. Since θ ∈ / C ? , the following holds for some δ > 0, h∇F (θ̄s ), θ − θ̄s i ≤ −δ < 0, ∀ θ ∈ C, ∀s > st . (73) In particular, we have h∇F (θ̄s ), ās − θ̄s i ≤ −δ as we recall that ās = arg mina∈C h∇F (θ̄s ), ai. On the other hand, from (60) and using Assumptions 1-2, it holds true for all t ≥ 1 that: F (θ̄t+1 ) − F (θ̄t ) ≤ γt · h∇F (θ̄t ), āt − θ̄t i + γt · O(t−α ) + γt2 Lρ̄2 /2 . 5 (74) To give a clearer presentation, we have rephrased conditions A.2 and A.3 from the original Nurminskii’s conditions. 23 To arrive at a contradiction, we let s > st and sum up the two sides of (74) from t = st to t = s. Consider the following chain of inequality: F (θ̄s ) − F (θ̄st ) ≤ s X γ` (∇F (θ̄` ), ā` − θ̄` i + O(`−α )) `=st ≤ −δ s X γ` + `=st s X (75) −α γ` O(` ), `=st where the first inequality is due to the fact that γ`2 Lρ̄2 /2 = γ` O(`−α ) and the second inequality is due to (73). Rearranging terms in (75), we have F (θ̄s ) − F (θ̄st ) − s X C ·` −2α `=st ≤ −δ s X `−α , (76) `=st P for somePC < ∞. As 1 ≥ α > 0.5, we have lims→∞ s`=st `−2α < ∞ on the left hand side and lims→∞ s`=st `−α → +∞ on the right hand side. Letting s → ∞ on the both side of (76) implies lim F (θ̄s ) − F (θ̄st ) < −∞ , (77) s→∞ This leads to a contradiction to (73) since F (θ) is bounded over C. We conclude that A.2 holds for the DeFW algorithm. The remaining task is to verify A.3. We notice that the indices τt in (69) are well defined since A.2 holds. Take W (θ) = F (θ) and notice that the image F (C ? ) is a finite set (cf. Assumption 3). By the definition of τt , we have θ̄s ∈ B (θ̄st ) for all st ≤ s ≤ τt − 1. Again for some sufficiently large t, we have θ̄s ∈ B (θ̄st ) ⊆ B2 (θ) and the inequality (75) holds for s = τt − 1. This gives: F (θ̄τt ) − F (θ̄st ) ≤ τX t −1 γ` · (−δ + O(`−α )) . (78) `=st On the other hand, we have θ̄τt ∈ / B (θ̄st ) and thus  < kθ̄τt − θ̄st k ≤ τX t −1 `=st γ` N X ai ` i=1 N − θ̄` ≤ ρ̄ τX t −1 γ` . (79) `=st P t −1 The above implies that τ`=s γ` > /ρ̄ > 0. Considering (78) again, observe that O(`−α ) decays t to zero, for some sufficiently large t, we have −δ + O(`−α ) ≤ −δ 0 < 0 if ` ≥ st . Therefore, (78) leads to τX t −1 δ0 F (θ̄τt ) − F (θ̄st ) ≤ −δ 0 <0. (80) γ` < − ρ̄ `=st Taking the limit t → ∞ on both sides leads to (70). The proof for the convergence to stationary point in Theorem 2 is completed by applying Theorem 3. 24 C Proof of Lemma 1 For simplicity, we shall drop the dependence of α in the constant t0 (α). It suffices to show that for all t ≥ 1, v uN uX √ Cp t (81) kθ̄ti − θ̄t k22 ≤ α , Cp = (t0 )α · N ρ̄ . 
t i=1 i We observe that for t = 1 to t = t0 , the above inequality is true since qPθ̄t , θ̄t ∈ C and the diameter N α i 2 of C is bounded by ρ̄. For the induction step, let us assume that i=1 kθ̄t − θ̄t k ≤ Cp /t for some t ≥ t0 . Observe that i θt+1 = (1 − t−α )θ̄ti + t−α ait . (82) P i Denote ãt = N −1 N i=1 at and using Fact 1, we observe that, N X i kθ̄t+1 − θ̄t+1 k22 ≤ i=1 |λ2 (W )|2 · N X (83) k(1 − t−α )(θ̄tj − θ̄t ) + t−α (ajt − ãt )k22 , j=1 where we have used the fact θ̄t+1 = (1 − t−α )θ̄t + t−α ãt . The right hand side in the above can be bounded by N X k(1 − t−α )(θ̄tj − θ̄t ) + t−α (ajt − ãt )k22 j=1 ≤ N X kθ̄tj − θ̄t k22 + t−2α ρ̄2 + 2ρ̄t−α kθ̄tj − θ̄t k2  j=1 (84) v uN uX j √ −2α 2 2 −α ≤t (Cp + N ρ̄ ) + 2ρ̄t Nt kθ̄t − θ̄t k22 j=1 ≤ t−2α (Cp + √  (t )α + 1 2 0 N ρ̄)2 ≤ · C , p (t0 )α · tα P where we have used the boundedness of C in the first inequality, the norm equivalence N j=1 |cj | ≤ q √ PN 2 N j=1 cj in the second inequality and the induction hypothesis in the third and fourth inequalities. Consequently, from (23), we observe that for all t ≥ t0 , |λ2 (W )| · (t0 )α + 1 1 ≤ , α α (t0 ) · t (t + 1)α (85) and the induction step is completed. Finally, Lemma 1 is proven by observing that (81) implies (25). 25 D Proof of Lemma 2 We prove the first condition (29) with a simple induction. This condition is obviously true for the base step t = 1. For induction step, suppose that (29) is true up to some t, then N X ∇it+1 F = i=1 N X (∇it F − ∇fi (θ̄ti )) + N X i ∇fi (θ̄t+1 ). (86) i=1 i=1 Note that the first term on the right hand is zero due hypothesis. Thus, the P side PNto the induction i F = N −1 i ) for all t ≥ 1. induction step is completed and N −1 N ∇ ∇f ( θ̄ i t i=1 t i=1 Then, we prove the second condition (30). For simplicity, we drop the dependence of α in the P i F . It suffices to prove: constant t0 (α). Recall ∇t F := N −1 N ∇ i=1 t v uN uX √ Cg t (87) k∇it F − ∇t F k22 ≤ α , Cg = 2 N (t0 )α (2Cp + ρ̄)L t i=1 for all t ≥ 1 using induction. For t = 1 to t = t0 , the inequality can be qeasily proven using the PN i 2 boundedness of the gradients. For the induction step, we suppose that i=1 k∇t F − ∇t F k2 ≤ i i ) − ∇f (θ̄ i ). We observe that := ∇fi (θ̄t+1 Cg /tα for some t ≥ t0 . Define the slack variable δft+1 i t P j N i i i i ∇t+1 F = δft+1 + ∇t F and ∇t+1 F = j=1 Wij ∇t+1 F , thus applying Fact 1 yields N X k∇it+1 F − ∇t+1 F k22 ≤ i=1 |λ2 (W )|2 · N X (88) i − ∇t+1 F k22 , k∇it F + δft+1 i=1 Similarly, define δFt+1 := ∇t+1 F − ∇t F = N −1 right hand side of (88) as N X PN i i=1 δft+1 and observe that we can bound the i k∇it F + δft+1 − ∇t+1 F k22 i=1 ≤ N  X (89) i − δFt+1 k22 k∇it F − ∇t F k22 + kδft+1 i=1 i + 2 · kδft+1 − δFt+1 k2 · k∇it F − ∇t F k2  where the first inequality is obtained by expanding the squared `2 norm and applying CauchySchwartz inequality. Observe that for all i ∈ [N ], we have the following chain: i i i kδft+1 k2 = k∇fi (θ̄t+1 ) − ∇fi (θ̄ti )k2 ≤ Lkθ̄t+1 − θ̄ti k2  PN j j j i ≤L j=1 Wij (θt+1 − θ̄t ) + (θ̄t − θ̄t ) 2   PN ≤ L j=1 Wij t−α ρ̄ + 2Cp t−α = (2Cp + ρ̄)Lt−α , 26 (90) where the last inequality is due to the convexity of `2 norm, the update rule in line 5 of Algorithm 1 and the results from Lemma 1. Using the triangular inequality, we observe that  i P j i kδft+1 − δFt+1 k2 = 1 − N1 δt+1 + N1 j6=i δt+1 2  i P j ≤ 1 − N1 kδt+1 k2 + N1 j6=i kδt+1 k2 (91) 1 ≤2 1− (2Cp + ρ̄)Lt−α ≤ 2(2Cp + ρ̄)Lt−α . N Finally, applying the induction hypothesis, the right hand side of Eq. 
(89) can be bounded by N X i − ∇t+1 F k22 k∇it F + δft+1 i=1  ≤ t−2α Cg2 + 4N (2Cp + ρ̄)2 L2 √ qPN −α i 2 + t 4L(2Cp + ρ̄) N i=1 k∇t F − ∇t F k2 2 √ 2  (t0 )α + 1 ≤ t−2α · Cg + 2L N (2Cp + ρ̄) ≤ · C , g (t0 )α · tα √ qPN PN i i 2 where we have used the fact that i=1 k∇t F − ∇t F k2 ≤ N i=1 k∇t F − ∇t F k2 in the first 2 inequality. Invoking (23), we can upper bound the right hand side of (88) by Cg /(t + 1)2α for all t ≥ t0 . Taking square root on both sides of the inequality completes the induction step. Consequently, (30) can be implied by (87). E Proof of Lemma 3 Applying triangular inequality on the error vector yields: ξt−1 ∇it F N 1 X ∇fj (θ̄tj ) − N j=1 ≤ ξt−1 · ∇it F ∞ N 1 X − ∇fj (θ̄tj ) N 1Ωt j=1 + N  1 X ∇fj (θ̄tj ) N (ξt−1 1Ωt − 1) j=1 (92) ∞ ∞ , where 1 denotes the all-one vector. For the first term in the right hand side of (92), observe that ∇it F is obtained by applying the AC updates on the sparsified local gradients ∇fi (θ̄ti ) 1Ωt for `t = dCl + log(t)/ log |λ−1 2 (W )|e rounds, applying Fact 1 yields the following for all i ∈ [N ]: ∇it F − N 1 X ∇fj (θ̄tj ) N 1Ωt j=1 ≤ |λ2 (W )|`t · (∇fi (θ̄ti ) − ∞ N 1 X ∇fj (θ̄tj )) N j=1 ≤ |λ2 (W )|Cl · B/t , 27 (93) 1Ωt ∞ for some B < ∞ since the gradients are bounded.  P For the second term in the right hand side of (92), we first apply the inequality k N −1 N ∇fi (θ̄ti ) i=1 P −1 i (ξt−1 1Ωt − 1)k∞ ≤ kN −1 N i=1 ∇fi (θ̄t )k∞ k(ξt 1Ωt − 1)k∞ from [60]. Now, the probability that coordinate k is included is given by: P (k ∈ Ωt ) = 1 − P N \ i=1  1 k∈ / Ωt,i = 1 − (1 − )pt N = ξt , d (94) and that E[1Ωt ] = ξt 1. Then, observing that each element in ξt−1 1Ωt is bounded in [0, ξt−1 ] and applying the Hoefding’s inequality [61], the following holds true for all x > 0:  2 −2 P kξt−1 1Ωt − 1k∞ ≥ x ≤ 2d · e−2x /ξt , (95) where we have applied p a union bound argument to take care of the `∞ -norm. Setting x = ξt−1 (log(2dt2 ) − log )/2 and applying another union bound show that with probability at least 1 − (π 2 /6), the following holds for all t ≥ 1: N 1 X ∇fi (θ̄ti ) N (ξt−1 1Ωt − 1) i=1 ≤ ξt−1 N 1 X ∇fi (θ̄ti ) N i=1 r ∞ ∞ (96) log(2dt2 /) , 2 As d  0, we have ξt−1 ≈ d/(pt N ). Recalling pt ≥ C0 t yields the desired result in Lemma 3. References [1] J. Lafond, H.-T. Wai, and E. Moulines, “D-FW: Communication Efficient Distributed Algorithms for High-dimensional Sparse Optimization,” in Proc ICASSP, Mar 2016. [2] H.-T. Wai, A. Scaglione, J. Lafond, and E. Moulines, “A projection-free decentralized algorithm for non-convex optimization,” in Proc GlobalSIP, December 2016. [3] V. Cevher, S. Becker, and M. Schmidt, “Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics,” IEEE Signal Process. Mag., vol. 31, no. 5, pp. 32–43, Sep. 2014. [4] A. H. Sayed, S.-Y. Tu, J. Chen, X. Zhao, and Z. J. Towfic, “Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior,” IEEE Signal Process. Mag., vol. 30, no. 3, pp. 155–171, May 2013. [5] Z. Liu and L. Vandenberghe, “Interior-point method for nuclear norm approximation with application to system identification,” SIAM J. Matrix Anal. Appl., vol. 31, no. 3, pp. 1235– 1256, 2010. [6] E. J. Candès and B. Recht, “Exact matrix completion via convex optimization,” Found. Comput. Math., vol. 9, no. 6, pp. 717–772, 2009. 28 [7] C. Ravazzi, S. M. Fosson, and E. Magli, “Randomized algorithms for distributed nonlinear optimization under sparsity constraints,” IEEE Trans. on Signal Process., vol. 64, no. 
6, pp. 1420–1434, Mar 2016. [8] S. Patterson, Y. C. Eldar, and I. Keidar, “Distributed compressed sensing for static and timevarying networks,” IEEE Trans. on Signal Process., vol. 62, no. 19, pp. 4931–4946, July 2014. [9] J. Tsitsiklis, “Problems in decentralized decision making and computation,” Ph.D. dissertation, Dept. of Electrical Engineering and Computer Science, M.I.T., Boston, MA, 1984. [10] A. G. Dimakis, S. Kar, J. M. F. Moura, M. G. Rabbat, and A. Scaglione, “Gossip Algorithms for Distributed Signal Processing,” Proc. IEEE, vol. 98, no. 11, pp. 1847–1864, Nov. 2010. [11] P. Bianchi and J. Jakubowicz, “Convergence of a multi-agent projected stochastic gradient algorithm for non-convex optimization,” IEEE Trans. Autom. Control, vol. 58, no. 2, pp. 391–405, Feb 2013. [12] I.-A. Chen, “Fast distributed first-order methods,” Master’s thesis, MIT, 2012. [13] G. Qu and N. Li, “Harnessing smoothness to accelerate distributed optimization,” CoRR, vol. abs/1605.07112, May 2016. [14] B. Johansson, T. Keviczky, M. Johansson, and K. H. Johansson, “Subgradient methods and consensus algorithms for solving convex optimization problems,” in Proc. CDC, Dec 2008, pp. 4185–4190. [15] S. Ram, A. Nedić, and V. Veeravalli, “Distributed stochastic subgradient projection algorithms for convex optimization,” Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 516–545, 2010. [16] W. Shi, Q. Ling, G. Wu, and W. Yin, “A proximal gradient algorithm for decentralized composite optimization,” IEEE Trans. on Signal Process., vol. 63, no. 22, pp. 6013–6023, Nov 2015. [17] D. Jakovetic, J. Xavier, and J. M. F. Moura, “Fast distributed gradient methods,” IEEE Trans. Autom. Control, vol. 59, no. 5, pp. 1131–1146, May 2014. [18] A. Nedić, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” CoRR, vol. abs/1607.03218, July 2016. [19] X. Li and A. Scaglione, “Convergence and applications of a gossip based gauss newton algorithm,” IEEE Trans. on Signal Process., vol. 61, no. 21, pp. 5231–5246, Nov 2013. [20] D. Varagnolo, F. Zanella, A. Cenedese, G. Pillonetto, and L. Schenato, “Newton-raphson consensus for distributed convex optimization,” IEEE Trans. Autom. Control, vol. 61, no. 4, pp. 994–1009, April 2016. [21] T.-H. Chang, A. Nedic, and A. Scaglione, “Distributed constrained optimization by consensusbased primal-dual perturbation method,” IEEE Trans. Autom. Control, vol. 59, no. 6, pp. 1524–1538, June 2014. 29 [22] J. Duchi, A. Agarwal, and M. J. Wainwright, “Dual averaging for distributed optimization: Convergence analysis and network scaling,” IEEE Trans. Autom. Control, vol. 57, no. 3, pp. 592–606, March 2012. [23] E. Wei and A. Ozdaglar, “On the o(1/k) convergence of asynchronous distributed alternating direction method of multipliers,” CoRR, vol. abs/1307.8254, July 2013. [24] A. Simonetto and H. Jamali-Rad, “Primal recovery from consensus-based dual decomposition for distributed convex optimization,” JOTA, vol. 168, no. 1, pp. 172–197, 2016. [25] M. Hong, “Decomposing linearly constrained nonconvex problems by a proximal primal dual approach: Algorithms, convergence, and applications,” CoRR, vol. abs/1604.00543, Apr 2016. [26] Y. Yang, G. Scutari, D. P. Palomar, and M. Pesavento, “A parallel stochastic approximation method for nonconvex multi-agent optimization problems,” CoRR, vol. abs/1410.5076, Oct 2014. [27] P. D. Lorenzo and G. Scutari, “Next: In-network nonconvex optimization,” IEEE Trans. on Signal and Info. Process. 
over Networks, 2016. [28] H.-T. Wai and A. Scaglione, “Consensus on state and time: Decentralized regression with asynchronous sampling,” IEEE Trans. on Signal Process., vol. 63, no. 11, pp. 2972–2985, June 2015. [29] H.-T. Wai, T.-H. Chang, and A. Scaglione, “A consensus-based decentralized algorithm for non-convex optimization with application to dictionary learning,” in Proc ICASSP, Apr 2015, pp. 3546–3550. [30] M. Frank and P. Wolfe, “An algorithm for quadratic programming,” Naval Res. Logis. Quart., 1956. [31] Z. Wu and K. Teo, “A conditional gradient method for an optimal control problem involving a class of nonlinear second-order hyperbolic partial differential equations,” Journal of Mathematical Analysis and Applications, vol. 91, no. 2, pp. 376 – 393, 1983. [32] M. Jaggi and M. Sulovsky, “A simple algorithm for nuclear norm regularized problems,” in ICML, June 2010. [33] A. Joulin, K. Tang, and L. Fei-Fei, “Efficient image and video co-localization with frank-wolfe algorithm,” in ECCV, Sept 2014. [34] L. Zhang, V. Kekatos, and G. B. Giannakis, “Scalable electric vehicle charging protocols,” CoRR, vol. abs/1510.00403v2, Oct 2016. [35] M. Fukushima, “A modified frank-wolfe algorithm for solving the traffic assignment problem,” Transportation Research Part B: Methodological, vol. 18, no. 2, pp. 169–177, April 1984. [36] M. Jaggi, “Revisiting Frank-Wolfe: Projection-free sparse convex optimization,” in ICML, June 2013. 30 [37] S. Ghosh and H. Lam, “Computing worst-case input models in stochastic simulation,” CoRR, vol. abs/1507.05609, July 2015. [38] S. Lacoste-Julien, “Convergence rate of Frank-Wolfe for non-convex objectives,” CoRR, vol. abs/1607.00345, July 2016. [39] J. Lafond, H.-T. Wai, and E. Moulines, “On the online Frank-Wolfe algorithms for convex and non-convex optimizations,” CoRR, vol. abs/1510.01171v2, Aug 2016. [40] S. Ghadimi and G. Lan, “Accelerated gradient methods for nonconvex nonlinear and stochastic programming,” Mathematical Programming, vol. 156, no. 1, pp. 59–99, Feb 2015. [41] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra, “Efficient projections onto the `1 -ball for learning in high dimensions,” in ICML, July 2008. [42] G. H. Golub and C. F. van Loan, Matrix computations, 4th ed. Press, Baltimore, MD, 2013. Johns Hopkins University [43] A. Defazio, F. Bach, and S. Lacoste-Julien, “SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives,” in NIPS, Dec 2014. [44] A. Scaglione, R. Pagliari, and H. Krim, “The decentralized estimation of the sample covariance,” in Proc. Asilomar, Nov. 2008, pp. 1722–1726. [45] Q. Ling, Y. Xu, W. Yin, and Z. Wen, “Decentralized low-rank matrix completion,” in Proc ICASSP, Mar 2012. [46] L. Mackey, A. Talwalkar, and M. I. Jordan, “Distributed matrix completion and robust factorization,” Journal of Machine Learning Research, vol. 16, pp. 913–960, 2015. [47] H.-F. Yu, C.-J. Hsieh, S. Si, and I. Dhillon, “Scalable coordinate descent approaches to parallel matrix factorization for recommender systems,” in ICDM. IEEE, 2012, pp. 765–774. [48] B. Recht and C. Ré, “Parallel stochastic gradient algorithms for large-scale matrix completion,” Mathematical Programming Computation, vol. 5, no. 2, pp. 201–226, 2013. [49] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, “Sparse and low-rank matrix decompositions,” in Proc. Allerton, 2009, pp. 962–967. [50] G. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “Fast Sparse Representation Based on Smoothed L0 Norm,” in ICA, ser. 
Lecture Notes in Computer Science. Springer, Sep. 2007, pp. 389–396. [51] L. Bako, “Identification of switched linear systems via sparse optimization,” Automatica, vol. 47, no. 4, pp. 668 – 677, 2011. [52] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” CoRR, vol. abs/0805.0510, May 2008. [53] M. E. Yildiz and A. Scaglione, “Coding with side information for rate-constrained consensus,” IEEE Trans. on Signal Process., vol. 56, no. 8, pp. 3753–3764, Aug 2008. 31 [54] H. Attiya and J. Welch, Distributed Computing: Fundamentals, Simulations, and Advanced Topics. Wiley, 2004. [55] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, Sep. 2004. [56] F. M. Harper and J. A. Konstan, “The movielens datasets: History and context,” ACM TiiS, Jan 2015. [57] E. v. Berg, M. P. Friedlander, G. Hennenfent, F. Herrmann, R. Saab, and Ö. Yılmaz, “Sparco: A testing framework for sparse reconstruction,” Dept. Computer Science, University of British Columbia, Vancouver, Tech. Rep. TR-2007-20, October 2007. [58] B. P. Polyak, Introduction to Optimization. Optimization Software, Inc., 1987. [59] E. A. Nirminskii, “Convergence conditions for nonlinear programming algorithms,” Cybernetics, no. 6, pp. 79–81, Nov 1972. [60] R. A. Horn and C. R. Johnson, Topics in matrix analysis. Cambridge: Cambridge University Press, 1994, corrected reprint of the 1991 original. [61] P. Massart, Concentration Inequalities and Model Selection. 32 Springer, 2003.
1 Limitation of SDMA in Ultra-Dense Small Cell Networks arXiv:1801.00222v1 [] 31 Dec 2017 Junyu Liu, Min Sheng, Jiandong Li State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an, Shaanxi, 710071, China Email: [email protected], {msheng, jdli}@mail.xidian.edu.cn Abstract—Benefitting from multi-user gain brought by multiantenna techniques, space division multiple access (SDMA) is capable of significantly enhancing spatial throughput (ST) in wireless networks. Nevertheless, we show in this letter that, even when SDMA is applied, ST would diminish to be zero in ultradense networks (UDN), where small cell base stations (BSs) are fully densified. More importantly, we compare the performance of SDMA, single-user beamforming (SU-BF) (one user is served in each cell) and full SDMA (the number of served users equals the number of equipped antennas). Surprisingly, it is shown that SU-BF achieves the highest ST and critical density, beyond which ST starts to degrade, in UDN. The results in this work could shed light on the fundamental limitation of SDMA in UDN. I. I NTRODUCTION While network densification is promising to improve network capacity, its limitation has been manifest in recent research [1]–[3]. Depending on practical system settings, it is shown that area spectral efficiency would decrease as base station (BS) density grows in downlink cellular networks [2]. Worsestill, network capacity is shown to diminish to zero when single-antenna BSs are over deployed [3]. To rejuvenate the potential of network densification, space division multiple access (SDMA), which is capable of simultaneously serving multiple users within one cell over identical spectrum resources, is shown to be of great potential [4], [5]. Especially, it is shown that single-user beamforming (SU-BF), a special case of SDMA, could improve network capacity by tens of folds in ultra-dense networks (UDN), compared to the single-antenna regime [5]. However, only single user could be served within one cell using SU-BF. Therefore, it is imperative to study whether the multi-user gain of SDMA could be harvested to further improve network capacity in UDN. In this work, we investigate the performance of SDMA in ultra-dense downlink small cell networks. As more than one user is served within one cell using SDMA, spatial resources could be fully exploited. However, it is unexpectedly shown that network spatial throughput (ST) would monotonously decrease with the number of served users within one cell when BS density is sufficiently large, which radically contradicts with the sparse deployment cases. Moreover, it is proved that applying SU-BF could as well maximize the critical density, beyond which network capacity starts to diminish. Therefore, the results reveal the fundamental limits of SDMA and confirm the superiority of SU-BF in UDN1 . 1 Different from [3], which investigates the limitation of network densification, we intend to unveil the limitation of SDMA in UDN in this work. II. S YSTEM M ODEL We consider a downlink small cell network, where the locations of Na -antenna BSs (with constant transmit power P ) and single-antenna users in a two-dimension plane are modeled as two independent homogeneous Poisson Point Processes (PPPs) ΠBS = {BSi } (i ∈ N) of density λ and ΠU = {Uj } (j ∈ N) of density λU , respectively. 
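As an illustration of this deployment model, a homogeneous PPP of a given density can be sampled on a finite window by drawing a Poisson number of points and placing them uniformly. The following minimal sketch (our own, with purely illustrative densities and window size) draws one realization of the BS and user processes.

```python
import numpy as np

def sample_ppp(density, side, rng):
    # Homogeneous PPP on a side x side window: draw a Poisson number of points,
    # then place them i.i.d. uniformly over the window.
    n = rng.poisson(density * side * side)
    return rng.uniform(0.0, side, size=(n, 2))

rng = np.random.default_rng(1)
side_km = 2.0                                              # illustrative window size (km)
bs_xy = sample_ppp(density=200.0, side=side_km, rng=rng)   # BS process of density lambda
user_xy = sample_ppp(density=50.0, side=side_km, rng=rng)  # user process of density lambda_U
print(bs_xy.shape, user_xy.shape)
```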
If r is the 2-dimension distance of cellular link, the distance√ between the antennas of BSs and the associated users is d = r2 + ∆h2 , where ∆h > 0 denotes the antenna height difference (AHD) between BSs and users We further assume that each user connects to the BS providing the smallest pathloss. Accordingly, the probability density function (PDF) of r is given by [6] 2 fr (x) =2πλxe−πλx . (x ≥ 0) (1) Denote NU (NU ≤ Na ) as the number of active users within one cell. Meanwhile, a dense user deployment is considered, i.e., λ ≫ λU , such that each full-buffer BS could select at most Na users to serve using SDMA. Channel gain consists of a pathloss component and a smallscale fading component. In particular, given transmission distance d, a generalized multi-slope pathloss model (MSPM) is adopted to characterize differentiated signal power attenuation rates within different regions, i.e.,   −1 (2) ; d = Kn d−αn , Rn ≤ d < Rn+1 lN {αn }N n=0 where αn denotes the pathloss exponent, K0 = 1, Kn = Q αi −αi−1 n (n ≥ 1), and 0 = R0 < R1 < · · · < i=1 Ri RN = ∞. Note that αi ≤ αj (i < j) and αN −1 > 2 for practical concerns. Rayleigh fading is used to model smallscale fading for mathematical tractability. The rationality of Rayleigh fading assumption in UDN has been verified via experimental results in [7]. SDMA with zero-forcing precoding is applied, where perfect channel state information (CSI) from the BSs to the serving downlink users could be obtained to design the precoders. According to [4], [8], [9], under Rayleigh fading channels, the channel power gain caused by small-scaling fading of both direct link and interfering link follows gamma distribution when zero-forcing precoding is applied by the multi-antenna technique. If denoting HU0 ,BS0 as the channel power gain caused by small-scale fading from BS0 to U0 (typical pair) and HU0 ,BSi (i 6= 0) as the channel power gain caused by small scaling fading from BSi (interfering BS) to U0 , then 2 STN (λ) = NU λCPN (λ) log2 (1 + τ ) . (3) In (3), τ denotes the decoding threshold and CPN (λ) denotes the coverage probability (CP) of the typical downlink users, which is given by 0 10 SSPM DSPM −1 ¡ ¢ ST bits/ s · Hz · m2 HU0 ,BS0 ∼ Γ (Na − NU + 1, 1) and HU0 ,BSi ∼ Γ (NU , 1). It is worth noting that SDMA degenerates into full SDMA when NU = Na , while degenerates into SU-BF when NU = 1. We use network ST for performance evaluation of SDMA in UDN. In particular, ST is defined by 10 −2 10 21 folds −3 10 −4 10 Na = 16, 8, 4, 1 1 CPN (λ) = P {SIRU0 > τ } . IN UDN 3 10 4 10 10 λ BS/km2 5 6 10 10 (a) Impact of Na . 2 10 SSPM DSPM ¡ ¢ ST bits/ s · Hz · m2 In (4), SIRU0 denotes the signal-to-interference ratio (SIR) at U0 . Note that the subscript ‘N ’ denotes the number of slopes in (2). Notation: Denoting 2 F1 (·, ·, ·, ·) as Gaussian hypergeomet  2 ric function, we denote ω (x, y, z) = 2 F1 x, − y , 1 − y2 , −z   N −1 in the remaining parts. Besides, lN {αn }n=0 ; d is replaced by lN (d) for notation simplicity. III. P ERFORMANCE E VALUATION OF SDMA 10 (4) 2 ∆h = 0m 0 10 ∆h = 2m −2 10 ∆h = 5m −4 10 ∆h = 10m In this section, we evaluate the performance of SDMA in terms of network ST, which depends on the distribution of SIRU0 according to (3). In particular, SIRU0 is given by −6 10 2 10 4 10 λ BS/km2 6 10 (b) Impact of AHD ∆h. 
where IIC = (5) SIRU0 =P HU0 ,BS0 l (d0 ) /IIC , P P HU0 ,BSi l (di ) is the inter-cell interfer- BSi ∈Π̃BS ence, di denotes the distance between the antennas of BSi and U0 , and HU0 ,BSi denotes the power gain caused by smallscale-fading from BSi to U0 . Following SIRU0 in (5), we present the results on CP and ST in the following corollary. Corollary 1 (CP and ST in SDMA System). When SDMA is applied under MSPM, ST in downlink small cell networks is given by STN (λ) = NU λCPN (λ) log2 (1 + τ ), where "N −N # a X U (−s)k dk CPN (λ) =E LI (s) s= P l τ(d ) . (6) 0 N k! dsk IC k=0 In (6), LIIC (s) is the Laplace Transform of IIC evaluated at s = P lNτ(d0 ) , which is given by LIIC (s) =e −2πλ R∞ d0 x(1−(1+sP lN (x))−1 )dx . (7) Proof. Following [5, Corollary 2], the results can be obtained with easy manipulation. The detail is omitted due to space limitation. Corollary 1 provides a numerical approach to evaluate the performance of SDMA in UDN. As well, the impact of key parameters of SDMA, such as the number of antennas and served users, on system performance could be captured. In particular, we plot ST as a function of BS density in Fig. 1 under single-slope pathloss model (SSPM), i.e., N = 1 in (2), and dual-slope pathloss model (DSPM), i.e., N = 2 in (2). It can be seen from Fig. 1a that network ST could be significantly improved by SDMA, compared to the single-antenna regime. Figure 1. ST scaling laws. Set NU = 1 when Na = 1, while set NU = 2 when Na > 1. For system settings, set P = 23dBm and τ = 10dB. Set α0 = 4 for SSPM, and α0 = 2.5, α1 = 4 and R1 = 10m for DSPM. In (a), set ∆h = 2m. In (b), set Na =16. Lines and markers denote numerical and simulation results, respectively. Especially, the maximal ST could be improved by 21 folds under DSPM when 16 antennas are equipped by each BS and 2 users are served within one cell. Meanwhile, we evaluate the impact of AHD on the network ST in Fig. 1b. It is observed that network ST would be significantly over-estimated without considering the AHD, i.e., ST even linearly increases with BS density. As the ∆h = 0m assumption is no longer impractical when the distance from transmitters and the intended receivers is small, this confirms the importance of modeling AHD when evaluating the performance of UDN. Taking ∆h > 0m into account, however, it is pessimistic to observe that ST would asymptotically approach zero under dense deployment even when SDMA is applied. Therefore, we analytically study ST scaling law next. Corollary 2 (CP and ST Scaling Laws in SDMA System). In SDMA system, CP and ST scale with BS density λ as CPN (λ) ∼ e−κλ and STN (λ) ∼ λe−κλ (κ is a constant), respectively. Proof. Given the number of antennas, the experience of single user would be degraded if more users within one cell share the antenna resources. Therefore, we have CPF N (λ) ≤ CPN (λ) ≤ CPSN (λ), where CPF (λ) is obtained by setting NU = Na (full N SDMA) and CPSN (λ) is obtained by setting NU = 1 (SU-BF). 3 Hence, CPF (λ) 0 10 d0 (b)  −πλd20  (ω (Na , αN−1 , τ ) − 1) = Er0 >RN −1 exp    2 2 − ∆h2 (c) exp −πλ ω (Na , αN−1 , τ ) RN−1 + ∆h = ω (Na , αN−1 , τ ) =CPL N (λ) , (8) Exact results Approximate results −1 ¡ ¢ ST bits/ s · Hz · m2   Z ∞    (a) x 1 − (1 + sP lN (x))−Na dx = Er0 exp −2πλ   d0 Z ∞    x 1 − (1 + sP lN (x))−Na dx >Er0 >RN −1 exp −2πλ 10 −2 10 −3 10 overlap −4 10 Na = 16, 8, 4, 1 where s = P lNτ(d0 ) . 
In (8), (a) follows due to HU0 ,BS0 ∼ τ Γ (1, 1) since Na = NU , (b) follows due to s = −αN −1 2 4 10 P KN −1 d0 and lN (x) = KN −1 x−αN −1 when r0 > RN −1 , and (c) follows due to the PDF of r0 given in (1). In consequence, CPLN (λ) ∼ e−κλ holds. Besides, it follows from [5, Theorem 2] that CPS (λ) ∼ e−κλ . Therefore, CPN (λ) ∼ e−κλ . Following the definition of ST, the proof is complete. IV. SDMA, F ULL SDMA AND SU-BF: A S PECIAL C ASE S TUDY In this section, analytical comparison of SDMA, full SDMA and SU-BF is made from the ST perspective. Especially, a special case study is performed under SSPM given by l1 (d) = d−α0 . d ≥ 0 0 10 SU − BF, NU = 1 SDMA, NU = 8 Full SDMA, NU = 16 Proposition 1. When SDMA is applied under SSPM, ST in ultra-dense downlink small cell networks can be approximated as ST†1 (λ) = λCP†1 (λ) log2 (1 + τ ), where exp (−πλδ (α0 )) , ω (NU , α0 , τ S )   ω NU , x, τ S − 1 and τ S = where δ (x) = ∆h2 ¡ ¢ ST bits/ s · Hz · m2 10 −4 10 DSPM −6 10 SSPM −8 10 1 10 2 3 10 10 4 10 λ BS/km2 5 10 6 10 7 10 (b) Comparison of SDMA, full SDMA and SU-BF. Figure 2. ST scaling laws. In (a), set NU = 1 when Na = 1 and set NU = Na /2 when Na > 1. In (b), set Na =16 and other system parameters are identical to those in Fig. 1. (9) Nevertheless, the results on ST in Corollary 1 are much too complicated even when SSPM is applied. To facilitate the comparison, we first present a simple but effective approximation on ST in the Proposition 1. CP†1 (λ) = 10 (a) Examine the approximation accuracy. −2 Corollary 2 reveals the fundamental limitation of SDMA in UDN. Meanwhile, it is intuitive to obtain that SU-BF outperforms SDMA and full SDMA in terms of CP. Therefore, it is interesting to analytically compare the performance of these schemes in terms of ST as well, the detail of which is presented in the following part. 6 10 λ BS/km2 exact and approximate results are small. They even overlap when Na = 1. Moreover, it is observed that the critical densities obtained via the approximate and exact results are almost identical. Therefore, it is valid to use the approximate results to derive critical density as follows. Corollary 3. The critical density in SDMA downlink small cell networks is given by λ∗ = (10) τ Na −NU +1 . Proof.  The key to the approximation is to use GU0 ,BS0 ∼ 1 Exp Na −N to replace HU0 ,BS0 ∼ Γ (Na − NU + 1, 1). U +1 Accordingly, we have     R −NU −2πλ d∞ x 1−(1+s† P l1 (x)) dx † 0 CP1 (λ) =Er0 e h i S 2 =Er0 e−πλd0 [ω(NU ,α0 ,τ )−1] , (11) p S where s† = P l1τ(d0 ) and d0 = r02 + ∆h2 . Aided by (11) and the PDF of r0 given in (1), CP1 (λ) can be obtained. We next verify the accuracy of the approximation in Proposition 1 using Fig. 2a. It can be seen that the gaps between the where τ S = 1 , π∆h2 (ω (NU , α0 , τ S ) − 1) (12) τ Na −NU +1 . Proof. The proof can be feasibly completed by solving ∂ST†1 (λ) = 0. ∂λ Corollary 3 identifies the relationship of key system parameters and critical density. For instance, it is apparent that λ∗ inversely increases with ∆h2 . For this reason, it is suggested to lower the transmitter and receiver antenna heights and reduce the AHD such that network densification could be still effective in enhancing spectrum efficiency even when BS density is large. Next, we use the results in Corollary 3 to further evaluate the performance of SDMA, full SDMA and SU-BF in UDN. Before that, the results on ω (x, y, z) are given in Lemma 1. 4 Lemma 1. Given N > 1 (an integer), −1 < b < 0, c > 0 and z < 0, 2 F1 (N, b, c, z) is an increasing function of N . 
Proof. The proof can be completed using mathematical induction. Assuming ∆1 = 2 F1 (N + 1, b, c, z) − 2 F1 (N, b, c, z) = bz c 2 F1 (N + 1, b + 1, c + 1, z) > 0, if the inequality ∆2 = 2 F1 (N + 2, b, c, z) − 2 F1 (N + 1, b, c, z) > 0 holds, the proof is complete. By definition of hypergemetric function, we have bz 2 F1 (N + 2, b + 1, c + 1, z) c b = [2 F1 (N + 1, b + 1, c, z) − 2 F1 (N + 1, b, c, z)] . N +1 (13) ∆2 = As N > 0 and b < 0, to prove ∆2 > 0, it is sufficient to show (N, b + 1, c, z) − 2 F1 (N, b, c, z) < 0. Hence, we have 2 F1 ∆3 =2 F1 (b + 1, N, c, z) − 2 F1 (b, N, c, z) Nz = 2 F1 (b + 1, N + 1, c + 1, z) c Nz = (14) 2 F1 (N + 1, b + 1, c + 1, z) . c According to the assumption that ∆1 > 0, we have < 0 2 F1 (N + 1, b + 1, c + 1, z) > 0. Therefore, ∆3 holds. Aided by Lemma 1, we provide the results on full SDMA in the following theorem. Theorem 1. When full SDMA is applied, the critical density of downlink small cell networks is a decreasing function of the number of antennas. Proof. Applying full SDMA, NU = Na and the critical density in Corollary 3 degenerates into λ∗ = 1 . π∆h2 (ω (Na , α0 , τ ) − 1) (15) Since ω (Na , α0 , τ ) is an increasing function of Na (Lemma 1), the proof is complete. Theorem 1 evaluates the performance of full SDMA in terms of critical density. More specifically, it is observed from Fig. 2b that ST would begin to decrease at a smaller critical density when more antennas are equipped. Following Theorem 1, we further compare SDMA, full SDMA and SU-BF in Theorem 2. Theorem 2. Given the number of antennas equipped on each BS, the critical density achieved by SU-BF is greater than those achieved by SDMA and full SDMA. Proof. The results can be proved by showing λ∗ in Corollary 3 is a decreasing function of NU or equivalently ω NU , α0 , τ S is an increasing function of NU . By making an extension of  [3, Lemma 1], it can be shown that ω NU , α0 , τ S increases 1 with τ S = Na −N , which monotonously increases with U +1 NU . Therefore, aided by Lemma 1, it can be proved that  ω NU , α0 , τ S increases with NU . Theorem 2 indicates that simultaneously serving multiple users would enable ST to decrease at a smaller BS density. In other words, compared to SU-BF, SDMA potentially degrades spectrum efficiency in UDN. This is inconsistent with traditional understanding of SDMA in sparse deployment [4], which shows higher spectrum efficiency could be achieved by SDMA via serving more users. This can be verified via the results in Fig. 2b. Specifically, although a higher ST could be obtained by SDMA when NU is large in sparse scenario, it would experience a notable decrease at a greater BS density, compared to SU-BF. The intuition behind this is that the multiuser gain harvested by SDMA is ruined by the overwhelming interference in UDN. Besides, it is numerically shown in Fig. 2b that SU-BF outperforms SDMA and full SDMA in terms of ST in UDN as well. From this perspective, SU-BF is more favorable to enhance network capacity in UDN. While the analytical comparison is made based on SSPM, it could be easily tested that the comparison results are still valid under MSPM. For instance, the results under DSPM, i.e., N = 2 in (2), are verified in Fig. 2b. V. C ONCLUSION In this letter, the performance of SDMA has been explored in ultra-dense downlink small cell networks. Specifically, SDMA, albeit incapable of improving network capacity scaling law, is proved to significantly enhancing spatial throughput, compared to single-antenna regime. 
Moreover, it is interestingly shown that network capacity and critical density could be maximized when single user is served in each small cell, which contradicts with the intuition. The primary reason is that demerits of overwhelming inter-cell interference dominate the benefits of multi-user gain of SDMA in UDN. Therefore, the outcomes of this work could provide guideline towards the application and optimization of SDMA in UDN via reasonably selecting the number of users to serve. R EFERENCES [1] X. Zhang and J. G. Andrews, “Downlink cellular network analysis with multi-slope path loss models,” IEEE Trans. Commun., vol. 63, no. 5, pp. 1881–1894, May. 2015. [2] M. D. Renzo, W. Lu, and P. Guan, “The intensity matching approach: A tractable stochastic geometry approximation to system-level analysis of cellular networks,” IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 5963–5983, Sep. 2016. [3] J. Liu, M. Sheng, L. Liu, and J. Li, “Effect of densification on cellular network performance with bounded pathloss model,” IEEE Commun. Lett., vol. 21, no. 2, pp. 346–349, Feb. 2017. [4] H. S. Dhillon, M. Kountouris, and J. G. Andrews, “Downlink MIMO hetnets: Modeling, ordering results and performance analysis,” IEEE Trans. Wireless Commun., vol. 12, no. 10, pp. 5208–5222, Oct. 2013. [5] J. Liu, M. Sheng, and J. Li, “MISO in ultra-dense networks: Balancing the tradeoff between user and system performance,” submitted to IEEE Trans. Wireless Commun., 2017. [Online]. Available: https://arxiv.org/ abs/1707.05957 [6] D. Stoyan, W. S. Kendall, J. Mecke, and L. Ruschendorf, Stochastic geometry and its applications. Wiley Chichester, 1995, vol. 2. [7] J. Liu, M. Sheng, L. Liu, and J. Li, “Network densification in 5G: From the short-range communications perspective,” vol. abs/1606.04749, 2017. [Online]. Available: http://arxiv.org/abs/1606.04749 [8] N. Jindal, J. G. Andrews, and S. Weber, “Multi-antenna communication in ad hoc networks: Achieving MIMO gains with SIMO transmission,” IEEE Trans. Commun., vol. 59, no. 2, pp. 529–540, Feb. 2011. [9] R. W. H. Jr, T. Wu, Y. H. Kwon, and A. C. K. Soong, “Multiuser MIMO in distributed antenna systems with out-of-cell interference,” IEEE Trans. Signal Process., vol. 59, no. 10, pp. 4885–4899, Oct 2011.
Near-Optimal and Robust Mechanism Design for Covering Problems with Correlated Players∗ arXiv:1311.6883v2 [cs.GT] 29 Sep 2016 Hadi Minooei† Chaitanya Swamy † Abstract We consider the problem of designing incentive-compatible, ex-post individually rational (IR) mechanisms for covering problems in the Bayesian setting, where players’ types are drawn from an underlying distribution and may be correlated, and the goal is to minimize the expected total payment made by the mechanism. We formulate a notion of incentive compatibility (IC) that we call support-based IC that is substantially more robust than Bayesian IC, and develop black-box reductions from support-based-IC mechanism design to algorithm design. For singledimensional settings, this black-box reduction applies even when we only have an LP-relative approximation algorithm for the algorithmic problem. Thus, we obtain near-optimal mechanisms for various covering settings including single-dimensional covering problems, multi-item procurement auctions, and multidimensional facility location. 1 Introduction In a covering mechanism-design problem, there are players who provide covering objects and a buyer who needs to obtain a suitable collection of objects so as to satisfy certain covering constraints (e.g., covering a ground set); each player incurs a certain private cost, which we refer to as his type, for providing his objects, and the mechanism must therefore pay the players from whom objects are procured. We consider the problem of designing incentive-compatible, ex-post individually rational (IR) mechanisms for covering problems (also called procurement auctions) in the Bayesian setting, where players’ types are drawn from an underlying distribution and may be correlated, and the goal is to minimize the expected total payment made by the mechanism. Consider the simplest such setting of a single-item procurement auction, where a buyer wants to buy an item from any one of n sellers. Each seller’s private type is the cost he incurs for supplying the item and the sellers must therefore be incentivized via a suitable payment scheme. Myerson’s seminal result [19] solves this problem (and other single-dimensional problems) when players’ private types are independent. However, no such result (or characterization) is known when players’ types are correlated. This is the question that motivates our work. Whereas the analogous revenue-maximization problem for packing domains, such as combinatorial auctions (CAs), has been extensively studied in the algorithmic mechanism design (AMD) literature, both in the case of independent and correlated (even interdependent) player-types (see, e.g., [3, 4, 2, 1, 5, 9, 8, 21, 2, 25] and the references therein), surprisingly, there are almost no results on the payment-minimization problem for covering settings in the AMD literature (see however the discussion in “Related work” for some exceptions). The economics literature does contain various general results that apply to both covering and packing problems. However much of this work ∗ A preliminary version appeared as [18]. {hminooei,cswamy}@uwaterloo.ca. Dept. of Combinatorics and Optimization, Univ. Waterloo, Waterloo, ON N2L 3G1. Research supported partially by NSERC grant 327620-09 and the second author’s Discovery Accelerator Supplement Award, and Ontario Early Researcher Award. † 1 focuses on characterizing special cases; see, e.g., [14, 13]. 
An exception is the work of Crémer and McLean [6, 7], which shows that under certain conditions, one can devise a Bayesian-incentivecompatible (BIC) mechanism whose expected total payment exactly equal to the expected cost incurred by the players, albeit one where players may incur negative utility under certain typeprofile realizations. Our contributions. We initiate a study of payment-minimization (PayM) problems from the AMD perspective of designing computationally efficient, near-optimal mechanisms. We develop black-box reductions from mechanism design to algorithm design whose application yields a variety of optimal and near-optimal mechanisms. As we elaborate below, covering problems turn out to behave quite differently in certain respects from packing problems, which necessitates new approaches (and solution concepts). Formally, we consider the setting of correlated players in the explicit model, that is, where we have an explicitly-specified arbitrary discrete joint distribution of players’ types. A commonlyused solution concept in Bayesian settings is Bayesian incentive compatibility (BIC) and interim individual rationality (interim IR), wherein at the interim stage when a player knows his type but is oblivious of the random choice of other players’ types, truthful participation in the mechanism by all players forms a Bayes-Nash equilibrium. Two serious drawbacks of this solution concept (which are exploited strikingly and elegantly in [6, 7]) are that: (i) a player may regret his decision of participating and/or truthtelling ex post, that is, after observing the realization of other players’ types; and (ii) it is overly-reliant on having precise knowledge of the true underlying distribution making this a rather non-robust concept: if the true distribution differs, possibly even slightly, from the mechanism designer’s and/or players’ beliefs or information about it, then the mechanism could lose its IC and IR properties. On the other hand, the solution concept of dominant-strategy incentive compatibility (DSIC) and ex-post IR ensures that truthful participation is the best choice, and a no-regret choice, for every player regardless of the other players’ reported types. This is the most robust notion of IC and IR, since it is a completely distribution-independent. We consider the problem of designing near-optimal mechanisms for payment-minimization problems with robustness being an important consideration. This makes (DSIC, ex-post IR), which we abbreviate to (DSIC, IR), as the natural ideal. However, certain difficulties arise in achieving this goal for covering problems (see “Differences . . . packing problems” below). We formulate a notion of incentive compatibility that we call support-based IC1 that, while somewhat weaker than DSIC, is still substantially more robust than BIC and at the same time is flexible enough that it allows one to obtain various polytime near-optimal mechanisms satisfying this notion. A support-based-(IC, IR) mechanism (see Section 2) ensures that truthful participation in the mechanism is in the best interest of every player (i.e. a “no-regret” choice) for every type profile in a certain subset of the type space: for every player i, we impose the IC and IR conditions for all type profiles of the form (·, c−i ) for every profile c−i of other players’ types coming from the support of the underlying distribution. 
In particular, a support-based-(IC, IR) mechanism ensures that truthful participation is the best choice for every player even at the ex-post stage when the other players’ (randomly-chosen) types are revealed to him. Such a mechanism is significantly more robust than a (BIC, interim-IR) mechanism since it retains its IC and IR properties for a large class of distributions that contains (in particular) every distribution whose support is a subset of the support of the actual distribution. In other words, in keeping with Wilson’s doctrine of detail-free mechanisms, the mechanism functions robustly even under fairly limited information about the type-distribution. 1 The conference version [18] of this paper referred to this as “robust Bayesian IC.” 2 We show that for a variety of settings, one can reduce the support-based-(IC, IR) paymentminimization (PayM) mechanism-design problem to the algorithmic cost-minimization (CM) problem of finding an outcome that minimizes the total cost incurred. Moreover, this black-box reduction applies to: (a) single-dimensional settings even when we only have an LP-relative approximation algorithm for the CM problem (that is required to work only with nonnegative costs) (Theorem 4.2); and (b) multidimensional problems with additive types (Corollary 3.3). Our reduction yields near-optimal support-based (IC-in-expectation, IR) mechanisms for a variety of covering settings such as (a) various single-dimensional covering problems including singleitem procurement auctions (Table 1); (b) multi-item procurement auctions (Theorem 5.1); and (c) multidimensional facility location (Theorem 5.3). (Support-based IC-in-expectation means that the support-based-IC guarantee holds for the expected utility of a player, where the expectation is over the random coin tosses of the mechanism.) In Section 6, we consider some extensions involving both weaker (but more robust than (BIC, interim IR)) solution concepts and the stronger (DSIC, IR) solution concept. We obtain the same guarantees under the various weaker solution concepts, and adapt our techniques to obtain (DSIC-in-expectation, IR) mechanisms with the same guarantees for single-dimensional problems in time exponential in the number of players (Section 6.2). These are the first results for the PayM mechanism-design problem for covering settings with correlated players under a notion stronger than (BIC, interim IR). To our knowledge, our results are new even for the simplest covering setting of single-item procurement auctions. Our techniques. The starting point for our construction is a linear-programming (LP) relaxation (P) for the problem of computing an optimal support-based-(IC, IR)-in-expectation mechanism. This was also the starting point in the work of [8], which considers the revenue-maximization problem for CAs, but the covering nature of the problem makes it difficult to apply certain techniques utilized successfully in the context of packing problems (as described below). We show that an optimal solution to (P) can be computed given an optimal algorithm A for the CM problem since A can be used to obtain a separation oracle for the dual LP. Next, we prove that a feasible solution to (P) can be extended to a support-based-(IC-in-expectation, IR) mechanism with no larger objective value. For single-dimensional problems, we show that even LP-relative ρ-approximation algorithms for the CM problem can be utilized, as follows. 
We move to a relaxation of (P), where we replace the set of allocations with the feasible region of the CM-LP. This can be solved efficiently, since the separation oracle for the dual can be obtained by optimizing over the feasible region of CM-LP, which can be done efficiently! But now we need to work harder to “round” an optimal solution (x, p) to the relaxation of (P) and obtain a support-based-(IC-in-expectation, IR) mechanism. Here, we exploit the Lavi-Swamy [12] convex-decomposition procedure, using which we can show (roughly speaking) that we can decompose ρx into a convex combination of allocations. This allows us to obtain a support-based-(IC-in-expectation, IR) mechanism while blowing up the payment by a ρ-factor. In comparison with the reduction in [8], which is the work most closely-related to ours, our reduction from support-based-IC mechanism design to the algorithmic CM problem is stronger in the following sense. For single-dimensional settings, it applies even with LP-relative approximation algorithms, and the approximation algorithm is required to work only for “proper inputs” with nonnegative costs. (Note that whereas for packing problems, allowing negative-value inputs can be benign, this can change the character of a covering problem considerably.) In contrast, Dobzinski et al. [8] require an exact algorithm for the analogous social-welfare-maximization (SWM) problem. 3 Differences with respect to packing problems. At a high level, our method of writing an LP for the underlying mechanism-design problem and solving it given an algorithm for an associated algorithmic problem is similar to the procedure in Dobzinski et al. [8]. However, we encounter three distinct sources of difficulty when dealing with covering problems vis-a-vis packing problems. First, as noted in [8], the LP can only encode the IC and IR conditions for a finite set of type profiles, whereas, with an infinite type space, both support-based-(IC, IR) and (DSIC, IR) require the mechanism to satisfy the IC and IR conditions for an infinite set of type profiles. Therefore, to translate the LP solution to a suitable mechanism, we need to solve the “extension problem” of extending an allocation and pricing rule defined on a (finite) subset of the type space to the entire type space while preserving its IC and IR properties. This turns out to be a much more difficult task for covering problems than for packing domains. The key difference is that for a packing setting such as combinatorial auctions, one can show that any LP solution—in particular, the optimal LP solution—can be converted into a (DSIC-in-expectation, IR) mechanism without any loss in expected revenue (see Section C.1). (Consequently, [8] obtain (DSIC-in-expectation, IR) mechanisms.) Intuitively, this works because one can focus on a single player by allocating no items to the other players. Clearly, one cannot mimic this for covering problems: dropping players may render the problem infeasible, and it is not clear how to extend an LP-solution to a (DSICin-expectation, IR) mechanism for covering problems. We suspect that not every LP solution or support-based-(IC, IR) mechanism can be extended to a (DSIC, IR) mechanism, and that there is a gap between the optimal expected total payments of support-based-(IC-in-expectation, IR) and (DSIC, IR) mechanisms. 
We leave these as open problems.2 Due to this complication, we sacrifice a modicum of the IC, IR properties in favor of obtaining polytime near-optimal mechanisms and settle for the weaker, but still quite robust notion of support-based (IC-in-expectation, IR). We consider this to be a reasonable starting point for exploring mechanism-design solutions for covering problems, which leads to various interesting research directions.3 A second difficulty, which we have alluded to above, arises due to the fact that solving the LP requires one to solve the CM problem with negative-valued inputs. This is also true of packing problems [8] (where one needs to solve the SWM problem), even in the single-item setting [19] (where reserve prices arise due to negative virtual valuations). While this is not a problem if we have an optimal algorithm for the CM problem, it creates serious issues, even in the singledimensional setting, if we only have an approximation algorithm at hand; in particular, the standard notion of approximation becomes meaningless since the optimum could be negative. In contrast, for packing problems with single-dimensional types (or additive types [2, 3, 4]), these issues are more benign since one may always discard players (or options of players) with negative value. In particular, an approximation algorithm can be used to obtain an approximate separation oracle for the dual LP, and thus obtain a near-optimal solution to the primal LP via a well-known technique in approximation algorithms. (We sketch this extension of a result of [8] in Appendix C.1.) Finally, a stunning aspect where covering and packing problems diverge can be seen when one considers the idea of a k-lookahead auction [22, 8]. This was used by [8] to convert their results in 2 We show in Section 6.2 that if we expand our LP to include IC and IR constraints for a much larger (exponentialsized) set of type-profiles then, for single-dimensional settings, it is possible to extend an LP-solution to a (DSIC-inexpectation, IR) mechanism. 3 For covering problems, even formulating the LP in a way that its solution can be extended to a support-based-(IC, IR) mechanism is somewhat tricky due to the following complication. We need to argue (see Lemma 2.3) that there is an optimal support-based-(IC-in-expectation, IR) mechanism such that for every player i, and every profile c−i coming from the support of the underlying distribution, there is a type profile (mi , c−i ) under which the mechanism never procures objects from i, and include this condition in the LP (otherwise, the IC and IR conditions would force the LP-extension to make arbitrarily large payments). 4 the explicit model to the oracle model introduced by [22]. This however fails spectacularly in the covering setting. One can show that even for single-item procurement auctions, dropping even a single player can lead to an arbitrarily large payment compared to the optimum (see Appendix B). Other related work. In the economics literature, the classical results of Crémer and McLean [6, 7] and McAfee and Reny [16], also apply to covering problems, and show that one can devise a (BIC, interim IR) mechanism with correlated players whose expected total payment is at most the expected total cost incurred provided the underlying type-distribution satisfies a certain full-rank assumption. These mechanisms may however cause a player to have negative utility under certain realizations of the random type profile. 
The AMD literature has concentrated mostly on the independent-players setting [3, 4, 2, 1, 5, 9]. There has been some, mostly recent, work that also considers correlated players [22, 23, 8, 21, 2, 25]. Much of this work pertains to the revenue-maximization setting; an exception is [23], which is discussed below. Ronen [22] considers the single-item auction setting in the oracle model, where one samples from the distribution conditioned on some players’ values. He proposes the (1-) lookahead auction and shows that it achieves a 12 -approximation. Papadimitriou and Pierrakos [21] show that the optimal (DSIC, IR) mechanism for the single-item auction can be computed efficiently with at most 2 players, and is NP-hard otherwise. Cai et al. [2] give a characterization of the optimal auction under certain settings. Roughgarden and Talgam-Cohen [25] consider interdependent types, which generalizes the correlated type-distribution setting, and develop an analog of Myerson’s theory for certain such settings. Ronen and Lehmann [23] consider the PayM problem in the setting where a buyer wants to buy an item from sellers who can supply the item in on of many configurations and incur private costs for supplying the item. However, this procurement problem is in fact a packing problem: one can view a solution to be feasible if it selects at most one configuration for procurement; in particular, the buyer has the flexibility of not procuring the item. As noted earlier, this flexibility drastically alters the character of the mechanism-design problem. Not surprisingly, the results therein, which are based on lookahead auctions, do not apply in the covering setting (as noted above). Various reductions from revenue-maximization to SWM are given in [2, 3, 4]. The reductions in [2, 4] also apply to covering problems and the PayM objective, but they are incomparable to our results. These works focus on the (BIC, interim-IR) solution concept, which is a rather weak/liberal notion for correlated distributions. Most (but not all) of these consider independent players and additive valuations, and often require that the SWM-algorithm also work with negative values, which is a benign requirement for downward-closed environments such as CAs but is quite problematic for covering problems when only has an approximation algorithm. Cai et al. [2] consider correlated players and obtain mechanisms having running time polynomial in the maximum support-size of the marginal distribution of a player, which could be substantially smaller than the support-size of the entire distribution. This savings can be traced to the use of the (BIC, interim-IR) notion which allows [2] to work with a compact description of the mechanism. It is unclear if these ideas are applicable when one considers robust-(BIC, IR) mechanisms. A very interesting open question is whether one can design robust-(BIC-in-expectation, IR) mechanisms having running time polynomial in the support-sizes of the marginal player distributions (as in [2, 8]). 2 Preliminaries Covering mechanism-design problems. We adopt the formulation in [17] to describe general covering mechanism-design problems. There are some items that need to be covered, and n players who provide covering objects. Let [k] denote the set {1, . . . , k}. Each player i provides a set Ti 5 of covering objects. All this information is public knowledge. 
Player i has a private cost or type vector ci = {ci,v }v∈Ti , where ci,v ≥ 0 is the cost he incurs for providing object v ∈ Ti ; for T ⊆ Ti , P we use ci (T ) to denote v∈T ci,v . A feasible solution or allocation selects a subset Ti ⊆ Ti for each S agent i, denoting that i provides the objects in Ti , such that i Ti covers all the items. Given this solution, each agent i incurs the private cost ci (Ti ), and the mechanism designer incurs a publicly known cost pub(T1 , . . . , Tn ) ≥ 0, which may be used to encode any feasibility constraints in the covering problem. Q Let Ci denote the set of all possible types of agent i, and C = ni=1 Ci . We assume (for |T | notational simplicity) that Ci = R+ i . Let Ω := {(T1 , . . . , Tn ) : pub(T1 , . . . , Tn ) < ∞} be the (finite) set of all feasible allocations. For aQtuple x = (x1 , . . . , xn ), we use x−i to denote (x1 , . . . , xi−1 , xi+1 , . . . , xn ). Similarly, let C−i = j6=i Cj . For an allocation ω = (T1 , . . . , Tn ), we sometimes use ωi to denote Ti , ci (ω) to denote ci (ωi ) = ci (Ti ), and pub(ω) to denote pub(T1 , . . . , Tn ). We make the mild assumption that pub(ω ′ ) ≤ pub(ω) if ωi ⊆ ωi′ for all i; so in particular, if ω is feasible, then adding covering objects to the ωi s preserves feasibility. A (direct revelation) mechanism M = (A, p1 , . . . , pn ) for a covering problem consists of an allocation algorithm A : C 7→ Ω and a payment function pi : C 7→ R for each agent i. Each agent i reports a cost function ci (that might be different from his true cost function). The mechanism computes the allocation A(c) = (T1 , . . . , Tn ) = ω ∈ Ω, and pays pi (c) to each agent i. The utility ui (ci , c−i ; ci ) that player i derives when he reports ci and the others report c−i is pi (c) − ci (ωi ) where ci is his true cost function, and each agent i aims to maximize his own utility. We refer to maxi |Ti | as the dimension of a covering problem. Thus, for a single-dimensional problem, each player i’s cost can be specified as ci (ω) = ci αi,ω , where ci ∈ R+ is his private type and αi,ω = 1 if ωi 6= ∅ and 0 otherwise. The above setup yields a multidimensional covering mechanism-design problem with additive types, where by additivity we mean that the private cost thatP a player i incurs for providing a set T ⊆ Ti of objects is additive across the objects in T (i.e., it is v∈Ti ci,v ). Notice that if ci , c′i ∈ Ci , then the type ci + c′i defined by {ci,v + c′i,v }v∈Ti is also in Ci and satisfies (ci + c′i )(ω) = ci (ω) + c′i (ω) for all ω ∈ Ω. It is possible to define more general multidimensional settings, but additive types is a reasonable starting point to explore the multidimensional covering mechanism-design setting. (As noted earlier, there has been almost no work on designing polytime, near-optimal mechanisms for covering problems.) The Bayesian setting. We consider Bayesian settings where there is an underlying publiclyknown discrete and possibly correlated joint type-distribution on C from which the players’ types are drawn. We consider the so-called explicit model, where the players’ type distribution is explicitly specified. We use D ⊆ C to denote the support of the type distribution, and PrD (c) to denote the probability of realization of c ∈ C. Also, we define Di := {ci ∈ Ci : ∃c−i s.t. (ci , c−i ) ∈ D}, and D−i to be {c−i : ∃ci s.t. (ci , c−i ) ∈ D}. Solution concepts. 
A mechanism sets up a game between players, and the solution concept dictates certain desirable properties that this game should satisfy, so that one can reason about the outcome when rational players are presented with a mechanism satisfying the solution concept. The two chief properties that one seeks to capture relate to incentive compatibility (IC), which (roughly speaking) means that every agent’s best interest is to reveal his type truthfully, and individual rationality (IR), which is the notion that no agent is harmed by participating in the mechanism. Differences and subtleties arise in Bayesian settings depending on the stage at which we impose these properties and how robust we would like these properties to be with respect to the underlying type distribution. 6  Definition 2.1 A mechanism M = A, {pi } is Bayesian incentive compatible (BIC) and interim IR if for every player i and every ci ∈ Di , ci ∈ Ci , we have Ec−i [ui (ci , c−i ; ci )|ci ] ≥ Ec−i [ui (ci , c−i ; ci )|ci ] (BIC) and Ec−i [ui (ci , c−i ; ci )|ci ] ≥ 0 (interim IR), where Ec−i [.|ci ] denotes the expectation over the other players’ types conditioned on i’s type being ci . As mentioned earlier, the (BIC, interim-IR) solution concept may yet lead to ex-post “regret”, and is quite non-robust in the sense that the mechanism’s IC and IR properties rely on having detailed knowledge of the distribution; thus, in order to be confident that a BIC mechanism achieves its intended functionality, one must be confident about the “correctness” of the underlying distribution, and learning this information might entail significant cost. To remedy these weaknesses, we propose and investigate the following stronger IC and IR notions.  Definition 2.2 A mechanism M = A, {pi } is support-based IC and support-based IR, if for every player i, every ci , ci ∈ Ci , and every c−i ∈ D−i , we have ui (ci , c−i ; ci ) ≥ ui (ci , c−i ; ci ) (supportbased IC) and ui (ci , c−i ; ci ) ≥ 0 (support-based IR). Support-based (IC, IR) ensures that participating truthfully in the mechanism is in the best interest of every player even at the ex-post stage when he knows the realized types of all players. To ensure that support-based IC and support-based IR are compatible, we focus on monopoly-free settings: for every player i, there is some ω ∈ Ω with ωi = ∅. Notice that support-based (IC, IR) is subtly weaker than the notion of (dominant-strategy IC (DSIC), IR), wherein the IC and IR conditions of Definition 2.2 must hold for all c−i ∈ C−i , ensuring that truthtelling and participation are no-regret choices for a player even if the other players’ reports are outside the support of the underlying type-distribution. We focus on supportbased IC because it forms a suitable middle-ground between BIC and DSIC: it inherits the desirable robustness properties of DSIC, making it much more robust than BIC (and closer to a worst-case notion), and yet is flexible enough that one can devise polytime mechanisms satisfying this solution concept. It might seem strange that in the definition of support-based (IC, IR), we consider player i’s incentives for types outside of i’s support and for type-profiles that are inconsistent with the underlying distribution. Keeping robustness in mind, our goal here is to approach the ideal of (DSIC, IR) and we have therefore formulated the most-robust notion that permits us to devise polytime near-optimal mechanisms satisfying this notion. 
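To make the quantification in Definition 2.2 concrete, the following small Python sketch brute-forces the support-based-(IC, IR) conditions for a toy two-seller single-item procurement instance. It is purely illustrative and not part of the paper: the support D, the grid of types, and the choice of a reverse second-price auction as the mechanism (a textbook (DSIC, IR) rule, hence in particular support-based-(IC, IR)) are our own, and since Ci is infinite the check only spot-checks a finite grid of true and reported types.

```python
# Illustrative only: a brute-force check of the support-based-(IC, IR)
# conditions of Definition 2.2 on a finite grid of types.  The instance,
# the grid, and the "reverse second-price" rule below are toy choices,
# not constructions from the paper.
from itertools import product

# Toy single-item procurement auction with n = 2 sellers.
# Support D of the (correlated) joint cost distribution, as cost profiles.
D = [(1.0, 3.0), (2.0, 2.0), (4.0, 1.0)]
D_minus = {0: sorted({c[1] for c in D}),   # D_{-1}: other player's support values
           1: sorted({c[0] for c in D})}   # D_{-2}

def mechanism(bids):
    """Reverse second-price auction: buy from the lowest bidder, pay him
    the other bid.  Returns (winner index, payments)."""
    winner = 0 if bids[0] <= bids[1] else 1
    pay = [0.0, 0.0]
    pay[winner] = bids[1 - winner]
    return winner, pay

def utility(i, true_cost, bids):
    winner, pay = mechanism(bids)
    cost = true_cost if winner == i else 0.0
    return pay[i] - cost

# Definition 2.2 quantifies over all of C_i, which is infinite; here we
# only spot-check a finite grid of true types and reported types.
grid = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]

violations = []
for i in (0, 1):
    for c_i, report, c_other in product(grid, grid, D_minus[i]):
        bids_true = (c_i, c_other) if i == 0 else (c_other, c_i)
        bids_lie = (report, c_other) if i == 0 else (c_other, report)
        u_true = utility(i, c_i, bids_true)
        u_lie = utility(i, c_i, bids_lie)
        if u_true < u_lie - 1e-9:          # support-based IC
            violations.append(("IC", i, c_i, report, c_other))
        if u_true < -1e-9:                 # support-based IR
            violations.append(("IR", i, c_i, c_other))

print("violations:", violations or "none")
```

Replacing mechanism with a first-price rule (pay the winner his own bid) makes the IC check report violations, which is a quick way to see what the definition rules out.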
In Section 6, we consider various alternative solution concepts that, while weaker than support-based (IC, IR), still retain its robustness properties to a large extent, and show that our results extend easily to these solution concepts. The above definitions are stated for a deterministic mechanism, but they have analogous extensions to a randomized mechanism M; the only change is that the ui(.) and pi(.) terms are now replaced by the expected utility EM[ui(.)] and expected price EM[pi(.)] respectively, where the expectation is over the random coin tosses of M. We denote the analogous solution concept for a randomized mechanism by appending "in expectation" to the solution concept, e.g., a (BIC, interim IR)-in-expectation mechanism denotes a randomized mechanism whose expected utility satisfies the BIC and interim-IR requirements stated in Definition 2.1. A support-based-(IC, IR)-in-expectation mechanism M = (A, {pi}) can be easily modified so that the IR condition holds with probability 1 (with respect to M's coin tosses) while the expected payment to a player (again over M's coin tosses) is unchanged: on input c, if A(c) = ω ∈ Ω with probability q, the new mechanism returns, with probability q, the allocation ω, and payment ci(ω) · EM[pi(c)] / EM[ci(A(c))] to each player i (where we take 0/0 to be 0, so if ci(ω) = 0, the payment to i is 0). Thus, we obtain a mechanism whose expected utility satisfies the support-based-IC condition, and IR holds with probability 1 for all ci ∈ Ci, c−i ∈ D−i. A similar transformation can be applied to a (DSIC, IR)-in-expectation mechanism. Optimization problems. Our main consideration is to minimize the expected total payment of the mechanism. It is natural to also incorporate the mechanism-designer's cost into the objective. Define the disutility of a mechanism M = (A, {pi}) under input c to be Σi pi(c) + κ · pub(A(c)), where κ ≥ 0 is a given scaling factor. Our objective is to devise a polynomial-time support-based-(IC (in-expectation), IR) mechanism with minimum expected disutility. Since most problems we consider have pub(ω) = 0 for all feasible allocations, in which case disutility equals the total payment, abusing terminology slightly, we refer to the above mechanism-design problem as the payment-minimization (PayM) problem. (An exception is metric uncapacitated facility location (UFL), where players provide facilities and the underlying metric is public knowledge; here, pub(ω) is the total client-assignment cost of the solution ω.) We always use O∗ to denote the expected disutility of an optimal mechanism for the PayM problem under consideration. We define the cost minimization (CM) problem to be the algorithmic problem of finding ω ∈ Ω that minimizes the total cost Σi ci(ω) + pub(ω) incurred. The following technical lemma, whose proof we defer to Appendix A, will prove quite useful since it allows us to restrict the domain to a bounded set, which is essential to achieve IR with finite prices. (For example, in the single-dimensional setting, the payment is equal to the integral of a certain quantity from 0 to ∞, and a bounded domain ensures that this is well defined.) Note that such complications do not arise for packing problems. Let 1Ti be the |Ti|-dimensional all-1s vector. Let I denote the input size.
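Before turning to Lemma 2.3, the payment-rescaling step above is easy to sanity-check numerically. The sketch below is illustrative only: the outcome distribution, costs, and expected payment are made-up numbers, not derived from any construction in the paper; it merely verifies that the rescaled per-realization payments preserve the expected payment and make IR hold for every coin toss whenever the original expected utility was nonnegative.

```python
# Illustrative numeric check of the payment-rescaling step described above.
# The probabilities, costs, and expected payment below are made-up toy numbers.

# Distribution of A(c) on input c, from player i's point of view:
# allocation -> (probability, i's cost under that allocation).
outcomes = {"w1": (0.5, 4.0), "w2": (0.3, 2.0), "w3": (0.2, 0.0)}
p_expected = 3.2     # E_M[p_i(c)] paid by the original mechanism

exp_cost = sum(q * c for q, c in outcomes.values())      # E_M[c_i(A(c))]

def new_payment(cost):
    # pay c_i(w) * E_M[p_i(c)] / E_M[c_i(A(c))], with 0/0 := 0
    return 0.0 if cost == 0 else cost * p_expected / exp_cost

# (1) the expected payment is unchanged
new_expected = sum(q * new_payment(c) for q, c in outcomes.values())
assert abs(new_expected - p_expected) < 1e-9

# (2) IR now holds for every realization, not just in expectation, because
#     the per-realization utility is c_i(w) * (p_expected/exp_cost - 1),
#     which is nonnegative whenever p_expected >= exp_cost.
for q, c in outcomes.values():
    assert new_payment(c) - c >= -1e-9

print("expected payment preserved:", new_expected, "ex-post utilities:",
      [round(new_payment(c) - c, 3) for _, c in outcomes.values()])
```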
Lemma 2.3 We can efficiently compute an estimate mi > maxci ∈Di ,v∈Ti ci,v with log mi = poly(I) for all i such that there is an optimal support-based-(IC-in-expectation, IR) mechanism M ∗ = A∗ , {p∗i } where A∗ (mi 1Ti , c−i )i = ∅ with probability 1 (over the random choices of M ∗ ) for all i and all c−i ∈ D−i . It is easy to obtain the stated estimates if we consider only deterministic mechanisms, but it turns out to be tricky to obtain this when one allows randomized mechanisms due to the artifact that a randomized mechanism may choose arbitrarily high-cost solutions as long as they are chosen with S small enough probability. In the sequel, we set Di := Di ∪ {mi 1Ti } for all i ∈ [n], and D := i (D i × D−i ). Note that |D| = O(n|D|2 ). 3 LP-relaxations for the payment-minimization problem The starting point for our results is the LP (P) that essentially encodes the payment-minimization problem. Throughout, we use i to index players, c to index type-profiles in D, and ω to index Ω. We use variables xc,ω to denote the probability of choosing ω, S and pi,c to denote the expected payment to player i, for input c. For c ∈ D, let Ω(c) = Ω if c ∈ i (Di × D−i ), and otherwise if c = (mi 1Ti , c−i ), let Ω(c) = {ω ∈ Ω : ωi = ∅} (which is non-empty since we are in a monopoly-free setting). 8 X min c∈D X  X PrD (c) pi,c + κ xc,ω pub(ω) X s.t. (P) ω i ∀c ∈ D xc,ω = 1 ω pi,(ci ,c−i ) − X ci (ω)x(ci ,c−i ),ω ≥ pi,(c′i ,c−i ) − pi,(ci ,c−i ) − X ci (ω)x(ci ,c−i ),ω ≥ 0 ω X ci (ω)x(c′i ,c−i ),ω (1) ∀i, ci , c′i ∈ D i , c−i ∈ D−i (2) ω ∀i, ci ∈ D i , c−i ∈ D−i (3) ∀c, ω ∈ / Ω(c). (4) ω p, x ≥ 0, xc,ω = 0 (1) encodes that an allocation is chosen for every c ∈ D, and (2) and (3) encode the support-basedIC and support-based-IR conditions respectively. Lemma 2.3 ensures that (P) correctly encodes PayM, so that OPT := OPT P is a lower bound on the expected disutility of an optimal mechanism. Our results are obtained by computing an optimal solution to (P), or a further relaxation of it, and translating this to a near-optimal support-based-(IC-in-expectation, IR) mechanism. Both steps come with their own challenges. Except in very simple settings (such as single-item procurement auctions), |Ω| is typically exponential in the input size, and therefore it is not clear how to solve (P) efficiently. We therefore consider the dual LP (D), which has variables γc , yi,(ci ,c−i ),c′i and βi,(ci ,c−i ) corresponding to (1), (2) and (3) respectively. X max s.t. X c γc X i:c∈Di ×D−i c′i ∈Di X c′i ∈D i yi,(ci ,c−i ),c′i (D)   ci (ω)yi,(ci ,c−i ),c′i − c′i (ω)yi,(c′i ,c−i ),ci + ci (ω)βi,c + κ · PrD (c) pub(ω) ≥ γc  − yi,(c′i ,c−i ),ci + βi,ci ,c−i ≤ PrD (c) y, β ≥ 0. ∀c ∈ D, ω ∈ Ω(c) ∀i, ci ∈ D i , c−i ∈ D−i (5) (6) (7) P With additive types, one can encode the LHS of (5) as i c̃i (ω) for a suitably-defined additive type (depending on c) c̃i = (c̃i,v )v∈Ti . Thus, the separation problem for constraints (5) amounts to determining if the optimal value of the CM problem defined by a certain additive type profile, with possibly negative values, is at least γc . Hence, an optimal algorithm for the CM problem can be used to solve (D), and hence, (P), efficiently. Theorem 3.1 With additive types, one can efficiently solve (P) given an optimal algorithm for the CM problem. Proof : Let A be an optimal algorithm for the CM problem. Note that A is only required to work with nonnegative inputs. 
We first observe that we can use A to find a solution that minimizes P i ci (ω) + κ · pub(ω) for any κ ≥ 0, even for an input c = {ci,v }i,v∈Ti where some of the ci,v s are negative. Let Ai = {v ∈ Ti : ci,v < 0}. Clearly, if ω ∗ is an optimal solution, then Ai ⊆ ωi∗ (since pub(.) does not increase upon adding covering objects). Define c+ i,v := max(0, ci,v ) and + c+ := {c } . i i,v v∈Ti Let Γ = κ1 if κ > 0; otherwise let Γ = N U , where U is a strict upper bound on maxω∈Ω pub(ω) 1 ′ and N is an integer such that all the c+ i,v s are integer multiples of N . Note that for any ω, ω ∈ Ω, 9 P P + ′ 1 if i c+ i (ω) − i ci (ω ) is non-zero, then its absolute value is at least N . Also, U and N may be efficiently computed (for rational data) and log(N U ) is polynomially bounded. Let (S1 , . . . , Sn ) be the solution returned by A for the CM problem on the input where all the c+ i,v s are scaled by Γ. The choice of Γ ensures that X X   X + ∗ ci (ωi∗ ) − ci (Ai ) + κ · pub(ω ∗ ). ci (ωi ) + κ · pub(ω ∗ ) = c+ i (Si ) + κ · pub S1 , . . . , Sn ≤ i i i P P + ∗ + (The first inequality clearly holds if κ > 0. If κ = 0 and c (S ) > i i i i ci (ωi ), then we have P + ∗ P + P + ∗ Γ ∗ that Γ i ci (Si ) ≥ Γ i ci (ωi ) + N > Γ i ci (ωi ) + pub(ω ), which contradicts the optimality of ′ (S1 , . . . , SnP ) for the input {Γc+ i .) So setting ωi = Ai ∪ Si for every i yields a feasible solution i,v }i,v∈T P such that i ci (ω ′ ) + κ · pub(ω ′ ) ≤ i ci (ω ∗ ) + κ · pub(ω ∗ ); hence ω ′ is an optimal solution. Given a dual solution (y, β, γ), we can easily P check if (6), (7) hold. Fix c ∈ D and  player i. c = ′ y ′ ′ Since we have additive types, if we define θi,v c y − c ′ i,v i,(ci ,c−i ),ci i,v i,(ci ,c−i ),ci + ci,v βi,c , ci ∈Di  P ′ then for every ω ∈ Ω, we can equate c′ ∈Di ci (ω)yi,(ci ,c−i ),c′i − ci (ω)yi,(c′i ,c−i ),ci + ci (ω)βi,c with i P c . θic (ω) := v∈ωi θi,v P c Let I = {i : c ∈ Di × D−i }. Constraints (5) for c can then be written as minω∈Ω(c) i∈I θ (ω) + κ PrD (c) pub(ω) ≥ γc . Define c̃ as follows: c̃i,v   γc + 1 c = θi,v   0 if ci = mi 1Ti , if i ∈ I, ci ∈ Di , otherwise.  P It is easy to see that (5) holds for c iff minω∈Ω i c̃i (ω)+κ·PrD (c) pub(ω) —which can be computed using A—is at least γc . Thus, we can use the ellipsoid method to solve (D). This also yields a compact dual consisting of constraints (6), (7) and the polynomially-many (5) constraints that were returned by the separation oracle during the execution of the ellipsoid method, whose optimal value is OPT D . The dual of this compact dual is an LP of the same form as (P) but with polynomially many xc,ω -variables; solving this yields an optimal solution to (P). Complementing Theorem 3.1, we argue that a feasible solution (x, p) to (P) can be extended to a support-based-(IC-in-expectation, IR) mechanism having expected disutility at most the value of (x, p) (Theorem 3.2). Combining this with Theorem 3.1 yields the corollary that an optimal algorithm for the CM problem can be used to obtain an optimal mechanism for the PayM problem (Corollary 3.3). Theorem 3.2 We can extend a feasibleP solution (x,P p) to (P) toPa support-based-(IC-in-expectation,  IR) mechanism with expected disutility c PrD (c) i pi,c + κ ω xc,ω pub(ω) . Proof : Let Ω′ = {ω : xc,ω > 0 for some c ∈ D}. We use xc to denote the vector {xc,ω }ω∈Ω′ . Consider a player i, c−i ∈ D−i , and ci , c′i ∈ D i . Note that (2) implies that if x(ci ,c−i ) = x(c′i ,c−i ) ,  then pi,(ci ,c−i ) = pi,(c′i ,c−i ) . 
For c−i ∈ D−i , define R(i, c−i ) = x(ci ,c−i ) : (ci , c−i ) ∈ D , and for y = x(ci ,c−i ) ∈ R(i, c−i ) define pi,y to be pi,(ci ,c−i ) (which is well  defined by the above argument). We now define the randomized mechanism M = A, {qi } , where A(c) and qi (c) denote respectively the probability distribution over allocations and the expected payment to player i, on input c. We sometimes view A(c) equivalently as the random variable specifying the allocation chosen for input c. Fix an allocation ω0 ∈ Ω. Consider an input c. If c ∈ D, we set A(c) = xc , and qi (c) = pi,c for all i. So consider c ∈ / D. If there is no i such that c−i ∈ D−i , we simply set A(c) = ω0 , qi (c) = ci (ω0 ) 10 for all i; such a c does not figure in the support-based (IC, IR) conditions. Otherwise there is a P unique i such that c−i ∈ D−i , ci ∈ Ci \ D i . Set A(c) = arg maxy∈R(i,c−i ) pi,y − ω∈Ω′ ci (ω)yω and qj (c) = pj,A(c) for all players j. Note that P (ci , c−i ) figures in (2) only for player i. Crucially, note that since y = x(mi ,c−i ) ∈ R(i, c−i ) and ω∈Ω ci (ω)yω = 0 by definition, we always have qi (c) − EA [ci (A(c))] ≥ 0. Thus, by definition, and by (2), we have ensured that M is supportbased (IC, IR)-in-expectation and its expected disutility is exactly the value of (x, p). This can be modified so that IR holds with probability P 1. The above procedure is efficient if ω∈Ω′ ci (ω)xc,ω can be calculated efficiently. This is clearly true if |Ω′ | is polynomially bounded, but it could hold under weaker conditions as well. Corollary 3.3 Given an optimal algorithm for the CM problem, we can efficiently obtain an optimal support-based-(IC-in-expectation, IR) mechanism for the PayM problem in multidimensional settings with additive types. Using approximation algorithms. The CM problem is however often NP-hard (e.g., for vertex cover), and we would like to be able to exploit approximation algorithms for the CM problem to obtain near-optimal mechanisms. The usual approach is to use an approximation algorithm to “approximately” separate over constraints (5).4 However, this does not work here since the CM problem that one needs to solve in the separation problem involves negative costs, which renders the usual notion of approximation meaningless. Instead, if the CM problem admits a certain type of LP-relaxation (C-P), then we argue that one can solve a relaxation of (P) where the allocation-set is the set of extreme points of (C-P) (Theorem 3.4). For single-dimensional problems (Section 4), we leverage this to obtain strong and far-reaching results. We show that a ρ-approximation algorithm relative to (C-P) can be used to “round” the optimal solution to this relaxation to a supportbased-(IC-in-expectation, IR)-mechanism losing a ρ-factor in the disutility (Theorem 4.2). Thus, we obtain near-optimal mechanisms for a variety of single-dimensional problems. Suppose that the CM problem admits an LP-relaxation of the following form, where c = {ci,v }i∈[n],v∈Ti is the input type-profile. min cT x + dT z Ax + Bz ≥ b, s.t. x, z ≥ 0. (C-P) Intuitively x encodes the allocation chosen, and dT z encodes pub(.). For x ≥ 0, define z(x) := arg min{dT z : (x, z) is feasible to (C-P)}; if there is no z such that (x, z) is feasible to (C-P), set z(x) := ⊥. Define ΩLP := {x : z(x) 6= ⊥, 0 ≤ xi,v ≤ 1 ∀i, v ∈ Ti }. 
We require that: (a) a {0, 1}-vector x is in ΩLP iff it is the characteristic vector of an allocation ω ∈ Ω, and in this case, we have dT z(x) = pub(ω); (b) A ≥ 0; (c) for any input c ≥ 0 to the covering problem, (C-P) is not unbounded, and if it has an optimal solution, it has one where x ∈ ΩLP ; (d) for any c, we can efficiently find an optimal solution to (C-P) or detect that it is unbounded or infeasible. We extend the type ci of each player i and pub to assign values also to points in ΩLP : define P ci (x) = v∈Ti ci,v xi,v and pub(x) = dT z(x) for x ∈ ΩLP . Let Ωext denote the finite set of extreme points of ΩLP . Condition (a) ensures that Ωext contains the characteristic vectors of all feasible allocations. Let (P’) denote the relaxation of (P), where we replace the set of feasible allocations Ω with Ωext (so ω indexes Ωext now), and for c ∈ D with ci = mi 1(Ti ), we now define Ω(c) := 4 For revenue-maximization problems in packing domains, this simple approach does indeed work in singledimensional settings and settings with additive types. This is because the dual separation problem is now an SWM problem, and it is easy to use a ρ-approximation algorithm for the SWM problem for nonnegative inputs to obtain a ρ-approximate solution with arbitrary, positive or negative, inputs. This yields a simple extension of some of the results in [8]; see Appendix C.1. 11 {ω ∈ Ωext : ωi,v = 0 ∀v ∈ Ti }. Since one can optimize efficiently over ΩLP , and hence Ωext , even for negative type-profiles, we have the following. Theorem 3.4 We can efficiently compute an optimal solution to (P’). 4 Single-dimensional problems Corollary 3.3 immediately yields results for certain single-dimensional problems (see Table 1), most notably, single-item procurement auctions. We now substantially expand the scope of PayM problems for which one can obtain near-optimal mechanisms by showing how to leverage “LP-relative” approximation algorithms for the CM problem. (As noted earlier, and sketched in Appendix C.1, a simpler approach can be used to leverage approximation algorithms for revenue-maximization in packing domains.) Suppose that the CM problem can be encoded as (C-P). An LP-relative ρ-approximation algorithm for the CM problem is a polytime algorithm that for any input c ≥ 0 to the covering problem, returns a {0, 1}-vector x ∈ ΩLP such that cT x + dT z(x) ≤ ρOPT C-P . Using the convex-decomposition procedure in [12] (see Section 5.1 of [12]), one can show the following; the proof appears at the end of this section. Lemma 4.1 Let x ∈ ΩLP . Given an LP-relative ρ-approximation algorithm A for the CM problem, P one can efficiently obtain (λ(1) , x(1) ), . . . , (λ(k) , x(k) ), where ℓ λ(ℓ) = 1, λ ≥ 0, and x(ℓ) is a {0, 1}P P (ℓ) vector in ΩLP for all ℓ, such that ℓ λ(ℓ) xi,v = min(ρxi,v , 1) for all i, v ∈ Ti , and ℓ λ(ℓ) dT z(x(ℓ) ) ≤ ρdT z(x). Theorem 4.2 Given an LP-relative ρ-approximation algorithm for the CM problem, one can obtain a polytime ρ-approximation support-based-(IC-in-expectation, IR) mechanism for the PayM problem. Proof : We solve (P’) to obtain an optimal solution (x, p). Since |Ti | = 1 for all i, it will be convenient to view ω ∈ ΩLP as a vector {ωi }i∈[n] ∈ [0, 1]n , where ωi ≡ ωi,v for the single covering P object v ∈ Ti . Fix c ∈ D. Define yc = ω∈Ωext xc,ω ω (which can be efficiently computed since P P x has polynomial support). Then, ω∈Ωext ci (ω)xc,ω =Pci yc,i and dT z(y) ≤ ω∈Ωext Ppub(ω)xc,ω . 
By Lemma 4.1, we can efficiently find a point ỹc = x̃ ω, where x̃ ≥ 0, c ω∈Ω c,ω w∈Ω x̃c,ω = 1, in the convex hull of the {0, 1}-vectors in ΩLP such that ỹc,i = min(ρyc,i , 1) for all i, and P T w∈Ω x̃c,ω pub(ω) ≤ ρd z(y). We now argue that one can obtain payments {qi,c } such that (x̃, q) is feasible to (P) and qi,c ≤ ρpi,c for all i, c ∈ D. Thus, the value of (x̃, q) is at most ρ times the value of (x, p). Applying Theorem 3.2 to (x̃, q) yields the desired result. Fix i and c−i ∈ D−i . Constraints (4) and (2) ensure that y(mi ,c−i ),i = 0, and y(ci ,c−i ),i ≥ y(c′i ,c−i ),i for all ci , c′i ∈ Di s.t. ci < c′i . Hence, ỹ(mi ,c−i ),i = 0, ỹ(ci ,c−i ),i ≥ ỹ(c′i ,c−i ),i for ci , c′i ∈ Di , ci > c′i . Define qi,(mi ,c−i ) = 0. Let 0 ≤ c1i < c2i < . . . < cki i be the values in Di . For ci = cℓi , define qi,(ci ,c−i ) = ci ỹ(ci ,c−i ),i + ki X (cti − ct−1 i )ỹ(cti ,c−i ),i . t=ℓ+1 P Since ω∈Ω ci (ω)x̃(ci ,c−i ),ω = ci ỹ(ci ,c−i ),i , (3) holds. By construction, for consecutive values ci =  cℓi , c′i = cℓ+1 i , we have qi,(ci ,c−i ) − qi,(c′i ,c−i ) = ci ỹ(ci ,c−i ),i − ỹ(c′i ,c−i ),i , which is at most   ρ · ci y(ci ,c−i ),i − y(c′i ,c−i ),i ≤ ρ pi,(ci ,c−i ) − pi,(c′i ,c−i ) . 12 Since qi,(mi ,c−i ) = 0 ≤ ρpi,(mi ,c−i ) , this implies that qi,(ci ,c−i ) ≤ ρpi,(ci ,c−i ) . Finally,  it is easy to verify that for any ci , c′i ∈ Di , we have qi,(ci ,c−i ) − qi,(c′i ,c−i ) ≥ ci ỹ(ci ,c−i ),i − ỹ(c′i ,c−i ),i , so (x̃, q) satisfies (2). Corollary 3.3 and Theorem 4.2 yield polytime near-optimal mechanisms for a host of singledimensional PayM problems. Table 1 summarizes a few applications. Even for single-item procurement auctions, these are the first results for PayM problems with correlated players satisfying a notion stronger than (BIC, interim IR). Problem Approximation Single-item procurement auction: buy one item provided by n players Metric UFL: players are facilities, output should be a UFL solution Vertex cover: players are nodes, output should be a vertex cover Set cover: players are sets, output should be a set cover Steiner forest: players are edges, output should be a Steiner forest Multiway cut (a), Multicut (b): players are edges, output should be a multiway cut in (a), or a multicut in (b) Due to 1 Corollary 3.3 1.488 using [15] Theorem 4.2 2 Theorem 4.2 O(log n) 2 Theorem 4.2 Theorem 4.2 2 for (a) O(log n) for (b) Theorem 4.2 Table 1: Results for some representative single-dimensional PayM problems. Proof of Lemma 4.1 : It suffices to show that the LP (Q) can be solved in polytime and its optimal value is 1. Throughout, we use ℓ to index {0, 1} vectors in ΩLP . (Recall that these correspond to feasible allocations.) X max s.t. λ(ℓ) X  λ(ℓ) dT z(x(ℓ) ) ≤ ρdT z(x) (ℓ) X X min(ρxi,v , 1)αi,v + ρdT z(x) · β + θ X  (ℓ) xi,v αi,v + dT z(x(ℓ) ) β + θ ≥ 1 (R) i,v∈Ti λ(ℓ) xi,v = min(ρxi,v , 1) ∀i, v ∈ Ti ℓ min ℓ X ℓ (Q) λ(ℓ) ≤ 1 (8) s.t. i,v∈Ti (9) ∀ℓ (11) β, θ ≥ 0. (10) ℓ λ ≥ 0. Here the αℓ s, β and θ are the dual variables corresponding to constraints (8), (9), and (10) respectively. Clearly, OPT (R) ≤ 1 since θ = 1, αi,v = 0 = β for all i, v is a feasible dual solution. Suppose (α̂, β̂, θ̂) is a feasible dual solution of value less than 1. Set α̃i,v = α̂i,v if α̂i,v ≥ 0 and ρxi,v ≤ 1, and α̃i,v = 0 otherwise. Let Γ = 1 if β̂ > 0 and equal to 2N dT z otherwise, where N is β̂ is such that for all {0, 1}-vectors x(ℓ) ∈ ΩLP , we have that cT x(ℓ) > cT x implies cT x(ℓ) ≥ cT x + N1 . Note that we can choose N so that its size is poly(I, size of x). 
Consider the CM problem defined by the input Γα̃. Running A on this input, we obtain a {0, 1}-vector x(ℓ) ∈ ΩLP whose total cost is at most ρ times the cost of the fractional solution x, z(x) . This translates to  X X (ℓ)  xi,v α̃i,v + dT z(x) · β̂ . (12) xi,v α̃i,v + dT z(x(ℓ) ) β̂ ≤ ρ i,v i,v 13 Now augment x(ℓ) to the following {0, 1}-vector x̂: set x̂i,v = 1 if ρxi,v > 1 or α̂i,v < 0, and (ℓ) xi,v otherwise. Then x̂ is the characteristic vector of a feasible allocation, since we have only added covering objects to the allocation corresponding to x(ℓ) ; hence x̂ ∈ ΩLP . We have dT z(x̂) =  pub(x̂) ≤ pub(x(ℓ) ) = dT z(xℓ ) and X i,v x̂i,v α̂i,v = X α̂i,v + i,v:ρxi,v >1 or α̂i,v <0 X i,v Combined with (12), this shows that X X x̂i,v α̂i,v + dT z(x̂)β̂ ≤ i,v (ℓ) xi,v α̃i,v ≤ X min(ρxi,v , 1)α̂i,v + i,v:ρxi,v >1 or α̂i,v <0 min(ρxi,v , 1)α̂i,v + i,v:ρxi,v >1 or α̂i,v <0 = X X (ℓ) xi,v α̃i,v . i,v X ρxi,v α̃i,v + ρdT z(x) · β̂ i,v:α̃i,v >0 min(ρxi,v , 1)α̂i,v + ρdT z(x) · β̂ < 1 − θ̂ i,v which contradicts that (α̂, β̂, θ̂) is feasible to (R). Hence, OPT (Q) = OPT (R) = 1. P Thus, we can add the constraint i,v∈Ti min(ρxi,v , 1)αi,v + ρdT z(x) · β + θ ≤ 1 to (R) without altering anything. If we solve the resulting LP using the ellipsoid method, and take the inequalities corresponding to the violated inequalities (11) found by A during the ellipsoid method, then we obtain a compact LP with only a polynomial number of constraints that is equivalent to (R). The dual of this compact LP yields an LP equivalent to (Q) with a polynomial number of λ(ℓ) variables which we can solve to obtain the desired convex decomposition. 5 Multidimensional problems We obtain results for multidimensional PayM problems via two distinct approaches. One is by directly applying Corollary 3.3 (e.g., Theorem 5.1). The other approach is based on again moving to an LP-relaxation of the CM problem and utilizing Theorem 3.4 in conjunction with a stronger LP-rounding approach. This yields results for multidimensional (metric) UFL and its variants (Theorem 5.3). Multi-item procurement auctions. Here, we have n sellers and k (heterogeneous) items. Each seller i has a supply vector si ∈ Zk+ denoting his supply for the various items, and the buyer has a demand vector d ∈ Zk+ specifying his demand for the various items. This is public knowledge. Each seller i has a private cost-vector ci ∈ Rk+ , where ci,ℓ is the cost he incurs for supplying one unit of item ℓ. A feasible solution is an allocation specifying how many units of each item each seller supplies to the buyer such that for each item ℓ, each seller i provides at most si,ℓ units of ℓ and the buyer obtains dℓ total units of ℓ. The corresponding CM problem is a min-cost flow problem (in a bipartite graph), which can be efficiently solved optimally, thus we obtain a polytime optimal mechanism. Theorem 5.1 There is a polytime optimal support-based-(IC-in-expectation, IR) mechanism for multi-unit procurement auctions. 14 Multidimensional budgeted (metric) uncapacitated facility location (UFL). Here, we have a set E of clients that need to be serviced by facilities, and a set F of locations where facilities may be opened. Each player i may provide facilities at the locations in Ti ⊆ F. We may assume that the Ti s are disjoint. For each facility ℓ ∈ Ti that is opened, i incurs a private opening cost fℓ ≡ fi,ℓ , and assigning client j to an open facility ℓ incurs a publicly-known assignment cost dℓj , where the dℓj s form a metric. 
We are also given a public assignment-cost budget B. The goal in Budget-UFL is to open a subset F ⊆P F of facilities and assign P P each client j to an open facility σ(j) ∈ F so as to minimize ℓ∈F fℓ + j∈E dσ(j)j subject to j∈E dσ(j)j ≤ B; UFL is the special case where B = ∞. We can define pub(T1 , . . . , Tn ) to be the total assignment cost if this is at most B, and ∞ otherwise. Let O∗ denote the expected disutility of an optimal mechanism for Budget-UFL. We obtain a mechanism with expected disutility at most 2O∗ that always returns a solution with expected assignment cost at most 2B. Consider the following LP-relaxation for Budget-UFL. X X min f ℓ xℓ + dℓj zℓj s.t. (BFL-P) X j∈E,ℓ∈F ℓ∈F dℓj zℓj ≤ B, X j∈E,ℓ∈F zℓj ≥ 1 ∀j ∈ E, 0 ≤ zℓj ≤ xℓ ∀ℓ ∈ F, j ∈ E. ℓ∈F Let (FL-P) denote (BFL-P) with B = ∞, and OPT FL-P denote its optimal value. We say that an algorithm A is a Lagrangian multiplier preserving (LMP)Pρ-approximation algorithm for UFL if P for every instance, it returns a solution (F, σ) such that ρ ℓ∈F fℓ + j∈E dσ(j)j ≤ ρ · OPT FL-P . In [17], it is shown that given such an algorithm A, one can take any solution (x, z) to (FL(k) (k) , σ (k) )—so P) andPobtain a convex combination of UFL solutions (λ(1)P ; F (1) , σ (1) P), . . . , (λ  ; F P P (r) (r) (r) λ ≥ 0, r λ = 1—such that r:ℓ∈F (r) λ = xℓ for all ℓ and r λ j,ℓ dℓj zℓj . j dσ(r) (j)j ≤ ρ An LMP 2-approximation algorithm for UFL is known [10]. Lemma 5.2 Given an LMP ρ-approximation algorithm for UFL, one can design a polytime supportbased-(IC-in-expectation, IR) mechanism for Budget-UFL whose expected disutility is at most ρO ∗ while violating the budget by at most a ρ-factor. Proof : The LP-relaxation (BFL-P) for the CM problem is of the form (C-P) and satisfies the required properties. Recall that for x ≥ 0, z(x) denotes the min-cost completion of x to a feasible solution to (BFL-P) if one exists, and is ⊥ if there is no such completion of x. Let ΩLP := {x : z(x) 6= ⊥, 0 ≤ xℓ ≤ 1 ∀ℓ}. For integral ω ∈ ΩLP , z(ω) specifies the assignment where each client j is assigned to the nearest open facility. By Theorem 3.4, one can efficiently compute an optimal solution (X, p) to the relaxation of (P) where the set of feasible allocations is the set Ωext of extreme points of ΩLP . We round (X, p) to a feasible solution to (P) by proceeding as in the proof of Theorem 4.2. Let ΩUFL be the set of characteristic vectors of open facilities of all integral UFLP solutions. We yc = ω∈Ωext Xc,ω ω, useP ℓ to index facilities inPF and j to index clients P P in E. Fix c ∈ D. Define zc,ℓj dℓj ≤ B. We use so w∈Ωext ci (ω)Xc,ω = ℓ∈Ti fℓ yc,ℓ . Let zc = ω∈Ωext Xc,ω z(ω), so j,ℓP the LMP ρ-approximation algorithm to express yc as a convex PcombinationP ω∈ΩUFL x̃c,ω ω of (integral) UFL-solutions such that the expected assignment cost ω∈ΩUFL x̃c,ω j,ℓ z(ω)ℓj dℓj is at most P ρ j,ℓ dℓj zc,ℓj ≤ ρB. Hence, (x̃, p) is a feasible solution to (P). Theorem 3.2 now yields the desired mechanism. 15 Theorem 5.3 There is a polytime support-based-(IC-in-expectation, IR) mechanism for BudgetUFL with expected disutility at most 2O∗ , which always returns a solution with expected assignment cost at most 2B. 6 Extensions: alternative solution concepts We now investigate the PayM problem under various alternative solution concepts. 
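Before stating the resulting guarantee, here is a minimal sketch of how the LP-relaxation (BFL-P) can be set up and solved for a concrete instance. It is illustrative only: the two-facility, two-client instance and the use of scipy's linprog are our own choices (the paper treats the LP abstractly and does not prescribe a solver), and the sketch stops at the fractional solution, i.e., it does not perform the LMP-based convex decomposition of Lemma 5.2.

```python
# Illustrative sketch: solving the LP-relaxation (BFL-P) for a tiny
# Budget-UFL instance with scipy.  All instance numbers are made up.
import numpy as np
from scipy.optimize import linprog

F, E = 2, 2                      # facilities (here: one per player) and clients
f = np.array([3.0, 1.0])         # (reported) facility-opening costs
d = np.array([[1.0, 4.0],        # d[l][j]: metric assignment costs
              [5.0, 2.0]])
B = 6.0                          # public assignment-cost budget

# Variable order: x_0, x_1, z_00, z_01, z_10, z_11
obj = np.concatenate([f, d.flatten()])

A_ub, b_ub = [], []
# each client fully assigned: -sum_l z_lj <= -1
for j in range(E):
    row = np.zeros(F + F * E)
    for l in range(F):
        row[F + l * E + j] = -1.0
    A_ub.append(row); b_ub.append(-1.0)
# z_lj <= x_l
for l in range(F):
    for j in range(E):
        row = np.zeros(F + F * E)
        row[l] = -1.0
        row[F + l * E + j] = 1.0
        A_ub.append(row); b_ub.append(0.0)
# budget constraint: sum_lj d_lj * z_lj <= B
row = np.zeros(F + F * E)
row[F:] = d.flatten()
A_ub.append(row); b_ub.append(B)

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * (F + F * E), method="highs")
print("LP value:", res.fun, "x =", res.x[:F])
```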
In Section 6.1, we consider solution concepts weaker than support-based (IC, IR), but yet robust enough to ensure that truthful participation is an ex-post no-regret choice for every player at every type profile in the support of the underlying distribution. We show that all our guarantees extend readily to these solution concepts. (Note that a weaker solution concept does not necessarily mean that the corresponding PayM mechanism-design problem is a simpler problem; a weaker solution concept enlarges the space of allowed mechanisms, which could make it more- or less- difficult to search for an optimal mechanism.) In Section 6.2, we consider the stronger solution concept of (DSIC (inexpectation, IR), and obtain results for single-dimensional settings but at the expense of increasing the running time to exponential in the number of players. 6.1 Solution concepts weaker than support-based (IC, IR) Consider the following weakenings of support-based (IC, IR) (Definition 2.2). For every player i, ui (ci , c−i ; ci ) ≥ ui (ci , c−i ; ci ), ui (ci , c−i ; ci ) ≥ 0, for all (ci , c−i ) ∈ D, ci : (ci , c−i ) ∈ D (13) ui (ci , c−i ; ci ) ≥ ui (ci , c−i ; ci ), ui (ci , c−i ; ci ) ≥ 0, for all (ci , c−i ) ∈ D, ci ∈ Ci (14) ui (ci , c−i ; ci ) ≥ ui (ci , c−i ; ci ), ui (ci , c−i ; ci ) ≥ 0, for all ci , ci ∈ Di , c−i ∈ D−i (15) All three solution concepts, (13)–(15), ensure that truthful participation is in the best interest of every player i at every type-profile in D even at the ex-post stage when he knows the realized types of all players, but for varying choices of lies: in (14), the lie could be anything, so a mechanism satisfying (14) is (BIC, interim IR) for every distribution whose support is a subset of D; in (15), the “best interest” is among lies consistent with i’s support; and in (13), the “best interest” is among lies consistent with the support of the distribution. We now argue that our results extend to these notions. For notions (13) and (15), one can simply incorporate all the IC and IR constraints in the LP. Note that there are O(n|D|2 ) such constraints under (13), and O(n|D|3 ) constraints under (15), so the size of the resulting LP is poly(n, |D|). Theorem 3.1 continues to hold for the resulting LP, due to the same arguments. For both notions, an LP solution immediately yields a randomized mechanism satisfying that notion, except that utility is replaced by expected utility; for type profiles not included in the LP, we may output any outcome ω0 (and any prices). The refinements for single-dimensional settings and multidimensional FL work in the same fashion as before: the appropriate LP (e.g., (P’)) is modified to include the appropriate set of IC and IR constraints and solved as before. The rounding of an LP solution to obtain a suitable mechanism proceeds as before. Consequently, all of our results extend to these two notions with minimal effort. For notion (14), we incorporate constraints (14) but restrict ci to lie in D i . Recall that D i = Di ∪ {mi 1Ti }. We also include constraints (4) as before. Again, the resulting LP can be solved given an optimal algorithm for the CM problem. The LP solution can be extended to a mechanism exactly as in Theorem 3.2, and it is easy to see that this extension satisfies (14). Therefore, all our results extend to this notion as well. 
16 6.2 Dominant-strategy IC mechanisms We can strengthen our results from Section 4 to obtain (near-) optimal dominant-strategy incentive compatible (DSIC) mechanisms for single-dimensional problems in time exponential in n. Thus, we obtain polytime mechanisms for any constant number of players. The key change is in Q the LP (P) (or (P’)), where we now enforce (1)–(4) for every player i and every type profile in i D i . (Note that, as before, we can only enforce IC and IR constraints for a finite set of type profiles.) Theorem 3.1, as also the rounding procedure and arguments in Theorem 4.2 proceed essentially identically to yield a near-optimal solution to Q this LP. We prove that in single-dimensional settings, enforcing the IC, IR constraints for the set i Di of type profiles enables one to extend the LP solution to a (DSIC-in-expectation, IR) mechanism without increasing the expected disutility. Thus, we obtain the same guarantees as in Table 1, but under the stronger solution concept of (DSIC-in-expectation, IR). We focus on single-dimensional settings here because at various places, our arguments rely on the well-known equivalence between monotonic allocation rules and DSIC-implementable allocation rules. We do not know if a similar result holds for multidimensional settings. This, and various other unanswered questions emerge from our result; we mention a few of these below, before delving into our construction for single-dimensional settings. (a) In multidimensional settings, what finite subsets C ′ ⊆ C of the type space have the property that enforcing the IC and IR constraints for every player i and every type profile in C ′ suffice to extend an LP solution to a (DSIC-in-expectation, IR mechanism)? Q (b) Does C ′ = i D i have this extension property (as is the case in single-dimensional settings)? (c) Is there some C ′ of size poly(n, |D|) with this extension property? We now describe briefly the changes required to obtain (DSIC-in-expectation, IR) mechanisms. Analogous to Lemma 2.3, we can Q obtain estimates mi such that there is an optimal mechanism M ∗ such that on any input c ∈ i (Di ∪ {mi }) where ci < mi for at least one i, M ∗ only buys the item with probability from a player i with ci < mi (the same proof approach works). Let Q Q non-zero e ei := Di := Di ∪ {mi } for uniformity of notation. For e D := i D i and D−i := j6=i D j ; also, let D e define Ω(c) = {ω ∈ Ω : ωi = ∅ for all i s.t. ci = mi }, if there is some i such that ci < mi , c ∈ D, and Ω otherwise. In our LP (P), or its relaxation (P’) (where we move to an LP-relaxation of the ei CM problem and consider the allocation-set Ωext ), we now enforce (1)–(4) for all i, all ci , c′i ∈ D e−i . and all c−i ∈ D Let (K-P), and (K-P’) (with allocation-set ΩLP ) denote these new LPs. When n is a constant, both LPs have a polynomial number of constraints. So again by considering the dual, we can efficiently compute: (i) an optimal solution to (K-P) given an optimal algorithm for the CM problem; and (ii) an optimal solution to (K-P’). If the CM problem can be encoded via (C-P) and we have an LP-relative approximation algorithm for the CM problem, then one can use the rounding procedure described in Theorem 4.2 to round the optimal solution to (K-P’) to a nearoptimal solution to (K-P); the arguments are essentially identical. So suppose that we have a near-optimal solution (x, p) to (K-P). We extend (x, p) to a (DSIC in-expectation, IR) mechanism M = A, {qi } without increasing the expected disutility. 
Here, A(c) and qi (c) denote as before the allocation-distribution and expected payment to i, on input c. P Define yc = ω xc,ω ω, where we treat ω as a vector in {0, 1}n with ωi ≡ ωi,v for the single covering object v ∈ Ti . Let 0 ≤ c1i < c2i < . . . < cki i = cmax be the values in Di , and set ciki +1 := mi . i  e as follows: set H(c) := Hi (ci ) Define the mapping H : C → D , where Hi (ci ) is cr+1 if i i=1,...,n  r+1 r ci ∈ (ci , ci ], r ≤ ki , and mi if ci ≥ mi . Define H−i (c−i ) := Hj (cj ) j6=i . 17 Consider c ∈ C. If ci ≤ cmax for at least one i, we set A(c) = yH(c) . If ci > cmax for all i, we set i i A(c) as in the VCG mechanism. Since we are in the single-dimensional setting, if we show that for all i, c−i ∈ C−i , A(c) R ∞ i is non-increasing in ci and hits 0 at some point, then we know that setting qi (c) = ci A(c)i + ci A(t, c−i )i dt ensures such that M = A, {qi } is (DSIC, IR)-in-expectation. Consider some i, c−i ∈ C−i . If cj ≤ cmax for some j 6= i, then A(c) = yH(c) . Since Hi is j non-decreasing in ci and yc,i is non-increasing in ci (which is easily verified), it follows that A(c)i e−i , then one can argue as in the proof of Theorem 4.2 that is non-increasing in ci . Also, if c−i ∈ D P qi (c) ≤ pi,c . Hence, M has expected total payment at most c,i PrD (c)pi,c . Suppose cj > cmax for j all j 6= i. Then, Hj (cj ) = mj for all j 6= i. So A(c) = yH(c) for ci ≤ cmax , and is the VCG allocation i max , and the VCG allocation for c > cmax , which is for ci > cmax . Therefore, A(c) = 1 for c ≤ c i i i i i i clearly non-increasing in ci . Theorem 6.1 For single-dimensional problems with a constant number of players, we obtain the same guarantees as in Table 1, but under the stronger solution concept of DSIC-in-expectation and IR. Acknowledgments We thank the anonymous reviewers and referees of the conference and journal submissions for various useful comments. Section 6.1 was prompted by questions raised by an anonymous referee of the journal submission regarding the solution concepts (13)-(15). References [1] S. Alaei, H. Fu, N. Haghpanah, J. Hartline, and A. Malekian. Bayesian optimal auctions via multi- to single-agent reduction. In Proc., EC, 17, 2012. [2] Y. Cai, C. Daskalakis, and S. Weinberg. Optimal multi-dimensional mechanism design: reducing revenue to welfare maximization. In Proc., FOCS, 130–139, 2012. [3] Y. Cai, C. Daskalakis, and S. Weinberg. Reducing revenue to welfare maximization: approximation algorithms and other generalizations. In Proc., SODA, 2013. [4] Y. Cai, C. Daskalakis, and S. M. Weinberg. Understanding incentives: mechanism design becomes algorithm design. In Proc., FOCS, 2013. [5] S. Chawla, J. Hartline, D. Malec, and B. Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proc., STOC, 311–320, 2010. [6] J. Crémer, and R. McLean. Optimal selling strategies under uncertainty for a discriminating monopolist when demands are interdependent. Econometrica, 53, 2, 345–361, 1985. [7] J. Crémer, and R. McLean. Full extraction of the surplus in Bayesian and dominant strategy auctions. Econometrica 56, 6, 1247–1257, 1988. [8] S. Dobzinski, H. Fu, and R. Kleinberg. Optimal auctions with correlated bidders are easy. In Proc. STOC, 129–138, 2011. [9] S. Hart and N. Nisan. Approximate revenue maximization with multiple items. In Proc., EC, 656, 2012. 18 [10] K. Jain, M. Mahdian, E. Markakis, A. Saberi, and V. Vazirani. Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. J. ACM, 50:795–824, 2003. [11] K. 
Jain, M. Mahdian, and M. Salavatipour. Packing Steiner trees. In Proc. SODA, pages 266–274, 2003. [12] R. Lavi and C. Swamy. Truthful and near-optimal mechanism design via linear programming. Journal of the ACM, 58(6): 25, 2011. [13] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys 13, 227–286, 1999. [14] V. Krishna. Auction Theory. Academic Press, 2010. [15] S. Li. A 1.488 approximation algorithm for the uncapacitated facility location problem. Inf. Comput., 222:45–58, 2013. [16] R. McAfee and P. Reny. Correlated information and mechanism design. Econometrica 60, 2, 395–421, 1992. [17] H. Minooei and C. covering problems. Swamy. Truthful mechanism design for multidimensional In Proc., WINE, 448–461, 2012. Detailed version at http://www.math.uwaterloo.ca/~cswamy/papers/coveringmech-arxiv.pdf. [18] H. Minooei and C. Swamy. Near-optimal and robust mechanism design for covering problems with correlated players. In Proc. WINE, 2013. [19] R. Myerson. Optimal auction design. Math. of Operations Research, 6:58–73, 1981. [20] N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani. Algorithmic Game Theory. Cambridge University Press, September 2007. [21] C. Papadimitriou and G. Pierrakos. On optimal single-item auctions. In Proc., STOC, 119–128, 2011. [22] A. Ronen. On approximating optimal auctions. In Proc. EC, 11–17, 2001. [23] A. Ronen and D. Lehmann. Nearly optimal multi attribute auctions. In Proc. EC, pages 279–285, 2005. [24] A. Ronen and A. Saberi. On the hardness of optimal auctions. In Proc., FOCS, 396–405, 2002. [25] T. Roughgarden and I. Talgam-Cohen. Optimal and near-optimal mechanism design with interdependent values. In Proc., EC, 767–784, 2013. 19 A Proof of Lemma 2.3 Consider the following LP, which is the same as (P) except that we only consider c ∈ X  X X min PrD (c) qi,c + κ xc,ω pub(ω) c∈D s.t. ω i X ∀c ∈ xc,ω = 1 ω qi,(ci ,c−i ) − X ci (ω)x(ci ,c−i ),ω ≥ qi,(c′i ,c−i ) − qi,(ci ,c−i ) − X ci (ω)x(ci ,c−i ),ω ≥ 0 ω X S i (Di × D−i ). [ (Di × D−i ) (LP) (16) i ci (ω)x(c′i ,c−i ),ω ∀i, ci , c′i ∈ Di , c−i ∈ D−i (17) ω ∀i, ci ∈ Di , c−i ∈ D−i (18) ω q, x ≥ 0. (19)  Let M = A, {pi } be an optimal mechanism. Recall that O∗ is the expected disutility of M . Then, M naturally yields a feasible solution (x, q) to (LP) of objective value O∗ , where xc,ω = PrM [A(c) = ω] and qi,c = EM [pi (c)]. Let (x̂, q̂) be an optimal basic solution to (LP). Then, for some N such that log N is polynomially bounded in the input size I, we can say that the 1 values S of all variables are integer multiples of N , and log(N x̂c,ω ), log(N q̂i,c ) = poly(I) for all i, c ∈ i (Di × D−i ), ω. First, we claim that we may assume that for every i, ci ∈ Di , c−i ∈ D−i , if whenever x̂c,ω > 0 we P have ωi = ∅ (where c = (ci , c−i )), then q̂i,c = 0. If not, then (17) implies that q̂i,(c̃i ,c−i ) − ω c̃i (ω)x(c̃i ,c−i ),ω ≥ q̂i,c for all c̃i ∈ Di and decreasing q̂i,(c̃i ,c−i ) by q̂i,c for all c̃i ∈ Di continues to satisfy (17)–(19).  P P Set mi := max 2 i,v∈Ti maxci ∈Di ci,v , N i,c q̂i,c for all i. So log mi = poly(I). Recall that S D i := Di ∪ {mi 1Ti } for all i ∈ [n], and D := i (D i × D−i ). S Now we extend (x̂, q̂) to (x̃, q̃) that assigns values also to type-profiles in D\ i (Di ×D−i ) so that ′ constraints (16)–(19) hold for all i, cS i , ci ∈ D i , c−i ∈ D−i . First set x̃c,ω = x̂c,ω , q̃i,c = q̂i,c for all i, ω, S c ∈ i (Di × D−i ). Consider c ∈ D \ i (Di × D−i ), and let i be such that ci = mi 1Ti (there is exactly one such i). 
We “run” VCG on c considering only the cost incurred by the P Pplayers. That P is, we set x̃c,ω = 1 for ω = ω(c) := arg minω∈Ω i ci (ω) and pay q̃i,c = minω∈Ω:ωi =∅ j cj (ω) − j6=i cj ω(c) to each player i. Note that the choice of mi ensures that ω(c)i = ∅ and hence, q̃i,c = 0. We claim that this extension satisfies (16)–(19) for all i, ci , c′i ∈ Di , c−i ∈ D−i . Fix i, ci , c′i ∈ D i , c−i ∈ D−i . It is clear that (16), (19) hold. If ci ∈ Di , then (18) clearly holds; if ci = mi 1Ti , then it again holds since x̃c,ω = 1 for ω = ω(c) and ω(c)i = ∅. To verify (17), we consider four cases. If ci , c′i ∈ Di , then (17) holds since (x̃, q̃) extends (x̂, q̂). If ci = c′i = mi 1Ti , then (17) trivially holds. If ci ∈ Di , c′i = mi 1Ti , then (17) holds since the RHS of (17) is 0 (as x̃(c′i ,c−i ),ω(c′i ,c−i ) = 1 and ω(c′i , c−i )i = ∅). We are left with the case ci = mi 1Ti and c′i ∈ Di . If whenever x̃(c′i ,c−i ),ω = ′ x̂(c′i ,c−i ),ω > 0 we have ωi = ∅, then we also have q̃i,(c′i ,c−i ) = claim, so Pq̂i,(ci ,c−i ) = 0 by ourmearlier i the RHS of (17) is 0, and (17) holds. Otherwise, we have ω ci (ω)x̃(c′i ,c−i ),ω ≥ N ≥ q̂i,(c′i ,c−i ) , so the RHS of (17) is at most 0, and (17) holds. Thus, we have shown that (x̃, q̃) is a feasible solution to (P). Now we can apply Theorem 3.2 to extend (x̃, q̃) and obtain IR) mechanism M ∗ whose expected  P a support-based-(IC-in-expectation, P ∗ disutility is at most c,i PrD (c) q̃i,c + ω x̃c,ω pub(ω) ≤ O . Since x̃(mi 1Ti ,c−i ),ω > 0 implies that ωi = ∅ for all i, M ∗ satisfies the required conditions. 20 B Inferiority of k-lookahead procurement auctions The following k-lookahead auction was proposed by [8] for the single-item revenue-maximization problem generalizing the 1-lookahead auction considered by [22, 24]: on input v = (v1 , . . . , vn ), pick the set I of k players with highest values, and run the revenue-maximizing (DSIC, IR) mechanism for player-set I where the distribution we use for I is the conditional distribution of the values for I given the values (vi )i∈I / for the other players. Dobzinski et al. [8] show that the k-lookahead auction achieves a constant-fraction of the revenue of the optimal (DSIC, IR) mechanism. For any k ≥ 2, we can consider an analogous definition of k-lookahead auction for the single-item procurement problem: on input c, we pick the set I of k players with smallest costs, and run the payment-minimizing support-based-(IC-in-expectation, IR) mechanism for I for the conditional distribution of I’s costs given (ci )i∈I / . We call this the k-lookahead procurement auction. The following example shows that the expected total payment of the k-lookahead procurement auction can be arbitrarily large, even when k = n − 1 (that is, we drop only 1 player), and compared to the optimal expected total payment of even a deterministic (DSIC, IR) mechanism. Let t = K + ǫ where ǫ > 0, and δ > 0. The distribution D consists of n points: each c in 1−δ , and the {c : cn = t, ∃i ∈ [n − 1] s.t. ci = 0, cj = K ∀j 6= i, n} has probability PrD (c) = n−1 type-profile c where ci = K ∀i 6= n, cn = t has probability PrD (c) = δ. Let k = n − 1. The k-lookahead procurement auction will always select the player-set I = {1, . . . , n − 1}, and the conditional distribution of values of players in I is simply D. Let M ′ be the support-based-(IC-in-expectation, IR) mechanism for the players in I under this conditional distribution D. Suppose that on input (K, K, . . . 
, K, t), the k-lookahead auction (which runs M ′ ) P buys the item from player i ∈ I with probability xi . Clearly, i∈I xi = 1. Then, on the input c̃ where c̃i = 0, c̃j = K for all j 6= i, n, c̃n = t, the mechanism must also buy the item from player i with probability at least xi since M ′ is support-based IC. So since M ′ is support-based (IC-inexpectation, IR), the payment to player i under input c̃ is at least K, and therefore the expected P 1−δ = K(1−δ) total payment of the k-lookahead auction is at least K( i∈I xi ) · n−1 n−1 .  Now consider the following mechanism M = A, {pi } . Consider input c. If some player i < n has ci = 0, M buys the item from such a player i (breaking ties in some fixed way). Otherwise, if cn ≤ t, M buys from player n; else, M buys from the player i with smallest ci . It is easy to verify that for every i and c−i , this allocation rule A is monotonically decreasing in ci . Let pi (c) = 0 if M does not buy the item from i on input c, and max{z : M buys the item from i on input (z, c−i )} otherwise. By a well-known fact, (see, e.g., Theorem 9.39 in [20]), M is DSIC and IR. Then, the total payment under any c ∈ D for which ci = 0 for some i, is 0, and the total payment under the input where ci = K for all i 6= n, cn = t is t. So the expected total payment of M is tδ. K(1−δ) , which can be Thus the ratio of the expected total payments of M ′ and M is at least tδ(n−1) made arbitrarily large by choosing δ and ǫ = t − K small enough. C Insights for revenue-maximization in packing domains Our study of PayM problems also leads to some interesting insights into the revenue-maximization problem in packing settings with correlated players. We state two results that are obtained via relatively-simple observations, but we believe are nevertheless of interest. In Section C.1, we justify our comment in the Introduction that the problem of extending an LP solution to a suitable mechanism becomes much easier in a packing setting such as combinatorial auctions (CAs). We show that any solution to an LP-relaxation similar to (P) for the revenuemaximization problem in CAs can be extended to a (DSIC-in-expectation, IR) mechanism without 21 any loss in revenue. In Section C.2, we obtain a noteworthy extension of a result in [8]. We show that in single-dimensional packing settings, a ρ-approximation algorithm for the SWM problem can be used to obtain a ρ-approximation (DSIC-in-expectation, IR)-mechanism for the revenuemaximization problem. We obtain this by noting that a ρ-approximation for the SWM problem can be used to obtain a ρ-approximate separation oracle for the dual of the revenue-maximization LP, despite the fact that the separation problem is an SWM problem possibly involving negative-valued inputs5 , which then yields, in a fairly-standard way, a ρ-approximate solution to the revenuemaximization LP. C.1 Extending LP solutions to (DSIC-in-expectation, IR) mechanisms We consider the prototypical problem of combinatorial auctions; similar arguments can be made for other packing domains. In CAs, a feasible allocation ω is one that allots a disjoint set ωi of items (which could be empty) to each player i, and player i’s value under allocation ω is vi (ωi ), where vi : 2[m] 7→ R+ is player i’s private valuation function. Q We use vi (ω) to denote vi (ωi ). Let Vi denote the set of all private types of player i, and V−i = j6=i Vj . As before, let Ω be the set of all feasible solutions. We consider the following LP along the lines of (P). 
Since we are in a packing setting, we do not need the mi estimates. We may assume that each Di contains the valuation 0i , where 0i (ω) = 0 S ′ for all ω, since if not, we can just add this to Di , and set PrD (0i , v−i ) = 0. Let D := i (Di × D−i ). X X max PrD (v) pi,v (R-P) v∈D X s.t. i ω∈Ω X vi (ω)x(vi ,v−i ),ω − pi,(vi ,v−i ) ≥ X vi (ω)x(vi ,v−i ),ω − pi,(vi ,v−i ) ≥ 0 ω∈Ω ∀v ∈ D xv,ω ≤ 1 X vi (ω)x(vi′ ,v−i ),ω − pi,(vi′ ,v−i ) ′ (20) ∀i, vi , vi′ ∈ Di , v−i ∈ D−i ω∈Ω ∀i, v ∈ D ′ ω∈Ω p, x ≥ 0. The above LP-relaxation is similar to the LP in [8] for single-dimensional packing problems. For CAs, the LP in [8] is subtly different from (R-P): their allocation variables encode the probability that a player i receives a set S of items. If we let the allocation space in (R-P) be the set Ωext of extreme points of the standard LP-relaxation for the CA problem, then our formulations coincide since a feasible solution to the standard LP specifies the extent to which each player receives each set. (The convex-decomposition technique in [12] directly implies that an integrality-gap verifying ρ-approximation algorithm for the SWM problem can be used to decompose the fractional allocation specified by the LP in [8] scaled by ρ into a distribution over Ω, and thereby obtain a solution to (R-P).) Let (x̃, p̃) be a solution to (R-P). We convert (x̃, p̃) to a (DSIC-in-expectation, IR) mech anism M = A, {qi } with no smaller expected total revenue. Here, A(v) and qi (v) are the allocation-distribution and expected price of player i on input v. Since any support-based-(IC, IR)-in-expectation mechanism yields a feasible solution to (R-P), this also shows that any supportbased-(IC, IR)-in-expectation mechanism for CAs can be extended to a (DSIC-in-expectation, IR) mechanism without any loss in revenue. 5 Recall that negative costs are quite problematic for the CM problem, and hence, we had to resort to the method outlined in Section 4. 22 Our argument is similar to that in the proof of Theorem 3.2, but the packing nature of the problem simplifies things significantly. We may assume that all constraints (20) are tight; otherwise P ′ if there is some v ∈ D with ω∈Ω x̃v,ω < 1, then letting ω 0 be allocation where ωi0 = ∅ for all i, we can increase x̃v,ω0 to make this sum 1 without affecting feasibility. ′ First, we set A(v) = x̃v , qi (v) = p̃i,v for all v ∈ D and all i, so it is clear that the expected total revenue of M is the value of (x̃, p̃). If |{i : vi ∈ / Di }| ≥ 2, then we give everyone the empty-set and charge everyone 0. Otherwise, P (i) = v (ω)x̃ suppose vi ∈ / Di , v−i ∈ D−i . Let v̄ (i) = arg maxṽi ∈Di (ṽi ,v−i ),ω − p̃i,(ṽi ,v−i ) and ȳ ω i x̃(v̄(i) ,v−i ) . For ω ∈ Ω, let proji (ω) denote the allocation where player i receives ωi ⊆ [m], and the other players receive ∅. Viewing A(v) as the random variable specifying the allocation selected, (i) we set A(v) = proji (ω)P with probability ȳω . We set qi (v) = p̃i,(v̄(i) ,v−i ) . Since 0i ∈ Di , we have EA [vi (A(v))] − qi (v) ≥ ω 0i (ω)x̃(0i ,v−i ),ω − p̃i,(0i ,v−i ) ≥ 0, so M is IR-in-expectation. To see that M is DSIC in expectation, consider some i, vi , vi′ ∈ Vi , v−i ∈ V−i . If v−i ∈ / D−i , then player i always receives the empty set and pays 0. Otherwise, we have ensured by definition that player i does not benefit by lying. 
C.2 Utilizing approximation algorithms for the SWM problem We briefly sketch how to utilize a ρ-approximation algorithm for the SWM problem to obtain a ρ-approximation (DSIC-in-expectation, IR)-mechanism for the revenue-maximization problem in single-dimensional settings. Similar arguments apply to packing settings with additive types. Consider again the LP (R-P). For all i, ω, we now have vi (ω) = vi αi,ω , where vi ∈ R is i’s private type and αi,ω ≥ 0 is public knowledge. This is identical to the LP in [8] for single-parameter revenue-maximization problems. It will be convenient to view ω ∈ Ω as the vector {αi,ω }i∈[n] ∈ Rn+ . Since we are in a packing setting, Ω is downward closed, so ω ∈ Ω and ω ′ ≤ ω implies that ω ′ ∈ Ω. The dual of (R-P) is: X min γv (R-D) v s.t. X X i:v∈Di ×D−i vi′ ∈Di X   vi (ω)yi,(vi ,v−i ),vi′ − vi′ (ω)yi,(vi′ ,v−i ),vi + vi (ω)βi,v yi,(vi ,v−i ),vi′ − yi,(vi′ ,v−i ),vi vi′ ∈Di  ≤ γv + βi,vi ,v−i ≥ PrD (v) ′ ∀v ∈ D , ω ∈ Ω ∀i, v ∈ D y, β, γ ≥ 0. ′ (21) (22) (23) P Let optr be the common optimal value of (R-P) and (R-D). Define θiv = v′ ∈Di (vi yi,(vi ,v−i ),vi′ − i + vi βi,v if v ∈ Di × D−i , and 0 otherwise. Then, the separation problem for (R-D) P ′ amounts to determining if maxω∈Ω i θiv αi,ω ≤ γv for every v ∈ D ; that is, solving an SWM problem over the allocation space Ω under the input θ v = {θiv }i∈[n] for every v. Given a ρapproximation algorithm A for the SWM problem (where ρ ≥ 1) that only works with nonnegative inputs, we can obtain a ρ-approximate solution for the input θ v as follows. Set (θ + )vi := max(0, θiv ), use A with the input (θ + )v = {(θ + )vi }i∈[n] to obtain a solution ω ′ , and let ωi′′ = ωi′ if θi ≥ 0 and 0 otherwise. Thus, A can be used to approximately separate over constraints (21). We now argue that this implies that we can obtain a solution to (R-P) of value at least optr/ρ. This follows a routine argument in the approximation-algorithms literature (see, e.g., [11]). P Define P(ν) := {(y, β, γ) : (21)–(23), v γv ≤ ν}. Note that optr is the smallest ν such that P(ν) 6= ∅. Given ν, y, β, γ, we either show that (y, β, γρ) ∈ P(νρ), or we exhibit a hyperplane vi′ yi,(vi′ ,v−i ),vi ) 23 P separating (y, β, γ) from P(ν). To do this, we first check if v γv ≤ ν, (22), (23) hold, and if not ′ use the appropriate inequality as the separating hyperplane. Next, for every v ∈ D , we use A as specified above to obtain some ω ′′ ∈ Ω. If in this process, the LHS of (21) exceeds γv for some v, then we return the corresponding inequality as the separating hyperplane. Otherwise, for all ′ v ∈ D and all ω ∈ Ω, the LHS of (21) is at most γρ, and so (y, β, γρ) ∈ P(νρ). Thus, for a fixed ν, in polynomial time, the ellipsoid method either certifies that P(ν) = ∅, or returns a point (y, β, γ) with (y, β, γρ) ∈ P(νρ). We find the smallest value ν ∗ (via binary search) such that the ellipsoid method run for ν ∗ (with the above separation oracle) returns a solution (y ∗ , β ∗ , γ ∗ ) with (y ∗ , β ∗ , γ ∗ ρ) ∈ P(ν ∗ ρ); hence, optr ≤ ν ∗ ρ. For any ǫ > 0, running the ellipsoid method for ν ∗ − ǫ yields a polynomial-size certificate for the emptiness of P(ν ∗ − ǫ). This consists of the polynomially many violated inequalities P returned by the separation oracle during the execution of the ellipsoid method and the inequality v γv ≤ ν ∗ − ǫ. By duality (or Farkas’ lemma), this ∗ means that here is a polynomial-size  solution (x̃, p̃) to (R-P) whose value is at least ν − ǫ. 
Taking 1 ǫ to be 1/ exp(input size) (so ln ǫ is polynomially bounded), this also implies that (x̃, p̃) has value at least ν ∗ ≥ optr/ρ. We can now convert (x̃, p̃) to a (DSIC-in-expectation, IR) mechanism with revenue at least optr/ρ via the procedure described in Section C.1. 24
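As a small illustration of the step in Section C.2 where negative coordinates are handled before invoking the SWM approximation, here is a schematic helper; swm_approx stands in for an arbitrary rho-approximation algorithm that accepts only non-negative inputs, and the list-based encoding of allocations as vectors of alpha values is an assumption of this sketch.

```python
def approx_separate(theta, swm_approx):
    """
    One approximate-separation step for constraints (21), as sketched in Section C.2.

    theta       : list of coefficients theta_i^v (entries may be negative)
    swm_approx  : assumed rho-approximation for the SWM problem; takes a list of
                  non-negative weights and returns an allocation, represented here
                  as the vector (alpha_{1,omega}, ..., alpha_{n,omega})
    Returns the allocation omega'' used to (approximately) test the constraint.
    """
    theta_plus = [max(0.0, t) for t in theta]   # clip negative coordinates
    omega_prime = swm_approx(theta_plus)        # run the approximation on the clipped input
    # Zero out players whose coordinate was negative; this remains feasible
    # because the allocation space is downward closed in a packing setting.
    return [w if t >= 0 else 0.0 for w, t in zip(omega_prime, theta)]
```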
Tight Bounds for Single-Pass Streaming Complexity of the Set Cover Problem Sepehr Assadi∗ Sanjeev Khanna∗ Yang Li∗ arXiv:1603.05715v1 [] 17 Mar 2016 Abstract We resolve the space complexity of single-pass streaming algorithms for approximating √ the classic set cover problem. For finding an α-approximate set cover (for any α = o ( n)) using a single-pass streaming algorithm, we show that Θ(mn/α) space is both sufficient and necessary (up to an O(log n) factor); here m denotes number of the sets and n denotes size of the universe. This provides a strong negative answer to the open question posed by Indyk et al. [17] regarding the possibility of having a single-pass algorithm with a small approximation factor that uses sub-linear space. We further study the problem of estimating the size of a minimum set cover (as opposed to finding the actual sets), and establish that an additional factor of α saving in the space is achievable in this case and that this is the best possible. In other words, we show that Θ(mn/α2 ) space is both sufficient and necessary (up to logarithmic factors) for estimating the size of a minimum set cover to within a factor of α. Our algorithm in fact works for the more general problem of estimating the optimal value of a covering integer program. On the other hand, our lower bound holds even for set cover instances where the sets are presented in a random order. ∗ Department of Computer and Information Science, University of Pennsylvania. Supported in part by National Science Foundation grants CCF-1116961, CCF-1552909, and IIS-1447470 and an Adobe research award. Email: {sassadi,sanjeev,yangli2}@cis.upenn.edu. 1 Introduction The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. The input is a universe [n] and a collection of m subsets of [n], S = hS1 , . . . , Sm i, and the goal is to find a subset of S with the smallest cardinality that covers [n], i.e., whose union is [n]; we call such a collection of sets a minimum set cover and throughout the paper denote its cardinality by opt := opt (S). The set cover problem can be formulated in the well-established streaming model [1, 23], whereby the sets in S are presented one by one in a stream and the goal is to solve the set cover problem using a space-efficient algorithm. The streaming setting for the set cover problem has been studied in several recent work, including [7, 10, 12, 17, 27]. We refer the interested reader to these references for many applications of the set cover problem in the streaming model. In this paper, we focus on algorithms that make only one pass over the stream (i.e., single-pass streaming algorithms), and our goal is to settle the space complexity of single-pass streaming algorithms that approximate the set cover problem. Two versions of the set cover problem are considered in this paper: (i ) computing a minimum set cover, and (ii ) computing the size of a minimum set cover. Formally, Definition 1 (α-approximation). An algorithm A is said to α-approximate the set cover problem iff on every input instance S , A outputs a collection of (the indices of) at most α · opt sets that covers [n], along with a certificate of covering which, for each element e ∈ [n], specifies the set used for covering e. If A is a randomized algorithm, we require that the certificate corresponds to a valid set cover w.p.1 at least 2/3. 
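Since Definition 1 asks for an explicit certificate of covering, a toy validator may help fix the terminology; the dictionary-based encoding of sets and certificates below is an illustrative assumption of this sketch, not a data structure from the paper.

```python
def is_valid_certificate(n, sets, chosen, certificate):
    """
    Check a certificate of covering in the sense of Definition 1.
    n           : universe size; elements are 1..n
    sets        : dict {set index: frozenset of elements}
    chosen      : collection of set indices output by the algorithm
    certificate : dict mapping each element e of [n] to the index of the set
                  claimed to cover it
    """
    for e in range(1, n + 1):
        idx = certificate.get(e)
        if idx is None or idx not in chosen or e not in sets[idx]:
            return False
    return True

# Toy instance over the universe {1, 2, 3, 4}.
sets = {1: frozenset({1, 2}), 2: frozenset({2, 3}), 3: frozenset({3, 4})}
cert = {1: 1, 2: 1, 3: 3, 4: 3}
print(is_valid_certificate(4, sets, chosen={1, 3}, certificate=cert))   # True
```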
We remark that the requirement of returning a certificate of covering is standard in the literature (see, e.g., [7, 12]). Definition 2 (α-estimation). An algorithm A is said to α-estimate the set cover problem iff on every input instance S , A outputs an estimate for the cardinality of a minimum set cover in the range [opt, α · opt ]. If A is a randomized algorithm, we require that:   Pr A(S) ∈ [opt, α · opt ] ≥ 2/3 1.1 Our Results We resolve the space complexities of both versions of the set cover problem. Specifically, we √ show that for any α = o( n/ log n) and any m = poly(n), • There is a deterministic single-pass streaming algorithm that α-approximates the set cover e (mn/α) bits and moreover, any single-pass streaming algorithm problem using space O (possibly randomized) that α-approximates the set cover problem must use Ω(mn/α) bits of space. • There is a randomized single-pass streaming algorithm that α-estimates the set cover probe (mn/α2 ) bits and moreover, any single-pass streaming algorithm lem using space O e (mn/α2 ) bits of (possibly randomized) that α-estimates the set cover problem must use Ω space. We should point out right away that in this paper, we are not concerned with poly-time computability, though, our algorithms for set cover can be made computationally efficient for any α ≥ log n by allowing an extra log n factor in the space requirement2. 1 Throughout, we use w.p. and w.h.p. to abbreviate “with probability” and “with high probability”, respectively. cover admits a classic log n-approximation algorithm [19, 28], and unless P = NP, there is no polynomial time α-approximation for the set cover problem for α < (1 − ǫ) log n (for any constant ǫ > 0) [11, 13, 22]. 2 Set 1 We establish our upper bound result for α-estimation for a much more general problem: estimating the optimal value of a covering integer linear program (see Section 4 for a formal definition). Moreover, the space lower bound for α-estimation (for the original set cover problem) holds even if the sets are presented in a random order. We now describe each of these two sets of results in more details. Approximating Set Cover. There is a very simple deterministic α-approximation algorithm e (mn/α) bits which we mention in Section 1.2 for for the set cover problem using space O completeness. Perhaps surprisingly, we establish that this simple algorithm is essentially the e (mn/α) best possible; any α-approximation algorithm for the set cover problem requires Ω bits of space (see Theorem 1 for a formal statement). Prior to our work, the best known lower bounds for single-pass streams ruled out (3/2 − √ ǫ)-approximation using o(mn) space [17] (see also [15]), o( n)-approximation in o(m) space [7, 12], and O(1)-approximation in o(mn) space [10] (only for deterministic algorithms); see Section 1.3 for more detail on different existing lower bounds. Note that these lower bound results leave open the possibility of a single-pass randomized 3/2-approximation or even a e (m) deterministic O(log n)-approximation algorithm for the set cover problem using only O space. Our result on the other hand, rules out the possibility of having any non-trivial tradeoff between the approximation factor and the space requirement, answering an open question raised by Indyk et al. [17] in the strongest sense possible. 
√ We should also point out that the √ bound of α = o( n/ log n) in our lower bound is tight e (n) up to an O(log n) factor since an O( n)-approximation is known to be achievable in O space (essentially independent of m for m = poly(n)) [7, 12]. e (mn/α2 ) space algorithm for α-estimating the Estimating Set Cover Size. We present an O set cover problem, and in general any covering integer program (see Theorem 3 for a formal statement). Our upper bound suggests that if one is only interested in α-estimating the size of a minimum set cover (instead of knowing the actual sets), then an additional α factor saving in the space (compare to the best possible α-approximation algorithm) is possible. To the best of our knowledge, this is the first non-trivial gap between the space complexity of α-approximation and α-estimation for the set cover problem. e (mn/α2 ) space α-estimation algorithm We further show that the space complexity of our O for the set cover problem is essentially tight (up to logarithmic factors). In other words, any e (mn/α2 ) space (see α-estimation algorithm for set cover (possibly randomized) requires Ω Theorem 4 for a formal statement). There are examples of classic optimization problems in the streaming literature for which size estimation seems to be distinctly easier in the random arrival streams3 compare to the adversarial streams (see, e.g., [20]). However, we show that this is not the case for the set e (mn/α2 ) for α-estimation continues to hold even for cover problem, i.e., the lower bound of Ω random arrival streams. We note in passing two other results also: (i ) our bounds for α-approximation and αestimation also prove tight bounds on the one-way communication complexity of the two-player communication model of the set cover problem (see Theorem 2 and Theorem 5), previously studied in [7, 10, 24]; (ii ) the use of randomization in our α-estimation algorithm is inevitable: any deterministic α-estimation algorithm for the set cover requires Ω(mn/α) bits of space (see Theorem 6). 3 In random arrival streams, the input (in our case, the collection of sets) is randomly permuted before being presented to the algorithm 2 1.2 Our Techniques e (mn/α) bits of space can be simply achieved Upper bounds. An α-approximation using O as follows. Merge (i.e., take the union of) every α sets in the stream into a single set; at the end of the stream, solve the set cover problem over the merged sets. To recover a certificate of covering, we also record for each element e in each merged set, any set in the merge that covers e. It is an easy exercise to verify that this algorithm indeed achieves an α-approximation e (mn/α) bits. and can be implemented in space O 2 e Our O(mn/α )-space α-estimation algorithm is more involved and in fact works for any covering integer program (henceforth, a covering ILP for short). We follow the line of work in [10] and [17] by performing “dimensionality reduction” over the sets (in our case, columns of the constraint matrix A) and storing their projection over a randomly sampled subset of the universe (here, constraints) during the stream. However, the goal of our constraint sampling approach is entirely different from the ones in [10,17]. The element sampling approach of [10, 17] aims to find a “small” cover of the sampled universe which also covers the vast majority of the elements in the original universe. 
This allows the algorithm to find a small set cover of the sampled universe in one pass while reducing the number of remaining uncovered elements for the next pass; hence, applying this approach repeatedly over multiple passes on the input allows one to obtain a complete cover. On the other hand, the goal of our constraint sampling approach is to create a smaller instance of set cover (in general, covering ILP) with the property that the minimum set cover size of the sampled instance is a “proxy” for the minimum set cover size of the original instance. We crucially use the fact that the algorithm does not need to identify the actual cover and hence it can estimate the size of the solution based on the optimum set cover size in the sampled universe. At the core of our approach is a simple yet very general lemma, referred to as the constraint sampling lemma (Lemma 4.1) which may be of independent interest. Informally, this lemma states that for any covering ILP instance I , the optimal value of a sub-sampled instance IR , obtained by picking roughly 1/α fraction of the constraints uniformly at random from I , is an α estimator of the optimum value of I whenever no constraint is “too hard” to satisfy. Nevertheless, the constraint sampling is not enough for reducing the space to meet the e (mn/α2 ) bound (see Theorem 3). Hence, we combine it with a pruning step, similar desired O to the “set filtering” step of [17] for (unweighted) set cover (see also “GreedyPass” algorithm of [7]) to sparsify the columns in the input matrix A before performing the sampling. We point out that as the variables in I can have different weights in the objective function (e.g. for weighted set cover), our pruning step needs to be sensitive to the weights. Lower bounds. As is typical in the streaming literature, our lower bounds are obtained by establishing communication complexity lower bounds; in particular, in the one-way two-player communication model. To prove these bounds, we use the information complexity paradigm, which allows one to reduce the problem, via a direct sum type argument, to multiple instances of a simpler problem. For our lower bound for α-estimation, this simpler problem turned out to be a variant of the well-known Set Disjointness problem. However, for the lower bound of α-approximation algorithms, we introduce and analyze a new intermediate problem, called the Trap problem. The Trap problem is a non-boolean problem defined as follows: Alice is given a set S, Bob is given a set E such that all elements of E belong to S except for a special element e∗ , and the goal of the players is to “trap” this special element, i.e., to find a small subset of E which contains e∗ . For our purpose, Bob only needs to trap e∗ in a set of cardinality | E| /2. To prove a lower bound for the Trap problem, we design a novel reduction from the well-known Index problem, which requires Alice and Bob to use the protocol for the Trap problem over non- 3 legal inputs (i.e., the ones for which the Trap problem is not well-defined), while ensuring that they are not being “fooled” by the output of the Trap protocol over these inputs. To prove our lower bound for α-estimation in random arrival streams, we follow the approach of [5] in proving the communication complexity lower bounds when the input data is randomly allocated between the players (as opposed to adversarial partitions). However, the distributions and the problem considered in this paper are different from the ones in [5]. 
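As a concrete rendering of the simple merging-based α-approximation described at the start of Section 1.2, the sketch below processes the stream in groups of α sets, keeping each group's union together with one witness set per element so that a certificate of covering can be reported. The offline solver applied to the merged sets at the end is left as a greedy placeholder, which is an assumption of this illustration rather than part of the paper's algorithm.

```python
def merge_alpha_cover(stream, n, alpha):
    """
    Single-pass alpha-approximation by merging every `alpha` consecutive sets.

    stream : iterable of (set index, frozenset of elements of [n])
    Keeps, for each merged group, the union of its sets together with one witness
    set index per element, so a certificate of covering can be reported at the end.
    """
    merged = []                       # list of (union, witness dict) per group
    union, witness = set(), {}
    for count, (idx, s) in enumerate(stream, 1):
        for e in s:
            if e not in union:
                union.add(e)
                witness[e] = idx      # any set of the group that covers e
        if count % alpha == 0:
            merged.append((frozenset(union), dict(witness)))
            union, witness = set(), {}
    if union:
        merged.append((frozenset(union), dict(witness)))

    # Offline phase: cover [n] using the merged sets.  A greedy cover is used
    # here purely as a placeholder for an arbitrary offline set cover subroutine.
    uncovered, groups = set(range(1, n + 1)), []
    while uncovered:
        g = max(range(len(merged)), key=lambda j: len(merged[j][0] & uncovered))
        gained = merged[g][0] & uncovered
        if not gained:
            raise ValueError("the input sets do not cover the universe")
        groups.append(g)
        uncovered -= gained

    # Translate back to original set indices via the stored witnesses.
    certificate = {e: merged[g][1][e] for g in groups for e in merged[g][0]}
    chosen = sorted(set(certificate.values()))
    return chosen, certificate

# Toy usage: universe {1,...,6}, four sets, merge parameter alpha = 2.
toy_stream = [(1, frozenset({1, 2})), (2, frozenset({3})),
              (3, frozenset({4, 5})), (4, frozenset({5, 6}))]
print(merge_alpha_cover(toy_stream, 6, 2))
```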
1.3 Related Work Communication complexity of the set cover problem was first studied by Nisan [24]. Among other things, Nisan showed that the two-player communication complexity of ( 21 − ǫ) log nestimating the set cover is Ω(m). In particular, this implies that any constant-pass streaming algorithm that ( 12 − ǫ) log n-estimates the set cover must use Ω(m) bits of space. Saha and Getoor [27] initiated the study of set cover in the semi-streaming model [14] e (n) space, where the sets are arriving in a stream and the algorithms are required to use O and obtained an O(log n)-approximation via an O(log n)-pass algorithm that uses O(n log n) space. A similar performance was also achieved by [8] in the context of “disk friendly” algorithms. As designed, the algorithm of [8] achieves (1 + β ln n)-approximation by making O(logβ n) passes over the stream using O(n log n) space. The single-pass semi-streaming setting for set√cover was initially and throughly studied by e (n) space (which Emek and Rosén [12]. They provided an O( n)-approximation using O also extends to the weighted set cover problem) and a lower bound that states that no semie (n) space) that O(n1/2−ǫ )-estimates set streaming algorithm (i.e., an algorithm using only O cover exists. Recently, Chakrabarti and Wirth [7] generalized the space bounds in [12] to multipass algorithms, providing an almost complete understanding of the pass/approximation tradeoff for semi-streaming algorithms. In particular, they developed a deterministic pe (n) space and prove that any p-pass pass ( p + 1) · n1/( p+1) -approximation algorithm in O 1/ ( p + 1 ) 2 n /(c · ( p + 1) )-estimation algorithm requires Ω(nc /p3 ) space for some constant c√> 1 (m in their “hard instances” is Θ(ncp )). This, in particular, implies that any single-pass o( n)estimation algorithm requires Ω(m) space. Demaine et al. [10] studied the trade-off between the number of passes, the approximation ratio, and the space requirement of general streaming algorithms (i.e., not necessarily semistreaming) for the set cover problem and developed an algorithm that for any δ = Ω(1/ log n), e (mnδ ) makes O(41/δ ) passes over the stream and achieves an O(41/δ ρ)-approximation using O space; here ρ is the approximation factor of the off-line algorithm for solving the set cover problem. The authors further showed that any constant-pass deterministic O(1)-estimation algorithm for the set cover requires Ω(mn) space. Very recently, Indyk et al. [17] (see also [15]) made a significant improvement on the trade-off achieved by [10]: they presented an algorithm that for any δ > 0, makes O(1/δ) passes over the stream and achieves an O(ρ/δ)e (mnδ ) space. The authors also established two lower bounds: for approximation using O multi-pass algorithms, any algorithm that computes an optimal set cover solution while mak1 e (mnδ ) space. More relevant to our paper, they also showed − 1) passes must use Ω ing only ( 2δ that any single-pass streaming algorithm (possibly randomized) that can distinguish between the instances with set cover size of 2 and 3 w.h.p., must use Ω(mn) bits. Organization. We introduce in Section 2 some preliminaries needed for the rest of the paper. In Section 3, we present our Ω(mn/α) space lower bound for computing an α-approximate set cover in a single-pass. 
In Section 4, we present a single-pass streaming algorithm for e (mn/α2 ) upper estimating the optimal value of a covering integer program and prove an O bound on the space complexity of α-estimating the (weighted) set cover problem. In Section 5, we present our Ω(mn/α2 ) space lower bound for α-estimating set cover in a single-pass. 4 Finally, Section 6 contains our Ω(mn/α) space lower bound for deterministic α-estimation algorithms. 2 Preliminaries Notation. We use bold face letters to represent random variables. For any random variable X, supp( X ) denotes its support set. We define | X | := log |supp( X )|. For any kdimensional tuple X = ( X1 , . . . , Xk ) and any i ∈ [k], we define X <i := ( X1 , . . . , Xi−1 ), and X −i := ( X1 , . . . , Xi−1 , Xi+1 , . . . , Xk ). The notation “X ∈ R U” indicates that X is chosen uniformly at random from a set U. Finally, we use upper case letters (e.g. M) to represent matrices and lower case letter (e.g. v) to represent vectors. Concentration Bounds. We use an extension of the Chernoff-Hoeffding bound for negatively correlated random variables. Random variables X1 , . . . , Xn are negatively correlated if for every set S ⊆ [n], Pr (∧i∈S Xi = 1) ≤ ∏i∈S Pr ( Xi = 1). It was first proved in [25] that the Chernoff-Hoeffding bound continues to hold for the case of random variables that satisfy this generalized version of negative correlation (see also [16]). 2.1 Tools from Information Theory We briefly review some basic concepts from information theory needed for establishing our lower bounds. For a broader introduction to the field, we refer the reader to the excellent text by Cover and Thomas [9]. In the following, we denote the Shannon Entropy of a random variable A by H ( A) and the mutual information of two random variables A and B by I ( A; B) = H ( A) − H ( A | B) = H ( B) − H ( B | A). If the distribution D of the random variables is not clear from the context, we use HD ( A) (resp. ID ( A; B)). We use H2 to denote the binary entropy function where for any real number 0 < δ < 1, H2 (δ) = δ log 1δ + (1 − δ) log 1−1 δ . We use the following basic properties of entropy and mutual information (proofs can be found in [9], Chapter 2). Claim 2.1. Let A, B, and C be three random variables. 1. 0 ≤ H ( A) ≤ | A|. H ( A) = | A| iff A is uniformly distributed over its support. 2. I ( A; B) ≥ 0. The equality holds iff A and B are independent. 3. Conditioning on a random variable reduces entropy: H ( A | B, C ) ≤ H ( A | B). The equality holds iff A and C are independent conditioned on B. 4. Subadditivity of entropy: H ( A, B | C ) ≤ H ( A | C ) + H ( B | C ). 5. The chain rule for mutual information: I ( A, B; C ) = I ( A; C ) + I ( B; C | A). 6. For any event E independent of A and B, H ( A | B, E) = H ( A | B). 7. For any event E independent of A, B and C, I ( A; B | C, E) = I ( A; B | C ). The following claim (Fano’s inequality) states that if a random variable A can be used to estimate the value of another random variable B, then A should “consume” most of the entropy of B. Claim 2.2 (Fano’s inequality). For any binary random variable B and any (possibly randomized) function f that predicts B based on A, if Pr(f(A)6= B) = δ, then H ( B | A) ≤ H2 (δ). 5 We also use the following simple claim, which states that conditioning on independent random variables can only increase the mutual information. Claim 2.3. For any random variables A, B, C, and D, if A and D are independent conditioned on C, then I ( A; B | C ) ≤ I ( A; B | C, D ). Proof. 
Since A and D are independent conditioned on C, by Claim 2.1-(3), H ( A | C ) = H ( A | C, D ) and H ( A | C, B) ≥ H ( A | C, B, D ). We have, I ( A; B | C ) = H ( A | C ) − H ( A | C, B) = H ( A | C, D ) − H ( A | C, B) ≤ H ( A | C, D ) − H ( A | C, B, D ) = I ( A; B | C, D ) 2.2 Communication Complexity and Information Complexity Communication complexity and information complexity play an important role in our lower bound proofs. We now provide necessary definitions for completeness. Communication complexity. Our lowers bounds for single-pass streaming algorithms are established through communication complexity lower bounds. Here, we briefly provide some context necessary for our purpose; for a more detailed treatment of communication complexity, we refer the reader to the excellent text by Kushilevitz and Nisan [21]. We focus on the two-player one-way communication model. Let P be a relation with domain X × Y × Z . Alice receives an input X ∈ X and Bob receives Y ∈ Y , where ( X, Y ) are chosen from a joint distribution D over X × Y . In addition to private randomness, the players also have an access to a shared public tape of random bits R. Alice sends a single message M ( X, R) and Bob needs to output an answer Z := Z ( M ( X, R), Y, R) such that ( X, Y, Z ) ∈ P. We use Π to denote a protocol used by the players. Unless specified otherwise, we always assume that the protocol Π can be randomized (using both public and private randomness), even against a prior distribution D of inputs. For any 0 < δ < 1, we say Π is a δ-error protocol for P over a distribution D , if the probability that for an input ( X, Y ), Bob outputs some Z where ( X, Y, Z ) ∈ / P is at most δ (the probability is taken over the randomness of both the distribution and the protocol). Definition 3. The communication cost of a protocol Π for a problem P on an input distribution D , denoted by kΠk, is the worst-case size of the message sent from Alice to Bob in the protocol Π, when the inputs are chosen from the distribution D . The communication complexity CCδD ( P) of a problem P with respect to a distribution D is the minimum communication cost of a δ-error protocol Π over D . Information complexity. There are several possible definitions of information complexity of a communication problem that have been considered depending on the application (see, e.g., [2–4, 6]). Our definition is tuned specifically for one-way protocols, similar in the spirit of [3] (see also [18]). Definition 4. Consider an input distribution D and a protocol Π (for some problem P). Let X be the random variable for the input of Alice drawn from D , and let Π := Π( X ) be the random variable denoting the message sent from Alice to Bob concatenated with the public randomness R used by Π. The information cost ICostD (Π) of a one-way protocol Π with respect to D is ID (Π; X ). The information complexity ICδD ( P) of P with respect to a distribution D is the minimum ICostD (Π) taken over all one-way δ-error protocols Π for P over D . 6 Note that any public coin protocol is a distribution over private coins protocols, run by first using public randomness to sample a random string R = R and then running the corresponding private coin protocol Π R . We also use Π R to denote the random variable of the message sent from Alice to Bob, assuming that the public randomness is R = R. We have the following well-known claim. Claim 2.4. 
For any distribution h D and any protocol i Π, let R denote the public randomness used in Π; then, ICostD (Π) = ER∼ R ID (Π R ; X | R = R) . Proof. Let Π = ( M, R), where M denotes the message sent by Alice and R is the public randomness. We have, ICostD (Π) = I (Π; X ) = I ( M, R; X ) = I ( R; X ) + I ( M; X | R) = E R∼ R h (the chain rule for mutual information, Claim 2.1-(5)) i ID (Π R ; X | R = R) (M = Π R whenever R = R and I ( R; X ) = 0 by Claim 2.1-(2)) The following well-known proposition (see, e.g., [6]) relates communication complexity and information complexity. Proposition 2.5. For every 0 < δ < 1 and every distribution D : CCδD ( P) ≥ ICδD ( P). Proof. Let Π be a protocol with the minimum communication complexity for P on D and R denotes the public randomness of Π; using Claim 2.4, we can write, h i h i ICδD ( P) = E ID (Π R ; X | R = R) ≤ E HD (Π R | R = R) R∼ R R∼ R h i R δ ≤ E Π ≤ kΠk = CCD ( P) R∼ R 3 An Ω(mn/α)-Space Lower bound for α-Approximate Set Cover In this section, we prove that the simple α-approximation algorithm described in Section 1.2 is in fact optimal in terms of the space requirement. Formally, √ Theorem 1. For any α = o( lognn ) and m = poly(n), any randomized single-pass streaming algorithm that α-approximates the set cover problem with probability at least 2/3 requires Ω(mn/α) bits of space. √ Fix a (sufficiently large) value for n, m = poly(n) (also m = Ω(α log n)), and α = o( lognn ); throughout this section, SetCoverapx refers to the problem of α-approximating the set cover problem for instances with m + 1 sets4 defined over the universe [n] in the one-way communication model, whereby the sets are partitioned between Alice and Bob. 4 To simplify the exposition, we use m + 1 instead of m as the number of sets. 7 Overview. We design a hard input distribution Dapx for SetCoverapx , whereby Alice is provided with a collection of m sets S1 , . . . , Sm , each of size (roughly) n/α and Bob is given a single set T of size (roughly) n − 2α. The input to the players are correlated such that there exists a set Si∗ in Alice’s collection (i ∗ is unknown to Alice), such that Si∗ ∪ T covers all elements in [n] except for a single special element. This in particular ensures that the optimal set cover size in this distribution is at most 3 w.h.p. On the other hand, we “hide” this special element among the 2α elements in T in a way that if Bob does not have (essentially) full information about Alice’s collection, he cannot even identify a set of α elements from T that contain this special element (w.p strictly more than half). This implies that in order for Bob to be sure that he returns a valid set cover, he should additionally cover a majority of T with sets other than Si∗ . We design the distribution in a way that the sets in Alice’s collection are “far” from each other and hence Bob is forced to use a distinct set for (roughly) each element in T that he needs to cover with sets other than Si∗ . This implies that Bob needs to output a set cover of size α (i.e., an (α/3)-approximation) to ensure that every element in [n] is covered. 3.1 A Hard Input Distribution for SetCoverapx Consider the following distribution. Distribution Dapx . A hard input distribution for SetCoverapx . Notation. Let F be the collection of all subsets of [n] with cardinality n 10α , and ℓ := 2α log m. • Alice. The input of Alice is a collection of m sets S = (S1 , . . . , Sm ), where for any i ∈ [m], Si is a set chosen independently and uniformly at random from F . • Bob. 
Pick an i∗ ∈ [m] (called the special index) uniformly at random; the input to Bob is a set T = [n] \ E, where E is chosen uniformly at random from all subsets of [n] with | E| = ℓ and | E \ Si∗ | = 1.a a Since √ α = o ( n/ log n ) and m = poly(n ), the size of E is strictly smaller than the size of Si∗ . The claims below summarize some useful properties of the distribution Dapx . Claim 3.1. For any instance (S , T ) ∼ Dapx , with probability 1 − o(1), opt (S , T ) ≤ 3. ∗ Proof. Let e∗ denote the element in E \ Si∗ . S −i contains m − 1 random subsets of [n] of ∗ size n/10α, drawn independent of the choice of e∗ . Therefore, each set in S −i covers e∗ with probability 1/10α. The probability that none of these m − 1 sets covers e∗ is at most (1 − 1/10α)m−1 ≤ (1 − 1/10α)Ω(α log n) ≤ exp(−Ω(α log n)/10α) = o(1) ∗ Hence, with probability 1 − o(1), there is at least one set S ∈ S −i that covers e∗ . Now, it is straightforward to verify that (Si∗ , T, S) form a valid set cover. ∗ Lemma 3.2. With probability 1 − o(1), no collection of 3α sets from S −i covers more than ℓ/2 elements of E. ∗ Proof. Recall that the sets in S −i and the set E are chosen independent of each other. For ∗ each set S ∈ S −i and for each element e ∈ E, we define an indicator binary random variable Xe , where Xe = 1 iff e ∈ S. Let X := ∑e Xe , which is the number of elements in E covered by S. We have, log m |E| = E [ X ] = ∑ E[ X e ] = 10α 5 e 8 Moreover, the variables Xe are negatively correlated since for any set S′ ⊆ E, Pr ^ e∈S′ Xe = 1 ! = ( n −|S ′ | n ′ 10α −|S | ( n n 10α ) )   n n · 10α − 1 . . . 10α − |S′ | + 1 = ( n ) · ( n − 1) . . . ( n − | S ′ | + 1)   ′ 1 |S | = ∏ Pr ( Xe = 1) ≤ 10α e∈S′ n 10α  Hence, by the extended Chernoff bound (see Section 2),   log m 1 Pr X ≥ = o( ) 3 m ∗ Therefore, using union bound over all m − 1 sets in S −i , with probability 1 − o(1), no set in ∗ S −i covers more than log m/3 elements in E, which implies that any collection of 3α sets can only cover up to 3α · log m/3 = ℓ/2 elements in E. 3.2 The Lower Bound for the Distribution Dapx In order to prove our lower bound for SetCoverapx on Dapx , we define an intermediate communication problem which we call the Trap problem. Problem 1 (Trap problem). Alice is given a set S ⊆ [n] and Bob is given a set E ⊆ [n] such that E \ S = {e∗ }; Bob needs to output a set L ⊆ E with | L| ≤ | E| /2 such that e∗ ∈ L. In the following, we use Trap to refer to the trap problem with | S| = n/10α and | E| = ℓ = 2α log m (notice the similarity to the parameters in Dapx ). We define the following distribution DTrap for Trap. Alice is given a set S ∈ R F (recall that F is the collection of all subsets of [n] of size n/10α) and Bob is given a set E chosen uniformly at random from all sets that satisfy | E \ S| = 1 and |E| = 2α log m. We first use a direct sum style argument to prove that under the distributions Dapx and DTrap , information complexity of solving SetCoverapx is essentially equivalent to solving m copies of Trap. Formally, δ + o ( 1) Lemma 3.3. For any constant δ < 1/2, ICδDapx (SetCoverapx ) ≥ m · ICDTrap (Trap). Proof. Let ΠSC be a δ-error protocol for SetCoverapx ; we design a δ′ -error protocol ΠTrap for solving Trap over DTrap with parameter δ′ = δ + o(1) such that the information cost of ΠTrap on DTrap is at most m1 · ICostDapx (ΠSC ). The protocol ΠTrap is as follows. Protocol ΠTrap . The protocol for solving Trap using a protocol ΠSC for SetCoverapx . Input: An instance (S, E) ∼ DTrap . 
Output: A set L with | L| ≤ | E| /2, such that e∗ ∈ L. 1. Using public randomness, the players sample an index i∗ ∈ [m] uniformly at random. 2. Alice creates a tuple S = (S1 , . . . , Sm ) by assigning Si∗ = S and sampling each remaining set uniformly at random from F using private randomness. Bob creates a set T := E. 3. The players run the protocol ΠSC over the input (S , T ). 4. Bob computes the set L of all elements in E = T whose certificate (i.e., the set used to cover them) is not Si∗ , and outputs L. 9 We first argue the correctness of ΠTrap and then bound its information cost. To argue the correctness, notice that the distribution of instances of SetCoverapx constructed in the reduction is exactly Dapx . Consequently, it follows from Claim 3.1 that, with probability 1 − o(1), any α-approximate set cover can have at most 3α sets. Let Sb be the set cover computed by Bob minus the sets Si∗ and T. As e∗ ∈ E = T and moreover is not in Si∗ , it follows that e∗ should be covered by some set in Sb. This means that the set L that is output by Bob contains e∗ . Moreover, by Lemma 3.2, the number of elements in E covered by the sets in Sb is at most ℓ/2 w.p. 1 − o(1). Hence, | L| ≤ ℓ/2 = | E| /2. This implies that:     Pr ΠTrap errs ≤ Pr ΠSC errs + o(1) ≤ δ + o(1) DTrap Dapx We now bound the information cost of ΠTrap . Let I be the random variable for the choice of ∈ [m] in the protocol ΠTrap (which is uniform in [m]). Using Claim 2.4, we have, i∗ i    1 m i ;S | I = i = IDTrap ΠTrap · ∑ IDTrap ΠSC ; Si | I = i m i=1 i∼ I     m 1 1 m = · ∑ IDapx ΠSC ; Si | I = i = · ∑ IDapx ΠSC ; Si m i=1 m i=1 ICostDTrap (ΠTrap ) = E h where the last two equalities hold since (i ) the joint distribution of ΠSC and Si conditioned on I = i under DTrap is equivalent to the one under Dapx , and (ii ) the random variables ΠSC and Si are independent of the event I = i (by the definition of Dapx ) and hence we can “drop” the conditioning on this event (by Claim 2.1-(7)). We can further derive, ICostDTrap (ΠTrap ) =     1 m 1 m · ∑ IDapx ΠSC ; Si ≤ · ∑ IDapx ΠSC ; Si | S <i m i=1 m i=1 The inequality holds since Si and S <i are independent and conditioning on independent variables can only increase the mutual information (i.e., Claim 2.3). Finally, ICostDTrap (ΠTrap ) ≤     1 m 1 1 · ∑ IDapx ΠSC ; Si | S <i = · IDapx ΠSC ; S = · ICostDapx (ΠSC ) m i=1 m m where the first equality is by the chain rule for mutual information (see Claim 2.1-(5)). Having established Lemma 3.3, our task now is to lower bound the information complexity of Trap over the distribution DTrap . We prove this lower bound using a novel reduction from the well-known Index problem, denoted by Indexnk . In Indexnk over the distribution DIndex , Alice is given a set A ⊆ [n] of size k chosen uniformly at random and Bob is given an element a such that w.p. 1/2 a ∈ R A and w.p. 1/2 a ∈ R [n] \ A; Bob needs to determine whether a ∈ A (the YES case) or not (the NO case). We remark that similar distributions for Indexnk have been previously studied in the literature (see, e.g., [26], Section 3.3). For the sake of completeness, we provide a self-contained proof of the following lemma in Appendix A. ′ Lemma 3.4. For any k < n/2, and any constant δ′ < 1/2, ICδDIndex (Indexnk ) = Ω(k). Using Lemma 3.4, we prove the following lemma, which is the key part of the proof. Lemma 3.5. For any constant δ < 1/2, ICδDTrap (Trap) = Ω(n/α). 10 Proof. 
Let k = n/10α; we design a δ′ -error protocol ΠIndex for Indexnk using any δ-error protocol ΠTrap (over DTrap ) as a subroutine, for some constant δ′ < 1/2. Protocol ΠIndex . The protocol for reducing Indexnk to Trap. Input: An instance ( A, a) ∼ DIndex . Output: YES if a ∈ A and NO otherwise. 1. Alice picks a set B ⊆ A with | B| = ℓ − 1 uniformly at random using private randomness. 2. To invoke the protocol ΠTrap , Alice creates a set S := A and sends the message ΠTrap (S), along with the set B to Bob. 3. If a ∈ B, Bob outputs YES and terminates the protocol. 4. Otherwise, Bob constructs a set E = B ∪ { a} and computes L := ΠTrap (S, E) using the message received from Alice. 5. If a ∈ L, Bob outputs NO, and otherwise outputs YES. We should note right away that the distribution of instances for Trap defined in the previous reduction does not match DTrap . Therefore, we need a more careful argument to establish the correctness of the reduction. We prove this lemma in two claims; the first claim establishes the correctness of the reduction and the second one proves an upper bound on the information cost of ΠIndex based on the information cost of ΠTrap . Claim 3.6. ΠIndex is a δ′ -error protocol for Indexnk over DIndex for the parameter k = n/10α and a constant δ′ < 1/2. Y Proof. Let R denote the private coins used by Alice to construct the set B. Also, define DIndex N (resp. DIndex ) as the distribution of YES instances (resp. NO instances) of DIndex . We have, Pr DIndex ,R    1    1 ΠIndex errs + · Pr ΠIndex errs ΠIndex errs = · Pr Y N 2 DIndex 2 DIndex ,R ,R (1) Note that we do not consider the randomness of the protocol ΠTrap (used in construction of ΠIndex ) as it is independent of the randomness of the distribution DIndex and the private coins R. We now bound each term in Equation (1) separately. We first start with the easier case which is the second term. The distribution of instances (S, E) for Trap created in the reduction by the choice of N and the randomness of R, is the same as the distribution DTrap . Moreover, in ( A, a) ∼ DIndex this case, the output of ΠIndex would be wrong iff a ∈ E \ S (corresponding to the element e∗ in Trap) does not belong to the set L output by ΠTrap . Hence,     (2) ΠIndex errs = Pr ΠTrap errs ≤ δ Pr N ,R DIndex DTrap Y We now bound the first term in Equation (1). Note that when ( A, a) ∼ DIndex , there is a small chance that ΠIndex is “lucky" and a belongs to the set B (see Line (3) of the protocol). Let this event be E . Conditioned on E , Bob outputs the correct answer with probability 1; however note that probability of E happening is only o(1). Now suppose E does not happen. Y (and In this case, the distribution of instances (S, E) created by the choice of ( A, a) ∼ DIndex randomness of R) does not match the distribution DTrap . However, we have the following 11 important property: Given that (S, E) is the instance of Trap created by choosing ( A, a) from Y and sampling ℓ − 1 random elements of A (using R), the element a is uniform over the DIndex set E. In other words, knowing (S, E) does not reveal any information about the element a. Note that since (S, E) is not chosen according to the distribution DTrap (actually it is not even a “legal” input for Trap), it is possible that ΠTrap terminates, outputs a non-valid set, or outputs a set L ⊆ E. Unless L ⊆ E (and satisfies the cardinality constraint), Bob is always able to determine that ΠTrap is not functioning correctly and hence outputs YES (and errs with probability at most δ < 1/2). 
However, if L ⊆ E, Bob would not know whether the input to ΠTrap is legal or not. In the following, we explicitly analyze this case. In this case, L is a subset of E chosen by the (inner) randomness of ΠTrap for a fixed S and E and moreover | L| ≤ | E| /2 (by definition of Trap). The probability that ΠIndex errs in this case is exactly equal to the probability that a ∈ L. However, as stated before, for a fixed (S, E), the choice of L is independent of the choice of a and moreover, a is uniform over E; hence a ∈ L happens with probability at most 1/2. Formally, (here, RTrap denotes the inner randomness of ΠTrap )   a ∈ L = ΠTrap (S, E) | E Pr (ΠIndex errs | E ) = Pr Y ,R DIndex Y ,R DIndex = E E ( S,E )∼( S,E )|E RTrap ∼ RTrap h Pr Y DIndex ,R  a ∈ L | S = S, E = E, RTrap = RTrap , E i (L = ΠTrap (S, E) is a fixed set conditioned on (S, E, RTrap )) h | L| i E = E ( S,E )∼( S,E )|E RTrap ∼ RTrap | E | (a is uniform on E conditioned on (S, E, RTrap ) and E ) Hence, we have, PrDY ,R (ΠIndex errs | E ) ≤ 12 , since by definition, for any output set L, Index | L| ≤ | E| /2. As stated earlier, whenever E happens, ΠIndex makes no error; hence, Pr (ΠIndex errs) = Y ,R DIndex Pr (E ) · Y ,R DIndex Pr (ΠIndex errs | E ) ≤ Y ,R DIndex 1 − o ( 1) 2 (3) Finally, by plugging the bounds in Equations (2,3) in Equation (1) and assuming δ is bounded away from 1/2, we have, Pr (ΠIndex errs) ≤ DIndex ,R 1 1 − o ( 1) 1 1 − o ( 1) δ 1 · + ·δ = + ≤ −ǫ 2 2 2 4 2 2 for some constant ǫ bounded away from 0. We now bound the information cost of ΠIndex under DIndex . Claim 3.7. ICostDIndex (ΠIndex ) ≤ ICostDTrap (ΠTrap ) + O(ℓ log n). Proof. We have,   ICostDIndex (ΠIndex ) = IDIndex ΠIndex ( A); A   = IDIndex ΠTrap (S), B; A     = IDIndex ΠTrap (S); A + IDIndex B; A | ΠTrap (S) ≤ IDIndex  (the chain rule for mutual information, Claim 2.1-(5))    ΠTrap (S); A + HDIndex B | ΠTrap (S) 12   ≤ IDIndex ΠTrap (S); A + O(ℓ log n) (| B| = O(ℓ log n) and Claim 2.1-(1))   (A = S as defined in ΠIndex ) = IDIndex ΠTrap (S); S + O(ℓ log n)   = IDTrap ΠTrap (S); S + O(ℓ log n) (the joint distribution of (ΠTrap (S), S) is identical under DIndex and DTrap ) = ICostDTrap (ΠTrap ) + O(ℓ log n) The lower bound now follows from Claims 3.6 and 3.7, and√ Lemma 3.4 for the parameters n and δ′ < 1/2, and using the fact that α = o( n/ log n), ℓ = 2α log m, and k = | S| = 10α m = poly(n), and hence Ω(n/α) = ω (ℓ log n). √ To conclude, by Lemma 3.3 and Lemma 3.5, for any set of parameters δ < 1/2, α = o( lognn ), and m = poly(n),   ICδDapx (SetCoverapx ) ≥ m · Ω(n/α) = Ω(mn/α) Since the information complexity is a lower bound on the communication complexity (Proposition 2.5), we have, √ Theorem 2. For any constant δ < 1/2, α = o( lognn ), and m = poly(n), CCδDapx (SetCoverapx ) = Ω(mn/α) Finally, since one-way communication complexity is also a lower bound on the space complexity of single-pass streaming algorithms, we obtain Theorem 1 as a corollary of Theorem 2. e (mn/α2 )-Space Upper Bound for α-Estimation 4 An O In this section, we show that if we are only interested in estimating the size of a minimum set cover (instead of finding the actual sets), we can bypass the Ω(mn/α) lower bound established in Section 3. In fact, we prove this upper bound for the more general problem of estimating the optimal solution of a covering integer program (henceforth, covering ILP) in the streaming setting. A covering ILP can be formally defined as follows. min c · x s.t. 
Ax ≥ b where A is a matrix with dimension n × m, b is a vector of dimension n, c is a vector of dimension m, and x is an m-dimensional vector of non-negative integer variables. Moreover, all coefficients in A, b, and c are also non-negative integers. We denote this linear program by ILPCover ( A, b, c). We use amax , bmax , and cmax , to denote the largest entry of, respectively, the matrix A, the vector b, and the vector c. Finally, we define the optimal value of the I := ILPCover ( A, b, c) as c · x∗ where x∗ is the optimal solution to I , and denote it by opt := opt (I). We consider the following streaming setting for covering ILPs. The input to a streaming algorithm for an instance I := ILPCover ( A, b, c) is the n-dimensional vector b, and a stream of the m columns of A presented one by one, where the i-th column of A, Ai , is presented along with the i-th entry of c, denoted by ci (we will refer to ci as the weight of the i-th column). It is easy to see that this streaming setting for covering ILPs captures, as special cases, the set cover problem, the weighted set cover problem, and the set multi-cover problem. We prove the following theorem for α-estimating the optimal value of a covering ILPs in the streaming setting. 13 Theorem 3. There is a randomized algorithm that given a parameter α ≥ 1, for any instance I := ILPCover ( A, b, c) with poly(n)-bounded entries, makes a single pass over a stream of columns of A (presented in an arbitrary order), and outputs an α-estimation to opt (I) w.h.p. using space  e (mn/α2 ) · bmax + m + nbmax bits. O √ In particular, for the weighted set cover problem with poly(n) bounded weights and α ≤ n, e (mn/α2 + n).5 the space complexity of this algorithm is O To prove Theorem 3, we design a general approach based on sampling constraints of a covering ILP instance. The goal is to show that if we sample (roughly) 1/α fraction of the constraints from an instance I := ILPCover ( A, b, c), then the optimum value of the resulting covering ILP, denoted by IR , is a good estimator of opt (I). Note that in general, this may not be the case; simply consider a weighted set cover instance that contains an element e which is only covered by a singleton set of weight W (for W ≫ m) and all the remaining sets are of weight 1 only. Clearly, opt (IR ) ≪ opt (I) as long as e is not sampled in IR , which happens w.p. 1 − 1/α. To circumvent this issue, we define a notion of cost for covering ILPs which, informally, is the minimum value of the objective function if the goal is to only satisfy a single constraint (in the above example, the cost of that weighted set cover instance is W). This allows us to bound the loss incurred in the process of estimation by sampling based on the cost of the covering ILP. Constraint sampling alone can only reduce the space requirement by a factor of α, which is not enough to meet the bounds given in Theorem 3. Hence, we combine it with a pruning step to sparsify the columns in A before performing the sampling. We should point out that as columns are weighted, the pruning step needs to be sensitive to the weights. In the rest of this section, we first introduce our constraint sampling lemma (Lemma 4.1) and prove its correctness, and then provide our algorithm for Theorem 3. 4.1 Covering ILPs and Constraint Sampling Lemma In this section, we provide a general result for estimating the optimal value of a Covering ILP using a sampling based approach. For a vector v, we will use vi to denote the i-th dimension of v. 
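To fix ideas, the streaming representation of a covering ILP described above can be sketched as follows, together with the encoding of an (unweighted) set cover instance as a special case. All identifiers are illustrative and not part of the formal model.

```python
from typing import Iterator, List, Tuple

# A covering ILP instance ILP_Cover(A, b, c) arrives as the demand vector b up
# front, followed by a stream of (column A_i, weight c_i) pairs.
Column = Tuple[List[int], int]

def set_cover_as_ilp_stream(sets: List[set], n: int) -> Tuple[List[int], Iterator[Column]]:
    """Hypothetical helper: encode an unweighted set cover instance as a
    covering ILP stream. Element j is the j-th constraint; set S_i becomes the
    0/1 column A_i with a_{j,i} = 1 iff j is in S_i, with weight c_i = 1 and
    demand vector b = (1, ..., 1)."""
    b = [1] * n
    def columns() -> Iterator[Column]:
        for S in sets:
            yield ([1 if j in S else 0 for j in range(n)], 1)
    return b, columns()

# Example: b, cols = set_cover_as_ilp_stream([{0, 2}, {1}], n=3)
```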
For a matrix A, we will use Ai to denote the i-th column of A, and use a j,i to denote the entry of A at the i-th column and the j-th row (to match the notation with the set cover problem, we use a j,i instead of the standard notation ai,j ). For each constraint j ∈ [n] (i.e., the j-th constraint) of a covering ILP instance I := ILPCover ( A, b, c), we define the cost of the constraint j, denoted by Cost( j), as, m Cost( j) := min c · x s.t x ∑ aj,i · xi ≥ bj i=1 which is the minimum solution value of the objective function for satisfying the constraint j. Furthermore, the cost of I , denoted by Cost(I), is defined to be Cost(I) := max Cost( j) j∈[n ] Clearly, Cost(I) is a lower bound on opt (I). Constraint Sampling. Given any instance of covering ILP I := ILPCover ( A, b, c), let IR be a covering ILP instance ILPCover ( A, e b, c) obtained by setting e b j := b j with probability p, and e b j := 0 with probability 1 − p, for each dimension j ∈ [n] of b independently. Note that setting 5 Note that Ω(n ) space is necessary to even determine whether or not a given instance is feasible. 14 e b j := 0 in IR is equivalent to removing the j-th constraint from I , since all entries in I are non-negative. Therefore, intuitively, IR is a covering ILP obtained by sampling (and keeping) the constraints of I with a sampling rate of p. We establish the following lemma which asserts that opt (IR ) is a good estimator of opt (I) (under certain conditions). As opt (IR ) ≤ opt (I) trivially holds (removing constraints can only decrease the optimal value), it suffices to give a lower bound on opt (IR ). Lemma 4.1 (Constraint Sampling Lemma). Fix an α ≥ 32 ln n; for any covering ILP I with n constraints, suppose IR is obtained from I by sampling each constraint with probability p := 4 lnα n ; then  opt (I)  3 Pr opt (IR ) + Cost(I) ≥ ≥ 8α 4 Proof. Suppose by contradiction that the lemma statement is false and throughout the proof opt(I ) let I be any instance where w.p. at least 1/4, opt (IR ) + Cost(I) < 8α (we denote this event by E1 (IR ), or shortly E1 ). We will show that in this case, I has a feasible solution with a value smaller than opt (I). To continue, define E2 (IR ) (or E2 in short) as the event that opt(I ) opt (IR ) < 8α . Note that whenever E1 happens, then E2 also happens, hence E2 happens w.p. at least 1/4. For the sake of analysis, suppose we repeat, for 32α times, the procedure of sampling each constraint of I independently with probability p, and obtain 32α covering ILP instances  S := IR1 , . . . , IR32α . Since E2 happens with probability at least 1/4 on each instance IR , the expected number of times that E2 happens for instances in S is at least 8α > 12 ln n. Hence, by the Chernoff bound, with probability at least 1 − 1/n, E2 happens on at least 4α of instances in S. Let T ⊆ S be a set of 4α sampled instances for which E2 happens. In the following, we show that if I has the property that Pr(E1 (I R )) ≥ 1/4, then w.p. at least 1 − 1/n, every constraint in I appears in at least one of the instances in T. Since each of these 4α instances opt(I ) admits a solution of value at most 8α (by the definition of E2 ), the “max” of their solutions, i.e., the vector obtained by setting the i-th entry to be the largest value of xi among all these opt(I ) opt(I ) solutions, gives a feasible solution to I with value at most 4α · 8α = 2 ; a contradiction. We use “j ∈ IR ” to denote the event that the constraint j of I is sampled in IR , and we need to show that w.h.p. 
for all j, there exists an instance IR ∈ T where j ∈ IR . We establish the following claim.   Claim 4.2. For any j ∈ [n], Pr j ∈ IR | E2 (IR ) ≥ ln2αn . Before proving Claim 4.2, we show how this claim would imply the lemma. By Claim 4.2, for each of the 4α instances IR ∈ T, and for any j ∈ [n], the probability that the constraint j is sampled in IR is at least ln2αn . Then, the probability that j is sampled in none of the 4α instances of T is at most:  1 ln n 4α ≤ exp(−2 ln n) = 2 1− 2α n Hence, by union bound, w.p. at least 1 − 1/n, every constraint appears in at least one of the instances in T, and this will complete the proof. It remains to prove Claim 4.2. Proof of Claim 4.2. Fix any j ∈ [n]; by Bayes rule,     Pr E2 (IR ) | j ∈ IR · Pr ( j ∈ IR )   Pr j ∈ IR | E2 (IR ) = Pr E2 (IR ) 15   Since Pr E2 (IR ) ≤ 1 and Pr ( j ∈ IR ) = p = 4 ln n α , we have,     4 ln n Pr j ∈ IR | E2 (IR ) ≥ Pr E2 (IR ) | j ∈ IR · α   and it suffices to establish a lower bound of 1/8 for Pr E2 (IR ) | j ∈ IR . (4) Consider the following probabilistic process (for a fixed j ∈ [n]): we first remove the constraint j from I (w.p. 1) and then sample each of the remaining constraints of I w.p. p. Let IR′ be an instance created by this process. We prove Pr (E2 (IR ) | j ∈ IR ) ≥ 1/8 in two steps by first showing that the probability that E1 happens to IR′ (i.e., Pr (E1 (IR′ ))) is at least 1/8, and then use a coupling argument to prove that Pr (E2 (IR ) | j ∈ IR ) ≥ Pr (E1 (IR′ )). We first show that Pr (E1 (IR′ )) (which by definition is the probability that opt (IR′ ) + opt(I ) Cost(I) ≤ 16α ) is at least 1/8. To see this, note that the probability that E1 happens to IR′ is equal to the probability that E1 happens to IR conditioned on j not being sampled (i.e., Pr (E1 (IR ) | j ∈ / IR )). Now, if we expand Pr (E1 (IR )),       / IR / IR ) Pr E1 (IR ) | j ∈ Pr E1 (IR ) = Pr ( j ∈ IR ) Pr E1 (IR ) | j ∈ IR + Pr ( j ∈     / IR = p + Pr E1 (IR′ ) ≤ Pr ( j ∈ IR ) + Pr E1 (IR ) | j ∈ As Pr (E1 (IR )) ≥ 1/4 and p = 4 ln n α ≤ 1/8 (since α ≥ 32 ln n), we have,   1/4 ≤ 1/8 + Pr E1 (IR′ ) and therefore, Pr (E1 (IR′ )) ≥ 1/8. It remains to show that Pr (E2 (IR ) | j ∈ IR ) ≥ Pr (E1 (IR′ )). To see this, note that conditioned on j ∈ IR , the distribution of sampling all constraints other than j is exactly the same as the distribution of IR′ . Therefore, for any instance IR′ drawn from this distribution, there is a unique instance IR sampled from the original constraint sampling distribution conditioned on j ∈ IR . For any such (IR′ , IR ) pair, we have opt (IR ) ≤ opt (IR′ ) + Cost( j) (≤ opt (IR′ ) + Cost(I)) since satisfying the constraint j ∈ IR requires increasing the value of the objective function opt in IR′ by at most Cost( j). Therefore if opt (IR′ ) + Cost(I) ≤ 8α (i.e., E1 happens to IR′ ), then opt opt (IR ) ≤ 8α (i.e., E2 happens to IR conditioned on j ∈ IR ). Hence,     Pr E2 (IR ) | j ∈ IR ≥ Pr E1 (IR′ ) ≥ 1/8 Plugging in this bound in Equation (4), we obtain that Pr ( j ∈ IR | E2 ) ≥ ln n 2α . 4.2 An α-estimation of Covering ILPs in the Streaming Setting We now prove Theorem 3. 
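As a concrete illustration, the constraint-sampling operation underlying Lemma 4.1 amounts to the following operation on the demand vector (a minimal sketch with hypothetical identifiers; recall that the lemma assumes α ≥ 32 ln n, so the sampling rate p is at most 1/8).

```python
import math
import random
from typing import List

def sample_constraints(b: List[int], alpha: float, rng=random) -> List[int]:
    """Sketch of the sampling step behind Lemma 4.1: keep constraint j with
    probability p = 4 ln(n) / alpha by leaving b_j intact, and drop it by
    setting b_j = 0 (with non-negative entries, zero demand makes the
    constraint vacuous)."""
    n = len(b)
    p = 4 * math.log(n) / alpha   # at most 1/8 when alpha >= 32 ln n
    return [b_j if rng.random() < p else 0 for b_j in b]
```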
Throughout this section, for simplicity of exposition, we assume that α ≥ 32 ln n (otherwise the space bound in Theorem 3 is enough to store the whole input and solve the problem optimally), the value of cmax is provided to the algorithm, and x is a vector of binary variables, i.e., x ∈ {0, 1}m (hence covering ILP instances are always referring to covering ILP instances with binary variables); in Section 4.3, we describe how to eliminate the later two assumptions. Algorithm overview. For any covering ILP instance I := ILPCover ( A, b, c), our algorithm estimates opt := opt (I) in two parts running in parallel. In the first part, the goal is simply to 16 compute Cost(I) (see Claim 4.3). For the second part, we design a tester algorithm (henceforth, Tester) that given any “guess” k of the value of opt, if k ≥ opt, Tester accepts k w.p. 1 opt and for any k where Cost(I) ≤ k ≤ 32α , w.h.p. Tester rejects k. Let K := {2γ }γ∈[⌈log(mcmax )⌉] ; for each k ∈ K (in parallel), we run Tester(k). At the end of the stream, the algorithm knows Cost(I) (using the output of the part one), and hence it can identify among all guesses that are at least Cost(I), the smallest guess accepted by Tester (denoted by k∗ ). On one hand, k∗ ≤ opt since for any guess k ≥ opt, k ≥ Cost(I) also opt (since opt ≥ Cost(I)) and Tester accepts k. On the other hand, k∗ ≥ 32α w.h.p. since (i ) if opt opt opt opt Cost(I) ≥ 32α , k∗ ≥ Cost(I) ≥ 32α and (ii ) if Cost(I) < 32α , the guess 32α will be rejected w.h.p. by Tester. Consequently, 32α · k∗ is an O(α)-estimation of opt (I). We first show that one can compute the Cost of a covering ILP presented in a stream using a simple dynamic programming algorithm. Claim 4.3. For any I := ILPCover ( A, b, c) presented by a stream of columns, Cost(I) can be computed in space O(nbmax log cmax ) bits. Proof. We maintain arrays Cost j [y] ← +∞ for any j ∈ [n] and y ∈ [bmax ]: Cost j [y] is the minimum cost for achieving a coverage of y for the j-th constraint. Define Cost j [y] = 0 for y ≤ 0. Upon arrival of h Ai , ci i, compute Cost j [y] = min(Cost j [y], Cost j [y − a j,i ] + ci ), for every y from b j down to 1. We then have Cost(I) = max j Cost j [b j ]. To continue, we need the following notation. For any vector v with dimension d and any set S ⊆ [d], v(S) denotes the projection of v onto the dimensions indexed by S. For any two vectors u and v, let min(u, v) denote a vector w where at the i-th dimension: wi = min(ui , vi ), i.e., the coordinate-wise minimum. We now provide the aforementioned Tester algorithm. Tester(k): An algorithm for testing a guess k of the optimal value of a covering ILP. Input: An instance I := ILPCover ( A, b, c) presented as a stream h A1 , c1 i, . . ., h Am , cm i, a parameter α ≥ 32 ln n, and a guess k ∈ K. opt Output: ACCEPT if k ≥ opt and REJECT if Cost(I) ≤ k ≤ 32α . The answer could be either opt ACCEPT or REJECT if 32α < k < opt. 1. Preprocessing: c ← 0m , (i) Maintain an n-dimensional vector bres ← b, an m-dimensional vector e e ← 0n × m . and an n × m dimensional matrix A (ii) Let V be a subset of [n] obtained by sampling each element in [n] independently with probability p := 4 ln n/α. 2. Streaming: when a pair h Ai , ci i arrives: (i) If ci > k, directly continue to the next input pair of the stream. Otherwise: (ii) Prune step: Let ui := min(bres , Ai ) (the coordinate-wise minimum). If kui k1 ≥ n ·bmax α , update bres ← bres − u i (we say h A i , c i i is pruned by Tester in this case). ei ← ui (V ), and e Otherwise, assign A ci ← ci . 3. 
At the end of the stream, solve the following covering ILP (denoted by Itester ): e ≥ bres (V ) min e c · x s.t. Ax If opt (Itester ) is at most k, ACCEPT; otherwise REJECT. 17 ei ← We first make the following observation. In the prune step of Tester, if we replace A ′ e ui (V ) by Ai ← Ai (V ), the solution of the resulting covering ILP instance (denoted by Itester ) ′ has the property that opt (Itester ) = opt (Itester ) (we use Itester only to control the space requirement). To see this, let bres i denotes the content of the vector bres when h Ai , ci i arrives. By construction, (ui ) j := min((bres i ) j , a j,i ), and hence if (ui ) j 6= a j,i , then both (ui ) j and a j,i are at least (bres i ) j , which is at least (bres ) j (since every dimension of bres is monotonically decreasing). However, for any integer program ILPCover ( A, b, c), changing any entry a j,i of A between two values that are at least b j does not change the optimal value, and hence ′ ) = opt (Itester ). To simplify the proof, in the following, when concerning opt(Itester ), opt (Itester ′ . we redefine Itester to be Itester We now prove the correctness of Tester in the following two lemmas.   Lemma 4.4. For any guess k ≥ opt, Pr Tester(k) = ACCEPT = 1. Proof. Fix any optimal solution x∗ of I ; we will show that x∗ is a feasible solution for c · x∗ ≤ c · x∗ ≤ opt, this will show that c, we have e Itester , and since by the construction of e opt (Itester ) ≤ opt ≤ k and hence Tester(k) = ACCEPT. Fix a constraint j in Itester as follows: ∑ i∈[m ] e a j,i xi ≥ bres (V ) j If j ∈ / V, bres (V ) j = 0 and the constraint is trivially satisfied for any solution x∗ . Suppose j ∈ V and let P denote the set of (indices of) pairs that are pruned. By construction of the Tester, bres (V ) j = max(b j − ∑ i∈ P a j,i , 0). If bres (V ) j = 0, again the constraint is trivially satisfied. Suppose bres (V ) j = b j − ∑i∈ P a j,i . The constraint j can be written as ∑ eaj,i xi ≥ bj − ∑ aj,i i∈ P i By construction of the tester, e a j,i = 0 for all i that are pruned and otherwise e a j,i = a j,i . Hence, we can further write the constraint j as ∑ aj,i xi ≥ bj − ∑ aj,i i∈ P i/ ∈P Now, since x∗ satisfies the constraint j in I , ∑ i∈[m ] a j,i xi∗ ≥ b j ∑ aj,i x∗ ≥ bj − ∑ aj,i xi∗ i/ ∈P i∈ P ≥ b j − ∑ a j,i i∈ P (xi∗ ≤ 1) and the constraint j is satisfied. Therefore, x∗ is a feasible solution of Itester ; this completes the proof. opt We now show that Tester will reject guesses that are smaller than 32α . We will only prove that the rejection happens with probability 3/4; however, the probability of error can be reduced to any δ < 1 by running O(log 1/δ) parallel instances of the Tester and for each guess, REJECT if any one of the instances outputs REJECT and otherwise ACCEPT. In our case δ = O(|K |−1 ) so we can apply union bound for all different guesses.   opt Lemma 4.5. For any guess k where Cost(I) ≤ k < 32α , Pr Tester(k) = REJECT ≥ 3/4. 18   Proof. By construction of Tester(k), we need to prove that Pr opt (Itester ) > k ≥ 3/4. Define the following covering ILP I ′ : b ≥ bres (V ) min e c · x s.t. Ax bi = Ai if h Ai , ci i is not pruned by Tester, and A bi = 0n otherwise. In Tester(k), for where A each pair h Ai , ci i that is not pruned, instead of storing the entire vector Ai , we store the ei in Itester ). This projection of Ai onto dimensions indexed by V (which is the definition of A ′ is equivalent to performing constraint sampling on I with a sampling rate of p = 4 ln n/α. 
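The per-column processing of Tester(k), in particular the prune step and the projection onto V discussed above, can be sketched as follows. This is an illustrative fragment with hypothetical identifiers, not the paper's implementation.

```python
from typing import List, Optional, Tuple

def tester_stream_update(
    A_i: List[int], c_i: int, k: int,
    b_res: List[int], V: List[int],
    n: int, b_max: int, alpha: float,
) -> Tuple[str, Optional[Tuple[List[int], int]]]:
    """One streaming step of Tester(k) (hedged sketch).  Returns a status and,
    when the column is kept, the stored pair (projection of u_i onto V, c_i).
    Pruned columns only update b_res in place and store nothing."""
    if c_i > k:                                    # step 2(i): column too expensive for this guess
        return ("skipped", None)
    u_i = [min(b_res[j], A_i[j]) for j in range(n)]   # coordinate-wise minimum
    if sum(u_i) >= n * b_max / alpha:              # prune step 2(ii)
        for j in range(n):
            b_res[j] -= u_i[j]
        return ("pruned", None)
    return ("stored", ([u_i[j] for j in V], c_i))  # keep only coordinates indexed by V
```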
opt(I ′ ) Therefore, by Lemma 4.1, with probability at least 3/4, opt (Itester ) + Cost(I ′ ) ≥ 8α . Since Cost(I ′ ) ≤ Cost(I) ≤ k < opt(I ) 32α , opt (Itester ) ≥ this implies that opt (I ′ ) opt (I) opt (I ′ ) − Cost(I ′ ) > − . 8α 8α 32α Therefore, we only need to show that opt (I ′ ) ≥ opt(I ) 32α opt(I ) 2 since then opt (Itester ) > opt(I ) 16α (I ) − opt 32α = > k and Tester will reject k. opt(I ) To show that opt (I ′ ) ≥ 2 , we first note that for any optimal solution x∗ of I ′ , if we further set xi∗ = 1 for any pair h Ai , ci i that are pruned, the resulting xi∗ is a feasible solution for I . Therefore, if we show that the total weight of the h Ai , ci i pairs that are pruned is at opt opt most 2 , opt (I ′ ) must be at least 2 or we will have a solution for I better than opt (I). To see that the total weight of the pruned pairs is at most opt/2, since only pairs with opt ci ≤ k (≤ 32α ) will be considered, we only need to show that at most 16α pairs can be pruned. By the construction of the prune step, each pruned pair reduces the ℓ1 -norm of the vector bres by an additive factor of at least nbαmax . Since bres is initialized to be b and kbk1 ≤ nbmax , at most α (≤ 16α) pairs can be pruned. This completes the proof. We now finalize the proof of Theorem 3. Proof of Theorem 3. We run the algorithm described in the beginning of this section. The correctness of the algorithm follows from Claim 4.3 and Lemmas 4.4 and 4.5. We now analyze the space complexity of this algorithm. We need to run the algorithm in Claim 4.3 to come (nbmax ) space. We also need to run Tester for O(log (m · cmax )) pute Cost(I), which require O different guesses of k. In Tester(k), we need O(n log bmax ) bits to store the vector bres and O(m log cmax ) bits to e requires O(mnbmax /α · (log n/α) · (log amax log n)) maintain the vector e c. Finally, the matrix A ei of A e is either 0n or ui (V ) where kui k < bits to store. This is because each column A 1 n ·bmax n ·bmax n ·bmax < . Since u , there are at most non-zero entries in u . Therefore, after k k i 1 i α α α ei ) in expectation the number of non-zero entries in A ei is at projecting ui to V (to obtain A nbmax 2 e most O(nbmax /α ). Using the Chernoff bound w.h.p at most O( α2 ) non-zero entries of ui ei , where each entry needs O(log amax log n) bits to store. Note that the space remain in each A complexity of the algorithm can be made deterministic by simply terminating the execution ei has (c · nbmax when at least one set A ) non-zero entries (for a sufficiently large constant α2 c > 1); as this event happens with o(1) probability, the error probability of the algorithm increases only by o(1). Finally, as all entries in ( A, b, c) are poly(n)-bounded, the total space e ((mn/α2 ) · bmax + m + nbmax ). requirement of the algorithm is O We also make the following remark about α-approximating covering ILPs. Remark 4.6. The simple algorithm described in Section 1.2 for α-approximating set cover can also be e (mnbmax /α): Group extended to obtain an α-approximation algorithm for covering ILPs in space O the columns by the weights and merge every α sets for each group independently. 19 4.3 Further Remarks We now briefly describe how to eliminate the assumptions that variables are binary and cmax is given to the algorithm. Binary variables versus integer variables. 
We point out that any algorithm that solves covering ILP with binary variables in the streaming setting, can also be used to solve covering ILP with non-negative integer variables (or shortly, integer variables), while increasing the number of columns by a factor of O(log bmax ). To see this, given any covering ILP instance ILPCover ( A, b, c) (with integer variables), we replace each h Ai , ci i pair with O(log bmax ) pairs (or columns) where the j-th pair (j ∈ [⌈log bmax ⌉]) is defined to be h2j−1 Ai , 2j−1 ci i. Every combination of the corresponding j binary variables maps to a unique binary representation of the variable xi in ILPCover ( A, b, c) for xi = O(bmax ). Since no variable in ILPCover ( A, b, c) needs to be more than bmax , the correctness of this reduction follows. Knowing cmax versus not knowing cmax . As we pointed out earlier, the assumption that the value of cmax is provided to the algorithm is only for simplicity; here we briefly describe how to eliminate this assumption. Our algorithm uses cmax to determine the range of opt, which determines the set of guesses K that Tester needs to run. If cmax is not provided, one natural fix is to use the same randomness (i.e., the same set V) for all testers, and whenever a larger ci arrives, create a tester for each new guess by duplicating the state of Tester for the current largest guess and continue from there (the correctness is provided by the design of our Tester). However, if we use this approach, since we do not know the total number of guesses upfront, we cannot boost the probability of success to ensure that w.h.p. no tester fails (if cmax is poly(n)-bounded, the number of guesses is O(log(ncmax )) = O(log n), O(log log n) parallel copies suffice, and though we do not know the value of the constant, log n copies always suffice; but this will not be the case for general cmax ). To address this issue, we point out that it suffices to ensure that the copy of Tester that runs a specific guess k′ succeed w.h.p., where opt k′ is the largest power of 2 with k′ ≤ 32α . To see this, we change the algorithm to pick the largest rejected guess k∗ and return 64αk∗ (previously, it picks the smallest accepted guess k opt and returns 32αk), if k′ is correctly rejected by the tester, k∗ ≥ k′ ≥ 64α . On the other hand, for any guess no less than opt, Tester accepts it w.p. 1, and hence k∗ ≤ opt. Therefore, as long as Tester rejects k′ correctly, the output of the algorithm is always an O(α)-estimation and hence we do not need the knowledge of cmax upfront. The set multi-cover problem. For the set multi-cover problem, Theorem 3 gives an αe ( mnb2max + m + nbmax ); here, bmax is the maximum demand estimation algorithm using space O α of the elements. However, we remark that if sets are unweighted, a better space requirement e ( mn2 + m + n) is achievable for this problem: Instead of running the tester for multiple of O α guesses, just run it once with the following changes. In the prune step, change from “kui k1 ≥ nbmax n α ” to “k u i k1 ≥ α ”; and output est : = # pruned + 8α ( opt (I tester ) + bmax ), where # pruned is the number of sets that are pruned. One one hand, est ≥ opt w.h.p since # pruned plus the optimal solution of the residual covering ILP (denoted by opt res ) is a feasible solution of I , and by Lemma 4.1, w.h.p. 8α(opt (Itester ) + bmax ) ≥ optres . 
One the other hand, est = O(α · opt ) since # pruned ≤ αbmax ≤ α · opt (bmax sets is needed even for covering one element); opt (Itester ) ≤ opt and bmax ≤ opt, which implies 8α(opt (Itester ) + bmax ) ≤ 8α(opt + opt ) = O(α · opt ). 5 An Ω(mn/α2 )-Space Lower Bound for α-Estimate Set Cover Our algorithm in Theorem 3 suggests that there exists at least a factor α gap on the space requirement of α-approximation and α-estimation algorithms for the set cover problem. We now show that this gap is the best possible. In other words, the space complexity of our algorithm in Theorem 3 for the original set cover problem is tight (up to logarithmic factors) 20 even for random arrival streams. Formally, Theorem 4. qLet S be a collection of m subsets of [n] presented one by one in a random order. For any α = o( logn n ) and any m = poly(n), any randomized algorithm that makes a single pass over S and outputs an α-estimation of the set cover problem with probability 0.9 (over the randomness of e ( mn2 ) bits of space. both the stream order and the algorithm) must use Ω α q Fix a (sufficiently large) value for n, m = poly(n), and α = o( logn n ); throughout this section, SetCoverest refers to the problem of α-estimating the set cover problem with m + 1 sets (see footnote 4) defined over the universe [n] in the one-way communication model, whereby the sets are partitioned between Alice and Bob. Overview. We start by introducing a hard distribution Dest for SetCoverest in the spirit of the distribution Dapx in Section 3. However, since in SetCoverest the goal is only to estimate the size of the optimal cover, “hiding” one single element (as was done in Dapx ) is not enough for the lower bound. Here, instead of trying to hide a single element, we give Bob a “block” of elements and his goal would be to decide whether this block appeared in a single set of Alice as a whole or was it partitioned across many different sets6. Similar to Dapx , distribution Dest is also not a product distribution; however, we introduce a way of decomposing Dest into a convex combination of product distributions and then exploit the simplicity of product distributions to prove the lower bound. Nevertheless, the distribution Dest is still “adversarial” and hence is not suitable for proving the lower bound for random arrival streams. Therefore, we define an extension to the original hard distribution as Dext which randomly partitions the sets of distribution Dest between Alice and Bob. We prove a lower bound for this distribution using a reduction from protocols over Dest . Finally, we show how an algorithm for set cover over random arrival streams would be able to solve instances of SetCoverest over Dext and establish Theorem 4. 5.1 A Hard Input Distribution for SetCoverest Consider the following distribution Dest for SetCoverest . Distribution Dest . A hard input distribution for SetCoverest . Notation. Let F be the collection of all subsets of [n] with cardinality n 10α . • Alice. The input of Alice is a collection of m sets S = (S1 , . . . , Sm ), where for any i ∈ [m], Si is a set chosen independently and uniformly at random from F . • Bob. Pick θ ∈ {0, 1} and i∗ ∈ [m] independently and uniformly at random; the input of Bob is a single set T defined as follows. − If θ = 0, then T is a set of size α log m chosen uniformly at random from Si∗ .a − If θ = 1, then T is a set of size α log m chosen uniformly at random from [n] \ Si∗ . 
a Since α = o ( p n/ log n) and m = poly(n ), the size of T is strictly smaller than the size of Si∗ . Recall that opt (S , T ) denotes the set cover size of the input instance (S , T ). We first establish the following lemma regarding the parameter θ and opt (S , T ) in the distribution Dest . Lemma 5.1. For (S , T ) ∼ Dest : 6 The actual set given to Bob is the complement of this block; hence the optimal set cover size varies significantly between the two cases. 21 (i) Pr (opt (S , T ) = 2 | θ = 0) = 1. (ii) Pr (opt (S , T ) > 2α | θ = 1) = 1 − o(1). Proof. Part (i) is immediate since by construction, when θ = 0, T ∪ Si∗ = [n]. We now prove part (ii). Since a valid set cover must cover T, it suffices for us to show that w.h.p. no 2α sets from S can cover T. By construction, neither T nor Si∗ contains any element in T, hence T must be covered by at most 2α sets in S \ {Si∗ }. Fix a collection Sb of 2α sets in S \ {Si∗ }; we first analyze the probability that Sb covers T and then take union bound over all choices of 2α sets from S \ {Si∗ }. Note that according to the distribution Dest , the sets in Sb are drawn independent of T. Fix any choice of T; for j each element k ∈ T, and for each set S j ∈ Sb, define an indicator random variable Xk ∈ {0, 1}, j j where Xk = 1 iff k ∈ S j . Let X := ∑ j ∑k Xk and notice that: E[ X ] = 1 j ∑ ∑ E[Xk ] = (2α) · (α log m) · ( 10α ) = α log m/5 j k We have,   Pr Sb covers T ≤ Pr ( X ≥ α log m) = Pr ( X ≥ 5 E[ X ]) ≤ exp (−3α log m) j where the last equality uses the fact that Xk variables are negatively correlated (which can be proven analogous to Lemma 3.2) and applies the extended Chernoff bound (see Section 2). Finally, by union bound,   m b Pr(opt (S , T ) ≤ 2α) ≤ Pr ∃ S covers T ≤ · exp (−3α log m) 2α ≤ exp (2α · log m − 3α log m) = o(1) Notice that distribution Dest is not a product distribution due to the correlation between the input given to Alice and Bob. However, we can express the distribution as a convex combination of a relatively small set of product distributions; this significantly simplifies the proof of the lower bound. To do so, we need the following definition. For integers k, t and n, a collection P of t subsets of [n] is called a random (k, t)-partition iff the t sets in P are constructed as follows: Pick k elements from [n], denoted by S, uniformly at random, and partition S randomly into t sets of equal size. We refer to each set in P as a block. An alternative definition of the distribution Dest . Parameters: k= n 5α p = α log m t = k/p 1. For any i ∈ [m], let Pi be a random (k, t)-partition in [n] (chosen independently). 2. The input to Alice is S = (S1 , . . . , Sm ), where each Si is created by picking t/2 blocks from Pi uniformly at random. 3. The input to Bob is a set T where T is created by first picking an i∗ ∈ [m] uniformly at random, and then picking a block from Pi∗ uniformly at random. To see that the two formulations of the distribution Dest are indeed equivalent, notice that (i ) the input given to Alice in the new formulation is a collection of sets of size n/10α 22 chosen independently and uniformly at random (by the independence of Pi ’s), and (ii ) the complement of the set given to Bob is a set of size α log m which, for i ∗ ∈ R [m], with probability half, is chosen uniformly at random from Si∗ , and with probability half, is chosen from [n] \ Si∗ (by the randomness in the choice of each block in Pi∗ ). Fix any δ-error protocol ΠSC (δ < 1/2) for SetCoverest on the distribution Dest . 
Recall that ΠSC denotes the random variable for the concatenation of the message of Alice with the public randomness used in the protocol ΠSC . We further use P := (P1 , . . . , Pt ) to denote the random partitions ( P1 , . . . , Pt ), I for the choice of the special index i∗ , and θ for the parameter θ ∈ {0, 1}, whereby θ = 0 iff T ⊆ Si∗ . We make the following simple observations about the distribution Dest . The proofs are straightforward. Remark 5.2. In the distribution Dest , 1. The random variables S , P , and ΠSC (S) are all independent of the random variable I. 2. For any i ∈ [m], conditioned on Pi = P, and I = i, the random variables Si and T are independent of each other. Moreover, supp(Si ) and supp(T) contain, respectively, ( tt ) and t 2 elements and both Si and T are uniform over their support. 3. For any i ∈ [m], the random variable Si is independent of both S −i and P −i . 5.2 The Lower Bound for the Distribution Dest Our goal in this section is to lower bound ICostDest (ΠSC ) and ultimately kΠSC k. We start by simplifying the expression for ICostDest (ΠSC ). Lemma 5.3. ICostDest (ΠSC ) ≥ ∑m i=1 I ( ΠSC ; Si | Pi ) Proof. We have, ICostDest (ΠSC ) = I (ΠSC ; S) ≥ I (ΠSC ; S | P ) where the inequality holds since (i ) H (ΠSC ) ≥ H (ΠSC | P ) and (ii ) H (ΠSC | S) = H (ΠSC | S , P ) as ΠSC is independent of P conditioned on S . We now bound the conditional mutual information term in the above equation. m I (ΠSC ; S | P ) = ∑ I (Si; ΠSC | P , S <i ) i=1 (the chain rule for the mutual information, Claim 2.1-(5)) m = ∑ H (Si | P , S <i ) − H (Si | ΠSC , P , S <i ) i=1 m ≥ = ∑ H (Si | Pi ) − H (Si | ΠSC , Pi ) i=1 m ∑ I (Si; ΠSC | Pi ) i=1 The inequality holds since: (i) H (Si | Pi ) = H (Si | Pi , P −i , S <i ) = H (Si | P , S <i ) because conditioned on Pi , Si is independent of P −i and S <i (Remark 5.2-(3)), hence the equality holds by Claim 2.1-(3). (ii) H (Si | ΠSC , Pi ) ≥ H (Si | ΠSC , Pi , P −i , S <i ) = H (Si | ΠSC , P , S <i ) since conditioning reduces the entropy, i.e., Claim 2.1-(3). 23 Equipped with Lemma 5.3, we only need to bound ∑i∈[m] I (ΠSC ; Si | Pi ). Note that, m ∑ i=1 I (ΠSC ; Si | Pi ) = m m i=1 i=1 ∑ H (Si | Pi ) − ∑ H (Si | ΠSC , Pi ) (5) Furthermore, for each i ∈ [m], |supp(Si | Pi )| = ( tt ) and Si is uniform over its support (Re2 mark 5.2-(2)); hence, by Claim 2.1-(1), m ∑ H (Si | Pi ) = i=1 m ∑ log i=1   t t 2   = m · log 2t−Θ(log t) = m · t − Θ(m log t) (6) Consequently, we only need to bound ∑m i=1 H ( Si | ΠSC , Pi ). In order to do so, we show that ΠSC can be used to estimate the value of the parameter θ, and hence we only need to establish a lower bound for the problem of estimating θ. Lemma 5.4. Any δ-error protocol ΠSC over the distribution Dest can be used to determine the value of θ with error probability δ + o(1). Proof. Alice sends the message ΠSC (S) as before. Using this message, Bob can compute an α-estimation of the set cover problem using ΠSC (S) and his input. If the estimation is less than 2α, we output θ = 0 and otherwise we output θ = 1. The bound on the error probability follows from Lemma 5.1. Before continuing, we make the following remark which would be useful in the next section. Remark 5.5. We assume that in SetCoverest over the distribution Dest , Bob is additionally provided with the special index i ∗ . Note that this assumption can only make our lower bound stronger since Bob can always ignore this information and solves the original SetCoverest . 
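For concreteness, the alternative (generative) formulation of Dest from Section 5.1 can be sampled as in the following sketch. The helper names and the choice of logarithm base are ours; depending on which formulation one follows, Bob receives the sampled block or its complement (cf. footnote 6).

```python
import math
import random
from typing import List, Set

def random_kt_partition(n: int, k: int, t: int, rng=random) -> List[Set[int]]:
    """A random (k, t)-partition of [n]: pick k elements uniformly at random
    and split them into t blocks of equal size (assumes t divides k)."""
    elems = rng.sample(range(n), k)
    size = k // t
    return [set(elems[i * size:(i + 1) * size]) for i in range(t)]

def sample_D_est(n: int, m: int, alpha: float, rng=random):
    """Hedged sketch of the alternative formulation of D_est.  Alice's set S_i
    is the union of t/2 random blocks of an independent (k, t)-partition P_i;
    Bob's set is derived from one random block of P_{i*}."""
    p = max(1, int(alpha * math.log2(m)))          # block size, roughly alpha * log m
    k = max(2 * p, n // int(5 * alpha))            # roughly n / (5 alpha)
    t = max(2, k // p)                             # number of blocks
    k_adj = t * p                                  # make the block count divide the element count
    partitions = [random_kt_partition(n, k_adj, t, rng) for _ in range(m)]
    alice = [set().union(*rng.sample(P, t // 2)) for P in partitions]
    i_star = rng.randrange(m)
    block = rng.choice(partitions[i_star])
    # In the earlier formulation Bob is handed [n] \ block rather than block itself.
    return alice, block, i_star
```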
Let β be the function that estimates θ used in Lemma 5.4; the input to β is the message given from Alice, the public coins used by the players, the set T, and (by Remark 5.5) the special index i ∗ . We have, Pr( β(ΠSC , T, I ) 6= θ) ≤ δ + o(1) Hence, by Fano’s inequality (Claim 2.2), H2 (δ + o(1)) ≥ H (θ | ΠSC , T, I ) h i = E H (θ | ΠSC , T, I = i) i∼ I = 1 m m ∑ H (θ | ΠSC , T, I = i) (7) i=1 We now show that each term above is lower bounded by H (Si | ΠSC , Pi )/t and hence we obtain the desired upper bound on H (Si | ΠSC , Pi ) in Equation (5). Lemma 5.6. For any i ∈ [m], H (θ | ΠSC , T, I = i ) ≥ H (Si | ΠSC , Pi )/t. 24 Proof. We have, H (θ | ΠSC , T, I = i ) ≥ H (θ | ΠSC , T, Pi , I = i ) (conditioning on random variables reduces entropy, Claim 2.1-(3)) h i H (θ | ΠSC , T, Pi = P, I = i ) = E P ∼ Pi | I = i For brevity, let E denote the event (Pi = P, I = i ). We can write the above equation as, h i H (θ | ΠSC , T, Pi , I = i ) = E E H (θ | ΠSC , T = T, E) P ∼ Pi | I = i T ∼T| E Note that by Remark 5.2-(2), conditioned on the event E, T is chosen to be one of the blocks of P = ( B1 , . . . , Bt ) uniformly at random. Hence, H (θ | ΠSC , T, Pi , I = i ) = E P ∼ Pi | I = i h t ∑ j=1 H (θ | ΠSC , T = Bj , E) i t Define a random variable X := ( X1 , . . . , Xt ), where each X j ∈ {0, 1} and X j = 1 iff Si contains the block Bj . Note that conditioned on E, X uniquely determines the set Si . Moreover, notice that conditioned on T = Bj and E, θ = 0 iff X j = 1. Hence, H (θ | ΠSC , T, Pi , I = i ) = E P ∼ Pi | I = i h t ∑ j=1 H ( X j | ΠSC , T = Bj , E) i t Now notice that X j is independent of the event T = Bj since Si is chosen independent of T conditioned on E (Remark 5.2-(2)). Similarly, since ΠSC is only a function of S and S is independent of T conditioned on E, ΠSC is also independent of the event T = Bj . Consequently, by Claim 2.1-(6), we can “drop” the conditioning on T = Bj , h H ( X j | ΠSC , E) i ∑ t P ∼ Pi | I = i j=1 h H ( X | Π , E) i SC ≥ E t P ∼ Pi | I = i (sub-additivity of the entropy, Claim 2.1-(4)) h H (S | Π , E) i SC i = E t P ∼ Pi | I = i (Si and X uniquely define each other conditioned on E) h H (S | Π , P = P, I = i ) i SC i i = E t P ∼ Pi | I = i (E is defined as (Pi = P, I = i )) H (θ | ΠSC , T, Pi , I = i ) = = E t H (Si | ΠSC , Pi , I = i ) t Finally, by Remark 5.2-(1), Si , ΠSC , and Pi are all independent of the event I = i, and hence by Claim 2.1-(6), H (Si | ΠSC , Pi , I = i ) = H (Si | ΠSC , Pi ), which concludes the proof. By plugging in the bound from Lemma 5.6 in Equation (7) we have, m ∑ H (Si | ΠSC , Pi ) ≤ H2 (δ + o(1)) · (mt) i=1 25 Finally, by plugging in this bound together with the bound from Equation (6) in Equation (5), we get, m ∑ I (ΠSC ; Si | Pi ) ≥ mt − Θ(m log t) − H2 (δ + o(1)) · (mt) i=1   = 1 − H2 (δ + o(1)) · (mt) − Θ(m log t) e (n/α2 ) and δ < 1/2, hence H2 (δ + o(1)) = 1 − ǫ for some constant Recall that t = k/p = Ω ǫ bounded away from 0. By Lemma 5.3,   e (mn/α2 ) ICδDest (SetCoverest ) = min ICostDest (ΠSC ) = Ω ΠSC To conclude, since the information complexity is a lower bound on the communication complexity (Proposition 2.5), we obtain the following theorem. p Theorem 5. For any constant δ < 1/2, any α = o( n/ log n), and any m = poly(n), e (mn/α2 ) CCδDest (SetCoverest ) = Ω As a corollary of this result, we have that the space complexity of single-pass streaming e (mn/α2 ). 
algorithms for the set cover problem on adversarial streams is Ω 5.3 Extension to Random Arrival Streams We now show that the lower bound established in Theorem 5 can be further strengthened to prove a lower bound on the space complexity of single-pass streaming algorithms in the random arrival model. To do so, we first define an extension of the distribution Dest , denoted by Dext , prove a lower bound for Dext , and then show that how to use this lower bound on the one-way communication complexity to establish a lower bound for the random arrival model. We define the distribution Dext as follows. Distribution Dext . An extension of the hard distribution Dest for SetCoverest . 1. Sample the sets S = {S1 , . . . , Sm , T } in the same way as in the distribution Dest . 2. Assign each set in S to Alice with probability 1/2, and the remaining sets are assigned to Bob. We prove that the distribution Dext is still a hard distribution for SetCoverest . p Lemma 5.7. For any constant δ < 1/8, α = o( n/ log n), and m = poly(n), e (mn/α2 ) CCδDext (SetCoverest ) = Ω Proof. We prove this lemma using a reduction from SetCoverest over the distribution Dest . Let ΠExt be a δ-error protocol over the distribution Dext . Let δ′ = 3/8 + δ; in the following, we create a δ′ -error protocol ΠSC for the distribution Dest (using ΠExt as a subroutine). Consider an instance of the problem from the distribution Dest . Define a mapping σ : [m + 1] 7→ S such that for i ≤ m, σ(i) = Si and σ(m + 1) = T. Alice and Bob use public randomness to partition the set of integers [m + 1] between each other, assigning each number 26 in [m + 1] to Alice (resp. to Bob) with probability 1/2. Note that by Remark 5.5, we may assume that Bob knows the special index i ∗ . Consider the random partitioning of [m + 1] done by the players. If i ∗ = σ−1 (Si∗ ) is assigned to Bob, or m + 1 = σ−1 ( T ) is assigned to Alice, Bob always outputs 2. Otherwise, Bob samples one set from F for each j assigned to him independently and uniformly at random and treat these sets plus the set T as his “new input”. Moreover, Alice discards the sets S j = σ( j), where j is assigned to Bob and similarly treat the remaining set as her new input. The players now run the protocol ΠExt over this distribution and Bob outputs the estimate returned by ΠExt as his estimate of the set cover size. Let R denote the randomness of the reduction (excluding the inner randomness of ΠExt ). Define E as the event that in the described reduction, i ∗ is assigned to Alice and m + 1 is assigned to Bob. Let Dnew be the distribution of the instances over the new inputs of Alice and Bob (i.e., the input in which Alice drops the sets assigned to Bob, and Bob randomly generates the sets assigned to Alice) when E happens. Similarly, we define Eb to be the event that in the distribution Dext , Si∗ is assigned to Alice and T is assigned to Bob. It is straightforward to verify that Dnew = (Dext | Eb). We now have, Pr Dest ,R         1 ΠSC errs = · Pr E + Pr E · Pr ΠExt errs R 2 R Dnew (since Bob outputs 2 when E , he will succeed with probability 1/2)       1 = · Pr E + Pr E · Pr ΠExt errs | Eb (Dnew = (Dext | Eb)) R 2 R Dext       Pr Π errs Dext Ext 1   ≤ · Pr E + Pr E · R 2 R PrDext Eb     1 (PrR (E ) = PrDext (Eb)) = · Pr E + Pr ΠExt errs 2 R Dext 3 ≤ +δ (PrR (E ) = 3/4) 8 Finally, since δ < 1/8, we obtain a (1/2 − ǫ)-error protocol (for some constant ǫ bounded away from 0) for the distribution Dest . The lower bound now follows from Theorem 5. 
We can now prove the lower bound for the random arrival model. Proof of Theorem 4. Suppose A is a randomized single-pass streaming algorithm satisfying the conditions in the theorem statement. We use A to create a δ-error protocol SetCoverest over the distribution Dext with parameter δ = 0.1 < 1/8. Consider any input S in the distribution Dext and denote the sets given to Alice by S A and the sets given to Bob by SB . Alice creates a stream created by a random permutation of S A denoted by s A , and Bob does the same for SB and obtains sB . The players can now compute A (hs A , sB i) to estimate the set cover size and the communication complexity of this protocol equals the space complexity of A. Moreover, partitioning made in the distribution Dext together with the choice of random permutations made by the players, ensures that hs A , sB i is a random permutation of the original set S . Hence, the probability that A fails to output an α-estimate of the set cover problem is at most δ = 0.1. The lower bound now follows from Lemma 5.7. 27 6 An Ω(mn/α)-Space Lower Bound for Deterministic α-Estimation In Section 4, we provided a randomized algorithm for α-estimating the set cover problem, with an (essentially) optimal space bound (as we proved in Section 5); here, we establish that randomization is crucial for achieving better space bound for α-estimation: any deterministic α-estimation algorithm for the set cover problem requires Ω( mn α ) bits of space. In other words, α-approximation and α-estimation are as hard as each other for deterministic algorithms. Formally, q Theorem 6. For any α = o( logn n ) and m = poly(n), any deterministic α-estimation single-pass streaming algorithm for the set cover problem requires Ω(mn/α) bits of space. Before continuing, we make the following remark. The previous best lower bound for deterministic algorithm asserts that any O(1)-estimation algorithm requires Ω(mn) bits of space even allowing constant number of passes [10]. This lower bound can be stated more generally in terms of its dependence on α: for any α ≤ log n − O(log log n), any deterministic α-estimation algorithm requires space of Ω(mn/α · 2α ) bits (see Lemma 11 and Theorem 3 in [10]). Our bound in Theorem 6 provide an exponential improvement over this bound for single-pass algorithms. Recall the definition of SetCoverest from Section 5 for a fixed choice of parameters n, m, and α. We define the following hard input distribution for deterministic protocols of SetCoverest . Distribution Ddet . A hard input distribution for deterministic protocols of SetCoverest . Notation. Let F be the collection of all subsets of [n] with cardinality n/10α. • Alice. The input to Alice is an m-subset S ⊆ F chosen uniformly at random. • Bob. Toss a coin θ ∈ R {0, 1}; let the input to Bob be the set T, where if θ = 0, T ∈ R S and otherwise, T ∈ R F \ S . We stress that distribution Ddet can only be hard for deterministic protocols; Alice can simply run a randomized protocol for checking the equality of each Si with T (independently for each Si ∈ S ) and since the one-way communication complexity of the Equality problem with public randomness is O(1) (see, e.g., [21]), the total communication complexity of this protocol is O(m). However, such approach is not possible for deterministic protocols since deterministic communication complexity of the Equality problem is linear in the input size (again, see [21]). q The following lemma can be proven similar to Lemma 5.1 (assuming that α = o( logn n )). Lemma 6.1. 
For (S , T ) ∼ Ddet : (i) Pr (opt (S , T ) = 2 | θ = 0) = 1. (ii) Pr (opt (S , T ) > 2α | θ = 1) = 1 − 2−Ω(n/α) . To prove Theorem 6, we use a reduction from the Sparse Indexing problem of [26]. In SparseIndexing kN , Alice is given a set S of k elements from a universe [ N ] and Bob is given an element e ∈ [ N ]; Bob needs to output whether or not e ∈ S. The crucial property of this problem that we need in our proof is that the communication complexity of Sparse Indexing depends on the success probability required from the protocol. The following lemma is a restatement of Theorem 3.2 from [26]. Lemma 6.2 ([26]). Consider the following distribution DSI for SparseIndexing kN : 28 • Alice is given a k-subset S ⊆ [ N ] chosen uniformly at random. • With probability half Bob is given an element e ∈ S uniformly at random, and with probability half Bob receives a random element e ∈ [ N ] \ S. For any δ < 1/3, the communication complexity of any deterministic δ-error protocol over distribution DSI is Ω(min {k log(1/δ), k log( N/k)}) bits. We now show that how a deterministic protocol for SetCoverest can be used to solve the hard distribution of SparseIndexing kN described in the previous lemma. Notice that there is a simple one-to-one mapping φ from an instance (S, e) of SparseIndexing kN to an instance (S , T ) of SetCoverest with parameters n and m where N = ( nn ) and k = m. Fix 10α any arbitrary bijection σ : [ N ] 7→ F ; we map S to the collection S = {σ( j) | j ∈ S} and map e to the set T = [n] \ σ(e). Under this transformation, if we choose (S, e) ∼ DSI the SetCoverest distribution induced by φ(S, e) is equivalent to the distribution Ddet . Moreover, the answer of (S, e) for the SparseIndexingkN problem is YES iff θ = 0 in (S , T ). We now proceed to the proof of Theorem 6. Proof of Theorem 6. Let ΠSC be a deterministic protocol for the SetCoverest problem; we use ΠSC to design a δ-error protocol ΠSI for SparseIndexing kN on the distribution DSI for δ = 2−Ω(n/α) . The lower bound on message size of ΠSC then follows from Lemma 6.2. Protocol ΠSI works as follows. Consider the mapping φ from (S, e) to (S , T ) defined before. Given an instance (S, e) ∼ DSI , Alice and Bob can each compute their input in φ(S, e) without any communication. Next, they run the protocol ΠSC (S , T ) and if the set cover size is less than 2α Bob outputs YES and otherwise outputs NO. We argue that ΠSI is a δ-error protocol for SparseIndexing kN on the distribution DSI . To see this, recall that if (S, e) ∼ DSI then (S , T ) ∼ Ddet . Let E be an event in the distribution Ddet , where θ = 1 but opt < 2α. Note that since ΠSC is a deterministic α-approximation protocol and never errs, the answer for ΠSI would be wrong only if E happens. We have,     ΠSI (S, e) errs ≤ Pr E = 2−Ω(n/α) Pr ( S,e )∼DSI (S ,T )∼Ddet where the equality is by Lemma 6.1. This implies that ΠSI is a δ-error protocol for the distribution DSI . By Lemma 6.2, kΠSC k = kΠSI k = Ω(min(k · log 1/δ, k log( N/k))) = Ω(mn/α) As one-way communication complexity is a lower bound on the space complexity of singlepass streaming algorithm we obtain the final result. Acknowledgements We thank Piotr Indyk for introducing us to the problem of determining single-pass streaming complexity of set cover, and the organizers of the DIMACS Workshop on Big Data through the Lens of Sublinear Algorithms (August 2015) where this conversation happened. We are also thankful to anonymous reviewers for many valuable comments. 
29 References [1] Alon, N., Matias, Y., and Szegedy, M. The space complexity of approximating the frequency moments. In STOC (1996), ACM, pp. 20–29. [2] Bar-Yossef, Z., Jayram, T. S., Kumar, R., and Sivakumar, D. An information statistics approach to data stream and communication complexity. In 43rd Symposium on Foundations of Computer Science (FOCS 2002), 16-19 November 2002, Vancouver, BC, Canada, Proceedings (2002), pp. 209–218. [3] Bar-Yossef, Z., Jayram, T. S., Kumar, R., and Sivakumar, D. Information theory methods in communication complexity. In Proceedings of the 17th Annual IEEE Conference on Computational Complexity, Montréal, Québec, Canada, May 21-24, 2002 (2002), pp. 93–102. [4] Barak, B., Braverman, M., Chen, X., and Rao, A. How to compress interactive communication. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010 (2010), pp. 67–76. [5] Chakrabarti, A., Cormode, G., and McGregor, A. Robust lower bounds for communication and stream computation. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008 (2008), pp. 641– 650. [6] Chakrabarti, A., Shi, Y., Wirth, A., and Yao, A. C. Informational complexity and the direct sum problem for simultaneous message complexity. In 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, 14-17 October 2001, Las Vegas, Nevada, USA (2001), pp. 270–278. [7] Chakrabarti, A., and Wirth, T. Incidence geometries and the pass complexity of semistreaming set cover. CoRR, abs/1507.04645. To appear in Symposium on Discrete Algorithms (SODA) (2016). [8] Cormode, G., Karloff, H. J., and Wirth, A. Set cover algorithms for very large datasets. In Proceedings of the 19th ACM Conference on Information and Knowledge Management, CIKM 2010, Toronto, Ontario, Canada, October 26-30, 2010 (2010), pp. 479–488. [9] Cover, T. M., and Thomas, J. A. Elements of information theory (2. ed.). Wiley, 2006. [10] Demaine, E. D., Indyk, P., Mahabadi, S., and Vakilian, A. On streaming and communication complexity of the set cover problem. In Distributed Computing - 28th International Symposium, DISC 2014, Austin, TX, USA, October 12-15, 2014. Proceedings (2014), pp. 484– 498. [11] Dinur, I., and Steurer, D. Analytical approach to parallel repetition. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014 (2014), pp. 624–633. [12] Emek, Y., and Rosén, A. Semi-streaming set cover - (extended abstract). In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part I (2014), pp. 453–464. [13] Feige, U. A threshold of ln n for approximating set cover. J. ACM 45, 4 (1998), 634–652. [14] Feigenbaum, J., Kannan, S., McGregor, A., Suri, S., and Zhang, J. On graph problems in a semi-streaming model. Theor. Comput. Sci. 348, 2-3 (2005), 207–216. 30 [15] Har-Peled, S., Indyk, P., Mahabadi, S., and Vakilian, A. Towards tight bounds for the streaming set cover problem. To appear in PODS (2016). [16] Impagliazzo, R., and Kabanets, V. Constructive proofs of concentration bounds. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 13th International Workshop, APPROX 2010, and 14th International Workshop, RANDOM 2010, Barcelona, Spain, September 1-3, 2010. Proceedings (2010), pp. 617–631. [17] Indyk, P., Mahabadi, S., and Vakilian, A. Towards tight bounds for the streaming set cover problem. 
CoRR abs/1509.00118 (2015). [18] Jayram, T. S., and Woodruff, D. P. Optimal bounds for johnson-lindenstrauss transforms and streaming problems with sub-constant error. In Proceedings of the TwentySecond Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011 (2011), pp. 1–10. [19] Johnson, D. S. Approximation algorithms for combinatorial problems. J. Comput. Syst. Sci. 9, 3 (1974), 256–278. [20] Kapralov, M., Khanna, S., and Sudan, M. Approximating matching size from random streams. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014 (2014), pp. 734–751. [21] Kushilevitz, E., and Nisan, N. Communication complexity. Cambridge University Press, 1997. [22] Lund, C., and Yannakakis, M. On the hardness of approximating minimization problems. J. ACM 41, 5 (1994), 960–981. [23] Muthukrishnan, S. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science 1, 2 (2005). [24] Nisan, N. The communication complexity of approximate set packing and covering. In Automata, Languages and Programming, 29th International Colloquium, ICALP 2002, Malaga, Spain, July 8-13, 2002, Proceedings (2002), pp. 868–875. [25] Panconesi, A., and Srinivasan, A. Randomized distributed edge coloring via an extension of the chernoff-hoeffding bounds. SIAM J. Comput. 26, 2 (1997), 350–368. [26] Saglam, M. Tight bounds for data stream algorithms and communication problems. PhD thesis, Simon Fraser University, 2011. [27] Saha, B., and Getoor, L. On maximum coverage in the streaming model & application to multi-topic blog-watch. In Proceedings of the SIAM International Conference on Data Mining, SDM 2009, April 30 - May 2, 2009, Sparks, Nevada, USA (2009), pp. 697–708. [28] Slavík, P. A tight analysis of the greedy algorithm for set cover. J. Algorithms 25, 2 (1997), 237–254. 31 A A Lower Bound for ICδDIndex (Indexnk ) In this section we prove Lemma 3.4. As stated earlier, this lemma is well-known and is proved here only for the sake of completeness. Proof of Lemma 3.4. Consider the following “modification” to the distribution DIndex . Suppose we first sample a 2k-set P ⊆ [n], and then let the input to Alice be a k-set A ⊂ P chosen uniformly at random, and the input to Bob be an element a ∈ P chosen uniformly at random. It is an easy exercise to verify that this modification does not change the distribution DIndex . However, this allows us to analyze the information cost of any protocol ΠIndex over DIndex easier. In particular, ICostDIndex (ΠIndex ) = I ( A; ΠIndex ) ≥ I ( A; ΠIndex | P ) = H ( A | P ) − H ( A | ΠIndex , P ) = 2k − Θ(log k) − H ( A | ΠIndex , P ) where the inequality holds since (i ) H (ΠIndex ) ≥ H (ΠIndex | P ) and (ii ) H (ΠIndex | A) = H (ΠIndex | A, P ) as ΠIndex is independent of P conditioned on A. We now bound H ( A | ΠIndex , P ). Define θ ∈ {0, 1} where θ = 1 iff a ∈ A. Note that a δ′ -error protocol for Index is also a δ′ -error (randomized) estimator for θ (given the message ΠIndex , the public randomness used along with the message, and the element a). Hence, using Fano’s inequality (Claim 2.2), H2 (δ′ ) ≥ H (θ | ΠIndex , a) ≥ H (θ | ΠIndex , a, P ) (conditioning decreases entropy, Claim 2.1-(3)) h i H (θ | ΠIndex , a = a, P = P) = E E P ∼ P a∼ a |P= P Conditioned on P = P, a is chosen uniformly at random from P. Define X := ( X1 , . . . , X2k ), where Xi = 1 if i-th element in P belongs to A and Xi = 0 otherwise. 
Using this notation, we have θ = X_i conditioned on P = P and a = a being the i-th element of P. Hence,

\[
\begin{aligned}
H_2(\delta') &\ge \mathbb{E}_{P}\Bigl[ \sum_{i=1}^{2k} \tfrac{1}{2k}\, H(X_i \mid \Pi_{\mathrm{Index}}, a = a, P = P) \Bigr]\\
&= \mathbb{E}_{P}\Bigl[ \tfrac{1}{2k} \sum_{i=1}^{2k} H(X_i \mid \Pi_{\mathrm{Index}}, P = P) \Bigr] && \text{($X_i$ and $\Pi_{\mathrm{Index}}$ are independent of $a = a$ conditioned on $P = P$, and Claim 2.1-(6))}\\
&\ge \tfrac{1}{2k}\, \mathbb{E}_{P}\bigl[ H(A \mid \Pi_{\mathrm{Index}}, P = P) \bigr] && \text{(sub-additivity of the entropy, Claim 2.1-(4))}\\
&= \tfrac{1}{2k}\, H(A \mid \Pi_{\mathrm{Index}}, P).
\end{aligned}
\]

Consequently, H(A | Π_Index, P) ≤ 2k · H_2(δ′). Finally, since δ′ < 1/2 and hence H_2(δ′) = 1 − ǫ for some ǫ bounded away from 0, we have

\[
\mathrm{ICost}_{D_{\mathrm{Index}}}(\Pi_{\mathrm{Index}}) \ge 2k \cdot (1 - H_2(\delta')) - \Theta(\log k) = \Omega(k).
\]
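For completeness, the identity H(A | P) = 2k − Θ(log k) used in the first display above can be checked directly: conditioned on P, the set A is uniform among the k-subsets of the 2k-element set P, and a standard Stirling estimate (recorded here only as a sanity check, not part of the original argument) gives the constant:

\[
H(A \mid P) = \log_2 \binom{2k}{k}, \qquad
\binom{2k}{k} = \Theta\!\left(\frac{4^k}{\sqrt{k}}\right)
\;\Longrightarrow\;
\log_2 \binom{2k}{k} = 2k - \tfrac{1}{2}\log_2 k + O(1) = 2k - \Theta(\log k).
\]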
arXiv:1704.07041v1 [math.GT] 24 Apr 2017 ON VIRTUAL PROPERTIES OF KÄHLER GROUPS STEFAN FRIEDL AND STEFANO VIDUSSI Abstract. This paper stems from the observation (arising from work of T. Delzant) that “most” Kähler groups virtually algebraically fiber, i.e. admit a finite index subgroup that maps onto Z with finitely generated kernel. For the remaining ones, the Albanese dimension of all finite index subgroups is at most one, i.e. they have virtual Albanese dimension one. We show that the existence of (virtual) algebraic fibrations has implications in the study of coherence and of higher BNSR invariants of the fundamental group of aspherical Kähler surfaces. The class of Kähler groups of virtual Albanese dimension one contains groups commensurable to surface groups. It is not hard to give further (albeit unsophisticated) examples; however, groups of this class exhibit strong similarities with surface groups. In fact, we show that its only virtually residually finite Q–solvable (vRFRS) elements are commensurable to surface groups, and we show that their Green–Lazarsfeld sets (virtually) coincide with those of surface groups. 1. Introduction This paper is devoted to the study of some virtual properties of Kähler groups, i.e. fundamental groups of compact Kähler manifolds. Recall that if P is a property of groups, we say that a group G is virtually P if a finite index subgroup H ≤f G is P. This paper stems from the desire to understand if some of the virtual properties of fundamental groups of irreducible 3–manifolds with empty or toroidal boundary, that have recently emerged from the work of Agol, Wise [Ag13, Wi09, Wi12] and their collaborators, have a counterpart for Kähler groups. Admittedly, there is no a priori geometric reason to expect any analogy. However this viewpoint seems to be not completely fruitless: for example in [FV16] we investigated consequences of the fact that both classes of groups satisfy a sort of “relative largeness” property, namely that any epimomorphism φ : G → Z with infinitely generated kernel virtually factorizes through an epimorphism to a free nonabelian group. (This is a property that is nontrivial to prove for both classes.) In this paper we study, in a sense, the opposite phenomenon, namely the existence of epimorphisms φ : G → Z with finitely generated kernel. Or, stated otherwise, whether G is an extension of Z by a finitely generated subgroup. This condition has been conveniently referred to in [JNW17] by saying that G algebraically fibers and here we will adhere to that terminology. Again, thanks to the recent results of Date: April 25, 2017. 1 2 STEFAN FRIEDL AND STEFANO VIDUSSI Agol, Wise and collaborators (see [AFW15] for accurate statements and references) the emerging picture is that “most” freely indecomposable 3–manifold groups (e.g. hyperbolic groups) virtually algebraically fiber. (By Stallings Theorem [Sta62], this is actually equivalent to the fact that the underlying 3–manifold is virtually a surface bundle over S 1 .) This result has triggered recent interest in the study of (virtual) algebraic fibration for various classes of groups, and relevant results have appeared, including during the preparations of this manuscript, see [FGK17, JNW17]. We have tasked ourselves with the purpose of understanding virtual algebraic fibrations in the realm of Kähler groups. 
The first result is little more than a rephrasing of Delzant’s results on the Bieri– Neumann–Strebel invariants of Kähler groups ([De10]), and is possibly known at least implicitly to those familiar with that result. It asserts that the “generic” Kähler group virtually algebraically fibers, and more importantly gives a geometric meaning to latter notion. To state this result, recall that the Albanese dimension a(X) ≥ 0 of a Kähler manifold is defined as the complex dimension of the image of X under the Albanese map Alb. We define the virtual Albanese dimension va(X) to be the supremum of the Albanese dimension of all finite covers of X. (This definition replicates that of virtual first Betti number vb1 , that will be as well of use in what follows.) The property of having Albanese dimension equal to zero or equal to one is determined by the fundamental group G = π1 (X) alone (see e.g. Proposition 2.1); because of that, it makes sense to talk of (virtual) Albanese dimension of a Kähler group as an element of {0, 1, > 1}. With this in mind we have the following: Theorem A. Let G be a Kähler group. Then either G virtually algebraically fibers, or va(G) ≤ 1. This statement is, in essence, an alternative: the only intersection are groups G that have a finite index subgroup H ≤f G with b1 (H) = vb1 (G) = 2 such that the commutator subgroup [H, H] is finitely generated. (Such a H appears as fundamental group of a genus 1 Albanese pencil without multiple fibers.) We could phrase this theorem as an alternative, but the form above comes naturally from its proof, and fits well with what follows. Theorem A kindles some interest in identifying the class of Kähler groups of virtual Albanese dimension at most one, and in what follow we summarize what we know about this class. To start, a Kähler group has va(G) = 0 if and only if all its finite index subgroup have finite abelianization, or equivalently vb1 (G) = 0. All finite groups fall in this class, but importantly for us, there exist infinite examples as well: a noteworthy one is Sp(2n, Z) for n ≥ 2 (see [To90]). The class of Kähler groups with va(G) = 1 contains some obvious examples, namely all surface groups (i.e. fundamental groups of compact Riemann surfaces of positive genus); a moment’s thought shows that the same is true for all groups that are commensurable (in the sense of Gromov, see [Gr89] or Section 3) to surface groups. These ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 3 groups do not exhaust the class of groups with va(G) = 1: an easy method to build further examples comes by taking the product of a surface group with Sp(2n, Z), n ≥ 2, by which we obtain Kähler groups with va(G) = 1 but not commensurable with surface groups. That said, we are not aware of any subtler construction, that does not hinge on the existence of Kähler groups with vb1 = 0, and it would interesting to decide if such constructions exist. In particular we analyze in Section 3 the implications of the existence of (virtual) algebraic fibrations in the context of aspherical Kähler surfaces. This allows us to partly recast and refine some results about coherence of their fundamental group, which appear in [Ka98, Ka13, Py16]. (Recall that a group is coherent if all its finitely generated subgroups are finitely presented.) Combining these results with ours yields the following: Theorem B. 
Let G be a group with b1 (G) > 0 which is the fundamental group of an aspherical Kähler surface X; then G is not coherent, except for the case where it is virtually the product of Z2 by a surface group, and perhaps for the case where X is finitely covered by a Kodaira fibration of virtual Albanese dimension one. (A Kodaira fibration is a smooth non-isotrivial pencil of curves.) We are not aware of the existence of Kodaira fibrations of virtual Albanese dimension one (Question 3.2). The proof of the theorem above actually entails the existence of Kähler groups whose second Bieri-Neumann-Strebel-Renz invariant is strictly contained in the first (see Lemma 3.4) and are not a direct product. In fact, the properties of the Albanese map give a tight relation between Kähler groups with va(G) = 1 and surface groups. In fact, the underlying Kähler manifold (virtually) admits the following form: if a Kähler manifold X has Albanese dimension one, the Albanese map f : X → Σ (after restricting its codomain to the image) is a genus g = q(X) pencil. (Here q(X) := 12 b1 (X) denotes the irregularity of a Kähler manifold X.) When va(G) = 1, the Albanese pencil lifts to an Albanese pencil e →Σ e for all finite covers X e → X. f˜: X If we impose to G = π1 (X) residual properties that mirror those of 3–manifold groups, we obtain a refinement to Theorem A that can be thought of as an analogue (with much less work on our side) to Agol’s virtual fiberability result ([Ag08]) for 3–manifold groups that are virtually RFRS (see Section 3 for the definition). Theorem C. Let G be a Kähler group that is virtually RFRS. Then either G virtually algebraically fibers, or is commensurable to a surface group. Turning this theorem on its head, examples of Kähler groups with va(G) ≤ 1 that are not commensurable to surface groups are certified not to be virtually RFRS. To date, the infinite Kähler groups known to be RFRS are subgroups of the direct product of surface groups and abelian groups. This is a remarkable but not yet completely understood class of Kähler groups: in [DG05, 7.5 sbc–Corollary] some conditions for a Kähler group to be virtually of this type are presented. 4 STEFAN FRIEDL AND STEFANO VIDUSSI In general, the relation between X and Σ induced by the Albanese pencil f : X → Σ turns out to be much stronger than the isomorphism of the first cohomology groups. In fact, up to going to a finite index subgroup if necessary, the Green–Lazarsfeld sets of their fundamental groups coincide. Given an Albanese pencil, we refer to the induced map in homotopy f : G → Γ (where G := π1 (X) and Γ := π1 (Σ)) as Albanese map as well. We have the following: Theorem D. Let G be a group with va(G) = 1. After going to a finite index normal subgroup if necessary, the Albanese map f : G → Γ induces an isomorphism of the Green–Lazarsfeld sets ∼ = fb: Wi (Γ) −→ Wi (G). Structure of the paper. Section 2 discusses some preliminary results on the Albanese dimension of Kähler manifolds and groups, as well as the proof of Theorem A. Section 3 is devoted to the study of groups of virtual Albanese dimension one, and contains the proofs of Theorems B, C and D. In order to keep the presentation reasonably self–contained, we included some fairly classical results, for which we could not find a formulation in the literature suitable for our purposes. For the same reason, we describe – hopefully with accurate and appropriate attribution – more recent work that is germane to the purposes of this paper. Conventions. 
All manifolds are assumed to be compact, connected and orientable, unless we say otherwise.

Acknowledgment. The authors want to thank Thomas Delzant, Dieter Kotschick, and Mahan Mj for their comments and clarifications to previous versions of this paper. Also, they gratefully acknowledge the support provided by the SFB 1085 ‘Higher Invariants’ at the University of Regensburg, funded by the Deutsche Forschungsgemeinschaft (DFG).

2. Albanese dimension and algebraic fibrations

We start with some generalities on pencils on Kähler manifolds and the properties of their fundamental group. The reader can find in the monograph [ABCKT96] a detailed discussion of Kähler groups, and the rôle they play in determining pencils on the underlying Kähler manifolds. Given a genus g pencil on X (i.e. a surjective holomorphic map with connected fibers f : X → Σ to a surface with g = g(Σ)) we can consider the homotopy–induced epimorphism f : π1(X) → π1(Σ). In presence of multiple fibers we have a factorization f : π1(X) → π1^orb(Σ) → π1(Σ) through a further epimorphism onto π1^orb(Σ), the orbifold fundamental group of Σ associated to the pencil f, with orbifold points and multiplicities corresponding to the multiple fibers of the pencil. Throughout the paper we write G := π1(X) and Γ := π1^orb(Σ). The factor epimorphism, that by slight abuse of notation we denote as well as f : G → Γ, has finitely generated kernel, so we have the short exact sequence of finitely generated groups

1 → K → G → Γ → 1

(see e.g. [Cat03] for details of the above). We have the following two results about Kähler manifolds of (virtual) Albanese dimension one. These are certainly well–known to the experts (at least implicitly), and we provide proofs for completeness.

Proposition 2.1. Let X be a Kähler manifold and let G = π1(X) be its fundamental group. If X has Albanese dimension a(X) ≤ 1, any Kähler manifold with isomorphic fundamental group has the same Albanese dimension as X.

Proof. The case where a(X) = 0 corresponds to manifolds with vanishing irregularity q(X) = ½ b1(X), so it is determined by the fundamental group alone. Next, consider a Kähler manifold X with positive irregularity. For any such X, the genus g(X) is defined as the maximal rank of submodules of H^1(X) isotropic with respect to the cup product. In [ABCKT96, Chapter 2] it is shown that g(X) is in fact an invariant of the fundamental group alone. As the cup product is nondegenerate, there is a bound

g(X) ≤ q(X) = ½ b1(X) = ½ b1(G).

By Catanese’s version of the Castelnuovo–de Franchis theorem ([Cat91]), the case where X has Albanese dimension one occurs exactly for g(X) = q(X). By the above, this equality is determined by the fundamental group alone. □

Based on the observations above, we will refer to the Albanese dimension of a Kähler group as an element of the set {0, 1, > 1}. Whenever G has Albanese dimension one, the kernel of the map f∗ : H1(G) → H1(Γ) induced by the (homotopy) Albanese map, identified by the Hochschild–Serre spectral sequence with a quotient of the coinvariant homology H1(K)_Γ by a torsion group, is torsion (or equivalently f∗ : H^1(Γ) → H^1(G) is an isomorphism). Note that, by universality of the Albanese map, whenever a Kähler manifold X has a pencil f : X → Σ such that the kernel of f∗ : H1(G) → H1(Γ) is torsion, the pencil is Albanese.

Let π : X̃ → X be a finite cover of X.
Denote by H ≤f G the subgroup associated to this cover; the regular cover of X determined by the normal core N_G(H) = ⋂_{g∈G} g⁻¹Hg ≤f H is a finite cover of X̃ as well. By universality of the Albanese map, the Albanese dimension is nondecreasing when we pass to finite covers. Therefore (as happens with virtual Betti numbers) we can define the virtual Albanese dimension of X in terms of finite regular covers. Given an epimorphism onto a finite group α : π1(X) → S we have the commutative diagram (with self–defining notation)

(1)    1 → ∆ → H → Λ → 1
       1 → K → G → Γ → 1
       1 → α(K) → S → S/α(K) → 1

with exact rows and exact columns, where the vertical maps from the first row to the second are the inclusions ∆ ≤ K, H ≤ G, Λ ≤ Γ, and those from the second row to the third are induced by α.

Denote by X̃ and Σ̃ the induced covers of X and Σ respectively (so that π1(X̃) = H and π1(Σ̃) = Λ). There exists a pencil f̃ : X̃ → Σ̃, which is a lift of f : X → Σ; in homotopy, this corresponds to the epimorphism f̃ : H → Λ := π1^orb(Σ̃) appearing in (1) above. In the next proposition, we illustrate the fact that when X has Albanese dimension one, its Albanese pencil f : X → Σ is the only irrational pencil of X, up to holomorphic automorphisms of Σ.

Proposition 2.2. Let X be a Kähler manifold with a(X) = 1. Then the Albanese pencil f : X → Σ is the unique irrational pencil on X up to holomorphic automorphism of the base. Moreover, if X satisfies va(X) = 1, any rational pencil has orbifold base with finite orbifold fundamental group.

Proof. By assumption the Albanese pencil f : X → Σ factorizes the Albanese map Alb. Let g : X → Σ′ be an irrational pencil, and compose it with the Jacobian map j : Σ′ → Jac(Σ′). By universality of the Albanese map we have the commutative diagram in which f : X → Σ is followed by Σ → Alb(X), g : X → Σ′ is followed by j : Σ′ → Jac(Σ′), the universal property induces a map Alb(X) → Jac(Σ′), and h : Σ → Σ′ is the resulting (dashed) map. The map h : Σ → Σ′ is well defined by injectivity of the Jacobian map, and is a holomorphic surjection by universality of the Albanese map. Holomorphic surjections of Riemann surfaces are ramified covers; however, unless the cover is one–sheeted, i.e. h is a holomorphic isomorphism, the fibers of g : X → Σ′ will fail to be connected.

The argument above does not prevent X from having rational pencils. However, if X has also virtual Albanese dimension one, this imposes constraints on the multiple fibers of those pencils. Recall that orbifolds with infinite π1^orb(Σ) are those that are flat or hyperbolic, hence admit a finite index normal subgroup that is a surface group with positive b1 (see [Sc83] for this result and a characterization of these orbifolds in terms of the singular points). Therefore, if X were to admit a rational pencil g : X → Σ′ with infinite π1^orb(Σ′), there would exist an irrational pencil without multiple fibers g̃ : X̃ → Σ̃′ covering g : X → Σ′. We claim that the pencil g̃ : X̃ → Σ̃′ cannot be Albanese: building from the commutative diagram in (1) we obtain a commutative diagram relating the short exact sequences 1 → K → π1(X̃) → π1(Σ̃′) → 1 and 1 → K → π1(X) → π1^orb(Σ′) → 1 together with the induced maps on first homology H1(K) → H1(X̃) → H1(Σ̃′) and H1(K) → H1(X). The subgroup im(H1(K) → H1(X)) ≤ H1(X) has positive rank (it is a finite index subgroup), hence by commutativity the image im(H1(K) → H1(X̃)) ≤ H1(X̃) has positive rank. It follows that the kernel of the epimorphism g̃ : H1(X̃) → H1(Σ̃′) is not torsion, so g̃ : X̃ → Σ̃′ is not Albanese. As X̃ has Albanese dimension one, this is inconsistent with the first part of the statement. □

We are now in a position to prove Theorem A.
In order to do so, it is both practical and insightful to use the Bieri–Neumann–Strebel invariant of a finitely presented group G (henceforth BNS), for which we refer to [BNS87] for definitions and properties used here. This invariant is an open subset Σ1 (G) of the character sphere S(G) := (H 1 (G; R) \ {0})/R>0 of H 1 (G; R). For rational rays, the invariant can be described as follows: a rational ray in S(G) is determined by a primitive class φ ∈ H 1 (G). Given such φ, we can write G as HNN extension G = hA, t|t−1 Bt = Ci for some finitely generated subgroups B, C ≤ A ≤ Ker φ with φ(t) = 1. The extension is called ascending (descending) if A = B (resp. A = C). By [BNS87, Proposition 4.4] the extension is ascending (resp. descending) if and only if the rational ray determined by φ (resp. −φ) is contained in Σ1 (G). We have the following theorem, which applied to the collection of finite index subgroups of G, implies Theorem A: Theorem 2.3. Let G be a Kähler group. The following are equivalent: (1) G algebraically fibers; (2) The BNS invariant Σ1 (G) ⊆ S(G) is nonempty; (3) For any compact Kähler manifold X such that π1 (X) = G either the Albanese map is a genus 1 pencil without multiple fibers, or X has Albanese dimension greater than one. 8 STEFAN FRIEDL AND STEFANO VIDUSSI Before proving this proposition, let us mention that (with varying degree of complexity) it is possible to verify directly that groups commensurable to nonabelian surface groups cannot satisfy any of the three cases above. Proof. We will first show (1) ⇔ (2), and then ¬(2) ⇔ ¬(3). (1) ⇒ (2): If (1) holds we can write the short exact sequence (2) φ 1 → Ker φ → G → Z → 1 with Ker φ finitely generated. Hence G is both an ascending and descending HNN extension. It follows that the rational rays determined by both ±φ ∈ H 1 (G) are contained in Σ1 (G). (2) ⇒ (1): Let us assume that Σ1 (G) is nonempty. As Σ1 (G) is open, we can assume that there exists a primitive class φ ∈ H 1 (G) whose projective class is determined by a rational ray in Σ1 (G). A remarkable fact at this point is that Kähler groups cannot be written as properly ascending or descending extensions, i.e. ascending extensions are also descending and viceversa. (This was first proven in [NR08]; see also [FV16] for a proof much in the spirit of the present paper.) But this is to say that G has the form of Equation (2) with Ker φ finitely generated. To show the equivalence of ¬(2) and ¬(3), we start by recalling Delzant’s description of the BNS invariant of a Kähler group G. Let X be a Kähler manifold with G = π1 (X). The collection of irrational pencils fα : X → Σα such that the orbifold fundamental group Γα := π1orb (Σ) is a cocompact Fuchsian group, is finite up to holomorphic automorphisms of the base (see [De08, Theorem 2]). (In the language of orbifolds, these are the holomorphic orbifold maps with connected fibers from X to hyperbolic Riemann orbisurfaces.) The pencil maps give, in homotopy, a finite collection of epimorphisms with finitely generated kernel fα : G → Γa . Then [De10, Théorème 1.1] asserts that the complement of Σ1 (G) in S(G) (i.e. the set of so–called exceptional characters) is given by S (3) S(G) \ Σ1 (G) = α [fα∗ H 1 (Γα ; R) − {0}], where we use the brackets [·] to denote the image of a subset of (H 1(G; R) \ {0}) in (H 1 (G; R) \ {0})/R>0 . (Note, instead, that genus 1 pencils without multiple fibers do not induce exceptional characters.) 
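As a concrete illustration of formula (3) (this example is not taken from the text; it is a standard case, included here only as a sketch, and the claim that the two projections are the only hyperbolic-base pencils on a product of curves is assumed rather than proved here): for X = Σg × Σh with g, h ≥ 2 and G = π1(X), the pencils with hyperbolic base are, up to automorphisms of the base, the two projections p1, p2, so (3) would read

\[
S(G) \setminus \Sigma^1(G) \;=\; \bigl[\, p_1^* H^1(\pi_1(\Sigma_g);\mathbb{R}) \setminus \{0\} \,\bigr] \;\cup\; \bigl[\, p_2^* H^1(\pi_1(\Sigma_h);\mathbb{R}) \setminus \{0\} \,\bigr].
\]

In particular, any primitive class φ ∈ H^1(G) restricting nontrivially to both factors has [±φ] ∈ Σ1(G), hence finitely generated kernel; this is consistent with Theorem 2.3, since here a(X) = 2.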
¬(3) ⇒ ¬(2): The negation of (3) asserts that X has either Albanese dimension zero, in which case S(G) is empty, or it has an Albanese pencil f : X → Σ with Γ := π1orb (Σ) cocompact Fuchsian, in which case f ∗ H 1 (Γ; R) = H 1 (G; R). In either case Σ1 (G) is empty, i.e. ¬(2) holds. ¬(2) ⇒ ¬(3): If the set Σ1 (G) ⊆ S(G) is empty, either S(G) is empty (i.e. b1 (G) = 0) whence G has Albanese dimension zero, or by Equation (3) there exists an irrational pencil f : X → Σ, with Γ := π1orb (Σ) cocompact Fuchsian, inducing an isomorphism f ∗ : H 1 (Γ; R) → H 1 (G; R). (The union of finitely many proper vector subspaces of H 1 (G; R) cannot equal H 1 (G; R).) Such a pencil is then the Albanese pencil of X.  ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 9 In our understanding, Kähler manifolds whose Albanese dimension is smaller than their dimension are “nongeneric”, and their study should reduce, through a sort of dimensional reduction induced by the Albanese map, to the study of lower dimensional spaces (see e.g. [Cat91]). In that sense, we think of groups that, together with their finite index subgroups, fail to satisfy the equivalent conditions (1) to (3) of Theorem 2.3 as nongeneric. Examples of Kähler groups with a(G) > 1 abound. It is less obvious to provide examples of Kähler groups that have a jump in Albanese dimension, i.e. a(G) = 1 but va(G) > 1. Before doing so, we can make an observation about the geometric meaning of such occurrence. Given an irrational pencil f : X → Σ, its relative irregularity is defined as qf = q(X) − q(Σ) (where the irregularity of Σ equals its genus). The Albanese pencil occurs exactly when qf = 0. The notion of relative irregularity allows us to tie the notion of virtual Albanese dimension larger than one with the more familiar notion of virtual positive Betti number (or more properly, in Kähler context, virtual irregularity): a Kähler e →Σ e of the group with a(G) = 1 has va(G) > 1 if and only if there is a lift f˜: X Albanese pencil which is irregularly fibered, i.e. qf˜ > 0. A fairly simple class of examples comes from groups of type G = πg × K where πg is the fundamental group of a genus g > 1 surface and K the fundamental group of a hyperbolic orbisurface of genus 0, so that b1 (K) = 0. The group K is Kähler (e.g. it is the fundamental group of an elliptic surface with enough multiple fibers and multiplicity), hence so is G. As H 1 (G; Z) = H 1 (πg ; Z), the Albanese dimension of G must be one. On the other hand, K has a finite index subgroup that is the fundamental group of a genus h > 1 surface, hence G is virtually πg × πh . The algebraic surface Σg × Σg has Albanese dimension 2, so by Proposition 2.1 the virtual Albanese dimension of G is greater than one. (Group theoretically, this can be seen as consequence of [BNS87, Theorem 7.4], which asserts that for cartesian products of groups with positive b1 the BNS invariant is nonempty.) Less trivial examples with a(G) = 1 but va(G) > 1 come from bielliptic surfaces. These possess an Albanese pencil of genus 1 without multiple fibers, but their fundamental groups are virtually Z4 , hence have virtual Albanese dimension 2 (see [BHPV04, Section V.5]). More sophisticated examples of Kähler surfaces (hence groups) with a(G) = 1 that are finitely covered by the product of curves of genera bigger than one are discussed in [Cat00, Theorem F]. 
The most interesting class of examples we are aware of, however, comes from the recent paper of Stover ([Sto15, Theorem 3]) that gives examples of Kähler groups that arise as cocompact arithmetic lattices in P U(n, 1) for which a(G) = 1; Stover proves that those groups are virtually extensions of Z by a finitely generated group, and application of Theorem 2.3 yields to the conclusion that va(G) > 1. 10 STEFAN FRIEDL AND STEFANO VIDUSSI 3. Groups with Virtual Albanese Dimension One In this section, we will discuss groups with va(G) = 1. Familiar examples of Kähler groups with va(G) = 1 are given by surface groups, and other simple examples arise as follows. Following Gromov ([Gr89]) we say that a group G is commensurable to a surface group if it admits a (normal) finite index subgroup H ≤f G that admits an epimorphism f : H → Γ to a surface group Γ with finite kernel, namely for which there exists an exact sequence 1→F →H→Γ→1 with Γ a surface group and F finite. The map f : H1 (H) → H1 (Γ) is then an epimorphism with torsion kernel, i.e. f : H → Γ represents, in homotopy, an Albanese map. Quite obviously, each finite index subgroup of G will also be commensurable to a surface group, hence the virtual Albanese dimension of G equals one. We want to analyze the picture so far in comparison with the situation for 3– manifold groups. The class of irreducible 3–manifolds that are not virtually fibered is limited (it is composed entirely by graph manifolds). One may contemplate that, similarly, the Kähler counterpart to that class contains only the obvious candidates, namely manifolds whose fundamental group is commensurable to a surface group. Proposition 3.1 below guarantees that this is not quite the case. The starting point is the existence of infinite Kähler groups (such as Sp(2n, Z), n > 1) with vb1 (G), hence va(G), equal to zero. These do not have a counterpart in dimension 3. Proposition 3.1. Let Γ be the fundamental group of a genus g > 0 surface and let K be an infinite Kähler groups with va(K) = 0; then va(K × Γ) = a(K × Γ) = 1 and K × Γ is not commensurable to a surface group. Proof. The projection map f : K × Γ → Γ is the Albanese map, hence a(K × Γ) = 1. We claim that as vb1 (K) = 0, the virtual Albanese dimension of K × Γ is one. In fact, for any normal subgroup H ≤f K × Γ we have from (1) 1 // ∆ ■■ // ■■ $$ H ◆◆◆ ◆◆◆ '' H1 (∆) 1 //  // K  // K×Γ fe Λ ❍❍ // H1 (H) f // 1 ❍❍ ## // H1 (Λ)  // Γ // 1. As vb1 (K) = 0, H1 (∆) is torsion, hence Ker(H1 (H) → H1 (Λ)) = Im(H1 (∆) → H1 (H)) is torsion as well. It follows that the map fe: H → Λ is, in homotopy, the Albanese map, hence a(H) = 1. Assume by contradiction that K × Γ is commensurable to a surface group, and denote by H ≤f K × Γ a normal subgroup such that 1→F →H→π→1 ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 11 with F finite and π a surface group. As F is finite, Ker(H1 (H) → H1 (π)) is torsion, hence the epimorphism H → π is, in homotopy, the Albanese map. But then it must coincide with the epimorphism fe: H → Λ as described above (perhaps up to an automorphism of Λ). This is not possible, as the former has a finite kernel F , while the latter has kernel ∆, which is infinite as ∆ ≤f K and K is infinite by assumption.  Proposition 3.1 guarantees that the class of Kähler groups that do not virtually admit an epimorphism to Z with finitely generated kernel is more variegated than its counterpart in the 3–manifold world. We should, however, qualify this result. 
The examples of Proposition 3.1 build on the existence of infinite Kähler groups with vb1 = 0, and leverage on the fact that we can take products of finitely presented groups, as the class of Kähler manifolds is closed under cartesian product (and, more generally, holomorphic fiber bundles). Neither of these phenomena has a counterpart in the realm of 3–manifolds. It is perhaps not too greedy to ask for examples of Kähler groups with va(G) = 1 in a realm where simple constructions as the one of Proposition 3.1 are tuned out. As we are about to see, an instance of this occurs in the case of aspherical surfaces: Question 3.2. Does there exist a group G with b1 (G) > 0 that is the fundamental group of an aspherical Kähler surface and does not virtually algebraically fiber? The reason why an example like the one we are after in Question 3.2 would be appealing comes from the fact that, perhaps going to a finite index subgroup, the fundamental group G of an aspherical Kähler surface with va(G) = 1 is a 4–dimensional Poincaré duality group whose Albanese pencil f : X → Σ determines a short exact sequence of finitely generated groups (4) 1 → K → G → Γ → 1, with Γ a surface group and K finitely generated. As first remarked by Kapovich in [Ka98], it is a theorem of Hillman that either K is itself a (nontrivial) surface group, or it is not finitely presented (see e.g. [Hil02, Theorem 1.19]). In either case, a construction like the one in Proposition 3.1 (or even a twist thereof) is excluded. Constructions of this type may not be easy to find. As we mentioned before, the examples of Stover virtually algebraically fiber. The same is true for the Cartwright– Steger surface ([CS10]), another ball quotient with b1 (G) = 2 and whose Albanese map (as shown in [CKY15, Corollary 5.3]) has no multiple fiber: by the discussion in the proof of Theorem 2.3 G itself has BNS invariant Σ1 (G) equal to the entire S(G) and all epimorphisms to Z have finitely generated kernel. (According to the introduction to [Sto15], this fact was known also to Stover and collaborators.) We want now to show how the question above ties with the study of coherence of fundamental groups of aspherical Kähler surfaces, which was initiated in [Ka98, Ka13] using the aforementioned result of Hillman, and further pursued in [Py16]. The outcome of these articles is that, with the obvious exceptions, most aspherical 12 STEFAN FRIEDL AND STEFANO VIDUSSI Kähler surfaces can be shown to have non–coherent fundamental group. (See [Py16, Theorem 4] for a detailed statement, which uses a notation slightly different from ours.) Drawing from the same circle of ideas of these references (as well as a minor extension of the work in [Ko99]) we can prove the following result, which improves on the existing results insofar as it further narrows possible coherent fundamental groups to finite index subgroups of the fundamental group of Kodaira fibrations with virtual Albanese dimension one. (A pencil on a Kähler surface is called a Kodaira fibration if it is smooth and not isotrivial.) Theorem 3.3. Let G be a group with b1 (G) > 0 which is the fundamental group of an aspherical Kähler surface X; then G is not coherent, except for the case where it is virtually the product of Z2 by a surface group, or perhaps for the case where X is finitely covered by a Kodaira fibration of virtual Albanese dimension one. Proof. If G has va(G) > 1, let H ≤f G be a subgroup, corresponding to a finite e of X, which algebraically fibers. 
Let n–cover X (5) φ 1 → Ker φ → H → Z → 1 with Ker φ finitely generated represent an algebraic fibration. By [Hil02, Theorem 4.5(4)] the finitely generated group Ker φ has type F P2 if and only if the Euler e = ne(X) = 0. A finitely presented group has type F P2 . It characteristic e(X) follows that Ker φ ≤ G is finitely generated but not finitely presented, hence G is not coherent, unless e(X) = 0. If e(X) = 0 the classification of compact complex surfaces entails that X admits an irrational pencil with elliptic fibers. As e(X) = 0 the Zeuthen–Segre formula (see e.g. [BHPV04, Proposition III.11.4]) implies that the only singular fibers can be multiple covers of an elliptic fiber, hence X is finitely covered by a torus bundle. An holomorphic fibration with smooth fibers of genus 1 is also isotrivial (i.e. all fibers are isomorphic), namely a holomorphic fiber bundle, see [BHPV04, Section V.14]. We can invoke then [BHPV04, Sections V.5 and V.6] to deduce that some finite cover of X is a product T 2 × Σg . In this case, the fundamental group is virtually Z2 ×Γ, with Γ a surface group. By Theorem B of [BHMS02] a finitely generated subgroup L ≤ Z2 × Γ has a finite index subgroup that is the product of finitely generated subgroups of each factor. As surface groups are coherent, L must be finitely presented, hence in this case the fundamental group of X is coherent. If G has va(G) = 1, perhaps going to a finite index subgroup, G is a 4–dimensional Poincaré duality group whose Albanese pencil f : X → Σ determines as usual the short exact sequence of finitely generated groups 1 → K → G → Γ → 1, with Γ a surface group and K finitely generated. To deal with this case, we relay on the strategy of [Ka98, Ka13]: by the aforementioned theorem of Hillman [Hil02, Theorem 1.19] either K is not finitely presented (and G is not coherent) or it is itself a nontrivial surface group. In the latter case, we can follow the path of [Ko99] (see also [Hil00]) to complete the proof of the statement. We first observe that the surface X, being aspherical, is homotopy equivalent to a smooth 4–manifold M 4 , a surface ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 13 bundle over a surface F ֒→ M → Σ, where π1 (F ) = K, π1 (Σ) = Γ. Note that M is uniquely determined by the short exact sequence of its fundamental group, see [Hil02, Theorem 5.2]. Next, as X and M are homotopy equivalent, they have the same Euler characteristic e(X) = e(M) = e(F ) · e(Σ). At this point, the Zeuthen– Segre formula entails that the only nonsmooth fibers of the Albanese pencil could be multiple covers of an elliptic fiber (in particular, g(F ) = 1). The assumption that K is finitely generated excludes the presence of multiple fibers ([Cat03, Lemma 4.2]). This means that Albanese pencil f : X → Σ is smooth (i.e. a holomorphic fibration of maximal rank), namely X is actually a surface bundle over a surface, in particular it is diffeomorphic to M. If the pencil was isotrivial (i.e. all fibers isomorphic), it would be a holomorphic fiber bundle, and we would conclude as above that some finite cover of X is a product, whence va(X) = 2 and va(G) > 1. The statement follows.  In summary, the existence of non-obvious examples of coherent fundamental groups of aspherical Kähler surfaces hinges on an affirmative answer to Question 3.2. Remarks. 
(1) Note that the proof of the first case of Theorem 3.3 applies verbatim also in the case of aspherical surfaces with a(G) = 1 that algebraically fiber; in particular, this entails the Cartwright–Steger surface and the surfaces described in [Sto15, Theorem 2], [DiCS17, Theorem 1.2] have non–coherent fundamental groups. Those groups are torsion–free lattices G ≤ P U(2, 1), with the surfaces appearing as ball quotient BC2 /G, i.e. complex hyperbolic surfaces. This was implicitly known (for slightly different reasons) also from [Ka98]; we point out that for the argument above we don’t need to invoke [Liu96]. (2) The first examples of Kodaira fibrations, due to Kodaira and Atiyah, actually carry two inequivalent structures of Kodaira fibrations, hence are guaranteed to have Albanese dimension two. By the above, their fundamental group is not coherent. The same result applies for the doubly fibered Kodaira fibrations constructed in [CR09]. It is not difficult to prove the existence of Kodaira fibrations of Albanese dimension one (which, by the Hochschild–Serre spectral sequence, are surface bundles whose coinvariant homology of the fiber H1 (K; Z)Γ has rank zero): in fact, the “generic” Kodaira fibration arising from a holomorphic curve in a moduli space of curves has Albanese dimension one. However, this seems to have no obvious consequences for the discussion above: we ignore if there exist Kodaira fibration of virtual Albanese dimension one. We can add that, even if such surfaces did exist, we cannot decide if their fundamental groups are coherent. For instance, it is not obvious whether they may contain F2 × F2 as subgroup. It is perhaps interesting to flesh out one consequence of the proof of Theorem 3.2 that gives some information on higher BNS–type invariants of some Kähler groups. Precisely, we will consider the homotopical BNSR invariant Σ2 (G) ⊆ Σ1 (G) ⊆ S(G) 14 STEFAN FRIEDL AND STEFANO VIDUSSI introduced in [BR88]. We will not need the definition of these invariant and we will limit ourselves to mention the well–known facts (see e.g. [BGK10, Section 1.3]) that, using the notation preceding Theorem 2.3, given a primitive class φ ∈ H 1 (G), the kernel Ker φ ≤ G is of type F2 (namely, finitely presented) if and only if the rational rays determined by both ±φ ∈ H 1 (G) are contained in Σ2 (G). While we know no way to get complete information on the full invariant Σ2 (G), the ingredients of the proof of Theorem 3.2 is sufficient to entail the following lemma, that per se refines the previous result of non–coherence, and is possibly one of the first results on higher invariants of Kähler groups, besides the case of direct products: Lemma 3.4. The fundamental group G of an aspherical Kähler surface X of strictly positive Euler characteristic with Albanese dimension two has BNSR invariants satisfying the inclusions Σ2 (G) ( Σ1 (G) ⊆ S(G). Proof. The point of this statement is that the first inclusion is strict. For sake of clarity, we review the argument we used in the proof of Theorem 3.2: the condition on the Albanese dimension implies that G algebraically fibers, for some primitive class φ ∈ H 1 (G). As the Euler characteristic of X is strictly positive, Hillman’s Theorem ([Hil02, Theorem 4.5(4)]) entails that Ker φ is not F P2 , nor a fortiori finitely presented.  Note that the same conclusion of the lemma holds, even when a(G) = 1, as long as the fundamental group group algebraically fibers, e.g. for the Cartwright–Steger surface. 
Perhaps more importantly, the corollary applies to the aforementioned Kodaira fibrations defined by Kodaira and Atiyah. Topologically, these are surface bundles over a surface, so that their fundamental groups are nontrivial extension of a surface group by a surface group. Higher BNSR invariants of direct products of surfaces groups are (to an extent) well understood by purely group theoretical reasons. Instead, Lemma 3.4 seems to be the first result of that type for nontrivial extensions, and uses in crucial manner the fact that the group is Kähler: we are not aware of any other means to show that Σ1 (G) is nonempty. Remark. The reader familiar with BNSR invariants may notice that we are just shy of being able to conclude that the fundamental group of aspherical Kähler surfaces of positive Euler characteristic has empty Σ2 (G). We conjecture that this is true. (The conjecture holds true whenever Σ2 (G) = −Σ2 (G).) We mention also that the statement of Lemma 3.4 remains true if we consider the homological BNSR invariant Σ2 (G; Z) of [BR88]. We want to discuss now a result that may prevent the existence of new simple examples of Kähler groups with va(G) = 1, under the assumption that the group satisfies residual properties akin to those holding for most irreducible 3–manifold groups, namely being virtually RFRS (in particular, that holds for the fundamental group of ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 15 irreducible manifolds that have hyperbolic pieces in their geometric decomposition). This class of groups was first introduced by Agol in the study of virtual fibrations of 3–manifold groups: A group G is RFRS if there T exists a filtration {Gi |i ≥ 0} of finite index normal subgroups Gi Ef G0 = G with i Gi = {1} whose successive quotient maps αi : Gi → Gi /Gi+1 factorize through the maximal free abelian quotient: 1 // Gi+1 αi // Gi /Gi+1 Gi ❖❖❖ ❖❖'' ❦ ❦ ❦ 55 H1 (Gi )/Tor // // 1 Subgroups of the direct product of surface groups and abelian groups are virtually RFRS. The largest source of virtually RFRS group we are aware of is given by subgroups of right–angled Artin groups (RAAGs), that are virtually RFRS by [Ag08]. However, this class does not give us new examples, as Py proved in [Py13, Theorem A] that all Kähler groups that are subgroups of RAAGs are in fact virtually subgroups of the product of surface groups and abelian groups. We have the following, that combined with Theorem 2.3 gives Theorem C: Theorem 3.5. Let G be a virtually RFRS Kähler group. Then for any Kähler mane of X with Albanese ifold X such that π1 (X) = G either there exists a finite cover X dimension greater than one, or G is virtually a surface group. Proof. After going to a suitable finite cover can assume that X has RFRS fundamental group G, with associated sequence {Gi }. A nontrivial RFRS group has positive first Betti number. In light of this, it is sufficient to show that if G is a RFRS group with Albanese map f : G → Γ and virtual Albanese dimension one, then it is a surface group. (This implies, because of the initial cover to get G RFRS, the theorem as stated.) Recall that we have a short exact sequence 1 // K // G f // Γ // 1 where Γ can be assumed to be a surface group and K is finitely generated. We claim that if X has virtual Albanese dimension one, then K is actuallyTtrivial, i.e. f is injective. Let γ ∈ G be a nontrivial element; the assumption that i Gi = {1} implies that there exist an index j such that γ ∈ Gj \Gj+1. 
Consider now the diagram (6) 1 // K ∩ Gj fj // Gj ❙❙❙❙ ♦ Γj ❙❙)) ww♦ ♦ αj H1 (Gj )/Tor //  Gj /Gj+1  // 1 uu❦❦❦ 1 where fj : Gj → Γj is the restriction epimorphism between finite index subgroups of G and Γ respectively determined by Gj E G as in the commutative diagram of 16 STEFAN FRIEDL AND STEFANO VIDUSSI (1). This map represents, in homotopy, the pencil fj : Xj → Σj of the cover of X associated to Gj . By assumption, this pencil is Albanese. This entails that there is an isomorphism ∼ = (fj )∗ : H1 (Gj )/Tor −→ H1 (Γj ). Composing the inverse of this isomorphism with the maximal free abelian quotient of Γj gives the map denoted with a dashed arrow in the diagram of Equation (6). By the commutativity of that diagram we deduce that the quotient map αj : Gj → Gj /Gj+1 factors through fj : Gj → Γj . As γ ∈ Gj \ Gj+1 , the image αj (γ) ∈ Gj /Gj+1 is nontrivial, hence so is fj (γ). This implies that f (γ) = fj (γ) ∈ Γ is nontrivial, i.e. f : G → Γ is injective.  The conclusion of this theorem asserts that virtually RFRS Kähler groups of virtual Albanese dimension one are virtually (and not just commensurable to) surface groups. In fact, it is a simple exercise to verify that a residually finite group commensurable to a surface group is virtually a surface group, hence the result is exactly what we should expect. We will finish this section with a result that further ties groups with virtual Albanese dimension one and surface groups, asserting that the Green–Lazarsfeld sets of such groups coincide (up to going to a finite index subgroup) with those of their Albanese image. The Green–Lazarsfeld sets of a Kähler manifold X (and, by extension, of its fundamental group G) are subsets of the character variety of G, the complex algebraic b := H 1 (G; C∗ ) . The Green–Lazarsfeld sets Wi (G) are defined as group defined as G the collection of cohomology jumping loci of the character variety, namely b | rk H 1 (G; Cξ ) ≥ i}, Wi (G) = {ξ ∈ G b nested by the depth i: Wi (G) ⊆ Wi−1 (G) ⊆ . . . ⊆ W0 (G) = G. For Kähler groups the structure of W1 (G) is well–understood. The projective case appeared in [Si93] (that refined previous results of [GL87, GL91]); this result was then extended to the Kähler case in [Cam01] (see also [De08]). Briefly, W1 (G) is the union of a finite set of isolated torsion characters and the inverse image of the Green–Lazarsfeld set of hyperbolic orbisurfaces under the finite collection of pencils of X with hyperbolic base. If X has Albanese dimension one, the Albanese map f : G → Γ induces an epimorphism f∗ : H1 (G) → H1 (Γ). Therefore, we have an induced isomorphism of the connected components of the character varieties containing the trivial character (7) ∼ = b b −→ G1̂ fb: Γ 1̂ where for a group G, we denote the connected component of the character variety b . containing the trivial character 1̂ : G → C∗ as G 1̂ ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 17 The next theorem, a restatement of Theorem D, shows that if X has virtual Albanese dimension one, then after perhaps going to a cover, the map fb restricts to an isomorphism of the Green–Lazarsfeld sets. Theorem 3.6. Let X be a Kähler manifold with va(X) = 1. Up to going to a finite index normal subgroup if necessary, the Albanese map f : G → Γ induces an isomorphism ∼ = fb : Wi (Γ) −→ Wi (G) of the Green–Lazarsfeld sets. Proof. Up to going to a finite cover, we can assume that X admits an Albanese pencil f : X → Σ. 
Moreover, as every cocompact Fuchsian group of positive genus admits a finite index normal subgroup which is a honest surface group, we can also assume that, after going to a further finite cover if necessary, the Albanese pencil doesn’t contain any multiple fibers. In particular, H1 (Σ) will be torsion–free. Without loss of generality, by going to the normal core of the associated finite index subgroup, we can always assume that the cover is regular. Summing up, after possibly going to a finite cover the Albanese map, in homotopy, is an epimorphism f : G → Γ where Γ = π1 (Σ) is a genus g(Γ) surface group. The Green–Lazarsfeld sets Wi (Γ) for a surface group are determined in [Hir97], and are given by  b if 1 ≤ i ≤ 2g(Γ) − 2,  Γ (8) Wi (Γ) = 1̂ if 2g(Γ) − 1 ≤ i ≤ 2g(Γ),  ∅ if i ≥ 2g(Γ) + 1. b surjectivity of f : G → Γ implies by general arguments (see e.g. [Hir97, Given ρ ∈ Γ, Proposition 3.1.3]) that f ∗ : H 1 (Γ; Cρ ) → H 1(G; Cf ∗ (ρ) ) is a monomorphism. But we will actually need more, namely that by [Br02, Theorem 1.1] or [Br03, Theorem 1.8] b is a torsion character. f ∗ is an isomorphism, except perhaps when ρ ∈ Γ b This implies that f : Wi (Γ) → Wi (G) is an injective map, and it will fail to preserve the depth (i.e. dimension of the twisted homology) only for torsion characters. Consider the short exact sequence of groups t b −→ G b −→ 1 −→ G Hom(TorH1 (G); C∗ ) −→ 1, 1̂ b connected to the trivial character 1̂. b refers as above to the component of G where G 1̂ By [GL91, Theorem 0.1] all irreducible positive dimensional components of Wi (G) are inverse images of the Green–Lazarsfeld set of the hyperbolic orbisurfaces. By b ⊆ Wi (G) (for Proposition 2.2, the Albanese pencil is unique, hence fb(Wi (Γ)) = G 1̂ 1 ≤ i ≤ 2q(G) − 2) is the only positive dimensional component, that will occur if (and only if ) q(G) = g(Γ) ≥ 2. We now have the following claim. 18 STEFAN FRIEDL AND STEFANO VIDUSSI Claim. For all i ≥ 1, Wi (G) \ fb(Wi (Γ)) is composed of torsion characters. If q(G) = 1, this follows immediately from [Cam01, Théorème 1.3], as in this case W1 (Γ) is torsion. If q(G) ≥ 2, we need a bit more work: again by [Cam01, Théorème 1.3] and the above, we have that S b W1 (G) = G Z 1̂ where Z is a finite collection of torsion characters, that we will assume to be disjoint b . By definition Wi (G) ⊆ W1 (G). This implies, by the aforementioned result from G 1̂ of Brudnyi, that: – if 1 ≤ i ≤ 2q(G) − 2 the isolated points of Wi (G) are contained in Z; b . – if i ≥ 2q(G) − 1, they are either contained in Z, or are torsion characters in G 1̂ In either case, Wi (G) \ fb(Wi (Γ)) is composed of torsion characters as claimed. This concludes the proof of the claim. All this, so far, is a consequence of the fact that X has Albanese dimension one. Now we will make use of the assumption on the virtual Albanese dimension to show that Wi (G) \ fb(Wi (Γ)) is actually empty. In order to prove this, recall the formula for the first Betti number for finite regular abelian covers of X, as determined in [Hir97]: Given an epimorphism α : G → S to a finite abelian group, and following the notation from the diagram in (1), the finite cover H of X determined by α has first Betti number X b (9) b1 (H) = |Wi (G) ∩ α b(S)|. i≥1 This formula says that a character ξ : G → C∗ such that ξ ∈ Wi (G) contributes with multiplicity equal to its depth to the Betti number of the cover defined by α : G → S whenever it factorizes via α. Similarly, the corresponding cover Λ of Σ has first Betti number X \ b S/α(K))|. 
(10) b1 (Λ) = |Wi (Γ) ∩ β( i≥1 \ b S/α(K)). Consider a character ρ ∈ Wi (Γ) ∩ β( We have a commutative diagram G α f // Γ❃ ❃❃ ❃❃ ❃❃  // S/α(K) ❃❃❃ρ ◆◆◆ ❃❃❃ ◆◆◆ ❃❃ ◆◆◆ ❃ ◆''   β S C∗ whence f ∗ (ρ) factorizes via α. As fb(Wi (Γ)) ⊆ Wi (G) this implies that f ∗ (ρ) ∈ b We deduce that for any α : G → S any contribution to the right hand Wi (G) ∩ α b(S). ON VIRTUAL PROPERTIES OF KÄHLER GROUPS 19 side of Equation (10) is matched by an equal contribution to the right-hand-side of Equation (9). Now assume that, for some i ≥ 1, there is a nontrivial character ξ : G → C∗ such that ξ ∈ Wi (G) \ fb(Wi (Γ)). (The case of the trivial character 1̂ : G → C∗ is dealt with in the same way; we omit the details to avoid repetition.) As shown in the claim above, such character must be torsion. Because of that, its image is a finite abelian subgroup S ≤ C∗ , i.e. ξ factors through an epimorphism αξ : G → S. This character either lies in Z or it is of the form ξ = f ∗ (ρ) for ρ ∈ W2g(Γ)−2 (Γ) (in which case i > 2g(Γ) − 2). These two cases are treated in slightly different ways: – If the case ξ ∈ Z holds, ξ will give a positive contribution to b1 (Hξ ) that is not matched by any term in the right-hand-side of Equation (10). – If the case ξ = f ∗ (ρ) holds, it is immediate to verify that ρ factors through βξ : Γ → S/αξ (K) ∼ = S. However, the contribution of ρ to b1 (Λξ ) is at least i − 2g(Γ) + 2 > 0 short of the contribution of ξ = f ∗ (ρ) to b1 (Hξ ), as dim H 1(G; Cξ ) ≥ i > dim H 1 (Γ; Cρ ) = 2g(Γ) − 2. In either case, the outcome is that b1 (Hξ ) > b1 (Λξ ), which violates the assumption that X has virtual Albanese dimension one. Summing up, fb: Wi (Γ) → Wi (G) is a bijection. These sets coincide therefore with the connected components of the respective character variety (for i ≤ i ≤ 2g(Γ) − 2), the trivial character (for 2g(Γ) − 1 ≤ i ≤ 2g(Γ)), and are empty otherwise. Whenever nonempty, both sets inherit a group structure as subsets of the respective character varieties. The map fb is the restriction of an homomorphism between these character varieties, hence an isomorphism as stated.  Example. The bielliptic surfaces mentioned in Section 2 are a clean example of the fact that we need more than Albanese dimension one to get the isomorphism of Theorem 3.6. These surfaces admit genus one Albanese pencils without multiple fibers, and there exists an epimorphism α : G → S with S abelian and H = Ker α ∼ = Z4 (see [BHPV04, Section V.5]). This entails that, for some i ≥ 1, Wi (G) is strictly larger than fb(Wi (Z2 )) (the difference being torsion characters, contributing to the first Betti number of H). By Theorem 3.6 the only characters of G that contribute to the rational homology of finite abelian covers of X are those that descend to Γ, i.e. restrict trivially to im(H1 (K) → H1 (G)). In particular, the finite abelian cover of X determined by the image, in H1 (G), of the homology of the fiber of the Albanese map has the same first Betti number as X. Note that such an image can be nonzero: infinitely many examples arise from Proposition 3.1 by taking K equal to any finite index subgroups of Sp(2n, Z), n ≥ 2. 20 STEFAN FRIEDL AND STEFANO VIDUSSI References [Ag13] I. Agol, The virtual Haken conjecture, with an appendix by I. Agol, D. Groves and J. Manning, Doc. Math. 18 (2013), 1045–1087. [ABCKT96] J. Amorós, M. Burger, K. Corlette, D. Kotschick and D. Toledo, Fundamental groups of compact Kähler manifolds, Mathematical Surveys and Monographs, 44. American Mathematical Society, Providence, RI, 1996. [AFW15] M. 
Prediction of the Optimal Threshold Value in DF Relay Selection Schemes Based on Artificial Neural Networks

1 Ferdi KARA, 1 Hakan KAYA, 2 Okan ERKAYMAZ, 1 Ertan ÖZTÜRK
1 Department of Electrical Electronics Engineering, 2 Department of Computer Engineering, Bulent Ecevit University, Zonguldak, Turkey
{[email protected], [email protected], [email protected], [email protected]}

This work has been supported by The Scientific and Technological Research Council of Turkey (TUBITAK) under the 2211-A PhD Scholarship Program.

Abstract—In wireless communications, the cooperative communication (CC) technology promises performance gains over traditional Single-Input Single-Output (SISO) techniques; the CC technique is therefore one of the candidates for 5G networks. In the Decode-and-Forward (DF) relaying scheme, which is one of the CC techniques, the determination of the threshold value at the relay plays a key role for system performance and power usage. In this paper, we propose, for the first time in the literature, the prediction of the optimal threshold values for the best relay selection scheme in cooperative communications based on Artificial Neural Networks (ANNs). The average link qualities and the number of relays are used as inputs for predicting the optimal threshold values with two ANN types, Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. The MLP network performs better than the RBF network in predicting the optimal threshold value when the same number of neurons is used in the hidden layer of both networks. In addition, the optimal threshold values obtained by the ANNs are verified against the optimal threshold values obtained numerically from the closed-form expression derived for the system. The results show that the optimal threshold values obtained by the ANNs for the best relay selection scheme yield the minimum Bit Error Rate (BER), because they reduce the probability that error propagation occurs. Moreover, for the same BER performance target, using the predicted optimal threshold values requires 2 dB less power, which is a significant gain in terms of green communication.

Keywords—relay selection, cooperative, ANNs, MLP, RBF, optimal threshold

I. INTRODUCTION

The performance of wireless communication techniques is mostly evaluated in terms of the bit error rate (BER) and the outage probability. Both criteria depend on wireless channel impairments such as interference, fading, shadowing and path loss [1]. Researchers have studied several techniques to improve this performance. One of them is diversity, in which copies of the data are transmitted over independent dimensions such as time, frequency and space (antennas). In spatial diversity, the copies of the data are transmitted and received by multiple antennas; such systems are usually referred to as Multiple-Input Multiple-Output (MIMO) systems [2], a key technology in the 4G standards LTE and LTE-Advanced [3]. However, in mobile communications multiple antennas cannot easily be deployed because of physical limitations. To overcome these limitations, relaying systems have been proposed in the literature: the other users in the environment, called relays, forward a processed version of the data received from the source to the destination. Relaying systems are regarded as candidates for 5G and beyond technologies.
This relaying technique is called cooperative communication (CC), and the diversity obtained in this way is called cooperative diversity or virtual spatial diversity [4]. In the literature, different relaying protocols, such as amplify-and-forward (AF) and decode-and-forward (DF), are used depending on the processing carried out at the relay. In the AF protocol, the received data is amplified by the relay (to equalize the effect of the channel fading between the source and the relay) and the relay then retransmits the amplified version of the data to the destination [5]. In the DF protocol, on the other hand, the relay first decodes the received data, then re-encodes it and forwards it to the destination. In the DF protocol the relay decides whether or not to transmit according to the source–relay link quality: if the source–relay link quality is below a threshold value at the relay, the relay remains silent; otherwise the relay transmits [6]. However, the relay may still decode the data from the source incorrectly even when the source–relay link quality exceeds the threshold value, and then forward this incorrect data to the destination. This phenomenon is called the error propagation problem. The probability of this incorrect detection at the relay depends on the threshold value [7].

In a CC system, the source can transmit the signal to the destination not only through one relay but through multiple relays (all available relays) to increase the diversity order gain. Instead of using all available relays, only one relay, the best one among them in terms of performance, can be selected. The selected relay has the highest link quality along the transmission path and is called the best relay; hence this scheme is called relay selection [8]. Using only the best relay also provides the full diversity order with an efficient use of the bandwidth. In the DF protocol with relay selection, the threshold value has a great effect on the performance of cooperative communication. To minimize the BER, an optimal threshold value has to be obtained. This optimal threshold value depends on the number of relays and on the link qualities between source and relay, relay and destination, and source and destination [7]. Computing the optimal threshold value analytically for varying numbers of relays and link qualities is hardly possible.

In this paper, the optimal threshold values are predicted with MLP and RBF networks, using the number of relays and the average link qualities of the source–relay–destination paths as inputs after normalization. The optimal threshold values for the best relay selection scheme are determined for different scenarios by numerically minimizing a closed-form BER expression similar to the one given in [6]. Scenarios with different numbers of relays and different link qualities have been used to test the proposed techniques. The outputs of the ANNs are in very good agreement with the numerical results.

This paper is organized into six sections. Section II briefly explains best relay selection in DF networks and gives the numerical results for the optimal threshold values. Section III reviews the literature on the determination of threshold values in DF networks. Section IV describes the ANNs used as the proposed techniques. The application of the ANNs to the prediction of optimal threshold values in DF networks and the corresponding results are given in Section V. Section VI discusses the results and concludes the paper.

II.
SYSTEM MODEL In the relay selection scheme, one relay is selected among multi-relays in order to utilize full cooperative diversity [8]. The relay selection is performed by considering source-relaydestination cascaded link qualities of all available relays. Relay selection is done in two steps: First, the decoding set (C) of relays which are called reliable relays, is determined by considering source-relay link qualities of all available relays. If the received SNR from source at the relay is less than a threshold value, the relay cannot decode the source message correctly and it does not transmit. If the received SNR at the relay is greater than the threshold, this relay is added into the decoding set. It is also assumed that the relays belong to the decoding set may detect the signals from source erroneously and forward incorrect data to the destination, causing error propagation that can reduce the performance of the system [9]. Second, the relay selection is completed at the destination by taking the relay having the highest relay-destination link quality from the decoding set. After determining the best relay, the destination informs all relays about which relay is selected to transmit as the best relay through a reverse broadcast channel then other unselected relays turn to be idle [10]. The destination combines the data received from source and best relay by using Maximum Ratio Combining (MRC). The best relay selection scheme is given Fig 1. For the best relay selection scheme, the BER of system, which is given in the (1), is obtained by using the equations given in [5] and [11] under the condition that the destination uses MRC and the modulation is Binary Phase Shift Keying (BPSK). The BER of systems depends on the threshold value (𝛾𝑡ℎ ), the average link SNRs (𝛾𝑆𝑅 , 𝛾𝑅𝐷 , 𝛾𝑅𝐷 ) and the number of relays (M). The definition of erfc(.) function used in (1) is 𝑒𝑟𝑓𝑐(𝑥) = 𝜋⁄2 𝑥2 ∫ 𝑒𝑥𝑝⁡(− 𝑠𝑖𝑛2 (𝜃))𝑑𝜃 . 𝜋 0 2 𝑜𝑝𝑡 The optimal threshold (𝛾𝑡ℎ ) is the value which minimizing the BER of the system. 𝑜𝑝𝑡 𝛾𝑡ℎ = 𝑎𝑟𝑔, 𝑚𝑖𝑛{𝛾𝑡ℎ , 𝐵𝐸𝑅} (2) In our scenarios, it is assumed that the average source-relay, relay-destination and source-destination link SNRs are 𝜀 2 ). 𝜀𝑏 2 𝛾̅𝑆𝑅 = 𝑬(ℎ𝑆𝑅 𝛾̅𝑅𝐷 = 𝑬(ℎ𝑅𝐷 ). 𝑏⁄𝑁 , 𝛾̅𝑆𝐷 = ⁄𝑁 , 0 0 𝜀𝑏 2 𝑬(ℎ𝑆𝐷 ). ⁄𝑁 respectively. E(.) is the statistical average (or 0 expectation) operator. hSD, hSR and hRD are the channel coefficients and they are modeled as Rayleigh flat fading 2 2 2 channels with variance of 𝜎𝑆𝐷 , 𝜎𝑆𝑅 and 𝜎𝑅𝐷 , respectively. In 𝑜𝑝𝑡 Fig. 2, the optimal threshold value (𝛾𝑡ℎ ) numerically obtained 𝜀 minimizing BER as a function of 𝑏⁄𝑁 is showed for different 0 number of relays (M) on the symmetric network which has 2 2 2 𝜎𝑆𝐷 = 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 1. 𝑜𝑝𝑡 In Fig. 3, the optimal threshold value (𝛾𝑡ℎ ) is showed on 2 2 2 another network in which 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 10 and 𝜎𝑆𝐷 = 1. It is mostly possible scenario if the relay stands between source and destination. 𝑀 𝑀−𝑖 𝑖 𝛾 𝛾𝑡ℎ 𝛾 𝐵𝐸𝑅 = (1 − exp (− 𝑡ℎ⁄ )) × 0.5⁡𝑒𝑟𝑓𝑐 (√𝛾𝑆𝐷 ⁡) + ∑𝑀 ⁄𝛾 ) (1 − exp (− 𝑡ℎ⁄𝛾 )) 𝑖=1 ⌈exp (− 𝛾𝑆𝑅 𝑆𝑅 𝑆𝑅 {0.5 (𝑒𝑟𝑓𝑐(√𝛾𝑡ℎ ⁡) − exp ( 𝑒𝑟𝑓𝑐 (√𝛾𝑡ℎ (1 + 1 𝛾𝑆𝑅 [( 𝛾𝑅𝐷 𝛾𝑅𝐷 +𝛾𝑆𝐷 )× 𝛾𝑆𝑅 1 𝛾𝑆𝑅 𝛾𝑡ℎ 𝛾 ⁄𝛾 ) √1+𝛾 𝑒𝑟𝑓𝑐 (√𝛾𝑡ℎ (1 + 𝛾 )⁡))} + {1 − 0.5 (𝑒𝑟𝑓𝑐(√𝛾𝑡ℎ ⁡) − exp ( 𝑡ℎ⁄𝛾 ) √1+𝛾 × 𝑆𝑅 𝑆𝑅 𝑆𝑅 𝑆𝑅 𝑆𝑅 )⁡))} ⁡ × 0.5 {⁡𝑀 ∑𝑀−1 𝑘=0 (−1)𝑘 𝑀−1 ( 𝑘 ) 𝑘+1 × (1 − 𝛾𝑅𝐷 ⁄𝑘+1 𝛾𝑅𝐷 ⁄𝑘+1−𝛾𝑆𝐷 𝛾 √1+𝛾𝑅𝐷 ⁄𝑘+1 𝑅𝐷 ⁄𝑘+1 + 𝛾𝑆𝐷 𝛾𝑅𝐷 ⁄𝑘+1−𝛾𝑆𝐷 𝛾 √1+𝛾𝑆𝐷 )}]⌉ (1) 𝑆𝐷 Relay 1 Yes Reliable No İdle . . . Direct Link Source Relay Selection Destination Relay M Yes Reliable No İdle Figure 1. 
Best relay selection scheme.

Figure 2. The optimal threshold values versus ε_b/N_0 on the symmetric network with σ²_SD = σ²_SR = σ²_RD = 1, for M = 4.

Figure 3. The optimal threshold values versus ε_b/N_0 on the network with σ²_SR = σ²_RD = 10 and σ²_SD = 1, for M = 4.

III. RELATED WORKS

The optimal threshold value in DF schemes has been obtained analytically only for a single relay (i.e., M = 1), over Rayleigh fading channels [7] and over Nakagami-m fading channels [12], by using an optimum decision rule similar to (2). Similarly, in [13] the authors proposed a method for the optimal threshold value and the optimal power allocation when a single relay is used. In [7], the authors obtained the optimal values for multiple-relay scenarios numerically by minimizing the BER, as described in Section II; however, this method cannot be used at the relay because of its computational burden. In [14], the authors used the MATLAB Optimization Toolbox for the determination of the threshold values.

IV. OVERVIEW OF ARTIFICIAL NEURAL NETWORKS

Artificial Neural Networks (ANNs) have been studied by researchers since the 1970s and have become a commonly used tool for generating appropriate outputs for given inputs when no explicit formula between inputs and outputs is available. An ANN model is formed by input(s) with weight(s), activation function(s), biases and output(s) [15]. The weights and biases are adjusted until an output within a tolerated error is generated for the given inputs. Thanks to their computational speed, their ability to handle complex functions and their efficiency even when full information about the problem is not available, ANNs have become one of the most preferred methods for solving non-linear engineering problems. ANNs are mainly used for classification, function approximation, clustering and regression [16]. They have also been used in wireless communications, for example for channel estimation and channel equalization [17], [18], [19]. In recent years, researchers have also started to use ANNs for network problems such as relay selection [3]. ANNs come in different types, named according to their network connections and the activation functions used; Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks are two of the most popular ones.

A. Multi-Layer Perceptron (MLP) Network

The Multi-Layer Perceptron (MLP) network, which is suitable for both linear and non-linear applications, is the most widely used ANN. An MLP network consists of an input layer, one or more hidden layers and an output layer. An MLP network with one hidden layer is shown in Fig. 4.

Figure 4. Multi-Layer Perceptron network with one hidden layer.

The computations take place in the neurons of each layer, and the information is transferred forward from node to node via weighted connections. The weights on the connections and the bias values of each neuron are adjusted towards the desired outputs via backpropagation algorithms [20]. There are several learning algorithms in the literature; in this work, the widely used Levenberg–Marquardt backpropagation algorithm is preferred. The MLP is chosen in this work because of its simple structure for prediction problems and its efficiency in learning large data sets. The output of an MLP with one hidden layer is

  y = f_o\Big( \sum_{k=1}^{m} f_h\Big( \sum_{j=1}^{n} w_{hj}\, x_j + b_{hj} \Big)\, w_{ok} + b_{ok} \Big).   (3)

In (3), x_j are the inputs and y is the output; f_h, w_{hj} and b_{hj} are the hidden layer activation function, the input–hidden layer weights and the hidden layer biases, respectively. Likewise, f_o, w_{ok} and b_{ok} are the output layer activation function, the hidden–output layer weights and the output layer biases, respectively. In this work, the "tangent sigmoid" function is used as the hidden layer activation function and the "pure linear" function as the output layer activation function.
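As a small illustration of (3), the forward pass of such a one-hidden-layer MLP can be written in Python as follows. This is a sketch, not the authors' implementation; the weight values are arbitrary placeholders and the input vector is only an example of the normalized features used in this paper.

import numpy as np

def mlp_forward(x, W_h, b_h, W_o, b_o):
    # One-hidden-layer MLP as in (3): tanh ("tangent sigmoid") hidden
    # activation and a linear ("pure linear") output.
    # Shapes: x (n,), W_h (m, n), b_h (m,), W_o (m,), b_o scalar.
    h = np.tanh(W_h @ x + b_h)        # hidden layer: f_h(sum_j w_hj x_j + b_hj)
    return float(W_o @ h + b_o)       # output layer: f_o is the identity

# toy usage with random placeholder weights
rng = np.random.default_rng(1)
x = rng.random(5)                     # e.g. normalized (s2_SR, s2_RD, s2_SD, M, Eb/N0)
W_h, b_h = rng.random((12, 5)), rng.random(12)
W_o, b_o = rng.random(12), 0.1
print(mlp_forward(x, W_h, b_h, W_o, b_o))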
B. Radial Basis Function (RBF) Network

The Radial Basis Function (RBF) network also consists of three layers, like the MLP: an input layer, a hidden layer and an output layer. However, RBF networks are trained faster than MLPs. RBF networks can be viewed as an application of neural networks to function approximation in a multi-dimensional space. They use the Gauss function as the hidden layer activation function. The output of an RBF network is

  y = \sum_{k=1}^{n} \exp\Big( -\frac{\lVert x_j - \mu_j \rVert^2}{2\sigma^2} \Big)\, w_{ok} + b_{ok}.   (4)

In (4), x_j is the input and y is the output of the network; μ_j and σ² are the neuron center points and the spread, respectively. The output of the hidden layer is multiplied by the hidden–output layer weights (w_{ok}), summed with the output biases (b_{ok}) and transferred to the output. In an RBF network the input–hidden layer weights are all equal to 1 [20].

V. PREDICTION OF OPTIMAL THRESHOLD VALUES

The BER values for the selection relaying scheme given in (1) are calculated for various threshold values in the different scenarios with varying numbers of relays and link qualities described in Section II. The threshold value that minimizes the BER is then taken as the optimal one. The data set obtained in this way is divided into two parts, for training and testing: the training set contains 12500 samples and the test set 3125 samples. After normalization, the training data is applied to both ANNs, the MLP and the RBF network, in the same way.

During the training of the MLP, the network was trained many times with different numbers of hidden layer neurons, and the performance criterion, the Mean Squared Error (MSE), was recorded. The more neurons in the hidden layer, the lower the training MSE; however, the MSE decreases only very slowly beyond 12 neurons. Considering in addition the implementation complexity of the MLP, the number of hidden layer neurons was chosen as 12. The training MSEs of the MLP for different numbers of hidden neurons are given in Table I.

TABLE I. THE TRAINING MSES OF THE MLP FOR DIFFERENT NUMBERS OF HIDDEN LAYER NEURONS

  Number of hidden layer neurons:   4         6         8         10        12        14        16        18
  MSE:                              2.62E-04  8.49E-05  1.45E-05  1.16E-05  8.59E-06  2.53E-06  1.58E-06  1.11E-06

The RBF network was trained with the same number of neurons in order to compare it fairly with the MLP network. The spread of the RBF network affects the training MSE; however, there is no direct way to calculate the spread coefficient. Therefore, the RBF network was retrained with different spread coefficients until the minimum MSE was obtained, and the spread coefficient was chosen as 0.8.

After training, the two networks were tested with the part of the data set not used for training. The ANN outputs were obtained for 10 samples randomly selected from the test data, and three statistical performance criteria, the Mean Squared Error (MSE), the Mean Absolute Error (MAE) and the coefficient of determination (R²), were calculated to compare the ANN outputs with the numerical optimal threshold values. The results are given in Table II.

TABLE II. COMPARISON OF THE NUMERICAL RESULTS WITH THE ANN OUTPUTS, AND THE STATISTICAL PERFORMANCE CRITERIA OF THE ANNS, FOR 10 RANDOMLY SELECTED SAMPLES
S. No 1 2 3 4 5 6 7 8 9 10 Num.
Values Inputs 𝝈𝟐𝑺𝑹 1 10 1 3.25 3.25 10 7.75 3.25 1 7.75 𝝈𝟐𝑹𝑫 5.5 3.25 3.25 5.5 1 10 3.25 5.5 7.75 3.25 𝝈𝟐𝑺𝑫 10 10 7.75 7.75 10 10 3.25 3.25 5.5 7.75 M 2 8 4 6 2 6 4 8 8 8 𝜀𝑏 ⁄𝑁 0 9 8 0 0 3 13 20 10 18 16 𝒐𝒑𝒕 ANN Outputs 𝜸𝒕𝒉 MLP RBF 0.1100 0.3432 0.3171 0.0753 0.0750 0.4438 0.4291 0.2839 0.4107 0.6127 0.1094 0.3371 0.3222 0.0786 0.0739 0.4446 0.4303 0.2805 0.4090 0.6125 9.225 E-06 0.0024 0.9997 0.989 0.3377 0.3056 0.0791 0.0652 0.4387 0.4190 0.2861 0.4540 0.6196 2.4516 E-04 0.0109 0.9914 MSE MAE R2 On the two network scenarios described in Part II, the ANNs outputs are calculated like given in (3) and (4). In Fig. 5, 2 2 2 on the symmetric network -𝜎𝑆𝐷 ⁡ = 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 1- the optimal values numerically obtained and the ANNs outputs are given according to 𝜀𝑏 ⁄𝑁0 values for M=4. In Fig. 6, the same computations are given for M=6 on the other network 2 2 2 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 10 and 𝜎𝑆𝐷 = 1-. Figure 6. The numerical optimal thresholds and ANNs’ outputs versus 2 2 2 𝜀𝑏 ⁄𝑁0 for M=6 on the network in which 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 10 and 𝜎𝑆𝐷 = 1. VI. CONCLUSION We have proposed the use of two different ANN models (MLP and RBF) for the first time to obtain the optimal threshold values for the best relay selection scheme in cooperative communications systems. Different scenarios having different link qualities and different number of relays were used to test the validity of the proposed models. Numerical results have shown that the optimal threshold value increases as a function of relays number and of SNR which represents all link qualities. The results have shown that two ANN networks can predict the optimal values. However, the MLP network outperforms the RBF network when using the same number of hidden layer neurons. By the prediction of the optimal threshold value at the relay adaptively, BER value of the DF scheme remain minimum all SNR values. Otherwise, small threshold values would have low BER performance at high 𝜀𝑏 ⁄𝑁0 values because of the propagation error; high threshold values would have low BER performance at the low 𝜀𝑏⁄𝑁0 ⁡value. On the symmetric network 2 2 2 -𝜎𝑆𝐷 ⁡ = 𝜎𝑆𝑅 = 𝜎𝑅𝐷 = 1- which has M=4 relays, the BER performances of the system have obtained for different constant threshold values and for the optimal threshold values according to changing 𝜀𝑏 ⁄𝑁 values. 0 TABLE III. THE COMPARISON OF THE BER VALUES FOR ACHIVED BY OPTIMAL THRESHOLD VALUES WITH THE ACHIVED BY CONSTANT THRESHOLD VALUES ON TH SYMETRIC NEWROKS WITH 4 RELAYS Threshold Values 3 0 7,32 E-02 0,130 5 0,144 10 0,147 𝜸𝒕𝒉 (MLP) 6,98 E-02 7,2 E-02 1 𝒐𝒑𝒕 Figure 5. The numerical optimal thresholds and ANNs’ outputs versus 2 2 2 𝜀𝑏 ⁄𝑁0 for M=4 on the symmetric network in which 𝜎𝑆𝐷 ⁡ = 𝜎𝑆𝑅 = 𝜎𝑅𝐷 =1 𝒐𝒑𝒕 𝜸𝒕𝒉 (RBF) 4 1,54 E-02 2,8 E-02 4,96 E-02 7,31 E-02 1,48 E-02 1,51 E-03 8 4,74 E-03 2 E-03 4,67 E-03 1,58 E-02 1,86 E-03 1,85 E-03 𝜺𝒃 ⁄𝑵 (dB) 𝟎 10 3 E-03 5,55 E-04 9,26 E-04 4,41 E-03 5,39 E-04 5,45 E-04 12 1,93 E-03 2,29 E-04 1,61 E-04 9 E-04 1,38 E-04 1,43 E-04 16 7,92 E-04 7,96 E-05 1,12 E-05 1,99 E-05 6,83 E-06 7,38 E-06 20 3,19 E-04 3,19 E-05 3,63 E-06 2,96 E-07 2,44 E-07 2,87 E-07 In summary, obtaining the threshold value adaptively with the ANNs for average 𝜀𝑏 ⁄𝑁0 ⁡ values and number of relays provides minimum BER values for all 𝜀𝑏 ⁄𝑁0 values. The results also shown that, compared to constant threshold values using optimal threshold values at the relay gives us 2 dB less power opportunity for the same BER performances. This is very promising result in terms of green communication systems. 
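The training and evaluation procedure of Section V can be sketched with standard tools. The code below is a minimal illustration under stated assumptions, not the authors' original implementation: the arrays X and y are hypothetical stand-ins for the data set of numerically obtained optimal thresholds (inputs σ²_SR, σ²_RD, σ²_SD, M, ε_b/N_0), and scikit-learn's L-BFGS solver is used in place of Levenberg–Marquardt, which scikit-learn does not provide.

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Hypothetical stand-in data: rows are (sigma2_SR, sigma2_RD, sigma2_SD, M, Eb/N0 in dB),
# targets are placeholders for the numerically obtained optimal thresholds.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(1, 10, 2000), rng.uniform(1, 10, 2000),
                     rng.uniform(1, 10, 2000), rng.integers(2, 9, 2000),
                     rng.uniform(0, 20, 2000)])
y = rng.uniform(0.05, 0.7, 2000)            # illustrative targets only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = MinMaxScaler().fit(X_tr)           # normalize inputs before training

# 12 tanh hidden neurons and a linear output, mirroring the tangent-sigmoid /
# pure-linear choice in the text; lbfgs stands in for Levenberg-Marquardt.
mlp = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
mlp.fit(scaler.transform(X_tr), y_tr)

pred = mlp.predict(scaler.transform(X_te))
print("MSE:", mean_squared_error(y_te, pred))
print("MAE:", mean_absolute_error(y_te, pred))
print("R2 :", r2_score(y_te, pred))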
As a conclusion, this paper shows that by the time the threshold value is obtained properly, the DF cooperation communication systems are very promising for 5G standards instead of the traditional MIMO techniques already exist in 4G standards [9] P. Herhold, E. Zimmermann, and G. Fettweis, “A simple cooperative extension to wireless relaying,” in International Zurich Seminar on Digital Communications, 2004, pp. 36–39. [10] E. Öztürk and H. Kaya, “Performance analysis of distributed turbo coded scheme with two ordered best relays,” IET Commun., vol. 9, no. 5, pp. 638–648, Mar. 2015. [11] S. S. Ikki and M. H. Ahmed, “On the Performance of CooperativeDiversity Networks with the Nth Best-Relay Selection Scheme,” IEEE Trans. Commun., vol. 58, no. 11, pp. 3062–3069, Nov. 2010. [12] S. Ikki and M. H. Ahmed, “Performance of Decode-and-Forward Cooperative Diversity Networks Over Nakagami-m Fading Channels,” in IEEE GLOBECOM 2007-2007 IEEE Global Telecommunications Conference, 2007, pp. 4328–4333. [13] W. P. Siriwongpairat, T. Himsoon, W. S. W. Su, and K. J. R. Liu, “Optimum threshold-selection relaying for decode-and-forward cooperation protocol,” IEEE Wirel. Commun. Netw. Conf. 2006. WCNC 2006., vol. 2, no. c, pp. 1015–1020, 2006. [14] H. X. Nguyen and H. H. Nguyen, “Selection combining for noncoherent decode-and-forward relay networks,” EURASIP J. Wirel. Commun. Netw., vol. 2011, no. 1, p. 106, 2011. [15] S. Haykin, “Neural networks: a comprehensive foundation,” Knowl. Eng. Rev., vol. 13, no. 4, Feb. 1999. [16] J. M. Zurada, “Introduction to Artificial Neural Systems,” West Publ., p. 683, 1992. [17] S. Theodoridis, “Schemes for equalisation of communication channels with nonlinear impairments,” IEE Proc. - Commun., vol. 142, no. 3, p. 165, 1995. [18] G. J. Gibson, S. Siu, S. Chen, C. F. N. Cowan, and P. M. Grant, “The application of nonlinear architectures to adaptive channel equalisation,” in Conference Record - International Conference on Communications, 1990, vol. 2, pp. 649–653. [19] K. Burse, R. N. Yadav, and S. C. Shrivastava, “Channel Equalization Using Neural Networks: A Review,” IEEE Trans. Syst. Man, Cybern. Part C (Applications Rev., vol. 40, no. 3, pp. 352– 357, May 2010. [20] L. I. Guan, “R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, New York: John Wiley &amp; Sons, 2001, pp. xx + 654, ISBN: 0-471-05669-3,” J. Classif., vol. 24, no. 2, pp. 305–307, Sep. 2007. REFERENCES [1] M. K. Simon and M.-S. Alouini, Digital Communication over Fading Channels. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2004. [2] A. Goldsmith, Wireless Communications, First., vol. 1. Cambridge: Cambridge University Press, 2005. [3] K. Sankhe, C. Pradhan, S. Kumar, and G. Ramamurthy, “Machine Learning Based Cooperative Relay Selection in Virtual MIMO,” p. 6, Jun. 2015. [4] J. N. Chen, D.;Laneman, “Modulation and demodulation for cooperative diversity in wireless systems,” IEEE Trans. Wirel. Commun., vol. 5, no. 7, pp. 1785–1794, Jul. 2006. [5] S. S. Ikki and M. H. Ahmed, “On the Performance of Amplify-andForward Cooperative Diversity with the,” Commun. Soc., pp. 0–5, 2009. [6] F. Atay Onat, Y. Fan, H. Yanikomeroglu, and H. V. Poor, “Threshold-Based Relay Selection for Detect-and-Forward Relaying in Cooperative Wireless Networks,” EURASIP J. Wirel. Commun. Netw., vol. 2010, no. 1, pp. 1–10, 2010. [7] F. A. Onat, A. Adinoyi, Y. Fan, H. Yanikomeroglu, J. S. Thompson, and I. D. Marsland, “Threshold selection for SNR-based selective digital relaying in cooperative wireless networks,” IEEE Trans. Wirel. 
Commun., vol. 7, no. 11, pp. 4226–4237, 2008. [8] A. Bletsas, A. Khisti, D. P. Reed, and A. Lippman, “A simple Cooperative diversity method based on network path selection,” IEEE J. Sel. Areas Commun., vol. 24, no. 3, pp. 659–672, Mar. 2006.
arXiv:1704.07284v1 [] 24 Apr 2017 Optimal algorithms for hitting (topological) minors on graphs of bounded treewidth∗ Julien Baste† Ignasi Sau‡§ Dimitrios M. Thilikos‡ ¶ Abstract For a fixed collection of graphs F , the F -M-Deletion problem consists in, given a graph G and an integer k, decide whether there exists S ⊆ V (G) with |S| ≤ k such that G \ S does not contain any of the graphs in F as a minor. We are interested in the parameterized complexity of F -M-Deletion when the parameter is the treewidth of G, denoted by tw. Our objective is to determine, for a fixed F , the smallest function fF such that F -M-Deletion can be solved in time fF (tw) · nO(1) on n-vertex graphs. Using and enhancing the machinery of boundaried graphs and small sets of representatives introduced by Bodlaender et al. [J ACM, 2016], we prove that when all the graphs in F are connected and at least one of them is planar, then fF (w) = 2O(w·log w) . When F is a singleton containing a clique, a cycle, or a path on i vertices, we prove the following asymptotically tight bounds: • f{K4 } (w) = 2Θ(w·log w) . • f{Ci } (w) = 2Θ(w) for every i ≤ 4, and f{Ci } (w) = 2Θ(w·log w) for every i ≥ 5. • f{Pi } (w) = 2Θ(w) for every i ≤ 4, and f{Pi } (w) = 2Θ(w·log w) for every i ≥ 6. The lower bounds hold unless the Exponential Time Hypothesis fails, and the superexponential ones are inspired by a reduction of Marcin Pilipczuk [Discrete Appl Math, 2016]. The single-exponential algorithms use, in particular, the rank-based approach introduced by Bodlaender et al. [Inform Comput, 2015]. We also consider the version of the problem where the graphs in F are forbidden as topological minors, and prove that essentially the same set of results holds. Keywords: parameterized complexity; graph minors; treewidth; hitting minors; topological minors; dynamic programming; Exponential Time Hypothesis. ∗ Emails of authors: [email protected], [email protected], [email protected]. This work has been supported by project DEMOGRAPH (ANR-16-CE40-0028). † Université de Montpellier, LIRMM, Montpellier, France. ‡ AlGCo project-team, CNRS, LIRMM, France. § Departamento de Matemática, Universidade Federal do Ceará, Fortaleza, Brazil. ¶ Department of Mathematics, National and Kapodistrian University of Athens, Athens, Greece. 1 1 Introduction Let F be a finite non-empty collection of non-empty graphs. In the F-M-Deletion (resp. F-TM-Deletion) problem, we are given a graph G and an integer k, and the objective is to decide whether there exists a set S ⊆ V (G) with |S| ≤ k such that G \ S does not contain any of the graphs in F as a minor (resp. topological minor). These problems have a big expressive power, as instantiations of them correspond to several notorious problems. For instance, the cases F = {K2 }, F = {K3 }, and F = {K5 , K3,3 } of F-M-Deletion (or F-TM-Deletion) correspond to Vertex Cover, Feedback Vertex Set, and Vertex Planarization, respectively. For the sake of readability, we use the notation F-Deletion in statements that apply to both F-M-Deletion and F-TM-Deletion. Note that if F contains a graph with at least one edge, then F-Deletion is NP-hard by the classical result of Lewis and Yannakakis [26]. In this article we are interested in the parameterized complexity of F-Deletion when the parameter is the treewidth of the input graph. 
Since the property of containing a graph as a (topological) minor can be expressed in Monadic Second Order logic (see [22] for explicit formulas), by Courcelle’s theorem [8], F-Deletion can be solved in time O∗ (f (tw)) on graphs with treewidth at most tw, where f is some computable function1 . Our objective is to determine, for a fixed collection F, which is the smallest such function f that one can (asymptotically) hope for, subject to reasonable complexity assumptions. This line of research has attracted some interest during the last years in the parameterized complexity community. For instance, Vertex Cover is easily solvable in time O∗ (2O(tw) ), called single-exponential, by standard dynamic-programming techniques, and no algorithm with running time O∗ (2o(tw) ) exists unless the Exponential Time Hypothesis (ETH)2 fails [19]. For Feedback Vertex Set, standard dynamic programming techniques give a running time of O∗ (2O(tw·log tw) ), while the lower bound under the ETH [19] is again O∗ (2o(tw) ). This gap remained open for a while, until Cygan et al. [10] presented an optimal algorithm running in time O∗ (2O(tw) ), using the celebrated Cut&Count technique. This article triggered several other techniques to obtain single-exponential algorithms for so-called connectivity problems on graph of bounded treewidth, mostly based on algebraic tools [2, 15]. Concerning Vertex Planarization, Jansen et al. [20] presented an algorithm of time O∗ (2O(tw·log tw) ) as a crucial subroutine in an FPT algorithm parameterized by k. Marcin Pilipczuk [30] proved that this running time is optimal under the ETH, by using the framework introduced by Lokshtanov et al. [28] for proving superexponential lower bounds. 1 2 The notation O∗ (·) suppresses polynomial factors depending on the size of the input graph. The ETH states that 3-SAT on n variables cannot be solved in time 2o(n) ; see [19] for more details. 2 Our results. We present a number of upper and lower bounds for F-Deletion parameterized by treewidth, several of them being tight. Namely, we prove the following results, all the lower bounds holding under the ETH:  O(tw·log tw)  . 1. For every F, F-Deletion can be solved in time O∗ 22 2. For every connected3 F containing at least one planar graph (resp. subcubic planar graph), F-M-Deletion (resp. F-TM-Deletion) can be solved in time  ∗ O(tw·log tw) O 2 . 3. For any connected F, F-Deletion cannot be solved in time O∗ (2o(tw) ). 4. When F = {Ki }, the clique on i vertices, {Ki }-Deletion cannot be solved in time O∗ (2o(tw·log tw) ) for i ≥ 4. Note that {Ki }-Deletion can be solved in time O∗ (2O(tw) ) for i ≤ 3 [10], and that the case i = 4 is tight by item 2 above (as K4 is planar). 5. When F = {Ci }, the cycle on i vertices, {Ci }-Deletion can be solved in time O∗ (2O(tw) ) for i ≤ 4, and cannot be solved in time O∗ (2o(tw·log tw) ) for i ≥ 5. Note that, by items 2 and 3 above, this settles completely the complexity of {Ci }Deletion for every i ≥ 3. 6. When F = {Pi }, the path on i vertices, {Pi }-Deletion can be solved in time O∗ (2O(tw) ) for i ≤ 4, and cannot be solved in time O∗ (2o(tw·log tw) ) for i ≥ 6. Note that, by items 2 and 3 above, this settles completely the complexity of {Pi }Deletion for every i ≥ 2, except for i = 5, where there is still a gap. The results discussed in the last three items are summarized in Table 1. Note that the cases with i ≤ 3 were already known [10, 19], except when F = {P3 }. ❍ i ❍ F ❍❍❍ 2 3 4 5 ≥6 Ki Ci Pi tw tw tw tw tw tw tw · log tw tw tw tw · log tw ? 
2O(tw·log tw) tw · log tw tw ? tw · log tw tw · log tw ? 2O(tw·log tw) tw · log tw tw · log tw Table 1: Summary of our results when F equals {Ki }, {Ci }, or {Pi }. If only one value ‘x’ is written in the table (like ‘tw’), it means that the corresponding problem can be solved in time O∗ (2O(x) ) and that this bound is tight. An entry of the form ‘x ? y’ means that the corresponding problem cannot be solved in time O∗ (2o(x) ), and that it can be solved in time O∗ (2O(y) ). We interpret {C2 }-Deletion as Feedback Vertex Set. Grey cells correspond to known results. 3 A connected collection F is a collection containing only connected graphs. 3  O(tw·log tw)  uses and, a sense, Our techniques. The algorithm running in time O∗ 22 enhances, the machinery of boundaried graphs, equivalence relations, and representatives originating in the seminal work of Bodlaender et al. [5], and which has been subsequently used in [16, 17, 22]. For technical reasons, we use branch decompositions instead of tree decompositions, whose associated widths are equivalent from a parametric point of view [32].  In order to obtain the faster algorithm running in time O∗ 2O(tw·log tw) when F is a connected collection containing at least a (subcubic) planar graph, we combine the above ingredients with additional arguments to bound the number and the size of the representatives of the equivalence relation defined by the encoding that we use to construct the partial solutions. Here, the connectivity of F guarantees that every connected component of a minimum-sized representative intersects its boundary set (cf. Lemma 6). The fact that F contains a (subcubic) planar graph is essential in order to bound the treewidth of the resulting graph after deleting a partial solution (cf. Lemma 13). We present these algorithms for the topological minor version and then it is easy to adapt them to the minor version within the claimed running time (cf. Lemma 4). The single-exponential algorithms when F ∈ {{P3 }, {P4 }, {C4 }} are ad hoc. Namely, the algorithms for {P3 }-Deletion and {P4 }-Deletion use standard (but nontrivial) dynamic programming techniques on graphs of bounded treewidth, exploiting the simple structure of graphs that do not contain P3 or P4 as a minor (or as a subgraph, which in the case of paths is equivalent). The algorithm for {C4 }-Deletion is more involved, and uses the rank-based approach introduced by Bodlaender et al. [2], exploiting again the structure of graphs that do not contain C4 as a minor (cf. Lemma 20). It might seem counterintuitive that this technique works for C4 , and stops working for Ci with i ≥ 5 (see Table 1). A possible reason for that is that the only cycles of a C4 -minor-free graph are triangles and each triangle is contained in a bag of a tree decomposition. This property, which is not true anymore for Ci -minor-free graphs with i ≥ 5, permits to keep track of the structure of partial solutions with tables of small size. As for the lower bounds, the general lower bound of O∗ (2o(tw) ) for connected collections is based on a simple reduction from Vertex Cover. The superexponential lower bounds, namely O∗ (2o(tw·log tw) ), are strongly based on the ideas presented by Marcin Pilipczuk [30] for Vertex Planarization. We present a general hardness result (cf. Theorem 14) that applies to wide families of connected collections F. Then, our superexponential lower bounds, as well as the result of Marcin Pilipczuk [30] itself, are corollaries of this general result. 
Combining Theorem 14 with item 2 above, it easily follows that the running time O∗ (2O(tw·log tw) ) is tight for a wide family of F, for example, when all graphs in F are planar and 3-connected. Organization of the paper. In Section 2 we give some preliminaries. The algorithms based on boundaried graphs and representatives are given in Section 3, and the singleexponential algorithms for hitting paths and cycles are given in Section 4. The general lower bound for connected collections is presented in Section 5, and the superexponential lower bounds are presented in Section 6. We conclude the article in Section 7. 4 2 Preliminaries In this section we provide some preliminaries to be used in the following sections. Sets, integers, and functions. We denote by N the set of every non-negative integer and we set N+ = N \ {0}. Given two integers p and q, the set [p, q] refers to the set of every integer r such that p ≤ r ≤ q. Moreover, for each integer p ≥ 1, we set N≥p = N \ [0, p − 1]. In the set [1, k] × [1, k], a row is a set {i} × [1, k] and a column is a set [1, k] × {i} for some i ∈ [1, k]. We use ∅ to denote the empty set and ∅ to denote the empty function, i.e., the unique subset of ∅ × ∅. Given a function f : A → B and S a set S, we define f |S = {(x, f (x)) | x ∈ S ∩ A}. Moreover if S ⊆ A, we set f (S) = s∈S {f (s)}. Given a set S, we denote  by S2 the set containing every subset of S that has cardinality 2. We also denote by 2S the set of all the subsets ofSS. If S S is a collection of objects where the operation ∪ is defined, then we then denote S = X∈S X. Let p ∈ N with p ≥ 2, let f : Np → N, and let g : Np−1 → N. We say that f (x1 , . . . , xp ) = Oxp (g(x1 , . . . , xp−1 )) if there is a function h : N → N such that f (x1 , . . . , xp ) = O(h(xp ) · g(x1 , . . . , xp−1 )). Graphs. All the graphs that we consider in this paper are undirected, finite, and without loops or multiple edges. We use standard graph-theoretic notation, and we refer the reader to [11] for any undefined terminology. Given a graph G, we denote by V (G) the set of vertices of G and by E(G) the set of the edges of G. We call |V (G)| the size of G. A graph is the empty graph if its size is 0. We also denote by L(G) the set of the vertices of G that have degree exactly 1. If G is a tree (i.e., a connected acyclic graph) then L(G) is the set of the leaves of G. A vertex labeling of G is some injection ρ : V (G) → N+ . Given a vertex v ∈ V (G), we define the neighborhood of v as NG (v) = {u | u ∈ V (G), {u, v} ∈ E(G)} and the closedS neighborhood of v as NG [v] = NG (v) ∪ {v}. If X ⊆ V (G), then we write NG (X) = ( v∈X NG (v)) \ X. The degree of a vertex v in G is defined as degG (v) = |NG (v)|. A graph is called subcubic if all its vertices have degree at most 3. A subgraph H = (VH, EH ) of a graph G = (V, E) is a graph such that VH ⊆ V (G) and EH ⊆ E(G) ∩ V (H) . If S ⊆ V (G), the subgraph of G induced by S, denoted G[S], 2 S is the graph (S, E(G) ∩ 2 ). We also define G \ S to be the subgraph of G induced by V (G) \ S. If S ⊆ E(G), we denote by G \ S the graph (V (G), E(G) \ S). If s, t ∈ V (G), an (s, t)-path of G is any connected subgraph P of G with maximum degree 2 and where s, t ∈ L(P ). We finally denote by P(G) the set of all paths of G. Given P ∈ P(G), we say that v ∈ V (P ) is an internal vertex of P if degP (v) = 2. 
Given an integer i and a graph G, we say that G is i-connected if for each {u, v} ∈ V (G) , there 2 exists a set Q ⊆ P(G) of (u, v)-paths of G such that |Q| = i and for each P1 , P2 ∈ Q such that P1 6= P2 , V (P1 ) ∩ V (P2 ) = {u, v}. We denote by Kr the complete graph on r vertices, by Kr1 ,r2 the complete bipartite + graph where the one part has r1 vertices and the other r2 , and by K2,r the graph obtained 5 if we take K2,r and add an edge between the two vertices of the part of size 2. Finally we denote by Pk and Ck the path and cycles of k vertices respectively. We define the + diamond to be the graph K2,2 . Block-cut trees. A connected graph G is biconnected if for any v ∈ V (G), G \ {v} is connected (notice that K2 is the only biconnected graph that it is not 2-connected). A block of a graph G is a maximal biconnected subgraph of G. We name block(G) the set of all blocks of G and we name cut(G) the set of all cut vertices of G. If G is connected, we define the block-cut tree of G to be the tree bct(G) = (V, E) such that • V = block(G) ∪ cut(G) and • E = {{B, v} | B ∈ block(G), v ∈ cut(G) ∩ V (B)}. Note that L(bct(G)) ⊆ block(G). It is worth mentioning that the block-cut tree of a graph can be computed in linear time using depth-first search [18]. Let F be a set of connected graphs such that for each H ∈ F, |V (H)| ≥ 2. Given H ∈ F and B ∈ L(bct(H)), we say that (H, B) is an essential pair if for each H ′ ∈ F and each B ′ ∈ L(bct(H ′ )), |E(B)| ≤ |E(B ′ )|. Given an essential pair (H, B) of F, we define the first vertex of (H, B) to be, if it exists, the only cut vertex of H contained in V (B), or an arbitrarily chosen vertex of V (B) otherwise. We define the second vertex of (H, B) to be an arbitrarily chosen vertex of V (B) that is a neighbor in H[B] of the first vertex of (H, B). Note that, given an essential pair (H, B) of F, the first vertex and the second vertex of (H, B) exist and, by definition, are fixed. Moreover, given an essential pair (H, B) of F, we define the core of (H, B) to be the graph H \ (V (B) \ {a}) where a is the first vertex of (H, B). Note that a is a vertex of the core of (H, B). For the statement of our results, we need to consider the class K containing every + | r ≥ 0} = ∅. connected graph G such that L(bct(G)) ∩ {K2,r , K2,r Minors and topological minors. Given two graphs H and G and two functions φ : V (H) → V (G) and σ : E(H) → P(G), we say that (φ, σ) is a topological minor model of H in G if • for every {x, y} ∈ E(H), σ({x, y}) is an (φ(x), φ(y))-path in G and • if P1 , P2 are two distinct paths in σ(E(H)), then none of the internal vertices of P1 is a vertex of P2 . The branch vertices of (φ, σ) are the vertices in φ(V (E)), while the subdivision vertices of (φ, σ) are the internal vertices of the paths in σ(E(H)). We say that G contains H as a topological minor, denoted by H tm G, if there is a topological minor model (φ, σ) of H in G. Given two graphs H and G and a function φ : V (H) → 2V (G) , we say that φ is a minor model of H in G if 6 • for every x ∈ V (H), G[φ(x)] is a connected non-empty graph and • for every {x, y} ∈ E(H), there exist x′ ∈ φ(x) and y ′ ∈ φ(y) such that {x′ , y ′ } ∈ E(G). We say that G contains H as a minor, denoted by H m G, if there is a minor model φ of H in G. Let H be a graph. We define the set of graphs tpm(H) as follows: among all the graphs containing H as a minor, we consider only those that are minimal with respect to the topological minor relation. 
The following two observations follow easily from the above definitions. Observation 1. There is a function f1 : N → N such that for every h-vertex graph H, every graph in tpm(H) has at most f1 (h) vertices. Observation 2. Given two graphs H and G, H is a minor of G if and only if some of the graphs in tpm(H) is a topological minor of G. Graph separators and (topological) minors. Let G be a graph and S ⊆ V (G). Then for each connected component C of G \ S, we define the cut-clique of the triple (C, G, S)  to be the graph whose vertex set is V (C)∪ S and whose edge set is E(G[V (C)∪ S])∪ S2 . Lemma 1. Let i ≥ 2 be an integer, let H be an i-connected graph, let G be a graph, and let S ⊆ V (G) such that |S| ≤ i − 1. If H is a topological minor (resp. a minor) of G, then there exists a connected component G′ of G \ S such that H is a topological minor (resp. a minor) of the cut-clique of (G′ , G, S). Proof. We prove the lemma for the topological minor version, and the minor version can be proved with the same kind of arguments. Let i, H, G, and S be defined as in the statement of the lemma. Assume that H tm G and let (φ, σ) be a topological minor model of H in G. If S is not a separator of G, then the statement is trivial, as in that case the cut-clique of (G \ S, G, S) is a supergraph of G. Suppose henceforth that S is a separator of G, and assume for contradiction that there exist two connected components G1 and G2 of G \ S and two distinct vertices x1 and x2 of H such that φ(x1 ) ∈ V (G1 ) and φ(x2 ) ∈ V (G2 ). Then, as H is i-connected, there should be i internally vertexdisjoint paths from φ(x1 ) to φ(x2 ) in G. As S is a separator of size at most i − 1, this is not possible. Thus, there exists a connected component G′ of G \ S such that for each x ∈ V (H), φ(x) ∈ V (G′ )∪S. This implies that H is a topological minor of the cut-clique of (G′ , G, S). Lemma 2. Let G be a connected graph, let v be a cut vertex of G, and let V be the vertex set of a connected component of G \ {v}. If H is a connected graph such that H tm G and for each leaf B of bct(H), B 6tm G[V ∪ {v}], then H tm G \ V . Proof. Let G, v, V , and H be defined as in the statement of the lemma. Let B ∈ L(bct(H)). If B is a single edge, then the condition B 6tm G[V ∪ {v}] implies that 7 V = ∅. But V is the vertex set of a connected component of G \ {v} and so V 6= ∅. This implies that the case B is a single edge cannot occur. If B is not a simple edge, then by definition B is 2-connected and then, by Lemma 1, B tm G \ V . This implies that there is a topological minor model (φ, σ) of H in G such that for each B ∈ L(bct(H)) and for each b ∈ B, φ(b) 6∈ V . S We show now that for each x ∈ V (H), φ(x) 6∈ V . If V (H) \ ( B∈L(bct(H)) V (B)) = ∅ S then the result is already proved. Otherwise, let x ∈ V (H)S\ ( B∈L(bct(H)) V (B)). By definition of the block-cut tree, there exist b1 and b2 in B∈L(bct(H)) V (B) such that x lies on a (b1 , b2 )-path P of P(H). Let Pi be the (bi , x)-subpath of P for each i ∈ {1, 2}. By definition of P , we have that V (P1 ) ∩ V (P2 ) = {x}. This implies that there exists a (φ(b1 ), φ(x))-path P1′ and a (φ(b2 ), φ(x))-path P2′ in P(G) such that V (P1′ ) ∩ V (P2′ ) = {φ(x)}. Then, as v is a cut vertex of G, it follows that φ(x) 6∈ V . Thus, for each x ∈ V (H), φ(x) 6∈ V . Let {x, y} be an edge of E(H). As σ({x, y}) is a simple (φ(x), φ(y))-path, both φ(x) and φ(y) are not in V and v is a cut vertex of G, we have, with the same argumentation that before that, for each z ∈ V (σ({x, y}), z 6∈ V . 
This concludes the proof. Using the same kind of argumentation with minors instead of topological minors, we also obtain the following lemma. Lemma 3. Let G be a connected graph, let v be a cut vertex of G, and let V be the vertex set of a connected component of G \ {v}. If H is a graph such that H m G and for each leaf B of bct(H), B 6m G[V ∪ {v}], then H m G \ V . In the above two lemmas, we have required graph H to be connected so that bct(H) is well-defined, but we could relax this requirement, and replace in both statements “for each leaf B of bct(H)” with “for each connected component H ′ of H and each leaf B of bct(H ′ )”. Graph collections. Let F be a collection of graphs. From now on instead of “collection of graphs” we use the shortcut “collection”. If F is a collection that is finite, non-empty, and all its graphs are non-empty, then we say that F is a regular collection. For any regular collection F, we define size(F) = max{|V (H)| | H ∈ F} ∪ {|F|}. Note that if the size of F is bounded, then the size of the graphs in F is also bounded. We say that F is a planar collection (resp. planar subcubic collection) if it is regular and at least one of the graphs in F is planar (resp. planar and subcubic). We say that F is a connected collection if it is regular and all the graphs in F are connected. We say that F is an (topological) minor antichain if no two of its elements are comparable via the (topological) minor relation. Let F be a regular collection. We extend the (topological) minor relation to F such that, given a graph G, F tm G (resp. F m G) if and only if there exists a graph H ∈ F such that H tm G (resp. H m G). We also denote extm (F) = {G | F tm G}, i.e., extm (F) is the class of graphs that do not contain any graph in F as a topological minor. The set exm (F) is defined analogously. 8 Tree decompositions. A tree decomposition of a graph G is a pair D = (T, X ), where T is a tree and X = {Xt | t ∈ V (T )} is a collection of subsets of V (G) such that: S • t∈V (T ) Xt = V (G), • for every edge {u, v} ∈ E, there is a t ∈ V (T ) such that {u, v} ⊆ Xt , and • for each {x, y, z} ⊆ V (T ) such that z lies on the unique path between x and y in T , Xx ∩ Xy ⊆ Xz . We call the vertices of T nodes of D and the sets in X bags of D. The width of a tree decomposition D = (T, X ) is maxt∈V (T ) |Xt | − 1. The treewidth of a graph G, denoted by tw(G), is the smallest integer w such that there exists a tree decomposition of G of width at most w. For each t ∈ V (T ), we denote by Et the set E(G[Xt ]). Parameterized complexity. We refer the reader to [9, 12] for basic background on parameterized complexity, and we recall here only some very basic definitions. A parameterized problem is a language L ⊆ Σ∗ × N. For an instance I = (x, k) ∈ Σ∗ × N, k is called the parameter. A parameterized problem is fixed-parameter tractable (FPT) if there exists an algorithm A, a computable function f , and a constant c such that given an instance I = (x, k), A (called an FPT algorithm) correctly decides whether I ∈ L in time bounded by f (k) · |I|c . Definition of the problems. Let F be a regular collection. We define the parameter tmF as the function that maps graphs to non-negative integers as follows: tmF (G) = min{|S| | S ⊆ V (G) ∧ G \ S ∈ extm (F)}. (1) The parameter mF is defined analogously. The main objective of this paper is to study the problem of computing the parameters tmF and mF for graphs of bounded treewidth under several instantiations of the collection F. 
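As a quick illustration of the tree decomposition definition above (a sketch, not part of the paper), the following Python/networkx snippet checks the three conditions for a given assignment of bags; the graph, tree, and bag names are hypothetical.

import networkx as nx

def is_tree_decomposition(G, T, bags):
    # `bags` maps each node t of the tree T to a set of vertices of G.
    # (1) every vertex of G appears in some bag
    if set().union(*bags.values()) != set(G.nodes):
        return False
    # (2) every edge of G is contained in some bag
    if not all(any({u, v} <= bags[t] for t in T.nodes) for u, v in G.edges):
        return False
    # (3) for every vertex, the bags containing it induce a connected subtree
    for v in G.nodes:
        nodes_v = [t for t in T.nodes if v in bags[t]]
        if not nx.is_connected(T.subgraph(nodes_v)):
            return False
    return nx.is_tree(T)

# toy usage: a path a-b-c with a two-bag decomposition of width 1
G = nx.path_graph(["a", "b", "c"])
T = nx.path_graph([0, 1])
bags = {0: {"a", "b"}, 1: {"b", "c"}}
print(is_tree_decomposition(G, T, bags))          # True
print("width:", max(len(b) for b in bags.values()) - 1)   # 1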
The corresponding decision problems are formally defined as follows. F-M-Deletion Input: A graph G and an integer k ∈ N. Parameter: The treewidth of G. Output: Is mF (G) ≤ k? F-TM-Deletion Input: A graph G and an integer k ∈ N. Parameter: The treewidth of G. Output: Is tmF (G) ≤ k? Note that in both above problems, we can always assume that F is an antichain with respect to the considered relation. Indeed, this is the case because if F is contains two graphs H1 and H2 where H1 tm H2 , then tmF (G) = tmF ′ (G) where F ′ = F \ {H2 } (similarly for the minor relation). Throughout the article, we let n and tw be the number of vertices and the treewidth of the input graph of the considered problem, respectively. In some proofs, we will also use w to denote the width of a (nice) tree decomposition that is given together with the input graph (which will differ from tw by at most a factor 5). 9 3 Dynamic programming algorithms for computing tmF The purpose of this section is to prove the following results. Theorem 1. If F is a regular collection, where d = size(F), then there exists an algoO (tw·log tw) rithm that solves F-TM-Deletion in 22 d · n steps. Theorem 2. If F is a connected and planar subcubic collection, where d = size(F), then there exists an algorithm that solves F-TM-Deletion in 2Od (tw·log tw) · n steps. Theorem 3. If F is a regular collection, where d = size(F), then there exists an algoO (tw·log tw) · n steps. rithm that solves F-M-Deletion in 22 d Theorem 4. If F is a connected and planar collection, where d = size(F), then there exists an algorithm that solves F-M-Deletion in 2Od (tw·log tw) · n steps. The following lemma is a direct consequence of Observation 2. Lemma 4. Let F be aS regular collection. Then, for every graph G, it holds that mF (G) = tmF ′ (G) where F ′ = F ∈F tpm(F ). It is easy to see that for every (planar) graph F , the set tpm(F ) contains a subcubic (planar) graph. Combining this observation with Lemma 4 and Observation 1, Theorems 3 and 4 follow directly from Theorems 1 and 2, respectively. The rest of this section is dedicated to the proofs of Theorems 1 and 2. We first need to provide some preliminaries in Subsections 3.1 and 3.2. The proof of Theorem 3 presented in Subsection 3.3. Before we present the proof of Theorem 2 in Subsection 3.5, we give in Subsection 3.4 bounds on the sets of representatives when F is a connected and planar collection. 3.1 Boundaried graphs and their equivalence classes Many of the following definitions were introduced in [5, 16] (see also [17, 22]. Boundaried graphs. Let t ∈ N. A t-boundaried graph is a triple G = (G, R, λ) where G is a graph, R ⊆ V (G), |R| = t, and λ : R → N+ is an injective function. We call R the boundary of G and we call the vertices of R the boundary vertices of G. We also call G the underlying graph of G. Moreover, we call t = |R| the boundary size of G and we define the label set of G as Λ(G) = λ(R). We also say that G is a boundaried graph if there exists an integer t such that G is an t-boundaried graph. We say that a boundary graph G is consecutive if Λ(G) = [1, |R|]. We define the size of G = (G, R, λ), as |V (G)| and we use the notation V (G) and E(G) for V (G) and E(G), respectively. If S ⊆ V (G), we define G′ = G \ S such that G′ = (G′ , R′ , λ′ ), G′ = G \ S, R′ = R \ S, and λ′ = λ|R′ . We define B (t) as the set of all t-boundaried graphs. We also use the notation B∅ = ((∅, {∅}), ∅, ∅) to denote the (unique) 0-boundaried empty boundaried graph. 
Given a t-boundaried graph G = (G, R, λ), we define ψG : R → [1, t] such that for each v ∈ R, ψG (v) = |{u ∈ R | λ(u) ≤ λ(v)}|. Note that, as λ is an injective function, ψG is a bijection and, given a boundary vertex v of G, we call ψG (v) the index of v. 10 Let t ∈ N. We say that two t-boundaried graphs G1 = (G1 , R1 , λ1 ) and G2 = (G2 , R2 , λ2 ) are isomorphic if there is a bijection σ : V (G1 ) → V (G2 ) that is an isomor−1 ◦ ψG2 ⊆ σ, i.e., σ sends phism σ : V (G1 ) → V (G2 ) from G1 to G2 and additionally ψG 1 the boundary vertices of G1 to equally-indexed boundary vertices of G2 . We say that G1 −1 ◦ ψG2 is an isomorphism from G1 [R1 ] to G2 [R2 ] and G2 are boundary-isomorphic if ψG 1 and we denote this fact by G1 ∼ G2 . It is easy to make the following observation. Observation 3. For every t ∈ N, if S is a collection of t-boundaried graphs where t |S| > 2(2) , then S contains at least two boundary-isomorphic graphs. Topological minors of boundaried graphs. Let G1 = (G1 , R1 , λ1 ) and G2 = (G2 , R2 , λ2 ) be two boundaried graphs. We say that G1 is a topological minor of G2 if there is a topological minor model (φ, σ) of G1 in G2 such that • ψG1 = ψG2 ◦ φ|R1 , i.e., the vertices of R1 are mapped via φ to equally indexed vertices of R2 and • none of the vertices in R2 \ φ(R1 ) is a subdivision vertex of (φ, σ). Operations on boundaried graphs. Let G1 = (G1 , R1 , λ1 ) and G2 = (G2 , R2 , λ2 ) be two t-boundaried graphs. We define the gluing operation ⊕ such that (G1 , R1 , λ1 ) ⊕ (G2 , R2 , λ2 ) is the graph G obtained by taking the disjoint union of G1 and G2 and −1 −1 then, for each i ∈ [1, t], identifying the vertex ψG (i) and the vertex ψG (i). Keep in 1 2 mind that G1 ⊕ G2 is a graph and not a boundaried graph. Moreover, the operation ⊕ requires both boundaried graphs to have boundaries of the same size. Let G = (G, R, λ) be a t-boundaried graph and let I ⊆ N. We use the notation G|I = (G, λ−1 (I), λ|λ−1 (I) ), i.e., we do not include in the boundary anymore the vertices that are not indexed by numbers in I. Clearly, G|I is a t′ -boundaried graph where t′ = |I ∩ Λ(G)|. Let G1 = (G1 , R1 , λ1 ) and G2 = (G2 , R2 , λ2 ) be two boundaried graphs. Let also I = λ1 (R1 ) ∩ λ2 (R2 ) and let t = |R1 | + |R2 | − |I|. We define the merging operation ⊙ such that (G1 , R1 , λ1 ) ⊙ (G2 , R2 , λ2 ) is the t-boundaried graph G = (G, R, λ) where G is obtained by taking the disjoint union of G1 and G2 and then for each i ∈ I identify −1 the vertex λ−1 1 (i) with the vertex λ2 (i). Similarly, R is the obtained by R1 ∪ R2 after applying the same identifications to pairs of vertices in R1 and R2 . Finally, λ = λ′1 ∪ λ′2 where, for j ∈ [1, 2], λ′j is obtained from λj after replacing each (x, i) ∈ λj (for some −1 i ∈ I) by (xnew , i), where xnew is the result of the identification of λ−1 1 (i) and λ2 (i). Observe that G1 ⊙ G2 is a boundaried graph and that the operation ⊙ does not require input boundaried graphs to have boundaries of the same size. Let G = (G, R, λ) be a consecutive t-boundaried graph and let I ⊆ N be such that |I| = t. We define G = (G, R, λ) ⋄ I as the unique t-boundaried graph G′ = (G, R, λ′ ) where λ′ : R → I is a bijection and ψG′ = λ. 11 Equivalence relations. Let F be a regular collection and let t be a non-negative integer. We define an equivalence relation ≡(F ,t) on t-boundaried graphs as follows: Given two t-boundaried graphs G1 and G2 , we write G1 ≡(F ,t) G2 to denote that ∀G ∈ B (t) F tm G ⊕ G1 ⇐⇒ F tm G ⊕ G2 . 
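The following is a small illustrative Python/networkx sketch (not from the paper) of a t-boundaried graph with its index function ψ and of the gluing operation ⊕; the class and function names are hypothetical.

from dataclasses import dataclass

import networkx as nx

@dataclass
class BoundariedGraph:
    # A t-boundaried graph (G, R, lam): R is the boundary set and lam an
    # injective labeling of the boundary vertices by positive integers.
    G: nx.Graph
    R: frozenset
    lam: dict   # injective map from R to positive integers

    def index(self, v):
        # psi_G(v) = number of boundary vertices u with lam(u) <= lam(v)
        return sum(1 for u in self.R if self.lam[u] <= self.lam[v])

def glue(B1, B2):
    # The gluing operation: disjoint union of the underlying graphs, then
    # identify the boundary vertices of B1 and B2 that have the same index.
    # Requires |R1| = |R2|; the result is a plain graph, not a boundaried one.
    assert len(B1.R) == len(B2.R)
    G = nx.Graph()
    G.add_nodes_from((1, v) for v in B1.G.nodes)
    G.add_edges_from(((1, u), (1, v)) for u, v in B1.G.edges)
    G.add_nodes_from((2, v) for v in B2.G.nodes)
    G.add_edges_from(((2, u), (2, v)) for u, v in B2.G.edges)
    by_index = {B1.index(v): (1, v) for v in B1.R}
    for v in B2.R:
        G = nx.contracted_nodes(G, by_index[B2.index(v)], (2, v), self_loops=False)
    return G

# toy usage: gluing two paths on 3 vertices along their endpoints gives a 4-cycle
P1 = BoundariedGraph(nx.path_graph(3), frozenset({0, 2}), {0: 5, 2: 9})
P2 = BoundariedGraph(nx.path_graph(3), frozenset({0, 2}), {0: 1, 2: 4})
H = glue(P1, P2)
print(H.number_of_nodes(), H.number_of_edges())   # 4 4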
It is easy to verify that ≡(F ,t) is an equivalent relation. We set up a set of representatives R(F ,t) as a set containing, for each equivalent class C of ≡(F ,t) , some consecutive tboundaried graph in C with minimum number of edges and no isolated vertices out of its boundary (if there are more than one such graphs, pick one arbitrarily). Given a t-boundaried graph G we denote by rep(F ) (G) the t-boundaried graph B ∈ R(F ,t) where B ≡(F ,t) G and we call B the F-representative of G. Clearly, rep(F ) (B) = B. Note that if B = (B, R, λ) is a t-boundaried graph and F tm B, then rep(F ) (B) is, by definition, a consecutive t-boundaried graph whose underlying graph is a graph H ∈ F with minimum number of edges, possibly completed with t − |V (H)| isolated vertices in the case where |V (H)| < t. We denote this graph by F(F ,t) (if there are many possible choices, just pick one arbitrarily). Note also that the underlying graph of every boundaried graph in R(F ,t) \ {F(F ,t) } belongs in extm (F). We need the following three lemmata. The first one is a direct consequence of the definitions of the equivalence relation ≡(F ,t) and the set of representatives R(F ,t) . Lemma 5. Let F be a regular collection and let t ∈ N. Let also B1 and B2 be tboundaried graphs. Then B1 ≡(F ,t) B2 if an only if ∀G ∈ R(F ,t) F tm G ⊕ B1 ⇐⇒ F tm G ⊕ B2 . Lemma 6. Let F be a connected collection and let t ∈ N. Let also B ∈ R(F ,t) . Then every connected component of the underlying graph of B intersects its boundary set. Proof. Let B = (B, R, λ) ∈ R(F ,t) . As the lemma follows directly in the case where B = F(F ,t) , we may assume that F 6tm B. We assume, towards a contradiction, that B has a component C whose vertex set does not contain any of the vertices of R. This means that B can be seen as the disjoint union of C and B ′ = B \ V (C). As F 6tm B, we also have that F 6tm C. Let now B′ = (B ′ , R, λ). Clearly |E(B′ )| < |E(B)|. We will arrive to a contradiction by proving that B′ ≡(F ,t) B. Let G ∈ B (t) . Note that G ⊕ B is the disjoint union of G ⊕ B′ and C. As all graphs in F are connected, it follows that a (connected) graph H ∈ F is a topological minor of G ⊕ B if and only if H is a topological minor of G ⊕ B′ . We conclude that B′ ≡(F ,t) B, a contradiction. Lemma 7. Let F be a connected collection. Then, for every graph B, it holds that rep(F ) ((B, ∅, ∅)) = B∅ if and only if F 6tm G. Proof. Let B = (B, ∅, ∅) where B is a graph. Recall that if F tm B, then rep(F ) (B) = F(F ,t) . As F does not contain the empty graph, we have that F(F ,t) 6= B∅ , therefore rep(F ) (B) 6= B∅ . Suppose now that F 6tm B. We have to prove that for every G ∈ B (0) , F tm G ⊕ B ⇐⇒ F tm G ⊕ B∅ . Let G = (G, ∅, ∅) ∈ B (0) . Note that G ⊕ B is the disjoint 12 union of G and B and that G ⊕ B∅ = G. As F 6tm B and F is connected, it follows that the disjoint union of B and G contains some (connected) graph in F if and only if B does. This implies that F tm G ⊕ B ⇐⇒ F tm G ⊕ B∅ , as required. (t) Folios. Let F be a regular collection. Given t, r ∈ N, we define AF ,r as the set of all pairwise non-isomorphic boundaried graphs that contain at most r non-boundary vertices, whose label set is a subset of [1, t], and whose underlying graph belongs in (t) extm (F). Note that a graph in AF ,r is not necessarily a t-boundaried graph. Given a t-boundaried graph B and an integer r ∈ N, we define the (F, r)-folio of (t) B, denoted by folio(B, F, r) the set containing all boundaried graphs in AF ,r that are topological minors of B. 
Moreover, in case F tm B, we also include in folio(B, F, r) the graph F(F ,t) . (t) (t) (F ,t) } We also define FF ,r = 2AF ,r ∪{F (t) and notice that {folio(B, F, r) | B ∈ B (t) } ⊆ (t) FF ,r , i.e., FF ,r contains all different (F, r)-folios of t-boundaried graphs. Lemma 8. Let t ∈ N and let F be a regular collection. For every t-boundaried graph B and every r ∈ N, it holds that |folio(B, F, r)| = 2Or+d (t log t) , where d = size(F). Or+d (t log t) (t) . Moreover, |FF ,r | = 22 Proof. Let t ∈ N, let F be a regular collection, let r ∈ N, and let n = t + r. We prove (t) (t) a stronger result, namely that |AF ,r | = 2Or+d (t log t) . The claimed bound on |FF ,r | then (t) follows directly by definition of the set FF ,r . By [25], there exists a constant c such that for each G ∈ extm (F), |E(G)| ≤ c · |V (G)|. By definition, every underlying graph of an (t) (t) element of AF ,r is in extm (F). If we want to construct an element G = (G, R, λ) of AF ,r  n2 ≤ c · n1+2·c·n with at most n vertices, then there are asymptotically at most c · n · c·n  n t |R| choices for the edge set E(G), at most t · t ≤ t · n choices for R, and t ≤ tt choices (t) for the function λ. We obtain that AF ,r is of size at most n · 2(1+2·c·n) log n · 2t log t = 2Or+d (t log t) , and the lemma follows. The following lemma indicates that folios define a refinement of the equivalence relation ≡(F ,t) . Lemma 9. Let F be a regular collection and let d = size(F). Let also B1 and B2 be two t-boundaried graphs. If folio(B1 , F, d) = folio(B2 , F, d), then B1 ≡(F ,t) B2 . Proof. Let B1 and B2 be two t-boundaried graphs such that folio(B1 , F, d) = folio(B2 , F, d). We fix G ∈ B (t) , and we need to prove that F tm G ⊕ B1 if and only if F tm G ⊕ B2 . Assume first that F tm G ⊕ B1 . Then there exists a graph F ∈ F and a topological minor model (φ, σ) of F in G⊕ B1 . This topological minor model (φ, σ) can be naturally decomposed into two topological minor models (φ0 , σ0 ) and (φ1 , σ1 ) of two graphs F0 (t) and F1 in AF ,d , respectively, with F0 ⊙ F1 = F , such that (φ0 , σ0 ) (resp. (φ1 , σ1 )) is a topological minor model of F0 (resp. F1 ) in the (boundaried) graph G (resp. B1 ). 13 Since folio(B1 , F, d) = folio(B2 , F, d), there exists a topological minor model (φ2 , σ2 ) of F1 in B2 . Combining the topological minor models (φ0 , σ0 ) and (φ2 , σ2 ) gives rise to a topological minor model (φ′ , σ ′ ) of F in G ⊕ B2 , and therefore F tm G ⊕ B2 . Conversely, assume that F 6tm G ⊕ B1 , and assume for contradiction that there exists a graph F ∈ F and a topological minor model (φ, σ) of F in G ⊕ B2 . Using the same arguments as above, (φ, σ) implies the existence of a topological minor model (φ′ , σ ′ ) of F in G ⊕ B1 , contradicting the hypothesis that F 6tm G ⊕ B1 . Lemmata 8 and 9 directly imply the following. Lemma 10. There exists a function h1 : N × N → N such that if F is a regular collection O (t·log t) . and t ∈ N, then |R(F ,t) | ≤ h1 (d, t) where d = size(F). Moreover h1 (d, t) = 22 d 3.2 Branch decompositions of boundaried graphs Let G = (G, R, λ) be a boundaried graph and let ρ be a vertex labeling of G where λ ⊆ ρ. A branch decomposition of G is a pair (T, σ) where T is a ternary tree and σ : E(G) ∪ {R} → L(T ) is a bijection. Let r = σ(R) and let er be the unique edge in T that is incident to r. We call r the root of T . Given an edge e ∈ E(T ), we define Te as the one of the two connected components of T \{e} that does not contain the S root r. 
We then define Ge = (Ge , Re , λe ) where E(Ge ) = σ −1 (L(Te ) ∩ L(T )), V (Ge ) = E(Ge ), Re is the set containing every vertex of G that is an endpoint of an edge in E(Ge ) and also belongs in a set in {R} ∪ (E(G) \ E(Ge )) (here we treat edges in E(G) \ E(Ge ) as 2-element sets), and λe = ρ|Re . We also set te = |Re | and observe that Ge is a te -boundaried graph. The width of (T, σ) is max{te | e ∈ E(T )}. The branchwidth of G, denoted by bw(G), is the minimum width over all branch decompositions of G. This is an extension of the definition of a branch decomposition on graphs, given in [32], to boundaried graphs. Indeed, of G is a graph, then a branch decomposition of G is a branch decomposition of (G, ∅, ∅). We also define the branchwidth of G as bw(G) = bw(G, ∅, ∅). Lemma 11. Let G = (G, R, λ) be a boundaried graph. Then bw(G) ≤ bw(G) + |R|. Proof. Let (T ′ , σ ′ ) be a branch decomposition of G′ = (G, ∅, ∅) and let r be the root of T ′ . Recall that G′e = (G′e , Re′ , λ′e ), e ∈ E(T ′ ). We construct a branch decomposition (T, σ) of G = (G, R, λ) as follows: we set T = T ′ and σ = (σ ′ \ {(∅, r)}) ∪ {(R, r)}. Note that Ge = (G′e , Re , λe ), e ∈ E(T ), where Re ⊆ Re′ ∪R. This means that |Re | ≤ |Re′ |+|R|, therefore bw(G) ≤ bw(G) + |R|. The following lemma is a combination of the single-exponential linear-time constantfactor approximation of treewidth by Bodlaender et al. [4], with the fact that any graph G with |E(G)| ≥ 3 satisfies that bw(G) ≤ tw(G)+1 ≤ 32 bw(G) [32]; it is worth noting that from the proofs of these inequalities, simple polynomial-time algorithms for transforming a branch (resp. tree) decomposition into a tree (resp. branch) decompositions can be derived. 14 Lemma 12. There exists an algorithm that receives as input a graph G and a w ∈ N and either reports that bw(G) > w or outputs a branch decomposition (T, σ) of G of width O(w). Moreover, this algorithm runs in 2O(w) · n steps. Lemma 13. There exists a function µ : N → N such that for every planar subcubic collection F, every graph in extm (F) has branchwidth at most y = µ(d) where d = size(F). Proof. Let G ∈ extm (F) and let F ∈ F be a planar subcubic graph. Since F is subcubic and F tm G, it follows (see [11]) that F m G, and since F is planar this implies by [31] that tw(G), hence bw(G) as well, is bounded by a function depending only on F . 3.3 Proof of Theorem 1 We already have all the ingredients to prove Theorem 1. Proof of Theorem 1. We provide a dynamic programming algorithm for the computation of tmF (G) for the general case where F is a regular collection. We first consider an, arbitrarily chosen, vertex labeling ρ of G. From Lemma 12, we may assume that we have a branch decomposition (T, σ) of (G, ∅, ∅) of width O(w), where w = tw(G). This gives rise to the te -boundaried graphs Ge = (Ge , Re , λe ) for each e ∈ E(T ). Moreover, if r is the root of T , σ −1 (r) = ∅ = Rer and Ger = (G, ∅, ∅). Keep also in mind that te = O(w) for every e ∈ E(T ). (t′ ) For each e ∈ E(T ), we say that (L, C) is an e-pair if L ⊆ Re and C ∈ FFe,d where t′e = P (t −i) te − |L|. We also denote by Pe the set of all e-pairs. Clearly, |Pe | = i∈[0,te ] 2i · |FFe,d |, O (w log w) therefore, from Lemma 8, |Pe | = 22 d . (e) We then define the function tmF : Pe → N such that if (L, C) ∈ Pe , then (e) tmF (L, C) = min{|S| | S ⊆ V (Ge ) ∧ L = Re ∩ S ∧ C = folio(Ge \ S, d)} ∪ {∞}. (0) (0) Note that Per = ∅ × FF ,d . Note also that the set AF ,d contains only those that do not contain some graph in F as a topological minor. 
Therefore (0) (e ) tmF (G) = min{tmF r (∅, C) | C ∈ 2AF ,d }. (e) Hence, our aim is to give a way to compute tmF for every e ∈ E(T ). Our dynamic programming algorithm does this in a bottom-up fashion, starting from the edges that contain as endpoints leaves of T that are different from the root. Let ℓ ∈ L(T ) \ {r} and let eℓ be the unique edge of T that contains it. Let also σ −1 (eℓ ) = {x, y}. Clearly, Geℓ = ({x, y}, {{x, y}}) and   (2) (1) Peℓ = ({x, y}, A0F ,d ) ∪ ( {x}, {y} × AF ,d ) ∪ ({∅} × AF ,d ). (e ) As the size of the elements in Peℓ depends only on d, it is possible to compute tmF l in Od (1) steps. 15 Let e ∈ {er } ∪ E(T \ L(T )), and let e1 and e2 be the two other edges of T that share an endpoint with e and where each path from them to r contains e. We also set  Fe = Re1 ∪ Re2 \ Re . (e) For the dynamic programming algorithm, it is enough to describe how to compute tmF (e ) given tmF i , i ∈ [1, 2]. For this, given an e-pair (L, C) ∈ Pe it is possible to verify that  (e ) (e ) (e) tmF (L, C) = min tmF 1 (L1 , C1 ) + tmF 2 (L2 , C2 ) − |L1 ∩ L2 | | (Li , Ci ) ∈ Pei , i ∈ [1, 2] , Li \ Fe = L ∩ Rei , i ∈ [1, 2] , L1 ∩ Fe = L2 ∩ Fe , and   [  folio (B1 ⋄ Z1 ) ⊙ (B2 ⋄ Z2 ) |Z , F, te − |L| C= (B1 ,B2 )∈C1 ×C2 where Z = ρ(Re \ L) and Zi = ρ(Rei \ Li ), i ∈ [1, 2] . (e) (e ) Note that given tmF i , i ∈ [1, 2] and a (L, B) ∈ Pe , the value of tmF (L, B) can O (w log w) be computed by the above formula in O(|Pe1 | · |Pe2 |) = 22 d steps. As |Pe | = Od (w log w) Od (w log w) (e) 2 2 steps. This 2 , the computation of the function tmF requires again 2 O (w log w) Od (w·log w) 2 · · |V (T )| = 22 d means that the whole dynamic programming requires 2 |E(G)| steps. As |E(G)| = O(tw(G) · |V (G)|), the claimed running time follows. 3.4 Bounding the sets of representatives We now prove the following result. Lemma 14. Let t ∈ N and F be a connected and planar collection, where d = size(F), and let R(F ,t) be a set of representatives. Then |R(F ,t) | = 2Od (t·log t) . Moreover, there exists an algorithm that given F and t, constructs a set of representatives R(F ,t) in 2Od (t·log t) steps. Before we proceed with the proof of Lemma 14, we need a series of results. The proof of the following lemma uses ideas similar to the ones presented by Garnero et al. [17]. Lemma 15. There is a function h2 : N × N → N such that if F is a connected and planar collection, where d = size(F), t ∈ N, B = (B, R, λ) ∈ R(F ,t) \ {F(F ,t) }, z ∈ N, and X is a subset of V (B) such that X ∩ R = ∅ and |NB (X)| ≤ z, then |X| ≤ h2 (z, d). Proof. We set h2 (z, d) = 2h1 (d,µ(d)+z)·(z+µ(d)+1)+ζ(µ(d)+z)−1 + z, where h1 is the function x of Lemma 10, µ is the function of Lemma 13, and ζ : N → N such that ζ(x) = 2(2) . Let y = µ(d), q = h1 (d, y + z) · (x + y + 1) · ζ(y + z), s = h2 (z, d), and observe that s = 2q−1 + z. Towards a contradiction, we assume that |X| > s. 16 Let B = (B, R, λ) ∈ R(F ,t) \ {F(F ,t) } and let ρ be a vertex-labeling of B where λ ⊆ ρ. As B 6= F(F ,t) , it follows that B ∈ extm (F). (2) We set G = B[X ∪ NB (X)] and observe that |V (G)| ≥ |X| > s. As G is a subgraph of B, (2) implies that G ∈ extm (F), (3) and therefore, from Lemma 13, bw(G) ≤ y. Let R′ = NB (X) and λ′ = ρ|R′ . We set G = (G, R′ , λ′ ). From Lemma 11, bw(G) ≤ bw(G) + |R′ | ≤ y + |R′ | = y + z. Note now that G has at most |R′ | connected components. Indeed, if it has more, then one of them, say C, will not intersect |R′ |. 
This, together with the fact that R ∩ X = ∅, implies that C is also a connected component of B whose vertex set is disjoint from R, a contradiction to Lemma 6. We conclude that |E(G)| ≥ |V (G)| − |R′ | ≥ |V (G)| − z > s − z = 2q−1 . Let (T, σ) be a branch decomposition of G of width at most y + z. We also consider the graph Ge = (Ge , Re , λe ), for each e ∈ E(T ) (recall that λe ⊆ ρ). Observe that ∀e ∈ E(T ), |Re | ≤ y + z. (4) S We define H = {repF (Ge ) | e ∈ E(T )}. From (4), H ⊆ i∈[0,y+z] R(F ,i) . From Lemma 10, |H| ≤ (y + z + 1) · h1 (d, y + z), therefore q ≥ |H| · ζ(y + z). Let r be the root of T and let P be a longest path in T that has r as an endpoint. As B has more than 2q−1 edges, T also has more than 2q−1 leaves different from r. This means that P has more than q edges. Recall that q ≥ |H| · ζ(y + z). As a consequence, there is a set S ⊆ {Ge | e ∈ E(P )} where |S| > ζ(y + z) and repF (S) contains only one boundaried graph (i.e., all the boundaried graphs in S have the same F-representative). From Observation 3, there are two graphs Ge1 , Ge2 ∈ S, e1 6= e2 , such that Ge1 ≡(F ,t) Ge2 and Ge1 ∼ Ge2 . (5) (6) W.l.o.g., we assume that e1 is in the path in T between r and some endpoint of e2 . This implies that the underlying graph of Ge1 is a proper subgraph of the underlying graph of Ge2 , therefore |E(Ge2 )| < |E(Ge1 )|. (7) Recall that Gei = (Gei , Rei , λei ), i ∈ [1, 2]. Let B − = B \ (V (Ge1 ) \ Re1 ) and we set B− = (B − , Re1 , λe1 ). Clearly, B− ∼ Ge1 . This, combined with (6), implies that B− ∼ Ge2 . 17 (8) Let now B ∗ = B− ⊕ Ge2 . Combining (7) and (8), we may deduce that |E(B ∗ )| < |E(B)|. (9) We now set B∗ = (B ∗ , R, λ) and recall that t = |R|. Clearly, both B and B∗ belong in B (t) . We now claim that B ≡(F ,t) B∗ . For this, we consider any D = (D, R, λ) ∈ B (t) . We define B⋆ = (B − , R, λ), D + = D ⊕ B⋆ , and D+ = (D + , Re1 , λe1 ). Note that D ⊕ B = D+ ⊕ Ge1 and D ⊕ B∗ = D+ ⊕ Ge2 . (10) (11) From (5), we have that F tm D+ ⊕Ge1 ⇐⇒ F tm D+ ⊕Ge2 . This, together with (10) and (11), implies that F tm D ⊕ B ⇐⇒ F tm D ⊕ B∗ , therefore B ≡(F ,t) B∗ , and the claim follows. We just proved that B ≡(F ,t) B∗ . This, together with (9), contradict the fact that B ∈ R(F ,t) . Therefore |X| ≤ s, as required. Given a graph G and an integer y, we say that a vertex set S ⊆ V (G) is a branchwidthy-modulator if bw(G \ S) ≤ y. This notion is inspired from treewidth-modulators, which have been recently used in a series of papers (cf., for instance, [5, 16, 17, 22]. The following proposition is a (weaker) restatement of [16, Lemma 3.10 of the full version] (see also [22]). Proposition 5. There exists a function f2 : N≥1 × N → N such that if d ∈ N≥1 , y ∈ N, and G is a graph such that G ∈ extm (Kd ) and G contains a branchwidth-y-modulator R, then there exists a partition X of V (G) and an element X0 ∈ X such that R ⊆ X0 , max{|X0 |, |X | − 1} ≤ 2 · |R|, and for every X ∈ X \ {X0 }, |NG (X)| ≤ f2 (d, y). Lemma 16. There is a function h3 : N → N such that if t ∈ N and F is a connected and planar collection, where d = size(F), then every graph in R(t,F ) has at most t · h3 (d) vertices. Proof. We define h3 : N → N so that h3 (d) = 2 + h2 (f2 (d, µ(d)), µ(d)) where h2 is the function of Lemma 15, f2 is the function of Proposition 5, and µ is the function of Lemma 13. As F(F ,t) has at most d vertices, we may assume that G = (G, R, λ) ∈ R(t,F ) \ (F {F ,t) }. Note that G ∈ extm (F), therefore, from Lemma 13, bw(G) ≤ µ(d). 
We set y = µ(d) and we observe that R is a branchwidth-y-modulator of G. Therefore, we can apply Proposition 5 on G and R and obtain a partition X of V (G) and an element X0 ∈ X such that R ⊆ X0 , (12) max{|X0 |, a} ≤ 2 · |R|, and (13) ∀X ∈ X \ {X0 } : |NG (X)| ≤ f2 (d, y) (14) 18 From (12) and (14), each X ∈ X \ {X0 } is a subset of V (G) such that X ∩ R = ∅ and |NG (X)| ≤ f2 (d, y). Therefore, from Lemma 15, for each X ∈ X \ {X0 }, |X| ≤ h2 (f2 (d, y), d). We obtain that X |G| = |X0 | + |X| X∈X \{X0 } ≤(13) 2 · |R| + |R| · h2 (f2 (d, y), d) = t · (2 + h2 (f2 (d, y), d)) = t · h3 (d), as required. The next proposition follows from the results of Baste et al. [1] on the number of labeled graphs of bounded treewidth. Proposition 6 (Baste et al. [1]). Let n, y ∈ N. The number of labeled graphs with at most n vertices and branchwidth at most q is 2Oq (n·log n) . Proof of Lemma 14. Before we proceed to the proof we need one more definition. Given (t) (F ,t) n ∈ N, we set B≤n = AF ,n−t ∪ {F(F ,t) }. (F ,t) Note that, from Lemma 16, R(F ,t) ⊆ B≤n , where n = t·h3 (d). Also, from Lemma 13, (F ,t) (F ,t) all graphs in B≤n have branchwidth at most y = max{µ(d), t}. The fact that |B≤n | = 2Od (t·log t) follows easily by applying Proposition 6 for n and y. The algorithm claimed in the second statement of the lemma constructs a set of (F ,t) representatives R(F ,t) as follows: first it finds a partition Q of B≤n into equivalence classes with respect to ≡(F ,t) and then picks an element with minimum number of edges from each set of this partition. (F ,t) The computation of the above partition of B≤n is based on the fact that, given (F ,t) two t-boundaried graphs B1 and B2 , B1 ≡(F ,t) B2 iff for every G ∈ B≤n F tm G ⊕ B1 ⇐⇒ F tm G ⊕ B2 . This fact follows directly from Lemma 5 and taking into (F ,t) account that R(F ,t) ⊆ B≤n . (F ,t) (F ,t) Note that it takes |B≤n |3 · Od (1) · tO(1) steps to construct Q. As |B≤n | = 2Od (t·log t) , the construction of Q, and therefore of R(F ,t) as well, can be done in the claimed number of steps. 3.5 Proof of Theorem 2 We are now ready to prove Theorem 2. The main difference with respect to the proof of Theorem 1 is an improvement on the size of the tables of the dynamic programming algorithm, namely |Pe |, where the fact that F is a connected and planar subcubic collection is exploited. 19 Proof of Theorem 2. We provide a dynamic programming algorithm for the computation of tmF (G). We first consider an, arbitrarily chosen, vertex labeling ρ of G. From Lemma 12, we may assume that we have a branch decomposition (T, σ) of (G, ∅, ∅) of width at most w = O(bw(G)) = O(tw(G)). This gives rise to the te -boundaried graphs Ge = (Ge , Re , λe ) for each e ∈ E(T ). Moreover, if r is the root of T , σ −1 (r) = ∅ = Rer and Ger = (G, ∅, ∅). Keep also in mind that te = O(w) for every e ∈ E(T ). Our next step is to define the tables of the dynamic programming algorithm. Let e ∈ E(T ). We call the pair (L, B) an e-pair if 1. L ⊆ Re ′ 2. B = (B, R, λ) ∈ R(k ,F ) where k′ = |Re \ L| = te − |L|. For each e ∈ E(T ), we denote by Pe the set of all e-pairs. Note that X |Pe | = 2i · |R (F ,te −i) | i∈[0,te ] = (te + 1) · 2te · 2Od (te ·log te ) Od (w·log w) = 2 (from Lemma 14) . (e) We then define the function tmF : Pe → N such that if (L, B) ∈ Pe , then (e) tmF (L, B) = min{|S| | S ⊆ V (Ge ) ∧ L = Re ∩ S ∧ B = repF (Ge \ S)} ∪ {∞}. Note that Per = {(∅, B∅ ), (∅, F(F ,t) )} where B∅ = ((∅, ∅), ∅, ∅). We claim that tmF (G) = (e ) tmF r (∅, B∅ ). 
Indeed, tmF (G) = min{|S| | F 6tm G \ S} (from Equation (1)) = min{|S| | B∅ = repF ((G \ S, ∅, ∅))} (from Lemma 7) = min{|S| | ∅ = ∅ ∩ S ∧ B∅ = repF (G \ S, ∅, ∅)} = min{|S| | ∅ = Rre ∩ S ∧ B∅ = repF (Ger \ S)} (e ) = tmF r (∅, B∅ ). (e) Therefore, our aim is to give a way to compute tmF for every e ∈ E(T ). Our dynamic programming algorithm does this in a bottom-up fashion, starting from the edges that contain as endpoints leaves of T that are different to the root. Let l ∈ L(T ) \ {r} and let eℓ be the edge of T that contains it. Let also σ −1 (eℓ ) = {x, y}. Clearly, Geℓ = ({x, y}, {{x, y}}) and    Peℓ = ( {x, y} × R(0,F ) ) ∪ ( {x}, {y} × R(1,F ) ) ∪ ( ∅ × R(2,F ) ). (e ) As the size of the elements in Peℓ depends only on F, it is possible to compute tmF l in Od (1) steps. 20 Let e ∈ {er } ∪ E(T \ L(T )), and let e1 and e2 be the two other edges of T that share an endpoint  with e and where each path from them to r contains e. We also set Fe = Re1 ∪ Re2 \ Re . For the dynamic programming algorithm, it is enough to describe (e ) (e) how to compute tmF given tmF i , i ∈ [1, 2]. For this, given an e-pair (L, B) ∈ Pe where B = (B, R, λ), it is possible to verify that  (e ) (e) (e ) tmF (L, B) = min tmF 1 (L1 , B1 ) + tmF 2 (L2 , B2 ) − |L1 ∩ L2 | | (Li , Bi ) ∈ Pei , i ∈ [1, 2], Li \ Fe = L ∩ Rei , i ∈ [1, 2] , L1 ∩ Fe = L2 ∩ Fe , and    B = repF (B1 ⋄ Z1 ) ⊙ (B2 ⋄ Z2 ) |Z , te − |L| where Z = ρ(Re \ L) and Zi = ρ(Rei \ Li ), i ∈ [1, 2] . (e) (e ) Note that given tmF i , i ∈ [1, 2] and a (L, B) ∈ Pe , the value of tmF (L, B) can be computed by the above formula in O(|Pe1 | · |Pe2 |) = 2Od (w·log w) steps. As |Pe | = (e) 2Od (w·log w) , the computation of the function tmF requires again 2Od (w·log w) steps. This means that the whole dynamic programming requires 2Od (w·log w) · |E(T )| = 2Od (w·log w) · O(|E(G)|) steps. As |E(G)| = O(bw(G) · |V (G)|), the claimed running time follows. 4 Single-exponential algorithms for hitting paths and cycles In this section we show that if F ∈ {{P3 }, {P4 }, {C4 }}, then F-TM-Deletion can also be solved in single-exponential time. It is worth mentioning that the {Ci }-TMDeletion problem has been studied in digraphs from a non-parameterized point of view [29]. The algorithms we present for {P3 }-TM-Deletion and {P4 }-TM-Deletion use standard dynamic programming techniques and the algorithm we present for {C4 }-TMDeletion uses the rank-based approach introduced by Bodlaender et al. [2]. We first need a simple observation and to introduce nice tree decompositions, which will be very helpful in the algorithms. Observation 4. Let G be a graph and h be a positive integer. Then the following assertions are equivalent. • G contains Ph as a topological minor. • G contains Ph as a minor. • G contains Ph as a subgraph. Moreover, the following assertions are also equivalent. • G contains Ch as a topological minor. 21 • G contains Ch as a minor. Nice tree decompositions. Let D = (T, X ) be a tree decomposition of G, r be a vertex of T , and G = {Gt | t ∈ V (T )} be a collection of subgraphs of G, indexed by the vertices of T . We say that the triple (D, r, G) is a nice tree decomposition of G if the following conditions hold: • Xr = ∅ and Gr = G, • each node of D has at most two children in T , • for each leaf t ∈ V (T ), Xt = ∅ and Gt = (∅, ∅). Such t is called a leaf node, • if t ∈ V (T ) has exactly one child t′ , then either – Xt = Xt′ ∪ {vinsert } for some vinsert 6∈ Xt′ and Gt = G[V (Gt′ ) ∪ {vinsert }]. 
The node t is called introduce vertex node and the vertex vinsert is the insertion vertex of Xt , – Xt = Xt′ \ {vforget } for some vforget ∈ Xt′ and Gt = Gt′ . The node t is called forget vertex node and vforget is the forget vertex of Xt . • if t ∈ V (T ) has exactly two children t′ and t′′ , then Xt = Xt′ = Xt′′ , and E(Gt′ ) ∩ E(Gt′′ ) = ∅. The node t is called a join node. As discussed in [24], given a tree decomposition, it is possible to transform it in polynomial time to a nice new one of the same width. Moreover, by Bodlaender et al. [4] we can find in time 2O(tw) · n a tree decomposition of width O(tw) of any graph G. Hence, since in this section we focus on single-exponential algorithms, we may assume that a nice tree decomposition of width w = O(tw) is given with the input. 4.1 A single-exponential algorithm for {P3 }-TM-Deletion We first give a simple structural characterization of the graphs that exclude P3 as a topological minor. Lemma 17. Let G be a graph. P3 6tm G if and only if each vertex of G has degree at most 1. Proof. Let G be a graph. If G has a connected component of size at least 3, then clearly it contains a P3 . This implies that, if P3 6tm G, then each connected component of G has size at most 2 and so, each vertex of G has degree at most 1. Conversely, if each vertex of G has degree at most 1, then, as P3 contains a vertex of degree 2, P3 6tm G. We present an algorithm using classical dynamic programming techniques over a tree decomposition of the input graph. Let G be an instance of {P3 }-TM-Deletion and let ((T, X ), r, G) be a nice tree decomposition of G. 22 We define, for each t ∈ V (T ), the set It = {(S, S0 ) | S, S0 ⊆ Xt , S ∩ S0 = ∅} and a function rt : It → N such that for each (S, S0 ) ∈ It , r(S, S0 ) is the minimum ℓ such that there exists a set Sb ⊆ V (Gt ), called the witness of (S, S0 ), that satisfies: b ≤ ℓ, • |S| • Sb ∩ Xt = S, b and • P3 6tm Gt \ S, • S0 is the set of each vertex of Xt of degree 0 in Gt \ S. Note that with this definition, tmF (G) = rr (∅, ∅). For each t ∈ V (T ), we assume that we have already computed rt′ for each children t′ of t, and we proceed to the computation of rt . We distinguish several cases depending on the type of node t. Leaf. It = {(∅, ∅)} and rt (∅, ∅) = 0. Introduce vertex. If v is the insertion vertex of Xt and t′ is the child of t, then for each (S, S0 ) ∈ It , rt (S, S0 ) = min {rt′ (S ′ , S0 ) + 1 | (S ′ , S0 ) ∈ It′ , S = S ′ ∪ {v}} ∪ {rt′ (S, S0′ ) | (S, S0′ ) ∈ It′ , S0 = S0′ ∪ {v}, NGt [Xt ] (v) \ S = ∅} ∪ {rt′ (S, S0′ ) | (S, S0′ ) ∈ It′ , S0 = S0′ \ {u}, u ∈ S0′ ,  NGt [Xt ] (v) \ S = {u}} . Forget vertex. If v is the forget vertex of Xt and t′ is the child of t, then for each (S, S0 ) ∈ It , rt (S, S0 ) = min{rt′ (S ′ , S0′ ) | (S ′ , S0′ ) ∈ It′ , S = S ′ \ {v}, S0 = S0′ \ {v}} Join. If t′ and t′′ are the children of t, then for each (S, S0 ) ∈ It , r(S, S0 ) = min{r(S ′ , S0′ ) + r(S ′′ , S0′′ ) − |S ′ ∩ S ′′ | | (S ′ , S0′ ) ∈ It′ , (S ′′ , S0′′ ) ∈ It′′ , S = S ′ ∪ S ′′ , S0 = S0′ ∩ S0′′ , Xt \ S ⊆ S0′ ∪ S0′′ }. Let us analize the running time of this algorithm. As, for each t ∈ V (T ), S and S0 are disjoint subsets of Xt , we have that |It | ≤ 3|Xt | . Note that if t is a leaf, then rt can be computed in time O(1), if t is an introduce vertex or a forget vertex node, and t′ is the child of t, then rt can be computed in time O(|It′ | · |Xt |), and if t is a join node, and t′ and t′′ are the two children of t, then rt can be computed in time O(|It′ | · |It′′ | · |Xt |). 
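To make the transfer rules above concrete, here is a small Python transcription of the join step (a sketch under our own conventions, not the authors' implementation: a table is a dictionary mapping pairs (S, S0) of frozensets to rt(S, S0), missing entries stand for ∞, and bag denotes the set Xt).

def p3_join_node(table1, table2, bag):
    """Join-node update for the {P3} dynamic programming, transcribing the rule
    above: combine entries (S', S0') and (S'', S0''), keep S = S' ∪ S'' and
    S0 = S0' ∩ S0'', require every bag vertex outside S to be isolated on at
    least one side (Xt \ S ⊆ S0' ∪ S0''), and subtract |S' ∩ S''| so that bag
    vertices deleted on both sides are counted only once."""
    table = {}
    for (s1, s01), w1 in table1.items():
        for (s2, s02), w2 in table2.items():
            s = s1 | s2
            if not (bag - s) <= (s01 | s02):
                continue
            key = (s, s01 & s02)
            w = w1 + w2 - len(s1 & s2)
            if w < table.get(key, float("inf")):
                table[key] = w
    return table

The introduce- and forget-vertex rules can be transcribed in the same way, and the resulting tables are easy to sanity-check against brute-force enumeration on small graphs.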
We now show that for each t ∈ V (T ), the function rt is correctly computed by the algorithm. 23 Leaf. This follows directly from the definition of rt . Introduce vertex. Let v be the insertion vertex of Xt . As v is the insertion vertex, we have that NGt [Xt ] (v) = NGt (v), and so for each value we add to the set, we can find a witness of (S, S0 ) of size bounded by this value. Conversely, let (S, S0 ) ∈ It and let Sb be a witness. If v ∈ S, then (S \ {v}, S0 ) ∈ It′ b b and r(S \{v}, S0 ) ≤ |S|−1, if v ∈ S0 then (S, S0 \{v}) ∈ It′ and r(S, S0 \{v}) ≤ |S|, b and if v ∈ Xt \(S ∪ S0 ), then by definition v has a unique neighbor, say u, in Gt \ S, b moreover u ∈ Xt \(S ∪S0 ), v is the unique neighbor of u in Gt \ S, (S, S0 ∪{u}) ∈ It′ , b and r(S, S0 ∪ {u}) ≤ |S|. Forget vertex. This also follows directly from the definition of rt . Join. Let (S ′ , S0′ ) ∈ Rt′ and let (S ′′ , S0′′ ) ∈ It′′ with witnesses Sb′ and Sb′′ , respectively. If S = S ′ ∪ S ′′ and S0′ ∪ S0′′ = Xt \ S, then the condition Xt \ S ⊆ S0′ ∪ S0′′ ensures that Gt \ (Sb′ ∪ Sb′′ ) has no vertex of degree at least 2 and so Sb′ ∪ Sb′′ is a witness of (S, S0′ ∩ S0′′ ) ∈ It of size at most rt′ (S ′ , S0′ ) + rt′ (S ′′ , S0′′ ) − |S ′ ∩ S ′′ |. b b If Sb′ = Sb∩V (Gt′ ) and Sb′′ = S∩V (Gt′′ ), Conversely, let (S, S0 ) ∈ It with witness S. ′′ ′ ′ ′ b b b then by definition of S, S is a witness of some (S , S0 ) ∈ It′ , and S is a witness of some (S ′′ , S0′′ ) ∈ It′′ such that S = S ′ = S ′′ , S0′ ∪ S0′′ = Xt \ S, and S0 = S0′ ∩ S0′′ , b and we have rt′ (S ′ , S0′ ) + rt′ (S ′′ , S0′′ ) − |S| ≤ |S|. The following theorem summarizes the above discussion. Theorem 7. If a nice tree decomposition of G of width w is given, {P3 }-TM-Deletion can be solved in time O(9w · w · n). 4.2 A single-exponential algorithm for {P4 }-TM-Deletion Similarly to what we did for {P3 }-TM-Deletion, we start with a structural definition of the graphs that exclude P4 as a topological minor. Lemma 18. Let G be a graph. P4 6tm G if and only if each connected component of G is either a C3 or a star. Proof. First note that if each connected component of G is either a C3 or a star, then P4 6tm G. Conversely, assume that P4 6tm G. Then each connected component of G of size at least 4 should contain at most 1 vertex of degree at least 2, hence such component is a star. On the other hand, the only graph on at most 3 vertices that is not a star is C3 . The lemma follows. As we did for {P3 }-TM-Deletion, we present an algorithm using classical dynamic programming techniques over a tree decomposition of the input graph. Let G be an instance of {P4 }-Deletion, and let ((T, X ), r, G) be a nice tree decomposition of G. 
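Before setting up the tables, note that the target structure of Lemma 18 can be tested directly, which is convenient for sanity-checking partial solutions on small instances. The following Python sketch (function name ours; edges are assumed to be pairs of hashable vertices) checks that every connected component is a C3 or a star.

from collections import defaultdict, deque

def p4_free(vertices, edges):
    """Lemma 18: P4 is not a topological minor of G iff every connected
    component of G is a C3 or a star (here: a tree with at most one vertex
    of degree at least 2)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for s in vertices:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])      # connected component of s, by BFS
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        m = sum(len(adj[v]) for v in comp) // 2
        high_degree = sum(1 for v in comp if len(adj[v]) >= 2)
        is_c3 = len(comp) == 3 and m == 3
        is_star = m == len(comp) - 1 and high_degree <= 1
        if not (is_c3 or is_star):
            return False
    return True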
24 We define, for each t ∈ T , the set It to be the set of each tuple (S, S1+ , S1− , S∗ , S3+ , S3− ) such that {S, S1+ , S1− , S∗ , S3+ , S3− } is a partition of Xt and the function rt : It → N such that, for each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , rt (S, S1+ , S1− , S∗ , S3+ , S3− ) is the b Sb∗ , Sb3− ) ⊆ V (Gt ) × V (Gt ) × V (Gt ), called minimum ℓ such that there exists a triple (S, the witness of (S, S1+ , S1− , S∗ , S3+ , S3− ), which satisfies the following properties: b Sb∗ , and Sb3− are pairwise disjoint, • S, • Sb ∩ Xt = S, Sb∗ ∩ Xt = S∗ , and Sb3− ∩ Xt = S3− , b ≤ ℓ, • |S| b • P4 6tm Gt \ S, b • S1+ is a set of vertices of degree 0 in Gt \ S, • each vertex of S1− has a unique neighbor in Gt \ Sb and this neighbor is in Sb∗ , • each connected component of Gt [Sb3− ] is a C3 , • there is no edge in Gt \ Sb between a vertex of Sb3− and a vertex of V (Gt )\(Sb ∪ Sb3− ), • there is no edge in Gt \ Sb between a vertex of S3+ and a vertex of V (Gt )\(Sb ∪ S3+ ), and • there is no edge in Gt \ Sb between two vertices of S∗ . Intuitively, Sb corresponds to a partial solution in Gt . Note that, by Lemma 18, each component of Gt \ Sb must be either a star or a C3 . With this in mind, Sb∗ is the set of b S1+ is the set of leaves of a star that are vertices that are centers of a star in Gt \ S, b not yet connected to a vertex of S∗ , S1− is the set of leaves of a star that are already connected to a vertex of Sb∗ , Sb3− is the set of vertices that induce C3 ’s in Gt , and S3+ is a set of vertices that will induce C3 ’s when further edges will appear. Note that with this definition, tmF (G) = rr (∅, ∅, ∅, ∅, ∅, ∅). For each t ∈ V (T ), we assume that we have already computed rt′ for each children t′ of t, and we proceed to the computation of rt . We distinguish several cases depending on the type of node t. Leaf. It = {(∅, ∅, ∅, ∅, ∅, ∅)} and rt (∅, ∅, ∅, ∅, ∅, ∅) = 0. Introduce vertex. If v is the insertion vertex of Xt and t′ is the child of t, then, for 25 each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , rt (S, S1+ , S1− , S∗ , S3+ , S3− ) = min {rt′ (S ′ , S1+ , S1− , S∗ , S3+ , S3− ) + 1 | (S ′ , S1+ , S1− , S∗ , S3+ , S3− ) ∈ Rt′ , S = S ′ ∪ {v}} ′ , S1− , S∗ , S3+ , S3− ) ∪ {rt′ (S, S1+ ′ | (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ Rt′ , ′ S1+ = S1+ ∪ {v}, NGt [Xt \S] (v) = ∅} ′ , S∗ , S3+ , S3− ) ∪ {rt′ (S, S1+ , S1− ′ | (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ Rt′ , ′ S1− = S1− ∪ {v}, z ∈ S∗ , NGt [Xt \S] (v) = {z}} ′ ′ , S1− , S∗′ , S3+ , S3− ) ∪ {rt′ (S, S1+ ′ ′ , S∗′ , S3+ , S3− ) ∈ Rt′ , , S1− | (S, S1+ ′ , S∗ = S∗′ ∪ {v}, NGt [Xt \S] (v) ⊆ S1+ ′ ′ \ NGt [Xt \S] (v), S1− = S1− ∪ NGt [Xt \S] (v)} S1+ = S1+ ′ , S3− ) ∪ {rt′ (S, S1+ , S1− , S∗ , S3+ ′ | (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ Rt′ , ′ ∪ {v}, S3+ = S3+ [NGt [Xt \S ′ ] (v) = ∅] or ′ [z ∈ S3+ , NGt [Xt \S] (v) = {z}, NGt [Xt \S] (z) = {v}]} ′ ′ , S3− ) ∪ {rt′ (S, S1+ , S1− , S∗ , S3+ ′ | (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ Rt′ , ′ ′ S3+ = S3+ \ {z, z ′ }, S3− = S3− ∪ {z, z ′ , v}, ′ z, z ′ ∈ S3+ , NGt [Xt \S] (v) = {z, z ′ }, NGt [Xt \S] (z) = {v, z ′ }, NGt [Xt \S] (z ′ ) = {v, z}} Forget vertex. If v is the forget vertex of Xt and t′ is the child of t, then, for each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , ′ ′ , S∗′ , S3+ , S3− ) rt (S, S1+ , S1− , S∗ , S3+ , S3− ) = min{rt′ (S ′ , S1+ , S1− ′ ′ | (S ′ , S1+ , S1− , S∗′ , S3+ , S3− ) ∈ It′ , ′ S = S ′ \ {v}, S1− = S1− \ {v}, ′ S∗ = S∗′ \ {v}, S3− = S3− \ {v}}. Join. If t′ and t′′ are the children of t, then for each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , 26  . 
rt (S, S1+ , S1− , S∗ , S3+ , S3− ) is ′ ′ ′ ′ ′ ′ ′ ′ , S1− , S∗ , S3+ , S3− ) − |S| , S1− , S∗ , S3+ , S3− ) + rt′ (S, S1+ min{rt′ (S, S1+ ′′ ′′ ′′ ′′ ′ ′ ′ ′ , S1− , S∗ , S3+ , S3− ) ∈ It′′ , | (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It′ , (S, S1+ ′ ′ ′′ ′′ ′′ ′′ ′ ′ (S1+ ∪ S1− ) ∩ (S3+ ∪ S3− ) = (S1+ ∪ S1− ) ∩ (S3+ ∪ S3− ) = ∅, ′ ′′ ∀v ∈ S1− ∩ S1− , ∃z ∈ S∗ : NGt [Xt \S] (v) = {z}, ′ ′′ ′ ′′ ∀v ∈ S3− ∩ S3− , ∃z, z ′ ∈ S3− ∩ S3− : v, z, z ′ induce a C3 in Gt [Xt \ S]}. Let us analyze the running time of this algorithm. As, for each t ∈ V (T ), S, S1+ , S1− , S∗ , S3+ , and S3− form a partition of Xt , we have that |It | ≤ 6|Xt | . Note that if t is a leaf, then rt can be computed in time O(1), if t is an introduce vertex or a forget vertex node, and t′ is the child of t, then rt can be computed in time O(|It′ | · |Xt |), and if t is a join node, and t′ and t′′ are the two children of t, then rt can be computed in time O(|It′ | · |It′′ | · |Xt |). We now show that for each t ∈ V (T ), rt is correctly computed by the algorithm. For each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , it can be easily checked that each value ℓ we compute respects, rt (S, S1+ , S1− , S∗ , S3+ , S3− ) ≤ ℓ. Conversely, we now argue that for each (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , the computed value ℓ is such that for any witness b Sb∗ , Sb3− ) of (S, S1+ , S1− , S∗ , S3+ , S3− ) is such that ℓ ≤ |S|. b We again distinguish the (S, type of node t. Leaf. This follows directly from the definition of rt . Introduce vertex. Let v be the insertion vertex of Xt , let (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ b Sb∗ , Sb3− ) be a witness. Rt , and let (S, • If v ∈ S, then (S \ {v}, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It′ and b − 1. rt′ (S \ {v}, S1+ , S1− , S∗ , S3+ , S3− ) ≤ |S| b hence (S, S1+ \{v}, S1− , S∗ , S3+ , S3− ) ∈ • If v ∈ S1+ , then v is of degree 0 in Gt \S, b It′ and rt′ (S, S1+ \ {v}, S1− , S∗ , S3+ , S3− ) ≤ |S|. • If v ∈ S1− , then v has a unique neighbor that is in Sb∗ . As v is the insertion vertex of Xt , it implies that NGt (v) ⊆ S∗ , and so (S, S1+ , S1− \ b {v}, S∗ , S3+ , S3− ) ∈ It′ and rt′ (S, S1+ , S1− \ {v}, S∗ , S3+ , S3− ) ≤ |S|. b • If v ∈ S∗ , then every neighbor of v is in S1− and has degree 1 in Gt \ S. Thus, (S, S1+ ∪ NGt [Xt \S] (v), S1− \ NGt [Xt \S] (v), S∗ \ {v}, S3+ , S3− ) ∈ It′ and b rt′ (S, S1+ ∪ NG [X \S] (v), S1− \ NG [X \S] (v), S∗ \ {v}, S3+ , S3− ) ≤ |S|. t t t t • If v ∈ S3+ , then (S, S1+ , S1− , S∗ , S3+ \ {v}, S3− ) ∈ It′ and b rt′ (S, S1+ , S1− , S∗ , S3+ \ {v}, S3− ) ≤ |S|. • If v ∈ S3− , then there exist z and z ′ in S3− such that {v, z, z ′ } induce a C3 in Gt \ Sb and there is no edge in Gt \ Sb between a vertex of {v, z, z ′ } and a vertex b \ {x, z, z ′ }. So (S, S1+ , S1− , S∗ , S3+ ∪ {z, z ′ }, S3− \ {x, z, z ′ }) ∈ It′ of V (Gt \ S) b and rt′ (S, S1+ , S1− , S∗ , S3+ ∪ {z, z ′ }, S3− \ {x, z, z ′ }) ≤ |S|. 27 Forget vertex. Let v be the forget vertex of Xt , let (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , b Sb∗ , Sb3− ) be a witness. If v has degree 0 in Gt \ S, b then (S, S1+ , S1− , S∗ ∪ and let (S, b If v has degree {v}, S3+ , S3− ) ∈ It′ and rt′ (S, S1+ , S1− , S∗ ∪ {v}, S3+ , S3− ) ≤ |S|. b at least 1 in Gt \ S, then NGt \Sb(v) ∩ S3+ = ∅, as otherwise there would be an edge in Gt \ Sb between a vertex of S3+ and a vertex of V (Gt ) \ (Sb ∪ S3+ ). 
So, one of the following case occurs: b (S ∪ {v}, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It′ , and • v ∈ S, b ′ rt (S ∪ {v}, S1+ , S1− , S∗ , S3+ , S3− ) ≤ |S|, • v ∈ Sb∗ , (S, S1+ , S1− , S∗ ∪ {v}, S3+ , S3− ) ∈ It′ , and b rt′ (S, S1+ , S1− , S∗ ∪ {v}, S3+ , S3− ) ≤ |S|, • NGt \Sb(v) ⊆ Sb∗ , (S, S1+ , S1− ∪ {v}, S∗ , S3+ , S3− ) ∈ It′ , and b or rt′ (S, S1+ , S1− ∪ {v}, S∗ , S3+ , S3− ) ≤ |S|, • v ∈ Sb3− , (S, S1+ , S1− , S∗ , S3+ , S3− ∪ {v}) ∈ It′ , and b rt′ (S, S1+ , S1− , S∗ , S3+ , S3− ∪ {v}) ≤ |S| b Sb∗ , Sb3− ) be a witness. Let t′ and Join. Let (S, S1+ , S1− , S∗ , S3+ , S3− ) ∈ It , and let (S, ′′ ′ b b t be the two children of t. We define S = S ∩ V (Gt′ ), Sb′′ = Sb ∩ V (Gt′′ ), Sb∗′ = Sb∗ ∩ ′ ⊆S b3− ∩V (Gt′ ), and Sb′′ ⊆ Sb3− ∩V (Gt′′ ), such that V (Gt′ ), Sb∗′′ = Sb∗ ∩V (Gt′′ ), Sb3− 3− ′′ ]) is a C and G ′ \ (S ′ ] (resp. G [S b′ ∪ Sb′ ) b each connected component of Gt [Sb3− 3 t 3− t 3− ′′ )) is a forest). Then we define (resp. Gt′′ \ (Sb′′ ∪ Sb3− • S ′ = Sb′ ∩ Xt , ′ =S b′ • S1+ 1+ ∪ {v ∈ S1− | NGt \Sb(v) 6⊆ S∗ }, ′ = {v ∈ S b′ • S1− 1− | NGt \Sb(v) ⊆ S∗ }, • S∗′ = S∗ ∩ V (Gt′ ), • S ′ = Sb′ ∩ Xt , and • 3− ′ S3+ 3− ′ ). = S3+ ∪ (S3− \ S3− ′ , S ′ , S ′ , S ′ , S ′ ) ∈ I ′ . We define (S ′′ , S ′′ , S ′′ , S ′′ , S ′′ , S ′′ ) ∈ Note that (S ′ , S1+ ∗ t ∗ 3+ 3− 1+ 1− 3+ 3− 1− It′′ similarly. Moreover we can easily check that • S = S ′ = S ′′ , S∗ = S∗′ = S∗′′ , ′ ∪ S ′ ) ∩ (S ′′ ∪ S ′′ ) = (S ′′ ∪ S ′′ ) ∩ (S ′ ∪ S ′ ) = ∅, • (S1+ 3− 3+ 1− 1+ 3− 3+ 1− ′ ∩ S ′′ , ∃z ∈ S : N • ∀v ∈ S1− ∗ Gt [Xt \S] (v) = {z}, 1− ′ ∩ S ′′ , ∃z, z ′ ∈ S ′ ∩ S ′′ : v, z, z ′ induce a C in G [X \ S], • ∀v ∈ S3− 3 t t 3− 3− 3− ′ ∩ S ′′ , S ′ ∪ S ′′ , S , S ′ ∩ S ′′ , S ′ ∪ • (S, S1+ , S1− , S∗ , S3+ , S3− ) = (S, S1+ ∗ 1+ 1− 1− 3+ 3+ 3− ′′ S3− ), and ′ , S ′ , S ′ , S ′ , S ′ ) + r ′′ (S ′′ , S ′′ , S ′′ , S ′′ , S ′′ , S ′′ ) − |S| ≤ |S|. b • rt′ (S ′ , S1+ t ∗ ∗ 3− 3+ 1− 1+ 3− 3+ 1− 28 This concludes the proof of correctness of the algorithm. The following theorem summarizes the above discussion. Theorem 8. If a nice tree decomposition of G of width w is given, {P4 }-Deletion can be solved in time O(36w · w · n). 4.3 A single-exponential algorithm for {C4 }-TM-Deletion As discussed before, in this subsection we use the dynamic programming techniques introduced by Bodlaender et al. [2] to obtain a single-exponential algorithm for {C4 }TM-Deletion. The algorithm we present solves the decision version of {C4 }-TMDeletion: the input is a pair (G, k), where G is a graph and k is an integer, and the output is the boolean value tmF (G) ≤ k. We give some definitions that will be used in this section. Given a graph G, we denote by n(G) = |V (G)|, m(G) = |E(G)|, c3 (G) the number of C3 ’s that are subgraphs of G, and cc(G) the number of connected components of G. We say that G satisfies the C4 -condition if the following conditions hold: • G does not contain the diamond as a subgraph, and • n(G) − m(G) + c3 (G) = cc(G). As in the case of P3 and P4 , we state in Lemma 20 a structural characterization of the graphs that exclude C4 as a (topological) minor. We first need an auxiliary lemma. Lemma 19. Let n0 be a positive integer. Assume that for each graph G′ such that 1 ≤ n(G′ ) ≤ n0 , C4 6tm G′ if and only if G satisfies the C4 -condition. If G is a graph that does not contain a diamond as a subgraph and such that n(G) = n0 , then n(G) − m(G) + c3 (G) ≤ cc(G). Proof. 
Let n0 be a positive integer, and assume that for each graph G′ such that 1 ≤ n(G′ ) ≤ n0 , C4 6tm G′ if and only if G satisfies the C4 -condition. Let G be a graph that does not contain a diamond as a subgraph and such that n(G) = n0 . Let S ⊆ E(G) such that C4 6tm G \ S and cc(G \ S) = cc(G) (note that any minimal feedback edge set satisfies these conditions). We have, by hypothesis, that G \ S satisfies the C4 -condition, so n(G \ S) − m(G \ S) + c3 (G \ S) = cc(G \ S). Moreover, as G does not contain a diamond as a subgraph, each edge of G participates in at most one C3 , and thus c3 (G) − c3 (G \ S) ≤ |S|. As by definition n(G) = n(G \ S) and m(G) − m(G \ S) = |S|, we obtain that n(G) − m(G) + c3 (G) ≤ cc(G \ S) = cc(G). Lemma 20. Let G be a non-empty graph. C4 6tm G if and only if G satisfies the C4 -condition. Proof. Let G be a non-empty graph, and assume first that C4 6tm G. This directly implies that G does not contain the diamond as a subgraph. In particular, any two cycles of G, which are necessarily C3 ’s, cannot share an edge. Let S be a set containing 29 an arbitrary edge of each C3 in G. By construction, G \ S is a forest. As in a forest F , we have n(F ) − m(F ) = cc(F ), and S is defined such that |S| = c3 (G) because each edge of G participates in at most one C3 , we obtain that n(G) − m(G) + c3 (G) = cc(G). Thus, G satisfies the C4 -condition. Conversely, assume now that G satisfies the C4 -condition. We prove that C4 6tm G by induction on n(G). If n(G) ≤ 3, then n(G) < n(C4 ) and so C4 6tm G. Assume now that n(G) ≥ 4, and that for each graph G′ such that 1 ≤ n(G′ ) < n(G), if G′ satisfies the C4 -condition, then C4 6tm G′ . We prove that this last implication is also true for G. Note that, as two C3 cannot share an edge in G, we have that c3 (G) ≤ m(G) 3 . This implies that the minimum degree of G is at most 3. Indeed, if each vertex of G had degree at least 4, then m(G) ≥ 2n(G), which together with the relations n(G) − m(G) + c3 (G) = cc(G) and c3 (G) ≤ m(G) would imply that cc(G) < 0, a contradiction. Let v ∈ V (G) be a 3 vertex with minimum degree. We distinguish two cases according to the degree of v. If v has degree 0 or 1, then the graph G \ {v} satisfies the C4 -condition as well, implying that C4 6tm G \ {v}. As v has degree at most 1, it cannot be inside a cycle, hence C4 6tm G. Assume that v has degree 2 and participates in a C3 . As G does not contain a diamond as a subgraph, C4 tm G if and only if C4 tm G \ {v}. Moreover n(G \ {v}) = n(G) − 1, m(G \ {v}) = m(G) − 2, c3 (G \ {v}) = c3 (G) − 1, and cc(G \ {v}) = cc(G). This implies that G \ {v} satisfies the C4 -condition, hence C4 6tm G \ {v}, and therefore C4 6tm G. Finally, assume that v has degree 2 and does not belong to any C3 . Using the induction hypothesis and Lemma 19, we have that n(G\{v})−m(G\{v})+c3 (G\{v}) ≤ cc(G\{v}). As n(G\{v}) = n(G)−1, m(G\{v}) = m(G)−2, c3 (G\{v}) = c3 (G), v has degree 2 in G, and G satisfies the C4 -condition, we obtain that cc(G \ {v}) = cc(G) − 1. This implies that G \ {v} satisfies the C4 -condition, and thus C4 6tm G \ {v}. Since v disconnects one of the connected components of G it cannot participate in a cycle of G, hence C4 6tm G. Lemma 21. If G is a non-empty graph such that C4 6tm G, then m(G) ≤ 32 (n(G) − 1). Proof. As C4 6tm G, by Lemma 20 G satisfies the C4 -condition. It follows that c3 (G) ≤ 1 3 m(G). Moreover, as G is non-empty, we have that 1 ≤ cc(G). The lemma follows by using these inequalities in the equality n(G) − m(G) + c3 (G) = cc(G). 
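The C4-condition is straightforward to evaluate on a concrete graph, which gives a convenient way to test Lemma 20 on small examples. The following Python sketch (function name ours; edges are assumed to be 2-element iterables over hashable vertices) uses the facts that a diamond subgraph exists exactly when some edge lies in two triangles, and that every C3 is counted once per edge.

from collections import defaultdict, deque

def c4_condition(vertices, edges):
    """Lemma 20: C4 is not a topological minor of a non-empty graph G iff
    (i) G has no diamond subgraph, i.e. no edge lies in two triangles, and
    (ii) n(G) - m(G) + c3(G) = cc(G)."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}
    adj = defaultdict(set)
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    triangles_per_edge = []
    for e in edges:
        u, v = tuple(e)
        triangles_per_edge.append(len(adj[u] & adj[v]))
    if any(t >= 2 for t in triangles_per_edge):
        return False                       # a diamond: an edge in two triangles
    c3 = sum(triangles_per_edge) // 3      # each C3 is counted once per edge
    seen, cc = set(), 0                    # connected components by BFS
    for s in vertices:
        if s in seen:
            continue
        cc += 1
        seen.add(s)
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return len(vertices) - len(edges) + c3 == cc

For example, C4 itself fails the condition (4 − 4 + 0 = 0 ≠ 1), while every forest and every disjoint union of triangles and trees satisfies it.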
We are now going to restate the tools introduced by Bodlaender et al. [2] that we need for our purposes. Let U be a set. We define Π(U ) to be the set of all partitions of U . Given two partitions p and q of U , we define the coarsening relation ⊑ such that p ⊑ q if for each S ∈ q, there exists S ′ ∈ p such that S ⊆ S ′ . (Π(U ), ⊑) defines a lattice with minimum element {{U }} and maximum element {{x} | x ∈ U }. On this lattice, we denote by ⊓ the meet operation and by ⊔ the join operation. Let p ∈ Π(U ). For X ⊆ U we denote by p↓X = {S ∩ X | S ∈ p, S ∩ X 6= ∅} ∈ Π(X) the partition obtained by removing all elements not in X from p, and analogously for U ⊆ X we denote p↑X = p ∪ {{x} | x ∈ X \ U } ∈ Π(X) the partition obtained by adding 30 to p a singleton for each element in X \ U . Given a subset S of U , we define the partition U [S] = {{x} | x ∈ U \ S} ∪ {S}. A set of weighted partitions is a set A ⊆ Π(U )×N. We also define rmc(A) = {(p, w) ∈ A | ∀(p′ , w′ ) ∈ A : p′ = p ⇒ w ≤ w′ }. We now define some operations on weighted partitions. Let U be a set and A ⊆ Π(U ) × N. ↓ B = rmc(A ∪ B). Union. Given B ⊆ Π(U ) × N, we define A ∪ Insert. Given a set X such that X ∩U = ∅, we define ins(X, A) = {(p↑U ∪X , w) | (p, w) ∈ A}. Shift. Given w′ ∈ N, we define shft(w′ , A) = {(p, w + w′ ) | (p, w) ∈ A}. Glue. Given a set S, we define Û = U ∪ S and glue(S, A) ⊆ Π(Û ) × N as glue(S, A) = rmc({(Û [S] ⊓ p↑Û , w | (p, w) ∈ A}). Given w : Û × Û → N , we define gluew ({u, v}, A) = shft(w(u, v), glue({u, v}, A)). Project. Given X ⊆ U , we define X = U \ X and proj(X, A) ⊆ Π(X) × N as proj(X, A) = rmc({(p↓X , w) | (p, w) ∈ A, ∀e ∈ X : ∀e′ ∈ X : p ⊑ U [ee′ ]}). Join. Given a set U ′ , B ⊆ Π(U ) × N, and Û = U ∪ U ′ , we define join(A, B) ⊆ Π(Û ) × N as join(A, B) = rmc({(p↑Û ⊓ q↑Û , w1 + w2 ) | (p, w1 ) ∈ A, (q, w2 ) ∈ B}). Proposition 9 (Bodlaender et al. [2]). Each of the operations union, insert, shift, glue, and project can be carried out in time s · |U |O(1) , where s is the size of the input of the operation. Given two weighted partitions A and B, join(A, B) can be computed in time |A| · |B| · |U |O(1) . Given a weighted partition A ⊆ Π(U ) × N and a partition q ∈ Π(U ), we define opt(q, A) = min{w | (p, w) ∈ A, p ⊓ q = {U }}. Given two weighted partitions A, A′ ⊆ Π(U ) × N, we say that A represents A′ if for each q ∈ Π(U ), opt(q, A) = opt(q, A′ ). Given a set Z and a function f : 2Π(U )×N × Z → 2Π(U )×N , we say that f preserves representation if for each two weighted partitions A, A′ ⊆ Π(U ) × N and each z ∈ Z, it holds that if A′ represents A then f (A′ , z) represents f (A, z). Proposition 10 (Bodlaender et al. [2]). The union, insert, shift, glue, project, and join operations preserve representation. Theorem 11 (Bodlaender et al. [2]). There exists an algorithm reduce that, given a set of weighted partitions A ⊆ Π(U ) × N, outputs in time |A| · 2(ω−1)|U | · |U |O(1) a set of weighted partitions A′ ⊆ A such that A′ represents A and |A′ | ≤ 2|U | , where ω denotes the matrix multiplication exponent. 31 We now have all the tools needed to describe our algorithm. Let G be a graph and k be an integer. We recall that the algorithm we describe solves the decision version of {C4 }-TM-Deletion. This algorithm is based on the one given in [2, Section 3.5] for Feedback Vertex Set. We define a new graph G0 = (V (G) ∪ {v0 }, E(G) ∪ E0 ), where v0 is a new vertex and E0 = {{v0 , v} | v ∈ V (G)}. 
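The partition operations recalled above are easy to prototype, which may help when implementing the tables defined below. The following Python sketch (our names and data layout, not the implementation of [2]: a partition is a frozenset of frozensets and a set of weighted partitions is a set of (partition, weight) pairs) implements the meet ⊓, the filtering operation rmc, and glue; the remaining operations are short variations on the same pattern.

from collections import defaultdict

def meet(p, q):
    """The meet of two partitions: merge blocks of p and q that share elements
    (a union-find over the ground set), i.e. the finest partition refined by both."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for block in list(p) + list(q):
        block = list(block)
        if not block:
            continue
        for x in block:
            parent.setdefault(x, x)
        for x in block[1:]:
            rx, ry = find(block[0]), find(x)
            if rx != ry:
                parent[rx] = ry
    classes = defaultdict(set)
    for x in parent:
        classes[find(x)].add(x)
    return frozenset(frozenset(c) for c in classes.values())

def rmc(weighted):
    """rmc(A): for every partition keep only its minimum weight."""
    best = {}
    for p, w in weighted:
        if w < best.get(p, float("inf")):
            best[p] = w
    return set(best.items())

def glue(S, weighted, ground):
    """glue(S, A): extend every partition of A to the ground set 'ground' ∪ S and
    force the elements of S into a single block, keeping minimum weights."""
    S = frozenset(S)
    universe = set(ground) | S
    out = set()
    for p, w in weighted:
        covered = set().union(*p) if p else set()
        lifted = frozenset(p) | {frozenset({x}) for x in universe - covered}
        out.add((meet(lifted, frozenset({S})), w))
    return rmc(out)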
The role of v0 is to artificially guarantee the connectivity of the solution graph, so that the machinery of Bodlaender et al. [2] can be applied. In the following, for each subgraph H of G, for each Z ⊆ V (H), and for each Z0 ⊆ E0 ∩ E(H), Z\{v0 } we denote by HhZ, Z0 i the graph Z, Z0 ∪ E(H) ∩ . 2 Given a nice tree decomposition of G of width w, we define a nice tree decomposition ((T, X ), r, G) of G0 of width w + 1 such that the only empty bags are the root and the leaves and for each t ∈ T , if Xt 6= ∅ then v0 ∈ Xt . Note that this can be done in linear time. For each bag t, each integers i, j, and ℓ, each function s : Xt → {0, 1}, each function s0 : {v0 } × s−1 (1) → {0, 1}, and each function r : E(Gt s−1 (1), s−1 0 (1) ) → {0, 1}, if C4 6tm Gt s−1 (1), s−1 (1) , we define: 0 Et (p, s, s0 , r, i, j, ℓ) = {(Z, Z0 ) | (Z, Z0 ) ∈ 2Vt × 2E0 ∩E(Gt ) |Z| = i, |E(Gt hZ, Z0 i)| = j, c3 (Gt hZ, Z0 i) = ℓ, Gt hZ, Z0 i does not contain the diamond as a subgraph, Z ∩ Xt = s−1 (1), Z0 ∩ (Xt × Xt ) = s−1 0 (1), v0 ∈ Xt ⇒ s(v0 ) = 1, ∀u ∈ Z \ Xt : either t is the root or ∃u′ ∈ s−1 (1) : u and u′ are connected in Gt hZ, Z0 i, ∀v1 , v2 ∈ s−1 (1) : p ⊑ Vt [{v1 , v2 }] ⇔ v1 and v2 are con- ∀e ∈ E(Gt hZ, Z0 i) ∩  s−1 (1) 2  nected in Gt hZ, Z0 i, : r(e) = 1 ⇔ e is an edge of a C3 in Gt hZ, Z0 i} At (s, s0 , r, i, j, ℓ) = {p | p ∈ Π(s−1 (1)), Et (p, s, s0 , r, i, j, ℓ) 6= ∅}. Otherwise, i.e., if C4 tm Gt s−1 (1), s−1 0 (1) , we define At (s, s0 , r, i, j, ℓ) = ∅. Note that we do not need to keep track of partial solutions if C4 tm Gt s−1 (1), s−1 0 (1) , as we already know they will not lead to a global solution. Moreover, if C4 6tm −1 −1 3 −1 −1 Gt s−1 (1), s−1 0 (1) , then by Lemma 21 m(Gt s (1), s0 (1) ) ≤ 2 (n(Gt s (1), s0 (1) )− 1). 32 Using the definition of Ar , Lemma 20, and Lemma 21 we have that tm{C4 } (G) ≤ k if and only if for some i ≥ |V (G) ∪ {v0 }| − k and some j ≤ 23 (i − 1), we have Ar (∅, ∅, ∅, i, j, 1 + j − i) 6= ∅. For each t ∈ V (T ), we assume that we have already computed At′ for each children t′ of t, and we proceed to the computation of At . As usual, we distinguish several cases depending on the type of node t. Leaf. By definition of At we have At (∅, ∅, ∅, 0, 0, 0) = {∅}. Introduce vertex. Let v be the insertion vertex of Xt , let t′ be the child of t, let s, s0 , and r the functions defined as before, let H = Gt s−1 (1), s−1 0 (1) , and let d3 be the number of C3 ’s of H that contain the vertex v. • If C4 tm H or if v = v0 and s(v0 ) = 0, then by definition of At we have that At (s, s0 , r, i, j, ℓ) = ∅. • Otherwise, if s(v) = 0, then, by definition of At , it holds that At (s, s0 , r, i, j, ℓ) = At′ (s|Xt′ , s0 |Et′ , r|Et′ , i, j, ℓ). • Otherwise, if v = v0 , then by construction of the nice tree decomposition, we know that t′ is a leaf of T and so s = {(v0 , 1)}, s0 = r = ∅, j = ℓ = i − 1 = 0 and At (s, s0 , r, i, j, ℓ) = ins({v0 }, At′ (∅, ∅, ∅, 0, 0, 0)). • Otherwise, we know that v 6= v0 , s(v) = 1, and C4 6tm H. As s(v) = 1, we have to insert v and we have to make sure that all vertices of NH [v] are in the same connected component of H. Moreover, by adding v we add one vertex, |N (v)| edges, and d3 C3 ’s. Therefore, we have that At (s, s0 , r, i, j, ℓ) = glue(NH [v], ins({v}, At′ (s|Xt′ , s0 |Et′ , r|Et′ , i − 1, j − |NH (v)|, ℓ − d3 ))). Forget vertex. Let v be the forget vertex of Xt , let t′ be the child of t, and let s, s0 , and r the functions defined as before. 
For each function, we have a choice on how it can be extended in t′ , and we potentially need to consider every possible such extension. Note the number of vertices, edges, or C3 ’s is not affected. We obtain that At (s, s0 , r, i, j, ℓ) = At′ (s ∪ {(v, 0)}, s0 , r, i, j, ℓ) S ↓ proj({v}, At′ (s′ , s′0 , r′ , i, j, ℓ)). s′ :Xt′ →{0,1}, s′ |Xt =s, s′ (v)=1 s′0 :{v0 }×s′−1 (1)→{0,1}, s′0 |Xt =s0 ′ r′ :E(Gths′−1 (1),s′−1 0 (1)i )→{0,1}, r |Xt =r Join. Let t′ and t′′ be the two children of t, let s, s0 , and r be the functions defined as before, let H = Gt s−1 (1), s−1 0 (1) , and let S ⊆ E(H) be the set of edges that participate in a C3 of H. 33 We join every compatible entries At′ (s′ , s′0 , r′ , i′ , j ′ , ℓ′ ) and At′′ (s′′ , s′′0 , r′′ , i′′ , j ′′ , ℓ′′ ). For two such entries being compatible, we need s′ = s′′ = s and s′0 = s′′0 = s0 . Moreover, we do not want the solution graph to contain a diamond as a subgraph, and for this we need r′−1 (1) ∩ r′′−1 (1) = S. Indeed, either H contains the diamond as a subgraph, and then At′ (s′ , s′0 , r′ , i′ , j ′ , ℓ′ ) = At′′ (s′′ , s′′0 , r′′ , i′′ , j ′′ , ℓ′′ ) = {∅}, or the diamond is created by joining two C3 ’s, one from t′ and the other one from t′′ , sharing a common edge. This is possible only if (r′−1 (1) ∩ r′′−1 (1)) \ S 6= ∅. For the counters, we have to be careful in order not to count some element twice. We obtain that S ↓ join(At′ (s, s0 , r′ , i′ , j ′ , ℓ′ ), At′′ (s, s0 , r′′ , i′′ , j ′′ , ℓ′′ )). At (s, s0 , r, i, j, ℓ) = r′ ,r′′ :E(H)→{0,1}, r′−1 (1)∩r′′−1 (1)=S i′ +i′′ =i+|V (H)| j ′ +j ′′ =j+|E(H)| ℓ′ +ℓ′′ =ℓ+c3 (H) Theorem 12. {C4 }-TM-Deletion can be solved in time 2O(tw) · n7 . Proof. The algorithm works in the following way. For each node t ∈ V (T ) and for each entry M of its table, instead of storing At (M ), we store A′t (M ) = reduce(At (M )) by using Theorem 11. As each of the operation we use preserves representation by Proposition 10, we obtain that for each node t ∈ V (T ) and for each possible entry M , A′t (M ) represents At (M ). In particular, we have that A′r (M ) = reduce(Ar (M )) for each possible entry M . Using the definition of Ar , Lemma 20, and Lemma 21, we have that tm{C4 } (G) ≤ k if and only if for some i ≥ |V (G) ∪ {v0 }| − k and some j ≤ 32 (i − 1), we have A′r (∅, ∅, ∅, i, j, 1 + j − i) 6= ∅. We now focus on the running time of the algorithm. The size of the intermediate sets of weighted partitions, for a leaf node and for an introduce vertex node are upper−1 bounded by 2|s (1)| . For a forget vertex node, as in the big union operation we take into consideration a unique extension of s, at most 2 possible extensions of s0 , and at −1 most 2|s (1)| possible extenstions for r, we obtain that the intermediate sets of weighted −1 −1 −1 −1 partitions have size at most 2|s (1)| + 2 · 2|s (1)| · 2|s (1)| ≤ 22|s (1)|+2 . For a join node, as in the big union operation we take into consideration at most 2|E(H)| possible functions r′ and as many functions r′′ , at most n + |s−1 | choices for i′ and i′′ , at most 3 1 1 ′ ′′ 2 (n − 1) + |E(H)| choices for j and j , and at most 2 (n − 1) + 3 |E(H)| choices for ′ ′′ ℓ and ℓ , we obtain that the intermediate sets of weighted partitions have size at most −1 2|E(H)| · 2|E(H)| · (n + |s−1 |) · ( 23 (n − 1) + |E(H)|) · ( 21 (n − 1) + 13 |E(H)|) · 4|s (1)| . As each time we can check the condition C4 6tm H, by Lemma 21 m(H) ≤ 23 (n(H) − 1), so we −1 obtain that the intermediate sets of weighted partitions have size at most 6 · n3 · 25|s (1)| . 
Moreover, for each node t ∈ V (T ), the function reduce will be called as many times as the number of possible entries, i.e., at most 2O(w) · n3 times. Thus, using Theorem 11, A′t can be computed in time 2O(w) · n6 . The theorem follows by taking into account the linear number of nodes in a nice tree decomposition. 34 5 Single-exponential lower bound for any connected F Theorem 13. Let F be a connected collection. Neither F-TM-Deletion nor F-MDeletion can be solved in time 2o(tw) · nO(1) unless the ETH fails. Proof. Let F be a connected collection and recall that w = tw(G). Without loss of generality, we can assume that F is a topological minor antichain. We present a reduction from Vertex Cover to F-TM-Deletion, both parameterized by the treewidth of the input graph, and then we explain the changes to be made to prove the lower bound for F-M-Deletion. It is known that Vertex Cover cannot be solved in time 2o(w) · nO(1) unless the ETH fails [19] (in fact, it cannot be solved even in time 2o(n) ). First we select an essential pair (H, B) of F. Let a be the first vertex of (H, B), b be the second vertex of (H, B), and A be the core of (H, B). Let G be the input graph of the Vertex Cover problem and let < be an arbitrary total order on V (G). We build a graph G′ starting from G. For each vertex v of G, we add a copy of A, which we call Av , and we identify the vertices v and a. For each edge e = {v, v ′ } ∈ E(G) with v < v ′ , we remove e, we add a copy of B, which we call B e , and we identify the vertices v and a and the vertices v ′ and b. This concludes the construction of G′ . Note that |V (G′ )| = |V (G)| · |V (A)| + |E(G)| · |V (B) \ {a, b}| and that tw(G′ ) = max{tw(G), tw(H)}. We claim that there exists a solution of size at most k of Vertex Cover in G if and only if there is a solution of size at most k of F-TM-Deletion in G′ . In one direction, assume that S is a solution of F-TM-Deletion in G′ with |S| ≤ k. By definition of the problem, for each e = {v, v ′ } ∈ E(G) with v < v ′ , either B e contains an element of S or Av contains an element of S. Let S ′ = {v ∈ V (G) | ∃v ′ ∈ V (G) : v < v ′ , e = {v, v ′ } ∈ E(G), (V (B e ) \ {v, v ′ }) ∩ S 6= ∅} ∪ {v ∈ V (G) | V (Av ) ∩ S 6= ∅}. Then S ′ is a solution of Vertex Cover in G and |S ′ | ≤ |S| ≤ k. In the other direction, assume that we have a solution S of size at most k of Vertex Cover in G. We want to prove that S is also a solution of F-TM-Deletion in G′ . For this, we fix an arbitrary H ′ ∈ F and we show that H ′ is not a topological minor of G′ \S. First note that the connected components of G′ \ S are either of the shape Av \ {v} if ′ v ∈ S, B e \ e if e ⊆ S, or the union of Av with zero, one, or more graphs B {v,v } \ {v ′ } such that {v, v ′ } ∈ E(G) if v ∈ V (G) \ S. As F is a topological minor antichain, for any v ∈ V (G), H ′ 6tm Av \ {v} and for any e ∈ E(G), H ′ 6tm B e \ e. Moreover, let v ∈ V (G) \ S and let K be the connected component of G \ S containing v. K is the ′ union of Av and of every B {v,v } \ {v ′ } such that {v, v ′ } ∈ E(G). As, for each v ′ ∈ V (G) ′ such that {v, v ′ } ∈ E(G), v ′ is not an isolated vertex in B {v,v } , by definition of B, for ′ any B ′ ∈ L(bct(H ′ )), |E(B {v,v } \ {v ′ })| < |E(B ′ )|. This implies that for each leaf B ′ ′ of bct(H ′ ) and for each {v, v ′ } ∈ E(G), B ′ 6tm B {v,v } \ {v ′ }. Moreover, it follows by definition of F that H ′ 6tm Av . This implies by Lemma 2 that H ′ is not a topological minor of K. 
Moreover, as H′ is connected by hypothesis, it follows that H′ is not a topological minor of G′ \ S either. This concludes the proof for the topological minor version. Finally, note that the same proof applies to F-M-Deletion as well, just by replacing
If F-TM-Deletion has a solution on (G, k) then • this solution intersects every H-edge gadget, and • there exists a solution S such that for each H-edge gadget A between two vertices x and y, V (A) ∩ S ⊆ {x, y} and {x, y} ∩ S 6= ∅. In the following, we will always assume that the solution that we take into consideration is a solution satisfying the properties given by Proposition 16. Moreover, we will restrict the solution to contain only vertices of H-edge gadgets by setting an appropriate budget to the number of vertices we can remove from the input graph G. Given a graph G and two vertices x and y of G, by introducing a B-edge gadget between x and y we mean that we add a copy of B where we identify the first vertex of (H, B) with y and the second vertex of (H, B) with x. Given a graph G and three vertices x, y, and z of G, by introducing a double H-edge gadget between x and z through y we mean that we introduce an H-edge gadget between z and y and we introduce a B-edge gadget between x and y. Note that this implies that there is an H-edge gadget between x and y and an H-edge gadget between z and y. Given a set of s vertices {xi | i ∈ [1, s]}, by introducing an H-choice gadget connecting {xi | i ∈ [1, s]}, we mean that we add 2s + 2 vertices zi , i ∈ [0, 2s + 1], for each i ∈ [0, 2s], we introduce an H-edge gadget between zi and zi+1 , and for each i ∈ [1, s], we introduce two B-edge gadgets, one between xi and z2i−1 and another between xi and z2i . We see 37 the H-choice gadget as a graph induced by the vertices {xi | i ∈ [1, s]} ∪ {zi | i ∈ [0, 2s]}, the B-edge gadgets, and the H-edge gadgets. The following proposition is very similar to [30, Lemma 5]. Proposition 17. For every H-choice gadget connecting a set of s vertices {xi | i ∈ [1, s]}, • the smallest size of the restriction of a solution of F-TM-Deletion to this Hchoice gadget is 2s, • for every i ∈ [1, s], there exists a solution S such that xi 6∈ S, and • every solution S of size 2s is such that there exists i ∈ [1, s] such that xi 6∈ S. Proof. In any solution we need, for each i ∈ [1, s], to take two vertices among xi , z2i−1 , and z2i . This implies that any solution is of size at least 2s. Moreover, for every i ∈ [1, s], the set {xj | j 6= i}∪{z2j−1 | j ∈ [1, i]}∪{z2j | j ∈ [i, s]} is a solution of F-TM-Deletion in the H-choice gadget of size 2s. This can be seen using Lemma 2 and the fact that F is a topological minor antichain, as we did in the proof of Theorem 13. Finally, note that any solution need to contain at least s + 1 vertices from {zi | i ∈ [0, 2s + 1]}. This implies that in any solution S of size at most 2s, at least one vertex from {xi | i ∈ [1, s]} is not in S. We now start the description of the general construction. Given an instance (G, k) of k × k Permutation Clique, we construct an instance (G′ , ℓ) of F-TM-Deletion, which we call the general H-construction of (G, k). We first introduce k2 + 2k vertices, namely {ci | i ∈ [1, k]}, {ri | i ∈ [1, k]}, and {ti,j | i, j ∈ [1, k]}. For each i, j ∈ [1, k], we add the edges {rj , ti,j } and {ti,j , ci }. For each j ∈ [1, k], we introduce an H-choice gadget connecting {ti,j | i ∈ [1, k]}. This part of the construction is depicted in Figure 1. We now describe how we encode the edges of G in G′ . For each edge e ∈ E(G), we define the integers p(e), γ(e), q(e), and δ(e), elements of [1, k], such that e = {(p(e), γ(e)), (q(e), δ(e))} with p(e) ≤ q(e). Note that the edges e such that p(e) = q(e) are not relevant to our construction and hence we safely forget them. 
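Before continuing with the encoding of the edges of G, we note that the gadgets just defined lend themselves to a direct programmatic description. The following Python/networkx sketch builds the H-choice gadget of Proposition 17; it is an illustration only, assuming H and B are given as networkx graphs with designated vertices a and b (the first and second vertex of the essential pair), and all helper names are ours.

```python
import itertools
import networkx as nx

_counter = itertools.count()

def add_pattern_copy(G, P, a, b, y, x):
    """Add a fresh copy of the pattern graph P to G, identifying P's vertex a with y
    and P's vertex b with x (all other copy vertices get fresh labels).
    With P = H this is an H-edge gadget between x and y; with P = B, a B-edge gadget."""
    tag = next(_counter)
    mapping = {u: (y if u == a else x if u == b else ("copy", tag, u)) for u in P.nodes}
    G.add_edges_from((mapping[u], mapping[v]) for u, v in P.edges)

def add_choice_gadget(G, H, B, a, b, xs):
    """Sketch of the H-choice gadget connecting xs = [x_1, ..., x_s]: a chain
    z_0, ..., z_{2s+1} of H-edge gadgets, plus two B-edge gadgets per x_i."""
    s = len(xs)
    tag = next(_counter)
    z = [("z", tag, i) for i in range(2 * s + 2)]
    for i in range(2 * s + 1):
        add_pattern_copy(G, H, a, b, z[i + 1], z[i])       # H-edge gadget between z_i and z_{i+1}
    for i, x in enumerate(xs, start=1):
        add_pattern_copy(G, B, a, b, z[2 * i - 1], x)      # B-edge gadget between x_i and z_{2i-1}
        add_pattern_copy(G, B, a, b, z[2 * i], x)          # B-edge gadget between x_i and z_{2i}
    return z
```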
For each edge e ∈ E(G), we add to G′ three new vertices, d^ℓ_e, d^m_e, and d^r_e, and four edges {d^ℓ_e, c_{p(e)}}, {d^ℓ_e, r_{γ(e)}}, {d^r_e, c_{q(e)}}, and {d^r_e, r_{δ(e)}}. Then we introduce a double H-edge gadget between d^ℓ_e and d^r_e through d^m_e. The encoding of an edge e ∈ E(G) is depicted in Figure 2. For each 1 ≤ p < q ≤ k, we define E(p, q) = {e ∈ E(G) | (p(e), q(e)) = (p, q)} and we introduce an H-choice gadget connecting {d^ℓ_e | e ∈ E(p, q)}. Note that even if the construction seems to favour vertices of type d^ℓ_e over vertices of type d^r_e, the roles of the vertices of type d^ℓ_e and d^r_e are symmetric.

Figure 1: The general H-construction, where the dotted parts correspond to H-edge gadgets, without the encoding of the edges of E(G).

Figure 2: The encoding of an edge e = {(p(e), γ(e)), (q(e), δ(e))} of G, where the dotted lines correspond to a double H-edge gadget.

For each e ∈ E(G), we increase the size of the requested solution in G′ by one, the initial budget being the sum of the budgets given by Proposition 17 over all the H-choice gadgets introduced in the construction. Because of the double H-edge gadget, we need to take in the solution either d^m_e or both d^ℓ_e and d^r_e. The extra budget given for each edge allows us to include d^m_e in the solution. If the H-choice gadget connected to d^ℓ_e already chooses d^ℓ_e to be in the solution, then we can safely use the extra budget given for the edge e to add d^r_e to the solution instead of d^m_e. In the case where d^m_e is chosen, in the resulting graph c_{p(e)} remains connected to r_{γ(e)} and c_{q(e)} remains connected to r_{δ(e)}. In the following, we only consider solutions S such that, for each e ∈ E(G), either {d^ℓ_e, d^m_e, d^r_e} ∩ S = {d^ℓ_e, d^r_e} or {d^ℓ_e, d^m_e, d^r_e} ∩ S = {d^m_e}.

We set ℓ = 3|E(G)| + 2k². By construction, this budget is tight and only permits taking a minimum-size solution in every H-choice gadget and one endpoint of each H-edge gadget between d^r_e and d^m_e, e ∈ E(G). This concludes the general H-construction (G′, ℓ) of (G, k).

Let us now discuss the treewidth of G′. By deleting 2k vertices, namely the vertices {c_i | i ∈ [1, k]} and the vertices {r_j | j ∈ [1, k]}, we obtain a graph in which each connected component is an H-choice gadget, possibly with some pendant H-edge gadgets or double H-edge gadgets. As the treewidth of the H-choice gadget, the H-edge gadget, and the double H-edge gadget is linear in |V(H)|, we obtain that tw(G′) = O_d(k) (recall that d = size(F)).

Let σ : [1, k] → [1, k] be a permutation. If {(σ(i), i) | i ∈ [1, k]} induces a k-clique in G, then we say that σ is a solution of k × k Permutation Clique on (G, k). We denote by e^σ_{p,q} the edge {(σ(p), p), (σ(q), q)}. With such a σ, we associate a σ-general H-solution S such that
• S ⊆ V(G′),
• |S| = 3|E(G)| + 2k²,
• for each j ∈ [1, k], S restricted to the H-choice gadget connecting {t_{i,j} | i ∈ [1, k]} is a solution of F-TM-Deletion of size 2k such that {t_{i,j} | i ∈ [1, k]} \ {t_{σ(j),j}} ⊆ S and t_{σ(j),j} ∉ S,
• for each p, q ∈ [1, k], p < q, such that e^σ_{p,q} ∈ E(G), S restricted to the H-choice gadget connecting {d^ℓ_e | e ∈ E(p, q)} is a solution of F-TM-Deletion of size 2|E(p, q)| such that {d^ℓ_e | e ∈ E(p, q) \ {e^σ_{p,q}}} ⊆ S and d^ℓ_{e^σ_{p,q}} ∉ S,
• for each p, q ∈ [1, k], {d^r_e | e ∈ E(p, q) \ {e^σ_{p,q}}} ⊆ S and d^r_{e^σ_{p,q}} ∉ S, and
• for each p, q ∈ [1, k], d^m_{e^σ_{p,q}} ∈ S.

Note that, with this construction of S, we already impose 3|E(G)| + 2k² vertices to be in S, and therefore no other vertex of G′ can be in S.

Given S ⊆ V(G′), we say that S satisfies the permutation property if
• for each e ∈ E(G) such that d^ℓ_e ∉ S and for each j ∈ [1, k] \ {γ(e)}, t_{p(e),j} ∈ S, and
• for each e ∈ E(G) such that d^r_e ∉ S and for each j ∈ [1, k] \ {δ(e)}, t_{q(e),j} ∈ S.

Lemma 22. If S is a subset of V(G′) such that
• |S| ≤ 3|E(G)| + 2k²,
• for each H-choice gadget, S restricted to this H-choice gadget is a solution of F-TM-Deletion of minimum size,
• for each edge e ∈ E(G), S is such that either {d^ℓ_e, d^m_e, d^r_e} ∩ S = {d^ℓ_e, d^r_e} or {d^ℓ_e, d^m_e, d^r_e} ∩ S = {d^m_e}, and
• S satisfies the permutation property,
then
• there is a unique permutation σ : [1, k] → [1, k] such that S ∩ {t_{i,j} | i, j ∈ [1, k]} = {t_{i,j} | i, j ∈ [1, k]} \ {t_{σ(j),j} | j ∈ [1, k]}, and
• {(σ(j), j) | j ∈ [1, k]} induces a clique in G.

Proof. By hypothesis, for each 1 ≤ p < q ≤ k, we know that there is an edge e ∈ E(p, q) such that d^ℓ_e ∉ S and d^r_e ∉ S. As S satisfies the permutation property, this implies that, for each i ∈ [1, k], at most one vertex of the set {t_{i,j} | j ∈ [1, k]} is not in S. As we have assumed that, for each j ∈ [1, k], at least one vertex of the set {t_{i,j} | i ∈ [1, k]} is not in S, this implies that there is a unique permutation σ : [1, k] → [1, k] such that S ∩ {t_{i,j} | i, j ∈ [1, k]} = {t_{i,j} | i, j ∈ [1, k]} \ {t_{σ(j),j} | j ∈ [1, k]}.

Moreover, for each 1 ≤ p < q ≤ k, e^σ_{p,q} ∈ E(G). Indeed, if e^σ_{p,q} ∉ E(G), then there exists e ∈ E(p, q) such that σ(γ(e)) ≠ p or σ(δ(e)) ≠ q and such that d^ℓ_e ∉ S and d^r_e ∉ S. Assume, without loss of generality, that σ(γ(e)) ≠ p. As t_{p,σ⁻¹(p)} ∉ S, this contradicts the fact that S satisfies the permutation property.

We call the permutation given by Lemma 22 the associated permutation of S. To conclude the reduction, we deal separately with the cases F ⊆ P and F ⊆ K. For each such F, we will assume without loss of generality that F is a topological minor antichain, we will fix (H, B) to be an essential pair of F, and, given an instance (G, k) of k × k Permutation Clique, we will start from the general H-construction (G′, ℓ) and add some edges and vertices in order to build an instance (G″, ℓ) of F-TM-Deletion. We will show that if k × k Permutation Clique on (G, k) has a solution σ, then the σ-general H-solution is a solution of F-TM-Deletion on (G″, ℓ). Conversely, we will show that if F-TM-Deletion on (G″, ℓ) has a solution S, then this solution satisfies the permutation property. This automatically gives, using Lemma 22, that the associated permutation σ of S is a solution of k × k Permutation Clique on (G, k).

6.2 The reduction for paths

Theorem 18. Given an integer h ≥ 6, the {P_h}-TM-Deletion problem cannot be solved in time 2^{o(tw log tw)} · n^{O(1)}, unless the ETH fails.

Proof. Let h ≥ 6 be an integer and let F = {P_h}. Let (G, k) be an instance of k × k Permutation Clique and let (G′, ℓ) be the general P_h-construction of (G, k) described in Subsection 6.1. We build G″ starting from G′ by adding, for each i, j ∈ [1, k], a pendant path of size h − 6 to t_{i,j}. This completes the construction of G″. Note that if h = 6, then G″ = G′.

Let σ be a solution of k × k Permutation Clique on (G, k) and let S be the σ-general P_h-solution. To show that G″ \ S does not contain P_h as a topological minor, we show that no connected component of G″ \ S contains P_h as a topological minor.
Note that, by definition of S, the only connected components of G″ \ S that can contain P_h are the connected components that contain a vertex in {c_i | i ∈ [1, k]} ∪ {r_j | j ∈ [1, k]}. Moreover, each of these connected components is the subgraph of G″ induced by {r_j, c_{σ(j)}, t_{σ(j),j}} ∪ {d^ℓ_e | e ∈ E(G), σ(γ(e)) = p(e), σ(δ(e)) = q(e), p(e) = j} ∪ {d^r_e | e ∈ E(G), σ(γ(e)) = p(e), σ(δ(e)) = q(e), q(e) = j}, together with the vertices of the path of size h − 6 pendant to t_{σ(j),j}, for some j ∈ [1, k]. These connected components, depicted in Figure 3 for the case h = 8, do not contain P_h as a topological minor, as it is easily checked that a longest path in them has h − 1 vertices. Therefore, S is a solution of F-TM-Deletion on (G″, ℓ).

Figure 3: The connected component of G″ \ S that contains the vertex r_i, i ∈ [1, k], where h = 8.

Conversely, let S be a solution of F-TM-Deletion on (G″, ℓ). We first show that S satisfies the permutation property. Let e ∈ E(G) be such that d^ℓ_e ∉ S and assume that t_{p(e),j} ∉ S for some j ∈ [1, k] \ {γ(e)}. By construction of the P_h-choice gadget, we know that there exists i₀ ∈ [1, k] such that t_{i₀,γ(e)} ∉ S. This implies that the path r_j t_{p(e),j} c_{p(e)} d^ℓ_e r_{γ(e)} t_{i₀,γ(e)}, together with the pendant path with h − 6 vertices attached to t_{i₀,γ(e)}, forms a path with h vertices. As the same argument also works if d^r_e ∉ S, it follows that S satisfies the permutation property, concluding the proof.

6.3 The reduction for subsets of K

Theorem 19. Given a regular collection F ⊆ K, the F-TM-Deletion problem cannot be solved in time 2^{o(tw log tw)} · n^{O(1)}, unless the ETH fails.

Proof. Let F ⊆ K be a regular collection. We assume without loss of generality that F is a topological minor antichain. Let (H, B) be an essential pair of F, let a be the first vertex of (H, B), let b be the second vertex of (H, B), let B̄ = (V(B), E(B) \ {ab}), and let C be the core of (H, B). Let (G, k) be an instance of k × k Permutation Clique and let (G′, ℓ) be the general H-construction of (G, k). We build G″ starting from G′ by adding a new vertex r₀, adding a copy of the core of (H, B) and identifying a and r₀, and adding, for each j ∈ [1, k], a copy of B̄ in which we identify a and r₀, and b and r_j. This completes the construction of G″.

Let σ be a solution of k × k Permutation Clique on (G, k) and let S be the σ-general H-solution. We show that no graph of F is a topological minor of G″ \ S. For this, let us fix H′ ∈ F. Note that, by definition of S, the only connected component of G″ \ S that can contain H′ is the one containing {c_i | i ∈ [1, k]} ∪ {r_j | j ∈ [1, k]}. Let K be this connected component, depicted in Figure 4. Note that, for each j ∈ [0, k], r_j is a cut vertex of K. Moreover, we know that H′ is not a topological minor of C and that, for each B′ ∈ L(bct(H′)), B′ is not a topological minor of B̄ nor of K_{2,k}. This implies, using Lemma 2, that H′ is not a topological minor of K. Therefore, S is a solution of F-TM-Deletion on (G″, ℓ).

Figure 4: The main connected component of G″ \ S.

Conversely, let S be a solution of F-TM-Deletion on (G″, ℓ). We show that S satisfies the permutation property. Let e ∈ E(G) be such that d^ℓ_e ∉ S and assume that t_{p(e),j} ∉ S for some j ∈ [1, k] \ {γ(e)}. This implies that there exists in G″ \ S an (r₀, r_{γ(e)})-path that uses the vertices r₀, r_j, t_{p(e),j}, c_{p(e)}, d^ℓ_e, and r_{γ(e)} and that does not use any edge of the graph B̄ = (V(B), E(B) \ {ab}) between r₀ and r_{γ(e)}.
By construction, this implies that H is a topological minor of G′′ \ S. As the same argument also works if dre 6∈ S, it follows that S satisfies the permutation property, concluding the proof. 6.4 The minor version Theorems 18 and 19 allow to prove the statement of Theorem 14 for topological minors. The equivalent statement of Theorem 14 for minors follows from the same proof as for the topological minor version by applying the following local modifications: • we replace F-TM-Deletion with F-M-Deletion, • we replace topological minor with minor, • we replace tm with m , and • we replace Lemma 2 with Lemma 3. 7 Conclusions and further research We analyzed the parameterized complexity of F-M-Deletion and F-TM-Deletion taking as the parameter the treewidth of the input graph. We obtained a number of lower and upper bounds for general and specific collections F, several of them being tight (in particular, for all cycles). In order to complete the dichotomy for cliques and 43 paths (see Table 1), it remains to settle the complexity when F = {Ki } with i ≥ 5 and when F = {P5 }. An ultimate goal is to establish the tight complexity of F-Deletion for all collections F, but we are still very far from it. In particular, we do not know whether there exists some F for which a double-exponential lower bound can be proved, or for which the complexities of F-M-Deletion and F-TM-Deletion differ. In the last years, the F-M-Deletion problem has been extensively studied in the literature taking as the parameter the size of the solution [14,20–23]. In all these papers, FPT algorithms parameterized by treewidth play a fundamental role. We hope that our results will be helpful in this direction. Note that the connectivity of F was also relevant in previous work [14, 22], as it is the case of the current article. Getting rid of connectivity in both the lower and upper bounds we presented is an interesting avenue. Our algorithms for {Ci }-Deletion may also be used to devise approximation algorithms for hitting or packing long cycles in a graph (in the spirit of [7] for other patterns), possibly by using the fact that cycles of length at least i satisfy the Erdős-Pósa property [13]. We did not focus on optimizing either the degree of the polynomials or the constants involved in our algorithms. Concerning the latter, one could use the framework presented by Lokshtanov et al. [27] to prove lower bounds based on the Strong Exponential Time Hypothesis. It is easy to check that the lower bounds presented in Sections 5 and 6 also hold for treedepth (as it is the case in [30]) which is a parameter more restrictive than treewidth [6]. Also, it is worth mentioning that F-Deletion is unlikely to admit polynomial kernels parameterized by treewidth for essentially any collection F, by using the framework introduced by Bodlaender et al. [3] (see [6] for an explicit proof for any problem satisfying a generic condition). References [1] J. Baste, M. Noy, and I. Sau. The number of labeled graphs of bounded treewidth. CoRR, abs/1604.07273, 2016. [2] H. L. Bodlaender, M. Cygan, S. Kratsch, and J. Nederlof. Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth. Information and Computation, 243:86–111, 2015. [3] H. L. Bodlaender, R. G. Downey, M. R. Fellows, and D. Hermelin. On problems without polynomial kernels. Journal of Computer and System Sciences, 75(8):423– 434, 2009. [4] H. L. Bodlaender, P. G. Drange, M. S. Dregi, F. V. Fomin, D. Lokshtanov, and M. Pilipczuk. 
A ck n 5-Approximation Algorithm for Treewidth. SIAM Journal on Computing, 45(2):317–378, 2016. 44 [5] H. L. Bodlaender, F. V. Fomin, D. Lokshtanov, E. Penninkx, S. Saurabh, and D. M. Thilikos. (meta) kernelization. Journal of the ACM, 63(5):44:1–44:69, 2016. [6] M. Bougeret and I. Sau. How much does a treedepth modulator help to obtain polynomial kernels beyond sparse graphs? CoRR, abs/1609.08095, 2016. [7] D. Chatzidimitriou, J. Raymond, I. Sau, and D. M. Thilikos. An O(log OP T )Approximation for Covering/Packing Minor Models of Θr . In Proc. of the 13th International Workshop on Approximation and Online Algorithms (WAOA), volume 9499 of LNCS, pages 122–132, 2015. [8] B. Courcelle. The Monadic Second-Order Logic of Graphs. I. Recognizable Sets of Finite Graphs. Information and Computation, 85(1):12–75, 1990. [9] M. Cygan, F. V. Fomin, L. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015. [10] M. Cygan, J. Nederlof, M. Pilipczuk, M. Pilipczuk, J. M. M. van Rooij, and J. O. Wojtaszczyk. Solving Connectivity Problems Parameterized by Treewidth in Single Exponential Time. In Proc. of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 150–159, 2011. [11] R. Diestel. Graph Theory. Springer-Verlag, 3rd edition, 2005. [12] R. G. Downey and M. R. Fellows. Fundamentals of Parameterized Complexity. Texts in Computer Science. Springer, 2013. [13] S. Fiorini and A. Herinckx. A tighter Erdős-Pósa function for long cycles. Journal of Graph Theory, 77(2):111–116, 2014. [14] F. V. Fomin, D. Lokshtanov, N. Misra, and S. Saurabh. Planar F-Deletion: Approximation, Kernelization and Optimal FPT Algorithms. In Proc. of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 470–479, 2012. [15] F. V. Fomin, D. Lokshtanov, F. Panolan, and S. Saurabh. Efficient computation of representative families with applications in parameterized and exact algorithms. Journal of the ACM, 63(4):29:1–29:60, 2016. [16] F. V. Fomin, D. Lokshtanov, S. Saurabh, and D. M. Thilikos. Bidimensionality and kernels. In Proc. of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 503–510, 2010. Full version available at CoRR, abs/1606.05689, 2016. [17] V. Garnero, C. Paul, I. Sau, and D. M. Thilikos. Explicit linear kernels via dynamic programming. SIAM Journal on Discrete Mathematics, 29(4):1864–1894, 2015. 45 [18] J. E. Hopcroft and R. E. Tarjan. Efficient algorithms for graph manipulation. Communations of ACM, 16(6):372–378, 1973. [19] R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity? Journal of Computer and System Sciences, 63(4):512–530, 2001. [20] B. M. P. Jansen, D. Lokshtanov, and S. Saurabh. A near-optimal planarization algorithm. In Proc. of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1802–1811, 2014. [21] G. Joret, C. Paul, I. Sau, S. Saurabh, and S. Thomassé. Hitting and harvesting pumpkins. SIAM Journal on Discrete Mathematics, 28(3):1363–1390, 2014. [22] E. J. Kim, A. Langer, C. Paul, F. Reidl, P. Rossmanith, I. Sau, and S. Sikdar. Linear kernels and single-exponential algorithms via protrusion decompositions. ACM Transactions on Algorithms, 12(2):21:1–21:41, 2016. [23] E. J. Kim, C. Paul, and G. Philip. A single-exponential FPT algorithm for the K4 minor cover problem. Journal of Computer and System Sciences, 81(1):186–207, 2015. [24] T. Kloks. Treewidth. Computations and Approximations. 
Springer-Verlag LNCS, 1994. [25] J. Komlós and E. Szemerédi. Topological cliques in graphs II. Combinatorics, Probability & Computing, 5:79–90, 1996. [26] J. M. Lewis and M. Yannakakis. The node-deletion problem for hereditary properties is NP-complete. Journal of Computer and System Sciences, 20(2):219–230, 1980. [27] D. Lokshtanov, D. Marx, and S. Saurabh. Lower bounds based on the exponential time hypothesis. Bulletin of the EATCS, 105:41–72, 2011. [28] D. Lokshtanov, D. Marx, and S. Saurabh. Slightly superexponential parameterized problems. In Proc. of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 760–776, 2011. [29] D. Paik, S. M. Reddy, and S. Sahni. Deleting vertices to bound path length. IEEE Transactions on Computers, 43(9):1091–1096, 1994. [30] M. Pilipczuk. A tight lower bound for Vertex Planarization on graphs of bounded treewidth. Discrete Applied Mathematics, DOI: http://dx.doi.org/10.1016/j.dam.2016.05.019, 2016, to appear. [31] N. Robertson and P. D. Seymour. Graph minors. V. Excluding a planar graph. Journal of Combinatorial Theory, Series B, 41(1):92–114, 1986. [32] N. Robertson and P. D. Seymour. Graph minors. X. Obstructions to tree decomposition. Journal of Combinatorial Theory, Series B, 52(2):153–190, 1991. 46
Local Marchenko-Pastur Law for Random Bipartite Graphs

Kevin Yang†

arXiv:1704.08672v1 [math.PR] 27 Apr 2017

April 28, 2017

Abstract. We study the random matrix ensemble of covariance matrices arising from random (d_b, d_w)-regular bipartite graphs on a set of M black vertices and N white vertices, for d_b ≫ log⁴ N. We simultaneously prove that the Green's functions of these covariance matrices and of the adjacency matrices of the underlying graphs agree with the corresponding limiting law (e.g. the Marchenko-Pastur law for covariance matrices) down to the optimal scale. This improves on the previously known mesoscopic results. As a consequence we obtain eigenvector delocalization for the covariance matrix ensemble, as well as a weak rigidity estimate.

Contents
1. Introduction
  1.1. Acknowledgements
  1.2. Notation
2. Underlying Model and Main Results
  2.1. The Random Matrix Ensemble
  2.2. Random Covariance Matrices
  2.3. The Main Result
3. Switchings on Bipartite Graphs
  3.1. Switchings on Adjacency Matrices
  3.2. Probability Estimates on Vertices
4. Green's Function Analysis
  4.1. Preliminary Resolvent Theory
  4.2. Reductions of the Proof of Theorem 2.5
  4.3. Switchings on Green's Functions
5. The Self-Consistent Equation and Proof of Theorem 2.5
  5.1. Derivation of the self-consistent equation
  5.2. Analysis of the self-consistent equation
  5.3. Final estimates
6. Appendix
  6.1. Estimates on the Stieltjes transforms on the upper-half plane
References

† Stanford University, Department of Mathematics. Email: [email protected].

1. Introduction

Let X_A denote the adjacency matrix of a random (d_b, d_w)-biregular graph with off-diagonal blocks A, A*. Here, we assume A is a matrix of size M × N with M > N. We define the normalized empirical spectral distributions of d_w^{-1} A*A and d_w^{-1/2} X_A to be the following macroscopic random point masses:

(1.1)  μ_{A*A} = (1/N) ∑_{λ ∈ σ(A*A)} δ_{d_w^{-1} λ}(x),

(1.2)  μ_{X_A} = (1/(M + N)) ∑_{λ ∈ σ(X_A)} δ_{d_w^{-1/2} λ}(x).

It is known (see [3]) that the empirical spectral distribution of the normalized covariance matrix converges almost surely (in the limit M, N → ∞ and d → ∞ at a suitable rate) to the Marchenko-Pastur law with parameter γ := N/M, given by the following density function:

(1.3)  ρ_∞(x) dx := ρ_MP(x) dx = [ √((λ₊ − x)(x − λ₋)) / (2πγx) ] 1_{x ∈ [λ₋, λ₊]} dx,

where we define λ± = (1 ± √γ)². As noted in [8], this implies that the empirical spectral distribution of the normalized adjacency matrix converges almost surely to a linearization of the Marchenko-Pastur law, given by the following density function:

(1.4)  ρ(E) = (γ / ((1 + γ)π|E|)) √((λ₊ − E²)(E² − λ₋)) if E² ∈ [λ₋, λ₊],  and  ρ(E) = 0 if E² ∉ [λ₋, λ₊].

We briefly remark that in the regime M = N, the linearized Marchenko-Pastur density agrees exactly with the Wigner semicircle density, which is the limiting density for the empirical spectral distribution of random d-regular graphs on N vertices in the limit N, d → ∞ at suitable rates. In this regime, coincidence of the empirical spectral distribution and the limiting Wigner semicircle law was shown for intervals at the optimal scale N^{-1+ε} in [5]. This short-scale result is crucial for understanding eigenvalue gap and correlation statistics and for showing universality of eigenvalue statistics for random regular graphs compared to the GOE. Moreover, the short-scale result is a drastic improvement over the order-1 result discussed above for biregular bipartite graphs.
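For reference, the two limiting densities (1.3) and (1.4) can be evaluated directly. The following Python sketch implements them; it is an illustration only, with function names of our choosing, and it assumes γ ≤ 1 (matching M ≥ N; for γ = 1 the densities have an integrable singularity at the origin).

```python
import numpy as np

def mp_density(x, gamma):
    """Marchenko-Pastur density (1.3) with parameter gamma = N/M <= 1 (vectorized in x)."""
    lam_m, lam_p = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    mask = (x >= lam_m) & (x <= lam_p)
    out[mask] = np.sqrt((lam_p - x[mask]) * (x[mask] - lam_m)) / (2 * np.pi * gamma * x[mask])
    return out

def linearized_density(E, gamma):
    """Linearized Marchenko-Pastur density (1.4), supported on {E : E^2 in [lam_m, lam_p]}."""
    lam_m, lam_p = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    E = np.atleast_1d(np.asarray(E, dtype=float))
    out = np.zeros_like(E)
    mask = (E ** 2 >= lam_m) & (E ** 2 <= lam_p)
    out[mask] = gamma / ((1 + gamma) * np.pi * np.abs(E[mask])) * np.sqrt(
        (lam_p - E[mask] ** 2) * (E[mask] ** 2 - lam_m))
    return out
```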
For these graphs, convergence of the empirical spectral distribution of d−1 w A A to the Marchenko-Pastur law was shown for scales N −ε for sufficiently small ε > 0 in [8]. The techniques used in this paper included primarily analysis of trees and ballot sequences. This result, however, is far from the optimal scale and is thus far from sufficient for showing universality of eigenvalue statistics. The aim of this paper is remedy this problem and obtain convergence at the optimal scale. Similar to [5], we bypass the analysis of trees and ballot sequences with a combinatorial operator on graphs known as switchings, which are ubiquitous throughout graph theory. This will help us resample vertices in a random graph and will be crucial in deriving a tractable self-consistent equation for the Green’s function of a random biregular bipartite graph. However, in contrast to [8], we aim to prove convergence at the optimal scale for the ensembles of (normalized) covariance matrices and adjacency matrices simultaneously, transferring between the analysis of each ensemble whenever convenient. In [8], the result for adjacency matrices was derived as a consequence of the result for covariance matrices. To the author’s knowledge, this idea is original to this paper. For random regular graphs, universality of local bulk eigenvalue correlation statistics was shown in [4], which required both the local law from [5] as a crucial ingredient as well as analysis of the Dyson Brownian Motion in [13]. In a similar spirit for random biregular bipartite graphs, we prove universality of bulk eigenvalue statistics in [17] using the local law result in this paper and an analysis of Dyson Brownian Motion for covariance matrices in [18]. Thus, this paper may be viewed as the first in a series of three papers on random covariance matrices resembling the papers [5], [4], [13], and [12] which study Wigner matrices. Before we proceed with the paper, we remark that as with random regular graphs and Wigner ensembles, the covariance matrix ensembles arising from biregular bipartite graphs is a canonical example of a covariance matrix ensemble whose 2 entries are nontrivially correlated. A wide class of covariance matrices with independent sample data entries was treated in the papers [1], [6], [7], [15], and [16]. In these papers, local laws were derived and universality of local eigenvalue correlation statistics were proven assuming moment conditions. Because of the nontrivial correlation structure of the sample data entries and the lack of control of entry-wise moments, these papers and their methods cannot apply to our setting. 1.1. Acknowledgements. The author thanks H.T. Yau and Roland Bauerschmidt for suggesting the problem, referring papers, and answering the author’s questions pertaining to random regular graphs. This work was partially funded by a grant from the Harvard College Research Program. This paper was written while the author was a student at Harvard University. 1.2. Notation. We adopt the Landau notation for big-Oh notation, and the notation a . b. We establish the notation [[a, b]] := [a, b] ∩ Z. We let [E] denote the underlying vertex set of a graph E. For vertices v, v ′ ∈ E, we let vv ′ denote the edge in E containing v and v ′ . For a real symmetric matrix H, we let σ(H) denote its (real) spectrum. 2. Underlying Model and Main Results We briefly introduce the underlying graph model consisting of bipartite graphs on a prescribed vertex set. Definition 2.1. Suppose V = {1b , 2b , . . . , Mb , 1w , . . . 
, N_w} is a set of labeled vertices, and suppose E is a simple graph on V. We say the graph E is bipartite with respect to the vertex sets (V, V_b, V_w) if V admits the following decomposition:

(2.1)  V = {1_b, 2_b, …, M_b} ∪ {1_w, 2_w, …, N_w} =: V_b ∪ V_w,

such that for any vertices v_i, v_j ∈ V_b and v_k, v_ℓ ∈ V_w, the edges v_i v_j and v_k v_ℓ are not contained in E. Moreover, for fixed integers d_b, d_w > 0, we say that a bipartite graph E is (d_b, d_w)-regular if each v ∈ V_b has d_b neighbors and if each w ∈ V_w has d_w neighbors.

Remark 2.2. For the remainder of this paper, we will refer to V_b as the set of black vertices and V_w as the set of white vertices. Moreover, we will refer to a (d_b, d_w)-regular graph simply as a biregular graph if the parameters d_b, d_w are assumed. In particular, when referring to a biregular graph we assume a bipartite structure. Lastly, the set of (d_b, d_w)-regular graphs on the vertex sets (V, V_b, V_w) will be denoted by Ω, where we suppress the dependence on the parameters M, N, d_b, d_w without risk of confusion.

We now record the following identity, which follows from counting the total number of edges in a biregular graph E:

(2.2)  M d_b = N d_w,

where M = M_b and N = N_w. We retain this notation for M, N for the remainder of the paper.

2.1. The Random Matrix Ensemble. We now introduce a modification of the random matrix ensemble studied in [8], retaining the notation used in the introduction of this paper. We first note that the adjacency matrix of a biregular graph is a block matrix with vanishing diagonal, i.e. it has the following algebraic form:

(2.3)  X_A = [ 0  A ; A*  0 ],

where A is a matrix of size M × N. By the biregularity assumption on the underlying graph, the matrix X_A exhibits the following eigenvalue-eigenvector pair:

(2.4)  λ_max = √(d_b d_w),   e_max = (1/√2) (e_b, e_w)ᵀ,

where e_b and e_w are constant ℓ²-normalized vectors of dimension M and N respectively. By the Perron-Frobenius theorem, the eigenvalue λ_max is simple.

Using ideas from [5], the matrix ensemble of interest is the ensemble X = X(M, N, d_b, d_w) of normalized adjacency matrices given by

(2.5)  X = [ 0  H ; H*  0 ],   H = d_w^{-1/2} (A − d_b e_b e_w*).

Because the eigenspace corresponding to λ_max is one-dimensional, standard linear algebra implies that, up to the normalization factor d_w^{-1/2}, the matrices X and X_A share the same eigenvalue-eigenvector pairs orthogonal to the eigenspace corresponding to λ_max. On this maximal eigenspace, the matrix X exhibits e_max as an eigenvector corresponding to the eigenvalue λ = 0. Moreover, as noted in [8], the spectrum of X is compactly supported in a sense we will shortly make precise.

To complete our discussion of the random matrix ensemble of interest, we first note that X is clearly in bijection with the set of biregular graphs on the fixed triple (V, V_b, V_w), which is a finite set for each fixed M, N, d_b, d_w. Because Ω is finite, we may impose the uniform probability measure on it. We now define the following fundamental parameters:

(2.6)  α := M/N = d_w/d_b,   γ := 1/α = N/M,

where here we use the identity (2.2). As in [8], we now impose the following constraint on the parameters M, N, d_b, d_w:

(2.7)  lim_{M,N→∞} α ≥ 1.

This assumption is not crucial, as we may also relabel the vertices V if M < N in the limit. It will, however, be convenient in our analysis of the spectral statistics. This completes our construction of the random matrix ensemble of interest.
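In matrix terms, passing from the biadjacency matrix A to the normalized ensemble member is a one-line computation. The following Python sketch assembles H and X following the formula in (2.5); the function and variable names are ours, and the input A is assumed to be the M × N biadjacency matrix of a (d_b, d_w)-biregular graph.

```python
import numpy as np

def normalized_linearization(A, d_b, d_w):
    """Build H = d_w^{-1/2} (A - d_b e_b e_w*) and the block matrix X, as in (2.5).
    A sketch only: A is the M x N biadjacency matrix of a biregular graph."""
    M, N = A.shape
    e_b = np.ones(M) / np.sqrt(M)          # constant ell^2-normalized vectors
    e_w = np.ones(N) / np.sqrt(N)
    H = (A - d_b * np.outer(e_b, e_w)) / np.sqrt(d_w)
    X = np.block([[np.zeros((M, M)), H], [H.T, np.zeros((N, N))]])
    return H, X
```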
2.2. Random Covariance Matrices. Strictly speaking, the random matrix ensemble studied in [8] was the ensemble X_* consisting of the corresponding N × N covariance matrices:

(2.8)  X_* = H*H,   H = d_w^{-1/2} (A − d_b e_b e_w*).

The ensemble X introduced in this paper may be realized as a linearization of the ensemble X_* of covariance matrices. This will be the upshot of working with the matrix ensemble X instead whenever convenient, i.e. when studying linear perturbations of the adjacency matrix of a biregular graph.

The following result shows that when transferring between the ensembles X and X_*, the spectral data is preserved. This result is standard in linear algebra and the analysis of compact operators, but we include it for completeness and for organizational purposes, as the result does not seem to be written precisely and formally in any standard text. Before we give the result and its (short) proof, we define the following third matrix ensemble X_{*,+} of M × M covariance matrices:

(2.9)  X_{*,+} = HH*.

The ensemble X_{*,+} will not play any essential role in our analysis of random matrix ensembles and is included in this paper for the sake of completeness of our results.

Proposition 2.3. Suppose H is a real-valued matrix of size M × N with M > N, and suppose X is a block matrix of the following form:

(2.10)  X = [ 0  H ; H*  0 ].

• (I). The spectrum of X admits the following decomposition:

(2.11)  σ(X) = σ^{1/2}(H*H) ∪ ζ(X),

where σ^{1/2}(H*H) denotes the pairs of eigenvalues ±λ such that (±λ)² is an eigenvalue of H*H. Here, ζ(X) denotes the set of eigenvalues not in σ^{1/2}(H*H), all of which are 0.

• (II). The spectrum of HH* admits the following decomposition:

(2.12)  σ(HH*) = σ(H*H) ∪ ζ²(X),

where ζ²(X) denotes the set of eigenvalues not in σ(H*H), all of which are 0.

• (III). Suppose λ² ∈ σ(H*H) is associated to the following ℓ²-normalized eigenvectors:

(2.13)  v_* ↔ H*H,
(2.14)  v_{*,+} ↔ HH*.

Then ±λ is associated to the following ℓ²-normalized eigenvector pair of X:

±λ ↔ (1/√2) ( v_{*,+} ; ±v_* ).

• (IV). Conversely, any eigenvalue pair ±λ ∈ σ^{1/2}(H*H) is associated to the following ℓ²-normalized eigenvector pair of X:

(2.15)  ±λ ↔ (1/√2) ( v_{*,+} ; ±v_* ),

where v_{*,+} is an ℓ²-normalized eigenvector of HH* with eigenvalue λ² and v_* is an ℓ²-normalized eigenvector of H*H with eigenvalue λ².

• (V). Suppose λ = 0 ∈ ζ²(X) is associated to the ℓ²-normalized eigenvector v_λ of HH*. Then, for some λ′ = 0 ∈ ζ(X), the corresponding ℓ²-normalized eigenvector is given by

(2.16)  λ′ ↔ ( v_λ ; 0 ).

• (VI). Conversely, suppose λ ∈ ζ(X). Then λ = 0 is associated to the following ℓ²-normalized eigenvector of X:

(2.17)  λ ↔ ( v_λ ; 0 ),

where v_λ is an ℓ²-normalized eigenvector of HH* with eigenvalue λ′ = 0.

Remark 2.4. We briefly note that Proposition 2.3 applies to a much more general class of covariance matrices and their linearizations, as it does not refer to any underlying graph or graph structure.

Proof. Statements (I) and (II) are a consequence of the SVD (singular value decomposition) of the matrix H and dimension counting. Statements (III) through (VI) follow from a direct calculation and dimension counting. □
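Although the proof is immediate, Proposition 2.3 is easy to sanity-check numerically. The following Python snippet verifies statements (I) and (II) on a small random matrix H; the dimensions and names are illustrative only and no graph structure is needed here.

```python
import numpy as np

# Numerical check of Proposition 2.3, statements (I) and (II), on a random M x N matrix H.
M, N = 7, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((M, N))
X = np.block([[np.zeros((M, M)), H], [H.T, np.zeros((N, N))]])

eig_X = np.sort(np.linalg.eigvalsh(X))
eig_HtH = np.sort(np.linalg.eigvalsh(H.T @ H))   # N eigenvalues of H*H
eig_HHt = np.sort(np.linalg.eigvalsh(H @ H.T))   # M eigenvalues of HH*

# (I): sigma(X) consists of the pairs +/- sqrt(lambda), lambda in sigma(H*H), plus M - N zeros
expected_X = np.sort(np.concatenate([np.sqrt(eig_HtH), -np.sqrt(eig_HtH), np.zeros(M - N)]))
assert np.allclose(eig_X, expected_X)

# (II): sigma(HH*) = sigma(H*H) together with M - N additional zero eigenvalues
expected_HHt = np.sort(np.concatenate([eig_HtH, np.zeros(M - N)]))
assert np.allclose(eig_HHt, expected_HHt)
```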
2.3. The Main Result. We begin with notation for the Stieltjes transforms of the Marchenko-Pastur law and its linearization, respectively:

(2.18)  m_∞(z) = ∫_R ρ_∞(x)/(x − z) dx,
(2.19)  m(z) = ∫_R ρ(x)/(x − z) dx.

Here, we take z ∈ C₊ or z ∈ C₋. We also define the following perturbed Stieltjes transforms to address the ensemble X_{*,+}:

(2.20)  m_{∞,+}(z) := γ m_∞(z) + (γ − 1)/z = ∫_R [γ ρ_∞(x) + (1 − γ) δ_0(x)] / (x − z) dx,
(2.21)  m_+(z) := γ m(z) + (γ − 1)/z = ∫_R [γ ρ(x) + (1 − γ) δ_0(x)] / (x − z) dx.

This describes the limiting spectral behavior. For the graphs themselves, we define the following Green's functions of the matrix ensembles X, X_*, and X_{*,+}:

(2.22)  G(z) = (X − z)⁻¹,  X ∈ X;
(2.23)  G_*(z) = (X_* − z)⁻¹,  X_* ∈ X_*;
(2.24)  G_{*,+}(z) = (X_{*,+} − z)⁻¹,  X_{*,+} ∈ X_{*,+}.

We also define the Stieltjes transforms of each covariance matrix ensemble X_* and X_{*,+}, which may be realized as the Stieltjes transforms of the corresponding empirical spectral distributions in the spirit of Proposition 2.3:

(2.25)  s_*(z) = (1/N) Tr G_*(z),   s_{*,+}(z) = (1/M) Tr G_{*,+}(z).

Lastly, for the ensemble X, we define instead the partial Stieltjes transforms which average only over the diagonal terms of a specified color (black or white):

(2.26)  s_b(z) = (1/M) ∑_{i=1}^{M} G_ii(z),   s_w(z) = (1/N) ∑_{k=M+1}^{M+N} G_kk(z).

We now introduce domains in the complex plane on which we study the Green's functions of each matrix ensemble. These domains are engineered to avoid the singularities of the Green's functions near the origin and, in the case of the linearized Marchenko-Pastur law, the edge of the support. To this end, we establish notation for the following subsets of the complex plane, for any fixed ε > 0:

(2.27)  U_{ε,±} := {z = E + iη : |E| > ε, ±η > 0},
(2.28)  U_ε := U_{ε,+} ∪ U_{ε,−}.

We will also need the following control parameters:

(2.29)  D := d_b ∧ (N²/d_b³),
(2.30)  Φ(z) := 1/√(Nη) + 1/√D,
(2.31)  F_z(r) = F(r) := [ (1 + 1/√((λ₊ − z)(z − λ₋))) r ] ∧ √r.

We now present the main result of this paper.

Theorem 2.5. Suppose ξ = ξ_N is a parameter chosen such that the following growth conditions on D and η hold:

(2.32)  ξ log ξ ≫ log² N,   |η| ≫ ξ²/N,   D ≫ η².

Then, for any fixed ε > 0, we have the following estimates with probability at least 1 − e^{−ξ log ξ}, uniformly over all z = E + iη ∈ U_ε with η satisfying the growth condition in (2.32):

(2.33)  max_i |[G_*(z)]_ii − m_∞(z)| = O(F_z(ξΦ)),   max_{i≠j} |[G_*(z)]_ij| = O(ξΦ(z²)/z).

Similarly, for any fixed ε > 0, we have the following estimates with probability at least 1 − e^{−ξ log ξ}, uniformly over all parameters z = E + iη ∈ U_ε with η satisfying the growth condition in (2.32):

(2.34)  max_i |[G_{*,+}(z)]_ii − m_{∞,+}(z)| = O(F_z(ξΦ)),   max_{i≠j} |[G_{*,+}(z)]_ij| = O(ξΦ(z²)/z).

Conditioning on the estimates (2.33) and (2.34), respectively, uniformly over z = E + iη ∈ U_ε with η satisfying the growth condition in (2.32), we have

(2.35)  |s_*(z) − m_∞(z)| = O(F_z(ξΦ)),   |s_{*,+}(z) − m_{∞,+}(z)| = O(F_z(ξΦ)).

Conditioning on the estimates (2.33), uniformly over all z = E + iη with η satisfying the growth condition in (2.32), we have

(2.36)  max_{k>M} |[G(z)]_kk − m(z)| = O(z F_z²(ξΦ(z²))),
(2.37)  max_{M<k<ℓ} |[G(z)]_kℓ| = O(ξΦ).

Moreover, conditioning on the estimates (2.34), for any fixed ε > 0, uniformly over all z = E + iη ∈ U_ε with η satisfying the growth condition in (2.32), we have

(2.38)  max_{i≤M} |[G(z)]_ii − m_+(z)| = O(z F_z²(ξΦ(z²))),
(2.39)  max_{i<j≤M} |[G(z)]_ij| = O(ξΦ).

Conditioning on (2.33) and (2.34), we have the following estimates uniformly over all z = E + iη with η satisfying the growth condition in (2.32):

(2.40)  |s_b(z) − m_+(z)| = O(z F_z²(ξΦ(z²))),
(2.41)  |s_w(z) − m(z)| = O(z F_z²(ξΦ(z²))).

Lastly, the estimates (2.33) and (2.35) hold without the condition |E| > ε if α > 1. The estimates (2.38) and (2.39) hold without the condition |E| > ε if α = 1.
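To illustrate the objects appearing in Theorem 2.5, the Stieltjes transform m_∞ can be evaluated by direct quadrature and compared with an empirical Stieltjes transform. The sketch below uses an i.i.d. Gaussian matrix as a stand-in for the graph ensemble (an assumption made purely for illustration; sampling a uniform biregular graph is more involved), so it demonstrates the Marchenko-Pastur approximation in the spirit of (2.35) rather than the theorem itself. All names are ours.

```python
import numpy as np
from scipy.integrate import quad

def m_infty(z, gamma):
    """Stieltjes transform m_infty of (2.18), by quadrature against the MP density."""
    lam_m, lam_p = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    dens = lambda x: np.sqrt((lam_p - x) * (x - lam_m)) / (2 * np.pi * gamma * x)
    E, eta = z.real, z.imag
    re = quad(lambda x: dens(x) * (x - E) / ((x - E) ** 2 + eta ** 2), lam_m, lam_p, limit=200)[0]
    im = quad(lambda x: dens(x) * eta / ((x - E) ** 2 + eta ** 2), lam_m, lam_p, limit=200)[0]
    return re + 1j * im

# Empirical Stieltjes transform of a sample covariance matrix with i.i.d. Gaussian
# entries, used here as a stand-in for the graph ensemble of the paper.
M, N = 2000, 1000
gamma = N / M
rng = np.random.default_rng(1)
H = rng.standard_normal((M, N)) / np.sqrt(M)
evals = np.linalg.eigvalsh(H.T @ H)
z = 1.0 + 1j * N ** (-0.5)                      # a mesoscopic spectral parameter in the bulk
s_star = np.mean(1.0 / (evals - z))
print(abs(s_star - m_infty(z, gamma)))          # small, and decaying as N grows
```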
Remark 2.6. We briefly remark on the repulsion assumption |E| > ε in Theorem 2.5. The removal of this assumption, discussed at the end of the statement of Theorem 2.5, is a direct consequence of studying the dependence of the singularities of the Green's functions and Stieltjes transforms at the origin on the structural parameter α. For example, a singularity of m_∞ at the origin occurs exactly when α = 1. Moreover, the singularities in the Stieltjes transforms of the matrices and the singularities of m_{∞,+} at the origin cancel each other out, allowing for a regularization at the origin.

Remark 2.7. We last remark that if α = 1, the covariance matrices X_* and X_{*,+} are equal in law. This comes from the symmetry of the bipartite graph between the two vertex sets V_b and V_w, i.e. the graph statistics are unchanged upon relabeling the graph. This allows us to remove the assumption |E| > ε > 0 for certain estimates in Theorem 2.5 in the regime α = 1.

We now discuss important consequences of Theorem 2.5, the first of which is the following result on eigenvector delocalization, i.e. an estimate on the ℓ∞-norm of an eigenvector in terms of its ℓ²-norm. The proof of this delocalization result will be delegated to a later section, after we study in more detail the spectral data of covariance matrices.

Corollary 2.8. (Eigenvector Delocalization). Assume the setting of Theorem 2.5, and suppose u is an eigenvector of X_* with eigenvalue λ. Then, with probability at least 1 − e^{−ξ log ξ}, we have

(2.42)  ‖u‖_{ℓ∞} = O( (ξ/√N) ‖u‖_{ℓ²} ).

We briefly remark that the eigenvector delocalization fails for the larger covariance matrix X_{*,+}.

Proof. First, we note that in the case u ∈ Span(e_w), the result is trivially true. Moreover, by Proposition 2.3, it suffices to prove the claim for eigenvectors of the linearization X, replacing the ℓ∞-norm by a supremum over indices k > M. We now take for granted that |z m_∞(z²)| = O(1) uniformly for z = E + iη ∈ C₊; this follows from an elementary analysis of the Stieltjes transform discussed in the appendix of this paper. This allows us to obtain the following string of inequalities, with probability at least 1 − e^{−ξ log ξ} and for any index k > M:
The proof will roughly consist of the following three steps: • (I). The first step will be to adapt the methods in [5] to define and study a method of resampling biregular graphs in Ω. The resampling will be generated by local operations on a given graph known as switchings, which we will define more precisely in a later section. The local nature of the resampling method will help us derive equations exploiting the probabilistic stability of the Green’s function under these switchings. • (II). The second step will be to study the Green’s functions of the three matrix ensemble simultaneously. This includes both a preliminary analysis and a further analysis using the switching dynamics established in the previous step. In particular, we derive an approximate self-consistent equation for the diagonal entries of the Green’s function and study its stability properties. As in [5], this will help us compare the diagonal of the Green’s function to the associated Stieltjes transform. The equation in [5], however, contains a constant leading-order coefficient whereas for covariance matrices the leading-order coefficient is nonconstant. We adapt the methods suitably to handle this nonlinearity. 3. Switchings on Bipartite Graphs We begin by introducing notation necessary to define switchings on biregular graphs. Switchings will be local operations on the biregular graphs, so we will establish notation for vertices and edges containing said vertices as follows. Notation 3.1. A generic vertex in Vb (resp. Vw ) will be denoted by vb (resp. vw ). b For a fixed graph E ∈ Ω, we will denote the edges in E containing vb by {eb,µ }dµ=1 . Moreover, for a fixed edge eb,µ containing vb , we will denote the neighboring vertex by vb,µ , so that eb,µ = vb vb,µ . w . For a fixed edge ew,ν containing vw , we will denote the Similarly, the edges in E containing vw will be denoted by {ew,ν }dν=1 neighboring vertex by vw,ν . 8 For a fixed vertex vb ∈ Vb , we establish the notation for the set of edges not containing vb : (3.1) Uvb := {edges e ∈ E : vb 6∈ e} . Similarly for a fixed vertex vw ∈ Vw , we define the following set of edges not containing vw : (3.2) Uvw := {edges e ∈ E : vw 6∈ e} . We may now begin to define a switching on a generic graph E ∈ Ω. To this end, we fix a black vertex vb ∈ Vb and an edge ev,µ for some µ ∈ [[1, db ]]. We define the following space of subgraphs of E: (3.3) Svb ,µ,E := {S ⊂ E : S = {eb,µ , pb,µ , qb,µ }, pb,µ 6= qb,µ ∈ Uvb } . In words, the set Svb ,µ,E is the set of graphs consisting of the edges eb,µ and any two distinct edges pb,µ and qb,µ , neither of which contains the vertex vb . Similarly, we may define for a fixed white vertex vw ∈ Vw and edge ew,ν , for some ν ∈ [[1, dw ]], the same set of graphs: (3.4) Svw ,µ,E := {S ⊂ E : S = {ew,ν , pw,ν , qw,ν } : pw,ν 6= qw,ν ∈ Uvw } . Notation 3.2. A generic graph in Svb ,µ,E will be denoted by Sb,µ . A generic graph in Svw ,ν,E will be denoted by Sw,ν . The set Svb ,µ,E contains the edge-local data along which switchings on graphs will be defined. To make this precise, we need to introduce the following indicator functions. First, we define the following configuration vectors for fixed vertices vb ∈ Vb and vw ∈ Vw : (3.5) d d w Svw := (Sw,ν )ν=1 . b Svb := (Sb,µ )µ=1 , With this notation, we define the following indicator functions that detect graph properties in Sb,µ and Sw,ν . (3.6) (3.7) I(Sb,µ ) = 1 (|[Sb,µ ]| = 6) , Y J(Svb , µ) = 1 ([Sb,µ ] ∩ [Sb,µ′ ] = {vb }) , µ′ 6=µ (3.8) W (Svb ) = {µ : I(Sb,µ )J(Svb , µ) = 1} . 
For white vertices vw ∈ Vw , the functions I, J and W retain the same definition upon replacing b with w and µ with ν. e which will make the switchings systematic from the perspective of We now define the augmented probability spaces Ω Markovian dynamics. For a fixed black vertex vb ∈ Vb and a fixed white vertex vw ∈ Vw , we define the following augmented space: ) ( db dw Y Y e = (E, Sv , Svw ) , E ∈ Ω, Sv ∈ (3.9) Ω Svb ,µ,E , Svw ∈ Svw ,ν,E . b b µ=1 ν=1 e To this end we define switchings on configuration vectors We now precisely define switchings by defining dynamics on Ω. Svb and Svw ; we first focus on the configuration vectors for black vertices. Fix a label µ and consider a component Sb,µ of a uniformly sampled configuration vector Svb . Precisely, the components of Svb are sampled jointly uniformly and independently from Svb ,µ,E , where E ∈ Ω is uniform over all µ and sampled uniformly. We now define the following map: (3.10) Tb : db Y µ=1 Svb ,µ,E −→ db Y Svb ,µ,E ′ µ=1 where E ′ ∈ Ω is possibly different from E. The map is given as follows: for any µ, we define the map Tb,µ  S µ 6∈ W (Svb ) b,µ (3.11) Tb,µ (Sb,µ ) = . (S , s ) µ ∈ W (S ) b,µ 9 b,µ vb We define the graph (Sb,µ , sb,µ ) as follows; this is where we now introduce randomness into the dynamics Tb . Suppose µ ∈ W (Svb ), in which case Sb,µ is 1-regular and bipartite with respect to the vertex sets ([Svb ], V1 , V2 ). Consider the set of 1-regular bipartite graphs with respect to the vertex set ([Sb,µ ], V1 , V2 ). In words, this is the set of 1-regular graphs on [Sb,µ ] such that, upon replacing Sb,µ with any such graph, the global graph E remains biregular. We now define (Sb,µ , sb,µ ) to be drawn from this set uniformly at random conditioning on the event (Sb,µ , sb,µ ) 6= Sb,µ . Lastly, we define the following global dynamics: Tb = (3.12) db Y Tb,µ , µ=1 where the product is taken as composition. We note this product is independent of the order of composition; this is a consequence of the definition of the functions I, J and W . For white vertices vw ∈ Vw , we define the map Tw by replacing all black indices b and white indices w. e because we are allowed to change the underlying graph E when varyWe note that the maps Tb and Tw define maps on Ω, e this is the utility of the almost-product representation of Ω. e This allows us to finally define switchings ing over the space Ω; of a biregular graph. Definition 3.3. For a fixed black vertex vb ∈ Vb and a fixed label µ ∈ [[1, db ]], the local switching at vb along µ is the map Tb,µ . The global switching is the map Tb . Similarly, for a fixed white vertex vw ∈ Vw and a fixed label ν ∈ [[1, dw ]], the local switching at vw along ν is the map Tw,ν . The global switching at vw is the map Tw . Remark 3.4. We note our construction, technically, implies the mappings Tb,µ and Tw,ν are random mappings on the auge Via this construction, we obtain a probability measure Ω e induced by the uniform measure and a uniform mented space Ω. sampling of switchings. To obtain an honest mapping on the original space Ω, we may instead construct deterministic mappings by averaging over the random switchings. For precise details, we cite [5]. 3.1. Switchings on Adjacency Matrices. We now aim to translate the combinatorics of graph switchings into analysis of adjacency matrices. Suppose E ∈ Ω is a biregular graph with adjacency matrix A. We will fix the following notation. Notation 3.5. 
For an edge e = ij on the vertex set V , we let ∆ij denote the adjacency matrix of the graph on V consisting only of the edge e. In particular, ∆ij is the matrix whose entries are given by (3.13) (∆ij )kℓ = δik δjℓ + δiℓ δjk . In the context of switchings on biregular graphs, the matrices ∆ij are perturbations of adjacency matrices. This is made precise in the following definition. Definition 3.6. Fix a black vertex vb ∈ Vb and a label µ ∈ [[1, db ]]. A local switching of A, denoted Tb,µ , at vb along µ is given by the following formula: (3.14) Tb,µ (A) = A − Sb,µ + (Sb,µ , sb,µ ), where Sb,µ is a component of a random, uniformly sampled configuration vector Svb . A global switching of A, denoted Tb , is the composition of the random mappings Tb,µ . Similarly, we may define local switchings and global switchings of adjacency matrices for white vertices by replacing the black subscript b with the white subscript w, and replacing the label µ with ν. Clearly, a local or global switching of an adjacency matrix is the adjacency matrix corresponding to a local or global switching of the underlying graph. To realize the matrices ∆ij as perturbations, we will rewrite the formula defining Tb,µ as follows. As usual, we carry out the discussion for black vertices vb ∈ Vb , though the details for white vertices vb follow analogously. 10 First, we recall the following notation for a component Sb,µ ∈ Svb ,µ,E of a configuration vector Svb : Sb,µ := {eb,µ , pb,µ , qb,µ } , (3.15) subject to the constraint that Sb,µ contains three distinct edges. Notation 3.7. We will denote the vertices of pb,µ by ab,p,µ ∈ Vb and aw,p,µ ∈ Vw . Similarly, we will denote the vertices of qb,µ by ab,q,µ ∈ Vb and aw,q,µ ∈ Vw . With this notation, we may rewrite the random mapping Tb,µ as follows:   Tb,µ (A) = A − ∆vb ,vb,µ + ∆ab,p,µ aw,p,µ + ∆ab,q,µ aw,q,µ + ∆vb ,x + ∆ab,p,µ ,y + ∆ab,q,µ ,z , (3.16) where we recall eb,µ = vb vb,µ . Here, the variables x, y, z are three distinct vertices sampled from the set of white vertices {vb,µ , aw,p,µ , aw,q,µ } conditioning on the following constraint on ordered triples: (x, y, z) 6= (vb,µ , aw,p,µ , aw,q,µ ). (3.17) 3.2. Probability Estimates on Vertices. In this discussion, we obtain estimates on the distribution of graph vertices after performing switchings. The main estimates here show that the vertices are approximately uniformly distributed, which we make precise in the following definition. Definition 3.8. Suppose S is a finite set and X is an S-valued random variable. We say (the distribution of) X is approximately uniform if the following bound on total variation holds:   X 1 1 (3.18) . P (X = s) − 6 O √ |S| dw D s∈S e These σ-algebras will allow us to focus on edge-local features of graphs We now introduce the following σ-algebras on Ω. E ∈ Ω upon conditioning on the global data E. Definition 3.9. For a fixed label µ ∈ [[0, db ]], we define the following σ-algebras: Fµ := σ (E, (Sb,1 , sb,1 ), . . . , (Sb,µ , sb,µ )) ,   Gµ := σ E (Sb,µ′ , sb,µ′ )µ′ 6=µ . (3.19) (3.20) We similarly define the σ-algebras Fν and Gν for ν ∈ [[1, dw ]] for white vertices. In particular, conditioning on F0 corresponds to conditioning on the graph E only. The last piece of probabilistic data we introduce is the following notation, which will allow us to compare i.i.d. switchings on biregular graphs. e denotes a Notation 3.10. Suppose X is a random variable on the graph data E, {(Sb,µ , sb,µ )}µ , {(Sw,ν , sw,ν )}ν . 
Then X e {(Seb,µ , seb,µ )µ , {(Sew,ν , sew,ν )}, where the tildes on the graph data denote i.i.d. resamplings. random variable on the variables E, Notation 3.11. For notational simplicity, by pµ , qµ or pν , qν , we will refer to either pb,µ , qb,µ or pw,ν , qw,ν , respectively, whenever the discussion applies to both situations. We now focus on obtaining an estimate on the distribution of the pair of edges (pµ , qµ ), and similarly for (pν , qν ). As with all results concerning switchings from here on, details of proofs resemble those of Section 6 in [5], so we omit details whenever redundant. Lemma 3.12. Conditioned on Gµ , the pair (pµ , qµ ) is approximately uniform, i.e., for any bounded symmetric function F , we have (3.21) EGµ F (pµ , qµ ) =   X 1 1 kF k F (p, q) + O ∞ . (N dw )2 N p,q∈E 11 Similarly, for any bounded function F , we have EGµ ,qµ F (pµ ) = (3.22)   1 X 1 F (p) + O kF k∞ . N dw N p∈E Proof. Assume we resample about vb ∈ Vb ; the case for vw ∈ Vw follows analogously. By definition, we have X 1 EGµ F (pµ , qµ ) = (3.23) F (p), (N dw − dw )(N dw − dw − 1) p∈Evb where Evb is the set of edges in E that are not incident to vb . Then, (3.21) follows from the following estimate   1 1 1 (3.24) + O = (N dw − dw )(N dw − dw − 1) (N dw )2 N 3 d2w as well as the estimate |Evb | 6 (N dw )2 , and lastly the estimate |EvCb | 6 N dw . This last upper bound follows combinatorially; for details, see the proof of Lemma 6.2 in [5]. The estimate (3.22) follows from a similar argument.  Because an edge is uniquely determined by its vertices in the graph, we automatically deduce from Lemma 3.12 the following approximately uniform estimate for resampled vertices as well. Corollary 3.13. Conditioned on Gµ , the pair (pµ (b), qµ (b)) (resp. (pµ (w), qµ (w))) is approximately uniform. Similarly, conditioned on Gµ and qµ (b) (resp. qµ (w)), the random variable pµ (b) (resp. pµ (w)) is approximately uniform. Proof. This follows immediately upon applying Lemma 3.12 to the function F (pµ , qµ ) = f (pµ (b), qµ (b)).  To fully exploit the resampling dynamics, we need a lower bound on the probability that a local switching Sb,µ , sb,µ around a vertex vb ∈ Vb does not leave the graph fixed. In particular, we need an estimate for the probability of the event µ ∈ W (Svb ) where here µ is fixed and the set W is viewed as random. As discussed in [5], to provide an estimate, the naive approach to estimating this probability conditioning on Gµ fails in an exceptional set. Precisely, suppose the µ-th neighbor vb,µ of vb lives in Sb,µ′ for some µ′ 6= µ. In this case, almost surely, we have [Sb,µ ] ∩ [Sb,µ′ ] is nontrivial. It turns out this is the only obstruction, so we aim to show that vb,µ ∈ [Sb,µ′ ] occurs with low probability for any µ′ 6= µ. Formally, we define the following indicator random variable which detects this exceptional set: Y h(Svb , µ) = (3.25) 1 (vb,µ ∈ Sb,µ′ ) . µ′ 6=µ Thus, the estimates we need are given in the following result. Lemma 3.14. For any neighbor index µ, we have PGµ [I(Sb,µ )J(Svb , µ) = h(Svb , µ)] > 1 − O (3.26) Moreover, we have PF0 [h(Svb , µ) = 1] > 1 − O (3.27)  db N   db N  . . Proof. We first note that (3.26) follows immediately conditioning on h = 0. In particular, the first lower bound (3.26) follows from a combinatorial analysis of the underlying graph using the following union bound: PGµ ,h=1 [I(Sb,µ )J(Svb , µ) = 0] 6 PGµ [I(Sb,µ ) = 0] + PGµ ,h=1 [J(Svb , µ) = 0] . 
(3.28) Similarly, (3.27) follows from the union bound (3.29) PF0 [h(Svb , µ) = 0] 6 X µ′ 6=µ For details, we refer back to [5]. PF0 [vb,µ ∈ [Sµ′ ]] .  12 f , W are i.i.d. copies of We conclude this section with an estimate that compares independent resamplings. Recall that W the random variable W (Svb ). The following result bounds the fluctuation in W (Svb ) from independent resamplings. Lemma 3.15. Almost surely, we know (3.30)   f = O(1), # W ∆W where the implied constant is independent of N . Moreover, we also have   h i f 6= ∅ 6 O db . PGµ W ∆W (3.31) N The proof follows the argument concerning Lemma 6.3 in [5] almost identically, so we omit it. We now present the final estimate on adjacency matrices comparing switched matrices upon i.i.d. switchings in the sense of matrix perturbations. This will allow us to perform and control resamplings of biregular graphs, in particular using the resolvent perturbation identity. Lemma 3.16. Under the setting of the resampling dynamics, we have (3.32) e − A = Tb,µ (A) − Teb,µ (A) e A with probability at least 1 − O(db /N ). Almost surely, we have O(1) (3.33) e−A = A X ∆xy , x,y=1 such that either, conditioning on Gµ , the random indices x, y are approximately uniform in the corresponding set Vb or Vw or, conditioning on Gµ , pµ , peµ , at least one of the random indices x, y is approximately uniform in the appropriate vertex set. Lastly, the statement remains true upon switching instead at a white vertex vw along an edge label ν. Proof. The result follows from unfolding Lemma 3.15 and the following deterministic identity: i h e − A = 1 f Teb,µ (E) − E − 1µ∈W [Tb,µ (A) − A] A µ∈W X (3.34) ± [Tb,µ (A) − A] , + f ∆W µ′ ∈W f contains the indexing label µ′ . where the sign corresponds to which of the random sets W or W  4. Green’s Function Analysis 4.1. Preliminary Resolvent Theory. Here we record the following fundamental identities for studying the Green’s functions of adjacency matrices in the ensemble X . These identities are standard and follow from standard linear algebra. First, because these identities hold for Green’s functions of any real symmetric matrix, we fix the following notation. Notation 4.1. Suppose F is a function of the Green’s function or matrix entries of matrices belonging to any one of the matrix ensembles X, X∗ , or X∗,+ . Then we establish the notation F⋆ to be the function obtained when restricted to the matrix ensemble X⋆ , where we take ⋆ to be blank or ⋆ = ∗ or ⋆ = ∗, +. Lemma 4.2. (Resolvent Identity) Suppose A and B are invertible matrices. Then we have (4.1) A−1 − B −1 = A−1 (B − A)B −1 . e denote real symmetric or complex Hermitian matrices with Green’s functions G(z) and G(z), e In particular, if H and H respectively, for z 6∈ R, then h   i e e −H G e (z). G(z) − G(z) = G H (4.2) e As an immediate consequence by letting G(z) = G(z), we deduce the following off-diagonal averaging identity. 13 Corollary 4.3. (Ward Identity) Suppose H is a real symmetric matrix of size N with Green’s function G(z). Then for any fixed row index i ∈ [[1, N ]], (4.3) N X k=1 |Gik (E + iη)|2 = Im Gii (E + iη) . η In particular, we obtain the following a priori estimate for any matrix index (i, j): (4.4) |Gij (E + iη)| 6 1 , η and thus for any matrix index (i, j), the function Gij (z) is locally Lipschitz with constant η −2 . The third preliminary result we give is the following representation of the Green’s function G(z; H) in terms of the spectral data of H which is also an important result in compact operator and PDE theory. 
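Before stating it, we note that the two identities above admit a direct numerical sanity check. The following sketch is an illustration only: the matrix size n, the spectral parameter z, and the Gaussian test matrices are arbitrary choices, and it assumes the convention G(z) = (H - z)^{-1} used throughout this section.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n))
    H = (A + A.T) / np.sqrt(2 * n)             # real symmetric test matrix
    z = 0.3 + 0.05j                            # z = E + i*eta with eta > 0
    G = np.linalg.inv(H - z * np.eye(n))       # Green's function G(z) = (H - z)^{-1}

    # Ward identity (4.3): sum_k |G_ik|^2 = Im(G_ii) / eta for every row i.
    assert np.allclose((np.abs(G) ** 2).sum(axis=1), G.diagonal().imag / z.imag)

    # A priori bound (4.4): |G_ij| <= 1/eta.
    assert np.abs(G).max() <= 1 / z.imag + 1e-12

    # Resolvent identity (4.1)/(4.2): G - G~ = G (H~ - H) G~ for an independent H~.
    B = rng.standard_normal((n, n))
    H2 = (B + B.T) / np.sqrt(2 * n)
    G2 = np.linalg.inv(H2 - z * np.eye(n))
    assert np.allclose(G - G2, G @ (H2 - H) @ G2)

None of this is used in the arguments below; it merely confirms the sign and normalization conventions behind (4.1), (4.3), and (4.4).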
This spectral representation will be indispensable for exploiting the rich spectral correspondence among covariance matrices and their linearizations. Lemma 4.4. (Spectral Representation) Suppose H is a real-symmetric or complex-Hermitian matrix with eigenvalue-eigenvector pairs {(λα , uα )}α , and let G(z) denote its Green’s function. Then for any matrix index (i, j), we have (4.5) Gij (z) = N X uα (i)uα (j) , λα − z α=1 where the overline notation denotes the complex conjugate of the vector entry. In particular, the Green’s function is complex Hermitian. We conclude this preliminary discussion of the Green’s function G(z; H) with the following local regularity result concerning a maximal Green’s function. The proof of this result may be found as Lemma 2.1 in [5]. To state it, we now define the maximal Green’s functions of interest, which may be viewed as control parameters for the sake of this paper:   Γ(E + iη) = max |Gij (z)| ∨ 1, i,j Γ∗ (E + iη) = sup Γ(E + iη ′ ). η ′ >η Lemma 4.5. For any z = E + iη ∈ C+ , the function Γ(z) is locally Lipschitz continuous in η with the following bound on its almost-everywhere derivative: (4.6) |∂η Γ(z)| 6 Γ(z) . η In particular, for any κ > 1 and z = E + iη ∈ C+ , we have  η Γ E+i (4.7) 6 κΓ(E + iη). κ 4.2. Reductions of the Proof of Theorem 2.5. We now return to the setting of biregular bipartite graphs, i.e. the ensembles X , X∗ , and X∗,+ . We begin with the following consequence of Lemma 4.4, which relates the Green’s function entries of matrices from each of the three matrix ensembles of interest. Lemma 4.6. Suppose X is a block matrix of the form (2.10), and suppose i, j ∈ [[1, M + N ]] are indices chosen such that either i, j 6 M or i, j > M . Then, for any z = E + iη ∈ C+ , we have  zG (z 2 ) i, j 6 M ∗,+ Gij (z) = (4.8) . zG (z 2 ) i, j > M ∗ 14 Proof. For simplicity, we suppose X is real symmetric as the proof for complex Hermitian matrices is similar. First suppose i, j 6 M . By the spectral representation in (4.5) and Proposition 2.3, we obtain   X uα (i)uα (j) X 1 uα (i)uα (j) uα (i)uα (j) √ √ Gij (z) = (4.9) , = + λα − z 2 λα − z − λα − z α λ∈σ(HH ∗ ) where the last equality holds by abuse of notation for eigenvectors of the covariance matrix HH ∗ versus the linearization X. This completes the derivation for the case i, j 6 M . The proof for the case i, j > M follows by the exact same calculation, but instead taking a summation over σ(H ∗ H) and noting the eigenvector terms uα (i)uα (j) vanish for λα ∈ ζ(X) by Statements (V) and (VI) in Proposition 2.3.  Lemma 4.6 now gives the first reduction of the proof of Theorem 2.5. Lemma 4.7. Assuming the setting of Theorem 4.1, then the following two estimates are equivalent: • (I). For any fixed ε > 0, we have with probability at least 1 − e−ξ log ξ , uniformly over z = E + iη ∈ Uε with η ≫ ξ 2 /N , max |[G∗ (z)]ii − m∞ (z)| = O (Fz (ξΦ)) , max |[G∗ (z)]ij | = O(ξΦ). (4.10) i i6=j • (II). For any fixed ε > 0, we have with probability at least 1 − e−ξ log ξ , uniformly over z = E + iη ∈ Uε with η ≫ ξ 2 /N ,  max [G(z)]kk − zm∞ (z 2 ) = O zFz2 (ξΦ(z 2 )) ,  max |[G(z)]kℓ | = O zξΦ(z 2 ) . (4.11) k>M (4.12) M<k<ℓ Similarly, the above equivalence holds replacing G∗ with G∗,+ and taking the maximums over i 6 M and i, j 6 M . From Lemma 4.6, we also deduce the next reduction. Lemma 4.8. Assuming the setting of Theorem 4.1, the following estimates are equivalent for any z ∈ C+ :  sb (z) − zm∞,+ (z 2 ) = O zFz2 (ξΦ(z 2 )) ,  sw (z) − zm∞ (z 2 ) = O zFz2 (ξΦ(z 2 )) . 
(4.13) (4.14) Similarly, the following estimates are equivalent for any z ∈ C+ : (4.15) |s∗ (z) − m∞ (z)| = O (Fz (ξΦ)) , (4.16) |s∗,+ (z) − m∞,+ (z)| = O (Fz (ξΦ)) . We briefly note that Lemma 4.8 improves upon Lemma 4.7 in that it removes the restriction |E| > ε on the energy. We are now in a position to make our final reduction of the proof of the local laws in Theorem 4.1, which exploits the first reduction in Lemma 4.7 and thus allows us to focus on the covariance matrices X∗ and X∗,+ . The reduction will depend on the following result, for which we need to define the following spectral domain. DN,δ,ξ = {z = E + iη : |E| 6 N δ , ξ 2 /N 6 η 6 N }. (4.17) Proposition 4.9. Suppose ξ, ζ > 0 and D ≫ ξ 2 . If, for a fixed z ∈ DN,δ,ξ , we have P (Γ∗⋆ (z) = O(1)) > 1 − e−ζ , (4.18) then, with probability at least 1 − e−(ξ log ξ)∧ζ+O(log N ) , we have (4.19) max |[G⋆ (z)]ii − m(z)| = O(Fz (ξΦ(z))), max |[G⋆ (z)]ij | = O i Here, ⋆ can take the values ⋆ = ∗ and ⋆ = ∗, +. i6=j 15  ξΦ(z 2 ) z  . To deduce Theorem 2.5 from Proposition 4.9, we follow the exactly the argument used to deduce Theorem 1.1 from Proposition 2.2 in [5]. The only thing we need to check to apply the same argument are the following bounds: m∞ (z), m∞,+ (z) 6 C, (4.20) for some constant C = O(1). These estimates are proven in the appendix of this paper. We may also extend this argument to remove the energy repulsion assumption |E| > ε, which we precisely state in the following lemma. Lemma 4.10. Suppose Proposition 4.9 holds. Then the estimates (4.10) and (4.11) hold without the assumption |E| > ε. Consequently, the estimates (4.14) and (4.15) hold without the assumption |E| > ε. Moreover, if α > 1, then the estimate (4.7) holds without the assumption |E| > ε. Proof. We may again apply the iteration scheme used in proving Theorem 1.1 from Proposition 2.2 in [5]. Here, we need to check the estimate m(z) = O(1) and that in the regime α > 1, we have the estimate m∞ (z) = O(1). These are similarly derived in the appendix of this paper.  Thus, contingent on estimates derived in the appendix, to prove Theorem 2.5 it suffices to prove Proposition 4.9. This will be the focus for remainder of the paper. In particular, we may now work with an explicitly smaller domain DN,δ,ξ and an a priori estimate on the maximal Green’s function. 4.3. Switchings on Green’s Functions. The main result in this subsection consists of the following estimates comparing Green’s function entries to index-wise averages. These estimates will be fundamental to controlling terms that show up naturally in the derivation of the self-consistent equation. Before we can state the result, we need to first introduce a notion of high probability used throughout the remainder of this paper. Definition 4.11. Fix a parameter t = tN ≫ log N , and a probability space Ω. We say an event Ξ ⊂ Ω holds with t-high probability, or t-HP for short, if (4.21) P ΞC  6 e−t+O(log N ) . As suggested in Theorem 4.1, we will take the parameter t = (ξ log ξ) ∧ ζ. We now state the main estimates. Lemma 4.12. Fix a vertex i = vb ∈ Vb and an edge label µ ∈ [[1, db ]]. Suppose z = E + iη ∈ C+ satisfies the following constraints for a fixed ε > 0: |E| > ε, (4.22) η & 1 . N Suppose further that Γ = O(1) holds with t-HP. Then for all fixed indices j, ℓ, r we have ! M+N   1 X −1/2 −1/2 EF0 Gvb,µ j − Φ), sw Gij + O(dw Gkj = − EF0 dw N k=M+1 " !# M+N   1 X −1/2 −1/2 EF0 Gℓr Gvb,µ j − sw Gij + O(dw = − EF0 Gℓr dw Φ). 
Gkj N k=M+1 Similarly, fix a vertex k = vw ∈ Vw and µ ∈ [[1, dw ]], and suppose z ∈ C+ satisfies the constraints (4.22). Further suppose Γ = O(1) with t-HP. Then for all fixed indices j, ℓ, r, we have ! M   1 X −1/2 −1/2 Φ), sb Gkj + O(dw Gij = − EF0 dw EF0 Gvw,µ j − M i=1 " !# M   1 X −1/2 −1/2 Φ). EF0 Gℓr Gvw,µ j − sb Gkj + O(dw = − EF0 Gℓr dw Gij M i=1 16 Before we provide a proof of this result, we will need an auxiliary estimate comparing Green’s function entries from i.i.d. samples of graphs. To state this auxiliary estimate, we define first the conditional maximal Green’s functions for any fixed edge label µ ∈ [[1, db ]] or ν ∈ [[1, dw ]]: Γµ = Γµ (z) := kΓ(z)kL∞ (Gµ ) , (4.23) where the notation L∞ (Gµ ) in the norm denotes the L∞ -norm conditioning on the σ-algebra Gµ . Lemma 4.13. For any fixed indices i, j ∈ [[1, M + N ]] and any label µ ∈ [[1, db ]], we have −1/2 eij = O(dw Gij − G Γµ Γ). (4.24) Moreover, suppose x, y are random variables such that, conditioned on Gµ and x, the random variable y is approximately uniform. Then we have 2 EGµ |Gxy | = O(Γ4µ Φ2 ). (4.25) The estimate also holds for any fixed label ν ∈ [[1, dw ]] of a white vertex. The proof of this result follows from the proof of Lemma 3.9 in [5] so we omit it. We now record the following consequence of Lemma 4.13 and proceed to the proof of Lemma 4.12. Corollary 4.14. In the setting of Lemma 4.13, we have   −1/2 Γ = Γµ + O dw Γµ Γ . (4.26) Proof. (of Lemma 4.12). We prove the first estimate only for vb ∈ Vb ; the proof of the second estimate and the estimates for vw ∈ Vw are analogous. In expectation, we first note e ve j . EF0 Gvb,µ j = EF0 G b,µ Moreover, because G(z) is independent of the random variable veb,µ , and because, conditioned on Gµ , the random variable veb,µ ∈ Vw is approximately uniform, we also know EF0 1 N M+N X k=M+1 Gkj ! −1/2 = EF0 Gveb,µ j + O(db Φ). Thus, it suffices to compute   e ve j . − EF0 Gveb,µ j − G b,µ By the resolvent identity, we have the following equation holding in expectation:     X e ve j = EF0  eℓj  . e − X)kℓ G EF0 −Gveb,µ j − G (4.27) Gveb,µ k (X b,µ k,ℓ −1/2 Unfolding the high-probability equation (11.15) in Lemma 11.9, we have, with probability at least 1 − O(dw on F0 ,   e − X = d−1/2 ∆v ve − ∆v v + Σb + d−1/2 ∆ve v − ∆v v + Σ∗ . (4.28) X b,µ b b w b b,µ w b,µ b b b,µ Φ) conditioned Here, we recall that vb,µ (resp. veb,µ ) is the vertex adjacent to vb in Svb ,µ (resp. Sevb ,µ ) after resampling. Also, Σb is the matrix given by a sum of terms ±∆xy where one of the following two conditions holds: • Conditioned on Gµ , peµ , the random variable x is approximately uniform, or; • Conditioned on Gµ , the random variable y is approximately uniform. 17 Thus, upon unfolding the RHS of (4.27), we see one term is given by, in expectation, h i h i −1/2 e ij = EF0 d−1/2 sG eij + O(d−1/2 Φ) Gveb,µ evb,µ G EF0 dw w w i h −1/2 −1/2 Φ), sGij + O(dw = EF0 dw where the first equality holds because veβ,µ is approximately uniform by Corollary 11.6 and the second holds since for any e ij conditioned on Gµ . In particular, we have fixed indices i, j, we have Gij ∼ G i   h   e ij − Gij e ij = EF0 Gve ev Gij + EF0 Gve ve G EF0 Gveb,µ evb,µ G b,µ b,µ b,µ b,µ    1 = EF0 Gveb,µ evb,µ Gij + O √ , D where the second equality holds by Lemma 4.13. Thus, it suffices to bound the remaining terms in (4.27). By (4.28), it suffices e yj . 
By the second result in Lemma 4.13 and the Schwarz inequality, in the case to estimate the expectation of terms Gveb,µ x G where y is approximately uniform conditioning on Gµ , we have, with high probability, e yj |2 + O(D−1/2 ) 6 O(Φ). e yj 6 EF0 |G EF0 Gveb,µ x G Thus, by the assumption Γ = O(1), Lemma 4.12 follows after accumulating the finitely many events all holding with probability at least 1 − O(d−1/2 Φ).  5. The Self-Consistent Eqation and Proof of Theorem 2.5 5.1. Derivation of the self-consistent equation. We begin by introducing the following two pieces of notation for a e = (Zek )k∈[[M+1,M+N ]] : random vector Z = (Zi )i∈[[1,M]] and Z (5.1) E(i) Z = M 1 X Zi , M i=1 E(k) Ze = 1 N M+N X k=M+1 ek . Z We now record the following concentration estimate which will allow us to take expectations and exploit the estimates in Lemma lemma:conditionalexpectationscederivation. To do so, we introduce the following notation. Definition 5.1. Suppose X is an L1 -random variable, and suppose σ(·) is a σ-algebra which X is measurable with respect to. We define the σ(·)-fluctuation of X to be the following centered random variable: Xσ(·) := X − Eσ(·) X. (5.2) Proposition 5.2. Suppose that z = E + iη ∈ DN,δ,ξ ∩ Uε , that ζ > 0, and that Γ = O(1) with probability at least 1 − e−ζ . Fix k = O(1) and pairs of indices Ik = {(i1 , j1 ), . . . , (ik , jk )}. Define the random variable XIk (z) = Gi1 j1 . . . Gik jk . Then, for any ξ = ξ(N ) satisfying ξ → ∞ as N → ∞, we have the following pointwise concentration estimate:   (5.3) P (XIk (z))F0 = O(ξΦ) > 1 − e−[(ξ log ξ)∧ζ]+O(log N ) . The proof of Proposition 5.2 follows from the proof of Proposition 4.1 in [5] so we omit it. We now consider the matrix equation HG = zG + Id and compute the diagonal entries of both sides. The (i, i)-entry of the RHS is clearly given by zGii + 1. We now study the LHS, considering the (k, k)-entry for k ∈ [[M, M + 1]]. By matrix multiplication we have (5.4) (HG)kk = M X Hki Gik = −1/2 dw i=1 (5.5) −1/2 = dw  dw  M X X 1 δi,vb,ν − Gik M i=1 ν=1 dw X ν=1 18  Gvb,ν k − E(i) Gik , where we used the relation M db = N dw . Appealing to Lemma 4.12 we deduce the following identity: ! dw  X 1 −1 EF0 (HG)kk = − EF0 (5.6) sb Gii + O(dw Φ) = − EF0 sb Gii + O(Φ). dw ν=1 Taking an expectation conditioning on F0 in the matrix equation HG = zG + Id, we see (5.7) 1 + z EF0 Gkk = − EF0 sb Gkk + O (Φ) . Using Proposition 5.2 to account for the F0 -fluctuation of the Green’s function terms, we ultimately deduce a stability equation for the diagonal (k, k)-entries of G, with k > M . We may run a similar calculation for indices i ∈ [[1, M ]] and derive the following system of equations: (5.8) 1 + (z + γsw )Gii = O ((1 + |z|)ξΦ) , (5.9) 1 + (z + sb )Gkk = O ((1 + |z|)ξΦ) . Although this system is a priori coupled, we now appeal to Lemma 4.6 to decouple the equations. More precisely, we deduce the following system of decoupled equations:   1−γ (5.10) Gii = O ((1 + |z|)ξΦ) , 1 + z + sb + z   γ−1 1 + z + γsw + (5.11) Gkk = O ((1 + |z|)ξΦ) . z From here, we may proceed in two fashions. First, we may use Lemma 7.3 to deduce stability equations for the Green’s functions G∗ and G∗,+ , relating the diagonal entries of these Green’s functions to the Stieltjes transforms s∗ and s∗,+ . On the other hand, we may also average over the diagonal entries and deduce self-consistent equations for the Stieltjes transforms sb , sw and s∗ , s∗,+ . We summarize these estimates in the following proposition. Proposition 5.3. 
Suppose Γ = O(1) with t-HP, and let z = E + iη ∈ Uε satisfy η ≫ N −1 . Then for any i ∈ [[1, M ]] and k ∈ [[M + 1, M + N ]], we have the following equations uniformly over such z with t-HP:   1−γ (5.12) Gii = O((1 + |z|)ξΦ), 1 + z + sb + z   γ−1 1 + z + γsw + (5.13) Gkk = O((1 + |z|)ξΦ), z (5.14) 1 + (z + 1 − γ + zs∗,+ ) [G∗,+ ]ii = O((1 + |z|1/2 )ξΦ) = O((1 + |z|)ξΦ), (5.15) 1 + (z + γ − 1 + γzs∗ ) [G∗ ]kk = O((1 + |z|1/2 )ξΦ) = O((1 + |z|)ξΦ). Moreover, we have the following averaged equations uniformly over such z with t-HP:   1−γ (5.16) sb = O((1 + |z|)ξΦ), 1 + z + sb + z   γ−1 1 + z + γsw + (5.17) sw = O((1 + |z|)ξΦ), z (5.18) (5.19) 1 + (z + 1 − γ + zs∗,+ ) s∗,+ = O((1 + |z|1/2 ξΦ) = O((1 + |z|)ξΦ), 1 + (z + γ − 1 + γzs∗ ) s∗ = O((1 + |z|1/2 )ξΦ) = O((1 + |z|)ξΦ). Proof. It remains to upgrade the self-consistent equations (5.12) – (5.19) to hold over all such z = E + iη with t-HP. To this end, we appeal to the Lipschitz continuity of the Green’s function entries on a sufficiently dense lattice as in the proof of Lemma 8.5.  19 5.2. Analysis of the self-consistent equation. From a direct calculation, we note the Stieltjes transform m∞ of the Marchenko-Pastur law with parameter γ 6 1 is given by the following self-consistent equation: (5.20) γzm2∞ (z) + (γ + z − 1) m∞ (z) + 1 = 0. For the augmented Stieltjes transform m∞,+ , we may similarly deduce a self-consistent equation. In our analysis, we will be concerned with providing full details for the Stieltjes transform m∞ only, as the estimate for the augmented transform m∞,+ will follow from the estimate on m∞ . We now note Proposition 5.3 implies the Stieltjes transform s∗ solves the same self-consistent equation with an error of o(1) throughout the domain DN,δ,ξ ∩ Uε , with t-HP. Our goal will be to use the stability of the self-consistent equation (5.20) under o(1) perturbations to compare s∗ and m∞ . This is the content of the following result. Proposition 5.4. Let m : C+ → C+ be the unique solution to the following equation: (5.21) γzm2 + (γ + z − 1)m + 1 = 0. Suppose s : C+ → C+ is continuous and let (5.22) R := γzs2 + (γ + z − 1)s + 1. Fix an energy E ∈ R \ [−ε, ε] for ε > 0 small and scales η0 < C(E) and η∞ 6 N , where C(E) = OE (1) is a constant to be determined. Suppose we have (5.23) |R(E + iη)| 6 (1 + |z|)r(E + iη) for a nonincreasing function r : [η0 , η∞ ] → [0, 1]. Then for all z = E + iη for η ∈ [η0 , η∞ ], we have the following estimate for sufficiently large N : |m − s| = O(F (r)). (5.24) Here, the constant C(E) is determined by (5.25) Im  1−γ E + iη  > 3α1/2 ε−1/2 for all η 6 C(E). Before we proceed with the proof of Proposition 5.4, we introduce the following notation. Notation 5.5. We denote the solutions to the equation (5.20) by m± , where m+ maps the upper-half plane to itself, and m− maps the upper-half plane to the lower-half plane. Moreover, we define the following error functions: (5.26) v± = |m± − s| . Having established this notation, because m+ takes values in the upper-half plane, we deduce the following upper bound on the values taken by the imaginary part of m− as follows:   1−γ Im(m− (z)) 6 − Im (5.27) < −3α1/2 ε−1/2 E + iη for scales η < C(E). We now proceed to derive an a priori estimate on the error functions v± . Lemma 5.6. Under the assumptions and setting of Proposition 5.4, we have (5.28) |v+ | ∧ |v− | 6 3α1/2 ε−1/2 F (r). 20 Proof. 
We appeal to the following inequality which holds for any branch of the complex square root parameters w, ζ for which the square root is defined: p p p √ √ |ζ| | w + ζ − w| ∧ | w + ζ + w| 6 p (5.29) ∧ |ζ|. |w| √ · and any complex In particular, this implies the following string of inequalities: ! p |4γzR| p ∧ |4γzR| |(γ + z − 1)2 − 4γz| √ 2|R| 6 p ∧ εαR |(γ + z − 1)2 − 4γz| 2ε1/2 (1 + |z|)r(E + iη) p 6 p ∧ εα(1 + |z|)r(E + iη) |(γ + z − 1)2 − 4γz| 1 |v+ | ∧ |v− | 6 2γz (5.30) (5.31) (5.32) where the second inequality follows from the assumption |z| > |E| > ε and the last bound follows if we choose ε 6 1. But this is bounded by 3α1/2 ε−1/2 F (r) for any r ∈ [0, 1].  Proof. (of Proposition 5.4). We consider two different regimes. First consider the regime where |m+ − m− | > (1 + |z|)r(η). Precisely, this is the regime defined by η > C(E) and the energy-dependent constant D(E) such that (1 + |z|)r(η) < (5.33) |(γ + z − 1)2 − 4γz| ; D(E) the constant D(E) will be determined later. We note by Lemma 5.6, in this regime it suffices to prove the following bound: |v− | > |v− | ∧ |v+ |. (5.34) We now choose an energy-dependent constant κ(E) such that for all η ∈ [C(E), η∞ ], we have the bound α1/2 (1 + |z|)r(η) 6 κ(E)(1 + |E + iC(E)|). (5.35) Moreover, note |(γ + z − 1)2 − 4γz| is increasing in η, as seen by translating z = w + 1 − γ and computing |(γ + z − 1)2 − 4γz| = |w2 − 4γw + X| = |w(w − 4γ) + X|, where X ∈ R. Because r(η) is non-increasing in η, for all z = E + iη with η ∈ [C(E), η∞ ], we have (1 + |z|)r(η) < (5.36) κ(E) |(γ + z − 1)2 − 4γz|. D(E) We first compute a uniform lower bound on the difference term as follows: |v+ − v− | = |(γ + z − 1)2 − 4γz| 2γ|z| 1 > 2γ|E + iη∞ | > 0. D(E) (1 + |z|)r(η) p ∧ κ(E) |(γ + z − 1)2 − 4γz| s D(E) p α(1 + |z|)r(η) κ(E) ! By continuity of s and the estimate Lemma 5.6, choosing D(E) large enough as a function of κ(E), E, η∞ , ε, it suffices to prove the estimate (5.24) for some η ∈ [C(E), η∞ ]. But this follows from Lemma 5.6; in particular, we have at η = C(E) and N sufficiently large, |v− | > | Im(s) − Im(m− )| > | Im(m− )| > 3ε−1/2 > 3ε−1/2 F (r) > |v+ | ∧ |v− |. Thus, we have |v+ | = |v+ | ∧ |v− | in this first regime, implying the stability estimate (5.24). Now, we take the regime where the a priori estimate (5.37) |(γ + z − 1)2 − 4γz| = O((1 + |z|)r(η)) 21 holds. Thus, we know p |(γ + z − 1)2 − 4γz| |v− | 6 |v+ | + = |v+ | + O 2ε = |v+ | + O(F (r)), (1 + |z|)r(η) p |(γ + z − 1)2 − 4γz| p ∧ (1 + |z|)r ! implying the estimate in the second regime as well.  We conclude the discussion of the self-consistent equation by noting that Lemma 4.6 allows us to deduce the following local law for the Stieltjes transform s∗,+ : |s∗,+ − m∞,+ | = O(F (r)). (5.38) This estimate holds in the regime z = E + iη for η ∈ [η0 , η∞ ]. 5.3. Final estimates. Fix an index k ∈ [[1, N ]], and for notational convenience, for this calculation only, we let G denote the Green’s function G∗ . Consider the following approximate stability equation for the diagonal entry Gkk : 1 + (z + γ − 1 + γzs∗ )Gkk = O ((1 + |z|)ξΦ) . (5.39) We now appeal to the following estimate which will allow us to study the stability of this equation upon the replacement s∗ → m∞ that holds with t-HP: |zGkk | = O(1). (5.40) Indeed, if η ≫ 1, we have |zGkk | 6 (5.41) E + iη η = O(1). If E ≫ 1, we appeal to the spectral representation of Gkk and deduce |zGkk | 6 C (5.42) 1 X |uλ (k)|2 × E N E − λ + iη λ C X E 6 N E − λ + iη (5.43) λ = O(1), (5.44) where we used the uniform a priori bound λ = O(1). If η, E . 
1, then we appeal to the a priori bound Γ = O(1) with t-HP and the trivial bound z = O(1). Thus, by Proposition 5.4, we have (5.45) 1 + (z + γ − 1 + γzm∞ )Gkk = O ((1 + |z|)ξΦ) + O(F (ξΦ)). On the other hand, the self-consistent equation (5.20) implies the Green’s function term on the LHS may be written as −Gkk /m∞ . Moreover, because m∞ = O(1) uniformly on the domain Uε , we establish the following estimate that holds over all z ∈ DN,δ,ξ ∩ Uε with t-HP: (5.46) m∞ − Gkk = O (F (ξΦ)) , where we use the estimate (1 + |z|)m∞ = O(1). This completes the proof of the local law along the diagonal of G = G∗ . To derive the estimate for the off-diagonal entries, we appeal to the Green’s function G(z) = (X −z)−1 of the linearization X. Note this is no longer the Green’s function G∗ of the covariance matrix X∗ . In particular, we appeal to the following entrywise representation of a matrix equation (for indices i, j > M ): (5.47) Gij (HG)ii − Gii (HG)ij = Gij . As in the derivation of the stability equations in Proposition 5.3, by Lemma 4.12 the expectation of the LHS is given by h h i i −1/2 −1/2 sGii − EF0 Gii dw EF0 Gij dw sGij + O(Φ) = O(Φ). 22 Thus, at the cost of a concentration estimate in Proposition 5.2, we deduce |Gij | = O(ξΦ), (5.48) which yields the estimate for the off-diagonal entries. By Lemma 7.3, this gives the desired estimate for the off-diagonal entries of the Green’s function G∗ with t-HP. An analogous calculation proves the desired entry-wise estimates for the Green’s function G∗,+ , which completes the proof of Theorem 4.1. 6. Appendix 6.1. Estimates on the Stieltjes transforms on the upper-half plane. Here, we want to control the growth of the Stieltjes transforms on the upper-half plane. This is summarized in the following lemma. Lemma 6.1. Uniformly over z ∈ C+ , we have m(z) = O(1). (6.1) We proceed by considering the two regimes γ = 1 and γ < 1, which we refer to as the square regime and rectangular regime, respectively. Square Regime. If γ = 1, we rewrite the density ̺(E) as √ 4 − E2 ̺γ=1 (E) = 1E∈[−2,2] , 2π (6.2) which is the well-studied semicircle density, whose Stieltjes transform is given by m(z) = (6.3) −z + √ z2 − 4 . 2 For a reference on the semicircle law and its Stieltjes transform, we cite [2], [4], [5], [9], [10], and [13]. We note the branch √ of the square root is taken so that z 2 − 4 ∼ z for large z, in which case the bound (6.1) follows immediately. Rectangular Regime. Fix constants Λ > 0 and ε > 0 to be determined. Suppose |E| ∈ [ε, Λ]. By the representation m(z) = zm∞ (z 2 ) and the explicit formula for m∞ (z) as given in (??), the bound (6.1) follows immediately in this energy regime, where the implied constant in (6.1) may be taken independent of η. Suppose now that |E| > Λ. Again by the representation m(z) = zm∞ (z 2 ), we have   p |m(z)| = O −z 2 + i (λ+ − z 2 )(z 2 − λ− ) (6.4)   p (6.5) = O −z 2 + (z 2 − λ+ )(z 2 − λ− ) = O(1), (6.6) p since the square root is, again, chosen so that z 4 + O(z 2 ) ∼ z 2 for large z. Lastly, suppose |E| < ε. By definition of m(z) as the Stieltjes transform of ̺, we obtain (6.7) |m(z)| = w E 2 ∈[λ± ] p γ (λ+ − E 2 )(E 2 − λ− ) dE = O (1 + γ)π|E|(E − ε) where the implied constant depends only on fixed data γ, λ± . Choosing ε = p λ− − ε ! , p λ− /100 > 0, we obtain the desired bound. We note this choice of ε is positive if and only if α > 1. This completes the proof of Lemma 6.1. 23 1  References [1] Adlam, Ben. The Local Marchenko-Pastur Law for Sparse Covariance Matrices. 
Senior Thesis, Harvard University Department of Mathematics (2013).
[2] Anderson, Greg W., Alice Guionnet, and Ofer Zeitouni. An Introduction to Random Matrices. Cambridge: Cambridge UP, 2010.
[3] Bai, Zhidong and Jack W. Silverstein. Spectral analysis of large dimensional random matrices. Springer Series in Statistics. Springer, New York, second edition, 2010.
[4] Bauerschmidt, Roland, Jiaoyang Huang, Antti Knowles, and H.-T. Yau. "Bulk eigenvalue statistics for random regular graphs." arXiv:1505.06700 [math.PR]. Submitted 25 May 2015.
[5] Bauerschmidt, Roland, Antti Knowles, and H.-T. Yau. "Local semicircle law for random regular graphs." arXiv:1503.08702. Submitted 30 Mar 2015.
[6] Bloemendal, Alex, Laszlo Erdos, Antti Knowles, H.-T. Yau, and Jun Yin. "Isotropic local laws for sample covariance matrices and generalized Wigner matrices." Electronic Journal of Probability, published online March 15, 2014. DOI: 10.1214/EJP.v19-3054.
[7] Bloemendal, A., A. Knowles, H.-T. Yau, et al. Probab. Theory Relat. Fields (2016) 164: 459. DOI: 10.1007/s00440-015-0616-x.
[8] Dumitriu, Ioana and T. Johnson. "The Marchenko-Pastur law for sparse random bipartite biregular graphs." Random Structures and Algorithms, published online 12/2014. DOI: 10.1002/rsa.20581.
[9] Erdos, Laszlo, Benjamin Schlein, and H.-T. Yau. "Universality of Random Matrices and Local Relaxation Flow." Inventiones Mathematicae 185 (2011), 75. DOI: 10.1007/s00222-010-0302-7.
[10] Erdos, Laszlo, H.-T. Yau, and J. Yin. "Bulk Universality for generalized Wigner matrices." Probability Theory and Related Fields, 154(1-2):341-407, 2012.
[11] Huang, Jiaoyang, Benjamin Landon, and H.-T. Yau. "Bulk Universality of Sparse Random Matrices." Journal of Mathematical Physics (2015).
[12] Landon, Benjamin, Phillippe Sosoe, and H.-T. Yau. "Fixed energy universality of Dyson Brownian motion." arXiv:1609.09011 [math.PR]. September 28, 2016.
[13] Landon, Benjamin and H.-T. Yau. "Convergence of local statistics of Dyson Brownian motion." Communications in Mathematical Physics, to appear (2015).
[14] Pillai, Natesh S. and Jun Yin. "Edge universality of correlation matrices." Annals of Statistics 40 (2012), no. 3, 1737–1763. DOI: 10.1214/12-AOS1022.
[15] Pillai, Natesh S. and Jun Yin. "Universality of covariance matrices." Annals of Applied Probability 24 (2014), no. 3, 935–1001. DOI: 10.1214/13-AAP939.
[16] Tao, Terence and Van Vu. "Random covariance matrices: Universality of local statistics of eigenvalues." Ann. Probab. 40 (2012), no. 3, 1285–1315. DOI: 10.1214/11-AOP648.
[17] Yang, Kevin. "Bulk eigenvalue correlation statistics of random biregular bipartite graphs." In preparation.
[18] Yang, Kevin. "Local correlation and gap statistics under Dyson Brownian motion for covariance matrices." In preparation.
arXiv:1709.02027v1 [] 6 Sep 2017 LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY OL’GA V. SIPACHEVA Various notions of large sets in groups and semigroups naturally arise in dynamics and combinatorial number theory. Most familiar are those of syndetic, thick (or replete), and piecewise syndetic sets. Apparently, the term “syndetic” was introduced by Gottschalk and Hedlund in their 1955 book [1] in the context of topological groups, although syndetic sets of integers have been studied long before (they appear, e.g., in Khintchine’s 1934 ergodic theorem). During the past decades, large sets in Z and in abstract semigroups have been extensively studied. It has turned out that, e.g., piecewise syndetic sets in N have many attractive properties: they are partition regular (i.e., given any partition of N into finitely many subsets, at least one of the subsets is piecewise syndetic), contain arbitrarily long arithmetic progressions, and are characterized in terms of ultrafilters on N (namely, a set is piecewise syndetic it and only if it belongs to an ultrafilter contained in the minimal two-sided ideal of βN). Large sets of other kinds are no less interesting, and they have numerous applications to dynamics, Ramsey theory, the ultrafilter semigroup on N, the Bohr compactification, and so on. Quite recently Reznichenko and the author have found yet another application of large sets. Namely, we introduced special large sets in groups, which we called fat, and applied them to construct a discrete set with precisely one limit point in any countable nondiscrete topological group in which the identity element has nonrapid filter of neighborhoods. Using this technique and special features of Boolean groups, we proved, in particular, the nonexistence of a countable nondiscrete extremally disconnected group in ZFC (see [2]). In this paper, we study right and left thick, syndetic, piecewise syndetic, and fat sets in groups (although they can be defined for arbitrary semigroups). Our main concern is the interplay between such sets in Boolean groups. We also consider natural topologies closely related to fat sets, which leads to interesting relations between fat sets and ultrafilters. 1. Basic Definitions and Notation We use the standard notation Z for the group of integers, N for the set (or semigroup, depending on the context) of positive integers, and ω for the set of nonnegative integers or the first infinite cardinal; we identify cardinals with the corresponding initial ordinals. Given a set X, by |X| we denote its cardinality, by [X]k for k ∈ N, the kth symmetric power of X (i.e., the set of all k-element subsets of X), and by [X]<ω , the set of all finite subsets of X. Definition 1 (see [3]). Let G be a group. A set A ⊂ G is said to be This work was financially supported by the Russian Foundation for Basic Research (Project No. 15-01-05369). 1 2 OL’GA V. SIPACHEVA (a) right thick, or simply thick if, for every finite F ⊂ S, there exists a g ∈ G (or, equivalently, g ∈ A [3, Lemma 2.2]) such that F g ⊂ A; (b) right syndetic, or simply syndetic, if there exists a finite F ⊂ G such that G = F A; (c) right piecewise syndetic, or simply piecewise syndetic, if there exists a finite F ⊂ G such that F A is thick. Left thick, left syndetic, and left piecewise syndetic sets are defined by analogy; in what follows, we consider only right versions and omit the word “right.” Definition 2. 
Given a subset A of a group G, we shall refer to the least cardinality of a set F ⊂ G for which G = F A as the syndeticity index, or simply index (by analogy with subgroups) of A in G. Thus, a set is syndetic if and only if it is of finite index. We also define the thickness index of A as the least cardinality of F ⊂ G for which F A is thick. A set A ⊂ Z is syndetic if and only if the gaps between neighboring elements of A are bounded, and B ⊂ Z is thick if and only if it contains arbitrarily long intervals of consecutive integers. The intersection of any such sets A and B is piecewise syndetic; clearly, such a set is not necessarily syndetic or thick (although it may as well be both syndetic and thick). The simplest general example of a syndetic set in a group is a coset of a finite-index subgroup. In what follows, when dealing with general groups, we use multiplicative notation, and when dealing with Abelian ones, we use additive notation. Given a set A in a group G, by hAi we denote the subgroup of G generated by A. As mentioned, we are particularly interested in Boolean groups, i.e., groups in which all elements are self-inverse. All such groups are Abelian. Moreover, any Boolean group G can be treated as a countable-dimensional vector space over the two-element field Z2 ; therefore, for some set X (basis), G can be represented as the free Boolean group B(X) on X, i.e., as [X]<ω with zero ∅, which we denote by 0, and the operation of symmetric difference, or Boolean sum, which we denote by △: A △ B = (A ∪ B) \ A ∩ B. The elements of B(X) (i.e., finite subsets of X) are called words. The length of a word equals its cardinality. The basis X is embedded in B(X) as the set of words of length 1. Given n ∈ ω, we use the standard notation Bn (X) for the set of words S of length at most n; thus, B0 (X) = {0}, B1 (X) = X ∪ {0}, and B(X) = n∈ω Bn (X). For the set of words of length precisely n, where n ∈ N, we use the notation B=n (X); we have B=n (X) = Bn (X) \ Bn−1 (X). Any free filter F on an infinite set X determines a topological space XF = X ∪{∗} with one nonisolated point ∗; the neighborhoods of this point are A∪{∗} for A ∈ F . The topology of the free Boolean topological group B(XF ) = [X ∪ {∗}]<ω on this space, that is, the strongest group topology that induces the topology of XF on X ∪ {∗}, is described in detail in [4]. One of the possible descriptions is as follows. For each n ∈ N, we fix an arbitrary sequence of neighborhoods Vn of ∗, that is, of An ∪ {∗}, where An ∈ F , and set U (Vn ) = {x △ y : x, y ∈ Vn } for n ∈ N LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 3 and [  U (Vn )n∈N = (U (V1 ) △ U (V2 ) △ . . . △ U (Vn )) n∈N = [ {x1 △ y1 △ . . . △ xn △ yn : xi , yi ∈ Ai for i ≤ n}. n∈N In particular, the subgroup generated by (A ∪ {∗}) △(A ∪ {∗}) is a neighborhood of zero for any A ∈ F . Clearly, for n ∈ ω, a set Y ⊂ B=2n (XF ) is a trace on B=2n (XF ) of a neighborhood of zero in  B(XF ) if and only if it contains a set of the form (A ∪ {∗}) △ . . . △(A ∪ {∗}) ∩ B=2n (XF ) = [A ∪ {∗}]2n , and a set1 {z } | 2n times Y ⊂ B=2n (X) ⊂ B=2n (XF ) is a trace on B=2n (X) of a neighborhood of zero in  B(XF ) if and only if it contains a set of the form A △ . . . △ A ∩ B=2n (X) = [A]2n . | {z } 2n times The intersection of a neighborhood of zero with B=k (XF ) may be empty for all odd k. In what follows, we deal with rapid, κ-arrow, and Ramsey filters and ultrafilters. Definition 3 ([5]). 
A filter F on ω is said to be rapid if every function ω → ω is majorized by the increasing enumeration of some element of F . Clearly, any filter containing a rapid filter is rapid as well; thus, the existence of rapid filters is equivalent to that of rapid ultrafilters. Rapid ultrafilters are also known as semi-Q-point, or weak Q-point, ultrafilters. Both the existence and nonexistence of rapid ultrafilters is consistent with ZFC (see, e.g., [6] and [7]). The notions of κ-arrow and Ramsey filters are closely related to Ramsey theory, more specifically, to the notion of homogeneity with respect to a coloring, or partition. Given a set X and positive integers m and n, by an m-coloring of [X]n we mean any map c : X → Y of X to a set Y of cardinality m. Any such coloring determines a partition of X into m disjoint pieces, each of which is assigned a color y ∈ Y . A set A ⊂ X is said to be homogeneous with respect to c, or c-homogeneous, if c is constant on [A]n . The celebrated Ramsey theorem (finite version) asserts that, given any positive integers k, l, and m, there exists a positive integer N such that, for any k-coloring c : [X]l → Y , where |X| ≥ N and |Y | = k, there exists a c-homogeneous set A ⊂ X of size m. We consider κ-arrow and Ramsey filters on any, not necessarily countable, infinite sets. For convenience, we require these filters to be uniform, i.e., nondegenerate in the sense that all of their elements have the same cardinality (equal to that of the underlying set). Definition 4. Let κ be an infinite cardinal, and let F be a uniform filter on a set X of cardinality κ. (i) We say that F is a Ramsey filter if, for any 2-coloring c : [X]2 → {0, 1}, there exists a c-homogeneous set A ∈ U . (ii) Given an arbitrary cardinal λ ≤ κ, we say that F is a λ-arrow filter if, for any 2-coloring c : [X]2 → {0, 1}, there exists either a set A ∈ F such that c([A]2 ) = {0} or a set S ⊂ X with |S| ≥ λ such that c([S]2 ) = {1}. 1Recall that X ⊂ X F = X ∪ {∗}, and, therefore, B(X) (without topology) is naturally embedded in B(XF ) as a subgroup. 4 OL’GA V. SIPACHEVA Any filter F on X which is Ramsey or λ-arrow for λ ≥ 3 is an ultrafilter. Indeed, let S ⊂ X and consider the coloring c : [X]2 → {0, 1} defined by ( 0 if x, y ∈ S or x, y ∈ X \ S, c({x, y}) = 1 otherwise. Clearly, any c-homogeneous set containing more than two points is contained entirely in S or in X \ S; therefore, either S or X \ S belongs to F , so that F is an ultrafilter. According to Theorem 9.6 in [8], if U is a Ramsey ultrafilter on X, then, for any n < ω and any 2-coloring c : [X]n → {0, 1}, there exists a c-homogeneous set A∈U. It is easy to see that if F is λ-arrow, then, for any A ∈ F and any c : [A]2 → {0, 1}, there exists either a set B ∈ F such that B ⊂ A and c([B]2 ) = {0} or a set S ⊂ A with |S| ≥ λ such that c([S]2 ) = {1}. In [9], where k-arrow ultrafilters for finite k were introduced, it was shown that the existence of a 3-arrow (ultra)filter on ω implies that of a P -point ultrafilter; therefore, the nonexistence of κ-arrow ultrafilters for any κ ≥ 3 is consistent with ZFC (see [10]). On the other hand, the continuum hypothesis implies the existence of k-arrow ultrafilters on ω for any k ≤ ω. To formulate a more delicate assumption under which k-arrow ultrafilters exist, we need more definitions. Given a uniform filter F on ω, a set B ⊂ ω is called a pseudointersection of F if the complement A \ B is finite for all A ∈ F . 
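For instance, any countable free filter (or any countable filter base consisting of infinite sets) has an infinite pseudointersection, obtained by a diagonal choice of one point from each finite intersection; this is the standard argument behind the inequality ω1 ≤ p recalled below. A minimal sketch, with a hypothetical decreasing chain An of multiples of 2^n standing in for the filter base:

    from itertools import count

    def A(n):
        # A_n = positive multiples of 2**n, a decreasing chain of infinite sets
        return (k for k in count(1) if k % 2 ** n == 0)

    def pseudointersection(depth):
        # diagonal choice: b_n in A_0 ∩ ... ∩ A_n = A_n with b_n > b_{n-1}
        b, prev = [], 0
        for n in range(depth):
            prev = next(k for k in A(n) if k > prev)
            b.append(prev)
        return b

    B = pseudointersection(10)
    # B \ A_n is finite for every n: at most the first n entries of B lie outside A_n.
    assert all(all(k % 2 ** n == 0 for k in B[n:]) for n in range(10))

The same diagonal choice works verbatim for any countably generated free filter, since finite intersections of its elements are again infinite.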
The pseudointersection number p is the smallest size of a uniform filter on ω which has no infinite pseudointersection. It is easy to show that ω1 ≤ p ≤ 2ω , so that, under the continuum hypothesis, p = 2ω . It is also consistent with ZFC that, for any regular cardinals κ and λ such that ω1 ≤ κ ≤ λ, 2ω = λ and p = κ (see [11, Theorem 5.1]). It was proved in [9] that, under the assumption p = 2ω (which is referred to as P(c) in [9]), there exist κ-arrow ultrafilters on ω for all κ ≤ ω. Moreover, for each k ∈ N, there exists a k-arrow ultrafilter on ω which is not (k + 1)-arrow, and there exists an ultrafilter which is k-arrow for each k ∈ N but is not Ramsey and hence not ω-arrow [9, Theorems 2.1 and 4.10]. In addition to the free group topology of Boolean groups on spaces generated by filters, we consider the Bohr topology on arbitrary abstract and topological groups. This is the weakest group topology with respect to which all homomorphisms to compact topological groups are continuous, or the strongest totally bounded group topology; the Bohr topology on an abstract group (without topology) is defined as the Bohr topology on this group endowed with the discrete topology. Finally, we need the definition of a minimal dynamical system. Definition 5. Let G be a monoid with identity element e. A pair (X, (Tg )g∈G ), where X is a topological space and (Tg )g∈G is a family of continuous maps X → X such that Te is the identity map and Tgh = Tg ◦ Th for any g, h ∈ G, is called a topological dynamical system. Such a system is said to be minimal if no proper closed subset of X is Tg -invariant for all g ∈ G. We sometimes identify sequences with their ranges. All groups considered in this paper are assumed to be infinite, and all filters are assumed to have empty intersection, i.e., to contain the Fréchet filter of all cofinite subsets (and hence be free). LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 5 2. Properties of Large Sets We begin with well-known general properties of large sets defined above. Let G be a group. 1. A set A ⊂ G is thick if and only if the family {gA : g ∈ G} of all translates of A has the finite intersection property. Indeed, this property means that, for every finite subset F of G, there exists an T h ∈ g∈F g −1 A, and this, in turn, means that gh ∈ A for each g ∈ F , i.e., F h ⊂ A. 2 [3, Theorem 2.4]. A set A is syndetic if and only if A intersects every thick set nontrivially, or, equivalently, if its complement G \ A is not thick. 3. A set A is thick if and only if A intersects every syndetic set nontrivially, or, equivalently, if its complement G \ A is not syndetic. 4 [3, Theorem 2.4]. A set A is piecewise syndetic if and only if there exists a syndetic set B and a thick set C such that A = B ∩ C. 5 [12, Theorem 4.48]. A set A is thick if and only if A βG = {p ∈ βG : A ∈ p} (the closure of A in the Stone–Čech compactification βG of G with the discrete topology) contains a left ideal of the semigroup βG. 6 [12, Theorem 4.48]. A set A is syndetic if and only if every left ideal of βG βG intersects A . 7. The families of thick, syndetic, and piecewise syndetic sets are closed with respect to taking supersets. 8. Thickness, syndeticity, and piecewise syndeticity are translation invariant. 9 [3, Theorem 2.5]. Piecewise syndeticity is partition regular, i.e., whenever a piecewise syndetic set is partitioned into finitely many subsets, one of these subsets is piecewise syndetic. 10 [3, Theorem 2.4]. 
For any thick set A ⊂ G, there exists an infinite sequence B = (bn )n∈N in G such that FP(B) = {xn1 xn2 . . . xnk : k, n1 , n2 , . . . , nk ∈ N, n1 < n2 < · · · < nk } is contained in A. 11. Any IP∗ -set in G, i.e., a set intersecting any infinite set of the form FP(B), is syndetic. This immediately follows from properties 2 and 10. 3. Fat Sets As mentioned at the beginning of this section, in [2], Reznichenko and the author introduced a new2 class of large sets, which we called fat; they have played the key role in our construction of nonclosed discrete subsets in topological groups. Definition 6. We say that a subset A of a group G is fat in G if there exists a positive integer m such that any m-element set F in G contains a two-element subset D for which D−1 D ⊂ A. The least number m with this property is called the fatness of A. We shall refer to fat sets of fatness m as m-fat sets. In a similar manner, κ-fat sets for any cardinal κ can defined. 2Later, we have found out that similar subsets of Z had already been used in [13]: the ∆∗ -sets n considered there and n-fat subsets of Z are very much alike. 6 OL’GA V. SIPACHEVA Definition 7. Given a cardinal κ, we say that a subset A of a group G is κ-fat in G if any set S ⊂ G with |S| = κ contains a two-element subset D for which D−1 D ⊂ A. The notions of an ω-fat and a k-fat set are very similar to but different from those of ∆∗ - and ∆∗k -sets. ∆∗ -Sets were introduced and studied in [3] for arbitrary semigroups, and ∆∗k -sets with k ∈ N were defined in [13] for the case of Z. Definition 8. Given a finite of countable cardinal κ and a sequence (gn )n∈κ in a group G, we set  −1 gn } ∆ (gn )n∈κ = {x ∈ G : there exist m < n < κ such that x = gm and  −1 }. ∆D (gn )n∈κ = {x ∈ G : there exist m < n < κ such that x = gn gm  ∗ (g ) A subset of a group G is called a right (left ) ∆ -set if it intersects ∆ n n∈κ I κ  (respectively, ∆D (gn )n∈κ ) for any one-to-one sequence (gn )n∈κ in G. ∆∗ω -sets are referred to as ∆∗ -sets. Remark. For any one-to-one sequence S = (gn )n∈κ in a Boolean group with zero 0, we have ∆I (S) = ∆D (S) = (S △ S) \ {0}. Hence any κ-fat set in such a group is a right and left ∆∗κ -set. Moreover, the only difference between ∆∗κ - and κ-fat sets in a Boolean group is in that the latter must contain 0. The most obvious feature distinguishing fatness among other notions of largeness is symmetry (fatness has no natural right and left versions). In return, translation invariance is sacrificed. Thus, in studying fat sets, it makes sense to consider also their translates. Clearly, a 2-fat set in a group must coincide with this group. The simplest nontrivial example of a fat set is a subgroup of finite index n; its fatness equals n + 1 (any (n + 1)-element subset has two elements x and y in the same coset, and both x−1 y and y −1 x belong to the subgroup). It seems natural to refine the definition of fat sets by requiring A ∩ F −1 F to be of prescribed size rather than merely nontrivial. However, this (and even a formally stronger) requirement does not introduce anything new. Proposition 1 ([2, Proposition 1.1]). For any fat set A in a group G and any positive integer n, there exists a positive integer m such that any m-element set F in G contains an n-element subset F ′ for which F ′−1 F ′ ⊂ A. Indeed, considering the coloring c : [G]2 → {0, 1} defined by c({x, y}) = 1 ⇐⇒ x y, y −1 x ∈ A and applying the finite Ramsey theorem, we find a c-homogeneous set F of size n (provided that m is large enough). 
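For small finite groups, fatness in the sense of Definition 6 can be computed by exhaustive search. The following sketch is an illustration only: the cyclic group Z12 (written additively) and its index-3 subgroup are arbitrary choices, and the ad hoc helpers is_m_fat and fatness simply translate the definition; the search returns the value predicted by the finite-index example discussed below.

    from itertools import combinations

    def is_m_fat(A, n, m):
        # Definition 6 in Z_n (additive): every m-element F contains x != y
        # with -D + D = {0, x - y, y - x} (mod n) contained in A.
        return 0 in A and all(
            any((x - y) % n in A and (y - x) % n in A for x, y in combinations(F, 2))
            for F in combinations(range(n), m)
        )

    def fatness(A, n):
        return next((m for m in range(2, n + 1) if is_m_fat(A, n, m)), None)

    H = {g for g in range(12) if g % 3 == 0}   # subgroup of index 3 in Z_12
    print(fatness(H, 12))                       # prints 4 = index + 1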
If n is no smaller than the fatness of A (which we can assume without loss of generality), then c([F ]2 ) = {1}. There is yet another important distinguishing feature of fat sets, namely, the finite intersection property. Neither thick, syndetic, nor piecewise syndetic sets have this property. (Indeed, the disjoint sets of even and odd numbers are syndetic S S in Z, and i≥0 [22i , 22i+1 ) ∩ Z and i≥1 [22i−1 , 2i ) are thick.) The following theorem is valid. −1 Theorem 2 ([2]). Let G be a group. (i) If A ⊂ G is fat, then so is A−1 . LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 7 (ii) If A ⊂ B ⊂ G and A is fat, then so is B. (iii) If A ⊂ G and B ⊂ G are fat, then so is A ∩ B. Assertions (i) and (ii) are obvious, and (iii) follows from Proposition 1. Proposition 3. If G is a group, S ⊂ G, and S ∩ (SS ∪ S −1 S −1 ) = ∅, then G \ S is 3-fat. Proof. Take any three different elements a, b, c ∈ G. We must show that the identity element e belongs to G\S (which is true by assumption) and either (a−1 b)±1 ∈ G\S, (b−1 c)±1 ∈ G \ S, or (c−1 a)±1 ∈ G \ S. Assume that, on the contrary, (a−1 b)ε ∈ S (i.e., a−1 b ∈ S ε ), b−1 c ∈ S δ , and c−1 a ∈ S γ for some ǫ, δ, γ ∈ {−1, 1}. At least two of the three numbers ε, δ, and γ are equal. Suppose for definiteness that ε = δ. Then we have c−1 a = c−1 bb−1 a ∈ S −ε S −ε , which contradicts the assumption S ∩ (S 2 ∪ S −2 ) = ∅.  We see that the family of fat sets in a group resembles, in some respects, a base of neighborhoods of the identity element for a group topology. However, as we shall see in the next section, it does not generate a group topology even in a Boolean group: any Boolean group has a 3-fat subset A containing no set of the form B △ B for fat B. On the other hand, very many groups admit of group topologies in which all neighborhoods of the identity element are fat; for example, such are topologies generated by normal subgroups of finite index. A more precise statement is given in the next section. Before turning to related questions, we consider how fat sets fit into the company of other large sets. We begin with a comparison of fat and syndetic sets. Proposition 4 (see [2, Proposition 1.7]). Let G be any group with identity element e. Any fat set A in G is syndetic, and its syndeticity index is less than its fatness. Proof. Let n denote the fatness of A. Take a finite set F ⊂ G with |F | = n − 1 such that x−1 y ∈ / A or y −1 x ∈ / A for any different x, y ∈ F . Pick any g ∈ G \ F . Since |F ∪ {g}| = n, it follows that x−1 g ∈ A and g −1 x ∈ A for some x ∈ F , whence g ∈ xA, i.e., G \ F ⊂ F A. By definition, the identity element of G belongs to A, and we finally obtain G = F A.  Examples of nonfat syndetic sets are easy to construct: any coset of a finite-index subgroup in a group is syndetic, while only one of them (the subgroup itself) is fat. However, the existence of syndetic sets with nonfat translates is not so obvious. An example of such a set in Z can be extracted from [13]. Example 1. There exists a syndetic set in Z such that none of its translates is fat. This is, e.g., the set constructed in [13, Theorem 4.3]. Namely, let C = {0, 1}Z, and let τ : C → C be the shift, i.e., the map defined by τ (f )(n) = f (n + 1) for f ∈ C. 
It was proved in [13, Theorem 4.3] that if M ⊂ C is a minimal closed τ -invariant subset3 and the dynamical system (M, (τ n )n∈Z ) satisfies a certain condition4, then the support of any f ∈ M is syndetic but not piecewise Bohr; the latter means that it cannot be represented as the intersection of a thick set and a set having nonempty interior in the Bohr topology on Z. Clearly, any translate of supp f has 3Then the support of each f ∈ M is syndetic in Z (see, e.g., [14]). 4Namely, is weakly mixing; see, e.g., [14] 8 OL’GA V. SIPACHEVA these properties as well. On the other hand, according to Theorem II in [13], any ∆∗n -set in Z (i.e., any set intersecting the set of differences {kj − ki : i < j ≤ n} for each n-tuple (k1 , . . . , kn ) of different integers) is piecewise Bohr. Since every n-fat set is a ∆∗n -set, it follows that the translates of supp f cannot be fat. Bearing in mind our particular interest in Boolean groups, we also give a similar example for a Boolean group. Example 2. We construct a syndetic set in the Boolean group B(Z) with nonfat translates. Let S be a syndetic set in Z all ofSwhose translates are not ∆∗n -sets for all n (see Example 1). By definition, Z = k≤r (sk + S) for some r ∈ N and different s1 , . . . , sr ∈ Z. We set Sk′ = {x1 △ . . . △ xn : n ∈ N, xi ∈ Z for i ≤ n, xi 6= xj for i 6= j, X {x1 , . . . , xn } ∩ {s1 , . . . , sr } = {sk }, xi ∈ 2sk + S}, k ≤ r, i≤n and S′ = [ Sk′ . k≤r We have sk △ Sk′ = {x1 △ . . . △ xn : n ∈ N, xi ∈ Z for i ≤ n, xi 6= xj for i 6= j, X {x1 , . . . , xn } ∩ {s1 , . . . , sr } = ∅, xi ∈ sk + S}, k ≤ r. i≤n Since [ S k≤r (sk + S) = Z, it follows that (sk △ S ′ ) k≤r ⊂ {x1 △ . . . △ xn : n ∈ N, xi ∈ Z for i ≤ n, {x1 , . . . , xn } ∩ {s1 , . . . , sr } = ∅}. Obviously, the set on the right-hand side of this inclusion is syndetic; therefore, so is S ′ . Let us show that no translate of S ′ is fat. Suppose that, on the contrary, k, n ∈ N, z1 , . . . , zk ∈ Z, w = z1 △ . . . △ zk , and w △ S ′ is n-fat. Take any different k1 , . . . , kn ∈ Z larger than the absolute values of all elements of w (which is a finite subset of Z) and of all si , i ≤ r. We set F = {k1 , k1 △(−k1 ) △ k2 , k1 △(−k1 ) △ k2 △(−k2 ) △ k3 , . . . , k1 △(−k1 ) △ k2 △(−k2 ) △ . . . △ kn−1 △(−kn−1 ) △ kn }. Suppose that there exist different x, y ∈ F for which x △ y ∈ w △ S ′ , i.e., there exist i, j ≤ n for which i < j and k1 △(−k1 ) △ . . . △ ki−1 △(−ki−1 ) △ ki △ k1 △(−k1 ) △ . . . △ kj−1 △(−kj−1 ) △ kj = ki △ ki △(−ki ) △ ki+1 △(−ki+1 ) △ . . . △ kj−1 △(−kj−1 ) △ kj = w △ s ∈ w △ S ′ , where s is an element of S ′ and hence belongs to Sl′ for some l ≤ r, which means, in particular, that s contains precisely one of the letters s1 , . . . , sr , namely, sl . There are no such letters among ±ki , . . . , ±kj−1 , kj . Therefore, one of the letters zm (say z1 ) is sl . The other letters of w do not equal ±ki , . . . , ±kj−1 , kj either and, therefore, are canceled with letters of s ∈ S ′ in the word w + s. By the definition of LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 9 the set S ′ containing s, one letter of the word w (namely, z1 = sl ) belongs to the set {s1 , . . . , sr } and the other letters do not. Since the sum (in Z) of the integer-letters of s belongs to 2sl + S (by the definition of Sl′ ) and sl = z1 , it follows that the sum of letters of w + s belongs to S + z1 − z2 − · · · − zk and the letter z1 is determined uniquely for the given word w. 
To obtain a contradiction, it remains to recall that the translates of S (in particular, S + z1 − z2 − · · · − zk ) are not ∆∗n -sets in Z and choose k1 , . . . , kn so that {kj − ki : i < j ≤ n} ∩ (S + z1 − z2 − · · · − zk ) = ∅. Example 3. There exist fat sets which are not thick and thick sets which are not fat. Indeed, as mentioned, any proper finite-index group is fat, but it cannot be thick by the first property in the list of properties of large sets given above. An example of a nonfat thick set is, e.g., any thick nonsyndetic set. In an infinite Boolean group G, such a set can be constructed as follows. Take any S basis X in G (so that G = B(X)), fix any nonsyndetic thick set T in N (say T = n ([an , bn ]∩N), where the an and bn are numbers such that the bn − an and the an+1 − bn increase without bound), and consider the set A = {x1 △ . . . △ xn ∈ B(X) : n ∈ T, xi ∈ X for i ≤ n, xi 6= xj for i 6= j} of all words in B(X) whose lengths belong to T . The thickness of this set is obvious (by the same property 1), because the translate of A by any S word g ∈ B(X) of any length l surely contains all words whose lengths belong to n ([an +l, bn −l]∩N) ⊂ T and, therefore, intersects A. However, A is not fat, because it misses all words S whose lengths belong to the set n ((bn , an+l ) ∩ N). The last set contains at least one even positive integer 2k. It remains to choose different points x1 , x2 , . . . in X, set B = {xkn+1 △ xkn+2 . . . △ xkn+k : n ∈ ω}, and note that all nonempty words in B △ B have length 2k. Therefore, A is disjoint from B △ B (much more from F △ F for any finite F ⊂ B). Note that the translates of A are not fat either, because both thickness and (non)syndeticity are translation invariant. Proposition 5. Let G be any group with identity element e. (i) If a set A in G is 3-fat, then (G \ A)−1 (G \ A) ⊂ A. (ii) If a set A in G is 3-fat, then either AA−1 = G or A is a subgroup of index 2. Proof. (i) Suppose that A is a 3-fat subset of a group G with identity element e. Take any different x, y ∈ / A (if there exist no such elements, then there is nothing to prove). By definition, the set {x, y, e} contains a two-element subset D for which D−1 D ⊂ A. Clearly, D 6= {x, e} and D 6= {y, e}. Therefore, x−1 y ∈ A and y −1 x ∈ A (and e ∈ A, too), whence (G \ A)−1 (G \ A) ⊂ A. (ii) If AA−1 6= G, then there exists a g ∈ G for which gA ∩ A = ∅. If A is, in addition, 3-fat, then (ii) implies (gA)−1 gA = A−1 A ⊂ A, which means that A is a subgroup of G. According to (i), A is syndetic of index at most 2; in fact, its index is precisely 2, because A does not coincide with G.  4. Quotient sets −1 In [3] sets of the form AA or A−1 A were naturally called quotient sets. We shall refer to the former as right quotient sets and to the latter as left quotient sets. Thus, a set in a group G is m-fat if it intersects nontrivially the left quotient set of any m-element subset of G. Quotient sets play a very important role in combinatorics, and their interplay with large sets is quite amazing. 10 OL’GA V. SIPACHEVA First, the passage to right quotient sets annihilates the difference between syndetic and piecewise syndetic sets. Theorem 6 (see [3, Theorem 3.9]). For each piecewise syndetic subset A of a group G, there exists a syndetic subset B of G such that BB −1 ⊂ AA−1 and the syndeticity index of B does not exceed the thickness index of A. 
Briefly, the construction of B given in [3] is as follows: we takeTa finite set T such that T A is thick and, for each finite F ⊂ G, let ΦF = {ϕ ∈ T G : x∈F x−1 ϕ(x)A 6= ∅}. Then we pick ϕ∗ in the intersection of all ΦF (which exists since the product space T G is compact) and let B = {ϕ∗ (x)−1 x : x ∈ G}. Since ϕ∗ (G) ⊂ T , it follows that T B = G, which means that B is syndetic and its index does not exceed |T | = t. Moreover, for any finite F ⊂ B, there exists a g ∈ G such that F g ⊂ A, and this implies BB −1 ⊂ AA−1 . In Theorem 6, right quotient sets cannot be replaced by left ones: there are examples of piecewise syndetic sets A such that A−1 A does not contain B −1 B for any syndetic B. One of such examples is provided by the following theorem. Theorem 7. (i) If a subset A of a group G is syndetic of index s, then A−1 A is fat, and its fatness does not exceed s + 1. (ii) If a subset A of an Abelian group G is piecewise syndetic of thickness index t, then A − A is fat, and its fatness does not exceed t + 1. (iii) There exists a group G and a thick (in particular, piecewise syndetic) set A ⊂ G such that A−1 A is not fat and, therefore, does not contain B −1 B for any syndetic set. (iv) If a subset A of a group G is thick, then AA−1 = G. Proof. (i) Suppose that F A = G, where F = {g1 , . . . , gs }. Any (s + 1)-element subset of G has at least two points x and y in the same “coset” gi A. We have x = gi a′ and y = gi a′′ , where a′ , a′′ ∈ A. Thus, x−1 y, y −1 x ∈ A−1 A. Assertion (ii) follows immediately from (i) and Theorem 6. Let us prove (iii). Consider the free group G on two generators a and b and let A be the set of all words in G whose last letter is a. Then A is thick (given any finite F ⊂ G, we have F an ⊂ A for sufficiently large n). Clearly, all nonidentity words in A−1 A contain a or a−1 . Therefore, if F ⊂ G consists of words of the form bn , then the intersection F −1 F ∩ A−1 A is trivial, so that A−1 A is not fat. Finally, to prove (iv), take any g ∈ G. We have A ∩ gA 6= ∅ (by property 1 in our list of properties of large sets). This means that g ∈ AA−1 .  We see that the right quotient sets AA−1 of thick sets A are utmostly fat, while the left quotient sets A−1 A may be rather slim. In the Abelian case, the difference sets of all thick sets coincide with the whole group. It is natural to ask whether condition (i) in Theorem 7 characterizes fat sets in groups. In other words, given any fat set A in a group, does there exist a syndetic (or, equivalently, piecewise syndetic) set B such that B −1 B ⊂ A (or BB −1 ⊂ A)? The answer is no, even for thick 3-fat sets in Boolean groups. The idea of the following example was suggested by arguments in paper [13] and in John Griesmer’s note [15], where the group Z was considered. Example 4. Let G be a countable Boolean group with zero 0. Any such group can be treated as the free Boolean group on Z. We set A = G \ {m △ n = {m, n} : m, n ∈ Z, m < n, n − m = k 3 for some k ∈ N}. LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 11 Clearly, A is thick (if F ⊂ G is finite and a word g ∈ G is sufficiently long, then all words in the set F △ g have more than two letters and, therefore, belong to A). Let us prove that A is 3-fat. Take any different a, b, c ∈ G. We must show that a △ b ∈ A, b △ c ∈ A, or a △ c ∈ A. We can assume that c = 0; otherwise, we translate a, b, and c by c, which does not affect the Boolean sums. Thus, it suffices to show that, given any different nonzero x, y 6∈ G, we have x △ y ∈ A. 
The condition x, y 6∈ G means that x = {k, l}, where k < l and l − k = r3 for some r ∈ Z, and y = {m, n}, where m < n and n − m = s3 for some s ∈ Z. Suppose for definiteness that n > l or n = l and m > k. If x △ y 6∈ A, then either k = m and l − n = t3 for some t ∈ N, l = m and n − k = t3 for some t ∈ N, or l = n and m − k = t3 for some t ∈ N. In the first case, we have l − k = l − n + n − m, i.e., r3 = t3 + s3 ; in the second, we have n − k = n − m + l − k, i.e., t3 = s3 + r3 ; and in the third, we have l − k = n − m + m − k, i.e., r3 = s3 + t3 . In any case, we obtain a contradiction with Fermat’s theorem. It remains to prove that there exists no syndetic (and hence no piecewise syndetic) B ⊂ G for which B △ B ⊂ A. Consider any syndetic set B. Let F = {f1 , . . . , fk } ⊂ G be a finite set for which F B = G, and let m be the maximum absolute value of all letters of words in F (recall that all letters are integers). To each n ∈ Z with |n| > m we assign a word fi ∈ F for which n ∈ fi △ B; if there are several such words, then we choose any of them. Thereby, we divide the set of all integers with absolute value larger than m into k pieces I1 , . . . , Ik . To accomplish our goal, it suffices to show that there is a piece Ii containing two integers r and s such that r − s = z 3 for some z ∈ Z. Indeed, in this case, we have r ∈ fi △ B and s ∈ fi △ B, so that r △ s ∈ B △ B. On the other hand, r △ s 6∈ A. From now on, we treat the pieces I1 , . . . , Ik as subsets of Z. We have Z = {−m, −m + 1, . . . , 0, 1, . . . , m} ∪ I1 ∪ · · · ∪ Ik . Since piecewise syndeticity is partition regular (see property 9 of large sets), one of the sets Ii , say Il , is piecewise syndetic. Therefore, by Theorem 5, Il − Il ⊃ S − S for some syndetic set S ⊂ Z. Let d∗ (S) denote the upper Banach density of S, i.e., |S ∩ {n, n + 1, · · · , n + d}| . n→∞ d→∞ d The syndeticity of S in Z implies the existence of an N ∈ N such that every interval of integers longer than N intersects S. Clearly, we have d∗ (S) ≥ 1/N . Proposition 3.19 in [14] asserts that if X is a set in Z of positive upper Banach density and p(t) is a polynomial taking on integer values at the integers and including 0 in its range on the integers, then there exist x, y ∈ X, x 6= y, and z ∈ Z such that x − y = p(z) (as mentioned in [14], this was proved independently by Sárközy). Thus, there exist different x, y ∈ S and a z ∈ Z for which x − y = z 3 . Since S − S ⊂ Il − Il , it follows that z 3 = r − s for some r, s ∈ Il , as desired. d∗ (S) = lim lim sup 5. Large Sets and Topology In the context of topological groups quotient sets arise again, because for each neighborhood U of the identity element, there must exist a neighborhood V such that V −1 V ⊂ U and V V −1 ⊂ U . Thus, if we know that a group topology consists of piecewise syndetic sets, then, in view of Theorem 6, we can assert that all open sets are syndetic, and so on. Example 4 shows that if G is any countable Boolean topological group and all 3-fat sets are open in G, then some nonempty open sets in this group are not piecewise syndetic. Thus, all syndetic or piecewise syndetic 12 OL’GA V. SIPACHEVA subsets of a group G do not generally form a group topology. 
Even their quotient (difference in the Abelian case) sets are insufficient; however, it is known that double difference sets of syndetic (and hence piecewise syndetic) sets in Abelian groups are neighborhoods of zero in the Bohr topology.5 These and many other interesting results concerning a relationship between Bohr open and large subsets of abstract and topological groups can be found in [16, 17]. As to group topologies in which all open sets are large, the situation is very simple. Theorem 8. For any topological group G with identity element e, the following conditions are equivalent: (i) all neighborhoods of e in G are piecewise syndetic; (ii) all open sets in G are piecewise syndetic; (iii) all neighborhoods of e in G are syndetic; (iv) all open sets in G are syndetic; (v) all neighborhoods of e in G are fat; (vi) G is totally bounded. Proof. The equivalences (i) ⇔ (ii) and (iii) ⇔ (iv) follow from the obvious translation invariance of piecewise syndeticity and syndeticity. Theorem 6 implies (i) ⇔ (iii), Theorem 7 (i) implies (iii) ⇒ (v), and Proposition 4 implies (v) ⇒ (iii). The implication (iii) ⇒ (i) is trivial. Finally, (vi) ⇔ (iii) by the definition of total boundedness.  Thus, the Bohr topology on a (discrete) group is the strongest group topology in which all open sets are syndetic (or, equivalently, piecewise syndetic, or fat). For completeness, we also mention the following corollary of Theorem 7 and Theorem 3.12 in [3], which relates fat sets to topological dynamics. Corollary 9. If G is an Abelian group with zero 0, X is a compact Hausdorff space, and (X, (Tg )g∈G ) is a minimal dynamical system, then the set {g ∈ G : U ∩Tg−1 U 6= ∅} is fat for every nonempty open subset U of X. 6. Fat and Discrete Sets in Topological Groups As mentioned above, fat sets were introduced in [2] to construct discrete sets in topological groups. Namely, given a countable topological group G whose identity element e has nonrapid filter F of neighborhoods, we can construct a discrete set with precisely one limit point in this group as follows. The nonrapidness of F means that, given any sequence (mn )n∈N of positive integers, there exist finite sets Fn ⊂ G, n ∈ N, such that each neighborhood of e intersects some Fn in at least mn points (see [7, Theorem 3 (3)]).TThus, if we have a decreasing sequence of closed mn -fat sets An in G such that An = {e}, then the set [ D= {a−1 b : a 6= b, a, b ∈ Fn , a−1 b ∈ An } n∈N is discrete (because e ∈ / D and each g ∈ G\{e} has a neighborhood of the form G\An which contains only finitely many elements of D), and e is the only limit point of D (because, given any neighborhood U of e, we can take a neighborhood V such that 5It follows, in particular, that, given any piecewise syndetic set A in an Abelian group, there exists an infinite sequence of fat sets A1 , A2 , . . . such that A1 − A1 ⊂ A + A − A − A and An+1 − An+1 ⊂ An for all n (because all Bohr open sets are syndetic). LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 13 V −1 V ⊂ U ; we have |V ∩Fn | ≥ mn for some n, and hence (V ∩Fn )−1 (V ∩Fn )∩An 6= ∅, so that U ∩ D 6= ∅). It remains to find a family of closed fat sets with trivial intersection and make it decreasing. The former task is easy to accomplish in any topological group: by Proposition 3, in any topological group G, the complements to open neighborhoods gU of all g ∈ G satisfying the condition gU ∩ (U 2 ∪ (U −1 )2 ) = ∅ form a family of closed 3-fat sets with trivial intersection. 
In countable groups, this family can be made decreasing by using Theorem 2, according to which the family of fat sets has the finite intersection property. Unfortunately, no similar argument applies in the uncountable case, because countable intersections of fat sets may be very small. Thus, in Zω 2 , the intersection of the 3-fat sets Hn = {f ∈ Zω : f (n) = 0} (each of which is a 2 subgroup of index 2 open in the product topology) is trivial. 7. Large Sets in Boolean Groups In the case of Boolean groups, many assertions concerning large sets can be refined. For example, properties 10 and 11 of large sets are stated as follows. Proposition 10. (i) For any thick set T in a Boolean group G with zero 0, there exists an infinite subgroup H of G for which T ∪ {0} ⊃ H. (ii) Any set which intersects nontrivially all infinite subgroups in a Boolean group G is syndetic. Note that this is not so in non-Boolean groups: the set {n! : n ∈ N} intersects any infinite subgroup in Z, but it is not syndetic, because the gaps between neighboring elements are not bounded. The complement of this set contains no infinite subgroups, and it is thick by property 2 of large sets. Another specific feature of thick sets in Boolean groups is given by the following proposition. Proposition 11. For any thick set T in a countable Boolean group G with zero 0, there exists a set A ⊂ G such that T ∪ {0} = A △ A (and A △ A △ A △ A = G by Theorem 7 (iv)). Proposition 11 is an immediate corollary of Lemma 4.3 in [3], which says that any thick set in a countable Abelian group equals ∆I (gn )∞ n=1 for some sequence (gn )∞ n=1 . In view of Example 4, we cannot assert that the set A in this proposition is large (in whatever sense), even for the largest (3-fat) nontrivial thick sets T . The following statement can be considered as a partial analogue of Propositions 10 and 11 for ∆∗ - (in particular, fat) sets in Boolean groups. Theorem 12. For any ∆∗ -set A in a Boolean group G with zero 0, there exists a B ⊂ G with |B| = |A| such that B △ B ⊂ A ∪ {0}. Proof. First, note that |A| = |G|. Any Boolean group is algebraically free; therefore, we can assume that G = B(X) for a set X with |X| = |A|. Let  A2 = A ∩ B=2 (X) = {x, y} = x △ y ∈ A : x, y ∈ X be the intersection of A with the set of words of length 2. We have |A2 | = |X|, because A must intersect nontrivially each countable set of the form Y △ Y for 14 OL’GA V. SIPACHEVA Y ⊂ X. Consider the coloring c : [X]2 → {0, 1} defined by ( 0 if {x, y} ∈ A2 , c({x, y}) = 1 otherwise. According to the well-known Erdös–Dushnik–Miller theorem κ → (κ, ℵ0 )2 (see, e.g., [18]), there exists either an infinite set Y ⊂ X for which [Y ]2 ∩ A2 = ∅ or a set Y ⊂ X of cardinality |X| for which [Y ]2 ⊂ A2 . The former case cannot occur, because [Y ]2 = Y △ Y in B(X), [Y ]2 ⊂ B=2 (X), and A is a ∆∗ -set. Thus, the latter case occurs, and we set B = Y .  We have already distinguished between fat sets and translates of syndetic sets in Boolean groups (see Example 2). For completeness, we give the following example. Example 5. The countable Boolean group B(Z) contains an IP∗ -set (see property 11 of large sets) which is not a ∆∗ -set. An example of such a set is constructed from the corresponding example in Z (see [14, p. 177] in precisely the same way as Example 2. 8. 
Large Sets in Free Boolean Topological Groups As shown in Section 5, given any Boolean group G, the filter of fat sets in G cannot be the filter of neighborhoods of zero for a group topology, because not all fat and even 3-fat sets are neighborhoods of zero in the Bohr topology. Moreover, if we fix any basis X in G, so that G = B(X), then not all traces of 3-fat sets on the set B=2 (X) of two-letter words contain those of Bohr neighborhoods of zero. However, there are natural group topologies on B(X) such that the topologies which they induce on B2 (X) contain those generated by n-fat sets. Theorem 13. Let k ∈ N , and let F be a filter on an infinite set X. Then the following assertions hold. (i) For k 6= 4, the trace of any k-fat subset of B(X) on B2 (X) ⊂ B2 (XF ) contains that of a neighborhood of zero in the free group topology of B(XF ) if and only if F is a k-arrow filter. (ii) If the trace of any 4-fat set on B2 (X) contains that of a neighborhood of zero in the free group topology of B(XF ), then F is a 4-arrow filter, and if F is a 4-arrow filter, then the trace of any 3-fat set on B2 (XF ) contains that of a neighborhood of zero in the free group topology of B(XF ). (iii) The trace of any ω-fat set on B2 (X) contains that of a neighborhood of zero in the free group topology of B(XF ) if and only if F is an ω-arrow ultrafilter. The proof of this theorem uses the following lemma. Lemma 14. (i) If k 6= 4, w1 , . . . , wk ∈ B(X), and wi △ wj ∈ B=2 (X) for any i < j ≤ k, then there exist x1 , . . . , xk ∈ X such that wi △ wj = xi △ xj for any i < j ≤ k. (ii) If k = 4, w1 , w2 , w3 , w4 ∈ B(X), and wi △ wj ∈ B=2 (X) for any i < j ≤ 4, then there exist either (a) x1 , x2 , x3 , x4 ∈ X such that wi △ wj = xi △ xj for any i < j ≤ 4 or LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 15 (b) x1 , x2 , x3 ∈ X such that w1 △ w4 = w2 △ w3 = x2 △ x3 , w2 △ w4 = w1 △ w3 = x1 △ x3 , w3 △ w4 = w1 △ w3 = x1 △ x3 . (iii) If w1 , w2 , · · · ∈ B(X) and wi △ wj ∈ B2 (X) for any i < j, then there exist x1 , x2 , · · · ∈ X such that wi △ wj = xi △ xj for any i < j. Proof. We prove the lemma by induction on k. There is nothing to prove for k = 1, and for k = 2, assertion (i) obviously holds. Suppose that k = 3. For some y1 , y2 , y3 , y4 ∈ X, we have w1 △ w2 = y1 △ y2 and w2 △ w3 = y3 △ y4 . Since w1 △ w3 = w1 △ w2 △ w2 △ w3 ∈ B=2 (X), it follows that either y1 = y3 , y1 = y4 , y2 = y3 , or y2 = y4 . If y1 = y3 , then w1 △ w3 = y2 △ y4 and w2 △ w3 = y1 △ y4 , so that we can set x1 = y2 , x2 = y1 , and x3 = y4 . If y2 = y3 , then w1 △ w3 = y1 △ y4 and w2 △ w3 = y2 △ y4 , and we set x1 = y1 , x2 = y1 , and x3 = y4 . The remaining cases are treated similarly. Suppose that k = 4 and let x1 , x2 , x3 ∈ X be such that wi △ wj = xi △ xj for i = 1, 2, 3. There exist y, z ∈ X for which w1 △ w4 = y △ z. We have w2 △ w4 = w1 △ w2 △ w1 △ w4 = x1 △ x2 △ y △ z ∈ B2 (X). Therefore, either x1 = y, x2 = y, x1 = z, or x2 = z. If x1 = y or x1 = z, then the condition in (ii) (a) holds for x4 = z in the former case and x4 = y in the latter. Suppose that x1 6= y and x1 6= z. Then x2 = y or x2 = z. Let x2 = y. Then w1 △ w4 = x2 △ z, and we have w3 △ w4 = w1 △ w3 △ w1 △ w4 = x1 △ x3 △ x2 △ z ∈ B2 (X), whence x3 = z (because x1 , x2 6= z), so that w1 △ w4 = x2 △ x3 = w2 △ w3 , w2 △ w4 = w1 △ w2 △ w1 △ w4 = x1 △ x3 = w1 △ w3 , and w3 △ w4 = x1 △ x3 = w1 △ w3 , i.e., assertion (ii) (b) holds. The case x2 = z is similar. Note for what follows that, in both cases x2 = y and x2 = z, we have w4 = w1 △ w2 △ w3 . Let k > 4. 
Consider the words w1 , w2 , w3 , and w4 . Let x1 , x2 , x3 ∈ X be such that wi △ wj = xi △ xj for i = 1, 2, 3. As previously, there exist y, z ∈ X for which w1 △ w4 = y △ z and either x1 = y, x2 = y, x1 = z, or x2 = z. Suppose that x1 6= y and x1 6= z; then w4 = w1 △ w2 △ w3 . In this case, we consider w5 instead of w4 . Again, there exist y ′ , z ′ ∈ X for which w1 △ w5 = y △ z and either x1 = y ′ , x2 = y ′ , x1 = z ′ , or x2 = z ′ . Since w5 6= w4 , it follows that w5 6= w1 △ w2 △ w3 , and we have x1 = y ′ or x1 = z ′ . In the former case, we set x5 = z ′ and in the latter, x5 = y ′ . Consider again w4 ; recall that w1 △ w4 = y △ z. We have wi △ w4 = w1 △ wi △ w1 △ w4 = x1 △ xi △ y △ z ∈ B=2 (X) for i ∈ {2, 3, 5}. Since x2 6= x5 and x3 6= x5 , it follows that x1 = y, which contradicts the assumption. Thus, x1 = y or x1 = z. As above, we set x4 = z in the former case and x4 = y in the latter; then the condition in (ii) (a) holds. Suppose that we have already found the required x1 , . . . , xk−1 ∈ X for w1 , . . . , wk−1 . There exist y, z ∈ X for which w1 △ wk = y △ z. We have wi △ wk = w1 △ wi △ w1 △ wk = x1 △ xi △ y △ z ∈ B=2 (X) for i ≤ k − 1. If x1 6= y and x1 6= z, then we have xi ∈ {y, z} for 2 ≤ i ≤ k−1, which is impossible, because k > 4. Thus, either x1 = y or x1 = z. In the former case, we set xk = z and in the latter, xk = y. Then w1 △ wk = x1 △ wk and, for any i ≤ k − 1, wi △ wk = w1 △ wi △ w1 △ wk = x1 △ xi △ x1 △ xk = xi △ xk . The infinite case is proved by the same inductive argument.  16 OL’GA V. SIPACHEVA Proof of Theorem 13. (i) Suppose that F is a k-arrow filter on X. Let C be a k-fat set in B(XF ). Consider the 2-coloring of [X]2 defined by ( 0 if {x, y} = x △ y ∈ C, c({x, y}) = 1 otherwise. Since F is k-arrow, there exists either an A ∈ F for which c([A]2 ) = {0} and hence [A]2 ⊂ C ∩ B2 (XF ) or a k-element set F ⊂ X for which c([F ]2 ) = {1} and hence [F 2 ] ∩ C = [F 2 ] ∩ C ∩ B=2 (XF ) = ∅. The latter case cannot occur, because C is k-fat. Therefore, C ∩B2 (XF ) contains the trace [A]2 ∪{0} = ((A∪{∗}) △(A∪{∗})) of the subgroup hA ∪ {∗}i, which is an open neighborhood of zero in B(XF ). Now suppose that k 6= 4 and the trace of each k-fat set on B2 (X) contains the trace on B2 (X) of a neighborhood of zero in B(XF ), i.e., a set of the form A △ A for some A ∈ F . Let us show that F is k-arrow. Given any c : [X]2 → {0, 1}, we set  C = x △ y : c({x, y}) = 1 and C ′ = B(XF ) \ C. ′ If C is not k-fat, then there exist w1 , . . . , wk ∈ B(X) such that wi △ wj ∈ C for i < j ≤ k. By Lemma 14 (i) we can find x1 , . . . , xk ∈ X such that xi △ xj ∈ C (and hence xi 6= ∗) for i < j ≤ k. This means that, for F = {x1 , . . . , xk }, we have c([F ]2 ) = {1}. If C ′ is k-fat, then, by assumption, there exists an A ∈ F for which A △ A \ {0} ⊂ C ′ ∩ B2 (X) = C, which means that c([A]2 ) = {0}. The same argument proves (ii); the only difference is that assertion (ii) of Lemma 14 is used instead of (i). The proof of (iii) is similar.  Let Rr (s) denote the least number n such that, for any r-coloring c : [X]2 → Y , where |X| ≥ n and |Y | = r, there exists an s-element c-homogeneous set. By the finite Ramsey theorem, such a number exists for any positive integers r and s. Theorem 15. 
There exists a positive integer N (namely, N = R36 (R6 (3)) + 1) such that, for any uniform ultrafilter U on a set X of infinite cardinality κ, the following conditions are equivalent: (i) the trace of any N -fat subset of B(X) on B4 (X) ⊂ B4 (XU ) contains that of a neighborhood of zero in the free group topology of B(XU ); (ii) all κ-fat sets in B(X) are neighborhoods of zero in the topology induced from the free topological group B(XU ); (iii) U is a Ramsey ultrafilter. Proof. Without loss of generality, we assume that X = κ. (i) ⇒ (iii) Suppose that N is as large as we need and the trace of each N -fat set on B4 (κU ) contains the trace on B4 (κ) of a neighborhood of zero in B(XU ), which, in turn, contains a set of the form (A △ A △ A △ A) ∩ B=4 (κ) for some A ∈ U . Let us show that U is a Ramsey ultrafilter. Consider any 2-coloring c : [κ]2 → {0, 1}. We set  C = α1 △ α2 △ α3 △ α4 : αi ∈ κ for i ≤ 4, α1 < α2 < α3 < α4 , c({α1 , α2 }) 6= c({α3 , α4 }), c({α1 , α3 }) 6= c({α2 , α4 }), c({α1 , α4 }) 6= c({α2 , α3 }) and C ′ = B(X) \ C. LARGE SETS IN BOOLEAN AND NON-BOOLEAN GROUPS AND TOPOLOGY 17 If C ′ is not N -fat, then there exist w1 , . . . , wN ∈ B(κ) such that wi △ wj ∈ C for i < j ≤ N . We can assume that wN = 0 (otherwise, we translate all wi by wN ). Then wi ∈ C ⊂ B4 (κ), i < N . Let wi = αi1 △ αi2 △ αi3 △ αi4 for i < N and consider the 36-coloring of all pairs {wi , wj }, i < j < N , defined as follows. Since wi △ wj is a four-letter word, it follows that wi △ wj = β1 △ β2 △ β3 △ β4 , where βi ∈ κ. Two letters among β1 , β2 , β3 , β4 (say β1 and β2 ) occur in the word wi and the remaining two (β3 and β4 ) occur in wj . We assume that β1 < β2 and β3 < β4 . Let us denote the numbers of the letters β1 and β2 in wi (recall that the letters in wi are numbered in increasing order) by i′ and i′′ , respectively, and the numbers of the letters β3 and β4 in wj by j ′ and j ′′ . To the pair {wi , wj } we assign the quadruple (i′ , i′′ , j ′ , j ′′ ). The number of all possible quadruples is 36, so that this assignment is a 36-coloring. We choose N ≥ R36 (N ′ ) + 1 for N ′ as large as we need. Then there exist two pairs i′0 , i′′0 and j0′ , j0′′ and N ′ words win , where n ≤ N ′ and is < it for s < t, such that i′ = i′0 , i′′ = i′′0 , j ′ = j0′ , and j ′′ = j0′′ for any pair {wi , wj } with i, j ∈ {i1 , . . . , iN ′ } and i < j. Clearly, if N ′ ≥ 3, then we also have j0′ = i′0 and j0′′ = i′′0 . In the same manner, we can fix the position of the letters coming from wi and wj in the sum wi △ wj : to each pair {wis , wit }, s, t ∈ {1, . . . , N ′ }, s < t, we assign the numbers of the i′0 th and i′′0 th letters of wis in the word wis △ wit (recall that the letters of all words are numbered in increasing order); the positions of the letters of wit in wis △ wit are then determined automatically. There are six possible arrangements: 1, 2, 1, 3, 1, 4, 2, 3, 2, 4, and 3, 4. Thus, we have a 6-coloring of the symmetric square of the N ′ -element set {wi1 , . . . , wiN ′ }, and if N ′ ≥ R6 (3) (which we assume), then there exists a 3-element set {wk , wl , wm } homogeneous with respect to this coloring, i.e., such that all pairs of words from this set are assigned the same color. For definiteness, suppose that this is the color 1, 2; suppose also that i′0 = 1, i′′0 = 2, k < l < m, and wt = αt1 △ αt2 △ αt3 △ αt4 for t = k, l, m. Then wk , wl , wm ∈ C, wk △ wl = αk1 △ αk2 △ αl1 △ αl2 ∈ C, wl △ wm = m m k k m αl1 △ αl2 △ αm 1 △ α2 ∈ C, and wk △ wm = α1 △ α2 △ α1 △ α2 ∈ C. 
By the definition k k l l l l m k k of C we have c(α1 △ α2 ) 6= c(α1 △ α2 ), c(α1 △ α2 ) 6= c(α1 △ αm 2 ), and c(α1 △ α2 ) 6= m m c(α1 △ α2 ), which is impossible, because c takes only two values. The cases of other colors and other numbers i′0 and i′′0 are treated in a similar way. Thus, C ′ is N -fat and, therefore, contains (A △ A △ A △ A) ∩ B4 (κ) for some A ∈ U . Take any α ∈ A and consider the sets A′ = {β > α : c({α, β}) = {0}} and A′′ = {β > α : c({α, β}) = {1}}. One of these sets belongs to U , because U is uniform. For definiteness, suppose that this is A′ . By Theorem 13 U is 3-arrow. Therefore, there exists either an A′′ ⊂ A′ for which c([A′′ ]2 ) = {0} or β, γ, δ ∈ A′ , β < γ < δ, for which c([{β, γ, δ}]2 ) = {1}. In the former case, we are done. In the latter case, we have α, β, γ, δ ∈ A, α < β < γ < δ, c({β, γ}) = c({γ, δ}) = c({β, δ}) = 1, and c({α, β}) = c({α, γ}) = c({α, δ}) = 0 (by the definition of A′ ). Therefore, α △ β △ γ △ δ ∈ C, which contradicts the definition of A. (iii) ⇒ (ii) Suppose that U is a Ramsey ultrafilter on X and C is a κ-fat set in B(X). Take any n ∈ N and consider the coloring c : [X]2n → {0, 1} defined by ( 0 if {x1 , . . . , x2n } = x1 △ . . . △ x2n ∈ C, c({x1 , . . . , x2n }) = 1 otherwise. Since U is Ramsey, there exists either a set An ∈ U for which [A]2n ⊂ C or a set Y ⊂ X of cardinality κ for which [Y ]2n ∩ C = ∅. In the latter case, for Z = [Y ]n ⊂ B(X), we have (Z △ Z) ∩ C ⊂ {0}, which contradicts C being κ-fat. 18 OL’GA V. SIPACHEVA Hence the former case occurs, and C ∩ B2n (X) contains the trace [An ]2n ∩ B=2n (X) of the open subgroup h(An ∪ {∗}) △(An ∪ {∗})i of B(XF ). Thus, for each n ∈ N,Twe have found A1 , A2 , . . . , An ∈ F such that [Ai ]2i ∩ B=2i (X) ⊂ C. Let A = i≤n Ai . Then A ∈ U and [A]2i ∩ B=2i (X) ⊂ C for all i ≤ n. Hence C ∩ B2n (X) contains the trace on B2n (X) of the open subgroup h(A ∪ {∗}) △(A ∪ {∗})i of B(XU ) (recall that 0 ∈ C). This means that, for each n, C ∩ B2n (X) is a neighborhood of zero in the topology induced from B(XU ). If κ = ω, then B(XUS ) has the inductive limit topology with respect to the decomposition B(XU ) = n∈ω Bn (XF ), because F is Ramsey (see [4]). Therefore, in this case, C ∩ B(X) is a neighborhood of zero in the induced topology. If κ > ω, then the ultrafilter U is countably complete [8, Lemma 9.5 and Theorem 9.6], i.e., any countable intersection of elements Sof U belongs to U . Hence T A = n∈N An ∈ U , and h(A ∪ {∗}) △(A ∪ {∗})i ∩ n∈ω B2n (X) ⊂ C. Thus, C ∩ B(X) is a neighborhood of zero in the induced topology in this case, too. The implication (ii) ⇒ (i) is obvious.  Theorem 13 has the following purely algebraic corollary. Corollary 16 (p = c). Any Boolean group contains ω-fat sets which are not fat and ∆∗ -sets which are ∆∗k -sets for no k. Proof. Theorem 4.10 of [9] asserts that if p = c, then there exists an ultrafilter U on ω which is k-arrow for all k ∈ N but not Ramsey and, therefore, not ωarrow [9, Theorem 2.1]. By Theorem 13 the traces of all fat sets on B2 (ω) contain those of neighborhoods of zero in B(ωU ), and there exist ω-fat sets whose traces do not. This proves the required assertion for the countable Boolean group. The case of a group B(X) of uncountable cardinality κ reduces to the countable case by representing B(X) as B(κ) = B(ω) × B(κ); it suffices to note that a set of the form C × B(κ), where C ⊂ B(ω), is λ-fat in B(ω) × B(κ) for λ ≤ ω if and only if so is C in B(ω).  The author is unaware of where there exist ZFC examples of such sets in any groups. 
Acknowledgments
The author is very grateful to Evgenii Reznichenko and Anton Klyachko for useful discussions.
References
[1] W. H. Gottschalk and G. A. Hedlund, Topological Dynamics (Amer. Math. Soc., Providence, R.I., 1955).
[2] E. Reznichenko and O. Sipacheva, Discrete Subsets in Topological Groups and Countable Extremally Disconnected Groups, arXiv:1608.03546v2 [math.GN].
[3] V. Bergelson, N. Hindman, and R. McCutcheon, "Notions of size and combinatorial properties of quotient sets in semigroups," Topology Proc. 23, 23–60 (1998).
[4] O. Sipacheva, "Free Boolean topological groups," Axioms 4 (4), 492–517 (2015).
[5] G. Mokobodzki, "Ultrafiltres rapides sur N. Construction d'une densité relative de deux potentiels comparables," Séminaire Brelot–Choquet–Deny. Théorie du potentiel 12, 1–22 (1967–1968).
[6] A. R. D. Mathias, "A remark on rare filters," in Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday) (North-Holland, Amsterdam, 1975), Vol. 3, pp. 1095–1097.
[7] A. W. Miller, "There are no q-points in Laver's model for the Borel conjecture," Proc. Amer. Math. Soc. 78, 103–106 (1980).
[8] W. W. Comfort and S. Negrepontis, The Theory of Ultrafilters (Springer-Verlag, Berlin, 1974).
[9] J. E. Baumgartner and A. D. Taylor, "Partition theorems and ultrafilters," Trans. Amer. Math. Soc. 241, 283–309 (1978).
[10] S. Shelah, Proper and Improper Forcing (Springer-Verlag, Berlin, 1998).
[11] E. K. van Douwen, "The integers and topology," in Handbook of Set-Theoretic Topology (North-Holland, Amsterdam, 1984), pp. 111–167.
[12] N. Hindman and D. Strauss, Algebra in the Stone–Čech Compactification, 2nd ed. (De Gruyter, Berlin/Boston, 2012).
[13] V. Bergelson, H. Furstenberg, and B. Weiss, "Piecewise-Bohr sets of integers and combinatorial number theory," in Topics in Discrete Mathematics, Ed. by M. Klazar, J. Kratochvíl, M. Loebl, J. Matoušek, P. Valtr, and R. Thomas (Springer, Berlin, Heidelberg, 2006), pp. 13–37.
[14] H. Furstenberg, Recurrence in Ergodic Theory and Combinatorial Number Theory (Princeton Univ. Press, Princeton, N.J., 1981).
[15] J. T. Griesmer, "A remark on intersectivity and difference sets," private communication.
[16] R. Ellis and H. B. Keynes, "Bohr compactifications and a result of Følner," Israel J. Math. 12 (3), 314–330 (1972).
[17] M. Beiglböck, V. Bergelson, and A. Fish, "Sumset phenomenon in countable amenable groups," Adv. Math. 223, 416–432 (2010).
[18] T. Jech, Set Theory, 3rd millennium (revised) ed. (Springer, 2003).
Department of General Topology and Geometry, Lomonosov Moscow State University, Leninskie Gory 1, Moscow 119991, Russia
E-mail address: [email protected]
arXiv:1404.0352v1 [math.AG] 1 Apr 2014 CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS AND THE VANISHING OF THE HIGHER HERBRAND DIFFERENCE MARK E. WALKER Abstract. We develop a theory of “ad hoc” Chern characters for twisted matrix factorizations associated to a scheme X, a line bundle L, and a regular global section W ∈ Γ(X, L). As an application, we establish the vanishing, in certain cases, of hR c (M, N ), the higher Herbrand difference, and, ηcR (M, N ), the higher codimensional analogue of Hochster’s theta pairing, where R is a complete intersection of codimension c with isolated singularities and M and N are finitely generated Rmodules. Specifically, we prove such vanishing if R = Q/(f1 , . . . , fc ) has only isolated singularities, Q is a smooth k-algebra, k is a field of characteristic 0, the fi ’s form a regular sequence, and c ≥ 2. Such vanishing was previously established in the general characteristic, but graded, setting in [16]. Contents 1. Introduction 2. Chern classes for generalized matrix factorizations 2.1. The Jacobian complex 2.2. Matrix factorizations 2.3. Connections for matrix factorizations and the Atiyah class 2.4. Ad hoc Hochschild homology 2.5. Supports 2.6. Chern classes of matrix factorizations 2.7. Functorality of the Chern character and the top Chern class 3. Relationship with affine complete intersections 3.1. Euler characteristics and the Herbrand difference for hypersurfaces 3.2. Matrix factorizations for complete intersections 4. Some needed results on the geometry of complete intersections 5. The vanishing of Chern characters 5.1. The vanishing of the top Chern class 5.2. The Polishchuk-Vaintrob Riemann-Roch Theorem 5.3. Vanishing of the higher h and η Invariants Appendix A. Relative Connections A.1. The classical Atiyah class A.2. The vanishing of the classical Atiyah class and connections References Date: April 2, 2014. 1 2 4 5 7 10 12 14 15 17 18 18 20 21 24 24 26 27 29 29 31 32 2 MARK E. WALKER 1. Introduction This paper concerns invariants of finitely generated modules over an affine complete intersection ring R. Such a ring is one that can be written as R = Q/(f1 , . . . , fc ), where Q is a regular ring and f1 , . . . , fc ∈ Q form a regular sequence of elements. For technical reasons, we will assume Q is a smooth algebra over a field k. More precisely, this paper concerns invariants of the singularity category, Dsg (R), of R. This is the triangulated category defined as the Verdier quotient of Db (R), the bounded derived category of R-modules, by the sub-category of perfect complexes of R-modules. More intuitively, the singularity category keeps track of just the “infinite tail ends” of projective resolutions of finitely generated modules. Since R is Gorenstein, Dsg (R) is equivalent to M CM (R), the stable category of maximal Cohen-Macaulay modules over R. Thanks to a Theorem of Orlov [19, 2.1], the singularity category of R is equivalent to the singularity category of the hypersurface Y of Pc−1 = Proj Q[T1 , . . . , Tc ] Q P c−1 is cut out by the element W = i fi Ti ∈ Γ(PQ , O(1)). Since the scheme Pc−1 Q regular Dsg (Y ) may in turn be given by the homotopy category of “twisted matrix factorizations” associated to the triple (Pc−1 Q , O(1), W ) (see [21], [15], [18], [23], [2]). 
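To fix ideas, here is a minimal instance of this correspondence; it is an illustrative specialization (not an example taken from the text), with c = 2, Q = k[x, y], and the regular sequence f_1 = x^2, f_2 = y^3:
\[
  R = k[x,y]/(x^2, y^3), \qquad
  W = x^2 T_1 + y^3 T_2 \in \Gamma\bigl(\mathbb{P}^1_Q, \mathcal{O}(1)\bigr), \qquad
  \mathbb{P}^1_Q = \operatorname{Proj} Q[T_1, T_2],
\]
\[
  D_{\mathrm{sg}}(R) \;\simeq\; D_{\mathrm{sg}}(Y) \;\simeq\;
  hmf\bigl(\mathbb{P}^1_Q,\, \mathcal{O}(1),\, x^2 T_1 + y^3 T_2\bigr),
\]
where Y ⊂ P^1_Q is the hypersurface cut out by W. Questions about modules over the zero-dimensional complete intersection R are thus traded for questions about twisted matrix factorizations over the regular scheme P^1_Q.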
In general, if X is a scheme (typically smooth over a base field k), L is a line bundle on X, and W ∈ Γ(X, L) is regular global section of L, a twisted matrix factorization for (X, L, W ) consists of a pair of locally free coherent sheaves E0 , E1 on X and morphisms d1 : E1 → E0 , d0 : E0 → E1 ⊗OX L such that each composite d0 ◦ d1 and (d1 ⊗ idL ) ◦ d0 is multiplication by W . In the special case where X = Spec(Q) for a local ring Q, so that L is necessarily the trivial line bundle and W is an element of Q, a twisted matrix factorization is just a classical matrix factorization as defined by Eisenbud [10]. Putting these facts together, we get an equivalence X fi Ti ) Dsg (R) ∼ , O(1), = hmf (Pc−1 Q i between the singularity category Pof R and the homotopy category of twisted matrix factorizations for (Pc−1 , O(1), i fi Ti ). The invariants we attach to objects of Q Dsg (R) for a complete intersection ring R are actually defined, more generally, in terms of such twisted matrix factorizations. In detail, for a triple (X, L, W ) as above, let us assume also that X is smooth over a base scheme S and L is pulledback from S. In the main case of interest, namely X = Pc−1 for a smooth k-algebra Q c−1 Q, the base scheme S is taken to be Pk . In this situation, we define a certain hoc ad hoc Hochschild homology group HHad (X/S, L, W ) associated to (X, L, W ). 0 Moreover, to each twisted matrix factorization E for (X, L, W ) we attach a class hoc ch(E) ∈ HHad (X/S, L, W ), 0 called the ad hoc Chern character of E. We use the term “ad hoc” since we offer no justification that these invariants are the “correct” objects with these names given by a more theoretical framework. But, if X is affine and L is the trivial bundle, then our ad hoc Hochschild homology groups and Chern characters coincide with the usual notions found in [9], [8], [22], [7], [25], [17], [26], [20]. In particular, in this case, our definition of the Chern character of a matrix factorization is given by the “Kapustin-Li formula” [14]. CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 3 When the relative dimension n of X → S is even, we obtain from ch(E) a “top Chern class” ctop (E) ∈ H 0 (X, JW ( n2 )) where JW is the “Jacobian sheaf” associated to W . By definition, JW is the dW ∧− n−1 coherent sheaf on X given as the cokernel of the map ΩX/S ⊗ L−1 −−−−→ ΩnX/S define by multiplication by the element dW of Γ(X, Ω1X/S ⊗OX L). The top Chern class contains less information than the Chern character in general, but in the affine case with L trivial and under certain other assumptions, the two invariants coincide. The main technical result of this paper concerns the vanishing of the top Chern class: Theorem 1.1. Let k be a field of characteristic 0 and Q a smooth k-algebra of even dimension. If f1 , . . . , fc ∈ Q is a regular sequence of elements with c ≥ 2 such that the singular locus of R = Q/(f1 , . . . , fc ) is zero-dimensional, then ctop (E) = 0 ∈ H 0 (X, JW ( n2 )) P for all twisted matrix factorizations E of (Pc−1 Q , O(1), i fi Ti ). As an application of this Theorem, we prove the vanishing of the invariants ηc and hc for c ≥ 2 in certain cases. These invariants are defined for a pair of modules M and N over a complete intersection R = Q/(f1 , . . . , fc ) having the property that ExtiR (M, N ) and ToriR (M, N ) are of finite length for i ≫ 0. For example, if the non-regular locus of R is zero-dimensional, then every pair of finitely generated modules has this property. 
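As a toy illustration of the behaviour at issue, not drawn from the paper and relying on the standard expression (going back to Tate) for the Poincaré series of the residue field of a complete intersection, take R = k[x, y]/(x^2, y^2), so that c = 2 and the non-regular locus is a single closed point, and take M = N = k. Then
\[
  \sum_{i \ge 0} \dim_k \operatorname{Tor}^R_i(k,k)\, t^i
  \;=\; \frac{(1+t)^2}{(1-t^2)^2} \;=\; \frac{1}{(1-t)^2}
  \;=\; \sum_{i \ge 0} (i+1)\, t^i,
\]
so that length_R Tor^R_{2i}(k,k) = 2i + 1 and length_R Tor^R_{2i+1}(k,k) = 2i + 2. Both sequences are eventually polynomial in i of degree 1 = c − 1, in accordance with the general shape described next, and the two leading coefficients agree; consequently the invariant η_2(k,k) introduced below vanishes, consistently with Theorem 1.2 (in characteristic 0), and the same holds for h_2(k,k), since Ext^i_R(k,k) and Tor^R_i(k,k) have the same length here.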
In general, for such a pair of modules, the even and odd sequences of lengths of ExtiR (M, N ) and ToriR (M, N ) are governed, eventually, by polynomials of degree at most c − 1: there exist polynomials pev (M, N )(t) = ac−1 tc−1 + lower order terms podd (M, N )(t) = bc−1 tc−1 + lower order terms with integer coefficients such that 2i+1 lengthR Ext2i (M, N ) = podd (M, N )(i) R (M, N ) = pev (M, N )(i) and lengthR ExtR for i ≫ 0, and similarly for the Tor modules. Following Celikbas and Dao [4, 3.3], we define the invariant hc (M, N ) in terms of the leading coefficients of these polynomials: ac−1 − bc−1 hc (M, N ) := c2c The invariant ηc (M, N ) is defined analogously, using Tor modules instead of Ext modules; see [5]. If c = 1 (i.e., R is a hypersurface) the invariants h1 and η1 coincide (up to a factor of 12 ) with the Herbrand difference, defined originally by Buchweitz [1], and Hochster’s θ invariant, defined originally by Hochster [12]. In this case, ExtiR (M, N ) and TorR i (M, N ) are finite length modules for i ≫ 0 and the sequence of their lengths are eventually two periodic, and h1 and η1 are determined by the formulas R R R 2hR 1 (M, N ) = h (M, n) = length Ext2i (M, N ) − length Ext2i+1 (M, N ), i ≫ 0 and R 2η1R (M, N ) = θR (M, n) = length TorR 2i (M, N ) − length Tor2i+1 (M, N ), i ≫ 0. We can now state the main application of Theorem 1.1: 4 MARK E. WALKER Theorem 1.2. Let k be a field of characteristic 0 and Q a smooth k-algebra. If f1 , . . . , fc ∈ Q is a regular sequence of elements with c ≥ 2 such that the singular locus of R = Q/(f1 , . . . , fc ) is zero-dimensional, then hR c (M, N ) = 0 and ηcR (M, N ) = 0 for all finitely generated R-modules M and N . Acknowledgements. I am grateful to Jesse Burke, Olgur Celikbas, Hailong Dao, Daniel Murfet, and Roger Wiegand for conversations about the topics of this paper. 2. Chern classes for generalized matrix factorizations In this section, we develop ad hoc notions of Hochschild homology and Chern classes for twisted matrix factorizations. The connection with the singularity category for complete intersection rings will be explained more carefully later. Assumptions 2.1. Throughout this section, we assume • S is a Noetherian scheme. • p : X → S is a smooth morphism of relative dimension n. • LS is a locally free coherent sheaf of rank one on S. Let LX := p∗ LS , the pullback of LS along p. • W ∈ Γ(X, LX ) is global section of LX . Recall that the smoothness assumption means that p : X → S is flat and of finite type and that Ω1X/S is a locally free coherent sheaf on X of rank n. Assumptions 2.2. When discussing ad hoc Hochschild homology and Chern characters (see below), we will also be assuming: • p is affine and • n! is invertible in Γ(X, OX ). It is useful to visualize Assumptions 2.1 by a diagram. Let q : LS → SL denote the geometric line bundle whose sheaf of sections is LS ; that is, LS = Spec( i≥0 L−i S ). W Then we may interpret W as a morphism X −→ LS fitting into the commutative triangle X❈ ❈❈ ❈❈W p ❈❈ ❈!  S o q LS . The data (p : X → S, L, W ) determines a closed subscheme Y of X, defined by the vanishing of W . More formally, if z : S ֒→ LS denotes the zero section of the line bundle LS , then Y is defined by the pull-back square /X Y W  S z  / LS . We often suppress the subscripts and write L to mean LS or LX . For brevity, we write F (n) to mean F ⊗OX L⊗n X if F is a coherent sheaf or a complex of such on X. 
In our primary application, L will in fact by the standard very ample line bundle O(1) on projective space, and so this notation is reasonable in this case. The case of primary interest for us is described in the following example: CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 5 Example 2.3. Suppose k is a field, Q is a smooth k-algebra of dimension n, and f1 , . . . , fc is regular sequence of elements of Q. Let X = Pc−1 = Proj Q[T1 , . . . , Tc ], Q S = Pc−1 = Proj k[T1 , . . . , Tc ] k and define p:X →S to be the map induced by k ֒→ Q. Finally, set LS := OS (1) and X fi Ti ∈ Γ(X, LX ). W := i Then the triple (p : X → S, L, W ) satisfies all the hypotheses in Assumptions 2.1, and moreover p is affine. The subscheme cut out by W is Y = Proj Q[T1 , . . . , Tc ]/W in this case. 2.1. The Jacobian complex. We construct a complex of locally free sheaves on X that arises from the data (p : X → S, LS , W ). Regarding W as a map W : X → LS as above, we obtain an induced map W ∗ Ω1LS /S → Ω1X/S on cotangent bundles. There is a canonical isomorphism Ω1LS /S ∼ = q ∗ L−1 S and hence −1 ∗ 1 ∼ an isomorphism L−1 → Ω1X/S , and . Using this, we obtain a map L W Ω = X X LS /S upon tensoring with L we arrive at the map: dW : OX → Ω1X/S ⊗OX LX =: Ω1X/S (1). Note that Λ·X/S (Ω1X/S ⊗OX L) ∼ = M ΩqX/S (q) q so that we may regard the right-hand side as a sheaf of graded-commutative gradedOX -algebras. Since dW is a global section of this sheaf lying in degree one, we may form the Jacobian complex with differential given as repeated multiplication by dW :   dW dW dW dW (2.4) Ω·dW := OX −−→ Ω1X/S (1) −−→ Ω2X/S (2) −−→ · · · −−→ ΩnX/S (n) . We index this complex so that ΩjX/S (j) lies in cohomological degree j. Example 2.5. Let us consider the case where X and S are affine and the line bundle L is the trivial one. That is, suppose k is a Noetherian ring, A is a smooth k-algebra, and f ∈ A, and set S = Spec(k), X = Spec(A), L = OS , and W = f ∈ Γ(X, OX ) = A. (The map p : X → S is given by the structural map k → A.) Then the Jacobian complex is formed by repeated multiplication by the degree one element df ∈ Ω1A/k :   df ∧− df ∧− df ∧− df ∧− Ω·df = A −−−→ Ω1A/k −−−→ Ω2A/k −−−→ · · · −−−→ ΩnA/k . If we further specialize to the case A = k[x1 , . . . , xn ], then we may identify Ω1A/k with An by using the basis dx1 , . . . , dxn . Then Ω·df is the A-linear dual of the ∂f ∂f Koszul complex on the sequence ∂x , . . . , ∂x of partial derivatives of f . 1 n 6 MARK E. WALKER ∂f ∂f , . . . , ∂x form an If k is a field and f has only isolated singularities, then ∂x 1 n A-regular sequence. In this case, the Jacobian complex has cohomology only in degree n and k[x1 , . . . , xn ] H n (Ω·df ) ∼ dx1 ∧ · · · ∧ dxn . = ∂f ∂f i h ∂x1 , . . . , ∂x n Example 2.6. With the notation of Example 2.3, the Jacobian complex is the complex of coherent sheaves on Pc−1 associated to the following complex of graded Q modules over the graded ring Q[T ] := Q[T1 , . . . , Tc ]: dW dW dW 0 → Q[T ] −−→ Ω1Q/k [T ](1) −−→ · · · −−→ ΩnQ/k [T ](n) → 0 Here, dW = df1 T1 +· · ·+dfc T1 , a degree one element in the graded module Ω1Q/k [T ]. P ∂f Further specializing to Q = k[x1 , . . . , xn ], we have dfi = j ∂x dxj and j   ∂f   ∂fc 1 · · · ∂x T1 ∂x1 1  ∂f1 ∂f    ∂x2 · · · ∂xc2    T2   · . dW =   ..   ..   .. .  . .  ∂f1 ∂xn ··· ∂fc ∂xn Tc It is convenient to think of Ω·dW as representing a “family” of complexes of the type occurring in Example 2.5 indexed by the Q points of Pc−1 Q . That is, given a tuple a = (a1 , . . . 
, ac ) of elements of Q that generated the unit ideal, we have an associated Q-point ia : Spec(Q) ֒→ Pc−1 of Pc−1 Q Q . Pulling back along ia (and ∗ identifying ia O(1) with the trivial bundle in the canonical way) gives the Jacobian P complex of Example 2.5 for (Spec(Q) → Spec(k), O, i ai fi ). We are especially interested in the cokernel (up to a twist) of the last map in the Jacobian complex (2.4), and thus give it its own name: Definition 2.7. The Jacobi sheaf associated to (p : X → S, L, W ), written J (X/S, L, W ) or just JW for short, is the coherent sheaf on X defined as   dW ∧− n−1 JW = J (X/S, L, W ) := coker ΩX/S (−1) −−−−→ ΩnX/S . In other words, JW := Hn (Ω·dW )(−n), where Hn denote the cohomology of a complex in the abelian category of coherent sheaves on X. Example 2.8. If S = Spec(k), k is a field, X = Ank = Spec(k[x1 , . . . , xn ]), L = OX and f a polynomial, then k[x1 , . . . , xn ]  dx1 ∧ · · · ∧ dxn . Jf =  ∂f ∂f , . . . , ∂x1 ∂xn Example 2.9. With the notation of 2.3, JW is the coherent sheaf associated to the graded Q[T ]-module   dW n−1 coker ΩQ/k [T ](−1) −−→ ΩnQ/k [T ] . If we further specialize to the case Q = k[x1 , . . . , xn ] = k[x], then using dx1 , . . . , dxn as a basis of Ω1Q/k gives us that JW is isomorphic to the coherent sheaf associated CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS to ⊕n coker k[x, T ](−1) (T1 ,...,Tc )  ∂fi ∂xi 7 ! t −−−−−−−−−−−→ k[x, T ] . 2.2. Matrix factorizations. We recall the theory of “twisted matrix factorizations” from [21] and [3]. Suppose X is any Noetherian scheme, L is a line bundle on X and W ∈ Γ(X, L) any global section of it. (The base scheme S is not needed for this subsection, and we write L for LX .) A twisted matrix factorization for (X, L, W ) consists of the data E = (E0 , E1 , d0 , d1 ) where E0 and E1 are locally free coherent sheaves on X and d0 : E0 → E1 (1) and d1 : E1 → E0 are morphisms such that d0 ◦ d1 and d1 (1) ◦ d0 are each given by multiplication by W . (Recall that E1 (1) denotes E1 ⊗OX L.) We visualize E as a diagram of the form   d1 d0 E0 (1) E1 −→ E = E0 −→ or as the “twisted periodic” sequence d0 (−1) d1 (−1) d d0 (1) d d1 (1) 1 0 E0 (1) −−−→ E1 (1) −−−→ · · · . E1 −→ · · · −−−−→ E1 (−1) −−−−→ E0 −→ Note that the composition of any two adjacent arrows in this sequence is multiplication by W (and hence this is not a complex unless W = 0). A strict morphism of matrix factorizations for (X, L, W ), say from E to E′ = ′ (E0 , E1′ , d′0 , d′1 ) consists of a pair of morphisms E0 → E0′ and E1 → E1′ such that the evident pair of squares both commute. Matrix factorizations and strict morphisms form an exact category, which we write as mf (X, L, W ), for which a sequence of morphisms is declared exact if it is so in each degree. If V ∈ Γ(X, L) is another global section, there is a tensor product pairing − ⊗mf − : mf (X, L, W ) × mf (X, L, V ) → mf (X, L, W + V ). In the special case L = OX and V = W = 0, this tensor product is the usual tensor product of Z/2-graded complexes of OX -modules, and the general case is the natural “twisted” generalization of this. We refer the reader to [2, §7] for a more precise definition. There is also a sort of internal Hom construction. Given W, V ∈ Γ(X, L) and objects E ∈ mf (X, L, W ) and F ∈ mf (X, L, V ), there is an object Hommf (E, F) ∈ mf (X, L, V − W ). Again, if L = OX and W = V = 0, this is the usual internal Hom-complex of a pair of Z/2-graded complexes of OX -modules, and we refer the reader to [2, 2.3] for the general definition. 
If V = W , then Hommf (E, F) belongs to mf (X, L, 0) and hence is actually a complex. In general, we refer to an object of mf (X, L, 0) as a twisted two-periodic complex, since it is a complex of the form d0 (−1) d d d1 (1) d0 (1) 0 1 E1 (1) −−−→ E0 (1) −−−→ · · · . E0 −→ · · · −−−−→ E1 −→ In other words, such an object is equivalent to the data of a (homologically indexed) complex E· of locally free coherent sheaves and a specified isomorphism E· (1) ∼ = E· [2]. (We use the convention that E· [2]i = Ei−2 .) There is also a notion of duality. Given E ∈ mf (X, L, W ), we define E∗ ∈ mf (X, L, −W ) as Hommf (E, OX ) where here OX denotes the object of mf (X, L, 0) given by the twisted two-periodic complex · · · → 0 → OX → 0 → L → 0 → L⊗2 → · · · . 8 MARK E. WALKER (In the notation introduced below, E∗ = Hommf (E, Fold OX ).) See [2, 7.5] for a more explicit formula. These constructions are related by the natural isomorphisms (cf. [2, §7]) Hommf (E, F) ∼ = E∗ ⊗mf F and Hommf (E ⊗mf G, F) ∼ = Hommf (E, Hommf (G, F)). The construction of Hommf (E, F) is such that we have a natural identification Homstrict (E, F) = Z0 Γ(X, Hommf (E, F)). Here, Γ(X, Hommf (E, F)) is the complex of abelian groups obtained by regarding Hommf (E, F) as an unbounded complex of coherent sheaves on X and applying the global sections functor Γ(X, −) degree-wise, obtaining a complex of Γ(X, OX )modules, and Z0 denotes the cycles lying in degree 0 in this complex. Let P(X) denote the category of complexes of locally free coherent sheaves on X and let P b (X) denote the full subcategory of bounded complexes. There is a “folding” functor, Fold : P b (X) → mf (X, L, 0), defined by taking direct sums of the even and odd terms in a complex. In detail, given a complex (P, dP ) ∈ P b (X), let M M P2i+1 (i). P2i (i) and Fold(P )1 = Fold(P )0 = i i The required map Fold(P )1 → Fold(P )0 maps the summand P2i+1 (i) to the summand P2i (i) via dP (i) and Fold(P )0 → Fold(P )1 (1) maps P2i (i) to P2i−1 (i) also via dP (i). There is also an “unfolding” functor Unfold : mf (X, L, 0) → P(X) which forgets the two periodicity. In detail, Unfold(E1 → E0 → E1 (1))2i = E0 (−i) and Unfold(E1 → E0 → E1 (1))2i+1 = E1 (−i) with the evident maps. These functors are related by the following adjointness properties: Given E ∈ mf (X, L, 0) and P ∈ P b (X), there are natural isomorphisms ∼ HomP(X) (P, Unfold E) Hommf (X,L,0) (Fold P, E) = and HomP(X) (Unfold E, P) ∼ = Hommf (X,L,0) (E, Fold P). (These functors do not technically form an adjoint pair, since Unfold takes values in P(X), not P b (X).) Example 2.10. If E is a locally free coherent sheaf on X, we write E[0] for the object in P(X) with E in degree 0 and 0’s elsewhere. Then Fold(E[0]) = (0 → E → 0) = (· · · → E(−1) → 0 → E → 0 → E(1) → · · · ) indexed so that E lies in degree 0. Letting E[j] = E[0][j], we have Fold(E[2i]) = (0 → E(i) → 0) = (· · · → E(i − 1) → 0 → E(i) → 0 → E(i + 1) → · · · ) CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 9 with E(i) in degree 0, and Fold(E[2i + 1]) = (E(i) → 0 → E(i + 1)) = (· · · → E(i − 1) → 0 → E(i) → 0 → E(i + 1) → · · · ) with E(i) in homological degree 1. If P ∈ P b (X) is a bounded complex and E ∈ mf (X, L, W ) is a twisted matrix factorization, we define their tensor product to be P ⊗ E := Fold(P ) ⊗mf E ∈ mf (X, L, W ). Example 2.11. We will use this construction, in particular, for the complex P = Ω1X/S (1)[−1] consisting of Ω1X/S (1) concentrated in homological degree −1. 
Since Ω1 lies in odd degree, the sign conventions give us   − id ⊗d1 (1) − id ⊗d0 Ω1X/S ⊗ E1 (1) −−−−−−−→ Ω1X/S ⊗ E0 (1) . Ω1X/S (1)[−1] ⊗ E = Ω1X/S ⊗ E0 −−−−−→   dW ∧− We will also apply this construction to the complex P := OX −−−−→ Ω1X/S (1) indexed so that OX lies in degree 0 to form the matrix factorization dW ∧− (OX −−−−→ Ω1X/S (1)) ⊗OX E which is given explicitly as E1 ⊕ Ω1X/S ⊗E0   d1 0  dW − id ⊗d0 −−−−−−−−−−−−−→  E0 ⊕ Ω1X/S ⊗E1 (1)   d0 0  dW − id ⊗d1 −−−−−−−−−−−−−→  E1 (1) ⊕ Ω1X/S ⊗E0 (1) . Building on the identity Homstrict (E, F) = Z0 Γ(X, Hommf (E, F)) we obtain a natural notion of homotopy: Two strict morphisms are homotopic if their difference lies in the image of the boundary map in the complex Γ(X, Hommf (E, F)). That is, the group of equivalence classes of strict morphisms up to homotopy from E to F is H0 Γ(X, Hommf (F, E)). We define [mf (X, L, W )]naive , the “naive homotopy category”, to be the category with the same objects as mf (X, L, W ) and with morphisms given by strict maps modulo homotopy; that is, Hom[mf ]naive (E, F) := H0 Γ(X, Hommf (F, E)). Just as the homotopy category of chain complexes is a triangulated category, so too is the category [mf (X, L, W )]naive ; see [21, 1.3] or [3, 2.5] for details. The reason for the pejorative “naive” in this definition is that this notion of homotopy equivalence does not globalize well: two morphisms can be locally homotopic without being globally so. Equivalently, an objects can be locally contractible with being globally so. To rectify this, we define the homotopy category of matrix factorizations for (X, L, W ), written hmf (X, L, W ), to be the Verdier quotient of [mf (X, L, W )]naive by the thick subcategory consisting of objects that are locally contractible. (The notion appears to be originally due to Orlov, but see also [21, 3.13] where this category is referred to as the “derived category of matrix factorizations”.) We list the properties of hmf (X, L, W ) needed in the rest of this paper: 10 MARK E. WALKER (1) There is a functor mf (X, L, W ) → hmf (X, L, W ), and it sends objects that are locally contractible to the trivial object. (2) If α : E → F is strict morphism and it is locally null-homotopic, then α is sent to the 0 map in hmf (X, L, W ). (3) When W = 0, hmf (X, L, 0) is the Verdier quotient of [mf (X, L, 0]naive obtained by inverting quasi-isomorphisms of twisted two-periodic complexes. (4) At least in certain cases, the hom sets of hmf (X, L, W ) admit a more explicit description. In general, for objects E, F ∈ mf (X, L, W ) there is a natural map Homhmf (E, F) → H0 (X, Unfold Hommf (E, F)) where H0 (X, −) denotes sheaf hyper-cohomology [3, 3.5]. This map is an isomorphism when (a) W = 0 or (b) X is projective over an affine base scheme and L = OX (1) is the standard line bundle; see [3, 4.2]. 2.3. Connections for matrix factorizations and the Atiyah class. See Appendix A for recollections on the notion of a connection for a locally free coherent sheaf on a scheme. Much of the material in this subsection represents a generalization to the non-affine case of constructions found in [26], which were in turn inspired by constructions in [9]. Definition 2.12. With (p : X → S, L, W ) as in Assumptions 2.1, given E ∈ mf (X, L, W ), a connection on E relative to p, written as ∇E or just ∇, is a pair of connections relative to p ∇1 : E1 → Ω1X/S ⊗OX E1 and ∇0 : E0 → Ω1X/S ⊗OX E0 . There is no condition relating ∇E and d1 , d0 . 
Since each ∇i is p∗ OS -linear and L is pulled back from S, we have induced connections ∇1 (j) : E1 (j) → Ω1X/S ⊗OX E1 (j) and ∇0 (j) : E0 (j) → Ω1X/S ⊗OX E0 (j) for all j. Example 2.13. Suppose S = Spec k and X = Spec Q are affine where Q is a smooth k-algebra of dimension n, L = OX so that W ∈ Q, and thecomponents of  A E are free Q-modules. Upon choosing bases, we may represent E as Qr −=== − Qr B where A, B are r × r matrices with entries in Q such that AB = BA = W Ir . The choice of basis leads to an associated “trivial” connection ⊕r  ∇ = d : Qr → Ω1Q/k ⊗Q Qr = Ω1Q/k given by applying exterior differentiation to the components of a vector. As noted in Appendix A, if p is affine, then every vector bundle on X admits a connection relative to p and so every matrix factorization admits a connection in this case. CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 11 Definition 2.14. Given a connection ∇ relative to p for a twisted matrix factorization E ∈ mf (X, L, W ), the associated Atiyah class is defined to be the map AtE,∇ : E → Ω1X/S (1)[−1] ⊗ E given as the “commutator” [∇, δE ]. In detail, it is given by the pair of maps ∇1 (1) ◦ δ − (id ⊗δ) ◦ ∇0 : E0 → Ω1 (1) ⊗ E1 and ∇0 ◦ δ − (id ⊗δ) ◦ ∇1 : E1 → Ω1 ⊗ E0 . AtE,∇ is not a morphism of matrix factorizations in general, but by Lemma A.6 we have that AtE,∇ is OX -linear. Example 2.15. Keep the notations and assumptions of Example 2.13, so that   A r E = Qr −=== −Q . B Then 1 Ω (1)[−1] ⊗ E = 1  Ω1Q/k ⊕r ⊕r   −B 1 −==  ==− ΩQ/k −A and the map At : E → Ω (1)[−1] ⊗ E obtained by choosing the trivial connections is represented by the following diagram: A Qr dA  ⊕r  1 ΩQ/k / Qr B dB −B  ⊕r  1 / Ω Q/k / Qr dA −A  ⊕r  1 / Ω Q/k Here dA and dB denote the r×r matrices with entires in Ω1Q/k obtained by applying d entry-wise to A and B. This is not a map of matrix factorizations since the squares do not commute. Indeed, the differences of the compositions around these squares are (dB)A+B(dA) and (dA)B + A(dB). Since AB = BA = W , these expression both equal dW , a fact which is relevant for the next construction. Definition 2.16. Given a connection ∇ for a matrix factorization E , define   dW ∧− ΨE,∇ : E → OX −−−−→ Ω1X/S (1) ⊗OX E dW ∧− as idE + At∇ . (Here, OX −−−−→ Ω1X/S (1) is the complex with OX in degree 0.) In other words, ΨE,∇ is the morphism whose composition with the canonical projection   dW ∧− OX −−−−→ Ω1X/S (1) ⊗OX E −−։ E is the identity and whose composition with the canonical projection   dW ∧− OX −−−−→ Ω1X/S (1) ⊗OX E −−։ Ω1 (1)[−1] ⊗OX E is AtE,∇ . Lemma 2.17. The map ΨE,∇ is a (strict) morphism of matrix factorizations, and it is independent in the naive homotopy category [mf (X, L, W )]naive (and hence the homotopy category too) of the choice of connection ∇. 12 MARK E. WALKER Proof. The proofs found in [26], which deal with the case where X is affine and L is trivial, apply nearly verbatim. The homotopy relating ΨE,∇ and ΨE,∇′ for two different connections ∇ and ∇′ on E is given by the map ∇ − ∇′ : E → Ω1X/S ⊗ E, which is OX -linear by Lemma A.6.  Example 2.18. Continuing with Examples 2.13 and 2.15, the map Ψ in this case is represented by the diagram A Qr   1 dA   Qr  ⊕ r Ω1Q/k  A dW B / Qr 0 −B  /  1 dB / Qr   Qr  ⊕ r Ω1Q/k  B dW  0 −A /   1 dA Qr  ⊕ r Ω1Q/k . This diagram commutes, confirming that Ψ is indeed a morphism of matrix factorizations. 2.4. Ad hoc Hochschild homology. For an integer j ≥ 0, we define the twisted two-periodic complex (j) ΩdW :=   (j−2)dW (j−1)dW jdW 2dW dW −→ ΩjX/S (j)) . 
Fold OX −−→ Ω1X/S (1) −−−−−→ Ω2X/S (2) −−−−−→ · · · −−→ Ωj−1 X/S (j − 1) − (j) Explicitly, ΩdW is the twisted periodic complex  1   1    ΩX/S ΩX/S (1) OX  ⊕   ⊕   ⊕   3   3   2  ΩX/S (1) ΩX/S (2) ΩX/S (1)             · · · −→  ⊕  −→  4 ⊕  −→  ⊕  −→ · · · Ω5 (2)  Ω5 (3) Ω (2)  X/S   X/S   X/S   ⊕   ⊕   ⊕        . .. .. . . . . with Li=⌊ 2j ⌋ i=0 Ω2i X/S (i) in degree 0 and Li=⌊ j−1 2 ⌋ i=0 Ω2i+1 X/S (i) in degree 1. (j) The coefficients appearing in the maps of ΩdW are necessarily to make the following statement hold true. (We omit its straight-forward proof.) Lemma 2.19. The pairings ΩiX/S (i) ⊗OX ΩlX/S (l) → Ωi+l X/S (i + l) defined by exterior product induce a morphism (j) (l) (j+l) ΩdW ⊗mf ΩdW → ΩdW . in mf (X, L, 0). CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 13 If j! is invertible in Γ(X, OX ) (for example, if X is a scheme over a field k with char(k) = 0 or char(k) > j), then there is an isomorphism   dW dW dW (j) j ΩdW ∼ = Fold OX −−→ Ω1X/S (1) −−→ · · · −−→ ΩX/S (j)) 1 n(n−1)···(n−i) given by collection of isomorphisms ΩiX/S (i) −−−−−−−−−→ ΩiX/S (i), 0 ≤ i ≤ j. In particular, if n! is invertible in Γ(X, OX ), then there is an isomorphism (n) ΩdW ∼ = Fold Ω·dW . (Recall that n is the relative dimension of X of S, and hence is the rank of Ω1X/S . It follows that Ωm X/S = 0 for m > n.) Definition 2.20. Let p : X → S, L, W be as in Assumptions 2.1 and assume also that n! is invertible in Γ(X, OX ). We define the (degree 0) ad hoc Hochschild homology of mf (X, L, W ) relative to S to be hoc (X/S, L, W ) := H0 (X, Unfold Fold Ω·dW ). HHad 0 hoc As we will see in the next section, HHad (X/S, L, W ) is the target of what we 0 term the “ad hoc Chern character” of a twisted matrix factorization belonging to mf (X, L, W ). We use the adjective “ad hoc” for two reasons. The first is that we offer here no justification that this definition deserves to be called “Hochschild homology”. See, however, [22, 24] for the affine case and [20] for the non-affine case with L = OX . The second reason is that whereas the support of every object of mf (X, L, W ) is contained in the singular locus of the subscheme Y cut out by W (as will be justified in the next subsection), the twisted two-periodic complex Fold Ω·dW is not so supported in general. A more natural definition of Hochshild homology would thus be given by H0Z (X, Unfold Fold Ω·dW ), where Z is the singular locus of Y and H0Z refers to hypercomology with supports (i.e., local cohomology). Since this more sensible definition of Hochschild homology is not necessary for our purposes and hoc only adds complications to what we do, we will stick with using HHad . 0 Recall that the Jacobi sheaf is defined as   dW n−1 JW = J (X/S, L, W ) = coker ΩX/S (−1) −−→ ΩnX/S so that there is a canonical map Ω·dW → Hn (Ω·dW )[−n] = JW (n)[−n]. From it we obtain the map Fold(Ω·dW ) → Fold(JW (n)[−n]) ∼ = Fold(JW ( n2 )). Applying Unfold and using the canonical map Unfold Fold(JW ( n2 )) → JW ( n2 ) results in a map Unfold Fold Ω·dW → JW ( n2 ). Finally, applying H0 (X, −) yields the map (2.21) hoc HHad (X/S, L, W ) → H 0 (X, JW ( n2 )). 0 This map will play an important role in the rest of this paper. Let us observe that in certain situations, it is an isomorphism: 14 MARK E. WALKER Proposition 2.22. Assume S = Spec k for a field k, X = Spec(Q) for a smooth k-algebra Q, and hence LX = OX and W ∈ Q. 
If the morphism of smooth varieties W : X → A1 has only isolated critical points and n is even, then the canonical map HHad 0 hoc (X/S, W ) → H 0 (X, JW ( n2 )) = ΩnQ/k n−1 dW ∧ ΩQ/k is an isomorphism. Proof. These conditions ensure that the complex of Q-modules Ω·dW is exact except on the far right where it has homology J (n). The result follows since X is affine.  Example 2.23. Assume S = Spec(k) for a field k, X = Ank = Spec k[x1 , . . . , xn ], and the morphism f : Ank → A11 associated to a given polynomial f ∈ k[x1 , . . . , xn ] has only isolated critical points. Then hoc HHad (X/S, f ) ∼ = H 0 (X, Jf ( n2 )) = 0 ΩnQ/k df ∧ n−1 ΩQ/k = k[x1 , . . . , xn ] dx1 ∧ · · · ∧ dxn . ∂f ∂f ( ∂x , . . . , ∂x ) 1 n 2.5. Supports. We fix some notation. For a Noetherian scheme Y , the non-regular locus of Y is Nonreg(Y ) := {y ∈ Y | the local ring OY,y is not a regular local ring}. Under mild additional hypotheses (e.g., Y is excellent), Nonreg(Y ) is a closed subset of Y . Assume g : Y → S is a morphism of finite type. Recall that g is smooth near y ∈ Y if and only if it is flat of relative dimension n near y and the stalk of Ω1Y /S at y is a free OY,y -module of rank n. We define singular locus of g : Y → S to be the subset Sing(g) = Sing(Y /S) = {y ∈ Y | g is not smooth near y}. At least for a flat morphism g : Y → S of finite type, the singular locus of g is a closed subset of Y by the Jacobi criterion. When S = Spec(k) for a field k and there is no danger of confusion, we write Sing(Y ) instead of Sing(Y / Spec(k)). If Y is finite type over a field k, then Nonreg(Y ) ⊆ Sing(Y ) = Sing(Y / Spec(k)), and equality holds if k is perfect. Proposition 2.24. Assume X is a Noetherian scheme, L is a line bundle on X, and W is a regular global section of L. Let Y ⊆ X be the closed subscheme cut out by W . Then for every E, F ∈ mf (X, L, W ), the twisted two-periodic complex Hommf (E, F) is supported Nonreg(Y ). Proof. For x ∈ X, we have Hommf (E, F)x ∼ = Hommf (Ex , Fx ) where the hom complex on the right is for the category mf (Spec(OX,x ), Lx , Wx ). By choosing a trivialization OX ∼ = Lx , we may identify this category with mf (Q, f ) for a regular local ring Q and non-zero-divisor f . The assertion thus becomes that if f is non-zero-divisor in regular local ring Q and Q/f is also regular, then the complex Hommf (E, F) is acyclic for all objects E, F ∈ mf (Q, f ). This holds since the ∗ d Q/f (coker(E), coker(F)); cohomology modules of the complex Hommf (E, F) are Ext see Theorem 3.2 below.  CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 15 Example 2.25. Suppose k is a field, Q is a smooth k-algebra, m is a maximal ideal of Q, and f1 , . . . , fc ∈ m form a regular sequence such that the non-regular locus of the ring R := Q/(f1 , . . . , fc ) is {m}. Let X = Pc−1 = Proj Q[T1 , . . . , Tc ], Q P L = OX (1), and W = i fi Ti . Then c−1 Nonreg(Y ) = Pc−1 Q/m ⊆ PQ . 2.6. Chern classes of matrix factorizations. If E ∈ mf (X, L, W ) is a matrix factorization that admits a connection (for example, if p is affine), we define strict morphisms (j) (j) ΨE,∇ : E → ΩdW ⊗ E, for j ≥ 0, recursively, by letting (1) Ψ(0) = idE , ΨE,∇ = ΨE,∇ , (j) and, for j ≥ 2, defining ΨE,∇ as the composition of (j−1) idΩ(1) ⊗ΨE,∇ (1) (j−1) E −−−−−−−−−→ ΩdW ⊗ ΩdW ∧⊗id (j) E ⊗ E −−−−→ ΩdW ⊗ E. (j) By Lemma 2.17, ΨE,∇ is independent up to homotopy of the choice of ∇. Example 2.26. 
Continuing with Examples 2.13, 2.15 and 2.18, the map Ψ(∧j) arising from the trivial connection is in degree zero given by j factors }| { z (1 + dA) · · · (1 + dA)(1 + dB) for j even and by j factors }| { z (1 + dB) · · · (1 + dA)(1 + dB) for j odd, where the product is taken in the ring Matr×r (Ω·Q/k ). By the duality enjoyed by the category mf (X, L, Q), we obtain a morphism of objects of mf (X, L, 0) of the form (j) (j) Ψ̃E : Endmf (E) = E∗ ⊗mf E → ΩdW . Moreover, as a morphism in the category [mf (X, L, 0)]naive , it is independent of the choice of connection. Take j = n and assume n! is invertible, so that we have an isomorphism (n) ΩdW ∼ = Fold Ω·dW . We obtain the morphism Endmf (E) → Fold Ω·dW in mf (X, L, 0) and hence a map of unbounded complexes (2.27) chE : Unfold Endmf (E) → Unfold Fold Ω·dW . id E The identity morphism on E determines a morphism OX −−→ Unfold Endmf (E), 0 that is, an element of H (X, Unfold Endmf (E)). Definition 2.28. Under Assumptions 2.1 and 2.2, define the ad hoc Chern character of E ∈ mf (X, L, W ) relative to p : X → S to be hoc ch(E) = chE (idE ) ∈ HHad (X/S, L, W ). 0 16 MARK E. WALKER hoc Recall that when n is even, we have the map HHad (X/S, L, W ) → H 0 (X, JW ( n2 )) 0 defined in (2.21), and hence we obtain an invariant of E in H 0 (X, JW ( n2 )). Since this invariant will be of crucial importance in the rest of this paper, we give it a name: Definition 2.29. Under Assumptions 2.1 and 2.2 and with n even, define the ad hoc top Chern class of E ∈ mf (X, L, W ) relative to p : X → S to be the element ctop (E) ∈ H 0 (X, JW ( n2 )) given as the image of ch(E) under (2.21). Example 2.30. With the notation and assumptions of Example 2.13, we have   dW −−→ Ω2i+1 M ker Ω2i Q/k Q/k hoc hoc  . HHad (X/S, L, W ) = HHad (Spec(Q)/ Spec(k), O, W ) = 0 0 2i−1 dW 2i i im ΩQ/k −−→ ΩQ/k and, when n is even, H 0 (X, JW ( n2 )) is the last summand: H 0 (X, JW ( n2 )) = ΩnQ/k n−1 dW ∧ ΩQ/k . We have n factors }| { z 2 tr(dAdB · · · dB). n! Here dA and dB are r × r matrices with entries in Ω1 − Q/k and the product is occurring in the ring Matr×r (Ω·Q/k ), and the map tr : Matr×r (ΩnQ/k ) → ΩnQ/k is the usual trace map. Our formula for ctop (E) coincides with the “Kapustin-Li” formula [14] found in many other places, at least up to a sign: In the work of Polishchuck-Vaintrob [22, n Cor 3.2.4], for example, there is an additional a factor of (−1)( 2 ) in their formula for the Chern character. ctop (E) = Since the top Chern class of E ∈ mf (X, L, W ) is defined as the image of idE under the composition ch E Unfold Endmf (E) −−→ Unfold Fold Ω·dW → JW ( n2 ) of morphisms of complexes of coherent sheaves on X, the following result is an immediate consequence of Proposition 2.24. Recall that for a quasi-coherent sheaf F on X and a closed subset Z of X, HZ0 (X, F ) denotes the kernel of H 0 (X, F ) → H 0 (X \ Z, F ). Proposition 2.31. For any E ∈ mf (X, L, W ), its top Chern class is supported on Nonreg(Y ): 0 n ctop (E) ∈ HNonreg(Y ) (X, JW ( 2 )). Remark 2.32. It is also follows from what we have established so far that the Chern character of any E is supported on Nonreg(Y ) in the sense that it lifts canonically to an element of H0Nonreg(Y ) (X, Unfold Fold Ω·dW ), where in general H0Z denote local hyper-cohomology of a complex of coherent sheaves. CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 17 2.7. Functorality of the Chern character and the top Chern class. The goal of this subsection is to prove Chern character is functorial in S. 
Throughout, we suppose Assumptions 2.1 and 2.2 hold. Let i : S ′ → S be a morphism of Noetherian schemes and write L′ , X ′ , W ′ for the evident pull-backs: X ′ = X ×S S ′ , L′S = i∗ (LS ), and W ′ = i∗ (W ). The typical application will occur when i is the inclusion of a closed point of S. There is a functor (2.33) i∗ : mf (X, L, W ) → mf (X ′ , L′ , W ′ ) induced by pullback along the map X ′ → X induced from i. There is a canonical isomorphism i∗ Ω·dW ∼ = Ω·dW ′ and hence an hence an induced map hoc hoc i∗ : HHad (X/S, L, W ) → HHad (X ′ /S ′ , L′ , W ′ ). 0 0 Proposition 2.34. For E ∈ mf (X, L, W ), we have hoc i∗ (ch(E)) = ch(i∗ (E)) ∈ HHad (X ′ /S ′ , L′ , W ′ ). 0 Proof. Recall that the Chern character of E is determined by a map of matrix factorizations E → E ⊗ Ω·dW which is itself determined by a choice of connection Ej → Ej ⊗OX Ω1X/S for j = 0, 1. Since our connections are OS -linear, we have the pull-back connection i∗ Ej → i∗ (Ej ) ⊗OX ′ Ω1X ′ /S ′ , j = 0, 1. which we use to define the map i∗ E → i∗ E ⊗ Ω·dW ′ . Using also that i∗ End(E) = End(i∗ E), it follows that the square End(E) / Fold Ω· dW  Ri∗ End(i∗ E)  / Ri∗ Fold Ω· ′ dW commutes. Applying H0 (X, Unfold(−)) gives the commutative square H0 (X, Unfold End(E)) hoc / HHad (X/S, L, W ) 0 i∗  H0 (X ′ , Unfold End(i∗ E)) i∗  hoc / HHad (X ′ /S ′ , L′ , W ′ ). 0 The result follows from the fact that i∗ : H0 (X, Unfold End(E)) → H0 (X ′ , Unfold End(i∗ E)) sends idE to idi∗ E .  Recall from (2.21) that, when n is even, there is a natural map hoc HHad (X/S, L, W ) → Γ(X, JX/S,L,W ( n2 )) 0 Since i∗ JX/S,L,W ∼ = JX ′ /S ′ ,L′ ,W ′ , we obtain a map i∗ : Γ(X, JX/S,L,W ( n2 )) → Γ(X ′ , JX ′ /S ′ ,L′ ,W ′ ( n2 )). 18 MARK E. WALKER Corollary 2.35. When n is even, for any E ∈ mf (X, L, W ), we have i∗ (ctop (E)) = ctop (i∗ (E)) ∈ Γ(X, JX ′ /S ′ ,L′ ,W ′ ( n2 )). Proof. This follows from Proposition 2.34 and the fact that hoc HHad (X/S, L, W ) 0 / Γ(X, JX/S,L,W ( n2 )) i∗ i∗   / Γ(X ′ , JX ′ /S ′ ,L′ ,W ′ ( n2 )) hoc HHad (X ′ /S ′ , L′ , W ′ ) 0 commutes.  3. Relationship with affine complete intersections We present some previously known results that relate twisted matrix factorizations with the singularity category of affine complete intersections. We start with some basic background. For any Noetherian scheme X, we write Db (X) for the bounded derived category of coherent sheaves on X and Perf(X) for the full subcategory of perfect complexes — i.e., those complexes of coherent sheaves on X that are locally quasi-isomorphic to bounded complexes of finitely generated free modules. We define Dsg (X) to be the Verdier quotient Db (X)/ Perf(X). For a Noetherian ring R, we write Dsg (R) for Dsg (Spec(R)). The category Dsg (X) is triangulated. For a pair of finitely generated R-modules M and N , we define their stable Ext-modules as i d (M, N ) := HomD (R) (M, N [i]), Ext R sg where on the right we are interpreting M and N [i] as determining complexes (and hence objects of Dsg (R)) consisting of one non-zero component, lying in degrees 0 and −i, respectively. Note that since Db (R) → Dsg (R) is a triangulated functor by construction and the usual ExtR -modules are given by HomDb (R) (M, N [i]), there is a canonical map i d R (M, N ) ExtiR (M, N ) → Ext that is natural in both variables. Proposition 3.1. [1, 1.3] For all finitely generated R-modules M and N , the natural map i d (M, N ) Exti (M, N ) → Ext R R is an isomorphism for i ≫ 0. 3.1. Euler characteristics and the Herbrand difference for hypersurfaces. 
Throughout this subsection, assume Q is a regular ring and f ∈ Q is a non-zero-divisor, and let R = Q/f. Given an object $E = (E_1 \underset{\beta}{\overset{\alpha}{\rightleftarrows}} E_0) \in mf(Q, f)$, define coker(E) to be $\operatorname{coker}(E_1 \xrightarrow{\alpha} E_0)$. The Q-module coker(E) is annihilated by f and we will regard it as an R-module.
Theorem 3.2 (Buchweitz). [1] If R = Q/f where Q is a regular ring and f is a non-zero-divisor, there is an equivalence of triangulated categories
$$hmf(Q, f) \xrightarrow{\ \cong\ } D_{sg}(R)$$
induced by sending a matrix factorization E to coker(E), regarded as an object of $D_{sg}(R)$. In particular, we have isomorphisms
$$\operatorname{Ext}^i_{hmf}(E, F) := \operatorname{Hom}_{hmf}(E, F[i]) \cong \widehat{\operatorname{Ext}}{}^i_R(\operatorname{coker}(E), \operatorname{coker}(F)), \text{ for all } i, \quad\text{and}$$
$$\operatorname{Ext}^i_{hmf}(E, F) \cong \operatorname{Ext}^i_R(\operatorname{coker}(E), \operatorname{coker}(F)), \text{ for } i \gg 0.$$
Since the category $hmf(Q_{\mathfrak p}, f)$ is trivial for all primes $\mathfrak p$ containing f such that $R_{\mathfrak p}$ is regular, $\operatorname{Hom}^i_{hmf}(E, F)$ is an R-module supported on Nonreg(R), for all i. If we assume Nonreg(R) is a finite set of maximal ideals, then $\operatorname{Hom}^i_{hmf}(E, F)$ has finite length as an R-module, allowing us to make the following definition.
Definition 3.3. Assume Q is a regular ring and f ∈ Q is a non-zero-divisor such that Nonreg(R) is a finite set of maximal ideals, where R := Q/f. For E, F ∈ mf(Q, f), define the Euler characteristic of (E, F) to be
$$\chi(E, F) = \operatorname{length} \operatorname{Hom}^0_{hmf}(E, F) - \operatorname{length} \operatorname{Hom}^1_{hmf}(E, F).$$
In light of Theorem 3.2, if M and N are the cokernels of E and F, then
$$\chi(E, F) = h^R(M, N), \tag{3.4}$$
where the right-hand side is the Herbrand difference of the R-modules M and N, defined as
$$h^R(M, N) = \operatorname{length} \widehat{\operatorname{Ext}}{}^0_R(M, N) - \operatorname{length} \widehat{\operatorname{Ext}}{}^1_R(M, N) = \operatorname{length} \operatorname{Ext}^{2i}_R(M, N) - \operatorname{length} \operatorname{Ext}^{2i+1}_R(M, N), \quad i \gg 0.$$
Lemma 3.5. Suppose φ : Q → Q′ is a flat ring homomorphism between regular rings, f is a non-zero-divisor in Q, and f′ := φ(f) is a non-zero-divisor of Q′. Assume Nonreg(Q/f) = {m} and Nonreg(Q′/f′) = {m′} for maximal ideals m, m′ satisfying the condition that mQ′ is m′-primary. Then, for all E, F ∈ mf(Q, f),
$$\chi_{mf(Q,f)}(E, F) = \lambda \cdot \chi_{mf(Q',f')}(E \otimes_Q Q', F \otimes_Q Q'),$$
where λ := length_{Q′}(Q′/mQ′).
Proof. Let R = Q/f and R′ = Q′/f′. The induced map R → R′ is flat and hence for any pair of R-modules M and N, we have an isomorphism
$$\operatorname{Ext}^i_R(M, N) \otimes_R R' \cong \operatorname{Ext}^i_{R'}(M \otimes_R R', N \otimes_R R')$$
of R′-modules. For i ≫ 0, $\operatorname{Ext}^i_R(M, N)$ is supported on {m} and $\operatorname{Ext}^i_{R'}(M \otimes_R R', N \otimes_R R')$ is supported on {m′}. It follows that
$$\operatorname{length}_{R'} \operatorname{Ext}^i_{R'}(M \otimes_R R', N \otimes_R R') = \operatorname{length}_{R'}\bigl(\operatorname{Ext}^i_R(M, N) \otimes_R R'\bigr) = \lambda \operatorname{length}_R \operatorname{Ext}^i_R(M, N).$$
Hence $h^{R'}(M', N') = \lambda\, h^R(M, N)$, where M′ = M ⊗_R R′ and N′ = N ⊗_R R′, and the result follows from (3.4). □
3.2. Matrix factorizations for complete intersections. Using a Theorem of Orlov [19, 2.1], one may generalize Theorem 3.2 to complete intersection rings. The precise statement is the next Theorem. Versions of it are found in [21], [15], [18], [23]; the one given here is from [2, 2.11].
Theorem 3.6. Assume Q is a regular ring of finite Krull dimension and f1, ..., fc is a regular sequence of elements of Q. Let R = Q/(f1, ..., fc), define $X = \mathbb{P}^{c-1}_Q = \operatorname{Proj} Q[T_1, \dots, T_c]$ and set $W = \sum_i f_i T_i \in \Gamma(X, \mathcal O(1))$. There is an equivalence of triangulated categories
$$D_{sg}(R) \xrightarrow{\ \cong\ } hmf(X, \mathcal O(1), W).$$
The isomorphism of Theorem 3.6 has a certain naturality property that we need. Suppose a1, ..., ac is a sequence of elements of Q that generate the unit ideal, and let $i : \operatorname{Spec}(Q) \hookrightarrow X$ be the associated closed immersion. Then we have a functor
i∗ : hmf (X, O(1), W ) → hmf (Q, i ∗ ∗ Technically, the right hand side should P be hmf (Spec(Q), i O(1), i (W )), but it is canonically isomorphic to hmf (Q, i aP i fi ) because there is a canonical isomorphism i∗ O(1) ∼ i∗ (W ) to i ai fi . = Q that sends P Also, the quotient map Q/( i ai fi ) −−։ R has finite projective dimension, since, locally on Q, it is given by modding out by a regular sequence of length c − 1. We thus an induced functor X ai f i ) res : Dsg (R) → Dsg (Q/ i on singularity categories, given by restriction of scalars. Proposition 3.7. With the notation above, the square ∼ = Dsg (R) / hmf (Pc−1 , O(1), W ) Q res Dsg (Q/  P i i∗ ∼ = ai f i ) / hmf (Q, commutes up to natural isomorphism. P i ai f i ) Proof. To prove this, we need to describe the equivalence of Theorem 3.6 explicitly. Let Y ⊆ Pc−1 denote the closed subscheme cut out by W . Then there are a pair of Q equivalences (3.8) ∼ = ∼ = Dsg (R) − → Dsg (Y ) ← − hmf (X, O(1), W ), ∼ = which give the equivalence Dsg (R) − → hmf (X, O(1), W ) of the Theorem. The left-hand equivalence of (3.8) was established by Orlov [19, 2.1] (see also [2, A.4]), and it is induced by the functor sending a bounded complex of finitely generated R-modules M · to β∗ π ∗ (M · ), where π : Pc−1 → Spec(R) is the evident R projection and β : Pc−1 ֒→ Y is the evident closed immersion. R The right-hand equivalence of (3.8) is given by [3, 6.3], and it sends a twisted α β → E1 (1)) to coker(α) (which may be regarded → E0 − matrix factorization E = (E1 − as a coherent sheaf on Y .) CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS For brevity, set g = the diagram (3.9) P i ai fi . The Proposition will follow once we establish that β∗ ◦π ∗ ∼ = / Dsg (Y ) o s ss ss k∗ s ss ∗  yss Lj Dsg (Q/g) o ∼ Dsg (R) 21 ∼ = hmf (Pc−1 Q , O(1), W ) i∗  hmf (Q, g) = commutes up to natural isomorphism, where k : Spec(R) ֒→ Spec(Q/g) is the canonical closed immersion and j : Spec(Q/g) → Y is the restriction of i. (The closed immersion j is locally a complete intersection and thus of finite flat dimension. It follows that it induces a morphism on singularity categories, which we write Lj ∗ .) The right-hand square of (3.9) commutes since for a matrix factorization E = (E1 → E0 → E1 (1)) we have j ∗ coker(E1 → E0 ) ∼ = coker(i∗ E1 → i∗ E0 ) and j ∗ coker(E1 → E0 ) ∼ = Lj ∗ coker(E1 → E0 ). The first isomorphism is evident. The second holds since j has finite flat dimension and coker(E1 → E0 ) is an “infinite syzygy”; that is, coker(E1 → E0 ) has a right resolution by locally free sheaves on Y , namely γ ∗ E1 (1) → γ ∗ E0 (1) → γ ∗ E1 (2) → · · · , where γ : Y ֒→ X is the canonical closed immersion. To show the left-hand triangle commutes, we consider the Cartesian square of closed immersions c−1 / PR O β j iR O Spec(R) / /Y O k O / Spec(Q/g), where iR denote the restriction of i to Spec(R). This square is Tor-independent Y (i.e., TorO i (β∗ OPc−1 , j∗ OSpec(Q/g) ) = 0 for i > 0), from which it follows that R Lj ∗ ◦ β∗ = k∗ ◦ Li∗R . Since Li∗R ◦ π ∗ is the identity map, the left-hand triangle of (3.9) commutes.  4. Some needed results on the geometry of complete intersections Let k be a field and Q a smooth k-algebra of dimension n. Set V = Spec(Q), a smooth affine k-variety. Given f1 , . . . , fc ∈ Q, we have the associated morphism f := (f1 , . . . , fc ) : V → Ack of smooth k-varieties, induced by the k-algebra map k[t1 , . . . , tc ] → Q sending ti to fi . Define the Jacobian of f , written Jf , to be the map Jf : Qc → Ω1Q/k 22 MARK E. WALKER given by (df1 , . . 
. . , dfc ). (More formally, Jf is the map $f^*\Omega^1_{\mathbb{A}^c_k/k} \to \Omega^1_{V/k}$ induced by f, but we identify $f^*\Omega^1_{\mathbb{A}^c_k/k}$ with $Q^c$ by using the basis $dt_1, \dots, dt_c$ of $\Omega^1_{\mathbb{A}^c_k/k}$. Also, Jf is really the dual of what is often called the Jacobian of f.) For example, if Q = k[x1, ..., xn], then Jf is given by the n × c matrix (∂fi/∂xj), using the basis dx1, ..., dxn of $\Omega^1_{Q/k}$.
For a point x ∈ V, let κ(x) denote its residue field and define
$$J_f(x) = J_f \otimes_Q \kappa(x) : \kappa(x)^c \to \Omega^1_{Q/k} \otimes_Q \kappa(x) \cong \kappa(x)^n$$
to be the map on finite dimensional κ(x)-vector spaces induced by Jf. Define
$$V_j := \{x \in V \mid \operatorname{rank} J_f(\kappa(x)) \le j\} \subseteq V, \tag{4.1}$$
so that we have a filtration
$$\emptyset = V_{-1} \subseteq V_0 \subseteq \cdots \subseteq V_c = V.$$
Note that the set $V_{c-1}$ is the singular locus of the map f. For example, if Q = k[x1, ..., xn], then Vj is defined by the vanishing of the (j + 1) × (j + 1) minors of the matrix (∂fi/∂xj).
Example 4.2. Let k be a field of characteristic not equal to 2, let Q = k[x1, ..., xn] so that $V = \mathbb{A}^n_k$, let c = 2, and define $f_1 = \frac{1}{2}(x_1^2 + \cdots + x_n^2)$ and $f_2 = \frac{1}{2}(a_1 x_1^2 + \cdots + a_n x_n^2)$, where a1, ..., an ∈ k are such that ai ≠ aj for all i ≠ j. Thus f is a morphism between two affine spaces: $f : \mathbb{A}^n_k \to \mathbb{A}^2_k$. For the usual basis, the Jacobian is the matrix
$$J_f = \begin{pmatrix} x_1 & a_1 x_1 \\ x_2 & a_2 x_2 \\ \vdots & \vdots \\ x_n & a_n x_n \end{pmatrix}.$$
Then V0 is just the origin in $V = \mathbb{A}^n_k$ and V1 is the union of the coordinate axes.
In the previous example, we have dim(Vj) ≤ j for all j < c. This will turn out to be a necessary assumption for many of our results. Let us summarize the notations and assumptions we will need in much of the rest of this paper:
Assumptions 4.3. Unless otherwise indicated, from now on we assume:
(1) k is a field.
(2) V = Spec(Q) is a smooth affine k-variety of dimension n.
(3) f1, ..., fc ∈ Q is a regular sequence of elements. We write $f : V \to \mathbb{A}^c_k$ for the flat morphism given by (f1, ..., fc). (Note that c ≤ n.)
(4) The singular locus of $f^{-1}(0) = \operatorname{Spec}(Q/(f_1, \dots, f_c))$ is zero-dimensional — say $\operatorname{Sing}(f^{-1}(0)) = \{v_1, \dots, v_m\}$ where the vi's are closed points of V.
(5) dim(Vj) ≤ j for all j < c, where Vj is defined in (4.1).
Observe that the singular locus of $f^{-1}(0)$ may be identified with $V_{c-1} \cap f^{-1}(0)$. For the results in this paper, we will be allowed to shrink V around the singular locus of $f^{-1}(0)$. In this situation, if char(k) = 0, the last assumption in the above list is unnecessary, as we now prove:
Lemma 4.4. If all of Assumptions 4.3 hold except possibly (5) and char(k) = 0, then for a suitably small affine open neighborhood V′ of the set {v1, ..., vm}, we have dim(V′j) ≤ j for all j ≤ c − 1.
Proof. Since the induced map $\operatorname{Sing}(f) \to \mathbb{A}^c$ has finite type, the set
$$B := \{y \in \operatorname{Sing}(f) \mid \dim\bigl(\operatorname{Sing}(f) \cap f^{-1}(f(y))\bigr) > 0\}$$
is a closed subset of Sing(f) by the upper semi-continuity of fiber dimensions [EGAIV, 13.1.3]. Since $\operatorname{Sing}(f^{-1}(0)) = f^{-1}(0) \cap \operatorname{Sing}(f)$ is a finite set of closed points, it follows that {v1, ..., vm} ∩ B = ∅. Thus V′ := V \ B is an open neighborhood of the vi's and $\operatorname{Sing}(f|_{V'}) = \operatorname{Sing}(f) \cap V' = V'_{c-1}$ is quasi-finite over $\mathbb{A}^c$. Shrinking V′ further, we may assume it is affine.
By [11, III.10.6], we have $\dim(f(V'_j)) \le j$ for all j. Since $V'_{c-1} \to f(V'_{c-1})$ is quasi-finite, so is $V'_j \to f(V'_j)$ for all j ≤ c − 1, and hence dim(V′j) ≤ j for all j ≤ c − 1. □
Example 4.5. Suppose char(k) = p > 0, Q = k[x1, ..., xc], and let $f_1(x) = x_1^p, \dots, f_c(x) = x_c^p$.
Then Jf : Qc → Ω1Q/k ∼ = Qc is the zero map, and yet p p k[x1 , . . . , xc ]/(x1 , . . . , xc ) has an isolated singularity at (x1 , . . . , xc ). This shows that the characteristic 0 hypothesis is necessary in Lemma 4.4. Given the set-up in Assumptions 4.3, we let S = Pc−1 = Proj k[T1 , . . . , Tc ] k and X = Pc−1 ×k V = Proj Q[T1 , . . . , Tc ], k and we define W = f1 T1 + · · · + fc Tc ∈ Γ(X, O(1)). As before, we have a map dW : OX → Ω1X/S (1), or, equivalently, a global section dW ∈ Γ(X, Ω1X/S (1)). Recall that Ω1S/X (1) is the coherent sheaf associated to the graded module Ω1Q/k [T1 , . . . , Tc ](1), and dW may be given explicitly as the degree one element dW = df1 T1 + · · · + dfc Tc ∈ Ω1Q/k [T1 , . . . , Tc ], or, in other words, (4.6)   T1  T2    dW = Jf ·  .  .  ..  Tc Proposition 4.7. With Assumptions 4.3, dW is a regular section of Ω1X/S (1). Proof. Since X = Pc−1 ×k V is smooth of dimension n + c − 1 and Ω1X/S is locally k free of rank n, it suffices to prove the subscheme Z cut out by dW has dimension at most c − 1. Using (4.6), we see that the fiber of Z → V over a point x ∈ V is a linear subscheme of Pc−1 κ(x) of dimension c − 1 − rank(Jf (x)). That is, if x ∈ Vj \ Vj−1 , then the fiber over Z → Y over y has dimension c − 1 − j. (This includes the case j = c in the sense that the fibers over Vc \ Vc−1 are empty.) Since dim(Vj ) ≤ j if j < c by assumption, it follows that dim(Z) ≤ c − 1.  24 MARK E. WALKER Corollary 4.8. If Assumptions 4.3 hold, then the Jacobian complex Ω·dW is exact everywhere except in the right-most position, and hence determines a resolution of JW (n) by locally free coherent sheaves. We will need the following result in the next section. Proposition 4.9. Under Assumptions 4.3, there is an open dense subset U of Pc−1 such that for every k-rational point [a1 : · · · : ac ] ∈ U , the singular locus of k the morphism X ai fi : V → A1k i has dimension 0. Proof. Let Z be the closed subvariety of Pc−1 ×k V defined by the vanishing of k dW (see (4.6)). The fiber of the projection map π : Z → Pc−1 over a k-rational k point [a : · · · : a ] may be identified with the singular locus of the morphism 1 c P 1 a f : V → A . The claim is thus that there is a open dense subset U of Pc−1 over i i k i k which the fibers of π have dimension zero. Such a U exists because dim(Z) ≤ c − 1, as we established in the proof of Proposition 4.7. (U may be taken to be the complement of π(B) where B is the closed subset {z ∈ Z | dim π −1 π(z) ≥ 1} of Z.)  5. The vanishing of Chern characters In this section, we prove that the top Chern class of a twisted matrix factorization vanishes in certain situations. We combine this with a Theorem of PolishchukVaintrob to establish the vanishing of hc and ηc under suitable hypotheses. 5.1. The vanishing of the top Chern class. The following theorem forms the key technical result of this paper. Theorem 5.1. If Assumptions 4.3 hold and in addition Pn is even, char(p) ∤ n!, c−1 and c ≥ 2, then ctop (E) = 0 for any E ∈ mf (PQ , O(1), i fi Ti ). Proof. The key point is that, under Assumptions 4.3 with c ≥ 2, we have (5.2) HP0c−1 ×{v1 ,...,vm } (Pc−1 ×k V, JW (i)) = 0 k for all i < n. To prove this, since HP0c−1 ×{v1 ,...,vm } (Pc−1 ×k V, JW (i)) = m M HP0c−1 ×{vi } (Pc−1 ×k V, JW (i)) i=1 and since we may replace V = Spec(Q) by any open neighborhood of vi without affecting the Q-module HP0c−1 ×{vi } (Pc−1 ×k V, JW (i)), we may assume m = 1 and that there exists a regular sequence of elements x1 , . . . 
, xn ∈ m that generate the maximal ideal m of Q corresponding to v = v1 . Write C := C(x1 , . . . , xn ) for the “augmented Cech complex”   M  1  M 1 1 Q → → ··· → Q , Q Q→ xi xi xj x1 · · · xn i,j i CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 25 with Q in cohomological degree 0. Equivalently C = C(x1 ) ⊗Q · · · ⊗Q C(xn ), where h i c−1 1 , we have C(xi ) = (Q ֒→ Q xi ). For any coherent sheaf F on PQ HP0c−1 ×{v} (Pc−1 ×k V, F ) = Γ(Pc−1 ×k V, H0 (F ⊗Q C)). (5.3) The complex C is exact in all degrees except in degree n, and we set E = H n (C). Explicitly, i h 1 Q x1 ···x n h i. E= 1 Σi Q x1 ···xˆi ···xn (The localization of E at m is an injective hull of the residue field, but we do not ∼ need this fact.) Thus we have a quasi-isomorphism C − → E[−n]. By Corollary 4.8, there is a quasi-isomorphism ∼ Ω·dW (−n)[n] − → JW . Since C is a complex of flat modules, the map ∼ → JW ⊗Q C Ω·dW (−n)[n] ⊗Q C − ∼ is also a quasi-isomorphism. Combining this with the quasi-isomorphism C − → E[−n] gives an isomorphism in the derived category Ω· (−n) ⊗Q E ∼ = JW ⊗ C. dW For any i we obtain an isomorphism H0 (JW (i) ⊗ C) ∼ = H0 (Ω·dW (i − n) ⊗Q E) of quasi-coherent sheaves. We thus obtain from (5.3) the isomorphism  Pc−1 ×k V, JW (i) ∼ (5.4) Γ c−1 = Γ Pc−1 ×k V, H0 (Ω· Pk ×{v} k dW (i k But H0 (Ω·dW (i − n) ⊗Q E) is the kernel of  − n) ⊗Q E) dW ∧− OPc−1 (i − n) ⊗Q E −−−−→ Ω1Pc−1 /Pc−1 (i − n + 1) ⊗Q E. Q Q k Since c − 1 ≥ 1, the coherent sheaf OPc−1 (i − n) has no global section when i < n, Q and hence  Γ Pc−1 ×k V, H0 (Ω·dW (i − n) ⊗Q E) = 0, k which establishes (5.2). P Let Y be the closed subscheme of X = Pc−1 × V cut out by W = i fi Ti . k c−1 We claim Sing(Y /k) ⊆ P × {v , . . . , v }. Indeed, Sing(Y /k) is defined by the 1 m k P 1 d(f )T = 0 and f = 0, i = 1, . . . c, where d(f ) ∈ Ω . The equations i i i i i Q/k first equation cuts out a subvariety contained in Pc−1 × V , where, recall, Vc−1 k c−1 k denotes the singular locus of f : V → Ack . The equations fi = 0, i = 1, . . . c, cut out Pc−1 ×k f −1 (0). But Vc−1 ∩ f −1 (0) = Sing(f −1 (0)) = {v1 , . . . , vm }. k Since Nonreg(Y ) ⊆ Sing(Y /k), we deduce from (5.2) that c−1 0 ×k V, JW (i)) = 0. HNonreg(Y ) (Pk The vanishing of ctop (E) now follows from Proposition 2.31. Corollary 5.5. If • k is a field of characteristic 0,  26 MARK E. WALKER • Q is a smooth k-algebra of even dimension, • f1 , . . . , fc ∈ Q forms a regular sequence of elements with c ≥ 2, and • the singular locus of Q/(f1 , . . . , fc ), is zero-dimensional, P then ctop (E) = 0 for all E ∈ mf (Pc−1 Q , O(1), i fi Ti ). Proof. Let {v1 , . . . , vm } be the singular locus of Q/(f1 , . . . , fc ) and let V ′ = Spec(Q′ ) be an affine open neighborhood of it such that dim(Vi′ ) ≤ i for all i ≤ c − 1. The existence of such a V ′ is given by Lemma 4.4. The result follows from the (proof of the) Theorem, since the map X X fi Ti ) fi Ti ) → HP0c−1 ×{v ,...,v } (Pc−1 HP0c−1 ×{v ,...,v } (Pc−1 Q′ , O, Q , O, k 1 m i k 1 m i is an isomorphism.  5.2. The Polishchuk-Vaintrob Riemann-Roch Theorem. We recall a Theorem of Polishchuk-Vaintrob that relates the Euler characteristic of (affine) hypersurfaces with isolated singularities to Chern characters. This beautiful theorem should be regarded as a form of a “Riemann-Roch” theorem. Since PolishchukVaintrob work in a somewhat different setting that we do, we begin by recalling their notion of a Chern character. For a field k and integer n, define Q̂ := k[[x1 , . . . 
. . , xn]], a power series ring in n variables, and let $f \in \widehat{Q}$ be such that the only non-regular point of $\widehat{Q}/f$ is its maximal ideal. Define $\widehat{\Omega}^\cdot_{\widehat{Q}/k}$ to be the exterior algebra on the free $\widehat{Q}$-module $\widehat{\Omega}^1_{\widehat{Q}/k} = \widehat{Q}\,dx_1 \oplus \cdots \oplus \widehat{Q}\,dx_n$. In other words, $\widehat{\Omega}^\cdot_{\widehat{Q}/k} = \Omega^\cdot_{k[x_1,\dots,x_n]/k} \otimes_{k[x_1,\dots,x_n]} \widehat{Q}$. Finally, define
$$\widehat{\mathcal J}(\widehat{Q}/k, f) = \operatorname{coker}\Bigl(\widehat{\Omega}^{n-1}_{\widehat{Q}/k} \xrightarrow{\ df \wedge -\ } \widehat{\Omega}^n_{\widehat{Q}/k}\Bigr).$$
Note that
$$\widehat{\mathcal J}(\widehat{Q}/k, f) \cong \frac{k[[x_1, \dots, x_n]]}{\bigl(\tfrac{\partial f}{\partial x_1}, \dots, \tfrac{\partial f}{\partial x_n}\bigr)}\; dx_1 \wedge \dots \wedge dx_n.$$
Definition 5.6. With the notation above, given a matrix factorization $E \in mf(\widehat{Q}, f)$, upon choosing bases, it may be written as $E = (\widehat{Q}^r \underset{B}{\overset{A}{\rightleftarrows}} \widehat{Q}^r)$ for r × r matrices A, B with entries in $\widehat{Q}$. When n is even, we define the Polishchuk-Vaintrob Chern character of E to be
$$\operatorname{ch}_{PV}(E) = (-1)^{\binom{n}{2}} \operatorname{tr}\bigl(\underbrace{dA \wedge dB \wedge \cdots \wedge dB}_{n \text{ factors}}\bigr) \in \widehat{\mathcal J}(\widehat{Q}/k, f).$$
Theorem 5.7 (Polishchuk-Vaintrob). Assume k is a field of characteristic 0 and $f \in \widehat{Q} = k[[x_1, \dots, x_n]]$ is a power series such that $R := \widehat{Q}/f$ has an isolated singularity (i.e., $R_{\mathfrak p}$ is regular for all $\mathfrak p \ne \mathfrak m$). If n is odd, then χ(E, F) = 0 for all E, F ∈ mf($\widehat{Q}$, f). If n is even, there is a pairing ⟨−, −⟩ on $\widehat{\mathcal J}(\widehat{Q}, f)$, namely the residue pairing, such that for all E, F ∈ mf($\widehat{Q}$, f), we have
$$\chi(E, F) = \langle \operatorname{ch}_{PV}(E), \operatorname{ch}_{PV}(F) \rangle.$$
In particular, χ(E, F) = 0 if either $\operatorname{ch}_{PV}(E) = 0$ or $\operatorname{ch}_{PV}(F) = 0$.
It is easy to deduce from their Theorem the following slight generalization of it, which we will need to prove Theorem 5.11 below.
Corollary 5.8. Assume k is a field of characteristic 0, Q is a smooth k-algebra of dimension n, and f ∈ Q is a non-zero-divisor such that the singular locus of R := Q/f is zero-dimensional. Given E ∈ mf(Q, f), if either n is odd, or n is even and $c_{top}(E) = 0$ in $\mathcal J(Q/k, f) = \Omega^n_{Q/k}/df \wedge \Omega^{n-1}_{Q/k}$, then χ(E, F) = 0 for all F ∈ mf(Q, f).
Proof. We start by reducing to the case where R has just one singular point m and it is k-rational. We have
$$\chi_{mf(Q,f)}(E, F) = \sum_{\mathfrak m} \chi_{mf(Q_{\mathfrak m},f)}(E_{\mathfrak m}, F_{\mathfrak m})$$
where the sum ranges over the singular points of R. If $\overline{k}$ is an algebraic closure of k, then for each such m we have
$$\operatorname{Hom}^*_{hmf(Q_{\mathfrak m}, f)}(E_{\mathfrak m}, F_{\mathfrak m}) \otimes_k \overline{k} \cong \operatorname{Hom}^*_{hmf(Q_{\mathfrak m} \otimes_k \overline{k},\, f \otimes 1)}(E_{\mathfrak m} \otimes_k \overline{k}, F_{\mathfrak m} \otimes_k \overline{k})$$
and hence
$$\chi_{mf(Q_{\mathfrak m} \otimes_k \overline{k},\, f \otimes 1)}(E_{\mathfrak m} \otimes_k \overline{k}, F_{\mathfrak m} \otimes_k \overline{k}) = [Q/\mathfrak m : k] \cdot \chi_{mf(Q_{\mathfrak m}, f)}(E_{\mathfrak m}, F_{\mathfrak m}).$$
It thus suffices to prove the Corollary for a suitably small affine open neighborhood of each maximal ideal m of $Q \otimes_k \overline{k}$ at which $Q \otimes_k \overline{k}/f$ is singular. In this case, a choice of a regular sequence of generators x1, ..., xn of the maximal ideal of $Q_{\mathfrak m}$ allows us to identify the completion of Q along m with $\widehat{Q} = k[[x_1, \dots, x_n]]$. By Lemma 3.5,
$$\chi_{mf(Q,f)}(E, F) = \chi_{mf(\widehat{Q},f)}(\widehat{E}, \widehat{F})$$
where $\widehat{E} = E \otimes_Q \widehat{Q}$ and $\widehat{F} = F \otimes_Q \widehat{Q}$. Moreover, under the evident map $\mathcal J(Q_{\mathfrak m}/k, f) \to \widehat{\mathcal J}(\widehat{Q}/k, f)$, $c_{top}(E)$ is sent to a non-zero multiple of $\operatorname{ch}_{PV}(\widehat{E})$ by Example 2.30. The result now follows from the Polishchuk-Vaintrob Theorem. □
5.3. Vanishing of the higher h and η Invariants. We apply our vanishing result for ctop to prove that the invariants hc and ηc vanish for codimension c ≥ 2 isolated singularities in characteristic 0, under some mild additional hypotheses. First, we use the following lemma to prove the two invariants are closely related. I thank Hailong Dao for providing me with its proof.
Lemma 5.9 (Dao). Let R = Q/(f1, ..., fc) for a regular ring Q and a regular sequence of elements f1, ..., fc ∈ Q with c ≥ 1. Assume M and N are finitely generated R-modules and that M is MCM.
If TorR i (M, N ) and ExtR (HomR (M, R), N ) have finite length for i ≫ 0, then ηcR (M, N ) = (−1)c hR c (HomR (M, R), N ). In particular, if the non-regular locus of R is zero dimensional and hc vanishes for all pairs of finitely generated R-modules, then so does ηc . Remark 5.10. The conditions that TorR i (M, N ) has finite length for i ≫ 0 and that ExtiR (HomR (M, R), N ) has finite length for i ≫ 0 are equivalent; see [13, 4.2]. 28 MARK E. WALKER Proof. We proceed by induction on c. The case c = 1 follows from the isomorphism [6, 4.2] i R d R (HomR (M, R), N ) ∼ d −i−1 (M, N ). Ext = Tor Let S = Q/(f1 , . . . , fc−1 ) so that R = S/(fc ) with fc a non-zero-divisor of S. By [5, 4.3] and [4, 3.4], TorSi (M, N ) and ExtiS (HomR (M, R), N ) have finite length for i ≫ 0 and 1 S ηcR (M, N ) = ηc−1 (M, N ) 2c 1 S hR h (HomR (M, R), N ). c (HomR (M, R), N ) = 2c c−1 Let S n −−։ M be a surjection of S-modules with kernel M ′ so that we have the short exact sequence 0 → M ′ → S n → M → 0. Observe that M ′ is an MCM S-module. Since fc ·M = 0 and fc is a non-zero-divisor ∼ = of S, we have HomS (M, S n ) = 0. Using also the isomorphism HomR (M, R) − → 1 ExtS (M, S) of S-modules coming from the long exact sequence of Ext modules fc associated to the sequence 0 → S −→ S → R → 0, we obtain the short exact sequence 0 → S n → HomS (M ′ , S) → HomR (M, R) → 0. In particular, it follows from these short exact sequences that TorSi (M ′ , N ) and ExtiS (HomS (M ′ , S), N ) have finite length for i ≫ 0. Moreover, since hSc−1 vanishes on free S-modules and is additive for short exact sequences, we have hSc−1 (HomS (M ′ , S), N ) = hSc−1 (HomR (M, R), N ). Using the induction hypotheses, we get 1 1 S S h (HomS (M ′ , S), N ) = (−1)c−1 ηc−1 (M ′ , N ). hR c (HomR (M, R), N ) = 2c c−1 2c S S Finally, ηc−1 (M ′ , N ) = −ηc−1 (M, N ), since M ′ is a first syzygy of M . Combining these equations gives ηcR (M, N ) = (−1)c hR c−1 (HomR (M, R), N ). The final assertion holds since ηc is completely determined by its values on MCM modules.  Theorem 5.11. Assume • k is field of characteristic 0, • Q is a smooth k-algebra, • f1 , . . . , fc ∈ Q form a regular sequence of elements, • the singular locus of R := Q/(f1 , . . . , fc ) is zero-dimensional, and • c ≥ 2. R Then hR c (M, N ) = 0 and ηc (M, N ) = 0 for all finitely generated R-modules M and N. Remark 5.12. Since k is a perfect field, the hypotheses that Q is a smooth k algebra is equivalent to Q being a finitely generated and regular k-algebra. Likewise, Sing(R/k) = Nonreg(R). CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 29 Proof. By Lemma 5.9, it suffices to prove hR c (M, N ) = 0. The value of this invariant is unchanged if we semi-localize at Nonreg(R). So, upon replacing Spec(Q) with a suitably small affine open neighborhood of Nonreg(R), Lemma 4.4 allows us to assume all of Assumptions 4.3 hold. Let EM , EN ∈ mf (Pc−1 Q , O(1), W ) be twisted matrix factorizations corresponding to the classes of M, N in Dsg (R) under the equivalence of Theorem 3.6. Now choose a1 , . . . , ac ∈ k as in Proposition P 4.9 so that the singular locus of Q/g is zero dimensional, where we define g := i ai fi . A key fact we use is that hR c is related to the classical Herbrand difference of the hypersurface Q/g: By [4, 3.4], for all finitely generated R-modules M and N , we have 1 Q/g hR (M, N ) = hQ/g (M, N ), c (M, N ) = h1 2 and thus it suffices to prove hQ/g (M, N ) = 0. 
By Proposition 3.7 the affine matrix factorizations i∗ EM , i∗ EN ∈ hmf (Q, g) represent the classes of M, N in Dsg (Q/g), where i : Spec(Q) ֒→ Pc−1 is the closed immersion associated to the k-rational Q point [a1 : · · · , ac ] of Pc−1 . It thus follows from (3.4) that k hQ/g (M, N ) = χ(i∗ EM , i∗ EN ). By Corollary 5.8, it suffices to prove ctop (i∗ EM ) = 0 in J (Qm /k) (when n is even). But ctop (i∗ EM ) = i∗ ctop (EM ) by Corollary 2.35 and ctop (EM ) = 0 by Corollary 5.5.  Appendix A. Relative Connections We record here some well known facts concerning connections for locally free coherent sheaves. Throughout, S is a Noetherian, separated scheme and p : X → S is a smooth morphism; i.e., p is separated, flat and of finite type and Ω1X/S is locally free. Definition A.1. For a vector bundle (i.e., locally free coherent sheaf) E on X, a connection on E relative to p is a map of sheaves of abelian groups ∇ : E → Ω1X/S ⊗OX E on X satisfying the Leibnitz rule on sections: given an open subset U ⊆ X and elements f ∈ Γ(U, OX ) and e ∈ Γ(U, E), we have ∇(f · e) = df ⊗ e + f ∇(e) in Γ(U, Ω1X/S ⊗OX E), where d : OX → Ω1X/S denotes exterior differentiation relative to p. Note that imply that ∇ is OS -linear — more precisely, p∗ (∇) :   the hypotheses p∗ E → p∗ Ω1X/S ⊗OX E is a morphism of quasi-coherent sheaves on S. A.1. The classical Atiyah class. Let ∆ : X → X ×S X be the diagonal map, which, since X → S is separated, is a closed immersion, and let I denote the sheaf of ideals cutting out ∆(X). Since p is smooth, I is locally generated by a regular sequence. Recall that I/I 2 ∼ = ∆∗ Ω1X/S . Consider the coherent sheaf P̃X/S := OX×S X /I 2 on X ×S X. Observe that P̃X/S is supported on ∆(X), so that (πi )∗ P̃X/S is a coherent sheaf on X, for i = 1, 2, where πi : X ×S X → X denotes projection onto the i-th factor. 30 MARK E. WALKER The two push-forwards (πi )∗ P̃X/S , i = 1, 2 are canonically isomorphic as sheaves of abelian groups, but have different structures as OX -modules. We write PX/S = P for the sheaf of abelian groups (π1 )∗ P̃ = (π2 )∗ P̃ regarded as a OX − OX -bimodule where the left OX -module structure is given by identifying it with (π1 )∗ P̃X/S the right OX -module structure is given by identifying it with (π2 )∗ P̃X/S . Locally on an affine open subset U = Spec(Q) of X lying over an affine open −·− subset V = Spec(A) of S, we have PU/V = (Q⊗A Q)/I 2 , where I = ker(Q⊗A Q −−−→ Q) and the left and right Q-module structures are given in the obvious way. There is an isomorphism of coherent sheaves on X × X ∼ ∆∗ Ω1 = I/I 2 X/S given locally on generators by dg 7→ g ⊗ 1 − 1 ⊗ g. From this we obtain the short exact sequence 0 → Ω1X/S → PX/S → OX → 0. (A.2) This may be thought of as a sequence of OX − OX -bimodules, but for Ω1X/S and OX the two structures coincide. Locally on open subsets U and V as above, we have Ω1Q/A ∼ = I/I 2 , and (A.2) takes the form 0 → I/I 2 → (Q ⊗A Q)/I 2 → Q → 0. Viewing (A.2) as either a sequence of left or right modules, it is a split exact sequence of locally free coherent sheaves on X. For example, a splitting of PX/S → OX as right modules may be given as follows: Recall that as a right module, PX/S = (π2 )∗ P and so a map of right modules OX → PX/S is given by a map π2∗ OX → PX/S . Now, π2∗ OX = OX×S X , and the map we use is the canonical surjection. We refer to this splitting as the canonical right splitting of (A.2). Locally on subsets U and V as above, the canonical right splitting of (Q ⊗A Q)/I 2 −−։ Q is given by q 7→ 1 ⊗ q. 
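To fix ideas, here is the simplest instance of the ideal I and of the identification $I/I^2 \cong \Delta_*\Omega^1_{X/S}$ recalled above; this one-variable computation is a standard illustration and is not taken from the text:
$$X = \mathbb{A}^1_S = \operatorname{Spec} A[x], \qquad X \times_S X = \operatorname{Spec}\bigl(A[x] \otimes_A A[x]\bigr) \cong \operatorname{Spec} A[x,y], \qquad I = (x \otimes 1 - 1 \otimes x) = (x - y),$$
$$\widetilde{\mathcal P}_{X/S} = \mathcal O_{X \times_S X}/I^2 \ \leftrightarrow\ A[x,y]/(x-y)^2, \qquad I/I^2 \cong A[x] \cdot \overline{(x-y)} \cong \Omega^1_{A[x]/A} = A[x]\,dx.$$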
Given a locally free coherent sheaf E on X, we tensor (A.2) on the right by E to obtain the short exact sequence π i →E →0 → PX/S ⊗OX E − 0 → Ω1X/S ⊗OX E − (A.3) of OX − OX -bimodules. Taking section on affine open subsets U and V as before, letting E = Γ(U, E), this sequence has the form 0 → Ω1Q/A ⊗Q E → (Q ⊗A E)/I 2 · E → E → 0. Since (A.2) is split exact as a sequence of right modules and tensor product preserves split exact sequences, (A.3) is split exact as a sequence of right OX -modules, and the canonical right splitting of (A.2) determines a canonical right splitting of (A.3), which we write as can : E → PX/S ⊗OX E. The map can is given locally on sections by e 7→ 1 ⊗ e. In general, (A.3) need not split as a sequence of left modules. Viewed as a sequence of left modules, (A.3) determines an element of Ext1 (E, Ω1 ⊗O E) ∼ = H 1 (X, Ω1 ⊗O EndO (E)), OX X/S X X/S X X sometimes called the “Atiyah class” of E relative to p. To distinguish this class from what we have called the Atiyah class of a matrix factorization in the body of CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 31 this paper, we will call this class the classical Atiyah class of the vector bundle E, and we write it as (E) ∈ Ext1OX (E, Ω1X/S ). Atclassical X/S The sequence (A.3) splits as a sequence of left modules if and only if Atclassical = 0. X/S Lemma A.4. If p : X → S is affine, then the classical Atiyah class of any vector bundle E on X vanishes, and hence (A.3) splits as a sequence of left modules. Proof. Since p is affine, p∗ is exact. Applying p∗ to (A.3) results in a sequence of OS − OS bimodules (which are quasi-coherent for both actions). But since p ◦ π1 = p ◦ π2 these two actions coincide. Moreover, since (A.3) splits as right modules, so does its push-forward along p∗ . It thus suffices to prove the following general fact: If F := (0 → F ′ → F → F ′′ → 0) is a short exact sequence of vector bundles on X such that p∗ (F ) splits as a sequence of quasi-coherent sheaves on S, then F splits. To prove this, observe that F determines a class in H 1 (X, HomOX (F ′′ , F ′ )) and it is split if and only if this class vanishes. We may identify H 1 (X, HomOX (F ′′ , F ′ )) with H 1 (S, p∗ HomOX (F ′′ , F ′ )) since p is affine. Moreover, the class of F ∈ H 1 (S, p∗ HomOX (F ′′ , F ′ )) is the image of the class of p∗ (F ) ∈ H 1 (S, HomOS (p∗ F ′′ , p∗ F ′ )) under the map induced by the canonical map HomOS (p∗ F ′′ , p∗ F ′ ) → p∗ HomOX (F ′′ , F ′ ). But by our assumption the class of p∗ (F ) vanishes since p∗ F splits.  A.2. The vanishing of the classical Atiyah class and connections. Suppose σ : E → PX/S ⊗OX E is a splitting of the map π in (A.3) as left modules and recall can : E → PX/S ⊗OX E is the splitting of π as a morphism of right modules given locally by e 7→ 1 ⊗ e. Since σ and can are splittings of the same map regarded as a map of sheaves of abelian groups, the difference σ − can factors as i ◦ ∇σ for a unique map of sheaves of abelian groups ∇σ : E → Ω1X/S ⊗OX P. Lemma A.5. The map ∇σ is a connection on E relative to p. Proof. The property of being a connection may be verified locally, in which case the result is well known. In more detail, restricting to an affine open U = Spec(Q) of X lying over an affine open V = Spec(A) of S, we assume E is a projective Q-module and that we are given a splitting σ of the map of left Q-modules (Q ⊗A E)/I 2 · E −−։ E. 
The map ∇σ = (σ − can) lands in I/I 2 ⊗Q Q = Ω1Q/A ⊗Q E, and for a ∈ A, e ∈ E we have ∇σ (ae) = σ(ae) − 1 ⊗ ae = aσ(e) − 1 ⊗ ae = aσ(e) − a ⊗ e + a ⊗ e − 1 ⊗ ae = a(σ(e) − 1 ⊗ e) + (a ⊗ 1 − 1 ⊗ a) ⊗ e = a∇σ (e) + da ⊗ e, since da is identified with a ⊗ 1 − 1 ⊗ a under Ω1Q/A ∼ = I/I 2 .  32 MARK E. WALKER Lemma A.6. Suppose E, E ′ are locally free coherent sheaves on X and ∇, ∇′ are connections for each relative to p. If g : E → E ′ is a morphisms of coherent sheaves, then the map ∇′ ◦ g − (id ⊗g) ◦ ∇ : E → Ω1X/S ⊗OX E ′ is a morphism of coherent sheaves. Proof. Given an open set U and elements f ∈ Γ(U, OX ), e ∈ Γ(U, E), the displayed map sends f · e ∈ Γ(U, E) to ∇′ (g(f e)) − (id ⊗g)(df ⊗ e − f ∇(e) = ∇′ (f g(e)) − df ⊗ g(e) − f (id ⊗g)(∇(e)) = df ⊗ g(e) + f ∇′ (g(e)) − df ⊗ g(e) − f (id ⊗g)(∇(e)) = f ∇′ (g(e)) − f (id ⊗g)(∇(e)) = f ((∇′ ◦ g − (id ⊗g) ◦ ∇) (e)) .  Proposition A.7. For a vector bundle E on X, the function σ 7→ ∇σ determines a bijection between the set of splittings of the map π in (A.3) as a map of left modules and the set of connections on E relative to p. In particular, E admits a connection relative to p if and only if Atclassical (E) = 0. X/S Proof. From Lemma A.6 with g being the identity map, the difference of two connections on E is OX -linear. By choosing any one splitting σ0 of (A.3) and its associated connection ∇0 = ∇σ0 , the inverse of σ 7→ ∇σ is given by ∇ 7→ ∇ − ∇σ0 + σ0 .  References 1. R. O. Buchweitz, Maximal Cohen-Macaulay modules and Tate cohomology over Gorenstein rings, unpublished, available at http://hdl.handle.net/1807/16682, 1987. 2. Jesse Burke and Mark E. Walker, Matrix factorizations in higher codimension, ArXiv e-prints (2012). 3. Jesse Burke and Mark E. Walker, Matrix factorizations over projective schemes, Homology Homotopy Appl. 14 (2012), no. 2, 37–61. MR 3007084 4. Olgur Celikbas and Hailong Dao, Asymptotic behavior of Ext functors for modules of finite complete intersection dimension, Math. Z. 269 (2011), no. 3-4, 1005–1020. MR 2860275 5. Hailong Dao, Asymptotic behavior of Tor over complete intersections and applications, ArXiv e-prints (2007). 6. Hailong Dao, Some observations on local and projective hypersurfaces, Math. Res. Lett. 15 (2008), no. 2, 207–219. MR 2385635 (2009c:13032) 7. Tobias Dyckerhoff, Compact generators in categories of matrix factorizations, Duke Math. J. 159 (2011), no. 2, 223–274. MR 2824483 (2012h:18014) 8. Tobias Dyckerhoff and Daniel Murfet, The Kapustin-Li formula revisited, Adv. Math. 231 (2012), no. 3-4, 1858–1885. MR 2964627 , Pushing forward matrix factorizations, Duke Math. J. 162 (2013), no. 7, 1249–1311. 9. MR 3079249 10. David Eisenbud, Homological algebra on a complete intersection, with an application to group representations, Trans. Amer. Math. Soc. 260 (1980), no. 1, 35–64. MR 570778 (82d:13013) 11. Robin Hartshorne, Algebraic geometry, Springer-Verlag, New York, 1977, Graduate Texts in Mathematics, No. 52. MR 0463157 (57 #3116) CHERN CHARACTERS FOR TWISTED MATRIX FACTORIZATIONS 33 12. Melvin Hochster, The dimension of an intersection in an ambient hypersurface, Algebraic geometry (Chicago, Ill., 1980), Lecture Notes in Math., vol. 862, Springer, Berlin, 1981, pp. 93–106. 13. Craig Huneke and David A. Jorgensen, Symmetry in the vanishing of Ext over Gorenstein rings, Math. Scand. 93 (2003), no. 2, 161–184. MR 2009580 (2004k:13039) 14. Anton Kapustin and Yi Li, D-branes in Landau-Ginzburg models and algebraic geometry, J. High Energy Phys. (2003), no. 12, 005, 44 pp. (electronic). 
MR 2041170 (2005b:81179b) 15. Kevin H. Lin and Daniel Pomerleano, Global matrix factorizations, Math. Res. Lett. 20 (2013), no. 1, 91–106. MR 3126725 16. W. Frank Moore, Greg Piepmeyer, Sandra Spiroff, and Mark E. Walker, The vanishing of a higher codimension analogue of Hochster’s theta invariant, Math. Z. 273 (2013), no. 3-4, 907–920. MR 3030683 17. Daniel Murfet, Residues and duality for singularity categories of isolated Gorenstein singularities, Compos. Math. 149 (2013), no. 12, 2071–2100. MR 3143706 18. Dmitri Orlov, Matrix factorizations for nonaffine LG-models, Math. Ann., To appear. , Triangulated categories of singularities, and equivalences between Landau-Ginzburg 19. models, Mat. Sb. 197 (2006), no. 12, 117–132. MR 2437083 (2009g:14013) 20. D. Platt, Chern Character for Global Matrix Factorizations, ArXiv e-prints (2012). 21. Alexander Polishchuk and Arkady Vaintrob, Matrix factorizations and singularity categories for stacks, Ann. Inst. Fourier (Grenoble) 61 (2011), no. 7, 2609–2642. MR 3112502 22. , Chern characters and Hirzebruch-Riemann-Roch formula for matrix factorizations, Duke Math. J. 161 (2012), no. 10, 1863–1926. MR 2954619 23. Leonid Positselski, Coherent analogues of matrix factorizations and relative singularity categories, arXiv:1102.0261. 24. A. Preygel, Thom-Sebastiani and Duality for Matrix Factorizations, ArXiv e-prints (2011). 25. Ed Segal, The closed state space of affine Landau-Ginzburg B-models, J. Noncommut. Geom. 7 (2013), no. 3, 857–883. MR 3108698 26. Yu Xuan, Geometric study of the category of matrix factorizations, (2013), unpublished, available at http://digitalcommons.unl.edu/mathstudent/45. Department of Mathematics, University of Nebraska, Lincoln, NE 68588 E-mail address: [email protected]
Under consideration for publication in Theory and Practice of Logic Programming 1 arXiv:1102.1178v1 [] 6 Feb 2011 The BinProlog Experience: Architecture and Implementation Choices for Continuation Passing Prolog and First-Class Logic Engines Paul Tarau Dept. of Computer Science and Engineering University of North Texas, Denton, Texas, USA E-mail: [email protected] submitted 10th October 2009; revised 28st February 2010; accepted 2nd February 2011 Abstract We describe the BinProlog system’s compilation technology, runtime system and its extensions supporting first-class Logic Engines while providing a short history of its development, details of some of its newer re-implementations as well as an overview of the most important architectural choices involved in their design. With focus on its differences with conventional WAM implementations, we explain key details of BinProlog’s compilation technique, which replaces the WAM with a simplified continuation passing runtime system (the “BinWAM”), based on a mapping of full Prolog to binary logic programs. This is followed by a description of a term compression technique using a “tag-on-data” representation. Later derivatives, the Java-based Jinni Prolog compiler and the recently developed Lean Prolog system refine the BinProlog architecture with first-class Logic Engines, made generic through the use of an Interactor interface. An overview of their applications with focus on the ability to express at source level a wide variety of Prolog built-ins and extensions, covers these newer developments. To appear in Theory and Practice of Logic Programming (TPLP). KEYWORDS: Prolog, logic programming system, continuation passing style compilation, implementation of Prolog, first-class logic engines, data-representations for Prolog runtime systems 1 Introduction At the time when we started work on the BinProlog compiler, around 1991, WAMbased implementations (Warren 1983; Aı̈t-Kaci 1991) had reached already a significant level of maturity. The architectural changes occurring later can be seen mostly as extensions for constraint programming and runtime or compile-time optimizations. BinProlog’s design philosophy has been minimalistic from the very beginning. In the spirit of Occam’s razor, while developing an implementation as an iterative 2 Paul Tarau process, this meant not just trying to optimize for speed and size, but also to actively look for opportunities to refactor and simplify. The guiding principle, at each stage, was seeking answers to questions like: • what can be removed from the WAM without risking significant, program independent, performance losses? • what can be done to match, within small margins, performance gains resulting from new WAM optimizations (like read/write stream separation, instruction unfolding, etc.) while minimizing implementation complexity and code size? • can one get away with uniform data representations (e.g. no special tags for lists) instead of extensive specialization, without major impact on performance? • when designing new built-ins and extensions, can we use source-level transformations rather than changes to the emulator? The first result of this design, BinProlog’s BinWAM abstract machine has been originally implemented as a C emulator based on a program transformation introduced in (Tarau and Boyer 1990). While describing it, we assume familiarity with the WAM and focus on the differences between the two abstract machines. 
We refer to (Aı̈t-Kaci 1991) for a tutorial description of the WAM, including its instruction set, run-time areas and compilation of unification and control structures. The BinWAM replaces the WAM with a simplified continuation passing logic engine (Tarau 1991) based on a mapping of full Prolog to binary logic programs (binarization). Its key assumption is that as conventional WAM’s environments are discarded in favor of a heap-only run-time system, heap garbage collection and efficient term representation become instrumental as means to ensure ability to run large classes of Prolog programs. The second architectural novelty, present to some extent in the original BinProlog and a key element of its newer Java-based derivatives Jinni Prolog (Tarau 1999a; Tarau 2008b) and Lean Prolog (still under development) is the use of Interactors (and first-class Logic Engines, in particular) as a uniform mechanism for the sourcelevel specification (and often actual implementation) of key built-ins and language extensions (Tarau 2000; Tarau 2008a; Tarau and Majumdar 2009). We first explore various aspects of the compilation process and the runtime system. Next we discuss source-level specifications of key built-ins and extensions using first-class Logic Engines. Sections 2 and 3 provide an overview BinProlog’s key source-to-source transformation (binarization) and its use in compilation. Section 4 introduces BinProlog’s unusual “tag-on-data” term representation (4.1) and studies its impact on term compression (4.2). Section 5 discusses optimizations of the runtime system like instruction compression and the implicit handling of read-write modes. Section 6 introduces Logic Engines seen as implementations of a generic Interactor interface and describes their basic operations. The BinProlog Experience 3 Section 7 applies Interactors to implement, at source level, some key Prolog builtins, exceptions (7.5) and higher order constructs (7.6). Section 8 applies the logic engine API to specify Prolog extensions ranging from dynamic database operations (8.1) and backtracking if-then-else (8.2) to predicates comparing alternative answers 8.3.1 and mechanisms to encapsulate infinite streams 8.3.2. Section 9 gives a short historical account of BinProlog and its derivatives. Section 10 discusses related work and Section 11 concludes the paper. 2 The binarization transformation We start by reviewing the program transformation that allows compilation of logic programs towards a simplified WAM specialized for the execution of binary programs (called BinWAM from now on). Binary programs consist of facts and binary clauses that have only one atom in the body (except for some inline “built-in” operations like arithmetics) and therefore they need no “return” after a call. A transformation introduced in (Tarau and Boyer 1990) allows the emulation of logic programs with operationally equivalent binary programs. Before defining the binarization transformation, we describe two auxiliary transformations, commonly used by Prolog compilers. The first transformation converts facts into rules by giving them the atom true as body. E.g., the fact p is transformed into the rule p :- true. The second transformation eliminates metavariables (i.e. variables representing Prolog goals only known at run-time), by wrapping them in a call/1 predicate, e.g., a clause like and(X,Y):-X,Y is transformed into and(X,Y) :- call(X),call(Y). 
The binarization transformation (first described in (Tarau and Boyer 1990)) adds continuations as the last argument of predicates in a way that preserves first argument indexing. Let P be a definite program and Cont a new variable. Let T and E = p(T1 , ..., Tn ) be two expressions (i.e. atoms or terms). We denote by ψ(E, T ) the expression p(T1 , ..., Tn , T ). Starting with the clause (C) A : −B1 , B2 , ..., Bn . we construct the clause (C’) ψ(A, Cont) : −ψ(B1 , ψ(B2 , ..., ψ(Bn , Cont))). 0 The set P of all clauses C’ obtained from the clauses of P is called the binarization of P . The following example shows the result of this transformation on the well-known “naive reverse” program: app([],Ys,Ys,Cont):-true(Cont). app([A|Xs],Ys,[A|Zs],Cont):-app(Xs,Ys,Zs,Cont). nrev([],[],Cont):-true(Cont). nrev([X|Xs],Zs,Cont):-nrev(Xs,Ys,app(Ys,[X],Zs,Cont)). Note that true(Cont) can be seen as a (specialized version) of Prolog’s call/1 4 Paul Tarau that executes the goals stacked in the continuation variable Cont. Its semantics is expressed by the following clauses true(app(X,Y,Z,Cont)):-app(X,Y,Z,Cont). true(nrev(X,Y,Cont)):-nrev(X,Y,Cont). true(true). which, together with the code for nrev and app run the binarized query ?- nrev([1,2,3],R,true). in any Prolog, directly, returning R=[3,2,1]. Prolog’s inference rule (called LD-resolution) executes goals in the body of a clause left-to-right in a depth first order. LD-resolution describes Prolog’s operational semantics more accurately than order-independent SLD-resolution (Lloyd 1987). The binarization transformation preserves a strong operational equivalence with the original program with respect to the LD-resolution rule which is reified in the syntactical structure of the resulting program, where the order of the goals in the body becomes hardwired in the representation (Tarau and De Bosschere 1993b). This means that each resolution step of an LD-derivation on a definite program P can be mapped to an LD-resolution step of the binarized program P 0 . More precisely, let G be an atomic goal and G0 = ψ(G, true). Then, the answers computed using LD-resolution obtained by querying P with G are the same as those obtained by querying P 0 with G0 . Note also that the concepts of SLD- and LD-resolution overlap in the case of binary programs. 3 Binarization based compilation and runtime system BinProlog’s BinWAM virtual machine specializes the WAM to binary clauses and therefore it drops WAM’s environments. Alternatively, assuming a two stack WAM implementation the BinWAM can be seen as an OR-stack-only WAM. Independently, its simplifications of the indexing mechanism and a different “tag-on-data” representation are the most important differences with conventional WAM implementations. The latter also brings opportunities for a more compact heap representation that is discussed in section 4. Note also that continuations become explicit in the binary version of the program. We refer to (Tarau and Dahl 1994) for a technique to access and manipulate them by modifying BinProlog’s binarization preprocessor. This results in the ability to express constructs like a backtracking sensitive variant of catch/throw at source level. We focus in this section only on their uses in BinProlog’s compiler and runtime system. 3.1 Metacalls as built-ins The first step of our compilation process simply wraps metavariables inside a predicate call/1, and adds true/0 as a body for facts, as most Prolog compilers do. 
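The second step, the binarization itself, can also be written down as a short source-to-source transformer. The following is a simplified sketch (hypothetical helper names, assuming each clause is already in Head:-Body form with a right-associated conjunction as body, and that the usual append/3 is available):

% psi(G,T,G1): G1 is G with T added as an extra last argument
psi(G,T,G1) :- G =.. Gs, append(Gs,[T],Gs1), G1 =.. Gs1.

binarize((A :- B),(A1 :- B1)) :-       % Cont is a fresh variable
   psi(A,Cont,A1),
   binarize_body(B,Cont,B1).

binarize_body((B,Bs),Cont,B1) :- !,
   binarize_body(Bs,Cont,Bs1),
   psi(B,Bs1,B1).
binarize_body(B,Cont,B1) :- psi(B,Cont,B1).

For instance, binarize((nrev([X|Xs],Zs):-nrev(Xs,Ys),app(Ys,[X],Zs)),C) binds C to the second nrev clause listed above.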
The binarization transformation then adds a continuation as the last argument of each predicate and a new predicate true/1 to deal with unit clauses. During this step, the arity of all predicates increases by 1 so that, for instance, call/1 becomes call/2. Although we could add the special clause true(true) and, for each functor f occurring in the program, clauses like

true(f(...,Cont)):-f(...,Cont).
call(f(...),Cont):-f(...,Cont).

as an implementation of true/1 and call/2, in practice it is simpler and more efficient to treat them as built-ins (Tarau 1991). The built-in corresponding to true/1 looks up the address of the predicate associated with f(...,Cont) and throws an exception if no such predicate is found. The built-in corresponding to call/2 adds the extra argument Cont to f(...), looks up whether a predicate definition is associated with f(...,Cont), and throws an exception if no definition is found. In both cases, when predicate definitions are found, the BinWAM fills up the argument registers and proceeds with the execution of the code of those predicates.

Note that the predicate look-ups are implemented efficiently by using hashing on a <symbol, arity> pair stored in one machine word. Moreover, they happen relatively infrequently. The call/2-induced look-ups, as in ordinary Prolog compilation, are generated only when metavariables are used. As calls to true/1 only happen when execution reaches a "fact" in the original program, they also have relatively little impact on performance for typical recursion-intensive programs.

3.2 Inline compilation of built-ins

Demoen and Mariën pointed out in (Demoen and Mariën 1992) that a more implementation-oriented view of binary programs can be very useful: a binary program is simply one that does not need an environment in the WAM. This view leads to inline code generation (rather than binarization) for built-ins occurring immediately after the head. For instance, a clause like

a(X):-X>1,b(X),c(X).

is handled as

a(X,Cont) :- inline_code_for(X>1),b(X,c(X,Cont)).

rather than

a(X,Cont) :- '>'(X,1,b(X,c(X,Cont))).

Inline expansion of built-ins contributes significantly to BinProlog's speed and supports the equivalent of the WAM's last call optimization for frequently occurring linear recursive predicates containing such built-ins, as the unnecessary construction of continuation terms on the heap is avoided for them.

3.3 Handling CUT

As in the WAM, a special register cutB contains the choice point address up to which choice points need to be popped off the stack on backtracking. In clauses like

a(X):-X>1,!,b(X),c(X).

CUT can be handled inline (by a special instruction PUSH_CUT, generated when the compiler recognizes this case), right after the built-in X>1, by trimming the choice point stack right away. On the other hand, in clauses like

a(X):-X>1,b(X),!,c(X).

a pair of instructions PUT_CUT and GET_CUT is needed. During the BinWAM's term creation, PUT_CUT saves the register cutB to the heap. This value of cutB is used by GET_CUT, when the execution of the instruction sequence reaches it, to trim the choice point stack to the appropriate level.

3.4 Term construction in the absence of the AND-stack

The most important simplification in the BinWAM, in comparison with the standard WAM, is the absence of an AND-stack. Clearly, this is made possible by the fact that each binary clause has (at most) one goal in the body.
In procedural and call-by-value functional languages, which feature only deterministic calls, a typical implementation choice for avoiding repeated structure creation is to use environment stacks containing only the variable bindings. The WAM (Warren 1983) follows this model, based on the assumption that most logic programs are deterministic. This is one of the key points where the execution models of the WAM and the BinWAM differ. A careful analysis suggests that the choice between

• the standard WAM's late and repeated construction, with the variables of each goal in the body pushed on the AND-stack, and
• the BinWAM's eager early construction on the heap (once) and reuse (by possibly trailing/untrailing variables)

favors different programming styles, with "AND-intensive", deterministic, possibly tail-recursive programs favoring the WAM, and "OR-intensive", nondeterministic programs that reuse structures through trailing/untrailing favoring the BinWAM.

The following example illustrates the difference between the two execution models. In the clause

p(X) :- q(X,Y), r(f(X,Y)).

binarized as

p(X,C) :- q(X,Y,r(f(X,Y),C)).

the term f(X,Y) is created on the heap by the WAM as many times as the number of solutions of the predicate q. On the other hand, the BinWAM creates it only once and reuses it by undoing the bindings of variables X and Y (possibly trailed). This means that if q fails, the BinWAM's "speculative" term creation work is wasted. It also means that if q is nondeterministic and has a large number of solutions, the WAM's repeated term creation leads to a less efficient execution model.

Fig. 1: A frame on BinProlog's OR-stack, holding (from top to bottom): P, the next clause address; H, the saved top of the heap; TR, the saved top of the trail; A(N+1), the continuation argument register; and A(N) ... A(1), the saved argument registers.

3.5 A minimalistic BinWAM instruction set

A minimalistic BinWAM instruction set (as shown for two simple C and Java implementations at http://www.binnetcorp.com/OpenCode/free_prolog.html) consists of the following subset of the WAM: GET_STRUCTURE, UNIFY_VARIABLE, UNIFY_VALUE, EXECUTE, PROCEED, TRY_ME_ELSE, RETRY_ME_ELSE, TRUST_ME, as well as the following instructions, which the reader will recognize as mild variations of their counterparts in the "vanilla" WAM instruction set (Aı̈t-Kaci 1991):

• MOVE_REGISTER (simple register-to-register move)
• NONDET (sets up choice-point creation when needed)
• SWITCH (simple first-argument indexing)
• PUSH_CUT, PUT_CUT, GET_CUT (cut handling instructions for binary programs, along the lines of (Demoen and Mariën 1992))

Note that specializations for CONSTANTs and LISTs, as well as WRITE-mode variants of the GET and UNIFY instructions, can be added as obvious optimizations.

3.6 The OR-stack

A simplified OR-stack, with the layout shown in Figure 1, is used only for (1-level) choice point creation in nondeterministic predicates. No link pointers between frames are needed, as the length of the frames can be derived from the arity of the predicate. Given that variables kept on the local stack in a conventional WAM are now located on the heap, the heap consumption of the program increases. It has been shown that, in some special cases, partial evaluation at source level can deal with the problem (Demoen 1992; Neumerkel 1992) but, as a more practical solution, the impact of heap consumption has been alleviated in BinProlog by the use of an efficient copying garbage collector (Demoen et al. 1996).
8 Paul Tarau 3.7 A simplified clause selection and indexing mechanism As the compiler works on a clause-by-clause basis, it is the responsibility of the loader (that is part of the runtime system) to index clauses and link the code. The runtime system uses a global < key1 , key2 >→ value hash table seen as an abstract multipurpose dictionary. This dictionary provides the following services: • indexing compiled code, with key1 as the functor of the predicate and key2 as the functor of the first argument • implementing multiple dynamic databases, with key1 as the name of the database and key2 the functor of a dynamic predicate • supporting a user-level storage area (called “blackboard”) containing global terms indexed by two keys A one byte mark-field in the table is used to distinguish between load-time use when the kernel (including built-ins written in Prolog and the compiler itself) is loaded, and run-time use (when user programs are compiled and loaded) to protect against modifications to the kernel and for fast clean-up. Sharing of the global multipurpose dictionary, although somewhat slower than the small key → value hashing tables injected into the code-space of the standard WAM, keeps the implementation as simple as possible. Also, with data areas of fixed size (as in the original BinProlog implementation), one big dictionary provides overall better use of the available memory by sharing the hashing table for different purposes. Predicates are classified as single-clause, deterministic and nondeterministic. Only predicates having all first-argument functors distinct are detected as deterministic and indexed. In contrast to the WAM’s fairly elaborate indexing mechanism, indexing of deterministic predicates in the BinWAM is done by a unique SWITCH instruction. If the first argument dereferences to a non-variable, SWITCH either fails or finds the 1-word address of the unique matching clause in the global hash-table, using the predicate and the functor of the first argument as a 2-word key. Note that the basic difference with the WAM is the absence of intensive tag analysis. This is related also to our different low-level data-representation that we discuss in section 4. A specialized JUMP-IF instruction deals with the frequent case of 2 clause deterministic predicates. To reduce the interpretation overhead, SWITCH and JUMP IF are combined with the preceding EXECUTE and the following GET STRUCTURE or GET CONSTANT instruction, giving EXEC SWITCH and EXEC JUMP IF. This not only avoids dereferencing the first argument twice, but also reduces unnecessary branching logic that breaks the processor’s pipeline. Note also that simplification of the indexing mechanism, in combination with smaller and unlinked choice points, helps making backtracking sometimes faster in the BinWAM than in conventional WAMs, as in the case of simple (but frequent!) predicates having only a few clauses. However, as mentioned in subsection 3.4, backtracking performance in the BinWAM also benefits from sharing structures occurring in the body of a clause in the OR-subtree it generates, instead of repeated creation as in conventional WAM. This The BinProlog Experience 9 property of binarized programs (see example in subsection 3.4), was first pointed out in (Demoen and Mariën 1992) as the typical case when binarized variants are faster than the original programs. 
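Returning to the classification above, the single-clause/deterministic/nondeterministic decision can be approximated at source level as follows (a hedged sketch with hypothetical names; it assumes each clause is already in Head:-Body form, while the actual loader works on compiled clauses and the global hash table rather than on Prolog terms):

% a predicate is detected as deterministic only if every clause has a
% nonvariable first argument and all first-argument functors are distinct
classify_clauses([_],single_clause) :- !.
classify_clauses(Cs,deterministic) :-
   findall(F/N,
           ( member((H :- _),Cs), arg(1,H,A), nonvar(A), functor(A,F,N) ),
           Keys),
   length(Cs,L), sort(Keys,Distinct), length(Distinct,L),
   !.
classify_clauses(_,nondeterministic).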
Our original assumption when simplifying the WAM's indexing instructions was that, for predicates having a more general distribution of first arguments, a source-to-source transformation, grouping similar arguments into new predicates, can be used. Later, while noticing that well written Prolog code often tends to be either "database type" (requiring multiple argument indexing) or "recursion intensive" (with small predicates having a few clauses, fitting well this simplified first-argument indexing mechanism), it became clear that it makes sense to handle these two problems separately. As a result, we have kept this simplified indexing scheme (for "recursion intensive" compiled code) unchanged through the evolution of BinProlog and its derivatives. On the other hand, our newest implementation, Lean Prolog, handles "database type" dynamic code efficiently using a very general multi-argument indexing mechanism.

3.8 Binarization: some infelicities

We have seen that binarization has helped build a simplified abstract machine that provides good performance with help from a few low level optimizations. However, there are some "infelicities" that one has to face, somewhat similar to what any program transformation mechanism induces at runtime - DCG grammars come to mind in the Prolog world. For instance, the execution order in the body is reified at compile time into a fixed structure. This means that things like dynamic reordering of the goals in a clause body or AND-parallel execution mechanisms become trickier. Also, inline compilation of constructs like if-then-else becomes more difficult - although one can argue that using a source-level technique, when available (e.g. by creating small new predicates), is an acceptable implementation in this case.

4 Data representation

We review here an unconventional data representation choice that turned out to also provide a surprising term-compression mechanism, which can be seen as a generalization of the "CDR-coding" (Clark and Green 1977) used in LISP/Scheme systems.

4.1 Tag-on-pointer versus tag-on-data

When describing the data in a cell with a tag we have basically two possibilities: we can put the tag in the pointer to the data or in the data cell itself. The first possibility, probably the most popular among WAM implementors, allows one to check the tag before deciding if and how the data has to be processed. We choose the second possibility because, in the presence of indexing, unifications are more often intended to succeed, propagating bindings, rather than being used as a clause selection mechanism. This also justifies why we have not implemented the WAM's traditional SWITCH_ON_TAG instruction.

Fig. 2: Compressed list representation of [a,b,c] (the cells ./2 a ./2 b ./2 c [] laid out contiguously on the heap).
Fig. 3: Term compression of t(1,t(2,t(3,n))). w (WAM): three separate t/2 records, each reached through a pointer stored in the last argument of the previous one; bw (BinWAM): the cells t/2 1 t/2 2 t/2 3 n/0 laid out contiguously.

We found it very convenient to precompute a functor in the code-space as a word of the form <arity,symbol-number,tag>¹ and then simply compare it with objects on the heap or in registers. In contrast, in a conventional WAM, one compares the tags, finding out that they are almost always the same, then compares the functor names and finally compares the arities - an unnecessary but costly if-logic. This is avoided with our tag-on-data representation, while also consuming as few tag bits as possible. Only 2 bits are used in BinProlog for tagging variables, integers and functors/atoms². With this representation, a functor fits completely in one word.
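To make the "one word per functor" point concrete, here is a hypothetical sketch of such a packed descriptor, assuming, purely for illustration, a 32-bit cell with 6 bits of arity, 24 bits of symbol number and 2 tag bits (the actual BinProlog field widths and positions may differ):

% pack_functor(+SymbolNumber,+Arity,+Tag,-Word): pack all three fields into
% a single machine word, so that two functors can be compared (or hashed on)
% with one word comparison
pack_functor(SymbolNumber,Arity,Tag,Word) :-
   Word is (Arity << 26) \/ (SymbolNumber << 2) \/ Tag.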
As an interesting consequence, as we have found out later, when implementing a symbol garbage collector for a derivative of BinProlog, the “tag-on-data” representation makes scanning the heap for symbols (and updating them in place) a trivial operation. 4.2 Term compression If a term has a last argument containing a functor, with our tag-on-data representation we can avoid the extra pointer from the last argument to the functor cell and simply make them collapse. Obviously the unification algorithm must take care of this case, but the space savings are important, especially in the case of lists which become contiguous vectors with their N-th element directly addressable at offset 2*sizeof(term)*N+1 bytes from the beginning of the list, as shown in Figure 2. The effect of this last argument overlapping on t(1,t(2,t(3,n))) is represented in Figure 3. 1 2 This technique is also used in various other Prologs e.g. SICStus, Ciao. This representation limits arity and available symbol numbers - a problem that, went away with the newer 64-bit versions of BinProlog. The BinProlog Experience 11 This representation also reduces the space consumption for lists and other “chained functors” to values similar or better than in the case of conventional WAMs. We refer to (Tarau and Neumerkel 1993) for the details of the term-compression related optimizations of BinProlog. 5 Optimizing the run-time system We give here an overview of the optimizations of the runtime system. Most of them are, at this point in time, “folklore” and shared with various other WAM-based Prolog implementations. Instruction compression It happens very often that a sequence of consecutive instructions share some WAM state information (Nässén et al. 2001). For example, two consecutive unify instructions have the same mode as they correspond to arguments of the same structure. Moreover, due to our very simple instruction set, some instructions have only a few possible other instructions that can follow them. For example, after an EXECUTE instruction, we can have a single, a deterministic or a nondeterministic clause. It makes sense to specialize the EXECUTE instruction with respect to what has to be done in each case. This gives, in the case of calls to deterministic predicates the instructions EXEC SWITCH and EXEC JUMP IF as mentioned in the section on indexing. On the other hand, some instructions are simply so small that just dispatching them can cost more than actually performing the associated WAM-step. This in itself is a reason to compress two or more instructions taking less than a word in one instruction. This optimization has been part of WAM-based Prolog systems like Quintus, SICStus, Ciao as well. Also having a small initial instruction set reduces the number of combined instructions needed to cover all cases. For example, by compressing our UNIFY instructions and their WRITE-mode specializations, we get the new instructions: UNIFY_VARIABLE_VARIABLE WRITE_VARIABLE_VARIABLE ... This gives, in the case of the binarized version of the recursive clause of append/3, the following code: append([A | Xs],Ys,[A | Zs],Cont):-append(Xs,Ys,Zs,Cont). 
TRUST_ME_ELSE ∗/4, % keeps also the arity = 4 GET_STRUCTURE X1, ./2 UNIFY_VARIABLE_VARIABLE X5, A1 GET_STRUCTURE X3, ./2 UNIFY_VALUE_VARIABLE X5, A3 EXEC_JUMP_IF append/4 % actually the address of append/4 The choice of candidates for instruction compression was based on low level profiling (instruction frequencies) and possibility of sharing of common work by two successive instructions and frequencies of functors with various arities. 12 Paul Tarau BinProlog also integrates the preceding GET STRUCTURE instruction into the double UNIFY instructions and the preceding PUT STRUCTURE into the double WRITE instructions. This gives another 16 instructions but it covers a large majority of uses of GET STRUCTURE and PUT STRUCTURE. GET_UNIFY_VARIABLE_VARIABLE ... PUT_WRITE_VARIABLE_VALUE .... Reducing interpretation overhead on those critical, high frequency instructions definitely contributes to the speed of our emulator. As a consequence, in the frequent case of structures of arity=2 (lists included), mode-related IF-logic is completely eliminated, with up to 50% speed improvements for simple predicates like append/3. The following example shows the effect of this transformation: a(X,Z):-b(X,Y),c(Y,Z). ⇒binary form⇒ a(X,Z,C):-b(X,Y,c(Y,Z,C)). BinProlog BinWAM code, without compression a/3: PUT_STRUCTURE WRITE_VARIABLE WRITE_VALUE WRITE_VALUE MOVE_REG MOVE_REG EXECUTE X4<-c/3 X5 X2 X3 X2<-X5 X3<-X4 b/3 BinProlog BinWAM code, with instruction compression PUT_WRITE_VARIABLE_VALUE WRITE_VALUE MOVE_REGx2 EXECUTE X4<-c/3, X5,X2 X3 X2<-X5, X3<-X4 b/3 Note that instruction compression is usually applied inside a procedure. As BinProlog has a unique primitive EXECUTE instruction instead of standard WAM’s CALL, ALLOCATE, DEALLOCATE, EXECUTE, PROCEED we can afford to do instruction compression across procedure boundaries with very little increase in code size due to relatively few different ways to combine control instructions. Inter-procedural instruction compression can be seen as a kind of “hand-crafted” partial evaluation at implementation language level, intended to optimize the main loop of the WAM-emulator. It has the same effect as partial evaluation at source level which also eliminates procedure calls. At the global level, knowledge about possible continuations can also remove the run-time effort of address look-up for meta-variables in predicate positions, and of useless trailing and dereferencing. (Most of ) the benefits of two-stream compilation for free Let us point out here that in the case of GET * * instructions we have the benefits of separate READ and WRITE streams (for instance, avoidance of mode checking) on some high frequency The BinProlog Experience 13 instructions without actually incurring the compilation complexity and emulation overhead in generating them. As terms of depth 1 and functors of low arity dominate statistically Prolog programs, we can see that our instruction compression scheme actually behaves as if two separate instruction streams were present, most of the time! 6 Logic Engines as Interactors We now turn the page to a historically later architectural feature of BinProlog and its newer derivatives. While orthogonal to the BinWAM architecture, it shares the same philosophy: proceed with a fundamental system simplification on purely esthetic grounds, independently of short term performance concerns, and hope that overall elegance will provide performance improvements for free, later3 . 
BinProlog’s Java-based reimplementation, Jinni has been mainly used in various applications (Tarau 1998; Tarau 1999b; Tarau 1999a; Tarau 2004a) as an intelligent agent infrastructure, by taking advantage of Prolog’s knowledge processing capabilities in combination with a simple and easily extensible runtime kernel supporting a flexible reflexion mechanism (Tyagi and Tarau 2001). Naturally, this has suggested to investigate whether some basic agent-oriented language design ideas can be used for a refactoring of pure Prolog’s interaction with the external world. Agent programming constructs have influenced design patterns at “macro level”, ranging from interactive Web services to mixed initiative computer human interaction. Performatives in Agent communication languages (FIPA 1997) have made these constructs reflect explicitly the intentionality, as well as the negotiation process involved in agent interactions. At the same time, it has been a long tradition of logic programming languages (Hermenegildo 1986; Lusk et al. 1993) to use multiple Logic Engines for supporting concurrent execution. In this context, the Jinni Prolog agent programming framework (Tarau 2004b) and the recent versions of the BinProlog system (Tarau 2006) have been centered around logic engine constructs providing an API that supports reentrant instances of the language processor. This has naturally led to a view of Logic Engines as instances of a generalized family of iterators called Fluents (Tarau 2000), that have allowed the separation of the first-class language interpreters from the multi-threading mechanism, while providing, at the same time, a very concise source-level reconstruction of Prolog’s built-ins. Later we have extended the original Fluents with a few new operations (Tarau and Majumdar 2009) supporting bi-directional, mixedinitiative exchanges between engines. The resulting language constructs, that we have called Interactors, express coroutining, metaprogramming and interoperation with stateful objects and external services. They complement pure Horn Clause Prolog with a significant boost in 3 Even in cases when such hopes do not materialize, indirect consequences of such architectural simplifications often lower software risks and bring increased system reliability while keeping implementation effort under control. 14 Paul Tarau expressiveness, to the point where they allow emulating at source level virtually all Prolog built-ins, including dynamic database operations. In a wider programming language implementation context, a yield statement supports basic coroutining in newer object oriented languages like Ruby C# and Python but it goes back as far as (Conway 1963) and the Coroutine Iterators introduced in older languages like CLU (Liskov et al. 1981). 6.1 Logic Engines as answer generators Our Interactor API, a unified interface to various stateful objects interacting with Prolog processors, has evolved progressively into a practical Prolog implementation framework starting with (Tarau 2000) and continued with (Tarau 2008a) and (Tarau and Majumdar 2009). We summarize it here while instantiating the more general framework to focus on interoperation of Logic Engines. We refer to (Tarau and Majumdar 2009) for the details of an emulation in terms of Horn Clause Logic of various engine operations. 
An Engine is simply a language processor reflected through an API that allows its computations to be controlled interactively from another Engine very much the same way a programmer controls Prolog’s interactive toplevel loop: launch a new goal, ask for a new answer, interpret it, react to it. A Logic Engine is an Engine running a Horn Clause Interpreter with LD-resolution (Tarau and Boyer 1993) on a given clause database, together with a set of built-in operations. The command new_engine(AnswerPattern,Goal,Interactor) creates a new Horn Clause solver, uniquely identified by Interactor, which shares code with the currently running program and is initialized with Goal as a starting point. AnswerPattern is a term, usually a list of variables occurring in Goal, of which answers returned by the engine will be instances. Note however that new engine/3 acts like a typical constructor, no computations are performed at this point, except for allocating data areas. In our newer implementations, with all data areas dynamic, engines are lightweight and engine creation is fast and memory efficient4 to the point where using them as building blocks for a significant number of built-ins and various language constructs is not always prohibitive in terms of performance. 6.2 Iterating over computed answers Note that our Logic Engines are seen, in an object oriented-style, as implementing the interface Interactor. This supports a uniform interaction mechanism with a variety of objects ranging from Logic Engines to file/socket streams and iterators over external data structures. 4 The additional operation load engine(Interactor,AnswerPattern,Goal) that clears data areas and initializes an engine with AnswerPattern,Goal has also been available as a further optimization, by providing a mechanism to reuse an existing engine. The BinProlog Experience 15 The get/2 operation is used to retrieve successive answers generated by an Interactor, on demand. It is also responsible for actually triggering computations in the engine. The query get(Interactor,AnswerInstance) tries to harvest the answer computed from Goal, as an instance of AnswerPattern. If an answer is found, it is returned as the(AnswerInstance), otherwise the atom no is returned. As in the case of the Maybe Monad in Haskell, returning distinct functors in the case of success and failure, allows further case analysis in a pure Horn Clause style, without needing Prolog’s CUT or if-then-else operation. Note that bindings are not propagated to the original Goal or AnswerPattern when get/2 retrieves an answer, i.e. AnswerInstance is obtained by first standardizing apart (renaming) the variables in Goal and AnswerPattern, and then backtracking over its alternative answers in a separate Prolog interpreter. Therefore, backtracking in the caller does not interfere with the new Interactor’s iteration over answers. Backtracking over the Interactor’s creation point, as such, makes it unreachable and therefore subject to garbage collection. An Interactor is stopped with the stop(Interactor) operation, that might or might not reclaim resources held by the engine. In our later implementation Lean Prolog, we are using a fully automated memory management mechanism where unreachable engines are automatically garbage collected. 
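A small hypothetical toplevel interaction, consistent with the semantics just described, illustrates the answer iteration protocol:

?- new_engine(X,member(X,[a,b]),E),
   get(E,R1), get(E,R2), get(E,R3),
   stop(E).
R1 = the(a), R2 = the(b), R3 = no.

Note that X itself stays unbound in the caller; only renamed instances of the answer pattern are returned.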
While this API clearly refers to operations going beyond Horn Clause logic, it can be shown that a fairly high-level pure Prolog semantics can be given to them in a style somewhat similar to what one would do when writing a Prolog interpreter in Haskell, as shown in section 4 of (Tarau and Majumdar 2009). So far, these operations provide a minimal API, powerful enough to switch tasks cooperatively between an engine and its “client”5 and emulate key Prolog built-ins like if-then-else and findall (Tarau 2000), as well as higher order operations like fold and best of (Tarau and Majumdar 2009). We give more details on emulations of these constructs in section 7. 6.3 A yield/return operation The following operations provide a “mixed-initiative” interaction mechanism, allowing more general data exchanges between an engine and its client. First, like the yield return construct of C# and the yield operation of Ruby and Python, our return/1 operation return(Term) saves the state of the engine, and transfers control and a result Term to its client. The client receives a copy of Term when using its get/2 operation. 5 Another Prolog engine using and engine’s services 16 Paul Tarau Note that an Interactor returns control to its client either by calling return/1 or when a computed answer becomes available. By using a sequence of return/get operations, an engine can provide a stream of intermediate/final results to its client, without having to backtrack. This mechanism is powerful enough to implement a complete exception handling mechanism simply by defining throw(E):-return(exception(E)). When combined with a catch(Goal,Exception,OnException), on the client side, the client can decide, upon reading the exception with get/2, if it wants to handle it or to throw it to the next level. 6.4 Coroutining Logic Engines Coroutining has been in use in Prolog systems mostly to implement constraint programming extensions. The typical mechanism involves attributed variables holding suspended goals that may be triggered by changes in the instantiation state of the variables. We discuss here a different form of coroutining, induced by the ability to switch back and forth between engines. The operations described so far allow an engine to return answers from any point in its computation sequence. The next step is to enable an engine’s client6 to inject new goals (executable data) to an arbitrary inner context of another engine. Two new primitives are needed: to_engine(Engine,Data) that is called by the client to send data to an Engine, and from_engine(Data) that is called by the engine to receive a client’s Data. A typical use case for the Interactor API looks as follows: 1. the client creates and initializes a new engine 2. the client triggers a new computation in the engine, parameterized as follows: (a) the client passes some data and a new goal to the engine and issues a get operation that passes control to it (b) the engine starts a computation from its initial goal or the point where it has been suspended and runs (a copy of) the new goal received from its client (c) the engine returns (a copy of) the answer, then suspends and returns control to its client 3. the client interprets the answer and proceeds with its next computation step 4. the process is fully reentrant and the client may repeat it from an arbitrary point in its computation 6 Another engine, that uses an engine’s services. 
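Before looking at the complete client/engine exchange that follows, here is a minimal illustration of the return/get handshake alone (a hypothetical example, consistent with the operations described above):

% an engine that emits two intermediate results before its final answer
countdown(X) :- return(tick(2)), return(tick(1)), X = done.

?- new_engine(X,countdown(X),E),
   get(E,A), get(E,B), get(E,C),
   stop(E).
A = the(tick(2)), B = the(tick(1)), C = the(done).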
Using a metacall mechanism like call/1 (which, interestingly enough, can itself be emulated in terms of engine operations (Tarau 2000) or directly through a source level transformation (Tarau and Boyer 1990)), one can implement a close equivalent of Ruby's yield statement as follows:

ask_engine(Engine,(Answer:-Goal),Result):-
  to_engine(Engine,(Answer:-Goal)),
  get(Engine,Result).

engine_yield(Answer):-
  from_engine((Answer:-Goal)),
  call(Goal),
  return(Answer).

The predicate ask_engine/3 sends a query (possibly built at runtime) to an engine, which, in turn, executes it and returns a result with an engine_yield operation. The query is typically a goal or a pattern of the form AnswerPattern:-Goal, in which case the engine interprets it as a request to instantiate AnswerPattern by executing Goal before returning the answer instance. As the following example shows, this allows the client to use, from outside, the (infinite) recursive loop of an engine as a form of updatable persistent state.

sum_loop(S1):-engine_yield(S1⇒S2),sum_loop(S2).

inc_test(R1,R2):-
  new_engine(_,sum_loop(0),E),
  ask_engine(E,(S1⇒S2:-S2 is S1+2),R1),
  ask_engine(E,(S1⇒S2:-S2 is S1+5),R2).

?- inc_test(R1,R2).
R1=the(0⇒2), R2=the(2⇒7).

Note also that after the parameters (the increments 2 and 5) are passed to the engine, results dependent on its state (the sums so far, 2 and 7) are received back. Moreover, note that an arbitrary goal is injected into the local context of the engine, where it is executed. The goal can then access the engine's state variables S1 and S2. As engines have separate garbage collectors (or, in simple cases, as a result of tail recursion), their infinite loops run in constant space, provided that no unbounded-size objects are created.

7 Source level extensions through new definitions

To give a glimpse of the expressiveness of the resulting Horn Clause + Engines language, first described in (Tarau 2000), we specify a number of built-in predicates known as "impossible to emulate" in Horn Clause Prolog (except by significantly lowering the level of abstraction and implementing something close to the virtual machine itself).

7.1 Negation, first_solution/3, if_then_else/3

These constructs are implemented simply by discarding all but the first solution produced by an engine. The predicate first_solution/3 (usable to implement once/1) returns the(X) or the atom no as the first solution of goal G:

first_solution(X,G,Answer):-
  new_engine(X,G,E),
  get(E,R),stop(E),
  Answer=R.

not(G):-first_solution(_,G,no).

The same applies to an emulation of Prolog's if-then-else construct, shown here as the predicate if_then_else/3, which, if Cond succeeds, calls Then, keeping the bindings produced by Cond, and otherwise calls Else after undoing the bindings of the call to Cond.

if_then_else(Cond,Then,Else):-
  new_engine(Cond,Cond,E),
  get(E,Answer),
  stop(E),
  select_then_else(Answer,Cond,Then,Else,Goal),
  Goal.

select_then_else(the(Cond),Cond,Then,_Else,Then).
select_then_else(no,_Cond,_Then,Else,Else).

Note that these operations require the use of CUT in typical Prolog library implementations.
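For example (hypothetical toplevel interactions):

?- if_then_else(member(X,[1,2]),Y=yes(X),Y=no).
% succeeds with X = 1, Y = yes(1): the bindings of the first solution
% of the condition are kept and no further solutions are explored

?- not(member(3,[1,2])).
% succeeds, as member(3,[1,2]) has no solutions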
While in the presence of engines, one can control the generation of multiple answers directly and only use the CUT when more complex control constructs are required (like in the case of embedded disjunctions), given the efficient WAM-level implementation of CUT and the frequent use of Prolog’s if-then-else construct, emulations of these built-ins can be seen mostly as an executable specification of their faster low-level counterparts. 7.2 Reflective meta-interpreters A simple Horn Clause+Engines meta-interpreter metacall/1 just reflects backtracking through element of/2 over deterministic engine operations. metacall(Goal):new_engine(Goal,Goal,E), element_of(E,Goal). element_of(E,X):-get(E,the(A)),select_from(E,A,X). select_from(_,A,A). select_from(E,_,X):-element_of(E,X). We can see metacall/1 as an operation which fuses two orthogonal language features provided by an engine: computing an answer of a Goal, and advancing to the next answer, through the source level operations element of/2 and select from/3 The BinProlog Experience 19 which ’borrow’ the ability to backtrack from the underlying interpreter. The existence of the simple meta-interpreter defined by metacall/1 indicates that first-class engines lift the expressiveness of Horn Clause logic significantly. 7.3 All-solution predicates All-solution predicates like findall/3 can be obtained by collecting answers through recursion. The (simplified) code consists of findall/3 that creates an engine and collect all answers/3 that recurses while new answers are available. findall(X,G,Xs):new_engine(X,G,E), get(E,Answer), collect_all_answers(Answer,E,Xs). collect_all_answers(no,_,[]). collect_all_answers(the(X),E,[X | Xs]):-get(E,Answer), collect_all_answers(Answer,E,Xs). Note that after the auxiliary engine created for findall/3 is discarded, heap space is needed only to hold the computed answers, as it is also the case with the conventional implementation of findall. Note also that the implementation handles embedded uses of findall naturally and that no low-level built-ins are needed. 7.4 Term copying and instantiation state detection As standardizing variables in the returned answer is part of the semantics of get/2, term copying is just computing a first solution to true/0. Implementing var/1 uses the fact that only free variables can have copies unifiable with two distinct constants. copy_term(X,CX):-first_solution(X,true,the(CX)). var(X):-copy_term(X,a),copy_term(X,b). The previous definitions have shown that the resulting language subsumes (through user provided definitions) constructs like negation as failure, if-then-else, once, copy term, findall - this suggests calling this layer Kernel Prolog. As Kernel Prolog contains negation as failure, following (Deransart et al. 1996) we can, in principle, use it for an executable specification of full Prolog. It is important to note here that the engine-based implementation serves in some cases just as a proof of expressiveness and that, in practice, operations like var/1 for which even a small overhead is unacceptable are implemented directly as builtins. Nevertheless, the engine-based source-level definitions provide in all cases a reference implementation usable as a specification for testing purposes. 7.5 Implementing exceptions While it is possible to implement an exception mechanism at source level as shown in (Tarau and Dahl 1994), through a continuation passing program transformation 20 Paul Tarau (binarization), one can use engines for the same purpose. 
By returning a new answer pattern as indication of an exception, a simple and efficient implementation of exceptions is obtained. We have actually chosen this implementation scenario in the BinProlog compiler which also provides a return/1 operation to exit an engine’s emulator loop with an arbitrary answer pattern, possibly before the end of a successful derivation. The (somewhat simplified) code is as follows: throw(E):-return(exception(E)). catch(Goal,Exception,OnException):new_engine(answer(Goal),Goal,Engine), element_of(Engine,Answer), do_catch(Answer,Goal,Exception,OnException,Engine). do_catch(exception(E),_,Exception,OnException,Engine):(E=Exception→ OnException % call action if matching ; throw(E) % throw again otherwise ), stop(Engine). do_catch(the(Goal),Goal,_,_,_). The throw/1 operation returns a special exception pattern, while the catch/3 operation stops the engine, calls a handler on matching exceptions or re-throws nonmatching ones to the next layer. If engines are lightweight, the cost of using them for exception handling is acceptable performance-wise, most of the time. However, it is also possible to reuse an engine (using load engine/3) - for instance in an inner loop, to define a handler for all exceptions that can occur, rather than wrapping up each call into a new engine with a catch. 7.6 Interactors and higher order constructs As a glimpse at the expressiveness of the Interactor API, we implement, in the tradition of higher order functional programming, a fold operation. The predicate efoldl can be seen as a generalization of findall connecting results produced by independent branches of a backtracking Prolog engine by applying to them a closure F using call/4: efoldl(Engine,F,R1,R2):-get(Engine,X),efoldl_cont(X,Engine,F,R1,R2). efoldl_cont(no,_Engine,_F,R,R). efoldl_cont(the(X),Engine,F,R1,R2):-call(F,R1,X,R),efoldl(Engine,F,R,R2). Classic functional programming idioms like reverse as fold are then implemented simply as: reverse(Xs,Ys):new_engine(X,member(X,Xs),E), efoldl(E,reverse_cons,[],Ys). reverse_cons(Y,X,[X | Y]). The BinProlog Experience 21 Note also the automatic deforestation effect (Wadler 1990) of this programming style - no intermediate list structures need to be built, if one wants to aggregate the values retrieved from an arbitrary generator engine with an operation like sum or product. 8 Extending the Prolog kernel using Interactors We review here a few typical extensions of the Prolog kernel showing that using first class Logic Engines results in a compact and portable architecture that is built almost entirely at source level. 8.1 Emulating dynamic databases with Interactors The gain in expressiveness coming directly from the view of Logic Engines as iterative answer generators (i.e. Fluents (Tarau 2000)) is significant. The notable exception is Prolog’s dynamic database, requiring the bidirectional communication provided by interactors. The key idea for implementing dynamic database operations with interactors is to use a logic engine’s state in an infinite recursive loop. First, a simple difference-list based infinite server loop is built: queue_server:-queue_server(Xs,Xs). queue_server(Hs1,Ts1):from_engine(Q),server_task(Q,Hs1,Ts1,Hs2,Ts2,A),return(A), queue_server(Hs2,Ts2). Next we provide the queue operations, needed to maintain the state of the database. To keep the code simple, we only focus in this section on operations resulting in additions at the end of the database. server_task(add_element(X),Xs,[X | Ys],Xs,Ys,yes). 
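% (clarifying comment, not in the original code: in server_task(Request,Hs1,Ts1,Hs2,Ts2,Answer),
%  Hs1-Ts1 and Hs2-Ts2 are the open-ended difference lists holding the stored clauses
%  before and after serving Request, and Answer is the term returned to the client)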
server_task(queue,Xs,Ys,Xs,Ys,Xs-Ys). server_task(delete_element(X),Xs,Ys,NewXs,Ys,YesNo):server_task_delete(X,Xs,NewXs,YesNo). Then we implement the auxiliary predicates supporting various queue operations. server_task_delete(X,Xs,NewXs,YesNo):select_nonvar(X,Xs,NewXs),!, YesNo=yes(X). server_task_delete(_,Xs,Xs,no). select_nonvar(X,XXs,Xs):-nonvar(XXs),XXs=[X | Xs]. select_nonvar(X,YXs,[Y | Ys]):-nonvar(YXs),YXs=[Y | Xs], select_nonvar(X,Xs,Ys). Next, we put it all together, as a dynamic database API. We can create a new engine server providing Prolog database operations: new_edb(Engine):-new_engine(done,queue_server,Engine). 22 Paul Tarau We can add new clauses to the database edb_assertz(Engine,Clause):ask_engine(Engine,add_element(Clause),the(yes)). and we can return fresh instances of asserted clauses edb_clause(Engine,Head,Body):ask_engine(Engine,queue,the(Xs-[])), member((Head:-Body),Xs). or remove them from the the database edb_retract1(Engine,Head):-Clause=(Head:-_Body), ask_engine(Engine,delete_element(Clause),the(yes(Clause))). Finally, the database can be discarded by stopping the engine that hosts it: edb_delete(Engine):-stop(Engine). Externally implemented dynamic databases can also be made visible as Interactors and reflection of the interpreter’s own handling of the Prolog database becomes possible. As an additional benefit, multiple databases can be provided. This simplifies adding module, object or agent layers at source level. By combining database and communication Interactors, support for mobile code and autonomous agents can be built as shown in (Tarau and Dahl 2001). Encapsulating external stateful objects like file systems, external database or Web service interfaces as Interactors can provide a uniform interfacing mechanism and reduce programmer learning curves in Prolog applications. A note on practicality is needed here. While indexing can be added at source level by using hashing on various arguments, the relative performance compared to compiled code, of this emulated database is 2-3 orders of magnitude slower. Therefore, in our various Prolog systems we have used this more as an executable specification rather than the default implementation of the database. 8.2 Refining control: a backtracking if-then-else Various Prolog implementations also provide a variant of if-then-else (called *->/3 in SWI-Prolog and if/3 in SICStus-Prolog) that either backtracks over multiple answers of its guard Cond (and calls its Then branch for each) or it switches to the Else branch if no such answers of Cond are found. With the same API, we can implement it at source level as follows: if_any(Cond,Then,Else):new_engine(Cond,Cond,Engine), get(Engine,Answer), select_then_or_else(Answer,Engine,Cond,Then,Else). select_then_or_else(no,_,_,_,Else):-Else. select_then_or_else(the(BoundCond),Engine,Cond,Then,_):backtrack_over_then(BoundCond,Engine,Cond,Then). backtrack_over_then(Cond,_,Cond,Then):-Then. The BinProlog Experience 23 backtrack_over_then(_,Engine,Cond,Then):get(Engine,the(NewBoundCond)), backtrack_over_then(NewBoundCond,Engine,Cond,Then). 8.3 Simplifying algorithms: Interactors and combinatorial generation Various combinatorial generation algorithms have elegant backtracking implementations. However, it is notoriously difficult (or inelegant, through the use of ad-hoc side effects) to compare answers generated by different OR-branches of Prolog’s search tree. 
8.3.1 Comparing alternative answers Optimization problems, selecting the “best” among answers produced on alternative branches can easily be expressed as follows: • running the generator in a separate logic engine • collecting and comparing the answers in a client controlling the engine The second step can actually be automated, provided that the comparison criterion is given as a predicate compare_answers(Comparator,First,Second,Best) to be applied to the engine with an efold operation: best_of(Answer,Comparator,Generator):new_engine(Answer,Generator,E), efoldl(E,compare_answers(Comparator),no,Best), Answer=Best. compare_answers(Comparator,A1,A2,Best):( A1\==no,call(Comparator,A1,A2)→Best=A1 ; Best=A2 ). ?-best_of(X,>,member(X,[2,1,4,3])). X=4 Note that in the call to compare answers the closure compare answers(Comparator), gets the extra arguments A1 and A2 out of which, depending on the comparison, Best is selected at each step of efoldl. 8.3.2 Encapsulating infinite computation streams An infinite stream of natural numbers is implemented as: loop(N):-return(N),N1 is N+1,loop(N1). The following example shows a simple space efficient generator for the infinite stream of prime numbers: 24 Paul Tarau prime(P):-prime_engine(E),element_of(E,P). prime_engine(E):-new_engine(_,new_prime(1),E). new_prime(N):- N1 is N+1, (test_prime(N1) → true ; return(N1)), new_prime(N1). test_prime(N):-M is integer(sqrt(N)),between(2,M,D),N mod D =:=0 Note that the program has been wrapped, using the element of predicate to provide one answer at a time through backtracking. Alternatively, a forward recursing client can use the get(Engine) operation to extract primes one at a time from the stream. 9 A short history of BinProlog and its derivatives The first iteration of BinProlog goes back to around 1990. Along the years it has pioneered some interesting architectural choices while adopting a number of new (at the time) implementation ideas from others. From 1999 on, we have also released a Java port of BinProlog called Jinni Prolog, using essentially the same runtime system and compiler as BinProlog and resulting in some new developments happening either on the Java or C side. Some of BinProlog’s features are interesting to mention mostly for historical reasons - as they either became part of various Prolog systems, when genuinely practical, or, on the contrary, have turned out to have only limited, program-specific benefits. Among features for which BinProlog has been either a pioneer or an early adopter in the world of Prolog implementations, that have not been covered in this paper are: • an efficient implementation of findall using a heap splitting technique resulting in a single copy operation (Tarau 1992) • a multithreading API using native threads under explicit programmer control (around 1992-1993) • a blackboard architecture using Linda coordination between threads (Tarau and De Bosschere 1993a; De Bosschere and Tarau 1996) • backtrackable global variables (around 1993) • a mechanism for “partial compilation” to C (Tarau et al. 1994; Tarau et al. 1996) • using continuations to implement Prolog extensions, including catch/throw (Tarau and Dahl 1994) • cyclic terms (originating in Prolog III) and subterm-sharing implemented using a space efficient value trailing mechanism (around 1993) • memoing of goal-independent answer substitutions for deterministic calls (Tarau and De Bosschere 1993b; Tarau et al. 1997) • a DCG variant using backtrackable state updates (Dahl et al. 
1997) • on the fly compilation of dynamic code, based on runtime call/update statistics (around 1994-1995) (a technique similar to the HotSpot compilation now popular in Java VMs) The BinProlog Experience 25 • segment preserving copying GC (Demoen et al. 1996) • assumption grammars - a mechanism extending Prolog grammar with hypothetical reasoning (Dahl et al. 1997) • strong mobility of code and data by transporting live continuations between Prolog processes (Tarau and Dahl 1998; Tarau and Dahl 2001) • Prolog based shared virtual worlds supporting simple natural language interactions (Tarau et al. 1999) Elements of the BinProlog continuation passing implementation model have been successfully reused in a few different Prolog systems: • jProlog (http://www.cs.kuleuven.be/~bmd/PrologInJava/) written in Java, mostly by Bart Demoen with some help from Paul Tarau in 1997, used a Prolog to Java translator with binarization as a source to source transformation • Jinni Prolog, written in Java by Paul Tarau (http://www.binnetcorp.com/ Jinni) actively developed since 1998, first as continuation passing interpreter and later as a BinWAM compiler • A Java port of Free Prolog (a variant of BinProlog 2.0) by Peter Wilson http://www.binnetcorp.com/OpenCode/free_prolog.html (around 1999) with additions and fixes by Paul Tarau • Kernel Prolog, a continuation passing interpreter written in Java by Paul Tarau (http://www.binnetcorp.com/OpenCode/kernelprolog.html) around 1999 • PrologCafe (http://kaminari.scitec.kobe-u.ac.jp/PrologCafe/), a fairly complete Prolog system derived from jProlog implemented by Mutsunori Banbara and Naoyuki Tamura • P# derived from PrologCafe, written in C# by Jon Cook (http://homepages. inf.ed.ac.uk/jcook/) • Carl Friedrich Bolz’s Python-based Prolog interpreter and JIT compiler, using AND+OR-continuations directly, without a program transformation (Bolz et al. 2010) • Lean Prolog - a new first-class Logic Engines based lightweight Prolog system using two identical C and Java-based BinWAM runtime systems to balance performance and flexibility implemented by Paul Tarau (work in progress, started in 2008) 10 Related work Most modern Prolog implementations are centered around the Warren Abstract Machine (WAM) (Warren 1983; Aı̈t-Kaci 1991) which has stood amazingly well the test of time. In this sense BinProlog’s BinWAM is no exception, although its overall ”rate of mutations” with respect to the original WAM is probably comparable to systems like Neng-Fa Zhou’s TOAM or TOAM-Jr (Zhou et al. 1990; Zhou 2007) or Jan Wielemaker’s SWI-Prolog (Wielemaker 2003) and definitely higher, if various extensions are factored out, than the basic architecture of systems like GNU-Prolog (Diaz and Codognet 2001), SICStus Prolog (Carlsson et al. ), Ciao 26 Paul Tarau (Carro and Hermenegildo 1999), YAP (da Silva and Costa 2006) or XSB (Swift and Warren 1994). We refer to (Van Roy 1994) and (Demoen and Nguyen 2000) for extensive comparisons of compilation techniques and abstract machines for various logic programming systems. Techniques for adding built-ins to binary Prolog are first discussed in (Demoen and Mariën 1992), where an implementation oriented view of binary programs that a binary program is simply one that does not need an environment in the WAM is advocated. Their paper also describes a technique for implementing Prolog’s CUT in a binary Prolog compiler. Extensions to BinProlog’s AND-continuation passing transformation to also cover OR-continuations are described in (Lindgren 1994). 
Multiple Logic Engines have been present in one form or another in various parallel implementation of logic programming languages (Shapiro 1989; Ueda 1985). Among the earliest examples of parallel execution mechanisms for Prolog, ANDparallel (Hermenegildo 1986) and OR-parallel (Lusk et al. 1993) execution models are worth mentioning. However, with the exception of this author’s papers on this topic (Tarau 1999a; Tarau 1999c; Tarau 2000; Tarau and Dahl 2001; Tarau 2008a; Tarau and Majumdar 2009; Tarau 2011) we have not found an extensive use of first-class Logic Engines as a mechanism to enhance language expressiveness, independently of their use for parallel programming, with maybe the exception of (Casas et al. 2007) where such an API is discussed for parallel symbolic languages in general. In combination with multithreading (Tarau 2011), our own engine-based API bears similarities with various other Prolog systems, notably (Carro and Hermenegildo 1999; Wielemaker 2003) while focusing on uncoupling “concurrency for performance” and “concurrency for expressiveness”. The use of a garbage collected, infinitely looping recursive program to encapsulate state goes back to early work in logic programming and it is likely to be common in implementing various server programs. However an infinitely recursive pure Horn Clause program is an “information sink” that does not communicate with the outside world on its own. The minimal API (to engine/2 and from engine/1) described in this paper provides interoperation with such programs, in a generic way. 11 Conclusion At the time of writing, BinProlog has been around for almost 20 years. As mostly a single-implementor system, BinProlog has not kept up with systems that have benefited from a larger implementation effort in terms of optimizations and extensions like constraints or tabling, partly also because our research interests have diverged towards areas as diverse as natural language processing, logic synthesis or computational mathematics. On the other hand, re-implementations in Java and a number of experimental features make it still relevant as a due member of the unusually rich and colorful family of Prolog systems. Our own re-implementations of BinProlog’s virtual machine have been extended with first-class Logic Engines that can be used to build on top of pure Prolog a prac- The BinProlog Experience 27 tical Prolog system, including dynamic database operations, entirely at source level. In a broader sense, interactors can be seen as a starting point for rethinking fundamental programming language constructs like Iterators and Coroutining in terms of language constructs inspired by performatives in agent oriented programming. Along these lines, we are currently building a new BinWAM based implementation, Lean Prolog, that combines a minimal WAM kernel with an almost entirely source-level interactor-based implementation of Prolog’s built-ins and libraries. We believe that under this new incarnation some of BinProlog’s architectural choices are likely to have an interesting impact on the design and implementation of future logic programming languages. Acknowledgements We are thankful to Marı́a Garcı́a de la Banda, Bart Demoen, Manuel Hermenegildo, Joachim Schimpf and D.S. Warren and to the anonymous reviewers for their careful reading and thoughtful comments on earlier drafts of this paper. 
Special thanks go to Koen De Bosschere, Bart Demoen, Geert Engels, Ulrich Neumerkel, Satyam Tyagi and Peter Wilson for their contribution to the implementation of BinProlog and Jinni Prolog components. Richard O’Keefe’s public domain Prolog parser and writer have been instrumental in turning BinProlog into a self-contained Prolog system quickly. Fruitful discussions with Hassan Aı̈t-Kaci, Patrice Boizumault, Michel Boyer, Mats Carlsson, Jacques Cohen, Veronica Dahl, Bart Demoen, Thomas Lindgren, Arun Majumdar, Olivier Ridoux, Kostis Sagonas, Sten-Åke Tärnlund, D.S. Warren, Neng-Fa Zhou and comments from a large number of BinProlog users helped improving its design and implementation. References Aı̈t-Kaci, H. 1991. Warren’s Abstract Machine: A Tutorial Reconstruction. MIT Press. Bolz, C. F., Leuschel, M., and Schneider, D. 2010. Towards a Jitting VM for Prolog Execution. In PPDP ’10: Proceedings of the 12th international ACM SIGPLAN symposium on Principles and practice of declarative programming. ACM, New York, NY, USA, 99–108. Carlsson, M., Widen, J., Andersson, J., Andersson, S., Boortz, K., Nilsson, H., and Sjoland, T. SICStus Prolog user’s manual. Carro, M. and Hermenegildo, M. V. 1999. Concurrency in Prolog Using Threads and a Shared Database. In ICLP. 320–334. Casas, A., Carro, M., and Hermenegildo, M. 2007. Towards a high-level implementation of flexible parallelism primitives for symbolic languages. In PASCO ’07: Proceedings of the 2007 international workshop on Parallel symbolic computation. ACM, New York, NY, USA, 93–94. Clark, D. W. and Green, C. C. 1977. An empirical study of list structure in lisp. Commun. ACM 20, 78–87. Conway, M. E. 1963. Design of a Separable Transition-Diagram Compiler. Communications of the ACM 6, 7, 396–408. 28 Paul Tarau da Silva, A. F. and Costa, V. S. 2006. The design and implementation of the yap compiler: An optimizing compiler for logic programming languages. In ICLP, S. Etalle and M. Truszczynski, Eds. Lecture Notes in Computer Science, vol. 4079. Springer, 461–462. Dahl, V., Tarau, P., and Li, R. 1997. Assumption Grammars for Processing Natural Language. In Proceedings of the Fourteenth International Conference on Logic Programming, L. Naish, Ed. MIT press, 256–270. De Bosschere, K. and Tarau, P. 1996. Blackboard-based Extensions in Prolog. Software — Practice and Experience 26, 1 (Jan.), 49–69. Demoen, B. 1992. On the transformation of a prolog program to a more efficient binary program. In LOPSTR. 242–252. Demoen, B., Engels, G., and Tarau, P. 1996. Segment Preserving Copying Garbage Collection for WAM based Prolog. In Proceedings of the 1996 ACM Symposium on Applied Computing. ACM Press, Philadelphia, 380–386. Demoen, B. and Mariën, A. 1992. Implementation of Prolog as binary definite Programs. In Logic Programming, RCLP Proceedings, A. Voronkov, Ed. Number 592 in Lecture Notes in Artificial Intelligence. Springer-Verlag, Berlin, Heidelberg, 165–176. Demoen, B. and Nguyen, P.-L. 2000. So many wam variations, so little time. In CL ’00: Proceedings of the First International Conference on Computational Logic. SpringerVerlag, London, UK, 1240–1254. Deransart, P., Ed-Dbali, A., and Cervoni, L. 1996. Prolog: The Standard. SpringerVerlag, Berlin. ISBN: 3-540-59304-7. Diaz, D. and Codognet, P. 2001. Design and implementation of the gnu prolog system. Journal of Functional and Logic Programming 2001, 6. FIPA. 1997. FIPA 97 specification part 2: Agent communication language. Version 2.0. Hermenegildo, M. V. 1986. 
An abstract machine for restricted and-parallel execution of logic programs. In Proceedings on Third international conference on logic programming. Springer-Verlag New York, Inc., New York, NY, USA, 25–39. Lindgren, T. 1994. A continuation-passing style for prolog. In ILPS ’94: Proceedings of the 1994 International Symposium on Logic programming. MIT Press, Cambridge, MA, USA, 603–617. Liskov, B., Atkinson, R. R., Bloom, T., Moss, J. E. B., Schaffert, C., Scheifler, R., and Snyder, A. 1981. CLU Reference Manual. Lecture Notes in Computer Science, vol. 114. Springer. Lloyd, J. 1987. Foundations of Logic Programming. Symbolic computation — Artificial Intelligence. Springer-Verlag, Berlin. Second edition. Lusk, E., Mudambi, S., Gmbh, E., and Overbeek, R. 1993. Applications of the aurora parallel prolog system to computational molecular biology. In In Proc. of the JICSLP’92 Post-Conference Joint Workshop on Distributed and Parallel Implementations of Logic Programming Systems, Washington DC. MIT Press. Nässén, H., Carlsson, M., and Sagonas, K. 2001. Instruction merging and specialization in the sicstus prolog virtual machine. In Proceedings of the 3rd ACM SIGPLAN international conference on Principles and practice of declarative programming. PPDP ’01. ACM, New York, NY, USA, 49–60. Neumerkel, U. 1992. Specialization of Prolog Programs with Partially Static Goals and Binarization. Phd thesis. Technische Universität Wien. Shapiro, E. 1989. The family of concurrent logic programming languages. ACM Comput. Surv. 21, 3, 413–510. Swift, T. and Warren, D. S. 1994. An abstract machine for SLG resolution: definite programs. In Logic Programming - Proceedings of the 1994 International Symposium, M. Bruynooghe, Ed. The MIT Press, Massachusetts Institute of Technology, 633–652. The BinProlog Experience 29 Tarau, P. 1991. A Simplified Abstract Machine for the Execution of Binary Metaprograms. In Proceedings of the Logic Programming Conference’91. ICOT, Tokyo, 119–128. Tarau, P. 1992. Ecological Memory Management in a Continuation Passing Prolog Engine. In Memory Management International Workshop IWMM 92 Proceedings, Y. Bekkers and J. Cohen, Eds. Number 637 in Lecture Notes in Computer Science. Springer, 344–356. Tarau, P. 1998. Towards Inference and Computation Mobility: The Jinni Experiment. In Proceedings of JELIA’98, LNAI 1489, J. Dix and U. Furbach, Eds. Springer, Dagstuhl, Germany, 385–390. invited talk. Tarau, P. 1999a. Inference and Computation Mobility with Jinni. In The Logic Programming Paradigm: a 25 Year Perspective, K. Apt, V. Marek, and M. Truszczynski, Eds. Springer, 33–48. ISBN 3-540-65463-1. Tarau, P. 1999b. Intelligent Mobile Agent Programming at the Intersection of Java and Prolog. In Proceedings of The Fourth International Conference on The Practical Application of Intelligent Agents and Multi-Agents. London, U.K., 109–123. Tarau, P. 1999c. Multi-Engine Horn Clause Prolog. In Proceedings of Workshop on Parallelism and Implementation Technology for (Constraint) Logic Programming Languages, G. Gupta and E. Pontelli, Eds. Las Cruces, NM. Tarau, P. 2000. Fluents: A Refactoring of Prolog for Uniform Reflection and Interoperation with External Objects. In Computational Logic–CL 2000: First International Conference, J. Lloyd, Ed. London, UK. LNCS 1861, Springer-Verlag. Tarau, P. 2004a. Agent Oriented Logic Programming Constructs in Jinni 2004. In Logic Programming, 20-th International Conference, ICLP 2004, B. Demoen and V. Lifschitz, Eds. Springer, LNCS 3132, Saint-Malo, France, 477–478. 
Tarau, P. 2004b. Orthogonal Language Constructs for Agent Oriented Logic Programming. In Proceedings of CICLOPS 2004, Fourth Colloquium on Implementation of Constraint and Logic Programming Systems, M. Carro and J. F. Morales, Eds. Saint-Malo, France. Tarau, P. 2006. BinProlog 11.x Professional Edition: Advanced BinProlog Programming and Extensions Guide. Tech. rep., BinNet Corp. Tarau, P. 2008a. Logic Engines as Interactors. In Logic Programming, 24-th International Conference, ICLP, M. Garcia de la Banda and E. Pontelli, Eds. Springer, LNCS, Udine, Italy, 703–707. Tarau, P. 2008b. The Jinni Prolog Compiler: a fast and flexible Prolog-in-Java. http://www.binnetcorp.com/download/jinnidemo/JinniUserGuide.html. Tarau, P. 2011. Concurrent programming constructs in multi-engine prolog. In Proceedings of DAMP’11: ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming. ACM, New York, NY, USA. Tarau, P. and Boyer, M. 1990. Elementary Logic Programs. In Proceedings of Programming Language Implementation and Logic Programming, P. Deransart and J. Maluszyński, Eds. Number 456 in Lecture Notes in Computer Science. Springer, 159–173. Tarau, P. and Boyer, M. 1993. Nonstandard Answers of Elementary Logic Programs. In Constructing Logic Programs, J. Jacquet, Ed. J.Wiley, 279–300. Tarau, P. and Dahl, V. 1994. Logic Programming and Logic Grammars with First-order Continuations. In Proceedings of LOPSTR’94, LNCS, Springer. Pisa. Tarau, P. and Dahl, V. 1998. Mobile Threads through First Order Continuations. In Proceedings of APPAI-GULP-PRODE’98. Coruna, Spain. Tarau, P. and Dahl, V. 2001. High-Level Networking with Mobile Code and First Order AND-Continuations. Theory and Practice of Logic Programming 1, 3 (May), 359–380. Cambridge University Press. 30 Paul Tarau Tarau, P. and De Bosschere, K. 1993a. Blackboard Based Logic Programming in BinProlog. In Proceedings of the fifth University of New Brunswick Artificial Intelligence Symposium, L. Goldfarb, Ed. Fredericton, N.B., 137–147. Tarau, P. and De Bosschere, K. 1993b. Memoing with Abstract Answers and Delphi Lemmas. In Logic Program Synthesis and Transformation, Y. Deville, Ed. SpringerVerlag. Louvain-la-Neuve, 196–209. Tarau, P., De Bosschere, K., Dahl, V., and Rochefort, S. 1999. LogiMOO: an Extensible Multi-User Virtual World with Natural Language Control. Journal of Logic Programming 38, 3 (Mar.), 331–353. Tarau, P., De Bosschere, K., and Demoen, B. 1996. Partial Translation: Towards a Portable and Efficient Prolog Implementation Technology. Journal of Logic Programming 29, 1–3 (Nov.), 65–83. Tarau, P., De Bosschere, K., and Demoen, B. 1997. On Delphi Lemmas And other Memoing Techniques For Deterministic Logic Programs. Journal of Logic Programming 30, 2 (Feb.), 145–163. Tarau, P., Demoen, B., and De Bosschere, K. 1994. The Power of Partial Translation: an Experiment with the C-ification of Binary Prolog. In Proceedings of the First COMPULOG-NOE Area Meeting on Parallelism and Implementation Technology, M. Garcı́a de la Banda, J. and H. M., Eds. Madrid/Spain, 3–17. Tarau, P. and Majumdar, A. 2009. Interoperating Logic Engines. In Practical Aspects of Declarative Languages, 11th International Symposium, PADL 2009. Springer, LNCS 5418, Savannah, Georgia, 137–151. Tarau, P. and Neumerkel, U. 1993. Compact Representation of Terms and Instructions in the BinWAM. Tech. Rep. 93-3, Dept. d’Informatique, Université de Moncton. Nov. available by ftp from clement.info.umoncton.ca. Tyagi, S. and Tarau, P. 2001. 
A Most Specific Method Finding Algorithm for Reflection Based Dynamic Prolog-to-Java Interfaces. In Proceedings of PADL’2001, I. Ramakrishan and G. Gupta, Eds. Las Vegas. Springer-Verlag. Ueda, K. 1985. Guarded horn clauses. In Logic Programming ’85, Proceedings of the 4th Conference, Tokyo, Japan, July 1-3, 1985, E. Wada, Ed. Lecture Notes in Computer Science, vol. 221. Springer, 168–179. Van Roy, P. 1994. 1983-1993: The wonder years of sequential prolog implementation. Journal of Logic Programming. Wadler, P. 1990. Deforestation: Transforming programs to eliminate trees. Theor. Comput. Sci. 73, 2, 231–248. Warren, D. H. D. 1983. An Abstract Prolog Instruction Set. Technical Note 309, SRI International. Oct. Wielemaker, J. 2003. Native Preemptive Threads in SWI-Prolog. In ICLP, C. Palamidessi, Ed. Lecture Notes in Computer Science, vol. 2916. Springer, 331–345. Zhou, N.-F. 2007. A register-free abstract prolog machine with jumbo instructions. In ICLP, V. Dahl and I. Niemelä, Eds. Lecture Notes in Computer Science, vol. 4670. Springer, 455–457. Zhou, N.-F., Takagi, T., and Ushijima, K. 1990. A matching tree oriented abstract machine for prolog. 159–173.
arXiv:1511.00824v3 [math.CT] 30 Dec 2016 CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES CLEMENS BERGER AND DOMINIQUE BOURN Abstract. We study nilpotency in the context of exact Mal’tsev categories taking central extensions as the primitive notion. This yields a nilpotency tower which is analysed from the perspective of Goodwillie’s functor calculus. We show in particular that the reflection into the subcategory of n-nilpotent objects is the universal endofunctor of degree n if and only if every n-nilpotent object is n-folded. In the special context of a semi-abelian category, an object is n-folded precisely when its Higgins commutator of length n + 1 vanishes. Contents Introduction 1. Central extensions and regular pushouts 2. Affine objects and nilpotency 3. Affine morphisms and central reflections 4. Aspects of nilpotency 5. Quadratic identity functors 6. Identity functors with bounded degree Acknowledgements References 1 6 17 23 32 36 46 67 67 Introduction This text investigates nilpotency in the context of exact Mal’tsev categories. Our purpose is twofold: basic phenomena of nilpotency are treated through universal properties rather than through commutator calculus, emphasising the fundamental role played by central extensions; nilpotency is then linked to an algebraic form of Goodwillie’s functor calculus [29]. This leads to a global understanding of nilpotency in terms of functors with bounded degree. A Mal’tsev category is a finitely complete category in which reflexive relations are equivalence relations [17, 19]. Important examples of exact Mal’tsev categories are Date: November 25, 2016. 1991 Mathematics Subject Classification. 17B30, 18D35, 18E10, 18G50, 20F18. Key words and phrases. Nilpotency, Mal’tsev category, central extension, Goodwillie functor calculus. 1 2 CLEMENS BERGER AND DOMINIQUE BOURN Mal’tsev varieties [53] and semi-abelian categories [47]. The simplicial objects of an exact Mal’tsev category are “internal” Kan complexes (cf. [17, 62]). Nilpotency is classically understood via the vanishing of iterated commutators: in a Mal’tsev variety by means of so-called Smith commutators [60, 27], in a semi-abelian category by means of so-called Huq commutators [43, 25]. The first aim of this text is to promote another point of view which seems more intrinsic to us and is based on the notion of central extension, by which we mean a regular epimorphism with central kernel relation. The n-nilpotent objects are defined to be those which can be linked to a terminal object by a chain of n consecutive central extensions. This notion of nilpotency is equivalent to the two aforementioned notions in their respective contexts (cf. Proposition 2.14). In particular, we get the usual notions of n-nilpotent group, n-nilpotent Lie algebra and n-nilpotent loop [15]. A category is called n-nilpotent if all its objects are n-nilpotent. For any exact Mal’tsev category with binary sums, the full subcategory spanned by the n-nilpotent objects is a reflective Birkhoff subcategory (cf. Theorem 2.12). This generalises the analogous known results for Mal’tsev varieties [60, 27] and semiabelian categories [25]. We denote the reflection into the subcategory of n-nilpotent n objects by I n and the unit of the adjunction at an object X by ηX : X ։ I n (X). 
Since an n-nilpotent object is a fortiori (n + 1)-nilpotent, the different reflections assemble into the following nilpotency tower X ❄PP qq⑧⑧q⑧ ❄❄❄PPPPP q q q ⑧ ❄❄ PPP qqq ⑧⑧⑧ ❄❄ n PPP n+1 q 2 q PPPηX ⑧ η ❄❄ηX 1 X qq  ⑧⑧ ηX q PP' '  ❄  q ⑧  q  xqx o q 1 2 n o o o o o o o ⋆ I (X) I (X) I (X) I n+1 (X) o o in which the successive quotient maps I n+1 (X) ։ I n (X) are central extensions. Pointed categories with binary sums will be ubiquitous throughout the text; we call them σ-pointed for short. Among σ-pointed exact Mal’tsev categories we characterise the n-nilpotent ones as those for which the comparison maps θX,Y : X + Y ։ X × Y are (n − 1)-fold central extensions (cf. Theorem 4.3). The nilpotency class of a σpointed exact Mal’tsev category measures thus the discrepancy between binary sum and binary product. If n = 1, binary sum and binary product coincide, and all objects are abelian group objects. A σ-pointed exact Mal’tsev category is 1-nilpotent if and 1 : X ։ I 1 (X) only if it is an abelian category (cf. Corollary 4.4). The unit ηX of the first Birkhoff reflection is abelianisation (cf. Proposition 4.2). Moreover, the successive kernels of the nilpotency tower are abelian group objects as well. This situation is reminiscent of what happens in Goodwillie’s functor calculus [29] where “infinite loop spaces” play the role of abelian group objects. The second aim of our study of nilpotency was to get a deeper understanding of this analogy. Goodwillie’s notions [29] of cross-effect and degree of a functor translate well into our algebraic setting: for each (n + 1)-tuple (X1 , . . . , Xn+1 ) of objects of a σ-pointed CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 3 category and each based endofunctor F we define an (n + 1)-cube ΞF X1 ,...,Xn+1 consisting of the images F (Xi1 + · · · + Xik ) for all subsequences of (X1 , . . . , Xn+1 ) together with the obvious contraction maps. We say that a functor F is of degree ≤ n if these (n + 1)-cubes are limit-cubes for all choices of (X1 , . . . , Xn+1 ). F F We denote by θX : F (X1 + · · · + Xn+1 ) → PX the comparison map 1 ,...,Xn+1 1 ,...,Xn+1 towards the limit of the punctured (n + 1)-cube so that F is of degree ≤ n if and only F F if θX is invertible for each choice of (n + 1)-tuple. The kernel of θX 1 ,...,Xn+1 1 ,...,Xn+1 F is a (n + 1)-st cross-effect of F , denoted crn+1 (X1 , . . . , Xn+1 ). A based endofunctor F is linear, i.e. of degree ≤ 1, if and only if F takes binary sums to binary products. In a semi-abelian category, the second cross-effects cr2F (X, Y ) measure thus the failure of linearity of F . If F is the identity functor, we drop F from the notation so that cr2 (X, Y ) denotes the kernel of the comparison map θX,Y : X + Y → X × Y . This kernel is often denoted X ⋄ Y and called the co-smash product of X and Y (cf. [16] and Remarks 3.10 and 6.2). An endofunctor of a semi-abelian (or homological [4]) category is of degree ≤ n if and only if all its cross-effects of order n + 1 vanish. For functors taking values in abelian categories, our cross-effects agree with the original cross-effects of EilenbergMac Lane [23] (cf. Remark 6.2). For functors taking values in σ-pointed categories with pullbacks, our cross-effects agree with those of Hartl-Loiseau [38] and Hartl-Van der Linden [39], defined as kernel intersections (cf. Definition 5.1). 
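As a point of reference, in the category of groups, where the binary sum is the free product, this second cross-effect is a classical object: the comparison map is the canonical surjection from the free product onto the direct product, and
$$X \diamond Y \;=\; K[\theta_{X,Y}] \;=\; \ker\bigl(X \ast Y \twoheadrightarrow X \times Y\bigr) \;=\; \bigl\langle\, [x,y] \;:\; x \in X,\ y \in Y \,\bigr\rangle^{X \ast Y},$$
the normal closure in $X \ast Y$ of the commutators between the two free factors. Its non-vanishing for non-trivial $X$ and $Y$ expresses exactly the failure of linearity of the identity functor of groups.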
A Goodwillie type characterisation of the nilpotency tower amounts to the property that for each n, the reflection I n into the Birkhoff subcategory of n-nilpotent objects is the universal endofunctor of degree ≤ n. In fact, every endofunctor of degree ≤ n of a σ-pointed exact Mal’tsev category takes values in n-nilpotent objects (cf. Proposition 6.5). The reflection I n is of degree ≤ n if and only if the identity functor of the Birkhoff subcategory of n-nilpotent objects itself is of degree ≤ n. In the present article we have mainly been investigating this last property. The property holds for n = 1 because the identity functor of an abelian category is linear. However, already for n = 2, there are examples of 2-nilpotent semi-abelian categories which are not quadratic, i.e. do not have an identity functor of degree ≤ 2 (cf. Section 6.5). We show that a σ-pointed exact Mal’tsev category is quadratic if and only if the category is 2-nilpotent and algebraically distributive, i.e. endowed with isomorphisms (X × Z) +Z (Y × Z) ∼ = (X + Y ) × Z for all objects X, Y, Z (cf. Corollary 5.16). Since algebraic distributivity is preserved under Birkhoff reflection, the subcategory of 2-nilpotent objects of an algebraically distributive exact Mal’tsev category is always quadratic (cf. Theorem 5.18). Algebraic distributivity is a consequence of the existence of centralisers for subobjects as shown by Gray and the second author [13]. For pointed Mal’tsev categories, it also follows from algebraic coherence in the sense of Cigoli-Gray-Van der Linden [20]. Our quadraticity result implies that iterated Huq commutator [X, [X, X]] and ternary Higgins commutator [X, X, X] coincide for each object X of an algebraically distributive semi-abelian category (cf. Corollary 5.19 and [20, Corollary 7.2]). There is a remarkable duality for σ-pointed 2-nilpotent exact Mal’tsev categories: algebraic distributivity amounts to algebraic codistributivity, i.e. to isomorphisms 4 CLEMENS BERGER AND DOMINIQUE BOURN (X × Y ) + Z ∼ = (X + Z) ×Z (Y + Z) for all X, Y, Z (cf. Proposition 5.15). Indeed, the difference between 2-nilpotency and quadraticity is precisely algebraic codistributivity (cf. Theorem 5.5). An extension of this duality to all n ≥ 2 is crucial in relating general nilpotency to identity functors with bounded degree. The following characterisation is very useful: The identity functor of a σ-pointed exact Mal’tsev category E is of degree ≤ n if and only if all its objects are n-folded (cf. Proposition 6.5). An object is n-folded (cf. Definition 6.3) if the (n + 1)-st X folding map δn+1 : X + · · · + X → X factors through the comparison map θX,...,X : X + · · · + X ։ PX,...,X . In a varietal context this can be expressed in combinatorial terms (cf. Remark 6.4). The full subcategory Fldn (E) spanned by the n-folded objects is a reflective Birkhoff subcategory of E, and the reflection J n : E → Fldn (E) is the universal endofunctor of degree ≤ n (cf. Theorem 6.8). Every n-folded object is n-nilpotent (cf. Proposition 6.13) while the converse holds if and only if the other Birkhoff reflection I n : E → Niln (E) is also of degree ≤ n. In the context of semi-abelian categories, closely related results appear in the work of Hartl and his coauthors [38, 39, 40], although formulated slightly differently. In a semi-abelian category, an object X is n-folded if and only if its Higgins commutator of length n + 1 vanishes (cf. Remark 6.4), where the latter is defined as the image of the composite map crn+1 (X, . . . 
, X) → X + · · · + X → X, cf. [38, 39, 54]. The universal n-folded quotient J n (X) may then be identified with the quotient of X by the Higgins commutator of length n + 1 in much the same way as the universal nnilpotent quotient I n (X) may be identified with the quotient of X by the iterated Huq commutator of length n + 1. It was Hartl’s insight that Higgins commutators are convenient for extending the “polynomial functors” of Eilenberg-Mac Lane [23] to a semi-abelian context. Our treatment in the broader context of exact Mal’tsev categories follows more closely Goodwillie’s functor calculus [29]. In a σ-pointed exact Mal’cev category, abelianisation I 1 is the universal endofunctor J 1 of degree ≤ 1 (cf. Mantovani-Metere [54]). For n > 1 however, the universal endofunctor J n of degree ≤ n is in general a proper quotient of the n-th Birkhoff reflection I n (cf. Corollary 6.12). In order to show that even in a semi-abelian variety the two endofunctors may disagree, we exhibit a Moufang loop of order 16 (subloop of Cayley’s octonions) which is 2-nilpotent but not 2-folded (cf. Section 6.5). Alternatively, Mostovoy’s modified lower central series of a loop [55] yields other examples of a similar kind provided the latter agrees with the successive Higgins commutators of the loop (cf. [39, Example 2.15] and [61]). We did not find a simple categorical structure that would entail the equivalence between n-nilpotency and n-foldedness for all n. As a first step in this direction we show that an n-nilpotent semi-abelian category has an identity functor of degree ≤ n if and only if its n-th cross-effect is multilinear (cf. Theorem 6.22). We also show that the nilpotency tower has the desired universal property if and only if it is homogeneous, i.e. for each n, the n-th kernel functor is of degree ≤ n (cf. Theorem 6.23). This is preserved under Birkhoff reflection (cf. Theorem 6.24). The categories of groups and of Lie algebras have homogeneous nilpotency towers so that a group, resp. Lie algebra is n-nilpotent if and only if it is n-folded, and the Birkhoff reflection CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 5 I n is here indeed the universal endofunctor of degree ≤ n. The category of triality groups [22, 28, 37] also has a homogeneous nilpotency tower although it contains the category of Moufang loops as a full coreflective subcategory, and the latter has an inhomogeneous nilpotency tower (cf. Section 6.5). There are several further ideas closely related to the contents of this article which we hope to address in future work. Let us mention two of them: The associated graded object of the nilpotency tower ⊕n≥1 K[I n (X) ։ I n−1 (X)] is a functor in X taking values in graded abelian group objects. For the category of groups this functor actually takes values in graded Lie rings and as such preserves n-nilpotent objects and free objects, cf. Lazard [51]. It is likely that for a large class of semi-abelian categories, the associated graded object of the nilpotency tower carries a similar algebraic structure. It would be interesting to establish the relationship between this algebraic structure and the cross-effects of the identity functor. It follows from [17, Theorem 4.2] and [59, Theorem IV.4] that the simplicial objects of a pointed Mal’tsev variety carry a Quillen model structure in which the weak equivalences are the maps inducing a quasi-isomorphism on Moore complexes. 
Such a model structure also exists for the simplicial objects of a semi-abelian category with enough projectives, cf. [59, 62]. In both cases, regular epimorphisms are fibrations, and the trivial fibrations are precisely the regular epimorphisms for which the kernel is homotopically trivial. This implies that Goodwillie’s homotopical cross-effects [29] agree here with our algebraic cross-effects. Several notions of homotopical nilpotency are now available. The first is the least n : X• ։ I n (X• ) is a trivial fibration, the second (resp. integer n for which the unit ηX • third) is the least integer n for which X• is homotopically n-folded (resp. the value of an n-excisive approximation of the identity). The first is a lower bound for the second, and the second is a lower bound for the third invariant. For simplicial groups the first invariant recovers the Berstein-Ganea nilpotency for loop spaces [2], the second the cocategory of Hovey [42], and the third the Biedermann-Dwyer nilpotency for homotopy nilpotent groups [3]. Similar chains of inequalities have recently been studied by Eldred [24] and Costoya-Scherer-Viruel [21]. The plan of this article is as follows. Section 1 reviews the notions of central extension and regular pushout. At the end an algebraic Beck-Chevalley condition for pushouts of regular epimorphisms in an exact Mal’tsev category is established. Section 2 presents our definition of nilpotency and studies under which conditions the n-nilpotent objects form a reflective Birkhoff subcategory. Section 3 investigates central reflections, the motivating example being the reflection of the category of (n + 1)-nilpotent objects into the category of n-nilpotent objects. The unit of these central reflections is shown to be pointwise affine. Section 4 establishes first aspects of nilpotency. The nilpotency class of a σ-pointed exact Mal’cev category is related to universal properties of the comparison map θX,Y : X + Y → X × Y . This leads to a new family of binary tensor products interpolating between binary sum and binary product. 6 CLEMENS BERGER AND DOMINIQUE BOURN Section 5 studies the σ-pointed exact Mal’tsev categories with quadratic identity functor. They are characterised among the 2-nilpotent ones as those which are algebraically distributive, resp. algebraically codistributive. Section 6 studies the σ-pointed exact Mal’tsev categories with an identity functor of degree ≤ n. They are characterised as those in which all objects are n-folded. Every n-folded object is shown to be n-nilpotent. Several sufficient criteria for the converse are given. The semi-abelian varieties of groups, Lie algebras, Moufang loops and triality groups are discussed. 1. Central extensions and regular pushouts In this introductory section we review the notion of central equivalence relation and study basic properties of the associated class of central extensions, needed for our treatment of nilpotency. By central extension we mean a regular epimorphism with central kernel relation [8, 11, 12]. This algebraic concept of central extension has to be distinguished from the axiomatic concept of Janelidze-Kelly [44] which is based on a previously chosen admissible Birkhoff subcategory. Nevertheless, it is known that with respect to the Birkhoff subcategory of abelian group objects, the two approaches yield the same class of central extensions in any congruence modular variety (cf. [45, 46]) as well as in any exact Mal’tsev category (cf. [11, 26, 31]). 
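Before fixing the formal setting it may help to keep the prototypical varietal example in mind: by Mal'tsev's classical theorem, a variety is congruence-permutable, and hence a Mal'tsev category, exactly when it possesses a ternary term $p$ satisfying
$$p(x,y,y) = x \qquad\text{and}\qquad p(x,x,y) = y,$$
for instance $p(x,y,z) = x y^{-1} z$ for groups and $p(x,y,z) = x - y + z$ for Lie algebras. The existence of such a term already forces every reflexive subalgebra of $X \times X$ to be a congruence.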
We assume throughout that our ambient category is a Mal’tsev category, i.e. a finitely complete category in which every reflexive relation is an equivalence relation, cf. [4, 6, 17, 19]. Most of the material of this section is well-known to the expert, and treated in some detail here mainly to fix notation and terminology. One exception is Section 1.6 which establishes an “algebraic” Beck-Chevalley condition for pushouts of regular epimorphisms in exact Mal’tsev categories, dual to the familiar Beck-Chevalley condition for pullbacks of monomorphisms in elementary toposes. In recent and independent work, Gran-Rodelo [32] consider a weaker form of this condition and show that it characterises regular Goursat categories. 1.1. Smith commutator of equivalence relations. – An equivalence relation R on X will be denoted as a reflexive graph (p0 , p1 ) : R ⇒ X with section s0 : X → R, but whenever convenient we shall consider R as a subobject of X × X. By a fibrant map of equivalence relations (X, R) → (Y, S) we mean a natural transformation of the underlying reflexive graphs such that the three naturality squares are pullback squares. A particularly important equivalence relation is the kernel relation R[f ] of a morphism f : X → Y which is part of the following diagram: p1 R[f ] o p0 / /X f / Y. The discrete equivalence relation ∆X on X is the kernel relation R[1X ] of the identity map 1X : X → X. The indiscrete equivalence relation ∇X on X is the kernel relation R[ωX ] of the unique map ωX from X to a terminal object. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 7 Two equivalence relations R, S on the same object X are said to centralise each other if the square R (s0 ,1S ) R ×O X S o S (1R ,sS 0) R p pR 0 pS 1 #  /X admits a (necessarily unique) filler which makes the diagram commute, cf. [12, 57]. In set-theoretical terms such a filler amounts to the existence of a “partial Mal’tsev operation” on X, namely (considering R ×X S as a subobject of X × X × X) a ternary operation p : R ×X S → X such that x = p(x, y, y) and p(x, x, y) = y. We shall follow Marino Gran and the second author in calling p : R ×X S → X a connector between R and S, cf. [11, 12]. In a finitely cocomplete regular Mal’tsev category, there exists for each pair (R, S) of equivalence relations on X a smallest effective equivalence relation [R, S] on X such that R and S centralise each other in the quotient X/[R, S]. This equivalence relation is the so-called Smith commutator of R and S, cf. [8, 12, 57, 60]. In these terms R and S centralise each other precisely when [R, S] = ∆X . The Smith commutator is monotone in each variable and satisfies [R, S] = [S, R] and f ([R, S]) ⊂ [f (R), f (S)] for each regular epimorphism f : X → Y , where f (R) denotes the direct image of the subobject R ⊂ X × X under the regular epimorphism f × f : X × X → Y × Y . The Mal’tsev condition implies that this direct image represents an equivalence relation on Y . Note that equality f ([R, S]) = [f (R), f (S)] holds if and only if the direct image f ([R, S]) is an effective equivalence relation on Y , which is always the case in an exact Mal’tsev category. 1.2. Central equivalence relations and central extensions. An equivalence relation R on X is said to be central if [R, ∇X ] = ∆X . A central extension is by definition a regular epimorphism with central kernel relation. An n-fold central extension is the composite of n central extensions. 
An n-fold centrally decomposable morphism is the composite of n morphisms with central kernel relation. The indiscrete equivalence relation ∇X is a central equivalence relation precisely when X admits an internal Mal’tsev operation p : X × X × X → X. In pointed Mal’tsev categories such a Mal’tsev operation amounts to an abelian group structure on X, cf. [4, Proposition 2.3.8]. An object X of a pointed Mal’tsev category (D, ⋆D ) is thus an abelian group object if and only if the map X → ⋆D is a central extension. Central equivalence relations are closed under binary products and inverse image along monomorphisms. In a regular Mal’tsev category, central equivalence relations are closed under direct images, cf. [12, Proposition 4.2] and [4, Proposition 2.6.15]. Lemma 1.1. In a regular Mal’tsev category, an n-fold centrally decomposable morphism can be written as an n-fold central extension followed by a monomorphism. 8 CLEMENS BERGER AND DOMINIQUE BOURN Proof. It suffices to show that a monomorphism ψ followed by a central extension φ can be rewritten as a central extension φ′ followed by a monomorphism ψ ′ . Indeed, the kernel relation R[φψ] is central, being the restriction ψ −1 (R[φ]) of the central equivalence relation R[φ] along the monomorphism ψ; therefore, by regularity, one obtains φψ = ψ ′ φ′ where φ′ is quotienting by the kernel relation R[φψ].  Lemma 1.2. In a Mal’tsev category, morphisms with central kernel relation are closed under pullback. In a regular Mal’tsev category, central extensions are closed under pullback. Proof. It suffices to show the first statement. In the following diagram, p′1 R[f ′ ] o ′ /X p′0 R(x,y) / Y′ y x p1  R[f ] o f′ / /  /X p0 f  /Y if the right square is a pullback, then the left square is a fibrant map of equivalence relations. This permits to lift the connector p : R[f ] ×X ∇X → X so as to obtain a  connector p′ : R[f ′ ] ×X ′ ∇X ′ → X ′ . f g Lemma 1.3. Let X → Y → Z be morphisms in a Mal’tsev category. If gf is a morphism with central kernel relation then so is f . More generally, if gf is n-fold centrally decomposable then so is f . Proof. Since R[f ] ⊂ R[gf ], the commutation relation [R[gf ], ∇X ] = ∆X implies the commutation relation [R[f ], ∇X ] = ∆X . Assume now gf = kn · · · k1 where each ki is a morphism with central kernel relation. In the following pullback P o γ /X φ gf ψ  Y g  /Z φ is the unique map such that γφ = 1X and ψφ = f . If we denote by hi the morphism with central kernel relation obtained by pulling back ki along g, we get f = hn · · · h2 (h1 φ). Since φ is a monomorphism, the kernel relation R[h1 φ] = φ−1 (R[h1 ]) is central, and hence f is the composite of n morphisms with central kernel relation.  Proposition 1.4 (Corollary 3.3 in [8]). In a finitely cocomplete regular Mal’tsev category, each morphism f : X → Y factors canonically as in ηf / / X/[∇X , R[f ]] ❧❧ f ❧❧❧   v❧❧❧❧❧❧ o X/R[f ] Y o X ζf CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 9 where ηf is a regular epimorphism and ζf has a central kernel relation. If f is a regular epimorphism then ζf is a central extension. This factorisation has the following universal property. Any commutative diagram of undotted arrows as below (with Zf = X/[∇X , R[f ]]), such that ζ ′ has a central kernel relation, produces a unique dotted map x X❅ ❅❅ ηf ❅❅ ❅❅ / X′ ❆❆ ′ ❆❆ η ❆❆ ❆ z / Z′ ⑥ ⑥ ⑥ ⑥⑥ ~⑥⑥ ζ ′ / Y′ Zf f ⑦ ⑦⑦ ⑦  ~⑦⑦ ζf Y y rendering the whole diagram commutative. Proposition 1.5. Let n be a positive integer. 
In a finitely cocomplete regular Mal’tsev category, each morphism has a universal factorisation into a regular epimorphism followed by an n-fold centrally decomposable morphism. Each n-fold central extension has an initial factorisation into n central extensions. Proof. We proceed by induction, the case n = 1 being treated in Proposition 1.4. Suppose the assertion holds up to level n − 1 with f = ζfn−1 ζfn−2 · · · ζf1 ηf1 and ηf1 a regular epimorphism. Take the universal factorisation of ηf1 . The universality of this new factorisation is then a straightforward consequence of the induction hypothesis and the universal property stated in Proposition 1.4. Starting with an n-fold central extension f , its universal factorisation through a composite of n − 1 morphisms with central kernel relation makes the regular epimorphism ηf1 a central extension by Lemma 1.3, and therefore produces a factorisation into n central extensions which is easily seen to be the initial one.  1.3. Regular pushouts. In a regular category, any pullback square of regular epimorphisms is also a pushout square, as follows from the pullback stability of regular epimorphisms. In particular, a commuting square of regular epimorphisms X f x  X′ f′ //Y  y / / Y′ is a pushout whenever the comparison map (x, f ) : X → X ′ ×Y ′ Y to the pullback of y along f ′ is a regular epimorphism. Such pushouts will be called regular, cf. [7]. A regular pushout induces in particular regular epimorphisms on kernel relations which we shall denote R(x, y) : R[f ] ։ R[f ′ ] and R(f, f ′ ) : R[x] ։ R[y]. For a regular Mal’tsev category the following more precise result holds. 10 CLEMENS BERGER AND DOMINIQUE BOURN Proposition 1.6 (cf. Proposition 3.3 in [7]). In a regular Mal’tsev category, a commuting square of regular epimorphisms like in (1.3) is a regular pushout if and only if one of the following three equivalent conditions holds: (a) the comparison map X → X ′ ×Y ′ Y is a regular epimorphism; (b) the induced map R(x, y) : R[f ] → R[f ′ ] is a regular epimorphism; (c) the induced map R(f, f ′ ) : R[x] → R[y] is a regular epimorphism. Accordingly, central extensions are closed under regular pushouts. Proof. We already mentioned that (a) implies (b) and (c) in any regular category. It remains to be shown that in a regular Mal’tsev category (b) or (c) implies (a). For this, it is useful to notice that in a regular category condition (a) holds if and only if the composite relation R[f ] ◦ R[x] equals the kernel relation of the diagonal of the square by Theorem 5.2 of Carboni-Kelly-Pedicchio [17]. Since this kernel relation is given by x−1 (R[f ′ ]), and condition (b) just means that x(R[f ]) = R[f ′ ], it suffices to establish the identity R[f ] ◦ R[x] = x−1 (x(R[f ])). In a regular Mal’tsev category, the composition of equivalence relations is symmetric and coincides with their join. The join R[f ] ∨ R[x] is easily identified with x−1 (x([R[f ])). The second assertion follows from (b) resp. (c) and the closure of central kernel relations under direct image in regular Mal’tsev categories.  Corollary 1.7. In a regular Mal’tsev category, any commutative square f Xo s x  X′ o f′ //Y  y / / Y′ s′ with a parallel pair of regular epimorphisms and a parallel pair of split epimorphisms is a regular pushout. Proof. The induced map R(f, f ′ ) : R[x] → R[y] is a split and hence regular epimorphism so that the pushout is regular by Proposition 1.6.  Corollary 1.8. 
In an exact Mal’tsev category, pushouts of regular epimorphisms along regular epimorphims exist and are regular pushouts. Proof. Given a pair (f, x) of regular epimorphisms with common domain, consider the following diagram p1 R[f ] o x̃  So p0 / /X f y x q1 q0  / ′ X / //Y f′  / / Y′ in which S denotes the direct image x(R[f ]). By exactness, this equivalence relation on X ′ has a quotient Y ′ . The induced right square is then a regular pushout.  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 11 Remark 1.9. It follows from [6] that Corollary 1.7 characterises regular Mal’tsev categories among regular categories, while [17, Theorem 5.7] shows that Corollary 1.8 characterises exact Mal’tsev categories among regular categories. Remark 1.10. It is worthwhile noting that in any category a commuting square of epimorphisms in which one parallel pair admits compatible sections is automatically a pushout square. Dually, a commuting square of monomorphisms in which one parallel pair admits compatible retractions is automatically a pullback square. Lemma 1.11 (cf. [11], Lemma 1.1). In a pointed regular category, a regular pushout induces a regular epimorphism on parallel kernels K[f ] / K(x,y)  K[f ′ ] / /X f y x  / X′ //Y f′  // Y′ so that the kernel K[f ′ ] of f ′ is the image under x of the kernel K[f ] of f . Proof. This follows from the fact that, in any pointed category, the induced map on kernels factors as a pullback of the comparison map (x, f ) : X ։ X ′ ×Y ′ Y followed by an isomorphism.  Proposition 1.12. In a finitely cocomplete regular Mal’tsev category, consider the following diagram of pushouts X ηf x  X′ / / Zf ζf z ηf ′  / / Zf ′ ζf ′ //Y  y // Y′ in which the upper row represents the universal factorisation of the regular epimorphism f into a regular epimorphism ηf followed by a central extension ζf . If the outer rectangle is a regular pushout then the right square as well, and the lower row represents the universal factorisation 1.4 of its composite f ′ = ζf ′ ηf ′ . Proof. Since on vertical kernel relations R[x] → R[z] → R[y] we get a regular epimorphism, the second one R[z] → R[y] is a regular epimorphism as well, and the right square is a regular pushout by Proposition 1.6. Therefore, ζf ′ is a central extension by Corollary 1.7. It remains to be shown that the lower row fulfills the universal property of factorisation 1.4. Consider the following diagram X ηf / / Zf ζf //Y z   y  z′′ ζ ′ ηf ′ f // Y′ / / Zf ′ X ′ ◆◆ qq8 8 ◆◆◆ q q ◆ qq η ◆◆◆ & ′ qqq ζ Z x 12 CLEMENS BERGER AND DOMINIQUE BOURN with central extension ζ. According to Proposition 1.4, there is a unique dotted factorisation z ′′ making the diagram commute. Since the left square is a pushout, z ′′ factors uniquely and consistently through z ′ : Zf ′ → Z ′ , showing that the lower row has indeed the required universal property.  Proposition 1.13. In a finitely cocomplete exact Mal’tsev category, the universal factorisation of a regular epimorphism through an n-fold central extension is preserved under pushouts along regular epimorphisms. Proof. Let us consider the following diagram of pushouts η X x  X′ η / / Zn ζn / / Zn−1  zn  zn ′ / / Zn′ / / Zn−1 ′ ′ ζn ζ1 ... Z1 ...  z1 Z1′ //Y  y // Y′ ′ ζ1 in which the upper row is the universal factorisation 1.5 of a regular epimorphism f : X → Y through an n-fold central extension. By Corollary 1.8, all pushouts are ′ regular. Therefore, the morphisms ζk′ : Zk′ → Zk−1 are central extensions for all k. 
It remains to be shown that the lower row satisfies the universal property of the factorisation 1.5 of f ′ : X ′ → Y ′ through an n-fold central extension. This follows induction on n beginning with the case n = 1 proved in Proposition 1.12.  Proposition 1.14. Let D be an exact Mal’tsev category. Consider the following diagram of pushouts X xn  X′ / / Xn−1 fn−1/ / Xn−2 fn xn−1  ′ / / Xn−1 ′ fn ... X1 ...  x1 X1′ xn−2  ′ / / Xn−2 ′ fn−1 f1 / / X0 x0  / / X0′ ′ f1 f0 //Y y  / / Y′ ′ f0 in which xn : X ։ X ′ is a regular epimorphism. If the upper row represents an n-fold central extension of the regular epimorphism f0 : X0 ։ Y in the slice category D/Y then the lower row represents an n-fold central extension of f0′ : X0′ ։ Y ′ in the slice category D/Y ′ . Proof. Let us set φi = f0 · f1 · · · fi and φ′i = f0′ · f1′ · · · fi′ . Since the indiscrete equivalence relation ∇f0 on the object f0 : X0 ։ Y of the slice category D/Y is given by R[f0 ], our assumption on the upper row translates into the conditions [R[fi ], R[φi ]] = ∆Xi for 1 ≤ i ≤ n. Since any of the rectangles is a regular pushout by Corollary 1.8, we get xi (R[fi ]) = R[fi′ ] and xi (R[φi ]) = R[φ′i ], and consequently [R[fi′ ], R[φ′i ]] = ∆Xi′ for all i.  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 13 1.4. Regular pushouts in pointed Mal’tsev categories with binary sums. In a pointed category with binary sums and binary products, each pair of objects (X1 , X2 ) defines a canonical comparison map θX1 ,X2 : X1 + X2 → X1 × X2 , uniquely determined by the requirement that the composite morphism Xi / θX1 ,X2 / X1 + X2 / X1 × X2 / / Xj is the identity (resp. the null morphism) if i = j (resp. i 6= j), where i, j ∈ {1, 2}. Recall that θX1 ,X2 is a strong epimorphism for all objects X1 , X2 precisely when the category is unital in the sense of the second author, and that every pointed Mal’tsev category is unital, cf. [4, 6]. In a regular category strong and regular epimorphisms coincide. Note also that an exact Mal’tsev category has coequalisers for reflexive pairs, so that an exact Mal’tsev category with binary sums has all finite colimits. In order to shorten terminology, we call σ-pointed any pointed category with binary sums. Later we shall need the following two examples of regular pushouts. Proposition 1.15. For any regular epimorphism f : X ։ Y and any object Z of a σ-pointed regular Mal’tsev category, the following square X+Z θX,Z / / X ×Z f +Z  Y +Z θY,Z  f ×Z / / Y ×Z is a regular pushout. Proof. The regular epimorphism θR[f ],Z : R[f ] + Z ։ R[f ] × Z factors as below θ R[f ],Z / / R[f ] × Z = R[f × Z] R[f ] + Z ▼▼▼ 5 ❧❧❧5 ▼▼▼ ❧❧❧ ❧ ▼▼▼ ❧ ❧❧❧ & R[f + Z] inducing a regular epimorphism R[f + Z] → R[f × Z] on the vertical kernel relations of the square above. Proposition 1.6 allows us to conclude.  Corollary 1.16. For any objects X, Y, Z of a σ-pointed regular Mal’tsev category, the following square (X + Y ) + Z θX+Y,Z θX,Y +Z  (X × Y ) + Z is a regular pushout. θX×Y,Z / / (X + Y ) × Z  θX,Y ×Z / / (X × Y ) × Z 14 CLEMENS BERGER AND DOMINIQUE BOURN 1.5. Central subobjects, centres and centralisers. In a pointed Mal’tsev category (D, ⋆D ), two morphisms with common codomain f : X → Z and g : Y → Z are said to commute [9, 43] if the square (0Y X ,1Y ) X ×O Y o Y φf,g (1X ,0XY ) X f g #  /Z admits a (necessarily unique) filler φf,g : X × Y → Z making the whole diagram commute, where 0XY : X → ⋆D → Y denotes the zero morphism. 
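In the category of groups this recovers elementwise commutation: a filler exists precisely when $f(x)\,g(y) = g(y)\,f(x)$ for all $x \in X$ and $y \in Y$, and it is then necessarily given by
$$\varphi_{f,g} \colon X \times Y \longrightarrow Z, \qquad \varphi_{f,g}(x,y) = f(x)\,g(y),$$
which is a homomorphism exactly under that commutation hypothesis.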
A monomorphism Z ֌ X which commutes with the identity 1X : X → X is called central, and the corresponding subobject is called a central subobject of X. Every regular epimorphism f : X ։ Y with central kernel relation R[f ] has a central kernel K[f ]. In pointed protomodular categories, the converse is true: the centrality of K[f ] implies the centrality of R[f ], so that central extensions are precisely the regular epimorphisms with central kernel, cf. [33, Proposition 2.2]. Recall [4, 5] that a pointed category is protomodular precisely when the category has pullbacks of split epimorphisms, and for each split epimorphism, section and kernel-inclusion form a strongly epimorphic cospan. Every finitely complete protomodular category is a Mal’tsev category [4, Proposition 3.1.19]. The categories of groups and of Lie algebras are pointed protomodular. Moreover, in both categories, each object possesses a centre, i.e. a maximal central subobject. Central group (resp. Lie algebra) extensions are thus precisely regular epimorphisms f : X ։ Y with kernel K[f ] contained in the centre of X. This is of course the classical definition of a central extension in group (resp. Lie) theory. In these categories, there exists more generally, for each subobject N of X, a so-called centraliser, i.e. a subobject Z(N ֌ X) of X which is maximal among subobjects commuting with N ֌ X. The existence of centralisers has far-reaching consequences, as shown by James Gray and the second author, cf. [13, 35, 36]. Since they are useful for our study of nilpotency, we discuss some of them here. Following [6], we denote by PtZ (D) the category of split epimorphisms (with chosen section) in D over a fixed codomain Z, cf. Section 3.2. For each f : Z → Z ′ pulling back along f defines a functor f ∗ : PtZ ′ (D) → PtZ (D) which we call pointed basechange along f . In particular, the terminal map ωZ : Z → 1D defines a functor (ωZ )∗ : D → PtZ (D). Since in a pointed regular Mal’tsev category D, morphisms commute if and only if their images commute, morphisms in PtZ (D) of the form φf,f ′ / X ×Z ⑦? Y ❍c❍❍❍❍ ❍❍❍❍ ⑦s⑦⑦⑦⑦⑦ pZ ❍❍❍# ⑦⑦⑦r Z f correspond bijectively to morphisms f : X → K[r] such that X −→ K[r] ֌ Y commutes with s : Z ֌ Y in D. Therefore, if split subobjects have centralisers in D, CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 15 then for each object Z, the functor (ωZ )∗ : D → PtZ (D) : X 7→ X × Z admits a right adjoint (ωZ )∗ : PtZ (D) → D : (r, s) 7→ K[r] ∩ Z(s). A category with the property that for each object Z, the functor (ωZ )∗ has a right adjoint is called algebraically cartesian closed [13]. Algebraic cartesian closedness implies canonical isomorphisms (X × Z) +Z (Y × Z) ∼ = (X + Y ) × Z for all objects X, Y, Z, a property we shall call algebraic distributivity, cf. Section 5.3. 1.6. An algebraic Beck-Chevalley condition. – The dual of an elementary topos is an exact Mal’tsev category, cf. [17, Remark 5.8]. This suggests that certain diagram lemmas for elementary toposes admit a dual version in our algebraic setting. Supporting this analogy we establish here an “algebraic dual” of the well-known Beck-Chevalley condition. As a corollary we get a diagram lemma which will be used several times in Section 6. Another instance of the same phenomenon is the cogluing lemma for regular epimorphisms in exact Mal’tsev categories (cf. proof of Theorem 6.23a and Corollary 1.8) which is dual to a gluing lemma for monomorphisms in an elementary topos. Lemma 1.17 (cf. Lemma 1.1 in [30]). 
Consider a commutative diagram X  Y x y / / X′ / X ′′  / /Y′  / Y ′′ in a regular category. If the outer rectangle is a pullback and the left square is a regular pushout (1.3) then left and right squares are pullbacks. Proof. The whole diagram contains three comparison maps: one for the outer rectangle, denoted φ : X → Y ×Y ′′ X ′′ , one for the left and one for the right square, denoted respectively φl : X → Y ×Y ′ X ′ and φr : X ′ → Y ′ ×Y ′′ X ′′ . We get the identity φ = y ∗ (φr ) ◦ φl where y ∗ denotes base-change along y. Since the outer rectangle is a pullback, φ is invertible so that φl is a section and y ∗ (φr ) a retraction. Since the left square is a regular pushout, the comparison map φl is a regular epimorphism and hence φl and y ∗ (φr ) are both invertible. Since y is a regular epimorphism in a regular category, base-change y ∗ is conservative so that φr is invertible as well, i.e. both squares are pullbacks.  Proposition 1.18 (Algebraic Beck-Chevalley). Let D be an exact Mal’tsev category with pushouts of split monomorphisms along regular epimorphisms. Any pushout of regular epimorphisms Ū ḡ u  U g / / V̄  v //V 16 CLEMENS BERGER AND DOMINIQUE BOURN yields a functor isomorphism ḡ! u∗ ∼ = v ∗ g! from the fibre PtU (D) to the fibre PtV (D). Proof. We have to show that for any point (r, s) over U , the following diagram Ū ′ ❇ ′ G ❇❇u r̄ ✍✍✍✍ ❇! ! ✍✍ ✍ ✍✍ ✍✍ U′ ✍ F r ✌✌ ✌ ✍✍ ✍✍✍ ✌ ✌ ✍ ✌ ✌✌ Ū ❆ ✌✌✌✌ ḡ ❆❆ ✌ ✌ u ❆ ✌✌ ✌ ✌ U ḡ′ / / V̄ ′ ′ G ❇❇v❇ r̄ ✍✍✍✍ ❇! ! ✍ ′ ✍ g / /V′ ✍✍ ✍✍✍ ✍ F ✍✍ ✍✍✍ ✌✌✌✌ ✌ ✍ ✌ r ′ ✌✌ ✌ / / V̄ ✌ ❆❆ ✌✌✌✌ ❆ v ❆ ✌✌ ✌ ✌ //V g ′ in which (r̄, s̄) = u∗ (r, s) and g! (r, s) = (r′ , s′ ) and ḡ! (r̄, s̄) = (r̄′ , s̄′ ), has a right face which is a downward-oriented pullback; indeed, this amounts to the required identity v ∗ (r′ , s′ ) = (r̄′ , s̄′ ). Since bottom face and the upward-oriented front and back faces are pushouts, the top face is a pushout as well, which is regular by Corollary 1.8. Taking pullbacks in top and bottom faces induces a split epimorphism U ′ ×V ′ V̄ ′ ։ U ×V V̄ through which the left face of the cube factors as in the following commutative diagram Ū ′ r̄  Ū / / U ′ ×V ′ V̄ ′ / / U′ r  / / U ×V V̄  //U in which the left square is a regular pushout by Corollary 1.7. Lemma 1.17 shows then that the right square is a pullback. Therefore, we get the following cube U ′ ×V ′ V̄▲ ′ ▲▲▲ ⑥⑥⑥> ▲▲& & ⑥⑥⑥⑥⑥ ⑥ ⑥⑥⑥ ⑥ U′ ⑥⑥⑥⑥⑥ r ☎☎☎A ⑥ ☎☎ ~⑥⑥⑥⑥⑥ ☎☎☎☎☎ ☎ U ×V V̄P ḡ ☎☎☎☎ PPP PPP ☎☎☎☎☎ ☎  P( ( ☎ U g / / V̄ ′ ′ G ❇❇v❇ r̄ ✎✎✎✎ ❇ ✎✎ ✎ // V′ ✎✎ ✎✎ ✎✎ ✎✎ ✍ ✍F ✍✍ ✍✍✍ ✎ ✎✎ r ′ ✍ ✍✍ / V̄ ✍✍ ❅❅ ❅ ✍✍ ✍ v ❅❅ ✍✍ ✍✍ ✍ //V ′ in which the downward-oriented left face and the bottom face are pullbacks. Therefore, the composite of the top face followed by the downward-oriented right face is a pullback. Moreover, as above, the top face is a regular pushout. It follows then from Lemma 1.17 that the downward-oriented right face is a pullback as required.  Corollary 1.19. In an exact Mal’tsev category with pushouts of split monomorphisms along regular epimorphisms, each commuting square of natural transformations of split CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 17 epimorphisms ŪG ′ ❇ ′ u f¯ ✍✍ ✍ ❇❇❇ !! ✍✍ ✍✍ ✍✍ ✍✍ U′ F ✍✍ ✍✍ ✌ f ✌ ✌✌ ✍ ✍✍ ✌ ✌ ✌✌ Ū ✌✌✌✌ ḡ ❆❆ ✌ ✌ ✌✌✌✌ u ❆❆ U ḡ′ / / V̄ ′ ′ ✍ ✍G ❇❇v❇ ✍ ❇! ! 
✍ ✍✍ g′ / /V′ ✍✍ ✍✍✍ F ✍✍ ✍✍ ✌✌✌✌ ✍ ✍✍ ✌ ✌ f ′ ✌✌ ✌ / / V̄ ✌ ✌✌ ❆❆v ✌ ❆❆ ✌✌✌✌✌ //V f¯′ g such that all horizontal arrows are regular epimorphisms, and front and back faces are upward-oriented pushouts, induces the following upper pushout square ḡ′ Ū✮ T✮′ ❑ ✮✮ ✮✮ ❑❑❑❑(f¯,u′ ) ❑❑ ✮✮ ✮✮ ❑❑ ✮✮ ✮✮ ❑% % ✮✮ ✮✮ Ū ×U U ′ f¯ ✮ ✮ ✮✮ ✮✮ ✁✁✁@ ✮✮ ✮✮ ✁✁✁✁✁ ✁ ✮ ✮ ✁✁ ✁✁✁✁ Ū ✁ ḡ / / V̄ ′ ✮✮ T✮✮ ❑❑❑ ❑❑(f¯′ ,v′ ) ✮✮ ❑❑ f¯′ ✮ ✮ ❑❑ ✮✮ ✮✮ ❑% % ✮ ′✮ ḡ×g ✮ ✮ / / V̄ ×V V ′ ✮✮ ✮✮ ✮✮ ✮✮ @ ✮✮ ✮✮ ✮ ✮ / / V̄ in which the kernel relation of the regular epimorphism (f¯′ , v ′ ) : V̄ ′ → V̄ ×V V ′ may be identified with the intersection R[f¯′ ] ∩ R[v ′ ]. Proof. Taking downward-oriented pullbacks in left and right face of the first diagram yields precisely a diagram as studied in the proof of Proposition 1.18. This implies that the front face of the second diagram is an upward-oriented pushout. Since the back face of the second diagram is also an upward-oriented pushout, the upper square is a pushout as well, as asserted. The kernel relation of the comparison map (f¯′ , v ′ ) : V̄ ′ → V̄ ×V V ′ is the intersection of the kernel relations of f¯′ and of v ′ .  2. Affine objects and nilpotency 2.1. Affine objects. Definition 2.1. Let C be a full subcategory of a Mal’tsev category D. An object X of D is said to be C-affine if there exists a morphism f : X → Y in D with central kernel relation R[f ] and with codomain Y in C. The morphism f is called a C-nilindex for X. We shall write Aff C (D) for the full replete subcategory of D spanned by the C-affine objects of D. Clearly Aff C (D) contains C. When C consists only of a terminal object 1D of D, we call the C-affine objects simply the affine objects of D and write Aff C (D) = Aff(D). Recall that the unique morphism X → 1D has a central kernel relation precisely when the indiscrete equivalence relation on X centralises itself, which amounts to the existence of a (necessarily unique associative and commutative) Mal’tsev operation on X. 18 CLEMENS BERGER AND DOMINIQUE BOURN When D is pointed, such a Mal’tsev operation on X induces (and is induced by) an abelian group structure on X. For a pointed Mal’tsev category D, the category Aff(D) of affine objects is thus the category Ab(D) of abelian group objects of D. Remark 2.2. When D is a regular Mal’tsev category, any nilindex f : X → Y factors as a regular epimorphism f˜ : X ։ f (X) followed by a monomorphism f (X) ֌ Y with codomain in C; therefore, if C is closed under taking subobjects in D, this defines a strongly epimorphic nilindex f˜ for X with same central kernel relation as f . In other words, for regular Mal’tsev categories D and subcategories C which are closed under taking subobjects in D, the C-affine objects of D are precisely the objects which are obtained as central extensions of objects of C. Proposition 2.3. For any full subcategory C of a Mal’tsev category D, the subcategory Aff C (D) is closed under taking subobjects in D. If C is closed under taking binary products in D then Aff C (D) as well, so that Aff C (D) is finitely complete. Proof. Let m : X ֌ X ′ be a monomorphism with C-affine codomain X ′ . If f ′ : X ′ → Y ′ is a nilindex for X ′ , then f m : X → Y ′ is a nilindex for X, since central equivalence relations are stable under pointed base-change along monomorphisms and we have R[f m] = m−1 (R[f ]). If X and Y are C-affine with nilindices f and g then f × g is a nilindex for X × Y since maps with central kernel relations are stable under products.  
A Birkhoff subcategory [44] of a regular category D is a subcategory C which is closed under taking subobjects, products and quotients in D. A Birkhoff subcategory of an exact (resp. Mal’tsev) category is exact (resp. Mal’tsev), and regular epimorphisms in C are those morphisms in C which are regular epimorphisms in D. If D is a variety (in the single-sorted monadic sense) then Birkhoff subcategories of D are precisely subvarieties of D, cf. the proof of Lemma 5.11 below. Proposition 2.4. Let C be a full subcategory of an exact Mal’tsev category D. If C is closed under taking subobjects and quotients in D then Aff C (D) as well. In particular, if C is a Birkhoff subcategory of D, then Aff C (D) as well. Proof. Let X be a C-affine object of D with nilindex f : X → Y . We can suppose f is a regular epimorphism, cf. Remark 2.2. Thanks to Proposition 2.3 it remains to establish closure under quotients. Let g : X ։ X ′ be a regular epimorphism in D. Since D is exact, the pushout of f along g exists in D X f g  X′ //Y h f′  // Y′ and f ′ is a central extension since f is, cf. Corollary 1.8. By hypothesis C is stable under quotients. Therefore the quotient Y ′ belongs to C, and f ′ is a nilindex for X ′ so that X ′ is C-affine as required.  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 19 2.2. The C-lower central sequence. – Definition 2.1 is clearly the beginning of an iterative process. We write C = Nil0C (D) and define inductively NilnC (D) to be the category Aff Niln−1 (D) (D). The objects of this C category NilnC (D) are called the C-nilpotent objects of order n of D, and we get the following diagram ♦7 = DO a❇h◗◗◗ ♦♦⑤♦⑤⑤ ❇❇❇◗◗◗◗◗ ♦ ♦ ❇❇ ◗◗◗ ♦♦ ⑤⑤ ◗◗◗ ❇❇ ♦♦♦ ⑤⑤⑤ ♦ ◗◗◗ ❇❇ ♦ ♦ ⑤ ◗◗◗ ♦ ❇ ❇a ♦♦ =⑤ O ⑤ h ♦ ♦ 7/♦ / n 2 1 NilC (D) / / Niln+1 C NilC (D) / / NilC (D) (D) C which we call the C-lower central sequence of D. If C = {1D }, we obtain the (absolute) lower central sequence of D: 7 D h◗ ♥♥⑤♥⑤> O a❇❇◗❇◗◗◗◗◗ ♥ ♥ ❇❇ ◗◗ ♥♥ ⑤⑤ ◗◗◗ ❇❇ ♥♥♥ ⑤⑤⑤ ◗◗◗ ♥ ❇❇ ♥ ♥ ⑤ ◗◗◗ ♥ ❇ ⑤ ♥ ❇ ♥ ◗◗h ⑤ ❇a >⑤ O 7♥♥♥ 2 n 1 / Nil (D) / / Nil (D) Nil (D) / / Niln+1 (D) {1D } / Remark 2.5. It follows from Remark 2.2 and an iterative application of Proposition 2.4 that for an exact Mal’tsev category D, the nilpotent objects of order n are precisely those which can be obtained as an n-fold central extension of the terminal object 1D and that moreover Niln (D) is a Birkhoff subcategory of D. If D is the category of groups (resp. Lie algebras) then Niln (D) is precisely the full subcategory spanned by nilpotent groups (resp. Lie algebras) of class ≤ n. Indeed, it is well-known that a group (resp. Lie algebra) is nilpotent of class ≤ n precisely when it can be obtained as an n-fold “central extension” of the trivial group (resp. Lie algebra), and we have seen in Section 1.5 that the group (resp. Lie) theorist’s definition of central extension agrees with ours. We will see in Proposition 2.14 below that the equivalence between the central extension definition and the iterated commutator definition of nilpotency carries over to our general context of finitely cocomplete exact Mal’tsev categories. Huq [43] had foreseen a long time ago that a categorical approach to nilpotency was possible. Everaert-Van der Linden [25] recast Huq’s approach in modern language in the context of semi-abelian categories. Definition 2.6. A Mal’tsev category D with full subcategory C is called C-nilpotent of order n (resp. of class n) if D = NilnC (D) (resp. if n is the least such integer). 
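For the category of groups, with C reduced to the terminal object as in Remark 2.5, nilpotency in this sense can be certified by the lower central series: setting
$$\gamma_1(X) = X, \qquad \gamma_{k+1}(X) = [X, \gamma_k(X)],$$
a group $X$ is nilpotent of order $n$ precisely when $\gamma_{n+1}(X) = 1$, the chain $X = X/\gamma_{n+1}(X) \twoheadrightarrow X/\gamma_n(X) \twoheadrightarrow \cdots \twoheadrightarrow X/\gamma_1(X) = 1$ being an $n$-fold central extension of the trivial group, and the $n$-th reflection of the nilpotency tower is then $I^n(X) = X/\gamma_{n+1}(X)$.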
When C = {1D } the prefix C will be dropped, and instead of “nilpotent of order n” we also just say “n-nilpotent”. Proposition 2.7. A Mal’tsev category is n-nilpotent if and only if each morphism is n-fold centrally decomposable. A regular Mal’tsev category is n-nilpotent if and only if each morphism factors as an n-fold central extension followed by a monomorphism. 20 CLEMENS BERGER AND DOMINIQUE BOURN Proof. The second statement follows from the first by Lemma 1.1. If each morphism is n-fold centrally decomposable, then this holds for terminal maps ωX : X → 1D , so that all objects are n-nilpotent. Conversely, assume that all objects are n-nilpotent, i.e. that for all objects X, the terminal map ωX is n-fold centrally decomposable. Then, for each morphism f : X → Y , the identity ωX = ωY f together with Lemma 1.3 imply that f is n-fold centrally decomposable as well.  2.3. Epireflections, Birkhoff reflections and central reflections. – We shall see that if C is a reflective subcategory of D, then the categories NilnC (D) are again reflective subcategories of D, provided D and the reflection fulfill suitable conditions. In order to give precise statements we need to fix some terminology. A full replete subcategory C of D is called reflective if the inclusion C ֒→ D admits a left adjoint functor I : D → C, called reflection. The unit of the adjunction at an object X of D will be denoted by ηX : X → I(X). Reflective subcategories C are stable under formation of limits in D. In particular, reflective subcategories of Mal’tsev categories are Mal’tsev categories. A reflective subcategory C of D is called strongly epireflective and the reflection I is called a strong epireflection if the unit ηX : X → I(X) is pointwise a strong epimorphism. Strongly epireflective subcategories are characterised by the property that C is closed under taking subobjects in D. In particular, strongly epireflective subcategories of regular categories are regular categories. A Birkhoff reflection (cf. [14]) is a strong epireflection I : D → C such that for each regular epimorphism f : X → Y in D, the following naturality square X f  Y ηX / / I(X) I(f )  / / I(Y ) ηY is a regular pushout (see Section 1.3 and Proposition 1.6). A subcategory of D defined by a Birkhoff reflection is a Birkhoff subcategory of D, and is thus exact whenever D is. It follows from Corollary 1.8 that a reflective subcategory of an exact Mal’tsev category is a Birkhoff subcategory if and only if the reflection is a Birkhoff reflection. A central reflection is a strong epireflection I : D → C with the property that the unit ηX : X ։ I(X) is pointwise a central extension. The following exactness result will be used at several places. In the stated generality, it is due to Diana Rodelo and the second author [14], but the interested reader can as well consult [44, 31, 32] for closely related statements. Proposition 2.8. In a regular Mal’tsev category, strong epireflections preserve pullback squares of split epimorphisms, and Birkhoff reflections preserve pullbacks of split epimorphisms along regular epimorphisms. Proof. See Proposition 3.4 and Theorem 3.16 in [14].  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 21 Lemma 2.9. Let C be a reflective subcategory of D with reflection I, and assume that ηX : X → I(X) factors through an epimorphism f : X ։ Y as in: ηX / I(X) ♦7 ♦ ♦ ♦ f   ♦♦♦♦♦ η Y X Then I(f ) is an isomorphism and we have η = I(f )−1 ηY . Proof. 
Consider the following diagram ηX / I(X) ♣8 ♣ ♣ f  I(f ) ♣♣   ♣♣♣♣ η / Y I(Y ) ηY X where the lower triangle commutes because f is an epimorphism. If we apply the reflection I to the whole diagram we get two horizontal isomorphisms I(ηX ) and I(ηY ). It follows that I(η) is an isomorphism as well, hence so is I(f ), and η = I(f )−1 ηY .  Lemma 2.10. For any reflective subcategory C of a Mal’tsev category D the C-affine objects of D are those X for which the unit ηX has a central kernel relation. Proof. If ηX : X → I(X) has a central kernel relation then X is C-affine. Conversely, let X be C-affine with nilindex f : X → Y . Then Y is an object of the reflective subcategory C so that f factors through ηX : X → I(X). Accordingly, we get R[ηX ] ⊂ R[f ], and hence R[ηX ] is central because R[f ] is.  Corollary 2.11. A reflection I : D → C of a regular Mal’tsev category D is central if and only if D is C-nilpotent of order 1 (i.e. all objects of D are C-affine). Theorem 2.12 (cf. [8], Sect. 4.3 in [43], Prp. 7.8 in [25] and Thm. 3.6 in [49]). – For a reflective subcategory C of a finitely cocomplete regular Mal’tsev category D, the category Aff C (D) is a strongly epireflective subcategory of D. The associated strong epireflection IC1 : D → Aff C (D) is obtained by factoring the unit ηX : X → I(X) universally through a map with central kernel relation. If C is a reflective Birkhoff subcategory of a finitely cocomplete exact Mal’tsev category D, then the reflection IC1 : D → Aff C (D) is a Birkhoff reflection. Proof. Proposition 1.4 yields the following factorisation of the unit: X 1 ηX  IC1 (X) ηX / I(X) ♥7 ♥ ♥ ♥♥ ♥♥♥ η̄X Since η̄X has a central kernel relation and I(X) is an object of C, the object IC1 (X) 1 belongs to Aff C (D). We claim that the maps ηX : X ։ IC1 (X) have the universal 1 property of the unit of an epireflection IC : D → Aff C (D). 22 CLEMENS BERGER AND DOMINIQUE BOURN Let f : X → T be a map with C-affine codomain T which means that ηT has a central kernel relation. Then consider the following diagram: f X■ ■■ η1 ■■ X ■■ ■$ $ ηX IC1 (X) ✈ ✈✈ ✈✈  {✈✈ η̄X I(X) I(f ) /T ❈ ❈❈❈❈ ❈❈❈❈ ❈❈❈❈ ❈❈❈ f¯ /T ⑤ ⑤ ⑤ ⑤⑤  ~⑤⑤ ηT / I(T ) According to Proposition 1.4, there is a unique factorisation f¯ making the diagram commute. If D is exact and C a Birkhoff subcategory, the subcategory Aff C (D) is closed under taking subobjects and quotients by Proposition 2.4. The reflection IC1 is thus a Birkhoff reflection in this case.  Remark 2.13. A reflective Birkhoff subcategory C of a semi-abelian category D satisfies all hypotheses of the preceding theorem. In this special case, the Birkhoff reflection IC1 : D → Aff C (D) is given by the formula IC1 (X) = X/[X, K[ηX ]] where [X, K[ηX ]] is the Huq commutator of X and K[ηX ], cf. Section 1.5. Indeed, the pointed protomodularity of D implies (cf. [33, Proposition 2.2]) that the kernel of the quotient map X → X/[∇X , R[ηX ]] is canonically isomorphic to the Huq commutator [X, K[ηX ]] so that the formula follows from Proposition 1.4. 2.4. The Birkhoff nilpotency tower. – According to Theorem 2.12, any reflective Birkhoff subcategory C of a finitely cocomplete exact Mal’tsev category D produces iteratively the following commutative diagram of Birkhoff reflections: D ◗◗ ♦♦⑤♦⑤ ❇❇❇◗❇◗◗◗◗ ♦ ♦ ❇❇ ◗◗◗◗ ♦♦ ⑤⑤ ❇❇ n ◗◗◗ ♦1♦♦ ⑤⑤⑤ ♦ ♦ ◗◗◗ I n+1 ❇❇ IC I ♦ C ⑤⑤ IC2 ◗◗C◗ ❇❇ I ♦♦♦♦ ⑤  } ( ⑤ ♦ ! 
♦ w♦ 1 2 n NilC (D) o NilC (D) NilC (D) o Niln+1 (D) C o C A Birkhoff subcategory of an exact Mal’tsev category is an exact Mal’tsev category so that the subcategories NilnC (D) are all exact Mal’tsev categories, and the horizontal n reflections Niln+1 C (D) → NilC (D) are central reflections by Corollary 2.11. In the special case C = {1D } we get the following commutative diagram of Birkhoff subcategories and Birkhoff reflections: CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 23 D◗ ♥♥⑤♥⑤ ❇❇◗❇◗◗◗◗◗ ♥ ♥ ❇❇ ◗◗ ♥ ⑤ ◗◗ ❇❇ ♥♥♥ ⑤⑤⑤ ♥ ♥ ❇❇ n ◗◗◗◗◗ n+1 ♥ I 1 ⑤⑤ ♥ 2 ♥ ❇ ◗◗I◗ I ⑤ I I ♥♥ ❇❇ ◗( ~⑤⑤  ! w♥♥♥ Nil1 (D) o Nil2 (D) Niln (D) o Niln+1 (D) {1D } o 1 If D is pointed, then the first Birkhoff reflection I 1 = I{⋆ : D → Nil1 (D) can be D} identified with the classical abelianisation functor D → Ab(D). In particular, the abelian group objects of D are precisely the nilpotent objects of order 1. When C is a reflective Birkhoff subcategory of a finitely cocomplete exact Mal’tsev category D, then D is C-nilpotent of class n if and only if n is the least integer such that either the unit of the n-th Birkhoff reflection ICn is invertible, or equivalently, the (n − 1)st Birkhoff reflection ICn−1 is a central reflection, see Corollary 2.11. Proposition 2.14. For an exact Mal’tsev category D with binary sums, the unit of n the n-th Birkhoff reflection ηX : X → I n (X) is given by quotienting out the iterated Smith commutator [∇X , [∇X , [∇X , · · · , ∇X ]]] of length n + 1. If D is semi-abelian, this unit is also given as the quotient of X by the iterated Huq commutator [X, [X, [X, . . . , X]]] of length n + 1. Proof. The second statement follows from the first by Remark 2.13. The first staten ment follows from the inductive construction of ηX in the proof of Theorem 2.12 together with Proposition 1.4.  A finite limit and finite colimit preserving functor is called exact. A functor between exact Mal’tsev categories with binary sums is exact if and only if it preserves finite limits, regular epimorphisms and binary sums, cf. Section 1.4. Lemma 2.15. Any exact functor F : D → E between exact Mal’tsev categories with binary sums commutes with the n-th Birkhoff reflections, i.e. IEn ◦ F ∼ = F|Niln (D) ◦ IDn . Proof. According to Theorem 2.12 and Proposition 1.4 it suffices to show that F takes the canonical factorisation of f : X ։ Y through the central extension ζf : X/[∇X , R[f ]] ։ Y to the factorisation of F (f ) : F (X) ։ F (Y ) through the central extension ζF (f ) : F (X)/[∇F (X) , R[F (f )]] ։ F (Y ). Since F is left exact, we have F (∇X ) = ∇F (X) and F (R[f ]) = R[F (f )], and since F preserves regular epimorphisms, we have F (X/[∇X , R[f ]]) = F (X)/F ([∇X , R[f ]]). It remains to be shown that F preserves Smith commutators. This follows from exactness of F and the fact that in a finitely cocomplete exact Mal’tsev category the Smith commutator is given by an explicit formula involving only finite limits and finite colimits.  3. Affine morphisms and central reflections We have seen that the nilpotency tower of a σ-pointed exact Mal’tsev category is a tower of central reflections. In this section we establish a useful general property of central reflections in exact Mal’tsev categories, namely that the unit of a central 24 CLEMENS BERGER AND DOMINIQUE BOURN reflection is pointwise affine. Since this property might be useful in other contexts as well, we first discuss possible weakenings of the notion of exactness. 3.1. Quasi-exact and efficiently regular Mal’tsev categories. 
– An exact category is a regular category in which equivalence relations are effective, i.e. arise as kernel relations of some morphism. In general, effective equivalence relations R on X have the property that the inclusion R ֌ X × X is a strong monomorphism. Equivalence relations with this property are called strong. A regular category in which strong equivalence relations are effective is called quasi-exact. Any quasi-topos (cf. Penon [58]) is quasi-exact, so that there are plenty of examples of quasi-exact categories which are not exact. There are also quasi-exact Mal’tsev categories which are not exact, for instance the category of topological groups and continuous group homomorphisms. Further weakenings of exactness occur quite naturally, as shown in the following chain of implications:

exact ⇒ quasi-exact ⇒ efficiently regular ⇒ fibrational kernel relations

A category is efficiently regular [14] if every equivalence relation (X, S) which is a regular refinement of an effective equivalence relation (X, R) is itself effective. By regular refinement we mean any map of equivalence relations (X, S) → (X, R) inducing the identity on X and a regular monomorphism S → R. We call a kernel relation (X, R) fibrational if for each fibrant map of equivalence relations (Y, S) → (X, R) the domain is an effective equivalence relation as well. According to Janelidze-Sobral-Tholen [48], a kernel relation (X, R) is fibrational precisely when its quotient map X ։ X/R has effective descent, i.e. base-change along X ։ X/R is a monadic functor. A regular category thus has fibrational kernel relations precisely when all regular epimorphisms have effective descent. The careful reader will observe that in all proofs of this section where we invoke efficient regularity we actually just need that the considered kernel relations are fibrational. The second implication above follows from the fact that any regular monomorphism is a strong monomorphism, while the third implication follows from the facts that for any fibrant map of equivalence relations f : (Y, S) → (X, R) the induced map on relations S → f ∗ (R) is a regular (even split) monomorphism, and that in any regular category, effective equivalence relations are closed under inverse image.

3.2. Fibration of points and essentially affine categories. – Recall [6] that for any category D, we denote by Pt(D) the category whose objects are split epimorphisms with chosen section (“generalised points”) of D and whose morphisms are natural transformations between such (compatible with the chosen sections), and that ¶D : Pt(D) → D denotes the functor associating to a split epimorphism its codomain. The functor ¶D : Pt(D) → D is a fibration (the so-called fibration of points) whenever D has pullbacks of split epimorphisms. The ¶D -cartesian maps are precisely pullbacks of split epimorphisms. Given any morphism f : X → Y in D, base-change along f with respect to the fibration ¶D is denoted by f ∗ : PtY (D) → PtX (D), and will be called pointed base-change in order to distinguish it from the classical base-change D/Y → D/X on slices. Pointed base-change f ∗ has a left adjoint pointed cobase-change f! if and only if pushouts along f of split monomorphisms with domain X exist in D. In this case pointed cobase-change along f is given by precisely this pushout, cf. [5]. Accordingly, the unit (resp. counit) of the (f!
, f ∗ )-adjunction is an isomorphism precisely when for each natural transformation of split epimorphisms ′ X O s f′ r  X / Y′ O s′ f  /Y r′ the downward square is a pullback as soon as the upward square is a pushout (resp. upward square is a pushout as soon as the downward square is a pullback). It is important that in a regular Mal’tsev category pointed base-change along a regular epimorphism is fully faithful : in a pullback square like above with regular epimorphism f : X ։ Y , the upward-oriented square is automatically a pushout. This follows from fact that the induced morphism on kernel relations R(s, s′ ) : R[f ] → R[f ′ ] together with the diagonal X ′ → R[f ′ ] forms a strongly epimorphic cospan because the kernel relation R[f ′ ] is the product of R[f ] and X ′ in the fibre PtX (D), and all fibres are unital in virtue of the Mal’tsev condition, cf. [4, 6]. Recall [6] that a category is called essentially affine if pushouts of split monomorphisms and pullbacks of split epimorphisms exist, and moreover for any morphism f : X → Y the pointed base-change adjunction (f! , f ∗ ) is an adjoint equivalence. Additive categories with pullbacks of split epimorphisms are essentially affine. This follows from the fact that in this kind of category every split epimorphism is a projection, and every split monomorphism is a coprojection. Conversely, a pointed essentially affine category is an additive category with pullbacks of split epimorphisms. Any slice or coslice category of an additive category with pullbacks of split epimorphisms is an example of an essentially affine category that is not pointed, and hence not additive. Therefore, the property of a morphism f : X → Y in D to induce an adjoint equivalence f ∗ : PtY (D) → PtX (D) expresses somehow a “relative additivity” of f . This motivates the following definition: Definition 3.1. In a category D with pullbacks of split epimorphisms, a morphism f : X → Y will be called ¶D -affine when the induced pointed base-change functor f ∗ : PtD (Y ) → PtD (X) is an equivalence of categories. An affine extension in D is any regular epimorphism which is ¶D -affine. Clearly any isomorphism is ¶D -affine. It follows from the analogous property of equivalences of categories that for any composable morphisms f, g in D, if two among f , g and gf are ¶D -affine then so is the third, i.e. ¶D -affine morphisms fulfill the so-called two-out-of-three property. It might be confusing that we use the term “affine” in two different contexts, namely as well for objects as well for morphisms. Although their respective definitions seem 26 CLEMENS BERGER AND DOMINIQUE BOURN unrelated at first sight, this isn’t the case. We will see in Proposition 3.2 that every affine extension is a central extension, and it will follow from Theorem 3.3 that for a reflective subcategory C of an efficiently regular Mal’tsev category D, every object of D is C-affine if and only if every object of D is an affine extension of an object of C. We hope that this justifies our double use of the term “affine”. Proposition 3.2. In any Mal’tsev category D, the kernel relation of a ¶D -affine morphism is central. In particular, each affine extension is a central extension. Proof. Let f : X → Y be an ¶D -affine morphism with kernel relation (p0 , p1 ) : R[f ] ⇒ X and let X (pX 0 , p1 ) : X × X ⇒ X be the indiscrete equivalence relation on X with section sX 0 : X → X × X. 
Since pointed base-change f ∗ is an equivalence of categories, there is a split epimorphism X (r, s) : Y ′ ⇄ Y such that f ∗ (r, s) = (pX 0 , s0 ) and we get the right hand pullback of diagram R[fˇ] o O R(pX 0 ,r) p̌1 p̌0 R(sX 0 ,s)  R[f ] o p1 p0 / / X ×O X pX 0 fˇ pX 1 /  /X / Y′ O s r f  /Y in which the left hand side are the respective kernel relations. Therefore the left hand side consists of two pullbacks, and the map pX 1 p̌0 produces the required connector between R[f ] and the indiscrete equivalence relation ∇X on X.  In particular, if an epireflection I : D → C of a Mal’tsev category D has a pointwise ¶D -affine unit ηX : X → I(X), then it is a central reflection. The following converse will be essential in understanding nilpotency. Theorem 3.3. A central reflection I of an efficiently regular Mal’tsev category D has a unit which is pointwise an affine extension. In particular, morphisms f : X → Y with invertible image I(f ) : I(X) → I(Y ) are necessarily ¶D -affine. Proof. The second assertion follows from the first since ¶D -affine morphisms fulfill the two-out-of-three property and isomorphisms are ¶D -affine. For the first assertion note that in a regular Mal’tsev category pointed base-change along a regular epimorphism is fully faithful, and hence ηY∗ : PtI(Y ) (D) → PtY (D) is a fully faithful functor. Corollary 3.6 below shows that ηY∗ is essentially surjective, hence ηY∗ is an equivalence of categories for all objects Y in D.  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 27 3.3. Centralising double relations. Given a pair (R, S) of equivalence relations on Y , we denote by RS the inverse image of the reflexive relation S × S under R (pR 0 , p1 ) : R ֌ Y × Y . This defines a double relation q1S o RS O / / SO q0S q1R q0R   Ro pS 1 pS 0 pR 1 /  /Y pR 0 actually the largest double relation relating R and S. In set-theoretical terms, this double relation RS corresponds to the subset of elements (u, v, u′ , v ′ ) of Y 4 such that the relations uRu′ , vRv ′ , uSv, u′ Sv ′ hold. Lemma 3.4. Any split epimorphism (r, s) : X ⇄ Y of a regular Mal’tsev category with epireflection I and unit η induces the following diagram R[ηR[r] ] o O R(pr0 ,Ipr0 ) R(pr1 ,Ipr1 )   R[ηX ] o O R(r,Ir) R(s,Is)  R[ηY ] o / / R[r] O ηR[r] pr1 pr0 /  / XO r /  /Y / / I(R[r]) O Ipr0 ηX s   / / IX O Ir ηY Ipr1 Is  / / IY in which the rows and the two right vertical columns represent kernel relations. The left most column represents then the kernel relation of the induced map R(r, Ir) : R[ηX ] → R[ηY ], and we have R[ηR[r] ] = R[ηX ]R[r] = R[R(r, Ir)]. Proof. This follows from Proposition 2.8 which shows that I(R[r]) may be identified with the kernel relation R[I(r)] of the split epimorphism Ir : IX → IY .  For sake of simplicity a split epimorphism (r, s) : X ⇄ Y is called a C-affine point over Y whenever its domain X is C-affine. Proposition 3.5. Let C be a reflective subcategory of an efficiently regular Mal’tsev category D with reflection I and unit η. Any C-affine point (r, s) over Y is the image under ηY∗ of a C-affine point (r̄, s̄) over IY such that both points have isomorphic reflections in C. 28 CLEMENS BERGER AND DOMINIQUE BOURN Proof. Since by Lemma 2.10 ηX has a central kernel relation, the kernel relations R[ηX ] and R[r] centralise each other. This induces a centralising double relation / R[ηX ] ×X R[r] o O / R[r] O pr0   R[ηX ] o pr1 /  /X which we consider as a fibrant split epimorphism of equivalence relations (disregarding the dotted arrows). 
Pulling back along the monomorphism / R[ηX ] o O / XO s R(s,Is) / /Y R[ηY ] o yields on the left hand side of the following diagram / RI [(r, s)] o O / XO s r  R[ηY ] o q / /Y / / X̄ O r̄ ηY s̄  / / IY another fibrant split epimorphism of equivalence relations. Since D is efficiently regular, the upper equivalence relation is effective with quotient q : X ։ X̄. We claim that the induced point (r̄, s̄) : X̄ ⇄ IY has the required properties. Indeed, the right square is a pullback by a well-known result of Barr and Kock, cf. [4, Lemma A.5.8], so that ηY∗ (r̄, s̄) = (r, s). The centralising double relation R[ηX ] ×X R[r] is coherently embedded in the double relation R[ηX ]R[r] of Lemma 3.4, cf. [4, Proposition 2.6.13]. This induces an inclusion RI [(r, s)] ֌ R[ηX ] and hence a morphism φ : X̄ ։ IX such that φq = ηX . According to Lemma 2.9, we get an isomorphism Iq : IX ∼ = I X̄ compatible with the units. The kernel relation R[ηX̄ ] is thus the direct image of the central kernel relation R[ηX ] under the regular epimorphism q : X → X̄ and as such central as well. In particular, (r̄, s̄) is a C-affine point with same reflection in C as (r, s).  Corollary 3.6. For an efficiently regular Mal’tsev category D with central reflection I and unit η, pointed base-change ηY∗ : PtI(Y ) (D) → PtY (D) is essentially surjective. Proof. Since the reflection is central, Corollary 2.11 shows that Proposition 3.5 applies to the whole fibre PtY (D) whence essential surjectivity of ηY∗ .  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 29 3.4. Affine extensions in efficiently regular Mal’tsev categories. – A functor G : E → E′ is called saturated on quotients if for each object A in E and each strong epimorphism g ′ : G(A) → B ′ in E′ , there exists a strong epimorphism g : A → B in E such that G(g) and g ′ are isomorphic under G(A). Note that a right adjoint functor G : E → E′ is essentially surjective whenever it is saturated on quotients and each object B ′ of E′ is the quotient of an object B ′′ for which the unit ηB ′′ : B ′′ → GF (B ′′ ) is invertible. Lemma 3.7. In an efficiently regular Mal’tsev category, pointed base-change along a regular epimorphism is saturated on quotients. Proof. Let f : X ։ Y be a regular epimorphism, let (r, s) be a point over Y , and l : f ∗ ((r, s)) ։ (r′ , s′ ) be a quotient map over X. Consider the following diagram pf1 ′ ′ / f′ ′ / X O ❇❇ ❇❇ l ❇❇ ❇❇! / ! ′′ /X ⑤⑤ ⑤ ∗ ∗ ⑤ f (r) f (s) ⑤⑤  }⑤} ⑤⑤ r′ / /X R[f ] o O ❈❈ ❈❈ pf0 ′ ❈❈ ❈❈! ! So ④ ④ ④ ④④  }④} ④④ pf1 o R[f ] // Y′ O l̄ f ′′ r s  ~ ~ //Y f pf0 / / Y ′′ ρ in which the right hand side square is a pullback, and the left hand side is defined by factoring the induced map on kernel relations R[f ′ ] → R[f ] through the direct image S = l(R[f ′ ]) under l. Since the right square is a pullback, the left square represents a fibrant split epimorphism of equivalence relations. The factorisation of this fibrant morphism induced by l yields two fibrant maps of equivalence relations, cf. Lemma 1.17. Note that the second (X ′′ , S) → (X, R[f ]) is a fibrant split epimorphism. Efficient regularity implies then that the equivalence relation S is effective with quotient f ′′ : U ′′ ։ V ′′ , defining a point (ρ, σ) over Y . This induces (by a well-known result of Barr and Kock, cf. [4, Lemma A.5.8]) a decomposition of the right pullback into two pullbacks. The induced regular epimorphism l̄ : (r, s) ։ (ρ, σ) has the required properties, namely f ∗ (l̄) = l.  Proposition 3.8. 
In an efficiently regular Mal’tsev category with binary sums, a regular epimorphism f : X ։ Y is an affine extension if and only if for each object Z either of the following two diagrams X +O Z Z πX f +Z ιZ  X X //Y +Z O Z πY f  ιZ Y //Y is a downward-oriented pullback square. X +Z f +Z / / Y +Z θX,Z  X ×Z θY,Z  f ×Z / / Y ×Z 30 CLEMENS BERGER AND DOMINIQUE BOURN Proof. If f is an affine extension then the downward-oriented left square is a pullback because the upward-oriented left square is a pushout, cf. Section 3.2. Moreover, the outer rectangle of the following diagram f +Z X +Z / Y +Z θY,Z θX,Z  X ×Z  / Y ×Z f ×Z pY pX  X  /Y f is a pullback if and only if the upper square is a pullback, because the lower square is always a pullback. Assume conversely that the downward oriented left square is a pullback. In a regular Mal’tsev category, pointed base-change along a regular epimorphism is fully faithful so that f is affine whenever f ∗ is essentially surjective. Lemma 3.7 shows that in an efficiently regular Mal’tsev category f ∗ is saturated on quotients. It suffices thus to show that in the fibre over X each point is the quotient of a point for which the unit of the pointed base-change adjunction is invertible. Since for each object Z, the undotted downward-oriented square X +O Z Z πX f +Z Z πY h1X ,ri   X / / Y +Z O h1Y ,f ri   //Y f is a pullback, the dotted downward-oriented square (which is induced by an arbitrary morphism r : Z → X) is a pullback as well. This holds in any regular Mal’tsev category, since the whole diagram represents a natural transformation of reflexive graphs, cf. [6]. It follows that the point (h1X , ri, ιZ X ) : X + Z ⇄ X has an invertible unit with respect to the pointed base-change adjunction (f! , f ∗ ). Now, an arbitrary point (r, s) : Z ⇄ X can be realised as a quotient Z ❅o o _❅❅❅❅ ❅❅❅❅r ❅❅❅❅ s ❅❅❅ hs,1Z i X; + Z ①①①① ① ①①①① ①①①① ①{①①①① h1X ,ri ιZ X X of the latter point (h1X , ri, ιZ X ) : X + Y ⇄ X with invertible unit.  We end this section with several properties of affine extensions in semi-abelian categories. They will only be used in Section 6. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 31 Proposition 3.9. In a semi-abelian category, a regular epimorphism f : X ։ Y is an affine extension if and only if either of the following conditions is satisfied: (a) for each object Z, the induced map f ⋄ Z : X ⋄ Z → Y ⋄ Z is invertible, where X ⋄ Z stands for the kernel of θX,Z : X + Z ։ X × Z; (b) every pushout of f along a split monomorphism is a pullback. Proof. That condition (a) characterises affine extensions follows from Proposition 3.8 and protomodularity. The necessity of condition (b) follows from Section 3.2. The sufficiency of condition (b) follows from the “pullback cancellation property” in semi abelian categories, cf. [4, Proposition 4.1.4]. Remark 3.10. This product X ⋄ Z is often called the co-smash product of X and Z, since it is the dual of the smash product as investigated by Carboni-Janelidze [16] in the context of lextensive categories. The co-smash product X ⋄ Z coincides in semiabelian categories with the second cross-effect cr2 (X, Z) of the identity functor, cf. Definition 5.1 and [54, 38, 39]. Since the co-smash product is in general not associative (cf. [16]), parentheses should be used with care. Proposition 3.11. Let Y −→ W ←− Z and Ȳ −→ W̄ ←− Z̄ be cospans in the fibre PtX (D) of a semi-abelian category D. Let f : Y ։ Ȳ , g : Z ։ Z̄, h : W ։ W̄ be affine extensions in D inducing a map of cospans in PtX (D). 
Assume furthermore that the first cospan realises W as the binary sum of Y and Z in PtX (D). Then the second cospan realises W̄ as the binary sum of Ȳ and Z̄ in PtX (D) if and only if the kernel cospan K[f ] −→ K[h] ←− K[g] is strongly epimorphic in D. Proof. Let us consider the following commutative diagram X / i /Y  f / / Ȳ  j  Z / g  Z̄ /   / W h1 / / W1 ❈❈ ❈❈h ❈❈ h2 ❈! !   / W2 / / W̄ in which i (resp. j) denotes the section of the point Y (resp. Z) over X, and all little squares except the lower right one are pushouts. It follows that the outer square is a pushout (i.e. W̄ = Ȳ +X Z̄) if and only if the lower right square is a pushout. According to Carboni-Kelly-Pedicchio [17, Theorem 5.2] this happens if and only if the kernel relation R[h] is the join of the kernel relations R[h1 ] and R[h2 ]. In a semiabelian category this is the case if and only if the kernel K[h] is generated as normal subobject of W by the kernels K[h1 ] and K[h2 ], resp. (since h1 and h2 are affine extensions) by the kernels K[f ] and K[g], cf. Proposition 3.9b. Now, h is also an affine extension so that by Proposition 3.2, the kernel K[h] is a central subobject of W . In particular, any subobject of K[h] is central and normal in W (cf. the characterisation of normal subobjects in semi-abelian categories by Mantovani-Metere [54, Theorem 6.3]). Therefore, generating K[h] as normal subobject of W amounts to the same as generating K[h] as subobject of W .  32 CLEMENS BERGER AND DOMINIQUE BOURN 4. Aspects of nilpotency Recall that a morphism is called n-fold centrally decomposable if it is the composite of n morphisms with central kernel relation. For consistency, a monomorphism is called 0-fold centrally decomposable, and an isomorphism a 0-fold central extension. Proposition 4.1. For all objects X, Y of a σ-pointed n-nilpotent Mal’tsev category, the comparison map θX,Y : X + Y → X × Y is (n − 1)-fold centrally decomposable. Proof. In a pointed n-nilpotent Mal’tsev category, each object maps to an abelian group object through an (n − 1)-fold centrally decomposable morphism, cf. Proposition 2.7. Since the codomain of such a morphism φX,Y : X + Y → A is an abelian group object, the restrictions to the two summands commute and φX,Y factors θX,Y X + Y❍ ❍❍ ❍❍ φX,Y ❍$ A / / X ×Y ✈✈ ✈ ✈ψ ✈ z✈ X,Y so that θX,Y is (n − 1)-fold centrally decomposable by Lemma 1.3.  Proposition 4.2. For any finitely cocomplete regular pointed Mal’tsev category, the following pushout square X +X h1X ,1X i  X θX,X / / X ×X  / / A(X) defines the abelianisation A(X) of X. In particular, the lower row can be identified 1 with the unit ηX : X → I 1 (X) of the strong epireflection of Theorem 2.12. Proof. The first assertion follows by combining [4, Proposition 1.7.5, Theorems 1.9.5 and 1.9.11] with the fact that pointed Mal’tsev categories are strongly unital in the sense of the second author, cf. [4, Corollary 2.2.10]. The second assertion expresses the fact that X ։ A(X) and X ։ I 1 (X) share the same universal property.  Theorem 4.3. A σ-pointed exact Mal’tsev category is n-nilpotent if and only if for all objects X, Y the comparison map θX,Y : X + Y → X × Y is an (n − 1)-fold central extension. Proof. By Proposition 4.1 n-nilpotency implies that θX,Y is an (n − 1)-fold central extension. For the converse, consider the pushout square of Proposition 4.2, which is 1 regular by Corollary 1.8. The unit ηX : X → I 1 (X) is thus an (n − 1)-fold central extension by Proposition 1.13 so that all objects are n-nilpotent.  Corollary 4.4. 
For a σ-pointed exact Mal’tsev category, the following three properties are equivalent: (a) the category is 1-nilpotent; (b) the category is linear (cf. Definition 5.1); CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 33 (c) the category is abelian. Proof. The equivalence of (a) and (b) follows from Theorem 4.3. The equivalence of (b) and (c) follows from the fact that a σ-pointed Mal’tsev category is additive if and only if it is linear (cf. [4, Theorem 1.10.14]) together with the well-known fact (due to Miles Tierney) that abelian categories are precisely the additive categories among exact categories.  Theorem 4.5. For a σ-pointed exact Mal’tsev category, the following five properties are equivalent: (a) (b) (b′ ) (c) (c′ ) all objects are 2-nilpotent; 1 for all X, abelianisation ηX : X → I 1 (X) is a central extension; 1 for all X, abelianisation ηX : X → I 1 (X) is an affine extension; for all X, Y , the map θX,Y : X + Y → X × Y is a central extension; for all X, Y , the map θX,Y : X + Y → X × Y is an affine extension. Proof. Properties (a) and (b) are equivalent by definition of 2-nilpotency. Theorem 4.3 shows that (b) and (c) are equivalent. Theorem 3.3 and Proposition 3.2 imply that (b) and (b′ ) are equivalent. Finally, since the first Birkhoff reflection I 1 preserves binary sums and binary products (cf. Proposition 2.8), we get I 1 (θX,Y ) = θI 1 (X),I 1 (Y ) which is invertible in the subcategory of 1-nilpotent objects by Proposition 4.1. It follows that under assumption (a), the map θX,Y is an affine extension by Theorem 3.3, which is property (c′ ). Conversely, (c′ ) implies (c) by Proposition 3.2.  4.1. Niltensor products. In order to extend Proposition 4.5 to higher n we introduce here a new family of binary tensor products, called niltensor products. For any finitely cocomplete pointed regular Mal’tsev category (D, ⋆D ) the n-th niltensor product X ⊗n Y is defined by factorizing the comparison map θX,Y universally n into a regular epimorphism θX,Y : X + Y → X ⊗n Y followed by an (n − 1)-fold central extension ω n−1 ω2 X,Y X,Y / / X ⊗n−1 Y X ⊗3 Y❞❞❞❞❞/2 /2 X ⊗2 Y◆ X; ⊗n Y ❞ 4 ❞ 4 ◆◆◆ ω1 ✇; n θX,Y ❞❞❞❞❞❞❞ ❥❥❥❥ ✇✇ ◆◆◆X,Y ❞❞❞❞❞❞❞ n−1 ❞ ❞ ✇✇ ❥❥❥❥❥❥❥θX,Y ❞ ❞ ◆◆◆ ❞ ✇ ❞ ❞ ❞ 2 ❞ ✇ ❥ ❞ ❞ ❥ ❞ θX,Y ◆& & ❥ ❞❞❞ ✇✇❥ ❞❥❞❞❞❞❞❞ / /X ×Y X +Y θX,Y as provided by Proposition 1.5. This n-th niltensor product is symmetric and has ⋆D as unit, but it does not seem to be associative in general. 34 CLEMENS BERGER AND DOMINIQUE BOURN Proposition 4.6. In a σ-pointed exact Mal’tsev category, the following diagram is an iterated pushout diagram X: ⊗n X n θX,X ✉✉: ✉ ✉ ✉✉ ✉✉ X +X  X● ●● ●● ●● n ●●#  ηX #  I n (X) n−1 ωX,X / / X ⊗n−1 X X ⊗3 X θX,X ηX  / / I n−1 (X)  I (X) 3 2 ωX,X / / X ⊗2 X ◆◆◆ ω1 ◆◆◆X,X ◆◆◆ ◆' ' / / X ×X  / / I 1 (X) q8 8 qqq q q  qqq / / I 2 (X) where the left vertical map is the folding map h1X , 1X i : X + X → X. Proof. This follows from Corollary 1.8 and Propositions 1.13 and 4.2.  Theorem 4.7. For a σ-pointed exact Mal’tsev category, the following five properties are equivalent: (a) (b) (b′ ) (c) (c′ ) all objects are n-nilpotent; n−1 for all X, the (n − 1)th unit ηX : X ։ I n−1 (X) is a central extension; n−1 for all X, the (n − 1)th unit ηX : X ։ I n−1 (X) is an affine extension; n−1 for all X, Y , the map θX,Y : X + Y → X ⊗n−1 Y is a central extension; n−1 for all X, Y , the map θX,Y : X + Y → X ⊗n−1 Y is an affine extension. Proof. Properties (a) and (b) are equivalent by definition of n-nilpotency. 
Theorem 4.3 shows that (a) implies (c) while Proposition 4.6 shows that (c) implies (b). Therefore, (a), (b) and (c) are equivalent. Proposition 3.2 and Theorem 3.3 imply that (b) and (b′ ) are equivalent. Finally, Proposition 1.13 implies that the Birkhoff reflection I n−1 takes the comn−1 parison map θX,Y to the corresponding map θIn−1 n−1 (X),I n−1 (Y ) for the (n − 1)-nilpotent n−1 n−1 objects I (X) and I (Y ). Since the (n − 1)-nilpotent objects form an (n − 1)nilpotent Birkhoff subcategory, Theorem 4.3 shows that the latter map must be invertible; therefore, (a) implies (c′ ) by Theorem 3.3. Conversely, (c′ ) implies (c) by Proposition 3.2.  Definition 4.8. A σ-pointed Mal’tsev category is said to be pseudo-additive (resp. n pseudo-n-additive) if for all X, Y, the map θX,Y : X + Y ։ X × Y (resp. θX,Y : X + Y ։ X ⊗n Y ) is an affine extension. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 35 Proposition 4.9. A σ-pointed exact Mal’tsev category is pseudo-additive (i.e. 2nilpotent) if and only if the following diagram (X + Y ) + Z θX,Y +Z / / (X × Y ) + Z θX×Y,Z θX+Y,Z  (X + Y ) × Z  / / (X × Y ) × Z θX,Y ×Z is a pullback for all objects X, Y, Z. Proof. This follows from Theorem 4.5 and Proposition 3.8.  Proposition 4.10. A σ-pointed exact Mal’tsev category is pseudo-(n − 1)-additive (i.e. n-nilpotent) if and only if the following diagram (X + Y ) + Z n−1 θX,Y +Z θX+Y,Z  (X ⊗n−1 Y ) + Z / / (X + Y ) × Z  θX⊗n−1 Y,Z n−1 θX,Y ×Z / / (X ⊗n−1 Y ) × Z is a pullback for all objects X, Y, Z. Proof. This follows from Theorem 4.7 and Proposition 3.8.  We end this section with a general remark about the behaviour of n-nilpotency under slicing and passage to the fibres. Note that any left exact functor between Mal’tsev categories preserves central equivalence relations, morphisms with central kernel relation, and consequently n-nilpotent objects. Proposition 4.11. If D is an n-nilpotent Mal’tsev category, then so are any of its slice categories D/Y and of its fibres PtY (D). Proof. The slices D/Y of a Mal’tsev category D are again Mal’tsev categories. Moreover, base-change ωY∗ : D → D/Y is a left exact functor so that the objects of D/Y of the form ωY∗ (X) = pY : Y × X → Y are n-nilpotent provided D is an n-nilpotent Mal’tsev category. We can conclude with Proposition 2.3 by observing that any object f : X → Y of D/Y may be considered as a subobject (f,1X ) / Y ×X X ❆/ ❆❆ ✇✇ ❆❆ ✇✇ ✇ ❆ ❆ {✇✇✇ pY f Y 36 CLEMENS BERGER AND DOMINIQUE BOURN of ωY∗ (X) in D/Y . The proof for the fibres is the same as for the slices, since any object (r, s) of PtY (D) may be considered as a subobject (r,1X ) / Y ×X X `❆/ ❆❆❆❆ s ①①①①< ① p ❆❆❆❆ Y ①① ① ① ❆❆❆❆ ①①①①① ① ❆ ❆ ① ① (1 ❆ ❆ Y ,s) r ❆❆❆ |①①①①① Y of the projection pY : Y × X → Y splitted by (1Y , s) : Y → Y × X.  5. Quadratic identity functors We have seen that 1-nilpotency has much to do with linear identity functors (cf. Corollary 4.4). We now investigate the relationship between 2-nilpotency and quadratic identity functors, and below in Section 6, the relationship between n-nilpotency and identity functors of degree n. While a linear functor takes binary sums to binary products, a quadratic functor takes certain cubes constructed out of triple sums to limit cubes. This is the beginning of a whole hierarchy assigning degree ≤ n to a functor whenever the functor takes certain (n + 1)-dimensional cubes constructed out of iterated sums to limit cubes. 
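As a toy illustration (in the classical abelian setting, not taken from the present paper) of what this hierarchy measures, consider the endofunctor F(X) = X ⊗ X of abelian groups. On a binary sum one finds

F(X ⊕ Y) ≅ F(X) ⊕ F(Y) ⊕ (X ⊗ Y) ⊕ (Y ⊗ X),

so F does not take binary sums to binary products and hence is not linear. On a triple sum, however, F(X ⊕ Y ⊕ Z) decomposes into the nine terms A ⊗ B with A, B ∈ {X, Y, Z}, each of which involves at most two of the three arguments; this is precisely the kind of behaviour that the limit-cube condition for degree ≤ 2 below detects, and F is quadratic.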
This definition of degree of a functor is much inspired by Goodwillie [29] who described polynomial approximations of a homotopy functor in terms of their behaviour on certain cubical diagrams. Eilenberg-Mac Lane [23] defined the degree of a functor with values in an abelian category by a vanishing condition of so-called cross-effects. Our definition of degree does not need cross-effects. Yet, a functor with values in a semi-abelian (or homological [4]) category is of degree ≤ n precisely when all its cross-effects of order n + 1 vanish, cf. Corollary 6.17. Our cubical cross-effects agree up to isomorphism with those of Hartl-Loiseau [38] and Hartl-Van der Linden [39], which are defined as kernel intersections. There are several other places in literature where degree n functors, especially quadratic functors, are studied in a non-additive context, most notably Baues-Pirashvili [1], Johnson-McCarthy [50] and Hartl-Vespa [40]. In all these places, the definition of a degree n functor is based on a vanishing condition of cross-effects, closely following the original approach of Eilenberg-Mac Lane. It turned out that for us Goodwillie’s cubical approach to functor calculus was more convenient. Our Definition 6.3 of an n-folded object and the resulting characterisation of degree n identity functors in terms of n-folded objects (cf. Proposition 6.5) rely in an essential way on cubical combinatorics. 5.1. Degree and cross-effects of a functor. – An n-cube in a category E is given by a functor Ξ : [0, 1]n → E with domain the n-fold cartesian product [0, 1]n of the arrow category [0, 1]. The category [0, 1] has two objects 0,1 and exactly one non-identity arrow 0 → 1. Thus, an n-cube in E is given by objects Ξ(ǫ1 , . . . , ǫn ) in E with ǫi ∈ {0, 1}, and CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 37 ǫ′ ,...,ǫ′ arrows ξǫ11,...,ǫnn : Ξ(ǫ1 , . . . , ǫn ) → Ξ(ǫ′1 , . . . , ǫ′n ) in E, one for each arrow in [0, 1]n , which compose in an obvious way. To each n-cube Ξ we associate a punctured n-cube Ξ̌ obtained by restriction of Ξ to the full subcategory of [0, 1]n spanned by the objects (ǫ1 , . . . , ǫn ) 6= (0, . . . , 0). Definition 5.1. Let (E, ⋆E ) be a σ-pointed category. For each n-tuple of objects (X1 , . . . , Xn ) of E we denote by ΞX1 ,...,Xn the following n-cube: • ΞX1 ,...,Xn (ǫ1 , . . . , ǫn ) = X1 (ǫ1 ) + · · · + Xn (ǫn ) with X(0) = X and X(1) = ⋆E ; ′n ǫ′ ǫ′ ,...,ǫ′ • ξǫ11,...,ǫnn = jǫ11 + · · · + jǫǫn ′ where jǫǫ is the identity if ǫ = ǫ′ , resp. the null morphism if ǫ 6= ǫ′ . A functor of σ-pointed categories F : (E, ⋆E ) → (E′ , ⋆E′ ) is called of degree ≤ n if • F (⋆E ) ∼ = ⋆ E′ ; • for each (n + 1)-cube ΞX1 ,...,Xn+1 in E, the image-cube F ◦ ΞX1 ,...,Xn+1 is a limit-cube in E′ , i.e. F (X1 + · · · + Xn+1 ) may be identified with the limit of the punctured image-cube F ◦ Ξ̌X1 ,...,Xn+1 . A functor of degree ≤ 1 (resp. ≤ 2) is called linear (resp. quadratic). A σ-pointed category is called linear (resp. quadratic) if its identity functor is. If E′ has pullbacks, the limit over the punctured image-cube is denoted F = PX 1 ,...,Xn+1 lim ←− F ◦ Ξ̌X1 ,...,Xn+1 [0,1]n+1 −(0,...,0) and the associated comparison map is denoted F F . : F (X1 + · · · + Xn+1 ) → PX θX 1 ,...,Xn+1 1 ,...,Xn+1 The (n + 1)-st cross-effects of the functor F : E → E′ are the total kernels of the F image-cubes, i.e. the kernels of the comparison maps θX : 1 ,...,Xn+1 F F ]. crn+1 (X1 , . . . , Xn+1 ) = K[θX 1 ,...,Xn+1 If F is the identity functor, the symbol F will be dropped from the notation. 
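Two hedged examples, recording standard facts not proved in the text above, indicate what these cross-effects measure for the identity functor. In D = Ab the comparison map θX,Y : X + Y → X × Y is an isomorphism, so cr2(X, Y) = 0 and the identity functor is linear (compare Corollary 4.4). In D = Grp, on the contrary, cr2(X, Y) = K[θX,Y] is the kernel of the canonical map from the free product X + Y = X ∗ Y to the direct product X × Y; it contains all commutators [x, y] with x ∈ X and y ∈ Y (and is classically known to be a free group on the commutators with x ≠ 1 ≠ y), so it is non-trivial as soon as X and Y are. How such cross-effects interact with nilpotency is the theme of the remainder of this section and of Section 6.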
A functor F : E → E′ has degree ≤ n if and only if for all (n + 1)-tuples (X1 , . . . , Xn+1 ) F of objects of E, the comparison maps θX are invertible. 1 ,...,Xn+1 F Our total-kernel definition of the cross-effect crn+1 (X1 , . . . , Xn+1 ) is directly inspired by Goodwillie [29, pg. 676] but agrees up to isomorphism with the kernel intersection definition of Hartl-Loiseau [38] and Hartl-Van der Linden [39]. Their kernel intersection is dual to the (n + 1)-fold smash product of Carboni-Janelidze [16], cf. Remark 3.10 and also Remark 6.2, where the duality between cross-effects and smash-products is discussed in more detail. Indeed, each of the n + 1 “contraction morphisms” F ci + · · · + Xn+1 ), 1 ≤ i ≤ n + 1, πX̂ : F (X1 + · · · + Xn+1 ) → F (X1 + · · · + X i 38 CLEMENS BERGER AND DOMINIQUE BOURN F factors through θX so that we get a composite morphism 1 ,...,Xn+1 rF : F (X1 + · · · + Xn+1 ) F θX 1 ,...,X −→ n+1 F ֌ PX 1 ,...,Xn+1 n+1 Y i=1 ci + · · · + Xn+1 ) F (X1 + · · · + X F embedding the limit construction PX into an (n + 1)-fold cartesian product. 1 ,...,Xn+1 F Therefore, the kernel of θX1 ,...,Xn+1 coincides with the kernel of rF K[rF ] = n+1 \ F K[πX̂ ], i i=1 which is precisely the kernel intersection of [38, 39] serving as their definition for the F (n + 1)st cross-effect crn+1 (X1 , . . . , Xn+1 ) of the functor F . For n = 1 and F = idE we get the following 2-cube ΞX,Y X +Y πŶ  X πX̂ /Y  / ⋆E so that the limit PX,Y of the punctured 2-cube is X × Y and the comparison map θX,Y : X + Y → X × Y is the one already used before. In particular, the just introduced notion of linear category is the usual one. For n = 2 and F = idE we get the following 3-cube ΞX,Y,Z π Ŷ /: Z Y 7 +O Z ♥ πX̂ ✈✈ O ♥ ♥ ✈ ♥ ✈ ✈✈ ♥♥♥ πŶ / X + YO + Z X +O Z π πX̂ Ẑ πẐ  X +Y  Y 7 ♦ πX̂ ♦♦♦ ♦♦ ♦ ♦ ♦ πŶ  / ⋆E : ✈ πẐ ✈✈✈  ✈✈✈ /X which induces a split natural transformation of 2-cubes: ΞX,Y + Z ⇄ ΞX,Y For sake of simplicity, we denote by +Z the functor E → PtZ (E) which takes an object X to X + Z → Z with obvious section, and similarly, we denote by ×Z the functor E → PtZ (E), which takes an object X to X × Z → Z with obvious section. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 39 The previous split natural transformation of 2-cubes induces a natural transformation of split epimorphisms X +YO +Z πẐ  X +Y +Z θX,Y '&%$ !"# 2 θX,Y / (X + Z) ×Z (Y + Z) O  / X ×Y the comparison map of which may be identified with the comparison map θX,Y,Z : X + Y + Z → PX,Y,Z of ΞX,Y,Z . In particular, the category E is quadratic if and only if square (2) is a pullback square. Notice that in a regular Mal’tsev category, the downward-oriented square (2) is necessarily a regular pushout by Corollary 1.7. Proposition 5.2. In a σ-pointed regular Mal’tsev category, the comparison map θX,Y,Z is a regular epimorphism with kernel relation R[πX̂ ] ∩ R[πŶ ] ∩ R[πẐ ]. Proof. The first assertion expresses the regularity of pushout (2), the second follows +Z +Z from identities R[θX,Y ] ∩ R[πẐ ] which hold ] = R[πX̂ ] ∩ R[πŶ ] and R[θX,Y,Z ] = R[θX,Y +Z because both, θX,Y and θX,Y,Z , are comparison maps.  Lemma 5.3. A σ-pointed category with pullbacks is quadratic if and only if square (2 ′ ) of the following diagram X +Y +Z +Z θX,Y  (X + Z) ×Z (Y + Z) θX+Y,Z /.-, ()*+ 2′ / (X + Y ) × Z / X +Y ×Z θX,Y θX,Z ×Z θY,Z  / (X × Z) ×Z (Y × Z) 7654 0123 2′′ θX,Y  / X ×Y is a pullback square. Proof. Composing squares (2′ ) and (2′′ ) yields square (2) above. 
Square (2′′ ) is a pullback since (X × Z) ×Z (Y × Z) is canonically isomorphic to (X × Y ) × Z.  5.2. The main diagram. We shall now give several criteria for quadraticity. For this we consider the following diagram (X + Y ) ⋄ Z / θX,Y ⋄Z  (X × Y ) ⋄ Z / ϕZ X,Y  (X ⋄ Z) × (Y ⋄ Z) / θX+Y,Z / (X + Y ) + Z θX,Y +Z  / (X × Y ) + Z φZ / / (X + Y ) × Z '&%$ !"# a θX,Y ×Z  / / (X × Y ) × Z θX×Y,Z  X,Y / (X + Z) ×Z (Y + Z) '&%$ !"# b µZ  X,Y / / (X × Z) ×Z (Y × Z) θX,Z ×Z θY,Z +Z ×Z ⋄Z in which the vertical composite morphisms from left to right are θX,Y , θX,Y , θX,Y , the horizontal morphisms on the left are the kernel-inclusions of the horizontal regular epimorphisms on their right, and µZ X,Y is the canonical isomorphism. 40 CLEMENS BERGER AND DOMINIQUE BOURN Observe that square (b) is a pullback if and only if the canonical map φZ X,Y : (X × Y ) + Z → (X + Z) ×Z (Y + Z) is invertible. This is the case if and only if for each Z pointed cobase-change (αZ )! : D → PtZ (D) along the initial map αZ : ⋆D → Z preserves binary products, cf. Section 3.2. Proposition 5.4. A σ-pointed regular Mal’tsev category is quadratic if and only if squares (a) and (b) of the main diagram are pullback squares. Proof. Since composing squares (a) and (b) yields square (2′ ) of Lemma 5.3, the condition is sufficient. Lemma 1.17 and Corollary 1.16 imply that the condition is necessary as well.  Theorem 5.5. A σ-pointed exact Mal’tsev category is quadratic if and only if it is 2-nilpotent and pointed cobase-change along initial maps preserves binary products. Proof. By Proposition 5.4, the category is quadratic if and only if the squares (a) and (b) are pullback squares, i.e. the category is 2-nilpotent by Proposition 4.9, and pointed cobase-change along initial maps preserves binary products.  Corollary 5.6. A semi-abelian category is quadratic1 if and only if either of the following three equivalent conditions is satisfied: (a) all objects are 2-nilpotent, and the comparison maps ϕZ X,Y : (X × Y ) ⋄ Z → (X ⋄ Z) × (Y ⋄ Z) are invertible for all objects X, Y, Z; (b) the third cross-effects of the identity functor cr3 (X, Y, Z) = K[θX,Y,Z ] vanish for all objects X, Y, Z; (c) the co-smash product is linear, i.e. the canonical comparison maps ⋄Z θX,Y : (X + Y ) ⋄ Z → (X ⋄ Z) × (Y ⋄ Z) are invertible for all objects X, Y, Z. Proof. Theorem 5.5 shows that condition (a) amounts to quadraticity. For condition (b) note that by protomodularity the cross-effect K[θX,Y,Z ] vanishes if and only if the regular epimorphism θX,Y,Z is invertible. The equivalence of conditions (b) and (c) follows from the isomorphism of kernels ⋄Z K[θX,Y,Z ] ∼ ]. The latter is a consequence of the 3 × 3-lemma which, applied = K[θX,Y to main diagram 5.2 and square (2), yields the chain of isomorphisms K[θ⋄Z ] ∼ = K[K[θ+Z ] ։ K[θ×Z ]] ∼ = K[θX,Y,Z ]. X,Y X,Y X,Y  1The authors of [20, 38, 39] call a semi-abelian category two-nilpotent if each object has a vanishing ternary Higgins commutator, cf. Remark 6.4. By Proposition 6.5 this means that the identity functor is quadratic. Corollaries 5.6 and 5.16 describe how to enhance a 2-nilpotent semi-abelian category in our sense so as to get a two-nilpotent semi-abelian category in their sense. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 41 5.3. Algebraic distributivity and algebraic extensivity. 
– We shall see that in a pseudo-additive regular Mal’tsev category (D, ⋆D ), pointed cobase-change along initial maps αZ : ⋆D → Z preserves binary products if and only if pointed base-change along terminal maps ωZ : Z → ⋆D preserves binary sums. The latter condition means that for all objects X, Y, Z, the following square Z / Y ×Z  X ×Z  / (X + Y ) × Z is a pushout, inducing thus for all objects X, Y, Z an isomorphism (X × Z) +Z (Y × Z) ∼ = (X + Y ) × Z which can be considered as an algebraic distributivity law. This suggests the following definitions, where the added adjective “algebraic” means here that the familiar definition has to be modified by replacing the slices of the category with the fibres of the fibration of points, cf. Section 3.2 and Carboni-Lack-Walters [18]. Definition 5.7. A category with pullbacks of split epimorphisms is algebraically distributive if pointed base-change along terminal maps preserves binary sums. A category with pullbacks of split epimorphisms and pushouts of split monomorphisms is algebraically extensive if any pointed base-change preserves binary sums. We get the following implications between several in literature studied “algebraic” notions, where we assume that pullbacks of split epimorphisms and (whenever needed) pushouts of split monomorphisms exist: (5.11)  local alg. cartesian closure +3 alg. extensivity (5.8) +3 alg. coherence (5.9) (5.9) qy   +3 alg. distributivity protomodularity  alg. cartesian T\ closure (5.11) The existence of centralisers implies algebraic cartesian closure [13] and hence algebraic distributivity, cf. Section 1.5. The categories of groups and of Lie algebras are not only algebraically cartesian closed, but also locally algebraically cartesian closed [35, 36], which means that any pointed base-change admits a right adjoint. Algebraic coherence, cf. Cigoli-Gray-Van der Linden [20], requires any pointed basechange to be coherent, i.e. to preserve strongly epimorphic cospans. Lemma 5.8. An algebraically extensive regular category is algebraically coherent. Proof. In a regular category, pointed base-change preserves regular epimorphisms. Henceforth, if the fibres have binary sums and pointed base-change preserves them, pointed base-change also preserves strongly epimorphic cospans.  42 CLEMENS BERGER AND DOMINIQUE BOURN Lemma 5.9 (cf. [10], Theorem 3.10 and [20], Theorem 6.1). An algebraically coherent pointed Mal’tsev category is protomodular and algebraically distributive. Proof. To any split epimorphism (r, s) : Y ⇄ X we associate the split epimorphism (r̄ = r × 1X , s̄ = s × 1X ) : Y × X ⇄ X × X r̄ Y × Xc● o ●●●● ●●●●(s,1X ) ●●●● ●● p2 ●●●●● ●● # / X; × X ①①①① ① ① ①①①①① ①①①①①p2 ① ①{①①①① s̄ (1X ,1X ) X in the fibre over X. The kernel of (r̄, s̄) may be identified with the given point (r, s) over X where the kernel-inclusion is defined by (1Y , r) : Y ֌ Y × X. Kernelinclusion and section strongly generate the point Y × X over X, cf. [10, Proposition 3.7]. Pointed base-change along αX : ⋆ → X takes (r̄, s̄) back to (r, s), so that by algebraic coherence, section and kernel-inclusion of (r, s) strongly generate Y . In a pointed category this amounts to protomodularity. For the second assertion observe that if F and G are composable coherent functors such that G is conservative and GF preserves binary sums, then F preserves binary sums as well; indeed, the isomorphism GF (X) + GF (Y ) → GF (X + Y ) decomposes into two isomorphisms GF (X) + GF (Y ) → G(F (X) + F (Y )) → GF (X + Y ). 
This ∗ applies to F = ωZ and G = α∗Z (where αZ : ⋆ → Z and ωZ : Z → ⋆) because ∗ ∗ preserves binary sums for all Z.  ωZ αZ = idZ and αZ is conservative, so that ωZ Lemma 5.10. A σ-pointed exact Mal’tsev category is algebraically extensive if and only if it is a semi-abelian category with exact pointed base-change functors. Proof. This follows from Lemmas 5.8 and 5.9 and the fact that a left exact and regular epimorphism preserving functor between semi-abelian categories is right exact if and  only if it preserves binary sums, cf. Section 1.4. For the following lemma a variety means a category equipped with a forgetful functor to sets which is monadic and preserves filtered colimits. Every variety is bicomplete, cowellpowered, and has finite limits commuting with filtered colimits. Lemma 5.11 (cf. [35], Theorem 2.9). A semi-abelian variety is (locally) algebraically cartesian closed if and only if it is algebraically distributive (extensive). Proof. Since the fibres of a semi-abelian category are semi-abelian, the pointed basechange functors preserve binary sums if and only if they preserve finite colimits, cf. Section 1.4. Since any colimit is a filtered colimit of finite colimits, and pointed basechange functors of a variety preserve filtered colimits, they preserve binary sums if and only if they preserve all colimits. It follows then from Freyd’s special adjoint functor theorem that a pointed base-change functor of a semi-abelian variety preserves binary sums if and only if it has a right adjoint.  A pointed category D with pullbacks is called fibrewise algebraically cartesian closed (resp. distributive) if for all objects Z of D the fibres PtZ (D) are algebraically CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 43 cartesian closed (resp. distributive). This is the case if and only if pointed base-change along every split epimorphism has a right adjoint (resp. preserves binary sums). Any algebraically coherent pointed Mal’tsev category is fibrewise algebraically distributive, cf. the proof of Lemma 5.9. Proposition 5.12. For a pointed regular Mal’tsev category, fibrewise algebraic cartesian closure is preserved under strong epireflections. Proof. Let (r, s) : X ⇄ Y be a point in a strongly epireflective subcategory C of a fibrewise algebraically cartesian closed regular Mal’tsev category D. Let f : Y → Z be a split epimorphism in C. We shall denote f∗ : PtY (D) → PtZ (D) the right adjoint of pointed base-change f ∗ : PtZ (D) → PtY (D). Consider the following diagram φ̄ f¯ ¯ / X̄ X̄ O ❈❈ O ❈o ❈ ❈❈ ηX̄¯ ❈❈ ηX̄ ❈❈ ❈❈ r̄ s̄ r̄¯ s̄¯ ❈! ! ❈! ! ¯) I(ε) I( f ¯ o / X I(X̄ ) I(X̄) ③ ③ ⑧⑧ ③ ③ ⑧ ③③ I(r̄¯) ③③ ⑧⑧  }③③③ I(r̄)  }③③③  ⑧⑧ r f /Z Y Y o XO ❂❂o ❂❂❂❂ ❂ r s ❂❂❂❂ ❂❂❂ ε where (r̄, s̄) = f∗ (r, s), the downward-oriented right square is a pullback, and ε : (r̄¯, s̄¯) = f ∗ f∗ (r, s) → (r, s) is the counit at (r, s). Since D is a regular Mal’tsev category, the strong epireflection I preserves this pullback of split epimorphisms (cf. ¯ ) is isomorphic to f ∗ (I(X̄)). Since by adjunction, maps Proposition 2.8) so that I(X̄ ¯ ) = f ∗ (I(X̄)) → X there is a I(X̄) → f∗ (X) = X̄ correspond bijectively to maps I(X̄ ∗ unique dotted map φ̄ : I(X̄) → X̄ such that ε ◦ f (φ̄) = I(ε). Accordingly we get φ̄ηX̄ = 1X̄ so that ηX̄ is invertible and hence X̄ belongs to C. This shows that the right adjoint f∗ : PtY (D) → PtZ (D) restricts to a right adjoint f∗ : PtY (C) → PtZ (C) so that C is fibrewise algebraically cartesian closed.  
For regular Mal’tsev categories, algebraic cartesian closure amounts to the existence of centralisers for all (split) subobjects, see Section 1.5. Part of Proposition 5.12 could thus be reformulated by saying that in this context the existence of centralisers is preserved under strong epireflections, which can also be proved directly. In a varietal context, Proposition 5.12 also follows from Lemmas 5.11 and 5.17. Lemma 5.13. If an algebraically extensive semi-abelian (or homological [4]) category D has an identity functor of degree ≤ n, then all its fibres PtZ (D) as well. Proof. The kernel functors (αZ )∗ : PtZ (D) → D are conservative and preserve binary Pt (D) sums. Therefore, the kernel functors preserve the limits PX1Z,...,Xn+1 and the comparPt (D) ison maps θX1Z,...,Xn+1 . Accordingly, if the identity functor of D is of degree ≤ n, then Pt (D) α∗Z (θX1 ,...,Xn+1 ) is invertible, hence so is θX1Z,...,Xn+1 , for all objects X1 , . . . , Xn+1 of the fibre PtZ (D).  44 CLEMENS BERGER AND DOMINIQUE BOURN It should be noted that in general neither algebraic extensivity nor local algebraic cartesian closure is preserved under Birkhoff reflection. This is in neat contrast to (fibrewise) algebraic distributivity and algebraic coherence, which are preserved under strong epireflections, cf. Proposition 5.12 and [20, Proposition 3.7]. 5.4. Duality for pseudo-additive regular Mal’tsev categories. Lemma 5.14. For any pointed category D with binary sums and binary products consider the following commutative diagram (X × Y ) + Z O Y jX +Z ρX,Y,Z pY X +Z  X +Z / X × (Y + Z) O X×ιY Z Y X×πZ  / X ×Z θX,Z Y in which ρX,Y,Z is induced by the pair X × ιZ Y : X × Y → X × (Y + Z) and αX × ιZ : Z → X × (Y + Z). (1) pointed base-change (ωX )∗ : D → PtX (D) preserves binary sums if and only if the upward-oriented square is a pushout for all objects Y, Z; (2) pointed cobase-change (αZ )! : PtZ (D) → D preserves binary products if and only if the downward-oriented square is a pullback for all objects X, Y . Proof. The left upward-oriented square of the following diagram X×ιZ Y Y jX / / (X × Y ) + Z O ιZ X×Y X O× Y Y jX +Z pY X  X ιZ X ρX,Y,Z pY X +Z / X × (Y + Z) O X×ιY Z  /X +Z θX,Z / Z jX Y X×πZ  / X ×Z is a pushout so that the whole upward-oriented rectangle is a pushout (i.e. (ωX )∗ preserves binary sums) if and only if the right upward-oriented square is a pushout. The right downward-oriented square of the following diagram pX Y +Z (X × Y ) + Z O Y jX +Z / + Z) X ρX,Y,Z X × (Y O pY +Z pY X +Z  X +Z X×ιY Z θX,Z Y X×πZ  / X ×Z X πZ / / Y +Z O ιY Z pX Z Y πZ  /Z / is a pullback so that the whole downward-oriented rectangle is a pullback (i.e. (αZ )! preserves binary products) if and only if the left downward-oriented square is a pullback.  CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 45 Proposition 5.15. In a σ-pointed regular Mal’tsev category D, pointed base-change (ωZ )∗ : D → PtZ (D) preserves binary sums for all objects Z as soon as pointed cobasechange (αZ )! : D → PtZ (D) preserves binary products for all objects Z. The converse implication holds if D is pseudo-additive (cf. Definition 4.8). Proof. According to the previous lemma pointed cobase-change (αX )! preserves binary products if and only if the downward-oriented square is a pullback which implies that the upward-oriented square is a pushout, and hence pointed base-change (ωZ )∗ preserves binary sums. If D is pseudo-additive, the comparison map θX,Z is an affine extension. 
Therefore, the downward-oriented square is a pullback if and only if the upward-oriented square is a pushout, whence the converse.  Corollary 5.16. A σ-pointed exact Mal’tsev category is quadratic if and only if it is 2-nilpotent and algebraically distributive. Lemma 5.17. In a σ-pointed regular Mal’tsev category, (fibrewise) algebraic distributivity is preserved under strong epireflections. Proof. This follows from Proposition 2.8 which shows that strong epireflections preserve besides pushouts and binary sums also binary products (in the fibres).  Theorem 5.18. For any algebraically distributive, σ-pointed exact Mal’tsev category, the Birkhoff subcategory of 2-nilpotent objects is quadratic. Proof. The Birkhoff subcategory is pointed, exact, 2-nilpotent, and algebraically distributive by Lemma 5.17, and hence quadratic by Corollary 5.16.  Corollary 5.19 (cf. Cigoli-Gray-Van der Linden [20], Corollary 7.2). For each object X of an algebraically distributive semi-abelian category, the iterated Huq commutator [X, [X, X]] coincides with the ternary Higgins commutator [X, X, X]. Proof. Recall (cf. [38, 39]) that [X, X, X] is the direct image of K[θX,X,X ] under the ternary folding map X + X + X → X. In general, the iterated Huq commutator [X, [X, X]] is contained in [X, X, X], cf. Corollary 6.12. In a semi-abelian category, 2 the unit of second Birkhoff reflection I 2 takes the form ηX : X → X/[X, [X, X]], cf. Remark 2.13. Since in the algebraically distributive case, the subcategory of 2nilpotent objects is quadratic by Theorem 5.18, the image of [X, X, X] in X/[X, [X, X]] is trivial by Corollaries 5.6b and 6.20, whence [X, [X, X]] = [X, X, X].  Remark 5.20. The category of groups (resp. Lie algebras) has centralisers for subobjects and is thus algebraically distributive. Therefore, the category of 2-nilpotent groups (resp. Lie algebras) is a quadratic semi-abelian variety. The reader should observe that although on the level of 2-nilpotent objects there is a perfect symmetry between the property that pointed base-change along terminal maps preserves binary sums and the property that pointed cobase-change along initial maps preserves binary products (cf. Proposition 5.15), only the algebraic distributivity carries over to the category of all groups (resp. Lie algebras) while the algebraic “codistributivity” fails in either of these categories. “Codistributivity” is a quite restrictive property, which is rarely satisfied without assuming 2-nilpotency. 46 CLEMENS BERGER AND DOMINIQUE BOURN 6. Identity functors with bounded degree In the previous section we have seen that quadraticity is a slightly stronger property than 2-nilpotency, insofar as it also requires a certain compatibility between binary sum and binary product (cf. Theorem 5.5 and Proposition 5.15). In this last section, we relate n-nilpotency to identity functors of degree ≤ n. 6.1. Degree n functors and n-folded objects. – Any (n + 1)-cube ΞX1 ,...,Xn+1 (cf. Definition 5.1) defines a split natural transformation of n-cubes inducing a natural transformation of split epimorphisms X1 + · · · + Xn+1 O πX̂ n+1  X1 + · · · + Xn +Xn+1 1 ,...,Xn θX / P +Xn+1 X1 ,...,Xn O +ωX n+1 1 ,...,Xn '&%$ !"# n PX θX1 ,...,Xn +αX n+1 1 ,...,Xn PX  / PX1 ,...,Xn the comparison map of which may be identified with the comparison map θX1 ,...,Xn+1 : X1 + · · · + Xn+1 → PX1 ,...,Xn+1 of the given (n + 1)-cube. In particular, our category has an identity functor of degree ≤ n if and only if square (n) is a pullback square for all objects X1 , . . . 
, Xn+1 . Proposition 6.1. In a σ-pointed regular Mal’tsev category, the comparison map θX1 ,...,Xn+1 is a regular epimorphism with kernel relation R[πX̂1 ] ∩ · · · ∩ R[πX̂n+1 ]. Proof. This follows by induction on n like in the proof of Proposition 5.2.  Remark 6.2. The previous proposition shows that in a σ-pointed regular Mal’tsev category, the intersection of the kernel relations of the contraction maps may be considered as the “total kernel relation” of the cube. This parallels the more elementary fact that the total-kernel definition of the cross-effects crn+1 (X, . . . , Xn+1 ) coincides with the kernel-intersection definition of Hartl-Loiseau [38] and Hartl-Van der Linden [39]. In particular, in any σ-pointed regular Mal’tsev category, the image of the morQn+1 ci + · · · + Xn+1 coincides with the phism rid : X1 + · · · + Xn+1 → i=1 X1 + · · · + X limit PX1 ,...,Xn+1 of the punctured (n + 1)-cube. We already mentioned that these kernel intersections are dual to the (n + 1)fold smash products of Carboni-Janelidze [16]. An alternative way to describe the duality between cross-effects and smash-products is to consider the limit construction PX1 ,...,Xn as the dual of the so-called fat wedge T X1 ,...,Xn , cf. Hovey [42]. Settheoretically, the fat wedge is the subobject of the product X1 × · · · × Xn formed by the n-tuples having at least one coordinate at a base-point. If base-point inclusions behave “well” with respect to cartesian product, the fat wedge is given by a colimit construction, strictly dual to the limit construction defining PX1 ,...,Xn . The n-fold smash-product X1 ∧ · · · ∧ Xn is then the cokernel of the monomorphism T X1 ,...,Xn ֌ X1 × · · · × Xn while the n-th cross-effect crn (X1 , . . . , Xn ) is the kernel of the regular epimorphism X1 + · · · + Xn ։ PX1 ,...,Xn . CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 47 The cubical cross-effects are just the algebraic version of Goodwillie’s homotopical cross-effects [29, pg. 676]. Nevertheless, for functors taking values in abelian categories, the cubical cross-effects agree with the original cross-effects of EilenbergMac Lane [23, pg. 77]. Indeed, by [23, Theorems 9.1 and 9.6], for a based functor F : D → E from a σ-pointed category (D, +, ⋆D ) to an abelian category (E, ⊕, 0E ), the latter are completely determined by the following decomposition formula M M F (X1 + · · · + Xn ) ∼ crkF (Xi1 , . . . , Xik ) = 1≤k≤n 1≤i1 <···<ik ≤n for any objects X1 , . . . , Xn in D. It suffices thus to show that the cubical cross-effects satisfy the decomposition formula if values are taken in an abelian category. F F For n = 2 we get PX = F (X1 ) ⊕ F (X2 ) from which it follows that θX : 1 ,X2 1 ,X2 F (X1 +X2 ) ։ F (X1 )⊕F (X2 ) is a split epimorphism. Henceforth, we get the asserted isomorphism F (X1 + X2 ) ∼ = F (X1 ) ⊕ F (X2 ) ⊕ cr2F (X1 , X2 ). The 3-cube F (ΞX1 ,X2 ,X3 ) induces a natural transformation of split epimorphisms F (X1 + X2 + X3 ) O / P3 O  F (X1 + X2 )  / / F (X1 ) ⊕ F (X2 ) F θX 1 ,X2 in which P3 is isomorphic to F (X1 ) ⊕ F (X2 ) ⊕ F (X3 ) ⊕ cr2F (X1 , X3 ) ⊕ cr2F (X2 , X3 ). F From this, we get for PX = F (X1 + X2 ) ×F (X1 )⊕F (X2 ) P3 the formula 1 ,X2 ,X3 F ∼ PX = F (X1 ) ⊕ F (X2 ) ⊕ F (X3 ) ⊕ cr2F (X1 , X2 ) ⊕ cr2F (X1 , X3 ) ⊕ cr2F (X2 , X3 ) 1 ,X2 ,X3 F so that θX is again a split epimorphism inducing the asserted decomposition 1 ,X2 ,X3 of F (X1 + X2 + X3 ). The same scheme keeps on for all positive integers n.  Definition 6.3. 
An object X of a σ-pointed regular Mal’tsev category will be called X n-folded if the folding map δn+1 factors through the comparison map θX,...,X θX,...,X X + · · · ❉+ X ❉❉ ❉❉ ❉❉ X δn+1 ❉❉! ! / / PX,...,X mX X X i.e. if the folding map δn+1 annihilates the kernel relation R[θX,...,X ]. An object X is 1-folded if and only if the identity of X commutes with itself. In a σ-pointed regular Mal’tsev category this is the case if and only if X is an abelian group object, cf. the proof of Proposition 4.2. Remark 6.4. In a semi-abelian (or homological [4]) category, an object X is n-folded X if and only if the image of the kernel K[θX,...,X ] under the folding map δn+1 :X+ · · · + X ։ X is trivial. Recall [38, 39, 41, 54] that this image is by definition the so-called Higgins commutator [X, . . . , X] of length n + 1. Therefore, an object of a 48 CLEMENS BERGER AND DOMINIQUE BOURN semi-abelian category is n-folded precisely when its Higgins commutator of length n + 1 vanishes. Under this form n-folded objects have already been studied by Hartl emphasising their role in his theory of polynomial approximation. In a varietal context, n-foldedness can be expressed in more combinatorial terms. For instance, a group X is n-folded if and only if (n + 1)-reducible elements of X are trivial. An element w ∈ X is called (n + 1)-reducible if there is an element v in the free group F (X ⊔ · · · ⊔ X) on n + 1 copies of X (viewed as a set) such that (a) w is the image of v under the composite map n+1 n+1 n+1 X }| { z z z }| { }| { δn+1 F (X ⊔ · · · ⊔ X) ∼ = F (X) + · · · + F (X) ։ X + · · · + X ։ X F (X) : F (X)+(n+1) ։ F (X)+n , cf. Section (b) for each of the n + 1 contractions πi F (X) (v) maps to the neutral element of X under 6.2, the image πi n n z }| { }| { δnX z F (X) + · · · + F (X) ։ X + · · · + X ։ X. Indeed, since the evaluation map F (X) ։ X is a regular epimorphism, and the evaluation maps in (a) and (b) are compatible with the contraction maps πi : X +(n+1) ։ X +n , Proposition 6.1, Section 6.2 and Corollary 6.20 imply that we get in this way X the image of the kernel K[θX,...,X ] under the folding map δn+1 . Qk Qk −1 −1 Any product of commutators i=1 [xi , yi ] = i=1 xi yi xi yi in X is 2-reducible by letting the xi (resp. yi ) belong to the first (resp. second) copy of X. Conversely, a direct computation shows that any 2-reducible element of X can be rewritten as a product of commutators of X. This recovers in a combinatorial way the aforementioned fact that X is abelian (i.e. 1-nilpotent) if and only if X is 1-folded. The relationship between n-nilpotency and n-foldedness is more subtle, closely related to the cross-effects of the identity functor (cf. Theorem 6.8). For groups and Lie algebras the two concepts coincide (cf. Theorem 6.23c) but, though any n-folded object is n-nilpotent, the converse is wrong in general (cf. Section 6.5). Proposition 6.5. Let F : D → E be a based functor between σ-pointed categories and assume that E is a regular Mal’tsev category. (a) If F is of degree ≤ n then F takes values in n-folded objects of E; (b) If F preserves binary sums and takes values in n-folded objects of E then F is of degree ≤ n; (c) The identity functor of E is of degree ≤ n if and only if all objects of E are n-folded. F (X) Proof. Clearly, (c) follows from (a) and (b). For (a) note that δn+1 factors through X F (δn+1 ), and that by definition of a functor of degree ≤ n, the comparison map F X θX,...,X is invertible so that F (δn+1 ) gets identified with mF (X) . 
For (b) observe first that preservation of binary sums yields the isomorphisms F F ∼ ∼ PX = θF (X1 ),...,F (Xn+1 ) . We shall show = PF (X1 ),...,F (Xn+1 ) and θX 1 ,...,Xn+1 1 ,...,Xn+1 F that if moreover F takes values in n-folded objects of E then θX is invertible 1 ,...,Xn+1 for all (n + 1)-tuples (X1 , . . . , Xn+1 ) of objects of D. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 49 T Consider any family (fi : Xi → T )1≤i≤n+1 of morphisms in E, and let φ = δn+1 ◦ (f1 + · · · + fn+1 ) : X1 + · · · + Xn+1 → T be the induced map. We have the following F factorisation of F (φ) through θX : 1 ,...,Xn+1 θF (X1 ),...,F (Xn+1 ) F (X1 ) + · · · + F (Xn+1 ) / / PF (X ),...,F (X ) 1 n+1 PF (f1 ),...,F (fn+1 ) F (f1 )+···+F (fn+1 )   θF (T ),...,F (T ) / / PF (T ),...,F (T ) F (T ) + · · · + F (T ) ◆◆◆ t ◆◆◆ tt ◆◆◆ tt t T ◆◆◆ tt mF (T ) δn+1 ztt & F (T ) In particular, if T = X1 + · · · + Xn+1 and fi is the inclusion of the ith summand, we get a retraction of θF (X1 ),...,F (Xn+1 ) which accordingly is a monomorphism. Since θF (X1 ),...,F (Xn+1 ) is also a regular epimorphism, it is invertible.  Proposition 6.6. The full subcategory Fldn (E) of n-folded objects of a σ-pointed regular Mal’tsev category E is closed under products, subobjects and quotients. Proof. For any two n-folded objects X and Y the following diagram (X × Y ) + · · · + (X × Y ) θX×Y,...,X×Y / / PX×Y,...,X×Y   θX,...,X ×θY,...,Y / / PX,...,X × PY,...,Y (X + · · · + X) × (Y + · · · + Y ) ❘❘❘ r ❘❘❘ rr ❘❘❘ rr r ❘ r ❘ X Y ❘❘❘ r m ×m δn+1 ×δn+1 ( yrr X Y X ×Y X×Y induces the required factorisation of δn+1 through θX×Y,...,X×Y . For a subobject n : U ֌ X of an n-folded object X consider the diagram U + · · · + U ❩❩❩❩❩ ❩❩❩❩❩θ❩U,··· ❩❩❩,U❩❩❩❩❩ θ ❩❩❩❩❩❩❩- ' ν / PU,...,U W / U δn+1   U / Pn,...,n n+···+n n  ( θX,...,X / / PX,...,X X + ···+ X X ✐✐✐✐ δn+1  t✐✐✐✐✐✐✐mX /X in which the dotted quadrangle is a pullback. The commutatitvity of the diagram induces a morphism θ such that νθ = θU,...,U . Since θU,...,U is a regular epimorphism, the monomorphism ν is invertible, whence the desired factorisation. 50 CLEMENS BERGER AND DOMINIQUE BOURN Finally, for a regular epimorphism f : X ։ Y with n-folded domain X consider the following diagram f +···+f X + · · · +❖ X ❖❖❖θX,...,X ❖❖❖ ❖❖❖ '' X δn+1 PX,...,X ♦ ♦ ♦ ♦♦♦  w♦♦♦♦♦ mX X f / / Y + ··· + Y ❖❖❖ ❖❖θ❖Y,...,Y ❖❖❖ Y δn+1 ❖' ' Pf,...,f / / PY,...,Y  //Y w mY in which the existence of the dotted arrow has to be shown. According to Lemma 6.7 the induced morphism on kernel relations R(f + · · · + f, Pf,...,f ) : R[θX,...,X ] → R[θY,...,Y ] Y is a regular epimorphism. A diagram chase shows then that δn+1 annihilates R[θY,...,Y ] Y whence the required factorisation of δn+1 through θY,...,Y .  Lemma 6.7. In a σ-pointed regular Mal’tsev category, any finite family of regular epimorphisms fi : Xi ։ Yi (i = 1, . . . , n) induces a regular epimorphism on kernel relations R(f1 + · · · + fn , Pf1 ,...,fn ) : R[θX1 ,...,Xn ] ։ R[θY1 ,...,Yn ]. Proof. Since regular epimorphisms compose (in any regular category) it suffices to establish the assertion under the assumption fi = 1Xi for i = 2, . . . , n. Moreover we can argue by induction on n since for n = 1 the comparison map is the terminal map θX : X ։ ⋆ and a binary product of regular epimorphisms is a regular epimorphism. Assume now that the statement is proved for n−1 morphisms. 
Using the isomorphism of kernel relations ∼ R[R[θ+Xn R[θX ,...,X ,X ] = ] ։ R[θX ,...,X ]] 1 n−1 n X1 ,...,Xn−1 1 n−1 and Propositon 1.6 it suffices then to show that the following by f1 : X1 ։ Y1 induced commutative square +Xn R[θX ] 1 ,X2 ,...,Xn−1 O / / R[θ+Xn Y1 ,X2 ,...,Xn−1 ] O  R[θX1 ,X2 ,...,Xn−1 ]  / / R[θY1 ,X2 ,...,Xn−1 ] is a downward-oriented regular pushout. This in turn follows from Corollary 1.7 since the vertical arrows above are split epimorphisms by construction and the horizontal arrows are regular epimorphisms by induction hypothesis.  Special instances of the following theorem have been considered in literature: if E is the category of groups and n = 2, the result can be deduced from Baues-Pirashvili [1] and Hartl-Vespa [40]; if E is a semi-abelian category, under the identification of n-folded objects given in Remark 6.4 and with the kernel intersection definition of degree, the result has been announced by Manfred Hartl in several of his talks. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 51 Theorem 6.8. The full subcategory Fldn (E) of n-folded objects of a σ-pointed exact Mal’tsev category E is a reflective Birkhoff subcategory. The associated Birkhoff reflection J n : E → Fldn (E) is the universal endofunctor of E of degree ≤ n. Proof. Observe that the second assertion is a consequence of the first and of Proposition 6.5a-b. In virtue of Proposition 6.6 it suffices thus to construct the reflection J n : E → Fldn (E). The latter is obtained by the following pushout θX,··· ,X X + ···+ X X δn+1 / / PX,...,X µn X  X ǫn X  / / J n (X) which is regular by Corollary 1.8 so that J n (X) = X/Hn+1 [X] where Hn+1 [X] is the X direct image of R[θX,...,X ] under the folding map δn+1 . We will show that J n (X) is n-folded and that any morphism X → T with n-folded codomain T factors uniquely through J n (X). For this, consider the following diagram θH [X],...,H [X] n+1 n+1 / / PH [X],...,H [X] Hn+1 [X] + · · · + H❚n+1 n+1 n+1 ❚❚❚[X] ❚❚❚❚ ❚❚p❚1❚+···+p Pp1 ,··· ,p1 Pp0 ,··· ,p0 ❚❚❚❚ ❚❚) 1   θX,··· ,X p0 +···+p0 ❚❚) / / PX,...,X X + ···+ X Hn+1 [X] ❯❯❯P❯ǫn X,...,ǫn X δn+1 ❯❯❯* * X P n δn+1  p1 Hn+1 [X] p0 µn X /  /X ǫn X J (X),...,J n (X)  / / J n (X) s in which the existence of the dotted arrow has to be shown. By Lemma 6.9 PJ n (X),...,J n (X) is the coequaliser of the reflexive pair (Pp0 ,...,p0 , Pp1 ,...,p1 ). It suffices thus to check that µn X coequalises the same pair. This follows by precomposition with the regular epimorphism θHn+1 [X],...,Hn+1 [X] using the commutativity of the previous diagram. For the universal property of ǫn X : X ։ J n (X) let us consider a morphism f : X → T with n-folded codomain T . By construction of J n (X), the following commutative diagram θX,...,X / / PX,...,X X + · · · + ❙X ❙❙f ❙+···+f ◆◆P◆f,...,f ❙❙❙ ◆◆& ) θ T ,...,T X / / PT,...,T δn+1 T + · · · +❘T δT ❘❘❘n+1 ♦ ♦ ❘ ♦ ❘  ❘❘( w♦♦♦ mT /T X f induces the desired factorisation.  Lemma 6.9. In a σ-pointed exact Mal’tsev category, the functor PX1 ,...,Xn+1 preserves reflexive coequalisers in each of its n + 1 variables. 52 CLEMENS BERGER AND DOMINIQUE BOURN Proof. By exactness, it suffices to show that P preserves regular epimorphisms in each variable, and that for a regular epimorphism fi : Xi ։ Xi′ the induced map on kernel relations PX1 ,...,R[fi ],...,Xn+1 → R[PX1 ,...,fi ,...,Xn+1 ] is a regular epimorphism as well. By symmetry, it is even sufficient to do so for the first variable. 
We shall argue by induction on n (since for n = 1 there is nothing to prove) and consider the following downward-oriented pullback diagram PX1 ,...,Xn ,Xn+1 O / / P +Xn+1 X1 ,...,Xn O  X1 + · · · + Xn  / / PX1 ,...,Xn θX1 ,...,Xn which derives from square (n) of the beginning of this section. By induction hypothesis, the two lower corners and the upper right corner are functors preserving regular epimorphisms in the first variable. It follows then from the cogluing lemma (cf. the proof of Theorem 6.23a) and Corollary 1.7 that the upper left corner also preserves regular epimorphisms in the first variable. It remains to be shown that for f : X1 ։ X1′ we get an induced regular epimorphism on kernel relations. For this we denote by F, G, H the functors induced on the lower left, lower right and upper right corners, and consider the following commutative diagram / H(R[f ]) P (R[f ]) ◆◆ρ◆H tR[f ] ρP @ ✁✁@ ◆◆◆' ✁ ✁ ' ✁ ' ' ✁✁✁ ✁ / R[H(f )] ✁✁✁ R[P (f )] ✁ @ ✁@ ✁✁✁✁ ✁✁✁✁✁ ✁✁✁✁✁ ✁ R(t ) X ✁✁ / G(R[f ]) F (R[f ]) ✁✁✁✁✁ θR[f ] ◆ ◆◆◆ ✁ ◆ ◆ ✁✁ ρG ◆◆◆' ' ρF ◆◆◆' ' ✁✁✁✁✁ / R[G(f )] R[F (f )] R(θX ) in which the back vertical square is a downward-oriented pullback by definition of P . By commutation of limits the front vertical square is a downward oriented pullback as well. Again, according to the cogluing lemma and Corollary 1.7, the induced arrow ρP is then a regular epimorphisms, since ρF , ρG and ρH are so by induction hypothesis.  6.2. Higgins commutator relations and their normalisation. – We shall now concentrate on the case X = X1 = X2 = · · · = Xn+1 . Accordingly, we abbreviate the n + 1 “contractions” as follows: n+1 πX̂i n z }| { }| { z = πi : X + · · · + X → X + · · · + X, i = 1, . . . , n + 1. Proposition 6.1 reads then R[θX,...,X ] = R[π1 ] ∩ · · · ∩ R[πn+1 ]. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 53 We denote the direct image of the kernel relation R[θX,...,X ] under the folding map X δn+1 : X + · · · + X → X by a single bracket [∇X , . . . , ∇X ] of length n + 1 and call it the (n + 1)-ary Higgins commutator relation on X. The proof of Theorem 6.8 shows that in σ-pointed exact Mal’tsev categories the universal n-folded quotient J n (X) of an object X is obtained by quotienting out the (n + 1)-ary Higgins commutator relation. The binary Higgins commutator relation coincides with the Smith commutator [∇X , ∇X ] (cf. Section 1.1, Corollary 1.8 and Proposition 4.2) which ensures consistency of our notation. Recall that in a pointed category the normalisation of an effective equivalence relation R on X is the kernel of its quotient map X ։ X/R. In σ-pointed exact Mal’tsev categories normalisation commutes with direct image, cf. Corollary 1.8. In particular, the normalisation of the Higgins commutator relation yields precisely the Higgins commutator of same length, cf. Remark 6.4. Proposition 6.10. In a σ-pointed exact Mal’tsev category, the image of R[θX,X,X ] under δ2X + 1X is the kernel relation of the pushout of θX,X,X along δ2X + 1X X +X +X θX,X,X δ2X +1X  X +X X ζX,X / / PX,X,X  / / JX X,X X which may be computed as an intersection: R[ζX,X ] = [R[π1 ], R[π1 ]] ∩ R[π2 ]. In particular, we get the inclusion [∇X , [∇X , ∇X ]] ⊂ [∇X , ∇X , ∇X ]. Proof. By Corollary 1.8, the pushout is regular so that the first assertion follows from Proposition 1.6. 
Consider the following diagram δ2X +1X X + X + X❱ ❱❱❱❱ θ+X ⑤> ❱❱❱X,X ⑤⑤⑤⑤⑤ ❱❱* * ⑤ π3 ⑤⑤⑤⑤ (X + X) ×X (X + X) ⑤ ⑤ ⑤⑤⑤⑤⑤ r8 ⑤ rrrrrrr δX ~⑤⑤⑤⑤⑤ r r 2 r rr X +X rrrrrrr r r PPP r rr P xrrrrrrr θX,X PPP( ( X ×X / / X +X ✠✠D ▼▼▼η▼1 (π1 ) ✠ ▼& & ✠ ✠✠✠✠ ✠ / / I 1 (π1 ) ✠ ✠✠ π2 ✠✠✠ ✠ ✄✄✄A ✠✠✠✠ ′ ✄✄✄✄✄ f ✄ //X ✄✄ ❍❍ η1 (X) ✄✄✄✄✄✄ ❍❍ ✄ ✄ ✄✄✄ ❍# # / / I 1 (X) in which top and bottom are regular pushouts by Corollary 1.8. The bottom square constructs the associated abelian object I 1 (X) of X, while the top square constructs the associated abelian object I 1 (π1 ) of π1 : X + X ⇄ X in the fibre over X. The upward oriented back and front faces are pushouts of split monomorphisms. The left face is a specialisation of square (2) just before Proposition 5.2. We can therefore 54 CLEMENS BERGER AND DOMINIQUE BOURN apply Corollary 1.19 and we get diagram δ X +1 X 2 / / X +X X + X Z✺+ ❘X ✲✲V✲✲ ❑❑❑ X ✺✺✺ ❘❘❘❘ ✺ ❘❘θ❘X,X,X ✺✺✺ ✲✲✲✲ ❑❑❑ζ❑X,X ❘ ✺ ❘❘❘ ❑❑❑ ✺✺✺ ✲✲✲✲ ❘❘❘ ❑% % ✺✺✺✺ ❘) ) ✲✲✲✲ ✺ ✺✺✺ ✲✲ / / JX ✲ P ✺ X,X,X X,X ✲✲ ✲ π3 ✺✺✺ ✺ ✺✺✺ π2 ✲ ✲✲ ⑤⑤= ✞✞C ⑤ ✲ ✺ ✞ ✺✺✺ ✲✲✲✲ ⑤⑤⑤⑤ ✞✞✞✞ ✺✺✺✺ ⑤⑤⑤⑤⑤ ✲ ✲ ✞✞✞✞✞ ⑤  }⑤ ⑤ / / X ✞✞ X + X⑤ X δ2 X in which the kernel relation of the regular epimorphism ζX,X is given by R[η 1 (π1 )] ∩ R[π2 ] = [R[π1 ], R[π1 ]] ∩ R[π2 ]. For the second assertion, observe first that the ternary folding map δ3 may be identified with the composition δ2X ◦ (δ2X + 1X ). Therefore, the ternary Higgins commutator relation [∇X , ∇X , ∇X ] is the direct image under δ2X : X + X → X of the X kernel relation of ζX,X . Now we have the following chain of inclusions, where for shortness we write R1 = R[π1 ], R2 = R[π2 ], R12 = R1 ∩ R2 : [R12 , [R12 , R12 ]] ⊂ R12 ∩ [R12 , R12 ] ⊂ R2 ∩ [R1 , R1 ]. By exactness, the direct image of the leftmost relation is [∇X , [∇X , ∇X ]], while the direct image of the right most relation is [∇X , ∇X , ∇X ].  Proposition 6.11. In a σ-pointed exact Mal’tsev category, the image of R[θX,...,X ] X X + 1X + 1X is the kernel relation of the pushout of θX,...,X along δn−1 under δn−1 X + ···+ X θX,...,X X δn−1 +1X  X +X X ζX,...,X / / PX,...,X  / / JX X,...,X n−1 which may be computed as an intersection X R[ζX,...,X ] n−1 z }| { = [R[π1 ], . . . , R[π1 ]] ∩ R[π2 ]. n z }| { }| { z In particular, we get the inclusion [∇X , [∇X , . . . , ∇X ]] ⊂ [∇X , . . . , ∇X ]. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 55 Proof. The first assertion follows from Proposition 1.6 and the following diagram X δn−1 +1X X + · · · +PX +X PPθPX,...,X ①①①; PP( ( ①①①① πn ①①①① +X ① ① ①① PX,...,X ①①①①① ① = ①① ④④④X {①①①①① ④④④④④δn−1 ④ X + ···+ X ④④④④ ❚❚❚❚ ④④④④④ ④ ❚ }④④④ θX,...,X ❚❚❚) ) PX,...,X // X +X ❚❚❚❚ q ❚❚❚❚) ⑦⑦⑦⑦> ⑦ ) ⑦⑦⑦⑦⑦ X+X ⑦ // ⑦ ⑦ [∇π1 ,...,∇π1 ] ⑦⑦⑦ π2 ⑦⑦⑦ ✉ ✉✉ ✉: ~⑦⑦⑦⑦ ✉✉✉✉✉✉ //X ⑦ ✉ ✉✉ ✉ ◗◗◗ ✉✉✉✉✉✉ ◗◗◗ ✉ z ✉ ◗( ( ✉✉ / / X/[∇ X , . . . , ∇X ] in which top and bottom are regular pushouts by Corollary 1.8. The bottom square constructs the quotient of X by the (n−1)-ary Higgins commutator relation [∇X , . . . , ∇X ]. The top square constructs the quotient of π1 : X + X ⇄ X by the (n − 1)-ary Higgins commutator relation [∇π1 , . . . , ∇π1 ] in the fibre over X. The upward oriented back and front faces are pushouts of split monomorphisms. The left face is a specialisation of square (n) of the beginning of this section. 
We can therefore apply Corollary 1.19 and we get the following diagram X δn−1 +1X X + · · ·\✿+ X ✿✿✿✿ ❯❯❯❯❯❯ ❯❯❯θ❯X,...,X ✿✿✿✿ ❯❯❯❯ ✿✿✿✿ ❯❯❯❯ ✿✿✿✿ ❯❯* * ✿✿✿✿ PX,...,X ✿ πn ✿✿✿ ✿✿✿✿ ✈ ✿✿✿✿ ✈✈ ✈: ✈✈✈✈✈✈ ✿✿✿✿ ✈ ✈✈ ✈ ✿ z✈✈✈✈✈✈ X + ···+ X X δn−1 / / X +X ✲✲V✲✲ ▲▲▲ X ✲ ▲▲▲ζX,...,X π2 ✲✲ ✲ ▲▲▲ ✲✲✲✲ ▲▲& & ✲ ✲✲ ✲ ✲ / / JX ✲✲ ✲ X,...,X ✲✲✲✲ ✄A ✲✲✲✲ ✄ ✄✄✄✄ ✄ ✲✲✲✲ ✄✄✄✄✄  / / X ✄✄ X in which the kernel relation of the regular epimorphism ζX,...,X is given by R[q] ∩ R[π2 ] = [∇π1 , . . . , ∇π1 ] ∩ R[π2 ] = [R[π1 ], . . . , R[π1 ]] ∩ R[π2 ]. X Since δnX = δ2X ◦ (δn−1 + 1X ), the proof of the second assertion is completely analogous to the proof of the corresponding part of Proposition 6.10.  The semi-abelian part of the following corollary can also be derived from a direct analysis of “iterated” Higgins commutators, cf. [39, Proposition 2.21(iv)]. Corollary 6.12. For any object X of a σ-pointed exact Mal’tsev category, the iterated Smith commutator [∇X , [∇X , [∇X , · · · , [∇X , ∇X ] · · · ]]] is a subobject of the Higgins commutator relation [∇X , . . . , ∇X ] of same length. In a semi-abelian category, the iterated Huq commutator [X, [X, · · · , [X, X] · · · ]] is a subobject of the Higgins commutator [X, . . . , X] of same length. Proof. The first statement follows inductively from Propositions 6.10 and 6.11. The second statement follows from the first and the fact that in a semi-abelian category, the iterated Huq commutator is the normalisation of the iterated Smith  commutator by Remark 2.13 and [33, Proposition 2.2]. 56 CLEMENS BERGER AND DOMINIQUE BOURN Proposition 6.13. In a σ-pointed exact Mal’tsev category E, each n-folded object is an n-nilpotent object, i.e. Fldn (E) ⊂ Niln (E). In particular, endofunctors of degree ≤ n take values in n-nilpotent objects. Proof. The second assertion follows from the first and from Proposition 6.5. For any n-folded object X, the (n + 1)-ary Higgins commutator relation of X is discrete and hence, by Corollary 6.12, the iterated Smith commutator of same length is discrete as well. By an iterated application of Theorem 2.12 and Proposition 1.4, this iterated n Smith commutator is the kernel relation of ηX : X ։ I n (X), and hence X ∼ = I n (X), i.e. X is n-nilpotent.  The following theorem generalises Theorem 4.5 to all positive integers. Theorem 6.14. For a σ-pointed exact Mal’tsev category D such that the identity functor of Niln−1 (D) is of degree ≤ n − 1, the following properties are equivalent: (a) all objects are n-nilpotent; (b) for all objects X1 , . . . , Xn , the map θX1 ,...,Xn is an affine extension; (c) for all objects X1 , . . . , Xn , the map θX1 ,...,Xn is a central extension. Proof. For an n-nilpotent category D, the Birkhoff reflection I n−1 : D → Niln−1 (D) is a central reflection. Since all limits involved in the construction of PX1 ,...,Xn are preserved under I n−1 by an iterative application of Proposition 2.8, we get I n−1 (θX1 ,...,Xn ) = θI n−1 (X1 ),...,I n−1 (Xn ) . Since by assumption the identity functor of Niln−1 (D) is of degree ≤ n − 1, the latter comparison map is invertible so that by Theorem 3.3, the comparison map θX1 ,...,Xn is an affine extension, i.e. (a) implies (b). By Proposition 3.2, (b) implies (c). Specializing (c) to the case X = X1 = X2 = · · · = Xn we get the following commutative diagram R[θX,...,X ]  [∇X , . . . , ∇X ] / / X + ··· + X X δn  /  /X θX,...,X / / PX,...,X  / / X/[∇X , . . . 
, ∇X ] in which is the right square is a regular pushout by Corollary 1.8 so that the lower row represents the kernel relation of a central extension. We have already seen that the iterated Smith commutator [∇X , [∇X , [∇X , · · · , [∇X , ∇X ] · · · ]]] of length n is the kernel relation of the unit η n−1 (X) : X ։ I n−1 (X) of the (n−1)st Birkhoff reflection. Corollary 6.12 implies thus that this unit is a central extension as well so that D is n-nilpotent, i.e. (c) implies (a).  Definition 6.15. A σ-pointed category (D, ⋆D ) with pullbacks is said to satisfy condition Pn if for all X1 , . . . , Xn , Z, pointed cobase-change (αZ )! : D → PtZ (D) takes +Z . the object PX1 ,...,Xn to the object PX 1 ,...,Xn In particular, since PX = ⋆ condition P1 is void and just expresses that (αZ )! preserves the null-object. Since PX,Y = X × Y condition P2 expresses that (αZ )! CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 57 preserves binary products. Therefore, the following result extends Corollary 4.4 (n = 1) and Theorem 5.5 (n = 2) to all positive integers. Proposition 6.16. The identity functor of a σ-pointed exact Mal’tsev category D is of degree ≤ n if and only if all objects are n-nilpotent and the Birkhoff subcategories Nilk (D) satisfy condition Pk for 1 ≤ k ≤ n. Proof. Since the statement is true for n = 1 by Corollary 4.4 we can argue by induction on n and assume that the statement is true up to level n − 1. In particular, we can assume that Niln−1 (D) has an identity functor of degree ≤ n − 1. Let us then consider the following substitute of the main diagram 5.2: (X1 + · · · + Xn ) ⋄ Z / θX1 ,...,Xn ⋄Z  PX1 ,...,Xn ⋄ Z / ϕZ X1 ,...,Xn  ⋄Z / PX ,...,X 1 n θ / (X1 + · · · + Xn ) + ZX1 +···+Xn/ / ,Z (X1 + · · · + Xn ) × Z !"# θX1 ,...,Xn +Z '&%$ θ ×Z a   X1 ,...Xn θPX ,...,X ,Z n 1 / / PX1 ,...,Xn × Z / PX1 ,...,Xn + Z Z Z '&%$ !"# b  φX1 ,...,Xn  µX1 ,...,Xn / / P ×Z / P +Z X1 ,...,Xn X1 ,...,Xn in which the composite vertical morphisms from left to right are respectively +Z ×Z ⋄Z and θX θX and θX , 1 ,...,Xn 1 ,...,Xn 1 ,...,Xn and the morphism µZ X1 ,...,Xn is the canonical isomorphism. Exactly as in the proof of Proposition 5.4 it follows that the identity functor of D is of degree ≤ n if and only if squares (a) and (b) are pullback squares. Square (b) is a pullback if and only if φZ X1 ,...,Xn is invertible which is the case precisely when condition Pn holds. By Proposition 3.8 and Theorem 6.14, square (a) is a pullback if and only if θX1 ,...,Xn is an affine (resp. central) extension, which is the case for all objects X1 , . . . , Xn precisely when D is n-nilpotent.  Corollary 6.17. A semi-abelian category has an identity functor of degree ≤ n if and only if either of the following three equivalent conditions is satisfied: (a) all objects are n-nilpotent, and the comparison maps ⋄Z ϕZ X1 ,...,Xn : PX1 ,...,Xn ⋄ Z → PX1 ,...,Xn are invertible for all objects X1 , . . . , Xn , Z; (b) the (n + 1)st cross-effects of the identity functor crn+1 (X1 , . . . , Xn , Z) = K[θX1 ,...,Xn ,Z ] vanish for all objects X1 , . . . , Xn , Z; (c) the co-smash product is of degree ≤ n − 1, i.e. the comparison maps ⋄Z ⋄Z : (X1 + · · · + Xn ) ⋄ Z → PX θX 1 ,...,Xn 1 ,...,Xn are invertible for all objects X1 , . . . , Xn , Z. 58 CLEMENS BERGER AND DOMINIQUE BOURN Proof. Condition (a) expresses that squares (a) and (b) of the main diagram are pullbacks. By Proposition 6.16 this amounts to an identity functor of degree ≤ n. For (b) note that by protomodularity the cross-effect crn+1 (X1 , . . 
. , Xn , Z) is trivial if and only if the regular epimorphism θX1 ,...,Xn ,Z is invertible. The equivalence of conditions (b) and (c) follows from the isomorphism of kernels ⋄Z K[θX1 ,...,Xn ,Z ] ∼ ]. The latter is a consequence of the 3 × 3-lemma which, = K[θX 1 ,...,Xn applied to main diagram 6.16 and to square (n), yields a chain of isomorphisms: ×Z +Z ⋄Z ]] ∼ ] ։ K[θX ]∼ K[θX = K[θX1 ,...,Xn ,Z ]. = K[K[θX 1 ,...,Xn 1 ,...,Xn 1 ,...,Xn  6.3. Higher duality and multilinear cross-effects. – In Section 5 we obtained a precise criterion for when 2-nilpotency implies quadraticity, namely algebraic distributivity, cf. Corollary 5.16. We now look for a similar criterion for when n-nilpotency implies an identity functor of degree ≤ n. Proposition 6.16 gives us an explicit exactness condition in terms of certain limit-preservation properties (called Pn ) of pointed cobase-change along initial maps. In order to exploit the latter we first need to dualise condition Pn into a colimit-preservation property, extending Proposition 5.15. Surprisingly, this dualisation process yields the simple condition that in each variable, the functor PX1 ,...,−,...,Xn takes binary sums to binary sums in the fibre over PX1 ,...,⋆,...,Xn . For n-nilpotent semi-abelian categories, this in turn amounts to the condition that the n-th cross-effect of the identity functor crn (X1 , . . . , Xn ) = K[θX1 ,...,Xn ] is multilinear. Such a characterisation of degree n functors in terms of the multilinearity of their nth cross-effect is already present in the original treatment of Eilenberg-Mac Lane [23] for functors between abelian categories. It plays also an important role in Goodwillie’s [29] homotopical context (where however linearity has a slightly different meaning). The following lemma is known in contexts close to our’s. Lemma 6.18. Let D be a σ-pointed category and let E be an abelian category. Any ∆n F D multilinear functor F : Dn → E has a diagonal G : D −→ Dn −→ E of degree ≤ n. Proof. This is a consequence of the decomposition formula of Eilenberg-Mac Lane for functors taking values in abelian categories, cf. Remark 6.2. Indeed, an induction on k shows that the k-th cross-effect of the diagonal crkG (X1 , . . . , Xk ) is the direct sum of all terms F (Xj1 , . . . , Xjn ) such that the sequence (j1 , . . . , jn ) contains only integers 1, 2, . . . , k, but each of them at least once. In particular, M crnG (X1 , . . . , Xn ) ∼ F (Xσ(1) , . . . , Xσ(n) ) = σ∈Σn and the cross-effects of G of order > n vanish, whence G is of degree ≤ n.  Lemma 6.19 (cf. Proposition 2.9 in [39]). For each n ≥ 1, the n-th cross-effect of the identity functor of a semi-abelian category preserves regular epimorphisms in each variable. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 59 Proof. The first cross-effect is the identity functor and the second cross-effect is the co-smash product. Proposition 1.15 and Lemma 1.11 imply that the co-smash product preserves regular epimorphisms in both variables. The general case n + 1 follows from the already treated case n = 1. By symmetry it suffices to establish the preservation property for the last variable which we shall denote Z. We have the following formula: crn+1 (X1 , . . . , Xn , Z) = crn⋄Z (X1 , . . . , Xn ) (n ≥ 1) ⋄Z where crn⋄Z (X1 , . . . , Xn ) = K[θX ] denotes the n-th cross-effect of the functor 1 ,...,Xn − ⋄ Z. Indeed, this kernel has already been identified with K[θX1 ,...,Xn ,Z ] in the proofs of Corollaries 5.6 and 6.17. 
It is now straightforward to deduce preservation of regular epimorphisms in Z using that (−) ⋄ (−) preserves regular epimorphisms in both variables.  Corollary 6.20 (cf. Proposition 2.21 in [39]). In a semi-abelian category, the image of a Higgins commutator [X, . . . , X] of X under a regular epimorphism f : X ։ Y is the corresponding Higgins commutator [Y, . . . , Y ] of Y . Proof. The Higgins commutator of length n is the image of the diagonal n-th crosseffect K[θX,...,X ] under the folding map δnX : X + · · · + X → X, cf. Section 6.2. By Lemma 6.19, any regular epimorphism f : X ։ Y induces a regular epimorphism K[θX,...,X ] ։ K[θY,...,Y ] on diagonal cross-effects, whence the result.  Note that the commutative square (n) of the beginning of this section induces the following pullback square χX1 ,...,Xn+1 PX1 ,...,Xn+1 O PX1 ,...,Xn ,ωX n+1 PX1 ,...,Xn ,⋆ / P +Xn+1 X1 ,...,Xn O +ωX n+1 1 ,...,Xn PX1 ,...,Xn ,αX PX n+1  X1 + · · · + Xn  +αX n+1 1 ,...,Xn PX / PX1 ,...,Xn θX1 ,...,Xn in which the identification PX1 ,...,Xn−1 ,⋆ = X1 + · · · + Xn−1 is exploited to give the left vertical morphisms names. Recall that αX : ⋆ → X and ωX : X → ⋆ denote the initial and terminal maps. Proposition 6.21. For any objects X1 , . . . , Xn−1 , Y, Z of a σ-pointed category (D, ⋆D ) with pullbacks consider the following diagram ρX1 ,...,Xn−1 ,Y,Z / PX1 ,...,Xn−1 ,Y +Z PX1 ,...,Xn−1 ,Y + Z O O PX PX1 ,...,Xn−1 ,ωY +Z  X1 + · · · + Xn−1 + Z Y 1 ,...,Xn−1 ,πZ PX  Y 1 ,...,Xn−1 ,ιZ / PX1 ,...,Xn−1 ,Z θX1 ,...,Xn−1 ,Z in which the horizontal map ρX1 ,...,Xn−1 ,Y,Z is induced by the pair PX1 ,...,Xn−1 ,ιZY : PX1 ,...,Xn−1 ,Y → PX1 ,...,Xn−1 ,Y +Z and PαX1 ,...,αXn−1 ,ιZY : Z → PX1 ,...,Xn−1 ,Y +Z ; 60 CLEMENS BERGER AND DOMINIQUE BOURN (1) The functor PX1 ,...,Xn−1 ,− : D → PtX1 +···+Xn−1 (D) preserves binary sums if and only if the upward-oriented square is a pushout for all objects Y, Z; (2) the category D satisfies condition Pn (cf. Definition 6.15) if and only if the downward-oriented square is a pullback for all objects X1 , . . . , Xn−1 , Y, Z. In particular, (1) and (2) hold simultaneously whenever θX1 ,...,Xn−1 ,Z is an affine extension for all objects X1 , . . . , Xn−1 , Z. Proof. The second assertion follows from the discussion in Section 3.2. For (1), observe that the left upward-oriented square of the following diagram PX Z 1 ,...,Xn−1 ,ιY / / PX1 ,...,Xn−1 ,Y +Z / PX1 ,...,Xn−1 ,Y + Z ρ X1 ,...,Xn−1 ,Y,Z O O PX1 ,...,Xn−1 ,Y O PX PX1 ,...,Xn−1 ,αY  X1 + · · · + Xn−1 PX Y 1 ,...,Xn−1 ,πZ  θ / X1 + · · · + Xn−1 + ZX1 ,...,Z / PX1 ,...,Xn−1 ,αZ Y 1 ,...,Xn−1 ,ιZ  / PX1 ,...,Xn−1 ,Z is a pushout so that the whole upward-oriented rectangle is a pushout if and only if the right upward-oriented square is a pushout, establishing (1). For (2) observe that the right downward-oriented square of the following diagram ρX1 ,...,Xn−1 ,Y,Z χX1 ,...,Xn−1 ,Y +Z / PX1 ,...,Xn−1 ,Y +Z / P +Y +Z X1 ,...,Xn−1 PX1 ,...Xn−1 ,Y + Z O O Y +πZ 1 ,...,Xn−1 PX O  +ιY Z 1 ,...,Xn−1 PX   θX1 ,...,Xn−1 ,Z χX ,...,Xn−1 ,Z / PX1 ,...,Xn−1 ,Z 1 / P +Z X1 + · · · + Xn−1 + Z / X1 ,...,Xn−1 +Z θX ,··· ,X 1 n−1 is a pullback (see below) so that the whole downward-oriented rectangle is a pullback if and only if the left downward-oriented square is a pullback. The whole downwardoriented rectangle is a pullback if and only if the comparaison map +Z PX1 ,...,Xn−1 ,Y + Z → PX 1 ,...,Xn−1 ,Y is invertible (i.e. 
if and only if condition Pn holds) since the following square is by definition a pullback in the fibre PtZ (D): P +Z +Z PX 1 ,...,Xn−1 ,Y O χ+Z X ,...,X 1 n−1 ,Y Y +πZ 1 ,...,Xn−1 PX X1 ,...,Xn−1 ,π Y Z  X1 + · · · + Xn−1 + Z / P +Y +Z X1 ,...,Xn−1 O +Z θX ,...,X 1 n−1  / P +Z X1 ,...,Xn−1 CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 61 Thus (2) is established. The pullback property of the right square above follows finally from considering the following diagram χX1 ,...,Xn−1 ,Y +Z / P +Y +Z X1 ,...,Xn−1 PX1 ,...,Xn−1 ,Y +Z O Y +πZ PX ,...,X 1 n−1  / P +Z O  +ιY Z 1 ,...,Xn−1 PX PX1 ,...,Xn−1 ,Z χX1 ,...,Xn−1 ,Z X1 ,...,Xn−1 O O  X1 + · · · + Xn−1  / PX1 ,...,Xn−1 θX1 ,...,Xn−1 in which whole rectangle and lower square are downward-oriented pullbacks.  Theorem 6.22. Let D be an n-nilpotent σ-pointed exact Mal’tsev category such that the identity functor of Niln−1 (D) is of degree ≤ n − 1. The following properties are equivalent: (a) the identity functor of D is of degree ≤ n; (b) the category D satisfies condition Pn (cf. Definition 6.15); (c) the functor PX1 ,...,Xn−1 ,− : D → PtX1 +···+Xn−1 (D) preserves binary sums for all objects X1 , . . . , Xn−1 . If D is semi-abelian then the former properties are also equivalent to: (d) the n-th cross-effect of the identity is coherent in each variable; (e) the n-th cross-effect of the identity is linear in each variable; (f) the diagonal n-th cross-effect of the identity is a functor of degree ≤ n. Proof. It follows from Proposition 6.16 that properties (a) and (b) are equivalent, while properties (b) and (c) are equivalent by Theorem 6.14 and Proposition 6.21. For the equivalence between (c) and (d), note first that the n-th cross-effect preserves regular epimorphisms in each variable by Lemma 6.19 so that coherence (in the last variable) amounts to the property that the canonical map crn (X1 , . . . , Xn−1 , Y ) + crn (X1 , . . . , Xn−1 , Z) → crn (X1 , . . . , Xn−1 , Y + Z) is a regular epimorphism. Since by Theorem 6.14 for W = Y, Z, Y + Z the regular epimorphism X1 +· · ·+Xn−1 +W ։ PX1 ,...,Xn−1 ,W is an affine extension, Proposition 3.11 establishes the equivalence between (c) and (d). Finally, consider the following commutative diagram in Nil1 (D) = Ab(D) I 1 (crn (X1...n−1 , Y ) + crn (X1...n−1 , Z))  crn (X1...n−1 , Y + Z) ∼ = / crn (X1...n−1 , Y ) × crn (X1...n−1 , Z) O crn (X1 ,...,Xn−1 ,θY,Z ) / crn (X1...n−1 , Y × Z) 62 CLEMENS BERGER AND DOMINIQUE BOURN in which the upper horizontal map is invertible because the n-th cross-effect takes values in abelian group objects. It follows that the left vertical map is a section so that property (d) is equivalent to the invertibility of this left vertical map. Therefore, (d) is equivalent to the invertibility of the diagonal map crn (X1 , . . . , Xn−1 , Y + Z) → crn (X1 , . . . , Xn−1 , Y ) × crn (X1 , . . . , Xn−1 , Z) which expresses linearity in the last variable, i.e. property (e). Property (e) implies property (f) by Lemma 6.18. It suffices now to prove that (f) implies (a). The Higgins commutator [X, . . . , X] of length n is the image of diagonal n-th cross-effect crn (X, . . . , X) under the n-th folding map δnX : X + · · · + X → X. The Higgins commutator of length n is thus a quotient-functor of the diagonal n-th cross-effect and as such a functor of degree ≤ n by Theorem 6.23a. 
Corollary 6.12 n−1 and Remark 2.13 imply that the kernel K[ηX : X ։ I n−1 (X)] (considered as a functor in X) is a subfunctor of the Higgins commutator of length n and hence, again by Theorem 6.23a, a functor of degree ≤ n. It follows then from the short exact sequence of endofunctors ⋆ −→ K[η n−1 ] −→ idD −→ I n−1 −→ ⋆ (by a third application of Theorem 6.23a) that the identity functor of D is also of degree ≤ n, whence (f) implies (a).  6.4. Homogeneous nilpotency towers. – One of the starting points of this article was the existence of a functorial nilpotency tower for any σ-pointed exact Mal’tsev category E, cf. Section 2.4. It is not surprising that for a semi-abelian category E the successive kernels of the nilpotency tower capture the essence of the whole tower. To make this more precise, we denote by M M LE (X) = LnE (X) = K[I n (X) ։ I n−1 (X)] ∈ Ab(E) n≥1 n≥1 the graded abelian group object defined by the successive kernels. This construction is a functor in X. The nilpotency tower of E is said to be homogeneous if for each n, the n-th kernel functor LnE : E → Ab(E) is a functor of degree ≤ n. The degree of a functor does not change under composition with conservative left exact functors. We can therefore consider LnE as an endofunctor of E. Observe also that the binary sum in Niln (E) is obtained as the reflection of the binary sum in E. This implies that the degree of LnE is the same as the degree of LnNiln (E) . We get the following short exact sequence of endofunctors of Niln (E) ⋆ −→ LnNiln (E) −→ idNiln (E) −→ IEn,n−1 −→ ⋆ where the last term is the relative Birkhoff reflection IEn,n−1 : Niln (E) → Niln−1 (E). A more familiar way to express the successive kernels LnE (X) of the nilpotency tower of X is to realise them as subquotients of the lower central series of X. Indeed, CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 63 the 3 × 3-lemma implies that there is a short exact sequence ⋆ −→ LnE (X) = γn (X)/γn+1 (X) −→ X/γn+1 (X) −→ X/γn (X) −→ ⋆ where γn+1 (X) denotes the iterated Huq commutator of X of length n + 1, i.e. the n kernel of the n-th Birkhoff reflection ηX : X ։ I n (X), cf. Remark 2.13. The conclusion of the following theorem is folklore among those who are familiar with Goodwillie calculus in homotopy theory (cf. [3, 29]). Ideally, we would have liked to establish Theorem 6.23c by checking inductively one of the conditions of Theorem 6.22 without using any computation involving elements. Theorem 6.23. Let E be a semi-abelian category. (a) For any short exact sequence ⋆ −→ F1 −→ F −→ F2 −→ ⋆ of endofunctors of E, F is of degree ≤ n if and only if F1 and F2 are both of degree ≤ n; (b) The nilpotency tower of E is homogeneous if and only if the identity functors of Niln (E) are of degree ≤ n for all n; (c) The category of groups and the category of Lie algebras have homogeneous nilpotency towers. Proof. For (a) we need the following cogluing lemma for regular epimorphisms in regular categories: for any quotient-map of cospans X f  X′ //Zo h Y g  Y′  / / Z′ o in which the left naturality square is a regular pushout (cf. Section 1.3), the induced map on pullbacks f ×h g : X ×Z Y → X ′ ×Z ′ Y ′ is again a regular epimorphism. Indeed, a diagram chase shows that the following square Xo  X ×Z ′ Z o ′ X ×Z Y ′ (X × Z′  Y ′ ) ×Y ′ Y is a pullback. The left vertical map is a regular epimorphism by assumption so that the right vertical map is a regular epimorphism as well. 
Since g is a regular epimorphism, the projection (X ′ ×Z ′ Y ′ ) ×Y ′ Y → X ′ ×Z ′ Y ′ is again a regular epimorphism so that f ×h g is the composite of two regular epimorphisms. F The limit construction PX is an iterated pullback along split epimorphisms. 1 ,...,Xn+1 Therefore, Corollary 1.7 and the cogluing lemma show inductively that the morphism 64 CLEMENS BERGER AND DOMINIQUE BOURN F2 F PX → PX induced by the quotient-map F ։ F2 is a regular epimor1 ,...,Xn+1 1 ,...,Xn+1 phism. The 3 × 3-lemma yields then the exact 3 × 3-square ⋆ ⋆ ⋆ ⋆  / crF1 (X1 , . . . , Xn+1 ) n+1  F / crn+1 (X1 , . . . , Xn+1 )  / crF2 (X1 , . . . , Xn+1 ) n+1 /⋆ ⋆  / F1 (X1 + · · · + Xn+1 )  / F (X1 + · · · + Xn+1 )  / F2 (X1 + · · · + Xn+1 ) /⋆ ⋆  / P F1 X1 ,...,Xn+1  / PF X1 ,...,Xn+1  / P F2 X1 ,...,Xn+1 /⋆  ⋆  ⋆  ⋆ from which (a) immediately follows, cf. Corollary 6.17b. For (b) we can assume inductively that Niln−1 (E) has an identity functor of degree ≤ n − 1 so that the Birkhoff reflection IEn,n−1 : Niln (E) → Niln−1 (E) is of degree ≤ n−1, and finally IEn,n−1 is also of degree ≤ n−1 when considered as an endofunctor of Niln (E). Statement (b) follows then from (a) by induction on n. For (c) we treat the group case, the Lie algebra case being very similar. In the category of groups, the graded object LGrp (X) is a graded Lie ring with Lie bracket m+n n [−, −] : Lm Grp ⊗ LGrp (X) → LGrp (X) induced by the commutator map (x, y) 7→ −1 −1 xyx y in X. This graded Lie ring is generated by its elements of degree 1, cf. Lazard [51, Section I.2]. In particular, there is a regular epimorphism of abelian groups L1Grp (X)⊗n → LnGrp (X) which is natural in X. The functor which assigns to X the tensor power L1Grp (X)⊗n is the diagonal of a multilinear abelian-groupvalued functor in n variables, and hence a functor of degree ≤ n by Lemma 6.18. It follows from (a) that its quotient-functor LnGrp is of degree ≤ n as well, whence the homogeneity of the nilpotency tower in the category of groups.  Theorem 6.24. Let E be a semi-abelian category. The following conditions are equivalent: (a) (b) (c) (d) The nilpotency tower of E is homogeneous; For each n, the n-th Birkhoff reflection I n : E → Niln (E) is of degree ≤ n; For each n, an object of E is n-nilpotent if and only if it is n-folded; For each object X of E, iterated Huq commutator [X, [X, · · · , X] · · · ]] and Higgins commutator [X, X, . . . , X] of same length coincide. If E satisfies one and hence all of these conditions, then so does any reflective Birkhoff subcategory of E. If E is algebraically extensive and satisfies one and hence all of these conditions then so does the fibre PtX (E) over any object X of E. Proof. We have already seen that (b) is equivalent to condition (b) of Theorem 6.23, which implies the equivalence between (a) and (b). Propositions 6.5 and 6.13 show CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 65 that (b) implies (c), while Theorem 6.8 shows that (c) implies (b). The equivalence between (c) and (d) is proved in exactly the same way as Corollary 5.19. Let D be a reflective Birkhoff subcategory of E. We shall show that D inherits (c) from E. By Proposition 6.13, it suffices to show that in D each n-nilpotent object X is n-folded. Since the inclusion D ֒→ E is left exact, it preserves n-nilpotent objects so that X is n-nilpotent in E, and hence by assumption n-folded in E. The Birkhoff reflection E → D preserves sums and the limit construction PX1 ,...,Xn+1 by an iterated application of Proposition 2.8. 
Therefore, X is indeed n-folded in D. By Lemma 5.10 algebraic extensivity implies that all pointed base-change functors are exact. By Lemma 2.15 this implies that the following square of functors PtX (E) n IPt X (E) / Niln (PtX (E)) ∗ ωX ∗ ωX  E  IEn / Niln (E) commutes up to isomorphism. The vertical functors are exact and conservative. n Therefore, if IEn is of degree ≤ n then IPt is of degree ≤ n as well.  X (E) 6.5. On Moufang loops and triality groups. – We end this article by giving an example of a semi-abelian category in which 2-foldedness is not equivalent to 2-nilpotency, namely the semi-abelian variety of Moufang loops. In particular, the semi-abelian subvariety of 2-nilpotent Moufang loops is neither quadratic (cf. Proposition 6.5) nor algebraically distributive (cf. Corollaries 5.16 and 5.19). The nilpotency tower of the semi-abelian category of Moufang loops is thus inhomogeneous (cf. Theorem 6.24). Nevertheless, the category of Moufang loops fully embeds into the category of triality groups [22, 28, 37] which, as we will see, is a semi-abelian category with homogeneous nilpotency tower. Recall [15] that a loop is a unital magma (L, ·, 1) such that left and right translation by any element z ∈ L are bijective. A Moufang loop [56] is a loop L such that (z(xy))z = (zx)(yz) = z((xy)z) for all x, y, z ∈ L. Moufang loops form a semiabelian variety which contains the variety of groups as a reflective Birkhoff subvariety. Moufang loops share many properties of groups, but the lack of a full associative law complicates the situation. The main example of a non-associative Moufang loop is the set of invertible elements of a non-associative alternative algebra (i.e. in characteristic 6= 2 a unital algebra in which the difference (xy)z − x(yz) alternates in sign whenever two variables are permuted). In particular, the set O∗ of non-zero octonions forms a Moufang loop. Taking the standard real basis of the octonions together with their additive inverses yields a Moufang subloop O16 = {±1, ±e1, . . . , ±e7 } with sixteen elements. We will see that O16 is 2-nilpotent, but not 2-folded. 66 CLEMENS BERGER AND DOMINIQUE BOURN By Moufang’s theorem [56], any Moufang loop, which can be generated by two elements, is associative and hence a group. In particular, for any element of a Moufang loop, left and right inverse coincide. The kernel of the reflection of a Moofang loop L into the category of groups is the so-called associator subloop [L, L, L]ass of L. For a Moufang loop L, the associator subloop is generated by the elements of the form [x, y, z] = ((xy)z)(x(yz))−1 . Such an “associator” satisfies [1, y, z] = [x, 1, z] = [x, y, 1] = 1 and is thus 3-reducible, cf. Remark 6.4. This implies that for a Moufang loop L, the associator subloop [L, L, L]ass is contained in the ternary Higgins commutator [L, L, L], cf. Proposition 6.1 and Section 6.2. In conclusion, any 2-folded Moufang loop has a trivial associator subloop and is therefore a 2-folded group. In particular, O16 cannot be 2-folded since O16 is not a group. One can actually show that [O16 , O16 , O16 ] = {±1}. On the other hand, the centre of O16 is also {±1}, and the quotient by the centre O16 /{±1} is isomorphic to (Z/2Z)3 . This implies that O16 is 2-nilpotent, i.e. [O16 , [O16 , O16 ]] = {1}. The variety of Moufang loops is interesting with respect to the existence of centralisers. Since algebraic distributivity fails, such centralisers do not exist for general subloops, cf. [13]. 
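For readers who wish to verify the non-triviality of the associator subloop of $O_{16}$ by hand, here is a sketch of the computation; it assumes only the standard Fano-plane multiplication rules for the octonion units, and the particular labelling of the units plays no role. For any triple of imaginary units $e_i,e_j,e_k$ not contained in a common quaternion subalgebra one has $(e_ie_j)e_k=-\,e_i(e_je_k)$, hence
\[
[e_i,e_j,e_k] \;=\; \bigl((e_ie_j)e_k\bigr)\bigl(e_i(e_je_k)\bigr)^{-1} \;=\; (-w)\,w^{-1} \;=\; -1, \qquad w=e_i(e_je_k),
\]
while triples contained in a quaternion subalgebra, as well as all triples involving $\pm 1$, associate and have trivial associator. The associator subloop of $O_{16}$ is therefore exactly $\{\pm 1\}$, which is consistent with $[O_{16},O_{16},O_{16}]=\{\pm 1\}$ and confirms once more that $O_{16}$ is not 2-folded.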
Nevertheless, each Moufang loop L has a centre Z(L) in the sense of Section 1.5, i.e. a centraliser Z(1L ) for its identity 1L : L → L. This centre Z(L) is a normal subloop of L, and is the intersection Z(L) = M (L) ∩ N (L) of the Moufang centre M (L) = {z ∈ L | zx = xz ∀x ∈ L} with the so-called nucleus N (L) = {z ∈ L | [z, x, y] = [x, z, y] = [x, y, z] = 1 ∀x, y ∈ L}, cf. Bruck [15]. Groups with triality have been introduced in the context of Moufang loops by Glauberman [28] and Doro [22]. A triality on a group G0 is an action (by automorphisms) of the symmetric group S3 on three letters such that for all g ∈ G0 and σ ∈ S3 (resp. ρ ∈ S3 ) of order 2 (resp. 3), the identity [σ, g](ρ.[σ, g])(ρ2 .[σ, g]) = 1 holds where [σ, g] = (σ.g)g −1 . We denote the split epimorphism associated to the group action by p : G0 ⋊ S3 ⇆ S3 : i and call it the associated triality group. The defining relations for a group with triality are equivalent to the following condition on the associated triality group p : G ⇆ S3 : i (cf. Liebeck [52] and Hall [37]): for any two special elements g, h ∈ G such that p(g) 6= p(h) one has (gh)3 = 1 where g ∈ G is called special if g is conjugate in G to some element of order 2 in i(S3 ). For the obvious notion of morphism, the category TriGrp⋆ of triality groups is a full subcategory of the fibre PtS3 (Grp) over the symmetric group S3 . The category TriGrp⋆ is closed under taking subobjects, products and quotients in PtS3 (Grp). Moreover, quotienting out the normal subgroup generated by the products (gh)3 for all pairs of special elements (g, h) such that p(g) 6= p(h) defines a reflection PtS3 (Grp) → TriGrp⋆ . Therefore, TriGrp⋆ is a reflective Birkhoff subcategory of PtS3 (Grp). Since the category of groups is an algebraically extensive semi-abelian category (cf. Section 5.3) with homogeneous nilpotency tower (cf. Theorem 6.23), so is its fibre PtS3 (Grp) by Lemma 5.13 and Theorem 6.24. The reflective Birkhoff subcategory TriGrp⋆ formed by the triality groups is thus also a semi-abelian category with homogeneous nilpotency tower, again by Theorem 6.24. CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 67 This result is remarkable because the category of triality groups contains the category of Moufang loops as a full coreflective subcategory, and the latter has an inhomogeneous nilpotency tower. The embedding of Moufang loops and its right adjoint have been described by Doro [22] for groups with triality, and by Hall [37] for the associated triality groups, see also Grishkov-Zavarnitsine [34]. Moufang loops can thus up to equivalence of categories be identified with triality groups for which the counit of the adjunction is invertible. Considering them inside the category of triality groups permits the construction of a homogeneous nilpotency tower. Acknowledgements We are grateful to Georg Biedermann, Rosona Eldred, Marino Gran, James Gray, Jonathan Hall, George Janelidze, Daniel Tanré and Tim Van der Linden for helpful discussions. We are also grateful to the referee for his careful reading of our manuscript. Special thanks are due to Manfred Hartl whose seminar talk in September 2013 in Nice was the starting point for this work. The first author acknowledges financial support of the French ANR grant HOGT. References [1] H.-J. Baues and T. Pirashvili – Quadratic endofunctors of the category of groups, Adv. Math. 141 (1999), 167–206. 36, 50 [2] I. Berstein and and T. Ganea – Homotopical nilpotency, Illinois J. Math. 5 (1961), 99–130. 5 [3] G. 
Biedermann and B. Dwyer – Homotopy nilpotent groups, Algebr. Geom. Topol. 10 (2010), 33–61. 5, 63 [4] F. Borceux and D. Bourn – Mal’cev, protomodular, homological and semi-abelian categories, Math. Appl. 566, Kluwer Acad. Publ., 2004. 3, 6, 7, 13, 14, 25, 28, 29, 31, 32, 33, 36, 43, 47 [5] D. Bourn – Normalization equivalence, kernel equivalence and affine categories, Lect. Notes Math. 1488, Springer Verlag 1991, 43–62. 14, 25 [6] D. Bourn – Mal’tsev categories and fibration of pointed objects, Appl. Categ. Struct. 4 (1996), 307–327. 6, 11, 13, 14, 24, 25, 30 [7] D. Bourn – The denormalized 3 × 3 lemma, J. Pure Appl. Algebra 177 (2003), 113–129. 9, 10 [8] D. Bourn – Commutator theory in regular Mal’tsev categories, AMS Fields Inst. Commun. 43 (2004), 61–75. 6, 7, 8, 21 [9] D. Bourn – Commutator theory in strongly protomodular categories, Theory Appl. Categ. 13 (2004), 27–40. 14 [10] D. Bourn – On the monad of internal groupoids, Theory Appl.Categ. 28 (2013), 150–165. 42 [11] D. Bourn and M. Gran – Central extensions in semi-abelian categories, J. Pure Appl. Algebra 175 (2002), 31–44. 6, 7, 11 [12] D. Bourn and M. Gran – Centrality and connectors in Maltsev categories, Algebra Universalis 48 (2002), 309–331. 6, 7 [13] D. Bourn and J.R.A. Gray – Aspects of algebraic exponentiation, Bull. Belg. Math. Soc. 19 (2012), 823–846. 3, 14, 15, 41, 66 [14] D. Bourn and D. Rodelo – Comprehensive factorization and I-central extensions, J. Pure Appl. Algebra 216 (2012), 598–617. 20, 24 [15] R. H. Bruck – A survey of binary systems, Ergebnisse der Mathematik und ihrer Grenzgebiete 20, Springer Verlag 1958. 2, 65, 66 [16] A. Carboni and G. Janelidze – Smash product of pointed objects in lextensive categories, J. Pure Appl. Algebra 183 (2003), 27–43. 3, 31, 37, 46 [17] A. Carboni, G. M. Kelly and M. C. Pedicchio – Some remarks on Mal’tsev and Goursat categories, Appl. Categ. Struct. 1 (1993), 385–421. 1, 2, 5, 6, 10, 11, 15, 31 68 CLEMENS BERGER AND DOMINIQUE BOURN [18] A. Carboni, S. Lack and R. F. C. Walters – Introduction to extensive and distributive categories, J. Pure Appl. Algebra 84 (1993), 145–158. 41 [19] A. Carboni, J. Lambek, M. C. Pedicchio, – Diagram chasing in Malcev categories, J. Pure Appl. Algebra 69 (1991), 271–284. 1, 6 [20] A. Cigoli, J. R. A. Gray, T. Van der Linden – Algebraically coherent categories, Theory Appl. Categ. 30 (2015), 1864–1905. 3, 40, 41, 42, 44, 45 [21] C. Costoya, J. Scherer, A. Viruel – A torus theorem for homotopy nilpotent groups, arXiv:1504.06100. 5 [22] S. Doro – Simple Moufang loops, Math. Proc. Cambridge Philos. Soc. 83 (1978), 377–392. 5, 65, 66, 67 [23] S. Eilenberg and S. Mac Lane – On the groups H(π,n). II. Methods of computation, Ann. of Math. (2) 60 (1954), 49–139. 3, 4, 36, 47, 58 [24] R. Eldred – Goodwillie calculus via adjunction and LS cocategory, arXiv:1209.2384. 5 [25] T. Everaert and T. Van der Linden – Baer invariants in semi-abelian categories I: general theory, Theory Appl. Categ. 12 (2004), 1–33. 2, 19, 21 [26] T. Everaert and T. Van der Linden – A note on double central extensions in exact Mal’tsev categories, Cah. Topol. Géom. Differ. Catég. 51 (2010), 143–153. 6 [27] R. S. Freese and R. N. McKenzie – Commutator theory for congruence modular varieties, London Math. Soc. Lect. Note Series 125, Cambridge Univ. Press, Cambridge, 1987. 2 [28] G. Glauberman – On loops of odd order II, J. Algebra 8 (1968), 383–414. 5, 65, 66 [29] T. G. Goodwillie – Calculus III. Taylor series, Geom. Topol. 7 (2003), 645–711. 1, 2, 4, 5, 36, 37, 47, 58, 63 [30] M. 
Gran – Central extensions and internal groupoids in Maltsev categories, J. Pure Appl. Alg. 155 (2001), 139–166. 15 [31] M. Gran – Applications of categorical Galois theory in universal algebra, AMS Fields Inst. Commun. 43 (2004), 243–280. 6, 20 [32] M. Gran and D. Rodelo – Beck-Chevalley condition and Goursat categories, arXiv:1512.04066. 6, 20 [33] M. Gran and T. Van der Linden – On the second cohomology group in semi-abelian categories, J. Pure Appl. Algebra 212 (2008), 636–651. 14, 22, 55 [34] A. N. Grishkov and A. V. Zavarnitsine – Groups with triality, J. Algebra Appl. 5 (2006), 441– 463. 67 [35] J. R. A. Gray – Algebraic exponentiation in general categories, Appl. Categ. Struct. 20 (2012), 543–567. 14, 41, 42 [36] J. R. A. Gray – Algebraic exponentiation for categories of Lie algebras, J. Pure Appl. Algebra 216 (2012), 1964–1967. 14, 41 [37] J. I. Hall – Central automorphisms, Z ∗ -theorems, and loop structures, Quasigroups and Related Systems 19 (2011), 69–108. 5, 65, 66, 67 [38] M. Hartl and B. Loiseau – On actions and strict actions in homological categories, Theory Appl. Categ. 27 (2013), 347–392. 3, 4, 31, 36, 37, 38, 40, 45, 46, 47 [39] M. Hartl and T. Van der Linden – The ternary commutator obstruction for internal crossed modules, Adv. Math. 232 (2013), 571–607. 3, 4, 31, 36, 37, 38, 40, 45, 46, 47, 55, 58, 59 [40] M. Hartl and C. Vespa – Quadratic functors on pointed categories, Adv. Math. 226 (2011), 3927–4010. 4, 36, 50 [41] P. J. Higgins – Groups with multiple operators, Proc. London Math. Soc. 6 (1956), 366–416. 47 [42] M. Hovey – Lusternik-Schnirelmann cocategory, Illinois J. Math. 37 (1993), 224–239. 5, 46 [43] S. A. Huq – Commutator, nilpotency and solvability in categories, Quart. J. Math. Oxford 19 (1968), 363–389. 2, 14, 19, 21 [44] G. Janelidze and G. M. Kelly – Galois theory and a general notion of central extension, J. Pure Appl. Algebra 97 (1994), 135–161. 6, 18, 20 [45] G. Janelidze and G. M. Kelly – Central extensions in universal algebra: a unification of three notions, Algebra Universalis 44 (2000), 123–128. 6 CENTRAL REFLECTIONS AND NILPOTENCY IN EXACT MAL’TSEV CATEGORIES 69 [46] G. Janelidze and G. M. Kelly – Central extensions in Mal’tsev varieties, Theory Appl. Categ. 7 (2000), 219–226. 6 [47] G. Janelidze, L. Márki and W. Tholen – Semi-abelian categories, J. Pure Appl. Algebra 168 (2002), 367–386. 2 [48] G. Janelidze, M. Sobral and W. Tholen – Beyond Barr exactness: effective descent morphisms, Cambridge Univ. Press, Encycl. Math. Appl. 97 (2004), 359–405. 24 [49] M. Jibladze and T. Pirashvili – Linear extensions and nilpotence of Maltsev theories, Contributions to Algebra and Geometry 46 (2005), 71–102. 21 [50] B. Johnson and R. McCarthy – A classification of degree n functors I/II, Cah. Topol. Géom. Différ. Catég. 44 (2003), 2–38, 163–216. 36 [51] M. Lazard – Sur les groupes nilpotents et les anneaux de Lie, Ann. Sci. E. N. S. 71 (1954), 101–190. 5, 64 [52] M. W. Liebeck – The classification of finite simple Moufang loops, Math. Proc. Cambridge Philos. Soc. 102 (1987), 33–47. 66 [53] A. I. Mal’cev – On the general theory of algebraic systems, Mat. Sbornik N. S. 35 (1954), 3–20. 2 [54] S. Mantovani and G. Metere – Normalities and commutators, J. of Algebra 324 (2010), 2568– 2588. 4, 31, 47 [55] J. Mostovoy – Nilpotency and dimension series for loops, Comm. Algebra 36 (2008), 1565–1579. 4 [56] R. Moufang – Zur Struktur von Alternativkörpern, Math. Ann. 110 (1935), 416–430. 65, 66 [57] M. C. Pedicchio – A categorical approach to commutator theory, J. 
of Algebra 177 (1995), 647–657. 7 [58] J. Penon – Sur les quasi-topos, Cah. Topol. Géom. Diff. 18 (1977), 181–218. 24 [59] D. Quillen – Homotopical algebra, Lect. Notes Math. 43, Springer Verlag 1967. 5 [60] J. D. H. Smith – Mal’cev varieties, Lect. Notes Math. 554, Springer Verlag 1976. 2, 7 [61] D. Stanovsky and P. Vojtěchovskỳ, Commutator theory for loops, J. of Algebra 399 (2014), 290–322. 4 [62] T. Van der Linden – Simplicial homotopy in semi-abelian categories, J. K-theory 4 (2009), 379–390. 2, 5 Université de Nice, Lab. J. A. Dieudonné, Parc Valrose, 06108 Nice Cedex, France E-mail address: [email protected] Lab. J. Liouville, CNRS Fed. Rech. 2956, Calais, France E-mail address: [email protected]
1 arXiv:1705.08394v1 [] 23 May 2017 Information Theoretic Principles of Universal Discrete Denoising Janis Nötzel∗,1,2 and Andreas Winter×,1,3 1 Departament de Fı́sica: Grup d’Informació Quàntica, Universitat Autònoma de Barcelona, ES-08193 Bellaterra (Barcelona), Spain. 2 Theoretische Nachrichtentechnik, Technische Universität Dresden, D-01187 Dresden, Germany. 3 ICREA—Institució Catalana de Recerca i Estudis Avançats, Pg. Lluis Companys, 23, ES-08010 Barcelona, Spain. Email: ∗ [email protected], × [email protected] (23 May 2017) Abstract—Today, the internet makes tremendous amounts of data widely available. Often, the same information is behind multiple different available data sets. This lends growing importance to latent variable models that try to learn the hidden information from the available imperfect versions. For example, social media platforms can contain an abundance of pictures of the same person or object, yet all of which are taken from different perspectives. In a simplified scenario, one may consider pictures taken from the same perspective, which are distorted by noise. This latter application allows for a rigorous mathematical treatment, which is the content of this contribution. We apply a recently developed method of dependent component analysis to image denoising when multiple distorted copies of one and the same image are available, each being corrupted by a different and unknown noise process. In a simplified scenario, we assume that the distorted image is corrupted by noise that acts independently on each pixel. We answer completely the question of how to perform optimal denoising, when at least three distorted copies are available: First we define optimality of an algorithm in the presented scenario, and then we describe an aymptotically optimal universal discrete denoising algorithm (UDDA). In the case of binary data and binary symmetric noise, we develop a simplified variant of the algorithm, dubbed BUDDA, which we prove to attain universal denoising uniformly. Keywords: blind detection, image denoising, hidden variable, latent variable, internet of things. I. I NTRODUCTION Consider the following, idealised situation of inference from noisy data: In the past, a number K of pictures has been taken of an object. Each picture consists of a number n of pixels, and each pixel takes values in a finite alphabet X of colours, ∣X ∣ = L. We would like to recover from these images a “true” image, of which we assume all K that we have at our disposal are noisy copies. To be precise, we assume that each of the pictures is the output of a memoryless (i.i.d.) noise process W , mapping an element x ∈ X to another element y ∈ X with probability w(y∣x). If we knew W as well as the frequency p(x) of the colours x in the original, we could choose some permutation Φ ∶ X → X as the denoising rule. If one is agnostic about the colour of each pixel and applies any such denoising to a long enough string on a pixel-by-pixel basis, it will produce a recovered image that, with high probability, deviates from the original in a fraction of pixels asymptotically equal to 1 − max ∑ p(τ (x))w(Φ−1 (x)∣τ (x)) τ x∈X = 1 − max ∑ p(x)w(τ (x)∣x), τ (1) x∈X where the maximum is over all permutations τ ∶ X → X . Given that we are more interested in the structure of the image, and not in the question whether it is, say, black-and-white or white-and-black, this is a satisfactory solution. 
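As a concrete reading of the error expression (1), the following Python sketch (the function name and the toy numbers are ours, not part of the paper) enumerates all permutations of the colour alphabet and returns the letter-by-letter denoiser achieving 1 − max_τ Σ_x p(x) w(τ(x)|x) together with that error rate; it assumes p and w are given as Python dictionaries.

from itertools import permutations

def best_permutation_denoiser(p, w):
    # p : dict colour -> frequency of that colour in the original image
    # w : dict (y, x) -> probability that the noise maps colour x to colour y
    # Returns (tau, error) with error = 1 - max_tau sum_x p(x) w(tau(x)|x).
    colours = sorted(p)
    best_tau, best_hit = None, -1.0
    for perm in permutations(colours):
        tau = dict(zip(colours, perm))
        hit = sum(p[x] * w[(tau[x], x)] for x in colours)
        if hit > best_hit:
            best_tau, best_hit = tau, hit
    return best_tau, 1.0 - best_hit

# Toy check: a binary image, 60% black (0), noise flipping each pixel with probability 0.1.
p = {0: 0.6, 1: 0.4}
w = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 0.9}
print(best_permutation_denoiser(p, w))   # identity permutation, error 0.1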
However, if the number of possible outputs of the noise process grows, such simple solutions turn out to be not optimal anymore, because more information is available in the string that a letter-by-letter algorithm does not see. Consider the following example, where a channel maps the black and white pixels represented by the alphabet {1, 2} to three different colours {1, 2, 3}. Let the probabilistic law of this process be given by w(1∣1) = w(3∣1) = 1/2 and w(2∣2) = 1. A simple denoising algorithm may map the elements of the set {1, 2} to 1 and those in {3} to 2. If the input bits are equally distributed, the probability that this algorithm produces an error is lower bounded by 1/4 – even if one is agnostic as to an exchange 1 ↔ 2 at the input. Instead, knowing the channel lets one perform perfect denoising, i.e. with zero error. The problem is thus to know the channel, which it turns out can be achieved by looking at several independent noisy copies instead of only one. We now give another example about large classes of channels which allow us to achieve a nontrivial inference about the original by a generic algorithmic strategy. Example 1. Let the binary symmetric channels (BSC) W1 , W2 , W3 be given, each with a respective probability wi of transmitting the input bit correctly, and the channel W from {0, 1} to {0, 1}3 is the product, giving the same input to all: W (y1 y2 y3 ∣x) = ∏3j=1 wj (yj ∣x). Then, a simple way of guessing the input bit is to use the majority rule, i.e. the output is mapped to 0 iff N (0∣y1 y2 y3 ) > N (1∣y1 y2 y3 ). Assume that the input string has equal number of 0 and 1. For BSC parameters w1 = 0.1, w2 = 0.45 and w3 = 0.9, this gives a probability of correct decoding that is bounded as 4/10 ≤ pcorrect ≤ 5/10. work assumes Gaussian noise, whereas our work is able to handle any form of noise within the category of finite-alphabet memoryless noise processes. Sparsity has been exploited for coloured images in [13], universal compressed sensing in [14]. A new method for image denoising was presented first here [15] and then further developed in [16]. A scenario that is related to the one treated here has been investigated in the recent work [17], but without application to a specific problem. The focus of the work [17] was to demonstrate that certain hidden variable models have a one to one correspondence to a particular subset of probability distributions. The results presented in [17] are closely related to the famous work [18] of Kruskal, which forms one theoretical cornerstone of the PARAFAC model (see the review article [19] to get an impression of methods in higher order data analysis). The connections between PARAFAC, multivariate statistical analysis [20] and more information-theoretic models has also been used in the recent work [21], where the authors basically elaborated on methods to efficiently perform step 3 of our algorithm in the restricted scenario where all channels are identical: w1 (y∣x) = . . . = wK (y∣x) for all y ∈ Y and x ∈ X . The interested reader can find details regarding algorithms for fitting the PARAFAC model in [22]. Further connections to deep learning, current implementations and other areas of application are to be found in [23]. Our Theorems 13 and 12 provide an improvement over all the existing algorithms in a restricted but important scenario. 
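The figures quoted in Example 1 above can be reproduced numerically. The short Python sketch below (all names are ours) computes, for the three BSC parameters 0.1, 0.45 and 0.9, the success probability of the bitwise majority rule and the success probability of deciding from a single output alone when a global relabelling 0 ↔ 1 of the input is allowed; the two values fall inside the interval [4/10, 5/10] and at 9/10, respectively, matching the discussion of Example 1.

from itertools import product

W = [0.1, 0.45, 0.9]   # BSC parameters: probability of transmitting the bit correctly

def p_correct_majority(w):
    # Probability that the bitwise majority of the K noisy copies equals the input
    # (inputs 0 and 1 equally likely; by symmetry it suffices to condition on input 0).
    total = 0.0
    for outputs in product([0, 1], repeat=len(w)):
        prob = 1.0
        for wi, y in zip(w, outputs):
            prob *= wi if y == 0 else 1.0 - wi   # the input bit is 0
        if sum(outputs) < len(w) / 2:            # majority votes for 0
            total += prob
    return total

def p_correct_single_output_up_to_relabelling(w, i=0):
    # Decide from output i only, allowing a global relabelling 0 <-> 1 of the input.
    return max(w[i], 1.0 - w[i])

print(p_correct_majority(W))                           # ~0.459
print(p_correct_single_output_up_to_relabelling(W))    # 0.9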
Despite some similarities, the problem treated here is to be distinguished from the CEO problem [24], [25], which is also receiving attention recently [26] but is focussed on rate constraints in the process of data combining from the multiple sources. In addition, noise levels are assumed to be known in the CEO problem. Thus, even taking into account our agnostic view regarding the labelling of the input alphabet, the probability of guessing the right input up to a permutation is upper bounded by 6/10. Now consider a genie-aided observer that is given BSC parameters. Such an observer would come to the conclusion that the second channel, W2 , actually conveys little information to him, and could decide to not consider its output at all. The genie-aided observer could then come up with the following rule for guessing the input bit: The outputs in the set {00, 01} get mapped to 0, while the outputs in the set {11, 10} get mapped to 1. Then, the probability of correctly guessing the input bit would rise to 9/10. Note that the permutation of the colours is an unavoidable limit to any recovery procedure in which we do not want to make any assumptions about the channels and the colour distribution p. Based on these considerations, we propose the general mathematical formulation of our problem is as follows. As before we assume the existence of one “original” version of the picture, which is represented by a string xn ∈ X n where n ∈ N. A number K of copies of the image is given, where each copy is modelled by a string yj = (y1j , . . . , ynj ), and j ranges from 1 to K. These images are assumed to be influenced by an i.i.d. noise process, such that the probability of receiving the matrix y = (yij )n,K i,j=1 of distorted images is given by n,K P(y∣xn ) = ∏ wj (yij ∣xi ). (2) i,j=1 The probabilistic laws w1 , . . . , wK crucially are not known. The question is: What is the optimal way of recovering xn from the output y? Before we formalize this problem, we give an overview over the structure of the paper and then of the related work. III. D EFINITIONS A. Notation The set of permutations of the elements of a finite set X is denoted SX . Given two finite sets X , Y, their product is X ×Y ∶= {(x, y) ∶ x ∈ X , y ∈ Y}. For any natural number n, X n is the n-fold product of X with itself. For a string xn ∈ X n , and an element x′ ∈ X , let N (x′ ∣xn ) ∶= ∣{i ∶ xi = x′ }∣ be the number of times x′ occurs in the string. The probability distribution N (x′ ∣xn ) ∶= n1 N (x′ ∣xn ) is called the type of xn . The set P(X ) of probability distributions on X is identified with the subset II. R ELATED WORK The earlier work in the area of image denoising has focussed mostly on situations where the probabilistic law governing the noise process is (partially) known. Among the work in that area are publications like [1]–[4]. In particular, the work [1] laid the information theoretic foundation of image denoising when the channel is known. It has been extended to cover analysis of grayscale images, thereby dealing with the problem incurred by a large alphabet, here [5]. The DUDE framework has also been extended to cover continuous alphabets [6] and in other directions [7]. An extension to cover noise models with memory has been developed in [8]. Optimality of the DUDE has been further analyzed in [9], where it was compared to an omniscient denoiser, that is tuned to the transmitted noiseless sequence. An algorithm for general images has e.g. been demonstrated here [10]. 
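For concreteness, the observation model (2) introduced above can be simulated directly. The sketch below (Python/NumPy; the encoding of colours as integers 0, ..., L−1 and the helper name are our conventions) draws the K noisy copies of a given original string under K arbitrary memoryless channels.

import numpy as np

def noisy_copies(x, channels, rng=None):
    # Draw K noisy copies of the original string x under the product law (2).
    # x        : 1-D integer array of length n with entries in {0, ..., L-1}
    # channels : list of K row-stochastic L x L matrices, channels[j][a, b] = w_j(b|a)
    # Returns an (n, K) integer array y with y[i, j] ~ w_j(.|x[i]), independently.
    rng = np.random.default_rng() if rng is None else rng
    n, K = len(x), len(channels)
    y = np.empty((n, K), dtype=int)
    for j, W in enumerate(channels):
        # inverse-CDF sampling: row x[i] of W is the law of pixel i in copy j
        cdf = np.cumsum(W, axis=1)[x]              # shape (n, L)
        y[:, j] = (rng.random((n, 1)) > cdf).sum(axis=1)
    return y

# Example: a binary original observed through three binary symmetric channels.
bsc = lambda b: np.array([[b, 1 - b], [1 - b, b]])
x = np.random.default_rng(0).integers(0, 2, size=1000)
y = noisy_copies(x, [bsc(0.9), bsc(0.45), bsc(0.1)])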
Cases where the probabilistic law behind the noise process is known partially were treated in [3], [4]. A tour of modern image filtering with focus on denoising can be found in the work [11]. The feasibility of the task of multi copy image denoising in a very particular scenario that is similar to the one treated here has been demonstrated [12]. However, that {p ∈ R∣X ∣ ∶ p = ∑ p(x)ex } (3) x∈X of R∣X ∣ , where {ex }x∈X is the standard orthonormal basis of R∣X ∣ . The support of p ∈ P(X ) is supp(p) ∶= {x ∈ X ∶ p(x) > 0}. We define two important subsets of P(X ), for a conventional numbering X = {x1 , x2 , . . . , xL }: P> (X ) ∶= {p ∈ P(X ) ∶ p(x) > 0 ∀ x ∈ X } , (4) ↓ P (X ) ∶= {p ∈ P(X ) ∶ p(x1 ) ≥ p(x2 ) ≥ . . . ≥ p(xL ) > 0} . (5) A subset of elements of P(X ) is the set of its extreme points, the Dirac measures: for x ∈ X , δx ∈ P(X ) is defined through 2 δx (x′ ) = δ(x, x′ ), where δ(⋅, ⋅) is the usual Kronecker delta symbol. The indicator functions are defined for subsets S ⊂ X via 1S (x) = 1 if x ∈ S and 1X (x) = 0 otherwise. We will have to measure how different two elements p, p′ ∈ P(X ) are. This may √ be done using (for any α ≥ 1) the αnorms ∥p − p′ ∥α ∶= α ∑x∈X ∣p(x) − p′ (x)∣α . If α = 1, this reduces, up to factor of 2, to the total variational distance, which we will use in the following while dropping the subscript. The noise that complicates the task of estimating p is modelled by matrices W of conditional probability distributions (w(y∣x))x∈X ,y∈Y whose entries are numbers in the interval [0, 1] satisfying, for all x ∈ X , ∑y∈Y w(y∣x) = 1; such objects are also called stochastic matrices. To given finite sets X and Y, the set of all such matrices is denoted C(X , Y) in what follows. These stochastic matrices are, using standard terminology of Shannon information theory, equivalently called a “channel”. In this work, we restrict attention to those stochastic matrices that are invertible, denoting their set C0 (X , Y). A special channel is the “diagonal” ∆ ∈ C(X , X K ), defined as ∆(δx ) ∶= δx⊗K . For a probability distribution p ∈ P(X ), we write p(K) ∶= ∆(p), which assigns probability p(x) to xx . . . x, and 0 to all other strings. If two channels W1 and W2 act independently in parallel (as in (2) with K = 2 and n = 1), one writes W1 ⊗ W2 for this channel, and more generally for K ≥ 2. We will abbreviate the K-tuple (W1 , . . . , WK ) as W, if K is known from the context. A dependent component system is simply a collection (p, W) = (p, W1 , . . . , WK ) consisting of a distribution p on X and channels W ∈ C(X , Y). The distance between two dependent component systems is defined as meaning of a pixel color. For example, we do not make any distinction between a black-and-white picture and a version of that picture where black and white have been interchanged throughout. This motivates the following definition. Definition 3 (Universal algorithm). Let W ∈ C(X , X )K and p ∈ P(X ) be given. A sequence C = (Cn )n∈N of algorithms Cn ∶ (X K )n → X n is said to achieve resolution D ≥ 0 on W with respect to p, if for every ε > 0 there is an N = N (p, W, ε) such that for all sequences (xn )n∈N of elements of X n with limit limn→∞ N (⋅∣xn ) = p we have ∀n ≥ N (p, W, ε) The number R(p, W) ∶= inf {D ∶ ∃ C achieving distortion } D with respect to p (9) is the resolution of W with respect to p. A sequence of algorithms is universal if it achieves resolution R(p, W) for all p and W. 
A sequence of algorithms is uniformly universal on S ⊂ C(X , X ) if for all W ∈ S K the number N = N (W, ε) = N (p, W, ) can be chosen independently of p. If such a sequence exists, we say that the resolution D is achieved uniformly. Then, R(p, W) ∶= inf {D ∶ ∃ C uniformly achieving } distortion D w.r.t. p (10) is called the uniform resolution of W. Clearly, R(p, W) ≤ R(p, W). ∥(r, V) − (s, W)∥DCS ∶= ∥r − s∥1 + ∑ ∥Vj − Wj ∥F B , (6) Note that, if a sequence of uniformly universal algorithms exists, this implies that these algorithms deliver provably optimal performance for every possible input sequence xn , if only n ≥ N (W, ε) is large enough. We stress that the definition of resolution does in general depend on p, as can be seen from the following example. j=1 where ∥Vj − Wj ∥F B ∶= maxx ∑y ∣vj (y∣x) − wj (y∣x)∣. Finally, we quantify the distance between an image (or string of letters) and another such image (or string of letters). Definition 2 (Distortion measure). Any function d ∶ X × X → R≥0 is said to be a distortion measure. Clearly, this definition includes as a subclass the distortion measures that are at the same time a metric. The special case d(x, x′ ) ∶= δ(x, x′ ) of the Kronecker delta is known as the Hamming metric. It is understood that any distortion measure d naturally extends to a distortion measure on X n × X n via 1 n ∑ d(xi , yi ). n i=1 τ (8) K d(xn , y n ) ∶= P (min d(Cn (⋅), τ ⊗n (xn )) > D) < ε. Example 4. Let W ∈ C({0, 1, 2}, {0, 1}) be defined by w(0∣0) = w(0∣1) = 1 and w(1∣2) = 1 and d be the hamming distortion. Then, clearly, every two p, p̂ ∈ P({0, 1, 2}) satisfying p(2) = p̂(2) = 0 have the property W (p) = W (p̂). Thus, no two sequences xn , x̂n ∈ {0, 1, 2}n having N (2∣xn ) = N (2∣x̂n ) = 0 can be distinguished from each other after having passed through the channel. Let Cn be any decoder, assuming without loss of generality that Cn (1n ) = 1n . Consider xn = 0n/2 1n/2 ; it follows that minτ d(Cn (⋅), τ ⊗n (xn )) = 21 with probability 1. The same decoder applied to x̂n = 1n or x̃n = 0n obviously has minτ d(Cn (⋅), τ ⊗n (x̂n )) = 0 and minτ d(Cn (⋅), τ ⊗n (x̃n )) = 0, both once more with probability 1. This demonstrates the dependence of R(p, W) on p for simple cases where K = 1 and the channel matrix is noninvertible. (7) B. Definition of Resolution In the following, we assume a distortion measure d on X × X . With respect to such a measure, the performance of a denoising algorithm can be defined. In the special case treated here, we are interested in a form of universal performance that does not depend on any knowledge regarding the specific noise parameters. Our way of defining universality is in the spirit of Shannon information theory. Such a definition brings with it two important features: First, it is an asymptotic definition, and secondly it is a definition that is agnostic as to the specific Remark 5. Note that our definition of universality explicitly forbids an algorithm to know the probabilistic laws according 3 5) For each i ∈ [n], apply the MCA decoder to the sequence (yi1 , . . . , yiK ). This generates a sequence x̂n = x̂1 . . . x̂n . 6) Output x̂n . to which the original image gets distorted. In that sense, it respects a stronger definition of universality as for instance the DUDE algorithm [1]: the DUDE algorithm has to know the probabilistic law, which is assumed to be represented by an invertible stochastic matrix. 
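The scoring used in Definition 3 compares a reconstruction with the original only up to a relabelling of the colours, i.e. via min_τ d(Cₙ(y), τ⊗n(xⁿ)). For the Hamming distortion this quantity can be evaluated by brute force over the permutations of the alphabet, as in the following Python sketch (the helper name is ours).

from itertools import permutations

def ambiguous_hamming_distortion(x, x_hat, alphabet):
    # min over relabellings tau of the per-letter Hamming distance
    # between the reconstruction x_hat and tau applied letter-wise to x.
    n = len(x)
    best = 1.0
    for perm in permutations(alphabet):
        tau = dict(zip(alphabet, perm))
        best = min(best, sum(tau[a] != b for a, b in zip(x, x_hat)) / n)
    return best

# A reconstruction that swaps black and white everywhere still has distortion 0:
print(ambiguous_hamming_distortion([0, 1, 1, 0], [1, 0, 0, 1], [0, 1]))   # 0.0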
We will see in Theorem 7 in which situations an algorithm obeying our stronger definition can be expected to deliver provably optimal performance, and what its drawbacks in terms of performance are. Remark 8. An obvious modification of this algorithm is to deviate from the above definition in step 4, and instead run the DUDE algorithm from Ref. [1] or one of its modifications. The results in [27] imply that optimality holds also in that case, since the deviation of the estimated tuple (r, V1 , . . . , VK ) from the original (p, W1 , . . . , WK ) goes to 0 as n → ∞. Definition 6 (Minimal clairvoyant ambiguous distortion). Let W ∈ C(X , Y) be a channel and p ∈ P(X ) a probability distribution. Let d ∶ X × X → R be a distortion measure. The number dMCA (p, W ) ∶= min Definition 9. The DCA algorithm from [17] is defined by the following simple rule: ′ ∑ p(x)d(x, x )w(Tτ (x′ ) ∣x), (11) K τ ∈SX ,T x,x′ ∈X q̂ z→ arg min {∥(⊗ Vi )r(K) − q̂∥ ∶ i=1 is called the minimal clairvoyant ambiguous (MCA) distortion, where each T = {Tx }x∈X is partition of Y, i.e. Tx ∩Tx′ = ∅ for all x ≠ x′ and ⋃x Tx = Y. It is the smallest expected distortion obtainable if the original xn is distributed according to p⊗n and the dependent component system (p, W ) is known to the decoder. We call a minimizing partition T MCA decoder. For each collection W = (W1 , . . . , WK ) of channels we define The reason that this simple algorithm yields correct solutions is based on the following observation, which is a reformulation of [17, Thm. 1] and stated without proof. Theorem 10. Let K ≥ 3, p ∈ P(X ) and let W1 , . . . , WK ∈ C0 (X , X ) be invertible channels. There are exactly ∣X ∣! tuples (V1 , . . . , VK , p′ ) of channels V1 , . . . , VK ∈ C(X , X ) and probability distributions p′ ∈ P(X ) satisfying the equation K dMCA (p, W) ∶= dMCA (p, (⊗ Wj ) ○ ∆) , K (12) K ′ ∑ ∏ wi (yi ∣x)p(x) = ∑ ∏ vi (yi ∣x)p (x) j=1 x∈X i=1 with the “diagonal” channel ∆ ∈ C(X , X K ). (15) x∈X i=1 simultaneously for all y1 , . . . , yK ∈ X . These are as follows: For every permutations τ ∶ X → X , the matrices Vi = Wi ○ τ −1 and the probability distribution p′ = τ (p) solve (15), and these are the only solutions. As a consequence, the function Θ ∶ P(X ) × W(X , X )K → P(X K ) defined by IV. M AIN RESULTS In this section, we state our main result regarding universal denoising and present two algorithms. The first one is universal, while the second one is even uniformly universal, but unfortunately only if restricted to dependent component systems with binary symmetric channels. K L i=1 i=1 Θ[(p, T1 , . . . , TK )] ∶= (⊗ Ti ) (∑ p(i)δi⊗K ) (16) is invertible if restricted to the correct subset: there exists a function Θ′ ∶ ran(Θ) → P ↓ (X ) such that A. The universal discrete denoising algorithm Θ′ (Θ[(p, T1 , . . . , TK )]) = p, Theorem 7. Let K ≥ 3 and d be a distortion measure on X × X . Let W = (W1 , . . . , WK ) ∈ C(X , X )K be such that their associated stochastic matrices are invertible, and p ∈ P(X ). Then, the resolution of W with respect to p is R(p, W) = dMCA (p, W). r ∈ P ↓ (X ), } . (14) Vi ∈ C(X , X ) (17) for all p ∈ P ↓ (X ) ∩ P> (X ) and (T1 , . . . , TK ) ∈ C0 (X , X )K . B. A uniformly universal algorithm (13) An important distinction that we made in the definitions is that between a universal and a uniformly universal algorithm. We will now explain this difference based on the restricted class of binary symmetric channels (BSCs). 
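Stepping back to Definition 6 for a moment: for a fixed relabelling τ the partition T can be chosen one output letter at a time, so decoding every output letter y to the reconstruction x′ that minimises Σ_x p(x) d(x, x′) w(y|x) realises the minimum. Under that reading, the MCA decoder used in steps 4 and 5 of Algorithm 1 can be sketched as follows (Python/NumPy; the matrix conventions and the toy example are ours).

import numpy as np

def mca_decoder(p, W, d):
    # Minimal clairvoyant ambiguous decoder and distortion (cf. Definition 6).
    # p : length-L vector, p[x] = probability of input letter x
    # W : L x M matrix, W[x, y] = w(y|x)
    # d : L x L matrix, d[x, a] = distortion between x and a
    # cost[a, y] = expected distortion contributed by output y when decoded to a
    cost = np.einsum('x,xa,xy->ay', p, d, W)
    decode = cost.argmin(axis=0)        # decode[y] = reconstructed letter for output y
    d_mca = cost.min(axis=0).sum()
    return decode, d_mca

# Toy example: a binary source seen through one clean and one noisy BSC, viewed as a
# single channel X -> {0,1}^2 whose four output letters are ordered 00, 01, 10, 11.
p = np.array([0.6, 0.4])
b1, b2 = 0.95, 0.55
W = np.array([[b1 * b2, b1 * (1 - b2), (1 - b1) * b2, (1 - b1) * (1 - b2)],
              [(1 - b1) * (1 - b2), (1 - b1) * b2, b1 * (1 - b2), b1 * b2]])
d = 1 - np.eye(2)                       # Hamming distortion
print(mca_decoder(p, W, d))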
Then, we present an algorithm that is uniformly universal if restricted to dependent component s ystems allowing only BSCs. For this class of channels we know the following from [28, Theorem 3]: There exists an algorithm which is universal on the set C0 (X , X )K . Below, this universal discrete denoising algorithm (UDDA) is described. Algorithm 1 (UDDA: Universal discrete denoising algorithm). The following steps produce the desired estimate x̂n for xn : 1) Input is (yij )n,K i,j=1 . 2) Calculate q̂(⋅) ∶= n1 N (⋅∣(y1 , . . . , yK )) ∈ P(X K ). 3) Run the DCA algorithm [17] (see Definition 9 below) with input q̂ to find stochastic matrices V1 , . . . , VK ∈ C(X , X ) and r ∈ P(X ). 4) Find the minimal clairvoyant ambiguous recovery T for (r, V1 , . . . , VK ). Theorem 11. Let X = Y1 = Y2 {0, 1} and K = 2. Let A, C ∈ C0 (X , Y1 ) and B, D ∈ C0 (X , Y2 )be BSCs and p, r ∈ P> (X ). The four equations ∑ a(y1 ∣x)b(y2 ∣x)p(x) = ∑ c(y1 ∣x)d(y2 ∣x)r(x) x∈X 4 x∈X (18) (one for each choice of y1 , y2 ∈ {0, 1}) have exactly the following two families of solutions: either A = C, B = D and p = r, or A = C ○ F, B = D ○ F, p = F(r). When treating BSCs, it turns out that a universal algorithm (called BUDDA in the following) may be defined based on analytical formulas rather than outcomes of an optimisation procedure such as in the definition of UDDA. The core of this approach is the content of the following theorem. Since our goal now is to understand the intricacies of uniform universality, let us for the moment assume that UDDA is uniformly universal. Taking into account the content of Theorem 11, this might lead one to conclude that it is already universal for the set of invertible BSCs and K = 2. That this is not the case can be seen as follows: Let p ∈ X be such that p(0) = p(1) = 1/2; this is the input distribution explicitly excluded in Theorem 11. Then (see the appendix for a proof) we get the output distribution defined by q(x1 , x2 ) = p(x1 )c(x2 ∣x) ∀x1 , x2 ∈ {0, 1}, Theorem 13. Let X = {0, 1} and K ≥ 2. There exists a function fK ∶ P(X K ) → [0, 1] such that the following holds: If p ∈ P> (X ), B1 , . . . , BK ∈ C(X , X ) are binary symmetric channels and q ∈ P(X K ) is the output distribution of the dependent component system (p, B1 , . . . , BK ) and q1 , . . . , qK ∈ P(X ) denote the marginals of q on the respective alphabets, then (19) p(0) = where the number c(0∣0) is given by c(0∣0) = 1 − a(0∣0) − b(0∣0) + 2 ⋅ a(0∣0) ⋅ b(0∣0). Obviously, there is a large number of channels A and B leading to the same channel C, so that UDDA cannot choose the correct one uniquely. This is also reflected in Fig. 1. δ11 The explicit nature of Theorem 13 (the definition of fK is the content of its proof) allows us to develop error bounds in the determination of p, when instead of the correct q an estimate q̂ is used for the calculation. This in turn makes it possible to prove Theorem 12 based on the following definition of BUDDA. δ01 Algorithm 2. BSC-UDDA To any pair r, s ∈ P({0, 1} such that r↓ (0) < s↓ (0), we let b(r, s) ∶= (s + r − 1)/(2s − 1) be the parameter such that a BSC B with b(0∣0) = b(r, s) satisfies B(s) = r. The following steps produce the desired estimate x̂n for xn when K ≥ 3: δ11 δ10 δ10 1) 2) 3) 4) δ01 Fig. 1: Output distributions q parameterised by the BSC parameters b1 (0∣0) and b2 (0∣0) for p(1) = 0.45 (left) and p(1) = 1/2 (right). The geometric shape of the set of output distributions already hints at the fact that it is not possible to retrieve b1 (0∣0) and b2 (0∣0) from q. Cf. 
[17, Fig. 3]. 5) 6) Fortunately, it is enough to go to K = 3 and the following still holds. Theorem 12. Let K ≥ 3 and d be a distortion measure on X × X . Let B = (B1 , . . . , BK ) be taken from the set {B ∶ B is a BSC with b(0∣0) ≠ 1/2}K , and p ∈ P(X ). The uniform resolution of W with respect to p is R(p, W) = dMCA (p, W). (21) The mapping q ↦ p(0) is henceforth denoted EK . δ00 δ00 1/K ∣ ∏K 1 i=1 (1 − 2 ⋅ qi (0))∣ (1 + ). 2 fK (q) 7) 8) (20) 9) There is an algorithm that is uniformly universal on the set of invertible binary symmetric channels {B ∶ B BSC with b(0∣0) ≠ 12 }. Input is (yij )n,K i,j=1 . Calculate q̂(⋅) ∶= n1 N (⋅∣(y1 , . . . , yK )) ∈ P(X K ). Calculate p̂ = EK (q̂). Let i ∈ {0, 1} be the number such that N (y1n ∣i) > N (y1n ∣i ⊕ 1) where ⊕ is addition modulo two. Without loss of generality we assume in the following description that i = 0. Calculate the conditional distribution q̂(⋅∣y1 = 0) ∈ P(X K−1 ). If ∥p̂−π∥ > mini ∥q̂i (⋅∣y1 = 0)−π∥, set b̂i (0∣0) ∶= b(q̂i , p̂). If ∥p̂ − π∥ < mini ∥q̂i (⋅∣y1 = 0) − π∥, calculate p̂1 ∶= EK−1 (q̂(⋅∣y1 = 0) and p̂2 ∶= EK−1 (q̂(⋅∣y2 = 0). Then, set b̂i (0∣0) ∶= b(q̂i (⋅∣y1 = 0), p̂1 ) for i = 2, . . . , K and b̂1 (0∣0) = b(q̂i (⋅∣y2 = 0), p̂2 . Find the minimal clairvoyant ambiguous recovery T for (p̂, B̂1 , . . . , B̂K ). For each i ∈ [n], apply the MCA decoder to the sequence (yi1 , . . . , yiK ). This generates a sequence x̂n = x̂1 . . . x̂n . Output x̂n . As explained earlier, the fifth and sixth step in the algorithm allows one to reliably deal with those situations where p ≈ π. Note that either of the two procedures may fail: the one in step five if p ≈ π, the one in step six if p(0) = 1 − a1 . Luckily however, in the limit of n → ∞ it is not possible that both conditions are fulfilled. The fact that one output of the system is used to create a bias explains why at least three copies are needed. The intuition behind the proof of this result is inspired by the following observation: If K = 3 then the distribution of X given X1 is not uniform anymore. Thus, conditioned on the outcome of the first observed variable being e.g. a 0, we can use the content of Theorem 11 to determine b2 (0∣0), b3 (0∣0) and B1 (p) (note that B1 (p) = p holds). 5 X Y1 Y3 Y2 Fig. 5: K = 7. Left: denoised image; right: denoised and inverted for easier comparison with the original. Fig. 2: Dependent component system with three outputs. Solid arrows denote information flow in the original model. Dashed arrows denote information flow when instead of the ”hidden variable“ X the first visible alphabet Y1 is treated as input to a dependent component system which then has only two outputs, Y2 , Y3 . Fig. 6: K = 10. Left: denoised image; right: denoised and inverted for easier comparison with the original. This is especially important in the light of findings such as [22] which demonstrate the instability of contemporary algorithms for fitting of tensor models. VI. P ROOFS Proof of Theorem 7. Let the empirical distribution of the output pixels produced in step 2 of the algorithm be q̂ ∈ P(X K ) and ε > 0. Application of the Chernoff-Hoeffding bound [29] yields V. S IMULATION R ESULTS Here we present simulations with a sample picture (a part of Leonardo da Vinci’s Mona Lisa, Fig. 3), distorted by ten different BSCs. K P (∥q̂ − (⊗ Wj ) p(K) ∥ ≤ ε) ≥ 1 − 2L 2−2nε . 
K 2 (22) j=1 This implies that step 2 of the algorithm produces a distribution which is, with arbitrarily high accuracy and probability converging to 1 as n tends to infinity and independently of the (K) type of xn , close to the distribution (⊗K . Based j=1 Wj ) p on this estimate, we know by simple application of the triangle inequality that the function E ∶ P(X K ) → P ↓ (X K ) defined by Fig. 3: The original image, a 200 × 200 pixels black-and-white picture of a part of Leonardo da Vinci’s Mona Lisa. The parameters of the ten BSCs are chosen as K E(q) ∶= (⊗ Vj ) r(K) , (0.71, 0.32, 0.41, 0.49, 0.48, 0.82, 0.81, 0.51, 0.84, 0.17). (23) j=1 The distorted images are shown in Fig. 4. where (note that this next step is the implementation of the DCA algorithm) K (r, V) = arg min ∥(⊗ Vj ) r↓(K) − q∥ , (r ↓ ,V) (24) j=1 satisfies K ⎛ ⎞ K 2 P ∥E(q̂) − (⊗ Wj ) p(K) ∥ ≤ 2ε ≥ 1 − 2L 2−2nε . (25) ⎝ ⎠ j=1 1 As a consequence, we get Fig. 4: The ten distorted versions of the original. K ⎛ ⎞ P ∥E(q̂) − (⊗ Wj ) p(K) ∥ ≤ n−ε/2 Ð→ 1 ⎝ ⎠ j=1 1 Using K = 7 if the noisy copies, the channel parameters estimated by BUDDA are (26) as n → ∞. Take the function Θ′′ from Ref. [17, Theorem 1], which for r ∈ P> (X ) and channels V1 , . . . , VK with invertible transition probability matrices is defined as a one-sided inverse to Θ: (0.28, 0.67, 0.59, 0.51, 0.52, 0.17, 0.18), and the algorithm returns the denoised image shown in Fig. 5. Now we proceed to use all our K = 10 copies. Then, BUDDA produces the following estimates for the channels: K Θ′′ ((⊗ Vj ) r(K) ) ∶= (r↓ , V ○ τ ), (27) j=1 (0.29, 0.68, 0.59, 0.51, 0.52, 0.17, 0.19, 0.49, 0.16, 0.83), with τ ∈ SX being a permutation with the property r(x) = r↓ (τ (x)) for all x ∈ X , and r↓ ∈ P ↓ (X ). According to Lemma and the denoised image shown in Fig. 6. 6 15 below, Θ′′ is continuous if restricted to the set P> (X ). Based on the above observation, see in particular eq. (26), Θ′′ satisfies As a consequence, the restriction of Θ to P ↓ (X ) × C(X , X )K , which is invertible according to [17, Theorem 1], is continuous as well. P(∥Θ′′ ○ E(q̂) − (p, W)∥DCS ≤ δDCS (p, W, εn )) Ð→ 1 Lemma 15. There exists a function δDCS ∶ P ↓ (X ) × C(X , X )K × R≥0 → R≥0 , monotonic and continuous in the real argument and with δDCS (r, W, 0) = 0, such that for every two dependent component systems (s, W) and (r, V) with r, s ∈ P ↓ (X ), as n Ð→ ∞. Thus, with probability approaching 1 when n tends to infinity and the image types N (⋅∣xn ) are in a small enough vicinity of some distribution s′ , we know that our estimate of the system (p, W) will be accurate up to an unknown permutation, and small deviations incurred by the properties of δDCS (p, W, ε). According to Lemma 16, the minimum clairvoyant ambiguous recovery computed based on the estimated system Θ′′ ○ E(q̂) is asymptotically optimal for the true system (p, W). Using the Chernoff-Hoeffding inequality [29], we can prove that application of the MCA decoder separately to each letter yi1 , . . . , yiK yields a sequence x̂n with the property n−1 ∑ni=1 d(τ (xi ), x̂i ) < D with high probability for every D > dM CA (p, W). Thus a look at (2) finishes the proof. (30) ∥(s, W) − (r, V)∥DCS ≤ δDCS (r, V, ε). (31) min ∑ p′ (x)d(x, x′ )c′ (Tτ (x′ ) ∣x) (28) (32) τ ∈SX x,x′ ≥ dMCA (p, C) − 2∣X ∣∣Y∣ max d(x, y). x,y Proof. We use a telescope sum identity: for arbitrary matrices A1 , . . . , AK+1 and B1 , . . . , BK+1 we have (33) Proof. The prerequisites of the lemma imply ∣p(x)w(y∣x) − p′ (x)w′ (y∣x)∣ ≤ ε for all x, y ∈ X , Y. Thus A1 ⋅ . 
. . ⋅ AK+1 − B1 ⋅ . . . ⋅ BK+1 min ∑ p′ (x)d(x, x′ )c′ (Tτ (x′ ) ∣x) (29) (34) τ ∈SX x,x′ ≥ min ∑ p(x)d(x, x′ )c(Tτ (x′ ) ∣x) i=1 τ ∈SX x,x′ We apply it to the channel matrices and input distributions as follows. Let Ai ∶= 1⊗(i−1) ⊗Vi ⊗ 1⊗(n−i) , Bi ∶= 1⊗(i−1) ⊗Wi ⊗ 1⊗(n−i) for i ∈ [K]; furthermore AK+1 ∶= r(K) , BK+1 ∶= s(K) , where we interpret probability distributions on an alphabet as channels from a set with one element. Now we have − 2∣X × Y∣ max d(x, y), x,y (35) concluding the proof. Proof of Theorem 13. Let X = {0, 1}, B1 , . . . , BK ∈ C0 (X , X ) be BSCs and p ∈ P(X ). Let qi ∶= Bi (p). Then the following equation holds true: (K) (K) ∥(⊗K ∥ − (⊗K i=1 Vi )r i=1 Wi )s = ∥A1 ⋅ . . . ⋅ AK+1 − B1 ⋅ . . . ⋅ BK+1 ∥ 1/K K+1 p(0) = = ∥ ∑ A1 ⋯Ai−1 (Ai − Bi )Bi+1 ⋯BK+1 ∥ i=1 K+1 j=1 Lemma 16. Let p, p′ ∈ P(X ) and C, C ′ ∈ C(X , Y). Let d be any distortion on X × X and T , T ′ be the MCA decoders for (p, C) and (p′ , C ′ ), respectively. If ∥p − p′ ∥ ≤ ε and ∥C − C ′ ∥F B ≤ ε, then the minimal clairvoyant ambiguous distortions satisfy That is, Θ is continuous with respect to ∥ ⋅ ∥DCS . = ∑ A1 ⋅ . . . ⋅ Ai−1 ⋅ (Ai − Bi )Bi+1 ⋅ . . . ⋅ BK+1 . j=1 Proof. Under the premise of the lemma, the function Θ is both continuous on a connected domain (as a consequence of Lemma 14) and invertible (as a consequence of [17, Thm. 1]). Thus, the inverse of Θ is continuous, implying the claim. (K) Lemma 14. Set Θ(s, W) ∶= (⊗K . Then for every j=1 Wj ) s two dependent component systems (r, V) and (s, W), we have K+1 K implies We now state and prove the lemmas required above. ∥Θ(r, V) − Θ(s, W)∥ ≤ ∥(r, V) − (s, W)∥DCS . K ∥(⊗ Wj ) s(K) − (⊗ Vj ) r(K) ∥ ≤ ε FB K 1⎛ 1 − 2 ⋅ qi (0) 1 + ∏∣ ∣ 2⎝ 1 − 2 ⋅ bi i=1 ⎞ , ⎠ (36) where bi ∶= bi (0∣0), i = 1, . . . , K. It turns out that, for even K, the expression in the denominator can be derived as a function of q = (B1 ⊗ . . . ⊗ BK )p(K) in a very natural way. Set ≤ ∑ ∥A1 ⋯Ai−1 (Ai − Bi )Bi+1 ⋯BK+1 ∥F B i=1 K+1 = ∑ ∥V1 ⊗ ⋯ ⊗ Vi−1 ⊗ (Vi − Wi ) ⊗ Wi+1 ⊗ ⋯ ⊗ WK+1 ∥F B fK (q) ∶= ∑ sgn(τ )q(τ1 (1), . . . , τK (1)), i=1 K+1 (37) τ ∈S2K = ∑ ∥Vi − Wi ∥F B where S2K is the K-fold cartesian product of the set S2 of permutations on {0, 1} with itself. Note that, given empirical data as a string y, fK ( n1 N (⋅∣y)) can be calculated for even K as i=1 K = ∑∥Vi − Wi ∥F B + ∥r(K) − s(K) ∥ i=1 = ∥(r, V) − (s, W)∥DCS , n K fK ( n1 N (⋅∣y)) = ∑(−1)∑j=1 yij . concluding the proof. i=1 7 (38) xn ∈ {0, 1}n . This implies that probabilities are calculated with respect to the distribution on the K-fold product {0, 1}K that is defined as the marginal on Y K of This formula may be particularly useful for calculating fK in an online fashion, while data is being retrieved from a source. Going further, we see that it holds (with p ∶= p(0) and p′ ∶= p(1): K 1 P(Y = y1 , . . . , yK , X = x) = ∏ bj (yj ∣x) N (x∣xn ). n j=1 K fK (q) = ∑ sgn(τ )p ⋅ ∏ bi (τi (0)∣0) τ ∈S2K i=1 K Since it simplifies notation, all following steps are however performed as if p ∈ P({0, 1}) was an arbitrary (meaning that it is not necessarily an empirical distribution) parameter of the system. To every such parameter p, there is a set of associated distributions pX (⋅∣Yi = 0), i = 1, . . . , K, which are calculated from (50) as pX (X = x∣Yi = 0) = P(X = x, Yi = 0) ⋅ P(Yi = 0)−1 if P(Yi = 0) > 0, and pX (X = x∣Yi = 0) = δ1 (x) otherwise. Since BUDDA always assumes that N (0∣xn ) ≥ N (1∣xn ), this latter choice is justified. By definition of EK in Theorem 13, it is clear that EK is continuous. 
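The identity behind eq. (37), namely that for even K the signed sum f_K(q) collapses to the product ∏_i(1 − 2b_i) (see eq. (48) below) and in particular does not depend on p, is easy to check numerically. The Python sketch below (helper names are ours) does so for K = 2.

import numpy as np
from itertools import product

def output_law(p, bs):
    # Joint law q(y_1, ..., y_K) of K BSC observations of one p-distributed bit;
    # bs[i] = b_i(0|0) = probability that channel i transmits the bit correctly.
    K = len(bs)
    q = np.zeros((2,) * K)
    for ys in product([0, 1], repeat=K):
        q[ys] = sum(p[x] * np.prod([b if y == x else 1 - b for b, y in zip(bs, ys)])
                    for x in (0, 1))
    return q

def f_even(q):
    # f_K(q) for even K, written as the parity-signed sum of eqs. (37)/(38).
    K = q.ndim
    return sum((-1) ** sum(ys) * q[ys] for ys in product([0, 1], repeat=K))

p, bs = [0.37, 0.63], [0.8, 0.3]
q = output_law(p, bs)
print(f_even(q), np.prod([1 - 2 * b for b in bs]))   # both -0.24, independent of p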
Thus, from the Chernoff-Hoeffding bound (see e.g. [30, Thm. 1.1]) we have that for every ε > 0 there is an N1 = N1 (ε, B1 , . . . , BK ) ∈ N such that, if n ≥ N1 , the probability of the set S0 ∶= {y n ∶ ∥EK (q) − p↓ ∥1 ≤ ε} satisfies i=1 P(S0 ) ≥ 1 − ν(n) K ′ + ∑ sgn(τ )p ⋅ ∏ bi (τi (0)∣1) τ ∈S2K (39) i=1 K = p ∑ sgn(τ ) ∏ bi (τi (0)∣0) τ ∈S2K i=1 K + p′ ∑ sgn(τ ) ∏ bi (τi (0)∣1) τ ∈S2K (40) i=1 K = p ∑ sgn(τ ) ∏ bi (τi (0)∣0) τ ∈S2K i=1 K + p′ ∑ sgn(τ ○ F⊗4 ) ∏ bi (τi ○ F(0)∣1) τ ∈S2K (41) i=1 = p ∑ sgn(τ ) ∏ bi (τi (0)∣0) τ ∈S2K τ ∈S2K (42) i=1 K = p ∑ sgn(τ ) ∏ bi (τi (0)∣0) i=1 K + p′ ∑ sgn(τ ) ∏ bi (τi (0)∣0) τ ∈S2K (43) i=1 Si ∶= {y n ∶ ∥EK−1 (q(⋅∣yi = 0)) − pX (⋅∣yi = 0)∥1 ≤ ε}, K = ∑ sgn(τ ) ∏ bi (τi (0)∣0), τ ∈S2K (44) P(∩K i=0 Si ) ≥ 1 − (K + 1)ν(n). so it does not depend on the distribution p at all. Going further, we see that (45) i=1 K = ∏ ∑ sgn(τ )ai (τ (0)∣0) (46) i=1 τ ∈S2 K = ∏(ai − 1 + ai ) (47) = ∏(1 − 2ai ). (48) i=1 K i=1 For uneven K, K ≥ 3, the above calculations demonstrate one can define fK by K fK (q) ∶= (∏ fK−1 (q≠i )) (53) It is critical to note here that we use one and the same function ν over and over again. Theoretically, we would need to update the constant c at each use. We do however step away from such explicit calculation, since any positive c′ < c that would arise e.g. from setting c′ ∶= min{c1 , . . . , cK } is sufficient for our proof. Thus we use one and the same constant repeatedly in what follows. As an outcome of the calculation of p via application of EK to q, one is in principle able to obtain an estimate of the channels B1 , . . . , BK . However, the precision of this estimate depends critically on the distance ∥p − π∥ and thus any algorithm using only such estimate cannot be uniformly universal because the equality p = π may hold. Define K τ ∈S2K (52) we have then i=1 fK (q) = ∑ sgn(τ ) ∏ ai (τi (0)∣0) (51) for some function ν ∶ N → (0, 1) of the form ν(n) = 2−nc for some constant c > 0 depending only on the channels B1 , . . . , BK . Obviously, ν converges to 0 when n goes to infinity. The fact that this convergence is exponentially fast makes it possible to prove that a larger number of events has asymptotically the probability one by repeated union bounds. Setting K + p′ ∑ sgn(τ ) ∏ bi (τi (1)∣1) τ ∈S2K (50) 1/K E ∶= {y n ∶ ∥EK (q̂) − π∥ > min ∥EK−1 (q̂(⋅∣yi = 0)) − π∥} i (49) (54) i=1 where q≠i is the marginal of q on all but the i-th alphabet. This proves the Theorem. and F ∶= {y n ∶ ∥EK (q̂) − π∥ ≤ min ∥EK−1 (q̂(⋅∣yi = 0)) − π∥}. Proof of Theorem 12. In the remainder, it is assumed that p is the (empirical) distribution of the sought after data string i (55) 8 Observe that the distribution of X given X1 = 0 satisfies P(X = 0∣Y1 = 0) = p(0) ⋅ b1 . p(0) ⋅ b1 + (1 − p(0))(1 − b1 ) A PPENDIX A: A DDITIONAL P ROOFS Proof of eq. (19). Let A, B ∈ C0 ({0, 1}, {0, 1}) be BSCs and p = π. Then the following holds true: (56) 1 1 (A ⊗ B)p(2) = (A ⊗ B) ∑ e⊗2 i 2 i=0 The left hand side in (56) takes on the value b1 ≠ 1/2 at p(0) = 1/2, and the value 1/2 at p(0) = 1 − b1 . Since by assumption we have that bi ≠ 1/2 for all i = 1, . . . , K it is clear that, asymptotically, either S1 or S2 has large probability. Thus, depending on the distribution p, we have either P(E) ≥ 1 − ν(n) (57) 1 1 = (A ○ B T ⊗ 1) ∑ e⊗2 i 2 i=0 (60) 1 1 = (C ⊗ 1) ∑ e⊗2 i 2 i=0 (61) where C = A ○ B T . 
Since B T = B for a BSC, this implies C = A ○ B, and especially that or else c = ab + (1 − a)(1 − b) P(F ) ≥ 1 − ν(n), (59) = 2ab + 1 − a − b, (58) (62) (63) where we used the abbreviation c ∶= c(0∣0), b ∶= b(0∣0) and a ∶= a(0∣0). but always P(E ∩ F ) < ν(n). This reasoning forms the basis of step 5) of BUDDA. If y n ∈ S2 , the proof of Theorem 7 shows that BUDDA succeeds with probability lower bounded ′ by 1 − 2−nc for some c′ > 0 depending only on the channels B1 , . . . , BK . If y n ∈ S3 , then one introduces a bias by viewing the outcome Y1 as the source of the system, and then conditioning on the outcomes of Y1 : Delete all rows in y n = (yi,j )K,n i,j=1 with indices j such that y1,j = 1. The resulting empirical distribution q̂(⋅∣y1 = 0) serves as input to EK−1 . It produces a dependent component system (b̂2 , . . . , b̂K , p̂0 ). Performing the same step with k set to two yields a second such system with an additional value for b̂1 . Thus steps 1) to 6) of BUDDA succeed with probability lower bounded by 1 − (K + 4)ν(n), independently of the sequence xn . The remaining steps of the proof are identical to those in the proof of Theorem 7. They are based on further application of the union bound and the fact that limn→∞ M ⋅ 2−nc = 0 for every M ∈ N. Lemma 18. Let W ∈ C(X , Y). Given x ∈ X , let Yx denote the random variable with distribution w(⋅∣x). Let T be an estimator achieving Λ = ∑x p(x)w(Tx ∣x). If xn has type p, then for every possible input string xn the letter-by-letter application of T to the output sequences of W ⊗n given input xn has the following property: P (∣ 1 n −2ε2 n . ∑ δ(xi , T (Yxi )) − Λ∣ ≥ ε) ≤ 2 ⋅ 2 n i=1 (64) Proof. The expectation of the function Φ defined by Φ(y n ) ∶= n 1 δ(xi , T (Yxi )) is calculated as n ∑i=1 EΦ = ∑ p(x) ∑ δ(x, y)w(T (y)∣x) = Λ. x (65) y Thus by Hoeffding’s inequality [30, Thm. 1.1] or alternatively McDiarmid’s inequality [31], Remark 17. A more optimized choice of estimator arises when one picks instead of the first and the second system those two systems for which ∥EK−1 (q̂(⋅∣yi = 0))−π∥ takes on the largest and second largest value, respectively. P (∣ 1 n −2ε2 n . ∑ δ(xi , T (Yxi )) − Λ∣ ≥ ε) ≤ 2 ⋅ 2 n i=1 (66) Remark 19. As a consequence of this lemma, an algorithm that knows p and W but not xn can calculate the MCA decoder T corresponding to p and W , after which it applies the MCA decoder in order to obtain the best possible estimate on xn . VII. C ONCLUSIONS We have laid the information theoretic foundations of universal discrete denoising and demonstrated the in-principle feasibility of our new algorithm UDDA based on discrete dependent component analysis. We proved that the resolution equals the minimal clairvoyant ambiguous distortion. We conjecture that, in fact, the same distortion can be achieved uniformly in the input distribution, i.e. that the equality R(p, W) = dM CA (p, W) holds, but have only been able to prove this for a restricted class of models and with some slight modifications to UDDA. Acknowledgments. The present research was funded by the German Research Council (DFG), grant no. 1129/1-1, the BMWi and ESF, grant 03EFHSN102 (JN), the ERC Advanced Grant IRQUAT, the Spanish MINECO, projects FIS201340627-P and FIS2016-86681-P, with the support of FEDER funds, and the Generalitat de Catalunya, CIRIT project 2014SGR-966 (AW and JN). 9 R EFERENCES [15] A. Buades, B. Coll, and J. M. Morel, “A non-local algorithm for image denoising,” in Proc. 
2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 2, June 2005, pp. 60–65. [16] ——, “Image denoising methods. a new nonlocal principle,” SIAM Review, vol. 52, no. 1, pp. 113–147, Feb 2010. [17] J. Nötzel and W. Swetly, “Deducing truth from correlation,” IEEE Trans. Inf. Theory, vol. 62, no. 12, pp. 7505–7517, Dec 2016. [18] J. B. Kruskal, “More factors than subjects, tests and treatments: an indeterminacy theorem for canonical decomposition and individual differences scaling,” Psychometrika, vol. 41, no. 3, pp. 281–293, Sep 1976. [19] E. Acar and B. Yener, “Unsupervised multiway data analysis: A literature survey,” IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 1, pp. 6–20, Jan 2009. [20] A. C. Rencher and W. F. Christensen, Methods of Multivariate Analysis. Wiley Series in Probability and Statistics, 2012. [21] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, “Tensor decompositions for learning latent variable models,” J. Machine Learning Res., vol. 15, no. 12, pp. 2773–2832, 2014. [22] G. Tomasi and R. Bro, “A comparison of algorithms for fitting the parafac model,” Computational Statistics & Data Analysis, vol. 50, pp. 1700–1734, Apr 2006. [23] J. Kossaifi, Y. Panagakis, and M. Pantic, “Tensorly: Tensor learning in python,” ArXiv e-prints, arXiv:1210.7559, Oct 2016. [24] V. Prabhakaran, D. Tse, and K. Ramachandran, “Rate region of the quadratic gaussian CEO problem,” in International Symposium onInformation Theory, 2004. ISIT 2004. Proceedings., June 2004, pp. 119–. [25] T. Berger and Z. Zhang, “On the CEO problem,” in Proceedings of 1994 IEEE International Symposium on Information Theory, Jun 1994, pp. 201–. [26] A. J. G. A. Kipnis, S. Rini, “Compress and estimate in multiterminal source coding,” ArXiv e-prints, arXiv:1602.02201, 2017. [27] G. M. Gemelos, S. Sigurjonsson, and T. Weissman, “Universal Minimax Discrete Denoising Under Channel Uncertainty,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3476–3497, Aug 2006. [28] J. Nötzel and C. Arendt, “Using dependent component analysis for blind channel estimation in distributed antenna systems,” in 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Dec 2016, pp. 1116–1121. [29] W. Hoeffding, “Probability Inequalities for Sums of Bounded Random Variables,” J. Amer. Stat. Assoc., vol. 58, no. 301, pp. 13–30, Mar 1963. [30] D. D. Dubhashi and A. Panconesi, Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2012. [31] C. McDiarmid, On the method of bounded differences, ser. London Mathematical Society Lecture Note Series. Cambridge University Press, 1989, pp. 148–188. [1] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdú, and M. J. Weinberger, “Universal discrete denoising: known channel,” IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 5–28, Jan 2005. [2] E. Ordentlich, G. Seroussi, and M. Weinberger, “Modeling enhancements in the DUDE framework for grayscale image denoising,” in Proc. 2010 IEEE Information Theory Workshop (ITW 2010, Cairo), Jan 2010, pp. 1–5. [3] G. Gemelos, S. Sigurjonsson, and T. Weissman, “Universal minimax discrete denoising under channel uncertainty,” in Proc. 2004 International Symposium on Information Theory (ISIT 2004), June 2004, p. 199. [4] G. M. Gemelos, S. Sigurjonsson, and T. Weissman, “Algorithms for discrete denoising under channel uncertainty,” IEEE Trans. Signal Proc., vol. 54, no. 6, pp. 2263–2276, June 2006. [5] G. Motta, E. Ordentlich, I. Ramirez, G. 
Seroussi, and M. J. Weinberger, “The iDUDE Framework for Grayscale Image Denoising,” IEEE Trans. Image Proc., vol. 20, no. 1, pp. 1–21, Jan 2011. [6] ——, “The DUDE framework for continuous tone image denoising,” in Proc. 2005 IEEE International Conference on Image Processing, vol. 3, Sept 2005, pp. III–345–8. [7] E. Ordentlich, K. Viswanathan, and M. J. Weinberger, “Twice-universal denoising,” IEEE Trans. Inf. Theory, vol. 59, no. 1, pp. 526–545, Jan 2013. [8] C. D. Giurcaneanu and B. Yu, “Efficient algorithms for discrete universal denoising for channels with memory,” in Proc. 2005 International Symposium on Information Theory (ISIT 2005), Sept 2005, pp. 1275– 1279. [9] K. Viswanathan and E. Ordentlich, “Lower limits of discrete universal denoising,” in Proc. 2006 IEEE International Symposium on Information Theory (ISIT 2006), July 2006, pp. 2363–2367. [10] Y. Cao, Y. Luo, and S. Yang, “Image Denoising with Gaussian Mixture Model,” in Proc. 2008 Congress on Image and Signal Processing (CISP ’08), vol. 3, May 2008, pp. 339–343. [11] P. Milanfar, “A tour of modern image filtering: New insights and methods, both practical and theoretical,” IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 106–128, Jan 2013. [12] Y. Shih, V. Kwatra, T. Chinen, H. Fang, and S. Ioffe, “Joint noise level estimation from personal photo collections,” in Proc. 2013 IEEE International Conference on Computer Vision, Dec 2013, pp. 2896– 2903. [13] J. Mairal, M. Elad, and G. Sapiro, “Sparse representation for color image restoration,” IEEE Trans. Image Proc., vol. 17, no. 1, pp. 53–69, Jan 2008. [14] S. Jalali and H. V. Poor, “Universal compressed sensing,” in Proc. 2016 International Symposium on Information Theory (ISIT 2016), July 2016, pp. 2369–2373. 10
EVOLUTIONARY DEMOGRAPHIC ALGORITHMS MARCO AR ERRA, PEDRO MM MITRA, AGOSTINHO C ROSA Laseeb-ISR, DEEC-IST Av. Rovisco País, 1, Torre Norte 6.21.1049, Portugal [email protected] [email protected] [email protected] KEYWORDS Distributed Evolutionary Algorithm, Demographics Algorithm, Island, Migration. Evolutionary ABSTRACT Most of the problems in genetic algorithms are very complex and demand a large amount of resources that current technology can not offer. Our purpose was to develop a Java-JINI distributed library that implements Genetic Algorithms with sub-populations (coarse grain) and a graphical interface in order to configure and follow the evolution of the search. The sub-populations are simulated/evaluated in personal computers connected trough a network, keeping in mind different models of sub-populations, migration policies and network topologies. We show that this model delays the convergence of the population keeping a higher level of genetic diversity and allows a much greater number of evaluations since they are distributed among several computers compared with the traditional Genetic Algorithms. INTRODUCTION The distributed implementation of the evolutionary genetic algorithm has two major advantages: the increase of the speed of execution from the direct parallelization and the diversity of subpopulations effect that tends to avoid the premature convergence effect (this last one, a problem with a difficult solution in almost all evolutionary algorithms). The Laboratório de Sistemas Evolutivos e Engenharia Biomedica (Laseeb), in Instituto Superior de Robotica (ISR), where the project was developed already had a library that implements the most common variants of evolutionary algorithms and a master-slave distributed model called JDEAL (Java Distributed Evolutionary Algorithm Library). Our project was built using this library, extending and complementing it with new functionalities. A too fast convergence would lead quickly to a solution but probably neither a good nor an efficient one, therefore the need of keeping the diversity of individuals in a population. but completely independent form each other, so that an island (algorithm) may exchange individuals with other islands in order to maintain diversity. These islands communicate between them through a network. The role of each island, to who can or can not communicate is dependent from the predefined topology. When, how and who may or can migrate define a migration policy, which can influence positively or negatively the search objective. Distributed evolutionary algorithms are normally used to solve complex problems where a normal computation power is not enough to cover the entire domain of possible states in order to achieve the best solution. Although factors such as speed and search space size are important, there is no doubt that the risk of the algorithm falling into a local maximum is a factor to avoid. In such situation the search doesn’t evolve in a larger search space and the solution found may not be the optimum solution. If there is only a local maximum such problem is not so evident, as can been seen in figure 1, but if there is more than one and an individual fall in that maximum it will influence the rest of the population and the all search will converge to that value, when another and better one exists, as can be seen in figure 2. This problem is what the distributed approach seeks to avoid. Figure 1. 
Function with only one local maximum DISTRIBUTED EVOLUTIONARY ALGORITHM To solve the problem we present a solution based on islands. This solution allows the execution of several algorithms running in parallel in different machines or locally, 1 references of how to contact them and a JavaSpaces service that acts as a network storage repository in witch common data can be writen. Figure 2. Function with more than one local maximum The distributed genetic evolutionary algorithms can be divided in two categories: fine grain and coarse grain. The fine grain distributed evolutionary algorithms can be represented in two ways. As a single population where each chromosome can only interact with its neighbors, or as subpopulations overlapped, where the interactions between the chromosomes are limited to the neighbor sub-populations. The coarse grain distributed evolutionary algorithms, also known as the islands model, are a little more complex. In this case the initial population is divided in subpopulations, and each sub-population evolves isolated from the others and occasionally exchanges chromosomes with the others. In this kind of problems it is usual to designate each sub-population as an island and the exchange as a migration between them. The parallel implementation of the migration model shows not only a speed increase in the computation time, but also a reduction in the number of evaluation functions to apply, when compared to unique population algorithms. Therefore, even for a single processor computer, using this kind of parallel algorithm in a serial way (pseudo-parallel), better results can be achieved. The selection of the individuals to migrate can be done randomly or fitness based. Many types of topologies can be structured and implemented, being the most simple and general topology a no restrictions topology (each island can migrate with every other island). In order to delay the propagation of a highly fit chromosome other strategies can be structured, for example a ring topology, in which each island can only exchange chromosomes with its neighbors. JAVA-JINI NETWORK TECHNOLOGY We used Java-Jini network technology as a solution to the challenges posed by distributed systems: complexity, security, manageability, and unpredictability of computer networks. Jini offers network plug and play, which means that any device can be easily connected to the network, its presence announced and the clients that wish may use the services available. The mobility that Jini offers for distributed tasks over dynamic networks, where a service may be available for different periods of time, is the most suitable for this kind of problems. A Jini community needs a set of support services, such as RMID to allow remote method invocation, an HTTPD to simplify stub transfers form the service provider to the service client, a LookupService in wich all services must register and where we can find all services available in the community and IMPLEMENTATION The project was divided in four modules: the demographic evolutionary algorithm, the communication based on Jini, the migration policies and the graphical interface. 
IMPLEMENTATION
The project was divided into four modules: the demographic evolutionary algorithm, the communication based on Jini, the migration policies and the graphical interface. The demographic evolutionary algorithm module implements a coarse-grain distributed genetic algorithm; the communication module based on Jini makes available all the mechanisms needed to communicate and exchange data between the evolutionary islands; the migration policies define how and when the chromosome exchange is done; finally, the graphical interface allows the configuration and the monitoring of the evolution.

The tasks of configuring and executing a genetic algorithm are complex and computationally heavy; in order to minimize the impact of these two factors and to simplify them, they were separated. A role called researcher was created with the task of configuring and collecting information about the search. Another role, called worker, was developed with the task of executing the algorithm. Both of them register their Jini services in the LookupService to allow the services to be invoked remotely. When a well-behaved Jini service starts, it registers on all LookupServices found.

A migration is an exchange of individuals between islands (workers). This exchange is made through the JavaSpace. With these new individuals we intend to introduce a larger heterogeneity into the destination population and consequently enlarge the search space, avoiding local maxima.

The graphical interface module makes the configuration and the monitoring of the search easier. A chart allows us to follow the diversity of the individuals in all islands and the evolution of the search.

These extensions to the JDEAL library consist of a set of new libraries (and some corrections to the previous libraries) that introduce new functionalities. These new libraries (communication, graphics, topologies, migrations, reports, and so on) can also be extended if needed. This project is not only an already functional application, but also a library and a reference for future improvements.

RESULTS
The tests consisted of solving a scheduling problem, more specifically a job-shop problem with a 15x20 matrix (15 machines for 20 jobs, problem swv11.job). We performed four different tests in order to show how the migrations could benefit the fitness and the deviation of the search. We used three computers: two Intel Celeron 850 MHz machines with 382 MB of RAM, each running a worker, and an AMD 1300 MHz machine with 512 MB of RAM running the researcher together with two local workers. The tests were performed ten times each in order to obtain a representative average result. All the tests had the following common parameters: a population of 100 chromosomes, run for 1000 generations, using one-point crossover with a probability of 0.95 and an invert integer mutator with a probability of 0.05. The topology used had no restrictions and no elitism was used. In test1 migrations occurred every 100 generations, in test2 every 200 generations, in test3 every 500 generations and finally test4 used no migrations.

Figure 3. Average Fitness (fitness values from the different tests plotted against generation; series FitnessTest1 to FitnessTest4)
Figure 4. Average Deviation (deviation values from the different tests plotted against generation; series DeviationTest1 to DeviationTest4)
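The behaviour exercised by these tests, an island that evolves locally and exchanges individuals through the shared space every fixed number of generations, can be summarised in the short sketch below. It reuses the Individual and RingIsland classes from the earlier sketch; the SharedSpace interface and all other names are hypothetical stand-ins for the JavaSpaces-based exchange, not the actual JDEAL or Jini API.

import java.util.List;

// Schematic worker loop: evolve locally and, every migrationInterval
// generations, exchange individuals through a shared repository.
// Hypothetical names only; not the JDEAL or JavaSpaces API.
class IslandWorker {
    interface SharedSpace {
        void write(int targetIsland, List<Individual> emigrants);
        List<Individual> take(int thisIsland);   // immigrants waiting for this island, possibly empty
    }

    void run(RingIsland island, SharedSpace space,
             int generations, int migrationInterval, int emigrantsPerMigration) {
        for (int gen = 1; gen <= generations; gen++) {
            evolveOneGeneration(island);          // selection, crossover, mutation
            if (migrationInterval > 0 && gen % migrationInterval == 0) {
                // Send our best individuals to a ring neighbour ...
                space.write(island.rightNeighbour(),
                            island.selectEmigrants(emigrantsPerMigration));
                // ... and absorb whatever has arrived for this island.
                List<Individual> immigrants = space.take(island.islandId);
                island.population.addAll(immigrants);
                trimToPopulationSize(island);     // keep the population at its nominal size
            }
        }
    }

    void evolveOneGeneration(RingIsland island) { /* standard GA step, omitted */ }
    void trimToPopulationSize(RingIsland island) { /* drop the worst individuals, omitted */ }
}

Setting migrationInterval to 100, 200 or 500 corresponds to test1, test2 and test3 above; a non-positive value corresponds to test4 with no migrations.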
In order to show that a distributed genetic algorithm can run the same number of evaluations much faster than a normal genetic algorithm, we performed another set of tests. Using a single computer (the AMD 1300 MHz described above) and running the same number of evaluations that four workers perform in a parallel execution, another ten tests were performed in order to obtain a representative average of the execution time. Test1 used a population of 100 chromosomes, run for 1000 generations, using one-point crossover with a probability of 0.95 and an invert integer mutator with a probability of 0.05. The topology used had no restrictions, with no elitism and no migrations. An average number of 19450 evaluations was reached. Test2 was exactly the same, but instead of 1000 generations as the termination condition, an evaluation terminator with a value of 19450 evaluations on a single algorithm was used.

From the chart presented in figure 3 we can see the evolution of the fitness value in the four sets of tests performed (migrations every 100, 200 and 500 generations, and a search with no migrations). We obtained better results with the search where the migrations were made every 100 generations, but the largest jump in the deviation introduced by the arrival of new individuals occurred when the migrations were less frequent, as can be seen in the chart in figure 4, in which we show the evolution of the deviation over the total number of generations. The difference shown between the different algorithms in the fitness chart would tend to grow with the number of generations, following the tendency observed so far. To make the study of this problem more conclusive, we would start by increasing the number of generations until convergence is reached.

Figure 5. Single Island vs. Distributed Evolution Execution Time (execution time in minutes for the single island and the distributed evolution runs)
Figure 6. Single Island vs. Distributed Evolution Fitness Values (fitness values for the single island and the distributed evolution runs)

As seen in the charts in figures 5 and 6, by running the same number of evaluations on a single island and distributed over a set of islands (four in this case), we can see that the gain in fitness is superficial, while the single island run takes about three times longer. The low increase in the fitness obtained can be explained by the lack of diversity existing in a single population.

CONCLUSIONS
The conclusions we were able to draw from the tests performed were, as expected, that with the use of migrations over a set of parallel genetic algorithms we can improve the diversity of the chromosomes in each island, leading to better results and avoiding local maxima (in our case local minima, since the problem is specified as a minimization). As a result, convergence is delayed and the probability of obtaining better results is increased. Increasing the number of migrations in the same search, we can see that in the limit we approach the single island model, since migrations occur so frequently that the diversity of chromosomes becomes the same in all islands. We can also conclude that the best moment for a migration to occur is when the deviation approaches zero, meaning that we have already exhaustively explored that local maximum.
In this case a migration would bring the necessary heterogeneity through new and different chromosomes with fresh genotypes, allowing the search to overcome the local maximum and proceed towards a better solution. From the second set of tests we can conclude that with the parallel solution without migrations we can obtain the same results as running the same number of evaluations on a single island, but much faster.

ACKNOWLEDGMENTS
We would like to thank Eng. João Paulo Caldeira for his availability and help every time we needed it.

REFERENCES
Anzaldo, Armando Aranda, "En la Frontera de la Vida: Los Virus", online edition, 1995.
Caldeira, J. P. dos S., "Parallel Evolutionary Algorithms Based on Voyager Core Technology", IST, 2000.
Caldeira, J. P. dos S., "Resolução do Problema JobShop Recorrendo a um Algoritmo de Tabu Search", IST, 2000.
Costa, J., Lopes, N., Silva, P., various documentation on JDEAL.
Fernandes, C., "Algoritmos Genéticos - Análise e Síntese de Algoritmos", Laseeb-ISR, Internal Report 00.04.
Griffiths, A. J. F., Miller, J. H., Suzuki, D. T., Lewontin, R. C., Gelbart, W. M., "An Introduction to Genetic Analysis", Seventh Edition, W. H. Freeman and Company, New York, 2000.
Hiernaux, J., "A Diversidade Biológica Humana", Fundação Calouste Gulbenkian, 1980.
Hue, Xavier, "Genetic Algorithms for Optimisation, Background and Applications", version 1.0, Edinburgh Parallel Computing Centre, University of Edinburgh, 1997.
Li, S., Ashri, R., Buurmeijer, M., Hol, E., Flenner, B., Scheuring, J., Schneider, A., "Professional Jini", Wrox Press Ltd, 2000.
Mattfeld, D. C., "Evolutionary Search and JobShop, Investigations on Genetic Algorithms for Production Scheduling", Physica-Verlag, 1996.
Monteiro, V. S., Pinto, I. G., "Introdução aos Algoritmos Genéticos", IST, 1998.
Rosa, A. C., "Algoritmos Evolutivos e Vida Artificial", LaSEEB-ISR-IST.
Rosa, A. C., "Introdução aos Algoritmos Genéticos", LaSEEB-ISR-IST.
Various authors, "Professional Java Server Programming, J2EE Edition", Wrox Press Ltd, 2000.
Whitley, D., "A Genetic Algorithm Tutorial", Statistics and Computing, Vol. 4, 1994.
Moran, F., Moreno, A., Merelo, J. J., Chacon, P. (Eds.), "Advances in Artificial Life", Lecture Notes in Artificial Intelligence, Springer-Verlag, pp. 368-382.
Rocha, Luís Mateus, Computer Research Group, MS P990, Los Alamos National Laboratory.
Pohlheim, Hartmut, "GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with MATLAB", Version 1.92 (July 1997 / review November 1998), online: http://www.geatbx.com/docu/index.html.
Lopes, N. M. V., Silva, P. J. G. C. M. da, "JDEAL - The Java Distributed Evolutionary Algorithms Library - Final Report", I. S. R. / Instituto Superior Técnico, 1999.
arXiv:1211.2963v2 [cs.DC] 28 Jan 2013 Flexible composition and execution of high performance, high fidelity multiscale biomedical simulations D. Groen1 , J. Borgdorff2 , C. Bona-Casas2 , J. Hetherington1 , R.W. Nash1 , S.J. Zasada1 , I. Saverchenko3 , M. Mamonski4 , K. Kurowski4 , M.O. Bernabeu1 , A.G. Hoekstra2 & P.V. Coveney1 1 2 Centre for Computational Science, University College London, UK. Section Computational Science, University of Amsterdam, the Netherlands 3 4 Leibniz-Rechenzentrum, Garching, Germany Poznan Supercomputing and Networking Center, Poznan, Poland Abstract Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from desktops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, instent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1 to 10% of the total execution time. 1 Introduction Models of biomedical systems are inherently complex; properties on small time and length scales, such as the molecular or genome level, can make a substantial difference to the properties observed on much larger scales, such as the organ, full-body and even the population level; and vice-versa [1, 2]. We therefore need to apply multiscale approaches when modelling many biomedical problems. Example biomedical multiscale challenges include predicting the impact of a surgical procedure [3], investigating the effects of pathologies (e.g. arterial malformations or fistulas [4]), or assessing the effects of a targeted drug on a given patient [5]. In all these cases, we need to examine processes that not only occur across several time and/or length scales, but that also rely on different underlying physical and/or biological mechanisms. As a result, modelling these 1 processes may require substantially different algorithms and varying levels of computational effort. Historically, these problems have often been modelled using single scale approaches, focussing exclusively on those aspects of the problem which are deemed most relevant. However, applying a single scale model is frequently insufficient to fully understand the problem at hand, as additional processes occurring on different scales must be incorporated to obtain sufficient accuracy. It is this need for understanding the composite problem, rather than its individual subcomponents alone, that has driven many research groups to explore multiscale modelling; for example [6, 7, 8, 9]. In a multiscale model, the overall system is approximated by coupling two or more single scale submodels. Establishing and performing the data exchange between these submodels is an important aspect of enabling multiscale modelling. It is often addressed by using coupling tools, such as MUSCLE [10, 11], the Multilevel Communicating Interface [12] and GridSpace [13]. Groen et al. [14] provide a review of coupling tools and the computational challenges they address. Another important aspect is adopting a data standard which submodels can adopt to exchange meaningful quantities. 
Several markup languages, such as SBML [15] and CellML [16], resolve this problem by providing a description language for the storage and exchange of model data and submodel definitions. CellML specifically allows for submodel exchange between ODE and PDE solvers, whereas SBML is aimed towards biochemical pathway and reaction ODE submodels. Both SBML and CellML, being languages for the description of submodels and system data, have serious limitations in that they require additional tools to perform tasks that are not directly related to ensuring data interoperability. These include transferring data between submodels, deploying submodels on appropriate resources and orchestrating the interplay of submodels in a multiscale simulation. Additionally, they only provide very limited features to describe submodels that do not rely on ODE-based methods, such as finite-element/volume methods, spectral methods, lattice-Boltzmann, molecular dynamics and particle-based methods, which are of increasing importance in the biomedical domain. Here we present a comprehensive approach to enable multiscale biomedical modelling from problem definition through the bridging of length and time scales, to the deployment and execution of these models as multiscale simulations. Our approach, which has been developed within the MAPPER project 1 , relies on coupling existing submodels and supports the use of resources ranging from a local laptop to large international supercomputing infrastructures, and distributed combinations of these. We present our approach for describing multiscale biomedical models in section 2 and for constructing and executing multiscale simulations on large computing infrastructures in section 3. We describe our approach for biomedical applications in section 4. We have applied our approach to two biomedical multiscale applications, in-stent restenosis and 1 http://www.mapper-project.eu/ 2 hierarchical cerebrovascular blood flow, which we present in sections 5 and 6 respectively. We conclude with a brief discussion. 2 Multiscale biomedical modelling Multiscale modelling gives rise to a number of challenges which extend beyond the translation of model and system data. Most importantly, we seek to allow application developers, such as computational biologists and biomedics, to couple multiscale models for large problems, supporting any type of coupling, using any type of submodel they wish to include, and executing this using any type of computational resource, from laptop to petascale. Since we cannot expect computational biologists to have expertise in all the technical details of multiscale computing, we also aim to present a uniform and easy-to-use interface which retains the power of the underlying technology. This also enables users with less technical expertise to execute previously constructed multiscale simulations. Multiscale systems are, in general, characterized by the interaction of phenomena on different scales, but the details vary for different scientific domains. To preserve the generality of our approach, we adopt the Multiscale Modelling and Simulation Framework (MMSF) to reason about multiscale models in a domain-independent context and to create recipes for constructing multiscale simulations independent of the underlying implementations or computer architectures. This MMSF is based on earlier work on coupled cellular automata and agent-based systems [17, 18] and has been applied to several computational problems in biomedicine [19, 20, 3]. 
Within the MMSF, the interactions between single-scale submodels are confined to well-defined couplings. The submodels can therefore be studied as independent models with dependent incoming and outgoing links. The graph of all submodels and couplings, the coupling topology, can either be cyclic or acyclic [21]. In a cyclic coupling topology the submodels will exchange information in an iterative loop, whereas in an acyclic topology the submodels are activated one after another, resulting in a directional data flow which makes them well-suited for workflow managers. Two parts of the MMSF are particularly useful for our purposes, namely the Scale Separation Map [17] and the Multiscale Modelling Language (MML) [22, 11]. The Scale Separation Map is a graphical representation which provides direct insights into the coupling characteristics of the multiscale application. MML provides a formalization of the coupling between the submodels, independent of the underlying implementation. In addition, it simplifies the issues associated with orchestrating submodels by capturing the orchestration mechanisms in a simple model which consists of only four distinct operators. MML definitions can be stored for later use using an XML-based file format (xMML) or represented visually using graphical MML (gMML) [11]. The generic MML definitions allow us to identify commonalities between multiscale applications in different scientific domains. Additionally, the stored xMML can be used as input for a range of supporting tools that facilitate multiscale simulations (e.g., tools that automate the deployment or the submodel 3 coupling of these simulations, as discussed in section 3). 3 From multiscale model to production simulation We have developed a range of tools that allow us to create, deploy and run multiscale simulations based on the xMML specification of a multiscale model. Two of the main challenges in realising successful multiscale simulations are to establish a fast and flexible coupling between submodels and to deploy and execute the implementations of these submodels efficiently on computer resources of any size. In this section we review these challenges and present our solutions to them. 3.1 Coupling submodels Effective and efficient coupling between submodels encompasses three major aspects. First, we must translate our MML definitions to technical recipes for the multiscale simulation execution. Second, we need to initiate and orchestrate the execution of the submodels in an automated way, in accordance with the MML specification. Third, we need to efficiently exchange model and system information between the submodels. The Multiscale Library and Coupling Environment (MUSCLE) [10, 11] provides a solution to the first two aspects. It uses MML in conjunction with a definition of the requested resources to bootstrap and start the different submodels and to establish a connection for data exchange between the submodels. It also implements the coupling definitions defined in the MMSF to orchestrate the submodels. The submodels can be coupled either locally (on a workstation for example), via a local network, or via a wide area network using the MUSCLE Transport Overlay (MTO) [23]. Running submodels in different locations and coupling them over a wide area network is especially important when submodels have very different computational requirements, for example when one submodel requires a local machine with a set of Python libraries, and another submodel requires a large supercomputer. 
However, messages exchanged across a wide area network do take longer to arrive. Among other things, MTO provides the means to exchange data between large supercomputers while adhering to the local security policies and access restrictions. The exchanges between submodels overlap with computations in many cases (see section 6 for an example), allowing for an efficient execution of the overall simulation. However, when the exchanges are particularly frequent or the exchanged data particularly large, inefficiencies in the data exchange can severely slow down the multiscale simulation as a whole. We use the MPWide communication library [24], previously used to run cosmological simulations distributed across supercomputers [25], to optimise our wide area communications for performance. We already use MPWide directly within the cerebrovascular blood- 4 flow simulation and are currently incorporating it in MUSCLE to optimise the performance of the MTO. 3.2 Deploying and executing submodels on production resources When large multiscale models are deployed and executed as simulations on production compute resources, a few major practical challenges arise. Production resources are often shared by many users, and are difficult to use even partially at appropriate times. This in turn makes it difficult to ensure that the submodels are invoked at the right times, and in cases where they are distributed on different machines, to ensure that the multiscale simulation retains an acceptable time to completion. We use the QosCosGrid environment (QCG) [26] to run submodels at a predetermined time on large computing resources. QCG enables us, if installed on the target resource, to reserve these resources in advance for a given period of time to allow our submodels to be executed there. This is valuable in applications that require cyclic coupling, as we then require multiple submodels to be executed either concurrently or in alternating fashion. Additionally, with QCG we can explicitly specifiy the sequence of resources to be reserved, allowing us to repeat multiple simulations in a consistent manner using the same resource sequence each time. A second component that aids in the execution of submodels is the Application Hosting Environment (AHE) [27]. AHE simplifies user access to production resources by streamlining the authentication methods and by centralising the installation and maintenance tasks of application codes. The end users of AHE only need to work with one uniform client interface to access and run their applications on a wide range of production resources. 4 Using MAPPER for biomedical problems We apply the MAPPER approach to a number of applications. We present two of these applications here (in-stent restenosis and cerebrovascular blood flow), but we have also used MAPPER to define, deploy and execute multiscale simulations of clay-polymer nanocomposite materials [28], river beds and canals [29], and several problems in nuclear fusion. MAPPER provides formalisms, tools and services which aid in the description of multiscale models, as well as the construction, deployment and execution of multiscale simulations on production infrastructures. It is intended as a general-purpose solution, and as such tackles challenges in multiscale simulation that exist across scientific disciplines. There are a number of challenges which are outside the scope of our approach, because they may require different solutions for different scientific disciplines. 
These include choosing appropriate submodels for a multiscale simulation and defining, on the application level, what information should be exchanged between submodels at which times to provide a scientifically accurate and stable multiscale simulation. However, MMSF does 5 simplify the latter task by providing a limited number of orchestration mechanisms. The formalisms, tools and services presented here are independent components, allowing users to adopt those parts of our approach which specifically meet their requirements in multiscale modelling. This modular approach makes it easier for users to exploit the functionalities of individual components and helps to retain a lightweight simulation environment. 5 Modelling in-stent restenosis Coronary heart disease (CHD) is one of the most common causes of death, and is responsible for about 7.3 million deaths per year worldwide [30]. CHD is typically expressed as artherosclerosis, which corresponds with a thickening and hardening of blood vessels caused by build-up of atheromatous plaque; when this significantly narrows the vessel, it is called a stenosis. A common intervention for stenosis is stent-assisted balloon angioplasty where a balloon, attached to a stent, is inserted in the blood vessel and inflated at the stenosed location, consequently deploying the stent. The stent acts as a scaffold for the blood vessel, compressing the plaque and holding the lumen open. Occasionally, however, this intervention is followed by in-stent restenosis (ISR), an excessive regrowth of tissue due to the injury caused by the stent deployment [31, 32]. Although there are a number of different hypotheses [33], the pathophysiological mechanisms and risk factors of in-stent restenosis are not yet fully clear. By modelling in-stent restenosis with a three-dimensional model (ISR3D) it is possible to test mechanisms and risk factors that are likely to be the main contributors to in-stent restenosis. After evaluating the processes involved in in-stent restenosis [19], ISR3D applies the hypothesis that smooth muscle cell proliferation drives the restenosis, and that this is affected most heavily by wall shear stress of the blood flow, which regulates endothelium recovery, and by growth inhibiting drugs diffused by a drug-eluting stent. Using the model we can evaluate the effect of different drug intensities, physical stent designs, vascular geometries and endothelium recovery rates. ISR3D is derived from the two-dimensional ISR2D [20, 3] code. Compared to ISR2D it provides additional accuracy, incorporating a full stent design, realistic cell growth, and a three-dimensional blood flow model, although it requires more computational effort. We present several results from modelling a stented artery using ISR3D in figure 2. This figure shows the artery both before and after restenosis has occurred. From a multiscale modelling perspective, ISR3D combines four submodels, each operating on a different time scale: smooth muscle cell proliferation (SMC), which models the cell cycle, cell growth, and physical forces between cells; initial thrombus formation (ITF) due to the backflow of blood; drug diffusion (DD) of the drug eluding stent through the tissue and applied to the smooth muscle cells; and blood flow (BF) and the resulting wall shear stress on smooth muscle cells. We show the time scales of each submodel and the relations between them in figure 1. 
Figure 1: Scale Separation Map of the ISR3D multiscale simulation, originally presented in [11]. (The map places the blood flow, drug diffusion, thrombus formation and SMC proliferation submodels on a temporal axis ranging from seconds to days and a spatial axis ranging from cell geometry to tissue geometry; the couplings are labelled with the exchanged quantities: geometry, cells, drug concentration and wall shear stress.)

Figure 2: Images of a stented artery as modelled using ISR3D, before restenosis occurs on the left and after 12.5 days of muscle cell proliferation on the right.

The SMC submodel uses an agent-based algorithm on the cellular scale, which undergoes validation on the tissue level. All other submodels act on a Cartesian grid representation of those cells. For the BF submodel we use 1,237,040 lattice sites, and for the SMC submodel we use 196,948 cells. The exchanges between the submodels are in the order of 10 to 20 MB. The SMC sends a list of cell locations and sizes (stored as 8-byte integers) to the ITF, which sends the geometry (stored as a 3D matrix of 8-byte integers) to BF and DD. In turn, BF and DD respectively send a list of wall shear stress and drug concentrations (stored as 8-byte doubles) to SMC. Each coupling communication between SMC and the other submodels takes place once per SMC iteration. The submodels act independently, apart from exchanging messages, and are heterogeneous. The SMC code is implemented in C++, the DD code in Java, and the ITF code in Fortran. The BF code, which unlike the other codes runs in parallel, uses the Palabos lattice-Boltzmann application (http://www.palabos.org/), written in C++. The MML specification of ISR3D contains a few conversion modules not mentioned above, which perform basic data transformations necessary to ensure that the various single scale models do not need to be aware of other submodels and their internal representation or scales. Specifically, the submodels operate within the same domain, requiring the application to keep the grid and cell representation consistent, as well as their dimensions.

5.1 Tests

We have performed a number of tests to measure both the runtime and the efficiency of our multiscale ISR3D simulation. The runs that were performed here were short versions of the actual simulations, with a limited number of iterations. The runs contributed to the integration testing of the code and allowed us to estimate the requirements for future computing resource proposals. We have run our tests in five different scenarios, using the EGI resource in Krakow (Zeus, one scenario), a Dutch PRACE tier-1 machine in Amsterdam (Huygens, two scenarios), or a combination of both (two scenarios). We provide the technical specifications of the resources in table 1. Since the runtime behaviour of ISR3D is cyclic, determined by the number of smooth muscle cell iterations, we measured the runtime of a single cycle for each scenario. Because only the BF model is parallelised, the resources we use are partially idle whenever we are not executing the BF model. To reduce this 'idle time' overhead, we created two double mapping scenarios, each of which runs two simulations in alternating fashion using the same resource reservation. In these cases, the blood flow calculations of one simulation take place while the other simulation executes one or more of the other submodels. We use a wait/notify signalling system within MUSCLE to enforce that only one of the two simulations executes its parallel BF model at a time. We have run one double mapping scenario locally on Huygens, and one distributed over Huygens and Zeus.
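The double-mapping scheme rests on simple mutual exclusion: of the two co-scheduled simulations, only one may run its parallel blood-flow phase at any given time. The following minimal Java sketch illustrates the idea with the standard wait/notify primitives; it is an illustrative sketch only, with hypothetical names, and not the actual MUSCLE signalling code.

// Minimal sketch of the mutual exclusion behind double mapping: two
// simulations share one reservation, and only one of them may run its
// parallel blood-flow (BF) phase at a time. Hypothetical names only.
class BloodFlowToken {
    private boolean inUse = false;

    synchronized void acquire() throws InterruptedException {
        while (inUse) {
            wait();                 // block until the other simulation finishes its BF phase
        }
        inUse = true;
    }

    synchronized void release() {
        inUse = false;
        notifyAll();                // wake the simulation waiting to start its BF phase
    }
}

class CoScheduledSimulation implements Runnable {
    private final BloodFlowToken token;
    private final int smcIterations;

    CoScheduledSimulation(BloodFlowToken token, int smcIterations) {
        this.token = token;
        this.smcIterations = smcIterations;
    }

    public void run() {
        try {
            for (int i = 0; i < smcIterations; i++) {
                runSequentialSubmodels();   // SMC, ITF, DD: uses only a few cores
                token.acquire();            // wait for exclusive use of the parallel partition
                runParallelBloodFlow();     // BF: uses the full parallel allocation
                token.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void runSequentialSubmodels() { /* omitted */ }
    private void runParallelBloodFlow()   { /* omitted */ }
}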
name      processor        freq.     cores      mem/node   middleware
HECToR    AMD Interlagos   2.3 GHz   512/2048   32 GB      UNICORE
Huygens   IBM Power6       4.7 GHz   32         128 GB     UNICORE
Henry     Intel Xeon       2.4 GHz   1          6 GB       none
Zeus      Intel Xeon       2.4 GHz   4          16 GB      QCG-Comp.

Table 1: Computational characteristics of the machines we use in our multiscale simulations for ISR3D and HemeLB. Clock frequency is given in the second column, the number of cores used in the third column, and the amount of memory per node in the fourth column. The administrative details are listed in table 2.

name      provider   location                     infrastructure
HECToR    EPCC       Edinburgh, United Kingdom    PRACE Tier-1
Huygens   SARA       Amsterdam, The Netherlands   PRACE Tier-1
Henry     UCL        London, United Kingdom       Local workstation
Zeus      Cyfronet   Krakow, Poland               EGI (PL-Grid)

Table 2: Administrative information for the resources described in table 1.

5.1.1 Results

We present our performance measurements for the five scenarios in table 3.

scenario              BF     other   coupling   total   usage %
Zeus                  2100   1140    79         3240    80%
Huygens               480    1260    73         1813    27%
Huygens-double        480    1320    131        1931    45%
Zeus-Huygens          480    960     92         1532    32%
Zeus-Huygens-double   480    1080    244        1804    56%

Table 3: Runtimes with different scenarios. The name of the scenario is given in the first column, the time spent on BF in the second, and the total time spent on the other submodels in the third column (both rounded to the nearest minute). We provide the coupling overhead in the fourth column and the total simulation time per cycle in the fifth column. In the sixth column we provide the average fraction of reserved resources doing computations (not idling) throughout the period of execution. All times are given in seconds.

The runtime of our simulation is reduced by almost 50% when we use the Huygens machine instead of Zeus. This is because we run the BF code on Huygens using 32 cores, and on Zeus using 4 cores. However, the usage of the reservation was considerably lower on Huygens than on Zeus for two reasons: first, the allocation on Huygens was larger, leaving more cores idle when the sequential submodels were computed; second, the ITF solver uses the gfortran compiler, which is not well optimized for the Huygens architecture. As a result it requires only 4 minutes on Zeus, but 12 minutes on Huygens. The performance of the other sequential submodels is less platform dependent. When we run our simulation distributed over both resources, we achieve the lowest runtime. This is because we combine the fast execution of the BF module on Huygens with the fast execution of the ITF solver on Zeus. When we use double mapping, the runtime per simulation increases by about 6% (using only Huygens) to 18% (using both resources). However, the double mapping improves the usage of the reservation by a factor of ∼ 1.7. Double mapping is therefore an effective method to improve the resource usage without incurring major increases to the time to completion of each simulation. The coupling overhead is relatively low throughout our runs, consisting of no more than 11% of the runtime throughout our measurements. Using our approach, we are now modelling different in-stent restenosis scenarios, exploring a range of modelling parameters.

5.2 Clinical directions

We primarily seek to understand which biological pathways dominate in the process leading to in-stent restenosis.
If successful, this will have two effects on clinical practice: first, it suggests which factors are important for in-stent restenosis, in turn giving clinicians more accurate estimates of what the progression of the disease can be; second, it may spur further directed clinical research of certain pathways, which will help the next iteration of the model give more accurate results. The methods for achieving this divide naturally in two directions: general model validation and experiments; and virtual patient cohort studies. For general model validation we consult the literature and use basic experimental data, such as measurements from animal studies. In addition, we intend to use virtual patient cohort studies to assess the in-stent restenosis risk factors of virtual patients with different characteristics. In clinical practice, this will not lead to personalized estimates, but rather to patient classifiers on how ISR3D will progress. Once the primary factors leading to in-stent restenosis have been assessed, a simplified model could be made based on ISR3D, which takes less computational effort and runs within a hospital. 6 Modelling of cerebrovascular blood flow Our second hemodynamic example aims to incorporate not only the local arterial structure in our models, but also properties of the circulation in the rest of the human body. The key feature of this application is the multiscale modelling of blood flow, delivering accuracy in the regions of direct scientific or clinical interest while incorporating global blood flow properties using more approximate methods. Here we provide an overview of our multiscale model and report on its performance. A considerable amount of previous work has been done where 10 groups combined bloodflow solvers of different types, for example in the area of cardiovascular [34, 35, 36] or cerebrovascular bloodflow [37, 12]. We have constructed a distributed multiscale model in which we combine the open source HemeLB lattice-Boltzmann application for blood flow modelling in 3D [38, 39] with the open source one-dimensional Python Navier-Stokes (pyNS) blood flow solver [40]. HemeLB is optimized for sparse geometries such as vascular networks, and has been shown to scale linearly up to at least 32,768 cores [41]. PyNS is a discontinuous Galerkin solver which is geared towards modelling large arterial structures. It uses aortic blood flow input based on a set of patient-specific parameters and it combines 1D wave propagation elements to model arterial vasculature with 0D resistance elements to model veins. The numerical code supports thread-level parallelization, and is written in Python in conjunction with the numpy numerical library. 6.1 Simulations We have run a number of coupled simulations using both HemeLB and pyNS as submodels. Within pyNS we use a customised version of the ’Willis’ model, based on [42], with a mean pressure 90 mmHg, a heart rate of 70 beats per minute and a cardiac output of 5.68 litres per minute. Our model includes the major arteries in the human torso, head and both arms, as well as a full model for the circle of Willis, which is a major network of arteries in the human head. For pyNS we use a time step size of 2.3766 × 10−4 s. We have modified a section of the right mid-cerebral artery (MCA) in pyNS to allow it to be coupled to HemeLB in four places, exchanging pressure values in these boundary regions. 
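To make this boundary coupling concrete, the sketch below shows the shape of such an exchange: once per coupling step, the pressure values at the four interface sites are written out and the updated values computed by the other solver are read back. It is a generic Java illustration with hypothetical names; the actual codes use MPWide and the HemeLB and pyNS interfaces, which are not reproduced here.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Schematic per-step exchange of boundary pressures between two coupled
// solvers over a plain TCP socket. Illustrative only; not the MPWide API.
class PressureExchange {
    private final DataOutputStream out;
    private final DataInputStream in;

    PressureExchange(Socket socket) throws IOException {
        this.out = new DataOutputStream(socket.getOutputStream());
        this.in = new DataInputStream(socket.getInputStream());
    }

    // Send the pressures at the coupling interfaces (four in this example)
    // and receive the updated values from the other solver.
    double[] exchange(double[] localPressures) throws IOException {
        for (double p : localPressures) {
            out.writeDouble(p);          // 8-byte doubles, small aggregate size
        }
        out.flush();
        double[] remotePressures = new double[localPressures.length];
        for (int i = 0; i < remotePressures.length; i++) {
            remotePressures[i] = in.readDouble();
        }
        return remotePressures;
    }
}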
In HemeLB we simulate a small network of arteries, with a voxel size of 3.5 × 10−5 m and consisting of about 4.2 million lattice sites which occupy 2.3% of the simulation box volume. The HemeLB simulation runs with a timestep of 2.3766 × 10−6 s, a Mach number of 0.1 and a relaxation parameter τ of 0.52. We run pyNS using a local machine at UCL (Henry), while we run HemeLB for 400,000 time steps on the HECToR supercomputer in Edinburgh. The round-trip time for a network message between these two resources is on average 11 milliseconds. We provide technical details of both machines in table 2. Both codes exchange pressure data at an interval of 100 HemeLB time steps (or 1 pyNS time step). Because HemeLB time steps can take as little as 0.0002 seconds, we adopted MPWide to connect our submodels, which run concurrently, and minimize the communication response time. The exchanged data is represented using 8-byte doubles, and has a small aggregate size (less than 1 kb). As a comparison, we have also run HemeLB as a standalone single scale simulation (labelled 'ss'), retrieving its boundary values from a local configuration file. The pyNS code requires 116 seconds to simulate 4000 time steps of our modified circle of Willis problem when run as a stand-alone code. We present our results in table 4.

ss/ms   cores   Init t/s   HemeLB* t/s   coupling t/s   total t/s   coupling efficiency
ss      512     47.9       2223          n/a            2271        n/a
ms      512     51.2       2223          24             2298        98.8%
ss      2048    46.4       815           n/a            862         n/a
ms      2048    50.5       815           37             907         95.0%

Table 4: Performance measurements of coupled simulations which consist of HemeLB running on HECToR and pyNS running on a local UCL workstation. The type of simulation, single scale (ss) or coupled multiscale (ms), is given in the first column. The number of cores used by HemeLB and the initialisation time are respectively given in the second and third columns. The time spent on HemeLB and coupling work are given respectively in the fourth and fifth columns, the total time in the sixth column and the efficiency in the seventh column. The times for HemeLB model execution in the multiscale runs are estimates, which we derived directly from the single scale performance results of the same problem on the same resources.

For 512 cores, the single scale HemeLB simulation takes 2271 seconds to perform its 400,000 lattice-Boltzmann time steps. The coupled HemeLB-pyNS simulation is only marginally slower, reaching completion in 2298 seconds. We measure a coupling overhead of only 24 seconds, which is the time to do 4,002 pressure exchanges between HemeLB and pyNS. This amounts to about 6 milliseconds per exchange, well below even the round-trip time of the network between UCL and EPCC alone. The communication time on the HemeLB side is so low because pyNS runs faster per coupling iteration than HemeLB, and the incoming pressure values are already waiting at the network interface when HemeLB begins to send out its own. As a result, the coupling overhead is only 24 seconds, and the multiscale simulation is only 1.2% slower than a single-scale simulation of the same network domain. When using 2048 cores, the runtime for the single-scale HemeLB simulation is 815 seconds, which is a speedup of 2.63 compared to the 512 core run. The coupling overhead of the multiscale simulation is relatively higher than that of the 512 core run due to a larger number of processes with which pressures must be exchanged. This results in an overall coupling efficiency of 0.95, which is lower than for the 512 core run.
However, the 2048 core run contains ∼ 2000 sites per core, which is a regime where we no longer achieve linear scalability [41] to begin with. As a future task, we plan to coalesce these pressure exchanges with the other communications in HemeLB, using the coalesced communication pattern [43]. 6.2 Clinical directions We aim to understand the flow dynamics in cerebrovascular networks and to predict the flow dynamics in brain aneurysms for individual patients. The ability to predict the flow dynamics in cerebrovascular networks is of practical use to clinicians, as it allows them to more accurately determine whether surgery is required for a specific patient suffering from an aneurysm. In addition, our work supports a range of other scenarios, which in turn may drive clinical in- 12 vestigations of other vascular diseases. Our current efforts focus on enhancing the model by introducing velocity exchange and testing our models for accuracy. As a first accuracy test we compared different boundary conditions and flow models within HemeLB [39]. Additionally we used our HemeLB-pyNS setup to compare different blood rheology models, which we describe in detail in [44]. Next steps include more patient-specific studies, where we wish to compare the flow behavior within patient-specific networks of arteries in our multi-scale simulations with the measured from those same patients. We have already established several key functionalities to allow patient-specific modelling. For example, HemeLB is able to convert 3D rotational angiographic data into initial conditions for the simulation, while pyNS provides support for patient-specific global parameters and customized arterial tree definitions. 7 Conclusions and Future Work We have shown that MAPPER provides a usable and modular environment which enables us to efficiently map multiscale simulation models to a range of computing infrastructures. Our methods provide computational biologists with the ability to more clearly reason about multiscale simulations, to formally define their multiscale scenarios, and to more quickly simulate problems that involve the use of multiple codes using a range of resources. We have presented two applications, in-stent restenosis and cerebrovascular bloodflow, and conclude that both applications run rapidly and efficiently, even when using multiple compute resources in different geographical locations. Acknowledgements We thank our colleagues in the MAPPER consortium, the HemeLB development team at UCL, as well as Simone Manini and Luca Antiga from the pyNS development team. This work received funding from the MAPPER EU-FP7 project (grant no. RI-261507) and the CRESTA EU-FP7 project (grant no. RI-287703). This work made use of computational resources provided by the PL-Grid Infrastructure (Zeus), by PRACE at SARA in Amsterdam, the Netherlands (Huygens) and EPCC in Edinburgh, United Kingdom (HECToR), and by University College London (Henry). References [1] P. M. A. Sloot and A. G. Hoekstra. Multi-scale modelling in computational biomedicine. Briefings in Bioinformatics, 11(1):142–152, 2010. [2] P. Kohl and D. Noble. Systems biology and the virtual physiological human. Molecular Systems Biology, 5(1), July 2009. 13 [3] H. Tahir, A. G. Hoekstra, E. Lorenz, P. V. Lawford, D. R. Hose, J. Gunn, and D. J. W. Evans. Multi-scale simulations of the dynamics of instent restenosis: impact of stent deployment and design. Interface Focus, 1(3):365–373, 2011. [4] F. Migliavacca and G. Dubini. 
Computational modeling of vascular anastomoses. Biomechanics and Modeling in Mechanobiology, 3:235–250, 2005. [5] C. Obiol-Pardo, J. Gomis-Tena, F. Sanz, J. Saiz, and M. Pastor. A multiscale simulation system for the prediction of drug-induced cardiotoxicity. Journal of Chemical Information and Modeling, 51(2):483–492, 2011. [6] D. Noble. Modeling the heart–from genes to cells to the whole organ. Science, 295(5560):1678–1682, 2002. [7] A. Finkelstein, J. Hetherington, L. Li, O. Margoninski, P. Saffrey, R. Seymour, and A. Warner. Computational challenges of systems biology. Computer, 37:26–33, May 2004. [8] J. Hetherington, I. D. L. Bogle, P. Saffrey, O. Margoninski, L. Li, M. Varela Rey, S. Yamaji, S. Baigent, J. Ashmore, K. Page, R. M. Seymour, A. Finkelstein, and A. Warner. Addressing the challenges of multiscale model management in systems biology. Computers and Chemical Engineering, 31(8):962 – 979, 2007. [9] J. Southern, J. Pitt-Francis, J. Whiteley, D. Stokeley, H. Kobashi, R. Nobes, Y. Kadooka, and D. Gavaghan. Multi-scale computational modelling in biology and physiology. Progress in Biophysics and Molecular Biology, 96(1-3):60 – 89, 2008. [10] J. Hegewald, M. Krafczyk, J. Tölke, A. Hoekstra, and B. Chopard. An agent-based coupling platform for complex automata. In Proceedings of the 8th international conference on Computational Science, Part II, ICCS ’08, pages 227–233, Berlin, Heidelberg, 2008. Springer-Verlag. [11] J. Borgdorff, C. Bona-Casas, M. Mamonski, K. Kurowski, T. Piontek, B. Bosak, K. Rycerz, E. Ciepiela, T. Gubala, and D. et al. Harezlak. A Distributed Multiscale Computation of a Tightly Coupled Model Using the Multiscale Modeling Language. Procedia Computer Science, 9:596–605, 2012. [12] L. Grinberg, V. Morozov, D. Fedosov, J.A. Insley, M.E. Papka, K. Kumaran, and G.E. Karniadakis. A new computational paradigm in multiscale simulations: Application to brain blood flow. In High Performance Computing, Networking, Storage and Analysis (SC), 2011 International Conference for, pages 1–12, nov. 2011. [13] E. Ciepiela, D. Harezlak, J. Kocot, T. Bartynski, M. Kasztelnik, P. Nowakowski, T. Gubala, M. Malawski, and M. Bubak. Exploratory programming in the virtual laboratory. In Computer Science and Information 14 Technology (IMCSIT), Proceedings of the 2010 International Multiconference on, pages 621 –628, oct. 2010. [14] D. Groen, S. J. Zasada, and P. V. Coveney. Survey of Multiscale and Multiphysics Applications and Communities. ArXiv e-prints arXiv:1208.6444, August 2012. [15] M. Hucka, A. Finney, H. M. Sauro, H. Bolouri, J. C. Doyle, and H. et al. Kitano. The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics, 19(4):524–531, 2003. [16] C. M. Lloyd, M. D. B. Halstead, and P. F. Nielsen. CellML: its future, present and past. Progress in Biophysics and Molecular Biology, 85(23):433 – 450, 2004. [17] A. G. Hoekstra, E. Lorenz, J.-L. Falcone, and B. Chopard. Toward a complex automata formalism for multiscale modeling. International Journal for Multiscale Computational Engineering, 5(6):491–502, 2007. [18] A.G. Hoekstra, A. Caiazzo, E. Lorenz, J.-L. Falcone, and B. Chopard. Complex Automata: Multi-scale Modeling with Coupled Cellular Automata, pages 29–57. Understanding Complex Systems. Springer-Verlag, 2010. [19] D. J. W. Evans, P. V. Lawford, J. Gunn, D. Walker, D. R. Hose, R. H. Smallwood, B. Chopard, M. Krafczyk, J. Bernsdorf, and A. Hoekstra. 
The application of multiscale modelling to the process of development and prevention of stenosis in a stented coronary artery. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1879):3343–3360, 2008. [20] A. Caiazzo, D. J. W. Evans, J. L. Falcone, J. Hegewald, E. Lorenz, B. Stahl, D. Wang, J. Bernsdorf, B. Chopard, and J. et al. Gunn. A Complex Automata approach for In-stent Restenosis: two-dimensional multiscale modeling and simulations. Journal of Computational Science, 2(1):9–17, 2011. [21] J. Borgdorff, J.-L. Falcone, E. Lorenz, B. Chopard, and A. G. Hoekstra. A principled approach to distributed multiscale computing, from formalization to execution. In Proceedings of the IEEE 7th International Conference on e-Science Workshops, pages 97–104, Stockholm, Sweden, 2011. IEEE Computer Society Press. [22] J.-L. Falcone, B. Chopard, and A. Hoekstra. Mml: towards a multiscale modeling language. Procedia Computer Science, 1(1):819 – 826, 2010. [23] S. J. Zasada, M. Mamonski, D. Groen, J. Borgdorff, I. Saverchenko, T. Piontek, K. Kurowski, and P. V. Coveney. Distributed infrastructure for multiscale computing. In Proceedings of the 2012 IEEE/ACM 16th International Symposium on Distributed Simulation and Real Time Applications, DS-RT ’12, pages 65–74, 2012. 15 [24] D. Groen, S. Rieder, P. Grosso, C. de Laat, and P. Portegies Zwart. A lightweight communication library for distributed computing. Computational Science and Discovery, 3(015002), August 2010. [25] D. Groen, S. Portegies Zwart, T. Ishiyama, and J. Makino. High Performance Gravitational N-body simulations on a Planet-wide Distributed Supercomputer. Computational Science and Discovery, 4(015001), January 2011. [26] K. Kurowski, T. Piontek, P. Kopta, M. Mamonski, and B. Bosak. Parallel large scale Simulations in the PL-Grid Environment. Computational Methods in Science and Technology, pages 47–56, 2010. [27] S.J. Zasada and P.V. Coveney. Virtualizing access to scientific applications with the application hosting environment. Computer Physics Communications, 180(12):2513 – 2525, 2009. [28] J. Suter, D. Groen, L. Kabalan, and P. V. Coveney. Distributed multiscale simulations of clay-polymer nanocomposites. In Materials Research Society Spring Meeting, volume 1470, San Francisco, CA, April 2012. MRS Online Proceedings Library. [29] M. Ben Belgacem, B. Chopard, and A. Parmigiani. Coupling method for building a network of irrigation canals on a distributed computing environment. In Georgios Sirakoulis and Stefania Bandini, editors, Cellular Automata, volume 7495 of Lecture Notes in Computer Science, pages 309– 318. Springer Berlin / Heidelberg, 2012. [30] WHO; World Heart Federation; World Stroke Organization, editor. Global atlas on cardiovascular disease prevention and control. Policies, strategies and interventions. World Health Organization, 2012. [31] A. Moustapha, A. R. Assali, S. Sdringola, W. K. Vaughn, R. D. Fish, O. Rosales, G. Schroth, Z. Krajcer, R. W. Smalling, and H. V. Anderson. Percutaneous and surgical interventions for in-stent restenosis: long-term outcomes and effect of diabetes mellitus. Journal of the American College of Cardiology, 37(7):1877–1882, June 2001. [32] A. Kastrati, D. Hall, and A. Schömig. Long-term outcome after coronary stenting. Current Controlled Trials in Cardiovascular Medicine, 1(1):48–54, 2000. [33] J. W. Jukema, J. J. W. Verschuren, T. A. N. Ahmed, and P. H. A. Quax. Restenosis after PCI. Part 1: pathophysiology and risk factors. 
Nature Reviews Cardiology, 9(1):53–62, September 2011. [34] Y. Shi, P. Lawford, R. Hose, et al. Review of zero-d and 1-d models of blood flow in the cardiovascular system. Biomedical engineering online, 10:33, 2011. 16 [35] F.N. van de Vosse and N. Stergiopulos. Pulse wave propagation in the arterial tree. Annual Review of Fluid Mechanics, 43:467–499, 2011. [36] K. Sughimoto, F. Liang, Y. Takahara, K. Yamazaki, H. Senzaki, S. Takagi, and H. Liu. Assessment of cardiovascular function by combining clinical data with a computational model of the cardiovascular system. The Journal of Thoracic and Cardiovascular Surgery (in press), 2012. [37] J. Alastruey, S. M. Moore, K. H. Parker, T. David, J. Peir, and S. J. Sherwin. Reduced modelling of blood flow in the cerebral circulation: Coupling 1-d, 0-d and cerebral auto-regulation models. International Journal for Numerical Methods in Fluids, 56(8):1061–1067, 2008. [38] M. D Mazzeo and P. V. Coveney. HemeLB: A high performance parallel lattice-Boltzmann code for large scale fluid flow in complex geometries. Computer Physics Communications, 178(12):894–914, 2008. [39] H. B. Carver, R. W. Nash, M. O. Bernabeu, J. Hetherington, D. Groen, T. Krüger, and P. V. Coveney. Choice of boundary condition and collision operator for lattice-Boltzmann simulation of moderate Reynolds number flow in complex domains. Submitted to Phys Rev. E, November 2012. [40] L. Botti, M. Piccinelli, B. Ene-Iordache, A. Remuzzi, and L. Antiga. An adaptive mesh refinement solver for large-scale simulation of biological flows. International Journal for Numerical Methods in Biomedical Engineering, 26(1):86–100, 2010. [41] D. Groen, J. Hetherington, H. B. Carver, R. W. Nash, M. O. Bernabeu, and P. V. Coveney. Analyzing and modeling the performance of the HemeLB lattice-boltzmann simulation environment. ArXiv e-prints arXiv:1209.3972, 2012. [42] G. Mulder, A. C. B. Bogaerds, P. Rongen, and F. N. van de Vosse. The influence of contrast agent injection on physiological flow in the circle of willis. Medical Engineering and Physics, 33(2):195 – 203, 2011. [43] H. B. Carver, D. Groen, J. Hetherington, R. W. Nash, M. O. Bernabeu, and P. V. Coveney. Coalesced communication: a design pattern for complex parallel scientific software. ArXiv e-prints arXiv:1210.4400, October 2012. [44] M. O. Bernabeu, R. W. Nash, D. Groen, H. B. Carver, J. Hetherington, T. Krüger, and P. V. Coveney. Choice of blood rheology model has minor impact on computational assessment of shear stress mediated vascular risk. ArXiv e-prints arXiv:1211.5292, accepted by Interface Focus, November 2012. 17
arXiv:1710.09298v1 [] 25 Oct 2017 GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS MESUT ŞAHİN AND LEAH GOLD STELLA Abstract. We study strong indispensability of minimal free resolutions of semigroup rings. We focus on two operations, gluing and extending, used in literature to produce more examples with a special property from the existing ones. We give a naive condition to determine whether gluing of two semigroup rings has a strongly indispensable minimal free resolution. As applications, we determine extensions of 3-generated non-symmetric, 4-generated symmetric and pseudo symmetric numerical semigroups as well as obtain infinitely many complete intersection semigroups of any embedding dimension, having strongly indispensable minimal free resolutions. 1. introduction Let N denote the set of non-negative integers and consider the affine semigroup S generated minimally by m1 , . . . , mn ∈ Nr . Turning the additive structure of S into a multiplicative one yields an algebra K[S], for a field K, called the affine semigroup ring associated to S. Any polynomial ring A = K[x1 , . . . , xn ], can be graded by S, via degS (xi ) = mi , yielding a graded ir map A → K[S], sending xi to tmi := t1mi1 · · · tm r , whose kernel denoted by IS is called the toric ideal of S. Then, K[S] is isomorphic to the coordinate ring A/IS of the affine toric variety V (IS ). Toric ideals generated by indispensable binomials are of special importance for some emerging problems arising from Algebraic Statistics, as they have a unique minimal generating set. This leads to a search for criteria to characterize indispensability, see e.g. [5, 6, 9, 12, 15, 21]. Indispensable binomials are constant multiples of those binomials that appear in every minimal binomial generating set. Strongly indispensable binomials are those whose constant multiples are present in every (non necessarily binomial) minimal generating set. In the same vein, as introduced for the first time in [3, 4], indispensable (rest. strongly indispensable) higher syzygies are those that are present in every (resp. non-necessarily) simple minimal free resolution. Semigroups all of whose higher syzygy modules are generated minimally by strongly indispensable elements are said to have a strongly indispensable minimal free resolution, SIFRE for short. The statistical models having Date: October 26, 2017. The first author is supported by the project 114F094 under the program 1001 of the Scientific and Technological Research Council of Turkey. 1 2 MESUT ŞAHİN AND LEAH GOLD STELLA SIFREs or equivalently having uniquely generated higher syzygy modules are a subclass of those having a unique Markov basis and therefore have a better potential statistical behaviour. It is difficult to construct examples having SIFRE. Generic lattice ideals have SIFRE by [16, Theorem 4.2] and [3, Theorem 4.9]. Numerical semigroups having SIFREs are classified in [2, 19] for some small embedding dimensions. In this article, we focus on two operations, gluing and extending, used in literature to produce more examples with a special property from the existing one, see e.g. [22, 13, 18, 14, 8]. In section 2, we rephrase the general method given by [3, Theorem 4.9] to check if a given semigroup has a SIFRE, see Lemma 2.1. In section 3, we study the gluing S of S1 and S2 . We show that a minimal graded free resolution for K[S] is obtained from that of K[S1 ] and K[S2 ] via tensor product of three complexes, see Theorem 3.2. 
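To fix ideas before stating the criterion, here is a small standard example, added for illustration and not taken from the works cited above. Let S = N{2, 3}, so that A = K[x, y] with degS(x) = 2 and degS(y) = 3, K[S] is isomorphic to K[t^2, t^3], and IS is generated by the single binomial x^3 − y^2 of S-degree 6. A minimal graded free resolution is 0 → A[−6] → A → K[S] → 0, so pd(S) = 1 and the only 1-Betti S-degree is 6. Since each homological position carries a single Betti degree, there are no differences of distinct i-Betti degrees to test, the condition of Lemma 2.1 below holds vacuously, and the resolution is strongly indispensable; this matches the fact that x^3 − y^2 is, up to a constant, the unique minimal generator of IS.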
As a consequence we get the Betti S-degrees, see Lemma 3.3, which is key for our refined criterion. We then give a naive criterion to determine whether K[S] has a SIFRE, see Theorem 3.5. This section generalizes the main results of [9], see Remark 3.6. We conclude the section with Example 3.7 illustrating the main advantage of our criterion over method given by [3, Theorem 4.9]. In the last section, we focus on a particular gluing also known as extension and get an even more refined criterion special for them. It turns out that this condition is very helpful for producing infinitely many examples having SIFRE from a single example. As applications, we determine extensions of 3-generated non-symmetric, 4-generated symmetric and pseudo symmetric numerical semigroups as well as obtain infinitely many complete intersection semigroups of any embedding dimension, having SIFREs. 2. strongly indispensable minimal free resolutions Motivated by the open questions listed at the end of [4], our main aim here is to refine Theorem 4.9 in [3] for checking strong indispensability of minimal free resolutions of affine semigroup rings. For a graded minimal free A-resolution φ φk−1 φ2 φ1 k F : 0 −→ Aβk −→ Aβk−1 −→ · · · −→ Aβ1 −→ Aβ0 −→K[S]−→0 of K[S], let Aβi be generated in degrees si,j ∈ S, which we call i-Betti Sβi M βi A[−si,j ]. The resolution (F, φ) is called strongly degrees, i.e. A = j=1 indispensable if for any graded minimal resolution (G, θ), we have an injective complex map i : (F, φ) −→ (G, θ). The following general criterion about strong indispensability is nothing but Theorem 4.9 in [3] stated slightly different. Denote by pd(S) the projective dimension of K[S]. GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 3 Lemma 2.1. A minimal graded free resolution of K[S] is strongly indispensable if and only if the differences between different i-Betti degrees do not belong to S for all 1 ≤ i ≤ pd(S). Proof. This follows from Theorem 4.9 in [3], as the differences between different i-Betti degrees do not belong to S means that i-Betti degrees are different and minimal.  When S is symmetric, it is sufficient to check the condition above for the first half of the indices. Lemma 2.2. If S is symmetric, then K[S] has a SIFRE if and only if the differences between different i-Betti degrees do not belong to S, for 1 ≤ i ≤ ⌊pd(S)/2⌋. Proof. The proof of [2, Lemma 21] extends from numerical semigroups to arbitrary affine semigroups, as it uses the symmetry in the minimal graded free resolution of K[S], which is true for any graded Gorenstein K-algebra by Stanley’s second proof of Theorem 4.1. in [20].  3. Gluing Strongly Indispensable Resolutions Let S1 = N{a1 , . . . , am } and S2 = N{b1 , . . . , bn } be two affine semigroups. If there is an α ∈ S1 ∩S2 such that ZS1 ∩ZS2 = Zα then S = S1 +S2 is said to be the gluing of S1 and S2 . When α = u1 a1 + · · · + um am = v1 b1 + · · · + vn bn , the binomial fα = xu1 1 · · · xumm − y1v1 · · · ynvn has S-degree α and the toric ideal IS = IS1 + IS2 + hfα i ⊂ R = K[x1 , . . . , xm , y1 , . . . , yn ], see [17]. Let φp φ2 φ1 F : 0 → Fp → · · · → F1 → F0 →0 be a minimal S1 graded minimal free resolution of IS1 with H0 (F) = R/IS1 , Φq Φ Φ G : 0 → Gq → · · · →2 G1 →1 G0 →0 be a minimal S2 graded minimal free resolution of IS2 with H0 (G) = R/IS2 . Our aim is to compute a minimal S graded minimal free resolution of IS using the complexes F and G. Since IS = IS1 + IS2 + hfα i the idea is to fα tensor these complexes and the complex Cfα : 0 → R −→ R → 0. 
This shall work if fα is a non zerodivisor on R/(IS1 + IS2 ) which we address first. Lemma 3.1. The gluing binomial fα is a non zerodivisor on R/(IS1 + IS2 ). Proof. For let fα = xu −yv = xu1 1 · · · xumm −y1v1 · · · ynvn . Pnotationalz convenience, Let g = z,w cz,w x yw ∈ R with gfα ∈ IS1 + IS2 . As IS1 + IS2 is gener′ ′ ated by binomials of P the form xz − xz and yw − yw , these appear in the z+u yw − xz yw+v ). In other words, each expansion of gfα = z,w cz,w (x ′ ′ ′ ′ monomial xz+u yw has a match of type xz +u yw or xz yw +v such that ′ ′ ′ ′ ′ ′ xz+u yw −xz +u yw or xz+u yw −xz yw +v is divisible by xz −xz or yw −yw . 4 MESUT ŞAHİN AND LEAH GOLD STELLA In the first case, this is possible only if z = z′ or w = w′ . If z = z′ , then ′ ′ ′ ′ xz+u yw − xz +u yw = xz+u (yw − yw ) = xu (xz yw − xz yw ). This means ′ ′ that the term xz yw of g has a match xz yw such that xz yw − xz yw is ′ divisible by yw − yw . One can prove similarly that this happens for the other cases. Hence, terms in g may be rearranged so that it is an algebraic ′ ′ combination of binomials xz − xz and yw − yw , that is g ∈ IS1 + IS2 .  We are now ready to prove the following key result. Theorem 3.2. Let S be the gluing of S1 and S2 . If F is a minimal S1 graded minimal free resolution of IS1 and G is a minimal S2 graded minimal free resolution of IS2 , then Cfα ⊗ F ⊗ G is an S graded minimal free resolution of IS . Proof. Recall that the tensor product of F and G is a complex δp+q δ δ 2 1 F ⊗ G : 0 −→ Fp ⊗ Gq −→ · · · −→ F1 ⊗ G0 ⊕ F0 ⊗ G1 −→ F0 ⊗ G0 −→0 with terms (F ⊗ G)i = ⊕p+q=i Fp ⊗ Gq and maps given by X X δi ( ap ⊗ bq ) = φp (ap ) ⊗ bq + (−1)p ap ⊗ Φq (bq ). p+q=i p+q=i It is well known that Hi (F ⊗ G) = Hi (F ⊗ R/IS2 ) where F ⊗ R/IS2 is the complex ∆p ∆ ∆ 0 → Fp ⊗ R/IS2 → · · · →2 F1 ⊗ R/IS2 →1 F0 ⊗ R/IS2 →0, where ∆i (ai ⊗ b) = φi (ap ) ⊗ b. It is easy to see that Ker(∆i ) = [Ker(φi ) ⊗ R/IS2 ] ∪ ri−1 M −1 IS2 ) [(φi ( j=1 ⊗ R/IS2 ], where ri−1 = rank(Fi−1 ), and Im(∆i ) = Im(φi ) ⊗ R/IS2 . Since, Im(φi ) involves the variables xj only and IS2 involves the variables yk only, it follows ri−1 that φ−1 i (⊕j=1 IS2 ) = {0}. Thus, Hi (F ⊗ G) = Hi (F ⊗ R/IS2 ) = 0, for all i > 0. Since H0 (F ⊗ G) = R/IS1 ⊗ R/IS2 ∼ = R/(IS1 + IS2 ), it follows that F ⊗ G is an S graded minimal free resolution of R/(IS1 + IS2 ). Let f = fα for notational convenience. As before, Hi (Cf ⊗ F ⊗ G) = Hi (Cf ⊗ R/(IS1 + IS2 )), where Cf ⊗ R/(IS1 + IS2 ) is f ⊗1 0 → R ⊗ R/(IS1 + IS2 ) −→ R ⊗ R/(IS1 + IS2 ) → 0. Note that H1 (Cf ⊗ R/(IS1 + IS2 )) = ((IS1 + IS2 ) : f ) ⊗ R/(IS1 + IS2 ) = {0} as f is a non zerodivisor on R/(IS1 + IS2 ). Since H0 (Cf ⊗ R/(IS1 + IS2 )) ∼ = R/(IS1 + IS2 + hf i), it follows that Cf ⊗ F ⊗ G gives an S graded minimal free resolution of IS .  Denote by Bi (S) the set of i−Betti S-degrees of a minimal free resolution of K[S] for 1 ≤ i ≤ pd(S) and let Bi (S) = {0} otherwise. GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 5 Lemma 3.3. Let S be the gluing of S1 and S2 . Then,     [ [ Bp (S1 ) + Bq (S2 ) + {α} . Bp (S1 ) + Bq (S2 ) ∪  Bi (S) =  p+q=i−1 p+q=i Proof. By Theorem 3.2, Cfα ⊗ F ⊗ G is an S graded minimal free resolution of IS . Hence, the proof follows from the following M M R ⊗ Fp ⊗ Gq + R(−α) ⊗ Fp ⊗ Gq , (Cfα ⊗ F ⊗ G)i = p+q=i p+q=i−1 since S degrees of elements in Fp ⊗Gq constitute the set Bp (S1 )+Bq (S2 ).  We use the following simple observation in the proof of our main result. Lemma 3.4. Let S be the gluing of S1 and S2 . Fix j ∈ {1, 2}, and b, b′ ∈ Sj . 
Then, b − b′ ∈ Sj ⇐⇒ b − b′ ∈ S. Proof. WLOG, assume that j = 1. As S1 ⊂ S, b − b′ ∈ S1 ⇒ b − b′ ∈ S. For the converse, take b, b′ ∈ S1 with b − b′ ∈ S. Then, b − b′ = s1 + s2 , for some s1 ∈ S1 and s2 ∈ S2 , since S = S1 + S2 . So, s2 = b − b′ − s1 ∈ ZS1 . Since ZS1 ∩ ZS2 = Zα and s2 ∈ S2 , we have s2 = kα for a positive integer k. Hence, b − b′ = s1 + kα ∈ S1 .  We are now ready to prove our main result which gives a practical method to produce infinitely many affine semigroups having SIFRE. Theorem 3.5. Let bi,j denotes an element of Bi (Sj ) for i = 1, . . . , pd(Sj ), j = 1, 2. Then, IS has SIFRE ⇐⇒ IS1 and IS2 have SIFRE and the following hold (1) ∓(α + bi−1,j − bi,j ) ∈ / Sj , (2) ∓(bp,1 + bq,2 − b′r,1 − b′s,2 ) ∈ / S, for p − r ≥ 2, where p + q = i = r + s, ′ ′ / S for p − r ≥ 2, where p + q = i = (3) ∓(bp,1 + bq,2 − br,1 − bs,2 − α) ∈ r + s + 1. Proof. Let us prove necessity first. By Lemma 2.1, the differences between the elements in Bi (S) do not belong to S. Let bi,j , b′i,j ∈ Bi (Sj ), for j = 1, 2. By Lemma 3.3, we have Bi (Sj ) ⊂ Bi (S), and thus bi,j −b′i,j ∈ / S. This implies bi,j − b′i,j ∈ / Sj , by Lemma 3.4, which means that IS1 and IS2 have SIFRE by the virtue of Lemma 2.1. As the elements in the conditions (1)-(3) are the differences of some elements in Bi (S), they do not belong to S. So, (2)-(3) hold. Lemma 3.4 implies (1) now. Let us prove sufficiency now. If b, b′ ∈ Bi (S), then there are three possibilities due to Lemma 3.3: (i) b, b′ ∈ Bp (S1 ) + Bq (S2 ), for p + q = i, (ii) b, b′ ∈ Bp (S1 ) + Bq (S2 ) + α, for p + q = i − 1, (iii) b ∈ Bp (S1 ) + Bq (S2 ), b′ ∈ Br (S1 ) + Bs (S2 ) + α, for p + q = i = r + s + 1. Case (i): Let b = bp,1 + bq,2 and b′ = b′r,1 + b′s,2 with p + q = i = r + s. Suppose now that b − b′ ∈ S. Then, b − b′ = bp,1 + bq,2 − b′r,1 − b′s,2 = s1 + s2 , 6 MESUT ŞAHİN AND LEAH GOLD STELLA for some s1 ∈ S1 and s2 ∈ S2 . Thus, bp,1 − b′r,1 − s1 = s2 − bq,2 + b′s,2 = kα being an element of ZS1 ∩ ZS2 = Zα. By the condition (2), we need only to check the difference for p = r and p = r + 1. When p = r, it follows that bp,1 − b′p,1 = s1 + kα ∈ S1 if k ≥ 0, and that bq,2 − b′q,2 = s2 + (−k)α ∈ S2 if k < 0, contradicting to hypothesis by Lemma 2.1. When p = r + 1, it follows that br+1,1 − b′r,1 − s1 = s2 − bq,2 + b′q+1,2 = kα. Since the resolution of IS2 is S2 -graded, there is s′2 ∈ S2 such that b′q+1,2 = bq,2 + s′2 . So, kα = s2 + s′2 ∈ S2 . Since S2 ∩ (−S2 ) = {0}, we have k > 0. But then, br+1,1 − b′r,1 − α = s1 + (k − 1)α ∈ S1 , contradicts to the condition (1). Case (ii): follows from Case (i). Case (iii): Let b = bp,1 +bq,2 and b′ = b′r,1 +b′s,2 +α with p+q = i = r+s+1. Suppose that b−b′ ∈ S. Then, b−b′ = bp,1 +bq,2 −b′r,1 −b′s,2 −α = s1 +s2 , for some s1 ∈ S1 and s2 ∈ S2 . It follows that bp,1 −b′r,1 −s1 = s2 −bq,2 +b′s,2 +α = kα, for some k ∈ Z. By the condition (3), we need only to check the difference for p = r and p = r + 1. When p = r, we have br,1 − b′r,1 = s1 + kα ∈ S1 if k > 0, and bs+1,2 − ′ bs,2 − α = s2 + (−k)α ∈ S2 if k ≤ 0, which give rise to contradictions. When p = r + 1, we have br+1,1 − b′r,1 − α = s1 + (k − 1)α ∈ S1 if k > 0, and bs,2 − b′s,2 = s2 + (−k)α ∈ S2 if k ≤ 0, which give rise to contradictions. One proves that b′ − b ∈ / S, similarly.  Remark 3.6. This section generalizes the main results of [9] as we briefly explain now. Our Lemma 3.3 specializes to B1 (S) = B1 (S1 ) ∪ B1 (S2 ) ∪ {α}, which is exactly [9, Theorem 10]. 
Furthermore the condition (1) in our Theorem 3.5, specializes to ∓(α − b1,j ) ∈ / Sj , since b0,j = 0, for j = 1, 2. This is exactly the condition in [9, Theorem 12] by the virtue of Lemma 3.4. The following illustrates the advantage of our criterion over the general method given by [3, Theorem 4.9]. Example 3.7. S1 = hq · 31, q · 37, q · 41i, S2 = hp · 4, p · 5i has SIFRE. But it is difficult to determine p and q for which S = S1 + S2 has SIFRE using [3, Theorem 4.9] since it requires checking if 68 elements does not lie in S for every choice of p and q, and gaps of S depend on p and q. On the other hand, the conditions (2) − (3) of Theorem 3.5 hold automatically and it is sufficient to check if 15 elements do not belong to one of h31, 37, 41i and h4, 5i. Note that this time everything is independent of p and q. One can use gaps of h31, 37, 41i and h4, 5i to see only q = 109 or q = 150 yield (1) to holds. When q = 109 taking p = 9 and for q = 150 assuming p = 13 produces gluings with SIFREs. Similarly one can see using our criterion that h6, 7, 10i or h8, 9, 11i does not give rise to a gluing with SIFRE. GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 7 4. Extending Strongly Indispensable Resolutions We determine some semigroups having SIFREs in this section. We focus on a particular case of gluing where the second semigroup is generated by a single element. These semigroups are known as extensions in literature. Given an affine semigroup S generated minimally by m1 , . . . , mn , recall that an extension of S is an affine semigroup denoted by E and generated minimally by ℓm1 , . . . , ℓmn and m, where ℓ is a positive integer coprime to a component of m = u1 m1 + · · · + un mn for some non-negative integers u1 , . . . , un . Note that E is the gluing of S1 = ℓS and S2 = N{m}, with α = ℓm. Theorem 4.1. K[E] has a SIFRE if and only if K[S] has a SIFRE and ∓(m + b′ − b) ∈ / S, for all b ∈ Bi (S), b′ ∈ Bi−1 (S), and 1 ≤ i ≤ pd(S) + 1. Proof. K[E] has a SIFRE ⇐⇒ e − e′ ∈ / E for all e, e′ ∈ Bi (E) by Lemma 2.1. It follows from Lemma 3.3 that Bi (E) = ℓBi (S) ∪ ℓ[Bi−1 (S) + m], and so we have three possibilities if e, e′ ∈ Bi (E): (1) e, e′ ∈ ℓBi (S), (2) e, e′ ∈ ℓ[Bi−1 (S) + m], (3) e ∈ ℓBi (S) and e′ ∈ ℓ[Bi−1 (S) + m]. In the first two cases e − e′ = ℓ(b − b′ ) ∈ / E ⇐⇒ b − b′ ∈ / S, by Lemma 3.4, which is equivalent to K[S] having a SIFRE by Lemma 2.1. In the last one, ∓(e − e′ ) = ∓ℓ(b − b′ − m) ∈ / E ⇐⇒ ∓(b − b′ − m) ∈ / S, which completes the proof.  4.1. Symmetric affine semigroups. As another application, we obtain infinitely many complete intersection semigroup rings with SIFRE. When E is symmetric (or equivalently S is symmetric), it is sufficient to check the condition above for the first half of the indices. Corollary 4.2. If E is symmetric, then K[E] has a SIFRE if and only if K[S] has a SIFRE and ∓(m + b′ − b) ∈ / S, for b ∈ Bi (S), b′ ∈ Bi−1 (S), and 1 ≤ i ≤ ⌊pd(E)/2⌋. Proof. The proof mimics the proof of Theorem 4.1 applying this time Lemma 2.2 instead of Lemma 2.1.  Let n > 1 and ei denote the canonical basis of Nn , for i = 1, . . . , n. Let u1 , . . . , un be some positive integers and S be the semigroup generated minimally by u1 e1 , . . . , un en . It is clear that IS = (0) and thus K[S] = K[x1 , . . . , xn ] has a SIFRE. Fix a = (−u1 , u2 , . . . , un ), a0 = (0, u2 , . . . , un ) ∈ S and consider the extensions of S defined recursively as follows: • E1 = 2S + N{a1 }, where a1 = 2a0 − a = (u1 , u2 , . . . 
, un ) ∈ S, and • Ej = 2Ej−1 + N{aj }, where aj = aj−1 + 2aj−2 ∈ Ej−1 , for j ≥ 2. Proposition 4.3. With the notations above, we have 8 MESUT ŞAHİN AND LEAH GOLD STELLA (1) aj − 2aj−1 = (−1)j a, for all j ≥ 1, (2) aj + b′ − b = ua, for some u ∈ Z − {0} and for all b′ ∈ Bi−1 (Ej−1 ), b ∈ Bi (Ej−1 ), where j ≥ 2 and 1 ≤ i ≤ ⌊j/2⌋, (3) K[Ej ] has a SIFRE for all j ≥ 1. Proof. We use induction on j in all items. (1) The claim follows from the definition of a1 = 2a0 − a when j = 1. Assuming that the claim is true for j = p − 1, we have ap − 2ap−1 = −(ap−1 − 2ap−2 ) = −(−1)p−1 a = (−1)p a, since ap = ap−1 + 2ap−2 , for all p ≥ 2. (2) When j = 2 and 1 ≤ i ≤ ⌊j/2⌋ = 1, we have a2 + b′ − b = a2 − 2a1 = a, by (1), for all b′ ∈ B0 (E1 ) = {0}, b ∈ B1 (E1 ) = {2a1 }, as IE1 is a principal ideal generated by y 2 − xa11 · · · xann of E1 -degree 2a1 . Assume now that the claim is true for all indices 3 ≤ j ≤ p − 1. We need to study ap + b′ − b, for all b′ ∈ Bi−1 (Ep−1 ), b ∈ Bi (Ep−1 ), where p ≥ 4 and 1 ≤ i ≤ ⌊p/2⌋. There are four cases to consider since by Lemma 3.3, Bi (Ep−1 ) = 2Bi (Ep−2 ) ∪ 2[Bi−1 (Ep−2 ) + ap−1 ]: Case (i): b′ = 2c′ , c′ ∈ Bi−1 (Ep−2 ) and b = 2c, c ∈ Bi (Ep−2 ). In this case, ap + b′ − b = ap − 2ap−1 + 2(ap−1 + c′ − c) = (−1)p a + 2ua, for some u ∈ Z − {0}, by induction hypothesis. Case (ii): b′ = 2(c′ + ap−1 ), c′ ∈ Bi−2 (Ep−2 ) and b = 2(c + ap−1 ), c ∈ Bi−1 (Ep−2 ). In this case, ap + b′ − b = ap − 2ap−1 + 2(ap−1 + c′ − c) = (−1)p a + 2ua, for some u ∈ Z − {0}, by induction hypothesis. Case (iii): b′ = 2(c′ + ap−1 ), c′ ∈ Bi−2 (Ep−2 ) and b = 2c, c ∈ Bi (Ep−2 ). Taking d ∈ Bi−1 (Ep−2 ) in this case, we have ap + b′ − b = ap −2ap−1 +2(ap−1 +c′ −d)+2(ap−1 +d−c) = (−1)p a+2u1 a+2u2 a, for some u1 , u2 ∈ Z − {0}, by induction hypothesis. Case (iv): b′ = 2c′ , c′ ∈ Bi−1 (Ep−2 ) and b = 2(c + ap−1 ), c ∈ Bi−1 (Ep−2 ). So, ap + b′ − b = ap − 2ap−1 + 2(c′ − c) = (−1)p a + 2(c′ − c). Note that the proof will be complete if we show that c′ − c = va, for some v ∈ Z. Let us prove this by verifying the claim that c′ − c = va, for some v ∈ Z, and for all c, c′ ∈ Bi−1 (Eq ), 1 ≤ i ≤ ⌊p/2⌋, using induction on 1 ≤ q ≤ p − 2. For q = 1, the claim is trivial with v = 0 as i = 1 and B0 (E1 ) = {0}. Assume now that it is true for q = r − 1, and consider c, c′ ∈ Bi−1 (Er ). By Lemma 3.3, we have three possibilities as as before and in two of them c′ − c = 2(d′ − d), for either d′ , d ∈ Bi−1 (Er−1 ) or d′ , d ∈ Bi−2 (Er−1 ). So, we are done by induction hypothesis on q. In the third one, c′ − c = 2(ap + d′ − d) = 2ua, by induction hypothesis on p, where d ∈ Bi−1 (Er−1 ) or d′ ∈ Bi−2 (Er−1 ). (3) As IE1 is a principal ideal, the projective dimension of K[E1 ] is 1 and B0 (E1 ) = {0} and B1 (E1 ) = {2a1 }. So, when j = 1, there is nothing to check in Corollary 4.2 as K[S] has a SIFRE and ⌊j/2⌋ = 0. So, K[E1 ] has a SIFRE. Assume that the claim is true for j = p − 1, so K[Ep−1 ] has a SIFRE for all p ≥ 2. We first note that the projective GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 9 dimension of K[Ep ] is p, by Theorem 3.2. So, we need to verify that ap + b′ − b ∈ / Ep−1 , for all b′ ∈ Bi−1 (Ep−1 ), b ∈ Bi (Ep−1 ), where p ≥ 2 and 1 ≤ i ≤ ⌊p/2⌋, which is true by (2) as Ep−1 ⊂ Nn and ua ∈ / Nn , for u ∈ Z − {0}. So, K[Ej ] has a SIFRE for j ≥ 1.  4.2. Numerical semigroups. In this section, we characterize extensions of some numerical semigroups having SIFRE. 
As we classified extensions of 3-generated symmetric numerical semigroups in [2, Theorem 25], we start with 3-generated non-symmetric numerical semigroups here. It is known that they have SIFREs, see [2, Example 20]. As a first application, we determine their extensions which have SIFREs, using the following results. Theorem 4.4. [11, Proposition 3.2] Let αp be the smallest positive integer such that αp mp = αpq mq + αpr mr , for some αpq , αpr ∈ N, where {p, q, r} = {1, 2, 3}. Then S = hm1 , m2 , m3 i is 3-generated not symmetric if and only if αpq > 0 for all p, q, and αqp + αrp = αp , for all {p, q, r} = {1, 2, 3}. Then K[S] = A/(f1 , f2 , f3 ), where f1 = xα1 1 − xα2 12 xα3 13 , f2 = xα2 2 − xα1 21 xα3 23 , f3 = xα3 3 − xα1 31 xα2 32 . Theorem 4.5. [7, Lemma 2.5] If S is a 3-generated semigroup which is not symmetric then K[S] has a minimal graded free A-resolution φ2 φ1 0 −→ A2 −→ A3 −→ A−→K[S] −→ 0,  α23  x3 xα2 32 where φ1 = (f1 f2 f3 ), and φ2 =  xα1 31 xα3 13  . xα2 12 xα1 21 Remark 4.6. As the resolution above is graded, B1 (S) = {d1 , d2 , d3 }, where dp = αp mp = αpq mq + αpr mr , for all p, q, r ∈ {1, 2, 3}. Since the entries in φ1 φ2 = 0 are S-homogeneous, we have B2 (S) = {b1 , b2 }, where b1 = α23 m3 + d1 = α31 m1 + d2 = α12 m2 + d3 , b2 = α32 m2 + d1 = α13 m3 + d2 = α21 m1 + d3 . Theorem 4.7. Let S = hm1 , m2 , m3 i be a non-symmetric numerical semigroup and E be an extension of S, where m = u1 m1 + u2 m2 + u3 m3 . Then, K[E] has a SIFRE if and only if 0 < up < min{αqp , αrp } for all {p, q, r} = {1, 2, 3}. In particular, S does not have an extension with a SIFRE if and only if αpq = 1 for some p, q ∈ {1, 2, 3}. Proof. If K[E] has a SIFRE, then m + di − bj ∈ / S and di − m ∈ / S, for all i, j ∈ {1, 2, 3} by Theorem 4.1. By Remark 4.6, there are i, j ∈ {1, 2, 3} such that m + di − bj = (up − αqp )mp + uq mq + ur mr . When, up ≥ αqp for some p, q ∈ {1, 2, 3}, m + di − bj ∈ S, a contradiction. Thus, up < αqp for all p, q ∈ {1, 2, 3}. If ui = 0, for some i, then di −m = (αip −up )mp +(αiq −uq )mq ∈ S, a contradiction. 10 MESUT ŞAHİN AND LEAH GOLD STELLA Conversely, assume that 0 < up < min{αqp , αrp } for all {p, q, r} = {1, 2, 3}. We claim m − dp = up mp + (uq − αpq )mq + (ur − αpr )mr ∈ / S. If not, m − dp = vp mp + vq mq + vr mr , for some vi ∈ N, and so (up − vp )mp = (uq + vq − αpq )mq + (ur + vr − αpr )mr > 0 which contradicts to αp being the smallest positive integer with this property. Next, we prove dp −m = (αp −up )mp −uq mq −ur mr ∈ / S. If not, dp −m = vp mp +vq mq +vr mr , for some vi ∈ N, and so (αp − up − vp )mp = (uq + vq )mq + (ur + vr )mr > 0, which contradicts to αp being the smallest positive integer with this property. By [2, Corollary 11], P F (S) = {b1 −N, b2 −N }, where N = m1 +m2 +m3 , are the pseudo-Frobeneous elements of S. In particular, bi − N ∈ / S. This implies that bi − m − dj ∈ / S, since bi − N = (bi − m − dj ) + (αp + up − 1)mp + (uq − 1)mq + (ur − 1)mr . Finally, by Remark 4.6, for all i, j there are p, q, r such that m + di − bj = (up + αip )mp + (uq − αpq )mq + (ur − αpr )mr . If m + di − bj = vp mp + vq mq + vr mr , for some vi ∈ N, then (up + αip − vp )mp = (vq + αpq − uq )mq + (vr + αpr − ur )mr > 0, which implies that up + αip − vp ≥ αp , as αp is the smallest with this property. But this contradicts to the assumption up < min{αqp , αrp }.  Example 4.8. Take S = h7, 9, 10i. Then K[S] = A/(f1 , f2 , f3 ), where f1 = x41 − x22 x3 , f2 = x32 − x1 x23 , f3 = x33 − x31 x2 . Since α13 = 1, no extension of S will have a SIFRE. 
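The membership checks in Example 3.7 and in the example above all reduce to deciding whether a given integer lies in a numerical semigroup. The short Python sketch below is only an illustration (a hypothetical helper, not part of the paper; the examples themselves were computed with Macaulay 2): it verifies the relations behind f1, f2, f3 for S = <7, 9, 10> and lists a few gaps of S.

```python
# Minimal sketch: brute-force membership test for a numerical semigroup, the
# basic operation behind conditions such as "m + d_i - b_j does not lie in S".
from functools import lru_cache

def in_semigroup(n, gens):
    """True iff n is a non-negative integer combination of the generators."""
    @lru_cache(maxsize=None)
    def rec(k):
        return k == 0 or (k > 0 and any(rec(k - g) for g in gens))
    return rec(n)

gens = (7, 9, 10)                      # Example 4.8
assert 4 * 7 == 2 * 9 + 1 * 10         # f1: alpha_1 = 4, alpha_12 = 2, alpha_13 = 1
assert 3 * 9 == 1 * 7 + 2 * 10         # f2: alpha_2 = 3, alpha_21 = 1, alpha_23 = 2
assert 3 * 10 == 3 * 7 + 1 * 9         # f3: alpha_3 = 3, alpha_31 = 3, alpha_32 = 1
# min(alpha_13, alpha_23) = 1, so Theorem 4.7 admits no u_3 with 0 < u_3 < 1.
print([n for n in range(1, 25) if not in_semigroup(n, gens)])  # gaps of S below 25
```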
On the other hand, the following semigroups will lead to infinitely many families of extensions having SIFRE. For S = h31, 37, 41i, Macaulay 2 computes the following generators f1 = x91 − x22 x53 , f2 = x52 − x21 x33 , f3 = x83 − x71 x32 . So, u1 = 1, u2 = 1 and 1 ≤ u3 ≤ 2 give m = 109 and m = 150. Hence, E = h31ℓ, 37ℓ, 41ℓ, 109i and E = h31ℓ, 37ℓ, 41ℓ, 109i have SIFREs, for any ℓ, with gcd(ℓ, 109) = 1 and with gcd(ℓ, 150) = 1, respectively. Similarly, S = h67, 91, 93i leads to 6 and S = h71, 93, 121i leads to 14 different infinite families having SIFREs. Now, we study extensions of a symmetric 4-generated not complete intersection numerical semigroup using the following. Theorem 4.9. [1, Theorem 5, Theorem 3] The semigroup S is 4-generated symmetric, not complete intersection, if and only if there are integers αi and αij , such that 0 < αij < αi , for all i, j, α1 = α21 + α31 , α2 = α32 + α42 , α3 = α13 + α43 , α4 = α14 + α24 and m1 = α2 α3 α14 + α32 α13 α24 , m3 = α1 α4 α32 + α14 α42 α31 , m2 = α3 α4 α21 + α31 α43 α24 , m4 = α1 α2 α43 + α42 α21 α13 . GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 11 Then, K[S] = A/(f1 , f2 , f3 , f4 , f5 ), where f1 = xα1 1 − xα3 13 xα4 14 , f4 = xα4 4 − xα2 42 xα3 43 , f2 = xα2 2 − xα1 21 xα4 24 , f3 = xα3 3 − xα1 31 xα2 32 , f5 = xα3 43 xα1 21 − xα2 32 xα4 14 . If S = hm1 , m2 , m3 , m4 i is a symmetric not complete intersection numerical semigroup then K[S] has a SIFRE by [2, Theorem 27]. Let E be an extension of S with m = u1 m1 + u2 m2 + u3 m3 + u4 m4 . Then, we have the following result. Theorem 4.10. K[E] has a SIFRE if and only if up < min{αqp , αrp } for all p, q, r ∈ {1, 2, 3, 4} and at most one up = 0 such that u1 u2 u3 u4 =0 =0 =0 =0 =⇒ =⇒ =⇒ =⇒ α32 − α42 α43 − α13 α31 − α21 α21 − α31 < u2 < u3 < u1 < u1 or or or or α13 − α43 α24 − α14 α14 − α24 α42 − α32 < u3 , < u4 , < u4 , < u2 . Proof. We use [2, Corollary 13] and Theorem 4.9 for all the relations involving ai and di , where di = degS (fi ), and ai is the S-degree of a first syzygy, for i = 1, . . . , 5. We first prove the necessity of these conditions. Assume K[E] has a SIFRE. Then by Corollary 4.2, m + dj − ak ∈ / S and di − m ∈ / S, Given αpi there X are j, k with dj − ak = −αpi mi . So, uq mq + (ui − αpi )mi ∈ S. Thereif ui ≥ αpi , then m + dj − ak = q6=i fore, up < min{αqp , αrp } for all {p, q, r} = {1, 2, 3, 4}. If up = uq = 0, then di − m = (αjr − ur )mr + (αts − us )ms − up mp − uq mq ∈ S. So, at most one up = 0. If α32 − α42 ≥ u2 and α13 − α43 ≥ u3 when u1 = 0, then a4 − d4 − m = (α32 − α42 − u2 )m2 + (α13 − α43 − u3 )m3 + (α14 − u4 )m4 ∈ S. The others are shown similarly. Next, we prove sufficiency. Assume up < min{αqp , αrp } forPall {p, q, r} = {1, 2, 3, P4} and at most one up = 0. Then, di − m = vj mj implies di = (uj + vj )mj . Since fP is indispensable, there are only two monomials with S-degree di . So, i (uj + vj )mj must be αi mi or αpq mq + αrs ms . In any P case, at least two uj = 0, contradiction. So, m − di ∈ / S. If m − di = vj mj , then d := (ui − vi )mi + (uj − vj )mj = (uq − αpq − vq )mq + (us − αrs − vs )ms > 0. Since ui − vi < αi and uj − vj < αj , we get ui > vi and uj > vj by the minimality of αi and αj . But then d <S ds = αpi mi + αqj mj , which contradicts to the minimality of ds . So, m − di ∈ / S. For, i 6= P j and (i, j) ∈ / {(1, 3), (2, 4)}, we have a −d = α m . So, if a −d −m = vj m j , i j pq q i j P then (αpq − uq − vq )mq = j6=q (uj + vj )mj ≥ 0. Since αpq − uq − vq < αq , P all ui = 0 for i 6= q. So, ai − dj − m ∈ / S. 
If a1 − d3 − m = vj mj , then (α2 − u2 − v2 )m2 = (α13 + u3 + v3 )m3 + (u1 + v1 )m1 + (u4 + v4 )m4 > 0. By the minimality of α2 , u2 = v2 = 0 in which case we have a third monomial xu1 1 +v1 x3α13 +u3 +v3 xu4 4 +v4 of S-degree d2 , a contradiction to the indispensability of f2 . Similarly, a2 − d4 − m = α1 m1 − α42 m2 − m ∈ / S. To prove that ai − di − m ∈ / S, it suffices to see m + 2di − N ∈ S, 12 MESUT ŞAHİN AND LEAH GOLD STELLA as the Frobeneous number of S, which is the biggest integer not in S, is ai + dP i − N = (ai − di − m) + (m + 2di − N ) by [2, Corollary 14], where N = j mj . If all ui > 0 then m − N ∈ S, so m + 2di − N ∈ S. If up = 0 for some p ∈ {1, 2, 3, 4}, then m+2di −N = (di +m−mj −mq −mr )+(di −mp ) ∈ S, except for (i, p) ∈ {(4, 1), (1, 2), (2, 3), (3, 4)}. When u1 = 0, we have a4 − d4 − m = (αP 32 − α42 − u2 )m2 + (α13 − α43 − u3 )m3 + (α14 − u4 )m4 . If a4 − d4 − m = vj mj , then either α32 − α42 ≥ u2 or α13 − α43 ≥ u3 as 0 < α14 − u4 < α4 . If α32 − α42 ≥ u2 then α13 − α43 < u3 and thus we have (α32 − α42 − u2 − v2 )m2 + (α14 − u4 − v4 )m4 = v1 m1 + (α43 − α13 + u3 + v3 )m3 . This gives a binomial in IS of degree d with d <S d5 , a contradiction to the minimality of d5 . If α13 − α43 ≥ u3 , then we get similarly a binomial of S-degree less than d1 . The other cases can be done the same way. Finally, we prove that mP+ dj − ai ∈ / S. For i 6= j and (i, j) ∈ / {(1, 3), (2,P 4)}, we have m + dj − ai = p6=q up mp + (uq − αrq )mq . If m + dj − ai = vp mp ∈ S, then (ur − vr )mr + (us − vs )ms = (vp − up )mp + (αrq − uq + vq )mq , as the other cases contradicts to the minimality of some αt . But this gives a binomial in IS of S-degree less than dt = αpr mr + αqs ms , a contradiction. P Now, d1 − a1 = α31 m1 − α43 m3 − α24 m4 . If m + d1 − a1 = vj mj , then (u2 − v2 )m2 + (α31 + u1 − v1 )m1 = (α43 − u3 + v3 )m3 + (α24 − u4 + v4 )m4 > 0. If one term of the left hand side is negative then we get a contradiction to the minimality of α1 and α2 . So, this gives a binomial in IS of S-degree d. Only d3 may be less than d, so α32 ≤ u2 − v2 ≤ u2 , a contradiction. The rest is similar and we are done.  Remark 4.11. Using the formulas in Theorem 4.9, one can now produce infinitely many symmetric not complete intersection S = hm1 , m2 , m3 , m4 i having SIFRE. We classified 4-generated pseudo symmetric semigroups having SIFRE in [19]. The next result reveals that none of their extensions have a SIFRE. Theorem 4.12. Let S be a 4-generated pseudo symmetric semigroup and E be one of its extensions. Then K[E] does not have a SIFRE. Proof. By the proof of [19, Theorem 2.5], we have B1 (S) = {d1 , . . . , d6 }, B2 (S) = {b1 , . . . , b6 } and B3 (S) = {c1 , c2 }, where c1 = b1 + m4 = b2 + m1 = b4 + m3 = b5 + m2 . Thus, if m = u1 m1 + · · · + u4 m4 with uj > 0, then there is some i such that m + bi − c1 = m − mj ∈ S. The result now follows from Theorem 4.1.  Acknowledgements The author would like to thank Anargyros Katsabekis for his suggestions on the preliminary version of the paper. All the examples are computed by using the computer algebra system Macaulay 2, see [10]. GLUING SEMIGROUPS AND STRONGLY INDISPENSABLE FREE RESOLUTIONS 13 References 1. H. Bresinsky, Symmetric semigroups of integers generated by 4 elements, Manuscripta Math. 17 (1975), 205–219. 2. V. Barucci, R. Fröberg, M. Şahin, On free resolutions of some semigroup rings, Journal of Pure and Applied Algebra 218 (2014) 1107–1116. 3. H. Charalambous, A. 
Thoma, On simple A-multigraded minimal resolutions, Contemporary Mathematics, 33–44, 502, 2009. 4. H. Charalambous, A. Thoma, On the generalized Scarf complex of lattice ideals, Journal of Algebra, 1197–1211, 323, 2010. 5. H. Charalambous, A. Katsabekis and A. Thoma, Minimal systems of binomial generators and the indispensable complex of a toric Ideal, Proc. of Amer. Math. Soc., 135(2007), 3443–3451. 6. H. Charalambous, A. Thoma and M. Vladoiu, Binomial fibers and indispensable binomials, J. Symbolic Comput. 74 (2016), 578–591. 7. G. Denham, Short generating functions for some semigroup algebras, Electr. J. Combinatorics 10 (2003), R36. 8. L. Fel, Symmetric (not Complete Intersection) Semigroups Generated by Five Elements, 2016, http://arxiv.org/abs/1604.00577. 9. P. A. Garcia-Sanchez, I. Ojeda, Uniquely presented finitely generated commutative monoids, Pac. J. Math. 248, No. 1, 91–105 (2010). 10. D. Grayson, M. Stillman, Macaulay 2–A System for Computation in Algebraic Geometry and Commutative Algebra. http://www.math.uiuc.edu/Macaulay2. 11. J. Herzog, Generators and relations of abelian semigroups and semigroup rings, Manuscripta Math. 3 (1970), 175–193. 12. A. Katsabekis, I. Ojeda, An indispensable classification of monomial curves in A4 , Pac. J. Math. 268 (2014), no. 1, 95-116. 13. M. Morales, Noetherian symbolic blow-ups. J. Algebra 140 (1991), no. 1, 12–25. 14. T. Numata, A variation of gluing of numerical semigroups, Semigroup Forum, 93 (2016), no. 1, 152-160. 15. I. Ojeda, A. Vigneron-Tenorio, Indispensable binomials in semigroup ideals. Proc. Amer. Math. Soc. 138 (12), 2010, 4205–4216. 16. I. Peeva, B. Sturmfels, Generic lattice ideals, J. Amer. Math. Soc. 11 (1998) 363–373. 17. J.C. Rosales, On presentations of subsemigroups of Nn , Semigroup Forum 55 (1997) 152–159. 18. M. Şahin, Extensions of toric varieties, Electron. J. Comb. 18(1) (2011), P93. 19. M. Şahin, N. Şahin, On pseudo symmetric monomial curves, http://arxiv.org/abs/1507.01288. 20. R. P. Stanley, Hilbert functions of graded algebras, Adv. Math. 28 (1978), 57–83. 21. A. Takemura and S. Aoki, Some characterizations of minimal Markov basis for sampling from discrete conditional distributions, Ann. Inst. Statist. Math. 56:1 (2004), 1–17. 22. K. Watanabe, Some examples of one dimensional Gorenstein domains, Nagoya Math. J. 49 (1973) 101–109. Department of Mathematics, Hacettepe University, Beytepe, 06800, Ankara, Turkey E-mail address: [email protected] Department of Mathematics, Cleveland State University, Cleveland, OH, USA E-mail address: [email protected]
Population Fluctuation Promotes Cooperation in Networks Steve Miller1,* and Joshua Knowles1 1 University of Manchester, School of Computer Science - Machine Learning & Optimisation Group, UK arXiv:1407.8032v4 [cs.GT] 9 May 2016 * [email protected] ABSTRACT We consider the problem of explaining the emergence and evolution of cooperation in dynamic network-structured populations. Building on seminal work by Poncela et al., which shows how cooperation (in one-shot prisoner’s dilemma) is supported in growing populations by an evolutionary preferential attachment (EPA) model, we investigate the effect of fluctuations in the population size. We find that a fluctuating model – based on repeated population growth and truncation – is more robust than Poncela et al.’s in that cooperation flourishes for a wider variety of initial conditions. In terms of both the temptation to defect, and the types of strategies present in the founder network, the fluctuating population is found to lead more securely to cooperation. Further, we find that this model will also support the emergence of cooperation from pre-existing non-cooperative random networks. This model, like Poncela et al.’s, does not require agents to have memory, recognition of other agents, or other cognitive abilities, and so may suggest a more general explanation of the emergence of cooperation in early evolutionary transitions, than mechanisms such as kin selection, direct and indirect reciprocity. Introduction Cooperation among organisms is observed, both within and between species, throughout the natural world. It is necessary for the organization and functioning of societies, from insect to human. Cooperation is also posited as essential to the evolutionary development of complex forms of life from simpler ones, such as the evolutionary transition from prokaryotes to eukaryotes or the development of multicellular organisms1. A widespread phenomenon in nature, cooperative behaviour has been studied in detail in a wide variety of situations and lifeforms; in viruses2 , bacteria3 , insects4 , fish5 , birds6 , mammals7 , primates8 and of course in humans9, where the evolution of cooperation has been linked to the development of language10. The riddle of cooperation is how to resolve the tension between the ubiquitous existence of cooperation in the natural world, and the competitive struggle for survival between organisms (or genes or groups), that is an essential ingredient of the Darwinian evolutionary perspective. Based on existing theories11, 12, 13, Nowak14 describes a framework of enabling mechanisms to address the existence of cooperation under a range of differing scenarios. This framework consists of the following five mechanisms: kin selection, direct and indirect reciprocity, multi-level selection and network reciprocity. These mechanisms have been developed and much studied within the flourishing area of evolutionary game theory, and to a lesser extent in the simulated evolution and artificial life areas (in computer science). Our interest in this paper is network reciprocity, where the interactions between organisms in relation to their network structure, offer an explanation of cooperation. This mechanism is important for two reasons. 
First, whilst cooperation is widespread and found in a broad range of scenarios in the real world, many of the mechanisms that have been proposed to explain it require specific conditions such as familial relationships (for kin selection), the ability to recognise or remember (for direct and indirect reciprocity) and transient competing groups (for multi-level selection) (see Nowak14 for specific details). The requirement for such conditions limits the use of each of these mechanisms as a more general explanation for a widespread phenomenon. Secondly, the more specific behavioural or cognitive abilities required by some of these mechanisms precludes their use in explaining the role of cooperation in early evolutionary transitions. Network-based mechanisms which focus on simple agents potentially offer explanations which do not require such abilities. All forms of life exist in some sort of relationship with other individuals, or environments, and as a result can all be considered to exist within networks. Thus the network explanation has ingredients for generality which are lacking in the other models. It has been shown that networks having heterogeneous structure can demonstrate cooperation in situations where individuals interact with differing degrees15, effectively by clustering. Populations studied in this way are represented in the form of a network, with individuals existing at the nodes of the network and connections represented by edges. The degree of an individual node indicates the number of neighbour nodes it is connected to. Heterogeneity refers to the range of the degree distribution within the network. Behaviour in these networks can be investigated using the prisoner’s dilemma game which is widely adopted as a metaphor for cooperation. The majority of studies15, 16, 17 investigating cooperation with regards to network structure have focused on static networks and hence consider the behaviour of agents distinct from the networks within which they exist. Specifically in these works, the behaviour of the individuals within the network has no effect on their environment. More recently however, the interaction between behaviour and the development of network structure has been considered in an interesting development which shows promise in understanding evolutionary origins of cooperation. The Evolutionary Preferential Attachment (EPA) model developed by Poncela et al.18 proposes a fitness-based coevolutionary mechanism where scale-free networks, which are supportive of cooperation, emerge in a way that is influenced by the behaviour of agents connecting to the network. There is a large body of work devoted to coevolutionary investigations of cooperation (see Perc and Szolnoki19 for a review) of which a subset focus on coevolutionary studies within networks20, 21, 22, 23. However the EPA approach of Poncela, which we investigate further in this report, is notable in that it addresses an area that seems to have received very little attention: Specifically it explores how environment affects the behaviour of individuals, simultaneously with how such individuals, in return, affect their developing environment. In the EPA model, new network nodes connect preferentially to existing network nodes that have higher fitness scores. Accumulated fitness scores arise from agents located on nodes within the network playing one-shot prisoner’s dilemma with their neighbours. 
Strategies are subsequently updated on a probabilistic basis by comparison with the relative fitness of a randomly-selected neighbour. The linking of evolutionary agent behaviours to their environment in this way has been shown to promote cooperation, whilst the use of fitness rather than degree allows for a broader and more natural representation of preferential attachment24. Whilst further exploring the role of scale-free network growth with regards to cooperation (which of itself is interesting) the EPA model also implements one-shot rather than iterated prisoner’s dilemma and it utilises agents having unconditional (fixed) strategies; hence it potentially presents a very simple minimal model, in the light of reported findings, for the coemergence of cooperation and scale-free networks. Our investigations here are driven by two observations regarding the EPA model. First, we note that the model achieves a fixed network structure very early within simulations, from which point onwards agent behaviour has no effect on network structure. Secondly, whilst the EPA model supports the growth of cooperation in networks from a founder population consisting solely of cooperators, it does not achieve similar levels of success for networks grown from defectors. The broader question of how cooperation emerges requires an answer which can generalise to explain emergence from populations that are not assumed cooperative initially; hence, in this work we investigate networks grown from founder populations which may be cooperators or defectors. We introduce a modification to the EPA model (see Methods for details) which we consider an abstraction common to most, if not all, real populations, that of population size fluctuation. We investigate whether the resulting opportunity for the agents to continually modify the network, leads to increased levels of cooperation in the population. We achieve this fluctuation by truncating the evolved network whenever it reaches the specified maximum size. At truncation, agents are selected for deletion on the basis of fitness. Those least fit are removed from the network. The network is grown and truncated repeatedly until the simulation is ceased. The original EPA model offered a limited period of time for agents to initially affect the structure of their network. Our modification makes this ‘window of opportunity’ repeatedly available. Whilst a small number of interesting studies have explored the effect on cooperation of deleting network links25, 26, 27, 28, or to a much lesser extent, nodes29, 30, 31, the process we have implemented here differs in that it specifically targets individuals (nodes) on the basis of least-fitness. As such it has a very clear analogue in nature, in terms of natural selection. The question, “How does cooperation emerge?” can be considered from two extreme perspectives, firstly the scenario where cooperation develops within a population from its very earliest origins and secondly in terms of its emergence within a pre-existing non-cooperative network. In reality cooperation may occur anywhere within a spectrum bounded by these two extrema, at different times for different sets of events and circumstances, therefore a network-based mechanism to explain this phenomenon should be able to deal with either extreme and other positions in between. In testing this model, we investigate scenarios where cooperation may develop as a population grows from its founder members and we also apply the model to pre-existing randomly structured networks. 
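Before presenting results, the sketch below illustrates the two structural operations just described: fitness-biased attachment of newcomer nodes and removal of the least-fit fraction of the population at truncation. It is a minimal Python sketch rather than the authors' implementation; the function names, the small eps smoothing term and the exact form of the attachment probability are illustrative assumptions (the precise rules used are given in the methods section).

```python
# Minimal sketch of fitness-driven (EPA-style) growth plus least-fitness
# truncation. The network is a dict: node -> set of neighbouring nodes.
import random

def attach_new_node(graph, fitness, m=2, eps=1e-9):
    """Add one node and link it to m distinct existing nodes, chosen with
    probability proportional to accumulated fitness (illustrative rule)."""
    new = max(graph) + 1                               # assumes integer node ids
    candidates = list(graph)
    weights = [fitness[v] + eps for v in candidates]   # eps keeps zero-fitness nodes selectable
    targets = set()
    while len(targets) < min(m, len(candidates)):
        targets.add(random.choices(candidates, weights=weights, k=1)[0])
    graph[new], fitness[new] = set(), 0.0
    for v in targets:
        graph[new].add(v)
        graph[v].add(new)
    return new

def truncate_least_fit(graph, fitness, fraction=0.025):
    """Fluctuation step: once the network reaches its maximum size, delete the
    least-fit `fraction` of nodes and their edges (2.5% is one of the
    best-performing settings reported below)."""
    doomed = set(sorted(graph, key=fitness.get)[:int(len(graph) * fraction)])
    for v in doomed:
        del graph[v], fitness[v]
    for u in graph:
        graph[u] -= doomed                             # drop dangling edges
```

Deleting nodes uniformly at random instead of by least fitness, while keeping everything else unchanged, gives the control condition examined later in Figure 2.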
Results

Unless stated otherwise in the text, the general outline of the evolutionary process by which our results were obtained is, for one generation, as follows:

1. Play prisoner's dilemma: Each agent plays one-shot prisoner's dilemma with all neighbours and achieves a fitness score that is the sum of all the payoffs.
2. Update strategies: Those strategies that result in low scores are replaced on a probabilistic basis by comparison with the strategies of randomly selected neighbours.
3. Grow network: A specified number (we used 10 in all our simulations) of new nodes are added to the network, connecting to m distinct existing nodes via m edges using either EPA or CRA.
4. Remove nodes (only in the case of attrition models): If the network has reached maximum size, it is pruned by a truncation process that removes agents on the basis of least fitness.

Full details on the specifics of the implementations are provided in the methods section.

Results for networks grown from founder populations

We investigate the effect of population fluctuation on networks grown from founder populations consisting of three nodes.

Low levels of truncation result in increased levels of cooperation. For simulations starting from founder networks consisting solely of cooperators, we achieved similar profiles to those from the EPA model; however, when lower levels of truncation (less than 20%) were used we were able to demonstrate consistently higher levels of cooperation than the EPA model for values of b (the temptation to defect) greater than 1.6 (see Figure 1a). Highest levels of cooperation were achieved using as little as 2.5% and 5% truncation. We observed that cooperation does not reduce to the levels seen for EPA until truncation values are reduced to as little as 0.1% (not shown). Whilst large percentage truncations risk deleting some of the higher-fitness nodes which give the network its heterogeneous (power-law) degree distribution and hence aid cooperation, small truncation percentages will be focused on low-fitness, low-connectivity nodes, the deletion of which is unlikely to have such a detrimental effect. Also, given small truncation values, truncation events will occur at higher frequencies, thus supplying a steady 'drip-feed' of new nodes which will attach to existing hubs by preferential attachment and hence continually promote a power-law degree distribution within the network.

The reason that the EPA model can achieve higher levels of cooperation at b = 1 to 1.6 than the fluctuation model is that, whilst it is possible for the EPA model to be completely overrun by cooperators, in the fluctuation model repeated truncation prevents such a situation occurring. Defectors are being added to the population after every truncation event. A similar constraint also applies at the other end of the scale, with regards to very low levels of cooperation. So there are limits below 1 and above 0 which are a result of the truncation size and frequency. These limits restrict the range of cooperation values achievable by the fluctuation model.

[Figure 1: two panels, (a) populations grown from cooperator founders and (b) populations grown from defector founders; fraction of cooperators against temptation to defect (b), for EPA and for truncation levels from 2.5% to 50%.]

Figure 1. Effect of truncation size on cooperation. Simulations were run for 2000 generations using the EPA and fluctuation models at a range of b values.
The graphs plot the fraction of cooperators in the population against b values. Each point on the graph represents an average of 25 individual results. Each of the individual results is in turn an average of the last 20 generations of a simulation replicate. Plots are shown from fluctuation model simulations using 2.5 to 50% network truncation. An EPA simulation is also shown which does not feature any truncation. (a) illustrates results grown from founder populations of 3 cooperators. (b) illustrates results grown from founder populations of 3 defectors. See Figures 3 and 4, and accompanying text, for more detailed discussion of within-simulation data variability.

Cooperation occurs even for populations that are founded entirely with defectors. Our results starting from founder populations consisting solely of defectors show an increase compared with levels of cooperation achieved by the EPA model (see Figure 1b). Further, we note that final levels of cooperation arising from the fluctuation model for networks founded from cooperator and from defector strategies were almost indistinguishable statistically: we tested the dependence of the final cooperation levels observed as a function of b, the temptation to defect, and the founding strategy type (C or D), using a nonparametric Sign Test32 (see Table 1).

Model     n     k     p-value
EPA       178   151   < 2.20e-16
T 2.5%    240   143   0.003587
T 5%      240   129   0.2724
T 10%     240   126   0.4778
T 20%     240   137   0.03294
T 30%     240   131   0.1751
T 40%     239   136   0.03820
T 50%     239   129   0.2442

Table 1. Results of a nonparametric Sign Test (using a two-tailed exact binomial calculation), comparing the final level of cooperation observed in networks founded with cooperators and networks founded with defectors. For each level of truncation, the 240 × 2 independent samples were paired by the value of b, the temptation to defect. The column n is the number of non-tied sample pairs. The column k is the number of times the C-founded population had a larger final cooperation level than the D-founded population.

With the standard EPA model there is strong statistical evidence that the cooperator- and defector-founded networks differ. For the fluctuation model, the evidence is much less clear. Given that the power of the test is high here, due to the relatively large number of samples used, we can tentatively conclude that there is little or no effect of the type of network founding strategy (cooperator or defector) in those fluctuation models having above 2.5% truncation.

Fluctuation using random selection can still improve cooperation for defector-founded populations. As a control for the effect seen in the fluctuation model, we repeated the above simulations, deleting nodes randomly rather than on the basis of lowest fitness. Results are illustrated in Figure 2. By comparing with Figure 1, it can be seen that there is a clear difference in outcomes. First, for random deletion (Figure 2), fractions of cooperators present are reduced compared to least fitness (Figure 1); although, we note that levels of cooperation achieved are still independent of the founder population strategy (Figures 2a and 2b are approximately equivalent for fluctuation model simulations). Secondly, the percentage truncation parameter no longer appears to have any effect on cooperation (all truncation graphs in Figure 2 look approximately equivalent regardless of % values). Focusing solely on Figure 2, we now consider the fluctuation model compared to the EPA model.
In the case of networks grown from cooperator-founders (Figure 2a), EPA demonstrates higher levels of cooperation than the fluctuation simulations. Truncating the network by a method that simply deletes nodes at random is unsurprisingly, less effective at promoting cooperation than the EPA model which has been shown to be effective for cooperator-founded networks. In the case of networks grown from defector-founders (Figure 2b), the fluctuation model still achieves the same results as it did for cooperator-founders (in Figure 2a). However in this case, EPA achieves lower levels of performance than the fluctuation model featuring ‘random’ truncation. As we have mentioned EPA is generally less effective at promoting cooperation when networks are grown from defector-founders. How is the fluctuation model, with random truncation, still able to promote cooperation (albeit at reduced levels compared to targeted truncation)? The random deletion process will inevitably disrupt the formation of the heterogeneous network by deleting the higher degree nodes that are key to the scale-free structure. However, this disruption will be countered by the preferential process for addition of replacement nodes which is still fitness-based, i.e. new nodes added will still be preferentially attached to existing nodes of higher fitness. Heterogeneity of degree, (which supports cooperation) still arises, but not to the extent seen for the fluctuation model, which targets least fit nodes for deletion. The preferential attachment process explains why the fluctuation model is still able to have a positive (but reduced) effect on levels of cooperation, even when nodes are deleted randomly. Fluctuation in population size reduces variability within simulation results and increases cooperation. In Figure 3, 4/14 a) b) Populations grown from cooperator founders Fraction of cooperators 1 Populations grown from defector founders 1 0.8 0.8 0.6 0.6 0.4 0.4 0.2 0.2 0 EPA 2.5% 5% 10% 20% 30% 40% 50% 0 1 1.5 2 2.5 3 3.5 1 1.5 2 2.5 3 3.5 Temptation to defect (b) Figure 2. Effect of random rather than weighted selection of nodes for network truncation. Simulations were run as described for Figure 1 however, in the fluctuation model, least-fitness based node deletion was replaced with random deletion. Plots are shown from fluctuation model simulations using 2.5 to 50% network truncation. For reference, an EPA simulation is also shown which does not feature any truncation. (a) illustrates results grown from founder populations of 3 cooperators. (b) illustrates results grown from founder populations of 3 defectors. we provide sample illustrations of time profiles from the EPA and fluctuation models respectively (starting from cooperator founders, b = 2.2). The EPA model (Figure 3a) results in high variability between different simulation replicates with some replicates being overrun by defectors (i.e. fraction of cooperators ≈ 0). The fluctuation model (Figure 3b) demonstrates far less variability, with clear transitions between two states. Whilst the time at which transition occurs varies, most replicates achieved transitions to a consistent level of cooperation – equivalent to, or greater than the highest level observed from amongst all simulations in a comparable EPA model. We considered the possibility that the EPA model may simply require longer for convergence and hence ran extended simulations up to 200,000 generations (not shown). 
We did not see any consistent convergence over later generations: whilst some replicates achieved higher levels of cooperation beyond 2,000 generations, others did not, and some oscillated continually.

Fluctuation results in dramatic increases in cooperation for networks grown from defectors. Figure 4 shows replicate simulations grown from defector founders, using the EPA and fluctuation models respectively. In the EPA model (Figure 4a), all replicates are overrun by defectors. In the fluctuation model (Figure 4b), all replicates transition to cooperation. Ultimately, levels of cooperation achieved are similar for the fluctuation model regardless of whether the founder network is cooperators or defectors. We have however noticed that whilst final outcomes are typically similar for both types of founding strategy, defector-founded simulations tend to result in later times to transitions and greater variation in such times (Figures 3b and 4b illustrate this). Generally, for cooperator-founded populations, with b values where cooperation was able to arise, we observed transition of the majority (> 95%) of replicates within our typical simulation period of 2000 generations, with delayed transitions becoming more common given increasing b values. For defector-founded populations, delayed transitions occurred more frequently and to achieve consistent results (> 95% transitioned) required 20,000 generations.

Figure 5 illustrates time profile plots, with corresponding network degree distributions, for replicate simulations grown from cooperator founders using the fluctuation model. We see that the fluctuation model enables all replicates to consistently reach an apparent power-law degree distribution, as previously reported for the EPA model33. We also observe the same result (not shown) for the fluctuation model operating on networks grown from defector-founded populations. In addition, the replicate data makes clear that, when cooperation arises, variability in transition times (Figure 5a) does not correspond to variability in degree distribution (Figure 5b). The presence of a small number of nodes with degree k = 1 is an artefact of the fluctuation model implementation. The fluctuation model grows the network in the same way as EPA (with each new node extending m = 2 connections), however the truncation component of the fluctuation model can leave residual nodes of degree k = 1 (at low frequencies) due to the deletion of connections from removed nodes.

[Figure 3: two panels, (a) EPA model and (b) fluctuation model; fraction of cooperators against generation number for 10 replicates.]

Figure 3. Example simulation plots for EPA and fluctuation models starting from cooperator founders. Plots show the individual time profiles for 10 replicates using a b value of 2.2. (a) shows the EPA model. (b) shows the fluctuation model operating with 2.5% truncation. Generation 100 is marked in both figures by a vertical black line.

[Figure 4: two panels, (a) EPA model and (b) fluctuation model; fraction of cooperators against generation number for 10 replicates.]

Figure 4. Example simulation plots for EPA and fluctuation models starting from defector founders. Plots show the individual time profiles for 10 replicates using a b value of 2.2. (a) shows the EPA model with network fixation occurring at generation 100. (b) shows the fluctuation model operating with 2.5% truncation. Generation 100 is marked in both figures by a vertical black line. This is the point at which the EPA model reaches a fixed network structure, after which no further nodes are added.

[Figure 5: two panels, (a) time profile (fraction of cooperators against generation number) and (b) final degree distribution (frequency against degree, log-log).]

Figure 5. Time profiles and corresponding final degree distributions for networks grown from cooperator founders using the fluctuation model. (a) shows the time profile for a simulation consisting of 10 replicates with a b value of 2.2. The fluctuation model is truncating networks using 2.5% truncation. (b) shows the final degree distributions (at generation 2000) for each of the 10 simulation replicates.

Cooperation has a characteristic degree distribution. Whilst in the majority of cases, the fluctuation model supported a transition of networks to a higher level of cooperation, we observed that as b values increased, the transition was not guaranteed. Figure 6 captures an example of this, for 1 replicate out of 10 (for b = 2.2). The replicate data demonstrates clearly the difference in degree distributions between networks that transition to cooperation and those that do not (the red lines in plots 6a and 6b refer to the same replicate).

Results for pre-existing random networks

The following results look at the effect of the fluctuation model when applied to pre-existing random networks.

Fluctuation drives non-cooperative pre-existing networks to cooperation. Figure 7 shows final levels of cooperation achieved in simulations which started from randomly structured networks. Nodes within these networks were allocated cooperator (defector) strategies according to probability P (1 − P). Simulations were run for 20,000 generations during which time the majority (> 95%) of replicates transitioned to cooperation (for those simulations using b values where cooperation was seen to emerge). Three pre-existing networks were tested, consisting of i) all cooperators, ii) cooperators and defectors in approximately equal amounts, and iii) all defectors. The curves for these three networks are almost entirely coincident, again illustrating the emergence of cooperation in the fluctuation model, regardless of starting criteria (as seen previously in networks grown from founder populations). A static network (iv), where structural changes were disallowed (i.e. strategy updating only), is shown for comparison and clearly illustrates the contribution of the fluctuation mechanism.

Fluctuation transforms pre-existing network structure from random to scale-free. In Figure 8 we show the effect of the fluctuation model on degree distribution, for pre-existing random networks, initially composed entirely of defectors. Figure 8a, using linear axes, highlights the initial Poisson degree distribution for the pre-existing random network, and Figure 8b highlights apparent log-log linearity of the final degree distribution, that is characteristic of a power-law distribution.

Cooperation appears to be permanent. In several thousands of simulations, excluding the small fluctuations visible in asymptotic states (see Figures 3, 4 and 5), whilst we have observed failures to transition to cooperation, we have not observed a single instance of widespread reversion to defection once cooperation has been achieved within a population.
It would appear that once cooperation is established by means of this model, it does not collapse.

Figure 6. Plot illustrating the difference in degree distribution observed for a replicate that fails to achieve cooperation (fluctuation model). Networks were grown from cooperator founders. Simulation consists of 10 replicates with a b value of 2.2. The fluctuation model is truncating networks using 2.5% truncation. (a) shows the time profile. (b) shows the final degree distributions (at generation 2000) for each of the 10 replicates. The red line in (a) (defectors predominate the population) corresponds to the red line in (b) (steeper exponent than all other replicates).

Figure 7. Effect of fluctuation model on pre-existing random networks. The plot shows temptation to defect plotted against final fraction of cooperators. (Legend: i) Evolving/C = 0, ii) Evolving/C = 0.5, iii) Evolving/C = 1.0, iv) Static/C = 0.5.) Each data point represents an average of 25 individual results. Each of the individual results is an average of the last 20 (of 20,000) generations of a simulation replicate. The fluctuation model used 2.5% truncation. The pre-existing networks were in the form of random graphs with each node in the network being populated by cooperators according to probabilities: i) 0, ii) 0.5 and iii) 1. For reference, simulations involving a network that was structurally immutable are also shown in iv). For the immutable network, nodes were populated with cooperators (or defectors) according to a probability of 0.5.

Figure 8. Degree distributions for pre-existing network at start and end of fluctuation model simulations. The plots present aggregate data from fluctuation model simulations of 25 replicates and illustrate the starting and finishing degree distributions, after 20,000 generations. The simulations used a b value of 2.2 and truncation at 2.5%. The starting networks were in the form of random graphs populated entirely by defectors. The same data are represented on linear plots (a) and log-log plots (b) in order to clearly illustrate, respectively, the apparent Poisson initial and power-law final distributions. In the interests of visualising both curves, the linear graph only includes degree values up to k = 20. The error bars shown represent 95% confidence intervals for the data.

Discussion

The main findings of our investigations are that: i) fluctuation of population size leads to an increase in levels of cooperation compared with the EPA model, ii) the levels of cooperation achieved thus are largely independent of whether the populations were founded from defectors or cooperators, and iii) the fluctuation model supports the emergence of cooperation both in networks grown from founder populations and in pre-existing random networks. The time profile plots we have provided in our results give an indication as to how the fluctuation model is able to reach the increased levels of cooperation.
Whilst the EPA model results in a high degree of variability, the fluctuation model produces consistent transition profiles. The EPA model has two interacting dynamic components: preferential attachment and strategy updating. Structural organization within the EPA model, which is driven by the preferential attachment component, occurs until the network reaches its defined size limit. Changes in levels of cooperation continue to occur after this point. Given that the network structure is fixed, these latter changes can only occur as a result of the remaining active component of the EPA model process: strategy updating. Close examination of EPA simulation replicate time profiles (as shown in Figure 3) reveals an interesting observation (for b values greater than 1.6): At the point where network structure becomes fixed, those replicates having higher levels of cooperation at this time, tend to finish with higher levels of cooperation than replicates experiencing lower levels of cooperation at the network fixation point. Whilst we do not yet have a detailed understanding of how cooperative structure develops within our networks, this observation suggests that prior to the network fixation point, some structural precedent is set which gives a probabilistic indication of how a network will profit, in cooperative terms, from strategy updating subsequent to structure fixation. Based on the work of Poncela et al.33 which describes the connection between scale-free network structure and cooperation, a plausible explanation for such a structural precedent is as follows: Whilst new nodes are preferentially attached to a growing network in a way that may generate hubs and hence a scale-free structure, there is no guarantee that the early clusters of nodes appearing in the network will be cooperators (cooperator and defector strategies are assigned to newly added nodes 9/14 with equal probability). If the first network hubs appearing in the network are largely occupied by cooperators who in turn have cooperative neighbours, then these agents are likely to accumulate high fitness scores. This would potentially set the foundation for cooperation since such a group is likely to have a hub which would then draw further connections from newcomer nodes and hence promote scale-free structure. On the other hand, if early groupings of cooperators are interspersed with large numbers of defectors, this is likely to result in defectors preying on cooperators and initially accumulating higher fitness values. Strategy updating around such groups would in turn result in the conversion of cooperators present to the (at that point in time) more successful defectors. Eventually large groups of defectors will arise and result in lower fitness values. In a sea of low fitness values, preferential attachment is less able to demonstrate the “rich get richer effect” and this is more likely to result in random rather than targetted connections for newcomer nodes. After network fixation, strategies can be updated, but the network structure cannot change and early groupings of defectors in this way are likely to disrupt, impair or sufficiently delay the foundation of the scale-free structure that is needed to support higher levels of cooperation. We anticipate forming a testable hypothesis around this explanation as the basis for a subsequent work on this model. The fluctuation model effectively allows a network to “go back in time” to fix structure that may have caused such a poor start. 
The model targets low fitness nodes (and their edges) for deletion. Such nodes are more likely to be defectors surrounded by other defectors (a defector–defector interaction results in a payoff of zero for both parties). Replacement nodes do not inherit the deleted node’s connections; they are preferentially attached to higher fitness nodes. In this way, networks that have a poor start are no longer permanently fixed, they have repeated opportunities to address the ‘poorest performing’ elements of their structure. When viewed in this way, it is no longer surprising that similar levels of cooperation are ultimately achieved regardless of starting strategies. In the same way that this process of continual readjustment allows the network to deal with a less-than-ideal initial structure, it similarly allows the network to deal with less than favourable starting strategies. If such strategies perform poorly, then sooner or later there is a likelihood they will be deleted, and should their replacements also perform poorly, there is a similar likelihood that they too will be deleted. It is this ability to continually replace poor performing network nodes and connections that supports the fluctuation model’s striking ability to convert pre-existing random networks, initially populated entirely by defectors, to highly cooperative networks with a power-law distribution. The fluctuation model studied in this paper is not intended as an accurate representation of any specific real world process, and probably does not map onto any such process precisely. However it may be interpreted in several ways as analogues of natural phenomena. We now briefly consider possible interpretations of specific aspects of our model. Firstly, as in the original EPA model, new nodes joining the network could be considered as being “newly born into the network” or as “newcomers from outside the network”. In either case, they are positioned by preferential attachment in network ‘locations’ which may prove advantageous or disadvantageous to them. Secondly, given that the model is one of (Darwinian) evolution, we tend to view strategy updates as equivalent to new population members replacing old, rather than any form of learning or imitation. This may be viewed as birth-death. In this situation, the new strategy ‘inherits’ a set of connections forged by its ancestors along with the advantages or disadvantages that those connections confer. Thirdly, fluctuation as we have implemented it, deletes not only the least fit agents, but also the connections established over generations by successive offspring at that network location. The purpose of deleting both agent and network node is to introduce some form of flux into the actual network itself rather than just its constituents’ behaviour. This is a different and more disruptive process to that described by strategy updating - perhaps akin to real world scenarios where external environmental effects may have wider consequences for an entire population than for just specific individuals. Each of these processes is open to alternative interpretations. However, we suspect based on our results from this work, that it is not necessarily the exact process that is the important issue here, it is merely that, much like most ecological systems in nature, a network continues to be perturbed in some way and is hence unable to achieve a permanently fixed structure: it thus continues to adapt. 
We anticipate that there may be alternative ways of perturbing a similar model to achieve results akin to those we have demonstrated.

Conclusion

In summary, natural selection acts as a culling process that maintains diversity. In this work we have attempted to generalise the effect of that process across a model of networked individuals – with the crucial ability that individuals can locally affect their network in response to such culls. We have introduced a relatively simple modification to the original EPA model, symbolising an effect elemental to the behaviour of populations in the real world – effectively some sort of representation of flux in the environment. This modification creates the opportunity for individuals in a population to continue to test adaptation against the selective pressures of the ecosystem. We have shown that such a feature brings about marked increases in levels of cooperation in networks grown from defector-founded populations. We have also shown that this feature results in levels of cooperation which are independent of starting behaviour, and we have shown that the model can bring about cooperation in both growing and pre-existing non-cooperative networks. It is important that models which seek to explain the origins of cooperation are general and also robust to starting conditions. We believe that our model achieves both of these aims and hence our findings are of value in the search to understand the emergence and the ubiquity of cooperation.

Methods

Our model and simulations are based on those described in18, but with the addition of a pruning step which deletes nodes from the network. We here give a full description of the approach for completeness.

Overview of approach. The models consist of a network (i.e. graph) with agents situated at the nodes. Edges between nodes represent interactions between agents. Interactions are behaviours between agents playing the one-shot prisoner's dilemma game. These behaviours are encoded by a 'strategy' variable, associated with each agent, which takes one of two values: cooperate or defect. The game is played in a round-robin fashion, with each agent playing its strategy against all its connected neighbours in turn. Each agent thus accumulates a fitness score which is the sum of all the individual game payoffs. Within an evolutionary simulation, starting from a founding population, this process is repeated over generations. The evolutionary process assesses agents at each generation on the basis of their fitness score; fitter agents' strategies remain unchanged; less fit agents are more likely to have their strategies displaced by those of fitter neighbours. The evolutionary preferential attachment (EPA) process18 connects strategy dynamics to network growth: starting from a small founding population, new nodes are added which preferentially connect to fitter agents within the network. Our adaptation of the EPA model adds a further component which repeatedly truncates the network: whenever the population reaches a maximum size, a specified percentage of nodes in the network is removed, on the basis of least fitness, after which the network grows again.

Outline of the evolutionary process. As described earlier, the general evolutionary process we implement at each generation is as follows:
1. Play prisoner's dilemma
2. Update strategies
3. Grow network
4. Remove nodes (only in the case of attrition models)
In the following, we provide more detail on the specifics of each of the four steps; a minimal end-to-end sketch of the loop is given immediately below, ahead of those details.
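To make the four steps concrete, the following is a minimal Python sketch of one possible implementation of the loop, using the payoffs, update rule, attachment rule and truncation described in the remainder of this section (Figure 9 and Eqs. (1)–(3)). The data structures (adjacency sets keyed by integer node id), identifier names, the treatment of nodes added earlier in the same generation (attachment weight based on zero fitness), and the exact point at which truncation is triggered are our own simplifications and are not specified by the published model.

import random

b = 2.2           # temptation to defect
EPSILON = 0.99    # selection pressure (epsilon in Eq. 3)
M = 2             # edges extended by each new node
N_MAX = 1000      # maximum network size
NEW_PER_GEN = 10  # new nodes added per generation
TRUNC = 0.025     # fraction of least-fit nodes removed (2.5% truncation)

def payoff(s_i, s_j):
    # Weak prisoner's dilemma: T = b, R = 1, P = S = 0 (Figure 9).
    if s_i == 'C':
        return 1.0 if s_j == 'C' else 0.0
    return b if s_j == 'C' else 0.0

def play_round(adj, strat):
    # Eq. (1): fitness is the sum of payoffs against all neighbours.
    return {i: sum(payoff(strat[i], strat[j]) for j in adj[i]) for i in adj}

def update_strategies(adj, strat, fit):
    new_strat = dict(strat)
    for i in adj:
        if not adj[i]:
            continue
        j = random.choice(tuple(adj[i]))
        if fit[i] < fit[j]:
            # Eq. (2): replacement probability, normalised by b * max degree.
            p = (fit[j] - fit[i]) / (b * max(len(adj[i]), len(adj[j])))
            if random.random() < p:
                new_strat[i] = strat[j]
    return new_strat

def grow(adj, strat, fit):
    for _ in range(NEW_PER_GEN):
        if len(adj) >= N_MAX:
            return
        new = max(adj) + 1
        # Eq. (3): preferential attachment weights; targets are drawn without
        # replacement, so duplicate edges cannot occur.
        weights = {i: 1 - EPSILON + EPSILON * fit.get(i, 0.0) for i in adj}
        targets = []
        while len(targets) < M and len(targets) < len(adj):
            pool = [i for i in weights if i not in targets]
            r = random.uniform(0, sum(weights[i] for i in pool))
            acc, chosen = 0.0, pool[-1]
            for i in pool:
                acc += weights[i]
                if acc >= r:
                    chosen = i
                    break
            targets.append(chosen)
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
        strat[new] = random.choice(['C', 'D'])

def truncate(adj, strat):
    if len(adj) < N_MAX:
        return
    fit = play_round(adj, strat)  # rank by current fitness; ties: oldest id first
    doomed = sorted(adj, key=lambda i: (fit[i], i))[:int(TRUNC * len(adj))]
    for i in doomed:
        for j in adj.pop(i):
            if j in adj:
                adj[j].discard(i)
        del strat[i]
    for n in [m for m in adj if not adj[m]]:  # prune nodes left disconnected
        del adj[n]
        del strat[n]

# Founder network: complete graph on three defectors.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
strat = {i: 'D' for i in adj}
for generation in range(2000):
    fit = play_round(adj, strat)
    strat = update_strategies(adj, strat, fit)
    grow(adj, strat, fit)
    truncate(adj, strat)

Tracking the fraction of cooperators per generation (sum(s == 'C' for s in strat.values()) / len(strat)) reproduces the kind of time profile shown in Figures 3–5, though quantitative agreement with the published results is not claimed for this sketch.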
Play prisoner's dilemma. We use the single-parameter representation of the one-shot prisoner's dilemma as formulated in16. In this form (the 'weak' prisoner's dilemma), the payoff values for the actions, referred to as T, R, P and S, become b, 1, 0 and 0 (see Figure 9). The b parameter represents the 'temptation to defect' and is set at a value greater than 1 for the dilemma to exist.

Figure 9. Payoff matrix for the weak prisoner's dilemma (row player's payoff shown):
             Cooperate   Defect
Cooperate    R (1)       S (0)
Defect       T (b)       P (0)

From the accumulated prisoner's dilemma interactions, each agent achieves a fitness score as follows:

f_i = \sum_{j=1}^{k_i} \pi_{i,j},   (1)

where k_i is the number of neighbours that node i has, j represents a connected neighbour and \pi_{i,j} represents the payoff achieved by node i from playing prisoner's dilemma with node j.

Update strategies. Each node i selects a neighbour j at random. If the fitness of node i, f_i, is greater than or equal to the neighbour's fitness f_j, then i's strategy is unchanged. If the fitness of node i, f_i, is less than the neighbour's fitness f_j, then i's strategy is replaced by a copy of the neighbour j's strategy, with a probability proportional to the difference between their fitness values. Thus poorly scoring nodes have their strategies displaced by the strategies of more successful neighbours. Hence, at generation t, if f_i(t) ≥ f_j(t) then i's strategy remains unchanged. If f_i(t) < f_j(t) then i's strategy is replaced with that of the neighbour j with the following probability:

P_i = \frac{f_j(t) - f_i(t)}{b \cdot \max[k_i(t), k_j(t)]},   (2)

where k_i and k_j are the degrees of node i and its neighbour j respectively. The purpose of the denominator is to normalise the difference between the two nodes. The term b \cdot \max[k_i(t), k_j(t)] represents the largest achievable fitness difference between the two nodes given their respective degrees. (The highest payoff value in the prisoner's dilemma is T, equivalent to b in the single-parameter version of the game used here. The maximum possible score for a node of degree k is therefore k * b. The lowest payoff value is P or S, both equal to zero, giving a minimum possible score of k * 0 = 0. Thus the maximum possible difference between two nodes is simply the maximum possible score of the fitter node.)

Grow network. New nodes with randomly allocated strategies are added, a total of 10 at each generation. Each new node uses m edges to connect to existing nodes. In all our simulations, we use m = 2 edges. Duplicate edges and self-edges are not allowed. The probability that an existing node i receives one of the m new edges is as follows:

\Pi_i(t) = \frac{1 - \varepsilon + \varepsilon f_i(t)}{\sum_{j=1}^{N(t)} \left(1 - \varepsilon + \varepsilon f_j(t)\right)},   (3)

where f_i(t) is the fitness of an existing node i and N(t) is the number of nodes available to connect to at time t in the existing population. Given that in our model each new node extends m = 2 new edges, and multiple edges are not allowed, N is therefore determined without replacement. The parameter ε ∈ [0, 1) is used to adjust selection pressure. For all of our simulations ε = 0.99, hence focusing our model on selection occurring directly as a result of the preferential attachment process.

Truncate network. On achieving a specified size, the network is truncated according to a percentage X. Truncation is achieved by ranking all nodes in order of current fitness scores from maximum to minimum. The X% least-fit nodes are then deleted from the network. All edges from deleted nodes are removed from the network. Any nodes that become disconnected from the network as a result of this process are also deleted.
(Failure to do this would result in small numbers of single, disconnected, non-playing nodes, having static strategies, whose zero fitness values would result in continual isolation from the network.) When there are multiple nodes of equivalent low fitness value, the earliest (oldest) nodes are deleted first. Where X = 0, no truncation occurs and the fluctuation model becomes the EPA model. Investigations of the fluctuation model in networks grown from founder populations We investigated networks grown from an initial complete network with N0 = 3 agents at generation t0 . Founding populations were either entirely cooperators or entirely defectors. We tested a range of different sized truncation values from 0.001 to 50% starting from each of the two founder populations (cooperators or defectors). Networks were grown to a maximum size of N = 1000 nodes with an overall average degree of approximately k = 4. Simulations were run until 2000 generations. The ‘fraction of cooperators’ values we use are means, averaged over the last 20 generations of each simulation (to compensate for variability that might occur if just using final generation values). Each data point on ‘b-profile’ plots (Figures 1, 2, and 7) is the mean of 25 simulations. Simulations run for the purposes of illustrating time profiles or degree distributions were limited to 10 replicates in the interests of clarity. Investigations of the fluctuation model applied to pre-existing random networks Random networks were generated by randomly connecting edges between a specified number of nodes (i.e. maximum size of network). Total number of edges added N ∗ k/2, was determined on the basis of a random graph of degree k = 4. Simulation parameters were as described previously for founder population investigations except, i) we focused on a single truncation 12/14 value of X = 2.5% and ii) longer run times (e.g. 20,000 generations) were generally required for replicate simulations to stabilise, when looking at pre-existing networks initially populated entirely with defectors. In applying the fluctuation model to pre-existing networks, the model simply ‘sees’ a pre-existing network, as a ’grownfrom-founders’ network which has reached the point where it requires truncation. In essence, the fluctuation model is the same when it is applied to pre-existing networks as it is when applied to networks grown from founders. Where parameters were modified from those described in this methods section (e.g. longer simulations), this is made clear in the results. References 1. Smith, J. M. & Szathmary, E. The Major Transitions in Evolution (Oxford University Press, 1997). 2. Shirogane, Y., Watanabe, S. & Yanagi, Y. Cooperation: another mechanism of viral evolution. Trends in microbiology 21, 320–324 (2013). 3. Crespi, B. J. The evolution of social behavior in microorganisms. Trends in ecology & evolution 16, 178–183 (2001). 4. Wilson, E. O. The Insect Societies. (Harvard University Press, 1971). 5. Milinski, M. Tit for tat in sticklebacks and the evolution of cooperation. Nature 325, 433–435 (1987). 6. Stacey, P. B. & Koenig, W. D. Cooperative breeding in birds: long term studies of ecology and behaviour (Cambridge University Press, 1990). 7. Clutton-Brock, T. H. et al. Cooperation, control, and concession in meerkat groups. Science 291, 478–481 (2001). 8. Mendres, K. A. & de Waal, F. Capuchins do cooperate: the advantage of an intuitive task. Animal Behaviour 60, 523–529 (2000). 9. Axelrod, R. & Hamilton, W. D. The evolution of cooperation. 
Science 211, 1390–1396 (1981). 10. Nowak, M. A. & Krakauer, D. C. The evolution of language. Proceedings of the National Academy of Sciences 96, 8028–8033 (1999). 11. Hamilton, W., D. The genetical evolution of social behaviour. I, II. Journal of Theoretical Biology 7, 1–52 (1964). 12. Trivers, R. L. The evolution of reciprocal altruism. The Quarterly Review of Biology 46, 35–57 (1971). URL http://www.jstor.org/stable/2822435. 13. Grafen, A. Natural Selection Kin Selection and Group Selection. In Behavioural Ecology: An Evolutionary Approach 2, 62–84 (Blackwell Scientific Publications, Oxford, UK, 1984). 14. Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560–1563 (2006). 15. Santos, F. C. & Pacheco, J. M. A new route to the evolution of cooperation. Journal of Evolutionary Biology 19, 726–733 (2006). 16. Nowak, M. A. & May, R. M. Evolutionary games and spatial chaos. Nature 359, 826–829 (1992). 17. Szabó, G. & Fáth, G. Evolutionary games on graphs. Physics Reports 446, 97–216 (2007). 18. Poncela, J., Gómez-Gardeñes, J., Florı́a, L. M., Sánchez, A. & Moreno, Y. Complex cooperative networks from evolutionary preferential attachment. PLoS one 3, e2449 (2008). 19. Perc, M. & Szolnoki, A. Coevolutionary games-A mini review. BioSystems 99, 109–125 (2010). 20. Ebel, H. & Bornholdt, S. Coevolutionary games on networks. Physical Review E 66, 056118 (2002). 21. Pacheco, J. M., Traulsen, A. & Nowak, M. A. Coevolution of strategy and structure in complex networks with dynamical linking. Physical Review Letters 97, 258103 (2006). 22. Szolnoki, A., Perc, M. & Danku, Z. Making new connections towards cooperation in the prisoner’s dilemma game. EPL (Europhysics Letters) 84, 50007 (2008). 23. Cardillo, A., Gómez-Gardeñes, J., Vilone, D. & Sánchez, A. Co-evolution of strategies and update rules in the prisoner’s dilemma game on complex networks. New Journal of Physics 12, 103034 (2010). 24. Nguyen, K. & Tran, D. A. Fitness-Based Generative Models for Power-Law Networks. In Handbook of Optimization in Complex Networks, 39–53 (Springer, 2012). 25. Zimmermann, M. G. & Eguı́luz, V. M. Cooperation, social networks, and the emergence of leadership in a prisoner’s dilemma with adaptive local interactions. Physical Review E 72, 056118 (2005). 26. Santos, F. C., Pacheco, J. M. & Lenaerts, T. Cooperation prevails when individuals adjust their social ties. PLoS Computational Biology 2, e140 (2006). URL http://dx.plos.org/10.1371/journal.pcbi.0020140. 27. Pacheco, J. M., Traulsen, A. & Nowak, M. A. Active linking in evolutionary games. Journal of theoretical biology 243, 437–443 (2006). 28. Traulsen, A., Santos, F. C. & Pacheco, J. M. Evolutionary games in self-organizing populations. In Adaptive Networks, 253–267 (Springer, 2009). 13/14 29. Perc, M. Evolution of cooperation on scale-free networks subject to error and attack. New Journal of Physics 11, 033027 (2009). 30. Szolnoki, A., Perc, M., Szabó, G. & Stark, H.-U. Impact of aging on the evolution of cooperation in the spatial prisoner’s dilemma game. Physical Review E 80, 021901 (2009). 31. Ichinose, G., Tenguishi, Y. & Tanizawa, T. Robustness of cooperation on scale-free networks under continuous topological change. Physical Review E 88, 052808 (2013). 32. Conover, W. J. Practical Nonparametric Statistics (J. Wiley&Sons, New York, 1999), third edn. 33. Poncela, J., Gómez-Gardeñes, J., Florı́a, L., M. & Moreno, Y. Growing networks driven by the evolutionary prisoners’ dilemma game. 
In Handbook of Optimization in Complex Networks, 115–136 (Springer, 2012).

Acknowledgements
This work was supported by funding from the Engineering and Physical Sciences Research Council (Grant reference number EP/I028099/1).

Author contributions statement
S.M. conceived and conducted the experiment(s), and analysed the results. J.K. performed statistical analysis. Both authors reviewed the manuscript.

Additional information
The authors declare no competing financial interests.
A Semantics for Approximate Program Transformations arXiv:1304.5531v1 [] 19 Apr 2013 Edwin Westbrook and Swarat Chaudhuri Department of Computer Science, Rice University Houston, TX 77005 Email: {emw4,swarat}@rice.edu Abstract An approximate program transformation is a transformation that can change the semantics of a program within a specified empirical error bound. Such transformations have wide applications: they can decrease computation time, power consumption, and memory usage, and can, in some cases, allow implementations of incomputable operations. Correctness proofs of approximate program transformations are by definition quantitative. Unfortunately, unlike with standard program transformations, there is as of yet no modular way to prove correctness of an approximate transformation itself. Error bounds must be proved for each transformed program individually, and must be reproved each time a program is modified or a different set of approximations are applied. In this paper, we give a semantics that enables quantitative reasoning about a large class of approximate program transformations in a local, composable way. Our semantics is based on a notion of distance between programs that defines what it means for an approximate transformation to be correct up to an error bound. The key insight is that distances between programs cannot in general be formulated in terms of metric spaces and real numbers. Instead, our semantics admits natural notions of distance for each type construct; for example, numbers are used as distances for numerical data, functions are used as distances for functional data, and polymorphic lambda-terms are used as distances for polymorphic data. We then show how our semantics applies to two example approximations: replacing reals with floatingpoint numbers, and loop perforation. 1. Introduction Approximation is a fundamental concept in engineering and computer science, including notions such as floating-point numbers, lossy compression, and approximation algorithms for NP-hard problems. Such techniques are often used to trade off accuracy of the result for reduced resource usage, for resources such as computation time, power, and memory. In addition, some approximation techniques are also used to ensure computability. For example, true representations of real numbers (e.g., [7], [1]), require some operations, such as comparison, to be incomputable; floating-point comparison, in contrast, is efficiently decidable on modern computers. Recently, there has been a growing interest in language-based approximations, where approximate program transformations are performed by the programming language environment [21], [12], [19], [18], [4], [3], [16]. Such approaches allow the user to give an exact program as a specification, and then apply some set of transformations to this specification, yielding an approximate program. The goal is for approximations to be performed on behalf of the programmer, either fully automatically or with only high-level input from the user, while still maintaining a given error bound; i.e., the goals are automation and correctness. These goals can in turn increase programmer productivity while helping to remove programmer errors, where the latter can be especially important, for example, in safety-critical systems. This leads us to a fundamental question: what does it mean for an approximate transformation to be correct? 
A good answer to this question must surely be quantitative, since approximate transformations should not change the output by too much; i.e., they must respect user-specified error bounds. Correctness should also be modular, meaning that, in settings where approximate transformations T1 , . . . , Tk are applied together to a program P , it should be possible to reduce the proof of correctness of {T1 , . . . , Tk } to individual proof obligations for the Ti s. Current formal approaches to approximate program transformations, however, do not permit such modular reasoning about approximations. Typically, they are tailored to specific forms of approximation — for example, the use of floating-point numbers or loop perforation [19] (skipping certain iterations in long-running loops). Even when multiple approximations can be combined, reasoning about them is monolithic; in the above example, if T1 is changed slightly, we would need to re-prove the correctness of P with respect to not just T1 but {T1 , . . . , Tk }. In addition, current approaches to approximate computation [2], [21], [19], [18], [4] are almost universally restricted to first-order programs over restricted data types. In this work, we improve on this state of the art by giving a general, composable semantics for program approximation, for higher-order programs with polymorphic types. In our semantics, individual approximate transformations are proved to be quantifiably correct, i.e., to induce a given local error expression. Error expressions are then combined compositionally, yielding a top-level error expression for the whole program that is built up from the errors of the individual approximate transforms being used. This approach has a number of benefits. First, it allows for more portable proofs: an approximate transformation T can now be proved correct once, and the resulting error expression can be used many times in many different contexts. Second, it is mechanical and opens up opportunities for automation: approximation errors for a whole program and a set of disparate approximate transformations are generated simply by composing the errors of the individual transformations. Finally, our approach reduces the correctness of approximately transformed programs to the much easier question of whether a generated error expression is less than or equal to a given error bound. The key technical insight that makes our semantics possible is that, despite past work on using metric spaces and real numbers in program semantics (e.g., [20], [15], [11]), we argue that real numbers cannot in general capture how, for example, the output error of a function depends on the input and its error. Instead, our approach allows arbitrary System F types for errors. We show that this can accurately capture errors, for example, by using functions as errors for functional data and polymorphic lambdas as errors for polymorphic data. To allow this, our semantics is based around a novel notion of an approximation type, which is a ternary logical relation [6], [17] between exact expressions e, approximate expressions a, and error expressions q. In addition to the above benefits, this approach can also handle changes of type between exact and approximate expressions, e.g., approximating real numbers by floating-point. The remainder of the paper is organized as follows. Section 2 motivates our semantics with a highlevel overview. Section 3 defines our input language, F ADT+ , which is System F with algebraic datatypes and built-in operations. 
Section 4 defines our semantics by defining the notion of approximation types mentioned above. Section 5 then shows how our semantics of approximation types can be used to verify an approximating compiler which compiles real numbers into floating-point numbers and optionally performs loop perforation [12], [19]. Note that the error bounds created for the real to floating-point approximation essentially yield a general approach to floating-point error analysis [10], [5] that works for higher-order and even polymorphic programs. Finally, Section 6 discusses related work and Section 7 concludes.

2. Approximating Programs

The goal of this work is to give a semantics for approximate transformations, which convert an exact program e into an approximate program a that, although not identical to e, is within some quantifiable error bound q. Our notion of approximate transformation is very general, but it includes at least: Data Approximations: a uses a less exact datatype, such as floating-point numbers instead of reals; Approximations of Incomputable Operations: a uses inexact but computable versions of potentially incomputable operations, such as f(n) for finite n in place of lim_{x→∞} f(x); and Approximate Optimizations: a performs a cheaper, less precise version of a computation, such as the identity function in place of sin(x). The point of language-based approximations is that these transformations become part of the language semantics, which precisely captures the relationship between exact programs, their approximations, and the associated error bounds.

As a simple yet illustrative example, consider an approximate transformation that replaces the sin operator with the identity function λx. x. Such an approximation can greatly reduce computation, since sin (for floating-point numbers) can be a costly operation, while the identity function is a no-op. Intuitively, we know that this change does not greatly affect the output when x is close to 0. If, however, the output of a call to sin is then passed to a numerically sensitive operation, such as a reciprocal, then the small change resulting from replacing sin by the identity could lead to a large change in the final result. Thus, our goal is to quantify the error introduced by this approximate transformation in a local, compositional way, allowing this error to be propagated through the rest of the program.

Figure 1. Error of Approximating sin with λx. x (original program: input in flows into sin, with input error err and output error err + in − sin(in); approximate program: input in′ flows into λx. x).

Figure 1 allows us to visualize the error resulting from replacing sin with the identity function. We assume that the input in has already been approximated, yielding an approximate input in′ with some approximation error err. The exact output, sin(in), gets replaced by an approximate result equal to in′. The error for this expression can then be calculated as err + in − sin(in), as described in the figure. This example leads to the following observations: 1) Errors are programs: Although the standard approach to quantification and errors is to use metric spaces and real numbers, the error expression err + in − sin(in) in our example depends on both the input in and the input error err; i.e., it is a function, which cannot be represented by a single real number.
2) Errors are compositional: Approximation errors in the input to sin are substituted into the approximation error for sin itself, and this error is in turn passed to any further approximations that occur at a later point in the computation. In order to formalize the relationship between programs, their approximations, and the resulting errors, we define a semantic notion below called an approximation type. An approximation type is a form of logical relation (see, e.g., [6], [17]), satisfying certain properties discussed below, that relates exact expressions e, approximate expressions a, and error expressions q. If A is an approximation type, we write {|a|}qA for the set of all exact expressions e related by A to approximate expression a and error expressions q. Thus, e ∈ {|a|}qA means that e can be approximated by a with approximation error no greater than q when using approximation type A. As a first example, we define the approximation type Fl that relates real number expressions e, floatingpoint expressions a, and non-negative real number expressions q iff q is the distance between the e and the real number corresponding to a. For instance, the real number π is within error 0.1415926 . . . of the floatingpoint number 3, meaning that π ∈ {|3|}0.1415926... Fl holds. If the only difference between a and e is that the former uses floating-point numbers in place of reals, then showing e ∈ {|a|}qFl is essentially a form of floating-point error analysis. Using Fl is more general, however, because it allows the possibility that a performs further approximations, e.g., using the identity in place of sin. Taking this comparison further, Fl is specifically like an interval-based error analysis; we examine this more closely in Section 5.2. For functions, following Figure 1, we use an approximation type where the error of approximating a function is itself a function, that maps exact inputs and their approximation errors to output approximation errors. More formally, if A1 and A2 are approximation types for the input and output of a function, respectively, then then functional approximation type A1 ⇒ A2 is defined such that e ∈ {|a|}qA1 ⇒A2 iff e e1 ∈ {|a a1 |}qAe21 q1 for all e1 , a1 , and q1 such that e1 ∈ {|a1 |}qA11 . These notions can then be used to prove correctness of approximate tranformations, as follows. First, the designer of an approximate transformation gives an approximation rule for the judgment ` e a ≤ q : A, stating that expressions matching e can be transformed into a with error at most q using approximation type A. Approximation rules are explained in more detail below, but, as an example, our sin approximation above can be captured as: ` sin λx. x ≤ λx. λq. q + x − sin(x) : Fl ⇒ Fl After formulating this rule, it must then be proved correct, by proving that e ∈ {|a|}qA for all e, a, and q that match the rule. As argued above, a significant benefit of this approach is that it allows approximate transformations to be analyzed and proved correct on their own, without reference to the programs in which they are being used. 3. System F as a Logical Language In the remainder of this paper, we work in System F with algebraic datatypes (ADTs) (see, e.g., Pierce [13]), extended to allow uncountability and incomputability. Specifically, ADTs can have uncountably many constructors, while built-in operations can perform potentially incomputable functions. 
The former is useful for modeling the real numbers, while the latter is useful for modeling operations on real numbers such as comparison that are incomputable in general (e.g., [7], [1]). We refer to this language as F ADT+ . The only additional technical machinery needed for these extensions to System F is to require that all builtin functions are definable in the meta-language (set theory or type theory). More specifically, we assume as given some set of built-in operation symbols f , each with a given type τf and meta-language relation Rf relating allowed inputs for the function defined by f to their corresponding outputs. Further, we assume that each Rf obeys τf and relates only one output to any given set of inputs. The small-step evaluation relation of System F is then be extended to allow f applied to any values to evaluate to any output related to those inputs by Rf . The details are straightforward but tedious, and so are omitted here. We use −→ to denote the small-step evaluation relation of F ADT+ ; i.e., e −→ e0 means that e evaluates in one step to e0 . Typing contexts Γ are lists of type variables X that are considered in scope, along with pairs x : τ of expression variables x along with their types. Substitutions for expressions e1 through en for variables x1 through xn are written [e1 /x1 , . . . , en /xn ]. These are represented with the letter σ, and we define capture-avoiding substitution σe for expressions and στ for types in the usual manner. Well-formedness of types and expressions is given respectively by the kinding judgment Γ ` τ : ∗ and typing judgment Γ ` e : τ , defined in the standard manner. We write JΓ ` τ K for the set of all expressions e such that Γ ` e : τ . Expressions or types with no free variables are ground. Typing is extended to typing contexts Γ and substitutions in the usual manner: ` Γ indicates that all types in Γ are well-kinded, while Γ ` σ : Γ0 indicates that Dom(σ) = Dom(Γ0 ), Γ ` σ(X) : ∗ for all X ∈ Dom(σ), and Γ ` σ(x) : Γ0 (x) for all x ∈ Dom(σ). We assume all types are wellkinded and all expressions, contexts, and substitutions are well-typed below. We write e ↓ to denote that e is terminating. An expression context C an expression with exactly one occurrence of a “hole” {}, and C{e} denotes the (noncapture-avoiding) replacement of {} with e. Contextual equivalence e1 ∼ = e2 is then defined to hold iff, for all C and τ 0 such that · ` C{ei } : τ 0 for i ∈ {1, 2}, we have that C{e1 } ↓ iff C{e2 } ↓. We use ⊥ to denote an arbitrary non-terminating expression. fix(λx. x) of a given type τ . We assume below type R of real numbers, represented with an uncountable number of constructors, and the type R+∞ of the non-negative reals with infinity. We use 0R for the R-constructor corresponding to the real number 0, and ≤R , +R , and dR for the function symbols whose functions perform real number comparison, addition, and absolute difference, both on R and, abusing notation slightly, on R+∞ . Since F ADT+ can contain incomputable built-in operations, we can additionally use it as a metalogic by embedding any relations of a given metalanguage, such as set theory or type theory, as builtin operations. We assume an ADT Prop with the sole constructor > :: Prop, which will intuitively be used as the type of propositions; a “true” proposition terminates to > and a “false” one is non-terminating, i.e., is contextually equivalent to ⊥. 
A meta-language relation R can then be added to F ADT+ as a function ~ τ → Prop iff R is a set of symbol fR of type ∀X.~ 0 ~ tuples hτ , ~ei such that · ` fR [τ~0 ] ~e : Prop that is closed under contextual equivalence and contains only terminating expressions ~e. Note that this latter restriction is not too significant because we can always ~ use a “thunkified” relation R0 of type ∀X.(Unit → τ1 ) → . . . → (Unit → τn ) → Prop such that hτ~0 , ~ei ∈ R iff hτ~0 , λx. e1 , . . . , λx. en i ∈ R. As an example, we can immediately see that the thunkified version of contextual equivalence itself is a relation of type ∀X.(Unit → X) → (Unit → X) → Prop. Lemma 1 (Consistency): For any class of built-in operations f with functions Ff , > ∼ = ⊥ does not hold. In the below, φ refers to expressions of type ~ τ → Prop. A constrained context is a pair hΓ; φi ∀X.~ of a context Γ and an expression φ such that Γ ` φ : Prop. A substitution σ satisfies hΓ; φi, written σ  hΓ; φi, iff · ` σ : Γ and σ(φ) ∼ = >. We say that hΓ; φi entails φ, written hΓ; φi ` φ true, iff ∀σ  hΓ; φi.σ(φ) ∼ = > holds. Finally, we say that σ is a substitution from hΓ1 ; φ1 i to hΓ2 ; φ2 i, written hΓ2 ; φ2 i ` σ : hΓ1 ; φ1 i, iff Γ2 ` σ : Γ1 and hΓ2 ; φ2 i ` σ(φ1 ) true. 4. A Semantics of Program Approximation In this section, we define approximation types and illustrate them with examples. The main technical difficulty is that the straightforward way to handle free (expression and type) variables involves a circular definition. Specifically, to fit with the logical relations approach, e ∈ {|a|}qA for e, a, and q with free variables should only hold iff σe ∈ {|σa|}σq A for all substitutions σ for exact variables xe , approximate variablesq xa , σx and error variables xq such that σxe ∈ {|σxa |}A0 for some approximation A0 . This defines approximation types in terms of approximation types! To remove this circularity, we first define approximation types that handle a given set of free variables arbitrarily, in Section 4.1. We then show in Section 4.2 how to lift ground approximation types into approximation functions, which uniformly handle any given context of variables in the “right” way. 4.1. Approximation Types Definition 1 (Expression-Preorder): Let Γ ` τ : ∗. A relation ≤ is called a (Γ ` τ )-expression preorder iff it is a preorder (reflexive and transitive) over ∼ =equivalence classes of JΓ ` τ K. Definition 2 (Quantification Type): A Quantification type is a is a tuple Q = hΓ, Q, ≤, +, 0i of: a typing context Γ and a type Q such that Γ ` Q : ∗; a (Γ ` Q)-expression preorder ≤; and two expressions + (sometimes written in infix notation) and 0 of types Q → Q → Q and Q (relative to Γ), respectively, such that 0 is a least element for ≤, + is monotone with respect to ≤ and hJΓ ` QK, +, 0i forms a monoid with respect to the equivalence relation Q= (≤ ∩ ≥). Stated differently, the following must hold: Closedness: e1 , e2 ∈ JΓ ` QK implies e1 + e2 ∈ JΓ ` QK; Monotonicity: e1 ≤ e01 and e2 ≤ e02 implies e1 + e2 ≤ e01 + e02 ; Leastness of 0: 0 ≤ e for all e ∈ JΓ ` QK; Identity: e + 0 Q e for all e ∈ JΓ ` QK; Commutativity: e1 +e2 Q e2 +e1 for ei ∈ JΓ ` QK; and Associativity: e1 + (e2 + e3 ) Q (e1 + e2 ) + e3 for ei ∈ JΓ ` QK. Such a tuple is sometimes called a Q- or (Γ ` Q)Quantification type. If Γ = · then Q is said to be ground. In the below, we often omit Γ when it is clear from context, writing hQ, ≤, +, 0i. We also write QQ , ≤Q , +Q , and IQ for the corresponding elements of Q. 
We sometimes omit the Q subscript where Q can be inferred from context. The following lemma gives a final, implied constraint, that ⊥ is always infinity: Lemma 2: For any quantification type Q, e ≤Q ⊥ for all e ∈ JΓ ` QK. Example 1 (Non-Negative Reals): QR = hR+∞ , ≤R , +R , 0R i is a ground quantification type that corresponds to the standard ordering and addition on the non-negative reals. Example 2 (Non-Negative Real Functions): For any ground τ , hτ → R+∞ , ≤τ →R+∞ , λx1 . λx2 . λy. x1 y +R x2 y, λy. 0R i is a ground quantification type, where addition is performed pointwise on functions and e1 ≤τ →R+∞ e2 iff e1 e ≤R e2 e for all ground e of type τ ; i.e., iff the output of e1 is always no greater than that of e2 . Definition 3 (Approximate Equality): Let Γq and Γe be any contexts with disjoint domains, let Q be a (Γq , Γe ` Q))-quantification type, and let E by any type such that Γe ` E : ∗. A (Q, Γe , E)approximate equality relation is a ternary relation e1 ≈q e2 ⊆ JΓq , Γe ` QK × JΓe ` EK × JΓe ` EK, where q ∈ JΓq , Γe ` QK and each ei ∈ JΓe ` EK, that satisfies the following: Upward Closedness: e1 ≈q e2 and q ≤ q 0 implies 0 e 1 ≈q e 2 ; Reflexivity: e1 ∼ = e2 implies e1 ≈0Q e2 ; Symmetry: e1 ≈q e2 implies e2 ≈q e1 ; Triangle Inequality: e1 ≈q1 e2 and e2 ≈q2 e3 implies e1 ≈q1 +q2 e3 ; and Completeness: e1 ≈⊥ e2 for all e1 , e2 ∈ JΓ ` EK. An approximation equality relation is ground iff Γe = Γq = ·. In the below, we often omit Γq and Γe when clear from context. We use a subscript to denote which approximate equality relation is intended, as in e1 ≈qA e2 , when it is not clear from context. Example 3 (Reals): The relation e1 ≈qR e2 that holds iff q ∼ = ⊥, or e1 ∼ = e2 , or |e1 − e2 | ≤R q is a ground (R+∞ , R)-approximate equality relation that corresponds to the standard distance over the reals. Example 4 (Real Functions): The relation e1 ≈qR→R e2 that holds iff q ∼ = ⊥, or e1 ∼ = e2 , or |(e1 r) − (e2 r)| ≤R (q r r+ ) for all r ∈ R and r+ ∈ R+∞ , is a ground (R → R+∞ → R+∞ , R → R)approximate equality relation, where, intuitively, the distance between two functions e1 and e2 is given by a function q that bounds the distance between their outputs for each input. Definition 4 (Approximation Type): Let Γe , Γa , and q Γ be three domain-disjoint contexts. A (Γe , Γa , Γq )approximation type is a tuple hQ, E, A, ≈, {| · |}· i of: a (Γq , Γe ` Q)-quantification type Q; types E and A, called respectively the exact and approximate types, such that Γe ` E : ∗ and Γa ` A : ∗; a (Q, Γe , E)approximate equality relation ≈; and a mapping {|a|}q : (JΓq , Γe ` QK × JΓa ` AK) → P(JΓe ` EK), where q ∈ JΓq , Γe ` QK and a ∈ JΓa ` AK, that satisfies: Error Weakening: q1 ≤ q2 implies {|a|}q1 ⊆ {|a|}q2 ; 0 Error Addition: e1 ∈ {|a|}q and e1 ≈q e2 implies 0 e2 ∈ {|a|}q+q ; Equivalence: If q ∼ = q0 , a ∼ = a0 , and e ∼ = e0 then q 0 0 q0 e ∈ {|a|} implies e ∈ {|a |} ; Approximate Equality: e1 , e2 ∈ {|a|}q implies e1 ≈q+q e2 ; and A ground approximation type is one where Γe = Γa = Γq = ·. In the below, we write A for approximation types, writing {|a|}qA for the set to which A maps q and a, · ≈·A · for the approximate equality relation, EA and AA for the exact and approximate types, QA for the quantification type, and QA , ≤A , and +A for the elements of QA . Again, we often omit the subscript A when it can be inferred from context. 
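Before turning to the floating-point examples below, the following small Python sketch gives an executable analogue of the approximate equality relations of Examples 3 and 4. It is an intuition aid only, not the formal semantics: floats stand in for true reals, the function case is checked on finitely many sample points rather than quantified over all inputs, and the names approx_eq_real and approx_eq_fun are ours.

import math

def approx_eq_real(e1, e2, q):
    # Analogue of Example 3: e1 and e2 are related at error q when q is
    # infinite or their absolute difference is at most q.
    return math.isinf(q) or abs(e1 - e2) <= q

def approx_eq_fun(f1, f2, q, samples):
    # Analogue of Example 4: the "distance" between two functions is itself
    # a function q(r, r_err) bounding the distance between their outputs.
    # Checked only on finitely many samples, so this is a test, not a proof.
    return all(approx_eq_real(f1(r), f2(r), q(r, 0.0)) for r in samples)

# pi is within 0.1415927 of 3.0 ...
assert approx_eq_real(math.pi, 3.0, 0.1415927)
# ... and sin is within the error function (r, rq) -> |r - sin(r)| + rq of
# the identity function, echoing the sin example of Section 2.
assert approx_eq_fun(math.sin, lambda x: x,
                     lambda r, rq: abs(r - math.sin(r)) + rq,
                     samples=[0.0, 0.1, 0.5, 1.0, 2.0])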
Example 5 (Floating-Point Numbers): Let e ∈ {|a|}qFl iff q ∼ = ⊥, or e ∼ = ⊥ and a ∼ = ⊥, or |e − real(a)| ≤R q where real maps (the value of) a to its corresponding real number. We then have that Fl = hQR , R, Fl, ≈R , {| · |}·Fl i is a ground approximation type where a real can be approximated by a floating-point number with an error given by the real-number distance between the two. Example 6 (Floating-Point Functions): Let e ∈ {|a|}qFl⇒Fl iff q ∼ = ⊥, or e ∼ = ⊥ and a ∼ = ⊥, or 0 0 0 q e q e e0 ∈ {|a a0 |}Fl for all e0 ∈ {|a0 |}qFl . We then have that Fl ⇒ Fl = hR ⇒ R+∞ ⇒ QR , R → R, Fl → Fl, ≈Fl⇒Fl , {| · |}·Fl⇒Fl i is a ground approximation type, where R ⇒ R+∞ ⇒ QR is the quantification type obtained from QR by applying the construction of Example 2 twice. Intuitively, this approximation type allows a real function f to be approximated by a floating-point function f 0 with an error q whenever q r qr bounds the error between calling f on exact real number r and calling f 0 on a floating-point number with at most distance qr from r. 4.2. Approximation Families Definition 5 (Approximation Families): Let Γ = Γe , Γa , Γq , ΓA for four domain-disjoint typing contexts and φ be an expression such that Γ ` φ : Prop. We say that F = hE, A, Q, 0, +, F i is a (Γe , Γa , Γq , ΓA , φ)-approximation family iff Γe ` E : ∗, Γa ` A : ∗, Γe , Γq ` Q : ∗, Γe , Γq ` 0 : Q, Γe , Γq ` 0 : Q → Q → Q, and F is a meta-language function from substitutions σ such that σ  hΓ; φi to ground approximation types A such that EA = σ(E), AA = σ(A), QA = σ(Q), 0A = σ(0), and +A = σ(+). We use a subscript F to denote the elements of F; e.g., EF denotes the exact type E of F. We write F(σ) for the approximation resulting from applying the F component to σ. A (·, ·, ·, ·, >)-approximation family is called ground. Note that the ground F are isomorphic to the ground approximation types A, since, if F is ground, then the domain of FF consists of the sole pair h·; ·i. We define approximation contexts Ξ with grammar: · Ξ, (xe , xa , xq ) : F Ξ, (X e , X a , X q ) : ξ Ξ, Γ.(φ) The form (xe , xa , xq ) : F introduces variables xe , xa , and xq such that xa is an approximation of xe with error xq in some approximation returned by approximation family F. The form (X e , X a , X q ) : ξ introduces type variables X e , X a , and X q , along with a variable ξ that quantifies over approximations of X e by X a with error X q . Finally, the form Γ.(φ) introduces additional variables in Γ and constraint φ. More formally, let |Ξ|e , |Ξ|a , |Ξ|q , and |Ξ|A be typing contexts that contain, respectively: all xe and X e ; all xa and X a ; all xq , X q , and contexts Γ in a Γ.(φ) form; and all ξ variables in Ξ. Further, let |Ξ|p be the conjunction of the following formuq las: xe ∈ {|xa |}x for each (xe , xa , xq ) : F; the formula isapprox[X e , X a , X q ] stating that ξ is an approximation of X e by X a with error X q for each (X e , X a , X q ) : ξ; and φ for each Γ.(φ). We use the abbreviations |Ξ|eq = |Ξ|e , |Ξ|q and |Ξ|eaqA = |Ξ|e , |Ξ|a , |Ξ|q , |Ξ|A . The approximation context Ξ is well-formed, written ` Ξ, iff ` |Ξ|eaqA and |Ξ|eaqA ` φ : Prop. We say F is a Ξ-approximation family, written Ξ ` F, iff F is a (|Ξ|e , |Ξ|a , |Ξ|q , |Ξ|A , |Ξ|p )approximation family. A σ is a substitution from Ξ to Ξ0 , written Ξ0 ` σ : Ξ, iff h|Ξ0 |eaqA ; |Ξ0 |p i ` σ : h|Ξ|eaqA ; |Ξ|p i. Although the functions F in approximation families return only ground approximations, we can create nonground approximation types from F as follows. 
Let Ξ be any approximation context and F be any Ξapproximation family. The notation Ξ F then denotes the (|Ξ|e , |Ξ|a , |Ξ|q )-approximation hQ, E, A, ≈ , {| · |}· i where Q = h|Ξ|eq , QF , ≤, 0F , +F i and: • • • q1 ≤ q2 iff ∀σ  h|Ξ|eaqA ; |Ξ|p i. σq1 ≤F (σ) σq2 ; e1 ≈q e2 iff ∀σ  h|Ξ|eaqA ; |Ξ|p i. σe1 ≈σq F (σ) σe2 ; and e ∈ {|a|}q iff ∀σ  h|Ξ|eaqA ; |Ξ|p i. σe ∈ {|σa|}σq F (σ) . We call an approximation formed this way a Ξapproximation. Theorem 1: If ` Ξ and Ξ ` F then Ξ F is a valid approximation type. Lemma 3 (Approximation Weakening): Let Ξ ` F and ` Ξ, Ξ0 . We then have that: Ξ, Ξ0 ` F; q1 ≤Ξ F q2 implies q1 ≤Ξ,Ξ0 F q2 ; e1 ≈qΞ F e2 implies e1 ≈qΞ,Ξ0 F e2 ; and e ∈ {|a|}qΞ F implies e ∈ {|a|}qΞ,Ξ0 F . We define substitution σ(F) into approximation families as yielding the approximation family hσE, σA, σQ, σ0, σ+, λσ 0 . F (σ 0 ◦ σ)i: Lemma 4 (Approximation Substitution): If Ξ, Ξ0 ` F and Ξ ` σ : Ξ0 then: Ξ ` σ(F); q1 ≤Ξ,Ξ0 F q2 implies σq1 ≤Ξ σ(F ) σq2 ; e1 ≈qΞ,Ξ0 F e2 implies q σe1 ≈σq Ξ σ(F ) σe2 ; and e ∈ {|a|}Ξ,Ξ0 F implies σe ∈ {|σa|}σq Ξ σ(F ) . Definition 6 (Π-Approximations): If F is a Ξ0 , Ξapproximation family, then Π(Ξ).F is the Ξ0 approximation family h|Ξ|e → EF , |Ξ|a → AF , |Ξ|eq → QF , λ|Ξ|eq . 0F , λ|Ξ|eq .+F , F 0 i where F 0 (σ) for σ  h|Ξ0 |eaqA ; |Ξ0 |p i is defined such that: q1 ≤F 0 (σ) q2 iff q1 |Ξ|eq ≤Ξ σ(F ) q2 |Ξ|eq ; e1 ≈qF 0 (σ) e2 iff q |Ξ|eq q e σ(F ) e2 |Ξ| ; and e ∈ {|a|}F 0 (σ) iff eq q |Ξ| {|a |Ξ|a |}Ξ σ(F ) . Intuitively, Π(Ξ).F forms e e1 |Ξ|e ≈Ξ e (e |Ξ| ) ∈ an approximation family where λ|Ξ| . e is approximated by λ|Ξ|a . a with error λ|Ξ|eq . q whenever e is approximated by a with error q in approximation context Ξ. When Ξ is just the single element (xe , xa , xq ) : F1 and F2 does not depend on the values substituted for xe , xa , or xq , then Π(Ξ).F2 , which we abbreviate as F1 ⇒ F2 , yields the notion of function approximation types discussed in Section 2 and in Example 6. To approximate the polymorphic type ∀X.E we use Π((X e , X a , X q ) : ξ, z 0 : X q , z + : X q → X q → X q .(z 0 ∼ = 0ξ ∧ z + ∼ = +ξ )).F This approximation family quantifies over the type variables X e , X a , and X q for the exact, approximate, and error types, as well as over the variables z 0 and z + for the zero error and error addition of the approximation type ξ. The latter variables are explicitly abstracted in order to allow error expressions to refer to them: recall that ξ is only bound in |Ξ|p , not |Ξ|q ; i.e., error terms refer only to X e and X q , not to ξ. Abusing notation slightly, we abbreviate the above approximation family as Π(X, z : ξ).F. 5. Verifying an Approximating Compiler As discussed in the Introduction, the long-term goal of this work is to enable language-based approximations, where a compiler or other tool performs approximate transformations in a correct and automated manner. In this section, we show how to verify such a tool, the goal of the current work, with the semantics given in the previous section. Specifically, we consider a tool that performs two transformations: it compiles real numbers into floating-point implementations; and it optionally performs loop perforation [12], [19]. For the current work, we assume only that our tool can be specified with an approximate compilation judgment Ξ ` e a ≤ q : A such that this judgment can be derived whenever the tool might approximate exact expression e by a with error bound by q using approximation type A and assumptions Ξ. 
Intuitively, each rule of this judgment corresponds to an approximate transformation that the tool might perform; for example, the approximation rule of Section 2, for approximating sin by λx. x, might be included. We ignore the specifics of how the tool chooses which approximations to use where, as long as all possible choices are contained in the approximate compilation judgment. To verify such a tool, we then prove soundness of its approximate compilation judgment. Soundness here means that Ξ ` e a ≤ q : A implies e ∈ {|a|}qΞ A . This can be proved in a local, modular fashion, by verifying each approximation rule individually; more specifically, if an approximation rule derives Ξ ` e a ≤ q : A from assumptions Ξi ` ei ai ≤ qi : Ai for 1 ≤ i ≤ n and side conditions φj for 1 ≤ j ≤ m, then the rule is correct iff e ∈ {|a|}qΞ A holds whenever ei ∈ {|ai |}qΞii Ai for all i and φj holds for all j. This also allows extensibility, since additional rules can always be added as long as they are proved correct. In the remainder of this section, we consider rules that would be used in our example tool, including: compositionality rules (Section 5.1); rules for replacing real numbers by floating-point implementations (Section 5.2); and a rule for performing loop perforation (Section 5.3). 5.1. Compositionality Rules In order to combine errors from individual approximate transforms into a single, whole-program error, we now introduce the compositionality rules. These rules, given in Figure 2, are essentially the identity, stating that each expresison construct can be approximated by itself; however, they show how to build up and combine error expressions for different contructs. The first rule, A-W EAK, allows the error bound to be weakened from q to any greater error q 0 . The A-VAR rule approximates variable xe by xa with error xq when these variables are associated in Ξ. The rule A-L AM approximates a lambda-abstraction λxe . e with a lambda-abstraction λxa . a by approximating the body e by a, using the error function λxe . λxq . q that abstracts over the input xe and its approximation error xq . The A-A PP rule approximates applications e1 e2 by applying the approximation of e1 to that of e2 and applying the error for e1 to both e2 and its error. The A-TL AM rule approximates polymorphic lambdas ΛX e . e by approximating the body e in the extended approximation context Ξ, X, z : ξ, recalling the abbreviation X, z : ξ from Section 4.2 that abstracts the various components of approximation families. A-TA PP approximates type applications e[EF ] where the type involved is the exact type of some approximation family F. This is accomplished by first approximating e to some a with error q in the polymorphic approximation Π(X, z : ξ).F 0 introduced in Definition 6, and then applying a to the approximate type AF of F. The error q is applied to the necessary components of F, and the resulting approximation is F 0 with F substituted for ξ and all the appropriate Ξ`e a ≤ q : F q ≤Ξ Ξ`e a ≤ q0 : F F q0 A-W EAK e Ξ ` λx . e Ξ ` e1 (xe , xa , xq ) : F ∈ Ξ A-VAR Ξ ` xe xa ≤ xq : F a1 ≤ q1 : Π((xe , xa , xq ) : F).F 0 Ξ ` e2 Ξ ` e1 e2 a2 ≤ q2 : F A-A PP a1 a2 ≤ q1 e2 q2 : [e/x , a/x , q/xq ]F 0 e Ξ`e Ξ, X, z : ξ ` e a≤q:F A-TL AM Ξ ` ΛX . e ΛX a . a e q 0 + ≤ ΛX . ΛX . λz . λz . q : Π(X, z : ξ).F e a a ≤ q : Π(X, z : ξ).F 0 Ξ`F Ξ ` e[EF ] a[AF ] ≤ q[EF , QF ] 0F +F : [F/ξ, . . 
.]F 0 A-TA PP a ≤ q : Π((xe , xa , xq ) : F).(xe ∼ = fix e ∧ xa ∼ = fix a ∧ xq ∼ = fix(q (fix e))) ⇒ F Ξ`e Ξ ` fix e Ξ`e Ξ, x : F ` e a ≤ q : F0 A-L AM a e q λx . a ≤ λx . λx . q : Π((xe , xa , xq ) : F).F 0 a ≤ q0 : F 0 A-F IX fix e ≤ fix(q (fix e)) : F ∀b ∈ {True, False}.Ξ ` eb ab ≤ qb : F Ξ ` if e then eTrue else eFalse ∀(b, b0 ) ∈ {True, False}.eb ∈ {|ab0 |}q q0 Ξ,b∈{|b0 |} 0 F F if a then aTrue else aFalse ≤ q : F A-I F Figure 2. Compositionality Rules for Approximate Compilation components of F substituted for the X and z variables. This is abbreviated as [F/ξ, . . .]F 0 . Fixed-points fix(e) are approximated using A-F IX. This approximates e to a with error q, applying fix to the results in the conclusion. The approximation family used for e is F ⇒ F, augmented with the assumption that the inputs xe , xa , and xq are equal to the exact, approximate, and error fix-expressions in the conclusion of the rule. Finally, if-expressions are approximated with A-I F. First, each component of the if-expression is approximated; the condition can use an arbitrary approximation family F 0 , while the then and else branches must use the same family as the whole expression. The final condition requires that the output error q bounds the error between the branches taken in the exact and approximate expressions, even if one takes the then branch and the other takes the else branch. This is stated by quantifying over all four combinations of True or False in the exact and approximate conditions that meet the error q 0 computed for approximating the if-condition, and then requiring that the corresponding then or else branches are within error q. Lemma 5: Each rule of Figure 2 is sound. 5.2. Floating-Point Approximation To approximate real-number programs with their floating-point equivalents, we use the Fl approximation type formalized in Example 5. We then add rules Ξ`r Ξ ` opR real(r) ≤ |r − real(r)| : Fl opFloat ≤ opq : Fl ⇒ Fl ⇒ Fl for each real number r and each built-in binary operation (such as +, ∗, etc.) op, where opR and opFloat are the version of op for reals and floating-points, respectively, and opq is the error function calculating the size of the interval error in the output from those of the inputs. The error functions opq to use for the various operations can be derived in a straightforward manner by considering the smallest error that bounds the difference between the real result of opr and any potential result of opFloat on floating-point numbers in the input interval. The infinite error ⊥ is returned in case of overflow or Not-A-Number results. For instance, the error +q can defined (in pseudocode) as: +q xe xq y e y q = let I = round([(xe − xq ) + (y e − y q ), (xe + xq ) + (y e + y q )]) in if ∃r ∈ I.|r| ≥ MAXFLOAT then ⊥ else maxr0 ∈I |r0 − r| where round rounds all reals in an interval to floatingpoint numbers (using the current rounding mode) and MAXFLOAT is the maximum absolute value of the floating-point representation being used. The errors for other operations can be defined similarly. Using these rules with the compositionality rules of Figure 2 yields an interval-based floating-point error analysis that works for higher-order and even polymorphic terms. Although recent work has given more precise floating-point error analyses than intervals [10], [5], we anticipate, as future work, that such approaches can also be incorporated into our framework, allowing them to be used on higher-order, polymorphic programs. 
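To make the shape of such error functions concrete, the following is a minimal Python sketch of the +q error function given in pseudocode above. It assumes exact reals are modelled by Python floats, uses math.nextafter as a crude stand-in for the outward (directed) rounding performed by round, and reads the unbound r in the pseudocode as the exact sum xe + ye; the names add_error and maxfloat are ours and are not part of the formal system.

import math

def add_error(xe, xq, ye, yq, maxfloat=1.7976931348623157e308):
    # Interval of sums the floating-point addition may have to represent:
    # each approximate input lies within distance xq (resp. yq) of its exact value.
    lo = (xe - xq) + (ye - yq)
    hi = (xe + xq) + (ye + yq)
    # Crude outward rounding of the interval endpoints; a real analysis would
    # use the hardware's directed rounding modes here.
    lo = math.nextafter(lo, -math.inf)
    hi = math.nextafter(hi, math.inf)
    # Overflow or Not-A-Number yields the infinite ("bottom") error.
    if not (abs(lo) < maxfloat and abs(hi) < maxfloat):
        return math.inf
    exact = xe + ye
    return max(abs(lo - exact), abs(hi - exact))

The error functions for the other built-in operations follow the same pattern: propagate the input intervals through the operation, round outward, and take the worst-case distance from the exact result.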
Theorem 2: The rules listed above for compiling reals to floating-points are sound. Ξ ` e1 a1 ≤ q1 : F ⇒ F ⇒ F Ξ ` e2 a2 ≤ 0 : Nat x Ξ ` e3 a3 ≤ q3 : Nat ⇒ F e1 x ≈qΞ,x:N F e1 bxcK q0 red-seq e1 e2 e3 ≈Ξ F red-seq e1 de2 eK e3 Ξ ` red-seq e1 e2 e3 red-seq (λx. (a1 x)K ) (a2 /K) (λx. a3 (x ∗ K)) ≤ (red-seq +F e2 (λx. (q3 bxcK 0) +F (q x)))+F q 0 : F Figure 3. Loop Perforation 5.3. Loop Perforation Loop perforation [12], [19] is a powerful approximate transformation that takes a loop which combines f (0), f (1), etc., for some computationally intensive function f , and only performs every nth iteration, repeating the values that are computed n times. We formalize a simplified version of loop perforation as operating on reductions red-seq e1 e2 e3 , defined as the function that reduces, using e1 , the sequence of values of e3 from 0 less than or equal to e2 . The rule in Figure 3 shows how to perforate this reduction by first approximating each ei to some ai with error qi . The rule requires that the number e2 of sequence elements can be approximated with 0 error; this is not a fundamental limitation, but is simply made here to simplify the exposition. The approximation type Nat is defined similarly to Fl, except that the natural numbers are used for the exact, approximate, and error types. Next, the rule finds an error function q that, when applied to input x, bounds the error between the exact sequence value e1 x and the xth value in the perforated sequence, which can be calculated as e1 bncK . Finally, the rule considers the fact that n may not be an exact multiple of K, in which case the perforated loop actually computes the result of de2 eK iterations. An error q 0 is thus synthesized to bound the error between this perforated result and the original reduction. The resulting approximation then performs the perforation computation as described above, and the error sums the errors (q3 bxcK 0), capturing the approximation error of a3 , with (q x), capturing the error of using bxcK instead of x, for each value x, and then adds the error q 0 . Theorem 3: The rule of Figure 3 is sound. 6. Related Work A number of recent papers relate to program approximations. Possibly the closest to this work is the work of Reed and Pierce [16], because they consider a higher-order (though not polymorphic) input language. They show how to perform an approximate program transformation that adds noise to a database query to ensure differential privacy, i.e., that a query cannot violate the privacy of a single individual recorded in the database. In order to ensure that adding noise does not change the query results too much, a type system is used to capture functions that are K-Lipschitz, meaning that a change of δ in the input yields a change of at most K ∗ δ in the output. This condition can be captured in our system by the error λxe.λxq. K ∗ xq in an approximation of a function. Loop perforation [12], [19] transforms certain mapreduce programs, written as for-loops in C, to perform only a subset of their iterations. Section 5.3 shows how to capture a variant of this transformation in our system. More recent work [21] has extended loop perforation to sampling, which takes a random (instead of a controlled) sample of the iterations of a loop. This work also adds substitution transformations, where different implementations of the same basic operations (such as sin or log) are substituted to try to trade off accuracy for performance. 
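To make the perforated reduction of Section 5.3 concrete, the following is a minimal Python sketch in which only every K-th sequence element is evaluated and each evaluated value is folded in K times. The names red_seq and perforated_red_seq, the toy summation, and the handling of the final short block are illustrative choices of ours rather than part of the calculus; the printed difference is the quantity that the error q in the rule of Figure 3 is meant to bound.

import math

def red_seq(combine, n, seq):
    # Exact reduction: fold seq(0), seq(1), ..., seq(n) together with combine.
    acc = seq(0)
    for i in range(1, n + 1):
        acc = combine(acc, seq(i))
    return acc

def perforated_red_seq(combine, n, seq, K):
    # Perforated reduction: evaluate seq only at multiples of K and reuse each
    # sampled value K times (the last block may be shorter when K does not
    # divide n + 1), so roughly (n + 1) / K expensive evaluations remain.
    acc = None
    for i in range(0, n + 1, K):
        value = seq(i)                       # the only expensive evaluations kept
        for _ in range(min(K, n + 1 - i)):
            acc = value if acc is None else combine(acc, value)
    return acc

exact = red_seq(lambda a, b: a + b, 999, lambda i: math.sin(i / 100.0))
approx = perforated_red_seq(lambda a, b: a + b, 999, lambda i: math.sin(i / 100.0), K=10)
print(abs(exact - approx))                   # the error the perforation rule bounds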
EnerJ [18] allows the programmer to specify, with type modifiers, whether data is exact or inexact. Inexact data means the program can tolerate errors in this data. Such data is then stored in low-power memory, which is cheaper but is susceptible to random bit flips. Carbin et al. [2] present a programming model with relaxation, where certain values in a program are allowed to vary in a pre-defined way, to model errors. The authors then show how to verify properties of relaxed programs despite these errors. Although this work addresses some of the same questions as ours, one significant drawback is that it cannot represent the relationship between expressions of different types, such as real-valued functions and functions on floatingpoint numbers used to implement them. Approximate bisimulation [9] is a new technique for relating discrete, continuous and hybrid systems in a manner that can tolerate errors in a system. We are still investigating the relationships between our system and approximate bisimulation, but one key difference with our work is that this approach considers only transition systems, and so cannot be applied to higherorder programs, programs with recursive data, etc.; however, such systems still encompass the large and practical class of control systems. Chaudhuri et al. [4], [3] have investigated a static analysis for proving robustness of programs, which they then argue is a useful precondition for approximate transformations. They define robustness as the K-Lipschitz condition, which, again, can be captured in our system as the error λxe.λxq. K ∗ xq . Floating-point error analysis [10], [5] bounds the error between a floating-point program and the syntaci- cally equivalent real-number program. We showed how to accomplish this in our setting in Section 5.2, yielding what we believe is the first floating-point error analysis to work for higher-order and polymorphic programs. Although state-of-the-art analyses use affine expressions, instead of intervals as we used above, we anticipate that we will be able to accommodate these approaches in our semantics as well, thereby adapting them to higher-order and polymorphic programs. On a technical note, our quantification types and approximate equality relations were inspired by Flagg and Kopperman’s continuity spaces [8], one of the few works that considers a more general notion of distance than metric spaces. Specifically, the distributive lattice completion of a quantification type corresponds exactly to Flagg and Kopperman’s abstract notion of distance, called a quantale. A similar transformation turns an approximate equality relation into a continuity space, their generalization of metric spaces. 7. Conclusion We have introduced a semantics for approximate program transformations. The semantics relates an exact program to an approximation of it, and quantifies this relationship with an error expression. Rather than specifying errors solely with real numbers and metric spaces, our approach is based on approximation types, an extension of logical relations that allows us to use, for example, functions as the errors for approximations of functions, and polymorphic types as the errors for polymorphic types. We then show how approximation types can be used to verify approximate transforms in a modular, composable fashion, by proving soundness of each transform individually and by including a set of compositionality rules, also proved correct, that combine errors from individual approximate transforms into a whole-program error. 
References

[1] Boehm, H.J., Cartwright, R., Riggle, M., O’Donnell, M.J.: Exact real arithmetic: a case study in higher order programming. In: LISP and Functional Programming (1986)
[2] Carbin, M., Kim, D., Misailovic, S., Rinard, M.: Proving acceptability properties of relaxed nondeterministic approximate programs. In: PLDI (2012)
[3] Chaudhuri, S., Gulwani, S., Lublinerman, R.: Continuity analysis of programs. In: POPL (2010)
[4] Chaudhuri, S., Gulwani, S., Lublinerman, R., Navidpour, S.: Proving programs robust. In: FSE (2011)
[5] Darulova, E., Kuncak, V.: Trustworthy numerical computation in Scala. In: OOPSLA (2011)
[6] Dreyer, D., Ahmed, A., Birkedal, L.: Logical step-indexed logical relations. Logical Methods in Computer Science 7(2) (2011)
[7] Edalat, A., Potts, P.J.: A new representation for exact real numbers. In: MFPS (2005)
[8] Flagg, B., Kopperman, R.: Continuity spaces: reconciling domains and metric spaces. Theoretical Computer Science 1 (1997)
[9] Girard, A., Pappas, G.J.: Approximate bisimulation: a bridge between computer science and control theory. European Journal of Control 17(5–6), 568–578 (2011)
[10] Goubault, E., Putot, S.: Static analysis of finite precision computations. In: VMCAI (2011)
[11] Manes, E.G.: Algebraic approaches to program semantics. Springer-Verlag New York, Inc. (1986)
[12] Misailovic, S., Roy, D., Rinard, M.: Probabilistically accurate program transformations. In: SAS (2011)
[13] Pierce, B.: Types and Programming Languages. The MIT Press (2002)
[14] Ramsey, N., Pfeffer, A.: Stochastic lambda calculus and monads of probability distributions. In: POPL (2002)
[15] Reed, G.M., Roscoe, A.W.: Metric spaces as models for real-time concurrency. In: MFPS III (1988)
[16] Reed, J., Pierce, B.C.: Distance makes the types grow stronger: a calculus for differential privacy. In: ICFP (2010)
[17] Reynolds, J.C.: Types, abstraction and parametric polymorphism. Information Processing 83(1), 513–523 (1983)
[18] Sampson, A., Dietl, W., Fortuna, E., Gnanapragasam, D., Ceze, L., Grossman, D.: EnerJ: Approximate data types for safe and general low-power computation. In: PLDI (2011)
[19] Sidiroglou, S., Misailovic, S., Hoffmann, H., Rinard, M.: Managing performance vs. accuracy trade-offs with loop perforation. In: FSE (2011)
[20] Turi, D., Rutten, J.: On the foundations of final coalgebra semantics: non-well-founded sets, partial orders, metric spaces. Mathematical Structures in Computer Science 8(5), 481–540 (1998)
[21] Zhu, Z.A., Misailovic, S., Kelner, J.A., Rinard, M.: Randomized accuracy-aware program transformations for efficient approximate computations. In: POPL (2012)
1 Spatial Channel Covariance Estimation for the Hybrid MIMO Architecture: A Compressive Sensing Based Approach arXiv:1711.04207v1 [] 11 Nov 2017 Sungwoo Park and Robert W. Heath Jr. Abstract Spatial channel covariance information can replace full knowledge of the entire channel matrix for designing analog precoders in hybrid multiple-input-multiple-output (MIMO) architecture. Spatial channel covariance estimation, however, is challenging for the hybrid MIMO architecture because the estimator operating at baseband can only obtain a lower dimensional pre-combined signal through fewer radio frequency (RF) chains than antennas. In this paper, we propose two approaches for covariance estimation based on compressive sensing techniques. One is to apply a time-varying sensing matrix, and the other is to exploit the prior knowledge that the covariance matrix is Hermitian. We present the rationale of the two ideas and validate the superiority of the proposed methods by theoretical analysis and numerical simulations. We conclude the paper by extending the proposed algorithms from narrowband massive MIMO systems with a single receive antenna to wideband systems with multiple receive antennas. I. I NTRODUCTION One way to increase coverage and capacity in wireless communication systems is to use a large number of antennas. For instance, millimeter wave systems use large antenna arrays to obtain high array gain, thereby increasing cellular coverage [1], [2]. Sub-6 GHz systems are also likely to equip many antennas at a base station (BS) to increase cellular spectral efficiency by transmitting data to many users simultaneously via massive MIMO systems [3], [4]. Using a large number of antennas in a conventional MIMO architecture, however, results in high cost S. Park and R. W. Heath Jr. are with the Wireless Networking and Communication Group (WNCG), Department of Electrical and Computer Engineering, The University of Texas at Austin, TX, 78701 USA. (e-mail: {swpark96, rheath}@utexas.edu). This work is supported in part by the National Science Foundation under Grant No. 1711702, and by a gift from Huawei Technologies. 2 and high power consumption because each antenna requires its own RF chain [5]. To solve this problem, hybrid analog/digital precoding reduces the number of RF chains by dividing the linear process between the analog RF part and the digital baseband part. Several hybrid precoding techniques have been proposed for single-user MIMO [6]–[9] and multi-user MIMO [10]–[14] in either sub-6 GHz systems or millimeter wave systems. Most prior work on hybrid precoding assumes that full channel state information at the transmitter (CSIT) for all antennas is available when designing the analog precoder. Full CSIT, however, is difficult to obtain even in time-division duplexing (TDD) systems if many antennas are employed. Furthermore, the hybrid structure makes it more difficult to estimate the full CSIT of the entire channel matrix for all antennas because the estimator can only see a lower dimensional representation of the entire channel due to the reduced number of RF chains. Although some channel estimation techniques for the hybrid architecture were proposed by using compressive sensing techniques that exploit spatial channel sparsity [15]–[17], these techniques assume that channel does not change during the estimation process which requires many measurements over time. Consequently, these channel estimation techniques can be applied only to slowly varying channels. 
Unlike the hybrid precoding techniques that require full CSIT, there exists another type of hybrid precoding that uses long-term channel statistics such as spatial channel covariance instead of full CSIT in the analog precoder design [13], [14], [18], [19]. This approach has advantages over using full CSIT. First, the spatial channel covariance is well modeled as constant over many channel coherence intervals [18], [20]. Second, the spatial channel covariance is constant over frequency in general [21], so covariance information is suitable for the wideband hybrid precoder design problem where one common analog precoder must be shared by all subcarriers in wideband OFDM systems [9]. For these reasons, spatial channel covariance information is a promising alternative to full CSIT for hybrid MIMO architecture. Although several techniques have been developed to estimate spatial channel covariance or its simplified version such as subspace or angle-of-arrival (AoA) for hybrid architecture, these methods have practical issues. For example, the sparse array developed in [22] and the coprime sampling used in [23] reduce the number of RF chains for the covariance or AoA estimation as they disregard some of the antennas by exploiting redundancy in linear arrays of equidistant antennas. These methods, however, have a limitation on the configuration of the number of RF chains and antennas. In [24], various estimation methods were proposed based on convex 3 optimization problems coupled with the coprime sampling. The proposed methods, though, require many iterations to converge. A subspace estimation method using the Arnoldi iteration was proposed for millimeter wave systems in [25]. The estimation target of this method is not the subspace of a spatial channel covariance matrix but that of a channel matrix itself, and the channel matrix must be constant over time during the estimation process. Consequently, it is difficult to apply the method in [25] to time-varying channels. The subspace estimation method proposed in [26] does not use any symmetry properties of the covariance matrix, while the AoA estimation for the hybrid structure in [27] does not exploit the sparse property of millimeter wave channels. In [28], [29], AoA estimation methods for the hybrid structure were proposed by applying compressive sensing techniques with vectorization but do not fully exploit the slow varying AoA property. Instead of using a vector-type compressive sensing, matrixtype compressive sensing techniques were developed for so-called multiple measurement vector (MMV) problems [30]–[33]. Most prior work on the MMV problems, however, assumes that the sensing matrix is fixed over time to model the problem in a matrix form, which is not an efficient strategy when the measurement size becomes as small as the sparsity level. In this paper, we propose a novel spatial channel covariance estimation technique for the hybrid MIMO architecture. Considering spatially sparse channels, we develop the estimation technique based on compressive sensing techniques that leverage the channel sparsity. Between two different approaches in the compressive sensing field (convex optimization algorithms vs. greedy algorithms), we focus on the greedy approach because they provide a similar performance to convex optimization algorithms in spite of their simpler complexity [34]. 
Based on well-known greedy algorithms such as orthogonal matching pursuit (OMP) and simultaneous OMP (SOMP), we improve performance by applying two key ideas: one is to use a time-varying sensing matrix, and the other is to exploit the fact that covariance matrices are Hermitian. Motivated by the fact that SOMP is a generalized version of OMP, we develop a further generalized version of SOMP by incorporating the proposed ideas. The algorithm modification is simple, but the performance improvement is significant, which is demonstrated by numerical and analytical results. We first develop the spatial channel covariance estimation work for a simple scenario where a mobile station (MS) has a single antenna in narrowband systems. Preliminary results were presented in the conference version of this paper [35]. In this paper, we add two new contributions to our prior work. First, we present theoretical analysis to validate the rationale of the two proposed ideas and show the superiority of the proposed algorithm. The theoretical analysis 4 demonstrates that the use of a time-varying sensing matrix dramatically improves the covariance estimation performance, in particular, when the number of RF chains is not so large and similar to the number of channel paths. The analytical results also disclose that exploiting the Hermitian property of the covariance matrix provides an additional gain. Second, we extend the estimation work to other scenarios. Considering that an MS as well as a BS has hybrid architecture with multiple antennas and RF chains, we modify the proposed algorithm to adapt to the situation where analog precoders as well as analog combiners change over time. We also modify the algorithm for the wideband systems by using the fact that frequency selective baseband combiners do not improve the estimation performance. We use the following notation throughout this paper: A is a matrix, a is a vector, a is a scalar, and A is a set. kak0 and kak2 are the l0 -norm and l2 -norm of a vector a. kAkF denotes a Frobenius norm. AT , AC , A∗ , and A† are transpose, conjugate, conjugate transpose, and Moore-Penrose pseudoinverse. [A]i,: and [A]:,j are the i-th row and j-th column of the matrix A. [A]:,S denotes a matrix whose columns are composed of [A]:,j for j ∈ S. If there is nothing ambiguous in the context, [A]:,j , [A]:,S , [At ]:,j , and [At ]:,S are replaced by aj , AS , at,j , and At,S . A \ B is the relative complement of A in B, diag(A) is a column vector whose elements are the diagonal elements of A, and |A| is the cardinality of A. A ⊗ B, A } B, and A B denote the Kronecker product, the Hadamard product, and the column-wise Khatri-Rao product. (K) A (K) A B is the generalized Khatri-Rao product with respect to K partitions, which is defined as i h i i h h B = A1 ⊗ B1 · · · AK ⊗ BK where A = A1 · · · AK and B = B1 · · · BK . II. S YSTEM MODEL AND PRELIMINAIRES : COVARIANCE ESTIMATION VIA COMPRESSIVE SENSING BASED CHANNEL ESTIMATION In this section, we present the system model and briefly overview prior work on compressive sensing based channel estimation followed by covariance calculation. A. System model Consider a system where a BS with N antennas and M RF chains communicates with an MS. We focus on a narrowband massive MIMO system where an MS with a single antenna; we extend the work to the multiple-MS-antenna case and the wideband case in Section V. 
Let L be the number of channel paths, g`,t be a time-varying channel coefficient of the `-th channel path at the t-th snapshot, and a(φ` ) be an array response vector associated with the AoA of the `-th 5 channel path φ` . Assuming that AoAs do not change during T snapshots, the uplink channel at the t-th snapshot can be represented as ht = L X g`,t a(φ` ). (1) `=1 Considering spatially sparse channels, the channel model in (1) can be approximated in the compressive sensing framework as ht = Agt , (2) where A ∈ CN ×D is an dictionary matrix whose D columns are composed of the array response vectors associated with a predefined set of AoAs, and gt ∈ CD×1 is a sparse vector with only L nonzero elements whose positions and values correspond to their AoAs and path gains. Let dt be a training symbol with |dt | = 1, and zt be a Gaussian noise with CN (0, σ 2 I). The received signal can be represented as xt = Agt dt + zt . (3) Combined with an analog combining matrix Wt ∈ CM ×N , the received signal multiplied by d∗t at baseband is expressed as yt = d∗t Wt xt = Φt gt + Wt nt , (4) where Φt = Wt A denotes an overall sensing matrix and nt = d∗t zt ∼ CN (0, σ 2 I). By letting Rg = E[gt gt∗ ], the spatial channel covariance matrix is expressed as Rh = E[ht h∗t ] = ARg A∗ . The goal is to estimate Rh with given y1 , ..., yT . B. Preliminaires: covariance estimation via compressive sensing based channel estimation Some prior work developed compressive channel estimators for the hybrid architecture [15]– [17]. Once the channel vectors are estimated, the covariance can be calculated from the estimated channels. In this section, we overview two estimation approaches to make a comparison. A baseline technique to estimate the channel vector at each time is formulated as min kyt − Φt gt k2 s.t. kgt k0 ≤ L. gt (5) This optimization problem is known as a single measurement vector (SMV) problem and can 6 Algorithm 1 OMP Algorithm 2 SOMP Input: Φ, y, and L Initialize: v = y, S = Ø, ĝ = 0 for n = 1 : L do j = arg maxi |φ∗i v| S = S ∪ {j}  v = I − ΦS Φ†S y end for ĝS = Φ†S y Output: ĝ Input: Φ, Y, and L Initialize: V = Y, S = Ø, Ĝ = 0 for n = 1 : L do j = arg maxi kφ∗i Vk2 S = S ∪ {j}  V = I − ΦS Φ†S Y end for [Ĝ]S,: = Φ†S Y Output: Ĝ be solved by the OMP algorithm described in Algorithm 1, which is a simple but efficient approach among various solutions to the SMV problem [36]. Once the gt ’s are estimated during T snapshots, then the covariance matrix of the channel can be calculated as ! T 1X R̂h = A gt gt∗ A∗ . T t=1 (6) In contrast to SMV, another compressive sensing technique called MMV [30] exploits the observation that g1 , ..., gT shares a common support if the AoAs do not change during the estimation process. The received signal model in (4) can be written in a matrix form as Y = ΦG + WN, (7) h i h i h i where Y = y1 · · · yT , G = g1 · · · gT , and N = n1 · · · nT . Note that in this MMV problem format, the sensing matrix Φ must be common over time. The optimization problem in the MMV is formulated as min kY − ΦGkF s.t. kGkrow,0 ≤ L, G (8) S where kGkrow,0 represents the row sparsity defined as kGkrow,0 = | t supp ([G]:,t )|. This optimization problem can be solved by the SOMP algorithm described in Algorithm 2 [37], [38]. Regarding the selection rule of SOMP, the `2 -norm in Algorithm 2 can be replaced by the `1 -norm because there is no significant difference between the two in terms of performance [31]. We use `2 -norm throughout this paper for analytical tractability. 
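For concreteness, the following is a minimal Python/numpy sketch of the SMV recovery problem (5) solved by Algorithm 1 (OMP), applied to the sparse channel model of (2)-(4). The half-wavelength uniform-linear-array response, the toy dimensions, the random seed, and the random-phase combiner are illustrative assumptions of ours rather than prescriptions of the paper.

import numpy as np

rng = np.random.default_rng(0)

def array_response(N, phi):
    # Uniform linear array response with half-wavelength spacing (one common
    # choice; the paper leaves a(phi) generic).
    return np.exp(1j * np.pi * np.arange(N) * np.sin(phi)) / np.sqrt(N)

def omp(Phi, y, L):
    # Sketch of Algorithm 1 (OMP): pick the dictionary column most correlated
    # with the residual, add it to the support, and re-fit the selected columns
    # to y by least squares (the pseudoinverse step Phi_S^dagger y).
    support = []
    residual = y.copy()
    g_hat = np.zeros(Phi.shape[1], dtype=complex)
    coeffs = np.zeros(0, dtype=complex)
    for _ in range(L):
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        support.append(j)
        Phi_S = Phi[:, support]
        coeffs, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)
        residual = y - Phi_S @ coeffs
    g_hat[support] = coeffs
    return g_hat, sorted(support)

# Toy instance of (2)-(4): N antennas, M RF chains, D-point AoA grid, L paths.
N, M, D, L, sigma = 64, 16, 128, 4, 0.01
grid = np.linspace(-np.pi / 2, np.pi / 2, D, endpoint=False)          # AoA grid
A = np.stack([array_response(N, phi) for phi in grid], axis=1)        # dictionary
g = np.zeros(D, dtype=complex)
true_support = rng.choice(D, size=L, replace=False)
g[true_support] = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
W = np.exp(2j * np.pi * rng.random((M, N))) / np.sqrt(N)              # random-phase combiner
Phi = W @ A                                                           # sensing matrix
y = Phi @ g + sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
g_hat, est_support = omp(Phi, y, L)
print(sorted(true_support.tolist()), est_support)

The SOMP variant of Algorithm 2 uses the same structure, but scores each column against all T residual columns at once via the norm of phi_i* V rather than against a single residual vector.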
Although SOMP is known to outperform OMP in general, SOMP still needs improvement, particularly when the number of 7 RF chains is small, as will be discussed in Section III-A. III. C OVARIANCE ESTIMATION FOR HYBRID ARCHITECTURE : TWO KEY IDEAS In this section, we present two key ideas to improve covariance estimation work based on compressive sensing techniques. First, we develop an estimation algorithm by applying a time-varying sensing matrix. Second, we propose another algorithm that exploits the Hermitian property of the covariance matrix. Finally, we combine the two proposed algorithms. A. Applying a time-varying analog combining matrix Although compressive sensing can reduce the required number of measurements, there is a lim itation on reducing the measurement size. This lower bound is known to be M = O L log D L for both SMV and MMV when the sensing matrix satisfies the restricted isometry property (RIP) condition. [39]. One possible solution to overcome this limitation is to extend the measurement vector size into the time domain, i.e., gathering the measurement vectors of multiple snapshots with different sensing matrices over time. The same approach is also used in [15] and [36], where the measurements are stacked together as         Φ W1 n1 W1 A y1    1      .   .   ..   ..   .  =  .  g +  ..  =  ..  g + ñ.         ΦT WT nT WT A yT (9) Note that the key assumption in (9) is that g is constant during the estimation process. Consequently, this technique can be applied only to static channels, not time-varying channels. It is worthwhile to compare the MMV signal model in (7) with the signal model where the time-varying combining matrix is used for the static channel as shown in (9). In (7), yt ’s are stacked in columns due to the fact that gt changes over time but Φ is fixed. In contrast, the signal model in (9) adopts a row-wise stack of yt because Φt changes over time but g is fixed. If both Φt and gt change over time, we can not stack yt ’s in either columns or rows and thus need another approach. Regarding this scenario, two questions arise: 1) is it useful to employ a time-varying sensing matrix for a time-varying channel? 2) how can we recover the data in this time-varying sensing matrix and time-varying data? We will show that the time-varying sensing matrix can increase the recovery success rate especially when M is not much larger than L by 8 analysis in Section IV and simulations in Section VI. In this subsection, we develop a recovery algorithm to answer the problem of recovery with a time-varying sensing matrix. To apply the time-varying matrix concept to SOMP, we first focus on the fact that SOMP is a generalized version of OMP. The reason is that the optimization problem of MMV in (8) can be rewritten as min g1 ,...,gT T X kyt − Φgt k22 s.t. T [ supp (gt ) ≤ L. (10) t=1 t=1 Note that SOMP is equivalent to OMP when T = 1. Unlike the original formulation in (8), this reformulated form gives an insight into how to apply the time-varying sensing matrix. We can further generalize the optimization problem in (10) by replacing Φ with Φt to reflect the timevarying feature of the sensing matrix. Noting that the selection rule of SOMP in Algorithm 2 is P equivalent to j = arg maxi kφ∗i Vk22 = arg maxi Tt=1 |φ∗i vt |2 , we modify the selection rule by replacing φi with φt,i (= [Φt ]:,i ) to adapt to the time-varying sensing matrix case. 
The residual     matrix in SOMP, V = I − ΦS Φ†S Y, can also be replaced by vt = I − Φt,S Φ†t,S yt for t = 1, ..., T . The modified version of SOMP, which we call dynamic SOMP (DSOMP), is described in Algorithm 3. B. Exploiting the Hermitian property of the covariance matrix The covariance estimation techniques using OMP, SOMP, and DSOMP in the previous subsections employ a two-step approach where the channel gain vectors gt ’s (and thus channel vectors ht ’s) are estimated and then the covariance matrix is calculated from (6). If it is not the channel but the covariance that needs to be estimated, the first step estimating the channel explicitly is unnecessary. In this subsection, we take a different approach that directly estimates the covariance Rg without estimating the instantaneous channel gain vectors. The relationship between Rg and Ry is given by Ry = ΦRg Φ∗ + σ 2 WW∗ . (11) If we consider a special case where Rg is assumed to be a sparse diagonal matrix as in Fig. 1(c), the relationship between Rg and Ry in (11) can be rewritten as vec(Ry ) = ΦC  Φ diag(Rg ) + σ 2 vec(WW∗ ), (12) 9 (a) Unstructured (b) Sparse rows (c) Sparse diagonal (d) Sparse Hermitian Fig. 1. Sparse matrix types: (a) an unstructured sparse matrix where the elements are randomly spread, (b) a structured sparse matrix with sparse rows, (c) a structured sparse diagonal matrix, and (d) a structured sparse Hermitian matrix. by using vectorization and the column-wise Khatri-Rao product, then OMP can be directly applied to this reformulated problem without any modification [40]. This approach, however, has a limitation in application to realistic scenarios because the covariance Rg is not a diagonal matrix in general [35]. Instead of assuming that Rg is a sparse diagonal matrix, we consider Rg to be a sparse Hermitian matrix as in Fig. 1(d). As MMV uses the fact that G is a matrix with sparse rows as in Fig. 1(b), we exploits the Hermitian property of the covariance matrix Rg . The optimization problem is represented in a matrix form like MMV, min kRY − ΦRg Φ∗ kF s.t. kRg klattice,0 ≤ L, (13) Rg where kRg klattice,0 is defined as kRg klattice,0 = S i supp ([Rg ]:,i ) S j supp ([Rg ]j,: ) . A greedy approach similar to OMP and SOMP can be applied to find a suboptimal solution. At each iteration, the algorithm needs to find the solution to the following sub-problems as min kRY − ΦSn RX Φ∗Sn kF , (14) RX where RX is a Hermitian matrix whose dimension is less than or equal to L. By using the least squares method along with vectorization, the optimal solution to (14) is given by (opt) RX where Φ†Sn = Φ∗Sn ΦSn −1  ∗ = Φ†Sn RY Φ†Sn , (15) (opt) Φ∗Sn . Note that if RY is Hermitian, RX is also Hermitian. The proposed algorithm using (15), which we call covariance OMP (COMP), is described in Algorithm 4. Note that the proposed COMP uses quadratic forms instead of linear forms. The time-varying sensing matrix concept in Section III-A can also be applied to COMP. The generalization of COMP, however, requires a careful modification. 
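For concreteness, the following is a minimal Python/numpy sketch of Algorithm 4 (COMP): the support is grown greedily using the quadratic score phi_i* V phi_i on the sample covariance of the measurements, and the selected block of R_g is then filled in with the least-squares fit of (15). The function name comp and the guard against re-selecting an index are our additions for illustration and robustness.

import numpy as np

def comp(Phi, Y, L):
    # Sketch of Algorithm 4 (COMP). Phi is the M x D sensing matrix, Y stacks
    # the T baseband measurement vectors as columns, L is the sparsity level.
    M, D = Phi.shape
    T = Y.shape[1]
    R_y = (Y @ Y.conj().T) / T                     # sample covariance of y_t
    V = R_y.copy()                                 # residual covariance
    support = []
    R_g = np.zeros((D, D), dtype=complex)
    for _ in range(L):
        # score_d = phi_d^* V phi_d for every dictionary column d
        scores = np.real(np.einsum('md,mn,nd->d', Phi.conj(), V, Phi))
        scores[support] = -np.inf                  # do not pick an index twice
        j = int(np.argmax(scores))
        support.append(j)
        Phi_S = Phi[:, support]
        P = Phi_S @ np.linalg.pinv(Phi_S)          # Phi_S Phi_S^dagger
        V = R_y - P @ R_y @ P.conj().T
    pinv_S = np.linalg.pinv(Phi[:, support])
    R_g[np.ix_(support, support)] = pinv_S @ R_y @ pinv_S.conj().T    # eq. (15)
    return R_g, sorted(support)

As discussed next, the time-varying generalization keeps this structure but replaces the single sample covariance with the per-snapshot outer products y_t y_t^* and sums the selection scores and residual updates over the time-varying sensing matrices.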
In covariance estimation based 10 Algorithm 3 Dynamic SOMP (DSOMP) Algorithm 4 Covariance OMP (COMP) Input: Φ1 , ..., ΦT , Y, L Initialize: V = Y, S = Ø, Ĝ = 0 for n = 1 : L doP 2 j = arg maxi Tt=1 φ∗t,i vt S = S ∪ {j}  † vt = I − Φt,S Φt,S yt , ∀t end for h i [Ĝ]S,: = Φ†1,S y1 · · · Φ†T,S yT Input: Φ, Y, L Initialize: V = R̂y = T1 YY∗ , S = Ø, R̂g = 0 for n = 1 : L do j = arg maxi φ∗i Vφi S = S ∪ {j} ∗  † † V = R̂y − ΦS ΦS R̂y ΦS ΦS end for  ∗ [R̂g ]S,S = Φ†S R̂y Φ†S Output: Ĝ Output: R̂g Algorithm 5 Dynamic COMP (DCOMP) Input: Φ1 , ..., ΦT , Y, L Initialize: Vt = R̂y,t = yt yt∗ , ∀t, S = Ø, R̂g = 0 for n = 1 : L doPT j = arg maxi t=1 φ∗t,i Vt φt,i S = S ∪ {j} ∗  Vt = R̂y,t − Φt,S Φ†t,S R̂y,t Φt,S Φ†t,S , ∀t end for  ∗ PT † † 1 [R̂g ]S,S = T t=1 Φt,S R̂y,t Φt,S Output: R̂g on COMP, the calculation of R̂y is followed by the estimation of R̂g . This is because R̂y can be represented as a function of R̂g as T 1X yt yt∗ = Φ R̂y = T t=1 ! T 1X gt gt∗ Φ∗ = ΦR̂g Φ∗ . T t=1 (16) If the sensing matrix changes over time, R̂y can not be represented as a function of R̂g because T T 1X 1X ∗ R̂y = y t yt = Φt gt gt∗ Φ∗t . T t=1 T t=1 (17) For this reason, instead of using the sample covariance matrix of the measurement vectors R̂y , we use a per-snapshot yt yt∗ , which can be regarded as a one sample estimate of the sample covariance matrix. Note that this extreme sample covariance is not a diagonal matrix in general even when the channel paths are uncorrelated. Using the fact that yt yt∗ for all snapshots are sparse Hermitian matrices sharing the same positions of nonzero elements, we develop the dynamic 11 COMP (DCOMP) from COMP in a similar way that we develop DSOMP from SOMP, which is described in Algorithm 5. Note that DCOMP becomes equivalent to COMP if Φ1 = · · · = ΦT . IV. T HEORETICAL ANALYSIS OF THE PROPOSED ALOGIRHTMS In this section, we analyze the benefit of the time-varying sensing matrix and compare SOMP, DSOMP, and DCOMP. In summary, the recovery success probability of SOMP, DSOMP, and DCOMP increases as the number of measurements T increases. The recovery success probability of SOMP, however, saturates and does not approach one as T goes to infinity even in the noiseless case if the number of RF chains M is not much larger than the number of channel paths L. In contrast to SOMP, the recovery success probability of DSOMP and DCOMP approaches one as T increases for any M and L. In other words, DSOMP and DCOMP can guarantee perfect recovery even when the number of RF chains is so small that both OMP and SOMP fail to recover the support even in the noiseless case. Finally, we will show that DCOMP has a higher recovery success probability than DSOMP. Let us first consider how to design the sensing matrix Φ. For analytical tractability, we confine the dictionary size to D = N in this section. The algorithms, however, can be applied to more h i general cases such as D > N . The sensing matrix can be represented as Φ = φ1 · · · φN = WA. One possible option for the sensing matrix is to use a random analog precoding matrix W = WRF such that each element in WRF has a constant amplitude and a random phase with an independent uniform distribution in [0, 2π]. Since the elements in the sensing matrix designed in this way have a sub-Gaussian distribution, this simple random sensing matrix Φ satisfies the RIP condition [34]. There are other desirable features that the sensing matrix needs to have. 
For example, it is desirable for the sensing matrix to have a small mutual coherence defined as q |φ∗j φi | N −M [34]. The mutual coherence has a lower bound , which is known ρ = maxi6=j kφ kkφ k M (N −1) j i as Welch bound. This bound can be achievable if the column vectors in Φ constitute an equalnorm equiangular tight frame, i.e., Φ satisfies that 1) kφi k = c1 , ∀i, 2) |φ∗j φi | = c2 , ∀j 6= i, and 3) ΦΦ∗ = c3 I. It is impossible for a sensing matrix Φ that is designed with a random phase analog combining matrix WRF to meet these three conditions at all times. Nevertheless, it is possible to meet the tight frame condition for any random analog precoding matrix WRF if the baseband combining matrix WBB is employed such that 1 ∗ W = WBB WRF = (WRF WRF )− 2 WRF . (18) 12 If the baseband combining matrix is designed as (18), the columns in the overall sensing matrix Φ constitute the tight frame because it satisfies ∗ ∗ ΦΦ∗ = WBB WRF AA∗ WRF WBB = N I. (19) Consequently, any sensing matrix can be transformed to satisfy the tight frame condition by 1 ∗ using WBB = (WRF WRF )− 2 . The following theorem provides the exact condition for the perfect recovery condition at each iteration of the DSOMP algorithm. Note that OMP and SOMP are special cases of DSOMP. Theorem 1 Let S ⊂ N = {1, 2, ..., N } be an optimal support set with |S| = L and G be an ideal channel gain matrix. Suppose that So ⊂ S with |So | = Lo was perfectly recovered at the previous Lo iterations in the DSOMP algorithm. In the noiseless case, one of the elements in the optimal support S can be perfectly recovered at the (Lo + 1)-th iteration if and only if ρ(DS) (S, So , Φ1 , ..., ΦT , G) =  where Ψt = I − Φt,So Φ†t,So  maxj∈N \S PT P maxj∈S\So PT P t=1 t=1 i∈S\So ψ ∗t,j ψ t,i gt,i ∗ i∈S\So ψ t,j ψ t,i gt,i 2 2 < 1, (20) Φt . Proof: See Appendix A. Although the newly defined metric ρ(DS) (S, So , Φ1 , ..., ΦT , G) identifies the exact condition for the perfect recovery condition, its dependence on the input data G makes this metric less attractive. Some prior work takes an approach to find upper bounds of certain coherence-related metrics for any random input data [41], but the bounds are loose, which can be regarded as a worst case analysis. In addition, those upper bounds are meaningful only for the case where M is significantly larger than L. In the extreme case where M and L are similar in value, the upper bounds are too loose to give a useful insight. Instead of an upper bound, we focus on the distribution of ρ(DS) (S, So , Φ1 , ..., ΦT , G) considering S, So , Φ1 , ..., ΦT , and G as random variables. While the exact distribution depends on the distribution of G, Proposition 1 and Theorem 2 show that ρ(DS) (S, So , Φ1 , ..., ΦT , G) converges to a value that does not depend on G as T becomes larger. 13 Proposition 1 Suppose that gt,i are independent random variables with zero mean and unit variance. Let ρ(S) (S, So , Φ, G) denote ρ(DS) (S, So , Φ1 , ..., ΦT , G) in the SOMP case where Φ1 = · · · = ΦT = Φ. As T → ∞, ρ(S) (S, So , Φ, G) converges to (S) ρ (S, So , Φ, G) → (S) ρT →∞ (S, So , Φ) = maxj∈N \S P ψ ∗j ψ i maxj∈S\So P ψ ∗j ψ i i∈S\So i∈S\So 2 2, (21)   where Ψ = I − ΦSo Φ†So Φ. Proof: See Appendix B. (S) Note that ρT →∞ (S, So , Φ) in (21) can be rewritten as  P 2 ∗ maxj∈N \S i∈S\So ψ j ψ i (S)  . 
ρT →∞ (S, So , Φ) = P 2 ∗ maxj∈S\So |ψ j |4 + ψj ψi i6=j (22) i∈S\So In the OMP case, ρ(DS) (S, So , Φ1 , ..., ΦT , G) becomes equivalent to P  2 ∗ maxj∈N \S ψ ψ + X j j i i∈S\So  , ρ(O) (S, So , Φ, g) = P 2 ∗ ψ j ψ i + Xj maxj∈S\So |ψ j |4 + i6=j (23) i∈S\So where Xj = P i1 ∈S\So P i2 ∈S\So i2 6=i1 gi1 gi∗2 ψ ∗j ψ i1 ψ ∗i2 ψ j ∈ R. Since the addition of a random variable Xj results in increasing the variance of the metric at j, the maximum values in both the numerator and the denominator for the OMP case are likely to be larger than those of the SOMP case. Compared with the SOMP case, the relative rate of increase in the numerator is higher than that of increase in the denominator with high possibility because |ψ j |4 is larger than |ψ ∗j ψ i |2 for i 6= j in general for a random sensing matrix. Consequently, we can infer that the normalized coherence metric of SOMP is likely to be less than that of OMP. We validate this inference by simulations for random Φ, S, and So in Section VI-A. Although this explains the superiority of SOMP over OMP, we can see that the probability of (S) ρT →∞ (S, So , Φ) < 1 does not become one in general. This means that SOMP can not guarantee the perfect recovery even when an infinite number of snapshots are measured for the recovery. Unlike SOMP, DSOMP can always guarantee perfect recovery as T becomes larger as shown in the following theorem. 14 Theorem 2 As T → ∞, ρ(DS) (S, So , Φ1 , ..., ΦT , G) in (20) converges to a deterministic value (DS) ρT →∞ (N, M, L, Lo ) that has an upper bound as (DS) ρ(DS) (S, So , Φ1 , ..., ΦT , G) → ρT →∞ (N, M, L, Lo ) −1  M − N + (M − Lo )(N − Lo − 1) , ≤ 1+ (N − M )(L − Lo ) (24) and this upper bound is always less than or equal to one for any N ≥ M ≥ L > Lo , i.e,. perfect recovery is guaranteed. Proof: See Appendix C. In Section VI, we show by simulations that a reasonably small T can provide a significant gain over SOMP even in the extreme case when M = L. In addition, the converged value (DS) of DSOMP ρT →∞ (N, M, L, Lo ) is not a function of sensing matrices Φ1 , ..., ΦT while that of (S) SOMP ρT →∞ (S, So , Φ) depends on the sensing matrix Φ. In other words, in contrast to the SOMP case where sophisticated sensing matrix design is crucial to the estimation performance, DSOMP can reduce the necessity of the elaborate design of the sensing matrix as T increases. Similar to the DSOMP case, DCOMP can also guarantee the perfect recovery. The perfect recovery condition of the DCOMP case is shown in the following theorem. Theorem 3 Let S be an optimal support set with |S| = L and G be an ideal channel gain matrix. Suppose that So ⊂ S with |So | = Lo was perfectly recovered at the previous Lo iterations in the DCOMP algorithm. In the noiseless case, one of the elements in the optimal support S can be perfectly recovered at the (Lo + 1)-th iteration if and only if ρ (DC) (S, So , Φ1 , ..., ΦT , G) = (DC) ∗ t=1 φt,j Qt,S,So φt,j P (DC) maxj∈S\So Tt=1 φ∗t,j Qt,S,So φt,j maxj∈N \S PT < 1, (25) ∗ ∗ where Qt,S,So = Φt,S gt,S gt,S Φ∗t,S − Pt,So Φt,S gt,S gt,S Φ∗t,S Pt,So and Pt,So = Φt,So Φ†t,So . (DC) Proof: From the definition of the residual matrix Vt in Algorithm 5, the selection rule of DCOMP can be represented as j (opt) = arg max and this completes the proof. j∈N T X t=1 φ∗t,j Vt φt,j = arg max j∈N T X t=1 (DC) φ∗t,j Qt,S,So φt,j , (26) 15 (DS) (DC) ∗ Let Qt,S,So = (I − Pt,So ) Φt,S gt,S gt,S Φ∗t,S (I − Pt,So ). Notice that, if Qt,S,So is replaced by (DS) Qt,S,So , then Theorem 3 becomes identical to Theorem 1. 
It is also worthwhile to note that both (DS) (DC) DSOMP and DCOMP select the same support at the first iteration because Qt,S,So = Qt,S,So for So = ∅. The superiority of DCOMP over DSOMP starts from the second iteration, which can be inferred from the following theorem. Theorem 4 ρ(DC) (S, So , Φ1 , ..., ΦT , G) in (25) converges to a deterministic value as T → ∞, (DC) and the deterministic value ρT →∞ (N, M, L, Lo ) always satisfies (DC) (DS) ρT →∞ (N, M, L, Lo ) ≤ ρT →∞ (N, M, L, Lo ), (27) for any N ≥ M ≥ L > Lo . Proof: See Appendix D. Until now, we considered a time-varying analog combining matrix and its associated digital combining matrix. One question arises: if the analog combiner is fixed and only the digital combiner is time-varying, does the time variance of the baseband combiner improve the recovery performance? The following proposition shows that the time-varying baseband combining matrix itself does not improve the performance when the analog combining matrix is fixed. 1 ∗ )− 2 for a fixed WRF and a random unitary maProposition 2 Let WBB,t = Ut (WRF WRF trix Ut ∈ CM ×M such that Φt = WBB,t WRF A constitutes a tight frame for any t. Then, ρ(DS) (S, So , Φ1 , ..., ΦT , G) of DSOMP becomes identical to ρ(DS) (S, So , Φ0 , ..., Φ0 , G) of SOMP 1 ∗ )− 2 WRF A. where Φ0 = (WRF WRF Proof: Each term in both the numerator and the denominator in (51) becomes (DS) ∗ φ∗t,j Qt,S,So φt,j = φ∗t,j (I − Pt,So ) Φt,S gt,S gt,S Φ∗t,S (I − Pt,So ) φt,j (28) ∗ = φ∗0,j (I − P0,So ) Φ0,S gt,S gt,S Φ∗0,S (I − P0,So ) φ0,j , because Φt = Ut Φ0 and Pt,So = Ut P0,So U∗t for all t. Since this is the same as in the SOMP case where Φ1 = · · · = ΦT = Φ0 , the proof is completed. The result in Proposition 2 is beneficial especially to the wideband case where the analog combining matrix must be fixed over frequency. According to Proposition 2, we will use the frequency-invariant baseband combining matrix when modifying the algorithm for the wideband case. This will be discussed in more detail in Section V-B. 16 frame t frame t-1 frame t+1 MT training symbols data (s = 1, 2, …, MT) Fig. 2. Frame structure in the case when an MS has hybrid structure with multiple RF chains and antenna. Consecutive MT training symbols are inserted in every frame, and each training symbol is transmitted one at a time via its dedicated RF chain. V. E XTENSION TO OTHER SCENARIOS In Section III, we considered a narrowband system and assumed that an MS has a single antenna. In this section, we first extend the work to the multiple-MS-antenna case and then to the wideband case. A. Extension to the multiple-MS-antenna case In this subsection, we extend the work in Section III to the case where the MS has hybrid architecture with multiple antennas and RF chains. The signal model is shown in Fig. 2 where NT , MT , NR , and MR denote the number of transmit antennas, transmit RF chains, receive antennas, and receive RF chains. We assume that consecutive MT training symbols are used in each frame as shown in Fig. 2, and the symbol index per frame is denoted by s. We assume that the duration of MT symbols is less than the channel coherence time, i.e., the channel is invariant during MT consecutive symbol transmission. Let θ` be the angle of departure (AoD) of the `-th path. The channel model for the single-MS-antenna case in (1) can be extended to the multiple-MS-antenna case as Ht = L X gt,` aR (φ` )a∗T (θ` ). 
(29) `=1 Similarly to (2), this can also be represented in the compressive sensing framework as Ht = AR Gt A∗T , where AR ∈ CNR ×DR and AT ∈ CNT ×DT are dictionary matrices associated with AoA and AoA, and Gt ∈ CDR ×DT is a sparse matrix with L nonzero elements with associated with complex channel path gains. At each symbol s at frame t, one training signal is transmitted through one transmit RF chain. Let Wt,s be a combiner and ft,s a precoder at frame t and symbol s. Assuming that the training 17 symbol is known to the BS, thereby omitted as in the single-MS-antenna case, the received signal at baseband is represented as yt,s = Wt,s AR Gt A∗T ft,s + Wt,s nt,s . (30) There are four different types with respect to how to apply Wt,s and ft,s within a frame: 1) fixed Wt,s and fixed ft,s , 2) time-varying Wt,s and fixed ft,s , 3) fixed Wt,s and time-varying ft,s , and 4) time-varying Wt,s and time-varying ft,s . In the first approach, all the precoders are the same within a frame, and so are the combiners such that ft,1 = · · · = ft,MT = ft and Wt,1 = · · · = Wt,MT = Wt . Since these combiners and precoders are constant within a frame in this approach, averaging the received signals at baseband within a frame can increase the effective signal-to-noise ratio (SNR). The averaged received signal is represented as MT 1 X yt,s = Wt AR Gt A∗T ft + Wt n̄t , ȳt = MT s=1 where n̄t = 1 MT (31)   σ2 n . Note that n̄ ∼ CN 0, I , which means that this process t s=1 t,s MT PM T averages out the noise, increasing the effective SNR. The signal model per frame in (31) can be transformed into ȳt = ftT ⊗ Wt   AC T ⊗ AR vec(Gt ) + Wt n̄t . (32) In the second approach, where only combiners vary and precoders are fixed within a frame such that ft,1 = · · · = ft,MT = ft , the first step is to stack yt,1 , ..., yt,MT in rows. This row-wise stack yields a MR MT × 1 vector per frame, ỹt,agg = W̃t,agg AR Gt A∗T ft + ñt,agg , where W̃t,agg (33) h iT h iT T T T T T T = Wt,1 · · · Wt,MT and ñt,agg = nt,1 Wt,1 · · · nt,MT Wt,MT . The signal model in (33) can be rewritten as    ỹt,agg = ftT ⊗ W̃t,agg AC T ⊗ AR vec(Gt ) + ñt,agg . (34) The third approach changes only precoders, and combiners are fixed within a frame such that 18 Algorithm 6 Wideband DCOMP (WB-DCOMP) Input: Φ1 , ..., ΦT , y1,1 , ..., y T,K , L P ∗ Initialize: Vt = R̂y,t = K1 K k=1 yt,k yt,k , S = Ø, R̂g = 0 for n = 1 : L doP j = arg maxi Tt=1 φ∗t,i Vt φt,i S = S ∪ {j}  ∗ Vt = Ry,t − Φt,S Φ†t,S Ry,t Φt,S Φ†t,S , ∀t end for  ∗ PT † † 1 [R̂g ]S,S = T t=1 Φt,S Ry,t Φt,S Output: R̂g Wt,1 = · · · = Wt,MT = Wt . In contrast to the second approach, yt,1 , ..., yt,MT are stacked in columns. This column-wise stack makes the aggregate signal model per frame as Ỹt,agg = Wt AR GD,t A∗T F̃t,agg + Ñt,agg , (35) h i h i h i where Ỹt,agg = yt,1 · · · yt,MT , F̃t,agg = ft,1 · · · ft,MT , and Ñt,agg = Wt∗ nt,1 · · · nt,MT . By using vectorization, the matrix Ỹt,agg in (35) can be transformed into a vector form as    ỹt,agg = vec(Ỹt,agg ) = F̃Tt,agg ⊗ Wt AC T ⊗ AR vec(Gt ) + vec(Ñt,agg ). (36) For the last approach, both precoders and combiners change within a frame. Like the second approach, we stack yt,1 , ..., yt,MT in rows. Then, the row-wise stacked vector ỹt,agg becomes  ỹt,agg = F̃t,agg (MT ) T W̃t,agg T  AC T ⊗ AR vec(Gt ) + ñt,agg . 
(37) Note that the signal models for all four different approaches in (32), (34), (36), and (37) can be generalized as ỹt,agg = Θt,agg Aagg gt,agg + ñtt,agg where gt,agg = vec(Gt ), Aagg = AC T ⊗ AR , and Θt,agg are dependent on each approach. This generalized signal model has the same format as that of the single-MS-antenna case in (4). Consequently, the algorithm proposed for the single-MSantenna case such as DCOMP can be used without any modification in the multiple-MS-antenna case regardless of different approaches. B. Extension to wideband systems In this subsection, we extend our work in the narrowband case into the wideband case. Since the same algorithm can be used for both the single-MS-antenna case and the multiple-MS-antenna 19 case as shown in Section V-A, we focus on the single antenna case for the sake of exposition. Compared to the narrowband channel model in (1), we adopt the delay-d MIMO channel model for the wideband OFDM systems with K subcarriers [42], [43]. Let Ts be the sampling period, τ` be the delay of the `-th path, NCP be the cyclic prefix length, and p(t) denote a filter comprising the combined effects of pulse shaping and other analog filtering. The delay-d MIMO channel matrix is modeled as ht [d] = L X g`,t p(dTs − τ` )a(φ` ) for d = 0, ..., NCP−1 , (38) `=1 and the channel frequency response matrix at each subcarrier k can be expressed as ht,k = L X g`,t c`,k a(φ` ), (39) `=1 where c`,k = PNCP −1 d=0 p(dTs − τ` )e− j2πkd K . This can be cast in the compressive sensing framework as ht,k = A (gt } ck ), where gt and ck are sparse vectors with L nonzero elements associated with g`,t and c`,k . Note that gt and ck share the same support for all t and k. An analog combiner WRF,t at frame t must be common for all subcarrier k. Since Proposition 2 shows that changing baseband combiners does not impact the performance as long as the analog combiner is fixed, we use a common sensing matrix Wt = WBB,k WRF,t for all subcarriers at frame t. Then, the signal part of the received signal in the baseband at frame t and subcarrier k is given by rt,k = Wt A (gt } ck ). The problem of finding gt } ck is similar to the narrowband case in Section III because gt } ck share the same support for all t and k. Therefore, the DCOMP algorithm in Section III can be directly applied to this problem. This fails, however, to exploit a useful property of sparse wideband channels. At a fixed subcarrier k = k0 , the time domain average of rt,k0 r∗t,k0 becomes T X rt,k0 r∗t,k0 = t=1 T X Wt A (gt } ck0 ) (gt } ck0 )∗ A∗ Wt∗ . (40) t=1 Since (40) has a similar form to (17) of the narrowband case, DCOMP can be used for this time domain operation. The frequency domain average of rt0 ,k r∗t0 ,k at a fixed frame t = t0 can be expressed differently from (40) as K X k=1 rt0 ,k r∗t0 ,k = Wt0 A K X k=1 ! (gt0 } ck ) (gt0 } ck )∗ A∗ Wt∗0 , (41) 20 1 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 T = 1, 4, 8, 64, ∞ CDF CDF 0.6 0.5 0.4 0.5 0.4 0.3 Lo = 0, 3, 7 (SOMP) 0.3 T = 1 (OMP) T =4 T =8 T = 64 Asymptotic (T → ∞) 0.2 0.1 Lo = 0, 3, 7 (OMP) 0.1 0 0 0 0.5 1 1.5 T T T T T T 0.2 2 0 0.5 ρ(S) (S, S0 , Φ, G) 1 1.5 2 = 1 (OMP), Lo = 0 = 1 (OMP), Lo = 3 = 1 (OMP), Lo = 7 → ∞, Lo = 0 → ∞, Lo = 3 → ∞, Lo = 7 2.5 3 ρ(S) (S, S0 , Φ, G) (a) CDF of ρ(S) (S, S0 , Φ, G) when Lo = 0 (b) CDF of ρ(S) (S, S0 , Φ, G) when T = 1 or T = ∞ Fig. 3. CDFs of ρ(S) (S, S0 , Φ, G). 
The optimal support set S has L = 8 elements, the optimally preselected subset So has Lo elements, Φ ∈ CM ×N is a fixed sensing matrix of M = 8 and N = 64, and G ∈ CL×T is a random matrix whose elements have zero-mean and unit-variance. which is similar to (16) of the narrowband case. Since COMP is developed based on the signal model in (16), COMP can be used for the frequency domain operation in the wideband case. The different features between the time and frequency domain motivate us to apply different approaches to each domain, i.e., COMP to the frequency domain and DCOMP to the time domain. This combination of two algorithms, which we call WB-DCOMP, is described in Algorithm 6. It is worthwhile to compare WB-DCOMP to the direct extension of COMP and DCOMP. If the sensing matrix Φ is varying over both time and frequency, the direct extension of DCOMP provides the best performance. However, if Φ is varying only in the time domain and fixed over frequency, WB-DCOMP, which is the combination of DCOMP and COMP, can outperform the direct extension of each algorithm in the wideband case. This is demonstrated in Section VI. VI. S IMULATION RESULTS In this section, we validate our analysis in Section IV in terms of the recovery success probability in the noiseless case especially when M has a similar value to L. We also present simulation results to demonstrate the performance of the proposed spatial channel covariance estimation algorithms for hybrid architecture. A. Theoretical anaylsis in the noiseless case We compare OMP and SOMP in terms of the cumulative distribution function (CDF) of the newly defined metric ρ(S) (S, S0 , Φ, G) in (21). Fig. 3 shows how the CDF of ρ(S) (S, S0 , Φ, G) 1 1 0.9 0.9 0.8 0.8 0.7 0.7 0.6 0.6 CDF CDF 21 0.5 0.4 0.5 0.4 T = 1 (OMP) T =4 T =8 T = 64 T = 1024 Upper bound as T → ∞ 0.3 0.2 0.1 T = 1 (OMP) T =4 T =8 T = 64 T = 1024 Upper bound as T → ∞ 0.3 0.2 0.1 0 0 0 0.5 1 ρ(DS) (S, S0 , Φ1 , · · · , ΦT , G) (a) Lo = 0 1.5 2 0 1 2 3 4 5 ρ(DS) (S, S0 , Φ1 , · · · , ΦT , G) (b) Lo = 7 Fig. 4. CDFs of ρ(DS) (S, S0 , Φ1 , ..., ΦT , G). The optimal support set S has L = 8 elements, the optimally preselected subset So has Lo elements, Φ1 , ..., ΦT ∈ CM ×N are time-varying sensing matrices of M = 8 and N = 64, and G ∈ CL×T is a random matrix whose elements have zero-mean and unit-variance. The upper bound of ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) as T → ∞ (DS) denotes ρT →∞ (N, M, L, Lo ) in Theorem 2. changes according to T and Lo . In Fig. 3(a), Pr(ρ(S) (S, S0 , Φ, G) < 1) becomes higher as T (S) gets larger. In addition, as shown in Proposition 1, the CDF converges to that of ρT →∞ (S, S0 , Φ) in (21) regardless of G. Note that T = 1 indicates the OMP case. Fig. 3(b) shows the CDF versus Lo in the OMP case (T = 1) and the asymptotic SOMP case (T = ∞). As Lo increases, the gap between OMP and SOMP reduces and becomes zero if M = L. In this extreme case where M = L, both OMP and SOMP do not work properly because Pr(ρ(S) (S, S0 , Φ, G) < 1) becomes low in the last iteration. Fig. 4 shows the CDF of ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) in the DSOMP case. Unlike the SOMP (DS) case, ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) converges to a deterministic value ρT →∞ (N, M, L, Lo ) in (24) for any sensing matrices Φ1 , ..., ΦT as T increases. The simulation results in Fig. 4 also indicate (DS) that ρT →∞ (N, M, L, Lo ) has an upper bound as shown in (24), which is consistent with the analytical result in Theorem 2. 
Although the convergence rate becomes slower as Lo increases, Pr(ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) < 1) = 1 is guaranteed as T → ∞. Simulation results in Fig. 4 show that a reasonably small value of T  ∞ can ensure a perfect recovery. Fig. 5 compares DSOMP and DCOMP. As discussed in Section IV, both select the same support at the first iteration, and the superiority of DCOMP over DSOMP starts from the second iteration. In Fig. 5, the CDF of ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) and ρ(DC) (S, S0 , Φ1 , ..., ΦT , G) are compared in the last iteration, i.e., Lo = 7. The figure shows that the CDF of the DCOMP case is located on the left side of that of the DSOMP case, which means DCOMP has a higher 22 1 0.9 0.8 0.7 CDF 0.6 0.5 T T T T T T T T 0.4 0.3 0.2 0.1 = 1: DSOMP = 1: DCOMP = 4: DSOMP = 4: DCOMP = 8: DSOMP = 8: DCOMP = 64: DSOMP = 64: DCOMP 0 0 1 2 3 4 5 ρ(DS or DC) (S, S0 , Φ1 , · · · , ΦT , G) 1 1 0.9 0.9 0.8 0.7 0.6 0.5 1st iter. 2nd iter. 3rd iter. 4th iter. 5th iter. 6th iter. 7th iter. 8th iter. total 0.4 0.3 0.2 0.1 Success probability per iteration Success probability per iteration Fig. 5. Comparison between ρ(DS) (S, S0 , Φ1 , ..., ΦT , G) of DSOMP and ρ(DC) (S, S0 , Φ1 , ..., ΦT , G) of DCOMP when N = 64, M = 8, L = 8, and Lo = 7. 0.8 0.7 0.6 0.5 1st iter. 2nd iter. 3rd iter. 4th iter. 5th iter. 6th iter. 7th iter. 8th iter. total 0.4 0.3 0.2 0.1 0 0 0 10 20 30 40 50 60 0 10 T 20 30 40 50 60 T (a) DSOMP (b) DCOMP Fig. 6. Success probability of recovering one of elements in the optimal support set at each iteration for Lo = 1, ..., L when N = 64, M = 8, and L = 8. success probability than DSOMP. Simulation results with respect to the success probability per iteration is shown in Fig. 6. As expected, the success probability at the first iteration is identical for both DSOMP and DCOMP, but the success probability of DCOMP becomes higher than that of DSOMP from the second iteration, leading to the higher success probability in total. B. Performance evaluation on the covariance estimation for hybrid architecture In this subsection, we evaluate the performance of the spatial channel covariance estimation.   The performance metric is defined as η = Tr U∗ Rh UR̂ R̂h h   Tr U∗R Rh URh [24] where URh and UR̂h are the h matrices whose columns are the eigenvectors of the ideal covariance Rh and the estimated covariance R̂h . Note that 0 ≤ η ≤ 1 and a larger η indicates a more accurate estimation. 1 1 0.9 0.9 0.8 0.8 0.7 0.7 Efficiency metric Efficiency metric 23 0.6 0.5 0.4 0.3 OMP SOMP COMP DSOMP DCOMP 0.2 0.1 0.6 0.5 0.4 0.3 OMP SOMP COMP DSOMP DCOMP 0.2 0.1 0 0 0 10 20 30 40 50 0 T (a) M = 16 10 20 30 40 50 T (b) M = 8 Fig. 7. Comparison among OMP, SOMP, COMP, DSOMP, and DCOMP when N = 64, D = 256, L = 8 for the single-MSantenna case in narrowband systems. Fig. 7 shows simulation results when N = 64, D = 256, L = 8, and SNR = 10 dB for the single-MS-antenna case in narrowband systems. In the fixed combining matrix case in Fig. 7(a) where M = 16, we can see that the proposed COMP outperforms OMP and SOMP. Combined with the time-varying analog combining matrix, DSOMP and DCOMP have more gain over other techniques with a fixed combining matrix. The gap between DSOMP and DCOMP, however, is marginal in this case. Fig. 7(b) shows the results with 8 RF chains instead of 16 RF chains. As discussed in Section IV, none of the algorithms using a fixed analog combining matrix such as OMP, SOMP, and COMP work properly. Moreover, SOMP even yields worse performance than OMP. 
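Before turning to the time-varying schemes, here is a minimal NumPy sketch of how the efficiency metric η defined above can be evaluated. It assumes that URh and UR̂h collect the r dominant eigenvectors of the ideal and estimated covariance matrices; the choice r = L (the rank suggested by the L-path channel), the function name, and the toy sanity check are illustrative assumptions rather than details taken from [24].

import numpy as np

def efficiency_metric(R_ideal, R_est, r):
    # eta = Tr(U_est^* R U_est) / Tr(U_ideal^* R U_ideal), with U_* the r dominant
    # eigenvectors of the estimated / ideal covariance (np.linalg.eigh sorts ascending).
    _, V_ideal = np.linalg.eigh(R_ideal)
    _, V_est = np.linalg.eigh(R_est)
    U_ideal = V_ideal[:, -r:]      # r dominant eigenvectors of the ideal covariance
    U_est = V_est[:, -r:]          # r dominant eigenvectors of the estimated covariance
    num = np.trace(U_est.conj().T @ R_ideal @ U_est).real
    den = np.trace(U_ideal.conj().T @ R_ideal @ U_ideal).real
    return num / den

# toy sanity check: a rank-L covariance estimated almost exactly gives eta close to 1
rng = np.random.default_rng(1)
N, L = 64, 8
A = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
R = A @ A.conj().T
print(efficiency_metric(R, R + 1e-2 * np.eye(N), r=L))

Because the dominant eigenvectors of the ideal covariance maximize the trace over all r-dimensional orthonormal bases, the ratio computed this way is at most one, consistent with the statement that 0 ≤ η ≤ 1.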
In contrast, DSOMP and DCOMP that use a time-varying sensing matrix have a remarkable gain compared to those that use a fixed sensing matrix. In addition, DCOMP, which exploits the Hermitian property of covariance matrices, has a considerable gain compared to DSOMP. These results are consistent with the analysis in Section IV. Regarding the extension of the proposed work to the multiple-MS-antenna case, the four different approaches explained in Section V-A are compared in Fig. 8. As expected, the use of time-varying precoding and combining matrices at both BS and MS improves covariance estimation. Fig. 9 shows the extension to the wideband case. As shown in the figure, the combination of COMP and DCOMP outperforms the direct extension of COMP or DCOMP. In addition to the efficiency metric shown in Fig. 8(a) and Fig. 9(a), we evaluate the loss caused by covariance estimation error in terms of spectral efficiency of hybrid precoding. The rate loss 1 0 0.9 -10 0.8 -20 0.7 -30 Rate loss (%) Efficiency metric 24 0.6 0.5 0.4 -40 -50 -60 0.3 DCOMP DCOMP DCOMP DCOMP 0.2 0.1 w/ w/ w/ w/ Approach Approach Approach Approach 1 2 3 4 (W: (W: (W: (W: fixed, f : fixed, f : varying, varying, -70 fixed) varying) f : fixed) f : varying) DCOMP DCOMP DCOMP DCOMP -80 0 w/ w/ w/ w/ Approach Approach Approach Approach 1 2 3 4 (W: (W: (W: (W: fixed, f : fixed, f : varying, varying, fixed) varying) f : fixed) f : varying) -90 0 5 10 15 20 0 5 10 T 15 20 T (a) Efficiency metric vs. T (b) Rate loss vs. T Fig. 8. Comparison among four different approaches when NT = 64, MT = 8, DT = 256, NR = 64, MR = 8, DR = 256, L = 8, and SNR = 0 dB for the multiple-MS-antenna case in narrowband systems. 1 0 0.9 -5 0.8 Rate loss (%) Efficiency metric -10 0.7 0.6 0.5 0.4 -15 -20 -25 0.3 -30 direct extension of COMP direct extension of DSOMP direct extension of DCOMP modified DCOMP (WB-DCOMP) 0.2 0.1 direct extension of COMP direct extension of DSOMP direct extension of DCOMP modified DCOMP (WB-DCOMP) -35 0 -40 0 5 10 15 20 25 30 0 T 5 10 15 20 25 30 T (a) Efficiency metric vs. T (b) Rate loss vs. T Fig. 9. Comparison among the WB-DCOMP and the direct extensions of other algorithms when N = 64, M = 8, D = 256, L = 8, K = 128, NCP = 32, and SNR = 0 dB for the single-MS-antenna case in wideband systems. The normalized delays τ` /Ts ’s are uniformly distributed in (0, NCP ), and p(t) = sinc(t/Ts ) is used assuming that ideal low pass filters are employed. in Fig. 8(b) and Fig. 9(b) is defined as SEest −SEideal SEideal in percentage where SEest and SEideal are the spectral efficiency of the estimated and ideal covariance case under the assumption that the analog precoding matrix is composed of the dominant eigenvectors of the estimated or ideal spatial channel covariance matrix. The figures show that the trend in the rate loss is consistent with that in the efficiency metric although the rate does not only depend on the efficiency metric but also on other factors such as the type of MIMO techniques and the number of streams. VII. C ONCLUSIONS In this paper, we proposed spatial channel covariance estimation techniques for the hybrid MIMO structure. Based on compressive sensing techniques that leverage the spatially sparse 25 property of the channel, we developed covariance estimation algorithms that are featured by two key ideas. One is to apply a time-varying sensing matrix, and the other is to exploit the Hermitian property of the covariance matrix. 
Simulation results showed that the proposed greedy algorithms outperform prior work, and the benefit of adopting the two ideas becomes more significant as the number of RF chains becomes smaller. We also analyzed the performance of the proposed algorithms in terms of recovery success probability. The theoretical analysis indicated that the success probability approaches one as the number of snapshots increases if a time-varying sensing matrix is applied. The analysis also proved that using the structured property of the covariance matrix improves the estimation performance, which is consistent with the simulation results. A PPENDIX A P ROOF OF T HEOREM 1 Let Pt,So = Φt,So Φ†t,So . Then, the support selection criterion at the (Lo + 1)-th iteration in the DSOMP shown in Algorithm 3 becomes j (opt) = arg max j∈N T X φ∗t,j (I − Pt,So ) Φt,S gt,S 2 t=1 (a) = arg max j∈N \So (b) = arg max j∈N \So T X φ∗t,j (I − Pt,So ) Φt,S\So gt,S\So 2 (42) t=1 T X X 2 ψ ∗t,j ψ t,i gt,i , t=1 i∈S\So where (a) comes from Φt,S gt,S = Φt,So gt,So + Φt,S\So gt,S\So and (I − Pt,So ) Φt,So = 0, and (b) comes from (I−Pt,So )2 = I−Pt,So due to the characteristics of the orthogonal projection matrix Pt,So . Consequently, one of the elements in the optimal support S is selected, i.e., j (opt) ∈ S \ So at iteration Lo + 1 if and only if (20) is satisfied. 26 A PPENDIX B P ROOF OF P ROPOSITION 1 If Φ1 = · · · = ΦT = Φ, ρ(DS) (S, So , Φ1 , ..., ΦT , G) in (20) becomes  P  maxj∈N \S ψ ∗j ΨS\So T1 Tt=1 gt gt∗ Ψ∗S\So ψ j  P  ρ(DS) (S, So , Φ, ..., Φ, G) = maxj∈S\So ψ ∗j ΨS\So T1 Tt=1 gt gt∗ Ψ∗S\So ψ j (a) → maxj∈N \S ψ ∗j ΨS\So Ψ∗S\So ψ j (43) maxj∈S\So ψ ∗j ΨS\So Ψ∗S\So ψ j P 2 maxj∈N \S i∈S\So ψ ∗j ψ i = P 2, maxj∈S\So i∈S\So ψ ∗j ψ i where (a) comes from the fact that gt,i are independent random variables with zero mean and unit variance. A PPENDIX C P ROOF OF T HEOREM 2 ρ(DS) (S, So , Φ1 , ..., ΦT , G) in (20) can be written as  P P 2 ∗ maxj∈N \S T1 Tt=1 ψ ψ + X t,j t,j t,i i∈S\So  P ρ(DS) (S, So , Φ1 , ..., ΦT , G) = P 2 ∗ ψ ψ + X maxj∈S\So T1 Tt=1 t,j t,j t,i i∈S\So h i 2 (L − Lo )E ψ ∗t,j(6=i) ψ t,i (a) h i h i, → 2 2 (L − Lo − 1)E ψ ∗t,j(6=i) ψ t,i + E ψ ∗t,i ψ t,i where Xt,j = P i1 ∈S\So P i2 ∈S\So i2 6=i1 (44) ∗ gt,i1 gt,i ψ ∗t,j ψ t,i1 ψ ∗t,i2 ψ t,j . In (44), (a) comes from the fact that 2 ψ ∗t,i ψ t,i for all i are identical random variables with non-zero mean, ψ ∗t,j ψ t,i for all j 6= i are identical random variables with zero-mean, and gt,i are independent random variables with zero mean and unit variance. h i h i 2 2 Now let us look at E ψ ∗t,i ψ t,i and E ψ ∗t,j(6=i) ψ t,i . Let Ωt = Ψ∗t,N \So Ψt,N \So , then the trace of Ωt becomes Tr(Ωt ) = Tr Φ∗t,N \So (IM − Pt,So )2 Φt,N \So (a)  = Tr (IM − Pt,So ) N IM − Φt,So Φ∗t,So (b) = N (M − Lo ) ,  (45) 27 where (a) comes from Φt,So Φ∗t,So + Φt,N \So Φ∗t,N \So = N IN due to the tight frame property, and i h 2 ∗ is given by (b) comes from (I − Pt,So ) Φt,So = 0. From (45), the lower bound of E ψ t,i ψ t,i h E 2 ψ ∗t,i ψ t,i  2 i (a) 2  ∗ E [Tr(Ωt )] N 2 (M − Lo )2 = ≥ E ψ t,i ψ t,i = , N − Lo (N − Lo )2 (46) where (a) comes from the fact that E[|X|2 ] ≥ (E[|X|])2 for any random variable X. Now, let us look at the squared Frobenius norm of Ωt that is given by  2  2 ∗ kΩt kF = Tr (IM − Pt,So ) Φt,N \So Φt,N \So (47) 2 = N (M − Lo ) . In addition, E [kΩt k2F ] can be represented differently as   X   E kΩt k2F = E   2 X ψ ∗t,i ψ t,i + i∈N \So = (N − Lo )E j,i∈N \So j6=i h 2 ψ ∗t,i ψ t,i i  2 ψ ∗t,j ψ t,i   (48) + (N − Lo )(N − Lo − 1)E h 2 ψ ∗t,j(6=i) ψ t,i i . 
h i 2 From (46)-(48), the upper bound of E ψ ∗t,j(6=i) ψ t,i can be obtained as h E ψ ∗t,j(6=i) ψ t,i 2 i = h i 2 N 2 (M − Lo ) − (N − Lo )E ψ ∗t,i ψ t,i (N − Lo )(N − Lo − 1) (49) N (M − Lo )(N − M ) ≤ . (N − Lo )2 (N − Lo − 1) i i h h 2 2 By using the lower bound of E ψ ∗t,i ψ t,i and the upper bound of E ψ ∗t,j(6=i) ψ t,i , it can 2 be shown that the converged value of ρ(DS) (S, So , Φ1 , ..., ΦT , G) in (44) has an upper bound as ρ(DS) (S, So , Φ1 , ..., ΦT , G) → L − Lo L − Lo − 1 + ≤ h i 2 E |ψ ∗t,i ψ t,i | h i 2 E |ψ ∗t,j(6=i) ψ t,i | L − Lo L − Lo − 1 + (M −Lo )(N −Lo −1) (N −M ) (50) , and this upper bound is always less than or equal to one because N ≥ M ≥ L > Lo . 28 A PPENDIX D P ROOF OF T HEOREM 4 As T → ∞, ρ(DS) (S, So , Φ1 , ..., ΦT , G) in (20) can be rewritten as ρ (DS) (S, So , Φ1 , ..., ΦT , G) = (DS) ∗ t=1 φt,j Qt,S,So φt,j P (DS) maxj∈S\So Tt=1 φ∗t,j Qt,S,So φt,j maxj∈N \S PT i h (DS) ∗ Tr(Φt,N \S Q̃t,S,So Φt,N \S ) (a) i, h → (DS) 1 ∗ Φ ) Q̃ E Tr(Φ t,S\S o t,S\So t,S,So L−Lo 1 E N −L (51) (DS) where Q̃t,S,So = (IM − Pt,So ) Φt,S\So Φ∗t,S\So (IM − Pt,So ) and (a) comes from the fact that G and Φt are independent and all elements in G have zero mean and unit variance. In a similar way, ρ(DC) (S, So , Φ1 , ..., ΦT , G) in (25) converges as i h (DC) 1 ∗ E Tr(Φt,N \S Q̃t,S,So Φt,N \S ) N −L i, h ρ(DC) (S, So , Φ1 , ..., ΦT , G) → (DC) 1 ∗ E Tr(Φt,S\So Q̃t,S,So Φt,S\So ) L−Lo (52) (DC) where Q̃t,S,So = Φ∗t,S\So Φ∗t,S\So − Pt,So Φt,S\So Φ∗t,S\So Pt,So . Since the terms in the expectation in (51) and (52) are independent random variables with respect to t, we omit the time slot index t for simplicity. The difference between the inner parts of the denominators in (51) and (52) becomes      (DC) (DS) Tr Φ∗S\So Q̃S,So − Q̃S,So ΦS\So = 2Re Tr Φ∗S\So (I − PSo )ΦS\So Φ∗S\So PSo ΦS\So . (53) Note that both PSo and I − PSo can be represented as the production of two semi-unitary matrices, and thus Φ∗S\So PSo ΦS\So and Φ∗S\So (I − PSo ) ΦS\So become positive semidefinite matrices. Since the trace of the product of two semidefinite matrices is larger than or equal to zero, (53) becomes Tr  Φ∗S\So  (DC) Q̃S,So − (DS) Q̃S,So   ΦS\So ≥ 0 (54) for any Φ, S, and So . The difference between the inner parts of the numerators in (51) and (52) 29 becomes      (DS) (DC) Tr Φ∗N \S Q̃S,So − Q̃S,So ΦN \S = 2Re Tr Φ∗N \S (I − PSo ) ΦS\So Φ∗S\So PSo ΦN \S (a) = −2Re Tr PSo ΦS\So Φ∗S\So (I − PSo ) ΦS\So Φ∗S\So     (DC) (DS) ∗ = −Tr ΦS\So Q̃S,So − Q̃S,So ΦS\So  (55) ≤ 0, where (a) can be proved by using ΦN \S Φ∗N \S = N IM −ΦSo Φ∗So −ΦS\So Φ∗S\So and PSo (N IM − ΦSo Φ∗So )(IM − PSo ) = 0. From the two inequalities in (54) and (55), the converged value in (51) is always larger than or equal to that in (52), and this completes the proof. R EFERENCES [1] W. Roh, J. Seol, J. Park, B. Lee, J. Lee, Y. Kim, J. Cho, K. Cheun, and F. Aryanfar, “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results,” IEEE Comm. Mag., vol. 52, no. 2, pp. 106–113, Feb. 2014. [2] R. Heath, N. González-Prelcic, S. Rangan, W. Roh, and A. Sayeed, “An overview of signal processing techniques for millimeter wave MIMO systems,” IEEE Jour. Select. Topics in Sig. Proc., vol. 10, no. 3, pp. 436–453, Apr. 2016. [3] T. Marzetta, “Noncooperative cellular wireless with unlimited numbers of base station antennas,” IEEE Trans. Wireless Comm., vol. 9, no. 11, pp. 3590–3600, Nov. 2010. [4] J. Hoydis, S. ten Brink, and M. 
Debbah, “Massive MIMO in the UL/DL of cellular networks: How many antennas do we need?” IEEE Jour. Select. Areas in Comm., vol. 31, no. 2, pp. 160–171, Feb. 2013. [5] A. F. Molisch, V. V. Ratnam, S. Han, Z. Li, S. L. H. Nguyen, L. Li, and K. Haneda, “Hybrid beamforming for massive MIMO: A survey,” IEEE Comm. Mag., vol. 55, no. 9, pp. 134–141, 2017. [6] O. El Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. Heath, “Spatially sparse precoding in millimeter wave MIMO systems,” IEEE Trans. Wireless Comm., vol. 13, no. 3, pp. 1499–1513, Mar. 2014. [7] A. Alkhateeb, O. El Ayach, G. Leus, and R. Heath, “Hybrid precoding for millimeter wave cellular systems with partial channel knowledge,” in Proc. Info. Th. App. (ITA), Feb. 2013, pp. 1–5. [8] X. Yu, J. Shen, J. Zhang, and K. Letaief, “Alternating minimization algorithms for hybrid precoding in millimeter wave MIMO systems,” IEEE Jour. Select. Topics in Sig. Proc., vol. 10, no. 3, pp. 485–500, Apr. 2016. [9] S. Park, A. Alkhateeb, and R. Heath, “Dynamic subarrays for hybrid precoding in wideband mmWave MIMO systems,” IEEE Trans. Wireless Comm., vol. 16, no. 5, pp. 2907–2920, May 2017. [10] W. Ni and X. Dong, “Hybrid block diagonalization for massive multiuser MIMO systems,” IEEE Trans. Comm., vol. 64, no. 1, pp. 201–211, Jan. 2016. [11] F. Sohrabi and W. Yu, “Hybrid digital and analog beamforming design for large-scale antenna arrays,” IEEE Jour. Select. Topics in Sig. Proc., vol. 10, no. 3, pp. 501–513, Apr. 2016. [12] A. Alkhateeb, G. Leus, and R. Heath, “Limited feedback hybrid precoding for multi-user millimeter wave systems,” IEEE Trans. Wireless Comm., vol. 14, no. 11, pp. 6481–6494, Nov. 2015. 30 [13] A. Adhikary, J. Nam, J. Ahn, and G. Caire, “Joint spatial division and multiplexing: The large-scale array regime,” IEEE Trans. Info. Th., vol. 59, no. 10, pp. 6441–6463, Oct. 2013. [14] S. Park, J. Park, A. Yazdan, and R. W. Heath, “Exploiting spatial channel covariance for hybrid precoding in massive MIMO systems,” IEEE Trans. Sig. Proc., vol. 65, no. 14, pp. 3818–3832, Jul. 2017. [15] A. Alkhateeb, O. El Ayach, G. Leus, and R. Heath, “Channel estimation and hybrid precoding for millimeter wave cellular systems,” IEEE Jour. Select. Topics in Sig. Proc., vol. 8, no. 5, pp. 831–846, Oct. 2014. [16] J. Lee, G. T. Gil, and Y. H. Lee, “Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications,” IEEE Trans. Comm., vol. 64, no. 6, pp. 2370–2386, Jun. 2016. [17] R. T. Suryaprakash, M. Pajovic, K. J. Kim, and P. Orlik, “Millimeter wave communications channel estimation via Bayesian group sparse recovery,” in Proc. IEEE Int. Conf. Acoust., Speech and Sig. Proc., Mar. 2016, pp. 3406–3410. [18] Z. Li, S. Han, and A. F. Molisch, “Optimizing channel-statistics-based analog beamforming for millimeter-wave multi-user massive MIMO downlink,” IEEE Trans. Wireless Comm., vol. 16, no. 7, pp. 4288–4303, Jul. 2017. [19] R. Méndez-Rial, N. G. Prelcic, and R. W. Heath, “Adaptive hybrid precoding and combining in mmWave multiuser MIMO systems based on compressed covariance estimation,” in Proc. IEEE Int. Workshop on Comp. Adv. in Multi-Sensor Adap. Proc. (CAMSAP), Dec. 2015, pp. 213–216. [20] D. Love, R. Heath, V. Lau, D. Gesbert, B. Rao, and M. Andrews, “An overview of limited feedback in wireless communication systems,” IEEE Jour. Select. Areas in Comm., vol. 26, no. 8, pp. 1341–1365, Oct. 2008. [21] E. Bjornson, D. Hammarwall, and B. 
Ottersten, “Exploiting quantized channel norm feedback through conditional statistics in arbitrarily correlated MIMO systems,” IEEE Trans. Sig. Proc., vol. 57, no. 10, pp. 4027–4041, Oct. 2009. [22] S. Shakeri, D. D. Ariananda, and G. Leus, “Direction of arrival estimation using sparse ruler array design,” in Proc. IEEE Workshop on Sign. Proc. Adv. in Wireless Comm., June 2012, pp. 525–529. [23] D. Romero and G. Leus, “Compressive covariance sampling,” in Proc. Info. Th. App. (ITA), Feb 2013, pp. 1–8. [24] S. Haghighatshoar and G. Caire, “Massive MIMO channel subspace estimation from low-dimensional projections,” IEEE Trans. Sig. Proc., vol. 65, no. 2, pp. 303–318, Jan 2017. [25] H. Ghauch, T. Kim, M. Bengtsson, and M. Skoglund, “Subspace estimation and decomposition for large millimeter-wave MIMO systems,” IEEE Jour. Select. Topics in Sig. Proc., vol. 10, no. 3, pp. 528–542, Apr. 2016. [26] Y. Chi, Y. C. Eldar, and R. Calderbank, “PETRELS: parallel subspace estimation and tracking by recursive least squares from partial observations,” IEEE Trans. Sig. Proc., vol. 61, no. 23, pp. 5947–5959, Dec 2013. [27] S. F. Chuang, W. R. Wu, and Y. T. Liu, “High-resolution AoA estimation for hybrid antenna arrays,” IEEE Trans. on Ant. Prop., vol. 63, no. 7, pp. 2955–2968, July 2015. [28] Y. Peng, Y. Li, and P. Wang, “An enhanced channel estimation method for millimeter wave systems with massive antenna arrays,” IEEE Comm. Lett., vol. 19, no. 9, pp. 1592–1595, Sep. 2015. [29] J. Lee, G. T. Gil, and Y. H. Lee, “Channel estimation via orthogonal matching pursuit for hybrid MIMO systems in millimeter wave communications,” IEEE Trans. Comm., vol. 64, no. 6, pp. 2370–2386, Jun. 2016. [30] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Sig. Proc., vol. 53, no. 7, pp. 2477–2488, July 2005. [31] J. Chen and X. Huo, “Theoretical results on sparse representations of multiple-measurement vectors,” IEEE Trans. Sig. Proc., vol. 54, no. 12, pp. 4634–4643, Dec. 2006. [32] M. F. Duarte and Y. C. Eldar, “Structured compressed sensing: From theory to applications,” IEEE Trans. Sig. Proc., vol. 59, no. 9, pp. 4053–4085, Sep. 2011. [33] J. F. Determe, J. Louveaux, L. Jacques, and F. Horlin, “On the exact recovery condition of simultaneous orthogonal matching pursuit,” IEEE Sig. Proc. Lett., vol. 23, no. 1, pp. 164–168, Jan 2016. 31 [34] Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications. Cambridge, 2012. [35] S. Park and R. W. Heath, “Spatial channel covariance estimation for mmWave hybrid MIMO architecture,” in Proc. of Asilomar Conf. on Sign., Syst. and Computers, Nov. 2016, pp. 1424–1428. [36] R. Méndez-Rial, C. Rusu, N. G. Prelcic, A. Alkhateeb, and R. W. Heath, “Hybrid MIMO architectures for millimeter wave communications: Phase shifters or switches?” IEEE Access, vol. 4, pp. 247–267, Jan. 2016. [37] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Simultaneous sparse approximation via greedy pursuit,” in Proc. IEEE Int. Conf. Acoust., Speech and Sig. Proc., vol. 5, Mar. 2005, pp. v/721–v/724 Vol. 5. [38] J. Determe, J. Louveaux, L. Jacques, and F. Horlin, “On the noise robustness of simultaneous orthogonal matching pursuit,” IEEE Trans. Sig. Proc., vol. 65, no. 4, pp. 864–875, Feb. 2017. [39] M. A. Davenport and M. B. Wakin, “Analysis of orthogonal matching pursuit using the restricted isometry property,” IEEE Trans. Info. Th., vol. 56, no. 9, pp. 4395–4401, Sept 2010. [40] D. D. 
Ariananda and G. Leus, “Direction of arrival estimation for more correlated sources than active sensors,” Sig. Proc., vol. 93, no. 12, pp. 3435–3448, 2013. [41] J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Trans. Info. Th., vol. 50, no. 10, pp. 2231–2242, Oct 2004. [42] P. Schniter and A. Sayeed, “Channel estimation and precoder design for millimeter-wave communications: The sparse way,” in Proc. of Asilomar Conf. on Sign., Syst. and Computers, Nov. 2014, pp. 273–277. [43] A. Alkhateeb and R. Heath, “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” IEEE Trans. Comm., vol. 64, no. 5, pp. 1801–1818, May 2016.
1 Side Information in the Binary Stochastic Block Model: Exact Recovery arXiv:1708.04972v1 [] 16 Aug 2017 Hussein Saad, Student Member, IEEE, Ahmed Abotabl , Student Member, IEEE and Aria Nosratinia, Fellow, IEEE Abstract In the community detection problem, one may have access to additional observations (side information) about the label of each node. This paper studies the effect of the quality and quantity of side information on the phase transition of exact recovery in the binary symmetric stochastic block model (SBM) with n nodes. When the side information consists of the label observed through a binary symmetric channel with crossover probability α, and when log( 1−α α ) = O(log(n)), it is shown that side information has a positive effect on phase transition; the new phase transition under this condition is characterized. When α is constant or approaches zero sufficiently slowly, i.e., log( 1−α α ) = o(log(n)), it is shown that side information does not help exact recovery. When the side information consists of the label observed through a binary erasure channel with parameter ǫ, and when log(ǫ) = O(log(n)), it is shown that side information improves exact recovery and the new phase transition is characterized. If log(ǫ) = o(log(n)), then it is shown that side information is not helpful. The results are then generalized to an arbitrary side information of finite cardinality. Necessary and sufficient conditions are derived for exact recovery that are tight, except for one special case under M -ary side information. An efficient algorithm that incorporates the effect of side information is proposed that uses a partial recovery algorithm combined with a local improvement procedure. Sufficient conditions are derived for exact recovery under this efficient algorithm. Index Terms Community detection, Stochastic block model, Side information, Exact recovery. The authors are with the Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75083-0688 USA, E-mail: [email protected];[email protected];[email protected]. August 17, 2017 DRAFT 2 I. I NTRODUCTION The problem of learning or detecting community structures in random graphs has been studied in statistics [1]–[5], computer science [6]–[10] and theoretical statistical physics [11], [12], among others. The problem of detecting communities has many applications: finding like-minded people in social networks [13], improving recommendation systems [14], detecting protein complexes [15]. Among the different random graph models [16], [17], the stochastic block model (SBM) is widely used in the context of community detection [18]. This extension of the Erdos-Renyi model consists of n nodes that belong to two1 communities, each pair connected with an edge with probability p if the pair belongs to the same community and with probability q if they do not. The prior distribution of the node labels is identical and independent, and often uniform (labels are equi-probable). The goal of community detection is to recover/detect the labels upon observing the graph edges. The graphical structure of inference problems can lead to well-characterized asymptotic results (e.g. phase transitions) that can give insights on the performance of inference algorithms on large data sets. But a purely graphical model for observations also unfortunately limits the scope of the applicability of the model. For example, social networks such as Facebook and Twitter have access to much information other than the graph edges. 
A citation network that has the authors names, keywords, and abstracts of papers, and therefore may provide significant additional information beyond the co-authoring relationships. In statistics, the problem of community detection with additional information such as “annotation” [19], “attributes” [20], or “features” [21] has been broached, wherein for matching to real (finite) data sets a parametric model is proposed that expresses the joint probability distribution of the graphical and non-graphical (attribute/feature) observations conditioned on the true label and a modeling parameter. These works concentrate on model-matching and inference using graphical and non-graphical observations. This is unlike the present paper that is concerned with phase transitions and thresholds, but they nevertheless show the interest of the broader community in the issue of side-information in graph-based inference. For reference, a few basic definitions are highlighted before continuing with the literature survey. Correlated recovery refers to community detection that performs better than random 1 We consider the binary stochastic block model. August 17, 2017 DRAFT 3 guessing [22]–[26]. Weak recovery refers to a vanishing fraction of misclassified labels [27]– [29]. Exact recovery refers to recovering all communities with probability converging to one as n → ∞ [18], [30], [31]. Phase transition refers to a threshold on the random graph parameters such that on one side of the threshold no algorithm can achieve a certain form of recovery, and on the other side some algorithm exists to achieve recovery. A sparse regime is in place when the average degree of the graph is Ω(1), and a graph is dense if the average degree is Ω(log n). The asymptotic behavior of belief propagation with side information has been studied in binary community detection in the sparse regime. Mosel and Xu [32] considered side information consisting of the label observed through a binary symmetric channel, showing that subject to side information the belief propagation under certain condition has the same residual error as the MAP estimator. They also showed weak recovery if the average degree grows with n. Cai et. al [33] considered side information consisting of the label observed through a binary erasure channel (BEC) with erasure probability ǫ → 1 as n → ∞, demonstrating regimes for correlated recovery and weak recovery. Both [32], [33] present sufficient (but not necessary) conditions. Kadavankandy et al. [34] studies the single-community problem under side information consisting of the labels observed through a binary asymmetric channel, where they showed weak recovery in the sparse regime. Kanade et. al [35] showed that for symmetric communities, the phase transition of correlated recovery is not affected if labels are observed through a BEC whose erasure probability ǫ → 1 as n grows. The same side information was shown to be helpful to correlated recovery of local algorithms. Caltagirone et. al [36] considered binary asymmetric communities, showing that in the presence of side information mentioned above, local algorithms achieve correlated recovery up to the phase transition threshold. This paper studies the effect of side information on binary community detection, motivated and directed by several observations. First, while the effect of side information on correlated recovery and weak recovery has been studied, its effect on exact recovery has been unknown. 
Also, the literature has concentrated on the effect of side information only on belief propagation, which is not enough to determine phase transition. Second, only binary side information about binary labels has been studied (or binary side information with erasures). The more general case where side information has an arbitrary alphabet is motivated by many practical applications, but an Mary side information has not been thoroughly studied either in the context of belief propagation or maximum likelihood. Finally, in many cases either necessary or sufficient conditions for recovery August 17, 2017 DRAFT 4 is known, but not both. In this paper, we study the effect of side information on exact recovery in the dense regime, i.e., when p = a logn n and q = b logn n with constants a > b > 0. We investigate the question: when and how much can side information affect the phase transition threshold of exact recovery?2 We study this question for the popular models of side information: (a) the labels are observed through a binary symmetric channel with crossover probability α ∈ (0, 0.5), and (b) labels are observed through a binary erasure channel with erasure probability ǫ ∈ (0, 1). We also generalize the existing models to include arbitrary side information distributions on finite alphabets. For exact recovery, necessary and sufficient conditions are derived that are tight. More specifically: • For side information observed through a binary erasure channel with erasure probability ǫ ∈ (0, 1), we show that when log(ǫ) is constant independent of n, or, surprisingly, o(log(n)), √ then side information does not help exact recovery and the phase transition is still ( a − √ 2 b) > 2. When log(ǫ) is O(log(n)), then side information helps and the phase transition for exact recovery changes. More precisely, we show that when log(ǫ) = −β log(n) for some √ √ β > 0, then: ( a− b)2 +2β > 2 is necessary and sufficient for exact recovery. We provide our sufficient conditions for two detection algorithms, namely, Maximum Likelihood and an efficient algorithm which uses a partial recovery algorithm from the literature combined with a local improvement procedure that combines both the graph and side information. Also, during the proofs, we needed one lemma from [30], for which we provide an alternative and more compact proof. • For side information observed through a binary symmetric channel with crossover proba) is o(log(n)), then side information bility α ∈ (0, 0.5), we show that when c = log( 1−α α √ √ does not help exact recovery and the phase transition is still ( a − b)2 > 2. When α is decreasing with n such that c = β log(n) for some β > 0, then: for graphs with T (a−b) 2 T (a−b) 2 < β, β > 1 is necessary and sufficient for exact recovery and for graphs with > β, η(a, b, β) > 2 is necessary and sufficient for exact recovery, where η(a, b, β) = p γ+β + Tβ log( γ−β ), where T = log( ab ), γ = β 2 + abT 2 . Unlike the partial noiseless a+b+β− 2γ T side information, we provide the sufficient conditions only for the efficient algorithm3. 2 √ √ The exact recovery phase transition without side information is ( a − b)2 > 2 [30]. 3 These results appear in the Allerton Conference on Communications, Control, and Computing 2017 [37]. August 17, 2017 DRAFT 5 • We then generalize our results to M-ary side information with finite M. We provide necessary and sufficient conditions and show that they are tight, expect for one special case, by extending our efficient algorithm to M-ary side information. 
Surprisingly, we show that if for at least one m ∈ {1, · · · , M} the log-likelihood ratio of the m-th side information symbol is o(log(n)), then side information does not help exact recovery and the phase transition is still (√a − √b)² > 2. If for all m ∈ {1, · · · , M} the log-likelihood ratio of the side information is Ω(log(n)), then several conditions need to be satisfied, which will be specified later in Section V.

To illustrate our results, we show in Figures 1 and 2 the error exponent, for some values of a, b, for side information observed through binary symmetric and binary erasure channels as a function of β. From the figures, we can see that the value of β needed for recovery depends on a, b. For the binary erasure channel, for any given a > b > 0 with (√a − √b)² < 2, the value of β needed for recovery, i.e., the critical β, is 1 − ½(√a − √b)² + δ for any arbitrarily small δ > 0. For the binary symmetric channel, for any given a > b > 0 with (√a − √b)² < 2, the value of β needed for recovery can be determined as follows: if η(a, b, T(a−b)/2) > 2, then the critical β is the value of β that makes η > 2, and in that case it is less than one. On the other hand, if η(a, b, T(a−b)/2) < 2, then it is 1 + δ for arbitrarily small δ > 0.

[Figure 1: plot of the error exponent 1 − 0.5η(a, b, β) versus the quality of side information β for (a, b) = (5, 1), (4, 1), (3, 1), (2.5, 1); the recovery region and the critical β are marked.]
Fig. 1. Error exponent of the binary symmetric channel side information as a function of β.

[Figure 2: plot of the error exponent 1 − (0.5(√a − √b)² + β) versus the quality of side information β; the recovery region and the critical β are marked.]
Fig. 2. Error exponent of binary erasure channel side information as a function of β.

Remark 1: For weak recovery, it is easy to see that if α or ǫ → 0 as n → ∞ arbitrarily slowly, side information could help to achieve recovery regardless of the observed graph. However, for exact recovery, we still need a certain rate at which α or ǫ → 0 to make the side information useful to the maximum likelihood detector. We will provide some intuition about why this is the case in our analysis in the next few sections.

II. SYSTEM MODEL

In this paper, we consider the binary symmetric stochastic block model with side information. Let the labels of the two communities be 1 or −1. Our observations are generated as follows: each node, out of n nodes, is assigned independently and uniformly at random to one of the two communities. After the assignment of nodes, each pair of nodes is connected with an edge independently, with probability a log(n)/n if the two nodes belong to the same community and with probability b log(n)/n otherwise. Finally, for each node we observe a scalar side information. We consider three models of side information.

First, for each node, we independently observe its true label with probability (1 − α) or the negative of its true label with probability α, for α ∈ (0, 0.5). For the second model, for each node, we independently observe its true label with probability 1 − ǫ or 0 with probability ǫ, for ǫ ∈ (0, 1). For the third model, we consider M-ary side information, where for each node i we independently observe yi ∈ {u1, u2, · · · , uM}. We denote P(yi = um | xi = ±1) = α±,m, with Σ_m α+,m = Σ_m α−,m = 1. We denote the observed graph by G = (V, E), the vector of nodes’ true assignment by x∗, and the vector of nodes’ side information by y.
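To make the observation model concrete, the following is a minimal NumPy sketch that draws one instance of the binary symmetric SBM together with the first two side-information models (the label observed through a binary symmetric channel with crossover probability α, and through a binary erasure channel with erasure probability ǫ, erasures encoded as 0). The adjacency-matrix construction and all variable names are illustrative choices and not prescribed by the paper.

import numpy as np

def sample_sbm_with_side_info(n, a, b, alpha=None, eps=None, rng=None):
    # One draw of the binary symmetric SBM with p = a log(n)/n (within-community)
    # and q = b log(n)/n (across communities), plus optional BSC(alpha) / BEC(eps)
    # side information about the labels (BEC erasures are encoded as 0).
    rng = np.random.default_rng() if rng is None else rng
    x = rng.choice([-1, 1], size=n)                     # i.i.d. uniform labels
    p, q = a * np.log(n) / n, b * np.log(n) / n
    prob = np.where(np.equal.outer(x, x), p, q)         # pairwise edge probabilities
    upper = np.triu(rng.random((n, n)) < prob, k=1)     # independent edges for i < j
    A = (upper | upper.T).astype(int)                   # symmetric adjacency, no self-loops
    y_bsc = x * np.where(rng.random(n) < alpha, -1, 1) if alpha is not None else None
    y_bec = np.where(rng.random(n) < eps, 0, x) if eps is not None else None
    return x, A, y_bsc, y_bec

# small example draw
x, A, y_bsc, y_bec = sample_sbm_with_side_info(n=500, a=4.0, b=1.0, alpha=0.1, eps=0.3)

The M-ary model can be added in the same way by sampling yi from the distribution (α+,m) or (α−,m) that matches the node's label.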
The goal is to recover the node assignment x∗ from the observation of (G, y). Remark 2: In [30] where the graph is the only observation, exact recovery is defined up to a global flip. This is due to the symmetry of the two communities and hence, recovering the true vector x∗ or −x∗ would be considered a success. This is not generally true with side information. For example, with side information observed through binary symmetric or binary erasure channels, symmetry is broken and success is only considered when the true vector x∗ is recovered. III. B INARY S YMMETRIC C HANNEL In this section, we consider side information consisting of the true labels observed through a binary symmetric channel with crossover probability α ∈ (0, 0.5). We provide tight necessary and sufficient conditions for different cases of α. But before we proceed with the proofs we need the following lemmas. First, we present the ML rule for detecting the communities. It is known that without side information, in the binary symmetric SBM, the ML will find two equally sized communities (of size n 2 each) that have the minimum number of edges between them, i.e., minimum cut [30]. So we need to determine the ML rule when it has side information. Denote the first and second communities by A and B, respectively. We will use communities A (1) and B (−1) interchangeably. Let the number of edges inside community A and B be E(A) and E(B), respectively. Also, let the total number of edges in the observed graph be Et . Finally, let the number of {i ∈ A : yi = 1} and {i ∈ B : yi = −1} be J+ (A) and J− (B), respectively. Then, the log-likelihood function can be written as: August 17, 2017 DRAFT 8  (a)   log P(G, y|x) = log P(G|x) + log P(y|x)   n n2 2( 22 )−E(A)−E(B) −E +E(A)+E(B) E(A)+E(B) Et −E(A)−E(B) t (1 − q) 4 = log p q (1 − p)   J+ (A)+J− (B) n−J+ (A)−J− (B) + log (1 − α) α (b)   =R + T E(A) + E(B) (1 + o(1)) + c J+ (A) + J− (B) (1) where T = log( ab ), c = log( 1−α ), (a) holds because G, y are independent given x and (b) α holds by defining R to be a constant that contains all the terms that are independent of x and approximating log( p(1−q) ) by (1 + o(1))T because (1 − p), (1 − q) both approach 1 as n → ∞. q(1−p Therefore, ML rule finds two equally sized communities that have the maximum the number of edges inside the communities weighted by T plus the number of yi that matches the communities’ labels weighted by c. Using this result, we will now get a sufficient and necessary conditions for the event that ML fails to detect the communities. But we need the following definitions. Recall, we defined the true communities as A (1) and B (−1). Moreover, we define the following events. F = {ML fails} FA = {∃i ∈ A : T (E[i, B] − E[i, A]) − cyi ≥ T } FB = {∃j ∈ B : T (E[j, A] − E[j, B]) + cyj ≥ T } (2) where E[·, ·] denotes the number of edges between two sets of nodes. The following two lemmas define lower and upper bounds of the probability of failure of ML. Lemma 1: F will happen if both FA and FB happened. Proof: Define two new communities  = A\{i} ∪ {j} and B̂ = B\{j} ∪ {i}. Hence, we need to   show that log P(G, y|Â, B̂) ≥ log P(G, y|A, B) , which implies the failure of ML. Let Aij ∼ Bern(q) be a random variable representing the existence of the edge between nodes i and j. 
Then, using (1):    log P(G, y|Â, B̂) =R + T E(Â) + E(B̂) + c J+ (Â) + J− (B̂) August 17, 2017 DRAFT 9   =R + T E(A) + E(B) − E[i, A] + E[j, A] − E[j, B] + E[i, B] − 2Aij   + c J+ (A) + J− (B) + J+(j) − J+(i) + J−(i) − J−(j)   =R + T E(A) + E(B) + c J+ (A) + J− (B) − 2T Aij   + T E[j, A] − E[j, B] + cyj + T E[i, B] − E[i, A] − cyi (a)  ≥ log P(G, y|A, B) + 2T (1 − Aij ) (b) ≥ log P(G, y|A, B)  where (a) holds by the assumption that FA ∩ FB happened and (b) holds because (1 − Aij ) ≥ 0 and T ≥ 0. Hence, from the last inequality, we conclude that the ML will not coincide with the true assignment A, B. Lemma 2: Suppose that F happens. Then, there exists k and sets Aw ⊂ A and Bw ⊂ B with |Aw | = |Bw | = k, for 1 ≤ k ≤ n 2 such that:     T E[Bw , A\Aw ]+E[Aw , B\Bw ]−E[Bw , B\Bw ]−E[Aw , A\Aw ] +2c J+ (Bw )−J+ (Aw ) ≥ 0 Proof: Let  = A\{Aw } ∪ {Bw } and B̂ = B\{Bw } ∪ {Aw }. Since F happened, then:   log P(G, y|Â, B̂) ≥ log P(G, y|A, B) , and hence:   T E[A\{Aw }, A\{Aw }] + E[Bw , A\{Aw }] + E[B\{Bw }, B\{Bw }] + E[Aw , B\{Bw }] +   c J+ (A\{Aw }) + J+ (Bw ) + J− (B\{Bw }) + J− (Aw ) ≥   T E[A\{Aw }, A\{Aw }] + E[Aw , A\{Aw }] + E[B\{Bw }, B\{Bw }] + E[Bw , B\{Bw }] +   c J+ (A\{Aw }) + J+ (Aw ) + J− (B\{Bw }) + J− (Bw ) By canceling out similar terms, this concludes our proof. Remark 3: In [30], due to symmetry, recovering the true assignment vector x∗ or −x∗ is considered a success. Thus, the error event was defined for 1 ≤ k ≤ n4 . Here, since α < 0.5, August 17, 2017 DRAFT 10 then side information will break symmetry, and hence, exchanging a subset of nodes of size k is not the same as exchanging a subset of nodes of size n 2 − k. Thus, we defined our error event for 1 ≤ k ≤ n2 . That is true also for side information observed through a binary erasure channel. A. Necessary Conditions Let a > b > 0 and define c = log( 1−α ) and η(a, b, β) = a + b + β − 2γ + Tβ log( γ+β ), where α T γ−β p T = log( ab ), γ = β 2 + abT 2 . √ √ Theorem 1: For c = o(log(n)), if ( a − b)2 < 2, then ML fails in recovering the communities with probability bounded away from zero. On the other hand, for c = β log(n), for some β > 0, for graphs with T (a−b) 2 < β, if β < 1 then ML fails in recovering the communities with probability bounded away from zero, and for graphs with T (a−b) 2 > β, if η(a, b, β) < 2, then ML fails in recovering the communities with probability bounded away from zero. Remark 4: Note that under the assumption that the number of edges connected to nodes i, j ∈ community A are independent, the proof would be much simpler. However, they are dependent. To overcome this dependency, we break the event FA into several events. Such idea was introduced in [30]. Proof: Note that since x∗ is generated uniformly, then ML minimizes the probability of error over all possible estimators. Hence, if the probability of failure of ML is bounded away from zero, then every other estimator has probability of failure bounded away from zero. Let H be a subset of A with |H| = n . log3 (n) We define the following events:  △i = i ∈ H : E[i, H] ≤ log(n) log log(n)  log(n) ≤ T ∗ E[i, B] FiH = i ∈ H : T ∗ E[i, A\H] + cyi + T + T log log(n)  △ = ∀i ∈ H : △i is true  F H = ∪i∈H FiH We will prove this part of the theorem via several lemmas. August 17, 2017 DRAFT 11 Lemma 3: If P(FA ) ≥ 23 , then P(F ) ≥ 31 . Proof: If P(FA ) ≥ 23 , then by the symmetry of the graph and the side information, P(FB ) ≥ 2 3 too. Also, by Lemma 1 FA ∩ FB ⇒ F . 
Then, we have: P(F ) ≥ P(FA ) + P(FB ) − 1 ≥ Lemma 4: IF P(F H ) ≥ 9 10 and P(△) ≥ 9 , 10 1 4 −1= 3 3 then P(F ) ≥ 31 . Proof: It is easy to see that △ ∩ F H ⇒ FA . Hence, P(FA ) ≥ P(F H ) + P(△) − 1 ≥ 2 8 > 10 3 which together with Lemma 3 concludes the proof. 9 10 Based on the above Lemmas, our proof boils down to proving when P(F H ) ≥ 9 10 and P(△) ≥ are true. Lemma 5: P(△) ≥ 9 10 for sufficiently large n. Proof: Let Wi ∼ Bern(p). Then, we have: P(△ci ) =P X i−1 ≤P n log3 (n) log(n) Wj + Wj ≥ log(log(n)) j=1 j=i+1 n 3 (n)  log X j=1 X log(n) Wj ≥ log(log(n))   − log(n) (a)  1 log3 (n)  log(log(n)) ≤ e a log(log(n)) where (a) holds from a multiplicative form of Chernoff bound, which states that for a sequence P of n i.i.d random variables Xi , P( ni=1 Xi ≥ tµ) ≤ ( et )−tµ , where µ = nE[X]. Thus, we get by union bound: August 17, 2017 DRAFT 12 n  1 log3 (n)  log(log(n)) P(△i ) ≥ 1 − log3 (n) e a log(log(n))  − log(n) log(n)−3 log(log(n)) = 1−e e log(n) log(ae) log(n) − log(log(n)) log(log(n))  3 log(log(n))−log(log(log(n)))  = 1 − e−2 log(n)+o(log(n)) Thus, for sufficiently large n, P(△) ≥ 9 10 We will show when P(F H ) ≥ log3 (n) n 9 . 10 is true in two steps. First we will show that if P(FiH ) > 9 . Then, we will show when 10 log3 (n) 9 log(10), then P(F H ) ≥ 10 . n log(10), then P(F H ) ≥ Lemma 6: If P(FiH ) > P(FiH ) > log3 (n) n log(10) is true. Proof: (a) P(F H ) = P(∪i∈H FiH ) = 1 − P(∩i∈H (FiH )c )  n = 1 − 1 − P(FiH ) log3 (n) where (a) holds because FiH are i.i.d random variables. So, for P(F H ) ≥ (1 − P(FiH )) n log 3 (n) ≤ 1 . 10 9 10 to be true, we need If P(FiH ) is not o(1), then clearly the inequality is true. On the other hand, if P(FiH ) is o(1), then: lim 1 − P(FiH ) n→∞  n log3 (n) = lim 1 − P(FiH ) n→∞ −n = lim e log3 (n) P(FiH ) (P(FiH ))( P(F1H ) )( log3n(n) ) i n→∞ So clearly, if P(FiH ) > P(F H ) ≥ 9 . 10 August 17, 2017 log3 (n) n n log(10), then (1 − P(FiH )) log3 (n) ≤ 1 , 10 which implies that DRAFT 13 log3 (n) log(10) is true. n 3 log P(FiH ) > n(n) log(10) for The final step is to show when P(FiH ) > Lemma 12 in the Appendix shows that Recall that c = log( 1−α ). α sufficiently large n if one of the following is satisfied: • √ √ When c = o(log(n)): if ( a − b)2 < 2. • When c = β log(n), β > 0: for any a, b, β, if η(a, b, β) < 2. • When c = β log(n), β > 0: for a, b, β : β > T (a−b) , 2 if η(a, b, β) > 2 and β < 1. B. Sufficient Conditions In this section we provide an efficient algorithm that achieves exact recovery down to the necessary conditions obtained in Section III-A. The algorithm is decomposed into two stages. In the first stage we use the algorithm proposed in [24] on the graph alone. Such algorithm is known to achieve weak recovery. Then, we modify its outcome using the graph and side information. This idea was used before in [30], [31], but it is the first time to be used with side information, to the best of our knowledge. Note that based on Lemma 2, we can try to provide the sufficient conditions by bounding P n2 n 2 (k) 2 Pn , where 1 ≤ k ≤ n2 and the failure event of maximum likelihood as P(F ) ≤ k=1 k P Pk (k) n Pn := P T m i=1 (Zi − Wi ) + c i=1 (Si − Ri ) ≥ 0 , m = 2k( 2 − k), Wi ∼ Bern(p), Zi ∼ Bern(q), Si ∼ Bern(α) and Ri ∼ Bern(1 − α). However, it is difficult to deal with a weighted sum of four independent Binomial random variables with different number of trials and different probability of success. 
The algorithm we provide overcome this difficulty because the second stage that exploits side information works on a node by node basis, and hence, deal with the side information as a Bernoulli random variable instead of Binomial. The algorithm starts by splitting the information in the observed graph into two parts. The first part to be used by the partial recovery algorithm and the second part to be used for local modification with the observed side information. In order to make the two steps independent, we follow a similar idea as the one presented in [30]. First assume we have a complete graph on n nodes. We split its edges into two random graphs each with n nodes. We call these graphs H1 and H2 . More precisely, we generate H1 by keeping edges in the complete graph with probability D . log(n) H2 will be the complement of H1 . Then, we define G1 and G2 as sub-graphs of the observed graph G as: G1 = G ∩ H1 and G2 = G ∩ H2 . August 17, 2017 DRAFT 14 Next, we apply the partial recovery algorithm presented in [24] on G1 . Notice that G1 is a graph obtained from a binary symmetric SBM with connectivity parameters ( Da , Db ). Thus, the n n ′ ′ ′ partial recovery algorithm is guaranteed to return two communities A and B such that A , ′ B coincide with the true community assignment A, B on at least (1 − δ(D))n nodes, with δ(D) → 0 as D → ∞ [24]. The last step is the local modifications step. Now that we have G2 , the side information y, ′ ′ ′ A and B , we locally modify the community assignment as follows: for a node i ∈ A , we flip ′ its membership if the number of edges between i and B is greater than or equal the number of ′ edges between i and A plus ′ c T ′ yi and for node j ∈ B , we flip its membership if the number of ′ edges between j and A is greater than or equal the number of edges between j and B minus c T yi . If the the number of flips in each cluster is not the same, keep the clusters unchanged. √ √ Theorem 2: For c = o(log(n)), if ( a− b)2 > 2, then, there exists large enough D such that, with high probability, the algorithm described above will successfully recover the communities from the observed graph and side information. On the other hand, for c = β log(n), for some β > 0, then the algorithm described above will successfully recover the communities from the observed graph and side information, if:   η(a, b, β) > 2, for Proof:  β > 1, for a, b, β : β < T (a−b) 2 a, b, β : β > T (a−b) 2 Our goal here is to upper bound the error of misclassifing one node, then use a simple union bound over all nodes. If we assume for now (to be changed later) that H2 is a complete graph, then we have four different cases for which a node could be misclassified according to our ′ algorithm. Two cases could happen when the node i ∈ A and another two when the node ′ i ∈ B . The first two cases are displayed in Figures 3, 4. The remaining cases are similar. Now under the assumption that H2 is complete, then the four cases for error can be written as follows. Let Wi ∼ Bern(p), Zi ∼ Bern(q) and yi ∈ {1, −1} with probabilities (1 − α), α, respectively. For simplicity, we will write δ instead of δ(D). Then, we have the following:  Pe = P node i is mislabeled August 17, 2017 DRAFT 15 Yi= {1,-1} ~(1-α,α) Fig. 3. Yi= {1,-1} ~(α,1-α) B A B A A B A B A’ B’ A’ B’ Correct node that will be flipped =P Fig. 4. n  (1−δ) X2 n Zi + i=1 δ2 X i=1 Incorrect node that will not be flipped (1−δ) n2 Wi ≥ X n Wi + i=1 δ2 X i=1 c Zi + y i T  (3) Recall, we assumed that H2 is a complete graph. 
However, using [Lemma 14 in [30]], it can 2D ) with high probability. Hence, be shown that the degree of any node in H2 is at least n(1 − log(n) we will loosely upper bound (3) be removing 2D n log(n) from the first two terms on the right hand side. Thus, we get: Pe ≤ P n  (1−δ) X2 i=1 2D (1−δ) n2 − log(n) n n Zi + δ2 X i=1 X Wi ≥ 2D δn − log(n) n 2 X Wi + i=1 i=1 c Zi + y i T  (4) Now Lemma 17 shows that if c = o(log(n)), then (4) can be upper bounded by (5) and if c = β log(n), for some β > 0, then for a, b, β : β < and for a, b, β : β > T (a−b) , 2 T (a−b) , 2 (4) can be upper bounded by (6) (4) can be upper bounded by (7). 1 √ Pe ≤ n− 2 ( √ a− b)2 +o(1) + n−(1+Ω(1)) 1 Pe ≤ (2 − α)n− 2 η(a,b,β)+o(1) + n−(1+Ω(1)) 1 Pe ≤ (1 − α)n− 2 η(a,b,β)+o(1) + n−β + n−(1+Ω(1)) August 17, 2017 (5) (6) (7) DRAFT 16 Hence, using a union bound (we will do the analysis for (6) only, (5) and (7) follow similarly), we get: 1 P(∃ a misclassified node) ≤ (2 − α)n1− 2 η(a,b,β)+o(1) + n−Ω(1) (8) which shows that if η(a, b, β) > 2, then the algorithm described above will successfully recover the communities from the observed graph and side information with high probability. Note that for (7), Lemma 15 shows that for all a and b, η > 2 if β > 1, and hence, for (7), β > 1 is sufficient. IV. B INARY E RASURE C HANNEL In this section, we consider side information consisting of the true labels observed through a binary erasure channel with parameter ǫ ∈ (0, 1). We provide necessary and sufficient conditions that are tight for different cases for ǫ. Unlike Section III, we provide the sufficient conditions for the Maximum Likelihood detector (known as information theoretic upper bound [30]), then we propose an efficient algorithm that succeeds all the way down to the threshold. But as we did in Section III, before we proceed with the proofs we need the following lemmas. First, we present the ML rule for detecting the communities. Note that since here the side information is either the true label or zero, thus, the maximum likelihood rule is the same as the rule without side information. In other words, the maximum likelihood rule is to maximize the number of edges inside the communities. However, the only difference is the feasible set. Instead P of having a feasible set of all possible vectors of length n ∈ {±1}n such that ni=1 xi = 0, the P new feasible set would be all vectors of length n ∈ {±1}n such that ni=1 xi = 0 and all these vectors match the non erased bits in the observed side information y. The reason why this is true, is because for any vector x that does not match the observed y, then P(y|x) = 0, and hence, the maximum likelihood will never choose such vector. Now based on the above observation, we will now get a sufficient and necessary conditions for the event that ML fails to detect the communities. Recall that F denotes the event of failure of maximum likelihood and E[·, ·] denotes the number of edges between two sets of nodes. We define the following events August 17, 2017 DRAFT 17 FA = {∃i ∈ A : (E[i, B] − E[i, A]) ≥ 1 and yi = 0} FB = {∃j ∈ B : (E[j, A] − E[j, B]) ≥ 1 and yj = 0} Lemma 7: F will happen if both FA and FB happened. Proof: Define two new communities  = A\{i} ∪ {j} and B̂ = B\{j} ∪ {i}. Hence, we need to   show that log P(G, y|Â, B̂) ≥ log P(G, y|A, B) , which implies the failure of ML. Let Aij ∼ Bern(q) be a random variable representing the existence of the edge between nodes i and j. 
Then, using the maximum likelihood rule, we have:    log P(G, y|Â, B̂) =R + T E(Â) + E(B̂) + log P(y|A, B)   =R + T E(A) + E(B) + T E[j, A] − E[i, A] − E[j, B] + E[i, B] − 2Aij  + log P(y|A, B) (a) (b)   ≥ log P(G, y|A, B) + 2T (1 − Aij ) ≥ log P(G, y|A, B) where (a) holds by the assumption that FA ∩ FB happened and (b) holds because (1 − Aij ) ≥ 0. Hence, from the last inequality, we conclude that the ML will not coincide with the true assignment A, B. Lemma 8: Suppose that maximum likelihood fails. Then, there exists k and sets Aw ⊂ A and Bw ⊂ B with |Aw | = |Bw | = k, for 1 ≤ k ≤ n 2 such that (9) is true and the side information yi = 0∀i ∈ Aw , Bw . E[Bw , A\Aw ] + E[Aw , B\Bw ] − E[Bw , B\Bw ] − E[Aw , A\Aw ] ≥ 0 (9) Proof: Let  = A\{Aw } ∪ {Bw } and B̂ = B\{Bw } ∪ {Aw }. Since maximum likelihood is assumed   to have failed, then: log P(G, y|Â, B̂) ≥ log P(G, y|A, B) . First, note that any node to be exchanged, has got to have erased side information, otherwise, it will never be exchanged. Also, note that conditioned on the community assignments, G, y are independent, and since all the August 17, 2017 DRAFT 18 feasible community assignment vectors, x, have the same P(y|x), therefore, we are only left with P(G|x) which gives (9). A. Necessary Conditions √ √ Theorem 3: For log(ǫ) = o(log(n)), if ( a − b)2 < 2, then ML fails in recovering the communities with probability bounded away from zero. On the other hand, for log(ǫ) = −β log(n), √ √ for some β > 0, if 12 ( a − b)2 + β < 1, then ML fails in recovering the communities with probability bounded away from zero. Proof: We will need some definitions as in Section III-A. Some of these definitions are the same but we include them here for completeness. Let H be a subset of A with |H| = n . log3 (n) We define the following events: log(n) log log(n)  △i = i ∈ H : E[i, H] ≤  FiH = i ∈ H : yi = 0 and E[i, A\H] + 1 + log(n) ≤ E[i, B] log log(n)  △ = ∀i ∈ H : △i is true  F H = ∪i∈H FiH We will prove this part of the theorem via several lemmas. Note that Lemmas 3, 4, 5, 6 extend directly to our case here using the above definitions. To complete the proof, we need to show log3 (n) log(10) is true. n 3 log P(FiH ) > n(n) log(10) for when P(FiH ) > sufficiently large n if one of the following is satisfied: √ When log(ǫ) = o(log(n)): if ( a − b)2 < 2. √ √ When log(ǫ) = −β log(n), β > 0: for any a, b, β, if 12 ( a − b)2 + β < 1. shows that • • This is proven in Lemma 13 in the Appendix, which August 17, 2017 √ DRAFT 19 B. Sufficient Conditions Unlike Section III, we provide our sufficient conditions for two detectors, namely, Maximum Likelihood and an efficient algorithm which uses a partial recovery algorithm from the literature combined with a local improvement procedure. 1) Maximum Likelihood: √ √ Theorem 4: For log(ǫ) = o(log(n)), if ( a− b)2 > 2, then the ML detector exactly recovers the communities , with high probability. On the other hand, for log(ǫ) = −β log(n), for some √ √ β > 0, if ( a − b)2 + 2β > 2, then the ML detector exactly recovers the communities , with high probability. Proof: We will use Lemma 8, from which we can define the following event: Pn(k) := P m X i=1 where m = 2k( n2 (Zi − Wi ) ≥ 0  − k), Wi ∼ Bern(p) and Zi ∼ Bern(q). Then, by a simple union bound we have: n P(F ) ≤ 2 X k=1 2k ǫ  n 2 2 k Pn(k) (10) (k) To bound Pn , we can use [Lemma 8 in [30]]. Lemma 16 in the Appendix shows an alternative proof which is easier and more compact. 
Using Lemma 16, we have: Pn(k) ≤ e−2m Using the fact that n 2 X k=1 ǫ2k  n 2 2 k n k  √ log(n) 1 √ ( 2 ( a− b)2 ) n (11) )k , we can bound (10) as follows: ≤ ( ne k n Pn(k) ≤ ǫn + −1 2 X ǫ2k ek  2 log(n)−2 log(2k)+2 k=1 1 k e−4k( 2 − n ) n ≤ o(1) + August 17, 2017 −1 2 X k=1  k 2 log(n)−2 log(2k)+2 e k −4k( 12 − n ) e 1 √ ( a− 2 1 √ ( a− 2 √  b)2 log(n) √ 2 b) − log(ǫ) k 2 log(n)( 1 2−n)  log(n) DRAFT 20 n (a) ≤ o(1) + −1 2 X ek  2 log(n)−2 log(2k)+2 k=1 1 1 √ ( a− 2 k e−4k( 2 − n ) √ log(ǫ) b)2 − log(n)  log(n) (12) where (a) holds because log(ǫ) < 0 and supk 2( 21 − nk ) = 1 Now, we divide our analysis into two cases. • when log(ǫ) = −β log(n), for some β > 0. In this case, we have: n o(1) + −1 2 X k=1 2k ǫ  n 2 2 k n Pn(k) ≤ o(1) + −1 2 X  k 2 log(n)−2 log(2k)+2 e k=1 k −4k( 21 − n ) e 1 √ ( a− 2 √  b)2 +β log(n) n (a) ≤ o(1) + 2 4 X k ) k 2 log(n)−2 log(2k)+2) −4k( 12 − n e e 1 √ ( a− 2 k=1  √ 2 b) +β log(n) n (b) ≤ o(1) + 2 4 X  k k 2 log(n)−2 log(2k)+2−4( 12 − n )(1+δ) log(n) e k=1 n (c) ≤ o(1) + 2 4 X  k k 2−2 log(2k)+4 n log(n)−δ log(n) e k=1 n (d) ≤ o(1) + 2n−δ 4 X e−2k  k log(n) −1+log(2k)−2 n k=1 (13) where (a) holds because increasing) for 1 ≤ k ≤ n 4 n 2 k  = and n 4 n 2 n −k 2  and the range of m is the same (only decreasing not √ √ ≤ k ≤ n2 −1, (b) holds by assuming that 21 ( a− b)2 +β > 1 + δ for δ > 0 and (c), (d) hold because 1 ≤ k ≤ n4 . Now notice that for sufficiently large n, 1 ≤ k ≤ log(2k) − n 4 we have 2k 1 log(n) ≥ log(2k) n 3 Hence for sufficiently large n, n P(F ) ≤ 2n−δ 4 X k=1 which, together with the observation that of the second case of the theorem. August 17, 2017 2 e− 3 k(log(2k)−3) + o(1) P n4 k=1 2 e− 3 k(log(2k)−3) is O(1) concludes the proof DRAFT 21 • when log(ǫ) = o(log(n)). In this case, we have: n o(1) + −1 2 X k=1 2k ǫ  n 2 2 k n Pn(k) ≤ o(1) + −1 2 X  k 2 log(n)−2 log(2k)+2 e k=1 k −4k( 12 − n ) e 1 √ ( a− 2 √  b)2 +o(1) log(n) (a) ≤ Cn−δ + o(1) (14) √ √ where (a) holds as the same analysis done in the first case, just assume that 12 ( a − b)2 > √ √ 1 + δ for δ > 0 instead of 12 ( a − b)2 + β > 1 + δ. This concludes the proof of the first case of the theorem. 2) Efficient Algorithm: In this section we provide an efficient algorithm that achieves exact recovery down to the necessary conditions obtained in Section IV-A. The algorithm is decomposed into two stages. In the first stage we use the algorithm proposed in [24] on the graph alone. Such algorithm is known to achieve partial recovery. Then, we modify its outcome using the graph and side information. The first stage of the algorithm is the same as Section III-B. The second stage, the local modification is however different. ′ ′ After the first stage, we have G2 , the side information y, A and B . We locally modify the community assignment as follows: for any node i, we flip its membership if yi 6= 0 and does ′ ′ not match the assignment node i got from A and B , or if yi = 0 and the number of edges between i and the opposite community is greater than or equal the number of of edges between i and its own community. If the the number of flips in each cluster is not the same, keep the clusters unchanged. √ √ Theorem 5: If log(ǫ) = o(log(n)), then if ( a − b)2 > 2, then, there exists large enough D such that, with high probability, the algorithm described above will successfully recover the communities from the observed graph and side information. 
On the other hand, if log(ǫ) = −β log(n), for some β > 0, then the algorithm described above will successfully recover the √ √ communities from the observed graph and side information, if ( a − b)2 + 2β > 2. Proof: Recall we defined Pe = P(node i to be misclassified). Following the same analysis as in the proof of Lemma 2, we get: August 17, 2017 DRAFT 22 Pe ≤ ǫP n  (1−δ) X2 i=1 2D (1−δ) n2 − log(n) n n Zi + δ2 X i=1 Wi ≥ X 2D δn − log(n) n 2 X Wi + i=1 i=1 Zi  (15) Now using Lemma 17 with c = 0 instead of c = o(log(n)), then (4) can be upper bounded as: √ 1 Pe ≤ ǫn− 2 ( √ a− b)2 + n−(1+Ω(1)) (16) Following the analysis in Section IV-A, we divide the analysis into two cases: • When log(ǫ) = −β log(n), for some β > 0. In this case, we have: 1 √ Pe ≤ n− 2 ( • √ a− b)2 −β + n−(1+Ω(1)) (17) When log(ǫ) = o(log(n)). In this case, we have: 1 √ Pe ≤ n− 2 ( √ a− b)2 +o(1) + n−(1+Ω(1)) (18) Finally, using a union bound (we will do the analysis only for (17), (18) follows similarly), we get: 1 √ P(∃ a misclassified node) ≤ n1−( 2 ( √ a− b)2 +β) + n−Ω(1) (19) √ √ which shows that if 12 ( a − b)2 + β > 1, then the algorithm described above will successfully recover the communities from the observed graph and side information. V. M- ARY S IDE I NFORMATION In this section, we generalize our results to M-ary side information with finite M. More precisely, for each node i we independently observe yi ∈ {u1 , u2, · · · , um}. We denote P(yi = um |xi = 1) = α+,m and P(yi = um |xi = −1) = α−,m , for α+,m > 0, α−,m > 0 and PM PM α+,m m=1 α+,m = m=1 α−,m = 1. Moreover, we define hm = log( α−,m ). We provide necessary conditions for exact recovery for different cases for hm and propose an efficient algorithm that succeeds all the way down to the threshold. We emphasize that we studied the binary symmetric August 17, 2017 DRAFT 23 and binary erasure models before the M-ary side information because their proofs are easier to follow and also for one case (that was proved to be sufficient and necessary for the binary symmetric model) we only prove that it is sufficient for the M-ary case but we could not prove its necessity. The proofs in this section are similar to the proof of the Theorems in Sections III, IV. Thus, we only provide proof sketches below. Let T = log( ab ) and recall that F denotes the event of failure of maximum likelihood and E[·, ·] denotes the number of edges between two sets of nodes. Moreover, for a node i, we define hi = hm , if yi = um . Then, we define the following events FA = {∃i ∈ A : T (E[i, B] − E[i, A]) − hi ≥ T } FB = {∃j ∈ B : T (E[j, A] − E[j, B]) + hj ≥ T } Lemma 9: F will happen if both FA and FB happened. Proof: Define two new communities  = A\{i} ∪ {j} and B̂ = B\{j} ∪ {i}. Hence, we need to   show that log P(G, y|Â, B̂) ≥ log P(G, y|A, B) , which implies the failure of ML. Let the number of {i ∈ A : yi = um } and {i ∈ B : yi = um } be Jum (A) and Jum (B), respectively. Using the maximum likelihood rule, we have:    log P(G, y|Â, B̂) = log P(G|Â, B̂) + log P(y|Â, B̂)  =R + T E(A) + E(B) − E[i, A] + E[j, A] − E[j, B] + E[i, B] − 2Aij + M X m=1 Jum (A) log(α+,m ) + Jum (B) log(α−,m ) + M X m=1 (Jum (j) − Jum (i))hm   = log P(G, y|A, B) + T − E[i, A] + E[j, A] − E[j, B] + E[i, B] − 2Aij + hj − hi (a) (b)   ≥ log P(G, y|A, B) + 2T (1 − Aij ) ≥ log P(G, y|A, B) where R is a constant representing all terms that are independent of x, (a) holds by the assumption that FA ∩ FB happened and (b) holds because (1 − Aij ) ≥ 0. 
Hence, from the August 17, 2017 DRAFT 24 last inequality, we conclude that the ML will not coincide with the true assignment A, B. √ √ Theorem 6: If for at least one m, α+,m 6= α−,m and hm = o(log(n)), then ( a − b)2 > 2 is necessary and sufficient for exact recovery. On the other hand, if for at least one m, we have one of the following conditions:    α+,m = α−,m = −β log(n), β > 0    , log(αsgn(β1 ),m ) = o(log(n)) hm = β1 log(n), |β1 | < T (a−b) 2     ′ ′ h = β log(n), |β | < T (a−b) , log(α m 2 2 sgn(β2 ),m ) = −β2 log(n), β2 > 0 2   √ 2 √ ′ then minm ( a − b) + 2β, η(a, b, |β1|), η(a, b, |β2|) + 2β2 > 2 is necessary and sufficient for exact recovery, where η is defined in section III-A. A. Necessary Conditions In this section we provide necessary conditions for exact recovery in the binary symmetric SBM with M-ary side information. Proof: [Proof of converse part of Theorem 6] Note that unlike Sections III, IV, the side information might not be symmetric. Hence, we need to define the events of Section III-A for both communities A and B. Let H1 and H2 be subsets of A and B, respectively, with |H1 | = |H2 | =  △1i = i ∈ H1 : E[i, H1 ] ≤ n . log3 (n) We define the following events: log(n) log log(n)  log(n) ≤ T E[i, B] FiH1 = i ∈ H1 : T E[i, A\H1 ] + hi + T + T log log(n)  △1 = ∀i ∈ H1 : △1i is true  F H1 = ∪i∈H1 FiH1  △2i = i ∈ H2 : E[i, H2 ] ≤ log(n) log log(n)  log(n) ≤ T E[i, A] FiH2 = i ∈ H2 : T E[i, B\H2 ] − hi + T + T log log(n)  △2 = ∀i ∈ H2 : △2i is true August 17, 2017 DRAFT 25 F H2 =  ∪i∈H2 FiH2 Note that hi is distributed according to α+,m and α−,m if node i ∈ A, B, respectively. We will prove this part of the theorem via several lemmas. Note that Lemmas 3, 4, 5, 6 extend directly to our case here using the above definitions for both communities A and B. To complete the proof, we need to show show when P(FiH1 ) > log3 (n) n log(10) and P(FiH2 ) > log3 (n) n log(10) are true. Lemma 10: Both P(FiH1 ) and P(FiH2 ) are greater than log3 (n) n log(10) for sufficiently large n if one of the following is true:  √ 2 √   For m : α = 6 α , h = o(log(n)), if ( a − b) < 2  +,m −,m m     √  For m : log(α+,m ) = log(α−,m ) = −β log(n), β > 0, if (√a − b)2 + 2β < 2   For m : hm = β log(n), |β| < T (a−b) , log(αsgn,m ) = o(log(n)), if η(a, b, |β|) < 2   2     For m : hm = β log(n), |β| < T (a−b) , log(αsgn,m ) = −β ′ log(n), β ′ > 0, if η(a, b, |β|) + 2β ′ < 2 2 Proof: ] Let Wi ∼ Bern(p), Zi ∼ Bern(q) and define l = n 2 and Γ(t) = log(EX [etx ]) for a random variable X. Then, we have the following: P(FiH1 ) =P n X 2 i=1 M X ≥ n − 3n 2 log (n) (Zi ) − ≥ i=1 hi log(n) (Wi ) ≥ +1+ T log log(n) n 2 α+,m P m=1 (a) X M X X log(n) hm +1+ [Zi − Wi ] ≥ T log log(n) i=1 −l t∗m am −Γ(t∗m )+|t∗m |δ α+,m e m=1    (1 − o(1)) (20) 2 where (a) holds by defining δ = log 3 (n) , l am = 1l ( hTm +1+ loglog(n) )+δ, t∗m = arg supt∈R am t−Γ(t) log(n) and by using Lemma 14 in the Appendix. Similarly, we have: n  X 2 log(n) hm H2 +1+ [Zi − Wi ] ≥ − P(Fi ) ≥ α−,m P T log log(n) m=1 i=1 M X August 17, 2017 DRAFT 26 ≥ M X m=1 −l t∗m am −Γ(t∗m )+|t∗m |δ α−,m e  (1 − o(1)) (21) log(n) ) log log(n) where am = 1l (− hTm + 1 + + δ. Without loss of generality, we will focus on m = 1. Following similar analysis as before we have the following cases: • If h1 = o(log(n)) and α+,1 6= α−,1 , then t∗1 = 21 T for both (20), (21). 
Hence, by substituting in (20), (21), we have: P(FiH1 ) √ √ −0.5( a− b)2 +o(1) ≥n + M X α+,m e M X α−,m e−l −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (22) t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (23) m=2 √ P(FiH2 ) ≥ n−0.5( √ √ a− b)2 +o(1) + m=2 √ Thus, it is clear that if ( a− b)2 ≤ 2−ε for some 0 < ε < 2, then P(FiH1 ) and P(FiH2 )are ε both greater than n−1+ 2 > • log3 (n) n log(10) for sufficiently large n. If h1 = 0 and log(α+,1 ) = log(α−,1 ) = −β log(n), β > 0, then t∗1 = 21 T for both (20), (21). Hence, by substituting in (20), (21), we have: P(FiH1 ) P(FiH2 ) √ √ −0.5( a− b)2 −β+o(1) ≥n + M X −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (24) −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (25) α+,m e m=2 √ √ −0.5( a− b)2 −β+o(1) ≥n + M X m=2 α−,m e √ √ Thus, it is clear that if ( a − b)2 + 2β ≤ 2 − ε for some 0 < ε < 2, then P(FiH1 ) and ε P(FiH2 )are both greater than n−1+ 2 > • log3 (n) n log(10) for sufficiently large n. If h1 = β log(n), for 0 < β < T a−b , then t∗1 = log( γ+β ) for (20) and t∗1 = T1 log( γ−β ) 2 bT bT p for (21), where γ = β 2 + abT 2 . Hence, by substituting in (20), (21), we have: P(FiH1 ) ≥ e− log(n) August 17, 2017 0.5η(a,b,β)−  log(α+,1 ) +o(1) log(n) + M X m=2 α+,m e−l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (26) DRAFT 27 P(FiH2 ) − log(n) 0.5η(a,b,β)−β− ≥e  log(α−,1 ) +o(1) log(n) + m=2 Then, if log(α+,1 ) = o(log(n)), this implies that P(FiH1 ) P(FiH2 ) −0.5η(a,b,β)+o(1) ≥n M X + M X −l t∗m am −Γ(t∗m )+|t∗m |δ α−,m e ≥n M X + m=2 (1 − o(1)) (27) log(α−,1 ) log(n) = −β + o(1). Hence we have: −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (28) −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (29) α+,m e m=2 −0.5η(a,b,β)+o(1)  α−,m e Thus, it is clear that if η(a, b, β) ≤ 2 − ε for some 0 < ε < 2, then P(FiH1 ) and P(FiH2 )are ε both greater than n−1+ 2 > log3 (n) n log(10) for sufficiently large n. ′ On the other hand, if log(α+,1 ) = −β log(n)), this implies that ′′ ′′ log(α−,1 ) log(n) ′ ′′ = −β , for some β > 0 and β = β − β . Hence we have: P(FiH1 ) P(FiH2 ) ′ −0.5η(a,b,β)−β +o(1) ≥n + M X −l t∗m am −Γ(t∗m )+|t∗m |δ α+,m e m=2 ′′ −0.5η(a,b,β)+β−β +o(1) ≥n + M X m=2 ′ −0.5η(a,b,β)−β +o(1) =n + M X m=2  (1 − o(1)) −l t∗m am −Γ(t∗m )+|t∗m |δ α−,m e −l t∗m am −Γ(t∗m )+|t∗m |δ α−,m e   (30) (1 − o(1)) (1 − o(1)) Thus, it is clear that if η(a, b, β) + 2β ≤ 2 − ε for some 0 < ε < 2, then P(FiH1 ) and ′ ε P(FiH2 )are both greater than n−1+ 2 > • log3 (n) n log(10) for sufficiently large n. < β < 0, then following the same analysis as the last point, we If h1 = β log(n), for T b−a 2 get the following: if log(α−,1 ) = o(log(n)), this implies that P(FiH1 ) ≥ n−0.5η(a,b,|β|)+o(1) + P(FiH2 ) August 17, 2017 −0.5η(a,b,|β|)+o(1) ≥n + log(α+,1 ) log(n) M X = −β + o(1). Hence we have: t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (31) −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (32) α+,m e−l m=2 M X m=2 α−,m e DRAFT 28 Thus, it is clear that if η(a, b, |β|) ≤ 2 − ε for some 0 < ε < 2, then P(FiH1 ) and P(FiH2 )are ε both greater than n−1+ 2 > log3 (n) n log(10) for sufficiently large n. ′′ On the other hand, if log(α−,1 ) = −β log(n)), this implies that ′ ′′ log(α+,1 ) log(n) ′ ′ = −β , for some β > 0 and β = β − β . Hence we have: P(FiH1 ) P(FiH2 ) ′′ −0.5η(a,b,|β|)−β +o(1) ≥n −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (33) −l t∗m am −Γ(t∗m )+|t∗m |δ  (1 − o(1)) (34) α+,m e m=2 ′′ −0.5η(a,b,|β|)−β +o(1) ≥n + M X + M X m=2 α−,m e Thus, it is clear that if η(a, b, |β|) + 2β ≤ 2 − ε for some 0 < ε < 2, then P(FiH1 ) and ′′ ε P(FiH2 )are both greater than n−1+ 2 > log3 (n) n log(10) for sufficiently large n. B. 
Sufficient Conditions In this section we prove the achievable part of Theorem 6 by providing an efficient algorithm that achieves exact recovery down to the optimal information theoretic threshold. The algorithm is decomposed into two stages. The first stage of the algorithm is the same as Section III-B. The second stage, the local modification is however different. ′ ′ After the first stage, we have G2 , the side information y, A and B . We locally modify the ′ community assignment as follows: if a node i ∈ A , we flip its membership if the number of of ′ ′ edges between i and B is greater than or equal the number of of edges between i and A plus hi T ′ ′ and for node j ∈ B , we flip its membership if the number of of edges between j and A is ′ greater than or equal the number of of edges between j and B minus hi . T If the the number of flips in each cluster is not the same, keep the clusters unchanged. Lemma 11: There exists large enough D such that, with high probability, the algorithm described above will successfully recover the communities from the observed graph and side information if the following are satisfied simultaneously: August 17, 2017 DRAFT 29  √ √   ( a − b)2 > 2, for any m : α+,m 6= α−,m and hm = o(log(n))      √  (√a − b)2 + 2β > 2, for any m : log(α+,m ) = log(α−,m ) = −β log(n), β > 0   , log(αsgn(β),m ) = o(log(n)) η(a, b, |β|) > 2, for any m : hm = β log(n), |β| < T (a−b)   2     η(a, b, |β|) + 2β ′ > 2 for any m : hm = β log(n), |β| < T (a−b) , log(αsgn(β),m ) = −β ′ log(n), β ′ > 0 2 Proof: Recall we defined Pe = P(node i to be misclassified). Following the same analysis as in the proof of Lemma 2, we get: n n δ2  (1−δ) M X2 X 1X α+,m P Zi + Wi ≥ Pe ≤ 2 m=1 i=1 i=1 (1−δ) n 2 δn 2  X M X 1X α−,m P Zi + Wi ≥ 2 m=1 i=1 i=1 2D n (1−δ) n2 − log(n) X 2D δn − log(n) n 2 i=1 X 2D n (1−δ) n2 − log(n) 2D δn − log(n) n 2 X Wi + i=1 Wi + i=1 X i=1  hm Zi + + T hm Zi − T (35)  Using a similar lemma as Lemma 17, we can show that for any term of the M terms above: 1 • √ if hm = o(log(n)) and α+,m 6= α−,m , then this term can be upper bounded by n− 2 ( √ a− b)2 +o(1) n−(1+Ω(1)) . • if hm = 0 and log(α+,m ) = log(α−,m ) = −β log(n), β > 0, then this term can be upper 1 √ bounded by n− 2 ( • √ a− b)2 −β+o(1) + n−(1+Ω(1)) . and log(αsgn(β),m ) = o(log(n)), then this term can be upper if hm = β log(n), |β| < T (a−b) 2 1 bounded by n− 2 η(a,b,|β|)+o(1) + n−(1+Ω(1)) . • if hm = β log(n), |β| < T (a−b) and log(αsgn(β),m ) = −β log(n), β > 0, then this term can 2 ′ 1 be upper bounded by n− 2 η(a,b,|β|)−β • ′ +o(1) ′ + n−(1+Ω(1)) . < |β| and log(α−sgn(β),m ) = −β log(n), β > 0, then this term if hm = β log(n), T (a−b) 2 ′ ′ ′ can be upper bounded by n−β + n−(1+Ω(1)) . Combining all the above with a simple union bound over all nodes concludes the proof. Note ′ that the last case, which implies that β > 1 is also a needed condition for recovery, was not included in Theorem 6 as we only show that it is sufficient but we could not show that it is necessary for the M-ary case. However, we believe it is also necessary as shown, as a special case, in Lemma 12. August 17, 2017 DRAFT + 30 VI. C ONCLUSION AND O PEN P ROBLEMS In this paper, the binary symmetric stochastic block model is studied with side information. We addressed the problem of the effect of side information on the phase transition threshold for exact recovery. We first considered two popular models of side information, namely: side information observed through binary symmetric or binary erasure channels. 
We showed that, for both cases, side information does not always help. In fact even when the quality or quantity of side information improves as n → ∞, we showed that for side information to help exact recovery, we need a certain rate by which the quality or quantity improves. We also, provided an efficient algorithm which succeeds all the way down to the thresholds, for both cases, using a partial recovery algorithm combined with a local improvement procedure. Finally, we generalized our results to M-ary side information with finite M. A PPENDIX I ) and η(a, b, β) = a + b + β − 2 Tγ + Lemma 12: Recall that c = log( 1−α α p T = log( ab ), γ = β 2 + abT 2 . Also, recall from Section III-A that P(FiH ) =P n X 2 i=1 n − 3n 2 log (n) Zi − X i=1 log(n) Wi − cyi ≥ T + T log log(n) β T log( γ+β ), where γ−β  where Wi ∼ T ∗ Bern(p), Zi ∼ T ∗ Bern(q) and yi ∈ {±1} ∼ {(1 − α), α}. For sufficiently large n, P(FiH ) > log3 (n) n log(10), if one of the following is satisfied:  √ 2 √   a − b) < 2 For c = o(log(n)), if (    For c = β log(n), β > 0, for any a, b, β     For c = β log(n), β > 0, for a, b, β : β > Proof: Define l = n 2 P(FiH ) T (a−b) 2 if η(a, b, β) > 2 and β < 1 and Γ(t) = log(EX [etx ]) for a random variable X. Then, we have the following: =P n X 2 i=1 n 2 ≥P August 17, 2017 if η(a, b, β) < 2 X n − 3n 2 log (n) Zi − X i=1 log(n) Wi − cyi ≥ T + T log log(n) log(n) [Zi − Wi ] ≥ cyi + T + T log log(n) i=1   DRAFT 31 n   X log(n) 1 2 1 ) =(1 − α)P [Zi − Wi ] ≥ (c + T + T l i=1 l log log(n) n   X log(n) 1 2 1 ) + αP [Zi − Wi ] ≥ (−c + T + T l i=1 l log log(n)   (a) −l t∗2 a2 −Γ(t∗2 )+|t∗2 |δ −l t∗1 a1 −Γ(t∗1 )+|t∗1 |δ (1 − o(1)) (1 − o(1)) + αe ≥(1 − α)e 2 where (a) holds by defining δ = log 3 (n) , l (36) a1 = 1l (c + T + T loglog(n) ) + δ, a2 = 1l (−c + T + log(n) ) + δ, t∗1 = arg supt∈R ta1 − Γ(t), t∗2 = arg supt∈R ta2 − Γ(t) and by using Lemma 14. T loglog(n) log(n) Now, we calculate t∗1 and t∗2 . First, we simplify the function inside the supremum. Note that both supremums are very similar, so we will do the analysis for one of them, and the second should follow similarity.  a  a ta1 − Γ(t) = ta1 − log 1 − q(1 − ( )t ) − log 1 − p(1 − ( )−t ) b b Now, it is easy to check that the right hand side is concave in t ∈ R. Hence, taking the derivative with respect to t, we get: p( ab )−t q( ab )t + = a1 − 1 − q(1 − ( ab )t ) 1 − p(1 − ( ab )−t )   a( ab )−t b( ab )t 2c 2T 2T 2 log(n) =0 + + + + − n log(n) log(n) log log(n) log 13 (n) 1 − q(1 − ( ab )t ) 1 − p(1 − ( ab )−t ) (37) Now, we divide our analysis into two cases: • Case one: c is o(log(n)). In that case, the first four terms in (37) is o(1). This suggests that t∗ = 12 . Hence, by substituting back in (37), we get: r r a  b  1 )) − log 1 − p(1 − ( ) ta1 − Γ(t) = a1 − log 1 − q(1 − ( 2 b a q pa b p(1 − ( )) (a) 1 q(1 − ( b )) a q pa + ≤ a1 + 2 1 − q(1 − ( b )) 1 − p(1 − ( b )) a q p   a(1 − ( ab )) b(1 − ( ab )) log(n) q p = + o(1) + n 1 − q(1 − ( ab )) 1 − p(1 − ( b )) a August 17, 2017 DRAFT 32   √ 2 log(n) √ = ( a − b) + o(1) n (b) (38) p −x where (a) holds because log(1 − x) ≥ 1−x and (b) holds because both (1 − q(1 − ( ab ))) p and (1 − q(1 − ( ab ))) → 1 as n → ∞. 
Hence, substituting in one of the supremums of (36), we have: e−l t∗1 a1 −Γ(t∗1 )+|t∗1 |δ  1 √ ( a− 2 (1 − o(1)) ≥ e− log(n) √  b)2 +o(1) (1 − o(1)) Finally, following the same steps for the second supremum and substituting in (36), we have: √ P(FiH ) ≥ (1 − α)n−0.5( √ = n−0.5( √ a− b)2 +o(1) √ + αn−0.5( √ a− b)2 +o(1) √ a− b)2 +o(1) √ √ ε Thus, if ( a − b)2 ≤ 2 − ε for some 0 < ε < 2, then P(FiH ) ≥ n−1+ 2 > log3 (n) n log(10) for sufficiently large n. This proves the first case of Lemma 12. • 1 , nβ β > 0. In this case c = β log(n) + o(1). Hence, substituting in (37), p γ−β 1 ∗ ) and t = log( ), where γ = β 2 + abT 2 . Hence, this suggests that t∗1 = T1 log( γ+β 2 bT T bT Case two: α = by substituting back in (37) and following the same ideas as in (38), we get:  a ∗ a ∗ log(n) 2βt∗ + b(1 − ( )t ) + a(1 − ( )−t ) + o(1) n b b  γ+β γ+β abT log(n) 2β log( )+a+b− − + o(1) = n T bT T γ+β  log(n) 2γ β γ+β = a+b+β− + log( ) + o(1) n T T γ−β log(n) (η(a, b, β) + o(1)) = n ta1 − Γ(t) ≤ (39) Hence, substituting in one of the supremums of (36), we have: −l t∗1 a1 −Γ(t∗1 )+|t∗1 |δ e  − (1 − o(1)) ≥ e log(n) 2  η(a,b,β)+o(1) (1 − o(1)) Finally, following the same steps for the second supremum and substituting in (36), we have: August 17, 2017 DRAFT 33 P(FiH ) ≥ (1 − α)n−0.5η(a,b,β)+o(1) + αn−0.5η(a,b,β)+β+o(1) = n−0.5η(a,b,β)+o(1) (2 − α) ǫ Thus, if η(a, b, β) ≤ 2 − ε for some 0 < ε < 2, then P(FiH ) ≥ n−1+ 2 > log3 (n) n log(10) for sufficiently large n. This proves the second case of Lemma 12. For the last case of Lemma 12, we begin as in (36) but take a different approach: P(FiH ) ≥P n X 2 log(n) [Zi − Wi ] ≥ cyi + T + T log log(n) i=1 =(1 − α)(1 − P + α(1 − P n X 2  log(n) ) [Zi − Wi ] ≤ c + T + T log log(n) i=1 n X 2 (a)  log(n) ) ) [Zi − Wi ] ≤ (−c + T + T log log(n) i=1   Pn log(n) −t( 2 [Z −W ]) ≥1 − (1 − α)e−n supt>0 −n supt>0 − αe  −t n −t n 1 log E(e c+T +T log log(n) − n  log(n) 1 log E(e−t( −c+T +T log −n log(n) i i=1 Pn 2 [Z −W ]) i ) i=1 i  i ) (40) where (a) holds by Chernoff bound. Note that unlike the previous cases, here the supremum is only on t > 0. Again, we will now focus on calculating one of the supremums in (40). By direct computation of the logarithmic term we get:  P n2  (a) n  p −t  n p t  log 1 − q(1 − ) + log 1 − p(1 − ) log E e−t i=1 [Zi−Wj ] = 2 q 2 q (b) n p −t n p t ≤− q(1 − )− p(1 − ) 2 q 2 q where (a) follows from the fact that Wi , Zi are independent random variables ∀i, and (b) holds because log(1 − x) ≤ −x. Substituting in one of the supremums: August 17, 2017 DRAFT 34   Pn  −t log(n) 1 2 sup c+T +T − log E(e−t( i=1 [Zi −Wi ]) ) log log(n) n t>0 n log(n) a 1 a  ≥ sup −t(β + o(1)) + a + b − b( )−t − a( )t n t>0 2 b b (41) Again, by concavity of the last equation in t, we calculate the first derivative to get: −β − aT a t bT a −t ( ) + ( ) =0 2 b 2 b (42) Hence, following the same analysis as before, we can show that t∗ for the first and second supremums can be calculated as: 1 T log( γ−β ) and aT 1 T T (b−a) 2 t to be greater than zero. Thus, we need β < log( γ+β ), respectively. Since we need aT for the first supremum, which can not be true, since β is positive and b < a. Hence, by the concavity of the function and the fact that it approaches −∞ as t → ∞, the optimal t for the first supremum is t∗1 = 0. On the other hand, for the second supremum, we need β > Thus, assuming β > T (a−b) , 2 T (a−b) 2 for t to be positive. 
substituting in (40), we get: 1 P(FiH ) ≥ 1 − (1 − α)e0 − αn− 2 η(a,b,β)+β (a) 1 = n−β − n− 2 η(a,b,β) where (b) holds by using the fact that α = n−β . Hence, if β ≤ 1 − ε1 and 12 η ≥ 1 + ε2 , then P(FiH ) ≥ n−1 (nε1 − n−ε2 ) > log3 (n) n log(10) for sufficiently large n. This proves the third and last case of Lemma 12. Lemma 13: Recall from Section IV-A that P(FiH ) = ǫP n X 2 i=1 n − 3n 2 log (n) Zi − X i=1 log(n) Wi ≥ 1 + log log(n)  where Wi ∼ Bern(p), Zi ∼ Bern(q) and ǫ ∈ (0, 1). For sufficiently large n, P(FiH ) > August 17, 2017 log3 (n) n log(10), if one of the following is satisfied: DRAFT 35 √ √ if ( a − b)2 < 2 √  For log(ǫ) = −β log(n), β > 0, if (√a − b)2 + 2β < 2   For log(ǫ) = o(log(n)), Proof: Define l = n 2 and let Γ(t) = log(EX [etx ]) for a random variable X. Then, we have the following: P(FiH ) = ǫP n X 2 i=1 n − 3n 2 log (n) (Zi ) − X i=1 log(n) (Wi ) ≥ 1 + log log(n) n X 2 log(n) [Zi − Wi ] ≥ 1 + ≥ ǫP log log(n) i=1  (a) ∗ ∗ ∗ ≥ ǫe−l t a−Γ(t1 )+|t1 |δ (1 − o(1))  −l t∗ a−Γ(t∗1 )+|t∗1 |δ +log(ǫ) =e (1 − o(1)) 2 where (a) holds by defining δ = log 3 (n) , l a = 1l (1 + log(n) ) log log(n)   (43) + δ, t∗1 = arg supt∈R at − Γ(t) and by using Lemma 14 in the Appendix. Now, we calculate t∗ . First, we simplify the function inside the supremum.   ta − Γ(t) = ta − log 1 − q(1 − et ) − log 1 − p(1 − e−t ) (44) By concavity of the above function in t and by taking the derivative with respect to t and equating to zero, we get: pe−t qet + = 1 − q(1 − et ) 1 − p(1 − e−t )   log(n) 2 2 2 ae−t bet =0 + + + − n log(n) log log(n) log 13 (n) 1 − q(1 − et ) 1 − p(1 − e−t ) a− (45) Now, since the first three terms in (45) is o(1). Thus, this suggests that t∗ = 12 T and T = log( ab ). Hence, by substituting back in (44), we get: 1 ta − Γ(t) = aT − log 1 − q(1 − ( 2 August 17, 2017 r a  )) − log 1 − p(1 − ( b r b  ) a DRAFT 36 q b )) q(1 − ( b )) 1 a q pa + ≤ aT + 2 1 − q(1 − ( b )) 1 − p(1 − ( b )) a p(1 − ( pa (a) √  log(n) √ ( a − b)2 + o(1) n = where (a) holds because log(1 − x) ≥ −x . 1−x (46) Hence, substituting in the supremum of (43), we have: −l t∗ a−Γ(t∗ )+|t∗ |δ e  − log(n) (1 − o(1)) ≥ e 1 √ ( a− 2  √ 2 b) +o(1) (1 − o(1)) (47) Finally, substituting in (43), we have: √ P(FiH ) ≥ ǫn−0.5( √ a− b)2 +o(1) (48) √ √ Thus, if log(ǫ) = o(log(n)), then, it is clear that if ( a − b)2 ≤ 2 − ε for some 0 < ε < 2, ε then P(FiH ) ≥ n−1+ 2 > log3 (n) n log(10) for sufficiently large n. This proves the first case of Lemma 13. √ √ On the other hand, if log(ǫ) = −β log(n), for some β > 0, then, it is clear that if ( a − ε b)2 + 2β ≤ 2 − ε for some 0 < ε < 2, then P(FiH ) ≥ n−1+ 2 > log3 (n) n log(10) for sufficiently large n. This proves the second and last case of Lemma 13. Lemma 14: Let X1 , · · · , Xn be a sequence of i.i.d random variables. Define Γ(t) = log(E[etX ]). Then, for any a, ǫ ∈ R: n  1X P Xi ≥ a − ǫ ≥ e−n n i=1 t∗ a−Γ(t∗ )+|t∗ |ǫ  2  σX̂ 1− 2 nǫ where t∗ = arg supt∈R ta − Γ(t), X̂ is a random variable with the same alphabet as X but ∗ distributed according to et x P(x) EX [et∗ x ] 2 are the mean and variance of X̂, respectively. 
and µX̂ , σX̂ Proof: n n   1X 1X Xi ≥ a − ǫ ≥ P a − ǫ ≤ Xi ≤ a + ǫ P n i=1 n i=1 Z = P(x1 ) · · · P(xn )dx1 · · · dxn P 1 a−ǫ≤ n August 17, 2017 n i=1 Xi ≤a+ǫ DRAFT 37 Pn Pn (EX [et i=1 xi ])(et i=1 xi ) P Pn P(x1 ) · · · P(xn )dx1 · · · dxn = t n i=1 xi ])(et i=1 xi ) 1 Pn (E [e X X ≤a+ǫ a−ǫ≤ n i=1 i  Z n  txi Y (b) e P(xi ) −n(ta−Γ(t)+|t|ǫ) dxi ≥e tx ] 1 Pn E [e X a−ǫ≤ n X ≤a+ǫ i i=1 i=1   n X 1 (c) −n(ta−Γ(t)+|t|ǫ) =e PX̂n a − ǫ ≤ X̂i ≤ a + ǫ n i=1   2 (d) + (nµX̂ − na)2 nσX̂ −n(ta−Γ(t)+|t|ǫ) (49) 1− ≥e n2 ǫ2 (a) Z where (a) holds for any t ∈ R such that the expectation holds, (b) holds because a − ǫ ≤ Pn etx P(x) 1 X̂ ≤ a + ǫ, (c) holds because defines a probability distribution [38] on a new i i=1 n EX [etx ] random variable X̂ with the same alphabet as X, and (d) holds by Chebyshev’s inequality and 2 to be the mean and variance of X̂, respectively. defining µX̂ , σX̂ Now, choose t = t∗ = arg supt∈R ta − Γ(t). Since this function is concave in t ∈ R [38], then by setting the first derivative to zero, we have a = we can show that µX̂ = [xetx ] EX E[etx ] ∗ EX [xet x ] . E[et∗ x ] Also, by direct computation of µX̂ , . This means that at t = t∗ , we have µX̂ = a. Thus, substituting back in (49), we get:   X n σ2  1 ∗ ∗ ∗ Xi ≥ a − ǫ ≥ e−n(t a−Γ(t )+|t |ǫ) 1 − X̂2 P n i=1 nǫ 2 Now, in our model ǫ = log 3 (n) n and X = Z − W , where Z ∼ T*Bern(q) and W ∼ T*Bern(p), 2 is in the order of where T = log( ab ). Hence, we can easily show that σX̂ log(n) , n and hence, we have:  X  n  1 P Xi ≥ a − ǫ ≥ e−n(ta−Γ(t)+|t|ǫ) 1 − o(1) n i=1 which concludes our proof. p Lemma 15: Define η(a, b, β) = a + b + β − 2 Tγ + β T log( γ+β ), where T = log( ab ), γ = γ−β β 2 + abT 2 and a, b, β > 0 with a > b. Then, β > 1 implies η > 2. Proof: Let a + b − β − 2 Tγ + August 17, 2017 β T log( γ+β ) = ψ(a, b, β). Then, from the definition of η, we have: γ−β DRAFT 38 η(a, b, β) − 2β = ψ(a, b, β) (50) ∗ ∗ +β Note that ψ(a, b, β) is convex in β. Hence, at the optimal β ∗ , we can show that log( γγ ∗ −β ∗) = T. Using this fact and by substituting in (50), we have: η(a, b, β) − 2β ≥ a + b − 2 Now by the definition of γ, we have γ + β = a b = abT 2 , (γ ∗ −β ∗ )2 abT 2 γ−β γ∗ T and using the fact that (51) γ ∗ +β ∗ γ ∗ −β ∗ = ab , we have which implies that γ ∗ = bT + β ∗ . Hence, by substituting in (51), we get: β∗ η(a, b, β) − 2β ≥ a − b − 2 T Also, we can show that at β ∗ , γ ∗ = β ∗ ( a+b ). This implies that β ∗ = a−b (52) T (a−b) . 2 Substituting in (52), we get: η(a, b, β) − 2β ≥ 0, which implies that η > 2 when β > 1. (k) Lemma 16: Define Pn := P Zi ∼ Bern(q). Then, Pm i=1 (Zi  − Wi ) ≥ 0 , where m = 2k( n2 − k), Wi ∼ Bern(p), Pn(k) ≤ e−2m √ log(n) 1 √ ( 2 ( a− b)2 ) n Proof: Pn(k) =P X m i=1 (Zi − Wi ) ≥ 0 (a) 1 ≤ e−m supt>0 − m log(E[e (b) ≤ e−m (c) log(n) n ≤ e−2m t  Pm i=1 (Zi −Wi ) ]) supt>0 a+b−bet −ae−t √ log(n) 1 √ ( 2 ( a− b)2 ) n (53) where (a) holds by Chernoff bound, (b) holds because Zi and Wi are both i.i.d ∀i and (c) holds by evaluating the supremum to get t∗ = 21 T and using the fact that log(1 − x) ≤ −x. August 17, 2017 DRAFT 39 Lemma 17: Let Wi ∼ Bern(p), Zi ∼ Bern(q), yi ∈ {1, −1} with probabilities (1 − α), α, 2D Pδ n2 − log(n) P(1−δ) n Pδ n2 P(1−δ) n − 2D n n Zi + Tc yi ). respectively. 
Define Pe = P( i=1 2 Zi + i=1 Wi ≥ i=1 2 log(n) Wi + i=1 Then,  √ √  − 21 ( a− b)2 +o(1)  If c = o(log(n)), P ≤ n + n−(1+Ω(1)) e    1 , Pe ≤ (2 − α)n− 2 η(a,b,β)+o(1) + n−(1+Ω(1)) If c = β log(n) : 0 < β < T (a−b) 2     1 If c = β log(n) : β > T (a−b) , Pe ≤ (1 − α)n− 2 η(a,b,β)+o(1) + n−β + n−(1+Ω(1)) 2 Proof: By upper bounding Pe , we get: Pe ≤P ≤P ≤P (a) ≤P P n  (1−δ) X2 Zi + i=1 n X 2 i=1 X Wi ≥ i=1 2D (1−δ) n2 − log(n) n n i=1 δ2 X n 2 n 2 2D δn+ log(n) n i=1 X X X Zi + i=1 Zi − Wi ≥ Wi + i=1 X i=1 i=1 c Wi + yi T c Wi + yi T c Wi ≥ yi T    n X 2  c (Zi − Wi ) ≥ yi − ψδ log(n) + T i=1 2D n  δn+X log(n) i=1 =(1 − α)P αP P 2D (1−δ) n − log(n) n 2 n δ2 X Wi ≥ ψδ log(n)  n X 2  c (Zi − Wi ) ≥ − ψδ log(n) + T i=1 n X 2  c (Zi − Wi ) ≥ − − ψδ log(n) + T i=1 2D n  δn+X log(n) i=1 Wi ≥ ψδ log(n) where (a) holds by defining ψ = √ 1 δ log( δ1 )  (54) . Now we bound the second term. A multiplicative Chernoff bound that states that for a sequence P of n i.i.d random variables Xi : P( ni=1 ≥ tµ) ≤ ( et )−tµ , where µ = nE[X]. Applying this bound August 17, 2017 DRAFT 40 to the second term with µ = a(δ log(n) + 2D) and t = P 2D n  δn+X log(n) i=1 Wi ≥ ψδ log(n)  ψδ log(n) , a(δ log(n)+2D) we get: −ψδ log(n) ψδ log(n) ≤ ae(δ log(n) + 2D)  − √log(n) ψ log( 1 ) δ = 2D ae(1 + δ log(n) )   log(1+ 2D ) log log( 1 ) log(δ)+ 1 1+log(a) δ log(n) 2 δ √ 1 log(n) √ + √ + log( 1 ) log( 1 ) log( ) δ δ δ =e   √ 1 log(1+ 2D ) δ log(n)  (a) − =n log( δ ) 1− log( 1 ) δ +o(1) (55) where (a) holds because δ → 0 as D → ∞. Note that we can find D large enough such that 2D log(1+ δ log(n) ) log( δ1 ) < 1. Hence, we get: P 2D n  δn+X log(n) i=1 Wi ≥ ψδ log(n)  ≤ n−(1+Ω(1)) (56) Now for the first term in (54), we use Chernoff bound as follows: n X 2 n  X  2 c c (Zi − Wi ) ≥ − ψδ log(n) + αP (Zi − Wi ) ≥ − − ψδ log(n) (1 − α)P T T i=1 i=1   2t 2t −n supt1 >0 n1 Tc −ψδ log(n) −log(E[et1 Z−t1 W ]) −n supt2 >0 n2 − Tc −ψδ log(n) −log(E[et2 Z−t2 W ]) 2 2 ≤ (1 − α)e + αe (a) ≤ (1 − α)e− log(n) 2 c supt1 >0 2t( T log(n) −ψδ)+a+b−bet1 −ae−t1 + αe− log(n) 2 c supt2 >0 2t2 (− T log(n) −ψδ)+a+b−bet2 −ae−t2 (57) where (a) holds because log(1 − x) ≤ −x. Now, we calculate t∗1 and t∗2 . Note that ψδ → 0 as D → ∞. Hence, we will replace ψδ by o(1) for sufficiently large D. Now we divide our analysis into three cases: • If c = o(log(n)), this suggests that t∗1 = t∗2 = 21 T . Hence, substituting in (57), we get: (1 − α)P August 17, 2017 n  X  2 c c (Zi − Wi ) ≥ − ψδ log(n) + αP (Zi − Wi ) ≥ − − ψδ log(n) T T i=1 i=1 n X 2 DRAFT 41 1 √ ≤ n− 2 ( • √ a− b)2 +o(1) (58) , then we can show that that t∗1 = log( γ+β ) and If c = β log(n), for 0 < β < T (a−b) 2 bT p t∗2 = log( γ−β ), where γ = β 2 + abT 2 . Hence, substituting in (57), we get: bT (1 − α)P n X 2 n  X  2 c c (Zi − Wi ) ≥ − ψδ log(n) + αP (Zi − Wi ) ≥ − − ψδ log(n) T T i=1 i=1 1 ≤ (2 − α)n− 2 η(a,b,β)+o(1) • If c = β log(n), for β > (59) T (a−b) , 2 then we can show that that t∗1 = log( γ+β ) and t∗2 = 0. bT Hence, substituting in (57), we get: (1 − α)P n  X  2 c c (Zi − Wi ) ≥ − ψδ log(n) + αP (Zi − Wi ) ≥ − − ψδ log(n) T T i=1 i=1 n X 2 1 ≤ (1 − α)n− 2 η(a,b,β)+o(1) + n−β (60) Using the last three displayed equations and (56) and substituting in (54) concludes the proof of the lemma. R EFERENCES [1] P. Holland, K. Laskey, and S. Leinhardt, “Stochastic blockmodels: First steps,” Social Networks, vol. 5, no. 2, pp. 109–137, June 1983. [2] A. Zhang and H. Zhou, “Minimax rates of community detection in stochastic block models,” arXiv:1507.05313, Nov. 2015. [3] P. J. 
Bickel and A. Chen, “A nonparametric view of network models and Newman–Girvan and other modularities,” Proceedings of the National Academy of Sciences, vol. 106, no. 50, pp. 21068–21073, 2009.
[4] T. T. Cai and X. Li, “Robust and computationally feasible community detection in the presence of arbitrary outlier nodes,” arXiv:1404.6000v4, June 2015.
[5] T. A. B. Snijders and K. Nowicki, “Estimation and prediction for stochastic blockmodels for graphs with latent block structure,” Journal of Classification, vol. 14, pp. 75–100, 1997.
[6] S. Jafar, “Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices,” in Proceedings of ICML, Feb. 2014.
[7] A. Coja-Oghlan, “Graph partitioning via adaptive spectral techniques,” Comb. Probab. Comput., vol. 19, no. 2, pp. 227–284, Mar. 2010.
[8] A. Coja-Oghlan, “A spectral heuristic for bisecting random graphs,” in Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ser. SODA ’05. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2005, pp. 850–859. [Online]. Available: http://dl.acm.org/citation.cfm?id=1070432.1070552
[9] X. L. T. Tony Cai, “A tensor approach to learning mixed membership community models,” arXiv:1302.2684v4, Oct. 2013.
[10] Y. Chen, S. Sanghavi, and H. Xu, “Improved graph clustering,” IEEE Trans. Inf. Theory, vol. 60, no. 10, pp. 6440–6455, Oct. 2014.
[11] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová, “Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications,” Phys. Rev. E, vol. 84, p. 066106, Dec. 2011.
[12] P. Zhang, F. Krzakala, J. Reichardt, and L. Zdeborová, “Comparative study for inference of hidden classes in stochastic block models,” arXiv:1207.2328v2, July 2012.
[13] M. Girvan and M. E. J. Newman, “Community structure in social and biological networks,” Proceedings of the National Academy of Sciences, vol. 99, no. 12, pp. 7821–7826, 2002.
[14] J. Xu, R. Wu, K. Zhu, B. Hajek, R. Srikant, and L. Ying, “Jointly clustering rows and columns of binary matrices: Algorithms and trade-offs,” arXiv:1310.0512, Feb. 2014.
[15] J. Chen and B. Yuan, “Detecting functional modules in the yeast protein–protein interaction network,” Bioinformatics, vol. 22, no. 18, pp. 2283–2290, Sept. 2006.
[16] A. Lancichinetti and S. Fortunato, “Community detection algorithms: A comparative analysis,” Phys. Rev. E, vol. 80, p. 056117, Nov. 2009.
[17] S. Fortunato, “Community detection in graphs,” arXiv:0906.0612v2, Jan. 2010.
[18] E. Abbe, A. Bandeira, and G. Hall, “Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms,” arXiv:1503.00609, Mar. 2015.
[19] M. Newman and A. Clauset, “Structure and inference in annotated networks,” arXiv:1507.04001v1, 2015.
[20] J. Yang, J. McAuley, and J. Leskovec, “Community detection in networks with node attributes,” in IEEE International Conference on Data Mining, Dec. 2013, pp. 1151–1156.
[21] Y. Zhang, E. Levina, and J. Zhu, “Community detection in networks with node features,” arXiv:1509.01173v1, 2015.
[22] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová, “Inference and phase transitions in the detection of modules in sparse networks,” Phys. Rev. Lett., vol. 107, p. 065701, Aug. 2011. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevLett.107.065701
[23] E. Mossel, J. Neeman, and A. Sly, “Reconstruction and estimation in the planted partition model,” Probability Theory and Related Fields, vol. 162, no. 3, pp. 431–461, 2014.
[24] L. Massoulie, “Community detection thresholds and the weak Ramanujan property,” arXiv:1311.3085v1, Nov. 2013.
[25] E. Mossel, J. Neeman, and A. Sly, “A proof of the block model threshold conjecture,” arXiv:1311.4115v4, Aug. 2015.
[26] E. Abbe and C. Sandon, “Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap,” arXiv:1512.09080v4, Sep. 2016.
[27] S. Yun and A. Proutiere, “Community detection via random and adaptive sampling,” arXiv:1402.3072v2, Feb. 2014.
[28] E. Mossel and J. Xu, “Density evolution in the degree-correlated stochastic block model,” in 29th Annual Conference on Learning Theory, ser. Proceedings of Machine Learning Research, V. Feldman, A. Rakhlin, and O. Shamir, Eds., vol. 49. Columbia University, New York, New York, USA: PMLR, 23–26 Jun 2016, pp. 1319–1356. [Online]. Available: http://proceedings.mlr.press/v49/mossel16.html
[29] H. Saad, A. Abotabl, and A. Nosratinia, “EXIT analysis for belief propagation in degree-correlated stochastic block models,” in IEEE International Symposium on Information Theory (ISIT), July 2016, pp. 775–779.
[30] E. Abbe, A. Bandeira, and G. Hall, “Exact recovery in the stochastic block model,” IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 471–487, Jan. 2016.
[31] E. Mossel, J. Neeman, and A. Sly, “Consistency thresholds for the planted bisection model,” arXiv:1407.1591v4, Mar. 2016.
[32] E. Mossel and J. Xu, “Local algorithms for block models with side information,” in ACM Conference on Innovations in Theoretical Computer Science, ser. ITCS ’16. New York, NY, USA: ACM, 2016, pp. 71–80. [Online]. Available: http://doi.acm.org/10.1145/2840728.2840749
[33] T. Tony Cai, T. Liang, and A. Rakhlin, “Inference via message passing on partially labeled stochastic block models,” arXiv:1603.06923v1, Mar. 2016.
[34] A. Kadavankandy, K. Avrachenkov, L. Cottatellucci, and R. Sundaresan, “The power of side-information in subgraph detection,” arXiv:1611.04847v3, Mar. 2017.
[35] V. Kanade, E. Mossel, and T. Schramm, “Global and local information in clustering labeled block models,” arXiv:1404.6325v4, July 2014.
[36] F. Caltagirone, M. Lelarge, and L. Miolane, “Recovering asymmetric communities in the stochastic block model,” arXiv:1610.03680v2, Mar. 2017.
[37] H. Saad, A. Abotabl, and A. Nosratinia, “Exact recovery in the binary stochastic block model with binary side information,” in Allerton Conference on Communications, Control, and Computing, Oct. 2017.
[38] A. Dembo and O. Zeitouni, Large deviations techniques and applications. Berlin; New York: Springer-Verlag Inc, 2010.
Journal of Machine Learning Research 1 (2000) 1-48 Submitted 4/00; Published 10/00 arXiv:1508.04905v2 [] 12 Oct 2017 Theoretical analysis of cross-validation for estimating the risk of the k-Nearest Neighbor classifier Alain Celisse Laboratoire de Mathématiques Modal Project-Team UMR 8524 CNRS-Université Lille 1 F-59 655 Villeneuve d’Ascq Cedex, France [email protected] Tristan Mary-Huard INRA, UMR 0320 / UMR 8120 Génétique Végétale et Évolution Le Moulon, F-91190 Gif-sur-Yvette, France UMR AgroParisTech INRA MIA 518, Paris, France 16 rue Claude Bernard F-75 231 Paris cedex 05, France [email protected] Editor: TBW Abstract The present work aims at deriving theoretical guaranties on the behavior of some crossvalidation procedures applied to the k-nearest neighbors (kNN) rule in the context of binary classification. Here we focus on the leave-p-out cross-validation (LpO) used to assess the performance of the kNN classifier. Remarkably this LpO estimator can be efficiently computed in this context using closed-form formulas derived by Celisse and Mary-Huard (2011). We describe a general strategy to derive moment and exponential concentration inequalities for the LpO estimator applied to the kNN classifier. Such results are obtained first by exploiting the connection between the LpO estimator and U-statistics, and second by making an intensive use of the generalized Efron-Stein inequality applied to the L1O estimator. One other important contribution is made by deriving new quantifications of the discrepancy between the LpO estimator and the classification error/risk of the kNN classifier. The optimality of these bounds is discussed by means of several lower bounds as well as simulation experiments. Keywords: Classification, Cross-validation, Risk estimation 1. Introduction The k-nearest neighbor (kNN) algorithm (Fix and Hodges, 1951) in binary classification is a popular prediction algorithm based on the idea that the predicted value at a new point is based on a majority vote from the k nearest labeled neighbors of this point. Although quite c 2000 A. Celisse and T. Mary-Huard. Celisse and Mary-Huard simple, the kNN classifier has been successfully applied to many difficult classification tasks (Li et al., 2004; Simard et al., 1998; Scheirer and Slaney, 2003). Efficient implementations have been also developed to allow dealing with large datasets (Indyk and Motwani, 1998; Andoni and Indyk, 2006). The theoretical performances of the kNN classifier have been already extensively investigated. In the context of binary classification preliminary theoretical results date back to Cover and Hart (1967); Cover (1968); Györfi (1981). More recently, Psaltis et al. (1994); Kulkarni and Posner (1995) derived an asymptotic equivalent to the performance of the 1NN classification rule, further extended to kNN by Snapp and Venkatesh (1998). Hall et al. (2008) also derived asymptotic expansions of the risk of the kNN classifier assuming either a Poisson or a binomial model for the training points, which relates this risk to the parameter k. By contrast to the aforementioned results, the work by Chaudhuri and Dasgupta (2014) focuses on the finite sample framework. They typically provide upper bounds with high probability on the risk of the kNN classifier where the bounds are not distribution-free. Alternatively in the regression setting, Kulkarni and Posner (1995) provide a finite-sample bound on the performance of 1NN that has been further generalized to the kNN rule (k ≥ 1) by Biau et al. 
(2010a), where a bagged version of the kNN rule is also analyzed and then applied to functional data Biau et al. (2010b). We refer interested readers to Biau and Devroye (2016) for an almost thorough presentation of known results on the kNN algorithm in various contexts. In numerous (if not all) practical applications, computing the cross-validation (CV) estimator (Stone, 1974, 1982) has been among the most popular strategies to evaluate the performance of the kNN classifier (Devroye et al., 1996, Section 24.3). All CV procedures share a common principle which consists in splitting a sample of n points into two disjoint subsets called training and test sets with respective cardinalities n − p and p, for any 1 ≤ p ≤ n−1. The n−p training set data serve to compute a classifier, while its performance is evaluated from the p left out data of the test set. For a complete and comprehensive review on cross-validation procedures, we refer the interested reader to Arlot and Celisse (2010). In the present work, we focus on the leave-p-out (LpO) cross-validation. Among CV procedures, it belongs to exhaustive strategies since it considers (and averages over) all the  n possible such splittings of {1, . . . , n} into training and test sets. Usually the induced p computation time of the LpO is prohibitive, which gives rise to its surrogate called V −fold cross-validation (V-FCV) with V ≈ n/p (Geisser, 1975). However, Steele (2009); Celisse and Mary-Huard (2011) recently derived closed-form formulas respectively for the bootstrap and the LpO procedures applied to the kNN classification rule. Such formulas allow one to efficiently compute the LpO estimator. Moreover since the V-FCV estimator suffers a larger variance than the LpO one (Celisse and Robin, 2008; Arlot and Celisse, 2010), LpO (with p = bn/V c) strictly improves upon V-FCV in the present context. Although being favored in practice for assessing the risk of the kNN classifier, the use of CV comes with very few theoretical guarantees regarding its performance. Moreover probably for technical reasons, most existing results apply to Hold-out and leave-one-out (L1O), that is LpO with p = 1 (Kearns and Ron, 1999). In this paper we rather consider the general LpO procedure (for 1 ≤ p ≤ n − 1) used to estimate the risk (alternatively the classification error rate) of the kNN classifier. Our main purpose is then to provide 2 Performance of CV to estimate the risk of kNN distribution-free theoretical guarantees on the behavior of LpO with respect to influential parameters such as p, n, and k. For instance we aim at answering questions such as: “Does it exist any regime of p = p(n) (with p(n) some function of n) where the LpO estimator is a consistent estimate of the risk of the kNN classifier?”, or “Is it possible to describe the convergence rate of the LpO estimator with respect to p/n?” Contributions. The main contribution of the present work is two-fold: (i) we describe a new general strategy to derive moment and exponential concentration inequalities for the LpO estimator applied to the kNN binary classifier, and (ii) these inequalities serve to derive the convergence rate of the LpO estimator towards the risk of the kNN classifier. This new strategy relies on several steps. 
First exploiting the connection between the LpO estimator and U-statistics (Koroljuk and Borovskich, 1994) and the Rosenthal inequality (Ibragimov and Sharakhmetov, 2002), we prove that upper bounding the polynomial moments of the centered LpO estimator reduces to deriving such bounds for the simpler L1O estimator. Second, we derive new upper bounds on the moments of the L1O estimator using the generalized Efron-Stein inequality (Boucheron et al., 2005, 2013, Theorem 15.5). Third, combining the two previous steps provides some insight on the interplay between p/n and k in the concentration rates measured in terms of moments. This finally results in new exponential concentration inequalities for the LpO estimator applying whatever the value of the ratio p/n ∈ (0, 1). In particular while the upper bounds increase with 1 ≤ p ≤ n/2 + 1, it is no longer the case if p > n/2 + 1. We also provide several lower bounds suggesting our upper bounds cannot be improved in some sense in a distribution-free setting. The remainder of the paper is organized as follows. The connection between the LpO estimator and U -statistics is clarified in Section 2, where we also recall the closed-form formula of the LpO estimator (Celisse and Mary-Huard, 2011) applied to the kNN classifier. Order-q moments (q ≥ 2) of the LpO estimator are then upper bounded in terms of those of the L1O estimator. This step can be applied to any classification algorithm. Section 3 then specifies the previous upper bounds in the case of the kNN classifier, which leads to the main Theorem 3.2 characterizing the concentration behavior of the LpO estimator with respect to p, n, and k in terms of polynomial moments. Deriving exponential concentration inequalities for the LpO estimator is the main concern of Section 4 where we highlight the strength of our strategy by comparing our main inequalities with concentration inequalities derived with less sophisticated tools. Finally Section 5 exploits the previous results to bound the gap between the LpO estimator and the classification error of the kNN classifier. The optimality of these upper bounds is first proved in our distribution-free framework by establishing several new lower bounds matching the upper ones in some specific settings. Second, empirical experiments are also reported which support the above conclusions. 2. U -statistics and LpO estimator 2.1 Statistical framework Classification We tackle the binary classification problem where the goal is to predict the unknown label Y ∈ {0, 1} of an observation X ∈ X ⊂ Rd . The random variable (X, Y ) has an unknown joint distribution P(X,Y ) defined by P(X,Y ) (B) = P [ (X, Y ) ∈ B ] for any Borelian set B ∈ X × {0, 1}, where P denotes a probability distribution. In what 3 Celisse and Mary-Huard follows no particular distributional assumption is made regarding X. To predict the label, one aims at building a classifier fˆ : X → {0, 1} on the basis of a set of random variables Dn = {Z1 , . . . , Zn } called the training sample, where Zi = (Xi , Yi ), 1 ≤ i ≤ n represent n copies of (X, Y ) drawn independently from P(X,Y ) . In settings where no confusion is possible, we will replace Dn by D. Any strategy to build such a classifier is called a classification algorithm or classification rule, and can be formally defined as a function A : ∪n≥1 {X × {0, 1}}n → F that maps a training sample Dn onto the corresponding classifier ADn (·) = fˆ ∈ F, where F is the set of all measurable functions from X to {0, 1}. 
Numerous classification rules have been considered in the literature and it is out of the scope of the present paper to review all of them (see Devroye et al. (1996) for many instances). Here we focus on the k-nearest neighbor rule (kNN) initially proposed by Fix and Hodges (1951) and further studied for instance by Devroye and Wagner (1977); Rogers and Wagner (1978). The kNN algorithm For 1 ≤ k ≤ n, the kNN rule, denoted by Ak , consists in classifying any new observation x using a majority vote decision rule based on the label of the k points X(1) (x), . . . , X(k) (x) closest to x among the training sample X1 , . . . , Xn . In what follows these k nearest neighbors are chosen according to the distance associated with the usual Euclidean norm in Rd . Note that other adaptive metrics have been also considered in the literature (see for instance Hastie et al., 2001, Chap. 14 ). But such examples are out of the scope of the present work that is, our reference distance does not depend on the training sample at hand. Let us also emphasize that possible ties are broken by using the smallest index among ties, which is one possible choice for the Stone lemma to hold true (Biau and Devroye, 2016, Lemma 10.6,p.125).  Formally, given Vk (x) = 1 ≤ i ≤ n, Xi ∈ X(1) (x), . . . , X(k) (x) the set of indices of the k nearest neighbors of x among X1 , . . . , Xn , the kNN classification rule is defined by  P P  1 , if k1 i∈Vk (x) Yi = k1 ki=1 Y(i) (x) > 0.5  P Ak (Dn ; x) = fbk (Dn ; x) := , (2.1) 0 , if k1 ki=1 Y(i) (x) < 0.5   B(0.5) , otherwise where Y(i) (x) is the label of the i-th nearest neighbor of x for 1 ≤ i ≤ k, and B(0.5) denotes a Bernoulli random variable with parameter 1/2. Leave-p-out cross-validation For a given sample Dn , the performance of any classifier fˆ = ADn (·) (respectively of any classification algorithm Å) is assessed by the classification error L(fˆ) (respectively the risk R(fˆ)) defined by   h  i L(fˆ) = P fˆ(X) 6= Y | Dn , and R(fˆ) = E P fˆ(X) 6= Y | Dn . In this paper we focus on the estimation of L(fˆ) (and its expectation R(fˆ)) by use of the Leave-p-Out (LpO) cross-validation for 1 ≤ p ≤ n − 1 (Zhang, 1993; Celisse and Robin, 2008). LpO successively considers all possible splits of Dn into a training set of cardinality n − p and a test set of cardinality p. Denoting by En−p the set of all possible subsets of {1, . . . , n} with cardinality n − p, any e ∈ En−p defines a split of Dn into a training sample De = {Zi | i ∈ e} and a test sample Dē , where ē = {1, . . . , n} \ e. For a given classification 4 Performance of CV to estimate the risk of kNN algorithm A, the final LpO estimator of the performance of ADn (·) = fb is the average (over all possible splits) of the classification error estimated on each test set, that is !  −1 X X n 1 bp (A, Dn ) = R 1{ADe (Xi )6=Yi } , (2.2) p p i∈ē e∈En−p e where AD (·) is the classifier built from De . We refer the reader to Arlot and Celisse (2010) for a detailed description of LpO and other cross-validation procedures. In the sequel, the bp (A, Dn ) is replaced by R bp,n in settings where no confusion can arise lengthy notation R bp (Dn ) if the training sample about the algorithm A or the training sample Dn , and by R has to be kept in mind. Exact LpO for the kNN classification algorithm Usually due to its seemingly prohibitive computational cost, LpO is not applied except with p = 1 where it reduces to the well known leave-one-out. 
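To make definitions (2.1) and (2.2) concrete, the following Python sketch implements the kNN vote and the brute-force LpO average over all "n choose p" splits. It is only an illustration of the definitions (Euclidean distance, distance ties broken by the smallest index through a stable sort, exact voting ties resolved by a fair coin as in (2.1)); the function names are ours, and it is emphatically not the closed-form algorithm recalled next.

```python
import numpy as np
from itertools import combinations

def knn_predict(X_train, y_train, x, k, rng):
    # k nearest neighbours of x in Euclidean distance; the stable sort breaks
    # distance ties in favour of the smallest index, as assumed above.
    dist = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dist, kind="stable")[:k]
    vote = y_train[nn].mean()
    if vote > 0.5:
        return 1
    if vote < 0.5:
        return 0
    return int(rng.random() < 0.5)  # exact tie: Bernoulli(1/2), as in (2.1)

def lpo_naive(X, y, k, p, seed=0):
    # Literal implementation of (2.2): average the test error over every one
    # of the "n choose p" train/test splits; exponential cost, tiny n only.
    rng = np.random.default_rng(seed)
    n = len(y)
    split_errors = []
    for test in combinations(range(n), p):
        train = np.setdiff1d(np.arange(n), test)
        errs = [knn_predict(X[train], y[train], X[i], k, rng) != y[i]
                for i in test]
        split_errors.append(np.mean(errs))
    return float(np.mean(split_errors))
```

Even for moderate n, enumerating all the splits in this way is hopeless, which is precisely the widespread objection addressed below.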
However unlike this widespread idea Celisse and Robin (2008); Celisse (2008, 2014) proved that the LpO estimator can be efficiently computed by deriving closed-form formulas in several statistical frameworks. The kNN classification rule is another instance for which efficiently computing the LpO estimator is possible with a time complexity linear in p as previously established by Celisse and Mary-Huard (2011). Let us briefly recall the main steps leading to the closed-form formula. 1. From Eq. (2.2) the LpO estimator can be expressed as a sum (over the n observations of the complete sample) of probabilities:   !  −1 X  −1 X n X X n 1 1  n  = 1{ADe (Xi )6=Yi } 1{ADe (Xi )6=Yi } 1{i∈e} / p p p p e∈En−p i=1 i∈e / = 1 p n X e∈En−p e Pe (AD (Xi ) 6= Yi | i ∈ / e)Pe (i ∈ / e). i=1 Here Pe means that the integration is computed with respect to the random variable e ∈ En−p , which follows the uniform distribution over the np possible subsets with cardinality n−p in En−p . For instance Pe (i ∈ / e) = p/n since it is the proportion of subsamples with cardinality n − p which do not contain a given prescribed index i, which  n n−1 equals n−p / p . (See also Lemma D.4 for further examples of such calculations.) 2. For any Xi , let X(1) , ..., X(k+p−1) , X(k+p) , ..., X(n−1) be the ordered sequence of neighbors of Xi . This list depends on Xi , i.e. X(1) should be noted X(i,1) , but this dependency is skipped here for the sake of readability. The key in the derivation is to condition with respect to the random variable Rki which denotes the rank (in the whole sample Dn ) of the k−th neighbor of Xi in the De , that is Rki = j means that X(j) is the k-th neighbor of Xi in De . Then De Pe (A (Xi ) 6= Yi |i ∈ / e) = k+p−1 X e Pe (AD (Xi ) 6= Yi | Rki = j, i ∈ / e)Pe (Rki = j | i ∈ / e), j=k 5 Celisse and Mary-Huard where the sum involves p terms since only X(k) , . . . , X(k+p−1) are candidates for being the k-th neighbor of Xi in at least one training subset e. 3. Observe that the resulting probabilities can be easily computed (see Lemma D.4): ? Pe (i ∈ / e) = np ? Pe (Rki = j|i ∈ / e) = kj P (U = j − k)  e ? Pe (AD (Xi ) 6= Yi |Vki = j, i ∈ / e) = (1 − Yj ) 1 − FH k+1 2   + Yj 1 − F H 0 k−1 2  , with U ∼ H(j, n − j − 1, p − 1), H ∼ H(Nij , j − Nij − 1, k − 1), and H 0 ∼ H(Nij − 1, j − Nij , k − 1), where H denotes the hypergeometric distribution and Nij is the number of 1’s among the j nearest neighbors of Xi in Dn . The computational cost of LpO for the kNN classifier is the same as that of L1O for the (k + p − 1)NN classifier whatever p, that is O(p n). This contrasts with the usual np prohibitive computational complexity seemingly suffered by LpO. 2.2 U -statistics: General bounds on LpO moments The purpose of the present section is to describe a general strategy allowing to derive new upper bounds on the polynomial moments of the LpO estimator. As a first step of this strategy, we establish the connection between the LpO risk estimator and U-statistics. Second, we exploit this connection to derive new upper bounds on the order-q moments of the LpO estimator for q ≥ 2. Note that these upper bounds, which relate moments of the LpO estimator to those of the L1O estimator, hold true with any classifier. Let us start by introducing U -statistics and recalling some of their basic properties that will serve our purposes. For a thorough presentation, we refer to the books by Serfling (1980); Koroljuk and Borovskich (1994). 
The first step is the definition of a U -statistic of order m ∈ N∗ as an average over all m-tuples of distinct indices in {1, . . . , n}. Definition 2.1 (Koroljuk and Borovskich (1994)). Let h : X m −→ R (or Rk ) denote any Borelian function where m ≥ 1 is an integer. Let us further assume h is a symmetric function of its arguments. Then any function Un : X n −→ R such that  −1 X n Un (x1 , . . . , xn ) = Un (h)(x1 , . . . , xn ) = h (xi1 , . . . , xim ) m 1≤i1 <...<im ≤n where m ≤ n, is a U -statistic of order m and kernel h. Before clarifying the connection between LpO and U -statistics, let us introduce the main property of U -statistics our strategy relies on. It consists in representing any U-statistic as an average, over all permutations, of sums of independent variables. Proposition 2.1 (Eq. (5.5) in Hoeffding (1963)). With the notation of Definition 2.1, let us define W : X n −→ R by r W (x1 , . . . , xn ) =  1X h x(j−1)m+1 , . . . , xjm , r j=1 6 (2.3) Performance of CV to estimate the risk of kNN where r = bn/mc denotes the integer part of n/m. Then Un (x1 , . . . , xn ) = where P σ  1 X W xσ(1) , . . . , xσ(n) , n! σ denotes the summation over all permutations σ of {1, . . . , n}. We are now in position to state the key remark of the paper. All the developments further exposed in the following result from this connection between the LpO estimator defined by Eq. (2.2) and U -statistics. Theorem 2.1. For any classification rule A and any 1 ≤ p ≤ n − 1 such that the following bp,n is a U-statistic of order m = n − p + 1 quantities are well defined, the LpO estimator R m with kernel hm : X −→ R defined by m hm (Z1 , . . . , Zm ) = 1 X  , 1 D(i) m A m (Xi )6=Yi i=1 (i) where Dm denotes the sample Dm = (Z1 , . . . , Zm ) with Zi withdrawn. Proof of Theorem 2.1. From Eq. (2.2), the LpO estimator of the performance of any classification algorithm A computed from Dn satisfies X 1X bp (A, Dn ) = R bp,n = 1 1{ADe (Xi )6=Yi } R n p p e∈En−p i∈ē   X X X 1 1  = n 1{v=e∪{i}}  1{ADe (Xi )6=Yi } , p p e∈En−p i∈ē v∈En−p+1 since there is a unique set of indices v with cardinality n − p + 1 such that v = e ∪ {i}. Then   n X 1X X 1 bp,n =   1{v=e∪{i}} 1{i∈ē}  1nADv\{i} (X )6=Y o . R n i i p p i=1 v∈En−p+1 e∈En−p P Furthermore for v and i fixed, e∈En−p 1{v=e∪{i}} 1{i∈ē} = 1{i∈v} since there is a unique set of indices e such that e = v \ i. One gets bp,n = 1 1 R p np = by noticing p n p  = pn! p!n−p! X n X i v∈En−p+1 i=1 1 X n n−p+1  = 1{i∈v} 1nADv\{i} (X )6=Y o v∈En−p+1 n! p−1!n−p! i X 1 1nADv\{i} (X )6=Y o , i i n−p+1 i∈v = (n − p + 1) 7 n n−p+1  . Celisse and Mary-Huard The kernel hm is a deterministic and symmetric function of its arguments that does only depend on m. Let us also notice that hm (Z1 , . . . , Zm ) reduces to the L1O estimator of the risk of the classification rule A computed from Z1 , . . . , Zm , that is b1 (A, Dm ) = R b1,n−p+1 . hm (Z1 , . . . , Zm ) = R (2.4) In the context of testing whether two binary classifiers have different error rates, this fact has already been pointed out by Fuchs et al. (2013). We now derive a general upper bound on the q-th moment (q ≥ 1) of the LpO estimator that holds true for any classification rule as long as the following quantities remain meaningful. Theorem 2.2. For any classification rule A, let ADn (·) and ADm (·) be the corresponding classifiers built from respectively Dn and Dm , where m = n − p + 1. 
Then for every 1 ≤ p ≤ n − 1 such that the following quantities are well defined, and any q ≥ 1, h h i qi h h i qi bp,n − E R bp,n b1,m − E R b1,m E R ≤E R . (2.5) Furthermore as long as p > n/2 + 1, one also gets • for q = 2   E h bp,n − E R bp,n R i 2 E ≤ h i 2 b1,m − E R b1,m R n · (2.6) m • for every q > 2 h h i qi bp,n − E R bp,n E R ≤ B(q, γ)×  h i   b1,m − E R b1,m  jnk R n E max 2q γ  m  m q   q   v u  b1,m u 2Var R  t   , n  ,   m (2.7) where γ > 0 is a numeric constant and B(q, γ) denotes the optimal constant defined in the Rosenthal inequality (Proposition D.2). The proof is given in Appendix A.1. Eq. (2.5) and Eq. (2.6) straightforwardly result from the Jensen inequality applied to the average over all permutations provided in Proposition 2.1. If p > n/2+1, the integer part bn/mc becomes larger than 1 and Eq. (2.6) becomes better than Eq. (2.5) for q = 2. As a consequence of our strategy of proof, the right-hand side of Eq. (2.6) is equal to the classical upper bound on the variance of U-statistics which suggests it cannot be improved without adding further assumptions. Unlike the above ones, Eq. (2.7) Pr is derived from the Rosenthal inequality, which enables us to upper bound a sum k i=1 ξi kq of independent and identically centered random P P variables in terms of ri=1 kξi kq and ri=1 Var(ξi ). Let us remark that, for q = 2, both terms of the right-hand side of Eq. (2.7) are of the same order as Eq. (2.6) up to constants. Furthermore using the Rosenthal inequality allows us to take advantage of the integer part bn/mc when p > n/2 + 1, unlike what we get by using Eq.(2.5) for q > 2. In particular it provides a new understanding of the behavior of the LpO estimator when p/n → 1 as highlighted later by Proposition 4.2. 8 Performance of CV to estimate the risk of kNN 3. New bounds on LpO moments for the kNN classifier Our goal is now to specify the general upper bounds provided by Theorem 2.2 in the case of the kNN classification rule Ak (1 ≤ k ≤ n) introduced by (2.1). Since Theorem 2.2 expresses the moments of the LpO estimator in terms of those of the L1O estimator computed from Dm (with m = n − p + 1), the next step consists in focusing on the L1O moments. Deriving upper bounds on the moments of the L1O is achieved using a generalization of the well-known Efron-Stein inequality (see Theorem D.1 for EfronStein’s inequality and Theorem 15.5 in Boucheron et al. (2013) for its generalization). For the sake of completeness, we first recall a corollary of this generalization that is proved in Section D.1.4 (see Corollary D.1). Proposition 3.1. Let Z1 , . . . , Zn denote n independent random variables and ζ = f (Z1 , . . . , Zn ), where f : Rn → R is any Borelian function. With Z10 , . . . , Zn0 independent copies of the Zi s, there exists a universal constant κ ≤ 1.271 such that for any q ≥ 2, v u X p u n kζ − Eζkq ≤ 2κq t (f (Z1 , . . . , Zn ) − f (Z1 , . . . , Zi0 , . . . , Zn ))2 . i=1 q/2 b1 (Ak , Dm ) = R b1,m (L1O estimator computed Then applying Proposition 3.1 with ζ = R from Dm with m = n − p + 1) leads to the following Theorem 3.1, which finally allows us to control the order-q moments of the L1O estimator applied to the kNN classifier. m (m = n − p + 1) denote the kNN classifier Theorem 3.1. For every 1 ≤ k ≤ n − 1, let AD k b learnt from Dm and R1,m be the corresponding L1O estimator given by Eq. (2.2). 
Then • for q = 2,  E h i2  k 3/2 b1,m − E R b1,m R ≤ C1 ; m (3.1) • for every q > 2, h E h i qi b1,m − E R b1,m R ≤ (C2 · k)q q q/2 , m (3.2) √ with C1 = 2 + 16γd and C2 = 4γd 2κ, where γd is a constant (arising from Stone’s lemma, see Lemma D.5) that grows exponentially with dimension d, and κ is defined in Proposition 3.1. Its proof (detailed in Section A.2) involves the use of Stone’s lemma (Lemma D.5), (i) which upper bounds, for a given Xi , the number of points in Dn having Xi among their k nearest neighbors by kγd . The dependence of our upper bounds with respect to γd (see explicit constants C1 and C2 ) induces their strong deterioration as the dimension d grows since γd ≈ 4.8d − 1. Therefore the larger the dimension d, the larger the required sample size n for the upper bound to be small (at least smaller than 1). Note also that the tie breaking strategy (based on the smallest index) is chosen so that it ensures Stone’s lemma to hold true. 9 Celisse and Mary-Huard In Eq. (3.1), the easier case q = 2 enables to exploit exact h calculations i  (rather  than Dn−p b upper bounds) of the variance of the L1O. Further noticing E R1,m = R Ak (risk of the kNN classifier learnt from Dn−p ), the resulting k 3/2 /m rate is a strict improvement upon the usual upper bound in k 2 /m which is derived from using the sub-Gaussian exponential concentration inequality provided by Theorem 24.4 in Devroye et al. (1996). By contrast the larger k q in Eq. (3.2) comes from the difficulty to derive a tight upper P (i)  )q , where Dm bound with q > 2 for the expectation of ( ni=1 1 D(i) (resp. (i,j) D Ak m (Xi )6=Ak m (i,j) Dm ) (Xi ) denotes the sample Dm where Zi has been (resp. Zi and Zj have been) removed. We are now in position to state the main result of this section. It follows from the combination of Theorem 2.2 (connecting moments of the LpO estimator to those of the L1O) and Theorem 3.1 (providing an upper bound on the order-q moments of the L1O). bp,n denote the LpO risk estimator Theorem 3.2. For every p, k ≥ 1 such that p+k ≤ n, let R Dn (see (2.2)) of the kNN classifier Ak (·) defined by (2.1). Then there exist (known) constants C1 , C2 > 0 such that for every 1 ≤ p ≤ n − k, • for q = 2,  E h bp,n − E R bp,n R i2  ≤ C1 k 3/2 ; (n − p + 1) (3.3) • for every q > 2, h E h i qi  q/2 q bp,n − E R bp,n R ≤ (C2 k)q n−p+1 , (3.4) √ √ d and C2 = 4γd 2κ, where γd denotes the constant arising from Stone’s with C1 = 128κγ 2π lemma (Lemma D.5). Furthermore in the particular setting where n/2 + 1 < p ≤ n − k, then • for q = 2,  E h i2  b b Rp,n − E Rp,n ≤ C1 k 3/2 j k , n (n − p + 1) n−p+1 (3.5) • for every q > 2, h i qi bp,n − E R bp,n R    n k2 k 3/2  j k q, ≤ Γq max  j n n−p+1 (n − p + 1) n−p+1 (n − p + 1) h E √  √ where Γ = 2 2e max 2C1 , 2C2 . 10 q/2 n n−p+1 3 , k2 q  (3.6) Performance of CV to estimate the risk of kNN The straightforward proof is detailed in Section A.3. Let us start by noticing that both upper bounds in Eq. (3.3) and (3.4) deteriorate as p grows. This is no longer the case for Eq. (3.5) and (3.6), which are specifically designed to cover the setup where p > n/2 + 1, that is where bn/mc is no longer equal to 1. Therefore unlike Eq. (3.3) and (3.4), these last two inequalities are particularly relevant in the setup where p/n → 1, as n → +∞, which has been investigated in different frameworks by Shao (1993); Yang (2006, 2007); Celisse (2014). Eq. (3.5) and (3.6) lead to respective convergence rates at worse k 3/2 /n (for q = 2) and k q /nq−1 (for q > 2). 
In particular this last rate becomes approximately equal to (k/n)q as q gets large. One can also emphasize that, as a U-statistic of fixed order m = n − p + 1, the LpO estimator has a known Gaussian limiting distribution, that is (see Theorem A, Section 5.5.1 Serfling, 1980) √  h i  n b L bp,n Rp,n − E R −−−−−→ N 0, σ12 , n→+∞ m where σ12 = Var [ g(Z1 ) ], with g(z) = E [ hm (z, Z2 , . . . , Zm ) ]. Therefore the upper bound given by Eq. (3.5) is non-improvable in some sense with respect to the interplay between n and p since one recovers the right magnitude for the variance term as long as m = n − p + 1 is assumed to be constant. Finally Eq. (3.6) has been derived using a specific version of the Rosenthal inequality (Ibragimov and Sharakhmetov, 2002) stated with the optimal constant and involving a “balancing factor”. In particular this balancing factor has allowed us to optimize the relative weight of the two terms between brackets in Eq. (3.6). This leads us to claim that the dependence of the upper bound with respect to q cannot be improved with this line of proof. However we cannot conclude that the term in q 3 cannot be improved using other technical arguments. 4. Exponential concentration inequalities This section provides exponential concentration inequalities for the LpO estimator applied to the kNN classifier. Our main results heavily rely on the moments inequalities previously derived in Section 3, that is Theorem 3.2. In order to emphasize the gain allowed by this strategy of proof, we start this section by successively proving two exponential inequalities obtained with less sophisticated tools. We then discuss the strength and weakness of each of them to justify the additional refinements we introduce step by step along the section. bp (Ak , Dn ) = R bp,n can be derived by A first exponential concentration inequality for R use of the bounded difference inequality following the line of proof of Devroye et al. (1996, Theorem 24.4) originally developed for the L1O estimator. bp,n denote the LpO Proposition 4.1. For any integers p, k ≥ 1 such that p + k ≤ n, let R Dn estimator (2.2) of the classification error of the kNN classifier Ak (·) defined by (2.1). Then for every t > 0,  P t2    −n 8(k+p−1)2 γ 2 b b d . Rp,n − E Rp,n > t ≤ 2e where γd denotes the constant introduced in Stone’s lemma (Lemma D.5). 11 (4.1) Celisse and Mary-Huard The proof is given in Appendix B.1. The upper bound of Eq. (4.1) strongly exploits the facts that: (i) for Xj to be one of the k nearest neighbors of Xi in at least one subsample X e , it requires Xj to be one of the k + p − 1 nearest neighbors of Xi in the complete sample, and (ii) the number of points for which Xj may be one of the k + p − 1 nearest neighbors cannot be larger than (k + p − 1)γd by Stone’s Lemma (see Lemma D.5). This reasoning results in a rough upper bound since the denominator in the exponent exhibits a (k + p − 1)2 factor where k and p play the same role. The reason is that we do not distinguish between points for which Xj is among or above the k nearest neighbors of Xi in the whole sample, although these two setups lead to strongly different probabilities of being among the k nearest neighbors in the training resample. Consequently the dependence of the convergence rate on k and p in Proposition 4.1 can be improved, as confirmed by forthcoming Theorems 4.1 and 4.2. Based on the previous comments, a sharper quantification of the influence of each neighbor among the k + p − 1 ones leads to the next result. 
bp,n denote the LpO estimator Theorem 4.1. For every p, k ≥ 1 such that p + k ≤ n, let R Dn (2.2) of the classification error of the kNN classifier Ak (·) defined by (2.1). Then there exists a numeric constant  > 0 such that for every t > 0,           bp,n > t , P E R bp,n − R bp,n − E R bp,n > t ≤ exp −n max P R t2  h i , p−1 k 2 1 + (k + p) n−1 with  = 1024eκ(1+γd ), where γd is introduced in Lemma D.5 and κ ≤ 1.271 is a universal constant. The proof is given in Section B.2. Let us remark that unlike Proposition 4.1, taking into account the rank of each neighbor in the whole sample enables to considerably reduce the weight of p (compared to that of k) in the denominator of the exponent. In particular, one observes that letting p/n → 0 as n → +∞ (with k assumed to be fixed for instance) makes the influence of the k + p factor asymptotically negligible. This would allow to recover (up to numeric constants) a similar upper bound to that of Devroye et al. (1996, Theorem 24.4), achieved with p = 1. However the upper bound of Theorem 4.1 does not reflect the right dependencies with respect to k and p compared with what has been proved for polynomial moments in Theorem 3.2. The upper bound seems to strictly deteriorate as p increases, which contrasts with the upper bounds derived for p > n/2 + 1 in Theorem 3.2. This drawback is overcome by the following result, which is our main contribution in the present section. bp,n denote the LpO estimator of Theorem 4.2. For every p, k ≥ 1 such that p + k ≤ n, let R D n the classification error of the kNN classifier fˆk = Ak (·) defined by (2.1). Then for every t > 0,     h i   h i  t2 b b b b max P Rp,n − E Rp,n > t , P E Rp,n − Rp,n > t ≤ exp −(n − p + 1) 2 2 , ∆ k (4.2) 12 Performance of CV to estimate the risk of kNN √  √ where ∆ = 4 e max C2 , C1 with C1 , C2 > 0 defined in Theorem 3.1. Furthermore in the particular setting where p > n/2 + 1, it comes   i  i   h h   n b b b b max P Rp,n − E Rp,n > t , P E Rp,n − Rp,n > t ≤ e × n−p+1   !1/3    2    2 2 t n 1 n t , exp − min (n − p + 1) , (n − p + 1)   2e n − p + 1 4Γ2 k 3/2 n − p + 1 4Γ2 k 2 (4.3) where Γ arises in Eq. (3.6) and γd denotes the constant introduced in Stone’s lemma (Lemma D.5). The proof has been postponed to Appendix B.3. It involves different arguments for the two inequalities (4.2) and (4.3) depending on the range of values of p. Firstly for p ≤ n/2 + 1, a simple argument is applied to derive Ineq. (4.2) from the two corresponding moment inequalities of Theorem 3.2 characterizing the sub-Gaussian behavior of the LpO estimator in terms of its even moments (see Lemma D.2). Secondly for p > n/2 + 1, we rather exploit: (i) the appropriate upper bounds on the moments of the LpO estimator given by Theorem 3.2, and (ii) a dedicated Proposition D.1 which provides exponential concentration inequalities from general moment upper bounds. In accordance with the conclusions drawn about Theorem 3.2, the upper bound of Eq. (4.2) increases as p grows, unlike that of Eq. (4.3) which improves as p increases. In particular the best concentration rate in Eq. (4.3) is achieved as p/n → 1, whereas Eq. (4.2) turns out to be useless in that setting. Let us also notice that Eq. (4.2) remains strictly better than Theorem 4.1 as long as p/n → δ ∈ [0, 1[, as n → +∞. Note also that the constants Γ and γd are the same as in Theorem 3.1. Therefore the same comments regarding their dependence with respect to the dimension d apply here. In order to facilitate the interpretation of the last Ineq. 
(4.3), we also derive the following proposition (proved in Appendix B.3) which focuses on the description of each deviation term in the particular case where p > n/2 + 1. Proposition 4.2. With the same notation as Theorem 4.2, for any p, k ≥ 1 such that p + k ≤ n, p > n/2 + 1, and for every t > 0    v √   u h i 2eΓ k n  b u j k 3/2 k   3/2 b kt   ≤ P  Rp,n − E Rp,n > √ t + 2e j e · e−t , t n n n−p+1 n−p+1 n−p+1 n−p+1 where Γ > 0 is the constant arising from (3.6). The present inequality is very similar to the well-known Bernstein inequality (Boucheron et al., 2013, Theorem 2.10) except the second deviation term of order t3/2 instead of t (for the Bernstein inequality). √ With respect to n, the first deviation term is of order ≈ k 3/2 / n, which is the same as with the Bernstein inequality. The second deviation term is of a somewhat different order, 13 Celisse and Mary-Huard √ that is ≈ k n − p + 1/n, as compared with the usual 1/n in the Bernstein inequality. Note that we almost recover this k/n rate by choosing for instance p ≈ n(1 − log n/n), which √ √ leads to k log n/n. Therefore varying p allows to interpolate between the k/ n and the k/n rates. Note also that the dependence of the first (sub-Gaussian) deviation term with respect to k is only k 3/2 , which improves upon the usual k 2 resulting from Ineq. (4.2) in Theorem 4.2 for instance. However this k 3/2 remains certainly too large for being optimal even if this question remains widely open at this stage in the literature. More generally one strength of our approach is its versatility. Indeed the two above deviation terms directly result from the two upper bounds on the moments of the L1O stated in Theorem 3.1. Therefore any improvement of the latter upper bounds would immediately lead to enhance the present concentration inequality (without changing the proof). 5. Assessing the gap between LpO and classification error 5.1 Upper bounds b First, we derive new upper bounds on different measures of the discrepancy h ibetween Rp,n = bp (Ak , Dn ) and the classification error L(fˆk ) or the risk R(fˆk ) = E L(fˆk ) . These bounds R on the LpO estimator are completely new for p > 1, some of them being extensions of former ones specifically derived for the L1O estimator applied to the kNN classifier. √ √ bp,n denote the Theorem 5.1. For every p, k ≥ 1 such that p ≤ k and k + k ≤ n, let R Dn ˆ LpO risk estimator (see (2.2)) of the kNN classifier fk = Ak (·) defined by (2.1). Then, √ i 4 p k ˆ b E Rp,n − R(fk ) ≤ √ , 2π n h (5.1) and  E 2  128κγ k 3/2 16 p2 k d ˆ b · Rp,n − R(fk ) ≤ √ + 2π n − p + 1 2π n2 (5.2) Moreover,  E bp,n − L(fˆk ) R 2  √ √ 2 2 (2p + 3) k 1 ≤ √ + · n n π (5.3) By contrast with the √ results in the previous sections, a new restriction on p arises in Theorem 5.1, that is p ≤ k. It is the consequence of using Lemma D.6 in the above proof to quantify how different two classifiers respectively computed from the same n and n − p points can be. Indeed this lemma, which provides an upper bound on the L1 stability of the kNN classifier previously proved by Devroye and Wagner (1979b), only remains meaningful √ as long as 1 ≤ p ≤ k. 14 Performance of CV to estimate the risk of kNN Proof of Theorem 5.1. 
e Proof of (5.1): With fˆke = AD k , Lemma D.6 immediately provides h i h i i h bp,n − L(fˆk ) = E L(fˆke ) − E L(fˆk ) E R h i ≤ E 1{ADe (X)6=Y } − 1{ADn (X)6=Y } k k √   e 4 p k Dn D = P Ak (X) 6= Ak (X) ≤ √ · 2π n Proof of (5.2): The proof combines the previous upper bound with the one established for the variance of the LpO estimator, that is Eq. (3.3).   i h i2 h i2  h i2   h bp,n − E L(fˆk ) bp,n − E L(fˆk ) bp,n − E R bp,n E R =E R + E R 128κγd k 3/2 ≤ √ + 2π n − p + 1 √ !2 4 p k √ , 2π n which concludes the proof. The proof of Ineq. (5.3) is more intricate and has been postponed to Appendix C.1. h i bp,n = R(ADn−p ), the right-hand side of Ineq. (5.1) is an Keeping in mind that E R k upper bound on the bias of the LpO estimator, that is on the difference between the risks of the classifiers built from respectively n − p and n points. Therefore, the fact that this D n upper bound increases with p is reliable since the classifiers Ak n−p+1 (·) and AD k (·) can become more and more different from one another as p increases. More precisely, the √ upper √ bound in Ineq. (5.1) goes to 0 provided p k/n does. With the additional restriction p ≤ k, this reduces to the usual condition k/n → 0 as n → +∞ (see Devroye et al., 1996, Chap. 6.6 for instance). The monotonicity of this upper upper bound with respect to k can seem somewhat unexpected. One could think that the two classifiers would become more and more “similar” to each other as k increases. However it can be proved that, in some sense, this dependence cannot be improved in the present distribution-free framework (see Proposition 5.1 and Figure 1). Note that an upper bound similar to that of Ineq. (5.2) can be easily derived for any order-q moment (q ≥ 2) at the price of increasing the constants by using (a + b)q ≤ 2q−1 (aq + bq ), for every a, b ≥ 0. We also emphasize that Ineq. (5.2) allows us to control the discrepancy between the LpO estimator and the risk of the kNN classifier, that is the expectation of its classification error. Ideally we would have liked to replace the risk R(fˆk ) by the prediction error L(fˆk ). But with our strategy of proof, this would require an additional distribution-free concentration inequality on the prediction error of the kNN classifier. To the best of our knowledge, such a concentration inequality is not available up to now. Upper bounding the squared difference between the LpO estimator and the prediction error is precisely the purpose of Ineq. (5.3). Proving the latter inequality requires a completely different strategy which can be traced back to an earlier proof by Rogers and Wagner (1978, see the proof of Theorem 2.1) applying to the L1O estimator. Let us mention that 15 Celisse and Mary-Huard Ineq. (5.3) combined with the Jensen inequality lead to a less accurate upper bound than Ineq. (5.1). Finally the apparent difference between the upper bounds in Ineq. (5.2) and (5.3) results from the completely different schemes of proof. The first one allows us to derive general upper bounds for all centered moments of the LpO estimator, but exhibits a worse dependence with respect to k. By contrast the second one is exclusively dedicated to upper bounding the mean squared difference between the prediction error and the LpO estimator √ and leads to a smaller k. However (even if probably not optimal), the upper bound used in Ineq. (5.2) still enables to achieve minimax rates over some Hölder balls as proved by Proposition 5.3. 
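As a rough sanity check of the orders of magnitude in Ineq. (5.1), the sketch below (our own hedged illustration, not one of the paper's experiments; the 1-D Gaussian mixture, the use of scikit-learn's kNN implementation, whose tie-breaking differs from the paper's convention, and all parameter values are assumptions introduced here) Monte-Carlo estimates the bias E[L(A_k^{D_{n-p}})] - E[L(A_k^{D_n})], recalling that E[R̂_{p,n}] is the risk of the classifier trained on n - p points, and prints it next to the distribution-free bound 4p√k/(√(2π)n). A systematic empirical study of this bias, in the discrete DS setting, is carried out in Section 5.2 below.

```python
# Hypothetical illustration (not from the paper): Monte-Carlo estimate of the bias
# E[ L(A_k^{D_{n-p}}) ] - E[ L(A_k^{D_n}) ], compared with the distribution-free
# bound 4 p sqrt(k) / (sqrt(2 pi) n) of Ineq. (5.1). The data-generating model and
# all constants below are our own assumptions, chosen so that p <= sqrt(k).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def sample(size, rng):
    y = rng.integers(0, 2, size=size)
    X = rng.normal(loc=2.0 * y, scale=1.0).reshape(-1, 1)  # class 0 ~ N(0,1), class 1 ~ N(2,1)
    return X, y

def mc_bias(n, p, k, reps=200, n_test=2000, seed=0):
    rng = np.random.default_rng(seed)
    gap = 0.0
    for _ in range(reps):
        X, y = sample(n, rng)
        Xt, yt = sample(n_test, rng)  # large test set to approximate the true error
        err_full = np.mean(KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(Xt) != yt)
        err_sub = np.mean(KNeighborsClassifier(n_neighbors=k).fit(X[:-p], y[:-p]).predict(Xt) != yt)
        gap += (err_sub - err_full) / reps
    return gap

n, p, k = 200, 3, 9  # p <= sqrt(k), as required by Theorem 5.1
print(abs(mc_bias(n, p, k)), 4 * p * np.sqrt(k) / (np.sqrt(2 * np.pi) * n))
```

In such an easy, well-separated setting one typically observes an estimated bias well below the distribution-free bound, which is consistent with the fact that the bound must also cover hard distributions such as the DS setting of the next section.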
5.2 Lower bounds 5.2.1 Bias of the L1O estimator The purpose of the next result is to provide a counter-example highlighting that the upper bound of Eq. (5.1) cannot be improved in some sense. We consider the following discrete setting where X = {0, 1} with π0 = P [ X = 0 ], and we define η0 = P [ Y = 1 | X = 0 ] and η1 = P [ Y = 1 | X = 1 ]. In what follows this two-class generative model will be referred to as the discrete setting DS. Note that (i) the 3 parameters π0 , η0 and η1 fully describe the joint distribution P(X,Y ) , and (ii) the distribution of DS satisfies the strong margin assumption of Massart and Nédélec (2006) if both η0 and η1 are chosen away from 1/2. However this favourable setting has no particular effect on the forthcoming lower bound except a few simplifications along the calculations. Proposition 5.1. Let us consider the DS setting with π0 = 1/2, η0 = 0 and η1 = 1, and assume that k is odd. Then there exists a numeric constant C > 1 independent of n and k Dn−1 n satisfy such that, for all n/2 ≤ k ≤ n − 1, the kNN classifiers AD k and Ak h E L  AkDn  −L  D Ak n−1 i √ k ≥C · n √ The proof of Proposition 5.1 is provided in Appendix C.2. The rate k/n in the righthand side of Eq. (5.1) is then achieved under the generative model DS for any k ≥ n/2. As a consequence this rate cannot be improved without any additional assumption, for instance on the distribution of the Xi s. See also Figure 1 below and related comments. Empirical illustration To further illustrate the result of Proposition 5.1, we simulated data according to the DS setting, for different values of n ranging from 100 to 500 and different values of k ranging from 5 to n − 1. h    i D n Figure 1 (a) displays the evolution of the absolute bias E L AD − L Ak n−1 as k a function of k, for several values of n (plain curves). The absolute bias is a nondecreasing function of k, as suggested by the upper bound provided in Eq. (5.1) which is also plotted (dashed lines) to ease the comparison. Importantly, the non-decreasing behavior of the absolute bias is not always restricted to high values of k (w.r.t. n), as illustrated in Figure 1 (b) which corresponds to the same DS setting but with parameter values (π0 , η0 , η1 ) = 16 Performance of CV to estimate the risk of kNN (0.2, 0.2, 0.9). In particular the non-decreasing behavior now appears for a range of values of k that are lower than n/2. Note that a rough idea about the location of the peak, denoted by kpeak , can be deduced as follows in the simple case where η0 = 0 and η1 = 1. • For the peak to arise, the two classifiers (based on n and respectively n − 1 observations) have to disagree the most strongly. • This requires one of the two classifiers – say the first one – to have ties among the k nearest neighbors of each label in at least one of the two cases X = 0 or X = 1. • With π0 < 0.5, then ties will most likely occur for the case X = 0. Therefore the discrepancy between the two classifiers will be the highest at any new observation x0 = 0. • For the tie situation to arise at x0 , half of its neighbors have to be 1. This only occurs if (i) k > n0 (with n0 the number of observations such that X = 0 in the training set), and (ii) k0 η0 + k1 η1 = k/2, where k0 (resp. k1 ) is the number of neighbors of x0 such that X = 0 (resp. X = 1). 
• Since k > n0, one has k0 = n0 and the last expression boils down to k = n0(η1 − η0)/(η1 − 1/2).
• For large values of n, one should have n0 ≈ nπ0, that is the peak should appear at kpeak ≈ nπ0(η1 − η0)/(η1 − 1/2).
In the setting of Proposition 5.1, this reasoning remarkably yields kpeak ≈ n, while it leads to kpeak ≈ 0.4n in the setting of Figure 1 (b), which is close to the location of the observed peaks. This also suggests that even smaller values of kpeak can arise by tuning the parameter π0 close to 0.
Let us mention that very similar curves have been obtained for a Gaussian mixture model with two disjoint classes (not reported here). On the one hand this empirically illustrates that the √k/n rate is not limited to the discrete setting DS. On the other hand, all of this confirms that this rate cannot be improved in the present distribution-free framework.
Let us finally consider Figure 1 (c), which displays the absolute bias as a function of n when k = ⌊Coef × n⌋ for different values of Coef, where ⌊·⌋ denotes the integer part. With this choice of k, Proposition 5.1 implies that the absolute bias should decrease at a 1/√n rate, which is supported by the plotted curves. By contrast, panel (d) of Figure 1 illustrates that choosing smaller values of k, that is k = ⌊Coef × √n⌋, leads to a faster decreasing rate.
[Figure 1: four panels (a)–(d) plotting the absolute bias (y-axis "Bias") against k in panels (a)–(b) and against n in panels (c)–(d), with legend "Coef" taking values 1 to 5 in panels (c)–(d).]
Figure 1: (a) Evolution of the absolute value of the bias as a function of k, for different values of n (plain lines). The dashed lines correspond to the upper bound obtained in (5.1). (b) Same as previous, except that data were generated according to the DS setting with parameters (π0, η0, η1) = (0.2, 0.2, 0.9). Upper bounds are not displayed in order to fit the scale of the absolute bias. (c) Evolution of the absolute value of the bias with respect to n, when k is chosen such that k = ⌊Coef × n⌋ (⌊·⌋ denotes the integer part). The different colors correspond to different values of Coef. (d) Same as previous, except that k is chosen such that k = ⌊Coef × √n⌋.
5.2.2 Mean squared error
Following an example described by Devroye and Wagner (1979a), we now provide a lower bound on the minimal convergence rate of the mean squared error (see also Devroye et al., 1996, Chap. 24.4, p. 415 for a similar argument).
Proposition 5.2. Let us assume n is even, and that P(Y = 1 | X) = P(Y = 1) = 1/2 is independent of X. Then for k = n − 1 (k odd), it results
E[ (R̂_{1,n} − L(f̂_k))^2 ] = ∫_0^1 2t · P( |R̂_{1,n} − L(f̂_k)| > t ) dt ≥ (1/8) · 1/√(πn).
From the upper bound of order √k/n (with p = 1) provided by Ineq. (5.3), choosing k = n − 1 leads to the same 1/√n rate as that of Proposition 5.2. This suggests that, at least for very large values of k, the √k/n rate is of the right order and cannot be improved in the distribution-free framework.
5.3 Minimax rates
Let us conclude this section with a corollary, which provides a finite-sample bound on the gap between R̂_{p,n} and R(f̂_k) = E[ L(f̂_k) ] with high probability. It is stated under the same restriction on p as the previous Theorem 5.1 it is based on, that is for p ≤ √k.
Corollary 5.1.
With the notation of Theorems 4.2 and 5.1, let us assume p, k ≥ 1 with p ≤ k, k + k ≤ n, and p ≤ n/2 + 1. Then for every x > 0, there exists an event with probability at least 1 − 2e−x such that v √ u 2 k2 ∆ 4 p k u bp,n ≤ t  x + √ , (5.4) R(fˆk )) − R p−1 n 2π n 1− n n where fˆk = AD k (·). Proof of Corollary 5.1. Ineq. (5.4) results from combining the exponential concentration bp,n , namely Ineq. (4.2) (from Theorem 4.2) and the upper bound on the result derived for R bias, that is Ineq. (5.1). h i h i bp,n + E R bp,n − R bp,n ≤ R(fˆk )) − E R bp,n R(fˆk )) − R s √ 4 p k ∆2 k 2 ≤√ + x · n−p+1 2π n Note that the right-hand side of Ineq. (5.4) could be used to derive bounds on R(fˆk ) that seem similar to confidence bounds. However we do not recommend doing this in practice for several reasons. On the one hand, Ineq. (5.4) results from the repeated use of concentration inequalities where numeric constants are not optimized at all, which leads to require a large sample size n for the deviation terms to be small in practice. On the other hand, explicit numeric constants such as ∆2 in Corollary 5.1 exhibit a dependence on γd ≈ 4.8d − 1, which becomes exponentially large as d increases. Proving that this dependence can be weakened or not remains a completely open question at this stage. Nevertheless one can highlight that, for a given n, increasing d will quickly make the deviation term larger than 1, whereas bp,n belong to [0, 1]. both R(fˆk ) and R 19 Celisse and Mary-Huard √ The right-most term of order k/n in Ineq. (5.4) results from the bias. This is a necessary price to pay which cannot be improved in the present distribution-free framework √ according to Proposition 5.1. Besides combining the restriction p ≤ k with the usual consistency constraint k/n = o(1) leads to the conclusion that small values of p (w.r.t. n) have almost no√effect on the convergence rate of the LpO estimator. Weakening the key restriction p ≤ k would be necessary to potentially nuance this conclusion. In order to highlight the interest of the above deviation inequality, let us deduce an optimality result in terms of minimax rate. In the following statement, Corollary 5.1 is bp,n and the risk R(fˆk ) used to prove that, uniformly with respect to k, the LpO estimator R of the kNN classifier remain close to each other with high probability. Proposition 5.3. With the same notation as Corollary 5.1, for every C > 1 and θ > 0, −(C−1) on which, for any p, k ≥ 1 such there exists √ an event √ of probability at least 1 − 2 · n that p ≤ k, k + k ≤ n, and p ≤ n/2 + 1, the LpO estimator of the kNN classifier satisfies √ h i θ−1 ∆2 C k 2 log(n) 4 p k ? ˆ −√  (1 − θ) R(fk ) − L − 4 2π n n R(fˆk ) − L? √ i θ−1 ∆2 C h k 2 log(n) 4 p k ? ? ˆ b +√  ≤ Rp (Ak , Dn ) − L ≤ (1 + θ) R(fk ) − L + , 4 2π n n R(fˆk ) − L? (5.5) where L? denotes the classification error of the Bayes classifier. Furthermore if one assumes the regression function η belongs to a Hölder ball H(τ, α) 2α for some α ∈]0, d/4[ (recall that Xi ∈ Rd ) and τ > 0, then choosing k = k ? = k0 · n 2α+d leads to bp (Ak? , Dn ) − L? ∼n→+∞ R(fˆk? ) − L? . R (5.6) Ineq. (5.5) gives a uniform control (over k) of the gap between the excess risk R(fˆk ) − L? bp (fˆk ) − L? with high probability. The decreasing and the corresponding LpO estimator R rate (in n−(C−1) ) of this probability is directly related to the log(n) factor in the lower and upper bounds. This decreasing rate could be made faster at the price of increasing the exponent of the log(n) factor. 
In a similar way the numeric constant θ has no precise meaning and can be chosen as close to 0 as we want, leading to increase one of the other deviation terms by a numeric factor θ−1 . For instance one could choose θ = 1/ log(n), which would replace the log(n) by a (log n)2 . The equivalence stated by (5.6) results from knowing that this choice k = k ? makes the α kNN classifier achieve the minimax rate n− 2α+d over Hölder balls with smoothness parameter α ∈]0, d/4[ (see Theorems 3.3 and 3.5 in Audibert and Tsybakov, 2007). Therefore it is not difficult to check that the other deviation terms are negligible with respect to the excess risk R(fˆk ) − L? for k = k ? . Proof of Proposition 5.3. Let us define K ≤ n as the maximum value of k and assume xk = C · log(n) (for some constant C > 1) for any 1 ≤ k ≤ K. Let us also introduce the 20 Performance of CV to estimate the risk of kNN event ( bp (Ak , Dn ) ≤ ∀1 ≤ k ≤ K, R(fˆk ) − R Ωn = Then P [ Ωcn ] ≤ 1 nC−1 K X −xk e k=1 √ ) 2 k 4 p k . ∆2 xk + √ n n 2π r → 0, as n → +∞, since a union bound leads to = K X e−C·log(n) = K · e−C·log(n) ≤ e−(C−1)·log(n) = k=1 1 · nC−1 Furthermore combining (for a, b > 0) the inequality ab ≤ a2 θ2 + b2 θ−2 /4 for every θ > 0 √ √ √ with a + b ≤ a + b, it results that r   θ−1 k2 k2  xk ∆2 xk ≤ θ R(fˆk ) − L? + ∆2  n 4 n R(fˆ ) − L? k   ≤ θ R(fˆk ) − L? + θ−1 k2  C · log(n), ∆2  4 n R(fˆk ) − L? hence Ineq. (5.5). Let us now choose k = k ? . Then Theorems 3.3 and 3.5 in Audibert and Tsybakov (2007) combined with Theorem 7 in Chaudhuri and Dasgupta (2014) provide that the minimax rate of the kNN classifier is   α R(fˆk? ) − L?  n− 2α+d , where a  b means there exist numeric constants l, u > 0 such that l · b ≤ a ≤ u · b. It is then easy to check that  d−3α    −1 − 2α+d Cθ−1 ∆2 k0 k?2 ? , ˆ ? • θ 4 ∆2 · n log(n) = o R( f C · log(n) = ) − L n→=∞ k 4 n(R(fˆk? )−L? )   √ d ? ? • p nk ≤ kn = k0 · n− 2α+d = on→+∞ R(fˆk? ) − L? . The desired conclusion (5.6) finally results from choosing θ = 1/(log n). 6. Discussion The present work provides several new results quantifying the performance of the LpO estimator applied to the kNN classifier. By exploiting the connexion between LpO and Ustatistics (Section 2), the polynomial and exponential inequalities derived in Sections 3 and 4 give some new insight on the concentration of the LpO estimator around its expectation for different regimes of p/n. In Section 5, these results serve for instance to conclude to the consistency of the LpO estimator towards the risk (or the classification error rate) of the kNN classifier (Theorem 5.1). They also allow us to establish the asymptotic equivalence between the LpO estimator (shifted by the Bayes risk L? ) and the excess risk over some Hölder class of regression functions (Proposition 5.3). 21 Celisse and Mary-Huard It is worth mentioning that the upper-bounds derived in Sections 4 and 5 — see for instance Theorem 5.1 — can be minimized by choosing p = 1, suggesting that the L1O estimator is optimal in terms of risk estimation when applied to the kNN classification algorithm. This observation corroborates the results of the simulation study presented in Celisse and Mary-Huard (2011), where it is empirically shown that small values of p (and in particular p = 1) lead to the best estimation of the risk, whatever the value of parameter k or the level of noise in the data. 
The suggested optimality of L1O (for risk estimation) is also consistent with results by Burman (1989) and Celisse (2014), where it is proved that L1O is asymptotically the best cross-validation procedure to perform risk estimation in the context of low-dimensional regression and density estimation respectively. Alternatively, the LpO estimator can also be used as a data-dependent calibration procedure to choose k: the value k̂p leading to the minimum LpO estimate is selected. Although the focus of the present paper is different, it is worth mentioning that the concentration results established in Section 4 are a significant early step towards deriving theoretical guarantees on LpO as a model selection procedure. Indeed, exponential concentration inequalities have been a key ingredient to assess model selection consistency or model selection efficiency in various contexts (see for instance Celisse (2014) or Arlot and Lerasle (2012) in the density estimation framework). Still theoretically investigating the behavior of k̂p requires some further dedicated developments. One first step towards such results is to derive a tighter upper bound on the bias between the LpO estimator and the risk. The best known upper bound currently available is derived from Devroye and Wagner (1980, see Lemma D.6 in the present paper). Unfortunately it does not fully capture the true behavior of the LpO estimator with √ respect to p (at least as p becomes large) and could be improved in particular for p > k as emphasized in the comments following Theorem 5.1. Another important direction for studying the model selection behavior of the LpO procedure is to prove a concentration inequality for the classification error rate of the kNN classifier around its expectation. While such concentration results have been established for the kNN algorithm in the (fixed-design) regression framework (Arlot and Bach, 2009), deriving similar results in the classification context remains a challenging problem to the best of our knowledge. References A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Foundations of Computer Science, 2006. FOCS’06. 47th Annual IEEE Symposium on, pages 459–468. IEEE, 2006. S. Arlot. Resampling and Model Selection. PhD thesis, University Paris-Sud 11, December 2007. URL http://tel.archives-ouvertes.fr/tel-00198803/en/. oai:tel.archivesouvertes.fr:tel-00198803 v1. S. Arlot and F. Bach. Data-driven calibration of linear estimators with minimal penalties. Advances in Neural Information Processing Systems (NIPS), 2:46–54, 2009. S. Arlot and A. Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79, 2010. 22 Performance of CV to estimate the risk of kNN S. Arlot and M. Lerasle. Why v= 5 is enough in v-fold cross-validation. arXiv preprint arXiv:1210.5830, 2012. J.-Y. Audibert and A. Tsybakov. Fast learning rates for plug-in estimators under the margin condition. The Annals of Statistics, 35(2), 2007. G. Biau and L. Devroye. Lectures on the nearest neighbor method. Springer, 2016. G. Biau, F. Cérou, and A. Guyader. On the rate of convergence of the bagged nearest neighbor estimate. The Journal of Machine Learning Research, 11:687–712, 2010a. G. Biau, F. Cérou, and A. Guyader. Rates of convergence of the functional-nearest neighbor estimate. Information Theory, IEEE Transactions on, 56(4):2034–2040, 2010b. S. Boucheron, O. Bousquet, G. Lugosi, and P. Massart. 
Moment inequalities for functions of independent random variables. Ann. Probab., 33(2):514–560, 2005. ISSN 0091-1798. S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. P. Burman. Comparative study of Ordinary Cross-Validation, v-Fold Cross-Validation and the repeated Learning-Testing Methods. Biometrika, 76(3):503–514, 1989. A. Celisse. Model selection via cross-validation in density estimation, regression and change-points detection. (In English). PhD thesis, University Paris-Sud 11. http://tel.archives-ouvertes.fr/tel-00346320/en/., December 2008. URL http: //tel.archives-ouvertes.fr/tel-00346320/en/. A. Celisse. Optimal cross-validation in density estimation with the l2 -loss. The Annals of Statistics, 42(5):1879–1910, 2014. A. Celisse and T. Mary-Huard. Exact cross-validation for knn: applications to passive and active learning in classification. JSFdS, 152(3), 2011. A. Celisse and S. Robin. Nonparametric density estimation by exact leave-p-out crossvalidation. Computational Statistics and Data Analysis, 52(5):2350–2368, 2008. K. Chaudhuri and S. Dasgupta. Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 3437–3445, 2014. T. M. Cover. Rates of convergence for nearest neighbor procedures. In Proceedings of the Hawaii International Conference on Systems Sciences, pages 413–415, 1968. T. M. Cover and P. E. Hart. Nearest neighbor pattern classification. Information Theory, IEEE Transactions on, 13(1):21–27, 1967. L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25:601–604, 1979a. L. Devroye, L. Györfi, and G. Lugosi. A Probilistic Theory of Pattern Recognition. Springer Verlag, 1996. 23 Celisse and Mary-Huard L. P. Devroye and T. J. Wagner. The strong uniform consistency of nearest neighbor density estimates. Ann. Statist., 5(3):536–540, 1977. ISSN 0090-5364. L. P. Devroye and T. J. Wagner. Distribution-free inequalities for the deleted and holdout error estimates. Information Theory, IEEE Transactions on, 25(2):202–207, 1979b. L.P. Devroye and T.J. Wagner. Distribution-free consistency results in nonparametric discrimination and regression function estimation. Ann. Statist., 8(2):231–239, 1980. E. Fix and J. Hodges. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques, chapter Discriminatory analysis- nonparametric discrimination: Consistency principles. IEEE Computer Society Press, Los Alamitos, CA, 1951. Reprint of original work from 1952. M. Fuchs, R. Hornung, R. De Bin, and A.-L. Boulesteix. A u-statistic estimator for the variance of resampling-based error estimators. Technical report, arXiv, 2013. S. Geisser. The predictive sample reuse method with applications. J. Amer. Statist. Assoc., 70:320–328, 1975. L. Györfi. The rate of convergence of kn -nn regression estimates and classification rules. IEEE Trans. Commun, 27(3):362–364, 1981. P. Hall, B. U. Park, and R. J. Samworth. Choice of neighbor order in nearest-neighbor classification. The Annals of Statistics, pages 2135–2152, 2008. T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning. Springer Series in Statistics. Springer-Verlag, New York, 2001. ISBN 0-387-95284-5. Data mining, inference, and prediction. W. Hoeffding. Probability inequalities for sums of bounded random variables. Journ. of the American Statistical Association, 58(301):13–30, 1963. R. 
Ibragimov and S. Sharakhmetov. On extremal problems and best constants in moment inequalities. Sankhyā: The Indian Journal of Statistics, Series A, pages 42–56, 2002. P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pages 604–613. ACM, 1998. M. Kearns and D. Ron. Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation. Neural Computation, 11:1427–1453, 1999. V. S. Koroljuk and Y. V. Borovskich. Theory of U-statistics. Springer, 1994. S. R. Kulkarni and S. E. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. Information Theory, IEEE Transactions on, 41(4):1028–1039, 1995. L. Li, D. M. Umbach, P. Terry, and J. A. Taylor. Application of the ga/knn method to seldi proteomics data. Bioinformatics, 20(10):1638–1640, 2004. 24 Performance of CV to estimate the risk of kNN Pascal Massart and Élodie Nédélec. Risk bounds for statistical learning. The Annals of Statistics, pages 2326–2366, 2006. D. Psaltis, R. R. Snapp, and S. S. Venkatesh. On the finite sample performance of the nearest neighbor classifier. Information Theory, IEEE Transactions on, 40(3):820–837, 1994. W. H. Rogers and T. J. Wagner. A finite sample distribution-free performance bound for local discrimination rules. Annals of Statistics, 6(3):506–514, 1978. E. D. Scheirer and M. Slaney. Multi-feature speech/music discrimination system, May 27 2003. US Patent 6,570,991. R. J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons Inc., 1980. J. Shao. Linear model selection by cross-validation. J. Amer. Statist. Assoc., 88(422): 486–494, 1993. ISSN 0162-1459. P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition tangent distance and tangent propagation. In Neural networks: tricks of the trade, pages 239–274. Springer, 1998. R. R Snapp and S. S. Venkatesh. Asymptotic expansions of the k nearest neighbor risk. The Annals of Statistics, 26(3):850–878, 1998. B. M. Steele. Exact bootstrap k-nearest neighbor learners. Machine Learning, 74(3):235– 255, 2009. C. J. Stone. Optimal global rates of convergence for nonparametric regression. Ann. Statist., 10(4):1040–1053, 1982. ISSN 0090-5364. M. Stone. Cross-validatory choice and assessment of statistical predictions. J. Roy. Statist. Soc. Ser. B, 36:111–147, 1974. ISSN 0035-9246. With discussion by G. A. Barnard, A. C. Atkinson, L. K. Chan, A. P. Dawid, F. Downton, J. Dickey, A. G. Baker, O. BarndorffNielsen, D. R. Cox, S. Giesser, D. Hinkley, R. R. Hocking, and A. S. Young, and with a reply by the authors. Y. Yang. Comparing learning methods for classification. Statist. Sinica, 16(2):635–657, 2006. ISSN 1017-0405. Y. Yang. Consistency of cross-validation for comparing regression procedures. The Annals of Statistics, 35(6):2450–2473, 2007. P. Zhang. Model selection via multifold cross validation. Ann. Statist., 21(1):299–313, 1993. ISSN 0090-5364. 25 Celisse and Mary-Huard Appendix A. Proofs of polynomial moment upper bounds A.1 Proof of Theorem 2.2 The proof relies on Proposition 2.1 that allows to relate the LpO estimator to a sum of independent random variables. In the following, we distinguish between the two settings q = 2 (where exact calculations can be carried out), and q > 2 where only upper bounds can be derived. When q > 2, our proof deals separately with the cases p ≤ n/2 + 1 and p > n/2 + 1. 
In the first one, a straightforward use of Jensen’s inequality leads to the result. In the second setting, one has to be more cautious when deriving upper bounds. This is done by using the more sophisticated Rosenthal’s inequality, namely Proposition D.2. A.1.1 Exploiting Proposition 2.1 According to the proof of Proposition 2.1, it arises that the LpO estimator can be expressed as a U -statistic since X  bp,n = 1 R W Zσ(1) , . . . , Zσ(n) , n! σ with mc j n k−1 bX n W (Z1 , . . . , Zn ) = m hm Z(a−1)m+1 , . . . , Zam  (with m = n − p + 1) a=1 m and hm (Z1 , . . . , Zm ) = 1 X   =R b1,n−p+1 , 1 D(i) m A m (Xi )6=Yi i=1 (i) (i) where ADm (.) denotes the classifier based on sample Dm = (Z1 , . . . , Zi−1 , Zi+1 , . . . , Zm ). Further centering the LpO estimator, it comes h i X  bp,n − E R bp,n = 1 R W̄ Zσ(1) , . . . , Zσ(n) , n! σ where W̄ (Z1 , . . . , Zn ) = W (Z1 , . . . , Zn ) − E [ W (Z1 , . . . , Zn ) ]. Then with h̄m (Z1 , . . . , Zm ) = hm (Z1 , . . . , Zm ) − E [ hm (Z1 , . . . , Zm ) ], one gets h h i qi  q bp,n − E R bp,n E R ≤ E W̄ (Z1 , . . . , Zn ) (Jensen0 s inequality)  q n mc j n k−1 bX    h̄m Z(i−1)m+1 , . . . , Zim  = E m i=1 q n c b m j n k−q    X = E h̄m Z(i−1)m+1 , . . . , Zim  . m  i=1 26 (A.1) Performance of CV to estimate the risk of kNN A.1.2 The setting q = 2 If q = 2, then by independence it comes   n bX mc i q i j n k−2 h   bp,n bp,n − E R ≤ Var  E R hm Z(i−1)m+1 , . . . , Zim  m h i=1 mc j n k−2 bX n = = m i=1 j n k−1 m   Var hm Z(i−1)m+1 , . . . , Zim   b1 (A, Z1,n−p+1 ) , Var R which leads to the result. A.1.3 The setting q > 2 If p ≤ n/2 + 1: A straightforward use of Jensen’s inequality from (A.1) provides h h bp,n bp,n − E R E R i qi ≤ n mc j n k−1 bX   q E h̄m Z(i−1)m+1 , . . . , Zim m i=1 h h i qi b1,n−p+1 − E R b1,n−p+1 =E R . If p > n/2 + 1: Let us now use Rosenthal’s inequality (Proposition D.2) by introducing symmetric random variables ζ1 , . . . , ζbn/mc such that ∀1 ≤ i ≤ bn/mc ,    0 0 ζi = hm Z(i−1)m+1 , . . . , Zim − hm Z(i−1)m+1 , . . . , Zim , where Z10 , . . . , Zn0 are i.i.d. copies of Z1 , . . . , Zn . Then it comes for every γ > 0  q q n n bX bX mc mc      h̄m Z(i−1)m+1 , . . . , Zim  ≤ E  ζi  , E  i=1 i=1 which implies  v q  u n q n n   u   bX b b c c c m m m  X u X    2    q t E h̄m Z(i−1)m+1 , . . . , Zim  ≤ B(q, γ) max γ E [ |ζi | ] ,  E ζi    .    i=1 i=1  i=1   Then using for every i that E [ |ζi |q ] ≤ 2q E  h̄m Z(i−1)m+1 , . . . , Zim 27  q , Celisse and Mary-Huard it comes  q n bX mc    E h̄m Z(i−1)m+1 , . . . , Zim  i=1 ≤ B(q, γ) max 2q γ jnk m h E ! i q i rj n k q  b1,m b1,m − E R b1,m , R . 2Var R m h Hence, it results for every q > 2 h h i qi bp,n − E R bp,n E R q ≤ B(q, γ) max 2 γ j n k−q+1 h E m h i q i j n k−q/2 b1,m − E R b1,m , R m r  b1,m 2Var R  !q ! , which concludes the proof. A.2 Proof of Theorem 3.1 Our strategy of proof follows several ideas. The first one consists in using Proposition 3.1 which says that, for every q ≥ 2, v u m u X 2 p u hm (Z1 , . . . , Zm ) − hm (Z1 , . . . , Zj0 , . . . , Zm ) , h̄m (Z1 , . . . , Zm ) q ≤ 2κq t j=1 q/2 b1,m by Eq. (2.4), and h̄m (Z1 , . . . , Zm ) = hm (Z1 , . . . , Zm ) − where hm (Z1 , . . . , Zm ) = R E [ hm (Z1 , . . . , Zm ) ]. The second idea consists in deriving upper bounds of ∆j hm = hm (Z1 , . . . , Zm ) − hm (Z1 , . . . , Zj0 , . . . , Zm ) by repeated uses of Stone’s lemma, that is Lemma D.5 which upper bounds by kγd the maximum number of Xi s that can have a given Xj among their k nearest neighbors. 
Finally, for technical reasons we have to distinguish the case q = 2 where we get tighter bounds, and q > 2. A.2.1 Upper bounding ∆j hm (i) For the sake of readability the notation D(i) = Dm (see Theorem 2.1), and  let us now use  (i) let Dj denote the set Z1 , . . . , Zj0 , . . . , Zn where the i-th coordinate has been removed. Then, ∆j hm = hm (Z1 , . . . , Zm ) − hm (Z1 , . . . , Zj0 , . . . , Zm ) is now upper bounded by ∆ j hm ≤ 1 1 X n o − 1( ) 1 D(i) + (i) D Ak (Xi )6=Yi m m A j (Xi )6=Yi i6=j ≤ k 1 1 X ( ) . + 1 (i) D (i) j m m D A (Xi )6=A (Xi ) i6=j k 28 k (A.2) Performance of CV to estimate the risk of kNN Furthermore, let us introduce for every 1 ≤ j ≤ n,  Aj = {1 ≤ i ≤ m, i 6= j, j ∈ Vk (Xi )} and A0j = 1 ≤ i ≤ m, i 6= j, j ∈ Vk0 (Xi ) where Vk (Xi ) and Vk0 (Xi ) denote the indices of the k nearest neighbors of Xi respectively among X1 , . . . , Xj−1 , Xj , Xj+1 , . . . , Xm and X1 , ..., Xj−1 , Xj0 , Xj+1 , . . . , Xm . Setting Bj = Aj ∪ A0j , one obtains ∆j hm ≤ 1 X ( 1 ) . + 1 (i) D (i) m m AD (Xi )6=A j (Xi ) i∈Bj k (A.3) k From now on, we distinguish between q = 2 and q > 2 because we will be able to derive a tighter bound for q = 2 than for q > 2. A.2.2 Case q > 2 From (A.3), Stone’s lemma (Lemma D.5) provides 1 1 X ( 2kγd 1 ) ≤ + 1 + · ∆j hm ≤ (i) D (i) m m m m AD (Xi )6=A j (Xi ) i∈Bj k k Summing over 1 ≤ j ≤ n and applying (a + b)q ≤ 2q−1 (aq + bq ) (a, b ≥ 0 and q ≥ 1), it comes X 2  4 2 ∆ j hm ≤ 1 + (2kγd )2 ≤ (2kγd )2 , m m j hence m X 2 hm (Z1 , . . . , Zm ) − hm (Z1 , . . . , Zj0 , . . . , Zm ) j=1 ≤ 4 (2kγd )2 . m q/2 This leads for every q > 2 to h̄m (Z1 , . . . , Zm ) q √ 4kγd , ≤ q 1/2 2κ √ m which enables to conclude. A.2.3 Case q = 2 It is possible to obtain a slightly better upper bound in the case q = 2 with the following reasoning. With the same notation as above and from (A.3), one has  2  h i 2 2 2  X (  ) (using 1{·} ≤ 1) E ∆ j hm = 2 + 2 E  1 (i)   D (i) j m m D (Xi )6=A (Xi ) A i∈Bj k k  ≤ 2 2  + 2 E |Bj | 2 m m  X i∈Bj 1( AD k 29 (i) D (Xi )6=Ak (i) j (Xi ) ) . Celisse and Mary-Huard Lemma D.5 implies |Bj | ≤ 2kγd , which allows to conclude   h E ∆j hm 2 i ≤ 2 4kγd  X ( ) + E 1 (i) . D 2 (i) j m m2 D (Xi )6=A (Xi ) A i∈Bj k k Summing over j and introducing an independent copy of Z1 denoted by Z0 , one derives m h X 2 i E hm (Z1 , . . . , Zm ) − hm (Z1 , . . . , Zj0 , . . . , Zm ) j=1 ≤ 2 4kγd + m m m X   E 1 i=1  AkD (i) D (i) ∪Z0 (Xi )6=Ak  (Xi ) + 1( )  (i) D D (i) ∪Z0 Ak (Xi )6=Ak j (Xi ) √ √ √ 2 2 4 k 32γd k k k k ≤ = + 4kγd × 2 √ +√ ≤ (2 + 16γd ) , m m m 2πm 2π m (A.4) where the last but one inequality results from Lemma D.6. A.3 Proof of Theorem 3.2 The idea is to plug the upper bounds previously derived for the L1O estimator, namely Ineq. (2.5) and (2.6) from Theorem 2.2, in the inequalities proved for the moments of the LpO estimator in Theorem 2.2. Proof of Ineq. (3.3), (3.4), and (3.5): These inequalities straightforwardly result from the combination of Theorem 2.2 and Ineq. (2.5) and (2.6) from Theorem 3.1. Proof of Ineq. (3.6): It results from the upper bounds proved in Theorem 3.1 and plugged in Ineq. (2.7) (derived from Rosenthal’s inequality with optimized constant γ, namely Proposition D.3). 
Then it comes h i q i  √ q bp,n − E R bp,n R ≤ 2 2e ×   v  !2 q u √        −1 −q+1 q √  u √ n n k k √ q  q q √ , q max ( q) t 2C1 k √ (2C q)  2  n−p+1 n−p+1 n−p+1 n−p+1     √ q = 2 2e ×   q  q  v     q √  u   q √ n k k u q 3/2   j k j k max ( q)  2C1 k t , q 2C  2 √ n n  n−p+1   (n − p + 1) n−p+1 n−p+1  n−p+1   n q  q o n ≤ max λ1 q 1/2 , λ2 q 3/2 , n−p+1 h E 30 Performance of CV to estimate the risk of kNN with v √ u √ q u λ1 = 2 2e 2C1 k t k (n − p + 1) j n n−p+1 k, √ λ2 = 2 2e2C2 j n n−p+1   √ √ Finally introducing Γ = 2 2e max 2C2 , 2C1 provides the result. 31 k k√ · n−p+1 Celisse and Mary-Huard Appendix B. Proofs of exponential concentration inequalities B.1 Proof of Proposition 4.1 The proof relies on two successive ingredients: McDiarmid’s inequality (Theorem D.3), and Stone’s lemma (Lemma D.5). First with Dn = D and Dj = (Z1 , . . . , Zj−1 , Zj0 , Zj+1 , . . . , Zn ), let us start by upper bp (Dn ) − R bp (Dj ) for every 1 ≤ j ≤ n. bounding R Using Eq. (2.2), one has bp (D) − R bp (Dj ) R n 1X ≤ p  n −1 p 1 p  n −1 p ≤ 1 ≤ p i=1 n X i=1 n X X e X k 1 De e j AD k (Xi )6=Ak (Xi ) e 1 {i6∈e} "  X n −1 p i6=j  1 1{ADe (Xi )6=Yi } − 1 Dje {i6∈e} k A (Xi )6=Yi # 1{j∈V De (Xi )} + 1 e De j∈Vk j (Xi ) k  1{i6∈e} + 1 p  n −1 p X 1{j6∈e} , e e where Dje denotes the set of random variables among Dj having indices in e, and VkD (Xi ) Dje (resp. Vk (Xi )) denotes the set of indices of the k nearest neighbors of Xi among De (resp. Dje ). Second, let us now introduce o n De e E Bj n−p = ∪ 1 ≤ i ≤ n, i 6∈ e ∪ {j} , Vk j (Xi ) 3 j or VkD (Xi ) 3 j . e∈En−p E Then Lemma D.5 implies Card(Bj n−p ) ≤ 2(k + p − 1)γd , hence bp (Dn ) − R bp (Dj ) ≤ 1 R p X E i∈Bj n−p  n −1 p X 2 · 1{i6∈e} + e 4(k + p − 1)γd 1 1 ≤ + · n n n The conclusion results from McDiarmid’s inequality (Section D.1.5). B.2 Proof of Theorem 4.1 In this proof, we use the same notation as in that of Proposition 4.1. The goal of the proof is to provide a refined version of previous Proposition 4.1 by taking into account the status of each Xj as one of the k nearest neighbors of a given Xi (or not). To do so, our strategy is to prove a sub-Gaussian concentration inequality by use of bp . Lemma D.2, which requires the control of the even moments of the LpO estimator R Such upper bounds are derived • First, by using Ineq. (D.4) (generalized Efron-Stein inequality), which amounts to control the q-th moments of the differences bp (D) − R bp (Dj ) . R 32 Performance of CV to estimate the risk of kNN • Second, by precisely evaluating the contribution of each neighbor Xi of a given Xj ,  e that is by computing quantities such as Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) , where Pe [ · ] denotes the probability measure with respect to the uniform random variable e over e En−p , and VkD (Xi ) denotes the indices of the k nearest neighbors of Xi among X e = {X` , ` ∈ e}. bp (D) − R bp (Dj ) B.2.1 Upper bounding R For every 1 ≤ j ≤ n, one gets  −1 X   1 bp (D) − R bp (Dj ) = n R 1{j∈ē} 1{ADe (Xj )6=Yj } − 1{ADe (X 0 )6=Y 0 } j j k k p p e 1X  + 1{j∈e} 1{ADe (Xi )6=Yi } − 1 Dje k p Ak (Xi )6=Yi !) . i∈ē Absolute values and Jensen’s inequality then provide )  −1 X ( X 1 n 1  bp (D) − R bp (Dj ) ≤ 1 De R 1{j∈ē} + 1{j∈e} De p p p Ak (Xi )6=Ak j (Xi ) e i∈ē  −1 X X 1 n 1  ≤ + 1{j∈e} 1 D e De n p p Ak (Xi )6=Ak j (Xi ) e i∈ē = 1 1 + n p n X i Dje e Pe j ∈ e, i ∈ ē, AD (X ) = 6 A (X ) . 
i i k k h i=1 where the notation Pe means the integration is carried out with respect to the random variable e ∈ En−p , which follows a discrete uniform distribution over the set En−p of all n − p distinct indices among {1, o o n n . . .e, n}. e Dje De (X ) ∪ V Dj (X ) , where (X ) ⊂ j ∈ V Let us further notice that AD (X ) = 6 A i i i i k k k k De Vk j (Xi ) denotes the set of indices of the k nearest neighbors of Xi among Dje with the notation of the proof of Proposition 4.1. Then it results n i h X Dje e (X ) Pe j ∈ e, i ∈ ē, AD (X ) = 6 A i i k k i=1 ≤ n X h i De e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) ∪ Vk j (Xi ) i=1 ≤ n  X i=1 n X ≤2 h i   De e e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) + Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) ∪ Vk j (Xi )   e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) , i=1 which leads to n X   e bp (D) − R bp (Dj ) ≤ 1 + 2 R Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) . n p i=1 33 Celisse and Mary-Huard Summing over 1 ≤ j ≤ n the square of the above quantity, it results n  X )2 n  1 2X  De + Pe j ∈ e, i ∈ ē, j ∈ Vk (Xi ) n p i=1 j=1 ( n )2 n X 1  2X  e ≤2 +2 Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) n2 p i=1 j=1 )2 ( n n X 1X   2 e ≤ +8 Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) . n p n 2 X bp (D) − R bp (Dj ) ≤ R j=1 ( i=1 j=1 B.2.2 Evaluating the influence of each neighbor Further using that !2 n  1X  e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) p n X j=1 = n X i=1 n X 1 p2 j=1 n X j=1 =  2 e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) + i=1 1 p2     e e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) Pe j ∈ e, i ∈ ē, j ∈ VkD (X` ) X 1≤i6=`≤n T1 + T2 , let us now successively deal with each of these two terms. Upper bound on T 1 First, we start by partitioning the sum over j depending on the rank of Xj as a neighbor of Xi in the whole sample (X1 , . . . , Xn ). It comes = n X n n h io2 X e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) j=1 i=1 = n X  n h io2 e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) + X  i=1 j∈Vk (Xi )  n h io2 e . Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) X j∈Vk+p (Xi )\Vk (Xi ) Then Lemma D.4 leads to X    e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) 2 X j∈Vk (Xi )  =k    e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) j∈Vk+p (Xi )\Vk (Xi ) j∈Vk (Xi ) ≤ X +  pn−p nn−1 pn−p nn−1 2 + 2 + X j∈Vk+p (Xi )\Vk (Xi )   pn−p e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) nn−1  p 2 n − p kp p − 1 p n − p =k , n n−1nn−1 n n−1 34 2 Performance of CV to estimate the risk of kNN where the upper bound results from P j a2j ≤ (maxj aj ) n n  1 XX  e T1 = 2 Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) p j=1 i=1 2 P j aj , for aj ≥ 0. It results     kn−p p 2 n−p 1 = · ≤ 2n k p n n−1 nn−1 Upper bound on T 2 Let us now apply the same idea to the second sum, partitioning the sum over j depending on the rank of j as a neighbor of ` in the whole sample. Then, T2 = ≤ n 1 X p2 1 p2 X     e e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) Pe j ∈ e, ` ∈ ē, j ∈ VkD (X` ) j=1 1≤i6=`≤n n X X X i=1 `6=i j∈Vk (X` ) n X X + 1 p2   pn−p e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) nn−1 X i=1 `6=i j∈Vk+p (X` )\Vk (X`   kp p − 1 e Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) · n n−1 We then apply Stone’s lemma (Lemma D.5) to get T2   n n i X X e 1 XX h p n − p kp p − 1  Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi )  = 2 1j∈Vk (X` ) + 1j∈Vk+p (X` )\Vk (X` p i=1 j=1 nn−1 n n−1 `6=i `6=i     n pn−p kp p − 1 k2 n − p p−1 1 X kp kγd + (k + p)γd = γd + (k + p) ≤ 2 p i=1 n nn−1 n n−1 n n−1 n−1   k2 p−1 = γd 1 + (k + p − 1) . 
n n−1 Gathering the upper bounds The two previous bounds provide n X j=1 ( )2 n  1X  e = T1 + T2 Pe j ∈ e, i ∈ ē, j ∈ VkD (Xi ) p i=1 kn−p k2 ≤ + γd nn−1 n   p−1 1 + (k + p − 1) , n−1 which enables to conclude n  X 2 bp (D) − R bp (Dj ) R j=1 2 ≤ n      p−1 8k 2 (1 + γd ) p−1 2 1 + 4k + 4k γd 1 + (k + p) ≤ 1 + (k + p) . n−1 n n−1 35 Celisse and Mary-Huard B.2.3 Generalized Efron-Stein inequality Then (D.4) provides for every q ≥ 1 h bp,n bp,n − E R R √ i 2q ≤ 4 κq s 8(1 + γd )k 2 n   p−1 1 + (k + p) . n−1 √ Hence combined with q! ≥ q q e−q 2πq, it comes   q  h i2q  2 p−1 q 8(1 + γd )k b b 1 + (k + p) E Rp,n − E Rp,n ≤ (16κq) n n−1  q  2 p−1 8(1 + γd )k 1 + (k + p) ≤ q! 16eκ . n n−1 h i 2 p−1 d )k The conclusion follows from Lemma D.2 with C = 16eκ 8(1+γ 1 + (k + p) n n−1 . Then for every t > 0,           2 nt bp,n − E R bp,n > t ∨ P E R bp,n − R bp,n > t ≤ exp − h i · P R p−1 2 1024eκk (1 + γd ) 1 + (k + p) n−1 B.3 Proof of Theorem 4.2 and Proposition 4.2 B.3.1 Proof of Theorem 4.2 If p < n/2 + 1: In what follows, we exploit a characterization of sub-Gaussian random variables by their 2q-th moments (Lemma D.2). From (3.3) and  (3.4) applied with 2q, and further introducing a constant ∆ = p √ C1 /2, C2 > 0, it comes for every q ≥ 1 4 e max  q  2 q h i 2q   ∆2 ∆ k2 k2 q b b E Rp,n − E Rp,n (2q) ≤ q! , (B.1) ≤ 16e n − p + 1 8 n−p+1 √ with q q ≤ q!eq / 2πq. Then Lemma D.2 provides for every t > 0    h i   h i  t2 b b b b P Rp,n − E Rp,n > t ∨ P E Rp,n − Rp,n > t ≤ exp −(n − p + 1) 2 2 . ∆ k If p ≥ n/2 + 1: This part of the proof relies on Proposition D.1 which provides an exponential concentration inequality from upper bounds on the moments of a random variable. j k n Let us now use (3.3) and (3.6) combined with (D.1), where C = n−p+1 , q0 = 2, and minj αj = 1/2. This provides for every t > 0  h h i i  n b b P Rp,n − E Rp,n > t ≤ e× n−p+1   !1/3     2   2 2 1 n t n t , √ , (n − p + 1) exp − min (n − p + 1)   2e n − p + 1 4Γ2 k k n − p + 1 4Γ2 k 2 36 Performance of CV to estimate the risk of kNN where Γ arises from Eq. (3.6). B.3.2 Proof of Proposition 4.2 As in the previous proof, the derivation of the deviation terms results from Proposition D.1. With the same notation and reasoning as in the jprevious k proof, let us combine (3.3) n and (3.6). From (D.2) of Proposition D.1 where C = n−p+1 , q0 = 2, and minj αj = 1/2, it results for every t > 0    v s   u i h 3/2 2e k k n u  b    3/2 bp,n > Γ k t + 2e j kt   ≤ e · e−t , P  Rp,n − E R t j n n (n − p + 1) n−p+1 n−p+1 where Γ > 0 is given by Eq. (3.6). 37 n−p+1 Celisse and Mary-Huard Appendix C. Proofs of deviation upper bounds C.1 Proof of Ineq. (5.3) in Theorem 5.1 The proof follows the same strategy as that of Theorem 2.1 in Rogers and Wagner (1978). Along the proof, we will repeatedly use some notation that we briefly introduce here. First, let us define Z0 = (X0 , Y0 ) and Zn+1 = (Xn+1 , Yn+1 ) that are independent copies of Z1 . Second to ease the reading of the proof, we also use several shortcuts: fbk (X0 ) = De n be AD k (X0 ), and fk (X0 ) = Ak (X0 ) for every set of indices e ∈ En−p (with cardinality n − p). Finally along the proof, e, e0 ∈ En−p denote two random variables which are sets of distinct indices with discrete uniform distribution over En−p . The notation Pe (resp. Pe,e0 ) means the integration is made with respect to the sample D and also the random variable e (resp. D and also the random variables e, e0 ). Ee [ · ] and Ee,e0 [ · ] are teh corresponding expectations. 
Note that the sample D and the random variables e, e0 are independent from each other, so that computing for instance Pe (i 6∈ e) amounts to integrating with respect to the random variable e only. C.1.1 Main part of the proof n With the notation Ln = L(AD k ), let us start from i h i i h h   bp,n Ln , bp2 (ADn ) + E L2n − 2E R bp,n − Ln )2 = E R E (R k let us notice that     E L2n = P fbk (X0 ) 6= Y0 , fbk (Xn+1 ) 6= Yn+1 , and h i   bp,n Ln = Pe fbk (X0 ) 6= Y0 , fbe (Xi ) 6= Yi | i ∈ E R / e Pe (i 6∈ e) . k It immediately comes i h bp,n − Ln )2 E (R i   h b2 (ADn ) − Pe fbk (X0 ) 6= Y0 , fbe (Xi ) 6= Yi | i ∈ / e Pe (i 6∈ e) (C.1) =E R p k k  i h    / e Pe (i ∈ / e) . + P fbk (X0 ) 6= Y0 , fbk (Xn+1 ) 6= Yn+1 − Pe fbk (X0 ) 6= Y0 , fbke (Xi ) 6= Yi | i ∈ (C.2) The proof then consists in successively upper bounding the two terms (C.1) and (C.2) of the last equality. Upper bound of (C.1) First, we have h i h i X bp2 (ADn ) = 0 0 0} p2 E R E 1 1 1 1 e e e,e b b {i ∈e} / {j ∈e / k {f (Xi )6=Yi } {f (Xj )6=Yj } k i,j = k i Ee,e0 1{fbe (Xi )6=Yi } 1{i∈e} / 1{fbe0 (Xi )6=Yi } 1{i∈e / 0} k k i h i X + Ee,e0 1{fbe (Xi )6=Yi } 1{i∈e} / 1{fbe0 (Xj )6=Yj } 1{j ∈e / 0} . X i6=j h k 38 k Performance of CV to estimate the risk of kNN Let us now introduce the five following events where we emphasize e and e0 are random variables with the discrete uniform distribution over En−p : Si0 = {i ∈ / e, i ∈ / e0 }, 1 Si,j = {i ∈ / e, j ∈ / e0 , i ∈ / e0 , j ∈ / e}, 2 Si,j = {i ∈ / e, j ∈ / e0 , i ∈ / e0 , j ∈ e}, 3 Si,j = {i ∈ / e, j ∈ / e0 , i ∈ e0 , j ∈ / e}, 4 Si,j = {i ∈ / e, j ∈ / e0 , i ∈ e0 , j ∈ e}. Then, i X   h  e e0 0 0 b b b2 (ADn ) = 0 P f (X ) = 6 Y , f (X ) = 6 Y |S p2 E R i i i i i Pe,e0 Si e,e p k k k i + 4 XX     0 ` ` Pe,e0 fbke (Xi ) 6= Yi , fbke (Xi ) 6= Yi |Si,j Pe,e0 Si,j i6=j `=1    0 = nPe,e0 fbke (X1 ) 6= Y1 , fbke (X1 ) 6= Y1 |S10 Pe,e0 S10 + n(n − 1) 4 X     0 ` ` Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 Pe,e0 S1,2 . `=1 Furthermore since " # 4   X   1 1 X ` 0 nPe,e0 S1 + n(n − 1) Pe,e0 S1,2 = 2 Pe,e0 i ∈ / e, j ∈ / e0 = 1, 2 p p i,j `=1 it comes  i  h bp2 (ADn ) − Pe,e0 fbk (X0 ) 6= Y0 , fbe (X1 ) 6= Y1 = E R k k n n(n − 1) A+ B, 2 p p2 (C.3) where h    i 0 A = Pe,e0 fbke (X1 ) 6= Y1 , fbke (X1 ) 6= Y1 | S10 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S10  × Pe,e0 S10 , 4 h    i X 0 ` ` Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 and B = `=1   ` × Pe,e0 S1,2 . • Upper bound for A: To upper bound A, simply notice that:    p 2 A ≤ Pe,e0 Si0 ≤ Pe,e0 i ∈ / e, i ∈ / e0 ≤ . n • Upper bound for B: To obtain an upper bound for B, one needs to upper bound     0 ` ` Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 , (C.4) which depends on `, i.e. on the fact that index 2 belongs or not to the training set e. 39 Celisse and Mary-Huard • If 2 6∈ e (i.e. ` = 1 or 3): Then, Lemma C.2 proves √ 4p k (C.4) ≤ √ · 2πn • If 2 ∈ e (i.e. ` = 2 or 4): Then, Lemma C.3 settles √ √ 8 k 4p k (C.4) ≤ √ +√ · 2π(n − p) 2πn Combining the previous bounds and Lemma C.1 leads to √ √ ! √ !     8 k 4p k  4p k  1 3 2 4 Pe,e0 S1,2 + Pe,e0 S1,2 + √ +√ Pe,e0 S1,2 + Pe,e0 S1,2 B≤ √ 2πn 2π(n − p) 2πn √         2 2√ p  2 p  1 3 2 4 ≤ √ k Pe,e0 S1,2 + Pe,e0 S1,2 + + Pe,e0 S1,2 + Pe,e0 S1,2 n n−p n π √      2 2√ p 2 0 2 4 ≤ √ Pe,e0 i ∈ / e, j ∈ /e + Pe,e0 S1,2 + Pe,e0 S1,2 k n n−p π √    2 2 √ p  p 2 2 (n − p)p2 (p − 1) (n − p)2 p2 ≤ √ + k + 2 n n n−p n2 (n − 1)2 n (n − 1)2 π √   2 2 √  p 2 p 2 ≤ √ k + . 
n n n−1 π Back to Eq. (C.3), one deduces i   h bp2 (ADn ) − Pe,e0 fbk (X0 ) 6= Y0 , fbe (X1 ) 6= Y1 = n A + n(n − 1) B E R k k p2 p2 √ √ 1 2 2 (p + 2) k ≤ + √ · n n π Upper bound of (C.2) First observe that     (−1) / e = Pe,e0 fbk (X0 ) 6= Y0 , fbke (Xn+1 ) 6= Yn+1 Pe,e0 fbk (X0 ) 6= Y0 , fbke (Xi ) 6= Yi | i ∈ (−1) where fbk is built on sample (X2 , Y2 ), ..., (Xn+1 , Yn+1 ). One has     /e P fbk (X0 ) 6= Y0 , fbk (Xn+1 ) 6= Yn+1 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (Xi ) 6= Yi | i ∈    (−1)  (X0 ) 6= Y0 , fbke (Xn+1 ) 6= Yn+1 = P fbk (X0 ) 6= Y0 , fbk (Xn+1 ) 6= Yn+1 − Pe,e0 fbk     (−1) ≤ P fbk (X0 ) 6= fbk (X0 ) + Pe,e0 fbke (Xn+1 ) 6= fbk (Xn+1 ) √ √ 4 k 4p k ≤ √ +√ , 2πn 2πn 40 Performance of CV to estimate the risk of kNN where we used Lemma D.6 again to obtain the last inequality. Conclusion: The conclusion simply results from combining bonds (C.1) and (C.2), which leads to  E bp,n − Ln R 2  √ √ 2 2 (2p + 3) k 1 ≤ √ + · n n π C.1.2 Combinatorial lemmas All the lemmas of the present section are proved with the notation introduced at the beginning of Section C.1. Lemma C.1. For any 1 ≤ i 6= j ≤ n, n−2  (n−2 (n−p ) n−p ) 1 × n , Pe,e0 Si,j = n (n−p ) (n−p ) Pe,e0 3 Si,j  n−p−1 (n−p n−2 ) (n−2 ) , = n (n−p ) (nn−p ) n−p  (n−p−1 ) ) (n−2 2 Pe,e0 Si,j = n−2 × , n n (n−p ) (n−p ) Pe,e0 4 Si,j  n−p−1 ) (n−2 ) (n−p−1 × n−2 · = n n (n−p ) (n−p ) Proof of Lemma C.1. Along the proof, we repeatedly exploit the independence of the random variables e and e0 , which are set of n − p distinct indices with the discrete uniform distribution over En−p . Note also that an important ingredient is that the probability of each one of the following events does not depend on the particular choice of the indices (i, j), but only on the fact that i 6= j. 1 Pe,e0 Si,j  = Pe,e0 i ∈ / e, j ∈ / e0 , i ∈ / e0 , j ∈ /e  Pe,e0 n−2  (n−p ) (n−2 n−p ) = Pe (i ∈ / e, j ∈ / e) Pe0 j ∈ × n · / e0 , i ∈ / e0 = n (n−p ) (n−p )   2 Si,j = Pe,e0 i ∈ / e, j ∈ / e0 , i ∈ / e0 , j ∈ e Pe,e0  (n−p−1 ) (n−p ) × n−2 · = Pe (i ∈ / e, j ∈ e) Pe0 j ∈ / e0 , i ∈ / e0 = n−2 n n (n−p ) (n−p )   3 Si,j = Pe,e0 i ∈ / e, j ∈ / e0 , i ∈ e0 , j ∈ /e Pe,e0  (n−p ) (n−p−1 n−2 ) = Pe (i ∈ / e, j ∈ / e) Pe0 j ∈ / e0 , i ∈ e0 = n−2 · n (n−p ) (nn−p )   4 Si,j = Pe,e0 i ∈ / e, j ∈ / e0 , i ∈ e0 , j ∈ e  (n−p−1 ) (n−p−1 ) = Pe (i ∈ / e, j ∈ e) Pe0 j ∈ / e0 , i ∈ e0 = n−2 × n−2 · n n (n−p ) (n−p ) 41 Celisse and Mary-Huard Lemma C.2. With the above notation, for ` ∈ {1, 3}, it comes √     0 4p k ` ` ≤√ − Pe fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 Pe fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 · 2πn Proof of Lemma C.2. First remind that as a test sample element Z0 cannot belong to either e or e0 . Consequently, an exhaustive formulation of     ` ` Pe fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 = Pe fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 . Then it results    (2)  ` ` Pe fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 = Pe fbk (X2 ) 6= Y2 , fbke (X1 ) 6= Y1 | S1,2 , (2) where fbk is built on sample (X0 , Y0 ), (X1 , Y1 ), (X3 , Y3 ), ..., (Xn , Yn ). Hence Lemma D.6 implies     0 ` ` − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2    (2)  0 ` ` − Pe,e0 fbk (X2 ) 6= Y2 , fbke (X1 ) 6= Y1 | S1,2 = Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 n o n n (2) o  o n 0 o  ` ` ≤ Pe,e0 fbke (X1 ) 6= Y1 4 fbke (X1 ) 6= Y1 | S1,2 + Pe,e0 fbk (X2 ) 6= Y2 4 fbke (X2 ) 6= Y2 | S1,2 √  (2)  4p k e0 ` b b · = Pe,e0 fk (X2 ) 6= fk (X2 ) | S1,2 ≤ √ 2πn Lemma C.3. 
With the above notation, for ` ∈ {2, 4}, it comes     0 ` ` Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 √ √ 4p k 8 k ≤√ +√ · 2π(n − p) 2πn Proof of Lemma C.3. As for the previous lemma, first notice that    (2)  e0 ` ` Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2 = Pe,e0 fbk (X2 ) 6= Y2 , fbk (X1 ) 6= Y1 | S1,2 , e0 where fbk is built on sample e with observation (X2 , Y2 ) replaced with (X0 , Y0 ). Then     0 ` ` Pe,e0 fbke (X1 ) 6= Y1 , fbke (X2 ) 6= Y2 | S1,2 − Pe,e0 fbk (X0 ) 6= Y0 , fbke (X1 ) 6= Y1 | S1,2     (2) e0 e e0 ` ` b b b b 0 0 = Pe,e fk (X1 ) 6= Y1 , fk (X2 ) 6= Y2 | S1,2 − Pe,e fk (X2 ) 6= Y2 , fk (X1 ) 6= Y1 | S1,2   n  n o n e0 o  o (2) e ` e0 ` b b b b 0 0 ≤ Pe,e fk (X1 ) 6= Y1 4 fk (X1 ) 6= Y1 | S1,2 + Pe,e fk (X2 ) 6= Y2 4 fk (X2 ) 6= Y2 | S1,2 √ √     e0 (2) 4p k 8 k e ` e0 ` b b b b +√ · = Pe,e0 fk (X1 ) 6= fk (X1 ) | S1,2 + Pe,e0 fk (X2 ) 6= fk (X2 ) | S1,2 ≤ √ 2π(n − p) 2πn 42 Performance of CV to estimate the risk of kNN C.2 Proof of Proposition 5.1 The bias of the L1O estimator is equal to h    i Dn−1 n E L AD − L A k k i i i h  h h Dn−1 n (X) − A (X) | X (X), X | X = −2E (η(X) − 1/2) E E AD (k+1) k k i i i h  h h Dn−1 n (X) | X (X), X | X (X) − A = −2E (η(X) − 1/2) E E AD (k+1) k k i  n h  Dn−1 n (0) | X(k+1) (0) = 0, X = 0 P X(k+1) (0) = 0 | X = 0 = 1/2 E AD k (0) − Ak i  h o Dn−1 n (0) − A (0) | X (0) = 1, X = 0 P X (0) = 1 | X = 0 +E AD (k+1) (k+1) k k i  n h  D n−1 n (1) | X (1) = 0, X = 1 P X(k+1) (1) = 0 | X = 1 (1) − A − 1/2 E AD (k+1) k k i  h o Dn−1 n , (1) − A (1) | X (1) = 1, X = 1 P X (1) = 1 | X = 1 +E AD (k+1) (k+1) k k where X(k+1) (x) denotes the k + 1-th neighbor of x. Then, a few remarks lead to simplify the above expression. • On the one hand it is easy to check that i h Dn−1 n (0) − A (0) | X (0) = 0, X = 0 E AD (k+1) k k i h Dn−1 n (1) − A (1) | X (1) = 1, X = 1 = 0, = E AD (k+1) k k since all of the k + 1 nearest neighbors share the same label. • On the other hand, let us notice h i Dn−1 n E AD (0) − A (0) | X (0) = 1, X = 0 (k+1) k k h i Dn−1 n = P AD (0) = 1, A (0) = 0 | X (0) = 1, X = 0 (k+1) k k h i Dn−1 n − P AD (0) = 0, A (0) = 1 | X (0) = 1, X = 0 . (k+1) k k n Then knowing X(k+1) (X) and X are not equal implies the only way for AD and k Dn−1 Ak to differ is that the numbers of k nearest neighbors of each label are almost equal, that is either equal to (k − 1)/2 or to (k + 1)/2 (k is odd by assumption). With N01 (respectively Ñ01 ) denoting the number of 1s among th k nearest neighbors of X = 0 among X1 , . . . , Xn (resp. X1 , . . . , Xn−1 ), the proof of Theorem 3 in Chaudhuri 43 Celisse and Mary-Huard and Dasgupta (2014) leads to i h Dn−1 n (0) = 0 | X (0) = 1, X = 0 (0) = 1, A P AD (k+1) k k h i = P n ∈ Vk (0), N01 = (k + 1)/2, Ñ01 = (k − 1)/2 | X(k+1) (0) = 1, X = 0 i h k = × P Ñ01 = (k − 1)/2 | N01 = (k + 1)/2, X(k+1) (0) = 1, X = 0 n  × P N01 = (k + 1)/2 | X(k+1) (0) = 1, X = 0       k k+1 k−1 k = ×P H , ; 1 = 1 · η1 × η̄ (k+1)/2 (1 − η̄)(k−1)/2 n 2 2 (k + 1)/2   k+1 k = × η1 × η̄ (k+1)/2 (1 − η̄)(k−1)/2 , 2n (k + 1)/2 where H(a, b; c) denotes a hypergeometric random variable with a successes in a population of cardinality a + b, and c draws, and η̄ = π0 η0 + (1 − π0 )η1 = 1/2. i h Dn−1 n (0) = 0, A (0) = 1 | X (0) = 1, X = 0 Following the same reasoning for P AD (k+1) k k and recalling that η0 = 0 and η1 = 1 by assumption, it results   h i k+1 k Dn−1 Dn E Ak (0) − Ak (0) | X(k+1) (0) = 1, X = 0 = − × (1/2)k . 
2n (k + 1)/2 • Similar calculations applied to X = 1 finally lead to    i k + 1  h    k Dn−1 Dn − L Ak = E L Ak × (1/2)k × P X(k+1) (0) = 1 | X = 0 2n (k + 1)/2   k+1 k = × (1/2)k × P [ B(n, 1/2) ≤ k ] . (k + 1)/2 2n • The conclusion then follows from considering k P [ B(n, 1/2) ≤ k ] ≥ 1/2, and also by noticing that ≥ n/2 which entails that √   k+1 k k k × (1/2) ≥ C , 2n (k + 1)/2 n where denotes a numeric constant independent of n and k. 44 Performance of CV to estimate the risk of kNN Appendix D. Technical results D.1 Main inequalities D.1.1 From moment to exponential inequalities Proposition D.1 (see also Arlot (2007), Lemma 8.10). Let X denote a real valued random variable, and assume there exist C ≥ 1, λ1 , . . . , λN > 0, and α1 , . . . , αN > 0 (N ∈ N∗ ) such that for every q ≥ q0 , !q N X q αi E [ |X| ] ≤ C λi q . i=1 Then for every t > 0, −(mini αi )e−1 minj P [ |X| > t ] ≤ Ceq0 minj αj e    t N λj  1 αj    , Furthermore for every x > 0, it results " αi #  N X ex ≤ Ceq0 minj αj · e−x . P |X| > λi minj αj (D.1) (D.2) i=1 Proof of Proposition D.1. By use of Markov’s inequality applied to |X|q (q > 0), it comes for every t > 0 !q PN αi λ q E [ |X|q ] i i=1 P [ |X| > t ] ≤ 1q≥q0 + 1q<q0 ≤ 1q≥q0 C + 1q<q0 . tq t P αi αi Now using the upperbound N  λi q ≤ N maxi {λi q } and choosing the particular value  1 i=1  αj t q̃ = q̃(t) = e−1 minj , one gets N λj     1 αi  q̃ αj t −α i minj N λj   maxi N λi e  + 1q̃<q P [ |X| > t ] ≤ 1q̃≥q0 C  0   t   −(mini αi ) e−1 minj    ≤ 1q̃≥q0 Ce t N λj  1 αj     + 1q̃<q0 , which provides (D.1).  α i P ex Let us now turn to the proof of (D.2). From t∗ = N λ combined with i i=1 minj αj x ∗ q = minj αj , it arises for every x > 0  α i  α i PN PN ex   PN λ −1 ex ∗ α λ e i i i i=1 i=1 minj αj minj αj i=1 λi (q )  αi ≤ max e−αk P  αi = e− mink αk . = P N N ex ex k t∗ λ λ i=1 i i=1 minj αj 45 i minj αj Celisse and Mary-Huard Then, PN ∗ αi i=1 λi (q ) t∗ C !q ∗ −(mink αk ) minx α ≤ Ce j j = Ce−x . Hence, " P |X| > N X i=1  λi ex minj αj αi # ≤ Ce−x 1q∗ ≥q0 + 1q∗ <q0 ≤ Ceq0 minj αj · e−x , since eq0 minj αj ≥ 1 and −x + q0 minj αj ≥ 0 if q < q0 . D.1.2 Sub-Gaussian random variables Lemma D.1 (Theorem 2.1 in Boucheron et al. (2013) first part). Any centered random 2 variable X such that P (X > t) ∨ P (−X > t) ≤ e−t /(2ν) satisfies   E X 2q ≤ q! (4ν)q . for all q in N+ . Lemma D.2 (Theorem 2.1 in Boucheron et al. (2013) second part). Any centered random variable X such that   E X 2q ≤ q!C q . 2 /(2ν) for some C > 0 and q in N+ satisfies P (X > t) ∨ P (−X > t) ≤ e−t with ν = 4C. D.1.3 The Efron-Stein inequality Theorem D.1 (Efron-Stein’s inequality Boucheron et al. (2013), Theorem 3.1). Let X1 , . . . , Xn be independent random variables and let Z = f (X1 , . . . , Xn ) be a squareintegrable function. Then Var(Z) ≤ n X h i E (Z − E [Z | (Xj )j6=i ])2 = ν. i=1 Moreover if X10 , . . . , Xn0 denote independent copies of X1 , . . . , Xn and if we define for every 1≤i≤n  Zi0 = f X1 , . . . , Xi0 , . . . , Xn , then n ν= 2 i 1X h E Z − Zi0 . 2 i=1 46 Performance of CV to estimate the risk of kNN D.1.4 Generalized Efron-Stein’s inequality Theorem D.2 (Theorem 15.5 in Boucheron et al. (2013)). Let X1 , . . . , Xn n independent random variables, f : Rn → R a measurable function, and define ζ = f (X1 , . . . , Xn ) and 0 0 ζi0 = f (Xh1 , . . . , Xi0 , . . . , Xn ), with copies of Xii . Furthermore let i X1 , . . . , Xn independent hP    Pn  2 n 0 0 ) 2 | X n . 
Then there exists n (ζ − ζ V+ = E ) (Z − Z | X and ζ = E − 1 1 i i i i + − a constant κ ≤ 1, 271 such that for all q in [2, +∞[, q q (ζ − Eζ)− q ≤ 2κq kV− kq/2 . (ζ − Eζ)+ q ≤ 2κq kV+ kq/2 , and Corollary D.1. With the same notation, it comes v v u X u X p p u n u n 2 0 t kζ − Eζkq ≤ 2κq (ζ − ζi ) (ζ − E [ζ | (Xj )j6=i ])2 ≤ 4κq t i=1 i=1 q/2 . (D.3) q/2 Moreover considering ζ (j) = f (X1 , . . . , Xj−1 , Xj+1 , . . . , Xn ) for every 1 ≤ j ≤ n, it results r  Pn √ (j) 2 . (D.4) kζ − Eζkq ≤ 2 2κq i=1 ζ − ζ q/2 D.1.5 McDiarmid’s inequality Theorem D.3. Let X1 , ..., Xn be independent random variables taking values in a set A, and assume that f : An → R satisfies sup x1 ,...,xn ,x0i f (x1 , ..., xi , ..., xn ) − f (x1 , ..., x0i , ..., xn ) ≤ ci , 1 ≤ i ≤ n . Then for all ε > 0, one has 2/ Pn 2/ Pn P (f (X1 , ..., Xn ) − E [f (X1 , ..., Xn )] ≥ ε) ≤ e−2ε P (E [f (X1 , ..., Xn )] − f (X1 , ..., Xn ) ≥ ε) ≤ e−2ε 2 i=1 ci 2 i=1 ci A proof can be found in Devroye et al. (1996) (see Theorem 9.2). D.1.6 Rosenthal’s inequality Proposition D.2 (Eq. (20) in Ibragimov and Sharakhmetov (2002)). Let X1 , . . . , Xn denote independent real random variables with symmetric distributions. Then for every q > 2 and γ > 0,  v q  " n u n q# n  X X X u    ≤ B(q, γ) γ E [ |Xi |q ] ∨ t E Xi E Xi2  ,   i=1 i=1 i=1 where a ∨ b = max(a, b) (a, b ∈ R), and B(q, γ) denotes a positive constant only depending on q and γ. Furthermore, the optimal value of B(q, γ) is given by B ∗ (q, γ) = = q | ] 1 + E[ |N , if γ q −q/(q−1) 0 γ E [ |Z − Z | ] , if 47 2 < q ≤ 4, 4 < q, Celisse and Mary-Huard where N denotes a standard Gaussian variable, and Z, Z 0 are i.i.d. random variables with  1/(q−1)  Poisson distribution P γ 2 . Proposition D.3. Let X1 , . . . , Xn denote independent real random variables with symmetric distributions. Then for every q > 2,  v q  " n u n q# n  X  √ q X X u    √ E [ |Xi |q ] , ( q)q t E ≤ 2 2e max q q Xi E Xi2  .   i=1 i=1 i=1 Proof of Proposition D.3. From Lemma D.3, let us observe • if 2 < q ≤ 4, choosing γ = 1 provides  √ √ q B ∗ (q, γ) ≤ 2 2e q . • if 4 < q, γ = q (q−1)/2 leads to q  √ q  √ √ q  q ∗ −q/2 1/2 B (q, γ) ≤ q 4eq q + q ≤ q −q/2 8eq = 2 2e q . Plugging the previous upper bounds in Rosenthal’s inequality (Proposition D.2), it results for every q > 2  v q  " n u n q# n √ X  √ √ q X X u    E E [ |Xi |q ] , t Xi E Xi2  . ≤ 2 2e q max ( q)q   i=1 i=1 i=1 Lemma D.3. With the same notation as Proposition D.2 and for every γ > 0, it comes • for every 2 < q ≤ 4, √ √ q 2e q B (q, γ) ≤ 1 + , γ ∗ • for every 4 < q, ∗ B (q, γ) ≤ γ −q/(q−1) q   q 1/(q−1) 4eq γ +q . Proof of Lemma D.3. If 2 < q ≤ 4, E [ |N |q ] B ∗ (q, γ) = 1 + ≤1+ γ √ √ 2e q γ √ q q2 e 48 ≤1+ q√ q 2e e γ q q2 e √ √ q 2e q =1+ , γ Performance of CV to estimate the risk of kNN by use of Lemma D.9 and If q > 4, √ q 1/q ≤ √ e for every q > 2. q Z − Z0  iq/2 √ h q  1/(q−1) ≤ γ −q/(q−1) 2q/2+1 e q γ +q e  iq/2 h  √ √ q q q ≤ γ −q/(q−1) 2q/2 2e e γ 1/(q−1) + q e q  h   iq/2  q −q/(q−1) 1/(q−1) −q/(q−1) 1/(q−1) ≤γ 4eq γ +q =γ 4eq γ +q , B ∗ (q, γ) = γ −q/(q−1) E  applying Lemma D.11 with λ = 1/2γ 1/(q−1) . D.2 Technical lemmas D.2.1 Basic computations for resampling applied to the kNN algorithm Lemma D.4. For every 1 ≤ i ≤ n and 1 ≤ p ≤ n, one has p Pe (i ∈ ē) = , n n X kp , Pe [ i ∈ ē, j ∈ Vke (Xi ) ] = n (D.5) (D.6) j=1 X Pe [ i ∈ ē, j ∈ Vke (Xi ) ] = k<σi (j)≤k+p kp p − 1 · n n−1 (D.7) Proof of Lemma D.4. The first equality is straightforward. The second one results from simple calculations as follows. 
   −1 X n n  −1 X n X X X n n Pe [ i ∈ ē, j ∈ Vke (Xi ) ] = 1i∈ē 1j∈Vke (Xi ) = 1i∈ē  1j∈Vke (Xi )  p p e e j=1 j=1 j=1 !  −1 X n p = 1i∈ē k = k . n p e For the last equality, let us notice every j ∈ Vi satisfies Pe [ i ∈ ē, j ∈ Vke (Xi ) ] = Pe [ j ∈ Vke (Xi ) | i ∈ ē ] Pe [ i ∈ ē ] = n−1p , n−pn hence X k<σi (j)≤k+p Pe [ i ∈ ē, j ∈ Vke (Xi ) ] = n X Pe [ i ∈ ē, j ∈ Vke (Xi ) ] − j=1 =k X σi (j)≤k p n−1p p p−1 −k =k · n n−pn nn−1 49 Pe [ i ∈ ē, j ∈ Vke (Xi ) ] Celisse and Mary-Huard D.2.2 Stone’s lemma Lemma D.5 (Devroye et al. (1996), Corollary 11.1, p. 171). Given n points (x1 , ..., xn ) in Rd , any of these points belongs to the k nearest neighbors of at most kγd of the other points, where γd increases on d. D.2.3 Stability of the kNN classifier when removing p observations Lemma D.6 (Devroye and Wagner (1979b), Eq. (14)). For every 1 ≤ k ≤ n, let Ak denote k-NN classification algorithm defined by Eq. (2.1), and let Z1 , . . . , Zn denote n i.i.d. random variables such that for every 1 ≤ i ≤ n, Zi = (Xi , Yi ) ∼ P . Then for every 1 ≤ p ≤ n − k, √ 4 p k P [ Ak (Z1,n ; X) 6= Ak (Z1,n−p ; X) ] ≤ √ , 2π n where Z1,i = (Z1 , . . . , Zi ) for every 1 ≤ i ≤ n, and (X, Y ) ∼ P is independent of Z1,n . D.2.4 Exponential concentration inequality for the L1O estimator Lemma D.7 (Devroye et al. (1996), Theorem 24.4). For every 1 ≤ k ≤ n, let Ak denote b1 (·) denote the L1O estimator k-NN classification algorithm defined by Eq. (2.1). Let also R defined by Eq. (2.2) with p = 1. Then for every ε > 0,    h i  2 ε b1 (Ak , Z1,n ) − E R b1 (Ak , Z1,n ) > ε ≤ 2 exp −n . P R γd2 k 2 D.2.5 Moment upper bounds for the L1O estimator Lemma D.8. For every 1 ≤ k ≤ n, let Ak denote k-NN classification algorithm defined by b1 (·) denote the L1O estimator defined by Eq. (2.2) with p = 1. Then Eq. (2.1). Let also R for every q ≥ 1, !  2 q h i 2q  (kγ ) d b1 (Ak , Z1,n ) − E R b1 (Ak , Z1,n ) E R ≤ q! 2 . (D.8) n The proof is straightforward from the combination of Lemmas D.1 and D.7. D.2.6 Upper bound on the optimal constant in the Rosenthal’s inequality Lemma D.9. Let N denote a real-valued standard Gaussian random variable. Then for every q > 2, one has √ √  q  2q . E [ |N |q ] ≤ 2e q e Proof of Lemma D.9. If q is even (q = 2k > 2), then r Z +∞ Z +∞ 2 x2 2 q q 1 − x2 √ x xq−2 e− 2 dx E [ |N | ] = 2 e dx = (q − 1) π 2π 0 r0 r 2 (q − 1)! 2 q! = = · π 2k−1 (k − 1)! π 2q/2 (q/2)! 50 Performance of CV to estimate the risk of kNN Then using for any positive integer a  a a  a a √ √ 2πa < a! < 2eπa , e e it results √ q! < 2e e−q/2 q q/2 , q/2 2 (q/2)! which implies r   q e q q/2 √ √  q  2 E [ |N | ] ≤ 2 < 2e q · π e e q If q is odd (q = 2k + 1 > 2), then r Z +∞ r Z +∞ 2 √ q −t dt 2 2 q q − x2 dx = 2t e √ , E [ |N | ] = x e π 0 π 0 2t √ by setting x = 2t. In particular, this implies r Z +∞ r r   √ √  q  2q 2 2 k 2 q−1 q − 1 q k −t 2 E [ |N | ] ≤ . (2t) e dt = 2 k! = 2 ! < 2e q π 0 π π 2 e Lemma D.10. Let S denote a binomial random variable such that S ∼ B(k, 1/2) (k ∈ N∗ ). Then for every q > 3, it comes r q √ √ qk q E [ |S − E [ S ]| ] ≤ 4 e q · 2e Proof of Lemma D.10. Since S − E(S) is symmetric, it comes q Z +∞ E [ |S − E [ S ]| ] = 2 h P S < E[S ] − t 1/q i +∞ Z P [ S < E [ S ] − u ] uq−1 du. dt = 2q 0 0 p Using Chernoff’s inequality and setting u = k/2v, it results q Z E [ |S − E [ S ]| ] ≤ 2q +∞ r 2 q−1 − uk u e 0 du = 2q k 2 q Z +∞ v q−1 e− v2 2 dv. 
0 If q is even, then q −1 > 2 is odd and the same calculations as in the proof of Lemma D.9 apply, which leads to r q r q r q r q  q q/2 √ √ √ √ k q/2  q  k q/2 √ qk qk q E [ |S − E [ S ]| ] ≤ 2 2 !≤2 2 πeq = 2 πe q <4 e q · 2 2 2 2e 2e 2e 51 Celisse and Mary-Huard If q is odd, then q − 1 > 2 is even and another use of the calculations in the proof of Lemma D.9 provides r q r q (q − 1)! q! k k q E [ |S − E [ S ]| ] ≤ 2q =2 q−1 q−1 . (q−1)/2 (q−1)/2 2 2 2 2 2 ! 2 ! Let us notice q! 2(q−1)/2 q−1 2 ! √ 2πeq ≤ q q e q q √ r q e = 2e q − 1  q−1 (q−1)/2 (q−1)/2  p 2(q−1)/2 π(q − 1) q−1 2e (q−1)/2  r   √ q q (q+1)/2 q = 2e q−1 e q−1 e and also that r q q−1  q q−1 (q−1)/2 ≤ √ 2e. This implies q! 2(q−1)/2 q−1 2 ! ≤ 2e  q (q+1)/2 e √ √  q q/2 =2 e q , e hence r q E [ |S − E [ S ]| ] ≤ 2 q √ √ k √ √  q q/2 =4 e q 2 e q 2 e r qk 2e q · Lemma D.11. Let X, Y be two i.i.d. random variables with Poisson distribution P(λ) (λ > 0). Then for every q > 3, it comes iq/2 √ hq (2λ + q) E [ |X − Y |q ] ≤ 2q/2+1 e q . e Proof of Lemma D.11. Let us first remark that E [ |X − Y |q ] = EN [ E [ |X − Y |q | N ] ] = 2q EN [ E [ |X − N/2|q | N ] ] , where N = X + Y . Furthermore, the conditional distribution of X given N = X + Y is a binomial distribution B(N, 1/2). Then Lemma D.10 provides that r q √ √ qN q E [ |X − N/2| | N ] ≤ 4 e q a.s. , 2e which entails that " q q E [ |X − Y | ] ≤ 2 EN √ √ 4 e q r qN 2e q 52 # q/2+2 √ =2 √ e q r q h i q EN N q/2 . e Performance of CV to estimate the risk of kNN It only remains to upper bound the last expectation where N is a Poisson random variable P(2λ) (since X, Y are i.i.d. ): h i p EN N q/2 ≤ EN [ N q ] by Jensen’s inequality. Further introducing Touchard polynomials and using a classical upper bound, it comes v v u q     q h i u X uX u q 1 1 q q−i q/2 t t q−i i i i ≤ q EN N ≤ (2λ) (2λ) 2 i 2 i i=1 i=0 v r u q   u1 X q −1 1 (2λ)i q q−i = =t (2λ + q)q = 2 2 (2λ + q)q/2 . i 2 2 i=0 Finally, one concludes q E [ |X − Y | ] ≤ 2 q/2+2 √ √ e q r q iq/2 q −1 √ hq 2 2 (2λ + q)q/2 < 2q/2+1 e q (2λ + q) . e e 53
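As a purely illustrative complement to the proofs above (not part of the original derivations), the following minimal Python sketch approximates the leave-p-out estimator of the kNN classification error by Monte Carlo, sampling random training subsets e of size n − p instead of enumerating all of them. The hand-rolled kNN rule, the binary 0/1 labels, the Euclidean metric and the number of sampled subsets are simplifying assumptions made only for this example.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Majority vote among the k nearest training points (Euclidean distance, k odd)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(y_train[nearest].sum() * 2 > k)   # binary 0/1 labels assumed

def lpo_risk_mc(X, y, k, p, n_subsets=200, seed=0):
    """Monte Carlo approximation of the leave-p-out estimator: average, over random
    training subsets e of size n - p, of the misclassification rate on the p
    held-out points (the exact estimator averages over all subsets in E_{n-p})."""
    rng = np.random.default_rng(seed)
    n = len(X)
    errors = []
    for _ in range(n_subsets):
        perm = rng.permutation(n)
        train, test = perm[: n - p], perm[n - p:]
        for i in test:
            errors.append(knn_predict(X[train], y[train], X[i], k) != y[i])
    return float(np.mean(errors))

# toy usage with synthetic binary data
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=60) > 0).astype(int)
print(lpo_risk_mc(X, y, k=5, p=10))
```

Averaging over every subset in E_{n−p} would give the exact estimator; the random-subset average above only illustrates the quantity whose concentration and deviation are bounded in the preceding appendices.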
Italian Association of Aeronautics and Astronautics XXII Conference Napoli, 9-12 September 2013 MULTIDISCIPLINARY OPTIMIZATION FOR GAS TURBINES DESIGN F. Bertini1, L. Dal Mas2, L. Vassio3 and E. Ampellio3* 1 2 3 AVIO S.p.A., R&TD, 10040 – Rivalta di Torino (TO), Italy Università di Padova, Dipartimento di Ingegneria Industriale, 35100 – Padova (PD), Italy Politecnico di Torino, Dipartimento di Ingegneria Meccanica e Aerospaziale, 10129 – Torino (TO), Italy *[email protected] ABSTRACT State-of-the-art aeronautic Low Pressure gas Turbines (LPTs) are already characterized by high quality standards, thus they offer very narrow margins of improvement. Typical design process starts with a Concept Design (CD) phase, defined using mean-line 1D and other loworder tools, and evolves through a Preliminary Design (PD) phase, which allows the geometric definition in details. In this framework, multidisciplinary optimization is the only way to properly handle the complicated peculiarities of the design. The authors present different strategies and algorithms that have been implemented exploiting the PD phase as a real-like design benchmark to illustrate results. The purpose of this work is to describe the optimization techniques, their settings and how to implement them effectively in a multidisciplinary environment. Starting from a basic gradient method and a semi-random second order method, the authors have introduced an Artificial Bee Colony-like optimizer, a multi-objective Genetic Diversity Evolutionary Algorithm [1] and a multiobjective response surface approach based on Artificial Neural Network, parallelizing and customizing them for the gas turbine study. Moreover, speedup and improvement arrangements are embedded in different hybrid strategies with the aim at finding the best solutions for different kind of problems that arise in this field. Keywords: optimization, low pressure turbine, evolutionary algorithms, CFD Q3D analysis 1 INTRODUCTION Recent international regulations like ACARE [2] for Europe and ICAO [3] worldwide impose to reduce fuel consumption, pollutant and noise emissions for greener aircraft and engines. Focusing on propulsion system, the intensive application and tuning of high-performance and multi-objective optimization strategies is imperative. Especially as regards Ultra High Bypass Turbofan (UHBT), the design of Low Pressure Turbine (LPT) is a critical and challenging task, since this component has a great impact on Specific Fuel Consumption and only a multidisciplinary approach can allow obtaining the best configuration. Low pressure turbine module is a very complex system, made of several physical components and operating in a fluid environment difficult to be numerically simulated in its completeness. Therefore, the design of a LPT is an intrinsically multidisciplinary problem, 1 normally characterized by a high dimensionality in terms of both free-to-change parameters, and objective functions. Since the early CD phase, there are many important characteristics that need to be evaluated at the same time (e.g. aerodynamic performances, structural properties, acoustic behaviour and different heterogeneous constraints). The correct definition of the optimization problem is very important, given that the costs of changes in a project are higher when advancing in the design process and available degrees of freedom are fewer. In general, input/output definitions and computational tools become more and more detailed and specific moving from CD to Detailed Design (DD). 
For example, in concept design, only fundamental LPT traits are considered, like flow path, blade number and work split in order to respect mechanical and aero-acoustic requirements through mean-line 1D correlation-based solvers. On the other hand, in DD some deeply specialized characteristics, like local geometries of tip clearance or non-axisymmetric endwall contouring are optimized in order to essentially refine the overall performance, using 3D full Navier-Stokes multistage unsteady solvers. From these simple considerations, it is apparent that many kind of optimization problems can arise during the design of a LPT. Quasi mono-objective problems with huge and jagged domain of parameters need to be addressed, as well as inherently multi-objective ones with strongly contrasting targets, but easy-knowable boundaries to set for feasibility. In this scenario, a very large spectrum of optimization algorithms and methodologies, each having its potentials and limitations, can be applied. While some of them are general and easy to be used, others need to be properly tuned in order to successfully complete the optimization. Therefore, the perfect knowledge of the physical/engineering problem to be solved is required to choose the best algorithm or methodology. To emphasize what has been said so far, the authors will introduce and analyse different optimization strategies, from the standard to the forefront ones. Aeronautical LPTs optimization in PD phase represents a real-like test case for meaningful comparisons. The final aim of the paper is to provide a straightforward compendium of modern cutting edge techniques easily extendable and applicable to many research areas. 2 DESIGN PROCESS This work fits in the standard procedure for aero design of aeronautic Low Pressure Turbines currently adopted by AVIO S.p.A.. Figure 1 reports the complete framework which includes both the CD and the PD phases. Each block of the flowchart was thoroughly described in [4]. First of all, the Concept Design phase is basically carried out using a 1D Meanline Multidisciplinary solver, embedded in the in-house tool called ‘Tùrbine’ together with various optimization methods herein discussed. This tool is able to manage the overall turbine layout by defining some fundamental parameters like operating thermodynamic cycle, encumbrance geometries and flow path. ‘Tùrbine’ calculates velocity triangles by solving Euler equations and it exploits revised classical loss correlations for performance estimation [5], finally building the preliminary turbine cross section. Within ‘Tùrbine’, it is also possible to carried out mono and multi–objective optimizations by properly setting the design variables and objectives. 2D solution represents the second step in CD. This part of the design is aimed at optimizing the preliminary spanwise work and outlet aerodynamic angle distribution, while meanline values are maintained constant. The through-flow computations follow the same approach of meanline ones, except that many streamline are considered along the radial extension of the flow path. Blade shape is realized through the identification of a discrete number of sections, whose airfoils are parametrically defined. The 3D geometry of each blade can be automatically generated using the data from 1D meanline or 2D spanwise solution, or even 2 including the benefits obtained from Q3D optimizations during PD phase described below. Database corrections, inverse correlation and free vortex principles are applied. 
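To make the mean-line step more concrete, here is a minimal sketch of the kind of velocity-triangle bookkeeping a 1D solver of this type performs, based only on the Euler work equation for an axial rotor at constant mean radius. The function names and numerical values are invented for the example and do not come from the in-house 'Tùrbine' tool, which additionally applies the loss correlations discussed above.

```python
import math

def velocity_triangle(cx, u, c_theta):
    """Absolute and relative flow angles (deg) for one station of an axial stage."""
    alpha = math.degrees(math.atan2(c_theta, cx))       # absolute flow angle
    beta = math.degrees(math.atan2(c_theta - u, cx))    # relative flow angle
    return alpha, beta

def euler_stage_work(u, c_theta_in, c_theta_out):
    """Euler work equation for an axial rotor at constant mean radius: w = U * dC_theta."""
    return u * (c_theta_in - c_theta_out)

# toy numbers (illustrative only, not a real LPT design point)
cx, u = 120.0, 180.0                     # axial velocity and blade speed, m/s
c_theta_in, c_theta_out = 260.0, -40.0   # swirl upstream/downstream of the rotor, m/s
a1, b1 = velocity_triangle(cx, u, c_theta_in)
a2, b2 = velocity_triangle(cx, u, c_theta_out)
w = euler_stage_work(u, c_theta_in, c_theta_out)   # 54 kJ/kg of specific work
print(f"alpha1={a1:.1f}, beta1={b1:.1f}, alpha2={a2:.1f}, beta2={b2:.1f} deg, w={w/1e3:.1f} kJ/kg")
```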
During the PD phase, the geometric refinement of 3D airfoils is achieved with the help of multidisciplinary optimization. Since employing directly three-dimensional simulations would be expensive in terms of both computational resources and time, few reference sections are usually studied by means of faster Q3D analyses [12]. In order to interpret the physical 3D phenomena that got lost in the numerical model simplification (e.g. secondary vortex effects), the flow channel diffusion has to be adjusted according to the 3D results. This operation is normally referred to as “fitting”. Multi-row steady and unsteady calculation are available to rearrange velocity triangles and stator/rotor interaction using CFD; single row simulation are introduced for perfecting airfoils geometries. In both cases, at least aerodynamic and structural requirements have always to be satisfied. CFD 3D multistage complete calculations [13] are performed after every optimization cycle to confirm the improvement made and obtain fully trustful reference results. Figure 1: Typical AVIO S.p.A. aereo-design process for LPTs 3 OPTIMIZATION ALGORITHMS As introduced in the first two chapters, various optimization methods are necessary in the different phases of the LPT design. In this section some valuable algorithms, used for comparisons in the next chapter, will be briefly outlined. They are presented, without loss of generality, as possible solution of a minimization problem. These algorithms were all implemented by the authors, with the only exception of the GeDEA, which was developed by the University of Padova. 3.1 Gradient Descent (GD) Gradient Descent (GD or Steepest Descent) is a classical first-order iterative optimization algorithm for continuous differentiable functions [6]. It often represents a good first try for optimization methods thanks to its simplicity and velocity. To proceed towards the minimum from the current point, the method moves in the opposite direction of the gradient evaluated in the point. In the case where an analytical characterization of the gradient is not feasible, an estimation of it should be computed. In this work, in order to reduce the number of function evaluations, is used a forward finite difference method with first order accuracy, performed in 3 parallel with as many evaluations as are the free parameters. Once the direction of improvement is found, the step size has to be accurately chosen to reach quickly a local minimum. In the studied case a line optimization search over the step size, implemented in parallel, is performed. A negative aspect of GD is that there is no space investigation, following greedily the steepest descent to a close local minimum. Moreover, if the functions studied are noisy under a certain step threshold, the estimation of derivatives is hard to calibrate. Notice also that this algorithm is totally deterministic, not depending from random components, and automatically stops itself when no improvement direction is found. 3.2 Semi-Random Walk with second order Interpolation (IRW) The algorithm proposed is an improvement of a simple random walk [6] over the parameters space. Iteratively from the best point found, a random step is performed and, if it is not an improvement, the opposite step is tried. If even such step is not an improvement then a simple estimation of the directional curvature of the function is computed through a second order interpolation of the three previous points. Then a new point, minimum of such parabola, is tested. 
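Before discussing the merits of these two local schemes, a minimal sketch (under stated assumptions) of the §3.1 gradient step and of the second-order interpolation used by the semi-random walk may help. Thread-based parallelism stands in here for the parallel Q3D evaluations, and the candidate-step line search is just one simple way to implement the step-size selection; none of this is the in-house implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fd_gradient(f, x, h=1e-3):
    """Forward finite-difference gradient; the n perturbed points are evaluated in parallel."""
    points = [x + h * np.eye(len(x))[i] for i in range(len(x))]
    with ThreadPoolExecutor() as pool:
        f_pts = list(pool.map(f, points))
    return (np.array(f_pts) - f(x)) / h

def line_search(f, x, direction, steps):
    """Evaluate a set of candidate step sizes (in parallel) and keep the best one."""
    candidates = [x + s * direction for s in steps]
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(f, candidates))
    return candidates[int(np.argmin(values))]

def parabolic_step(t, y):
    """Vertex of the parabola through three (step, value) pairs: the second-order
    interpolation tried by the semi-random walk when both trial steps fail."""
    a, b, _ = np.polyfit(t, y, 2)
    return -b / (2 * a) if a > 0 else t[int(np.argmin(y))]

# toy usage of the gradient scheme on a smooth 2D function
f = lambda x: (x[0] - 1) ** 2 + 3 * (x[1] + 2) ** 2
x = np.zeros(2)
for _ in range(20):
    g = fd_gradient(f, x)
    x = line_search(f, x, -g, steps=np.linspace(0.01, 0.5, 8))
print(x, f(x))   # approaches the minimum at (1, -2)
```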
With this procedure the random walk can exit from local minima more easily than the GD. Moreover, no particular experience of the problem is required to implement the procedure. On the other hand, this algorithm is very slow, because the totally random component prevents the data history from being used smartly to find a direction of improvement. Additionally, it cannot be effectively parallelized due to the serial nature of the walk.

3.3 Artificial super-Bee enhanced Colony (AsBeC)
Artificial Bee Colony (ABC) is one of the most recent and promising evolutionary techniques, inspired by the intelligent behaviour of honey bees. Originally introduced in 2005 by [7], it is now widespread and studied in many research fields for its simplicity, effectiveness and robust behaviour. This algorithm embraces the principles of Particle Swarm Optimization (PSO) and Differential Evolution (DE). ABC provides a population-based search modus operandi in which bees fly around a multidimensional space. A group of bees searches for food in the space randomly, while another one moves depending on the experience of all the nest mates. The food sources with higher nectar are constantly updated, memorized and exploited to look around, while poorer ones are instead abandoned. Hence the ABC system combines local and global search methods. Several types of upgrade have already been proposed over the years to improve the speed, the quality and/or the explorative ability of the swarm. An example of current use in turbomachinery is reported in [8]. The authors developed and used an improved version of the original algorithm, called Artificial super-Bee enhanced Colony (AsBeC), which will be the subject of a dedicated in-depth technical article. In short, AsBeC is a parallelized, hybrid and enhanced scheme. The bee movement here follows the aforementioned semi-random second-order method (super-Bee principle). Bee repositioning is based on both nectar amount evolution and space-filling properties. Moreover, analyses of identical configurations are avoided. Finally, the objective function evaluations are executed in parallel at each swarm movement iteration. AsBeC is easy to handle thanks to the small number of tuneable characteristic parameters. As in the IRW, it still wastes many time-expensive simulations, due to its implicit random nature.

3.4 Genetic algorithm (GeDEA)
Genetic algorithms (GAs) are a particular class of evolutionary algorithms inspired by natural evolution. Nowadays they are widely used for the solution of multi-objective optimization problems, since they tend to be good at finding generally good global solutions. The Genetic Diversity Evolutionary Algorithm (GeDEA) [1] was developed at the University of Padova (UNIPD) and was implemented within the AVIO procedures for the optimization of LPTs, under a collaborative project between the university and the company. The GeDEA starts by creating a random population of a given number of individuals. A mating pool is created from the initial population, then the offspring is generated by crossover and mutation. The presence of clones is automatically avoided by the algorithm, which substitutes the copies with randomly generated individuals, thus encouraging the exploration of the search space. The new individuals are processed to evaluate the objective functions. The algorithm estimates the genetic diversity of each individual of the current population with respect to the others, by means of a common distance metric.
Then, the parents and the offspring are ranked using the non-dominating sorting procedure. Therefore, the GeDEA provides a dual selection pressure towards both the Pareto optimal set and the maintenance of the diversity within the population. According to the assigned ranks, the best solutions are selected for survival and for the generation of the successive mating pool, while the other individuals are suppressed. It is worth noting that the original version of the algorithm was partially modified to guarantee the survival of those individual characterized by the highest weighted sum of the selected objective functions (elitism). The optimization loop continues until the number of generations prescribed by the user is reached. At the end of the multi-objective optimization process, the algorithm provides a set of optimal solutions, according to the Pareto concept, among which the user can choose the one that is more suitable for his needs. The algorithm is easily parallelizable, since each individual belonging to a specific generation can be analysed separately from the other. Another advantage of GAs is the widespread exploration of the feasible design space, the bounds of which should be accurately defined in order for the algorithm to work properly and efficiently. 3.5 Artificial Neural Network (ANN) applied to Latin Optimal Hypercube (LOH) sampling Artificial Neural Networks are mathematical models inspired by biological neural networks and are used to model complex relationships between inputs and outputs [9]. An ANN is formed by an interconnected cluster of artificial computational units, called neurons, that process information through signal alteration and recombination. Once the architecture of the network is set, the model adapts itself during a training phase. In the studied case, the ANN is trained by the standard back-propagation gradient-based procedure [9]. Thanks to the training phase an ANN can estimate outputs corresponding to new inputs never tried before, acting as a ‘black-box’. The incredible speed in outcomes computed through the network is one of its strengths, with respect to the direct simulation of the emulated phenomenon. The scheme developed by the authors is intended to use the ANN as a fast surrogate method for the estimation of the objective functions, without using direct Q3D analyses. Therefore, ANN approach permits to manage multi-objective optimizations and leads easily to an acceptable approximation of the Pareto Front. As in the GeDEA algorithm, the choice of many parameters still remain hard and a certain experience on initial space boundaries is needed. The key point for obtaining good results is the selection of the training points: they are taken from a well-distributed sampling, the LOH [10], within the investigation ranges. The 5 dimensions of the hypercube has been chosen to suitably address the optimization problem with the aim at maximizing quality to computation resources ratio. However, high dimensionality in parameters would be a limitation for the velocity of this scheme. Best Pareto points suggested by the network are then directly simulated and used to perform subsequent refinement cycles of LOHs+ANNs, reducing investigation ranges nearby lowest weighted sum point. 3.6 Accelerating techniques Some important accelerating technologies have been implemented within the aforementioned optimization algorithms. 
In addition to probabilistic choice in parameter modification and range resize [11], the authors developed a prediction routine that works as a trend analyser. This routine exploits the knowledge about the history of the best solutions found and how the variables changed: given the last best solution values and its relative parameters, the algorithm guesses a new point to try, computing a weighted average of the last directions of improvement. Such weights depends both on the quality of the improvements and on the proximity to the current solution. This accelerator works better with the algorithms related to a path in the search space (IRW, AsBeC). 4 TEST CASE: HIGH SPEED TURBINE The real-like test case selected for comparison purposes is a rotor blade of a high-speed turbine configuration. This LPT layout is typically suited for geared-fan or open rotor engines and it requires a strong and careful aero-mechanic optimization procedure integrated in the Q3D analysis tools. While mechanical strength is predominant at the hub section, aerodynamic features are critical at the tip section. This background outlines differential optimization problems depending on the blade span. The distinctiveness traits just illustrated cause the 3D blade shape constructed by the profile generator after the first CD (see Figure 1 and Part 2) to be inadequate for the case under study. For this reason fast and effective multiobjective optimizations are fundamental. Keeping in mind the above, the authors will present one complete design cycle in PD phase, consisting in the 3D/Q3D fitting procedure (§ 4.1) and the consequent single blade Q3D optimization (§ 4.2). Conventional Hub, Mid and Tip sections will be considered. Starting geometries come out from 1D meanline solution, as they are automatically generated without any previous PD optimization. Since Q3D analyses represent almost all the time of the optimization procedures a notable time saving is reached introducing parallelization (when possible) in the optimization routines, so that multiple Q3D analyses can be performed simultaneously. For all the simulations the same 8 core-based system with no bottlenecks in memory was used. Attention was also paid to load equally the machine in order not to give a bias to the obtained results. 4.1 Fitting As anticipated in Part 2, the “fitting” is a fundamental step of the PD required to ensure a reliable geometry optimization [11]. Generally speaking, it consists in overlapping the isentropic Mach profiles that contour the blades provided by the 3D analyses to those obtained by the Q3D simulations, thus imposing the same flow field around the turbine section. This result is achieved by modifying five different variables which define the flow channel characteristics: the inlet flow angle, three channel diffusion factors, and a coefficient for the exit static pressure. Acting on these parameters, the fitting procedure aims at minimizing the isentropic Mach error, which is calculated using the standard deviation between the 3D and the Q3D Mach number values on both suction and pressure side. 6 It is apparent that the fitting is a challenging mono-objective optimization problem, characterized by jagged and very large boundaries. Furthermore, the solution may not be unique and the domain space usually presents many minima with close objective function values. 
Therefore, the choice of the correct optimization algorithm to use is difficult: some methods can be severely affected by solution convergence, while intrinsically multi-objective (GeDEA and ANN) ones are not well exploited. In the present work, the fitting is carried out on the three sections (hub, mid, and tip) of the test-case blade. The algorithms presented in Chapter 3 were tested and compared in order to assess their performances with respect to the minimization problem under consideration. LOH+ANN method was not taken into account because the vastness and uncertainty of the design space make the LOH sampling ineffective, unless adopting a prohibitive number of points. Due to convergence error, it happens that identical set of input parameters do not always provide the same distribution of the isentropic Mach number, which means that the convergence precision of the Q3D analysis affects the results of the numerical simulation. This fact can be interpreted as noise that affects more the methods based on differences from small perturbations (mainly the GD). The objective function was specifically created to include not only the Mach number error, but also the convergence error. Hence all the optimization algorithms prioritize the solutions whose results are more trustworthy. 4.1.1 Results and analysis Figure 2 shows an example of the results obtained during the fitting of the hub section. It is easy to note that Q3D Mach contours (gold) are almost perfectly overlapped on 3D ones (blue), while the initial condition (green) is very far from the final outline as usual for hub sections due to secondary 3D effects. Obviously, geometries remain unchanged during all fitting operations. In order to reduce the impact of the random component, all the analyses were repeated six times. Table 2 reports the comparison of the algorithms in terms of computational resources used. The total time was kept constant (one hour) for each random-based optimization method; however, the deterministic GD always reached a local minimum before the allowed time. Figure 3 shows the trend of the objective function with respect to the time for all the optimization algorithms for hub and tip sections. Note that the plotted objective function (error ratio) is the error divided by a small reference value and the error ratio axis has been cut to improve readability. The trends of the objective function with respect to number of iterations, here not plotted, do not present remarkable differences from the ones depending on time in Figure 3. This was expected, thanks to a fine code parallelization. It is also worth noting that the plot reported in Figure 3 compare the trend of the objective function obtained by the IRW to those provided by three parallelized algorithms (GD, AsBeC and GeDEA). The IRW is characterized by the poorest performance level, not only due to the lower number of analyses performed within the prescribed time. In fact, AsBeC and GeDEA are designed to better exploit the high number of simulations they can compute. It has been tested that even running 8 IRW instances in parallel the best result is still worse than both the average of the two evolutionary algorithms. AsBeC seems to have better exploring qualities than others, finally reaching lower objective values in hub (shown) and mid (not shown) cases. In addition, AsBeC final results are less affected by the random component as it can be deduced by looking at the mean standard deviation reported in Table 1. 
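For reference, below is a minimal sketch of a fitting-type objective of the kind described above: the standard deviation of the isentropic Mach mismatch between Q3D and 3D samples, inflated by a convergence penalty so that poorly converged runs are deprioritized. The penalty form, its weight and the sampled distributions are assumptions made for illustration only, not the in-house definition.

```python
import numpy as np

def fitting_error(mach_q3d, mach_3d, convergence_residual, penalty_weight=10.0):
    """Illustrative fitting objective: standard deviation of the isentropic Mach
    mismatch between the Q3D and 3D profiles (suction + pressure side samples),
    inflated by a penalty on poorly converged Q3D runs (assumed form)."""
    mismatch = np.asarray(mach_q3d) - np.asarray(mach_3d)
    mach_error = float(np.std(mismatch))
    return mach_error * (1.0 + penalty_weight * max(convergence_residual, 0.0))

# toy usage: two sampled Mach distributions along the blade surface
s = np.linspace(0.0, 1.0, 50)
mach_3d = 0.3 + 0.5 * np.sin(np.pi * s)
mach_q3d = mach_3d + 0.02 * np.cos(3 * np.pi * s)   # imperfect overlap
print(fitting_error(mach_q3d, mach_3d, convergence_residual=1e-3))
```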
Figure 2: Hub blade section (on the left) and Mach profiles before and after fitting (on the right).

                                              | GD       | IRW  | AsBeC | GeDEA
Total time (minutes)                          | 6 (mean) | 60   | 60    | 60
Mean total # of Q3D analyses performed        | 110      | 193  | 1064  | 1104
# of threads used per block                   | 5-6      | 1    | 8     | 8
Mean # blocks of parallel analyses            | 20       | 193  | 133   | 138
Mean final standard deviation on error ratio  | -        | 0.1  | 0.007 | 0.04

Table 1: Comparison of mean resources used among the algorithms for the fitting of the 3 sections

Figure 3: Performance comparisons of the algorithms in hub and tip sections

4.2 Optimization
The geometric optimization of LPT airfoils is the core of the PD phase. Focusing on the specific study, the parameter domain is six-dimensional; the variables are the tangential chord, the unguided turning, the inlet blade angle, the inlet wedge angle, the leading edge radius and its eccentricity. The optimization of the high-speed blade airfoils is multidisciplinary and essentially aero-mechanical. Three different objective functions were defined, all of which have to be maximized. In particular, the first objective is represented by the aerodynamic efficiency (η); the second one is connected to the section area (Area Obj.) in order to fulfil the structural requirements; the third function (M+C Obj.) combines the convergence error (to prioritize trustworthy solutions) and an objective on the Mach numbers along the profile suction side, which have to be limited for aerodynamic reasons. In the case of the GD, the IRW and the AsBeC, these three objectives were gathered together in only one function by means of a weighted sum. In this multi-objective framework the Pareto front (available with ANN and GeDEA) can be very useful, allowing more importance to be given to some objective functions without re-performing the whole optimization. The five different optimization methods are compared as done in the fitting case, performing six different instances for each section. In addition, the Pareto Front is presented and analysed from the physical point of view.

4.2.1 Results and analysis
This time all the analyses were stopped after 8 hours (a working day), with the only exception of the GD, which reaches its final solution after 28 minutes on average, as reported in Table 2. In Figure 4, the objective functions for the hub and the mid sections are reported with respect to the time. As far as the multi-objective optimization algorithms (GeDEA and ANN) are concerned, Figure 4 plots the trend of the best values in terms of weighted sum of the three objective functions. Thanks to this expedient, it is possible to appreciate the performances of all the algorithms used, even though the comparison is not completely consistent. The right side of Figure 4 reports the airfoils and isentropic Mach trends of the optimized configuration (black solid lines) with respect to the baseline (red solid lines). As for the fitting, the IRW method is the slowest due to the same reasons explained in §4.1. The AsBeC algorithm combines global and local search abilities in a very efficient way, being faster than IRW and overcoming even the GeDEA after some time. The LOH+ANN scheme turns out to be the best overall despite the long training time required. It also runs fewer Q3D simulations than the AsBeC and the GeDEA, as apparent from Table 2. On the other hand, the GeDEA seems to be unable to find better solutions after some hours of evolutionary processes, probably because of elitism.
All the previous observations are also valid for the tip section, whose results are omitted for the sake of compactness. The remarkable performance of the ANN is mostly linked to the problem under study, in which the initial configuration was very different from the optimized one. In such a situation, the accurate LOH sampling is fundamental for the investigation of the feasible domain. In order to discuss an antithetical example, a midspan section closer to the optimum one was optimized separately for 60 minutes using all the algorithms. The results are shown in Figure 5: compared to Figure 4 the differences are relevant. In this case, the LOH+ANN performance is evidently the worst, since the LOH is totally ineffective when sampling in a large domain around an already good initial solution. Following the same wide-exploration principle, the GeDEA also degrades in quality. On the contrary, GD and AsBeC are built to focus on the local search and therefore prove to be the most beneficial. However, LOH+ANN and GeDEA provide the Pareto Fronts, which can be very useful to ensure a quick and excellent design. A practical illustration is offered in Figure 6, in which both the whole final GeDEA Pareto Front (greyscale for M+C Obj.) and an intermediate step of the LOH+ANN Pareto front (yellow-to-blue scale for M+C Obj., zoomed inside the picture) are shown. It is worth remembering that both algorithms thicken their Pareto front near points that have a higher weighted sum of objectives. Moreover, the diversity within the population in the GeDEA helps to obtain a widespread Pareto Front. On the contrary, LOH+ANN Pareto fronts narrow the more the refinement is advanced, but the ANN metamodel makes it possible to enrich the specialized Pareto front with many more points. The hub section is selected as the key example since it has to balance mechanical and aerodynamic goals. In general, the Area Obj. turns out to be inversely proportional to efficiency, while the M+C Obj. seems positively correlated with efficiency.

Figure 4: Algorithms performance comparisons and final optimized blade in different sections

                                              | GD        | IRW     | AsBeC   | GeDEA   | ANN-LOH
Total time (minutes)                          | 28 (mean) | 480     | 480     | 480     | 480
Mean total # of Q3D analyses performed        | 203       | 532     | 3232    | 3360    | 2816
# of threads used per block                   | 6-8       | 1       | 8       | 8       | 8
Mean # blocks of parallel analyses            | 29        | 532     | 404     | 420     | 352
Mean final standard deviation on objective    | -         | 1·10^-4 | 6·10^-5 | 4·10^-5 | 3·10^-5

Table 2: Comparison of resources used among the algorithms for the optimization of the 3 sections

Figure 5: Performance comparisons of the algorithms starting from an already optimized blade

Three different solutions belonging to the ANN Pareto front (named Case 1, Case 2 and Case 3) have been tested and compared (right side of Figure 6). Specifically, Case 1 (blue) is a high-efficiency configuration, which limits the diffusive phenomena, lowers the Mach number and promotes the flow acceleration; Case 2 (green) is the best configuration with balanced weights (the same as in the previous optimizations), with an intermediate Mach profile between Case 1 and Case 3; Case 3 (gold) is a high-resistance configuration, which increases the area at the expense of a higher Mach number and strong diffusion, mainly on the pressure side. The Pareto exploitation reveals how much easier and faster the blade design can be when moving among optimal configurations without re-performing any further optimization. For the specific example of the hub section, Case 3 is recommended because its lower η is largely outweighed by its structural strength, which allows a larger safety margin on breakages.
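A minimal sketch of how a Pareto front can be exploited in this way is given below: non-dominated candidates are filtered and a single design is extracted by changing only the weights of the objectives, without re-running any optimization. The candidate values and weights are invented for illustration and do not correspond to the actual Pareto fronts of Figure 6.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points; all objectives are to be maximized."""
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

def pick_by_weights(points, weights):
    """Select the Pareto point maximizing a weighted sum of the objectives:
    a different design (e.g. Case 1/2/3) is obtained by changing only the weights."""
    pts = np.asarray(points)[pareto_front(points)]
    return pts[int(np.argmax(pts @ np.asarray(weights)))]

# toy candidates: (efficiency obj., area obj., Mach+convergence obj.), all maximized
cands = np.array([[0.92, 0.40, 0.70],
                  [0.90, 0.55, 0.70],
                  [0.86, 0.75, 0.60],
                  [0.84, 0.50, 0.50]])
print(pick_by_weights(cands, weights=[0.7, 0.15, 0.15]))   # efficiency-driven choice
print(pick_by_weights(cands, weights=[0.1, 0.8, 0.1]))     # structure-driven choice
```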
Figure 6: 3D Pareto Fronts for GeDEA and ANN and 3 optimal configurations tested (hub section)

5 CONCLUDING REMARKS

This paper concerned the application and comparison of different optimization methods for the PD design of a realistic and peculiar LPT rotor blade. A quasi mono-objective optimization with a large parameter space (the fitting operation) was addressed, as well as a typical multi-objective one with aero-mechanical goals but a limited feasibility domain (the geometric optimization). In the first case, the two evolutionary algorithms prove the most effective, although the GeDEA needs time to accurately set the range of the boundaries, while the AsBeC shows a better local-search attitude, with the possibility of finding lower minima. In the second case, the wide-exploring algorithms are the best choice, especially when the optimization starts from a poor initial configuration. Nevertheless, LOH+ANN is not suitable for a high-dimensional parameter space, and GeDEA tends to freeze the genetics of the individuals due to elitism as time advances. Furthermore, starting from an already good solution degrades the performance of both (especially LOH+ANN), while AsBeC and GD work very well. However, LOH+ANN and GeDEA should be privileged for multi-objective and multidisciplinary problems since they provide the Pareto front.

6 ACKNOWLEDGEMENTS

The authors would like to thank M. Marconcini and M. Giovannini at the University of Florence for the support on the TRAF code for CFD analyses.

REFERENCES
[1] A. Toffolo, E. Benini, Genetic diversity as an objective in multi-objective evolutionary algorithms, Evolutionary Computation, Massachusetts Institute of Technology, Volume 11, pp. 151-167, (2003)
[2] ACARE, European Aeronautics: A Vision for 2020 - Meeting society’s needs and winning global leadership, (2001)
[3] ICAO, Annex 16 to the convention on international civil aviation: environment protection, Volume 2, (2008)
[4] F. Bertini, M. Credi, M. Marconcini, M. Giovannini, A Path Toward the Aerodynamic Robust Design of Low Pressure Turbines, ASME Journal of Turbomachinery, Vol. 135, 021018, (2013)
[5] F. Bertini, M. Marconcini, M. Giovannini, E. Ampellio, A Critical Numerical Review of Loss Correlation Models and Smith Diagram for Modern Low Pressure Turbine Stages, GT2013-94849, ASME Turbo Expo, San Antonio (TX), USA, (2013)
[6] T. Verstraete, J. Periaux, Introduction to Optimization and Multidisciplinary Design in Aeronautics and Turbomachinery - Lecture Series, Von Karman Institute for Fluid Dynamics, Rhode Saint Genese, Belgium, (2012)
[7] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report-TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, (2005)
[8] X. Yang, B. Liu, Z. Cao, Opposition-Based Artificial Bee Colony Algorithm Application in Optimization of Axial Compressor Blade, ASME Turbo Expo, San Antonio (TX), USA, (2013)
[9] C. M. Bishop, Neural Network for Pattern Recognition, Oxford University Press, Oxford, (1995)
[10] F. Kai Tai, R. Li, A. Sudjianto, Latin Hypercube Sampling and its modifications in “Design And Modeling for Computer Experiments”, Chapman and Hall/CRC Computer Science and Data Analysis Series, Boca Raton, Volume 6, pp. 47-65, (2006)
[11] F. Graziano, Multidisciplinary Optimization of Aeronautical Airfoils, BSc Thesis, Politecnico di Torino, Torino, (2012)
[12] A. Arnone, R. C. Swanson, A Navier-Stokes Solver for Turbomachinery Applications, ASME Journal of Turbomachinery, Volume 115, No. 2, pp. 305-313, (1993)
[13] A. Arnone, Viscous Analysis of Three-Dimensional Rotor Flow Using a Multigrid Method, ASME Journal of Turbomachinery, Volume 116, pp. 435-445, (1994)
THE MODULAR GROUP AND WORDS IN ITS TWO GENERATORS arXiv:1512.02596v5 [math.NT] 24 Mar 2017 GIEDRIUS ALKAUSKAS Abstract. Consider the full modular group PSL2 (Z) with presentation hU, S|U 3 , S 2 i. Motivated by our investigations on quasi-modular forms and the Minkowski question mark function (so that this paper might be considered as a necessary appendix), we are lead to the following natural question. Some words in the alphabet {U, S} are equal to the unity; for example, U SU 3 SU 2 is such a word of length 8, and U SU 3 SU SU 3 S 3 U is such a word of length 15. We consider the following integer sequence. For each n ∈ N0 , let t(n) be the number of words in alphabet {U, S} that equal the identity in the group. This is the new entry A265434 into the Online Encyclopedia of Integer Sequences. We investigate the generating function of this sequence and prove that it is an algebraic function over Q(x) of degree 3. As an interesting generalization, we formulate the problem of describing all algebraic functions with a Fermat property. 1. Introduction 1.1. Motivation and results. The topic of this paper is the following two new sequences. The first one t(n), n ≥ 0, starts from 1, 0, 1, 1, 1, 5, 2, 14, 13, 31, 66, 77, 240, 286, 722, 1226, 2141, 4760, 7268, 16473, . . . (1) This is the sequence A265434 in [13]. The second sequence t(n), n ≥ 0, starts from 1, 0, 1, 1, 0, 3, 0, 5, 3, 7, 16, 12, 50, 44, 123, 195, 301, 718, 928, 2244, . . . They are defined as follows. Let U 3 = I and S 2 = I be two elements of order 3 and 2, respectively, I being the unity. that  Consider  the the full modular group PSL2 (Z).  It is known  0 1 0 1 it is freely generated by U = , element of order 3, and and S = , element −1 1 −1 0 of order 2. Let n ∈ N0 , and A be any word in the alphabet U, S of total lenght n: n Y (U ǫj S δj ), ǫj , δj ∈ {0, 1}, ǫj + δj = 1. A= j=1 Let t(n) be the number of such words that in the group Γ are equal to the unity. Our method to calculate the first 20, 0 ≤ n ≤ 19, terms of the sequence (1) was via a brute force calculation. As the model, let U and S be my 2 × 2 matrices as above. Let us construct a binary tree,  given  1 0 starting from the node I = . Each node A in this binary tree generates two offspring 0 1 - the right one AU, and the left one AS. For a given n, we then calculate which of the 2n Date: March 28, 2017. 2010 Mathematics Subject Classification. Primary 05A05, 05A15, 20XX, 14H05. Key words and phrases. The modular group, combinatorial group theory, free group, unity, generating function, algebraic function, cogrowth rate, return generating function, word problem, pushdown automaton. The research of the author was supported by the Research Council of Lithuania grant No. MIP-072/2015. 1 2 G. ALKAUSKAS matrices in the nth generation are equal to ±I. Via this method, a standard home computer can give few more terms of this sequence. Further, we call such a word primitive, if s Y (U ǫj S δj ) = I only if s = n. j=1 Let t(n) be the number of primitive (and nonempty) words of length n. Let us introduce the generating functions ∞ ∞ X X n t(n)x = T (x), t(n)xn = T(x). n=0 n=1 Obviously, 1 = T (x). 1 − T(x) Indeed, we are just considering words broken up into primitive loops, so  2 T (x) = 1 + t(1)x + t(2)x2 + · · · + t(1)x + t(2)x2 + · · · + · · · . The number of total words of length n is equal to 2n . So, t(n) ≤ 2n , and thus both series converge at least for |x| < 21 . The Table gives values for this function (and all words) for 0 ≤ n ≤ 8. Table. Sequences t(n) and t(n). 
Primitive elements are in bold n t(n) t(n) Products 0 1 0 I 1 0 0 − 2 1 1 S2 3 1 1 U3 4 1 0 S4 5 5 3 U 3 S 2 , U2 S2 U, US2 U2 , S 2 U 3 , SU3 S 6 2 0 S6, U 6 7 14 5 U 3 S 4 , U2 S4 U, US4 U2 , S 4 U 3 , SU 3 S 3 , S 2 U 2 S 2 U, SU2 S2 US, U 2 S 2 US 2 S 2 US 2 U 2 , SUS2 U2 S, US 2 U 2 S 2 , S 3 U 3 S, S 2 U 3 S 2 , US2 US2 U 8 13 3 S 8 , U 6 S 2 , U 5 S 2 U, U 4 S 2 U 2 , U 3 S 2 U 3 , U 2 S 2 U 4 , US 2 U 5 , S 2 U 6 SU6 S, U 3 SU 3 S, U2 SU3 SU, USU3 SU2 , SU 3 SU 3 The integer q(n, m) counts the number of words that are equal to the unity in the group Γ, and which contain n copies of U and m copies of S. Note that q(n, m) = 0 unless n ≡ 0 (mod 3) and m ≡ 0 (mod 2). We also introduce X A is unity P x ǫj y P δj = ∞ X q(n, m)xn y m = Q(x, y). n,m=0 Obviously, T (x) = Q(x, x). The series T (x) can be interpreted as a return generating function for a certain directed graph (see the Subsection 1.2 and Figure 1), and thus this function belongs to a hugely diverse and abundant family of functions with the following two features. All THE MODULAR GROUP AND WORDS 3 of them share the property that their Taylor coefficients grow exponentially, and all are related to the graph theory and enumeration ([3], 5.6). The main result of this paper is the following Theorem. The function Q(x, y) is an algebraic 3rd degree function over Q(x, y), satisfying (y 6 − x6 + 6y 2 x3 − 3y 4 + 2x3 + 3y 2 − 1)Q3 + (x3 y 2 − y 4 + x3 + 2y 2 − 1)Q2 + (x3 − y 2 + 1)Q + 1 = 0. In particular, the function T (x) is an algebraic 3rd degree function over Q(x), satisfying (6x5 − 3x4 + 2x3 + 3x2 − 1)T 3 + (x5 − x4 + x3 + 2x2 − 1)T 2 + (x3 − x2 + 1)T + 1 = 0. Thus, ∞ X t(n) n=0 2n 14 6√ = + 17, 13 13 ∞ X t(n) n=0 2n · 1 2n+1 = 0.5443390725+. The last number can be interpreted as probability that a randomly chosen word is a unity, with a convention that each length n is given a weight 2−(n+1) , and then probability is equally distributed among all words of length n. Two proofs of this result are given. The first one is longer, but uses only elementary combinatorics and considerations from the scratch. The second one is shorter, is included due to a suggestion and very clear guidance by the referee, and is a standard proof in the area of geometric and combinatoric group theory. We finish this Subsection with the following Proposition 1. For any prime p > 3, t(p) ≡ 0 (mod p). Proof. Indeed, if AB = I in the group Γ, then BA = I as well. So, any cyclic permutation of the word which is a unity is a unity again. So, words of length p > 3 which are equal to the unity split into groups each containing exactly p words. Indeed, otherwise all these permutations are equal. But U p 6= I and S p 6= I - a contradiction.  1.2. Context, previous results. In [1] we investigate the relation of modular forms to the Minkowski question mark function, introducing the notion of mean-modular forms. In particular, the analytic continuation of a certain bivariate analytic function G(κ, z) (an extension of the Stieltjes transform of the Minkowski question mark function) requires us to know the analytic formula for Q(x, y) as a bivariate function; whence the principal motivation for the current paper. In fact, the result of Theorem is new, but it is an exercise for people in the field. The much more ambitious program of integrating the world of Minkowski question mark function and the world of modular forms for PSL2 (Z) justifies the current paper as a necessary appendix to [1]. 
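For completeness, the brute-force count described in the introduction can be reproduced with a few lines of Python: model U and S as the 2x2 integer matrices above, enumerate all 2^n words, and count those equal to ±I, i.e. equal to the unity in PSL2(Z). This is feasible only for small n (the cost grows like 2^n), but it reproduces the initial terms of (1) and of the Table.

```python
from itertools import product

U = ((0, 1), (-1, 1))        # element of order 3 in PSL2(Z)
S = ((0, 1), (-1, 0))        # element of order 2 in PSL2(Z)
I2 = ((1, 0), (0, 1))
NEG_I2 = ((-1, 0), (0, -1))

def mul(A, B):
    """Multiply two 2x2 integer matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def t(n):
    """Number of words of length n in {U, S} equal to the unity in PSL2(Z)."""
    count = 0
    for word in product((U, S), repeat=n):
        M = I2
        for g in word:
            M = mul(M, g)
        if M == I2 or M == NEG_I2:   # equality in PSL2(Z) means equality up to sign
            count += 1
    return count

print([t(n) for n in range(10)])   # [1, 0, 1, 1, 1, 5, 2, 14, 13, 31], as in (1)
```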
As just mentioned, this particular question is new, though many intricately related problems were investigated and solved before, and the topic itself is of big importance in the theory of groups (growth and cogrowth rates), graphs (return and first-return paths), and non-commutative probability. 4 G. ALKAUSKAS As a particular example of his more general results, Kuksov [6] considers the cogrowth rate of the product Z/(2) ⋆ Z/(3), which is the same modular group. The question of investigating cogrowth rates amounts to the following. Count the number of reduced words in the alphabet {U, U −1 , S, S −1 } that are equal to the unity. Reduced means that S and S −1 , and also U and U −1 never follow immediately one after another. It is obvious then that the total number of reduced words of length n is 4 · 3n−1 : the first letter can be anything, after that the choice is restricted. The generating function of this sequence (cogrowth series) turns out to be   p (x + 1) 9x5 − 3x4 + 8x3 − x2 + x − (6x2 − x + 2) R(x) v(x) = , 2(3x − 1)(3x2 + 1)(3x2 + 3x + 1)(3x2 − x + 1) where R(x) = 81x8 − 54x7 + 9x6 − 18x5 − 8x4 − 6x3 + x2 − 2x + 1. The cogrowth rate (the inverse of the radius of convergence of the Taylor series for this function at the origin) turns out to be 2.9249+ < 3. The function v(x) is quadratic algebraic function, as opposed to cubic algebraic function T (x) in our case. The Taylor coefficients of v(x) are 1, 0, 2, 2, 6, 24, 44, 136, 298, 914, 2462, 6464, . . . For example, there are 6 reduced words of length 4: SSSS, S −1 S −1 S −1 S −1 , U −1 SSU, USSU −1 , U −1 S −1 S −1 U, US −1 S −1 U −1 . Also, Proposition 1 does not have an analogoue in this case. In a related direction, Quenell in [9] investigates the return generating function of Cayley graphs for the free products of finite groups. This is related to the thesis of McLaughlin [8] (see also [2]). In particular, in case of the modular group the return generating function counts words in {U, U −1 , S, S −1 } of total length n that are equal to the unity (words are not necessarily reduced). The total number of all words of spell length n is 4n . In this context, the function T (x) can be interpreted as a return generated function for a directed Cayley graph for Z/(2) ⋆ Z/(3), where each 2-cycle and 3-cycle has a particular direction chosen in advance; see Figure 1. Franz Lehner has pointed out that the question in consideration is a special case of a “free convolution” and can be obtained via Voiculescu-Woess transform [7, 10, 11, 12]. This technique is implemented as a package and it is part of the library of FriCAS. So, the current paper can be thought as a purely combinatoric demonstration of the result, with emphasis on a bivariate function Q(x, y) rather than a univariate T (x) = Q(x, y). 1.3. Algebraic functions with a Fermat property. Before passing to the proof of the main result, we will formulate one interesting problem, which is motivated by Proposition 1. Consider the following rational function ∞ ∞ X X x2 P (x) = = s(n)xn = (2n−1 − 1)xn . (1 − 2x)(1 − x) n=2 n=2 Thus, we have s(p) ≡ 0 (mod p) for p > 2 prime. THE MODULAR GROUP AND WORDS 5 Figure 1. First few generations of a directed Cayley graph for Z/(2) ⋆ Z/(3). Next, let  ∞ ∞   X X 1 2n 2 n J(x) = √ − 2 xn . = s(n)x = − n 1 − 4x 1 − x n=0 n=0 This also gives s(p) ≡ 0 (mod p) for p ≥ 2 prime. 
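This congruence is easy to check numerically; here s(n) = C(2n, n) − 2, and the sketch below also checks the stronger congruence modulo p^2 that is mentioned further on.

```python
from math import comb
from sympy import primerange

# s(p) = C(2p, p) - 2 should vanish modulo p (and in fact modulo p^2) for every prime p.
for p in primerange(2, 60):
    assert (comb(2 * p, p) - 2) % p == 0
    assert (comb(2 * p, p) - 2) % (p * p) == 0
print("Fermat-type congruence verified for all primes p < 60")
```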
Finally, as is implied by Proposition 1, we have the same (for p > 3) conclusion for a degree 3 algebraic function T (x) = 1 + x2 + O(x3 ), which satisfies (6x5 − 3x4 + 2x3 + 3x2 − 1)T 3 + (x5 − x4 + x3 + 2x2 − 1)T 2 + (x3 − x2 + 1)T + 1 = 0. ∞ P Let R(x) = s(n)xn ∈ Z[[x]] be an algebraic function over Q(x), unramified at x = 0. n=0 Suppose, s(p) ≡ 0 (mod p) for all sufficiently large prime numbers p. We call such a function R(x) an algebraic function with a Fermat property. Thus, we formulate Problem 1. Characterize all algebraic functions with a Fermat property in general, and in any particular algebraic function field Q(x, U), where U is unramified at x = 0. We can multiply the function R(x) by an integer to get the congruence valid for all primes. All algebraic functions with a Fermat property form an abelian group P F . Since the set of all n algebraic functions over Q is countable, F is also countable. If U(x) = ∞ n=0 a(n)x ∈ Z[[x]] is P ∞ n ∂ ′ an algebraic function, then U (x) := xU (x) = n=0 na(n)x ∈ F . Indeed, first it is obvious that U ∂ ∈ Z[[x]]. And second, if G(Y, x) ∈ Z[Y, x] and G(U, x) = 0, then U′ = − Gx (U, x) GY (U, x) 6 G. ALKAUSKAS belongs to the same algebraic function field Q(x, U). Let D (from “Differential”) P be the union of n all such possible U ∂ . Then D is a subgroup of F . So is Z[x]. Finally, let U(x) = ∞ n=0 a(n)x ∈ Z[[x]] is again an algebraic function, unramified and without a pole at x = 0, and let for an integer M ≥ 2, U (M ) (x) = U(xM ). Let P (from “Power”) be the group whose elements are s X (M ) Uj j , Mj ≥ 2. j=1 Then P is also a subgroup of F . Of course, any two of the subgroups D, Z[x] and P have a non-trivial pairwise intersection. Let also for any U, algebraic over Q(x) and unramified at x = 0, FU = F ∩ Q(x, U), DU = D ∩ Q(x, U), PU = P ∩ Q(x, U) (We identify any algebraic function with its Laurent expansion at x = 0). We may refine Problem 1 as follows. Problem 2. Find the structure of abelian groups F /(D + Z[x] + P ),  AU = FU / DU + Z[x] , PU = FU / DU + Z[x] + PU  for any U algebraic over Q(x) and unramified at x = 0. In particular, for example, what is the group P∅ (that is, we talk only about Q(x))? In fact, for a function J(x) we have an even stronger property s(p) ≡ 0 (mod p2 ). We may call it an algebraic function with a strong Fermat (or Wieferich) property, and ask similar questions. 2. The first proof Let us introduce the function q̂(n, m). This is defined similarly as q(n, m). Namely, we count the number of words in U, S which are equal to unity in Γ, which have n copies of U and m copies of S, but which do not contain S 2 (two S ′ s in a row). Let ∞ X b y). q̂(3n, 2m)x3n y 2m = Q(x, n,m≥0 Proposition 2. We have an identity   X 3n + k . q(3n, 2m) = q̂(3n, 2m − 2k) · k k≥0 This implies  1 x , y · . 2 1−y 1 − y2 Proof. Indeed, consider any word which has 3n copies of U and 2m copies of S. Now, replace each occurring segment S 2ℓ+j , ℓ ≥ 0, j ∈ {0, 1}, with S j . We get a word which lies in the set which defines q̂(3n, 2m − 2k) for some k ∈ N0 . In the other direction, consider the latter word. Suppose, we have k spare copies of S 2 . We can plug k copies of S 2 into such a word, to get a word which defines q(3n, 2m). As can be seen, we can confine in plugging to the left of each occurrence of U, plus to the right of the rightmost U. This gives 3n + 1 possible places to plug in. We want to distribute k copies of S 2 . This gives the formula in Proposition.   
b Q(x, y) = Q Let us divide the words which define the quantity q̂(n, m) into 7 disjoint subsets. B stands for any non-empty word. Let n, m ≥ 0. Here everywhere “—” stands for the phrase “be the number of such words that are of the form...”. THE MODULAR GROUP AND WORDS a) b) c) d) e) f) g) 7 a(n, m) — SUBUS. b(n, m) — either U α SBSU β , α, β > 0 and α + β ≡ 0 (mod 3), or U γ , γ ≡ 0 (mod 3). c(n, m) — U α SBS, or SBSU α , α > 0, α ≡ 0 (mod 3). d(n, m) — U α SBSU β , α, β > 0, and α + β ≡ 2 (mod 3). e(n, m) — U α SBSU β , α, β > 0, and α + β ≡ 1 (mod 3). f (n, m) — either SBSU α , or U α SBS, α ≡ 1 (mod 3). g(n, m) — either SBSU α , or U α SBS, α ≡ 2 (mod 3). Let us also introduce two subsets which define d and f , respectively, as follows: d) d(n, m) — USBSU, f) f(n, m) — SBSU. Example 1. We have: q̂(6, 2) = 5. Words which corespond to a, b, c are, respectively, {SU 6 S}, {U 2 SU 3 SU, U SU 3 SU 2 }, {SU 3 SU 3 , U 3 SU 3 S}. All sets beyond “c” are empty. ✷ Example 2. We have: q̂(9, 4) = 20. Words which correspond to a(9, 4) = 7, b(9, 4) = 5, c(9, 4) = 3, d(9, 4) = 1, e(9, 4) = 0, f (9, 4) = 2, g(9, 4) = 2, are, respectively: a : {SU SU 3 SU 5 S, SU SU 6 SU 2 S, SU 2 SU 3 SU 4 S, SU 2 SU 6 SU S, SU 3 SU 3 SU 3 S, SU 4 SU 3 SU 2 S, SU 5 SU 3 SU S}, b : {U SU SU 3 SU 2 SU 2 , U SU 2 SU 3 SU SU 2 , U 2 SU SU 3 SU 2 SU, U 2 SU 2 SU 3 SU SU, SU SU 3 SU 2 SU 3 }, c : {SU 2 SU 3 SU SU 3 , U 3 SU SU 3 SU 2 S, U 3 SU 2 SU 3 SU S}, d : {U SU 3 SU SU 3 SU }, e : ∅, f : {SU 3 SU 2 SU 3 SU, U SU 3 SU 2 SU 3 S}, g : {SU 3 SU SU 3 SU 2 , U 2 SU 3 SU SU 3 S}. ✷ Note that a(n, m) and other 8 functions are potentially non-zero only for n = 3k, m = 2l for k ≥ 0, l ≥ 0. The first step to derive our main result is the following Proposition 3. We have the following recurrences: a(3n, 2m) = b(3n, 2m − 2) + d(3n, 2m − 2) + e(3n, 2m − 2), X b(3n, 2m) = (3k − 1)a(3n − 3k, 2m) for m ≥ 1, b(3n, 0) = 1 for n > 0, k≥1 c(3n, 2m) = 2 X k≥1 d(3n, 2m) = X k≥0 e(3n, 2m) = X k≥1 f (3n, 2m) = 2 (3k + 1)d(n − 3k, 2m), 3kf(n − 3k, 2m), X k≥0 g(3n, 2m) = 2 a(3n − 3k, 2m), X k≥0 f(n − 3k, 2m), d(n − 3k, 2m). 8 G. ALKAUSKAS These hold for n, m ≥ 0 assuming that all these functions vanish if one of the arguments is negative. Proof. All these equalities are straighforward. Only the formula for e needs an explanation. Indeed, we note that (as already used in the proof of Proposition 1) if AB = I in the group Γ, then BA = I. So, if U 2 BU 2 = I, this gives BU = I.  Let A(x, y) = X a(3n, 2m)x3n y 2m n,m≥0 be the generating function of the coefficients a(3n, 2m), and similarly we define other bivariate functions B, C, D, E, F, G, D and F. The identities of Proposition 3 now read as A = y 2B + y 2D + y 2 E, C= 2x3 A, 1 − x3 D= This gives A = b is then Our function Q 2x3 + 1 D, (1 − x3 )2 x3 x3 (x3 + 2) + A, 1 − x3 (1 − x3 )2 3x3 2 E= F, F = F, 3 2 (1 − x ) 1 − x3 B= G= 2 D. 1 − x3 y 2 x3 (x3 + 2) y 2 (2x3 + 1) 3y 2x3 y 2x3 + A + D + F. 1 − x3 (1 − x3 )2 (1 − x3 )2 (1 − x3 )2 That is, b y) = Q(x, (2) b = 1 + A + B + C + D + E + G. Q 2x3 + 1 3 x3 + 2 1 + A+ D+ F. 1 − x3 (1 − x3 )2 (1 − x3 )2 (1 − x3 )2 (3) Example 3. The sets which define “d” through “g” are non-empty starting only from the length 13 (that is, where 3n + 2m = 13). Thus, if we use the above recurences only minding the sets “a”, “b” and “c” (that is, assuming that D = 0 and F = 0), we obtain A(x, y) ≫ y 2 x3 (1 − x3 ) . 
(1 − x3 )2 − y 2 x3 (x3 + 2) By the sign ≫ we mean that the inequality ≥ holds for Taylor coeffiencts of corresponding functions on the left and on the right. This gives b y) ≫ 1 + A + B + C ≫ Q(x, 1 − x3 − y 2 x3 , (1 − x3 )2 − y 2 x3 (x3 + 2) and consequently   x 1 (x − 1)(x + 1)(x6 + x5 − 3x4 + x3 + 3x2 − 1) b , x · ≫ . T (x) = Q 1 − x2 1 − x2 1 − 5x2 − 2x3 + 10x4 + 2x5 − 9x6 + 2x7 + 5x8 − 2x9 − x10 And indeed, minding the values given by (1), MAPLE confirms that the first disrepancy occurs only for n = 13. Namely, 286 > 281, and the five missing words are precisely those given by Example 2 in the sets “d” through “g”. For n = 14 there are no words in the sets beyond “c”, so the above is in fact the equality 722 = 722. The Galois group of the splitting field of the polynomial in the denominator THE MODULAR GROUP AND WORDS 9 is equal to S10 . This polynomial has the unique root θ of the smallest absolute value, it is real and positive: θ = 0.5394737936+ , θ −1 = 1.853658161+ . So, this single observation gives the lower bound t(n) > C(1.853658161)n for a certain C > 0. In fact, our main Theorem implies that for the function t(n) we have the sharper bound t(n) > C(1.971480194)n , where φ−1 = 1.971480194+ , φ being the smallest (in absolute value) root of x7 − 20x5 + 12x4 − 8x3 − 12x2 + 4 = 0, the factor of the discriminant of the cubic polynomial in the Theorem. ✷ b we need to express D and F in terms of A. In To prove the formula for A, and hence for Q, order to accomplish this, we introduce the notion of a primitive a-word. This, by definition, is the word which belongs to the subset which defines the function “a” (an a−word ), but which cannot be written as A1 U α1 A2 U α2 · · · U αs−1 As , s ≥ 2, αi ≥ 1, (4) where each Ai is an a−word. Thus, let us continue our classification given by a) through f), and introduce a) a(n, m) — number of primitive a-words, which have n copies of U and m copies of S. Example 4. In the Example 2 above, six of the a−words are primitive, only SU 3 SU 3 SU 3 S is not. ✷ Proposition 4. We have the recurrence a(3n, 2m) = a(3n, 2m) + ∞ X X k=1 +  ∞  X 3k − 1 k=1 + 1  ∞  X 3k − 1 k=1 2 X a(3n1 , 2m1 )a(3n2 , 2m2 ) 3n1 +3n2 =3n−3k 2m1 +2m2 =2m a(3n1 , 2m1 )a(3n2 , 2m2 )a(3n3 , 2m3 ) 3n1 +3n2 +3n3 =3n−3k 2m1 +2m2 +2m3 =2m X a(3n1 , 2m1 )a(3n2 , 2m2 )a(3n3 , 2m3 )a(3n4 , 2m4 ) 3n1 +3n2 +3n3 +3n4 =3n−3k 2m1 +2m2 +2m3 +2m4 =2m + ··· . Proof. We note that each a−word is either primitive, or can be written in the form (4), where each Ai is a primitive a−word. This claim  follows by induction. Now we are left to count, which gives the above formula. Here 3k−1 stands for a number of ways the number 3k can be s written as a sum of s + 1 positive integers.  Let W (x, y) = X n,m≥0 a(3n, 2m)x3n y 2m . 10 G. ALKAUSKAS The identity in Proposition 4 can be written as  ∞ ∞ X X X x3 W 2 (1 + W )2 3k − 1 3k s+2 x W =W+ x3k W 2 (1 + W )3k−1 = W + A=W + . 3 (1 + W )3 s 1 − x k=1 k=1 s≥0 Analogously we derive recurrences for d and f in terms of a, which lead to the identities D= x3 W 2 , 1 − x3 (1 + W )3 F= x3 W 2 (1 + W ) . 1 − x3 (1 + W )3 Plugging all these three identities for A, D and F into (2), we readily obtain that W is an algebraic function: W = W 2 x3 (1 + W )2 W 2 x3 y 2 (5x3 + 1 + 3x3 W ) y 2 x3 (1 − x3 ) − + . (1 − x3 )2 − y 2 x3 (x3 + 2) 1 − x3 (1 + W )3 [1 − x3 (1 + W )3 ][(1 − x3 )2 − y 2 x3 (x3 + 2)] MAPLE simplifies this to a very elegant form W = x3 (W + 1)2 (W + y 2 ). (5) So, W is a third degree algebraic function over Q(x, y), and thus so is Q. 
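Equation (5) also gives a convenient way to compute the numbers a(3n, 2m) themselves: starting from W = 0 and iterating the fixed-point form W ← x^3 (W + 1)^2 (W + y^2), every pass fixes three more powers of x. The following sympy sketch is illustrative only; it recovers the single primitive a-word SU^3 S at x^3 y^2 and the six primitive a-words of Example 4 at x^9 y^4.

```python
import sympy as sp

x, y = sp.symbols('x y')

def truncate(p, max_x_degree):
    """Expand p and drop every monomial whose degree in x exceeds max_x_degree."""
    p = sp.expand(p)
    return sum((t for t in sp.Add.make_args(p) if sp.degree(t, x) <= max_x_degree),
               sp.Integer(0))

W = sp.Integer(0)
for _ in range(4):                      # each iteration stabilizes three more powers of x
    W = truncate(x**3 * (W + 1)**2 * (W + y**2), 9)

print(sp.expand(W))                      # series of W(x, y) through x^9
print(W.coeff(x, 3).coeff(y, 2))         # -> 1, the word S U^3 S
print(W.coeff(x, 9).coeff(y, 4))         # -> 6, the six primitive a-words of Example 4
```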
The exact form of the cubic equation can be easily calculated with MAPLE, but we rather concentrate on T (x), since, first, the equation for Q with the help of method in Section 3 can be easily calculated by hand, and second, the cubic equation for W is very convenient to calculate fast the Taylor coefficients of T recurrently. Let  x  Z(x) = W ,x . 1 − x2 Then the equation for Z reads as Z= x3 (Z + 1)2 (Z + x2 ) . (1 − x2 )3 (6) We are left to verify the cubic equation given in the formulation of the Theorem. Plugging known values into (3) and using Proposition 2, we obtain (1 − x2 )2 (1 + Z) . (7) 1 − 3x2 − x3 + 3x4 − x6 − 3x3 Z − 3x3 Z 2 − x3 Z 3 Now, plug this value of T (x) into the cubic equation given by the Theorem. Then factor the numerator. MAPLE confirms that one of the two multipliers is indeed Z(1 − x2 )3 − x3 (Z + 1)2 (Z + x2 ), so the equation for T (x) is verified. In fact, this equation was discovered by Robert Israel by finding a 3rd degree algabraic function whose first 42 Taylor coefficients coincide with t(n), 0 ≤ n ≤ 41. Our theoretical result thus double-checks this fact. T (x) = The equalities (6) and (7) give a polynomial-time method to calculate coefficients t(n). Indeed, let us start from Z1 (x) = x5 (the first primitive a−word is SU 3 S), and let us define polynomials ZN (x) ∈ Z[x] recurrently by ZN +1 = x3 (ZN + 1)2 (ZN + x2 ) (mod x3N +6 ). (1 − x2 )3 Thus, ZN is of degree 3N + 2, ZN +1 ≡ ZN (mod x3N +3 ). After reaching enough terms, plug this into (7). This agrees perfectly with (1), which was calculated by a direct count. Robert Israel calculated 2000 terms of the sequence t(n). THE MODULAR GROUP AND WORDS 11 3. The short proof using PDA The following alternative proof was proposed by the referee. We reproduce it almost verbatim, since the ideas are clear and self-explanatory. The PDA stands for a pushdown automaton. Both proofs give exactly the same algebraic equation for the function Q. The group Γ = PSL2 (Z) is virtually free, so in the framework of geometric and combinatoric group theory it is known that the word problem (the set of all words in generators and inverses that are equal 1) is an unambiguous context-free language. However, for our purposes in [1] we need the regular language of all positive words {U, S}∗ to obtain the language of words counted by the generating functions T and Q. It follows immediatelly from the ChomskySchützenberger enumeration theorem that the functions T and Q are algebraic. To obtain the explicit formulae, we can explicitly construct a PDA accepting the language, then follow standard methods to obtain generating functions from the PDA. The language is very simple: a word equals 1 if some sequence of applications of the rules S 2 → 1, U 3 → 1 to factors of the input word reduces it to the empty word. This can be done using a PDA which has just one state, and a pushdown stack which uses the alphabet {0, 1, 2, 3} where 0 is the bottom-of-stack marker, 1 means a single U, 2 means U 2 , and 3 means S. So that the model of a PDA can be used that accepts on empty stack. Let us introduce a symbol $, and consider the language L$ = {w$ | w ∈ {U, S}∗ , w =Γ 1}. Here are the transitions: 1) 2) 3) 4) 5) 6) 7) 8) 9) U, 0 → 10 U, 1 → 2 U, 2 → ǫ U, 3 → 13 S, 0 → 30 S, 1 → 31 S, 2 → 32 S, 3 → ǫ $, 0 → ǫ This is indeed self-explanatory. For example, take the item 4). This means U is the next letter, and 3 is on the top of the stack (that is, S). Since US does not reduce, replace 3 with 13. On the other hand, consider the item 3). 
It stands for a move that now we get a factor U 3 , which is removed. The idea is that one reads a word in U, S and puts it into normal form on the fly, reducing U 3 , S whenever they appear as one moves right. The normal form of the prefix of the input word is written (in code) on the stack. Accept if at the end the stack is empty (the normal form is 1). 2 Now we follow [4] to convert the deterministic PDA into an unambiguous context free grammar. Let Ni stand for the nonterminal Nq,i,q (since there exists only one state). Start symbol is N0 . We get 1) N0 → UN1 N0 12 G. ALKAUSKAS 2) 3) 4) 5) 6) 7) 8) 9) N1 N2 N3 N0 N1 N2 N3 N0 → UN2 →U → UN1 N3 → SN3 N0 → SN3 N1 → SN3 N2 →S →$ Next, following the method of Chomsky-Schützenberger we obtain the following system of equations for the generating function directly from the grammar:  f = xf1 f0 + yf3f0 + z,    0 f1 = xf2 + yf3 f1 , (8) f2 = x + yf3f2 ,    f = xf f + y. 3 1 3 That is, we replace U and S with x and y, respectively, and add the corresponding terms. If (f0 , f1 , f1 , f3 ) is the solution, then Q(x, y) = fz0 . Note that the variable z appears only in the extression for f0 only as a linear factor z. Solving easily by hand the last three equations of (8) gives the cubic equation for f3 = K: y 2K 3 − (2y + y 3)K 2 + (1 + 2y 2 − x3 )K − y = 0. (9) Plugging everything into the first equation of (8), we get Q(x, y) = K . y(1 − K 2 ) (10) Let K1 , K2 and K3 be three distinct roots of (9), and let σ1 = K1 + K2 + K3 , σ2 = K1 K2 + K1 K3 + K2 K3 , σ3 = K1 K2 K3 be the standard symmetric polynomials. Then σ1 = 2 + y2 , y σ2 = 1 + 2y 2 − x3 , y2 1 σ3 = . y Let Qi , 1 ≤ i ≤ 3, are obtained from (10) by plugging Ki instead of K. We get X 1 X 1 1 1 yσ2 + + = y −y Ki = − yσ1 = y 2 − x3 − 1, Q1 Q2 Q3 Ki σ3 2 1 1 y σ1 (σ1 σ2 − 3σ3 ) 1 + + = + y 2 σ2 − y 2 = x3 y 2 − y 4 + x3 + 2y 2 − 1, Q1 Q2 Q2 Q3 Q1 Q3 σ3 σ3 1 y3 σ 2 − 2σ2 σ 2 − 2σ1 σ3 = − y 3 σ3 − y 3 1 + y3 2 Q1 Q2 Q3 σ3 σ3 σ3 6 6 2 3 4 3 2 = x − y − 6y x + 3y − 2x − 3y + 1. We thus get the cubic equation for formulated in the Theorem. 1 , Q and the reciprocal of it is the equation for Q, exactly as THE MODULAR GROUP AND WORDS 13 Acknowledgements. I would like to thank Wadim Zudilin, Murray Elder and Franz Lehner for sending me references and pointing out that these results are a small part of the much richer field in the theory of groups and graphs. I sincerely thank Roland Bacher, and also Robert Israel and activists of OEIS for helping with MAPLE and the sequence t(n) itself. I thank Audrius Alkauskas for a picture. I especially thank the anonymous referee: all the text in Section 3, up to the system (8), is almost verbatim taken from the referee’s report. References [1] G. Alkauskas, The Minkowski ?(x) function, a class of singular measures, quasi-modular and meanmodular forms. http://arxiv.org/abs/1209.4588. [2] L. Bartholdi, Counting paths in graphs, Enseign. Math. (2) 45 (1-2) (1999), 83–131. [3] S.R. Finch, Mathematical constants, Encyclopedia of Mathematics and its Applications, 94. Cambridge University Press, Cambridge (2003). [4] J.E. Hopcroft, J.D. Ullman, Introduction to automata theory, languages, and computation, AddisonWesley Series in Computer Science. Addison-Wesley Publishing Co., Reading, Mass. (1979). [5] D. G. Kouksov, On rationality of the cogrowth series, Proc. Amer. Math. Soc. 126 (10) (1998), 2845–2847. [6] D. Kuksov, Cogrowth series of free products of finite and free groups, Glasgow Math. J. 41 (1) (1999), 19–31. [7] F. 
Lehner, On the computation of spectra in free probability, J. Funct. Anal. 183 (2) (2001), 451–471. [8] J. C. McLaughlin, “Random walks and convolution operators on free products”, Doctoral dissertation, New York University, 1986. [9] G. Quenell, Combinatorics of free product graphs, Contemp. Math. 173 (1994), 257–281. [10] D. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66 (3) (1986), 323–346. [11] W. Woess, Nearest neighbour random walks on free products of discrete groups. Boll. Un. Mat. Ital. B (6) 5 (3) (1986), 961–982. [12] W. Woess, Random walks on infinite graphs and groups. Cambridge Tracts in Mathematics, 138. Cambridge University Press, Cambridge, 2000. [13] The Online Encyclopedia of Integer Sequences, Sequence A265434. Vilnius University, Department of Mathematics and Informatics, Naugarduko 24, LT-03225 Vilnius, Lithuania E-mail address: [email protected]
On the Lie bracket approximation approach to distributed optimization: Extensions and limitations ∗ arXiv:1802.10519v1 [math.OC] 28 Feb 2018 Simon Michalowsky1 , Bahman Gharesifard2 , and Christian Ebenbauer1 1 Institute for Systems Theory and Automatic Control, University of Stuttgart, Germany {michalowsky,ce}@ist.uni-stuttgart.de 2 Department of Mathematics and Statistics, Queen’s University, Canada [email protected] to relax these assumptions. In the present work we want to extend these results and show that the approach can be applied to a large class of optimization problems under mild assumptions on the communication network. While in [11], [12], [13] only optimization problems with linear constraints and objective functions in the form of a sum of separable functions were considered, in the present work we enhance the methodology to general convex optimization problems. The main idea of the approach is to use Lie bracket approximation techniques to find distributed approximations of non-distributed saddle-point dynamics. While in the previous works only certain Lie bracket approximations were used, we further show here that a whole class is applicable. Abstract. We consider the problem of solving a smooth convex optimization problem with equality and inequality constraints in a distributed fashion. Assuming that we have a group of agents available capable of communicating over a communication network described by a time-invariant directed graph, we derive distributed continuous-time agent dynamics that ensure convergence to a neighborhood of the optimal solution of the optimization problem. Following the ideas introduced in our previous work, we combine saddle-point dynamics with Lie bracket approximation techniques. While the methodology was previously limited to linear constraints and objective functions given by a sum of strictly convex separable functions, we extend these result here and show that it applies to a very general class of optimization problems under mild assumptions on the communication topology. 1 INTRODUCTION 2 PRELIMINARIES Over the last decades, distributed optimization has been an active area of research with high practical relevance, see, e.g., [2, 3, 4] for applications. In these type of problems, the goal is to cooperatively solve an optimization problem using a group of agents communicating over a network. While discrete-time algorithms for distributed optimization constitute the majority in the existing literature, we focus on continuous-time algorithms which have regained interest in the last decades, [5, 6, 7, 8, 9, 10]. These algorithms require strong assumptions either on the structure of the optimization problem or on the communication network. Recently, a novel approach to continuous-time distributed optimization based on Lie bracket approximations has been proposed in [11], [12] that has the potential * 2.1 Notation We denote by Rn the set of n-dimensional real vectors and further write C p , p ∈ N, for the set of p-times continuously differentiable real-valued functions. The gradient of a function f : Rn → R, f ∈ C 1 , with respect to its argument x ∈ Rn , will be denoted by ∇ x f : Rn → Rn ; we often omit the subscript, if it is clear from the context. We denote the (i, j)th entry of a matrix A ∈ Rn×m by aij , and sometimes denote A by A = [ aij ]. We use ei to denote the vector with the ith entry equal to 1 and all other entries equal to 0. We do not specify the dimension of ei but expect it to be clear from the context. 
For a vector λ ∈ Rn we let diag(λ) ∈ Rn×n denote the diagonal matrix whose diagonal entries are the entries of λ. Given two continuously differentiable vector fields f 1 : Rn → Rn and f 2 : Rn → Rn , the Lie This article is an extended version of [1] additionally including proofs for all results, a discussion of the choice of vector fields after Remark 1 and the overview in Table 1. 1 bracket of f 1 and f 2 evaluated at x is defined to be [ f 1 , f 2 ]( x ) := ∂ f2 ∂f ( x ) f 1 ( x ) − 1 ( x ) f 2 ( x ). ∂x ∂x set of (4) is non-empty and that the problem has a unique solution. Our goal is to design continuous-time optimization algorithms that converge to an arbitrarily small neighborhood of the unique global optimizer of (4) and that can be implemented in a distributed fashion. More precisely, we assume that we have a group of n agents available, each capable of interchanging information over a communication network described by a directed graph G = (V , E ) with graph Laplacian G = [ gij ], where V = {v1 , v2 , . . . , vn } is a set of n nodes and E ⊆ V × V is the edge set between the nodes. In the present setup, each node vi represents an agent and the edges define the existing communication links between the agents, i.e., if there is an edge from node i to node j then agent i has access to the information provided by agent j. We then say that an algorithm is distributed if each agent only uses its own information as well as that provided by its out-neighboring agents. Let L : Rn × Rneq × Rnineq → R, neq = ∑i∈Ieq neq i , nineq = ∑i∈Iineq nineq i , denote the Lagrangian associated to (4), i.e., (1) With a slight abuse of notation we sometimes also write [ f 1 ( x ), f 2 ( x )]. For a vector x = [ x1 , x2 , . . . , xn ]> ∈ Rn and a finite set S ⊆ {1, 2, . . . , n}, we denote by xS ∈ R|S| the ordered stacked vector of all xi with i ∈ S . For example, if n = 5 and S = {2, 4}, then xS = [ x2 , x4 ]> . 2.2 Basics on graph theory We recall some basic notions on graph theory, and refer the reader to [14] or other standard references for more information. A directed graph (or simply digraph) is an ordered pair G = (V , E ), where V = {v1 , v2 , . . . , vn } is the set of nodes and E ⊆ V × V is the set of edges, i.e. (vi , v j ) ∈ E if there is an edge from node vi to v j . In our setup the edges encode to which other agents some agent has access to, i.e. (vi , v j ) ∈ E means that node vi receives information from node v j . We say that node v j is an outneighbor of node vi if there is an edge from node vi to node v j . The adjacency matrix A = [aij ] ∈ Rn×n associated to G is defined as ( 1 if i 6= j and (vi , v j ) ∈ E , (2) aij = 0 otherwise. L( x, ν, λ) = F ( x ) + ν> a( x ) + λ> c( x ), where a = [ ai ]i∈Ieq , c = [ci ]i∈Iineq are the stacked vectors of all ai and ci , respectively, and ν ∈ Rneq , λ ∈ Rnineq are the associated Lagrange multipliers. In the sequel, we assume that the state of the ith agent comprises of xi as well as the dual variables associated to the constraints ai , ci , given they exist. Without loss of generality we then put the following assumption on the indexing of the constraints: We also define the out-degree matrix D = [dij ] associated to G as ( if i = j ∑nk=1 aik dij = (3) 0 otherwise. Assumption 1. For any i ∈ Ieq and any j ∈ Iineq , there exists an x = [ x1 , x2 , . . . , xn ]> ∈ Rn such that and Finally, we call G = D − A = [ gij ] ∈ Rn×n the Laplacian of G . A directed path in G is a sequence of nodes connected by edges and we write pi1 ir = hvi1 |vi2 | . . . 
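To fix ideas, the flow (7) is easy to simulate before any distribution issues enter. The following forward-Euler sketch runs it on a small toy instance that is not taken from the paper (F(x) = (x1 − 2)^2 + (x2 − 1)^2 with one affine equality and one affine inequality constraint); for this data the KKT point is x* = (1, 1), ν* = 0, λ* = 2, and the simulated state settles near it.

```python
# Toy data (assumed for this sketch only): minimize (x1-2)^2 + (x2-1)^2
# subject to a(x) = x1 + x2 - 2 = 0 and c(x) = x1 - 1 <= 0.
def grad_F(x):  return [2 * (x[0] - 2), 2 * (x[1] - 1)]
def a(x):       return x[0] + x[1] - 2          # equality constraint
def grad_a(x):  return [1.0, 1.0]
def c(x):       return x[0] - 1                 # inequality constraint
def grad_c(x):  return [1.0, 0.0]

x, nu, lam = [0.0, 0.0], 0.0, 1.0               # lam > 0 so (7c) keeps it nonnegative
h = 0.01
for _ in range(20000):
    gF, ga, gc = grad_F(x), grad_a(x), grad_c(x)
    dx   = [-(gF[i] + nu * ga[i] + lam * gc[i]) for i in range(2)]   # (7a)
    dnu  = a(x)                                                      # (7b)
    dlam = lam * c(x)                                                # (7c)
    x = [x[i] + h * dx[i] for i in range(2)]
    nu  += h * dnu
    lam += h * dlam

print(x, nu, lam)   # approaches [1, 1], 0, 2 for this toy instance
```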
|vir i for a path from node vi1 to node vir . We say that a path is simple if the sequence contains no node more than once. i ∈ Ieq ⊆ {1, 2, . . . , n}, ci ( x ) ≤ 0, i ∈ Iineq ⊆ {1, 2, . . . , n}, (6) The following saddle-point dynamics adapted from [15] is known to converge to a saddle point of the Lagrangian (see [13] for a proof), thus providing a solution to (4) F(x) ai ( x ) = 0, 6= 0 6= 0. L( x ? , ν, λ) ≤ L( x ? , ν? , λ? ) ≤ L( x, ν? , λ? ). Consider the following convex optimization problem s.t. ∂c j ∂x j ( x ) ∂ai ∂xi ( x ) In a nutshell, this assumption Newguarantees that the ith constraints ai , ci are both functions of xi . It is well-known that if the Lagrangian has Newa saddle point ( x ? , ν? , λ? ), then x ? is an optimizer of (4). We say that a point ( x ? , ν? , λ? ) is a saddle point if for all x ∈ Rn , nineq ν ∈ Rneq , λ ∈ R+ , we have 3 PROBLEM SETUP min (5) ẋ = −∇ x L( x, ν, λ) (4) = −∇ F ( x ) − > > ∂a ∂c ∂x ( x ) ν − ∂x ( x ) λ (7a) ν̇ = ∇ν L( x, ν, λ) where F : Rn → R, ai : Rn → Rneq i , ci : Rn → Rnineq i , neq i , nineq i ≥ 1, F ∈ C 2 is strictly convex, the functions ai , i ∈ Ieq , are affine and the ci ∈ C 2 , i ∈ Iineq , are convex. We assume further that F, ai , ci are such that the feasible = a( x ) (7b) λ̇ = diag(λ)∇λ L( x, ν, λ) = diag(λ)c( x ). 2 (7c) notation, for each i = 1, 2, . . . , n, we define the index set However, (7) is in general not distributed in the aforementioned sense, since the right-hand side of (7) is not composed only of admissible vector fields, i.e., vector fields that can be computed locally by the nodes. For example, if g12 6= 0 and g13 = 0, then the vector field [ e1> x2 , 0, 0]> is admissible for (7), while [ e1> x3 , 0, 0]> is not. Recently, a novel approach to distributed optimization has been proposed that employs Lie bracket approximation techniques to derive distributed approximations of (7). The idea is to write the right-hand side of (7) by means of Lie brackets of admissible vector fields. If we have achieved to rewrite (7) in this form, i.e., we have ż = ∑ v B B ( z ), i −1 Ī(i ) := {i } ∪ { j = n + ∑ neq k + `, ` = 1, 2, . . . , neq i } k =1 i −1 ∪ { j = n + neq + ∑ nineq k + `, ` = 1, 2, . . . , nineq i } (11) k =1 associating the components of the complete state z to the ith agent meaning that z j is part of the state of agent i for all j ∈ Ī(i ). Hence, the state vector of the ith agent is given by zĪ(i) . Based on this, for all j = 1, 2, . . . , N, we define I( j) = Ī(i ) (8) B∈B (12) for some i such that j ∈ Ī(i ), i.e., I( j) is the set of all indices which are associated to the same agent as the jth index. Note that Ī( j) = I( j), for all j = 1, 2, . . . , n. [ x > , ν> , λ> ]> , where z = v B ∈ R and B is a set of Lie brackets of admissible vector fields {φ1 , φ2 , . . . , φ M }, φi : Rn+neq +nineq → Rn+neq +nineq , then we can employ Lie bracket approximation techniques from [16], [17] to derive distributed approximations of (8). More precisely, we can find a family of functions uiσ : R → R parametrized by σ > 0 such that the trajectories of 4.1 Lie brackets of admissible vector fields In the following, we first want to discuss which kind of vector fields can be written in terms of Lie brackets of admissible vector fields. For i, j = 1, 2, . . . , N, define M żσ = ∑ φi (z)uiσ (t) hi,j (z) = e j f j (zI( j) , zI(i) ), (9) (13) i =1 where f j : R|I( j)| × R|I(i)| → R, f j ∈ C 1 . Observe that hi,j is admissible if and only if there exist `, k ∈ {1, 2, . . . , n} such that i ∈ I(`), j ∈ I(k) and gk` 6= 0. 
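Before stating the general result, the basic mechanism can be checked symbolically: bracketing two fields of the form (13) along a chain of nodes turns a function carried by one node into a multiplicative factor at the next one. The sympy fragment below uses two arbitrary illustrative choices (sin(z1) and z2^3) on a chain 1 → 2 → 3; it is a toy check of the definition (1), not an example taken from the paper.

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
z = sp.Matrix([z1, z2, z3])

def lie_bracket(f1, f2):
    """[f1, f2](z) = (d f2/dz) f1 - (d f1/dz) f2, cf. (1)."""
    return f2.jacobian(z) * f1 - f1.jacobian(z) * f2

h_12 = sp.Matrix([0, sp.sin(z1), 0])   # writes a function of z1 into component 2
h_23 = sp.Matrix([0, 0, z2 ** 3])      # writes a function of z2 into component 3

print(sp.simplify(lie_bracket(h_12, h_23)))
# Matrix([[0], [0], [3*z2**2*sin(z1)]]): the third component is a product of a
# function of z1 and the derivative of the function of z2, as exploited below.
```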
In the next Newresult we consider Lie brackets of admissible vector fields of the form (13). uniformly converge to those of (8) as σ increases, given (8) and (9) are initialized equally. A general algorithm to compute suitable functions uiσ is presented in [16] and we will present a modified version thereof tailored to the problem at hand in [13]. In the present paper we do not discuss the second step of how to design these functions uiσ but focus on rewriting the right-hand side of (7) in terms of Lie brackets of admissible vector fields. Lemma 1. Consider a graph G of n nodes and let pi1 ir = hvi1 |vi2 | . . . |vir i denote a simple path in G from vi1 to vir . Then, for any jk ∈ I(ik ), k = 1, 2, . . . , r, we have  4 MAIN RESULTS r −2 ∂ f jk ∂z j k +1 k =1 = e j1 f jr−1 (zI( jr−1 ) , zI( jr ) ) ∏ Consider the saddle-point dynamics (7) and observe that the right-hand side is a sum of vector fields of the form ei ψ( x ) and ei z j ψ( x ), i, j = 1, 2, . . . , n + neq + nineq , ψ : Rn → R, where z = [ x > , ν> , λ> ]> ∈ R N , N = n + neq + nineq ,  h  i h jr ,jr−1 , h jr−1 ,jr−2 , . . . , [h j3 ,j2 , h j2 ,j1 ] . . . (z) (14) (zI( jk ) , zI( jk+1 ) ) and the left-hand side is a Lie bracket of admissible vector fields. • A proof is given in Section 7.1. By the above Lemma, each non-admissible vector field that takes the same form as the right-hand side of (14) can be written in terms of a Lie bracket of admissible vector fields. It is worth mentioning that this does not classify the whole set of vector fields that can be written as a Lie bracket of admissible vector fields since we limited ourselves to a single path. We next discuss a special case that is of particular importance for the application at hand. (10) is the complete state and ei ∈ R N is the ith unit vector. These vector fields might either be admissible or not, depending on the communication graph as well as the problem structure. In the following, we wish to discuss how to write vector fields of this form by means of Lie brackets of admissible vector fields. For the purpose of 3 Proposition 1. Consider a graph G of n nodes and let pi1 ir = hvi1 |vi2 | . . . |vir i denote a simple path in G from vi1 to vir . Let jk ∈ I(ik ), k = 1, 2, . . . , r, be any set of indices and suppose that (1) (2) k k f jk (zI( jk ) , zI( jk+1 ) ) = f j (zI( jk ) ) f j (zI( jk+1 ) ) for k = 2, . . . , r − 1; hence (1) k (2) k −1 ∂z j k ∂ fj (zI( jk ) ) = 1 (15) | = (2) f j (zI( jk+1 ) ) = z jk+1 , k (17) k = 2, 3, . . . , r − 1, (18a) k = 1, 2, . . . , r − 2; (18b) As we see from the proof, each bounded vector field in (17) leads to another unbounded vector field. Hence, at most half of the vector fields h jk ,jk−1 in (17) can be bounded. In particular, we can choose the functions f jk , k = 1, 2, . . . , r − 1, as follows to guarantee that (16) holds: ( (1) f j (x) k 1  1 cos(α( x −d)) if k is even, if k is odd, k 6= 1,  sin α( x − d) if k is even, k 6= r − 1, ∫ cos(α(1x−d)) dx if k is odd, k 6= r − 1, (21a) (21b) Example 1. Consider the graph shown in Fig. 1 and assume for the sake of simplicity that I( j) = j for j = 1, . . . , 5 and z ∈ R5 . By (14) we can write non-admissible vector fields of the form e1 ϕ1 (z2 ) ϕ2 (z3 ) as well as e1 ϕ3 (z4 ) ϕ4 (z5 ) and sums thereof in terms of Lie brackets of admissible vector fields as long as ϕ1 , ϕ3 admit an analytic expression of their antiderivatives ∫ ϕ1 (z2 )dz2 , ∫ ϕ3 (z4 )dz4 . In fact, with (1) (1) (zI( j ) ) k k = cos(α x − d) where α 6= 0, d ∈ R. 
However, this choice will lead to functions f jk that are not globally continuous but only π π ); hence we well-defined in the interval (d − 2α , d + 2α need to choose d appropriately and α sufficiently small. By that choice, all functions f jk with even k are bounded while all functions f jk with odd k are unbounded. We next discuss how we can make use of the previous results to rewrite more general vector fields. While Lemma 1 enables us to write products of functions of variables of nodes which lie on the same path as Lie brackets of admissible vector fields, we cannot directly use this result to rewrite functions which do not fulfill this property. To make this clearer, consider the following example: all k = 1, 2, . . . , r. By (15), f j is also bounded for all k k = 1, 2, . . . , r since I( jk ) and I( jk+1 ) are disjunct. However, by (16), fj = ( (2) f j (x) k Proof. Suppose there exists a set of bounded vector fields such that (17) holds. Then f jk is a bounded function for (zI( jk ) ) = ∈ C 1. Remark 1. The same holds true if we do not assume the structure (15). In fact, the structure is required for all other variables except of zI( j1 ) , zI( jr ) to cancel out in (17). • Lemma 2. Suppose that all assumptions from Proposition 1 are fulfilled. Then there exists no set of bounded vector fields h jk ,jk−1 ∈ C 1 such that (17) holds. • (2) k −1 ∂z j k k −1 (2) hence leading to h jk+1 ,jk (zI( jk ) , zI( jk+1 ) ) = e jk z jk+1 . However, in view of (9) where the φi are given by admissible vector fields of the form (13), it is often desired that the admissible vector fields have certain properties such as boundedness in order to simplify the calculation of the approximating inputs uiσ and improve the transient behavior of (9). However, as we see next, it is not possible to render all vector fields bounded. ∂ fj (2) , fj Thus, f j is strictly monotone in z jk which contradicts the k −1 boundedness assumption, thus concluding the proof. A proof is given in Section 7.2. In view of (14), the constraint (16) ensures that all terms depending on zI( jk ) , k = 2, 3, . . . , r − 1, cancel out. Equation (17) is of particular interest since the non-admissible vector fields often take the form as its right-hand side. According to (16), there exists a whole class of vector fields h jk ,jk−1 , or equivalently, functions f jk , such that (17) holds. A particularly simple choice that has been utilized in the previous works [11], [12] is to take k (20) k −1 for all z ∈ R N and the left-hand side is a Lie bracket of admissible vector fields. • (1) (zI( jk ) )| ≥ β. (1) (1) (2) e j1 f j (zI( j1 ) ) f j (zI( jr ) ) 1 r −1 f j (zI( jk ) ) = 1, (2) k −1 ∂z j k ∂ fj Note that h jk ,jk−1 ∈ C 1 if and only if f j (16) R|I( jk )| , for all zI( jk ) ∈ k = 2, 3, . . . , r − 1. Then   h  i (z) h jr ,jr−1 , h jr−1 ,jr−2 , . . . , [h j3 ,j2 , h j2 ,j1 ] . . . is bounded away from zero, i.e., there exists some constant β > 0 such that for all zI( jk ) ∈ R|I( jk )| we have with f j (zI( jk ) ) (2) k −1 ∂z j k ∂ fj (19) 4 h2,1 (z) = e1 ∫ ϕ1 (z2 )dz2 , h3,2 (z) = e2 ϕ2 (z3 ), (22a) h4,3 (z) = e1 ∫ ϕ3 (z4 )dz4 , h5,4 (z) = e4 ϕ4 (z5 ), (22b) z2 z3 2 3 and observe that for all z ∈ R N we have [ψ2 , ψ1 ](z) = e1 ϕ2 (z3 ) ϕ4 (z5 ). z1 1 (27) Further, following Proposition 1 and choosing 5 z5 4 z4 h2,1 (z) = e1 z1 , h3,2 (z) = e2 ϕ2 (z3 ), (28a) h4,3 (z) = e1 , h5,4 (z) = e4 ϕ4 (z5 ), (28b) we have for all z ∈ R N Figure 1. The graph considered in Example 1 where the ith agent has an associated state zi . 
ψ1 (z) = [h3,2 , h2,1 ](z), ψ2 (z) = [h5,4 , h4,3 ](z). (29) Using this in (27) we finally managed to rewrite e1 ϕ2 (z3 ) ϕ4 (z5 ) in terms of admissible vector fields. • we have for all z ∈ R5 e1 ϕ1 (z2 ) ϕ2 (z3 ) = [h3,2 , h2,1 ](z) (23a) e1 ϕ3 (z4 ) ϕ4 (z5 ) = [h5,4 , h3,2 ](z). (23b) Remark 2. Instead of realizing the multiplication by means of Lie brackets another way is to augment the agent state by estimates of the respective state of the other agent. More precisely, in Example 1, we augment the state of agent 1 by ξ 3 and ξ 5 that are estimates of z3 and z5 , respectively, and let     ż1 w ( z1 , z2 , z4 , ξ 3 , ξ 5 )  + e2 z 5 + e3 z 5 , ξ̇ 3  =  −µξ 3 (30) −µξ 5 ξ̇ 5 However, we cannot directly use (14) to rewrite a nonadmissible vector field of the form e1 ϕ2 (z3 ) ϕ4 (z5 ). • In the next result, we wish to overcome this limitation and show how to not only write sums of the vector fields in the form of the right-hand side of (14) in terms of Lie brackets of admissible vector fields, but also products thereof. where w : R5 → R and µ > 0 is sufficiently large, hence ξ 3 (t) ≈ z3 (t), ξ 5 (t) ≈ z5 (t). The resulting non-admissible vector fields in the complete augmented system can then be written in terms of Lie brackets of admissible vector fields using Proposition 1. However, in the application at hand this alters the saddle-point dynamics (7) which necessitates a stability analysis of the augmented system. Proposition 2. Let ηk : R → R, ηk ∈ C 1 , k = 1, 2, . . . , m, m ≥ 2, and define ψ1 (z) = ei η1 (z j1 ) ∫ η0 (zi )dzi , ψk (z) = ei zi ηk (z jk ), (24a) k = 2, . . . , m − 1, ψm (z) = ei ηm (z jm ), (24b) (24c) i, jk ∈ N, ψk : R N → R N . Then, for any jk 6= i, k = 1, . . . , m, we have for all z ∈ R N h  i   . . . [ψm , ψm−1 ], ψm−2 , . . . , ψ1 (z) m = ei η0 (zi ) ∏ ηk (z jk ). k =1 Hence, under suitable assumptions on the communication graph, this allows us to write vector fields whose components are sums of products of arbitrary functions in terms of Lie brackets of admissible vector fields. This observation gives rise to the next Lemma. (25) Lemma 3. Consider a strongly connected graph G of n nodes and let ϕ : R N → R be an analytic function. Then any vector field ψ(z) = ei ϕ(z), ei ∈ R N , i = 1, 2, . . . , N, can be written as a possibly infinite sum of Lie brackets of admissible vector fields. • • A proof is given in Section 7.3. The vector fields ψk defined by (24) are in general non-admissible; thus, the left-hand side of (25) is not a Lie bracket of admissible vector fields. However, observing that the vector fields ψk take the same form as the right-hand side of (17), we can make use of Proposition 1 to write the left-hand side of (25) as a Lie bracket of admissible vector fields as long as, for any k ∈ {1, . . . , m}, there exists a path from the ith node to the node that is associated to state z jk . We illustrate that by means of Example 1. Proof. Since ϕ is analytic, by a series expansion it can be written as a possibly infinite sum of monomials of the components z j of z, j = 1, 2, . . . , N. Using Proposition 2, all these monomials can be written in terms of Lie brackets of vector fields of the form (24). By strong connectivity of G , all these vector fields can be written in terms of Lie brackets of admissible vector fields, thus concluding the proof. Example 1. [continued] Reconsider Example 1 and suppose we want to rewrite the non-admissible vector field e1 ϕ2 (z3 ) ϕ4 (z5 ). 
Following Proposition 2 we let ψ1 (z) = e1 z1 ϕ2 (z3 ), ψ2 (z) = e1 ϕ3 (z5 ), While the result might be more of a theoretical nature for the application at hand, it nevertheless shows that the proposed approach in principle applies to a large class of problems. (26) 5 Vector field Admissible if... (`) ei (`) ∂Fi ∂xi ( xi ) ∏ Fj ( x j ) (`) j∈J F Rewritable if... ` = 1, 2, . . . , n F i = 1, 2 . . . , n gij 6= 0 for all j ∈ J F ∃ pij in G for all j ∈ J F k ∈ Ieq i = 1, 2 . . . , n gik 6= 0 for all k : aki 6= 0 ∃ pik in G for all k : aki 6= 0 k ∈ Iineq , ` = 1, 2, . . . , nck i = 1, 2, . . . , n gij 6= 0 for all j ∈ Jck ∃ pij in G for all j ∈ Jck k ∈ Ieq i = 1, 2 . . . , n Assumption 2 holds ∃ pki in G for all aki 6= 0 k ∈ Iineq ` = 1, 2, . . . , nck Assumption 2 holds ∃ pkj in G for all j ∈ Jck (`) (`) j 6 =i ei āki νk (`) ei ∂ck,k ∂xk (`) ( xk ) ∏ ck,j ( x j ) (`) j∈Jc k j6=k en+k aki xi (`) en+neq +k λk ∏ ck,j ( x j ) (`) k j∈Jc (`) (`) (`) Table 1. An overview of all vector fields appearing in (31). 4.2 Distributed optimization via Lie brackets functions, i.e., nF F(x) = In the sequel we apply the results from the last section to the problem at hand and rewrite the saddle-point dynamics (7) by means of Lie brackets of admissible vector fields. For the sake of a simpler notation we assume in the following that neq i = 1 for all i ∈ Ieq and nineq i = 1 for all i ∈ Iineq . We further assume that each agent has an associated equality and inequality constraint, i.e., Ieq = {1, 2, . . . , n}, Iineq = {1, 2, . . . , n}. This can always be achieved by augmenting the optimization problem (4) by constraints that do not alter the feasible set. We emphasize that this is not necessary for the methodology to apply as we will illustrate in the example in Section 5. We can then write the saddle-point dynamics (7) equivalently as n ẋ = − ∑ ei i =1 ν̇ = ∑ ∑ ∑ āik νk + ∑ k ∈Ieq  ∂ck ∂xk ( x ) λk `=1 ∑ āki xi + b̄k  (32) (`) (33) ck,j ( x j ), (`) j∈Jc k (`) (`) where the Fj : R → R, j ∈ J F ⊆ {1, 2, . . . , n}, ` = 1, 2, . . . , n F , n F > 0, are strictly convex functions (`) (`) and the ck,j : R → R, k ∈ Iineq , j ∈ Jck ⊆ {1, 2, . . . , n}, ` = 1, 2, . . . , nci , nck > 0, are convex. Observe that, (`) (`) if n F and nck are infinite and Fj , ck,j are monomials, (`) this includes all analytic functions F, ck . If J F = {`}, (`) Jck = {`}, we obtain the particularly important special case that both the objective function and the constraints are a sum of separable functions; hence also the case of linear constraints considered in [11], [12] is covered here. Under this assumption the vector fields appearing in (31) are summed up in Table 1. Depending on the communication graph as well as the structure of the constraints and the objective function, these vector fields can either be admissible or not. In particular, the vector fields in (31b), (31c) are admissible if the constraints are compatible with the communication topology defined by the graph G , i.e., if the following assumption holds: (31a) k ∈Iineq (31b) i =1 e k λ k c k ( x ), ∑ ∏ `=1 (`) Fj ( x j ), (`) j∈J F nck ck ( x ) = n ek k ∈Ieq λ̇ = ∂F ∂xi ( x ) + ∑ ∏ (31c) k ∈Iineq Assumption 2. For all i, j = 1, 2, . . . , n with gij = 0 we where ai ( x ) = āi x + b̄i , āi = [ āi1 , āi2 , . . . , āin ] ∈ R1×n , b̄i ∈ R. Motivated by our previous discussions, we assume in the following that the objective function as well as the inequality constraints are sums of products of separable have ∂ai ∂x j ( x ) ≡ 0 as well as ∂ci ∂x j ( x ) ≡ 0. 
• We point out that all non-admissible vector fields in (31) can be written in terms of Lie brackets of admissible vector 6 2 1 5 3 4 Figure 2. The communication graph for the example considered in Section 5. Vector field Corresponding path e5 ν2 p52 = hv5 |v1 |v2 i e2 x 2 λ 1 p21 = hv2 |v5 |v1 i e1 λ 3 p13 = hv1 |v2 |v3 i e4 x 1 λ 4 p41 = hv4 |v5 |v1 i e1 x 4 λ 4 p14 = hv1 |v2 |v3 |v4 i e1 x 1 λ 4 p14 = hv1 |v2 |v3 |v4 i e8 λ 3 x 1 p31 = hv3 |v4 |v5 |v1 i e9 λ 4 x 4 x 1 p41 = hv4 |v5 |v1 i e9 λ4 x12 p41 = hv4 |v5 |v1 i Lie bracket representation   e j z 6 , e5 z j   e j z 7 , e2 z 2 z j   e j λ 3 , e1 z j   e j x 1 , e4 λ 4 z j   e j2 λ4 x4 , [ e j1 z j2 , e1 z j1 ]   e j λ4 , [ e j1 z j2 , e1 x1 z j1 ]  2  e j2 x1 , [ e j1 z j2 , e8 λ2 z j1 ]   e j x 1 , e9 z j x 4 λ 4 ]  2  e j x 1 , e9 λ 4 z j j ∈ I(1) j ∈ I(5) j ∈ I(2) j ∈ I(5) j1 ∈ I(2), j2 ∈ I(3) j1 ∈ I(2), j2 ∈ I(3) j1 ∈ I(4), j2 ∈ I(5) j ∈ I(5) j ∈ I(5) Table 2. An overview of the non-admissible vector fields in (35) and their Lie bracket representations. Here, the index sets are I(1) = {1, 7}, I(2) = {2, 6}, I(3) = {3, 8}, I(4) = {4, 9}, I(5) = {5}. fields under appropriate assumptions on the communication graph, see also the last column of Table 1. Specifically, if the graph is strongly connected, then all non-admissible vector fields can be rewritten independent of the objective function as well as the constraints, given that they admit the structure (32), (33). In most cases, however, much less restrictive requirements on the communication graph are sufficient. We do not explicitly discuss how to rewrite the non-admissible vector fields using Proposition 1 and Proposition 2 in general but illustrate this by means of an example in Section 5. point dynamics (7) are then given by ẋ = − ∇ F ( x ) − (2e2 − e5 )ν2 − 2( x1 e1 + x2 e2 )λ1 (35a) − (2x4 − x1 ) e4 λ4 − (2x1 − x4 ) e1 λ4 − ( e1 + e3 )λ3 ν̇2 = 2x2 − x5 (35b) λ̇ = λ1 e1 ( x12 + x22 − 4) + λ3 e2 ( x1 + x3 − 2) + λ4 e3 ( x42 − x4 x1 + x12 − 100), (35c) where x ∈ R5 , ν2 ∈ R, and λ = [λ1 , λ3 , λ4 ]> ∈ R3 . We next rewrite all non-admissible vector fields in (35) using Proposition 1. We will thereby follow the choice (18) to make sure that (16) holds. We do not discuss how to rewrite each non-admissible vector field in detail, but limit ourselves to the vector field e1 x4 λ4 from (35a). Comparing 5 EXAMPLE (1) with (17), we have j1 = 1, jr = 4, and f j (zI( j1 ) ) = 1, 1 In this section we illustrate the previous results by means of an example. Consider the following optimization problem (2) f j (zI( jr ) ) = x4 λ4 . Following Proposition 1, we require r −1 a path from node 1 to node 4 that is here given by p14 = hv1 |v2 |v3 |v4 i; thus r = 4. We then obtain 5 min x ∈ R5 s.t. F(x) =   e1 x4 λ4 = e j2 λ4 x4 , [ e j1 z j2 , e1 z j1 ] , ∑ Fi (xi ) i =1 a2 ( x ) = 2x2 − x5 = 0 c1 ( x ) = x12 + x22 − 4 ≤ 0 (36) where j1 ∈ I(2) = {2, 6}, j2 ∈ I(3) = {3, 8}, and the right-hand side is a Lie bracket of admissible vector fields. All other non-admissible vector fields in (35) can be treated similarly and we sum up the resulting Lie brackets in Table 2. For the simulation we let each j, j1 , j2 be the largest index in its respective index set. By that choice, we inject less perturbation in the primal and more in the dual variables New which is also visible in the simulation results depicted in Fig. 3. As to be seen, the distributed algorithm approximates the trajectories of the non-distributed saddle-point dynamics (35) and converges to a neighborhood of the optimizer x ? 
= [0, 2, 2, 4, 4]> . NewWe also (34) c3 ( x ) = x1 + x3 − 2 ≤ 0 c4 ( x ) = x42 − x4 x1 + x12 − 100 ≤ 0, where Fi ( xi ) = ( xi − i )2 , x = [ x1 , x2 , x3 , x4 , x5 ]> ∈ R5 . We assume that the communication topology is described by the graph in Fig. 2. Observe that the constraints c3 and c4 are not compatible with the graph topology, hence Assumption 2 is not fulfilled. The corresponding saddle- 7 6 x(t) bracket approximation approach to distributed optimization proposed in [11], [12] and discussed which kind of vector fields can in principle be written in terms of Lie brackets of admissible vector fields. We did not discuss the construction of approximating inputs but postpone this to [13] where we will present a modified version of the general algorithm from [16] that exploits the structure of the problem at hand. x1 x2 x3 x4 x5 4 2 0 -2 0 5 10 15 1 References ν(t) 0 -1 [1] S. Michalowsky, B. Gharesifard, and C. Ebenbauer, “On the Lie bracket approximation approach to distributed optimization: Extensions and limitations,” in 2018 European Control Conference (ECC), accepted, 2018. [2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. [3] F. Bullo, J. Cortés, and S. Martínez, Distributed Control of Robotic Networks, ser. Applied Mathematics Series. Princeton University Press, 2009. [4] J. Zhao and F. Dörfler, “Distributed control and optimization in DC microgrids,” Automatica, vol. 61, no. Supplement C, pp. 18 – 26, 2015. [5] D. Feijer and F. Paganini, “Stability of primal–dual gradient dynamics and applications to network optimization,” Automatica, vol. 46, no. 12, pp. 1974 – 1981, 2010. [6] J. Wang and N. Elia, “A control perspective for centralized and distributed convex optimization,” in 2011 IEEE 50th Conference on Decision and Control and European Control Conference (CDC-ECC), 2011, pp. 3800–3805. [7] H.-B. Dürr, C. Zeng, and C. Ebenbauer, “Saddle point seeking for convex optimization problems,” IFAC Proceedings Volumes, vol. 46, no. 23, pp. 540–545, 2013. [8] B. Gharesifard and J. Cortés, “Distributed continuous-time convex optimization on weightbalanced digraphs,” IEEE Transactions on Automatic Control, vol. 59, no. 3, pp. 781–786, 2014. [9] S. K. Niederländer and J. Cortés, “Distributed coordination for separable convex optimization with coupling constraints,” in 2015 54th IEEE Conference on Decision and Control (CDC), Dec 2015, pp. 694–699. -2 0 5 10 15 2 λ(t) 1 0 -1 0 5 10 15 time t Figure 3. Simulation results of the example from Section 5. The upper plot shows the primal variable x (t), the lower left one the dual variable ν2 (t) and the lower right one the dual variable λ(t). NewIn each plot, the dotted black lines marked with squares depict the trajectories of the non-distributed saddlepoint dynamics (35) while the thinner oscillating ones depict the trajectories of the distributed approximation. The corresponding thick lines depict the distributed approximation with additional low-pass filters. The dashed black lines indicate the desired equilbrium of (35). included simulation results with additional low-pass filters in the distributed x-, ν- and λ-dynamics. While the effect on the primal variables is small since we already reduced the oscillations by our design choice, the dual variables show significantly less oscillations and better approximate the non-distributed trajectories. 
A rigorous stability analysis of the augmented distributed dynamics and a performance-oriented design of the filters is up to future work. 6 CONCLUSIONS AND OUTLOOK We considered a convex optimization problem and showed how distributed optimization algorithms can be designed for a quite general class of problems with little structural requirements under mild assumptions on the communication network. We therefore extended the Lie 8 [10] B. Touri and B. Gharesifard, “Saddle-point dynamics for distributed convex optimization on general directed graphs,” in 2016 IEEE 55th Conference on Decision and Control (CDC), 2016, pp. 862–866. By the induction hypothesis we then have  h h jr̄+1 ,jr̄ , h jr̄ ,jr̄−1 , [ . . . , [h j3 ,j2 , h j2 ,j1 ] . . . ] i (z) = [h jr̄+1 ,jr̄ , e j1 f˜jr̄ ,j1 ](z) [11] C. Ebenbauer, S. Michalowsky, V. Grushkovskaya, and B. Gharesifard, “Distributed optimization over directed graphs with the help of Lie brackets,” in Proc. 20th IFAC World Congress, 2017, pp. 15 908–15 913. N = e j1 ( ∑ i =1 − e jr̄ ( ∂ f˜jr̄ ,j1 ∂zi ∑ ei i ∈I( jr̄ ) ∪I( jr̄+1 ) [12] S. Michalowsky, B. Gharesifard, and C. Ebenbauer, “Distributed extremum seeking over directed graphs,” in 2017 IEEE 56th Conference on Decision and Control (CDC), 2017, pp. 2095–2101. = e j1 ∂ f˜jr̄ ,j1 ∂z jr̄ (z)) e jr̄ f jr̄ (zI( jr̄ ) , zI( jr̄+1 ) ) ∂ f jr̄ ∂zi (zI( jr̄ ) , zI( jr̄+1 ) )) e j1 f˜jr̄ ,j1 (z) (z) f jr̄ (zI( jr̄ ) , zI( jr̄+1 ) ), (38) where e ji ∈ Rn+neq +nineq is the ji th unit vector. Here, we used that pi1 ir̄+1 is a simple path; hence, since I(ik1 ) and I(ik2 ) are disjunct for any k1 6= k2 , also jk1 6= jk2 for any k1 , k2 = 1, 2, . . . , r̄ + 1, k1 6= k2 . Using definition (37), we obtain (14). Finally, since jk ∈ I(ik ) and gik ik+1 6= 0 for all k = 1, 2, . . . , r − 1, all h jk+1 ,jk are admissible; hence the left-hand side of (14) is a Lie bracket of admissible vector fields. [13] ——, “A Lie bracket approximation approach to distributed optimization over directed graphs,” ArXiv e-prints arXiv:1711.05486 [math.OC], 2017, https:// arxiv.org/abs/1711.05486. [14] N. Biggs, Algebraic graph theory. Cambridge university press, 1993. [15] H.-B. Dürr and C. Ebenbauer, “On a class of smooth optimization algorithms with applications in control,” IFAC Proceedings Volumes, vol. 45, no. 17, pp. 291 – 298, 2012, 4th IFAC Conference on Nonlinear Model Predictive Control. 7.2 Proof of Proposition 1 Proof. Observe first that r −2 ∂ f jk ∂z j k +1 k =1 f jr−1 (zI( jr−1 ) , zI( jr ) ) ∏ [16] W. Liu, “An approximation algorithm for nonholonomic systems,” SIAM Journal on Control and Optimization, vol. 35, no. 4, pp. 1328–1365, 1997. (2) (zI( jk ) , zI( jk+1 ) ) (1) = f jr−1 (zI( jr ) ) f jr−1 (zI( jr−1 ) ) r −2 [17] H. J. Sussmann and W. Liu, “Limits of highly oscillatory controls and the approximation of general paths by admissible trajectories,” in 1991 IEEE 30th Conference on Decision and Control (CDC), 1991, pp. 437–442. r − 1 ∂ f (2) j × ∏ f j (zI( jk ) ) ∏ k =1 = (1) k (1) (2) f j (zI( jr ) ) f j (zI( j1 ) ) r −1 1 k =2 r −1 ∏ k =1 k −1 ∂z j k (zI( jk ) ) (2) ∂ fj (1) f j (zI( jk ) ) ∂zk−1 (zI( jk ) ). j k k Then, using (16) and applying Lemma 1, the result immediately follows. 7 APPENDIX 7.1 Proof of Lemma 1 7.3 Proof of Proposition 2 Proof. We prove this result by induction. First, for r = 1, (14) is trivially true by definition (13). Suppose now that the claim holds for all r ≤ r̄, r̄ > 1 and consider r = r̄ + 1. Define Proof. We first show by induction that h  i . . . [[ψm , ψm−1 ], ψm−2 ], . . . 
, \psi_2\big](z) = e_i \prod_{k=2}^{m} \eta_k(z_{j_k}).  (39)

For m = 2 it is clear that (39) holds. Suppose now that (39) holds for all m \le \bar{m}, \bar{m} \ge 2. For m = \bar{m} + 1 we then have, with a slight abuse of notation,

\big[[[\psi_{\bar{m}+1}, \psi_{\bar{m}}], \psi_{\bar{m}-1}], \ldots, \psi_2\big](z)
 = \Big[\, e_i \prod_{k=3}^{\bar{m}+1} \eta_k(z_{j_k}),\; e_i z_i \eta_2(z_{j_2}) \Big]
 = e_i \Big( e_i^{\top} \eta_2(z_{j_2}) + z_i e_{j_2}^{\top} \tfrac{\partial \eta_2}{\partial z_{j_2}}(z_{j_2}) \Big)\, e_i \prod_{k=3}^{\bar{m}+1} \eta_k(z_{j_k})
 \;-\; e_i \Big( \sum_{k=3}^{\bar{m}+1} e_{j_k}^{\top} \tfrac{\partial \eta_k}{\partial z_{j_k}}(z_{j_k}) \prod_{\ell \ne k} \eta_\ell(z_{j_\ell}) \Big)\, e_i z_i \eta_2(z_{j_2})
 = e_i \prod_{k=2}^{\bar{m}+1} \eta_k(z_{j_k}),  (40)

where we used that i \ne j_k for any k = 1, \ldots, m in the last step. This proves (39). Finally, using (39), we obtain

\big[[[\psi_m, \psi_{m-1}], \psi_{m-2}], \ldots, \psi_1\big](z)
 = \Big[\, e_i \prod_{k=2}^{m} \eta_k(z_{j_k}),\; e_i \eta_1(z_{j_1}) \int \eta_0(z_i)\, dz_i \Big]
 = e_i \eta_0(z_i) \prod_{k=1}^{m} \eta_k(z_{j_k}).  (41)

This finishes the proof.

The shorthand \tilde{f}_{j_{\bar{r}}, j_1} introduced in the proof of Lemma 1 above is defined as

\tilde{f}_{j_{\bar{r}}, j_1}(z) = f_{j_{\bar{r}-1}}(z_{I(j_{\bar{r}-1})}, z_{I(j_{\bar{r}})}) \prod_{k=1}^{\bar{r}-2} \tfrac{\partial f_{j_k}}{\partial z_{j_{k+1}}}(z_{I(j_k)}, z_{I(j_{k+1})}).  (37)
JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. 133 Low-Complexity Integer-Forcing Methods for Block Fading MIMO Multiple-Access Channels arXiv:1711.07977v1 [] 21 Nov 2017 Ricardo Bohaczuk Venturelli and Danilo Silva Abstract—Integer forcing is an alternative approach to conventional linear receivers for multiple-antenna systems. In an integer-forcing receiver, integer linear combinations of messages are extracted from the received matrix before each individual message is recovered. Recently, the integer-forcing approach was generalized to a block fading scenario. Among the existing variations of the scheme, the ones with the highest achievable rates have the drawback that no efficient algorithm is known to find the best choice of integer linear combination coefficients. In this paper, we propose several sub-optimal methods to find these coefficients with low complexity, covering both parallel and successive interference cancellation versions of the receiver. Simulation results show that the proposed methods attain a performance close to optimal in terms of achievable rates for a given outage probability. Moreover, a low-complexity implementation using root LDPC codes is developed, showing that the benefits of the proposed methods also carry on to practice. Index Terms—Block fading, integer-forcing linear receivers, lattices, root LDPC codes, successive interference cancellation. I. I NTRODUCTION Integer-forcing (IF) receivers are an alternative to conventional methods of equalization, such as zero-forcing (ZF) and minimum-mean-squared-error (MMSE) equalization [2] for multiple-input and multiple-output (MIMO) channels. The IF approach follows from the compute-and-forward framework [3], [4] for relay networks, where the receivers attempt to extract integer linear combinations of the transmitted messages from the received signals, before recovering the messages themselves. Although joint maximum likelihood (ML) receivers achieve the best performance among all methods, by searching over all possible transmitted codewords [5], their complexity is prohibitively high, increasing exponentially with the number of users. In contrast, IF receivers have a much lower complexity and can approach the ML performance in many situations [2]. Moreover, the performance of an IF receiver can be further improved in some situations by successive computation, leading to the so-called successive IF (SIF) receiver, analogously to the successive interference cancellation (SIC) technique for conventional linear receivers. The Ad Hoc Associate Editor coordinating the review of this manuscript and approving it for publication was Prof. Cristiano Magalhães Panazio. Ricardo Bohaczuk Venturelli and Danilo Silva are with the Department of Electrical and Eletronic Engineering, Federal University of Santa Catarina, Florianopolis-SC, Brazil (e-mails: [email protected], [email protected]). This work was partially supported by CAPES under Grant 1345815 and by CNPq under Grants 153535/2016-4, 429097/2016-6 and 310343/2016-0. A preliminary version of this paper was presented at the XXXIV Simpósio Brasileiro de Telecomunicações e Processamento de Sinais (SBrT’16), Santarém, PA, Brazil, August 30-September 4, 2016 [1]. Digital Object Identifier: 10.14209/jcis.2017.14 Recent works on IF include the design of practical channel codes compatible with IF [6], [7], efficient methods to select the coefficients of the integer linear combinations [8], [9], as well as its application to relay networks [10]. 
Moreover, the IF principle has been used not only as receive method in the MIMO uplink scenario, but also for precoding in the MIMO downlink scenario [11], [12] and even for source coding [13]. The main results about IF receivers consider static fading, where all symbols of a codeword are subject to the same channel fading. However, in a practical situation where a powerful code with large blocklength is used, it may not be realistic to assume that all symbols of the codeword are subject to the same channel fading. Therefore, channels that allow block fading [14], where the channel fading can vary during the transmission of a codeword, seem to be a more realistic model. In a recent work, El Bakoury and Nazer [15] generalize the IF approach to a block fading scenario. They described two decoding methods for block fading, which are called AM (arithmetic mean) and GM (geometric mean) decoding. The AM decoding method approximates the effective noise seen on all blocks (after equalization) as having the same variance, as in the case of static fading. On the other hand, the GM decoding method optimally exploits the diversity inherent in the channel variation, allowing to achieve higher rates than with AM decoding. Both decoding methods are applicable to both (non-successive) IF and SIF receivers. The rates achievable by all these receivers depend on the choice of an integer matrix A specifying the coefficients of the linear combinations that should be decoded. However, finding the optimal choice of A for GM-IF, AM-SIF and GM-SIF appears to be a hard problem for which no efficient approximation algorithm is known, making these receivers currently infeasible to implement in practice. Nevertheless, results obtained in [15] by exhaustive search demonstrate that optimal GM-IF and AM-SIF significantly outperform AM-IF in terms of achievable rates, while GM-SIF outperforms all of them. Our main contribution in this paper is to develop lowcomplexity optimization methods for these receivers (GMIF, AM-SIF and GM-SIF) which can closely approach the performance obtained by an optimal search. Each proposed method is based on optimizing a more tractable lower bound on the achievable rate, which can be performed very efficiently. Simulation results show that the proposed method for GM-IF has a small gap compared to optimal performance, while the remaining two methods, at least for the scenarios tested, have performance almost indistinguishable from the optimal one. Another contribution of this paper is to develop a low- JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. complexity implementation of these receivers using finitelength codes over a low-order constellation, in order to validate the information-theoretic results under practical constraints. Attaining the GM performance requires the use of fulldiversity codes, from which we have adopted root low-density parity-check (LDPC) codes. Simulation results are shown to be consistent with the theoretical ones, with an expected gap due to the finite codeword length. The remainder of the paper is organized as follows. The system model is described in Section II. Section III reviews integer forcing for static fading as well as block fading, while Section IV reviews successive integer forcing, again for both fading types. In Section V, we present our proposed methods for selecting A. Section VI describes our low-complexity implementation under practical constraints. 
Simulation results are shown in Section VII, covering the information-theoretic performance as well as the performance with practical codes. Lastly, our conclusions are presented in Section VIII. A. Notation For any x > 0, define log+ (x) , max(log(x), 0). We denote row vectors as lowercase bold letters (e.g., x) and matrices as uppercase bold letters (e.g., X). The `2 -norm of a vector x is denoted by kxk. The matrix XT denotes the transpose of X. We use I and 0, respectively, to denote an identity and an all-zero matrix of appropriate size, which should always be clear from the context. The set of all m × n matrices with entries from the set A is denoted Am×n . II. S YSTEM M ODEL Consider a discrete-time, real Gaussian MIMO multipleaccess channel (MAC) with NT single-antenna transmitters and one NR -antenna receiver, subject to block fading with F independent fading realizations per codeword. Specifically, let n be the codeword length and assume, for simplicity, that F divides n. For ` = 1, . . . , NT , let x` ∈ Rn denote the vector transmitted by the `th transmitter, which can be represented by the `th row of a matrix X ∈ RNT ×n . Similarly, let Y ∈ RNR ×n be a matrix whose jth row represents the vector received by the jth receive antenna. Assuming fading realizations of equal length, the received matrix can be expressed as   Y = Y(1) · · · Y(F ) (1) where, for i = 1, . . . , F , Y(i) = H(i) X(i) + Z(i) 134 We assume that the receiver has perfect knowledge of the channel realization H(1:F ) , while the transmitters do not have this knowledge and are only aware of the channel statistics. Each transmitted vector x` is assumed to be the encoding of a message w` ∈ W produced by the `th transmitter, where W is the message space. The encoder rate, which is the same for all transmitters, is defined as 1 (5) R = log2 |W|. n Moreover, the transmitted vectors must satisfy a (symmetric) power constraint 1 kx` k2 ≤ P. (6) n These assumptions on equal power and equal rate are reasonable since the transmitters do not have knowledge of the channel matrix. For convenience, we denote SNR = P/σ 2 . At the receiver, the decoder attempts to recover all messages, producing estimates ŵ1 , . . . , ŵNT . An error is said to occur if ŵ` 6= w` for any `. The error probability of the scheme (encoder/decoder pair) is Pe = E[Pe (H(1:F ) )], where Pe (H(1:F ) ) denotes the error probability for a fixed channel realization. For any fixed H(1:F ) , a rate R is said to be achievable if, for any , δ > 0 and sufficiently large n, there exists a scheme of rate at least R − δ such that Pe (H(1:F ) ) ≤ . For a given family of schemes (indexed by n), let Rscheme (H(1:F ) ) denote its maximum achievable rate under a fixed channel realization H(1:F ) . For a target rate R, the outage probability is defined as pout (R) , P[Rscheme (H(1:F ) ) < R], and for a fixed probability ρ ∈ (0, 1], the outage rate is defined as Rout (ρ) , sup{R : pout (R) ≤ ρ}. Remark: We have adopted a real-valued channel in order to facilitate comparison with optimal IF methods that require exhaustive search—whose complexity becomes prohibitively large over a complex-valued channel even for low dimension— as well as with the existing literature, which mostly considers real-valued channels. There is no loss of generality, since it is always possible to express a complex-valued channel Yc = Hc Xc + Zc as a real-valued channel        Re(Yc ) Re(Hc ) −Im(Hc ) Re(Xc ) Re(Zc ) = + . 
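The real-valued representation in the remark above is straightforward to construct numerically; a minimal sketch follows, with dimensions chosen purely for illustration.

```python
# Sketch: real-valued representation of a complex MIMO channel, as in the
# remark above (dimensions here are illustrative).
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, n = 2, 2, 4
Hc = rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))
Xc = rng.normal(size=(Nt, n)) + 1j * rng.normal(size=(Nt, n))

# Real-valued channel matrix [[Re(Hc), -Im(Hc)], [Im(Hc), Re(Hc)]]
H = np.block([[Hc.real, -Hc.imag],
              [Hc.imag,  Hc.real]])
X = np.vstack([Xc.real, Xc.imag])

Yc = Hc @ Xc                               # noiseless complex output
Y = H @ X                                  # noiseless real-valued output
assert np.allclose(Y, np.vstack([Yc.real, Yc.imag]))
print("real-valued model matches the complex one")
```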
Im(Yc ) Im(Hc ) Re(Hc ) Im(Xc ) Im(Zc ) Alternatively, all expressions presented here can be straightforwardly generalized to the complex case by replacing transpose with conjugate transpose. (2) H(i) ∈ RNR ×NT is the matrix of channel fading coefficients for the ith block, X(i) ∈ RNT ×(n/F ) is such that   X = X(1) · · · X(F ) (3) and Z(i) ∈ RNR ×(n/F ) is a Gaussian noise matrix with i.i.d. entries of zero mean and variance σ 2 . Note that H(i) remains constant throughout the transmission of a block of n/F symbols, but can vary independently between realizations. For convenience, let  H(1:F ) = H(1) , . . . , H(F ) . (4) III. I NTEGER F ORCING To aid the understanding, we first review integer forcing for static fading (F = 1), before describing its extension to block fading (F ≥ 2). For simplicity, the superscript indicating the block index is omitted when F = 1. A. Static Fading An integer-forcing receiver [2] shares the same basic structure of a conventional linear receiver, as illustrated in Fig. 1: first, the received matrix is linearly transformed by an equal- JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. y1 y2 .. . yNR ŵ1 SISO decoder B (over R) −1 SISO decoder .. . SISO decoder A (over Z) where 2 σeff,m = bm H − am ŵ2 ŵNT conventional linear receiver 135 2 P + bm 2 σ2 is the per-component variance of the vector zeff,m , and am , bm and zeff,m are the mth row of A, B and Zeff , respectively. The optimal equalization matrix B, for a given integer matrix A, can be found using MMSE estimation [2] as B = AHT (SNR−1 I + HHT )−1 Fig. 1. Integer-forcing receiver. ization matrix; then, each resulting stream is individually processed by a channel decoder. The difference, in the case of the integer-forcing receiver, is that the equalizer attempts to estimate not the transmitted signals directly but rather an integer linear transformation of the transmitted signals, which is then inverted after noise is removed. Crucial to this noise removal step at the channel decoders is the use of a lattice code common to all transmitters [2], [3]. A lattice Λ ∈ Rn is a discrete subgroup of Rn , i.e., it is closed under integer linear combinations [16]. In particular, a lattice can be expressed as Λ = {x ∈ Rn : x = uG, u ∈ Zn } where G ∈ Rn×n is the generator matrix of Λ. It follows that if every x` ∈ Λ, then aX ∈ Λ, for all a ∈ Z1×NT [16]. In other words, since aX is also a codeword from the same code Λ, then a decoder for Λ is able to decode it (i.e., “denoise” it), regardless of the value chosen for a [3]. In more detail, consider the received matrix in (2), Y = HX + Z. (7) NT ×NT be a full-rank integer matrix. The receiver Let A ∈ Z applies the equalization matrix B ∈ RNT ×NR to create an effective channel output Yeff = BY = BHX + BZ (8) (9) = AX + Zeff (10) = V + Zeff (11) Zeff = (BH − A)X + BZ (12) where is the so-called effective noise [2] and V = AX (13) is an integer linear transformation of X. Assuming that each x` is chosen from the same lattice Λ ⊆ Rn , we have that each row of V is also a lattice point from Λ [3]. Thus, if decoding is successful and V is recovered, then X can also be recovered, provided A is full-rank.1 As a consequence, it is shown in [2], [3] that the following rate is achievable ! 1 P + (14) RIF (H, A, B) , min log 2 m 2 σeff,m 1 One might think that a stricter condition is required, namely, that A is invertible. This would be true if any element of Λ could be chosen for transmission without a power constraint. 
However, when nested lattice shaping is used to satisfy the power constraint, it is possible to show that requiring rank A = NT is enough. See [2], [17] for details. (15) (16) In this case, we have [4] 2 σeff,m = σ 2 am MaT m (17) M = (SNR−1 I + HT H)−1 (18) where and the achievable rate becomes 1 RIF (H, A) , min log+ m 2  SNR am MaT m  . (19) Optimizing the choice of A, we have the achievable rate RIF (H) , max A:rank A=NT RIF (H, A). (20) As can be seen, the optimal matrix A solving (20) consists of a set of linearly independent integer vectors minimizing (17). Since M is symmetric and positive definite [4], it admits a Cholesky decomposition M = GGT , where G ∈ RNT ×NT , leading to 2 σeff,m = σ 2 kam Gk2 . (21) Thus, the optimal solution can be described as the integer coefficients (under the basis G) of a set of NT shortest linearly independent vectors in the lattice generated by G. This is known as the Shortest Independent Vector Problem (SIVP) [18], which is believed to be NP-Hard [19]. However, suboptimal algorithms for basis reduction2 exist that can find an approximation in polynomial time, such as the LenstraLenstra-Lovasz (LLL) algorithm [21]. Note that, if A = I is chosen, then the scheme reduces to MMSE equalization [2]. Thus, integer forcing generalizes linear equalization, providing potentially higher achievable rates. B. Block Fading In the case of block fading, a complication arises, since now each block of the transmitted matrix experiences a different channel realization [14]. While it is possible to independently equalize each block of the received matrix, the resulting integer linear transformation A must be the same for all blocks, in order for the rows V to remain lattice codewords [15]. 2 For the special case of N T = 2, an optimal basis can be found very efficiently using Lagrange’s algorithm [20]. JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. More precisely, for each ith block, let A(i) ∈ ZNT ×NT and B(i) ∈ RNT ×NR be its corresponding integer and equalization matrices, respectively, and compute the effective channel output for the block as Yeff,(i) = B(i) Y(i) (22) = B(i) H(i) X(i) + B(i) Z(i) (23) = A(i) X(i) + Zeff,(i) (24) = V(i) + Zeff,(i) (25) where V(i) = A(i) X(i) and Zeff,(i) = (B(i) H(i) − A(i) )X(i) + B(i) Z(i) . (26) It follows that Yeff = V + Zeff (27) where Yeff , V and Zeff denote the horizontal concatenation of Yeff,(i) , V(i) and Zeff,(i) , respectively, for i = 1, . . . , F . However, we can see that, in general, the rows of   V = A(1) X(1) · · · A(F ) X(F ) (28) are not guaranteed to be lattice points, since we cannot generally express V as an integer linear transformation of X. For instance, the first half of a codeword concatenated to the second half of another codeword is not guaranteed to form a codeword in a general code.3 Thus, for integer forcing to work over a block fading channel, we should require all A(i) to be equal [15], A(i) = A, i = 1, . . . , F (29) so that V = AX. (30) For a given A, the optimal equalization matrix for each ith block can be obtained as  −1 −1 T B(i) = AHT SNR I + H H (31) (i) (i) (i) and the per-component variance of the vector zeff,m,(i) , the m-th row of the Zeff,(i) , is given as where 2 σeff,m,(i) = σ 2 am M(i) aT m (32)  −1 M(i) = SNR−1 I + HT (i) H(i) (33) and am is the mth  row of A.  Let zeff,m = zeff,m,(1) · · · zeff,m,(F ) denote the mth row of Zeff . 
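As an illustration of the static-fading case, the rate (19) can be evaluated directly for candidate integer matrices. In the sketch below a brute-force search over small integer coefficients stands in for the LLL/SIVP reduction mentioned above; the channel realization, SNR, and coefficient range are illustrative assumptions.

```python
# Sketch: static-fading IF rate (19) for a 2x2 channel. Brute force over small
# integer coefficients stands in for the LLL/SIVP reduction discussed above.
import itertools
import numpy as np

rng = np.random.default_rng(1)
Nt, SNR = 2, 10**(20 / 10)                     # 20 dB, illustrative
H = rng.normal(size=(Nt, Nt))
M = np.linalg.inv(np.eye(Nt) / SNR + H.T @ H)  # eq. (18)

def rate_if(A):
    sig2 = np.einsum('mi,ij,mj->m', A, M, A)   # a_m M a_m^T for each row, eq. (17)
    return 0.5 * np.log2(np.maximum(SNR / sig2, 1.0)).min()   # eq. (19)

# Enumerate candidate rows with entries in {-3,...,3}, keep full-rank pairs.
rows = [np.array(a) for a in itertools.product(range(-3, 4), repeat=Nt)
        if any(a)]
best_rate, best_A = -1.0, None
for a1, a2 in itertools.combinations(rows, 2):
    A = np.vstack([a1, a2])
    if abs(np.linalg.det(A)) > 1e-9 and rate_if(A) > best_rate:
        best_rate, best_A = rate_if(A), A

print("best A:\n", best_A, "\nR_IF =", round(best_rate, 3),
      "bits/dim  (MMSE baseline:", round(rate_if(np.eye(Nt)), 3), ")")
```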
Since the noise variance (32) may now be different for each block, an achievable rate expression is not immediate to obtain and may, in fact, depend on the type of decoder used [15]. Specifically, it depends on whether the decoder properly exploits the diversity inherent in the block fading channel. Two decoding methods are proposed and analyzed in [15]: the Arithmetic Mean (AM) and the Geometric Mean (GM) decoders discussed below. 136 1) AM decoder: This decoder does not attempt to exploit the channel variation and instead treats each component of the effective noise vector zeff,m as having the same variance [15], 2 denoted by σAM,m . This variance is given by the arithmetic mean (hence the decoder name) of the variance of the effective noise on each block, 2 σAM,m = F 1 X 2 . σ F i=1 eff,m,(i) (34) It is shown in [15] that the AM-IF receiver achieves the following rate ! 1 P + RAM-IF (H(1:F ) , A) = min log 2 m 2 σAM,m   1 SNR + = min log (35) m 2 am MAM aT m where MAM = 1 X M(i) . F i (36) Thus, the maximum rate achievable by this method is RAM-IF (H) , max A:rank A=NT RAM-IF (H(1:F ) , A). (37) Note that (35) is identical to (19) with M replaced by MAM . Thus, (37) can be solved in the same way as in the case of static fading [15]. In particular, the LLL algorithm (or similar) may be used to find an approximately optimal solution in polynomial time. A special case of AM-IF consists of choosing A = I, which corresponds to conventional MMSE equalization followed by an AM decoder [15]. This scheme is referred to as AM-MMSE and its achievable rate denoted as RAM-MMSE (H(1:F ) ) = RAM-IF (H(1:F ) , I). (38) 2) GM decoder: This decoder optimally exploits the fact that the effective noise variance is not constant across the blocks [15]. The rate achievable by this method can be understood as the average achievable rate among all the individual blocks, as if they could be treated as parallel channels (which is clearly an upper bound). More precisely, the rate achievable by GM-IF is proven in [15] to be ! P 1 X1 + log (39) RGM-IF (H(1:F ) , A) = min 2 m F 2 σeff,m,(i) i ! 1 P + = min log (40) 2 m 2 σGM,m where ! F1 2 σGM,m = Y 2 σeff,m,(i) (41) i ! F1 2 =σ · Y am M(i) aT m . (42) i 3 Unless the code can be expressed as the Cartesian product of shorterlength codes. But this effectively violates the assumption of block fading, corresponding to static fading with a shorter codeword length. Note that (41) is the geometric mean (hence the decoder name) of the variance of the effective noise in each block. JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. Therefore, the maximum achievable rate with GM-IF is RGM-IF (H) , max A:rank A=NT RGM-IF (H(1:F ) , A). (43) It is useful pointing out that, for the same matrix A, the GM decoder always achieves a rate at least as high as that of the 2 2 AM decoder, since σGM ≤ σAM due to AM-GM inequality [22]. However, there is currently no known efficient method to find an optimal (or approximately optimal) solution for A in (43) [15], making optimal GM-IF currently infeasible to implement in practice, especially as NT grows. As before, a special case of GM-IF consists of choosing A = I, which corresponds to conventional MMSE equalization followed by a GM decoder [15]. This scheme is referred to as GM-MMSE and its achievable rate denoted as RGM-MMSE (H(1:F ) ) = RGM-IF (H(1:F ) , I). (44) 137 Note that the generalized covariance matrix of N is the identity matrix [24]. 
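A small numerical sketch makes the AM/GM comparison concrete: for a fixed full-rank integer matrix A (chosen by hand here) and illustrative per-block channel realizations, it evaluates (35) and (39) and exhibits the ordering noted above.

```python
# Sketch: AM vs. GM decoding rates (35) and (39) for a fixed integer matrix A
# over F fading blocks (illustrative 2x2 channel; A chosen by hand).
import numpy as np

rng = np.random.default_rng(2)
Nt, F, SNR = 2, 2, 10**(20 / 10)
H = [rng.normal(size=(Nt, Nt)) for _ in range(F)]              # H_(1), ..., H_(F)
M = [np.linalg.inv(np.eye(Nt) / SNR + Hi.T @ Hi) for Hi in H]  # eq. (33)

A = np.array([[1, 1],
              [0, 1]])                                         # any full-rank integer A

# Per-block ratios a_m M_(i) a_m^T, i.e. eq. (32) with the sigma^2 factor removed
nsr = np.array([[a @ Mi @ a for Mi in M] for a in A])          # shape (Nt, F)

R_am = 0.5 * np.log2(np.maximum(SNR / nsr.mean(axis=1), 1.0)).min()    # eq. (35)
R_gm = (0.5 * np.log2(np.maximum(SNR / nsr, 1.0))).mean(axis=1).min()  # eq. (39)

print(f"R_AM-IF = {R_am:.3f}  R_GM-IF = {R_gm:.3f}  (GM >= AM as expected)")
```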
The effective channel output (11) can be rewritten as Yeff = V + LN (48) where V = AX. Since L is a lower triangular matrix, we have that m X yeff,m = vm + `m,j nj (49) j=1 where `m,j denotes the (m, j) entry of L. Note that yeff,m is not affected by nm0 for any m0 > m. The decoder acts in each row of Yeff in a successive way. Suppose that v1 is successfully recovered. Then we can compute 1 (yeff,1 − v1 ) (50) n1 = `1,1 and remove its influence on the second row of Yeff , IV. S UCCESSIVE I NTEGER -F ORCING One way to improve the performance of a conventional linear receiver is to apply successive interference cancellation (SIC) [23]: after a codeword is successfully decoded, the receiver can use it as side information in order to cancel part of the interference, reducing the variance of the effective noise. This principle can be applied to integer-forcing as well [24]. Specifically, in a successive integer-forcing (SIF) receiver, each integer linear combination of codewords that is successfully decoded is used to cancel its contribution to the effective noise affecting the remaining linear combinations, potentially enabling a higher achievable rate. Note that SIF decoding must be done sequentially, in contrast to conventional IF, which may be done in parallel. Thus, the decoding order is relevant. We assume that decoding follows the index of am , m = 1, . . . , NT . Thus, in contrast to conventional IF, the ordering of the rows of A may have an impact on the achievable rates for SIF. We start by reviewing SIF for static fading, followed by its extension to block fading. y20 , yeff,2 − `2,1 n1 = v2 + `2,2 n2 (51) so that v2 can be decoded in the presence of less noise. Generalizing, suppose that v1 , . . . , vm−1 have been successfully recovered, providing the estimates n1 , . . . , nm−1 . We can remove the influence of this noise to obtain 0 ym , yeff,m − m−1 X `m,j nj = vm + `m,m nm (52) j=1 so that vm can be decoded under noise of variance `2m,m . Then, the corresponding noise vector can be estimated as nm = 1 (y0 − vm ). `m,m m (53) Proceeding this way, it can be shown that the following rate is achievable [24]   P 1 + RSIF (H, A) = min log . (54) m 2 `2m,m By choosing the optimal A, the maximum achievable rate is A. Static Fading RSIF (H) = Recall the effective channel (11) described in Section III. The SIF receiver exploits the fact that the rows Zeff are correlated in general and starts by performing a whitening transformation. Consider the generalized covariance matrix of Zeff , defined as 1 KZeff , E[Zeff ZT (45) eff ]. n Assuming that the optimal equalization matrix B is used, it is possible to show that [2], [24] KZeff = σ 2 AMAT (46) where M is defined in (18). Since KZeff is a symmetric positive definite matrix [24], it admits a Cholesky decomposition KZeff = LLT , where L is a lower triangular matrix with strictly positive diagonal entries. Define N , L−1 Zeff . (47) max A:rank A=NT RSIF (H, A). (55) However, finding the optimal matrix for the SIF decoder is a different (and harder) problem than that for the IF decoder. In particular, each row permutation of A may give a different achievable rate. As shown in [24], it is possible to restrict the choice of A to the class of unimodular matrices and the optimal solution is obtained by finding a Korkin-Zolotarev (KZ) basis for the lattice generated by G ∈ RNT ×NT , obtained from the Cholesky decomposition of M = GGT . 
Finding a KZ basis for a lattice involves finding a shortest lattice vector and is therefore an NP-hard problem [20]. Suboptimal algorithms can be used, for example, applying the LLL algorithm NT successive times, where in each iteration the dimension of the underlying lattice decreases [20]. Note that it is possible to choose A = π(I), where π(I) denotes a row permutation of the identity matrix. In this case, the method reduces to conventional SIC decoding, which is referred to as MMSE-SIC. In principle, all possible permutations JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. could be tested, however this quickly becomes unattractive as the number of users increases. Heuristic methods [23] can be applied to find a good decoding order, for instance, decoding 2 first the user with the highest SNR (i.e., lowest σeff,m ), at the expense of some performance degradation. B. Block Fading As in the case of parallel IF, for successive IF in the block fading scenario it is required that A remain the same for all blocks so that the rows of V in (48) are still lattice points and lattice decoding and subsequent inversion is possible. All the other steps are the same as for static fading applied separately to each ith block, namely: equalization by B(i) given in (31), Cholesky decomposition of 1 KZeff,(i) = E[Zeff,(i) ZT eff,(i) ] n/F = σ 2 AM(i) AT = L(i) LT (i) (56) where M(i) is defined in (33), and successive noise cancellation and estimation from m−1 X 0 ym,(i) , yeff,m,(i) − `m,j nj,(i) (57) j=1 = vm,(i) + `m,m,(i) nm,(i) . (58) Note that the effective channel after cancellation can be expressed more simply as 0 ym = vm + z0m (59) 0 0 and z0m are the horizontal concatenation of ym,(i) where ym 0 and zm,(i) = `m,m,(i) nm,(i) , respectively, for i = 1, . . . , F . In particular, each ith block of the reduced effective noise z0m has a possibly different variance `2m,m,(i) . For the lattice decoding step, either of the two decoding methods discussed before, namely AM and GM decoding, may be used [15]. Their generalization to the case of block fading is straightforward. 1) AM decoder: This decoder treats z0m as white noise of variance F 1 X 2 2 σAM-SIF,m = ` . (60) F i=1 m,m,(i) It follows that the achievable rate of AM-SIF is given by ! 1 P + RAM-SIF (H(1:F ) , A) , min log (61) 2 m 2 σAM-SIF,m max A:rank A=NT RAM-SIF (H(1:F ) , A). π where ! F1 2 σGM-SIF,m = Y `2m,m,(i) . (66) i Note that, due the AM-GM inequality, for the same matrix A, the GM decoder achieves a rate at least as high as that of the AM decoder. The maximum achievable rate for GM-SIF is then given by RGM-SIF (H(1:F ) ) = max A:rank A=NT RGM-SIF (H(1:F ) , A). (67) However, there is currently no known efficient method to find an optimal or approximately optimal solution for A, making optimal GM-SIF infeasible to implement in practice. Even an exhaustive search is more costly for GM-SIF than for GMIF, since now all row permutations of the same A must be considered. As a special case, it is always possible to choose A = π(I), which corresponds to conventional SIC with a GM decoder. This method is referred to as GM-SIC and its achievable rate is given by RGM-SIC (H(1:F ) ) = max RGM-SIF (H(1:F ) , π(I)). π (68) V. P ROPOSED M ETHODS Although AM-SIF, GM-IF and GM-SIF all have higher achievable rates than AM-IF, the fact that no efficient algorithm is known to find even an approximately optimal A can undermine the practical applicability of these methods. 
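For reference, evaluating the successive-decoding rates (61) and (64) for a fixed A only requires one Cholesky factorization per block; the difficulty lies entirely in the search over A. A minimal sketch of this evaluation step follows, where the channel realization, SNR, and the matrix A are illustrative assumptions.

```python
# Sketch: evaluating the successive-IF rates (61) and (64) for a fixed A via
# per-block Cholesky factorizations (illustrative 2x2 channel, hand-picked A).
import numpy as np

rng = np.random.default_rng(5)
Nt, F, SNR = 2, 2, 10**(20 / 10)
M = [np.linalg.inv(np.eye(Nt) / SNR + Hi.T @ Hi)
     for Hi in (rng.normal(size=(Nt, Nt)) for _ in range(F))]   # eq. (33)

A = np.array([[1, 1],
              [0, 1]], dtype=float)

# Diagonal entries l_{m,m,(i)}^2 of the Cholesky factors of A M_(i) A^T, cf. (56);
# the sigma^2 factor cancels against P in the SNR ratios below.
l2 = np.array([np.diag(np.linalg.cholesky(A @ Mi @ A.T))**2 for Mi in M]).T  # (Nt, F)

R_am_sif = 0.5 * np.log2(np.maximum(SNR / l2.mean(axis=1), 1.0)).min()    # eq. (61)
R_gm_sif = (0.5 * np.log2(np.maximum(SNR / l2, 1.0))).mean(axis=1).min()  # eq. (64)
print(f"R_AM-SIF = {R_am_sif:.3f}  R_GM-SIF = {R_gm_sif:.3f}")
```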
In this section, we propose four suboptimal, low-complexity methods for choosing the integer matrix A. The first two are applicable to GM-IF, the third is applicable to AM-SIF, while the fourth applies to GM-SIF. A. Proposed Method 1 (GM-IF) Let (69) A:rank A=NT (62) However, there is currently no known efficient method to find an optimal or approximately optimal choice of A, which should be chosen to minimize (60). Note that, if we choose A = π(I), then the method reduces to conventional SIC with an AM decoder. This method is referred to as AM-SIC and its achievable rate is given by RAM-SIC (H(1:F ) ) , max RAM-SIF (H(1:F ) , π(I)). 2) GM decoder: As before, this decoder attempts to optimally exploit the variation of the noise statistics across blocks. The achievable rate for GM-SIF can be computed as the average achievable rate all blocks, given by [15] ! SNR 1 X1 + log (64) RGM-SIF (H(1:F ) , A) = min m F 2 `2m,m,(i) i ! 1 SNR + = min log (65) 2 m 2 σGM-SIF,m AAM-IF , arg max RAM-IF (H(1:F ) , A) which, after optimizing over A, becomes RAM-SIF (H(1:F ) ) , 138 (63) be the optimal matrix A for AM-IF. We propose to use either this matrix or the identity matrix, depending on which one gives the highest rate under GM-IF decoding. Let A1 = {I, AAM-IF }. The rate achievable by this method is given by Rprop1 (H(1:F ) ) , max RGM-IF (H(1:F ) , A). A∈A1 (70) Note that, if matrix AAM is chosen, then a rate at least as high as that of AM-IF is achieved, since for the same JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. choice of A, the GM decoder always outperforms the AM decoder. On the other hand, if the identity matrix is chosen, then the proposed method becomes the same as GM-MMSE and, therefore, achieves the same rate. Thus, the proposed method achieves rates as high as both GM-MMSE and AM-IF. The complexity of this method is dominated by that of finding an optimal matrix for AM-IF, which can be approximated in polynomial time with the LLL algorithm. Therefore, the complexity is the same as that of AM-IF. B. Proposed Method 2 (GM-IF) In addition to the choices discussed above, we propose to test also the optimal matrix A(i) for each ith block that would be obtained with IF under static fading, namely A(i) , arg max RIF (H(i) , A) (71) A:rank A=NT for i = 1, . . . , F . Let A2 = {I, AAM-IF , A(1) , . . . , A(F ) }. The rate achievable by this method is given by Rprop2 (H(1:F ) ) , max RGM-IF (H(1:F ) , A). A∈A2 (72) Note that Proposed Method 1 chooses a matrix which may be “reasonably good” for all blocks simultaneously but which is not necessarily optimal for any block. Proposed Method 2 expands this choice by including matrices which are optimal for at least one block, even if it they are worse for the others blocks. A reasoning behind this approach is that, in contrast to the AM decoder, the GM decoder is not limited by the performance of the worst block, since individual rates are added in (39). The complexity of Proposed Method 2 is higher than that of Proposed Method 1 since it is necessary run the LLL algorithm F +1 times. Since the LLL algorithm can be done in polynomial time, this proposed method still viable in practice, especially for small F . C. 
Proposed Method 3 (AM-SIF) Recall that the AM-SIF receiver correctly takes into account the fact that the effective noise matrix has a different generalized covariance matrix KZeff,(i) for each block, using each corresponding L(i) for noise cancellation; only the lattice decoding step treats the reduced effective noise z0m as having 2 equal variance σAM-SIF,m across blocks. An upper bound on this variance can be obtained by treating each block of the effective noise matrix Zeff as having the same generalized covariance matrix, given by  1  KZeff = E Zeff ZT eff n i 1X h = E Zeff,(i) ZT eff,(i) n i 1 X = KZeff,(i) (73) F i 1 X 2 = σ AM(i) AT (74) F i = σ 2 AMAT (75) 139 P where M = F1 i M(i) . From this point on, we can proceed similarly to the case of static fading (Section IV-A), obtaining a reduced effective noise z0m with variance `2m,m , where L is a lower triangular matrix with positive diagonal entries given by the Cholesky decomposition KZeff = LLT . It follows 2 that σAM-SIF,m ≤ `2m,m , since a suboptimal noise cancellation scheme is used.4 To be clear, the scheme described above uses block IF equalization, followed by static noise cancellation (SNC) and AM decoding, in contrast to AM-SIF, which uses block noise cancellation. To distinguish it from AM-SIF, we refer to this scheme as AM-SIF-SNC. Its achievable rate is then given by   P 1 (76) RAM-SIF-SNC (H(1:F ) , A) = min log+ 2 m 2 `m,m which is a lower bound on RAM-SIF (H(1:F ) , A). Let AAM-SIF-SNC , arg max RAM-SIF-SNC (H(1:F ) , A) (77) A:rank A=NT be the optimal matrix A for AM-SIF-SNC. We propose to use this matrix for AM-SIF. The rate achievable by this method is given by Rprop3 (H(1:F ) ) , RAM-SIF (H(1:F ) , AAM-SIF-SNC ). (78) Note that optimizing (76) is exactly the same problem as optimizing (54). Thus, as discussed in Section IV-A, AAM-SIF-SNC can be computed by KZ reduction of the lattice generated by G ∈ RNT ×NT , obtained from the Cholesky decomposition of M = GGT , and this procedure can be approximated by applying the LLL algorithm NT times. D. Proposed Method 4 (GM-SIF) We can extended the same ideas of Proposed Method 2 to the case of successive decoding, simply by redefining (71) and (72) with their SIF counterparts, while using AAM-SIF-SNC instead of AAM-SIF , as in Proposed Method 3. However, since the decoding order is important, in principle we would have to test all permutations of the identity matrix, but that number of permutations grows exponentially with the number of users. To avoid this complexity, we simply exclude the identity matrix (and all its permutations) from the set of possible choices for A. Let A4 = {AAM-SIF-SNC , ASIF,(1) , . . . , ASIF,(F ) }, where, for i = 1, . . . , F , ASIF,(i) , arg max RSIF (H(i) , A) (79) A:rank A=NT is the optimal matrix for the ith block that would be obtained with SIF under static fading. The rate achievable by this method is given by Rprop4 (H(1:F ) ) = max RGM-SIF (H(1:F ) , A). A∈A4 (80) Similarly to Proposed Method 2, the complexity of this method grows linearly with the number of blocks. However, finding AAM-SIF-SNC as well as each ASIF,(i) requires running the LLL algorithm NT times, for a total of NT (F + 1) runs. 4 This upper bound can also be derived directly by diagonalizing the positive P T definite matrix LLT = F1 i L(i) L(i) . JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. 
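The candidate-set idea shared by Proposed Methods 1, 2 and 4 can be sketched compactly: build a short list of candidate matrices (the identity, the AM-optimized choice, and the per-block static-fading choices) and keep the one with the largest GM rate. In the sketch below, a brute-force search over small integers stands in for the LLL reduction, and the channel realization, SNR, and search range are illustrative assumptions.

```python
# Sketch of the candidate-set idea behind Proposed Methods 1 and 2 (GM-IF):
# evaluate R_GM-IF over {identity, AM-IF choice, per-block static choices}
# and keep the best. Brute force over small integers stands in for LLL.
import itertools
import numpy as np

rng = np.random.default_rng(3)
Nt, F, SNR = 2, 2, 10**(20 / 10)
M = [np.linalg.inv(np.eye(Nt) / SNR + Hi.T @ Hi)
     for Hi in (rng.normal(size=(Nt, Nt)) for _ in range(F))]
M_am = sum(M) / F                                            # eq. (36)

def nsr(A, Mi):                      # a_m Mi a_m^T for every row of A
    return np.einsum('mi,ij,mj->m', A, Mi, A)

def r_gm_if(A):                      # eq. (39)
    per_block = [0.5 * np.log2(np.maximum(SNR / nsr(A, Mi), 1.0)) for Mi in M]
    return np.mean(per_block, axis=0).min()

def best_A_for(Mi):                  # brute-force stand-in for LLL on one metric
    rows = [np.array(a) for a in itertools.product(range(-3, 4), repeat=Nt) if any(a)]
    best, best_A = np.inf, None
    for pair in itertools.combinations(rows, 2):
        A = np.vstack(pair)
        worst = nsr(A, Mi).max()     # minimize the worst-row metric
        if abs(np.linalg.det(A)) > 1e-9 and worst < best:
            best, best_A = worst, A
    return best_A

candidates = [np.eye(Nt, dtype=int), best_A_for(M_am)] + [best_A_for(Mi) for Mi in M]
best = max(candidates, key=r_gm_if)
print("chosen A:\n", best, "\nR_prop2 =", round(r_gm_if(best), 3), "bits/dim")
```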
TABLE I C OMPLEXITY OF I NTEGER -F ORCING M ETHODS Method Complexity AM/GM-MMSE AM-IF Proposed Method 1 Proposed Method 2 GM/AM-SIC Proposed Method 3 Proposed Method 4 Optimal GM/AM-SIF O(NT4 log(NT )) O(NT4 log(NT )) O((F + 1)NT4 log(NT )) O(NT !) O(NT4 log(NT )) O((F + 1)NT4 log(NT )) √ O(( 1 + SNR)NT ) E. Summary of Complexity Table I shows the complexity of finding A for each method discussed. For AM/GM-MMSE methods, since A = I is known a priori, the complexity is negligible. The AM-IF and Proposed Method 1 have the same complexity, corresponding to a single run of the LLL algorithm [9]. Note that, for fixed F , Proposed Method 2 approaches the complexity of AM-IF and Proposed Method 1. For the successive scenario, the optimal choice of A in AM/GM-SIC is found by testing all permutations of the identity matrix. Proposed Method 3 and Proposed Method 4 have similar complexity as Proposed Method 1 and 2, respectively. Finally, the complexity of optimal GM/AMSIF by exhaustive search follows from the bound in [2] (see also [2], [9]). Note that, besides finding A, the receiver operation includes also the tasks of equalization and channel decoding, whose complexity scales with the blocklength n and is identical for each method in the same category (parallel or successive). Thus, the task of finding A tends to take a smaller fraction of the overall decoding time as n grows. VI. I MPLEMENTATION WITH P RACTICAL C ODES All the achievable rate results discussed above assume the use of lattice codes of asymptotically high dimension over an asymptotically large constellation. However, in practice, a finite-length code and a finite-order modulation must be used. In this section, we discuss how practical lattice encoders and decoders for an IF receiver may be implemented with low complexity. We focus on the use of binary LDPC codes with 2-PAM modulation. A. Encoding and Decoding We start with conventional IF; the extension to successive IF is straightforward. To simplify the description, with a slight abuse of notation, we consider the finite field of size 2, denoted Z2 , as a subset of the integers, Z2 = {0, 1} ⊆ Z. Suppose the `th user encodes its message as a codeword c` ∈ C from a linear (n, k) block code C over Z2 . The codeword c` is then modulated into a vector x` ∈ {− 21 , 12 }n from a 2-PAM constellation, computed as x` = c` + d` mod 2 (81) 140 where d` ∈ {− 12 , 12 }n is a (discrete) dither vector independent from x` and known to the receiver, and the x mod 2 operation is applied element-wise and assumed to return a real-valued number in the interval (−1, 1]. Note that the dither vector must be used in order to reduce the transmit power from the {0, 1} constellation to the {− 21 , 12 } constellation, and it could be interpreted more simply as a modulation map. In this case, a simple choice would be d` = (− 21 , . . . , − 12 ). However, using a random dither uniformly distributed over {− 12 , 12 }n is more convenient for our purposes since, as we shall see, it makes the error probability independent from the transmitted codeword. As a consequence of the use of dithers, the transmitted vector x` is not a lattice point anymore, in contrast to the description is sections III and IV. However, dithers can be easily removed at the receiver with a simple modification, re-enabling the results of the those sections. More precisely, let C and D be matrices whose `th row is c` and d` , respectively. 
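The transmitter-side mapping (81) is easy to exercise in isolation. The sketch below applies it to a random binary matrix standing in for actual codewords (the dither, blocklength, and "code" are illustrative assumptions) and checks that removing the dither recovers the binary data; the receiver-side processing continues below.

```python
# Sketch of the transmitter-side mapping (81): a binary codeword is dithered
# and mapped onto the {-1/2, +1/2} 2-PAM constellation. The toy "code" here
# is a random binary matrix, used only to exercise the mapping.
import numpy as np

rng = np.random.default_rng(4)
Nt, n = 2, 8
C = rng.integers(0, 2, size=(Nt, n))                 # codewords c_l in {0,1}^n
D = rng.choice([-0.5, 0.5], size=(Nt, n))            # dither d_l, known to the receiver

def mod2(x):
    """Element-wise mod-2 reduction onto the interval (-1, 1]."""
    return x - 2.0 * np.ceil(x / 2.0 - 0.5)

X = mod2(C + D)                                      # eq. (81): x_l = [c_l + d_l] mod 2
assert set(np.unique(X)) <= {-0.5, 0.5}              # transmitted symbols are 2-PAM
assert np.array_equal(mod2(X - D) % 2, C)            # dither removal recovers c_l
print("2-PAM mapping and dither removal check out")
```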
The receiver computes the effective channel output as   Yeff = B(1) Y(1) · · · B(F ) Y(F ) − AD mod 2 (82) = AC + Zeff mod 2 (83) = V + Zeff mod 2 (84) where Zeff is defined as in Section III and V = AC mod 2. Note that each row vm of V is a codeword from C. Thus, we recover the same effective channel (11), except for the mod-2 operation.5 For j = 1, . . . , n, let yeff,m [j], vm [j], zeff,m [j] denote the jth component of yeff,m , vm and zeff,m , respectively. In order to use a belief propagation decoder, it is necessary to compute the log-likelihood ratio (LLR) at the channel output, defined as   P[yeff,m [j] | vm [j] = 0] (85) LLR[j] = log P[yeff,m [j] | vm [j] = 1] for j = 1, . . . , n, where P(·|·) denotes conditional probability. To simplify the decoder, the (non-Gaussian) effective noise component zeff,m [j] is treated as a Gaussian random variable 2 with zero mean and variance σeff,m [j], where ( 2 σAM,m , for AM decoding 2 σeff,m [j] = (86) 2 σeff,m,(I[j]) , for GM decoding and I[j] denotes the index of the block to which the jth component belongs. However, since (84) is a mod-2 channel, there is an infinite number of noise realizations that lead to any given channel output, resulting in the expression ! X (yeff,m [j] − vm [j] − k)2 P[yeff,m [j]|vm [j]] = exp − . 2 2σeff,m [j] k∈2Z (87) In order to further simplify the LLR computation, we can approximate the above expression by keeping only the largest 5 Actually, a modulo-lattice channel is required for the results in [2], which is omitted in the IF overview given here and in [2]. Thus, some modulo operation must be used at the receiver if the transmitted vectors satisfy a power constraint. JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. 8 It is well-known that an important parameter characterizing the performance of a code for a fading channel is its diversity order, defined as [25] Real LLR Approximated LLR 6 141 4 d,− LLR[j] (91) where Pe is the error probability of the decoder. For a block Rayleigh fading channel, the diversity order of a q-ary code is known to satisfy a Singleton-like bound [25]    R (92) d≤1+ F 1− log2 q 2 0 -2 -4 -6 -8 -1 log Pe SNR→∞ log SNR lim -0.8 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 yeff,m [j] Fig. 2. LLR in the mod-2 channel for SNR = 5 dB. term, corresponding to the element vm [j] + 2Z nearest to yeff,m [j] in Euclidean distance. It follows that  1 + 2yeff,m [j]   , −1 < yeff,m [j] ≤ 0  2 2σeff,m [j] LLR[j] ≈ 1 − 2y (88)  eff,m [j]  , 0 ≤ y [j] ≤ 1  eff,m 2 2σeff,m [j] 1 where b·c is the floor function and R is the code rate in bits per channel use. Thus, codes that achieve full diversity (d = F ) are limited by R ≤ (log2 q)/F , or R ≤ 1/F for binary codes. A family of rate-1/F binary LDPC codes that achieve full diversity under belief propagation and have performance close to theoretical limits are the so-called root LDPC codes [26]. These codes are systematic, with the information bits corresponding to the first n/F 2 positions of each block, and (F −1)n/F ×n have a parity-check matrix H ∈ Z2 satisfying the following structure     H11 · · · H1F Hij1   ..  , H =  .. .. H =  ...   (93) ij . .  . HF 1 ··· Hijk    0, h = I   0, 1 − 2|yeff,m [j]| 2σeff,m [j]2 (89) since yeff,m [j] ∈ (−1, 1]. Fig. 2 shows the exact LLR in comparison with its approximation for a channel with SNR = 5 dB. As we can see, for most of the input range the approximation is indistinguishable from the exact value. 
The approximation is slightly less accurate when the input is close to an integer, but since these correspond to the peak LLR values, i.e., when there is a high degree of certainty about the value of vm [j], this loss of accuracy should not degrade the performance of belief propagation. The above decoding procedure can be easily extended to 0 successive IF by replacing yeff,m with ym , zeff,m with z0m , 2 and σeff,m [j] with ( σ2 , for AM decoding 2 σSIF,m [j] = 2AM-SIF,m (90) `m,m,(I[j]) , for GM decoding. B. Code Construction In practice, approaching the GM performance (be it for MMSE, SIC, IF or SIF reception) requires not only a suitable decoder but also well-designed codes that allow the decoder to exploit diversity. This issue is not apparent in [15] since their achievable rate results (even for AM) are based on asymptotically good lattices that are already optimal for exploiting diversity. Designing such codes under finite-length and lowcomplexity constraints, however, is far from trivial. Hi,j,F −1 where or more simply LLR[j] ≈ HF,F j < i and k 6= j i 0 , j=i (94) j > i and k 6= j − 1 for all 1 ≤ i, j ≤ F and all 1 ≤ k ≤ F − 1. This structure implies that, for each information bit from each ith block, there is one parity-check equation relating it to the bits from the jth block (and no other blocks), for all j 6= i. It is worth mentioning that root LDPC codes under belief propagation guarantee full diversity only over the information bits. Thus, if one is interested in recovering the entire codeword, it must be regenerated by re-encoding the information bits after decoding. VII. S IMULATION R ESULTS In this section we present simulation results comparing the outage rate performance of our proposed methods with the optimal performance obtained by exhaustive search. For comparison, we include the performance of AM-IF (previously the only low-complexity IF receiver), as well as that of ML and conventional linear receivers. In our simulations, we specify an outage probability ρ = 0.01, estimated by 104 channel realizations. In each realization, the channel fading coefficients are drawn independently from a real-valued Gaussian distribution with zero mean and unit variance. For the optimal GM-IF, AM-SIF and GM-SIF, the matrix A was obtained by an exhaustive search over all matrices vectors whose `1 -norm of each row does not exceed 15. Fig. 3 shows the outage rate for all these receivers on a 2 × 2 channel with F = 2 blocks. As can be seen, Proposed Method 1 and JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 32, NO.1, 2017. 3 3 ML Optimal GM-IF Prop. 2 (GM-IF) Prop. 1 (GM-IF) GM-MMSE AM-IF AM-MMSE 2.5 ML Optimal GM-SIF Prop. 4 (GM-SIF) GM-SIC Optimal AM-SIF Prop. 3 (AM-SIF) AM-SIC 2.5 2 Rate (bits/dim) 2 Rate (bits/dim) 142 1.5 1.5 1 1 0.5 0.5 0 0 0 5 10 15 20 25 30 SNR (dB) (a) 0 5 10 15 20 25 30 SNR (dB) (b) Fig. 3. Outage rate on a 2 × 2 channel with F = 2 blocks. (a) Parallel decoding methods. (b) Successive decoding methods. 2 achieve performance close to optimal GM-IF and strictly higher than the maximum between AM-IF and GM-MMSE. In particular, for an outage rate of 1.5 bits/dim, the performance of Proposed Method 2 is within 1.8 dB of optimal GM-IF and outperforms Proposed Method 1 by 2.4 dB. On the other hand, Proposed Method 3 and 4 appear to have performance indistinguishable from optimal AM-SIF and optimal GM-SIF, respectively. 
Note that Proposed Method 4 outperforms GMSIC by approximately 3.2 dB for an outage rate of 2 bits/dim, while AM-SIF has a much lower performance in this case. Fig. 4 shows the outage rate for a scenario with F = 2 blocks and SNR = 25 dB, varying the number of users NT , while assuming the same number of receive antennas, NR = NT . Due to the complexity of exhaustive search, which grows exponentially with NT , the performance of optimal GM-IF is shown only for NT = 2 and NT = 3, while that of optimal AM-SIF and GM-SIF is shown only for NT = 2. As can be seen, as NT increases, the performance of both Proposed Methods 1 and 2 appears to converge and is approached by that of AM-IF. Similarly, the performance of Proposed Method 3 significantly improves as NT increases, outperforming GM-SIC for NT ≥ 4 and approaching that of Proposed Method 4. Nevertheless, Proposed Method 4 still outperforms all other methods by a visible margin. Fig. 5 considers the same scenario as Fig. 4, but with F = 4 blocks. Similar observations can be made, except that, comparatively to Proposed Methods 1 and 2, the performance of AM-IF has worsened and that of GM-MMSE has improved, while still being significantly outperformed by the proposed methods. For the successive methods, a behavior similar to the F = 2 case is observed, except that now GM-SIC outperforms Proposed Method 3 for all NT ≤ 6 and achieves a smaller gap to Proposed Method 4. While one might expect this gap to vanish for large F , it should be noted that, due to rate limitations, constructions of full-diversity codes are typically restricted to the small F case. Lastly, Figs. 6 and 7 show the frame-error rate (FER) on a 2 × 2 channel with F = 2 and F = 4, respectively, using 2-PAM modulation. A regular, rate-1/F root-LDPC code [26] of length n = 208, constructed using a PEG-based technique [27], is used in each simulation. As can be seen, the simulations are consistent with the theoretical results, with a performance gap due to the small constellation size and the non-optimality of the channel code. In particular, for a FER of 1%, both proposed methods are within 3.7 dB of their theoretical FER for F = 2 and within 3.4 dB for F = 4. The FER for successive decoding is not shown, since for R ≤ 1/2 all methods have performance similar to their non-IF counterpart. For the benefits of SIF to become salient, lattice codes with higher spectral efficiency are needed. The design of such codes, however, is outside the scope of this paper. VIII. C ONCLUSIONS In this paper, we propose four suboptimal methods for selecting an integer matrix A for IF reception in a block fading scenario, two of them applicable to GM-IF and the other two applicable to AM-SIF and GM-SIF, respectively. The main idea behind these methods is to use a matrix A optimized for a lower-performance scheme with a simpler objective function for which an approximately optimal solution can be found in polynomial time. For AM-SIF, the corresponding simpler scheme is the proposed AM-SIF-SNC scheme, while, for GMIF and GM-SIF, it is the best choice among their respective AM counterpart and certain static fading solutions, all of which can be found very efficiently. As shown by simulations, the proposed methods for GMIF achieve outage rates strictly higher than both GM-MMSE and AM-IF (until now the best low-complexity methods), regardless of the number of blocks and users, while being only slightly more complex than AM-IF. Exactly the same JOURNAL OF COMMUNICATION AND INFORMATION SYSTEMS, VOL. 
Exactly the same observations hold for GM-SIF in comparison with GM-SIC and AM-SIF-SNC. We also show that AM-(S)IF and GM-(S)IF schemes can be realized in practice with low complexity, under finite codeword length and constellation constraints. Simulation results using full-diversity root-LDPC codes are found to agree with the theoretical ones, confirming the superiority of GM-IF in comparison with GM-MMSE and AM-IF.

An interesting avenue for future work is the development of low-complexity, full-diversity lattice codes with higher spectral efficiency (for instance, full-diversity q-ary linear codes with q > 2 for use in Construction A [2]). Such codes would be directly applicable to the GM-IF and GM-SIF schemes, allowing a wider and more interesting operating range, in particular at outage rates for which these schemes are much superior to their AM or non-IF counterparts.

ACKNOWLEDGMENTS

The authors would like to thank Islam El Bakoury and Bobak Nazer for valuable discussions.

REFERENCES

[1] R. Venturelli and D. Silva, "Low-complexity integer forcing for block fading MIMO multiple-access channels," in XXXIV Simpósio Brasileiro de Telecomunicações e Processamento de Sinais (SBrT'16), Aug. 30-Sep. 4, 2016, pp. 110-114.
[2] J. Zhan, B. Nazer, U. Erez, and M. Gastpar, "Integer-forcing linear receivers," IEEE Trans. Inf. Theory, vol. 60, no. 12, pp. 7661-7685, Dec. 2014. doi: 10.1109/TIT.2014.2361782
[3] B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6463-6486, Oct. 2011. doi: 10.1109/TIT.2011.2165816
[4] C. Feng, D. Silva, and F. R. Kschischang, "An algebraic approach to physical-layer network coding," IEEE Trans. Inf. Theory, vol. 59, no. 11, pp. 7576-7596, Nov. 2013. doi: 10.1109/TIT.2013.2274264
[5] D. Tse and P. Viswanath, Fundamentals of Wireless Communication, 1st ed. Cambridge, UK; New York: Cambridge University Press, Jul. 2005. ISBN 978-0-521-84527-4
[6] O. Ordentlich and U. Erez, "Achieving the gains promised by integer-forcing equalization with binary codes," in 2010 IEEE 26th Convention of Electrical and Electronics Engineers in Israel, Nov. 2010, pp. 000703-000707. doi: 10.1109/EEEI.2010.5662120
[7] S. H. Chae, M. Jang, S. K. Ahn, J. Park, and C. Jeong, "Multilevel coding scheme for integer-forcing MIMO receivers with binary codes," IEEE Trans. Wirel. Commun., vol. 16, no. 8, pp. 5428-5441, Aug. 2017. doi: 10.1109/TWC.2017.2711002
[8] L. Ding, K. Kansanen, Y. Wang, and J. Zhang, "Exact SMP algorithms for integer-forcing linear MIMO receivers," IEEE Trans. Wirel. Commun., vol. 14, no. 12, pp. 6955-6966, Dec. 2015. doi: 10.1109/TWC.2015.2462844
[9] W. Liu and C. Ling, "Efficient integer coefficient search for compute-and-forward," IEEE Trans. Wirel. Commun., vol. 15, no. 12, pp. 8039-8050, Dec. 2016. doi: 10.1109/TWC.2016.2611580
[10] S. M. Azimi-Abarghouyi, M. Nasiri-Kenari, B. Maham, and M. Hejazi, "Integer forcing-and-forward transceiver design for MIMO multipair two-way relaying," IEEE Trans. Veh. Technol., vol. 65, no. 11, pp. 8865-8877, Nov. 2016. doi: 10.1109/TVT.2016.2518667
[11] W. He, B. Nazer, and S. Shamai, "Uplink-downlink duality for integer-forcing," in 2014 IEEE International Symposium on Information Theory, Jun. 2014, pp. 2544-2548. doi: 10.1109/ISIT.2014.6875293
[12] D. Silva, G. Pivaro, G. Fraidenraich, and B. Aazhang, "On integer-forcing precoding for the Gaussian MIMO broadcast channel," IEEE Trans. Wirel. Commun., vol. 16, no. 7, pp. 4476-4488, Jul. 2017. doi: 10.1109/TWC.2017.2699178
[13] O. Ordentlich and U. Erez, "Integer-forcing source coding," IEEE Trans. Inf. Theory, vol. 63, no. 2, pp. 1253-1269, Feb. 2017. doi: 10.1109/TIT.2016.2629491
[14] R. Knopp and P. A. Humblet, "On coding for block fading channels," IEEE Trans. Inf. Theory, vol. 46, no. 1, pp. 189-205, Jan. 2000. doi: 10.1109/18.817517
[15] I. E. Bakoury and B. Nazer, "The impact of channel variation on integer-forcing receivers," in 2015 IEEE International Symposium on Information Theory (ISIT), Jun. 2015, pp. 576-580. doi: 10.1109/ISIT.2015.7282520
[16] R. Zamir, Lattice Coding for Signals and Networks. Cambridge: Cambridge University Press, 2014. ISBN 978-0-521-76698-2
[17] B. Nazer, V. R. Cadambe, V. Ntranos, and G. Caire, "Expanding the compute-and-forward framework: Unequal powers, signal levels, and multiple linear combinations," IEEE Trans. Inf. Theory, vol. 62, no. 9, pp. 4879-4909, Sep. 2016. doi: 10.1109/TIT.2016.2593633
[18] J. Blömer, "Closest vectors, successive minima, and dual HKZ-bases of lattices," in Automata, Languages and Programming. Springer, Berlin, Heidelberg, Jul. 2000, pp. 248-259. doi: 10.1007/3-540-45022-X_22
[19] O. Regev, "On lattices, learning with errors, random linear codes, and cryptography," in Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, ser. STOC '05. New York, NY, USA: ACM, 2005, pp. 84-93. doi: 10.1145/1060590.1060603. ISBN 978-1-58113-960-0
[20] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, "Closest point search in lattices," IEEE Trans. Inf. Theory, vol. 48, no. 8, pp. 2201-2214, Aug. 2002. doi: 10.1109/TIT.2002.800499
[21] A. K. Lenstra, H. W. Lenstra, and L. Lovasz, "Factoring polynomials with rational coefficients," Math. Ann., vol. 261, pp. 515-534, 1982. doi: 10.1007/BF01457454
[22] J. M. Steele, The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge; New York: The Mathematical Association of America, Apr. 2004. ISBN 978-0-521-54677-5
[23] P. W. Wolniansky, G. J. Foschini, G. D. Golden, and R. A. Valenzuela, "V-BLAST: An architecture for realizing very high data rates over the rich-scattering wireless channel," in 1998 URSI International Symposium on Signals, Systems, and Electronics. Conference Proceedings (Cat. No.98EX167), Sep. 1998, pp. 295-300. doi: 10.1109/ISSSE.1998.738086
[24] O. Ordentlich, U. Erez, and B. Nazer, "Successive integer-forcing and its sum-rate optimality," in 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), Oct. 2013, pp. 282-292. doi: 10.1109/Allerton.2013.6736536
[25] E. Biglieri, Coding for Wireless Channels, ser. Information technology: transmission, processing, and storage. New York: Springer, 2005. ISBN 978-1-4020-8083-8, 978-1-4020-8084-5
[26] J. J. Boutros, A. Guillen i Fabregas, E. Biglieri, and G. Zemor, "Low-density parity-check codes for nonergodic block-fading channels," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4286-4300, Sep. 2010. doi: 10.1109/TIT.2010.2053890
[27] A. G. D. Uchoa, C. T. Healy, and R. C. de Lamare, "Structured root-LDPC codes and PEG-based techniques for block-fading channels," J Wireless Com Network, vol. 2015, no. 1, p. 213, Dec. 2015. doi: 10.1186/s13638-015-0439-6

Ricardo Bohaczuk Venturelli received the B.Sc. and M.Sc. degrees from the Federal University of Santa Catarina (UFSC), Florianópolis, Brazil, in 2014 and 2016, respectively, both in electrical engineering. He is currently a Ph.D. student in electrical engineering at the Federal University of Santa Catarina. His research interests include channel coding, network coding, information theory and wireless communications. He has been a member of the Brazilian Telecommunications Society (SBrT) since 2014.

Danilo Silva (S'06-M'09) received the B.Sc. degree from the Federal University of Pernambuco (UFPE), Recife, Brazil, in 2002, the M.Sc. degree from the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, in 2005, and the Ph.D. degree from the University of Toronto, Toronto, Canada, in 2009, all in electrical engineering. From 2009 to 2010, he was a Postdoctoral Fellow at the University of Toronto, at the École Polytechnique Fédérale de Lausanne (EPFL), and at the State University of Campinas (UNICAMP). In 2010, he joined the Department of Electrical Engineering, Federal University of Santa Catarina (UFSC), Brazil, where he is currently an Assistant Professor. His research interests include wireless communications, channel coding, information theory, and machine learning. Dr. Silva is a member of the Brazilian Telecommunications Society (SBrT). He was a recipient of a CAPES Ph.D. Scholarship in 2005, the Shahid U. H. Qureshi Memorial Scholarship in 2009, and a FAPESP Postdoctoral Scholarship in 2010.
Promises and Caveats of Uplink IoT Ultra-Dense Networks Ming Ding‡ , David López Pérez† ‡ arXiv:1801.06623v1 [cs.NI] 20 Jan 2018 † Data61, Australia {[email protected]} Nokia Bell Labs, Ireland {[email protected]} Abstract—In this paper, by means of simulations, we evaluate the uplink (UL) performance of an Internet of Things (IoT) capable ultra-dense network (UDN) in terms of the coverage probability and the density of reliably working user equipments (UEs). From our study, we show the benefits and challenges that UL IoT UDNs will bring about in the future. In more detail, for a low-reliability criterion, such as achieving a UL signal-to-interference-plus-noise ratio (SINR) above 0 dB, the density of reliably working UEs grows quickly with the network densification, showing the potential of UL IoT UDNs. In contrast, for a high-reliability criterion, such as achieving a UL SINR above 10 dB, the density of reliably working UEs remains to be low in UDNs due to excessive inter-cell interference, which should be considered when operating UL IoT UDNs. Moreover, considering the existence of a non-zero antenna height difference between base stations (BSs) and UEs, the density of reliably working UEs could even decrease as we deploy more BSs. This calls for the usage of sophisticated interference management schemes and/or beam steering/shaping technologies in UL IoT UDNs. I. I NTRODUCTION Recent years have seen rapid advancement in the development and deployment of Internet of Things (IoT) networks, which can be attributed to the increasing communication and sensing capabilities combined with the falling prices of IoT devices [1]. For example, base stations (BSs) can be equipped with the latest IoT technologies, such as the new generation of machine type communications [2], to collect data from gas, water, and power meters via uplink (UL) transmissions. In practice, such BSs can take the form of both terrestrial and aerial ones [3]. This poses, however, a challenge to the wireless industry, which must offer an increasing volume of reliable traffic in a profitable and energy efficient manner, especially for the UL communications. In this context, the orthogonal deployment of ultra-dense (UD) small cell networks (SCNs), or simply ultra-dense networks (UDNs), have been selected as one of the workhorse for network enhancement in the fourth-generation (4G) and fifth-generation (5G) networks developed by the 3rd Generation Partnership Project (3GPP) [2]. Here, orthogonal deployment means that UDNs and macrocell networks operate on different frequency spectrum, which simplifies network management due to no inter-tier interference. The performance analysis of IoT UDNs is, however, particularly challenging for the UL because i) UDNs are fundamentally different from the current sparse/dense networks [4] and ii) the UL power control mechanism operates according to the random user equipment (UE) positions in the network, which is quite different from the constant power setting in the downlink (DL) [5]. In this paper, by means of simulations, we evaluate the network performance of UL IoT UDNs in terms of the coverage probability and the density of reliably working UEs. The main findings of this paper are as follows: • We find that for a low-reliability criterion, such as achieving a UL signal-to-interference-plus-noise ratio (SINR) above γ = 0 dB, the density of reliably working UEs quickly grows with the network densification, showing the benefits of UL IoT UDNs. 
In contrast, for a highreliability criterion, such as achieving a UL SINR above γ = 10 dB, the density of reliably working UEs remains low in UDNs due to excessive inter-cell interference, which should be considered when operating UL IoT UDNs. • We find that due to the existence of a non-zero antenna height difference between BSs and UEs, the density of reliably working UEs could even decrease as we deploy more BSs in a UL IoT UDN. This calls for the usage of sophisticated interference management schemes and/or beam steering/shaping technologies in UL IoT UDNs. • We find that the correlated shadow fading allows a BS with a lower environmental fading factor to provide connection to a larger number of UEs. Thus, its theoretical analysis is an open problem for further study. • We find that the optimized hexagonal-like BS deployment can improve network performance for relatively sparse networks, but not for UDNs. Thus, its theoretical study is not urgent. II. D ISCUSSION ON THE A SSUMPTIONS OF UL I OT UDN S In this section, we discuss several important assumptions in UL IoT UDNs. A. BS Deployments In general, to study the performance of a UL IoT network, two types of BS deployments can be found in the literature, i.e., the hexagonal and the random deployments as shown in Fig. 1. In Fig. 1, BSs are represented by markers “x” and cell coverage areas are outlined by solid lines. Note that the hexagonal BS deployment leads to an upper-bound performance because BSs are evenly distributed in the network scenario, (a) The hexagonal BS deployment. (b) The random BS deployment. Fig. 1: Illustration of two widely accepted types of BS deployments. Here, BSs are represented by markers “x” and cell coverage areas for user equipment (UE) distribution are outlined by solid lines. and thus very strong interference due to close BS proximity is precluded [6]. In contrast, the random BS deployment reflects a more realistic network deployment with more challenging interference conditions [2, 7]. For completeness, in our following performance evaluation, we will consider both BS deployments. B. Antenna Heights In the performance analysis of the conventional sparse or dense cellular networks, the antenna height difference between BSs and UEs is usually ignored due to the dominance of the horizontal distance. However, with a much shorter distance between a BS and its served UEs in an UDN, such antenna height difference becomes non-negligible [7]. The performance impact of such antenna height difference between BSs and UEs on the DL UDNs has been investigated in [8]. More specifically, the existence of a non-zero antenna height difference between BSs and UEs gives rise to a non-zero cap on the minimum distance between a BS and its served UEs, and thus a cap on the received signal power strength. Thus, and although each inter-cell interference power strength is subject to the same cap, the aggregate inter-cell interference power will overwhelm the signal power in an UDN due to the sheer number of strong interferers. Consequently, the antenna height difference between BSs and UEs should be considered in the performance evaluation of UL IoT UDNs. C. Line-of-Sight Transmissions A much shorter distance between a BS and its served UEs in an UDN implies a higher probability of line-of-sight (LoS) transmissions. The performance impact of such LoS transmissions on the DL of UDNs has been shown to be significant in [9, 10]. 
Generally speaking, LoS transmissions are more helpful in enhancing the received signal strength than non-line-of-sight (NLoS) transmissions. However, after a certain level of BS densification, not only the signal power but also the inter-cell interference power will significantly grow due to the emergence of LoS interfering paths. Thus, the transition of a large number of interfering paths from NLoS to LoS will cause the aggregate interference to overwhelm the signal power in a UDN. Consequently, probabilistic LoS transmissions should also be considered in the performance evaluation of UL IoT UDNs.

D. BS Idle Mode

Considering the surplus of BSs in UDNs, a BS can be put to sleep when there is no active UE connected to it, which is referred to as the BS idle mode capability (IMC) [11, 12]. As a result, the active UEs' SINR can benefit from i) a BS selection diversity gain, i.e., each UE has a plurality of BSs to select its server from, and ii) a tight control of inter-cell interference, as effective UL inter-cell interference only comes from active UEs served by active neighboring BSs. The surplus of BSs together with the IMC can be seen as a powerful tool that can mitigate the interference problems presented in the previous subsections. However, it should be noted that switching off BSs has a negative impact on the number of IoT UEs that can concurrently transmit.

III. SYSTEM MODEL

For a certain time-frequency resource block (RB), we consider a UL IoT network with BSs deployed on a plane according to the hexagonal deployment or the random deployment, as shown in Fig. 1. For both BS deployments, the density of BSs is denoted by λ BSs/km². Furthermore, we consider a homogeneous Poisson point process (HPPP) Φ to characterize the random deployment. Active UEs are assumed to be distributed following an HPPP with a density of ρ UEs/km². Here, we only consider active UEs in the network because non-active UEs do not trigger data transmission. Note that the total number of UEs, e.g., phones, gateways, sensors, tags, etc., in a UL IoT network should be much higher than the number of active UEs. However, we believe that in a certain time-frequency RB, the number of active UEs with non-zero data traffic demands should still be moderate. For example, a typical density of active UEs is around 300 UEs/km² in 5G [2].

A. Channel Model

The two-dimensional (2D) distance between a BS and a UE is denoted by r. Moreover, the absolute antenna height difference between a BS and a UE is denoted by L. Thus, the 3D distance between a BS and a UE can be expressed as

    w = √(r² + L²).                                                        (1)

Note that the value of L is on the order of several meters [13]. Following [10], we adopt a general path loss model, where the path loss ζ(w) is a multi-piece function of w written as

    ζ(w) = { ζ_1(w),  when L ≤ w ≤ d_1
           { ζ_2(w),  when d_1 < w ≤ d_2
           { ...                                                            (2)
           { ζ_N(w),  when w > d_{N−1},

where each piece ζ_n(w), n ∈ {1, 2, . . . , N}, is modeled as

    ζ_n(w) = { ζ_n^L(w)  = A_n^L  w^(−α_n^L),   LoS, with probability Pr_n^L(w)
             { ζ_n^NL(w) = A_n^NL w^(−α_n^NL),  NLoS, with probability 1 − Pr_n^L(w),   (3)

where
• ζ_n^L(w) and ζ_n^NL(w), n ∈ {1, 2, . . . , N}, are the n-th piece path loss functions for the LoS and the NLoS cases, respectively,
• A_n^L and A_n^NL are the path losses at a reference 3D distance w = 1 for the LoS and the NLoS cases, respectively,
• α_n^L and α_n^NL are the path loss exponents for the LoS and the NLoS cases, respectively, and
• Pr_n^L(w) is the n-th piece LoS probability function, i.e., the probability that a transmitter and a receiver separated by a 3D distance w have an LoS path, which is assumed to be a monotonically decreasing function with respect to w. Existing measurement studies have confirmed this assumption [13].

Moreover, we assume that each BS/UE is equipped with an isotropic antenna, and that the multi-path fading between a BS and a UE is independent and identically distributed (i.i.d.) Rayleigh fading [9, 10, 14].

B. User Association Strategy

We assume a practical user association strategy (UAS), in which each UE is connected to the BS giving the maximum average received signal strength [10, 14]. Such a UAS can be formulated by

    b_o = arg max_b { R̄_b(w) },                                            (4)

where R̄_b(w) denotes the average received signal strength from BS b at the UE of interest, separated by a distance of w. Assuming a constant BS transmission power, R̄_b(w) can be equivalently evaluated by ζ(w) defined in (2).

As a special case to show our numerical results in the simulation section, we consider a practical two-piece path loss function and a two-piece exponential LoS probability function, defined by the 3GPP [13]. More specifically, in (2) we use N = 2, ζ_1^L(w) = ζ_2^L(w) = A^L w^(−α^L), ζ_1^NL(w) = ζ_2^NL(w) = A^NL w^(−α^NL), Pr_1^L(w) = 1 − 5 exp(−R_1/w), and Pr_2^L(w) = 5 exp(−w/R_2), where R_1 = 156 m, R_2 = 30 m, and d_1 = R_1/ln 10 = 67.75 m [13]. For clarity, this path loss case is referred to as the 3GPP Case hereafter.

C. BS Activation Model

As discussed in Subsection II-D, a BS will enter an idle mode if there is no UE connected to it. Thus, the set of active BSs is determined by the UAS. Since UEs are randomly and uniformly distributed in the network and given the adopted UAS, we can assume that the active BSs also follow an HPPP distribution Φ̃ [11], the density of which is denoted by λ̃ BSs/km², where λ̃ ≤ λ and λ̃ ≤ ρ. Note that λ̃ also characterizes the density of active UEs because no collision exists in the centralized cellular IoT UDNs. For illustration purposes, considering a single-slope path loss model and a nearest-BS UAS, λ̃ can be calculated as [11]

    λ̃ = λ [ 1 − 1 / (1 + ρ/(qλ))^q ],                                     (5)

where an empirical value of 3.5 was suggested for q in [11]¹.

D. UL Power Control Model

The UE power, denoted by P^UE, is subject to semi-static power control (PC) in practice. In this paper, we adopt the fractional path loss compensation (FPC) scheme standardized in 4G [13], which can be modeled as

    P^UE = 10^(P_0/10) [ζ(w)]^(−η) N^RB,                                   (6)

where P_0 is the target received power in dBm on each RB at the BS, η ∈ (0, 1] is the FPC compensation factor, and N^RB is the number of RBs in the frequency domain.
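As a small illustration of the 3GPP Case path loss model in (1)-(3) and the FPC rule in (6), the following sketch computes the LoS probability, an average path loss, and the resulting UE transmit power. The parameter values are those quoted for the 3GPP Case in Section IV; the averaging of the LoS and NLoS branches and the absence of a maximum-power cap are simplifications made only for this sketch.

import math

# 3GPP Case parameters assumed from Section IV and Subsection III-B.
A_LOS, A_NLOS = 10 ** -10.38, 10 ** -14.54
ALPHA_LOS, ALPHA_NLOS = 2.09, 3.75
R1, R2 = 156.0, 30.0
D1 = R1 / math.log(10)          # = 67.75 m, boundary between the two pieces
P0_DBM, ETA, N_RB = -76.0, 0.8, 55

def los_probability(w):
    """Two-piece exponential LoS probability Pr^L(w) of the 3GPP Case."""
    if w <= D1:
        return 1.0 - 5.0 * math.exp(-R1 / w)
    return 5.0 * math.exp(-w / R2)

def path_loss(r, L=8.5, los=None):
    """Path loss zeta(w) for 2D distance r and antenna height difference L.

    If los is None, the LoS and NLoS branches of eq. (3) are averaged with
    weight Pr^L(w); pass los=True or los=False to force a single branch.
    """
    w = math.sqrt(r * r + L * L)                      # eq. (1)
    pl_los = A_LOS * w ** (-ALPHA_LOS)                # LoS branch of eq. (3)
    pl_nlos = A_NLOS * w ** (-ALPHA_NLOS)             # NLoS branch of eq. (3)
    if los is True:
        return pl_los
    if los is False:
        return pl_nlos
    p = los_probability(w)
    return p * pl_los + (1.0 - p) * pl_nlos

def ue_tx_power_mw(r, L=8.5):
    """Fractional path loss compensation, eq. (6), in milliwatts.

    Real systems also cap this at a maximum UE power; the cap is omitted here.
    """
    zeta = path_loss(r, L)
    return (10 ** (P0_DBM / 10.0)) * zeta ** (-ETA) * N_RB

if __name__ == "__main__":
    for r in (10.0, 50.0, 200.0):
        w = math.sqrt(r * r + 8.5 ** 2)
        print(r, los_probability(w), ue_tx_power_mw(r))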
E. The Coverage Probability

Based on this system model, we can define the coverage probability that the typical UE's UL SINR is above a designated threshold γ as

    p^cov(λ, γ) = Pr[SINR^U > γ],                                          (7)

where the UL SINR is calculated by

    SINR^U = P_{b_o}^UE ζ(w_{b_o}) h / (I_agg^U + P_N^U),                  (8)

where b_o denotes the serving BS of the typical UE, P_{b_o}^UE is the UE transmission power given by (6), w_{b_o} is the distance from the typical UE to its serving BS b_o, h is the Rayleigh channel gain modeled as an exponentially distributed random variable (RV) with a mean of one as mentioned above, P_N^U is the additive white Gaussian noise (AWGN) power at the serving BS b_o, and I_agg^U is the UL aggregate interference.

¹ Note that according to [12], q should also depend on the path loss model with LoS/NLoS transmissions. Having said that, [12] also showed that (5) is generally very accurate to characterize λ̃ for the 3GPP Case with LoS/NLoS transmissions.

F. The Density of Reliably Working UEs

Based on the definitions of the active BS density in Subsection III-C and the coverage probability in Subsection III-E, we can further define a density of reliably working UEs that can operate above a target UL SINR threshold γ as

    ρ̃ = λ̃ p^cov(λ, γ),                                                    (9)

where the active BS density λ̃ measures the maximum density of UEs that can simultaneously transmit, and the coverage probability p^cov(λ, γ) scales down λ̃, giving the density of reliably working UEs. The larger the UL SINR threshold γ, the higher the reliability of the IoT communications, and thus the less the UEs that can simultaneously achieve such reliability.

G. More Refined Assumptions

In performance analysis, the multi-path fading is usually modeled as Rayleigh fading for simplicity. However, in the 3GPP, a more practical model based on generalized Rician fading is widely adopted for LoS transmissions [15]. In such model, the K factor in dB scale (the ratio between the power in the direct path and the power in the other scattered paths) is modeled as K[dB] = 13 − 0.03w, where w is defined in (1). More specifically, let h^L denote the multi-path fading power for LoS transmissions. Then, for the 3GPP model of Rician fading, h^L follows a non-central chi-squared distribution with its PDF given by [16]

    f(h^L) = (K + 1) exp(−K − (K + 1) h^L) I_0(2 √(K (K + 1) h^L)),        (10)

where K is the distance-dependent value discussed above and I_0(·) is the 0-th order modified Bessel function of the first kind [16].

Moreover, the shadow fading is also usually not considered or simply modeled as i.i.d. RVs in performance analysis. However, in the 3GPP, a more practical correlated shadow fading is often used [13, 15, 17], where the shadow fading in dB unit is modeled as zero-mean Gaussian RV [13]. More specifically, the shadow fading coefficient in dB unit between BS b and UE u is formulated as [13]

    S_bu = √τ S_u^UE + √(1 − τ) S_b^BS,                                    (11)

where τ is the correlation coefficient of shadow fading, and S_u^UE and S_b^BS are i.i.d. zero-mean Gaussian RVs attributable to UE u and BS b, respectively. The variance of S_u^UE and S_b^BS is denoted by σ²_Shad. In [13], it is suggested that τ = 0.5 and σ_Shad = 10 dB.

Considering the distance-dependent Rician fading for LoS transmissions and the correlated shadow fading, we can upgrade the 3GPP Case to an Advanced 3GPP Case. In the next section, we will present simulation results of UL IoT UDNs for both the 3GPP Case and the Advanced 3GPP Case.
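To make the relationship between λ, λ̃ and ρ̃ in (5) and (9) concrete, the following sketch evaluates the active BS density and the density of reliably working UEs for a given coverage probability. The coverage probability itself is passed in as an argument (for example, obtained from a system-level simulation of the SINR in (8)); no closed form for it is assumed here.

def active_bs_density(bs_density, ue_density, q=3.5):
    """Eq. (5): density of BSs that serve at least one active UE.

    q = 3.5 is the empirical value quoted from [11]; as noted in the text,
    it is only approximate for the 3GPP Case with LoS/NLoS transmissions.
    """
    return bs_density * (1.0 - 1.0 / (1.0 + ue_density / (q * bs_density)) ** q)

def reliably_working_ue_density(bs_density, ue_density, coverage_probability):
    """Eq. (9): rho_tilde = lambda_tilde * p_cov(lambda, gamma).

    coverage_probability is assumed to come from elsewhere (e.g., Monte Carlo
    simulation of the UL SINR); this function only combines the two factors.
    """
    lam_tilde = active_bs_density(bs_density, ue_density)
    return lam_tilde * coverage_probability

if __name__ == "__main__":
    ue_density = 300.0  # UEs/km^2, the typical active-UE density quoted in the text
    for lam in (10.0, 100.0, 1000.0, 10000.0):
        # The coverage value 0.8 below is a purely hypothetical input used to
        # exercise the formula; it is not a result from the paper.
        print(lam, active_bs_density(lam, ue_density),
              reliably_working_ue_density(lam, ue_density, 0.8))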
It should be noted that for the Advanced 3GPP Case, shadow fading should be considered in the computation of R̄_b(w), i.e., R̄_b(w) should be evaluated by ζ(w) × 10^(S_bu/10) in (4).

IV. SIMULATION AND DISCUSSION

In this section, we present numerical results to validate the accuracy of our analysis. According to Tables A.1-3 to A.1-7 of [13], we adopt the following parameters for the 3GPP Case: α^L = 2.09, α^NL = 3.75, A^L = 10^(−10.38), A^NL = 10^(−14.54), P_0 = −76 dBm, η = 0.8, N^RB = 55, P_N = −91 dBm (with a noise figure of 13 dB), τ = 0.5 and σ_Shad = 10 dB.

A. Performance Results of the 3GPP Case

In Figs. 2 and 3, we plot the performance results of the 3GPP Case for γ = 0 dB and γ = 10 dB, respectively. Here, we only consider realistic networks with the random deployment of BSs. From these two figures, we can draw the following observations:

• Figs. 2a and 3a show that the active BS density λ̃ monotonically increases with the network densification, and it is bounded by ρ = 300 UEs/km². Such results are in line with the analytical results in (5) [11, 12]. However, the density of reliably working UEs ρ̃, i.e., λ̃ p^cov(λ, γ) defined in (9), does not necessarily grow as the BS density λ increases. This is because p^cov(λ, γ) is a non-monotone function with respect to λ, which will be explained in the following.
• When the BS density λ is in the range λ ∈ [10^(−1), 70] BSs/km², the network is noise-limited, and thus p^cov(λ, γ) increases with λ as the network is lightened up with coverage and the signal power strength benefits from LoS transmissions.
• When the BS density λ is in the range λ ∈ [70, 400] BSs/km², p^cov(λ, γ) decreases with λ. This is due to the transition of a large number of interfering paths from NLoS to LoS, which accelerates the growth of the aggregate inter-cell interference. Such performance behavior has been reported in [10] for the DL and in [18] for the UL.
• When λ ∈ [400, 10^4] BSs/km², p^cov(λ, γ) continuously increases thanks to the BS IMC [12], i.e., the signal power continues increasing with the network densification due to the BS diversity gain, while the aggregate interference power becomes constant, as λ̃ is bounded by ρ. However, as shown in Figs. 2b and 3b, the antenna height difference L between BSs and UEs has a large impact on p^cov(λ, γ), because a non-zero L places a bound on the signal power strength, which degrades the coverage performance. In more detail, when λ = 10^4 BSs/km² and γ = 0 dB, the coverage probability with L = 8.5 m loses 13 % compared to that with L = 0 m. Such performance degradation further enlarges to 32 % when λ = 10^4 BSs/km² and γ = 10 dB, showing a much lower chance of a UE working reliably above 10 dB.
• Due to the complicated performance behavior of p^cov(λ, γ), the density of reliably working UEs ρ̃ displayed in Figs. 2c and 3c depends on the following factors:
  – For a low-reliability criterion, such as surpassing a UL SINR threshold of γ = 0 dB, the density of reliably working UEs ρ̃ grows quickly with the network densification, showing the benefits of UL IoT UDNs. In contrast, for a high-reliability criterion, such as surpassing a UL SINR threshold of γ = 10 dB, the density of reliably working UEs ρ̃ does not exhibit a satisfactory performance even in UDNs, e.g., we merely get ρ̃ < 50 UEs/km² when λ = 10³ BSs/km². The situation only improves when the BS IMC fully kicks in, e.g., λ > 10³ BSs/km².
  – Considering the existence of a non-zero L, the density of reliably working UEs ρ̃ could even decrease as we deploy more BSs (see Fig. 3c, λ ∈ [200, 600] BSs/km²). This calls for the usage of sophisticated interference management schemes [19] in UL IoT UDNs. Another solution to mitigate such strong inter-cell interference is beam steering/shaping using multi-antenna technologies [2].

Fig. 2: Performance results of the 3GPP Case (Rayleigh fading, no shadow fading) with γ = 0 dB. (a) The active BS density λ̃. (b) The coverage probability p^cov(λ, γ). (c) The density of reliably working UEs ρ̃.

Fig. 3: Performance results of the 3GPP Case (Rayleigh fading, no shadow fading) with γ = 10 dB. (a) The active BS density λ̃. (b) The coverage probability p^cov(λ, γ). (c) The density of reliably working UEs ρ̃.

Fig. 4: Performance results of the Advanced 3GPP Case (Rician fading for LoS, correlated shadow fading) with γ = 0 dB. (a) The active BS density λ̃. (b) The coverage probability p^cov(λ, γ). (c) The density of reliably working UEs ρ̃.

Fig. 5: Performance results of the Advanced 3GPP Case (Rician fading for LoS, correlated shadow fading) with γ = 10 dB. (a) The active BS density λ̃. (b) The coverage probability p^cov(λ, γ). (c) The density of reliably working UEs ρ̃.

B. Performance Results of the Advanced 3GPP Case

In Figs. 4 and 5, we plot the performance results of the Advanced 3GPP Case for γ = 0 dB and γ = 10 dB, respectively. To make our study more complete, here we consider both the random and the hexagonal deployments of BSs. From these two figures, we can see that the previous conclusions remain qualitatively correct, which indicates that it is not urgent to investigate Rician fading and/or correlated shadow fading in the context of UDNs. However, there are two new observations that are worth mentioning:

• From Figs. 2a/3a and Figs. 4a/5a, we can see that the active BS density λ̃ of the Advanced 3GPP Case is smaller than that of the 3GPP Case. This means that (5) is no longer accurate in characterizing λ̃ for the Advanced 3GPP Case. This is because the correlated shadow fading allows a BS with a lower environmental fading factor, i.e., S_b^BS, to attract more UEs than other BSs with higher values of S_b^BS. Its theoretical analysis is an open problem for further study.
• The hexagonal deployment of BSs can improve network performance for relatively sparse networks (e.g., λ < 10² BSs/km²), but not for UDNs (e.g., λ > 10³ BSs/km²). Such a conclusion indicates that it is not urgent to investigate the performance of UDNs with the hexagonal deployment of BSs, which has been a longstanding open problem for decades [5].

V. CONCLUSION

We presented simulation results to evaluate the network performance of UL IoT UDNs. From our study, we can see that for a low-reliability criterion, the density of reliably working UEs grows quickly with the network densification. However, in our journey to realize more reliable UL IoT UDNs, we should be aware of several caveats:

• First, for a high-reliability criterion, the density of reliably working UEs remains low in UDNs due to excessive inter-cell interference, which should be considered when operating UL IoT UDNs.
• Second, due to the existence of a non-zero antenna height difference between BSs and UEs, the density of reliably working UEs could even decrease as we deploy more BSs. This calls for further study of UL IoT UDNs.
• Third, well-planned hexagonal-like BS deployments can improve network performance for relatively sparse networks, but not for UDNs, showing that alternative solutions other than BS position optimization should be considered in future UL IoT UDNs.

REFERENCES

[1] A. Al-Fuqaha, M.
Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of things: A survey on enabling technologies, protocols, and applications,” IEEE Communications Surveys Tutorials, vol. 17, no. 4, pp. 2347–2376, Fourthquarter 2015. [2] D. López-Pérez, M. Ding, H. Claussen, and A. Jafari, “Towards 1 Gbps/UE in cellular systems: Understanding ultra-dense small cell deployments,” IEEE Communications Surveys Tutorials, vol. 17, no. 4, pp. 2078–2101, Jun. 2015. [3] A. Fotouhi, M. Ding, and M. Hassan, “Understanding autonomous drone maneuverability for Internet of Things applications,” pp. 1–6, Jun. 2017. [4] M. Ding, D. López-Pérez, H. Claussen, and M. A. Kâafar, “On the fundamental characteristics of ultra-dense small cell networks,” to appear in IEEE Network Magazine, arXiv:1710.05297 [], Oct. 2017. [Online]. Available: http://arxiv.org/abs/1710.05297 [5] M. Ding, D. López-Pérez, G. Mao, and Z. Lin, “Microscopic analysis of the uplink interference in FDMA small cell networks,” IEEE Trans. on Wireless Communications, vol. 15, no. 6, pp. 4277–4291, Jun. 2016. [6] J. Andrews, F. Baccelli, and R. Ganti, “A tractable approach to coverage and rate in cellular networks,” IEEE Transactions on Communications, vol. 59, no. 11, pp. 3122–3134, Nov. 2011. [7] 3GPP, “TR 36.872: Small cell enhancements for E-UTRA and EUTRAN - Physical layer aspects,” Dec. 2013. [8] M. Ding and López-Pérez, “Performance impact of base station antenna heights in dense cellular networks,” to appear in IEEE Transactions on Wireless Communications, arXiv:1704.05125 [cs.NI], Sep. 2017. [Online]. Available: https://arxiv.org/abs/1704.05125 [9] X. Zhang and J. Andrews, “Downlink cellular network analysis with multi-slope path loss models,” IEEE Transactions on Communications, vol. 63, no. 5, pp. 1881–1894, May 2015. [10] M. Ding, P. Wang, D. López-Pérez, G. Mao, and Z. Lin, “Performance impact of LoS and NLoS transmissions in dense cellular networks,” IEEE Transactions on Wireless Communications, vol. 15, no. 3, pp. 2365–2380, Mar. 2016. [11] S. Lee and K. Huang, “Coverage and economy of cellular networks with many base stations,” IEEE Communications Letters, vol. 16, no. 7, pp. 1038–1040, Jul. 2012. [12] M. Ding, D. López-Pérez, G. Mao, and Z. Lin, “Performance impact of idle mode capability on dense small cell networks with LoS and NLoS transmissions,” to appear in IEEE Transactions on Vehicular Technology, arXiv:1609.07710 [cs.NI], Sep. 2017. [13] 3GPP, “TR 36.828: Further enhancements to LTE Time Division Duplex for Downlink-Uplink interference management and traffic adaptation,” Jun. 2012. [14] T. Bai and R. Heath, “Coverage and rate analysis for millimeter-wave cellular networks,” IEEE Transactions on Wireless Communications, vol. 14, no. 2, pp. 1100–1114, Feb. 2015. [15] Spatial Channel Model AHG, “Subsection 3.5.3, Spatial Channel Model Text Description V6.0,” Apr. 2003. [16] I. Gradshteyn and I. Ryzhik, Table of Integrals, Series, and Products (7th Ed.). Academic Press, 2007. [17] M. Ding, M. Zhang, D. López-Pérez, and H. Claussen, “Correlated shadow fading for cellular network system-level simulations with wraparound,” 2015 IEEE International Conference on Communications (ICC), pp. 2245–2250, Jun. 2015. [18] T. Ding, M. Ding, G. Mao, Z. Lin, D. López-Pérez, and A. Y. Zomaya, “Uplink performance analysis of dense cellular networks with LoS and NLoS transmissions,” IEEE Transactions on Wireless Communications, vol. 16, no. 4, pp. 2601–2613, Apr. 2017. [19] M. Ding and H. 
Luo, Multi-point Cooperative Communication Systems: Theory and Applications. Springer, 2013.
m-Bonsai: a Practical Compact Dynamic Trie∗ arXiv:1704.05682v1 [] 19 Apr 2017 Andreas Poyias Department of Informatics, University of Leicester University Road, Leicester, LE1 7RH, United Kingdom. [email protected] Simon J. Puglisi Helsinki Institute of Information Technology, Department of Computer Science, University of Helsinki, P. O. Box 68, FI-00014, Finland. [email protected] Rajeev Raman Department of Informatics, University of Leicester University Road, Leicester, LE1 7RH, United Kingdom. [email protected] April 20, 2017 Abstract We consider the problem of implementing a space-efficient dynamic trie, with an emphasis on good practical performance. For a trie with n nodes with an alphabet of size σ, the informationtheoretic lower bound is n log σ + O(n) bits. The Bonsai data structure is a compact trie proposed by Darragh et al. (Softw., Pract. Exper. 23(3), 1993, pp. 277–291). Its disadvantages include the user having to specify an upper bound M on the trie size in advance (which cannot be changed easily after initalization), a space usage of M log σ + O(M log log M ) (which is asymptotically non-optimal for smaller σ or if n ≪ M ) and a lack of support for deletions. It supports traversal and update operations in O(1/ǫ) expected time (based on assumptions about the behaviour of hash functions), where ǫ = (M − n)/M and has excellent speed performance in practice. We propose an alternative, m-Bonsai, that addresses the above problems, obtaining a trie that uses (1 + β)n(log σ + O(1)) bits in expectation, and supports traversal and update operations in O(1/β) expected time and O(1/β 2 ) amortized expected time, for any user-specified parameter β > 0 (again based on assumptions about the behaviour of hash functions). We give an implementation of m-Bonsai which uses considerably less memory and is slightly faster than the original Bonsai. 1 Introduction In this paper, we consider practical approaches to the problem of implementing a dynamic trie in a highly space-efficient manner. A dynamic trie (also known as a dynamic cardinal tree [3]) ∗ This research was supported in part by the Academy of Finland via grant 294143. 1 is a rooted tree, where each child of a node is labelled with a distinct symbol from an alphabet Σ = {0, . . . , σ − 1}. We consider dynamic tries that support the following operations: create(): create a new empty tree. getRoot(): return the root of the current tree. getChild (v, c): return child node of node v having symbol c, if any (and return −1 if no such child exists). getParent(v): return the parent of node v. getLabel (v): return the label (i.e. symbol) of node v. addLeaf (v, c): add a new child of v with symbol c and return the newly created node. delLeaf (v, c): delete the child of v with symbol c, provided that the child indicated is a leaf (if the user asks to delete a child that is not a leaf, the subsequent operations may not execute correctly). The trie is a classic data structure (the name dates back to 1959 [8]) that has numerous applications in string processing. A naive implementation of tries uses pointers. Using this approach, each node in an n-node binary trie uses 3 pointers for the navigational operations. A popular alternative for larger alphabets is the ternary search tree (TST) [4], which uses 4 pointers (3 plus a parent pointer), in addition to the space for a symbol. Other approaches include the double-array trie (DAT) [1], which uses a minimum of two integers per node, each of magnitude O(n). 
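To make the per-node pointer overhead discussed here concrete, a pointer-based TST node can be sketched as follows; the field names are illustrative only and are not taken from any particular implementation.

class TSTNode:
    """Ternary search tree node: three child references, a parent reference,
    and one symbol, i.e., four machine words of pointer overhead per node on
    top of the symbol itself (the overhead quantified in the text)."""
    __slots__ = ("symbol", "lo", "eq", "hi", "parent")

    def __init__(self, symbol, parent=None):
        self.symbol = symbol
        self.lo = None      # subtree with smaller symbols
        self.eq = None      # subtree for the next position in the key
        self.hi = None      # subtree with larger symbols
        self.parent = parent

On a 64-bit machine the four references amount to 32 bytes of pointers per node, which is the 32n-byte overhead figure mentioned in the next paragraph.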
Since a pointer must asymptotically use Ω(log n) bits of memory, the asymptotic space bound of TST (or DAT) is O(n(log n + log σ)) bits1 . However, the information-theoretic space lower bound of n log σ + O(n) bits (see e.g. [3]) corresponds to one symbol and O(1) bits per node, so the space usage of both TST and DAT is asymptotically non-optimal. In practice, log σ is a few bits, or one or two bytes at most. An overhead of 4 pointers per node, or 32n bytes on today’s machines, makes it impossible to hold tries with even moderately many nodes in main memory. Although tries can be path-compressed by deleting nodes with just one child and storing paths explicitly, this approach (or more elaborate ones [23]) cannot guarantee a small space bound. Motivated by this, a number of space-efficient solutions were proposed [18, 3, 28, 11, 12], which represent static tries in information-theoretically optimal space, and support a wide range of operations. A number of asymptotic worst-case results for dynamic tries are given in [22, 29, 2, 20]. As our focus is on practical performance, we do not discuss all previous results in detail and refer the reader to, e.g., [2], for a comparison. For completeness, we give a summary of some of the results of [2, 20]. These results, use the standard word RAM model with word size Θ(log n) bits, as do ours. The first uses almost optimal 2n+n log σ +o(n log σ) bits, and supports trie operations in O(1) time if σ = polylog(n) and in O(log σ/ log log σ) time otherwise. The second [20] uses O(n log σ) bits and supports individual dynamic trie operations in O(log log n) amortized expected time, although finding the longest prefix of a string in the trie can be done in O(log k/ logσ n + log log n) expected time. Neither of these methods has been fully implemented, although a preliminary attempt (without memory usage measurements) was presented in [30]. Finally, we mention the wavelet trie [17], 1 Logarithms are to base 2 unless stated otherwise. 2 which is a data structure for a sequence of strings, and in principle can replace tries in many applications. Although in theory it is dynamic, we are not aware of any implementation of a dynamic wavelet trie. Predating most of this work, Darragh et al. [7] proposed the Bonsai data structure, which uses a different approach to support the above dynamic trie operations in O(1) expected time (based on assumptions about the behaviour of hash functions, which we will describe later). The Bonsai data structure, although displaying excellent practical performance in terms of run-time, has some deficiencies as a dynamic trie data structure. It is expressed in terms of a parameter M , which we will refer to as the capacity of the trie. In what follows, unless stated explicitly otherwise, M will be used to refer to this parameter, n (which must satisfy n < M ) will refer to the current number of nodes in the trie, ǫ will be used to denote (M − n)/M (i.e. ǫ satisfies n = (1 − ǫ)M ), and α will be used to denote 1 − ǫ. The Bonsai data structure allows a trie to be grown from empty to a theoretical maximum of M nodes, uses M log σ + O(M log log M ) bits of space and performs the above set of operations in O(1) expected time. However: • It is clear that, to perform operations in O(1) expected time and use space that is not too far from optimal, n must lie between (1−c1 )M and (1−c2 )M , for small constants 0 < c2 < c1 < 1. 
The standard way to maintain the above invariant is by periodically rebuilding the trie with a new smaller or larger value of M , depending on whether n is smaller than (1 − c1 )M or larger than (1 − c2 )M , respectively. To keep the expected (amortized) cost of rebuilding down to O(1), the rebuilding must take O(n) expected time. We are not aware of any way to achieve this without using Θ(n log n) additional bits—an unacceptably high cost. The natural approach to rebuilding, to traverse the old tree and copy it node-by-node to the new data structure, requires the old tree to be traversed in O(n) time. Unfortunately, in a Bonsai tree only root-to-leaf and leaf-to-root paths can be traversed efficiently. While a Bonsai tree can be traversed in O(nσ) time, this is too slow if σ is large. • Even if the above relationship between n and M is maintained, the space is non-optimal due to the additive O(M log log M ) bits term in the space usage of Bonsai, which can dominate the remaining terms of M (log σ + O(1)) bits when σ is small. In addition, the Bonsai data structure also has a certain chance of failure: if it fails then the data structure may need to be rebuilt, and its not clear how to do this without affecting the space and time complexities. Finally, it is not clear how to support delLeaf (Darragh et al. do not claim to support this operation). In this paper, we introduce m-Bonsai2 , a variant of Bonsai. The advantages of m-Bonsai include: • Based upon the same assumptions about the behaviour of hash functions as in Darragh et al. [7], our variant uses M (log σ + O(1)) bits of memory in expectation, and supports getChild in O(1/ǫ) expected time, and getParent, getRoot , and getLabel in O(1) time. • Using O(M log σ) additional bits of space, we are able to traverse a Bonsai tree in O(n + M ) time (in fact, we can traverse it in sorted order within these bounds). 2 This could be read as mame-bonsai (豆盆栽) a kind of small bonsai plant, or mini-bonsai. 3 • addLeaf and delLeaf can be supported in O((1/ǫ)2 ) amortized expected time. Note that in delLeaf the application needs to ensure that a deleted node is indeed a leaf3 . As a consequence, we obtain a trie representation that, for any given constant β > 0, uses at most (1+ β)(n log σ + O(n)) bits of space, and supports the operations addLeaf and delLeaf in O((1/β)2 ) expected amortized time, getChild in O(1/β) expected time, and getLabel and getParent in O(1) worst-case time. This trie representation, however, periodically uses O(n log σ) bits of temporary additional working memory to traverse and rebuild the trie. We give two practical variants of m-Bonsai. Our implementations and experimental evaluations show our first variant to be consistently faster, and significantly more space-efficient, than the original Bonsai. The second variant is even more space-efficient but rather slower. Of course, all Bonsai variants use at least 20 times less space than TSTs for small alphabets and compare well in terms of speed with TSTs. We also note that our experiments show that the hash functions used in Bonsai appear to behave in line with the assumptions about their behaviour. The rest of this paper is organized as follows. In Section 2, we summarize the Bonsai data structure [7], focussing on asymptotics. Section 3 summarizes the m-Bonsai approach. In Section 4 we give practical approaches to implementing the data structure described in Section 3, and give details of the implementation and experimental evaluation. 2 Preliminaries 2.1 Bit-vectors. 
Given a bit string x1 , . . . , xn , we define the following operations: select1 (x, i): Given an index i, return the location of ith 1 in x. rank1 (x, i): Return the number of 1s upto and including location i in x. Lemma 1 (Pǎtraşcu [25]) A bit string can be represented in n + O(n/(log n)2 ) bits such that select1 and rank1 can be supported in O(1) time. The above data structure can be dynamized by adding the following operation: flip(x, i): Given an index i, set xi to 1 − xi . It can be shown that for bit-strings that are polynomial in the word size, all the above operations can be supported in O(1) time: Lemma 2 ([27]) A bit string of length m = wO(1) bits can be represented in m + o(m) bits such that select1 , rank1 and flip can be supported in O(1) time, where w is the word size of the machine. 3 If the application cannot ensure this, one can use the CDRW array data structure [19, 26] to maintain the number of children of each node (as it changes with insertions and deletions) with no asymptotic slowdown in time complexity and an additional space cost of O(n log log n) bits. 4 2.2 Compact hash tables We now sketch the compact hash table (CHT) described by Cleary [6]. The objective is to store a set X of key-value pairs, where the keys are from U = {0, . . . , u − 1} and the values (also referred to as satellite data) are from Σ = {0, . . . , σ − 1}, and to support insertion and deletion of key-value pairs, checking membership of a given key in X, and retrieval of the value associated with a given key. Assuming that |X| = n, we let M > n be the capacity of the CHT. The CHT is comprised of two arrays, Q and F and two bit-strings of length M . To be notationally consistent with the introduction, we take ǫ = (M − n)/M . The description below assumes that n does not change significantly due to insertions and deletions into X—standard techniques such as rebuilding can be used to avoid this assumption. Keys are stored in Q using open addressing and linear probing4 . In normal implementations of hashing with open addressing, the keys would be stored in Q so that they can be compared with a key that is the argument to a given operation. Q would then take M ⌈log U ⌉ bits, which is excessive. To reduce the space usage of Q, Cleary uses quotienting [21]. In rough terms, the aim of quotienting is the following: given a hash function h : U → M , a good hash function h will have O(u/M ) elements of U that map to any given i ∈ M . Let x ∈ U be a key that already exists in the hash table, such that h(x) = i. For x, it should suffice to use a quotient value q(x) of log(u/M ) + O(1) bits to distinguish x from any other keys that may map to i. To represent x in in the CHT, we store q(x) in Q, and during any operation, given q(x) and h(x), we should be able to reconstruct x in O(1) time. If a hash function h supports this functionality, we say that it supports quotienting. While checking for membership of x in X, one needs to know h(y) for all keys y encountered during the search. As keys may not be stored at their initial addresses due to collisions, the idea is to keep all keys with the same initial address in consecutive locations in Q (this means that keys may be moved after they have been inserted) and to use the two bit-strings to effect the mapping from a key’s initial address to the position in Q containing its quotient (for details of the bit-strings see [6]). The satellite data is stored in an array F that has entries of size ⌈log σ⌉. 
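As a plain illustration of the semantics of select1, rank1 and flip defined above (ignoring the succinct-space machinery of Lemmas 1 and 2), a naive reference implementation follows; it is a sketch for exposition only, with every operation taking linear time rather than O(1).

class NaiveBitVector:
    """Naive bit-vector supporting rank1, select1 and flip (1-indexed)."""
    def __init__(self, bits):
        self.bits = list(bits)          # sequence of 0/1 values

    def rank1(self, i):
        """Number of 1s up to and including position i."""
        return sum(self.bits[:i])

    def select1(self, i):
        """Position of the i-th 1, or -1 if there are fewer than i ones."""
        count = 0
        for pos, b in enumerate(self.bits, start=1):
            count += b
            if count == i:
                return pos
        return -1

    def flip(self, i):
        """Set x_i to 1 - x_i."""
        self.bits[i - 1] ^= 1

# Example: in 1,0,1,1,0 the first three positions contain two 1s, and the
# third 1 is at position 4.
bv = NaiveBitVector([1, 0, 1, 1, 0])
assert bv.rank1(3) == 2 and bv.select1(3) == 4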
The entry F [i] stores the data associated with the key whose quotient is stored in Q[i] (if any). Thus, the overall space usage is M (log(u/n) + log σ + O(1)), which is within a small constant factor of the  information-theoretic minimum space of log nu + n log σ = n log(u/n) + n log σn log e − Θ(n2 /u) bits. To summarize: Theorem 3 ([6]) There is a data structure that stores a set X of n keys from {0, . . . , u − 1} and associated satellite data from {0, . . . , σ − 1} in at most M (log(u/n) + log σ + O(1)) bits, for any value M > n and supports insertion, deletion, membership testing of keys in X and retrieval of associated satellite data in O(1/ǫ) time, where ǫ = (M − n)/M . This assumes the existence of a hash function that supports quotienting and satisfies the full randomness assumption. Remark 4 The full randomness assumption [9] states that for every x ∈ U , h(x) is a random variable that is (essentially) uniformly distributed over {0, . . . , M − 1} and is independent of the random variable h(y) for all y 6= x. We now give an example of a hash function that supports quotienting that was used by Cleary. The hash function has the form h(x) = (ax mod p) mod M for some prime p > u and multiplier a, 1 ≤ a ≤ p − 1. For this h we set q(x) = ⌊(ax mod p)/M ⌋ corresponding to x. Given h(x) and q(x), 4 A variant, bidirectional probing, is used in [6], but we simplify this to linear probing. 5 it is possible to reconstruct x to check for membership. Thus, h supports quotienting. Observe that q(x) is at most ⌈log⌊(p − 1)/M ⌋. In addition, hash the table needs to store one additional value to indicate an empty (unused) location. Thus, the values in Q are at most ⌈log⌊(p − 1)/M ⌋ + 2⌉ bits. Since we can always find a p ≤ 2u, the values in Q are at most log(u/n)+3 bits. Although this family of hash functions is known not to satisfy the full randomness assumption, our implementation uses this family as well, and it performs well in practice in the applications that we consider. 2.3 Asymptotics of Bonsai We now sketch the Bonsai data structure, focussing on asymptotics. Let M be the capacity of the Bonsai data structure, n the current number of nodes in the Bonsai data structure, and let ǫ = (M − n)/M and α = (1 − ǫ) be as defined in the Introduction. The Bonsai data structure uses a CHT with capacity M , and refers to nodes via a unique node ID, which is a pair hi, ji where 0 ≤ i < M and 0 ≤ j < λ, where λ is an integer parameter that we discuss in greater detail below. If we wish to add a child w with symbol c ∈ Σ to a node v with node ID hi, ji, then w’s node ID is obtained as follows: We create the key of w using the node ID of v, which is a triple hi, j, ci. We insert the key into the CHT. Letting h : {0, . . . , M · λ · σ − 1} 7→ {0, . . . , M − 1} be the hash function used in the CHT. We evaluate h on the key of w. If i′ = h(hi, j, ci), the node ID of w is hi′ , j ′ i where j ′ ≥ 0 is the lowest integer such that there is no existing node with a node ID hi′ , j ′ i; i′ is called the initial address of w. Given the ID of a node, we can search for the key of any potential child in the CHT, which allows us to check if that child is present or not. However, to get the node ID of the child, we need to recall that all keys with the same initial hash address in the CHT are stored consecutively. The node ID of the child is obtained by checking its position within the set of keys with the same initial address as itself. 
This explains how to support the operations getChild and addLeaf ; for getParent(v) note that the key of v encodes the node ID of its parent. There are, however, two weaknesses in the functionality of the Bonsai tree. • It is not clear how to support delLeaf in this data structure without affecting the time and space complexities (indeed, as stated earlier, Darragh et al. [7] do not claim support for delLeaf ). If a leaf v with node ID hi, ji is deleted, we may not be able to handle this deletion explicitly by moving keys to close the gap, as any other keys with the same initial address cannot be moved without changing their node IDs (and hence the node IDs of all their descendants). Leaving a gap by marking the location previously occupied by v as “deleted” has the problem that the newly-vacated location can only store keys that have the same initial address i (in contrast to normal open address hashing). To the best of our knowledge, there is no analysis of the space usage of open hashing under this constraint. • As noted in the Introduction, is not obvious how to traverse an n-node tree in better than O((nσ)/ǫ) time. This also means that the Bonsai tree cannot be resized if n falls well below (or comes too close to) M without affecting the overall time complexity of addLeaf and delLeaf . Asymptotic space usage. In addition to the two bit-vectors of M bits each, the main space usage of the Bonsai structure is Q. Since a prime p can be found that is < 2 · M · λ · σ, it follows that the values in Q are at most ⌈log(2σλ + 1)⌉ bits. The space usage of Bonsai is therefore M (log σ + log λ + O(1)) bits. 6 Since the choice of the prime p depends on λ, λ must be fixed in advance. However, if more than λ keys are hashed to any value in {0, . . . , M − 1}, the algorithm is unable to continue and must spend O((nσ)/ǫ) time to traverse and possibly re-hash the tree. Thus, λ should be chosen large enough to reduce the probability of more than λ keys hashing to the same initial address to acceptable levels. In [7] the authors, assuming the hash function satisfies the full randomness assumption argue that choosing λ = O(log M/ log log M ) reduces the probability of error to at most M −c for any constant c (choosing asymptotically smaller λ causes the algorithm almost certainly to fail). As the optimal space usage for an n-node trie on an alphabet of size σ is O(n log σ) bits, the additive term of O(M log λ) = O(n log log n) makes the space usage of Bonsai non-optimal for small alphabets. However, even this choice of λ is not well-justified from a formal perspective, since the hash function used is quite weak—it is only 2-universal [5]. For 2-universal hash functions, the maximum √ number of collisions can only be bounded to O( n) [13] (note that it is not obvious how to use more robust hash functions, since quotienting may not be possible). Choosing λ to be this large would make the space usage of the Bonsai structure asymptotically uninteresting. Practical analysis. In practice, we note that choosing λ = 16 as suggested in [7] gives a relatively high failure probability for M = 256 and α = 0.8, using the formula in [7]. Choosing choosing λ = 32, and assuming the hash function satisfies full randomness, the error probability for M up to 264 is about 10−19 for α = 0.8 Also, in practice, the prime p is not significantly larger than M λσ [24, Lemma 5.1]. 
As a result (see the paragraph following Theorem 3), the space usage of Bonsai is typically well approximated by M(⌈log σ⌉ + 7) bits for the tree sizes under consideration in this paper (for alphabet sizes that are powers of 2, we should replace the 7 by 8).

3 m-Bonsai

3.1 Collision handling using O(M) bits

In our approach, each node again has an associated key that needs to be searched for in a hash table, again implemented using open addressing with linear probing and quotienting. However, the ID of a node x in our case is a number from {0, . . . , M − 1} that refers to the index in Q that contains the quotient corresponding to x. If a node with ID i has a child with symbol c ∈ Σ, the child's key, which is ⟨i, c⟩, is hashed using a multiplicative hash function h : {0, . . . , M·σ − 1} → {0, . . . , M − 1}, and an initial address i′ is computed. If i′′ is the smallest index ≥ i′ such that Q[i′′] is vacant, then we store q(x) in Q[i′′]. Observe that q(x) ≤ ⌈2σ⌉, so Q takes M log σ + O(M) bits. In addition, we have a displacement array D, and set D[i′′] = i′′ − i′. From the pair Q[l] and D[l], we can obtain both the initial hash address of the key stored there and its quotient, and thus reconstruct the key. The key idea is that in expectation, the average value in D is small:

Proposition 5 Assuming h is fully independent and uniformly random, the expected value of Σ_{i=0}^{M−1} D[i] after all n = αM nodes have been inserted is ≈ M · α²/(2(1 − α)).

Proof. The average number of probes, over all keys in the table, made in a successful search is ≈ (1/2)(1 + 1/(1 − α)) [21]. Multiplying this by n = αM gives the total average number of probes. However, the number of probes for a key is one more than its displacement value. Subtracting αM from the above and simplifying gives the result. ✷

Thus, encoding D using variable-length encoding could be very beneficial. For example, coding D in unary would take M + Σ_{i=1}^{M} D[i] bits; by Proposition 5, and plugging in α = 0.8, the expected space usage of D, encoded in unary, should be about 2.6M bits, which is smaller than the overhead of 7M bits of the original Bonsai. As shown in Table 1, predictions made using Proposition 5 are generally quite accurate. Table 1 also suggests that by encoding each D[i] using the Elias-γ-code [10] (hereafter abbreviated to just γ-code), we would come down to about 2.1M bits for D, for α = 0.8.

Table 1: Average number of bits per entry needed to encode the displacement array using the unary, Elias-γ and Golomb encodings. For the unary encoding, Proposition 5 predicts 1.8166…, 2.6 and 5.05 bits per value. For file details see Table 2.

                       unary               γ                 Golomb
  Load factor    0.7   0.8   0.9     0.7   0.8   0.9     0.7   0.8   0.9
  Pumsb         1.81  2.58  5.05    1.74  2.11  2.65    2.32  2.69  3.64
  Accidents     1.81  2.58  5.06    1.74  2.11  2.69    2.33  2.69  3.91
  Webdocs       1.82  2.61  5.05    1.75  2.11  2.70    2.33  2.70  3.92

3.2 Representing the displacement array

We now describe how to represent the displacement array. A compact dynamic rewriteable array (CDRW array) is a data structure for a sequence of non-negative integers that supports the following operations:

  create(n): Create an array A of size n with all entries initialized to zero.
  set(A, i, v): Set A[i] to v, where v ≥ 0 and v = n^{O(1)}.
  get(A, i): Return A[i].

The following lemma shows how to implement such a data structure.
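Before turning to the lemma, a small sketch may help to fix how Q and D interact. The code below is our own illustration (plain vectors rather than compact arrays, invented names, wrap-around probing) of the insertion step described above and of how the pair (Q[l], D[l]) determines the key stored at location l; it is not the implementation evaluated later.

    // m-Bonsai slot assignment: store the quotient and the displacement from the
    // initial address; together they identify the key stored at a slot.
    #include <cstdint>
    #include <vector>
    #include <optional>
    #include <utility>

    struct MBonsaiSlots {
        std::vector<std::optional<uint64_t>> Q;  // quotients; empty = vacant slot
        std::vector<uint64_t> D;                 // displacements
        uint64_t M;

        explicit MBonsaiSlots(uint64_t cap) : Q(cap), D(cap, 0), M(cap) {}

        // Insert a key with initial address i0 and quotient q; returns the slot (node ID).
        uint64_t insert(uint64_t i0, uint64_t q) {
            uint64_t l = i0;
            while (Q[l].has_value()) l = (l + 1) % M;   // linear probing
            Q[l] = q;
            D[l] = (l + M - i0) % M;   // O(D[l]) time, already paid for by the probing
            return l;
        }

        // Recover (initial address, quotient) of the key stored at slot l.
        std::pair<uint64_t, uint64_t> key_at(uint64_t l) const {
            return { (l + M - D[l]) % M, *Q[l] };
        }
    };

By Proposition 5 the displacement values written by insert are small on average, which is what makes the variable-length encodings of Table 1 effective.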
Note that the apparently slow running time of set is enough to represent the displacement array without asymptotic slowdown: setting D[i] = v means that O(v) time has already been spent in the hash table finding an empty slot for the key.

Lemma 6 A CDRW array A of size n can be represented in Σ_{i=1}^{n} |γ(A[i] + 1)| + n + o(n) bits, supporting get in O(1) time and set(A, i, v) in O(v) amortized time.

Proof. We divide A into contiguous blocks of size b = (log n)^{3/2}. The i-th block B_i = A[bi..bi + b − 1] will be stored in one or two contiguous sequences of memory locations. There will be a pointer pointing to the start of each of these contiguous sequences of memory locations.

We first give a naive representation of a block. All values in a block are encoded using γ-codes and (in essence) concatenated into a single bit-string. A set operation is performed by decoding all the γ-codes in the block, and re-encoding the new sequence of γ-codes. Since each γ-code is O(log n) bits, or O(1) words, long, it can be decoded in O(1) time. Decoding and re-encoding an entire block therefore takes O(b) time, which is also the time for the set operation. A get operation can be realized in O(1) time using the standard idea of concatenating the unary and binary portions of the γ-codes separately into two bit-strings, and using select1 operations on the unary bit-string to obtain, in O(1) time, the binary portion of the i-th γ-code. Let G_i = Σ_{j=bi}^{bi+b−1} |γ(A[j] + 1)|. The space usage of a single block is therefore G_i + O(G_i/(log n)² + log n) bits, where the second term comes from Lemma 1 applied to the unary part of the γ-codes and the third accounts for the pointers to the block and any unused space in the “last” word of a block representation. The overall space usage of the naive representation is therefore Σ_i G_i + O((Σ_i G_i)/(log n)² + (n log n)/b) bits: this adds up to Σ_i G_i + o(n) bits as required. The problem, however, is that set takes O(b) = O((log n)^{3/2}) time, regardless of the value being set.

To improve the speed of the set operation, we first make the observation that by a standard amortization argument, we can assume that the amortized time allowed for set(A, i, v) is O(v + v′), where v′ is the current value of A[i]. Next, we classify the values in a block as being either large or small, depending on whether or not they are ≥ b. Each block is now stored as two bit-strings, one each for small and large values. An additional bit-string, of length exactly b bits, indicates whether a value in a block is large or small. By performing rank1 and flip operations in O(1) time on this bit-string using Lemma 2 we can reduce a set or get operation on the block to a set or get operation on either the large or the small bit-strings.

The bit-string for large values (the large bit-string) is represented as in the naive representation. The space usage of the large bit-string for the i-th block is G^l_i + O(G^l_i/(log n)² + log n) bits, where G^l_i = Σ_{bi ≤ j < bi+b, A[j] is large} |γ(A[j] + 1)|. Observe that if either v or v′ is large, then the O(b) time needed to update the large bit-string is in fact O(v + v′) as required.

We now consider the representation of the small bit-string. Note that all γ-codes in the small bit-string are O(log b) = O(log log n) bits long. We divide a block into segments of λ = ⌈c log n/ log log n⌉ values for some sufficiently small constant c > 0.
All small values in a segment are stored as a concatenation of their γ-codes, followed by an overflow zone of between o/2 and 2o bits, where o = ⌈√(log n · log log n)⌉ bits (initially, all overflow zones are of size o bits). All segments, and their overflow zones, are concatenated into a single bit-string. The small bit-string of the i-th block, also denoted B_i, has length at most G^s_i + (b/λ)·(2o) = G^s_i + O(log n (log log n)²), where G^s_i = Σ_{bi ≤ j < bi+b, A[j] is small} |γ(A[j] + 1)|. We now discuss how to support operations on B_i.

• Firstly, we need to be able to find the starting and ending points of individual segments in a block. As the size of a segment and its overflow zone is an integer of at most O(log log n) bits, and there are only O(√(log n) · log log n) segments in a block, we can store the sizes of the segments and overflow blocks in a block in a single word and perform the appropriate prefix sum operations in O(1) time using table lookup.

• As we can ensure that a segment is of size at most (log n)/2 by choosing c small enough, we can support a set operation on a segment in O(1) time, by overwriting the sub-string of B_i that represents this segment (note that if v, the new value, is large, then we would need to excise the γ-code from the small bit-string, i.e. replace its old γ-code by an empty string). If the overflow zone exceeds 2o or goes below o/2 bits, the number of set operations that have taken place in this segment alone is Ω(√(log n · log log n)). Since the length of B_i is at most O(b log log n) bits or O(√(log n) · log log n) words, when any segment overflows, we can simply copy B_i to a new sequence of memory words, and while copying, use table lookup again to rewrite B_i, ensuring that each segment has an overflow zone of exactly o bits following it. Note that an overflow zone can only change in size by a constant factor, so rewriting a section of c log n bits in B_i will give a new bit-string which is also O(log n) bits long, and the rewriting takes O(1) time as a result. The amortized cost of rewriting is clearly O(1), so the case where a large value is replaced by a small one, thus necessitating a change in B_i even though the update is to the large bit-string, is also covered.

• To support a get in O(1) time is straightforward. After finding the start of a segment, we extract the appropriate segment in O(1) time and read and decode the appropriate γ-code using table lookup.

Adding up the space usages for the large and small blocks and the bit-string that distinguishes between large and small values gives Lemma 6. ✷

3.3 An alternative representation of the displacement array

A quick inspection of the proof of Lemma 6 shows that the data structure is not especially practical. We now give an alternative approach that is, asymptotically speaking, slightly less space-efficient. In this approach, we choose two integer parameters 1 < ∆0 < ∆1, and the displacement array D is split into three layers:

• The first layer consists of an array D0 of equal-length entries of ∆0 bits each, which has size M0 = M. All displacement values ≤ 2^{∆0} − 2 are stored as is in D0. If D[i] > 2^{∆0} − 2, then we set D0[i] = 2^{∆0} − 1.

• The second layer consists of a CHT with maximum size M1 ≤ M. If 2^{∆0} − 1 ≤ D[i] ≤ 2^{∆0} + 2^{∆1} − 2, then we store the value D[i] − 2^{∆0} + 1 as satellite data associated with the key i in the second layer. Note that the satellite data has value between 0 and 2^{∆1} − 1 and so fits into ∆1 bits.

• The third layer consists of a standard hash table.
If D[i] > 2^{∆0} + 2^{∆1} − 2, we store D[i] in this hash table as satellite data associated with the key i.

Clearly, D[i] can be accessed in O(1) expected time. We now describe how to choose the parameters ∆0 and ∆1. We use the following theorem, which is an adaptation of a textbook proof of the expected search cost of linear probing [16]:

Theorem 7 Given an open-address hash table with load factor α = n/M < 1, assuming full randomness, the probability that an unsuccessful search makes ≥ k probes is at most b_α · c_α^k for some constants b_α and c_α < 1 that depend only on α.

Proof. Let A[0..M − 1] be the hash table. Let x be a key that is not in the hash table, and let h(x) = i. If the search for x makes ≥ k probes then the locations A[i], A[i + 1], . . . , A[i + k − 1] must be occupied (to simplify notation, we ignore wrapping around the ends of A). A necessary condition for this to happen is that there must exist a k′ ≥ k such that k′ keys are mapped to A[i − k′ + k], . . . , A[i + k − 1], and the remaining keys are mapped to A[0..i − k′ + k − 1] or A[i + k..M − 1]. Under the assumption of full randomness, the number of keys mapped to A[i − k′ + k], . . . , A[i + k − 1] is binomially distributed with parameters k′/M and n, and the expected number of keys mapped to this region is k′n/M = αk′. Using the multiplicative form of the Chernoff bound, we get that the probability of ≥ k′ keys being hashed to this region is at most

  ( e^{1−α} / (1/α)^{1/α} )^{k′} = c_α^{k′},

where c_α < 1 is a constant that depends only on α. Summing over k′ = k, k + 1, . . . , n we get that the probability of an unsuccessful search taking over k probes is at most b_α · c_α^k, as desired. ✷

We now analyze the space usage:

• The space usage of the first layer is M·∆0. We will choose ∆0 = O(log^{(5)} n), so the space usage of this layer is O(n log^{(5)} n) bits.

• Let the expected number of displacement values stored in the second layer be n1. Only displacement values ≥ θ1 = 2^{∆0} − 1 will be stored in the second layer, so by Theorem 7, n1 = O(c_α^{θ1} · n). We choose ∆0 so that n1 = O(n/ log^{(3)} n), so θ1 = log^{(4)} n / log_2(1/c_α). It follows that ∆0 = O(log^{(5)} n), as promised above. We now discuss the asymptotic space usage of the CHT. By Theorem 3, the space usage of this CHT is M1(log(M/M1) + ∆1 + O(1)) bits, where M = n/α is the universe and M1 = n1/α is the size of the CHT (for simplicity, we choose the same load factor for m-Bonsai and the CHT). Since n1 = O(n/ log^{(3)} n), we see that M/M1 = O(log^{(3)} n), and hence the asymptotic space usage of the CHT is O((n / log^{(3)} n)(log^{(4)} n + ∆1 + O(1))) bits. We will choose ∆1 = O(log^{(3)} n), so the space usage of the CHT is O(n) bits.

• Finally, we discuss the asymptotic space usage of the third layer. Let n2 be the expected number of keys stored in the third layer. By Theorem 7, n2 = O(c_α^{θ2} · n), where θ2 = 2^{∆1} − 3. We choose ∆1 so that n2 = O(n/ log n), so θ2 = log^{(2)} n / log_2(1/c_α). It follows that ∆1 = O(log^{(3)} n), as promised above. Since the expected space usage of the third layer is O(n2 log n) bits, this is also O(n) bits as required.

3.4 Traversing the Bonsai tree

In this section, we discuss how to traverse a Bonsai tree with n nodes stored in a hash table array of size M in O(M + n) expected time. We first give an approach that uses O(M log σ) additional bits. This is then refined in the next section to an approach that uses only n log σ + O(M + n) additional bits. Finally, we show how to traverse the Bonsai tree in sorted order.

3.4.1 Simple traversal

Traversal involves a preprocessing step to build three simple support data structures.
The first of these is an array A of M (log σ)-bit integers. The preprocessing begins by scanning array Q left to right. For each non-empty entry Q[i] encountered in the scan, we increment A[getParent(i)]. At the end of the scan, a non-zero entry A[j] is the number of children of node j. We then create bitvector B, a unary encoding of A, which requires M + n + o(M + n) bits of space. Observe A[i] = select1(B, i) − select1(B, i − 1) and rank0(select1(B, i)) is the prefix sum of A[1..i]. We next allocate an array C, of size n, to hold the labels for the children of each node. To fill C, we scan Q a second time, and for each non-zero entry Q[i] encountered, we set C[select1(B, getParent(i)) − getParent(i) − A[getParent(i)]] ← getLabel(i) and decrement A[getParent(i)]. At the end of the scan, C[rank0(select1(B, i))..rank0(select1(B, i + 1))] contains precisely the labels of the children of node i, and A contains all zeroes. Note that the child labels in C for a given node are not necessarily in lexicographical order. Preprocessing time is dominated by the scans of Q, which take O(M) time. Space usage is (M + n)(log σ + 1 + o(1)) bits.

With A, B and C we are able to effect a depth-first traversal of the trie, as follows. B allows us to determine the label of the first child of an arbitrary node i in constant time: specifically, it is C[rank0(select1(B, i))]. Before we visit the child of i with label C[rank0(select1(B, i))], we increment A[i], which allows us, when we return to node i having visited its children, to determine the label of the next child of node i to visit: it is C[rank0(select1(B, i)) + A[i]]. Traversal, excluding preprocessing, clearly takes O(n) time.

3.4.2 Reducing space

We can reduce the space used by the simple traversal algorithm described above by exploiting the fact that the M values in A sum to n and so can be represented in n bits, in such a way that they can be accessed and updated in constant time with the help of B. Essentially, we will reuse the M log σ bits initially allocated for A to store the C array and a compressed version of A. We allocate min(M log σ, M + n(1 + log σ)) bits for A, compute it in the manner described in the simple traversal algorithm, and then use it to compute B. The space allocated for A is sufficient to store C, which is of size n log σ bits, and at least another n + M bits. Denote these n + M bits A′. We will use B to divide A′ into variable-length counters. Specifically, bits A′[select1(B, i − 1) + 1..select1(B, i)] will be used to store a counter that ranges from 0 to the degree of node i. A′ replaces the use of A above during the traversal phase.

3.4.3 Sorted traversal

We now describe a traversal that can be used to output the strings present in the trie in lexicographical order. In addition to the data structures used in simple traversal, we store L, a set of σ lists of integers, one for each symbol of the alphabet. We begin by scanning Q left to right, and for each non-empty entry Q[i] encountered in the scan, we increment A[getParent(i)] (as in the simple traversal algorithm) and append i to the list for symbol getLabel(i). At the end of the scan the lists contain n elements in total. Note also that the positions in the list for a given symbol are strictly increasing, and so we store them differentially encoded as Elias-γ codes to reduce their overall space to n log σ bits.
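Since γ-codes are used both for these lists and for the block representation of D, a minimal encoder/decoder may help to fix notation. The sketch below is our own illustration over a std::vector<bool> bit-string; the actual implementation uses sdsl's encode and decode routines instead.

    // Minimal Elias-gamma coder (illustrative only).
    #include <vector>
    #include <cstdint>
    #include <cstddef>

    inline void gamma_encode(uint64_t x, std::vector<bool>& out) {   // requires x >= 1
        int N = 0;
        while ((x >> (N + 1)) != 0) ++N;                   // N = floor(log2 x)
        for (int i = 0; i < N; ++i) out.push_back(false);  // N zeros (unary part)
        for (int i = N; i >= 0; --i) out.push_back((x >> i) & 1);  // N+1 binary digits, MSB first
    }

    inline uint64_t gamma_decode(const std::vector<bool>& in, std::size_t& pos) {
        int N = 0;
        while (!in[pos]) { ++N; ++pos; }                   // count leading zeros
        uint64_t x = 0;
        for (int i = 0; i <= N; ++i) x = (x << 1) | (in[pos++] ? 1 : 0);
        return x;                                          // |gamma(x)| = 2*floor(log2 x) + 1 bits
    }

Here |γ(x)| = 2⌊log2 x⌋ + 1, which is the quantity appearing in Lemma 6.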
We then compute B as in the simple traversal algorithm, and allocate space for C. Now, however, where in the simple algorithm we would make a second scan of Q, we scan the lists of L in lexicographical order of symbol, starting with the lexicographically smallest symbol. For each position j encountered in the scan, we access Q[j] and add getLabel(j) to the next available position in the region of C containing the child labels for node getParent(j) (which we can access as before with B and A). The child labels in C for a given node now appear in lexicographical order, allowing us to effect a lexicographic traversal of the trie.

3.5 Conclusion

Theorem 8 For any given integer σ and β > 0, there is a data structure that represents a trie on an alphabet of size σ with n nodes, using (1 + β)n(log σ + O(1)) bits of memory in expectation, and supporting getRoot, getParent, and getLabel in O(1) time, getChild in O(1/β) expected time, and addChild and delLeaf in O((1/β)²) amortized expected time. The data structure uses O(n log σ) additional bits of temporary memory in order to periodically restructure. The expected time bounds are based upon the existence of a hash function that supports quotienting and satisfies the full randomness assumption.

Proof. We represent the tree using the m-Bonsai data structure, representing the D-array using Lemma 6. Let M be the current capacity of the hash table representing the trie. We choose an arbitrary location r ∈ {0, . . . , M − 1} as the root of the tree, and set Q[r] = 0 (or indeed any value that indicates that r is an occupied location) and D[r] = 0 as well. We store r, which is the ID of the root node, in the data structure. The operations are implemented as follows:

getRoot: We return the stored ID of the root node.

getChild(v, c): We create the key ⟨v, c⟩ and search for it in the CHT. When searching for a key, we use the quotients stored in Q and O(1)-time access to the D array to recover the keys of the nodes we encounter during linear probing.

addChild(v, c): We create the key ⟨v, c⟩ and insert it into the CHT. Let i = h(⟨v, c⟩) and suppose that Q[j] is empty for some j ≥ i. We set D[j] = j − i; this takes O(j − i + 1) time, but can be subsumed by the cost of probing locations i, . . . , j.

getParent(v): If v is not the root, we use Q[v] and D[v] (both accessed in O(1) time) to reconstruct the key ⟨v′, c⟩ of v, where v′ is the parent of v and c is the label of v.

getLabel(v): Works in the same way as getParent.

delLeaf(v, c): We create the key ⟨v, c⟩ and search for it in the CHT as before. When we find the location v′ that is the ID of the leaf, we store a “deleted” value that is distinct from any quotient or from the “unoccupied” value (clearly, if an insertion into the CHT encounters a “deleted” value during linear probing, this is treated as an empty location and the key is inserted into it).

We now discuss the time complexity of the operations. If n is the current number of trie nodes, we ensure that the current value of M satisfies (1 + β/2)n ≤ M ≤ (1 + β)n, which means that ǫ = (M − n)/M varies between (β/2)/(1 + β/2) and β/(1 + β). Under this assumption, 1/ǫ is Θ((1 + β)/β), and the operations addChild, getChild and delLeaf take O(1/ǫ) = O(1/β) (expected) time. If an addChild causes n to go above M/(1 + β/2), we create a new hash table with capacity M′ = ((1 + 3β/4)/(1 + β/2)) · M. We traverse the old tree in O((M + n)/β) time and copy it to the new hash table.
However, at least Ω(βn) addChild operations must have occurred since the last time the tree was copied to its current hash table, so the amortized cost of copying is O((1/β)²). The case of a delLeaf operation causing n to go below M/(1 + β) is similar. The space complexity, by the previous discussions, is clearly M(log σ + O(1)) bits. Since M ≤ (1 + β)n, the result is as claimed. ✷

4 Implementation and experimental evaluation

In this section, we first discuss details of our implementations. Next we describe the implementation of a naive approach for traversal and the simple linear-time traversal explained in Section 3.4.1. All implementations are in C++ and use some components from the sdsl-lite library [15]. Finally we describe our machine's specifications, our datasets, our benchmarks, and the performance of our implementations in these benchmarks.

4.1 Cleary's CHT and original Bonsai

We implemented our own version of Cleary's CHT, which mainly comprises three sdsl containers: firstly, the int_vector<> class, which uses a fixed number of bits for each entry, is used for the Q array. In addition, we have two instances of the bit_vector container to represent the bit-strings used by [6, 7] to map a key's initial address to the position in Q that contains its quotient. The original Bonsai trie is implemented essentially on top of this implementation of the CHT.

4.2 Representation of the displacement array

As noted earlier, the data structure of Lemma 6 appears to be too complex for implementation. We therefore implemented two practical alternatives, one based on the naive approach outlined in Lemma 6 and one that is based on Section 3.3.

4.2.1 m-Bonsai (γ)

We now describe m-Bonsai (γ), which is based on representing the D-array using the naive approach of Lemma 6. D is split into M/b consecutive blocks of b displacement values each. The displacement values are stored as a concatenation of their γ-codes. Since γ-codes are only defined for positive values, and there will be some zero displacement values, we add 1 to all displacement values. In contrast to the description in Lemma 6, to perform a get(i), we find the block containing D[i], decode all the γ-codes in the block up to position i, and return the value. To perform set(i, v), we decode the block containing i, change the appropriate value and re-encode the entire block. In our experiments, we choose b = 256. We have an array of pointers to the blocks. We used sdsl's encode and decode functions to encode and decode each block for the set and get operations.

4.2.2 m-Bonsai (recursive)

We now discuss the implementation of the alternate representation of the displacement array discussed in Section 3.3, which we call m-Bonsai (recursive). D0 has equal-length entries of ∆0 bits each and is implemented as an int_vector<>. The second layer is a CHT, implemented as discussed above. The third layer is implemented using the C++ std::map. We now discuss the choice of parameters ∆0 and ∆1. The description in Section 3.3 is clearly aimed at asymptotic analysis: the threshold θ1 above which values end up in the second or third layers, for n = 265,536 and α = 0.8, is about 2/9. As a result the values of ∆0 and ∆1 are currently chosen numerically. Specifically, we compute the probability of a displacement value exceeding k for load factors α = 0.7, 0.8, and 0.9 using the exact analysis in [21] (however, since we are not aware of a closed-form formula for this probability, the calculation is done numerically).
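To make the layering concrete, the following is a minimal sketch of the get/set logic of the recursive representation (our own illustration: the real second layer is the CHT described above, which we stand in for with an unordered_map, and D0 would be an sdsl int_vector<> rather than a plain vector).

    // Three-layer displacement array of m-Bonsai (recursive), following Section 3.3.
    #include <cstdint>
    #include <vector>
    #include <unordered_map>
    #include <map>

    struct LayeredDisplacement {
        static const unsigned Delta0 = 3, Delta1 = 7;      // parameter choices discussed below
        std::vector<uint8_t> D0;                           // first layer (Delta0-bit entries)
        std::unordered_map<uint64_t, uint8_t> layer2;      // stand-in for the second-layer CHT
        std::map<uint64_t, uint64_t> layer3;               // third layer: rare, large values

        explicit LayeredDisplacement(std::size_t M) : D0(M, 0) {}

        void set(uint64_t i, uint64_t v) {
            const uint64_t cap0 = (1u << Delta0) - 1;      // sentinel stored in D0
            if (v <= cap0 - 1) { D0[i] = (uint8_t)v; return; }         // v <= 2^Delta0 - 2
            D0[i] = (uint8_t)cap0;                                     // mark overflow
            if (v <= cap0 - 1 + (1u << Delta1)) layer2[i] = (uint8_t)(v - cap0);  // fits Delta1 bits
            else layer3[i] = v;
        }

        uint64_t get(uint64_t i) const {
            const uint64_t cap0 = (1u << Delta0) - 1;
            if (D0[i] < cap0) return D0[i];
            auto it = layer2.find(i);
            return it != layer2.end() ? it->second + cap0 : layer3.at(i);
        }
    };

With ∆0 = 3 and ∆1 = 7 (the choices justified next), almost all displacements are answered from D0 alone.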
The numerical analysis shows, for example, that for α = 0.8, choosing ∆0 = 2 or 3 and ∆1 = 6, 7, or 8 gives roughly the same expected space usage. Clearly, choosing ∆0 = 3 would give superior performance as more displacement values would be stored in D0, which is just an array, so we chose ∆0 = 3. Given this choice, even choosing ∆1 = 7 (which means displacement values ≥ 134 are stored in the third layer), the probability of a displacement value being stored in the third layer is at most 0.000058. Since storing a value in the std::map takes 384 bits on a 64-bit machine, the space usage of the third layer is negligible for a 64-bit machine.

4.3 m-Bonsai traversal

We now discuss the implementation of traversals. As discussed, the difficulty with both Bonsai data structures is that the getChild and getParent operations only support leaf-to-root traversal. One approach to traversing a tree with this set of operations is as follows. Suppose that we are at a node v. For i = 0, . . . , σ − 1, we can perform getChild(v, i) to check if v has a child labelled i; if a child is found, we recursively traverse this child and its descendants. This approach takes O(nσ) time. The algorithm described in Section 3.4.1 was implemented using sdsl containers as follows. The arrays A and C are implemented as sdsl int-vectors of length M and n respectively, with width ⌈log σ⌉. The bit-vector was implemented as a select_support_mcl class over a bit_vector of length M + n.

4.4 Experimental evaluation

4.4.1 Datasets

We use benchmark datasets arising in frequent pattern mining [14], where each “string” is a subset of a large alphabet (up to tens of thousands). We also used sets of short read genome strings given in the standard FASTQ format. These datasets have a relatively small alphabet σ = 5. Details of the datasets can be found in Table 2.

4.4.2 Experimental setup

All the code was compiled using g++ 4.7.3 with optimization level 6. The machine used for the experimental analysis is an Intel Pentium 64-bit machine with 8GB of main memory and a G6950 CPU clocked at 2.80GHz with 3MB L2 cache, running Ubuntu 12.04.5 LTS Linux. To measure the resident memory (RES), /proc/self/stat was used. For the speed tests we measured wall clock time using std::chrono::duration_cast.

4.4.3 Tests and results

We now give the results of our experiments, divided into tests on the memory usage, and benchmarks for build, traverse and successful search operations. Our m-Bonsai approaches were compared with Bonsai and Bentley's C++ TST implementation [4]. The DAT implementation of [31] was not tested since it apparently uses 32-bit integers, limiting the maximum trie size to 2^32 nodes, which is not a limitation for the Bonsai or TST approaches. The tests of [31] suggest that even with this “shortcut”, the space usage is only a factor of 3 smaller than TST (albeit it is ∼2 times faster).

Memory usage. We compare the aforementioned data structures in terms of memory usage. For these experiments we set α = 0.8 for all Bonsai data structures. Then, we insert the strings of each dataset in the trees and we measure the resident memory. Table 2 shows the space per node (in bits). We note that m-Bonsai (γ) consistently uses the least memory, followed by m-Bonsai (r). Both m-Bonsai variants used less memory than the original Bonsai. Since the improvement of m-Bonsai over Bonsai is a reduction in space usage from O(n log σ + n log log n) to O(n log σ + n), the difference will be greatest when σ is small.
This can be observed in our experimental results, where the difference in space usage between m-Bonsai and the original Bonsai is greatest when σ is small. In the FASTQ datasets, original Bonsai uses 85% more space than m-Bonsai (r), while the advantage is reduced to 23% for webdocs. Of course, all Bonsai variants are significantly more space-efficient than the TST: on the FASTQ datasets, by a factor of nearly 60. Indeed, the TST could not even load the larger datasets (FASTQ and webdocs) on our machine.

Table 2: Characteristics of datasets and memory usage (bits per node) for all data structures. TST was not able to complete the process for larger datasets.

  Datasets      n           σ      m-Bonsai (r)  m-Bonsai (γ)  Bonsai  TST
  Chess         38610       75     13.99         11.94         17.51   389.56
  Accidents     4242318     442    16.45         14.42         19.99   388.01
  Pumsb         1125375     1734   18.95         16.93         22.51   387.52
  Retail        1125376     8919   22.71         20.69         26.25   384.91
  Webdocs8      63985704    364    16.45         14.44         19.99   386.75
  Webdocs       231232676   59717  25.20         23.19         28.72   —
  SRR034939     3095560     5      8.94          6.78          12.51   385.88
  SRR034944     21005059    5      8.93          6.74          12.51   385.76
  SRR034940     1556235309  5      8.93          6.77          12.51   —
  SRR034945     1728553810  5      8.93          6.79          12.51   —

Tree construction speed. In Table 3 we show the wall clock time in seconds for the construction of the tree. Of the three Bonsai implementations, m-Bonsai (r) is always the fastest and beats the original Bonsai by 25% for the bigger datasets and about 40% for the smaller ones. We believe m-Bonsai (r) may be faster because Bonsai requires moving elements in Q and one of the bit-vectors with each insertion. When inserting a node in m-Bonsai (r), each D[i] location is written to once and D and Q are not rearranged after that. Finally, m-Bonsai (γ) is an order of magnitude slower than the other Bonsai variants. It would appear that this is due to the time required to access and decode the concatenated γ-encodings of a block, append the value and then encode the whole block back with the new value. Comparing to the TST, m-Bonsai (r) was comparably fast even on small datasets which fit comfortably into main memory, and often faster (e.g. m-Bonsai is 30% faster on webdocs). The difference appears to be smaller on the FASTQ datasets, which have small alphabets. However, the TST did not complete loading some of the FASTQ datasets.

Table 3: The wall clock time in seconds for the construction of the trie. TST was not able to complete the process for larger datasets.

  Datasets     m-Bonsai (r)  m-Bonsai (γ)  Bonsai    TST
  Chess        0.02          0.21          0.08      0.02
  Accidents    2.18          21.46         3.01      2.31
  Pumsb        0.43          5.85          0.69      0.57
  Retail       0.22          2.27          0.25      0.31
  Webdocs8     26.07         252.59        32.75     18.25
  Webdocs      96.38         869.22        130.92    —
  SRR034939    0.61          10.25         0.79      0.61
  SRR034944    5.72          70.83         7.34      4.31
  SRR034940    730.55        6,192.14      970.81    —
  SRR034945    841.87        7,199.11      1,106.39  —

Traversal speed. In Table 4 we compare the simple linear-time and naive traversals, using m-Bonsai (r) as the underlying Bonsai representation. After we construct the trees for the given datasets, we traverse and measure the performance of the two approaches in seconds. The simple linear-time traversal includes both the preparation and the traversal phase. Since the naive traversal takes O(nσ) time and the simple traversal takes O(n) time, one would expect the difference to be greater for large σ. For example, Retail is a relatively small dataset with large σ, and the difference in speed is nearly two orders of magnitude, whereas all FASTQ datasets are consistently only 2 times slower.
Surprisingly, even for datasets with small σ like the FASTQ, the naive approach does worse than the simple linear-time approach. This may be because the simple linear-time traversal makes fewer random memory accesses (approximately 3n) during traversal, while the naive approach makes nσ = 5n random memory accesses for these datasets. In addition, note that most searches in the hash table for the naive traversal are unsuccessful searches, which are slower than successful searches.

Table 4: The wall clock time in seconds for traversing the tree using the simple and naive approaches.

  Datasets     Simple traversal  Naive traversal
  Chess        0.02              0.38
  Accidents    4.82              228.92
  Pumsb        1.01              233.11
  Retail       1.04              788.36
  Webdocs8     104.92            6,617.39
  Webdocs      150.81            —
  SRR034939    2.61              4.52
  SRR034944    24.78             41.94
  SRR034940    3,352.81          7,662.37
  SRR034945    4,216.82          8,026.94

Successful search speed. Now we explain the experiment for the runtime speed of successful search operations. For this experiment we designed our own search-datasets, where we randomly picked 10% of the strings from each dataset, shown in Table 5. After the tree construction, we measured the time needed in nanoseconds per successful search operation. It is obvious that TST is the fastest approach. However, m-Bonsai (recursive) remains competitive with TST and consistently faster than Bonsai by at least 1.5 times, whereas m-Bonsai (γ) is the slowest. Note that there is an increase in the time per search operation for all Bonsai data structures as the datasets get bigger, since there are more cache misses. For TST, we see that Retail with high σ is affecting the runtime speed, as TST can search for a child of a node in O(log σ) time.

Table 5: The wall clock time in nanoseconds per successful search operation.

  Datasets     m-Bonsai (r)  m-Bonsai (γ)  Bonsai  TST
  Chess        130           1240          288     59
  Accidents    187           1342          399     60
  Pumsb        134           1204          301     55
  Retail       407           1244          418     102
  Webdocs8     296           1573          586     61
  Webdocs      352           1705          795     —
  SRR034939    173           1472          350     65
  SRR034944    247           1682          498     66
  SRR034940    451           1946          709     —
  SRR034945    511           1953          718     —

5 Conclusion

We have demonstrated a new variant of the Bonsai approach to store large tries in a very space-efficient manner. Not only have we (re-)confirmed that the original Bonsai approach is very fast and space-efficient on modern architectures; both m-Bonsai variants we propose are significantly smaller (both asymptotically and in practice) and one of them is a bit faster than the original Bonsai. In the near future we intend to investigate other variants, to give a less ad-hoc approach to m-Bonsai (recursive), and to compare with other trie implementations. Neither of our approaches is very close to the information-theoretic lower bound of (σ log σ − (σ − 1) log(σ − 1))n − O(log(kn)) bits [3]. For example, for σ = 5, the lower bound is 3.61n bits, while m-Bonsai (γ) takes ∼5.6M ≈ 7n bits. Closing this gap would be an interesting future direction. Another interesting open question is to obtain a practical compact dynamic trie that has a wider range of operations, e.g. being able to navigate directly to the sibling of a node.

Acknowledgements. We thank Gonzalo Navarro for helpful comments and in particular for suggesting an asymptotic analysis of the D-array representation used in m-Bonsai (r).

References

[1] Jun-Ichi Aoe. An efficient digital search algorithm by using a double-array structure. IEEE Trans. Software Eng., 15(9):1066–1077, 1989.

[2] Diego Arroyuelo, Pooya Davoodi, and Srinivasa Rao Satti. Succinct dynamic cardinal trees. Algorithmica, 74:1–36, 2015.
Online first. [3] David Benoit, Erik D. Demaine, J. Ian Munro, Rajeev Raman, Venkatesh Raman, and S. Srinivasa Rao. Representing trees of higher degree. Algorithmica, 43(4):275–292, 2005. [4] Jon Bentley and Bob Sedgewick. Ternary http://www.drdobbs.com/database/ternary-search-trees/184410528, 1998. 18 search trees. [5] Larry Carter and Mark N. Wegman. Universal classes of hash functions. J. Comput. Syst. Sci., 18(2):143–154, 1979. [6] John G. Cleary. Compact hash tables using bidirectional linear probing. IEEE Trans. Computers, 33(9):828–834, 1984. [7] John J. Darragh, John G. Cleary, and Ian H. Witten. Bonsai: a compact representation of trees. Softw., Pract. Exper., 23(3):277–291, 1993. [8] René de la Briandais. File searching using variable length keys. In Proc. Western J. Computer Conf., pages 295—-298, 1959. [9] Martin Dietzfelbinger. On randomness in hash functions (invited talk). In Christoph Dürr and Thomas Wilke, editors, 29th International Symposium on Theoretical Aspects of Computer Science, STACS 2012, February 29th - March 3rd, 2012, Paris, France, volume 14 of LIPIcs, pages 25–28. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2012. [10] Peter Elias. Universal codeword sets and representations of the integers. IEEE Trans. Information Theory, 21(2):194–203, 1975. [11] Arash Farzan and J. Ian Munro. A uniform paradigm to succinctly encode various families of trees. Algorithmica, 68(1):16–40, 2014. [12] Arash Farzan, Rajeev Raman, and S. Srinivasa Rao. Universal succinct representations of trees? In Susanne Albers, Alberto Marchetti-Spaccamela, Yossi Matias, Sotiris E. Nikoletseas, and Wolfgang Thomas, editors, Automata, Languages and Programming, 36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Proceedings, Part I, volume 5555 of Lecture Notes in Computer Science, pages 451–462. Springer, 2009. [13] Michael L. Fredman, János Komlós, and Endre Szemerédi. Storing a sparse table with 0(1) worst case access time. J. ACM, 31(3):538–544, 1984. [14] Bart Goethals. Frequent itemset mining implementations repository. http://fimi.ua.ac.be/. [15] Simon Gog, Timo Beller, Alistair Moffat, and Matthias Petri. From theory to practice: Plug and play with succinct data structures. In Experimental Algorithms - 13th International Symposium, SEA 2014, Copenhagen, Denmark, June 29 - July 1, 2014. Proceedings, pages 326–337, 2014. [16] Michael T. Goodrich and Roberto Tamassia. Algorithm Design and Applications. Wiley, 2015. [17] Roberto Grossi and Giuseppe Ottaviano. The wavelet trie: maintaining an indexed sequence of strings in compressed space. In PODS, pages 203–214, 2012. [18] Guy Jacobson. Space-efficient static trees and graphs. In Proc. 30th Annual Symposium on Foundations of Computer Science, pages 549–554. IEEE Computer Society, 1989. [19] Jesper Jansson, Kunihiko Sadakane, and Wing-Kin Sung. CRAM: compressed random access memory. In Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I, volume 7391 of Lecture Notes in Computer Science, pages 510–521. Springer, 2012. 19 [20] Jesper Jansson, Kunihiko Sadakane, and Wing-Kin Sung. Linked dynamic tries with applications to lz-compression in sublinear time and space. Algorithmica, 71(4):969–988, 2015. [21] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.). Addison Wesley Longman, 1998. [22] J. Ian Munro, Venkatesh Raman, and Adam J. Storm. Representing dynamic binary trees succinctly. In S. 
Rao Kosaraju, editor, Proc. 12th Annual Symposium on Discrete Algorithms, pages 529–536. ACM/SIAM, 2001. [23] Stefan Nilsson and Matti Tikkanen. An experimental study of compression methods for dynamic tries. Algorithmica, 33(1):19–33, 2002. [24] Rasmus Pagh. Low redundancy in static dictionaries with constant query time. SIAM J. Comput., 31(2):353–363, 2001. [25] Mihai Patrascu. Succincter. In 49th Annual IEEE Symp. Foundations of Computer Science, pages 305–313. IEEE Computer Society, 2008. [26] Andreas Poyias, Simon J. Puglisi, and Rajeev Raman. Compact dynamic rewritable (CDRW) arrays. In Sándor P. Fekete and Vijaya Ramachandran, editors, Proceedings of the Ninteenth Workshop on Algorithm Engineering and Experiments, ALENEX 2017, Barcelona, Spain, Hotel Porta Fira, January 17-18, 2017., pages 109–119. SIAM, 2017. [27] Rajeev Raman, Venkatesh Raman, and S. Srinivasa Rao. Succinct dynamic data structures. In Frank K. H. A. Dehne, Jörg-Rüdiger Sack, and Roberto Tamassia, editors, Algorithms and Data Structures, 7th International Workshop, WADS 2001, Providence, RI, USA, August 810, 2001, Proceedings, volume 2125 of Lecture Notes in Computer Science, pages 426–437. Springer, 2001. [28] Rajeev Raman, Venkatesh Raman, and Srinivasa Rao Satti. Succinct indexable dictionaries with applications to encoding k -ary trees, prefix sums and multisets. ACM Transactions on Algorithms, 3(4), 2007. [29] Rajeev Raman and S. Srinivasa Rao. Succinct dynamic dictionaries and trees. In Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, and Gerhard J. Woeginger, editors, Automata, Languages and Programming, 30th International Colloquium, ICALP 2003, Eindhoven, The Netherlands, June 30 - July 4, 2003. Proceedings, volume 2719 of Lecture Notes in Computer Science, pages 357–368. Springer, 2003. [30] Takuya Takagi, Takashi Uemura, Shunsuke Inenaga, Kunihiko Sadakane, and Hiroki Arimura. Applications of succinct dynamic compact tries to some string problems. http://www-ikn.ist.hokudai.ac.jp/∼arim/papers/waac13takagi.pdf . Presented at WAAC’13. [31] Naoki Yoshinaga and Masaru Kitsuregawa. A self-adaptive classifier for efficient text-stream processing. In Jan Hajic and Junichi Tsujii, editors, COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 23-29, 2014, Dublin, Ireland, pages 1091–1102. ACL, 2014. 20
ON CYCLE DECOMPOSITIONS IN COXETER GROUPS arXiv:1611.03442v1 [] 10 Nov 2016 THOMAS GOBET Abstract. The aim of this note is to show that the cycle decomposition of elements of the symmetric group admits a quite natural formulation in the framework of dual Coxeter theory, allowing a generalization of it to the family of so-called parabolic quasi-Coxeter elements of Coxeter groups (in the symmetric group every element is a parabolic quasiCoxeter element). We show that such an element admits an analogue of the cycle decomposition. Elements which are not in this family still admit a generalized cycle decomposition, but it is not unique in general. 1. Introduction The cycle decomposition in the symmetric group is a powerful combinatorial tool to study properties of permutations. On the other hand, the symmetric groups can be realized as Coxeter groups. It is easy for example to determine the order of an element from its cycle decomposition, hence even if we prefer to view the symmetric groups as Coxeter groups it is sometimes useful to represent their elements as permutations and make use of their unique cycle decomposition, rather than using Coxeter theoretic representations of the elements as words in the simple generating set. It therefore appears as natural to wonder whether the cycle decomposition admits a natural generalization to Coxeter groups. However, when trying to define cycle decompositions in the symmetric group purely in terms of the classical Coxeter theoretic data, one rapidly sees an obstruction towards such a generalization: considering a Coxeter system (W, S) of type An (with W identified with Sn+1 and S with the set of simple transpositions) and an element w ∈ W with cycle decomposition w = c1 c2 · · · ck , the Coxeter length ℓS (w) of w is not equal in general to the sums of the lengths of the various ci . However, replacing the generating simple set S by the set T of all transpositions and the classical length by the length function ℓT on W with respect P to T , one has that ℓT (w) = ki=1 ℓT (ci ). The set of transpositions forms a single conjugacy class. From a Coxeter theoretic point of view, it is the set of reflections of W , i.e., the set of W -conjugates of the elements of S. In particular the reflection length function ℓT can be defined for an abritrary Coxeter group. There are deep motivations for the study of a (finite) reflection group as a group generated by the set T of all its reflections instead of just the set S of reflections through the walls of a chamber. This approach, nowadays called the dual approach, has been a very active field of research in the last fifteen years (see for instance [4], [8], [1], [12], [13]). The author is funded by the ANR Geolie ANR-15-CE40-0012. 1 2 THOMAS GOBET The above basic observation on the reflection length of a permutation indicates that the cycle decomposition has something which is dual in essence. Each cycle can be thought of as a Coxeter element in an irreducible parabolic subgroup of W . From the type An picture, it therefore appears as natural to generalize a cycle decomposition as a decomposition of an element w of Coxeter system (W, S) into a product of Coxeter elements in irreducible reflection subgroups of W , which pairwise commute, and such that the sum of their reflection lengths equals the reflection length of w. However, even for finite W there are in general elements failing to admit such a decomposition. 
In order to make it work, one has to relax the definition of Coxeter element to that of a quasi-Coxeter element (in type An both are equivalent). Namely, given w ∈ W and denoting by RedT (w) the set of T -reduced expressions of w, that is, minimal length expressions of w as product of reflections, we say that w ∈ W is a parabolic quasi-Coxeter element if it satisfies the following condition: Condition 1.1. There exists (t1 , t2 , . . . , tk ) ∈ RedT (w) such that W ′ := ht1 , t2 , . . . , tk i is a parabolic subgroup of W . If the parabolic subgroup is the whole group W then we just call w a quasiCoxeter element. In type Dn for instance, there are quasi-Coxeter elements which fail to be Coxeter elements (that is, with no T -reduced expression yielding a simple system for W ; see [3]). The parabolic subgroup in the above Condition is unique in the sense that if another reduced expression (q1 , q2 , . . . , qk ) ∈ RedT (w) generates a parabolic subgroup W ′′ of W , then W ′ = W ′′ (see Lemma 2.6). We therefore denote W ′ by P (w). In this situation it is easy to derive Proposition 1.2 (Generalized cycle decomposition). Let (W, S) be a Coxeter system. Let w ∈ W satisfying Condition 1.1. There exists a (unique up to the order of the factors) decomposition w = x1 x2 · · · xm , xi ∈ W such that (1) xi xj = xj xi for all i, j = 1, . . . , m, (2) ℓT (w) = ℓT (x1 ) + ℓT (x2 ) + · · · + ℓT (xm ), (3) Each xi admits a T -reduced expression generating an irreducible parabolic subgroup Wi of W and P (w) = W1 × W2 × · · · × Wm . This statement is not entirely satisfying in the sense that we would like to state the maximality condition given in point (3) only in terms of the factors xi and not in terms of the parabolic subgroups. More precisely, we expect the xi ’s to be indecomposable, that is, to admit no nontrivial decomposition of the form ui vi with ui vi = vi ui and ℓT (xi ) = ℓT (ui ) + ℓT (vi ). For finite groups at least this can be achieved (see Proposition 3.5) yielding Theorem 1.3 (Generalized cycle decomposition in finite Coxeter groups). Let (W, S) be a finite Coxeter system. Let w ∈ W satisfying Condition 1.1. There exists a (unique up to the order of the factors) decomposition w = x1 x2 · · · xm , xi ∈ W such that (1) xi xj = xj xi for all i, j = 1, . . . , m, (2) ℓT (w) = ℓT (x1 ) + ℓT (x2 ) + · · · + ℓT (xm ), ON CYCLE DECOMPOSITIONS IN COXETER GROUPS 3 (3) Each xi is indecomposable. Theorem 1.3 is the analogue of the cycle decomposition for parabolic quasi-Coxeter elements. In general there are elements in W failing to be parabolic quasi-Coxeter elements, but the advantage of this definition is that such an element is always a quasi-Coxeter element in a reflection subgroup (but this subgroup is never unique when W is finite: in that case by Corollary 3.11 below these reflection subgroups are in bijection with the number of Hurwitz orbits on RedT (w); by [3, Theorem 1.1] this number is one precisely when w satisfies Condition 1.1). 2. Coxeter groups and their parabolic subgroups Let (W, S) be a (not necessarily finite) Coxeter system of rank n = |S|. We assume the reader to be familiar with the general theory groups S of Coxeter −1 and refer to [6] or [11] for basics on the topic. Let T = w∈W wSw be the set of reflections of W . Let ℓT : W → Z≥0 be the reflection length, that is, for w ∈ W the integer ℓT (w) is the smallest possible length of an expression of w as product of reflections. 
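For orientation, here is a small worked example in type A; it is our own illustration and is not taken from the text above. In W = S5 with T the set of all transpositions, take w = (1 2 3)(4 5). Then ((1 2), (2 3), (4 5)) ∈ RedT (w), so ℓT (w) = 3, and the subgroup generated by this reduced expression is ⟨(1 2), (2 3)⟩ × ⟨(4 5)⟩ ≅ S{1,2,3} × S{4,5}, a parabolic subgroup; hence w satisfies Condition 1.1 and P (w) is this subgroup. The decomposition of Theorem 1.3 is w = x1 x2 with x1 = (1 2 3) and x2 = (4 5), and ℓT (x1 ) + ℓT (x2 ) = 2 + 1 = ℓT (w), recovering the usual cycle decomposition of w.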
We write ≤T for the absolute order on W , that is, for u, v ∈ W we set u ≤T v ⇔ ℓT (u) + ℓT (u−1 v) = ℓT (v). Given w ∈ W , we denote by RedT (w) the set of T -reduced expressions of w, that is, the set of minimal length expressions for w as products of reflections. Definition 2.1. A subgroup W ′ ⊆ W is parabolic if it exists a subset S ′ = {r1 , r2 , . . . , rn } ⊆ T and m ≤ n such that (W, S ′ ) is a Coxeter system and W ′ = hr1 , r2 , . . . , rm i. The above definition, which is borrowed from [2], is more general than the usual definition of parabolic subgroups as conjugates of subgroups generated by subsets of S. In [3, 4.4 and 4.6] it is shown that the above definition is equivalent to the classical one for finite and irreducible 2-spherical Coxeter groups. The example below shows that the two definitions are not equivalent in general: Example 2.2. Let W be a universal Coxeter group on three generators S = {s, t, u}, that is, with no relation between distinct generators. Then S ′ := {s, t, tut} ⊆ T is a simple system for W , hence X := hs, tuti is parabolic. However, using the fact that elements of W have a unique Sreduced expression (because W is universal) it is easy to check that X is not conjugate to any of the three rank 2 standard parabolic subgroups of W . Parabolic subgroups provide a family of reflection subgroups of W , that is, subgroups generated by reflections. Any reflection subgroup W ′ ⊆ W comes equipped with a canonical structure of Coxeter group (see [10]), in particular it has a canonical set S ′ of Coxeter generators. Moreover by [10, Corollary 3.11 (ii)] the set Ref(W ′ ) of W ′ -conjugates of S ′ (the reflections of W ′ ) coincide with W ′ ∩ T . The rank rank(W ′ ) of W ′ is defined to be |S ′ |. The following result will be useful: 4 THOMAS GOBET Theorem 2.3 ([2, Theorem 1.4]). Let W ′ ⊆ W be a parabolic subgroup. Let w ∈ W ′ . Then RedT ′ (w) = RedT (w), where T ′ = W ′ ∩ T is the set of reflections of W ′ . In this context it seems natural to us to conjecture the following: Conjecture 2.4. Let w ∈ W . Assume that there is (t1 , t2 , . . . , tk ) ∈ RedT (w) such that W ′ := ht1 , t2 , . . . , tk i is parabolic. Then for any (q1 , q2 , . . . , qk ) ∈ RedT (w) we have W ′ = hq1 , q2 , . . . , qk i. Note that Theorem 2.5 ([3], [2]). Conjecture 2.4 holds in the following cases: (1) When W is finite, (2) When w is a parabolic Coxeter element in W , that is, if it exists S ′ = {r1 , r2 , . . . , rn } ⊆ T and m ≤ n such that w = r1 r2 · · · rm and S ′ is a simple system for W . Proof. Conjecture 2.4 for finite W is an immediate consequence of [3, Theorem 1.1]: there it is shown that an element satisfies Condition 1.1 if and only if the Hurwitz action (see Section 3.3 for the definition) is transitive on RedT (w); but this action leaves the subgroup generated by the reflections in a T -reduced expression invariant. It also holds for parabolic Coxeter elements since in that case the Hurwitz action is also transitive on RedT (w) by [2, Theorem 1.3].  Lemma 2.6. Let w ∈ W . Assume that (t1 , t2 , · · · , tk ), (q1 , q2 , . . . , qk ) ∈ RedT (w) are such that both W ′ := ht1 , t2 , . . . , tk i and W ′′ := hq1 , q2 , . . . , qk i are parabolic. Then W ′ = W ′′ . Proof. Since W ′ is parabolic, by Theorem 2.3 we have qi ∈ W ′ for all i, hence W ′′ ⊆ W ′ . Reversing the roles of W ′ and W ′′ we get W ′ ⊆ W ′′ .  The lemma above allows the following definition Definition 2.7. Let w ∈ W satisfying Condition 1.1. 
We denote by P (w) the parabolic subgroup of W generated by any T -reduced decomposition of w generating a parabolic subgroup. This is well-defined by Lemma 2.6. It follows immediately from Theorem 2.3 that any parabolic subgroup P containing w must contain P (w). We call P (w) the parabolic closure of w. Lemma 2.8. Let W ′ ⊆ W be a finitely generated reflection subgroup. Then there is a unique (up to the order of the factors) decomposition W ′ = W1 × W2 × · · · × Wk where W1 , W2 , . . . , Wk are irreducible reflection subgroups of Ṡk W ′ and Ref(W ) = i=1 Ref(Wi ). Proof. Let S ′ be the canonical set of Coxeter of W ′ (see [10]). S generators ′ −1 ′ = W ′ ∩ T . If By [10, Corollary 3.11], S is finite and w∈W ′ wS w Γ1 , Γ2 , . . . , Γk are the irreducible components of the Coxeter graph of (W ′ , S ′ ), then W ′ = W1 × W2 × · · · × Wk (where Wi is generated by the nodes of Γi ) and each Wi is an irreducible reflection subgroup. Now if there is another decomposition W = W1′ × W2′ × · · · × Wℓ′ , then all the reflections in Wi′ must be included in Wj for some j (otherwise irreducibility is not satisfied) and vice-versa, implying uniqueness of the decomposition.  ON CYCLE DECOMPOSITIONS IN COXETER GROUPS 5 3. Generalized cycle decompositions 3.1. Proof of Proposition 1.2. Proof of Proposition 1.2. Let (t1 , t2 , . . . , tk ) ∈ RedT (w) such that P := ht1 , t2 , . . . , tk i is parabolic. By Lemma 2.6 we have P = P (w). It follows from the definition of a parabolic subgroup that there is a (unique up to the order of the factors) factorization P = W1 × W2 × · · · × Wm where the Wi ’s are irreducible parabolic subgroups. Moreover we have k = P Ṡm rank(P ) = m i=1 Ref(Wi ). It follows that for i=1 rank(Wi ) and Ref(P ) = each j = 1, . . . , k, there exists j ′ ∈ {1, . . . , m} such that tj ∈ Wj ′ . This implies that we can transform the T -reduced expression (t1 , t2 , . . . , tk ) by a sequence of commutations of adjacent letters into a T -reduced expression (q1 , . . . , qℓ1 , qℓ1 +1 , . . . , qℓ2 , . . . , qℓm−1 , . . . , qk ) of w where {q1 , . . . , qℓ1 −1 } ⊆ W1 , . . . , {qℓm−1 , . . . , qk } ⊆ Wm . Note that since the set {t1 , . . . , tk } generates P we must have hq1 , . . . , qℓ1 −1 i = W1 , . . . , hqℓm−1 , . . . , qk i = Wm . Setting ℓ0 = 1 and ℓm = k + 1 we define xi := qℓi−1 qℓi−1 +1 · · · qℓi −1 for all i = 1, . . . , m. Note that since the Wi ’s are irreducible and parabolic we get (3). As (q1 , . . . qk ) ∈ RedT (w) is given by concatenating T -reduced expressions of the xi ’s we have ℓT (w) = ℓT (x1 )+ ℓT (x2 )+ · · · + ℓT (xm ) which shows (2). Since xi ∈ Wi for all i and P = W1 × W2 × · · · × Wm we have xi xj = xj xi for all i, j = 1, . . . , m, which shows (1). It remains to show that the decomposition is unique up to the order of the factors. Hence assume that w = y1 y2 · · · ym′ is another decomposition of w satisfying the three conditions of Proposition 1.2. By the third condition each of the yi ’s has a reduced expression generating an irreducible parabolic subgroup Wi′ = P (yi ) of W and concatenating them yields a reduced expression generating P (w). By uniqueness of the decomposition P (w) = W1 × · · · × Wm we must have m = m′ and there must exist a permutation π ∈ Sm such that Wi′ = Wπ(i) for all i. Up to reordering, we can therefore assume that Wi′ = Wi for all i. Since x = x1 x2 · · · xm = y1 y2 · · · ym it follows by uniqueness of the decomposition of w as element of the direct product W1 × W2 × · · · × Wm that xi = yi for all i = 1, . . . , m.  Remark 3.1. 
In type An we recover the classical cycle decomposition. In that case each element w satisfies Condition 1.1. The second condition in Proposition 1.2 follows from [7, Lemma 2.2]. 3.2. Finite Coxeter groups and their parabolic subgroups. This section is devoted to recalling well-known facts on parabolic subgroups of finite Coxeter groups and their connexion with finite root systems. Most of what we present here is covered in [6], though often in different notations. From now on we always assume (W, S) to be finite. Let (W, S) be finite, of rank n. Let Φ be a root system for (W, S) in an n-dimensional Euclidean space V with inner product (·, ·). Let Φ+ ⊆ Φ be 6 THOMAS GOBET a positive system. Recall that there is a one-to-one correspondence between T and Φ+ which we denote by t 7→ αt . Let w ∈ W and let V w := {v ∈ V | w(v) = v} be the subspace of V consisting of the fixed points under the action of w. The following well-known result is due to Carter [9, Lemma 3] Lemma 3.2 (Carter’s Lemma). Let αt1 , αt2 , . . . , αtk ∈ Φ+ . Then {αt1 , αt2 , . . . , αtk } is linearly independent if and only if ℓT (t1 t2 · · · tk ) = k. In that case one has dim V t1 t2 ···tk = n − k and W ′ := ht1 , t2 , . . . , tk i is a reflection subgroup of rank k of W . The following will be useful (see [4, Lemma 1.2.1 (i)]) Lemma 3.3. Let x ∈ W , t ∈ T . Then t ≤T x ⇔ V x ⊆ V t . Given w ∈ W , there is an orthogonal decomposition V = V w ⊕ Mov(w) with respect to (·, ·), where Mov(w) := im(w−1) (see for instance [1, Section 2.4]). Recall that for finite Coxeter groups, Definition 2.1 is equivalent to the classical definition, that is, W ′ ⊆ W is parabolic if and only if W ′ is a conjugate of a standard parabolic subgroup WI of W , where for I ⊆ S we write WI := hs | s ∈ Ii. There is the following result, characterizing parabolic subgroups of finite Coxeter groups as centralizers of subspaces of V (which is a Corollary of [6, 3.3, Proposition 1]). Proposition 3.4. Let P ⊆ W be a parabolic subgroup. Then P = Fix(E) := {x ∈ W | x(v) = v, ∀v ∈ E} for some subspace E ⊆ V . Conversely, given any subspace E ⊆ V , the subgroup Fix(E) is a parabolic subgroup of W . In fact, the subspaces E in Proposition 3.4 can be chosen to be intersections of reflection hyperplanes. Given linearly independent reflection hyperplanes Vt1 , . . . , Vtk where ti ∈ T, i = 1, . . . , k and setting w := t1 t2 · · · tk , T it follows from Carter’s Lemma (Lemma 3.2) that V w = ki=1 Vti , and dim(V w ) = n − ℓT (w) (see also [4, Lemma 1.2.1 (ii)]). It follows from the discussion above that P (w) = Fix(V w ) and rank(P (w)) = ℓT (w). Every parabolic subgroup is the parabolic closure of some (in general not uniquely determined) element. 3.3. Hurwitz action and proof of Theorem 1.3. We now give a few properties of elements of finite Coxeter groups satisfying Condition 1.1 before proving Theorem 1.3. These elements were introduced in [3] and called quasi-Coxeter elements. Recall that for each w ∈ W with ℓT (w) = k, there is an action of the kstrand Artin braid group Bk on RedT (w) called the Hurwitz action, defined as follows. The Artin generator σi ∈ Bk acts by σi · (t1 , . . . , ti−1 , ti , ti+1 , ti+2 , . . . , tk ) = (t1 , . . . , ti−1 , ti ti+1 ti , ti , ti+2 , . . . , tk ). In [3, Theorem 1.1], it is shown that this action is transitive if and only if w satisfies Condition 1.1. 
Since the reflection subgroup generated by the reflections from a reduced expression is invariant under a Hurwitz move ON CYCLE DECOMPOSITIONS IN COXETER GROUPS 7 as above, this means that either every T -reduced expression generates a parabolic subgroup (which by Lemma 2.6 is nothing but P (w)) or no reduced expression does. With the following Proposition (which requires the above mentioned result from [3] for which we only have a case-by-case proof) it will be easy to derive a proof of Theorem 1.3 Proposition 3.5. Let (W, S) be a finite irreducible Coxeter system. Let x be a quasi-Coxeter element in W . Then there is no nontrivial decomposition x = uv = vu such that u, v ∈ W and ℓT (x) = ℓT (u) + ℓT (v). Proof. Assume that there is such a decomposition x = uv. For y ∈ {u, v} define Wy := ht ∈ T | t ≤T yi. We claim that in that case we have W = Wu × Wv and ˙ Ref(W ) = Ref(Wu )∪Ref(W v ), contradicting the irreducibility of W . Firstly we show that Mov(v) ⊆ V u . We have V u ∩ V v ⊆ V uv , hence Mov(uv) ⊆ Mov(u) + Mov(v). Since moreover ℓT (x) = ℓT (uv) = ℓT (u) + ℓT (v) by Carter’s Lemma we have Mov(u) ∩ Mov(v) = 0. Let a ∈ Mov(v). Then since uv = vu we have u(a) ∈ Mov(v), hence u(a) − a ∈ Mov(v) ∩ Mov(u) = 0. Hence a ∈ V u , which shows the claimed inclusion. Now if t ∈ T is such that t ≤T u, then by Lemma 3.3 we have that Mov(v) ⊆ V u ⊆ V t which implies that αt ∈ V v . Using 3.3 again, we deduce that t commutes with any reflection t′ ∈ T such that t′ ≤T v. Concatenating a T -reduced expression t1 t2 · · · tk of u with a T -reduced expression tk+1 tk+2 · · · tn of v we get a T -reduced expression t1 t2 · · · tn of x. By the discussion above we have tt′ = t′ t for all t ∈ {t1 , t2 , . . . , tk }, t′ ∈ {tk+1 , tk+2 , . . . , tn } and any reflection occurring in a T -reduced expression in the orbit Bn · (t1 , t2 , . . . , tn ) lies either in Wu or in Wv . Since by [3, Theorem 1.1] there is a unique Hurwitz orbit on RedT (x), the set {t1 , t2 , . . . , tn } generates W . Let t ∈ T . Since a quasi-Coxeter element x has no nontrivial fixed point in V we have that t ≤T x, hence t occurs in a reduced expression of RedT (x). But there is only one orbit Bn ·(t1 , t2 , . . . , tk ) of Bn on RedT (w). It follows that t ∈ Wu or in t ∈ Wv . The claimed direct product decomposition follows.  Proof of Theorem 1.3. The existence of the claimed decomposition w = x1 x2 · · · xm is given by Propositions 1.2 and 3.5. It remains to show uniqueness. Assume that w = y1 y2 · · · yℓ is another such decomposition. For each i, choose a reduced expression ti1 · · · tini of yi . Thanks to (2), concatenating these reduced expressions yields a reduced expression of w. For a fixed i, all the tji are in P (w) by Theorem 2.3. It follows from Lemma 2.6 that tji lies in one of the parabolic factor W (i, j) of P (w), and indecomposability of yi forces W (i, j) to depend only on i. Therefore we set W (i) := W (i, j) and note that yi ∈ W (i). But since the irreducible parabolic factors of P (w) are 8 THOMAS GOBET precisely the P (xj )’s, for each W (i) there exists j such that W (i) = P (xj ). It follows from the decomposition (3.6) P (w) = P (x1 ) × P (x2 ) × . . . P (xm ) and the uniqueness of the decomposition of w in the direct product (3.6) that each of the xj can be written as a product of yi ’s: more precisely, xj is the product of all those yi ’s such that W (i) = P (xj ) (and for each j, there must be at least one yi such that W (i) = P (xj ) otherwise w would have two distinct decompositions in (3.6)). 
But by indecomposability of xj , there can be at most one such yi , which concludes.  3.4. Consequences. We conclude with some facts about elements for which Condition 1.1 fails. For such an element w ∈ W , by [3, Theorem 1.1] the Hurwitz operation on RedT (w) is not transitive. Denote by H(w) the set of orbits. For O ∈ H(w), denote by hOi the reflection subgroup generated by any T -reduced expression in O. This is well-defined since the Hurwitz action leaves the subgroup generated by a T -reduced expression unchanged. Hence we get the following: Proposition 3.7. Let (W, S) be finite. Let w ∈ W . For each O ∈ H(w), the element w has a unique generalized cycle decomposition in the sense of Theorem 1.3 in the Coxeter group hOi. Proof. The reflection subgroup hOi is a Coxeter group. Denote by S ′ its set of canonical Coxeter generators. We have (see [10, Corollary 3.11 (ii)]) that [ T ′ := hOi ∩ T = uS ′ u−1 . u∈hOi Hence w is a quasi-Coxeter element in (hOi, S ′ ). Applying Theorem 1.3 to w viewed as element of hOi we get the claim.  Lemma 3.8. Let w ∈ W . If O1 , O2 ∈ H(w) with O1 6= O2 , then hO1 i = 6 hO2 i. Proof. Let (t1 , t2 , . . . , tk ) ∈ O1 . Then ht1 , t2 , . . . , tk i = hO1 i. Let S ′ be the set of canonical Coxeter generators of the reflection subgroup hO1 i. The Hurwitz operation is transitive on RedT1 (w) where T1 = T ∩ hO1 i (see the proof of Proposition 3.7). If we have hO1 i = hO2 i, then in particular for (q1 , q2 , . . . , qk ) ∈ O2 we have qi ∈ T1 for all i since qi ∈ T ∩ hO2 i = T1 . This implies that (t1 , t2 , . . . , tk ) and (q1 , q2 , . . . , qk ) lie in the same Hurwitz orbit since the Hurwitz operation is transitive on RedT1 (w), a contradiction.  Remark 3.9. Proposition 3.7 tells us that any w ∈ W has a unique cycle decomposition in any reflection subgroup generated by one of its T -reduced expressions. However distinct such reflection subgroups, equivalently (by Lemma 3.8) distinct Hurwitz orbits in H(w) can yield the same decomposition of w: more precisely if x1 , x2 , . . . , xm is a cycle decomposition in hO1 i ON CYCLE DECOMPOSITIONS IN COXETER GROUPS 9 and y1 , y2 , . . . , yℓ is a cycle decomposition in hO2 i where O1 and O2 are distinct elements in H(w), then it is possible that ℓ = m and {x1 , x2 , . . . , xm } = {y1 , y2 , . . . , ym }. As an example consider W of type G2 = I2 (6) with S = {s, t}. Then w = stst is a Coxeter element in both irreducible reflection subgroups hs, tsti and ht, stsi of type A2 hence its cycle decomposition is the trivial decomposition x1 = stst = y1 in both subgroups. However we always have {P (x1 )1 , P (x2 )1 , . . . , P (xm )1 } = 6 {P (y1 )2 , P (y2 )2 , . . . , P (ym )2 }, where P (xi )1 (resp. P (yi )2 ) is the parabolic closure of xi (resp. yi ) in hO1 i (resp. hO2 i). Indeed, the reflections in these subgroups have to generate the parent group hO1 i (resp. hO2 i) in which the cycle decomposition is considered and they are distinct by Lemma 3.8. In the above example with W of type G2 we have P (x1 )1 = hs, tsti 6= P (y1 )2 = ht, stsi. Also note the following Lemma 3.10. Let w ∈ W with ℓT (w) = k. Let W ′ ⊆ W be a reflection subgroup of W containing w with set of reflections T ′ = W ′ ∩ T . Then ℓT (w) = ℓT ′ (w). Proof. Since T ′ ⊆ T we must have ℓT (w) ≤ ℓT ′ (w). Assume that ℓT ′ (w) = k′ > k. Let (t1 , t2 , . . . , tk′ ) ∈ RedT ′ (w). Let i be minimal such that ℓT (t1 t2 · · · ti ) 6= ℓT ′ (t1 t2 · · · ti ). Then ℓT (t1 t2 · · · ti ) = i−2, ℓT ′ (t1 t2 · · · ti ) = i. Let u = t1 t2 · · · ti−1 . 
By minimality of i we have ℓT (u) = ℓT ′ (u) = i − 1. In particular, we have ti ≤T u. The parabolic closure P (u) of u therefore has rank i − 1 and contains t1 , . . . , ti−1 but also ti . But since (t1 , . . . , tk′ ) is T ′ -reduced, the reflection subgroup W ′′ = ht1 , t2 , . . . , ti i has rank i (as a reflection subgroup of the Coxeter group W ′ , for instance by Lemma 3.2). But the reflections of W ′′ as a reflection subgroup of W ′ or W are the same, hence W ′′ also has rank i as a reflection subgroup of W . Therefore W ′′ cannot be included in P (u) which has smaller rank, a contradiction.  Hence together with Lemma 3.8 we get Corollary 3.11. For all w ∈ W , there is a one-to-one correspondence between H(w) and reflection subgroups of W in which w is a quasi-Coxeter element, given by O 7→ hOi. Example 3.12. Let W be of type D4 with S = {s0 , s1 , s2 , s3 } where s2 commutes with no other simple reflection. Then the element w = s1 (s2 s1 s2 )(s2 s0 s2 )s3 is a quasi-Coxeter element in W (see [3, Example 2.4]), but it is not a Coxeter element (in the sense that it has no T -reduced expression yielding a simple system for W ). It follows from Proposition 3.5 that its cycle decomposition is the trivial decomposition x1 = w. Now W can be viewed as a reflection f of type B4 . In that case W is not parabolic subgroup of a Coxeter group W f , hence w has no reduced expression generating a parabolic subgroup of in W f : indeed, if there was such a decomposition, then the Hurwitz operation W 10 THOMAS GOBET f would be transitive by [3, on the set of T -reduced expressions of w in W Theorem 1.1], hence each reduced expression would generate a parabolic f . The element w has a subgroup, in particular W would be parabolic in W unique generalized cycle decomposition in W which is the one we gave above (since W is irreducible), but w can also be realized as a Coxeter element in a (non irreducible) reflection subgroup W ′ of type B2 × B2 as follows: in the f of type B4 (see [5, Section signed permutation model for the Weyl group W 8.1]), we have w = (1, −2, −1, 2)(3, 4, −3, −4), which is a product of two Coxeter elements of the two type B2 reflection subgroups consisting of those signed permutations supported on {±1, ±2} and {±3, ±4} respectively. In W ′ the unique cycle decomposition of w has two factors x1 = (1, −2, −1, 2), x2 = (3, 4, −3, −4). Note that neither x1 nor x2 lies in W . Acknowledgments I thank Barbara Baumeister and Vivien Ripoll for helpful discussions. References [1] D. Armstrong, Generalized noncrossing partitions and combinatorics of Coxeter groups, Mem. Amer. Math. Soc. 202 (2009), No. 949. [2] B. Baumeister, M. Dyer, C. Stump, P. Wegener, A note on the transitive Hurwitz action on decompositions of parabolic Coxeter elements, Proc. Amer. Math. Soc. Ser. B 1 (2014), 149-154. [3] B. Baumeister, T. Gobet, K. Roberts, P. Wegener, On the Hurwitz action in finite Coxeter groups, Journal of Group Theory (2016). [4] D. Bessis, The dual braid monoid, Ann. Sci. École Normale Supérieure 36 (2003), 647-683. [5] A. Björner and F. Brenti, Combinatorics of Coxeter groups, GTM 231, Springer, 2005. [6] N. Bourbaki, Groupes et algèbres de Lie, chapitres 4,5 et 6, Hermann (1968). [7] T. Brady, A partial order on the symmetric group and new K(π, 1)’s for the braid groups, Adv. Math. 161 (2001), no. 1, 20-40. [8] T. Brady and C. Watt, Non-crossing partition lattices in finite real reflection groups, Trans. Amer. Math. Soc. 360 (2008), no. 4, 1983-2005. [9] R.W. 
Carter, Conjugacy Classes in the Weyl group, Compositio Math. 25 (1972), 1-59. [10] M.J. Dyer, Reflection subgroups of Coxeter systems, J. of Algebra 135 (1990), Issue 1, 57-73. [11] J. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics 29, Cambridge University Press (1990). [12] C. Ingalls, H. Thomas, Noncrossing partitions and representations of quivers, Compos. Math. 145 (2009), no. 6, 1533-1562. [13] N. Reading, Clusters, Coxeter-sortable elements and noncrossing partitions, Trans. Amer. Math. Soc. 359 (2007), No. 12, 5931-5958. Institut Elie Cartan de Lorraine, Université de Lorraine, B.P. 70239, F54506 Vandoeuvre-lès-Nancy Cedex.
arXiv:1611.00630v2 [] 27 Jun 2017 The accumulated persistence function, a new useful functional summary statistic for topological data analysis, with a view to brain artery trees and spatial point process applications Christophe A.N. Biscio Department of Mathematical Sciences, Aalborg University, Denmark and Jesper Møller Department of Mathematical Sciences, Aalborg University, Denmark June 28, 2017 Abstract We start with a simple introduction to topological data analysis where the most popular tool is called a persistent diagram. Briefly, a persistent diagram is a multiset of points in the plane describing the persistence of topological features of a compact set when a scale parameter varies. Since statistical methods are difficult to apply directly on persistence diagrams, various alternative functional summary statistics have been suggested, but either they do not contain the full information of the persistence diagram or they are two-dimensional functions. We suggest a new functional summary statistic that is one-dimensional and hence easier to handle, and which under mild conditions contains the full information of the persistence diagram. Its usefulness is illustrated in statistical settings concerned with point clouds and brain artery trees. The appendix includes additional methods and examples, together with technical details. The R-code used for all examples is available at http://people.math.aau.dk/~christophe/Rcode.zip. Keywords: clustering, confidence region, global rank envelope, functional boxplot, persistent homology, two-sample test. 1 1 Introduction Statistical methods that make use of algebraic topological ideas to summarize and visualize complex data are called topological data analysis (TDA). In particular persistent homology and a method called the persistence diagram are used to measure the persistence of topological features. As we expect many readers may not be familiar with these concepts, Section 1.1 discusses two examples without going into technical details though a few times it is unavoidable to refer to the terminology used in persistent homology. Section 1.2 discusses the use of the persistence diagram and related summary statistics and motivates why in Section 1.3 a new functional summary statistic called the accumulative persistence function (APF) is introduced. The remainder of the paper demonstrates the use of the APF in various statistical settings concerned with point clouds and brain artery trees. 1.1 Examples of TDA The mathematics underlying TDA uses technical definitions and results from persistent homology, see Fasy et al. (2014) and the references therein. This theory will not be needed for the present paper. Instead we provide an understanding through examples of the notion of persistence of k-dimensional topological features for a sequence of compact subsets Ct of the d-dimensional Euclidean space R d , where t ≥ 0, k = 0, 1, . . . , d − 1, and either d = 2 (Section 1.1.1) or d = 3 (Section 1.1.2). Recalling that a set A ⊆ R d is path-connected if any two points in A are connected by a curve in A, a 0-dimensional topological feature of a compact set C ⊂ R d is a maximal path-connected subset of C, also called a connected component of C. The meaning of a 1-dimensional topological feature is simply understood when d = 2, and we appeal to this in a remark at the end of Section 1.1.1 when d = 3. 
Figure 1: The four first panels (t = 0, 0.5, 0.62, 0.75) show a simple example of spherically growing circles centred at (-1,-1), (1,-1), and (0,1) and all with radii 0.5 when time is 0. The fifth panel shows the persistence diagram (birth time against death time) for the connected components (k = 0) and the loops (k = 1). The final panel shows the corresponding accumulated persistence functions APF0 and APF1 as functions of m.
1.1.1 A toy example
Let C ⊂ R2 be the union of the three circles depicted in the top-left panel of Figure 1. The three circles are the 0-dimensional topological features (the connected components) of C, as any curve that goes from a circle to another will be outside C. The complement R2 \ C has four connected components, one of which is unbounded, whilst the three others are the 1-dimensional topological features of C, also called the loops of C (the boundary of each bounded connected component is a closed curve with no crossings; in this example the closed curve is just a circle). For t ≥ 0, let Ct be the subset of points in R2 within distance t from C. Thinking of t as time, Ct results when each point on C grows as a disc with constant speed one. The algebraic topology (technically speaking the Betti numbers) changes exactly at the times t = 0, 0.5, 0.62, 0.75, see the first four panels of Figure 1: For each topological dimension k = 0, 1, let t_i^{(k)} denote the time of the ith change. First, C0 = C has three connected components and three loops as given above; we say that they are born at time t_1^{(0)} = t_1^{(1)} = 0 (imagining there was nothing before time 0). Second, the loops disappear and two connected components merge into one connected component; we say that the loops and one of the connected components die at time t_2^{(0)} = t_2^{(1)} = 0.5; since the two merging connected components were born at the same time, it is decided uniformly at random which one should die respectively survive; the one which survives then represents the new merged connected component. Third, at time t_3^{(0)} = t_3^{(1)} = 0.62, a new loop is born and the two connected components merge into one, which is represented by the oldest (first born) connected component, whilst the other connected component (the one which was retained when the two merged at time t_2^{(0)} = 0.5) dies; the remaining connected component "lives forever" after time 0.62 and it will be discarded in our analysis. Finally, at time t_4^{(1)} = 0.75, the loop dies. Hence, for each k = 0, 1, there is a multiset of points specifying the appearance and disappearance of each k-dimensional topological feature as t grows. A scatter plot of these points is called a persistence diagram, see Figure 1 (bottom-middle panel): For k = 0 (the connected components), the points are (0, 0.5) and (0, 0.62) (as (0, ∞) is discarded from the diagram) with multiplicities 1 and 1, respectively; and for k = 1 (the loops), the points are (0, 0.5) and (0.62, 0.75) with multiplicities 3 and 1, respectively. The term "persistence" refers to the fact that distant connected components and large loops are present for a long time; here this of course just means that the three circles/connected components of C and their loops persist for long, whilst the last appearing loop has a short lifetime and hence is considered as "noise". Usually in practice C is not known but a finite subset of points { x1 , . . . , x N } has been collected as a sample on C, possibly with noise.
Then we redefine Ct as the union of closed discs of radius t and with centres given by the point cloud. Hence the connected components of C0 are just the points x1 , . . . , x N , and C0 has no loops. For t > 0, it is in general difficult to directly compute the connected components and loops of Ct , but a graph in R m (with m ≥ 2) can be constructed so that its connected components correspond to those of Ct and moreover the triangles of the graph may be filled or not in a way so that the loops of the obtained triangulation correspond to those of Ct . 4 Remark: Such a construction can also be created in the case where Ct is the union of d-dimensional closed balls of radius t and with centres given by a finite point pattern { x1 , . . . , x N } ⊂ R d . The construction is a so-called simplicial complex such as the Čechcomplex, where m may be much larger than d, or the Delaunay-complex (or alphacomplex), where m = d, and a technical result (the Nerve Theorem) establishes that it is possible to identify the topological features of Ct by the Čech or Delaunay-complex, see e.g. Edelsbrunner and Harer (2010). It is unnecessary for this paper to understand the precise definition of these notions, but as d = 2 or d = 3 is small in our examples, it is computationally convenient to use the Delaunay-complex. When d = 3, we may still think of a 1-dimensional topological feature as a loop, i.e. a closed curve with no crossings; again the simplicial complex is used for the "book keeping" when determining the persistence of a loop. For example, a 2-dimensional sphere has no loops, and a torus in R3 has two. Finally, when d ≥ 3, a k-dimensional topological feature is a k-dimensional manifold (a closed surface if k = 2) that cannot "be filled in", but for this paper we omit the precise definition since it is technical and not needed. 1.1.2 Persistent homology for brain artery trees The left panel of Figure 2 shows an example of one of the 98 brain artery trees analysed in Bendich et al. (2016). The data for each tree specifies a graph in R3 consisting of a dense cloud of about 105 vertices (points) together with the edges (line segments) connecting the neighbouring vertices; further details about the data are given in Section 2.2. As in Bendich et al. (2016), for each tree we only consider the k-dimensional topological features when k = 0 or k = 1, using different types of data and sets Ct as described below. Below we consider the tree in Figure 2 and let B ⊂ R3 denote the union of its edges. Following Bendich et al. (2016), if k = 0, let Ct = {( x, y, z) ∈ B : z ≤ t} be the sub-level set of the height function for the tree at level t ≥ 0 (assuming Ct is empty for t < 0). Thus the 0-dimensional topological features at "time/level" t are the connected components of 5 25 20 15 5 10 Death time 80 60 40 Death time 20 0 0 0 20 40 60 80 Birth time 0 5 10 15 20 25 Birth time Figure 2: A brain artery tree with the "water level" indicated (left panel) and the persistence diagrams of connected components (middle panel) and loops (right pane). Ct . As illustrated in the left panel of Figure 2, instead of time we may think of t as "water level": As the water level increases, connected components of the part of B surrounded by water (the part in blue) may be born or die; we refer to this as sub-level persistence. As in Section 1.1.1, we represent the births and deaths of the connected component in a persistence diagram which is shown in Figure 2 (middle panel). 
The persistence of the connected components for all brain artery trees will be studied in several examples later on. As in Bendich et al. (2016), if k = 1, we let B be represented by a point pattern C of 3000 points subsampled from B, and redefine Ct to be the union of balls of radii t ≥ 0 and centres given by C (as considered in the remark at the end of Section 1.1.1). The loops of Ct are then determined by the corresponding Delaunay-complex. The right panel of Figure 2 shows the corresponding persistence diagram. The persistence of the loops for all trees will be studied in the examples to follow. 1.2 Further background and objective The persistence diagram is a popular graphical representation of the persistence of the topological features of a sequence of compact sets Ct ⊂ R d , t ≥ 0. As exemplified above it consists for each topological dimension k = 0, . . . , d − 1 of a multiset PDk of points (bi , di ) with multiplicities ci , where bi and di is a pair of birth-death times for a k-dimensional topological feature obtained as time t grows. 6 In the majority of literature on TDA, including the analysis in Bendich et al. (2016) of brain artery trees, long lifetimes are of main interest whereas short lifetimes are considered as topological noise. Short lifetimes are of interest in the study of complex structures such as branch polymers and fractals, see MacPherson and Schweinhart (2012); and for brain artery trees Bendich et al. (2016) noticed in one case that "not-particularlyhigh persistence have the most distinguishing power in our specific application". In our examples we demonstrate that short lifetimes will also be of key interest in many situations, including when analysing the brain artery trees dataset from Bendich et al. (2016). Chazal et al. (2013) and Chen et al. (2015) note that it is difficult to apply statistical methodology to persistent diagrams. Alternative functional summary statistics have been suggested: Bubenik (2015) introduces a sequence of one-dimensional functions called the persistent landscape, where his first function is denoted λ1 and is considered to be of main interest, since it provides a measure of the dominant topological features, i.e. the longest lifetimes; therefore we call λ1 the dominant function. Chazal et al. (2013) introduce the silhouette which is a weighted average of the functions of the persistent landscape, where the weights control whether the focus is on topological features with long or short lifetimes. Moreover, Chen et al. (2015) consider a kernel estimate of the intensity function for the persistent diagram viewed as a point pattern. The dominant function, the silhouette, and the intensity estimate are one-dimensional functions and hence easier to handle than the persistence diagram, however, they provide selected and not full information about the persistence diagram. In Section 1.3, we introduce another one-dimensional functional summary statistic called the accumulative persistence function and discuss its advantages and how it differs from the existing functional summary statistics. 7 1.3 The accumulated persistence function For simplicity and specificity, for each topological dimension k = 0, 1, . . . , d − 1, we always assume that the persistence diagram PDk = {(b1 , d1 , c1 ), . . . , (bn , dn , cn )} is such that n < ∞ and 0 ≤ bi < di < ∞ for i = 1, . . . , n. This assumption will be satisfied in our examples (at least with probability one). 
Often in the TDA literature, PDk is transformed to the rotated and rescaled persistence diagram (RRPD) given by RRPDk = {(m1, l1, c1), . . . , (mn, ln, cn)}, where mi = (bi + di)/2 is the meanage and li = di − bi is the lifetime. This transformation is useful when defining our accumulated persistence function (APF) by
APFk(m) = Σ_{i=1}^n ci li 1(mi ≤ m), m ≥ 0, (1)
where 1(·) is the indicator function and we suppress in the notation that APFk is a function of RRPDk. The remainder of this section comments on this definition.
Formally speaking, when RRPDk is considered to be random, it is viewed as a finite point process with multiplicities, see e.g. Daley and Vere-Jones (2003). In what follows it will always be clear from the context whether PDk and RRPDk are considered as being random or observed, and hence whether APFk is a deterministic or random function. In the latter case, because APFk(m) is an accumulative function, its random fluctuations typically increase as m increases.
Depending on the application, the jumps and/or the shape of APFk may be of interest as demonstrated later in our examples. A large jump of APFk corresponds to a large lifetime (long persistence). In the simple example shown in Figure 1, both jumps of APF0 are large and indicate the three connected components (circles), whilst only the first jump of APF1 is large and indicates the three original loops. For the more complicated examples considered in the following it may be hard to recognize the individual jumps. In particular, as in the remark at the end of Section 1.1.1, suppose Ct is the union of d-dimensional balls of radius t and with centres given by a finite point pattern {x1, . . . , xN} ⊂ Rd. Roughly speaking we may then have the following features, as illustrated later in Example 1. For small meanages m, jumps of APF0(m) correspond to balls that merge together for small values of t. Thus, if the point pattern is aggregated (e.g. because of clustering), we expect that APF0(m) has jumps and is hence large for small meanages m, whilst if the point pattern is regular (typically because of inhibition between the points), we expect the jumps of APF0(m) to happen and to be large for modest values of m (as illustrated later in the middle panel of Figure 3 considering curves for the Matérn cluster process and the determinantal point process). For large meanages, jumps of APF0(m) are most likely to happen in the case of aggregation. Accordingly, the shape of APF0 can be very different for these two cases (as illustrated in the first panel of Figure 3). Similar considerations lead us to expect different shapes of APF1 for different types of point patterns; we expect that APF1(m) is large respectively small for the case of aggregation respectively regularity when m is small, and the opposite happens when m is large (as illustrated in the last panel of Figure 3).
Clearly, RRPDk is in one-to-one correspondence with PDk. In turn, if all ci = 1 and the mi are pairwise distinct, then there is a one-to-one correspondence between RRPDk and its corresponding APFk. For k = 0, this one-to-one correspondence would easily be lost if we had used bi in place of mi in (1). We need to be careful not to overstate this possible one-to-one correspondence. For example, imagine we want to compare two APFs with respect to the Lq-norm (1 ≤ q ≤ ∞) and let PDk^{(1)} and PDk^{(2)} be the underlying persistence diagrams.
However, when points (b, d) close to the diagonal are considered as topological noise (see Section 1.1.1), usually the so-called bot(1) (2) tleneck distance W∞ (PDk , PDk ) is used, see e.g. Fasy et al. (2014). Briefly, for ǫ > 0, √ let N = {(b, d) : b ≤ d, l ≤ 2ǫ} be the set of points at distance 2ǫ of the diagonal in the persistence diagram, and let S(b, d) = {( x, y) : | x − b| ≤ ǫ, |y − d| ≤ ǫ} be the square with center (b, d), sides parallel to the b- and d-axes, and of side length 2ǫ. (1) (2) (2) Then W∞ (PDk , PDk ) ≤ ǫ if PDk (bi , di ) a point of (1) (1) PDk has exactly one point in each square S(bi , di ), with (repeating this condition ci times). However, small values of (2) W∞ (PDk , PDk ) does not correspond to closeness of the two corresponding APFs with respect to Lq -norm. Note that the dominant function, the silhouette, and the intensity estimate (see Sec- 9 tion 1.2) are in general not in a one-to-one correspondence with RRPDk . Like these functions, APFk is a one-dimensional function, and so it is easier to handle than the sequence of functions for the persistent landscape in Bubenik (2015) and the intensity estimate in Chen et al. (2015) — e.g. confidence regions become easier to plot. Contrary to the dominant function and the silhouette, the APF provides information about topological features without distinguishing between long and short lifetimes. 1.4 Outline Our paper discusses various methods based on APFs in different contexts and illustrated by simulation studies related to spatial point process applications and by reanalysing the brain artery trees dataset previously analysed in Bendich et al. (2016). Section 2 specifies the setting for these examples. Sections 3, 4, and 5 consider the case of a single APF, a sample of APFs, and two samples of APFs, respectively. Further examples and details appear in Appendix A-F. 2 Datasets 2.1 Simulated data In our simulation studies we consider a planar point cloud, i.e. a finite point pattern { x1 , . . . , x N } ⊂ R2 , and study as at the end of Section 1.1.1 how the topological features of Ct , the union of closed discs of radii t and centred at x1 , . . . , x N , change as t grows. Here { x1 , . . . , x N } will be a realisation of a point process X ⊂ R2 , where the count N is finite. Thus PDk and RRPDk can be viewed as finite planar point processes (with multiplicities) and APFk as a random function. Note that N may be random, and conditional on N, the points in X are not necessarily independent and identically distributed (IID). This is a common situation in spatial statistics, e.g. if the focus is on the point process X 10 and the purpose is to assess the goodness of fit for a specified point process model of X when { x1 , . . . , x N } is observed. 2.2 Brain artery trees dataset The dataset in Bendich et al. (2016) comes from 98 brain artery trees which can be included within a cube of side length at most 206 mm; one tree is excluded "as the java/matlab function crashed" (e-mail correspondence with Sean Skwerer). They want to capture how the arteries bend through space and to detect age and gender effects. For k = 0, sub-level persistence of the connected components of each tree represented by a union of line segments is considered, cf. Section 1.1.2; then for all meanages, mi ≤ 137; and the number of connected components is always below 3200. For k = 1, persistence of the loops for the union of growing balls with centres at a point cloud representing the tree is considered, cf. 
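As an illustration of (1), APFk is easily computed once the birth-death pairs of a persistence diagram are available (e.g. from the R-package TDA used later in the paper). The following minimal R sketch is ours and is not part of the R-code accompanying the paper; multiplicities are encoded by repeating rows, and the toy diagram below reproduces APF1 of Figure 1.

# APF of Eq. (1); pd is a matrix with one row per feature and columns (birth, death),
# multiplicities being encoded by repeated rows.
apf <- function(pd, m) {
  meanage  <- (pd[, 1] + pd[, 2]) / 2     # m_i = (b_i + d_i) / 2
  lifetime <- pd[, 2] - pd[, 1]           # l_i = d_i - b_i
  sapply(m, function(x) sum(lifetime[meanage <= x]))
}

# Toy example of Figure 1 for k = 1: the point (0, 0.5) with multiplicity 3 and (0.62, 0.75).
pd1  <- rbind(c(0, 0.5), c(0, 0.5), c(0, 0.5), c(0.62, 0.75))
grid <- seq(0, 1, by = 0.01)
plot(grid, apf(pd1, grid), type = "s", xlab = "m", ylab = "APF1(m)")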
Section 1.1.2; the loops have a finite death time but some of them do not die during the allocated time T = 25 (that is, Bendich et al. (2016) stop the growth of balls when t > 25). Thus we shall only consider meanages mi ≤ 25; then the number of loops is always below 2700. For each tree and k = 0, 1, most ci = 1 and sometimes ci > 1. For each tree and k = 0, 1, Bendich et al. (2016) use only the 100 largest lifetimes in their analysis. Whereas their principal component analysis clearly reveal age effects, their permutation test based on the mean lifetimes for the male and females subjects only shows a clear difference when considering PD1 . Accordingly, when demonstrating the usefulness of APF0 and APF1 , we will focus on the gender effect and consider the same 95 trees as in Bendich et al. (2016) (two transsexual subjects are excluded) obtained from 46 female subjects and 49 male subjects; in contrast to Bendich et al. (2016), we consider all observed meanages and lifetimes. In accordance to the allocated time T = 25, we need to redefine APF1 by n APF1 (m) = ∑ ci li 1(mi ≤ m, mi + li /2 ≤ T ), i =1 11 m ≥ 0. (2) For simplicity we use the same notation APF1 in (1) and (2); although all methods and results in this paper will be presented with the definition (1) in mind, they apply as well when considering (2). Finally, we write APFkF and APFkM to distinguish between APF’s for females and males, respectively. 3 A single accumulated persistence function There exists several constructions and results on confidence sets for persistence diagrams when the aim is to separate topological signal from noise, see Fasy et al. (2014), Chazal et al. (2014), and the references therein. Appendix A and its accompanying Example 5 discuss the obvious idea of transforming such a confidence region into one for an accumulate persistence function, where the potential problem is that the bottleneck metric is used for persistence diagrams and this is not corresponding to closeness of APFs, cf. Section 1.3. In this section we focus instead on spatial point process model assessment using APFs or more traditional tools. Suppose a realization of a finite spatial point process X0 has been observed and copies X1 , . . . , Xr have been simulated under a claimed model for X0 so that the joint distribution of X0 , X1 , . . . , Xr should be exchangeable. That is, for any permutation (σ0 , . . . , σr ) of (0, . . . , r ), (Xσ0 , . . . , Xσr ) is claimed to be distributed as (X0 , . . . , Xr ); e.g. this is the case if X0 , X1 , . . . , Xr are IID. This is a common situation for model assessment in spatial point process analysis when a distribution for X0 has been specified (or estimated), see e.g. Baddeley et al. (2015) and Møller and Waagepetersen (2016). Denote the APFk s for X0 , . . . , Xr by A0 , . . . , Ar , respectively, and the null hypothesis that the joint distribution of A0 , . . . , Ar is exchangeable by H0 . Adapting ideas from Myllymäki et al. (2016), we will discuss how to construct a goodness-of-fit test for H0 based on a so-called global rank envelope for A0 ; their usefulness will be demonstrated in Example 1. 12 In functional data analysis, to measure how extreme A0 is in comparison to A1 , . . . , Ar , a so-called depth function is used for ranking A0 , . . . , Ar , see e.g. López-Pintado and Romo (2009). We suggest using a depth ordering called extreme rank in Myllymäki et al. 
(2016): Let T > 0 be a user-specified parameter chosen such that it is the behaviour of A0 (m) for 0 ≤ m ≤ T which is of interest. For l = 1, 2, . . ., define the l-th bounding curves of A0 , . . . , Ar by l Alow (m) = min l Ai (m) i =0,...,r and l Aupp (m) = max l Ai (m), i =0,...,r 0 ≤ m ≤ T, where min l and max l denote the l-th smallest and largest values, respectively, and where l ≤ r/2. Then, for i = 0, . . . , r, the extreme rank of Ai with respect to A0 , . . . , Ar is n l l Ri = max l : Alow (m) ≤ Ai (m) ≤ Aupp (m ) o for all m ∈ [0, T ] . The larger Ri is, the deeper or more central Ai is among A0 , . . . , Ar . Now, for a given α ∈ (0, 1), the extreme rank ordering is used to define the 100(1 − α)%- lα lα global rank envelope as the band delimited by the curves Alow and Aupp where ) ( r 1 1 ( Ri < l ) ≤ α . lα = max l : r + 1 i∑ =0 Under H0 , with probability at least 1 − α, lα lα Alow (m) ≤ A0 (m) ≤ Aupp (m ) for all m ∈ [0, T ], (3) see Myllymäki et al. (2016). Therefore, the 100(1 − α)%-global rank envelope is specifying a conservative statistical test called the extreme rank envelope test and which accepts H0 at level 100α% if (3) is satisfied or equivalently if r 1 1( Ri < R0 ) > α, r + 1 i∑ =0 (4) cf. Myllymäki et al. (2016). A plot of the extreme rank envelope allows a graphical interpretation of the extreme rank envelope test and may in case of rejection suggest an alternative model for X0 . 13 There exist alternatives to the extreme rank envelope test, in particular a liberal extreme rank envelope test and a so-called global scaled maximum absolute difference envelope, see Myllymäki et al. (2016). It is also possible to combine several extreme rank envelopes, for instance by combining APF0 and APF1 , see Mrkvička et al. (2016). In the following example we focus on (3)-(4) and briefly remark on results obtained by combining APF0 and APF1 . Example 1 (simulation study). Recall that a homogeneous Poisson process is a model for complete spatial randomness (CSR), see e.g. Møller and Waagepetersen (2004) and the simulation in the first panel of Figure 3. Consider APFs A0 , A1 , . . . , Ar corresponding to independent point processes X0 , X1 , . . . , Xr defined on a unit square and where Xi for i > 0 is CSR with a given intensity ρ (the mean number of points). Suppose X0 is claimed to be CSR with intensity ρ, however, the model for X0 is given by one of the following four point process models, which we refer to as the true model: (a) CSR; hence the true model agrees with the claimed model. (b) A Baddeley-Silverman cell process; this has the same second-order moment properties as under CSR, see Baddeley and Silverman (1984). Though from a mathematical point of view, it is a cluster process, simulated realisations will exhibit both aggregation and regularity at different scales, see the second panel of Figure 3. (c) A Matérn cluster process; this is a model for clustering where each cluster is a homogenous Poisson process within a disc and the centers of the discs are not observed and constitute a stationary Poisson process, see Matérn (1986), Møller and Waagepetersen (2004), and the third panel of Figure 3. (d) A most repulsive Bessel-type determinantal point process (DPP); this is a model for regularity, see Lavancier et al. (2015), Biscio and Lavancier (2016), and the fourth panel of Figure 3. We let ρ = 100 or 400. 
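For completeness we sketch how the extreme ranks Ri, the bound lα, and the test (4) can be computed from scratch in R when the curves A0, A1, . . . , Ar have been evaluated on a finite grid of meanages in [0, T]; in Example 1 below we instead use the R-package spptest. The code and its names are ours, and the "for all m" conditions are checked pointwise on the grid.

# Rows of A are the curves A_0, A_1, ..., A_r on a common grid; row 1 is the observed A_0.
extreme_rank_envelope <- function(A, alpha = 0.05) {
  lo <- apply(A, 2, sort)                      # l-th smallest value at each grid point
  up <- apply(A, 2, sort, decreasing = TRUE)   # l-th largest value at each grid point
  lmax <- floor(nrow(A) / 2)
  R <- sapply(seq_len(nrow(A)), function(i)    # extreme rank of the i-th curve; the bands
    sum(sapply(seq_len(lmax), function(l)      # are nested in l, so the count of valid l
      all(lo[l, ] <= A[i, ] & A[i, ] <= up[l, ]))))  # equals the largest valid l
  p <- mean(R < R[1])                          # test (4): reject H_0 if p <= alpha
  l_alpha <- max(which(sapply(seq_len(lmax), function(l) mean(R < l) <= alpha)))
  list(p = p, reject = (p <= alpha), lower = lo[l_alpha, ], upper = up[l_alpha, ])
}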
This specifies completely the models in (a) and (d), whereas 14 Figure 3: Simulated point patterns for a homogeneous Poisson process (first panel), a Baddeley-Silverman cell process (second panel), a Matérn cluster process (third panel), and a most repulsive Bessel-type DPP (fourth panel). the remaining parameters in the cases (b)-(c) are defined to be the same as those used in Robins and Turner (2016). In all cases of Figure 3, ρ = 400. Finally, following the recommendation in Myllymäki et al. (2016), we let r = 2499. For each value of ρ = 100 or 400, we simulate each point process in (a)-(d) with the Rpackage spatstat. Then, for each dimension k = 0 or 1, we compute the extreme rank envelopes and extreme rank envelope tests with the R-package spptest. We repeat all this 500 times. Table 1 shows for each case (a)-(d) the percentage of rejection of the hypothesis that X0 is a homogeneous Poisson process with known intensity ρ. In case of CSR, the type one error of the test is small except when k = 0 and ρ = 100. As expected in case of (b)-(d), the power of the test is increased when ρ is increased. For both the Baddeley-Silverman process and the DPP, when k = 0 and/or ρ = 400, the power is high and even 100% in two cases. For the Matérn cluster process, the power is 100% when both ρ = 100 and 400; this is also the case when instead the radius of a cluster becomes 10 times larger and hence it is not so easy to distinguish the clusters as in the third panel of Figure 3. When we combine the extreme rank envelopes for APF0 and APF1 , the results are better or close to the best results obtained when considering only one extreme rank envelope. Figure 4 illustrates for one of the 500 repetitions and for each dimension k = 0 and k = 1 the deviation of APFk from the extreme rank envelope obtained when the true model is not CSR. For each of the three non-CSR models, APFk is outside the extreme 15 CSR DPP Matérn cluster Baddeley-Silverman ρ = 100 ρ = 400 ρ = 100 ρ = 400 ρ = 100 ρ = 400 ρ = 100 ρ = 400 APF0 3.6 4 77.4 100 100 100 45.6 99.6 APF1 3.8 4.6 28.2 57.8 100 100 65.8 100 APF0 , APF1 4.8 3.6 82.4 100 100 100 60.8 100 Table 1: Percentage of point patterns for which the 95%-extreme rank envelope test rejects the hypothesis of CSR (a homogeneous Poisson process on the unit square with intensity ρ = 100 or ρ = 400) when the true model is either CSR or one of three alternative point process models. rank envelope, in particular when k = 0 and both the meanage and lifetime are small, cf. the middle panel. This means that small lifetimes are not noise but of particular importance, cf. the discussion in Section 1.2. Using an obvious notation, for small m, MC we may expect that APFDPP (m) < APFCSR 0 0 (m ) < APF0 (m ) which is in agreement with the middle panel. For large m, we may expect that APFDPP (m) > APFCSR 0 0 (m ) and MC APFCSR 0 (m ) > APF0 (m ), but only the last relation is detected by the extreme rank CSR envelope in the left panel. Similarly, we may expect APFMC 1 (m ) > APF1 (m ) for small CSR m, whereas APFMC 1 (m ) < APF1 (m ) for large m, and both cases are detected in the right panel. Note that for the Baddeley-Silverman cell process and k = 0, 1, APFBS k has a rather similar behaviour as APFDPP , i.e. like a regular point process and probably k because clustering is a rare phenomena. 
A similar simulation study is discussed in Robins and Turner (2016) for the models in (a)-(c), but notice that they fix the number of points to be 100 and they use a testing procedure based on the persistent homology rank function, which in contrast to our onedimensional APF is a two-dimensional function and is not summarizing all the topological features represented in a persistent diagram. Robins and Turner (2016) show that a test for CSR based on the persistent homology rank function is useful as compared to various tests implemented in spatstat and which only concern first and secondorder moment properties. Their method is in particular useful, when the true model is a Baddeley-Silverman cell process with the same first and second-order moment properties as under CSR. Comparing Figure 4 in Robins and Turner (2016) with the results 16 1.5 Determinantal 0.5 1.0 2.5 1.5 APF1(m) Baddeley−Silverman 1.0 APF0(m) Matern cluster Baddeley−Silverman 0.5 Matern cluster Extreme rank envelope bounds Matern cluster Determinantal 2.0 8 6 4 APF0(m) 2 Extreme rank envelope bounds Extreme rank envelope bounds 0.1 0.2 0.3 0.4 0.0 0 Baddeley−Silverman 0.0 0.0 Determinantal 0.000 0.010 m 0.020 m 0.030 0.0 0.1 0.2 0.3 0.4 0.5 0.6 m Figure 4: 95%-extreme rank envelope for APFi when i = 0 (left panel and the enlargement shown in the middle panel) or i = 1 (right panel) together with the curves for the three non-CSR models (Baddeley-Silverman cell process, Matérn cluster process, and Bessel-type DPP). The envelope is obtained from 2499 realisations of a CSR model on the unit square and with intensity 100. in Table 1 when the true model is a Baddeley-Silverman cell process and ρ = 100, the extreme rank envelope test seems less powerful than the test they suggest. On the other hand, Robins and Turner (2016) observe that the latter test performs poorly when the true model is a Strauss process (a model for inhibition) or a Matérn cluster process; as noticed for the Matérn cluster process, we obtain a perfect power when using the extreme rang envelope test. 4 A single sample of accumulated persistence functions 4.1 Functional boxplot This section discusses the use of a functional boxplot (Sun and Genton, 2011) for a sample A1 , . . . , Ar of APFk s those joint distribution is exchangeable. The plot provides a representation of the variation of the curves given by A1 , . . . , Ar around the most central curve, and it can be used for outlier detection, i.e. detection of curves that are too extreme with respect to the others in the sample. This is illustrated in Example 2 for the 17 brain artery trees dataset and in Appendix B and its accompanying Example 6 concerning a simulation study. The functional boxplot is based on an ordering of the APFk s obtained using a so-called depth function. For specificity we make the standard choice called the modified band depth function (MBD), cf. López-Pintado and Romo (2009) and Sun and Genton (2011): For a user-specified parameter T > 0 and h, i, j = 1, . . . , r with i < j, define    Bh,i,j = m ∈ [0, T ] : min Ai (m) , A j (m) ≤ Ah (m) ≤ max Ai (m) , A j (m) , and denote the Lebesgue measure on [0, T ] by |·|. Then the MBD of Ah with respect to A1 , . . . , Ar is MBDr ( Ah ) = 2 Bh,i,j . r (r − 1) 1≤∑ i < j ≤r (5) This is the average proportion of Ah on [0, T ] between all possible pairs of A1 , . . . , Ar . Thus, the larger the value of the MBD of a curve is, the more central or deeper it is in the sample. 
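The modified band depth (5) can be approximated directly on a grid. In the following R sketch (ours, not part of the accompanying code) |Bh,i,j| is replaced by the proportion of grid points at which Ah lies between Ai and Aj; this only rescales (5) by the constant T and hence leaves the depth ordering unchanged.

# Rows of A are the curves A_1, ..., A_r evaluated on a grid spanning [0, T].
mbd <- function(A) {
  r <- nrow(A)
  depth <- numeric(r)
  for (h in seq_len(r)) {
    s <- 0
    for (i in seq_len(r - 1)) for (j in (i + 1):r) {
      lo <- pmin(A[i, ], A[j, ]); up <- pmax(A[i, ], A[j, ])
      s <- s + mean(lo <= A[h, ] & A[h, ] <= up)   # proportion of [0, T] lying in B_{h,i,j}
    }
    depth[h] <- 2 * s / (r * (r - 1))
  }
  depth
}
# The curves with the 50% largest depths delimit the central region used below, e.g.
# central <- A[rank(-mbd(A)) <= ceiling(nrow(A) / 2), ]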
We call the region delimited by the 50% most central curves the central envelope. It is often assumed that a curve outside the central envelope inflated by 1.5 times the range of the central envelope is an outlier or abnormal curve — this is just a generalisation of a similar criterion for the boxplot of a sample of real numbers — and the range may be changed if it is more suitable for the application at hand, see the discussion in Sun and Genton (2011) and Example 2 below. Example 2 (brain artery trees). For the brain artery trees dataset (Section 2.2), Figure 5 shows the functional boxplots of APFk s for females (first and third panels) respective males (second and fourth panels) when k = 0 (first and second panels) and k = 1 (third and fourth panels): The most central curve is plotted in black, the central envelope in purple, and the upper and lower bounds obtained from all the curves except the outliers in dark blue. Comparing the two left panels (concerned with connected components), the shape of the central envelope is clearly different for females and males, in particular on the interval [40, 60], and the upper and lower bounds of the non-outlier are closer 18 20 40 60 80 100 m 0 20 40 60 80 100 1000 200 0 0 m 600 APF1(m) − Male 1000 600 APF1(m) − Female 0 200 1500 1000 0 500 APF0(m) − Male 1500 1000 500 APF0(m) − Female 0 0 5 10 15 20 25 0 5 10 m 15 20 25 m Figure 5: Functional boxplots of APFs for females and males obtained from the brain artery trees dataset: APF0F (first panel), APF0M (second panel), APF1F (third panel), APF1M (fourth panel). The dashed lines show the outliers detected by the 1.5 criterion. Figure 6: Brain artery tree of a male subject with APF0M and APF1M detected as outliers by the 1.5 criterion. to the central region for females, in particular on the interval [0, 50]. For the two right panels (concerned with loops), the main difference is observed on the interval [15, 25] where the central envelope is larger for females than for males. The dashed lines in Figure 5 show the APFs detected as outliers by the 1.5 criterion, that is 6 APF0F s (first panel), 3 APF1F s (third panel), 6 APF0M s (second panel), and 4 APF1M s (fourth panel). For the females, only for one point pattern both APF0F and APF1F are outliers, where APF1F is the steep dashed line in the bottom-left panel; and for the males, only for two point patterns both APF0M and APF1M are outliers, where in one case APF1M is the steep dashed line in the bottom-right panel. For this case, Figure 6 reveals an obvious issue: A large part on the right of the corresponding tree is missing! Examples 3 and 4 discuss to what extent our analysis of the brain artery trees will be sensitive to whether we include or exclude the detected outliers. 19 4.2 Confidence region for the mean function This section considers an asymptotic confidence region for the mean function of a sample A1 , . . . , Ar of IID APFk s. We assume that D1 , . . . , Dr are the underlying IID RRPDk s for the sample so that with probability one, there exists an upper bound T < ∞ on the death times and there exists an upper bound nmax < ∞ on the number of k-dimensional topological features. Note that the state space for such RRPDk s is n Dk,T,nmax = {{(m1 , l1 , c1 ), . . . , (mn , ln , cn )} : ∑ ci ≤ nmax , mi + li /2 ≤ T, i = 1, . . . , n} i =1 and only the existence and not the actual values of nmax and T play a role when applying our method below. 
For example, in the settings (i)-(ii) of Section 2.1 it suffices to assume that X is included in a bounded region of R2 and that the number of points N is bounded by a constant; this follows from the two versions of the Nerve Theorem presented in Fasy et al. (2014) and Edelsbrunner and Harer (2010), respectively. We adapt an empirical bootstrap procedure (see e.g. van der Vaart and Wellner (1996)) which in Chazal et al. (2013) is used for a confidence region for the mean of the dominant function of the persistent landscape and which in our case works as follows. For 0 ≤ m ≤ T, the mean function is given by µ(m) = E { A1 (m)} and estimated by the empirical mean function Ar (m) = 1 r ∑ri=1 Ai (m). Let A1∗ , . . . , Ar∗ be independent uni- form draws with replacement from the set { A1 , . . . , Ar } and set Ar∗ = 1r ∑ri=1 Ai∗ and √ θ ∗ = supm∈[0,T ] r Ar (m) − Ar∗ (m) . For a given integer B > 0, independently repeat this procedure B times to obtain θ1∗ , . . . , θB∗ . Then, for 0 < α < 1, the 100(1 − α)%quantile in the distribution of θ ∗ is estimated by q̂αB = inf{q ≥ 0 : 1 B 1(θi∗ > q) ≤ α}. B i∑ =1 The following theorem is verified in Appendix F. Theorem 4.1. Let the situation be as described above. For large values of r and B, the functions √ Ar ± q̂αB / r provide the bounds for an asymptotic conservative 100(1 − α)%-confidence region 20 20 40 60 80 100 400 200 APF1(m) 0 Confidence region bounds − Female 0 0 Confidence region bounds − Male 600 800 1000 1400 1000 600 APF0(m) 200 Confidence region bounds − Female Confidence region bounds − Male 0 m 5 10 15 20 25 m Figure 7: Bootstrap confidence regions for the mean APFkM and the mean APFkF when k = 0 (left panel) and k = 1 (right panel). for the mean APF, that is   √ √ lim lim P µ(m) ∈ [ Ar (m) − q̂αB / r, Ar (m) + q̂αB / r ] for all m ∈ [0, T ] ≥ 1 − α. r →∞ B→∞ Example 3 (brain artery trees). The brain artery trees are all contained in a bounded region and presented by a bounded number of points, so it is obvious that T and nmax exist for k = 0, 1. To establish confidence regions for the mean of the APFkM s respective APFkF s, we apply the bootstrap procedure with B = 1000. The result is shown in Figure 7 when all 95 trees are considered: In the left panel, k = 0 and approximatively half of each confidence region overlap with the other confidence region; it is not clear if there is a difference between genders. In the right panel, k = 1 and the difference is more pronounced, in particular on the interval [15, 25]. Similar results and conclusions are obtained if we exclude the APFs detected as outliers in Example 2. Of course we should supply with a statistical test to assess the gender effect and such a test is established in Section 5 and applied in Example 4. Appendix C provides the additional Example 7 for a simulated dataset along with a discussion on the geometrical interpretation of the confidence region obtained. 21 5 Two samples of accumulated persistence functions This section concerns a two-sample test for comparison of two samples of APFs. Appendix E presents both a clustering method (Appendix E.1, including Example 9) and a unsupervised classification method (Appendix E.2, including Example 10) for two or more samples. Consider two samples of independent RRPDk s D1 , . . . , Dr1 and E1 , . . . , Er2 , where each Di (i = 1, . . . , r1 ) has distribution PD and each E j has distribution PE (j = 1, . . . , r2 ), and suppose we want to test the null hypothesis H0 : PD = PE = P. 
Here, the common distribution P is unknown and as in Section 4.2 we assume it is concentrated on Dk,T,nmax for some integer nmax > 0 and number T > 0. Below, we adapt a two-sample test statistic studied in Præstgaard (1995) and van der Vaart and Wellner (1996). Let r = r1 + r2 . Let A1 , . . . , Ar be the APFk corresponding to ( D1 , . . . , Dr1 , E1 , . . . , Er2 ), and denote by Ar1 and Ar2 the empirical means of A1 , . . . , Ar1 and Ar1 +1 , . . . , Ar1 +r2 , respectively. Let I = [ T1 , T2 ] be a user-specified interval with 0 ≤ T1 < T2 ≤ T and used for defining a two-sample test statistic by r r1 r2 KSr1 ,r2 = sup Ar1 (m) − Ar2 (m) , r m∈ I where large values are critical for H0 . This may be rewritten as r r r r 2 r1 r 1 r2 r1 r2 KSr1 ,r2 = sup GD ( m ) − GE ( m ) + E { A D − A E } (m ) , r r r m∈ I r1 = where GD √ (6) (7)   √ r1 Ar1 − E { A D } and GEr2 = r2 Ar2 − E { A E } . By Lemma F.2 in Apr pendix F and by the independence of the samples, GD1 and GEr2 converge in distribution to two independent zero-mean Gaussian processes on I, denoted GD and GE , respectively. Assume that r1 /r → λ ∈ (0, 1) as r → ∞. Under H0 , in the sense of convergence in distribution, lim KSr1 ,r2 = sup r →∞ m∈ I √ 1 − λGD (m) − 22 √ λGE (m) , (8) where √ 1 − λGD − λGE follows the same distribution as GD . If H0 is not true and  E A1 − Ar1 +1 (m) > 0, then KSr1 ,r2 → ∞ as r → ∞, see van der Vaart and Wellner √ supm∈ I (1996). Therefore, for 0 < α < 1 and letting qα = inf{q : P(supm∈ I | GD (m)| > q) ≤ α}, the asymptotic test that rejects H0 if KSr1 ,r2 ≤ qα is of level 100α% and of power 100%. As qα depends on the unknown distribution P, we estimate qα by a bootstrap method: Let A1∗ , . . . , Ar∗ be independent uniform draws with replacement from { A1 , . . . , Ar }. For 0 ≤ m ≤ T, define the empirical mean functions Ar∗1 (m) = 1 r2 1 r1 r1 ∗ ∗ ∑i = 1 Ai (m ) and Ar2 (m ) = r +r 2 ∗ 1 ∑i = r1 +1 Ai (m ), and compute ∗ θ = r r1 r2 sup A∗ (m) − Ar∗2 (m) . r m ∈ I r1 (9) For a given integer B > 0, independently repeat this procedure B times to obtain θ1∗ , . . . , θB∗ . Then we estimate qα by the 100(1 − α)%-quantile of the empirical distribution of θ1∗ , . . . , θB∗ , that is q̂αB 1 B = inf{q ≥ 0 : ∑ 1(θi∗ > q) ≤ α}. B i =1 The next theorem is a direct application of Theorem 3.7.7 in van der Vaart and Wellner (1996) noticing that the APFk s are uniformly bounded by Tnmax and they form a socalled Donsker class, see Lemma F.2 and its proof in Appendix F. Theorem 5.1. Let the situation be as described above. If r → ∞ such that r1 /r → λ with λ ∈ (0, 1), then under H0  lim lim P KSr1 ,r2 > r →∞ B→∞ q̂αB  = α,  whilst if H0 is not true and supm∈ I E A1 − Ar1 +1 (m) > 0, then   lim lim P KSr1 ,r2 > q̂αB = 1. r →∞ B→∞ Therefore, the test that rejects H0 if KSr1 ,r2 > q̂αB is of asymptotic level 100α% and power 100%. As remarked in van der Vaart and Wellner (1996), by their Theorem 3.7.2 it is 23 possible to present a permutation two-sample test so that the critical value q̂αB for the bootstrap two-sample test has the same asymptotic properties as the critical value for the permutation test. Other two-sample test statistics than (6) can be constructed by considering other measurable functions of Ar1 − Ar2 , e.g. we may consider the two-sample test statistic Mr1 ,r2 = Z I Ar1 (m) − Ar2 (m) dm. (10) Then by similar arguments as above but redefining θ ∗ in (9) by r Z r1 r2 ∗ A∗ (m) − Ar∗2 (m) dm, θ = r m ∈ I r1 the test that rejects H0 if Mr1 ,r2 > q̂αB is of asymptotic level 100α% and power 100%. 
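To make the procedure concrete, the bootstrap two-sample test based on KSr1,r2 may be implemented along the following lines. This R sketch is ours (not part of the accompanying code); the supremum over I is approximated by a maximum over a grid, and q̂αB is taken as the empirical (1 − α)-quantile of θ1*, . . . , θB*.

# Rows of A are the APFs of both samples evaluated on a grid spanning I;
# rows 1..r1 form the first sample, the remaining r2 rows the second.
ks_two_sample <- function(A, r1, B = 1000, alpha = 0.05) {
  r  <- nrow(A); r2 <- r - r1
  ks <- function(M) sqrt(r1 * r2 / r) *
    max(abs(colMeans(M[1:r1, , drop = FALSE]) - colMeans(M[-(1:r1), , drop = FALSE])))
  KS    <- ks(A)                                           # statistic (6)
  theta <- replicate(B, ks(A[sample(r, replace = TRUE), , drop = FALSE]))
  q_hat <- quantile(theta, 1 - alpha)                      # estimate of q_alpha
  list(KS = KS, q = q_hat, p = mean(theta > KS), reject = (KS > q_hat))
}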
Example 4 (brain artery trees). To distinguish between male and female subjects of the brain artery trees dataset, we use the two-sample test statistic KSr1 ,r2 under three different settings: (A) For k = 0, 1, we let PD′k be the subset of PDk corresponding to the 100 largest lifetimes. Then D1 , . . . , D46 and E1 , . . . , E49 are the RRPDk s obtained from the PD′k s associated to female and male subjects, respectively. This is the setting used in Bendich et al. (2016). (B) For k = 0, 1, we consider all lifetimes and let D1 , . . . , D46 and E1 , . . . , E49 be the RRPDk s associated to female and male subjects, respectively. (C) The samples are as in setting (B) except that we exclude the RRPDk s where the corresponding APFk was detected as an outlier in Example 2. Hence, r1 = 40 and r2 = 43 if k = 0, and r1 = 43 and r2 = 45 if k = 1. Bendich et al. (2016) perform a permutation test based on the mean lifetimes for the male and female subjects and conclude that gender effect is recognized when considering PD1 24 APF0 APF1 I = [0, 137] I = [0, 60] I = [0, 25] I = [15, 25] Setting (A) 5.26 3.26 3.18 2.72 Setting (B) 7.67 3.64 20.06 1.83 Setting (C) 4.55 2.61 0.92 0.85 Table 2: Estimated p-values given in percentage of the two-sample test based on KSr1 ,r2 used with APF0 and APF1 on different intervals I to distinguish between male and female subjects under settings (A), (B), and (C) described in Example 4. (p-value = 3%) but not PD0 (p-value = 10%). For comparison, under each setting (A)(C), we perform the two-sample test for k = 0, 1, different intervals I, and B = 10000. In each case, we estimate the p-value, i.e. the smallest α such that the two-sample test with significance level 100α% does not reject H0 , by p̂ = 1 B ∑iB=1 1(θi∗ > KSr1 ,r2 ). Table 2 shows the results. Under each setting (A)-(C), using APF0 we have a smaller p-value than in Bendich et al. (2016) if I = [0, 137] and an even larger p-value if I = [0, 60]; and for k = 1 under setting (B), our p-value is about seven times larger than the pvalue in Bendich et al. (2016) if I = [0, 25], and else it is similar or smaller. For k = 1 and I = [0, 25], the large difference between our p-values under settings (B) and (C) indicates that the presence of outliers violates the result of Theorem 5.1 and care should hence be taken. In our opinion we can better trust the results without outliers, where in contrast to Bendich et al. (2016) we see a clear gender effect when considering the connected components. Notice also that in agreement with the discussion of Figure 5 in Example 2, for each setting A, B, and C and each dimension k = 0, 1, the p-values in Table 2 are smallest when considering the smaller interval I = [0, 60] or I = [15, 25]. Appendix D provides an additional Example 8 illustrating the use of two-sample test in a simulation study. 25 Acknowledgements Supported by The Danish Council for Independent Research | Natural Sciences, grant 7014-00074B, "Statistics for point processes in space and beyond", and by the "Centre for Stochastic Geometry and Advanced Bioimaging", funded by grant 8721 from the Villum Foundation. Helpful discussions with Lisbeth Fajstrup on persistence homology is acknowledged. In connection to the brain artery trees dataset we thank James Stephen Marron and Sean Skwerer for helpful discussions and the CASILab at The University of North Carolina at Chapel Hill for providing the data distributed by the MIDAS Data Server at Kitware, Inc. We are grateful to the editors and the referees for useful comments. 
Appendix

Appendices A-F contain complements and additional examples to Sections 3-5. Our setting and notation are as follows. All the examples are based on a simulated point pattern {x_1, . . . , x_N} ⊂ R² as described in Section 2.1, with x_1, . . . , x_N being IID points, where N is a fixed positive integer. As in Section 1.1.1, our setting corresponds to applications typically considered in TDA where the aim is to obtain topological information about a compact set C ⊂ R² which is unobserved and where possibly noise appears: For specificity, we let x_i = y_i + ε_i, i = 1, . . . , N, where y_1, . . . , y_N are IID points with support C, the noise terms ε_1, . . . , ε_N are IID and independent of y_1, . . . , y_N, and ε_i follows the restriction to the square [−10σ, 10σ]² of a bivariate zero-mean normal distribution with IID coordinates and standard deviation σ ≥ 0 (if σ = 0 there is no noise). We denote this distribution for ε_i by N_2(σ) (the restriction to [−10σ, 10σ]² is only imposed for technical reasons and is not of practical importance). We let C_t be the union of closed discs of radius t centred at x_1, . . . , x_N, and we study how the topological features of C_t change as t ≥ 0 grows. For this we use the Delaunay-complex mentioned in Section 1.1.1. Finally, we denote by C((a, b), r) the circle with center (a, b) and radius r.

A Transforming confidence regions for persistence diagrams used for separating topological signal from noise

As noted in Section 3 there exist several constructions and results on confidence sets for persistence diagrams when the aim is to separate topological signal from noise, see Fasy et al. (2014), Chazal et al. (2014), and the references therein. We avoid presenting the technical description of these constructions and results, which depend on different choices of complexes (or more precisely so-called filtrations). For specificity, in this appendix we just consider the Delaunay-complex and discuss the transformation of such a confidence region into one for an accumulated persistence function.

We use the following notation. As in the aforementioned references, consider the persistence diagram PD_k for an unobserved compact manifold C ⊂ R², obtained as in Section 1.1.1 by considering the persistence, as t ≥ 0 grows, of k-dimensional topological features of the set consisting of all points in R² within distance t from C. Note that PD_k is considered as being non-random and unknown; of course, in our simulation study presented in Example 5 below we only pretend that PD_k is unknown. Let \widehat{PD}_{k,N} be the random persistence diagram obtained as in Section 1.1.1 from IID points x_1, . . . , x_N with support C. Let N = {(b, d) : b ≤ d, l ≤ 2c_N}, where l = d − b denotes the lifetime, be the set of points close to the diagonal in the persistence diagram. Let S(b, d) = {(x, y) : |x − b| ≤ c_N, |y − d| ≤ c_N} be the square with center (b, d), sides parallel to the b- and d-axes, and of side length 2c_N. Finally, let α ∈ (0, 1). Fasy et al. (2014) and Chazal et al. (2014) suggest various ways of constructing a bound c_N > 0 so that an asymptotic conservative 100(1 − α)%-confidence region for PD_k with respect to the bottleneck distance W_∞ is given by

  lim inf_{N→∞} P( W_∞(PD_k, \widehat{PD}_{k,N}) ≤ c_N ) ≥ 1 − α,   (11)

where W_∞ is the bottleneck distance defined in Section 1.3.
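To make the use of (11) concrete: given \widehat{PD}_{k,N} and the bound c_N, the points with lifetime at most 2c_N are treated as noise and the remaining points as significant features, as discussed next. A minimal numpy sketch of this split is given below; the computation of c_N itself (e.g. by the bootstrap of Chazal et al. (2014), available in the R-package TDA) is assumed to be done elsewhere.

```python
import numpy as np

def split_signal_noise(pd_hat, c_N):
    """Split an estimated persistence diagram into significant points and noise.

    Following the interpretation of the confidence region (11), points whose
    lifetime d - b is at most 2*c_N are regarded as noise, the remaining
    points as significant topological features.

    pd_hat : (n, 2) array of (birth, death) pairs
    c_N    : the bound from Fasy et al. (2014) / Chazal et al. (2014)
    """
    pd_hat = np.asarray(pd_hat, dtype=float)
    lifetimes = pd_hat[:, 1] - pd_hat[:, 0]
    significant = lifetimes > 2.0 * c_N
    return pd_hat[significant], pd_hat[~significant]
```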
The confidence region given by (11) consists of those persistence diagrams PD_k which have exactly one point in each square S(b_i, d_i), with (b_i, d_i) a point of \widehat{PD}_{k,N}, and an arbitrary number of points in the set N. Fasy et al. (2014) consider the points of \widehat{PD}_{k,N} falling in N as noise and the remaining points as representing significant topological features of C.

Using (11), an asymptotic conservative 100(1 − α)%-confidence region for the APF_k corresponding to PD_k is immediately obtained. This region will be bounded by two functions \widehat{A}^{min}_{k,N} and \widehat{A}^{max}_{k,N} specified by \widehat{PD}_{k,N} and c_N. Due to the accumulating nature of APF_k, the span between the bounds is an increasing function of the meanage. When using the Delaunay-complex, Chazal et al. (2014) show that the span decreases as N increases; this is illustrated in Example 5 below.

Example 5 (simulation study). Let C = C((−1.5, 0), 1) ∪ C((1.5, 0), 0.8) and suppose each point x_i is uniformly distributed on C. Figure 8 shows C and an example of a simulated point pattern with N = 300 points. We use the bootstrap method implemented in the R-package TDA and presented in Chazal et al. (2014) to compute the 95%-confidence region for PD_1 when N = 300, see the top-left panel of Figure 9, where the two squares above the diagonal correspond to the two loops in C and the other squares correspond to topological noise. Thereby 95%-confidence regions for RRPD_1 (top-right panel) and APF_1 (bottom-left panel) are obtained. The confidence region for APF_1 shrinks as N increases, as demonstrated in the bottom panels where N is increased from 300 to 500.

As noticed in Section 1.3, we must be careful when using results based on the bottleneck metric, because closeness in the bottleneck metric and closeness of the two corresponding APFs are not equivalent: although two persistence diagrams that are close with respect to the bottleneck distance yield APFs that are close with respect to the Lq-norm (1 ≤ q ≤ ∞), the converse is not true. Hence, it is possible that an APF is in the confidence region plotted in Figure 9 while the corresponding persistence diagram is very different from the truth.

Figure 8: The set C in Example 5 (left panel) and a simulated point pattern of N = 300 independent and uniformly distributed points on C (right panel).

B Additional example related to Section 4.1 "Functional boxplot"

The functional boxplot described in Section 4.1 can be used as an exploratory tool for the curves given by a sample of APF_k s. It provides a representation of the most central curve and the variation around this. It can also be used for outlier detection, as illustrated in the following example.

Example 6 (simulation study). We consider a sample of 65 independent APF_k s, where the joint distribution of the first 50 APF_k s is exchangeable, whereas the last 15 play the role of outliers. We suppose each APF_k corresponds to a point process of 100 IID points, where each point x_i follows one of the following distributions P_1, . . . , P_4 (a simulation sketch of these four distributions is given after the list).

• P_1 (unit circle): x_i is a uniform point on C((0, 0), 1) perturbed by N_2(0.1)-noise.

• P_2 (Gaussian mixture): Let y_i follow N_2(0.2); then x_i = y_i with probability 0.5, and x_i = y_i + (1.5, 0.5) otherwise.

• P_3 (two circles): x_i is a uniform point on C((−1, −1), 1) ∪ C((1, 1), 0.5) perturbed by N_2(0.1)-noise.

• P_4 (circle of radius 0.7): x_i is a uniform point on C((0, 0), 0.7) perturbed by N_2(0.1)-noise.
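The following numpy sketch simulates the four point process types just listed. It is an illustration only: the paper's experiments are carried out in R, the seed and helper names are arbitrary, and clipping to [−10σ, 10σ]² stands in for the restriction defining N_2(σ) (with σ = 0.1 or 0.2 the bound essentially never binds).

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_on_circle(n, center, radius):
    """n independent uniform points on the circle C(center, radius)."""
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.array(center) + radius * np.column_stack((np.cos(angles), np.sin(angles)))

def n2_noise(n, sigma):
    """N2(sigma)-noise: IID zero-mean normal coordinates with std sigma;
    clipping approximates the restriction to [-10*sigma, 10*sigma]^2."""
    return np.clip(rng.normal(0.0, sigma, size=(n, 2)), -10.0 * sigma, 10.0 * sigma)

def sample_P1(n=100):   # unit circle + N2(0.1)-noise
    return uniform_on_circle(n, (0.0, 0.0), 1.0) + n2_noise(n, 0.1)

def sample_P2(n=100):   # Gaussian mixture: y ~ N2(0.2), shifted by (1.5, 0.5) w.p. 0.5
    y = n2_noise(n, 0.2)
    shifted = rng.random(n) < 0.5
    y[shifted] += np.array([1.5, 0.5])
    return y

def sample_P3(n=100):   # uniform on the union of two circles + N2(0.1)-noise
    # choose each circle with probability proportional to its circumference,
    # so that the point is uniform on the union
    big = rng.random(n) < 1.0 / 1.5
    pts = np.where(big[:, None],
                   uniform_on_circle(n, (-1.0, -1.0), 1.0),
                   uniform_on_circle(n, (1.0, 1.0), 0.5))
    return pts + n2_noise(n, 0.1)

def sample_P4(n=100):   # circle of radius 0.7 + N2(0.1)-noise
    return uniform_on_circle(n, (0.0, 0.0), 0.7) + n2_noise(n, 0.1)
```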
Figure 9: 95%-confidence regions obtained by the bootstrap method for PD_1 (top-left panel) and its corresponding RRPD_1 (top-right panel) when C and x_1, . . . , x_300 are as in Figure 8. The bottom-left panel shows the corresponding 95%-confidence region for APF_1. The bottom-right panel shows the 95%-confidence region for APF_1 when a larger point cloud with 500 points is used.

We let the first 50 point processes be obtained from P_1 (the distribution for non-outliers), the next 5 from P_2, the following 5 from P_3, and the final 5 from P_4. Figure 10 shows a simulated realization of each of the four types of point processes. Figure 11 shows the functional boxplots when considering APF_0 (left panel) and APF_1 (right panel). The curves detected as outliers and corresponding to the distributions P_2, P_3, and P_4 are plotted in red, blue, and green, respectively. In both panels the outliers detected by the 1.5 criterion agree with the true outliers.

In the left panel, each curve has an accumulation of small jumps between m = 0 and m ≈ 0.1, corresponding to the moments where the points associated to each circle are connected by the growing discs in the sequence {C_t}_{t≥0}. The curves corresponding to realisations of P_3 have a jump at m ≈ 0.38 which corresponds to the moment where the points associated to the two circles used when defining P_3 are connected by the growing discs in the sequence {C_t}_{t≥0}.

Figure 10: Simulated realizations of the four types of point processes, each consisting of 100 IID points with distribution either P_1 (black dots), P_2 (blue crosses), P_3 (red triangles), or P_4 (rotated green crosses).

The points following the distribution P_4 are generally closer to each other than the ones following the distribution P_1, as the radius of the underlying circle is smaller. This corresponds to more but smaller jumps in APF_0 for small meanages, and hence, for small meanages, the curves of APF_0 are lower when they correspond to realisations of P_1 than to realisations of P_4; as expected, for large meanages, the curves of APF_0 are larger when they correspond to realisations of P_1 than to realisations of P_4. Note that if we redefine P_4 so that the N_2(0.1)-noise is replaced by N_2(0.07)-noise, then the curves would be the same up to rescaling.

In the right panel, we observe clear jumps in all APF_1 s obtained from P_1, P_3, and P_4. These jumps correspond to the first time that the loops of the circles in P_1, P_3, and P_4 are covered by the union of growing discs in the sequence {C_t}_{t≥0}. Once again, if we had used N_2(0.07)-noise in place of N_2(0.1)-noise in the definition of P_4, the curves would be the same up to rescaling.
If we repeat everything but with the distribution P_4 redefined so that C((0, 0), 0.7) is replaced by C((0, 0), 0.8), then the support of P_4 is closer to that of P_1 and it becomes harder, in the case of APF_0, to detect the outliers with distribution P_4 (we omit the corresponding plot); thus further simulations for determining a stronger criterion would be needed.

Figure 11: Functional boxplots of 65 APFs based on the topological features of dimension 0 (left panel) and 1 (right panel). In each panel, 50, 5, 5, and 5 APFs are obtained from the Delaunay-complex of 100 IID points from the distribution P_1, P_2, P_3, and P_4, respectively. The APFs detected as outliers are plotted in red, blue, and green in the case of P_2, P_3, and P_4, respectively.

C Additional example related to Section 4.2 "Confidence region for the mean function"

This appendix provides yet another example to illustrate the bootstrap method in Section 4.2 for obtaining a confidence region for the mean function of a sample of IID APF_k s.

Example 7 (simulation study). Consider 50 IID copies of a point process consisting of 100 independent and uniformly distributed points on the union of three circles with radius 0.5 and centred at (−1, −1), (0, 1), and (1, −1), respectively (these circles were also considered in the example of Section 1.1.1). A simulated realization of the point process is shown in the left panel of Figure 12, and the next two panels show simulated confidence regions for APF_0 and APF_1, respectively, when the bootstrap procedure with B = 1000 is used.

Figure 12: A simulation of 100 independent and uniformly distributed points on the union of three circles (dashed lines) with the same radius r = 0.5 and centred at (−1, −1), (0, 1), and (1, −1) (left panel). The 95%-confidence regions for the mean APF_0 (middle panel) and the mean APF_1 (right panel) are based on 50 IID simulations.

In the middle panel, between m = 0 and m = 0.2, there is an accumulation of small jumps corresponding to the moment when each circle is covered by the union of growing discs from the sequence {C_t}_{t≥0}; we interpret these small jumps as topological noise. The jump at m ≈ 0.25 corresponds to the moment when the circles centred at (−1, −1) and (1, −1) are connected by the growing discs, and the jump at m ≈ 0.3 to when all three circles are connected by the growing discs. In the right panel, at m ≈ 0.3 there is an accumulation of small jumps corresponding to the moment when the three circles are connected by the growing discs and they form a loop at m = 0.25 in Figure 1. The disappearance of this loop at m = 0.69 in Figure 1 corresponds to the jump at m ≈ 0.7 in Figure 12.

D Additional example to Section 5 "Two samples of accumulated persistence functions"

Section 5 considered two samples of independent RRPD_k s, D_1, . . . , D_{r1} and E_1, . . . , E_{r2}, where each D_i (i = 1, . . . , r_1) has distribution P_D and each E_j has distribution P_E (j = 1, . . . , r_2). Then we studied a bootstrap two-sample test to assess the null hypothesis H_0 : P_D = P_E, e.g. in connection to the brain artery trees.
An additional example showing the performance of the test is presented below.

Figure 13: A simulation of 100 independent and uniformly distributed points on the circle centred at (0, 0) with radius 1 and perturbed by N_2(0.2)-noise (red dots), together with 100 independent and uniformly distributed points on the circle centred at (0, 0) with radius 0.95 and perturbed by N_2(0.2)-noise (blue crosses).

Example 8 (simulation study). Let P_D be the distribution of a RRPD_k obtained from 100 independent and uniformly distributed points on C((0, 0), 1) perturbed by N_2(0.2)-noise, and define P_E in a similar way but with a circle of radius 0.95. A simulated realisation of each point process is shown in Figure 13; it seems difficult to recognize that the underlying circles are different. Let us consider the two-sample test statistics (6) and (10) with I = [0, 3], r_1 = r_2 = 50, and α = 0.05. Over 500 simulations of the two samples of RRPD_k s we obtain the following percentages of rejection: For M_{r1,r2}, 5.2% if k = 0, and 24.2% if k = 1. For KS_{r1,r2} much better results are observed, namely 73.8% if k = 0, and 93.8% if k = 1, where this high percentage is mainly caused by the largest lifetime of a loop.

E Further methods for two or more samples of accumulated persistence functions

E.1 Clustering

Suppose A_1, . . . , A_r are APF_k s which we want to label into K < r groups by using a method of clustering (or unsupervised classification). Such methods are studied in many places in the literature on functional data, see the survey in Jacques and Preda (2014). In particular, Chazal et al. (2009), Chen et al. (2015), and Robins and Turner (2016) consider clustering in connection to RRPD_k s. Whereas the RRPD_k s are two-dimensional, clustering is easy to use for the one-dimensional APF_k s, as illustrated in Example 9 below. For simplicity we just consider the standard technique known as the K-means clustering algorithm (Hartigan and Wong (1979)). For more complicated applications than considered in Example 9 the EM-algorithm may be needed for the K-means clustering algorithm. As noticed by a referee, to avoid the use of the EM-algorithm we can modify (9) or (10) and thereby construct a distance/similarity matrix for different APFs which is used to perform hierarchical clustering. However, for Example 9 the results using hierarchical clustering (omitted here) were not better than with the K-means algorithm.

Assume that A_1, . . . , A_r are pairwise different and square-integrable functions on [0, T], where T is a user-specified parameter. For example, if RRPD_k ∈ D_{k,T,nmax} (see Section 4.2), then APF_k ∈ L²([0, T]). The K-means clustering algorithm works as follows.

• Choose uniformly at random a subset of K functions from {A_1, . . . , A_r}; call these functions centres and label them by 1, . . . , K.

• Assign each non-selected APF_k the label i if it is closer to the centre of label i than to any other centre with respect to the L²-distance on L²([0, T]).

• In each group, reassign the centre to be the mean curve of the group (this may not be an APF_k of the sample).

• Iterate these steps until the assignment of centres does not change.

The algorithm is known to be convergent; however, it may have several drawbacks as discussed in Hartigan and Wong (1979) and Bottou and Bengio (1995).
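As an illustration, here is a minimal numpy sketch of the algorithm just described, for APFs evaluated on a common equidistant grid over [0, T]. Example 9 below uses the R-function kmeans instead; the grid size and function names here are illustrative only.

```python
import numpy as np

def kmeans_apf(curves, K, n_iter=100, seed=None):
    """Lloyd-type K-means for APFs discretised on a common grid over [0, T].

    curves : (r, g) array; row i is A_i evaluated at g equidistant meanages,
             so the L2([0, T]) distance is proportional to the Euclidean
             distance between rows.
    Returns the group labels and the final centre curves.
    """
    rng = np.random.default_rng(seed)
    curves = np.asarray(curves, dtype=float)
    r = curves.shape[0]
    # Step 1: choose K of the observed curves uniformly at random as centres.
    centres = curves[rng.choice(r, size=K, replace=False)].copy()
    labels = None
    for _ in range(n_iter):
        # Step 2: assign each curve to the closest centre (squared L2 distance).
        dists = ((curves[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 4: stop once the assignment no longer changes.
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 3: reset each centre to the mean curve of its group.
        for k in range(K):
            members = curves[labels == k]
            if len(members):
                centres[k] = members.mean(axis=0)
    return labels, centres
```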
Example 9 (simulation study). Consider K = 3 groups, each consisting of 50 APF_0 s associated to point processes consisting of 100 IID points, where each point x_i follows one of the following distributions P_1, P_2, and P_3 for groups 1, 2, and 3, respectively.

• P_1 (unit circle): x_i is a uniform point on C((0, 0), 1) perturbed by N_2(0.1)-noise.

• P_2 (two circles): x_i is a uniform point on C((−1, −1), 0.5) ∪ C((1, 1), 0.5) perturbed by N_2(0.1)-noise.

• P_3 (circle of radius 0.8): x_i is a uniform point on C((0, 0), 0.8) perturbed by N_2(0.1)-noise.

We start by simulating a realization of each of the 3 × 50 = 150 point processes. The left panel of Figure 14 shows one realization of each type of point process; it seems difficult to distinguish the underlying circles for groups 1 and 3, but the three APF_0 s associated to these three point patterns are in fact assigned to their correct groups. The middle and right panels of Figure 14 show the result of the K-means clustering algorithm. Here we are using the R-function "kmeans" for the K-means algorithm, and it takes only a few seconds when evaluating each A_i(m) at 2500 equidistant values of m between 0 and T = 0.5. As expected we see more overlap between the curves of the APF_0 s assigned to groups 1 and 3.

We next repeat the simulation of the 150 point processes 500 times. A clear distinction between the groups is obtained by the K-means algorithm applied for connected components: The percentage of wrongly assigned APF_0 s among the 500 × 3 × 50 = 75000 APF_0 s has an average of 4.5% and a standard deviation of 1.6%. The assignment error is in fact mostly caused by incorrect labelling of APF_0 s associated to P_1 or P_3. This is expected, as the underlying circles used in the definitions of P_1 and P_3 are rather close, whereas the underlying set in the definition of P_2 is different, with two connected components as represented by the jump at m ≈ 0.4 in the middle panel of Figure 14. Even better results are obtained when considering loops instead of connected components: The percentage of wrongly assigned APF_1 s among the 75000 APF_1 s has an average of 1.6% and a standard deviation of 1.0%. This is mainly due to the sets underlying P_1, P_2, and P_3 having distinctive loops that result in clearly distinct jumps in the APF_1 s, as seen in the right panel of Figure 14.

Figure 14: Left panel: Simulated example of the three point processes, each consisting of 100 IID points drawn from the distribution P_1 (black dots), P_2 (red triangles), or P_3 (blue crosses). Middle panel: The 150 APF_0 s obtained from the simulation of the 150 point processes associated to P_1, P_2, or P_3, where the colouring in black, red, or blue specifies whether the K-means algorithm assigns an APF_0 to the group associated to P_1, P_2, or P_3. Right panel: As the middle panel but for the 150 APF_1 s.

E.2 Supervised classification

Suppose we want to assign an APF_k to one of K different groups G_1, . . . , G_K of a training set, where G_i is a sample of r_i independent APF_k s A_{i1}, . . . , A_{ir_i}. For this purpose supervised classification methods for functional data may be adapted. We just consider a particular method by López-Pintado et al.
(2010): Suppose α ∈ [0, 1] and we believe that at least 100(1 − α)% of the APF_k s in each group are IID, whereas the remaining APF_k s in each group follow a different distribution and are considered as outliers (see Section 4.1). For a user-specified parameter T > 0 and i = 1, . . . , K, define the 100α%-trimmed mean Ā_i^α with respect to G_i as the mean function on [0, T] of the 100(1 − α)% APF_k s in G_i with the largest MBD_{r_i}, see (5). Assuming ∪_{i=1}^{K} G_i ⊂ L²([0, T]), an APF_k A ∈ L²([0, T]) is assigned to G_i if

  i = argmin_{j∈{1,...,K}} ‖Ā_j^α − A‖,   (12)

where ‖ · ‖ denotes the L²-distance. Here, the trimmed mean is used for robustness and allows a control over the curves we may like to omit because of outliers, but e.g. the median could have been used instead.

Example 10 (simulation study). Consider the following distributions P_1, P_1′, P_2, P_2′ for a point x_i.

• P_1 (unit circle): x_i is a uniform point on C((0, 0), 1) which is perturbed by N_2(0.1)-noise.

• P_1′ (two circles, radii 1 and 0.5): x_i is a uniform point on C((0, 0), 1) ∪ C((1.5, 1.5), 0.5) and perturbed by N_2(0.1)-noise.

• P_2 (circle of radius 0.8): x_i is a uniform point on C((0, 0), 0.8) which is perturbed by N_2(0.1)-noise.

• P_2′ (two circles, radii 0.8 and 0.5): x_i is a uniform point on C((0, 0), 0.8) ∪ C((1.5, 1.5), 0.5) and perturbed by N_2(0.1)-noise.

For k = 0, 1 we consider the following simulation study: K = 2 and r_1 = r_2 = 50; G_1 consists of 45 APF_k s associated to simulations of point processes consisting of 100 IID points with distribution P_1 (the non-outliers) and 5 APF_k s obtained in the same way but from P_1′ (the outliers); G_2 is specified in the same way as G_1 but replacing P_1 and P_1′ with P_2 and P_2′, respectively; and we have correctly specified that α = 0.2. Then we simulate 100 APF_k s associated to P_1 and 100 APF_k s associated to P_2, i.e. they are all non-outliers. Finally, we use (12) to assign each of these 200 APF_k s to either G_1 or G_2.

Figure 15: Top panels: The 20%-trimmed mean functions with respect to G_1 and G_2 when considering APF_0 s (left) and APF_1 s (right) obtained from the Delaunay-complex and based on 100 IID points following the distribution P_1 (solid curve) or P_2 (dotted curve). Bottom panels: Examples of point patterns with associated APF_0 s assigned to the wrong group, together with the circles of radius 0.8 and 1.

The top panels in Figure 15 show the 20%-trimmed means Ā_1^{0.2} and Ā_2^{0.2} when k = 0 (left) and k = 1 (right). The difference between the 20%-trimmed means is clearest when k = 1, and so we expect the assignment error to be lower in that case. In fact wrong assignments happen mainly when the support of P_1 or P_2 is not well covered by the point pattern, as illustrated in the bottom panels. Repeating this simulation study 500 times, the percentage of APF_0 s wrongly assigned among the 500 repetitions has a mean of 6.7% and a standard deviation of 1.7%, whereas for the APF_1 s the mean is 0.24% and the standard deviation is 0.43%. To investigate how the results depend on the radius of the smallest circle, we repeat everything but with radius 0.9 in place of 0.8 when defining the distributions P_2 and P_2′. Then for the APF_0 s, the proportion of wrong assignments has a mean of 23.2% and a standard deviation of 2.9%, and for the APF_1 s, a mean of 5.7% and a standard deviation of 1.9%. Similar to Example 9, the error is lowest when k = 1, and this is due to the largest lifetime of a loop.
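A minimal numpy sketch of the trimmed-mean classifier (12) as used in Example 10, again for APFs discretised on a common grid over [0, T]; the modified band depth computation of (5) is assumed to be available and is not reproduced here.

```python
import numpy as np

def trimmed_mean_apf(group, mbd, alpha):
    """100*alpha%-trimmed mean of a group of discretised APFs: the mean of the
    100*(1 - alpha)% curves with the largest modified band depth (MBD).

    group : (r_i, g) array of APFs on a common grid over [0, T]
    mbd   : length r_i array of MBD values for the curves in `group`
            (computed elsewhere, cf. (5))
    """
    group = np.asarray(group, dtype=float)
    keep = int(np.ceil((1.0 - alpha) * len(group)))
    deepest = np.argsort(mbd)[::-1][:keep]
    return group[deepest].mean(axis=0)

def assign_apf(a, trimmed_means, dm):
    """Rule (12): assign curve `a` to the group whose trimmed mean is closest
    in L2([0, T]); `dm` is the grid spacing used to approximate the integral."""
    dists = [np.sqrt(np.sum((np.asarray(a) - tm) ** 2) * dm) for tm in trimmed_means]
    return int(np.argmin(dists))
```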
F Proof of Theorem 4.1

The proof of Theorem 4.1 follows along similar lines as in Chazal et al. (2013) as soon as we have verified Lemma F.2 below. Note that the proof of Lemma F.2 is not covered by the approach in Chazal et al. (2013). We first need to recall the following definition, where B_T denotes the topological space of bounded real-valued Borel functions defined on [0, T], with its topology induced by the uniform norm.

Definition F.1. A sequence {X_r}_{r=1,2,...} of random elements in B_T converges in distribution to a random element X in B_T if for any bounded continuous function f : B_T → R, E f(X_r) converges to E f(X) as r → ∞.

Lemma F.2. Let the situation be as in Section 4.2. As r → ∞, √r (Ā_r − μ) converges in distribution towards a zero-mean Gaussian process on [0, T] with covariance function c(m, m′) = Cov(A_1(m), A_1(m′)), m, m′ ∈ [0, T].

Proof. We need some notation and to recall some concepts of empirical process theory. For D ∈ D_{k,T,nmax}, denote by A_D the APF_k of D. Let F = {f_m : 0 ≤ m ≤ T} be the class of functions f_m : D_{k,T,nmax} → [0, ∞) given by f_m(D) = A_D(m). To see the connection with empirical process theory, we consider

  G_r(f_m) = √r ( (1/r) ∑_{i=1}^{r} f_m(D_i) − μ(m) )

as an empirical process. Denote by ‖ · ‖ the L² norm on F with respect to the distribution of D_1, i.e. ‖f_m(·)‖² = E[A_{D_1}(m)²]. For u, v ∈ F, the bracket [u, v] is the set of all functions f ∈ F with u ≤ f ≤ v. For any ε > 0, N_[](ε, F, ‖ · ‖) is the smallest integer J ≥ 1 such that F ⊂ ∪_{j=1}^{J} [u_j, v_j] for some functions u_1, . . . , u_J and v_1, . . . , v_J with ‖v_j − u_j‖ ≤ ε for j = 1, . . . , J. We show below that ∫_0^1 √( log N_[](ε, F, ‖ · ‖) ) dε is finite. Then, by Theorem 19.5 in van der Vaart (2000), F is a so-called Donsker class, which implies the convergence in distribution of G_r(f_m) to a Gaussian process as in the statement of Lemma F.2.

For any sequence −∞ = t_1 < . . . < t_J = ∞ with J ≥ 2, for j = 1, . . . , J − 1, and for D = {(m_1, l_1, c_1), . . . , (m_n, l_n, c_n)} ∈ D_{k,T,nmax}, let u_j(D) = ∑_{i=1}^{n} c_i l_i 1(m_i ≤ t_j) and v_j(D) = ∑_{i=1}^{n} c_i l_i 1(m_i < t_{j+1}) (if n = 0, then D is empty and we set u_j(D) = v_j(D) = 0). Then, for any m ∈ [0, T], there exists a j = j(m) such that u_j(D) ≤ f_m(D) ≤ v_j(D), i.e. f_m(D) ∈ [u_j, v_j]. Consequently, F ⊂ ∪_{j=1}^{J−1} [u_j, v_j]. We prove now that for any ε ∈ (0, 1), the sequence {t_j}_{1≤j≤J} can be chosen such that for j = 1, . . . , J − 1, we have ‖v_j − u_j‖ ≤ ε.

Write D_1 = {(M_1, L_1, C_1), . . . , (M_N, L_N, C_N)}, where N is random and should not be confused with N in Sections 2.1 and A (if N = 0, then D_1 is empty). Let n ∈ {1, . . . , nmax} and, conditioned on N = n, let I be uniformly selected from {1, . . . , n}. Then

  E{ (v_j(D_1) − u_j(D_1))² 1(N = n) }
   = n² E{ 1(N = n) ( (1/n) ∑_{i=1}^{n} C_i L_i 1(M_i ∈ (t_j, t_{j+1})) )² }
   ≤ T² nmax⁴ E{ 1(N = n) 1(M_I ∈ (t_j, t_{j+1})) }
   ≤ T² nmax⁴ P( M_I ∈ (t_j, t_{j+1}) | N = n ),

as n ≤ nmax, C_i ≤ nmax, and L_i ≤ T. Further, E{ (v_j(D_1) − u_j(D_1))² 1(N = 0) } = 0. Hence

  E{ (v_j(D_1) − u_j(D_1))² } = ∑_{n=0}^{nmax} E{ (v_j(D_1) − u_j(D_1))² 1(N = n) }
   ≤ T² nmax⁵ max_{n=1,...,nmax} P( M_I ∈ (t_j, t_{j+1}) | N = n ).   (13)

Moreover, by Lemma F.3 below, there exists a finite sequence {t_{n,j}}_{1≤j≤J_n} such that P( M_I ∈ (t_{n,j}, t_{n,j+1}) | N = n ) ≤ ε² / (T² nmax⁵) and J_n ≤ 2 + T² nmax⁵ / ε².
Thus, by choosing {t_j}_{1≤j≤J} = ∪_{n=1,...,nmax} {t_{n,j}}_{1≤j≤J_n}, we have J ≤ 2nmax + T² nmax⁶ / ε² and

  max_{n=1,...,nmax} P( M_I ∈ (t_j, t_{j+1}) | N = n ) ≤ ε² / (T² nmax⁵).

Hence, by (13), ‖v_j − u_j‖ ≤ ε, and so by definition, N_[](ε, F, ‖ · ‖) ≤ 2nmax + T² nmax⁶ / ε². Therefore

  ∫_0^1 √( log N_[](ε, F, ‖ · ‖) ) dε ≤ ∫_0^1 √( log( 2nmax + T² nmax⁶ / ε² ) ) dε < ∞.

This completes the proof.

Proof of Theorem 4.1. By the Donsker property established in the proof of Lemma F.2 and Theorem 2.4 in Giné and Zinn (1990), √r (Ā_r − Ā*_r) and √r (Ā_r − μ) converge in distribution to the same process as r → ∞, so the quantile of sup_{m∈[0,T]} √r |Ā_r(m) − Ā*_r(m)| converges to the quantile of sup_{m∈[0,T]} √r |Ā_r(m) − μ(m)|. Therefore, q̂_α^B provides the bounds for the asymptotic 100(1 − α)%-confidence region stated in Theorem 4.1.

Lemma F.3. Let X be a positive random variable. For any ε ∈ (0, 1), there exists a finite sequence −∞ = t_1 < . . . < t_J = ∞ such that J ≤ 2 + 1/ε and, for j = 1, . . . , J − 1, P( X ∈ (t_j, t_{j+1}) ) ≤ ε.

Proof. Denote by F the cumulative distribution function of X, by F(t−) the left-sided limit of F at t ∈ R, and by F^{-1} the generalised inverse of F, i.e. F^{-1}(y) = inf{x ∈ R : F(x) ≥ y} for y ∈ R. We verify the lemma with J = 2 + ⌊1/ε⌋, t_J = ∞, and t_j = F^{-1}((j − 1)ε) for j = 1, . . . , J − 1. Then, for j = 1, . . . , J − 2,

  P( X ∈ (t_j, t_{j+1}) ) = F( F^{-1}(jε)− ) − F( F^{-1}((j − 1)ε) ) ≤ jε − (j − 1)ε = ε.

Finally,

  P( X ∈ (t_{J−1}, t_J) ) = P( X > F^{-1}((J − 2)ε) ) = 1 − F( F^{-1}((J − 2)ε) ) ≤ 1 − ⌊1/ε⌋ε < ε.

References

Baddeley, A., Rubak, E. & Turner, R. (2015). Spatial Point Patterns: Methodology and Applications with R. Chapman and Hall/CRC Press.

Baddeley, A. J. & Silverman, B. W. (1984). A cautionary example on the use of second-order methods for analyzing point patterns. Biometrics 40, 1089–1093.

Bendich, P., Marron, J., Miller, E., Pieloch, A. & Skwerer, S. (2016). Persistent homology analysis of brain artery trees. The Annals of Applied Statistics 10, 198–218.

Biscio, C. A. N. & Lavancier, F. (2016). Quantifying repulsiveness of determinantal point processes. Bernoulli 22, 2001–2028.

Bottou, L. & Bengio, Y. (1995). Convergence properties of the k-means algorithms. In: Advances in Neural Information Processing Systems, volume 7, MIT Press, 585–592.

Bubenik, P. (2015). Statistical topological data analysis using persistence landscapes. Journal of Machine Learning Research 16, 77–102.

Chazal, F., Cohen-Steiner, D., Guibas, L. J., Mémoli, F. & Oudot, S. Y. (2009). Gromov-Hausdorff stable signatures for shapes using persistence. Computer Graphics Forum 28, 1393–1403.

Chazal, F., Fasy, B., Lecci, F., Rinaldo, A., Singh, A. & Wasserman, L. (2013). On the bootstrap for persistence diagrams and landscapes. Modeling and Analysis of Information Systems 20, 111–120.

Chazal, F., Fasy, B. T., Lecci, F., Michel, B., Rinaldo, A. & Wasserman, L. (2014). Robust topological inference: Distance to a measure and kernel distance. Available on arXiv:1412.7197.

Chen, Y.-C., Wang, D., Rinaldo, A. & Wasserman, L. (2015). Statistical analysis of persistence intensity functions. Available on arXiv:1510.02502.

Daley, D. J. & Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes. Volume I: Elementary Theory and Methods. Springer-Verlag, New York, 2nd edition.

Edelsbrunner, H. & Harer, J. L. (2010). Computational Topology. American Mathematical Society, Providence, RI.

Fasy, B., Lecci, F., Rinaldo, A., Wasserman, L., Balakrishnan, S. & Singh, A. (2014).
Confidence sets for persistence diagrams. The Annals of Statistics 42, 2301–2339.

Giné, E. & Zinn, J. (1990). Bootstrapping general empirical measures. The Annals of Probability 18, 851–869.

Hartigan, J. A. & Wong, M. A. (1979). Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics) 28, 100–108.

Jacques, J. & Preda, C. (2014). Functional data clustering: a survey. Advances in Data Analysis and Classification 8, 231–255.

Lavancier, F., Møller, J. & Rubak, E. (2015). Determinantal point process models and statistical inference. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77, 853–877.

López-Pintado, S. & Romo, J. (2009). On the concept of depth for functional data. Journal of the American Statistical Association 104, 718–734.

López-Pintado, S., Romo, J. & Torrente, A. (2010). Robust depth-based tools for the analysis of gene expression data. Biostatistics 11, 254–264.

MacPherson, R. & Schweinhart, B. (2012). Measuring shape with topology. Journal of Mathematical Physics 53, 073516.

Matérn, B. (1986). Spatial Variation. Lecture Notes in Statistics 36, Springer-Verlag, Berlin.

Møller, J. & Waagepetersen, R. P. (2004). Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC, Boca Raton.

Møller, J. & Waagepetersen, R. P. (2016). Some recent developments in statistics for spatial point patterns. Annual Review of Statistics and Its Application. To appear.

Mrkvička, T., Myllymäki, M. & Hahn, U. (2016). Multiple Monte Carlo testing, with applications in spatial point processes. Statistics and Computing, 1–17.

Myllymäki, M., Mrkvička, T., Grabarnik, P., Seijo, H. & Hahn, U. (2016). Global envelope tests for spatial processes. Journal of the Royal Statistical Society: Series B (Statistical Methodology). Available on arXiv:1307.0239.

Præstgaard, J. T. (1995). Permutation and bootstrap Kolmogorov-Smirnov tests for the equality of two distributions. Scandinavian Journal of Statistics 22, 305–322.

Robins, V. & Turner, K. (2016). Principal component analysis of persistent homology rank functions with case studies of spatial point patterns, sphere packing and colloids. Physica D: Nonlinear Phenomena 334, 99–117.

Sun, Y. & Genton, M. G. (2011). Functional boxplots. Journal of Computational and Graphical Statistics 20, 316–334.

van der Vaart, A. W. (2000). Asymptotic Statistics. Cambridge University Press, Cambridge.

van der Vaart, A. W. & Wellner, J. A. (1996). Weak Convergence and Empirical Processes. Springer Series in Statistics, Springer-Verlag, New York.
Accepted by VLDB 2018

Model-Free Control for Distributed Stream Data Processing using Deep Reinforcement Learning

Teng Li, Zhiyuan Xu, Jian Tang and Yanzhi Wang
{tli01, zxu105, jtang02, ywang393}@syr.edu
Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY 13244

arXiv:1803.01016v1 [cs.DC] 2 Mar 2018

ABSTRACT

In this paper, we focus on general-purpose Distributed Stream Data Processing Systems (DSDPSs), which deal with processing of unbounded streams of continuous data at scale distributedly in real or near-real time. A fundamental problem in a DSDPS is the scheduling problem (i.e., assigning workload to workers/machines) with the objective of minimizing average end-to-end tuple processing time. A widely-used solution is to distribute workload evenly over machines in the cluster in a round-robin manner, which is obviously not efficient due to lack of consideration for communication delay. Model-based approaches (such as queueing theory) do not work well either due to the high complexity of the system environment. We aim to develop a novel model-free approach that can learn to well control a DSDPS from its experience rather than accurate and mathematically solvable system models, just as a human learns a skill (such as cooking, driving, swimming, etc.). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in DSDPSs; and present design, implementation and evaluation of a novel and highly effective DRL-based control framework, which minimizes average end-to-end tuple processing time by jointly learning the system environment via collecting very limited runtime statistics data and making decisions under the guidance of powerful Deep Neural Networks (DNNs). To validate and evaluate the proposed framework, we implemented it based on a widely-used DSDPS, Apache Storm, and tested it with three representative applications: continuous queries, log stream processing and word count (stream version). Extensive experimental results show: 1) Compared to Storm's default scheduler and the state-of-the-art model-based method, the proposed framework reduces average tuple processing time by 33.5% and 14.0% respectively on average. 2) The proposed framework can quickly reach a good scheduling solution during online learning, which justifies its practicability for online control in DSDPSs.

1. INTRODUCTION

In this paper, we focus on general-purpose Distributed Stream Data Processing Systems (DSDPSs) (such as Apache Storm [48] and Google's MillWheel [3]), which deal with processing of unbounded streams of continuous data at scale distributedly in real or near-real time. Their programming models and runtime systems are quite different from those of MapReduce-based batch processing systems, such as Hadoop [22] and Spark [46], which usually handle static big data in an offline manner. To fulfill the real or near-real time online processing requirements, average end-to-end tuple processing time (or simply average tuple processing time [52]) is the most important performance metric for a DSDPS. A fundamental problem in a DSDPS is the scheduling problem (i.e., assigning workload to workers/machines) with the objective of minimizing average tuple processing time. A widely-used solution is to distribute workload over machines in the cluster in a round-robin manner [48], which is obviously not efficient due to lack of consideration for communication delay among processes/machines.
We may be able to better solve this problem if we can accurately model the correlation between a solution and its objective value, i.e., predict/estimate average tuple processing time for a given scheduling solution. However, this is very hard and has not yet been well studied in the context of DSDPSs. In distributed batch processing systems (such as MapReduce-based systems), an individual task's completion time can be well estimated [53], and the scheduling problem can be well formulated into a Mixed Integer Linear Programming (MILP) problem with the objective of minimizing the makespan of a job [13, 55]. Then it can be tackled by optimally solving the MILP problem or using a fast polynomial-time approximation/heuristic algorithm. However, in a DSDPS, a task or an application never ends unless it is terminated by its user. A scheduling solution makes a significant impact on the average tuple processing time, but their relationship is very subtle and complicated. It does not even seem possible to have a mathematical programming formulation for the scheduling problem if its objective is to directly minimize average tuple processing time.

Queueing theory has been employed to model distributed stream database systems [34]. However, it does not work for DSDPSs due to the following reasons: 1) Queueing theory can only provide accurate estimations for queueing delay under a few strong assumptions (e.g., tuple arrivals follow a Poisson distribution, etc.), which, however, may not hold in a complex DSDPS. 2) In queueing theory, many problems in a queueing network (rather than a single queue) remain open, while a DSDPS represents a fairly complicated multi-point to multi-point queueing network, where tuples from a queue may be distributed to multiple downstream queues, and a queue may receive tuples from multiple different upstream queues.

In addition, a model-based scheduling framework has been proposed in a very recent work [25], which employs a model to estimate (end-to-end) tuple processing time by combining the delays at each component (including processing time at each Processing Unit (PU) and communication delay between two PUs) predicted by a supervised learning method, Support Vector Regression (SVR) [14]. However, this model-based approach suffers from three problems: 1) In a very complicated distributed computing environment such as a DSDPS, (end-to-end) tuple processing time (i.e., end-to-end delay) may be caused by many factors, which are not fully captured by the proposed model. 2) Prediction for each individual component may not be accurate. 3) A large amount of high-dimensional statistics data needs to be collected to build and update the model, which leads to high overhead.

Hence, we aim to develop a novel model-free approach that can learn to well control a DSDPS from its experience rather than from accurate and mathematically solvable system models, just as a human learns a skill (such as cooking, driving, or swimming). The recent breakthrough of Deep Reinforcement Learning (DRL) [33] provides a promising technique for enabling effective model-free control. DRL [33] (originally developed by the startup DeepMind) enables computers to learn to play games, including Atari 2600 video games and one of the most complicated games, Go (AlphaGo [45]), and beat the best human players.
Even though DRL has achieved tremendous success in game-playing, which usually has a limited action space (e.g., moving up/down/left/right), it has not yet been investigated how DRL can be leveraged for control problems in complex distributed computing systems, such as DSDPSs, which usually have sophisticated state and huge action spaces. We believe DRL is especially promising for control in DSDPSs because: 1) It has advantages over other dynamic system control techniques such as model-based predictive control in that it is model-free and does not rely on accurate and mathematically solvable system models (such as queueing models), thereby enhancing its applicability in complex systems with randomized behaviors. 2) It is capable of handling a sophisticated state space (as in AlphaGo [45]), which makes it more advantageous than traditional Reinforcement Learning (RL) [49]. 3) It is able to deal with time-variant environments such as varying system states and user demands.

However, direct application of the basic DRL technique, such as the Deep Q Network (DQN) based DRL proposed in the pioneering work [33], may not work well here, since it is only capable of handling control problems with a limited action space, whereas the control problems (such as scheduling) in a DSDPS usually have a sophisticated state space and a huge action space (worker threads, worker processes, virtual/physical machines, and their combinations); see Section 3.2 for greater details. Moreover, the existing DRL methods [33] usually need to collect big data (e.g., lots of images for the game-playing applications) for learning, which adds overhead and burden to an online system. Our goal is to develop a method that only needs to collect very limited statistics data during runtime.

In this paper, we aim to develop a novel and highly effective DRL-based model-free control framework for a DSDPS to minimize average data processing time by jointly learning the system environment with very limited runtime statistics data and making scheduling decisions under the guidance of powerful Deep Neural Networks (DNNs). We summarize our contributions in the following:

• We show that direct application of DQN-based DRL to scheduling in DSDPSs does not work well.

• We are the first to present a highly effective and practical DRL-based model-free control framework for scheduling in DSDPSs.

• We show via extensive experiments with three representative Stream Data Processing (SDP) applications that the proposed framework outperforms the current practice and the state-of-the-art.

To the best of our knowledge, we are the first to leverage DRL for enabling model-free control in DSDPSs. We aim to promote a simple and practical model-free approach based on emerging DRL, which, we believe, can be extended or modified to better control many other complex distributed computing systems.

The rest of the paper is organized as follows: We give a brief background introduction in Section 2. We then present design and implementation details of the proposed framework in Section 3. Experimental settings are described, and experimental results are presented and analyzed, in Section 4. We discuss related work in Section 5 and conclude the paper in Section 6.

2. BACKGROUND

In this section, we provide a brief background introduction to DSDPSs, Storm and DRL.

2.1 Distributed Stream Data Processing System (DSDPS)

In a DSDPS, a stream is an unbounded sequence of tuples. A data source reads data from external source(s) and emits streams into the system.
A Processing Unit (PU) consumes tuples from data sources or other PUs, and processes them using code provided by a user. After that, it can pass them on to other PUs for further processing. A DSDPS usually uses two levels of abstraction (logical and physical) to express parallelism. In the logical layer, an application is usually modeled as a directed graph, in which each vertex corresponds to a data source or a PU, and directed edges show how data tuples are routed among data sources/PUs. A task is an instance of a data source or PU, and each data source or PU can be executed as many parallel tasks on a cluster of machines. In the physical layer, a DSDPS usually includes a set of virtual or physical machines that actually process incoming data, and a master serving as the central control unit, which distributes user code around the cluster, schedules tasks, and monitors them for failures.

At runtime, an application graph is executed on multiple worker processes running on multiple (physical or virtual) machines. Each machine is usually configured to have multiple slots. The number of slots indicates the number of worker processes that can be run on this machine, and can be pre-configured by the cluster operator based on hardware constraints (such as the number of CPU cores). Each worker process occupies a slot and uses one or multiple threads to actually process data tuples using user code. Normally, a task is mapped to a thread at runtime (even though it does not have to be this way). Each machine also runs a daemon that listens for any work assigned to it by the master. In a DSDPS, a scheduling solution specifies how to assign threads to processes and machines. Many DSDPSs include a default scheduler but allow it to be replaced by a custom scheduler. The default scheduler usually uses a simple scheduling solution, which assigns threads to pre-configured processes and then assigns those processes to machines, both in a round-robin manner. This solution leads to an almost even distribution of workload over the available machines in the cluster.

In addition, a DSDPS usually supports several ways of grouping, which define how to distribute tuples among tasks. Typical grouping policies include: fields grouping (based on a key), shuffle grouping (random), all grouping (one-to-all) and global grouping (all-to-one). When the message ID of a tuple coming out of a spout successfully traverses the whole topology, a special acker is called to inform the originating data source that message processing is complete. The (end-to-end) tuple processing time is the duration between when the data source emits the tuple and when it has been acked (fully processed). Note that we are only interested in the processing times of those tuples emitted by data sources, since they reflect the total end-to-end processing delay over the whole application. To ensure fault tolerance, if a message ID is marked as failed due to acknowledgment timeout, data processing will be recovered by replaying the corresponding data source tuple. The master monitors heartbeat signals from all worker processes periodically. It re-schedules them when it discovers a failure.

2.2 Apache Storm

Since we implemented the proposed framework based on Apache Storm [48], we briefly introduce it here. Apache Storm is an open-source and fault-tolerant DSDPS, which has an architecture and programming model very similar to what is described above, and has been widely used by quite a few companies and institutes.
In Storm, data source, PU, application graph, master, worker process and worker thread are called spout, bolt, topology, Nimbus, worker and executor, respectively. Storm uses ZooKeeper [54] as a coordination service to maintain its own mutable configuration (such as the scheduling solution), naming, and distributed synchronization among machines. All configurations stored in ZooKeeper are organized in a tree structure. Nimbus (i.e., the master) provides interfaces to fetch or update Storm's mutable configurations. A Storm topology contains a topology-specific configuration, which is loaded before the topology starts and does not change during runtime.

2.3 Deep Reinforcement Learning (DRL)

A DRL agent can be trained via both offline training and online deep Q-learning [33, 45]. It usually adopts a DNN (known as a DQN) to derive the correlation between each state-action pair (s, a) of the system under control and its value function Q(s, a), which is the expected cumulative (discounted) reward when the system starts at state s and follows action a (and a certain policy thereafter). Q(s, a) is given as:

  Q(s, a) = E[ ∑_{t=0}^{∞} λ^t r_t(s_t, a_t) | s_0 = s, a_0 = a ],   (2.1)

where r_t(·) is the reward and λ < 1 is the discount factor. The offline training needs to accumulate enough samples of value estimates and the corresponding state-action pairs (s, a) for constructing a sufficiently accurate DNN, using either a model-based (mathematical model) procedure or actual measurement data (model-free) [33]. For example, in game-playing applications [33], this procedure includes preprocessing game-playing samples, and obtaining state transition samples and Q-value estimates (e.g., win/lose and/or the score achieved). Deep Q-learning is adopted for the online learning and dynamic control based on the offline-built DNN. More specifically, at each decision epoch t, the system under control is at a state s_t. The DRL agent performs inference using the DNN to select action a_t, either the one with the highest Q-value estimate, or with a certain degree of randomness using the ε-greedy policy [41].

Using a neural network (or even a DNN) as a function approximator in RL is known to suffer from instability or even divergence. Hence, experience replay and the target network were introduced in [33] to improve stability. A DRL agent updates the DNN with a mini-batch from the experience replay buffer [33], which stores state transition samples collected during training. Compared to using only immediately collected samples, uniformly sampling from the replay buffer allows the DRL agent to break the correlation between sequentially generated samples, and to learn from more independently and identically distributed past experiences, which is required by most training algorithms, such as Stochastic Gradient Descent (SGD). So the use of the experience replay buffer can smooth out learning and avoid oscillations or divergence. Besides, a DRL agent uses a separate target network (with the same structure as the original DNN) to estimate target values for training the DNN. Its parameters are slowly updated every C > 1 epochs and are held fixed between individual updates.

The DQN-based DRL only works for control problems with a low-dimensional discrete action space. Continuous control has often been tackled by the actor-critic-based policy gradient approach [44]. The traditional actor-critic approach can also be extended to use a DNN (such as a DQN) to guide decision making [26].
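To make the machinery above concrete, here is a minimal PyTorch sketch of a DQN update using an experience replay buffer and a target network. It is a generic illustration of the technique just described, not the implementation used in this paper; the network sizes and hyperparameters are arbitrary.

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Simple MLP mapping a state vector to one Q-value per discrete action."""
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_actions),
        )

    def forward(self, s):
        return self.net(s)

def dqn_update(q_net, target_net, optimizer, replay_buffer, batch_size=32, gamma=0.99):
    """One SGD step on a mini-batch drawn uniformly from the replay buffer.

    replay_buffer holds transition samples (s_t, a_t, r_t, s_{t+1}).
    Every C epochs the caller should copy the online weights into the target
    network: target_net.load_state_dict(q_net.state_dict()).
    """
    batch = random.sample(replay_buffer, batch_size)
    s, a, r, s_next = zip(*batch)
    s, s_next = torch.stack(s), torch.stack(s_next)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)            # Q(s_t, a_t)
    with torch.no_grad():                                           # target from the
        target = r + gamma * target_net(s_next).max(dim=1).values   # frozen network
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```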
A recent work [26] from DeepMind introduced an actor-critic method, called Deep Deterministic Policy Gradient (DDPG), for continuous control. The basic idea is to maintain a parameterized actor function and a parameterized critic function. The critic function can be implemented using the above DQN, which returns the Q value for a given state-action pair. The actor function can also be implemented using a DNN, which specifies the current policy by mapping a state to a specific action. Both the experience replay and the target network introduced above can also be integrated into this approach to ensure stability.

3. DESIGN AND IMPLEMENTATION OF THE PROPOSED FRAMEWORK

In this section, we present the design and implementation details of the proposed framework.

3.1 Overview

Figure 1: The architecture of the proposed DRL-based control framework.

We illustrate the proposed framework in Figure 1, which can be viewed as having two parts: the DSDPS and the DRL-based Control. The architecture is fairly simple and clean, and consists of the following components: 1) DRL Agent (Section 3.2): It is the core of the proposed framework, which takes the state as input, applies a DRL-based method to generate a scheduling solution, and pushes it to the custom scheduler. 2) Database: It stores transition samples including state, action and reward information for training (see Section 3.2 for details). 3) Custom Scheduler: It deploys the generated scheduling solution on the DSDPS via the master.

Our design leads to the following desirable features: 1) Model-free Control: Our design employs a DRL-based method for control, which learns to control a DSDPS from runtime statistics data without relying on any mathematically solvable system model. 2) Highly Effective Control: The proposed DRL-based control is guided by DNNs, aiming to directly minimize average tuple processing time. Note that the current practice evenly distributes workload over machines, and some existing methods aim to achieve an indirect goal (e.g., minimizing inter-machine traffic load [52]), with the hope that it can lead to minimum average tuple processing time. These solutions are obviously less convincing and effective than the proposed approach. 3) Low Control Overhead: The proposed framework only needs to collect very limited statistics data, i.e., just the average tuple processing time, during runtime for offline training and online learning (see explanations in Section 3.2), which leads to low control overhead. 4) Hot Swapping of Control Algorithms: The core component of the proposed framework, the DRL agent, is external to the DSDPS, which ensures minimum modifications to the DSDPS, and more importantly, makes it possible to replace it or its algorithm at runtime without shutting down the DSDPS. 5) Transparent to DSDPS Users: The proposed framework is completely transparent to DSDPS users, i.e., a user does not have to make any change to his/her code in order to run his/her application on the new DSDPS with the proposed framework.

We implemented the proposed framework based on Apache Storm [48]. In our implementation, the custom scheduler runs within Nimbus, which has access to various information regarding executors, supervisors and slots. A socket is implemented for communications between the custom scheduler and the DRL agent.
When an action is generated by the DRL agent, it is translated to a Storm-recognizable scheduling solution and pushed to the custom scheduler. Upon receiving a scheduling solution, the custom scheduler first frees the executors that need to be re-assigned and then adds them to the slots of the target machines. Note that during the deployment of a new scheduling solution, we try to make a minimal impact on the DSDPS by only re-assigning those executors whose assignments are different from before while keeping the rest untouched (instead of deploying the new scheduling solution from scratch by freeing all the executors first and assigning executors one by one, as Storm normally does). In this way, we can reduce overhead and make the system re-stabilize quickly. In addition, to ensure accurate data collection, after a scheduling solution is applied, the proposed framework waits for a few minutes until the system re-stabilizes to collect the average tuple processing time, and it takes the average of 5 consecutive measurements with a 10-second interval.

3.2 DRL-based Control

In this section, we present the proposed DRL-based control, which aims at minimizing the end-to-end average tuple processing time via scheduling. Given a set of machines M, a set of processes P, and a set of threads N, a scheduling problem in a DSDPS is to assign each thread to a process of a machine, i.e., to find two mappings: N → P and P → M. It has been shown [52] that assigning threads from an application (which usually exchange data quite often) to more than one process on a machine introduces inter-process traffic, which leads to serious performance degradation. Hence, similar to [52, 25], our design ensures that on every machine, threads from the same application are assigned to only one process. Therefore the above two mappings can be merged into just one mapping: N → M, i.e., assigning each thread to a machine. Let a scheduling solution be X = <x_ij>, i ∈ {1, · · · , N}, j ∈ {1, · · · , M}, where x_ij = 1 if thread i is assigned to machine j, and N and M are the numbers of threads and machines, respectively. Different scheduling solutions lead to different tuple processing and transfer delays at/between tasks at runtime and thus different end-to-end tuple processing times [25]. We aim to find a scheduling solution that minimizes the average end-to-end tuple processing time.

We have described how DRL basically works in Section 2.3. Here, we discuss how to apply DRL to solving the above scheduling problem in a DSDPS. We first define the state space, action space and reward.

State Space: A state s = (X, w) consists of two parts: the scheduling solution X, i.e., the current assignment of executors, and the workload w, which includes the tuple arrival rate (i.e., the number of tuples per second) of each data source. The state space is denoted as S. The workload is included in the state to achieve better adaptivity and sensitivity to the incoming workload, which has been validated by our experimental results.

Action Space: An action is defined as a = <a_ij>, ∀i ∈ {1, ..., N}, ∀j ∈ {1, ..., M}, where ∑_{j=1}^{M} a_ij = 1, ∀i, and a_ij = 1 means assigning thread i to machine j. The action space A is the space that contains all feasible actions. Note that the constraints ∑_{j=1}^{M} a_ij = 1, ∀i ensure that each thread i can only be assigned to a single machine, and the size of the action space is |A| = M^N. Note that an action can be easily translated to its corresponding scheduling solution.

Reward: The reward is simply defined to be the negative average tuple processing time, so that the objective of the DRL agent is to maximize the reward.

Note that the design of the state space, action space and reward is critical to the success of a DRL method. In our case, the action space and reward are straightforward. However, there are many different ways of defining the state space, because a DSDPS includes various runtime information (features) [25], e.g., CPU/memory/network usages of machines, workload at each executor/process/machine, average tuple processing delay at each executor/PU, tuple transfer delay between executors/PUs, etc. We, however, choose a simple and clean way in our design. We tried to add additional system runtime information into the state but found that it does not necessarily lead to performance improvement. Different scheduling solutions lead to different values for the above features and eventually different average end-to-end tuple processing times, and the tuple arrival rates reflect the incoming workload. This information turns out to be sufficient for representing the runtime system state. We observed that based on our design, the proposed DNNs can well model the correlation between the state and the average end-to-end tuple processing time (reward) after training.

A straightforward way to apply DRL to solving the scheduling problem is to directly use the DQN-based method proposed in the pioneering work [33]. The DQN-based method uses a value iteration approach, in which the value function Q = Q(s, a; θ) is a parameterized function (with parameters θ) that takes state s and the action space A as input and returns a Q value for each action a ∈ A. Then we can use a greedy method to make an action selection:

  π_Q(s) = argmax_{a∈A} Q(s, a; θ).   (3.1)
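As a rough sketch of how (3.1) is combined with the ε-greedy policy of Section 2.3, the snippet below scores a finite set of candidate actions with a Q network and picks one. The encoding of states and actions and the network interface are illustrative assumptions, not the paper's implementation; how the candidate set is restricted is discussed next.

```python
import random
import torch

def select_action(q_net, state, candidate_actions, epsilon):
    """epsilon-greedy selection following (3.1): with probability 1 - epsilon,
    pick the candidate action with the highest estimated Q(s, a; theta),
    otherwise pick a candidate uniformly at random.

    q_net             : assumed to map a concatenated (state, action) vector
                        to a scalar Q estimate (one common parameterisation;
                        a network emitting one value per action works equally well)
    state             : encoded state (X, w) as a flat tensor
    candidate_actions : list of encoded feasible actions
    Returns the index of the chosen candidate.
    """
    if random.random() < epsilon:
        return random.randrange(len(candidate_actions))
    with torch.no_grad():
        scores = torch.stack([q_net(torch.cat([state, a])).squeeze()
                              for a in candidate_actions])
    return int(scores.argmax())
```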
Reward: The reward is simply defined to be the negative average tuple processing time, so that the objective of the DRL agent is to maximize the reward. Note that the design of the state space, action space and reward is critical to the success of a DRL method. In our case, the action space and reward are straightforward. However, there are many different ways to define the state space because a DSDPS exposes various kinds of runtime information (features) [25], e.g., CPU/memory/network usage of machines, workload at each executor/process/machine, average tuple processing delay at each executor/PU, tuple transfer delay between executors/PUs, etc. We, however, choose a simple and clean way in our design. We tried adding additional system runtime information to the state but found that it does not necessarily lead to performance improvement. Different scheduling solutions lead to different values for the above features and eventually different average end-to-end tuple processing times, and the tuple arrival rates reflect the incoming workload. This information turns out to be sufficient for representing the runtime system state. We observed that, based on our design, the proposed DNNs can well model the correlation between the state and the average end-to-end tuple processing time (reward) after training.

A straightforward way to apply DRL to solving the scheduling problem is to directly use the DQN-based method proposed in the pioneering work [33]. The DQN-based method uses a value iteration approach, in which the value function Q = Q(s, a; θ) is a parameterized function (with parameters θ) that takes state s and the action space A as input and returns a Q value for each action a ∈ A. Then we can use a greedy method to make an action selection:

$$\pi_Q(s) = \operatorname{argmax}_{a \in \mathcal{A}} Q(s, a; \theta). \qquad (3.1)$$

If we want to apply the DQN-based method here, we need to restrict the exponentially large action space A described above to a polynomial-time searchable space. The most natural way to achieve this is to restrict each action to assigning only one thread to a machine. In this way, the size of the action space can be significantly reduced to |A| = N × M, which obviously can be searched in polynomial time. Specifically, as mentioned above, in the offline training phase, we can collect enough samples of rewards and the corresponding state-action pairs for constructing a sufficiently accurate DQN, using a model-free method that deploys a randomly-generated scheduling solution (state) and collects and records the corresponding average tuple processing time (reward). Then in the online learning phase, at each decision epoch t, the DRL agent obtains the estimated Q value from the DQN for each a_t with the input of the current state s_t. Then the ǫ-greedy policy [41] is applied to select the action a_t according to the current state s_t: with probability (1 − ǫ), the action with the highest estimated Q value is chosen; otherwise, an action is randomly selected with probability ǫ. After observing the immediate reward r_t and the next state s_{t+1}, a state transition sample (s_t, a_t, r_t, s_{t+1}) is stored into the experience replay buffer. At every decision epoch, the DQN is updated with a mini-batch of collected samples from the experience replay buffer using SGD. Although this DQN-based method can provide solutions to the scheduling problem and does achieve model-free control for DSDPSs, it faces the following issue. On the one hand, as described above, its time complexity grows linearly with |A|, which demands an action space with a very limited size.
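Before turning to the trade-off this restriction creates, the restricted action space and the ǫ-greedy selection step just described can be sketched as follows. This is an illustrative Python sketch only; `q_network` is assumed to map a (state, action) pair to a scalar Q estimate.

```python
import random
import numpy as np

def restricted_actions(num_threads, num_machines):
    """Restricted action space: each action moves exactly one thread i to one machine j."""
    return [(i, j) for i in range(num_threads) for j in range(num_machines)]

def epsilon_greedy(q_network, state, actions, epsilon):
    if random.random() < epsilon:
        return random.choice(actions)                      # explore
    q_values = [q_network(state, a) for a in actions]      # |A| = N * M evaluations
    return actions[int(np.argmax(q_values))]               # exploit: highest estimated Q value
```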
On the other hand, restricting the action space may result in limited exploration of the entire exponentially large action space and thus suboptimal or even poor solutions to the scheduling problem, especially for large cases. The experimental results in Section 4 validate this claim.

3.2.1 The Actor-critic-based Method for Scheduling

In this section, we present a method that can better explore the action space while keeping the time complexity at a reasonable level.

[Figure 2: The actor-critic-based method]

We leverage some advanced RL techniques, including the actor-critic method [49, 6] and the deterministic policy gradient [44], for solving the scheduling problem. Note that since these techniques only provide general design frameworks, we still need to come up with a specific solution to the problem studied here. The basic idea of the proposed scheduling method is illustrated in Figure 2, which includes three major components: 1) an actor network that takes the state as input and returns a proto-action â, 2) an optimizer that finds the set A_K of K Nearest Neighbors (K-NN) of â in the action space, and 3) a critic network that takes the state and A_K as input and returns a Q value for each action a ∈ A_K. Then the action with the highest Q value is selected for execution. The basic design philosophy of this method is similar to that of a rounding algorithm, which finds a continuous solution by solving a relaxed version of the original integer (i.e., discrete) problem instance and then rounds the continuous solution to a "close" feasible integer solution that hopefully offers an objective value close to the optimal.

Specifically, the actor network $f(s; \theta^{\pi}) = \hat{a}$ is a function parameterized by $\theta^{\pi}$, with $f : \mathcal{S} \mapsto \mathbb{R}^{M \times N}$. $\hat{a}$ is returned as a proto-action that takes continuous values, so $\hat{a} \notin \mathcal{A}$. In our design, we use a 2-layer fully-connected feedforward neural network to serve as the actor network, which includes 64 and 32 neurons in the first and second layers respectively and uses the hyperbolic tangent function tanh(·) for activation. Note that we chose this activation function because our empirical testing showed it works better than the other commonly-used activation functions.

The hardest part is to find the K-NN of the proto-action, which has not been well discussed in related work before. Even though finding the K-NN can easily be done in linear time, the input size is M^N here, which could be a huge number even in a small cluster. Hence, enumerating all actions in A and doing a linear search to find the K-NN may take an exponentially long time. We introduce an optimizer, which finds the K-NN by solving a series of Mixed-Integer Quadratic Programming (MIQP) problems presented in the following:

MIQP-NN:
$$\min_{a} \;\; \|a - \hat{a}\|_2^2$$
$$\text{s.t.} \quad \sum_{j=1}^{M} a_{ij} = 1, \;\forall i \in \{1, \cdots, N\}; \qquad a_{ij} \in \{0, 1\}, \;\forall i \in \{1, \cdots, N\}, \forall j \in \{1, \cdots, M\}. \qquad (3.2)$$

In this formulation, the objective is to find the action a that is the nearest neighbor of the proto-action â. The constraints ensure that action a is a feasible action, i.e., a ∈ A. To find the K-NN, the MIQP-NN problem needs to be solved iteratively K times. Each time, one of the K-NN of the proto-action is returned and the corresponding values ⟨a_ij⟩ are fixed; then the MIQP-NN problem is updated and solved again to obtain the next nearest neighbor, until all of the K-NN are obtained. Note that this simple MIQP problem can usually be solved efficiently by a solver as long as the input size is not too large. In our tests, our MIQP-NN problem instances were all solved very quickly (within 10ms on a regular desktop) by the Gurobi Optimizer [19]. For very large cases, the MIQP-NN problem can be relaxed to a convex programming problem [11] and a rounding algorithm can be used to obtain approximate solutions.

The set A_K of K-NN actions is further passed to the critic network to select the action. The critic network $Q(s, a; \theta^{Q})$ is a function parameterized by $\theta^{Q}$, which returns a Q value for each action a ∈ A_K, just like the above DQN. The action can then be selected as follows:

$$\pi_Q(s) = \operatorname{argmax}_{a \in \mathcal{A}_K} Q(s, a; \theta^{Q}). \qquad (3.3)$$

Similar to the actor network f(·), we employ a 2-layer fully-connected feedforward neural network to serve as the critic network, which includes 64 and 32 neurons in the first and second layers respectively and uses the hyperbolic tangent function tanh(·) for activation. Note that the two DNNs (one for the actor network and one for the critic network) are jointly trained using the collected samples.

Algorithm 1 The actor-critic-based method for scheduling
1: Randomly initialize the critic network Q(·) and the actor network f(·) with weights θ^Q and θ^π respectively;
2: Initialize the target networks Q′ and f′ with weights θ^{Q′} ← θ^Q, θ^{π′} ← θ^π;
3: Initialize the experience replay buffer B;
/**Offline Training**/
4: Load the historical transition samples into B, and train the actor and critic networks offline;
/**Online Learning**/
5: Initialize a random process R for exploration;
6: Receive an initial observed state s_1;
/**Decision Epoch**/
7: for t = 1 to T do
8:   Derive the proto-action â from the actor network f(·);
9:   Apply the exploration policy to â: R(â) = â + ǫI;
10:  Find the K-NN actions A_K of â by solving a series of MIQP-NN problems (described above);
11:  Select action a_t = argmax_{a ∈ A_K} Q(s_t, a);
12:  Execute action a_t by deploying the corresponding scheduling solution, and observe the reward r_t;
13:  Store the transition sample (s_t, a_t, r_t, s_{t+1}) into B;
14:  Sample a random mini-batch of H transition samples (s_i, a_i, r_i, s_{i+1}) from B;
15:  y_i := r_i + γ max_{a ∈ A_{i+1,K}} Q′(s_{i+1}, a), ∀i ∈ {1, · · · , H}, where A_{i+1,K} is the set of K-NN of f′(s_{i+1});
16:  Update the critic network Q(·) by minimizing the loss: L(θ^Q) = (1/H) Σ_{i=1}^{H} [y_i − Q(s_i, a_i)]^2;
17:  Update the weights θ^π of the actor network f(·) using the sampled gradient: ∇_{θ^π} f ≈ (1/H) Σ_{i=1}^{H} ∇_{â} Q(s, â)|_{â=f(s_i)} · ∇_{θ^π} f(s)|_{s_i};
18:  Update the corresponding target networks: θ^{Q′} := τ θ^Q + (1 − τ) θ^{Q′}; θ^{π′} := τ θ^π + (1 − τ) θ^{π′};
19: end for

We formally present the actor-critic-based method for scheduling as Algorithm 1. First, the algorithm randomly initializes all the weights θ^π of the actor network f(·) and θ^Q of the critic network Q(·) (line 1). If we directly used the actor and critic networks to generate the training target values ⟨y_i⟩ (line 15), training might suffer from instability and divergence, as shown in [6]. Thus, as in [6, 26], we create target networks to improve training stability. The target networks are clones of the original actor and critic networks, but the weights of the target networks θ^{Q′} and θ^{π′} are updated slowly, which is controlled by a parameter τ. In our implementation, we set τ = 0.01.
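The K-NN retrieval via the MIQP-NN problem (3.2) can be sketched as below using the Gurobi Optimizer mentioned in the text. The code itself is our illustration: in particular, the "no-good" exclusion cut is our assumption about how the step of fixing already-found ⟨a_ij⟩ values and re-solving can be realized; the paper does not spell this out.

```python
import gurobipy as gp
from gurobipy import GRB

def knn_actions(a_hat, K):
    """a_hat: N x M list of proto-action values; returns up to K feasible 0/1 actions."""
    N, M = len(a_hat), len(a_hat[0])
    found = []
    for _ in range(K):
        m = gp.Model("miqp_nn")
        m.Params.OutputFlag = 0
        a = m.addVars(N, M, vtype=GRB.BINARY, name="a")
        # Objective: squared Euclidean distance to the proto-action (3.2).
        m.setObjective(gp.quicksum((a[i, j] - a_hat[i][j]) * (a[i, j] - a_hat[i][j])
                                   for i in range(N) for j in range(M)), GRB.MINIMIZE)
        # Each thread is assigned to exactly one machine.
        m.addConstrs((a.sum(i, "*") == 1 for i in range(N)))
        # Exclude actions found in earlier iterations (assumed no-good cuts).
        for prev in found:
            m.addConstr(gp.quicksum(a[i, j] for (i, j) in prev) <= N - 1)
        m.optimize()
        if m.Status != GRB.OPTIMAL:
            break
        found.append({(i, j) for i in range(N) for j in range(M) if a[i, j].X > 0.5})
    # Each element of `found` is a set of (thread, machine) assignments, in order of distance.
    return found
```

Each solve returns the closest feasible assignment that has not been returned before, so after K iterations `found` holds the K-NN actions A_K that are passed to the critic network.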
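Two small pieces of bookkeeping in Algorithm 1 are the replay buffer (lines 3 and 13-14) and the soft target-network update just described (line 18). The following minimal sketch uses the parameter values stated in the text (|B| = 1000, H = 32, τ = 0.01); everything else, including the flat-weight-list representation, is illustrative.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)   # oldest samples are discarded when full

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def soft_update(target_weights, online_weights, tau=0.01):
    """theta' := tau * theta + (1 - tau) * theta', element-wise over flat weight lists."""
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(online_weights, target_weights)]
```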
To robustly train the actor and critic networks, we adopt the experience replay buffer B [33]. Instead of training the networks using the transition sample collected at each decision epoch t (lines 8 to 12), we first store the sample into the replay buffer B and then randomly select a mini-batch of transition samples from B to train the actor and critic networks. Note that since the size of B is limited, the oldest sample is discarded when B is full. The sizes of the replay buffer and mini-batch were set to |B| = 1000 and H = 32 respectively in our implementation. The online exploration policy (line 9) is constructed as R(â) = â + ǫI, where ǫ is an adjustable parameter, just like the ǫ in the ǫ-greedy method [41], which determines the probability of adding a random noise to the proto-action rather than taking the action derived from the actor network. ǫ decreases with the decision epoch t, which means that with more training, more derived actions (rather than random ones) are taken. In this way, ǫ trades off exploration and exploitation. The parameter I is a uniformly distributed random noise, each element of which was set to a random number in [0, 1] in our implementation. The critic network Q(·) is trained with the mini-batch samples from B as mentioned above. For every transition sample (s_i, a_i, r_i, s_{i+1}) in the mini-batch, we first obtain the proto-action â_{i+1} of the next state s_{i+1} from the target actor network f′(s_{i+1}); second, we find the K-NN actions A_{i+1,K} of the proto-action â_{i+1} by solving a series of MIQP-NN problems presented above; then we obtain the highest Q value from the target critic network, max_{a ∈ A_{i+1,K}} Q′(s_{i+1}, a). To train the critic network Q(s_i, a_i), the target value y_i for input s_i and a_i is given by the sum of the immediate reward r_i and the discounted max Q value (line 15). The discount factor is γ = 0.99 in our implementation. A common loss function L(·) is used to train the critic network (line 16). The actor network f(·) is trained by the deterministic policy gradient method [44] (line 17). The gradient is calculated by the chain rule to obtain the expected return from the transition samples in the mini-batch with respect to the weights θ^π of the actor network. The actor and critic networks can be pre-trained with historical transition samples, so usually the offline training (line 4) is performed first, which is almost the same as online learning (lines 13-18). In our implementation, we first collected 10,000 transition samples with random actions for each experimental setup and then pre-trained the actor and critic networks offline. In this way, we can explore more possible states and actions and significantly speed up online learning.

4. PERFORMANCE EVALUATION

In this section, we describe the experimental setup, followed by experimental results and analysis.

4.1 Experimental Setup

We implemented the proposed DRL-based control framework over Apache Storm [48] (obtained from Storm's repository at the Apache Software Foundation) and installed the system on top of Ubuntu Linux 12.04. We also used Google's TensorFlow [50] to implement and train the DNNs. For performance evaluation, we conducted experiments on a cluster in our data center. The Storm cluster consists of 11 IBM blade servers (1 for Nimbus and 10 for worker machines) connected by a 1Gbps network, each with an Intel Xeon Quad-Core 2.0GHz CPU and 4GB memory. Each worker machine was configured to have 10 slots.

We implemented three popular and representative SDP applications (called topologies in Storm) to test the proposed framework: continuous queries, log stream processing and word count (stream version), which are described in the following. These applications were also used for performance evaluation in [25], which presented the state-of-the-art model-based approach; thus using them ensures fair comparisons.

Continuous Queries Topology (Figure 3): This topology represents a popular application on Storm. It is a select query that works by initializing access to a database table created in memory and looping over each row to check if there is a hit [8]. It consists of a spout and two bolts. Randomly generated queries are emitted continuously by the spout and sent to a Query bolt. The database tables are placed in the memory of worker machines. After taking queries from the spout, the Query bolt iterates over the database table to check if there is any matching record. The Query bolt emits the matching record to the last bolt, named the File bolt, which writes matching records into a file. In our experiments, a database table with vehicle plates and their owners' information, including their names and SSNs, was randomly generated. We also randomly generated queries to search the database table for owners of speeding vehicles, while vehicle speeds were randomly generated and attached to every entry. To perform a comprehensive evaluation, we came up with 3 different setups for this topology: small-scale, medium-scale and large-scale. In the small-scale experiment, a total of 20 executors were created, including 2 spout executors, 9 Query bolt executors and 9 File bolt executors. In the medium-scale experiment, we had 50 executors in total, including 5 spout executors, 25 Query bolt executors and 20 File bolt executors. For the large-scale experiment, we had a total of 100 executors, including 10 spout executors, 45 Query bolt executors and 45 File bolt executors.

[Figure 3: Continuous Queries Topology]

Log Stream Processing Topology (Figure 4): Being one of the most popular applications for Storm, this topology uses an open-source log agent called LogStash [29] to read data from log files. Log lines are submitted by LogStash as separate JSON values into a Redis [39] queue, which emits the output to the spout. We used Microsoft IIS log files collected from computers at our university as the input data. The LogRules bolt performs rule-based analysis on the log stream, delivering values containing a specific type of log entry instance. The results are simultaneously delivered to two separate bolts: one is the Indexer bolt performing index actions and the other is the Counter bolt performing counting actions on the log entries. For testing purposes, we slightly revised the original topology to include two more bolts, two Database bolts, after the Indexer and Counter bolts respectively. They store the results into separate collections in a Mongo database for verification purposes. In our experiment, the topology was configured to have a total of 100 executors, including 10 spout executors, 20 LogRules bolt executors, 20 Indexer bolt executors, 20 Counter bolt executors and 15 executors for each Database bolt.
[Figure 4: Log Stream Processing Topology]

Word Count Topology (stream version) (Figure 5): The original version of this topology is widely known as a classical MapReduce application that counts every word's number of appearances in one or multiple files. The stream version used in our experiments runs a similar routine but with a stream data source. This topology consists of one spout and three bolts with a chain-like structure. LogStash [29] was used to read data from input source files. LogStash submits input file lines as separate JSON values into a Redis [39] queue, which are consumed and emitted into the topology. The text file of Alice's Adventures in Wonderland [1] was used as the input file. When the input file is pushed into the Redis queue, the spout produces a data stream which is first directed to the SplitSentence bolt, which splits each input line into individual words and further sends them to the WordCount bolt. This bolt then counts the number of appearances using fields grouping. The Database bolt finally stores the results into a Mongo database. In the experiment, the topology was configured to have a total of 100 executors, including 10 spout executors, 30 SplitSentence bolt executors, 30 WordCount executors and 30 Database bolt executors.

[Figure 5: Word Count Topology (stream version)]

4.2 Experimental Results and Analysis

In this section, we present and analyze the experimental results. To justify the effectiveness of our design, we compared the proposed DRL-based control framework with the actor-critic-based method (labeled as "Actor-critic-based DRL") against the default scheduler of Storm (labeled as "Default") and the state-of-the-art model-based method proposed in a very recent paper [25] (labeled as "Model-based") in terms of average (end-to-end) tuple processing time. Moreover, we included the straightforward DQN-based DRL method (described in Section 3.2) in the comparisons (labeled as "DQN-based DRL"). For the proposed actor-critic-based DRL method and the DQN-based DRL method, both offline training and online learning were performed to train the DNNs to reach certain scheduling solutions, which were then deployed to the Storm cluster described above. The figures presented in the following show the average tuple processing time corresponding to the scheduling solutions given by all these methods over a period of 20 minutes. In addition, we show the performance of the two DRL methods over the online learning procedure in terms of the reward. For illustration and comparison purposes, we normalize and smooth the reward values using the commonly-used normalization $(r - r_{\min})/(r_{\max} - r_{\min})$ (where $r$ is the actual reward, and $r_{\min}$ and $r_{\max}$ are the minimum and maximum rewards during online learning respectively) and the well-known forward-backward filtering algorithm [20] respectively. Note that in Figures 6, 8 and 10, time 0 is the time when a scheduling solution given by a well-trained DRL agent is deployed in Storm. It usually takes a little while (10-20 minutes) for the system to gradually stabilize after a new scheduling solution is deployed. This process is quite smooth, which has also been shown in [25].
So these figures do not show the performance of the DRL methods during the training processes, but the performance of the scheduling solutions given by well-trained DRL agents. The rewards given by the DRL agents during their online learning processes are shown in Figures 7, 9 and 11. This process usually involves large fluctuations, which have also been shown in other DRL-related works such as [23, 26].

Continuous Queries Topology (Figure 3): We present the corresponding experimental results in Figures 6 and 7. As mentioned above, we performed experiments on this topology using three setups: small-scale, medium-scale and large-scale, whose corresponding settings are described in the last subsection.

[Figure 6: Average tuple processing time over the continuous queries topology — (a) Small-scale, (b) Medium-scale, (c) Large-scale]
[Figure 7: Normalized reward over the continuous queries topology (large-scale)]

From Figure 6, we can see that for all 3 setups and all four methods, after a scheduling solution is deployed, the average tuple processing time decreases and stabilizes at a lower value (compared to the initial one) after a short period of 8-10 minutes. Specifically, in Figure 6(a) (small-scale), if the default scheduler is used, it starts at 3.71ms and stabilizes at 1.96ms; if the model-based method is employed, it starts at 3.22ms and stabilizes at 1.46ms; if the DQN-based DRL method is used, it starts at 3.20ms and stabilizes at 1.54ms; and if the actor-critic-based DRL method is applied, it starts at 2.94ms and stabilizes at 1.33ms. In this case, the actor-critic-based DRL method reduces the average tuple processing time by 31.4% compared to the default scheduler and by 9.5% compared to the model-based method. The DQN-based DRL method performs slightly worse than the model-based method. From Figure 6(b) (medium-scale), we can see that the average tuple processing times given by all the methods go up slightly. Specifically, if the default scheduler is used, it stabilizes at 2.08ms; if the model-based method is employed, it stabilizes at 1.61ms; if the DQN-based DRL method is used, it stabilizes at 1.59ms; and if the actor-critic-based DRL method is applied, it stabilizes at 1.43ms. Hence, in this case, the actor-critic-based DRL method achieves a performance improvement of 31.2% over the default scheduler and 11.2% over the model-based method. The performance of the DQN-based DRL method is still comparable to the model-based method. From Figure 6(c) (large-scale), we can observe that the average tuple processing times given by all the methods increase further but still stabilize at reasonable values, which essentially shows that the Storm cluster undertakes a heavier workload but has not been overloaded in this large-scale case.
Specifically, if the default scheduler is used, it stabilizes at 2.64ms; if the model-based method is employed, it stabilizes at 2.12ms; if the DQN-based DRL method is used, it stabilizes at 2.45ms; and if the actor-critic-based DRL method is applied, it stabilizes at 1.72ms. In this case, the actor-critic-based DRL method achieves a more significant performance improvement of 34.8% over the default scheduler and 18.9% over the model-based method. In addition, the performance of the DQN-based DRL method is noticeably worse than that of the model-based method. In summary, we can make the following observations: 1) The proposed actor-critic-based DRL method consistently outperforms all the other three methods, which well justifies the effectiveness of the proposed model-free approach for control problems in DSDPSs. 2) The performance improvement (over the default scheduler and the model-based method) offered by the proposed actor-critic-based DRL method becomes more and more significant with the increase of the input size, which shows that the proposed model-free method works even better when the distributed computing environment becomes more sophisticated. 3) Direct application of the DQN-based DRL method does not work well, especially in the large case. This method lacks a carefully-designed mechanism (such as the proposed MIQP-based mechanism presented in Section 3.2.1) that can fully explore the action space and make a wise action selection. Hence, in large cases with huge action spaces, random selection of actions may lead to suboptimal or even poor decisions.

We further explore how the two DRL methods behave during online learning by showing how the normalized reward varies over time within T = 2000 decision epochs in Figure 7. We performed this experiment using the large-scale setup described above. From this figure, we can observe that both methods start from similar initial reward values. The DQN-based DRL method keeps fluctuating during the entire procedure and ends at an average reward value of 0.44 (the average over the last 200 epochs), while the actor-critic-based DRL method experiences some fluctuations initially and then gradually climbs to a higher value. More importantly, the actor-critic-based DRL method consistently offers higher rewards compared to the DQN-based method during online learning. These results further confirm the superiority of the proposed method during online learning. Moreover, we find that even in this large-scale case, the proposed actor-critic-based DRL method can quickly reach a good scheduling solution (whose performance has been discussed above) without going through a really long online learning procedure (e.g., several million epochs) as reported for other applications [33]. This justifies the practicability of the proposed DRL method for online control in DSDPSs.

Log Stream Processing Topology (Figure 4): We performed a large-scale experiment over the log stream processing topology, whose settings have also been discussed in the last subsection. We show the corresponding results in Figures 8 and 9. This topology is more complicated than the previous continuous queries topology, which leads to a longer average tuple processing time no matter which method is used.
[Figure 8: Average tuple processing time over the log processing topology (large-scale)]
[Figure 9: Normalized reward over the log processing topology (large-scale)]
[Figure 10: Average tuple processing time over the word count topology (large-scale)]
[Figure 11: Normalized reward over the word count topology (large-scale)]

In Figure 8, if the default scheduler is used, it stabilizes at 9.61ms; if the model-based method is employed, it stabilizes at 7.91ms; if the DQN-based DRL method is used, it stabilizes at 8.19ms; and if the actor-critic-based DRL method is applied, it stabilizes at 7.20ms. As expected, the proposed actor-critic-based DRL method consistently outperforms the other three methods. Specifically, it reduces the average tuple processing time by 25.1% compared to the default scheduler and 9.0% compared to the model-based method. Furthermore, the DQN-based DRL method performs worse than the model-based method, which is consistent with the results related to the continuous queries topology. Similarly, we show how the normalized reward varies over time within T = 1500 decision epochs in Figure 9. From this figure, we can make a similar observation that the actor-critic-based DRL method consistently leads to higher rewards compared to the DQN-based method during online learning. Obviously, after a short period of online learning, the proposed actor-critic-based DRL method can reach a good scheduling solution (discussed above) in this topology too.

Word Count Topology (stream version) (Figure 5): We performed a large-scale experiment over the word count (stream version) topology and show the corresponding results in Figures 10 and 11. Since the complexity of this topology is similar to that of the continuous queries topology, all four methods give similar average tuple processing times. In Figure 10, if the default scheduler is used, it stabilizes at 3.10ms; if the model-based method is employed, it stabilizes at 2.16ms; if the DQN-based DRL method is used, it stabilizes at 2.29ms; and if the actor-critic-based DRL method is applied, it stabilizes at 1.70ms. The actor-critic-based DRL method results in a performance improvement of 45.2% over the default scheduler and 21.3% over the model-based method. The performance of the DQN-based DRL method is still noticeably worse than the model-based method. We also show the performance of the two DRL methods during online learning in Figure 11, from which we can make observations similar to those related to the first two topologies.
In addition, we compared the proposed model-free method with the model-based method under significant workload changes on the 3 different topologies (large-scale).

[Figure 12: Average tuple processing time over 3 different topologies (large-scale) under significant workload changes — (a) Continuous queries topology, (b) Log stream processing topology, (c) Word count topology]

For the continuous queries topology (large-scale), we can see from Figure 12(a) that when the workload is increased by 50% at the 20-minute mark, the average tuple processing time of the actor-critic-based DRL method rises sharply to a relatively high value and then gradually stabilizes at 1.76ms, while that of the model-based method rises sharply too and then stabilizes at 2.17ms. The spikes are caused by the adjustment of the scheduling solution. However, we can observe that once the system stabilizes, the proposed method leads to a very minor increase in the average tuple processing time. Hence, it is sensitive to the workload change and can quickly adjust its scheduling solution accordingly to avoid performance degradation. This result well justifies the robustness of the proposed method in a highly dynamic environment with significant workload changes. Moreover, during the whole period, we can see that the proposed method still consistently outperforms the model-based method. We can make similar observations for the other two topologies from Figures 12(b)-12(c).

5. RELATED WORK

In this section, we provide a comprehensive review of related systems and research efforts.

Distributed Stream Data Processing Systems (DSDPS): Recently, a few general-purpose distributed systems have been developed particularly for SDP in a large cluster or in the cloud. Storm [48] is an open-source, distributed and fault-tolerant system that was designed particularly for processing unbounded streams of data in real or near-real time. Storm provides a directed-graph-based model for programming as well as a mechanism to guarantee message processing. S4 [42] is another general-purpose DSDPS that has a similar programming model and runtime system. MillWheel [3] is a platform designed for building low-latency SDP applications, which has been used at Google. A user specifies a directed computation graph and application code for individual nodes, and the system manages persistent states and continuous flows of data, all within the envelope of the framework's fault-tolerance guarantees. TimeStream [38] is a distributed system developed by Microsoft specifically for low-latency continuous processing of stream data in the cloud. It employs a powerful new abstraction called resilient substitution that caters to specific needs in this new computation model to handle failure recovery and dynamic reconfiguration in response to load changes. Spark Streaming [47] is a DSDPS that uses the core Spark API and its fast scheduling capability to perform streaming analytics. It ingests data in mini-batches and performs RDD transformations on those mini-batches of data. Other DSDPSs include C-MR [7], Apache Flink [16], Google's FlumeJava [12], M3 [2], WalmartLab's Muppet [24], IBM's System S [32] and Apache Samza [43].
Modeling and scheduling have also been studied in the context of DSDPSs and distributed stream database systems. In an early work [34], Nicola and Jarke provided a survey of performance models for distributed and replicated database systems, especially queueing-theory-based models. In [30], scheduling strategies were proposed for a data stream management system, aiming at minimizing the tuple latency and total memory requirement. In [51], Wei et al. proposed a prediction-based Quality-of-Service (QoS) management scheme for periodic queries over dynamic data streams, featuring query workload estimators that predict the query workload using execution time profiling and input data sampling. The authors of [40] described a decentralized framework for proactively predicting and alleviating hot-spots in SDP applications in real time. In [5], Aniello et al. presented both offline and online schedulers for Storm. The offline scheduler analyzes the topology structure and makes scheduling decisions; the online scheduler continuously monitors system performance and reschedules executors at runtime to improve overall performance. Xu et al. presented T-Storm in [52], a traffic-aware scheduling framework that minimizes inter-node and inter-process traffic in Storm while ensuring that no worker node is overloaded, and that enables fine-grained control over worker node consolidation. In [9], Bedini et al. presented a set of models that characterize a practical DSDPS using the Actor Model theory. They also presented an experimental validation of the proposed models using Storm. In [10], a general and simple technique was presented to design and implement priority-based resource scheduling in flow-graph-based DSDPSs by allowing application developers to augment flow graphs with priority metadata and by introducing an extensive set of priority schemas. In [31], a novel technique was proposed for resource usage estimation of SDP workloads in the cloud, which uses mixture density networks, a combined structure of neural networks and mixture models, to estimate the whole spectrum of resource usage as probability density functions. In a very recent work [25], Li et al. presented a topology-aware method to accurately predict the average tuple processing time for a given scheduling solution, according to the topology of the application graph and runtime statistics. For scheduling, they presented an effective algorithm to assign threads to machines under the guidance of the proposed prediction model. Unlike these works, we aim to develop a model-free control framework for scheduling in DSDPSs that directly minimizes the average tuple processing time by leveraging the state-of-the-art DRL techniques, which has not been done before.

Deep Reinforcement Learning (DRL): DRL has recently attracted extensive attention from both industry and academia. In a pioneering work [33], Mnih et al. proposed the Deep Q Network (DQN), which can learn successful policies directly from high-dimensional sensory inputs. This work bridges the gap between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging gaming tasks. The authors of [23] proposed double Q-learning as a specific adaptation to the DQN; it was introduced in a tabular setting and can be generalized to work with large-scale function approximation. The paper [17] considered a problem of multiple agents sensing and acting with the goal of maximizing their shared utility, based on the DQN.
The authors designed agents that can learn communication protocols to share information needed for accomplishing tasks. In order to further extend DRL to address continuous actions and large discrete action spaces, Duan et al. [15] presented a benchmark suite of control tasks to quantify progress in the domain of continuous control; and Lillicrap et al. [26] proposed an actor-critic, model-free algorithm based on the policy gradient that can operate over continuous action spaces. Gu et al. [18] presented normalized advantage functions to reduce the sample complexity of DRL for continuous tasks. Arnold et al. [6] extended the methods proposed for continuous actions to make decisions within a large discrete action space. In [36], the authors employed a DRL framework to jointly learn state representations and action policies using game rewards as feedback for text-based games, where the action space contains all possible text descriptions. It remains unknown if and how the emerging DRL techniques can be applied to solving complicated control and resource allocation problems in DSDPSs. In addition, RL/DRL has been applied to the control of big data and cloud systems. In [35], the authors proposed a novel MapReduce scheduler for heterogeneous environments based on RL, which observes the system state of task execution and suggests speculative re-execution of the slower tasks on other available nodes in the cluster. An RL approach was proposed in [37] to enable automated tuning of MapReduce configuration parameters. Liu et al. [27] proposed a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems with DRL. The control problems in MapReduce and cloud systems are quite different from the problem studied here. Hence, the methods proposed in these related works cannot be directly applied here.

6. CONCLUSIONS

In this paper, we investigated a novel model-free approach that can learn to control a DSDPS well from its experience rather than from accurate and mathematically solvable system models, just as a human learns a skill. We presented the design, implementation and evaluation of a novel and highly effective DRL-based control framework for DSDPSs, which minimizes the average end-to-end tuple processing time. The proposed framework enables model-free control by jointly learning the system environment with very limited runtime statistics data and making decisions under the guidance of two DNNs, an actor network and a critic network. We implemented it based on Apache Storm and tested it with three representative applications: continuous queries, log stream processing and word count (stream version). Extensive experimental results well justified the effectiveness of our design, which showed: 1) The proposed framework achieves a performance improvement of 33.5% over Storm's default scheduler and 14.0% over the state-of-the-art model-based method on average. 2) The proposed DRL-based framework can quickly reach a good scheduling solution during online learning.

7. ACKNOWLEDGEMENT

This research was supported in part by AFOSR grants FA9550-16-1-0077 and FA9550-16-1-0340. The information reported here does not reflect the position or the policy of the federal government.

8. REFERENCES

[1] Alice's Adventures in Wonderland, http://www.gutenberg.org/files/11/11-pdf.pdf
[2] A.M. Aly, A. Sallam, B.M. Gnanasekaran, L. Nguyen-Dinh, W.G. Aref, M. Ouzzani, and A.
Ghafoor, M3: Stream Processing on Main-Memory MapReduce, Proceedings of IEEE ICDE'2012, pp. 1253–1256, 2012.
[3] T. Akidau, A. Balikov, K. Bejiroglu, S. Chernyak, J. Haberman, R. Lax, S. McVeety, D. Mills, P. Nordstrom and S. Whittle, MillWheel: fault-tolerant stream processing at internet scale, PVLDB, 6(11): 1033–1044, 2013.
[4] Q. Anderson, Storm real-time processing cookbook, PACKT Publishing, 2013.
[5] L. Aniello, R. Baldoni and L. Querzoni, Adaptive online scheduling in Storm, Proceedings of ACM DEBS'2013.
[6] G. D. Arnold, R. Evans, H. v. Hasselt, P. Sunehag, T. Lillicrap, J. Hunt, T. Mann, T. Weber, T. Degris and B. Coppin, Deep reinforcement learning in large discrete action spaces, arXiv:1512.07679, 2016.
[7] N. Backman, K. Pattabiraman, R. Fonseca and U. Cetintemel, C-MR: continuously executing MapReduce workflows on multi-core processors, Proceedings of MapReduce'12.
[8] P. Bakkum and K. Skadron, Accelerating SQL database operations on a GPU with CUDA, Proceedings of the 3rd Workshop on General-Purpose Computation on GPU (GPGPU'10), pp. 94–103.
[9] I. Bedini, S. Sakr, B. Theeten, A. Sala and P. Cogan, Modeling performance of a parallel streaming engine: bridging theory and costs, Proceedings of IEEE ICPE'2013, pp. 173–184.
[10] P. Bellavista, A. Corradi, A. Reale and N. Ticca, Priority-based resource scheduling in distributed stream processing systems for big data applications, Proceedings of IEEE/ACM International Conference on Utility and Cloud Computing, 2014, pp. 363–370.
[11] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[12] C. Chambers, A. Raniwala, F. Perry, S. Adams, R. Henry, R. Bradshaw and N. Weizenbaum, FlumeJava: easy, efficient data-parallel pipelines, Proceedings of ACM PLDI'2010, pp. 363–375.
[13] F. Chen, M. Kodialam and T. V. Lakshman, Joint scheduling of processing and shuffle phases in MapReduce systems, Proceedings of IEEE Infocom'2012, pp. 1143–1151.
[14] H. Drucker, C. Burges, L. Kaufman, A. Smola and V. Vapnik, Support vector regression machines, Proceedings of NIPS'1996, pp. 155–161.
[15] Y. Duan, X. Chen, R. Houthooft, J. Schulman and P. Abbeel, Benchmarking deep reinforcement learning for continuous control, Proceedings of ICML'2016.
[16] Apache Flink, https://flink.apache.org/
[17] J. N. Foerster, Y. M. Assael, N. d. Freitas and S. Whiteson, Learning to communicate with deep multi-agent reinforcement learning, Proceedings of NIPS'2016.
[18] S. Gu, T. Lillicrap, I. Sutskever and S. Levine, Continuous deep Q-learning with model-based acceleration, Proceedings of ICML'2016.
[19] Gurobi Optimizer, http://www.gurobi.com/
[20] F. Gustafsson, Determining the initial states in forward-backward filtering, IEEE Transactions on Signal Processing, Vol. 44, No. 4, 1996, pp. 988–992.
[21] J. Han, M. Kamber and J. Pei, Data Mining: Concepts and Techniques (3rd Edition), Morgan Kaufmann, 2011.
[22] Apache Hadoop, http://hadoop.apache.org/
[23] H. v. Hasselt, A. Guez and D. Silver, Deep reinforcement learning with double Q-learning, Proceedings of AAAI'2016.
[24] W. Lam, L. Liu, S. Prasad, A. Rajaraman, Z. Vacheri and A. Doan, Muppet: MapReduce-style processing of fast data, PVLDB, 5(12): 1814–1825, 2012.
[25] T. Li, J. Tang and J. Xu, Performance modeling and predictive scheduling for distributed stream data processing, IEEE Transactions on Big Data, 2(4): 353–364, 2016.
[26] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver and D.
Wierstra, Continuous control with deep reinforcement learning, Proceedings of ICLR'2016.
[27] N. Liu, Z. Li, J. Xu, Z. Xu, S. Lin, Q. Qiu, J. Tang and Y. Wang, A hierarchical framework of cloud resource allocation and power management using deep reinforcement learning, Proceedings of ICDCS'2017.
[28] S. Loesing, M. Hentschel, T. Kraska and D. Kossmann, Stormy: an elastic and highly available streaming service in the cloud, Proceedings of the 2012 ACM Joint EDBT/ICDT Workshops, pp. 55–60.
[29] Logstash - Open Source Log Management, http://logstash.net/
[30] Q. Jiang and S. Chakravarthy, Scheduling strategies for a data stream management system, Technical Report CSE-2003-30.
[31] A. Khoshkbarforoushha, R. Ranjan, R. Gaire, P. P. Jayaraman, J. Hosking and E. Abbasnejad, Resource usage estimation of data stream processing workloads in datacenter clouds, 2015, http://arxiv.org/abs/1501.07020.
[32] V. Kumar, H. Andrade, B. Gedik and K.-L. Wu, DEDUCE: at the intersection of MapReduce and stream processing, Proceedings of ACM EDBT'2010, pp. 657–662.
[33] V. Mnih, et al., Human-level control through deep reinforcement learning, Nature, 518(7540): 529–533, 2015.
[34] M. Nicola and M. Jarke, Performance modeling of distributed and replicated databases, IEEE Transactions on Knowledge and Data Engineering, 12(4): 645–672, 2000.
[35] N. Naik, A. Negi and V. Sastry, Performance improvement of MapReduce framework in heterogeneous context using reinforcement learning, Procedia Computer Science, 50: 169–175, 2015.
[36] K. Narasimhan, T. D. Kulkarni and R. Barzilay, Language understanding for text-based games using deep reinforcement learning, Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
[37] C. Peng, C. Zhang, C. Peng and J. Man, A reinforcement learning approach to map reduce auto-configuration under networked environment, International Journal of Security and Networks, 12(3): 135–140, 2017.
[38] Z. Qian, et al., TimeStream: reliable stream computation in the cloud, Proceedings of EuroSys'2013.
[39] Redis, http://redis.io
[40] T. Repantis and V. Kalogeraki, Hot-spot prediction and alleviation in distributed stream processing applications, Proceedings of IEEE DSN'2008, pp. 346–355.
[41] M. Restelli, Reinforcement learning - exploration vs exploitation, 2015, http://home.deib.polimi.it/restelli/MyWebSite/pdf/rl5.pdf
[42] Apache S4, http://incubator.apache.org/s4/
[43] Apache Samza, http://samza.apache.org/
[44] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra and M. Riedmiller, Deterministic policy gradient algorithms, Proceedings of ICML'2014.
[45] D. Silver, et al., Mastering the game of Go with deep neural networks and tree search, Nature, 529: 484–489, 2016.
[46] Apache Spark, http://spark.apache.org/
[47] Spark Streaming — Apache Spark, http://spark.apache.org/streaming/
[48] Apache Storm, http://storm.apache.org/
[49] R. Sutton and A. Barto, Reinforcement learning: an introduction, MIT Press, Cambridge, 1998.
[50] TensorFlow, https://www.tensorflow.org/
[51] Y. Wei, V. Prasad, S. Son and J. Stankovic, Prediction-based QoS management for real-time data streams, Proceedings of IEEE RTSS'2006.
[52] J. Xu, Z. Chen, J. Tang and S. Su, T-Storm: traffic-aware online scheduling in Storm, Proceedings of IEEE ICDCS'2014, pp. 535–544.
[53] M. Zaharia, A. Konwinski, A. D. Joseph, R. Katz and I. Stoica, Improving MapReduce performance in heterogeneous environments, Proceedings of OSDI'2008.
[54] Apache Zookeeper, https://zookeeper.apache.org/
[55] Y. Zhu, et al., Minimizing makespan and total completion time in MapReduce-like systems, Proceedings of IEEE Infocom'2014, pp. 2166–2174.
Spatial Symmetry Driven Pruning Strategies for Efficient Declarative Spatial Reasoning

Carl Schultz¹,³ and Mehul Bhatt²,³
¹ Institute for Geoinformatics, University of Münster, Germany
² Department of Computer Science, University of Bremen, Germany
³ The DesignSpace Group
arXiv:1506.04945v1, 16 Jun 2015

Abstract. Declarative spatial reasoning denotes the ability to (declaratively) specify and solve real-world problems related to geometric and qualitative spatial representation and reasoning within standard knowledge representation and reasoning (KR) based methods (e.g., logic programming and derivatives). One approach for encoding the semantics of spatial relations within a declarative programming framework is by systems of polynomial constraints. However, solving such constraints is computationally intractable in general (i.e. the theory of real-closed fields). We present a new algorithm, implemented within the declarative spatial reasoning system CLP(QS), that drastically improves the performance of deciding the consistency of spatial constraint graphs over conventional polynomial encodings. We develop pruning strategies founded on spatial symmetries that form equivalence classes (based on affine transformations) at the qualitative spatial level. Moreover, pruning strategies are themselves formalised as knowledge about the properties of space and spatial symmetries. We evaluate our algorithm using a range of benchmarks in the class of contact problems, and proofs in mereology and geometry. The empirical results show that CLP(QS) with knowledge-based spatial pruning outperforms conventional polynomial encodings by orders of magnitude, and can thus be applied to problems that are otherwise unsolvable in practice.

Keywords: Declarative Spatial Reasoning, Geometric Reasoning, Logic Programming, Knowledge Representation and Reasoning

1 Introduction

Knowledge representation and reasoning (KR) about space may be formally interpreted within diverse frameworks such as: (a) analytically founded geometric reasoning & constructive (solid) geometry [21, 27, 29]; (b) relational algebraic semantics of 'qualitative spatial calculi' [24]; and (c) axiomatically constructed formal systems of mereotopology and mereogeometry [1]. Independent of formal semantics, commonsense spatio-linguistic abstractions offer a human-centred and cognitively adequate mechanism for logic-based automated reasoning about spatio-temporal information [5].

Declarative Spatial Reasoning. In recent years, declarative spatial reasoning has been developed as a high-level commonsense spatial reasoning paradigm aimed at (declaratively) specifying and solving real-world problems related to geometric and qualitative spatial representation and reasoning [4]. A particular manifestation of this paradigm is the constraint logic programming based CLP(QS) spatial reasoning system [4, 33, 34] (Section 2).

Relational Algebraic Qualitative Spatial Reasoning. The state of the art in qualitative spatial reasoning using relational algebraic methods [24] has resulted in prototypical algorithms and black-box systems that do not integrate with KR languages, such as those dealing with semantics and conceptual knowledge necessary for handling background knowledge, action & change, relational learning, rule-based systems etc. Furthermore, relation algebraic qualitative spatial reasoning (e.g.
LR [25]), while efficient, is incomplete in general [22, 23, 24].⁴ Alternatively, constraint logic programming based systems such as CLP(QS) [4] and others (see [9, 10, 18, 20, 28, 29]) adopt an analytic geometry approach where spatial relations are encoded as systems of polynomial constraints;⁵ while these methods are sound and complete (see Section 2.2), they have prohibitive computational complexity, $O(c_1^{2^{c_2 n}})$ in the number of polynomial variables n, meaning that even relatively simple problems are not solved within a practical amount of time via "naive" or direct encodings as polynomial constraints, i.e. encodings that lack common-sense knowledge about spatial objects and relations. On the other hand, highly efficient and specialised geometric theorem provers (e.g. [12]) and geometric constraint solvers (e.g. [17, 27]) exist. However, these provers exhibit highly specialised and restricted spatial languages⁶ and lack (a) direct integration with more general AI methods and (b) the capacity for incorporating modular common-sense rules about space in an extensible domain- and context-specific manner.

⁴ Incompleteness refers to the inability of a spatial reasoning method to determine whether a given network of qualitative spatial constraints is consistent or inconsistent in general. Relation-algebraic spatial reasoning (i.e. using algebraic closure based on weak composition) has been shown to be incomplete for a number of spatial languages and cannot guarantee consistency in general, e.g. relative directions [23] and containment relations between linearly ordered intervals [22], Theorem 5.9.
⁵ We emphasise that this analytic geometry approach that we also adopt is not qualitative spatial reasoning in the relation algebraic sense; the foundations are similar (i.e. employing a finite language of spatial relations that are interpreted as infinite sets of configurations, determining consistency in the complete absence of numeric information, and so on) but the methods for determining consistency etc. come from different branches of spatial reasoning.
⁶ Standard geometric constraint languages of approaches including [12, 17, 27] consist of points, lines, circles, ellipses, and coincidence, tangency, perpendicularity, parallelism, and numerical dimension constraints; note the absence of e.g. mereotopology and "common-sense" relative orientation relations [35].

The aims and contributions of the research presented in this paper are two-fold:

1. to further develop a KR-centered declarative spatial reasoning paradigm such that spatial reasoning capabilities are available and accessible within AI programs and application areas, and may be seamlessly integrated with other AI methods dealing with representation, reasoning, and learning about non-spatial aspects;

2. to demonstrate that, in spite of high computational complexity in a general and domain-independent case, the power of analytic geometry (in particular, polynomial systems for encoding the semantics of spatial relations) can be exploited by systematically utilising commonsense knowledge about spatial objects and relationships at the qualitative level.

We present a new algorithm that drastically improves analytic spatial reasoning performance within KR-based declarative spatial reasoning approaches by identifying and pruning spatial symmetries that form equivalence classes (based on affine transformations) at the qualitative spatial level.
By exploiting symmetries our approach utilises powerful underlying, but computationally expensive, polynomial solvers in a significantly more effective manner. Our algorithm is simple to implement, and enables spatial reasoners to solve problems that are otherwise unsolvable using analytic or relation algebraic methods. We emphasise that our approach is independent of any particular polynomial constraint solver; it can be similarly applied over a range of solvers such as CLP(R), SMTs, and specialised geometric constraint solvers that have been integrated into a KR framework. In addition to AI / commonsense reasoning application areas such as design, GIS, vision, and robotics [3, 5, 6, 7], we also address applications in automating support for proving the validity of theorems in mereotopology, orientation, shape, etc. (e.g. [8, 36]). Building on such foundational capabilities, another outreach is in the area of computer-aided learning systems in mathematics (e.g. at a high-school level). For instance, consider Proposition 9, Book I of Euclid's Elements, where the task is to bisect an angle using only an unmarked ruler and collapsible compass. Once a student has developed what they believe to be a constructive proof, they can employ declarative spatial reasoners to formally verify that their construction applies to all possible inputs (i.e. all possible angles) and manipulate an interactive sketch that maintains the specified spatial relations (i.e. dynamic geometry [17]). A further area of interest is verifying the entries of composition tables that are used in relation algebraic qualitative spatial reasoning [30]: given spatial objects a, b, c ∈ U, composition "look up" tables are indexed by pairs of (base) relations $R_1^{ab}$, $R_2^{bc}$ and return disjunctions of possible (base) relations $R_3^{ac}$. For each entry, the task is to prove $\exists a, b, c \in U \,(R_1^{ab} \wedge R_2^{bc} \wedge R_3^{ac})$ for only those base relations $R_3$ in the entry's disjunction.

2 Declarative Spatial Reasoning with CLP(QS)

Declarative spatial reasoning denotes the ability of declarative programming frameworks in AI to handle spatial objects and the spatial relationships amongst them as native entities, e.g., as is possible with concrete domains of Integers, Reals and Inequality relationships. The objective is to enable points, oriented points, directed line-segments, regions, and topological and orientation relationships amongst them as first-class entities within declarative frameworks in AI [4].

2.1 Examples of Declarative Spatial Reasoning with CLP(QS)

With a focus on spatial question-answering, the CLP(QS) spatial reasoning system [4, 33, 34] provides a practical manifestation of certain aspects of the declarative spatial reasoning paradigm in the context of constraint logic programming (CLP).⁷ CLP(QS) utilises a high-level language of spatial relations and commonsense knowledge about how various spatial domains behave. Such relations describe sets of object configurations, i.e. qualitative spatial relations such as coincident, left of, or partially overlapping. Through this deep integration of spatial reasoning with KR-based frameworks, the long-term research agenda is to seamlessly provide spatial reasoning in other AI tasks such as planning, non-monotonic reasoning, and ontological reasoning [4]. What follows is a brief illustration of the spatial Q/A capability supported by CLP(QS).

⁷ Spatial Reasoning (CLP(QS)): www.spatial-reasoning.com

EXAMPLE A. Massachusetts Comprehensive Assessment System (MCAS) Grade 3 Mathematics (2009), Question 12.
2 Declarative Spatial Reasoning with CLP(QS)

Declarative spatial reasoning denotes the ability of declarative programming frameworks in AI to handle spatial objects and the spatial relationships amongst them as native entities, e.g., as is possible with concrete domains of Integers, Reals and Inequality relationships. The objective is to enable points, oriented points, directed line-segments, regions, and topological and orientation relationships amongst them as first-class entities within declarative frameworks in AI [4].

2.1 Examples of Declarative Spatial Reasoning with CLP(QS)

With a focus on spatial question-answering, the CLP(QS) spatial reasoning system [4, 33, 34] provides a practical manifestation of certain aspects of the declarative spatial reasoning paradigm in the context of constraint logic programming (CLP).7 CLP(QS) utilises a high-level language of spatial relations and commonsense knowledge about how various spatial domains behave. Such relations describe sets of object configurations, i.e. qualitative spatial relations such as coincident, left of, or partially overlapping. Through this deep integration of spatial reasoning with KR-based frameworks, the long-term research agenda is to seamlessly provide spatial reasoning in other AI tasks such as planning, non-monotonic reasoning, and ontological reasoning [4]. What follows is a brief illustration of the spatial Q/A capability supported by CLP(QS).

7 Spatial Reasoning (CLP(QS)). www.spatial-reasoning.com

EXAMPLE A. Massachusetts Comprehensive Assessment System (MCAS)

Grade 3 Mathematics (2009), Question 12. Put a square and two right-angled triangles together to make a rectangle.

(1) Put the shapes T1, T2, S illustrated in Figure 1(d) together to make a rectangle.
(2) Put the shapes T1, T2, S in Figure 1(d) together to make a quadrilateral that is not a rectangle.

CLP(QS) represents right-angle triangles as illustrated in Figure 1(b). Figures 1(a) and 1(c) present the CLP(QS) solutions.

Grade 3 Mathematics (2013), Question 17.

(1) How many copies of T1 illustrated in Figure 1(d) are needed to completely fill the region R illustrated in Figure 2(a) without any of them overlapping?

As presented in Figure 2, CLP(QS) solves both the geometric definition and a variation where the dimensions of the rectangle and triangles are not given.

EXAMPLE B. Qualitative Spatial Reasoning with Complete Unknowns

In this example CLP(QS) reasons about spatial objects based solely on given qualitative spatial relations, i.e. without any geometric information. Define three cubes A, B, C. Put B inside A, and make B disconnected from C. What spatial relations can possibly hold between A and C? CLP(QS) determines that A must be disconnected from C and provides the inferred corresponding geometric constraints, as illustrated in Figure 3.

2.2 Analytical Geometry Foundations for Declarative Spatial Reasoning

Analytic geometry methods parameterise classes of objects and encode spatial relations as systems of polynomial equations and inequalities [12]. For example, we can define a sphere as having a 3D centroid point (x, y, z) and a radius r, where x, y, z, r are reals. Two spheres s1, s2 externally connect or touch if

    (x_s1 − x_s2)² + (y_s1 − y_s2)² + (z_s1 − z_s2)² = (r_s1 + r_s2)²    (1)

(a) CLP(QS) solution to arranging T1, T2, S to form rectangles:

?- T1 = right_triangle(_,_,W,W),
|  T2 = right_triangle(_,_,W,W),
|  S = rectangle(_,W,W),
|  Solution = rectangle(_,_,_),
|  topology(rcc8(eq), Solution, tile_union([T1,T2,S])),
|  ground_object(Solution).

T1 = right_triangle(orientation(0), point(0, 0), 5, 5),
W = 5,
T2 = right_triangle(orientation(180), point(5, 5), 5, 5),
S = rectangle(point(5, 0), 5, 5),
Solution = rectangle(point(0, 0), 10, 5) .

(c) CLP(QS) solution to arranging T1, T2, S to form non-rectangular quadrilaterals:

?- T1 = right_triangle(_,_,W,W),
|  T2 = right_triangle(_,_,W,W),
|  S = rectangle(_,W,W),
|  topology(rcc8(eq), Solution, tile_union([T1,T2,S])),
|  quadrilateral(Solution),
|  not(Solution = rectangle(_,_,_)),
|  ground_object(Solution).
T1 = right_triangle(orientation(90), point(5, 0), 5, 5),
W = 5,
T2 = right_triangle(orientation(270), point(10, 5), 5, 5),
S = rectangle(point(5, 0), 5, 5),
Solution = parallelogram(orientation(0), point(0,0), 10, 5);
...
T1 = right_triangle(orientation(90), point(5, 0), 5, 5),
W = 5,
T2 = right_triangle(orientation(0), point(10, 0), 5, 5),
S = rectangle(point(5, 0), 5, 5),
Solution = trapezoid(orientation(0), point(0, 0), 15, 5, 5);
...
false.

(b) Representing a right-angle triangle T in CLP(QS) by an orientation, a position point (TX, TY), a width TW and a height TH.
(d) Shapes T1, T2, S to be arranged.

Fig. 1. Using CLP(QS) to solve MCAS Grade 3 Mathematics questions (2009).

(a) CLP(QS) solution to tiling the rectangular region R with triangle T1:

?- between(1,8,TriangleCount),
|  right_triangle_list(TriangleCount,[T1|TRest]),
|  T1 = right_triangle(_,_,1,1),
|  size(equal, T1, TRest),
|  topology(rcc8(eq), rectangle(point(0,0),3,1), tile_union([T1|TRest])).

TriangleCount = 6

(b) When no geometric information is given, CLP(QS) determines that the number of right-angle triangles must be even:

?- between(1,8,TriangleCount),
|  right_triangle_list(TriangleCount,Triangles),
|  topology(rcc8(eq), rectangle(_,_,_), tile_union(Triangles)).

TriangleCount = 2;...
TriangleCount = 4;...
TriangleCount = 6;...
TriangleCount = 8;
false.

Fig. 2. Using CLP(QS) to solve MCAS Grade 3 Mathematics Test questions (2013).

2012 Regents High School Examination, The University of the State of New York, Part 1, Question 1. Line n intersects lines l and m, forming the angles shown in Figure 4. Which value of x would prove l || m?
?- L=line(_, horizon_angle(_)), M=line(_, horizon_angle(_)),
|  N=line(_, horizon_angle(_)),
|  orientation(parallel, L,M),
|  orientation(angle_degrees(Theta1), M,N),
|  orientation(angle_degrees(Theta2), L,N),
|  {Theta1 =:= 18 * X - 12}, {Theta2 =:= 6 * X + 42},
|  ...
|  true.

Theta1 = Theta2, Theta2 = 69.0,
X = 4.5

Figure 4: Regents High School Examination question to find x to prove that l and m are parallel.

?- A=cube(_, _), B=cube(_, _), C=cube(_, _),
|  topology(rcc8(ntpp), A, B),
|  topology(rcc8(dc), B, C),
|  topology(Relation, A,C),
|  constraints([(A,'A'),(B,'B'),(C,'C')],Cons),
|  pretty_print_constraints(Cons).
----------------------------------
CLP(QS) Constraints:
(1) B.length+B.x-C.x < -0.0
(2) A.z-B.z > 0.0
(3) A.length-B.length+A.z-B.z < 0.0
(4) A.y-B.y > 0.0
(5) A.length-B.length+A.y-B.y < 0.0
(6) A.x-B.x > 0.0
(7) A.length-B.length+A.x-B.x < 0.0
(8) A.length > 0.0
(9) C.length > 0.0
----------------------------------
Relation = rcc8(dc),
...

The inferred inequalities correspond to the qualitative constraints as follows:
(1) Bx + Bl < Cx : B is left of C
(2) Az > Bz : the front face of A is behind the front face of B
(3) Az + Al < Bz + Bl : the back face of A is in front of the back face of B
(4) Ay > By : the base of A is above the base of B
(5) Ay + Al < By + Bl : the top of A is below the top of B
(6) Ax > Bx : the left side of A is right of the left side of B
(7) Ax + Al < Bx + Bl : the right side of A is left of the right side of B
(8) Al > 0 : A's side length is positive
(9) Cl > 0 : C's side length is positive

Fig. 3. Spatial reasoning about cubes A, B, C with complete geometric unknowns: CLP(QS) solution and the geometric inequalities it infers from the qualitative constraints.

If the system of polynomial constraints is satisfiable then the spatial constraint graph is consistent. Specifically, the system of polynomial (in)equalities over variables X is satisfiable if there exists a real number assignment for each x ∈ X such that the (in)equalities are true. Partial geometric information (i.e. a combination of numerical and qualitative spatial information) is utilised by assigning the given real numerical values to the corresponding object parameters. Thus, we can integrate spatial reasoning and logic programming using Constraint Logic Programming (CLP) [19]; this system, CLP over qualitative spatial domains (CLP(QS)), provides a suitable framework for expressing and proving first-order spatial theorems.
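To illustrate this satisfiability reading at the polynomial level, the following is a rough sketch (not the CLP(QS) implementation) of the check behind Example B, written against the z3 Python bindings. It assumes axis-aligned cubes parameterised by a minimum corner (x, y, z) and a side length l, takes A to lie strictly inside B as in the constraints of Figure 3, and, for brevity, fixes one separating-axis witness (B entirely left of C) in place of the full dc(B, C) disjunction.

from z3 import Real, And, Solver

def cube(name):
    return {v: Real(name + "_" + v) for v in ("x", "y", "z", "l")}

def ntpp(a, b):
    # a is strictly inside b on every axis
    return And(*[And(b[d] < a[d], a[d] + a["l"] < b[d] + b["l"])
                 for d in ("x", "y", "z")])

def overlaps(a, b):
    # a and b share an interior point (the negation of dc for boxes)
    return And(*[And(a[d] < b[d] + b["l"], b[d] < a[d] + a["l"])
                 for d in ("x", "y", "z")])

A, B, C = cube("A"), cube("B"), cube("C")
scene = [A["l"] > 0, B["l"] > 0, C["l"] > 0,
         ntpp(A, B),                        # A strictly inside B (cf. Figure 3)
         B["x"] + B["l"] < C["x"]]          # witness for dc(B, C): B left of C

s = Solver(); s.add(And(scene))
print(s.check())                            # sat: the scene is realisable

s = Solver(); s.add(And(scene + [overlaps(A, C)]))
print(s.check())                            # unsat: dc(A, C) is entailed

The first check is the consistency task; the second is the sufficiency task for dc(A, C): since no configuration lets A and C share an interior point, A must be disconnected from C.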
Cylindrical Algebraic Decomposition (CAD) [13] is a prominent sound and complete algorithm for deciding satisfiability of a general system of polynomial constraints over reals, and has time complexity O(c1^(2^(c2 n))) in the number of free variables n [2]. Thus, a key focus within analytic spatial reasoning has been methods for managing this inherent intractability.8 More efficient refinements of the original CAD algorithm include partial CAD [14]. Symbolic methods for solving systems of multivariate equations include the Gröbner basis method [11] and Wu's characteristic set method [40]. In the QUAD-CLP(R) system, the authors improve solving performance by using linear approximations of quadratic constraints and by identifying geometric equivalence classes [28]. Ratschan employs pruning methods, also at the polynomial level, in the rsolve system [31, 32].

8 Important factors in determining the applicability of various analytic approaches are the degree of the polynomials (particularly the distinction between linear and nonlinear) and whether both equality and inequalities are permitted in the constraints.

Constructive and iterative (i.e. Newton and Quasi-Newton iteration) methods solve spatial reasoning problems by "building" a solution, i.e. by finding a configuration that satisfies the given constraints [27]. If a solution is found, then the solution itself is the proof that the system is consistent – but what if a solution is not found within a given time frame? In general these methods are incomplete for spatial reasoning problems encoded as nonlinear equations and inequalities of arbitrary degree.9

9 That is, constructive methods may fail in building a consistent solution, and iterative root-finding methods may fail to converge.

3 Spatial Symmetries

Information about objects and their spatial relations is formally expressed as a constraint graph G = (N, E), where the nodes N of the graph are spatial objects and the edges between nodes specify the relations between the objects. Objects belong to a domain, e.g. points, lines, squares, and circles in 2D Euclidean space, and cuboids, vertically-extruded polygons, spheres, and cylinders in 3D Euclidean space. We denote the object domain of node i as U_i (spatial domains are typically infinite). A node may refer to a partially ground, or completely geometrically ground, object, such that U_i can be a proper subset of the full domain of that object type. Each element i′ ∈ U_i is called an instance of that object domain. A configuration of objects is a set of instances {i′_1, ..., i′_n} of nodes i_1, ..., i_n respectively.

A binary relation R_ij between nodes i, j distinguishes a set of relative configurations of i, j; relation R is said to hold for those configurations, R_ij ⊆ U_i × U_j. In general, an n-ary relation for n ≥ 1 distinguishes a set of configurations between n objects: R_{i1,...,in} ⊆ U_{i1} × · · · × U_{in}. An edge between nodes i, j is assigned a logical formula over relation symbols R1, ..., Rm and logical operators ∨, ∧, ¬.
Given an interpretation i′, j′, the formula for edge e is interpreted in the standard way, denoted e(i′, j′):

– R1 ≡ (i′, j′) ∈ R1_ij
– (R1 ∨ R2) ≡ (i′, j′) ∈ R1_ij ∪ R2_ij
– (R1 ∧ R2) ≡ (i′, j′) ∈ R1_ij ∩ R2_ij
– (¬R1) ≡ (i′, j′) ∈ (U_i × U_j) \ R1_ij

An edge between i, j is satisfied by a configuration i′, j′ if e(i′, j′) is true (this is generalised to n-ary relations). A spatial constraint graph G = (N, E) is consistent or satisfiable if there exists a configuration s of N that satisfies all edges in E, denoted G(s); this is referred to as the consistency task. Graph G′ is a consequence of, or implied by, G if every spatial configuration that satisfies G also satisfies G′. This is the sufficiency task (or entailment) that we commonly apply to constructive proofs, where the task is to prove that the objects and relations in G are sufficient for ensuring that particular properties hold in G′.

Given graph G, two key questions are (1) how to give meaning to, or interpret, the spatial relations in G, and (2) how to efficiently determine consistency and produce instantiations of G. That is, we need to adopt a method for spatial reasoning.

Fig. 4. Topological relations between four spheres maintained after various affine transformations: (a) initial configuration, (b) rotation, (c) translation.

3.1 An Example of the Basic Concept

A key insight is that spatial configurations form equivalence classes over qualitative relationships based on certain affine transformations. For example, consider the spatial task of determining whether five same-sized spheres can be mutually touching. Suppose we are given a specific numerically defined configuration of four mutually touching spheres as illustrated in Figure 4(a), and we prove that it is impossible to add an additional mutually touching sphere to this configuration. That is, let s1, ..., s4 be unit spheres (radius 1), centred on points p1 = (0, 0, 0), p2 = (2, 0, 0), p3 = (1, √3, 0), p4 = (1, √(1/3), √(8/3)), respectively. According to Equation 1, s1, ..., s4 are mutually touching. We prove that a fifth same-sized, mutually touching sphere cannot be added to this configuration by determining that the corresponding system of polynomial constraints is unsatisfiable (the system consists of four constraints with three free variables x_s5, y_s5, z_s5, obtained by reapplying Equation 1 between s5 and each other sphere; e.g. s1 touches s5 is x_s5² + y_s5² + z_s5² = 4).

Now consider that we apply an affine transformation to the original configuration such as rotation, translation, scaling, or reflection, as illustrated in Figures 4(b) and 4(c). After having applied the transformation, it is still impossible to add a fifth mutually touching sphere, because the relevant qualitative (topological) relations are preserved under these transformations. Thus, when we proved that it was impossible to add a fifth same-sized mutually touching sphere to the original given configuration, in fact we proved it for a class of configurations, specifically, the class of configurations that can be reached by applying an affine transformation to the original configuration.
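The grounded half of this argument is easy to reproduce at the polynomial level. The sketch below (z3 Python bindings; not the CLP(QS) implementation) keeps the irrational coordinates of p3 and p4 exact by constraining auxiliary variables rather than using floating-point approximations.

from z3 import Reals, Solver

x, y, z = Reals("x_s5 y_s5 z_s5")
a, b, c = Reals("a b c")     # a = sqrt(3), b = sqrt(1/3), c = sqrt(8/3), kept exact

def touches(px, py, pz):
    # Equation (1) for two unit spheres: squared centre distance = (1 + 1)^2
    return (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 == 4

s = Solver()
s.add(a * a == 3, a > 0, 3 * b * b == 1, b > 0, 3 * c * c == 8, c > 0)
for centre in [(0, 0, 0), (2, 0, 0), (1, a, 0), (1, b, c)]:
    s.add(touches(*centre))
print(s.check())             # unsat: no fifth mutually touching unit sphere exists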
Now, when determining consistency of graphs of qualitative spatial relations, we are not given any specific spatial configurations to work with (i.e. complete absence of numerical information), and instead need to prove consistency over all possible configurations. The key is that, each time we ground and constrain variables, we are eliminating a spatial symmetry from our partially defined configuration. If we maintain knowledge about the symmetries that certain object types have (e.g. spheres have complete rotational symmetry), then we can judiciously "trade" symmetries for unbound variables in our polynomial encoding at a purely symbolic level. Importantly, rather than having to compute symmetries or undertake any complex symmetry detection procedure, we are instead building knowledge about space and spatial properties of objects into the spatial solver at a declarative level. Thus, we are able to efficiently reason over an infinite set of possible configurations by incrementally pruning spatial symmetries based on commonsense knowledge about space, and this pruning is exploited by eliminating and constraining variables in the underlying polynomial encoding.

3.2 Theoretical Foundations for Symmetries

Due to the parameterisation of objects, spatial configurations are embedded in n-dimensional Euclidean space Rⁿ (1 ≤ n ≤ 3) with a fixed origin point. Let V, W be Euclidean spaces in Rⁿ, each with an origin. Given vectors x, y and constant k, a linear transformation f is a mapping V → W such that

    f(x + y) = f(x) + f(y)    (additive)
    f(kx) = k f(x)            (homogeneous)

An affine transformation f′ is a linear transformation composed with a translation. It is convenient to represent a linear transformation on vector x as a left multiplication by a d×d real matrix Q, and a translation as an addition of a vector t, f′(x) = Qx + t. We denote a transformation T applied to a spatial configuration of objects s as Ts. We distinguish particular classes of transformations with respect to the qualitative spatial relationships that are preserved; for example, in R² the following matrices represent rotation by θ, uniform scaling by k > 0, and horizontal reflection, respectively:

    [[cos θ, −sin θ], [sin θ, cos θ]],   [[k, 0], [0, k]],   [[−1, 0], [0, 1]].

Given a transformation T we annotate it with its type c ∈ C, e.g. C = {translate, rotate, scale, reflect}, as T^c. Each spatial relation R belongs to a class of relations in Rel, such as topology, mereology, coincidence, relative orientation, distance. Let Sym be a function Sym : Rel → 2^C that represents the classes of transformations that preserve a given class of spatial relations. The Sym function is our mechanism for building knowledge about spatial symmetries into the spatial reasoning system. Let Rel_G be the set of classes of the spatial relations that are used in the spatial constraint graph G, and let Sym_G = ⋂_{R ∈ Rel_G} Sym(R).

The following formal condition on Sym_G states that transformations (applied to the embedding space) define equivalence classes of configurations with respect to the consistency of spatial constraint graphs. When satisfied, this condition provides a theoretically sound foundation for symmetry pruning.

Condition 1. Given spatial constraint graph G, configuration s, and affine transformation T^c with c ∈ Sym_G, then G(s) is true if and only if G(T^c s).
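As a toy illustration of the Sym mechanism (plain Python; the table below is a simplified stand-in for the commonsense rules formalised in Section 3.4, not the CLP(QS) rule base), each relation class is mapped to the transformation classes that preserve it, and Sym_G is the intersection over the relation classes occurring in G.

# Simplified preservation table; see Section 3.4 for the actual rules.
SYM = {
    "topology":    {"translate", "rotate", "scale", "reflect"},
    "mereology":   {"translate", "rotate", "scale", "reflect"},
    "coincidence": {"translate", "rotate", "scale", "reflect"},
    "orientation": {"translate", "rotate", "scale"},     # changes under reflection
    "distance":    {"translate", "rotate", "reflect"},   # changes under (non-uniform) scaling
}

def sym_g(relation_classes):
    """Transformation classes that may be 'traded' for groundings in graph G."""
    sets = [SYM[r] for r in relation_classes]
    return set.intersection(*sets) if sets else set()

print(sym_g({"topology", "orientation"}))   # {'translate', 'rotate', 'scale'}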
3.3 Polynomial Encodings for Spatial Relations

In this section we define a range of spatial domains and spatial relations with the corresponding polynomial encodings. While our method is applicable to a wide range of 2D and 3D spatial objects and qualitative relations, for brevity and clarity we primarily focus on a 2D spatial domain. Our method is readily applicable to other 2D and 3D spatial domains and qualitative relations, for example, as defined in [4, 9, 10, 28, 29, 33, 34].

– a point is a pair of reals x, y
– a line segment is a pair of end points p1, p2 (p1 ≠ p2)
– a rectangle is a point p representing the bottom left corner, a unit direction vector v defining the orientation of the base of the rectangle, and a real width and height w, h (0 < w, 0 < h); we can refer to the vertices of the rectangle: let v′ = (−y_v, x_v) be v rotated 90° counter-clockwise, then p1 = p = p5, p2 = wv + p1, p3 = wv + hv′ + p1, p4 = hv′ + p1
– a circle is a centre point p and a real radius r (0 < r)

We consider the following spatial relations:

Relative Orientation. Left, right, collinear orientation relations between points and segments, and parallel, perpendicular relations between segments [23].
Coincidence. Intersection between a point and a line, and a point and the boundary of a circle. Also whether the point is in the interior, outside, or on the boundary of a region.
Mereology. Part-whole relations between regions [37].

Table 1 presents the corresponding polynomial encodings. Given three real variables v, i, j, let v ∈ [i, j] ≡ i ≤ v ≤ j ∨ j ≤ v ≤ i.

Relation : Polynomial Encoding
left of (point p, segment s_ab) : (x_b − x_a)(y_p − y_a) > (y_b − y_a)(x_p − x_a)
collinear (point p, segment s_ab) : (x_b − x_a)(y_p − y_a) = (y_b − y_a)(x_p − x_a)
right or collinear (point p, segment s_ab) : (x_b − x_a)(y_p − y_a) ≤ (y_b − y_a)(x_p − x_a)
parallel (segments s_ab, s_cd) : (y_b − y_a)(x_d − x_c) = (y_d − y_c)(x_b − x_a)
coincident (point p, segment s_ab) : collinear(p, s_ab) ∧ x_p ∈ [x_a, x_b] ∧ y_p ∈ [y_a, y_b]
coincident (point p, circle c) : (x_c − x_p)² + (y_c − y_p)² = r_c²
inside (point p, rectangle a) : (0 < (p − p_1a) · v_a < w_a) ∧ (0 < (p − p_1a) · v′_a < h_a)
intersects (point p, rectangle a) : (0 ≤ (p − p_1a) · v_a ≤ w_a) ∧ (0 ≤ (p − p_1a) · v′_a ≤ h_a)
boundary (point p, rectangle a) : intersects(p, a) ∧ ¬ inside(p, a)
outside (point p, rectangle a) : ¬ intersects(p, a)
concentric (rectangles a, b) : ½(p_3a − p_1a) + p_1a = ½(p_3b − p_1b) + p_1b
part of (rectangles a, b) : ⋀_{i=1..4} intersects(p_ia, b)
proper part (rectangles a, b) : ¬ equals(a, b) ∧ part_of(a, b)
boundary part of (rectangles a, b) : ⋀_{i=1..4} boundary(p_ia, b)
discrete from (rectangles a, b) : ⋁_{i=1..4} ( right_or_collinear(a, (p_ib, p_(i+1)b)) ∨ right_or_collinear(b, (p_ia, p_(i+1)a)) )
partially overlaps (rectangles a, b) : ∃p_i ∈ R² ∃p_j ∈ R² ∃p_k ∈ R² ( inside(p_i, a) ∧ inside(p_i, b) ∧ inside(p_j, a) ∧ outside(p_j, b) ∧ outside(p_k, a) ∧ inside(p_k, b) )

Table 1. Polynomial encodings of qualitative spatial relations.
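Two rows of Table 1 can be written directly as executable constraints. The following is a small sketch (z3 Python bindings; the helper names are ours, not the CLP(QS) implementation); partially grounding the segment s_ab, as discussed in Section 2.2, leaves purely qualitative constraints on the remaining objects.

from z3 import Reals, Solver

xa, ya, xb, yb, xp, yp = Reals("xa ya xb yb xp yp")
xc, yc, xd, yd = Reals("xc yc xd yd")

# left_of(point p, segment s_ab), as in Table 1
left_of = (xb - xa) * (yp - ya) > (yb - ya) * (xp - xa)

# parallel(segment s_ab, segment s_cd), as in Table 1
parallel = (yb - ya) * (xd - xc) == (yd - yc) * (xb - xa)

s = Solver()
s.add(xa == 0, ya == 0, xb == 1, yb == 0)   # partially ground s_ab
s.add(left_of, parallel)
print(s.check())   # sat: e.g. any p above the x-axis and any horizontal s_cd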
Determining whether a point is inside a rectangle is based on vector projection. Point p is projected onto vector v by taking the dot product, (x_p, y_p) · (x_v, y_v) = x_p x_v + y_p y_v. Given point a and rectangle b, we translate the point such that the bottom left corner of b is at the origin, project p_a on the base and side vectors of b, and check whether the projection lies within the width and height of the rectangle,

    0 < (p_a − p_1b) · v_b < w_b
    0 < (p_a − p_1b) · v′_b < h_b

Convex regions a, b are disconnected iff there is a hyperplane of separation, i.e. there exists a line l such that a and b lie on different sides of l. This is the basis for determining the discrete from relation between rectangles.

3.4 Formalising Knowledge about Symmetries

In this section we formally determine the qualitative spatial relations that are preserved by various affine transformations. A fundamental property of affine transformations is that they preserve (a) point coincidence (e.g. line intersections), (b) parallelism between straight lines, and (c) proportions of distances between points on parallel lines [26]. For example, consider the configuration of points p_a, p_b, p_c, p_d and lines l1, l2, l3 in Figure 5: (a) we cannot introduce new points of coincidence between lines by applying transformations such as translation, scaling, reflection, and rotation. Conversely, if two lines intersect, then they will still intersect after these transformations; (b) lines l1, l2 are parallel before and after the transformations; lines l1, l3 are always non-parallel; (c) the ratio of distances between collinear points p_a, p_b, p_c is maintained; formally, let s_ij be the segment between points p_i, p_j and let |s_ij| be the length of the segment. Then the ratio |s_ab| / |s_bc| in Figure 5 is the same before and after the transformations. Based on these properties we can determine the transformations that preserve various qualitative spatial relations.10

10 The properties of affine transformations and the geometric objects that they preserve are well understood; further information is readily available in introductory texts such as [26]. Our key contribution is formalising and exploiting this spatial knowledge as modular and extensible common-sense rules in intelligent knowledge-based spatial assistance systems.

Fig. 5. Affine transformations preserve point coincidence, parallelism, and ratios of distances along parallel lines: (a) configuration of points and lines, (b) vertical reflection, (c) x-scaling, (d) rotation.

Theorem 1. The following qualitative spatial relations are preserved under translation, scale, rotation, and reflection (applied to the embedding space): topology, mereology, coincidence, collinearity, line parallelism.

Proof. By definition, affine transformations preserve parallelism with respect to qualitative line orientation, and point coincidence. Due to the preservation of point coincidence and of proportions of collinear distances by affine transformations, it follows that mereological part-of and topological contact relations between regions are preserved, i.e. if a mereological or topological relation changes between regions a, b, then by definition there exists a point p coincident with a such that the coincidence relation between p and b has changed; but this cannot occur, as point coincidence is maintained by affine transformations by definition, therefore mereological and topological relations are also maintained.
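A quick numeric illustration of Theorem 1, and of the remark in the next paragraph that relative orientation is not preserved under reflection (plain Python; the helper names are ours): the left-of relation is invariant under rotation of the whole configuration, but flips under a horizontal reflection.

from math import cos, sin, pi

def left_of(p, a, b):
    (xp, yp), (xa, ya), (xb, yb) = p, a, b
    return (xb - xa) * (yp - ya) > (yb - ya) * (xp - xa)

def rotate(p, theta):
    x, y = p
    return (cos(theta) * x - sin(theta) * y, sin(theta) * x + cos(theta) * y)

def reflect(p):            # horizontal reflection: (x, y) -> (-x, y)
    x, y = p
    return (-x, y)

p, a, b = (0.2, 1.0), (0.0, 0.0), (1.0, 0.0)
print(left_of(p, a, b))                                      # True
print(left_of(*(rotate(q, pi / 3) for q in (p, a, b))))      # True (preserved)
print(left_of(*(reflect(q) for q in (p, a, b))))             # False (flipped)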
The interaction between spatial relations and transformations is richer than we have space to elaborate on here, i.e. not all qualitative spatial relations are preserved under all affine transformations; orientation is not preserved under reflection (e.g. Figure 5(b) gives a counter-example), and distance is not preserved under non-uniform scaling. To summarise, we formalise the following knowledge as modular commonsense rules in CLP(QS): point coincidence, line parallelism, and topological and mereological relations are preserved by all affine transformations. Relative orientation changes with reflection, and qualitative distances and perpendicularity change with non-uniform scaling. Spheres, circles, and rectangles are not preserved under non-uniform scaling, with the exception of axis-aligned bounding boxes.

"Trading" Transformations. Symmetries are used to eliminate object variables. As a metaphor, unbound variables are replaced by constant values in "exchange" for transformations. We start with a set of transformations that can be applied to a configuration: translation, scaling, arbitrary rotation, and horizontal and vertical reflection. We can then "trade" each transformation for an elimination of variables. Each transformation can only be "spent" once. Theorem 2 presents an instance of such a pruning case. Table 2 presents a variety of different pruning cases for position variables and the associated combination of transformations, as illustrated in Figure 6.11 Some cases require more than one distinct set of parameter restrictions to cover the set of all position variables, due to point coincidence being preserved by affine transformations. For example, consider case (f): all pairs of points p1, p2 can be transformed into any other pair of points pi, pj by translation, rotation, and scaling, iff p1 = p2 ↔ pi = pj. Thus, to cover all pairs of possible points, we need to consider two distinct parameter restrictions: pi = pj and pi ≠ pj; we refer to these as subcases. Many further pruning cases are identifiable. For example, a version of case (i) can be defined without reflection by requiring more sub-cases where c4 > c2 and c4 < c2. Case (i) can be extended so that all six coordinates of three points are grounded if we also "exchange" the skew transformation (e.g. applicable to object domains like triangles or points).

11 All cases have been verified using Reduce as presented in Theorem 2.

Theorem 2. Any pair of object position variables (x1, y1), (x2, y2) can be transformed into any given position constants (c1, c2), (c3, c2) such that (c1 = c3 ↔ (x1 = x2 ∧ y1 = y2)) by applying: an xy-translation, a rotation about the origin in the range (0, 2π), and an x-scale.

Proof. The corresponding expression has been verified using the Reduce system (Redlog quantifier elimination) [15]; all variables are quantified over reals.

    ∀c1 ∀c2 ∀c3 ∀x1 ∀y1 ∀x2 ∀y2 . (c1 = c3 ↔ (x1 = x2 ∧ y1 = y2)) ↔
        ∃tx ∃ty ∃dx ∃dy ∃sx . (0 < sx) ∧ (dx² + dy² = 1) ∧
        let S = [[sx, 0], [0, 1]], R = [[dx, −dy], [dy, dx]], T = [tx, ty]ᵀ in
        ( [c1, c2]ᵀ = S·R·[x1, y1]ᵀ + T ) ∧ ( [c3, c2]ᵀ = S·R·[x2, y2]ᵀ + T )

We can use this pruning case on any spatial constraint graph G where the graph's spatial relations are preserved by translation, rotation, and scaling. Given graph G = (N, E), the following Algorithm applies the pruning case, with selected constants c1 = 0, c2 = 0, and c3 = 1 or c3 = 0:
1. select object position variables p1, p2 from nodes in N
2. copy G to create G1, G2
3. in G1 set p1 = (0, 0), p2 = (1, 0) (case c1 ≠ c3)
4. in G2 set p1 = (0, 0), p2 = (0, 0) (case c1 = c3)
5. if the task is:
   (a) consistency of G, then solve ⋁_{i=1,2} ∃s G_i(s)
   (b) sufficiency, G → G′, then solve ⋀_{i=1,2} ¬∃s (G_i(s) ∧ ¬G′(s))

In Step 1 any pair of objects can be selected for which their position variables will be grounded; we also employ policies that target computationally costly subgraphs (for example, pairs of non-equal circles that share a boundary point are often good candidates for this pruning case). Having eliminated free variables from the system of polynomial constraints, the constraints are significantly simpler to solve. Due to the double exponential complexity O(c1^(2^(c2 n))), reducing n has a significant impact on performance; the system may even collapse from nonlinear constraints to linear ones (solvable in O(c^n)) or to constants.

Fig. 6. Cases for pruning position parameters.
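As a small end-to-end sketch of the Algorithm above (z3 Python bindings; not the CLP(QS) implementation), consider a graph G with two circles of equal radius that share a boundary point (externally connected). Trading translation, rotation, and x-scale as in Theorem 2 grounds the two centres, and consistency of G is the disjunction over the two sub-cases.

from z3 import Reals, Solver, sat

def consistent(case_distinct):
    x1, y1, x2, y2, r = Reals("x1 y1 x2 y2 r")
    s = Solver()
    s.add(r > 0)
    # external connection of two equal circles: centre distance = r + r
    s.add((x1 - x2) ** 2 + (y1 - y2) ** 2 == (2 * r) ** 2)
    if case_distinct:        # sub-case c1 != c3: p1 = (0,0), p2 = (1,0)
        s.add(x1 == 0, y1 == 0, x2 == 1, y2 == 0)
    else:                    # sub-case c1 == c3: p1 = p2 = (0,0)
        s.add(x1 == 0, y1 == 0, x2 == 0, y2 == 0)
    return s.check() == sat

# G is consistent iff at least one grounded sub-case is satisfiable.
print(consistent(True) or consistent(False))    # True (only the first sub-case holds)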
3.5 Combining Symmetry Pruning with Graph Decomposition

In certain cases, spatial constraint graphs can be decomposed into subgraphs that can be solved independently. For example, subgraphs G1, G2 can be independently solved if all objects in subgraph G1 are either:

– disconnected from all objects in subgraph G2;
– a proper part of some object in G2;
– left of some segment in G2;
– only related by relative size to some object in G2, and so on.

In such cases we can reapply spatial symmetry pruning in each independent sub-graph; this commonsense spatial knowledge is modularly formalised within CLP(QS). For example, consider Proposition 22 of Book I of Euclid's Elements (Figure 7):

Constructing a triangle from three segments. Given three line segments l_ab, l_cd, l_ef, draw a line through four collinear points p1, ..., p4 such that |(p1, p2)| = |l_ab|, |(p2, p3)| = |l_cd|, |(p3, p4)| = |l_ef|. Draw circle c_a centred on p2, coincident with p1. Draw circle c_b centred on p3, coincident with p4. Draw p5 coincident with c_a and c_b. The triangle p2, p3, p5 has side lengths such that |(p2, p3)| = |l_cd|, |(p3, p5)| = |l_ef|, |(p5, p2)| = |l_ab|.

In this example, the three segments l_ab, l_cd, l_ef and the remaining objects are only related by the distances between their end points. That is, the relative position and orientation of l_ab, l_cd, l_ef is not relevant to the consistency of the spatial graph; we only need to explore all combinations of segment lengths. Thus the solver decomposes the graph into four sub-graphs: (1) l_ab, (2) l_cd, (3) l_ef, and (4) p1, ..., p5, c_a, c_b. In subgraphs (1), (2), (3) it "trades" translation and rotation to ground p_a = p_c = p_e = (0, 0) and y_b = y_d = y_f = 0, and keeps x-scale to cover all possible combinations of segment lengths, i.e. x_b, x_d, x_f are free variables. In subgraph (4) CLP(QS) applies the pruning case of Theorem 2 by grounding p1, p4.

Case : Parameter restrictions : Traded transformations
a  : x1 = c1 : x-translate
b  : x1 = c1, y1 = c2 : xy-translate
c  : x1 = c1, x2 = c2; (i) c1 ≠ c2, (ii) c1 = c2 : x-translate, rotate π, x-scale
d  : x1 = c1, y2 = c2 : xy-translate
e  : x1 = c1, y1 = c2, x2 = c3; (i) c1 ≠ c3, (ii) c1 = c3 : xy-translate, rotate π, x-scale
f  : x1 = c1, y1 = c2, x2 = c3, y2 = c2; (i) c1 ≠ c3, (ii) c1 = c3 : xy-translate, rotate (0, 2π), x-scale
g  : x1 = c1, x2 = c2, y3 = c3; (i) c1 ≠ c2, (ii) c1 = c2 : xy-translate, rotate π, x-scale
h* : x1 = c1, y1 = c2, x2 = c3, y3 = c4; (i) c1 ≠ c3 ∧ c2 ≠ c4, (ii) c1 = c3 ∧ c2 ≠ c4, (iii) c1 ≠ c3 ∧ c2 = c4, (iv) c1 = c3 ∧ c2 = c4 : xy-translate, rotate π, xy-scale, y-reflect
i* : x1 = c1, y1 = c2, x2 = c3, y2 = c2, y3 = c4; (i) c1 ≠ c3 ∧ c2 ≠ c4, (ii) c1 ≠ c3 ∧ c2 = c4, (iii) c1 = c3 ∧ c2 = c4 : xy-translate, rotate (0, 2π), xy-scale, y-reflect

Table 2. Cases for pruning parameters, for one position point (a, b), two position points (c–f), and three position points (g–i). Cases marked with * require arbitrary scaling (i.e. both uniform and non-uniform).
?- Sab = segment(Pa,Pb), Scd = segment(Pc,Pd), Sef = segment(Pe,Pf),
|  S12 = segment(P1,P2), S23 = segment(P2,P3), S34 = segment(P3,P4),
|  S14 = segment(P1,P4),
|  coincident(P2,S14), coincident(P3,S14),
|
|  length_dim(equal, S12, Sab),
|  length_dim(equal, S23, Scd),
|  length_dim(equal, S34, Sef),
|
|  Ca=circle(P2,_), coincident(P1,Ca),
|  Cb=circle(P3,_), coincident(P4,Cb),
|  coincident(P5,Ca), coincident(P5,Cb),
|  S35 = segment(P3,P5), S52 = segment(P5,P2),
|
|  %% triangle
|  length_dim(equal, S23, Scd),
|  length_dim(equal, S35, Sef),
|  length_dim(equal, S52, Sab).
true.

...
|  (length_dim(not_equal, S23, Scd);
|   length_dim(not_equal, S35, Sef);
|   length_dim(not_equal, S52, Sab)).
false.

Fig. 7. Constructing a triangle by decomposing the spatial constraint graph.

4 Application-Driven Use Cases

We present problem instances in the classes of mereology, ruler and compass, and contact. Table 3 presents the experiment time results of CLP(QS) using symmetry pruning compared with existing systems: the z3 SMT solver, Redlog real quantifier elimination (in the Reduce computer algebra system) [15], and the relation-algebraic qualitative spatial reasoners GQR [16] and SparQ [39]. CLP(QS) uses z3 to solve polynomial constraints (after our pruning), thus z3 is the most direct comparison. Experiments were run on a MacBook Pro, OS X 10.8.5, 2.6 GHz Intel Core i7. The empirical results show that no other spatial reasoning system exists (to the best of our knowledge) that can solve the range of problems presented in this section, and in cases where solvers are applicable, CLP(QS) with spatial pruning solves those problems significantly faster than other systems.

Problem                   : CLP(QS) : z3       : Redlog   : GQR  : SparQ
Aligned Concentric        : 6.831   : 47.651   : time out : n/a  : n/a
Boundary Concentric       : 2.036   : time out : time out : n/a  : n/a
Mereologically Concentric : 0.105   : 0.373    : time out : n/a  : n/a
Angle Bisector            : 0.931   : time out : time out : n/a  : n/a
Sphere Contact            : 0.004   : time out : time out : fail : fail

Table 3. Time (in seconds) to solve benchmark problems using CLP(QS) with pruning compared to the z3 SMT solver, Redlog (Reduce) quantifier elimination, and the GQR and SparQ relation-algebraic solvers. Time out was issued after a running time of 10 minutes. Failure (fail) indicates that an incorrect result was given. Not applicable (n/a) indicates that the problem could not be expressed using the given system.

4.1 Spatial Theorem Proving: Geometry of Solids

Tarski [36] shows that a geometric point can be defined by a language of mereological relations over spheres. The idea is to distinguish when spheres are concentric, and to define a geometric point as the point of convergence. Borgo [8] shows that this can be accomplished with a language of mereology over hypercubes. We will use CLP(QS) to prove that the definitions are sound for rotatable squares. As a preliminary we need to determine whether the intersection of two squares is non-square (Figure 8) [34].
Given two squares a, b, the intersection is non-square if a partially overlaps b (Table 1) and either (a) a and b are not aligned, x_va ≠ x_vb, or (b) the width and height of the intersection region I are not equal, w_I ≠ h_I, where

    w_I = min(v · p_2a, v · p_2b) − max(v · p_1a, v · p_1b)
    h_I = min(v · p_4a, v · p_4b) − max(v · p_1a, v · p_1b).

Fig. 8. Intersection of A, B is non-square: (a) A and B are not aligned, (b) h_I ≠ w_I.

Fig. 9. Characterising aligned (a),(b) and rotated (c),(d) concentric squares using mereology (reproduced from [Borgo, 2013]): (a) A and B are non-concentric, (b) A and B are concentric, (c) A is boundary concentric with B, (d) A and B are rotated concentric.

Aligned Concentric. Two squares A, B are aligned and concentric if: A is part of B and there does not exist a square P such that (a) P is covertex with B, and (b) the intersection of P and A is not a square (Figure 9).

| aligned_concentric(A,B) :-
|     mereology(part_of, A,B),
|     not((
|         P=square(_,_,_),
|         covertex(P,B),
|         intersection(non_square,A,P)
|     )).

We use CLP(QS) to prove that the definition is sufficient, by contradiction; two squares are covertex if they are aligned and share a vertex, and the relation concentric is the geometric definition of concentricity in Table 1 that is used to evaluate the mereological definition of concentricity:

?- A=square(_,_,_),
|  B=square(_,_,_),
|  aligned(A,B),
|  not_concentric(A,B),
|  aligned_concentric(A,B).
false.

Boundary Concentric. Square A is boundary concentric with square B if: A is proper part of B and there does not exist a square Z such that (a) Z is proper part of B, (b) A is part of Z, and (c) Z is not part of A (Figure 9).
| boundary_concentric(A,B) :-
|     mereology(proper_part, A,B),
|     not((
|         Z = square(_,_,_),
|         mereology(proper_part, Z,B),
|         mereology(part_of, A,Z),
|         mereology(not_part_of, Z,A)
|     )).

?- A=square(_,_,_),
|  B=square(_,_,_),
|  not_aligned(A,B),
|  not_concentric(A,B),
|  boundary_concentric(A,B).
false.

Mereologically Concentric. Squares A, B are mereologically concentric if: A, B are aligned concentric, or there exists Q such that (a) Q is boundary concentric with B and Q is aligned concentric with A, or (b) Q is boundary concentric with A and Q is aligned concentric with B.

Having proved the mereological definitions of aligned and boundary concentricity, we can replace these with the more efficient geometric definitions from Table 1 when proving mereological concentricity.

| g_aligned_concentric(A,B) :-
|     aligned(A,B), concentric(A,B).
|
| g_boundary_concentric(A,B) :-
|     mereology(boundary_part, A,B).
|
| m_concentric(A,B) :-
|     g_aligned_concentric(A,B)
|     ;
|     Q = square(_,_,_),
|     g_boundary_concentric(Q,B),
|     g_aligned_concentric(Q,A)
|     ;
|     Q = square(_,_,_),
|     g_boundary_concentric(Q,A),
|     g_aligned_concentric(Q,B).

?- A=square(_,_,_),
|  B=square(_,_,_),
|  not_concentric(A,B),
|  m_concentric(A,B).
false.

4.2 Didactics: Ruler–Compass and Contact Problems

Angle Bisector. Let l_a, l_b be line segments that share an endpoint at p. Draw circle c at p. Circle c intersects l_a at p_a and l_b at p_b. Draw circles c_a at p_a and c_b at p_b such that p is coincident with both c_a, c_b. Circles c_a and c_b intersect at p and p_c. The line segment from p to p_c bisects the angle between l_a and l_b (Figure 10).

We use CLP(QS) to prove that the definition is sufficient.
The relation bisects is used for evaluation by checking if the midpoint of (p_a, p_b) is collinear with l_c (i.e. idealised rulers cannot directly measure the midpoint of a line). An interactive diagram is then automatically generated that encodes the specified program (using the FreeCAD system); see Figure 11.

?- La = segment(P,_), Lb = segment(P,_),
|  orientation(not_parallel, La, Lb),
|  C = circle(P,_),
|  coincident(Pa,La), coincident(Pa,C),
|  coincident(Pb,Lb), coincident(Pb,C),
|  Ca=circle(Pa,_), Cb=circle(Pb,_),
|  coincident(P,Ca), coincident(P,Cb),
|  coincident(Pc,Ca), coincident(Pc,Cb),
|  not_equal(P,Pc),
|  Lc=segment(P,Pc),
|  bisects(segment(Pa,Pb),Lc).
true.
...
|  not_bisects(segment(Pa,Pb),Lc).
false.

Fig. 10. Ruler and compass method for angle bisection. Line Lc bisects the angle between lines La, Lb.

Fig. 11. Interactive diagram encoding the student's constructive proof of Euclid's angle bisector theorem; as the student manipulates figures in the diagram, the other geometries are automatically updated to maintain the specified qualitative constraints.

Sphere Contact. Determine the maximum number of same-sized mutually touching spheres (Figure 4 - note that no numeric information about the spheres is given in this benchmark problem).

5 Conclusions

Affine transformations provide an effective and interesting class of symmetries that can be used for pruning across a range of qualitative spatial relations. To summarise, we formalise the following knowledge as modular commonsense rules in CLP(QS): point coincidence, line parallelism, and topological and mereological relations are preserved by all affine transformations. Relative orientation changes with reflection, and qualitative distances and perpendicularity change with non-uniform scaling. Spheres, circles, and rectangles are not preserved under non-uniform scaling, with the exception of axis-aligned bounding boxes. Our algorithm is simple to implement, and is easily extended to handle more pruning cases. Theoretical and empirical results show that our method of pruning yields an improvement in performance by orders of magnitude over standard polynomial encodings without loss of soundness, thus increasing the horizon of spatial problems solvable with any polynomial constraint solver.
Furthermore, the declaratively formalised knowledge about pruning strategies is available to be utilised in a modular manner within other knowledge representation and reasoning frameworks that rely on specialised SMT solvers etc., e.g., in the manner demonstrated in ASPMT(QS) [38], which is a specialised non-monotonic spatial reasoning system built on top of answer set programming modulo theories.

References

[1] M. Aiello, I. E. Pratt-Hartmann, and J. F. v. Benthem. Handbook of Spatial Logics. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2007. ISBN 978-1-4020-5586-7.
[2] D. S. Arnon, G. E. Collins, and S. McCallum. Cylindrical Algebraic Decomposition I: The basic algorithm. SIAM Journal on Computing, 13(4):865–877, 1984.
[3] M. Bhatt and J. O. Wallgrün. Geospatial narratives and their spatio-temporal dynamics: Commonsense reasoning for high-level analyses in geographic information systems. ISPRS Int. J. Geo-Information, 3(1):166–205, 2014. doi: 10.3390/ijgi3010166. URL http://dx.doi.org/10.3390/ijgi3010166.
[4] M. Bhatt, J. H. Lee, and C. Schultz. CLP(QS): A Declarative Spatial Reasoning Framework. In Proceedings of the 10th international conference on Spatial information theory, COSIT'11, pages 210–230, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 978-3-642-23195-7.
[5] M. Bhatt, C. Schultz, and C. Freksa. The 'Space' in Spatial Assistance Systems: Conception, Formalisation and Computation. In T. Tenbrink, J. Wiener, and C. Claramunt, editors, Representing space in cognition: Interrelations of behavior, language, and formal models. Series: Explorations in Language and Space. 978-0-19-967991-1, Oxford University Press, 2013.
[6] M. Bhatt, J. Suchan, and C. Schultz. Cognitive Interpretation of Everyday Activities – Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation. In M. Finlayson, B. Fisseni, B. Loewe, and J. C. Meister, editors, Computational Models of Narrative (CMN) 2013, a satellite workshop of CogSci 2013: The 35th meeting of the Cognitive Science Society, Dagstuhl, Germany, 2013. OpenAccess Series in Informatics (OASIcs).
[7] M. Bhatt, C. P. L. Schultz, and M. Thosar. Computing narratives of cognitive user experience for building design analysis: KR for industry scale computer-aided architecture design. In C. Baral, G. D. Giacomo, and T. Eiter, editors, Principles of Knowledge Representation and Reasoning: Proceedings of the Fourteenth International Conference, KR 2014, Vienna, Austria, July 20-24, 2014. AAAI Press, 2014. ISBN 978-1-57735-657-8.
[8] S. Borgo. Spheres, cubes and simplexes in mereogeometry. Logic and Logical Philosophy, 22(3):255–293, 2013.
[9] D. Bouhineau. Solving geometrical constraint systems using CLP based on linear constraint solver. In Artificial Intelligence and Symbolic Mathematical Computation, pages 274–288. Springer, 1996.
[10] D. Bouhineau, L. Trilling, and J. Cohen. An application of CLP: Checking the correctness of theorems in geometry. Constraints, 4(4):383–405, 1999.
[11] B. Buchberger. Bruno Buchberger's PhD thesis 1965: an algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal (English translation). Journal of Symbolic Computation, 41(3):475–511, 2006.
[12] S.-C. Chou. Mechanical geometry theorem proving, volume 41. Springer Science & Business Media, 1988.
[13] G. E. Collins. Quantifier elimination for real closed fields by cylindrical algebraic decomposition.
    In Automata Theory and Formal Languages, 2nd GI Conference, Kaiserslautern, May 20–23, 1975, pages 134–183. Springer, 1975.
[14] G. E. Collins and H. Hong. Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation, 12(3):299–328, 1991. ISSN 0747-7171.
[15] A. Dolzmann, A. Seidl, and T. Sturm. REDLOG User Manual, Apr. 2004. Edition 3.0.
[16] Z. Gantner, M. Westphal, and S. Wölfl. GQR - A fast reasoner for binary qualitative constraint calculi. In Proc. of AAAI, volume 8, 2008.
[17] N. Hadas, R. Hershkowitz, and B. B. Schwarz. The role of contradiction and uncertainty in promoting the need to prove in dynamic geometry environments. Educational Studies in Mathematics, 44(1-2):127–150, 2000.
[18] P. Haunold, S. Grumbach, G. Kuper, and Z. Lacroix. Linear constraints: Geometric objects represented by inequalities. In S. C. Hirtle and A. U. Frank, editors, Spatial Information Theory: A Theoretical Basis for GIS, volume 1329 of Lecture Notes in Computer Science, pages 429–440. Springer Berlin Heidelberg, 1997. ISBN 978-3-540-63623-6.
[19] J. Jaffar and M. J. Maher. Constraint logic programming: A survey. The Journal of Logic Programming, 19:503–581, 1994.
[20] P. C. Kanellakis, G. M. Kuper, and P. Z. Revesz. Constraint query languages. In D. J. Rosenkrantz and Y. Sagiv, editors, Proceedings of the Ninth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, April 2-4, 1990, Nashville, Tennessee, USA, pages 299–313. ACM Press, 1990. ISBN 0-89791-352-3.
[21] D. Kapur and J. L. Mundy, editors. Geometric Reasoning. MIT Press, Cambridge, MA, USA, 1988. ISBN 0-262-61058-2.
[22] P. B. Ladkin and R. D. Maddux. On binary constraint problems. Journal of the ACM (JACM), 41(3):435–469, 1994.
[23] J. H. Lee. The complexity of reasoning with relative directions. In 21st European Conference on Artificial Intelligence (ECAI 2014), 2014.
[24] G. Ligozat. Qualitative Spatial and Temporal Reasoning. Wiley-ISTE, 2011.
[25] G. F. Ligozat. Qualitative triangulation for spatial reasoning. In Spatial Information Theory: A Theoretical Basis for GIS, pages 54–68. Springer, 1993.
[26] G. E. Martin. Transformation geometry: An introduction to symmetry. Springer, 1982.
[27] J. C. Owen. Algebraic solution for geometry from dimensional constraints. In Proceedings of the first ACM symposium on Solid modeling foundations and CAD/CAM applications, pages 397–407. ACM, 1991.
[28] G. Pesant and M. Boyer. QUAD-CLP(R): Adding the power of quadratic constraints. In Principles and Practice of Constraint Programming, pages 95–108. Springer, 1994.
[29] G. Pesant and M. Boyer. Reasoning about solids using constraint logic programming. Journal of Automated Reasoning, 22(3):241–262, 1999.
[30] D. A. Randell, A. G. Cohn, and Z. Cui. Computing transitivity tables: A challenge for automated theorem provers. In 11th International Conference on Automated Deduction (CADE-11), pages 786–790, 1992.
[31] S. Ratschan. Approximate quantified constraint solving by cylindrical box decomposition. Reliable Computing, 8(1):21–42, 2002.
[32] S. Ratschan. Efficient solving of quantified inequality constraints over the real numbers. ACM Transactions on Computational Logic (TOCL), 7(4):723–748, 2006.
[33] C. Schultz and M. Bhatt. Towards a Declarative Spatial Reasoning System. In 20th European Conference on Artificial Intelligence (ECAI 2012), 2012.
[34] C. Schultz and M. Bhatt. Declarative spatial reasoning with boolean combinations of axis-aligned rectangular polytopes.
In ECAI 2014 - 21st European Conference on Artificial Intelligence, pages 795–800, 2014. [35] C. Schultz, M. Bhatt, and A. Borrmann. Bridging qualitative spatial constraints and parametric design - a use case with visibility constraints. In EG-ICE: 21st International Workshop - Intelligent Computing in Engineering 2014, 2014. [36] A. Tarski. A general theorem concerning primitive notions of Euclidean geometry. Indagationes Mathematicae, 18(468):74, 1956. [37] A. C. Varzi. Parts, wholes, and part-whole relations: The prospects of mereotopology. Data & Knowledge Engineering, 20(3):259–286, 1996. [38] P. Walega, M. Bhatt, and C. Schultz. ASPMT(QS): Non-Monotonic Spatial Reasoning with Answer Set Programming Modulo Theories. In LPNMR: Logic Programming and Nonmonotonic Reasoning - 13th International Conference, 2015. URL http://lpnmr2015.mat.unical.it. [39] J. O. Wallgrün, L. Frommberger, D. Wolter, F. Dylla, and C. Freksa. Qualitative Spatial Representation and Reasoning in the SparQ-Toolbox. In T. Barkowsky, M. Knauff, G. Ligozat, and D. Montello, editors, Spatial Cognition V: Reasoning, Action, Interaction, volume 4387 of Lecture Notes in Computer Science, pages 39–58. Springer, 2007. [40] W. Wenjun. Basic principles of mechanical theorem proving in elementary geometries. Journal of Systems Sciences & Mathematical Sciences, 4(3): 207–235, 1984.
Ranks and Pseudo-Ranks - Paradoxical Results of Rank Tests - Edgar Brunner arXiv:1802.05650v1 [] 15 Feb 2018 Department of Medical Statistics, University of Göttingen, Germany Frank Konietschke Department of Mathematical Sciences, University of Texas at Dallas, U.S.A. Arne C. Bathke Department of Mathematics, University of Salzburg, Austria and Markus Pauly Institute of Statistics, Ulm University, Germany Abstract Rank-based inference methods are applied in various disciplines, typically when procedures relying on standard normal theory are not justifiable, for example when data are not symmetrically distributed, contain outliers, or responses are even measured on ordinal scales. Various specific rank-based methods have been developed for two and more samples, and also for general factorial designs (e.g., Kruskal-Wallis test, Jonckheere-Terpstra test). It is the aim of the present paper (1) to demonstrate that traditional rank-procedures for several samples or general factorial designs may lead to paradoxical results in case of unbalanced samples, (2) to explain why this is the case, and (3) to provide a way to overcome these disadvantages of traditional rankbased inference. Theoretical investigations show that the paradoxical results can be explained by carefully considering the non-centralities of the test statistics which may be non-zero for the traditional tests in unbalanced designs. These non-centralities may even become arbitrarily large for increasing sample sizes in the unbalanced case. A simple solution is the use of socalled pseudo-ranks instead of ranks. As a special case, we illustrate the effects in sub-group analyses which are often used when dealing with rare diseases. 2 1 Paradoxical Results of Rank Tests Introduction If the assumptions of classical parametric inference methods are not met, the usual recommendation is to apply nonparametric rank-based tests. Here, the Wilcoxon-Mann-Whitney and Kruskal-Wallis (1952) tests are among the most commonly applied rank procedures, often utilized as replacements for the unpaired two-sample t-test and the one-way ANOVA, respectively. Other popular rank methods include the Hettmansperger-Norton (1987) and JonckheereTerpstra (1952, 1954) tests for ordered alternatives, and the procedures by Akritas et al. (1997) for two- or higher-way designs. In statistical practice, these procedures are usually appreciated as robust and powerful inference tools when standard assumptions are not fulfilled. For example, Whitley and Ball (2002) conclude that “Nonparametric methods require no or very limited assumptions to be made about the format of the data, and they may, therefore, be preferable when the assumptions required for parametric methods are not valid.” In line with this statement, Bewick et al. (2004) also state that “the Kruskal-Wallis, Jonckheere-Terpstra (..) tests can be used to test for differences between more than two groups or treatments when the assumptions for analysis of variance are not held.” These descriptions are slightly over-optimistic since nonparametric methods also rely on certain assumptions. In particular, the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests postulate homoscedasticity across groups under the null hypothesis, and they have originally only been developed for continuous outcomes. In case of doubt, it is nevertheless expected that rank procedures are more robust and lead to more reliable results than their parametric counterparts. 
While this is true for deviations from normality, and while by now it is widely accepted that ordinal data should rather be analyzed using adequate rank-based methods than using normal theory procedures, we illustrate in various instances that nonparametric rank tests for more than two samples possess one noteworthy weakness. Namely, they are generally non-robust against changes from balanced to unbalanced designs. In particular, keeping the data generating processes fixed, we provide paradigms under which commonly used rank tests surprisingly yield completely opposite test decisions when rearranging group sample sizes. These examples are in general not artificially generated to obtain paradoxical results, but even include homoscedastic normal models. This effect is completely undesirable, leading to the somewhat heretical question • Are nonparametric rank procedures useful at all to handle questions for more than two groups? In order to comprehensively answer this question, we carefully analyze the underlying nonparametric effects of the respective rank procedures. From this, we develop detailed guidelines for an adequate application of rank-based procedures. Moreover, we even state a simple solution for all these problems: Substituting ranks by closely related quantities, the so-called pseudoranks that have already been considered by Kulle (1999), Gao and Alvo (2005a, b), Gao, Alvo, Paradoxical Results of Rank Tests 3 Chen, and Li (2008), and in more detail by Thangavelu and Brunner (2007), and by Brunner, Konietschke, Pauly and Puri (2017). It should be noted that the motivation in these references was different, and that their authors had not been aware of the striking paradoxical properties that may arise when using classical rank tests. These surprising paradigms only appear in case of unbalanced designs since all rank procedures discussed below coincide with their respective pseudo-rank analogs in case of equal sample sizes. Pseudo-ranks are easy to compute, share the same advantageous properties of ranks and lead to reliable and robust inference procedures for a variety of factorial designs. Moreover, we can even obtain confidence intervals for (contrasts of) easy to interpret reasonable nonparametric effects. Thus, resolving the commonly raised disadvantage that “nonparametric methods are geared toward hypothesis testing rather than estimation of effects” (Whitley and Ball, 2002). The paper is organized as follows. Notations are introduced in Section 2. Then in Section 3 some paradoxical results are presented in the one-way layout for the Kruskal-Wallis test and for the Hettmansperger-Norton trend test by means of certain tricky (non-transitive) dice. In the two-way layout, a paradoxical result for the Akritas-Arnold-Brunner test in a simple 2 × 2design is presented in Section 4 using a homoscedastic normal shift model. The theoretical background of the paradoxical results is discussed in Section 5 and a solution of the problem by using pseudo-ranks is investigated in detail. Moreover, the computation of confidence intervals is discussed and applied to the data in Section 4. Section 6 provides a cautionary note for the problem of sub-group analysis where typically unequal sample sizes appear. The paper closes with some guidelines for adequate application of rank procedures in the discussion and conclusions section. There, it is also briefly discussed that the use of pairwise and stratified rankings would make matters potentially worse. 
2 Statistical Model and Notations

For d ≥ 2 samples of N = \sum_{i=1}^{d} n_i independent observations X_{ik} \sim F_i = \tfrac{1}{2}[F_i^- + F_i^+], i = 1, \ldots, d, k = 1, \ldots, n_i, the nonparametric relative effects which underlie the rank tests are commonly defined as
\[
  p_i = \int H \, dF_i, \qquad \text{where } H = \frac{1}{N}\sum_{r=1}^{d} n_r F_r \tag{1}
\]
denotes the weighted mean distribution of the distributions F_1, \ldots, F_d in the design. Here we use the so-called normalized version of the distribution F_i to cover the cases of continuous as well as non-continuous distributions in a unified approach. Thus, the case of ties does not require a separate consideration. This idea was first mentioned by Kruskal (1952) and later considered in more detail by Ruymgaart (1980). Akritas, Arnold, and Brunner (1997) extended this approach to factorial designs, while Akritas and Brunner (1997) and Brunner, Munzel, and Puri (1999) applied this technique to repeated measures and longitudinal data.

Easily interpreted, p_i = P(Z < X_{i1}) + \tfrac{1}{2} P(Z = X_{i1}) is the probability that a randomly selected observation Z from the weighted mean distribution H is smaller than a randomly selected observation X_{i1} from the distribution F_i, plus \tfrac{1}{2} times the probability that both observations are equal. Thus, the quantity p_i measures an effect of the distribution F_i with respect to the weighted mean distribution H. In the case of two independent random variables X_1 \sim F_1 and X_2 \sim F_2, Birnbaum and Klose (1957) had called the function L(t) = F_2[F_1^{-1}(t)] the “relative distribution function” of X_1 and X_2, assuming continuous distributions. Thus, its expectation
\[
  \int_0^1 t \, dL(t) = \int_{-\infty}^{\infty} F_1(s) \, dF_2(s) = P(X_1 < X_2)
\]
is called a “relative effect”, with an obvious adaptation of the notation. In the same way, the quantity p_i = P(Z < X_{i1}) + \tfrac{1}{2} P(Z = X_{i1}) is called a “relative effect” of X_{i1} \sim F_i with respect to the weighted mean Z \sim H. This effect p_i is a linear combination of the pairwise effects w_{ri} = \int F_r \, dF_i. In vector notation, Equation (1) is written as
\[
  p = \int H \, dF = W' n =
  \begin{pmatrix} w_{11} & \cdots & w_{d1} \\ \vdots & \ddots & \vdots \\ w_{1d} & \cdots & w_{dd} \end{pmatrix}
  \begin{pmatrix} n_1/N \\ \vdots \\ n_d/N \end{pmatrix}
  = \begin{pmatrix} p_1 \\ \vdots \\ p_d \end{pmatrix}. \tag{2}
\]
Here, F = (F_1, \ldots, F_d)' denotes the vector of distribution functions, and
\[
  W = \int F \, dF' =
  \begin{pmatrix} w_{11} & \cdots & w_{1d} \\ \vdots & \ddots & \vdots \\ w_{d1} & \cdots & w_{dd} \end{pmatrix} \tag{3}
\]
is the matrix of pairwise effects w_{ri}. Note that w_{ii} = \tfrac{1}{2} and w_{ir} = 1 - w_{ri}, which follows from integration by parts. The relative effects p_i are arranged in the vector p = (p_1, \ldots, p_d)' and can be estimated consistently by the simple plug-in estimator
\[
  \widehat{p}_i = \int \widehat{H} \, d\widehat{F}_i = \frac{1}{N}\Big(\overline{R}_{i\cdot} - \tfrac{1}{2}\Big). \tag{4}
\]
Here, \widehat{F}_i denotes the (normalized) empirical distribution of X_{i1}, \ldots, X_{in_i}, i = 1, \ldots, d, and \widehat{H} = \frac{1}{N}\sum_{r=1}^{d} n_r \widehat{F}_r their weighted mean. Finally, \overline{R}_{i\cdot} = \frac{1}{n_i}\sum_{k=1}^{n_i} R_{ik} denotes the mean of the ranks
\[
  R_{ik} = \tfrac{1}{2} + N \widehat{H}(X_{ik}) = \tfrac{1}{2} + \sum_{r=1}^{d}\sum_{\ell=1}^{n_r} c(X_{ik} - X_{r\ell}), \tag{5}
\]
where the function c(u) = 0, 1/2, 1 for u <, =, or > 0, respectively, denotes the count function. The estimators \widehat{p}_1, \ldots, \widehat{p}_d are arranged in the vector
\[
  \widehat{p} = \int \widehat{H} \, d\widehat{F} = \frac{1}{N}\Big(\overline{R}_{\cdot} - \tfrac{1}{2}\mathbf{1}_d\Big), \tag{6}
\]
where \widehat{F} = (\widehat{F}_1, \ldots, \widehat{F}_d)' is the vector of the empirical distributions, \overline{R}_{\cdot} = (\overline{R}_{1\cdot}, \ldots, \overline{R}_{d\cdot})' is the vector of the rank means \overline{R}_{i\cdot}, and \mathbf{1}_d = (1, \ldots, 1)'_{d\times 1} denotes the vector of 1s.
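To make these definitions concrete, the plug-in estimator (4) can be evaluated directly from the mid-ranks of the pooled sample. The following Python sketch is ours and purely illustrative (the paper later points to the R package rankFD for a full implementation); the helper name weighted_relative_effects is chosen only for exposition.

```python
import numpy as np
from scipy.stats import rankdata

def weighted_relative_effects(samples):
    """Plug-in estimates of the weighted relative effects p_i, cf. (4)-(6).

    samples: list of 1-D arrays, one per group.  Mid-ranks over the pooled
    data correspond to the normalized distribution functions, so ties need
    no special treatment.
    """
    pooled = np.concatenate(samples)
    N = pooled.size
    ranks = rankdata(pooled, method="average")      # mid-ranks R_ik, eq. (5)
    p_hat, start = [], 0
    for x in samples:
        r_bar = ranks[start:start + x.size].mean()  # rank mean of group i
        p_hat.append((r_bar - 0.5) / N)             # eq. (4)
        start += x.size
    return np.array(p_hat)

# quick check: three groups from the same distribution -> all p_i close to 1/2
rng = np.random.default_rng(1)
print(weighted_relative_effects([rng.normal(size=n) for n in (10, 20, 50)]))
```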
In the following sections we demonstrate that for d ≥ 3 groups, rank tests may lead to paradoxical results in case of unequal sample sizes. In particular, for factorial designs involving two or more factors, Rthe nonparametric main effects and interactions (defined by the weighted relative effects pi j = HdFi j ) may be severely biased. 3 Paradoxical Results in the One-Way Layout To demonstrate some paradoxical results of rank tests for d ≥ 3 samples in the one-way layout, we consider the vector p = W 0 n in (2) of the nonparametric effects pi , which are all equal P P to their mean p· = d1 di=1 pi iff di=1 (pi − p· )2 = 0 or in matrix notation p0 T d p = 0. Here, T d = Id − d1 J d denotes the centering matrix, Id the d-dimensional unit matrix, and J d = 1d 10d R b F b denote the plug-in estimator of b the d × d-dimensional matrix of 1s. Let b p = Hd p defined in (6). √ In order to detect whether the the pi are different, we study the asymptotic distribution p. This is obtained from the asymptotic equivalence theorem (see, e.g., Akritas et al., of NT d b 1997; Brunner and Puri, 2001, 2002 or Brunner et al., 2017),   √ √ √ . NT d b p =. NT d Y · + Z · − 2p + NT d p, (7) R R . b are b and Z · = HdF where the symbol =. denotes asymptotic equivalence. Here, Y · = Hd F vectors of means of independent random vectors with expectation E(Y · ) = E(Z · ) = p. It follows   √ from the central limit theorem that NT d Y · + Z · − 2p has, asymptotically, a multivariate nor √ mal distribution with mean 0 and covariance matrix T d ΣN T d , where ΣN = Cov N[Y · + Z · ] has a quite involved structure √ (for details see Brunner et al., 2017). Obviously, the multivariate distribution is shifted by NT d p from the origin 0. Therefore, we call T d p the “multivariate non-centrality”, and a “univariate non-centrality” may be quantified by the quadratic form c p = p0 T d p. In particular, we have c p = 0 iff T d p =√0. The actual (multivariate) shift of the distribution, depending on the total sample size N, is NT d p, and the corresponding univariate non-centrality (depending on N) is then given by N · c p . From these considerations, it should become clear √ that N · c p → ∞ as N → ∞ if T d p , 0. This defines the consistency region of a test based on NT d b p. Below, we will demonstrate that for the same vector of distributions F, the non-centrality c p = p0 T d p may be 0 in case of equal sample sizes, while c p may be unequal to 0 in case of unequal sample sizes. Under H0F : T d F = 0, tests based on b p (such as the Kruskal-Wallis test) reject the hypothesis H0F with approximately the pre-assigned type-I error probability α. If, however, the strong hypothesis H0F is not true then the non-centrality c p = p0 T d p may be 0 or unequal to 0 for the same set of distributions F1 , . . . , Fd , since c p depends on the relative samples sizes n1 /N, . . . , nd /N. This means that for the same set of distributions F1 , . . . , Fd 6 Paradoxical Results of Rank Tests and unequal sample sizes the p-value of the test may be arbitrary small if N is large enough. However, the p-value for the same test may be quite large for the same total sample size N in case of equal sample sizes. Some well-known tests which have this paradoxical property are, for example, the Kruskal-Wallis test (1952), the Hettmansperger-Norton trend test (1987), and the Akritas-Arnold-Brunner test (1997). As an example, consider the case of d = 3 distributions where straightforward calculations show that 1. 
in case of equal sample sizes, p1 = p2 = p3 ⇐⇒ w21 = w32 = 1 − w31 = w, (8) w21 = w32 = w31 = 12 . (9) 2. in general, however, p1 = p2 = p3 ⇐⇒ This means that c p = p0 T d p = 0 in case of equal sample sizes, but c p , 0 in case of unequal sample sizes if w21 = w32 = 1 − w31 = w , 12 . We note that (9) follows under the strict hypothesis H0F : F1 = F2 = F3 . However, this null hypothesis is not a necessary condition for (9) to hold. For example, if Fi , i = 1, 2, 3 are R symmetric distributions with the same center of symmetry then wri = Fr dFi = 12 for i = 1, 2, 3. Thus, in this case, c p = 0 is also true for all samples sizes. An example of discrete distributions generating the nonparametric effects wri in (8) is given by the probability mass functions • f1 (x) = 1 6 if x ∈ {9, 16, 17, 20, 21, 22} and f1 (x) = 0 otherwise, • f2 (x) = 1 6 if x ∈ {13, 14, 15, 18, 19, 26} and f2 (x) = 0 otherwise, • f3 (x) = 1 6 if x ∈ {10, 11, 12, 23, 24, 25} and f3 (x) = 0 otherwise, which are derived from some tricky dice (see, e.g., Peterson, 2002). For the distribution functions Fi (x) defined by fi (x), i = 1, 2, 3 above, it is easily seen that Z w21 = P(X2 < X1 ) = F2 dF1 = 7/12 (10) Z w13 = P(X1 < X3 ) = F1 dF3 = 7/12 (11) Z w32 = P(X3 < X2 ) = F3 dF2 = 7/12. (12) Thus, w21 = w13 = 1 − w31 = w32 = w and the vector of the weighted relative effects is given by  1    n1 + n3 + (n2 − n3 )w  p1  1  2   0 p =  p2  = W n =  n1 + 12 n2 + (n3 − n1 )w   N  p3 n2 + 12 n3 + (n1 − n2 )w     .  (13) Paradoxical Results of Rank Tests 7 Table 1: Ratios of relative sample sizes ni/N , weighted relative effects pi , and the non-centralities for the example of the tricky dice where w = 7/12 and the distributions F1 , F2 , and F3 are fixed. Setting n1/N n2/N n3/N (A) (B) (C) 1/3 1/3 1/3 2/3 1/12 1/4 1/4 2/3 1/12 p1 p2 p3 0.5 0.5 0.5 0.4861 0.4653 0.5486 0.5486 0.4861 0.4653 p· cp 0.5 0 0.5 0.00376 0.5 0.00376 The weighted relative effects pi and the resulting non-centralities c p are listed in Table 1 for equal and some different unequal sample sizes. Since for unequal sample sizes one obtains c p , 0, it is only a question of choosing the total sample size N large enough to reject the hypothesis H0F : F1 = F2 = F3 by the Kruskal-Wallis test with a probability arbitrary close to 1 while in case of equal sample sizes for N → ∞ the probability of rejecting the hypothesis remains constant equal to α∗ (close to α) since in this case, c p = 0. It may be noted that in general α∗ , α since the variance estimator of the KruskalWallis statistic is computed under the strong hypothesis H0F : F1 = F2 = F3 , which is obviously not true here. Thus, the scaling is not correct, and the Kruskal-Wallis test has a slightly different type-I error α∗ . For the Hettmansperger-Norton trend test, the situation gets worse since for different ratios of sample sizes the nonparametric effects p1 , p2 , and p3 may change their order. In setting (B) in Table 1 we have p2 < p1 < p3 , while in setting (C) we have p3 < p2 < p1 . Now consider the non-centrality of the Hettmansperger-Norton trend test which is a linear rank test. Let c = (c1 , . . . , cd )0 denote a vector reflecting the conjectured pattern. Then it follows from (7) that  √ √ 0  √ 0 . N c Td b p =. N c T d Y · + Z · − 2p + N c0 T d p, (14) where cHN = c0 T d p is a univariate non-centrality. If T d F = 0 then it follows that T d p = 0 and cHN = c0 T d p = 0. 
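The entries of Table 1 can be verified numerically from the dice faces alone. The following sketch (ours, for illustration only) assembles the matrix W of pairwise effects, forms p = W'n for the three settings of relative sample sizes, and evaluates the non-centrality c_p = p'T_d p; it returns c_p = 0 for setting (A) and c_p ≈ 0.00376 for settings (B) and (C).

```python
import numpy as np
from itertools import product

# faces of the tricky dice generating w_21 = w_13 = w_32 = 7/12
faces = [np.array([9, 16, 17, 20, 21, 22]),
         np.array([13, 14, 15, 18, 19, 26]),
         np.array([10, 11, 12, 23, 24, 25])]
d = len(faces)

# pairwise effects w_ri = P(X_r < X_i) + 0.5 * P(X_r = X_i)
W = np.array([[np.mean([(a < b) + 0.5 * (a == b)
                        for a, b in product(faces[r], faces[i])])
               for i in range(d)]
              for r in range(d)])

T = np.eye(d) - np.ones((d, d)) / d          # centering matrix T_d

settings = {"(A)": (1/3, 1/3, 1/3),
            "(B)": (2/3, 1/12, 1/4),
            "(C)": (1/4, 2/3, 1/12)}
for label, frac in settings.items():
    n = np.array(frac)                       # relative sample sizes n_i / N
    p = W.T @ n                              # weighted relative effects, eq. (2)
    c_p = p @ T @ p                          # univariate non-centrality
    print(label, np.round(p, 4), round(float(c_p), 5))
```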
If, however, T d F , 0 then cHN < 0 indicates a decreasing trend and cHN > 0 an increasing trend. In the above discussed example, we obtain for setting (B) and for a conjectured pattern of c = (1, 2, 3)0 for an increasing trend the non-centrality cHN = P3 1 1 i=1 ci (pi − 2 ) = /16 > 0, indeed indicating an increasing trend. For setting (C) however, we 1 obtain cHN = − /12, indicating a decreasing trend. In case of setting (A) (equal sample sizes), cHN = 0 since p1 = p2 = p3 = 21 , and thus indicating no trend. Again it is a question of the total sample size N to obtain the decision of a significantly decreasing trend for the first setting (B) of unequal sample sizes and for the second setting (C) the decision of a significantly increasing trend with F1 , F2 , and F3 . In the fist √ a probability arbitrary close to 1 for the same distributions √ case, N · cHN → ∞ for N → ∞ and in the second case, N · cHN → −∞ for N → ∞. In case of equal sample sizes, the hypothesis of no trend is only rejected with a type-I error probability α∗∗ . Regarding α∗∗ , α, a similar remark applies as above for the Kruskal-Wallis test. 8 4 Paradoxical Results of Rank Tests Paradoxical Results in the Two-Way Layout In the previous section, paradoxical decisions by rank tests in case of unequal sample sizes were demonstrated for the one-way layout using large sample sizes and particular distributions leading to non-transitive decisions. In this section, we will show that in two-way layouts paradoxical results are already possible with rather small sample sizes and even in simple homoscedastic normal shift models. To this end, we consider the simple 2 × 2-design with two crossed factors A and B, each with two levels i = 1, 2 for A and j = 1, 2 for B. The observations Xi jk ∼ Fi j , k = 1, . . . , ni j , are assumed to be independent. The hypotheses of no nonparametric effects in terms of the distribution functions Fi j (x) are expressed as (see Akritas et al., 1997) (1) no main effect of factor A - H0F (A) : F11 + F12 − F21 − F22 = 0 (2) no main effect of factor B - H0F (B) : F11 − F12 + F21 − F22 = 0 (3) no interaction AB - H0F (AB) : F11 − F12 − F21 + F22 = 0, where in all three cases, 0 denotes a function which is identical 0. Let F = (F11 , F12 , F21 , F22 )0 denote the vector of the distribution functions. Then the three hypotheses formulated above can be written in matrix notation as H0F (c) : c0 F = 0, where c = cA = (1, 1, −1, −1)0 generates the hypothesis for the main effect A, c = cB = (1, −1, 1, −1)0 for the main effect B, and c = cAB = (1, −1, −1, 1)0 for the interaction AB. For testing these hypotheses, Akritas et al. (1997) derived rank procedures based on the statistic T N (c) = √ 1 N c0 b p = √ c0 R· , N (15) where R· = (R11· , R12· , R21· , R22· )0 denotes the vector of the rank means Ri j· within the four samples. They showed that under the hypothesis H0F (c), the statistic T N (c) has, asymptotically, a normal distribution with mean 0 and variance σ20 2 X 2 X N 2 = σ , n ij i=1 j=1 i j (16) where the unknown variances σ2i j (see Akritas et al., 1997, for their explicit form) can be consistently estimated by ni j X 1 2 1 S = (Ri jk − Ri j· )2 N2 i j N 2 (ni j − 1) k=1 (17) Paradoxical Results of Rank Tests 9 Table 2: Comparison of the empirical distributions in the four factor level combinations A1 B1 , A2 B1 , A1 B2 , and A2 B2 sampled from homoscedastic normal distributions with µ11 = 10, µ12 = µ21 = 9, µ22 = 8, and standard deviation τ = 0.4. 
Within each factor level combination Ai B j , i, j = 1, 2, the unadjusted p-values of a t-test comparing the location of balanced and unbalanced samples, more specifically testing µi j (balanced) = µi j (unbalanced), are listed in the last column. Level Means Standard Deviations t-Tests Combinations Balanced Unbalanced Balanced Unbalanced p-Values A1 B1 A2 B1 A1 B2 A2 B2 10.07 9.04 9.05 8.07 9.91 8.95 8.99 8.02 0.311 0.380 0.408 0.371 0.314 0.313 0.480 0.359 0.178 0.409 0.684 0.562 P P and b σ20 = N 2i=1 2j=1 S i2j /ni j . For small sample sizes, the null distribution of LN (c) = T N (c)/b σ0 can be approximated by a t f -distribution with estimated degrees of freedom S 04 b f = P2 P2 i=1 2 2 j=1 (S i j /ni j ) /(ni j − 1) , (18) P P where S 02 = 2i=1 2j=1 S i2j /ni j . The non-centrality of T N (c) is given by cT = c0 p, and under the restrictive null hypothesis H0F (c) : c0 F = 0 it follows that c0T p = 0. To demonstrate a paradoxical result, we assume that the observations Xi jk are coming from the normal distributions N(µi j , τ2 ) with equal standard deviations τ = 0.4 and expectations µ = (µ11 , µ12 , µ21 , µ22 )0 = (10, 9, 9, 8)0 . From the viewpoint of linear models, there is a main effect A of c0A µ = µ11 + µ12 − µ21 − µ22 = 2, a main effect B of c0B µ = µ11 − µ12 + µ21 − µ22 = 2, and no AB-interaction since c0AB µ = µ11 − µ12 − µ21 + µ22 = 0. Since this is a homoscedastic linear model, the classical ANOVA should reject the hypotheses H0µ (cA ) : c0A µ = 0 and H0µ (cB ) : c0B µ = 0 with a high probability if the total sample size is large enough. In contrast to that, the hypothesis H0µ (cAB ) : c0AB µ = 0 of no interaction is only rejected with the pre-selected typeI error probability α. The non-centralities are given by cµA = c0A µ = 2, cµB = c0B µ = 2, and cµAB = c0AB µ = 0. The following two settings of samples sizes ni j are considered: (1) n11 = 10, n12 = 20, n21 = 20, n22 = 50, (2) n11 = n12 = n21 = n22 = 25, - (unbalanced) - (balanced). First we demonstrate that the empirical characteristics of the two data sets, which are sampled from the same distributions, are nearly identical. Thus, potentially different results could not be explained by substantially different empirical distributions obtained by an “unhappy randomization”. The results of the comparisons are listed in Table 2. We apply the classical ANOVA F-statistic and the rank statistic LN (c) = T N (c)/b σ0 to the same simulated data sets from Table 2 and compare the results for testing the three hypotheses 10 Paradoxical Results of Rank Tests Table 3: Comparison of the results obtained by an ANOVA and by the rank test LN (c) in case of a balanced (left) and an unbalanced (right) 2 × 2-design. Surprising is the fact that in the balanced case, the decisions of both procedures coincide while in the unbalanced case the decisions for testing the interaction AB are totally opposite. Balanced n11 = n12 = n21 = n22 = 25 ANOVA Effect A B AB Unbalanced n11 = 10, n12 = 20, n21 = 20, n22 = 50 Rank Test ANOVA Rank Test F p-Value L2N (c) p-Value F p-Value L2N (c) p-Value 184.43 182.60 0.12 < 10−4 < 10−4 0.7317 168.19 169.32 0.00 < 10−4 < 10−4 0.9541 120.99 111.48 0.01 < 10−4 < 10−4 0.9112 150.07 143.55 7.99 < 10−4 < 10−4 0.0065 H0F (A), H0F (B), and H0F (AB). We note that H0F (c) : c0 F = 0 ⇒ H0µ (c) : c0 µ = 0 and c0 µ , 0 ⇒ c0 F , 0. The decisions for H0F (A), H0F (B), and H0F (AB) obtained by the ANOVA as well as by the rank tests based on LN (c) are identical in all cases in the balanced setting. 
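As a concrete illustration of how the rank statistic L_N(c) of (15)-(18) is computed, the following sketch can be applied to data simulated as in Table 2; it is ours (the cell order, helper name, and two-sided p-value convention are chosen for exposition, and the R package rankFD provides a tested implementation) and shows the same qualitative behaviour as the rank-test columns of Table 3.

```python
import numpy as np
from scipy.stats import rankdata, t as t_dist

def rank_contrast_test(groups, c):
    """Rank statistic L_N(c) = T_N(c) / sigma_0_hat of (15)-(18).

    groups: list of 1-D arrays in cell order (A1B1, A1B2, A2B1, A2B2);
    c: contrast vector, e.g. (1, -1, -1, 1) for the interaction AB."""
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    ranks = rankdata(np.concatenate(groups))                 # overall mid-ranks
    cells = np.split(ranks, np.cumsum(n.astype(int))[:-1])
    R_bar = np.array([r.mean() for r in cells])              # cell rank means
    s2 = np.array([r.var(ddof=1) for r in cells]) / N**2     # eq. (17)
    T = (c @ R_bar) / np.sqrt(N)                             # eq. (15)
    sigma2_0 = N * np.sum(s2 / n)                            # eq. (16), plug-in
    df = np.sum(s2 / n)**2 / np.sum((s2 / n)**2 / (n - 1))   # eq. (18)
    L = T / np.sqrt(sigma2_0)
    return L, df, 2 * t_dist.sf(abs(L), df)

# unbalanced setting of Section 4: cell means (10, 9, 9, 8), tau = 0.4
rng = np.random.default_rng(0)
mu, sizes = (10, 9, 9, 8), (10, 20, 20, 50)
data = [rng.normal(m, 0.4, s) for m, s in zip(mu, sizes)]
print(rank_contrast_test(data, np.array([1, -1, -1, 1])))    # interaction AB
```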
In the unbalanced setting, all decisions obtained by the parametric ANOVA are comparable to those in the balanced setting. However, the decision on the interaction AB based on the rank test is totally different from that obtained by the parametric ANOVA, as well as that obtained for the rank test in the balanced setting. The results are summarized in Table 3. On the surface, the difference of the decisions in the unbalanced case could be explained by the fact that the nonparametric hypothesis H0F (AB) and the parametric hypothesis H0µ (AB) are not identical and that this particular configuration of normal distributions falls into the gap between H0F (AB) and H0µ (AB). That is, here H0µ (AB) is true, but H0F (AB) is not. It is surprising, however, that this explanation does not hold for the balanced case. The difference of the two p-values 0.9541 and 0.0065 in Table 3 calls for an explanation. The reason becomes clear when computing the vector p in (2) for this particular example of the 2 × 2-design. To avoid fourfold indices we re-label the distributions F11 , F12 , F21 , and F22 as F1 , F2 , F3 , and F4 , respectively, and the sample sizes accordingly as n1 , n2 , n3 , and n4 . In the 2 2 2 example, F1 = N(10, R τ ), F2 = F3 = N(9, τ ), and F4 = N(8, τ ), where τ = 0.4. Thus, the probabilities wri = Fr dFi of the pairwise comparisons are w = w12 = w13 = w24 = w34 w23 = w32 v = w14 ! −1 = 0.0392, = Φ √ τ 2 1 = , 2 √   − 2   ≈ 0. = Φ  τ Paradoxical Results of Rank Tests Finally, by observing wri = 1 − wir , we obtain  1  2 n1 + n2 + n3 + n4 − (n2 + n3 )w − n4 v  1  2 (n2 + n3 ) + n4 + (n1 − n4 )w 1  p = W0 n = N  12 (n2 + n3 ) + n4 + (n1 − n4 )w  1 n + (n2 + n3 )w + n1 v 2 4 11      ,    and the nonparametric AB-interaction is described by p cAB = c0AB p = p1 − p2 − p3 + p4  n1 − n4  1 − 2w + v . = 2 N (19) p In this example, we obtain for equal samples sizes cAB = 0, while for n = (10, 20, 20, 50)   √ p p 2 1 we obtain cAB = − 5 2 − 2w + v ≈ −0.1686 and NcAB ≈ −1.686. This explains the small p-value for unequal sample sizes in the example. 5 Explanation of the Paradoxical Results 5.1 Unweighted Effects and Pseudo-Ranks The simple reason for the paradoxical results is the fact that even when all distribution functions underlying the observations are specified, the consistency regions C0 p , 0 of the rank tests based on C0 b p are not fixed. Indeed, the consistency regions are defined by the weighted nonparametric relative effects pi which are not fixed model quantities by which hypotheses could be formulated or for which confidence intervals could be reasonably computed since the pi themselves generally depend on the sample sizes ni . Thus, it appears reasonable to define different nonparametric effects which are fixed model 1 Pd quantities not depending on sample R sizes. To this end let G = d r=1 Fr denote the unweighted mean distribution, and let ψi = GdFi . Easily interpreted, this nonparametric effect ψi measures an effect of the distribution Fi with respect to the unweighted mean distribution G and is P therefore a “fixed relative effect”. As ψi = d1 dr=1 wri is the mean of the pairwise nonparametric effects w1i , . . . , wdi , it can be written in vector notation as the vector of row means of W 0 , that is,     Z  w11 , · · · wd1   ψ1   ..  · 1 1 =  ..  . .. ψ = GdF = W 0 · d1 1d =  ... (20)  .  . . 
 d d     w1d , · · · wdd ψd The fixed relative effects ψi can be estimated consistently by the simple plug-in estimator Z   b F bi = 1 Rψi· − 1 , b ψi = Gd (21) 2 N 12 Paradoxical Results of Rank Tests b= where G and ψ Ri· = 1 ni 1 d Pd r=1 br denotes the unweighted mean of the empirical distributions F b1 , . . . , F bd , F ψ k=1 Rik Pni the mean of the so-called pseudo-ranks ps-rank(Xik ) = Rψik nr d 1 NX 1 X 1 b + N G(Xik ) = + c(Xik − Xr` ). = 2 2 d r=1 nr `=1 (22) Finally, the estimators b ψi are arranged in the vector    b =  ψ  ψ ψ b ψ1 .. . b ψd   Z  ψ   b F b = 1 R· − 1 1d ,  = Gd 2  N ψ (23) ψ where R· = (R1· , . . . , Rd· )0 is the vector of the pseudo-rank means Ri· . It may be noted that the pseudo-ranks Rψik have similar properties as the ranks Rik . The properties given below follow from the definitions of the ranks and pseudo-ranks and by some straightforward algebra. P Lemma 1 Let Xik denote N = di=1 ni observations arranged in i = 1, . . . , d groups each involving ni observations. Then, for i, r = 1, . . . , d and k = 1, . . . , ni , ` = 1, . . . , nr , 1. If Xik < Xr` then Rik < Rr` and Rψik < Rψr` . 2. If Xik = Xr` then Rik = Rr` and Rψik = Rψr` . 3. 1 N Pd Pni i=1 k=1 Rik = 1 d Pd 1 i=1 ni Pni k=1 Rψik = N+1 . 2 4. If m(u) is a strictly monotone transformation of u then, (a) Rik = rank(Xik ) = rank(m(Xik )) (b) Rψik = ps-rank(Xik ) = ps-rank (m(Xik )). 5. 1 ≤ Rik ≤ N. 6. d+1 1 N 1 N d−1 ≤ + ≤ Rψik ≤ N + − ≤N+ 2d 2 2dni 2 2dni 2d 7. If ni ≡ n, i = 1, . . . , d, then Rik = Rψik 8. Let Xik , i = 1, . . . , d; k = 1, . . . , ni be independent and identically distributed random variables, then   N+1 E (Rik ) = E Rψik = . 2 Paradoxical Results of Rank Tests 5.2 13 Consistency Regions of Pseudo-Rank Procedures As a solution to the paradoxical results discussed in Sections 3 and 4, we demonstrate that replacing the ranks Rik with the pseudo-ranks Rψik leads to procedures that do not have these undesirable properties. The main reason is that pseudo-rank procedures are based on the (unweighted) relative effects ψi which are fixed model quantities by which hypotheses can be formulated and for which confidence intervals can be derived. In case of equal sample sizes ni ≡ n, i = 1, . . . , d, we do not obtain paradoxical results since in this case ranks and pseudo-ranks coincide, Rik = Rψik (see Lemma 1). Pseudo-rank based inference procedures are obtained in much the same way as the common rank procedures, by using relations (21) and (22), which generally means substituting ranks by the corresponding pseudo-ranks. Only in the computation of the confidence intervals (see Section 5.3), there is a minor change in the variance estimator. For details we refer to Brunner, Bathke, and Konietschke (2018, Result 4.16 in Section 4.6.1). Below, we examine the behavior of pseudo-rank procedures in the situations where the use of rank tests led to paradoxical results. 1. For testing the hypothesis H0F : F1 = F2 = F3 ⇐⇒ T 3 F = 0 in case of the tricky dice in the example in Section 3, one obtains for the non-centrality of the Kruskal-Wallis statistic, when the ranks Rik are replaced with the pseudo-ranks Rψik , the value cψ = ψ0 T 3 ψ = 0. This is easily seen from (20), since in this case it follows from (10), (11), and (12) that ψ = W 0 31 13 = 21 13 and cψ = ψ0 T 3 ψ = 12 103 T 3 12 13 = 0 by noting that T 3 is a contrast matrix and thus, T 3 13 = 0. 2. 
The non-centrality for the Hettmansperger-Norton trend test when substituting ranks with pseudo-ranks becomes cψHN = c0 T 3 ψ = c0 T 3 W 0 31 13 = 0 for all trend alternatives c = (c1 , c2 , c3 )0 . 3. In the two-way layout, we reconsider the example of the four shifted normal distributions we obtain the unweighted relative effects ψi j ψ = W0 · 1 1 4 4  7  /2 − 2w − v 1  2 =  4  2 1/2 + 2w + v       and the non-centrality of the statistic for the interaction cψAB = ψ1 − ψ2 − ψ3 + ψ4 = 0. (24) In the analysis of the unbalanced data in Table 3 using pseudo-ranks, one obtains for the interaction the statistic L2N (cAB ) = 0.08 and the resulting p-value is 0.7832 which agrees with the result for the balanced sample from the same distributions in Table 3. 14 Paradoxical Results of Rank Tests In all cases, paradoxical results obtained by changing the ratios of the sample sizes cannot occur since the non-centralities cψ , cψHN , and cψAB are equal to 0 for all constellations of the relative sample sizes. R In case of d = 2 samples, it is easily seen that p2 − p1 = ψ2 − ψ1 = p = F1 dF2 which does not depend on sample sizes. Thus, paradoxical results for rank-based tests – such as presented in the previous sections – can only occur for d ≥ 3 samples. 5.3 Confidence Intervals Here we briefly explain the details on how to compute confidence intervals for fixed nonparametric effects which have an intuitive and easy to understand interpretation. This shall be demonstrated by means of the example involving the four shifted normal distributions considered in Section 4. The quantity b ψi estimates the probability that a randomly drawn observation from the mean P distribution G = 14 4i=1 Fi is smaller than a randomly drawn observation from distribution Fi (plus 12 times the probability that they are equal). The estimator is obtained from (21), and the limits of the confidence interval are obtained from formula (25) in Brunner et al. (2017) or Result 4.16 in Chapter 4 of Brunner, Bathke, and Konietschke (2018). The results are listed in Table 4. Table 4: Estimates and two-sided 95%-confidence intervals [ψi,L , ψi,U ] for the (unweighted) nonparametric rel- ative effects ψi of the two data sets considered in Table 3. The results for the balanced case (upper part) and the unbalanced case (lower part) are quite similar. Equal Sample Sizes n1 = n2 = n3 = n4 = 25 Sample Lower Limit ψi,L Estimator ψi Upper Limit ψi,U 1 2 3 4 0.84 0.45 0.45 0.12 0.86 0.50 0.50 0.14 0.88 0.55 0.55 0.16 Unequal Sample Sizes n1 = 10, n2 = n3 = 20, n4 = 50 Sample Lower Limit ψi,L Estimator ψi Upper Limit ψi,U 1 2 3 4 0.85 0.45 0.43 0.13 0.86 0.51 0.48 0.15 0.88 0.57 0.53 0.16 Obviously, the results for equal and unequal sample sizes are nearly identical. Let samples 1 and 2 refer to the factor level combinations A1 B1 and A2 B1 within level B1 and samples 3 and Paradoxical Results of Rank Tests 15 4 to the factor level combination A1 B2 and A2 B2 within level B2 , respectively. The differences b ψ1 − b ψ2 and b ψ3 − b ψ4 between factor levels A1 and A2 of the estimates in factor level B1 and in factor level B2 are identical in the balanced case (b ψ1 − b ψ2 = 0.36, b ψ3 − b ψ4 = 0.36) and nearly identical (b ψ1 − b ψ2 = 0.35, b ψ3 − b ψ4 = 0.33) in the unbalanced case. These results do not indicate any interaction. This finding agrees with the analysis of the balanced data in Table 3 when using ranks. 
For the weighted relative effects pi in (1), the estimators b pi based on ranks, as well as 95%intervals for pi are listed in Table 5. The 95%-intervals for pi , however, cannot strictly be interpreted as confidence intervals since the quantities pi depend on the relative sample sizes, unless the design is balanced. In case of equal sample sizes we have pi = ψi . Therefore only the results for the unbalanced case are listed in Table 5. The 95%-intervals for pi are obtained from formulas (1.14) - (1.17) in Brunner and Puri (2001) and formulas (2.2.33) - (2.2.35) in Section 2.2.7 in Brunner and Munzel (2013). Comparing the differences of the estimates within the two factor levels B1 and B2 using the usual ranks, we obtain b p1 − b p2 = 0.25 and b p3 − b p4 = 0.41 which are quite different. Such a difference indicates an interaction effect, which agrees with the results of the analysis of the unbalanced data in Table 3. Moreover, the corresponding 95%-intervals for the weighted effects pi are totally different from the 95%-confidence intervals for the unweighted effects ψi in Table 4. They even do not overlap with the confidence intervals for the ψi . Thus, only the nonparametric effects ψi offer the possibility of computing estimates and confidence intervals for fixed model quantities with an intuitive interpretation. Table 5: Estimates and 95%-intervals [pi,L , pi,U ] for the weighted effects pi of the data set in the unbalanced case in Table 2. The results for the balanced case are identical to those in Table 4 since pi = ψi for equal sample sizes. Unequal Sample Sizes n1 = 10, n2 = n3 = 20, n4 = 50 6 Sample Lower Limit pi,L Estimate b pi Upper Limit pi,U 1 2 3 4 0.93 0.62 0.63 0.25 0.94 0.69 0.68 0.27 0.95 0.74 0.72 0.28 Cautionary Note for Sub-Group Analysis The fact that the (weighted) relative effects in (2), which are estimated by the ranks, depend on the relative sample sizes ni /N is important in sub-group analyses, for example in clinical trials. Here, typically, the total group of patients is quite large while the sub-group may be small. The design of such a trial can be considered as a 2 × 2-design where F11 and F12 are the distributions 16 Paradoxical Results of Rank Tests of the outcome for the patients who received treatment 1 (e.g., standard treatment with distribution F11 ) or treatment 2 (experimental treatment with distribution F12 ), without the particular sub-group of interest. The corresponding larger sample sizes, n11 and n12 are approximately the same when using an appropriate randomization procedure. The smaller sample sizes n21 and n22 in the sub-group may be equal or quite different depending on whether or not the randomization was also stratified for the sub-group. The main question regarding the sub-group in this design is whether the treatment effect is the same as in the population of patients without the sub-group. Here we consider only the case where the sub-group is known in advance and is not identified on the basis of the data. In order to demonstrate the advantage of using the unweighted relative effects in (20) estimated by the pseudo-ranks, we consider the homoscedastic two-way design from Section 4. To this end, let cAB = (1, −1, −1, 1)0 denote the contrast vector for the interaction in the 2 × 2-design as given in Section 4 and let p denote the vector of the weighted relative effects estimated by b p using the b using ranks. 
Moreover, let ψ denote the vector of the unweighted relative effects estimated by ψ pseudo-ranks, and finally let µ denote the vector of the expectations. Then the corresponding p non-centralities are given by cAB = c0AB p, cψAB = c0AB ψ, and cµAB = c0AB µ. The total sample size N is increased such that the sample sizes n21 = n22 = 50 in the sub-group remain constant while the samples sizes n11 = n12 obtained from the population without the sub-group increases from 50 to For √ the interaction term of the procedure based on ranks, we also consider the quantity √ 2000. p NcAB = N c0AB p describing the shift of the asymptotic multivariate normal distribution. It √ p is noteworthy that with increasing total sample size N not only NcAB is increasing but also p the basic non-centrality cAB increases while the corresponding non-centralities cψAB and cµAB of the two other procedures remain constant equal to 0. Note that for equal sample sizes, all non-centralities are 0. The results are listed in Table 6. Analyses of this type are very common in biostatistical practice. In particular, the subgroup’s proportion of the total sample size is often very small, as, for example. in case of rare disease studies with large control groups. The above example further illustrates the need for changing the usage of classical rank procedures to more favorable pseudo-rank procedures. 7 Discussion and Conclusions We have demonstrated that inference based on ranks may lead to paradox results in the case of at least three samples and an unbalanced design. These results may occur in rather natural situations and are not restricted to artificial configurations of the population distributions. The reason for this behavior of rank-based tests (including some classical and frequently used tests) can be explained by the non-centralities of the test statistics which are functions of weighted nonparametric relative effects. Moreover, it is worth to note that neither pairwise nor stratified rankings provide a solution to the problem. In fact, it would only cause new problems. For a discussion of these types of rankings see Brunner et al., (2018). As a remedy, the use of unweighted nonparametric relative effects, and thus, the use of tests based on pseudo-ranks instead of ranks, is recommended. Paradoxical Results of Rank Tests 17 p Table 6: Non-centralities cAB , cψAB , and cµAB of the interaction statistics in the 2 × 2-design with increasing sample sizes in only one stratum. This reflects the situation of a sub-group analysis where the sample size the sub-group √ in p is also listed is small compared to the total population. For the procedure based on ranks, the quantity NcAB to demonstrate the shift of its asymptotic multivariate normal distribution generated only by the unequal sample sizes in the two strata. The non-centralities cψAB and cµAB of the two statistics based on the pseudo-ranks or on the expectations are both equal to 0, independently of the sample sizes. Sample Sizes Non-Centralities n11 = n12 n21 = n22 cµAB 50 100 200 300 400 500 1000 2000 50 50 50 50 50 50 50 50 0 0 0 0 0 0 0 0 cψAB p cAB 0 0 0 0 0 0 0 0 0 0.071 0.127 0.151 0.165 0.173 0.191 0.201 √ p NcAB 0 1.22 2.84 4.00 4.94 5.74 8.77 12.89 A completely different approach to circumventing the above mentioned problem could be given by so-called aligned rank procedures (see Hodges and Lehmann, 1962 or Puri and Sen, 1985). 
However, these procedures are restricted to semi-parametric models and are not applicable to ordinal data since the hypotheses are formulated in terms of artificially introduced parameters of metric data. In addition, the typical invariance under strictly monotone transformations of the data is lost by the two different transformations. On the contrary, the proposed pseudo-rank procedures share the typical invariance of rankbased procedures and do not lead to the paradoxical results described in Sections 3 and 4. They can additionally be used to construct confidence intervals for meaningful nonparametric effect sizes as described in Section 5.3. In case of factorial designs with independent observations pseudo-rank procedures are already implemented in the R-package rankFD by choosing the option for unweighted effects. 18 8 Paradoxical Results of Rank Tests References Akritas, M. G., Arnold, S. F. and Brunner, E. (1997). Nonparametric hypotheses and rank statistics for unbalanced factorial designs. Journal of the American Statistical Association 92, 258-265. Akritas, M. G. and Brunner, E. (1997). A unified approach to ranks tests in mixed models. Journal of Statistical Planning and Inference 61, 249–277. Bewick, V., Cheek, L., and Ball, J. (2004). Statistics Review 10: Further Nonparametric Methods. Critical Care 8, 196-199. Birnbaum, Z. W. and Klose, O. M. (1957). Bounds for the Variance of the Mann-Whitney Statistic. Annals of Mathematical Statistics 28, 933–945. Brunner, E., Bathke, A. C., and Konietschke, F. (2018). Rank- and Pseudo-Rank Procedures in Factorial Designs - Using R and SAS - Independent Observations. Lecture Notes in Statistics, Springer, Heidelberg. Brunner, E., Konietschke, F., Pauly, M., and Puri, M.L. (2017). Rank-Based Procedures in Factorial Designs: Hypotheses about Nonparametric Treatment Effects. Journal of the Royal Statistical Society Series B, 79, 1463–1485. Brunner, E. and Munzel, U. (2000). The Nonparametric Behrens-Fisher Problem: Asymptotic Theory and a Small-Sample Approximation. Biometrical Journal 42, 17–25. Brunner, E. and Munzel, U. (2013). Nichtparametrische Datenanalyse. 2nd edition, Springer, Heidelberg. Brunner, E. and Puri, M.L. (2001). Nonparametric Methods in Factorial Designs. Statistical Papers 42, 1–52. Brunner, E. and Puri, M. L. (2002). A class of rank-score tests in factorial designs. Journal of Statistical Planning and Inference 103, 331–360. Gao, X. and Alvo, M. (2005a). A Nonparametric Test for Interaction in Two-Way Layouts. The Canadian Journal of Statistics 33, 529–543. Gao, X. and Alvo, M. (2005b). A Unified Nonparametric Approach for Unbalanced Factorial Designs. Journal of the American Statistical Association, 100, 926–941. Govindarajulu, Z. (1968). Distribution-free confidence bounds for Pr{X < Y}. Annals of the Institute of Statistical Mathematics 20, 229–238. Hettmansperger, T. P. and Norton, R. M. (1987). Tests for patterned alternatives in k-sample problems. Journal of the American Statistical Association 82, 292–299. Paradoxical Results of Rank Tests 19 Jonckheere, A. R. (1954). A Distribution-free k-sample Test Against Ordered Alternatives. Biometrika 41, 133–145. Konietschke, F. (2009). Simultane Konfidenzintervalle für nichtparametrische relative Kontrasteffekte. PhD thesis, Inst. of Math. Stochastics, University of Göttingen. Konietschke, F. Hothorn, L. A., and Brunner, E. (2012). Rank-based multiple test procedures and simultaneous confidence intervals. Electronic Journal of Statistics 6, 737–758. 
DOI: 10.1214/12-EJS691 Kruskal, W. H. (1952). A Nonparametric Test for the Several Sample Problem. Annals of Mathematical Statistics 23, 525–540. Kruskal, W. H. and Wallis, W. A. (1952). The use of ranks in one-criterion variance analysis. Journal of the American Statistical Association 47, 583–621. Kulle, B. (1999). Nichtparametrisches Behrens-Fisher-Problem im Mehrstichprobenfall. Diploma Thesis, Inst. of Math. Stochastics, University of Göttingen. Peterson, I. (2002). Tricky Dice Revisited. Science News 161, https://www.sciencenews.org/article/tricky-dice-revisited Puri, M. L. (1964). Asymptotic efficiency of a class of c-sample tests. Annals of Mathematical Statistics 35, 102–121. Puri, M. L. and Sen, P. K. (1985). Nonparametric methods in general linear models. Wiley, New York. Ruymgaart, F. H. (1980). A unified approach to the asymptotic distribution theory of certain midrank statistics. In: Statistique non Parametrique Asymptotique, 1–18, J.P. Raoult (Ed.), Lecture Notes on Mathematics, No. 821, Springer, Berlin. Sen, P. K. (1967). A note on asymptotically distribution-free confidence intervals for Pr(X < Y) based on two independent samples. Sankhya, Series A 29, 95–102. Thangavelu, K. and Brunner, E. (2007). Wilcoxon-Mann-Whitney test for stratified samples and Efron’s paradox dice. Journal of Statistical Planning and Inference 137, 720–737. Terpstra, T. J. (1952). The asymptotic normality and consistency of Kendall’s test against trend, when ties are present in one ranking. Indagationes Mathematicae 14, 327–333. Whitley, E., and Ball, J. (2002). Statistics Review 6: Nonparametric Methods. Critical Care, 6(6), 509-513. Zaremba, S. K. (1962). A generalization of Wilcoxon’s tests. Monatshefte für Mathematik 66, 359–70.
Infinite Horizon Optimal Transmission Power Control for Remote State Estimation over Fading Channels Xiaoqiang Ren, Junfeng Wu, Karl Henrik Johansson, Guodong Shi, and Ling Shi∗†‡ arXiv:1604.08680v1 [] 29 Apr 2016 Abstract Jointly optimal transmission power control and remote estimation over an infinite horizon is studied. A sensor observes a dynamic process and sends its observations to a remote estimator over a wireless fading channel characterized by a time-homogeneous Markov chain. The successful transmission probability depends on both the channel gains and the transmission power used by the sensor. The transmission power control rule and the remote estimator should be jointly designed, aiming to minimize an infinite-horizon cost consisting of the power usage and the remote estimation error. A first question one may ask is: Does this joint optimization problem have a solution? We formulate the joint optimization problem as an average cost belief-state Markov decision process and answer the question by proving that there exists an optimal deterministic and stationary policy. We then show that when the monitored dynamic process is scalar, the optimal remote estimates depend only on the most recently received sensor observation, and the optimal transmission power is symmetric and monotonically increasing with respect to the innovation error. 1 Introduction In networked control systems, control loops are often closed over a shared wireless communication network. This motivates research on remote state estimation problems, where a sensor measures the state of a linear system and transmits its observations to a remote estimator over a wireless fading channel. Such monitoring problems appear in a wide range of applications in environmental monitoring, space exploration, smart grids, intelligent buildings, among others. The challenges introduced by the networked setting lie in the fact that nonideal communication environment and constrained power supplies at sensing nodes may result in overall system performance degradation. The past decade has witnessed tremendous research efforts devoted to communication-constrained estimation problems, with the purpose of establishing a balance between estimation performance and communication cost. ∗ X. Ren and L. Shi are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong. Email: [email protected], [email protected] † J. Wu and K. H. Johansson are with the ACCESS Linnaeus Center, School of Electrical Engineering, Royal Institute of Technology, Stockholm, Sweden. Email: [email protected], [email protected] ‡ G. Shi is with College of Engineering and Computer Science, The Australian National University, Canberra, Australia. Email: [email protected] 1 1.1 Related Work Wireless communications are being widely used nowadays in sensor networks and networked control systems. The interface of control and wireless communication has been a central theme in the study of networked sensing and control systems in the past decade. Early works assumed finite-capacity digital channels and focused on the minimum channel capacity or data rate needed for feedback stabilization, and on constructing encoder-decoder pairs to improve performance, e.g., [1–5]. Motivated by the fact that packets are the fundamental information carrier in most modern data networks [6], networked control and estimation subject to packet delays [7–9] and packet losses [10–13] has been extensively studied. 
State estimation is embedded in many networked control applications, playing a fundamental role therein. For networked state estimation subject to limited communication resource, the research on controlled communication has been extensive, see the survey [6]. Controlled communication, in general referring to reducing the communication rate intentionally to obtain a desirable tradeoff between the estimation performance and the communication rate, is motivated from at least two facts: (i). Wireless sensors are usually battery-powered and sparsely deployed, and replacement of battery is difficult or even impossible, so the amount of communication needs to e kept at a minimum as communication is often the dominating on-board energy consumer [14]. (ii). Traffic congestion in a sensor network many lead to packet losses and other network performance degradation. To minimize the inevitable enlarged estimation error due to reduced communication rate, a communication scheduling strategy for the sensor is needed. Two lines of research directions are present in the literature. The first line is known as time-based (offline) scheduling, whereby the communication decisions are simply specified only according to the time. Informally, a purely time-based strategy is likely to lead to a periodic communication schedule [15,16]. Optimal periodic scheduling has been extensively studied, e.g, [17–20]. The second line is known as event-based scheduling, whereby the communication decisions are specified according to the system state. The idea of event-based scheduling was popularized by Lebesgue sampling [21]. Deterministic event-based transmission schedules have been proposed in [22–31] for different application scenarios, and randomized event-based transmission schedules can be found in [32–34]. Essentially, event-based scheduling is a sequential decision problem with a team of two agents (a sensor and an estimator). Due to the nonclassical information structure of the two agents, joint optimization of the communication controller and the estimator is hard [35]. Most works [22–25, 27–31] bypassed the challenge by imposing restricted information structures or by approximations, while some authors have obtained structural descriptions of the agents under the joint optimization framework, using a majorization argument [26,28] or an iterative procedure [31]. In all these works communication models were highly simplified, restricted to a binary switching model. Fading is non-ignorable impairment to wireless communication [36]. The effects of fading has been taken into account in networked control systems [37–40]. There are works that are concerned with transmission power management for state estimation [41–47]. The power allocated to transmission affects the probability of successful reception of the measurement, thus affecting the estimation performance. In [42], transmission power is allocated via a predictive control algorithm based on the channel gain prediction. In [46], imperfect 2 acknowledgments of communication links and energy harvesting were taken into account. In [43], power allocation for the estimation outage minimization problem was investigated in estimation of a scalar GaussMarkov source. In all of the aforementioned works, the estimation error covariances are a Markov chain controlled by the transmission power, so Markov decision process (MDP) theory is ready for solving this kind of problems. 
The reference [45] considered the case when plant state is transmitted from a sensor to the controller over a wireless fading channel. The transmission power is adapted to the channel gain and the plant states. Due to nonclassical information structure, joint optimization of plant input and transmit power policies, although desired, is difficult. A restricted information structure was therefore imposed, i.e., only a subset of the full information history available at the sensor is utilized when determining the transmission power, to allow separate design at expense of loss of optimality. It seems that such a challenge involved in these joint optimization problems always exists. 1.2 Contributions In this paper, we consider a remote state estimation scheme, where a sensor measures the state of a linear time-invariant discrete-time process and transmits its observations to a remote estimator over a wireless fading channel characterized by a time-homogeneous Markov chain. The successful transmission probability depends on both the channel gain and the transmission power used by the sensor. The objective is to minimize an infinite horizon cost consisting of the power consumption and the remote estimation error. In contrast to [45], no approximations are made to prevent loss of optimality, which however renders the analysis challenging. We formulate our problem as an infinite horizon belief-state MDP with an average cost criterion. Contrary to the finite horizon belief-state MDP considered in [28], for which an optimal solution exists, a first question that one may ask about our infinite horizon MDP is: Does this optimization problem have a solution? The answer is yes provided certain conditions given in this paper. On top of this, we present structural results on the optimal transmission power controller and the remote estimator for some special systems, which can be seen as the extension of the results in [26, 31] for the power management scenario. The analysis tools used in the work (i.e., the partially observable Markov decision process (POMDP) formulation and the majorization interpretation) is inspired by [28]. Nevertheless, the contributions of the two works are distinct. In [28] the authors mainly studied the threshold structure of the optimal communication strategy within a finite horizon, while the present work focuses on the asymptotic analysis of the joint optimization problem over an infinite horizon. In summary, the main contributions of this paper are listed as follows. We prove that a deterministic and stationary policy is an optimal solution to the formulated average cost belief-state MDP. We should remark that the abstractness of the considered state and action spaces (the state space is a probability measure space and the action space a function space) renders the analysis rather challenging. Then we prove that both the optimal estimator and the optimal power control have simple structures when the dynamic process monitored is scalar. To be precise, the remote estimator synchronizes its estimates with the data received in the presence of successful transmissions, and linearly projects its estimates a step 3 forward otherwise. For a certain belief, the optimal transmission power is a symmetric and monotonically increasing function of the innovation error. 
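For the scalar case, the structural result on the estimator side can be written down in a few lines. The sketch below is ours and purely illustrative (it omits the power controller and any belief-state bookkeeping): upon a successful transmission the remote estimate is synchronized with the received observation, and on a packet loss the previous estimate is propagated one step through the system dynamics.

```python
import numpy as np

def remote_estimate(A, received, observations):
    """Structural form of the optimal remote estimator (scalar case):
    x_hat_k = x_k if the packet arrives, and A * x_hat_{k-1} otherwise."""
    x_hat = 0.0                       # E[x_0] = 0 by assumption
    estimates = []
    for got_packet, x in zip(received, observations):
        x_hat = x if got_packet else A * x_hat
        estimates.append(x_hat)
    return np.array(estimates)
```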
Thanks to these properties, both the offline computation and the online implementation of the optimal transmission power rule are greatly simplified, especially when the available power levels are discrete, in which case only the thresholds for switching between power levels need to be determined. This paper provides a theory in support of the study of infinite horizon communication-constrained estimation problems. Deterministic and stationary policies are relatively easy to compute and implement, so it is important to know that an optimal solution of this form exists. The structural characterization of the jointly optimal transmission power and estimation policies provides insight into the design of energy-efficient state estimation algorithms.

1.3 Paper Organization

In Section 2, we provide the mathematical formulation of the adopted system model, including the monitored dynamic process, the wireless fading channel, the transmission power controller, and the remote estimator. We then present the considered problem and formulate it as an average cost MDP in Section 3. In Section 4, we prove that there exists a deterministic and stationary policy that is optimal for the formulated MDP. Structural results about the optimal remote estimator and the optimal transmission power control strategy are presented in Section 5. Concluding remarks are given in Section 6. Some auxiliary background results and a supporting lemma are provided in the appendices.

Notation: N and R+ are the sets of nonnegative integers and nonnegative real numbers, respectively. S^n_+ (and S^n_{++}) is the set of n by n positive semi-definite (and positive definite) matrices. When X ∈ S^n_+ (and S^n_{++}), we write X ⪰ 0 (and X ≻ 0). X ⪰ Y if X − Y ∈ S^n_+. Tr(·) and det(·) are the trace and the determinant of a matrix, respectively. λ_max(·) denotes the eigenvalue of largest magnitude of a matrix. The superscripts ⊤ and −1 stand for matrix transposition and matrix inversion, respectively. The indicator function of a set A is defined as 1_A(ω) = 1 if ω ∈ A and 1_A(ω) = 0 if ω ∉ A. For a random variable x, p(x; ·) denotes its probability density function (pdf); when clear from the context, the value argument is omitted. For a random variable x and a pdf θ, the notation x ∼ θ means that x follows the distribution defined by θ. The symbol N_{x0,Σ}(x) denotes a Gaussian pdf in x with mean x0 and covariance Σ. For measurable functions f, g : R^n → R, f ∗ g denotes the convolution of f and g. For a Lebesgue measurable set A ⊂ R^n, L(A) denotes the Lebesgue measure of A. ‖x‖ denotes the L2 norm of a vector x ∈ R^n. δ_{ij} is the Kronecker delta, i.e., δ_{ij} equals 1 when i = j and 0 otherwise.

[Figure 1: The remote state estimation scheme.]

2 System Model

In this paper, we focus on dynamic power control for remote state estimation. We consider the remote state estimation scheme depicted in Figure 1. In this scheme, a sensor measures a linear time-invariant discrete-time process and sends its measurements, in the form of data packets, to a remote estimator over a wireless link. The remote estimator produces an estimate of the process state based on the received data. When packets are sent through the wireless channel, transmissions may fail due to interference and weak channel gains. Packet losses lead to distortion of the remote estimate, and the packet loss probabilities depend on the transmission power levels used by the transmitter and on the channel gains. Lower loss probabilities require higher transmission power; on the other hand, energy saving is critical to extend the lifetime of the sensor. Since the wireless communication overhead dominates the total power consumption, we introduce a transmission power controller, which aims to balance the transmission energy cost and the distortion penalty as the channel gain varies over time. In what follows, we lay out the main components in Figure 1.
2.1 State Process

We consider the following linear time-invariant discrete-time process:

x_{k+1} = A x_k + w_k,    (1)

where k ∈ N, x_k ∈ R^n is the process state at time k, and w_k ∈ R^n is zero-mean independent and identically distributed (i.i.d.) noise with pdf µ_w and E[w_k w_k^⊤] = W (W ≻ 0). The initial state x_0, independent of w_k for all k ∈ N, has pdf µ_{x0} with mean E[x_0] and covariance Σ_0. Without loss of generality, we assume E[x_0] = 0, as a nonzero-mean case can be translated into a zero-mean one by the coordinate change x'_k = x_k − E[x_0]. We let X = R^n denote the domain of x_k. The system parameters are known to both the sensor and the remote estimator. We assume the plant is unstable, i.e., |λ_max(A)| > 1.

[Figure 2: Wireless communication model. N_0 is the power spectral density of the channel noise v_k.]

2.2 Wireless Communication Model

The sensor measures and sends the process state x_k to the remote estimator over an additive white Gaussian noise (AWGN) channel which suffers from channel fading (see Figure 2):

y = g_k x + v_k,

where g_k is a random complex number and v_k is additive white Gaussian noise; x represents the signal (e.g., x_k) sent by the transmitter and y the signal received by the receiver. Let the channel gain h_k = |g_k|^2 take values in a finite set h ⊆ R+, and let {h_k}_{k∈N} possess temporal correlation modeled by a time-homogeneous Markov chain [40, 47]. The one-step transition probability of this chain is denoted by π(·|·) : h × h → [0, 1]; the function π(·|·) is known a priori. We assume that the remote estimator or the sensor can access the channel state information (CSI), so the channel gain h_k is available at each time before transmission. This can, for instance, be done by adopting pilot-aided channel estimation techniques, by which the transmitter sends a pilot signal at each fading block and the channel coefficients, including the channel gain, are obtained at the receiver [36]. Estimation errors of the channel gains are not taken into account in this paper. To facilitate our analysis, the following assumption is made.

Assumption 1 (Communication model).
(i). The channel gain h_k is independent of the system parameters.
(ii). The channel is block fading, i.e., the channel gain remains constant during each packet transmission and varies from block to block.
(iii). The quantization effect is negligible and does not affect the remote estimator.
(iv). The receiver can detect symbol errors (in practice, symbol errors can be detected via a cyclic redundancy check (CRC) code). Only data reconstructed error-free are regarded as successfully received. The receiver knows perfectly whether the instantaneous communication succeeds or not.
(v). The Markov chain governing the channel gains, π(·|·), is aperiodic and irreducible.
Assumptions 1-(i)–(iv) are standard for fading channel models. Note that Assumptions 1-(i), (iii), and (iv) were used in [10, 44, 45, 48, 49], and that Assumption 1-(ii) was used in [42]. By Assumption 1-(iv), whether or not the data sent by the sensor is successfully received by the remote estimator is indicated by a sequence {γ_k}_{k∈N} of random variables, where

γ_k = 1 if x_k is received error-free at time k, and γ_k = 0 otherwise (regarded as a dropout),    (2)

initialized with γ_0 = 1. When γ_k = 0, we regard the remote estimator as having received a virtual symbol ♯. Assumption 1-(v) is a technical requirement for Theorem 1. Note that both the i.i.d. channel gain model and the Gilbert–Elliott model with good/bad state transition probabilities not equal to 1 satisfy Assumption 1-(v).

2.3 Transmission Power Controller

Let u_k ∈ R+ be the transmission power at time k, i.e., the power supplied to the radio transmitter. Due to constraints on radio power amplifiers, the admissible transmission power is restricted: u_k takes values in U ⊂ [0, u_max], where u_max stands for the maximum power. Depending on the radio implementation, U may be a continuum or a finite set. It is further assumed that U is compact and contains zero. Under Assumption 1-(iii), successful packet reception is statistically determined by the signal-to-noise ratio (SNR) h_k u_k / N_0 at the receiver, where N_0 is the power spectral density of v_k. A very general model [40, 45] of the conditional packet reception probability, valid for a variety of modulations, is

q(u_k, h_k) ≜ P(γ_k = 1 | u_k, h_k),    (3)

where q is a nondecreasing function of both u_k and h_k.

Assumption 2. The function q(u, h) : U × h → [0, 1] is continuous almost everywhere with respect to u for any fixed h. In particular, q(0, h) = 0 for all h.

Remark 1. Letting q(u_k, h_k) = q(u_k) with U = {0, 1}, q(1) = 1, and q(0) = 0, we see that the "on-off" controlled communication problem considered in [17–20, 22–34] is a special case of the transmission power control problem considered here.

We assume that packet receptions are conditionally independent given the channel gains and transmission power levels, as stated in the following assumption.

Assumption 3. The following equality holds for any k ∈ N:

P(γ_k = r_k, . . . , γ_1 = r_1 | u_{1:k}, h_{1:k}) = ∏_{j=1}^{k} P(γ_j = r_j | u_j, h_j).

Remark 2. Assumption 2 is standard for digital communication over fading channels. Assumption 3 is consistent with the fact that the symbol error rate statistically depends on the instantaneous SNR at the receiver. Many digital modulation methods are embraced by these assumptions [40, 42, 50, 51].

Assumption 4. For the least favorable channel power gain h̲ ≜ min{h : h ∈ h}, the maximum achievable successful transmission probability satisfies

q(ū, h̲) > 1 − 1 / λ_max(A)^2,

where ū is the highest available power level, ū ≜ max{u : u ∈ U}, and A is the system matrix in (1). Note that since both h and U are compact, h̲ and ū always exist.

Remark 3. Assumption 4 provides a sufficient condition under which the expected estimation error covariance is bounded, even for the least favorable channel power gain. Similar assumptions were adopted in many works, such as [40, 46], to guarantee the stability of Kalman filtering subject to random packet losses.
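The following short sketch illustrates how Assumption 4 can be checked numerically. The reception model q(u, h) = 1 − exp(−u h / N_0) used here is only a hypothetical placeholder satisfying Assumption 2 (it is not prescribed by the paper), and all numerical values are made up.

```python
import numpy as np

# Hypothetical reception model: q(u, h) = 1 - exp(-u * h / N0).
# This form is NOT taken from the paper; it is only a placeholder that is
# nondecreasing in u and h and satisfies q(0, h) = 0 (Assumption 2).
def q(u, h, N0=1.0):
    return 1.0 - np.exp(-u * h / N0)

A = np.array([[1.2, 0.3],
              [0.0, 1.1]])          # unstable system matrix (|lambda_max| > 1)
u_max = 5.0                          # highest available power level (u-bar)
h_set = np.array([0.2, 0.6, 1.0])    # finite set of channel power gains
h_min = h_set.min()                  # least favorable gain

lam_max = max(abs(np.linalg.eigvals(A)))
lhs = q(u_max, h_min)
rhs = 1.0 - 1.0 / lam_max**2
print(f"q(u_max, h_min) = {lhs:.3f}, 1 - 1/lambda_max^2 = {rhs:.3f}")
print("Assumption 4 satisfied" if lhs > rhs else "Assumption 4 violated")
```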
2.4 Remote Estimator

At the base station side, at each time the remote estimator generates an estimate based on what it has received from the sensor. In many applications, the remote estimator is powered by an external source or is connected to an energy-abundant controller/actuator, and thus has sufficient communication energy, in contrast to the energy-constrained sensor. This energy asymmetry allows us to assume that the estimator can send messages back to the sensor. The content of the feedback messages is defined separately for different system implementations; the details are discussed in Section 3.2. Denote by O_k^− the observations obtained by the remote estimator up to just before the communication at time k, i.e.,

O_k^− ≜ {γ_1 x_1, . . . , γ_{k−1} x_{k−1}} ∪ {γ_1, . . . , γ_{k−1}} ∪ {h_1, . . . , h_k}.

Similarly, denote by O_k^+ the observations obtained by the remote estimator up to just after the communication at time k, where O_k^+ ≜ O_k^− ∪ {γ_k, γ_k x_k}.

3 Problem Definition

We take into account both the estimation quality at the remote estimator and the transmission energy consumed by the sensor. To this end, a joint design of the transmission power controller and the remote estimator is desired. Measurement realizations, communication indicators, and channel gains are used to manage the transmission power:

u_k = f_k(x_{1:k}, h_{1:k}, γ_{1:k−1}).    (4)

To produce u_k, the system can work under two different implementations, depending on the computational capacity of the sensor node: either the remote estimator decides an intermediate function

l_k(·) = f_k(O_k^−, ·)    (5)

and feeds l_k back to the sensor, and the sensor then evaluates u_k = l_k(x_{1:k}); or, equivalently, the estimator directly feeds the γ_k's back to the sensor, and the sensor is in charge of deciding f_k. Given the transmission power controller, the remote estimator generates an estimate as a function of what it has received from the sensor, i.e.,

x̃_k ≜ g_k(O_k^+).    (6)

We emphasize that x̃_k also depends on f_k, since f_k statistically affects the arrival of the data. The average remote estimation quality over an infinite time horizon is quantified by

E(f, g) ≜ E_{f,g}[ limsup_{T→∞} (1/T) ∑_{k=1}^{T} ‖x_k − x̃_k‖² ];    (7)

correspondingly, the average transmission power cost is given by

W(f) ≜ E_f[ limsup_{T→∞} (1/T) ∑_{k=1}^{T} u_k ],    (8)

where f ≜ {f_1, . . . , f_k, . . .} and g ≜ {g_1, . . . , g_k, . . .}. The arguments f and g indicate that the quantity in (7) depends on them; the same applies to (8). Note that in (7) and (8) the expectations are taken with respect to the randomness of the system and the transmission outcomes for given f and g. For the remote state estimation system, we naturally wonder how to find a jointly optimal transmission power controller f_k^∗ and remote state estimator g_k^∗ solving

minimize_{f,g} [E(f, g) + α W(f)],    (9)

where the constant α can be interpreted as a Lagrange multiplier. We remark that (9) is difficult to solve due to the nonclassical information structure [35]. Moreover, (9) has an average cost criterion that depends only on the limiting behavior of f and g, which adds further difficulty to the analysis.

3.1 Belief-State Markov Decision Process

To find a solution to the optimization problem (9), we first observe from (8) that W(f) does not depend on g, which leads to an insight into the structure of g_k^∗, stated as Lemma 1 below; its proof follows from optimal filtering theory: the conditional mean is the minimum-variance estimate [52]. Similar results can be seen in [26, 28, 31].
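As a sanity check on the objective (9), the following sketch estimates the two average costs by Monte Carlo simulation for a scalar instance, a heuristic power policy, and the hold/predict estimator that keeps the last received state and otherwise propagates it through A. All numerical values, the reception law q, and the policy are hypothetical, and the estimator is used here only as a plausible baseline (its structure is justified for the scalar case later in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar instance for illustration (all numbers are hypothetical).
A, W = 1.2, 1.0                      # process x_{k+1} = A x_k + w_k, w_k ~ N(0, W)
h_set = np.array([0.2, 0.6, 1.0])    # channel gains
P = np.array([[0.7, 0.2, 0.1],       # Markov transition matrix pi(h'|h)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
alpha, u_max, N0 = 0.5, 5.0, 1.0

def q(u, h):                         # hypothetical reception probability
    return 1.0 - np.exp(-u * h / N0)

def policy(e, h):                    # heuristic: spend more power for large innovation
    return min(u_max, abs(e))

T = 100_000
x, xt, hi = 0.0, 0.0, 0
cost_est, cost_pow = 0.0, 0.0
for k in range(T):
    x = A * x + rng.normal(scale=np.sqrt(W))
    hi = rng.choice(3, p=P[hi])
    e = x - A * xt                   # innovation w.r.t. the one-step prediction
    u = policy(e, h_set[hi])
    received = rng.random() < q(u, h_set[hi])
    xt = x if received else A * xt   # hold/predict estimator (baseline)
    cost_est += (x - xt) ** 2
    cost_pow += u
print("avg estimation error:", cost_est / T, " avg power:", cost_pow / T)
print("combined objective  :", (cost_est + alpha * cost_pow) / T)
```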
Lemma 1. For any given transmission power controller f_k, the optimal remote estimator g_k^∗ is the MMSE estimator

x̂_k ≜ g_k^∗(O_k^+) = E_{f_{1:k}}[x_k | O_k^+].    (10)

Problem (9) still remains hard, since g_k^∗ depends on the choice of f_{1:k}. To address this issue, by viewing the problem from the perspective of a decision maker holding the common information [53], we formulate (9) as a POMDP [54] at the decision maker's side. Following the conventional treatment of POMDPs, we may equivalently study the associated belief-state MDP [54]. For technical reasons, we impose two moderate constraints on the action space. We present the formal belief-state MDP model below and remark that the resulting gap between the formulated belief-state MDP and (9) is negligible (see Remark 6). Before doing so, a few definitions and notations are needed. Define the innovation e_k as

e_k ≜ x_k − A^{k−τ(k)} x_{τ(k)},    (11)

with e_k taking values in R^n and τ(k) being the most recent time the remote estimator received data before time k:

τ(k) ≜ max{t : γ_t = 1, 1 ≤ t ≤ k−1}.    (12)

Let ê_k ≜ E_{f_{1:k}}[e_k | O_k^+]. Since τ(k), x_{τ(k)} ∈ O_{k−1}^+, the equality

e_k − ê_k = x_k − x̂_k    (13)

holds for all k ∈ N. In other words, e_k can be treated as x_k offset by a variable that is measurable with respect to O_{k−1}^+. We define the belief state on e_k; by (13), the belief state on x_k can be defined equivalently. We use e_k instead of x_k for ease of presentation.

Definition 1. Before the transmission at time k, the belief state θ_k(·) : R^n → R+ is defined as θ_k(e) ≜ p(e_k; e | f_{1:k}, O_{k−1}^+).

We also need some definitions related to partitions of a set.

Definition 2. A collection ∆ of sets is a partition of a set X if the following conditions are satisfied: (i) ∅ ∉ ∆; (ii) ∪_{B∈∆} B = X; (iii) if B_1, B_2 ∈ ∆ and B_1 ≠ B_2, then B_1 ∩ B_2 = ∅. An element of ∆ is also called a cell of ∆. If X ⊂ R^n, we define the size of ∆ as |∆| ≜ sup{‖x − y‖ : x, y ∈ δ_j, δ_j ∈ ∆}.

Definition 3. For two partitions ∆_1 and ∆_2 of a set X, ∆_1 is called a refinement of ∆_2 if every cell of ∆_1 is a subset of some cell of ∆_2; formally, we write ∆_1 ⪯ ∆_2.

One can verify that the relation ⪯ is a partial order, and that the set of partitions together with this relation forms a lattice. We denote the infimum of partitions ∆_1 and ∆_2 by ∆_1 ∧ ∆_2. We are now able to describe the belief-state MDP mathematically by a quintuplet (N, S, A, P, C). Each item in the tuple is elaborated as follows; background on the mathematical notions used below is provided in Appendix A.

(i). The set of decision epochs is N.

(ii). State space S = Θ × h: Θ is the set of beliefs over R^n, i.e., the space of probability measures on R^n, further constrained as follows. Let µ be a generic element of Θ. Then µ is equivalent to the Lebesgue measure (two measures on the same measurable space are equivalent if, for any Borel subset B, µ_2(B) = 0 ⇔ µ_1(B) = 0), and µ has finite second moment, i.e., ∫_{R^n} ‖e‖² dµ(e) < ∞. No generality is lost by these two constraints; see Remark 4. Let θ(e) = dµ(e)/dL(e) be the Radon–Nikodym derivative [55]. Note that θ(e) is uniquely defined up to an L−null set (i.e., a set having Lebesgue measure zero). We thus use µ and θ(e) interchangeably to represent a probability measure on R^n, and by convention we do not distinguish between two functions θ(e) and θ'(e) with L({e : θ(e) − θ'(e) ≠ 0}) = 0. We assume that Θ is endowed with the topology of weak convergence. Denote by d_P(·, ·) the Prohorov metric [56] on Θ. Let s ≜ (µ, h) denote a generic element of S. We define the metric on S as d_s((µ_1, h_1), (µ_2, h_2)) = max{d_P(µ_1, µ_2), |h_1 − h_2|}.
(iii). Action space A is the set of all functions with the following structure:

a(e) = ū if ‖e‖ > L, and a(e) = a'(e) otherwise,    (14)

where a' ∈ A' : E → U with E ≜ {e ∈ R^n : ‖e‖ ≤ L}. The space A' is further defined as follows. For a generic element a' ∈ A', there exists a finite partition ∆_{a'} of E such that each cell of ∆_{a'} is an L−continuity set (a Borel subset B is a µ−continuity set if µ(∂B) = 0, where ∂B is the boundary of B) and on each cell a'(e) is Lipschitz continuous with Lipschitz constant uniformly bounded by M. It is further assumed that ∆ ≜ ∧_{a'∈A'} ∆_{a'} is a finite partition of E. Since both L and M can be arbitrarily large and |∆| can be arbitrarily small, the structure constraints pose little limitation; see Remark 5 for why we consider such an action space. We consider the Skorohod distance defined in (38). By convention, we do not distinguish two functions in A that have zero distance, and we consider the space of the resulting equivalence classes. Note that the argument of the function a(·) is the innovation e_k defined in (11), and by the definition of e_k, one obtains a_k(e) = l_k(e + A^{k−τ(k)} x_{τ(k)}).

(iv). The function P(θ', h' | θ, h, a) : S × A × S → [0, 1] defines the conditional state transition probability. To be precise,

P(θ', h' | θ, h, a) ≜ p(θ_{k+1}, h_{k+1}; θ', h' | θ_k = θ, h_k = h, a_k = a)
= π(h'|h) (1 − ϕ(θ, h, a)) δ_{φ(θ,h,a,0)}(θ'), if θ' = φ(θ, h, a, 0);
= π(h'|h) ϕ(θ, h, a) δ_{φ(θ,h,a,1)}(θ'), if θ' = φ(θ, h, a, 1);
= 0, otherwise,

where ϕ(θ, h, a) ≜ ∫_{R^n} q(a(e), h) θ(e) de, δ_{θ∗}(θ) denotes a degenerate distribution at θ∗ over Θ with ∫_Θ δ_{θ∗}(θ) dθ = 1, and

φ(θ, h, a, γ) ≜ (1/det(A)) θ^+_{θ,h,a}(A^{−1} e) ∗ N_{0,W}(e), if γ = 0;  N_{0,W}(e), if γ = 1,    (15)

where θ^+_{θ,h,a}(e) ≜ (1 − q(a(e), h)) θ(e) / (1 − ϕ(θ, h, a)) is interpreted as the post-transmission belief when the transmission fails, and N_{0,W}(e) is the multivariate Gaussian pdf with mean 0 and covariance W.

(v). The function C(θ, h, a) : S × A → R+ is the cost of performing a ∈ A for θ ∈ Θ and h ∈ h at time k, given by

C(θ, h, a) = ∫_{R^n} θ(e) c(e, h, a) de.    (16)

In (16), the function c(·, ·, ·) : R^n × h × A → R+ is defined as c(e, h, a) = α a(e) + (1 − q(a(e), h)) ‖e − ê^+‖², with ê^+ = E_{θ^+_{θ,h,a}}[e] ≜ E[e | e ∼ θ^+_{θ,h,a}]; the first term counts the communication cost and the second counts the distortion ‖e − ê^+‖², which is incurred with probability 1 − q(a(e), h). (A numerical sketch of the update (15) and the cost (16) for a scalar example is given after Remark 5 below.)

Remark 4. The initial belief θ_1(e) = (1/det(A)) µ_{x0}(A^{−1} e) ∗ N_{0,W}(e) is equivalent to the Lebesgue measure. The belief evolution in (15) implies that, whatever policy is used, θ_k is equivalent to the Lebesgue measure for k ≥ 2. Also, note that if there exists a channel gain h ∈ h such that q(ū, h) < 1 and if θ has infinite second moment, then C(θ, h, a) = ∞ for any action a. Thus, to solve (9), we can restrict the beliefs to the state space Θ without any performance loss.

Remark 5. The action a(e) ∈ A is allowed to have an L−null set of discontinuity points. The assumption that a(e) is Lipschitz on each cell of a partition is a technical requirement for Theorem 1; the intuition is that, given θ_k, except for an L−null set of points, the difference between the power used for e_k and for e'_k is at most proportional to the distance between e_k and e'_k. The saturation structure in (14), i.e., a(e) = ū if ‖e‖ > L, is also a technical requirement for Theorem 1. Intuitively, it ensures that, when the transmission fails, the second moment of the post-transmission belief θ^+_{θ,h,a}(e) is bounded by a function of the second moment of θ(e). The saturation assumption can also be found in [45].
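The following scalar, grid-based sketch shows how the belief update (15) and the one-stage cost (16) can be evaluated numerically. Densities are represented by samples on a uniform grid; the reception model q and all numerical constants are hypothetical placeholders, and the action used below merely respects the saturation structure (14).

```python
import numpy as np

# Scalar grid-based sketch of the belief update phi(theta, h, a, gamma) in (15)
# and the one-stage cost C(theta, h, a) in (16). Grid and model values are
# hypothetical; densities are represented by samples on a uniform grid.
A, W, N0, alpha = 1.2, 1.0, 1.0, 0.5
L_sat, u_max = 10.0, 5.0

e = np.linspace(-30, 30, 2001)        # grid for the innovation
de = e[1] - e[0]

def gauss(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def q(u, h):                          # hypothetical reception probability
    return 1.0 - np.exp(-u * h / N0)

def belief_update(theta, h, a, gamma):
    if gamma == 1:                    # success: belief resets to N(0, W)
        return gauss(e, W)
    # failure: Bayes reweighting, then propagate through e' = A e + w
    phi = np.sum(q(a, h) * theta) * de            # success probability under theta
    post = (1.0 - q(a, h)) * theta / (1.0 - phi)  # post-transmission belief
    scaled = np.interp(e / A, e, post) / abs(A)   # density of A * e_post
    return np.convolve(scaled, gauss(e, W), mode="same") * de  # convolve with noise

def stage_cost(theta, h, a):
    phi = np.sum(q(a, h) * theta) * de
    post = (1.0 - q(a, h)) * theta / (1.0 - phi)
    e_hat = np.sum(e * post) * de                 # post-failure conditional mean
    return np.sum(theta * (alpha * a + (1 - q(a, h)) * (e - e_hat)**2)) * de

theta = gauss(e, W)                   # start from the reset belief
a = np.where(np.abs(e) > L_sat, u_max, np.minimum(u_max, np.abs(e)))
print("stage cost:", stage_cost(theta, 0.6, a))
theta_fail = belief_update(theta, 0.6, a, gamma=0)
print("second moment after a failed transmission:", np.sum(e**2 * theta_fail) * de)
```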
[Figure 3: Implementation of the system. The block "Observation acqui. & trans." in (a) corresponds to the blue-dashed rectangle in Figure 1 and the block "Transmission & estimation" to the red-dashed rectangle. In (a), the MDP algorithm is run at the remote estimator and the action l_k is fed back to the sensor, while in (b), the MDP algorithm is run at the sensor node and γ_k is fed back by the remote estimator.]

An admissible k−history for this MDP is defined as h_k ≜ {θ_1, h_1, a_1, . . . , θ_{k−1}, h_{k−1}, a_{k−1}, θ_k, h_k}. Let H_k denote the class of all admissible k−histories h_k. A generic policy d for (N, S, A, P, C) is a sequence of decision rules {d_k}_{k∈N}, with each d_k : H_k → A; in general, d_k may be a stochastic mapping. Let D denote the whole class of such policies d. In some cases, we write d as d(d_k) to point out explicitly the decision rules used at each stage. We focus on the following problem.

Problem 1. Find an optimal policy for the MDP problem:

minimize_{d∈D} J(d, θ, h),    ∀ (θ, h) ∈ S,    (17)

where

J(d, θ, h) ≜ limsup_{T→∞} (1/T) ∑_{k=1}^{T} E_d^{θ,h}[C(θ_k, h_k, a_k)]

is the average cost with initial state (θ, h) and policy d.

Remark 6. The gap between Problem 1 and the optimization problem (9) arises from the structural assumptions on the action space. These structural constraints, however, are moderate, since the saturation level L and the uniform Lipschitz constant M can be arbitrarily large and the size |∆| can be arbitrarily small.

3.2 Practical Implementation

Here we discuss the implementation of the system, which is illustrated in Figure 3. Depending on the computational capacity of the sensor node, the system we study can work either as in (a) or as in (b). The main difference between the systems in (a) and (b) is where the MDP algorithm is run; the computational capacity required of the sensor node and the content of the feedback messages differ correspondingly. In (a), the MDP algorithm is run at the remote estimator and the action l_k is fed back to the sensor. In practice, for a generic l_k, only an approximate version (e.g., a lookup table) can be transmitted due to bandwidth limitations. Accurate feedback of l_k is possible if l_k has a special structure. For example, if a_k(e) (recall that a_k(e) = l_k(e + A^{k−τ(k)} x_{τ(k)})) is a monotonic step function taking values in a finite set, only the points where a_k jumps are needed to represent l_k (note that A^{k−τ(k)} x_{τ(k)} is available at the sensor node); a small sketch of this threshold encoding is given at the end of this subsection. Since the function l_k is directly fed back to the sensor, the only computational task carried out by the sensor is evaluating l_k(x_k). When the sensor node is capable of running the MDP algorithm locally, the system can be implemented as illustrated in (b). In this case, only γ_k (a binary variable) is fed back. Note that when γ_k is fed back, the sensor knows exactly the information available at the remote estimator, so it can run a virtual estimator locally that has the same behavior as the remote estimator.
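As a small illustration of the threshold feedback mentioned above, the sketch below encodes a symmetric, monotone step action by its switching thresholds and reconstructs it at the sensor. The power levels and thresholds are arbitrary placeholders.

```python
import numpy as np

# When a(e) is symmetric, nondecreasing in |e|, and takes finitely many power
# levels, it is fully described by the thresholds at which the level switches,
# so only these thresholds need to be fed back to the sensor.
levels = np.array([0.0, 1.0, 3.0, 5.0])       # available power levels (sorted)
thresholds = np.array([0.5, 2.0, 4.0])        # |e| values where the level switches

def encode(thresholds):
    """Feedback message: just the switching thresholds."""
    return thresholds

def decode(levels, thresholds, e):
    """Reconstruct a(e) at the sensor from the thresholds."""
    idx = np.searchsorted(thresholds, np.abs(e), side="right")
    return levels[idx]

msg = encode(thresholds)
for e in [0.2, -1.5, 3.0, -7.0]:
    print(f"a({e:+.1f}) = {decode(levels, msg, e)}")
```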
4 Optimal Deterministic Stationary Policy: Existence

The definition of the policy d in the previous section allows d_k to depend on the full k−history h_k. Fortunately, with the aid of results on average cost MDPs [57–59], we prove that there exists a deterministic stationary policy that is optimal for Problem 1. Before stating the main theorem, we introduce some notation. We define the class of deterministic and stationary policies D_ds as follows: d(d_k) ∈ D_ds if and only if there exists a Borel measurable function d : S → A such that, for all k, d_k(H_{k−1}, a_{k−1}, θ_k = θ, h_k = h) = d(θ, h). Since the decision rules d_k are identical (equal to d) along the time horizon for a stationary policy d({d_k}_{k∈N}) ∈ D_ds, we write it as d(d) for ease of notation.

Theorem 1. There exists a deterministic and stationary policy d∗(d) ∈ D_ds such that

J(d∗(d), θ, h) ≤ J(d, θ, h)    ∀ (θ, h) ∈ S, d ∈ D.

Moreover,

d∗(d) = arg min_{d∈D_ds} {C_d(θ, h) − ρ∗(θ, h) + E_d[Q∗(θ', h') | θ, h]}    (18)

and J(d∗(d), θ, h) = ρ∗(θ, h), where the functions Q∗ : S → R and ρ∗ : S → R satisfy

Q∗(θ, h) = min_{d∈D_ds} {C_d(θ, h) − ρ∗(θ, h) + E_d[Q∗(θ', h') | θ, h]}

with C_d(θ, h) ≜ C(θ, h, d(θ, h)) and E_d[Q∗(θ', h') | θ, h] ≜ ∫_S Q∗(θ', h') P(θ', h' | θ, h, d(θ, h)) d(θ', h').

The theorem says that an optimal transmission power policy exists and is deterministic and stationary, i.e., the power used at the sensor node u_k depends only on (θ_k, h_k) and e_k. Since the belief state θ_k can be updated recursively as in (15), this property facilitates the related performance analysis. In principle, the optimal deterministic and stationary policy of an average cost MDP can be obtained by the value iteration algorithm [59] (a toy finite-state illustration is sketched at the end of this section, before Section 4.1). However, it is not computationally tractable to solve (18) exactly, since neither the state space nor the action space is finite; an approximate algorithm is needed to obtain a suboptimal solution [54], which is out of the scope of this paper. Nevertheless, Theorem 1 provides a qualitative characterization of the optimal transmission power control rule. Define

J_β(d, θ, h) ≜ E_d^{θ,h}[ ∑_{k=1}^{∞} β^k C(θ_k, h_k, a_k) ]

as the expected total discounted cost with discount factor 0 < β < 1. Let υ_β(θ, h) ≜ inf_{d∈D} J_β(d, θ, h) be the least cost associated with the initial state (θ, h), and let m_β = inf_{(θ,h)∈S} υ_β(θ, h). By Theorem 3.8 in [57], in order to prove Theorem 1 it suffices to verify the following conditions.

C1 (State Space) The state space S is locally compact with a countable base.

C2 (Regularity) Let M be the mapping assigning to each s ∈ S the nonempty available action space A(s). Then for each s ∈ S, A(s) is compact, and M is upper semicontinuous.

C3 (Transition Kernel) The state transition kernel P(·|s, a) is weakly continuous; that is, ∫_S b(s') P(ds' | s_i, a_i) → ∫_S b(s') P(ds' | s, a) as i → ∞ for any sequence {(s_i, a_i), i ≥ 1} converging to (s, a) with s_i, s ∈ S and a_i, a ∈ A, and for any bounded and continuous function b : S → R.

C4 (Cost Function) The one-stage cost function C(s, a) is lower semicontinuous.

C5 (Relative Discounted Value Function) There holds

sup_{0<β<1} [υ_β(θ, h) − m_β] < ∞,    ∀(θ, h) ∈ S.    (19)

We now verify each of the above conditions for the considered problem, thereby establishing the proof of Theorem 1.
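Before the verification, here is the toy illustration of relative value iteration promised above. The finite MDP below (3 states, 2 actions, randomly generated costs and transition probabilities) is an arbitrary placeholder and is not a discretization of the belief-state MDP, whose state and action spaces are uncountable; it only illustrates the fixed-point relation behind (18).

```python
import numpy as np

# Relative value iteration for a *finite* average-cost MDP.
rng = np.random.default_rng(1)
nS, nA = 3, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition kernel
C = rng.uniform(0, 1, size=(nS, nA))            # one-stage costs C(s, a)

Q = np.zeros(nS)                                # relative value function
ref = 0                                         # reference state
for _ in range(10_000):
    T = C + np.einsum("sap,p->sa", P, Q)        # Bellman backup per action
    V = T.min(axis=1)
    rho = V[ref]                                # average-cost estimate
    Q_new = V - rho                             # subtract to keep values bounded
    if np.max(np.abs(Q_new - Q)) < 1e-10:
        break
    Q = Q_new
policy = T.argmin(axis=1)
print("estimated average cost rho* =", rho)
print("greedy stationary policy    =", policy)
```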
15 4.1 State Space Condition C1 We prove that both S and A are Borel subsets of Polish spaces (i.e., separable completely metrizable topological spaces) instead. Then as pointed out in [58], by Arsenin–Kunugui Thoerem, the condition C1 holds. To show that S is a Borel subset of a Polish space, by the well known results about the product topology [60], it suffices to prove that Θ and h are Borel subsets of Polish spaces. Since h is a compact subset of R, we only need to prove Θ is a Borel subset of a Polish space. Let M(Rn ) be the space of probability measures on Rn endowed with the topology of weak convergence. It is well known that M(Rn ) is a Polish space [56]. Let M2 (Rn ) ⊆ M(Rn ) be the set of probability measures with finite second moment, and Me (Rn ) ⊆ M(Rn ) be the set of probability measures equivalent to L. By Theorem 3.5 in [61], Me (Rn ) w is a Borel set. We then show that M2 (Rn ) is closed. Suppose {µi,i∈N } ∈ M2 (Rn ) and µi → µ. Since M(Rn ) is complete, µ ∈ M(Rn ), and using the fact that norms are continuous, by Theorem 1.1 in [62], Z Z 2 kek2 µi (de) < ∞. kek µ(de) ≤ lim inf i→∞ Rn Then µ ∈ M2 (Rn ), implying that M2 (Rn ) Rn is closed. Since Θ = M2 (Rn ) ∩ Me (Rn ), Θ is a Borel subset of M(Rn ). The state space S thus is a Borel subset of a Polish space. Now we shall show that A is a Borel subset of a Polish space. Since a bounded function can be approximated by simple functions uniformly [63], the space A′ is a subset of the general Skorohod space defined on E (see Appendix A), i.e., A′ ⊆ D(E). We first prove that the closure of A′ , denoted as cl(A′ ), is a compact set by Theorem 3.11 in [64]. Since a generic a ∈ A′ maps from E to [0, ū], the condition 3.37 is obviously satisfied. Note that the condition 3.38 is equivalent to lim sup ∆ a∈A′ w(a, ∆) → 0. (20) By the definition of ∆, all the functions in A′ are Lipschitz continuous with Lipschitz constant uniformly bounded by M on each cell of ∆. Thus, for ∆  ∆, sup a∈A′ w(a, ∆) ≤ M |∆|, which yields (20). We then show that cl(A′ ) = A′ . Suppose that ai ∈ A′ converges to a limit a in the s Skorohod topology (we write as ai → a), we then show that a ∈ A′ . By the definition of the Skorohod distance d(·, ·) in (38), ai → a if and only if there exist mappings πi ∈ Λt such that s lim ai (πi x) = a(x) uniformly in E i (21) and limi πi x = x uniformly in E. Since limi πi x = x uniformly in E, for any ǫ > 0, there exists i0 such that kπi kt < ǫ with i ≥ i0 . Note that if kπi kt < ǫ, πi is a bi-Lipschitz homeomorphism. By the definition of A′ , any ai ∈ A′ has L−null set of discontinuity points. Since measure-null sets are preserved by a Lipschitz homeomorphism, by (21), one obtains that L(the set of discontinuity points of a) = 0. 16 (22) Following the same reasoning for one dimensional Skorohod space D[0, 1] (see e.g., P124 in [56]), one s obtains that ai → a implies that ai (x) → a(x) uniformly for all continuity points x of a. Since on each cell of ∆ = {δj }, all the functions in A′ are Lipschitz continuous, the interior points of δj (write the set as δjo ) must be continuity points of a. By the fact that if a sequence of Lipschitz functions with Lipschitz constant uniformly bounded by M converge to a limit function, then this limit function is also a Lipschitz function with Lipschitz constant bounded by the same M , a is Lipschitz continuous with Lipschitz constant uniformly bounded by M on the interior set of each cell of ∆. 
For a boundary point x of the cells of ∆, denote the collection of cells whose boundary contains x as δx , {δj : x ∈ ∂δj }. Then one obtains that a(x) must be a limit of a from one cell in δx , i.e., there exists δj ∈ δx such that limy→x,y∈δjo a(y) = a(x). Now we define a function a∗ such that for each δj ∈ ∆, a∗ (x) = a(x) if x ∈ δjo and a∗ (x) are continuous on δj . Then one obtains that d(a, a∗ ) = 0, which implies that a = a∗ since D(E) is a metric space. Combining (22), one obtains that a ∈ A′ . Thus A′ is closed and compact. Using the fact that every compact metric space is complete and separable, one obtains that A′ is a Polish space. By the structure relation between A and A′ in (14), the space A is also a Polish space. 4.2 Regularity Condition C2 Since A is compact and A(s) = A for every s ∈ S, C2 is readily verified. 4.3 Transition Kernel Condition C3 Since S is separable and given (θk , hk , a), hk+1 and θk+1 are independent, then by Theorem 2.8 in [56], it suffices to prove that for any h ∈ h, as θi → θ (µi → µ) and ai → a, the followings hold: w w s ϕ(θi , h, ai ) → ϕ(θ, h, a) (23) w and φ(θi , h, ai , 0) → φ(θ, h, a, 0). (24) s Since ai → a implies that ai (x) → a(x) uniformly for all the continuity points of a and the set of s discontinuity points of a has Lebesgue measure zero, ai → a implies that ai → a L−a.e. Noting that µ is equivalent to L, ai → a µ−a.e. holds, and it follows that q(ai , h) → q(a, h) µ−a.e., since q is continuous sw L−a.e. Also, by Lemma 4, µi → µ. Then by Theorem 2.2 in [65], one obtains that Z Z q(a(e), h)µ(de) q(ai (e), h)µi (de) ≥ lim inf i→∞ n n R R Z Z q(a(e), h)µ(de). −q(ai (e), h)µi (de) ≥ − and lim inf i→∞ Rn Rn Combing the above two equations, one obtains that limi→∞ i.e., ϕ(θi , h, ai ) → ϕ(θ, h, a). R Rn q(ai (e), h)µi (de) → R Rn q(a(e), h)µ(de), sw We now prove that the equation (24) holds. Noting that θi → θ implies that θi (e)→θ(e) L−a.e., it thus follows that + (e) θθ+i ,h,ai (e) → θθ,h,a 17 (25) + L−a.e. Note that θθ+i ,hi ,ai (e) and θθ,h,a (e) can be viewed as probability density functions of e, and for + simplicity, we write the corresponding probability measures as µ+ i and µ , respectively. Then it follows from (25) that sw + µ+ i →µ . (26) Let b(e) be any bounded and continuous function defined on Rn , then Z b(e)φ(θ, h, a, 0)(e)de RnZ Z + θθ,h,a (e′ )N0,W (e − Ae′ )de′ de b(e) = n n R Z ZR + b(e)N0,W (e − Ae′ )dede′ θθ,h,a(e′ ) = n n R ZR ′ + b̃(e )µ (de′ ), , Rn where b̃(e′ ) , R Rn b(e)N0,W (e − Ae′ )de. Noting that b̃(e′ ) is a bounded function, then by Appendix E of [59] and (26), Z ′ b̃(e Rn ′ )µ+ i (de ) → Z b̃(e′ )µ+ (de′ ). Rn The equation (24) thus follows by the Portmanteau Theorem. 4.4 Cost Function Condition C4 We first show that µ+ also has finite second moment given that µ has finite second moment. Z kek2 dµ+ (e) n R Z Z kek2 dµ+ (e) kek2 dµ+ (e) + = n R \E E Z kek2 dµ(e), for any h, a ≤ L2 + Rn \E < ∞, + where the first inequality follows from the structure of a(e) in (14). Since θθ,h,a has finite second moment, + is uniformly integrable. Then by Theorem 3.5 in [56], e ∼ θθ,h,a êi+ → ê+ where êi+ = Eθ+ θi ,h,ai [e]. Note that C (θ, h, a) = = Z θ(e)c(e, h, a)de. n ZR Rn αa(e) + (1 − q(a(e), h))ke − ê+ k2 dµ(e). 18 Since ai (e) + (1 − q(ai (e), hi ))kei − êi+ k2 ≥ 0, then by Theorem 2.2 in [65], one obtains that Z αa(e) + (1 − q(a(e), h))ke − ê+ k2 dµ(e) n R Z ≤ lim inf αai (e) + (1 − q(ai (e), h))ke − êi+ k2 dµi (e), i→∞ Rn which means that C (θ, h, a) is lower semicontinuous. 
4.5 Relative Discounted Value Function Condition C5 Note that by Lemma 5 in [58], if inf J (d, θ, h) < ∞, (27) lim sup[υβ (θ, h) − mβ ] < ∞, ∀(θ, h) ∈ S. (28) d,θ,h then (19) can be equivalently written as β↑1 To verify (27), consider a suboptimal policy, denoted by d⋄ , where at each time instant the maximal transmission power ū is used. Given a belief θ, denote by Var(θ) the second central moment, i.e., Z θ(e)(e − ê)(e − ê)⊤ de, Var(θ) = (29) Rn where ê = E[e|e ∼ θ] is the mean. Then for any initial state (θ, h) ∈ S, if the policy d⋄ is used, one can rewrite (16) as C (θk , hk , ak ) = αū + (1 − q(ū, hk ))Tr(Var(θk )) and for any k ≥ 1,   AVar(θ )A⊤ + W, if γ = 0, k k Var(θk+1 ) =  W, otherwise, with P(γk = 0) = 1 − q(ū, hk ) and Var(θ1 ) = AΣ0 A⊤ + W . Then for any initial state (θ, h) ∈ S, with Assumption 4, there exists a finite upper bound κ(θ), which depends on the initial state θ, such that for ⋄ any k ≥ 1, Edθ,h [Tr(Var(θk ))] ≤ κ(θ). Then one obtains that inf J (d, θ, h) ≤ inf J (d⋄ , θ, h) θ,h d,θ,h < inf κ(θ) + αū θ <∞. We now focus on the verification of (28). Define the stopping time Tβ , inf{k ≥ 1 : υβ (θk , hk ) ≤ υ β (N0,W )}, 19 where υ β (N0,W ) = minh∈h υβ (N0,W , h). Then by Lemma 4.1 in [57], one has for any β < 1 and (θ, h) ∈ S, υβ (θ, h) − mβ ≤υ β (N0,W ) − mβ   Tβ −1 X + inf Edθ,h  C (θk , hk , ak ) . d∈D (30) k=1 ⋄ We now prove the finiteness of Edθ,h [Tβ ] with any initial state (θ, h). To this end, let h∗ = arg minh∈h υβ (N0,W , h) and T∗β , inf{k ≥ 1 : (θk , hk ) = (N0,W , h∗ )}. Note that the dependence of T∗β on β is due to h∗ . Then one can see that for any realization of {θk } and {hk }, T∗β ≥ Tβ always holds. Note that {hk } evolves independently. Though {θk } depends on the realization of {hk }, under the policy d⋄ , P(θk = N0,W ) ≥ q(ū, h) for all k > 1 with any initial state θ. Based on the above two ⋄ observations, we construct a uniform (for any 0 < β < 1) upper bound of Edθ,h [T∗β ] as follows. Define K (h, h′ ) = min{k > 1 : hk = h′ , h1 = h} as the first time hk reaches h′ when starting at h. Then given the initial state h, let {Ti }i≥1 be a sequence of independent random variables such that E[T1 ] = E[K (h, h∗ )] and E[Ti ] = E[K (h∗ , h∗ )], i > 1. Let χ be a geometrically distributed random variable with success probability q(ū, h). Then one obtains that ⋄ Edθ,h [T∗β ] # " χ X Ti ≤E i=1 1 ≤ max{E[K (h, h∗ )], E[K (h∗ , h∗ )]} q(ū, h) 1 ≤ max {max{E[K (h, h′ )], E[K (h′ , h′ )]}} q(ū, h) h,h′ ∈h < ∞, (31) (32) where the second inequality follows from the Wald’s identity and the last inequality follows from the assumption that h is a finite set and Assumption 1-(v). Note that since (31) is independent of β, Edθ,h⋄ [T∗β ] is uniformly bounded. With a little abuse of notation, define K (θ, θ ′ ) = min{k > 1 : θk = θ ′ , θ1 = θ}. (33) Then for any initial state (θ, h), the finiteness of J (d, θ, h) implies P (K (θ, N0,W ) < ∞) = 1. Then by the definition of υ β (N0,W ), one obtains that lim sup(υ β (N0,W ) − mβ ) = 0. β↑1 20 (34) One thus obtains that for any (θ, h) ∈ S, lim sup[υβ (θ, h) − mβ ] β↑1   Tβ −1 ≤ lim sup inf Edθ,h  β↑1 d∈D  X k=1 Tβ −1 ⋄ ≤ lim sup Edθ,h  β↑1 X k=1 C (θk , hk , ak )  C (θk , hk , ak ) T∗ −1  β X ⋄ ≤ lim sup Edθ,h  C (θk , hk , ak ) β↑1 k=1 d⋄ ≤ lim sup Eθ,h [T∗β − 1](κ(θ) + αū) β↑1 < ∞, where the first inequality follows from (30) and (34), the last second inequality follows from the Wald’s identity and the last inequality follows from (32). 
The condition (relative discounted value function) thus is verified. The proof of Theorem 1 now is complete. 5 Structural Description: Majorization Interpretation In this section, we borrow the technical reasoning from [28,66] to show that the optimal transmission power allocation strategy has a symmetric and monotonic structure and the optimal estimator has a simple form for cases where the system is scalar. Before presenting the main theorem, we introduce a notation as follows. For a policy d(d) ∈ Dds with d(θ, h) = a(e), with a little abuse of notations, we write a(e) as aθ,h (e) to emphasize its dependence on the state (θ, h). We also use aθ,h (e) to represent the deterministic and stationary policy d(d) with d(θ, h) = a(e). According to Theorem 1, to solve Problem 1, we can restrict the optimal policy to be deterministic and stationary without any performance loss. The following theorem suggests that we the optimal policy can be further restricted to be a specific class of functions. Theorem 2. Let the system (1) be scalar. There exists an optimal deterministic and stationary policy a∗θ,h (e) such that a∗θ,h (e) is a symmetric and monotonic function of e, i.e., for any given (θ, h) ∈ S, (i). a∗θ,h (e) = a∗θ,h (−e) for all e ∈ R; (ii). a∗θ,h (e1 ) ≥ a∗θ,h (e2 ) when |e1 | ≥ |e2 | with equality for |e1 | = |e2 |. The proof is given in Section 5.2. Note that Theorem 2 does not require a symmetric initial distribution µx0 . Intuitively, this is because 1) whatever the initial distribution is, the belief state will reach the very special state N0,W sooner or later, 2) we focus on the long term average cost and the cost incurred by finite transient states can be omitted. 21 Remark 7. When there exists only a finite number of power levels, only the thresholds of the innovation error used to switch the power levels are to be determined for computation of the optimal transmission power control strategy. This significantly simplifies both the offline computation complexity and the online implementation. The feedback of an accurate action lk in Figure 3 is possible in this scenario, since one just need to feed back a finite number of thresholds, at which the power level switches. In the following theorem, without a proof, we give the optimal estimator (10) when the transmission power controller adopts the optimal policy defined in Theorem 2. The simple structure of the optimal remote estimator gk∗ in (35) is due to the symmetric structure the function a∗θ,h (e) possesses. Recall that τ (k) is defined in (12). Theorem 3. Consider the optimal transmission power controller fk∗ , uk = fk∗ (xk , Ok− ) , a∗θk ,hk (ek ) where a∗θ,h (e) is a symmetric and monotonic function of ek . Then the optimal remote state estimator gk∗ is given by   x , k x̂k = gk∗ (Ok+ ) =  Aτ (k) x̂ if γk = 1, k−τ (k) , if γk = 0, (35) In the following, we focus on the proof of Theorem 2. 5.1 Technical Preliminaries We first give some supporting definitions and lemmas as follows. Definition 4 (Symmetry). A function f : Rn → R is said to be symmetric about a point o ∈ Rn , if, for any two points x, y ∈ Rn , ky − ok = kx − ok implies f (x) = f (y). Definition 5 (Unimodality). A function f : Rn → R is said to be unimodal, if there exists o ∈ Rn such that f (o) ≥ f (o + α0 v) ≥ f (o + α1 v) holds for any v ∈ Rn and any α1 ≥ α0 ≥ 0. Definition 6. For any given Borel measurable set B ⊂ Rn , where L(B) < ∞, we denote the symmetric rearrangement of B by B σ , i.e., B σ is a ball centered at 0 with the Lebesgue measure L(B). 
For a given integrable, nonnegative function f : Rn → R, we denote the symmetric nonincreasing rearrangement of f by f σ , where f σ is defined as σ f (x) , Z 0 ∞ 1{o∈Rn :f (o)>t}σ (x)dt. Definition 7. For any given two integrable, nonnegative functions f, g : Rn → R, we say that f majorizes g, which is denoted as g ≺ f , if the following conditions hold: Z Z σ gσ (x)dx f (x)dx ≥ kxk≤t kxk≤t 22 ∀t ≥ 0 (36) and Z f (x)dx = Z g(x)dx. Rn Rn Equivalently, (36) can be altered by the following condition: for any Borel set B ⊂ Rn , there always exists R R another Borel set B ′ with L(B ′ ) = L(B) such that B g(x)dx ≤ B′ f (x)dx. Recall that L, which is introduced in (14), is the saturation threshold for actions. Definition 8 (Binary Relation R on Belief States). For any two belief states θ, θ∗ ∈ Θ, we say that θRθ∗ if the following conditions hold: (i). there holds θ ≺ θ∗ ; (ii). θ∗ is symmetric and unimodal about the origin point 0. (iii). θ(e) = θ∗ (e) for any e ∈ Rn \E, where E , {e ∈ Rn : kek ≤ L} is defined below (14). In the following, we define a symmetric increasing rearrangement of an action a ∈ A, which preserves the average power consumption and successful transmission probability. Definition 9. For any given Borel measurable B ⊂ Rn , where L(B) < ∞, we define σ Bθ, , {e ∈ Rn : kek ≥ r}, θ̂ where θ, θ̂ ∈ Θ and r is determined such that aσθ,θ̂ (e) , It can be verified that if R Rn \E θ(e)de = Z R R Z B θ(e)de = ∞ 0 Rn \E R Bσ θ̂(e)de. Given an action a ∈ A, define θ,θ̂ 1{o∈Rn :a(o)>t}σ (e)dt. θ,θ̂ θ̂(e)de, aσ (e) ∈ A. One also obtains that θ,θ̂ a(e)θ(e)de = Z Rn Rn aσθ,θ̂ (e)θ̂(e)de (37) and for any h, Z q(a(e), h)θ(e)de = Rn Z Rn   q aσθ,θ̂ (e), h θ̂(e)de. Then the following lemma follows straightforwardly. Lemma 2. If A is a scalar, then θRθ∗ implies φ(θ, h, a, 0)Rφ(θ∗ , h, aσθ,θ∗ , 0), where φ(·, ·, ·, ·) is the belief update equation defined in (15).   Note that if θRθ̂, then q(a(e), h)θ(e)Rq aσθ,θ̂ (e), h θ̂(e). Then based on (37), following the same rea- soning as in Lemma 15 in [28], one obtains the following lemma. Lemma 3. If θRθ̂, then the following inequality about the one stage cost holds: C (θ, h, a) ≥ C (θ̂, h, aσθ,θ̂ ). 23 5.2 Proof of Theorem 2 We then proceed to prove Theorem 2 in a constructive way. To be specific, we show that for any initial state (θ, h), and any deterministic and stationary policy d(d) ∈ Dds 5 such that J (d(d), θ, h) < ∞6 , ˆ ∈ Dds with a symmetric and monotonic structure defined in Theothere exists another policy d̂(d) ˆ θ, h) ≤ J (d(d), θ, h). For any initial state (θ, h) and any policy with finite cost, rem 2 such that J (d̂(d), P (K (θ, N0,W ) < ∞) = 1, where K (·, ·) is defined in (33). Then, without loss of generality, we can assume that the initial state θ = N0,W . Let d(θ, h) = aθ,h (e), then under the policy d(d), the evolution of belief states is illustrated in Figure 4. Note that since the evolution of channel gains is independent of action a, we omit it. In Figure 4, the channel gain is assumed to be a constant h. Notice the difference between the notations θ i and θk . The quantity θ i denotes an element in Θ, while θk is the belief state of the MDP at time instant k. 1 − p0 1 − pi ✤✜ ❘ p0 ✤✜ θ1 θ0 ... p1 ❥ ✣✢ ✣✢ ■ ■ pi ■ ✤✜ ✤✜ ❘ θi θi+1 ... ✣✢ ✣✢ pi+1 Figure 4: Evolution of belief states. The special state θ 0 = N0,W , pi = ϕ(θ i , h, aθi ,h ), ∀i ≥ 0 is the successful transmission probability, and θ i+1 = φ(θ i , h, aθi ,h , 0), ∀i ≥ 0. When the belief state is θ i , it incurs cost C (θ i , h, aθi ,h ). 
ˆ h) = âθ,h (e), and p̂i and θ̂ i be the counterparts of pi and θ i , respectively. To facilitate presentation, Let d(θ, let ai , aθi ,h and âi = âθ̂i ,h . Then {âi }i∈N are constructed as follows: σ âi = ai θi ,θ̂i . Then by Lemmas 2 and 3, one obtains that p̂i =pi , ∀i ≥ 0, θ̂ 0 = θ 0 = N0,W , θ i Rθ̂ i , ∀i ≥ 1, C (θ i , h, ai ) ≥C (θ̂ i , h, âi ), ∀i ≥ 0, h ∈ h. ˆ θ, h) ≤ J (d(d), θ, h). Since {âi }i∈N is symmetric and increasing, and θ̂ i is It then follows that J (d̂(d), symmetric, one concludes the results of the theorem. 6 Conclusion and Future Work In this paper, we studied the remote estimation problem where the sensor communicates with the remote estimator over a fading channel. The transmission power control strategy, which affects the behavior of 5 6 By Theorem 1, without any performance loss, we can just focus on the class of deterministic and stationary policies Dds . Note that it is impossible for a policy with infinite cost to be optimal. 24 communications, as well as the remote estimator were optimally co-designed to minimize an infinite horizon cost consisting of power consumption and estimation error. We showed that when determining the optimal transmission power, the full information history available at the sensor is equivalent to its belief state. Since no constraints on the information structure are imposed and the belief state is updated recursively, the results we obtained provide some insights into the qualitative characterization of the optimal power allocation strategy and facilitate the related performance analyses. In particular, we provided some structural results on the optimal power allocation strategy and the optimal estimator, which simplifies the practical implementation of the algorithm significantly. One direction of future work is to explore the structural description of the optimal remote estimator and the optimal transmission power control rule when the system matrix is a general one. Appendix A We will introduce background knowledge for weak convergence of probability measures and generalized skorohad space. 6.1 Weak Convergence of Probability Measures Let X be a general Polish space and X be the Borel σ−field [55]. Let µ and {µi,i∈N } be probability measures on (X , X ). By the Portmanteau Theorem [56], the following statements are equivalent: (i). µi converges weakly to µ. (ii). limi→∞ R bdµi → R bdµ for every bounded and continuous function b(·) on X. (iii). limi→∞ µi (B) → µ(B) for every µ−continuity set B. w We write as µi → µ if µi converges weakly to µ. The Prohorov metric [56] is a metrization of this weak convergence topology. Let M be the collection of all the probability measures defined on (X , X ). If M is endowed with the weak convergence topology, then M is a Polish space. 6.2 Generalized Skorohod Space [64]. Let (X , dX (·, ·)) be a compact metric space and Λ be a set of homeomorphisms from X onto itself. Let π be a generic element of Λ, then on Λ, define the following three norms: kπks = sup dX (πx, x) x∈X kπkt = log sup x,y∈X :x6=y kπkm =kπks + kπkt . 25 dX (πx, πy) dX (x, y) Note that kπkt = kπ−1 kt . Let Λt ⊆ Λ be the group of homeomorphisms with finite k · kt , i.e., Λt = {π ∈ Λ : kπkt < ∞}. Note that since X is compact, each element in Λt also has finite k · km . Let Br (X ) be the set of bounded real-valued functions defined on X , then the Skorohod distance d(·, ·) for f, g ∈ Br (X ) is defined by d(f, g) = inf {ǫ > 0 : ∃ π ∈ Λt such that ǫ kπkm < ǫ and sup |f (x) − g(πx)| < ǫ}. 
(38) x∈X Let W be the set of all finite partitions of X that are invariant under Λ. Let I∆ be the collection of functions that are constant on each cell of a partition ∆ ∈ W. Then the generalized Skorohod space on X are defined by D(X ) = {f ∈ Br (X ) : ∃ ∆ ∈ W, g ∈ I∆ such that d(f, g) = 0}. (39) By convention, two functions f and g with d(a, b) = 0 are not distinguished. Then by Lemma 3.4, Theorems 3.7 and 3.8 in [64], the space D(X ) of the resulting equivalence classes with metric d(·, ·) defined in (38) is a complete metric space. For f ∈ Br (X) and ∆ = {δj } ∈ W, define w(f, ∆) = max sup{|f (x) − f (y)| : x, y ∈ δj }. δj (40) x,y For f ∈ Br (X), f ∈ D(X ) if and only if lim∆ w(f, ∆) → 0, with the limits taken along the direction of refinements. In the end, we should remark that in our case for the action space A, the collection of homeomorphisms Λt used to define the Skorohod distance in (38) is defined on E instead of Rn . Appendix B In the following lemma, we give a condition on the probability measures, under which the weak convergence implies set-wise convergence. Lemma 4. Let µ and {µi,i∈N } be probability measures defined on (Rn , B(Rn )), where B(Rn ) denotes the Borel σ−algebra of Rn . Suppose they are absolutely continuous with respect to the Lebesgue measure. Then the following holds: w sw µi → µ ⇒ µi → µ, (41) sw where µi → µ represents set-wise convergence, i.e., for any A ∈ B(Rn ), µi (A) → µ(A). Proof. The Borel σ−algebra B(Rn ) can be generated by n−demensional rectangles, i.e., B(Rn ) = σ({(x1 , y1 ] × · · · × (xn , yn ] : xj , yj ∈ R}). 26 (42) Since µ is absolutely continuous with respect to Lebesgue measure, all the rectangles are µ−continuity sets. By the Portmanteau Theorem [56], for any xj , yj ∈ R, µi ((x1 , y1 ] × · · · × (xn , yn ]) → µ((x1 , y1 ] × · · · × (xn , yn ]). Then statement (41) follows from (42), which completes the proof. References [1] W. S. Wong and R. W. Brockett, “Systems with finite communication bandwidth-part II: Stabilization with limited information feedback,” IEEE Transactions on Automatic Control, vol. 44, no. 5, pp. 1049–1053, 1999. [2] G. N. Nair and R. J. Evans, “Stabilizability of stochastic linear systems with finite feedback data rates,” SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 413–436, 2004. [3] S. Tatikonda and S. Mitter, “Control under communication constraints,” IEEE Transactions on Automatic Control, vol. 49, pp. 1056–1068, July 2004. [4] H. Ishii and B. A. Francis, “Quadratic stabilization of sampled-data systems with quantization,” Automatica, vol. 39, pp. 1793–1800, 2003. [5] M. Fu and L. Xie, “The sector bound approach to quantized feedback control,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1698–1711, 2005. [6] J. Hespanha, P. Naghshtabrizi, and Y. Xu, “A survey of recent results in networked control systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 138–162, 2007. [7] L. Schenato, “Optimal estimation in networked control systems subject to random delay and packet drop,” IEEE Transactions on Automatic Control, vol. 53, no. 5, pp. 1311–1317, 2008. [8] L. Shi, L. Xie, and R. M. Murray, “Kalman filtering over a packet-delaying network: A probabilistic approach,” Automatica, vol. 45, no. 9, pp. 2134–2140, 2009. [9] K. You, M. Fu, and L. Xie, “Mean square stability for kalman filtering with markovian packet losses,” Automatica, vol. 47, no. 12, pp. 2647–2657, 2011. [10] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. I. Jordan, and S. S. 
Induced Matchings and the Algebraic Stability of Persistence Barcodes Ulrich Bauer∗ and Michael Lesnick† arXiv:1311.3681v4 [math.AT] 30 Apr 2016 January 26, 2015 Abstract We define a simple, explicit map sending a morphism f : M → N of pointwise finite dimensional persistence modules to a matching between the barcodes of M and N. Our main result is that, in a precise sense, the quality of this matching is tightly controlled by the lengths of the longest intervals in the barcodes of ker f and coker f . As an immediate corollary, we obtain a new proof of the algebraic stability theorem for persistence barcodes [7, 10], a fundamental result in the theory of persistent homology. In contrast to previous proofs, ours shows explicitly how a δ-interleaving morphism between two persistence modules induces a δ-matching between the barcodes of the two modules. Our main result also specializes to a structure theorem for submodules and quotients of persistence modules, and yields a novel “single-morphism” characterization of the interleaving relation on persistence modules. 1 Introduction Persistent homology, a topological tool for analyzing the global, non-linear, geometric features of data, is one of the primary objects of study in applied topology. It provides simple, readily computed invariants, called barcodes, of a variety of types of data, such as finite metric spaces and R-valued functions. A barcode is simply a collection of intervals in R; we regard each interval as the lifespan of a topological feature of our data, and we interpret the length of the interval as a measure of the significance of that feature. To obtain a barcode from data, we proceed in three steps: 1. We first associate to the data a filtration, a family of topological spaces F = {Ft }t∈R such that F s ⊆ Ft whenever s ≤ t. For example, if our data is a function γ : T → R, we may take F = Sγ , where γ St = {x ∈ T | γ(x) ≤ t}; we call Sγ the sublevel set filtration of γ. ∗ † Technische Universität München, Germany. [email protected] Institute for Mathematics and its Applications, Minneapolis, MN, USA. [email protected] 1 2. We then apply ith singular homology with coefficients in a field to each space Ft and each inclusion F s ,→ Ft . This yields a persistence module Hi (F) for any i ≥ 0, i.e., a diagram of vector spaces indexed by the totally ordered set R. 3. A persistence module M is said to be pointwise finite dimensional (p.f.d.) if each of the vector spaces Mt is finite dimensional. The structure theorem of [16] yields a barcode invariant B M of any p.f.d. persistence M; B M specifies the decomposition of M into indecomposable summands. Thus, under mild assumptions on F, we obtain a barcode invariant BHi (F) of our data for each i ≥ 0. In the last fifteen years, these invariants have been applied widely to the study of scientific data [5, 19], and have been the subject of a great deal of theoretical interest. The Algebraic Stability Theorem In 2005, Cohen-Steiner, Edelsbrunner, and Harer introduced a stability result for the persistent homology of R-valued functions [14]. In brief, the result tells us that the map sending an R-valued function to its persistence barcode is 1-Lipschitz with respect to suitable choices of metrics. In 2009, Chazal, Cohen-Steiner, Glisse, Guibas, and Oudot showed that the stability result of [14] admits a purely algebraic generalization [7]. 
This generalization, known as the algebraic stability theorem, asserts that if there exists a δ-interleaving (a sort of “approximate isomorphism”) between two p.f.d. persistence modules M, N, then there exists a δ-matching (approximate isomorphism) between the barcodes B M , BN of these persistence modules. The algebraic stability theorem is perhaps the central theorem in the theory of persistent homology. It provides the core mathematical justification for the use of persistent homology in the study of noisy data. The theorem is used, in one form or another, in nearly all available results on the approximation, inference, and estimation of persistent homology. It has also been the basis for much subsequent theoretical work on persistence. While the earlier stability result for the persistent homology of R-valued functions [14] is itself a powerful result with several important applications, the more general algebraic stability theorem offers significant advantages. First, it allows us to dispense with some technical conditions on the stability result for functions of [14], thereby yielding a generalization of that result to a wider class of functions. Second and perhaps more importantly, the algebraic stability theorem offers a formalism for comparing the barcode invariants of functions defined on different domains, or for comparing barcode invariants of data that do not directly arise from functions at all. In practice, this added flexibility is quite valuable: It allows us to establish fundamental approximation and inference theorems for persistent homology that would otherwise be much harder to come by [8, 9, 11, 13, 20]. For one example, the algebraic stability theorem has been applied to establish the consistency properties of Rips complex-based estimators for the persistent homology of probability density functions [11]. The Isometry Theorem In fact, the converse to the algebraic stability theorem also holds: There exists a δ-interleaving between p.f.d. persistence modules M and N if and only if there exists a δ-matching between B M and BN . The algebraic stability theorem and its converse are together known as the isometry theorem. A slightly weaker formulation of the isometry theorem 2 establishes a relationship between the interleaving distance (a pseudometric on persistence modules) and the bottleneck distance (a widely studied pseudometric on barcodes): It says that the interleaving distance between M and N is equal to the bottleneck distance between B M and BN . Given the structure theorem for persistence modules [16], the converse algebraic stability theorem admits a very simple, direct proof. The converse was first proven for p.f.d. persistence modules in [23]. Later proofs appeared, independently, in [10] (in a slightly more general setting), and in [4] (in a special case). We give the proof of the converse algebraic stability theorem in Section 8, following [23]. The isometry theorem is interesting in part because the definition of the interleaving distance extends to a variety of generalized persistence settings where the direct definition of the bottleneck distance does not. For example, interleaving distances can be defined on multidimensional persistence modules [23] and filtrations [3, 7, 22]. The isometry theorem thus suggests one way to extend the definition of the bottleneck distance to these settings. 
Since much of the theory of topological data analysis is formulated in terms of the bottleneck distance [9, 11, 13, 21], this opens the door to the adaptation of the theory to more general settings. Interleaving distances on multidimensional persistence modules and filtered topological spaces satisfy universal properties, indicating that they are, in certain relative senses, the “right” generalizations of the bottleneck distance [3, 22, 23].

Earlier Proofs of the Algebraic Stability Theorem

The original proof of the algebraic stability theorem is an algebraic adaptation of the stability argument for the persistent homology of functions given in [14]. Owing to its geometric origins, this argument has a distinctly geometric flavor. In particular, the argument employs an (algebraic) interpolation lemma for persistence modules inspired by an interpolation construction for functions appearing in [14]. In 2012, Chazal, Glisse, de Silva, and Oudot [10] presented a reworking of the original proof of algebraic stability as part of an 80-page treatise on persistence modules, barcodes, and the isometry theorem. The reworked proof is similar on a high level to the original, but differs in the details. In particular, it makes use of a characterization of barcodes in terms of rectangle measures; these are functions from the set of rectangles in the plane to N ∪ {∞} which have properties analogous to those of a measure.

The original proof of [7] and the proof given in [10] are, to the best of our knowledge, the only proofs of the algebraic stability theorem in the literature. However, in unpublished work from 2011, Guillaume Troianowski and Daniel Müllner gave a third proof which, in contrast to the proofs of [7] and [10], establishes the theorem as a corollary of a general fact about persistence modules. First, in early 2011, Troianowski showed that if f : M → N is a morphism of persistence modules such that the length of each interval in the barcodes B_{ker f} and B_{coker f} is at most ε, then the bottleneck distance between B_M and B_N is at most 2ε. Troianowski never made this result public, but he mentioned it to the second author of this paper. Troianowski's result implies that if the interleaving distance between M and N is δ, then the bottleneck distance between B_M and B_N is at most 4δ. The result thus implies the algebraic stability theorem, up to a factor of 4.

As we were putting the finishing touches on the present paper, we learned that in a September 2011 manuscript never made public [24], Müllner and Troianowski strengthened Troianowski's result to show that under the same assumptions on ker f and coker f as above, the bottleneck distance between B_{M(ε/2)} and B_N is at most ε/2; here M(ε/2) denotes the shift of the persistence module M to the left by ε/2. This stronger result implies the algebraic stability theorem. In turn, the results of the present paper, obtained independently of [24], strengthen those of [24]. The proof of Müllner and Troianowski's result is an adaptation of the original proof of the algebraic stability theorem, and closely follows the technical details of that proof.

The three existing proofs of algebraic stability are thus rather similar to one another. In particular, each relies in an essential way on some version of the interpolation lemma. The interpolation lemma is interesting and pretty in its own right, and variants of it have found applications apart from the proof of algebraic stability [17, 23].
However, reliance on the interpolation lemma makes the existing proofs of algebraic stability rather indirect: Given a pair of δ-interleaved persistence modules M and N, these proofs construct a δ-matching between B_M and B_N only implicitly, and in a roundabout way, requiring several technical lemmas, a compactness argument, and consideration of a sequence of interpolating barcodes.

1.1 Induced Matchings of Barcodes

It is natural to ask whether there exists a more direct proof of the algebraic stability theorem which associates to a δ-interleaving between two persistence modules a δ-matching between their barcodes in a simple, explicit way. We present such a proof in this paper. To do so, we define a map X sending each morphism f : M → N of pointwise finite dimensional persistence modules to a matching X_f between the barcodes B_M and B_N. This map X is not functorial in general; in fact, we prove that it is impossible to define such a map in a fully functorial way. However, X is functorial on the two subcategories of persistence modules whose morphisms are the monomorphisms and the epimorphisms, respectively. X_f is completely determined by the barcodes B_M, B_N, and B_{im f}, and also completely determines these three barcodes.

The Induced Matching Theorem

We establish the algebraic stability theorem for pointwise finite dimensional persistence modules as an immediate corollary of a general result about the behavior of the matching X_f. This result, which we call the induced matching theorem, tells us that the quality of the matching X_f is tightly controlled by the lengths of the longest intervals in the barcodes B_{ker f} and B_{coker f}. Roughly, it says that for any pair (I, J) ∈ B_M × B_N of intervals matched by X_f,

(i) J is obtained from I by moving both the left and right endpoints of I to the left,

(ii) if each interval in B_{ker f} is of length at most ε, then X_f matches all intervals in B_M of length greater than ε, and the right endpoints of I and J differ by at most ε, and

(iii) dually, if each interval in B_{coker f} is of length at most ε, then X_f matches all intervals in B_N of length greater than ε, and the left endpoints of I and J differ by at most ε.

We give the precise statement of the theorem in Section 6.

Figure 1: The relationship between a pair of intervals matched by X_f, as described by the induced matching theorem.

A Structure Theorem for Persistence Submodules and Quotients

Our proof of the induced matching theorem uses none of the technical intermediates appearing in the earlier proofs of the algebraic stability theorem. Instead, our proof centers on an easy but apparently novel structure theorem for submodules and quotients of persistence modules. This structure theorem is in fact the specialization of the induced matching theorem to the cases ker f = 0 and coker f = 0, corresponding to ε = 0 in (ii) and (iii) above. A slightly weaker version of our structure theorem appears, independently, in the unpublished manuscript [24], with a different proof. Roughly, the structure theorem says that for p.f.d. persistence modules K ⊆ M, there exist canonical choices of

(i) an injection B_K ↪ B_M mapping each interval I ∈ B_K to an interval in B_M which contains I and has the same right endpoint as I,

(ii) an injection B_{M/K} ↪ B_M mapping each interval I ∈ B_{M/K} to an interval in B_M which contains I and has the same left endpoint as I.

A toy illustration of the pairing rule behind the first of these injections is sketched after Figure 2 below.

Figure 2: Intervals in the barcodes B_K and B_{M/K}, together with their images under the canonical injections B_K ↪ B_M and B_{M/K} ↪ B_M.
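To make the first injection concrete, here is a minimal sketch (not taken from the paper) of the pairing rule it uses, in the simplified setting where intervals are undecorated (left, right) pairs of reals and a barcode is a list of such pairs; the function name and data representation are illustrative assumptions, and the sketch presupposes that, for each right endpoint, B_K has at most as many intervals ending there as B_M does, as the structure theorem of Section 4 guarantees when K is a submodule of M.

```python
# Illustrative sketch only: pair each interval of B_K with an interval of B_M
# sharing its right endpoint, largest intervals first.  Intervals are
# undecorated (left, right) pairs; multiplicity is encoded by repetition.
from collections import defaultdict

def pair_by_right_endpoint(B_K, B_M):
    by_right = defaultdict(list)
    for left, right in B_M:
        by_right[right].append((left, right))
    for group in by_right.values():
        group.sort()  # smaller left endpoint = longer interval, so longest first
    used = defaultdict(int)
    pairing = []
    for left, right in sorted(B_K):  # same-endpoint intervals visited longest first
        i = used[right]
        pairing.append(((left, right), by_right[right][i]))
        used[right] += 1
    return pairing

B_K = [(1.0, 2.0), (1.5, 3.0)]              # barcode of a submodule K of M
B_M = [(0.5, 2.0), (1.0, 3.0), (2.5, 4.0)]
print(pair_by_right_endpoint(B_K, B_M))
# [((1.0, 2.0), (0.5, 2.0)), ((1.5, 3.0), (1.0, 3.0))]
```

In each matched pair, the second interval contains the first and shares its right endpoint, as described above; the injection B_{M/K} ↪ B_M is obtained symmetrically by grouping intervals on their left endpoints instead.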
Our definition of the matching X f induced by a morphism f : M → N of persistence modules is derived from the definitions of these canonical injections in a simple way. We consider the canonical factorization of f through its image, M  im f ,→ N. Note that im f is a submodule of N, and up to isomorphism, im f is also a quotient of M. Thus, the structure theorem gives us canonical injections B M ←- Bim f and Bim f ,→ BN , which we can interpret as matchings. We define the matching X f as a composition of these two matchings. See Sections 2.2, 4 and 5 for the formal definitions. A Single-Morphism Characterization of the Interleaving Relation As an easy corollary of the induced matching theorem and the converse algebraic stability theorem, we obtain a novel characterization of interleaved pairs of persistence modules: for any δ ≥ 0, two p.f.d. persistence modules M and N are δ-interleaved if and only if there exists a morphism f : M → N(δ) such that all intervals in Bker f and Bcoker f are of length at most 2δ. 5 Figure 3: The matching X f between B M and BN is defined via the canonical injections B M ←- Bim f and Bim f ,→ BN . The Isometry Theorem for q-Tame Persistence Modules Because of the structure theorem [16] and the induced matching theorem, which hold for p.f.d. persistence modules, the setting of p.f.d. persistence modules is a very convenient one in which to formulate the isometry theorem. In particular, in this setting we can formulate a sharp version of the isometry theorem, which appears here for the first time; see Remark 3.4 and Theorem 3.5. On the other hand, as explained in [10, 13], there is good reason to want a version of the isometry theorem for q-tame persistence modules. These are persistence modules M for which the maps M s → Mt are of finite rank whenever s < t ∈ R. Clearly, any p.f.d. persistence modules is q-tame. q-tame persistence modules that are not necessarily p.f.d. arise naturally in the study of compact metric spaces and continuous functions on compact triangulable spaces [10, 13]. With this in mind, [10, Theorem 4.11] presents a version of the isometry theorem for q-tame persistence modules. The chief difficulty in this, relative to the p.f.d. case, is in selecting a suitable definition the barcode of a q-tame persistence module; see Section 7 and [12]. In Sections 7 and 8, we show that [10, Theorem 4.11] follows easily from the isometry theorem for p.f.d. persistence modules. In particular, we make clear in Section 7 that the algebraic stability theorem for q-tame persistence modules is an easy corollary of the induced matching theorem. 2 Preliminaries This section presents the basic definitions and notation that we will use throughout the paper. 2.1 Persistence Modules and Barcodes Persistence Modules For K a field, let Vect denote the category of vector spaces over K, and let vect ⊂ Vect denote the subcategory of finite dimensional vector spaces. Let R denote the real numbers, considered as a poset category. That is, homR (s, t) consists of a single morphism for s ≤ t and is empty otherwise. As indicated in Section 1, a persistence module is a diagram of K-vector spaces indexed by R, i.e., a functor R → Vect. The persistence modules form a category VectR whose morphisms are natural transformations. Concretely, this means that a persistence module M assigns to each t ∈ R a vector space Mt , and to each pair s ≤ t ∈ R a linear map ϕ M (s, t) : M s → Mt in such a way that for all r ≤ s ≤ t ∈ R, ϕ M (t, t) = Id Mt and ϕ M (s, t) ◦ ϕ M (r, s) = ϕ M (r, t). 
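To fix ideas, the following small, purely illustrative sketch (not a construction from the paper) records this data in code: a persistence module restricted to a finite grid of indices, with each map ϕ_M(s, t) stored as a matrix of shape dim M_t × dim M_s. The grid, the numpy encoding, and the function name are assumptions made only for this example.

```python
# Purely illustrative: the interval module C[1, 3) restricted to the grid
# {0, 1, 2, 3}, with transition maps stored as matrices.
import numpy as np

grid = [0.0, 1.0, 2.0, 3.0]
dims = {0.0: 0, 1.0: 1, 2.0: 1, 3.0: 0}      # dim of C[1,3)_t at each grid point
step = {(0.0, 1.0): np.zeros((1, 0)),         # 0 -> K
        (1.0, 2.0): np.eye(1),                # K -> K, the identity
        (2.0, 3.0): np.zeros((0, 1))}         # K -> 0

def phi(s, t):
    """phi(s, t) obtained by composing consecutive steps; phi(t, t) is the identity."""
    mat = np.eye(dims[s])
    for a, b in zip(grid, grid[1:]):
        if s <= a and b <= t:
            mat = step[(a, b)] @ mat
    return mat

# Functoriality on the grid: phi(1, 3) = phi(2, 3) o phi(1, 2).
assert np.allclose(phi(1.0, 3.0), phi(2.0, 3.0) @ phi(1.0, 2.0))
```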
6 We call the maps ϕ M (s, t) transition maps. In this notation, a morphism f : M → N of persistence modules is exactly a collection of maps { ft : Mt → Nt }t∈R such that for all s ≤ t ∈ R the following diagram commutes: Ms ϕ M (s, t) fs Ns Mt ft ϕN (s, t) Nt We say f is a monomorphism (epimorphism) if ft is an injection (surjection) for all t ∈ R. As already noted, we say M is pointwise finite dimensional (p.f.d.) if dim Mt < ∞ for all t ∈ R. Let vectR denote the full subcategory of VectR whose objects are p.f.d. persistence modules. Interval Persistence Modules We say I ⊆ R is an interval if I is non-empty and r, t ∈ I implies s ∈ I whenever r ≤ s ≤ t. Let IR denote the set of all intervals in R. For I ∈ IR , define the interval persistence module C(I) by    K if t ∈ I, C(I)t =   0 otherwise.    IdK if s, t ∈ I, ϕC(I) (s, t) =   0 otherwise. Multisets Informally, a multiset is a collection where elements may repeat multiple times. For example, {x, x, y} is a multiset, where the element x appears with multiplicity 2 and the element y appears with multiplicity 1. Formally, a multiset is a pair (S , m), where S is a set and m : S → N is a function. Here N denotes the positive integers. One sometimes also allows m to take values in a larger set of cardinals, but we will not do that here. For a multiset S = (S , m) define the representation of S to be the set Rep(S) = {(s, k) ∈ S × N | k ≤ m(s)}. We will generally work with multisets by way of their representations. Barcodes We define a barcode to be the representation of a multiset of intervals in R. In the literature, barcodes are also often called persistence diagrams. Formally, the elements of a barcode are pairs (I, k), with I ∈ IR and k ∈ N. In what follows, we will denote an element (I, k) of a barcode simply as I, suppressing the index k, and call such an element an interval. Using [1, Theorem 1], the authors of [10] observe that in general, if M is a (not necessarily p.f.d.) persistence module such that M M C(I) I∈B M for some barcode B M , then B M is uniquely determined. We say that M is interval-decomposable and call B M the barcode of M. 7 Theorem 2.1 (Structure of p.f.d. Persistence Modules [16]). Any p.f.d. persistence module is interval-decomposable. 2.2 The Category of Matchings A matching from S to T (written as σ : S → T ) is a bijection σ : S 0 → T 0 , for some S 0 ⊆ S , T 0 ⊆ T ; we denote S 0 as coim σ and T 0 as im σ. Formally, we regard σ as a relation σ ⊆ S × T , where (s, t) ∈ σ if and only if s ∈ coim σ and σ(s) = t. We define the reverse matching rev σ : T → S in the obvious way. Note that any injective function is in particular a matching. For matchings σ : S → T and τ : T → U, we define the composition τ ◦ σ : S → U as | | | | | τ ◦ σ = {(s, u) | (s, t) ∈ σ, (t, u) ∈ τ for some t ∈ T }. As noted in [20], with this definition we obtain a category Mch whose objects are sets and whose morphisms are matchings. Given a matching σ : S → T , there is a canonical matching S → coim σ; this matching is the categorical coimage of σ in Mch. Similarly, the canonical matching im σ → T is the categorical image of σ in Mch. This justifies our choice of notation for coim σ and im σ. | | | 2.3 Decorated Endpoints and Intervals As noted in the introduction, our main results concern matchings of barcodes that move endpoints of intervals in controlled ways. To make the notion of moving the endpoints of intervals precise in the R-indexed setting, we need some formalism. Let D = {−, +}. 
Adopting a notational and terminological convention of [10], we define the set E of (decorated) endpoints by E = R × D ∪ {−∞, ∞}. For t ∈ R, we will write the decorated endpoints (t, −) and (t, +) as t− and t+ , respectively. We define a total order on D by taking − < +. The lexicographic total ordering on R × D then extends to a total ordering on E by taking −∞ to be the minimum element and taking ∞ to be the maximum element. We define an addition operation ( · ) + ( · ) : E × R → E by taking s± + t = (s + t)± and ±∞ + t = ±∞ for all s, t ∈ R. We define a subtraction operation ( · ) − ( · ) : E × R → E by taking e − t = e + (−t) for (e, t) ∈ E × R. There is a sensible bijection from the set {(b, d) ∈ E × E | b < d} to IR , the set of intervals in R, so that we may regard intervals as ordered pairs of decorated endpoints. This bijection is specified by the following table: −∞ s− s+ t− (−∞, t) [s, t) (s, t) t+ (−∞, t] [s, t] (s, t] ∞ (−∞, ∞) [s, ∞) (s, ∞) For example, for s < t ∈ R the bijection sends (s+ , t− ) to the interval (s, t) and sends (s− , s+ ) to the one-point interval [s, s]. We will always denote the interval specified by b < d as hb, di to avoid confusion with the usual notation for open intervals. Note that hb, di ⊆ hb0 , d0 i whenever b0 ≤ b < d ≤ d 0 . 8 3 The Isometry Theorem In this section we give the precise statement of the isometry theorem for p.f.d. persistence modules. We discuss the isometry theorem for q-tame persistence modules in Sections 7 and 8. 3.1 Interleavings and the Interleaving Distance Distances An extended pseudometric on a class X is a function d : X × X → [0, ∞] with the following three properties: 1. d(x, x) = 0 for all x ∈ X, 2. d(x, y) = d(y, x) for all x, y ∈ X, 3. d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X such that d(x, y), d(y, z) < ∞. Note that an extended pseudometric d is a metric if d is finite and d(x, y) = 0 implies x = y. In this paper, by a distance we will mean an extended pseudometric. Shift Functors For δ ∈ R, we define the shift functor ( · )(δ) : VectR → VectR as follows: For M a persistence module we let M(δ) be the persistence module such that for all t ∈ R we have M(δ)t = Mt+δ , and for all s ≤ t ∈ R we have ϕ M(δ) (s, t) = ϕ M (s + δ, t + δ). For a morphism f ∈ hom(VectR ) we define f (δ) by taking f (δ)t = ft+δ . Note that the barcode B M(δ) is obtained from B M by shifting all intervals to the left by δ, as in Fig. 4. Figure 4: Corresponding intervals in B M and B M(δ) . Transition Morphisms For M a persistence module and δ ≥ 0, let the δ-transition morphism ϕδM : M → M(δ) be the morphism whose restriction to Mt is the linear map ϕ M (t, t + δ), for all t ∈ R. Interleavings We say that two persistence modules M and N are δ-interleaved if there exist morphisms f : M → N(δ) and g : N → M(δ) such that g(δ) ◦ f = ϕ2δ M, f (δ) ◦ g = ϕ2δ N. We refer to such f and g as δ-interleaving morphisms. The definition of δ-interleaving morphisms was introduced in [7]. Remark 3.1. It is easy to show that if L and M are δ-interleaved, and M and N are δ0 -interleaved, then L and N are (δ + δ0 )-interleaved. Similarly, if 0 ≤ δ ≤ δ0 and M and N are δ-interleaved, then M and N are also δ0 -interleaved. 9 Example 3.2. Here is one of the central examples of a δ-interleaving in topological data analysis: For T a topological space and functions γ, κ : T → R, let d∞ (γ, κ) = sup |γ(y) − κ(y)|. y∈T Suppose d∞ (γ, κ) = δ. Then for each t ∈ R, we have inclusions γ St ⊆ Sκt+δ , γ Sκt ⊆ St+δ . 
Applying the ith homology functor with coefficients in K to the collection of all such inclusion maps yields a δ-interleaving between Hi (Sγ ) and Hi (Sκ ). The Interleaving Distance We define dI : obj(VectR ) × obj(VectR ) → [0, ∞], the interleaving distance, by taking dI (M, N) = inf {δ ∈ [0, ∞) | M and N are δ-interleaved}. It is not hard to check that dI is a distance on persistence modules. In addition, if M, M 0 , and N are persistence modules with M  M 0 , then dI (M, N) = dI (M 0 , N), so dI descends to a distance on isomorphism classes of persistence modules. 3.2 δ-Matchings and the Bottleneck Distance For D a barcode and  ≥ 0, let D = {hb, di ∈ D | b +  < d} = {I ∈ D | [t, t + ] ⊆ I for some t ∈ R}. Note that D0 = D. We define a δ-matching between barcodes C and D to be a matching σ : C → D such that | 1. C2δ ⊆ coim σ, 2. D2δ ⊆ im σ, 3. if σhb, di = hb0 , d0 i, then hb, di ⊆ hb0 − δ, d0 + δi, hb0 , d0 i ⊆ hb − δ, d + δi. See Fig. 5 for an example. Remark 3.3. Note that for barcodes C, D, and E, σ1 : C → D a δ1 -matching, and σ2 : D → E a δ2 -matching, σ2 ◦ σ1 : C → E is a (δ1 + δ2 )-matching. | | We define the bottleneck distance dB by dB (C, D) = inf {δ ∈ [0, ∞) | ∃ a δ-matching between C and D}. The triangle inequality for dB follows immediately from Remark 3.3. dB is the most commonly considered distance on barcodes in the persistent homology literature. This is in part because dB is especially well behaved from a theoretical standpoint. 10 Figure 5: A δ-matching between two barcodes C and D. Endpoints of matched intervals are at most δ apart, and unmatched intervals are of length at most 2δ. Remark 3.4. Our definition of a δ-matching is slightly stronger than the one appearing in [10, Section 4.2], which is insensitive to the decoration of the endpoints of intervals. Using the stronger definition of δ-matching allows us to state slightly sharper stability results. However, regardless of which definition of δ-matching one uses, the definition of the bottleneck distance one obtains is the same. Bottleneck Distance as an Interleaving Distance As an aside, note that we can regard a barcode D as an object in MchR , the category whose objects are functors R → Mch and whose morphisms are natural transformations: For each real number t we let Dt be the subset of D consisting of all intervals which contain t, and for each s ≤ t we define the transition matching φD (s, t) : D s → Dt to be the identity on D s ∩ Dt . Viewing barcodes as objects of MchR in this way, it is possible to define interleavings and an interleaving distance on barcodes in essentially the same way as for persistence modules. The reader may check that δ-interleavings of barcodes as thus defined correspond exactly to δ-matchings of barcodes. Thus, dB is equal to the interleaving distance on barcodes. See [2] for details. | 3.3 The Isometry Theorem Theorem 3.5 (Isometry Theorem for p.f.d. Persistence Modules). Two p.f.d. persistence modules M and N are δ-interleaved if and only if there exists a δ-matching between B M and BN . In particular, dB (B M , BN ) = dI (M, N). As noted earlier, the algebraic stability theorem (Theorem 6.4) is the “only if” half of the isometry theorem: It says that if M and N are δ-interleaved, then there exists a δ-matching between B M and BN . We give a proof of the algebraic stability theorem as an immediate consequence of our induced matching theorem in Section 6.1. In Section 8, we present a proof of the converse, following [23]. 
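As a concrete companion to the definition of a δ-matching in Section 3.2, the following sketch (not from the paper) checks the three conditions in the simplified undecorated setting, with intervals encoded as (b, d) pairs of reals and a matching given by index pairs; the function name and encoding are illustrative assumptions, and endpoint decorations (cf. Remark 3.4) are ignored.

```python
# Illustrative check of the delta-matching conditions of Section 3.2 for
# undecorated intervals; barcodes are lists of (b, d) pairs, the matching a
# list of (index into C, index into D) pairs.
def is_delta_matching(C, D, matching, delta):
    matched_C = {i for i, _ in matching}
    matched_D = {j for _, j in matching}
    # Conditions 1 and 2: every interval of length > 2*delta must be matched.
    if any(d - b > 2 * delta and i not in matched_C for i, (b, d) in enumerate(C)):
        return False
    if any(d - b > 2 * delta and j not in matched_D for j, (b, d) in enumerate(D)):
        return False
    # Condition 3: matched intervals have corresponding endpoints within delta.
    return all(abs(C[i][0] - D[j][0]) <= delta and abs(C[i][1] - D[j][1]) <= delta
               for i, j in matching)

C = [(0.0, 3.0), (1.0, 1.5)]
D = [(0.5, 2.8)]
print(is_delta_matching(C, D, [(0, 0)], delta=0.5))  # True: (1.0, 1.5) may stay unmatched
print(is_delta_matching(C, D, [], delta=0.5))        # False: (0.0, 3.0) must be matched
```

With decorations ignored, condition 3 reduces to requiring that corresponding endpoints of matched intervals differ by at most δ.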
As one illustration of the utility of the algebraic stability theorem, note that in view of Example 3.2, the theorem yields the following strengthening of the original stability result for barcodes of R-valued functions [14]: Corollary 3.6 ([7, 14]). For any topological space T , functions γ, κ : T → R, and i ≥ 0 such that Hi (Sγ ) and Hi (Sκ ) are p.f.d., dB (BHi (Sγ ) , BHi (Sκ ) ) ≤ d∞ (γ, κ). 11 3.4 Dualization of Persistence Modules To close this section on preliminaries, we examine the behavior of barcodes under dualization of persistence modules. For more on duality in the context of persistence, see [6, 15, 18]. Let Rop denote the opposite category of R and let Neg : R → Rop be the isomorphism of categories such that for all objects t ∈ R, Neg(t) = −t. Given any persistence module M, taking the dual of all vector spaces and all linear maps in M gives us a functor M † : Rop → Vect. We define M ∗ , the dual persistence module of M, by M ∗ = M † ◦ Neg : R → Vect. If M is p.f.d., M ∗∗ is canonically isomorphic to M. If f : M → N is a morphism of persistence modules, we define f ∗ : N ∗ → M ∗ by taking ( f ∗ )t = ( f−t )∗ . With these definitions, ( · )∗ is a contravariant endofunctor on VectR . When M and N are p.f.d., under the canonical identifications of M ∗∗ with M and N ∗∗ with N, we have f ∗∗ = f . For D a barcode, let D∗ be the barcode {−I | I ∈ D}, where we define −I = {t | −t ∈ I} for any interval I . We then have the following easy observation, which we state without proof: Proposition 3.7. If M is p.f.d., then B M∗ = (B M )∗ . 4 The Structure of Persistence Submodules and Quotients As a starting point for the main results of this paper, in this section we describe the relationship between the barcode of a p.f.d. persistence module M and the barcode of a submodule or quotient of M. 4.1 Canonical Injections To prepare for the main result of the section, we first introduce the definition of a canonical injection between barcodes. Canonical Injections between Enumerated Sets Define an enumerated set S to be a totally ordered set S such that there exists an order-preserving bijection S → N or S → {1, . . . , n}. For S , T enumerated sets with |S | ≤ |T |, we define the canonical injection αST : S ,→ T by αST (si ) = ti , where si and ti denote the ith element of S and T , respectively. Remark 4.1. Clearly, for S , T, U enumerated sets with |S | ≤ |T | ≤ |U|, the canonical injections satisfy αSU = αTU ◦ αST Partitions of Barcodes into Enumerated Sets Suppose S = (S , m) is a multiset. A total order on S induces a canonical total order on Rep(S), which is obtained by restricting the lexicographic total order on S × N to Rep(S). 12 For M a p.f.d. persistence module and b ∈ E, let hb, · i M denote the intervals in B M of the form hb, di for some d ∈ E. Symmetrically, for d ∈ E, let h · , di M denote the intervals in B M of the form hb, di for some b ∈ E. Note that a a BM = h · , di M = hb, · i M . d∈E b∈E For each b, d ∈ E, we regard both hb, · i M and h · , di M as totally ordered sets, with the total order on each set induced by the reverse inclusion relation on intervals, so that larger intervals are ordered before smaller ones. Thus, for example, if hb, di, hb, d0 i ∈ B M and d0 > d, we have that hb, d0 i < hb, di in the total order on hb, · i M . With these choices of total orders, each of the sets hb, · i M and h · , di M is an enumerated set. Canonical Injections Between Barcodes If M, N are p.f.d. 
persistence modules and |h · , di M | ≤ |h · , diN | for each d ∈ E, then the canonical injections h · , di M ,→ h · , diN assemble into an injection B M ,→ BN , which we also call a canonical injection. Note that this injection maps the ith largest interval of h · , di M to the ith largest interval of h · , diN for all d ∈ E, 1 ≤ i ≤ |h · , di M |, as in Fig. 6. Figure 6: The canonical injection h · , di M ,→ h · , diN matches intervals in order of decreasing length. Dually, if we have |hb, · i M | ≥ |hb, · iN | for each b ∈ E, then the canonical injections hb, · i M ←- hb, · iN assemble into a canonical injection B M ←- BN which maps the ith largest interval of hb, · iN to the ith largest interval of hb, · i M . 4.2 Structure Theorem We now turn to the main result of this section, our structure theorem for submodules and quotients of persistence modules. Theorem 4.2 (Structure of Persistence Submodules and Quotients). Let M and N be p.f.d. persistence modules. (i) If there exists a monomorphism M ,→ N, then for each d ∈ E |h · , di M | ≤ |h · , diN |, and the canonical injection B M ,→ BN maps each hb, di ∈ B M to an interval hb0 , di with b0 ≤ b. (ii) Dually, if there exists an epimorphism M  N, then for each b ∈ E |hb, · i M | ≥ |hb, · iN |, 13 and the canonical injection B M ←- BN maps each interval hb, di ∈ BN to an interval hb, d0 i with d ≤ d0 . For M a p.f.d. persistence module and I = hb, di, let h · , Ii M ⊆ B M denote the intervals in h · , di M which contain I. The key step in our proof of Theorem 4.2 (i) is the following. Lemma 4.3. If j : M → N is a monomorphism between p.f.d. persistence modules, then for each interval I, |h · , Ii M | ≤ |h · , IiN |. Proof. Write I = hb, di. We may assume without loss of generality that M M M= C(J), N= C(J). J∈B M Let M U= J∈BN M V= C(J), J∈h · , Ii M C(J). J∈h · , IiN Clearly, U ⊆ M and V ⊆ N. For J any interval and t ∈ R, we write t > J if t > s for all s ∈ J. Since M and N are p.f.d., there exists some t ∈ I such that t > J for all intervals J = hb0 , d0 i ∈ B M ∪ BN with b0 ≤ b and d0 < d. Note that dim Ut = |h · , Ii M | and dim Vt = |h · , IiN |. We claim that jt (Ut ) ⊆ Vt . To see this, note that by the choice of t, \ \ Ut = im ϕ M (s, t) ∩ ker ϕ M (t, r), Vt = s∈I s≤t r>I \ \ im ϕN (s, t) ∩ s∈I s≤t ker ϕN (t, r). r>I For each s ∈ I such that s ≤ t, we have jt (im ϕ M (s, t)) ⊆ im ϕN (s, t) by the commutativity of the following diagram: Ms ϕ M (s, t) Mt js Ns jt ϕN (s, t) Nt Similarly, for each r > I, we have jt (ker ϕ M (t, r)) ⊆ ker ϕN (t, r). Thus jt (Ut ) ⊆ Vt as claimed. Since jt is an injection, we have dim Ut ≤ dim Vt , and the lemma follows.  Proof of Theorem 4.2. To show that |h · , di M | ≤ |h · , diN |, it is enough to observe that for all i ≤ |h · , di M |, we have i ≤ |h · , diN |. Let I = hb, di be the ith interval of the enumerated set h · , di M . Note that for 1 ≤ j ≤ i, h · , Ii M contains the jth interval of h · , di M . Hence i ≤ |h · , Ii M | ≤ |h · , IiN | ≤ |h · , diN |, where the second inequality follows from Lemma 4.3. 14 Now let I 0 denote the ith interval of h · , diN . Recall that the canonical injection B M ,→ BN matches I to I 0 . Since |h · , Ii M | ≤ |h · , IiN | we must have I 0 ∈ h · , IiN , so I 0 = hb0 , di for some b0 ≤ b. This establishes Theorem 4.2 (i). Theorem 4.2 (ii) follows from Theorem 4.2 (i) by a simple duality argument: Suppose q : M → N is an epimorphism of p.f.d. persistence modules. Then q∗ : N ∗ → M ∗ is a monomorphism. 
Theorem 4.2 (i) yields the canonical injection BN ∗ ,→ B M∗ mapping each hb, di ∈ BN ∗ to an interval hb0 , di ∈ B M∗ with b0 ≤ b. By Proposition 3.7, this map in turn induces an injection ι : BN ,→ B M , which is exactly the canonical injection. Since ι maps each hb, di ∈ BN to an interval hb, d0 i ∈ B M with d ≤ d0 , the result follows.  Remark 4.4. After we announced our results, Primož Škraba and Mikael Vejdemo-Johansson shared with us a nice alternate proof of Theorem 4.2 (ii) for the special case of finitely generated N-indexed persistence modules. By duality, this yields Theorem 4.2 (i) for the special case as well. The proof uses the graded Smith Normal Form algorithm for matrices with homogeneous k[t]-coefficients, as discussed in [25, 26]. Here is a brief outline of the argument: Given an epimorphism M  N of finitely generated N-graded persistence modules and a presentation matrix P M for M in graded Smith Normal Form, we can extend P M to a presentation matrix PN for N by adding more columns; see [25]. By considering the behavior of the graded Smith Normal Form algorithm on PN , we can show that the barcode of N is obtained from the barcode of M by moving the right endpoints of intervals in the barcode of M to the left. In fact, for this it suffices to consider just the column operations performed by the algorithm, since these are enough to reduce PN to a form from which we can read off the barcode of N [26]. The special case of Theorem 4.2 (ii) follows readily. 5 Induced Matchings of Barcodes We now define the map X sending each morphism f : M → N of pointwise finite dimensional persistence modules to a matching X f : B M → BN . We first define the map for monomorphisms and epimorphisms. For j : M ,→ N a monomorphism, we define X j : B M → BN to be the canonical injection B M ,→ BN of Theorem 4.2, considered as a matching. Dually, for q : M  N an epimorphism, we define Xq : B M → BN to be the reverse of the canonical injection B M ←- BN . Now consider an arbitrary morphism f : M → N of p.f.d. persistence modules. f factors (canonically) as a composition of morphisms | | | jf qf M  im f ,→ N, where q f is an epimorphism and j f is a monomorphism. We define X f = X j f ◦ Xq f . Remark 5.1. To build intuition for our definition of X f , the reader may find it helpful to consider the definition in the generic case that |hb, · i M | ≤ 1 and |h · , diN | ≤ 1 for all b, d ∈ E. In 15 this case, the definition becomes especially simple: for each interval I = hb, di ∈ Bim f , there is a unique interval I M = hb, d0 i ∈ B M and a unique interval IN = hb0 , di ∈ BN . We have X f = {(I M , IN ) | I ∈ Bim f }. Figure 7: Barcodes of M, im f , and N, as in Example 5.2. Matched intervals are grouped together. Example 5.2. Let M = C[1, 2) ⊕ C[1, 3), N = C[0, 2) ⊕ C[3, 4). Let f : M → N be a morphism which maps the summand C[1, 2) injectively into the summand C[0, 2) and maps the summand C[1, 3) to 0. Then im f  C[1, 2). The barcodes B M , Bim f , and BN are plotted in Fig. 8. We have X f = {([1, 3), [0, 2))}. 5.1 Properties of Induced Matchings Several key facts about the induced matchings X f follow almost immediately from the definition. We record them here. Proposition 5.3. Let f : M → N be a morphism of p.f.d. persistence modules and suppose X f hb, di = hb0 , d0 i. Then b0 ≤ b < d0 ≤ d. Figure 8: A pair of intervals matched by X f , together with the corresponding interval in Bim( f ) . Proof. By the definition of X f , we have Xq f hb, di = hb, d0 i and X j f hb, d0 i = hb0 , d0 i. 
By Theorem 4.2, b0 ≤ b < d0 ≤ d, where the middle inequality holds because hb, d0 i is an interval.  Our induced matching theorem, presented in Section 6, is a refinement of Proposition 5.3. Proposition 5.4. For any morphism f : M → N of p.f.d. persistence modules, X f is completely determined by B M , BN , and Bim f . Conversely, X f completely determines these three barcodes. 16 Proof. It is clear from the definition that X f depends only on B M , BN , and Bim f . X f is a matching between B M and BN , so by definition X f determines B M and BN . To prove the converse then, it is enough to check that Bim f = {I ∩ J | (I, J) ∈ X f }, where we interpret the right hand side as the representation of a multiset in the obvious way. By the definition of X f , the map X f → Bim f which sends each matched pair (I, J) ∈ X f to Xq f (I) ∈ Bim f is a bijection. To obtain the result, we note that if X f hb, di = hb0 , d0 i, then Xq f hb, di = hb, d0 i = hb, di ∩ hb0 , d0 i,  where the second equality follows from Proposition 5.3. Induced Matchings and Duality We next observe that the matching X f ∗ is, up to indexing considerations, the reverse of the matching X f . For D a barcode, recall our definition of D∗ from Section 3.4. A matching σ : C → D between barcodes canonically induces a matching σ0 : C∗ → D∗ , and hence a matching σ∗ : D∗ → C∗ obtained simply by reversing σ0 . With these definitions, ( · )∗ is a contravariant endofunctor on the full subcategory of Mch whose objects are barcodes. The following is a counterpart of Proposition 3.7 for induced matchings: | | | Proposition 5.5. If f : M → N is a morphism of p.f.d. persistence modules, then X f ∗ = X∗f . Proof. If q is an epimorphism of persistence modules and j is a monomorphism of persistence modules, then q∗ is a monomorphism of persistence modules and j∗ is an epimorphism of persistence modules. Thus, for f = j f ◦ q f we have X f ∗ = X( j f ◦q f )∗ = Xq∗f ◦ j∗f = Xq∗f ◦ X j∗f . It is easy to check that if g is an epimorphism or a monomorphism, then X∗g = Xg∗ . Therefore, Xq∗f ◦ X j∗f = X∗q f ◦ X∗j f = (X j f ◦ Xq f )∗ = X∗f . We thus have X f ∗ = X∗f as desired. 5.2  Partial Functoriality of Induced Matchings Let B : obj(vectR ) → obj(Mch) denote the map sending a p.f.d. persistence module M to its barcode B M . Given that X : hom(vectR ) → hom(Mch) maps each morphism f : M → N of persistence modules to a matching between B M and BN , one might hope that the pair (B, X) defines a functor vect → Mch. However, as the next example shows, X does not respect composition of morphisms, so is not functorial. 17 Example 5.6. Consider persistence modules L = C[3, ∞) ⊕ C[4, ∞), M = C[2, ∞) ⊕ C[1, ∞), N = C[0, ∞). Define f : L → M by f (s, t) = (s, 0); define g : M → N by g(s, t) = t. We have im f  C[3, ∞) and im g  C[1, ∞). X f matches the interval [3, ∞) ∈ BL to [1, ∞) ∈ B M , and leaves [4, ∞) ∈ BL and [2, ∞) ∈ B M unmatched. Xg matches [1, ∞) ∈ B M to [0, ∞) ∈ BN and leaves [2, ∞) ∈ B M unmatched. Thus Xg ◦ X f matches [3, ∞) ∈ BL to [0, ∞) ∈ BN and leaves [4, ∞) ∈ BL unmatched. On the other hand g ◦ f = 0, so Xg◦ f is the trivial matching (i.e., Xg◦ f = ∅). Though X is not functorial in general, we have the following partial functoriality result for X: Proposition 5.7. X is functorial on the two subcategories of vectR whose morphisms are, respectively, the monomorphism and epimorphisms. Proof. It is clear from the definition that X sends identity morphisms to identity morphisms. If j : M → N is a monomorphism of p.f.d. 
persistence modules, then by definition X j factors as the disjoint union of the canonical injections h · , di M ,→ h · , diN indexed by endpoints d ∈ E. It thus follows by Remark 4.1 that if j1 : L → M and j2 : M → N are monomorphisms of p.f.d. persistence modules, then X j2 ◦ X j1 = X j2 ◦ j1 . Thus, X is functorial on the subcategory of monomorphisms in vectR . Using essentially the same argument, together with the fact that the operation of reversing a matching is functorial, we find that, dually, X is functorial on the subcategory of epimorphisms in vectR .  Example 5.8. Given p.f.d. persistence modules M, N, P, Q and morphisms f : M → N, g : P → Q we can define the direct sum f ⊕ g : M ⊕ P → N ⊕ Q in the obvious way. In view ` of Proposition 5.7, one might hope that when f and g are monomorphisms, X f ⊕g = X f Xg . However, this equality does not hold in general. For example, we can take M = N = C[1, 2), ` P = 0, and Q = C[0, 2), with f : M → N the identity map. Then X f Xg = {([1, 2), [1, 2))} but X f ⊕g = {([1, 2), [0, 2))}. Non-Functoriality of Persistence Barcodes It is natural to ask whether there exists some other, fully functorial definition of a map sending each morphism M → N of p.f.d. persistence modules to a matching B M → BN . Proposition 5.10 below makes clear that the answer is no. | Lemma 5.9. There exists no functor vect → Mch sending each vector space of dimension d to a set of cardinality d. Proof. Assume for a contradiction that such a functor S exists. We first show that S must send each linear map f : V → W of rank r to a matching S ( f ) of cardinality |S ( f )| = r. We begin with the case of f an injection. In this case, f has a left inverse g : W → V. We have S (g) ◦ S ( f ) = S (g ◦ f ) = S (IdV ), 18 so r = |S (IdV )| = |S (g) ◦ S ( f )| ≤ |S ( f )| ≤ dim V = r, which gives the result. By an analogous argument, the result also holds when f is a surjection. qf jf For the general case, consider f as a composition of maps V  im f ,→ W. Since rank q f = rank j f = rank f = r, we have |S (q f )| = |S ( j f )| = r. Since |S (im f )| = r as well, this implies that both S (q f ) and S ( j f ) match every element of S (im f ). It follows that |S ( f )| = r. Now let i1 , i2 , i3 : K → K 2 be injective linear maps with left inverses p1 , p2 , p3 : K 2 → K, respectively, such that p1 ◦ i2 = p2 ◦ i1 = 0 and p1 ◦ i3 = p2 ◦ i3 = IdK . For instance, i1 : x 7→ (x, 0), i2 : x 7→ (0, x), i3 : x 7→ (x, x), p1 : (x, y) 7→ x, p2 : (x, y) 7→ y, p3 : (x, y) 7→ x. Each of these six maps is of rank 1 and thus matches exactly one element of S (K 2 ). Write S (K 2 ) = {a, b} and assume without loss of generality that S (i1 ) matches a. Since S (p1 ) ◦ S (i1 ) = S (p1 ◦ i1 ) = S (IdK ) = IdS (K) , S (p1 ) must also match a. Moreover, S (i2 ) must match b; otherwise, S (i2 ) would have to match a, and we would have S (p1 ) ◦ S (i2 ) , ∅ and S (p1 ◦ i2 ) = S (0) = ∅, violating the functoriality of S . By an analogous argument, S (p2 ) must match b as well. If i3 matches a, then S (p2 ) ◦ S (i3 ) = ∅. But S (p2 ◦ i3 ) = S (IdK ) , ∅, violating functoriality. On the other hand, if i3 matches b, then S (p1 ) ◦ S (i3 ) = ∅ but S (p1 ◦ i3 ) , ∅, again violating functoriality. We conclude that the functor S cannot exist.  Proposition 5.10. There exists no functor vectR → Mch sending each persistence module to its barcode. Proof. Assume for a contradiction that there exists a functor B : vectR → Mch sending each persistence module to its barcode. 
Consider the functor P : vect → vectR that sends a vector space V to the persistence module M with M0 = V and Mt = 0 for t ≠ 0. The functor B ◦ P : vect → Mch then sends each vector space of dimension d to a barcode of cardinality d. But by Lemma 5.9, such a functor cannot exist. We conclude that there cannot exist a functor vectR → Mch sending each persistence module to its barcode. □

6 The Induced Matching Theorem

We now come to our main result, the induced matching theorem. For ε ≥ 0, we say a persistence module M is ε-trivial if the transition morphism ϕ_M^ε : M → M(ε) is the zero morphism. Recall that in Section 3.2, for a barcode D and ε ≥ 0 we defined

D^ε = {⟨b, d⟩ ∈ D | b + ε < d} = {I ∈ D | [t, t + ε] ⊆ I for some t ∈ R}.

Theorem 6.1 (Induced Matching Theorem). Let f : M → N be a morphism of pointwise finite dimensional persistence modules and suppose X_f⟨b, d⟩ = ⟨b′, d′⟩. Then

(i) b′ ≤ b < d′ ≤ d.

(ii) If coker f is ε-trivial, then B_N^ε ⊆ im X_f and b′ ≤ b ≤ b′ + ε.

(iii) Dually, if ker f is ε-trivial, then B_M^ε ⊆ coim X_f and d − ε ≤ d′ ≤ d.

Figure 9: The relationship between a pair of intervals matched by X_f, as described by the induced matching theorem.

Proof of Theorem 6.1. Theorem 6.1 (i) is exactly Proposition 5.3. To prove Theorem 6.1 (ii), let N^ε be the submodule of N given by N^ε_t = im ϕ_N(t − ε, t) for all t ∈ R. Note that

B_{N^ε} = {⟨c + ε, e⟩ | ⟨c, e⟩ ∈ B_N^ε}.

Put informally, we obtain the barcode B_{N^ε} from B_N by first removing the intervals of length less than ε from B_N, and then moving the left endpoint of each remaining interval to the right by ε. Our proof strategy will be to sandwich B_{im f} between B_{N^ε} and B_N via canonical injections. Let j : N^ε ↪ N denote the inclusion. We see from the definition of X_j that im X_j = B_N^ε, and if ⟨c, e⟩ ∈ B_N^ε, then X_j⟨c + ε, e⟩ = ⟨c, e⟩.

Figure 10: The relationship between a pair of intervals matched by X_j : B_{N^ε} ↪ B_N.

Now assume that coker f is ε-trivial. Then N^ε ⊆ im f. (To see this, note that for all t ∈ R, ϕ_{coker f}(t − ε, t) = 0 if and only if im ϕ_N(t − ε, t) ⊆ im f_t.) Thus, writing j′ : N^ε ↪ im f and j_f : im f ↪ N for the inclusions, the triangle of inclusions j = j_f ◦ j′ commutes. By Proposition 5.7, we have X_j = X_{j_f} ◦ X_{j′}. Moreover, by definition we have that X_f = X_{j_f} ◦ X_{q_f}, so the corresponding diagram of matchings, formed by X_{j′} : B_{N^ε} ↛ B_{im f}, X_{j_f} : B_{im f} ↛ B_N, X_{q_f} : B_M ↛ B_{im f}, X_j : B_{N^ε} ↛ B_N, and X_f : B_M ↛ B_N, commutes. By the commutativity of the left triangle, B_N^ε = im X_j ⊆ im X_{j_f}. By our definition of induced matchings, we have that im X_{j_f} = im X_f, so B_N^ε ⊆ im X_f as claimed.

Figure 11: To show that b ≤ b′ + ε, we sandwich b between b′ and b′ + ε, using that X_j = X_{j_f} ◦ X_{j′}.

To finish the proof of Theorem 6.1 (ii), we need to show that for b, b′ as in the statement of the result, b ≤ b′ + ε. This follows from the commutativity of the left triangle as well. To see this, recall that X_{j_f}⟨b, d′⟩ = ⟨b′, d′⟩. If ⟨b′, d′⟩ ∈ B_N^ε, then we also have that X_j⟨b′ + ε, d′⟩ = ⟨b′, d′⟩, so X_{j′}⟨b′ + ε, d′⟩ = ⟨b, d′⟩ by commutativity of the triangle; see Fig. 11. By Theorem 4.2 (i), we have that b ≤ b′ + ε. If, on the other hand, ⟨b′, d′⟩ ∉ B_N^ε, then d′ ≤ b′ + ε. Since b < d′, we again have that b ≤ b′ + ε.

It remains to prove Theorem 6.1 (iii). Our proof of Theorem 6.1 (ii) dualizes readily. Alternatively, Theorem 6.1 (iii) follows directly from Theorem 6.1 (ii) via a straightforward duality argument. This latter argument requires the use of Proposition 5.5 and Lemma 6.2 below, whose proof we omit. □

Lemma 6.2. For f : M → N a morphism of p.f.d. persistence modules, ker f is ε-trivial if and only if coker f* is ε-trivial.

6.1 Algebraic Stability via Induced Matchings

We now establish an explicit form of the algebraic stability theorem for pointwise finite dimensional persistence modules as a corollary of the induced matching theorem.

Lemma 6.3. If f : M → N(δ) is a δ-interleaving morphism, then ker f and coker f are both 2δ-trivial.

Proof. This follows from the definition of a δ-interleaving. □

For any persistence module N and δ ∈ [0, ∞), we have a bijection r_δ : B_{N(δ)} → B_N such that r_δ⟨b, d⟩ = ⟨b + δ, d + δ⟩ for each interval ⟨b, d⟩ ∈ B_{N(δ)}.

Theorem 6.4 (Explicit Formulation of Algebraic Stability). If M, N are pointwise finite dimensional persistence modules and f : M → N(δ) is a δ-interleaving morphism, then r_δ ◦ X_f : B_M ↛ B_N is a δ-matching. In particular,

dB(B_M, B_N) ≤ dI(M, N).

Proof. By Lemma 6.3, ker f and coker f are both 2δ-trivial. The Induced Matching Theorem 6.1 thus tells us that B_M^{2δ} ⊆ coim X_f, B_{N(δ)}^{2δ} ⊆ im X_f, and each matched pair of bars X_f⟨b, d⟩ = ⟨b′, d′⟩ satisfies b′ ≤ b ≤ b′ + 2δ and d − 2δ ≤ d′ ≤ d. It now follows from the definition of r_δ that r_δ ◦ X_f is a δ-matching. □

Figure 12: A pair of intervals in the δ-matching r_δ ◦ X_f : B_M ↛ B_N of Theorem 6.4, together with the corresponding intervals in B_{im f} and B_{N(δ)}.

Remark 6.5. For M and N δ-interleaved persistence modules, our proof of the algebraic stability theorem gives a construction of a δ-matching between B_M and B_N depending only on a single δ-interleaving morphism f : M → N(δ); to construct the δ-matching, one does not need an explicit choice of morphism g : N → M(δ) such that g(δ) ◦ f = ϕ_M^{2δ}, f(δ) ◦ g = ϕ_N^{2δ}. This is in contrast to the proof strategy via interpolation used in [7] and [10]: The construction of a matching given in those proofs depends in an essential way on an explicit choice of both f and g.

6.2 Single-Morphism Characterization of the Interleaving Relation

As another application of Theorem 6.1, we have the following:

Corollary 6.6. Two p.f.d. persistence modules M and N are δ-interleaved if and only if there exists a morphism f : M → N(δ) with ker f and coker f both 2δ-trivial.

Proof. The forward direction is Lemma 6.3. The converse direction follows from the induced matching theorem and the converse algebraic stability theorem (Theorem 8.2). □

Remark 6.7. Our definitions of δ-interleavings and δ-trivial persistence modules extend readily to multidimensional persistence modules; see [23]. Given this, it is natural to ask whether Corollary 6.6 generalizes to the multidimensional setting. It is easy to check that the forward direction of the theorem (i.e., Lemma 6.3) does generalize to multidimensional persistence modules, with the same easy proof. Conversely, by considering the canonical factorization of a morphism through its image, it is not hard to check that the converse direction of Corollary 6.6 holds for multidimensional persistence modules, up to a factor of 2. Perhaps surprisingly, the next example shows that this factor of 2 is tight. Thus, while Corollary 6.6 does adapt to the multidimensional setting, the general result is not as strong as the result in one dimension.

Example 6.8. In this example, we freely use the notation and terminology for multidimensional persistence modules established in [23]. First, for δ ≥ 0 we define a 2-D persistence module P to be δ-trivial if ϕ_P(δ) : P → P(δ), as defined in [23], is the trivial morphism.
We will describe 2-D persistence modules M and N and a morphism f : M → N(1) with 2-trivial kernel and cokernel, such that M, N are δ-interleaved if and only if δ ≥ 2. We specify M and N via presentations. We take M  h(a, (3, 2)), (b, (2, 3)) | x2 y3 a − x3 y2 bi. That is, M has a generator a at grade (3, 2), a generator b at (2, 3), and a relation x2 y3 a − x3 y2 b at grade (5, 5). We take N  h(c, (2, 1)), (d, (1, 2)) | yc − xdi. Thus N(1)  h(c0 , (1, 0)), (d0 , (0, 1)) | yc0 − xd0 i. We define f : M → N(1) by taking f (a) = x2 y2 c0 , f (b) = x2 y2 d0 . It is easy to check that f has 2-trivial kernel and cokernel, and that M and N are δ-interleaved if and only if δ ≥ 2. 7 Algebraic Stability for q-Tame Persistence Modules In the remaining two sections, we show that the isometry theorem for q-tame persistence modules, as presented in [10, Theorem 4.11], follows readily from the isometry theorem for p.f.d. persistence modules. First, we derive the algebraic stability theorem for q-tame persistence modules, as stated in [10, Theorem 4.20], from the algebraic stability theorem for p.f.d. persistence modules, Theorem 6.4. 23 The last section treats the converse algebraic stability theorem in both the p.f.d. and q-tame settings. Neither [10, Theorem 4.20] nor our Theorem 6.4 immediately implies the other. On the one hand, [10, Theorem 4.20] applies to q-tame persistence modules, whereas Theorem 6.4 only applies to p.f.d. persistence modules. On the other hand, Theorem 6.4 is slightly sharper than the restriction of [10, Theorem 4.20] to p.f.d. persistence modules, because [10, Theorem 4.20] concerns undecorated barcodes (see below), and because as noted in Remark 3.4, the definition of δ-matching used in the present paper is stronger than the one used in [10]. 7.1 Undecorated Barcodes of q-Tame Persistence Modules Recall that we say a persistence module M is q-tame if ϕ M (s, t) has finite rank whenever s < t. A q-tame persistence module does not necessarily decompose into interval summands [10, Section 4.5], and thus our definition of barcode does not extend to q-tame persistence modules. However, it is observed in [12] that if we are willing to use a slightly coarser notion of barcode, then we can define the barcode of a q-tame persistence module M by way of the interval summand decomposition of an approximation of M. We recall the definition here. See [12] for a thorough study of the definition, including an interpretation in terms of quotient categories. Undecorated Barcodes Adapting a definition of [10], we define an undecorated barcode to be a barcode for which every interval is open. In what follows, we will sometimes refer to barcodes as decorated barcodes, to emphasize the distinction between general barcodes and the special case of undecorated barcodes. There is an obvious map U from the set of decorated barcodes to the set of undecorated barcodes, which removes all intervals of the form [t, t] from a barcode and forgets the decorations of the endpoints of all other intervals in the barcode. For example, we have  U({[0, 1), [0, 2], (−∞, ∞), [0, 0]}) = (0, 1), (0, 2), (−∞, ∞) . If M is an interval-decomposable persistence module, we define the undecorated barcode U M of M to be U(B M ). Undecorated Barcodes of q-Tame persistence modules As in the proof of Theorem 6.1, for M a persistence module and  ≥ 0, let M  denote the submodule of M defined by Mt = im ϕ M (t − , t) for all t ∈ R. For M any persistence module, define rad M, the radical of M, as [ rad M = M . 
>0 Note that rad M is a submodule of M. It is easy to see that dI (M, rad M) = 0; in this sense, rad M is an infinitesimally close approximation of M. Theorem 7.1 (Structure of Radical q-Tame Persistence Modules [12, Corollary 3.6]). For M any q-tame persistence module, rad M is interval decomposable. 24 See also [12, Theorem 3.2] for a simultaneous generalization of both Theorem 7.1 and the structure theorem for p.f.d. persistence modules [16]. We state the following easy result without proof: Proposition 7.2. For any interval-decomposable persistence module M, Urad M = U M . For M q-tame, we define U M , the undecorated barcode of M, by taking U M = Urad M . In view of Proposition 7.2, for M a persistence module that is both q-tame and interval-decomposable, this definition coincides with the definition of U M given above. Remark 7.3. [10] defines the undecorated barcode of a q-tame persistence module by first defining the decorated barcode of a q-tame persistence module M using the rectangle measure formalism introduced in that paper, and then taking the undecorated barcode of M to be the image of the decorated barcode of M under U; it follows from the results of [10] that the two definitions of the undecorated barcode of a q-tame persistence module are equivalent. 7.2 Algebraic Stability for q-Tame Persistence Modules Using the algebraic stability theorem for p.f.d. persistence modules, we now give our alternative proof of the algebraic stability theorem for q-tame persistence modules. Theorem 7.4 (Algebraic Stability for q-Tame Modules [10, Theorem 4.20]). If M and N are q-tame and are (δ + )-interleaved for all  > 0, then there is a δ-matching between U M and UN . In particular, dB (U M , UN ) ≤ dI (M, N). To prepare for the proof of the theorem, we make some easy technical observations, leaving the proofs to the reader: Lemma 7.5. Let M be a q-tame persistence module. (i) M and M  are -interleaved. (ii) For any  > 0, M  is p.f.d. (iii) There is a canonical -matching Brad M → B M . | Lemma 7.5 (i,ii) implies in particular that a q-tame persistence module can be approximated arbitrarily well by a p.f.d. persistence module. Proof of Theorem 7.4. M and N are (δ+)-interleaved for all  > 0, so by Lemma 7.5 (i,ii) and the triangle inequality for interleavings (Remark 3.1), M  and N  are p.f.d. and are (δ+3)-interleaved. By Theorem 6.4, there is a (δ + 3)-matching between B M and BN  . Thus by Lemma 7.5 (iii) and the triangle inequality for δ-matchings (Remark 3.3), there is a (δ + 5)-matching between Brad M and Brad N . This induces a (δ + 5)-matching between U M = Urad M and UN = Urad N . Since this is true for all  > 0, we conclude that dB (U M , UN ) ≤ dI (M, N). Finally, the existence of a δ-matching between U M and UN follows by Theorem 7.6 below.  25 Theorem 7.6 ([10, Theorem 4.10]). If there is a (δ + )-matching between U M and UN for all  > 0, then there is a δ-matching between U M and UN . We note that the proof of Theorem 7.6 is self-contained; in invoking this result and its proof, we are not implicitly invoking any other results of [10]. 8 Converse Algebraic Stability Since the algebraic stability theorem and its converse go hand in hand, for completeness we include a proof of the converse here. We first present the result under the assumption that the persistence modules are interval-decomposbale, following [23]. Together with Theorem 6.4, this establishes the isometry theorem for p.f.d. persistence modules, Theorem 3.5. 
Following [10], we then establish a version of converse algebraic stability for q-tame persistence modules, as an easy corollary of the result in the interval-decomposable setting. Together with Theorem 7.4, this gives the isometry theorem for q-tame persistence modules. 8.1 Converse Algebraic Stability for Interval-Decomposable Persistence Modules Lemma 8.1. Let δ ≥ 0. (i) If hb, di, hb0 , d0 i are intervals such that hb, di ⊆ hb0 − δ, d0 + δi, hb0 , d0 i ⊆ hb − δ, d + δi, then Chb, di and Chb0 , d0 i are δ-interleaved. (ii) If b +  ≥ d, then Chb, di and the trivial module are δ-interleaved.  Proof. We leave the straightforward details to the reader. Theorem 8.2 ([23, Theorem 3.3]). If M and N are interval-decomposable persistence modules and there exists a δ-matching between B M and BN , then M and N are δ-interleaved. In particular, dB (B M , BN ) ≥ dI (M, N). Proof. We may assume without loss of generality that M M M= C(I), N= C(I). I∈B M I∈BN Let σ : B M → BN be a δ-matching, and let M M• = C(I), | N• = I∈coim(σ) M◦ = M M C(I), I∈im(σ) N◦ = C(I), M I∈coker(σ) I∈ker(σ) 26 C(I), where ker(σ) and coker(σ) denote the complements of coim(σ) and im(σ) in B M and BN , respectively. Clearly, M = M• ⊕ M◦ and N = N• ⊕ N◦ . By Lemma 8.1 (i), for each (I, J) ∈ σ we may choose a pair of δ-interleaving morphisms fI : C(I) → C(J)(δ), g J : C(J) → C(I)(δ). These morphisms induce a pair of δ-interleaving morphisms f• : M• → N• (δ), g• : N• → M• (δ). Define a morphism f : M → N by taking the restriction of f to M• to be equal to f• and taking the restriction of f to M◦ to be the trivial morphism. Symmetrically, define a morphism g : N → M by taking the restriction of g to N• to be equal to g• and taking the restriction of g to N◦ to be the trivial morphism. By Lemma 8.1 (ii), 2δ ϕ2δ M (M◦ ) = ϕN (N◦ ) = 0. From this fact and the fact that f• and g• are δ-interleaving morphisms, it follows that f and g are δ-interleaving morphisms as well.  8.2 Converse Algebraic Stability for q-Tame Persistence Modules The converse algebraic stability theorem for q-tame persistence modules, as presented in [10], follows easily from Theorem 8.2: Corollary 8.3 ([10, Theorem 4.1100 ]). For M and N q-tame persistence modules, dB (U M , UN ) ≥ dI (M, N). Proof. Let δ = dB (U M , UN ). By definition, dB (Urad M , Urad N ) = δ. Clearly, then, we also have that dB (Brad M , Brad N ) = δ, so by Theorem 8.2, dI (rad M, rad N) ≤ δ. dI (M, rad M) = 0 = dI (N, rad N), so by the triangle inequality, we conclude that dI (M, N) ≤ δ.  Together, Theorem 7.4 and Corollary 8.3 give the isometry theorem for q-tame persistence modules: Theorem 8.4 ([10, Theorem 4.11]). For M and N q-tame persistence modules, dB (U M , UN ) = dI (M, N). 27 Acknowledgments We thank Vin de Silva, Peter Landweber, Daniel Müllner, and the anonymous referees for valuable feedback on this work; this paper has benefitted from their input in several ways. We also thank Guillaume Troianowski and Daniel for providing us with a copy of their manuscript [24] and for helpful correspondence regarding their related work. In addition, we thank William CrawleyBoevey and Vin for enlightening discussions about q-tame persistence modules. UB was partially supported by the Toposys project FP7-ICT-318493-STREP. ML thanks the Institute for Advanced Study and the Institute for Mathematics and its Applications for their support and hospitality during the course of this project. ML was partially supported by NSF grant DMS-1128155. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. References [1] G. Azumaya. Corrections and supplementaries to my paper concerning Krull–Remak– Schmidt’s theorem. Nagoya Mathematical Journal, 1:117–124, 1950. [2] U. Bauer and M. Lesnick. Persistence diagrams are really diagrams. In preparation. [3] A. Blumberg and M. Lesnick. Universality of the homotopy interleaving distance. In preparation. [4] P. Bubenik and J. A. Scott. Categorification of persistent homology. Discrete & Computational Geometry, 51(3):600–627, 2014. [5] G. Carlsson. Topological pattern recognition for point cloud data. Acta Numerica, 23: 289–368, 2014. [6] G. Carlsson, V. de Silva, and D. Morozov. Zigzag persistent homology and real-valued functions. In Proceedings of the 25th Annual Symposium on Computational Geometry, SCG ’09, pages 247–256. ACM, 2009. [7] F. Chazal, D. C. Steiner, M. Glisse, L. J. Guibas, and S. Y. Oudot. Proximity of persistence modules and their diagrams. In Proceedings of the 25th Annual Symposium on Computational Geometry, SCG ’09, pages 237–246. ACM, 2009. [8] F. Chazal, D. C. Steiner, L. J. Guibas, F. Mémoli, and S. Y. Oudot. Gromov–Hausdorff stable signatures for shapes using persistence. In Proceedings of the Symposium on Geometry Processing, SGP ’09, pages 1393–1403. Eurographics Association, 2009. [9] F. Chazal, L. Guibas, S. Oudot, and P. Skraba. Scalar field analysis over point cloud data. Discrete & Computational Geometry, 46(4):743–775, 2011. [10] F. Chazal, V. de Silva, M. Glisse, and S. Oudot. The structure and stability of persistence modules. Preprint, 2012. arXiv:1207.3674. 28 [11] F. Chazal, L. J. Guibas, S. Y. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds. Journal of the ACM, 60(6), 2013. Article No. 41. [12] F. Chazal, W. Crawley-Boevey, and V. de Silva. The observable structure of persistence modules. Preprint, 2014. arXiv:1405.5644. [13] F. Chazal, V. Silva, and S. Oudot. Persistence stability for geometric complexes. Geometriae Dedicata, 173(1):193–214, 2014. [14] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete & Computational Geometry, 37(1):103–120, 2007. [15] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Extending persistence using Poincaré and Lefschetz duality. Foundations of Computational Mathematics, 9(1):79–103, 2008. [16] W. Crawley-Boevey. Decomposition of pointwise finite-dimensional persistence modules. Preprint, 2012. arXiv:1210.0819. [17] V. de Silva and V. Nanda. Geometry in the space of persistence modules. In Proceedings of the 29th Annual Symposium on Computational Geometry, SoCG ’13, pages 397–404. ACM, 2013. [18] V. de Silva, D. Morozov, and M. Vejdemo-Johansson. Dualities in persistent (co)homology. Inverse Problems, 27(12):124003, 2011. [19] H. Edelsbrunner and J. Harer. Computational Topology. An Introduction. American Mathematical Society, 2010. [20] H. Edelsbrunner, G. Jabłoński, and M. Mrozek. The persistent homology of a self-map. Foundations of Computational Mathematics, 2014. Available online. [21] B. T. Fasy, F. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, and A. Singh. Confidence sets for persistence diagrams. The Annals of Statistics, 42(6):2301–2339, 2014. [22] M. Lesnick. Multidimensional Interleavings and Applications to Topological Inference. PhD thesis, Stanford University, 2012. [23] M. Lesnick. 
The theory of the interleaving distance on multidimensional persistence modules. Foundations of Computational Mathematics, 2015. To appear. arXiv:1106. 5305. [24] D. Müllner and G. Troianowski. Proximity of persistence modules. Unfinished manuscript, 2011. [25] P. Skraba and M. Vejdemo-Johansson. Persistence modules: Algebra and algorithms. Preprint, 2013. arXiv:1302.2015. [26] A. Zomorodian and G. Carlsson. Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274, 2005. 29
Communication-Avoiding Parallel Algorithms for Solving Triangular Systems of Linear Equations arXiv:1612.01855v2 [cs.DC] 22 Mar 2017 Tobias Wicky Department of Computer Science ETH Zurich Zurich, Switzerland [email protected] Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL, USA [email protected] Abstract—We present a new parallel algorithm for solving triangular systems with multiple right hand sides (TRSM). TRSM is used extensively in numerical linear algebra computations, both to solve triangular linear systems of equations as well as to compute factorizations with triangular matrices, such as Cholesky, LU, and QR. Our algorithm achieves better theoretical scalability than known alternatives, while maintaining numerical stability, via selective use of triangular matrix inversion. We leverage the fact that triangular inversion and matrix multiplication are more parallelizable than the standard TRSM algorithm. By only inverting triangular blocks along the diagonal of the initial matrix, we generalize the usual way of TRSM computation and the full matrix inversion approach. This flexibility leads to an efficient algorithm for any ratio of the number of right hand sides to the triangular matrix dimension. We provide a detailed communication cost analysis for our algorithm as well as for the recursive triangular matrix inversion. This cost analysis makes it possible to determine optimal block sizes and processor grids a priori. Relative to the best known algorithms for TRSM, our approach can require asymptotically fewer messages, while performing optimal amounts of computation and communication in terms of words sent. Keywords-TRSM, communication cost, 3D algorithms I. I NTRODUCTION Triangular solve for multiple right hand sides (TRSM) is a crucial subroutine in many numerical linear algebra algorithms, such as LU and Cholesky factorizations [1], [2]. Moreover, it is used to solve linear systems of equations once the equation matrix is decomposed using any factorization involving a triangular matrix factor. We consider TRSM for dense linear equations in the matrix form, L · X = B, where L ∈ Rn×n is a lower-triangular matrix while B ∈ Rn×k , X ∈ Rn×k are dense matrices. We study the communication cost complexity of two variants of the TRSM algorithm for parallel execution on p processors with unbounded memory. First, we present an adaptation of a known recursive scheme for TRSM [3] along with a complete communication cost analysis. Then, we demonstrate a new algorithm that uses selective triangular inversion Torsten Hoefler Department of Computer Science ETH Zurich Zurich, Switzerland [email protected] to reduce the synchronization cost over known schemes, while preserving optimal communication and computation costs. Careful choice of algorithmic parameters, allows us to achieve better asymptotic complexity for a large (and most important) range of input configurations. Our TRSM algorithm leverages matrix multiplication and triangular matrix inversion as primitives. We provide a communication-efficient parallel matrix multiplication algorithm that starts from a 2D cyclic distribution, a modest enhancement to existing approaches. Triangular matrix inversion provides the key ingredient to the lower synchronization cost in our TRSM algorithm. Unlike general matrix inversion, triangular inversion is numerically stable [4] and can be done with relatively few synchronizations. 
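For reference, the TRSM operation being studied, solving L · X = B for X with L lower triangular and B holding k right-hand sides, has a short sequential realization by blocked forward substitution. The sketch below is our own and only fixes the semantics that the parallel algorithms analyzed later must reproduce; it is not one of the paper's algorithms.

```python
import numpy as np

def trsm(L, B, block=64):
    """Solve L @ X = B with L lower triangular (n x n) and B of shape (n, k),
    by sequential blocked forward substitution (reference implementation only)."""
    n, k = B.shape
    X = B.astype(float).copy()
    for s in range(0, n, block):
        e = min(s + block, n)
        # solve against the diagonal block
        X[s:e] = np.linalg.solve(L[s:e, s:e], X[s:e])
        # update the trailing right-hand sides with the off-diagonal panel
        if e < n:
            X[e:] -= L[e:, s:e] @ X[s:e]
    return X

# sanity check against a dense solve
rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((200, 200))) + 200 * np.eye(200)
B = rng.standard_normal((200, 8))
assert np.allclose(trsm(L, B), np.linalg.solve(L, B))
```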
We present a known parallel approach for triangular inversion and provide the first communication cost analysis thereof. We invert triangular diagonal blocks of the L matrix at the start of our TRSM algorithm, increasing the computational granularity of the main part of the solver. Inverting blocks, rather than the whole matrix, also allows our algorithm to be work-efficient in cases when the number of right-hand sides is smaller than the matrix dimension. We formulate the algorithm in an iterative, rather than a recursive manner, avoiding overheads incurred by the known parallel recursive TRSM approach. This innovation reduces communication cost by a factor of Θ(log(p)) relative to the recursive algorithm when the number of right-hand sides is relatively small. At the same time, across a large range of input parameters, we achieve a synchronization cost  improvement 1/6 2/3  p . over the recursive approach by a factor of Θ nk II. P RELIMINARIES A. Execution Time Model The model we use to calculate the parallel execution time of an algorithm along its critical path is the α − β − γ model. It describes the total execution time of the algorithm T in terms of the floating point operations (flop) count F , the bandwidth W (number of words of data sent and received) and the latency S (number of messages sent and received) along the critical path [5] in the following fashion: T = α · S + β · W + γ · F. For all the terms we only show the leading order cost in terms of n, k and p. We assume that every processor can send and receive one message at a time in point to point communication. We do not place constraints on the local memory size. Scatter and gather also have the same cost [7], Tscatter (n, p) = α · log p + β · n1p , Tgather (n, p) = α · log p + β · n1p . Reduce-scatter uses the same communication-path and has same communication cost but we have to add the additional overhead of local computation, Treduce−scatter (n, p) = α · log p + β · n1p + γ · n1p . The cost of an all-to-all among p processors is Talltoall (n, p) = α · log(p) + β · B. Notation For brevity, we will sometimes omit specification of the logarithm base, using log to denote log2 . We will make frequent use of the unit step function ( 1x = 1:x>1 n log p . 2 The combination of the algorithms leads to the following costs for reduction, allreduction, and broadcast: Treduction (n, p) = α · 2 log p + β · 2n1p + γ · n1p , Tallreduction (n, p) = α · 2 log p + β · 2n1p + γ · n1p , Tbcast (n, p) = α · 2 log p + β · 2n1p . 0 : x ≤ 1. We use Π(x1 , . . . , xn )to denote single processors in an ndimensional processor grids. To refer to successive elements or blocks, we will use the colon notation where i : j = [i, i + 1, . . . , j − 1] i < j. To refer to strided sets of elements or blocks, we will write i : k : j = [i, i + k, i + 2k, . . . , i + αk] max α s.t. αk < j. The colon notation can be applied to a list and should there be considered element-wise. To use subsets of processors, we use the ◦ notation in the way that Π(x, ◦, z) = Π(x, 1 : py , z) denotes a (1-dimensional) grid of processors in the y-dimension. Global matrices are denoted by capital letters whereas locally owned parts have square brackets: If L is distributed on Π(◦, ◦, 1), every processor Π(x, y, 1) owns L[x, y]. Matrix elements are accessed with brackets. C. Previous Work We now cover necessary existing building blocks (e.g. collective communication routines) and previous work. 
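The α − β − γ model and the collective costs quoted above translate directly into a small cost calculator. The sketch below encodes the stated formulas, with the unit step 1_x equal to 1 for x > 1 and 0 otherwise; the function names are ours.

```python
from math import log2

def step(x):
    """Unit step 1_x: 1 if x > 1, else 0."""
    return 1 if x > 1 else 0

def model_time(alpha, beta, gamma, S, W, F):
    """Execution time in the alpha-beta-gamma model: T = alpha*S + beta*W + gamma*F."""
    return alpha * S + beta * W + gamma * F

# Each collective returns its (S, W, F) contribution for n words on p processors,
# following the formulas quoted in the text.
def scatter(n, p):        return (log2(p), n * step(p), 0)
def gather(n, p):         return (log2(p), n * step(p), 0)
def allgather(n, p):      return (log2(p), n * step(p), 0)
def reduce_scatter(n, p): return (log2(p), n * step(p), n * step(p))
def all_to_all(n, p):     return (log2(p), n * log2(p) / 2, 0)
def reduction(n, p):      return (2 * log2(p), 2 * n * step(p), n * step(p))
def allreduction(n, p):   return (2 * log2(p), 2 * n * step(p), n * step(p))
def bcast(n, p):          return (2 * log2(p), 2 * n * step(p), 0)

# Example: modeled time of broadcasting 10^6 words among 64 processors.
S, W, F = bcast(1e6, 64)
print(model_time(alpha=1e-6, beta=1e-9, gamma=1e-10, S=S, W=W, F=F))
```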
In particular, we overview related results on communication cost of matrix multiplication and triangular solves. 1) Collective Communication: In [6], Chan et al. present a way to perform reduction, allreduction and broadcast via allgather, scatter, gather and reduce-scatter. The latter set of collectives can be done using recursive doubling (butterfly algorithms) [6], [7] for a power of two number of processors. If we have a non-power of two number of processors, the algorithm described in [8] can be used. For simplicity, we do not consider reduction and broadcast algorithms that can achieve a factor of two less in cost in specific regimes of α and β [9]. If we use butterfly methods, the cost of an allgather of n words among p processors is Tallgather (n, p) = α · log p + β · n1p . 2) Matrix Multiplication: Communication-efficient parallel algorithms for matrix multiplication have been analyzed extensively [10]–[15]. In [16], Demmel et al. present algorithms that are asymptotically optimal for matrix multiplication of arbitrary (potentially non square) matrices. If we neglect the memory terms, their work shows that matrix multiplication can be done with the following costs. Bandwidth: When multiplying a matrix A that is of dimension n × n with a matrix B of dimensions n × k with p processors, we obtain an asymptotic bandwidth cost of    nk   O √   p     2 2/3 ! n k WMM (n, k, p) = O    p     2 O n n>k· k/p ≤n ≤ k · √ √ p p n < k/p. We refer to the first of the cases of WMM , as the case of two large dimensions, here the matrix A is much larger than the right hand side B, when the best way of performing a matrix multiplication is to use a two dimensional layout for the processor grid. The second case, three large dimensions, has matrices A and B of approximately the same size. A three dimensional grid layout is optimal here. And the third case, one large dimension, is the case where the right hand side B is larger than the triangular matrix A, the best way to do a matrix multiplication is to use a one dimensional layout for the processor grid. Latency: Assuming unlimited memory, matrix multiplication as presented in [16] can be done with a latency of SMM (p) = O (log(p)) .  Flop Cost: Matrix multiplication takes O n2 k flops, which can be divided on p processors and therefore we have  FMM (n, k, p) = O n2 k p  . Previous Analysis: For the case where k = n the bandwidth analysis of a general matrix multiplication goes back to what is presented in [12]. Aggarwal et al. present a cost analysis in the LPRAM model. In that work, the authors show that the same cost can also be achieved for the transitive closure problem that can be extended to the problem of doing an LU decomposition. The fact that these bandwidth costs can be obtained for the LU decomposition was later demonstrated by Tiskin [17]. He used the bulk synchronous parallel (BSP) execution time model. Since the dependencies in LU are more complicated than they are for TRSM, we also expect TRSM to be able to have the same asymptotic bandwidth and flop costs as a general matrix multiplication. 3) Triangular Matrix Solve for Single Right Hand Sides: Algorithms for the problem of triangular solve for a single right hand side (when X and B are vectors) have been wellstudied. A communication-efficient parallel algorithm was given by Heath and Romine [18]. This parallel algorithm was later shown to be an optimal schedule in latency and bandwidth costs via lower bounds [5]. 
However, when X and B are matrices (k > 1), it is possible to achieve significantly lower communication costs relative to the amount of computation required. The application of selective inversion has been used to accelerate repeated triangular solves that arise in preconditioned sparse iterative methods [19]. 4) Recursive Triangular Matrix Solve for Multiple Right Hand Sides: A recursive approach of solving the TRSMproblem was presented in the work of Elmroth et  al. [3]. The  initial problem, L · X = B can be split into L· X1 X2 =   B1 B2 , which yields two independent subproblems: L · X1 = B1 , L · X2 = B2 . hence the subproblems are independent and can be solved in parallel. The other, dependent splitting proposed divides the triangular matrix, yielding two subtasks that have to be solved one at a time       L11 L12 L22 · X1 B1 = , X2 B2 where we obtain the dependent subproblems: L11 · X1 = B1 and L22 · X2 = B2 − L12 · X1 . These problems are dependent as we need the solution X1 to solve the second problem. Parallel TRSM algorithms with 3D processor grids can reduce the communication cost in an analogous fashion to matrix multiplication. Irony and Toledo [20] presented the first parallelization of the recursive TRSM algorithm with a 3D processor grid. They demonstrated that the communication volume of their parallelization is O(nkp1/3 + n2 p1/3 ). Thus each processor communicates O((nk + n2 )/p2/3 ) elements, which is asymptotically equal to WMM (n, k, p) when k = Θ(n). However, they did not provide a bound on the latency cost nor on the communication bandwidth cost along the critical path, so it is unclear to what extent the communication volume is load-balanced. Lipshitz [21] provides an analysis of the recursive TRSM algorithm in the same communication cost model as used in this paper. For the case of k = n, his analysis demonstrates TTRSM−L (n, n, p) = O(p2/3 · α + n2 /p2/3 · β + n3 /p · γ). For some choices of algorithmic parameters, the analysis in [21] should lead to the bandwidth cost, WTRSM−L (n, k, p) = O  2   n k 2/3 p , which is as good as matrix multiplication, WMM (n, k, p). However, it is unclear how to choose the parameters of the formulation in [21] to minimize latency cost for general n, k. Prior to presenting our main contribution (an inversionbased TRSM algorithm with a lower latency cost), we provide a simpler form of the recursive TRSM algorithm and its asymptotic cost complexity. Inversion has previously been used in TRSM implementations [22], but our study is the first to consider communication-optimality. We start by presenting a subroutine for 3D matrix multiplication which operates from a starting 2D distribution, simplifying the subsequent presentation of TRSM algorithms. III. M ATRIX M ULTIPLICATION B = MM(L, X, Π2D , n, k, p, p1 , p2 ) Require: √ √ The processor grid Π2D has dimensions p × p L is an n × n matrix, distributed on Π2D in a cyclic layout, so Π2D (x, y) owns L[x, y] of size √np × √np such that √ √ L[x, y](i, j) = L(i p + x, j p + y). 
X is a dense n × k matrix is distributed cyclically so that Π2D (x, y) owns X[x, y] of size √np × √kp √ p2 × p1 × p2 processor grid Π4D , such that Π4D (x1 , x2 , y1 , y2 ) = Π2D (x1 +p1 x2 , y1 +p2 y2 ) owns blocks L[x1 , x2 , y1 , y2 ] and X[x1 , x2 , y1 , y2 ] 2: L0 [x1 , y1 ] = Allgather (L[x1 , ◦, y1 , ◦], Π4D (x1 , ◦, y1 , ◦)) 3: X 0 [x1 , y1 , x2 , y2 ] = Transpose(X[x1 , x2 , y1 , y2 ], Π4D (x1 , x2 , y1 , y2 ), x2 , y1 ) 4: X 00 [y1 , x1 , x2 , y2 ] = Transpose(X 0 [x1 , y1 , x2 , y2 ], Π4D (x1 , x2 , y1 , y2 ), x1 , y1 ) 5: X 000 [y1 , x2 , y2 ] = Allgather(X 00 [y1 , ◦, x2 , y2 ], Π4D (◦, x2 , y1 , y2 )) 6: Π4D (x1 , x2 , y1 , y2 ) : B 00 [x1 , y1 , x2 , y2 ] = L0 [x1 , y1 ] · X 000 [y1 , x2 , y2 ] 7: B 0 [x1 , y1 , x2 , y2 ] = Scatter-reduce(B 00 [x1 , ◦, x2 , y2 ], Π4D (x1 , x2 , ◦, y2 )) 8: B[x1 , x2 , y1 , y2 ] = Transpose(B 0 [x1 , y1 , x2 , y2 ], Π4D (x1 , x2 , y1 , y2 ), x2 , y1 ) Ensure: B = LX is distributed the same way as X 1: Define a p1 × √ We present an algorithm for 3D matrix multiplication [10]– [15] that works efficiently given input matrices distributed cyclically on a 2D processor grid. The algorithm is wellsuited for the purposes of analyzing TRSM algorithms. We √ √ define the algorithm using a p1 × p2 × p1 × p2 processor √ √ grid, where p = p1 p2 in order to provide well-defined √ √ transitions from a distribution on a p × p processor grid to faces of a 3D p1 × p1 × p2 processor grid. The latter 3D processor grid is being used implicitly in our construction. √ The algorithm assumes divisibility among p, p1 , p2 and p2 . A. Cost Analysis of the 3D Matrix Multiplication Algorithm We analyze the algorithm with account for constant factors in the key leading order costs. The communication costs incurred at line 2, 5, and 7 correspond to the cost of respective collectives, given in Section II-C1. The transpose on line 4 always occurs on a square processor grid, and so involves only one send and receive of a block. The transposes on line 3 and line 8 are transposes on 2D √ grids of p1 × p2 processors, with each processor owning nk/p elements. The cost of these transposes is no greater √ than an all-to-all among p processors, which can be done with cost O(α · log(p) + β · nk log(p)/p). We consider only the asymptotic cost of this transpose, since it will be low order so long as p1  1. Based on the above arguments, the cost for MM is given line-by-line in the following table. 2 Line 3 α · log(p2 ) + β · np2 1p2 1   O α · log(p) + β · nk log(p) p Line 5 Line 4 α · log(p1 ) + β · α + β · nk p Line 6 Line 7 Line 8 γ · np k α · log(p1 ) + (β + γ) · α · l + β · nkl p Line 2 nk p1 p2 2 nk p1 p2 To leading order, the cost of MM is given by n2 k 2nk  1 + γ · + p 2 p2 p1 p2 p 1  nk log(p) + O α · log(p) + β · . p TMM (n, k, p, p1 , p2 ) =β ·  n2 The last communication cost term (due to the rectangular grid transpose) is only of leading order when p1 ≈ log(p). A square processor grid is not a good initial/final layout for X and B in this case, and the problem would be addressed by choosing an alternative one. We disregard this issue, because we will use the algorithm only for n ≥ k. IV. R ECURSIVE TRSM We provide a recursive TRSM algorithm for solving LX = B using the techniques covered in Section II-C4. Our algorithm works recursively on a pr × pc processor grid. We will define the processor grid to be square (pr = pc ) when n ≥ k, but rectangular (pr < pc ) when n < k. So long as p < k/n, we will choose pc = (k/n)pr . 
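The cyclic layout assumed by MM above, and again by Rec-TRSM below, admits a simple owner computation: under L[x, y](i, j) = L(i√p + x, j√p + y), the owner of a global entry is found by reducing its indices modulo the grid dimensions. A small sketch (names ours):

```python
def owner_and_local(I, J, sqrt_p):
    """For the cyclic layout L[x, y](i, j) = L(i*sqrt_p + x, j*sqrt_p + y), return
    the owning processor (x, y) and the local index (i, j) of global entry (I, J)."""
    x, i = I % sqrt_p, I // sqrt_p
    y, j = J % sqrt_p, J // sqrt_p
    return (x, y), (i, j)

# Example: with p = 16 processors (a 4 x 4 grid), global entry L(10, 7) lives on
# processor (2, 3) as local entry (2, 1).
print(owner_and_local(10, 7, 4))   # ((2, 3), (2, 1))
```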
This strategy implies the largest of the matrices L and B will be partitioned initially so each cyclically-selected block is close to square. The algorithm starts by partitioning the processor grid into pc /pr square grids, if pc > pr , replicating the matrix L and computing a subset of kpr /pc columns of X on each. Then the algorithm partitions L into n/2 × n/2 blocks recursively, executing subproblems with all processors. At a given threshold, n0 , the algorithm stops recursing, gathers L onto all processors, and computes a subset of columns of X with each processor. X = Rec-TRSM(L, B, Π2D , n, k, pr , pc , n0 ) Require: L is a lower triangular n × n matrix, distributed on pr × pc in a cyclic layout, so Π2D (x, y) owns L[x, y] of size pnr × pnc such that L[x, y](i, j) = L(ipr + x, jpc + y). B is a dense n × k matrix is distributed cyclically so that Π2D (x, y) owns X[x, y] of size pnr × pkc 1: if pr = qpc and q > 1 then 2: Define pr × pr × q processor grid Π3D , such that Π3D (x, y, z) = Π2D (x1 , y + pr z) owns blocks L[x, y, z], and B[x, y, z] 3: L0 [x, y] = Allgather(L[x, y, ◦], Π3D (x, y, ◦)) 4: X[◦, ◦, z] = Rec-TRSM(L0 [◦, ◦], B[◦, ◦, z], Π2D (◦, ◦, z), n, k/q, pr , pr , n0 ) 5: else if n ≤ n0 or pr = pc = 1 then 6: L = Allgather(L[◦, ◦], Π2D (◦, ◦)) 7: B[x + ypr ] = AllToAll(B[◦, y], Π2D (◦, y)) 8: Π2D (x, y) : X[x + ypr ] = L−1 B[x + ypr ] 9: X[x, y] = AllToAll(B[x + ◦pr ], Π2D (◦, y)) 10: else   L11 0 n n 11: Partition L = so Lij ∈ R 2 × 2 L L 21 22     B1 X1 n 12: Partition B = ,X = , so Bi , Xi ∈ R 2 ×k B2 X2 13: X1 = Rec-TRSM(L11 , B1 , Π2D , n/2, k, p, pr , pc , n0 ). 0 14: B2 = B2 − MM(L21 , X1 , Π2D , n/2, k, p, p1/3 (n/k)1/3 , p1/3 (n/k)2/3 ). 15: X2 = Rec-TRSM(L22 , B20 , Π2D , n/2, k, p, pr , pc , n0 ). 16: end if Ensure: X = L−1 B is distributed on Π2D in the same way as B A. Cost Analysis of the Recursive Algorithm p √ We select pc = max(pp, min(p, pk/n)) and pr = √ p/pc = min( p, max(1, pn/k)). The cost of the allgather on line 3 is   n2 Tpart−cols (n, pr ) = O β · 2 + α · log(p) , pr since each L0 [x, y] ∈ Rn/pr ×n/pr and is lower triangular. Once we have a square processor grid, we partition L, yielding the recurrence,  n 1/3 1/3 n 2/3  TRT (n, k, p, n0 ) =TMM n/2, k, p, p1/3 ,p k k + 2TRT (n/2, k, p, n0 ). We now derive the cost of the algorithm for different relations between n, k, and p, as in the expression for TMM . One large dimension: When n < k/p, we have pr = 1 and pc = p and the first allgather will be the only communication, therefore,   n2 k . TRT1D (n, k, p) = O α · log(p) + β · n2 + γ · p √ Two large dimensions: When n > k p, we will select √ pr = pc = p and the column partitioning of B is not performed. In this case, the MM algorithm will always have p2 = 1 (it will be 2D). For sufficiently large k, this leads us to the recurrence, TRT2D (n, k, p, n0 ) =TMM2D (n/2, k, p) + 2TRT2D (n/2, k, p, n0 ),   n2 k nk + γ · . where TMM2D (n, k, p) = O α · log(p) + β · √ p p The bandwidth cost stays the same at every recursive level, while the computation cost decreases by a factor of 2. At the base case, we incur the cost TRTBC (n0 , k, p) =    n0 k log(p)  n2 k O α · log(p) + β · n20 + +γ· 0 p p √ √ We select n0 = max( p, n log(p)/ p), so n/n0 ≤ √ p/ log(p), which results in the overall cost,   √ nk log(p) n2 k TRT2D (n, k, p) = O α · p + β · +γ· . √ p p The bandwidth cost above is suboptimal by a factor of O(log(p)). The overhead is due to the recursive algorithm re-broadcasting some of the same elements of L at every recursive level. 
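Stripped of the processor grid, the recursion that Rec-TRSM parallelizes is short: solve for the top block of X, subtract the off-diagonal contribution from the remaining right-hand sides, and recurse on the bottom block. The following sequential sketch (our own naming) shows this dependent splitting; the log(p) overhead noted above arises because the parallel version communicates some of the same elements of L inside each of the nested calls.

```python
import numpy as np

def rec_trsm(L, B, n0=64):
    """Solve L @ X = B recursively: solve the top half, update the bottom
    right-hand sides with the off-diagonal block, then solve the bottom half."""
    n = L.shape[0]
    if n <= n0:
        return np.linalg.solve(L, B)          # base case: direct solve
    h = n // 2
    X1 = rec_trsm(L[:h, :h], B[:h], n0)       # L11 X1 = B1
    B2 = B[h:] - L[h:, :h] @ X1               # B2' = B2 - L21 X1
    X2 = rec_trsm(L[h:, h:], B2, n0)          # L22 X2 = B2'
    return np.vstack([X1, X2])

rng = np.random.default_rng(1)
n, k = 300, 5
L = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)
B = rng.standard_normal((n, k))
assert np.allclose(rec_trsm(L, B), np.linalg.solve(L, B))
```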
We use an iterative approach for our subsequent TRSM algorithm to avoid this redundant communication. Three large dimensions: √ When k/p < n < k/ p , the algorithm partitions the columns of B initially then recursively partitions L. In √ p particular, we select p = max( p, pk/n) and pr = c √ p p/pc = min( p, pn/k), so the first step partitions a rectangular processor grid into max(1, k/n) fewer processor grids. After the first step, which partitions B, we have independent subproblems with pr processors. We now start recursively partitioning L, yielding the cost recurrence, T RT3D (n, k, p2r , n0 ) = TMM3D (n/2, k, p2r ) + Tpart−cols (n, p)1k/n + 2TRT3D (n/2, k, p2r , n0 ). Above, we always employ the MM algorithm in the 2/3 3D regime by selecting p1 = pr (n/l)1/3 and p2 = 2/3 pr (n/l)2/3 where l = kpr /pc . In this case, TMM reduces to TMM3D (n, k, p) =     2  n k 2/3 nk log(p) n2 k O α · log(p) + β · + +γ· p p p This gives to the cost recurrence, TRT3D (n, k, p2r , n0 ) = n2 O α · log(p) + β · + 2 1k 2 pr pr n   nk log(p) n2 k n + +γ· 2 + 2TRT3D ( , k, p2r , n0 ), p2r pr 2   2  n k 2/3 = O((n2 k/p2r )2/3 ), since where we can see that nk log(p) p2r the initial partitioning will give n ≥ k. It is also easy to see 2 that np2 1 k = O((n2 k/p2r )2/3 ). With these simplifications, r n   n2 k 2/3 T RT3D (n, k, p2r , n0 ) = O α · log(p) + β · p2r  2 n k n +γ· 2 + 2TRT3D ( , k, p2r , n0 ). pr 2 We observe that the bandwidth cost O((n2 k/p2r )2/3 ) decreases by a factor of 21/3 at every recursive level, and the computation cost by a factor of 2. The base-case cost will  2/3 be TRTBC (n0 , k, p2r ). We select n0 = n1/3 pk2 , giving r a total cost over all base cases of nn0 TRTBC (n0 , k, p2r ) =    nk log(p)  nn0 k n log(p) + β · nn0 + + γ · O α· n0 p2r p2r   2    2 npr 2/3 n k 2/3 n4/3 k5/3 =O α· . log(p) + β · +γ· 10/3 k p2r pr Therefore, the overall cost incurred on each square processor grid is TRT3D (n, k, p2r ) =   2   n2 k 2/3 npr 2/3 n2 k O α· log(p) + β · + γ · . k p2r p2r When k ≤ n, we do not have a partitioning step and p2r = p. Otherwise, we have p2r = np/k obtain the cost TRT3D (n, n, np/k) =      n2 k 2/3 np 2/3 n2 k O α· log(p) + β · +γ· , k p p which is the same as for the case k ≤ n. For n = k, the 3D costs obtained above are the same as the most efficient algorithms for n × n LU factorization. In the subsequent sections, we show that a lower synchronization cost is achievable via selective use of triangular matrix inversion. V. T RIANGULAR I NVERSION In this section, we derive the cost of inverting a lower triangular matrix L of size n × n with p processors. Since the input matrix is square, the dimensions of the processor grid Π should be identical in two dimensions leaving us with dim (Π) = p1 × p1 × p2 , where p = p21 p2 . We assume the initial matrix to be cyclically distributed on the subgrid Π(◦, ◦, 1). A. Algorithmic Approach In [23], a recursive method for inverting triangular matrices was presented. A similar method for full inversion was presented in [24]. When applied to a triangular matrix, those methods coincide. The method uses the triangular structure of the initial matrix to calculate the inverse by subdividing the problem into two recursive matrix inversion calls, which can be executed concurrently and then uses two matrix multiplications to complete the inversion. Since the subproblems are independent, we want to split the processor grid such that two distinct sets of processors work on either subproblem. 
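Sequentially, this recursion is just the block formula for the inverse of a 2 × 2 lower block-triangular matrix: invert the two diagonal blocks (independently) and recover the off-diagonal block of the inverse by two matrix multiplications, −L22⁻¹ L21 L11⁻¹. A sketch with our own naming; the distributed Rec-Tri-Inv routine is given below.

```python
import numpy as np

def rec_tri_inv(L, n0=64):
    """Invert a lower-triangular matrix recursively.

    [[L11,  0 ],      [[L11^-1,                  0     ],
     [L21, L22]]  ->   [-L22^-1 @ L21 @ L11^-1,  L22^-1]]
    """
    n = L.shape[0]
    if n <= n0:
        return np.linalg.inv(L)                # base case: direct inversion
    h = n // 2
    inv11 = rec_tri_inv(L[:h, :h], n0)         # independent subproblem 1
    inv22 = rec_tri_inv(L[h:, h:], n0)         # independent subproblem 2
    inv21 = -inv22 @ L[h:, :h] @ inv11         # two matrix multiplications
    out = np.zeros_like(L, dtype=float)
    out[:h, :h], out[h:, h:], out[h:, :h] = inv11, inv22, inv21
    return out

rng = np.random.default_rng(2)
L = np.tril(rng.standard_normal((256, 256))) + 256 * np.eye(256)
assert np.allclose(rec_tri_inv(L) @ L, np.eye(256), atol=1e-8)
```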
We chose the base case condition to be that the grid is one-dimensional in the dimension of p1 and we do redundant base case calculations in this subgrid. For this section, we consider p2 ≥ p1 , a constraint that we will fulfill anytime the method is called. L−1 = RecTriInv(L, Π, p, p1 , p2 ) Require: √ √ The processor grid Π has dimensions p × p L is a lower triangular n × n matrix, distributed on Π in a cyclic layout, so Π(x, y) owns L[x, y] of size √np × √np such √ √ that L[x, y](i, j) = L(i p + x, j p + y). 1: if p1 = 1 then 2: AllToAll (L[x, ◦], Π(x, ◦)) 3: L−1 = sequential inversion(L) 4: else 5: Subdivide L inton/2 × n/2 blocks,  6: 7: 8: 9: 10: 11: 12: 13: 14: 15: L11 0 L= L21 L22 Subdivide the processor grid p Π = [Π p1 , Π2 ] such that dim(Π1 ) = dim(Π2 ) = ( p/2 × p/2) Redistribute (L11 , Π → Π1 ) Redistribute (L22 , Π → Π2 ) 2/3 L−1 , p2 /22/3 ) 11 = Rec-Tri-Inv(L11 , Π1 , p, p1 /2 −1 2/3 L22 = Rec-Tri-Inv(L22 , Π2 , p, p1 /2 , p2 /22/3 ) −1 L0−1 21 = −MM(L22 , L21 , Π, n, n, p, p1 , p2 ) −1 0−1 L21 = MM(L21 , L−1 11 , Π, n, , p, p1 , p2 ) −1 Assemble " L from #the n/2 × n/2 blocks, L−1 0 11 L−1 = −1 L21 L−1 22 owns having a factor of 21/3 more rows and columns. This redistribution is effectively an all-to-all between a larger and a smaller set of processors. We can get a concrete bound on the cost, by first performing an all-to-all to transition to a blocked layout (each processor owns contiguous blocks of the matrix). Then we can transition to a blocked layout on the smaller processor grid by scattering each block to at most 4 processors. Finally, we can perform an all-to-all on the smaller processor grid to transition from the blocked layout back to a cyclic one. The overall cost of these steps is O(α · log(p) + β · n20 log(p)/p) This redistribution bandwidth cost is dominated by the cost of the matrix multiplication. The total cost for the recursive inversion is TRecTriInv (n, p1 , p2 ) =β · n2  21/3  n2 + 2 2p1 p2 − 1 8p1 21/3 +γ·  21/3 1 n3 + O α log2 p . −18 p 21/3 In contrast to LU factorization and our recursive TRSM algorithm, the synchronization cost is logarithmic rather than polynomial in p. VI. I TERATIVE T RIANGULAR S OLVER In this section, we present our main contribution, a 3D TRSM algorithm that uses inversion of diagonal blocks to achieve a lower synchronization cost. By precomputing the inversions, we replace the latency-dominated small TRSMs with more parallel matrix multiplications. 17: end if 16: A. Block-Diagonal Triangular Inversion Ensure: LL−1 = 1 where L−1 is distributed the same way as L In order to lower the synchronization cost of TRSM, first we invert a set of triangular blocks along the diagonal of the matrix, each with a distinct subset of processors. We split Π into nn0 subgirds of dimensions r1 × r1 × r2 , where r12 r2 = p nn0 . To have the proper layout for the inversion, a transition from the original, cyclic layout on a subgrid to the grid as described in Section III has to happen. Afterwards, all the inversions can be done in parallel. To p support our inversion, we must have r2 > r1 and n0 ≥ r12 r2 . The precise choices of r1 and r2 are given in the algorithm and will be discussed in Section VII. B. Total Cost of Triangular Inversion This recursive approach of inverting a matrix has total cost, TRecTriInv (n, p1 , p2 ) = 2TMM (n/2, n/2, p1 , p2 )+ TRecTriInv (n/2, p1 /21/3 , p2 /21/3 ) + Tredistr (n/2, p1 , p2 ), with a base case cost of  TRecTriInv (n0 , 1, p2 ) = α · 2 log p2 p1  + β · 2n20 + γ · n30 . 
n The base case size will be n0 = 3/2 and therefore neither of p1 the terms is of leading order. We observe that the bandwidth cost of the matrix multiplication O((n3 /p21 )2/3 ) decreases by a factor of 24/9 at every recursive level, and the computation cost by a factor of 2. The redistribution process requires moving the matrices from a cyclic processor grid to a smaller cyclic processor grid, with the block each processor B. Triangular Solve using Partial Inversion Initially, we want L to be distributed on the top level of the three dimensional grid Π in a cyclic layout such that each processor Π(x, y, 1) owns L (y : p1 : n, x : p1 : n) . Also, we set the right hand side to be distributed on one level of the grid with a blocked layout with a physical block size of b × pk2 such that each processor  Π(x, 1, z) owns B x : p1 : nb , zk/p2 : (z + 1)k/p2 . L̃ = Diagonal-Inverter(L, Π, n, p1 , p2 , n0 ) Require: The processor grid Π has dimensions p1 × p1 × p2 L is a lower triangular n×n matrix distributed cyclically on Π such that processor Π (x, y, 1)) owns L[x, y] a lower triangular n × pn1 matrix such that L[x, y](i, j) = L(ip1 + x, jp1 + y). p1 pn0 r = nn0 n  pn0 1/3 r1 = 4n 1/3 0 r2 = 16pn n √ √ a p1 × p1 × p2 × p2 1: Define q = 2: Define 3: Define 4: Define 5: 6: 7: 8: 9: 10: 11: 12: processor √ grid Π4D , such that Π4D (x1 , x2 , y1 , y2 ) = Π(x1 , x2 , y1 + p2 y2 ). and Π4D (x1 , x2 , y1 , y2 ) owns blocks L[x1 , x2 , y1 , y2 ] Define a block diagonal matrix LD [x1 , x2 , y1 , y2 ] such that LD [x1 , x2 , y1 , y2 ][b] denotes the block L[x1 , x2 , y1 , y2 ](bn0 : (b + 1)n0 , bn0 : (b + 1)n0 ) Scatter (LD [◦, ◦, y1 , y2 ][◦], (x1 , x2 , 1, 1), Π4D (x1 , x2 , ◦, ◦)) √Π4D√ Define a p × p processor grid Π2D , such that Π2D (x1 + p1 y1 , x2 +√p2 y2 )√= Π4D (x1 , x2 , y1 , y2 ). √ √ Define a q × q × r × r processor grid ΠI4D , such that √ √ I Π4D (u1 , u2 , v1 , v2 ) = Π2D (u1 + qv1 , u2 + qv2 ) owns blocks LD [u1 , u2 , v1 , v2 ].  I AllToAll Lq D [u1 , u2 , v1 , v2 ][◦], Π4D (u1 , u2 , ◦, ◦) For i = 0 : nn0 − 1 do in parallel q For j = 0 : nn0 − 1 do in parallel   q Define b = i + nn0 j L̃D [◦, ◦, i, j][b] = RecTriInv (LD [◦, ◦, i, j][b],  ΠI4D [◦, ◦, i, j] , q, r1 , r2 ) 14: end for 15: end for   16: AllToAll L̃D [u1 , u2 , v1 , v2 ][◦], ΠI4D (u1 , u2 , ◦, ◦)  17: Gather L̃D [◦, ◦, y1 , y2 ][◦], Π4D (x1 , x2 , ◦, ◦), Π4D (x1 , x2 , 1, 1)) Ensure: L̃D LD = 1 ∀i where L and L̃ are partitioned the same way 13: Additionally, each processor has memory of the same size as its part of B allocated for an update-matrix denoted as Bj , j ∈ [1, p1 ], where also each processor Π(x, y, z) owns  By x : p1 : nb , zk/p2 : (z + 1)k/p2 . The algorithm itself consists of two parts: first ‘inversion’, we invert all the base-cases on the diagonal in parallel as described in the algorithm above and, second ‘solve’, we do the updates and calculate the solution to TRSM. VII. C OST A NALYSIS OF THE I TERATIVE TRSM In this section we will derive a performance model for the algorithm presented in Section VI. The total cost of the algorithm is put together from the cost of its three subroutines: TIt−Inv−TRSM (n, k, n0 , p1 , p2 ) = TInv (n, p1 , p2 )+ TUpd (n, k, n0 , p1 , p2 ) + TSolve (n, k, n0 , p1 , p2 ). Above the cost denoted by inversion is the part of the algorithm that inverts the blocks (Algorithm Diagonal-Inverter). The solve part is in lines 4-5, and the update in lines 7-8. 
X = It-Inv-TRSM(L, B, Π, n, k, p1 , p2 , r1 , r2 ) Require: The processor grid Π has dimensions p1 × p1 × p2 L is a lower triangular n × n matrix is distributed on Π such that Π(x, y, 1) owns L[x, y] of size n/p1 × n/p1 such that L[x, y](i, j) = L(ip1 + x, jp1 + y) B is a dense n × k matrix is distributed such that Π(x, 1, z) owns B[x, z]of sizen/p1 × k/p2 , such that B[x, z](i, j) = L(ip1 + x, zk/p2 + j) Define blocks Si = in0 : (i + 1)n0 and Ti = in0 : n 1: L̃ = Diagonal-Inverter(L, Π, n, p1 , p2 , n0 ) 2: Bcast (B [x, z] (S0 (x), ◦), Π(x, 1, z), Π(x, ◦, z)) 3: for i = 0 : nn − 1 do 0 4: Π(x, y, z) : X [y, z] (Si , ◦) = 5: 6: L̃ [y, x] (Si , Si ) · B [x, z] (Si , ◦) X [y, z] (◦, y, z))  (Si , ◦) = Allreduce (X [y, z] (Si , ◦), Π  Bcast L̃ [x, y] (Ti+1 , Si ), Π(x, y, 1), Π(x, y, ◦) Π(x, y, z) : By [x, z] (Ti+1 , ◦)+ = L̃ [x, y] (Ti+1 , Si ) · X [y, z] (Si , ◦) B0 [x, z] (Si+1 , ◦) = 8: Allreduce(B◦ [x, z] (Si+1 , ◦), Π (x, ◦, z)) 9: Π(x, y, z) : B [x, z] (Si+1 , ◦) = B [x, z] (Si+1 , ◦) − B0 [x, z] (Si+1 , ◦) 10: end for Ensure: B = LX where X is distributed the same way as B 7: A. Inversion Cost We invert the nn0 submatrices of size n0 × n0 along the diagonal with distinct processor grids. The size of the processor grids involved is r1 × r1 × r2 . The choices of r1 and r2 are made such that the bandwidth cost of the inversion is minimal. Additionally we have to account for the cost that arises from communicating the submatrices to the proper subgrids. This happens in lines 6, 9, 16, and 17 of the Algorithm Diagonal-Inverter. The respective costs are summed up the the following table. Line 16 0 α · log(p2 ) + β · nn 2p2 1   O α · log(p) + β · nn02plog p   O α · log(p) + β · nn02plog p Line 17 α · log(p2 ) + β · Line 6 Line 9 nn0 2p2 1 These cost are never of leading order compared to the costs that arise form the triangular inversion. With the derivations done in Section V, we get the following costs for inversion: Latency Cost: The total latency cost of inversion is  SInv (p) = O α log2 p . Bandwidth Cost: In order to minimize the bandwidth cost of triangular inversion, we choose a grid splitting to achieve closest to ideal ratios for the subgrids processor layout r1 and r2 . This ratio is achieved when r2 = 4r1 . The choices for r1 and r2 are therefore, r1 =  pn 1/3 0 4n  r2 = and 16pn0 n 1/3 . With this grid slicing, we get nn0 different sub-grids of dimensions r1 × r1 × r2 . This setup leads to a cost for inverting nn0 submatrices of 21/3 WInv (n0 , r1 , r2 ) = 1/3 2 −1  n20 n20 + 2 8r1 2r1 r2  . Flop Cost: The flop cost of the inversion part is FInv (n0 , p1 , p2 ) = 1 8 Bandwidth Cost: The cost of doing all the updates as described in the algorithm in Section VI (Lines 5-9) is the cost of both the allreductions and the broadcast, WUpd (n, k, n0 , p1 , p2 ) =   n/n0 −1  X nn0 − in0 Wbcast , p 2 + p21 i=1     n0 k n0 k Wallreduction . , p1 + Wallreduction , p1 p1 p2 p1 p2 This yields to a total cost of WUpd (n, k, n0 , p1 , p2 ) =   n − n0 n0 k nn0 − n 4 1p2 + 4 1p . n0 p21 p1 p2 1 Flop Cost: The update flop cost is FUpd (n, k, n0 , p1 , p2 ) = nn20 . p21 p2 n − n0 n0  knn0 p21 p2  . D. Total Cost B. Solve Cost The complete solve cost can be derived by n TMM (n0 , k, p, p1 , p2 ) n0 TSolve (n, n0 , k, p, p1 , p2 ) = Latency Cost: The latency cost of the solve part is  SSolve (n, n0 , p) = O  n log p . n0 Bandwidth Cost: The cost of the solve is one call to triangular matrix multiplication for each base case. The synchronization cost is again dominated by the nn0 cases. 
The cost for these sums has been presented in Section III. Including these, we obtain a total cost of n WSolve (n, k, n0 , p, p1 , p2 ) = · WMM (n0 , k, p, p1 , p2 ) n0     2  n0 k n n0 + 4 = · 1 1 p p 2 1 . n0 p21 p1 p2 Flop Cost: The flop cost of the solve part is FSolve (n, k, n0 , p1 , p2 ) = n n0  n20 k p21 p2 The total cost of the algorithm is the sum of its three parts and leaves a lot of tuning room as with a choice of p1 = 1, p2 = 1 or n0 = n one is able to eliminate certain terms. Latency Cost: The total latency cost of the algorithm is a sum of the previous parts, SIt−Inv−TRSM (p1 , p2 , r1 , b) = SUpd (n, n0 , p1 , p2 )+ SSolve (n, n0 , p1 , p2 ) + SInv (p1 , p2 , r1 , b)    n =O α log p + log2 p . n0 Bandwidth Cost: The total bandwidth cost for the TRSM 21/3 algorithm is, by abbreviating ν = 21/3 , −1 WIt−Inv−TRSM (n, k, n0 , p1 , p2 , u, v, b) = WUpd (n, k, n0 , p1 , p2 ) + WSolve (n, k, n0 , p1 , p2 ) + WInv (n, b, p1 , p2 , u, v)  2     n0 n0 k n · 1 + 4 1 = p2 p1 n0 p21 p1 p2    2  n − n0 nn0 − n n0 k n0 n20 + 4 1p2 + 4 1p1 +ν + . 2 2 n0 p1 p1 p2 8r1 2r1 r2  . Flop Cost: Lastly, the combined total flop cost is FIt−Inv−TRSM (n, k, n0 , p1 , p2 ) = FUpd (n, n0 , p1 , p2 ) C. Update Cost +FSolve (n, n0 , p1 , p2 )+FInv (n0 , u, v, p1 , p2 ) = The complete solve cost can be derived by VIII. PARAMETER T UNING n/n0 −1 TUpd (n, k, n0 , p, p1 , p2 ) = X TMM (n−in0 , n0 , k, p, p1 , p2 ) . i=1 Latency Cost: The update latency cost is  SUpd (n, n0 , p) = O n2 k n2 n + 20 . 2 p1 p2 p1 p2  n − n0 log p . n0 In this section, we give asymptotically optimal tuning parameters for different relative matrix sizes to optimize performance. We only focus on asymptotic parameters as there is a trade off between the constant factors on the bandwidth and latency costs. The exact choice is therefore machine dependent and should be determined experimentally. The initial grid layout is dependent on the relative matrix sizes of L and B since the update part of the algorithm is one of the dominating terms in any case where there is an update to be made and determines the case where is is infeasible. The different layouts are shown in Figure 1. 4k the processor In the case where n < p , grid layout is one-dimensional. The optimal parameters are given in the following table.   p1 = 1 r1 = p2 = p r2 = O (p)1/3   O (p)1/3 n0 = n Using this set of parameters will yield to a total cost of    n2 k TIT1D (n, k, p) = O α · log2 p + log p + β · n2 + γ · . p Comparing these costs to the costs of TRT1D obtained in Section IV-A, we can see that we are within the asymptotic bounds of the original algorithm in bandwidth and flop cost, but pay an extra factor of log p latency, since the inversion, if performed on a 3D grid, requires log2 p steps. But since the inversion is the least significant part of the routine in 1 large dimension, no gain was to be expected in this case. √ In the case where n > 4k p, the processor grid layout is two-dimensional. The optimal parameters are given in the following table.  p1 = r1 = √ O p  p2 =  k 1/4 n 1 n0 =  r2 = p3/8 1/4  O nk3 p1/2 O   k 1/4 n p3/8     n 3/4 1 2 TIT2D (n, k, p) = O α log p + log p k p1/8    2  nk n k +β √ +γ √ . p p Comparing these costs to the cost of TRT2D obtained in Section IV-A, we can see that we are asymptotically more p1/4 efficient in terms of latency by a factor of at least log p as well as in bandwidth by a factor of log p while having the same flop cost asymptotically. 
This is a significant gain and especially important as the occurrence of fewer right hand sides k < n is high. √ In the case where 4k p ≤ n ≤ 4k p, the processor grid layout is three-dimensional. The optimal parameters are given in the following table. r1 = O pn 1/3 4k p2 = √ p4k n  √  n0 = O min nk, n  i1/3  h √ r2 = O min p nnk , p 2/3  i1/3  h √ min p nnk , p Using this set of parameters will yield to a total cost of     r n 2 , 1 log p TIT3D (n, k, p) = O α log p + max k !  2 2/3  2 ! n k n k +β +γ √ . p p Comparing these costs to the cost of TRT3D obtained in Section IV-A, we can see that we are asymptotically more 1/6 2/3 efficient in terms of latency by a factor of nk p while being able to keep bandwidth and flop costs asymptotically constant. IX. C ONCLUSION S W  1 Large Dimension n < 4k p F  standard log p n2 n2 k p new method log2 p n2 n2 k p √  2 Large Dimensions n > 4k p  Using this set of parameters will yield to a total cost of p1 = Figure 1. One-, two-, and three-dimensional layout dependent on relative matrix sizes. Inverted blocks of the matrix in dark and input- and output of the right hand side on the left and right size of the cuboid. √ standard new method log2 p + p  n 3/4 1 k p1/8 new method np 2/3 k log2 p + q log p n k n2 k p nk √ p n2 k p log p 3 Large Dimensions standard nk log p √ p log p 4k p √ ≤ n ≤ 4k p  2 2/3 n k p  n2 k p 2/3 n2 k p 2n2 k p We present a new method for solving triangular systems for multiple right hand sides. In the above table, we compare to a baseline algorithm adapted from [3] that achieves costs that are as good or better than the state of the art [18], [21], [25]. Our algorithm achieves better theoretical scalability 1/6 2/3 p . For than these alternatives by up to a factor of nk certain matrix dimensions, a decrease of bandwidth cost by a factor of log2 p is obtained by use of selective triangular matrix inversion. By only inverting triangular blocks along the diagonal of the initial matrix, we generalize the usual way of TRSM computation and the full matrix inversion approach. Fine-tuning the algorithm based on the relative input sizes as well as the number of processors available leads to a significantly more robust algorithm. The cost analysis of this new method allows us to give recommendations for asymptotically optimal tuning parameters for a wide variety of possible inputs. The detailed pseudo-code provides a direct path toward a more efficient parallel TRSM implementation. R EFERENCES [1] F. G. Gustavson, “Recursion leads to automatic variable blocking for dense linear-algebra algorithms,” IBM Journal of Research and Development, vol. 41, no. 6, pp. 737–755, 1997. [2] E. Solomonik and J. Demmel, “Communication-optimal parallel 2.5-D matrix multiplication and LU factorization algorithms,” in Euro-Par 2011 Parallel Processing. Springer, 2011, pp. 90–109. [3] E. Elmroth, F. Gustavson, I. Jonsson, and B. Kågström, “Recursive blocked algorithms and hybrid data structures for dense matrix library software,” SIAM review, vol. 46, no. 1, pp. 3–45, 2004. [4] J. J. Du Croz and N. J. Higham, “Stability of methods for matrix inversion,” IMA Journal of Numerical Analysis, vol. 12, no. 1, pp. 1–19, 1992. [5] E. Solomonik, E. Carson, N. Knight, and J. Demmel, “Tradeoffs between synchronization, communication, and computation in parallel linear algebra computations,” in Proceedings of the 26th ACM Symposium on Parallelism in Algorithms and Architectures, ser. SPAA ’14. New York, NY, USA: ACM, 2014, pp. 307–318. [6] E. Chan, M. Heimlich, A. 
Purkayastha, and R. Van De Geijn, “Collective communication: theory, practice, and experience,” Concurrency and Computation: Practice and Experience, vol. 19, no. 13, pp. 1749–1783, 2007. [7] R. Thakur, R. Rabenseifner, and W. Gropp, “Optimization of collective communication operations in mpich,” International Journal of High Performance Computing Applications, vol. 19, no. 1, pp. 49–66, 2005. [8] J. Bruck, C.-T. Ho, S. Kipnis, E. Upfal, and D. Weathersby, “Efficient algorithms for all-to-all communications in multiport message-passing systems,” Parallel and Distributed Systems, IEEE Transactions on, vol. 8, no. 11, pp. 1143–1156, 1997. [9] J. L. Träff and A. Ripke, “Optimal broadcast for fully connected processor-node networks,” Journal of Parallel and Distributed Computing, vol. 68, no. 7, pp. 887–901, 2008. [10] E. Dekel, D. Nassimi, and S. Sahni, “Parallel matrix and graph algorithms,” SIAM Journal on Computing, vol. 10, no. 4, pp. 657–675, 1981. [11] R. C. Agarwal, S. M. Balle, F. G. Gustavson, M. Joshi, and P. Palkar, “A three-dimensional approach to parallel matrix multiplication,” IBM J. Res. Dev., vol. 39, pp. 575–582, September 1995. [12] A. Aggarwal, A. K. Chandra, and M. Snir, “Communication complexity of PRAMs,” Theoretical Computer Science, vol. 71, no. 1, pp. 3 – 28, 1990. [13] J. Berntsen, “Communication efficient matrix multiplication on hypercubes,” Parallel Computing, vol. 12, no. 3, pp. 335– 342, 1989. [14] W. F. McColl and A. Tiskin, “Memory-efficient matrix multiplication in the BSP model,” Algorithmica, vol. 24, pp. 287– 297, 1999. [15] S. L. Johnsson, “Minimizing the communication time for matrix multiplication on multiprocessors,” Parallel Comput., vol. 19, pp. 1235–1257, November 1993. [16] J. Demmel, D. Eliahu, A. Fox, S. Kamil, B. Lipshitz, O. Schwartz, and O. Spillinger, “Communication-optimal parallel recursive rectangular matrix multiplication,” in Parallel Distributed Processing (IPDPS), 2013 IEEE 27th International Symposium on, May 2013, pp. 261–272. [17] A. Tiskin, “Bulk-synchronous parallel gaussian elimination,” Journal of Mathematical Sciences, vol. 108, no. 6, pp. 977– 991, 2002. [18] M. T. Heath and C. H. Romine, “Parallel solution of triangular systems on distributed-memory multiprocessors,” SIAM Journal on Scientific and Statistical Computing, vol. 9, no. 3, pp. 558–588, 1988. [19] P. Raghavan, “Efficient parallel sparse triangular solution using selective inversion,” Parallel Processing Letters, vol. 8, no. 01, pp. 29–40, 1998. [20] D. Irony and S. Toledo, “Trading replication for communication in parallel distributed-memory dense solvers,” Parallel Processing Letters, vol. 12, no. 01, pp. 79–94, 2002. [21] B. Lipshitz, “Communication-avoiding parallel recursive algorithms for matrix multiplication,” Master’s thesis, University of California, Berkeley, 2013. [22] S. Tomov, R. Nath, H. Ltaief, and J. Dongarra, “Dense linear algebra solvers for multicore with GPU accelerators,” in Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010 IEEE International Symposium on. IEEE, 2010, pp. 1–8. [23] A. Borodin and I. Munro, The computational complexity of algebraic and numeric problems. Elsevier Publishing Company, 1975, vol. 1. [24] S. M. Balle, P. C. Hansen, and N. Higham, “A Strassen-type matrix inversion algorithm,” Advances in Parallel Algorithms, pp. 22–30, 1994. [25] L. S. Blackford, J. Choi, A. Cleary, E. D’Azeuedo, J. Demmel, I. Dhillon, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. 
Whaley, ScaLAPACK User’s Guide, J. J. Dongarra, Ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 1997.
Causal Linearizability: Compositionality for Partially Ordered Executions Simon Doherty∗1 , John Derrick†1 , Brijesh Dongol‡2 , and Heike Wehrheim§3 1 Department of Computer Science, University of Sheffield, UK Department of Computer Science, Brunel University London, UK 3 Department of Computer Science, University of Paderborn, Germany arXiv:1802.01866v1 [cs.LO] 6 Feb 2018 2 February 7, 2018 Abstract In the interleaving model of concurrency, where events are totally ordered, linearizability is compositional: the composition of two linearizable objects is guaranteed to be linearizable. However, linearizability is not compositional when events are only partially ordered, as in many weak-memory models that describe multicore memory systems. In this paper, we present causal linearizability, a correctness condition for concurrent objects implemented in weak-memory models. We abstract from the details of specific memory models by defining our condition using Lamport’s execution structures. We apply our condition to the C11 memory model, providing a correctness condition for C11 objects. We develop a proof method for verifying objects implemented in C11 and related models. Our method is an adaptation of simulation-based methods, but in contrast to other such methods, it does not require that the implementation totally order its events. We also show that causal linearizability reduces to linearizability in the totally ordered case. 1 Introduction Linearizability [18, 19] is a well-studied [12] condition that defines correctness of a concurrent object in terms of a sequential specification. It ensures that for each history of an implementation, there is a history of the specification such that (1) each thread makes the same method invocations in the same order, and (2) the order of non-overlapping operation calls is preserved. The condition however, critically depends on the existence of a total order of memory events (e.g., as guaranteed by sequential consistency (SC) [24]) to guarantee contextual refinement [15] and compositionality [18]. Unfortunately most modern systems can only guarantee a partial order of memory events, e.g., due to the effects of relaxed memory [1, 3, 4, 25]. It is known that a naive adaptation of linearizability to the partially ordered setting of weak memory is problematic from the perspective of contextual refinement [13]. In this paper, we propose a compositional generalisation of linearizability for partially ordered executions. Figs. 1 and 2 show two examples1 of multi-threaded programs on which weak memory model ∗ [email protected][email protected][email protected] § [email protected] 1 Example inspired by H.-J.Boehm talk at Dagstuhl, Nov. 2017 1 x = 0, y = 0 Process 2 Process 1 1 : x := 1; 1 : if(y = 1) 2 : y := 1; 2 : assert(x = 1); S = h i, S′ = h i Process 2 Process 1 1 : S.Push(1); 1 : if(S’.Pop = 1) 2 : S’.Push(1); 2 : assert(S.Pop = 1); Figure 1: Writing to shared variables Figure 2: Writing to shared stacks effects can be observed. Fig. 1 shows two threads writing to and reading from two shared variables x and y. Under SC, the assert in process 2 never fails: if y equals 1, x must also equal 1. However, in weak memory models like the C11 model [4, 21], this is not true: if the writes to x and y are relaxed, process 2 may observe the write to y, yet also observe the initial value x (missing the write to x by process 1). Such effects are not surprising to programmers familiar with memory models [4, 21]. 
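To see concretely why the assert in Fig. 1 cannot fail under SC, one can enumerate every interleaving of the two threads and check that the outcome y = 1, x = 0 never arises; the relaxed C11 outcome described above is exactly the one this enumeration rules out. A small brute-force sketch (ours):

```python
def interleavings(a, b):
    """All interleavings of sequences a and b that preserve each thread's order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Execute one SC interleaving of Fig. 1; return the values read for y and x."""
    mem = {"x": 0, "y": 0}
    r_y, r_x = None, None
    for (tid, op) in schedule:
        if tid == 1:
            mem[op] = 1            # thread 1: x := 1 then y := 1
        elif op == "read_y":
            r_y = mem["y"]         # thread 2, line 1
        else:
            r_x = mem["x"]         # thread 2, line 2
    return r_y, r_x

t1 = [(1, "x"), (1, "y")]
t2 = [(2, "read_y"), (2, "read_x")]

# Under SC no interleaving observes y = 1 together with x = 0, so the assert holds.
bad = [s for s in interleavings(t1, t2) if run(s) == (1, 0)]
print(bad)   # []
```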
However, programmer expectations for linearizable objects, even in a weak memory model like C11, are different: if the two stacks S and S′ in Fig. 2 are linearizable, the expectation is that the assert will never fail since linearizable objects are expected to be compositional [18, 19], i.e., any combination of linearizable objects must itself be linearizable. However, it is indeed possible for the two stacks to be linearizable (using the classical definition), yet for the program to generate an execution in which the assert fails. The issue here is that linearizability, when naively applied to a weak memory setting, allows too many operations to be considered “overlapping”. Our key contribution in this paper is the development of a new compositional notion of correctness, called causal linearizability, which is defined in terms of an execution structure [23], taking two different relations over operations into account: a “precedence order” (describing operations that are ordered in time) and a “communication relation”. Applied to Fig. 2, for a weak memory execution in which the assert fails, the execution restricted to stack S would not be causally linearizable in the first place. Namely, causal linearizability ensures enough precedence order in an execution to ensure that the method call S.Push(1) occurs before S.Pop, meaning S.Pop is forced to return 1. Execution structures are generic, and can be constructed for any weak memory execution that includes method invocation/response events. Our second contribution is one such scheme for mapping executions to execution structures based on the happens-before relation of the C11 memory model. Given method calls m1 and m2, we say m1 precedes m2 if the response of m1 happens before the invocation m2; we say m1 communicates with m2 if the invocation of m1 happens before the response of m2. Our third contribution is a new inductive simulation-style proof technique for verifying causal linearizability of weak memory implementations of concurrent objects, where the induction is over linear extensions of the happens-before relation. This is the first such proof method for weak memory, and one of the first that enables full verification, building on existing techniques for linearizability in SC [26, 12, 8]. Our fourth contribution is the application of this proof technique to causal linearizability of the Treiber Stack in the C11 memory model. We present our motivating example, the Treiber Stack in C11 in Section 2; describe the problem of compositionality and motivate our execution-structure based solution in Section 3; and formalise causal linearizability and prove compositionality in Section 4. Causal linearizability for C11 is presented in Section 5, and verification of the stack described in Section 6. 2 Treiber Stack in C11 The example we consider (see Algorithm 1) is the well-studied Treiber Stack [28], executing in a recent version of the C11 [22] memory model. In C11, commands may be annotated, e.g., R 2 Algorithm 1 Release-Acquire Treiber Stack 1: procedure Init 2: Top := null; 3: 4: 5: 6: 7: 8: 9: procedure Push(v) n := new(node) ; n.val := v ; repeat top :=A Top ; n.nxt := top ; until CASR (&Top, top, n) 10: 11: 12: 13: 14: 15: 16: 17: procedure Pop repeat repeat top :=A Top ; until top 6= null ; ntop := top.nxt ; until CASR (&Top, top, ntop) return top.val ; (for release) and A (for acquire), which introduces extra synchronisation, i.e., additional order over memory events [4, 21]. 
We assume racy read and write accesses that are not part of an annotated command are unordered or relaxed, i.e., we do not consider the effects of non-atomic operations [4]. Full details of the C11 memory model are deferred until Section 5. Due to weak memory effects, the events under consideration, including method invocation and response events are partially ordered. As we show in Section 3, it turns out that one cannot simply reapply the standard notion of linearizability in this weaker setting; compositionality demands that we use modified form: causal linearizability that additionally requires “communication” across conflicting operations. In Algorithm 1, all accesses to the shared variable Top are via an annotated command. Thus, any read of Top (lines 7, 13) reading from a write to Top (lines 9, 16) induces happens-before order from the write to the read. This order, it turns out, is enough to guarantee invariants that are in turn strong enough to guarantee2 causal linearizability of the Stack (see Section 6). Note that we modify the Treiber Stack so that the Pop operation blocks by spinning instead of returning empty. This is for good reason - it turns out that the standard Treiber Stack (with a non-blocking Pop operation) is not naturally compositional if the only available synchronisation is via release-acquire atomics (see Section 7). 3 Compositionality and execution structures This section describes the problems with compositionality for linearizability of concurrent objects under weak execution environments (e.g., relaxed memory) and motivates a generic solution using execution structures [23]. Notation. First we give some basic notation. Given a set X and a relation r ⊆ X ×X, we say r is a partial order iff it is reflexive, antisymmetric and transitive, and a strict order, iff it is irreflexive, antisymmetric and transitive. The support of r is denoted support (r) = dom(r) ∪ ran(r). A partial or strict order r is a total order iff either (a, b) ∈ r or (b, a) ∈ r for all a, b ∈ support (r). We typically use notation such as <, ≤, ≺, to denote orders, and write, for example, a < b instead of (a, b) ∈ <. The operations of an object are defined by a set of labels, Σ. For concurrent data structures, Σ = Inv × Res, where Inv and Res are sets of invocations and responses (including their input 2 Note that a successful CAS operation comprises both a read and a write access to Top, but we only require release synchronisation here. The corresponding acquire synchronisation is provided via the earlier read in the same operation. This synchronisation is propagated to the CAS by sequenced-before (aka program order), which, in C11, is included in happens-before (see Section 6 for details). 3 and return values), respectively. For example, for a stack S of naturals, the invocations are given by {Push(n) | n ∈ N} ∪ {Pop}, and the responses by N ∪ {⊥, empty}, and ΣS = {(Push(n), ⊥), (Pop, n) | n ∈ N} ∪ {(Pop, empty)} The standard notion of linearizability is defined for a concurrent history, which is a sequence (or total order) of invocation and response events of operations. Since operations are concurrent, an invocation of an operation may not be directly followed by its matching response, and hence, a history induces a partial order on operations. For linearizability, we focus on the real-time partial order (denoted ), where, for operations o and o′ , we say o o′ in a history iff the response of operation o happens before the invocation of operation ′ o in the history. 
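As a small illustration (not part of the formal development), the real-time order can be computed directly from a history represented as a set of operations with invocation/response events and an ordering relation over those events; the Python names below (real_time_order, event_order) are illustrative choices only. The analogous construction, with the invocation of o ordered before the response of o′, yields the communication relation introduced later.

from itertools import product

def real_time_order(ops, event_order):
    # ops: mapping from operation id to its (invocation_event, response_event) pair
    # event_order: set of ordered pairs of events (total in the SC setting, partial in general)
    # o1 precedes o2 iff the response of o1 is ordered before the invocation of o2
    return {(o1, o2)
            for (o1, (i1, r1)), (o2, (i2, r2)) in product(ops.items(), ops.items())
            if o1 != o2 and (r1, i2) in event_order}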
A concurrent implementation of an object is linearizable if the real-time partial order ( ) for any history of the object can be extended to a total order that is legal for the object’s specification [18]. It turns out that linearizability in this setting is compositional [18, 19]: any history of a family of linearizable objects is itself guaranteed to be linearizable. Unfortunately, histories in modern executions contexts (e.g., due to relaxed memory or distributed computation) are only partially ordered since processes do not share a single global view of time. It might seem that this is unproblematic for linearizability and that the standard definition can be straightforwardly applied to this weaker setting. However, it turns out that a naive application fails to satisfy compositionality. To see this, consider the following example. Example 1. Consider a S and S’ that are both S’.Push happens before the invocation of S.Pop. history h, partially ordered by a happens-before relation, for two stacks initially empty (denoted by ⊥). Suppose that in h, the response of the invocation of S.Pop, and the response of S.Push happens before History h induces a partial order over these operations as shown below: (S’.Pop, 11) (S.Push(42), ⊥) (S.Pop, 42) (S’.Push(11), ⊥) If we restrict the execution above to S only, we can obtain a legal stack behaviour by linearizing (S.Push(42), ⊥) before (S.Pop, 42) without contradicting the real-time partial order in the diagram above. Similarly, the execution when restricted to S′ is linearizable. However, the full execution is not linearizable: ordering both pushes before both pops contradicts the induced real-time partial order ( above). A key contribution of this paper is the development of a correctness condition, causal linearizability, that recovers compositionality of concurrent objects with partially ordered histories. Our definition is based on two main insights. The first insight is that one must augment the real-time partial order with additional information about the underlying concurrent execution. In particular, one must introduce information about the communication when linearizing conflicting operations. Two operations conflict if they do not commute according to the sequential specification, e.g., for a stack data structure, Push and Pop are conflicting. Causal linearizability states that for any conflicting operations, say o and o′ , that are linearized in a particular order, say o o′ , there must exist some communication from o to o′ . We represent communication by a relation . 4 Example 2. Consider the partial order in Example 1. For both stacks S and S′ , the Push must be linearized before the Pop, and hence, we must additionally have communication edges as follows: (S’.Pop, 11) (S.Push(42), ⊥) (S.Pop, 42) (S’.Push(11), ⊥) The second insight is that the operations and the induced real-time partial order, , extended with a communication relation, , must form an execution structure [23], defined below. Definition 3 (Execution structure). Given that E is a finite3 set of events, and , ⊆ E ×E are relations over E, an execution structure is a tuple (E, , ) satisfying the following axioms for e1 , e2 , e3 ∈ E. A1 The relation is a strict order. A2 Whenever e1 e2 , then e1 A3 If e1 e2 e3 or e1 A4 If e1 e2 e3 e2 and ¬(e2 e2 e1 ). e3 , then e1 e4 , then e1 e3 . e4 . Example 4. Consider the execution depicted in Example 2. 
The requirements of an execution structure, in particular axiom A4 necessitate that we introduce additional real-time partial order edges as follows. (S’.Pop, 11) (S.Push(42), ⊥) (S.Pop, 42) (S’.Push(11), ⊥) For example, the edge (S.Pop, 42) (S.Push(42), ⊥) is induced by the combination of edges (S.Pop, 42) (S’.Push(11), ⊥) (S’.Pop, 11) (S.Push(42), ⊥) together with axiom A4. A consequence of these additional real-time partial order edges is that S (and symmetrically S’) is not linearizable since the edge (S.Pop, 42) (S.Push(42), ⊥) must be present even when restricting the structure to S only. Hence compositionality no longer fails. 4 Causal linearizability This section provides a formal definition of causal linearizability, and the compositionality theorem. We define sequential objects in Section 4.1, then define causal linearizability in Section 4.2. 4.1 Sequential specifications Causal linearizability defines correctness of a concurrent object with respect to a sequential object specification. Definition 5 (Sequential object). A sequential object is a pair (Σ, legal ), where legal ⊆ Σ∗ is a prefix-closed sequence of labels. 3 The original presentation allows for infinite execution structures, placing a well-foundedness condition on 5 . For example, in each legal sequence of a stack, each pop operation returns the value from the latest push operation that has not yet been popped, or empty if no such operation exists. For each sequential object, we define a conflict relation, # ⊆ Σ × Σ, based on the legal behaviours of the object. Two operations conflict if they do not commute in some legal history: o#o′ = ¬(∀k1 , k2 ∈ Σ∗ . k1 o o′ k2 ∈ legal ⇔ k1 o′ ok2 ∈ legal ) For a stack, we have, for instance, (Push(n), ⊥)#(Pop, n′ ) for any n, n′ , and for n 6= n′ , (Push(n), ⊥)#(Push(n′ ), ⊥). We now show (in Lemma 6 below) that the order of conflicting actions in a sequential history captures all the orders in that history that matter. This is formalized and proved using order relations derived from a legal sequence. However, since the same action can occur more than once in a legal sequence, we lift actions to events by enhancing each action with a unique tag and process identifier4 . Thus, given a sequential object S = (Σ, legal ) an S-event is a triple (g, p, a) where g is an event tag (taken from a set of tags G), p is a process (taken from a set of processes P ) and a is a label in Σ. We let Evt be the set of all S-events, and for a sequence k ∈ Evt ∗ , ev (k) be the set of events in k. The definitions of legality and conflict as well as sequential specifications can naturally be lifted to the level of events by virtue of the action labels. That is, a sequence of events is legal if the sequence of actions it induces is legal. In the following, we therefore use legal to refer to sequences of actions and sequences of events interchangeably. From a sequence k ∈ Evt ∗ we derive two relations on events, a temporal ordering ( k ) and a causal ordering (≺k ), where: e k e′ = ∃k1 , k2 , k3 ∈ Evt ∗ . k = k1 ek2 e′ k3 e ≺k e′ = e k e′ ∧ e#e′ Any sequential history that extends the causal order of a legal history is itself legal. We formalize this in the following lemma. Lemma 6 (Legal linear extensions). For a sequential object (Σ, legal ), if k ∈ legal and k ′ ∈ Evt ∗ , such that ev (k) = ev (k ′ ) and ≺k ⊆ k′ , then k ′ ∈ legal . Proof. We transform k into k ′ by reordering events in k to match the order k′ . 
We only reorder events that are not conflicting and thus, each step of the transformation preserves legality. This is sufficent to prove that k ′ ∈ legal . Let a mis-ordered pair be any pair of events e, e′ such that e k e′ but e′ k′ e. Note that in this case, we have e 6≺k e′ , because ≺k ⊆ k′ . Let e1 , e2 be a mis-ordered pair with minimal distance in k between the two elements (i.e., so that the number of events in k between e1 and e2 is minimal). We will reorder non-conflicting events in k so as to eliminate this mis-ordered pair, or reduce its size without creating a new mis-ordered pair. Once all mis-ordered pairs have been eliminated, we will have transformed k into k ′ , while preserving the legality of k. If e1 and e2 are adjacent in k, then because e1 6≺k e2 , we have ¬(e1 #e2 ), and thus we can reorder them to form a new sequence legal with fewer mis-ordered pairs. If e′1 #e2 then we would have e1 ≺k e′1 ≺k e2 and so e1 ≺k e2 , which is a contradiction. The same argument shows that there is no event between e′1 and e2 that conflicts with both. So let k ′′ be the sequence derived from k by reordering e′1 forward just past e2 . Note that because there were no conflicts, k ′′ ∈ legal. It remains to show that k ′′ has no mis-ordered pairs that were not already present in k. This could only happen if there was some e′2 such that e′1 S e′2 S e2 and e′1 k′ e′2 . Because e1 , e2 is the mis-ordered pair with minimal gap in k, it must be that e′2 k′ e2 , but then e′2 k′ e1 while e1 k e′2 . Thus, in this case, e1 , e′2 forms a smaller mis-ordered pair, contrary to hypothesis. 4 Strictly speaking, the process identifier is unimportant for Lemma 6, but we introduce it here to simplify compatibility with the rest of this paper. 6 As we shall see, this lemma is critical in the proof of our compositionality result, Theorem 10. 4.2 Concurrent executions and causal linearizability We now define causal linearizability. For simplicity, we assume complete concurrent executions, i.e., executions in which every invoked operation has returned. It is straightforward to extend these notions to cope with incomplete executions. In general, executions of concurrent processes might invoke operations on several concurrent objects. To capture this, we define a notion of object family, which represents a composition of sequential objects, indexed by some set X. Definition 7 (Object family). Suppose X is an index set. For each x ∈ X, assume a sequential object Sx = (Σx , legal x ) such that Σx is disjoint from Σy for all y ∈ X\{x}. We define the object family over X, SX = (ΣX , legal X ) by: S • ΣX = x Σx • legal X = {k ∈ Σ∗X | ∀x. k ⇂ x ∈ legal x }, where k ⇂ x is the sequence k restricted to actions of object x. Thus the set legal X contains exactly the interleavings of elements of each of the legal x . N.B., the pairwise disjointness requirement on Σx can be readily achieved by attaching the object identifier x to each operation in Σx . An execution structure (E, , ) is a complete S-execution structure iff all events in E are S-events. Definition 8 (Causal linearizability). Let SX = (ΣX , legal X ) be an object family. A complete SX -execution structure (E, , ) is causally linearizable if there exists a k ∈ legal X with ev (k) = E such that ⊆ k , and ⊇ ≺k . 
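Before unpacking the two conditions of Definition 8, a brute-force check for small, finite executions may help make the definition concrete. The sketch below is purely illustrative: precedes and communicates stand for the two relations of the execution structure, legal is the sequential specification viewed as a predicate on sequences of labels, and conflicts is the non-commutation test; all of these names are ours.

from itertools import permutations

def causally_linearizable(events, precedes, communicates, legal, conflicts):
    # events: finite list of distinct operation labels
    # precedes, communicates: sets of ordered pairs of labels
    # legal: predicate on tuples of labels (the sequential specification)
    # conflicts: symmetric predicate identifying operations that do not commute
    for k in permutations(events):
        pos = {e: i for i, e in enumerate(k)}
        if not legal(k):
            continue
        # the precedence (real-time) order must be contained in the order of k
        if any(pos[a] >= pos[b] for (a, b) in precedes):
            continue
        # every conflicting pair ordered by k must be backed by a communication edge
        if all((k[i], k[j]) in communicates
               for i in range(len(k)) for j in range(i + 1, len(k))
               if conflicts(k[i], k[j])):
            return k  # a witnessing legal sequence
    return None

Applied to the structure of Example 4 restricted to S, the precedence edge from (S.Pop, 42) to (S.Push(42), ⊥) rules out every legal ordering, so no witness exists, matching the discussion above.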
Condition ⊆ k ensures that the real-time (partial) order of operations is consistent with the chosen k, while condition ⊇ ≺k captures the idea that the causal ordering in k (i.e., the ordering between conflicting actions) requires a communication in the concurrent execution. Causal linearizability for single objects is a special case of Definition 8, where the family is a singleton set. To establish compositionality, we must first define an object family’s causal ordering. Note that because an object family’s legal set is just an interleaving of the underlying object’s legal sets, operations from distinct objects can always be reordered, and therefore they never conflict. Thus, we have the following lemma. Lemma S 9. Suppose SX = (ΣX , legal X ) is an object family. For any k ∈ legal X , we have ≺k = x∈X ≺k ⇂ x . For an object family SX = (ΣX , legal X ) and x ∈ X, we let Ex be the SX -execution structure E restricted to Σx . Theorem 10 (Compositionality). Suppose SX = (ΣX , legal X ) is an object family over X, and let E = (E, , ) be a complete SX -execution structure. Then, Ex is causally linearizable w.r.t. Sx for all x ∈ X iff E is causally linearizable w.r.t. SX . Proof. The implication from right to left is straightforward. For the other direction, for each x ∈ X, let kx be the legal sequential execution witnessing causal linearizability of Ex . Let be the irreflexive transitive relation defined by [ =( ∪ ≺kx )+ x∈X 7 We show that is acyclic, and is therefore a strict partial order. Because is a partial order, there is some total order ⊇ , where m ∈ legal . This total order defines a sequence of m X labels of SX . We prove that this sequence witnesses the causal linearizability of E. By definition, ⊆ m for all x, and so by Lemma 6, we have m ⇂ x ∈ legal x (where m ⇂ x we have ≺kx ⊆ is the restriction of m to the events of object x). Furthermore, 1. ⊆ 2. ⊇ ≺kx follows from causal linearizability of Ex , and hence m follows from ⊆ ⊆ m, ⊇ S x∈X ≺kx , as required. Thus, E is causally linearizable, as required. We show that is acyclic by contradiction. Suppose contains a cycle. Pick P = e1 e2 ... en to be the minimal cycle. Since is acyclic, and each ≺x is acyclic, the cycle P must contain accesses to least two different objects. Without loss of generality, assume e1 and e2 access different objects, i.e., e1 ∈ Ey , e2 ∈ Ez for some y 6= z. Since each ≺x only orders e2 . Observe that P must be of length greater than two, i.e., elements of Ex , we must have e1 it cannot be of the form e1 e2 e1 since we would then have e1 e2 e1 , which contradicts the assumption that is a partial order. Hence P must contain a third (distinct) element e3 . Note that e2 6 e3 , because otherwise we could shorten the cycle P , using the transitivity of . Thus e2 , e3 are from the same object z and e2 ≺kz e3 . By the causal linearizability of Ez , we must have e2 e3 . Let e be the element of P following e4 (so possibly e = e1 ). Note that e3 6≺kz e, because otherwise we could shorten e, so we have the cycle P , using the transitivity of ≺kz . Thus, e3 e1 e2 e3 By the execution structure axiom A4, we have e1 ··· e1 contradicting minimality of P . 4.3 e e, and hence there exists a cycle e1 e Relationship with classical linearizability In this section, we show that classical linearizablity, which is defined for totally ordered histories of invocations (events of type Inv ) and responses (events of type Res), degenerates to causal linearizability. 
As in the previous section, for simplicity, we assume the histories under consideration are complete; extensions to cope with incomplete histories are straightforward. First, we describe a method, inspired by the execution structure constructions given by Lamport [23], for constructing execution structures for any well-formed partially ordered history. We let Hist = (Inv ∪ Res) × (Inv ∪ Res) denote the type of all histories. A history is well-formed if it is a partial order and the history restricted to each process is a total order of invocations followed by their matching response. The set of all matching pairs of invocations and responses in a history h is given by mp(h). A history is sequential iff it is totally ordered and each invocation is immediately followed by its matching response. Note that a history could be totally ordered, but not sequential (as is the case for the concurrent histories considered under SC [19, 18]). Definition 11. Let h ⊆ Hist be a well-formed (partially ordered) history. We say exec(h) = (E, , ) is the execution structure corresponding to h if E = mp(h) = {((i1 , r1 ), (i2 , r2 )) ∈ E × E | (r1 , i2 ) ∈ h} = {((i1 , r1 ), (i2 , r2 )) ∈ E × E | (i1 , r2 ) ∈ h} 8 We now work towards the standard definition of linearizability. Recall that a sequential object (see Definition 5) is defined in terms of sequences of labels of type Σ∗ , where Σ = Inv × Res, whereas sequential histories are of type Hist . Thus, we define a function γ : Hist → Σ∗ such that for each pair (i, r), (i′ , r′ ) ∈ mp(hs) of sequential history hs we have (r, i′ ) ∈ hs iff (i, r) γ(hs) (i′ , r′ ). Thus, the order of operations in hs and γ(hs) are identical. A complete history h is linearizable w.r.t. a (family of) sequential object(s) S = (Σ, legal ) iff there exists a sequential history hs such that γ(hs) ∈ legal , for each process p, h ⇂ p = hs ⇂ p and h ⊆ hs [18]. Theorem 12. Suppose h is a totally ordered complete history and S a (family of ) sequential object(s). Then h is linearizable w.r.t S iff exec(h) is causally linearizable w.r.t. S. 5 Causal linearizability of C11 implementations We now introduce the C11 memory model, where we adapt the programming-language oriented presentation of C11 [21, 10], but we ignore various features of C11 not needed for our discussion, including non-atomic operations and fences. The C11 memory-model. Let L be a set of locations (ranged over by x, y), let V be a set of values (ranged over by u, v). Our model employs a set of memory events, which can be partitioned into read events, R, write events, W , and update events, U . Moreover, let Mod = W ∪ U be the set of events that modify a location, and Qry = R ∪ U be the set of events that query a location. For any memory event e, let loc(e) be the event’s location, and let ann(e) be the event’s annotation. Let Loc(x) = {e | loc(e) = x}. For any query event let rval(e) be the value read; and for any modification event let wval(e) be the value written. An event may carry a synchronisation annotation, which may either be a release, R, or an acquire, A, annotation. A C11 execution (not to be confused with an execution structure) is a tuple D = (D, sb, rf , mo) where D is a set of events, and sb, rf , mo ⊆ D × D define the sequence-before, reads-from and modification order relations, respectively. 
We say a C11 execution is valid when it satisfies: (V1) sb is a strict order, such that, for each process p, the projection of sb onto p is a total order; (V2) for all (w, r) ∈ rf , loc(w) = loc(r) and wval(w) = rval(r); (V3) for all r ∈ D ∩ Qry, there exists some w ∈ D ∩ Mod such that (w, r) ∈ rf ; (V4) for all (w, w′ ) ∈ mo, loc(w) = loc(w′ ); and (V5) for all w, w′ ∈ W such that loc(w) = loc(w′ ), (w, w′ ) ∈ mo or (w′ , w) ∈ mo. Other relations can be derived from these basic relations. For example, assuming DR and DA denote the sets of events with release and acquire annotations, respectively, the synchronises-with relation, sw = rf ∩ (DR × DA ), creates interthread ordering guarantees based on synchronisation annotations. The from-read relation, fr = (rf −1 ; mo) \ Id, relates each query to the events in modification order after the modification that it read from. Our final derived relation is the happens before relation hb = (sb ∪ sw )+ , which formalises causality. We say that a C11 execution is consistent if (C1) hb is acyclic, and (C2) hb; (mo ∪ rf ∪ fr ) is irreflexive. 9 Method invocations and responses. So far, the events apearing in our memory model are standard. Our goal is to model algorithms such as the Treiber stack. Thus, we add method events to the standard model, namely, invocations, Inv , and responses, Res. Unlike weak memory at the processor architecture level, where program order may not be preserved [13], program order in C11 is consistent with happens-before order, and hence, these can be introduced here in a straightforward manner. The only additional requirement is that validity also requires (V6) sb for each process projected restricted Inv ∪ Res must be alternating sequence of invocations and matching responses, starting with an invocation. Dynamic memory. To describe the behaviour of algorithms, such as the Treiber Stack, we must define reads and writes to higher-level structures. To this end, we develop a simple theory of references to objects, the fields of those objects and memory allocations for the object. We let F be the set of all fields and A be the set of all memory allocation events, which is an event of the form A(l) for a location l. We let . : L × F → L be the function that returns a location for a given location, field pair. We use infix notation x.f for .(x, f ), where x ∈ L and f ∈ F . We then introduce three additional constraints: (A1) for every a, a′ ∈ E ∩ A, if loc(a) = loc(a′ ) then a = a′ ; and (A2) if l.f = l′ .f ′ then l = l′ and f = f ′ . (A3) for all locations l and fields f there are no allocations of the form A(l.f ). From C11 executions to execution structures. A C11 execution with method invocations and responses naturally gives rise to an execution structure. First, for a C11 execution F, let the history of D, denoted hist(D) be the happens-before relation for D restricted to the invocation and response events. By (V6), hist(D) is a well-formed history. Thus, we can apply the construction defined in Section 4.3 to build an execution structure exec(hist(D)). Definition 13. We say that a C11 execution D is causally linearizable w.r.t a sequential object if exec(hist(D)) is. We can now state a compositionality result for a C11 execution D of an object family X. The property follows from Theorem 10 and the fact that for any object x ∈ X, exec(hist(Dx )) = exec(hist(D))x , where Dx is D restricted to events of object x. Note that Dx contains all events of x, i.e., all invocations, responses and low-level memory operations of x. 
Corollary 14 (Compositionality for C11 executions). Suppose that SX = (ΣX , legal X ) is an object family over X, and let D be an execution. Then, Dx is causally atomic w.r.t. Sx for all x ∈ X iff D is causally atomic w.r.t. SX . Finally, note that because the sb relation is included in hb, hist(D) includes program order on the invocations and responses of D. 6 Verification We now describe an operational method for proving that a given C11 execution is causally linearizable w.r.t a given sequential object. Accordingly, we give a state-based, operational model of a sequential object that generates legal sequences of labels (Definition 5), then present a simulation-based proof rule for causal linearizability (Section 6.1). Then, we illustrate our technique on the Treiber Stack (Section 6.2). 6.1 A simulation relation over happens-before An operational sequential object is a tuple (Γ, init , τ ) where: Γ is a set of states; init ∈ Γ is the initial state and τ : Γ × H × Inv → 7 Γ × H, where H = (Inv × Res)∗ , is a partial update function 10 that applies an invocation to a state and a history, returning the resulting state and updated history. We require that for s ∈ Γ, h ∈ H and i ∈ Inv , there exists some r ∈ Res, such that τ (s, h, i) = ( , h · h(i, r)i), where we use · for sequence concatenation. This response r is the object’s response to the invocation i. Example 15 (Operational sequential stack). A stack containing natural numbers can be represented as an operational sequential object in the following way. Let Γ = N∗ , init = h i and define the update function as follows τ (s, h, Push(n)) = (n · s, h · h(Push(n), ⊥)i) τ (n · s, h, Pop) = (s, h · h(Pop, n)i) for n ∈ N and s ∈ Γ. Note that assuming i is a stack invocation (as per Section 3), τ (s, i) is defined iff s 6= h i or i 6= Pop. Given an operational sequential object S = (Γ, init , τ ), it is easy to construct a corresponding sequential object (in the sense of Definition 5). Let ΣS = Inv × Res and let legal S be the set of histories returned by τ . Thus (ΣS , legal S ) is a sequential object, and our method verifies causal linearizability w.r.t that object. For the remainder of this section, fix a C11 execution D = (D, sb, rf , mo), and an operational sequential object S = (Γ, init , τ ). We describe a method for proving that D is causally linearizable w.r.t S. Our proof method is an induction on the length of some linear extension of D’s hb order. The proof proceeds by remembering the set Z ⊆ D of events that have already been considered by the induction, i.e., Z defines the current stage of the induction. The set Z is assumed to be downclosed with respect to hb, i.e., if z ∈ Z and (z ′ , z) ∈ hb, then z ′ ∈ Z. At each stage of the induction, we add an arbitrary e ∈ / Z to Z, where e’s hb predecessors are already in Z (i.e., the set Z ∪ {e} is also downclosed w.r.t. hb). Correctness of each inductive step is formalised by a simulation relation, ρ, relating the events in the current state, Z, to a state of the operational sequential object. Each inductive step of the implementation must match a “move” of the sequential object, i.e., be a stutter step, or a state update as given by the update function of the sequential object. Moreover, assuming that ρ holds for Z (before each inductive step), ρ must hold after the step (i.e., for Z ∪ {e}). 
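As a concrete rendering of Example 15, the following Python sketch implements the update function τ of the operational sequential stack; the "move" that a linearization step of the induction must match is exactly one application of this function. The encoding of ⊥ as the string "bot", of invocations as tuples or strings, and the name stack_tau are illustrative choices, not part of the definition.

def stack_tau(state, history, inv):
    # state: tuple of naturals, most recently pushed value first; the initial state is ()
    # history: tuple of (invocation, response) pairs accumulated so far
    if isinstance(inv, tuple) and inv[0] == "Push":
        n = inv[1]
        return (n,) + state, history + ((inv, "bot"),)
    if inv == "Pop" and state:
        n, rest = state[0], state[1:]
        return rest, history + ((inv, n),)
    # tau is partial: Pop on the empty stack is undefined (the operation blocks)
    raise ValueError("tau undefined for this state and invocation")

# Pushing 1 and then popping returns 1:
s, h = (), ()
s, h = stack_tau(s, h, ("Push", 1))
s, h = stack_tau(s, h, "Pop")
assert s == () and h[-1] == ("Pop", 1)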
Following the existing verification literature [12], we refer to events corresponding to nonstuttering steps as linearization points: the points where the high-level operation appears to take effect. The verifier must define a function lp : D ∩ Inv → D to determine the memory event that linearizes the given invocation, and this function must satisfy certain constraints with respect to the simulation relation ρ, as described in Definition 16, below. For each low-level operation, we must also determine the invocation and response to which it belongs. Thus we also define a function µ : D → D ∩ Inv that maps each event in D to the invocation responsible for producing e, and a function and ν : D → D ∩ Res that that maps e to the response produced by e’s invocation. More formally, µ(e) is the latest invocation in sb-order prior to e, and ν(e) is the earliest response in sb-order after e. Thus, we obtain the following definition. Definition 16 (hb-simulation). Suppose D = (D, sb, rf , mo) is an execution and S = (Γ, init , τ ) an operational sequential object. An hb-simulation is a relation ρ ⊆ 2D × (Γ × H) such that: 1. ρ(∅, (init , h i)), and (initialisation) 2. for all Z ⊆ D, and events e ∈ D\Z such that Z ∪{e} is down-closed w.r.t D’s happens-before order, if ρ(Z, (s, h)) then (a) if e 6= lp(µ(e)) then ρ(Z ∪ {e}, (s, h)), 11 (stutter step) (b) if e = lp(µ(e)) provided i = µ(e), h′ = h · h(i, r)i, and τ (s, h, i) = (s′ , h′ ), then (linearization step) i. ρ(Z ∪ {e}, (s′ , h′ )), and ii. ν(e) = r, and iii. for all operations (i′ , r′ ) in h, if (i′ , r′ ) ≺h′ (i, r) then (lp(i′ ), e) ∈ hb. The initialisation is straightforward, while the two inductive steps consider a new e for inclusion in Z following hb order. If e is a stutter step, we only have to prove that ρ is preserved by adding e to Z. If e is a linearization step (that is, if e = lp(µ(e))), then there are three obligations: prove that ρ is preserved (2(b)i); prove that the response of the high-level operation matches that returned by the sequential object (2(b)ii); and prove that whenever some operation that has already been linearized is causally prior to the newly linearized operation, then that operation’s linearization point is hb-prior to the new event e (2(b)iii). Theorem 17 (Soundness of hb-simulation). If ρ is an hb-simulation for a C11 execution D, then D is causally linearizable. Proof. The proof below uses a formulation of an operational sequential object where τ that does not maintain a history. Fix the operational sequential object S = (Γ, init , τ ). Fix the execution D, and let ≤E be any linear extension of D’s hb relation. Assume that lp is the linearization function and ρ is the simulation relation. We perform an induction on the indexes of ≤E . Let en be the nth event in ≤E order, so we are indexing from 0. Let Zn be the set of events strictly below the nth index. Thus, Zn = {em | m < n} Note that Z0 = ∅ and Zn+1 = Zn ∪ {en }. We define a function rep : {n | n < | ≤E |} → Γ recursively as follows: rep(0) = init (1) rep(n + 1) = rep(n) rep(n + 1) = π1 (τ (rep(n), µ(en )) when lp(µ(en )) 6= en when lp(µ(en )) = en (2) (3) By induction, we have ρ(Zn , rep(n)) for all n < | ≤E |. • Because ρ(Z0 , rep(0)) = ρ(∅, init ), Proposition 1 ensures that ρ(Z0 , rep(0)). • Assume ρ(Zn , rep(n)) and lp(µ(en )) 6= en . Then ρ(Zn+1 , rep(n+1)) = ρ(Zn ∪{en }, rep(n)), and thus Property 2a ensures that ρ(Zn+1 , rep(n + 1)). • Assume ρ(Zn , rep(n)) and lp(µ(en )) = en . 
Then ρ(Zn+1 , rep(n + 1)) = ρ(Zn ∪ {en }, π1 (τ (rep(n), µ(en ))), and thus Property 2(b)i ensures that ρ(Zn+1 , rep(n + 1)). We turn now to defining k, the legal sequence we need to witness causal linearizability of D. k0 = hi kn+1 = kn rep(n + 1) = kn · h(µ(e), ν(e))i 12 when lp(µ(en )) 6= en (4) (5) when lp(µ(en )) = en (6) It is easy to see that this is a legal history, and that (kn , rep(n)) is a move. We need to show that ≤H ⊆ k . Consider a response r and invocation i such that (r, i) ∈ hb. Ley i′ be the invocation of r, and let r′ be the response of i.. Because µ(lp(i)) = i, we have {(lp(i′ ), r), (i, lp(i))} ∈ sb ⊆ hb, and thus (lp(i′ ), lp(i)) ∈ hb and so lp(i′ ) appears at an earlier point in ≤E than lp(i), and therefore (i′ , r) k (i, r′ ), as required. Finally, we must show that ≺k ⊆ D . This is a simple induction on the length of k, with the hypothesis that, for all operations (i, r), (i′ , r′ ) in k, if (i, r) ≺k (i′ , r′ ) then (lp(i), lp(i′ ) ∈ hb. At each step we apply Property 2(b)iii. Thus, for each existing operation (i, r) and new operation (i′ , r′ ), we have (i, r) ≺k (i′ , r′ ) =⇒ (lp(i), lp(i′) ∈ hb immediately. On the other hand, (i′ , r′ ) ≺k (i, r) is impossible, because (i′ , r′ ) k (i, r) is false. This completes oour proof. 6.2 Case-study: the Treiber Stack We now outline an hb-simulation relation ρ for the Treiber stack. We fix some arbitrary C11 execution D = (D, sb, rf , mo) that contains an instance of the Treiber stack. That is, the invocations in D are the stack invocations, and the responses are the stack responses (as given in Section 3). Furthermore, the low-level memory operations between these invocations and responses are generated by executions of the operations of the Treiber stack (Algorithm 1). The main component of our simulation relation guarantees correctness of the data representation, i.e., the sequence of values formed by following next pointers starting with &T op forms an appropriate stack, and we focus on this aspect of the relation. As is typical with verifications of shared-memory algorithms, there are various other properties that would need to be considered in a full proof. In a sequentially consistent setting, the data representation can easily be obtained from the state (which maps locations to values). However, for C11 executions calculating the data representation requires a bit more work. In what follows, we define various functions that depend on a set Z of events, representing the current stage of the induction. We define the latest write in Z to a location x as latestZ (x) = max(mo ⇂(Z ∩ Loc(x))) and the current value of a location x in some set Z as cvalZ (x) = wval(latestZ (x)), which is the value written by the last write to x in modification order. It is now straightforward to construct the sequence of values corresponding to a location as stackOf Z (x) = v · stackOf Z (y), where v = cvalZ (x.val) and y = cvalZ (x.nxt). Now, assuming that (s, h) is a state of the operational sequential stack, our simulation requires: stackOf Z (cvalZ (&T op)) = s (7) Further, we require that all modifications of &T op are totally ordered by hb: ∀m, m′ ∈ Z ∩ M od(&T op). (m, m′ ) ∈ hb ∨ (m′ , m) ∈ hb (8) to ensure that any new read considered by the induction sees the most recent version of &T op. The linearization function lp for the Treiber stack is completely standard: each operation is linearized at the unique update operation generated by the unique successful CAS at line 9 (for pushes) or line 16 (for pops). 
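The representation functions used by the simulation relation can be phrased directly over a finite set Z of memory events. The sketch below is illustrative only: the event record, the encoding of a location x.f as the pair (x, "f"), and the names latest, cval and stack_of are our choices rather than the paper's.

from collections import namedtuple

Event = namedtuple("Event", "kind loc wval")  # kind in {"W", "U", "R"}; wval is meaningful for W and U

def latest(Z, mo, x):
    # latest_Z(x): the mo-maximal modification of location x within Z
    mods = [e for e in Z if e.loc == x and e.kind in ("W", "U")]
    return max(mods, key=lambda w: sum(1 for v in mods if (v, w) in mo))

def cval(Z, mo, x):
    # cval_Z(x): the value written by latest_Z(x)
    return latest(Z, mo, x).wval

def stack_of(Z, mo, node):
    # stackOf_Z(node): values reached by following nxt fields, stopping at null (None)
    if node is None:
        return ()
    v = cval(Z, mo, (node, "val"))
    nxt = cval(Z, mo, (node, "nxt"))
    return (v,) + stack_of(Z, mo, nxt)

# Requirement (7) then reads: stack_of(Z, mo, cval(Z, mo, "Top")) == s, for the abstract stack s.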
In what follows, we illustrate how to verify the proof obligations given in Definition 16, for the case where the new event e is a linearization point. Let e be an update operation that is generated by the CAS at line 9 of the push operation in Algorithm 1. The first step is to prove that every modification of &T op in Z is happens-before the update event e. Formally, ∀m ∈ Z ∩ Mod ∩ Loc(&T op). (m, e) ∈ hb 13 (9) (S.Push(1), ⊥) (S’.Pop, empty) (S.Push(1), ⊥) (S’.Push(2), ⊥) (S.Pop, empty) (S’.Push, ⊥) Figure 3: Read-only operations without communication (not compositional) (S’.Pop, empty) (S.Pop, empty) Figure 4: Read-only operations with communication (compositional) Proving this formally is somewhat involved, but the essential reason is as follows. Note that there is an acquiring read r to &T op executed at line 7 of e’s operation and sb-prior to e. r reads from some releasing update u. Thus, by Property 8, and the fact the hb contains sb, e is happens after u, and all prior updates. If there were some update u′ of &T op such that (u′ , e) ∈ / hb, then (u′ , u) ∈ / hb so by Property 8, (u, u′ ) ∈ hb. But it can be shown in this case that the CAS that generated e could not have succeeded, because u′ constitutes an update intervening between r and e. Therefore, there can be no such u′ . Property 9 makes it straightforward to verify that Condition 2(b)iii of Definition 16 is satisfed. To see this, note that every linearization point of every operation is a modification of &T op. Thus, if (i′ , r′ ) is some operation such that lp(i′ ) ∈ Z (so that this operation has already been linearized) then (lp(i′ ), e) ∈ hb. Using Property 9 it is easy to see that both Property 7 and Property 8 are preserved. We show by contradiction that latestZ ′ (&T op) = e. Otherwise, we have (e, latestZ ′ (&T op)) ∈ mo. / hb, but latestZ ′ (&T op) is a modification operation, so this Therefore (latestZ ′ (&T op), e) ∈ contradicts Property 9. It follows from latestZ ′ (&T op) = e that stackOf (cvalZ ′ ) = stackOf (wval(e)). Given this, it is straightforward to show that Property 7 is preserved. This step of the proof relies on certain simple properties of push operations. Specifically, we need to show that the current value of the val field of the node being added to the stack (formally, cvalZ ((wval(e)).nxt)) is the value passed to the push operation; and that the current value of the nxt field (formally, cvalZ ((wval(e)).nxt)) is the current value of &T op when the successful CAS occurs. These properties can be proved using the model of dynamic memory given in Section 5. 7 A synchronisation pitfall We now describe an important observation regarding failure of compositionality of read-only operations caused by weak memory effects. The issue can be explained using our abstract notion of an execution structure, however, a solution to the problem is not naturally available in C11 with only release-acquire annotations. Consider the Treiber Stack in Algorithm 1 that returns empty instead of spinning; namely where the inner loop (lines 12-14) is replaced by code block “top :=A Top ; if top = null then return empty”. Such an implementation could produce executions such as the one in Fig. 3 which, like the examples in Section 3, is not compositional. Recovering compositionality requires one to introduce additional communication edges as shown in Fig. 4. In the C11 memory model, these correspond to “from-read” anti-dependencies from a read to a write overwriting the value read. 
However, release-acquire synchronisation is not adequate for promoting from-read order in the memory to happens-before. One fix would be to disallow read-only operations, e.g., by introducing a release-acquire CAS operation on a special variable that always succeeds at the start of each operation. However, such a fix is somewhat unnatural. Another would be to use C11’s SC annotations, which can induce 14 synchronisation across from-read edges. However, the precise meaning of these annotations is still a topic of active research [22, 6]. 8 Conclusion and related work We have presented causal linearizability, a new correctness condition for objects implemented in weak-memory models, that generalises linearizability and addresses the important problem of compositionality. Our condition is not tied to a particular memory model, but can be readily applied to memory models, such as C11, that feature a happens-before relation. We have presented a proof method for verifying causal linearizability. We emphasise that our proof method can be applied directly to a standard axiomatic memory model. Unlike other recent proposals [11, 20], we model C11’s relaxed accesses without needing to prohibit their problematic dependency cycles (so called “load-buffering” cycles). Although causal linearizability has been presented as a condition for concurrent objects, we believe it is straightforward to extend this condition to cover, for example, transactional memory. We intend to develop our approach into a framework in which the behaviour of programs that mix transactional memory, concurrent objects and primitive weak-memory operations can be precisely described in a compositional fashion. Causal linearizability is closely related to causal hb-linearizability defined in [13], which is a causal relaxation of linearizability that uses specifications strengthened with a happens-before relation. The compositionality result there requires that either a specification is commuting or that a client is unobstructive (does not introduce too much synchronisation). Our result is more general as we place no such restriction on the object or the client. Others [9] define a correctness condition, also called causal linearizabilty, that is only compositional when the client satisfies certain constraints; in contrast, we achieve full decoupling. Furthermore, that condition is only defined when the underlying memory model is given operationally, rather than axiomatically like C11. Early attempts, targetting TSO architectures, used totally ordered histories but allowed the response of an operation to be moved to a corresponding “flush” event [16, 7, 27, 14]. Others have considered the effects of linearizability in the context of a client abstraction. This includes a correctness condition for C11 that is strictly stronger than linearizability under SC [5]. Although we have applied causal linearizability to C11, causal linearizability itself is more general as it can be applied to any weak memory model with a happens-before relation. Causal consistency [2] is a related condition, aimed at shared-memory and data-stores, which has no notion of real-time order and is not compositional. References [1] Sarita V. Adve and Kourosh Gharachorloo. Shared memory consistency models: A tutorial. IEEE Computer, 29(12):66–76, 1996. [2] Mustaque Ahamad, Gil Neiger, James E. Burns, Prince Kohli, and Phillip W. Hutto. Causal memory: definitions, implementation, and programming. Distributed Computing, 9(1):37– 49, Mar 1995. [3] J. Alglave, L. 
Maranget, and M. Tautschnig. Herding cats: Modelling, simulation, testing, and data mining for weak memory. ACM Trans. Program. Lang. Syst., 36(2):7:1–7:74, 2014. [4] M. Batty, S. Owens, S. Sarkar, P. Sewell, and T. Weber. Mathematizing C++ concurrency. In Thomas Ball and Mooly Sagiv, editors, POPL, pages 55–66. ACM, 2011. 15 [5] Mark Batty, Mike Dodds, and Alexey Gotsman. Library abstraction for C/C++ concurrency. In Roberto Giacobazzi and Radhia Cousot, editors, POPL, pages 235–248. ACM, 2013. [6] Mark Batty, Alastair F. Donaldson, and John Wickerson. Overhauling SC atomics in C11 and opencl. In POPL, pages 634–648. ACM, 2016. [7] Sebastian Burckhardt, Alexey Gotsman, Madanlal Musuvathi, and Hongseok Yang. Concurrent library correctness on the TSO memory model. In Helmut Seidl, editor, ESOP, volume 7211 of Lecture Notes in Computer Science, pages 87–107. Springer, 2012. [8] S. Doherty, L. Groves, V. Luchangco, and M. Moir. Formal verification of a practical lockfree queue algorithm. In FORTE, volume 3235 of Lecture Notes in Computer Science, pages 97–114. Springer, 2004. [9] Simon Doherty and John Derrick. Linearizability and causality. In SEFM, volume 9763 of Lecture Notes in Computer Science, pages 45–60. Springer, 2016. [10] Marko Doko and Viktor Vafeiadis. A program logic for C11 memory fences. In VMCAI, volume 9583 of Lecture Notes in Computer Science, pages 413–430. Springer, 2016. [11] Marko Doko and Viktor Vafeiadis. Tackling real-life relaxed concurrency with FSL++. In ESOP, pages 448–475, 2017. [12] B. Dongol and J. Derrick. Verifying linearisability: A comparative survey. ACM Comput. Surv., 48(2):19:1–19:43, 2015. [13] B. Dongol, R. Jagadeesan, J. Riely, and A. Armstrong. On abstraction and compositionality for weak-memory linearisability. In VMCAI, volume 10747 of Lecture Notes in Computer Science, pages 183–204. Springer, 2018. [14] Brijesh Dongol, John Derrick, and Graeme Smith. Reasoning algebraically about refinement on TSO architectures. In ICTAC, volume 8687 of Lecture Notes in Computer Science, pages 151–168. Springer, 2014. [15] Ivana Filipovic, Peter W. O’Hearn, Noam Rinetzky, and Hongseok Yang. Abstraction for concurrent objects. Theor. Comput. Sci., 411(51-52):4379–4398, 2010. [16] Alexey Gotsman, Madanlal Musuvathi, and Hongseok Yang. Show no weakness: Sequentially consistent specifications of TSO libraries. In Marcos K. Aguilera, editor, DISC, volume 7611 of Lecture Notes in Computer Science, pages 31–45. Springer, 2012. [17] Rachid Guerraoui and Michal Kapalka. On the correctness of transactional memory. In Siddhartha Chatterjee and Michael L. Scott, editors, PPoPP, pages 175–184. ACM, 2008. [18] M. Herlihy and J. M. Wing. Linearizability: A correctness condition for concurrent objects. ACM TOPLAS, 12(3):463–492, 1990. [19] Maurice Herlihy and Nir Shavit. The art of multiprocessor programming. Morgan Kaufmann, 2008. [20] Jan-Oliver Kaiser, Hoang-Hai Dang, Derek Dreyer, Ori Lahav, and Viktor Vafeiadis. Strong logic for weak memory: Reasoning about release-acquire consistency in iris. In ECOOP, pages 17:1–17:29, 2017. 16 [21] Ori Lahav, Nick Giannarakis, and Viktor Vafeiadis. Taming release-acquire consistency. In Rastislav Bodı́k and Rupak Majumdar, editors, POPL, pages 649–662. ACM, 2016. [22] Ori Lahav, Viktor Vafeiadis, Jeehoon Kang, Chung-Kil Hur, and Derek Dreyer. Repairing sequential consistency in C/C++11. In PLDI, pages 618–632. ACM, 2017. [23] L. Lamport. On interprocess communication. part I: basic formalism. Distributed Computing, 1(2):77–85, 1986. 
[24] Leslie Lamport. How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Trans. Computers, 28(9):690–691, 1979. [25] J. Manson, W.Pugh, and S. V. Adve. The Java memory model. In POPL, pages 378–391. ACM, 2005. [26] Gerhard Schellhorn, John Derrick, and Heike Wehrheim. A sound and complete proof technique for linearizability of concurrent data structures. ACM Trans. Comput. Log., 15(4):31:1–31:37, 2014. [27] Oleg Travkin and Heike Wehrheim. Handling TSO in mechanized linearizability proofs. In Haifa Verification Conference, volume 8855 of Lecture Notes in Computer Science, pages 132–147. Springer, 2014. [28] R. K. Treiber. Systems programming: Coping with parallelism. Technical Report RJ 5118, IBM Almaden Res. Ctr., 1986. 17 A Potentially incomplete executions A complication with concurrent executions is that they may contain incomplete operations (that have been invoked, but have not yet returned). Since the effect of an incomplete operation may be globally visible, they cannot simply be ignored. This phenomenon has been well studied and arises in the definitions of linearizability [18] and opacity [17]. This section describes how we cope with incomplete operations in the context of causal linearizability. We define the completable extension of a sequential object S = (Σ, legal ) to be a triple T = (S, I, C), where I is a set of allowable incomplete actions and C : I → 2Σ is a completion function that maps each I to a set of possible completions for I. Example 18. If S = (Σ, legal ) is a concurrent object the set of allowable incomplete operations and completion function is defined by: I = {i ∈ Inv | ∃k1 , k2 ∈ Σ∗ . ∃r. k1 (i, r)k2 ∈ legal} C = λi. {(i, r) ∈ Σ} For the Treiber stack in Algorithm 1, we have IS = {Push(n) | n ∈ N}∪{Pop}, and CS (Push(n)) = {⊥} and CS (Pop) = N ∪ {empty}. S A completable extension of an object family SX is a triple TX = (SX , IX , CX ), where IX = x Ix and CX (a) = Cx (a), with x being the unique element of X such that a ∈ Σx . We say that E = (E, , ) is a T-execution structure iff E ⊆ Σ∪I such that dom( )∩I = ∅, i.e., no element of E may depend (in real-time order) on an element in I. Note that there may be edges both in and out of elements in E ∩ I and edges into E ∩ I. A T-execution structure is causally atomic if we can replace all incomplete events by complete events in a way that is allowed by the corresponding sequential object. This process is analagous to the extension of incomplete histories to complete histories, as allowed by linearizability and opacity in the classical (i.e., sequentially consistent) setting. Definition 19 (Causal linearizability). Let S = (Σ, legal ) be a family of sequential objects and T its completable extension. A T-execution structure E = (E, , ) is causally atomic w.r.t. S iff there exists a causally atomic S-execution structure E′ = (E ′ , ′ , ′ ) and an (order-preserving) isomorphism ϕ : E → E′ such that: • for each event e = (g, p, i) ∈ E, where i ∈ I, we have ϕ(e) = (g, p, a) for some a ∈ C(i), and • for each event e = (g, p, a) ∈ E, where a ∈ Σ, we have ϕ(e) = (g, p, a). Theorem 10 extends directly to the case of incomplete histories. The fact that each individual object history is causally atomic implies that we can assume the existence of a valid extension for each incomplete event, which is itself causally atomic. 
Thus, we can apply these per-object extensions to the object-family execution, and show, using the proof of Theorem 10, that the resulting complete execution structure is causally atomic. 18
Topological and Algebraic Properties of Chernoff Information between Gaussian Graphs arXiv:1712.09741v1 [] 28 Dec 2017 Binglin Li, Shuangqing Wei, Yue Wang, Jian Yuan Abstract—In this paper, we want to find out the determining factors of Chernoff information in distinguishing a set of Gaussian graphs. We find that Chernoff information of two Gaussian graphs can be determined by the generalized eigenvalues of their covariance matrices. We find that the unit generalized eigenvalue doesn’t affect Chernoff information and its corresponding dimension doesn’t provide information for classification purpose. In addition, we can provide a partial ordering using Chernoff information between a series of Gaussian trees connected by independent grafting operations. With the relationship between generalized eigenvalues and Chernoff information, we can do optimal linear dimension reduction with least loss of information for classification. Key words: Gaussian graphs, generalized eigenvalue, Chernoff information, dimension reduction I. I NTRODUCTION Gaussian graphical model is widely used in constructing the conditional independence of continuous random variables. It is used in many applications such as social network [1], economics [2], biology [3] and so on. Our work is about statistical inference problems related to Gaussian graphical model where we want to learn the error exponents for classification. There are two different error events in this learning problem. If we want to classify the learnt distribution from the true one, we often use Kullback-Leibler(KL) distance as the error exponent [4], [5]. Otherwise, we use Chernoff information when we measure the average error exponent for the error probability in discerning M distributions based on a sequence of data drawn independently from one of M distributions. The minimum pair-wise Chernoff information among these distributions is the error exponent characterizing the performance of an M ary hypothesis testing problem. In algebraic analysis of hypothesis testing problem, we also use generalized eigenvalues of covariance matrices as a metric of the difference between them [6], [7]. Clearly, Chernoff information and generalized eigenvalues of covariance matrices are respectively probabilistic and algebraic ways to describe the difference between two Gaussian graphs. So there must be relation among topology, statistical distributions (Chernoff information), and algebra (generalized eigenvalues). This paper will study this relationship and show how Chernoff B. Li, Y. Wang and J. Yuan are with Department of Electronic Engineering, Tsinghua University, Beijing, P. R. China, 100084. (E-mail: [email protected]; wangyue, [email protected]). S. Wei is with the school of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA (Email: [email protected]). This material is based upon work supported in part by the National Science Foundation (USA) under Grant No. 1320351, and the National Natural Science Foundation of China under Grant 61673237. information can be determined by generalized eigenvalues. What’s more, we will show how topology differences affect generalized eigenvalues and thus Chernoff information. More specifically, we find that two Gaussian graphs can be linearly and inversely transformed to two graphs whose covariance matrices are diagonal. And entries of the diagonal matrices are related to generalized eigenvalues. 
Thus we find that Chernoff information between two Gaussian graphs is an expression of generalized eigenvalues and a special parameter λ∗ which is also determined by generalized eigenvalues. What’s more, we find that the unit generalized eigenvalue doesn’t affect Chernoff information and the corresponding dimension is not making contributions to differentiating two Gaussian graphs for classification problem. Our former paper [8] dealt with the classification problem related to Gaussian trees. And we found that some special operations on Gaussian trees, namely, adding operation and division operation, don’t change Chernoff information between them. Now in this paper, we find that the two operations only add one extra unit generalized eigenvalue and don’t affect other generalized eigenvalues. So we can use generalized eigenvalues to prove the same probabilities. [8] also dealt with two Gaussian trees connected by one grafting operation and showed that Chernoff information between them is the same with that of two special 3-node trees whose weights are related to the underlying operation. In this paper, we extend this result to a Gaussian tree chain connected by independent grafting operations and provide a partial ordering of Chernoff information between these trees. In practice, the collection and storage of data have cost and we may need to reduce the dimension of data. An good choice here is doing linear dimension reduction in collection stage. We linearly transform an N dimensional Gaussian vector x to an NO < N dimensional vector y = Ax, through an NO × N matrix A. An immediate question here is the optimal selection of A which can maximize the error exponent in the classification of two hypothesis. In our former paper [8], we only dealt with a simple, but non-trivial case with NO = 1. In this work, we offer an optimal method to maximize the resulting Chernoff information after a linear transformation for an arbitrary NO ≥ 1. Our major and novel results can be summarized as follows. We first provide the relationship between Chernoff information and generalized eigenvalues. According to the former result, we show that generalized eigenvalues which are equal to 1 make no contribution to Chernoff information. And we use this to explain why adding and division operations of [8] don’t affect Chernoff information between Gaussian trees. These results build a relationship between topology, statistical distribution and algebra. What’s more, we deal with Gaussian trees connected by more than one grafting operation and show a partial ordering inside the chain. At last, we provide an optimal linear dimension reduction method. This paper is organized as below. In Section II, we propose the model of our analysis. The relationship between topology, Chernoff information and generalized eigenvalues is shown in Section III. The partial ordering of Gaussian tree chain is presented in Section IV. Section V shows the optimal dimension reduction method. And in Section VI we conclude the paper. II. S YSTEM M ODEL Consider a set of Gaussian graphs, namely, Gk (x), k = 1, 2, . . . , M , with their prior probabilities given by π1 , π2 , . . . , πM . For simplification, we normalize the variance of all Gaussian variables to be 1 and the mean values to be 0. They share the same entropy, and thus the (k) same determinant of their covariance matrices Σk = [σij ]. The same entropy means the same amount of randomness. This assumption can let us compare these Gaussian graphs fairly. 
We want to do an M -ary hypothesis testing to find out from which Gaussian distribution the data sequence T X = [x1 , . . . , xt ] (xl = [x1,l , . . . , xN,l ] ) comes from. We define the average error probability of the hypothesis testing to be Pe , and let Ee = limt→∞ − lnt Pe be the resulting error exponent, which depends on the smallest Chernoff information between the graphs [9] , namely, Ee = min 1≤i6=j≤M CI(Σi ||Σj ) (1) where CI(Σi ||Σj ) is the Chernoff information between the ith and j th graphs. For two 0-mean N-dim Gaussian joint distributions, x1 ∼ N (0, Σ1 ) and x2 ∼ N (0, Σ2 ), their Kullback-Leibler divergence is as follows D(Σ1 ||Σ2 ) = 1 |Σ2 | 1 N log + tr(Σ−1 2 Σ1 ) − 2 |Σ1 | 2 2 (2) P where tr(X) = i xii is the trace of square matrix X. We define a new distribution N (0, Σλ ) in the exponential family of the N (0, Σ1 ) and N (0, Σ2 ), namely −1 −1 Σ−1 λ = Σ1 λ + Σ2 (1 − λ) (3) So the Chernoff information is given below CI(Σ1 ||Σ2 ) = D(Σλ∗ ||Σ2 ) = D(Σλ∗ ||Σ1 ) (4) where λ∗ is the point at which the latter equation is satisfied [10]. It is unique in interval [0, 1]. We already know that the overall Chernoff information in an M -ary testing is bottle-necked by the minimum pair-wise difference [10], thus we next focus on the calculation of Chernoff information of pair-wise Gaussian graphs. Further research will show that pair-wise Chernoff information can be determined by the generalized eigenvalues of covariance matrices Σ1 , Σ2 and λ∗ . And λ∗ can also be determined by these generalized eigenvalues. In addition to the full observation case, we will also study a dimension reduction case. For two N variables Gaussian graphs G1 , G2 , in the full observation case, we can have access to all N variables and only need to calculate Chernoff information CI(Σ1 ||Σ2 ). But in the dimension reduction case, we can only observe an NO -dim vector each time, namely, y = Ax, where A is an NO × N matrix and x ∈ RN , y ∈ RNO . The new variables follow joint distributions N (0, Σ̃1 ) and N (0, Σ̃2 ) where Σ̃i = AΣi AT , i = 1, 2. For fixed NO , we want to find out the optimal A∗ and its Chernoff information result CI(Σ̃∗1 ||Σ̃∗2 ), s.t. A∗ = arg max CI(Σ̃1 ||Σ̃2 ) A (5) We only consider binary hypothesis testing in this dimension reduction problem. III. G ENERALIZED EIGENVALUES , C HERNOFF INFORMATION AND TOPOLOGY Chernoff information is the measurement of the difference between statistical distributions. It is hard to be calculated directly and we rarely study its insights about the relationship between Chernoff information and structure characters. Chernoff information and generalized eigenvalues are both important parameters to describe Gaussian graphical models. We will expose the relationship between them. A. Linear transformation to diagonal covariance matrix related to generalized eigenvalues For two N -node 0-mean Gaussian graphs G1 and G2 on random variables x, whose covariance matrices are Σ1 and Σ2 , we can use an inverse linear transformation matrix P to transform them to x0 = Px whose covariance matrices Σ01 and Σ02 are diagonal and related to the generalized eigenvalues of Σ1 and Σ2 . Σ1 and Σ2 are real symmetric positive definite matrices, so the eigenvalues of Σ1 Σ−1 are all positive, as shown in 2 Appendix A. The eigenvalue decomposition of Σ1 Σ−1 is 2 QΛQ−1 , where Q is an N × N matrix and Λ = Diag({λi }) is a diagonal matrix of eigenvalues, in which we put multiple eigenvalues adjacent. 
{λi } are the eigenvalues of Σ1 Σ−1 2 , namely, the generalized eigenvalues of Σ1 and Σ2 . Note that Q may be non-orthogonal when Σ1 Σ−1 2 isn’t symmetric. Proposition 1: For two N -node 0-mean Gaussian graphs G1 and G2 whose covariance matrices are Σ1 and Σ2 respectively, we can construct a linear transformation matrix   1 T − 2 −1 Q and thus P = Q−1 Σ2 (Q−1 ) Σ02 =PΣ2 PT = IN (6) Σ01 =PΣ1 PT = Λ (7) −1 where eigenvalue decomposition of Σ1 Σ−1 . 2 is QΛQ The proof of Proposition 1 is shown in Appendix A. We can treat Σ01 and Σ02 as two graphs G01 and G02 on x0 obtained from G1 and G2 by inverse linear transformation P. G01 and G02 are graphs with N independent variables. The distances of G01 and G02 are as follows. The distances between G1 and G2 are the same with the distances of G01 and G02 just because P is inverse. 1X (− ln λi + λi − 1) 2 i   1X 1 0 0 ln λi + −1 D(Σ2 ||Σ1 ) =D(Σ2 ||Σ1 ) = 2 i λi D(Σ1 ||Σ2 ) =D(Σ01 ||Σ02 ) = G_1 G_1 GT1 GT1 GT1 w N+1 i GT1 G_1' G_2 G_2 GT2 GT2 w N+1 i i G_2' GT2 w G_1' (8) N+1 GT2 w G_2' G_2 G_1 (9) D(Σλ ||Σ1 ) =D(Σ0λ ||Σ01 )   1X 1 = ln (λ + (1 − λ)λi ) + −1 2 i λ + (1 − λ)λi (10) GT_11 GT_11 G_1 GT_12 q w1w2 GT_11 q p w1w2 p N+1 i GT_21 p GT_12 w1w2 q GT_21 G_2 GT_22 w1w2 q p GT_22 w1 N+1 w2 w1 N+1 w2 GT_12 GT_21 GT_22 q w1 N+1 w2 q w1 N+1 w2 p p GT_11 GT_12 GT_21 GT_22 G_1' G_2' q q p p G_1' G_2' Fig. 1. Adding and Division operations of two trees =D(Σ0λ ||Σ02 ) D(Σλ ||Σ2 )   λi λ + (1 − λ)λi 1X ln = + −1 2 i λi λ + (1 − λ)λi (11) CI(Σ1 ||Σ2 ) =CI(Σ01 ||Σ02 ) = D(Σ0λ∗ ||Σ01 ) = D(Σ0λ∗ ||Σ02 ) (12) −1 −1 −1 −1 0 where Σ−1 = Σ01 λ + λ = Σ1 λ + Σ2 (1 − λ) and Σλ 0 −1 0 Σ2 (1 − λ). Σλ is also a diagonal matrix. λ∗ in (12) is the unique root of D(Σ0λ∗ ||Σ01 ) = D(Σ0λ∗ ||Σ02 ), namely, P 1−λi i λ∗ +(1−λ∗ )λi + ln λi = 0. In this way, we can conclude that the KL and CI divergences between two Gaussian graphs can be determined by their generalized eigenvalues. B. Relationship between generalized eigenvalues and Chernoff information Here we will show the relationship between Chernoff information of two Gaussian graphs and generalized eigenvalues of their covariance matrices Σ1 and Σ2 under the assumption of same entropy QN and normalized covariance matrix. Under this assumption, i=1 λi = |Σ1 |/|Σ2 | = 1. Proposition 2: For two N -node Gaussian distributions who have the same entropy and normalized covariance matrix Σ1 , Σ2 , their Chernoff information satisfies   p λ∗ 1X CI(Σ1 ||Σ2 ) = ln{(1 − λ∗ ) λi + √ } (13) 2 i λi where {λi } are the generalized eigenvalues of Σ1 , Σ2 , namely eigenvalues of Σ1 Σ2−1 , and λ∗ ∈ [0, 1] is the unique result of X X 1 λi = =N (14) ∗ ∗ ∗ λ + (1 − λ )λi λ + (1 − λ∗ )λi i i We can prove this proposition from the equation (12), as shown in Appendix B. We find that generalized eigenvalues of covariance matrices Σ1 and Σ2 are the key parameters of Chernoff information. We can get Chernoff information with these N generalized eigenvalues, so these N parameters contain all the information about the difference between two Gaussian trees. The generalized eigenvalues are so important that we need more property about them. C. Performance of unit generalized eigenvalues In former equation, we find that unit generalized eigenvalues are very special to Chernoff information. 
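Before stating this formally, a short numerical sketch (ours, not the authors' code) makes Proposition 2 concrete: the generalized eigenvalues alone determine λ* via (14) and the Chernoff information via (13), and appending a unit eigenvalue changes neither quantity, which is the content of the next proposition. Only the equal-determinant part of the assumption is needed for the closed form (13), so the random matrices below are simply rescaled to a common determinant; SciPy's generalized symmetric eigensolver supplies the eigenvalues of Σ1 Σ2^{-1}.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

def kl_to(lam, eigs, first):
    # D(Sigma_lambda || Sigma_1) or D(Sigma_lambda || Sigma_2), eqs. (10)-(11)
    d = lam + (1.0 - lam) * eigs
    if first:
        return 0.5 * np.sum(np.log(d) + 1.0 / d - 1.0)
    return 0.5 * np.sum(np.log(d / eigs) + eigs / d - 1.0)

def chernoff_from_eigs(eigs):
    g = lambda lam: kl_to(lam, eigs, True) - kl_to(lam, eigs, False)
    lam_star = brentq(g, 0.0, 1.0)                # unique root in [0, 1], eq. (12)
    return lam_star, kl_to(lam_star, eigs, True)

rng = np.random.default_rng(0)
N = 5
A = rng.standard_normal((N, N)); Sigma1 = A @ A.T + N * np.eye(N)
B = rng.standard_normal((N, N)); Sigma2 = B @ B.T + N * np.eye(N)
Sigma1 *= (np.linalg.det(Sigma2) / np.linalg.det(Sigma1)) ** (1.0 / N)   # same entropy

eigs = eigh(Sigma1, Sigma2, eigvals_only=True)    # generalized eigenvalues of (Sigma1, Sigma2)
lam_star, ci = chernoff_from_eigs(eigs)
ci_13 = 0.5 * np.sum(np.log((1 - lam_star) * np.sqrt(eigs) + lam_star / np.sqrt(eigs)))
print(lam_star, ci, ci_13)                        # ci and ci_13 agree, eq. (13)
print(chernoff_from_eigs(np.append(eigs, 1.0)))   # unchanged by a unit eigenvalue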
Proposition 3: Assuming that the generalized eigenvalues of (N + 1)-node G01 and G02 are the same with that of N node G1 and G2 except a newly additional unit generalized eigenvalue, the optimal parameter λ∗ of (Σ01 , Σ02 ) is the same with that of (Σ1 , Σ2 ) and CI(Σ01 ||Σ02 ) = CI(Σ1 ||Σ2 ). Proposition 3 can be proved from Proposition 2 as shown in Appendix C. Proposition 3 shows a possible way to do dimension reduction that we can reduce the dimension of Gaussian graphs from N +1 to N without changing their Chernoff information if we only remove an unit generalized eigenvalue in this procedure. Gaussian tree models, whose definition can be seen in [8], represent the dependence of multiple Gaussian random variables by tree topologies. [8] dealt with the classification on Gaussian trees. In that paper, we defined two special operations on two Gaussian trees, namely adding operation and division operation, as shown in Fig. 1. We add the same leaf node or divide the same edge by adding or division operations. These two operations don’t change Chernoff information between two Gaussian trees. Next we will show how generalized eigenvalues change after adding or division operation. Proposition 4: Assume that Gaussian trees G01 and G02 are obtained from G1 and G2 by adding operation or division operation. Their covariance matrices are (Σ01 , Σ02 ), (Σ1 , Σ2 ) respectively. The generalized eigenvalues of (Σ01 , Σ02 ) are the same with that of (Σ1 , Σ2 ) except a newly added unit eigenvalue. Proposition 4 is proved in Appendix D. From Proposition 3 and 4, we can conclude that adding and division operations don’t affect Chernoff information between two Gaussian trees. p i subtree2 grafting Subtree1 Subtree1 q i subtree2 Fig. 2. T2 is obtained from T1 by grafting operation 3 2 subtr ee3 2 subt ree2 subt ree4 Unchanged subtree 1 3 subt ree2 subt ree4 Unchanged subtree i subt ree5 1 …… subt ree1 j subt ree5 subt ree1 subt ree3 …… Fig. 3. Independent grafting operations This result has been proved in our former paper [8] and is one of key contribution of that paper. IV. PARTIAL ORDERING IN INDEPENDENT GRAFTING CHAIN Grafting operation is a topological operation by cutting down a subtree from another tree and pasting it to another location, as shown in Figure 2. It is defined in our former paper [8]. In that former paper, we have shown that two Gaussian trees connected by one grafting operation have the same Chernoff information with two special 3-node Gaussian trees whose weights are related to the underlying operation. Now we consider a more complex situation: two Gaussian trees connected by more than one grafting operation. Gaussian trees connected by more grafting operations can’t be simplified to a fixed couple of small trees because the interaction of these grafting operations varies. Our initial expectation was that bigger difference in topology between two Gaussian trees leads to larger Chernoff information. But this may not be true for all situations. Before we deal with a sequence of grafting operations, we need to constrain the interaction among them. So we will define the independence of grafting operations at first. Definition 1: If all the grafting operations can be divided into different subtrees, as shown in Fig. 3, then these grafting operations are independent. After regrouping all the nodes into small subtrees represented by circles, the whole tree has super star-shaped topology. The subtree in the center is unchanged during grafting operations. 
And grafting operations are involved in disjoint super leaf nodes of the star. In Fig. 3, we show 4 independent grafting operations around the unchanged subtree. There are three types of grafting operations in the star-shaped topology. From left to right, the 1-st, 2-nd grafting operations belong to the first type, the 4-th one belongs to the second type and the 3-rd operation belongs to the third type. For the first type, we can cut a subtree, represented by a small circle with a number in it, from the super leaf node(subtree1,2) and paste it to another part of this super leaf node. In the second type, we can cut the unchanged subtree outside the super leaf node(subtree5) and paste it to another location in this super leaf node. But for the third type, we will cut a subtree, represented by a small circle with a number in it, from a super leaf node(subtree3) and paste it to another super leaf node(subtree4). The third kind of grafting operation involves two super leaf nodes of the star while the first and second kinds of operation only involve one. If all the grafting operations are independent, then we can make the following conclusion. Proposition 5: For two Gaussian trees connected by several independent grafting operations, λ∗ = 1/2 holds. The trees have the same number of nodes and the same entropy due to grafting operations. So λ∗ satisfies tr(Σλ∗ (Σ−1 1 − Σ−1 )) = 0, which can be transformed from the definition 2 −1 formulas of λ∗ . tr(Σλ∗ (Σ−1 1 −Σ2 )) is a summation formula with 4n term, where each 4 terms are related to one single grafting operation. So we can deal with the terms respectively −1 and prove tr(Σ0.5 (Σ−1 1 − Σ2 )) = 0 eventually. More details can be found in Appendix E. Considering a grafting chain T1 ↔ T2 ↔ T3 , intuition tells us that CI(T1 ||T3 ) is likely larger than CI(T1 ||T2 ) and CI(T2 ||T3 ), because the difference between T1 and T3 is the accumulation of T1 − T2 ’s difference and T2 − T3 ’s difference. If the assumption is true, we only need to consider the adjacent pairs of Gaussian trees in the chain when looking for the minimum Chernoff information. Later results will tell us that it depends on the structure of the chain. Proposition 6: For the grafting chain T1 ↔ T2 ↔ T3 ↔ · · · ↔ Tn where all the grating operations in the chain are independent, we can conclude that CI(Ti ||Tj ) ≤ CI(Tp ||Tq ) if p ≤ i ≤ j ≤ q. Detail of the proof can be found in Appendix F. If we want to find out the minimum Chernoff information in this set, weonly need to try n − 1 pairs of Ti − Ti+1 , rather than all the n2 pairs. The number of candidates is significantly reduced. It is a partial ordering because we can’t compare CI(T1 ||T2 ) and CI(T2 ||T3 ) even in a simplest chain T1 ↔ T2 ↔ T3 without knowing the weights. We can only compare Chernoff information pairs CI(Ti ||Tj ), CI(Tp ||Tq ) when p ≤ i ≤ j ≤ q ordering, and thus this result is a partial inequality, rather than a full ordering inequality. In proposition 6, we constrain the grafting operations independent. So we may wonder whether the result suits for all the possible grafting chains T1 ↔ T2 ↔ T3 ↔ · · · ↔ Tn without independent assumption. Taking grafting chain T1 ↔ T2 ↔ T3 in Fig. 4 as an example, the two grafting operations are not independent. Some special cases is shown in Table I. In this table, we can find that CI(T1 ||T3 ) < CI(T1 ||T2 ) can hold in some special situations. 
This is a counter-intuitive result because more topological differences can’t lead to larger Chernoff information between Gaussian trees. cases 1 2 3 4 λ∗T ||T 1 3 0.5191 0.5073 0.5254 0.5082 λ CIT1 ||T3 CIT1 ||T2 CIT2 ||T3 19.5746, 0.0433, 1.5439, 0.7642, 1, 1, 1 0.8983 0.9142 0.0251 9.2341, 0.1019, 1.2982, 0.8185, 1, 1, 1 0.5402 0.5418 0.0113 9.4328, 1.653, 0.0844, 0.7603, 1, 1, 1 0.5982 0.6103 0.0392 5.0195, 0.1863, 1.2201, 0.8766, 1, 1, 1 0.3102 0.3132 0.0056 TABLE I N UMERICAL CASES DISSATISFYING P ROPOSITION 6 IN THE CHAIN OF F IG . 4 1 w5 6 w1 5 5 3 w3 4 G_2 7 4 w6 2 w2 3 w3 w1 4 w6 3 w3 G_1 2 w2 w4 w4 2 w2 5 w4 w1 1 w5 6 w6 7 1 w5 6 7 G_3 Fig. 4. Example for dependent grafting operations So we can only provide a partial ordering with independent grafting operations in this section. The problems of ordering is much more complex than what we have expected. For a grafting chain T1 ↔ T2 ↔ T3 , the Chernoff information G_1 between T1 and T3 may be larger than that between T1 and T2 , even though the difference between T1 and T2 seems smaller. Here we only consider topological difference, rather than parameterized difference, between Gaussian trees. Topological difference is not the only contribution factor affecting such comparisons [5], [11]. V. D IMENSION REDUCTION The analysis in former section deals with Chernoff information in full-observation situation where we can get access to all the variables. But in practice, there is significant cost in collecting and storing data. So we can only use linearly transformed low-dimensional samples to do the classification. The linear transformation matrix should make sure that the reduced data have maximum information for classification. Traditional dimension reduction methods, such as Principal Component Analysis (PCA) and other Representation Learning [12], aim to find the optimal features with maximum information. They only consider a distribution at one time and don’t care how we use the resulting low-dimensional data. But our method considers two hypothesises at the same time and want to keep the most information for classification. Some features may be important in PCA, but invalid in our method because these features in two hypothesis are similar. Our object function here is the Chernoff information between the resulting low-dimensional distributions. In section III-A, we have shown linear transformation method which can make the distribution in new space independent. In these new space, x0i , the i-th variable of x0 , follows N (0, 1) in hypothesis 2 and N (0, λi ) in hypothesis 1. If λi is farther from 1, this dimension can provide more information for classification than other dimensions. Assume that m of all the N eigenvalues {λi } are greater than 1 and the other N − m eigenvalues are no more than 1. If we want to reduce the observation dimension from N to NO , we choose the dimensions of x0 corresponding to the first k rank and last NO − k rank of {λi }, where NO + m − N ≤ k ≤ m and k ≥ 0. We can choose a sub-matrix of Σ01 and Σ02 corresponding to the NO chosen eigenvalues as the result of dimension reduction. The NO × N linear transformation matrix Ak is the corresponding NO rows of P corresponding to the chosen eigenvalues. Proposition 7: A∗ is the optimal NO ×N linear transformation matrices to maximize the Chernoff information in transformed space, namely A∗ = arg maxANO ×N CI(Σ̂1 ||Σ̂2 ) where Σ̂i = A∗ Σi A∗ T for i = 1, 2. So A∗ ∈ {Ak |NO + m − N ≤ k ≤ m, k ≥ 0}. 
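To make the selection rule concrete, the following hedged sketch (ours; the helper names and the Monte-Carlo check are not from the paper) carries out Proposition 7 numerically. After the whitening map of Proposition 1, every retained coordinate contributes one generalized eigenvalue, the candidate A_k keeps the k largest together with the NO - k smallest eigenvalues, and the best candidate is found by evaluating the resulting Chernoff information. For simplicity all splits k are tried, which is a superset of the candidates allowed by Proposition 7, so the maximum is still found.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

def chernoff_from_eigs(eigs):
    # Chernoff information from generalized eigenvalues, via eqs. (10)-(12)
    d = lambda lam: lam + (1.0 - lam) * eigs
    d1 = lambda lam: 0.5 * np.sum(np.log(d(lam)) + 1.0 / d(lam) - 1.0)
    d2 = lambda lam: 0.5 * np.sum(np.log(d(lam) / eigs) + eigs / d(lam) - 1.0)
    lam_star = brentq(lambda lam: d1(lam) - d2(lam), 0.0, 1.0)
    return d1(lam_star)

def best_linear_reduction(S1, S2, n_out):
    eigs, vecs = eigh(S1, S2)               # ascending eigenvalues, Sigma2-orthonormal vectors
    P, N, best = vecs.T, len(eigs), None    # P S2 P^T = I, P S1 P^T = Diag(eigs) (Prop. 1)
    for k in range(n_out + 1):              # keep the k largest and n_out - k smallest
        idx = list(range(N - k, N)) + list(range(n_out - k))
        ci = chernoff_from_eigs(eigs[idx])
        if best is None or ci > best[0]:
            best = (ci, P[idx, :])          # candidate A_k = the selected rows of P
    return best

rng = np.random.default_rng(0)
N, n_out = 6, 2
A = rng.standard_normal((N, N)); S1 = A @ A.T + N * np.eye(N)
B = rng.standard_normal((N, N)); S2 = B @ B.T + N * np.eye(N)
ci_star, A_star = best_linear_reduction(S1, S2, n_out)

for _ in range(200):                        # no random projection should do better
    D = rng.standard_normal((n_out, N))
    ci_D = chernoff_from_eigs(eigh(D @ S1 @ D.T, D @ S2 @ D.T, eigvals_only=True))
    assert ci_D <= ci_star + 1e-7
print(ci_star)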
Proof of proposition 7 can be seen in Appendix G and this proposition ensure the optimality of our method. The observation is y = A∗ x and the covariance matrices of y in two hypothesis are Σ002 = A∗ Σ2 A∗ T = INO Σ001 ∗ = A Σ1 A ∗T = Diag({µi }) (15) (16) where Σ001 and Σ002 are NO × NO diagonal matrices and {µ1 , µ2 , . . . , µNO } (including multiple eigenvalues) are NO chosen eigenvalues. D(Σ00λ ||Σ001 )   1X 1 = ln (λ + (1 − λ)µi ) + −1 2 i λ + (1 − λ)µi D(Σ00λ ||Σ002 )   λ + (1 − λ)µi λi 1X ln + −1 = 2 i λi λ + (1 − λ)µi (17) (18) CI(Σ001 ||Σ002 ) = max min{D(Σ00λ ||Σ001 ), D(Σ00λ ||Σ002 )} λ∈[0,1] (19) Equation (19) is another definition of Chernoff information [10]. The distances are only related to the eigenvalues we have chosen. VI. C ONCLUSION In this paper, we show the relationship between topology, statistical distribution and algebra. Chernoff information between two Gaussian graphs can be determined by the generalized eigenvalues of their covariance matrices. Among them unit generalized eigenvalue is very special and doesn’t affect the Chernoff information. Adding and division operations on Gaussian trees only add a newly unit generalized eigenvalues and don’t change other generalized eigenvalues. Thus these operations keep the Chernoff information. We also extend our former result about grafting operation to Gaussian trees connected by more than one independent grafting operation and provide a partial ordering among these trees. What’s more, we provide an optimal linear dimension reduction method with the metric of Chernoff information. R EFERENCES [1] F. Vega-Redondo, Complex social networks. Cambridge University Press, 2007, no. 44. [2] A. Dobra, T. S. Eicher, and A. Lenkoski, “Modeling uncertainty in macroeconomic growth determinants using Gaussian graphical models,” Statistical Methodology, vol. 7, no. 3, pp. 292–306, 2010. [3] A. Ahmed, L. Song, and E. P. Xing, “Time-varying networks: Recovering temporally rewiring genetic networks during the life cycle of drosophila melanogaster,” arXiv preprint arXiv:0901.0138, 2008. [4] V. Y. F. Tan, A. Anandkumar, and A. S. Willsky, “Learning Gaussian tree models: analysis of error exponents and extremal structures.” IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2701–2714, 2010. [5] V. Y. Tan, A. Anandkumar, L. Tong, and A. S. Willsky, “A largedeviation analysis of the maximum-likelihood learning of markov tree structures,” Information Theory, IEEE Transactions on, vol. 57, no. 3, pp. 1714–1735, 2011. [6] G. Doretto and Y. Yao, “Region moments: Fast invariant descriptors for detecting small image structures,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010, pp. 3019–3026. [7] R. Wang, H. Guo, L. S. Davis, and Q. Dai, “Covariance discriminative learning: A natural and efficient approach to image set classification,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2496–2503. [8] B. Li, S. Wei, Y. Wang, and J. Yuan, “Chernoff information of bottleneck Gaussian trees,” in 2016 IEEE International Symposium on Information Theory (ISIT). IEEE, 2016, pp. 970–974. [9] M. B. Westover, “Asymptotic geometry of multiple hypothesis testing,” IEEE transactions on information theory, vol. 54, no. 7, pp. 3327–3329, 2008. [10] T. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons, 2012. [11] V. Jog and P.-L. 
Loh, “On model misspecification and KL separation for Gaussian graphical models,” in 2015 IEEE International Symposium on Information Theory (ISIT). IEEE, 2015, pp. 1174–1178. [12] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013. A PPENDIX A P ROOF OF P ROPOSITION 1 Because Σ1 and Σ−1 are real symmetric positive definite 2 T matrices, Σ1 = L1 LT1 and Σ−1 2 = L2 L2 due to Cholesky decomposition where L1 and L2 are real non-singular triangular matrices. The characteristic polynomial of Σ1 Σ−1 2 is −1 f (Σ1 Σ−1 2 ) = |λI − Σ1 Σ2 | = |λI − L1 LT1 L2 LT2 | = |L1 (λI − LT1 L2 LT2 L1 )L−1 1 | = |λI − LT1 L2 LT2 L1 | T = |λI − (LT1 L2 )(LT1 L2 ) | (20) T T T So Σ1 Σ−1 2 has the same eigenvalues with (L1 L2 )(L1 L2 ) , which is a real symmetric positive definite matrix. So the −1 eigenvalues of Σ1 Σ−1 2 are all positive and Σ1 Σ2 is a positive definite matrix. The eigenvalue decomposition of Σ1 Σ−1 is QΛQ−1 , 2 where Q is an N × N matrix and Λ = Diag({λi }) is a diagonal matrix of eigenvalues, in which we put multiple eigenvalues adjacent. {λi } are the eigenvalues of Σ1 Σ−1 2 , namely, the generalized eigenvalues of Σ1 and Σ2 . T (1) (1) And Σ1 = Q−1 Σ1 Q−1 = [aij ] and Σ2 = T (1) (1) Q−1 Σ2 Q−1 = [bij ] satisfy Σ1 = ΛΣ2 . aij = λi bij (21) aij = aji = λj bji = λj bij (22) So for ∀i 6= j, aij = bij = 0 or λi = λj . If the eigenvalues of Σ1 Σ−1 have k different 2 multiple eigenvalues λ(1) , λ(2) , . . . , λ(k) with multiplicity n1 , n2 , . . . , nk , so   J1   J2     (1) J 3 Σ2 =   (23)   . ..   Jk  (1)  λ J1   λ(2) J2   (3)   (1) λ J3 Σ1 =   (24)   ..   . λ(k) Jk where Ji is ni × ni inverse matrix. (1) −0.5 We can also get an inverse matrix Q(1) = Σ2 so that (2) (1) T Σ2 = Q(1) Σ2 Q(1) = IN (2) Σ1 =Q (1) T (1) Σ1 Q(1) =Λ (25) (26) So we can construct a linear  1transformation matrix P = T − 2 −1 Q and thus Q(1) Q−1 = Q−1 Σ2 (Q−1 ) Σ02 =PΣ2 PT = IN (27) Σ01 =PΣ1 PT = Λ (28) A PPENDIX B P ROOF OF P ROPOSITION 2 λ∗ satisfies D(Σ0λ∗ ||Σ01 ) = D(Σ0λ∗ ||Σ02 ), and thus   1X 1 ln (λ∗ + (1 − λ∗ )λi ) + ∗ − 1 2 i λ + (1 − λ∗ )λi   1X λ∗ + (1 − λ∗ )λi λi = ln + ∗ −1 2 i λi λ + (1 − λ∗ )λi (29) QN With knowing i=1 λi = 1, we can conclude X X 1 λi = (30) ∗ ∗ ∗ λ + (1 − λ )λi λ + (1 − λ∗ )λi i i We also know X X λi 1 +(1 − λ∗ ) λ∗ ∗ ∗ ∗ λ + (1 − λ )λi λ + (1 − λ∗ )λi i i X = 1 =N (31) i ∗ So λ ∈ [0, 1] is the unique result of X X 1 λi = =N ∗ ∗ ∗ λ + (1 − λ )λi λ + (1 − λ∗ )λi i i (32) And CI(Σ1 ||Σ2 ) = D(Σ0λ∗ ||Σ01 )   1X 1 = ln (λ∗ + (1 − λ∗ )λi ) + ∗ − 1 2 i λ + (1 − λ∗ )λi 1X ln (λ∗ + (1 − λ∗ )λi ) = 2 i   p 1X λ∗ = ln (1 − λ∗ ) λi + √ (33) 2 i λi A PPENDIX C P ROOF OF P ROPOSITION 3 λ∗ is unique in the range of [0, 1]. So if λ∗ satisfies equation (14), then X 1 1 + ∗ = ∗ ∗ λ + (1 − λ )λi λ + (1 − λ∗ ) × 1 i X λi 1 + ∗ (34) ∗ + (1 − λ∗ )λ λ λ + (1 − λ∗ ) × 1 i i where {λi } are the generalized eigenvalues S of (Σ1 , Σ2 ). So the generalized eigenvalues {λi } {1} of (Σ01 , Σ02 ) and λ∗ also satisfies equation (14). The optimal λ∗ is the same. And  N  p 1X λ∗ CI(Σ01 ||Σ02 ) = ln{(1 − λ∗ ) λi + √ } 2 i=1 λi   ∗ √ 1 λ + ln{(1 − λ∗ ) 1 + √ } 2 1  N  X p 1 λ∗ = ln{(1 − λ∗ ) λi + √ } 2 i=1 λi =CI(Σ1 ||Σ2 ) (35) A PPENDIX D P ROOF OF P ROPOSITION 4 Assume that Gaussian trees G01 and G02 are obtained from G1 and G2 by adding operation or division operation. Their covariance matrices are (Σ01 , Σ02 ), (Σ1 , Σ2 ) respectively. 
The generalized eigenvalues of (Σ01 , Σ02 ) are the same with that of (Σ1 , Σ2 ) except a newly added unit eigenvalue. We use the equation of block matrix to prove this proposition. We deal with adding operation and division operation in different subsections. A. Proof on adding operation case Adding operation is shown in Fig. 1. Assume G1 and G2 have node 1, .. . , N , and  their covariance matrices are D α G1 : Σ1 = αT  1  A2 B2 G2 : Σ2 , Σ−1 = . 2 BT2 a2 Without loss of generality, the new edge is eN,N +1 with parameter w. So new covariance matrices after adding operation are   D α wα 1 w  Σ01 =  αT (36) T wα w 1   A2 B2 0 2 −w  w (37) Σ0−1 = BT2 a2 + 1−w 2 2 1−w2 w 1 0 − 1−w2 1−w2 The generalized eigenvalues of Σ1 and Σ2 are the roots of |λI − Σ1 Σ−1 2 | = 0. Σ1 Σ−1 2 Σ01 Σ0−1 2    D α A2 B2 = T α 1 BT2 a2   DA2 + αBT2 DB2 + a2 α = (38) αT A2 + BT2 αT B2 + a2   DA2 + αBT2 DB2 + a2 α 0  = αT A2 + BT2 αT B2 + a2 0 (39) ∗ ∗ 1 −1 |λI − Σ01 Σ0−1 2 | =(λ − 1)|λI − Σ1 Σ2 | (40) So new trees have an extra unit generalized eigenvalue without changing other eigenvalues. B. Proof on division operation case Division operation is shown in Fig. 1. Assume that G1 and G2 have node 1, . . . , N , and the connecting points are p = 1, q = N without loss of generality. If we set edge e1N in G2 to be 0, we can treat it as a new N -node tree, whose covariance matrix is A−1 . We divide A as   x XT 0 A = X Z Y (41) 0 YT y where X, Y are (N −2)×1 matrices and Z is (N −2)×(N −2) matrix. 2 w2 w1 2 2 w2 1−w1 2 XT X − Z w1 w2 − 2 w2 1−w1 2 Y T w1 w2 2 w2 1−w1 2 Y y+ 2 w2 w1 2 2 w2 1−w1 2     (43) 0 T αT 1 + w1 w2 α2 P T w1 w2 αT 1 + α2 T w1 αT + w α 2 1 2 1 α1 + w1 w2 α2 =  w1 w2 w1 2 w1  0 G2 : −1 Σ22 , Σ22 x + 1−w12  X   = 0    w1 − 2 1−w1 w1 w2 w1 w2 α1 + α2 1 w2 XT 0 Z Y YT 2 w2 2 1−w2 w2 − 2 1−w2 The generalized eigenvalues of Σ1 and |λI − Σ1 Σ−1 2 | = 0.  0  Σ11 Σ−1 12 =Σ11 A + −w1 w2 α2 −w1 w2 Σ21 Σ−1 22 Σ11 Σ−1 12 = ∗   w1 2 1−w1 0 w2 2 1−w2 2 w2 1 + 2 2 1−w1 1−w2 − y+ 0  w1 w1 α1 + w2 α2    w2 1 (44) −         (45) Σ2 are the roots of 0 0 0  0 1 −1 |λI − Σ21 Σ−1 22 | =(λ − 1)|λI − Σ11 Σ12 | 4 5 TYPE1 Other parts 1 w1 2 w2 3 Other parts’ 2 w1 1 w2 4 4 TYPE2 Other parts 1 w1 2 w2 Other parts’ Other 5 6 parts 1 4 w 5 w 6 3 2 w2 TYPE3 7 Other parts 2 5 4 w Other 6 parts 1' 1 w1 3 w4 5 w 6 7 Other parts 2'  G1 : Σ21  3 w3 w3 x+ 2 w2 5 3 w4 (42) 1 w1 w3 −1 G2 : Σ12 , Σ12   =   w1 w2 w1 w2 α1 + α2  1 4 w4  T αT 1 + w1 w2 α2 P T w1 w2 αT 1 + α2 3 w3 w3 G1 : Σ11 1 = α1 + w1 w2 α2 w1 w2 2 w2 w3  1 w1 w4 In G1 and G01 , we define the column node 1 to nodes 2 to N − 1 to be α1 (setting the respective rows to the nodes in GT12 0) and the column node N to nodes 2 to N − 1 to be α2 (setting the respective rows to the nodes in GT11 0). Then in G1 and G01 , the column from node 1 to nodes 2 to N − 1 is α1 + w1 w2 α2 because node 1 can reach the nodes in GT11 directly, but reach the nodes in GT12 through node N with path weighted by w1N = w1 w2 . And the column from node N to node 2 to N − 1 is w1 w2 α1 + α2 because node N can reach the nodes in GT12 directly, but reach the nodes in GT11 through node 1 with path weighted by w1N = w1 w2 . In G01 , the column from node N + 1 to node 2 to N − 1 is w1 α1 + w2 α2 because node N + 1 can reach the nodes in GT11 through node 1 with path e1,N +1 weighted by w1 , and reach the nodes in GT12 through node N with path eN,N +1 weighted by w2 . 
The covariance matrices of these trees are as shown below:  −w1 w2 −w1 w2 α1  0 (46) (47) (48) So new trees have an extra unit generalized eigenvalue without changing other eigenvalues. A PPENDIX E P ROOF OF P ROPOSITION 5 λ∗ satisfies D(Σλ∗ ||Σ2 ) = D(Σλ∗ ||Σ1 ). When |Σ1 | = |Σ2 |, it will become the unique root in [0, 1] of  −1 tr Σλ∗ (Σ−1 − Σ ) = 0. We only need to prove 1 2  −1 tr Σ0.5 (Σ−1 − Σ ) = 0 before we get λ∗ = 1/2. 1 2 Fig. 5. Simplified separate grafting operation types P tr (AB) = i,j aij bij when A, B are both symmetric −1 matrices. Luckily, A = Σ0.5 and B = Σ−1 are both 1 − Σ2 −1 symmetric matrices. What’s more, B = Σ1 − Σ−1 only 2 have 2n non-zero diagonal elements and 2n pairs of non-zero non-diagonal elements, where n is the number of grafting w2 w2 operations. For example, bpp = 1−w 2 , bqq = − 1−w 2 , biq = w w 1−w2 , bip = − 1−w2 if we cut node i from node p with edge eip weighting w and paste it to node q, and there aren’t any other grafting operations involving these three nodes.  −1 So tr Σ0.5 (Σ−1 1 − Σ2 ) is a sum of 4n terms, each 4 terms respecting to one grafting operation. We can divide the polynomial into n parts corresponding each grafting  to P −1 operation, namely, tr Σ0.5 (Σ−1 − Σ ) = 1 2 k Pk . Then we only need to prove each Pt equals to 0. As Fig. 3 shows, there are three different types of grafting operations here. After simplification by using adding and merging as shown in [8], the three types of subparts can be translated as shown in Fig. 5. In this figure, we label these relevant nodes as 1, 2, 3 . . . for simplification because it doesn’t change the result if we exchange labels of nodes. Before we deal with Pk in each case, we will show Σ−1 λ is positive definite matrix. Σ1 and Σ2 are positive definite matrices, so Σ−1 and Σ−1 are positive definite matrices, 1 2 T −1 namely , z Σ1 z > 0 and zT Σ−1 2 z > 0 for any non-zero T −1 vector z. Then zT Σ−1 z = λz Σ z + zT (1 − λ)Σ−1 1 2 z > 0 λ −1 for any non-zero vector z. So Σλ is positive definite matrix and its order principal minors are invertible matrices. This result is useful in our later proof. w2 −wi 2 i We define fi = 1−w 2 and gi = 1−w 2 , so (1 + gi )gi = fi . i i Then we will prove Pk = 0 in different type of Fig. 5 later. Mij is the (i, j) minor of Σ−1 0.5 , so A. TYPE1 In this type, we focus on node 1 to 4, which is involved in this grafting operation. So Σ−1 1 =  1 + g1 f1   1 + g1 + g2 f2   f1     f 1 + g + g + g f V 2 2 3 4 3     f3 1 + g3   T (1) V K  Σ−1 2 =  1 + g1       f  1  f1 1 + g2 f2 f2 1 + g2 + g3 + g4 f3 f3 1 + g1 + g3 V T    V     (2) K where V is a 4 × (N − 4) matrix with all zero elements except v3,1 = f4 and K(1) , K(2) are covariance matrices of node 5 to N in G1 and G2 respectively. And then −1 Σ−1 1 − Σ2 = 0 f1 −f1  f1 g1   0   −g1  −f1   1 M22 =|K(3) |(1 + g1 ){(1 + g3 )(1 + g2 + g4 + g1 ) 4 1 1 + g1 (g2 + g4 )} − (1 + g1 )(1 + g3 + g1 )X 4 4 1 M44 =|K(3) |(1 + g1 ){(1 + g2 )(1 + g3 + g4 + g1 ) 4 1 1 + g1 (g3 + g4 )} − (1 + g1 )(1 + g2 + g1 )X 4 4 1 (3) 1 M12 =|K |{ f1 {(1 + g3 )(1 + g2 + g4 + g1 ) 2 2 1 1 1 1 + g1 (g2 + g4 )} + f1 f2 f3 } − f1 (1 + g3 + g1 )X 2 2 2 2 1 1 M14 =|K(3) |{ f1 {(1 + g2 )(1 + g3 + g4 + g1 ) 2 2 1 1 1 1 + g1 (g3 + g4 )} + f1 f2 f3 } − f1 (1 + g2 + g1 )X 2 2 2 2 and 2f1 (−M12 + M14 ) + g1 (M22 − M44 ) = 0 = (2f1 (−M12 + M14 ) + g1 (M22 − M44 )) |Σ0.5 | = 0 (51) B. TYPE2 In this type, we focus on node 1 to 3, which is involved in this grafting operation. 
So Σ−1 1 =       1 + g1 f1  f1 1 + g f2 1 + g2   f2 1 + g2 + g3   K(1) − K(2) VT 1 −1 1 −1 Σ−1 0.5 = 2 Σ1 + 2 Σ2 =  1  1 + 2 g1    1  f  2 1        1f  2 1   (50) Pk =g1 m22 − g1 m44 + 2f1 m12 − 2f1 m14  V     K(1) Σ−1 2 = 1f 2 1 1 + g1 + g2 f1 f2  f1 1 + g1   f2 1 + g2 + g3  1 + g2 + 1 g1 2 f2 f2 1 + g2 + g3 + g4 f3 f3 1 + 1 g1 + g3 2 V VT K(3)                 VT where K(3) = 12 K(1) + 12 K(2) is an invertible matrix because Σ−1 λ is positive definite matrix. And we define  0  −1 VK(3) VT =   X |K(3) |    V     K(2) where V is a 3 × (N − 3) matrix with all zero elements except v3,1 = f3 and K(1) , K(2) are covariance matrices of node 4 to N in G1 and G2 respectively. And then we use the same progress and get Pk =g2 m22 − g2 m11 + 2f2 m23 − 2f2 m13 = 0  0    1f 2 1 (49) (52) where mij is the ij-th element of Σ0.5 . 0 C. TYPE3 So Pk related to this grafting operation is g1 m22 − g1 m44 + 2f1 m12 − 2f1 m14 where mij is the ij-th element of Σ0.5 . In this type, we focus on node 1 to 5, which is involved in this grafting operation. So Σ−1 1 = f1 1 + g1 + g2 f2 1 + g4 f2 f4 f4 1 + g2 + g3 + g5 f3 f3 1 + g3 + g4 + g6 K(1) subtr ee2 1 2   1 + g1        f1          f1 1 + g2 f2 1 + g1 + g4 f2 f4 f4 1 + g2 + g3 + g5 f3 f3 1 + g3 + g4 + g6 VT V K(2)                  where V is a 5 × (N − 5) matrix with all zero elements except v4,1 = f5 , v5,2 = f6 and K(1) , K(2) are covariance matrices of node 6 to N in G1 and G2 respectively. And then we use the same progress and get Pk =g1 m22 − g1 m33 + 2f1 m12 − 2f1 m13 = 0 (53) where mij is the ij-th element of Σ0.5 . Pk = 0 in all types of grafting operations these operations  if P −1 are separate. And tr Σ0.5 (Σ−1 1 − Σ2 ) = k Pk = 0. So λ∗ = 12 if all the grafting operations in the chain are separate. A PPENDIX F P ROOF OF P ROPOSITION 6 We prove this proposition by using adding and merging operation of [8] repeatedly. For p ≤ i ≤ j ≤ q, there are q − p grafting operations in the grafting chain Tp ↔ · · · ↔ Tq . We divide these grafting operations into two sets: grafting operations from Ti to Tj and other grafting operations, namely, set 1 and set 2. We can simplify tree pairs (Tp , Tq ) and (Ti , Tj ) into (T̂p , T̂q ) and (T̃i , T̃j ) respectively by using adding and merging operation of [8] repeatedly. (T̂p , T̂q ) have all the anchor nodes of all q − p operations and the paths of backbone among these operations. (T̃i , T̃j ) have all the anchor nodes of all set 1 operations and the paths of backbone among these operations. If all the grafting operations in the chain are independent, (T̂p , T̂q ) have the same substructure with (T̃i , T̃j ) after dropping the extra nodes. So CI(T̂p ||T̂q ) ≥ (T̃i ||T̃j ) holds by Proposition 2 of [8]. And then CI(Ti ||Tj ) ≤ CI(Tp ||Tq ). Here we take a T1 ↔ T2 ↔ T3 case as an example, as shown in Fig. 6. We can simplify the calculation of CI(T1 ||T3 ) and CI(T1 ||T2 ) as shown in Fig. 6. And then Proposition 2 of [8] can tell us CI(T1 ||T3 ) ≥ CI(T1 ||T2 ). In the same way, we can conclude CI(T1 ||T3 ) ≥ CI(T2 ||T3 ). 
A PPENDIX G P ROOF OF P ROPOSITION 7 We want to prove CI(Σ̂1 ||Σ̂2 ) ≥ CI(Σ̃1 ||Σ̃2 ) where Σ̂i = A∗ Σi A∗ T , Σ̃i = DΣi DT for arbitrary NO × N matrix D 1 3 7 subtr ee1 subtr ee2 6 5 3 Unchanged subtree 2 subtr ee2 Unchanged subtree T2 T1 1 w1 2 w2 CI(T1||T3)=CI( CI(T1||T2)=CI( T3 3 w3 8 w7 7 w4 1 w1 2 w2w3 4 6 8 5 4 Unchanged subtree Σ−1 2 =  8 7 4 subtr ee1 6 5 3 4 2 w2 || 5 w5 6 3 w3 7 w4 || 4 w1 1 6 w7 8 ) w6 VT V                  1 2 w6  1 + g1    f1              8 7 4 subtr ee1   5 w5 2 w2w3 4 w1 1 ) Fig. 6. Example for Proposition 6 and i = 1, 2. Due to equation (19), we only need to prove D(Σ̂λ ||Σ̂2 ) ≥ D(Σ̃λ ||Σ̃2 ) and D(Σ̂λ ||Σ̂1 ) ≥ D(Σ̃λ ||Σ̃1 ) −1 −1 −1 −1 where Σ̂−1 λ = λΣ̂1 + (1 − λ)Σ̂2 and Σ̃λ = λΣ̃1 + (1 − −1 λ)Σ̃2 . In the following part, I will prove D(Σ̂λ ||Σ̂1 ) ≥ D(Σ̃λ ||Σ̃1 ) and the proof of D(Σ̂λ ||Σ̂2 ) ≥ D(Σ̃λ ||Σ̃2 ) is similar. D(Σλ ||Σ1 ) = 1X g(νi ) 2 i (54) 1 where g(νi ) = ln (λ + (1 − λ)νi ) + λ+(1−λ)ν − 1 and {νi } i are generalized eigenvalues of Σ1 and Σ2 . g(1) = 0. g(νi ) is decreasing in (0, 1) and increasing in (1) (2) (1) (2) (1) (1, +∞). If ν1 ≤ ν1 ≤ ν2 ≤ ν2 ≤ · · · ≤ νN −1 ≤ (2) (1) (1) (1) (2) νN −1 ≤ νN , we can choose µi = νi+1 for νi ≥ 1 and (1) (1) (2) (1) µi = νi for νi < 1 so that {µi } is an (N − 1) subset PN −1 PN −1 (1) (1) (2) of {νi } and 12 i=1 g(µi ) ≥ 21 i=1 g(νi ). (2) (1) If m elements of {νi } are greater than one, {µi } contains the first k elements and last N − 1 − k elements (1) of {νi }. We first introduce a preparation of proof. A. The relationship of eigenvalues 0  We have  four covariance   matrices Σ1 , Σ2 , Σ1 Σ1 m Σ p and Σ02 = T2 . mT a0 p b0 (2) (2) = (2) The eigenvalues of Σ1 Σ−1 2 are λ1 ≤ λ2 ≤ · · · ≤ λN −1 . (1) (1) (1) The eigenvalues of Σ01 Σ0−1 are λ1 ≤ λ2 ≤ · · · ≤ λN . 2 For Σ1 and Σ2 , we can find an inverse matrix P where (2) (1) (1) (2) (2) Σ1 = PΣ1 PT = Diag{λ 1 , λ2 , . . . , λN −1 } and Σ2 =  P 0(1) PΣ2 PT = I. So P0 = satisfies Σ1 = P0 Σ01 P0T = 1  (1)    I b 0(1) Σ1 a 0 0 0T and Σ2 = P Σ2 P = T , where a = b b0 aT a0 {a1 , a2 , . . . , aN −1 }T and b = {b1 , b2 , . . . , bN −1 }T . (1) So {λk } are the roots of |λΣ02 − Σ01 | = 0, which is equal to F (λ) = 0 where 0(1) F (λ) = |λΣ2 = 0(1) (1) − Σ1 | = N −1 Y λI − Σ1 λbT − aT λb − a λb0 − a0 (N −N ) (2) (λ − λi )× i=1 n o (1) −1 λb0 − a0 − (λbT − aT )(λI − Σ1 ) (λb − a) ) ( N −1 N −1 Y X fi2 (λ) (2) (55) = (λ − λi ) × f0 (λ) − (2) i=1 i=1 λ − λi 0(1) = |Σ2 |λN + N −1 X ci λ i (56) i=0 and fi (λ) = bi λ − ai . At first, we assume that there is no multiple eigenvalues for (2) (2) {λi } and fi (λi ) 6= 0 for all 1 ≤ i ≤ N − 1. So (−1) N −k (2) F (λk ) =(−1) N −k+1 2 (2) fk (λk ) N −1 Y (2) (2) (λk − λi ) i=1,i6=k >0 The generalized eigenvalues after k dimension reduction are (k+1) {λi } of size N − k. In No. (N − NO ) dimension reduction stage, we can (N −NO ) (N −NO ) always find {µi } of {λi  an NO subset   } so that P P (N −NO ) (N −NO +1) NO NO 1 1 ≥ 2 i=1 g λi because i=1 g µi 2 (57) And F (+∞) = +∞, F (−∞) = (−1)N ∞ There is at least (2) (2) one root of F (λ) = 0 in (−∞, λ1 ), (λN −1 , +∞) and (2) (2) (λi , λi+1 ) for 1 ≤ i ≤ N − 2. (1) (2) (1) (2) So we can conclude that λ1 < λ1 < λ2 < λ2 < · · · < (1) (2) (1) λN −1 < λN −1 < λN . (2) (2) If λi has multiplicity ni , we can conclude that λi has multiplicity ni − 1 in F (λ) = 0. If we put this exact multiple eigenvalues aside, other eigenvalues satisfy the ordering before. (2) (2) If fi (λi ) = 0, we can conclude that λi is also the root of F (λ) = 0. 
If we put this eigenvalue aside, other eigenvalues satisfy the ordering before. (1) (2) (1) (2) So for all cases, λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ · · · ≤ (1) (2) (1) λN −1 ≤ λN −1 ≤ λN . B. Main proof For N -dimension graphs whose covariance matrices are A and B, we can do inverse linear transformation by A0 = KAKT and B0 = KBKT . Then we can do dimensionreduction from N to N − 1 by A00 = [IN −1 , 0]A0 [IN −1 , 0]T and B00 = [IN −1 , 0]B0 [IN −1 , 0]T . (1) (1) The generalized eigenvalues of A and B are λ1 ≤ λ2 ≤ (1) · · · ≤ λN , which are also generalized eigenvalues of A0 and (2) 0 B . And the generalized eigenvalues of A00 and B00 are λ1 ≤ (2) (2) (1) (2) (1) (2) λ2 ≤ · · · ≤ λN −1 . So λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ · · · ≤ (1) (2) (1) λN −1 ≤ λN −1 ≤ λN . We can also do the procedure to reduce the dimension from (2) (3) (2) (3) N − 1 to N − 2 and get λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ · · · ≤ (2) (3) (2) λN −2 ≤ λN −2 ≤ λN −1 . (N −N +1) (N −N ) (N −N +1) O O O O λ1 ≤ λ1 ≤ λ2 ≤ λ2 ≤ ··· ≤ (N −NO ) (N −NO +1) (N −NO ) λNO +1 ≤ λNO ≤ λNO +1 . (N −NO −1) We can always find NO subset {µi } of (N −NO −1) (N −NO ) (N −NO ) {λi } and {µi } of {λ }, which sat PNO PiNO  (N −NO )  (N −NO −1) 1 1 ≥ 2 i=1 g µi ≥ isfy 2 i=1 g µi PNO  (N −NO +1)  1 with the same reason. i=1 g λi 2 With the same method, we can conclude that there exist PNO  (1)  (1) (1) an NO subset {µi } of {λi } so that 21 i=1 g µi ≥ PNO  (N −NO +1)  (N −NO +1) 1 . If k elements of {λi } are i=1 g λi 2 (1) greater than one, {µi } contains the first k elements and last (1) NO − k elements  of {λi }. P (1) N O 1 is D(Σλ ||Σ1 ) with linear transformation i=1 g µi 2 PNO  (N −NO +1)  1 matrix Ak . 2 i=1 g λi is D(Σ̃λ ||Σ̃1 ) under arbitrary linear transformation where the arbitrariness is embodied in K. So D(Σλ ||Σ1 ) ≥ D(Σ̃λ ||Σ̃1 ) holds for arbitrary D and its corresponding Ak . In the same way, D(Σλ ||Σ2 ) ≥ D(Σ̃λ ||Σ̃2 ) holds for arbitrary D and its corresponding Ak . So CI(Σ̂1 ||Σ̂2 ) ≥ CI(Σ̃1 ||Σ̃2 ) for for arbitrary D and A∗ is the optimal matrix in the set {Ak |NO + m − N ≤ k ≤ m, k ≥ 0}.
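The eigenvalue ordering used above is easy to check numerically. The following small sketch (ours) verifies, for a random covariance pair and a random invertible mixing K, that dropping one coordinate of Kx yields generalized eigenvalues interlacing the original ones, i.e. λ_i^(1) ≤ λ_i^(2) ≤ λ_{i+1}^(1).

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
N = 6
A = rng.standard_normal((N, N)); S1 = A @ A.T + np.eye(N)
B = rng.standard_normal((N, N)); S2 = B @ B.T + np.eye(N)
K = rng.standard_normal((N, N))              # arbitrary invertible mixing
E = np.eye(N)[:-1, :]                        # drop the last coordinate of K x

full = eigh(K @ S1 @ K.T, K @ S2 @ K.T, eigvals_only=True)             # N values
red = eigh(E @ K @ S1 @ K.T @ E.T, E @ K @ S2 @ K.T @ E.T,
           eigvals_only=True)                                          # N - 1 values
print(np.all(full[:-1] <= red + 1e-9) and np.all(red <= full[1:] + 1e-9))   # True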
Optimal Quasi-Gray Codes: Does the Alphabet Matter?∗ Diptarka Chakraborty†1 , Debarati Das‡2 , Michal Koucký§ 3 , and Nitin Saurabh¶4 1,2,3,4 Computer Science Institute of Charles University, Malostranské náměstí 25, 118 00 Praha 1, Czech Republic arXiv:1712.01834v1 [] 5 Dec 2017 December 7, 2017 Abstract A quasi-Gray code of dimension n and length ℓ over an alphabet Σ is a sequence of distinct words w1 , w2 , . . . , wℓ from Σn such that any two consecutive words differ in at most c coordinates, for some fixed constant c > 0. In this paper we are interested in the read and write complexity of quasi-Gray codes in the bit-probe model, where we measure the number of symbols read and written in order to transform any word wi into its successor wi+1 . We present construction of quasi-Gray codes of dimension n and length 3n over the ternary alphabet {0, 1, 2} with worst-case read complexity O(log n) and write complexity 2. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasiGray codes of dimension n and length at least 2n − 20n with worst-case read complexity 6 + log n and write complexity 2. Our results significantly improve on previously known constructions and for the odd-size alphabets we break the Ω(n) worst-case barrier for space-optimal (non-redundant) quasiGray codes with constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation [Buhrman et al. ’14, Ben-Or and Cleve ’92, Barrington ’89, Coppersmith and Grossman ’75]. We also establish certain limits of our technique in the binary case. Although our techniques cannot give space-optimal quasi-Gray codes with small read complexity over the binary alphabet, our results strongly indicate that such codes do exist. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 616787. The third author was also partially supported by the Center of Excellence CE-ITI under the grant P202/12/G061 of GA ČR. † [email protected]ff.cuni.cz ‡ [email protected] § [email protected]ff.cuni.cz ¶ [email protected]ff.cuni.cz ∗ 1 1 Introduction One of the fundamental problems in the domain of algorithm design is to list down all the objects belonging to a certain combinatorial class. Researchers are interested in efficient generation of a list such that an element in the list can be obtained by small amount of changes to the element that precedes it. One of the classic examples is the binary Gray code introduced by Gray [16], initially used in pulse code communication. The original idea of a Gray code was to list down all the binary strings of length n, i.e, all the elements of Zn2 such that any two successive strings differ by exactly one bit. The idea was later generalized for other combinatorial classes. Interested reader may refer to an excellent survey by Savage [29] for a comprehensive treatment on this subject. In this paper we study the construction of Gray codes over Znm for any m ∈ N. Originally, Gray codes were meant to list down all the elements from its domain but later studies focused on the generalization where we list ℓ distinct elements from the domain, each two consecutive elements differing in one position. We refer to such codes as Gray codes of length ℓ [14]. When the code lists all the elements from its domain it is referred to as space-optimal. 
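The reflected code of Gray [16] mentioned above has a compact arithmetic description; the following short sketch (standard textbook material, included only for reference, not one of the codes constructed in this paper) lists it and checks the single-bit-change property. As Table 1 later records, generating the successor from the codeword alone reads all n bits in the worst case.

def brgc(n):
    # the i-th codeword of the binary reflected Gray code is i XOR (i >> 1)
    return [i ^ (i >> 1) for i in range(1 << n)]

code = brgc(4)
assert len(set(code)) == 1 << 4                       # all 2^n words appear
for a, b in zip(code, code[1:] + code[:1]):           # cyclically consecutive pairs
    assert bin(a ^ b).count("1") == 1                 # exactly one bit flips
print([format(w, "04b") for w in code])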
It is often required that the last and the first strings appearing in the list also differ in one position. Such codes are called cyclic Gray codes. Throughout this paper we consider only cyclic Gray codes and we refer to them simply as Gray codes. Researchers also study codes where two successive strings differ in at most c positions, for some fixed constant c > 0, instead of differing in exactly one position. Such codes are called quasi-Gray codes [3]1 or c-Gray codes. We study the problem of constructing quasi-Gray codes over Znm in the cell probe model [34], where each cell stores an element from Zm . The efficiency of a construction is measured using three parameters. First, we want the length of the quasi-Gray code to be as large as possible, ideally we want space-optimal codes. Second, we want to minimize the number of coordinates of the input string the algorithm reads in order to generate the next (or, previous) string in the code. Finally, we also want the number of cells written in order to generate the successor (or, predecessor) string to be as small as possible. Since our focus is on quasi-Gray codes, the number of writes will always be bounded by a universal constant. We are interested in the worst-case behavior and we use decision assignment trees of Fredman [14] to measure these complexities. Although there are known quasi-Gray codes with logarithmic read complexity and constant write complexity [14, 27, 3, 4], none of these constructions is space-optimal. The best result misses at least 2n−t strings from the domain when having read complexity t+O(log n) [4]. Despite of an extensive research under many names, e.g., construction of Gray codes [14, 26, 11, 17], dynamic language membership problem [13], efficient representation of integers [27, 4], so far we do not have any quasi-Gray code of length 2n − 2ǫn , for some constant ǫ < 1, with worst-case read complexity o(n) and write complexity o(n). The best worst-case read complexity for spaceoptimal Gray code is n − 1 [15]. Indeed, there are various conjectures [27, 3] on non-existence of space-optimal Gray codes with sub-linear read complexity and constant write complexity. Our main result dispels such conjectures for quasi-Gray codes over Znm , when m is odd. In particular, we construct space-optimal quasi-Gray codes over {0, 1, 2}n with read complexity 4 log3 n and write complexity 2. Theorem 1.1. Let m ∈ N be odd and n ∈ N be sufficiently large. Then, there is a spaceoptimal quasi-Gray code C over Znm for which, the two functions next(C, w) and prev(C, w) can be implemented by inspecting at most 4 logm n cells while writing only 2 cells. In the statement of the above theorem, next(C, w) denotes the element appearing after w in 1 Readers may note that the definition of quasi-Gray code given in [14] was different. The code referred as quasi-Gray code by Fredman [14] is called Gray code of length ℓ where ℓ < mn , in our notation. 2 the cyclic sequence of the code C, and analogously, prev(C, w) denotes the preceding element. In the case of space-optimal quasi-Gray codes over binary strings, the read complexity must be at least Ω(log n) bits [14, 26]. This readily generalizes to a lower bound of Ω (logm n) in the bit-probe model when the domain is Znm . Hence our result is optimal up to some small constant factor. Using a different technique we also get a quasi-Gray code over the binary alphabet which enumerates all but O(n) many strings. This result generalizes to the domain Znq for any prime power q. Theorem 1.2. 
Let n ∈ N be sufficiently large. Then, there is a quasi-Gray code C of length at least 2n −20n over Zn2 , such that the two functions next(C, w) and prev(C, w) can be implemented by inspecting at most 6 + log n cells while writing only 2 cells. We remark that the points that are missing from C in the above theorem are all of the form {0, 1}O(log n) 0n−O(log n) . If we are allowed to read and write constant fraction of n bits then Theorem 1.2 can be adapted to get a quasi-Gray code of length 2n − O(1) (see Section 5). In this way we get a trade-off between length of the quasi-Gray code and the number of bits read in the worst-case. All our constructions can be made uniform. Using Chinese Remainder Theorem (cf. [9]), we also develop a technique that allows us to compose Gray codes over various domains. Hence, from quasi-Gray codes over domains Znm1 , Znm2 , . . . Znmk , where mi ’s are pairwise co-prime, we can construct quasi-Gray codes over ′ Znm , where m = m1 · m2 · · · mk . Using this technique on our main results, we get a quasi-Gray code over Znm for any m ∈ N that misses only O(n · on ) strings where m = 2ℓ o for an odd o, while achieving the read complexity similar to that stated in Theorem 1.1. It is worth mentioning that if we get a space-optimal quasi-Gray code over the binary alphabet with non-trivial savings in read complexity, then we will have a space-optimal quasi-Gray code over the strings of alphabet Zm for any m ∈ N with similar savings. The technique by which we construct our quasi-Gray codes relies heavily on simple algebra which is a substantial departure from previous mostly combinatorial constructions. We view Gray codes as permutations on Znm and we decompose them into k simpler permutations on Znm , each being computable with read complexity 3 and write complexity 1. Then we apply ′ a different composition theorem, than mentioned above, to obtain quasi-Gray codes on Znm , n′ = n + log k, with read complexity O(1) + log k and write complexity 2. The main issue is the decomposition of permutations into few simple permutations. This is achieved by techniques of catalytic computation [5] going back to the the work of Coppersmith and Grossman [8, 1, 2]. It follows from the work of Coppersmith and Grossman [8] that our technique is incapable ′ of designing a space-optimal quasi-Gray code on Zn2 as any such code represents an odd permutation. The tools we use give inherently only even permutations. However, we can construct quasi-Gray codes from cycles of length 2n − 1 on Zn2 as they are even permutations. Indeed, that is what we do for our Theorem 1.2. Any efficiently computable odd permutation on Zn2 , with say read complexity (1 − ǫ)n and write complexity O(1), could be used together with our technique ′ to construct a space-optimal quasi-Gray code on Zn2 with read complexity at most (1 − ǫ′ )n′ and constant write complexity. This would represent a major progress on space-optimal Gray codes. (We would compose the odd permutation with some even permutation to obtain a full cycle on Zn2 . The size of the decomposition of the even permutation into simpler permutations would govern the read complexity of the resulting quasi-Gray code.) Our results imply that any potential lower bound on the complexity of quasi-Gray codes ′ over Zn2 must distinguish between the binary and ternary alphabets with certain consequences on algebraic properties of permutations. 
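The Chinese Remainder composition mentioned above can be illustrated concretely. The sketch below is one natural way to realize such a composition and is only meant as an illustration; we do not claim it matches the construction of Section 3 in detail. Two cyclic, space-optimal Gray codes over Z_2^n and Z_3^n are stepped in lockstep; since their lengths 2^n and 3^n are co-prime, the coordinate-wise CRT pairing of the two current words runs through all 6^n words of Z_6^n, and every step changes at most two coordinates.

from math import gcd

def modular_gray(m, n):
    # a cyclic, space-optimal m-ary Gray code over Z_m^n (the standard modular code)
    words = []
    for i in range(m ** n):
        d = [(i // m ** j) % m for j in range(n)] + [0]          # base-m digits of i
        words.append(tuple((d[j] - d[j + 1]) % m for j in range(n)))
    return words

def crt(a, b, m1, m2):
    # the unique x in Z_{m1*m2} with x = a (mod m1) and x = b (mod m2)
    return next(x for x in range(m1 * m2) if x % m1 == a and x % m2 == b)

n, m1, m2 = 2, 2, 3
c1, c2 = modular_gray(m1, n), modular_gray(m2, n)
assert gcd(len(c1), len(c2)) == 1
joint = [tuple(crt(a, b, m1, m2) for a, b in zip(c1[t % len(c1)], c2[t % len(c2)]))
         for t in range(len(c1) * len(c2))]

assert len(set(joint)) == (m1 * m2) ** n                 # space-optimal over Z_6^n
for u, v in zip(joint, joint[1:] + joint[:1]):
    assert sum(x != y for x, y in zip(u, v)) <= 2        # a 2-Gray code step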
3 In addition to our positive results we provide also an impossibility result for the construction of Gray codes in a certain structured way. 1.1 Related works The construction of Gray codes is central to the design of algorithms for many combinatorial problems [29]. Frank Gray [16] first came up with a construction of Gray code over binary strings of length n, where to generate the successor or predecessor strings one needs to read n bits in the worst-case. The type of code described in [16] is known as binary reflected Gray code. Later Bose et al. [3] provided a different type of Gray code construction, namely recursive partition Gray code which attains O(log n) average case read complexity while having the same worst-case read requirements. The read complexity we referred here is in the bit-probe model. It is easy to observe that any space-optimal binary Gray code must read log n + 1 bits in the worst-case [14, 26, 15] and surprisingly this is the best known lower bound till date. An upper bound of even n − 1 was not known until very recently [15]. This is also the best known so far. Fredman [14] extended the definition of Gray codes by considering codes that may not enumerate all the strings (though presented in a slightly different way in [14]) and also introduced the notion of decision assignment tree (DAT) to study the complexity of any code in bit-probe model. He provided a construction that generates a Gray code of length 2c·n for some constant c < 1 while reducing the worst-case bit-read to O(log n). Using the idea of Lucal’s modified reflected binary code [22], Munro and Rahman [27] got a code of length 2n−1 with worst-case read complexity only 4 + log n. However in their code two successive strings differ by 4 coordinates in the worst-case, instead of just one and we refer to such codes as quasi-Gray codes following the nomenclature used in [3]. Brodal et al. [4] extended the results of [27] by constructing a quasi-Gray code of length 2n − 2n−t for arbitrary 1 ≤ t ≤ n − log n − 1, that has t + 3 + log n bits (t + 2 + log n bits) worst-case read complexity and any two successive strings in the code differ by at most 2 bits (3 bits). In contrast to the Gray codes over binary alphabets, Gray codes over non-binary alphabets received much less attention. The construction of binary reflected Gray code was generalized to the alphabet Zm for any m ∈ N in [12, 7, 18, 28, 19]. However, each of those constructions read n coordinates in the worst-case to generate the next element. As mentioned before, we measure the read complexity in the well studied bit-probe model [34] where we assume that each cell stores an element of Zm . The argument of Fredman in [14] implies a lower bound of Ω(logm n) on the read complexity of quasi-Gray code on Znm . To the best of our knowledge, for non-binary alphabets, there is nothing known similar to the results of Munro and Rahman or Brodal et al. [27, 4]. We summarize the previous results along with ours in Table 1. Additionally, many variants of Gray codes have been studied in the literature. A particular one that has garnered a lot of attention in the past 30 years is the well-known middle levels conjecture. See [23, 24, 25, 17], and the references therein. It has been established only recently [23]. The conjecture says that there exists a Hamiltonian cycle in the graph induced by the vertices on levels n and n + 1 of the hypercube graph in 2n + 1 dimensions. In other words, there exists a Gray code on the middle levels. Mütze et al. 
[24, 25] studied the question of efficiently enumerating such a Gray code in the word RAM model. They [25] gave an algorithm to enumerate a Gray code in the middle levels that requires O(n) space and on average takes O(1) time to generate the next vertex. In this paper we consider the bit-probe model, and Gray codes over the complete hypercube. It would be interesting to know whether our technique can be applied for the middle level Gray codes. 4 Reference [16] [14] [13] [27] [3] [4] [4] [15] Theorem 1.2 [7] Theorem 1.1 Value of m 2 2 2 2 2 2 2 2 2 any m any odd m length 2n 2Θ(n) Θ(2n /n) 2n−1 2n − O(2n /nt ) 2n − 2n−t 2n − 2n−t 2n 2n − O(n) mn mn Worst-case cell read n O(log n) log n + 1 log n + 4 O(t log n) log n + t + 3 log n + t + 2 n−1 log n + 4 n 4 logm n + 3 Worst-case cell write 1 O(1) log n + 1 4 3 2 3 1 2 1 2 Table 1: Taxonomy of construction of Gray/quasi-Gray codes over Znm 1.2 Our technique Our construction of Gray codes relies heavily on the notion of s-functions defined by Coppersmith and Grossman [8]. An s-function is a permutation τ on Znm defined by a function f : Zsm → Zm and an (s + 1)-tuple of indices i1 , i2 , . . . , is , j ∈ [n] such that τ (hx1 , x2 , . . . , xn i) = (hx1 , x2 , . . . , xj−1 , xj + f (xi1 , . . . , xis ), xj+1 , . . . , xn i), where the addition is inside Zm . Each sfunction can be computed by some decision assignment tree that given a vector x = hx1 , x2 , . . . , xn i, inspects s + 1 coordinates of x and then it writes into a single coordinate of x. A counter C (quasi-Gray code) on Znm can be thought of as a permutation on Znm . Our goal is to construct some permutation α on Znm that can be written as a composition of 2-functions α1 , . . . , αk , i.e., α = αk ◦ αk−1 ◦ · · · ◦ α1 . r+n , where r = ⌈log n⌉, Given such a decomposition, we can build another counter C ′ on Zm m for which the function next(C ′ , x) operates as follows. The first r-coordinates of x serve as an instruction pointer i ∈ [mr ] that determines which αi should be executed on the remaining n coordinates of x. Hence, based on the current value i of the r coordinates, we perform αi on the remaining coordinates and then we update the value of i to i + 1. (For i > k we can execute the identity permutation which does nothing.) We can use known Gray codes on Zrm to represent the instruction pointer so that when incrementing i we only need to write into one of the coordinates. This gives a counter C ′ which can be computed by a decision assignment tree that reads r + 3 coordinates and writes into 2 coordinates of x. (A similar composition technique is implicit in Brodal et al. [4].) If C is of length ℓ = mn − t, then C ′ is of length mn+r − tmr . In particular, if C is space-optimal then so is C ′ . Hence, we reduce the problem of constructing 2-Gray codes to the problem of designing large cycles in Znm that can be decomposed into 2-functions. Coppersmith and Grossman [8] studied precisely the question of, which permutations on Zn2 can be written as a composition of 2-functions. They show that a permutation on Zn2 can be written as a composition of 2-functions if and only if the permutation is even. Since Zn2 is of even size, a cycle of length 2n on Zn2 is an odd permutation and thus it cannot be represented as a composition of 2-functions. However, their result also implies that a cycle of length 2n − 1 on Zn2 can be decomposed into 2-functions. We want to use the counter composition technique described above in connection with a cycle of length 2n − 1. 
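The composition just described is easy to simulate on a toy example. In the sketch below (ours; the α_i are ad-hoc single-write updates, not the decompositions constructed later in the paper) the instruction pointer lives in r = 2 extra cells stepped along a cyclic Gray code, each α_i reads at most three cells of x and writes one, and the chosen α_i happen to compose to a cycle of length 2^3 - 1 = 7 on Z_2^3, so the combined counter has length 7 * 2^r = 28 while every step writes at most two cells.

n, r = 3, 2
pointer_code = [(0, 0), (0, 1), (1, 1), (1, 0)]          # cyclic Gray code on Z_2^r

def a1(x): x[0] ^= 1                                     # each alpha_i writes one cell
def a2(x): x[1] ^= x[0]
def a3(x): x[2] ^= x[0] & x[1]
def a4(x): x[0] ^= x[1] & x[2]
alphas = [a1, a2, a3, a4]

def step(state):
    p, x = state
    i = pointer_code.index(p)                            # read the instruction pointer
    y = list(x); alphas[i](y)                            # apply alpha_i: one write in x
    q = pointer_code[(i + 1) % len(pointer_code)]        # advance pointer: one write
    return (q, tuple(y))

x0 = (0, 0, 0)
x, ell = list(x0), 0                                     # cycle length of alpha = a4.a3.a2.a1
while True:
    for a in alphas:
        a(x)
    ell += 1
    if tuple(x) == x0:
        break

state, seen = (pointer_code[0], x0), []
while True:
    seen.append(state)
    nxt = step(state)
    changed = sum(a != b for a, b in zip(state[0], nxt[0])) \
            + sum(a != b for a, b in zip(state[1], nxt[1]))
    assert changed <= 2                                  # at most two cells written
    state = nxt
    if state == seen[0]:
        break
assert len(seen) == len(set(seen)) == ell * len(pointer_code)    # length 7 * 4 = 28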
To maximize the length of the cycle C ′ in Z2n+r , we need to minimize k, the number of 2-functions in the decomposition. By a simple counting argument, most cycles of length 2n − 1 on Zn2 require k to be exponentially large in n. This is too large for our purposes. 5 Luckily, there are cycles of length 2n − 1 on Zn2 that can be decomposed into polynomially many 2-functions, and we obtain such cycles from linear transformations. There are linear transformations Zn2 → Zn2 which define a cycle on Zn2 of length 2n − 1. For example, the matrix corresponding to the multiplication by a fixed generator of the multiplicative group (Zn2 )∗ of the Galois field GF [2n ] is such a matrix. Such matrices are full rank and they can be decomposed into O(n2 ) elementary matrices, each corresponding to a 2-function. Moreover, there are matrices derived from primitive polynomials that can be decomposed into at most ′ ′ 4n elementary matrices.2 We use them to get a counter on Zn2 of length at least 2n − 20n′ whose successor and predecessor functions are computable by decision assignment trees of read complexity ≤ 6 + log n′ and write complexity 2. Such counter represents 2-Gray code of the prescribed length. For any prime q, the same construction yields 2-Gray codes of length at least ′ q n − 5q 2 n′ with decision assignment trees of read complexity ≤ 6 + logq n′ and write complexity 2. The results of Coppersmith and Grossman [8] can be generalized to Znm as stated in Richard Cleve’s thesis [6].3 For odd m, if a permutation on Znm is even then it can be decomposed into 2-functions. Since mn is odd, a cycle of length mn on Znm is an even permutation and so it can be decomposed into 2-functions. If the number k of those functions is small, so the logm k is small, we get the sought after counter with small read complexity. However, for most cycles of length mn on Znm , k is exponential in n. We show though, that there is a cycle α of length mn on Znm that can be decomposed ′ into O(n3 ) 2-functions. This in turn gives space-optimal 2-Gray codes on Znm with decision assignment trees of read complexity O(logm n′ ) and write complexity 2. We obtain the cycle α and its decomposition in two steps. First, for i ∈ [n], we consider the permutation αi on Znm which maps each element 0i−1 ay onto 0i−1 (a + 1)y, for a ∈ Zm and n−i , while other elements are mapped to themselves. Hence, α is a product of mn−i y ∈ Zm i disjoint cycles of length m. We show that α = αn ◦ αn−1 ◦ · · · ◦ α1 is a cycle of length mn . In the next step we decompose each αi into O(n2 ) 2-functions. For i ≤ n − 2, we can decompose αi using the technique of Ben-Or and Cleve [2] and its refinement in the form of catalytic computation of Buhrman et al. [5]. We can think of x ∈ Znm as content of n memory registers, where x1 , . . . , xi−1 are the input registers, xi is the output register, and xi+1 , . . . , xn are the working registers. The catalytic computation technique gives a program consisting of O(n2 ) instructions, each being equivalent to a 2-function, which performs the desired adjustment of xi based on the values of x1 , . . . , xi−1 without changing the ultimate values of the other registers. (We need to increment xi iff x1 , . . . , xi−1 are all zero.) This program directly gives the desired decomposition of αi , for i ≤ n − 2. (Our proof in Section 6 uses the language of permutations.) The technique of catalytic computation fails for αn−1 and αn as the program needs at least two working registers to operate. 
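As a quick sanity check of the claim that α = α_n ◦ ⋯ ◦ α_1 is a single cycle of length m^n, the following brute-force snippet (our illustration; it is only feasible for small parameters) verifies it for m = 3 and n = 4.

# Brute-force check (ours): alpha_i adds 1 (mod m) to coordinate i exactly
# when coordinates 1, ..., i-1 are all zero, and the composition
# alpha = alpha_n o ... o alpha_1 is a single cycle through all of Z_m^n.

def alpha_i(i, x, m):
    """The permutation alpha_i applied to the tuple x (i is 1-based)."""
    if all(c == 0 for c in x[:i - 1]):
        x = x[:i - 1] + ((x[i - 1] + 1) % m,) + x[i:]
    return x

def alpha(x, m):
    """alpha = alpha_n o ... o alpha_1, so alpha_1 is applied first."""
    for i in range(1, len(x) + 1):
        x = alpha_i(i, x, m)
    return x

m, n = 3, 4
start = (0,) * n
x, seen = start, set()
while x not in seen:
    seen.add(x)
    x = alpha(x, m)
assert x == start and len(seen) == m ** n   # one cycle of length m^n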
Hence, for αn−1 and αn we have to develop entirely different technique. This is not trivial and quite technical but it is nevertheless possible, thanks to the specific structure of αn−1 and αn . Organization of the paper In Section 2 we define the notion of counter, Gray code and our computational model, namely decision assignment tree and also provide certain known results regarding the construction of Gray codes. Then in Section 3 we describe how to combine counters over smaller alphabets to 2 Primitive polynomials were previously also used in a similar problem, namely to construct shift-register sequences (see e.g. [19]). 3 Unfortunately, there is no written record of the proof. 6 get another counter over larger alphabet, by introducing the Chinese Remainder Theorem for counters. Next we provide some basic facts about the permutation group and the underlying structure behind all of our constructions of quasi-Gray codes. We devote Section 5 to construction of a quasi-Gray code over binary alphabet that misses only a few words, by using full rank linear transformation. In Section 6 we construct a space-optimal quasi-Gray code over any oddsize alphabet. Finally in Section 7 we rule out the existence of certain kind of space-optimal binary counters. 2 Preliminaries In the rest of the paper we only present constructions of the successor function next(C, w) for our codes. Since all the operations in those constructions are readily invertible, the same arguments also give the predecessor function prev(C, w). Notations: We use the standard notions of groups and fields, and mostly we will use only elementary facts about them (see [10, 21] for background.). By Zm we mean the set of integers modulo m, i.e., Zm := Z/mZ. Throughout this paper whenever we use addition and multiplication operation between two elements of Zm , then we mean the operations within Zm that is modulo m. For any m ∈ N, we let [m] denote the set {1, 2, . . . , m}. Unless stated otherwise explicitly, all the logarithms we consider throughout this paper are based 2. Now we define the notion of counters used in this paper. Definition 1 (Counter). A counter of length ℓ over a domain D is any cyclic sequence C = (w1 , . . . , wℓ ) such that w1 , . . . , wℓ are distinct elements of D. With the counter C we associate two functions next(C, w) and prev(C, w) that give the successor and predecessor element of w in C, that is for i ∈ [ℓ], next(C, wi ) = wj where j − i = 1 mod ℓ, and prev(C, wi ) = wk where i − k = 1 mod ℓ. If ℓ = |D|, we call the counter a space-optimal counter. Often elements in the underlying domain D have some “structure” to them. In such cases, it’s desirable to have a counter such that consecutive elements in the sequence differ by a “small” change in the “structure”. We make this concrete in the following definition. Definition 2 (Gray Code). Let D1 , . . . , Dn be finite sets. A Gray code of length ℓ over the domain D = D1 × · · · × Dn is a counter (w1 , . . . , wℓ ) of length ℓ over D such that any two consecutive strings wi and wj , j − i = 1 mod ℓ, differ in exactly one coordinate when viewed as an n-tuple. More generally, if for some constant c ≥ 1, any two consecutive strings wi and wj , j − i = 1 mod ℓ, differ in at most c coordinates such a counter is called a c-Gray Code. By a quasi-Gray code we mean c-Gray code for some unspecified fixed c > 0. In the literature sometimes people do not place any restriction on the relationship between wℓ and w1 and they refer to such a sequence a (quasi)-Gray code. 
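To make Definitions 1 and 2 concrete, the snippet below (our illustration; the flip rule is the standard one for the binary reflected Gray code recalled in Section 2.1, not a construction of this paper) computes the cyclic successor in the n-bit reflected code. Note that deciding which bit to flip examines all n bits in the worst-case, which is exactly the read cost the constructions in this paper aim to avoid.

# Cyclic successor of the n-bit binary reflected Gray code (standard rule):
# if the word has an even number of ones, flip the lowest bit; otherwise flip
# the bit just above the lowest set bit (flipping the highest bit instead when
# that position does not exist, which is the wrap-around step).

def brgc_next(word):                     # word[0] is the least significant bit
    n = len(word)
    if sum(word) % 2 == 0:
        j = 0
    else:
        j = word.index(1) + 1
        if j == n:
            j = n - 1
    return word[:j] + [1 - word[j]] + word[j + 1:]

# Sanity check for n = 3: starting from 000 we get a cyclic 1-Gray code that
# is space-optimal, i.e. it visits all 2^3 strings.
w, seen = [0, 0, 0], set()
while tuple(w) not in seen:
    seen.add(tuple(w))
    w = brgc_next(w)
assert len(seen) == 8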
In their terms, our codes would be cyclic (quasi)Gray codes. If ℓ = |D|, we call the codes space-optimal (quasi-)Gray codes. Decision Assignment Tree: The computational model we consider in this paper is called Decision Assignment Tree (DAT). The definition we provide below is a generalization of that given in [14]. It is intended to capture random access machines with small word size. Let us fix an underlying domain D n whose elements we wish to enumerate. In the following, we will denote an element in D n by hx1 , x2 , . . . , xn i. A decision assignment tree is a |D|-ary tree such that each internal node is labeled by one of the variables x1 , x2 , . . . , xn . Furthermore, each outgoing edge of an internal node is labeled with a distinct element of D. Each leaf node of the 7 tree is labeled by a set of assignment instructions that set new (fixed) values to chosen variables. The variables which are not mentioned in the assignment instructions remain unchanged. The execution on a decision assignment tree on a particular input vector hx1 , x2 , . . . , xn i ∈ D n starts from the root of the tree and continues in the following way: at a non-leaf node labeled with a variable xi , the execution queries xi and depending on the value of xi the control passes to the node following the outgoing edge labeled with the value of xi . Upon reaching a leaf, the corresponding set of assignment statements is used to modify the vector hx1 , x2 , . . . , xn i and the execution terminates. The modified vector is the output of the execution. Thus, each decision assignment tree computes a mapping from D n into D n . We are interested in decision assignment trees computing the mapping next(C, hx1 , x2 , . . . , xn i) for some counter C. When C is space-optimal we can assume, without loss of generality, that each leaf assigns values only to the variables that it reads on the path from the root to the leaf. (Otherwise, the decision assignment tree does not compute a bijection.) We define the read complexity of a decision assignment tree T , denoted by READ(T ), as the maximum number of non-leaf nodes along any path from the root to a leaf. Observe that any mapping from D n into D n can be implemented by a decision assignment tree with read complexity n. We also define the write complexity of a decision assignment tree T , denoted by WRITE(T ), as the maximum number of assignment instructions in any leaf. Instead of the domain D n , we will sometimes also use domains that are a cartesian product of different domains. The definition of a decision assignment tree naturally extends to this case of different variables having different domains. For any counter C = (w1 , . . . , wℓ ), we say that C is computed by a decision assignment tree T if and only if for i ∈ [ℓ], next(C, wi ) = T (wi ), where T (wi ) denotes the output string obtained after an execution of T on wi . Note that any two consecutive strings in the cyclic sequence of C differ by at most WRITE(T ) many coordinates. For a small constant c ≥ 1, some domain D, and all large enough n, we will be interested in constructing cyclic counters on D n that are computed by decision assignment trees of write complexity c and read complexity O(log n). By the definition such cyclic counters will necessarily be c-Gray codes. 2.1 Construction of Gray codes For our construction of quasi-Gray codes on a domain D n with decision assignment trees of small read and write complexity we will need ordinary Gray codes on a domain D O(log n) . 
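As a toy illustration of both the decision assignment tree model defined above and the Gray codes this subsection is about, the following Python sketch (ours, not part of the paper) simulates a DAT given as nested tuples and runs the tree that implements the cyclic 2-bit Gray code 00, 01, 11, 10; this tree has read complexity 2 and write complexity 1.

# A tiny simulator for decision assignment trees (our illustration).  An
# internal node is ("read", i, {value: child, ...}); a leaf is
# ("write", {i: new_value, ...}).  Running the tree on x returns the modified
# vector, i.e. next(C, x) for the counter C that the tree implements.

def run_dat(node, x):
    if node[0] == "read":
        _, i, children = node
        return run_dat(children[x[i]], x)
    _, assignments = node
    y = list(x)
    for i, v in assignments.items():       # a leaf overwrites only listed coords
        y[i] = v
    return tuple(y)

# DAT for the cyclic 2-bit Gray code (0,0) -> (0,1) -> (1,1) -> (1,0) -> (0,0):
# it reads both coordinates but writes exactly one of them.
gray2 = ("read", 0, {
    0: ("read", 1, {0: ("write", {1: 1}),     # (0,0) -> (0,1)
                    1: ("write", {0: 1})}),   # (0,1) -> (1,1)
    1: ("read", 1, {1: ("write", {1: 0}),     # (1,1) -> (1,0)
                    0: ("write", {0: 0})}),   # (1,0) -> (0,0)
})

x = (0, 0)
for expected in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    x = run_dat(gray2, x)
    assert x == expected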
Several constructions of space-optimal binary Gray codes are known where the oldest one is the binary reflected Gray code [16]. This can be generalized to space-optimal (cyclic) Gray codes over non-binary alphabets (see e.g. [7, 19]). Theorem 2.1 ([7, 19]). For any m, n ∈ N, there is a space-optimal (cyclic) Gray code over Znm . 3 Chinese Remainder Theorem for Counters In this paper we consider quasi-Gray codes over Znm for m ∈ N. Below we describe how to compose decision assignment trees over different domains to get a decision assignment tree for a larger mixed domain. Theorem 3.1 (Chinese Remainder Theorem for Counters). Let r, n1 , . . . , nr ∈ N be integers, and let D1,1 , . . . , D1,n1 , D2,1 , . . . , Dr,nr be some finite sets of size at least two. Let ℓ1 ≥ r − 1 be an integer, and ℓ2 , . . . , ℓr be pairwise co-prime integers. For i ∈ [r], let Ci be a counter of length ℓi over Di = Di,1 × · · · × Di,ni computed by a decision assignment tree Ti over ni variables. Then, 8 Pr there exists a decision assignment tree T over i=1 ni variables that implements a counter C of Qr length i=1 ℓi over D1 × · · · × Dr . Furthermore, READ(T ) = n1 + max{READ(Ti )}ri=2 , and WRITE(T ) = WRITE(T1 ) + max{WRITE(Ti )}ri=2 . Proof. For any i ∈ [r], let the counter Ci = (wi,1 , . . . , wi,ℓi ). Let x1 , . . . , xr be variables taking values in D1 , . . . , Dr , respectively. The following procedure, applied repeatedly, defines the counter C: If x1 = w1,i for some i ∈ [r − 1] then xi+1 ← next(Ci+1 , xi+1 ) x1 ← next(C1 , x1 ) else x1 ← next(C1 , x1 ). It is easily seen that the above procedure defines a valid cyclic sequence when starting at w1,i1 , . . . , wr,ir for any hi1 , i2 , . . . , ir i ∈ [ℓ1 ] × · · · × [ℓr ]. That is, every element has a unique predecessor and a unique successor, and that the sequence is cyclic. It can easily be implemented by a decision assignment tree, say T . First it reads the value of x1 . Since x1 ∈ D1 = D1,1 × · · · × D1,n1 , it queries n1 components. Then, depending on the value of x1 , it reads and updates another component, say xj . This can be accomplished using the decision assignment tree Tj . We also update the value of x1 , and to that end we use the appropriate assignments from decision assignment tree T1 . Observe that irrespective of how efficient T1 is, we read x1 completely to determine which of the remaining r − 1 counters to update. Hence, READ(T ) = n1 + max {READ(Ti )}ri=2 , and WRITE(T ) = WRITE(T1 ) + max {WRITE(Ti )}ri=2 . Q Now it only remains to show that the counter described above is indeed of length ri=1 ℓi . Thus, it suffices to establish that starting with the string hw1,1 , . . . , wr,1 i, we can generate the string hw1,i1 , . . . , wr,ir i for any hi1 , . . . , ir i ∈ [ℓ1 ] × · · · × [ℓr ]. Let us assume i1 = 1. At the end of the proof we will remove this assumption. Suppose the string hw1,1 , w2,i2 , . . . , wr,ir i is reachable from hw1,1 , w2,1 , . . . , wr,1 i in t steps. As our procedure always increment x1 , t must be divisible by ℓ1 . Let d = t/ℓ1 . Furthermore, the procedure increments a variable xi , i 6= 1, exactly after ℓ1 steps. Thus, hw1,1 , w2,i2 , . . . , wr,ir i is reachable if and only if d satisfies the following equations: d ≡ i2 − 1 (mod ℓ2 ) d ≡ i3 − 1 (mod ℓ3 ) .. . d ≡ ir − 1 (mod ℓr ). Since ℓ2 , . . . , ℓr are pairwise co-prime, Chinese Remainder Theorem (for a reference, see [9]) Q guarantees the existence of a unique integral solution d such that 0 ≤Qd < ri=2 ℓi . Hence, hw1,1 , w2,,i2 , . . . 
, wr,ir i is reachable from hw1,1 , w2,1 , . . . , wr,1 i in at most ri=1 ℓi steps. Now we remove the assumption i1 = 1, i.e., w1,i1 6= w1,1 . Consider the string hw1,1 , w2,i′ , . . . , wr,i′ i r 2 where wj,i′ = wj,ij −1 for 2 ≤ j ≤ min{i1 , r}, and wj,i′ = wj,ij for j > min{i1 , r}. From the j j arguments in the previous paragraph, we know that this tuple is reachable. We now observe that the next i1 − 1 steps increment w1,1 to w1,i1 and wj,i′ to wj,ij for 2 ≤ j ≤ min{i1 , r}, thus, j reaching the desired string hw1,i1 , . . . , wr,ir i. Remark. We remark that if Ci ’s are space-optimal in Theorem 3.1, then so is C. 9 In the above proof, we constructed a special type of a counter where we always read the first coordinate, incremented it, and further depending on its value, we may update the value of another coordinate. From now on we refer to such type of counters as hierarchical counters. In Section 7 we will show that for such type of a counter the co-primality condition is necessary at least for ℓ1 = 2, 3. One can further note that the above theorem is similar to the well known Chinese Remainder Theorem and has similar type of application for constructing of space-optimal quasi-Gray codes over Znm for arbitrary m ∈ N. Lemma 3.2. Let n, m ∈ N be such that m = 2k o, where o is odd and k ≥ 0. Given decision assignment trees T1 and T2 computing space-optimal (quasi-)Gray codes over (Z2k )n−1 and Zon−1 , respectively, there exists a decision assignment tree T implementing a space-optimal quasi-Gray code over Znm such that READ(T ) = 1 + max{READ(T1 ), READ(T2 )}, and WRITE(T ) = 1 + max{WRITE(T1 ), WRITE(T2 )}. Proof. We will view Znm as Zm × (Z2k )n−1 × (Zo )n−1 and simulate a decision assignment tree operating on Zm × (Z2k )n−1 × (Zo )n−1 on Znm . From the Chinese Remainder Theorem (see [9]), we know that there exists a bijection (in fact, an isomorphism) f : Zm → Z2k × Zo . We denote the tuple f (z) by hf1 (z), f2 (z)i. From Theorem 3.1 we know that there exists a decision assignment tree T ′ over Zm × (Z2k )n−1 × (Zo )n−1 computing a space-optimal quasiGray code such that READ(T ′ ) = 1 + max{READ(T1 ), READ(T2 )}, and WRITE(T ′ ) = 1 + max{WRITE(T1 ), WRITE(T2 )}. We can simulate actions of T ′ on an input Znm to obtain the desired decision assignment tree T . Indeed, whenever T ′ queries x1 , T queries the first coordinate of its input. Whenever T ′ queries the i-th coordinate of (Z2k )n−1 , T queries the (i+1)-th coordinate of its input and makes its decision based on the f1 (·) value of that coordinate. Similarly, whenever T ′ queries the j-th coordinate of (Zo )n−1 , T queries the (j + 1)-th coordinate and makes its decision based on the f2 (·) value of that coordinate. Assignments by T are handled in similar fashion by updating only the appropriate part of hf1 (z), f2 (z)i. (Notice, queries made by T might reveal more information than queries made by T ′ .) Before proceeding further, we would also like to point out that to get a space-optimal decision assignment tree over Z2k , it suffices to a get space-optimal decision assignment trees over Z2 for arbitrary dimensions. Thus, to get a decision assignment tree implementing space-optimal quasi-Gray codes over Zm , we only need decision assignment trees implementing space-optimal quasi-Gray codes over Z2 and Zo . This also justifies our sole focus on construction of spaceoptimal decision assignment trees over Z2 and Zo in the later sections. Lemma 3.3. 
If, for all n ∈ N, there exists a decision assignment tree T implementing a spaceoptimal (quasi-)Gray code over Zn2 , then for any k and n ∈ N, there exists a decision assignment tree T ′ implementing a space-optimal (quasi-)Gray code over (Z2k )n such that the read and write complexity remain the same. Proof. Consider any bijective map f : Z2k → Zk2 . For example, one can take standard binary encoding of integers ranging from 0 to 2k − 1 as the bijective map f . Next, define another map g : (Z2k )n → Zkn 2 as follows: g(x1 , . . . , xn ) = hf (x1 ), . . . , f (xn )i. Now consider T that implements a space-optimal (quasi-)Gray code over Zkn 2 . We fix a partition of the variables {1, . . . , k} ⊎ · · · ⊎ {(n − 1)k + 1, . . . , nk} into n blocks of k variables each. We now construct a decision assignment tree T ′ over (Z2k )n using T and the map f . As in the proof of Lemma 3.2, our T ′ follows T in the decision making. That is, if T queries a variable, then T ′ queries the block in the partition where the variable lies. (Again, as noted before, T ′ may get more information than required by T .) Upon reaching a leaf, using f , T ′ updates the blocks depending on T ’s updates to the variables. 10 We devote the rest of the paper to the construction of counters over Zn2 , and Znm for any odd m. 4 Permutation Group and Decomposition of Counters We start this section with some basic notation and facts about the permutation group which we will use heavily in the rest of the paper. The set of all permutations over a domain D forms a group under the composition operation, denoted by ◦, which is defined as follows: for any two permutations σ and α, σ ◦ α(x) = σ(α(x)), where x ∈ D. The corresponding group, denoted SN , is the symmetric group of order N = |D|. We say, a permutation σ ∈ SN is a cycle of length ℓ if there are distinct elements a1 , . . . , aℓ ∈ [N ] such that for i ∈ [ℓ − 1], ai+1 = σ(ai ), a1 = σ(aℓ ), and for all a ∈ [N ] \ {a1 , a2 , . . . , aℓ }, σ(a) = a. We denote such a cycle by (a1 , a2 , · · · , aℓ ). Below we state few simple facts about composition of cycles. Proposition 4.1. Consider two cycles C1 = (t, a1 , · · · , aℓ1 ) and C2 = (t, b1 , · · · , bℓ2 ) where for any i ∈ [ℓ1 ] and j ∈ [ℓ2 ], ai 6= bj . Then, C = C2 ◦ C1 is the cycle (t, a1 , · · · , aℓ1 , b1 , · · · , bℓ2 ) of length ℓ1 + ℓ2 + 1. Proposition 4.2. If σ ∈ SN is a cycle of length ℓ, then for any α ∈ SN , α ◦ σ ◦ α−1 is also a cycle of length ℓ. Moreover, if σ = (a1 , a2 , · · · , aℓ ), then α ◦ σ ◦ α−1 = (α(a1 ), α(a2 ), · · · , α(aℓ )). The permutation α ◦ σ ◦ α−1 is called the conjugate of σ with respect to α. The above proposition is a special case of a well known fact about the cycle structure of conjugates of any permutation and can be found in any standard text book on Group Theory (e.g., Proposition 10 in Chapter 4.3 of [10].). Roughly speaking, a counter of length ℓ over D, in the language of permutations, is nothing but a cycle of the same length in S|D| . We now make this correspondence precise and give a construction of a decision assignment tree that implements such a counter. Lemma 4.3. Let D = D1 × · · · × Dr be a domain. Suppose σ1 , . . . , σk ∈ S|D| are such that σ = σk ◦ σk−1 ◦ · · · ◦ σ1 is a cycle of length ℓ. Let T1 , . . . , Tk be decision assignment trees that implement σ1 , . . . , σk respectively. Let D ′ = D ′ 1 × · · · × D ′ r′ be a domain such that |D ′ | ≥ k, and let T ′ be a decision assignment tree that implements a counter C ′ of length k′ over D ′ where k′ ≥ k. 
Then, there exists a decision assignment tree T that implements a counter of length k′ ℓ over ′ D × D such that READ(T ) = r ′ + max{READ(Ti )}ki=1 , and WRITE(T ) = WRITE(T ′ ) + max{WRITE(Ti )}ki=1 . Proof. Suppose C ′ = (a1 , . . . , ak′ ). Now let us consider the following procedure P : on any input hx1 , x2 i ∈ D ′ × D, If x1 = aj for some j ∈ [k] then x2 ← σj (x2 ) x1 ← next(C ′ , x1 ) else x1 ← next(C ′ , x1 ). Now using a similar argument as in the proof of Theorem 3.1, the above procedure is easily seen to be implementable using a decision assignment tree T of the prescribed complexity. Each 11 time we check the value of x1 ∈ D ′ = D ′ 1 × · · · × D ′ r′ . Thus, we need to read r ′ components. Depending on the value of x1 , we may apply σj on x2 using the decision assignment tree Tj . Then we update the value of x1 . Hence, READ(T ) = r ′ + max{READ(Ti )}ki=1 , and WRITE(T ) = WRITE(T ′ ) + max{WRITE(Ti )}ki=1 . Let (w1 , w2 · · · , wℓ ) be the cycle of length ℓ given by σ. We now argue that the procedure P generates a counter of length k′ ℓ over D ′ × D starting at ha1 , w1 i. Without loss of generality, let us assume that σ = σk′ ◦ · · · ◦ σk+1 ◦ σk ◦ σk−1 ◦ · · · ◦ σ1 , where for j ≥ k + 1, σj is the identity map. Fix j ∈ [k′ ]. Define αj = σj−1 ◦ · · · ◦ σ1 , and τj = αj ◦ σ ◦ α−1 j = σj−1 ◦ · · · ◦ σ1 ◦ σk ′ ◦ · · · ◦ σj . ′ ′ ik ik For i = 0, 1, . . . , ℓ, let hgi , ei i = P (haj , αj (w1 )i) where P denotes ik′ invocations of P . Since P increments x1 in every invocation, for i = 1, 2, . . . , ℓ, gi = aj and ei = τj (ei−1 ). By Proposition 4.2, τj is a cycle (αj (w1 )αj (w2 ) · · · αj (wℓ )) of length ℓ. Hence, e1 , . . . , eℓ are all distinct and eℓ = e0 . As a consequence we conclude that for any x ∈ D ′ ×D and 1 ≤ j1 6= j2 ≤ k′ ℓ, P j1 (x) 6= P j2 (x) ′ and P k ℓ (x) = x. This completes the proof. In the next two sections we describe the construction of σ1 , · · · , σk ∈ SN where N = mn for some m, n ∈ N and how the value of k depends on the length of the cycle σ = σk ◦ σk−1 ◦ · · · ◦ σ1 . 5 Counters via Linear Transformation The construction in this section is based on linear transformations. Consider the vector space Fnq , and let L : Fnq → Fnq be a linear transformation. A basic fact in linear algebra says that if L has full rank, then the mapping given by L is a bijection. Thus, when L is full rank, the mapping can also be thought of as a permutation over Fnq . Throughout this section we use many basic terms related to linear transformation without defining them, for the details of which we refer the reader to any standard text book on linear algebra (e.g. [20]). A natural way to build counter out of a full rank linear transformation is to fix a starting element, and repeatedly apply the linear transformation to obtain the next element. Clearly this only list out elements in the cycle containing the starting element. Therefore, we would like to choose the starting element such that we enumerate the largest cycle. Ideally, we would like the largest cycle to contain all the elements of Fnq . However this is not possible because any linear transformation fixes the all-zero vector. But there do exist full rank linear transformations such that the permutation given by them is a single cycle of length q n −1. Such a linear transformation would give us a counter over a domain of size q n that enumerates all but one element. Clearly, a trivial implementation of the aforementioned argument would lead to a counter that reads and writes all n coordinates in the worst-case. 
In the rest of this section, we will develop an implementation and argue about the choice of linear transformation such that the read and write complexity decreases exponentially. We start with recalling some basic facts from linear algebra. Definition 3 (Linear Transformation). A map L : Fnq → Fnq is called a linear transformation if L(c · x + y) = cL(x) + L(y), for all x, y ∈ Fnq and c ∈ Fq . It is well known that every linear transformation L is associated with some matrix A ∈ Fqn×n such that applying the linear transformation is equivalent to the left multiplication by A. That is, L(x) = Ax where we interpret x as a column vector. Furthermore, L has full rank iff A is invertible over Fq . Definition 4 (Elementary matrices). An n × n matrix over a field F is said to be an elementary matrix if it has one of the following forms: 12 (a) The off-diagonal entries are all 0. For some i ∈ [n], (i, i)-th entry is a non-zero c ∈ F. Rest of the diagonal entries are 1. For a fixed i, we denote all matrices of this type by Ei,i . (See Fig. 1.) (b) The diagonal entries are all 1. For some i and j, 1 ≤ i 6= j ≤ n, (i, j)-th entry is a non-zero c ∈ F. Rest of the off-diagonal entries are 0. For each i and j, i 6= j, we denote all matrices of this type by Ei,j . (See Fig. 1.) j i  (a)      i     1 ..  . 1 c 1 .. . 1            or, (b)        i   1 ..  . 1 c .. . 1 .. . 1           Figure 1: Elementary matrices From the definition it is easy to see that left multiplication by an elementary matrix of type Ei,i is equivalent to multiplying the i-th row with c, and by an elementary matrix of type Ei,j is equivalent to adding c times j-th row to the i-th row. Proposition 5.1. Let A ∈ Fn×n be invertible. Then A can be written as a product of k elementary matrices such that k ≤ n2 + 4(n − 1). Proof. Consider the inverse matrix A−1 which is also full rank. It is easily seen from Gauss elimination that by left multiplying A−1 with at most n2 many elementary matrices, we can reduce A−1 to a permutation matrix. A permutation matrix is a {0, 1}-matrix that has exactly one 1 in each row and column. Now we need at most (n − 1) row swaps to further reduce the obtained permutation matrix to the identity matrix. We claim that a row swap can be implemented by left multiplication with at most 4 elementary matrices. Indeed, to swap row i and row j, the following sequence of operation suffices: (i) add j-th row to i-th row, (ii) subtract i-th row from j-th row, (iii) add j-th row to i-th row, and (iv) multiply j-th row with −1. (The last operation is not required if the characteristic of the underlying field is 2.) Hence, the inverse of A−1 which is our original matrix A is the product of k elementary matrices. 5.1 Construction of the counter Let A be a full rank linear transformation from Fnq to Fnq such that when viewed as permutation n it is a single cycle of length q n − 1. In other words, A(q −1) = I but for a non-zero j < q n − 1, Aj 6= I where I is the identity matrix. Furthermore, let A = Ek Ek−1 · · · E1 where Ei ’s are elementary matrices. Theorem 5.2. Let q, A, and k be as defined above. Let r ≥ logq k. There exists a quasiGray code on the domain (Fq )n+r of length q n+r − q r that can be implemented using a decision assignment tree T such that READ(T ) ≤ r + 2 and WRITE(T ) ≤ 2. 13 Proof. 
The proof follows readily from Lemma 4.3, where Ei ’s play the role of σi ’s, and noting that the permutation given by any elementary matrix can be implemented using a decision assignment tree that reads at most two coordinates and writes at most one. For the counter C ′ on (Fq )r we chose a Gray code of trivial read complexity r and write complexity 1. Thus, we obtain a counter on a domain of size roughly kq n that misses at most qk elements. Clearly, we would like to minimize k. A trivial bound on k is O(n2 ) that follows from Proposition 5.1. We now discuss the choice of A so that k becomes O(n). We start with recalling a notion of primitive polynomials over finite fields. Definition 5 (Primitive polynomials). A primitive polynomial p(z) ∈ Fq [z] of degree n is a monic irreducible polynomial over Fq such that any root of it in Fqn generates the multiplicative group of Fqn . Theorem 5.3 ([21]). The number of primitive polynomials of degree n over Fq equals φ(q n −1)/n, where φ(·) is the Euler φ-function. Let p(z) be a primitive polynomial of degree n over Fq . The elements of Fqn can be uniquely expressed as a polynomial in z over Fq of degree at most n − 1. In particular, we can identify an element of Fqn with a vector in Fnq that is given by the coefficient vector of the unique polynomial expression of degree at most n − 1. But, since p(z) is primitive, we also know that Fqn = n {0, 1, z, z 2 , . . . , z q −2 }. This suggests a particularly nice linear transformation to consider: left multiplication by z. This is so because the matrix A of the linear transformation computing the multiplication by z is very sparse. In particular, if p(z) = z n +cn−1 z n−1 +cn−2 z n−2 +· · ·+c1 z+c0 , then A looks as follows:   −cn−1 1 0 · · · 0 0 0 −cn−2 0 1 · · · 0 0 0   −cn−3 0 0 · · · 0 0 0    .. .. .. . . .. .. ..  .  . . . . . . .    −c2 0 0 · · · 0 1 0    −c1 0 0 · · · 0 0 1 −c0 0 0 · · · 0 0 0 Thus, from the proof of Proposition 5.1, it follows that A can be written as a product of at most n + 4(n − 1) elementary matrices. (When q is a power of 2, then the number of elementary matrices in the product is at most n + 3(n − 1).) Hence, from the discussion above and using Theorem 5.2, we obtain the following corollaries. Setting r = ⌈log(4n − 3)⌉ in Theorem 5.2 gives: Corollary 5.4. For any n′ ≥ 2, and n = n′ + ⌈log(4n′ − 3)⌉, there exists a counter on (Z2 )n that misses at most 8n strings and can be implemented by a decision assignment tree that reads at most 4 + log n bits and writes at most 2 bits. By doubling the number of missed strings and increasing the number of read bits by one we can construct given counters for any Zn2 , where n ≥ 15. In the above corollary the number of missed strings grows linearly with n. One might wonder if it is possible to keep the number of missing strings o(n), while keeping the read complexity essentially the same. The next corollary shows that this is indeed possible, but at the cost of increasing the write complexity. Corollary 5.5. For n ≥ 2, there exists a counter on (Z2 )n+O(log n) that misses out at most O(n/⌈log n⌉) strings. Furthermore, it can be implemented by a decision assignment tree that reads and writes at most O(log n) bits. 14 Proof. The idea is simply to increase the underlying alphabet size. In particular, let q = 2⌈log n⌉ in Theorem 5.2. n We also remark that by taking q to be 2 C , where C > 1 is a universal constant, one would get a counter on (Z2 )n+O(1) that misses only O(C) strings (i.e. independent of n). 
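For a small concrete instance of the matrices discussed above, the following check (ours; it uses the primitive polynomial p(z) = z^4 + z + 1 over F_2 and writes multiplication by z in the basis 1, z, z^2, z^3, so the matrix layout differs from the display above only by the choice of basis) verifies that the multiplication-by-z matrix has multiplicative order 2^4 − 1 = 15, so iterating it from any nonzero vector enumerates all nonzero vectors of (F_2)^4.

# Order check (ours) for multiplication by z on F_2[z]/(z^4 + z + 1),
# written in the basis 1, z, z^2, z^3.  Columns are the images of the basis
# vectors; the last column encodes z^4 = z + 1.

n = 4
A = [[0, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

def mat_vec(M, v):                         # matrix-vector product over F_2
    return tuple(sum(M[i][j] * v[j] for j in range(n)) % 2 for i in range(n))

x = start = (1, 0, 0, 0)                   # the field element 1
orbit = set()
while x not in orbit:
    orbit.add(x)
    x = mat_vec(A, x)
assert x == start and len(orbit) == 2 ** n - 1   # a single cycle of length 15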
However, the read and write complexity gets worse. They are at most 2 Cn + O(1). For the general case, when q is a prime power, we obtain the following corollary by setting r to ⌈logq (5n − 4)⌉ or 1 + ⌈logq (5n − 4)⌉ in Theorem 5.2. Corollary 5.6 (Generalization of Theorem 1.2). Let q be any prime power. For n ≥ 15, there exists a counter on Znq that misses at most 5q 2 n strings and that is computed by a decision assignment tree with read complexity at most 6 + logq n and write complexity 2. Remark. We remark that the algorithm given in this section can be made uniform. To achieve that we need to obtain a primitive polynomial of degree n over Fq uniformly. To this end, we can use a number of algorithms (deterministic or probabilistic); for example, [30, 33, 31]. For a thorough treatment, we refer to Chapter 2 in [32]. 6 Space-optimal Counters over Znm for any Odd m This section deals with space-optimal counter over an odd-size alphabet. We start by recalling Theorem 1.1 in terms of decision assignment tree complexity. Theorem 6.1 (Restatement of Theorem 1.1). For any odd m ∈ N and n sufficiently large, there is a space-optimal 2-Gray code over Znm that can be computed by a decision assignment tree T such that READ(T ) ≤ 4 logm n. Before providing our construction of the quasi-Gray code, we give a short overview of how the construction will proceed. First we set n′ = n − c · log n for some constant c > 0 that will ′ be fixed later. Then we define suitable permutations α1 , . . . , αn′ ∈ SN where N = mn such ′ that their composition αn′ ◦ · · · ◦ α1 is a cycle of length mn . Next we show that each αi can be further decomposed into αi,1 , . . . , αi,j ∈ SN for some j, such that each αi,r for r ∈ [j] can be computed using a decision assignment tree with read complexity 3 and write complexity 1. Finally to complete the construction we use Lemma 4.3 with αi,r ’s playing the role of σ1 , . . . , σk in the lemma. We recall the notion of r-functions over Znm that was introduced by Coppersmith and Grossman [8] for m = 2. Below we generalize that definition for any m ∈ N. Definition 6. For any r ∈ [n − 1], an r-function on Znm is a permutation τ over Znm identified by a subset {i1 , . . . , ir , j} ⊆ [n] of size r + 1 and a function f : Zrm → Zm such that for any ha1 , . . . , an i ∈ Znm , τ (ha1 , . . . , an i) = ha1 , . . . , aj−1 , aj + f (ai1 , . . . , air ), aj+1 , . . . , an i. Observe that any r-function can be implemented using a decision assignment tree T that queries xi1 , . . . , xir and xj at internal nodes, and at leaves it assigns value only to the variable xj . Thus, READ(T ) = r + 1 and WRITE(T ) = 1. Claim 6.2. Any r-function on Znm can be implemented using a decision assignment tree T with n variables such that READ(T ) = r + 1 and WRITE(T ) = 1. We are now ready to provide details of our construction of a space-optimal quasi-Gray code over Znm for any odd m. Define n′ := n − c · logm n for some constant c > 0 that will be fixed later. 15 Step 1: Construction of α1 , . . . , αn′ . ′ We consider specific permutations α1 , . . . , αn′ over Znm such that α = αn′ ◦ · · · ◦ α1 is a cycle of ′ length mn . We define them below. Definition 7. Let m and n′ be natural numbers. For i ∈ [n′ ], we define αi to be the permutation ′ given by the following map: for any hx1 , . . . , xi , . . . , xn′ i ∈ Znm , ( hx1 , . . . , xi−1 , xi + 1, xi+1 , . . . , xn′ i if xj = 0 for all j ∈ [i − 1], αi (hx1 , . . . , xi , . . . , xn′ i) = otherwise. hx1 , . . . , xi , . . . 
, xn′ i The addition operation in the mapping xi ← xi + 1 is within Zm . The following observation is easily seen from the definitions of r-functions and αi . ′ Claim 6.3. For any i ∈ [n′ ], αi is an (i − 1)-function on Znm . Furthermore, each αi is composed ′ of disjoint cycles of length m over Znm . We now establish a crucial property of the αi ’s, i.e., their composition is a full length cycle. ′ Claim 6.4. α = αn′ ◦ · · · ◦ α1 is a cycle of length mn . Proof. Consider the sequence of permutations τ1 , . . . , τn′ such that τi = αi ◦ αi−1 ◦ · · · ◦ α1 for i ∈ [n′ ]. Clearly, τn′ = α. We now prove the claim. In fact, we establish the following stronger ′ claim: for i ∈ [n′ ], τi is a permutation composed of mn −i disjoint cycles, each of the cycles ′ being of length mi . Furthermore, for every hai+1 , . . . , an′ i ∈ Znm−i there is a cycle that involves ′ all tuples of the form hx1 , . . . , xi , ai+1 , . . . , an′ i. The claim, α is a cycle of length mn , follows as a consequence. We prove the stronger claim by induction on i. Base case: τ1 = α1 . From the definition of α1 , it follows that there is a cycle of length m of ′ the form (h0, a2 , . . . , an′ i, h1, a2 , . . . , an′ i, . . . , hm − 1, a2 , . . . , an′ i) for each ha2 , . . . , an′ i ∈ Znm−1 . Hence our induction hypothesis clearly holds for the base case. Induction Step: Suppose our induction hypothesis holds until some i ∈ [n′ ] and we would like to establish it for i+1. Let us consider the permutation αi+1 . We know that it is composed of ′ n′ −(i+1) , αi+1 contains mn −(i+1) disjoint cycles of length m. Indeed, for each hai+2 , . . . , an′ i ∈ Zm a cycle that involves all m tuples where the first i coordinates are all 0 and the last n′ − (i + 1) coordinates are set to hai+2 , . . . , an′ i. From the cycle decomposition of αi+1 and τi into disjoint cycles, it is clear that for any cycle say C in αi+1 there are m disjoint cycles C1 , . . . , Cm in τi , each of them intersecting C in exactly one element. Consider C ′ = C ◦CP 1 ◦· · · ◦Cm . By repeated ′ i+1 . (Here application of Proposition 4.1, we conclude that C is a cycle of length m i=1 |Ci | = m ′ by |Ci | we mean the length of the cycle Ci .) Also C involves tuples where the last n′ − (i + 1) n′ −(i+1) . Thus, C ′ is a cycle of length mi+1 coordinates are set to some fixed hai+2 , . . . , an′ i ∈ Zm containing all tuples of the form hx1 , . . . , xi+1 , ai+2 , . . . , an′ i. Since τi+1 = αi+1 ◦ τi and αi+1 ′ ′ contains mn −(i+1) disjoint cycles, we conclude that τi+1 consists of exactly mn −(i+1) disjoint cycles, each of length mi+1 and containing tuples of the required form. This finishes the proof. Readers may note that this step does not use the fact that m is odd and, thus, it is true for any m ∈ N. If we were to directly implement αi by a decision assignment tree, its read complexity would be i. Hence, we would not get any savings in Lemma 4.3. So we need to further decompose αi into permutations of small read complexity. 16 Step 2: Further decomposition of αi ’s. Our goal is to describe αi as a composition of 2-functions. Recall, Claim 6.3, each αi is an ′ (i − 1)-function on Znm . Suppose, for i ∈ [n′ ], there exists a set of 2-functions αi,1 , . . . , αi,ki such that αi = αi,ki ◦ · · · ◦ αi,1 . Then using Lemma 4.3, where αi,k ’s play the role of σj ’s, we obtain a decision assignment tree implementing a 2-Gray code with potentially low read complexity. 
Indeed, each αi,k has lowPread complexity by Claim 6.2, hence the read complexity essentially depends on how large is i ki . In the following we will argue that αi ’s can be decomposed into a small set of 2-functions, thus keeping the maximum ki small. As a result, the read complexity bound in Theorem 6.1 will follow. Note α1 , α2 and α3 are already 2-functions. In the case of αi , 4 ≤ i ≤ n′ − 2, we can directly adopt the technique from [2, 5] to generate the desired set of 2-functions. However, as discussed in Section 1.2, that technique falls short when i > n′ − 2. (It needs two free registers to operate.) For i = n′ − 1, it is possible to generalize the proof technique of [8] to decompose αn′ −1 . Unfortunately all the previously known techniques fail to decompose αn′ and we have to develop a new technique. First we provide the adaptation of [2, 5], and then develop a new technique that allows us to express both αn′ −1 and αn′ as a composition of small number of 2-functions, thus overcoming the challenge described above. Lemma 6.5. For any 4 ≤ i ≤ n′ − 2, let αi be the permutation given by Definition 7. Then there exists a set of 2-functions αi,1 , . . . , αi,ki such that αi = αi,ki ◦ · · · ◦ αi,1 , and ki ≤ 4(i − 1)2 − 3. It is worth noting that, although in this section we consider m to be odd, the above lemma holds for any m ∈ N. In [2], computation uses only addition and multiplication from the ring, whereas we can use any function g : Zm → Zm . This subtle difference makes the lemma to be true for any m ∈ N instead of being true only for prime powers. Proof. Pick i ≤ n′ − 2. Let us represent αi as an (i − 1)-function. From the definition we have, αi (ha1 , . . . , ai , . . . , an′ i) = ha1 , . . . , ai−1 , ai + f (a1 , . . . , ai−1 ), ai+1 , . . . , an′ i, where the map f : Zi−1 m → Zm is defined as follows: ( 1 if (a1 , . . . , ai−1 ) = (0, . . . , 0), f (a1 , . . . , ai−1 ) = 0 otherwise. Observe that f is an indicator function of a tuple; in particular, of the all-zeroes tuple. To verify the lemma, we would prove a stronger claim than the statement of the lemma. Consider the set S of r-functions, 1 ≤ r ≤ n′ − 3, such that the function f used to define them is the indicator function of the all-zeroes tuple. That is, τ ∈ S if and only if there exists a set {i1 , i2 , . . . , ir } ⊆ [n′ ] of size r and a j ∈ [n′ ] \ {i1 , i2 , . . . , ir } such that τ (ha1 , . . . , an′ i) = ha1 , . . . , aj−1 , aj + f (ai1 , . . . , air ), aj+1 , . . . , an′ i, where f (x1 , . . . , xr ) = 1 if (x1 , . . . , xr ) = (0, . . . , 0), and 0 otherwise. Observe that αi ∈ S for 4 ≤ i ≤ n′ − 2. We establish the following stronger claim. Claim 6.6. For an r-function τ ∈ S, there exist 2-functions τ1 , . . . , τkr such that τ = τkr ◦· · ·◦τ1 and kr ≤ 4r 2 − 3. (We stress that τ1 , . . . , τkr need not belong to S.) 17 Clearly, the claim implies the lemma. We now prove the claim by induction on r. The base case is r ≤ 2, in which case the claim trivially holds. Suppose the claim holds for all (r − 1)functions in S. Let τ ∈ S be an r-function identified by the set S := {i1 , i2 , . . . , ir } ⊆ [n′ ] and j ∈ [n′ ] \ S. Since f is an indicator Q function, it can be expressed as a product of indicator functions. That is, f (ai1 , . . . , air ) = s∈S g(as ) where g : Zm → Zm is the following {0, 1}-map: ( 1 if y = 0, g(y) = 0 otherwise. Consider a partition of S into two sets A and B of sizes ⌊r/2⌋ and ⌈r/2⌉, respectively. Let j1 and j2 be two distinct integers in [n′ ] \ S ∪ {j}. 
The existence of such integers is guaranteed by the bound on r. We now express τ as a composition of (r/2)-functions and 2-functions, and then use induction hypothesis to complete the proof. The decomposition of τ is motivated by the following identity: ! ! Y Y Y aj2 + g(as ) − aj2 aj + g(as ) = aj + aj1 + g(as ) − aj1 s∈S s∈B s∈A = aj + aj 1 + Y ! g(as ) s∈A − aj 1 + Y s∈A ! aj 2 + g(as ) aj2 − aj1 Y ! g(as ) s∈B aj2 + Y ! g(as ) s∈B + aj 1 aj 2 . ′ Therefore, we consider three permutations γ, τA and τB such that for any ha1 , . . . , an′ i ∈ Znm their maps are given as follows: γ (ha1 , . . . , an i) = ha1 , . . . , aj−1 , aj + aj1 aj2 , aj+1 , . . . , an′ i, Q τA (ha1 , . . . , an′ i) = ha1 , . . . , aj1 −1 , aj1 + s∈A g(as ), aj1 +1 , . . . , an′ i, and Q τB (ha1 , . . . , an′ i) = ha1 , . . . , aj2 −1 , aj2 + s∈B g(as ), aj2 +1 , . . . , an′ i, where both the multiplications and additions are in Zm . Using the identity it is easy to verify the following decomposition of τ : τ = τB−1 ◦ γ −1 ◦ τA−1 ◦ γ ◦ τB ◦ γ −1 ◦ τA ◦ γ. Clearly, γ is a 2-function, while τA and τB are ⌊r/2⌋-function and ⌈r/2⌉-function, respectively, and belong to S. By induction hypothesis τA and τB can be expressed as a composition of 2functions. Thus their inverses too. Hence we obtain a decomposition of τ in terms of 2-functions. The bound on kr , the length of the decomposition, follows from the following recurrence: T (r) ≤ 2T (⌊r/2⌋) + 2T (⌈r/2⌉) + 4. We would like to mention that another decomposition of τ in terms of 2-functions can be obtained by following the proof of [8], albeit with a much worse bound on the value of kr . Further, by strengthening the induction hypothesis, it is easily seen that the above proof can be generalized to hold for certain special type of r-functions. Let β be an r-function, r ≤ n′ − 3, ′ such that for any ha1 , . . . , an′ i ∈ Znm , β (ha1 , . . . , an′ i) = ha1 , . . . , ai−1 , ai + fe (ai1 , . . . , air ), ai+1 , . . . , an′ i, 18 where the function fe : Zrm → Zm is defined by: ( b if x = e, fe (x) = 0 otherwise, for some b ∈ Zm and e ∈ Zrm , i.e., fe is some constant multiple of the characteristic function of the tuple e. A crucial step in the proof is to express fe as a product of indicator functions. In this case we consider the following functions gij : Zm → Zm for 1 ≤ j ≤ r. Define gi1 : Zm → Zm such that for any y ∈ Zm , gi1 (y) = b if y = e1 , and 0 otherwise. For any 2 ≤ j ≤ r, define gij : Zm → Zm as gij (y) = 1 if y = ei , and 0 otherwise. By definition, we have fe (x1 , . . . , xr ) = gi1 (x1 )gi2 (x2 ) · · · gir (xr ). Thus we get the following generalization of Lemma 6.5. Lemma 6.7. For any m ∈ N and 1 ≤ r ≤ n′ − 3, let τ be an r-function such that for any ′ ha1 , . . . , an′ i ∈ Znm , τ (ha1 , . . . , an′ i) = ha1 , . . . , aj−1 , aj + fe (ai1 , . . . , air ), aj+1 , . . . , an′ i where the function fe : Zrm → Zm is defined by: fe (x) = b if x = e; and 0 otherwise. Then there exists a set of 2-functions τ1 , . . . , τkr such that τ = τkr ◦ · · · ◦ τ1 , and kr ≤ 4r 2 − 3. Comment on decomposition of general r-functions for 3 ≤ r ≤ n′ − 3: Any function P f : Zrm → Zm can be expressed as f = e ce · χe , where χe is the characteristic function of the tuple e ∈ Zrm , and ce ∈ Zm . Thus Lemma 6.7 suffices to argue that any r-function can be decomposed into a set of 2-functions. However, the implied bound on kr may not be small. In particular, the number of tuples where f takes non-zero value might be large. It remains to decompose αn′ −1 and αn′ . 
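Before turning to α_{n′−1} and α_{n′}, the algebra behind the decomposition in the proof of Claim 6.6 can be checked mechanically. The snippet below (ours, for a toy instance with m = 3, n′ = 7, S = {1, 2, 3, 4}, A = {1, 2}, B = {3, 4}, j = 5, j1 = 6, j2 = 7) verifies over all of Z_3^7 that τ equals τ_B^{−1} ◦ γ^{−1} ◦ τ_A^{−1} ◦ γ ◦ τ_B ◦ γ^{−1} ◦ τ_A ◦ γ.

# Check (ours) of the decomposition from the proof of Claim 6.6: tau adds
# prod_{s in S} g(a_s) to coordinate j; gamma adds a_{j1} * a_{j2} to a_j;
# tau_A adds prod_{s in A} g(a_s) to a_{j1}; tau_B adds prod_{s in B} g(a_s)
# to a_{j2}.  Indices below are 0-based, unlike the 1-based text.

from itertools import product

m = 3
A_idx, B_idx, j, j1, j2 = [0, 1], [2, 3], 4, 5, 6

def g(y):                                   # the indicator of 0 used in the proof
    return 1 if y == 0 else 0

def add_to(x, target, delta):               # add delta (mod m) to one coordinate
    x = list(x)
    x[target] = (x[target] + delta) % m
    return tuple(x)

def gamma(x, sign=1):
    return add_to(x, j, sign * x[j1] * x[j2])

def tau_A(x, sign=1):
    return add_to(x, j1, sign * g(x[A_idx[0]]) * g(x[A_idx[1]]))

def tau_B(x, sign=1):
    return add_to(x, j2, sign * g(x[B_idx[0]]) * g(x[B_idx[1]]))

def tau(x):                                 # the r-function we want to decompose
    delta = 1 if all(x[s] == 0 for s in A_idx + B_idx) else 0
    return add_to(x, j, delta)

for x in product(range(m), repeat=7):       # rightmost factor gamma acts first
    y = gamma(x); y = tau_A(y); y = gamma(y, -1); y = tau_B(y)
    y = gamma(y); y = tau_A(y, -1); y = gamma(y, -1); y = tau_B(y, -1)
    assert y == tau(x)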
The following lemma about cycles that intersect at exactly one point serves as a key tool in our decomposition. Lemma 6.8. Suppose there are two cycles, σ = (t, a1 , · · · , aℓ−1 ) and τ = (t, b1 , · · · , bℓ−1 ), of length ℓ ≥ 2 such that ai 6= bj for every i, j ∈ [ℓ − 1]. Then, (σ ◦ τ )ℓ ◦ (τ ◦ σ)ℓ = σ 2 . Proof. By Proposition 4.1, we have β := τ ◦ σ = (t, a1 , · · · , aℓ−1 , b1 , · · · , bℓ−1 ), and γ := σ ◦ τ = (t, b1 , · · · , bℓ−1 , a1 , · · · , aℓ−1 ). Both β and γ are cycles of length 2ℓ − 1. Also note that 2ℓ − 1 is co-prime with ℓ. Thus both β ℓ and γ ℓ are also cycles of length 2ℓ − 1 given as follow: β ℓ = (t, b1 , a1 , b2 , a2 , · · · , bℓ−1 , aℓ−1 ), and γ ℓ = (t, a1 , b1 , a2 , b2 , · · · , aℓ−1 , bℓ−1 ). Now by Proposition 4.2, σ ◦ β ℓ ◦ σ −1 = (σ(t), σ(b1 ), σ(a1 ), · · · , σ(aℓ−2 ), σ(bℓ−1 ), σ(aℓ−1 )) = (a1 , b1 , a2 , b2 , · · · , aℓ−1 , bℓ−1 , t) = γ ℓ . Therefore, β ℓ ◦ σ −1 = (t, aℓ−1 , aℓ−2 , · · · , a1 ) ◦ (t, a1 , b1 , a2 , b2 , · · · , aℓ−1 , bℓ−1 ) = (a1 , b1 )(a2 , b2 ) · · · (aℓ−1 , bℓ−1 ). 19 It is thus evident that β ℓ ◦ σ −1 2 is the identity permutation. Hence, γ ℓ ◦ β ℓ = σ ◦ β ℓ ◦ σ −1 ◦ β ℓ = σ ◦ β ℓ ◦ σ −1 ◦ β ℓ ◦ σ −1 ◦ σ = σ2 . Before going into the detailed description of the decomposition procedure, let us briefly discuss the main idea. Here we first consider αn′ . The case of αn′ −1 will be analogous. Recall that αn′ = (h00 · · · 00i, h00 · · · 01i, h00 · · · 02i, · · · , h00 · · · 0(m−1)i) is a cycle of length m. For a = (m+1)/2, we define σ = (h00 · · · 0(0·a)i, h00 · · · 0(1·a)i, h00 · · · 0(2·a)i, · · · , h00 · · · 0((m−1)·a)i), and τ = (h(0 · a)00 · · · 0i, h(1 · a)00 · · · 0i, h(2 · a)00 · · · 0i, · · · , h((m − 1) · a)00 · · · 0i), where the multiplication is in Zm . Since m is co-prime with (m + 1)/2, σ and τ are cycles of length m. (Here we use the fact that m is odd.) Observe that σ 2 = αn′ , so by applying Lemma 6.8 to σ and τ we get αn′ . It might seem we didn’t make much progress towards decomposition, as now instead of one (n′ − 1)-function αn′ we have to decompose two (n′ − 1)-functions σ and τ . However, we will not decompose σ and τ directly, but rather we obtain a decomposition for (σ ◦ τ )m and (τ ◦ σ)m . Surprisingly this can be done using Lemma 6.7 although indirectly. We consider an (n′ − 3)-function σ ′ whose cycle decomposition contains σ as one of its cycles. Similarly we consider a 3-function τ ′ whose cycle decomposition contains τ as one of its cycles. We carefully choose these σ ′ and τ ′ such that (σ ′ ◦ τ ′ )m = (σ ◦ τ )m and (τ ′ ◦ σ ′ )m = (τ ◦ σ)m . We will use Lemma 6.7 to directly decompose σ ′ and τ ′ to get the desired decomposition. ′ Lemma 6.9. For any n′ − 1 ≤ i ≤ n′ , let αi be the permutation over Znm given by Definition 7 where m is odd. Then, there exists a set of 2-functions αi,1 , . . . , αi,ki such that αi = αi,ki ◦· · ·◦αi,1 , and ki = O(m · (i − 1)2 ). Proof. For the sake of brevity, we will only describe the procedure to decompose αn′ into a set of 2-functions. The decomposition of αn′ −1 is completely analogous. (We comment on this more at the end of the proof.) ′ Let σ be the following permutation: for any ha1 , . . . , an′ i ∈ Znm , σ (ha1 , . . . , an′ i) = ha1 , . . . , an′ −1 , an′ + f (a1 , . . . , an′ −1 )i ′ where the function f : Znm−1 → Zm is defined as follows: ( (m + 1)/2 if x = h0, . . . , 0i, f (x) = 0 otherwise. Note that (m + 1)/2 is well defined because m is odd. Further, since m and (m + 1)/2 are co-prime, σ is a m length cycle. Moreover, σ 2 = αn′ . 
The description of σ uses crucially that m is odd. If m were not odd, then the description fails. Indeed finding a substitute for σ is the main hurdle that needs to be addressed to handle the case when m is even. ′ We also consider another permutation τ such that for any ha1 , . . . , an′ i ∈ Znm , τ (ha1 , . . . , an′ i) = ha1 + f (a2 , . . . , an′ ), a2 , . . . , an′ i where the function f is the same as in the definition of σ. So τ is also a cycle of length m. Let t = (m + 1)/2. Then the cycle decomposition of σ and τ are, σ = (h0, 0, . . . , 0, 0 · ti, h0, 0, . . . , 0, 1 · ti, h0, 0, . . . , 0, 2 · ti, · · · , h0, 0, . . . , 0, (m − 1) · ti) , and τ = (h0 · t, 0, 0, . . . , 0i, h1 · t, 0, 0, . . . , 0i, h2 · t, 0, 0, . . . , 0i, · · · , h(m − 1) · t, 0, 0, . . . , 0i) 20 where the multiplication is in Zm . Observe that h0, . . . , 0i is the only common element involved in both cycles σ and τ . Therefore, by Lemma 6.8, (σ ◦ τ )m ◦ (τ ◦ σ)m = σ 2 = αn′ . We note that both σ and τ are (n′ − 1)-functions. Thus so far it is not clear how the above identity helps us to decompose αn′ . We now define two more permutations σ ′ and τ ′ such that they are themselves decomposable into 2-functions and, moreover, (σ ′ ◦ τ ′ )m = (σ ◦ τ )m and (τ ′ ◦ σ ′ )m = (τ ◦ σ)m . ′ The permutations σ ′ and τ ′ are defined as follows: for any ha1 , . . . , an′ i ∈ Znm , σ ′ (ha1 , . . . , an′ i) = ha1 , . . . , an′ −1 , an′ + g(a1 , . . . , an′ −3 )i, and τ ′ (ha1 , . . . , an′ i) = ha1 + h(an′ −2 , an′ −1 , an′ ), a2 , . . . , an′ i ′ n −3 → Z is the following map: where g : Zm m ( (m + 1)/2 g(x) = 0 if x = h0, . . . , 0i, otherwise, and h : Z3m → Zm is similarly defined but on lower dimension: ( (m + 1)/2 if x = h0, 0, 0i, h(x) = 0 otherwise. Since m and (m + 1)/2 are co-prime, σ ′ is composed of m2 disjoint cycles, each of length m. ′ Similarly, τ ′ is composed of m(n −4) disjoint cycles, each of length m. Moreover, σ is one of the ′ cycles among m2 disjoint cycles of σ ′ and τ is one of the cycles among m(n −4) disjoint cycles of τ ′ . So, we can write σ ′ = σ ◦ C1 ◦ · · · ◦ Cm2 −1 , and ′ τ ′ = τ ◦ C1′ ◦ · · · ◦ Cm (n′ −4) −1 where each of Ci ’s and Cj′ ’s is a m length cycle. An important fact regarding σ ′ and τ ′ is that the only element moved by both is the all-zeros tuple h0, . . . , 0i. This is easily seen from their definitions. Recall we had observed that the all-zeroes is, in fact, moved by σ and τ . In other ′ words, cycles in the set {C1 , . . . , Cm2 −1 , C1′ , . . . , Cm (n′ −4) −1 } are mutually disjoint, as well as they are disjoint from σ and τ . Thus, using the fact that Ci ’s and Cj′ ’s are m length cycles, we have (τ ′ ◦ σ ′ )m = (τ ◦ σ)m , and (σ ′ ◦ τ ′ )m = (σ ◦ τ )m . Therefore, we can express αn′ in terms of σ ′ and τ ′ , αn′ = σ 2 = (σ ◦ τ )m ◦ (τ ◦ σ)m = (σ ′ ◦ τ ′ )m ◦ (τ ′ ◦ σ ′ )m . But, by definition, σ ′ and τ ′ are an (n′ −3)-function and a 3-function, respectively. Furthermore, they satisfy the requirement of Lemma 6.7. Hence both σ ′ and τ ′ can be decomposed into a set of 2-functions. As a result we obtain a decomposition of αn′ into a set of kn′ many 2-functions,  ′ 2 2 ′ ′ where kn ≤ 2m 4(n − 3) − 3 + 4 · 3 − 3 = m 8(n − 3)2 + 60 ≤ 60 · m (n′ − 3)2 . 21 The permutation αn′ −1 can be decomposed in a similar fashion. Note that αn′ −1 is composed of m disjoint m length cycles. Each of the m disjoint cycles can be decomposed using the procedure described above. If we do so, we get the length kn′ −1 = O(m2 · (n′ − 3)2 ), which would suffice for our purpose. 
However, we can improve this bound to O(m · (n′ − 2)2 ) by a slight modification of σ, τ, σ ′ and τ ′ . Below we define these permutations and leave the rest of the proof to the reader as it is analogous to the proof above. The argument should be carried ′ with σ, τ, σ ′ and τ ′ defined as follows: for any ha1 , . . . , an′ i ∈ Znm , σ (ha1 , . . . , an′ i) = ha1 , . . . , an′ −2 , an′ −1 + f (a1 , . . . , an′ −2 ), an′ i, τ (ha1 , . . . , an′ i) = ha1 + f (a2 , . . . , an′ −1 ), . . . , an′ −1 , an′ i, σ ′ (ha1 , . . . , an′ i) = ha1 , . . . , an′ −2 , an′ −1 + g(a1 , . . . , an′ −3 ), an′ i, and τ ′ (ha1 , . . . , an′ i) = ha1 + h(an′ −2 , an′ −1 ), a2 , . . . , an′ −1 , an′ i, ′ n −2 → Z is defined as: f (x) = (m + 1)/2 if x = h0, . . . , 0i; otherwise where the function f : Zm m ′ f (x) = 0, the function g : Znm−3 → Zm is defined as: g(x) = (m+1)/2 if x = h0, . . . , 0i; otherwise g(x) = 0 and the function h : Z2m → Zm is defined as: h(x) = (m + 1)/2 if x = h0, 0i; otherwise h(x) = 0. The only difference is now that both σ and τ are composed of m disjoint cycles of length m, instead of just one cycle as in case of αn′ . However every cycle of σ has non-empty intersection with exactly one cycle of τ , and furthermore, the intersection is singleton. Hence we can still apply Lemma 6.8 with m pairs of m length cycles, where each pair consists of one cycle from σ and another from τ such that they have non-empty intersection. Step 3: Invocation of Lemma 4.3 To finish the construction we just replace αi ’s in α = αn′ ◦ · · · ◦ α1 by αi,ki ◦ · · · ◦ αi,1 . Take P ′ the resulting sequence as σ = σk ◦ · · · ◦ σ1 , where k = ni=1 ki ≤ c3 · m · n′3 for some constant c3 > 0. Now we apply Lemma 4.3 to get a space-optimal counter over Znm . In Lemma 4.3, we ′ ′ ′ set k′ = mr such that r ′ is the smallest integer for which mr ≥ k and set D ′ = Zrm . Hence r ′ ≤ logm k + 1 ≤ 3 logm n + c, for some constant c > 1. Since each of σi ’s is a 2-function, by Claim 6.2 it can be implemented using a decision assignment tree Ti such that READ(Ti ) = 3 and WRITE(Ti ) = 1. In Lemma 4.3, we use the standard space-optimal Gray code over ′ D ′ = Zrm as C ′ . The code C ′ can be implemented using a decision assignment tree T ′ with READ(T ′ ) = r ′ and WRITE(T ′ ) = 1 (Theorem 2.1). Hence the final counter implied from Lemma 4.3 can be computed using a decision assignment tree T with READ(T ) ≤ 4 logm n and WRITE(T ) = 2. Are odd permutations the bottleneck? We saw our decomposition procedure over Znm into 2-functions (Lemma 6.9) requires that m be odd. It is natural to wonder whether such a decomposition exists irrespective of the parity of m. That is, we would like to design a set of 2-functions that generates a cycle of length mn where m is even. Unfortunately, the answer to this question is, unequivocally, NO. A cycle of length mn is an odd permutation when m is even. However, Coppersmith and Grossman [8] showed that 2-functions are even permutations for m = 2. Their proof is easily seen to be extended to hold for all m. Hence, 2-functions can only generate even permutations. Thus, ruling out the possibility of decomposing a full cycle into 2-functions when m is even. In fact, Coppersmith and Grossman [8] showed that (n − 2)functions are all even permutations. Therefore, the decomposition remains impossible even if (n − 2)-functions are allowed. 
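The parity obstruction can also be seen experimentally. The short check below (ours) computes the sign of a permutation of Z_2^n from its cycle structure and confirms, for n = 4, that a 2-function is an even permutation while a single cycle through all 2^n strings is odd; since even permutations are closed under composition, no sequence of 2-functions can produce such a cycle.

# Parity check (ours) over Z_2^4: a 2-function is even, a full 2^n-cycle is odd.

from itertools import product

n = 4
domain = list(product(range(2), repeat=n))

def sign(perm):
    """Sign of a permutation given as a dict, via its cycle decomposition."""
    seen, transpositions = set(), 0
    for x in perm:
        if x in seen:
            continue
        length, y = 0, x
        while y not in seen:
            seen.add(y)
            y = perm[y]
            length += 1
        transpositions += length - 1        # a cycle of length L is L-1 swaps
    return -1 if transpositions % 2 else +1

# A 2-function: x_4 <- x_4 + f(x_1, x_2), with f the indicator of (0, 0).
two_fn = {x: x[:3] + ((x[3] + (1 if x[0] == x[1] == 0 else 0)) % 2,)
          for x in domain}
assert sign(two_fn) == +1                   # an even permutation

# A single cycle through all 2^n strings (successive strings in lexicographic
# order, i.e. ordinary binary increment with wrap-around).
full_cycle = {x: domain[(domain.index(x) + 1) % len(domain)] for x in domain}
assert sign(full_cycle) == -1               # odd, so not a product of 2-functions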
22 We would like to give further evidence that odd permutations are indeed the bottleneck towards obtaining an efficient decision assignment tree implementing a space-optimal counter over Zn2 . Suppose T is a decision assignment tree that implements an odd permutation σ over Zn2 such that READ(T ) = r and WRITE(T ) = w. From the basic group theory we know that there exists an even permutation τ such that τ ◦σ is a cycle of length 2n . Following the argument in [8], but using an efficient decomposition of (n − 3)-functions into 2-functions (in particular, Lemma 6.7), we see that any even permutation can be decomposed into a set of 2-functions of size at most O(2n−1 n2 ). Indeed, any even permutation can be expressed as a composition of at most 2n−1 3-cycles, while any 3-cycle can be decomposed into at most O(n2 ) 2-functions. 2n+O(log n) We implement τ ◦ σ using Lemma 4.3 to obtain a space-optimal counter over Z2 that is computed by a decision assignment tree with read complexity n + O(log n) + r and write complexity w + 1. Thus, savings made on implementing an odd permutation translate directly to the savings possible in the implementation of some space-optimal counter. Recall the main idea behind 2-Gray codes in Section 5, when m = 2, is to generate the cycle of length 2n − 1 efficiently. Since a cycle of length 2n − 1 is an even permutation, we could decompose it into 2-functions. In particular, we could choose an appropriate 3-cycle as αn in Claim 6.4 such that α becomes a cycle of length 2n − 1. However, the counter thus obtained misses out on O(n3 ) strings, instead of O(n). 6.1 Getting counters for even m We can combine the results from Theorem 1.2 and Theorem 1.1 to get a counter over Znm for any even m. We have already mentioned in Section 3 that if we have space-optimal quasiGray codes over the alphabet Z2 and Zo with o being odd then we can get a space-optimal quasi-Gray code over the alphabet Zm for any even m. Unfortunately in Section 5, instead of space-optimal counters we were only able to generate a counter over binary alphabet that misses O(n) many strings. As a consequence we cannot directly apply Theorem 3.1. The problem is following. Suppose m = 2ℓ o for some ℓ > 0 and odd o. By the argument used in the proof of Lemma 3.3 we know that there is a counter over (Z2ℓ )n−1 of the same length as that over ′ (Z2 )ℓ(n−1) . Furthermore the length of the counter is of the form 2O(log n) (2n − 1), for some n′ that depends on the value of ℓn (see the construction in Section 5). Now to apply Theorem 3.1 as ′ in the proof of Lemma 3.2, 2n − 1 must be co-prime with o. However that may not always be the case. Nevertheless, we can choose the parameters in the construction given in Section 5 to make ′ n′ such that 2n − 1 is co-prime with o. This is always possible because of the following simple algebraic fact. In the following proposition we use the notation Z∗o to denote the multiplicative group modulo o and ordo (e) to denote the order of any element e ∈ Z∗o . Proposition 6.10. For any n ∈ N and odd o ∈ N, consider the set S = {n, n + 1, · · · , n + ′ ordo (2) − 1}. Then there always exists an element n′ ∈ S such that 2n − 1 is co-prime to o. Proof. Inside Z∗o , consider the cyclic subgroup G generated by 2, i.e., G = {1, 2, · · · , 2ordo (2) }. ′ Clearly, {2s (mod o) | s ∈ S} = G. Hence there exists an element n′ ∈ S such that 2n − 1 ′ (mod o) = 1. It is clear that if gcd(2n − 1, o) = 1 then we are done. 
Now the proposition follows from the following easily verifiable fact: for any a, b, c ∈ N, if a ≡ b (mod c) then gcd(a, c) = gcd(b, c). So for any n ∈ N, in the proof of Lemma 3.2 we take the first coordinate to be Zim for some suitably chosen i ≥ 1 instead of just Zm . The choice of i will be such that the length of the counter over (Z2ℓ )n−i will become co-prime with o. The above proposition guarantees the existence of such an i ∈ [ordo (2)]. Hence we can conclude the following. 23 Theorem 6.11. For any even m ∈ N so that m = 2ℓ o where o is odd, there is a quasi-Gray code C over Znm of length at least mn − O(non ), that can be implemented using a decision assignment tree which reads at most O(logm n + ordo (2)) coordinates and writes at most 3 coordinates. Remark. So far we have only talked about implementing next(·, ·) function for any counter. However we can similarly define complexity of computing prev(·, ·) function in the decision assignment tree model. We would like to emphasize that all our techniques can also be carried over to do that. To extend the result of Section 5, we take the inverse of the linear transformation matrix A and follow exactly the same proof to get the implementation of prev(·, ·). As a consequence we achieve exactly the same bound on read and write complexity. Now for the quasi-Gray code over Zm for any odd m, instead of α in the Step 1 (in Section 6), we simply consider the −1 −1 α−1 which is equal to α−1 1 ◦ α2 ◦ · · · ◦ αn′ . Then we follow an analogous technique to decompose α−1 i ’s and finally invoke Lemma 4.3. Thus we get the same bound on read and write complexity. 7 Lower Bound Results for Hierarchical Counter We have mentioned that the best known lower bound on the depth of a decision assignment tree implementing any space-optimal quasi-Gray code is Ω(logm n). Suppose we are interested in getting a space-optimal quasi-Gray code over the domain Zm1 ×· · ·×Zmn . A natural approach to achieve the O(log n) bound is to build a decision assignment tree with an additional restriction that a variable labels exactly one internal node. We call any counter associated with such a special type of decision assignment tree by hierarchical counter. We note that the aforementioned restriction doesn’t guarantee an O(log n) bound on the depth; for example, a decision assignment tree implementing a counter that increments by 1 to obtain the next element, reads a variable exactly once. Nevertheless, informally, one can argue that if a hierarchical counter is lopsided it must mimic the decision assignment tree in the previous example. We show that a space-optimal hierarchical counter over strings of length 3 exists if and only if m2 and m3 are co-prime. Theorem 7.1. Let m1 ∈ {2, 3} and m2 , m3 ≥ 2. There is no space-optimal hierarchical counter over Zm1 × Zm2 × Zm3 unless m2 and m3 are co-prime. In fact, we conjecture that the above theorem holds for all m1 ≥ 2. Conjecture 1. There is no space-optimal hierarchical counter over Zm1 × Zm2 × Zm3 unless m2 and m3 are co-prime. It follows already from Theorem 3.1 that a space-optimal hierarchical counter exists when m2 and m3 are co-prime. Hence Theorem 7.1 establishes the necessity of co-primality in Theorem 3.1. It also provides a theoretical explanation behind the already known fact, derived from an exhaustive search in [4], that there exists no space-optimal counter over (Z2 )3 that reads at most 2 bits to generate the next element. Proof of Theorem 7.1. 
We first establish the proof when m1 = 2, and then reduce the case when m1 = 3 to m1 = 2. Following the definition of hierarchical counter, without loss of generality, we assume that it reads the second coordinate if the value of the first coordinate is 0; otherwise it reads the third coordinate. Let C = (ha0 , b0 , c0 i, . . . , hak−1 , bk−1 , ck−1 i) be the cyclic sequence associated with a space-optimal hierarchical counter over Z2 × Zm2 × Zm3 . We claim that for any two consecutive elements in the sequence, hai , bi , ci i and ha(i+1) mod k , b(i+1) mod k , c(i+1) mod k i, the following must hold: 24 1. If ai = 0, then bi+1 6= bi . For the sake of contradiction suppose not, i.e, bi+1 = bi . Since ai = 0 we do not read the third coordinate, thus, implying ci+1 = ci . Hence, ai 6= ai+1 = 1. Therefore, we observe that the second coordinate never changes henceforth. Thence it must be that b1 = b2 = · · · = bk . We thus contradict space-optimality. Similarly, we can argue if ai = 1, then ci+1 6= ci . 2. For i ∈ {0, 1, . . . , k − 1}, a(i+1) mod k 6= ai . Assume otherwise. Let i ∈ {0} ∪ [k − 1] be such that a(i−1) mod k 6= ai = a(i+1) mod k . Clearly, such an i exists, else a1 = a2 = · · · = ak which contradicts space-optimality. We are now going to argue that there exist two distinct elements in the domain Z2 × Zm2 × Zm3 such that they have the same successor in C, thus contradicting that C is generated by a space-optimal counter. For the sake of simplicity let us assume ai = 0. The other case is similar. Since ai = 0, we have ai−1 = 1 and ai+1 = 0. Thus, we have the tuple h0, bi+1 , ci+1 i following the tuple h0, bi , ci i in C. We claim that the successor of the tuple h1, bi+1 , ci−1 i is also h0, bi+1 , ci+1 i. It is easily seen because ci+1 = ci . The second item above says that the period of each element in the first coordinate is 2. That is, in the sequence C the value in the first coordinate repeats every second step. We will now argue that the value in the second coordinate (respectively, third coordinate) repeats every 2m2 steps (respectively, 2m3 steps). We will only establish the claim for the second coordinate, as the argument for the third coordinate is similar. First of all observe that C induces a cycle on the elements of Zm2 . In particular, Consider the sequence b0 , b2 , b4 , . . . , bℓ such that bℓ is the first time when an element in the sequence has appeared twice. First of all, for C to be a valid cycle bℓ must be equal to b0 . Secondly, b0 , b2 , . . . , bℓ−2 are the only distinct elements that appear in the second coordinate throughout C. This is because the change in the second coordinate depends, deterministically, on what value it holds prior to the change. Thus, for C to be space-optimal ℓ/2 must be equal to m2 . Hence, an element in the second coordinate repeats every 2m2 steps in C. Therefore, for the cyclic sequence C, we get that k = lcm(2m2 , 2m3 ) = 2m2 m3 / gcd(m2 , m3 ). Hence if m2 and m3 are not co-prime then k < 2m2 m3 implying C to be not space-optimal. We now proceed to the case m1 = 3. As mentioned in the beginning, we will show that from any space-optimal hierarchical counter over Z3 × Zm2 × Zm3 we can obtain a space-optimal hierarchical counter over Z2 × Zm2 × Zm3 . Let C = (ha0 , b0 , c0 i, . . . , hak−1 , bk−1 , ck−1 i) be the cyclic sequence associated with a spaceoptimal hierarchical counter T over Z3 × Zm2 × Zm3 . We query the first coordinate, say x1 , at the root of T . 
Let T0 , T1 , and T2 be the three subtrees rooted at x1 = 0, 1, and 2, respectively. Without loss of generality, we assume that the roots of T0 and T2 are labeled by the second coordinate, say x2 , and the root of T1 is labeled by the third coordinate, say x3 . We say that there is a transition from Ti to Tj if there exists a leaf in Ti labeled with an instruction to change x1 to j. We will use the shorthand Ti → Tj to denote this fact. For C to be space-optimal there must be a transition from T1 to either T0 or T2 or both. We now observe some structural properties of T . We first assume that there is a transition from T1 to T0 . (The other case when there is a transition from T0 to T2 will be dual to this.) (i) If T1 → T0 , then T2 6→ T0 . Suppose not, and let h2, x2 , ∗i goes to h0, x′2 , ∗i. Since T1 → T0 , we have h1, ∗, x3 i goes to h0, ∗, x′3 i. It is easily seen that both h2, x2 , x′3 i and h1, x′2 , x3 i have the same successor in C which is h0, x′2 , x′3 i. (Notice similarly we could also argue that if T1 → T2 , then T0 6→ T2 .). (ii) If T1 → T0 , then T1 6→ T2 . Suppose not, then from item (i) we know that T2 6→ T0 and T0 6→ T2 . Thus, in the sequence C two tuples with first coordinate 0 or 2 is separated by 25 a tuple with the first coordinate 1. Since there are exactly equal number of tuples with a fixed first coordinate, C can not be space-optimal. (iii) If T1 → T0 , then T0 6→ T0 and T1 6→ T1 . The argument is similar to item (i). We now prune T to obtain a space-optimal hierarchical counter T ′ over Z2 × Zm2 × Zm3 as follows. We first remove the whole subtree T2 , i.e., only T0 and T1 remain incident on x1 . We don’t make any change to T1 . Now we change instructions at the leaves in T0 to complete the construction of T ′ . Let ℓ be a leaf in T0 such that it is labeled with the assignment: x1 ← 2 and x2 ← b for some b ∈ Zm2 . As a thought experiment, let us traverse T2 in T until we reach a leaf in T2 with assignments of the form x1 ← 1 and x2 ← b′ for some b′ ∈ Zm2 . In this case, we change the content of the leaf ℓ in T0 in T ′ to x1 ← 1 and x2 ← b′ . The fact that T ′ is space-optimal and hierarchical follows easily from properties (i) - (iii) of T , and it being space-optimal and hierarchical. This completes the proof. Acknowledgments Authors would like to thank Gerth Stølting Brodal for bringing the problem to our attention, and to Petr Gregor for giving a talk on space-optimal counters in our seminar, which motivated this research. References [1] David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in N C 1 . Journal of Computer and System Sciences, 38:150–164, 1989. [2] Michael Ben-Or and Richard Cleve. Computing algebraic formulas using a constant number of registers. SIAM J. Comput., 21(1):54–58, 1992. URL: https://doi.org/10.1137/0221006, doi:10.1137/0221006. [3] Prosenjit Bose, Paz Carmi, Dana Jansens, Anil Maheshwari, Pat Morin, and Michiel H. M. Smid. Improved methods for generating quasi-gray codes. In Algorithm Theory - SWAT 2010, 12th Scandinavian Symposium and Workshops on Algorithm Theory, Bergen, Norway, June 21-23, 2010. Proceedings, pages 224–235, 2010. URL: https://doi.org/10.1007/978-3-642-13731-0_22, doi:10.1007/978-3-642-13731-0_22. [4] Gerth Stølting Brodal, Mark Greve, Vineet Pandey, and Srinivasa Rao Satti. Integer representations towards efficient counting in the bit probe model. J. Discrete Algorithms, 26:34–44, 2014. 
URL: https://doi.org/10.1016/j.jda.2013.11.001, doi:10.1016/j.jda.2013.11.001. [5] Harry Buhrman, Richard Cleve, Michal Koucký, Bruno Loff, and Florian Speelman. Computing with a full memory: catalytic space. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 857–866, 2014. URL: http://doi.acm.org/10.1145/2591796.2591874, doi:10.1145/2591796.2591874. [6] Richard Cleve. Methodologies for Designing Block Ciphers and Cryptographic Protocols. PhD thesis, University of Toronto, April 1989. [7] Martin Cohn. Affine m-ary gray codes. Information and Control, 6(1):70–78, 1963. 26 [8] Don Coppersmith and Edna Grossman. Generators for certain alternating groups with applications to cryptography. j-SIAM-J-APPL-MATH, 29(4):624–627, December 1975. [9] C. Ding, D. Pei, and A. Salomaa. Chinese Remainder Theorem: Applications in Computing, Coding, Cryptography. World Scientific Publishing Co., Inc., River Edge, NJ, USA, 1996. [10] David S. Dummit and Richard M. Foote. Abstract Algebra. John Wiley & Sons, 2004. [11] Tomáš Dvořák, Petr Gregor, and Václav Koubek. Generalized gray codes with prescribed ends. Theor. Comput. Sci., 668:70–94, 2017. URL: https://doi.org/10.1016/j.tcs.2017.01.010, doi:10.1016/j.tcs.2017.01.010. [12] Ivan Flores. Reflected number systems. IRE Transactions on Electronic Computers, EC5(2):79–82, 1956. [13] Gudmund Skovbjerg Frandsen, Peter Bro Miltersen, and Sven Skyum. Dynamic word problems. J. ACM, 44(2):257–271, 1997. URL: http://doi.acm.org/10.1145/256303.256309, doi:10.1145/256303.256309. [14] Michael L. Fredman. Observations on the complexity of generating quasi-gray codes. SIAM J. Comput., 7(2):134–146, 1978. URL: https://doi.org/10.1137/0207012, doi:10.1137/0207012. [15] Zachary Frenette. Towards the efficient generation of gray codes in the bitprobe model. Master’s thesis, University of Waterloo, Waterloo, Ontario, Canada, 2016. [16] F. Gray. Pulse code communication, 1953. http://www.google.com/patents/US2632058. US Patent 2,632,058. URL: [17] Petr Gregor and Torsten Mütze. Trimming and gluing gray codes. In 34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, March 8-11, 2017, Hannover, Germany, pages 40:1–40:14, 2017. URL: https://doi.org/10.4230/LIPIcs.STACS.2017.40, doi:10.4230/LIPIcs.STACS.2017.40. [18] James T. Joichi, Dennis E. White, and S. G. Williamson. Combinatorial gray codes. SIAM J. Comput., 9(1):130–141, 1980. URL: https://doi.org/10.1137/0209013, doi:10.1137/0209013. [19] Donald E. Knuth. The Art of Computer Programming, Volume 4, Fascicle 2: Generating All Tuples and Permutations (Art of Computer Programming). Addison-Wesley Professional, 2005. [20] Serge Lang. Linear Algebra. Undergraduate Texts in Mathematics. Springer New York, 1987. [21] Rudolf Lidl and Harald Niederreiter. Finite Fields. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2 edition, 1996. doi:10.1017/CBO9780511525926. [22] H. M. Lucal. Arithmetic operations for digital computers using a modified reflected binary code. IRE Transactions on Electronic Computers, EC-8(4):449–458, 1959. [23] Torsten Mütze. Proof of the middle levels conjecture. Proceedings of the London Mathematical Society, 112(4):677–713, 2016. 27 [24] Torsten Mütze and Jerri Nummenpalo. Efficient computation of middle levels gray codes. In Algorithms - ESA 2015 - 23rd Annual European Symposium, Patras, Greece, September 14-16, 2015, Proceedings, pages 915–927, 2015. [25] Torsten Mütze and Jerri Nummenpalo. 
A constant-time algorithm for middle levels gray codes. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 2238– 2253, 2017. [26] Patrick K. Nicholson, Venkatesh Raman, and S. Srinivasa Rao. A survey of data structures in the bitprobe model. In Space-Efficient Data Structures, Streams, and Algorithms - Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday, pages 303–318, 2013. URL: https://doi.org/10.1007/978-3-642-40273-9_19, doi:10.1007/978-3-642-40273-9_19. [27] M. Ziaur Rahman and J. Ian Munro. Integer representation and counting in the bit probe model. Algorithmica, 56(1):105–127, 2010. URL: https://doi.org/10.1007/s00453-008-9247-2, doi:10.1007/s00453-008-9247-2. [28] Dana S. Richards. Data compression and gray-code sorting. Inf. Process. Lett., 22(4):201–205, 1986. URL: https://doi.org/10.1016/0020-0190(86)90029-3, doi:10.1016/0020-0190(86)90029-3. [29] Carla Savage. A survey of combinatorial gray codes. SIAM review, 39(4):605–629, 1997. [30] Victor Shoup. Searching for primitive roots in finite fields. Mathematics of Computation, 58:369 – 380, 1992. [31] Igor Shparlinski. On finding primitive roots in finite fields. Theoretical Computer Science, 157(2):273 – 275, 1996. [32] Igor Shparlinski. Finite Fields: Theory and Computation, volume 477 of Mathematics and its Applications. Springer Netherlands, 1999. [33] Igor E. Shparlinski. Finding irreducible and primitive polynomials. Applicable Algebra in Engineering, Communication and Computing, 4(4):263–268, 1993. [34] Andrew Chi-Chih Yao. Should tables be sorted? J. ACM, 28(3):615–628, 1981. URL: http://doi.acm.org/10.1145/322261.322274, doi:10.1145/322261.322274. 28
ON GENERALIZED HARTSHORNE’S CONJECTURE AND LOCAL COHOMOLOGY MODULES arXiv:1803.07546v1 [] 20 Mar 2018 T. H. FREITAS, V. H. JORGE PÉREZ, AND L. C. MERIGHE* Abstract. Let a denote an ideal of a commutative Noetherian ring R. Let M and N be two R-modules. In this paper, we give partial answers on the extension of Hartshorne’s conjecture about the cofiniteness of torsion and extension functors. For this purpose, we study the cofiniteness of the generalized local cohomology module Hia (M, N ) for a new class of modules, called a-weakly finite modules, in the local and non-local case. Furthermore, we derive some results on attached primes of top generalized local cohomology modules. 1. Introduction Throughout this paper, let a and b be ideals of a commutative Noetherian ring with nonzero identity R. If M and N are two R-modules and i ∈ Z, consider Hia (M, N) := lim ExtiR (M/an M, N), −→ n the ith generalized local cohomology module with respect to a and R-modules M and N, introduced by Herzog [19]. If M = R the definition of generalized local cohomology modules reduces to the notion introduced by Grothendieck of local cohomology modules Hia (N) [6]. One of the interesting open questions within the local cohomology theory is the following: Conjecture: When is ExtiR (R/a, Hja (N)) finite for all integer i and j? This question was proposed by Hartshorne [18] who, in turn, was motivated by the conjecture made by Grothendieck [16] about the finiteness of HomR (R/a, Hja (N)). Also, Hartshorne introduced an interesting class of modules called a-cofinite modules. Recall that an R-module M is said to be a-cofinite if Supp(M) ⊆ V (a) and ExtiR (R/a, M) is finite for all i. In this sense, as a generalization of Hartshorne’s conjecture, we have a natural question. Question 1: When is Hia (M, N) a-cofinite for all i? Concerning this question, several results were obtained and in most of them, the finiteness assumption on the modules is crucial in the proof (for example [8], [15] and [27]). Date: March 21, 2018. 2010 Mathematics Subject Classification. 13D45, 13H10, 13H15, 13E10. Key words and phrases. Torsion functors, Local Cohomology, cofinite module, Artinian module, Attached prime. The first and second author was partially supported CNPq-Brazil 421440/2016-3. The third author was partially supported by CNPq-Brazil - Grants 142376/2016-7. 1 In this paper, we will introduce a new class of modules, called a-weakly finite modules over M. This class is an extension of the class of weakly finite modules introduced in [3] (which contains the class of finitely generated modules, big Cohen-Macaulay modules and a-cofinite modules). In the local and non-local case we will show the following two results, that improves [9, Theorem 2.2] and [14, Theorem 2.2] respectively. Theorem 1.1. Suppose that a ⊆ b and dimR R/b = 0. Let M be a finitely generated R-module and N be an a-weakly finite R-module over M. Then Hib (M, N) is an Artinian R-module and a and b-cofinite, for all i ∈ N. Theorem 1.2. Let (R, m) be a local ring. Consider M, N two R-modules such that M is finitely generated of finite projective dimension (pd M = d < ∞), N is b-weakly finite over M and dimR N = n < ∞. Suppose that a ⊆ b. Then Hd+n (M, N) is Artinian and b-cofinite. a In particular, if a = b, then Hd+n (M, N) is Artinian and a-cofinite. a Another aim of the present paper is to show some results concerning to attached primes of generalized local cohomology modules Hia (M, N). More precisely, we will show an extension of the main result in [10]. 
Theorem 1.3. Let M be a finitely generated R-module such that pd M < ∞. Let N be an R-module such that dimR N = dimR R = n. Then Hd+n (M, N) has secondary representation a and (M, N)) ⊆ {p ∈ AssR (N) | cd(a, M, R/p) = d + n}. AttR (Hd+n a Returning to Question 1, Melkersson [26, Theorem 2.1] has shown that ExtjR (R/a, M) is finite for all j if and only if TorR j (R/a, M) is finite for all j. We can analyze the Question 1 on a more general point of view. Question 2: When are ExtiR (M, N) and TorR i (M, N) a-cofinite (or finite length, a-cominimax, or a-weakly cofinite) for all (or for some) integer i ? Note that a-cominimax [2] and a-weakly cofinite modules ([11] and [12]) are extensions of the definition of a-cofinite modules. We may also ask, as a particular case of the above i question, when ExtjR (L, Hia (M, N)) and TorR j (L, Ha (M, N)) are a-cofinite (or finite length, or a-cominimax, a-weakly cofinite) for all (or for some) integers i and j, and some R-module L. In this sense, we show the following result. Theorem 1.4. Let H and M be R-modules such that H is Artinian and a-cofinite and M is minimax. Then, for each i ≥ 0, the module ExtiR (M, H) is minimax and b a-cofinite over b R. One of the main goals of this paper is to generalizes the results showed by Naguipour, Bahmanpour and Khalili [4]. In this direction we show the following theorem: Theorem 1.5. Let N be a nonzero a-cominimax R-module and M be a finitely generated R-module. (i) If dimR M = 1, then the R-module TorR i (M, N) is Artinian and a-cofinite for all i ≥ 0. 2 (ii) If dimR M = 2, then the R-module TorR i (M, N) is a-cofinite for all i ≥ 0. For larger dimensions we obtain the following result. Theorem 1.6. Let (R, m) be a local ring and a an R-ideal. Let N be a nonzero a-cominimax R-module and M be a finitely generated R-module. Then the R-module TorR i (M, N) is aweakly cofinite for all i ≥ 0 when one of the following cases holds: (i) dimR N ≤ 2. (ii) dimR M = 3. This paper is organized as follows. In Section 2, we discuss Question 1, define the class of a-weakly finite modules over M and with the purpose to show Theorem 1.1 and Theorem 1.2. In Section 3, we obtain some results on attached primes of generalized local cohomology in order to prove our second main result (Theorem 1.3). In order to understand the cofiniteness behavior of extension and torsion functors, in Section 4 we show some preliminary results that will be useful for this purpose. Also, we recall the definitions of minimax, weakly Laskerian modules, cominimax and weakly cofinite modules. This notions improves the definitions of Noetherian (and Artinian) modules and cofinite modules. In last section, we show some cases where the torsion functor TorR i (M, N) is cofinite or weakly cofinite module for all i. As applications of the results showed in the Section 2, we i study the behavior of ExtjR (L, Hia (M, N)) and TorR j (L, Ha (M, N)), for some R-module L. For conventions of the notation, basic results, and terminology not given in this paper, the reader should consult the books Brodmann-Sharp [6] and Matsumura [22]. 2. Cofiniteness and Artinianess of Generalized Local Cohomology Modules In this section, we fix our notation and list several results for the convenience of the reader. Let R be a commutative Noetherian ring. Throughout this paper, unless otherwise noted, the R-module N is not necessarily finitely generated and we understand the R-dimension of N, denoted by dimR N, by dimR N := sup{dimR R/p | p ∈ SuppR (N)}. Definition 2.1. 
Let S be the largest class of R-modules satisfying the following four properties: 1. If N ∈ S, then HomR (R/a, N) and HomR (M/aM, N) are finitely generated, where M is an R-module. 2. If N is a non-zero element of S and x is a regular element, then N/xN ∈ S is non-zero and dimR N/xN = dimR N − 1. 3. If N ∈ S, then |AssR (N)| < ∞. 4. If N ∈ S, then N/Γa (N) ∈ S. We say an R-module N is a-weakly finite over M, if it belongs to S. If M = R, we say that the R-module is a-weakly finite. If M = R, (R, m) is a local ring and a = m we just say that the R-module is weakly finite (see [3, Definition 2.1]). Note that if M is a finitely generated R-module, then M is a weakly finite R-module. Another important class of modules are the cofinite modules. Recall that a module N is said to be a-cofinite if SuppR (N) ⊆ V (a) and ExtiR (R/a, N) is a finite module for all i ∈ Z 3 [18]. With this notion, Bagheri ([3, Lemma 2.1]) has shown, in the local case, that the class of cofinite modules is contained in the class of weakly finite modules. Our next proposition shows a similar behaviour for a-weakly finite modules and the proof is analogous to what was done in [3, Lemma 2.1]. Proposition 2.2. Let (R, m) be a local ring and let M be a non-zero R-module of dimension n > 0. If M is a-cofinite, then M is a-weakly finite. Also, Bagheri [3, Theorem 2.1] has shown in the local case that the local cohomology module Him (N) is Artinian when N is a weakly finite R-module. Our main goal in this section is to generalize this result in the non-local case (Theorem 2.9). To do this, we will need some previous results. Lemma 2.3 ([26, Corollary 4.4]). The class of Artinian a-cofinite modules is closed under taking submodules, quotients and extensions, i.e., it is a Serre subcategory of the category of R-modules. Proposition 2.4. Let M be a finitely generated R-module and N ∈ S any R-module. Then for all i ≥ 0, ExtiR (M, N) and TorR i (M, N) are elements of S. Proof. The proof follows from the definition of extension and torsion functors.  Lemma 2.5 ([14, Lemma 2.1(i)]). If M is a finitely generated R-module, then Hi (M, Γa (N)) ∼ = Exti (M, Γa (N)). R a Lemma 2.6 ([6, Lemma 2.1.1]). If T is an R-module such that |AssR (T )| < ∞, then T is a-torsion free if and only if there is an element r ∈ a such that r is T -regular. Proposition 2.7 ([26, Proposition 4.1]). Let M be a module with support in V (a). Then M is Artinian and a-cofinite if and only if (0 :M a) has finite length. If there is an element x ∈ a such that (0 :M x) is Artinian and a-cofinite, then M is Artinian and a-cofinite. Lemma 2.8. Suppose that a ⊆ b and dimR R/b = 0. Let N be an a-weakly finite R-module. Then Γb (N) is Artinian and a and b-cofinite. Proof. Note that HomR (R/a, N) = (0 :N a) is finitely generated, since N is a-weakly finite. By hypothesis, a ⊆ b then HomR (R/b, N) = (0 :N b) ⊆ (0 :N a) = HomR (R/a, N) and, since R is a Noetherian ring, HomR (R/b, N) is finitely generated. Moreover, applying functors HomR (R/a, −) and HomR (R/b, −) in the exact sequence 0 → Γb (N) → N and using the same argument, we conclude HomR (R/a, Γb(N)) and HomR (R/b, Γb (N)) are finitely generated. Since dimR R/b = 0, it follows that HomR (R/b, Γb (N)) = (0 :Γb (N ) b) has finite length. Thus by Proposition 2.7, Γb (N) is Artinian and b-cofinite. On the other hand, Γb (N) = (0 :Γb (N ) b) ⊆ (0 :Γb (N ) a) ⊆ Γb (N) thus (0 :Γb (N ) a) = Γb (N) = (0 :Γb (N ) b) which has finite length. Then, again by Proposition 2.7, Γb (N) is Artinian and a-cofinite. 
Therefore, Γb (N) is Artinian and a and b-cofinite.  Now we are able to show the first main result of this section. 4 Theorem 2.9. Suppose that a ⊆ b and dimR R/b = 0. Let M be a finitely generated Rmodule and let N be a a-weakly finite R-module over M. Then Hib (M, N) is an Artinian R-module and a and b-cofinite, for all i ∈ N. Proof. We use induction on i. Assume that i = 0. We claim H0b (M, N) is Artinian and a and b-cofinite. Since H0b (M, N) ∼ = HomR (M, Γb (N)) the assertion is trivial by Lemma 2.8 and Proposition 2.4. Now, assume i > 0. By induction, Hjb (M, N) is Artinian and a and b-cofinite, for j < i. Consider the exact sequence Hib (M, Γb (N)) → Hib (M, N) → Hib (M, N/Γb (N)) . By Lemma 2.8, Γb (N) is Artinian and a and b-cofinite. Since M is finitely generated, by Proposition 2.4 and Lemma 2.5, Hib (M, Γb (N)) is Artinian and a and b-cofinite. Therefore, Hib (M, N) is Artinian and a and b-cofinite if and only if Hib (M, N/Γb (N)) is Artinian and a and b-cofinite. Hence we may assume that Γb (N) = 0 and so, by Lemma 2.6, there is an element r ∈ b which is N-regular. The exact sequence ·r 0 → N → N → N/rN → 0 induces the following exact sequence ·r Hi−1 (M, N/rN) −→ Hib (M, N) −→ Hib (M, N). b Since N/rN is a-weakly finite, by induction Hi−1 (M, N/rN) is Artinian and a and b-cofinite. b Thus, by Lemma 2.3, (0 :Hib (M,N ) r) is Artinian and a and b-cofinite. Therefore, by Proposition 2.7, Hib (M, N) is Artinian and a and b-cofinite.  Corollary 2.10. Let (R, m) be a local ring. Let M be a finitely generated R-module and N be a weakly finite R-module over M. Then Him (M, N) is Artinian for all i ∈ N. Corollary 2.11 ([3, Theorem 2.1]). Let (R, m) be a local ring and let N be a weakly finite R-module. Then Him (N) is Artinian for all i ∈ N. Corollary 2.12. Suppose that a ⊆ b and dimR R/b = 0. Let M be a finitely generated R-module and N be a a-weakly finite R-module over M. Then ExtjR (R/a, Hib (M, N)) and ExtjR (R/b, Hib(M, N)) have finite length for all i and j in N. Proof. By Theorem 2.9, Hib (M, N) is Artinian and a and b-cofinite. So, ExtjR (R/a, Hib (M, N)) and ExtjR (R/b, Hib (M, N)) are finitely generated and, since R/a and R/b are finitely generated, by Proposition 2.4, ExtjR (R/a, Hib (M, N)) and ExtjR (R/b, Hib (M, N)) are Artinian. Therefore, ExtjR (R/a, Hib (M, N)) and ExtjR (R/b, Hib (M, N)) have finite length.  As a consequence of this corollary, we have the following result. Corollary 2.13. Let (R, m) be a local ring. Suppose a ⊆ b such that dimR R/b = 0. Let M be a finitely generated R-module and N be a a-weakly finite R-module over M. Then AttR (ExtjR (R/a, Hib (M, N))) = {m} = AttR (ExtjR (R/b, Hib (M, N))). 5 Proof. We show that AttR (ExtjR (R/a, Hib (M, N))) = {m}, the other one is analogous. By Corollary 2.12, ExtjR (R/a, Hib (M, N)) has finite length, so SuppR (ExtjR (R/a, Hib (M, N))) = {m}. Now, by [28, Proposition 2.9 (2) and (3)], AttR (ExtjR (R/a, Hib(M, N))) ⊆ {m}. Therefore AttR (ExtjR (R/a, Hib (M, N))) = {m}, since AttR (ExtjR (R/a, Hib(M, N))) 6= ∅.  The local case. From now on, in this section, we will assume that our ring R is local with maximal ideal m. Theorem 2.14. Let M be a finitely generated R-module such that pd M = d < ∞ and let N be a b-weakly finite R-module over M such that dimR N = n < ∞. Suppose a ⊆ b. Then Hd+n (M, N) is Artinian and b-cofinite. a In particular, if a = b, Hd+n (M, N) is Artinian and a-cofinite. a Proof. We may choose x ∈ m \ a. 
By [13, Lemma 3.1], there is an exact sequence d+n Hd+n (M, N) −→ Hd+n a+xR (M, N) −→ Ha aRx (Mx , Nx ), where Nx is the localization of N at {xi | i ≥ 0}. Note that dimRx (Nx ) < n, so Hd+n aRx (Mx , Nx ) = d+n d+n 0. Thus there is an epimorphism Ha+xR (M, N) −→ Ha (M, N). Now the assertion follows by assuming m = a + (x1 , . . . , xr ) and applying the argument for finite steps, since R is Noetherian ring. Therefore φ d+n 0 −→ ker φ −→ Hd+n (M, N) −→ 0 m (M, N) −→ Ha is a short exact sequence. Now, from Theorem 2.9, Hd+n m (M, N) is Artinian and b-cofinite. Therefore Hd+n (M, N) is Artinian and b-cofinite.  a Corollary 2.15. Let N be a b-weakly finite R-module such that dimR N = n < ∞. Suppose a ⊆ b. Then Hna (N) is Artinian and b-cofinite. In particular, if a = b, Hna (N) is Artinian and a-cofinite. The next result give us an important isomorphism using a a-weakly finite module. By Corollary 2.15, the proof follows analogously to [17, Proposition 2.2]. Proposition 2.16. Let M and N be R-modules such that M is finitely generated and N is a-weakly finite over M. Assume pd M = d < ∞ and dimR N = n < ∞. Also, assume n ∈ N and x1 , . . . , xn is an a -filter regular sequence on N. Then 1. Hd+n (M, N) ∼ = ExtdR (M, Hna (N)). a 2. AttR (Hd+n (M, N)) ⊆ AttR (Hna (N)). a Proposition 2.17. Let M and N be R-modules such that M is finitely generated and N is weakly finite. Assume pd M = d < ∞, dimR R = dimR N = n and 0 < n < ∞. If Hd+n m (M, N) 6= 0, then it is not finitely generated. d+n Proof. As Hd+n m (M, N) 6= 0, then AttR (Hm (M, N)) 6= ∅. By Proposition 2.16 (2) and (3.1), n AttR (Hd+n m (M, N)) ⊆ AttR (Hm (N)) ⊆ {p ∈ AssR (N) | dimR R/p = n}. d+n Since n > 0, we have AttR (Hd+n m (M, N)) * {m}. Since Hm (M, N) is Artinian, it follows d+n that Hm (M, N) is not finitely generated by [6, Corollary 7.2.12].  6 3. Attached primes of the top generalized local cohomology modules In this section, we assume that (R, m) is a commutative Noetherian local ring with maximal ideal m. In [10, Theorem B], M. T. Dibaei and S. Yassemi have shown that, if dimR N = dimR R = n, then Hna (N) has a secondary representation and AttR (Hna (N)) ⊆ {p ∈ AssR (N) | cd(a, R/p) = n}. (3.1) The main purpose of this section is to generalize this result for generalized local cohomology modules. In order to do this, we give some preliminaries result. Lemma 3.1. Let M be finitely generated R-module and N = lim Nj , where {Nj | j ∈ J} is −→ j a family of finitely generated R-modules. Then Hia (M, N) = lim Hia (M, Nj ). −→ j Proof. Since M is finitely generated, M/an M is finitely generated. Then, by [7, Theorem 1], it is possible to commute direct limit and the functor ExtiR (M/an M, −). Therefore Hia (M, N) = lim ExtiR (M/an M, N) −→ n = lim ExtiR (M/an M, lim Nj ) −→ −→ n = j lim(lim ExtiR (M/an M, Nj )) −→ −→ n j = lim Hia (M, Nj ). −→ j  Lemma 3.2 ([30, Lemma 1.2]). If M is an R-module with AttR (M) = {p} where p is a minimal prime of R, then M is secondary. Proposition 3.3. Let M be a finitely generated R-module such that pd M = d < ∞ and dimR N = n < ∞. Consider L ⊆ N be an arbitrary R-submodule. If AssR (L) is a finite set and AttR (Hd+n (M, L)) ⊆ AssR (L), then Hd+n (M, L) has a secondary representation, for all a a R-submodule L ⊆ N , in particular for L = N. Proof. Let AssR (N) = {p1 , . . . , pl }. We proceed by induction on l. If l = 1, then ∅ 6= AttR (Hd+n (M, N)) ⊆ {p1 }. Therefore Hd+n (M, N) is a p1 -secondary a a R-module, by Lemma 3.2. Now assume l > 1. 
By [5, Proposition 2 - p.263], for each i, 1 ≤ i ≤ l, there exist Li ⊆ N submodule such that AssR (Li ) = {pi } and AssR (N/Li ) = AssR (N) \ {pi }. By hypothesis, AttR (Hd+n (M, Li )) ⊆ AssR (Li ) = {pi } a and AttR (Hd+n (M, N/Li )) ⊆ AssR (N) \ {pi }. a Thus AttR (Hd+n (M, Li ) is pi -secondary or zero and AttR (Hd+n (M, N/Li )) has secondary a a representation by induction. By the exact sequence ϕi AttR (Hd+n (M, Li )) −→ AttR (Hd+n (M, N)) −→ AttR (Hd+n (M, N/Li )) −→ 0, a a a 7 we obtain that ϕi (AttR (Hd+n (M, Li ))) is pi -secondary or zero. If ϕi (AttR (Hd+n (M, Li ))) = 0 a a d+n d+n for some i, then AttR (Ha (M, N)) ∼ Att (H (M, N/L )) which has secondary represen= R i a d+n tation. So we may assume ϕi (AttR (Ha (M, Li ))) 6= 0 for all i = 1, . . . , l. Hence AttR (Hd+n (M, N))/ a l X ϕi (AttR (Hd+n (M, Li ))) a ⊆ i=1 l \ AttR (Hd+n (M, N)/ϕi (Hd+n (M, Li ))) a a i=1 = l \ AttR (Hd+n (M, Li )) = ∅. a i=1 Therefore Hd+n (M, N) = a l X ϕi (Hd+n (M, Li )) a i=1 which is secondary representation for Hd+n (M, N). a  The next theorem is the main result of this section. Theorem 3.4. Let M be a finitely generated R-module such that pd M = d < ∞. Let dimR N = dimR R = n. Then Hd+n (M, N) has secondary representation and a AttR (Hd+n (M, N)) ⊆ {p ∈ AssR (N) | cd(a, M, R/p) = d + n}. a Proof. Firstly we will show the inclusion of the set of attached primes. We can assume Hd+n (M, N) 6= 0. a Let X = {p ∈ AssR (N) | cd(a, M, R/p) = d + n}. By [5, Proposition 2 - p.263], there exist L ⊆ N submodule such that AssR (N/L) = X and AssR (L) = AssR (N) \ X. Consider the exact sequence Hd+n (M, L) −→ Hd+n (M, N) −→ Hd+n (M, N/L) −→ Had+n+1 (M, L). a a a Then Had+n+1 (M, L) = 0, since n + 1 > dimR L. On the other hand, by Lemma 3.1, Hd+n (M, L) = lim Hd+n (M, Lj ), where {Lj | j ∈ I} is a family of finite submodules of a −→ a j N. We claim that Hd+n (M, Lj ) = 0, for all j ∈ I. If dimR Lj = lj < n, then d + n > d + lj a d+n and therefore Ha (M, Lj ) = 0. Now, if dimR Lj = n, then AttR (Hd+n (M, Lj )) = {p ∈ AssR (Lj ) | cd(a, M, R/p) = d + n}. a On the other hand, as Lj is a submodule of L and AssR (L) = AssR (N/X), we have AssR (Lj )∩ X = ∅. Then, Hd+n (M, Lj ) = 0 and we can conclude Hd+n (M, L) = 0. a a d+n d+n ∼ Therefore, Ha (M, N) = Ha (M, N/L) and we can consider N/L instead of N. Thus AssR (N) = AssR (N/L) = X = {p ∈ AssR (N) | cd(a, M, R/p) = n + d}. [ Now, we claim AttR (Hd+n (M, N)) ⊆ AssR (N). If r ∈ / p, then the short exact a p∈Ass(N ) sequence ·r 0 −→ N −→ N −→ N/rN −→ 0 induces the exact sequence ·r 0 −→ Hd+n (M, N) −→ Hd+n (M, N) −→ Hd+n (M, N/rN). a a a 8 Since cd(a, M, N/rN) < n[ + d, we have Hd+n (M, N/rN) rHd+n (M, N) = a a [ = 0. Then [ d+n Ha (M, N) and r ∈ / p. Therefore p⊆ p. p∈AttR (Had+n (M,N )) p∈AttR (Had+n (M,N )) p∈AssR (N ) Note that |AssR (N)| < ∞, since in this case, AssR (N) ⊆ AssR (R), and so if p ∈ AssR (N) we have that n + d = cd(a, M, R/p) ≤ pd(M) + dimR R/p ≤ pd(M) + dimR R = n + d, which implies, dimR R/p = n. Thus p is minimal over the ideal 0 and p ∈ AssR (R), which is a finite set, since R is Noetherian. In this way, if p ∈ AttR (Hd+n (M, N)), then p ⊆ q, for any q ∈ AssR (N). On the other a hand, since cd(a, M, R/p) ≤ pd(M) + dimR R/p ≤ pd(M) + dimR R = d + n and n + d = cd(a, M, R/q) ≤ cd(a, M, R/p) ≤ n + d we have p = q. Therefore, AttR (Hd+n (M, N)) ⊆ AssR (N). a The secondary representation of AttR (Hd+n (M, N)) follows immediately using Proposition a 3.3 and the first part of this proof.  
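As a quick sanity check on the statement (this remark is ours, not from the source beyond what is quoted in (3.1)): taking M = R gives pd M = 0 and H_a^i(R, N) = H_a^i(N), so Theorem 3.4 specializes precisely to the Dibaei–Yassemi containment (3.1),

AttR(H_a^n(N)) ⊆ {p ∈ AssR(N) | cd(a, R/p) = n}.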
The inclusion showed in the previous theorem is strict in general (see [10, Example 6]). But, if M and N are finitely generated R-modules such that pd M = d < ∞ and dimR N = n < ∞, the equality holds [17, Theorem 2.3]. 4. Torsion and Extension Functors: Some Preparatory Results Now, we are interested on the cofiniteness of torsion and extension functors. Firstly we will recall some definitions that will be used in for the rest of the paper. (i) An R-module M is called minimax, if there is a finitely generated submodule N of M such that M/N is Artinian [32]. (ii) An R-module M is said to be a-weakly cofinite if Supp(M) ⊆ V (a) and ExtjR (R/a, M) is minimax for all j ([11] and [12]). (iii) An R-module M is said to be weakly Laskerian, if the set of associated primes of any quotient module of M is finite ([11] and [12]). (iv) An R-module M is said to be a-cominimax if Supp(M) ⊆ V (a) and ExtjR (R/a, M) is weakly Laskerian for all j [2]. Now we will recall some facts of minimax modules. Remark 4.1. The following statements hold: (i) Noetherian modules are minimax, as are Artinian modules. (ii) The set of associated primes of any minimax R-module is finite. For our first results, we will use the Serre subcategory of the category of R-modules, which is a class of R-modules closed under taking submodules, quotients and extensions. We denote by S for a Serre subcategory of a category of R-modules. Recall that the class of Noetherian modules, Artinian modules, minimax modules, weakly Laskerian and matlis reflexive are examples of Serre subcategories. 9 Lemma 4.2. Let M be a finitely generated R-module and N be an arbitray module. Let t R be a non-negative integer such that TorR i (M, N) ∈ S for all i ≤ t. Then Tori (L, N) ∈ S for all i ≤ t, when L is a finitely generated R-module such that SuppR (L) ⊆ SuppR (M). Proof. Since SuppR (L) ⊆ SuppR (M), there exists a chain of R-modules 0 = L0 ⊂ L1 ⊂ · · · ⊂ Lk = L such that the factors Lj /Lj−1 are homomorphic images of a direct sum of finitely many copies of M (by Grusons Theorem, [31, Theorem 4.1]). Consider the exact sequences 0 → K → M n → L1 → 0 0 → L1 → L2 → L2 /L1 → 0 .. . 0 → Lk1 → Lk → Lk /Lk−1 → 0 for some positive integer n and some finitely generated R-module K. Now, from the long exact sequence R R R · · · → TorR i+1 (Lj /Lj−1 , N) → Tori (Lj−1 , N) → Tori (Lj , N) → Tori (Lj /Lj−1 , N) → · · · R and using the properties of Serre subcategory, TorR i (Lj , N) ∈ S if, and only if, Tori (Lj−1 , N) ∈ S. In this sense, using an easy induction on k, it suffices to prove the case when k = 1. So, consider the exact sequence mentioned above 0 → K → M n → L → 0. (4.1) We now will use induction in t. If t = 0, we have that L ⊗R N is a submodule of M n ⊗R N which belongs to S. Then L ⊗R N ∈ S. ′ ′ Now, lets assume t > 0 and TorR j (L , N) ∈ S for every finitely generated R-module L with SuppR (L′ ) ⊆ SuppR (M) and all j < t. The exact sequence (4.1) induces the long exact sequence R R n · · · → TorR i (M , N) → Tori (L, N) → Tori−1 (K, N) → · · · so that, by the inductive hypothesis, TorR i−1 (K, N) ∈ S for all i ≤ t. On the other hand, n M n ∼ TorR TorR i (M, N) ∈ S. i (M , N) = Therefore, TorR i (L, N) for all i ≤ t, and the proof is complete.  Lemma 4.3. Let M be a finitely generated R-module and N be an arbitray module. Let t be a non-negative integer such that ExtiR (M, N) ∈ S for all i ≤ t. Then ExtiR (L, N) ∈ S for all i ≤ t, when L is a finitely generated R-module such that SuppR (L) ⊆ SuppR (M). Proof. 
The proof follows by a similar way to what was done in the previous lemma.  The next theorem is a key ingredient for the rest of the paper. Theorem 4.4. Let t be a non-negative integer. Then, for an arbitrary R-module N, the following conditions are equivalent: (i) TorR i (R/a, N) ∈ S for all i ≤ t. 10 (ii) For any finitely generated R-module M with SuppR (M) ⊆ V (a), TorR i (M, N) ∈ S for all i ≤ t. (iii) For any R-ideal b with a ⊆ b, TorR i (R/b, N) ∈ S for all i ≤ t. (iv) For any minimal prime p over a, TorR i (R/p, N) ∈ S for all i ≤ t. (v) ExtiR (R/a, N) ∈ S for all i ≤ t. (vi) For any finitely generated R-module M with SuppR (M) ⊆ V (a), ExtiR (M, N) ∈ S for all i ≤ t. (vii) For any R-ideal b with a ⊆ b, ExtiR (R/b, N) ∈ S for all i ≤ t. (viii) For any minimal prime p over a, ExtiR (R/p, N) ∈ S for all i ≤ t. Proof. (i)⇒(ii) It follows from Lemma 4.2, since SuppR (R/a) = V (a). (ii)⇒(iii) Take M = R/b and observe that SuppR (R/b) = V (b) ⊆ V (a). (iii)⇒(iv) Immediate. (iv)⇒(i) Let p1 , . . . , pn be the minimal primes of a. Then, by assumption, TorR i (R/pj , N) ∈ L Ln n R R ∼ S for all j = 1, . . . , n. Hence j=1 R/pj , N) ∈ S. Since j=1 Tori (R/pj , N) = Tori ( Ln SuppR (R/a) = SuppR ( j=1 R/pj ), it follows by Lemma 4.2 that TorR i (R/a, N) ∈ S for all i ≤ t, as required. (v)⇒(vi)⇒(vii)⇒(viii)⇒(v) It follows in a similar way to what was done previously, using Lemma 4.3. (i)⇔(v) The proof is the same as that in [26, Theorem 2.1]  In particular, for the class of minimax modules we obtain the following result. Corollary 4.5. Let a be an ideal of R such that dimR R/a = 1 and let t be a non-negative integer. Then, for an arbitrary R-module N, the following conditions are equivalent: (i) ExtiR (R/a, N) is minimax for all i ≤ t. (ii) TorR i (R/a, N) is minimax for all i ≤ t. (iii) The Bass number µi (p, N) is finite for all p ∈ V (a) for all i ≤ t. (iv) The Betti number β i (p, N) is finite for all p ∈ V (a) for all i ≤ t. (v) Hai (N) is a-cominimax, for all integer i. Proof. The proof follows by Theorem 4.4, [20, Theorem 3.3] and [1, Corollary 2.5].  Lemma 4.6. Let M be a minimax R-module. If there is n ∈ N and m1 , . . . , ms maximal ideals of R such that m1 m2 · · · ms M = 0, then M has finite length. Proof. The proof follows by the short exact sequence 0 → L → M → K → 0, where L is finitely generated and K is an Artinian R-module, since M is minimax.  5. Cofiniteness of torsion and Extension functors Our purpose in this section is to give partial answers about Question 2 made in the introduction. The reader can compare the next result with [21, Theorem 2.3]. Theorem 5.1. Let H and N be R-modules such that H is Artinian and a-cofinite and N is b minimax. Then, for each i ≥ 0, the module ExtiR (N, H) is minimax and b a-cofinite over R. Proof. Since N is a minimax module, there exist L ⊆ N submodule such that L is finitely generated and N/L is Artinian. Fix such L and consider the exact sequence 0 → L → N → N/L → 0. 11 Thus, we have a long exact sequence · · · → ExtiR (N/L, H) → ExtiR (N, H) → ExtiR (L, H) → · · · . By Proposition 2.4, since Artinian and a-cofinite modules are a Serre subcategory, we obtain that ExtiR (L, H) is Artinian and a-cofinite. Then is also Artinian and b a-cofinite over i b Thus Ext (L, H) is minimax and b b R. a-cofinite over R. R i b Thus On the other hand, by [21, Corollary 2.3], ExtR (N/L, H) is a Noetherian over R. b ExtiR (N/L, H) is minimax and b a-cofinite over R. 
Therefore, since the class of b a-cofinite minimax modules is a Serre subcategory [26, Corolb lary 4.4], ExtiR (N, H) is minimax and b a-cofinite over R.  The next two results partially answer a generalization of Hartshorne’s conjecture. Corollary 5.2. Let R be a complete ring and a is an R-ideal such that dimR R/a = 0. Let M be a finitely generated R-module and let N be an a-weakly finite R-module over M. Then, for each i ≥ 0 and j ≥ 0, ExtiR (L, Hja (M, N)) is minimax and a-cofinite R-module, for any minimax module L. Proof. The proof follows by Theorem 2.9 and Theorem 5.1.  Corollary 5.3. Let (R, m) be a complete local ring. Let M be a finitely generated R-module such that pd M = d < ∞ and let N be an a-weakly finite R-module over M such that dimR N = n < ∞. Then, for each i ≥ 0, ExtiR (L, Hd+n (M, N)) is minimax and a-cofinite a R-module, for any minimax module L. Proof. The proof follows by Theorem 2.14 and Theorem 5.1.  Lemma 5.4. Let N be a nonzero a-cominimax R-module. Then for any nonzero R-module M of finite length, the R-module TorR i (M, N) has finite length for all i ≥ 0. Proof. Firstly, note that SuppR (M) is a finite non-empty subset of the set of all maximal ideals of R (denoted by Max(R)), since M has finite length. So, consider SuppR (M) := {p1 , . . . , pr } and b = p1 p2 . . . pr . By Lemma 4.2, it is sufficient to show that TorR i (R/b, N) has R ∼ finite length for all i ≥ 0, because Supp (M) = V (b). By the isomorphism Tor R i (R/b, N) = Lr R i=1 Tori (R/pi , N), we may assume without loss of generality that r = 1, and so b = p1 . Let i ≥ 0 be an integer such that TorR i (R/p1 , N) 6= 0. Note that p1 ∈ SuppR (N), and since N is a-cominimax, SuppR (N) ⊆ V (a) and ExtiR (R/a, N) is minimax for all i ≥ 0. Hence ExtiR (R/pi , N) is minimax and has finite length by Theorem 4.4 and Lemma 4.6. Therefore, applying again Theorem 4.4, we obtain that TorR i (R/p1 , N) is minimax and has finite length for all i ≥ 0, and so we obtain the statement.  Now we are able to show the main result of this section. Theorem 5.5. Let N be a nonzero a-cominimax R-module and M be a finitely generated R-module. (i) If dimR M = 1, then the R-module TorR i (M, N) is Artinian and a-cofinite for all i ≥ 0. (ii) If dimR M = 2, then the R-module TorR i (M, N) is a-cofinite for all i ≥ 0. 12 Proof. (i) Since N is cominimax and SuppR (Γa (M)) ⊆ V (a), we obtain that TorR i (Γa (M), N) is minimax for all i ≥ 0 by Theorem 4.4. Now, by the short exact sequence 0 → Γa (M) → M → M/Γa (M) → 0, we can deduce the following long exact sequence all i ≥ 0 R R R · · · → TorR i (Γa (M), N) → Tori (M, N) → Tori (M/Γa (M), N) → Tori−1 (Γa (M), N) → · · · . So it is sufficient to show that, for all i ≥ 0, TorR i (M/Γa (M), N) is Artinian and a-cofinite. Hence, we may assume that Γa (M) = 0 and therefore ensure the existence of x 6∈ a\∪p∈AssR (M ) p by [6, Lemma 2.2.1]. From the short exact sequence x 0 → M → M → M/xM → 0, we obtain following long exact sequence x R R R · · · → TorR i (M/xM, N) → Tori−1 (M, N) → Tori−1 (M, N) → Tori−1 (M/xM, N) → · · · . By Lemma 5.4, the R-module TorR i (M/xM, N) is of finite length i ≥ 0, because M/xM has finite length. So, by previous long exact sequence, (0 :TorRi (M,N ) x) has finite length for all i ≥ 0 and therefore (0 :TorRi (M,N ) I) has also finite length. Finally, since SuppR (TorR i (M, N)) ⊆ R SuppR (N) ⊆ V (a), we can conclude that Tori (M, N) is a-torsion. Therefore for all i ≥ 0, R TorR i (M, N) is an Artinian R-module by [25, Theorem 1.3]. 
The cofiniteness of Tori (M, N) follows by [26, Theorem 4.3]. (ii) Proceeding similarly to the proof of item (i), we may assume that assume that Γa (M) = 0, and so take x 6∈ a\ ∪p∈AssR (M ) p. The short exact sequence x 0 → M → M → M/xM → 0, induces following long exact sequence x R R R · · · → TorR i (M/xM, N) → Tori−1 (M, N) → Tori−1 (M, N) → Tori−1 (M/xM, N) → · · · . R Therefore (0 :TorRi (M,N ) I) and TorR i (M, N)/x Tori (M, N) are Artinian R-modules by item (i) and Lemma 5.4 and a-cofinite by [26, Corollary 4.4] for all i ≥ 0. Therefore TorR i (M, N) is a-cofinite by [26, Corollary 3.4]  Corollary 5.6. Let a be an R-ideal such that dimR R/a = 0. Let L, M be finitely generated R-modules and let N be a a-weakly finite R-module over M. j (i) If dimR L = 1, then TorR i (L, Ha (M, N)) is Artinian and a-cofinite for all i, j ≥ 0. j (ii) If dimR L = 2, then TorR i (L, Ha (M, N)) is a-cofinite for all i, j ≥ 0. Proof. The result follows by Theorem 2.9 and Theorem 5.5.  Proposition 5.7. Let N be a nonzero a-cominimax R-module and M be a finitely generated R-module. If that dimR N ≤ 1, then the R-module TorR i (M, N) is a-cominimax for all i ≥ 0. Proof. Let F• be a resolution of M consisting of finite free R-modules. Since TorR i (M, N) = Hi (F• ⊗R N)) is a subquotient of a finite direct sum of copies of N. Now the result follows by the fact that the category of a-cominimax modules with dimension less than or equal to 1 is an Abelian category [20, Theorem 2.5].  13 Corollary 5.8. Let N be a nonzero a-cofinite R-module and M be a finitely generated Rmodule. If dimR N ≤ 1, then the R-module TorR i (M, N) is a-cofinite for all i ≥ 0. Now we will investigate the behavior of torsion functors for larger dimensions. For this purpose, weakly Laskerian modules are key ingredients. Remark 5.9. Note that if R is a Noetherian ring, a is an ideal of R and M an Rmodule, then TorR i (R/a, M) is a weakly Laskerian R-module for all i ≥ 0 if, and only if, i ExtR (R/a, M) is weakly Laskerian R-module for all i ≥ 0. The proof follows by Theorem 4.4 and the fact that the class of weakly Laskerian modules is a Serre subcategory. Theorem 5.10. Let (R, m) be a local ring and a an R-ideal. Let N be a nonzero a-cominimax R-module and M be a finitely generated R-module. Then the R-module TorR i (M, N) is aweakly cofinite for all i ≥ 0 when one of the following cases holds: (i) dimR N ≤ 2. (ii) dimR M = 3. Proof. Note that, in the light of Remark 5.9, it is sufficient to show that the R-modules R TorR j (R/a, Tori (M, N)) are weakly Laskerian for all i ≥ 0 and j ≥ 0. For this purpose, R ′ consider the set Λ = {TorR j (R/a, Tori (M, N)) | i ≥ 0 and j ≥ 0}. Let K ∈ Λ and let K ′ be a submodule of K. The proof is complete if we show that the set AssR (K/K ) is finite. Note that we may assume that R is complete by [22, Ex 7.7] and [23, Lemma 2.1]. Suppose that AssR (K/K ′) is an infinite set. So, we can consider {ps }∞ s=1 the countable infinite subset of non-maximal elements of AssR (K/K ′). Further m 6⊆ ∪∞ s=1 ps by [24, Lemma 3.2]. Define S := R\ ∪∞ p . Now, we will analyze the cases (i) and (ii). s=1 s If dimR N ≤ 2, then we can conclude that S −1 N is a S −1 a-cominimax S −1 R-module of dimension at most one by [22, Ex 7.7] and [29, Lemma 3.4]. Therefore TorS −1 R (S −1 M, S −1 N) is S −1 a-cominimax by Proposition 5.7. Therefore S −1 K/S −1 K ′ is a minimax S −1 R-module and so, AssS −1 R (S −1 K/S −1 K ′ ) is finite −1 by Remark 4.1. 
However, for each pi ∈ {ps }∞ pi ∈ AssS −1 R (S −1 K/S −1 K ′ ), s=1 , we have that S and so we obtain a contradiction. This completes the proof. In case dimR M = 3, we obtain that TorS −1 R (S −1 M, S −1 N) is S −1 a-cominimax, by Lemma 5.4 and Theorem 5.5. Now the proof follows similarly to the one previously made.  Now we give some applications of the results showed in this section. Corollary 5.11. Let N be a nonzero minimax R-module and M be a finitely generated R-module. (i) If dimR Hia (N) ≤ 1 (e.g. dimR N ≤ 1 or dimR R/a = 1), then the R-module i TorR j (M, Ha (N)) is a-cominimax for all i ≥ 0 and and j ≥ 0. Also, for for all i i ≥ 1 and and j ≥ 0, the R-module TorR j (M, Ha (N)) is a-cofinite. (ii) If (R, m) a is local ring and dimR Hia (M) ≤ 2 (e.g. dimR R/I ≤ 2 , then the R-module i TorR j (M, Ha (N)) is a-weakly cofinite for all i ≥ 0 and j ≥ 0. (iii) Let L and M be finitely generated R-modules such that pd M = d < ∞ and dimR L = 3, and let N be an a-weakly finite R-module over M such that dimR N = n < ∞. Then, for each i ≥ 0, ToriR (L, Hd+n (M, N)) is a-weakly cofinite R-module. a 14 Proof. (i) First note that Hia (N) is an a-cominimax R-module for all i ≥ 0, by [1, Theorem i 2.2]. Now the first statement follows by Proposition 5.7. The cofiniteness of TorR j (M, Ha (N)) j follows by the fact that ExtR (M, Hia (N)) is finite for all i ≥ 1 and j ≥ 0 by [1, Theorem 2.2] and Corollary 5.8. (ii) The proof follows analogously to Theorem 5.10 (i), using the previous item. (iii) Apply Theorem 2.14 and Theorem 5.10 (ii).  References [1] A. Abbasi, H. Roshan-Shekalgourabi and D. Hassanzadeh-Lelekaami, Some results on the local cohomology of minimax modules, Czechoslovak Math. J.,64(139) (2014), 327-333. [2] J.Azami, R. Naghipour and B. Vakili, Finiteness properties of local cohomology modules for aminimax modules, Proc. Am. Math. Soc., 137(2) (2009), 439-448. [3] A. Bagheri, A Non-Vanishing Theorem for Local Cohomology Modules, Bull. Malays. Math. Sci. Soc. (2) 37 (1) (2014), 65-72. [4] K. Bahmanpour, I. Khalili and R. Naghipour, Cofiniteness of torsion functors of cofinite modules, Colloquium Mathematicum, 136 (2), (2014), 221-230. [5] N. Bourbaki, Commutative Algebra, Translated from the French. Paris-Reading MA 1972. [6] M. P. Brodmann and R. Y. Sharp, Local cohomology - an algebraic introduction with geometric applications, Cambridge University Press, (1998). [7] K. S. Brown, Homological criteria for finiteness, Comment. Math. Helvetici, 50 (1975), 129-135. [8] N. T. Cuong, S. Goto and N. Van Hong, On the cofiniteness of generalized local cohomology modules, Kyoto J. Math., 55 (2015), no. 1, 169-185. [9] F. Dehghani-Zadeh, Cofiniteness and Artinianness of Generalized Local Cohomology Modules, Rom. J. Math. Comput. Sci., 5, Issue 1, (2015), 63-69. [10] M.T. Dibaei and S. Yassemi, Attached primes of the top local cohomology modules with respect to an ideal, Arch. Math.(Basel), 84 (2005), 292-297. [11] K. Divaani-Aazar and A. Mafi, Associated primes of local cohomology modules, Proc. Am. Math. Soc., 133(3) (2005), 655-660. [12] K. Divaani-Aazar and A. Mafi, Associated primes of local cohomology modules of weakly Laskerian modules, Commun. Algebra, 34 (2006), 681-690. [13] K. Divaani-Aazar and A. Hajikarimi, Generalized Local Cohomology Modules and Homological Gorenstein Dimensions, Comm. Algebra, 39(6) (2011), 2051-2067. [14] K. Divaani-Aazar, R. Sazeedeh, On Vanishing of Generalized Local Cohomology Modules, Algebra Colloq., 12(2) (2005), 213-218. [15] K. Divaani-Aazar and R. 
Sazeedeh, Cofiniteness of generalized local cohomology modules, Colloq. Math., 99 (2) (2004), 283-290. [16] Grothendieck, A. (1968). Cohomologie locale des faisceaux coherents et theoremes de Lefschetz locaux et globaux (SGA 2), North-Holland, Amsterdam. [17] Y. Gu and L. Chu, Attached primes of the top generalized local cohomology modules, Bull. Aust. Math. Soc., 79(1) (2009), 59-67. [18] R. Hartshorne, Affine duality and cofiniteness, Invent. Math., 9 (1970), 145-164. [19] J. Herzog, Komplexe Auflosungen und Dualitat in der lokalen Algebra, Habilitationsschrift, Universitat Regensburg (1970). [20] Y. Irani, Cominimaxness with respect to ideals of dimension one, Bull. Korean Math. Soc., 54 (2017), 289-298. [21] B. Kubik, M. J. Leamerb and S. Sather-Wagstaff, Homology of artinian and Matlis reflexive modules, I, J. Pure Appl. Algebra, 215 (2011), 2486-2503. [22] H. Matsumura, Commutative Ring Theory, Cambridge Studies in Advanced Mathematics, 8, Cambridge University Press, Cambridge, 1986. 15 [23] T. Marley, The associated primes of local cohomology modules over rings of small dimension, Manuscripta Math., 104 (2001), 519-525. [24] T. Marley and J. C. Vassilev, Cofiniteness and associated primes of local cohomology modules, J. Algebra, 256 (2002) 180-193. [25] L. Melkersson, On asymptotic stability for sets of prime ideals connected with the powers of an ideal, Math. Proc. Cambridge Philos. Soc., 107 (1990), 267-271. [26] L. Melkersson, Modules cofinite with respect to an ideal, J. Algebra., 285 (2005), 649-668. [27] Y. R. Nasbi, A. Vahidi and K. A. Amoli, Torsion functors of generalized local cohomology modules, Commun. Algebra, 45 (2017), 5420-5430. [28] A. Ooishi, Matlis duality and the width of a module, Hiroshima Math. J., 6 (1976), 573-587. [29] H. Roshan-Shekalgourabi and D. Hassanzadeh-Lelekaami, On the generalized local cohomology of minimax modules, J. Algebra Appl., 15(8) (2016), 1-8. [30] D. E. Rush, Big Cohen-Macaulay modules, Illinois J. Math., 24 (1980), 606-611. [31] W. Vasconcelos, Divisor Theory in Module Categories, North-Holland Publishing Company, Amsterdam, 1974. MR0498530 (58:16637) [32] H. Zöschinger, Minimax-moduln, J. Algebra 102 (1986), 1-32. Universidade Tecnológica Federal do Paraná, 85053-525, Guarapuava-PR, Brazil E-mail address: [email protected] Universidade de São Paulo - ICMC, Caixa Postal 668, 13560-970, São Carlos-SP, Brazil E-mail address: [email protected] Universidade de São Paulo - ICMC, Caixa Postal 668, 13560-970, São Carlos-SP, Brazil E-mail address: [email protected] 16
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems Emre Neftci ∗1 , Srinjoy Das1,2 , Bruno Pedroni3 , Kenneth Kreutz-Delgado2 , and Gert Cauwenberghs1,3 1 arXiv:1311.0966v3 [] 9 Dec 2013 2 Institute for Neural Computation, UCSD, La Jolla, CA, USA Electrical and Computer Engineering Department, UCSD, La Jolla, CA, USA 3 Department of Bioengineering, UCSD, La Jolla, CA, USA March 12, 2018 Abstract posed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality. Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetics which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train a RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM com∗ 1 Introduction Machine learning algorithms based on stochastic neural network models such as RBMs and deep networks are currently the state-of-the-art in several practical tasks [28, 4]. The training of these models requires significant computational resources, and is often carried out using powerhungry hardware such as large clusters [35] or graphics processing units [5]. Their implementation in dedicated hardware platforms can therefore be very appealing from the perspectives of power dissipation and of scalability. Neuromorphic Very Large Scale Integration (VLSI) systems exploit the physics of the device to emulate very densely the performance of biological neurons in a realtime fashion, while dissipating very low power [39, 30]. The distributed structure of RBMs suggests that neuromorphic VLSI circuits and systems can become ideal candidates for such a platform. Furthermore, the communication Electronic address: [email protected]; Corresponding author 1 in software and in hardware platforms. In standard CD, weight updates are computed on the basis of alternating, feed-forward propagation of activities [29]. In a neuromorphic implementation, this translates to reprogramming the network connections and resetting its state variables at every step of the training. As a consequence, it requires two distinct dynamical systems: one for normal operation (i.e. testing), the other for training, which is highly impractical. 
To overcome this problem, we train the neural RBMs using an online adaptation of CD. We exploit the recurrent structure of the network to mimic the discrete “construction” and “reconstruction” steps of CD in a spike-driven fashion, and Spike Time Dependent Plasticity (STDP) to carry out the weight updates. Each sample (spike) of each random variable (neuron) causes synaptic weights to be updated. We show that, over longer periods of time, these microscopic updates behave like a macroscopic CD weight update. Compared to standard CD, no additional programming overhead is required during the training steps, and both testing and training take place in the same dynamical system. Because RBMs are generative models, they can act simultaneously as classifiers, content-addressable memories, and carry out probabilistic inference. We demonstrate these features in a MNIST hand-written digit task [37], using an RBM network consisting of 824 “visible“ neurons and 500 “hidden” neurons. The spiking neural network was able to learn a generative model capable of recognition performances with accuracies up to 91.9%, which is close to the performance obtained using standard CD and Gibbs sampling, 93.6%. between neuromorphic components is often mediated using asynchronous address-events [14] enabling them to be interfaced with event-based sensors [38, 44, 42] for embedded applications, and to be implemented in a very scalable fashion [31, 52, 50]. Currently, RBMs and the algorithms used to train them are designed to operate efficiently on digital processors, using batch, discrete-time, iterative updates based on exact arithmetic calculations. However, unlike digital processors, neuromorphic systems compute through the continuoustime dynamics of their components, which are typically Integrate & Fire (I&F) neurons [30], rendering the transfer of such algorithms on such platforms a non-trivial task. We propose here a method to construct RBMs using I&F neuron models and to train them using an online, event-driven adaptation of the Contrastive Divergence (CD) algorithm. We take inspiration from computational neuroscience to identify an efficient neural mechanism for sampling from the underlying probability distribution of the RBM. Neuroscientists argue that brains deal with uncertainty in their environments by encoding and combining probabilities optimally [18], and that such computations are at the core of cognitive function [25]. While many mechanistic theories of how the brain might achieve this exist, a recent neural sampling theory postulates that the spiking activity of the neurons encodes samples of an underlying probability distribution [20]. The advantage for a neural substrate in using such a strategy over the alternative one, in which neurons encode probabilities, is that it requires exponentially fewer neurons. Furthermore, abstract model neurons consistent with the behavior of biological neurons can implement Markov Chain Monte Carlo (MCMC) sampling [7], and RBMs sampled in this way can be efficiently trained using CD, with almost no loss in performance [45]. We identify the conditions under which a dynamical system consisting of I&F neurons performs neural sampling. These conditions are compatible with neuromorphic implementations of I&F neurons, suggesting that they can achieve similar performance. 
The calibration procedure necessary for configuring the parameters of the spiking neural network is based on firing rate measurements, and so is easy to realize 2 2.1 Materials and Methods Neural Sampling with Noisy I&F Neurons We describe here the conditions under which a dynamical system composed of I&F neurons can perform neural sampling. In has been proven that abstract neuron models consistent with the behavior of biological spiking neurons can perform MCMC sampling of a Boltzmann distribution [7]. 2 Two conditions are sufficient for this. First, the instantaneous firing rate of the neuron verifies: ( 0 if t − t0 < τr , (1) ρ(u(t), t − t0 ) = r(u(t)) t − t0 ≥ τr synapses producing such PSPs is very difficult to realize in hardware, when compared to first-order linear filters that result in “alpha”-shaped PSPs [17, 3]. This is because, in the latter model, the synaptic dynamics are linear, such that a single hardware synapse can be used to generate the same current that would be generated by an arbitrary number of synapses (see also next section). As a consequence, we will use alpha-shaped PSPs instead of rectangular PSPs in our models. The use of the alpha PSP over the rectangular PSP is the major source of degradation in sampling performance, as we will discuss in Sec. 2.2. with r(u(t)) proportional to exp(u(t)), where u(t) is the membrane potential and τr is an absolute refractory period during which the neuron cannot fire. ρ(u(t), t − t0 ) describes the neuron’s instantaneous firing rate as a function of u(t) at time t, given that the last spike occurred at t0 . The average firing rate of this neuron model for stationary u(t) is the sigmoid function: ρ(u) = (τr + exp(−u))−1 . Stochastic I&F Neurons. A neuron whose instantaneous firing rate is consistent with Eq. (1) can perform neural sampling. Eq. (1) is a generalization of the Poisson process to the case when the firing probability depends on the time of the last spike (i.e. it is a renewal process), and so can be verified only if the neuron fires stochastically [11]. Stochasticity in I&F neurons can be obtained through several mechanisms, such as a noisy reset potential, noisy firing threshold, or noise injection [47]. The first two mechanisms necessitate stochasticity in the neuron’s parameters, and therefore may require specialized circuitry. But noise injection in the form of background Poisson spike trains requires only synapse circuits, which are present in many neuromorphic VLSI implementation of spiking neurons [30, 3]. Furthermore, Poisson spike trains can be generated self-consistently in balanced excitatoryinhibitory networks [55], or using finite-size effects and neural mismatch [1]. We show that the abstract neuron model in Eq. (1) can be realized in a simple dynamical system consisting of leaky I&F neurons with noisy currents. The neuron’s membrane potential below firing threshold θ is governed by the following differential equation: (2) Second, the membrane potential of neuron i is equal to the linear sum of its inputs: ui (t) = bi + N X wij zj (t), ∀i = 1, ..., N, (3) j=1 where bi is a constant bias, and zj (t) represents the presynaptic spike train produced by neuron j and is set to 1 for a duration τr after the neuron has spiked. The terms wij zj (t) are identified with the time course of the Post– Synaptic Potential (PSP), i.e. the response of the membrane potential to a pre-synaptic spike. The two conditions above define a neuron model, to which we refer as the “abstract neuron model”. 
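To make these two conditions concrete, the following sketch simulates the abstract neuron model in discrete time. It is our own illustration (plain NumPy rather than the BRIAN simulator used later in the paper); the proportionality constant of r(u) ∝ exp(u) is absorbed into the biases, and all parameter values are placeholders.

```python
import numpy as np

def abstract_neural_sampler(W, b, t_sim=10.0, dt=1e-3, tau_r=4e-3,
                            rng=np.random.default_rng(0)):
    """Discrete-time sketch of the abstract neuron model (Eqs. 1-3).

    W : (N, N) symmetric coupling matrix with zero diagonal.
    b : (N,) biases.
    Returns one binary state vector z per time step dt.
    """
    n = len(b)
    last_spike = np.full(n, -np.inf)                 # time of each neuron's last spike
    samples = np.zeros((int(t_sim / dt), n))
    for k in range(samples.shape[0]):
        t = k * dt
        z = (t - last_spike < tau_r).astype(float)   # rectangular PSP: z_j = 1 for tau_r
        u = b + W @ z                                # Eq. (3): linear sum of the inputs
        p_spike = np.clip(np.exp(u) * dt, 0.0, 1.0)  # Eq. (1): instantaneous rate ~ exp(u)
        fired = (z == 0) & (rng.random(n) < p_spike) # refractory neurons cannot fire
        last_spike[fired] = t
        samples[k] = np.maximum(z, fired)
    return samples
```

Counting the visited binary states of such a run and comparing the histogram with Eq. (4) reproduces, in miniature, the validation carried out in Sec. 2.2.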
The network then samples from a Boltzmann distribution:

\[
p(z_1, \ldots, z_k) = \frac{1}{Z} \exp\big(-E(z_1, \ldots, z_k)\big), \quad \text{with} \quad
E(z_1, \ldots, z_k) = -\frac{1}{2}\sum_{ij} W_{ij}\, z_i z_j - \sum_i b_i z_i , \qquad (4)
\]

where Z is the partition function and $E(z_1, \ldots, z_k)$ can be interpreted as an energy function [26]. An important fact about the abstract neuron model is that, according to the dynamics of $z_j(t)$, the PSPs are "rectangular" and non-additive. As discussed above, implementing a large number of synapses producing such rectangular PSPs is difficult in hardware, which is why alpha-shaped PSPs are used in our models instead.

The neuron's membrane potential below the firing threshold θ is governed by the following differential equation:

\[
C \frac{d u_i}{dt} = -g_L u_i + I_i(t) + \sigma\xi(t), \qquad u_i(t) \in (-\infty, \theta), \qquad (5)
\]

where C is a membrane capacitance, $u_i$ is the membrane potential of neuron i, $g_L$ is a leak conductance, $\sigma\xi(t)$ is a white noise term of amplitude σ (which can for example be generated by background activity), $I_i(t)$ is its synaptic current, and θ is the neuron's firing threshold. When the membrane potential reaches θ, an action potential is elicited. After a spike is generated, the membrane potential is clamped to the reset potential $u_{rst}$ for a refractory period $\tau_r$.

In the case of the neural RBM, the currents $I_i(t)$ depend on the layer the neuron is situated in. For a neuron i in layer v,

\[
I_i(t) = I_i^d(t) + I_i^v(t), \qquad
\tau_{syn} \frac{d I_i^v}{dt} = -I_i^v + \sum_{j=1}^{N_h} q_{h_{ji}}\, h_j(t) + q_{b_i}\, b_i^v(t), \qquad (6)
\]

where $I_i^d(t)$ is a current representing the data (i.e. the external input), $I^v$ is the feedback from the hidden layer activity and the bias, and the q's are the respective synaptic weights. For a neuron j in layer h,

\[
I_j(t) = I_j^h(t), \qquad
\tau_{syn} \frac{d I_j^h}{dt} = -I_j^h + \sum_{i=1}^{N_v} q_{v_{ij}}\, v_i(t) + q_{b_j}\, b_j^h(t), \qquad (7)
\]

where $I^h$ is the feedback from the visible layer, and $b^v(t)$ and $b^h(t)$ are Poisson spike trains implementing the bias. The dynamics of $I^h$ and $I^v$ correspond to a first-order linear filter, so each incoming spike results in PSPs that rise and decay exponentially (i.e. alpha-PSPs) [23].

Figure 1: Transfer curve $\rho(u_0)$ [Hz] as a function of $u_0$ for a leaky I&F neuron, for three parameter sets ($(\theta - u_{rst})/\sigma_V = 5.0$, $0.3$ and $0.1$), where $u_0 = I/g_L$ and $1/\tau_r = 250$ Hz (dashed grey). $\sigma_V$ is varied to produce the different ratios. The three panels show that the fit with the sigmoid function (solid black) improves as the ratio decreases.

Can this neuron model verify the conditions required for neural sampling? The membrane potential is already assumed to be equal to the sum of the PSPs, as required by neural sampling. So to answer the above question we only need to verify whether Eq. (1) holds. Eq. (5) is a Langevin equation which can be analyzed using the Fokker-Planck equation [22]. The solution to this equation provides the neuron's input/output response, i.e. its transfer curve (for a review, see [48]):

\[
\rho(u_0) = \left( \tau_{rf} + \tau_m \sqrt{\pi} \int_{(u_{rst}-u_0)/\sigma_V}^{(\theta-u_0)/\sigma_V} \exp(x^2)\,\big(1 + \mathrm{erf}(x)\big)\, dx \right)^{-1}, \qquad (8)
\]

where erf is the error function (the integral of the normal distribution), $u_0 = I/g_L$ is the stationary value of the membrane potential when injected with a constant current I, $u_{rst}$ is the reset voltage, and $\sigma_V^2 = \sigma^2/(g_L C)$. According to Eq. (2), the condition for neural sampling requires the average firing rate of the neuron to be the sigmoid function. Although the transfer curve of the noisy I&F neuron, Eq. (8), is not equal to the sigmoid function, it was previously shown that with an appropriate choice of parameters the shape of this curve can be very similar to it [40]. We observe that, for a given refractory period $\tau_r$, the smaller the ratio $(\theta - u_{rst})/\sigma_V$ in Eq. (5), the better the transfer curve resembles a sigmoid function (Fig. 1).
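The statements above can be checked numerically: integrate Eq. (5) with the Euler method, measure the firing rate for a range of constant input currents, and fit the result with the sigmoid form of Eq. (9) introduced next. The sketch below is our own (NumPy, not the paper's BRIAN code); every parameter value is illustrative and chosen only so that $(\theta - u_{rst})/\sigma_V$ is small. The last function is the algebraic inverse of Eq. (9), which is what is later used to clamp visible units to target activation probabilities.

```python
import numpy as np

def lif_rate(I, T=10.0, dt=1e-4, C=1e-12, gL=1e-9, theta=0.1, u_rst=0.0,
             tau_r=4e-3, sigma=2e-11, rng=np.random.default_rng(0)):
    """Euler integration of Eq. (5); returns the mean firing rate [Hz]
    of one neuron driven by a constant current I [A] plus white noise."""
    u, t_last, spikes = u_rst, -np.inf, 0
    for k in range(int(T / dt)):
        t = k * dt
        if t - t_last < tau_r:                       # refractory: clamp to reset
            u = u_rst
            continue
        u += (-gL * u + I) * dt / C + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
        if u >= theta:                               # threshold crossing: spike and reset
            spikes, t_last, u = spikes + 1, t, u_rst
    return spikes / T

def fit_beta_gamma(currents, rates, tau_r):
    """Calibration of Eq. (9): since 1/nu(I) - tau_r = exp(-beta*I)/gamma,
    regressing log(1/rate - tau_r) on I gives slope -beta and intercept -log(gamma)."""
    I, r = np.asarray(currents, float), np.asarray(rates, float)
    ok = (r > 0) & (1.0 / r > tau_r)                 # keep points where the log is defined
    slope, intercept = np.polyfit(I[ok], np.log(1.0 / r[ok] - tau_r), 1)
    return -slope, np.exp(-intercept)                # beta, gamma

def clamping_current(p, beta, gamma, tau_r):
    """Inverse of Eq. (9): current f such that nu(f)*tau_r = p."""
    p = np.asarray(p, float)
    return np.log(p / (gamma * tau_r * (1.0 - p))) / beta

currents = np.linspace(0.0, 3e-10, 7)                # u0 = I/gL swept from 0 to 0.3 V
rates = np.array([lif_rate(I) for I in currents])
beta, gamma = fit_beta_gamma(currents, rates, tau_r=4e-3)
```

With the fit in hand, RBM weights and biases map onto synaptic charges and bias currents through the factor $\beta^{-1}$, as stated in the text.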
With a small θ−u σV , the transfer function of a neuron can be fitted to   1 exp(−Iβ) −1 ν(I) = 1+ , (9) τr γτr where β and γ are the parameters to be fitted. The choice (8) 4 Neural mismatch can cause β and γ to differ from neuron to neuron. From Eq. (9) and the linearity of the postsynaptic currents I(t) in the weights, it is clear that this type of mismatch can be compensated by scaling the synaptic weights and biases accordingly. The calibration of the parameters γ and β quantitatively relate the spiking neural network’s parameters to the RBM. In practice, this calibration step is only necessary for mapping pre-trained parameters of the RBM onto the spiking neural network. Although we estimated the parameters of software simulated I&F neurons, parameter estimation based on firing rate measurements were shown to be an accurate and reliable method for VLSI I&F neurons as well [43]. of the neuron model described in Eq. (5) is not critical for neural sampling: A relationship that is qualitatively similar to Eq. (8) holds for neurons with a rigid (reflective) lower boundary [21] which is common in VLSI neurons, and for I&F neurons with conductance-based synapses [46]. This result also shows that synaptic weights qvi , qhj , which have the units of charge are related to the RBM weights Wij by a factor β −1 . To relate the neural activity to the Boltzmann distribution, Eq. (4), each neuron is associated to a binary random variable which is assumed to take the value 1 for a duration τr after the neuron has spiked, and zero otherwise, similarly to [7]. The relation between the random vector and the I&F neurons’ spiking activity is illustrated in Fig. 3. 2.2 Calibration Protocol. In order to transfer the parameters from the probability distribution Eq. (4) to those of the I&F neurons, the parameters γ, β in Eq. (9) need to be fitted. An estimate of a neuron’s transfer function can be obtained by computing its spike rate when injected with different values of constant inputs I. The refractory period τr is the inverse of the maximum firing rate of the neuron, so it can be easily measured by measuring the spike rate for very high input current I. Once τr is known, the parameter estimation can be cast into a simple linear regression problem by fitting log(ρ(i)−1 −τr ) with βI +log(γ). Fig. 2 shows the transfer curve when τr = 0 ms, which is approximately exponential in agreement with Eq. (1). The shape of the transfer curse is strongly dependent on the noise amplitude. In the absence of noise, the transfer curve is a sharp threshold function, which softens as the amplitude of the noise is increased (Fig. 1). As a result, both parameters γ and β are dependent on the variance of the input currents from other neurons I(t). Since βq = w, the effect of the fluctuations on the network is similar to scaling the synaptic weights and the biases which can be problematic. However, by selecting a large enough noise amplitude σ and a slow enough input synapse time constant, the fluctuations due to the background input are much larger than the fluctuations due to the inputs. In this case, β and γ remain approximately constant during the sampling. Validation of Neural Sampling using I&F neurons The I&F neuron verifies Eq. (1) only approximately, and the PSP model is different than the one of Eq. (3). Therefore, the following two important questions naturally arise: how accurately does the I&F neuron-based sampler outlined above sample from a target Boltzmann distribution? 
How well does it perform in comparison to an exact sampler, such as the Gibbs sampler? To answer these questions we sample from several neural RBM consisting of 5 visible and 5 hidden units for randomly drawn weight and bias parameters. At these small dimensions, the probabilities associated to all possible values of the random vector z can be computed exactly. These probabilities are then compared to those obtained through the histogram constructed with the sampled events. To construct this histogram, each spike was extended to form a box of length τr (as illustrated in Fig. 3), the spiking activity was sampled at 1 kHz, and the occurrences of all the possible 210 states of the random vector z were counted. We added 1 to the number of occurrences of each state to avoid zero probabilities. A common measure of similarity between two distributions is the Kullback-Leibler (KL) divergence: D(p||q) = X i 5 pi log pi . qi νbias σ Mean firing rate of bias Poisson spike train Noise amplitude β γ τr τsyn τbr gL urst C θ W bv , bh Nv , Nh Exponential factor (fit) Baseline firing rate (fit) Refractory period Time constant of recurrent, and bias synapses. “Burn-in” time of the neural sampling Leak conductance Reset Potential Membrane capacitance Firing threshold RBM weight matrix (∈ RNv ×Nh ) RBM bias for layer v and h Number of visible and hidden units in the RBM Nc 2T Number of class label units Epoch duration Tsim Simulation time τST DP η Learning time window Learning rate Table 1 6 all figures all figures, except Fig. 1 Fig. 1 (left) Fig. 1 (right) Fig. 1 (bottom) all figures all figures all figures all figures all figures all figures all figures all figures all figures Fig. 4 Fig. 4 Fig. 4 Fig. 8,7 Fig. 9 Fig. 8,7,9 Fig. 4, 8,7 Fig. 9 Fig. 2 Fig. 4 Fig. 8 Fig. 9 Fig. 7 (testing) Fig. 7 (learning) Fig. 8 standard CD event-driven CD 1000 Hz 3 · 10−11 A 2 · 10−11 A 3 · 10−10 A 1 · 10−9 A 2.044 · 109 A−1 8808 Hz 4 ms 4 ms 10 ms 1 nS 0V 10−12 F 100 mV N (−.75, 1.5) N (−1.5, .5) 5, 5 824, 500 834, 500 40 100 ms 300 ms 5s 1000 s .2 s .85 s 1.0 s 2000 s 4 ms .1 · 10−2 3.2 · 10−2 If the distributions p and q are identical then D(p||q) = 0, otherwise D(p||q) > 0. The average KL divergence for 48 randomly drawn distributions after 1000 s of sampling time was 0.058 . This result is not significantly different if the abstract neuron model Eq. (1) with alpha PSPs is used, and in both cases the KL divergence did not tend to zero as the number of samples increased. The only difference in the latter neuron model compared to the abstract neuron model of [7], which tends to zero when sampling time tends to infinity, is the PSP model. This indicates that the discrepancy is largely due to the use of alpha-PSPs, rather than the approximation of Eq. (1) with I&F neurons. The standard sampling procedure used in RBMs is Gibbs Sampling: the neurons in the visible layer are sampled simultaneously given the activities of the hidden neurons, then the hidden neurons are sampled given the activities of the visible neurons. This procedure is repeated a large number of times. For comparison with the neural sampler, the duration of one Gibbs sampling iteration is identified with one refractory period τr = 4 ms. At this scale, we observe that the speed of convergence of the neural sampler is similar to that of the Gibbs sampler up to 104 ms, after which the neural sampler plateaus above the D(p||q) = 10−2 line. 
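The comparison just described is easy to script. The sketch below is our own reconstruction of the procedure (the per-neuron spike-time arrays passed as `spike_times` are an assumed data format): enumerate the exact distribution of Eq. (4) for the 10 units, rebuild the histogram by holding each binary variable at 1 for $\tau_r$ after a spike and reading the state out every 1 ms with add-one smoothing, and evaluate the KL divergence.

```python
import numpy as np
from itertools import product

def exact_boltzmann(W, b):
    """Exact p(z) of Eq. (4) for a small network. W is the full symmetric
    coupling matrix (zero diagonal; zero within-layer blocks for an RBM)."""
    states = np.array(list(product([0, 1], repeat=len(b))), dtype=float)
    energy = -0.5 * np.einsum('si,ij,sj->s', states, W, states) - states @ b
    p = np.exp(-energy)
    return states, p / p.sum()

def sampled_histogram(spike_times, n, t_end, tau_r=4e-3, dt=1e-3):
    """Histogram over the 2**n binary states: z_i(t) = 1 if neuron i spiked
    less than tau_r ago; states are read out every dt, with add-one smoothing."""
    counts = np.ones(2 ** n)
    for t in np.arange(0.0, t_end, dt):
        z = []
        for s in spike_times:
            s = np.asarray(s, float)
            z.append(int(np.any((t >= s) & (t - s < tau_r))))
        counts[int("".join(map(str, z)), 2)] += 1
    return counts / counts.sum()

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i log(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))
```

The state-index convention (first neuron as the most significant bit) matches the enumeration order of `itertools.product`, so the two distributions can be compared element by element.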
Despite the approximations in the neuron model and the synapse model, these results show that in RBMs of this size, the neural sampler consisting of I&F neurons sample from a distribution that has the same KL divergence as the distribution obtained after 104 iterations of Gibbs sampling, which is more than the typical number of iterations used for MNIST hand-written digit tasks in the literature [27]. Figure 2: Transfer function of I&F neurons driven by background white noise (Eq. (5)). We measure the firing rate of the neuron as a function of a constant current injection to estimate ρ(u0 ), where for constant Iinj , u0 = Iinj /gL . (Top) The transfer function of noisy I&F neurons in the absence of refractory period (ρ(u) = r(u), circles). We observe that ρ is approximately exponential over a wide range of inputs, and therefore compatible with neural sampling. Crosses show the transfer curve of neurons implementing the abstract neuron Eq. (1), exactly. (Bottom) With an absolute refractory period the transfer function approximates the sigmoid function. The firing rate saturates at 250 Hz due to the refractory period chosen for the neuron. . 2.3 Neural Architecture for Learning a Model of MNIST Hand-Written Digits We test the performance of the neural RBM in a digit recognition task. We use the MNIST database, whose data samples consist of centered, gray-scale, 28×28-pixel images of hand-written digits 0 to 9 [37]. The neural RBM’s network architecture consisted of 2 layers, as illustrated in Fig. 5. The visible layer was partitioned into 784 sensory neurons 7 Figure 4: (Left) Example probability distribution obtained by neural sampling of the RBM of Fig. 3. The bars are marginal probabilities computed by counting the events [00000], [00001], · · · , [11110], [11111], respectively. PN S is the distribution obtained by neural sampling and P is the exact probability distribution computed with Eq. (4). (Right) The degree to which the sampled distribution resembles the target distribution is quantified by the KL divergence measured across 48 different distributions, and the shadings correspond to its standard deviation. This plot also shows the KL divergence of the target distribution sampled by Gibbs Sampling (PGibbs ), which is the common choice for RBMs. For comparison with the neural sampler, we identified the duration of one Gibbs sampling iteration with one refractory period τr = 4 ms. The plot shows that up to 104 ms, the two methods are comparable. After this, the KL divergence of the neural sampler tends to a plateau due to the fact that neural sampling with our I&F neural network is approximate. In both figures, PN S,Abstract refers to the marginal probability distribution obtained by using the abstract neuron model Eq. (1). In this case, the KL divergence is not significantly different from the one obtained with the I&F neuron model-based sampler. 8 Hidden (500 neurons) Visible (784 neurons) Class (40 neurons) Data Layer 784+40 neurons Figure 5: The RBM network consists of a visible and a hidden layer. The visible layer is partitioned into 784 sensory neurons (vd ) and 40 class label neurons (vc ) for supervised learning. During data presentation, the activities in the visible layer are driven by a data layer d, consisting of a digit and its label (1 neuron per label). In the RBM, the weight matrix between the visible layer and the hidden layer is symmetric. Figure 3: Neural Sampling in an RBM consisting of 10 stochastic I&F neurons, with 5 neurons in each layer. 
Each neuron is associated to a binary random variable which take values 1 during a refractory period τr after the neuron has spiked (gray shadings). The variables are sampled at 1 kHz to produce binary vectors that correspond to samples of the joint distribution p(z). In this figure, only the membrane potential and the samples produced by the first 5 neurons are shown. The vectors inside the brackets are example samples of the marginalized distribution p(z1 , z2 , z3 , z4 , z5 ) produced at the time indicated by the vertical lines. In the RBM, there are no recurrent connections within a layer. (vd ) and 40 class label neurons (vc ) for supervised learning. The pixel values of the digits were discretized to 2 values, with low intensity pixel values (p <= .5) mapped to 10−5 and high intensity values (p > .5) mapped to 0.98. A neuron i in d stimulated each neuron i in layer v, with synaptic currents fi such that P (vi = 1) = ν(fi )τr = pi , where 0 ≤ pi ≤ 1 is the value of pixel i. The value fi is calculated by invertingthe transfer  function of the neus −1 ron: fi = ν (s) = log γ−sγτr β −1 . Using this RBM, classification is performed by choosing the most likely label given the input, under the learned model. This equals to choosing the population of class neurons associated to the same label that has the highest population firing rate. To reconstruct a digit from a class label, the class neurons belonging to a given digit are clamped to a high firing rate. For testing the discrimination performance of an energybased model such as the RBM, it is common to compute 9 the free-energy F (vc ) of the class units [26], defined as: X exp(−F (vc )) = exp(−E(vd , vc , h)), (10) and reconstruction phases. In practice, when the data set is very large, weight updates are calculated using a subset of data samples, or “mini-batches”. The above rule can then be interpreted as a stochastic gradient descent [49]. Although the convergence properties of the CD rule are the subject of continuing investigation, extensive software simulations show that the rule often converges to very good solutions [29]. The main result of this paper is an online variation of the CD rule for implementation in neuromorphic hardware. By virtue of neural sampling the spikes generated from the visible and hidden units can be used to compute the statistics of the probability distributions online (further details on neural sampling in the Materials and Methods Sec. 2.1). Therefore a possible neural mechanism for implementing CD is to use synapses whose weights are governed by synaptic plasticity. Because the spikes cause the weight to update in an online, and asynchronous fashion, we refer to this rule as event-driven CD. The weight update in event-driven CD is a modulated, pair-based STDP rule: vd ,h and selecting vc such that the free-energy is minimized. The spiking neural network is simulated using the BRIAN simulator [24]. All the parameters used in the simulations are provided in Tab. 1. 3 3.1 Results Event-Driven Contrastive Divergence A Restricted Boltzmann Machine (RBM) is a stochastic neural network consisting of two symmetrically interconnected layers composed of neuron-like units - a set of visible units v and a set of hidden units h, but has no connections within a layer. The training of RBMs commonly proceeds in two phases. At first the states of the visible units are clamped to a given vector from the training set, then the states of the hidden units are sampled. 
In a second “reconstruction” phase, the network is allowed to run freely. Using the statistics collected during sampling, the weights are updated in a way that they maximize the likelihood of the data [29]. Collecting equilibrium statistics over the data distribution in the reconstruction phase is often computationally prohibitive. The CD algorithm has been proposed to mitigate this [29, 28]: the reconstruction of the visible units activity is achieved by sampling them conditioned on the values of the hidden units (Fig. 6). This procedure can be repeated k times (the rule is then called CDk ), but relatively good convergence is obtained for the equilibrium distribution even for one iteration. The CD learning rule is summarized as follows: ∆wij = (hvi hj idata − hvi hj irecon ), d qij = g(t) STDPij (vi (t), hj (t)) dt (12) where g(t) ∈ R is a zero-mean global gating signal controlling the data vs. reconstruction phase, qij is the weight of the synapse and vi (t) and hj (t) refer to the spike trains of neurons vi and hj , respectively, which are represented by a sum of Dirac delta pulses P centered on the respective hj (t) = k∈Spi δ(t − tk ), P spike times: vi (t) = δ(t−t ) where Sp and Sp are the set of the spike i j k k∈Spj times of the visible neuron i and hidden neuron j, respectively and δ(t) = 1 if t = 0 and 0 otherwise. As opposed to the standard CD rule, weights are updated after every occurrence of a pre-synaptic and post-synaptic event. While this online approach slightly differentiates it from standard CD, it is integral to a spiking neuromorphic framework where the data samples and weight updates cannot be stored. The weight update is governed by a symmetric STDP rule with a symmetric temporal window (11) where vi and hj are the activities in the visible and hidden layers, respectively. This rule can be interpreted as a difference of Hebbian and anti-Hebbian learning rules between the visible and hidden neurons sampled in the data 10 (b) (a) q (data) ... tpost-tpre (reconstruction) q Figure 6: The standard Contrastive Divergence (CD)k procedure, compared to event-driven CD. (a) In standard CD, learning proceeds iteratively by sampling in “construction” and “reconstruction” phases [29], which is impractical in a continuoustime dynamical system. (b) We propose a spiking neural sampling architecture that folds these updates on a continuous time dimension through the recurrent activity of the network. The synaptic weight update follows a STDP rule modulated by a zero mean signal g(t). This signal switches the behavior of the synapse from Long-Term Potentiation (LTP) to Long-Term Depression (LTD), and partitions the training into two phases analogous to those of the original CD rule. The spikes cause microscopic weight modifications, which on average behave as the macroscopic CD weight update. For this reason, the learning rule is referred to as event-driven CD. K(t) = K(−t), ∀t: STDPij (vi (t), hj (t)) =vi (t)Ahj (t) + hj (t)Avi (t), Z t dsK(s − t)hj (s), Ahj (t) =A −∞ t periods of the neurons [7]. In this work, we used the following modulation function g(t): (13) Z Avi (t) =A dsK(s − t)vi (s),  if mod(t, 2T ) ∈ (τbr , T )  1 g(t) = −1 if mod(t, 2T ) ∈ (T + τbr , 2T ) ,   0 otherwise −∞ with A > 0 defining the magnitude of the weight updates. In our implementation, updates are additive and weights can change polarity. Pairwise STDP with a global modulatory signal approximates CD. The modulatory signal g(t) switches the behavior of the synapse from LTP to LTD (i.e. 
Hebbian to Anti-Hebbian). The temporal average of g(t) must vanish to balance LTP and LTD, and must vary on much slower time scales than the typical times scale of the network dynamics, denoted τbr , so that the network samples from its stationary distribution when the weights are updated. The time constant τbr corresponds to a “burn-in” time of MCMC sampling and depends on the overall network dynamics and cannot be computed in the general case. However, it is reasonable to assume τbr to be in the order of a few refractory (14) where mod is the modulo function and T is a time interval. The data is presented during the time intervals (2iT, (2i + 1)T ), where i is a positive integer. With the g(t) defined above, no weight update is undertaken during a fixed period of time τbr . This allows us to neglect the transients after the stimulus is turned on and off (respectively in the beginning of the data and reconstruction phases). In this case and under further assumptions discussed below, the event-driven CD rule can be directly compared with standard CD as we now demonstrate. The average weight up11 R0 br with η := 2A T −τ 2T −∞ d∆K(∆). Similar arguments apply to the averages in the time interval tr : date during (0, 2T ) is: d qij i(0,2T ) =Cij + Rij , Z 0 dt d∆K(∆)hvi (t)hj (t + ∆)itr = ηv̄i− h̄− R = 2A ij j . T − τbr −∞ (hvi (t)Ahj (t)itd + hhj (t)Avi (t)itd ) Cij = 2T − − T − τbr Rij = − (hvi (t)Ahj (t)itr + hhj (t)Avi (t)itr ), with v̄i h̄j := hvi (t)itr hhj (t + ∆)itr . The average update 2T in (0, 2T ) then becomes: (15)   d + + − − q i = η v̄ h h̄ − v̄ h̄ . (18) ij (0,2T ) where td = (τbr , T ) and tr = (T + τbr , 2T ) denote the i j i j dt intervals during the positive and negative phases of g(t), Rb 1 According to Eq. (17), any symmetric temporal window and h·i(a,b) = b−a a dt·. that is much shorter than T can be used. For simplicWe write the first average in Cij as follows: ity, we choose an exponential temporal window K(∆) = Z T Z t exp(∆/τST DP ) with decay rate τST DP  T (Fig. 6b). In 1 dsK(s − t)vi (t)hj (s), this case, η = 2A T −τbr τST DP . dt hvi (t)Ahj (t)itd = A 2T T − τbr τbr −∞ The modulatory function g(t) partitions the training into Z T Z 0 1 =A dt d∆K(∆)hvi (t)hj (t + ∆), several epochs of duration 2T . Each epoch consists of a T − τbr τbr −∞ LTP phase during which the data is presented (construcZ 0 tion), followed by a free-running LTD phase (reconstruc=A d∆K(∆)hvi (t)hj (t + ∆)itd . tion). The weights are updated asynchronously during the −∞ time interval in which the neural sampling proceeds, and (16) Eq. (18) tells us that its average resembles Eq. (11). HowIf the spike times are uncorrelated the temporal averages ever, it is different in two ways: the averages are taken over become a product of the average firing rates of a pair of one data and reconstruction phase rather than a mini-batch visible and hidden neurons [23]: of data samples and their reconstructions; and more importantly, the synaptic weights are updated during the data and hvi (t)hj (t + ∆)itd = hvi (t)itd hhj (t + ∆)itd =: v̄i+ h̄+ j . the reconstruction phase, whereas in the CD rule, updates are carried out at the end of the reconstruction phase. In the If we choose a temporal window that is much smaller than derivation above the effect of the weight modification on the T , and since the network activity is assumed to be stationnetwork during an epoch 2T was neglected for the sake of ary in the interval (τbr , T ), we can write (up to a negligible mathematical tractability. 
In the following, we verify that error [32]) despite this approximation, the event-driven CD performs nearly as well as standard CD in a commonly used benchZ 0 mark task. hvi (t)Ahj (t)itd = Av̄i+ h̄+ d∆K(∆). (17) j h −∞ 3.2 In the uncorrelated case, the second term in Cij contributes the same amount, leading to: Cij = Learning a generative model of hand-written digits We train the RBM to learn a generative model of the MNIST handwritten digits using event-driven CD (see ηv̄i+ h̄+ j . 12 Sec. 2.3 for details). For training, 20000 digits selected randomly from a training set consisting of 10000 digits were presented in sequence, with an equal number of samples for each digit. The raster plots in Fig. 8 show the spiking activity of each layer before and after learning for epochs of duration 100 ms. The top panel shows the population-averaged weight. After training, the sum of the upwards and downward excursions of the average weight is much smaller than before training, because the learning is near convergence. The second panel shows the value of the modulatory signal g(t). The third panel shows the input current (Id ) and the current caused by the recurrent couplings (Ih ). Two methods to estimate the overall classification performance of the neural RBM can be used. The first is by neural sampling: the visible layer is clamped to the digit only, and the network is run for 1s. The known label is then compared with the positions of the group of class neurons that had the highest population rate. The second method is by minimizing free-energy: the neural RBMs parameters are extracted, and for each data sample, the class neurons with the lowest free-energy (See Materials and Methods) is compared to the known label. In both cases, recognition was tested for 1000 data samples that were not used during the training. The results are summarized in Fig. 7. As a reference we provide the best performance achieved using the standard CD and one unit per class label (Nc = 10) (Fig. 7, table row 1), 93.6%. By mapping the learned parameters to the neural RBM the recognition accuracy reached 92.6%. When training a neural RBM of I&F neurons using event-driven CD, the recognition result was 91.9% (Fig. 7, table row 2). The performance of this RBM obtained by minimizing its free-energy was 90.8%. The learned parameters performed well for classification using the free-energy calculation which suggests that the network learned a model that is consistent with the mathematical description of the RBM. In an energy-based model like the RBM the free-energy minimization should give the upper bound on the discrimination performance [26]. For this reason, the fact that Neural Sampling Recognition Accuracy Sampling Duration [s] Standard CD Event-driven CD Event-driven CD (8 bits) Event-driven CD (5 bits) Accuracy Neural Sampler 92.6% 91.9% 91.6% 89.4% Accuracy Free-energy 93.6% 90.8% 91.0% 89.2% Figure 7: To test recognition accuracy, the RBMs are sampled using the I&F neuron-based sampler for up to 1 s. The classification is read out by identifying the group of class label neurons that had the highest activity. This experiment is run for RBM parameter sets obtained by standard CD (black, CD) and event-driven CD (green, eCD). To test the robustness of the RBM, it was run with parameters obtained by event-driven CD discretized to 8 and 5 bits. In all scenarios, the accuracy after 50 ms of sampling was above 80% and after 1 s the accuracies typically reached their peak at around 91.9%. 
The dashed horizontal lines show the recognition accuracy obtained by minimizing the free-energy (see text). The fact that the eCD curve (solid green) surpasses its free-energy minimization performance suggests that the RBM learns a model that is tailored to the I&F spiking neural network. 13 them, and infer a digit by combining partial evidence. These features are clearly illustrated in the following experiment (Fig. 9). First the digit 3 is presented (i.e. layer vd is driven by layer d) and the correct class label in vc activated. Second, the neurons associated to class label 5 are clamped, and the network generated its learned version of the digit. Third, the right-half part of a digit 8 is presented, and the class neurons are stimulated such that only 3 or 6 are able to activate (the other class neurons are inhibited, indicated by the gray shading). Because the stimulus is inconsistent with 6, the network settled to 3 and reconstructed the left part of the digit. The latter part of the experiment illustrates the integration of information between several partially specified cues, which is of interest for solving sensorimotor transformation or multi-modal sensory cue integration problems [15, 18, 10]. This feature has been used for auditory-visual sensory fusion in a spiking Deep Belief Network (DBN) model [44]. There, the authors trained a DBN with visual and auditory data, which learned to associate the two sensory modalities, very similarly to how class labels and visual data are associated in our architecture. Their network was able to resolve a similar ambiguity as in our experiment in Fig. 9, but using auditory inputs instead of a class label. In the digit generation mode, the trained network had a tendency to be globally bistable, whereby the layer vd completely deactivated layer h. Since all the interactions between vd and vc take place through the hidden layer, vc could not reconstruct the digit. To avoid this, we added two populations of I&F neurons that were wired to layers v and h, respectively. The parameters of these neurons and their couplings were tuned such that each layer was strongly excited when it’s average firing rate fell below 5 Hz. the recognition accuracy is higher when sampling as opposed to using the free-energy method may appear puzzling. However, this is possible because the neural RBM does not exactly sample from the Boltzmann distribution, as explained in Sec. 2.2. This suggests that event-driven CD compensates for the discrepancy between the distribution sampled by the neural RBM and the Boltzmann distribution, by learning a model that is tailored to the spiking neural network. Excessively long training durations can be impractical for real-time neuromorphic systems. Fortunately, the learning using event-driven CD is fast: Compared to the off-line RBM training (250000 presentations, in mini-batches of 100 samples) the event-driven CD training succeeded with a smaller number of data presentations (20000), which corresponded to 2000 s of simulated time. This suggests that the training durations are achievable for real-time neuromorphic systems. The choice of the number of class neurons Nc . Eventdriven CD underperformed in the case of 1 neuron per class label (Nc = 10), which is the common choice for standard CD and Gibbs sampling. This is because a single neuron firing at its maximum rate of 250 Hz cannot efficiently drive the rest of the network without tending to induce spike-tospike correlations (e.g. 
synchrony), which is incompatible with the assumptions made for sampling with I&F neurons and event-driven CD. As a consequence, the generative properties of the neural RBM degrade. This problem is avoided by using several neurons per class label (in our case four neurons per class label) because the synaptic weight can be much lower to achieve the same effect, resulting in smaller spike-to-spike correlations. 3.3 Neural parameters with finite precision. In hardware systems, the parameters related to the weights and biases cannot be set with floating-point precision, as can be done in a digital computer. In current neuromorphic implementations the synaptic weights can be configured at precisions of about 8 bits [56]. We characterize the impact of finiteprecision synaptic weights on performance by discretizing Generative properties of the RBM We test the neural RBM as a generative model of the MNIST dataset of handwritten digits, using parameters obtained by running the event-driven CD. In the context of the handwritten digit task, the RBM’s generative property enables it to classify digits, generate 14 Figure 8: The spiking neural network learns a generative model of the MNIST dataset using the event-driven CD procedure. (a) Learning curve, shown here up to 10000 samples. (b) Details of the training procedure, before and after training (20000 samples). During the first half of each .1 s epoch, the visible layer v is driven by the sensory layer. During this phase, the gating variable g is 1, meaning that the synapses undergo LTP. During the second half of each epoch, the sensory stimulus is removed, and g is set to −1, so the synapses undergo LTD. The top panels of both figures show the mean of the entries of the weight matrix. The second panel shows the values of the modulatory signal g(t). The third panel shows the synaptic currents of a visible neuron, where Ih is caused by the feedback from the hidden and the bias, and Id is the data. The timing of the clamping (Id ) and g differ due to an interval τbr where no weight update is undertaken to avoid the transients (See Materials and Methods). Before learning and during the reconstruction phase, the activity of the visible layer is random. But as learning progresses, the activity in the visible layer reflects the presented data in the reconstruction phase. This is very well visible in the layer class label neurons vc , whose activity persists after the sensory stimulus is removed. Although the firing rates of the hidden layer neurons before training is high (average 113 Hz), this is only a reflection of the initial conditions for the recurrent couplings W . In fact, at the end of the training, the firing rates in both layers becomes much sparser (average 9.31 Hz). 15 Figure 9: The recurrent structure of the network allows it to classify, reconstruct and infer from partial evidence. (a) Raster plot of an experiment illustrating these features. Before time 0s, the neural RBM runs freely, with no input. Due to the stochasticity in the network, the activity wanders from attractor to attractor. At time 0s, the digit 3 is presented (i.e. 
layer vd is driven by d), activating the correct class label in vc ; At time t = .3 s, the class neurons associated to 5 are clamped to high activity and the rest of the class label neurons are strongly inhibited, driving the network to reconstruct its version of the digit in layer vd ; At time t = .6 s, the right-half part of a digit 8 is presented, and the class neurons are stimulated such that only 3 or 6 can activate (all others are strongly inhibited as indicated by the gray shading). Because the stimulus is inconsistent with 6, the network settles to a 3 and attempts to reconstruct it. The top figures shows the digits reconstructed in layer vd . (b) Digits 0 − 9, reconstructed in the same manner. Each row corresponds to a different, independent run. (c) Population firing rate of the experiment presented in (a). During recognition, the network typically reaches equilibrium after about 10τr = 40 ms (black bar). 16 the weight and bias parameters to 8 bits and 5 bits. The set of possible weights were spaced uniformly in the interval (µ − 4.5σ, µ + 4.5σ), where µ, σ are the mean and the standard deviation of the parameters across the network, respectively. The classification performance of MNIST digits degraded gracefully. In the 8 bit case, it degrades only slightly to 91.6%, but in the case of 5 bits, it degrades more substantially to 89.4%. In both cases, the RBM still retains its discriminative power, which is encouraging for implementation in hardware neuromorphic systems. 4 chine learning tasks [4]. The ability to implement RBMs with spiking neurons and train then using event-based CD paves the way towards on-line training of DBNs of spiking neurons [27]. We chose the MNIST handwritten digit task as a benchmark for testing our model. When the RBM was trained with standard CD, it could recognize up to 926 out of 1000 of out-of-training samples. The MNIST handwritten digits recognition task was previously shown in a digital neuromorphic chip [2], which performed at 89% accuracy, and in a software simulated visual cortex model [19]. However, both implementations were configured using weights trained off-line. A recent article showed the mapping of off-line trained DBNs onto spiking neural network [44]. Their results demonstrated hand-written digit recognition using neuromorphic event-based sensors as a source of input spikes. Their performance reached up to 94.1% using leaky I&F neurons. The use of off-line CD combined with an additional layer explains to a large extent their better performance compared to ours. Our work extends [44] by demonstrating an on-line training using synaptic plasticity, testing its robustness to finite weight precision, and providing an interpretation of spiking activity in terms of neural sampling. To achieve the computations necessary for sampling from the RBM, we have used the neural sampling framework [20], where each spike is interpreted as a sample of an underlying probability distribution. [7] proved that abstract neuron models consistent with the behavior of biological spiking neurons can perform MCMC, and have applied it to a basic learning task in a fully visible Boltzmann Machine. We extended the neural sampling framework in three ways: First, we identified the conditions under which a dynamical system consisting of I&F neurons can perform neural sampling; Second, we verified that the sampling of RBMs was robust to finite-precision parameters; Third, we demonstrated learning in a Boltzmann Machine with hidden units using STDP synapses. 
In the neural sampling framework, neurons behave stochastically. This behavior can be achieved in I&F neurons using noisy input currents, created by a Poisson spike Discussion Neuromorphic systems are promising alternatives for largescale implementations of RBMs and deep networks, but the common procedure used to train such networks, Contrastive Divergence (CD), involves iterative, discrete-time updates that do not straightforwardly map on a neural substrate. We solve this problem in the context of the RBM with a spiking neural network model that uses the recurrent network dynamics to compute these updates in a continuous-time fashion. We argue that the recurrent activity coupled with STDP dynamics implements an event-driven variant of CD. Using event-driven CD, the network connectivity remains unchanged during training and testing, enabling the system to learn in an on-line fashion, while being able to carry out functionally relevant tasks such as recognition, data generation and cue integration. The CD algorithm can be used to learn the parameters of probability distributions other than the Boltzmann distribution (even those without any symmetry assumptions). Our choice for the RBM, whose underlying probability distribution is a special case of the Boltzmann distribution, is motivated by the following facts: They are universal approximators of discrete distributions [36]; the conditions under which a spiking neural circuit can naturally perform MCMC sampling of a Boltzmann distribution were previously studied [40, 7]; and RBMs form the building blocks of many deep learning models such as DBNs, which achieve state-of-the-art performance in many ma17 each neuron. Sharing one synapse circuit per pair of neurons can solve this problem. This may be impractical due to the very large number of synapse circuits in the network, but may be less problematic when using Resistive RandomAccess Memorys (RRAMs) (also called memristors) crossbar arrays to emulate synapses [12, 34, 51].RRAM are a new class of nanoscale devices whose current-voltage relationship depends on the history of other electrical quantities [53], and so act like programmable resistors. Because they can conduct currents in both directions, one RRAM circuit can be shared between a pair of neurons. A second problem is the number of recurrent connections. Even our RBM of modest dimensions involved almost 2 million synapses, which is impractical in terms of bandwidth and weight storage. Even if a very high number of weights are zero, the connections between each pair of neurons must exist in order for a synapse to learn such weights. One possible solution is to impose sparse connectivity between the layers [54, 41]. This remains to be tested in our model. train. Spike trains with Poisson-like statistics can be generated with no additional source of noise, for example by the following mechanisms: balanced excitatory and inhibitory connections [55], finite-size effects in a large network, and neural mismatch [1]. The latter mechanism is particularly appealing, because it benefits from fabrication mismatch and operating noise inherent to neuromorphic implementations [9]. Other groups have also proposed to use I&F neuron models for computing the Boltzmann distribution. [40] have shown that noisy I&F neurons’ activation function is approximately sigmoidal as required by the Boltzmann machine, and have devised a scheme whereby a global inhibitory rhythm drives the network to generate samples of the Boltzmann distribution. 
[44] have demonstrated a deep belief network of I&F neurons that was trained offline, using standard CD and tested it using the MNIST database. Independently and simultaneously to this work, [46] demonstrated that conductance-based I&F neurons in a noisy environment are compatible with neural sampling as described in [7]. Similarly, [46] find that the choice of non-rectangular PSPs and the approximations made by the I&F neurons are not critical to the performance of the neural sampler. Our work extends all of those above by providing an online, STDP-based learning rule to train RBMs sampled using I&F neurons. Outlook: A custom learning rule. Our method combines I&F neurons that perform neural sampling and the CD rule. Although we showed that this leads to a functional model, we do not know whether event-driven CD is optimal in any sense. This is partly due to the fact that CDk is an approximate rule [29], and it is still not entirely understood why it performs so well, despite extensive work in studying its convergence properties [8]. Furthermore, the distribution sampled by the I&F neuron does not exactly correspond to the Boltzmann distribution, and the average weight updates in event-driven CD differ from those of standard CD, because in the latter they are carried out at the end of the reconstruction step. A very attractive alternative is to derive a custom synaptic plasticity rule that minimizes some functionally relevant quantity (such as Kullback-Leibler divergence or Contrastive Divergence), given the encoding of the information in the I&F neuron [16, 6]. A similar idea was recently pursued in [6], where the authors derived a tripletbased synaptic learning rule that minimizes an upper bound of the Kullback-Leibler divergence between the model and Applicability to neuromorphic hardware. Neuromorphic systems are sensible to fabrication mismatch and operating noise. Fortunately, the mismatch in the synaptic weights and the activation function parameters γ and β are not an issue if the biases and the weights are learned, and the functionality of the RBM is robust to small variations in the weights caused by discretization. These two findings are encouraging for neuromorphic implementations of RBMs. However, at least two conceptual problems of the presented RBM architecture must be solved in order to implement such systems on a large-scale. First, the symmetry condition required by the RBM does not necessarily hold. In a neuromorphic device, the symmetry condition is impossible to guarantee if the synapse weights are stored locally at 18 and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, 2010. the data distributions. Interestingly, their rule had a similar global signal that modulates the learning rule, as in eventdriven CD, although the nature of this resemblance remains to be explored. Such custom learning rules can be very beneficial in guiding the design of on-chip plasticity in neuromorphic VLSI and RRAM nanotechnologies, and will be the focus of future research. [6] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565–9575, 2013. Acknowledgments [7] L. Buesing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11):e1002211, 2011. 
Acknowledgments

This work was partially funded by the National Science Foundation (NSF EFRI-1137279), the Office of Naval Research (ONR MURI 14-13-1-0205), and the Swiss National Science Foundation (PA00P2_142058). We thank all the anonymous reviewers of a previous version of this article for their constructive comments.

References

[1] D. Amit and N. Brunel. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7:237–252, 1997.
[2] J. Arthur, P. Merolla, F. Akopyan, R. Alvarez, A. Cassidy, S. Chandra, S. Esser, N. Imam, W. Risk, D. B. D. Rubin, et al. Building block of a programmable neuromorphic substrate: A digital neurosynaptic core. In Neural Networks (IJCNN), The 2012 International Joint Conference on, pages 1–8. IEEE, 2012.
[3] C. Bartolozzi and G. Indiveri. Synaptic dynamics in analog VLSI. Neural Computation, 19(10):2581–2603, Oct 2007.
[4] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[5] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, 2010.
[6] Johanni Brea, Walter Senn, and Jean-Pascal Pfister. Matching recall and storage in sequence learning with spiking neural networks. The Journal of Neuroscience, 33(23):9565–9575, 2013.
[7] L. Buesing, J. Bill, B. Nessler, and W. Maass. Neural dynamics as sampling: A model for stochastic computation in recurrent networks of spiking neurons. PLoS Computational Biology, 7(11):e1002211, 2011.
[8] Miguel A. Carreira-Perpinan and Geoffrey E. Hinton. On contrastive divergence learning. In Artificial Intelligence and Statistics, volume 2005, page 17, 2005.
[9] E. Chicca and S. Fusi. Stochastic synaptic plasticity in deterministic aVLSI networks of spiking neurons. In Frank Rattay, editor, Proceedings of the World Congress on Neuroinformatics, ARGESIM Reports, pages 468–477, Vienna, 2001. ARGESIM/ASIM Verlag.
[10] D. Corneil, D. Sonnleithner, E. Neftci, E. Chicca, M. Cook, G. Indiveri, and R. Douglas. Function approximation with uncertainty propagation in a VLSI spiking neural network. In International Joint Conference on Neural Networks, IJCNN 2012, pages 2990–2996. IEEE, 2012.
[11] D. R. Cox. Renewal Theory, volume 1. Methuen, London, 1962.
[12] Jose M. Cruz-Albrecht, Timothy Derosier, and Narayan Srinivasa. A scalable neural chip with synaptic electronics using CMOS integrated memristors. Nanotechnology, 24(38):384011, 2013.
[13] George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proc. ICASSP, 2013.
[14] S. R. Deiss, R. J. Douglas, and A. M. Whatley. A pulse-coded communications infrastructure for neuromorphic systems. In W. Maass and C. M. Bishop, editors, Pulsed Neural Networks, chapter 6, pages 157–178. MIT Press, 1998.
[15] S. Deneve, P. E. Latham, and A. Pouget. Efficient computation and cue integration with noisy population codes. Nature Neuroscience, 4(8):826–831, 2001.
[16] Sophie Deneve. Bayesian spiking neurons I: Inference. Neural Computation, 20(1):91–117, 2008.
[17] A. Destexhe, Z. F. Mainen, and T. J. Sejnowski. Kinetic models of synaptic transmission. In Methods in Neuronal Modelling, from Ions to Networks, pages 1–25. MIT Press, 1998.
[18] K. Doya, S. Ishii, A. Pouget, and R. P. N. Rao. Bayesian Brain: Probabilistic Approaches to Neural Coding. MIT Press, 2007.
[19] C. Eliasmith, T. C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen. A large-scale model of the functioning brain. Science, 338(6111):1202–1205, 2012.
[20] J. Fiser, P. Berkes, G. Orbán, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences, 14(3):119, 2010.
[21] S. Fusi and M. Mattia. Collective behavior of networks with linear (VLSI) integrate and fire neurons. Neural Computation, 11:633–652, 1999.
[22] Crispin W. Gardiner. Handbook of Stochastic Methods. 2012.
[23] W. Gerstner and W. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[24] D. Goodman and R. Brette. Brian: a simulator for spiking neural networks in Python. Frontiers in Neuroinformatics, 2, 2008.
[25] T. L. Griffiths, N. Chater, C. Kemp, A. Perfors, and J. B. Tenenbaum. Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8):357–364, 2010.
[26] Simon S. Haykin. Neural Networks: A Comprehensive Foundation, 1999.
[27] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[28] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[29] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[30] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud, J. Schemmel, G. Cauwenberghs, J. Arthur, K. Hynna, F. Folowosele, S. Saighi, T. Serrano-Gotarredona, J. Wijekoon, Y. Wang, and K. Boahen. Neuromorphic silicon neuron circuits. Frontiers in Neuroscience, 5:1–23, 2011.
[31] S. Joshi, S. Deiss, M. Arnold, J. Park, T. Yu, and G. Cauwenberghs. Scalable event routing in hierarchical neural array architecture with global synaptic connectivity. In Cellular Nanoscale Networks and Their Applications (CNNA), 2010 12th International Workshop on, pages 1–6. IEEE, 2010.
[32] R. Kempter, W. Gerstner, and J. L. Van Hemmen. Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Computation, 13(12):2709–2741, 2001.
[33] David F. Kerridge. Inaccuracy and inference. Journal of the Royal Statistical Society, Series B (Methodological), pages 184–194, 1961.
[34] Duygu Kuzum, Rakesh G. D. Jeyasingh, Byoungil Lee, and H.-S. Philip Wong. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Letters, 12(5):2179–2186, 2011.
[35] Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209, 2011.
[36] Nicolas Le Roux and Yoshua Bengio. Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 20(6):1631–1649, 2008.
[37] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[38] S.-C. Liu and T. Delbruck. Neuromorphic sensory systems. Current Opinion in Neurobiology, 20(3):288–295, 2010.
[39] C. A. Mead. Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA, 1989.
[40] P. Merolla, T. Ursell, and J. Arthur. The thermodynamic temperature of a rhythmic spiking network. ArXiv e-prints, September 2010.
[41] Joseph F. Murray and Kenneth Kreutz-Delgado. Visual recognition and inference using dynamic overcomplete sparse learning. Neural Computation, 19(9):2301–2352, 2007.
[42] E. Neftci, J. Binas, U. Rutishauser, E. Chicca, G. Indiveri, and R. J. Douglas. Synthesizing cognition in neuromorphic electronic systems. Proceedings of the National Academy of Sciences, 110(37):E3468–E3476, June 2013.
[43] E. Neftci, B. Toth, G. Indiveri, and H. Abarbanel. Dynamic state and parameter estimation applied to neuromorphic systems. Neural Computation, 24(7):1669–1694, July 2012.
[44] P. O'Connor, D. Neil, S.-C. Liu, T. Delbruck, and M. Pfeiffer. Real-time classification and sensor fusion with a spiking deep belief network. Frontiers in Neuroscience, 7(178), 2013.
[45] B. Pedroni, S. Das, E. Neftci, K. Kreutz-Delgado, and G. Cauwenberghs. Neuromorphic adaptations of restricted Boltzmann machines and deep belief networks. In International Joint Conference on Neural Networks, IJCNN 2013, 2013 (accepted).
[46] Mihai A. Petrovici, Johannes Bill, Ilja Bytschok, Johannes Schemmel, and Karlheinz Meier. Stochastic inference with deterministic spiking neurons. arXiv preprint arXiv:1311.3211, 2013.
[47] Hans E. Plesser and Wulfram Gerstner. Noise in integrate-and-fire neurons: From stochastic input to escape rates. Neural Computation, 12(2):367–384, 2000.
[48] A. Renart, P. Song, and X.-J. Wang. Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron, 38:473–485, May 2003.
[49] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[50] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In International Symposium on Circuits and Systems, ISCAS 2010, pages 1947–1950. IEEE, 2010.
[51] Teresa Serrano-Gotarredona, Timothée Masquelier, Themistoklis Prodromakis, Giacomo Indiveri, and Bernabe Linares-Barranco. STDP and STDP variations with memristors for spiking neuromorphic learning systems. Frontiers in Neuroscience, 7, 2013.
[52] R. Silver, K. Boahen, S. Grillner, N. Kopell, and K. L. Olsen. Neurotech for neuroscience: unifying concepts, organizing principles, and emerging tools. J. Neurosci., 27(44):11807, 2007.
[53] Dmitri B. Strukov, Gregory S. Snider, Duncan R. Stewart, and R. Stanley Williams. The missing memristor found. Nature, 453(7191):80–83, 2008.
[54] Yichuan Tang and Chris Eliasmith. Deep networks for robust visual recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1055–1062, 2010.
[55] C. van Vreeswijk and H. Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724–1726, December 1996.
[56] T. Yu and G. Cauwenberghs. Log-domain time-multiplexed realization of dynamical conductance-based synapses. In International Symposium on Circuits and Systems (ISCAS), 2010, pages 2558–2561. IEEE, June 2010.
Tree Notation: an antifragile program notation

Breck Yunits

arXiv:1703.01192v7 [] 23 Oct 2017

Abstract—This paper presents Tree Notation, a new simple, universal syntax. Language designers can invent new programming languages, called Tree Languages, on top of Tree Notation. Tree Languages have a number of advantages over traditional programming languages. We include a Visual Abstract to succinctly display the problem and discovery. Then we describe the problem–the BNF to abstract syntax tree (AST) parse step–and introduce the novel solution we discovered: a new family of 2D programming languages that are written directly as geometric trees.

Breck Yunits is a researcher at Ohayo Computer ([email protected]).

Fig. 1. This Visual Abstract explains the core idea of the paper. This diagram is also the output of a program written in a new Tree Language called Flow.

I. THE PROBLEM

Programming is complicated. Our current 1-dimensional languages (1DLs) add to this complexity. To run 1DL code we first strip whitespace, then transform it to an AST, and then transform that to executable machine code. These intermediate steps have enabled local minimum gains in developer productivity. But programmers lose time and insight due to discrepancies between 1DL code and ASTs.

II. THE SOLUTION: TREE NOTATION

In this paper and the accompanying GitHub ES6 Repo (GER - github.com/breck7/treenotation), we introduce Tree Notation (TN), a new 2D notation. A node in TN maps to an XY point on a 2D plane. You can extend TN to create domain specific languages (DSLs) that don't require a transformation to a discordant AST. These DSLs, called Tree Languages (TLs), are easy to create and can be simple or Turing Complete. TN encodes one data structure, a TreeNode, with two members: a string called line and an optional array of child TreeNodes called children.

TN defines two special characters, Y Increment (YI) and X Increment (XI). YI is "\n" and XI is " ". XI is a space, not a tab. Many TLs also add a Word Increment (WI) to provide a more succinct way of encoding common node types. A comparison quickly illustrates nearly the entirety of the notation:

JSON:

{ "title": "Web Stats",
  "visitors": { "mozilla": 802 } }

Tree Notation:

title Web Stats
visitors
 mozilla 802
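The TreeNode structure and the YI/XI rules above are simple enough to transcribe directly. The following Python sketch is ours (the reference implementation in the GER is ES6), and its convention of attaching over-indented lines to the deepest available node is one possible choice, not something the notation prescribes.

    class TreeNode:
        def __init__(self, line="", children=None):
            self.line = line                   # the node's text content
            self.children = children or []     # ordered child TreeNodes

    def parse(source):
        """Parse Tree Notation text into a list of root TreeNodes."""
        roots, stack = [], []                  # stack[i] is the most recent node at depth i
        for raw in source.split("\n"):         # YI: a newline starts a new node
            depth = len(raw) - len(raw.lstrip(" "))   # XI: one space per nesting level
            depth = min(depth, len(stack))     # clamp over-indented lines (every string parses)
            node = TreeNode(raw.lstrip(" "))
            if depth == 0:
                roots.append(node)
            else:
                stack[depth - 1].children.append(node)
            del stack[depth:]
            stack.append(node)
        return roots

    tree = parse("title Web Stats\nvisitors\n mozilla 802")
    assert tree[1].children[0].line == "mozilla 802"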
III. USEFUL PROPERTIES

A. Simplicity

As shown in Fig. 1, TN simply maps source code to an XY plane, which makes reading code locally and globally easy, and TL programs use far fewer source nodes than equivalent 1DL programs. TLs let programmers write much shorter, perfect programs.

B. Zero parse errors

Parse errors do not exist in TN. Every text string is a valid TN program. TLs implement microparsers that parallelize easily and can creatively handle errors. With most 1DLs, to get from a blank program to a certain valid program in keystroke increments requires stops at invalid programs. With TN all intermediate steps are valid TN programs. A user can edit the nodes of a program at runtime with no risk of breaking the TN parsing of the entire program. "Errors" can still exist at the TL level, but TL microparsers can handle errors independently and even correct errors automatically. TLs cut development and debug time.

C. Semantic diffs

1DLs ignore whitespace and so permit programmers and programs to encode the same object to different programs. This often causes merge conflicts for non-semantic changes. TN does not ignore whitespace and diffs contain only semantic meaning. Just one right way to encode a program.

D. Easy composition

Base notations such as XML,2 JSON,3 and Racket4 can encode multi-lingual programs by complecting language blocks. For example, the JSON program below requires extra nodes to encode Python:

{
  "source": [
    "import hn.np as lz\n",
    "print(\"aaronsw pdm ah as mo gb 28 -3\")"
  ]
}

With TN, the Python code requires no complecting:

source
 import hn.np as lz
 print("aaronsw pdm ah as mo gb 28 -3")

IV. DRAWBACKS

TN is new and tooling support rounds to zero.

TN lacks primitive types. Many 1DLs have notations for common types like floats, and parse directly to efficient in-memory structures. TLs make TN useful. The GER demonstrates how useful TLs can be built with just a few lines of code, which is far less code than one needs to implement even a simple 1DL such as JSON.5

TN is verbose without a TL. A complex node in TN takes multiple lines. Base 1DLs allow multiple nodes per line.

Some developers dislike space-indented notations, some wrongly prefer tabs, and some just have no taste.

V. PREDICTIONS

Prediction 1: no structure will be found that cannot serialize to TN. Some LISP programmers believe all structures are recursive lists (or perhaps "recursive cons"). We believe in The Tree Conjecture (TTC): All structures are trees. For example, a map could serialize to MapTL:

dsl Domain Specific Language
sf San Francisco

Therefore, maps are a type of tree. TTC stands.

Prediction 2: TLs will be found for every popular 1DL. Below is some code in a simple TL, JsonTL:

o s dsl yrt n ma 902

TLs will be found for great 1DLs including C, RISC-V, ES6, and Arc. Some TLs have already been found,6 but their common TN DNA has gone unseen. The immediate benefit of creating a TL for a 1DL is that programs can then be written in a TL editor and compiled to that 1DL.

Prediction 3: Tree Oriented Programming (TOP) will supersede Object Oriented Programming. A new style of programming, TOP, will arise. TOP programmers will frequently reference 2D views of their program.

Prediction 4: The simplest 2D text encodings for neural networks will be TLs. High level TLs will be found to translate machine written programs into understandable trees.

VI. LITERATURE REVIEW

Turing Machines with 2D Tapes have been studied and are known to be Turing Complete.7 A few esoteric 2D programming languages are available online.8 At time of publication, 387 programming languages and notations were searched and many thousands of pages of research and code were read, but TN was not found. The closest someone came to discovering TN, and TLs, was perhaps Mark B. Wells, who wrote a brilliant paper at Los Alamos back in 1972 which predicted the discovery and utility of something like TLs.1

VII. FURTHER RESEARCH

The GER contains a TN parser, grammar notation, Tree Language compiler-compiler (TLCC), VM, and some TLs. Future publications might explore the Tree Notation Grammar, TLCC, or machine-written TL programs.

Fig. 2. Rearranging these fridge magnets is equivalent to editing a TN program. The fridge magnet set that includes parentheses is a poor seller.

REFERENCES

1 Wells, Mark B. "A REVIEW OF TWO-DIMENSIONAL PROGRAMMING LANGUAGES". 1972. http://sci-hub.cc/10.1145/942576.807009 (visited in 2017).
2 Bray, Tim, et al. "Extensible markup language (XML)." World Wide Web Consortium Recommendation REC-xml-19980210. http://www.w3.org/TR/1998/REC-xml-19980210 16 (1998): 16.
3 Crockford, Douglas. ”The application/json media type for javascript object notation (json).” (2006). 4 Tobin-Hochstadt, Sam, et al. ”Languages as libraries.” ACM SIGPLAN Notices. Vol. 46. No. 6. ACM, 2011. 5 Ooms, Jeroen. ”The jsonlite package: A practical and consistent mapping between json data and r objects.” arXiv preprint arXiv:1403.2805 (2014). 6 Roughan, Matthew, and Jonathan Tuke. ”Unravelling graph-exchange file formats.” arXiv preprint arXiv:1503.02781 (2015). 7 Toida, Shunichi. ”Types of Turing Machines.” http://www.cs.odu.edu/∼toida/nerzic/390teched/tm/othertms.html (visited in 2017). 8 Ender, Martin. ”A two-dimensional, hexagonal programming language.” https://github.com/m-ender/hexagony (visited in 2017).
On Dart-Zobel Algorithm for Testing Regular Type Inclusion arXiv:cs/9810001v1 [cs.LO] 1 Oct 1998 Lunjin Lu and John G. Cleary Department of Computer Science University of Waikato Hamilton, New Zealand Phone: +64-838-4627/4378 {lunjin,jcleary}@cs.waikato.ac.nz Abstract. This paper answers open questions about the correctness and the completeness of Dart-Zobel algorithm for testing the inclusion relation between two regular types. We show that the algorithm is incorrect for regular types. We also prove that the algorithm is complete for regular types as well as correct for tuple distributive regular types. Also presented is a simplified version of Dart-Zobel algorithm for tuple distributive regular types. Keywords: type, regular term language, regular term grammar, tuple distributivity 1 Introduction Types are ubiquitous in programming languages [4]. They make programs easier to understand and help detect errors since a large number of errors are type errors. Types have been introduced into logic programming in the forms of type checking and inference [3, 7, 11, 23, 28] or type analysis [22, 29, 15, 17, 12, 20, 5, 21] or typed languages [14, 19, 25, 27]. Recent logic programming systems allow the programmer declare types for predicates and type errors are then detected either at compile time or at run time. Even in early logic programming systems, built-in predicates are usually typed and type checking for these predicates are performed at run time. The reader is referred to [24] for more details on type in logic programming. A type is a possibly infinite set of ground terms with a finite representation. An integral part of any type system is its type language that specifies which sets of ground terms are types. To be useful, types should be closed under intersection, union and complement operations. The decision problems such as the emptiness of a type, inclusion of a type in another, equivalence of two types should be decidable. Regular term languages [13, 6], called regular types, satisfy these constraints and has been used widely used as types [26, 22, 29, 7, 15, 19, 25, 27, 11, 28, 17, 12, 20, 5, 21]. Most type systems use tuple distributive regular types which are strictly less powerful than regular types [26, 22, 29, 15, 19, 25, 27, 11, 28, 17, 12, 20, 5, 21]. Tuple distributive regular types are regular types closed under tuple distributive closure. Intuitively, the tuple distributive closure of a set of terms is the set of all terms constructed recursively by permuting each argument position among all terms that have the same function symbol [28]. Tuple distributive regular types are discussed in section 5. To our knowledge, Dart and Zobel’s work [8] is the only one to present, among others, an inclusion algorithm for regular types with respect to a given set of type definitions without the tuple distributive restriction. Set-based analysis can also be used to deriving types based on set constraint solving [2, 1, 18, 16, 10]. However, set constraint solving methods are intended to infer descriptive types [25] rather than for testing inclusion of a prescriptive type [25] in another. Therefore, they are useful in different settings from Dart-Zobel algorithm. Dart-Zobel algorithm has been used in type or type related analyses [7, 9]. However, the completeness and the correctness of the algorithm are left open. This paper provides answers to these open questions. We show that the algorithm is incorrect for regular types. 
We also prove that the algorithm is complete for regular types in general as well as correct for tuple distributive regular types. These results lead to a simplified version of Dart-Zobel algorithm that is complete and correct for tuple distributive regular types.

The remainder of this paper is organised as follows. Section 2 defines regular types by regular term grammars. Section 3 recalls Dart-Zobel algorithm for testing if a regular type is a subset of another regular type. Section 4 addresses the completeness and the correctness of their algorithm that have been left open. In section 5, we show that their algorithm is both complete and correct for tuple distributive regular types and provide a simplified version of their algorithm for tuple distributive regular types.

2 Regular types

Several equivalent formalisms such as tree automata [13, 6], regular term grammars [13, 6], and regular unary logic programs [28] have been used to describe regular types. In [8], Dart and Zobel use regular term grammars to describe regular types that are sets of ground terms over a ranked alphabet Σ. A regular term grammar is a tuple G = ⟨Π, Σ, ∆⟩ where¹

– Σ is a fixed ranked alphabet. Each symbol in Σ is called a function symbol and has a fixed arity. It is assumed that Σ contains at least one constant, that is, a function symbol of arity 0.
– Π is a set of symbols called nonterminals. These nonterminals will be called type symbols as they represent types. Type symbols are of arity 0. It is assumed that Π ∩ Σ = ∅.
– ∆ is a set of production rules of the form α → τ with α ∈ Π and τ ∈ T(Σ ∪ Π), where T(Σ ∪ Π) is the set of all terms over Σ ∪ Π. Terms in T(Σ ∪ Π) will be called pure type terms.

¹ A start symbol is not needed in our setting.

Example 1. Let Σ = {0, s(), nil, cons(,)} and Π = {Nat, Natlist}. G = ⟨Π, Σ, ∆⟩ defines natural numbers and lists of natural numbers where

∆ = { Nat → 0,
      Nat → s(Nat),
      Natlist → nil,
      Natlist → cons(Nat, Natlist) }
✷

The above presentation is slightly different from [8], where production rules with the same type symbol on their left-hand sides are grouped together and called a type rule. For instance, the production rules in the above example are grouped into two type rules Nat → {0, s(Nat)} and Natlist → {nil, cons(Nat, Natlist)}.

The type denoted by a pure type term is given by a rewrite relation ⇒G associated with G: t ⇒G s if ∆ contains a rule α → τ, α occurs in t, and s results from replacing an occurrence of α in t by τ. Let ⇒*G be the reflexive and transitive closure of ⇒G. The type denoted by a pure type term τ is defined as follows.

[τ]G := {t ∈ T(Σ) | τ ⇒*G t}

[τ]G is the set of terms over Σ that can be derived from τ by repeatedly replacing the left-hand side of a rule in ∆ with its right-hand side.

Example 2. Let G be the regular term grammar in example 1. We have

Natlist ⇒G cons(Nat, Natlist) ⇒G cons(s(Nat), Natlist) ⇒G cons(s(0), Natlist) ⇒G cons(s(0), nil)

Thus, [Natlist]G contains cons(s(0), nil).

The type represented by a sequence ψ of pure type terms and a set Ψ of sequences of pure type terms are defined as follows.

[ε]G := {ε}
[⟨τ⟩ + ψ′]G := [τ]G × [ψ′]G
[Ψ]G := ⋃_{ψ∈Ψ} [ψ]G

where ε is the empty sequence, + is the infix sequence concatenation operator, ⟨τ⟩ is the sequence consisting of the pure type term τ, and × is the Cartesian product operator.

The set Π of nonterminals in Dart and Zobel's type language also contains constant type symbols.
Constant type symbols are not defined by production rules and they denote constant types. In particular, Π contains µ denoting the set of all terms over Σ and φ denoting the empty set of terms. We will leave out constant type symbols in this paper in order to simplify presentation. Re-introducing constant type symbols will not affect the results of the paper.

Dart-Zobel algorithm works with simplified regular term grammars. A regular term grammar G = ⟨Π, Σ, ∆⟩ is simplified if [α]G ≠ ∅ for each α ∈ Π and τ ∉ Π for each (α → τ) ∈ ∆. Every regular grammar can be simplified.

3 Dart-Zobel Inclusion Algorithm

This section recalls Dart and Zobel's inclusion algorithm for regular types. As indicated in section 2, we shall disregard constant type symbols and simplify their algorithm accordingly. We note that without constant type symbols, many functions in their algorithm can be greatly simplified. In place of a type rule, we use the corresponding set of production rules. These superficial changes don't change the essence of the algorithm but facilitate the presentation. We shall assume that G is a simplified regular term grammar and omit references to G where there is no confusion.

We first describe the ancillary functions used in their algorithm. Let ψ = τ1 τ2 ⋯ τn be a non-empty sequence of pure type terms and Ψ be a set of non-empty sequences of pure type terms. Then head(ψ) := τ1 and tail(ψ) := τ2 ⋯ τn. heads and tails are defined as heads(Ψ) := {head(ψ) | ψ ∈ Ψ} and tails(Ψ) := {tail(ψ) | ψ ∈ Ψ}.

The function expand rewrites a non-empty sequence into a set of sequences when necessary:

expand(ψ) := {ψ}, if head(ψ) ∉ Π;
expand(ψ) := {⟨τ⟩ + tail(ψ) | (head(ψ) → τ) ∈ ∆}, if head(ψ) ∈ Π;
expands(Ψ) := ⋃_{ψ∈Ψ} expand(ψ).

The function selects(τ, Ψ) defined below applies when τ is a pure type term with τ ∉ Π and Ψ is a set of non-empty sequences with heads(Ψ) ∩ Π = ∅. The output of selects(τ, Ψ) is the set of the sequences in Ψ that have the same principal function symbol as τ:

selects(f(τ1, ⋯, τn), Ψ) := {ψ ∈ Ψ | head(ψ) = f(ω1, ⋯, ωn)}

Note that f(τ1, ⋯, τn) is a constant when n = 0. The function open(ψ′) defined below applies when ψ′ is a non-empty sequence with head(ψ′) ∉ Π. open(ψ′) replaces the head of ψ′ with its arguments:

open(f(τ1, ⋯, τn) + ψ) := τ1 τ2 ⋯ τn + ψ

When n = 0, open(f(τ1, ⋯, τn) + ψ) = ψ. Without constant type symbols, open doesn't need an extra argument as in [8] that is used to test membership of a term in a constant type and to indicate the required number of arguments when the constant type symbol is µ. Finally, opens(Ψ) := {open(ψ) | ψ ∈ Ψ}.

The inclusion algorithm subset(τ1, τ2) takes two pure type terms τ1 and τ2 and is intended to decide if [τ1]G ⊆ [τ2]G is true or false. The core part subsetv of the inclusion algorithm takes a sequence ψ of pure type terms and a set Ψ of sequences of pure type terms that are of the same length as ψ, and is intended to decide if [ψ]G ⊆ [Ψ]G. subsetv takes a third argument C to ensure termination. C is a set of pairs ⟨β, Υ⟩ where β ∈ Π is a type symbol and Υ ⊆ T(Σ ∪ Π) is a set of pure type terms. A pair ⟨β, Υ⟩ in C can be read as [β]G ⊆ [Υ]G. The functions subset and subsetv are defined in the following. Where several alternative definitions of subsetv apply, the first is used.
subset(τ1, τ2) := subsetv(⟨τ1⟩, {⟨τ2⟩}, ∅)

subsetv(ψ, Ψ, C) :=
  false, if Ψ = ∅;
  true, if ψ = ε;
  subsetv(tail(ψ), tails(Ψ), C), if ⟨head(ψ), Υ⟩ ∈ C and heads(Ψ) ⊇ Υ;
  ∀ψ′ ∈ expand(ψ). subsetv(ψ′, Ψ, C ∪ {⟨head(ψ), heads(Ψ)⟩}), if head(ψ) ∈ Π;
  subsetv(open(ψ), opens(selects(head(ψ), expands(Ψ))), C), if head(ψ) = f(τ1, ⋯, τn).

The second condition heads(Ψ) ⊇ Υ in the third alternative is mistakenly written as heads(Ψ) ⊆ Υ in [8]. The first two alternatives deal with two trivial cases. The third alternative uses pairs in C to force termination. As we shall see later, this is fine for tuple distributive regular types but is problematic for regular types in general. The fourth alternative expands ψ into a set of sequences ψ′ and compares each of them with Ψ. The fifth alternative applies when ψ = f(τ1, ⋯, τn) + ψ′. Sequences in Ψ are expanded and the expanded sequences of the form f(σ1, ⋯, σn) + ω′ are selected. ψ and the set of the selected sequences are then compared after replacing f(τ1, ⋯, τn) with τ1 ⋯ τn in ψ and replacing f(σ1, ⋯, σn) with σ1 ⋯ σn in each f(σ1, ⋯, σn) + ω′.
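To make the case analysis concrete, the following Python sketch transcribes subset and subsetv as defined above. The encoding of pure type terms as strings (type symbols) or (functor, arguments) pairs, and the example grammar at the end, are our own illustrative choices rather than anything prescribed by [8].

    def subset(rules, PI, t1, t2):
        # subset(t1, t2) = subsetv(<t1>, {<t2>}, {})
        return subsetv(rules, PI, (t1,), {(t2,)}, frozenset())

    def subsetv(rules, PI, psi, Psi, C):
        if not Psi:                                      # 1st alternative
            return False
        if len(psi) == 0:                                # 2nd alternative
            return True
        head, tail = psi[0], psi[1:]
        heads = frozenset(p[0] for p in Psi)
        tails = {p[1:] for p in Psi}
        for beta, ups in C:                              # 3rd alternative: reuse a pair in C
            if beta == head and heads >= ups:
                return subsetv(rules, PI, tail, tails, C)
        if head in PI:                                   # 4th alternative: expand the head
            C2 = C | {(head, heads)}
            return all(subsetv(rules, PI, (tau,) + tail, Psi, C2) for tau in rules[head])
        f, args = head                                   # 5th alternative: head is f(t1, ..., tn)
        opened = set()                                   # opens(selects(head, expands(Psi)))
        for p in Psi:
            expanded = {p} if p[0] not in PI else {(t,) + p[1:] for t in rules[p[0]]}
            for q in expanded:
                if not isinstance(q[0], str) and q[0][0] == f:
                    opened.add(q[0][1] + q[1:])
        return subsetv(rules, PI, args + tail, opened, C)

    # Example 1's grammar: Nat -> 0 | s(Nat), Natlist -> nil | cons(Nat, Natlist)
    PI = {"Nat", "Natlist"}
    rules = {"Nat": [("0", ()), ("s", ("Nat",))],
             "Natlist": [("nil", ()), ("cons", ("Nat", "Natlist"))]}
    assert subset(rules, PI, "Nat", "Nat") and not subset(rules, PI, "Natlist", "Nat")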
4 Correctness and Completeness

We now address the correctness and the completeness of Dart-Zobel algorithm that were left open. We first show that the algorithm is incorrect for regular types by means of a counterexample. We then prove that the algorithm is complete for regular types. Thus, the algorithm provides an approximate solution to the inclusion problem of regular types in that it returns true if the inclusion relation holds between its two arguments, while the reverse is not necessarily true.

4.1 Correctness - a counterexample

The following example shows that Dart-Zobel algorithm is incorrect for regular types.

Example 3. Let G = ⟨Π, Σ, ∆⟩ with Π = {α, β, θ, σ, ω}, Σ = {a, b, g(), h(,)} and

∆ = { α → g(ω),
      β → g(θ) | g(σ),
      θ → a | h(θ, a),
      σ → b | h(σ, b),
      ω → a | b | h(ω, a) | h(ω, b) }

where, for instance, θ → a | h(θ, a) is an abbreviation of the two rules θ → a and θ → h(θ, a). Let Σh = Σ \ {g}. We have

[θ]G = {t ∈ T(Σh) | t is left-skewed and leaves of t are a's}
[σ]G = {t ∈ T(Σh) | t is left-skewed and leaves of t are b's}
[ω]G = {t ∈ T(Σh) | t is left-skewed}
[α]G = {g(t) | t ∈ [ω]G}
[β]G = {g(t) | t ∈ [θ]G ∪ [σ]G}

Let t = g(h(h(a, b), a)). t ∈ [α]G and t ∉ [β]G. Therefore, [α]G ⊄ [β]G. The incorrectness of Dart-Zobel algorithm is illustrated by showing subset(α, β) = true as follows. Let C0 = {⟨α, {β}⟩}. We have

subset(α, β) = subsetv(⟨α⟩, {⟨β⟩}, ∅)        by def. of subset
             = subsetv(⟨g(ω)⟩, {⟨β⟩}, C0)     by 4th def. of subsetv
             = subsetv(⟨ω⟩, {⟨θ⟩, ⟨σ⟩}, C0)   by 5th def. of subsetv

Let C1 = C0 ∪ {⟨ω, {θ, σ}⟩}. By the fourth definition of subsetv and the above equation,

subset(α, β) = subsetv(⟨a⟩, {⟨θ⟩, ⟨σ⟩}, C1) ∧ subsetv(⟨b⟩, {⟨θ⟩, ⟨σ⟩}, C1)
               ∧ subsetv(⟨h(ω, a)⟩, {⟨θ⟩, ⟨σ⟩}, C1) ∧ subsetv(⟨h(ω, b)⟩, {⟨θ⟩, ⟨σ⟩}, C1)    (1)

By applying the fifth and then the second definitions of subsetv, subsetv(⟨a⟩, {⟨θ⟩, ⟨σ⟩}, C1) = subsetv(ε, {ε}, C1) = true. In the same way, we obtain subsetv(⟨b⟩, {⟨θ⟩, ⟨σ⟩}, C1) = true.

subsetv(⟨h(ω, a)⟩, {⟨θ⟩, ⟨σ⟩}, C1)
  = subsetv(⟨ω, a⟩, {⟨θ, a⟩, ⟨σ, b⟩}, C1)   by 5th def. of subsetv
  = subsetv(⟨a⟩, {⟨a⟩, ⟨b⟩}, C1)            by 3rd def. of subsetv
  = subsetv(ε, {ε}, C1)                      by 5th def. of subsetv
  = true                                     by 2nd def. of subsetv

We can show subsetv(⟨h(ω, b)⟩, {⟨θ⟩, ⟨σ⟩}, C1) = true in the same way as above. Therefore, by equation (1), subset(α, β) = true and subset is incorrect for regular types. ✷

The problem with the algorithm stems from the way the set C is used in the third definition of subsetv. As the above example indicates, the third definition of subsetv severs the dependency between the terms in a tuple, i.e., subterms of a term. In [8], Dart and Zobel show by an example that their algorithm works for some regular types which are not tuple distributive. We don't know what is the largest subclass of the class of regular types for which the algorithm is correct.

4.2 Completeness

We now prove that Dart-Zobel algorithm is complete for regular types in the sense that subset(τ1, τ2) = true whenever [τ1]G ⊆ [τ2]G. Let C be a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). A pair ⟨β, Υ⟩ in C states that the denotation of β is included in that of Υ, i.e., [β]G ⊆ [Υ]G for regular types. Define

Γ_{C,G} := ⋀_{⟨β,Υ⟩∈C} ([β]G ⊆ [Υ]G)

The completeness of subset follows from the following theorem which asserts the completeness of subsetv.

Theorem 1. Let ψ be a sequence of pure type terms, Ψ a set of sequences of pure type terms of the same length as ψ, and C a set of pairs ⟨β, Υ⟩ with β ∈ Π and Υ ⊆ T(Σ ∪ Π). If Γ_{C,G} |= [ψ]G ⊆ [Ψ]G then subsetv(ψ, Ψ, C) = true.

Proof. Assume subsetv(ψ, Ψ, C) = false. The proof is done by showing Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G. This is accomplished by induction on ⟨dp(ψ, Ψ, C), lg(ψ)⟩, where lg(ψ) is the length of ψ and dp(ψ, Ψ, C) is the depth of the computation tree for subsetv(ψ, Ψ, C). Define ⟨k, l⟩ < ⟨k′, l′⟩ := (k < k′) ∨ ((k = k′) ∧ (l < l′)).

Basis. dp(ψ, Ψ, C) = 0 and lg(ψ) = 0. ψ = ε and Ψ = ∅ since subsetv(ψ, Ψ, C) = false. Let t = ε. t ∈ [ψ]G and t ∉ [Ψ]G. So, Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G.

Induction. dp(ψ, Ψ, C) ≠ 0 or lg(ψ) ≠ 0. By the definition of subsetv, (a) Ψ = ∅; or (b) subsetv(tail(ψ), tails(Ψ), C) = false and there is Υ ⊆ T(Σ ∪ Π) such that (⟨head(ψ), Υ⟩ ∈ C) ∧ (heads(Ψ) ⊇ Υ); or (c) head(ψ) ∈ Π and ∃ψ′ ∈ expand(ψ). subsetv(ψ′, Ψ, C′) = false where C′ = C ∪ {⟨head(ψ), heads(Ψ)⟩}; or (d) head(ψ) = f(τ1, ⋯, τn) and subsetv(ψ′, Ψ′, C) = false where ψ′ = open(ψ) and Ψ′ = opens(selects(head(ψ), expands(Ψ))).

It remains to prove that Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G in each of the cases (a)-(d). The case (a) is trivial as G is simplified and hence [ψ]G ≠ ∅.

In the case (b), we have dp(tail(ψ), tails(Ψ), C) ≤ dp(ψ, Ψ, C) and lg(tail(ψ)) < lg(ψ). By the induction hypothesis, Γ_{C,G} ⊭ [tail(ψ)]G ⊆ [tails(Ψ)]G. Thus, Γ_{C,G} |= ∃t′. (t′ ∈ [tail(ψ)]G ∧ t′ ∉ [tails(Ψ)]G). Let s ∈ [head(ψ)]G and t = ⟨s⟩ + t′. Note that s exists as G is simplified. We have Γ_{C,G} |= t ∈ [ψ]G ∧ t ∉ [Ψ]G. So, Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G.

In the case (c), dp(ψ′, Ψ, C′) < dp(ψ, Ψ, C). By the induction hypothesis, Γ_{C′,G} ⊭ [ψ′]G ⊆ [Ψ]G. Note that Γ_{C′,G} = Γ_{C,G} ∧ ([head(ψ)]G ⊆ [heads(Ψ)]G). So, we have Γ_{C,G} |= [ψ′]G ⊄ [Ψ]G ∨ [head(ψ)]G ⊄ [heads(Ψ)]G. Assume Γ_{C,G} = true. Either (i) [ψ′]G ⊄ [Ψ]G or (ii) [head(ψ)]G ⊄ [heads(Ψ)]G. In the case (i), ∃t′. (t′ ∈ [ψ′]G) ∧ (t′ ∉ [Ψ]G). By proposition 5.26 in [8], we have Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G. In the case (ii), ∃s. (s ∈ [head(ψ)]G) ∧ (s ∉ [heads(Ψ)]G). Let t′ ∈ [tail(ψ)]G and t = ⟨s⟩ + t′. Note that t′ exists as G is simplified. We have t ∈ [ψ]G ∧ t ∉ [Ψ]G. So, Γ_{C,G} ⊭ [ψ]G ⊆ [Ψ]G in the case (c).
In the case (d), we have ψ ′ = τ1 · · · τn +tail (ψ) and dp(ψ ′ , Ψ ′ , C) < dp(ψ, Ψ, C). By the induction hypothesis, ΓC,G 6|= [ψ ′]G ⊆ [Ψ ′]G . Thus, ΓC,G |= ∃t1 .∃t2 .(lg(t1 ) = n) ∧ ((t1 + t2 ) ∈ [ψ ′]G ) ∧ ((t1 + t2 ) 6∈ [Ψ ′]G ), which implies ΓC,G |= ∃t1 .∃t2 .(hf (t1 )i + t2 ) ∈ [ψ]]G ) ∧ ((hf (t1 )i + t2 ) 6∈ [Ψ ]G ). So, ΓC,G 6|= [ψ]]G ⊆ [Ψ ]G . ✷ The completeness of subset is a corollary of the above theorem. Corollary 2. Let τ1 and τ2 be pure type terms. If [τ1]G ⊆ [τ2]G then subset(τ1 , τ2 ) = true. Proof. subset(τ1 , τ2 ) = subsetv (hτ1 i, {hτ2 i}, ∅) by the definition of subset. We have Γ∅,G |= [hτ1 i]]G ⊆ [{hτ2 i}]]G since [τ1]G ⊆ [τ2]G . The corollary now follows from the above theorem as Γ∅,G = true. ✷ 5 Tuple Distributive Regular Types Most type languages in logic programming use tuple distributive closures of regular term languages as types [26, 22, 29, 15, 19, 25, 27, 11, 28, 17, 12, 20, 5, 21]. The notion of tuple distributivity is due to Mishra [22]. The following definition of tuple distributivity is due to Heintze and Jaffar [15]. Each function symbol of arity n is −1 −1 −1 associated with n projection operators f(1) , f(2) , · · · , f(n) . Let S be a −1 set of ground terms in T (Σ). f(i) is defined as follows. def −1 f(i) (S) = {ti | f (t1 , · · · , ti , · · · , tn ) ∈ S} The tuple distributive closure of S is def ⋆ −1 S ⋆ = {c | c ∈ S ∧ c ∈ Σ0 } ∪ {f (t1 , · · · , tn ) | ti ∈ (f(i) (S)) } where Σ0 is the set of constants in Σ. The following proposition results from the fact that (.)⋆ is a closure operator and preserves set inclusion, i.e., S1 ⊆ S2 implies S1⋆ ⊆ S2⋆ . Proposition 3. Let S1 , S2 ⊆ T (Σ). (S1 ∪ S2 )⋆ = (S1⋆ ∪ S2⋆ )⋆ . ✷ The tuple distributive regular type h τ i G associated with a pure type term τ is the tuple distributive closure of the regular type [τ ]G associated with τ [22]. def h τ i G = [τ ]⋆G Let ψ be a sequence of pure type terms, Ψ be a set of sequences of pure type terms of the same length. def h ǫiiG = {ǫ} def h ψiiG = {hti + t | t ∈ h head(ψ)iiG ∧ t ∈ h tail (ψ)iiG } def h Ψ iG = ( [ ⋆ h head (ψ)iiG ) × h tails(Ψ )iiG ψ∈Ψ The definition of h Ψ i G makes use of tuple distributivity and hence severs the inter-dependency between components of a sequences of terms. 5.1 Correctness We now prove that Dart-Zobel algorithm is correct for tuple distributive regular types in the sense that if subset(τ1 , τ2 ) = true then h τ1 i G ⊆ h τ2 i G . Let C be a set of pairs hβ, Υ i with β ∈ Π and Υ ⊆ T (Σ ∪ Π). A pair hβ, Υ i in C represents h βiiG ⊆ h Υ i G for tuple distributive regular types. Define def ΦC,G = ∧hβ,Υ i∈C h βiiG ⊆ h Υ i G The correctness of subset follows from the following theorem which asserts the correctness of subsetv for tuple distributive regular types. Theorem 4. Let ψ be a sequence of pure type terms and Ψ a set of sequences of pure type terms of the same length as ψ, C a set of pairs hβ, Υ i with β ∈ Π and Υ ⊆ T (Σ ∪ Π). If subsetv (ψ, Ψ, C) = true then ΦC,G |= h ψiiG ⊆ h Ψ i G . Proof. Assume subsetv (ψ, Ψ, C) = true. The proof is done by induction on hdp(ψ, Ψ, C), lg(ψ)i. Basis. dp(ψ, Ψ, C) = 0 and lg(ψ) = 0. ψ = ǫ and Ψ 6= ∅ by the second definition of subsetv . So, Ψ = {ǫ} and ΦC,G |= h ψiiG ⊆ h Ψ i G . Induction. 
dp(ψ, Ψ, C) 6= 0 or lg(ψ) 6= 0 By the definition of subsetv , (a) subsetv (tail (ψ), tails(Ψ ), C) = true and there is Υ ⊆ T (Σ ∪ Π) such that (hhead (ψ), Υ i ∈ C) ∧ (heads(Ψ ) ⊇ Υ ); or (b) head (ψ) ∈ Π and ∀.ψ ′ ∈ expand (ψ).subsetv (ψ ′ , Ψ, C ′ ) = true where C ′ = C ∪ {hhead (ψ), heads(Ψ )i}; or (c) head (ψ) = f (τ1 , · · · , τn ) and subsetv (ψ ′ , Ψ ′ , C) = true where ψ ′ = open(ψ) and Ψ ′ = opens(selects(head(ψ), expands(Ψ ))). It remains to prove that ΦC,G |= h ψiiG ⊆ h Ψ i G in each of the cases (a)-(c). In the case (a), we have dp(tail (ψ), tails(Ψ ), C) ≤ dp(ψ, Ψ, C) and lg(tail (ψ)) < lg(ψ). By the induction hypothesis, ΦC,G |= h tail (ψ)iiG ⊆ h tails(Ψ )iiG . (hhead (ψ), Υ i ∈ C) and (heads(Ψ ) ⊇ Υ ) imply ΦC,G |= h head (ψ)iiG ⊆ h heads(Ψ )iiG . Thus, ΦC,G |= h ψiiG ⊆ h Ψ i G by the definitions of h ψiiG and h Ψ i G . Note that tuple distributivity is used in the definition of h Ψ i G . In the case (b), dp(ψ ′ , Ψ, C ′) < dp(ψ, Ψ, C). By the induction hypothesis, ΦC ′ ,G |= h ψ ′ i G ⊆ h Ψ i G . Note that ΦC ′ ,G = ΦC,G ∧ (hhhead (ψ)iiG ⊆ h heads(Ψ )iiG ). So, ΦC,G |= h ψ ′ i G ⊆ h Ψ i G ∨ h head (ψ)iiG 6⊆ h heads(Ψ )iiG subsetv (head (ψ), heads(Ψ ), C) = true since subsetv (ψ, Ψ, C) = true. We have ΦC,G |= h head (ψ)iiG ⊆ h heads(Ψ )iiG by the induction hypothesis since dp(head(ψ), heads(Ψ ), C) < dp(ψ, Ψ, C). So, ΦC,G |= h ψ ′ i G ⊆ h Ψ i G . ΦC,G |= (hh{ψ ′ | ψ ′ ∈ expand(ψ)}iiG ⊆ h Ψ i G ) since (.)⋆ is a closure operator and hence ΦC,G |= h ψiiG ⊆ h Ψ i G In the case (c), we have ψ ′ = τ1 · · · τn +tail (ψ) and dp(ψ ′ , Ψ ′ , C) < dp(ψ, Ψ, C). By the induction hypothesis, ΦC,G |= h ψ ′ i G ⊆ h Ψ ′ i G . By proposition 5.29 in [8], ΦC,G |= h ψiiG ⊆ h Ψ i G . This completes the proof of the theorem. ✷ The correctness of subset is a corollary of the above theorem. Corollary 5. Let τ1 and τ2 be pure type terms. If subset(τ1 , τ2 ) = true then h τ1 i G ⊆ h τ2 i G . Proof. Let subset(τ1 , τ2 ) = true. subsetv (hτ1 i, {hτ2 i}, ∅) = true by the definition of subset. Thus, Φ∅,G |= h hτ1 iiiG ⊆ h {hτ2 i}iiG according to the above theorem. So, h τ1 i G ⊆ h τ2 i G as Φ∅,G = true. ✷ 5.2 Completeness This section presents the completeness of Dart-Zobel algorithm for tuple distributive regular types. The following theorem is the counterpart of theorem 1. Theorem 6. Let ψ be a sequence of pure type terms and Ψ a set of sequences of pure type terms of the same length as ψ, C a set of pairs hβ, Υ i with β ∈ Π and Υ ⊆ T (Σ ∪ Π). If ΦC,G |= h ψiiG ⊆ h Ψ i G then subsetv (ψ, Ψ, C) = true. Proof. The proof can be obtained from that for theorem 1 by simply replacing Γ·,· with Φ·,· and [·]]G with h ·iiG . ✷ The following completeness result of Dart-Zobel algorithm for tuple distributive regular types follows from the above theorem. Corollary 7. Let τ1 and τ2 be pure type terms. If h τ1 i G ⊆ h τ2 i G then subset(τ1 , τ2 ) = true. Proof. The proof can be obtained from that for corollary 2 by simply replacing Γ(·,·) with Φ(·,·) , [·]]G with h ·iiG and theorem 1 with theorem 6. ✷ 5.3 A Simplified Algorithm Now that Dart-Zobel algorithm is complete and correct for tuple distributive regular types but not correct for general regular types. It is desirable to specialise Dart-Zobel algorithm for tuple distributive regular types which was originally proposed for general regular types. The following is a simplified version of the algorithm for tuple distributive regular types. 
def subset ′ (τ1 , τ2 ) = subset ′ (τ1 , {τ2 }, ∅) def subset ′ (τ, Υ, C) =  false        true if Υ = ∅ if (hτ, Υ ′i ∈ C) ∧ (Υ ⊇ Υ ′)  ∀τ ′ ∈ expand ′ (τ ).subset ′ (τ ′ , Υ, C ∪ {hτ, Υ i}) if τ ∈ Π    ′ ′   subsetv (τ1 · · · τn , {σ1 · · · σn | f (σ1 , · · · , σn ) ∈ expands (Υ )}, C)    if τ = f (τ1 , · · · , τn )   def subsetv ′ (ǫ, {ǫ}, C) = true def subsetv ′ (ψ, Ψ, C) = subset ′ (head(ψ), heads(Ψ ), C) ∧ subsetv ′ (tail (ψ), tails(Ψ ), C) ( {τ } if τ 6∈ Π def ′ expand (τ ) = {σ | τ → σ) ∈ ∆} if τ ∈ Π def expands ′ (Υ ) = [ expand ′ (τ ) τ ∈Υ While Dart-Zobel algorithm mainly deals with sequences of pure type terms, the simplified algorithm primarily deals with pure type terms by breaking a sequence of pure type terms into its component pure type terms. This is allowed because tuple distributive regular types abstract away inter-dependency between component terms in a sequence of ground terms. We forgo presenting the correctness and the completeness of the simplified algorithm because they can be proved by emulating proofs for theorems 1 and 4. 6 Conclusion We have provided answers to open questions about the correctness and the completeness of Dart-Zobel algorithm for testing inclusion of one regular type in another. The algorithm is complete but incorrect for general regular types. It is both complete and correct for tuple distributive regular types. It is our hope that the results presented in this paper will help identify the applicability of Dart-Zobel algorithm. We have also provided a simplified version of Dart-Zobel algorithm for tuple distributive regular types. References [1] A. Aiken and T.K. Lakshman. Directional type checking of logic programs. In B. Le Charlier, editor, Proceedings of the First International Static Analysis Symposium, pages 43–60. Springer-Verlag, 1994. [2] A. Aiken and E. Wimmers. Solving systems of set constraints. In Proceedings of the Seventh IEEE Symposium on Logic in Computer Science, pages 329–340. The IEEE Computer Society Press, 1992. [3] C. Beierle. Type inferencing for polymorphic order-sorted logic programs. In L. Sterling, editor, Proceedings of the Twelfth International Conference on Logic Programming, pages 765–779. The MIT Press, 1995. [4] L. Cardelli and P. Wegner. On understanding types, data abstraction, and polymorphism. ACM computing surveys, 17(4):471–522, 1985. [5] M. Codish and V. Lagoon. Type dependencies for logic programs using aciunification. In Proceedings of the 1996 Israeli Symposium on Theory of Computing and Systems, pages 136–145. IEEE Press, June 1996. [6] H. Comon, M. Dauchet, R. Gilleron, D. Lugiez, S. Tison, and M. Tommasi. Tree Automata Techniques and Applications. Draft, 1998. [7] P.W. Dart and J. Zobel. Efficient run-time type checking of typed logic programs. Journal of Logic Programming, 14(1-2):31–69, 1992. [8] P.W. Dart and J. Zobel. A regular type language for logic programs. In Frank Pfenning, editor, Types in Logic Programming, pages 157–189. The MIT Press, 1992. [9] S. K. Debray, P. López-Garcia, and M. Hermenegildo. Non-failure analysis for logic programs. In Lee Naish, editor, Logic programming: Proceedings of the fourteenth International Conference on Logic Programming, pages 48–62. The MIT Press, July 1997. [10] P. Devienne, J-M. Talbot, and S. Tison. Co-definite set constraints with membership expressions. In J. Jaffar, editor, Proceedings of the 1998 Joint Conference and Symposium on Logic Programming, pages 25–39. The MIT Press, 1998. [11] T. Fruhwirth, E. Shapiro, M.Y. 
Vardi, and E. Yardeni. Logic programs as types for logic programs. In Proceedings of Sixth Annual IEEE Symposium on Logic in Computer Science, pages 300–309. The IEEE Computer Society Press, 1991. [12] J.P. Gallagher and D.A. de Waal. Fast and precise regular approximations of logic programs. In M. Bruynooghe, editor, Proceedings of the Eleventh International Conference on Logic Programming, pages 599–613. The MIT Press, 1994. [13] F. Gécseg and M. Steinby. Tree Automata. Akadémiai Kiadó, 1984. [14] M. Hanus. Horn clause programs with polymorphic types: semantics and resolution. Theoretical Computer Science, 89(1):63–106, 1991. [15] N. Heintze and J. Jaffar. A finite presentation theorem for approximating logic programs. In Proceedings of the seventh Annual ACM Symposium on Principles of Programming Languages, pages 197–209. The ACM Press, 1990. [16] N. Heintze and J. Jaffar. A decision procedure for a class of set constraints. Technical Report CMU-CS-91-110, Carnegie-Mellon University, February 1991. (Later version of a paper in Proc. 5th IEEE Symposium on LICS). [17] N. Heintze and J. Jaffar. Semantic types for logic programs. In Frank Pfenning, editor, Types in Logic Programming, pages 141–155. The MIT Press, 1992. [18] N. Heintze and J. Jaffar. Set constraints and set-based analysis. In Alan Borning, editor, Principles and Practice of Constraint Programming, volume 874 of Lecture Notes in Computer Science. Springer, May 1994. (PPCP’94: Second International Workshop, Orcas Island, Seattle, USA). [19] D. Jacobs. Type declarations as subtype constraints in logic programming. SIGPLAN Notices, 25(6):165–73, 1990. [20] L. Lu. Type analysis of logic programs in the presence of type definitions. In Proceedings of the 1995 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based program manipulation, pages 241–252. The ACM Press, 1995. [21] L. Lu. A polymorphic type analysis in logic programs by abstract interpretation. Journal of Logic Programming, 36(1):1–54, 1998. [22] P. Mishra. Towards a theory of types in Prolog. In Proceedings of the IEEE international Symposium on Logic Programming, pages 289–298. The IEEE Computer Society Press, 1984. [23] A. Mycroft and R.A. O’Keefe. A polymorphic type system for Prolog. Artificial Intelligence, 23:295–307, 1984. [24] Frank Pfenning, editor. Types in logic programming. The MIT Press, Cambridge, Massachusetts, 1992. [25] U.S. Reddy. Types for logic programs. In S. Debray and M. Hermenegildo, editors, Logic Programming. Proceedings of the 1990 North American Conference, pages 836–40. mit, 1990. [26] M. Soloman. Type definitions with parameters. In Conference Record of the Fifth ACM Symposium on Principles of Programming Languages, pages 31–38, 1978. [27] E. Yardeni, T. Fruehwirth, and E. Shapiro. Polymorphically typed logic programs. In K. Furukawa, editor, Logic Programming. Proceedings of the Eighth International Conference, pages 379–93. The MIT Press, 1991. [28] E. Yardeni and E. Shapiro. A type system for logic programs. Journal of Logic Programming, 10(2):125–153, 1991. [29] J. Zobel. Derivation of polymorphic types for Prolog programs. In J.-L. Lassez, editor, Logic Programming: Proceedings of the fourth international conference, pages 817–838. The MIT Press, 1987. This article was processed using the LaTEX macro package with LLNCS style
On Asymptotics Related to Classical Inference in Stochastic Differential Equations with Random Effects

Trisha Maitra and Sourabh Bhattacharya*

arXiv:1407.3968v3 [] 11 May 2016

Abstract

Delattre et al. (2013) considered n independent stochastic differential equations (SDEs), where in each case the drift term is associated with a random effect, the distribution of which depends upon unknown parameters. Assuming the independent and identical (iid) situation, the authors provide independent proofs of weak consistency and asymptotic normality of the maximum likelihood estimators (MLEs) of the hyper-parameters of their random effects parameters. In this article, as an alternative route to proving consistency and asymptotic normality in the SDE set-up involving random effects, we verify the regularity conditions required by existing relevant theorems. In particular, this approach allowed us to prove strong consistency under a weaker assumption. But much more importantly, we further consider the independent, but non-identical set-up associated with the random effects based SDE framework, and prove asymptotic results associated with the MLEs.

Keywords: Asymptotic normality; Burkholder-Davis-Gundy inequality; Itô isometry; Maximum likelihood estimator; Random effects; Stochastic differential equations.

* Trisha Maitra is a PhD student and Sourabh Bhattacharya is an Associate Professor in Interdisciplinary Statistical Research Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108. Corresponding e-mail: [email protected].

1 Introduction

Delattre et al. (2013) study mixed-effects stochastic differential equations (SDEs) of the following form:

dXi(t) = b(Xi(t), φi) dt + σ(Xi(t)) dWi(t), with Xi(0) = xi, i = 1, . . . , n.   (1.1)

Here, for i = 1, . . . , n, the stochastic process Xi(t) is assumed to be continuously observed on the time interval [0, Ti] with Ti > 0 known, and {xi; i = 1, . . . , n} are the known initial values of the i-th process. The processes {Wi(·); i = 1, . . . , n} are independent standard Brownian motions, and {φi; i = 1, . . . , n} are independently and identically distributed (iid) random variables with common distribution g(ϕ, θ)dν(ϕ) (for all θ, g(ϕ, θ) is a density with respect to a dominating measure on R^d, where R is the real line and d is the dimension), which are independent of the Brownian motions. Here θ ∈ Ω ⊂ R^p (p ≥ 2d) is an unknown parameter to be estimated. The functions b : R × R^d → R and σ : R → R are the drift function and the diffusion coefficient, respectively, both assumed to be known.

Delattre et al. (2013) impose regularity conditions that ensure existence of solutions of (1.1). We adopt their assumptions, which are as follows.

(H1) (i) The function (x, ϕ) → b(x, ϕ) is C¹ (differentiable with continuous first derivative) on R × R^d, and such that there exists K > 0 so that b²(x, ϕ) ≤ K(1 + x² + |ϕ|²), for all (x, ϕ) ∈ R × R^d.
(ii) The function σ(·) is C¹ on R and σ²(x) ≤ K(1 + x²), for all x ∈ R.

(H2) Let Xi^ϕ be associated with the SDE of the form (1.1) with drift function b(x, ϕ). Also letting Q_ϕ^{xi,Ti} denote the joint distribution of {Xi^ϕ(t); t ∈ [0, Ti]}, it is assumed that for i = 1, . . . , n, and for all ϕ, ϕ′, the following holds:

Q_ϕ^{xi,Ti} [ ∫_0^{Ti} b²(Xi^ϕ(t), ϕ′) / σ²(Xi^ϕ(t)) dt < ∞ ] = 1.

(H3) For f = ∂b/∂ϕj, j = 1, . . . , d, there exist c > 0 and some γ ≥ 0 such that

sup_{ϕ∈R^d} |f(x, ϕ)| / σ²(x) ≤ c (1 + |x|^γ).
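The mixed-effects model (1.1) is straightforward to simulate, which may help in building intuition for the estimators studied below. The following Euler-Maruyama sketch in Python is ours and purely illustrative; the function names, step size, and the Ornstein-Uhlenbeck-type example are not taken from Delattre et al. (2013).

    import numpy as np

    def simulate_paths(n, T, x0, b, sigma, mu, omega, dt=1e-3, seed=0):
        """Simulate n paths of dX = phi_i * b(X) dt + sigma(X) dW, with phi_i ~ N(mu, omega^2)."""
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, T + dt, dt)
        X = np.empty((n, len(t)))
        X[:, 0] = x0
        phi = rng.normal(mu, omega, size=n)              # one random effect per path
        for k in range(len(t) - 1):
            dW = rng.normal(0.0, np.sqrt(dt), size=n)    # Brownian increments
            X[:, k + 1] = X[:, k] + phi * b(X[:, k]) * dt + sigma(X[:, k]) * dW
        return t, X

    # e.g. an Ornstein-Uhlenbeck-type drift b(x) = -x with unit diffusion coefficient
    t, X = simulate_paths(n=5, T=1.0, x0=0.0, b=lambda x: -x,
                          sigma=lambda x: np.ones_like(x), mu=1.0, omega=0.5)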
Statistically, the i-th process Xi(·) can be thought of as modelling the i-th individual, and the corresponding random variable φi denotes the random effect of individual i. For statistical inference, we follow Delattre et al. (2013), who consider the special case where b(x, φi) = φi b(x). We assume

(H1′) (i) b(·) and σ(·) are C¹ on R, satisfying b²(x) ≤ K(1 + x²) and σ²(x) ≤ K(1 + x²) for all x ∈ R, for some K > 0.
(ii) Almost surely for each i ≥ 1,

∫_0^{Ti} b²(Xi(s)) / σ²(Xi(s)) ds < ∞.

Under this assumption, (H3) is no longer required; see Delattre et al. (2013). Moreover, Proposition 1 of Delattre et al. (2013) holds; in particular, if for k ≥ 1, E|φi|^{2k} < ∞, then for all T > 0,

sup_{t∈[0,T]} E[Xi(t)]^{2k} < ∞.   (1.2)

As in Delattre et al. (2013) we assume that the φi are normally distributed. Hence, (1.2) is satisfied in our case.

Delattre et al. (2013) show that the likelihood, depending upon θ, admits a relatively simple form composed of the following sufficient statistics:

Ui = ∫_0^{Ti} b(Xi(s)) / σ²(Xi(s)) dXi(s),   Vi = ∫_0^{Ti} b²(Xi(s)) / σ²(Xi(s)) ds,   i = 1, . . . , n.   (1.3)

The exact likelihood is given by

L(θ) = ∏_{i=1}^n λi(Xi, θ),   (1.4)

where

λi(Xi, θ) = ∫_R g(ϕ, θ) exp( ϕ Ui − (ϕ²/2) Vi ) dν(ϕ).   (1.5)

Assuming that g(ϕ, θ)dν(ϕ) ≡ N(µ, ω²), Delattre et al. (2013) obtain the following form of λi(Xi, θ):

λi(Xi, θ) = (1 + ω² Vi)^{−1/2} exp[ − (Vi / (2(1 + ω² Vi))) (µ − Ui/Vi)² ] exp( Ui² / (2Vi) ),   (1.6)

where θ = (µ, ω²) ∈ Ω ⊂ R × R+. As in Delattre et al. (2013), here we assume that

(H2′) Ω is compact.

Delattre et al. (2013) consider xi = x and Ti = T for i = 1, . . . , n, so that the set-up boils down to the iid situation, and investigate asymptotic properties of the MLE of θ, providing proofs of consistency and asymptotic normality independently, without invoking the general results already existing in the literature. In this article, as an alternative, we prove asymptotic properties of the MLE in this SDE set-up by verifying the regularity conditions of relevant theorems already existing in the literature. Our approach allowed us to prove strong consistency of the MLE, rather than the weak consistency proved by Delattre et al. (2013). Also, importantly, our approach does not require assumption (H4) of Delattre et al. (2013), which required (U1, V1) to have a density with respect to the Lebesgue measure on R × R+ that is jointly continuous and positive on an open ball of R × R+. Far more importantly, we consider the independent but non-identical case (we refer to the latter as non-iid), and prove consistency and asymptotic normality of the MLE in this set-up.

In what follows, in Section 2 we investigate asymptotic properties of the MLE in the iid context. In Section 3 we investigate classical asymptotics in the non-iid set-up. We summarize our work and provide concluding remarks in Section 4.

Notationally, "→_{a.s.}", "→_P" and "→_L" denote convergence "almost surely", "in probability" and "in distribution", respectively.

2 Consistency and asymptotic normality of MLE in the iid set-up

2.1 Strong consistency of MLE

Consistency of the MLE under the iid set-up can be verified by validating the regularity conditions of the following theorem (Theorems 7.49 and 7.54 of Schervish (1995)); for our purpose we present the version for compact Ω.

Theorem 1 (Schervish (1995)) Let {Xn}_{n=1}^∞ be conditionally iid given θ with density f1(x|θ) with respect to a measure ν on a space (X¹, B¹).
Fix θ0 ∈ Ω, and define, for each M ⊆ Ω and x ∈ X 1 , Z(M, x) = inf log ψ∈M f1 (x|θ0 ) . f1 (x|ψ) Assume that for each θ 6= θ0 , there is an open set Nθ such that θ ∈ Nθ and that Eθ0 Z(Nθ , Xi ) > −∞. Also assume that f1 (x|·) is continuous at θ for every θ, a.s. [Pθ0 ]. Then, if θ̂n is the M LE of θ corresponding to n observations, it holds that lim θ̂n = θ0 , a.s. [Pθ0 ]. n→∞ 2.1.1 Verification of strong consistency of M LE in our SDE set-up To verify the conditions of Theorem 1 in our case, we note that for any x, f1 (x|θ) = λ1 (x, θ) = λ(x, θ) given by (1.6), which is clearly continuous in θ. Also, it follows from the proof of Proposition 7 of Delattre et al. (2013) that for every θ 6= θ0 ,   1 + ω 2 V1 (ω02 − ω 2 )U12 1 1 f1 (x|θ0 ) = log + log f1 (x|θ) 2 2 (1 + ω 2 V1 )(1 + ω02 V1 ) 1 + ω02 V1   µ0 U1 µ20 V1 µU1 µ2 V1 − − − + 2(1 + ω 2 V1 ) 1 + ω 2 V1 2(1 + ω02 V1 ) 1 + ω02 V1       2  U1 1 2 1 ω2 |ω 2 − ω02 | ω02 2 − |ω0 − ω | log 1 + 2 + ≥− 1+ 2 2 ω2 2 ω ω0 1 + ω02 V1   2 2 2 U1 µ 0 V1 |ω − ω | µ0 U1 − |µ| − . (2.1) − 1+ 0 2 2 2 ω 1 + ω 0 V1 2(1 + ω0 V1 ) 1 + ω02 V1 2    U1 , Eθ0 Taking Nθ = µ, µ × ω2 , ω 2 , and noting that Eθ0 1+ω 2V 0 1 U1 1+ω02 V1 and Eθ0 finite due to Lemma 1 of Delattre et al. (2013), it follows that Eθ0 Z(Nθ , Xi ) > −∞. [Pθ0 ]. We summarize the result in the form of the following theorem: 3 µ20 V1 are 2(1+ω02 V1 ) a.s. Hence, θ̂n → θ0   Theorem 2 Assume the iid setup and conditions (H1′ ) and (H2′ ). Then the M LE is strongly consistent a.s. in the sense that θ̂n → θ0 [Pθ0 ]. 2.2 Asymptotic normality of MLE To verify asymptotic normality of M LE we invoke the following theorem provided in Schervish (1995) (Theorem 7.63): Theorem 3 (Schervish (1995)) Let Ω be a subset of Rd , and let {Xn }∞ n=1 be conditionally iid given P θ each with density f1 (·|θ). Let θ̂n be an M LE. Assume that θ̂n → θ under Pθ for all θ. Assume that f1 (x|θ) has continuous second partial derivatives with respect to θ and that differentiation can be passed under the integral sign. Assume that there exists Hr (x, θ) such that, for each θ0 ∈ int(Ω) and each k, j, sup kθ−θ0 k≤r ∂2 ∂2 log fX1 |Θ (x|θ0 ) − log fX1 |Θ (x|θ) ≤ Hr (x, θ0 ), ∂θk ∂θj ∂θk ∂θj (2.2) with lim Eθ0 Hr (X, θ0 ) = 0. r→0 Assume that the Fisher information matrix I(θ) is finite and non-singular. Then, under Pθ0 ,   √  L n θ̂n − θ0 → N 0, I −1 (θ0 ) . 2.2.1 (2.3) (2.4) Verification of the above regularity conditions for asymptotic normality in our SDE set-up P In Section 2.1.1 we proved almost sure consistency of the M LE θ̂n in the SDE set-up. Hence, θ̂n → θ under Pθ for all θ. In the proof of Proposition 5, Delattre et al. (2013) show that differentiation can be Vi i −µVi passed under the integral sign. Letting γi (θ) = U and Ii = 1+ω 2 V , note that (see the proof of 1+ω 2 Vi i Proposition 6 of Delattre et al. (2013)) ∂2 ∂2 2 log f (x|θ) = −I (ω ), log f1 (x|θ) = −γ1 (θ)I1 (ω 2 ); 1 1 ∂µ2 ∂µ∂ω 2  ∂2 1 log f1 (x|θ) = − 2γ12 (θ)I1 (ω 2 ) − I12 (ω 2 ) . 2 2 ∂ω ∂ω 2 (2.5) (2.6) 2 It follows from (2.5) and (2.6) that in our case ∂θ∂k ∂θj log f1 (x|θ) is differentiable in θ = (µ, ω 2 ), and the derivative has finite expectation; see the proof of Proposition 8 of Delattre et al. (2013)). Hence, (2.2) and (2.3) clearly hold. Following Delattre et al. (2013) we assume: (H3′ ) The true value θ0 ∈ int (Ω). That the information matrix I(θ) is finite and is the covariance matrix of the vector γ1 (θ), 21 γ12 (θ) − I1 ω 2 (hence, nonnegative-definite), are shown in Delattre et al. (2013). We additionally assume, as Delattre et al. 
(2013): (H4′ ) The information matrix I(θ0 ) is invertible. Hence, asymptotic normality of the M LE, of the form (2.4), holds in our case. Formally, Theorem 4 Assume the iid setup and conditions (H1′ ) – (H4′ ). Then the M LE is asymptotically normally distributed as (2.4). 4  3 Consistency and asymptotic normality of MLE in the non-iid set-up We now consider the case where the processes Xi (·); i = 1, . . . , n, are independently, but not identically distributed. This happens when we no longer enforce the restrictions Ti = T and xi = x for i = 1, . . . , n. However, we do assume that the sequences {T1 , T2 , . . .} and {x1 , x2 , . . . , } are sequences entirely contained in compact sets T and X, respectively. Due to compactness, there exist convergent subsequences with limits in T and X. Abusing notation, we continue to denote the convergent subsequences as {T1 , T2 , . . .} and {x1 , x2 , . . .}. Let the limts be T ∞ ∈ T and x∞ ∈ X. Now, since the distributions of the processes Xi (·) are uniquely defined on the space of real, continuous functions C ([0, Ti ] 7→ R) = {f : [0, Ti ] 7→ R such that f is continuous}, given any t ∈ [0, Ti ], f (t) is clearly a continuous function of the initial value f (0) = x. To emphasize dependence on x, we denote the function as f (t, x). In fact, for any ǫ > 0, there exists δǫ > 0 such that whenever |x1 − x2 | < δǫ , |f (t, x1 ) − f (t, x2 )| < ǫ for all t ∈ [0, Ti ]. Henceforth, we denote the process associated with the initial value x and time point t as X(t, x), and by φ(x) the random effect parameter associated with the initial value x such that φ(xi ) = φi . We assume that (H5′ ) φ(x) is a real-valued, continuous function of x, and that for k ≥ 1, sup E [φ(x)]2k < ∞. (3.1) x∈X For x ∈ X and T ∈ T, let U (x, T ) = V (x, T ) = Z Z T 0 T 0 b(X(s, x)) dX(s, x); σ 2 (X(s, x)) (3.2) b2 (X(s, x)) ds. σ 2 (X(s, x)) (3.3) Clearly, U (xi , Ti ) = Ui and V (xi , Ti ) = Vi , where Ui and Vi are given by (1.3). In this non-iid set-up we assume that (H6′ ) b2 (x) < K(1 + xτ ), for some τ ≥ 1. σ 2 (x) (3.4) This assumption ensures that moments of all orders of V (x, T ) are finite. Then the moments of uniformly integrable continuous functions of U (x, T ), V (x, T ) and θ are continuous in x, T and θ. The result is formalized as Theorem 5, the proof of which is presented in the Appendix. Theorem 5 Assume (H5′ ) and (H6′ ). Let h(u, v, θ) be any continuous function of u, v and θ, such that ∞ ∞ for any sequences {xm }∞ m=1 , {Tm }m=1 and {θm }m=1 , converging to x̃, T̃ and θ̃, respectively, for any x̃ ∈ X, T̃ ∈ T and θ̃ ∈ Ω, the sequence {h (U (xm , Tm ), V (xm , Tm ), θm )}∞ m=1 is uniformly integrable. Then, as m → ∞, h  i E [h (U (xm , Tm ), V (xm , Tm ), θm )] → E h U (x̃, T̃ ), V (x̃, T̃ ), θ̃ . (3.5) Corollary 6 As in Delattre et al. (2013), consider the function   u , h(u, v) = exp ψ 1 + ξv (3.6) ∞ where ψ ∈ R and ξ ∈ R+ . Then, for any sequences {xm }∞ m=1 and {Tm }m=1 converging to x̃ and T̃ , 5 for any x̃ ∈ X and T̃ ∈ T, and for k ≥ 1, as m → ∞. h  ik E [h (U (xm , Tm ), V (xm , Tm ))]k → E h U (x̃, T̃ ), V (x̃, T̃ ) , (3.7) The proof of the above corollary only entails proving uniform integrability of {h (U (xm , Tm ), V (xm , Tm ))}∞ m=1 , which simply follows from the proof of Lemma 1 of Delattre et al. (2013). Note that in our case, the Kullback-Leibler distance and Fisher’s information are expectations of functions of the form h(u, v, θ), continuous in u, v and θ. Assumption (H6′ ), the upper bounds provided in Delattre et al. 
(2013), Corollary 6, and compactness of Ω, can be used to easily verify uniform integrability of the relevant sequences. It follows that in our situation the Kullback-Leibler distance, which we now denote by Kx,T (θ0 , θ) (or Kx,T (θ, θ0 )) to emphasize dependence on x, T and θ are continuous in θ, x and T . Similarly, the elements of the Fisher’s information matrix Ix,T (θ) are continuous in θ, x and T . For x = xk and T = Tk , we denote the Kullback-Leibler distance and the Fisher’s information as Kk (θ0 , θ) (Kk (θ, θ0 )) and Ik (θ), respectively. Continuity of Kx,T (θ0 , θ) (or Kx,T (θ, θ0 )) and Ix,T (θ0 ) with respect to x and T ensures that as xk → x∞ and Tk → T ∞ , Kxk ,Tk (θ0 , θ) → Kx∞ ,T ∞ (θ0 , θ) = K(θ0 , θ), say. Similarly, Kxk ,Tk (θ, θ0 ) → K(θ, θ0 ) and Ixk ,Tk (θ) → Ix∞ ,T ∞ (θ) = I(θ), say. Since X ∞ and T ∞ are contained in the respective compact sets, the limits K(θ0 , θ), K(θ, θ0 ) and I(θ) are well-defined Kullback-Leibler divergences and Fisher’s information, respectively. From the above limits, it follow that for any θ ∈ Ω, Pn k=1 Kk (θ0 , θ) = K(θ0 , θ); (3.8) lim n→∞ Pn n k=1 Kk (θ, θ0 ) = K(θ, θ0 ); (3.9) lim n→∞ Pnn k=1 Ik (θ) = I(θ). (3.10) lim n→∞ n We investigate consistency and asymptotic normality of M LE in our case using the results of Hoadley (1971). The limit results (3.8), (3.9) and (3.10) will play important roles in our proceedings. 3.1 Consistency and asymptotic normality of MLE in the non-iid set-up Following Hoadley (1971) we define the following: Ri (θ) = log =0 fi (Xi |θ) fi (Xi |θ0 ) if fi (Xi |θ0 ) > 0 otherwise. Ri (θ, ρ) = sup {Ri (ξ) : kξ − θk ≤ ρ} Vi (r) = sup {Ri (θ) : kθk > r} . (3.11) (3.12) (3.13) Following Hoadley (1971) we denote by ri (θ), ri (θ, ρ) and vi (r) to Pbe expectations of Ri (θ), Ri (θ, ρ) and Vi (r) under θ0 ; for any sequence {ai ; i = 1, 2, . . .} we denote ni=1 ai /n by ān . P θ0 : Hoadley (1971) proved that if the following regularity conditions are satisfied, then the MLE θ̂n → (1) Ω is a closed subset of Rd . (2) fi (Xi |θ) is an upper semicontinuous function of θ, uniformly in i, a.s. [Pθ0 ]. 6 (3) There exist ρ∗ = ρ∗ (θ) > 0, r > 0 and 0 < K ∗ < ∞ for which (i) Eθ0 [Ri (θ, ρ)]2 ≤ K ∗ , (ii) Eθ0 [Vi (r)]2 ≤ K ∗ . (4) (i) lim r̄n (θ) < 0, n→∞ 0 ≤ ρ ≤ ρ∗ ; θ 6= θ0 ; (ii) lim v̄n (r) < 0. n→∞ (5) Ri (θ, ρ) and Vi (r) are measurable functions of Xi . Actually, conditions (3) and (4) can be weakened but these are more easily applicable (see Hoadley (1971) for details). 3.1.1 Verification of the regularity conditions Since Ω is compact in our case, the first regularity condition clearly holds. For the second regularity condition, note that given Xi , fi (Xi |θ) is continuous, in fact, uniformly continuous in θ in our case, since Ω is compact. Hence, for any given ǫ > 0, there exists δi (ǫ) > 0, independent of θ, such that kθ1 − θ2 k < δi (ǫ) implies |f (Xi |θ1 ) − f (Xi |θ2 )| < ǫ. Now consider a strictly positive function δx,T (ǫ), continuous in x ∈ X and T ∈ T, such that δxi ,Ti (ǫ) = δi (ǫ). Let δ(ǫ) = inf δx,T (ǫ). Since X and T are compact, it follows that δ(ǫ) > 0. Now it holds that x∈X,T ∈T kθ1 − θ2 k < δ(ǫ) implies |f (Xi |θ1 ) − f (Xi |θ2 )| < ǫ, for all i. Hence, the second regularity condition is satisfied. Let us now focus attention on condition (3)(i). It follows from (2.1) that       2  Ui 1 2 ω2 |ω 2 − ω02 | ω02 1 2 + |ω0 − ω | log 1 + 2 + 1+ 2 Ri (θ) ≤ 2 ω2 2 ω ω0 1 + ω02 Vi     2 2 2 µ 0 Ui |ω − ω | Ui µ 0 Vi − 1+ 0 2 . 
(3.14) + |µ| + 2 2 ω 1 + ω 0 Vi 2(1 + ω0 Vi ) 1 + ω02 Vi Let us denote {ξ ∈ R × R+ : kξ − θk ≤ ρ} by S(ρ, θ). Here 0 < ρ < ρ∗ (θ), and ρ∗ (θ) is so small that S(ρ, θ) ⊂ Ω for all ρ ∈ (0, ρ∗ (θ)). It then follows from (3.14) that     |ω 2 − ω02 | 1 ω2 sup Ri (ξ) ≤ sup log 1 + 2 + ω2 ω0 ξ∈S(ρ,θ) (µ,ω 2 )∈S(ρ,θ) 2 2     1 2 Ui ω02 2 × sup + ω − ω 1 + 0 ω2 1 + ω02 Vi (µ,ω 2 )∈S(ρ,θ) 2    |ω02 − ω 2 | Ui × sup |µ| 1 + + ω2 1 + ω02 Vi (µ,ω 2 )∈S(ρ,θ) + µ 0 Ui µ20 Vi . + 2 2(1 + ω0 Vi ) 1 + ω02 Vi (3.15) The supremums in (3.15) are finite due to compactness of S(ρ, θ). Since under Pθ0 , Ui /(1 + ω02 Vi ) Vi admits moments of all orders and 0 < Ii (ω02 ) = 1+ω < ω12 (see Delattre et al. (2013)), it follows 2 0 Vi 0 from (3.15) that Eθ0 [Ri (θ, ρ)]2 ≤ Ki (θ), (3.16) where Ki (θ) = K(xi , Ti , θ), with K(x, T, θ) being a continuous function of (x, T, θ), continuity being a consequence of Theorem 5. Since because of compactness of X, T and Ω, Ki (θ) ≤ sup x∈X,T ∈T,θ∈Ω 7 K(x, T, θ) < ∞, regularity condition (3)(i) follows. To verify condition (3)(ii), first note that we can choose r > 0 such that kθ0 k < r and {θ ∈ Ω : kθk > r} = 6 ∅. It then follows that sup Ri (θ) ≤ sup Ri (θ) for every i ≥ 1. The right hand side θ∈Ω {θ∈Ω:kθk>r} is bounded by the same expression as the right hand side of (3.15), with only S(θ, ρ) replaced with Ω. The rest of the verification follows in the same way as verification of (3)(i). To verify condition (4)(i) note that by (3.8) Pn i=1 Ki (θ0 , θ) = −K(θ0 , θ) < 0 for θ 6= θ0 . (3.17) lim r̄n = − lim n→∞ n→∞ n In other words, (4)(i) is satisfied. To verify (4)(ii) we first show that lim v̄n exists for suitably chosen r > 0. Then we prove n→∞ that the limit is negative. To see that the limit exists, we first write Rx,T (θ) = −Kx,T (θ0 , θ). Clearly, Ri (θ) = Rxi ,Ti (θ). Using the arguments provided in the course of verification of (3)(ii), and the moment existence result of Delattre et al. (2013), yield sup Eθ0 i≥1 " sup #2 Ri (θ) {θ∈Ω:kθk>r} ≤ sup Eθ0 i≥1  2 sup Ri (θ) ≤ θ∈Ω sup x∈X,T ∈T K1 (x, T ), (3.18) where K1 (x, T ) is a continuous function of x and T . That K1 (x, T ) is continuous in x and T follows from Theorem 5; the required  uniform integrability follows due to finiteness of the moments of the random variable U (x, T )/ 1 + ω 2 V (x, T ) , for every x ∈ X and T ∈ T, and compactness of X and T. Now, because of compactness(of X and T it also ) follows that the right hand side of (3.18) is ∞ finite, proving uniform integrability of " # vx,T = Eθ0 sup {θ∈Ω:kθk>r} sup Ri (θ) {θ∈Ω:kθk>r} . Hence, it follows from Theorem 5 that i=1 Rx,T (θ) is continuous in x and T . Since xi → x∞ and Ti → T ∞ , v̄i = v̄xi ,Ti → v̄x∞ ,T ∞ . Since v̄x,T is well-defined for every x ∈ X, T ∈ T, and since x∞ ∈ X, T ∞ ∈ T, v̄x∞ ,T ∞ is also well-defined. It follows that lim v̄n = v̄x∞ ,T ∞ n→∞ exists. To show that the limit lim v̄n is negative, let us first re-write Vi (r) as n→∞  fi (Xi |θ0 ) fi (Xi |θ) {θ∈Ω:kθk>r}   fi (Xi |θ0 ) ≤− inf log fi (Xi |θ) {θ∈Ω:kθk≥r} fi (Xi |θ0 ) , = − log fi (Xi |θi∗ (Xi )) Vi (r) = −  inf log (3.19) for some θi∗ (Xi ), depending upon Xi , contained in Ωr = Ω ∩ {θ : kθk ≥ r}. Recall that we chose r > 0 such that kθ0 k < r and Ω ∩ {θ : kθk > r} = 6 ∅, so that θi∗ (Xi ) 6= θ0 as kθi∗ (Xi )k ≥ r > kθ0 k for all Xi . ∗ It is important to observe that θi (Xi ) can not be a one-to-one function of Xi ≡ (Ui , Vi ). 
To see this, first observe that for any given constant c, the equation log fi (Xi |θ0 ) − log fi (Xi |θ) = c, equivalently, the equation log fi (Ui , Vi |θ0 ) − log fi (Ui , Vi |θ) = c, admits infiniteh number of solutions in (Ui , Vi ), for any i fi (Xi |θ0 ) fi (Xi |θ0 ) 2 ∗ given θ = (µ, ω ). Hence, for θi (Xi ) = ϕ such that inf log fi (Xi |θ) = log fi (Xi |ϕ) = c, there {θ:kθk≥r} 8 exist infinitely many values of (Ui , Vi ) with the same infimum c for the same value ϕ, thereby proving that θi∗ (Xi ) is a many-to-one function of Xi . A consequence of thishis non-degeneracy i of the conditional fi (Xi |θ0 ) = Ki (θ0 , θi∗ (Xi )) distribution of Xi , given θi∗ (Xi ), which ensures that EXi |θi∗ (Xi ),θ0 log fi (X ∗ i |θi (Xi )) is well-defined and strictly positive, since θi∗ (Xi ) 6= θ0 . Given the above arguments, now note that,     fi (Xi |θ0 ) fi (Xi |θ0 ) Eθ0 log = Eθi∗ (Xi )|θ0 EXi |θi∗ (Xi )=ϕi ,θ0 log fi (Xi |θi∗ (Xi )) fi (Xi |θi∗ (Xi ) = ϕi ) = Eθi∗ (Xi )|θ0 [Ki (θ0 , ϕi )]   ≥ Eθi∗ (Xi )|θ0 inf Ki (θ0 , ϕi ) ϕi ∈Ωr = Eθi∗ (Xi )|θ0 [Ki (θ0 , ϕ∗i )] = Ki (θ0 , ϕ∗i ), (3.20) where ϕ∗i ∈ Ωr is where the infimum of Ki (θ0 , ϕi ) is achieved. Since ϕ∗i is independent of Xi , the last step (3.20) follows. Hence, Eθ0 Vi (r) ≤ −Ki (θ0 , ϕ∗i ) ≤ − inf x∈X,T ∈T,θ∈Ωr Kx,T (θ0 , ϕ) = −Kx∗ ,T ∗ (θ0 , ϕ∗ ), (3.21) for some x∗ ∈ X, T ∗ ∈ T and ϕ∗ ∈ Ωr . Since Kx∗ ,T ∗ (θ0 , ϕ∗ ) is a well-defined Kullback-Leibler distance, it is strictly positive since ϕ∗ 6= θ0 . Hence, it follows from (3.21) and the fact that Kx∗ ,T ∗ (θ0 , ϕ∗ ) > 0, that Pn i=1 Eθ0 Vi (r) lim v̄n = lim n→∞ n→∞ Pn n ∗ i=1 Ki (θ0 , ϕi ) ≤ − lim n→∞ n ≤ −Kx∗ ,T ∗ (θ0 , ϕ∗ ) < 0. Thus, condition (4)(ii) holds. Regularity condition (5) holds because for any θ ∈ Ω, Ri (θ) is an almost surely continuous function of Xi rendering it measurable for all θ ∈ Ω, and due to the fact that supremums of measurable functions are measurable. In other words, in the non-iid set-up in the non-iid SDE framework, the following theorem holds: Theorem 7 Assume the non-iid SDE setup and conditions (H1′ ) (i) and (H2′ ) – (H6′ ). Then it holds P that θ̂n → θ0 . 3.2 Asymptotic normality of MLE in the non-iid set-up ′ (x, θ) = Let ζi (x, θ) = log fi (x|θ); also, let ζi′ (x, θ) be the d × 1 vector with j-th component ζi,j 2 ∂ ′′ (x, θ) = and let ζi′′ (x, θ) be the d × d matrix with (j, k)-th element ζi,jk ∂θj ∂θk ζi (x, θ). For proving asymptotic normality in the non-iid framework, Hoadley (1971) assumed the following regularity conditions: ∂ ∂θj ζi (x, θ), (1) Ω is an open subset of Rd . P (2) θ̂n → θ0 . (3) ζi′ (Xi , θ) and ζi′′ (Xi , θ) exist a.s. [Pθ0 ]. 9 (4) ζi′′ (Xi , θ) is a continuous function of θ, uniformly in i, a.s. [Pθ0 ], and is a measurable function of Xi . (5) Eθ [ζi′ (Xi , θ)] = 0 for i = 1, 2, . . ..   (6) Ii (θ) = Eθ ζi′ (Xi , θ)ζi′ (Xi , θ)T = −Eθ [ζi′′ (Xi , θ)], where for any vector y, y T denotes the transpose of y. (7) Īn (θ) → Ī(θ) as n → ∞ and Ī(θ) is positive definite. ′ (X , θ ) (8) Eθ0 ζi,j i 0 3 ≤ K2 , for some 0 < K2 < ∞. (9) There exist ǫ > 0 and random variables Bi,jk (Xi ) such that o n ′′ (X , ξ) : kξ − θ k ≤ ǫ ≤ B (i) sup ζi,jk 0 i i,jk (Xi ). (ii) Eθ0 |Bi,jk (Xi )|1+δ ≤ K2 , for some δ > 0. Condition (8) can be weakened but is relatively easy to handle. Under the above regularity conditions, Hoadley (1971) prove that   √  L n θ̂n − θ0 → N 0, Ī −1 (θ0 ) . (3.22) 3.2.1 Validation of asymptotic normality of M LE in the non-iid SDE set-up Note that condition (1) requires the parameter space Ω to be an open subset. 
However, the proof of asymptotic normality presented in Hoadley (1971) continues to hold for compact Ω, since for any open cover of Ω we can extract a finite subcover, consisting of open sets. Conditions (2), (3), (5), (6) are clearly valid in our case. Condition (4) can be verified in exactly the same way as condition (2) of Section 3.1 is verified; measurability of ζi′′ (Xi , θ) follows due its continuity with respect to Xi . Condition (7) simply followsfrom (3.10) and condition (8) holds due to finiteness of the moments of the random variable U (x, T )/ 1 + ω 2 V (x, T ) , for every x ∈ X, T ∈ T, and compactness of X and T. ′′ (X , θ) for j, k = 1, 2 are given by ∂ 2 log f (X |θ) = For conditions (9)(i) and (9)(ii) note that ζi,jk i i ∂µ2  ∂2 ∂2 1 2 2 2 2 2 −Ii (ω ), ∂µ∂ω2 log f (Xi |θ) = −γi (θ)Ii (ω ), and ∂ω2 ∂ω2 log f (Xi |θ) = − 2 2γi (θ)Ii (ω ) − Ii (ω 2 ) . Also since Delattre et al. (2013) establish   Ui |µ̄| ω02 (3.23) sup |γi (θ)| ≤ 2+ 2 + 2, 2 ω ω 1 + ω 0 Vi θ∈Ω it follows from (3.23), the fact that 0 < Ii (ω 2 ) < 1/ω 2 , finiteness of moments of all orders of the previously mentioned derivatives for every x ∈ X, T ∈ T, and compactness of X and T, that conditions (9)(i) and (9)(ii) hold. In other words, in our non-iid SDE case we have the following theorem on asymptotic normality. Theorem 8 Assume the non-iid SDE setup and conditions (H1′ ) (i) and (H2′ ) – (H6′ ). Then (3.22) holds. 4 Summary and conclusion In SDE based random effects model framework, Delattre et al. (2013) considered the linearity assumption in the drift function given by b(x, φi ) = φi b(x), where φi are supposed to be Gaussian random variables with mean µ and variance ω 2 , and obtained a closed form expression of the likelihood of the above parameters. Assuming the iid set-up, they proved convergence in probability and asymptotic 10 normality of the maximum likelihood estimator of the parameters. In this paper, we proved strong consistency, rather than weak consistency, and asymptotic normality of the maximum likelihood estimator under weaker assumptions in the iid set-up. Moreover, we extended the model of Delattre et al. (2013) to the independent, but non-identical set-up, proving weak consistency and asymptotic normality. In Maitra and Bhattacharya (2015), we extended our classical asymptotic theory to the Bayesian framework, for both iid and non-iid situations. Specifically, we proved posterior consistency and asymptotic posterior normality, for both iid and non-iid set-ups. There we have also illustrated our theoretical development with several examples and simulation studies. It is to be noted that those examples, illustrating consistency and inconsistency of the associated Bayes estimators, remains valid in the classical paradigm with the Bayes estimators replaced by the maximum likelihood estimators. Acknowledgments Sincere gratitude goes to the reviewer whose suggestions have led to much improved presentation of our article. The first author gratefully acknowledges her CSIR Fellowship, Govt. of India. Appendix Proof of Theorem 5. We can decompose (3.2) as U (x, T ) = Z T b(X(s, x)) φ(x)b(X(s, x))ds σ 2 (X(s, x)) Z T 0 + 0 = φ(x) + Z 0 Z T b(X(s, x)) (dX(s, x) − φ(x)b(X(s, x))ds) σ 2 (X(s, x)) T b2 (X(s, x)) ds σ 2 (X(s, x)) (4.1) b(X(s, x)) dW (s) σ(X(s, x)) (4.2) 0 = φ(x)U (1) (x, T ) + U (2) (x, T ), (say), (4.3) where W (s) is the standard Weiner process defined on [0, T ]. Given the process X(·, ·), continuity of (4.1) with respect to x and T can be seen as follows. 
Let 11 T1 , T2 ∈ T; without loss of generality, let T2 > T1 . Also, let x1 , x2 ∈ X. Then, U (1) (x1 , T1 ) − U (1) (x2 , T2 ) Z T2 2 Z T1 2 b (X(s, x2 )) b (X(s, x1 )) ds − ds = 2 σ (X(s, x1 )) σ 2 (X(s, x2 )) 0 0  Z T1  2 b (X(s, x1 )) b2 (X(s, x2 )) = − ds σ 2 (X(s, x1 )) σ 2 (X(s, x2 )) 0 Z T2 2 b (X(s, x2 )) − ds 2 T1 σ (X(s, x2 )) Z T1 2 b2 (X(s, x2 )) b (X(s, x1 )) ds − ≤ σ 2 (X(s, x1 )) σ 2 (X(s, x2 )) 0 Z T2 2 b (X(s, x2 )) + ds σ 2 (X(s, x2 )) T1 ≤ T1 sup s∈[0,T1 ] b2 (X(s, x2 )) b2 (X(s, x1 )) − σ 2 (X(s, x1 )) σ 2 (X(s, x2 )) + |T2 − T1 | sup s∈[T1 ,T2 ],x∈X ≤ Tmax b2 (X(s, x)) σ 2 (X(s, x)) b2 (X(s∗ , x1 )) b2 (X(s∗ , x2 )) + C2 |T2 − T1 |, − σ 2 (X(s∗ , x1 )) σ 2 (X(s∗ , x2 )) (4.4) (4.5) (4.6) where Tmax = sup T; s∗ ∈ [0, T1 ] is such that the supremum in (4.4) is attained. That there exists such s∗ is clear due to continuity of the functions in s and compactness of the interval. In (4.6), C2 is the 2 upper bound for the function σb 2(X(s,x)) (X(s,x)) . Since X(·, x) is continuous in x, due to continuity of b(·) and σ(·), for any ǫ > 0, one can choose δ1 (ǫ) > 0 such that |x1 − x2 | < δ1 (ǫ) implies ǫ b2 (X(s∗ , x2 )) b2 (X(s∗ , x1 )) < − , σ 2 (X(s∗ , x1 )) σ 2 (X(s∗ , x2 )) 2Tmax so that the first term in (4.6) is less than ǫ/2. Choosing δ2 (ǫ) = 2Cǫ 2 yields that if |T2 − T1 | < δ2 (ǫ), then the second term in (4.6) is less than ǫ/2. This shows continuity of U (1) (x, T ) with respect to x and ∞ T for given X(·, ·). It follows that, for sequences {xm }∞ m=1 , {Tm }m=1 such that xm → x̃ and Tm → T̃ as m → ∞, L U (1) (xm , Tm ) → U (1) (x̃, T̃ ). (4.7) It is also clear that L φ(xm ) → φ(x̃). (4.8) h i2k sup E φ(xm )U (1) (xm , Tm ) < ∞, (4.9) Now note that due to assumptions (H5′ ), (H6′ ) (observing that U (1) (x, T ) = V (x, T ) for all x ∈ X and T ∈ T), and compactness of X and T, we have, for any k ≥ 1, m≥1 for all m ≥ 1, ensuring requisite uniform integrability. Hence, it follows that h i2 E φ(xm )U (1) (xm , Tm ) − φ(x̃)U (1) (x̃, T̃ ) → 0. (4.10) Let us now deal with U (2) (x, T ) given by (4.2). Letting for any set A, δA (s) = 1 if s ∈ A and 0 12 otherwise, be the indicator function, we define 2 Z Tmax  b(X(s, x̃)) b(X(s, xm )) δ (s) − δ (s) ds Q(xm , Tm ) = σ(X(s, xm )) [0,Tm ] σ(X(s, x̃)) [0,T̃ ] 0 Z Tmax 2 Z Tmax 2 b (X(s, xm )) b (X(s, x̃)) = δ[0,Tm ] (s)ds + δ (s)ds 2 σ (X(s, xm )) σ 2 (X(s, x̃)) [0,T̃ ] 0 0 Z Tmax b(X(s, xm )) b(X(s, x̃)) δ (s)ds −2 σ(X(s, xm )) σ(X(s, x̃)) [0,min{Tm ,T̃ }] 0 Z Tm 2 Z T̃ 2 b (X(s, xm )) b (X(s, x̃)) = ds + ds 2 2 σ (X(s, xm )) 0 0 σ (X(s, x̃)) Z min{Tm ,T̃ } b(X(s, xm )) b(X(s, x̃)) −2 ds. σ(X(s, xm )) σ(X(s, x̃)) 0 (4.11) It follows in the same way as in the proof of continuity of U (1) (·, ·) that the first and the third integrals in (4.11) associated with the function Q(·, ·), are continuous at (x̃, T̃ ). As a result, for given L X(·, ·), Q(xm , Tm ) → 0 as m → ∞. It follows that Q(xm , Tm ) → 0. Now note that " Z 2 Z Tmax 2 2 # Tmax 2 b (X(s, x̃)) b (X(s, xm )) Q(xm , Tm ) ≤ 2 , ds + ds σ 2 (X(s, xm )) σ 2 (X(s, x̃)) 0 0 so that, for any k ≥ 2, E [Q(xm , Tm )]k ≤ 22k E "Z Tmax 0 b2 (X(s, xm )) ds σ 2 (X(s, xm )) 2k + Z Tmax 0 b2 (X(s, x̃)) ds σ 2 (X(s, x̃)) 2k # . (4.12) Since, by assumption (H6′ ) moments of all orders of V (x, T ) are finite, for any x ∈ X and T ∈ T, and since X and T are compact, it follows that sup E [Q(xm , Tm )]k < ∞, m≥1 guaranteeing uniform integrabiility. Hence, E [Q(xm , Tm )] → 0, as m → ∞. 
(4.13) By Itô isometry (see, for example, Øksendal (2003)) it then follows that E Z Tmax 0 b(X(s, xm )) δ (s)dW (s) − σ(X(s, xm )) [0,Tm ] Z Tmax 0 That is, E "Z Tm 0 b(X(s, xm )) dW (s) − σ(X(s, xm )) It follows that Z T̃ 0 2 b(X(s, x̃)) δ (s)dW (s) → 0. σ(X(s, x̃)) [0,T̃ ] (4.14) #2 b(X(s, x̃)) dW (s) → 0. σ(X(s, x̃)) (4.15) L U (2) (xm , Tm ) → U (2) (x̃, T̃ ). (4.16) Using the Burkholder-Davis-Gundy inequality (see, for example, Delattre et al. (2013)) we obtain h E U (2) Z i2k (xm , Tm ) ≤ Ck E 0 13 Tm b2 (X(s, xm )) ds σ 2 (X(s, xm )) k . (4.17)  2k < Again, due to assumption (H6′ ) and compactness of X and T it follows that sup E U (2) (xm , Tm ) m≥1 ∞, so that uniform integrability is assured. It follows that h i2 E U (2) (xm , Tm ) − U (2) (x̃, T̃ ) → 0. (4.18) From (4.10) and (4.18) it follows, using the Cauchy-Schwartz inequality, that h i2 E U (xm , Tm ) − U (x̃, T̃ ) h i2 h i2 ≤ E φ(xm )U (1) (xm , Tm ) − φ(x̃)U (1) (x̃, T̃ ) + E U (2) (xm , Tm ) − U (2) (x̃, T̃ ) r h i h +2 E φ(xm )U (1) (xm , Tm ) − φ(x̃)U (1) (x̃, T̃ ) → 0. 2 i2 E U (2) (xm , Tm ) − U (2) (x̃, T̃ ) (4.19) Since V (x, T ) = U (1) (x, T ), due to (4.7) and assumption (H6′ ) (the latter ensuring uniform integrability), it easily follows that h i2 E V (xm , Tm ) − V (x̃, T̃ ) → 0. (4.20) Let G(x, T ) = (U (x, T ), V (x, T )). Then it follows from (4.19) and (4.20), that L G(xm , Tm ) → G(x̃, T̃ ). (4.21)   L (U (xm , Tm ), V (xm , Tm )) → U (x̃, T̃ ), V (x̃, T̃ ) . (4.22) That is, Consider also a sequence {θm }∞ m=1 in Ω, converging to θ̃ ∈ Ω. Then, for any function h(u, v, θ), which is continuous in u, v and θ, and such that the sequence {h(U (xm , Tm ), V (xm , Tm ), θm )}∞ m=1 is uniformly integrable, we must have h i E [h(U (xm , Tm ), V (xm , Tm ), θm )] → E h(U (x̃, T̃ ), V (x̃, T̃ ), θ̃) , (4.23) ensuring continuity of E [h(U (x, T ), V (x, T ), θ)] with respect to x, T and θ. References Delattre, M., Genon-Catalot, V., and Samson, A. (2013). Maximum Likelihood Estimation for Stochastic Differential Equations with Random Effects. Scandinavian Journal of Statistics, 40, 322–343. Hoadley, B. (1971). Asymptotic Properties of Maximum Likelihood Estimators for the Independent not Identically Distributed Case. The Annals of Mathematical Statistics, 42, 1977–1991. Maitra, T. and Bhattacharya, S. (2015). On Bayesian Asymptotics in Stochastic Differential Equations with Random Effects. Statistics and Probability Letters, 103, 148–159. Also available at “http://arxiv.org/abs/1407.3971”. Øksendal, B. (2003). Stochastic Differential Equations. Springer-Verlag, New York. Schervish, M. J. (1995). Theory of Statistics. Springer-Verlag, New York. 14
Cogn Comput manuscript No. (will be inserted by the editor) Anatomical Pattern Analysis for decoding visual stimuli in human brains arXiv:1710.02113v1 [stat.ML] 5 Oct 2017 Muhammad Yousefnezhad · Daoqiang Zhang Received: 31 Mar 2017 / Accepted: 04 Oct 2017 Abstract Background: A universal unanswered question in neuroscience and machine learning is whether computers can decode the patterns of the human brain. Multi-Voxels Pattern Analysis (MVPA) is a critical tool for addressing this question. However, there are two challenges in the previous MVPA methods, which include decreasing sparsity and noise in the extracted features and increasing the performance of prediction. Methods: In overcoming mentioned challenges, this paper proposes Anatomical Pattern Analysis (APA) for decoding visual stimuli in the human brain. This framework develops a novel anatomical feature extraction method and a new imbalance AdaBoost algorithm for binary classification. Further, it utilizes an Error-Correcting Output Codes (ECOC) method for multiclass prediction. APA can automatically detect active regions for each category of the visual stimuli. Moreover, it enables us to combine homogeneous datasets for applying advanced classification. Results and Conclusions: Experimental studies on 4 visual categories (words, consonants, objects and scrambled photos) demonstrate that the proposed approach achieves superior performance to state-of-the-art methods. Keywords brain decoding · multi-voxel pattern analysis · anatomical feature extraction · visual object recognition 1 Introduction In order to decode visual stimuli in the human brain, MultiVoxel Pattern Analysis (MVPA) technique [31, 39, 40] must apply machine learning methods to task-based functional The authors are with the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China. E-mail: [email protected]; [email protected] Magnetic Resonance Imaging (fMRI) datasets. Indeed, analyzing the patterns of visual objects is one of the most interesting topics in MVPA, which can enable us to understand how brain stores and processes the visual stimuli [13, 32]. Technically, there are two challenges in the previous studies. As the first issue, trained features are sparse and noisy because most of the previous studies in whole-brain analysis directly utilized raw voxels for predicting the stimuli [13, 32, 12]. As the second challenge, improving the performance of prediction is so hard because task-based fMRI datasets can be considered as the imbalanced classification problems. For instance, consider collected data with 10 same size categories. Since this dataset is imbalance for (one-versus-all) binary classification, most of the classical algorithms cannot provide acceptable performance [13, 9, 25]. As the main contributions, this paper proposes Anatomical Pattern Analysis (APA) for decoding visual stimuli in the human brain. To generate a normalized view, APA automatically detects the active regions and then extracts features based on the brain anatomical structures. Indeed, the normalized view can enable us to combine homogeneous datasets, and it can decrease noise, sparsity, and error of learning. Further, this paper develops a modified version of imbalance Adapting Boosting (AdaBoost) algorithm for binary classification. This algorithm uses a supervised random sampling and penalty values, which are calculated by the correlation between different classes, for improving the performance of prediction. 
This binary classification will be used in a one-versus-all ECOC method as a multiclass approach for classifying the categories of the brain response. The rest of this paper is organized as follows: The related works are presented in Section 2. This paper introduces the proposed method in Section 3. Experimental results are reported in Section 4; and finally, this paper presents conclusion and pointed out some future works in Section 5. 2 2 Related Works There are three different types of studies for decoding stimuli in the human brain. Pioneer studies just focused on recognizing special regions of the human brain, such as inanimate objects [26], faces [20], visually illustration of words [6], body parts [24], and visual objects [14]. Although they proved that different stimuli can provide distinctive responses in the brain regions, they cannot find the deterministic locations (or patterns) related to each category of stimuli. The next group of studies developed correlation techniques in order to understand the similarity (or difference) between distinctive stimuli. Haxby et al. employed brain patterns located in Fusiform Face Area (FFA) and Parahippocampal Place Area (PPA) in order to analyze correlations between different categories of visual stimuli, i.e. gray-scale images of faces, houses, cats, bottles, scissors, shoes, chairs, and scrambled (nonsense) photos [14]. Kamitani and Tong studied the correlations of low-level visual features in the visual cortex (V1V4) [19]. In similar studies, Haynes et al. analyzed distinctive mental states [16] and more abstract brain patterns such as intentions [17]. Kriegeskorte et al. proposed Representational Similarity Analysis (RSA) in order to evaluate the similarities (or differences) among distinctive brain states [22]. Connolly et al. utilized RSA in order to compare the correlations between human brains and monkey brains [7,8]. RSA demonstrates that the representations of each category of stimuli in distinctive brain regions have a different structure [22, 7, 8]. Rice et al. proved that not only the brain responses are different based on the categories of the stimuli but also they are correlated based on different properties of the stimuli. They extracted the properties of visual stimuli (photos of objects) and calculated the correlations between these properties and the brain responses. They separately reported the correlation matrices for different human faces and different objects (houses, chairs, etc.) [34]. The last group of studies proposed the MVPA techniques for predicting the category of visual stimuli. Cox et al. utilized linear and non-linear versions of Support Vector Machine (SVM) algorithm [9]. In order to decode the brain patterns, some studies [2, 33, 37] employed classical feature selection (ranking) techniques, such as Principal Component Analysis (PCA) [2], Linear Discriminant Analysis (LDA) [33], or Independent Component Analysis (ICA) [37], that these method are mostly used for analyzing rest-state fMRI datasets. Recent studies proved that not only these techniques cannot provide stable performance in the task-based fMRI datasets [4, 5] but also they had spatial locality issue, especially when they were used for whole brain functional analysis [5]. Norman et al. argued for using SVM and Gaussian Naive Bayes classifiers [31]. Kay et al. studied decoded orientation, position and object category from the brain activities in visual cortex [21]. Mitchell et al. 
introduced a new Muhammad Yousefnezhad, Daoqiang Zhang method in order to predict the brain activities associated with the meanings of nouns [28]. Miyawaki et al. utilized a combination of multiscale local image decoders in order to reconstruct the visual images from the brain activities [29]. In order to generalize the testing procedure for task-based fMRI datasets, Kriegeskorte et al. proved that the data in testing must have no role in the procedure of generating an MVPA model [23]. There are also some studies that focused on sparse learning techniques. Yamashita et al. developed Sparse Logistic Regression (SLR) in order to improve the performance of classification models [38]. Carroll et al. employed the Elastic Net for prediction and interpretation of distributed neural activity with sparse models [3]. Varoquaux et al. proposed a small-sample brain mapping by using sparse recovery on spatially correlated designs with randomization and clustering. Their method is applied on small sets of brain patterns for distinguishing different categories based on a one-versus-one strategy [35]. As the first modern approaches for decoding visual stimuli, Anderson and Oates applied non-linear Artificial Neural Network (ANN) on brain responses [1]. McMenamin et al. studied subsystems underlie Abstract-Category (AC) recognition and priming of objects (e.g., cat, piano) and SpecificExemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). Technically, they applied SVM on manually selected ROIs in the human brain for generating the visual stimuli predictors [27]. Mohr et al. compared four different classification methods, i.e. L1/L2 regularized SVM, the Elastic Net, and the Graph Net, for predicting different responses in the human brain. They show that L1-regularization can improve classification performance while simultaneously providing highly specific and interpretable discriminative activation patterns [30]. Osher et al. proposed a network (graph) based approach by using anatomical regions of the human brain for representing and classifying the different visual stimuli responses (faces, objects, bodies, scenes) [32]. 3 The Proposed Method Blood Oxygen Level Dependent (BOLD) signals are used in fMRI techniques for representing the neural activates. Based on hyperalignment problem in the brain decoding [13, 15, 40], quantity values of the BOLD signals in the same experiment for the two subjects are usually different. Therefore, MVPA techniques use the correlation between different voxels as the pattern of the brain response [32, 12]. As depicted in Figure 1, each fMRI experiment includes a set of sessions (time series of 3D images), which can be captured by different subjects or just repeating the imaging procedure with a unique subject. Technically, each session can be partitioned into a set of visual stimuli categories. Indeed, an Anatomical Pattern Analysis for decoding visual stimuli in human brains 3 Fig. 1 Anatomical Pattern Analysis (APA) framework independent category denotes a set of homogeneous conditions, which are generated by using the same type of photos as the visual stimuli. For instance, if a subject watches 6 photos of cats and 5 photos of houses during a unique session, this 4D image includes 2 different categories and 11 conditions. 
(2) This paper employs maximum functional activities in each voxel as the k-th condition: 3.1 Feature Extraction Preprocessed fMRI time series collected for U sessions can  b be defined by F(`) = fmn ∈ RT ×V , ` = 1:U, m = 1:T , n = 1:Vb , where T is the number of time points, Vb denotes the number of voxels in the original space, and fmn ∈ R defines the functional activity for the `-th session in m-th time point and n-th voxel. Indeed, 3D images (tensors) in fMRI time series are considered vectorized for simplicity [19]. In addition, onsets (or time series) in the `-th session is defined as follows:  S(`) ∈ RT ×P = S(`,1) , S(`,2) , . . . , S(`,k) , . . . , S(`,Q) also defined as follows: n o n (`,k) b (`) (`) b (`,k) = cb(`,k) C ∈ Rt ×V = fm. fm. ∈ F(`) , m ∈ S(`,k) , mn o S(`,k) = t (`,k) . (1) n t (`,k) o> (`,k)  (`,k) e (`,k) ∈ RVb = max b (`,k) , n ∈ [1, Vb ] . C cbmn cbmn ∈ C m=1 (3) In order to extract active voxels and then automatically define Region of Interests (ROIs), F(`) can be also written as a general linear model:  > b (`) + ζ (`) , (4) F(`) = D(`) B n o (`) where D(`) = dmn ∈ RT ×P denotes the design matrix, ζ (`) n o (`) b (`) = βbmn is the noise (error of estimation), and also B ∈ RV ×P denotes the set of correlations for `-th session. Here, (`) (`) b d.n ∈ RT and βb.n ∈ RV respectively denote the design vector and voxel correlations belonged to n-th category of stimuli. Now, design matrix can be calculated by: b Here, P denotes the number of categories of visual stimulus, (`,k) Q ≥ P is the number of stimuli, the vector S(`,k) ∈ Rt denotes the onsets belonged to k-th condition, and t (`,k) is the number of time points for this condition, where t (`,k)  T . By considering (1), all time points for k-th condition can be D(`) = S(`) ∗ H (5) 4 Muhammad Yousefnezhad, Daoqiang Zhang where H is the Hemodynamic Response Function (HRF) signal [12], and ∗ denotes the convolution operator. In order to solve (4), this paper uses Generalized Least Squares (GLS) [12] approach as follows: b (`) = B  D  (`) > Σ  (`) −1 D (`) −1 D  (`) > Σ  (`) −1 (`) > , F (6)  where Σ (`) is the covariance matrix of the noise (var ζ (`) = Σ (`) σ 2 6= Iσ 2 ) [13, 12, 41]. Further, activated voxels can be defined as follows by using the positive values of the correlation matrix: ( (`) (`)  (`) βbmn βbmn > 0 (`) V ×P , (7) B = βmn ∈ R = 0 otherwise (`) b (`) denotes the estimated correlation matrix where βbmn ∈ B (`) in (5). Indeed, non-zero elements in β.n are all activated voxels belonged to n-th category of stimuli. These activated voxel correlations can be applied to the conditions as follows: b e (`,k) ◦ β.n(`) C̄(`,k) ∈ RV = C (8) where ◦ denotes Hadamard product. Here, the k-th condition must be belonged to the n-th category of stimuli. Since mapping whole of fMRI time series to standard space decreases the performance of the final results, most of the previous studies use the original images instead of the standard version. By considering (8) for each condition, this paper enables to map brain activities to a standard space. This mapping can provide normalized view for combing homogeneous datasets. For registering (8) to standard space, this paper utilizes the fMRI Linear Image Registration Tool (FLIRT) algorithm [18], which minimizes the following cost function:   b τ (`,k) ∈ RV ×V = arg min NMI C̄(`,k) , R (9) where R denotes the reference image, the function NMI is the Normalized Mutual Information between two images, τ (`,k) denotes the transformation matrix. 
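The construction above can be made concrete in a few lines of NumPy. The following is a minimal sketch of the design-matrix convolution in (5) and the GLS estimate in (6); the double-gamma HRF, the toy onsets, and the identity-covariance (OLS) fallback are illustrative assumptions introduced here, not the SPM-based pipeline used later in the paper.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr, duration=32.0):
    """Canonical double-gamma HRF sampled at the scanner TR (seconds); an assumed form of H."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # positive lobe minus undershoot
    return h / h.max()

def design_matrix(onsets, n_scans, tr):
    """Convolve per-category onset indicators S (T x P) with the HRF, as in D = S * H."""
    h = double_gamma_hrf(tr)
    D = np.zeros((n_scans, len(onsets)))
    for j, idx in enumerate(onsets):               # onsets[j]: scan indices of category j
        s = np.zeros(n_scans)
        s[idx] = 1.0
        D[:, j] = np.convolve(s, h)[:n_scans]
    return D

def gls_betas(F, D, Sigma=None):
    """B-hat = (D' Sigma^-1 D)^-1 D' Sigma^-1 F; Sigma=None falls back to ordinary least squares."""
    if Sigma is None:
        Sigma = np.eye(D.shape[0])
    W = np.linalg.inv(Sigma)
    return np.linalg.solve(D.T @ W @ D, D.T @ W @ F)   # (categories x voxels), i.e. the transpose of B-hat

# Toy usage: 100 scans, 2 stimulus categories, 500 synthetic voxels.
rng = np.random.default_rng(0)
onsets = [np.arange(5, 100, 20), np.arange(15, 100, 20)]
D = design_matrix(onsets, n_scans=100, tr=2.0)
F = D @ rng.normal(size=(2, 500)) + 0.1 * rng.normal(size=(100, 500))
B = gls_betas(F, D)
```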
The performance of (9) will be analyzed in Section 4. Further, the final mapping can be also defined as follows: τ (`,k) :RV → RV =⇒ C(`,k) = τ (`,k) C̄(`,k) b (10)  (`,k) (`,k) (`,k) (`,k) (`,k) , ci ∈ R, V where C(`,k) = c1 , c2 , . . . , ci , . . . , cV denotes the number of voxels in the standard space. In order to reduce the sparsity of C(`,k) , this paper employs an anatomical atlas. Now, consider A = A1 , A2 , . . . , An , . . . AE T where E is the number of atlas regions, En=1 An = 0, / SE n=1 An = A, and An denotes the set of indexes of voxels for the n-th region. The extracted feature for `-th session in Fig. 2 The proposed AdaBoost algorithm for applying a robust binary classification k-th condition and n-th anatomical region is calculated as follows: x(`,k,n) ∈ R = 1 An (`,k) ∑ ci (11) i∈An Finally, the extracted features for each condition can be de fined by X(`,k) ∈ RE = x(`,k,1) , x(`,k,2) , . . . , x(`,k,n) , . . . , x(`,k,E) , where E < V . Here, X` ∈ RE×Q denotes the extracted features for the `-th session. In addition, whole of dataset can be defined by X ∈ RE×QU , where each column denotes the extracted features for an individual stimulus. 3.2 Binary Classification Algorithm In previous sections, we mentioned the imbalance issue in the MVPA analysis. In practice, there are two approaches in order to deal with this issue, i.e. designing an imbalance classifier, or converting the imbalance problem to an ensemble of balance classification models. Previous studies demonstrated that the performance of imbalance classifiers may not be stable, especially when we have sparsity and noise in our datasets [25, 41, 39, 40]. Since fMRI datasets mostly include noise and sparsity, this paper has chosen the ensemble approach. Technically, ensemble learning also contains two groups of solutions, i.e. bagging or boosting. While bagging generates all classifiers at the same time and then combine all of them as the final model, the boosting gradually creates each classifier in order to improve the performance of each iteration by tracing errors of previous iterations. We just have to note that ensemble learning can be used in both balance and imbalance problems. In fact, the main difference comes from the strategy of sampling. In balance problems, sampling methods are applied to the whole of datasets, whereas instances of the large class are sampled in the imbalance problems [25]. As depicted in Figure 2, this paper presents a new branch of AdaBoost algorithm in order to significantly improve the performance of the final model in fMRI analysis. In a nutshell, this algorithm firstly converts an imbalance MVPA problem to a set of balance Anatomical Pattern Analysis for decoding visual stimuli in human brains 5 Algorithm 1 The proposed binary classification algorithm e Class labels Y(m) . Input: Training set X, Output: Set of classifiers Θ (m) . Method: e = {X eS, X e L} 01. Based on Y(m) , partitioning X 02. Calculating J = int( |Xe L | ). |XS |  (1) (2) e ,X e ,...,X e (n) , . . . , X e (J) . eL = X 03. Random sampling: X L L L L (1) (1) 04. Initiate X̄ = Ȳ = 0. / 05. For (n = 1 . . . J):  e (n) = X eS, X e (n) , X̄(n) as training-set for this iteration. 06. X Tr L  (n) e = Y eS, Y e (n) , Ȳ(n) as class labels for this iteration. 07. Y Tr L ( e S or X̄(n) 1 for instances of X 08. W(n) = (n)  (n) e e e for instances of XL 1 − corr XS , XL  (n) (n) e ,Y e , W(n) as weighted decision tree. 09. θ (n) = classifier X e Tr 10 Tr Constructing X̄(n+1) as instances cannot truly trained in θ (n) . 
ε (n) = |X̄(n+1) | e (n) | |X α (n) 1 2 as error of classification.  12. = ln AdaBoost weight for the classifier θ (n) . 13. End For    (n) (n) x 14. Return Θ (m) x = sign ∑J+1 as the final model. n=1 α θ 11. Tr 1−ε (n) ε (n) problems. Then, it iteratively applies the decision tree [25] to each of these balance problems. Finally, AdaBoost is used in order to generate the final model. In the proposed method, the weight of each classifier (tree) for the final combination is generated based on the error (failed predictions) of the previous iterations for gradually improving the performance of the final model. In order to apply the binary classification, this paper randomly partitions the extracted features X into the training e and the testing set X. b As a new branch of AdaBoost set X e for training binary clasalgorithm, Algorithm 1 employs X b is utilized for estimating the performance sification. Then, X of the final model. As mentioned before, the binary classification for fMRI analysis is mostly imbalance, especially by using a one-versus-all strategy. Consequently, the number of samples in one of these binary classes is smaller than the other classes. As previously mentioned, this paper exploits this concept in order to solve the imbalance issue. Indeed, e into small Algorithm 1 firstly partitions the training data X e e XS and large XL classes (groups) based on the class labels Y(m) ∈ + 1, −1 . Here, all labels are −1 except the label of instances belong to m-th category of visual stimuli. Then, it calculates the scale J of existed elements between two classes. We have to note that int() defines the floor function. As the next step, the large class is randomly partitioned into J parts. Indeed, J is the number of balance subsets generated from the imbalance dataset. Consequently, the number of the ensemble iteration is J. In each balance subset, traine (n) is generated by all instances of the small class ing data X Tr e S , one of the partitioned parts of the large class X e L , and X (n) the instances of the previous iteration X̄ , which cannot truly be trained (the failed predictions). After that, training Fig. 3 The proposed Error-Correcting Output Codes (ECOC) approach for multiclass classification weights for the final combination (Wn ∈ [0, 1]) are calculated by using the Pearson correlation (corr(a, b) = cov(a,b) σa σb ) between training instances, where larger values increase the learning sensitivity. Indeed, these weights are always maximized for the instances of the small class and the failed instances of the previous iterations. Further, the weights of the other instances are a scale of the correlation between the large class and the small class. Therefore, these weights are updated in each iteration based on the performance of previous iterations. As the last step of each iteration, the proposed method generates a classification model (θ (n) ) and its weight (α (n) ) for the final combination. While classifier() can denote any kind of weighted classification algorithm, this paper employs a simple weighted decision tree [25] as the classification model. At the end, the final model is created by applying the AdaBoost method to the generated balance classifiers. 3.3 Multiclass Classification Algorithm In this paper, a multiclass classifier is a prediction model in order to map extracted features to thecategory of visual b → Y pred where Y pred ∈ 1, 2, . . . , P . Genstimuli, i.e. Θ : X erally, there are two techniques for applying multiclass classification. 
The first approach directly creates the classification model such as multiclass support vector machine [9] or neural network [31]. In contrast, decomposition design (indirect) uses an array of binary classifiers for solving the multiclass problems. Based on the previous discussion related to imbalance issue in fMRI datasets, this paper utilizes Error-Correcting Output Codes (ECOC) as an indirect multiclass approach in order to extend the proposed binary classifier for the multiclass prediction. As depicted in Figure 3, ECOC includes 6 Muhammad Yousefnezhad, Daoqiang Zhang Table 1 tbl:datasets Title Visual Object Recognition Word and Object Processing Multi-subject, multi-modal ID DS105 DS107 DS117 S 6 49 20 U 71 98 171 P 8 4 2 T 121 164 210 X 79 53 64 Y 79 63 61 Y 75 52 33 Scanner G 3T S 3T S 3T TR 2500 2000 2000 TE 30 28 30 S denotes the number of subject; U is the number of sessions; P denotes the number of stimulus categories; T is the number of scans in unites of TRs (Time of Repetition); X,Y, Z are the size of 3D images in the original space; Scanners include S =Siemens, and G =General Electric in 3 Tesla; TR is Time of Repetition in millisecond; TE denotes Echo Time in millisecond; Please see openfmri.org for more information. three components, i.e. base algorithms, coding matrix and decoding procedures [11]. Since this paper uses one-versusall encoding strategy, Algorithm 1 is employed as the based algorithms (Θ (m) ) in the ECOC, where it generates a binary classifier for each category of visual stimuli. In other words, each independent category of the visual stimuli is compared with the rest of categories. Consequently, the size of the coding matrix is P × P, where i-th diagonal cell of this matrix represents the positive predictions belong to the i-th category of visual stimuli and the rest of cells in this matrix determine the other categories of visual stimuli. Indeed, the number of classifiers in this strategy is exactly equal to the number of categories. As decoding stage, binary predictions, which are generated by applying the brain response to the base algorithms, are assigned to the category in the coding matrix with closest Hamming distance. In order to present an example for ECOC procedure, consider fMRI dataset with 4 categories of visual stimuli, i.e. photos of shoes, houses, bottles, and human faces. In this problem, 4 different binary classifiers must be trained in order to distinguish each category of visual stimuli versus the rest of them (one-versus-all strategy). A 4 × 4 coding matrix is also generated where each diagonal element represents the positive class of these categories (classifiers). By considering the order of the coding matrix, each prediction is assigned to the closest Hamming distance in the coding matrix. In other words, if these classifiers generate the prediction [+1, −1, −1, −1] for a testing instance, then this instance definitely belongs to the first category of visual stimuli. Similarly, the prediction [−1, +1, −1, −1] means the instance belongs to the second category, and etc. 4 Results 4.1 Datasets As depicted in Table 1, this paper employs 3 datasets, shared by openfmri.org, for running empirical studies. As the first dataset, ‘Visual Object Recognition’ (DS105) includes P = 8 categories of visual stimuli, i.e. gray-scale images of faces, houses, cats, bottles, scissors, shoes, chairs, and scrambled (nonsense) photos. 
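Before turning to the datasets in detail, the classifier of Sections 3.2 and 3.3 can be summarized in a short Python/scikit-learn sketch. The balanced-chunk ensemble below follows the spirit of Algorithm 1 and the one-versus-all ECOC decoding of Figure 3; the depth-3 trees, the mean-pattern correlation penalty, and the helper names (BalancedBoost, ecoc_predict) are simplifications introduced here, not the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class BalancedBoost:
    """Algorithm-1-style ensemble: split the large class into J balanced chunks,
    fit a weighted tree per chunk, combine with alpha = 0.5 * ln((1 - eps) / eps)."""
    def fit(self, X, y):                                    # y in {+1, -1}
        Xs, Xl = X[y == 1], X[y == -1]
        J = max(1, len(Xl) // max(1, len(Xs)))
        chunks = np.array_split(np.random.permutation(len(Xl)), J)
        self.trees, self.alphas = [], []
        for idx in chunks:
            Xtr = np.vstack([Xs, Xl[idx]])
            ytr = np.concatenate([np.ones(len(Xs)), -np.ones(len(idx))])
            # simple stand-in for the paper's correlation-based weighting of the large class
            corr = abs(np.corrcoef(Xs.mean(0), Xl[idx].mean(0))[0, 1])
            w = np.concatenate([np.ones(len(Xs)), np.full(len(idx), 1.0 - corr)])
            t = DecisionTreeClassifier(max_depth=3).fit(Xtr, ytr, sample_weight=w)
            eps = np.clip(np.mean(t.predict(Xtr) != ytr), 1e-6, 1 - 1e-6)
            self.trees.append(t)
            self.alphas.append(0.5 * np.log((1 - eps) / eps))
        return self

    def decision(self, X):
        return sum(a * t.predict(X) for a, t in zip(self.alphas, self.trees))

def ecoc_predict(X_train, y_train, X_test, n_classes):
    """One-versus-all ECOC: one binary model per category, decoded by the closest
    row of the coding matrix under Hamming distance."""
    code = 2 * np.eye(n_classes) - 1                        # row m: +1 for class m, -1 elsewhere
    votes = np.column_stack([
        np.sign(BalancedBoost().fit(X_train, np.where(y_train == m, 1, -1)).decision(X_test))
        for m in range(n_classes)])
    ham = (votes[:, None, :] != code[None, :, :]).sum(axis=2)
    return ham.argmin(axis=1)                               # predicted category index
```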
This dataset is analyzed in high-level visual stimuli as the binary predictor, by considering all categories except scrambled photos as objects, and low-level visual stimuli in the multiclass prediction. Please see [13, 9] for more information. As the second dataset, ‘Word and Object Processing’ (DS107) contains P = 4 categories of visual stimuli, i.e. words, objects, scrambles, and consonants. Please see [10] for more information. As the last data set, ‘Multi-subject, multi-modal human neuroimaging dataset’ (DS117) includes MEG and fMRI images. This paper just uses the fMRI images of this dataset. It also contains P = 2 categories of visual stimuli, i.e. human faces, and scrambles. Please see [36] for more information. These datasets are preprocessed by SPM 12 (www.fil. ion.ucl.ac.uk/spm/), i.e. slice timing, realignment, normalization, smoothing. Further, whole-brain functional alignment is applied based on [5]. Then, the beta values are calculated for each session. This paper employs the MNI 152 T1 1mm (see Figure 1.d) as the reference image (R) in (9) for registering the extracted conditions (C̄(`,k) ) to the standard space (C(`,k) ). In addition, this paper uses Talairach Atlas (contains regions) in (11) for extracting features. 4.2 Parameter Analysis The registration objective function in (9) will be analyzed in this section by using different distance metrics, i.e. Woods function (W), Correlation Ratio (CR), Joint Entropy (JE), Mutual Information (MI), and Normalized Mutual Information (NMI) [41, 18]. Figures 4.a-c demonstrate examples of brain responses to different stimuli, i.e. (a) word, (b) object, and (c) scramble. Here, gray parts show the anatomical atlas, the colored parts (red, yellow and green) define the functional activities, and also the red rectangles illustrate the error areas after registration. Indeed, these errors can be formulated as the nonzero areas in the brain image which are located in the zero area of the anatomical atlas (the area without region number). Indeed, the registration errors are mostly related to the distance metrics. Previous studies illustrated that the entropy-based metrics can provide better performance [41, 18]. The performances of objective function (9) on DS105, DS107, and DS117 datasets are analyzed in Figure 4.d by using the mentioned distance metrics. As depicted in this figure, the entropy-based metrics (JE, MI, NMI) have provided better performance in comparison with other metrics. Since NMI uses normalization for Anatomical Pattern Analysis for decoding visual stimuli in human brains (A) (B) 7 (C) (D) Fig. 4 Extracted features based on different stimuli, i.e. (A) word, (B) object, and (C) scramble. (D) The effect of different objective functions in (4) on the error of registration. (A) (B) DS105 (C) (D) DS107 (E) (F) DS117 Fig. 5 The correlation matrices: (A) raw voxels and (B) extracted features in the DS105 dataset, (C) raw voxels and (D) extracted features in the DS107 dataset, (E) raw voxels and (F) extracted features in the DS117 dataset. removing the scaling effect [41], it generated the best results among the entropy-based metrics. Therefore, this paper employs NMI as the distance metric in 9 for mapping the brain activities from the original space to the standard space. 4.3 Correlation Analysis The correlations of the extracted features will be compared with the correlations of the original voxels in this section. 
Previous studies illustrated that patterns of different AbstractCategories (ACs), which is extracted from a suitable feature representation, must provide distinctive correlation values [27, 34]. Therefore, the main assumption in this section is that better feature representation (extraction) can improve the correlation analysis, where the correlation between different categories of visual stimuli must be significantly smaller than the correlation between stimuli belonged to the same category. In order to provide a better perspective, the extracted features are compared by considering two different levels. At the first level, the feature space is compared with the whole of raw voxels in the original space, where this comparison analyzes the correlation between whole-brain data and automatically detected ROIs. At the second level, the correlation values among different ACs are compared in the feature space that it shows how much the feature space is well-designed. Figure 5.A, B, and E illustrate the correlation matrix of the DS105, DS107, and DS117 at the raw voxel space, respectively. Furthermore, Figure 5.B, D, and F respectively show the correlation matrix the DS105, DS107, and DS117 in the feature space. As these figures depicted, different ACs are highly correlated in the voxel space. Indeed, the average of correlations is around +0.5 in DS105 and DS107. And, this average is around +0.8 in DS117 because this dataset just includes 2 classes (photos of scramble and human face). The main reason for these results is that brain responses in the voxel space are sparse, high-dimensional and noisy. Therefore, it is so hard to discriminate between different categories (ACs) in the original space, especially when wholebrain data will be analyzed. By contrast, the feature space (Figure 5.B, D, and E) provides distinctive representation when the proposed method used the correlated patterns in each anatomical region as the extracted features. The correlation between different ACs can be also meaningful in the feature space. In DS105 and DS107, the scramble (nonsense) stimuli have a low correlation (> 0.13) in comparison with sensible categories. As another example in DS105, human faces are mostly correlated to the photos of cats and houses (respectively +0.44, +0.25) in comparison with other objects (the average of correlations is +0.14). Another interesting example is the correlation between meaningful stimuli (words and objects) and nonsense 8 Muhammad Yousefnezhad, Daoqiang Zhang Table 2 Accuracy of binary predictors (mean±std) ↓Alg., Datasets→ SVM [9] Elastic Net [30] Graph Net [30] PCA [33] ICA [37] Selected ROI [27] L1 Reg. 
SVM [30] Graph-based [32] PCA + Algorithm 1 ICA + Algorithm 1 APA + SVM Binary APA DS105 (Objects) 71.65±0.97 80.77±0.61 79.23±0.74 72.15±0.76 73.25±0.81 83.06±0.36 85.29±0.49 90.82±1.23 83.61±0.97 84.41±0.93 76.32±0.78 98.97±0.12 DS107 (Words) 69.89±1.02 78.26±0.79 79.91±0.91 70.32±0.92 70.82±0.67 89.62±0.52 81.14±0.91 94.21±0.83 80.12±0.81 82.21±0.86 77.19±0.83 98.17±0.36 DS107 (Consonants) 67.84±0.82 75.53±9.87 74.01±0.84 69.57±1.10 71.87±0.94 87.82±0.37 79.69±0.69 95.54±0.99 79.47±0.91 78.88±0.78 78.61±0.91 98.72±0.16 DS107 (Objects) 65.32±1.67 84.15±0.89 85.96±0.76 68.78±0.64 67.99±0.75 84.22±0.44 75.32±0.41 95.62±0.83 82.80±1.01 82.30±0.99 69.22±0.87 95.26±0.92 DS107 (Scramble) 67.96±0.87 87.34±0.93 86.21±0.51 69.41±0.35 72.48±0.89 86.19±0.26 78.45±0.62 93.10±0.78 80.52±0.98 83.99±0.84 73.52±0.99 97.23±0.76 DS117 81.25±1.03 86.49±0.70 85.49±0.88 81.92±0.87 80.71±1.16 85.19±0.56 85.46±0.29 86.61±0.61 86.27±0.88 85.57±1.10 89.90±0.72 96.81±0.79 DS107 (Objects) 63.17±0.59 81.06±0.98 82.08±0.92 67.56±0.59 66.91±0.97 81.54±0.92 73.92±0.28 94.23±0.94 81.76±0.12 81.04±0.81 68.14±1.02 94.92±0.11 DS107 (Scramble) 66.73±0.92 85.54±0.81 83.97±0.97 68.89±0.90 70.20±0.72 85.79±0.42 76.14±0.47 92.23±0.38 77.64±0.84 83.02±0.92 71.07±0.79 97.21±0.92 DS117 79.36±0.33 83.42±0.68 81.67±0.74 79.61±0.72 78.39±0.96 83.71±0.81 83.21±1.23 82.29±0.91 84.32±0.72 82.37±0.88 85.10±0.93 94.08±0.84 Table 3 Area under the ROC Curve (AUC) of binary predictors (mean±std) ↓Alg., Datasets→ SVM [9] Elastic Net [30] Graph Net [30] PCA [33] ICA [37] Selected ROI [27] L1 Reg. SVM [30] Graph-based [32] PCA + Algorithm 1 ICA + Algorithm 1 APA + SVM Binary APA DS105 (Objects) 68.37±1.01 78.23±0.82 77.26±0.72 70.69±0.84 71.33±0.85 82.22±0.42 80.91±0.21 88.54±0.71 81.76±0.90 81.11±0.72 72.27±0.86 97.06±0.82 DS107 (Words) 67.76±0.91 77.94±0.76 78.31±0.97 69.37±0.77 68.86±0.93 86.35±0.39 78.23±0.57 93.61±0.62 78.91±0.88 80.92±0.58 73.59±1.04 97.31±0.82 DS107 (Consonants) 63.84±1.45 74.11±0.82 71.43±0.58 65.12±0.93 71.03±1.07 85.63±0.61 77.41±0.92 94.54±0.31 77.44±0.93 75.76±0.98 76.95±0.94 96.21±0.62 Table 4 Accuracy of multiclass predictors (mean±std) ↓Alg., Datasets→ Multiclass SVM [9] MLP [1] Selected ROI [27] Graph-based [32] Multiclass APA DS105 (# of classes = 8) 18.03±4.07 38.34±3.21 28.72±2.37 50.61±4.83 59.21±2.05 DS107 (# of classes = 4) 38.01±2.56 71.55±2.79 68.51±1.07 89.69±2.32 95.61±1.83 stimuli (scrambles and consonants) in DS107, where the meaningful stimuli are highly correlated (+0.8) and their correlations with nonsense stimuli are negative (the average of correlation is −0.65). Since DS117 is a binary dataset, it is really a good example in order to understand the negative effect of noise and sparsity in fMRI analysis. The correlation between the face category and scramble is around +0.8 in the raw voxel space, whereas this correlation is +0.23 in the feature space. Indeed, the noisy and sparse raw voxels are not suitable (wise) in order to train a high-performance cognitive model. Here, we have to note that a suitable feature representation can also improve the performance of the MVPA analysis (the final cognitive model). By considering the geometric analysis of a linear space, classification algorithms draw a hyperplane (that is known as the decision surface in neuroscience [13]) in a representational space in order to distinguish different classes (categories of visual stimuli). In a highly correlated space, the margin of error for this hyperplane is sensitive. 
In other words, small changes in the ABSTRACT (# of classes = 5) 31.77±2.61 67.24±3.72 54.19±2.80 78.96±3.32 95.85±1.05 ALL (# of classes = 10) 12.26±5.97 32.94±4.89 35.03±2.66 47.64±5.28 62.93±2.69 parameters of the hyperplane can rapidly reduce the performance of the classifier in a highly correlated space, such as the raw voxel space [4, 5, 41]. Now, suitable feature extraction can minimize the correlation between different categories of visual stimuli. As a result, the margin of error and stability of the final model can be increased in order to train a classifier. 4.4 Performance Analysis In this section, the performance of different methods will be evaluated for both binary and multiclass analyses. In the binary analysis, the performance of the classical binary Support Vector Machine (SVM) is represented. Indeed, this method is used in [9] in order to distinguish different categories of visual stimuli. As regularized methods that are introduced in [30] for decoding the brain patterns, the performances of L1 regularized SVM, the Elastic Net, and the Graph Net are also reported in this section. Further, the performances of component based methods are also evaluated, i.e. Principal Anatomical Pattern Analysis for decoding visual stimuli in human brains Component Analysis (PCA) that is used in [33] for training a cognitive model and Independent Component Analysis (ICA), which is employed in [37] in order to analyze fMRI datasets. As another alternative for decoding visual stimuli, the Selected Region of Interest (ROI) method [27] is reported in this section, where the ROIs for each dataset is manually selected same as the original paper [27] and then the SVM classifier is applied to the selected ROIs in order to train a cognitive model. As the method was developed in [32], the performance of a graph-based approach is reported. In order to represent the effect of different parts of the proposed method, we also report three baselines. As the first alternatives, we utilize two component based methods, i.e. ‘PCA + Algorithm 1’ and ‘ICA + Algorithm 1’ that apply Algorithm 1 to the features that are respectively extracted by PCA and ICA. As the last baseline, ‘APA + SVM’ applies SVM algorithm to the features that are extracted by APA. In the multiclass analysis, the performance of multiclass SVM is presented as a baseline, where this algorithm was used in [9] in order to generate the cognitive model. Further, the performance of the proposed method is compared with Multilayer Perceptron (MLP) that was introduced as a multiclass approach in [1] in order to decode the brain patterns. Selected ROI method [27] and the graph-based approach [32] are also reported as other alternatives. We have to note that a multiclass SVM is used in the Selected ROI method in order to create a multiclass cognitive model (in Table 4). All of the mentioned algorithms are implemented in the MATLAB R2016b (9.1) by authors in order to generate experimental results. Further, all evaluations are applied by using leaveone-subject-out cross validation, e.g. we have selected brain patterns of 5 subjects in DS105 for training a classifier in each iteration and then used the patterns of the rest of the subject in order to test the generated cognitive model. It is worth noting that the number of iterations will be equal to the number of subjects (5 in this example). 
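The leave-one-subject-out protocol just described can be written down in a few lines; in the sketch below the estimator hook and the variable names (X, y, subject_ids) are placeholders, and the scikit-learn SVC shown in the comment is only one possible plug-in, not the authors' MATLAB code.

```python
import numpy as np

def leave_one_subject_out(X, y, subjects, fit, score):
    """Each subject serves once as the test set while the remaining subjects
    train the model, so training and testing brains never mix.
    `fit(X, y)` returns a model; `score(model, X, y)` returns an accuracy."""
    results = []
    for s in np.unique(subjects):
        test = subjects == s
        model = fit(X[~test], y[~test])
        results.append(score(model, X[test], y[test]))
    return np.mean(results), np.std(results)      # reported as mean +/- std

# Example wiring with any scikit-learn-style estimator:
# from sklearn.svm import SVC
# acc_mean, acc_std = leave_one_subject_out(
#     X, y, subject_ids,
#     fit=lambda a, b: SVC(kernel="linear").fit(a, b),
#     score=lambda m, a, b: m.score(a, b))
```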
Indeed, not only are the brain patterns in the training and testing sets independent across subjects, but the fMRI data of each subject was also preprocessed separately [23]. Note that the same training and testing sets are applied in each iteration to all of the evaluated methods.

Tables 2 and 3 respectively report the classification accuracy and the Area under the ROC Curve (AUC) of the binary predictors, broken down by the category of the visual stimuli. All visual stimuli in the dataset DS105 except the scrambled photos are treated as the object category when generating these results. As these tables show, SVM cannot achieve an acceptable performance on the raw voxels because fMRI data in the original space is noisy and sparse. Moreover, the performances of the component-based approaches (PCA and ICA) are significantly low because they were applied to the whole brain; these methods are better suited to ROI-based problems or resting-state fMRI data, where they can find the best projection among informative voxels. Further evidence for this claim is the Selected ROI method, whose performance is significantly better than that of the component-based approaches. In fact, this is the main motivation in this paper for developing an automatically selected ROI method rather than simply applying the component-based methods to whole-brain data for selecting (ranking) the effective features. As also noted in the original paper [30], L1-regularized SVM produces better results than the other regularized techniques, i.e. the Elastic Net and the Graph Net. The graph-based method developed by Osher et al. [32] also achieves an acceptable performance because it, too, employs anatomical features to create a cognitive model. Further, 'PCA/ICA + Algorithm 1' outperform plain PCA/ICA because of the ensemble approach, and 'APA + SVM' significantly improves on plain SVM because it uses a better representational space than the raw fMRI data. Although these baselines show that each part of the proposed method improves on the classical algorithms (PCA, ICA, and SVM), the best results are obtained when all parts are used together. Last but not least, the proposed algorithm achieves the best overall performance because it provides a better representation of neural activity by exploiting the anatomical structure of the human brain.

Table 4 reports the classification accuracy of the multiclass predictors. In this table, 'DS105' includes P = 8 different categories (classes) and 'DS107' contains P = 4 categories of visual stimuli. This paper also combines the three datasets in two distinct forms. 'ABSTRACT' includes 5 categories, i.e. words, objects, scrambles, consonants, and human faces; it is generated by treating all visual stimuli in DS105 except faces and scrambled photos as the object category and combining them with the datasets DS107 and DS117. This combined dataset can be used to compare the abstract features of visual stimuli in the human brain. Alternatively, 'ALL' is generated by combining all of the visual stimuli in the three datasets, i.e. faces, houses, cats, bottles, scissors, shoes, chairs, words, consonants, and scrambled photos.
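To make the construction of the combined ABSTRACT label set concrete, the sketch below shows one way to remap the original stimulus labels as described above. It is an illustration only; the function and label names are ours, not the paper's.

def to_abstract(label: str, dataset: str) -> str:
    """Map an original stimulus label to the 5-class ABSTRACT set:
    word, object, scramble, consonant, face.

    Every DS105 stimulus except faces and scrambled photos collapses to
    the 'object' category; DS107 labels (words, objects, scrambles,
    consonants) and DS117 labels (faces, scrambles) are kept unchanged.
    """
    if dataset == "DS105" and label not in ("face", "scramble"):
        return "object"
    return label

# Example: to_abstract("house", "DS105") -> "object"
#          to_abstract("word",  "DS107") -> "word"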
As depicted in Table 4, the accuracy of the proposed method improves when the three datasets are combined, whereas the performance of the other methods decreases significantly. As mentioned before, this is the standard-space registration problem in fMRI analysis. In addition, our framework employs features extracted from the structural regions instead of using all voxels or a subgroup of voxels, which can increase the performance of the predictive models by decreasing noise and sparsity.

5 Discussions and Conclusions

Anatomical Pattern Analysis (APA) can be used by neuroscientists to seek the most effective active voxels (regions) across abstract categories of visual stimuli, both in a single dataset and in combined datasets. Figure 6 illustrates an example of these active voxels using the ABSTRACT dataset of Table 4, which was generated by combining the visual stimuli of the 3 datasets, i.e. DS105, DS107, and DS117. In this figure, the active regions B^(ℓ) for the abstract categories (faces, scrambles, and objects) are normalized to the standard space, and the voxels that are active with probability greater than 50% are visualized as the automatically detected ROIs, i.e. Pr[∑_{ℓ=1}^{U} τ^(ℓ) β_n^(ℓ)] > 50%, where U is the number of all sessions in the combined dataset. As this figure shows, APA not only generates a cognitive model for predicting visual stimuli in the human brain, but also automatically reveals the activated loci across categories of visual stimuli. These activated loci can be used to study Specific-Exemplar (SE) recognition [27] or to design an accurate brain mask (ROI) for ROI-based studies [41].

Fig. 6 Automatically detected active regions (with probability greater than 50%) across abstract categories of visual stimuli, generated by combining the 3 datasets, i.e. DS105, DS107, and DS117.

In summary, this paper proposes the APA framework for decoding visual stimuli in the human brain. This framework uses an anatomical feature extraction method, which provides a normalized representation for combining homogeneous datasets. Further, a new binary imbalanced AdaBoost algorithm is introduced; it increases prediction performance by exploiting a supervised random sampling and the correlation between classes. In addition, this algorithm is used within an Error-Correcting Output Codes (ECOC) method for multiclass prediction of brain responses. Empirical studies on 4 visual categories clearly show the superiority of our proposed method in comparison with voxel-based approaches. In the future, we plan to apply the proposed method to different brain tasks such as low-level visual stimuli and emotion.

Compliance with Ethical Standards

Conflict of Interests: Muhammad Yousefnezhad and Daoqiang Zhang declare that they have no conflict of interest.

Ethical Approval: This article does not contain any studies with human participants or animals performed by any of the authors.

Acknowledgements: This work was supported in part by the National Natural Science Foundation of China (61422204 and 61473149) and NUAA Fundamental Research Funds (NE2013105).

References

1. Anderson, M., Oates, T.: A critique of multi-voxel pattern analysis. In: Proceedings of the Cognitive Science Society, vol. 32 (2010)
2. Carlson, T.A., Schrater, P., He, S.: Patterns of activity in the categorical representations of objects. Journal of Cognitive Neuroscience 15(5), 704–717 (2003)
3. Carroll, M.K., Cecchi, G.A., Rish, I., Garg, R., Rao, A.R.: Prediction and interpretation of distributed neural activity with sparse models. NeuroImage 44(1), 112–122 (2009)
4. Chen, P.H., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., Ramadge, P.J.: A reduced-dimension fMRI shared response model. In: Advances in Neural Information Processing Systems 28 (NIPS'15), pp. 460–468, December 7–12, Montreal, Canada (2015)
5. Chen, P.H., Zhu, X., Zhang, H., Turek, J.S., Chen, J., Willke, T.L., Hasson, U., Ramadge, P.J.: A convolutional autoencoder for multi-subject fMRI data aggregation. In: NIPS Workshop on Representation Learning in Artificial and Biological Neural Networks, December 5–10, Barcelona, Spain (2016)
6. Cohen, L., Dehaene, S., Naccache, L., Lehéricy, S., Dehaene-Lambertz, G., Hénaff, M.A., Michel, F.: The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain 123(2), 291–307 (2000)
7. Connolly, A., Gobbini, M., Haxby, J.: Three virtues of similarity-based multi-voxel pattern analysis (2012)
8. Connolly, A.C., Guntupalli, J.S., Gors, J., Hanke, M., Halchenko, Y.O., Wu, Y.C., Abdi, H., Haxby, J.V.: The representation of biological classes in the human brain. Journal of Neuroscience 32(8), 2608–2618 (2012)
9. Cox, D.D., Savoy, R.L.: Functional magnetic resonance imaging (fMRI) 'brain reading': detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage 19(2), 261–270 (2003)
10. Duncan, K.J., Pattamadilok, C., Knierim, I., Devlin, J.T.: Consistency and variability in functional localisers. NeuroImage 46(4), 1018–1026 (2009)
11. Escalera, S., Pujol, O., Radeva, P.: Error-correcting output codes library. Journal of Machine Learning Research 11(Feb), 661–664 (2010)
12. Friston, K.J.: Statistical parametric mapping. In: Neuroscience Databases, pp. 237–250. Springer (2003)
13. Haxby, J.V., Connolly, A.C., Guntupalli, J.S.: Decoding neural representational spaces using multivariate pattern analysis. Annual Review of Neuroscience 37, 435–456 (2014)
14. Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., Pietrini, P.: Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293(5539), 2425–2430 (2001)
15. Haxby, J.V., Guntupalli, J.S., Connolly, A.C., Halchenko, Y.O., Conroy, B.R., Gobbini, M.I., Hanke, M., Ramadge, P.J.: A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72(2), 404–416 (2011)
16. Haynes, J.D., Rees, G.: Decoding mental states from brain activity in humans. Nature Reviews Neuroscience 7(7), 523 (2006)
17. Haynes, J.D., Sakai, K., Rees, G., Gilbert, S., Frith, C., Passingham, R.E.: Reading hidden intentions in the human brain. Current Biology 17(4), 323–328 (2007)
18. Jenkinson, M., Bannister, P., Brady, M., Smith, S.: Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17(2), 825–841 (2002)
19. Kamitani, Y., Tong, F.: Decoding the visual and subjective contents of the human brain. Nature Neuroscience 8(5), 679–685 (2005)
20. Kanwisher, N., McDermott, J., Chun, M.M.: The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience 17(11), 4302–4311 (1997)
21. Kay, K.N., Naselaris, T., Prenger, R.J., Gallant, J.L.: Identifying natural images from human brain activity. Nature 452(7185), 352 (2008)
22. Kriegeskorte, N., Mur, M., Bandettini, P.: Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience 2 (2008)
23. Kriegeskorte, N., Simmons, W.K., Bellgowan, P.S., Baker, C.I.: Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience 12(5), 535–540 (2009)
24. Liesegang, T.J.: A cortical area selective for visual processing of the human body (Downing, P.E., Jiang, Y., Shuman, M., Kanwisher, N., Science 2001; 293: 2470–2473). American Journal of Ophthalmology 133(4), 598 (2002)
25. Liu, X.Y., Wu, J., Zhou, Z.H.: Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39(2), 539–550 (2009)
26. Malach, R., Reppas, J., Benson, R., Kwong, K., Jiang, H., Kennedy, W., Ledden, P., Brady, T., Rosen, B., Tootell, R.: Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences (PNAS) 92(18), 8135–8139 (1995)
27. McMenamin, B.W., Deason, R.G., Steele, V.R., Koutstaal, W., Marsolek, C.J.: Separability of abstract-category and specific-exemplar visual object subsystems: Evidence from fMRI pattern analysis. Brain and Cognition 93, 54–63 (2015)
28. Mitchell, T.M., Shinkareva, S.V., Carlson, A., Chang, K.M., Malave, V.L., Mason, R.A., Just, M.A.: Predicting human brain activity associated with the meanings of nouns. Science 320(5880), 1191–1195 (2008)
29. Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M.A., Morito, Y., Tanabe, H.C., Sadato, N., Kamitani, Y.: Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60(5), 915–929 (2008)
30. Mohr, H., Wolfensteller, U., Frimmel, S., Ruge, H.: Sparse regularization techniques provide novel insights into outcome integration processes. NeuroImage 104, 163–176 (2015)
31. Norman, K.A., Polyn, S.M., Detre, G.J., Haxby, J.V.: Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences 10(9), 424–430 (2006)
32. Osher, D.E., Saxe, R.R., Koldewyn, K., Gabrieli, J.D., Kanwisher, N., Saygin, Z.M.: Structural connectivity fingerprints predict cortical selectivity for multiple visual categories across cortex. Cerebral Cortex 26(4), 1668–1683 (2015)
33. O'Toole, A.J., Jiang, F., Abdi, H., Haxby, J.V.: Partially distributed representations of objects and faces in ventral temporal cortex. Journal of Cognitive Neuroscience 17(4), 580–590 (2005)
34. Rice, G.E., Watson, D.M., Hartley, T., Andrews, T.J.: Low-level image properties of visual objects predict patterns of neural response across category-selective regions of the ventral visual pathway. Journal of Neuroscience 34(26), 8837–8844 (2014)
35. Varoquaux, G., Gramfort, A., Thirion, B.: Small-sample brain mapping: sparse recovery on spatially correlated designs with randomization and clustering. In: Proceedings of the 29th International Conference on Machine Learning (ICML-12), pp. 1375–1382 (2012)
36. Wakeman, D.G., Henson, R.N.: A multi-subject, multi-modal human neuroimaging dataset. Scientific Data 2 (2015)
37. Xu, J., Potenza, M.N., Calhoun, V.D.: Spatial ICA reveals functional activity hidden from traditional fMRI GLM-based analyses. Frontiers in Neuroscience 7 (2013)
38. Yamashita, O., Sato, M.A., Yoshioka, T., Tong, F., Kamitani, Y.: Sparse estimation automatically selects voxels relevant for the decoding of fMRI activity patterns. NeuroImage 42(4), 1414–1429 (2008)
39. Yousefnezhad, M., Zhang, D.: Decoding visual stimuli in human brain by using anatomical pattern analysis on fMRI images. In: 8th International Conference on Brain Inspired Cognitive Systems (BICS'16), pp. 47–57. Springer, November 28–30, Beijing, China (2016)
40. Yousefnezhad, M., Zhang, D.: Local discriminant hyperalignment for multi-subject fMRI data alignment. In: 31st AAAI Conference on Artificial Intelligence (AAAI-17), pp. 59–65. AAAI, February 4–9, San Francisco, California, USA (2017)
41. Yousefnezhad, M., Zhang, D.: Multi-region neural representation: A novel model for decoding visual stimuli in human brains. In: 17th SIAM International Conference on Data Mining (SDM17), pp. 54–62. SIAM, April 27–29, Houston, Texas, USA (2017)