| field | type | values / lengths |
|---|---|---|
| venue | stringclasses | 2 values |
| paper_content | stringlengths | 7.54k - 83.7k |
| prompt | stringlengths | 161 - 2.5k |
| format | stringclasses | 5 values |
| review | stringlengths | 293 - 9.84k |
NIPS
Title: Learning Articulated Rigid Body Dynamics with Lagrangian Graph Neural Network

Abstract
Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to significantly outperform other models of physical systems. However, these models have, thus far, mostly been limited to simple systems such as pendulums and springs, or to a single rigid body such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph neural network (LGNN) that can learn the dynamics of articulated rigid bodies by exploiting their topology. We demonstrate the performance of LGNN by learning the dynamics of ropes, chains, and trusses with the bars modeled as rigid bodies. LGNN also exhibits generalizability: an LGNN trained on chains with a few segments can simulate a chain with a large number of links and arbitrary link lengths. We also show that LGNN can simulate unseen hybrid systems combining bars and chains, on which it has not been trained. Specifically, we show that LGNN can be used to model the dynamics of complex real-world structures such as the stability of tensegrity structures. Finally, we discuss the non-diagonal nature of the mass matrix and its ability to generalize in complex systems. (The code is available at https://github.com/M3RG-IITD/rigid_body_dynamics_graph. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).)

1 Introduction and Related Works

The movements of a robotic arm, a rolling ball, or a falling chain can be characterized as rigid body motion [1, 2]. Understanding the dynamics of this motion is crucial in several applications, including robotics, human-robot interaction, planning, and computer graphics [3, 1]. Traditionally, rigid body mechanics is studied in the framework of classical mechanics, which relies on either force-based or energy-based approaches [4]. Force-based approaches involve the computation of all the unknown forces from the equations of equilibrium and hence are cumbersome for large structures. Energy-based approaches present an elegant formalism involving the computation of a scalar quantity representing the state of a system: the Lagrangian ($\mathcal{L} = T - V$), which is the difference between the kinetic ($T(q, \dot{q})$) and potential ($V(q)$) energies, or the Hamiltonian ($H = T + V$), which represents the total energy of the system. This scalar quantity can, in turn, be used to predict the dynamics of the system. However, the functional form governing this scalar quantity may not be known a priori in many cases [5]. Thus, learning the dynamics of rigid bodies directly from the trajectory can simplify and accelerate the modeling of these systems [5, 6, 7, 8].

Learning the dynamics of particles has received much attention recently through physics-informed approaches [9]. Among these, Lagrangian neural networks (LNNs) and Hamiltonian neural networks (HNNs) are two physics-informed neural networks with strong inductive biases that outperform other learning paradigms for dynamical systems [10, 11, 12, 8, 6, 13, 7, 14]. In this approach, a neural network is trained to learn the $\mathcal{L}$ (or $H$) of a system based on its configuration $(q, \dot{q})$. The $\mathcal{L}$ is then used along with the Euler-Lagrange (EL) equation to obtain the time evolution of the system. Note that the training of LNNs is performed by minimizing the error of the predicted trajectory with respect to the actual trajectory.
Thus, LNNs can effectively learn the Lagrangian directly from the trajectory of a multi-particle system [6, 13]. Most of the work on LNNs has focused on relatively simple particle-based systems such as springs and pendulums [15, 16, 6, 13, 7, 10, 17]. This approach models a rigid body, for instance a ball, as a particle and predicts its dynamics, thereby ignoring the additional rotational degrees of freedom the body has due to its finite volume. Specifically, while a particle in 3D has three degrees of freedom (translational), a rigid body in 3D has six degrees of freedom (translational and rotational). Thus, the dynamics and energetics associated with these degrees of motion are lost by modeling a rigid body as a particle. To the best of the authors' knowledge, thus far, only one work has attempted to learn rigid body dynamics using LNNs and HNNs, where it was demonstrated that the dynamics of simple rigid bodies such as a gyroscope or a rotating rotor can be learned [13]. However, the LNNs used in that work, owing to their fully connected MLP architecture, are transductive in nature: an LNN trained on a double-pendulum or 3-spring system can be used only for the same system and does not generalize to a different system size such as a 3-pendulum or 5-spring system, respectively. In realistic situations the number of particles in a system can vary arbitrarily, and accordingly, a large number of trained models might be required. An alternate approach is to use a graph neural network (GNN) [18, 19, 5, 15, 16], which, once trained, can generalize to arbitrary system sizes. GNNs have been widely used to model physical and atomic systems due to their inductive bias [20, 21, 22, 15, 16].

GNNs have also been used to model rigid bodies, mainly following two approaches, namely particle-based [19] and lumped-mass [22, 23] methods. In the first approach, a rigid body is discretized into a finite number of particles and the motion of the individual particles is learned to predict the dynamics of the rigid body [19]. Note that this approach is philosophically similar to mesh-less methods such as smoothed-particle hydrodynamics (SPH) [24] or peridynamics (PD) [25], where the time evolution of a continuum body is simulated by discretizing the domain using particles. This approach [19], although useful, has several limitations, namely, it does not (i) conserve physical quantities such as energy when simulated over a long duration, and (ii) generalize to a timestep of forward simulation different from the one on which it is trained. In the second approach, a rigid body is modeled as a lumped mass [22, 26], the dynamics of which is learned by treating this lumped mass as a particle. For instance, the dynamics of a chain is modeled by discretizing the chain into smaller segments and modeling each segment as a lumped mass. As mentioned earlier, this approach leads to the loss of the additional degrees of freedom associated with a rigid body.

Here, we present a Lagrangian graph neural network (LGNN) framework that can learn the dynamics of rigid bodies. Specifically, exploiting the topology of a physical system, we show that a rigid body can be modeled as a graph. Further, the Lagrangian of the graph structure can be learned directly by minimizing the loss of the predicted trajectory with respect to the actual trajectory of the system. The major contributions of the work are as follows.

• Topology-aware modeling of rigid bodies.
We present a graph-based model for articulated rigid bodies such as inextensible ropes, chains, or trusses. Further, we demonstrate using LGNN that the dynamics of these systems can be learned in the Lagrangian framework.

• Generalizability to arbitrary system sizes. We show that LGNN can generalize to arbitrary system sizes once trained.

• Generalizability to complex unseen topology. We demonstrate that LGNN can generalize to unseen topologies, that is, links with varying lengths, combinations of truss and chain structures, and different boundary conditions.

Altogether, we demonstrate that LGNN can be a strong framework for simulating the dynamics of articulated rigid bodies.

2 Dynamics of Rigid Bodies

The dynamics of a physical system can be represented as $\ddot{q} = F(q, \dot{q}, t)$, where $q, \dot{q} \in \mathbb{R}^D$ are functions of time $t$ for a system with $D$ degrees of freedom. The future states or trajectory of the system can be predicted by integrating these equations to obtain $q(t+1)$ and so on. While there are several physics-based methods for generating the dynamics of the system, such as d'Alembert's principle and the Newtonian, Lagrangian, or Hamiltonian approaches, all of these result in equivalent sets of equations [3]. The two broad paradigms for modeling the dynamics are force- and energy-based approaches. Energy-based approaches form an elegant framework that relies on the computation of a single scalar quantity, for instance the energy, representing the state of the system; the dynamics of the system are, in turn, computed from this scalar quantity. Among the energy-based approaches, the Lagrangian formulation has been widely used to predict the dynamics of particles and rigid bodies by computing the Lagrangian $\mathcal{L}$ of the system. The standard form of Lagrange's equation for a system with holonomic constraints is given by $\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right) - \frac{\partial \mathcal{L}}{\partial q} = 0$, and the Lagrangian is $\mathcal{L}(q, \dot{q}, t) = T(q, \dot{q}, t) - V(q, t)$, with $T(q, \dot{q}, t)$ and $V(q, t)$ representing the total kinetic energy of the system and the potential function from which generalized forces can be derived, respectively. Accordingly, the dynamics of the system can be represented using the EL equation as $\ddot{q}_i = \left(\frac{\partial^2 \mathcal{L}}{\partial \dot{q}_i^2}\right)^{-1}\left[\frac{\partial \mathcal{L}}{\partial q_i} - \frac{\partial^2 \mathcal{L}}{\partial \dot{q}_i \partial q_i}\dot{q}_i\right]$.
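To make this update concrete, the terms $M$, $C$, and $\Pi$ can be extracted from any differentiable Lagrangian by automatic differentiation. The following minimal JAX sketch is our illustration (JAX being the simulation environment used later in the paper), not the authors' released code; the quadratic `lagrangian` is a hypothetical stand-in for a learned model.

```python
import jax
import jax.numpy as jnp

def lagrangian(q, qdot):
    # Hypothetical stand-in for a learned L(q, q_dot): a unit-mass
    # particle in a quadratic potential, L = T - V.
    return 0.5 * jnp.dot(qdot, qdot) - 0.5 * jnp.dot(q, q)

def acceleration(q, qdot):
    # M = d^2L/dqdot^2 (mass matrix), C = d^2L/(dq dqdot) (Coriolis-like),
    # Pi = dL/dq (conservative forces), all via automatic differentiation.
    M = jax.hessian(lagrangian, argnums=1)(q, qdot)
    C = jax.jacobian(jax.grad(lagrangian, argnums=1), argnums=0)(q, qdot)
    Pi = jax.grad(lagrangian, argnums=0)(q, qdot)
    return jnp.linalg.solve(M, Pi - C @ qdot)

q, qdot = jnp.array([1.0, 0.0]), jnp.array([0.0, 1.0])
print(acceleration(q, qdot))  # [-1.  0.] for this toy Lagrangian
```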
Modified Euler-Lagrange Equation. A modified version of the EL equation can be used in cases where some of the terms involved can be decoupled. This formulation allows the explicit incorporation of constraints (holonomic and Pfaffian) and of additional dissipative terms for friction or drag [3, 1]. In rigid body motion, Pfaffian constraints can be crucial in applications such as multi-fingered grasping, where the velocities of two or more fingers are constrained so that the combined geometry they form is able to catch or hold an object. A generic expression of constraints that accounts for both the holonomic and Pfaffian cases is $A(q)\dot{q} = 0$, where $A(q) \in \mathbb{R}^{k \times D}$ represents $k$ velocity constraints. In addition, drag, friction, or other dissipative terms can be expressed as an additional forcing term in the EL equation. It is worth noting that the EL equation is, by nature, energy conserving. Hence, the additional dissipative terms are crucial for modeling realistic systems with friction and drag; if they are not included, the system will essentially try to simulate an energy-preserving trajectory, resulting in huge errors in the dynamics [17].

Considering the additional forces mentioned above, the modified EL equation can be written as:

$$\frac{d}{dt}\nabla_{\dot{q}}\mathcal{L} - \nabla_{q}\mathcal{L} + A^T(q)\lambda - \Upsilon - F = 0 \qquad (1)$$

where $A^T$ forms a non-normalized basis for the constraint forces; $\lambda \in \mathbb{R}^k$, the Lagrange multipliers, gives the relative magnitudes of these constraint forces; $\Upsilon$ represents the non-conservative forces, such as friction or drag, which are not directly derivable from a potential; and $F$ represents any external forces acting on the system. This equation can be rearranged to obtain $\ddot{q}$ as:

$$\ddot{q} = M^{-1}\left(-C\dot{q} + \Pi + \Upsilon - A^T(q)\lambda + F\right) \qquad (2)$$

where $M = \frac{\partial}{\partial \dot{q}}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right)$ is the mass matrix, $C = \frac{\partial}{\partial q}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right)$ represents Coriolis-like forces, and $\Pi = \frac{\partial \mathcal{L}}{\partial q}$ represents the conservative forces derivable from a potential. Differentiating the constraint equation gives $A(q)\ddot{q} + \dot{A}(q)\dot{q} = 0$. Solving for $\lambda$ (see A.2) and substituting in Eq. 2, we obtain:

$$\ddot{q} = M^{-1}\left(\Pi - C\dot{q} + \Upsilon + F - A^T(AM^{-1}A^T)^{-1}\left[AM^{-1}(\Pi - C\dot{q} + \Upsilon + F) + \dot{A}\dot{q}\right]\right) \qquad (3)$$

For a system subjected to these forces, the dynamics can be learned using an LNN by minimizing the loss between the predicted and observed trajectories, where the predicted acceleration $\hat{\ddot{q}}$ is obtained using Eq. 3. It is worth noting that in this equation, $M$, $C$, and $\Pi$ can all be derived directly from $\mathcal{L}$. The constraints on a system are generally known, as they typically form part of the topology; note, however, that there are some recent works that focus on learning the constraints as well [8].
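Extending the sketch above, Eq. 3 adds a linear solve for the Lagrange multipliers. Again, this is a hedged illustration under the stated assumptions rather than the paper's implementation; `A_fn` is a hypothetical user-supplied function returning $A(q)$.

```python
import jax
import jax.numpy as jnp

def constrained_acceleration(lagrangian, A_fn, q, qdot, F=None, Y=None):
    # Sketch of Eq. 3. `lagrangian(q, qdot)` is a (learned) scalar function
    # and `A_fn(q)` returns the k x D constraint matrix; the external force
    # F and dissipative term Y (Upsilon) default to zero.
    D = q.shape[0]
    F = jnp.zeros(D) if F is None else F
    Y = jnp.zeros(D) if Y is None else Y
    M = jax.hessian(lagrangian, argnums=1)(q, qdot)
    C = jax.jacobian(jax.grad(lagrangian, argnums=1), argnums=0)(q, qdot)
    Pi = jax.grad(lagrangian, argnums=0)(q, qdot)
    A = A_fn(q)
    # Adot @ qdot, with Adot = (dA/dq) . qdot obtained by the chain rule.
    Adot_qdot = (jax.jacobian(A_fn)(q) @ qdot) @ qdot
    Minv = jnp.linalg.inv(M)
    f = Pi - C @ qdot + Y + F                  # unconstrained force terms
    lam = jnp.linalg.solve(A @ Minv @ A.T, A @ Minv @ f + Adot_qdot)
    return Minv @ (f - A.T @ lam)              # Eq. 3
```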
3 Lagrangian Mechanics for Articulated Rigid Bodies

In the case of particle systems such as spring or pendulum systems, the approach mentioned in Sec. 2 can be directly used in conjunction with an LNN to learn the dynamics. In this case, the mass matrix $M(q)$ remains constant, with only diagonal entries $m_{ii}$ in Cartesian coordinates. Inducing this as prior knowledge, wherein the masses are parameterized as a diagonal matrix, has been shown to simplify the learning process [13]. However, in the case of an articulated rigid body, the mass matrix is non-diagonal in Cartesian coordinates. Further, the kinetic energy term $T$ becomes a function of both position and velocity; in other words, the kinetic energy also becomes a function of the topology. This makes learning the dynamics a complex problem, especially in real-world structures such as trusses or tensegrities, which are combinations of bars, ropes, and chains. To this end, we briefly review the mechanics of a falling rope or chain as an example. Note that simple rigid bodies such as a gyroscope or a rotating rotor have already been studied using LNNs [13]. Of special interest to us are articulated rigid bodies that can be arbitrarily large, such as chains, ropes, or trusses, and that can be divided into smaller constituent members; it is generally assumed that extending LNNs to such large structures is a challenging problem [17].

Traditionally, the mechanics of chains or ropes is modeled using discrete models [2]. Figure 1 shows a discrete model of a rope of mass $M$ and length $L$. The rope is discretized into $n$ cylindrical rods or segments, each having a mass $m_i = M/n$ and length $l_i = L/n$. These segments are considered to be rigid, with a finite uniform cross-sectional area and volume. In order to replicate realistic dynamics of a rope, $l_i$ should be significantly smaller than $L$. Note that in the case of a chain or truss, such artificial discretization is not required, and the bars associated with each segment can be directly considered as rigid bodies.

To formulate the Lagrangian, we use generalized coordinates in which the orientation of each link is represented by $\phi_i = \tan^{-1}\left(\frac{y_i - y_{i-1}}{x_i - x_{i-1}}\right)$. Placing the origin at the beginning of the first segment (see Figure 1), the center of mass of the $i$th segment, $(x_i^{cm}, y_i^{cm})$, can be written in terms of the generalized coordinates as

$$x_i^{cm} = \sum_{j=1}^{i-1} l_j \cos\phi_j + \frac{1}{2} l_i \cos\phi_i, \qquad y_i^{cm} = \sum_{j=1}^{i-1} l_j \sin\phi_j + \frac{1}{2} l_i \sin\phi_i \qquad (4)$$

Accordingly, the kinetic energy of the system is given by [2]

$$T = \frac{1}{2} \sum_{i=1}^{n} \left[ m_i \left( (\dot{x}_i^{cm})^2 + (\dot{y}_i^{cm})^2 \right) + I_i \dot{\phi}_i^2 \right] \qquad (5)$$

where $I_i = \frac{1}{12} m_i l_i^2$ is the moment of inertia of the rigid segment $i$. Similarly, the potential energy of the system can be expressed as

$$V = \sum_{i=1}^{n} m_i g\, y_i^{cm} \qquad (6)$$

where $g$ is the acceleration due to gravity. Finally, the Lagrangian of the system is obtained as $\mathcal{L} = T - V$, which can be substituted into the EL equation to obtain the dynamics of the rigid body.
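Eqs. 4-6 translate almost directly into code. The sketch below (ours, for illustration; not from the paper's repository) computes the chain's energies in the generalized coordinates $(\phi, \dot{\phi})$, assuming gravity acting along $-y$ and the origin at the fixed end, as in Figure 1.

```python
import jax
import jax.numpy as jnp

def com_positions(phi, l):
    # Eq. 4: segment centres of mass from link angles phi and lengths l.
    # cumsum gives the running end-point of each link; stepping back half
    # a link recovers the prefix sum plus the half-segment term.
    cx = jnp.cumsum(l * jnp.cos(phi)) - 0.5 * l * jnp.cos(phi)
    cy = jnp.cumsum(l * jnp.sin(phi)) - 0.5 * l * jnp.sin(phi)
    return jnp.stack([cx, cy])  # shape (2, n)

def kinetic_energy(phi, phidot, m, l):
    # Eq. 5: T = 1/2 sum_i [m_i (xdot_i^2 + ydot_i^2) + I_i phidot_i^2],
    # with centre-of-mass velocities via the Jacobian of Eq. 4.
    jac = jax.jacobian(com_positions)(phi, l)   # shape (2, n, n)
    v = jac @ phidot                            # (xdot_cm, ydot_cm)
    I = m * l**2 / 12.0                         # slender-rod inertia
    return 0.5 * jnp.sum(m * (v[0]**2 + v[1]**2) + I * phidot**2)

def potential_energy(phi, m, l, g=9.81):
    # Eq. 6: V = sum_i m_i g y_cm_i (y measured upward from the origin).
    return jnp.sum(m * g * com_positions(phi, l)[1])

def chain_lagrangian(phi, phidot, m, l):
    return kinetic_energy(phi, phidot, m, l) - potential_energy(phi, m, l)
```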
To learn the dynamics of an articulated rigid body, we employ the approach shown in Figure 2. Specifically, we model the physical system as a graph. The Lagrangian of the system is learned by decoupling the potential and kinetic energies, each of which is learned by one of two GNNs, namely $G_V$ and $G_T$; the Lagrangian is then computed as $\mathcal{L} = T - V$. This framework is trained end-to-end by minimizing the loss between the acceleration predicted by the LGNN through the EL equation and the ground truth. In this section, we describe the LGNN architecture for rigid bodies in detail (see Figure 2 for an overview). We empirically show that the dynamics of a rigid body can be learned by LGNN. In addition, due to the inductive nature of the graph architecture, once trained on a small system, LGNN can generalize to arbitrary system sizes and topologies.

Graph structure. Figure 1 shows a chain. The (undirected) graph of the physical system is constructed by taking the bars/segments of the chain as edges and the connections between them as nodes. Here, edges represent the rigid bodies and nodes represent the connections between these rigid bodies. This is in contrast to earlier approaches used for particle-based systems, where nodes represented particle positions and edges the connections between them. Hereon, we use the notation $G(\mathcal{U}, \mathcal{E})$ to represent the graph of a rigid body, with $\mathcal{U}$ and $\mathcal{E}$ as its node and edge sets.

Overview of the architecture. As shown in Figure 2, we use two GNNs: one to predict the potential energy and the other to predict the kinetic energy. From these predictions the Lagrangian is computed. The error on the Lagrangian is minimized through an RMSE loss function to jointly train both GNNs. The architectures of the two GNNs, shown in Figure 2, are identical. Note that the specific graph architecture used in the present work is inspired by previous works on LGNNs for particle-based systems [15, 16].

Input features. Each node $u_i \in \mathcal{U}$ is characterized by its position $q_i = (x_i, y_i, z_i)$ and velocity $\dot{q}_i$. Each edge $e_{ij}$ is characterized by its type $t_{ij}$, the relative difference in the positions of its connecting nodes ($\Delta q_{ij} = q_i - q_j$), and $\omega_{ij} = \Delta q_{ij} \times \Delta \dot{q}_{ij}$. The type $t_{ij}$ is a discrete variable and is useful for distinguishing edges of different characteristics within a system (e.g., the moment of inertia or cross-sectional area of the edge). Note that the velocity of a rigid body represented by an edge is a function of the velocities of its endpoints in both two- and three-dimensional spaces; hence, we do not explicitly track edge velocities.

Pre-processing. In the pre-processing layer, we construct a dense vector representation for each node $u_i \in \mathcal{U}$ and edge $e_{ij} \in \mathcal{E}$ using MLPs (multi-layer perceptrons). The exact operations for the potential energy are given in Eqs. 7-8; for the kinetic energy, we input $\dot{q}_i$ in Eq. 7 instead of $q_i$, and $\omega_{ij}$ in Eq. 8 instead of $\Delta q_{ij}$:

$$h_i^0 = \text{squareplus}(\text{MLP}(q_i)) \qquad (7)$$

$$h_{ij}^0 = \text{squareplus}(\text{MLP}(\text{one-hot}(t_{ij}) \,\|\, \Delta q_{ij})) \qquad (8)$$

where squareplus is an activation function.

Message passing. To infuse structural information into the edge and node embeddings, we perform $L$ layers of message passing, wherein the embeddings in each layer $l \in \{1, \ldots, L\}$ are computed as follows:

$$h_{ij}^{l+1} = \text{squareplus}\left(\text{MLP}\left(h_{ij}^l + W_E^l \cdot (h_i^l \,\|\, h_j^l)\right)\right) \qquad (9)$$

Here, $W_E^l$ is a layer-specific learnable weight matrix and $\|$ denotes the concatenation operation. The node embeddings in a given layer $l$ are learned as follows:

$$h_i^{l+1} = \text{squareplus}\left(\text{MLP}\left(h_i^l + \sum_{j \in \mathcal{N}_i} W_U^l \cdot h_{ij}^l\right)\right) \qquad (10)$$

Here, $\mathcal{N}_i = \{u_j \mid (u_i, u_j) \in \mathcal{E}\}$ denotes the neighbors whose edges are incident on node $u_i$. Similar to $W_E^l$, $W_U^l$ is a layer-specific learnable weight matrix, which performs a linear transformation on the embedding of each incident edge. Following $L$ layers of message passing, the final node and edge representations are denoted by $z_i = h_i^L$ and $z_{ij} = h_{ij}^L$, respectively.

Potential and kinetic energy prediction. The predicted potential energy of each edge (rigid body) is computed by passing its final-layer embedding through an MLP, i.e., $v_{ij} = \text{MLP}(z_{ij})$. The global predicted potential energy of the rigid body system is the sum of the individual energies, i.e., $V = \sum_{e_{ij} \in \mathcal{E}} v_{ij}$. For the kinetic energy, the computation is identical, except that it occurs in the other GNN with parameters optimized for kinetic energy.

Loss function. The predicted Lagrangian is simply the difference between the predicted kinetic and potential energies. Using the Euler-Lagrange equations, we obtain the predicted acceleration $\hat{\ddot{q}}_i(t)$ for each node $u_i$. The ground-truth acceleration is computed directly from the ground-truth trajectory using the Verlet algorithm:

$$\ddot{q}_i(t) = \frac{1}{(\Delta t)^2}\left[q_i(t+\Delta t) + q_i(t-\Delta t) - 2q_i(t)\right] \qquad (11)$$

The parameters of the GNNs are trained to minimize the loss over the entire trajectory $T$:

$$\mathcal{L}_{loss} = \frac{1}{|\mathcal{U}|} \sum_{u_i \in \mathcal{U}} \sum_{t=2}^{|T|} \left( \hat{\ddot{q}}_i(t) - \ddot{q}_i(t) \right)^2 \qquad (12)$$

Since the integration of the equations of motion for the predicted trajectory is performed using the same algorithm, $q(t+\Delta t) = 2q(t) - q(t-\Delta t) + \ddot{q}(t)(\Delta t)^2$, this method is equivalent to training from the trajectory/positions.
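To make the message passing of Eqs. 9-10 concrete, one layer might be written as below. This is a simplified sketch of ours, with single weight matrices in place of the paper's MLPs and plain arrays in place of jraph's graph containers.

```python
import jax.numpy as jnp

def squareplus(x):
    # Doubly differentiable alternative to softplus/ReLU.
    return 0.5 * (x + jnp.sqrt(x**2 + 4.0))

def message_passing_layer(h_nodes, h_edges, senders, receivers, params):
    # One layer of Eqs. 9-10 (simplified: linear maps instead of MLPs).
    # h_nodes: (|U|, d); h_edges: (|E|, d); senders/receivers index the
    # endpoints of each undirected edge; `params` stands in for the
    # learnable weights W_E and W_U of the paper.
    # Eq. 9: update edge embeddings from their incident node embeddings.
    endpoints = jnp.concatenate(
        [h_nodes[senders], h_nodes[receivers]], axis=-1)   # h_i || h_j
    h_edges = squareplus(h_edges + endpoints @ params["W_E"])

    # Eq. 10: aggregate incident-edge embeddings into each node.
    msgs = h_edges @ params["W_U"]
    agg = jnp.zeros_like(h_nodes)
    agg = agg.at[senders].add(msgs).at[receivers].add(msgs)  # undirected
    return squareplus(h_nodes + agg), h_edges
```

The actual model stacks $L$ such layers and feeds the final embeddings $z_i$, $z_{ij}$ to the energy-prediction MLPs.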
4 Empirical Evaluation

In this section, we evaluate the ability of LGNN to learn rigid body dynamics. In addition, we evaluate the ability of LGNN to generalize to larger unseen system sizes, complex topologies, and realistic structures such as tensegrities.

4.1 Experimental setup

• Simulation environment. All training and forward simulations are carried out in the JAX environment [21]. The graph architecture is implemented using the jraph package [27]. All code related to dataset generation and training is available at https://github.com/M3RG-IITD/rigid_body_dynamics_graph. Software packages: numpy-1.20.3, jax-0.2.24, jax-md-0.1.20, jaxlib-0.1.73, jraph-0.0.1.dev0. Hardware: 16 GiB system memory; Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz.

• Baselines. As outlined earlier, there are very few works on rigid body simulation using graph-based approaches in which the graph models the topology of the rigid body. To compare the performance of LGNN, we employ three baselines, namely (i) a graph network simulator (GNS), (ii) a Lagrangian graph network (LGN), and (iii) a constrained Lagrangian neural network (CLNN). GNS employs a full graph network architecture [5, 12, 19] to predict the update in the position and velocity of each node based on the present position and velocity; it has been shown to be a versatile model capable of simulating a wide range of physical systems [19]. LGN and CLNN employ the exact same equations as LGNN for computing the acceleration and trajectory and hence have the same inductive biases as LGNN in terms of training and inference. However, while LGN employs a full graph network, CLNN employs a feed-forward multilayer perceptron. Details of the architectures and hyperparameters of the baselines are provided in Appendix A.5 and Appendix A.6, respectively.

• Datasets and systems. To evaluate the performance of LGNN, we selected n-chain/rope systems with n = (4, 8, 16). All graph-based models are trained only on the 4-segment chain system and are then evaluated on the other system sizes. To evaluate the zero-shot generalizability of LGNN to large-scale unseen systems, we simulate 8- and 16-segment chain systems. Further, to push the limits of LGNN, we evaluate the model trained on the 4-segment chain on a 100-link system and on complex-shaped topologies involving truss members (long rigid members) and chains (short rigid members) with more than 40 segments (see Figure 3). The mass $m_i$ and moment of inertia $I_i$ are kept the same for all members irrespective of their length. To evaluate generalizability to realistic systems, we also evaluate the performance on a 4-link system with different link properties and on one with an external drag. The details of the experimental systems are given in Appendix A.1, and the detailed data-generation procedure in Appendix A.4.

• Evaluation metrics. Following [13], we evaluate performance by computing the relative error in (1) the trajectory, known as the rollout error, given by $RE(t) = \frac{\|\hat{q}(t) - q(t)\|_2}{\|\hat{q}(t)\|_2 + \|q(t)\|_2}$, and (2) the energy violation error, given by $\frac{\|\hat{H} - H\|_2}{\|\hat{H}\|_2 + \|H\|_2}$. In addition, we compute the geometric mean of the rollout and energy errors to compare the performance of different models [13]. Note that all variables with a hat, for example $\hat{x}$, represent values predicted by the trained model, and variables without a hat, that is $x$, represent the ground truth.

• Model architecture and training setup. For the graph architectures, namely LGNN and GNS, all neural networks are modeled as one-hidden-layer MLPs with varying numbers of hidden units. For all MLPs, a squareplus activation function is used due to its double differentiability. In contrast to earlier approaches, training is not performed on trajectories; rather, it is performed on 10,000 data points generated from 100 trajectories for all models. This dataset is divided randomly in a 75:25 ratio into training and validation sets. The model performance is evaluated on a 1 s forward trajectory, a task it was not explicitly trained for. Note that this trajectory is ∼2-3 orders of magnitude longer than the training trajectories from which the training data were sampled. The dynamics of an n-body system is known to be chaotic for n ≥ 2; hence, all results are averaged over trajectories generated from 100 different initial conditions. Detailed model architectures for each of the models and the hyperparameters used in training are provided in Appendices A.5 and A.6, respectively.
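As a reference point for these metrics, a sketch of their computation (our illustration, not the paper's evaluation script) is given below; `q_hat`/`q` hold predicted and ground-truth positions per timestep, and `H_hat`/`H` the corresponding total energies.

```python
import jax.numpy as jnp

def rollout_error(q_hat, q):
    # RE(t) = ||q_hat - q||_2 / (||q_hat||_2 + ||q||_2), per timestep;
    # inputs have shape (timesteps, D).
    num = jnp.linalg.norm(q_hat - q, axis=-1)
    return num / (jnp.linalg.norm(q_hat, axis=-1) + jnp.linalg.norm(q, axis=-1))

def energy_error(H_hat, H):
    # ||H_hat - H|| / (||H_hat|| + ||H||) for scalar energies per timestep.
    return jnp.abs(H_hat - H) / (jnp.abs(H_hat) + jnp.abs(H))

def geometric_mean_error(re, ee):
    # Geometric mean of rollout and energy violation errors, as in [13].
    return jnp.sqrt(re * ee)
```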
4.2 Comparison with baselines

Model performance. To compare the performance of LGNN with the baselines GNS, LGN [12, 6], and CLNN [13], we evaluate the evolution of the energy violation and rollout errors. It is worth noting that GNS and LGN have been demonstrated only on particle-based systems and not on rigid bodies. Hence, to make a fair comparison, we give GNS and LGN the same node and edge input features as provided to LGNN during training. All models are trained on the 4-link system and evaluated on all other systems. In the case of CLNN, due to its fully connected architecture, the model is not inductive in nature; hence, it is trained and tested on the same system only, that is, the 4-link system. Detailed architectures of each of these models are provided in Appendix A.5. Figure 4 shows the error in energy and rollout for LGNN, GNS, LGN, and CLNN. We observe that GNS, LGN, and CLNN have larger errors than LGNN in both energy and rollout, establishing the superiority of LGNN. To test the ability of LGNN to learn more complex systems, we consider two additional experiments: two similar 4-link systems, one with varying masses and moments of inertia, and the other subjected to a linear drag, are evaluated in Appendix A.7. Figures 8 and 14 show that LGNN is able to infer the dynamics in both these systems.

Generalizability to different system sizes. Next, we analyze the performance of LGNN, trained on the 4-link system, on 8- and 16-link systems. We observe that LGNN exhibits performance comparable to that on the 4-segment system, in terms of both energy violation and rollout error, on the unseen 8- and 16-segment systems. In contrast, GNS exhibits increased energy violation and rollout errors, although the error of LGN remains comparable across systems. This suggests that the inductive bias in terms of the EL equations prevents the accumulation of error and allows improved generalization. However, the error of LGN is still orders of magnitude higher than that of LGNN, suggesting that the architecture employed in LGNN leads to improved learning of the dynamics of the system. This confirms that LGNN can generalize to larger unseen system sizes when trained on a significantly smaller system. Note that plots for CLNN are not shown for 8 and 16 links, as that architecture cannot generalize to larger system sizes. Finally, to push the limits, we infer the dynamics of a 100-link chain (see Fig. 15). We observe that the LGNN trained on 4 links can scale to a 100-link chain with comparable errors, confirming its ability to model large-scale structures. The trajectories of the ground-truth and trained models for some of these systems are provided as videos in the supplementary material (see Appendix A.3 for details).

Generalizability to systems with different edge properties and external drag. Although the framework presented here is generic, the results so far were limited to systems with similar edge properties, and dissipative forces such as drag were not considered. In order to evaluate the ability of the model to incorporate these effects, we consider a 4-link system with different edge properties (see Appendix A.7) and a system with drag.
We observe that the LGNN presented here can model systems with varying link properties and drag with comparable errors (see Figures 8 and 14). These results confirm that the LGNN framework can be used for realistic systems with arbitrary link properties and external dissipative forces.

4.3 Zero-shot generalizability

In conventional LNNs employing feed-forward MLPs, the training and test systems have the same number of particles and degrees of freedom; in other words, an LNN trained for an n-particle system cannot be used to perform inference on an m-particle system. In contrast, we show here that an LGNN trained on a small 4-link system can be used to perform forward simulations on other unseen complex systems such as a 100-link system and tensegrity structures. This ability to infer on unseen system sizes and topologies is referred to as zero-shot generalizability. In order to analyze the zero-shot generalizability of the trained LGNN on complex real-world geometries and structures, we evaluate its ability to model the dynamics of tensegrity and lattice-like structures (see Fig. 3). Note that tensegrity structures are truss-like structures comprising both tension and compression members; the topology of a tensegrity structure is designed so that the compression members are always bars and the tension members are always ropes. Here, we analyze the ability of LGNN to model the equilibrium dynamics of two complex tensegrity structures and the lattice-like structure shown in Figure 3. To this end, we use the LGNN trained on the 4-segment structure. We convert the rigid body structure to an equivalent graph and use the trained LGNN to predict the dynamics of the structure when released from its original configuration under gravity. Figure 5 shows the energy error and rollout for the two complex structures and the lattice-like structure shown in Figure 3. We note that LGNN is able to generalize to a complex structure with varying bar lengths and topology with high accuracy. Specifically, the energy violation and rollout errors exhibit very low values for LGNN (∼ 10−4). Further, the error saturates after a few initial timesteps, suggesting equilibrium dynamics. In contrast, we observe that the error of GNS is very high and continues to increase until it reaches 1, the maximum value it can take. This confirms the superior ability of LGNN to generalize to arbitrary topologies, boundary conditions, and bar lengths after training on a simple 4-segment chain with constant-length segments. A visualization of the dynamics of the system T1, as predicted by LGNN and the ground truth, is shown in Fig. 6. We observe that the deformed shapes predicted by LGNN are in excellent agreement with the ground truth. Note that since the initial configuration for the forward simulation is fixed, it is not possible to generate error bars for the trajectory.

4.4 Nature of the learned mass matrix

Finally, we investigate the nature of the mass matrix of LGNN for different systems. Note that in earlier approaches, the mass matrix was either learned directly for a given system based on the EL equations [6], assumed to be diagonal in Cartesian coordinates [13], or derived from an assumed functional form of the kinetic energy [7]. In the present approach, we make no assumptions on the nature of the mass matrix. In fact, for a rigid body, the mass matrix need not be diagonal; it depends on the actual topology of the structure.
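Obtaining this matrix from a trained model requires only one more automatic-differentiation call; a minimal sketch of ours follows, where `learned_lagrangian` stands in for the trained LGNN, and the analysis below uses exactly this computation.

```python
import jax

def mass_matrix(learned_lagrangian, q, qdot):
    # M = d^2 L / d qdot^2, evaluated at the current state. For an
    # articulated body this is generally banded or dense, not diagonal.
    return jax.hessian(learned_lagrangian, argnums=1)(q, qdot)

# e.g. with the toy `lagrangian` from the Section 2 sketch:
# M = mass_matrix(lagrangian, q, qdot)   # identity for the unit-mass toy
```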
This raises an interesting question about the nature of the mass matrix learned by the LGNN and how it generalizes to arbitrary topologies. In order to investigate, we plot the mass matrix of the LGNN in Figure 7. Note that the mass matrix is computed directly from the Lagrangian as $M = \frac{\partial^2 \mathcal{L}}{\partial \dot{q}^2}$, where $\mathcal{L}$ is obtained from the LGNN. First, we analyze the mass matrix of the 16-segment structure. We observe that the mass matrix is banded, with a penta-diagonal band, as expected for a chain structure. Next, we analyze the mass matrix for the complex structure T1. Interestingly, we observe that the learned mass matrix is non-diagonal and congruent with the complex topology of the structure (see Figure 7). This confirms that the mass matrix of LGNN is computed on-the-fly during the forward simulation, providing the versatility for LGNN to simulate complex structures.

5 Conclusions

In this work, we present an LGNN-based framework that can be used to simulate the dynamics of articulated rigid bodies. Specifically, we present a graph architecture, which allows the decoupling of kinetic and potential energies, that can be used to compute the Lagrangian of the system, which, when applied with the EL equations, can infer the dynamics. We show that LGNN can learn the dynamics from a small 4-segment chain and then generalize to larger system sizes. We also demonstrate the zero-shot generalizability of LGNN to arbitrary topologies, including tensegrity structures. Interestingly, we show that LGNN can provide insights into the learned mass matrix, which can exhibit non-trivial structure in complex systems. This suggests the ability of LGNN to learn and infer the dynamics of complex real-life structures directly from observables such as their trajectories.

Limitations and future work. From the mechanics perspective, LGNN assumes knowledge of the constraints; learning the constraints directly from the trajectory would be a useful extension. Similarly, extending LGNN to model contacts, collisions, and deformations would allow more comprehensive modeling of realistic systems. From the modeling perspective, in our message-passing LGNN, all messages are given equal importance; attention heads in message-passing neural networks have been shown to improve performance remarkably in several domains [28]. We plan to study the impact of attention in LGNN in future work.

Acknowledgments and Disclosure of Funding. The authors thank the IIT Delhi HPC facility for providing the computational and storage resources.
1. What is the novel framework introduced in the paper, and how does it differ from classical Lagrangian and Hamiltonian Neural Networks?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to generalize to arbitrary topologies and system sizes?
3. Do you have any questions or concerns regarding the notation used in the paper, such as the $\nabla_{\dot{q}_i}\dot{q}_i$ operator?
4. How does the author choose the representation for Section 4.1, and what are the benefits and drawbacks of this choice?
5. Were any experiments performed to validate the Hamiltonian extension, and if not, why not?
6. Why does Figure 5 show a quasi-constant error between the two methods, and is this due to the initial error propagating throughout the rollout trajectory?
7. Why is the GNS result in Figure 6 smoother than the LGNN, and what might cause this difference?
8. What limitations does the paper address, and how could the authors expand on their ideas to address contact modeling and collision in future work?
Summary Of The Paper, Strengths And Weaknesses, Questions, Limitations
Summary Of The Paper
A novel framework is introduced for Lagrangian Neural Networks using graph networks to model rigid body dynamics. Comparisons are made with classical Lagrangian and Hamiltonian Neural Networks, showing the generalizability to arbitrary topologies and system sizes.

Strengths And Weaknesses
The contributions are very clear, and the authors did a great job explaining their framework for rigid bodies. The benchmarks are clear, and the non-diagonal mass matrices show the capabilities of the model nicely. Some steps could, however, use a few more explanations for readers who are not familiar with the topic, such as the $\nabla_{\dot{q}_i}\dot{q}_i$ operator in line 100; would that be a second-order derivative? It is not a notation I have seen before. Also, in line 121 you mention solving and substituting the lambda; it would be great if this derivation were present in the Appendix. The information presented in Figures 2 and 3 was a bit overlapping, and they could perhaps be combined to make the paper's visuals more concise. Figure 4 also has text "ABC" which is not really explained (it can be inferred from the caption), and ideally the purple lines could be explained in the caption as well. The extension to the Hamiltonian feels oversimplified, though I'm not sure about the topic; it would absolutely be worth investigating. If no experiments were performed, section 4.2 could be left out. In line 276 the authors refer to an appendix, but no appendix was present in the file. All in all, more results with exciting articulated bodies could strengthen the paper a lot; especially scenarios such as Pfaff et al. 2020 (Learning Mesh-Based Simulation with Graph Networks), with the deforming plate and waving flag, could be interesting to try.

Questions
Same as before: for line 100, would the $\nabla_{\dot{q}_i}\dot{q}_i$ operator be a second-order derivative? How did the choice in Section 4.1 come about, where the edges represent rigid bodies and the nodes represent connections between the rigid bodies? What benefits/drawbacks does this have? Did you perform any experiments on the Hamiltonian extensions? Figure 5 shows a quasi-constant error between the two methods; is there a reason for this? Is this not the initial error propagating throughout the rollout trajectory? Why is the GNS result in Figure 6 so smooth, while the LGNN is more chaotic?

Limitations
The limitations are clearly addressed, though more focus on contact modeling as a limitation would be preferred, since the main focus is rigid body mechanics, where contact and collision are essential. Adding some initial ideas towards addressing contact would be appreciated as well.
NIPS
Title Learning Articulated Rigid Body Dynamics with Lagrangian Graph Neural Network Abstract Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to outperform other models of physical systems signicantly. However, these models have, thus far, mostly been limited to simple systems such as pendulums and springs or a single rigid body such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph neural network (LGNN) that can learn the dynamics of articulated rigid bodies by exploiting their topology. We demonstrate the performance of LGNN by learning the dynamics of ropes, chains, and trusses with the bars modeled as rigid bodies. LGNN also exhibits generalizability—LGNN trained on chains with a few segments exhibits generalizability to simulate a chain with large number of links and arbitrary link length. We also show that the LGNN can simulate unseen hybrid systems including bars and chains, on which they have not been trained on. Specically, we show that the LGNN can be used to model the dynamics of complex real-world structures such as the stability of tensegrity structures. Finally, we discuss the non-diagonal nature of the mass matrix and its ability to generalize in complex systems. 1 Introduction and Related Works Movements of a robotic arm, rolling ball, or falling chain can be characterized by rigid body motion [1, 2]. Understanding the dynamics of the motion is crucial in several applications including robotics, human-robot interaction, planning, and computer graphics [3, 1]. Traditionally, the rigid body mechanics is studied in the framework of classical mechanics, which relies on either forcebased or energy-based approaches [4]. Force-based approaches involve the computation of all the unknown forces based on the equations of equilibrium and hence is cumbersome for large structures. Energy-based approaches present an elegant formalism which involve the computation of a scalar quantity representing the state of a system, namely, Lagrangian (L = T − V), which is the difference between the kinetic (T (q, q̇)) and potential (V(q)) energies, or Hamiltonian (H = T + V), which represents the total energy of the system. This scalar quantity can, in turn, be used to predict the dynamics of the system. However, the functional form governing this scalar quantity may not be The code is available at https://github.com/M3RG-IITD/rigid_body_dynamics_graph 36th Conference on Neural Information Processing Systems (NeurIPS 2022). known a priori in many cases [5]. Thus, learning the dynamics of rigid bodies directly from the trajectory can simplify and accelerate the modeling of these systems [5, 6, 7, 8]. Learning the dynamics of particles has received much attention recently using physics-informed approaches [9]. Among these, Lagrangian neural networks (LNNs) and Hamiltonian neural networks (HNNs) are two physics-informed neural networks with strong inductive biases that outperform other learning paradigms of dynamical systems [10, 11, 12, 8, 6, 13, 7, 14]. In this approach, a neural network is trained to learn the L (orH) of a system based on its conguration (q, q̇). The L is then used along with the Euler-Lagrange (EL) equation to obtain the time evolution of the system. Note that the training of LNNs is performed by minimizing the error on the predicted trajectory with respect to the actual trajectory. 
Thus, LNNs can effectively learn the Lagrangian directly from the trajectory of a multi-particle system [6, 13]. Most of the works on LNN has focused on relatively simpler particle-based systems such as springs and pendulums [15, 16, 6, 13, 7, 10, 17]. This approach models a rigid body, for instance a ball, as a particle and predicts the dynamics. This approach thus ignores the additional rotational degrees of freedom of the body due to its nite volume. Specically, while a particle in 3D has three degrees of freedom (translational), a rigid body in 3D has six degrees of freedom (translational and rotational). Thus, the dynamics and energetics associated with these degrees of motions are lost by modeling a rigid body as a particle. To the best of authors’ knowledge, thus far, only one work has attempted to learn rigid body dynamics using LNNs and HNNs, where it was demonstrated the dynamics of simple rigid bodies such as a gyroscope or rotating rotor can be learned [13]. However, the LNNs used in this work, owing to their fully connected MLP architecture, are transductive in nature. An LNN trained on a double-pendulum system or 3-spring system can be used only for the same system and does not generalize to a different system size such as 3-pendulum or 5-spring, respectively. In realistic situations the number of particles in a system can vary arbitrarily, and accordingly, a large number of trained models might be required to model these systems. An alternate approach to model these systems would be to use a graph neural network (GNN) [18, 19, 5, 15, 16], which, once trained, can generalize to arbitrary system sizes. GNNs have been widely used to model physical and atomic systems extensively due to their inductive bias [20, 21, 22, 15, 16]. GNNs have also been used to model rigid bodies mainly following two approaches, namely, particlebased [19] and lumped mass [22, 23] methods. In the rst approach, a rigid body is discretized into nite number of particles and the motion of the individual particles are learned to predict the dynamics of rigid body [19]. Note that this approach is philosophically similar to mess-less methods such as smoothed-particle hydrodynamics (SPH) [24] or peridynamics (PD) [25], where the time-evolution of a continuum body is simulated by discretizing the domain using particles. This approach [19], although useful, have several limitations, namely, it does not (i) conserve physical quantities such as energy when simulated over a long duration, and (ii) generalize to a different timestep of forward simulation than the one on which it is trained. In the second approach, a rigid body is modeled as a lumped mass [22, 26], the dynamics of which is learned by assuming this lumped mass as a particle. For instance, the dynamics of a chain is modeled by discretizing the chain to smaller segments and modeling each segment as a lumped mass. As mentioned earlier, this approach leads to the loss of additional degrees of freedom that are associated with a rigid body. Here, we present a Lagrangian graph neural network (LGNN) framework that can learn the dynamics of rigid bodies. Specically, exploiting the topology of a physical system, we show that a rigid body can be modeled as a graph. Further, the Lagrangian of the graph structure can be learned directly by minimizing the loss on the predicted trajectory with respect to the actual trajectory of the system. The major contributions of the work are as follows. • Topology aware modeling of rigid body. 
We present a graph-based model for articulated rigid bodies such as in-extensible ropes, chains, or trusses. Further, we demonstrate using LGNN that the dynamics of these systems can be learned in the Lagrangian framework. • Generalizability to arbitrary system sizes. We show that LGNN can generalize to arbitrary system sizes once trained. • Generalizability to complex unseen topology. We demonstrate that the LGNN can generalize to unseen topology, that is, links with varying lengths, a combination of truss and chain structures, and different boundary conditions. Altogether, we demonstrate that LGNN can be a strong framework for simulating the dynamics of articulated rigid bodies. 2 Dynamics of Rigid Bodies The dynamics of a physical system can be represented as q̈ = F (q, q̇, t), where q, q̇ RD is a function of time (t) for a system with D degrees of freedom. The future states or trajectory of the system can be predicted by integrating these equations to obtain q(t + 1) and so on. While there are several physics-based methods for generating the dynamics of the system such as d’Alembert’s principle, Newtonian, Lagrangian, or Hamiltonian approaches, all these approaches result in the equivalent sets of equations [3]. The two broad paradigms for modeling the dynamics involve force- and energy-based approaches. Energy-based approaches is an elegant framework, which relies on the computation of a single scalar quantity, for instance energy, that represents the state of system. The dynamics of the system is, in turn, computed based on this scalar quantity. Among the energy-based approaches, Lagrangian formulation has been widely used to predict the dynamics of particles and rigid bodies by computing the Lagrangian L of the system. The standard form of Lagrange’s equation for a system with holonomic constraints is given by ddt ∂L ∂q̇ − ∂L ∂q = 0, and the Lagrangian is L(q, q̇, t) = T (q, q̇, t)−V(q, t) with T (q, q̇, t) and V(q, t) representing the total kinetic energy of the system and the potential function from which generalized forces can be derived. Accordingly, the dynamics of the system can be represented using EL equations as q̈i = ∂2L ∂q̇2i −1 ∂L ∂qi − ∂L ∂q̇i∂qi q̇i . Modied Euler-Lagrange Equation. A modied version of the EL can be used in cases where some of the terms involved in the equation can be decoupled. This formulation allows explicit incorporation of constraints (holonomic and Pfafan) and additional dissipative terms for friction or drag [3, 1]. In rigid body motion, Pfafan constraints can be crucial in applications such as multi-ngered grasping where, the velocity of two or more ngers are constrained so that the combined geometry formed is able to catch or hold an object. A generic expression of constraints for these systems that accounts for both holonomic and Pfafan can be A(q)q̇ = 0, where, A(q) Rk×D represents k velocity constraints. In addition, drag, friction or other dissipative terms of a system can be expressed as an additional forcing term in the EL equation. It is worth noting that EL equation, by nature, is energy conserving. Hence, the additional dissipative terms are crucial for modeling realistic systems with friction and drag. If these terms are not included, the system will essentially try to simulate an energy preserving trajectory, thereby resulting in huge errors in the dynamics [17]. 
Considering the additional forces mentioned above, the modied EL equation can be written as: d dt q̇L−qL+AT (q)λ−Υ− F = 0 (1) where AT forms a non-normalized basis for the constraint forces, λ Rk, known as the Lagrange multipliers, gives the relative magnitudes of these force constraints,Υ represents the non-conservative forces, such as friction or drag, which are not directly derivable from a potential, and F represents any external forces acting on the system. This equation can be modied to obtain q̈ as: q̈ = M−1 −Cq̇ +Π+Υ−AT (q)λ+ F (2) where M = ∂∂q̇ ∂L ∂q̇ represents the mass matrix, C = ∂ ∂q ∂L ∂q̇ represents Coriolis-like forces, and Π = ∂L∂q represents the conservative forces derivable from a potential. Differentiating the constraint equation gives A(q)q̈ + Ȧ(q)q̇ = 0. Solving λ (see A.2) and substituting in Eq. 2, we obtain q̈ as q̈ = M−1 Π− Cq̇ +Υ−AT (AM−1AT )−1 AM−1(Π− Cq̇ +Υ+ F ) + Ȧq̇ + F (3) For a system subjected to these forces, the dynamics can be learned using LNN by minimizing the loss on the predicted and observed trajectory, where the predicted acceleration ˆ̈q is obtained using the Equation 3. It is worth noting that in this equation, M,C, and Π can be directly derived from the L. Constraints on the systems are generally known as they generally form part of the topology. It is worth noting that there are some recent works that focus on learning constraints as well [8]. 3 Lagrangian Mechanics for Articulated Rigid Bodies In the case of particle systems such as spring or pendulum systems, the approach mentioned in Sec.2 can be directly used in conjunction with an LNN to learn the dynamics. In this case, the mass matrix M(q) remains constant with only diagonal entries mii in Cartesian coordinates. Inducing this as a prior knowledge, wherein the masses are parameterized as a diagonal matrix is shown to simplify the learning process [13]. However, in the case of an articulated rigid body, the mass matrix is non-diagonal in the Cartesian coordinates. Further, the kinetic energy term T becomes a function of both position and velocity. In other words, the kinetic energy also becomes a function of the topology. This makes learning the dynamics a complex problem especially in real-world complex structures such as trusses or tensegrities, which are a combination of bars, ropes, and chains. To this extent, we briey review the mechanics of a falling rope or chain as an example. Note that simple rigid bodies such as a gyroscope or rotating rotor has already been studied using LNNs [13]. Of our special interest are articulated rigid bodies that can be arbitrarily large such as chains, ropes or trusses, that can be divided into smaller constituent members. This is because, it is generally assumed that extending LNNs to large structures is a challenging problem [17]. Traditionally, the mechanics of chains or ropes are modeled using discrete models [2]. Figure 1 shows a discrete model of a rope of mass M and length L. The rope is discretized into n cylindrical rods or segments each having a mass mi = Mn and length li = Ln. These segments are considered to be rigid, and with a nite uniform cross-sectional area and volume. In order to replicate realistic dynamics of a rope, the li should be signicantly smaller than L. Note that in the case of a chain or truss, such articial discretization is not required and the bars associated with each segment can be directly considered as a rigid body. 
To formulate the L, the generalized coordinates with orientation of each link represented by ϕi = tan−1 yi−yi−1 xi−xi−1 can be considered. Placing the origin at the beginning of rst segment (see Figure 1), the center of mass of ith segment (xcmi , y cm i ) can be written in terms of generalized coordinates as xcmi = i−1 j=1 lj cosϕj + 1 2 li cosϕi, y cm i = i−1 j=1 lj sinϕj + 1 2 li sinϕi (4) Accordingly, the kinetic energy of the system is given by [2] T = 1 2 n i=1 mi(ẋ 2 i,cm + ẏ 2 i,cm) + Iiϕ̇ 2 i (5) where Ii = 112mil 2 i represents the moment of inertia of the rigid segment i. Similarly, the potential energy of the system can be expressed as: V = n i=1 migy cm i (6) where g represents the acceleration due to gravity. Finally, the Lagrangian of the system can be obtained as L = T − V , which can be substituted in the EL equation to obtain the dynamics of the rigid body. To learn the dynamics of an articulated rigid body, we employ the approach shown in Figure 2. Specically, we model a physical system as a graph. Further, the Lagrangian of system is learned by decoupling the potential and kinetic energy, each of which are learned by two GNNs, namely, GV and GT . Finally, the Lagrangian is computed as L = T − V . This framework is trained end-to-end based by minimizing the loss on the acceleration predicted by the LGNN using EL equation with respect to the ground truth. In this section, we describe the LGNN architecture for rigid bodies in detail (See Figure 2 for an overview). We empirically show that the dynamics of a rigid body can be learned by LGNN. In addition, due to the inductive nature of the graph architecture, once trained on a small system, LGNN can generalize to arbitrary system sizes and topology. Graph structure. Figure 1 shows a chain. The (undirected) graph of the physical system is constructed by considering the bars/segments of the chain as the edges and the connections as nodes. Here, edges represent the rigid bodies and nodes represent the connection between these rigid bodies. This is in contrast to earlier approaches used for particle-based systems, where node represented the particle position and edge represented the connections between them. Hereon, we use the notation G(U , E) to to represent the graph representation of a rigid body with U and E as its node and edge sets. Overview of the architecture. As shown in Figure 2, we use two GNNs; one to predict the potential energies and the other to predict kinetic energies. From these predictions the Lagrangian is computed. The error on the Lagrangian is minimized through an RMSE loss function to jointly train both the GNNs. The architecture of both the GNNs, shown in Figure 2, are identical. Note that the specic graph architecture used in the present work is inspired from previous works on LGNNs for particle-based systems [15, 16]. Input features. Each node ui U is characterized by its position qi = (xi, yi, zi), and velocity (q̇i). Each edge eij is characterized by its type tij , and the relative differences in the positions (∆qij = qi − qj) of its connecting nodes, and ωij = ∆qij ×∆q̇ij . The type tij is a discrete variable and is useful in distinguishing edges of different characteristics within a system (Ex. moment inertia or area of cross section of the edge). Note that the velocity of a rigid body represented by an edge is a function of the velocities of its end points in two and three dimensional spaces. Hence, we do not explicitly track edge velocities. Pre-Processing. 
In the pre-processing layer, we construct a dense vector representation for each node vi U and edge eij E using MLPs (multi-layer perceptrons). The exact operation for potential energy is provided below in Eqs.7-8. For kinetic energy, we input q̇i in Eq 7 instead of qi and ωij in Eq. 8 instead of ∆qij . h0i = squareplus(MLP(qi)) (7) h0ij = squareplus(MLP(one-hot(ti),∆qij)) (8) squareplus is an activation function. Message passing. To infuse structural information in the edge and node embeddings, we perform L layers of message passing, wherein the embedding in each layer l [1, ·, L] is computed as follows: hl+1ij = squareplus MLP hlij +W l E · hlihlj (9) Here,WlE is a layer-specic learnable weight vector and || represents concatenation operation. The node embeddings in a given layer l are learned as follows: hl+1i = squareplus MLP hli + j∈Ni WlU · hlij (10) Here, Ni = uj (ui, uj) E denotes the edges incident on node ui. Similar to WlE , WlU is a layer-specic learnable weight vector, which performs a linear transformation on the embedding of each incident edge. Following L layers of message passing, the nal node and edge representations in the Lth layer are denoted by zi = hLi and zij = h L ij respectively. Potential and kinetic energy prediction. The predicted potential energy of each edge (rigid body) is computed by passing its nal layer embedding through an MLP, i.e., vij = MLP(zi,j). The global predicted potential energy of the rigid body system is therefore the sum of the individual energies, i.e., V = ∀eij∈E vij . For kinetic energy, the computation is identical except that it occurs in the other GNN with parameters optimized for kinetic energy. Loss function. The predicted Lagrangian is simply the difference between the predicted kinetic energy and the potential energy. Using Euler-Lagrange equations, we obtain the predicted acceleration ̈qi(t) for each node ui. The ground truth acceleration is computed directly from the ground truth trajectory using the Verlet algorithm as: q̈i(t) = 1 (∆t)2 [qi(t+∆t) + qi(t−∆t)− 2qi(t)] (11) The parameters of the GNNs are trained to minimize the RMSE loss over the entire trajectory T: L = 1 U ∀ui∈U |T| t=2 q̈i(t)− ̈qi(t) 2 (12) Since the integration of the equations of motion for the predicted trajectory is also performed using the same algorithm as: q(t+∆t) = 2q(t)− q(t−∆t)+ q̈(∆t)2, this method is equivalent to training from trajectory/positions. 4 Empirical Evaluation In this section, we evaluate the ability of LGNN to learn rigid body dynamics. In addition, we evaluate the ability of LGNN to generalize to larger unseen system sizes, complex topology, and realistic structures such as tensegrity. 4.1 Experimental setup • Simulation environment. All the training and forward simulations are carried out in the JAX environment [21]. The graph architecture is implemented using the jraph package [27]. All the codes related to dataset generation and training are available in https://github.com/M3RGIITD/rigid_body_dynamics_graph. Software packages: numpy-1.20.3, jax-0.2.24, jax-md-0.1.20, jaxlib-0.1.73, jraph-0.0.1.dev0 Hardware: Memory: 16GiB System memory, Processor: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz •Baselines. As outlined earlier, there are very few works on rigid body simulations using graph-based approaches, where the graph is used to model the topology of the rigid body. 
4 Empirical Evaluation
In this section, we evaluate the ability of LGNN to learn rigid body dynamics. In addition, we evaluate the ability of LGNN to generalize to larger unseen system sizes, complex topologies, and realistic structures such as tensegrities.
4.1 Experimental setup
• Simulation environment. All the training and forward simulations are carried out in the JAX environment [21]. The graph architecture is implemented using the jraph package [27]. All the code related to dataset generation and training is available at https://github.com/M3RG-IITD/rigid_body_dynamics_graph. Software packages: numpy-1.20.3, jax-0.2.24, jax-md-0.1.20, jaxlib-0.1.73, jraph-0.0.1.dev0. Hardware: 16 GiB system memory; Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz.
• Baselines. As outlined earlier, there are very few works on rigid body simulation using graph-based approaches in which the graph models the topology of the rigid body. To compare the performance of LGNN, we employ three baselines, namely, (i) a graph network simulator (GNS), (ii) a Lagrangian graph network (LGN), and (iii) a constrained Lagrangian neural network (CLNN). GNS employs a full graph network architecture [5, 12, 19] to predict the updates in the position and velocity of each node based on the present position and velocity. GNS has been shown to be a versatile model capable of simulating a wide range of physical systems [19]. LGN and CLNN employ the exact same equations as LGNN for computing the acceleration and trajectory, and hence have the same inductive biases as LGNN in terms of training and inference. However, while LGN employs a full graph network, CLNN employs a feed-forward multilayer perceptron. Details of the architectures and hyperparameters of the baselines are provided in Appendix A.5 and Appendix A.6, respectively.
• Datasets and systems. To evaluate the performance of LGNN, we select $n$-segment chain/rope systems with $n = (4, 8, 16)$. All graph-based models are trained only on the 4-segment chain and then evaluated on the other system sizes. To evaluate the zero-shot generalizability of LGNN to large-scale unseen systems, we simulate 8- and 16-segment chains. Further, to push the limits of LGNN, we evaluate the model trained on the 4-segment chain on a 100-link system and on complex topologies involving truss members (long rigid members) and chains (short rigid members), which have more than 40 segments (see Figure 3). The mass $m_i$ and moment of inertia $I_i$ of the members are kept the same for all segments irrespective of their lengths. To evaluate the generalizability to realistic systems, we also evaluate the performance on a 4-link system with different link properties and with an external drag. The details of the experimental systems are given in Appendix A.1, and the detailed data-generation procedure in Appendix A.4.
• Evaluation metric. Following [13], we evaluate performance by computing the relative error in (1) the trajectory, known as the rollout error, $\mathrm{RE}(t) = \frac{\|\hat{q}(t) - q(t)\|_2}{\|\hat{q}(t)\|_2 + \|q(t)\|_2}$, and (2) the energy violation error, $\frac{\|\hat{\mathcal{H}} - \mathcal{H}\|_2}{\|\hat{\mathcal{H}}\|_2 + \|\mathcal{H}\|_2}$. In addition, we compute the geometric mean of the rollout and energy errors to compare the performance of the different models [13]. Note that all variables with a hat, for example $\hat{x}$, represent values predicted by the trained model, and variables without a hat, that is $x$, represent the ground truth. (A minimal sketch of these metrics is given after this setup section.)
• Model architecture and training setup. For the graph architectures, namely LGNN and GNS, all the neural networks are modeled as one-hidden-layer MLPs with varying numbers of hidden units. For all the MLPs, a squareplus activation function is used due to its double differentiability. In contrast to earlier approaches, training is not performed on full trajectories; rather, it is performed on 10,000 data points sampled from 100 trajectories for all the models. This dataset is divided randomly in a 75:25 ratio into training and validation sets. The model performance is evaluated on a forward trajectory of 1 s, a task it was not explicitly trained for. Note that this trajectory is ∼2–3 orders of magnitude longer than the training trajectories from which the training data were sampled. The dynamics of an $n$-body system is known to be chaotic for $n \geq 2$; hence, all results are averaged over trajectories generated from 100 different initial conditions. Detailed model architectures and the hyperparameters used in training are provided in Appendices A.5 and A.6, respectively.
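The two relative-error metrics above, and their geometric mean, are straightforward to compute; a small sketch follows, assuming per-timestep flattened coordinates `q_pred`, `q_true` and total energies `H_pred`, `H_true` (the names are our own).

```python
import jax.numpy as jnp

def rollout_error(q_pred, q_true):
    # relative trajectory error RE(t); inputs have shape (timesteps, dof)
    num = jnp.linalg.norm(q_pred - q_true, axis=-1)
    den = jnp.linalg.norm(q_pred, axis=-1) + jnp.linalg.norm(q_true, axis=-1)
    return num / den

def energy_error(H_pred, H_true):
    # relative energy-violation error; inputs have shape (timesteps,)
    return jnp.abs(H_pred - H_true) / (jnp.abs(H_pred) + jnp.abs(H_true))

def geometric_mean_error(q_pred, q_true, H_pred, H_true):
    # geometric mean of the two errors, averaged over the trajectory
    re = rollout_error(q_pred, q_true).mean()
    ee = energy_error(H_pred, H_true).mean()
    return jnp.sqrt(re * ee)
```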
4.2 Comparison with baselines
Model performance. To compare the performance of LGNN with the baselines GNS, LGN [12, 6], and CLNN [13], we evaluate the evolution of the energy violation and rollout errors. It is worth noting that GNS and LGN have been demonstrated only on particle-based systems and not on rigid bodies. Hence, to make a fair comparison, we provide the same node and edge input features used for LGNN to both GNS and LGN during training. All the models are trained on a 4-link system and evaluated on all other systems. In the case of CLNN, due to its fully connected architecture, the model is not inductive in nature; hence, this model is trained and tested on the same system only, that is, the 4-link system. Detailed architectures of each of these models are provided in Appendix A.5. Figure 4 shows the energy and rollout errors for LGNN, GNS, LGN, and CLNN. We observe that GNS, LGN, and CLNN all exhibit larger errors than LGNN in both metrics, establishing the superiority of LGNN. To test the ability of LGNN to learn more complex systems, we consider two additional experiments: two similar 4-link systems, one with varying masses and moments of inertia and the other subjected to a linear drag, evaluated in Appendix A.7. Figures 8 and 14 show that LGNN is able to infer the dynamics in both of these systems.
Generalizability to different system sizes. Next, we analyze the performance of LGNN, trained on the 4-link system, on 8- and 16-link systems. We observe that LGNN exhibits performance comparable to that on the 4-segment system, in terms of both energy violation and rollout errors, on the unseen 8- and 16-segment systems. In contrast, GNS exhibits increased energy violation and rollout errors, while the error of LGN remains comparable across all systems. This suggests that the inductive bias provided by the EL equations prevents the accumulation of error and enables improved generalization. However, the error of LGN is still orders of magnitude higher than that of LGNN, suggesting that the architecture employed in LGNN leads to improved learning of the system dynamics. This confirms that LGNN can generalize to larger unseen system sizes when trained on a significantly smaller system. Note that the plots for CLNN are not shown for the 8- and 16-link systems, as this architecture cannot generalize to larger system sizes. Finally, to push the limits, we infer the dynamics of a 100-link chain (see Fig. 15). We observe that the LGNN trained on the 4-link system scales to the 100-link chain with comparable errors, confirming its ability to model large-scale structures. The trajectories of the ground-truth and trained models for some of these systems are provided as videos in the supplementary material (see Appendix A.3 for details).
Generalizability to systems with different edge properties and external drag. Although the framework presented here is generic, the results so far were limited to systems with identical edge properties; further, dissipative forces such as drag were not considered. To evaluate the ability of the model to incorporate these effects, we consider a 4-link system with different edge properties (see Appendix A.7) and a system with drag. We observe that LGNN can model systems with varying link properties and drag with comparable errors (see Figures 8 and 14). These results confirm that the LGNN framework can be used for realistic systems with arbitrary link properties and external dissipative forces. A minimal sketch of the forward rollout used in all these evaluations is given below.
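The forward evaluation integrates the accelerations predicted through the EL equation using the same position-Verlet scheme used to build the training targets. In the sketch below, `predict_acceleration` is a stand-in for the full trained pipeline (graph construction, the two GNNs, and the constrained EL update); the function name and looping style are our own assumptions.

```python
import jax.numpy as jnp

def rollout(predict_acceleration, q0, q1, dt, steps):
    # position-Verlet update: q(t+dt) = 2 q(t) - q(t-dt) + q_ddot * dt^2
    traj = [q0, q1]
    q_prev, q = q0, q1
    for _ in range(steps):
        q_dot = (q - q_prev) / dt          # finite-difference velocity estimate
        q_ddot = predict_acceleration(q, q_dot)
        q_next = 2.0 * q - q_prev + q_ddot * dt**2
        traj.append(q_next)
        q_prev, q = q, q_next
    return jnp.stack(traj)
```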
4.3 Zero-shot generalizability
In conventional LNNs employing feed-forward MLPs, the training and test systems must have the same number of particles and degrees of freedom; in other words, an LNN trained on an $n$-particle system cannot be used to perform inference on an $m$-particle system. In contrast, we show here that an LGNN trained on a small 4-link system can be used to perform forward simulations on unseen, more complex systems such as a 100-link system and tensegrity structures. This ability to infer on unseen system sizes and topologies is referred to as zero-shot generalizability. To analyze the zero-shot generalizability of the trained LGNN to complex real-world geometries and structures, we evaluate its ability to model the dynamics of tensegrity and lattice-like structures (see Fig. 3). Note that tensegrity structures are truss-like structures comprising both tension and compression members; the topology of a tensegrity structure is designed such that the compression members are always bars and the tension members are always ropes. Here, we analyze the ability of LGNN to model the equilibrium dynamics of two complex tensegrity structures and the lattice-like structure shown in Figure 3. To this end, we use the LGNN trained on the 4-segment chain. We convert the rigid-body structure to an equivalent graph and use the trained LGNN to predict the dynamics of the structure when released from its original configuration under gravity. Figure 5 shows the energy and rollout errors for both the complex structures and the lattice-like structure shown in Figure 3. We note that LGNN is able to generalize to complex structures with varying bar lengths and topologies with high accuracy. Specifically, the energy violation and rollout errors remain very low for LGNN ($\sim 10^{-4}$); further, they saturate after a few initial timesteps, suggesting an equilibrium dynamics. In contrast, the error of GNS is very high and continues to increase until it reaches 1, the maximum value it can take. This confirms the superior ability of LGNN to generalize to arbitrary topologies, boundary conditions, and bar lengths after training on a simple 4-segment chain with constant-length segments. A visualization of the dynamics of system T1, as predicted by LGNN and by the ground truth, is shown in Fig. 6; the deformed shapes predicted by LGNN are in excellent agreement with the ground truth. Note that since the initial configuration for the forward simulation is fixed, it is not possible to generate error bars for the trajectory.
4.4 Nature of the learned mass matrix
Finally, we investigate the nature of the mass matrix learned by LGNN for different systems. Note that in earlier approaches, the mass matrix was either learned directly for a given system based on the EL equations [6], assumed to be diagonal in Cartesian coordinates [13], or the functional form of the kinetic energy was assumed [7]. In the present approach, we make no assumption on the nature of the mass matrix. In fact, for a rigid body, the mass matrix need not be diagonal and depends on the actual topology of the structure. This raises an interesting question about the nature of the mass matrix learned by the LGNN and how it generalizes to arbitrary topologies. To investigate this, we plot the mass matrix of the LGNN in Figure 7. Note that the mass matrix is computed directly from the Lagrangian as $M = \frac{\partial^2 \mathcal{L}}{\partial \dot{q}^2}$, where $\mathcal{L}$ is obtained from the LGNN. In practice, this is a single automatic-differentiation call on the learned Lagrangian, as sketched below.
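Since the learned Lagrangian is an explicit differentiable function, the mass matrix can be read out with one Hessian call. A minimal sketch, where `lagrangian(q, q_dot, params)` is an assumed signature for the trained LGNN's scalar output (not the released code):

```python
import jax

def learned_mass_matrix(lagrangian, params, q, q_dot):
    # M = d^2 L / d q_dot^2: Hessian of the scalar Lagrangian w.r.t. velocities;
    # q and q_dot are flat vectors over the system's degrees of freedom
    return jax.hessian(lagrangian, argnums=1)(q, q_dot, params)
```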
First, we analyze the mass matrix of the 16-segment structure. We observe that the mass matrix is banded, with a penta-diagonal band, as expected for a chain structure. Next, we analyze the mass matrix of the complex structure T1. Interestingly, we observe that the learned mass matrix is non-diagonal in nature and congruent with the complex topology of the structure (see Figure 7). This confirms that the mass matrix of LGNN is learned on-the-fly during the forward simulation, which provides the versatility for LGNN to simulate complex structures.
5 Conclusions
In this work, we present an LGNN-based framework that can be used to simulate the dynamics of articulated rigid bodies. Specifically, we present a graph architecture that decouples the kinetic and potential energies, which are used to compute the Lagrangian of the system; applying the EL equations to this Lagrangian yields the dynamics. We show that LGNN can learn the dynamics from a small 4-segment chain and then generalize to larger system sizes. We also demonstrate the zero-shot generalizability of LGNN to arbitrary topologies, including tensegrity structures. Interestingly, we show that LGNN provides insights into the learned mass matrix, which can exhibit non-trivial structure in complex systems. This suggests the ability of LGNN to learn and infer the dynamics of complex real-life structures directly from observables such as their trajectories.
Limitations and future works. From the mechanics perspective, LGNN assumes knowledge of the constraints; learning the constraints directly from the trajectory would be useful. Similarly, extending LGNN to model contacts, collisions, and deformations would allow more comprehensive learning of realistic systems. From the modeling perspective, in our message-passing LGNN, all messages are given equal importance. Attention heads in message-passing neural networks have been shown to improve performance remarkably in several domains [28]. We plan to study the impact of attention in LGNN in future work.
Acknowledgments and Disclosure of Funding
The authors thank the IIT Delhi HPC facility for providing the computational and storage resources.
1. What is the focus and contribution of the paper on Lagrangian graph neural networks?
2. What are the strengths of the proposed approach, particularly in learning Lagrangian mechanics using GNN?
3. What are the weaknesses of the paper regarding its experiments and comparisons with traditional methods?
4. Do you have any concerns about the scope and generalizability of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
In this paper, the authors propose a Lagrangian graph neural network (LGNN) that can learn the dynamics of rigid bodies which can be modelled as a graph. The LGNN consists of two GNNs, which learn the potential energy and kinetic energy of the system Lagrangian, respectively. The performance of the model is verified on 2D articulated-body simulations based on inextensible chains and rods. The experiments show that the proposed method can outperform the baseline Graph Network Simulator (GNS). The generalizability of the model is evaluated on systems with larger sizes and unseen topology.
Strengths And Weaknesses
Strengths
• Learning the Lagrangian mechanics using a GNN is interesting, and the experimental results and the videos shown in the supplemental materials are neat.
• The paper is written clearly and well organized. The introduction to the background knowledge helps in understanding the proposed idea.
Weaknesses
• The method is validated on simple (fewer than 20 links) rigid-body simulation scenarios, where traditional numerical methods can solve the problem fairly accurately and efficiently. Therefore, the key point I am concerned about is the advantage of the proposed learning-based method compared to traditional methods. The advantages could be either being more efficient than traditional simulations or capturing dynamics that traditional simulations do not easily capture. However, comparisons for such cases are missing, which makes the contribution somewhat hard to evaluate. I think experiments on larger-scale or more complex scenarios would make the proposed method more convincing.
Questions
1. The simulations shown can be solved by traditional numerical methods fairly accurately and efficiently. I think experiments on larger-scale or more complex scenarios, where traditional numerical methods are slow or struggling, would greatly strengthen the statements made in the paper.
2. The term rigid body dynamics is somewhat too general, I think. Actually, the simulations learned here are limited to articulated-body dynamics which can be modelled as a graph.
3. The term zero-shot generalizability is mentioned in the paper multiple times but without explanation or references. It would be helpful to explain what zero-shot indicates here.
4. Videos shown in the supplemental materials demonstrate a simulation of 1 s of physical time. However, in the paper, Figures 5 and 6 only show the errors for 0.1 s and 0.3 s of physical time. It would be more convincing to show how the errors evolve during the whole 1 s simulation.
5. There are no visual results of the simulation shown in the paper. Putting some key snapshots of the videos would help readers easily understand the problems being solved here.
6. In the Appendix A.6 training details, the authors mention the baseline method Lagrangian Graph Network (LGN); I am wondering what the difference is between LGN and the proposed LGNN.
Minors: Mixed usage of colon and period, e.g., the loss-function paragraph title (line 221) uses a colon while the other paragraph titles in the same section use a period; the paragraph titles in 5.1 use a colon but those in 5.2 use a period.
Limitations
The authors have discussed the limitations in the conclusion to some extent.
1. What is the focus and contribution of the paper on learning the dynamics of chains and ropes?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to handle complex interactions and friction models?
3. Do you have any concerns about the method's limitations in tackling realistic systems with collision and frictional contact?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the definitions of certain terms used in the method section or the distinction made between bars, chains, and ropes?
Summary Of The Paper
The paper presents a GNN-based Lagrangian formulation for learning the dynamics of chains and ropes, and achieves better accuracy and energy behavior than an unconstrained model (GNS).
Strengths And Weaknesses
Most papers on HNN/LNN approaches only demonstrate results on mass-spring and n-body systems, and the premise of using these methods on more complex, realistic systems is alluring. However, while the paper is motivated by the challenges of rigid-body simulation, I'd argue the method doesn't actually tackle any issue that makes rigid-body simulation a hard problem, such as collision between complex object shapes, frictional contact, or even 3D rotation for objects with a nontrivial inertia tensor. Contrary to what the introduction states, 2D ropes can be solved pretty well with simple particle-based methods (see, e.g., the 2016 Interaction Nets paper). The non-diagonal mass matrix and inertia tensor is (a) only a problem for methods which have an explicit formulation for it (e.g., LNN/HNN); end-to-end learned models (such as plain GNNs) can learn the effect from data and neighborhood relations, and (b) the effect disappears in the limit of small edge lengths. The main contribution of this paper hence is demonstrating that graph-based LNN approaches work for bars/chains/ropes and make use of the energy stability of those methods. However, it's worth noting that properties such as length preservation and generalization that the paper shows stem from the fact that very little is learned here; all the constraints (segment-length constraint, friction, drag, etc.) are manually encoded, only inertial dynamics are learned, and those equations are both universal and quite straightforward to infer from geometry. Normally you'd want a learned method to do the opposite, i.e., learn complicated interaction/friction models that may be hard to measure for real systems, and manually encode the known priors. The only case I could think of where the proposed approach would be useful is for inferring non-trivial mass distributions which may not be visible (say, on a drawbridge made from composite anisotropic material); but those use cases would need to be demonstrated. So while I do think there's value in expanding the type of systems that can be tackled with LNN/HNN methods, the paper in its current form is overclaiming its contributions (rigid dynamics) and quite limited in what it adds. It could be extended into a much stronger paper by showing how any of the harder aspects of rigid simulation can be tackled, or how the tricky bits (e.g., constraints, friction, ...) can be learned.
Questions
1. The terms gamma, A(q), etc. need to be properly defined in the methods section.
2. Why do you even need a full GNN with L layers of message passing? Kinetic/potential energy are local, so it would need at most one round of collecting neighborhood information.
3. It sounds like the paper makes a distinction between bars, chains, and ropes. In a rigid-body approximation, aren't those exactly the same (the only difference being that ropes have shorter segment lengths than bars)?
4. In the appendix, why is LGN (which I think refers to a Hamiltonian GNN) much worse? The core method is very similar, and I'd expect it to preserve energy very well.
Limitations
The method is quite limited in what it can be applied to, which is pointed out briefly in the conclusion section.
NIPS
Title
Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching
Abstract
Discriminatively localizing sounding objects in cocktail-party, i.e., mixed-sound, scenes is commonplace for humans, but still challenging for machines. In this paper, we propose a two-stage learning framework to perform self-supervised class-aware sounding object localization. First, we propose to learn robust object representations by aggregating the candidate sound localization results in single-source scenes. Then, class-aware object localization maps are generated in cocktail-party scenarios by referring to the pre-learned object knowledge, and the sounding objects are accordingly selected by matching audio and visual object category distributions, where the audiovisual consistency is viewed as the self-supervised signal. Experimental results on both realistic and synthesized cocktail-party videos demonstrate that our model is superior in filtering out silent objects and pointing out the locations of sounding objects of different classes. Code is available at https://github.com/DTaoo/Discriminative-Sounding-Objects-Localization.
1 Introduction
Audio and visual messages are pervasive in our daily life. Their natural correspondence provides humans with rich semantic information to achieve effective multi-modal perception and learning [28, 24, 15]; e.g., when in the street, we instinctively associate the talking sound with people nearby, and the roaring sound with vehicles passing by. In view of this, we want to ask: can they also facilitate machine intelligence? To pursue human-like audiovisual perception, the typical and challenging problem of visual sound localization is highly expected to be addressed, which aims to associate sounds with specific visual regions and rewards the visual perception ability in the absence of semantic annotations [14, 18, 3, 27, 10]. A straightforward strategy is to encourage the visual features of the sound source to have higher similarity with the sound embeddings, which has shown considerable performance in simple scenarios with a single sound [21, 22, 27]. However, in our daily scenarios, i.e., the cocktail-party, there are simultaneously multiple sounding objects as well as silent ones (the silent objects are considered capable of producing sound), and this simple strategy mostly fails to discriminatively localize different sound sources from the mixed sound [16]. Recently, audiovisual content modeling has been proposed to excavate concrete audio and visual components in the scenario for localization. Yet, due to the lack of sufficient semantic annotation, existing works have to resort to extra scene prior knowledge [16, 17, 25] or construct pretext tasks [31, 30]. Even so, these methods cannot deal well with such complex cocktail-party scenarios, i.e., not only answering where the sounding area is but also what the sounding area is. In this paper, we target class-aware sounding object localization from mixed sound, where the audiovisual scenario consists of multiple sounding objects and silent objects, as shown in Fig. 1.
*Corresponding Author, Beijing Key Laboratory of Big Data Management and Analysis Methods, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China. The research reported in this paper was mainly conducted when the corresponding author worked at Baidu Research.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
This interesting problem is challenging from two perspectives: 1) discriminatively localizing objects belonging to different categories without resorting to semantic annotations of objects; 2) determining whether a specific object is sounding or not, and filtering out silent ones given the corresponding mixed sound. Faced with these challenges, we ask how humans address them. Elman [9] stated that humans can transform seemingly unlearnable tasks into learnable ones by starting from a simpler initial state and then building on it to develop more complicated representations of structure. Inspired by this, we propose a two-stage framework, evolving from the single-sound scenario to the cocktail-party case. Concretely, we first learn potential object knowledge from sound localization in single-source scenarios and aggregate it into a dictionary to pursue a robust representation for each object category. By referring to the dictionary, class-aware object localization maps are accordingly produced to support sounding object selection in the multi-source scenario. Then, we reduce the sounding object localization task to a self-supervised audiovisual matching problem, where the sounding objects are selected by minimizing the category-level difference between the audio and visual distributions. With this evolved curriculum, we can filter out silent objects and achieve class-aware sounding object localization in a cocktail-party scenario. To summarize, our main contributions are as follows. First, we introduce an interesting and challenging problem, i.e., discriminatively localizing sounding objects in the cocktail-party scenario without manual annotations for objects. Second, we propose a novel step-by-step learning framework, which learns robust object representations from single-source localization and then expands to sounding object localization by taking audiovisual consistency as self-supervision for category distribution matching in the cocktail-party scenario. Third, we synthesize cocktail-party videos and annotate sounding-object bounding boxes for the evaluation of class-aware sounding object localization. Our method shows excellent performance on both synthetic and realistic data.
2 Related work Object localization Weakly- and self-supervised object localization aims to achieve performance comparable to supervised methods with limited annotations. Existing weakly-supervised methods take holistic image labels as supervision, where salient image regions scored by recognition confidence are considered potential object locations [20, 21, 6, 32, 26, 7]. For self-supervised models, Baek et al. [5] used point-symmetric transformation as self-supervision to extract class-agnostic heat maps for object localization. These methods are purely based on visual features, while we propose to employ audiovisual consistency as self-supervision to achieve class-aware object localization. Self-supervised audiovisual learning The natural correspondence between sound and vision provides essential supervision for audiovisual learning [2, 3, 22, 4, 23]. In [23, 4], the authors learn feature representations of one modality under supervision from the other. In [2, 22], the authors adopt clip-level audiovisual correspondence and temporal synchronization as self-supervision to correlate audiovisual content. Hu et al. [16, 17] associate latent sound-object pairs with clustered audiovisual components, but their performance relies greatly on a predefined number of clusters.
Alwassel et al. [1] created pseudo labels by clustering features to boost multi-modal representation learning. In our work, we alternately use audiovisual correspondence and pseudo labels from clustering to boost audiovisual learning and learn object representations. Sounding object localization in visual scenes Recent methods for localizing sound sources in a visual context mainly focus on joint modeling of the audio and visual modalities [3, 22, 27, 29, 16, 30, 31]. In [3, 22], the authors adopt Class Activation Maps (CAM) [32] or similar methods to measure the correspondence score between audio and visual features on each spatial grid to localize sounding objects. Senocak et al. [27] proposed an attention mechanism to capture primary areas in a semi-supervised or unsupervised setting. Tian et al. [29] leveraged audio-guided visual attention and temporal alignment to find semantic regions corresponding to sound sources. These methods tend to perform well in single-source scenes, but comparatively poorly for mixed-sound localization. Zhao et al. [31, 30] employed a sound-based mix-then-separate framework to associate the audio and visual feature maps, where the sound source position is given by the sound energy of each pixel. Hu et al. [16] established audiovisual clustering to associate sound centers with corresponding visual sources, but it requires prior knowledge of the number of sound sources, and the specific category of each clustering result remains unknown. In contrast, our method can discriminatively localize sounding objects in the cocktail party by employing an established object dictionary to generate class-aware object localization maps and referring to the audiovisual localization map to filter out the silent ones.
3 The proposed method In this work, we aim to discriminatively localize sounding objects from their mixed sound without manual annotations of object categories. To address this novel and challenging problem, we develop a two-stage learning strategy, evolving from localization in a simple scenario with a single sounding object to the complex one with multiple sounding objects, i.e., the cocktail party. This curriculum-learning perspective is based on the finding that existing audiovisual models [3, 16, 27] are capable of predicting reasonable localization maps of sounding objects in simple scenarios, which provides an effective knowledge reference for candidate visual localization of different objects in the cocktail-party scenario. Specifically, for a given set of audiovisual pairs with an arbitrary number of sounding objects, $\mathcal{X} = \{(a_i, v_i) \mid i = 1, 2, \dots, N\}$, we first divide it into a simple set whose scenarios contain only a single sounding object, $\mathcal{X}^s = \{(a_i^s, v_i^s) \mid i = 1, 2, \dots, N^s\}$, and a complex set, where each audiovisual pair contains several sounding objects, $\mathcal{X}^c = \{(a_i^c, v_i^c) \mid i = 1, 2, \dots, N^c\}$, with $\mathcal{X} = \mathcal{X}^s \cup \mathcal{X}^c$ and $\mathcal{X}^s \cap \mathcal{X}^c = \emptyset$. In the first stage, we learn potential visual representations of sounding objects from their localization maps in the simple scenario $\mathcal{X}^s$, with which we build a representation dictionary of objects as a visual object knowledge reference. In the second stage, by referring to the learned representation dictionary, we step forward to discriminatively localize multiple sounding objects in the complex scenario $\mathcal{X}^c$, where the category distribution of the localized sounding objects is required to match the distribution of their mixed sound according to natural audiovisual consistency [16].
In the remaining sections, we detail the first and second learning stages for generalized sounding object localization.
3.1 Learning object representation from localization For the simple audiovisual scenario with a single sound source, $\mathcal{X}^s$, we aim to visually localize the sounding object from its corresponding sound and synchronously build a representation dictionary from the localization outcomes. The framework is shown in the left part of Fig. 2. At the first step, given an arbitrary audiovisual pair $(a_i^s, v_i^s) \in \mathcal{X}^s$, to predict the position of the sounding object exactly, we need to find which region of the input image $v_i^s$ is highly correlated with the sound $a_i^s$. To this end, we feed the image into a convolution-based network (e.g., ResNet [13]) to extract spatial feature maps $f(v_i^s) \in \mathbb{R}^{C \times H \times W}$ as local image region descriptors, where $C$ is the channel dimension and $H$ and $W$ are the spatial sizes. The localization network is then encouraged to enhance the similarity between the image region of the sounding object and the corresponding sound embedding $g(a_i^s)$ from the same video, while suppressing mismatched pairs (from different videos), i.e., $(a_i^s, v_j^s)$ with $i \neq j$. Formally, the localization objective can be written as
$$\mathcal{L}_1 = \mathcal{L}_{bce}\big(y_{match}, \mathrm{GMP}(l(g(a_i^s), f(v_j^s)))\big), \quad (1)$$
where the indicator $y_{match} = 1$ if the audio and image are from the same pair, i.e., $i = j$, and $y_{match} = 0$ otherwise, and $\mathcal{L}_{bce}$ is the binary cross-entropy loss. $l(g(a_i^s), f(v_j^s))$ is the audiovisual localization function, computed as the cosine similarity of the audio and visual feature representations (followed by a parameterized sigmoid function to achieve a scale comparable to the binary supervision; more details about the similarity computation and networks are in the supplementary material). Similar to [3], Global Max Pooling (GMP) is used to aggregate the localization map to match the scene-level supervision. As no extra semantic annotation is employed, the localization model is optimized in a fully self-supervised fashion. As the localization map provides an effective reference for object position, it helps to reduce the disturbance of complex backgrounds and boosts the visual perception of object appearance. To supply a better visual object reference for multi-source localization in the second stage, we utilize these localization outcomes to learn a representation dictionary $D$ for different object categories. First, we binarize the localization map $l_i$ of the $i$-th audiovisual pair into a mask $m_i \in \{0, 1\}^{H \times W}$. As there should be only one sounding object in the simple scenario $\mathcal{X}^s$, $m_i$ should be a single-object-aware mask indicator. Hence, we can extract a potential object representation $o_i \in \mathbb{R}^C$ from the masked visual features $f(v_i^s)$, i.e.,
$$o_i = \mathrm{GAP}(f(v_i^s) \circ m_i), \quad (2)$$
where GAP is the Global Average Pooling operation and $\circ$ is the Hadamard product. These object representations $O = \{o_1, o_2, \dots, o_{N^s}\}$ are extracted from coarse localization results, which makes it difficult for them to provide a robust expression of object characteristics. To address this, we aim to learn high-quality object indicators from these candidate representations in a dictionary-learning fashion. Specifically, we jointly learn a $K \times C$ dictionary $D$ and an assignment $y_i$ for each object representation $o_i$, where each key $d^k \in \mathbb{R}^{1 \times C}$ is identified as the representative object character of the $k$-th category.
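Before moving to the dictionary-learning step, a minimal PyTorch-style sketch of Eqs. (1)-(2) follows. All names are illustrative; the sigmoid parameters `w` and `b`, the mask threshold, and averaging over the masked area (rather than over all spatial locations) are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def localization_map(audio_emb, visual_feat):
    """Cosine similarity between g(a) and each location of f(v)."""
    # audio_emb: (B, C); visual_feat: (B, C, H, W)
    a = F.normalize(audio_emb, dim=1)[:, :, None, None]
    v = F.normalize(visual_feat, dim=1)
    return (a * v).sum(dim=1)                          # (B, H, W)

def stage1_loss(audio_emb, visual_feat, y_match, w=10.0, b=0.0):
    """Eq. (1): BCE on the Global-Max-Pooled, sigmoid-rescaled map."""
    l_map = localization_map(audio_emb, visual_feat)
    score = torch.sigmoid(w * l_map + b)               # parameterized sigmoid
    gmp = score.flatten(1).max(dim=1).values           # Global Max Pooling
    return F.binary_cross_entropy(gmp, y_match)        # y_match: (B,) float in {0, 1}

def object_representation(visual_feat, l_map, thresh=0.05):
    """Eq. (2): pooled masked features f(v) ∘ m, averaged over the masked area."""
    mask = (l_map > thresh).float().unsqueeze(1)       # binarized mask m_i
    masked = visual_feat * mask                        # Hadamard product
    area = mask.flatten(2).sum(-1).clamp(min=1.0)
    return masked.flatten(2).sum(-1) / area            # (B, C) object vector o_i
```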
As K-means can be viewed as an efficient way of constructing a representation dictionary [8], we minimize the following problem:
$$\mathcal{L}(D, y_i) = \sum_{i=1}^{N^s} \min_{y_i} \|o_i - D^T \cdot y_i\|_2^2 \quad \text{s.t.}\ \ y_i \in \{0, 1\}^K,\ \textstyle\sum y_i = 1, \quad (3)$$
where $K$ is the number of object categories. Solving this problem provides a dictionary $D^*$ and a set of category assignments $\{y_i^* \mid i = 1, 2, \dots, N^s\}$; the former is used for potential object detection in the second stage, and the latter can be viewed as pseudo labels indicating different object categories. Recalling that object localization can benefit from generalized categorization [21, 32], we therefore alternately optimize the model w.r.t. the localization objective in Eq. 1 and an object classification objective with the generated pseudo labels, which substantially improves the localization performance.
3.2 Discriminative sounding object localization To discriminatively localize different sounding objects from their mixed sound, we first localize all the objects appearing in the image; among these, the sounding ones are selected based on whether they appear in the sounding area, and they are required to match the category distribution of the corresponding audio messages, as shown in the right part of Fig. 2. Let $(a_i^c, v_i^c) \in \mathcal{X}^c$ denote the $i$-th audiovisual message containing multiple sounding objects. By referring to the learned object representation dictionary $D^*$, the locations of the appearing objects are indicated by computing the inner-product similarity between each location of the visual feature map $f(v_i^c) \in \mathbb{R}^{C \times H \times W}$ and each representation key $d^k \in \mathbb{R}^{1 \times C}$ in $D^*$:
$$m_i^k = d^k \cdot f(v_i^c), \quad (4)$$
where $m_i^k$ is the predicted object location map of the $k$-th category in the $i$-th visual scenario. If the scenario does not involve an object of the $k$-th category, the corresponding localization map $m_i^k$ tends to remain low in response (similarity). At this point, we obtain $K$ localization maps, indicating the locations of the different categories of objects. As stated in the beginning, the cocktail-party scenario may consist of multiple sounding objects and silent objects. To localize the sounding objects and eliminate the silent ones, the sounding area $l_i$ that is highly related to the input mixed sound is regarded as a sounding-object filter, formulated as
$$s_i^k = m_i^k \circ l_i. \quad (5)$$
$s_i^k$ is deemed the location of the sounding object of the $k$-th category. Intuitively, if the $k$-th object does not produce any sound, even if it visually appears in the image, no sounding areas will be reflected in $s_i^k$. Hence, the category distribution of sounding objects for $v_i^c$ can be written as
$$p_{v_i}^{so} = \mathrm{softmax}\big([\mathrm{GAP}(s_i^1), \mathrm{GAP}(s_i^2), \dots, \mathrm{GAP}(s_i^K)]\big). \quad (6)$$
As discussed in recent works [16], the natural synchronization between vision and sound provides self-supervised consistency in terms of the sounding object category distribution. In other words, the sound character and the visual appearance of the same sounding object correspond in taxonomy, such as barking and dog, meowing and cat. Hence, we train the model to discriminatively localize the sounding objects by solving the following problem:
$$\mathcal{L}_c = D_{KL}(p_{v_i}^{so} \,\|\, p_{a_i}^{so}), \quad (7)$$
where $p_{a_i}^{so}$ is the category distribution of the sound $a_i$, predicted by a well-trained audio event network (trained with the pseudo labels from the first stage; more details are in the supplementary materials), and $D_{KL}$ is the Kullback–Leibler divergence.
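A minimal sketch of this second-stage pipeline (Eqs. 4-7) follows, reusing the imports and `localization_map` from the previous snippet; `audio_logits` from the audio event network is a hypothetical input, and the exact normalization of the maps is an assumption.

```python
import torch
import torch.nn.functional as F

def stage2_consistency_loss(audio_emb, visual_feat, dictionary, audio_logits):
    """Eqs. (4)-(7): class-aware maps, sounding-object filtering, KL matching."""
    # dictionary: (K, C) learned keys d^k; visual_feat: (B, C, H, W)
    obj_maps = torch.einsum('kc,bchw->bkhw', dictionary, visual_feat)  # Eq. (4)
    l_map = localization_map(audio_emb, visual_feat).unsqueeze(1)      # sounding area l_i
    sound_maps = obj_maps * l_map                                      # Eq. (5): s^k = m^k ∘ l
    p_vis = F.softmax(sound_maps.flatten(2).mean(-1), dim=1)           # Eq. (6): GAP + softmax
    p_aud = F.softmax(audio_logits, dim=1)                             # audio event network output
    # Eq. (7): D_KL(p_vis || p_aud); F.kl_div expects log-probs of the second distribution
    return F.kl_div(torch.log(p_aud + 1e-8), p_vis, reduction='batchmean')
```

In training, this consistency loss is combined with the localization objective, as described next.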
Overall, the second stage consists of two learning objectives, category-agnostic sounding area detection and class-aware sounding object localization, i.e.,
$$\mathcal{L}_2 = \mathcal{L}_c + \lambda \cdot \mathcal{L}_1, \quad (8)$$
where $\lambda$ is the hyper-parameter balancing the importance of the two objectives. By solving the problem in Eq. 8, the locations of sounding objects are discriminatively revealed in the category-specific maps $\{s_i^1, s_i^2, \dots, s_i^K\}$. Finally, softmax regression is performed across these class-aware maps at each location for better visualization.
4 Experiments 4.1 Datasets and annotation MUSIC The MUSIC dataset [31] contains 685 untrimmed videos, 536 solo and 149 duet, covering 11 classes of musical instruments. To better evaluate sound localization results in diverse scenes, we use the first five/two videos of each instrument category in solo/duet for testing and use the rest for training. Besides, we use one half of the solo training data for the first-stage training and employ the other half to generate synthetic data for the second-stage learning. Note that some videos are no longer available on YouTube; we finally obtain 489 solo and 141 duet videos. MUSIC-Synthetic The categories of instruments in the duet videos of the MUSIC dataset are quite unbalanced; e.g., more than 80% of duet videos contain the sound of a guitar, which is difficult for training and introduces a large bias in testing. Thus, we build category-balanced multi-source videos by artificially combining solo videos to facilitate our second-stage learning and evaluation. Concretely, we first randomly choose four 1-second solo audiovisual pairs of different categories, then mix two randomly chosen clips of the four with jittering as the multi-source audio waveform, and concatenate four frames of these clips as the multi-source video frame. That is, in the synthesized audiovisual pair, two instruments are making sound while the other two are silent. This synthesized dataset is therefore well suited for evaluating discriminative sounding object localization (available at https://zenodo.org/record/4079386#.X4NPStozbb0). AudioSet-instrument The AudioSet-instrument dataset is a subset of AudioSet [12], consisting of 63,989 10-second video clips covering 15 categories of instruments. Following [11], we use the videos from the "unbalanced" split for training and those from the "balanced" split for testing. We employ the solo videos with a single sound source for the first-stage training and testing, and adopt those with multiple sound sources for the second-stage training and testing. Bounding box annotation To quantitatively evaluate sound localization performance, we use a well-trained Faster R-CNN detector for the 15 instruments [11] to generate bounding boxes on the test set. We further refine the detection results and manually annotate whether each object is sounding or silent. Annotations are publicly available in the released code for reproducibility.
4.2 Experimental settings Implementation details Each video in the above datasets is equally divided into one-second clips with no intersection. We randomly sample one image from each video clip as the visual message, which is resized to 256 × 256 then randomly cropped to 224 × 224. The audio messages are first re-sampled to 16 kHz, then translated into a spectrogram via the Short-Time Fourier Transform with a Hann window length of 160 and a hop length of 80.
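A brief torchaudio sketch of this audio preprocessing, including the Log-Mel projection described in the next paragraph; the FFT size and mel-band count are assumptions beyond the quoted window and hop lengths.

```python
import torch
import torchaudio

# Hypothetical preprocessing: 16 kHz audio, Hann window of 160 samples, hop of 80,
# projected to 64 mel bands; a 1-second clip yields 201 frames (with center padding).
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=160, win_length=160, hop_length=80,
    window_fn=torch.hann_window, n_mels=64)

waveform = torch.randn(1, 16000)               # one second of (dummy) audio
log_mel = torch.log(to_mel(waveform) + 1e-6)   # (1, 64, 201) -> the 201 x 64 input, transposed
```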
Similarly to [31, 16], a Log-Mel projection is performed over the spectrogram to better represent sound characteristics, yielding a 201 × 64 matrix. The audio and visual messages from the same video clip are deemed a matched pair, and mismatched otherwise. We use variants of ResNet-18 [13] as audio and visual feature extractors; the detailed architecture is shown in the supplementary materials. Our model is trained with the Adam optimizer with a learning rate of 10^{-4}. In the training phase, we use a threshold of 0.05 to binarize the localization maps to obtain the object mask, with which we extract object representations over the feature maps. Each center representation in the object dictionary is accordingly assigned to one object category, which is then used for the class-aware localization evaluation. Note that the proposed model is trained and evaluated on the same dataset. Evaluation metric We employ Intersection over Union (IoU) and Area Under Curve (AUC) as evaluation metrics for single-source sound localization, calculated from the predicted sounding area and the annotated bounding box. For discriminative sounding object localization in the cocktail party, we introduce two new metrics, Class-aware IoU (CIoU) and No-Sounding-Area (NSA), for quantitative evaluation. CIoU is defined as the average over class-specific IoU scores, and NSA is the average activation area on the localization maps of silent categories where the activation is below a threshold $\tau$:
$$\mathrm{CIoU} = \frac{\sum_{k=1}^{K} \delta_k \, \mathrm{IoU}_k}{\sum_{k=1}^{K} \delta_k}, \qquad \mathrm{NSA} = \frac{\sum_{k=1}^{K} (1 - \delta_k) \sum \mathbb{1}[s^k < \tau]}{\sum_{k=1}^{K} (1 - \delta_k) \, A}, \quad (9)$$
where $\mathrm{IoU}_k$ is calculated from the predicted sounding object area and the annotated bounding box for the $k$-th class, $s^k$ is the localization map of the $k$-th class, $A$ is the total area of the localization map, and the indicator $\delta_k = 1$ if an object of class $k$ is making sound and $0$ otherwise. These two metrics measure the model's ability to discriminatively localize sounding objects and filter out the silent ones.
4.3 Single sounding object localization In this subsection, we focus on the simpler task of sound localization in the single-source scenario. Table 1 shows the results on MUSIC-solo and AudioSet-instrument-solo videos, where our method is compared with recent state-of-the-art methods. Note that we use the public source code from [31, 16]. From the shown results, two points deserve attention. First, the compared methods [3, 16, 27] are trained to match the correct audiovisual pair via a contrastive [16, 27] or classification [3] objective, similar to ours. Yet, our proposed method outperforms these methods by a large margin. This indicates that the object representations learned from localization are effective for semantic discrimination, which further benefits object localization via discriminative learning of object categories. To illustrate this clearly, we plot the distribution of features extracted from the well-trained vision network via t-SNE [19]. As shown in Fig. 3, the extracted visual features on MUSIC-solo are more discriminative in terms of object category when we train the model in the alternating localization-classification fashion, where the normalized mutual information of the clustering with masked object features reaches 0.74, revealing the high discriminability of the learned representations. Second, our method is comparable to Sound-of-pixel [31], especially on the MUSIC-solo dataset.
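For concreteness, a minimal sketch of the two metrics in Eq. (9); computing each $\mathrm{IoU}_k$ against the annotated boxes is assumed to happen elsewhere, and the threshold value is illustrative.

```python
import torch

def ciou_nsa(class_maps, iou_per_class, is_sounding, tau=0.05):
    """Eq. (9). class_maps: (K, H, W) maps s^k; iou_per_class: (K,) IoU_k;
    is_sounding: (K,) bool indicator delta_k."""
    d = is_sounding.float()
    ciou = (d * iou_per_class).sum() / d.sum().clamp(min=1.0)
    # fraction of each map's area with activation below tau, i.e. sum[s^k < tau] / A
    below = (class_maps < tau).float().flatten(1).mean(-1)             # (K,)
    nsa = ((1.0 - d) * below).sum() / (1.0 - d).sum().clamp(min=1.0)   # silent classes only
    return ciou, nsa
```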
The comparable performance of Sound-of-pixel [31] arises because it employs an audio-based mix-then-separate learning strategy, which relies heavily on the quality of the input audio messages. Hence, it can effectively correlate specific visual areas with audio embeddings in simple scenes with a single sound, but suffers in noisy multi-source scenarios. In contrast, our method can deal with both conditions simultaneously and does not require constructing a complex learning objective. Related results can be found in the next subsection.
4.4 Multiple sounding objects localization Natural audiovisual scenarios usually consist of multiple sounding and silent objects, which makes exactly localizing the sounding ones more challenging. To compare different methods responsibly under such scenarios, both synthetic and realistic data are evaluated. As shown in Table 2, our model shows significant improvements over all the compared methods in terms of CIoU. This stems mainly from three reasons. First, our model takes the class information of sounding objects into consideration by employing a category-based audiovisual alignment, i.e., Eq. 7, while other methods [3, 27] simply correlate the audiovisual features for sounding area detection and thus fail to discriminatively localize the sounding objects. Second, our localization results build on the effective visual knowledge learned in the first stage, which greatly helps to excavate and localize potential objects in the cocktail-party scenario, while the compared method [31] cannot handle such scenarios with its mix-then-separate learning fashion. Third, referring to the NSA results, our model can automatically filter out the silent objects, whereas DMC [16] has to rely on given knowledge of the number of sounding objects. Although [31] scores high in NSA, this is probably due to channel activations that are too low to detect objects rather than success in filtering out silent ones. Apart from the quantitative evaluation, we also provide visualized localization results in Fig. 4. In the realistic scenario, the attention-based approach [27] and Object-that-sound [3] can only localize the sounding area without discriminating guitar from cello, while DMC [16] suffers from the complex audiovisual components and mixes up different visual areas. Among the compared methods, Sound-of-pixel [31] provides better results but cannot exactly localize the sounding objects and filter out the silent saxophone, probably because it depends heavily on the quality of the mixed sounds. In contrast, our model successfully localizes the sounding guitar and cello in the class-specific maps while keeping low responses for the silent saxophone and other visual areas. The synthetic data show similar results.
4.5 Ablation study In this section, we perform ablation studies on the influence of hyper-parameters. More studies can be found in the supplementary material. Loss function weight λ. As shown in Table 3, the hyper-parameter λ has only slight effects on localization performance in the range [0.5, 1.0], but its influence grows when it becomes much smaller or larger. This stems from the fact that the localization objective $\mathcal{L}_1$ converges more easily than the distribution matching objective $\mathcal{L}_c$. When λ becomes much larger, the model suffers from overfitting to the localization objective.
When λ becomes much smaller, it is difficult to achieve reasonable sounding area detection for effective filtering. Number of clusters and mask threshold. In the previous experimental settings, we set the number of clusters in the first stage equal to the number of categories in the dataset, which provides a strong prior. We therefore explore using different numbers of clusters as well as different mask thresholds for the first-stage object feature extraction and clustering. For evaluation, we adaptively aggregate multiple clusters into one specific category for discriminative localization. Table 4 shows the results on the MUSIC dataset. Our method is generally robust to these two hyper-parameters and achieves comparable performance without knowing the specific number of categories in the dataset. Training settings for the second stage. We further present ablation studies on the procedure and training objective of the second stage. We denote the localization loss as $\mathcal{L}_1$, the audiovisual consistency loss as $\mathcal{L}_c$, and the silent-area suppression operation as Prod. As shown in Table 5, the product operation is crucial, especially in the synthetic setting. This is because our manually synthesized data contain four instruments in a single frame, with two making sound and the other two silent. Without the Prod operation, all objects would produce high responses, making the categorical matching between audio and visual components fail and leading to very poor performance. On the other hand, the $\mathcal{L}_c$ objective boosts localization on both synthetic and real-world duet data, which demonstrates that excavating the inner consistency between the two modalities helps cross-modal modeling.
5 Discussion In this paper, we propose to discriminatively localize sounding objects in the absence of object category annotations, where object localizations in single-source videos are aggregated to build discriminative object representations and audiovisual consistency is used as self-supervision for category distribution alignment. Although the object semantics learned from simple cases contribute noticeable results, the method still needs a rough partition of single- and multi-source videos, which should be addressed in future studies.
Acknowledgement This work was supported in part by the Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098 and Public Computing Cloud, Renmin University of China.
Broader Impact Visual sound source localization is a basic perception ability for humans, and this work encourages machines to acquire a similar ability, especially when faced with multi-source scenarios. Hence, the impact mainly lies in machine learning techniques and applications. On the one hand, the proposed approach is fully based on self-supervised learning, yet it rewards considerable discrimination ability for visual objects and correlation capability across the audio and visual modalities. Predictably, without elaborate manual annotation, this approach could still facilitate progress in unimodal and multimodal learning and in parsing/modeling complex scenes.
On the other hand, it steps toward human-like multimodal perception, which could further contribute to society in several respects, e.g., audio-assisted scene understanding for deaf people by figuring out which objects are making sound, and facilitating exploration of how to solve the cocktail-party effect in realistic audiovisual scenes, i.e., perceiving different sounds and focusing on the pertinent content from mixed auditory input.
1. What is the main contribution of the paper regarding sounding object localization in the cocktail party scenario?
2. What are the strengths of the proposed approach, particularly in its ability to address natural audiovisual scenarios and its curriculum learning approach?
3. What are some potential weaknesses or areas for improvement in the paper, such as additional ablations or implementation details?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors propose a two-stage framework to tackle the task of discriminatively localising sounding objects in the cocktail-party scenario without manual annotations. Robust object representations are first learned in a single-source scenario, before expanding to a multi-source scenario where sounding object localisation is formulated as a self-supervised audiovisual consistency problem, solved through object category (audio and visual) distribution matching.
Strengths
+ Curriculum learning from a simple scenario with a single sound source to a complex scenario with multiple sounding sources, i.e. the cocktail-party scenario.
+ The proposed method addresses natural audiovisual scenarios, which consist of multiple sounding and silent objects (unlike some prior works, which do not address silent objects). It is nice to match audio and visual object category distributions.
+ Ablations are performed (supplementary material) to show the benefit of alternating between localisation and classification for the first stage (single source).
+ In the single-source scenario, the proposed method achieves either better or comparable results to Sound of pixels [30] (the top-performing baseline out of several shown) on the MUSIC-solo and AudioSet-instrument-solo dataset splits.
+ In the multiple-source scenario, the proposed method outperforms all baselines on the MUSIC-Synthetic, MUSIC-Duet and AudioSet-Multi dataset splits for the CIoU and AUC metrics (although not NSA).
Weaknesses
1. Some more ablations would be nice. The authors could, for example, investigate the impact of removing, in the second stage (multi-source scenario), the sounding-area map (l_i) as well as the sounding object location maps (s_i).
2. Some implementation details are missing, e.g. how long the authors train the first stage vs. the second stage. More details would be better for reproducibility.
NIPS
Title Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching Abstract Discriminatively localizing sounding objects in cocktail-party, i.e., mixed sound scenes, is commonplace for humans, but still challenging for machines. In this paper, we propose a two-stage learning framework to perform self-supervised class-aware sounding object localization. First, we propose to learn robust object representations by aggregating the candidate sound localization results in the single source scenes. Then, class-aware object localization maps are generated in the cocktail-party scenarios by referring the pre-learned object knowledge, and the sounding objects are accordingly selected by matching audio and visual object category distributions, where the audiovisual consistency is viewed as the self-supervised signal. Experimental results in both realistic and synthesized cocktail-party videos demonstrate that our model is superior in filtering out silent objects and pointing out the location of sounding objects of different classes. Code is available at https://github.com/DTaoo/ Discriminative-Sounding-Objects-Localization. 1 Introduction Audio and visual messages are pervasive in our daily-life. Their natural correspondence provides humans with rich semantic information to achieve effective multi-modal perception and learning [28, 24, 15], e.g., when in the street, we instinctively associate the talking sound with people nearby, and the roaring sound with vehicles passing by. In view of this, we want to question that can they also facilitate machine intelligence? To pursue the human-like audiovisual perception, the typical and challenging problem of visually sound localization is highly expected to be addressed, which aims to associate sounds with specific visual regions and rewards the visual perception ability in the absence of semantic annotations [14, 18, 3, 27, 10]. A straightforward strategy is to encourage the visual features of sound source to take higher similarity with the sound embeddings, which has shown considerable performance in the simple scenarios with single sound [21, 22, 27]. However, there are simultaneously multiple sounding objects as well as silent ones (i.e. The silent objects are considered capable of producing sound.). in our daily scenario, i.e., the cocktail-party, this simple strategy mostly fails to discriminatively localize different sound sources from mixed sound [16]. Recently, audiovisual content modeling is proposed to excavate concrete audio and visual components in the scenario for localization. Yet, due to lack of sufficient semantic annotation, existing works have to resort to extra scene prior knowledge [16, 17, 25] or construct pretext task [31, 30]. Even so, these methods cannot well deal ∗Corresponding Author, Beijing Key Laboratory of Big Data Management and Analysis Methods, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China. The research reported in this paper was mainly conducted when the corresponding author worked at Baidu Research. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. with such complex cocktail-party scenario, i.e., not only answering where the sounding area is but also answering what the sounding area is. In this paper, we target to perform class-aware sounding object localization from their mixed sound, where the audiovisual scenario consists of multiple sounding objects and silent objects, as shown in Fig. 1. 
This interesting problem is quite challenging from two perspectives: 1) Discriminatively localizing objects belonging to different categories without resorting to semantic annotations of objects; 2) Determining whether a specific object is sounding or not, and filtering out silent ones from the corresponding mixed sound. When faced with these challenges, we want to know how do we human address them? Elman [9] stated that human could transform these seemingly unlearnable tasks into learnable by starting from a simpler initial state then building on which to develop more complicated representations of structure. Inspired by this, we propose a two-stage framework, evolving from single sound scenario to the cocktailparty case. Concretely, we first learn potential object knowledge from sound localization in single source scenario, and aggregate them into a dictionary for pursuing robust representation for each object category. By referring to the dictionary, class-aware object localization maps are accordingly proposed for meeting the sounding object selection in multi-source scenario. Then, we reduce the sounding object localization task into a self-supervised audiovisual matching problem, where the sounding objects are selected by minimizing the category-level audio and visual distribution difference. With these evolved curriculums, we can filter out silent objects and achieve class-aware sounding object localization in a cocktail-party scenario. To summarize, our main contributions are as follows. First, we introduce an interesting and challenging problem, i.e., discriminatively localizing sounding objects in the cocktail-party scenario without manual annotation for objects. Second, we propose a novel step-by-step learning framework, which learns robust object representations from single source localization then further expands to the sounding object localization via taking audiovisual consistency as self-supervision for category distribution matching in the cocktail-party scenario. Third, we synthesize some cocktail-party videos and annotate sounding object bounding boxes for the evaluation of class-aware sounding object localization. Our method shows excellent performance on both synthetic and realistic data. 2 Related work Object localization Weakly- and self-supervised object localization expect to achieve comparable performance to the supervised ones with limited annotations. Existing weakly-supervised methods take holistic image labels as supervision, where the salient image region evaluated by recognition scores are considered as the potential object location[20, 21, 6, 32, 26, 7]. For self-supervised models, Baek et al. [5] used point symmetric transformation as self-supervision to extract class-agnostic heat maps for object localization. These methods are purely based on visual features, while we propose to employ audiovisual consistency as self-supervision to achieve class-aware object localization. Self-supervised audiovisual learning The natural correspondence between sound and vision provides essential supervision for audiovisual learning [2, 3, 22, 4, 23]. In [23, 4], authors introduced to learn feature representations of one modality with the supervision from the other. In [2, 22], authors adopted clip-level audiovisual correspondence and temporal synchronization as self-supervision to correlate audiovisual content. Hu et al. [16, 17] associate latent sound-object pairs with clustered audiovisual components, but its performance greatly relies on predefined number of clusters. 
Alwassel et al. [1] created pseudo labels from clustering features to boost multi-modal representation learning. While in our work, we alternatively use audiovisual correspondence and pseudo labels from clustering to boost audiovisual learning and learn object representations. Sounding object localization in visual scenes Recent methods for localizing sound source in visual context mainly focus on joint modeling of audio and visual modalities [3, 22, 27, 29, 16, 30, 31]. In [3, 22], authors adopted Class Activation Map (CAM) [32] or similar methods to measure the correspondence score between audio and visual features on each spatial grid to localize sounding objects. Senocak et al. [27] proposed an attention mechanism to capture primary areas in a semisupervised or unsupervised setting. Tian et al. [29] leveraged audio-guided visual attention and temporal alignment to find semantic regions corresponding to sound sources. These methods tend to perform well in single source scenes, but comparatively poor for mixed sound localization. Zhao et al. [31, 30] employed a sound-based mix-then-separate framework to associate the audio and visual feature maps, where the sound source position is given by the sound energy of each pixel. Hu et al. [16] established audiovisual clustering to associate sound centers with corresponding visual sources, but it requires the prior of the number of sound sources, and the specific category of the clustering result remains unknown. In contrast, our method can discriminatively localize sounding objects in cocktail-party by employing established object dictionary to generate class-aware object localization maps, and referring to the audiovisual localization map to filter out the silent ones. 3 The proposed method In this work, we aim to discriminatively localize the sounding objects from their mixed sound without the manual annotations of object category. To facilitate this novel and challenging problem, we develop a two-stage learning strategy, evolving from the localization in simple scenario with single sounding object to the complex one with multiple sounding objects, i.e., cocktail-party. Such curriculum learning perspective is based on the findings that existing audiovisual models [3, 16, 27] are capable of predicting reasonable localization map of sounding object in simple scenario, which is considered to provide effective knowledge reference for candidate visual localization of different objects in the cocktail-party scenario. Specifically, for a given set of audiovisual pair with arbitrary number of sounding objects, X = {(ai, vi)|i = 1, 2, ..., N}, we first divide it into one simple set whose scenario only contains single sounding object, X s = {(asi , vsi )| i = 1, 2, ..., Ns}, and one complex set, where each audiovisual pair consists of several sounding objects, X c = {(aci , vci )| i = 1, 2, ..., N c}, where X = X s∪X c and X s∩X c = ∅. In the first stage, we propose to learn potential visual representation of sounding object from their localization map in the simple scenario X s, with which we build a representation dictionary of objects as a kind of visual object knowledge reference. In the second stage, by referring to the learned representation dictionary, we step forward to discriminatively localize multiple sounding objects in the complex scenario X c, where the category distribution of localized sounding objects are required to match the distribution of their mixed sound according to the natural audiovisual consistency [16]. 
In the rest sections, we detail the first and second learning stage for generalized sounding object localization. 3.1 Learning object representation from localization For the simple audiovisual scenario with single sound source, X s, we target to visually localize the sounding object from its corresponding sound, and synchronously build a representation dictionary from the localization outcomes. The framework is shown in the left part of Fig. 2. At the first step, given an arbitrary audiovisual pair (asi , v s i ) ∈ X s, to exactly predict the position of sounding object, we need to find which region of input image vsi is highly correlated to the sound asi . To this end, we feed the image into a convolution-based network (e.g., ResNet [13]) to extract spatial feature maps f(vsi ) ∈ RC×H×W as the local image region descriptors, where C is the channel dimension, H and W are the spatial size. Then, the localization network is encouraged to enhance the similarity between the image region of sounding object and corresponding sound embeddings g(asi ) from the same video, but suppress those ones where sound and object are mismatched (from different videos), i.e., (asi , v s j ), where i 6= j. Formally, the localization objective can be written as L1 = Lbce(ymatch, GMP (l(g(asi ), f(vsj )))), (1) where the indicator ymatch = 1 is the audio and image are from the same pair, i.e., i = j, otherwise ymatch = 0, and Lbce is the binary cross-entropy loss. l(g(asi ), f(vsj )) is the audiovisual localization function, achieved by computing the cosine similarity of audio and visual feature representation2. 2The cosine similarity is followed by a parameterized sigmoid function to achieve comparable scale to the binary supervision. More details about similarity computation and networks are in the material. Similar to [3], Global Max Pooling (GMP) is used to aggregate the localization map to match the scene-level supervision. As there is no extra semantic annotation employed, the localization model is fully optimized in a self-supervised fashion. As the localization map could provide effective reference of object position, it helps to reduce the disturbance of complex background and boosts the visual perception performance of object appearance. To supply better visual object reference for the multi-source localization in the second stage, we utilize these localization outcomes to learn a kind of representation dictionaryD for different object categories. First, we propose to binarize the localization map li of the i−th audiovisual pair into a mask mi ∈ {0, 1}H×W . As there should be only one sounding object in the simple scenario X s, mi should be a single-object-awareness mask indicator. Hence, we can extract potential object representation oi ∈ RC over the masked visual features f(vsi ), i.e., oi = GAP (f(v s i ) ◦mi), (2) where GAP is the Global Average Pooling operation and ◦ is the Hadamard product. These object representationsO = {o1, o2, ..., oNs} are extracted from the coarse localization results, which makes it difficult to provide robust expression of object characters. To facilitate such progress, we target to learn high-quality object indicators with these candidate representations in a dictionary learning fashion. Specifically, we propose to jointly learn a K × C dictionary D and assignment yi of each object representation oi, where each key dk ∈ R1×C is identified as the representative object character in the k−th category. 
As K-means can be viewed as an efficient way of constructing representation dictionary [8], in our case we aim to minimize the following problem, L(D, yi) = Ns∑ i=1 min yi ||oi −DT · yi||22 s.t. yi ∈ {0, 1} K , ∑ yi = 1, (3) where K is the number of object category. Solving this problem provides a dictionary D∗ and a set of category assignments {y∗i |i = 1, 2, ...Ns}, where the former one is used for potential object detection in the second stage and the latter can be viewed as pseudo labels indicating different object categories. Recall that object localization could benefit from generalized categorization [21, 32], we therefore choose to alternately optimize the model w.r.t. the localization objective using Eq. 1 and the object classification objective with generated pseudo labels, which could substantially improve the localization performance. 3.2 Discriminative sounding object localization To discriminatively localize different sounding objects from their mixed sound, we propose to localize all the emerged objects in the image first, among which the sounding ones are causally selected based on whether they appear in the sounding area and required to match the category distribution of corresponding audio messages, as shown in the right part of Fig. 2. Let (aci , v c i ) ∈ X c denote the i−th audiovisual message that consists of multiple sounding objects. By referring to the learned representation dictionary of objects D∗, the location of emerged objects is indicated by computing the following inner-product similarity between each location of visual feature map f(vci ) ∈ RC×H×W and each representation key dk ∈ R1×C within D∗, mki = d k · f(vci ), (4) where mki is the predicted object location area of the k−th category in the i−th visual scenario. If the scenario does not involve the object belonging to the k−th category, the corresponding localization map mki tends to remain low response (similarity). At this point, we can obtain K localization maps, indicating the location of different categories of objects. As stated in the beginning, the cocktail-party scenario may consist of multiple sounding objects and silent objects. To localize the sounding objects as well as eliminate the silent ones, the sounding area li that is highly related to the input mixed sound is regarded as a kind of sounding object filter, which is formulated as ski = m k i ◦ li. (5) ski is deemed as the location of sounding object of the k−th category. Intuitively, if the k−th object does not produce any sound even if it visually appears in the image, there will be no sounding areas reflected in ski . Hence, the category distribution of sounding objects for v c i can be written as psovi = softmax([GAP (s 1 i ), GAP (s 2 i ), ..., GAP (s K i )]). (6) As discussed in recent works [16], the natural synchronization between vision and sound provides the self-supervised consistency in terms of sounding object category distribution. In other words, the sound character and the visual appearance of the same sounding object are corresponding in taxonomy, such as barking and dog, meow and cat. Hence, we propose to train the model to discriminatively localize the sounding objects by solving the following problem, Lc = DKL(psovi ||p so ai ), (7) where psoai is the category distribution of sound ai, predicted by a well-trained audio event network 3, and DKL is the Kullback–Leibler divergence. 
Overall, the second stage consists of two learning objective, one is the category-agnostic sounding area detection and the other one is class-aware sounding object localization, i.e., L2 = Lc + λ · L1, (8) where λ is the hype-parameter balancing the importance of both objective. By solving the problem in Eq. 8, the location of sounding objects are discriminatively revealed in the category-specific maps{ s1i , s 2 i , ..., s K i } . Finally, softmax regression is performed across these class-aware maps on each location for better visualization. 4 Experiments 4.1 Datasets and annotation MUSIC MUSIC dataset [31] contains 685 untrimmed videos, 536 solo and 149 duet, covering 11 classes of musical instruments. To better evaluate sound localization results in diverse scenes, we use the first five/two videos of each instrument category in solo/duet for testing, and use the rest for training. Besides, we use one half of solo training data for the first-stage training, and employ the other half to generate synthetic data for the second-stage learning. Note that, some videos are now not available on YouTube, we finally get 489 solo and 141 duet videos. 3The audio network is trained with the pseudo label in the first stage, more details are in the materials. (b) Results on AudioSet-instrument-solo. MUSIC-Synthetic The categories of instruments in duet videos of MUSIC dataset are quite unbalanced, e.g., more than 80% duet videos contain sound of guitar, which is difficult for training and brings great bias in testing. Thus, we build category-balanced multi-source videos by artificially synthesizing solo videos to facilitate our second-stage learning and evaluation. Concretely, we first randomly choose four 1-second solo audiovisual pairs of different categories, then mix random two of the four audio clips with jittering as the multi-source audio waveform, and concatenate four frames of these clips as the multi-source video frame. That is, in the synthesized audiovisual pair, there are two instruments making sound while the other two are silent. Therefore, this synthesized dataset is quite proper for the evaluation of discriminatively sounding object localization4. AudioSet-instrument AudioSet-instrument dataset is a subset of AudioSet [12], consisting of 63,989 10-second video clips covering 15 categories of instruments. Following [11], we use the videos from the “unbalanced" split for training, and those from the “balanced" for testing. We employ the solo videos with single sound source for the first-stage training and testing, and adopt those with multiple sound sources for the second-stage training and testing. Bounding box annotation To quantitatively evaluate the sound localization performance, we use a well-trained Faster RCNN detector w.r.t 15 instruments [11] to generate bounding boxes on the test set. We further refine the detection results, and manually annotate whether each object is sounding or silent. Annotations are publicly available in the released code, for reproducibility. 4.2 Experimental settings Implementation details Each video in the above datasets are equally divided into one second clips, with no intersection. We randomly sample one image from the video clip as the visual message, which is resized to 256 × 256 then randomly cropped to 224 × 224. The audio messages are first re-sampled into 16K Hz, then translated into spectrogram via Short Time Fourier Transform with a Hann window length of 160 and a hop length of 80. 
Similarly with [31, 16], Log-Mel projection is performed over the spectrogram to better represent sound characteristics, which therefore becomes a 201× 64 matrix. The audio and visual message from the same video clip are deemed as a matched pair, otherwise mismatched. We use variants of ResNet-18 [13] as audio and visual feature extractors. Detailed architecture is shown in the materials. Our model is trained with Adam optimizer with learning rate of 10−4. In training phase, we use a threshold of 0.05 to binarize the localization maps to obtain object mask, with which we can extract object representations over feature maps. And each center representation in the object dictionary is accordingly assigned to one object category, which is then used for class-aware localization evaluation. Note that, the proposed model is evaluated and trained on the identical dataset. Evaluation metric We employ Intersection over Union (IoU) and Area Under Curve (AUC) as evaluation metrics for single source sound localization, which are calculated with predicted sounding area and annotated bounding box. For discriminative sounding object localization in cocktail-party, we introduce two new metrics, Class-aware IoU (CIoU) and No-Sounding-Area (NSA), for quantitative evaluation. CIoU is defined as the average over class-specific IoU score, and NSA is the average activation area on localization maps of silent categories where the activation is below threshold τ , CIoU = ∑K k=1 δkIoUk∑K k=1 δk , NSA = ∑K k=1(1− δk) ∑ sk < τ∑K k=1(1− δk)A , (9) where IoUk is calculated based on the predicted sounding object area and annotated bounding box for the k−th class, sk is localization map of k-th class, A is the total area of localization map. The 4Available at https://zenodo.org/record/4079386#.X4NPStozbb0 indicator δk = 1 if object of class k is making sound, otherwise 0. These two metrics measure the model’s ability to discriminatively localize sounding objects and filter out the silent ones. 4.3 Single sounding object localization In this subsection, we focus on the simple task of sound localization in single source scenario. Table 1 shows the results on MUSIC-solo and AudioSet-instrument-solo videos, where ours is compared with recent SOTA methods. Note that we use the public source code from [31, 16]. According to the shown results, we have two points should pay attention to. First, the compared methods [3, 16, 27] are trained to match the correct audiovisual pair via the contrastive [16, 27] or classification[3] objective, which is similar to ours. Yet, our proposed method significantly outperform these method by a large margin. Such phenomenon indicates that the learned object representations from localization is effective for semantic discrimination, which further benefits the object localization via the discriminative learning of object category. In order to explain this clearly, we plot the distribution of extracted feature from the well-trained vision network via t-SNE [19]. As shown in Fig. 3, the extracted visual features on MUSIC-solo are more discriminative in terms of object category when we train the model in a localization-classification alternative learning fashion, where the normalized mutual information for the clustering with masked object features achieves 0.74, which reveals high discrimination of learned representations. Second, our method is comparable to Sound-of-pixel [31], especially on the MUSIC-solo dataset. 
This is because Sound-of-pixel [31] differently employs the audio-based mix-then-separate learning strategy, which highly relies on the quality of input audio messages. Hence, it could effectively correlate specific visual area with audio embeddings in the simple scene with single sound, but suffers from the noisy multi-source scenarios. In contrast, our method can simultaneously deal with both conditions and does not require to construct complex learning objective. Related results can be found in the next subsection. 4.4 Multiple sounding objects localization Natural audiovisual scenario usually consists of multiple sounding and silent objects, which is more challenging for exactly localizing the sounding ones. To responsibly compare different methods under such scenarios, both of the synthetic and realistic data are evaluated. As shown in Table 2, we can find that our model shows significant improvements over all the compared methods in terms of CIoU. Such phenomenon mainly comes from three reasons. First, our model takes consideration of the class information of sounding objects by employing a category-based audiovisual alignment, i.e., Eq. 7, while other methods [3, 27] simply correlate the audiovisual features for sounding area detection so that fail to discriminatively localize the sounding objects. Second, our localization results are achieved with the effective visual knowledge learned from the first-stage, which could vastly help to excavate and localize potential objects from cocktail-party scenario, while the compared method [31] cannot deal with such scenario with mix-then-separate learning fashion. Third, referring to NSA results, our model can automatically filter out the silent objects, but DMC [16] has to rely on given knowledge of the number of sounding objects. Although [31] is high in NSA, it is probably because of too low channel activations to detect objects rather than the success of filtering out silent ones. Apart from the quantitative evaluation, we also provide visualized localization results in Fig. 4. According to the shown results in realistic scenario, the attention-based approach [27] and Object-thesound [3] can just localize the sounding area without discriminating guitar or cello, while DMC [16] suffers from the complex audiovisual components and mix up different visual areas. Among these compared methods, although sound-of-pixel [31] provides better results, it cannot exact localize the sounding object and filter out the silent saxophone. This is probably because it highly depends on the quality of mixed sounds. In contrast, our model can successfully localize the sounding guitar and cello in class-specific maps, as well as remain low response for the silent saxophone and other visual areas. The synthetic data show similar results. 4.5 Ablation study In this section, we perform ablation studies w.r.t. the influence of hyper-parameters. More studies can be found in the supplementary material. Loss function weight λ. As shown in Table 3, we can find that the hyper-parameter of λ has slight effects on the localization performance when in the range of [0.5, 1.0]. But it takes higher influence when becomes smaller or larger. Such phenomenon comes from the fact that the localization objective L1 is easier to converge compared with the distribution matching objective Lc. When λ becomes much larger, the model would suffer from the overfitting problem for localization. 
Number of clusters and mask threshold. In the previous experiments, we set the number of clusters in the first stage equal to the number of categories in the dataset, which provides a strong prior. We therefore explore different numbers of clusters as well as different mask thresholds for the first-stage object feature extraction and clustering. For evaluation, we adaptively aggregate multiple clusters into one specific category for discriminative localization. Table 4 shows the results on the MUSIC dataset. Our method is generally robust to these two hyper-parameters and achieves comparable performance without knowing the exact number of categories in the dataset.
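To make the first-stage procedure being ablated here concrete, below is a rough sketch of how the object dictionary could be built from masked features and K-means (Eqs. 2-3), assuming numpy feature maps. The function name, the array layout, the per-mask normalization, and the use of scikit-learn are illustrative assumptions on our part.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_object_dictionary(feats, loc_maps, n_clusters, thresh=0.05):
    # feats:    (N, C, H, W) visual feature maps from single-source clips.
    # loc_maps: (N, H, W) localization maps from the first-stage model.
    reps = []
    for f, l in zip(feats, loc_maps):
        mask = (l > thresh).astype(f.dtype)   # binarize the localization map
        area = max(mask.sum(), 1.0)
        # Masked pooling -> one candidate object representation o_i (Eq. 2);
        # dividing by the masked area is one reading of GAP over masked features.
        reps.append((f * mask[None]).sum(axis=(1, 2)) / area)
    reps = np.stack(reps)                      # (N, C)
    # K-means over candidate representations (Eq. 3): centers act as the
    # dictionary D*, assignments as pseudo class labels.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(reps)
    return km.cluster_centers_, km.labels_
```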
Training settings for the second stage. We further present ablation studies on the procedure and training objective of the second stage. We denote the localization loss as L1, the audiovisual consistency loss as Lc, and the silent-area suppression operation as Prod. As shown in Table 5, the product operation is crucial, especially in the synthetic setting. This is because our manually synthesized data contain four instruments in a single frame, two making sound and two silent. Without the Prod operation, all objects would produce high responses, making the categorical matching between audio and visual components fail and leading to very poor performance. On the other hand, the Lc objective boosts localization on both the synthetic and the real-world duet data, which demonstrates that excavating the inner consistency between the two modalities helps cross-modal modeling.

5 Discussion

In this paper, we propose to discriminatively localize sounding objects in the absence of object category annotations, where the object localization results in single-source videos are aggregated to build discriminative object representations and the audiovisual consistency is used as self-supervision for category-distribution alignment. Although the object semantics learned from simple cases yield noticeable results, the method still requires a rough partition into single- and multi-source videos, which should be addressed in future work.

Acknowledgement

This work was supported in part by the Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098 and the Public Computing Cloud, Renmin University of China.

Broader Impact

Visual sound source localization is a basic perceptual ability for humans, and this work encourages machines to acquire a similar ability, especially in multi-source scenarios. Hence, the impact lies mainly in machine learning techniques and their applications. On the one hand, the proposed approach is fully self-supervised, yet it attains considerable discrimination ability for visual objects and correlation capability across the audio and visual modalities. Without elaborate manual annotation, this approach can still facilitate progress in unimodal and multimodal learning and in parsing and modeling complex scenes. On the other hand, it is a step toward human-like multimodal perception, which could benefit society in several ways, e.g., audio-assisted scene understanding for deaf people by indicating which objects are making sound, and facilitating exploration of the cocktail-party effect in realistic audiovisual scenes, i.e., perceiving different sounds and focusing on the pertinent content within mixed auditory input.
1. What is the main contribution of the paper in terms of object localization?
2. What are the strengths of the proposed approach, particularly in its unsupervised nature and use of audiovisual consistency?
3. What are the weaknesses of the paper regarding its limitations in dataset and potential generalizability?
Summary and Contributions
The paper proposes methods to localize objects producing sounds in a given audiovisual scene. This is done in an unsupervised setting where manual annotations are not available. The framework first tries to learn robust object representations and then uses audiovisual consistency to train the networks to localize the sounding objects.

Strengths
Localizing sounding objects in a given audiovisual scene is an interesting problem. The paper presents a novel approach to this problem which does not require manual semantic labeling; the training is largely self-supervised and relies on inherent audiovisual consistencies, and the overall approach is nice. Comparison with several prior methods has been done to show the superiority of the proposed method.

Weaknesses
One major limitation of the work is that only music-related objects and sounds are used. This does not give a good idea of how well the method generalizes to everyday objects and the sounds they produce. It would have been nice if the paper had considered this more general condition in its datasets. There are a few other concerns w.r.t. how the method will generalize. Please look at the detailed comments below.
1. What is the focus and contribution of the paper regarding sound source localization?
2. What are the strengths of the proposed approach, particularly in its two-stage framework and experimental results?
3. What are the weaknesses of the paper, especially regarding its claims and assumptions?
4. Do you have any concerns about the method's reliance on class labels or the availability of single-source videos for training?
5. How does the reviewer assess the novelty and contribution of the paper's content?
Summary and Contributions
This paper addresses the problem of sound source localization in video frames. The authors propose a two-stage approach that first learns representations of sounding objects and then performs class-aware object localization based on the learned object representations. Experiments demonstrate that the proposed approach leads to some accuracy gains for this task.

Strengths
- Nice motivation for a two-stage framework that first learns object representations in the single-source scenario and then produces class-aware object localization maps in multi-source scenarios.
- Good results on sound source localization compared to prior methods in Table 2.
- Nice qualitative results on sound source localization.

Weaknesses
- It is claimed that the proposed method aims to discriminatively localize the sounding objects from their mixed sound without any manual annotations. However, the method also aims to do class-aware localization. As shown in Figure 4, the object categories are labeled for the localized regions for the proposed method. It is unclear to this reviewer whether the labels there are only for illustrative purposes.
- Even though the proposed method doesn't rely on any class labels, it needs the number of categories of potential sound sources in the data to build the object dictionary.
- Though the performance of the method is quite good, especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspiration from prior work for the sound source localization task.
- The method assumes single-source videos are available for training in the first stage, which is a strong assumption even though class labels are not used. Most in-the-wild videos are noisy and multi-source. It would be desirable to have some analysis showing how robust the system is to noise in videos, or how the system can learn without clean single-source videos to build the object dictionary.
NIPS
Title Discriminative Sounding Objects Localization via Self-supervised Audiovisual Matching Abstract Discriminatively localizing sounding objects in cocktail-party, i.e., mixed sound scenes, is commonplace for humans, but still challenging for machines. In this paper, we propose a two-stage learning framework to perform self-supervised class-aware sounding object localization. First, we propose to learn robust object representations by aggregating the candidate sound localization results in the single source scenes. Then, class-aware object localization maps are generated in the cocktail-party scenarios by referring the pre-learned object knowledge, and the sounding objects are accordingly selected by matching audio and visual object category distributions, where the audiovisual consistency is viewed as the self-supervised signal. Experimental results in both realistic and synthesized cocktail-party videos demonstrate that our model is superior in filtering out silent objects and pointing out the location of sounding objects of different classes. Code is available at https://github.com/DTaoo/ Discriminative-Sounding-Objects-Localization. 1 Introduction Audio and visual messages are pervasive in our daily-life. Their natural correspondence provides humans with rich semantic information to achieve effective multi-modal perception and learning [28, 24, 15], e.g., when in the street, we instinctively associate the talking sound with people nearby, and the roaring sound with vehicles passing by. In view of this, we want to question that can they also facilitate machine intelligence? To pursue the human-like audiovisual perception, the typical and challenging problem of visually sound localization is highly expected to be addressed, which aims to associate sounds with specific visual regions and rewards the visual perception ability in the absence of semantic annotations [14, 18, 3, 27, 10]. A straightforward strategy is to encourage the visual features of sound source to take higher similarity with the sound embeddings, which has shown considerable performance in the simple scenarios with single sound [21, 22, 27]. However, there are simultaneously multiple sounding objects as well as silent ones (i.e. The silent objects are considered capable of producing sound.). in our daily scenario, i.e., the cocktail-party, this simple strategy mostly fails to discriminatively localize different sound sources from mixed sound [16]. Recently, audiovisual content modeling is proposed to excavate concrete audio and visual components in the scenario for localization. Yet, due to lack of sufficient semantic annotation, existing works have to resort to extra scene prior knowledge [16, 17, 25] or construct pretext task [31, 30]. Even so, these methods cannot well deal ∗Corresponding Author, Beijing Key Laboratory of Big Data Management and Analysis Methods, Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China. The research reported in this paper was mainly conducted when the corresponding author worked at Baidu Research. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. with such complex cocktail-party scenario, i.e., not only answering where the sounding area is but also answering what the sounding area is. In this paper, we target to perform class-aware sounding object localization from their mixed sound, where the audiovisual scenario consists of multiple sounding objects and silent objects, as shown in Fig. 1. 
This interesting problem is quite challenging from two perspectives: 1) Discriminatively localizing objects belonging to different categories without resorting to semantic annotations of objects; 2) Determining whether a specific object is sounding or not, and filtering out silent ones from the corresponding mixed sound. When faced with these challenges, we want to know how do we human address them? Elman [9] stated that human could transform these seemingly unlearnable tasks into learnable by starting from a simpler initial state then building on which to develop more complicated representations of structure. Inspired by this, we propose a two-stage framework, evolving from single sound scenario to the cocktailparty case. Concretely, we first learn potential object knowledge from sound localization in single source scenario, and aggregate them into a dictionary for pursuing robust representation for each object category. By referring to the dictionary, class-aware object localization maps are accordingly proposed for meeting the sounding object selection in multi-source scenario. Then, we reduce the sounding object localization task into a self-supervised audiovisual matching problem, where the sounding objects are selected by minimizing the category-level audio and visual distribution difference. With these evolved curriculums, we can filter out silent objects and achieve class-aware sounding object localization in a cocktail-party scenario. To summarize, our main contributions are as follows. First, we introduce an interesting and challenging problem, i.e., discriminatively localizing sounding objects in the cocktail-party scenario without manual annotation for objects. Second, we propose a novel step-by-step learning framework, which learns robust object representations from single source localization then further expands to the sounding object localization via taking audiovisual consistency as self-supervision for category distribution matching in the cocktail-party scenario. Third, we synthesize some cocktail-party videos and annotate sounding object bounding boxes for the evaluation of class-aware sounding object localization. Our method shows excellent performance on both synthetic and realistic data. 2 Related work Object localization Weakly- and self-supervised object localization expect to achieve comparable performance to the supervised ones with limited annotations. Existing weakly-supervised methods take holistic image labels as supervision, where the salient image region evaluated by recognition scores are considered as the potential object location[20, 21, 6, 32, 26, 7]. For self-supervised models, Baek et al. [5] used point symmetric transformation as self-supervision to extract class-agnostic heat maps for object localization. These methods are purely based on visual features, while we propose to employ audiovisual consistency as self-supervision to achieve class-aware object localization. Self-supervised audiovisual learning The natural correspondence between sound and vision provides essential supervision for audiovisual learning [2, 3, 22, 4, 23]. In [23, 4], authors introduced to learn feature representations of one modality with the supervision from the other. In [2, 22], authors adopted clip-level audiovisual correspondence and temporal synchronization as self-supervision to correlate audiovisual content. Hu et al. [16, 17] associate latent sound-object pairs with clustered audiovisual components, but its performance greatly relies on predefined number of clusters. 
Alwassel et al. [1] created pseudo labels from clustering features to boost multi-modal representation learning. While in our work, we alternatively use audiovisual correspondence and pseudo labels from clustering to boost audiovisual learning and learn object representations. Sounding object localization in visual scenes Recent methods for localizing sound source in visual context mainly focus on joint modeling of audio and visual modalities [3, 22, 27, 29, 16, 30, 31]. In [3, 22], authors adopted Class Activation Map (CAM) [32] or similar methods to measure the correspondence score between audio and visual features on each spatial grid to localize sounding objects. Senocak et al. [27] proposed an attention mechanism to capture primary areas in a semisupervised or unsupervised setting. Tian et al. [29] leveraged audio-guided visual attention and temporal alignment to find semantic regions corresponding to sound sources. These methods tend to perform well in single source scenes, but comparatively poor for mixed sound localization. Zhao et al. [31, 30] employed a sound-based mix-then-separate framework to associate the audio and visual feature maps, where the sound source position is given by the sound energy of each pixel. Hu et al. [16] established audiovisual clustering to associate sound centers with corresponding visual sources, but it requires the prior of the number of sound sources, and the specific category of the clustering result remains unknown. In contrast, our method can discriminatively localize sounding objects in cocktail-party by employing established object dictionary to generate class-aware object localization maps, and referring to the audiovisual localization map to filter out the silent ones. 3 The proposed method In this work, we aim to discriminatively localize the sounding objects from their mixed sound without the manual annotations of object category. To facilitate this novel and challenging problem, we develop a two-stage learning strategy, evolving from the localization in simple scenario with single sounding object to the complex one with multiple sounding objects, i.e., cocktail-party. Such curriculum learning perspective is based on the findings that existing audiovisual models [3, 16, 27] are capable of predicting reasonable localization map of sounding object in simple scenario, which is considered to provide effective knowledge reference for candidate visual localization of different objects in the cocktail-party scenario. Specifically, for a given set of audiovisual pair with arbitrary number of sounding objects, X = {(ai, vi)|i = 1, 2, ..., N}, we first divide it into one simple set whose scenario only contains single sounding object, X s = {(asi , vsi )| i = 1, 2, ..., Ns}, and one complex set, where each audiovisual pair consists of several sounding objects, X c = {(aci , vci )| i = 1, 2, ..., N c}, where X = X s∪X c and X s∩X c = ∅. In the first stage, we propose to learn potential visual representation of sounding object from their localization map in the simple scenario X s, with which we build a representation dictionary of objects as a kind of visual object knowledge reference. In the second stage, by referring to the learned representation dictionary, we step forward to discriminatively localize multiple sounding objects in the complex scenario X c, where the category distribution of localized sounding objects are required to match the distribution of their mixed sound according to the natural audiovisual consistency [16]. 
In the rest sections, we detail the first and second learning stage for generalized sounding object localization. 3.1 Learning object representation from localization For the simple audiovisual scenario with single sound source, X s, we target to visually localize the sounding object from its corresponding sound, and synchronously build a representation dictionary from the localization outcomes. The framework is shown in the left part of Fig. 2. At the first step, given an arbitrary audiovisual pair (asi , v s i ) ∈ X s, to exactly predict the position of sounding object, we need to find which region of input image vsi is highly correlated to the sound asi . To this end, we feed the image into a convolution-based network (e.g., ResNet [13]) to extract spatial feature maps f(vsi ) ∈ RC×H×W as the local image region descriptors, where C is the channel dimension, H and W are the spatial size. Then, the localization network is encouraged to enhance the similarity between the image region of sounding object and corresponding sound embeddings g(asi ) from the same video, but suppress those ones where sound and object are mismatched (from different videos), i.e., (asi , v s j ), where i 6= j. Formally, the localization objective can be written as L1 = Lbce(ymatch, GMP (l(g(asi ), f(vsj )))), (1) where the indicator ymatch = 1 is the audio and image are from the same pair, i.e., i = j, otherwise ymatch = 0, and Lbce is the binary cross-entropy loss. l(g(asi ), f(vsj )) is the audiovisual localization function, achieved by computing the cosine similarity of audio and visual feature representation2. 2The cosine similarity is followed by a parameterized sigmoid function to achieve comparable scale to the binary supervision. More details about similarity computation and networks are in the material. Similar to [3], Global Max Pooling (GMP) is used to aggregate the localization map to match the scene-level supervision. As there is no extra semantic annotation employed, the localization model is fully optimized in a self-supervised fashion. As the localization map could provide effective reference of object position, it helps to reduce the disturbance of complex background and boosts the visual perception performance of object appearance. To supply better visual object reference for the multi-source localization in the second stage, we utilize these localization outcomes to learn a kind of representation dictionaryD for different object categories. First, we propose to binarize the localization map li of the i−th audiovisual pair into a mask mi ∈ {0, 1}H×W . As there should be only one sounding object in the simple scenario X s, mi should be a single-object-awareness mask indicator. Hence, we can extract potential object representation oi ∈ RC over the masked visual features f(vsi ), i.e., oi = GAP (f(v s i ) ◦mi), (2) where GAP is the Global Average Pooling operation and ◦ is the Hadamard product. These object representationsO = {o1, o2, ..., oNs} are extracted from the coarse localization results, which makes it difficult to provide robust expression of object characters. To facilitate such progress, we target to learn high-quality object indicators with these candidate representations in a dictionary learning fashion. Specifically, we propose to jointly learn a K × C dictionary D and assignment yi of each object representation oi, where each key dk ∈ R1×C is identified as the representative object character in the k−th category. 
As K-means can be viewed as an efficient way of constructing representation dictionary [8], in our case we aim to minimize the following problem, L(D, yi) = Ns∑ i=1 min yi ||oi −DT · yi||22 s.t. yi ∈ {0, 1} K , ∑ yi = 1, (3) where K is the number of object category. Solving this problem provides a dictionary D∗ and a set of category assignments {y∗i |i = 1, 2, ...Ns}, where the former one is used for potential object detection in the second stage and the latter can be viewed as pseudo labels indicating different object categories. Recall that object localization could benefit from generalized categorization [21, 32], we therefore choose to alternately optimize the model w.r.t. the localization objective using Eq. 1 and the object classification objective with generated pseudo labels, which could substantially improve the localization performance. 3.2 Discriminative sounding object localization To discriminatively localize different sounding objects from their mixed sound, we propose to localize all the emerged objects in the image first, among which the sounding ones are causally selected based on whether they appear in the sounding area and required to match the category distribution of corresponding audio messages, as shown in the right part of Fig. 2. Let (aci , v c i ) ∈ X c denote the i−th audiovisual message that consists of multiple sounding objects. By referring to the learned representation dictionary of objects D∗, the location of emerged objects is indicated by computing the following inner-product similarity between each location of visual feature map f(vci ) ∈ RC×H×W and each representation key dk ∈ R1×C within D∗, mki = d k · f(vci ), (4) where mki is the predicted object location area of the k−th category in the i−th visual scenario. If the scenario does not involve the object belonging to the k−th category, the corresponding localization map mki tends to remain low response (similarity). At this point, we can obtain K localization maps, indicating the location of different categories of objects. As stated in the beginning, the cocktail-party scenario may consist of multiple sounding objects and silent objects. To localize the sounding objects as well as eliminate the silent ones, the sounding area li that is highly related to the input mixed sound is regarded as a kind of sounding object filter, which is formulated as ski = m k i ◦ li. (5) ski is deemed as the location of sounding object of the k−th category. Intuitively, if the k−th object does not produce any sound even if it visually appears in the image, there will be no sounding areas reflected in ski . Hence, the category distribution of sounding objects for v c i can be written as psovi = softmax([GAP (s 1 i ), GAP (s 2 i ), ..., GAP (s K i )]). (6) As discussed in recent works [16], the natural synchronization between vision and sound provides the self-supervised consistency in terms of sounding object category distribution. In other words, the sound character and the visual appearance of the same sounding object are corresponding in taxonomy, such as barking and dog, meow and cat. Hence, we propose to train the model to discriminatively localize the sounding objects by solving the following problem, Lc = DKL(psovi ||p so ai ), (7) where psoai is the category distribution of sound ai, predicted by a well-trained audio event network 3, and DKL is the Kullback–Leibler divergence. 
Overall, the second stage consists of two learning objectives: category-agnostic sounding area detection and class-aware sounding object localization, i.e.,

$$\mathcal{L}_2 = \mathcal{L}_c + \lambda \cdot \mathcal{L}_1, \qquad (8)$$

where $\lambda$ is the hyper-parameter balancing the importance of the two objectives. By solving the problem in Eq. 8, the locations of sounding objects are discriminatively revealed in the category-specific maps $\{s_i^1, s_i^2, \ldots, s_i^K\}$. Finally, softmax regression is performed across these class-aware maps at each location for better visualization.

4 Experiments

4.1 Datasets and annotation

MUSIC. The MUSIC dataset [31] contains 685 untrimmed videos, 536 solo and 149 duet, covering 11 classes of musical instruments. To better evaluate sound localization results in diverse scenes, we use the first five/two videos of each instrument category in solo/duet for testing, and use the rest for training. Besides, we use one half of the solo training data for the first-stage training, and employ the other half to generate synthetic data for the second-stage learning. Note that some videos are no longer available on YouTube; we finally obtain 489 solo and 141 duet videos.

MUSIC-Synthetic. The categories of instruments in the duet videos of the MUSIC dataset are quite unbalanced, e.g., more than 80% of duet videos contain the sound of guitar, which is difficult for training and introduces great bias in testing. Thus, we build category-balanced multi-source videos by artificially combining solo videos to facilitate our second-stage learning and evaluation. Concretely, we first randomly choose four 1-second solo audiovisual pairs of different categories, then mix random two of the four audio clips with jittering as the multi-source audio waveform, and concatenate four frames of these clips as the multi-source video frame. That is, in the synthesized audiovisual pair, two instruments are making sound while the other two are silent. This synthesized dataset is therefore well suited to evaluating discriminative sounding object localization;⁴ a schematic sketch of the construction is given at the end of this subsection.

⁴Available at https://zenodo.org/record/4079386#.X4NPStozbb0

AudioSet-instrument. The AudioSet-instrument dataset is a subset of AudioSet [12], consisting of 63,989 10-second video clips covering 15 categories of instruments. Following [11], we use the videos from the "unbalanced" split for training, and those from the "balanced" split for testing. We employ the solo videos with a single sound source for the first-stage training and testing, and adopt those with multiple sound sources for the second-stage training and testing.

Bounding box annotation. To quantitatively evaluate sound localization performance, we use a well-trained Faster R-CNN detector w.r.t. 15 instruments [11] to generate bounding boxes on the test set. We further refine the detection results, and manually annotate whether each object is sounding or silent. Annotations are publicly available in the released code, for reproducibility.
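The MUSIC-Synthetic construction described above can be sketched as follows; the jittering gains and the side-by-side frame layout are assumptions, since the exact synthesis parameters live in the released code.

```python
import random
import numpy as np

def synthesize_pair(solo_clips):
    """solo_clips: four (waveform, frame, category) tuples drawn from four
    different instrument categories; waveform is 1-D, frame is (H, W, 3)."""
    sounding = random.sample(range(4), 2)                    # two clips make sound
    gains = {i: random.uniform(0.5, 1.5) for i in sounding}  # amplitude jittering (assumed)
    audio = sum(gains[i] * solo_clips[i][0] for i in sounding)
    frame = np.concatenate([c[1] for c in solo_clips], axis=1)  # layout is an assumption
    labels = [int(i in sounding) for i in range(4)]          # 1 = sounding, 0 = silent
    return audio, frame, labels
```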
4.2 Experimental settings

Implementation details. Each video in the above datasets is equally divided into one-second clips with no overlap. We randomly sample one image from each video clip as the visual message, which is resized to 256 × 256 and then randomly cropped to 224 × 224. The audio messages are first re-sampled to 16 kHz, then translated into a spectrogram via the Short-Time Fourier Transform with a Hann window length of 160 and a hop length of 80. Similar to [31, 16], a Log-Mel projection is performed over the spectrogram to better represent sound characteristics, yielding a 201 × 64 matrix. The audio and visual messages from the same video clip are deemed a matched pair, otherwise mismatched. We use variants of ResNet-18 [13] as the audio and visual feature extractors; the detailed architecture is shown in the supplementary material. Our model is trained with the Adam optimizer with a learning rate of $10^{-4}$. In the training phase, we use a threshold of 0.05 to binarize the localization maps to obtain object masks, with which we extract object representations over the feature maps. Each center representation in the object dictionary is accordingly assigned to one object category, which is then used for class-aware localization evaluation. Note that the proposed model is trained and evaluated on the identical dataset.

Evaluation metric. We employ Intersection over Union (IoU) and Area Under Curve (AUC) as evaluation metrics for single-source sound localization, calculated from the predicted sounding area and the annotated bounding box. For discriminative sounding object localization in the cocktail-party setting, we introduce two new metrics for quantitative evaluation, Class-aware IoU (CIoU) and No-Sounding-Area (NSA). CIoU is defined as the average over class-specific IoU scores, and NSA is the average area on the localization maps of silent categories where the activation is below a threshold $\tau$:

$$\mathrm{CIoU} = \frac{\sum_{k=1}^{K} \delta_k \,\mathrm{IoU}_k}{\sum_{k=1}^{K} \delta_k}, \qquad \mathrm{NSA} = \frac{\sum_{k=1}^{K} (1 - \delta_k) \sum \mathbb{I}(s^k < \tau)}{\sum_{k=1}^{K} (1 - \delta_k)\, A}, \qquad (9)$$

where $\mathrm{IoU}_k$ is calculated from the predicted sounding object area and the annotated bounding box for the $k$-th class, $s^k$ is the localization map of the $k$-th class, and $A$ is the total area of the localization map. The indicator $\delta_k = 1$ if an object of class $k$ is making sound, otherwise 0. These two metrics measure the model's ability to discriminatively localize sounding objects and filter out the silent ones.
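A hedged sketch of the two metrics in Eq. (9) follows; rendering annotated boxes as binary masks and the per-class averaging of the silent fraction are simplifications of the exact protocol.

```python
import numpy as np

def ciou_nsa(maps, gt_masks, delta, tau=0.5):
    """maps: (K, H, W) localization maps in [0, 1]; gt_masks: (K, H, W)
    binary masks rendered from annotated boxes; delta: (K,) indicator
    of which classes are actually sounding."""
    ious, silent_fracs = [], []
    for k in range(maps.shape[0]):
        if delta[k]:
            pred = maps[k] > tau
            inter = np.logical_and(pred, gt_masks[k]).sum()
            union = np.logical_or(pred, gt_masks[k]).sum()
            ious.append(inter / max(union, 1))
        else:
            # fraction of the map where a silent class stays below tau
            silent_fracs.append((maps[k] < tau).mean())
    ciou = float(np.mean(ious)) if ious else 0.0
    nsa = float(np.mean(silent_fracs)) if silent_fracs else 1.0
    return ciou, nsa
```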
4.3 Single sounding object localization

In this subsection, we focus on the simple task of sound localization in single-source scenarios. Table 1 shows the results on MUSIC-solo and AudioSet-instrument-solo videos, where our method is compared with recent state-of-the-art methods; we use the public source code of [31, 16]. Two points in these results deserve attention. First, the compared methods [3, 16, 27] are trained to match the correct audiovisual pair via a contrastive [16, 27] or classification [3] objective, which is similar to ours; yet our proposed method outperforms them by a large margin. This indicates that the object representations learned from localization are effective for semantic discrimination, which further benefits object localization via discriminative learning of object categories. To illustrate this, we plot the distribution of features extracted by the well-trained vision network via t-SNE [19]. As shown in Fig. 3, the extracted visual features on MUSIC-solo are more discriminative in terms of object category when the model is trained in the localization-classification alternating fashion: the normalized mutual information of the clustering with masked object features reaches 0.74, revealing high discrimination of the learned representations. Second, our method is comparable to Sound-of-pixel [31], especially on the MUSIC-solo dataset. This is because Sound-of-pixel [31] employs an audio-based mix-then-separate learning strategy, which relies heavily on the quality of the input audio messages. Hence, it can effectively correlate specific visual areas with audio embeddings in simple scenes with a single sound, but suffers in noisy multi-source scenarios. In contrast, our method can handle both conditions simultaneously and does not require constructing a complex learning objective. Related results can be found in the next subsection.

4.4 Multiple sounding objects localization

Natural audiovisual scenarios usually consist of multiple sounding and silent objects, which makes exactly localizing the sounding ones more challenging. To fairly compare different methods under such scenarios, we evaluate on both synthetic and realistic data. As shown in Table 2, our model shows significant improvements over all compared methods in terms of CIoU. This mainly stems from three reasons. First, our model takes the class information of sounding objects into account via the category-based audiovisual alignment of Eq. 7, while other methods [3, 27] simply correlate audiovisual features for sounding area detection and thus fail to discriminatively localize the sounding objects. Second, our localization results are achieved with the effective visual knowledge learned in the first stage, which greatly helps to excavate and localize potential objects in cocktail-party scenarios, whereas the compared method [31] cannot handle such scenarios with its mix-then-separate learning fashion. Third, referring to the NSA results, our model can automatically filter out silent objects, while DMC [16] has to rely on given knowledge of the number of sounding objects. Although [31] scores high in NSA, this is probably due to channel activations too low to detect objects rather than successful filtering of silent ones.

Apart from the quantitative evaluation, we also provide visualized localization results in Fig. 4. In the realistic scenario, the attention-based approach [27] and Objects-that-Sound [3] can only localize the sounding area without discriminating guitar from cello, while DMC [16] suffers from the complex audiovisual components and mixes up different visual areas. Among the compared methods, although Sound-of-pixel [31] provides better results, it cannot exactly localize the sounding object and fails to filter out the silent saxophone, probably because it depends heavily on the quality of the mixed sounds. In contrast, our model successfully localizes the sounding guitar and cello in class-specific maps, while remaining at low response for the silent saxophone and other visual areas. The synthetic data show similar results.

4.5 Ablation study

In this section, we perform ablation studies w.r.t. the influence of hyper-parameters; more studies can be found in the supplementary material.

Loss function weight λ. As shown in Table 3, the hyper-parameter $\lambda$ has only a slight effect on localization performance when in the range $[0.5, 1.0]$, but its influence grows when it becomes smaller or larger. This comes from the fact that the localization objective $\mathcal{L}_1$ is easier to converge than the distribution matching objective $\mathcal{L}_c$. When $\lambda$ becomes much larger, the model suffers from overfitting on localization.
When $\lambda$ becomes much smaller, it is difficult to achieve reasonable sounding area detection for effective filtering.

Number of clusters and mask threshold. In the previous experimental settings, we set the number of clusters in the first stage equal to the number of categories in the dataset, which provides a strong prior. We therefore explore using different numbers of clusters as well as different mask thresholds for the first-stage object feature extraction and clustering. For evaluation, we adaptively aggregate multiple clusters into one specific category for discriminative localization. Table 4 shows the results on the MUSIC dataset. Our method is generally robust to these two hyper-parameters, and achieves comparable performance without knowing the specific number of categories in the dataset.

Training settings for the second stage. We further present ablation studies on the procedure and training objective of the second stage. We denote the localization loss as $\mathcal{L}_1$, the audiovisual consistency loss as $\mathcal{L}_c$, and the silent-area suppression operation as Prod. As shown in Table 5, the product operation is crucial, especially in the synthetic setting. This is because in our manually synthesized data there are four instruments in a single frame, two making sound and the other two silent. Without the Prod operation, all objects would produce high responses, making the categorical matching between audio and visual components fail and leading to very poor performance. On the other hand, the $\mathcal{L}_c$ objective boosts localization on both synthetic and real-world duet data, which demonstrates that excavating the inner consistency between the two modalities helps cross-modal modeling.

5 Discussion

In this paper, we propose to discriminatively localize sounding objects in the absence of object category annotations: object localization in single-source videos is aggregated to build discriminative object representations, and audiovisual consistency is used as self-supervision for category distribution alignment. Although the object semantics learned from simple cases contribute noticeable results, the method still needs a rough partition of single- and multi-source videos, which should be addressed in future work.

Acknowledgement

This work was supported in part by the Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098 and the Public Computing Cloud, Renmin University of China.

Broader Impact

Visual sound source localization is a basic perceptual ability for humans, and this work encourages machines to acquire a similar ability, especially when faced with multi-source scenarios. Hence, the impact mainly lies in the machine learning technique and its applications. On the one hand, the proposed approach is fully based on self-supervised learning, yet it rewards considerable discrimination ability for visual objects and correlation capability across the audio and visual modalities. Predictably, without elaborate manual annotation, this approach could still facilitate progress in unimodal and multimodal learning and in parsing/modeling complex scenes.
On the other hand, it is a step toward human-like multimodal perception, which could further contribute to society in several respects, e.g., audio-assisted scene understanding for deaf people by figuring out which objects are making sound, and facilitating exploration of how to address the cocktail-party effect in realistic audiovisual scenes, i.e., perceiving different sounds and focusing on the pertinent content within mixed auditory input.
1. What is the focus and contribution of the paper on sounding object localization?
2. What are the strengths of the proposed approach, particularly in terms of its two-stage learning framework?
3. What are the weaknesses of the paper, especially regarding the lack of ablation studies?
4. Do you have any concerns about the novelty of the proposed method?
5. How does the reviewer assess the significance of the paper's contribution to the field of object localization?
Summary and Contributions

This paper proposes to tackle sounding object localization in a cocktail-party scenario, where the sounds are mixed and there might be silent objects. It also proposes a two-stage learning framework: first training an audiovisual localization network in single-sound scenarios, and then using audiovisual consistency to match the distribution of visual objects and sounding objects.

Strengths

The proposed task is interesting and more realistic in real life. The two-stage learning framework also has good quantitative results and beats other methods on most metrics.

Weaknesses

My biggest concern is that there is no quantitative ablation study on the effect of the audiovisual consistency objective in Equation 7. Although the t-SNE plot shows that alternating learning generates better visual features, there are no quantitative studies on how each stage affects the final results. This lack of ablation weakens the second (technical) contribution, because the novel part clearly comes from using audiovisual consistency for category distribution matching. I also find it interesting that related work, including this work, does not employ temporal information from the video for localization; for example, finger movement is one obvious visual clue of whether an instrument makes sound.
Title: Ranking Data with Continuous Labels through Oriented Recursive Partitions

Abstract

We formulate a supervised learning problem, referred to as continuous ranking, where a continuous real-valued label $Y$ is assigned to an observable r.v. $X$ taking its values in a feature space $\mathcal{X}$, and the goal is to order all possible observations $x$ in $\mathcal{X}$ by means of a scoring function $s : \mathcal{X} \to \mathbb{R}$ so that $s(X)$ and $Y$ tend to increase or decrease together with highest probability. This problem generalizes bi/multi-partite ranking to a certain extent, and the task of finding optimal scoring functions $s(x)$ can be naturally cast as optimization of a dedicated functional criterion, called the IROC curve here, or as maximization of the Kendall $\tau$ related to the pair $(s(X), Y)$. From the theoretical side, we describe the optimal elements of this problem and provide statistical guarantees for empirical Kendall $\tau$ maximization under appropriate conditions on the class of scoring function candidates. We also propose a recursive statistical learning algorithm tailored to empirical IROC curve optimization, producing a piecewise constant scoring function that is fully described by an oriented binary tree. Preliminary numerical experiments highlight the difference in nature between regression and continuous ranking, and provide strong empirical evidence of the performance of empirical optimizers of the proposed criteria.

1 Introduction

The predictive learning problem considered in this paper can be stated informally as follows. Given a collection of objects of arbitrary cardinality, $N \geq 1$ say, respectively described by characteristics $x_1, \ldots, x_N$ in a feature space $\mathcal{X}$, the goal is to learn how to order them by increasing order of magnitude of a certain unknown continuous variable $y$. To fix ideas, the attribute $y$ can represent the 'size' of the object and be difficult to measure, as for the physical measurement of microscopic bodies in chemistry and biology or the cash flow of companies in quantitative finance, and the features $x$ may then correspond to indirect measurements. The most convenient way to define a preorder on a feature space $\mathcal{X}$ is to transport the natural order on the real line onto it by means of a (measurable) scoring function $s : \mathcal{X} \to \mathbb{R}$: an object with characteristics $x$ is then said to be 'larger' ('strictly larger', respectively) than an object described by $x'$ according to the scoring rule $s$ when $s(x') \leq s(x)$ (when $s(x') < s(x)$). Statistical learning boils down here to building a scoring function $s(x)$, based on a training data set $D_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ of objects for which the values of all variables (direct and indirect measurements) have been jointly observed, such that $s(X)$ and $Y$ tend to increase or decrease together with highest probability or, in other words, such that the ordering of new objects induced by $s(x)$ matches that defined by their true measures as well as possible. This problem, which shall be referred to as continuous ranking throughout the article, can be viewed as an extension of bipartite ranking, where the output variable $Y$ is assumed to be binary and the objective can be naturally formulated as a functional $M$-estimation problem by means of the concept of ROC curve; see [7]. Refer also to [4], [11], [1] for approaches based on the optimization of summary performance measures such as the AUC criterion in the binary context.
Generalization to the situation where the random label is ordinal and may take a finite number $K \geq 3$ of values is referred to as multipartite ranking and has been recently investigated in [16] (see also e.g. [14]), where distributional conditions guaranteeing that the ROC surface and the VUS criterion can be used to determine optimal scoring functions are exhibited in particular. It is the major purpose of this paper to formulate the continuous ranking problem in a quantitative manner and explore the connection between the latter and bi/multi-partite ranking. Intuitively, optimal scoring rules would also be optimal for any bipartite subproblem defined by thresholding the continuous variable $Y$ with cut-off $t > 0$, separating the observations $X$ such that $Y < t$ from those such that $Y > t$. Viewing continuous ranking in this way as a continuum of nested bipartite ranking problems, we provide here sufficient conditions for the existence of such (optimal) scoring rules, and we introduce a concept of integrated ROC curve (IROC curve in abbreviated form) that may serve as a natural performance measure for continuous ranking, as well as the related notion of integrated AUC criterion, a summary scalar criterion akin to Kendall tau. Generalization properties of empirical Kendall tau maximizers are discussed in the Supplementary Material. The paper also introduces a novel recursive algorithm that solves a discretized version of the empirical integrated ROC curve optimization problem, producing a scoring function that can be computed by means of a hierarchical combination of binary classification rules. Numerical experiments providing strong empirical evidence of the relevance of the approach promoted in this paper are also presented.

The paper is structured as follows. The probabilistic framework we consider is described and key concepts of bi/multi-partite ranking are briefly recalled in Section 2. Conditions under which optimal solutions of the problem of ranking data with continuous labels exist are investigated in Section 3, while Section 4 introduces a dedicated quantitative (functional) performance measure, the IROC curve. The algorithmic approach we propose in order to learn scoring functions with nearly optimal IROC curves is presented at length in Section 5. Numerical results are displayed in Section 6. Some technical proofs are deferred to the Supplementary Material.

2 Notation and Preliminaries

Throughout the paper, the indicator function of any event $\mathcal{E}$ is denoted by $\mathbb{I}\{\mathcal{E}\}$. The pseudo-inverse of any cdf $F(t)$ on $\mathbb{R}$ is denoted by $F^{-1}(u) = \inf\{s \in \mathbb{R} : F(s) \geq u\}$, while $\mathcal{U}([0, 1])$ denotes the uniform distribution on the unit interval $[0, 1]$.

2.1 The probabilistic framework

Given a continuous real-valued r.v. $Y$ representing an attribute of an object, its 'size' say, and a random vector $X$ taking its values in a (typically high-dimensional Euclidean) feature space $\mathcal{X}$ modelling other observable characteristics of the object (e.g. 'indirect measurements' of the size of the object), hopefully useful for predicting $Y$, the statistical learning problem considered here is to learn, from $n \geq 1$ independent training observations $D_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ drawn as the pair $(X, Y)$, a measurable mapping $s : \mathcal{X} \to \mathbb{R}$, which shall be referred to as a scoring function throughout the paper, so that the variables $s(X)$ and $Y$ tend to increase or decrease together: ideally, the larger the score $s(X)$, the higher the size $Y$.
For simplicity, we assume throughout the article that $\mathcal{X} = \mathbb{R}^d$ with $d \geq 1$ and that the support of $Y$'s distribution is compact, equal to $[0, 1]$ say. For any $q \geq 1$, we denote by $\lambda_q$ the Lebesgue measure on $\mathbb{R}^q$ equipped with its Borel $\sigma$-algebra, and suppose that the joint distribution $F_{X,Y}(dx\,dy)$ of the pair $(X, Y)$ has a density $f_{X,Y}(x, y)$ w.r.t. the tensor product measure $\lambda_d \otimes \lambda_1$. We also introduce the marginal distributions $F_Y(dy) = f_Y(y)\lambda_1(dy)$ and $F_X(dx) = f_X(x)\lambda_d(dx)$, where $f_Y(y) = \int_{x \in \mathcal{X}} f_{X,Y}(x, y)\lambda_d(dx)$ and $f_X(x) = \int_{y \in [0,1]} f_{X,Y}(x, y)\lambda_1(dy)$, as well as the conditional densities $f_{X|Y=y}(x) = f_{X,Y}(x, y)/f_Y(y)$ and $f_{Y|X=x}(y) = f_{X,Y}(x, y)/f_X(x)$. Observe incidentally that the probabilistic framework of the continuous ranking problem is quite similar to that of distribution-free regression. However, as shall be seen in the subsequent analysis, even if the regression function $m(x) = \mathbb{E}[Y \mid X = x]$ can be optimal under appropriate conditions, just as for regression, measuring ranking performance involves criteria of a different nature than the expected least squares error, and plug-in rules may not be relevant for the goal pursued here, as depicted by Fig. 2 in the Supplementary Material.

Scoring functions. The set of all scoring functions is denoted by $\mathcal{S}$ here. Any scoring function $s \in \mathcal{S}$ defines a total preorder on the space $\mathcal{X}$: $\forall (x, x') \in \mathcal{X}^2$, $x \preceq_s x' \Leftrightarrow s(x) \leq s(x')$. We also set $x \prec_s x'$ when $s(x) < s(x')$ and $x =_s x'$ when $s(x) = s(x')$ for $(x, x') \in \mathcal{X}^2$.

2.2 Bi/multi-partite ranking

Suppose that $Z$ is a binary label, taking its values in $\{-1, +1\}$ say, assigned to the r.v. $X$. In bipartite ranking, the goal is to pick $s$ in $\mathcal{S}$ so that, ideally, the larger $s(X)$, the greater the probability that $Z$ is equal to $+1$. In other words, the objective is to learn $s(x)$ such that the r.v. $s(X)$ given $Z = +1$ is as stochastically larger¹ as possible than the r.v. $s(X)$ given $Z = -1$: the difference between $\bar{G}_s(t) = \mathbb{P}\{s(X) \geq t \mid Z = +1\}$ and $\bar{H}_s(t) = \mathbb{P}\{s(X) \geq t \mid Z = -1\}$ should thus be maximal for all $t \in \mathbb{R}$. This can be naturally quantified by means of the notion of ROC curve of a candidate $s \in \mathcal{S}$, i.e. the parametrized curve $t \in \mathbb{R} \mapsto (\bar{H}_s(t), \bar{G}_s(t))$, which can be viewed as the graph of a mapping $\mathrm{ROC}_s : \alpha \in (0, 1) \mapsto \mathrm{ROC}_s(\alpha)$, connecting possible discontinuity points by linear segments (so that $\mathrm{ROC}_s(\alpha) = \bar{G}_s \circ H_s^{-1}(1 - \alpha)$ when $H_s$ has no flat part at $H_s^{-1}(1 - \alpha)$, where $H_s = 1 - \bar{H}_s$). A basic Neyman–Pearson theory argument shows that the optimal elements $s^*(x)$ related to this natural (functional) bipartite ranking criterion (i.e. scoring functions whose ROC curve dominates any other ROC curve everywhere on $(0, 1)$) are transforms $(T \circ \eta)(x)$ of the posterior probability $\eta(x) = \mathbb{P}\{Z = +1 \mid X = x\}$, where $T : \mathrm{SUPP}(\eta(X)) \to \mathbb{R}$ is any strictly increasing Borel mapping. Optimization of the curve in sup norm has been considered in [7] or [8], for instance. However, given its functional nature, in practice the ROC curve of any $s \in \mathcal{S}$ is often summarized by the area under it, a performance measure that can be interpreted probabilistically as the theoretical rate of concording pairs:

$$\mathrm{AUC}(s) = \mathbb{P}\{s(X) < s(X') \mid Z = -1,\ Z' = +1\} + \frac{1}{2}\,\mathbb{P}\{s(X) = s(X') \mid Z = -1,\ Z' = +1\}, \qquad (1)$$

where $(X', Z')$ denotes an independent copy of $(X, Z)$. A variety of algorithms aiming at maximizing the AUC criterion or surrogate pairwise criteria have been proposed and studied in the literature, among which [11], [15] or [3], whereas generalization properties of empirical AUC maximizers have been studied in [5], [1] and [12].

¹Given two real-valued r.v.'s $U$ and $U'$, recall that $U$ is said to be stochastically larger than $U'$ when $\mathbb{P}\{U \geq t\} \geq \mathbb{P}\{U' \geq t\}$ for all $t \in \mathbb{R}$.
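For concreteness, the empirical counterpart of the AUC in (1) can be sketched as follows; the vectorized numpy form is an implementation choice, not part of the formal setup.

```python
import numpy as np

def empirical_auc(scores, labels):
    """scores: (n,) values of s(X_i); labels: (n,) in {-1, +1}.
    Rate of concording negative/positive pairs, ties counted one half."""
    pos = scores[labels == +1]
    neg = scores[labels == -1]
    diff = pos[None, :] - neg[:, None]   # all positive/negative score pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```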
An analysis of the relationship between the AUC and the error rate is given in [9]. Extension to the situation where the label $Y$ takes at least three ordinal values (i.e. multipartite ranking) has also been investigated, see e.g. [14] or [6]. In [16], it is shown that, in contrast to the bipartite setup, the existence of optimal solutions cannot be guaranteed in general, and conditions on $(X, Y)$'s distribution ensuring that optimal solutions do exist, and that extensions of bipartite ranking criteria such as the ROC manifold and the volume under it can be used for learning optimal scoring rules, have been exhibited. An analogous analysis in the context of continuous ranking is carried out in the next section.

3 Optimal elements in ranking data with continuous labels

In this section, a natural definition of the set of optimal elements for continuous ranking is first proposed. Existence and characterization of such optimal scoring functions are then discussed.

3.1 Optimal scoring rules for continuous ranking

Considering a threshold value $y \in [0, 1]$, a considerably weakened (and discretized) version of the problem stated informally above would consist in finding $s$ so that the r.v. $s(X)$ given $Y > y$ is as stochastically larger as possible than $s(X)$ given $Y < y$. This subproblem coincides with the bipartite ranking problem related to the pair $(X, Z_y)$, where $Z_y = 2\mathbb{I}\{Y > y\} - 1$. As briefly recalled in subsection 2.2, the optimal set $\mathcal{S}_y^*$ is composed of the scoring functions that induce the same ordering as

$$\eta_y(X) = \mathbb{P}\{Y > y \mid X\} = 1 - \frac{1 - p_y}{1 - p_y + p_y \Phi_y(X)},$$

where $p_y = 1 - F_Y(y) = \mathbb{P}\{Y > y\}$ and $\Phi_y(X) = (dF_{X|Y>y}/dF_{X|Y<y})(X)$.

A continuum of bipartite ranking problems. The rationale behind the definition of the set $\mathcal{S}^*$ of optimal scoring rules for continuous ranking is that any element $s^*$ should score observations $x$ in the same order as $\eta_y$ (or equivalently as $\Phi_y$).

Definition 1. (OPTIMAL SCORING RULE) An optimal scoring rule for the continuous ranking problem related to the random pair $(X, Y)$ is any element $s^*$ that fulfills:

$$\forall y \in (0, 1),\ \forall (x, x') \in \mathcal{X}^2,\quad \eta_y(x) < \eta_y(x') \Rightarrow s^*(x) < s^*(x'). \qquad (2)$$

In other words, the set of optimal rules is defined as $\mathcal{S}^* = \bigcap_{y \in (0,1)} \mathcal{S}_y^*$. It is noteworthy that, although the definition above is natural, the set $\mathcal{S}^*$ can be empty in the absence of any distributional assumption, as shown by the following example.

Example 1. As a counter-example, consider the distributions $F_{X,Y}$ such that $F_Y = \mathcal{U}([0, 1])$ and $F_{X|Y=y} = \mathcal{N}(|2y - 1|, (2y - 1)^2)$. Observe that $(X, 1 - Y) \stackrel{d}{=} (X, Y)$, so that $\Phi_{1-t} = \Phi_t^{-1}$ for all $t \in (0, 1)$, and there exists $t \neq 0$ s.t. $\Phi_t$ is not constant. Hence, there exists no $s^*$ in $\mathcal{S}$ such that (2) holds true for all $t \in (0, 1)$.

Remark 1. (INVARIANCE) We point out that the class $\mathcal{S}^*$ of optimal elements for continuous ranking thus defined is invariant under strictly increasing transforms of the 'size' variable $Y$ (in particular, a change of unit has no impact on the definition of $\mathcal{S}^*$): for any Borel, strictly increasing mapping $H : (0, 1) \to (0, 1)$, any scoring function $s^*(x)$ that is optimal for the continuous ranking problem related to the pair $(X, Y)$ is still optimal for that related to $(X, H(Y))$ (since, under these hypotheses, for any $y \in (0, 1)$: $Y > y \Leftrightarrow H(Y) > H(y)$).

3.2 Existence and characterization of optimal scoring rules

We now investigate conditions guaranteeing the existence of optimal scoring functions for the continuous ranking problem.
Proposition 1. The following assertions are equivalent.

1. For all $0 < y < y' < 1$ and all $(x, x') \in \mathcal{X}^2$: $\Phi_y(x) < \Phi_y(x') \Rightarrow \Phi_{y'}(x) \leq \Phi_{y'}(x')$.
2. There exists an optimal scoring rule $s^*$ (i.e. $\mathcal{S}^* \neq \emptyset$).
3. The regression function $m(x) = \mathbb{E}[Y \mid X = x]$ is an optimal scoring rule.
4. The collection of probability distributions $F_{X|Y=y}(dx) = f_{X|Y=y}(x)\lambda_d(dx)$, $y \in (0, 1)$, satisfies the monotone likelihood ratio property: there exist $s^* \in \mathcal{S}$ and, for all $0 < y < y' < 1$, an increasing function $\varphi_{y,y'} : \mathbb{R} \to \mathbb{R}_+$ such that

$$\forall x \in \mathbb{R}^d,\quad \frac{f_{X|Y=y'}}{f_{X|Y=y}}(x) = \varphi_{y,y'}(s^*(x)).$$

Refer to the Appendix section for the technical proof. Truth be told, assessing whether Assertion 1 holds is a very challenging statistical task. However, through important examples, we now describe (not uncommon) situations where the conditions stated in Proposition 1 are fulfilled.

Example 2. We give a few important examples of probabilistic models fulfilling the properties listed in Proposition 1.

• Regression model. Suppose that $Y = m(X) + \epsilon$, where $m : \mathcal{X} \to \mathbb{R}$ is a Borel function and $\epsilon$ is a centered r.v. independent of $X$. One may easily check that $m \in \mathcal{S}^*$.

• Exponential families. Suppose that $f_{X|Y=y}(x) = \exp(\kappa(y)T(x) - \psi(y))f(x)$ for all $x \in \mathbb{R}^d$, where $f : \mathbb{R}^d \to \mathbb{R}_+$ is Borel, $\kappa : [0, 1] \to \mathbb{R}$ is a Borel, strictly increasing function and $T : \mathbb{R}^d \to \mathbb{R}$ is a Borel mapping such that $\psi(y) = \log \int_{x \in \mathbb{R}^d} \exp(\kappa(y)T(x))f(x)\,dx < +\infty$.

We point out that, although the regression function $m(x)$ is an optimal scoring function when $\mathcal{S}^* \neq \emptyset$, the continuous ranking problem does not coincide with distribution-free regression (notice incidentally that, in this case, any strictly increasing transform of $m(x)$ belongs to $\mathcal{S}^*$ as well). As depicted by Fig. 2, the least-squares criterion is not relevant for evaluating continuous ranking performance, and naive plug-in strategies should be avoided; see Remark 3 below. Dedicated performance criteria are proposed in the next section.

4 Performance measures for continuous ranking

We now investigate quantitative criteria for assessing performance in the continuous ranking problem, on which practical machine-learning algorithms may rely. We place ourselves in the situation where the set $\mathcal{S}^*$ is not empty; see Proposition 1 above.

A functional performance measure. It follows from the view developed in the previous section that, for any $(s, s^*) \in \mathcal{S} \times \mathcal{S}^*$ and all $y \in (0, 1)$, we have:

$$\forall \alpha \in (0, 1),\quad \mathrm{ROC}_{s,y}(\alpha) \leq \mathrm{ROC}_{s^*,y}(\alpha) = \mathrm{ROC}_y^*(\alpha), \qquad (3)$$

denoting by $\mathrm{ROC}_{s,y}$ the ROC curve of any $s \in \mathcal{S}$ related to the bipartite ranking subproblem $(X, Z_y)$, and by $\mathrm{ROC}_y^*$ the corresponding optimal ROC curve, i.e. the ROC curve of strictly increasing transforms of $\eta_y(x)$. Based on this observation, it is natural to design a dedicated performance measure by aggregating these 'sub-criteria'. Integrating over $y$ w.r.t. a $\sigma$-finite measure $\mu$ with support equal to $[0, 1]$ leads to the following definition: $\mathrm{IROC}_{\mu,s}(\alpha) = \int \mathrm{ROC}_{s,y}(\alpha)\,\mu(dy)$. The functional criterion thus defined inherits properties from the $\mathrm{ROC}_{s,y}$'s (e.g. monotonicity, concavity). In addition, the curve $\mathrm{IROC}_{\mu,s^*}$ with $s^* \in \mathcal{S}^*$ dominates everywhere on $(0, 1)$ any other curve $\mathrm{IROC}_{\mu,s}$ for $s \in \mathcal{S}$. However, except in pathological situations (e.g. when $s(x)$ is constant), the curve $\mathrm{IROC}_{\mu,s}$ is not invariant when replacing $Y$'s distribution by that of a strictly increasing transform $H(Y)$. In order to guarantee that this desirable property is fulfilled (see Remark 1), one should integrate w.r.t. $Y$'s distribution (which boils down to replacing $Y$ by the uniformly distributed r.v. $F_Y(Y)$).

Definition 2.
(INTEGRATED ROC/AUC CRITERIA) The integrated ROC curve of any scoring rule $s \in \mathcal{S}$ is defined as:

$$\forall \alpha \in (0, 1),\quad \mathrm{IROC}_s(\alpha) = \int_{y=0}^{1} \mathrm{ROC}_{s,y}(\alpha)\,F_Y(dy) = \mathbb{E}[\mathrm{ROC}_{s,Y}(\alpha)]. \qquad (4)$$

The integrated AUC criterion is defined as the area under the integrated ROC curve:

$$\forall s \in \mathcal{S},\quad \mathrm{IAUC}(s) = \int_{\alpha=0}^{1} \mathrm{IROC}_s(\alpha)\,d\alpha. \qquad (5)$$

The following result reveals the relevance of the functional/summary criteria defined above for the continuous ranking problem. Additional properties of IROC curves are listed in the Supplementary Material.

Theorem 1. Let $s^* \in \mathcal{S}$. The following assertions are equivalent.

1. The assertions of Proposition 1 are fulfilled and $s^*$ is an optimal scoring function in the sense given by Definition 1.
2. For all $\alpha \in (0, 1)$, $\mathrm{IROC}_{s^*}(\alpha) = \mathbb{E}[\mathrm{ROC}_Y^*(\alpha)]$.
3. We have $\mathrm{IAUC}(s^*) = \mathbb{E}[\mathrm{AUC}_Y^*]$, where $\mathrm{AUC}_y^* = \int_{\alpha=0}^{1} \mathrm{ROC}_y^*(\alpha)\,d\alpha$ for all $y \in (0, 1)$.

If $\mathcal{S}^* \neq \emptyset$, then we have, for any $\alpha \in (0, 1)$ and any $s \in \mathcal{S}$:

$$\mathrm{IROC}_s(\alpha) \leq \mathrm{IROC}^*(\alpha) \stackrel{\text{def}}{=} \mathbb{E}[\mathrm{ROC}_Y^*(\alpha)], \qquad \mathrm{IAUC}(s) \leq \mathrm{IAUC}^* \stackrel{\text{def}}{=} \mathbb{E}[\mathrm{AUC}_Y^*].$$

In addition, for any Borel, strictly increasing mapping $H : (0, 1) \to (0, 1)$, replacing $Y$ by $H(Y)$ leaves the curves $\mathrm{IROC}_s$, $s \in \mathcal{S}$, unchanged.

Equipped with the notion defined above, a scoring rule $s_1$ is said to be more accurate than another one $s_2$ if $\mathrm{IROC}_{s_2}(\alpha) \leq \mathrm{IROC}_{s_1}(\alpha)$ for all $\alpha \in (0, 1)$. The IROC curve criterion thus provides a partial preorder on $\mathcal{S}$. Observe also that, by virtue of Fubini's theorem, we have $\mathrm{IAUC}(s) = \int \mathrm{AUC}_y(s)\,F_Y(dy)$ for all $s \in \mathcal{S}$, denoting by $\mathrm{AUC}_y(s)$ the AUC of $s$ related to the bipartite ranking subproblem $(X, Z_y)$. Just like the AUC for bipartite ranking, the scalar IAUC criterion defines a full preorder on $\mathcal{S}$ for continuous ranking. Based on a training dataset $D_n$ of independent copies of $(X, Y)$, statistical versions of the IROC/IAUC criteria can be straightforwardly computed by replacing the distributions $F_Y$, $F_{X|Y>t}$ and $F_{X|Y<t}$ by their empirical counterparts in (3)-(5); see the Supplementary Material for further details. The lemma below provides a probabilistic interpretation of the IAUC criterion.

Lemma 1. Let $(X', Y')$ be a copy of the random pair $(X, Y)$ and $Y''$ a copy of the r.v. $Y$. Suppose that $(X, Y)$, $(X', Y')$ and $Y''$ are defined on the same probability space and are independent. For all $s \in \mathcal{S}$, we have:

$$\mathrm{IAUC}(s) = \mathbb{P}\{s(X) < s(X') \mid Y < Y'' < Y'\} + \frac{1}{2}\,\mathbb{P}\{s(X) = s(X') \mid Y < Y'' < Y'\}. \qquad (6)$$

This result shows in particular that a natural statistical estimate of $\mathrm{IAUC}(s)$ based on $D_n$ involves $U$-statistics of degree 3. Its proof is given in the Supplementary Material for completeness.

The Kendall $\tau$ statistic. The quantity (6) is akin to another popular way of measuring, in a summary fashion, the tendency to define the same ordering on the statistical population:

$$d_\tau(s) \stackrel{\text{def}}{=} \mathbb{P}\{(s(X) - s(X'))\cdot(Y - Y') > 0\} + \frac{1}{2}\,\mathbb{P}\{s(X) = s(X')\} = \mathbb{P}\{s(X) < s(X') \mid Y < Y'\} + \frac{1}{2}\,\mathbb{P}\{X =_s X'\}, \qquad (7)$$

where $(X', Y')$ denotes an independent copy of $(X, Y)$, observing that $\mathbb{P}\{Y < Y'\} = 1/2$. The empirical counterpart of (7) based on the sample $D_n$, given by

$$\hat{d}_n(s) = \frac{2}{n(n-1)} \sum_{i<j} \mathbb{I}\{(s(X_i) - s(X_j))\cdot(Y_i - Y_j) > 0\} + \frac{1}{n(n-1)} \sum_{i<j} \mathbb{I}\{s(X_i) = s(X_j)\}, \qquad (8)$$

is known as the Kendall $\tau$ statistic and is widely used in the context of statistical hypothesis testing. The quantity (7) shall thus be referred to as the (theoretical or true) Kendall $\tau$. Notice that $d_\tau(s)$ is invariant under strictly increasing transformations of $s(x)$ and thus describes properties of the order it defines. The following result reveals that the class $\mathcal{S}^*$, when non-empty, is the set of maximizers of the theoretical Kendall $\tau$. Refer to the Supplementary Material for the technical proof.
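A minimal O(n^2) sketch of the empirical Kendall tau statistic in (8) is given below; faster O(n log n) implementations of related statistics exist (e.g. scipy.stats.kendalltau, which uses a different normalization).

```python
import numpy as np

def empirical_kendall_tau(s_x, y):
    """s_x: (n,) scores s(X_i); y: (n,) continuous labels Y_i.
    Returns the statistic of Eq. (8): concordant pairs plus half the ties."""
    n = len(y)
    ds = s_x[:, None] - s_x[None, :]
    dy = y[:, None] - y[None, :]
    i, j = np.triu_indices(n, k=1)               # pairs with i < j
    concordant = (ds[i, j] * dy[i, j] > 0).sum()
    ties = (ds[i, j] == 0).sum()
    return (2.0 * concordant + ties) / (n * (n - 1))
```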
Proposition 2. Suppose that $\mathcal{S}^* \neq \emptyset$. For any $(s, s^*) \in \mathcal{S} \times \mathcal{S}^*$, we have: $d_\tau(s) \leq d_\tau(s^*)$.

Equipped with these criteria, the objective expressed informally above can now be formulated quantitatively as a (possibly functional) $M$-estimation problem. In practice, the goal pursued is to find a reasonable approximation of a solution to the optimization problem $\max_{s \in \mathcal{S}} d_\tau(s)$ (respectively $\max_{s \in \mathcal{S}} \mathrm{IAUC}(s)$), where the supremum is taken over the set of all scoring functions $s : \mathcal{X} \to \mathbb{R}$. Of course, these criteria are unknown in general, just like $(X, Y)$'s probability distribution, and the empirical risk minimization (ERM in abbreviated form) paradigm (see [10]) invites maximizing the statistical version (8) over a class $\mathcal{S}_0 \subset \mathcal{S}$ of controlled complexity when considering the criterion $d_\tau(s)$, for instance. The generalization capacity of empirical maximizers of the Kendall $\tau$ can be straightforwardly established using results in [5]. More details are given in the Supplementary Material. Before describing a practical algorithm for recursive maximization of the IROC curve, a few remarks are in order.

Remark 2. (ON KENDALL $\tau$ AND AUC) We point out that, in the bipartite ranking problem as well (i.e. when the output variable $Z$ takes its values in $\{-1, +1\}$, see subsection 2.2), the AUC criterion can be expressed as a function of the Kendall $\tau$ related to the pair $(s(X), Z)$ when the r.v. $s(X)$ is continuous. Indeed, we have in this case $2p(1-p)\,\mathrm{AUC}(s) = d_\tau(s)$, where $p = \mathbb{P}\{Z = +1\}$ and $d_\tau(s) = \mathbb{P}\{(s(X) - s(X'))\cdot(Z - Z') > 0\}$, denoting by $(X', Z')$ an independent copy of $(X, Z)$.

Remark 3. (CONNECTION TO DISTRIBUTION-FREE REGRESSION) Consider the nonparametric regression model $Y = m(X) + \epsilon$, where $\epsilon$ is a centered r.v. independent of $X$. In this case, it is well known that the regression function $m(X) = \mathbb{E}[Y \mid X]$ is the (unique) solution of expected least squares minimization. However, although $m \in \mathcal{S}^*$, the least squares criterion is far from appropriate for evaluating ranking performance, as depicted by Fig. 2. Observe additionally that, in contrast to the criteria introduced above, increasing transformations of the output variable $Y$ may have a strong impact on the least squares minimizer: except for linear transforms, $\mathbb{E}[H(Y) \mid X]$ is not an increasing transform of $m(X)$.

Remark 4. (ON DISCRETIZATION) Bi/multi-partite algorithms are not directly applicable to the continuous ranking problem. Indeed, a discretization of the interval $[0, 1]$ would first be required, but this would raise a difficult question outside our scope: how should this discretization be chosen based on the training data? We believe that this approach is less efficient than ours, which relies on problem-specific criteria, namely IROC and IAUC.

5 Continuous Ranking through Oriented Recursive Partitioning

It is the purpose of this section to introduce CRANK, a tree-structured learning algorithm specific to continuous ranking.

5.1 Ranking trees and Oriented Recursive Partitions

Decision trees undeniably figure among the most popular techniques in supervised and unsupervised settings; refer to [2] or [13] for instance. This is essentially due to the visual model summary they provide, in the form of a binary tree graphic that permits describing predictions by means of a hierarchical combination of elementary rules of the type "$X^{(j)} \leq \kappa$" or "$X^{(j)} > \kappa$", comparing the value taken by a (quantitative) component of the input vector $X$ (the split variable) to a certain threshold (the split value).
In contrast to local learning problems such as classification or regression, predictive rules for a global problem such as ranking cannot be described by a (tree-structured) partition of the feature space: cells (corresponding to the terminal leaves of the binary decision tree) must be ordered so as to define a scoring function. This leads to the definition of ranking trees as binary trees equipped with a "left-to-right" orientation, defining a tree-structured collection of scoring functions, as depicted by Fig. 1. Binary ranking trees have been used in the context of bipartite ranking in [7] and [3], and in [16] in the context of multipartite ranking. The root node of a ranking tree $\mathcal{T}_J$ of depth $J \geq 0$ represents the whole feature space $\mathcal{X}$: $C_{0,0} = \mathcal{X}$, while each internal node $(j, k)$ with $j < J$ and $k \in \{0, \ldots, 2^j - 1\}$ corresponds to a subset $C_{j,k} \subset \mathcal{X}$, whose left and right children respectively correspond to disjoint subsets $C_{j+1,2k}$ and $C_{j+1,2k+1}$ such that $C_{j,k} = C_{j+1,2k} \cup C_{j+1,2k+1}$. Equipped with the left-to-right orientation, any subtree $\mathcal{T} \subset \mathcal{T}_J$ defines a preorder on $\mathcal{X}$, elements lying in the same terminal cell of $\mathcal{T}$ being equally ranked. The scoring function related to the oriented tree $\mathcal{T}$ can be written as:

$$s_{\mathcal{T}}(x) = \sum_{C_{j,k}:\ \text{terminal leaf of}\ \mathcal{T}} 2^J \left(1 - \frac{k}{2^j}\right) \cdot \mathbb{I}\{x \in C_{j,k}\}. \qquad (9)$$

5.2 The CRANK algorithm

Based on Proposition 2, as mentioned in the Supplementary Material, one can try to build from the training dataset $D_n$ a ranking tree by recursive empirical Kendall $\tau$ maximization. We propose below an alternative tree-structured recursive algorithm, relying on a (dyadic) discretization of the 'size' variable $Y$. At each iteration, the local sample (i.e. the data lying in the cell described by the current node) is split into two halves (the highest/smallest halves, depending on $Y$), and the algorithm calls a binary classification algorithm $\mathcal{A}$ to learn how to divide the node into right/left children. The theoretical analysis of this algorithm and its connection with the approximation of $\mathrm{IROC}^*$ are difficult questions that will be addressed in future work. Indeed, we found that the IROC cannot be represented as a parametric curve, contrary to the ROC, which renders proofs much more difficult than in the bipartite case.

THE CRANK ALGORITHM

1. Input. Training data $D_n$, depth $J \geq 1$, binary classification algorithm $\mathcal{A}$.
2. Initialization. Set $C_{0,0} = \mathcal{X}$.
3. Iterations. For $j = 0, \ldots, J - 1$ and $k = 0, \ldots, 2^j - 1$:
   (a) Compute a median $y_{j,k}$ of the dataset $\{Y_i : X_i \in C_{j,k}\}$ and assign the binary label $Z_i = 2\mathbb{I}\{Y_i > y_{j,k}\} - 1$ to any data point $i$ lying in $C_{j,k}$, i.e. such that $X_i \in C_{j,k}$.
   (b) Solve the binary classification problem related to the input space $C_{j,k}$ and the training set $\{(X_i, Z_i) : 1 \leq i \leq n,\ X_i \in C_{j,k}\}$, producing a classifier $g_{j,k} : C_{j,k} \to \{-1, +1\}$.
   (c) Set $C_{j+1,2k} = \{x \in C_{j,k} : g_{j,k}(x) = +1\} = C_{j,k} \setminus C_{j+1,2k+1}$.
4. Output. Ranking tree $\mathcal{T}_J = \{C_{j,k} : 0 \leq j \leq J,\ 0 \leq k < 2^j\}$.
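A schematic sketch of the CRANK recursion follows; the choice of a depth-limited CART as the base classifier $\mathcal{A}$, the minimum leaf size, and the particular score offsets are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def crank(X, Y, depth, min_leaf=10):
    """Returns a function scoring new points according to the oriented tree."""
    if depth == 0 or len(Y) < min_leaf:
        return lambda x: np.zeros(len(x))
    z = (Y > np.median(Y)).astype(int)                   # step (a): median-split labels
    clf = DecisionTreeClassifier(max_depth=3).fit(X, z)  # step (b), A = shallow CART
    hi = clf.predict(X).astype(bool)                     # step (c): right child = 'high'
    left = crank(X[~hi], Y[~hi], depth - 1, min_leaf)
    right = crank(X[hi], Y[hi], depth - 1, min_leaf)

    def score(x):
        x = np.asarray(x)
        out = np.zeros(len(x))
        if len(x) == 0:
            return out
        m = clf.predict(x).astype(bool)
        if (~m).any():
            out[~m] = left(x[~m])
        if m.any():
            # offset dominates any score reachable in the left subtree,
            # enforcing the left-to-right orientation of Eq. (9)
            out[m] = 2.0 ** depth + right(x[m])
        return out

    return score
```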
Of course, the depth $J$ should be chosen such that $2^J \leq n$. One may also continue splitting nodes until the number of data points within a cell reaches a minimum specified in advance. In addition, it is well known that recursive partitioning methods fragment the data and that the instability of splits increases with depth. For this reason, a ranking subtree must be selected: the growing procedure above should classically be followed by a pruning stage, where children of the same parent are progressively merged until the root $\mathcal{T}_0$ is reached, and a subtree among the sequence $\mathcal{T}_0 \subset \ldots \subset \mathcal{T}_J$ with nearly maximal IAUC should be chosen using cross-validation. Issues related to the implementation of the CRANK algorithm and variants (e.g. exploiting randomization/aggregation) will be investigated in a forthcoming paper.

6 Numerical Experiments

In order to illustrate the idea conveyed by Fig. 2, namely that the least squares criterion is not appropriate for the continuous ranking problem, we compared CRANK with CART on a toy example. Recall that the latter is a regression decision tree algorithm which minimizes the MSE (Mean Squared Error). We also ran an alternative version of CRANK which maximizes the empirical Kendall $\tau$ instead of the empirical IAUC; this method is referred to as KENDALL from now on. The experimental setting consists of a one-dimensional feature space $\mathcal{X} = [0, 1]$ (for visualization reasons) and a simple regression model without any noise: $Y = m(X)$. Intuitively, a least squares strategy can miss slight oscillations of the regression function, which are critical in ranking when they occur in high-probability regions, as they affect the order on the feature space. The results are presented in Table 1. See the Supplementary Material for further details.

7 Conclusion

This paper considers the problem of learning how to order objects by increasing 'size', modeled as a continuous r.v. $Y$, based on indirect measurements $X$. We provided a rigorous mathematical formulation of this problem, which finds many applications (e.g. quality control, chemistry) and is referred to as continuous ranking. In particular, necessary and sufficient conditions on $(X, Y)$'s distribution for the existence of optimal solutions are exhibited, and appropriate criteria have been proposed for evaluating the performance of scoring rules in these situations. In contrast to distribution-free regression, where the goal is to recover the local values taken by the regression function, continuous ranking aims at reproducing the preorder it defines on the feature space as accurately as possible. The numerical results obtained via the algorithmic approaches we proposed for optimizing the aforementioned criteria highlight the difference in nature between these two statistical learning tasks.

Acknowledgments

This work was supported by the industrial chair Machine Learning for Big Data from Télécom ParisTech and by a public grant (Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH).
1. What is the focus of the paper, and what problem does it attempt to solve?
2. What is the difference between bipartite ranking and continuous ranking?
3. How does the paper extend the idea of bipartite ranking for continuous output Y?
4. What are the requirements for using the proposed method?
5. What is the major contribution of the paper, and how does it differ from previous works?
6. Can you provide examples of real-world applications where continuous ranking would be useful?
7. How does the paper compare continuous ranking with regression, and what are the differences between these two approaches?
8. Do you have any concerns or questions about the numerical experiments used in the paper?
9. How does the paper prove that IAUC can be used to evaluate the score function s(x) in the sense of continuous ranking?
10. Can you explain the theoretical novelty of the paper in simpler terms?
Review

The ms tries to solve a problem called continuous ranking, which is an extension of bipartite ranking. The idea of continuous ranking is to find a score function that increases or decreases with the output y with highest probability. In bipartite ranking, each data point x is given a binary label y \in {+1, -1}. The goal is to find a score function s(x) such that the difference between p(s(x)>=t|y=+1) and p(s(x) < t|y=-1) is maximal for all thresholds t. This can be achieved by maximizing the AUC of the score function.

The ms extends the bipartite ranking idea to continuous output Y by introducing a threshold y to separate the data into two ordered parts. By recursively dividing the two ordered parts into further ordered parts, we construct an ordered binary tree, which provides a continuous ranking on the leaf nodes. The ms requires the user to provide a score function s(x) and the distribution of the output Y. If the distribution of Y is not known, it is approximated by the empirical data distribution of the training data. Then we can calculate the IAUC for the given score function and output distribution. The ms proves that the IAUC can be used to evaluate the score function s(x) in the sense of continuous ranking.

The major contribution of the ms is on the theoretical side. However, my limited knowledge does not allow me to fully understand the theoretical novelty, so I will comment more from the practical side. The ms emphasizes the differences between continuous ranking and regression (Figure 2): in continuous ranking we optimize IAUC, in regression we minimize mean squared error. It is a bit confusing to understand "regression function", since on page 4, Section 3.2, Proposition 1, point 3, the regression function is an optimal scoring rule.

The numerical experiment uses specially designed data. The generating function z²(z+1)(z+1.5)(z+2) is uncommon in real datasets and there is no noise. The oscillation that causes problems in Figure 3b looks so small that I worry it will easily be dominated by noise. It would be nice to see some results for real applications.
Title Ranking Data with Continuous Labels through Oriented Recursive Partitions Abstract We formulate a supervised learning problem, referred to as continuous ranking, where a continuous real-valued label Y is assigned to an observable r.v. X taking its values in a feature space X and the goal is to order all possible observations x in X by means of a scoring function s : X → R so that s(X) and Y tend to increase or decrease together with highest probability. This problem generalizes bi/multi-partite ranking to a certain extent and the task of finding optimal scoring functions s(x) can be naturally cast as optimization of a dedicated functional criterion, called the IROC curve here, or as maximization of the Kendall τ related to the pair (s(X), Y ). From the theoretical side, we describe the optimal elements of this problem and provide statistical guarantees for empirical Kendall τ maximization under appropriate conditions for the class of scoring function candidates. We also propose a recursive statistical learning algorithm tailored to empirical IROC curve optimization and producing a piecewise constant scoring function that is fully described by an oriented binary tree. Preliminary numerical experiments highlight the difference in nature between regression and continuous ranking and provide strong empirical evidence of the performance of empirical optimizers of the criteria proposed. 1 Introduction The predictive learning problem considered in this paper can be easily stated in an informal fashion, as follows. Given a collection of objects of arbitrary cardinality, N ≥ 1 say, respectively described by characteristics x1, . . . , xN in a feature space X , the goal is to learn how to order them by increasing order of magnitude of a certain unknown continuous variable y. To fix ideas, the attribute y can represent the ’size’ of the object and be difficult to measure, as for the physical measurement of microscopic bodies in chemistry and biology or the cash flow of companies in quantitative finance and the features x may then correspond to indirect measurements. The most convenient way to define a preorder on a feature space X is to transport the natural order on the real line onto it by means of a (measurable) scoring function s : X → R: an object with charcateristics x is then said to be ’larger’ (’strictly larger’, respectively) than an object described by x′ according to the scoring rule s when s(x′) ≤ s(x) (when s(x) < s(x′)). Statistical learning boils down here to build a scoring function s(x), based on a training data set Dn = {(X1, Y1), . . . , (Xn, Yn)} of objects for which the values of all variables (direct and indirect measurements) have been jointly observed, such that s(X) and Y tend to increase or decrease together with highest probability or, in other words, such that the ordering of new objects induced by s(x) matches that defined by their true measures as well as possible. This problem, that shall be referred to as continuous ranking throughout the article can be viewed as an extension of bipartite ranking, where the output variable Y is assumed to be binary and the objective can be naturally formulated as a functionalM -estimation problem by means of the concept of ROC curve, see [7]. Refer also to [4], [11], [1] for approaches based on the optimization 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. of summary performance measures such as the AUC criterion in the binary context. 
Generalization to the situation where the random label is ordinal and may take a finite number K ≥ 3 of values is referred to as multipartite ranking and has been recently investigated in [16] (see also e.g. [14]), where distributional conditions guaranteeing that ROC surface and the VUS criterion can be used to determine optimal scoring functions are exhibited in particular. It is the major purpose of this paper to formulate the continuous ranking problem in a quantitative manner and explore the connection between the latter and bi/multi-partite ranking. Intuitively, optimal scoring rules would be also optimal for any bipartite subproblem defined by thresholding the continuous variable Y with cut-off t > 0, separating the observations X such that Y < t from those such that Y > t. Viewing this way continuous ranking as a continuum of nested bipartite ranking problems, we provide here sufficient conditions for the existence of such (optimal) scoring rules and we introduce a concept of integrated ROC curve (IROC curve in abbreviated form) that may serve as a natural performance measure for continuous ranking, as well as the related notion of integrated AUC criterion, a summary scalar criterion, akin to Kendall tau. Generalization properties of empirical Kendall tau maximizers are discussed in the Supplementary Material. The paper also introduces a novel recursive algorithm that solves a discretized version of the empirical integrated ROC curve optimization problem, producing a scoring function that can be computed by means of a hierarchical combination of binary classification rules. Numerical experiments providing strong empirical evidence of the relevance of the approach promoted in this paper are also presented. The paper is structured as follows. The probabilistic framework we consider is described and key concepts of bi/multi-partite ranking are briefly recalled in section 2. Conditions under which optimal solutions of the problem of ranking data with continuous labels exist are next investigated in section 3, while section 4 introduces a dedicated quantitative (functional) performance measure, the IROC curve. The algorithmic approach we propose in order to learn scoring functions with nearly optimal IROC curves is presented at length in section 5. Numerical results are displayed in section 6. Some technical proofs are deferred to the Supplementary Material. 2 Notation and Preliminaries Throughout the paper, the indicator function of any event E is denoted by I{E}. The pseudo-inverse of any cdf F (t) on R is denoted by F−1(u) = inf{s ∈ R : F (s) ≥ u}, while U([0, 1]) denotes the uniform distribution on the unit interval [0, 1]. 2.1 The probabilistic framework Given a continuous real valued r.v. Y representing an attribute of an object, its ’size’ say, and a random vector X taking its values in a (typically high dimensional euclidian) feature space X modelling other observable characteristics of the object (e.g. ’indirect measurements’ of the size of the object), hopefully useful for predicting Y , the statistical learning problem considered here is to learn from n ≥ 1 training independent observations Dn = {(X1, Y1), . . . , (Xn, Yn)}, drawn as the pair (X,Y ), a measurable mapping s : X → R, that shall be referred to as a scoring function throughout the paper, so that the variables s(X) and Y tend to increase or decrease together: ideally, the larger the score s(X), the higher the size Y . 
For simplicity, we assume throughout the article that X = Rd with d ≥ 1 and that the support of Y ’s distribution is compact, equal to [0, 1] say. For any q ≥ 1, we denote by λq the Lebesgue measure on Rq equipped with its Borelian σ-algebra and suppose that the joint distribution FX,Y (dxdy) of the pair (X,Y ) has a density fX,Y (x, y) w.r.t. the tensor product measure λd ⊗ λ1. We also introduces the marginal distributions FY (dy) = fY (y)λ1(dy) and FX(dx) = fX(x)λd(dx), where fY (y) = ∫ x∈X fX,Y (x, y)λd(dx) and fX(x) = ∫ y∈[0,1] fX,Y (x, y)λ1(dy) as well as the conditional densities fX|Y=y(x) = fX,Y (x, y)/fY (y) and fY |X=x(y) = fX,Y (x, y)/fX(x). Observe incidentally that the probabilistic framework of the continuous ranking problem is quite similar to that of distribution-free regression. However, as shall be seen in the subsequent analysis, even if the regression function m(x) = E[Y | X = x] can be optimal under appropriate conditions, just like for regression, measuring ranking performance involves criteria that are of different nature than the expected least square error and plug-in rules may not be relevant for the goal pursued here, as depicted by Fig. 2 in the Supplementary Material. Scoring functions. The set of all scoring functions is denoted by S here. Any scoring function s ∈ S defines a total preorder on the space X : ∀(x, x′) ∈ X 2, x s x′ ⇔ s(x) ≤ s(x′). We also set x ≺s x′ when s(x) < s(x′) and x =s x′ when s(x) = s(x′) for (x, x′) ∈ X 2. 2.2 Bi/multi-partite ranking Suppose thatZ is a binary label, taking its values in {−1,+1} say, assigned to the r.v.X . In bipartite ranking, the goal is to pick s in S so that the larger s(X), the greater the probability that Y is equal to 1 ideally. In other words, the objective is to learn s(x) such that the r.v. s(X) given Y = +1 is as stochastically larger1 as possible than the r.v. s(X) given Y = −1: the difference between Ḡs(t) = P{s(X) ≥ t | Y = +1} and H̄s(t) = P{s(X) ≥ t | Y = −1} should be thus maximal for all t ∈ R. This can be naturally quantified by means of the notion of ROC curve of a candidate s ∈ S, i.e. the parametrized curve t ∈ R 7→ (H̄s(t), Ḡs(t)), which can be viewed as the graph of a mapping ROCs : α ∈ (0, 1) 7→ ROCs(α), connecting possible discontinuity points by linear segments (so that ROCs(α) = Ḡs ◦ (1 − H−1s )(1 − α) when Hs has no flat part in H−1s (1 − α), where Hs = 1− H̄s). A basic Neyman Pearson’s theory argument shows that the optimal elements s∗(x) related to this natural (functional) bipartite ranking criterion (i.e. scoring functions whose ROC curve dominates any other ROC curve everywhere on (0, 1)) are transforms (T ◦ η)(x) of the posterior probability η(x) = P{Z = +1 | X = x}, where T : SUPP(η(X)) → R is any strictly increasing borelian mapping. Optimization of the curve in sup norm has been considered in [7] or in [8] for instance. However, given its functional nature, in practice the ROC curve of any s ∈ S is often summarized by the area under it, which performance measure can be interpreted in a probabilistic manner, as the theoretical rate of concording pairs AUC(s) = P {s(X) < s(X′) | Z = −1, Z′ = +1}+ 1 2 P {s(X) = s(X′) | Z = −1, Z′ = +1} , (1) where (X ′, Z ′) denoted an independent copy of (X,Z). A variety of algorithms aiming at maximizing the AUC criterion or surrogate pairwise criteria have been proposed and studied in the literature, among which [11], [15] or [3], whereas generalization properties of empirical AUC maximizers have been studied in [5], [1] and [12]. 
An analysis of the relationship between the AUC and the error rate is given in [9]. The extension to the situation where the label Y takes at least three ordinal values (i.e. multipartite ranking) has also been investigated, see e.g. [14] or [6]. In [16], it is shown that, in contrast to the bipartite setup, the existence of optimal solutions cannot be guaranteed in general, and conditions on (X, Y)'s distribution have been exhibited which ensure that optimal solutions do exist and that extensions of bipartite ranking criteria, such as the ROC manifold and the volume under it, can be used for learning optimal scoring rules. An analogous analysis in the context of continuous ranking is carried out in the next section.

3 Optimal elements in ranking data with continuous labels

In this section, a natural definition of the set of optimal elements for continuous ranking is first proposed. Existence and characterization of such optimal scoring functions are next discussed.

3.1 Optimal scoring rules for continuous ranking

Considering a threshold value y ∈ [0, 1], a considerably weakened (and discretized) version of the problem stated informally above would consist in finding s so that the r.v. s(X) given Y > y is as stochastically larger than s(X) given Y < y as possible. This subproblem coincides with the bipartite ranking problem related to the pair (X, Z_y), where Z_y = 2·I{Y > y} − 1. As briefly recalled in subsection 2.2, the optimal set S∗_y is composed of the scoring functions that induce the same ordering as η_y(X) = P{Y > y | X} = 1 − (1 − p_y)/(1 − p_y + p_y Φ_y(X)), where p_y = 1 − F_Y(y) = P{Y > y} and Φ_y(X) = (dF_{X|Y>y}/dF_{X|Y<y})(X).

¹Given two real-valued r.v.'s U and U′, recall that U is said to be stochastically larger than U′ when P{U ≥ t} ≥ P{U′ ≥ t} for all t ∈ R.

A continuum of bipartite ranking problems. The rationale behind the definition of the set S∗ of optimal scoring rules for continuous ranking is that any element s∗ should score observations x in the same order as η_y (or, equivalently, as Φ_y).

Definition 1. (OPTIMAL SCORING RULE) An optimal scoring rule for the continuous ranking problem related to the random pair (X, Y) is any element s∗ that fulfills:

∀y ∈ (0, 1), ∀(x, x′) ∈ X², η_y(x) < η_y(x′) ⇒ s∗(x) < s∗(x′).   (2)

In other words, the set of optimal rules is defined as S∗ = ⋂_{y∈(0,1)} S∗_y.

It is noteworthy that, although the definition above is natural, the set S∗ can be empty in the absence of any distributional assumption, as shown by the following example.

Example 1. As a counter-example, consider the distributions F_{X,Y} such that F_Y = U([0, 1]) and F_{X|Y=y} = N(|2y − 1|, (2y − 1)²). Observe that (X, 1 − Y) has the same distribution as (X, Y), so that Φ_{1−t} = Φ_t^{-1} for all t ∈ (0, 1), and there exists t ≠ 0 s.t. Φ_t is not constant. Hence, there exists no s∗ in S such that (2) holds true for all t ∈ (0, 1).

Remark 1. (INVARIANCE) We point out that the class S∗ of optimal elements for continuous ranking thus defined is invariant under strictly increasing transforms of the 'size' variable Y (in particular, a change of unit has no impact on the definition of S∗): for any Borelian and strictly increasing mapping H : (0, 1) → (0, 1), any scoring function s∗(x) that is optimal for the continuous ranking problem related to the pair (X, Y) is still optimal for that related to (X, H(Y)) (since, under these hypotheses, for any y ∈ (0, 1): Y > y ⇔ H(Y) > H(y)).
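A quick numerical illustration of Example 1 (our sketch, not from the paper): the posterior η_y(x) = P{Y > y | X = x} can be approximated by numerical integration of the stated Gaussian conditional densities, and one can check that two points are ranked in opposite orders by η_{1/4} and η_{3/4}, so no single s∗ can satisfy (2) for all thresholds.

```python
import numpy as np
from scipy.stats import norm

# Example 1: Y ~ U(0, 1) and X | Y = t ~ N(|2t - 1|, (2t - 1)^2).
t = np.linspace(1e-3, 1 - 1e-3, 500)     # grid avoiding the degenerate t = 1/2

def eta(y, x):
    # eta_y(x) = P{Y > y | X = x}; with a uniform grid, the ratio of
    # Riemann sums approximates the ratio of the two integrals.
    dens = norm.pdf(x, loc=np.abs(2 * t - 1), scale=np.abs(2 * t - 1))
    return dens[t > y].sum() / dens.sum()

for x in (0.2, 1.5):
    print(f"x = {x}: eta_0.25 = {eta(0.25, x):.3f}, eta_0.75 = {eta(0.75, x):.3f}")
# eta_{0.25} ranks x = 0.2 above x = 1.5, while eta_{0.75} reverses the order
# (indeed, by the symmetry of Example 1, eta_{0.75} = 1 - eta_{0.25}).
```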
3.2 Existence and characterization of optimal scoring rules

We now investigate conditions guaranteeing the existence of optimal scoring functions for the continuous ranking problem.

Proposition 1. The following assertions are equivalent.
1. For all 0 < y < y′ < 1, for all (x, x′) ∈ X²: Φ_y(x) < Φ_y(x′) ⇒ Φ_{y′}(x) ≤ Φ_{y′}(x′).
2. There exists an optimal scoring rule s∗ (i.e. S∗ ≠ ∅).
3. The regression function m(x) = E[Y | X = x] is an optimal scoring rule.
4. The collection of probability distributions F_{X|Y=y}(dx) = f_{X|Y=y}(x)λ_d(dx), y ∈ (0, 1), satisfies the monotone likelihood ratio property: there exist s∗ ∈ S and, for all 0 < y < y′ < 1, an increasing function φ_{y,y′} : R → R_+ such that: ∀x ∈ R^d, f_{X|Y=y′}(x)/f_{X|Y=y}(x) = φ_{y,y′}(s∗(x)).

Refer to the Appendix section for the technical proof. Truth be told, assessing whether Assertion 1 holds is a very challenging statistical task. However, through important examples, we now describe (not uncommon) situations where the conditions stated in Proposition 1 are fulfilled.

Example 2. We give a few important examples of probabilistic models fulfilling the properties listed in Proposition 1.
• Regression model. Suppose that Y = m(X) + ε, where m : X → R is a Borelian function and ε is a centered r.v. independent from X. One may easily check that m ∈ S∗.
• Exponential families. Suppose that f_{X|Y=y}(x) = exp(κ(y)T(x) − ψ(y))f(x) for all x ∈ R^d, where f : R^d → R_+ is Borelian, κ : [0, 1] → R is a Borelian strictly increasing function and T : R^d → R is a Borelian mapping such that ψ(y) = log ∫_{x∈R^d} exp(κ(y)T(x))f(x)dx < +∞.

We point out that, although the regression function m(x) is an optimal scoring function when S∗ ≠ ∅, the continuous ranking problem does not coincide with distribution-free regression (notice incidentally that, in this case, any strictly increasing transform of m(x) belongs to S∗ as well). As depicted by Fig. 2, the least-squares criterion is not relevant to evaluate continuous ranking performance, and naive plug-in strategies should be avoided, see Remark 3 below. Dedicated performance criteria are proposed in the next section.

4 Performance measures for continuous ranking

We now investigate quantitative criteria for assessing the performance in the continuous ranking problem, on which practical machine-learning algorithms may rely. We place ourselves in the situation where the set S∗ is not empty, see Proposition 1 above.

A functional performance measure. It follows from the view developed in the previous section that, for any (s, s∗) ∈ S × S∗ and for all y ∈ (0, 1), we have:

∀α ∈ (0, 1), ROC_{s,y}(α) ≤ ROC_{s∗,y}(α) = ROC∗_y(α),   (3)

denoting by ROC_{s,y} the ROC curve of any s ∈ S related to the bipartite ranking subproblem (X, Z_y) and by ROC∗_y the corresponding optimal ROC curve, i.e. the ROC curve of strictly increasing transforms of η_y(x). Based on this observation, it is natural to design a dedicated performance measure by aggregating these 'sub-criteria'. Integrating over y w.r.t. a σ-finite measure µ with support equal to [0, 1], this leads to the following definition: IROC_{µ,s}(α) = ∫ ROC_{s,y}(α)µ(dy). The functional criterion thus defined inherits properties from the ROC_{s,y}'s (e.g. monotonicity, concavity). In addition, the curve IROC_{µ,s∗} with s∗ ∈ S∗ dominates everywhere on (0, 1) any other curve IROC_{µ,s} for s ∈ S. However, except in pathological situations (e.g. when s(x) is constant), the curve IROC_{µ,s} is not invariant when replacing Y's distribution by that of a strictly increasing transform H(Y). In order to guarantee that this desirable property is fulfilled (see Remark 1), one should integrate w.r.t. Y's distribution (which boils down to replacing Y by the uniformly distributed r.v. F_Y(Y)).
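Before stating the formal definition below, here is a rough empirical sketch (ours) of the aggregation idea: for each threshold y, form the bipartite labels Z_y = 2·I{Y > y} − 1, evaluate the empirical ROC of s at a fixed false positive rate α, and average over thresholds taken from Y's empirical distribution. Function names and the quantile grid are our own choices.

```python
import numpy as np

def empirical_roc_at(scores, z, alpha):
    # Empirical ROC_s(alpha): true positive rate at the score threshold whose
    # empirical false positive rate is (approximately) alpha.
    thr = np.quantile(scores[z == -1], 1 - alpha)
    return np.mean(scores[z == +1] > thr)

def empirical_iroc_at(scores, y, alpha, n_grid=50):
    # Average the bipartite sub-criteria ROC_{s,y}(alpha) over thresholds
    # taken from Y's empirical quantiles (inner quantiles only, so that both
    # classes are non-empty), i.e. integrating w.r.t. F_Y.
    vals = []
    for t in np.quantile(y, np.linspace(0.05, 0.95, n_grid)):
        z = np.where(y > t, +1, -1)
        vals.append(empirical_roc_at(scores, z, alpha))
    return float(np.mean(vals))
```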
Definition 2. (INTEGRATED ROC/AUC CRITERIA) The integrated ROC curve of any scoring rule s ∈ S is defined as:

∀α ∈ (0, 1), IROC_s(α) = ∫_{y=0}^{1} ROC_{s,y}(α) F_Y(dy) = E[ROC_{s,Y}(α)].   (4)

The integrated AUC criterion is defined as the area under the integrated ROC curve:

∀s ∈ S, IAUC(s) = ∫_{α=0}^{1} IROC_s(α) dα.   (5)

The following result reveals the relevance of the functional/summary criteria defined above for the continuous ranking problem. Additional properties of IROC curves are listed in the Supplementary Material.

Theorem 1. Let s∗ ∈ S. The following assertions are equivalent.
1. The assertions of Proposition 1 are fulfilled and s∗ is an optimal scoring function in the sense given by Definition 1.
2. For all α ∈ (0, 1), IROC_{s∗}(α) = E[ROC∗_Y(α)].
3. We have IAUC(s∗) = E[AUC∗_Y], where AUC∗_y = ∫_{α=0}^{1} ROC∗_y(α) dα for all y ∈ (0, 1).

If S∗ ≠ ∅, then we have, for any α ∈ (0, 1): ∀s ∈ S, IROC_s(α) ≤ IROC∗(α) := E[ROC∗_Y(α)] and IAUC(s) ≤ IAUC∗ := E[AUC∗_Y]. In addition, for any Borelian and strictly increasing mapping H : (0, 1) → (0, 1), replacing Y by H(Y) leaves the curves IROC_s, s ∈ S, unchanged.

Equipped with the notion defined above, a scoring rule s_1 is said to be more accurate than another one s_2 if IROC_{s_2}(α) ≤ IROC_{s_1}(α) for all α ∈ (0, 1). The IROC curve criterion thus provides a partial preorder on S. Observe also that, by virtue of Fubini's theorem, we have IAUC(s) = ∫ AUC_y(s) F_Y(dy) for all s ∈ S, denoting by AUC_y(s) the AUC of s related to the bipartite ranking subproblem (X, Z_y). Just like the AUC for bipartite ranking, the scalar IAUC criterion defines a full preorder on S for continuous ranking. Based on a training dataset D_n of independent copies of (X, Y), statistical versions of the IROC/IAUC criteria can be straightforwardly computed by replacing the distributions F_Y, F_{X|Y>t} and F_{X|Y<t} by their empirical counterparts in (3)-(5), see the Supplementary Material for further details.

The lemma below provides a probabilistic interpretation of the IAUC criterion.

Lemma 1. Let (X′, Y′) be a copy of the random pair (X, Y) and Y″ a copy of the r.v. Y. Suppose that (X, Y), (X′, Y′) and Y″ are defined on the same probability space and are independent. For all s ∈ S, we have:

IAUC(s) = P{s(X) < s(X′) | Y < Y″ < Y′} + (1/2) P{s(X) = s(X′) | Y < Y″ < Y′}.   (6)

This result shows in particular that a natural statistical estimate of IAUC(s) based on D_n involves U-statistics of degree 3. Its proof is given in the Supplementary Material for completeness.

The Kendall τ statistic. The quantity (6) is akin to another popular way of measuring, in a summary fashion, the tendency to define the same ordering on the statistical population:

d_τ(s) := P{(s(X) − s(X′)) · (Y − Y′) > 0} + (1/2) P{s(X) = s(X′)}   (7)
        = P{s(X) < s(X′) | Y < Y′} + (1/2) P{X =_s X′},

where (X′, Y′) denotes an independent copy of (X, Y), observing that P{Y < Y′} = 1/2. The empirical counterpart of (7) based on the sample D_n, given by

d̂_n(s) = (2/(n(n−1))) Σ_{i<j} I{(s(X_i) − s(X_j)) · (Y_i − Y_j) > 0} + (1/(n(n−1))) Σ_{i<j} I{s(X_i) = s(X_j)},   (8)

is known as the Kendall τ statistic and is widely used in the context of statistical hypothesis testing. The quantity (7) shall thus be referred to as the (theoretical or true) Kendall τ. Notice that d_τ(s) is invariant under strictly increasing transformations of s(x) and thus describes properties of the order it defines.
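A minimal sketch (ours) of the empirical Kendall τ statistic (8); the toy model Y = m(X) with a non-monotone m also illustrates why the identity scoring function can be suboptimal even without noise, which anticipates Proposition 2 below.

```python
import numpy as np

def kendall_tau_stat(scores, y):
    # Empirical Kendall tau of (8): concordant-pair rate plus one half of the
    # tied-score rate, over all n(n-1)/2 pairs i < j.
    i, j = np.triu_indices(len(y), k=1)
    ds, dy = scores[i] - scores[j], y[i] - y[j]
    return (np.sum(ds * dy > 0) + 0.5 * np.sum(ds == 0)) / len(i)

# Noiseless model Y = m(X) with a non-monotone m: s = m attains the maximal
# value, while the identity score s(x) = x mis-ranks pairs across the bump.
rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = np.sin(4 * x)
print(kendall_tau_stat(np.sin(4 * x), y))  # 1.0
print(kendall_tau_stat(x, y))              # strictly smaller
```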
The following result reveals that the class S∗, when nonempty, is the set of maximizers of the theoretical Kendall τ. Refer to the Supplementary Material for the technical proof.

Proposition 2. Suppose that S∗ ≠ ∅. For any (s, s∗) ∈ S × S∗, we have: d_τ(s) ≤ d_τ(s∗).

Equipped with these criteria, the objective expressed above in an informal manner can now be formulated in a quantitative manner as a (possibly functional) M-estimation problem. In practice, the goal pursued is to find a reasonable approximation of a solution to the optimization problem max_{s∈S} d_τ(s) (respectively, max_{s∈S} IAUC(s)), where the supremum is taken over the set of all scoring functions s : X → R. Of course, these criteria are unknown in general, just like (X, Y)'s probability distribution, and the empirical risk minimization (ERM in abbreviated form) paradigm (see [10]) calls for maximizing the statistical version (8) over a class S_0 ⊂ S of controlled complexity when considering the criterion d_τ(s) for instance. The generalization capacity of empirical maximizers of the Kendall τ can be straightforwardly established using results in [5]. More details are given in the Supplementary Material.

Before describing a practical algorithm for recursive maximization of the IROC curve, a few remarks are in order.

Remark 2. (ON KENDALL τ AND AUC) We point out that, in the bipartite ranking problem as well (i.e. when the output variable Z takes its values in {−1, +1}, see subsection 2.2), the AUC criterion can be expressed as a function of the Kendall τ related to the pair (s(X), Z) when the r.v. s(X) is continuous. Indeed, we have in this case 2p(1 − p)AUC(s) = d_τ(s), where p = P{Z = +1} and d_τ(s) = P{(s(X) − s(X′)) · (Z − Z′) > 0}, denoting by (X′, Z′) an independent copy of (X, Z).

Remark 3. (CONNECTION TO DISTRIBUTION-FREE REGRESSION) Consider the nonparametric regression model Y = m(X) + ε, where ε is a centered r.v. independent from X. In this case, it is well known that the regression function m(X) = E[Y | X] is the (unique) solution of the expected least squares minimization. However, although m ∈ S∗, the least squares criterion is far from appropriate to evaluate ranking performance, as depicted by Fig. 2. Observe additionally that, in contrast to the criteria introduced above, an increasing transformation of the output variable Y may have a strong impact on the least squares minimizer: except for linear transforms, E[H(Y) | X] is not an increasing transform of m(X).

Remark 4. (ON DISCRETIZATION) Bi/multi-partite algorithms are not directly applicable to the continuous ranking problem. Indeed, a discretization of the interval [0, 1] would first be required, but this would raise a difficult question outside our scope: how should this discretization be chosen based on the training data? We believe that this approach is less efficient than ours, which relies on problem-specific criteria, namely IROC and IAUC.

5 Continuous Ranking through Oriented Recursive Partitioning

It is the purpose of this section to introduce the algorithm CRANK, a specific tree-structured learning algorithm for continuous ranking.

5.1 Ranking trees and Oriented Recursive Partitions

Decision trees undeniably figure among the most popular techniques, in supervised and unsupervised settings, refer to [2] or [13] for instance. This is essentially due to the visual model summary they provide, in the form of a binary tree graphic that permits one to describe predictions by means of a hierarchical combination of elementary rules of the type "X(j) ≤ κ" or "X(j) > κ", comparing the value taken by a (quantitative) component of the input vector X (the split variable) to a certain threshold (the split value).
In contrast to local learning problems such as classification or regression, predictive rules for a global problem such as ranking cannot be described by a (tree-structured) partition of the feature space alone: the cells (corresponding to the terminal leaves of the binary decision tree) must be ordered so as to define a scoring function. This leads to the definition of ranking trees as binary trees equipped with a "left-to-right" orientation, defining a tree-structured collection of scoring functions, as depicted by Fig. 1. Binary ranking trees have been considered in the context of bipartite ranking in [7] or in [3], and in [16] in the context of multipartite ranking. The root node of a ranking tree T_J of depth J ≥ 0 represents the whole feature space X: C_{0,0} = X, while each internal node (j, k) with j < J and k ∈ {0, . . . , 2^j − 1} corresponds to a subset C_{j,k} ⊂ X, whose left and right children respectively correspond to disjoint subsets C_{j+1,2k} and C_{j+1,2k+1} such that C_{j,k} = C_{j+1,2k} ∪ C_{j+1,2k+1}. Equipped with the left-to-right orientation, any subtree T ⊂ T_J defines a preorder on X, elements lying in the same terminal cell of T being equally ranked. The scoring function related to the oriented tree T can be written as:

s_T(x) = Σ_{C_{j,k}: terminal leaf of T} 2^J (1 − k/2^j) · I{x ∈ C_{j,k}}.   (9)

5.2 The CRANK algorithm

Based on Proposition 2, as mentioned in the Supplementary Material, one can try to build from the training dataset D_n a ranking tree by recursive empirical Kendall τ maximization. We propose below an alternative tree-structured recursive algorithm, relying on a (dyadic) discretization of the 'size' variable Y. At each iteration, the local sample (i.e. the data lying in the cell described by the current node) is split into two halves (the highest/smallest halves, depending on Y), and the algorithm calls a binary classification algorithm A to learn how to divide the node into right/left children. The theoretical analysis of this algorithm and its connection with the approximation of IROC∗ are difficult questions that will be addressed in future work. Indeed, we found out that the IROC curve cannot be represented as a parametric curve, contrary to the ROC curve, which renders proofs much more difficult than in the bipartite case.

THE CRANK ALGORITHM
1. Input. Training data D_n, depth J ≥ 1, binary classification algorithm A.
2. Initialization. Set C_{0,0} = X.
3. Iterations. For j = 0, . . . , J − 1 and k = 0, . . . , 2^j − 1,
(a) Compute a median y_{j,k} of the set {Y_i : X_i ∈ C_{j,k}} and assign the binary label Z_i = 2·I{Y_i > y_{j,k}} − 1 to any data point i lying in C_{j,k}, i.e. such that X_i ∈ C_{j,k}.
(b) Solve the binary classification problem related to the input space C_{j,k} and the training set {(X_i, Z_i) : 1 ≤ i ≤ n, X_i ∈ C_{j,k}}, producing a classifier g_{j,k} : C_{j,k} → {−1, +1}.
(c) Set C_{j+1,2k} = {x ∈ C_{j,k} : g_{j,k}(x) = +1} = C_{j,k} \ C_{j+1,2k+1}.
4. Output. Ranking tree T_J = {C_{j,k} : 0 ≤ j ≤ J, 0 ≤ k < 2^j}.

Of course, the depth J should be chosen such that 2^J ≤ n. One may also consider continuing to split the nodes until the number of data points within a cell has reached a minimum specified in advance.
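The following is a compact sketch of the recursive construction described above (ours, not the authors' code). A depth-1 decision tree (stump) from sklearn stands in for the generic classifier A, and the stopping rule based on `min_leaf` is our own simplification; any binary classifier could be plugged in.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_crank(X, Y, J, min_leaf=10):
    # Grow the oriented ranking tree: each node is split by a classifier
    # trained to separate the locally-highest half of Y from the lowest half.
    nodes = {}

    def split(idx, j, k):
        if j == J or len(idx) < 2 * min_leaf:
            return
        z = np.where(Y[idx] > np.median(Y[idx]), 1, -1)          # step (a)
        if len(np.unique(z)) < 2:
            return
        g = DecisionTreeClassifier(max_depth=1).fit(X[idx], z)   # step (b)
        nodes[(j, k)] = g
        pred = g.predict(X[idx])
        split(idx[pred == 1], j + 1, 2 * k)                      # step (c)
        split(idx[pred == -1], j + 1, 2 * k + 1)

    split(np.arange(len(Y)), 0, 0)
    return nodes

def crank_score(nodes, x, J):
    # Route x down the oriented tree and return the score of (9).
    j = k = 0
    while (j, k) in nodes:
        go_left = nodes[(j, k)].predict(x.reshape(1, -1))[0] == 1
        j, k = j + 1, 2 * k + (0 if go_left else 1)
    return 2.0 ** J * (1.0 - k / 2.0 ** j)
```

Usage: with X of shape (n, d), `nodes = fit_crank(X, Y, J=4)` grows the tree, and `crank_score(nodes, x, J=4)` scores a test point; left children (smaller k) receive higher scores, consistent with the left-to-right orientation.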
In addition, it is well known that recursive partitioning methods fragment the data and that the instability of splits increases with the depth. For this reason, a ranking subtree must be selected: the growing procedure above should classically be followed by a pruning stage, where children of the same parent are progressively merged until the root T_0 is reached, and a subtree among the sequence T_0 ⊂ · · · ⊂ T_J with nearly maximal IAUC should be chosen using cross-validation. Issues related to the implementation of the CRANK algorithm and variants (e.g. exploiting randomization/aggregation) will be investigated in a forthcoming paper.

6 Numerical Experiments

In order to illustrate the idea conveyed by Fig. 2, namely that the least squares criterion is not appropriate for the continuous ranking problem, we compared CRANK with CART on a toy example. Recall that the latter is a regression decision tree algorithm which minimizes the MSE (Mean Squared Error). We also ran an alternative version of CRANK which maximizes the empirical Kendall τ instead of the empirical IAUC; this method is referred to as KENDALL from now on. The experimental setting is composed of a unidimensional feature space X = [0, 1] (for visualization reasons) and a simple regression model without any noise: Y = m(X). Intuitively, a least squares strategy can miss slight oscillations of the regression function, which are critical in ranking when they occur in high-probability regions, as they affect the order among the feature space. The results are presented in Table 1. See the Supplementary Material for further details.

7 Conclusion

This paper considers the problem of learning how to order objects by increasing 'size', modeled as a continuous r.v. Y, based on indirect measurements X. We provided a rigorous mathematical formulation of this problem, which finds many applications (e.g. quality control, chemistry) and is referred to as continuous ranking. In particular, necessary and sufficient conditions on (X, Y)'s distribution for the existence of optimal solutions are exhibited, and appropriate criteria have been proposed for evaluating the performance of scoring rules in these situations. In contrast to distribution-free regression, where the goal is to recover the local values taken by the regression function, continuous ranking aims at reproducing the preorder it defines on the feature space as accurately as possible. The numerical results obtained via the algorithmic approaches we proposed for optimizing the aforementioned criteria highlight the difference in nature between these two statistical learning tasks.

Acknowledgments

This work was supported by the industrial chair Machine Learning for Big Data from Télécom ParisTech and by a public grant (Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH).
1. What is the main contribution of the paper in terms of ranking problems?
2. What are the concerns regarding the motivation and explanation of the continuous ranking problem?
3. Do you have any questions about the proof of propositions and theorems in the paper?
4. Are there any suggestions for improving the paper, such as adding specific examples or citing relevant works?
Review
This paper generalizes the bi/multi-partite ranking problem: it uses pairs (x, y), where x is the feature vector and y is a continuous real-valued label rather than a discrete label, to find an optimal scoring function s(x). The existence of optimal scoring rules for continuous ranking is given by Proposition 1. A dedicated functional criterion, called the IROC curve here, or the maximization of the Kendall τ related to the pair (s(x), y), is used as the performance measure. A recursive statistical learning algorithm tailored to empirical IROC curve optimization is presented. An oriented binary tree can be used as a piecewise constant scoring function for continuous ranking.

My main concern about this paper is that the motivation for continuous ranking is not well described. The authors do not point out the disadvantages of bi/multi-partite ranking or why the generalization to continuous ranking is meaningful. Whereas a discrete binary label is used when the measurement is hard to obtain and y only indicates relevance or not, a continuous real-valued label can be used for ranking by itself. The authors summarize the potential applications as quality control and chemistry, but these scenarios are also suitable for bi/multi-partite ranking. The paper needs a good example to display the difference between continuous ranking and bi/multi-partite ranking.

The authors should point out the difficulty of the proofs of Propositions 1 & 2 and Theorem 1. The proofs look like a trivial variant of the corresponding parts of bi/multi-partite ranking.

I think the authors should cite the bi/multi-partite ranking papers and the AUC criterion paper:
[1] Stéphan Clémençon, Marine Depecker, Nicolas Vayatis: Ranking forests. Journal of Machine Learning Research 14(1): 39-73 (2013).
[2] Aditya Krishna Menon, Robert C. Williamson: Bipartite Ranking: a Risk-Theoretic Perspective. Journal of Machine Learning Research 17(195): 1-102 (2016).
[3] Corinna Cortes, Mehryar Mohri: AUC Optimization vs. Error Rate Minimization. NIPS: 313-320 (2003).
NIPS
1. What is the main contribution of the paper regarding the continuous ranking problem?
2. What are the strengths of the proposed framework, measures, and algorithms?
3. What are the weaknesses of the paper, particularly in terms of experimental evaluation and literature review?
4. Do you have any questions regarding the proposed Crank algorithm and its connection with the approximation of IROC*?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
The paper studies the problem of continuous ranking, which is a generalization of the bi-partite/multi-partite ranking problem in that the ranking label Y is a continuous real-valued random variable. A mathematical formulation of the problem is given. The continuous ranking problem is viewed as multiple sub-problems, each corresponding to a bipartite ranking problem. Necessary conditions for the existence of optimal solutions are given. The IROC (and IAUC) measures are proposed for evaluating the performance of scoring functions. Finally, a continuous ranking algorithm called Crank is proposed. Experimental results on toy data showed the effectiveness of the proposed Crank algorithm.

The problem investigated in the paper is very interesting and important for learning to rank and related areas. The paper is well written. The proposed framework, measures, and algorithms are clearly presented. The empirical evaluation of the paper is weak, as it is based on toy data and weak baselines.

Pros.
1. Generalizing the bipartite/multi-partite ranking problems to the continuous ranking problem. Rigorous formulation of the problem and strong theoretical results.
2. Extending the conventional ROC to IROC for measuring continuous ranking functions.
3. Proposing the Crank algorithm for conducting continuous ranking.

Cons.
1. The authors claim that "theoretical analysis of this algorithm and its connection with approximation of IROC* are beyond the scope of this paper and will be the subject of a future work". I am not convinced that "its connections with approximation of IROC* are beyond the scope of this paper". As to my understanding of the paper, the goal of proposing IROC is to guide the proposal of new continuous ranking algorithms. Thus, it is very important to build these connections. Otherwise, the performance of the Crank algorithm cannot reflect the effectiveness of the proposed theoretical framework.
2. The experiments in the paper are really weak. The proposed Crank algorithm is tested on toy data. The authors conclude in the last section "... finds many applications (e.g., quality control, chemistry) ...". It is necessary to test the proposed framework and solutions on real problems. The baselines are CART and KENDALL, which are not designed for the ranking problem. It would be better to compare the algorithm with state-of-the-art bipartite and multi-partite ranking models. The generation of the toy examples and the setting of parameters are not given, which makes it hard to reproduce the results.
3. The literature survey of the paper is not sufficient. Since the continuous ranking problem is a generalization of the bipartite and multi-partite ranking problems, it would be better if the authors could use a section to analyze the advantages and disadvantages of existing ranking models, and their real applications. Currently, only a few references are listed in Section 2.2.
4. Minor issues: Line 52: "Kendall tau" → "Kendall $\tau$"; Line 66: in F^{-1}(u) = inf{s ∈ R : F(s) ≥ t}, the last t should be u.
NIPS
Title A Greedy Approach for Budgeted Maximum Inner Product Search

Abstract Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of low-rank matrix factorization models and deep learning models. Recently, there has been substantial research on how to perform MIPS in sub-linear time, but most of the existing work does not have the flexibility to control the trade-off between search efficiency and search quality. In this paper, we study the important problem of MIPS with a computational budget. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to state-of-the-art approaches. As a specific example, on a candidate set containing half a million vectors of dimension 200, Greedy-MIPS runs 200x faster than the naive approach while yielding search results with the top-5 precision greater than 75%.

*Work done while at the University of Texas at Austin.

1 Introduction

In this paper, we study the computational issue in the prediction phase of many embedding-based models such as matrix factorization and deep learning models in recommender systems, which can be mathematically formulated as a Maximum Inner Product Search (MIPS) problem. Specifically, given a large collection of n candidate vectors H = {h_j ∈ R^k : j = 1, . . . , n} and a query vector w ∈ R^k, MIPS aims to identify a subset of candidates that have the top largest inner product values with w. We also denote H = [h_1, . . . , h_j, . . . , h_n]^⊤ as the candidate matrix. A naive linear search procedure to solve MIPS for a given query w requires O(nk) operations to compute n inner products and O(n log n) operations to obtain the sorted ordering of the n candidates.

Recently, MIPS has drawn a lot of attention in the machine learning community due to its wide applicability, such as the prediction phase of embedding-based recommender systems [6, 7, 10]. In such an embedding-based recommender system, each user i is associated with a vector w_i of dimension k, while each item j is associated with a vector h_j of dimension k. The interaction (such as preference) between a user and an item is modeled by w_i^⊤ h_j. It is clear that identifying top-ranked items in such a system for a user is exactly a MIPS problem. Because both the number of users (the number of queries) and the number of items (the size of the vector pool in MIPS) can easily grow to millions, a naive linear search is extremely expensive; for example, computing the preferences of all m users over n items with latent embeddings of dimension k in a recommender system requires at least O(mnk) operations. When both m and n are large, the prediction procedure is extremely time consuming; it is even slower than the training procedure used to obtain the m + n embeddings, which costs only O(|Ω|k) operations per iteration, where |Ω| is the number of observations and is much smaller than mn. Taking the yahoo-music dataset as an example, m = 1M, n = 0.6M, |Ω| = 250M, and mn = 600B ≫ 250M = |Ω|. As a result, the development of efficient algorithms for MIPS is needed in large-scale recommender systems.
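For later reference, the naive linear-search baseline described above can be written in a few lines (our sketch; names and array sizes are illustrative):

```python
import numpy as np

def naive_mips(H, w, top=5):
    # Exact top-k MIPS by brute force: O(nk) for the n inner products plus an
    # O(n) partial selection. H has shape (n, k); w has shape (k,).
    scores = H @ w
    idx = np.argpartition(-scores, top)[:top]    # unordered top candidates
    return idx[np.argsort(-scores[idx])]         # sort only the top few

rng = np.random.default_rng(0)
H = rng.normal(size=(100_000, 200))
w = rng.normal(size=200)
print(naive_mips(H, w))
```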
In addition, MIPS can be found in many other machine learning applications, such as the prediction for a multi-class or multi-label classifier [16, 17], an object detector, a structured SVM predictor, or as a black-box routine to improve the efficiency of learning and inference algorithms [11]. Also, the prediction phase of neural networks could benefit from a faster MIPS algorithm: the last layer of a NN is often a dense fully-connected layer, so finding the label with the maximum score becomes a MIPS problem with dense vectors [6]. There is a recent line of research on accelerating MIPS for large $n$, such as [2, 3, 9, 12–14]. However, most of them do not have the flexibility to control the trade-off between search efficiency and search quality in the prediction phase.

In this paper, we consider the budgeted MIPS problem, which is a generalized version of the standard MIPS with a computation budget: how to generate a set of top-ranked candidates under a given budget on the number of inner products one can perform. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which handles budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to existing approaches.

Our Contributions:
• We develop Greedy-MIPS, a novel algorithm without any nearest-neighbor-search reduction, which is essential in many state-of-the-art approaches [2, 12, 14].
• We establish a sublinear-time theoretical guarantee for Greedy-MIPS under certain assumptions.
• Greedy-MIPS is orders of magnitude faster than many state-of-the-art MIPS approaches at obtaining a desired search performance. As a specific example, on the yahoo-music data set with $n = 624{,}961$ and $k = 200$, Greedy-MIPS runs 200x faster than the naive approach and yields search results with top-5 precision of more than 75%, while the search performance of other state-of-the-art approaches under a similar speedup drops to less than 3% precision.
• Greedy-MIPS supports MIPS with a budget, which brings the ability to control the trade-off between computational efficiency and search quality in the prediction phase.

2 Existing Approaches for Fast MIPS

Because of its wide applicability, several algorithms have been proposed for efficient MIPS. Most existing approaches reduce the MIPS problem to the nearest neighbor search problem (NNS), where the goal is to identify the nearest candidates of the given query, and apply an existing efficient NNS algorithm to solve the reduced problem. [2] is the first MIPS work which adopts such a MIPS-to-NNS reduction. Variants of the MIPS-to-NNS reduction are also proposed in [14, 15]. Experimental results in [2] show the superiority of the NNS reduction over the traditional branch-and-bound search approaches for MIPS [9, 13]. After the reduction, there are many choices to solve the transformed NNS problem, such as the locality sensitive hashing scheme (LSH-MIPS) considered in [12, 14, 15], PCA-tree based approaches (PCA-MIPS) in [2], or K-Means approaches in [1].

Fast MIPS approaches with sampling schemes have become popular recently. Various sampling schemes have been proposed to handle the MIPS problem with different constraints. The idea of the sampling-based MIPS approach was first proposed in [5] as an approach to perform approximate matrix-matrix multiplications. Its applicability to MIPS problems was studied very recently [3].
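To make the MIPS-to-NNS reduction discussed above concrete, the following is a hedged sketch of one common vector augmentation in the spirit of [2]; the exact transforms in the cited works differ in details such as scaling and normalization. Each candidate $h_j$ is padded with $\sqrt{M^2 - \|h_j\|^2}$, where $M = \max_j \|h_j\|$, and the query is padded with $0$, so that the Euclidean distance on the augmented vectors is a decreasing function of the inner product.

```python
import numpy as np

def augment_for_nns(H, w):
    """One standard MIPS-to-NNS augmentation (sketch): after padding,
    ||h_aug - w_aug||^2 = M^2 + ||w||^2 - 2 h^T w,
    so the Euclidean nearest neighbor of w_aug maximizes h^T w."""
    norms = np.linalg.norm(H, axis=1)
    M = norms.max()
    pad = np.sqrt(np.maximum(M**2 - norms**2, 0.0))  # clip tiny negatives
    H_aug = np.hstack([H, pad[:, None]])
    w_aug = np.append(w, 0.0)
    return H_aug, w_aug
```

Any off-the-shelf NNS index (LSH, KD-tree, PCA-tree) can then be built on `H_aug`, which is exactly the strategy the reduction-based approaches above follow.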
The idea behind the sampling-based approach called Sample-MIPS is to design an efficient sampling procedure such that the $j$-th candidate is selected with probability $p(j) \propto h_j^\top w$. In particular, Sample-MIPS is an efficient scheme to sample $(j, t) \in [n] \times [k]$ with probability $p(j, t) \propto h_{jt} w_t$. Each time a pair $(j, t)$ is sampled, we increase the count for the $j$-th item by one. By the end of the sampling process, the spectrum of the counts forms an estimate of the $n$ inner product values. Due to the nature of the sampling approach, it can only handle the situation where all the candidate vectors and query vectors are nonnegative.

Diamond-MSIPS, a diamond sampling scheme proposed in [3], is an extension of Sample-MIPS to handle the maximum squared inner product search problem (MSIPS), where the goal is to identify candidate vectors with the largest values of $(h_j^\top w)^2$. However, the solutions to MSIPS can be very different from the solutions to MIPS in general. For example, if all the inner product values are negative, the ordering for MSIPS is exactly the reverse of the ordering induced by MIPS. Here we can see that the applicability of both Sample-MIPS and Diamond-MSIPS to MIPS is very limited.

3 Budgeted MIPS

The core idea behind fast approximate MIPS approaches is to trade search quality for shorter query latency: the shorter the search latency, the lower the search quality. In most existing fast MIPS approaches, the trade-off depends on approach-specific parameters such as the depth of the PCA tree in PCA-MIPS or the number of hash functions in LSH-MIPS. Such specific parameters are usually required to construct approach-specific data structures before any query is given, which means that the trade-off is somewhat fixed for all the queries. Thus, the computation cost for a given query is fixed. However, in many real-world scenarios, each query might have a different computational budget, which raises the question: can we design a MIPS approach supporting dynamic adjustment of the trade-off in the query phase?

3.1 Essential Components for Fast MIPS

Before any query request:
• Query-Independent Data Structure Construction: A pre-processing procedure is performed on the entire candidate set to construct an approach-specific data structure $\mathcal{D}$ to store information about $\mathcal{H}$: the LSH hash tables, space partition trees (e.g., KD-tree or PCA-tree), or cluster centroids.

For each query request:
• Query-Dependent Pre-processing: In some approaches, query-dependent pre-processing is needed. For example, a vector augmentation is required in all MIPS-to-NNS approaches. In addition, [2] also requires another normalization. $T_P$ is used to denote the time complexity of this stage.
• Candidate Screening: In this stage, based on the pre-constructed data structure $\mathcal{D}$, an efficient procedure is performed to filter candidates such that only a subset of candidates $\mathcal{C}(w) \subset \mathcal{H}$ is selected. In a naive linear approach, no screening procedure is performed, so $\mathcal{C}(w)$ simply contains all $n$ candidates. For a tree-based structure, $\mathcal{C}(w)$ contains all the candidates stored in the leaf node of the query vector. In a sampling-based MIPS approach, an efficient sampling scheme is designed to generate highly probable candidates to form $\mathcal{C}(w)$. $T_S$ denotes the computational cost of the screening stage.
• Candidate Ranking: An exact ranking is performed on the selected candidates in $\mathcal{C}(w)$ obtained from the screening stage.
This involves the computation of $|\mathcal{C}(w)|$ inner products and a sorting procedure among these $|\mathcal{C}(w)|$ values. The overall time complexity is $T_R = O(|\mathcal{C}(w)|\, k + |\mathcal{C}(w)| \log |\mathcal{C}(w)|)$. The per-query computational cost is
$T_Q = T_P + T_S + T_R$. (1)
It is clear that the candidate screening stage is the key component of a fast MIPS approach. In terms of search quality, the performance highly depends on whether the screening procedure can identify highly probable candidates. Regarding query latency, the efficiency highly depends on the size of $\mathcal{C}(w)$ and how fast $\mathcal{C}(w)$ can be generated. The major difference among various MIPS approaches is the choice of the data structure $\mathcal{D}$ and the screening procedure; an illustrative sketch of the shared ranking stage is given after this paragraph.

3.2 Budgeted MIPS: Problem Definition

Budgeted MIPS is an extension of the standard approximate MIPS problem with a computational budget: how to generate top-ranked candidates under a given budget on the number of inner products one can perform. Note that the cost of the candidate ranking ($T_R$) is inevitable in the per-query cost (1). A viable approach for budgeted MIPS must include a screening procedure which satisfies the following requirements:
• the flexibility to control the size of $\mathcal{C}(w)$ in the candidate screening stage such that $|\mathcal{C}(w)| \le B$, where $B$ is a given budget, and
• an efficient screening procedure to obtain $\mathcal{C}(w)$ in $O(Bk)$ time such that $T_Q = O(Bk + B \log B)$.

As mentioned earlier, most recently proposed MIPS-to-NNS algorithms apply various search-space-partition data structures or techniques (e.g., LSH, KD-tree, or PCA-tree) designed for NNS to index the candidates $\mathcal{H}$ in the query-independent pre-processing stage. As the construction of $\mathcal{D}$ is query independent, both the search performance and the computation cost are somewhat fixed once the construction is done. For example, the performance of PCA-MIPS depends on the depth of the PCA-tree. Given a query vector $w$, there is no control over the size of $\mathcal{C}(w)$ in the candidate generating phase. LSH-based approaches have a similar issue. While there might be some ad-hoc treatments to adjust $\mathcal{C}(w)$, it is not clear how to generalize PCA-MIPS and LSH-MIPS in a principled way to handle the situation with a computational budget: how to reduce the size of $\mathcal{C}(w)$ under a limited budget and how to improve the performance when a larger budget is given.

Unlike other NNS-based algorithms, the design of Sample-MIPS naturally enables it to support budgeted MIPS for a nonnegative candidate matrix $H$ and a nonnegative query $w$. The larger the number of samples, the lower the variance of the estimated frequency spectrum. Clearly, Sample-MIPS has the flexibility to control the size of $\mathcal{C}(w)$, and thus is a viable approach for the budgeted MIPS problem. However, Sample-MIPS works only in the situation with non-negative $H$ and $w$. Diamond-MSIPS has a similar issue.

4 Greedy-MIPS

We carefully study the structure of MIPS and develop a simple but novel algorithm called Greedy-MIPS, which handles budgeted MIPS by design. Unlike the recent MIPS-to-NNS approaches, Greedy-MIPS is an approach without any reduction to an NNS problem. Moreover, Greedy-MIPS is a viable approach for the budgeted MIPS problem without the non-negativity limitation inherent in the sampling approaches. The key component of a fast MIPS approach is the algorithm used in the candidate screening phase.
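The candidate-ranking stage referenced above is the same for every approach and is easy to state in code. The sketch below is illustrative (`cand` stands for whatever index list a screening procedure returns; the names are ours):

```python
import numpy as np

def rank_candidates(H, w, cand, top=5):
    """Exact ranking restricted to the screened set C(w):
    |C(w)| inner products plus a sort over |C(w)| values,
    i.e. T_R = O(|C(w)| k + |C(w)| log |C(w)|)."""
    cand = np.asarray(cand)
    scores = H[cand] @ w
    order = np.argsort(-scores)
    return cand[order[:top]], scores[order[:top]]
```

This makes explicit why the screening stage dominates the design space: once $\mathcal{C}(w)$ is small, the ranking stage is cheap and identical across methods.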
In budgeted MIPS, for any given budget $B$ and query $w$, an ideal procedure for the candidate screening phase costs $O(Bk)$ time to generate $\mathcal{C}(w)$ containing the $B$ items with the largest $B$ inner product values over the $n$ candidates in $\mathcal{H}$. The requirement on the time complexity $O(Bk)$ implies that the procedure is independent of $n = |\mathcal{H}|$, the number of candidates in $\mathcal{H}$. One might wonder whether such an ideal procedure exists or not. In fact, designing such an ideal procedure with the requirement to generate the largest $B$ items in $O(Bk)$ time is even more challenging than the original budgeted MIPS problem.

Definition 1. The rank of an item $x$ among a set of items $\mathcal{X} = \{x_1, \ldots, x_{|\mathcal{X}|}\}$ is defined as
$\mathrm{rank}(x \mid \mathcal{X}) := \sum_{j=1}^{|\mathcal{X}|} \mathbb{I}[x_j \ge x]$, (2)
where $\mathbb{I}[\cdot]$ is the indicator function. A ranking induced by $\mathcal{X}$ is a function $\pi(\cdot) : \mathcal{X} \to \{1, \ldots, |\mathcal{X}|\}$ such that $\pi(x_j) = \mathrm{rank}(x_j \mid \mathcal{X})$ for all $x_j \in \mathcal{X}$.

One way to store a ranking $\pi(\cdot)$ induced by $\mathcal{X}$ is by a sorted index array $s[r]$ of size $|\mathcal{X}|$ such that $\pi(x_{s[1]}) \le \pi(x_{s[2]}) \le \cdots \le \pi(x_{s[|\mathcal{X}|]})$. We can see that $s[r]$ stores the index of the item $x$ with $\pi(x) = r$.

To design an efficient candidate screening procedure, we study the operations required for MIPS: in the simple linear MIPS approach, $nk$ multiplication operations are required to obtain the $n$ inner product values $h_1^\top w, \ldots, h_n^\top w$. We define an implicit matrix $Z \in \mathbb{R}^{n \times k}$ as $Z = H \,\mathrm{diag}(w)$, where $\mathrm{diag}(w) \in \mathbb{R}^{k \times k}$ is a matrix with $w$ as its diagonal. The $(j, t)$ entry of $Z$ denotes the multiplication operation $z_{jt} = h_{jt} w_t$, and $z_j = \mathrm{diag}(w)\, h_j$ denotes the $j$-th row of $Z$. In Figure 1, we use $Z^\top$ to illustrate the implicit matrix. Note that $Z$ is query dependent, i.e., the values of $Z$ depend on the query vector $w$, and the $n$ inner product values can be obtained by taking the column-wise summation of $Z^\top$. In particular, for each $j$ we have $h_j^\top w = \sum_{t=1}^{k} z_{jt}$, $j = 1, \ldots, n$. Thus, the ranking induced by the $n$ inner product values can be characterized by the marginal ranking $\pi(j \mid w)$ defined on the implicit matrix $Z$ as follows:
$\pi(j \mid w) := \mathrm{rank}\!\left(\sum_{t=1}^{k} z_{jt} \,\Big|\, \Big\{\sum_{t=1}^{k} z_{1t}, \cdots, \sum_{t=1}^{k} z_{nt}\Big\}\right) = \mathrm{rank}\!\left(h_j^\top w \,\Big|\, \{h_1^\top w, \ldots, h_n^\top w\}\right)$. (3)

As mentioned earlier, it is hard to design an ideal candidate screening procedure generating $\mathcal{C}(w)$ based on the marginal ranking. Because the main goal of the candidate screening phase is to quickly identify candidates which are highly likely to be top-ranked items, it suffices to have an efficient procedure generating $\mathcal{C}(w)$ by an approximate ranking. Here we propose a greedy heuristic ranking:
$\bar{\pi}(j \mid w) := \mathrm{rank}\!\left(\max_{t=1}^{k} z_{jt} \,\Big|\, \Big\{\max_{t=1}^{k} z_{1t}, \cdots, \max_{t=1}^{k} z_{nt}\Big\}\right)$, (4)
which is obtained by replacing the summation terms in (3) by max operators. The intuition behind this heuristic is that the largest element of $z_j$ multiplied by $k$ is an upper bound of $h_j^\top w$:
$h_j^\top w = \sum_{t=1}^{k} z_{jt} \le k \max\{z_{jt} : t = 1, \ldots, k\}$. (5)
Thus $\bar{\pi}(j \mid w)$, which is induced by such an upper bound of $h_j^\top w$, could be a reasonable approximate ranking for the marginal ranking $\pi(j \mid w)$.

Next we design an efficient procedure which generates $\mathcal{C}(w)$ according to the ranking $\bar{\pi}(j \mid w)$ defined in (4). First, based on the relative orderings of $\{z_{jt}\}$, we consider the joint ranking and the conditional ranking defined as follows:
• Joint ranking: $\pi(j, t \mid w)$ is the exact ranking over the $nk$ entries of $Z$: $\pi(j, t \mid w) := \mathrm{rank}(z_{jt} \mid \{z_{11}, \ldots, z_{nk}\})$.
• Conditional ranking: $\pi_t(j \mid w)$ is the exact ranking over the $n$ entries of the $t$-th row of $Z^\top$: $\pi_t(j \mid w) := \mathrm{rank}(z_{jt} \mid \{z_{1t}, \ldots, z_{nt}\})$.
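The rankings in (3) and (4) can be stated compactly in code. The numpy sketch below materializes $Z$ explicitly for illustration only (the whole point of Section 4 is to avoid doing this at query time), and its $O(n^2)$ rank computation is likewise purely illustrative; the names are ours.

```python
import numpy as np

def marginal_and_greedy_ranks(H, w):
    """Compare the exact marginal ranking pi(j|w) from (3) with the
    greedy heuristic ranking pi_bar(j|w) from (4); rank 1 = largest."""
    Z = H * w                 # implicit matrix Z = H diag(w), z_jt = h_jt * w_t
    sums = Z.sum(axis=1)      # h_j^T w, the quantity ranked in (3)
    maxes = Z.max(axis=1)     # max_t z_jt, the surrogate ranked in (4)

    def rank(v):              # rank(x | X) = #{x_j >= x}, as in (2)
        return (v[None, :] >= v[:, None]).sum(axis=1)

    return rank(sums), rank(maxes)
```

Comparing the two returned rankings on random data gives a quick empirical feel for how faithful the upper bound (5) makes the greedy surrogate.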
See Figure 1 for an illustration of both rankings. Similar to the marginal ranking, both the joint and conditional rankings are query dependent. Observe that, in (4), for each $j$, only a single maximum entry of $Z$, $\max_{t=1}^{k} z_{jt}$, is considered to obtain the ranking $\bar{\pi}(j \mid w)$. To generate $\mathcal{C}(w)$ based on $\bar{\pi}(j \mid w)$, we can iterate over the $(j, t)$ entries of $Z$ in a greedy sequence such that $(j_1, t_1)$ is visited before $(j_2, t_2)$ if $z_{j_1 t_1} > z_{j_2 t_2}$, which is exactly the sequence corresponding to the joint ranking $\pi(j, t \mid w)$. Each time an entry $(j, t)$ is visited, we can include the index $j$ into $\mathcal{C}(w)$ if $j \notin \mathcal{C}(w)$. In Theorem 1, we show that the sequence in which newly observed indices $j$ are included into $\mathcal{C}(w)$ is exactly the sequence induced by the ranking $\bar{\pi}(j \mid w)$ defined in (4).

Theorem 1. For all $j_1$ and $j_2$ such that $\bar{\pi}(j_1 \mid w) < \bar{\pi}(j_2 \mid w)$, $j_1$ will be included into $\mathcal{C}(w)$ before $j_2$ if we iterate over $(j, t)$ pairs following the sequence induced by the joint ranking $\pi(j, t \mid w)$. A proof can be found in Section D.1.

At first glance, generating $(j, t)$ in the sequence according to the joint ranking $\pi(j, t \mid w)$ might require access to all the $nk$ entries of $Z$ and cost $O(nk)$ time. In fact, based on Property 1 of conditional rankings, we can design an efficient variant of the k-way merge algorithm [8] to generate $(j, t)$ pairs in the desired sequence iteratively.

Property 1. Given a fixed candidate matrix $H$, for any possible $w$ with $w_t \ne 0$, the conditional ranking $\pi_t(j \mid w)$ is either $\pi_{t+}(j)$ or $\pi_{t-}(j)$, where $\pi_{t+}(j) = \mathrm{rank}(h_{jt} \mid \{h_{1t}, \ldots, h_{nt}\})$ and $\pi_{t-}(j) = \mathrm{rank}(-h_{jt} \mid \{-h_{1t}, \ldots, -h_{nt}\})$. In particular, $\pi_t(j \mid w) = \pi_{t+}(j)$ if $w_t > 0$, and $\pi_t(j \mid w) = \pi_{t-}(j)$ if $w_t < 0$.

Property 1 enables us to characterize a query-dependent conditional ranking $\pi_t(j \mid w)$ by two query-independent rankings $\pi_{t+}(j)$ and $\pi_{t-}(j)$. Thus, for each $t$, we can construct and store a sorted index array $s_t[r]$, $r = 1, \ldots, n$, such that
$\pi_{t+}(s_t[1]) \le \pi_{t+}(s_t[2]) \le \cdots \le \pi_{t+}(s_t[n])$, (6)
$\pi_{t-}(s_t[1]) \ge \pi_{t-}(s_t[2]) \ge \cdots \ge \pi_{t-}(s_t[n])$. (7)
Thus, in the query-independent data structure construction phase of Greedy-MIPS, we compute and store $k$ query-independent rankings $\pi_{t+}(\cdot)$ by $k$ sorted index arrays of length $n$: $s_t[r]$, $r = 1, \ldots, n$, $t = 1, \ldots, k$. The entire construction costs $O(kn \log n)$ time and $O(kn)$ space.

Next we describe the details of the proposed Greedy-MIPS algorithm for a given query $w$ and a budget $B$. Greedy-MIPS utilizes the idea of the k-way merge algorithm to visit the $(j, t)$ entries of $Z$ according to the joint ranking $\pi(j, t \mid w)$. Designed to merge $k$ sorted sublists into a single sorted list, the k-way merge algorithm uses 1) $k$ pointers, one for each sorted sublist, and 2) a binary tree structure (either a heap or a selection tree) containing the elements pointed to by these $k$ pointers to obtain the next element to be appended into the sorted list [8].

4.1 Query-dependent Pre-processing

We divide the $nk$ entries $(j, t)$ into $k$ groups. The $t$-th group contains $n$ entries: $\{(j, t) : j = 1, \ldots, n\}$. Here we need an iterator playing a similar role to the pointer, which can iterate over indices $j \in \{1, \ldots, n\}$ in the sorted sequence induced by the conditional ranking $\pi_t(\cdot \mid w)$. Utilizing Property 1, the $t$-th pre-computed sorted array $s_t[r]$, $r = 1, \ldots, n$, can be used to construct such an iterator, called CondIter, which supports current() to access the currently pointed index $j$ and getNext() to advance the iterator.

Algorithm 1 CondIter: an iterator over $j \in \{1, \ldots, n\}$ based on the conditional ranking $\pi_t(j \mid w)$. This code assumes that the $k$ sorted index arrays $s_t[r]$, $r = 1, \ldots, n$, $t = 1, \ldots, k$, are available.
class CondIter:
  def constructor(dim_idx, query_val):
    t, w, ptr ← dim_idx, query_val, 1
  def current():
    return $s_t[\mathrm{ptr}]$ if $w > 0$, and $s_t[n - \mathrm{ptr} + 1]$ otherwise
  def hasNext():
    return (ptr < n)
  def getNext():
    ptr ← ptr + 1 and return current()

Algorithm 2 Query-dependent pre-processing procedure in Greedy-MIPS.
• Input: query $w \in \mathbb{R}^k$
• For $t = 1, \ldots, k$:
  - iters[t] ← CondIter($t$, $w_t$)
  - $z \leftarrow h_{jt} w_t$, where $j$ = iters[t].current()
  - Q.push(($z$, $t$))
• Output:
  - iters[t], $t \le k$: iterators for $\pi_t(\cdot \mid w)$
  - Q: a max-heap of $\{(z, t) \mid z = \max_{j=1}^{n} z_{jt},\ \forall t \le k\}$

In Algorithm 1, we describe pseudo code for CondIter, which utilizes the facts (6) and (7) such that both the construction and the index access cost $O(1)$ space and $O(1)$ time. For each $t$, we use iters[t] to denote the CondIter for the $t$-th conditional ranking $\pi_t(j \mid w)$. Regarding the binary tree structure used in Greedy-MIPS, we consider a max-heap $Q$ of $(z, t)$ pairs: $z \in \mathbb{R}$ is the key compared to maintain the heap property of $Q$, and $t \in \{1, \ldots, k\}$ is an integer denoting the index of an entry group. Each $(z, t) \in Q$ denotes the $(j, t)$ entry of $Z$ where $j$ = iters[t].current() and $z = z_{jt} = h_{jt} w_t$. Note that there are at most $k$ elements in the max-heap at any time. Thus, we can implement $Q$ by a binary heap such that 1) Q.top() returns the maximum pair $(z, t)$ in $O(1)$ time; 2) Q.pop() deletes the maximum pair of $Q$ in $O(\log k)$ time; and 3) Q.push(($z$, $t$)) inserts a new pair in $O(\log k)$ time. Note that the entire Greedy-MIPS can also be implemented using a selection tree among the $k$ entries pointed to by the $k$ iterators; see Section B in the supplementary material for more details. In the query-dependent pre-processing phase, we need to construct iters[t], $t = 1, \ldots, k$, one for each conditional ranking $\pi_t(j \mid w)$, and a max-heap $Q$ which is initialized to contain $\{(z, t) \mid z = \max_{j=1}^{n} z_{jt},\ t \le k\}$. A detailed procedure is described in Algorithm 2, which costs $O(k \log k)$ time and $O(k)$ space.

4.2 Candidate Screening

The core idea of Greedy-MIPS is to iteratively traverse the $(j, t)$ entries of $Z$ in a greedy sequence and collect newly observed indices $j$ into $\mathcal{C}(w)$ until $|\mathcal{C}(w)| = B$. In particular, if $r = \pi(j, t \mid w)$, then the $(j, t)$ entry is visited at the $r$-th iterate. Similar to the k-way merge algorithm, we describe a detailed procedure in Algorithm 3, which utilizes the CondIter of Algorithm 1 to perform the screening. Recall both requirements of a viable candidate screening procedure for budgeted MIPS: 1) the flexibility to control the size $|\mathcal{C}(w)| \le B$; and 2) an efficient procedure that runs in $O(Bk)$. First, it is clear that Algorithm 3 has the flexibility to control the size of $\mathcal{C}(w)$ by the exit condition of the outer while-loop. Next, to analyze the overall time complexity of Algorithm 3, we need to know the number of $z_{jt}$ entries the algorithm iterates over before $|\mathcal{C}(w)| = B$. Theorem 2 gives an upper bound on this number of iterations.

Theorem 2. There are at least $B$ distinct indices $j$ in the first $Bk$ entries $(j, t)$ in terms of the joint ranking $\pi(j, t \mid w)$ for any $w$; that is,
$|\{j \mid \pi(j, t \mid w) \le Bk \text{ for some } t\}| \ge B$. (8)
A detailed proof can be found in Section D of the supplementary material.

Note that there are some $O(\log k)$-time operations within both the outer and inner while-loops, such as Q.push(($z$, $t$)) and Q.pop(). As the goal of the screening procedure is to identify the indices $j$ only, we can skip the Q.push(($z_{jt}$, $t$)) for an entry $(j, t)$ whose $j$ has already been included in $\mathcal{C}(w)$.
As a result, we can guarantee that Q.pop() is executed at most $B + k - 1$ times when $|\mathcal{C}(w)| = B$. The extra $k - 1$ times occur in the situation that iters[1].current() = · · · = iters[k].current() at the beginning of the entire screening procedure.

Algorithm 3 An improved candidate screening procedure in Greedy-MIPS. The time complexity is $O(Bk)$.
• Input:
  - $H$, $w$, and the computational budget $B$
  - Q and iters[t]: output of Algorithm 2
  - $\mathcal{C}(w)$: an empty list
  - visited[j] = 0, $\forall j \le n$: a zero-initialized array
• While $|\mathcal{C}(w)| < B$:
  - $(z, t)$ ← Q.pop() · · · $O(\log k)$
  - $j$ ← iters[t].current()
  - If visited[j] = 0:
    * append $j$ into $\mathcal{C}(w)$ and visited[j] ← 1
  - While iters[t].hasNext():
    * $j$ ← iters[t].getNext()
    * if visited[j] = 0:
      — $z \leftarrow h_{jt} w_t$ and Q.push(($z$, $t$)) · · · $O(\log k)$
      — break
• visited[j] ← 0, $\forall j \in \mathcal{C}(w)$ · · · $O(B)$
• Output: $\mathcal{C}(w) = \{j \mid \bar{\pi}(j \mid w) \le B\}$

To check whether an index $j$ is already in the current $\mathcal{C}(w)$ in $O(1)$ time, we use an auxiliary zero-initialized array of length $n$, visited[j], $j = 1, \ldots, n$, to denote whether an index $j$ has been included in $\mathcal{C}(w)$ or not. As $\mathcal{C}(w)$ contains at most $B$ indices, only $B$ elements of this auxiliary array will be modified during the screening procedure. Furthermore, the auxiliary array can be reset to zero in $O(B)$ time at the end of Algorithm 3, so it can be utilized again for a different query vector $w$. Notice that Algorithm 3 still iterates over $Bk$ entries of $Z$, but at most $B + k - 1$ entries will be pushed into or popped from the max-heap $Q$. Thus, the overall time complexity of Algorithm 3 is $O(Bk + (B + k) \log k) = O(Bk)$, which makes Greedy-MIPS a viable budgeted MIPS approach.

4.3 Connection to Sampling Approaches

Sample-MIPS, as mentioned earlier, is essentially a sampling algorithm with a replacement scheme to draw entries of $Z$ such that $(j, t)$ is sampled with probability proportional to $z_{jt}$. Thus, Sample-MIPS can be thought of as a traversal of the $(j, t)$ entries in a stratified random sequence determined by a distribution over the values of $\{z_{jt}\}$, while the core idea of Greedy-MIPS is to iterate over the $(j, t)$ entries of $Z$ in a greedy sequence induced by the ordering of $\{z_{jt}\}$.

Next, we discuss the differences of Greedy-MIPS from Sample-MIPS and Diamond-MSIPS. Sample-MIPS can be applied only to the situation where both $H$ and $w$ are nonnegative, because of the nature of the sampling scheme. In contrast, Greedy-MIPS can work on any MIPS problem, as only the ordering of $\{z_{jt}\}$ matters in Greedy-MIPS. Instead of $h_j^\top w$, Diamond-MSIPS is designed for the MSIPS problem, which is to identify candidates with the largest $(h_j^\top w)^2$ or $|h_j^\top w|$ values. In fact, for nonnegative MIPS problems, diamond sampling is equivalent to Sample-MIPS. Moreover, for MSIPS problems with negative entries, when the number of samples is set to be the budget $B$ (this setting is used in the experiments in [3]), Diamond-MSIPS is equivalent to applying Sample-MIPS to sample $(j, t)$ entries with probability $p(j, t) \propto |z_{jt}|$. Thus, the applicability of the existing sampling-based approaches remains limited for general MIPS problems.

4.4 Theoretical Guarantee

Greedy-MIPS is an algorithm based on the greedy heuristic ranking (4). Similar to the analysis of Quicksort, we study the average complexity of Greedy-MIPS by assuming a distribution over the input dataset. For simplicity, our analysis is performed on a stochastic implicit matrix $Z$ instead of $w$. Each entry in $Z$ is assumed to follow a uniform distribution $\mathrm{uniform}(a, b)$.
We establish Theorem 3 to prove that the number of entries $(j, t)$ iterated over by Greedy-MIPS to include the index of the largest candidate is sublinear in $n = |\mathcal{H}|$ with high probability when $n$ is large enough.

Theorem 3. Assume that all the entries $z_{jt}$ are drawn from a uniform distribution $\mathrm{uniform}(a, b)$. Let $j^*$ be the index of the largest candidate (i.e., $\pi(j^* \mid Z) = 1$). With high probability, we have $\bar{\pi}(j^* \mid Z) \le O(k \log(n)\, n^{1 - \frac{1}{k}})$. A detailed proof can be found in the supplementary material.

Notice that theoretical guarantees for approximate MIPS are challenging even for randomized algorithms. For example, the analysis for Diamond-MSIPS in [3] requires nonnegativity assumptions and only works on MSIPS (maximum squared inner product search) problems instead of MIPS problems.

5 Experimental Results

In this section, we perform extensive empirical comparisons of Greedy-MIPS with other state-of-the-art fast MIPS approaches on both real-world and synthetic datasets. We use netflix and yahoo-music as our real-world recommender system datasets. There are 17,770 and 624,961 items in netflix and yahoo-music, respectively. In particular, we obtain the user embeddings $\{w_i\} \subset \mathbb{R}^k$ and item embeddings $\{h_j\} \subset \mathbb{R}^k$ by standard low-rank matrix factorization [4] with $k \in \{50, 200\}$. We also generate synthetic datasets with various $n \in 2^{\{17, 18, 19, 20\}}$ and $k \in 2^{\{2, 5, 7, 10\}}$. For each synthetic dataset, both the candidate vectors $h_j$ and the query vectors $w$ are drawn from the normal distribution.

5.1 Experimental Settings

To have fair comparisons, all the compared approaches are implemented in C++.
• Greedy-MIPS: our proposed approach in Section 4.
• PCA-MIPS: the approach proposed in [2]. We vary the depth of the PCA tree to control the trade-off.
• LSH-MIPS: the approach proposed in [12, 14]. We use the nearest neighbor transform function proposed in [2, 12] and the random projection scheme as the LSH function, as suggested in [12]. We also implement the standard amplification procedure with an OR-construction of $b$ hyper LSH hash functions, where each hyper LSH function is the result of an AND-construction of $a$ random projections. We vary the values $(a, b)$ to control the trade-off.
• Diamond-MSIPS: the sampling scheme proposed in [3] for maximum squared inner product search. As it shows better performance than LSH-MIPS in [3] in terms of MIPS problems, we also include Diamond-MSIPS in our comparison.
• Naive-MIPS: the baseline approach which applies a linear search to identify the exact top-$K$ candidates.

Evaluation Criteria. For each dataset, the actual top-20 items for each query are regarded as the ground truth. We report the average performance over 2,000 randomly selected query vectors. To evaluate the search quality, we use the precision of the top-$P$ prediction (prec@$P$), obtained by selecting the top $P$ items from $\mathcal{C}(w)$ returned by the candidate screening procedure. Results with $P = 5$ are shown in the paper, while more results with various $P$ are in the supplementary material. To evaluate the search efficiency, we report the relative speedups over the Naive-MIPS approach:
speedup = (prediction time required by Naive-MIPS) / (prediction time by a compared approach).

Remarks on Budgeted MIPS versus Non-Budgeted MIPS. As mentioned in Section 3, PCA-MIPS and LSH-MIPS cannot handle MIPS with a budget. Both the search computation cost and the search quality are fixed when the corresponding data structure is constructed.
As a result, to understand the trade-off between search efficiency and search quality for these two approaches, we can only try various values for their parameters (such as the depth of the PCA tree and the amplification parameters $(a, b)$ for LSH). For each combination of parameters, we need to re-run the entire query-independent pre-processing procedure to construct a new data structure.

Remarks on data structure construction. Note that the time complexity of the construction for Greedy-MIPS is $O(kn \log n)$, which is on par with $O(kn)$ for Diamond-MSIPS, and faster than $O(knab)$ for LSH-MIPS and $O(k^2 n)$ for PCA-MIPS. As an example, the construction for Greedy-MIPS only takes around 10 seconds on yahoo-music with $n = 624{,}961$ and $k = 200$.

5.2 Experimental Results

Results on Real-World Data Sets. Comparison results for netflix and yahoo-music are shown in Figure 2, with columns presenting the results for $k = 50$ and $k = 200$. It is clearly observed that, given a fixed speedup, Greedy-MIPS yields predictions with much higher search quality. In particular, on the yahoo-music data set with $k = 200$, Greedy-MIPS runs 200x faster than Naive-MIPS and yields search results with prec@5 = 70%, while none of PCA-MIPS, LSH-MIPS, and Diamond-MSIPS can achieve prec@5 > 10% while maintaining a similar 200x speedup.

Results on Synthetic Data Sets. We also perform comparisons on synthetic datasets. The comparison with various $n \in 2^{\{17, 18, 19, 20\}}$ is shown in Figure 3, while the comparison with various $k \in 2^{\{2, 5, 7, 10\}}$ is shown in Figure 4. We observe that the performance gap between Greedy-MIPS and the other approaches remains as $n$ increases, while the gap becomes smaller as $k$ increases. However, Greedy-MIPS still outperforms the other approaches significantly.

6 Conclusions and Future Work

In this paper, we develop a novel Greedy-MIPS algorithm, which has the flexibility to handle budgeted MIPS and yields surprisingly superior performance compared to state-of-the-art approaches. The current implementation focuses on MIPS with dense vectors, while in the future we plan to implement our algorithm also for high-dimensional sparse vectors. We also establish a theoretical guarantee for Greedy-MIPS based on the assumption that the data are generated from a random distribution. How to relax the assumption, or how to design a nondeterministic pre-processing step for Greedy-MIPS to satisfy the assumption, are interesting future directions of this work.

Acknowledgements This research was supported by NSF grants CCF-1320746, IIS-1546452 and CCF-1564000. CJH was supported by NSF grant RI-1719097.
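To tie Algorithms 1–3 together, the following is a compact Python reference sketch of the whole Greedy-MIPS pipeline as described in Section 4. It is a reading aid under the paper's notation, not the authors' C++ implementation; the class and variable names are ours, `budget` plays the role of $B$, and dimensions with $w_t = 0$ are skipped as a simplification (their $z$ entries are all zero).

```python
import heapq
import numpy as np

class GreedyMIPS:
    def __init__(self, H):
        """Query-independent construction: one sorted index array per
        dimension t (O(kn log n) time, O(kn) space), cf. (6)-(7)."""
        self.H = np.asarray(H)
        # s[t] lists candidate indices j by descending h_jt. Per Property 1,
        # read it forward when w_t > 0 and backward when w_t < 0.
        self.s = np.argsort(-self.H, axis=0).T        # shape (k, n)

    def screen(self, w, budget):
        """Algorithms 1-3: visit (j, t) entries of Z = H diag(w) in the
        joint-ranking order via a k-way merge, collecting new j's."""
        k, n = self.s.shape
        ptr = np.zeros(k, dtype=int)                  # CondIter state per t

        def cur(t):                                   # CondIter.current()
            return self.s[t, ptr[t]] if w[t] > 0 else self.s[t, n - 1 - ptr[t]]

        # Max-heap of (z, t); heapq is a min-heap, so negate the key.
        Q = [(-self.H[cur(t), t] * w[t], t) for t in range(k) if w[t] != 0]
        heapq.heapify(Q)
        cand, visited = [], np.zeros(n, dtype=bool)
        while Q and len(cand) < budget:
            _, t = heapq.heappop(Q)
            j = cur(t)
            if not visited[j]:
                cand.append(j)
                visited[j] = True
            while ptr[t] < n - 1:                     # advance past seen j's
                ptr[t] += 1
                j = cur(t)
                if not visited[j]:                    # skip pushes for visited j
                    heapq.heappush(Q, (-self.H[j, t] * w[t], t))
                    break
        return cand

    def query(self, w, budget, top=5):
        """Candidate screening followed by exact ranking on C(w)."""
        cand = np.array(self.screen(w, budget))
        scores = self.H[cand] @ w
        order = np.argsort(-scores)
        return cand[order[:top]]
```

The heap never holds more than one entry per dimension $t$, mirroring the $O(Bk + (B + k)\log k)$ screening cost of Algorithm 3, and `query` performs the exact ranking stage of Section 3.1 on the screened set.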
1. What is the main contribution of the paper regarding the Maximum Inner Product Search problem?
2. How does the proposed approach differ from existing methods in terms of search efficiency and quality of retrieved vectors?
3. What are the strengths of the paper in terms of originality, explanation, and technical soundness?
4. Are there any concerns or suggestions regarding the presentation, figures, and notations used in the paper?
5. Is there any interest in accessing the source code and data to reproduce the experiments?
Review
Review The aim of the paper is to propose a new greedy approach for the Maximum Inner Product Search problem: given a query vector, retrieve a set of candidate vectors with maximum inner product to the query. This is a crucial step in several machine learning and data mining algorithms, and recent state-of-the-art methods work in sub-linear time. The originality of the paper is to study the MIPS problem under a computational budget. The proposed approach achieves a better balance between search efficiency and quality of the retrieved vectors, and does not require a nearest neighbor search phase, as is commonly done by state-of-the-art approaches. The authors claim impressive runtime results (their algorithm is 200x faster than the naive approach), with a top-5 precision greater than 75%. The paper is very dense (the space between two lines seems smaller than the one in the template). However, the paper is well written and the procedure is well explained. The proposed method also seems quite original, and comes with theoretical guarantees. The technical results seem sound. Some remarks: Figure 1 should be placed at the top of p. 5; it is a bit difficult to follow without the later explanations. The bound used on p. 4 should be studied further, in order to find, for instance, some properties (or better, an approximation); this bound is a key point of the procedure, and it is used from the beginning. P. 5, "visit (j,t) entries of Z": (j,t) is a cell of the matrix Z, yet this notation is treated as a number; maybe "j × t" entries? The reviewer would be interested in having access to the source code of the algorithm and the data, so that the experiments can be reproduced.
NIPS
Title A Greedy Approach for Budgeted Maximum Inner Product Search Abstract Maximum Inner Product Search (MIPS) is an important task in many machine learning applications such as the prediction phase of low-rank matrix factorization models and deep learning models. Recently, there has been substantial research on how to perform MIPS in sub-linear time, but most of the existing work does not have the flexibility to control the trade-off between search efficiency and search quality. In this paper, we study the important problem of MIPS with a computational budget. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to state-of-the-art approaches. As a specific example, on a candidate set containing half a million vectors of dimension 200, Greedy-MIPS runs 200x faster than the naive approach while yielding search results with the top-5 precision greater than 75%. 1 Introduction In this paper, we study the computational issue in the prediction phase for many embedding based models such as matrix factorization and deep learning models in recommender systems, which can be mathematically formulated as a Maximum Inner Product Search (MIPS) problem. Specifically, given a large collection of n candidate vectors: H = h j 2 Rk : 1, . . . , n and a query vector w 2 Rk, MIPS aims to identify a subset of candidates that have top largest inner product values with w. We also denote H = [h 1 , . . . ,h j , . . . ,h n ] > as the candidate matrix. A naive linear search procedure to solve MIPS for a given query w requires O(nk) operations to compute n inner products and O(n log n) operations to obtain the sorted ordering of the n candidates. Recently, MIPS has drawn a lot of attention in the machine learning community due to its wide applicability, such as the prediction phase of embedding based recommender systems [6, 7, 10]. In such an embedding based recommender system, each user i is associated with a vector w i of dimension k, while each item j is associated with a vector h j of dimension k. The interaction (such as preference) between a user and an item is modeled by wT i h j . It is clear that identifying top-ranked items in such a system for a user is exactly a MIPS problem. Because both the number of users (the number of queries) and the number of items (size of vector pool in MIPS) can easily grow to millions, a naive linear search is extremely expensive; for example, to compute the preference for all m users over n items with latent embeddings of dimension k in a recommender system requires at least O(mnk) operations. When both m and n are large, the prediction procedure is extremely time consuming; it is even slower than the training procedure used to obtain the m+n embeddings, which ⇤Work done while at the University of Texas at Austin. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. costs only O(|⌦|k) operations per iteration, where |⌦| is number of observations and is much smaller than mn. Taking the yahoo-music dataset as an example, m = 1M , n = 0.6M , |⌦| = 250M , and mn = 600B 250M = |⌦|. As a result, the development of efficient algorithms for MIPS is needed in large-scale recommender systems. 
In addition, MIPS can be found in many other machine learning applications, such as the prediction for a multi-class or multi-label classifier [16, 17], an object detector, a structure SVM predicator, or as a black-box routine to improve the efficiency of learning and inference algorithm [11]. Also, the prediction phase of neural network could also benefit from a faster MIPS algorithm: the last layer of NN is often a dense fully-connected layer, so finding the label with maximum score becomes a MIPS problem with dense vectors [6]. There is a recent line of research on accelerating MIPS for large n, such as [2, 3, 9, 12–14]. However, most of them do not have the flexibility to control the trade-off between search efficiency and search quality in the prediction phase. In this paper, we consider the budgeted MIPS problem, which is a generalized version of the standard MIPS with a computation budget: how to generate a set of top-ranked candidates under a given budget on the number of inner products one can perform. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS algorithm, which handles budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields surprisingly superior performance compared to existing approaches. Our Contributions: • We develop Greedy-MIPS, which is a novel algorithm without any nearest neighbor search reduction that is essential in many state-of-the-art approaches [2, 12, 14]. • We establish a sublinear time theoretical guarantee for Greedy-MIPS under certain assumptions. • Greedy-MIPS is orders of magnitudes faster than many state-of-the-art MIPS approaches to obtain a desired search performance. As a specific example, on the yahoo-music data sets with n = 624, 961 and k = 200, Greedy-MIPS runs 200x faster than the naive approach and yields search results with the top-5 precision more than 75%, while the search performance of other state-of-the-art approaches under the similar speedup drops to less than 3% precision. • Greedy-MIPS supports MIPS with a budget, which brings the ability to control of the trade-off between computation efficiency and search quality in the prediction phase. 2 Existing Approaches for Fast MIPS Because of its wide applicability, several algorithms have been proposed for efficient MIPS. Most of existing approaches consider to reduce the MIPS problem to the nearest neighbor search problem (NNS), where the goal is to identify the nearest candidates of the given query, and apply an existing efficient NNS algorithm to solve the reduced problem. [2] is the first MIPS work which adopts such a MIPS-to-NNS reduction. Variants MIPS-to-NNS reduction are also proposed in [14, 15]. Experimental results in [2] show the superiority of the NNS reduction over the traditional branchand-bound search approaches for MIPS [9, 13]. After the reduction, there are many choices to solve the transformed NNS problem, such as locality sensitive hashing scheme (LSH-MIPS) considered in [12, 14, 15], PCA-tree based approaches (PCA-MIPS) in [2], or K-Means approaches in [1]. Fast MIPS approaches with sampling schemes have become popular recently. Various sampling schemes have been proposed to handle MIPS problem with different constraints. The idea of the sampling-based MIPS approach is first proposed in [5] as an approach to perform approximate matrix-matrix multiplications. Its applicability on MIPS problems is studied very recently [3]. 
The idea behind a sampling-based approach called Sample-MIPS, is about to design an efficient sampling procedure such that the j-th candidate is selected with probability p(j): p(j) ⇠ h> j w. In particular, Sample-MIPS is an efficient scheme to sample (j, t) 2 [n] ⇥ [k] with the probability p(j, t): p(j, t) ⇠ h jt w t . Each time a pair (j, t) is sampled, we increase the count for the j-th item by one. By the end of the sampling process, the spectrum of the counts forms an estimation of n inner product values. Due to the nature of the sampling approach, it can only handle the situation where all the candidate vectors and query vectors are nonnegative. Diamond-MSIPS, a diamond sampling scheme proposed in [3], is an extension of Sample-MIPS to handle the maximum squared inner product search problem (MSIPS) where the goal is to identify candidate vectors with largest values of (h> j w)2. However, the solutions to MSIPS can be very different from the solutions to MIPS in general. For example, if all the inner product values are negative, the ordering for MSIPS is the exactly reverse ordering induced by MIPS. Here we can see that the applicability of both Sample-MIPS and Diamond-MSIPS to MIPS is very limited. 3 Budgeted MIPS The core idea behind the fast approximate MIPS approaches is to trade the search quality for the shorter query latency: the shorter the search latency, the lower the search quality. In most existing fast MIPS approaches, the trade-off depends on the approach-specific parameters such as the depth of the PCA tree in PCA-MIPS or the number of hash functions in LSH-MIPS. Such specific parameters are usually required to construct approach-specific data structures before any query is given, which means that the trade-off is somewhat fixed for all the queries. Thus, the computation cost for a given query is fixed. However, in many real-world scenarios, each query might have a different computational budget, which raises the question: Can we design a MIPS approach supporting the dynamic adjustment of the trade-off in the query phase? 3.1 Essential Components for Fast MIPS Before any query request: • Query-Independent Data Structure Construction: A pre-processing procedure is performed on the entire candidate sets to construct an approach-specific data structure D to store information about H: the LSH hash tables, space partition trees (e.g., KD-tree or PCA-tree), or cluster centroids. For each query request: • Query-dependent Pre-processing: In some approaches, a query dependent pre-processing is needed. For example, a vector augmentation is required in all MIPS-to-NNS approaches. In addition, [2] also requires another normalization. T P is used to denote the time complexity of this stage. • Candidate Screening: In this stage, based on the pre-constructed data structure D, an efficient procedure is performed to filter candidates such that only a subset of candidates C(w) ⇢ H is selected. In a naive linear approach, no screening procedure is performed, so C(w) simply contains all the n candidates. For a tree-based structure, C(w) contains all the candidates stored in the leaf node of the query vector. In a sampling-based MIPS approach, an efficient sampling scheme is designed to generate highly possible candidates to form C(w). T S denotes the computational cost of the screening stage. • Candidate Ranking: An exact ranking is performed on the selected candidates in C(w) obtained from the screening stage. 
This involves the computation of |C(w)| inner products and the sorting procedure among these |C(w)| values. The overall time complexity T R = O(|C(w)|k + |C(w)| log|C(w)|). The per-query computational cost: T Q = T P + T S + T R . (1) It is clear that the candidate screening stage is the key component for a fast MIPS approach. In terms of the search quality, the performance highly depends on whether the screening procedure can identify highly possible candidates. Regarding the query latency, the efficiency highly depends on the size of C(w) and how fast to generate C(w). The major difference among various MIPS approaches is the choice of the data structure D and the screening procedure. 3.2 Budgeted MIPS: Problem Definition Budgeted MIPS is an extension of the standard approximate MIPS problem with a computational budget: how to generate top-ranked candidates under a given budget on the number of inner products one can perform. Note that the cost for the candidate ranking (T R ) is inevitable in the per-query cost (1). A viable approach for budgeted MIPS must include a screening procedure which satisfies the following requirements: • the flexibility to control the size of C(w) in the candidate screening stage such that |C(w)| B, where B is a given budget, and • an efficient screening procedure to obtain C(w) in O(Bk) time such thatT Q = O(Bk+B logB). As mentioned earlier, most recently proposed MIPS-to-NNS approaches algorithms apply various search space partition data structures or techniques (e.g., LSH, KD-tree, or PCA-tree) designed for NNS to index the candidates H in the query-independent pre-processing stage. As the construction of D is query independent, both the search performance and the computation cost are somewhat fixed when the construction is done. For example, the performance of a PCA-MIPS depends on the depth of the PCA-tree. Given a query vector w, there is no control to the size of C(w) in the candidate generating phase. LSH-based approaches also have the similar issue. There might be some ad-hoc treatments to adjust C(w), it is not clear how to generalize PCA-MIPS and LSH-MIPS in a principled way to handle the situation with a computational budget: how to reduce the size of C(w) under a limited budget and how to improve the performance when a larger budget is given. Unlike other NNS-based algorithms, the design of Sample-MIPS naturally enables it to support budgeted MIPS for a nonnegative candidate matrix H and a nonnegative query w. The more the number of samples, the lower the variance of the estimated frequency spectrum. Clearly, SampleMIPS has the flexibility to control the size of C(w), and thus is a viable approach for the budgeted MIPS problem. However, Sample-MIPS works only on the situation with non-negative H and w. Diamond-MSIPS has the similar issue. 4 Greedy-MIPS We carefully study the structure of MIPS and develop a simple but novel algorithm called GreedyMIPS, which handles budgeted MIPS by design. Unlike the recent MIPS-to-NNS approaches, Greedy-MIPS is an approach without any reduction to a NNS problem. Moreover, Greedy-MIPS is a viable approach for the budgeted MIPS problem without the non-negativity limitation inherited in the sampling approaches. The key component for a fast MIPS approach is the algorithm used in the candidate screening phase. 
In budgeted MIPS, for any given budget B and query w, an ideal procedure for the candidate screening phase costs O(Bk) time to generate C(w) which contains the B items with the largest B inner product values over the n candidates in H. The requirement on the time complexity O(Bk) implies that the procedure is independent from n = |H|, the number of candidates in H. One might wonder whether such an ideal procedure exists or not. In fact, designing such an ideal procedure with the requirement to generate the largest B items in O(Bk) time is even more challenging than the original budgeted MIPS problem. Definition 1. The rank of an item x among a set of items X = x 1 , . . . , x|X | is defined as rank(x | X ) := X|X | j=1 I[x j x], (2) where I[·] is the indicator function. A ranking induced by X is a function ⇡(·) : X ! {1, . . . , |X |} such that ⇡(x j ) = rank(x j | X ) 8x j 2 X . One way to store a ranking ⇡(·) induced by X is by a sorted index array s[r] of size |X | such that ⇡(xs[1]) ⇡(xs[2]) · · · ⇡(xs[|X |]). We can see that s[r] stores the index to the item x with ⇡(x) = r. To design an efficient candidate screening procedure, we study the operations required for MIPS: In the simple linear MIPS approach, nk multiplication operations are required to obtain n inner product values h> 1 w, . . . ,h> n w . We define an implicit matrix Z 2 Rn⇥k as Z = H diag(w), where diag(w) 2 Rk⇥k is a matrix with w as it diagonal. The (j, t) entry of Z denotes the multiplication operation z jt = h jt w t and z j = diag(w)h j denotes the j-th row of Z. In Figure 1, we use Z> to demonstrate the implicit matrix. Note that Z is query dependant, i.e., the values of Z depend on the query vector w, and n inner product values can be obtained by taking the column-wise summation of Z>. In particular, for each j we have h> j w = P k t=1 z jt , j = 1, . . . , n. Thus, the ranking induced by the n inner product values can be characterized by the marginal ranking ⇡(j|w) defined on the implicit matrix Z as follows: ⇡(j|w) := rank k X t=1 z jt ( k X t=1 z 1t , · · · , k X t=1 z nt )! = rank h> j w | h> 1 w, . . . ,h> n w . (3) As mentioned earlier, it is hard to design an ideal candidate screening procedure generating C(w) based on the marginal ranking. Because the main goal for the candidate screening phase is to quickly identify candidates which are highly possible to be top-ranked items, it suffices to have an efficient procedure generating C(w) by an approximation ranking. Here we propose a greedy heuristic ranking: ⇡̄(j|w) := rank max k t=1 z jt max k t=1 z 1t , · · · ,maxk t=1 z nt , (4) which is obtained by replacing the summation terms in (3) by max operators. The intuition behind this heuristic is that the largest element of z j multiplied by k is an upper bound of h> j w: h> j w = k X t=1 z jt kmax{z jt : t = 1, . . . , k}. (5) Thus, ⇡̄(j|w), which is induced by such an upper bound of h> j w, could be a reasonable approximation ranking for the marginal ranking ⇡(j|w). Next we design an efficient procedure which generates C(w) according to the ranking ⇡̄(j|w) defined in (4). First, based on the relative orderings of {z jt }, we consider the joint ranking and the conditional ranking defined as follows: • Joint ranking: ⇡(j, t|w) is the exact ranking over the nk entries of Z. ⇡(j, t|w) := rank(z jt | {z 11 , . . . , z nk }). • Conditional ranking: ⇡ t (j|w) is the exact ranking over the n entires of the t-th row of Z>. ⇡ t (j|w) := rank(z jt | {z 1t , . . . , z nt }). 
See Figure 1 for an illustration for both rankings. Similar to the marginal ranking, both joint and conditional rankings are query dependent. Observe that, in (4), for each j, only a single maximum entry of Z, maxk t=1 z jt , is considered to obtain the ranking ⇡̄(j|w). To generate C(w) based on ⇡̄(j|w), we can iterate (j, t) entries of Z in a greedy sequence such that (j 1 , t 1 ) is visited before (j 2 , t 2 ) if z j1t1 > zj2t2 , which is exactly the sequence corresponding to the joint ranking ⇡(j, t|w). Each time an entry (j, t) is visited, we can include the index j into C(w) if j /2 C(w). In Theorem 1, we show that the sequence to include a newly observed j into C(w) is exactly the sequence induced by the ranking ⇡̄(j|w) defined in (4). Theorem 1. For all j 1 and j 2 such that ⇡̄(j 1 |w) < ⇡̄(j 2 |w), j 1 will be included into C(w) before j 2 if we iterate (j, t) pairs following the sequence induced by the joint ranking ⇡(j, t|w). A proof can be found in Section D.1. At first glance, generating (j, t) in the sequence according to the joint ranking ⇡(j, t|w) might require the access to all the nk entries of Z and cost O(nk) time. In fact, based on Property 1 of conditional rankings, we can design an efficient variant of the k-way merge algorithm [8] to generate (j, t) pairs in the desired sequence iteratively. Property 1. Given a fixed candidate matrix H , for any possible w with w t 6= 0, the conditional ranking ⇡ t (j|w) is either ⇡ t+ (j) or ⇡ t (j), where ⇡t+(j) = rank(hjt | {h1t, . . . , hnt}), and ⇡ t (j) = rank( hjt | { h1t, . . . , hnt}). In particular, ⇡t(j|w) = ⇢ ⇡ t+ (j) if w t > 0, ⇡ t (j) if wt < 0. Property 1 enables us to characterize a query dependent conditional ranking ⇡ t (j|w) by two query independent rankings ⇡ t+ (j) and ⇡ t (j). Thus, for each t, we can construct and store a sorted index array st[r], r = 1, . . . , n such that ⇡ t+ (st[1]) ⇡t+(st[2]) · · · ⇡t+(st[n]), (6) ⇡ t (st[1]) ⇡t (st[2]) · · · ⇡t (st[n]). (7) Thus, in the phase of query-independent data structure construction of Greedy-MIPS, we compute and store k query-independent rankings ⇡ t+ (·) by k sorted index arrays of length n: st[r], r = 1, . . . , n, t = 1, . . . , k. The entire construction costs O(kn log n) time and O(kn) space. Next we describe the details of the proposed Greedy-MIPS algorithm for a given query w and a budget B. Greedy-MIPS utilizes the idea of the k-way merge algorithm to visit (j, t) entries of Z according to the joint ranking ⇡(j, t|w). Designed to merge k sorted sublists into a single sorted list, the k-way merge algorithm uses 1) k pointers, one for each sorted sublist, and 2) a binary tree structure (either a heap or a selection tree) containing the elements pointed by these k pointers to obtain the next element to be appended into the sorted list [8]. 4.1 Query-dependent Pre-processing We divide nk entries of (j, t) into k groups. The t-th group contains n entries: {(j, t) : j = 1, . . . , n}. Here we need an iterator playing a similar role as the pointer which can iterate index j 2 {1, . . . , n} in the sorted sequence induced by the conditional ranking ⇡ t (·|w). Utilizing Property 1, the t-th pre-computed sorted arrays st[r], r = 1, . . . , n can be used to construct such an iterator, called CondIter, which supports current() to access the currently pointed index j and getNext() to Algorithm 1 CondIter: an iterator over j 2 {1, . . . , n} based on the conditional ranking ⇡ t (j|w). This code assumes that the k sorted index arrays st[r], r=1, . . . , n, t=1, . . . 
, k are available. class CondIter: def constructor(dim_idx, query_val): t, w, ptr dim_idx, query_val, 1 def current(): return ⇢ st[ptr] if w > 0, st[n ptr+ 1] otherwise. def hasNext(): return (ptr < n) def getNext(): ptr ptr+ 1 and return current() Algorithm 2 Query-dependent preprocessing procedure in Greedy-MIPS. • Input: query w 2 Rk • For t = 1, . . . , k - iters[t] CondIter(t, w t ) - z h jt w t , where j = iters[t].current() - Q.push((z, t)) • Output: - iters[t], t k: iterators for ⇡ t (·|w). - Q: a max-heap of ⇢ (z, t) | z = nmax j=1 z jt , 8t k . advance the iterator. In Algorithm 1, we describe a pseudo code for CondIter, which utilizes the facts (6) and (7) such that both the construction and the index access cost O(1) space and O(1) time. For each t, we use iters[t] to denote the CondIter for the t-th conditional ranking ⇡ t (j|w). Regarding the binary tree structure used in Greedy-MIPS, we consider a max-heap Q of (z, t) pairs. z 2 R is the compared key used to maintain the heap property of Q, and t 2 {1, . . . , k} is an integer to denote the index to a entry group. Each (z, t) 2 Q denotes the (j, t) entry of Z where j = iters[t].current() and z = z jt = h jt w t . Note that there are most k elements in the max-heap at any time. Thus, we can implement Q by a binary heap such that 1) Q.top() returns the maximum pair (z, t) in O(1) time; 2) Q.pop() deletes the maximum pair of Q in O(log k) time; and 3) Q.push((z, t)) inserts a new pair in O(log k) time. Note that the entire Greedy-MIPS can also be implemented using a selection tree among the k entries pointed by the k iterators. See Section B in the supplementary material for more details. In the query-dependent pre-processing phase, we need to construct iters[t], t = 1, . . . , k, one for each conditional ranking ⇡ t (j|w), and a max-heap Q which is initialized to contain (z, t) | z = maxn j=1 z jt , t k . A detailed procedure is described in Algorithm 2 which costs O(k log k) time and O(k) space. 4.2 Candidate Screening The core idea of Greedy-MIPS is to iteratively traverse (j, t) entries of Z in a greedy sequence and collect newly observed indices j into C(w) until |C(w)| = B. In particular, if r = ⇡(j, t|w), then (j, t) entry is visited at the r-th iterate. Similar to the k-way merge algorithm, we describe a detailed procedure in Algorithm 3, which utilizes the CondIter in Algorithm 1 to perform the screening. Recall both requirements of a viable candidate screening procedure for budgeted MIPS: 1) the flexibility to control the size |C(w)| B; and 2) an efficient procedure runs in O(Bk). First, it is clear that Algorithm 3 has the flexibility to control the size of C(w) by the exiting condition of the outer while-loop. Next, to analyze the overall time complexity of Algorithm 3, we need to know the number of the z jt entries the algorithm iterates before C(w) = B. Theorem 2 gives an upper bound on this number of iterations. Theorem 2. There are at least B distinct indices j in the first Bk entries (j, t) in terms of the joint ranking ⇡(j, t|w) for any w; that is, |{j | 8(j, t) such that ⇡(j, t|w) Bk}| B. (8) A detailed proof can be found in Section D of the supplementary material. Note that there are some O(log k) time operations within both the outer and inner while loops such as Q.push((z, t)) and Q.pop()). As the goal of the screening procedure is to identify j indices only, we can skip the Q.push zjt, t for an entry (j, t) with the j having been included in C(w). 
As a results, we can guarantee that Q.pop() is executed at most B+ k 1 times when |C(w)| = B. The extra k 1 times occurs in the situation that iters[1].current() = · · · = iters[k].current() at the beginning of the entire screening procedure. Algorithm 3 An improved candidate screening procedure in Greedy-MIPS. The time complexity is O(Bk). • Input: - H, w, and the computational budget B - Q and iters[t]: output of Algorithm 2 - C(w): an empty list - visited[j] = 0, 8j n: a zero-initialized array. • While |C(w)| < B: - (z, t) Q.pop() · · ·O(log k) - j iters[t].current() - If visited[j] = 0: * append j into C(w) and visited[j] 1 - While iters[t].hasNext(): * j iters[t].getNext() * if visited[j] = 0: — z h jt w t and Q.push((z, t)) · · ·O(log k) — break • visited[j] 0, 8j 2 C(w) · · ·O(B) • Output: C(w) = {j | ⇡̄(j|w) B} To check weather a index j in the current C(w) in O(1) time, we use an auxiliary zero-initialized array of length n: visited[j], j = 1, . . . , n to denote whether an index j has been included in C(w) or not. As C(w) contains at most B indices, only B elements of this auxiliary array will be modified during the screening procedure. Furthermore, the auxiliary array can be reset to zero using O(B) time in the end of Algorithm 3, so this auxiliary array can be utilized again for a different query vector w. Notice that Algorithm 3 still iterates Bk entries of Z but at most B + k 1 entries will be pushed into or pop from the max-heap Q. Thus, the overall time complexity of Algorithm 3 is O(Bk + (B + k) log k) = O(Bk), which makes Greedy-MIPS a viable budgeted MIPS approach. 4.3 Connection to Sampling Approaches Sample-MIPS, as mentioned earlier, is essentially a sampling algorithm with replacement scheme to draw entries of Z such that (j, t) is sampled with the probability proportional to z jt . Thus, SampleMIPS can be thought as a traversal of (j, t) entries using in a stratified random sequence determined by a distribution of the values of {z jt }, while the core idea of Greedy-MIPS is to iterate (j, t) entries of Z in a greedy sequence induced by the ordering of {z jt }. Next, we discuss the differences of Greedy-MIPS from Sample-MIPS and Diamond-MSIPS. Sample-MIPS can be applied to the situation where both H and w are nonnegative because of the nature of sampling scheme. In contrast, Greedy-MIPS can work on any MIPS problems as only the ordering of {z jt } matters in Greedy-MIPS. Instead of h> j w, Diamond-MSIPS is designed for the MSIPS problem which is to identify candidates with largest (h> j w)2 or |h> j w| values. In fact, for nonnegative MIPS problems, the diamond sampling is equivalent to Sample-MIPS. Moreover, for MSIPS problems with negative entries, when the number of samples is set to be the budget B,2 the Diamond-MSIPS is equivalent to apply Sample-MIPS to sample (j, t) entries with the probability p(j, t) / |z jt |. Thus, the applicability of the existing sampling-based approaches remains limited for general MIPS problems. 4.4 Theoretical Guarantee Greedy-MIPS is an algorithm based on a greedy heuristic ranking (4). Similar to the analysis of Quicksort, we study the average complexity of Greedy-MIPS by assuming a distribution of the input dataset. For simplicity, our analysis is performed on a stochastic implicit matrix Z instead of w. Each entry in Z is assumed to follow a uniform distribution uniform(a, b). 
4.3 Connection to Sampling Approaches

Sample-MIPS, as mentioned earlier, is essentially a sampling-with-replacement scheme that draws entries of $Z$ such that $(j, t)$ is sampled with probability proportional to $z_{jt}$. Thus, Sample-MIPS can be thought of as a traversal of the $(j, t)$ entries in a stratified random sequence determined by the distribution of the values $\{z_{jt}\}$, while the core idea of Greedy-MIPS is to iterate over the $(j, t)$ entries of $Z$ in a greedy sequence induced by the ordering of $\{z_{jt}\}$. Next, we discuss how Greedy-MIPS differs from Sample-MIPS and Diamond-MSIPS. Sample-MIPS can only be applied when both $H$ and $w$ are nonnegative because of the nature of the sampling scheme. In contrast, Greedy-MIPS works on any MIPS problem, as only the ordering of $\{z_{jt}\}$ matters. Instead of $h_j^\top w$, Diamond-MSIPS is designed for the MSIPS problem, which is to identify candidates with the largest $(h_j^\top w)^2$ or $|h_j^\top w|$ values. In fact, for nonnegative MIPS problems, diamond sampling is equivalent to Sample-MIPS. Moreover, for MSIPS problems with negative entries, when the number of samples is set to the budget $B$ (the setting used in the experiments in [3]), Diamond-MSIPS is equivalent to applying Sample-MIPS to sample $(j, t)$ entries with probability $p(j, t) \propto |z_{jt}|$. Thus, the applicability of the existing sampling-based approaches remains limited for general MIPS problems.

4.4 Theoretical Guarantee

Greedy-MIPS is an algorithm based on the greedy heuristic ranking (4). Similar to the analysis of Quicksort, we study the average complexity of Greedy-MIPS by assuming a distribution over the input dataset. For simplicity, our analysis is performed on a stochastic implicit matrix $Z$ instead of $w$. Each entry of $Z$ is assumed to follow a uniform distribution $\mathrm{uniform}(a, b)$. We establish Theorem 3 to prove that the number of entries $(j, t)$ iterated over by Greedy-MIPS to include the index of the largest candidate is sublinear in $n = |H|$ with high probability when $n$ is large enough.

Theorem 3. Assume that all the entries $z_{jt}$ are drawn from a uniform distribution $\mathrm{uniform}(a, b)$. Let $j^*$ be the index of the largest candidate (i.e., $\pi(j^* \mid Z) = 1$). With high probability, we have $\bar{\pi}(j^* \mid Z) \in O\!\left(k \log(n)\, n^{1/k}\right)$.

A detailed proof can be found in the supplementary material. Notice that theoretical guarantees for approximate MIPS are challenging even for randomized algorithms. For example, the analysis for Diamond-MSIPS in [3] requires nonnegativity assumptions and only works on MSIPS (maximum-squared-inner-product search) problems instead of MIPS problems.

5 Experimental Results

In this section, we perform extensive empirical comparisons of Greedy-MIPS with other state-of-the-art fast MIPS approaches on both real-world and synthetic datasets. We use netflix and yahoo-music as our real-world recommender system datasets, with 17,770 and 624,961 items, respectively. In particular, we obtain the user embeddings $\{w_i\} \subset \mathbb{R}^k$ and item embeddings $h_j \in \mathbb{R}^k$ by standard low-rank matrix factorization [4] with $k \in \{50, 200\}$. We also generate synthetic datasets with various $n \in \{2^{17}, 2^{18}, 2^{19}, 2^{20}\}$ and $k \in \{2^2, 2^5, 2^7, 2^{10}\}$. For each synthetic dataset, both the candidate vectors $h_j$ and the query vectors $w$ are drawn from the normal distribution.

5.1 Experimental Settings

To have fair comparisons, all the compared approaches are implemented in C++.
• Greedy-MIPS: our proposed approach from Section 4.
• PCA-MIPS: the approach proposed in [2]. We vary the depth of the PCA tree to control the trade-off.
• LSH-MIPS: the approach proposed in [12, 14]. We use the nearest-neighbor transform function proposed in [2, 12] and the random projection scheme as the LSH function, as suggested in [12]. We also implement the standard amplification procedure with an OR-construction of $b$ hyper LSH hash functions, each of which is an AND-construction of $a$ random projections. We vary the values $(a, b)$ to control the trade-off.
• Diamond-MSIPS: the sampling scheme proposed in [3] for maximum squared inner product search. As it shows better performance than LSH-MIPS on MIPS problems in [3], we also include Diamond-MSIPS in our comparison.
• Naive-MIPS: the baseline approach, which applies a linear search to identify the exact top-K candidates.

Evaluation Criteria. For each dataset, the actual top-20 items for each query are regarded as the ground truth. We report the average performance over 2,000 randomly selected query vectors. To evaluate the search quality, we use the precision of the top-P prediction (prec@P), obtained by selecting the top-P items from the $C(w)$ returned by the candidate screening procedure. Results with P = 5 are shown in the paper; results with other values of P are in the supplementary material. To evaluate the search efficiency, we report the relative speedup over the Naive-MIPS approach:
$\text{speedup} = \dfrac{\text{prediction time required by Naive-MIPS}}{\text{prediction time required by the compared approach}}.$
A short script sketching this evaluation protocol end to end is given below.
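Putting the earlier sketches together, a hypothetical end-to-end run on a small synthetic instance (reusing the assumed `build_index`, `query_preprocess`, and `candidate_screening` helpers defined above):

```python
def greedy_mips(H, w, B, index, top_k=5):
    """End-to-end query: preprocess, screen, then rank C(w) exactly."""
    iters, Q = query_preprocess(H, w, index)
    C = candidate_screening(H, w, B, iters, Q)
    scores = H[C] @ w                          # |C(w)| inner products
    return [C[i] for i in np.argsort(-scores)[:top_k]]

# hypothetical evaluation on a small synthetic instance
rng = np.random.default_rng(0)
n, k, B = 2**15, 50, 256
H, w = rng.standard_normal((n, k)), rng.standard_normal(k)
truth = set(np.argsort(-(H @ w))[:20])         # exact top-20 (Naive-MIPS)
index = build_index(H)                         # query-independent step
pred = greedy_mips(H, w, B, index, top_k=5)
print("prec@5:", len(set(pred) & truth) / 5)
```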
Remarks on Budgeted MIPS versus Non-Budgeted MIPS. As mentioned in Section 3, PCA-MIPS and LSH-MIPS cannot handle MIPS with a budget: both the search computation cost and the search quality are fixed once the corresponding data structure is constructed. As a result, to understand the trade-off between search efficiency and search quality for these two approaches, we can only try various values of their parameters (such as the depth of the PCA tree and the amplification parameters $(a, b)$ for LSH). For each combination of parameters, we need to re-run the entire query-independent pre-processing procedure to construct a new data structure.

Remarks on data structure construction. Note that the construction time complexity for Greedy-MIPS is $O(kn \log n)$, which is on par with $O(kn)$ for Diamond-MSIPS and faster than $O(knab)$ for LSH-MIPS and $O(k^2 n)$ for PCA-MIPS. As an example, the construction for Greedy-MIPS takes only around 10 seconds on yahoo-music with $n = 624{,}961$ and $k = 200$.

5.2 Experimental Results

Results on Real-World Datasets. Comparison results for netflix and yahoo-music are shown in Figure 2. The first and second columns present the results with $k = 50$ and $k = 200$, respectively. It is clearly observed that, at a fixed speedup, Greedy-MIPS yields predictions with much higher search quality. In particular, on the yahoo-music dataset with $k = 200$, Greedy-MIPS runs 200x faster than Naive-MIPS and yields search results with p@5 = 70%, while none of PCA-MIPS, LSH-MIPS, and Diamond-MSIPS achieves p@5 above 10% at a similar 200x speedup.

Results on Synthetic Datasets. We also perform comparisons on synthetic datasets. The comparison with various $n \in \{2^{17}, 2^{18}, 2^{19}, 2^{20}\}$ is shown in Figure 3, while the comparison with various $k \in \{2^2, 2^5, 2^7, 2^{10}\}$ is shown in Figure 4. We observe that the performance gap between Greedy-MIPS and the other approaches persists as $n$ increases, while the gap becomes smaller as $k$ increases. Even so, Greedy-MIPS still significantly outperforms the other approaches.

6 Conclusions and Future Work

In this paper, we develop a novel Greedy-MIPS algorithm, which has the flexibility to handle budgeted MIPS and yields surprisingly superior performance compared to state-of-the-art approaches. The current implementation focuses on MIPS with dense vectors; in the future, we plan to extend our algorithm to high-dimensional sparse vectors. We also establish a theoretical guarantee for Greedy-MIPS based on the assumption that the data are generated from a random distribution. How to relax this assumption, or how to design a nondeterministic pre-processing step for Greedy-MIPS that satisfies it, are interesting directions for future work.

Acknowledgements This research was supported by NSF grants CCF-1320746, IIS-1546452 and CCF-1564000. CJH was supported by NSF grant RI-1719097.
1. What is the main contribution of the paper regarding the MIPS problem? 2. What are the strengths and weaknesses of the proposed greedy approach? 3. How does the reviewer assess the analysis and efficiency of the proposed solution? 4. What are the limitations of the paper's experimental comparison with other works? 5. How does the reviewer evaluate the practicality and scalability of the proposed method for large-scale systems?
Review
Review The paper proposes a greedy approach, based on sorting among the columns of the candidate matrix, to solve the MIPS problem. Some analysis of the proposed solution is made; this leads to an efficient implementation, and an asymptotic bound on the error is provided for a simplified case. The analysis does not seem very sharp, as the provided bound is sublinear but yields better results for high-dimensional vectors (this is likely an artifact of the uniform i.i.d. assumption over the entries). The proposed idea is nice and simple, but the writing makes the paper harder to follow than it could be. The MIPS problem has received a lot of attention from the community, and the experimental part does not compare to the most recent approaches such as ALSH (NIPS'14) or "Learning and Inference via Maximum Inner Product Search" (ICML'16), which can be considered an issue. The authors discard these approaches because they consider that the computational budget cannot be controlled as easily as with their approach, but in my opinion this reason is not strong enough to not report their performance in the figures. Moreover, the time required to build the indexes is not reported in the time comparisons, which can be acceptable when the number of users is very large relative to the number of items, but again is not entirely fair in terms of time comparisons. From a practitioner's point of view, MIPS is not widely used in production because it remains too slow for large-scale systems (even when using dense embeddings). One tends to prefer the use of hash tables mapping some set of well-chosen keys to sets of products.
NIPS
1. What is the focus of the paper regarding Maximum Inner Product Search? 2. How does the proposed approach dynamically trade off latency against search quality? 3. Are there any limitations to the method's theoretical analysis? 4. Does the paper provide sufficient experimental results to support its claims?
Review
Review The paper considers the problem of Maximum Inner Product Search, which is an important retrieval problem for recommender systems tasks (among others), e.g., finding the item most similar to a user's preference vector. I'm not particularly familiar with related work on this topic (such as budgeted MIPS), so I had to learn as I went. Essentially (as I understand it), budgeted MIPS has a trade-off between latency and search quality. The idea here is to develop a greedy algorithm that dynamically adjusts this trade-off to improve performance. The actual procedure, as I understand it, seems fairly straightforward, and plenty of detail is given, so that what's proposed could easily be re-implemented. Even so, the method (as far as I can tell) is novel and appears to be backed up by theoretical analysis that demonstrates its validity. The analysis itself is a bit limited, in the sense that it makes strong assumptions about the data distribution (uniformity of the vectors). I can imagine several applications where this would not be realistic. However, it seems realistic enough for applications like retrieval in a recommender system, which is an important enough application in and of itself. The experiments on real-world datasets seem to confirm that the proposed approach does indeed result in substantial speed increases.
NIPS
Title Fast and Provably Good Seedings for k-Means Abstract Seeding – the task of finding initial cluster centers – is critical in obtaining high-quality clusterings for k-Means. However, k-means++ seeding, the state-of-the-art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets.

1 Introduction

k-means++ (Arthur & Vassilvitskii, 2007) is one of the most widely used methods to solve k-Means clustering. The algorithm is simple and consists of two steps: In the seeding step, initial cluster centers are found using an adaptive sampling scheme called $D^2$-sampling. In the second step, this solution is refined using Lloyd's algorithm (Lloyd, 1982), the classic iterative algorithm for k-Means. The key advantages of k-means++ are its strong empirical performance, theoretical guarantees on the solution quality, and ease of use. Arthur & Vassilvitskii (2007) show that k-means++ produces clusterings that are in expectation $O(\log k)$-competitive with the optimal solution without any assumptions on the data. Furthermore, this theoretical guarantee already holds after the seeding step. The subsequent use of Lloyd's algorithm to refine the solution only guarantees that the solution quality does not deteriorate and that it converges to a locally optimal solution in finite time. In contrast, using naive seeding, such as selecting data points uniformly at random, followed by Lloyd's algorithm can produce solutions that are arbitrarily bad compared to the optimal solution. The drawback of k-means++ is that it does not scale easily to massive data sets, since both its seeding step and every iteration of Lloyd's algorithm require the computation of all pairwise distances between cluster centers and data points. Lloyd's algorithm can be parallelized in the MapReduce framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010). However, the seeding step requires k inherently sequential passes through the data, making it impractical even for moderate k.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

This highlights the need for a fast and scalable seeding algorithm. Ideally, it should also retain the theoretical guarantees of k-means++ and provide equally competitive clusterings in practice. Such an approach was presented by Bachem et al. (2016), who propose to approximate k-means++ using a Markov chain Monte Carlo (MCMC) approach and provide a fast seeding algorithm. Under natural assumptions on the data generating distribution, the authors show that the computational complexity of k-means++ can be greatly decreased while retaining the same $O(\log k)$ guarantee on the solution quality. The drawback of this approach is that these assumptions may not hold and that checking their validity is expensive (see the detailed discussion in Section 3).

Our contributions. The goal of this paper is to provide fast and competitive seedings for k-Means clustering without prior assumptions on the data. As our key contributions, we (1) propose a simple yet fast seeding algorithm for k-Means, (2) show that it produces provably good clusterings without assumptions on the data, (3) provide stronger theoretical guarantees under assumptions on the data generating distribution, (4) extend the algorithm to arbitrary distance metrics and various divergence measures, (5) compare the algorithm to previous results, both theoretically and empirically, and (6) demonstrate its effectiveness on several real-world data sets.

2 Background and related work

We start by formalizing the problem and reviewing several recent results. Let $\mathcal{X}$ denote a set of $n$ points in $\mathbb{R}^d$. For any finite set $C \subset \mathbb{R}^d$ and $x \in \mathcal{X}$, we define
$d(x, C)^2 = \min_{c \in C} \|x - c\|_2^2.$
The objective of k-Means clustering is to find a set $C$ of $k$ cluster centers in $\mathbb{R}^d$ such that the quantization error $\phi_C(\mathcal{X})$ is minimized, where
$\phi_C(\mathcal{X}) = \sum_{x \in \mathcal{X}} d(x, C)^2.$
We denote the optimal quantization error with $k$ centers by $\phi^k_{\mathrm{OPT}}(\mathcal{X})$, the mean of $\mathcal{X}$ by $\mu(\mathcal{X})$, and the variance of $\mathcal{X}$ by $\mathrm{Var}(\mathcal{X}) = \sum_{x \in \mathcal{X}} d(x, \mu(\mathcal{X}))^2$. We note that $\phi^1_{\mathrm{OPT}}(\mathcal{X}) = \mathrm{Var}(\mathcal{X})$.

$D^2$-sampling. Given a set of centers $C$, the $D^2$-sampling strategy, as the name suggests, samples each point $x \in \mathcal{X}$ with probability proportional to the squared distance to the selected centers,
$p(x \mid C) = \dfrac{d(x, C)^2}{\sum_{x' \in \mathcal{X}} d(x', C)^2}. \quad (1)$
The seeding step of k-means++ builds upon $D^2$-sampling: It first samples an initial center uniformly at random. Then, $k - 1$ additional centers are sequentially added to the previously sampled centers using $D^2$-sampling. The resulting computational complexity is $\Theta(nkd)$, as for each $x \in \mathcal{X}$ the distance $d(x, C)^2$ in (1) needs to be updated whenever a center is added to $C$. A compact sketch of this seeding step is given below.
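For concreteness, a minimal NumPy sketch of the exact $D^2$-sampling seeding step just described (function and variable names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def d2_seeding(X, k, rng=None):
    """Sample k centers from the rows of X with D^2-sampling, Theta(nkd) time."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                  # first center: uniform
    d2 = np.sum((X - centers[0]) ** 2, axis=1)      # d(x, C)^2 for all x
    for _ in range(k - 1):
        c = X[rng.choice(n, p=d2 / d2.sum())]       # Eq. (1)
        centers.append(c)
        d2 = np.minimum(d2, np.sum((X - c) ** 2, axis=1))
    return np.stack(centers)
```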
Metropolis-Hastings. The Metropolis-Hastings algorithm (Hastings, 1970) is an MCMC method for sampling from a probability distribution $p(x)$ whose density is known only up to constants. Consider the following variant that uses an independent proposal distribution $q(x)$ to build a Markov chain: Start with an arbitrary initial state $x_1$ and, in each iteration $j \in [2, \ldots, m]$, sample a candidate $y_j$ using $q(x)$. Then, either accept this candidate (i.e., $x_j = y_j$) with probability
$\pi(x_{j-1}, y_j) = \min\left( \dfrac{p(y_j)}{p(x_{j-1})} \dfrac{q(x_{j-1})}{q(y_j)},\ 1 \right) \quad (2)$
or reject it otherwise (i.e., $x_j = x_{j-1}$). The stationary distribution of this Markov chain is $p(x)$. Hence, for $m$ sufficiently large, the distribution of $x_m$ is approximately $p(x)$.

Approximation using MCMC (K-MC²). Bachem et al. (2016) propose to speed up k-means++ by replacing the exact $D^2$-sampling in (1) with a fast approximation based on MCMC sampling. In each iteration $j \in [2, 3, \ldots, k]$, one constructs a Markov chain of length $m$ using the Metropolis-Hastings algorithm with an independent and uniform proposal distribution $q(x) = 1/n$. The key advantage is that the acceptance probability in (2) only depends on $d(y_j, C)^2$ and $d(x_{j-1}, C)^2$, since
$\min\left( \dfrac{p(y_j)}{p(x_{j-1})} \dfrac{q(x_{j-1})}{q(y_j)},\ 1 \right) = \min\left( \dfrac{d(y_j, C)^2}{d(x_{j-1}, C)^2},\ 1 \right).$
Critically, in each of the $k - 1$ iterations, the algorithm does not require a full pass through the data but only needs to compute the distances between $m$ points and up to $k - 1$ centers. As a consequence, the complexity of K-MC² is $O(mk^2 d)$, compared to $O(nkd)$ for k-means++ seeding. To bound the quality of the solutions produced by K-MC², Bachem et al. (2016) analyze the mixing time of the described Markov chains. To this end, the authors define the two data-dependent quantities
$\alpha(\mathcal{X}) = \max_{x \in \mathcal{X}} \dfrac{d(x, \mu(\mathcal{X}))^2}{\sum_{x' \in \mathcal{X}} d(x', \mu(\mathcal{X}))^2}, \quad \text{and} \quad \beta(\mathcal{X}) = \dfrac{\phi^1_{\mathrm{OPT}}(\mathcal{X})}{\phi^k_{\mathrm{OPT}}(\mathcal{X})}. \quad (3)$
In order to bound each term, the authors assume that the data is generated i.i.d. from a distribution $F$ and impose two conditions on $F$. First, they assume that $F$ exhibits exponential tails and prove that in this case $\alpha(\mathcal{X}) \in O(\log^2 n)$ with high probability. Second, they assume that "$F$ is approximately uniform on a hypersphere". This in turn implies that $\beta(\mathcal{X}) \in O(k)$ with high probability. Under these assumptions, the authors prove that the solution generated by K-MC² is in expectation $O(\log k)$-competitive with the optimal solution if $m \in \Theta(k \log^2 n \log k)$. In this case, the total computational complexity of K-MC² is $O(k^3 d \log^2 n \log k)$, which is sublinear in the number of data points.

Other related work. A survey of seeding methods for k-Means was provided by Celebi et al. (2013). $D^2$-sampling and k-means++ have been extensively studied in the literature. Previous work primarily focused on related algorithms (Arthur & Vassilvitskii, 2007; Ostrovsky et al., 2006; Jaiswal et al., 2014, 2015), theoretical properties (Ailon et al., 2009; Aggarwal et al., 2009) and bad instances (Arthur & Vassilvitskii, 2007; Brunsch & Röglin, 2011). As such, these results are complementary to the ones presented in this paper. An alternative approach to scalable seeding was investigated by Bahmani et al. (2012). The authors propose the k-means‖ algorithm, which retains the same $O(\log k)$ guarantee in expectation as k-means++. k-means‖ reduces the number of sequential passes through the data to $O(\log n)$ by oversampling cluster centers in each of the rounds. While this allows one to parallelize each of the $O(\log n)$ rounds, it also increases the total computational complexity from $O(nkd)$ to $O(nkd \log n)$. This method is feasible if substantial computational resources are available in the form of a cluster. Our approach, on the other hand, has an orthogonal use case: It aims to efficiently approximate k-means++ seeding with a substantially lower complexity. A compact sketch of one K-MC² chain is given below.
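Continuing the NumPy sketch above: with the uniform proposal $q(x) = 1/n$, the proposal terms cancel in the acceptance ratio, so one K-MC² chain for selecting the next center can be rendered as follows (again a sketch under assumed names, not the reference implementation):

```python
def kmc2_next_center(X, centers, m, rng):
    """One K-MC^2 Markov chain of length m with uniform proposal q(x) = 1/n."""
    n = X.shape[0]
    d2 = lambda i: min(float(np.sum((X[i] - c) ** 2)) for c in centers)
    x = rng.integers(n)                 # initial state from q
    dx = d2(x)
    for _ in range(m - 1):
        y = rng.integers(n)             # candidate from q
        dy = d2(y)
        if dy > rng.random() * dx:      # accept with prob min(dy/dx, 1)
            x, dx = y, dy
    return x
```

Each chain evaluates distances for only $m$ points against the current centers, which is the source of the $O(mk^2 d)$ overall cost.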
3 Assumption-free K-MC²

Building on the MCMC strategy introduced by Bachem et al. (2016), we propose an algorithm which addresses the drawbacks of the K-MC² algorithm, namely:
(1) The theoretical results of K-MC² hold only if the data is drawn independently from a distribution satisfying the assumptions stated in Section 2. For example, the results do not extend to heavy-tailed distributions, which are often observed in real-world data.
(2) Verifying the assumptions, which in turn imply the required chain length, is computationally hard and potentially more expensive than running the algorithm. In fact, calculating $\alpha(\mathcal{X})$ already requires two full passes through the data, while computing $\beta(\mathcal{X})$ is NP-hard.
(3) Theorem 2 of Bachem et al. (2016) does not characterize the trade-off between $m$ and the expected solution quality: It is only valid for the specific choice of chain length $m = \Theta(k \log^2 n \log k)$. As a consequence, if the assumptions do not hold, we obtain no theoretical guarantee on the solution quality. Furthermore, the constants in Theorem 2 are not known and may be large.

Our approach addresses these shortcomings using three key elements. Firstly, we provide a proposal distribution that renders the assumption on $\alpha(\mathcal{X})$ obsolete. Secondly, a novel theoretical analysis allows us to obtain guarantees on the solution quality even without assumptions on $\beta(\mathcal{X})$. Finally, our results characterize the trade-off between increasing the chain length $m$ and improving the expected solution quality.

Proposal distribution. We argue that the choice of the proposal distribution is critical. Intuitively, the uniform distribution can be a very bad choice if, in any iteration, the true $D^2$-sampling distribution is "highly" nonuniform. We suggest the following proposal distribution: We first sample a center $c_1 \in \mathcal{X}$ uniformly at random and define for all $x \in \mathcal{X}$ the nonuniform proposal
$q(x \mid c_1) = \dfrac{1}{2} \underbrace{\dfrac{d(x, c_1)^2}{\sum_{x' \in \mathcal{X}} d(x', c_1)^2}}_{(A)} + \dfrac{1}{2} \underbrace{\dfrac{1}{|\mathcal{X}|}}_{(B)}. \quad (4)$
The term (A) is the true $D^2$-sampling distribution with respect to the first center $c_1$. For any data set, it ensures that we start with the best possible proposal distribution in the second iteration. We will show that this proposal is sufficient even for later iterations, rendering any assumptions on $\alpha$ obsolete. The term (B) regularizes the proposal distribution and ensures that the mixing time of K-MC² is always matched up to a factor of two.

Algorithm. Algorithm 1 details the proposed fast seeding algorithm ASSUMPTION-FREE K-MC².

Algorithm 1 ASSUMPTION-FREE K-MC² (AFK-MC²)
Require: Data set $\mathcal{X}$, number of centers $k$, chain length $m$
// Preprocessing step
1: $c_1 \leftarrow$ point uniformly sampled from $\mathcal{X}$
2: for all $x \in \mathcal{X}$ do
3:   $q(x) \leftarrow \frac{1}{2}\, d(x, c_1)^2 / \sum_{x' \in \mathcal{X}} d(x', c_1)^2 + \frac{1}{2n}$
// Main loop
4: $C_1 \leftarrow \{c_1\}$
5: for $i = 2, 3, \ldots, k$ do
6:   $x \leftarrow$ point sampled from $\mathcal{X}$ using $q(x)$
7:   $d_x \leftarrow d(x, C_{i-1})^2$
8:   for $j = 2, 3, \ldots, m$ do
9:     $y \leftarrow$ point sampled from $\mathcal{X}$ using $q(y)$
10:    $d_y \leftarrow d(y, C_{i-1})^2$
11:    if $\frac{d_y\, q(x)}{d_x\, q(y)} > \mathrm{Unif}(0, 1)$ then $x \leftarrow y$, $d_x \leftarrow d_y$
12:   $C_i \leftarrow C_{i-1} \cup \{x\}$
13: return $C_k$

In the preprocessing step, the algorithm first samples an initial center $c_1$ uniformly at random and then computes the proposal distribution $q(\cdot \mid c_1)$. In the main loop, it then uses independent Markov chains of length $m$ to sample centers in each of the $k - 1$ iterations. The complexity of the main loop is $O(mk^2 d)$. A runnable sketch is given below.
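A sketch of Algorithm 1, continuing the NumPy snippets above. We draw proposals with `rng.choice` for clarity, which costs $O(n)$ per draw; preserving the stated $O(mk^2 d)$ main-loop complexity would require an $O(1)$-per-sample structure (e.g., an alias table) built once during preprocessing.

```python
def afk_mc2(X, k, m, rng=None):
    """Sketch of Algorithm 1 (ASSUMPTION-FREE K-MC^2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    c1 = rng.integers(n)
    d2_c1 = np.sum((X - X[c1]) ** 2, axis=1)
    q = 0.5 * d2_c1 / d2_c1.sum() + 0.5 / n     # proposal, Eq. (4): one pass
    centers = [X[c1]]

    def d2(i):                                  # d(x_i, C)^2, costs O(|C| d)
        return min(float(np.sum((X[i] - c) ** 2)) for c in centers)

    for _ in range(k - 1):
        x = rng.choice(n, p=q)
        dx = d2(x)
        for _ in range(m - 1):
            y = rng.choice(n, p=q)
            dy = d2(y)
            # Metropolis-Hastings acceptance, line 11 of Algorithm 1
            if dy * q[x] > rng.random() * dx * q[y]:
                x, dx = y, dy
        centers.append(X[x])
    return np.stack(centers)
```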
Theorem 1. Let $\epsilon \in (0, 1)$ and $k \in \mathbb{N}$. Let $\mathcal{X}$ be any set of $n$ points in $\mathbb{R}^d$ and $C$ be the output of Algorithm 1 with $m = 1 + \frac{8}{\epsilon} \log \frac{4k}{\epsilon}$. Then, it holds that
$$\mathbb{E}\left[\phi_C(\mathcal{X})\right] \le 8(\log_2 k + 2)\, \phi^k_{OPT}(\mathcal{X}) + \epsilon \mathrm{Var}(\mathcal{X}).$$
The computational complexity of the preprocessing step is $O(nd)$ and the computational complexity of the main loop is $O(\frac{1}{\epsilon} k^2 d \log \frac{k}{\epsilon})$.

This result shows that ASSUMPTION-FREE K-MC² produces provably good clusterings for arbitrary data sets without assumptions. The guarantee consists of two terms: The first term, i.e., $8(\log_2 k + 2)\, \phi^k_{OPT}(\mathcal{X})$, is the theoretical guarantee of k-means++. The second term, $\epsilon \mathrm{Var}(\mathcal{X})$, quantifies the potential additional error due to the approximation. The variance is a natural notion as the mean is the optimal quantizer for $k = 1$. Intuitively, the second term may be interpreted as a scale-invariant and additive approximation error.

Theorem 1 directly characterizes the tradeoff between improving the solution quality and the resulting increase in computational complexity. As $m$ is increased, the solution quality converges to the theoretical guarantee of k-means++. At the same time, even for smaller chain lengths $m$, we obtain a provable bound on the solution quality. In contrast, the guarantee of K-MC² on the solution quality only holds for a specific choice of $m$.

For completeness, ASSUMPTION-FREE K-MC² may also be analyzed under the assumptions made in Bachem et al. (2016). While for K-MC² the required chain length $m$ is linear in $\alpha(\mathcal{X})$, ASSUMPTION-FREE K-MC² does not require this assumption. In fact, we will see in Section 4 that this lack of dependence on $\alpha(\mathcal{X})$ leads to a better empirical performance. If we assume $\beta(\mathcal{X}) \in O(k)$, we obtain the following result similar to the one of K-MC² (albeit with a shorter chain length $m$).

Corollary 1. Let $k \in \mathbb{N}$ and $\mathcal{X}$ be a set of $n$ points in $\mathbb{R}^d$ satisfying $\beta(\mathcal{X}) \in O(k)$. Let $C$ be the output of Algorithm 1 with $m = \Theta(k \log k)$. Then it holds that
$$\mathbb{E}\left[\phi_C(\mathcal{X})\right] \le 8(\log_2 k + 3)\, \phi^k_{OPT}(\mathcal{X}).$$
The computational complexity of the preprocessing is $O(nd)$ and the computational complexity of the main loop is $O(k^3 d \log k)$.
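To get a feel for the tradeoff in Theorem 1, it helps to plug in numbers. The following back-of-the-envelope computation is our own illustration with hypothetical values for $n$, $k$, and $\epsilon$; it only evaluates the stated formulas (taking the logarithm as natural) and does not reproduce any experiment from the paper.

import math

def chain_length(eps, k):
    # Chain length prescribed by Theorem 1: m = 1 + (8 / eps) * log(4k / eps).
    return 1 + (8.0 / eps) * math.log(4.0 * k / eps)

# Hypothetical setting: n = 10^7 points, k = 200 centers, and an additive
# error of eps * Var(X) with eps = 0.1.
n, k, eps = 10**7, 200, 0.1
m = chain_length(eps, k)                    # about 720
afk_cost = n + m * k**2                     # preprocessing pass + main loop
kmeanspp_cost = n * k                       # k full passes for k-means++
print(round(m), afk_cost / kmeanspp_cost)   # 720 and roughly 0.02

In this hypothetical setting the full AFK-MC² seeding needs on the order of 2% of the distance evaluations of exact k-means++ seeding, at the price of an additive $0.1\,\mathrm{Var}(\mathcal{X})$ term in the guarantee.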
3.1 Proof sketch for Theorem 1

In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain approximates one iteration of exact $D^2$-sampling. Then, we analyze how the approximation error accumulates across iterations and provide a bound on the expected solution quality.

For the first step, consider any set $C \subseteq \mathcal{X}$ of previously sampled centers. Let $c_1 \in C$ denote the first sampled center that was used to construct the proposal distribution $q(x \mid c_1)$ in (4). In a single iteration, we would ideally sample a new center $x \in \mathcal{X}$ using $D^2$-sampling, i.e., from $p(x \mid C)$ as defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center $x \in \mathcal{X}$ as the next cluster center. We denote by $\tilde{p}^{c_1}_m(x \mid C)$ the implied probability of sampling a point $x \in \mathcal{X}$ using this Markov chain of length $m$. The following result shows that in any iteration either $C$ is $\epsilon_1$-competitive compared to $c_1$ or the Markov chain approximates $D^2$-sampling well in terms of total variation distance¹.

Lemma 1. Let $\epsilon_1, \epsilon_2 \in (0, 1)$ and $c_1 \in \mathcal{X}$. Consider any set $C \subseteq \mathcal{X}$ with $c_1 \in C$. For $m \ge 1 + \frac{2}{\epsilon_1} \log \frac{1}{\epsilon_2}$, at least one of the following holds:
(i) $\phi_C(\mathcal{X}) < \epsilon_1 \phi_{c_1}(\mathcal{X})$, or (ii) $\left\|p(\cdot \mid C) - \tilde{p}^{c_1}_m(\cdot \mid C)\right\|_{TV} \le \epsilon_2$.

In the second step, we bound the expected solution quality of Algorithm 1 based on Lemma 1. While the full proof requires careful propagation of errors across iterations and a corresponding inductive argument, the intuition is based on distinguishing between two possible cases of sampled solutions. First, consider the realizations of the solution $C$ that are $\epsilon_1$-competitive compared to $c_1$. By definition, $\phi_C(\mathcal{X}) < \epsilon_1 \phi_{c_1}(\mathcal{X})$. Furthermore, the expected solution quality of these realizations can be bounded by $2\epsilon_1 \mathrm{Var}(\mathcal{X})$ since $c_1$ is chosen uniformly at random and hence in expectation $\phi_{c_1}(\mathcal{X}) \le 2\mathrm{Var}(\mathcal{X})$. Second, consider the realizations that are not $\epsilon_1$-competitive compared to $c_1$. Since the quantization error is non-increasing in sampled centers, Lemma 1 implies that all $k-1$ Markov chains result in a good approximation of the corresponding $D^2$-sampling. Intuitively, this implies that the approximation error in terms of total variation distance across all $k-1$ iterations is at most $\epsilon_2(k-1)$. Informally, the expected solution quality is thus bounded with probability $1 - \epsilon_2(k-1)$ by the expected quality of k-means++ and with probability $\epsilon_2(k-1)$ by $\phi_{c_1}(\mathcal{X})$. Theorem 1 can then be proven by setting $\epsilon_1 = \epsilon/4$ and $\epsilon_2 = \epsilon/4k$ and choosing $m$ sufficiently large.

¹ Let $\Omega$ be a finite sample space on which two probability distributions $p$ and $q$ are defined. The total variation distance $\|p - q\|_{TV}$ between $p$ and $q$ is given by $\frac{1}{2} \sum_{x \in \Omega} |p(x) - q(x)|$.
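The total variation distance of footnote 1 is straightforward to compute on a finite sample space, which makes Lemma 1 easy to probe empirically on toy data. The snippet below is our own illustration; it reuses the kmc2_step sketch from Section 2 and compares the chain's sampling distribution with the exact $D^2$-sampling probabilities of (1).

import numpy as np
from collections import Counter

def total_variation(p, q):
    # ||p - q||_TV = 0.5 * sum over the sample space of |p(x) - q(x)|.
    return 0.5 * float(np.abs(np.asarray(p) - np.asarray(q)).sum())

# Toy data set with a single selected center C = {X[0]}.
X = np.array([[0.0], [1.0], [2.0], [10.0]])
d2 = np.sum((X - X[0]) ** 2, axis=1)          # squared distances to C
p_exact = d2 / d2.sum()                       # exact D^2-sampling, Eq. (1)

# Monte Carlo estimate of the chain's sampling distribution for m = 20.
rng = np.random.default_rng(0)
counts = Counter(kmc2_step(X, [X[0]], m=20, rng=rng) for _ in range(20000))
p_chain = np.array([counts[i] / 20000 for i in range(len(X))])
print(total_variation(p_exact, p_chain))      # shrinks as m grows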
3.2 Extension to other clustering problems

While we only consider k-Means clustering and the Euclidean distance in this paper, the results are more general. They can be directly applied, by transforming the data, to any metric space for which there exists a global isometry on Euclidean spaces. Examples would be the Mahalanobis distance and Generalized Symmetrized Bregman divergences (Acharyya et al., 2013). The results also apply to arbitrary distance measures (albeit with different constants) as $D^2$-sampling can be generalized to arbitrary distance measures (Arthur & Vassilvitskii, 2007). However, $\mathrm{Var}(\mathcal{X})$ needs to be replaced by $\phi^1_{OPT}(\mathcal{X})$ in Theorem 1 since the mean may not be the optimal quantizer (for $k = 1$) for a different distance metric. The proposed algorithm can further be extended to different potential functions of the form $\|\cdot\|^l$ and used to approximate the corresponding $D^l$-sampling (Arthur & Vassilvitskii, 2007), again with different constants. Similarly, the results also apply to bregman++ (Ackermann & Blömer, 2010) which provides provably competitive solutions for clustering with a broad class of Bregman divergences (including the KL-divergence and Itakura-Saito distance).

4 Experimental results

In this section², we empirically validate our theoretical results and compare the proposed algorithm ASSUMPTION-FREE K-MC² (AFK-MC²) to three alternative seeding strategies: (1) RANDOM, a "naive" baseline that samples $k$ centers from $\mathcal{X}$ uniformly at random, (2) the full seeding step of k-means++, and (3) K-MC². For both ASSUMPTION-FREE K-MC² and K-MC², we consider the different chain lengths $m \in \{1, 2, 5, 10, 20, 50, 100, 150, 200\}$. Table 1 shows the six data sets used in the experiments with their corresponding values for $k$. We choose an experimental setup similar to Bachem et al. (2016): For half of the data sets, we both train the algorithm and evaluate the corresponding solution on the full data set (denoted by T in the EVAL column of Table 1). This corresponds to the classical k-Means setting. In practice, however, one is often also interested in the generalization error. For the other half of the data sets, we retain 250,000 data points as the holdout set for the evaluation (denoted by H in the EVAL column of Table 1). For all methods, we record the solution quality (either on the full data set or the holdout set) and measure the number of distance evaluations needed to run the algorithm. For ASSUMPTION-FREE K-MC² this includes both the preprocessing and the main loop. We run every algorithm 200 times with different random seeds and average the results. We further compute and display 95% confidence intervals for the solution quality.

² An implementation of ASSUMPTION-FREE K-MC² has been released at http://olivierbachem.ch.
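The measurement protocol just described can be mirrored in a few lines. The harness below is our own sketch with hypothetical names: it averages the quantization error of a seeding routine over independently seeded runs, as in the 200 repetitions used here; counting distance evaluations would additionally require instrumenting the seeding routines themselves.

import numpy as np

def quantization_error(X, centers):
    # phi_C(X): sum of squared distances from each point to its closest center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return float(d2.min(axis=1).sum())

def mean_solution_quality(seed_fn, X_train, X_eval, runs=200, seed=0):
    # Average solution quality over independently seeded runs (Section 4);
    # pass X_eval = X_train for the classical setting, or a holdout set
    # for the generalization setting.
    errors = []
    for r in range(runs):
        rng = np.random.default_rng(seed + r)
        centers = seed_fn(X_train, rng)
        errors.append(quantization_error(X_eval, centers))
    return float(np.mean(errors))

# Example (using the afk_mc2 sketch from Section 3):
# quality = mean_solution_quality(
#     lambda X, rng: afk_mc2(X, k=10, m=20, rng=rng), X_train, X_eval)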
Discussion. Figure 1 shows the expected quantization error for the two baselines, RANDOM and k-means++, and for the MCMC methods with different chain lengths $m$. As expected, the seeding step of k-means++ strongly outperforms RANDOM on all data sets. As the chain length $m$ increases, the quality of solutions produced by both ASSUMPTION-FREE K-MC² and K-MC² quickly converges to that of k-means++ seeding. On all data sets except WEB, ASSUMPTION-FREE K-MC² starts with a lower initial error due to the improved proposal distribution and outperforms K-MC² for any given chain length $m$. For WEB, both algorithms exhibit approximately the same performance. This is expected as $\alpha(\mathcal{X})$ of WEB is very low (see Table 1). Hence, there is only a minor difference between the nonuniform proposal of ASSUMPTION-FREE K-MC² and the uniform proposal of K-MC². In fact, one of the key advantages of ASSUMPTION-FREE K-MC² is that its proposal adapts to the data set at hand.

As discussed in Section 3, ASSUMPTION-FREE K-MC² requires an additional preprocessing step to compute the nonuniform proposal. Figure 2 shows the expected solution quality in relation to the total computational complexity in terms of number of distance evaluations. Both K-MC² and ASSUMPTION-FREE K-MC² generate solutions that are competitive with those produced by the seeding step of k-means++. At the same time, they do this at a fraction of the computational cost. Despite the preprocessing, ASSUMPTION-FREE K-MC² clearly outperforms K-MC² on the data sets with large values for $\alpha(\mathcal{X})$ (CSN, KDD and SONG). The additional effort of computing the nonuniform proposal is compensated by a substantially lower expected quantization error for a given chain size. For the other data sets, ASSUMPTION-FREE K-MC² is initially disadvantaged by the cost of computing the proposal distribution. However, as $m$ increases and more time is spent computing the Markov chains, it either outperforms K-MC² (RNA and SUSY) or matches its performance (WEB).

Table 3 details the practical significance of the proposed algorithm. The results indicate that in practice it is sufficient to run ASSUMPTION-FREE K-MC² with a chain length independent of $n$. Even with a small chain length, ASSUMPTION-FREE K-MC² produces competitive clusterings at a fraction of the computational cost of the seeding step of k-means++. For example on CSN, ASSUMPTION-FREE K-MC² with $m = 20$ achieves a relative error of 1.45% and a speedup of 33.3×. At the same time, K-MC² would have exhibited a substantial relative error of 65.34% while only obtaining a slightly better speedup of 40.0×.

5 Conclusion

In this paper, we propose ASSUMPTION-FREE K-MC², a simple and fast seeding algorithm for k-Means. In contrast to the previously introduced algorithm K-MC², it produces provably good clusterings even without assumptions on the data. As a key advantage, ASSUMPTION-FREE K-MC² allows one to provably trade off solution quality for a decreased computational effort. Extensive experiments illustrate the practical significance of the proposed algorithm: It obtains competitive clusterings at a fraction of the cost of k-means++ seeding, and it outperforms or matches its main competitor K-MC² on all considered data sets.

Acknowledgments

This research was partially supported by ERC StG 307036, a Google Ph.D. Fellowship and an IBM Ph.D. Fellowship.
1. What is the main contribution of the paper regarding the k-means++ procedure?
2. How does the proposed approach differ from the previous work by Bachem et al.?
3. What are the strengths of the paper in terms of its writing style and accessibility to non-experts?
4. Can you provide more details about the additive factors in the approximation guarantee and their impact on the results?
5. How does the paper demonstrate the effectiveness of the proposed method on real datasets?
Review
This paper extends the work of Bachem et al. in reducing the running time of the k-means++ procedure. Bachem et al. suggested sampling points using a Markov process (instead of computing the complete distribution after every iteration and then sampling from this distribution). Since computing the distribution is a costly operation, this technique reduces the running time significantly for very large datasets. However, in order to show that the sampling probabilities are similar to those of the k-means++ procedure, we need to show bounds on the mixing time of the Markov process. This in turn imposes constraints on the datasets on which this technique may be applied. The current paper approaches the problem in a slightly different manner. The authors argue that the Markov-process-based algorithm works for any dataset if one allows some additive factors in the approximation guarantee. They also argue that in some cases this additive approximation factor does not cause any serious problems. Furthermore, they show that on many real datasets, the additive term is small for reasonable values of the parameters. As in Bachem et al.'s work, the process is significantly faster than the k-means++ seeding procedure. This paper is well written and may be followed and appreciated by non-experts. In my opinion, the results in the paper add to our knowledge about the k-means++ procedure.
NIPS
1. What is the focus of the paper regarding MCMC-based seeding strategies for k-means?
2. What is the proposed approach in the paper, and how does it differ from previous methods?
3. How effective is the proposed method compared to other approaches, particularly in terms of convergence speed?
4. Are there any potential improvements or modifications that could enhance the performance of the proposed method?
5. How do the results of the paper impact the choice of methods in the single-node or distributed settings?
Review
Authors identify a superior proposal distribution for an MCMC-based seeding strategy for k-means. Surprisingly (!), only a single data pass of preprocessing based upon a randomly chosen center is sufficient to define a proposal distribution which has fast convergence for all iterations. The simplicity and efficacy of this approach suggest it is likely to be the method of choice in the single-node setting. If a k-means||-style oversampling trick can be applied to the loop on line 5 of Algorithm 1, then it would be a strong contender in the distributed setting as well. It is unclear in the experimental section whether relative errors are reported for just the seeding step, or whether Lloyd's algorithm has been applied to refine the solutions. Assuming the former, it is unclear to what extent the differences in Table 2 would persist given even one Lloyd iteration (except, perhaps, with respect to random initialization).
Title Fast and Provably Good Seedings for k-Means Abstract Seeding – the task of finding initial cluster centers – is critical in obtaining highquality clusterings for k-Means. However, k-means++ seeding, the state of the art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets. N/A art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets. 1 Introduction k-means++ (Arthur & Vassilvitskii, 2007) is one of the most widely used methods to solve k-Means clustering. The algorithm is simple and consists of two steps: In the seeding step, initial cluster centers are found using an adaptive sampling scheme called D 2 -sampling. In the second step, this solution is refined using Lloyd’s algorithm (Lloyd, 1982), the classic iterative algorithm for k-Means. The key advantages of k-means++ are its strong empirical performance, theoretical guarantees on the solution quality, and ease of use. Arthur & Vassilvitskii (2007) show that k-means++ produces clusterings that are in expectation O(log k)-competitive with the optimal solution without any assumptions on the data. Furthermore, this theoretical guarantee already holds after the seeding step. The subsequent use of Lloyd’s algorithm to refine the solution only guarantees that the solution quality does not deteriorate and that it converges to a locally optimal solution in finite time. In contrast, using naive seeding such as selecting data points uniformly at random followed by Lloyd’s algorithm can produce solutions that are arbitrarily bad compared to the optimal solution. The drawback of k-means++ is that it does not scale easily to massive data sets since both its seeding step and every iteration of Lloyd’s algorithm require the computation of all pairwise distances between cluster centers and data points. Lloyd’s algorithm can be parallelized in the MapReduce framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010). However, the seeding step requires k inherently sequential passes through the data, making it impractical even for moderate k. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. This highlights the need for a fast and scalable seeding algorithm. Ideally, it should also retain the theoretical guarantees of k-means++ and provide equally competitive clusterings in practice. Such an approach was presented by Bachem et al. (2016) who propose to approximate k-means++ using a Markov chain Monte Carlo (MCMC) approach and provide a fast seeding algorithm. Under natural assumptions on the data generating distribution, the authors show that the computational complexity of k-means++ can be greatly decreased while retaining the same O(log k) guarantee on the solution quality. The drawback of this approach is that these assumptions may not hold and that checking their validity is expensive (see detailed discussion in Section 3). Our contributions. The goal of this paper is to provide fast and competitive seedings for k-Means clustering without prior assumptions on the data. As our key contributions, we (1) propose a simple yet fast seeding algorithm for k-Means, (2) show that it produces provably good clusterings without assumptions on the data, (3) provide stronger theoretical guarantees under assumptions on the data generating distribution, (4) extend the algorithm to arbitrary distance metrics and various divergence measures, (5) compare the algorithm to previous results, both theoretically and empirically, and (6) demonstrate its effectiveness on several real-world data sets. 2 Background and related work We will start by formalizing the problem and reviewing several recent results. Let X denote a set of n points in Rd. For any finite set C ⇢ Rd and x 2 X , we define d(x,C) 2 = min c2C kx ck22. The objective of k-Means clustering is to find a set C of k cluster centers in Rd such that the quantization error C(X ) is minimized, where C(X ) = X x2X d(x,C) 2 . We denote the optimal quantization error with k centers by k OPT (X ), the mean of X by µ(X ), and the variance of X by Var(X ) = Px2X d(x, µ(X ))2. We note that 1OPT (X ) = Var(X ). D2-sampling. Given a set of centers C, the D2-sampling strategy, as the name suggests, is to sample each point x 2 X with probability proportional to the squared distance to the selected centers, p(x | C) = d(x,C) 2 P x02X d(x 0 , C) 2 . (1) The seeding step of k-means++ builds upon D 2 -sampling: It first samples an initial center uniformly at random. Then, k 1 additional centers are sequentially added to the previously sampled centers using D 2 -sampling. The resulting computational complexity is ⇥(nkd), as for each x 2 X the distance d(x,C) 2 in (1) needs to be updated whenever a center is added to C. Metropolis-Hastings. The Metropolis-Hastings algorithm (Hastings, 1970) is a MCMC method for sampling from a probability distribution p(x) whose density is known only up to constants. Consider the following variant that uses an independent proposal distribution q(x) to build a Markov chain: Start with an arbitrary initial state x1 and in each iteration j 2 [2, . . . ,m] sample a candidate yj using q(x). Then, either accept this candidate (i.e., xj = yj) with probability ⇡(xj 1, yj) = min ✓ p(yj) p(xj 1) q(xj 1) q(yj) , 1 ◆ (2) or reject it otherwise (i.e., xj = xj 1). The stationary distribution of this Markov chain is p(x). Hence, for m sufficiently large, the distribution of xm is approximately p(x). Approximation using MCMC (K-MC2). Bachem et al. 
(2016) propose to speed up k-means++ by replacing the exact D2-sampling in (1) with a fast approximation based on MCMC sampling. In each iteration j 2 [2, 3, . . . , k], one constructs a Markov chain of length m using the Metropolis-Hasting algorithm with an independent and uniform proposal distribution q(x) = 1/n. The key advantage is that the acceptance probability in (2) only depends on d(yj , C) 2 and d(xj 1, C) 2 since min ✓ p(yj) p(xj 1) q(xj 1) q(yj) , 1 ◆ = min ✓ d(yj , C) 2 d(xj 1, C)2 , 1 ◆ . Critically, in each of the k 1 iterations, the algorithm does not require a full pass through the data, but only needs to compute the distances between m points and up to k 1 centers. As a consequence, the complexity of K-MC 2 is O mk2d compared to O(nkd) for k-means++ seeding. To bound the quality of the solutions produced by K-MC 2 , Bachem et al. (2016) analyze the mixing time of the described Markov chains. To this end, the authors define the two data-dependent quantities: ↵(X ) = max x2X d(x, µ(X ))2P x02X d(x 0 , µ(X ))2 , and (X ) = 1 OPT (X ) k OPT (X ) . (3) In order to bound each term, the authors assume that the data is generated i.i.d. from a distribution F and impose two conditions on F . First, they assume that F exhibits exponential tails and prove that in this case ↵(X ) 2 O log2 n with high probability. Second, they assume that “F is approximately uniform on a hypersphere”. This in turn implies that (X ) 2 O(k) with high probability. Under these assumptions, the authors prove that the solution generated by K-MC 2 is in expectation O(log k)competitive with the optimal solution if m 2 ⇥ k log2 n log k . In this case, the total computational complexity of K-MC 2 is O k3d log2 n log k which is sublinear in the number of data points. Other related work. A survey on seeding methods for k-Means was provided by Celebi et al. (2013). D 2 -sampling and k-means++ have been extensively studied in the literature. Previous work was primarily focused on related algorithms (Arthur & Vassilvitskii, 2007; Ostrovsky et al., 2006; Jaiswal et al., 2014, 2015), its theoretical properties (Ailon et al., 2009; Aggarwal et al., 2009) and bad instances (Arthur & Vassilvitskii, 2007; Brunsch & Röglin, 2011). As such, these results are complementary to the ones presented in this paper. An alternative approach to scalable seeding was investigated by Bahmani et al. (2012). The authors propose the k-meansk algorithm that retains the same O(log k) guarantee in expectation as k-means++. k-meansk reduces the number of sequential passes through the data to O(log n) by oversampling cluster centers in each of the rounds. While this allows one to parallelize each of the O(log n) rounds, it also increases the total computational complexity from O(nkd) to O(nkd log n). This method is feasible if substantial computational resources are available in the form of a cluster. Our approach, on the other hand, has an orthogonal use case: It aims to efficiently approximate k-means++ seeding with a substantially lower complexity. 3 Assumption-free K-MC2 Building on the MCMC strategy introduced by Bachem et al. (2016), we propose an algorithm which addresses the drawbacks of the K-MC 2 algorithm, namely: (1) The theoretical results of K-MC 2 hold only if the data is drawn independently from a distribution satisfying the assumptions stated in Section 2. For example, the results do not extend to heavytailed distributions which are often observed in real world data. 
(2) Verifying the assumptions, which in turn imply the required chain length, is computationally hard and potentially more expensive than running the algorithm. In fact, calculating ↵(X ) already requires two full passes through the data, while computing (X ) is NP-hard. (3) Theorem 2 of Bachem et al. (2016) does not characterize the tradeoff between m and the expected solution quality: It is only valid for the specific choice of chain length m = ⇥ k log 2 n log k . As a consequence, if the assumptions do not hold, we obtain no theoretical guarantee with regards to the solution quality. Furthermore, the constants in Theorem 2 are not known and may be large. Our approach addresses these shortcomings using three key elements. Firstly, we provide a proposal distribution that renders the assumption on ↵(X ) obsolete. Secondly, a novel theoretic analysis allows us to obtain theoretical guarantees on the solution quality even without assumptions on (X ). Finally, our results characterize the tradeoff between increasing the chain length m and improving the expected solution quality. Algorithm 1 ASSUMPTION-FREE K-MC2(AFK-MC2) Require: Data set X , # of centers k, chain length m // Preprocessing step 1: c1 Point uniformly sampled from X 2: for all x 2 X do 3: q(x) 12 d(x, c1)2/ P x02X d(x 0 , c1) 2 + 1 2n // Main loop 4: C1 {c1} 5: for i = 2, 3, . . . , k do 6: x Point sampled from X using q(x) 7: dx d(x,Ci 1)2 8: for j = 2, 3, . . . ,m do 9: y Point sampled from X using q(y) 10: dy d(y, Ci 1)2 11: if dyq(x)d x q(y) > Unif(0, 1) then x y, dx dy 12: Ci Ci 1 [ {x} 13: return Ck Proposal distribution. We argue that the choice of the proposal distribution is critical. Intuitively, the uniform distribution can be a very bad choice if, in any iteration, the true D 2 -sampling distribution is “highly” nonuniform. We suggest the following proposal distribution: We first sample a center c1 2 X uniformly at random and define for all x 2 X the nonuniform proposal q(x | c1) = 1 2 d(x, c1) 2 P x02X d(x 0 , c1) 2 | {z } (A) + 1 2 1 |X ||{z} (B) . (4) The term (A) is the true D 2 -sampling distribution with regards to the first center c1. For any data set, it ensures that we start with the best possible proposal distribution in the second iteration. We will show that this proposal is sufficient even for later iterations, rendering any assumptions on ↵ obsolete. The term (B) regularizes the proposal distribution and ensures that the mixing time of K-MC 2 is always matched up to a factor of two. Algorithm. Algorithm 1 details the proposed fast seeding algorithm ASSUMPTION-FREE K-MC2. In the preprocessing step, it first samples an initial center c1 uniformly at random and then computes the proposal distribution q(· | c1). In the main loop, it then uses independent Markov chains of length m to sample centers in each of the k 1 iterations. The complexity of the main loop is O mk2d . The preprocessing step of ASSUMPTION-FREE K-MC 2 requires a single pass through the data to compute the proposal q(· | c1). There are several reasons why this additional complexity of O(nd) is not an issue in practice: (1) The preprocessing step only requires a single pass through the data compared to k passes for the seeding of k-means++. (2) It is easily parallelized. (3) Given random access to the data, the proposal distribution can be calculated online when saving or copying the data. (4) As we will see in Section 4, the effort spent in the preprocessing step pays off: It often allows for shorter Markov chains in the main loop. 
(5) Computing ↵(X ) to verify the first assumption of K-MC 2 is already more expensive than the preprocessing step of ASSUMPTION-FREE K-MC 2 . Theorem 1. Let ✏ 2 (0, 1) and k 2 N. Let X be any set of n points in Rd and C be the output of Algorithm 1 with m = 1 + 8✏ log 4k ✏ . Then, it holds that E [ C(X )] 8(log2 k + 2) kOPT (X ) + ✏Var(X ). The computational complexity of the preprocessing step is O(nd) and the computational complexity of the main loop is O 1✏k2d log k✏ . This result shows that ASSUMPTION-FREE K-MC 2 produces provably good clusterings for arbitrary data sets without assumptions. The guarantee consists of two terms: The first term, i.e., 8(log2 k + 2) k OPT (X ), is the theoretical guarantee of k-means++. The second term, ✏Var(X ), quantifies the potential additional error due to the approximation. The variance is a natural notion as the mean is the optimal quantizer for k = 1. Intuitively, the second term may be interpreted as a scale-invariant and additive approximation error. Theorem 1 directly characterizes the tradeoff between improving the solution quality and the resulting increase in computational complexity. As m is increased, the solution quality converges to the theoretical guarantee of k-means++. At the same time, even for smaller chain lengths m, we obtain a provable bound on the solution quality. In contrast, the guarantee of K-MC 2 on the solution quality only holds for a specific choice of m. For completeness, ASSUMPTION-FREE K-MC 2 may also be analyzed under the assumptions made in Bachem et al. (2016). While for K-MC 2 the required chain length m is linear in ↵(X ), ASSUMPTION-FREE K-MC 2 does not require this assumption. In fact, we will see in Section 4 that this lack of dependence of ↵(X ) leads to a better empirical performance. If we assume (X ) 2 O(k), we obtain the following result similar to the one of K-MC 2 (albeit with a shorter chain length m). Corollary 1. Let k 2 N and X be a set of n points in Rd satisfying (X ) 2 O(k). Let C be the output of Algorithm 1 with m = ⇥(k log k). Then it holds that E [ C(X )] 8(log2 k + 3) kOPT (X ). The computational complexity of the preprocessing is O(nd) and the computational complexity of the main loop is O k3d log k . 3.1 Proof sketch for Theorem 1 In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain approximates one iteration of exact D 2 -sampling. Then, we analyze how the approximation error accumulates across iterations and provide a bound on the expected solution quality. For the first step, consider any set C ✓ X of previously sampled centers. Let c1 2 C denote the first sampled center that was used to construct the proposal distribution q(x | c1) in (4). In a single iteration, we would ideally sample a new center x 2 X using D2-sampling, i.e., from p(x | C) as defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center x 2 X as the next cluster center. We denote by p̃ c1 m(x | C) the implied probability of sampling a point x 2 X using this Markov chain of length m. The following result shows that in any iteration either C is ✏1-competitive compared to c1 or the Markov chain approximates D 2 -sampling well in terms of total variation distance 1 . Lemma 1. Let ✏1, ✏2 2 (0, 1) and c1 2 X . Consider any set C ✓ X with c1 2 C. 
3.1 Proof sketch for Theorem 1

In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain approximates one iteration of exact D²-sampling. Then, we analyze how the approximation error accumulates across iterations and provide a bound on the expected solution quality.

For the first step, consider any set C ⊆ X of previously sampled centers. Let c1 ∈ C denote the first sampled center that was used to construct the proposal distribution q(x | c1) in (4). In a single iteration, we would ideally sample a new center x ∈ X using D²-sampling, i.e., from p(x | C) as defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center x ∈ X as the next cluster center. We denote by p̃_m^{c1}(x | C) the implied probability of sampling a point x ∈ X using this Markov chain of length m. The following result shows that in any iteration either C is ε1-competitive compared to c1 or the Markov chain approximates D²-sampling well in terms of total variation distance¹.

Lemma 1. Let ε1, ε2 ∈ (0, 1) and c1 ∈ X. Consider any set C ⊆ X with c1 ∈ C. For m ≥ 1 + (2/ε1) log(1/ε2), at least one of the following holds:
(i) φ_C(X) < ε1 φ_{c1}(X), or
(ii) ‖p(· | C) - p̃_m^{c1}(· | C)‖_TV ≤ ε2.

In the second step, we bound the expected solution quality of Algorithm 1 based on Lemma 1. While the full proof requires careful propagation of errors across iterations and a corresponding inductive argument, the intuition is based on distinguishing between two possible cases of sampled solutions.

First, consider the realizations of the solution C that are ε1-competitive compared to c1. By definition, φ_C(X) < ε1 φ_{c1}(X). Furthermore, the expected solution quality of these realizations can be bounded by 2ε1 Var(X) since c1 is chosen uniformly at random and hence in expectation φ_{c1}(X) ≤ 2 Var(X).

Second, consider the realizations that are not ε1-competitive compared to c1. Since the quantization error is non-increasing in the sampled centers, Lemma 1 implies that all k - 1 Markov chains result in a good approximation of the corresponding D²-sampling. Intuitively, this implies that the approximation error in terms of total variation distance across all k - 1 iterations is at most ε2(k - 1). Informally, the expected solution quality is thus bounded with probability 1 - ε2(k - 1) by the expected quality of k-means++ and with probability ε2(k - 1) by φ_{c1}(X). Theorem 1 can then be proven by setting ε1 = ε/4 and ε2 = ε/(4k) and choosing m sufficiently large.

¹ Let Ω be a finite sample space on which two probability distributions p and q are defined. The total variation distance ‖p - q‖_TV between p and q is given by (1/2) Σ_{x∈Ω} |p(x) - q(x)|.
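The footnote's formula is simple to evaluate; a minimal sketch for two discrete distributions on the same finite sample space (useful, e.g., for checking the mixing of short chains on toy data) could be:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance: 0.5 * sum_x |p(x) - q(x)| (footnote 1)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p - q).sum())

# Example: tv_distance([0.5, 0.5], [0.9, 0.1]) == 0.4
```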
3.2 Extension to other clustering problems

While we only consider k-Means clustering and the Euclidean distance in this paper, the results are more general. They can be directly applied, by transforming the data, to any metric space for which there exists a global isometry onto a Euclidean space. Examples would be the Mahalanobis distance and Generalized Symmetrized Bregman divergences (Acharyya et al., 2013). The results also apply to arbitrary distance measures (albeit with different constants) as D²-sampling can be generalized to arbitrary distance measures (Arthur & Vassilvitskii, 2007). However, Var(X) needs to be replaced by φ_OPT^1(X) in Theorem 1 since the mean may not be the optimal quantizer (for k = 1) for a different distance metric. The proposed algorithm can further be extended to different potential functions of the form ‖·‖^l and used to approximate the corresponding D^l-sampling (Arthur & Vassilvitskii, 2007), again with different constants. Similarly, the results also apply to bregman++ (Ackermann & Blömer, 2010) which provides provably competitive solutions for clustering with a broad class of Bregman divergences (including the KL-divergence and the Itakura-Saito distance).

4 Experimental results

In this section², we empirically validate our theoretical results and compare the proposed algorithm ASSUMPTION-FREE K-MC² (AFK-MC²) to three alternative seeding strategies: (1) RANDOM, a "naive" baseline that samples k centers from X uniformly at random, (2) the full seeding step of k-means++, and (3) K-MC². For both ASSUMPTION-FREE K-MC² and K-MC², we consider the different chain lengths m ∈ {1, 2, 5, 10, 20, 50, 100, 150, 200}. Table 1 shows the six data sets used in the experiments with their corresponding values for k.

We choose an experimental setup similar to Bachem et al. (2016): For half of the data sets, we both train the algorithm and evaluate the corresponding solution on the full data set (denoted by T in the EVAL column of Table 1). This corresponds to the classical k-Means setting. In practice, however, one is often also interested in the generalization error. For the other half of the data sets, we retain 250,000 data points as the holdout set for the evaluation (denoted by H in the EVAL column of Table 1). For all methods, we record the solution quality (either on the full data set or the holdout set) and measure the number of distance evaluations needed to run the algorithm. For ASSUMPTION-FREE K-MC² this includes both the preprocessing and the main loop. We run every algorithm 200 times with different random seeds and average the results. We further compute and display 95% confidence intervals for the solution quality.

² An implementation of ASSUMPTION-FREE K-MC² has been released at http://olivierbachem.ch.
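As a sketch of this evaluation protocol, the recorded solution quality is the quantization error φ_C(X) on the training or holdout set. The helpers below are hypothetical: they assume a seeding function that, like the AFK-MC² sketch above, returns indices into the training set, and they omit the distance-evaluation counts and confidence intervals reported in the paper.

```python
import numpy as np

def quantization_error(X, centers):
    """phi_C(X): sum over x in X of the squared distance to the nearest center.

    X : (n, d) evaluation points; centers : (k, d) center coordinates.
    """
    # (n, k) matrix of squared distances via broadcasting; O(nkd) memory.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

def mean_quality(X_train, X_eval, seeding_fn, n_runs=200):
    """Average phi over repeated seeded runs, mirroring the protocol above."""
    errs = [quantization_error(X_eval, X_train[seeding_fn(X_train, seed=s)])
            for s in range(n_runs)]
    return float(np.mean(errs))
```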
Discussion. Figure 1 shows the expected quantization error for the two baselines, RANDOM and k-means++, and for the MCMC methods with different chain lengths m. As expected, the seeding step of k-means++ strongly outperforms RANDOM on all data sets. As the chain length m increases, the quality of solutions produced by both ASSUMPTION-FREE K-MC² and K-MC² quickly converges to that of k-means++ seeding. On all data sets except WEB, ASSUMPTION-FREE K-MC² starts with a lower initial error due to the improved proposal distribution and outperforms K-MC² for any given chain length m. For WEB, both algorithms exhibit approximately the same performance. This is expected as α(X) of WEB is very low (see Table 1). Hence, there is only a minor difference between the nonuniform proposal of ASSUMPTION-FREE K-MC² and the uniform proposal of K-MC². In fact, one of the key advantages of ASSUMPTION-FREE K-MC² is that its proposal adapts to the data set at hand.

As discussed in Section 3, ASSUMPTION-FREE K-MC² requires an additional preprocessing step to compute the nonuniform proposal. Figure 2 shows the expected solution quality in relation to the total computational complexity in terms of the number of distance evaluations. Both K-MC² and ASSUMPTION-FREE K-MC² generate solutions that are competitive with those produced by the seeding step of k-means++. At the same time, they do this at a fraction of the computational cost. Despite the preprocessing, ASSUMPTION-FREE K-MC² clearly outperforms K-MC² on the data sets with large values of α(X) (CSN, KDD and SONG). The additional effort of computing the nonuniform proposal is compensated by a substantially lower expected quantization error for a given chain size. For the other data sets, ASSUMPTION-FREE K-MC² is initially disadvantaged by the cost of computing the proposal distribution. However, as m increases and more time is spent computing the Markov chains, it either outperforms K-MC² (RNA and SUSY) or matches its performance (WEB).

Table 3 details the practical significance of the proposed algorithm. The results indicate that in practice it is sufficient to run ASSUMPTION-FREE K-MC² with a chain length independent of n. Even with a small chain length, ASSUMPTION-FREE K-MC² produces competitive clusterings at a fraction of the computational cost of the seeding step of k-means++. For example, on CSN, ASSUMPTION-FREE K-MC² with m = 20 achieves a relative error of 1.45% and a speedup of 33.3×. At the same time, K-MC² would have exhibited a substantial relative error of 65.34% while only obtaining a slightly better speedup of 40.0×.

5 Conclusion

In this paper, we propose ASSUMPTION-FREE K-MC², a simple and fast seeding algorithm for k-Means. In contrast to the previously introduced algorithm K-MC², it produces provably good clusterings even without assumptions on the data. As a key advantage, ASSUMPTION-FREE K-MC² provably allows one to trade off solution quality for decreased computational effort. Extensive experiments illustrate the practical significance of the proposed algorithm: It obtains competitive clusterings at a fraction of the cost of k-means++ seeding, and it outperforms or matches its main competitor K-MC² on all considered data sets.

Acknowledgments This research was partially supported by ERC StG 307036, a Google Ph.D. Fellowship and an IBM Ph.D. Fellowship.
1. What is the focus of the reviewed paper regarding K-means initialization? 2. How does the proposed method differ from the traditional D^2-sampling approach? 3. What are the strengths and weaknesses of the provided algorithm, particularly in terms of its theoretical guarantees and experimental performance? 4. Do you have any concerns or questions about the proof of the main result, specifically regarding the absence of new technical challenges? 5. How might the inclusion of an empirical variance term in the error guarantee impact the algorithm's performance in certain scenarios?
Review
Review The initialization, or seeding, of a k-means algorithm (when the k initial cluster centers are assigned) is a crucial step, as the accuracy of the clustering varies greatly with it. D²-sampling for seeding is a well-known procedure that provably outperforms random seeding. However, this algorithm does not scale with the size of the dataset, which is especially disadvantageous for massive datasets (complexity ~ nkd, with n data points, k clusters, and dimension d). This paper provides an alternative sampling method with complexity ~ mk²d (m is a system parameter). The algorithm is a modification of an existing method called K-MC², which has the same complexity but has a theoretical guarantee only under a random, independent model for the dataset. While the practical implications of this result may be limited, it is a crucial theoretical step. Moreover, the modified algorithm performs even better experimentally. The proof of the main result, Theorem 1, seems quite straightforward. There seem to be no new technical challenges in the proof (or they are not explicitly mentioned). In Theorem 1, the error guarantee contains a variance term for the dataset; I assume this is the empirical variance. However, this term is not present in the D²-sampling guarantee. This means there may be datasets on which this algorithm performs quite badly. Now, obviously, such datasets can be avoided during experimental setup. It is not clear to me how bad the implication of this term can be.
NIPS
1. What is the main contribution of the paper regarding clustering? 2. What are the strengths and weaknesses of the proposed MCMC approach compared to k-means++? 3. Do you have any questions or concerns about the technical analysis, particularly regarding Lemmas 2 and 8? 4. How does the reviewer assess the novelty and potential impact of the paper's content? 5. Are there any suggestions for improving the clarity and presentation of the paper's ideas and analysis?
Review
Review The paper proposes an MCMC approach to approximate D²-sampling (k-means++) for finding k centers in clustering, by modifying the proposal distribution of an earlier paper by Bachem 16. The algorithm is able to approximately maintain the O(log k) approximation guarantee for the k-means objective on any dataset (as does k-means++), if the analysis is correct. The running time of the algorithm is linear in the data size, but it shaves off a factor of k (the number of clusters) compared to that of k-means++.

Technical quality: I have two doubts regarding the proof of their Lemma 2: 1. The authors argue that, in the case of \phi_{C}(X) < \epsilon_1 \phi_{c_1}(X), the claim holds trivially. I don't see how this goes through. First, I don't see why A^{c_1}(C, l) <= \phi_{C}(X). As I understand it, A^{c_1}(C, l) is the expected cost of C while \phi_{C}(X) is the actual cost of C (a random quantity). If so, why is the former upper bounded by the latter? Second, even if the relation above holds, I don't see how the statement follows, since then we get A^{c_1}(C, l) <= \phi_{C}(X) < \epsilon_1 \phi_{c_1}(X). How does this imply the statement using P^{c_1}(C, l) < 1? 2. I don't see how inequality (8) holds, since I can't see why the equation right above it holds, given the definition of \pi.

Novelty: I think the idea of using MCMC methods to approximate the D²-sampling scheme is great. But since this paper is not the first to propose this approach, and the theoretical analysis seems incremental relative to that of Bachem 16, I think I should take some points off here.

Potential impact or usefulness: I think this paper could potentially have good impact in large-scale clustering applications, given that k-means++ is widely used together with Lloyd's algorithm for clustering. Although the proposed algorithm still has a linear running-time dependence on the data size, I think the fact that they can shave off a factor of k while approximately maintaining the performance guarantee of k-means++ could mean a lot in practice. The proposed algorithm also seems simple enough to be implemented in practice, compared to other scaled versions of k-means++. In general, I think this research direction is worth further exploration.

Clarity and presentation: In terms of highlighting their contribution relative to previous work, the paper does a good job via both writing and experiments. In terms of the analysis, I think the authors should make more effort to clarify definitions and provide intuition. For example, the definitions of \phi_{C}(X), A^{c_1}(C, l) and P^{c_1}(C, l) are unclear to me, and seem to be inconsistent from one proof to another, partly because the definition of C at different stages of the algorithm can change. A^{c_1}(C, l) and P^{c_1}(C, l) became even more unclear to me when the authors referred the readers to lines 6-11 of the algorithm; for a moment, I thought "l" denoted the length of the Markov chain at an iteration. Maybe the authors can remove this reference. Also, it would be great if the authors could provide more high-level intuition for their analysis.
NIPS
Title Fast and Provably Good Seedings for k-Means Abstract Seeding – the task of finding initial cluster centers – is critical in obtaining highquality clusterings for k-Means. However, k-means++ seeding, the state of the art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets. N/A art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets. 1 Introduction k-means++ (Arthur & Vassilvitskii, 2007) is one of the most widely used methods to solve k-Means clustering. The algorithm is simple and consists of two steps: In the seeding step, initial cluster centers are found using an adaptive sampling scheme called D 2 -sampling. In the second step, this solution is refined using Lloyd’s algorithm (Lloyd, 1982), the classic iterative algorithm for k-Means. The key advantages of k-means++ are its strong empirical performance, theoretical guarantees on the solution quality, and ease of use. Arthur & Vassilvitskii (2007) show that k-means++ produces clusterings that are in expectation O(log k)-competitive with the optimal solution without any assumptions on the data. Furthermore, this theoretical guarantee already holds after the seeding step. The subsequent use of Lloyd’s algorithm to refine the solution only guarantees that the solution quality does not deteriorate and that it converges to a locally optimal solution in finite time. In contrast, using naive seeding such as selecting data points uniformly at random followed by Lloyd’s algorithm can produce solutions that are arbitrarily bad compared to the optimal solution. The drawback of k-means++ is that it does not scale easily to massive data sets since both its seeding step and every iteration of Lloyd’s algorithm require the computation of all pairwise distances between cluster centers and data points. Lloyd’s algorithm can be parallelized in the MapReduce framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010). However, the seeding step requires k inherently sequential passes through the data, making it impractical even for moderate k. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. This highlights the need for a fast and scalable seeding algorithm. Ideally, it should also retain the theoretical guarantees of k-means++ and provide equally competitive clusterings in practice. Such an approach was presented by Bachem et al. (2016) who propose to approximate k-means++ using a Markov chain Monte Carlo (MCMC) approach and provide a fast seeding algorithm. Under natural assumptions on the data generating distribution, the authors show that the computational complexity of k-means++ can be greatly decreased while retaining the same O(log k) guarantee on the solution quality. The drawback of this approach is that these assumptions may not hold and that checking their validity is expensive (see detailed discussion in Section 3). Our contributions. The goal of this paper is to provide fast and competitive seedings for k-Means clustering without prior assumptions on the data. As our key contributions, we (1) propose a simple yet fast seeding algorithm for k-Means, (2) show that it produces provably good clusterings without assumptions on the data, (3) provide stronger theoretical guarantees under assumptions on the data generating distribution, (4) extend the algorithm to arbitrary distance metrics and various divergence measures, (5) compare the algorithm to previous results, both theoretically and empirically, and (6) demonstrate its effectiveness on several real-world data sets. 2 Background and related work We will start by formalizing the problem and reviewing several recent results. Let X denote a set of n points in Rd. For any finite set C ⇢ Rd and x 2 X , we define d(x,C) 2 = min c2C kx ck22. The objective of k-Means clustering is to find a set C of k cluster centers in Rd such that the quantization error C(X ) is minimized, where C(X ) = X x2X d(x,C) 2 . We denote the optimal quantization error with k centers by k OPT (X ), the mean of X by µ(X ), and the variance of X by Var(X ) = Px2X d(x, µ(X ))2. We note that 1OPT (X ) = Var(X ). D2-sampling. Given a set of centers C, the D2-sampling strategy, as the name suggests, is to sample each point x 2 X with probability proportional to the squared distance to the selected centers, p(x | C) = d(x,C) 2 P x02X d(x 0 , C) 2 . (1) The seeding step of k-means++ builds upon D 2 -sampling: It first samples an initial center uniformly at random. Then, k 1 additional centers are sequentially added to the previously sampled centers using D 2 -sampling. The resulting computational complexity is ⇥(nkd), as for each x 2 X the distance d(x,C) 2 in (1) needs to be updated whenever a center is added to C. Metropolis-Hastings. The Metropolis-Hastings algorithm (Hastings, 1970) is a MCMC method for sampling from a probability distribution p(x) whose density is known only up to constants. Consider the following variant that uses an independent proposal distribution q(x) to build a Markov chain: Start with an arbitrary initial state x1 and in each iteration j 2 [2, . . . ,m] sample a candidate yj using q(x). Then, either accept this candidate (i.e., xj = yj) with probability ⇡(xj 1, yj) = min ✓ p(yj) p(xj 1) q(xj 1) q(yj) , 1 ◆ (2) or reject it otherwise (i.e., xj = xj 1). The stationary distribution of this Markov chain is p(x). Hence, for m sufficiently large, the distribution of xm is approximately p(x). Approximation using MCMC (K-MC2). Bachem et al. 
(2016) propose to speed up k-means++ by replacing the exact D2-sampling in (1) with a fast approximation based on MCMC sampling. In each iteration j 2 [2, 3, . . . , k], one constructs a Markov chain of length m using the Metropolis-Hasting algorithm with an independent and uniform proposal distribution q(x) = 1/n. The key advantage is that the acceptance probability in (2) only depends on d(yj , C) 2 and d(xj 1, C) 2 since min ✓ p(yj) p(xj 1) q(xj 1) q(yj) , 1 ◆ = min ✓ d(yj , C) 2 d(xj 1, C)2 , 1 ◆ . Critically, in each of the k 1 iterations, the algorithm does not require a full pass through the data, but only needs to compute the distances between m points and up to k 1 centers. As a consequence, the complexity of K-MC 2 is O mk2d compared to O(nkd) for k-means++ seeding. To bound the quality of the solutions produced by K-MC 2 , Bachem et al. (2016) analyze the mixing time of the described Markov chains. To this end, the authors define the two data-dependent quantities: ↵(X ) = max x2X d(x, µ(X ))2P x02X d(x 0 , µ(X ))2 , and (X ) = 1 OPT (X ) k OPT (X ) . (3) In order to bound each term, the authors assume that the data is generated i.i.d. from a distribution F and impose two conditions on F . First, they assume that F exhibits exponential tails and prove that in this case ↵(X ) 2 O log2 n with high probability. Second, they assume that “F is approximately uniform on a hypersphere”. This in turn implies that (X ) 2 O(k) with high probability. Under these assumptions, the authors prove that the solution generated by K-MC 2 is in expectation O(log k)competitive with the optimal solution if m 2 ⇥ k log2 n log k . In this case, the total computational complexity of K-MC 2 is O k3d log2 n log k which is sublinear in the number of data points. Other related work. A survey on seeding methods for k-Means was provided by Celebi et al. (2013). D 2 -sampling and k-means++ have been extensively studied in the literature. Previous work was primarily focused on related algorithms (Arthur & Vassilvitskii, 2007; Ostrovsky et al., 2006; Jaiswal et al., 2014, 2015), its theoretical properties (Ailon et al., 2009; Aggarwal et al., 2009) and bad instances (Arthur & Vassilvitskii, 2007; Brunsch & Röglin, 2011). As such, these results are complementary to the ones presented in this paper. An alternative approach to scalable seeding was investigated by Bahmani et al. (2012). The authors propose the k-meansk algorithm that retains the same O(log k) guarantee in expectation as k-means++. k-meansk reduces the number of sequential passes through the data to O(log n) by oversampling cluster centers in each of the rounds. While this allows one to parallelize each of the O(log n) rounds, it also increases the total computational complexity from O(nkd) to O(nkd log n). This method is feasible if substantial computational resources are available in the form of a cluster. Our approach, on the other hand, has an orthogonal use case: It aims to efficiently approximate k-means++ seeding with a substantially lower complexity. 3 Assumption-free K-MC2 Building on the MCMC strategy introduced by Bachem et al. (2016), we propose an algorithm which addresses the drawbacks of the K-MC 2 algorithm, namely: (1) The theoretical results of K-MC 2 hold only if the data is drawn independently from a distribution satisfying the assumptions stated in Section 2. For example, the results do not extend to heavytailed distributions which are often observed in real world data. 
(2) Verifying the assumptions, which in turn imply the required chain length, is computationally hard and potentially more expensive than running the algorithm. In fact, calculating ↵(X ) already requires two full passes through the data, while computing (X ) is NP-hard. (3) Theorem 2 of Bachem et al. (2016) does not characterize the tradeoff between m and the expected solution quality: It is only valid for the specific choice of chain length m = ⇥ k log 2 n log k . As a consequence, if the assumptions do not hold, we obtain no theoretical guarantee with regards to the solution quality. Furthermore, the constants in Theorem 2 are not known and may be large. Our approach addresses these shortcomings using three key elements. Firstly, we provide a proposal distribution that renders the assumption on ↵(X ) obsolete. Secondly, a novel theoretic analysis allows us to obtain theoretical guarantees on the solution quality even without assumptions on (X ). Finally, our results characterize the tradeoff between increasing the chain length m and improving the expected solution quality. Algorithm 1 ASSUMPTION-FREE K-MC2(AFK-MC2) Require: Data set X , # of centers k, chain length m // Preprocessing step 1: c1 Point uniformly sampled from X 2: for all x 2 X do 3: q(x) 12 d(x, c1)2/ P x02X d(x 0 , c1) 2 + 1 2n // Main loop 4: C1 {c1} 5: for i = 2, 3, . . . , k do 6: x Point sampled from X using q(x) 7: dx d(x,Ci 1)2 8: for j = 2, 3, . . . ,m do 9: y Point sampled from X using q(y) 10: dy d(y, Ci 1)2 11: if dyq(x)d x q(y) > Unif(0, 1) then x y, dx dy 12: Ci Ci 1 [ {x} 13: return Ck Proposal distribution. We argue that the choice of the proposal distribution is critical. Intuitively, the uniform distribution can be a very bad choice if, in any iteration, the true D 2 -sampling distribution is “highly” nonuniform. We suggest the following proposal distribution: We first sample a center c1 2 X uniformly at random and define for all x 2 X the nonuniform proposal q(x | c1) = 1 2 d(x, c1) 2 P x02X d(x 0 , c1) 2 | {z } (A) + 1 2 1 |X ||{z} (B) . (4) The term (A) is the true D 2 -sampling distribution with regards to the first center c1. For any data set, it ensures that we start with the best possible proposal distribution in the second iteration. We will show that this proposal is sufficient even for later iterations, rendering any assumptions on ↵ obsolete. The term (B) regularizes the proposal distribution and ensures that the mixing time of K-MC 2 is always matched up to a factor of two. Algorithm. Algorithm 1 details the proposed fast seeding algorithm ASSUMPTION-FREE K-MC2. In the preprocessing step, it first samples an initial center c1 uniformly at random and then computes the proposal distribution q(· | c1). In the main loop, it then uses independent Markov chains of length m to sample centers in each of the k 1 iterations. The complexity of the main loop is O mk2d . The preprocessing step of ASSUMPTION-FREE K-MC 2 requires a single pass through the data to compute the proposal q(· | c1). There are several reasons why this additional complexity of O(nd) is not an issue in practice: (1) The preprocessing step only requires a single pass through the data compared to k passes for the seeding of k-means++. (2) It is easily parallelized. (3) Given random access to the data, the proposal distribution can be calculated online when saving or copying the data. (4) As we will see in Section 4, the effort spent in the preprocessing step pays off: It often allows for shorter Markov chains in the main loop. 
(5) Computing ↵(X ) to verify the first assumption of K-MC 2 is already more expensive than the preprocessing step of ASSUMPTION-FREE K-MC 2 . Theorem 1. Let ✏ 2 (0, 1) and k 2 N. Let X be any set of n points in Rd and C be the output of Algorithm 1 with m = 1 + 8✏ log 4k ✏ . Then, it holds that E [ C(X )] 8(log2 k + 2) kOPT (X ) + ✏Var(X ). The computational complexity of the preprocessing step is O(nd) and the computational complexity of the main loop is O 1✏k2d log k✏ . This result shows that ASSUMPTION-FREE K-MC 2 produces provably good clusterings for arbitrary data sets without assumptions. The guarantee consists of two terms: The first term, i.e., 8(log2 k + 2) k OPT (X ), is the theoretical guarantee of k-means++. The second term, ✏Var(X ), quantifies the potential additional error due to the approximation. The variance is a natural notion as the mean is the optimal quantizer for k = 1. Intuitively, the second term may be interpreted as a scale-invariant and additive approximation error. Theorem 1 directly characterizes the tradeoff between improving the solution quality and the resulting increase in computational complexity. As m is increased, the solution quality converges to the theoretical guarantee of k-means++. At the same time, even for smaller chain lengths m, we obtain a provable bound on the solution quality. In contrast, the guarantee of K-MC 2 on the solution quality only holds for a specific choice of m. For completeness, ASSUMPTION-FREE K-MC 2 may also be analyzed under the assumptions made in Bachem et al. (2016). While for K-MC 2 the required chain length m is linear in ↵(X ), ASSUMPTION-FREE K-MC 2 does not require this assumption. In fact, we will see in Section 4 that this lack of dependence of ↵(X ) leads to a better empirical performance. If we assume (X ) 2 O(k), we obtain the following result similar to the one of K-MC 2 (albeit with a shorter chain length m). Corollary 1. Let k 2 N and X be a set of n points in Rd satisfying (X ) 2 O(k). Let C be the output of Algorithm 1 with m = ⇥(k log k). Then it holds that E [ C(X )] 8(log2 k + 3) kOPT (X ). The computational complexity of the preprocessing is O(nd) and the computational complexity of the main loop is O k3d log k . 3.1 Proof sketch for Theorem 1 In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain approximates one iteration of exact D 2 -sampling. Then, we analyze how the approximation error accumulates across iterations and provide a bound on the expected solution quality. For the first step, consider any set C ✓ X of previously sampled centers. Let c1 2 C denote the first sampled center that was used to construct the proposal distribution q(x | c1) in (4). In a single iteration, we would ideally sample a new center x 2 X using D2-sampling, i.e., from p(x | C) as defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center x 2 X as the next cluster center. We denote by p̃ c1 m(x | C) the implied probability of sampling a point x 2 X using this Markov chain of length m. The following result shows that in any iteration either C is ✏1-competitive compared to c1 or the Markov chain approximates D 2 -sampling well in terms of total variation distance 1 . Lemma 1. Let ✏1, ✏2 2 (0, 1) and c1 2 X . Consider any set C ✓ X with c1 2 C. 
For m 1 + 2 ✏1 log 1 ✏2 , at least one of the following holds: (i) C(X ) < ✏1 c1(X ), or (ii) kp(· | C) p̃c1m(· | C)kTV ✏2. In the second step, we bound the expected solution quality of Algorithm 1 based on Lemma 1. While the full proof requires careful propagation of errors across iterations and a corresponding inductive argument, the intuition is based on distinguishing between two possible cases of sampled solutions. First, consider the realizations of the solution C that are ✏1-competitive compared to c1. By definition, C(X ) < ✏1 c1(X ). Furthermore, the expected solution quality of these realizations can be bounded by 2✏1 Var(X ) since c1 is chosen uniformly at random and hence in expectation c1(X ) 2Var(X ). Second, consider the realizations that are not ✏1-competitive compared to c1. Since the quantization error is non-increasing in sampled centers, Lemma 1 implies that all k 1 Markov chains result in a good approximation of the corresponding D 2 -sampling. Intuitively, this implies that the approximation error in terms of total variation distance across all k 1 iterations is at most ✏2(k 1). Informally, the expected solution quality is thus bounded with probability 1 ✏2(k 1) by the expected quality of k-means++ and with probability ✏2(k 1) by c1(X ). Theorem 1 can then be proven by setting ✏1 = ✏/4 and ✏2 = ✏/4k and choosing m sufficiently large. 1 Let ⌦ be a finite sample space on which two probability distributions p and q are defined. The total variation distance kp qkTV between p and q is given by 1 2 P x2⌦ |p(x) q(x)|. 3.2 Extension to other clustering problems While we only consider k-Means clustering and the Euclidean distance in this paper, the results are more general. They can be directly applied, by transforming the data, to any metric space for which there exists a global isometry on Euclidean spaces. Examples would be the Mahalanobis distance and Generalized Symmetrized Bregman divergences (Acharyya et al., 2013). The results also apply to arbitrary distance measures (albeit with different constants) as D 2 -sampling can be generalized to arbitrary distance measures (Arthur & Vassilvitskii, 2007). However, Var(X ) needs to be replaced by 1 OPT (X ) in Theorem 1 since the mean may not be the optimal quantizer (for k = 1) for a different distance metric. The proposed algorithm can further be extended to different potential functions of the form k · kl and used to approximate the corresponding Dl-sampling (Arthur & Vassilvitskii, 2007), again with different constants. Similarly, the results also apply to bregman++ (Ackermann & Blömer, 2010) which provides provably competitive solutions for clustering with a broad class of Bregman divergences (including the KL-divergence and Itakura-Saito distance). 4 Experimental results In this section 2 , we empirically validate our theoretical results and compare the proposed algorithm ASSUMPTION-FREE K-MC 2 (AFK-MC 2 ) to three alternative seeding strategies: (1) RANDOM, a “naive” baseline that samples k centers from X uniformly at random, (2) the full seeding step of k-means++, and (3) K-MC 2 . For both ASSUMPTION-FREE K-MC 2 and K-MC 2 , we consider the different chain lengths m 2 {1, 2, 5, 10, 20, 50, 100, 150, 200}. Table 1 shows the six data sets used in the experiments with their corresponding values for k. We choose an experimental setup similar to Bachem et al. 
(2016): For half of the data sets, we both train the algorithm and evaluate the corresponding solution on the full data set (denoted by T in the EVAL column of Table 1). This corresponds to the classical k-Means setting. In practice, however, one is often also interested in the generalization error. For the other half of the data sets, we retain 250,000 data points as the holdout set for the evaluation (denoted by H in the EVAL column of Table 1). For all methods, we record the solution quality (either on the full data set or the holdout set) and measure the number of distance evaluations needed to run the algorithm. For ASSUMPTION-FREE K-MC² this includes both the preprocessing and the main loop. We run every algorithm 200 times with different random seeds and average the results. We further compute and display 95% confidence intervals for the solution quality.

[2] An implementation of ASSUMPTION-FREE K-MC² has been released at http://olivierbachem.ch.

Discussion. Figure 1 shows the expected quantization error for the two baselines, RANDOM and k-means++, and for the MCMC methods with different chain lengths m. As expected, the seeding step of k-means++ strongly outperforms RANDOM on all data sets. As the chain length m increases, the quality of solutions produced by both ASSUMPTION-FREE K-MC² and K-MC² quickly converges to that of k-means++ seeding. On all data sets except WEB, ASSUMPTION-FREE K-MC² starts with a lower initial error due to the improved proposal distribution and outperforms K-MC² for any given chain length m. For WEB, both algorithms exhibit approximately the same performance. This is expected as α(X) of WEB is very low (see Table 1). Hence, there is only a minor difference between the nonuniform proposal of ASSUMPTION-FREE K-MC² and the uniform proposal of K-MC². In fact, one of the key advantages of ASSUMPTION-FREE K-MC² is that its proposal adapts to the data set at hand.

As discussed in Section 3, ASSUMPTION-FREE K-MC² requires an additional preprocessing step to compute the nonuniform proposal. Figure 2 shows the expected solution quality in relation to the total computational complexity in terms of the number of distance evaluations. Both K-MC² and ASSUMPTION-FREE K-MC² generate solutions that are competitive with those produced by the seeding step of k-means++. At the same time, they do this at a fraction of the computational cost. Despite the preprocessing, ASSUMPTION-FREE K-MC² clearly outperforms K-MC² on the data sets with large values for α(X) (CSN, KDD and SONG). The additional effort of computing the nonuniform proposal is compensated by a substantially lower expected quantization error for a given chain size. For the other data sets, ASSUMPTION-FREE K-MC² is initially disadvantaged by the cost of computing the proposal distribution. However, as m increases and more time is spent computing the Markov chains, it either outperforms K-MC² (RNA and SUSY) or matches its performance (WEB).

Table 3 details the practical significance of the proposed algorithm. The results indicate that in practice it is sufficient to run ASSUMPTION-FREE K-MC² with a chain length independent of n. Even with a small chain length, ASSUMPTION-FREE K-MC² produces competitive clusterings at a fraction of the computational cost of the seeding step of k-means++. For example, on CSN, ASSUMPTION-FREE K-MC² with m = 20 achieves a relative error of 1.45% and a speedup of 33.3×.
At the same time, K-MC² would have exhibited a substantial relative error of 65.34% while only obtaining a slightly better speedup of 40.0×.

5 Conclusion

In this paper, we propose ASSUMPTION-FREE K-MC², a simple and fast seeding algorithm for k-Means. In contrast to the previously introduced algorithm K-MC², it produces provably good clusterings even without assumptions on the data. As a key advantage, ASSUMPTION-FREE K-MC² allows one to provably trade off solution quality for a decreased computational effort. Extensive experiments illustrate the practical significance of the proposed algorithm: it obtains competitive clusterings at a fraction of the cost of k-means++ seeding, and it outperforms or matches its main competitor K-MC² on all considered data sets.

Acknowledgments

This research was partially supported by ERC StG 307036, a Google Ph.D. Fellowship and an IBM Ph.D. Fellowship.
1. What is the focus of the paper regarding clustering algorithms? 2. What are the strengths of the proposed approach, particularly in terms of its scalability and performance? 3. Are there any concerns or limitations regarding the algorithm's ability to handle large datasets? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any comparisons made between the proposed method and other existing works in the field?
Review
Review The authors propose an algorithm, scalable to large datasets, for finding initial cluster centers for k-means. The algorithm does not make a priori assumptions about the data, and its performance is demonstrated on several datasets. The paper is well written and provides both the theoretical proof and an evaluation of the performance using several datasets. I find the approach useful. In most cases, the performance is superior to that of K-MC^2.
NIPS
Title Fast and Provably Good Seedings for k-Means

Abstract Seeding – the task of finding initial cluster centers – is critical in obtaining high-quality clusterings for k-Means. However, k-means++ seeding, the state-of-the-art algorithm, does not scale well to massive datasets as it is inherently sequential and requires k full passes through the data. It was recently shown that Markov chain Monte Carlo sampling can be used to efficiently approximate the seeding step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces provably good clusterings even without assumptions on the data. Our analysis shows that the algorithm allows for a favourable trade-off between solution quality and computational cost, speeding up k-means++ seeding by up to several orders of magnitude. We validate our theoretical results in extensive experiments on a variety of real-world data sets.

1 Introduction

k-means++ (Arthur & Vassilvitskii, 2007) is one of the most widely used methods to solve k-Means clustering. The algorithm is simple and consists of two steps: in the seeding step, initial cluster centers are found using an adaptive sampling scheme called D²-sampling. In the second step, this solution is refined using Lloyd's algorithm (Lloyd, 1982), the classic iterative algorithm for k-Means. The key advantages of k-means++ are its strong empirical performance, theoretical guarantees on the solution quality, and ease of use. Arthur & Vassilvitskii (2007) show that k-means++ produces clusterings that are in expectation O(log k)-competitive with the optimal solution, without any assumptions on the data. Furthermore, this theoretical guarantee already holds after the seeding step. The subsequent use of Lloyd's algorithm to refine the solution only guarantees that the solution quality does not deteriorate and that it converges to a locally optimal solution in finite time. In contrast, using naive seeding such as selecting data points uniformly at random followed by Lloyd's algorithm can produce solutions that are arbitrarily bad compared to the optimal solution.

The drawback of k-means++ is that it does not scale easily to massive data sets since both its seeding step and every iteration of Lloyd's algorithm require the computation of all pairwise distances between cluster centers and data points. Lloyd's algorithm can be parallelized in the MapReduce framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010). However, the seeding step requires k inherently sequential passes through the data, making it impractical even for moderate k.
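To make the seeding step concrete, the following is a minimal NumPy sketch of k-means++ seeding via exact D²-sampling; function and variable names are ours, and it illustrates why the procedure needs k full, inherently sequential passes over the data:

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng=np.random.default_rng()):
    """Seeding step of k-means++ via exact D^2-sampling on an (n, d) matrix X."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]                 # first center: uniform at random
    d2 = ((X - centers[0]) ** 2).sum(axis=1)       # d(x, C)^2 for every point
    for _ in range(k - 1):
        p = d2 / d2.sum()                          # D^2-sampling distribution
        idx = rng.choice(n, p=p)
        centers.append(X[idx])
        d2 = np.minimum(d2, ((X - X[idx]) ** 2).sum(axis=1))  # full pass per center
    return np.stack(centers)
```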
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

This highlights the need for a fast and scalable seeding algorithm. Ideally, it should also retain the theoretical guarantees of k-means++ and provide equally competitive clusterings in practice. Such an approach was presented by Bachem et al. (2016) who propose to approximate k-means++ using a Markov chain Monte Carlo (MCMC) approach and provide a fast seeding algorithm. Under natural assumptions on the data generating distribution, the authors show that the computational complexity of k-means++ can be greatly decreased while retaining the same O(log k) guarantee on the solution quality. The drawback of this approach is that these assumptions may not hold and that checking their validity is expensive (see detailed discussion in Section 3).

Our contributions. The goal of this paper is to provide fast and competitive seedings for k-Means clustering without prior assumptions on the data. As our key contributions, we (1) propose a simple yet fast seeding algorithm for k-Means, (2) show that it produces provably good clusterings without assumptions on the data, (3) provide stronger theoretical guarantees under assumptions on the data generating distribution, (4) extend the algorithm to arbitrary distance metrics and various divergence measures, (5) compare the algorithm to previous results, both theoretically and empirically, and (6) demonstrate its effectiveness on several real-world data sets.

2 Background and related work

We will start by formalizing the problem and reviewing several recent results. Let X denote a set of n points in ℝ^d. For any finite set C ⊂ ℝ^d and x ∈ X, we define d(x, C)² = min_{c∈C} ‖x − c‖₂². The objective of k-Means clustering is to find a set C of k cluster centers in ℝ^d such that the quantization error φ_C(X) is minimized, where φ_C(X) = Σ_{x∈X} d(x, C)². We denote the optimal quantization error with k centers by φ_OPT^k(X), the mean of X by μ(X), and the variance of X by Var(X) = Σ_{x∈X} d(x, μ(X))². We note that φ_OPT^1(X) = Var(X).

D²-sampling. Given a set of centers C, the D²-sampling strategy, as the name suggests, is to sample each point x ∈ X with probability proportional to its squared distance to the selected centers,

p(x | C) = d(x, C)² / Σ_{x′∈X} d(x′, C)².   (1)

The seeding step of k-means++ builds upon D²-sampling: it first samples an initial center uniformly at random. Then, k − 1 additional centers are sequentially added to the previously sampled centers using D²-sampling. The resulting computational complexity is Θ(nkd), as for each x ∈ X the distance d(x, C)² in (1) needs to be updated whenever a center is added to C.

Metropolis-Hastings. The Metropolis-Hastings algorithm (Hastings, 1970) is an MCMC method for sampling from a probability distribution p(x) whose density is known only up to constants. Consider the following variant that uses an independent proposal distribution q(x) to build a Markov chain: start with an arbitrary initial state x₁ and in each iteration j ∈ [2, …, m] sample a candidate y_j using q(x). Then, either accept this candidate (i.e., x_j = y_j) with probability

π(x_{j−1}, y_j) = min( [p(y_j) / p(x_{j−1})] · [q(x_{j−1}) / q(y_j)], 1 )   (2)

or reject it otherwise (i.e., x_j = x_{j−1}). The stationary distribution of this Markov chain is p(x). Hence, for m sufficiently large, the distribution of x_m is approximately p(x).

Approximation using MCMC (K-MC²). Bachem et al.
(2016) propose to speed up k-means++ by replacing the exact D²-sampling in (1) with a fast approximation based on MCMC sampling. In each iteration j ∈ [2, 3, …, k], one constructs a Markov chain of length m using the Metropolis-Hastings algorithm with an independent and uniform proposal distribution q(x) = 1/n. The key advantage is that the acceptance probability in (2) only depends on d(y_j, C)² and d(x_{j−1}, C)² since

min( [p(y_j) / p(x_{j−1})] · [q(x_{j−1}) / q(y_j)], 1 ) = min( d(y_j, C)² / d(x_{j−1}, C)², 1 ).

Critically, in each of the k − 1 iterations, the algorithm does not require a full pass through the data, but only needs to compute the distances between m points and up to k − 1 centers. As a consequence, the complexity of K-MC² is O(m k² d) compared to O(nkd) for k-means++ seeding.

To bound the quality of the solutions produced by K-MC², Bachem et al. (2016) analyze the mixing time of the described Markov chains. To this end, the authors define the two data-dependent quantities:

α(X) = max_{x∈X} d(x, μ(X))² / Σ_{x′∈X} d(x′, μ(X))², and β(X) = φ_OPT^1(X) / φ_OPT^k(X).   (3)

In order to bound each term, the authors assume that the data is generated i.i.d. from a distribution F and impose two conditions on F. First, they assume that F exhibits exponential tails and prove that in this case α(X) ∈ O(log² n) with high probability. Second, they assume that “F is approximately uniform on a hypersphere”. This in turn implies that β(X) ∈ O(k) with high probability. Under these assumptions, the authors prove that the solution generated by K-MC² is in expectation O(log k)-competitive with the optimal solution if m ∈ Θ(k log² n log k). In this case, the total computational complexity of K-MC² is O(k³ d log² n log k), which is sublinear in the number of data points.

Other related work. A survey on seeding methods for k-Means was provided by Celebi et al. (2013). D²-sampling and k-means++ have been extensively studied in the literature. Previous work was primarily focused on related algorithms (Arthur & Vassilvitskii, 2007; Ostrovsky et al., 2006; Jaiswal et al., 2014, 2015), its theoretical properties (Ailon et al., 2009; Aggarwal et al., 2009) and bad instances (Arthur & Vassilvitskii, 2007; Brunsch & Röglin, 2011). As such, these results are complementary to the ones presented in this paper.

An alternative approach to scalable seeding was investigated by Bahmani et al. (2012). The authors propose the k-means‖ algorithm that retains the same O(log k) guarantee in expectation as k-means++. k-means‖ reduces the number of sequential passes through the data to O(log n) by oversampling cluster centers in each of the rounds. While this allows one to parallelize each of the O(log n) rounds, it also increases the total computational complexity from O(nkd) to O(nkd log n). This method is feasible if substantial computational resources are available in the form of a cluster. Our approach, on the other hand, has an orthogonal use case: it aims to efficiently approximate k-means++ seeding with a substantially lower complexity.

3 Assumption-free K-MC²

Building on the MCMC strategy introduced by Bachem et al. (2016), we propose an algorithm which addresses the drawbacks of the K-MC² algorithm, namely:

(1) The theoretical results of K-MC² hold only if the data is drawn independently from a distribution satisfying the assumptions stated in Section 2. For example, the results do not extend to heavy-tailed distributions, which are often observed in real-world data.
(2) Verifying the assumptions, which in turn imply the required chain length, is computationally hard and potentially more expensive than running the algorithm. In fact, calculating α(X) already requires two full passes through the data, while computing β(X) is NP-hard.

(3) Theorem 2 of Bachem et al. (2016) does not characterize the tradeoff between m and the expected solution quality: it is only valid for the specific choice of chain length m = Θ(k log² n log k). As a consequence, if the assumptions do not hold, we obtain no theoretical guarantee with regards to the solution quality. Furthermore, the constants in Theorem 2 are not known and may be large.

Our approach addresses these shortcomings using three key elements. Firstly, we provide a proposal distribution that renders the assumption on α(X) obsolete. Secondly, a novel theoretical analysis allows us to obtain theoretical guarantees on the solution quality even without assumptions on β(X). Finally, our results characterize the tradeoff between increasing the chain length m and improving the expected solution quality.

Algorithm 1 ASSUMPTION-FREE K-MC² (AFK-MC²)
Require: Data set X, number of centers k, chain length m
// Preprocessing step
1: c₁ ← point uniformly sampled from X
2: for all x ∈ X do
3:   q(x) ← ½ · d(x, c₁)² / Σ_{x′∈X} d(x′, c₁)² + 1/(2n)
// Main loop
4: C₁ ← {c₁}
5: for i = 2, 3, …, k do
6:   x ← point sampled from X using q(x)
7:   d_x ← d(x, C_{i−1})²
8:   for j = 2, 3, …, m do
9:     y ← point sampled from X using q(y)
10:    d_y ← d(y, C_{i−1})²
11:    if d_y q(x) / (d_x q(y)) > Unif(0, 1) then x ← y, d_x ← d_y
12:  C_i ← C_{i−1} ∪ {x}
13: return C_k

Proposal distribution. We argue that the choice of the proposal distribution is critical. Intuitively, the uniform distribution can be a very bad choice if, in any iteration, the true D²-sampling distribution is “highly” nonuniform. We suggest the following proposal distribution: we first sample a center c₁ ∈ X uniformly at random and define for all x ∈ X the nonuniform proposal

q(x | c₁) = ½ · d(x, c₁)² / Σ_{x′∈X} d(x′, c₁)²  [term (A)]  +  ½ · 1/|X|  [term (B)].   (4)

The term (A) is the true D²-sampling distribution with regards to the first center c₁. For any data set, it ensures that we start with the best possible proposal distribution in the second iteration. We will show that this proposal is sufficient even for later iterations, rendering any assumptions on α(X) obsolete. The term (B) regularizes the proposal distribution and ensures that the mixing time of K-MC² is always matched up to a factor of two.

Algorithm. Algorithm 1 details the proposed fast seeding algorithm ASSUMPTION-FREE K-MC². In the preprocessing step, it first samples an initial center c₁ uniformly at random and then computes the proposal distribution q(· | c₁). In the main loop, it then uses independent Markov chains of length m to sample centers in each of the k − 1 iterations. The complexity of the main loop is O(m k² d).

The preprocessing step of ASSUMPTION-FREE K-MC² requires a single pass through the data to compute the proposal q(· | c₁). There are several reasons why this additional complexity of O(nd) is not an issue in practice: (1) The preprocessing step only requires a single pass through the data compared to k passes for the seeding of k-means++. (2) It is easily parallelized. (3) Given random access to the data, the proposal distribution can be calculated online when saving or copying the data. (4) As we will see in Section 4, the effort spent in the preprocessing step pays off: it often allows for shorter Markov chains in the main loop. (5) Computing α(X) to verify the first assumption of K-MC² is already more expensive than the preprocessing step of ASSUMPTION-FREE K-MC².
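The following NumPy sketch is a direct transcription of Algorithm 1 under the assumption of an n × d data matrix; names are ours, and the division in pseudocode line 11 is rearranged to avoid dividing by zero. Per Theorem 1 below, choosing m = 1 + (8/ε) log(4k/ε) yields the additive ε·Var(X) guarantee.

```python
import numpy as np

def afk_mc2(X, k, m, rng=np.random.default_rng()):
    # Preprocessing step: one pass to build the nonuniform proposal, eq. (4)
    n = X.shape[0]
    c1 = rng.integers(n)
    d2_c1 = ((X - X[c1]) ** 2).sum(axis=1)
    q = 0.5 * d2_c1 / d2_c1.sum() + 0.5 / n
    # Main loop: k - 1 independent Markov chains of length m
    centers = [c1]
    d2 = d2_c1.copy()                     # d(x, C)^2, updated as centers are added
    for _ in range(k - 1):
        x = rng.choice(n, p=q)
        dx = d2[x]
        for _ in range(m - 1):
            y = rng.choice(n, p=q)
            dy = d2[y]
            # Metropolis-Hastings acceptance, division-free form of
            # "if d_y q(x) / (d_x q(y)) > Unif(0, 1)"
            if dy * q[x] > rng.random() * dx * q[y]:
                x, dx = y, dy
        centers.append(x)
        d2 = np.minimum(d2, ((X - X[x]) ** 2).sum(axis=1))
    return X[np.asarray(centers)]
```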
Theorem 1. Let ε ∈ (0, 1) and k ∈ ℕ. Let X be any set of n points in ℝ^d and C be the output of Algorithm 1 with m = 1 + (8/ε) log(4k/ε). Then, it holds that

E[φ_C(X)] ≤ 8(log₂ k + 2) φ_OPT^k(X) + ε Var(X).

The computational complexity of the preprocessing step is O(nd) and the computational complexity of the main loop is O((1/ε) k² d log(k/ε)).

This result shows that ASSUMPTION-FREE K-MC² produces provably good clusterings for arbitrary data sets without assumptions. The guarantee consists of two terms: the first term, 8(log₂ k + 2) φ_OPT^k(X), is the theoretical guarantee of k-means++. The second term, ε Var(X), quantifies the potential additional error due to the approximation. The variance is a natural notion as the mean is the optimal quantizer for k = 1. Intuitively, the second term may be interpreted as a scale-invariant and additive approximation error. Theorem 1 directly characterizes the tradeoff between improving the solution quality and the resulting increase in computational complexity. As m is increased, the solution quality converges to the theoretical guarantee of k-means++. At the same time, even for smaller chain lengths m, we obtain a provable bound on the solution quality. In contrast, the guarantee of K-MC² on the solution quality only holds for a specific choice of m.

For completeness, ASSUMPTION-FREE K-MC² may also be analyzed under the assumptions made in Bachem et al. (2016). While for K-MC² the required chain length m is linear in α(X), ASSUMPTION-FREE K-MC² does not require this assumption. In fact, we will see in Section 4 that this lack of dependence on α(X) leads to a better empirical performance. If we assume β(X) ∈ O(k), we obtain the following result similar to the one of K-MC² (albeit with a shorter chain length m).

Corollary 1. Let k ∈ ℕ and X be a set of n points in ℝ^d satisfying β(X) ∈ O(k). Let C be the output of Algorithm 1 with m = Θ(k log k). Then it holds that

E[φ_C(X)] ≤ 8(log₂ k + 3) φ_OPT^k(X).

The computational complexity of the preprocessing is O(nd) and the computational complexity of the main loop is O(k³ d log k).

3.1 Proof sketch for Theorem 1

In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain approximates one iteration of exact D²-sampling. Then, we analyze how the approximation error accumulates across iterations and provide a bound on the expected solution quality.

For the first step, consider any set C ⊆ X of previously sampled centers. Let c₁ ∈ C denote the first sampled center that was used to construct the proposal distribution q(x | c₁) in (4). In a single iteration, we would ideally sample a new center x ∈ X using D²-sampling, i.e., from p(x | C) as defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center x ∈ X as the next cluster center. We denote by p̃_m^{c₁}(x | C) the implied probability of sampling a point x ∈ X using this Markov chain of length m. The following result shows that in any iteration either C is ε₁-competitive compared to c₁ or the Markov chain approximates D²-sampling well in terms of total variation distance.[1]

Lemma 1. Let ε₁, ε₂ ∈ (0, 1) and c₁ ∈ X. Consider any set C ⊆ X with c₁ ∈ C.
For m ≥ 1 + (2/ε₁) log(1/ε₂), at least one of the following holds:

(i) φ_C(X) < ε₁ φ_{c₁}(X), or (ii) ‖p(· | C) − p̃_m^{c₁}(· | C)‖_TV ≤ ε₂.

In the second step, we bound the expected solution quality of Algorithm 1 based on Lemma 1. While the full proof requires careful propagation of errors across iterations and a corresponding inductive argument, the intuition is based on distinguishing between two possible cases of sampled solutions. First, consider the realizations of the solution C that are ε₁-competitive compared to c₁. By definition, φ_C(X) < ε₁ φ_{c₁}(X). Furthermore, the expected solution quality of these realizations can be bounded by 2ε₁ Var(X), since c₁ is chosen uniformly at random and hence in expectation φ_{c₁}(X) ≤ 2 Var(X). Second, consider the realizations that are not ε₁-competitive compared to c₁. Since the quantization error is non-increasing in sampled centers, Lemma 1 implies that all k − 1 Markov chains result in a good approximation of the corresponding D²-sampling. Intuitively, this implies that the approximation error in terms of total variation distance across all k − 1 iterations is at most ε₂(k − 1). Informally, the expected solution quality is thus bounded with probability 1 − ε₂(k − 1) by the expected quality of k-means++ and with probability ε₂(k − 1) by φ_{c₁}(X). Theorem 1 can then be proven by setting ε₁ = ε/4 and ε₂ = ε/4k and choosing m sufficiently large.

[1] Let Ω be a finite sample space on which two probability distributions p and q are defined. The total variation distance ‖p − q‖_TV between p and q is given by ½ Σ_{x∈Ω} |p(x) − q(x)|.

3.2 Extension to other clustering problems

While we only consider k-Means clustering and the Euclidean distance in this paper, the results are more general. They can be directly applied, by transforming the data, to any metric space for which there exists a global isometry on Euclidean spaces. Examples would be the Mahalanobis distance and Generalized Symmetrized Bregman divergences (Acharyya et al., 2013). The results also apply to arbitrary distance measures (albeit with different constants), as D²-sampling can be generalized to arbitrary distance measures (Arthur & Vassilvitskii, 2007). However, Var(X) needs to be replaced by φ_OPT^1(X) in Theorem 1 since the mean may not be the optimal quantizer (for k = 1) for a different distance metric. The proposed algorithm can further be extended to different potential functions of the form ‖·‖^l and used to approximate the corresponding D^l-sampling (Arthur & Vassilvitskii, 2007), again with different constants. Similarly, the results also apply to bregman++ (Ackermann & Blömer, 2010), which provides provably competitive solutions for clustering with a broad class of Bregman divergences (including the KL-divergence and Itakura-Saito distance).

4 Experimental results

In this section,[2] we empirically validate our theoretical results and compare the proposed algorithm ASSUMPTION-FREE K-MC² (AFK-MC²) to three alternative seeding strategies: (1) RANDOM, a “naive” baseline that samples k centers from X uniformly at random, (2) the full seeding step of k-means++, and (3) K-MC². For both ASSUMPTION-FREE K-MC² and K-MC², we consider the different chain lengths m ∈ {1, 2, 5, 10, 20, 50, 100, 150, 200}. Table 1 shows the six data sets used in the experiments with their corresponding values for k. We choose an experimental setup similar to Bachem et al.
(2016): For half of the data sets, we both train the algorithm and evaluate the corresponding solution on the full data set (denoted by T in the EVAL column of Table 1). This corresponds to the classical k-Means setting. In practice, however, one is often also interested in the generalization error. For the other half of the data sets, we retain 250,000 data points as the holdout set for the evaluation (denoted by H in the EVAL column of Table 1). For all methods, we record the solution quality (either on the full data set or the holdout set) and measure the number of distance evaluations needed to run the algorithm. For ASSUMPTION-FREE K-MC² this includes both the preprocessing and the main loop. We run every algorithm 200 times with different random seeds and average the results. We further compute and display 95% confidence intervals for the solution quality.

[2] An implementation of ASSUMPTION-FREE K-MC² has been released at http://olivierbachem.ch.

Discussion. Figure 1 shows the expected quantization error for the two baselines, RANDOM and k-means++, and for the MCMC methods with different chain lengths m. As expected, the seeding step of k-means++ strongly outperforms RANDOM on all data sets. As the chain length m increases, the quality of solutions produced by both ASSUMPTION-FREE K-MC² and K-MC² quickly converges to that of k-means++ seeding. On all data sets except WEB, ASSUMPTION-FREE K-MC² starts with a lower initial error due to the improved proposal distribution and outperforms K-MC² for any given chain length m. For WEB, both algorithms exhibit approximately the same performance. This is expected as α(X) of WEB is very low (see Table 1). Hence, there is only a minor difference between the nonuniform proposal of ASSUMPTION-FREE K-MC² and the uniform proposal of K-MC². In fact, one of the key advantages of ASSUMPTION-FREE K-MC² is that its proposal adapts to the data set at hand.

As discussed in Section 3, ASSUMPTION-FREE K-MC² requires an additional preprocessing step to compute the nonuniform proposal. Figure 2 shows the expected solution quality in relation to the total computational complexity in terms of the number of distance evaluations. Both K-MC² and ASSUMPTION-FREE K-MC² generate solutions that are competitive with those produced by the seeding step of k-means++. At the same time, they do this at a fraction of the computational cost. Despite the preprocessing, ASSUMPTION-FREE K-MC² clearly outperforms K-MC² on the data sets with large values for α(X) (CSN, KDD and SONG). The additional effort of computing the nonuniform proposal is compensated by a substantially lower expected quantization error for a given chain size. For the other data sets, ASSUMPTION-FREE K-MC² is initially disadvantaged by the cost of computing the proposal distribution. However, as m increases and more time is spent computing the Markov chains, it either outperforms K-MC² (RNA and SUSY) or matches its performance (WEB).

Table 3 details the practical significance of the proposed algorithm. The results indicate that in practice it is sufficient to run ASSUMPTION-FREE K-MC² with a chain length independent of n. Even with a small chain length, ASSUMPTION-FREE K-MC² produces competitive clusterings at a fraction of the computational cost of the seeding step of k-means++. For example, on CSN, ASSUMPTION-FREE K-MC² with m = 20 achieves a relative error of 1.45% and a speedup of 33.3×.
At the same time, K-MC² would have exhibited a substantial relative error of 65.34% while only obtaining a slightly better speedup of 40.0×.

5 Conclusion

In this paper, we propose ASSUMPTION-FREE K-MC², a simple and fast seeding algorithm for k-Means. In contrast to the previously introduced algorithm K-MC², it produces provably good clusterings even without assumptions on the data. As a key advantage, ASSUMPTION-FREE K-MC² allows one to provably trade off solution quality for a decreased computational effort. Extensive experiments illustrate the practical significance of the proposed algorithm: it obtains competitive clusterings at a fraction of the cost of k-means++ seeding, and it outperforms or matches its main competitor K-MC² on all considered data sets.

Acknowledgments

This research was partially supported by ERC StG 307036, a Google Ph.D. Fellowship and an IBM Ph.D. Fellowship.
1. What is the focus of the paper regarding k-Means clustering? 2. What is the novel aspect of the proposed algorithm compared to prior works like Bachem et al. (2016)? 3. How does the reviewer assess the quality of the solutions generated by the algorithm? 4. What are the strengths and weaknesses of the experimental results presented in the paper? 5. Are there any suggestions for improving the accuracy or efficiency of the algorithm?
Review
Review An algorithm for the seeding step of k-Means is proposed for the case of massive datasets. It is mainly based on a previous work (Bachem et al. 2016), which constructs a Markov chain to sample centers. The main novelty is the definition of the distribution used to construct the Markov chain. The quality of solutions is bounded; this constitutes the statement of a theorem. A study is carried out with real data sets. The results are compared with those obtained with three other seeding strategies. Competitive clusterings are obtained, and the computational cost can be considerably reduced, or similar, according to the considered data sets and seeding strategies. The paper is quite interesting and clear. About the experimental results, it would be necessary to describe the data sets in order to explain the meaning of the different clusters. On line 236, "distance evaluations" should be replaced by "number of distance evaluations". In the computational complexity, the time required to sample a point using a distribution is not taken into consideration, whereas this time is certainly not negligible. I think the real computational time would be a better measure to compare the different seeding techniques.
NIPS
Title Learn what matters: cross-domain imitation learning with task-relevant embeddings

Abstract We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent. Such cross-domain imitation learning is required to, for example, train an artificial agent from demonstrations of a human expert. We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge. We jointly train the learner agent's policy and learn a mapping between the learner and expert domains with adversarial training. We effect this by using a mutual information criterion to find an embedding of the expert's state space that contains task-relevant information and is invariant to domain specifics. This step significantly simplifies estimating the mapping between the learner and expert domains and hence facilitates end-to-end learning. We demonstrate successful transfer of policies between considerably different domains, without extra supervision such as additional demonstrations, and in situations where other methods fail.

36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Reinforcement learning (RL) has shown great success in diverse tasks and distinct domains [43, 2]; however, its performance hinges on defining precise reward functions. While rewards are straightforward to define in simple scenarios such as games and simulations, real-world scenarios are significantly more nuanced, especially when they involve interacting with humans. One possibility for overcoming the problem of reward misspecification is to learn policies from observations of expert behaviour, also known as imitation learning. Recent imitation learning algorithms rely on updating the learner agent's policy until the state occupancy of the learner matches that of the expert demonstrator [4], requiring the learner and expert to be in the same domain. Such a requirement rarely holds true in more realistic scenarios. Consider for example the case where a robot arm learns to move an apple onto a plate from demonstrations of a human performing this task. Here, both domains do inherently share structure (the apples and the plates have similar appearances) but are distinct (the morphologies, dynamics and appearances of the two arms are different). Enabling a learner agent to successfully perform a task from demonstrations that were generated by a different expert agent, which we refer to as a different domain even if the tasks are related, would widely broaden the possibilities to train artificial agents. This cross-domain imitation learning problem is seen as an important step towards value alignment, as it facilitates transferring behaviour from humans to artificial agents [32, Chapter 7].

This problem has only recently been considered by researchers in realistic settings. Due to its difficulty, previous work on cross-domain imitation learning either assumes the expert's and learner's domains to be almost identical [42, 17, 6], requires demonstrations of experts in multiple domains that are similar to the learner's [45, 44], or relies on the availability of demonstrations of proxy tasks in both domains [30, 18]. Designing such proxy tasks is a manual process that requires prior knowledge about both domains, since they have to be inherently similar to the target task to convey a relevant mapping between domains [18]. Fickinger et al.
[10] overcome the need for proxy tasks by directly comparing distributions in both domains, effectively addressing the same problem setting as ours. While very promising, its applicability is limited to short demonstrations and Euclidean spaces. We propose to jointly learn the learner policy and the mapping between the learner and expert state spaces, utilizing adversarial training. Unlike standard generative adversarial imitation learning [16, 39], we use domain-specific encoders for both the learner and expert. We therefore devise a mutual information criterion to find an expert encoder that preserves task-relevant information while discarding domain specifics irrelevant to the task. Note that in general, cross-domain imitation learning is an under-defined problem, as a unique optimal policy for the learner is not defined as part of the problem: for example, should a humanoid agent that imitates a cheetah crawl (imitating its gait) or walk (moving in the same direction)?

We evaluate our cross-domain imitation learning approach in different cross-embodiment imitation learning scenarios, comparing on relevant benchmarks, and find that our method robustly learns policies that clearly outperform the baselines. We conduct several ablation studies, in particular finding that we can control how much domain-specific information is transferred from the expert, effectively interpolating between mimicking the expert's behaviour as much as possible and finding novel policies that use different strategies to maximize the expert's reward. Our contributions are:

• We propose a mutual information criterion to find an embedding of the expert state which contains task-relevant information, while discarding domain specifics irrelevant to the task.
• We learn the mapping between the learner domain and the task-relevant embedding without additional proxy task demonstrations.
• We demonstrate training robust policies across diverse environments, and the ability to modulate how information flows between the learner and expert domains.

2 Related Work

Imitation learning considers the problem of finding an optimal policy for a learner agent from demonstrations generated by an expert agent, where inverse reinforcement learning (IRL) [1, 46] recovers a reward function under which the observed expert's behaviour is optimal. More recent works [16, 11, 39] define imitation learning as a distribution matching problem and use adversarial training [14] to directly find the learner's policy, without explicitly recovering the expert's reward.

Cross-domain imitation learning generalizes imitation learning to the case where the learner and expert are in different domains. Small mismatches between the domains, such as changes in viewpoint or gravitational force, or small variations of the dynamics, are addressed by [42, 12, 17, 28, 36, 8] and Bohez et al. [6]. To learn policies cross-domain in the presence of larger mismatches, such as different embodiments of the learner and the expert, previous works used demonstrations of proxy tasks to learn a mapping between the learner and expert domain, which is then used to find the learner's optimal policy [15, 23, 35, 30, 18], utilized a latent embedding of the environment state [45, 44], or assumed the reward signal to be given [34]. GWIL [10] does not rely on proxy tasks and minimizes the distance between the state-action probability distributions of both agents, which lie in different spaces [25].
This approach assumes Euclidean spaces and is computationally intractable when using longer demonstrations, which generally improve the performance of learning algorithms when available. Our approach obviates the need for proxy tasks, scales to detailed demonstrations of complex behaviours, and enables the control of how much domain-specific information is transferred to the learner domain.

In classical RL [26], where behaviour is learned from a given reward function, mutual information objectives are commonly used to find compact state representations that increase performance by discarding irrelevant information [29, 3, 37, 24, 22]. We propose to similarly learn a representation of the expert's state that contains task-relevant information while being invariant to domain specifics.

3 Background

Definitions. Following Kim et al. [18], we define a domain as a tuple (S, A, P, ζ), where S denotes the state space, A is the action space, P is the transition function, and ζ is the initial distribution over states. Given an action a ∈ A, the distribution over the next state is given by the transition function as P(s′ | s, a). An infinite-horizon Markov decision process (MDP) is defined by adding a reward function r : S × A → ℝ, which describes a specific task, and a discount factor γ ∈ [0, 1] to the domain tuple. We define the expert agent's MDP as M_E = (S_E, A_E, P_E, r_E, γ_E, ζ_E), and its policy as a map π_E : S_E → B(A_E), where B is the set of all probability measures on A_E. We define the learner MDP M_L and learner policy π_L analogously, except that the learner MDP has no reward function or discount factor. An expert trajectory is a sequence of states τ_E = {s⁰_E, s¹_E, …, sⁿ_E}, where n denotes the length of the trajectory. We denote by D_E = {τ_i} a set of such trajectories.

Problem Definition. The objective of cross-domain imitation learning is to find a policy π_L that optimally performs a task in the learner domain M_L, given demonstrations D_E in the expert domain M_E. In contrast to most prior work, we do not assume access to a dataset of proxy tasks (simple primitive skills in both domains that are similar to but different from the inference task) to be given. We do not assume access to the expert demonstration's actions, which may be non-trivial to obtain, e.g., when learning from videos or human demonstrations, and therefore consider the expert demonstrations to consist only of states.

Adversarial Imitation Learning from Observations. We first consider the equal-domain case in which both MDPs are equivalent, i.e., M_L = M_E, and assume that the expert agent's optimal policy π_E under r_E is known. Torabi et al. [39] define a solution to this problem as an extension of the standard imitation learning problem [16], by minimizing the divergence between the learner's state-transition distribution ρ_{π_L} and that of the expert ρ_{π_E}, as

argmin_{π_L} −H(π_L) + D_JS(ρ_{π_L}(s, s′) ‖ ρ_{π_E}(s, s′)) = RL ∘ IRL(π_E),   (1)

where D_JS is the Jensen-Shannon divergence and H(π_L) is the learner's policy entropy [46]. The state-transition distribution for a policy π is defined as

ρ_π(s_i, s_j) = Σ_a P(s_j | s_i, a) π(a | s_i) Σ_{t=0}^∞ γ^t P(s_t = s_i | π).   (2)

In particular, the expert's state-transition distribution ρ_{π_E} is estimated using the expert demonstrations D_E. The above objective (eq. 1) can also be derived as the composition of the IRL and RL problems, where r_E = IRL(π_E) denotes the solution to the inverse reinforcement learning problem from policy π_E and π_L = RL(r_E) denotes the solution to the RL problem with reward r_E.
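In practice, the Jensen-Shannon term in eq. (1) is estimated with a discriminator on state transitions, as in adversarial imitation from observations [39]. The following PyTorch sketch is our own simplification, not the paper's code; it shows such a transition discriminator and its cross-entropy loss, from which the learner's reward can be derived:

```python
import torch
import torch.nn as nn

class TransitionDiscriminator(nn.Module):
    """D(s, s') in (0, 1): probability that a transition comes from the learner."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, s, s_next):
        return torch.sigmoid(self.net(torch.cat([s, s_next], dim=-1)))

def discriminator_loss(D, s_l, s_l2, s_e, s_e2):
    # Train D towards 1 on learner transitions and 0 on expert transitions;
    # at its optimum, this cross-entropy estimates the JS divergence in eq. (1).
    bce = nn.BCELoss()
    return (bce(D(s_l, s_l2), torch.ones(s_l.shape[0], 1))
            + bce(D(s_e, s_e2), torch.zeros(s_e.shape[0], 1)))

# The learner is then rewarded for fooling D, e.g. r(s, s') = -log D(s, s').
```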
The IRL component, which recovers the reward function r : S × S → ℝ under which the expert's demonstrations are uniquely optimal[1] by finding a reward function that assigns high rewards to the expert policy and low rewards to other policies, is given as

IRL(π_E) = argmin_r ( max_{π_L} E_{π_L}[r(s, s′)] − E_{π_E}[r(s, s′)] ).

4 Unsupervised Imitation Learning Across Domains

We first introduce the cross-domain imitation learning problem before deriving an adversarial learning objective that allows the simultaneous training of the learner's policy and a mapping between the MDPs of the learner and expert. We then demonstrate how the cross-domain imitation learning problem can be significantly simplified by finding an embedding of the expert agent's state space that contains task-relevant information while discarding domain-specific aspects. Lastly, we introduce a time-invariance constraint to prevent degenerate mapping solutions. As our approach does not rely on additional demonstrations from proxy tasks, we refer to it as the unsupervised cross-domain imitation learning objective (UDIL).

4.1 Cross-domain adversarial imitation learning

We consider the case in which the expert's and agent's MDPs are different, i.e., M_L ≠ M_E, such as when learner and expert are of different embodiments or are in different environments. Kim et al. [18] show that, if there exists an injective mapping g that reduces the learner MDP M_L to the expert MDP M_E, then a policy π_L that is optimal in M_L is also optimal in M_E. Since we do not assume extra supervision from the expert's actions, we define the mapping function between the learner and expert MDPs, g : S_L → S_E, as a mapping between the respective state spaces. We accordingly define the cross-domain adversarial imitation objective as

argmin_{π_L} −H(π_L) + D_JS(ρ_{π_L}(g(s_L), g(s′_L)) ‖ ρ_{π_E}(s_E, s′_E)).   (3)

Applying the mapping g to the learner agent's state allows us to compare the learner's and expert's distributions, even though they are defined over different state spaces.

4.2 Reducing the expert's state dimension

The full state of the expert domain s_E generally contains information that is specific to the task which the expert is demonstrating, defined by the expert's reward function r_E, as well as information that is specific to the domain but irrelevant to the task itself. We simplify the cross-domain imitation learning problem by reducing the expert agent's state space to a task-relevant embedding that is invariant to domain specifics. We assume that the learner state s is multi-dimensional and recall the IRL component of the adversarial imitation problem (eq. 1), which finds the reward function under which the expert's behavior is optimal. We define a second mapping function f : S_E → Z that maps the expert states s_E ∈ S_E to lower-dimensional representations z ∈ Z, with |Z| ≪ |S_E|. When f is chosen as a dimension-reduction operation that discards state dimensions of which the reward is independent, we can write the IRL component of eq. 1 as a function of only the embedded representation z (proof in app. 8.1.1),[2] as

IRL(π_E) = argmin_r ( max_{π_L} E_{π_L}[r(z, z′)] − E_{π_E}[r(z, z′)] ).   (4)

[1] We swap the cost function for the reward function and omit the cost function regularization for simplicity.
[2] We assume that the reward function r is also defined on the embedding space Z; see app. 8.1.1 for details.

Simplifying the mapping between learner and expert. Assuming f to be given, we can further redefine the mapping between learner and expert state as g : S_L → Z.
That is, the state transformation g no longer has to map the learner state to the full expert state, but only to the task-relevant embedding of the expert state. This not only significantly reduces the complexity of the mapping function g, but also prevents transferring irrelevant domain specifics from the expert to the learner domain. We can then rewrite the cross-domain adversarial imitation objective as

argmin_{π_L, g} −H(π_L) + D_JS(ρ_{π_L}(g(s_L), g(s′_L)) ‖ ρ_{π_E}(f(s_E), f(s′_E))),   (5)

which minimizes the distance between the transformed distribution over learner states s_L and the distribution over embedded expert states z.

4.3 Finding a task-relevant embedding

We now detail how to find an embedding function f from the expert demonstrations D_E. We first assemble a set containing all expert transitions (s_E, s′_E) observed in the trajectories of the demonstration set D_E. We then generate a set of pseudo-random transitions (s_rand, s′_rand) by independently sampling two states out of all individual states contained in D_E. We then model all state transitions (s, s′) and their corresponding labels y, indicating whether it is a random or expert transition, as realizations of a random variable (S, S′, Y) on S_E × S_E × {0, 1}. Note that any time-invariant embedding f : S_E → Z induces a random variable (Z, Z′, Y) on Z × Z × {0, 1} via (Z, Z′) = (f(S), f(S′)). We then define the mapping f as a mapping that maximizes the mutual information I between the label Y and the embedded state transition (Z, Z′), that is,

argmax_f I((Z, Z′); Y) = argmax_f I((f(S), f(S′)); Y).   (6)

Observe that maximizing I(Z; Y) would lead to non-informative representations, as the states contained in the random trajectories are indeed states of the expert trajectory; only state transitions (S, S′) can distinguish between the two.

4.4 Avoiding degenerate solutions

Jointly learning the mapping function g and the learner agent's policy π_L may lead to degenerate mappings if g is a function of arbitrary complexity. An overly expressive g can make the divergence between distributions arbitrarily small, regardless of their common structure, by the universality property of the uniform distribution, i.e., any two distributions can be transformed into each other by leveraging their cumulative density functions (CDFs) and inverse CDFs. We prevent these degenerate solutions with an information-asymmetry constraint: we ensure that the mapping f is time-invariant, while the JS-divergence compares distributions across time, i.e., in a time-variant manner. A theoretical analysis is presented in app. 8.1.2.

4.5 Unsupervised cross-domain adversarial imitation learning

We finally define the unsupervised cross-domain adversarial imitation learning (UDIL) objective as an adversarial learning problem. We iterate between updating the learner agent's policy π_L, the mapping g between the learner's and expert's state spaces, and the discriminator D. The discriminator's objective is to distinguish between state transitions generated by the learner and state transitions generated by the expert, giving the overall objective

min_{g, π_L} max_θ E_{π_L}[log(D_θ(g(s_L), g(s′_L)))] + E_{π_E}[log(1 − D_θ(z, z′))].   (7)
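A minimal PyTorch sketch of one alternating update for objective (7) follows. The batch contents are placeholders, the affine encoder mirrors the parameterization described in Section 5.2, and all dimensions and names are our assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
S_L, Z, B = 11, 3, 32                     # learner state dim, embedding dim, batch size
g = nn.Linear(S_L, Z)                     # affine learner encoder g
D = nn.Sequential(nn.Linear(2 * Z, 64), nn.Tanh(),
                  nn.Linear(64, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
bce = nn.BCELoss()
cat = lambda a, b: torch.cat([a, b], dim=-1)

# Placeholder batches; in practice these come from rollouts and f(demonstrations).
s_l, s_l2 = torch.randn(B, S_L), torch.randn(B, S_L)   # learner transitions
z_e, z_e2 = torch.randn(B, Z), torch.randn(B, Z)       # embedded expert transitions

# Discriminator step (max over theta in eq. 7): target 1 = learner, 0 = expert.
loss_d = bce(D(cat(g(s_l).detach(), g(s_l2).detach())), torch.ones(B, 1)) \
       + bce(D(cat(z_e, z_e2)), torch.zeros(B, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Encoder step (min over g in eq. 7): g is trained to fool the discriminator,
# i.e. its loss is the negative of the discriminator's loss on learner data.
loss_g = -bce(D(cat(g(s_l), g(s_l2))), torch.ones(B, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The policy (min over pi_L) is updated by an off-the-shelf RL algorithm
# with reward r = -log D(g(s_l), g(s_l')), cf. Section 5.2.
```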
5 Experiments

Preliminaries. We test our approach on two different benchmarks that represent multiple domains and different agents with both environment-based and agent-based tasks. We designed our experiments to answer the following questions.

• Can we find task-relevant embeddings of the expert state solely from expert demonstrations, and improve the performance of imitation learning?
• Does the proposed framework robustly learn meaningful policies compared to previous work?
• Can we control the amount of domain-specific information transferred from the expert to the learner?

We compare with the GWIL baseline [10], which is the only other work that makes similar assumptions to ours, i.e., unsupervised cross-domain imitation learning with access only to demonstrations of a single expert agent. In the later-presented XMagical environment, we also compare to a modified single-demonstrator-agent version of XIRL [45], which originally relies on demonstrations of multiple distinct expert agents. As no reward function in the learner domain is given, we measure the performance of the learner agent by defining its reward as the components of the expert agent's reward function that can be directly transferred to the learner domain. To ensure reproducibility, we run all experiments on random seeds zero to six, report mean and standard error for all experiments (lines and shaded areas), and describe the experiments in full detail in appendix section 8.2.

5.1 XIRL baseline

Setup. Figure 2 shows the XMagical environment [41, 45], which consists of four agents with different embodiments that have to perform equivalent modifications in the environment, namely pushing all blocks to a shaded region. The corresponding baseline algorithm XIRL [45] trains each agent with demonstrations of the three other expert agents. As our work only requires demonstrations from a single expert agent, we focus on the two most distinct agents, Gripper and Longstick (displayed in Figure 2), and evaluate the performance of each when trained on demonstrations of the other. The reward is given as a function of the average distance between the task-relevant objects and their target positions.

Finding a task-relevant embedding. The environment state in XMagical is given as a multi-dimensional vector that describes different absolute and relative positions of environment objects and the agent itself. To find the task-relevant embedding of this state, we first generate sets of expert and pseudo-random transitions, as described in Section 4.3. As maximizing mutual information objectives in large continuous domains is intractable [5, 9], we instead approximate the objective in eq. (6) by first computing the empirical mutual information between state transitions and labels for each individual state dimension, using the method of Ross [31]. We then find the task-relevant embedding by selecting the dimensions with the highest mutual information using the elbow method [19] (see the code sketch below). We find a clear margin between those state dimensions that are intuitively relevant to the task, such as dimensions that describe the positions of the blocks, and those dimensions that are intuitively domain-specific and less relevant to the task, such as dimensions that describe the position of the robot.

Imitation learning with a task-relevant embedding of the expert state. We use the dataset of expert demonstrations provided by Zakka et al. [45] to compare the performance of our approach to that of the XIRL baseline.
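The per-dimension scoring and elbow selection described above might look as follows. This sketch uses scikit-learn's Ross-style estimator (mutual_info_classif) and sums the two per-coordinate scores of a transition as a crude stand-in for the joint mutual information of the pair, so it illustrates the recipe rather than reproducing the authors' exact implementation:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def task_relevant_dims(states, rng=np.random.default_rng(0)):
    """states: (T, D) array of expert states from the demonstrations."""
    s, s2 = states[:-1], states[1:]                    # expert transitions
    n, D = s.shape
    r = states[rng.integers(len(states), size=n)]      # pseudo-random transitions:
    r2 = states[rng.integers(len(states), size=n)]     # two independently drawn states
    y = np.concatenate([np.ones(n), np.zeros(n)])      # 1 = expert, 0 = random
    scores = np.empty(D)
    for d in range(D):
        pair = np.concatenate([np.stack([s[:, d], s2[:, d]], axis=1),
                               np.stack([r[:, d], r2[:, d]], axis=1)])
        scores[d] = mutual_info_classif(pair, y, random_state=0).sum()
    order = np.argsort(scores)[::-1]
    gaps = -np.diff(scores[order])
    n_keep = int(np.argmax(gaps)) + 1                  # elbow: largest drop in score
    return order[:n_keep], scores
```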
We follow Zakka et al. [45] and likewise use the simplified imitation learning framework where the learner agent simply receives a reward signal that corresponds to the distance between the current environment state and the target environment state, which is precomputed by averaging over all terminal states contained in the set of expert demonstrations. Note that the main difference between UDIL and XIRL is the task-relevant embedding of the expert state: XIRL relies on the full expert state. We use the XIRL implementation as given by the authors, apply it directly to the state space, and do not change any parameters. Figure 2 shows that we consistently outperform XIRL and in both cases achieve a score close to the maximum possible. We find that our method obtains task-relevant embeddings of the state from expert demonstrations alone, which significantly improves the performance of cross-domain imitation learning in the XMagical environment.

5.2 Cross-domain imitation learning of robot control

We now evaluate UDIL in the complex MuJoCo environments [7, 38]. We use the embodiments displayed in Figure 3, hopper, walker and halfcheetah, which are commonly used to evaluate (cross-domain) imitation learning algorithms [20, 16, 12, 30]. We use the fixed-length trajectory implementation [13] of these environments to prevent implicitly rewarding the learner agent for longer trajectories; the significance of this effect is demonstrated in Kostrikov et al. [20]. We first find a minimal task-relevant embedding, investigate the performance, and compare to GWIL. We then conduct ablation studies to evaluate the importance of the individual components of our framework and investigate how the transfer of information from the expert to the learner domains can be controlled by varying the size of the task-relevant expert embedding. We provide videos of the resulting behaviour, as described in appendix 8.4.

Finding a task-relevant embedding. Analogously to Section 5.1, we first generate sets of expert and pseudo-random transitions, and compute the mutual information between individual state dimensions and the transition labels. We find that across all three agents, the x position of the torso has the highest task-relevance, followed by the z position (height). This intuitively makes sense, as the expert agents receive relatively large rewards during training for moving in the positive x direction, followed by a smaller reward for being in a healthy (upright) position [7]. Note here that these findings are derived only from the expert demonstrations, without any knowledge of the rewards. Hereafter, the dimensions which describe the angular positions of the main joints with respect to the torso have the highest mutual information; the lowest mutual information is found for state dimensions that describe velocities of sub-components. We identify the task-relevant embedding with the elbow method as the positions that describe the torso, and later conduct ablation studies with larger embeddings.

Jointly learning the learner's policy and mapping function. We parameterize the learner encoder such that it learns an affine transformation of the input and define its loss as the negative of the discriminator's loss, i.e., the learner encoder is trained to fool the discriminator. The policy of the learner is parameterized by a neural network which, in contrast to the learner encoder, cannot be trained by backpropagating the discriminator loss, as a sampling step is required to obtain the state transitions from the learner policy.
We follow Ho and Ermon [16] and train the learner policy with RL, with the learner agent receiving higher rewards for taking actions that result in transformed state transitions g(s_L), g(s′_L) which are more likely to fool the discriminator D, i.e., which are more likely to be from the expert's task-relevant state-transition distribution ρ_E(z_E, z′_E). We use DAC [20] to jointly train g, π_L and D, as depicted in Figure 1, and do not alter any hyperparameters given in the original implementation to ensure comparability. We define the reward of the learner agent as the distance covered in the target direction, as this is the only reward component that is common among all three agents, and compare performance to GWIL [10].

Results. Figure 4 shows that the learner agents robustly learn meaningful policies for six random initializations across different combinations of expert and learner. We find that the hopper and walker cover about 50% of the distance compared to when they are trained with their ground-truth rewards, with the halfcheetah achieving about 13% of the expert distance. We qualitatively inspected the behaviours learned by the agents and found novel locomotion strategies that are distinct from those of the expert. We illustrate these strategies in Figure 3. We hypothesize that these new behaviours were enabled by the task-relevant embedding of the expert state, and we further investigate in Section 5.3 how the embedding size can be chosen to transfer more information from the expert to the learner. It can be seen in Figure 4 that our framework consistently outperforms the GWIL baseline; although we tried different hyperparameter configurations, we found the results of GWIL to be highly stochastic, which is due to the properties of the Gromov–Wasserstein distance [25] used, as indicated by the authors of GWIL [10, Remark 1].

5.3 Ablation Studies

We present our ablation studies that clarify the importance and influence of the different components of the framework, focusing on the hopper and halfcheetah agents.

Varying the dimension of the task-relevant embedding. We investigate the relevance of the task-relevant state embedding's dimension d and hypothesize that for larger embeddings, more information is transferred from the expert to the learner domain. We evaluate the performance as well as the resulting agent behaviours for d ∈ {3, 6, all}, where all refers to no reduction, i.e., f is an identity mapping, in which case the learner encoder g has to map the full learner state space to the full expert state space. We can observe in Figure 5 that the mean performance and robustness generally decrease when increasing the embedding size. We investigate different locomotion strategies adopted by the learner agent, dependent on the embedding size d, and illustrate these in Figure 3. We found that for d = 3, both hopper and halfcheetah would lie down on the floor and propel themselves forward. For larger embeddings d ∈ {6, all}, both would adopt strategies more similar to the demonstrations by lifting their torso off the ground for longer. The hopper would hop for a few moments and then perform a swimming-like movement; the halfcheetah would exhibit an animal-like quadruped gait. We conclude that changing the size of the expert's state embedding allows us to modulate the transfer of information between the expert and the learner domains.
Results. Figure 4 shows that the learner agents robustly learn meaningful policies for six random initializations across different combinations of expert and learner. We find that the hopper and walker cover about 50% of the distance compared to when they are trained with their ground truth rewards, with the halfcheetah achieving about 13% of the expert distance. We qualitatively inspected the behaviours learned by the agents and found novel locomotion strategies that are distinct from those of the expert. We illustrate these strategies in Figure 3. We hypothesize that these new behaviours were enabled by the task-relevant embedding of the expert state and further investigate in section 5.3 how the embedding size can be chosen to transfer more information from the expert to the learner. It can be seen in Figure 4 that our framework consistently outperforms the GWIL baseline; although we tried different hyperparameter configurations, we found the results of GWIL to be highly stochastic, which is due to the properties of the Gromov–Wasserstein distance [25] used, as indicated by the authors of GWIL [10, Remark 1].

5.3 Ablation Studies

We present our ablation studies that clarify the importance and influence of the different components of the framework, focusing on the hopper and halfcheetah agents.

Varying the dimension of the task-relevant embedding. We investigate the relevance of the task-relevant state embedding's dimension d and hypothesize that for larger embeddings, more information is transferred from the expert to the learner domain. We evaluate the performance as well as the resulting agent behaviours for d ∈ {3, 6, all}, where all refers to no reduction, i.e., f is an identity mapping, in which case the learner encoder g has to map the full learner state space to the full expert state space. We can observe in Figure 5 that the mean performance and robustness generally decrease when increasing the embedding size. We investigate different locomotion strategies adopted by the learner agent, dependent on the embedding size d, and illustrate these in Figure 3. We found that for d = 3, both hopper and halfcheetah would lie down on the floor and propel themselves forward. For larger embeddings d ∈ {6, all}, both would adopt strategies more similar to the demonstrations by lifting their torso off the ground for longer. The hopper would hop for a few moments and then perform a swimming-like movement, while the halfcheetah would exhibit an animal-like quadruped gait. We conclude that changing the size of the expert's state embedding allows us to modulate the transfer of information between the expert and the learner domains. In one extreme, one might want the learner to solve a task with a minimal task-relevant embedding, to allow the learner to develop strategies distinct from the expert, which could for example allow it to outperform the expert. In the other extreme, one might want the learner to replicate the strategies of the expert as closely as possible, which could be useful if the learner fails to solve the task with less information. Choosing the size of the task-relevant embedding then trades off between these two options.

Omitting the time invariance constraint. We omit the time-invariance constraint by reducing the discriminator input from s, s' to just the current state s. While this setting yields successful results in same-domain imitation learning [27], we found the time-invariance constraint to be essential for adversarial cross-domain imitation learning (see Figure 5).

Learning from a single trajectory. We investigated the performance of our approach when only a single expert trajectory is given, which represents the most direct comparison to GWIL, as GWIL can only utilize a single expert trajectory due to its computational complexity. We find that UDIL likewise outperforms GWIL by a large margin if only one demonstration is given, and show more results in appendix 8.3.3.

6 Conclusion

We introduce a novel framework for cross-domain imitation learning, which allows a learner agent to jointly learn to imitate an expert and learn a mapping between both state spaces, when they are dissimilar. This is made possible by defining a mutual information criterion to find a task-relevant embedding of the expert's state, which further allows us to control the transfer of information between the expert and learner domains. Our method shows robust performance across different random instantiations and domains, improving significantly upon previous work. However, as cross-domain imitation learning is generally an under-defined problem, the risk of learning incorrect policies remains. The mutual information objective used to find the task-relevant embedding might yield degenerate solutions in special cases, such as when the expert's policy induces a uniform distribution over state transitions, or when the environment is only partially observable. Also, finding the ideal size of the task-relevant embedding might be challenging in more complex domains. Similarly, the application of our algorithm to high-dimensional observation spaces requires further contributions and may constitute an interesting direction for future work.

7 Acknowledgements

We thank Dylan Campbell and Jakob Foerster for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163).
1. What is the main contribution of the paper in unsupervised cross-domain imitation learning? 2. What are the strengths of the proposed approach, particularly in its design decisions and analysis of the task-relevant embedding? 3. What are the weaknesses of the paper regarding the experimental scope and limitations? 4. Do you have any suggestions for additional baselines to strengthen the claims made in the paper? 5. Are there any minor errors or typos in the paper that should be addressed? 6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper presents UDIL, unsupervised cross-domain imitation learning, a method for learning a policy in one environment using expert demonstrations from another environment. The focus is on learning a task-relevant embedding, which identifies which parts of the state should be used for mapping between learner and expert data. Experiments are presented in two domains: the XMagical benchmark (in which the learner and expert are circular and a long stick or vice versa) and MuJoCo (in which the agents are hopper, halfcheetah, or walker). UDIL outperforms XIRL in the XMagical domain and outperforms GWIL in the MuJoCo domain. Strengths And Weaknesses Strengths The paper is well-written and well-motivated. The method and design decisions are explained clearly. The analysis of the size of the task-relevant embedding (dimension d) is interesting and original. Weaknesses The experiments are limited to environments with small states, where there is a clear distinction between task-relevant dimensions and dimensions which can be discarded. The claims would be strengthened if experiments were extended to environments with visual observations (e.g. atari flavors) or at least more nuanced ones (perhaps MiniGrid). Questions I wonder if there are additional baselines that could be used. For example, what happens if vanilla imitation learning is done on the task-relevant embedding? I expect this would work poorly since the policy does need some environment-specific information. Another baseline could be some form of oracle in which the task-relevant dimensions are hand-picked or make use of the reward in some way. Minor (no need to respond): Typo on line 293 (missing period after domain). Typo on line 318 (3 p's in appendix). Limitations I'd like to see a discussion of limitations, perhaps addressing how the method might extend to more complex observations/environments.
NIPS
Title Learn what matters: cross-domain imitation learning with task-relevant embeddings Abstract We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent. Such cross-domain imitation learning is required to, for example, train an artificial agent from demonstrations of a human expert. We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge. We jointly train the learner agent's policy and learn a mapping between the learner and expert domains with adversarial training. We effect this by using a mutual information criterion to find an embedding of the expert's state space that contains task-relevant information and is invariant to domain specifics. This step significantly simplifies estimating the mapping between the learner and expert domains and hence facilitates end-to-end learning. We demonstrate successful transfer of policies between considerably different domains, without extra supervision such as additional demonstrations, and in situations where other methods fail.

1 Introduction

Reinforcement learning (RL) has shown great success in diverse tasks and distinct domains [43, 2]; however, its performance hinges on defining precise reward functions. While rewards are straightforward to define in simple scenarios such as games and simulations, real-world scenarios are significantly more nuanced, especially when they involve interacting with humans. One possibility for overcoming the problem of reward misspecification is to learn policies from observations of expert behaviour, also known as imitation learning. Recent imitation learning algorithms rely on updating the learner agent's policy until the state occupancy of the learner matches that of the expert demonstrator [4], requiring the learner and expert to be in the same domain. Such a requirement rarely holds true in more realistic scenarios. Consider for example the case where a robot arm learns to move an apple onto a plate from demonstrations of a human performing this task. Here, both domains do inherently share structure (the apples and the plates have similar appearances) but are distinct (the morphologies, dynamics and appearances of the two arms are different). Enabling a learner agent to successfully perform a task from demonstrations that were generated by a different expert agent, which we refer to as a different domain even if the tasks are related, would widely broaden the possibilities to train artificial agents. This cross-domain imitation learning problem is seen as an important step towards value alignment, as it facilitates transferring behaviour from humans to artificial agents [32, Chapter 7]. Researchers have only recently considered this problem in realistic settings. Due to its difficulty, previous work on cross-domain imitation learning either assumes the expert's and learner's domains to be almost identical [42, 17, 6], requires demonstrations of experts in multiple domains that are similar to the learner's [45, 44], or relies on the availability of demonstrations of proxy tasks in both domains [30, 18]. Designing such proxy tasks is a manual process that requires prior knowledge about both domains, since they have to be inherently similar to the target task to convey a relevant mapping between domains [18]. Fickinger et al.
[10] overcome the need for proxy tasks by directly comparing distributions in both domains, effectively addressing the same problem setting as ours. While very promising, their approach is limited to short demonstrations and Euclidean spaces. We propose to jointly learn the learner policy and the mapping between the learner and expert state spaces, utilizing adversarial training. Unlike standard generative adversarial imitation learning [16, 39], we use domain-specific encoders for both the learner and expert. We therefore devise a mutual information criterion to find an expert encoder that preserves task-relevant information while discarding domain specifics irrelevant to the task. Note that in general, cross-domain imitation learning is an under-defined problem, as a unique optimal policy for the learner is not defined as part of the problem: for example, should a humanoid agent that imitates a cheetah crawl (imitating its gait) or walk (moving in the same direction)? We evaluate our cross-domain imitation learning approach in different cross-embodiment imitation learning scenarios, comparing on relevant benchmarks, and find that our method robustly learns policies that clearly outperform the baselines. We conduct several ablation studies, in particular finding that we can control how much domain-specific information is transferred from the expert—effectively interpolating between mimicking the expert's behaviour as much as possible and finding novel policies that use different strategies to maximize the expert's reward. Our contributions are: • We propose a mutual information criterion to find an embedding of the expert state which contains task-relevant information, while discarding domain specifics irrelevant to the task. • We learn the mapping between the learner domain and the task-relevant embedding without additional proxy task demonstrations. • We demonstrate training robust policies across diverse environments, and the ability to modulate how information flows between the learner and expert domains.

2 Related Work

Imitation learning considers the problem of finding an optimal policy for a learner agent from demonstrations generated by an expert agent, where inverse reinforcement learning (IRL) [1, 46] recovers a reward function under which the observed expert's behaviour is optimal. More recent works [16, 11, 39] define imitation learning as a distribution matching problem and use adversarial training [14] to directly find the learner's policy, without explicitly recovering the expert's reward. Cross-domain imitation learning generalizes imitation learning to the case where the learner and expert are in different domains. Small mismatches between the domains, such as changes in viewpoint or gravitational force, or small variations of the dynamics, are addressed by [42, 12, 17, 28, 36, 8] and Bohez et al. [6]. To learn policies cross-domain in the presence of larger mismatches, such as different embodiments of the learner and the expert, previous works used demonstrations of proxy tasks to learn a mapping between the learner and expert domain, which is then used to find the learner's optimal policy [15, 23, 35, 30, 18], utilized a latent embedding of the environment state [45, 44], or assumed the reward signal to be given [34]. GWIL [10] does not rely on proxy tasks and minimizes the distance between the state-action probability distributions of the two agents, which lie in different spaces [25].
This approach assumes Euclidean spaces and is computationally intractable when using longer demonstrations, which generally improve the performance of learning algorithms when available. Our approach obviates the need for proxy tasks, scales to detailed demonstrations of complex behaviours, and enables the control of how much domain-specific information is transferred to the learner domain. In classical RL [26], where behaviour is learned from a given reward function, mutual information objectives are commonly used to find compact state representations that increase performance by discarding irrelevant information [29, 3, 37, 24, 22]. We propose to similarly learn a representation of the expert's state that contains task-relevant information while being invariant to domain specifics.

3 Background

Definitions. Following Kim et al. [18], we define a domain as a tuple (S, A, P, ζ), where S denotes the state space, A is the action space, P is the transition function, and ζ is the initial distribution over states. Given an action a ∈ A, the distribution over the next state is given by the transition function as P(s'|s, a). An infinite horizon Markov decision process (MDP) is defined by adding a reward function r : S × A → R, which describes a specific task, and a discount factor γ ∈ [0, 1] to the domain tuple. We define the expert agent's MDP as M_E = (S_E, A_E, P_E, r_E, γ_E, ζ_E), and its policy as a map π_E : S_E → B(A_E), where B is the set of all probability measures on A_E. We define the learner MDP M_L and learner policy π_L analogously, except that the learner MDP has no reward function or discount factor. An expert trajectory is a sequence of states τ_E = {s⁰_E, s¹_E, . . . , sⁿ_E}, where n denotes the length of the trajectory. We denote D_E = {τ_i} to be a set of such trajectories.

Problem Definition. The objective of cross-domain imitation learning is to find a policy π_L that optimally performs a task in the learner domain M_L, given demonstrations D_E in the expert domain M_E. In contrast to most prior work, we do not assume a dataset of proxy tasks—simple primitive skills in both domains that are similar but different from the inference task—to be given. We do not assume access to the expert demonstration's actions, which may be non-trivial to obtain, e.g., when learning from videos or human demonstrations, and therefore consider the expert demonstrations to consist only of states.

Adversarial Imitation Learning from Observations. We first consider the equal-domain case in which both MDPs are equivalent, i.e., M_L = M_E, and assume that the expert agent's optimal policy π_E under r_E is known. Torabi et al. [39] define a solution to this problem as an extension of the standard imitation learning problem [16], by minimizing the divergence between the learner's state-transition distribution ρ_{π_L} and that of the expert ρ_{π_E}, as

argmin_{π_L} −H(π_L) + D_JS(ρ_{π_L}(s, s') ‖ ρ_{π_E}(s, s')) = RL ∘ IRL(π_E),   (1)

where D_JS is the Jensen–Shannon divergence and H(π_L) is the learner's policy entropy [46]. The state-transition distribution for a policy π is defined as

ρ_π(s_i, s_j) = Σ_a P(s_j | s_i, a) π(a | s_i) Σ_{t=0}^∞ γ^t P(s_t = s_i | π).   (2)

In particular, the expert's state-transition distribution ρ_{π_E} is estimated using expert demonstrations D_E.
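Concretely, this estimate amounts to treating consecutive state pairs of the demonstrations as samples of ρ_{π_E}; a minimal helper (ours, for illustration):

```python
import numpy as np

def transition_samples(trajectories):
    """Collect consecutive state pairs (s, s') from state-only trajectories,
    giving empirical samples of the expert's state-transition distribution."""
    pairs = [np.stack([tau[:-1], tau[1:]], axis=1)  # pairs within one trajectory
             for tau in trajectories]
    return np.concatenate(pairs, axis=0)            # shape (sum_i (n_i - 1), 2, D)
```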
The above objective (eq. 1) can also be derived as the composition of the IRL and RL problems, where r_E = IRL(π_E) denotes the solution to the Inverse Reinforcement Learning problem from policy π_E and π_L = RL(r_E) denotes the solution to the RL problem with reward r_E. The IRL component, which recovers the reward function r : S × S → R under which the expert's demonstrations are uniquely optimal¹ by finding a reward function that assigns high rewards to the expert policy and low rewards to other policies, is given as

IRL(π_E) = argmin_r ( max_{π_L} E_{π_L}[r(s, s')] − E_{π_E}[r(s, s')] ).

4 Unsupervised Imitation Learning Across Domains

We first introduce the cross-domain imitation learning problem before deriving an adversarial learning objective that allows the simultaneous training of the learner's policy and a mapping between the MDPs of the learner and expert. We then demonstrate how the cross-domain imitation learning problem can be significantly simplified by finding an embedding of the expert agent's state space that contains task-relevant information while discarding domain-specific aspects. Lastly, we introduce a time-invariance constraint to prevent degenerate mapping solutions. As our approach does not rely on additional demonstrations from proxy tasks, we refer to it as the unsupervised cross-domain imitation learning objective (UDIL).

4.1 Cross-domain adversarial imitation learning

We consider the case in which the expert's and agent's MDPs are different, i.e., M_L ≠ M_E, such as when learner and expert are of different embodiments or are in different environments. Kim et al. [18] show that, if there exists an injective mapping g that reduces the learner MDP M_L to the expert MDP M_E, then a policy π_L that is optimal in M_L is also optimal in M_E. Since we do not assume extra supervision from the expert's actions, we define the mapping function between the learner and expert MDPs g : S_L → S_E as a mapping between the respective state spaces. We accordingly define the cross-domain adversarial imitation objective as

argmin_{π_L} −H(π_L) + D_JS(ρ_{π_L}(g(s_L), g(s'_L)) ‖ ρ_{π_E}(s_E, s'_E)).   (3)

Applying the mapping g to the learner agent's state allows us to compare the learner's and expert's distributions, even though they are defined over different state spaces.

4.2 Reducing the expert's state dimension

The full state of the expert domain s_E generally contains information that is specific to the task which the expert is demonstrating, defined by the expert's reward function r_E, as well as information that is specific to the domain but irrelevant to the task itself. We simplify the cross-domain imitation learning problem by reducing the expert agent's state space to a task-relevant embedding that is invariant to domain specifics. We assume that the learner state s is multi-dimensional and recall the IRL component of the adversarial imitation problem (eq. 1), which finds the reward function under which the expert's behavior is optimal. We define a second mapping function f : S_E → Z, that maps the expert states s_E ∈ S_E to lower-dimensional representations z ∈ Z, with |Z| ≪ |S_E|. When f is chosen as a dimension reduction operation that discards state dimensions of which the reward is independent, we can write the IRL component of eq. 1 as a function of only the embedded representation z (proof in app. 8.1.1),²

¹We swap the cost function for the reward function and omit the cost function regularization for simplicity. ²We assume that the reward function r is also defined on the embedding space Z, see app. 8.1.1 for details.

as

IRL(π_E) = argmin_r ( max_{π_L} E_{π_L}[r(z, z')] − E_{π_E}[r(z, z')] ).   (4)

Simplifying the mapping between learner and expert. Assuming f to be given, we can further redefine the mapping between learner and expert state as g : S_L → Z.
That is, the state transformation g no longer has to map the learner state to the full expert state, but only to the task-relevant embedding of the expert state. This not only significantly reduces the complexity of the mapping function g, but also prevents transferring irrelevant domain specifics from the expert to the learner domain. We can then rewrite the cross-domain adversarial imitation objective as

argmin_{π_L, g} −H(π_L) + D_JS(ρ_{π_L}(g(s_L), g(s'_L)) ‖ ρ_{π_E}(f(s_E), f(s'_E))),   (5)

which minimizes the distance between the transformed distribution over learner states s_L and the distribution over embedded expert states z.

4.3 Finding a task-relevant embedding

We now detail how to find an embedding function f from the expert demonstrations D_E. We first assemble a set containing all expert transitions (s_E, s'_E) observed in the trajectories of the demonstration set D_E. We then generate a set of pseudo-random transitions (s_rand, s'_rand) by independently sampling two states out of all individual states contained in D_E. We then model all state transitions (s, s') and their corresponding labels y, indicating whether it is a random or expert transition, as realizations of a random variable (S, S', Y) on S_E × S_E × {0, 1}. Note that any time-invariant embedding f : S_E → Z induces a random variable (Z, Z', Y) on Z × Z × {0, 1} via (Z, Z') = (f(S), f(S')). We then define the mapping f as a mapping that maximizes the mutual information I between the label Y and the embedded state transition (Z, Z'), that is,

argmax_f I((Z, Z'); Y) = argmax_f I((f(S), f(S')); Y).   (6)

Observe that maximizing I(Z; Y) would lead to non-informative representations, as the states contained in the random trajectories are indeed states of the expert trajectory; only state transitions (S, S') can distinguish between the two.

4.4 Avoiding degenerate solutions

Jointly learning the mapping function g and the learner agent's policy π_L may lead to degenerate mappings if g is a function of arbitrary complexity. An overly expressive g can make the divergence between distributions arbitrarily small, regardless of their common structure, by the universality property of the uniform distribution, i.e., any two distributions can be transformed into each other by leveraging their cumulative density functions (CDFs) and inverse CDFs. We prevent these degenerate solutions with an information asymmetry constraint: we ensure that the mapping f is time-invariant, while the JS-divergence compares distributions across time, i.e., in a time-variant manner. A theoretical analysis is presented in app. 8.1.2.

4.5 Unsupervised cross-domain adversarial imitation learning

We finally define the unsupervised cross-domain adversarial imitation learning (UDIL) objective as an adversarial learning problem. We iterate between updating the learner agent's policy π_L, the mapping g between the learner's and expert's state spaces, and the discriminator D. The discriminator's objective is to distinguish between state transitions generated by the learner and state transitions generated by the expert, giving the overall objective

min_{g, π_L} max_θ E_{π_L}[log(D_θ(g(s_L), g(s'_L)))] + E_{π_E}[log(1 − D_θ(z, z'))].   (7)

5 Experiments

Preliminaries. We test our approach on two different benchmarks that represent multiple domains and different agents with both environment-based and agent-based tasks. We designed our experiments to answer the following questions.
• Can we find task-relevant embeddings of the expert state solely from expert demonstrations, and improve the performance of imitation learning?
• Does the proposed framework robustly learn meaningful policies compared to previous work?
• Can we control the amount of domain-specific information transferred from the expert to the learner?

We compare with the GWIL baseline [10], which is the only other work that makes similar assumptions to ours, i.e., unsupervised cross-domain imitation learning with access only to demonstrations of a single expert agent. In the later presented XMagical environment, we also compare to a modified single-demonstrator-agent version of XIRL [45], which originally relies on demonstrations of multiple distinct expert agents. As no reward function in the learner domain is given, we measure the performance of the learner agent by defining its reward as the components of the expert agent's reward function that can be directly transferred to the learner domain. To ensure reproducibility, we run all experiments on random seeds zero to six, report mean and standard error for all experiments (lines and shaded areas), and describe the experiments in full detail in appendix section 8.2.

5.1 XIRL baseline

Setup. Figure 2 shows the XMagical environment [41, 45], which consists of four agents with different embodiments that have to perform equivalent modifications in the environment, namely pushing all blocks to a shaded region. The corresponding baseline algorithm XIRL [45] trains each agent with demonstrations of the three other expert agents. As our work only requires demonstrations from a single expert agent, we focus on the two most distinct agents, Gripper and Longstick (displayed in Figure 2), and evaluate the performance of each when trained on demonstrations of the other. The reward is given as a function of the average distance between the task-relevant objects and their target positions.

Finding a task-relevant embedding. The environment state in XMagical is given as a multidimensional vector that describes different absolute and relative positions of environment objects and the agent itself. To find the task-relevant embedding of this state, we first generate sets of expert and pseudo-random transitions, as described in section 4.3. As maximizing mutual information objectives in large continuous domains is intractable [5, 9], we instead approximate the objective in eq. (6) by first computing the empirical mutual information between state transitions and labels for each individual state dimension, using the method of Ross [31]. We then find the task-relevant embedding by selecting the dimensions with highest mutual information using the elbow method [19]. We find a clear margin between those state dimensions that are intuitively relevant to the task, such as dimensions that describe the positions of the blocks, and those dimensions that are intuitively domain-specific and less relevant to the task, such as dimensions that describe the position of the robot.

Imitation learning with a task-relevant embedding of the expert state. We use the dataset of expert demonstrations provided by Zakka et al. [45] to compare the performance of our approach to that of the XIRL baseline. We follow Zakka et al.
[45] and likewise use the simplified imitation learning framework where the learner agent simply receives a reward signal that corresponds to the distance between the current environment state and the target environment state, which is precomputed by averaging over all terminal states contained in the set of expert demonstrations. Note that the main difference between UDIL and XIRL is the task-relevant embedding of the expert state: XIRL relies on the full expert state. We use the XIRL implementation as given by the authors, apply it directly to the state space, and do not change any parameters. Figure 2 shows that we consistently outperform XIRL and in both cases achieve a score close to the maximum possible. We find that our method obtains task-relevant embeddings of the state from expert demonstrations alone, which significantly improves the performance of cross-domain imitation learning in the XMagical environment.

5.2 Cross-domain imitation learning of robot control

We now evaluate UDIL in the complex Mujoco environments [7, 38]. We use the embodiments displayed in Figure 3, hopper, walker and halfcheetah, which are commonly used to evaluate (cross-domain) imitation learning algorithms [20, 16, 12, 30]. We use the fixed-length trajectory implementation [13] of these environments to prevent implicitly rewarding the learner agent for longer trajectories; the significance of this effect is demonstrated in Kostrikov et al. [20]. We first find a minimal task-relevant embedding, investigate the performance, and compare to GWIL. We then conduct ablation studies to evaluate the importance of the individual components of our framework and investigate how the transfer of information from the expert to the learner domains can be controlled by varying the size of the task-relevant expert embedding. We provide videos of the resulting behaviour, as described in appendix 8.4.

Finding a task-relevant embedding. Analogously to the previous section 5.1, we first generate sets of expert and pseudo-random transitions, and compute the mutual information between individual state dimensions and the transition labels. We find that across all three agents, the x position of the torso has the highest task-relevance, followed by the z position (height). This intuitively makes sense, as the expert agents receive relatively large rewards during training for moving in the positive x direction, followed by a smaller reward for being in a healthy (upright) position [7]. Note that these findings are derived only from the expert demonstrations, without any knowledge of the rewards. Next in rank are the dimensions which describe the angular positions of the main joints with respect to the torso; the lowest mutual information is found for state dimensions that describe velocities of sub-components. We identify the task-relevant embedding with the elbow method as the positions that describe the torso, and later conduct ablation studies with larger embeddings.

Jointly learning the learner's policy and mapping function. We parameterize the learner encoder such that it learns an affine transformation of the input and define its loss as the negative of the discriminator's loss, i.e., the learner encoder is trained to fool the discriminator. The policy of the learner is parameterized by a neural network, which, in contrast to the learner encoder, cannot be trained by backpropagating the discriminator loss, as a sampling step is required to obtain the state transitions from the learner policy.
We follow Ho and Ermon [16] and train the learner policy with RL, with the learner agent receiving higher rewards for taking actions that result in transformed state transitions g(s_L), g(s'_L) which are more likely to fool the discriminator D, i.e., which are more likely to be from the expert's task-relevant state-transition distribution ρ_E(z_E, z'_E). We use DAC [20] to jointly train g, π_L and D, as depicted in Figure 1, and do not alter any hyperparameters given in the original implementation to ensure comparability. We define the reward of the learner agent as the distance covered in the target direction, as this is the only reward component that is common among all three agents, and compare performance to GWIL [10].

Results. Figure 4 shows that the learner agents robustly learn meaningful policies for six random initializations across different combinations of expert and learner. We find that the hopper and walker cover about 50% of the distance compared to when they are trained with their ground truth rewards, with the halfcheetah achieving about 13% of the expert distance. We qualitatively inspected the behaviours learned by the agents and found novel locomotion strategies that are distinct from those of the expert. We illustrate these strategies in Figure 3. We hypothesize that these new behaviours were enabled by the task-relevant embedding of the expert state and further investigate in section 5.3 how the embedding size can be chosen to transfer more information from the expert to the learner. It can be seen in Figure 4 that our framework consistently outperforms the GWIL baseline; although we tried different hyperparameter configurations, we found the results of GWIL to be highly stochastic, which is due to the properties of the Gromov–Wasserstein distance [25] used, as indicated by the authors of GWIL [10, Remark 1].

5.3 Ablation Studies

We present our ablation studies that clarify the importance and influence of the different components of the framework, focusing on the hopper and halfcheetah agents.

Varying the dimension of the task-relevant embedding. We investigate the relevance of the task-relevant state embedding's dimension d and hypothesize that for larger embeddings, more information is transferred from the expert to the learner domain. We evaluate the performance as well as the resulting agent behaviours for d ∈ {3, 6, all}, where all refers to no reduction, i.e., f is an identity mapping, in which case the learner encoder g has to map the full learner state space to the full expert state space. We can observe in Figure 5 that the mean performance and robustness generally decrease when increasing the embedding size. We investigate different locomotion strategies adopted by the learner agent, dependent on the embedding size d, and illustrate these in Figure 3. We found that for d = 3, both hopper and halfcheetah would lie down on the floor and propel themselves forward. For larger embeddings d ∈ {6, all}, both would adopt strategies more similar to the demonstrations by lifting their torso off the ground for longer. The hopper would hop for a few moments and then perform a swimming-like movement, while the halfcheetah would exhibit an animal-like quadruped gait. We conclude that changing the size of the expert's state embedding allows us to modulate the transfer of information between the expert and the learner domains.
In one extreme, one might want the learner to solve a task with a minimal task-relevant embedding, to allow the learner to develop strategies distinct from the expert, which could for example allow it to outperform the expert. In the other extreme, one might want the learner to replicate the strategies of the expert as closely as possible, which could be useful if the learner fails to solve the task with less information. Choosing the size of the task-relevant embedding then trades off between these two options.

Omitting the time invariance constraint. We omit the time-invariance constraint by reducing the discriminator input from s, s' to just the current state s. While this setting yields successful results in same-domain imitation learning [27], we found the time-invariance constraint to be essential for adversarial cross-domain imitation learning (see Figure 5).

Learning from a single trajectory. We investigated the performance of our approach when only a single expert trajectory is given, which represents the most direct comparison to GWIL, as GWIL can only utilize a single expert trajectory due to its computational complexity. We find that UDIL likewise outperforms GWIL by a large margin if only one demonstration is given, and show more results in appendix 8.3.3.

6 Conclusion

We introduce a novel framework for cross-domain imitation learning, which allows a learner agent to jointly learn to imitate an expert and learn a mapping between both state spaces, when they are dissimilar. This is made possible by defining a mutual information criterion to find a task-relevant embedding of the expert's state, which further allows us to control the transfer of information between the expert and learner domains. Our method shows robust performance across different random instantiations and domains, improving significantly upon previous work. However, as cross-domain imitation learning is generally an under-defined problem, the risk of learning incorrect policies remains. The mutual information objective used to find the task-relevant embedding might yield degenerate solutions in special cases, such as when the expert's policy induces a uniform distribution over state transitions, or when the environment is only partially observable. Also, finding the ideal size of the task-relevant embedding might be challenging in more complex domains. Similarly, the application of our algorithm to high-dimensional observation spaces requires further contributions and may constitute an interesting direction for future work.

7 Acknowledgements

We thank Dylan Campbell and Jakob Foerster for their helpful feedback. We are also grateful to the anonymous reviewers for their valuable suggestions. This work was supported by the Royal Academy of Engineering (RF\201819\18\163).
1. What is the focus and contribution of the paper regarding cross-domain imitation learning? 2. What are the strengths of the proposed method, particularly in its novelty and relevance to the research community? 3. What are the weaknesses of the paper, especially regarding its comparisons with other works and performances in different embodiment cases? 4. Do you have any questions regarding the use of multiple expert agents, the intuition behind the noisy reward curve, and the reason behind the performance decrease for the larger embedding case? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The problem setting under consideration is cross-domain imitation learning, where the goal is to enable an imitation learning agent to learn from expert demonstrations from a different environment or agent embodiment. The authors propose a method for cross-domain imitation learning that requires less supervision than prior methods (i.e. without additional proxy tasks or multiple demonstrators/domains), by primarily leveraging a mutual information-based objective to encourage learning a more task-relevant representation of the expert state space. This embedding is then used by the learner for an imitation learning objective. The method further imposes a time-invariance constraint to prevent learning a degenerate embedding space, and the overall method uses an adversarial imitation learning setup to learn solely from observations. The experiments compare against other recent methods in cross-domain imitation learning, across a range of task settings. Strengths And Weaknesses Strengths: Observational cross-domain imitation learning has grown increasingly relevant as a problem setting, for which the authors propose a novel self-supervised method (to the best of my knowledge). While the individual components -- mutual information based representation learning, adversarial imitation learning -- are not novel, the construction of the method and problem domain seems novel. This problem setting and proposed method has high relevance to the research community, as methods developed in this area opens the door for unsupervised learning of behaviours from widely available online demonstrations (eg. Youtube videos). The method is well explained and straightforward. The authors carry out a range of evaluations and ablations to show the capabilities of their proposed method, comparing against recent work in this domain, and show that their proposed method performs well. The paper is very well written, structured nicely, and was a pleasure to read. Being able to adjust how much of the task-relevant information to retain using the embedding size is an interesting outcome of the method. Weaknesses: While not explicitly directed at imitation learning across different embodiments, there are some relevant works in unsupervised methods for domain regularization in observational imitation learning, which should be cited in the paper: Stadie, Bradly C., Pieter Abbeel, and Ilya Sutskever. "Third-person imitation learning." (2017). Cetin, Edoardo, and Oya Celiktutan. "Domain-robust visual imitation learning with mutual information constraints." (2021). While the performance in Section 5.1 compared to XIRL seems strong, I would like to see the same evaluations carried out on different combinations of agent embodiments rather than just the two that are most different. In the other embodiment cases, does UDIL still outperform XIRL or is the performance more similar as the embodiment gap closes? It also seems like the authors do not use the adversarial imitation learning setup in the comparisons against XIRL. It would be interesting to see if the adversarial setup improves or hurts performance in this case. Questions Have the authors considered / experimented with using multiple expert agents providing demonstrations? One of the stated motivations for this work is wanting to avoid reliance on a large number of different demonstrators, but it would be interesting to see if there is any performance improvement in the multi-demonstrator case (i.e. 
would the quality of the learned embedding space improve, or is one demonstrator sufficient for capturing the task-relevant features?). In the ablations plot (Figure 5) with Hopper from HalfCheetah, what is the intuition behind the noisy reward curve for UDIL? As adversarial training objectives can be unstable to train, I am curious if the authors have seen any other similar instabilities / difficulties with training. Do the authors have any hypotheses for why the performance increases then decreases for the larger embedding case for Hopper from HalfCheetah in Figure 5? Limitations The paper would be improved with a section more clearly discussing the limitations of the proposed approach -- e.g. if there are difficulties with adversarial training, whether there are cross-domain environments or tasks where the proposed mutual information objective would fail, or how much additional overhead is required to search for the best embedding dimension size for the task you care about.
NIPS
Title Time-Conditioned Dances with Simplicial Complexes: Zigzag Filtration Curve based Supra-Hodge Convolution Networks for Time-series Forecasting Abstract Graph neural networks (GNNs) offer a new powerful alternative for multivariate time series forecasting, demonstrating remarkable success in a variety of spatiotemporal applications, from urban flow monitoring systems to health care informatics to financial analytics. Yet, such GNN models predominantly capture only lower order interactions, that is, pairwise relations among nodes, and also largely ignore intrinsic time-conditioned information on the underlying topology of multivariate time series. To address these limitations, we propose a new time-aware GNN architecture which amplifies the power of the recently emerged simplicial neural networks with a time-conditioned topological knowledge representation in the form of zigzag persistence. That is, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), is built upon two main components: (i) a new highly computationally efficient zigzag filtration curve which allows us to systematically encode time-conditioned topological information, and (ii) a new temporal multiplex graph representation module for learning higher-order network interactions. We discuss theoretical properties of the proposed time-conditioned topological knowledge representation and extensively validate the new time-aware ZFC-SHCN model in conjunction with time series forecasting on a broad range of synthetic and real-world datasets: traffic flows, COVID-19 biosurveillance, Ethereum blockchain, surface air temperature, wind energy, and vector autoregressions. Our experiments demonstrate that the ZFC-SHCN achieves the state-of-the-art performance with lower requirements on computational costs.

1 Introduction

Over the last few years, graph neural networks (GNNs) have emerged as a new powerful alternative to traditional statistical and machine learning models in conjunction with univariate and multivariate time series forecasting tasks [27, 4, 40, 28]. Such successful applications of GNNs range from urban traffic analytics to forecasting COVID-19 hospitalizations to electrocardiogram monitoring [3, 36, 56, 10, 20]. However, most GNNs remain inherently static and do not explicitly incorporate the inherent time characteristics of the encoded knowledge [59, 42]. In turn, limitations in capturing the time dimension in the knowledge representation and learning mechanisms for time-evolving data result in GNNs becoming less relevant over time and, hence, requiring frequent updates. Furthermore, GNNs tend to predominantly focus only on information propagation among nodes and also be limited in their ability to describe polyadic relationships among multiple substructures of multivariate time series or multi-node interactions in dynamic graphs. However, as recently shown by [6, 21], such higher-order interactions might be the key toward better understanding of the underlying mechanisms of many real-world graph-structured phenomena. This challenge on polyadic graph interactions has been recently addressed by [24, 8, 7], who propose to model higher order substructures as simplices. Then, by borrowing the concepts of the Hodge theory, these approaches allow for generalization of the ideas of the combinatorial graph Laplacian which describes a diffusion from node to node via edges to a case of diffusion over simplices.
This Hodge Laplacian construction allows us to extend the notion of the convolution operation to simplicial convolution, and the resulting simplicial neural networks (SNNs) are arguably one of the frontlines in graph learning today. However, these ideas have not yet been applied in conjunction with knowledge representation and learning of time-evolving objects. Our goal here is to bridge the emerging concept of time-aware learning with the recent notions of simplicial convolution, with a particular focus on explicitly integrating the core time-conditioned topological characteristics. In particular, we amplify the power of SNNs with a time-conditioned topological knowledge representation in the form of zigzag persistence for time-indexed data and, more specifically, its new highly computationally efficient summary, the Zigzag Filtration Curve. As a result, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), enables us to systematically learn the most intrinsic time-conditioned information both on the underlying topology of the time-evolving data and on higher-order interactions among various substructures. The significance of our contributions can be summarized as follows: • ZFC-SHCN is the first approach to bring the concepts of simplicial convolution and SNNs to time-aware learning. • We propose a new highly computationally efficient summary of persistence for time-indexed data, the Zigzag Filtration Curve, and derive its theoretical stability guarantees. • We validate the utility of ZFC-SHCN in conjunction with forecasting multivariate time series from diverse application domains such as traffic networks, COVID-19 biosurveillance, surface air temperature, token prices on Ethereum blockchain, wind energy, and vector autoregressions. Our findings indicate that ZFC-SHCN delivers state-of-the-art forecasting performance by a significant margin and demonstrates higher computational efficiency.

2 Related Work

Time-series Forecasting and Spatio-temporal Graph Convolutional Networks. Time-series forecasting is one of the core subfields in statistical sciences [15, 9]. Recently, a number of unconventional machine learning approaches to time-series forecasting have appeared. In particular, graph convolutional network (GCN)-based models for spatio-temporal network data have emerged as a promising forecasting tool. For instance, DCRNN [42] introduces spectral graph convolution into spatio-temporal network data prediction, which can capture spatio-temporal dependencies. STGCN [59] uses convolutional neural networks (CNNs) to model temporal correlations. Moreover, to infer hidden inter-dependencies between different traffic variables, [57, 3, 10] conduct a convolution operation in the spatial dimension through adaptive adjacency matrices. The recent Z-GCNETs [20] develops a zigzag topological layer, equipped with a zigzag persistence image, within a GCN framework to model temporal correlations. Another promising recent direction for time series forecasting beyond GCN is a fractional-order dynamical model proposed by [27]. This approach offers an alternating scheme to determine the best estimate of the model parameters and unknown stimuli. In turn, [28] proposes a Padé approximation based exponential neural operator (Padé Exp), aiming to improve time-series forecasting with exponential operators in neural operator learning schemes. However, all of the above methods only focus on node-level representations.
In contrast, in this paper, we focus on both higher-order structure representation and topological information learning.

Topological Data Analysis for Graph Learning. Persistent homology [25, 62] is a suite of tools within topological data analysis (TDA) that provides a way for measuring topological features of shapes and functions. The extracted topological features have recently been shown to provide invaluable insights into hidden mechanisms behind the organization and functionality of graph-structured data. In particular, topological features have been actively used for node classification [61, 17], link prediction [58], and graph classification [31, 32, 14, 30]. For instance, [31] is one of the first approaches to integrate topological features into neural networks for graph classification, while [14] proposes a versatile framework for learning multiple vectorizations of persistence diagrams on graphs. In turn, [61, 17, 58] apply topological features to GNNs to understand and improve the message passing between nodes. Finally, [33] proposes a topological graph layer with learnable filtration functions for graph and node classification tasks, while [13] advances the ideas of multipersistence to graph learning.

Zigzag Persistent Homology. Despite its promise, regular persistent homology does not explicitly model the geometric and topological information from a sequence of topological spaces. To address this limitation, a generalization of ordinary persistence, i.e., zigzag persistent homology, based on the theory of quiver representations, has been proposed by [12]. Zigzag persistence allows us to systematically describe how the homology changes over a sequence of spaces. Despite its high potential, especially in conjunction with analysis of time-evolving data, zigzag persistence still remains largely a theoretical concept, and its applications are still scarce. The recent results for time-dependent data studies include, for example, zigzag-based clustering [37], bifurcation analysis of dynamic systems [55], and time series forecasting [20]. The memory and computational cost of zigzag persistence remains one of its daunting challenges. Inspired by [44], we propose a novel, highly computationally efficient representation of zigzag persistence for learning time-evolving data, that is, the zigzag filtration curve.

Simplicial Neural Networks. Modeling higher-order interactions on graphs is an emerging direction in graph representation learning. While the role of higher-order structures for graph learning has been documented for a number of years [1, 34] and involves such diverse applications as graph signal processing in image recognition [23] and dynamics of disease transmission in biological networks, the integration of higher-order graph substructures into deep learning on graphs has emerged only in 2020. As shown by [6, 50], higher-order network structures can be leveraged to boost graph learning performance. Indeed, several recent approaches [24, 49, 8, 18] propose to leverage simplicial information to build neural networks on graphs. However, none of these simplicial neural networks (SNNs) is integrated with a topology-based graph convolution layer that would allow us to learn both time-aware persistent topological features and the simplicial geometry of graphs. In this paper, we propose ZFC-SHCN to address this limitation.
3 Time-Aware Topological Learning with Zigzag Curves

Spatio-temporal Graph Construction. A spatio-temporal graph is a collection of snapshots at different time steps, denoted by G = {G_1, G_2, · · · , G_T}, where T is the maximum timestamp. Here G_t = (V_t, E_t, A_t, X_t) is the graph observed at time step t ∈ [1, T], where V_t is a finite set of |V| = N nodes, E_t is a set of edges, A_t ∈ R^{N×N} is the adjacency matrix, and X_t ∈ R^{N×d} is the node feature matrix. Specifically, each row of X_t is a d-dimensional feature vector of the corresponding node. For the sake of notation, wherever applicable below, we omit the subscript t and denote the graph G_t at time t as G.

Background on Ordinary Persistence. Tools of ordinary persistence, or persistent homology (PH), allow us to study salient data shape patterns along various dimensions. By shape here we broadly understand data properties that are invariant under continuous transformations, that is, transformations that do not alter "holes" in the data, for example, bending, twisting, and stretching. The key idea is to choose some suitable scale parameter ν and then to study a graph G not as a single object but as a nested sequence of graphs, or graph filtration, G_1 ⊆ . . . ⊆ G_n = G, which is induced by monotonic changes of the scale ν. For example, if G is an edge-weighted graph (V, E, w) with weight function w : E → R, then for each ν_j, j = 1, . . . , n, we set G_{≤ν_j} = (V, E, w^{−1}((−∞, ν_j])), yielding the induced edge-weighted filtration. We can also consider only induced subgraphs of G with maximal degree of ν_j for each j = 1, . . . , n, resulting in the degree sublevel set filtration. (For more discussion on graph filtrations see [30].) Armed with this construction, we can track which shape patterns, for example, independent components, loops, and voids, emerge as the scale ν varies. To make the process of pattern counting more systematic and efficient, we build an abstract simplicial complex K(G_j) on each G_j. We also record complex indices j_b (birth) and j_d (death) at which we first or last observe each shape feature. Topological features with longer lifespans are said to persist and are likelier to yield important information on the structural organization of G.

Learning Shapes of Time-Conditioned Data with Zigzag Persistence. This construction enables us to extract the key topological descriptors from a single graph G. However, in our case, we observe not a single graph but a sequence of time-evolving graphs {G_1, . . . , G_T}. How can we track shape signatures which are not just individualistic for each time stamp but characterize intrinsic properties of the observed object over time? One approach to bringing PH tools to the analysis of time-conditioned objects is zigzag persistence. Based on the theory of quiver representations, zigzag persistence generalizes ordinary persistence to track characteristics of graphs (or other topological spaces) with inclusions going in different directions [12, 11]. In particular, given a time-indexed sequence of graphs {G_1, . . . , G_T}, we first form a set of graph inclusions over time,

G_1 ↪ G_1 ∪ G_2 ↩ G_2 ↪ G_2 ∪ G_3 ↩ G_3 ↪ G_3 ∪ G_4 ↩ G_4 ↪ · · ·

and then assess the compatibility of persistent topological features across unions of graphs. That is, we record indices at which topological features (dis)appear, for some given scale ν*. If for a given ν* a topological feature ρ (i.e., a p-dimensional hole, 0 ≤ p ≤ K, where K is the dimension of the simplicial complex K(G)) is first recorded in K(G_j), we say that the feature's birth is j, and if ρ first appears in K(G_j ∪ G_{j+1}), we record its birth as j + 1/2. In turn, if ρ is last seen in K(G_j), we record its death as j, while if it is last seen in K(G_j ∪ G_{j+1}), we say that its death is at j + 1/2. Let J be the set of all observed topological features for a given ν*. Collecting the births and deaths over J, we summarize all extracted information as a multiset D_{ν*} = {(b_ρ, d_ρ) ∈ R² | b_ρ < d_ρ, ρ ∈ J}, called a zigzag persistence diagram (ZPD), where b_ρ and d_ρ are the birth and death of the topological feature ρ, respectively.
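The alternating sequence itself is straightforward to assemble; the sketch below (using networkx, with hypothetical inputs) builds it and reports the number of connected components of each space as a crude per-space summary. Full zigzag persistence additionally matches features across the inclusion maps, which dedicated libraries such as Dionysus implement; this sketch only illustrates the sequence construction.

```python
import networkx as nx

def zigzag_sequence(graphs):
    """Interleave each snapshot G_j with the union G_j ∪ G_{j+1}, placing spaces
    at indices 1, 1.5, 2, 2.5, ..., T to mirror the j and j + 1/2 convention."""
    seq = [(1.0, graphs[0])]
    for j in range(1, len(graphs)):
        seq.append((j + 0.5, nx.compose(graphs[j - 1], graphs[j])))  # union graph
        seq.append((j + 1.0, graphs[j]))
    return seq

def betti0_along_zigzag(graphs):
    """Crude summary for undirected snapshots: Betti-0 (number of connected
    components) of every space in the zigzag sequence, without feature matching."""
    return [(idx, nx.number_connected_components(g))
            for idx, g in zigzag_sequence(graphs)]
```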
This makes zigzag persistence particularly attractive for the analysis of dynamic objects which are naturally indexed by time. However, the idea of zigzag persistence is applicable far beyond learning time-evolving objects. Nevertheless, zigzag persistence still remains largely a theoretical concept, with yet only a handful of applications, and one of the roadblocks hindering a broader proliferation of zigzag-based methods in practice is their computational cost. Here we take a step toward bringing a more computationally efficient summary of zigzag persistence to real-world applications.

Time-Aware Zigzag Filtration Curves. Consider a sequence of time intervals associated with a zigzag filtration over a time period [t_1, t_N]:

(t_1, t_1 + 1/2), (t_1 + 1/2, t_2), (t_2, t_2 + 1/2), . . . , (t_{N−1} + 1/2, t_N).

Let DgmZZ_{ν*} be the resulting ZPD for a given ν* and M be the number of off-diagonal topological features in the ZPD DgmZZ_{ν*}. Inspired by the recent results on stabilized Betti sequences by [35] and filtration curves by [44] for ordinary persistence, we propose a new simple and computationally efficient summary of zigzag persistence, called a Zigzag Filtration Curve.

Definition 3.1 (Zigzag Filtration Curve (ZFC)). The zigzag filtration curve evaluated at Δt_i^− = (t_{i−1} + 1/2, t_i), i ∈ {1, 2, . . . , N}, for a given ν*, is defined as

ZFC^p_{ν*}(Δt_i^−) = Σ_{j=1}^{M} ξ_i(t_{b_j}, t_{d_j}) ω_i,

where (t_{b_j}, t_{d_j}) ∈ R² is a vector containing the birth and death of the j-th off-diagonal p-dimensional topological feature in DgmZZ_{ν*} (as such, t_{b_j} < t_{d_j}), j ∈ {1, 2, . . . , M}, 0 ≤ p ≤ K; ξ_i : R² → R is some suitable Lipschitz continuous function with Lipschitz constant L_i, for example, a Gaussian density; and ω_i > 0, i ∈ {1, 2, . . . , N}, are weights such that Σ_i ω_i = 1. The zigzag filtration curve at Δt_i^+ = (t_i, t_i + 1/2) is defined analogously. (For the sake of notational simplicity, wherever applicable in the further exposition we suppress the index p in ZFC.)

Motivated by [35], here as the Lipschitz continuous function ξ_i for intervals Δt_i^−, we use a Gaussian density f with mean (t_{i−1} + 1/2, t_i), while for intervals Δt_i^+, we set the mean of f to (t_i, t_i + 1/2), i = 1, 2, . . . , N. For both Δt_i^− and Δt_i^+, we choose the 2 × 2 variance-covariance matrix Σ to be the identity matrix. (See Appendix ?? for more discussion on sensitivity analysis.) Another suitable choice of ξ is the arctan function.
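A direct transcription of Definition 3.1 under the Gaussian choice of ξ reads as follows; this is our own illustrative sketch, with the Δt_i^− and Δt_i^+ bookkeeping compressed into a single list of Gaussian means.

```python
import numpy as np

def zigzag_filtration_curve(diagram, means, weights=None):
    """Sketch of Definition 3.1 with the Gaussian choice of xi.

    diagram: (M, 2) array of (birth, death) pairs of one homology dimension p,
             taken from the zigzag persistence diagram at a fixed scale nu*.
    means:   length-N list of 2D Gaussian means, e.g. (t_{i-1} + 0.5, t_i) for
             Delta t_i^-; the Delta t_i^+ case only changes these means.
    """
    diagram = np.asarray(diagram, dtype=float)
    if weights is None:
        weights = np.full(len(means), 1.0 / len(means))  # sum_i w_i = 1
    curve = []
    for mu, w in zip(means, weights):
        diff = diagram - np.asarray(mu, dtype=float)     # (M, 2)
        # Isotropic bivariate Gaussian density with identity covariance.
        dens = np.exp(-0.5 * (diff ** 2).sum(axis=1)) / (2.0 * np.pi)
        curve.append(w * dens.sum())
    return np.array(curve)                               # one value per interval
```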
As we show below, the proposed ZFC also enjoys important theoretical stability guarantees in terms of the Wasserstein-1 distance.

Proposition 3.2 (Stability of Zigzag Filtration Curve). Let DgmZZ_{ν∗} be a zigzag persistence diagram and DgmZZ′_{ν∗} be its perturbed copy such that W_1(DgmZZ_{ν∗}, DgmZZ′_{ν∗}) < ϵ, where W_1 is the Wasserstein-1 distance. Then, ZFC is stable with respect to the Wasserstein-1 distance.

In practice, topological features of various dimensions p, p = 0, 1, ..., K, may play different roles in the learning task performance, and these roles are not known a priori. Hence, to harness the time-conditioned information encoded in the ZFCs corresponding to different dimensions p, we propose Multi-Zigzag Filtration Curves (M-ZFCs), M-ZFCs_{ν∗} ∈ R^{K×(N−1)/2}, by stacking ZFC^0, ZFC^1, ..., ZFC^K. Figure ?? in Appendix ?? shows both the 0- and 1-dimensional ZFCs obtained from the proposed construction. In the following section, we demonstrate how ZFC can be integrated into neural network architectures for graph learning tasks.

4 Zigzag Filtration Curve Based Supra-Hodge Convolution Networks

Given a graph G and its historical ω-step graph signals X_ω = {X_{t−ω+1}, ..., X_t} ∈ R^{ω×N×F} (F is the node feature dimensionality), the time-series forecasting problem is to learn a mapping function f that maps the historical data {X_{t−ω+1}, ..., X_t} into the next h-step data {X_{t+1}, ..., X_{t+h}}. The mapping relation is represented as

{X_{t−ω+1}, ..., X_t} −f→ {X_{t+1}, ..., X_{t+h}}.

4.1 Graph convolution in the spatial dimension

Given the node embedding dictionary W^ϕ = (w^ϕ_1, w^ϕ_2, ..., w^ϕ_N) ∈ R^{N×d_c} (where w^ϕ_u ∈ R^{d_c} and d_c is the dimension of the node embedding), we aim to seek a non-negative function S_{u,v} = G(w^ϕ_u, w^ϕ_v) which represents the pairwise similarity between any two nodes u and v. Concretely, the multiplication between W^ϕ and (W^ϕ)^⊤ can (i) give a sum pooling of second-order features from the outer product of all the embedding vector pairs (w^ϕ_u, w^ϕ_v) and (ii) infer the hidden spatial dependencies of nodes:

S_{uv} = G(w^ϕ_u, w^ϕ_v) = exp(ReLU(w^ϕ_u (w^ϕ_v)^⊤)) / ∑_{u=1}^{N} exp(ReLU(w^ϕ_u (w^ϕ_v)^⊤)),

where ReLU(·) = max(0, ·) is a nonlinear activation function, used to proactively eliminate weak connections, and the softmax normalization yields the learned graph S. Inspired by the recent advancements in random walk-based graph embedding learning [47, 26], we perform a graph convolution in the spatial dimension, feeding a power series of the learned graph S with varying random walk steps {1, 2, ..., r} (r ∈ Z^+), as follows:

H^{(ℓ+1)}_{t,GC} = σ(Stack(I, S, ..., S^r) H^{(ℓ)}_{t,GC} Θ^{(ℓ)}_{GC}),   (1)

where σ(·) stands for a nonlinear activation function, Stack(·) is the function which stacks the r powered learned graphs, H^{(ℓ)}_{t,GC} and H^{(ℓ+1)}_{t,GC} are the input and output activations for layer ℓ (where H^{(0)}_{t,GC} = X_t ∈ R^{N×F}), and Θ^{(ℓ)}_{GC} ∈ R^{d^{GC}_ℓ × d^{GC}_{ℓ+1}} is the ℓ-th layer's matrix of trainable weights.
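A minimal PyTorch sketch of this adaptive spatial convolution follows (our own reading of Eq. (1), with assumed layer sizes; in particular, we realize Stack(·) by concatenating the features propagated by I, S, ..., S^r along the feature dimension, which the authors' implementation may handle differently). It forms S by normalizing exp(ReLU(W^ϕ (W^ϕ)^⊤)) over u and then applies Eq. (1).

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGraphConv(nn.Module):
    def __init__(self, n_nodes, d_c, d_in, d_out, r=2):
        super().__init__()
        self.w_phi = nn.Parameter(torch.randn(n_nodes, d_c))       # W^ϕ, node embedding dictionary
        self.theta = nn.Linear((r + 1) * d_in, d_out, bias=False)  # Θ_GC acting on stacked hops
        self.r = r

    def forward(self, x):                                # x: (N, d_in), e.g. X_t
        logits = F.relu(self.w_phi @ self.w_phi.t())     # ReLU(W^ϕ (W^ϕ)^⊤)
        s = F.softmax(logits, dim=0)                     # normalize over u, as in the text
        h, hops = x, [x]
        for _ in range(self.r):                          # propagate with S, S^2, ..., S^r
            h = s @ h
            hops.append(h)
        return torch.relu(self.theta(torch.cat(hops, dim=-1)))  # Eq. (1)

conv = SpatialGraphConv(n_nodes=5, d_c=8, d_in=3, d_out=16)
print(conv(torch.randn(5, 3)).shape)                     # torch.Size([5, 16])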
Next, we introduce representation learning of the higher-order graph (sub)structures using the supra-Hodge Laplacian, which allows us to systematically leverage the underlying topological information.

4.2 Supra-Hodge convolution in the temporal dimension

Time-evolving data such as multivariate time series, spatio-temporal processes, and dynamic networks often exhibit highly complex dependencies among their substructures that go far beyond what can be described by dyadic (or pairwise) interactions among nodes. Instead, such higher-order polyadic interactions can be systematically addressed using the Hodge theory. In particular, the discrete Hodge theory allows us to generalize the notion of the standard combinatorial graph Laplacian, which describes diffusion on a graph G from node to node via edges, to diffusion over higher-order substructures of G [43, 6]. In turn, higher-order substructures can be modeled as k-simplices of G. (See Appendix ?? for background information on Hodge Laplacians.) Convolutional architectures on simplicial complexes based on the associated concepts of the Hodge theory have emerged as a recent direction in graph neural networks but have not yet been applied to learning time-evolving data. Our goal here is to introduce the notion of simplicial convolution and the ideas of Hodge Laplacians to time-aware learning. In particular, to capture time-conditioned higher-order interactions on G and to describe the diffusion of information over simplices along the temporal dimension, we build a supra-Hodge convolution operation based on multiplex network representation learning. (In the following, for simplicity, notation without the sub/superscript k stands for node-level quantities, and in our experiments we always consider k ∈ Z^+.)

First, given the historical spatio-temporal network series G_{t−ω+1:t} = {G_{t−ω+1}, G_{t−ω+2}, ..., G_t}, we consider a directed, connected, node-aligned multiplex network, which is made up of ω layers with N nodes on each layer. That is, the adjacency matrix A^α = {a^α_{uv}}_{N×N} (where α ∈ {t−ω+1, ..., t}) defines the intra-connections between nodes u and v in layer α, and a distance matrix D^{αβ} = {d^{αβ}_{uu}}_{N×N} quantifies the transition probability of moving from node u of layer α to node u of layer β. (Here β > α, since we consider information diffusion procedures only along the temporal dimension.) Next, based on the discrete Hodge theory, we propose a new Hodge k-Laplacian for multiplex graphs, called the supra-Hodge k-Laplacian L^Sup_k ∈ R^{ϕ_k ω × ϕ_k ω}:

L^Sup_k = [ (L^{11}_k)^r   D^{12}_{k+1}   ···   D^{1ω}_{k+1}
            0              (L^{22}_k)^r   ···   D^{2ω}_{k+1}
            ⋮              ⋮              ⋱     ⋮
            0              0              ···   (L^{ωω}_k)^r ],   (2)

where L^{αα}_k is the Hodge k-Laplacian of layer α, D_{k+1} is the diagonal matrix of degrees of each k-simplex, i.e., D_{k+1} = max(diag(|B_{k+1}|1), I), where B_{k+1} is the k-simplex-to-(k+1)-simplex incidence matrix, and the r-th power of L^{αα}_k represents an r-step random walk on the Hodge k-Laplacian of layer α, which allows every k-simplex to accumulate information from its neighbors. Hence, when k = 1, we can infer the spatial dependencies between each pair of edges and capture meaningful edge information in both the spatial and the temporal dimensions through the lens of the supra-Hodge 1-Laplacian. For instance, in molecule networks, each node represents an atom and each edge is a bond connecting two atoms; the bond (i.e., edge) features include bond type, ring status, and molecular charge, which are closely related to atom (i.e., node) features (such as atomic total and partial charges). Since the goal of the forecasting task is to predict node (i.e., 0-simplex) attribute(s) in the next few time steps, we propose a novel diffusion supra-Hodge convolution on the sliding window G_{t−ω+1:t}. We then update the nodes' representations by transforming the multiplex k-simplex embedding to nodes via incidence matrices:

H^{(ℓ+1)}_{t,k,SH} = σ(L^Sup_k H^{(ℓ)}_{t,k,SH} Θ^{(ℓ)}_{k,SH}),   (3)

H^{(ℓ+1)}_{t,SH} = (B^⊤_1 ··· B^⊤_k) H^{(ℓ+1)}_{t,k,SH},   (4)

where (i) in Equation 3, Θ^{(ℓ)}_{k,SH} ∈ R^{d^{SH}_{k;ℓ} × d^{SH}_{k;ℓ+1}} is a learnable filter matrix for layer ℓ (here d^{SH}_{k;ℓ} and d^{SH}_{k;ℓ+1} are the input and output dimensions of the ℓ-th layer), and H^{(ℓ)}_{t,k,SH} and H^{(ℓ+1)}_{t,k,SH} are the input and output activations for layer ℓ, where H^{(0)}_{t,k,SH} = X̄_{k;t−ω+1:t} ∈ R^{ϕ_k ω × d^in_k} and the historical k-simplex features of the spatio-temporal networks, X_{k;t−ω+1:t} = {X_{k;t−ω+1}, X_{k;t−ω+2}, ..., X_{k;t}} ∈ R^{ϕ_k × ω × d^in_k}, are reshaped into a matrix X̄_{k;t−ω+1:t} of shape ϕ_k ω × d^in_k; and (ii) in Equation 4, we transform the k-simplex embedding H^{(ℓ+1)}_{t,k,SH} to the node embedding H^{(ℓ+1)}_{t,SH} ∈ R^{N × d^{SH}_{k;ℓ+1}} through incidence matrices.
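To make the block structure of Eq. (2) explicit, the NumPy sketch below (our own illustration; the random per-layer Laplacians and diagonal couplings merely stand in for the actual L^{αα}_k and D^{αβ}_{k+1}) assembles the block upper-triangular supra-Hodge k-Laplacian for ω layers with ϕ_k k-simplices each.

import numpy as np

def supra_hodge_laplacian(layer_laplacians, coupling, r=2):
    """Assemble Eq. (2): r-th powers of the per-layer Hodge k-Laplacians on the
    diagonal, coupling matrices D^{αβ}_{k+1} above the diagonal, zeros below.

    layer_laplacians : list of ω arrays, each of shape (ϕ_k, ϕ_k)
    coupling         : dict mapping (α, β) with α < β to a (ϕ_k, ϕ_k) array
    """
    omega = len(layer_laplacians)
    phi = layer_laplacians[0].shape[0]
    L = np.zeros((phi * omega, phi * omega))
    for a in range(omega):
        L[a * phi:(a + 1) * phi, a * phi:(a + 1) * phi] = \
            np.linalg.matrix_power(layer_laplacians[a], r)
        for b in range(a + 1, omega):
            L[a * phi:(a + 1) * phi, b * phi:(b + 1) * phi] = coupling[(a, b)]
    return L

# Toy inputs: ω = 3 layers with ϕ_k = 4 k-simplices per layer (illustrative only).
rng = np.random.default_rng(0)
Ls = [rng.random((4, 4)) for _ in range(3)]
D = {(a, b): np.diag(rng.random(4)) for a in range(3) for b in range(a + 1, 3)}
print(supra_hodge_laplacian(Ls, D).shape)  # (12, 12)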
4.3 ZFC convolution: a bridge between the spatial and time dimensions

Armed with the representation learning of graph (sub)structures at each timestamp, we now discuss the ZFC convolution, which allows us to preserve and propagate both spatial and time-aware topological information simultaneously. The intuition behind the ZFC convolution is that it learns a strong connection between the two dimensions via two 1D convolution layers, i.e., time-wise and node-wise. The ZFC convolution consists of three key components: (i) a linear embedding on M-ZFCs, which learns the importance of time-aware topological features for each node to form a time-dimension-specific node embedding; (ii) a time-wise 1D convolution layer, which gathers time-aware topological features from the entire space into a compact set; and (iii) a node-wise 1D convolution layer, which captures relations between different nodes. The resulting ZFC convolution operation over M-ZFCs_ω is defined as

H_{t,M-ZFC} = F_θ(F_ψ(Θ_{M-ZFC} M-ZFCs_ω)^⊤)^⊤,   (5)

where ω is the size of the window for sequence learning, M-ZFCs_ω denotes the M-ZFCs feature extracted from the time window of size ω, Θ_{M-ZFC} ∈ R^{N×d_q} is a weight matrix to be learned, F_θ and F_ψ are 1D convolutional layers, and H_{t,M-ZFC} ∈ R^{N×d^{M-ZFC}_out} is the d^{M-ZFC}_out-dimensional output. We then combine the embeddings from the graph convolution, the M-ZFCs convolution, and the supra-Hodge convolution to get the final embedding H^{(ℓ+1)}_{t,out}:

H^{(ℓ+1)}_{t,out} = [H^{(ℓ+1)}_{t,GC}, H_{t,M-ZFC}, H^{(ℓ+1)}_{t,SH}],   (6)

where [·, ·, ·] denotes the concatenation of the outputs from the three convolution operations, and H^{(ℓ+1)}_{t,out} ∈ R^{N×d_out}, with d_out = d^{GC}_{ℓ+1} + d^{M-ZFC}_out + d^{SH}_{ℓ+1}.
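A minimal PyTorch sketch of Eq. (5) follows (our own, with assumed tensor shapes: in particular, we treat M-ZFCs_ω as a d_q × n_steps matrix so that the dimensions compose; the authors' exact configuration may differ). The module applies the linear embedding Θ_{M-ZFC}, a time-wise 1D convolution F_ψ, and a node-wise 1D convolution F_θ.

import torch
import torch.nn as nn

class ZFCConv(nn.Module):
    def __init__(self, n_nodes, d_q, n_steps, d_out):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(n_nodes, d_q))                 # Θ_M-ZFC ∈ R^{N×d_q}
        self.f_psi = nn.Conv1d(n_nodes, n_nodes, kernel_size=3, padding=1)   # time-wise conv
        self.f_theta = nn.Conv1d(n_steps, d_out, kernel_size=3, padding=1)   # node-wise conv

    def forward(self, m_zfc):                   # m_zfc: (d_q, n_steps)
        h = self.theta @ m_zfc                  # per-node embedding, (N, n_steps)
        h = self.f_psi(h.unsqueeze(0))          # F_ψ: convolve along the time axis
        h = self.f_theta(h.transpose(1, 2))     # F_θ: convolve along the node axis
        return h.squeeze(0).t()                 # H_{t,M-ZFC} ∈ R^{N×d_out}

zfc_conv = ZFCConv(n_nodes=5, d_q=4, n_steps=6, d_out=8)
print(zfc_conv(torch.randn(4, 6)).shape)        # torch.Size([5, 8])

The resulting H_{t,M-ZFC} is then concatenated with the spatial and supra-Hodge embeddings as in Eq. (6).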
4.4 Gated Recurrent Unit with ZFC-SHCN

To describe the complex spatio-temporal dependencies among time series and assess the hidden state of nodes at a future timestamp, we feed the final embedding H^{(ℓ+1)}_{t,out} into Gated Recurrent Units (GRUs). Formally, we set the forward propagation equations of the GRUs as

ℜ_t = η(W_ℜ [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_ℜ),
ℑ_t = η(W_ℑ [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_ℑ),
Ψ_t = tanh(W_Ψ [ℑ_t ⊙ Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_Ψ),
Ψ̃_t = ℜ_t ⊙ Ψ_{t−1} + (1 − ℜ_t) ⊙ Ψ_t,

where η(·) is an activation function (e.g., ReLU, LeakyReLU), ⊙ is the elementwise product, ℜ_t is the update gate, and ℑ_t is the reset gate. Here b_ℜ, b_ℑ, b_Ψ, W_ℜ, W_ℑ, and W_Ψ are learnable parameters, while [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] and Ψ_t are the input and output of the GRU model, respectively. We then obtain Ψ̃_t, which contains both the spatio-temporal and the time-aware information.

5 Experiments

5.1 Datasets

We validate our ZFC-SHCN model on six diverse data types: (i) COVID-19 datasets [51]: CA, PA, and TX represent the numbers of COVID-19 hospitalizations in California (CA), Pennsylvania (PA), and Texas (TX), respectively; (ii) traffic datasets [16]: PeMSD4 and PeMSD8, two real-time traffic datasets from California; (iii) synthetic multivariate time-series (MTS) datasets based on vector autoregression (VAR) [29, 45], where the VAR model is a generalization of the univariate AR process to more than one time-evolving component; (iv) daily surface air temperature in CA, PA, and TX over 02/01/2020–12/31/2020; (v) Bytom token prices on the Ethereum blockchain over 07/27/2017–05/07/2018 [41, 53]; and (vi) wind speed data from 57 stations on the East Coast. The results on (i)–(iii) are presented in the main body, and the analysis of (iv) and (v) is in Appendices ?? and ??. The detailed description of each dataset is in Appendix ??. We also report results on the wind speed dataset in Appendix ??.

5.2 Baselines

We compare our proposed ZFC-SHCN with 14 state-of-the-art baselines (SOAs): FC-LSTM [54], SFM [60], N-BEATS [46], DCRNN [42], LSTNet [38], STGCN [59], TCN [4], DeepState [48], GraphWaveNet [57], DeepGLO [52], LRGCN [39], AGCRN [3], StemGNN [10], and Z-GCNETs [20].

5.3 Experimental settings

We implement ZFC-SHCN in PyTorch on an NVIDIA GeForce RTX 3090 GPU. We optimize all models using the Adam optimizer for a maximum of 200 epochs. The learning rate is searched in {0.001, 0.003, 0.005, 0.01, 0.05} and the embedding dimension in {1, 2, 3, 5, 10}. ZFC-SHCN is trained with batch sizes of 64 and 8 on PeMSD4 and PeMSD8, respectively. On both the COVID-19 and the surface air temperature datasets (i.e., CA, PA, and TX), we set the batch size to 8. We train two 1D convolutional layers for ZFC representation learning with the same hidden layer dimension n_hid, where n_hid ∈ {8, 16, 32, 64, 128}. For PeMSD4 and PeMSD8, we consider a window size ω = 12 and a horizon h = 3; for both the COVID-19 and the surface air temperature datasets, a window size ω = 5 and a horizon h = 15; for the two simulated VAR datasets VAR_{T1} and VAR_{T2}, a window size ω = 10, a horizon h = 5, and a batch size of 8; for Bytom, a window size ω = 7, a horizon h = 7, and a batch size of 8; and for the wind speed dataset, a window size ω = 12, a horizon h = 12, and a batch size of 8. All models are evaluated in terms of the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE), and the Mean Absolute Percentage Error (MAPE). The best results are shown in bold font, and the second-best results are shown with dotted underlines. We also perform a one-sided two-sample t-test between the best result and the best performance achieved by the runner-up, where *, **, and *** denote p-values < 0.1, 0.05, and 0.01 (i.e., significant, statistically significant, and highly statistically significant results, respectively). Code is available at https://github.com/zfcshcn/ZFC-SHCN.git.

5.4 Experimental results

Real datasets The experimental results on the PeMSD4 and PeMSD8 traffic data are reported in Table 2. As Table 2 shows, ZFC-SHCN achieves the best MAE, RMSE, and MAPE compared with the SOAs on both PeMSD4 and PeMSD8. Compared to RNN-based methods such as FC-LSTM, SFM, N-BEATS, LSTNet, and TCN, ZFC-SHCN achieves relative gains in RMSE over the runner-ups ranging from 17.68% to 65.41% on both PeMSD4 and PeMSD8. In turn, DCRNN, STGCN, GraphWaveNet, AGCRN, and StemGNN focus only on learning node-level representations. Compared to them, ZFC-SHCN captures interactions and encodes higher-order structural correlations beyond pairwise relations among nodes, yielding relative gains from 2.06% to 5.63% in RMSE on the traffic datasets. In addition, we compare ZFC-SHCN with the method based on the zigzag persistence image, i.e., Z-GCNETs, and find that ZFC-SHCN outperforms Z-GCNETs by 1.75% on PeMSD4 and by 5.36% on PeMSD8 in terms of RMSE. Table 3 presents the COVID-19 hospitalization prediction results (RMSE) in CA, PA, and TX, where we observe the following. First, our proposed ZFC-SHCN achieves state-of-the-art performance on all three datasets.
For instance, ZFC-SHCN yields 3.61%, 1.47%, and 65.55% relative gains in RMSE over the runner-ups (including both GCN-based and zigzag persistence image-based methods) on the three biosurveillance datasets. These results indicate that the ZFC mechanism and the higher-order representation learning module play significant roles in capturing both topological information and higher-order structures. Second, as shown in Figure ?? in Appendix ??, we find that, compared to the runner-up (i.e., Z-GCNETs), the predicted values of COVID-19 hospitalizations are more consistent with the ground truth. Finally, Tables ?? and ?? in Appendix ?? present the overall prediction performances of ZFC-SHCN and representative baselines on the surface air temperature and Ethereum blockchain datasets. We find that our proposed ZFC-SHCN consistently outperforms all baselines with either a significant or (highly) statistically significant margin across all data, except surface air temperature in TX, where ZFC-SHCN still yields the best performance across all models.

Synthetic datasets The evaluation results on the two VAR datasets are summarized in Table 1. Compared to the three strongest baselines (i.e., AGCRN, StemGNN, and Z-GCNETs), we observe that our proposed ZFC-SHCN consistently yields the best performance on all synthetic datasets. More precisely, ZFC-SHCN outperforms the runner-ups by 8.89% to 10.52% on VAR_{T1} and VAR_{T2}. Furthermore, to assess time-wise and higher-order network interactions, we use the global clustering coefficient (GCC) and the Euler-Poincaré characteristic (EPC) as measures of higher-order substructures [5]. We find that the average GCC for VAR_{T1} and VAR_{T2} is 4.96 and 5.87, respectively, while the average EPC for VAR_{T1} and VAR_{T2} is 7.47 and 6.91, respectively. Interestingly (although it could be expected), higher GCC and lower EPC tend to be associated with higher relative gains delivered by ZFC-SHCN. Finally, in Appendix ??, we present the sensitivity analysis for ZFC as a function of the covariance matrix in the VAR models.

5.5 Ablation studies

To evaluate the contribution of the different components of our ZFC-SHCN model, we perform an extensive ablation study with three setups: (i) ZFC-SHCN without graph convolution in the spatial dimension (W/o Graph convolution in spatial dimension), (ii) ZFC-SHCN without ZFC convolution (W/o ZFC convolution), and (iii) ZFC-SHCN without supra-Hodge convolution (W/o Supra-Hodge convolution). The experimental results are shown in Table 4 and confirm the validity of each component. As Table 4 indicates, the comparison with ZFC-SHCN w/o ZFC convolution shows that the zigzag homological feature is vital for capturing the topological structure of the spatio-temporal graph, and that our proposed convolution operation on ZFC significantly improves forecasting performance. The comparison with ZFC-SHCN w/o supra-Hodge convolution illustrates the significance of higher-order structure representation learning in guiding the model to capture information on higher-order interactions. Also, the comparison with ZFC-SHCN w/o graph convolution in the spatial dimension demonstrates that the graph learned from trainable weights can capture hidden information and enhance (multivariate) time-series representation learning.
5.6 Computational complexity

For higher-order simplices, the incidence matrices B_1 and B_2 can be calculated efficiently with complexity O(N + M) and O(M + Q), respectively, where N is the number of 0-simplices (i.e., nodes), M is the number of 1-simplices (i.e., edges), and Q is the number of 2-simplices (i.e., filled triangles). The computational complexity of ZFC is O(Υ^δ) [2, 22], where Υ represents the number of points in the time interval and δ ∈ [2, 2.373). The computational complexity of the overall approach is

O(N^2 + Υ^δ + Ξ_k ω F_k d_out + Ξ_k ω^2 d_out/2 + d_out ∑_{ℓ=t−ω}^{t−1} Ξ^{(ℓ)}_{k+1} + W_GRU),

including (i) graph convolution in the spatial dimension: O(N^2); (ii) the zigzag filtration curve: O(Υ^δ); (iii) supra-Hodge convolution in the temporal dimension: O(Ξ_k ω F_k d_out + Ξ_k ω^2 d_out/2 + d_out ∑_{ℓ=t−ω}^{t−1} Ξ^{(ℓ)}_{k+1}), where Ξ_k is the number of k-simplices, F_k is the number of k-simplex attribute features, ω is the sliding window size, d_out is the output dimension of the supra-Hodge convolution layer, and Ξ^{(ℓ)}_{k+1} is the number of (k+1)-simplices at the ℓ-th layer; and (iv) the GRU: O(W_GRU). We also compare our ZFC-SHCN with the most recent approach based on multipersistence-GNN [19] (i.e., TAMP-S2GCNets). We find that ZFC-SHCN yields performance that is on par with or better than TAMP-S2GCNets, while significantly improving computational efficiency (see Appendix ?? for more details). More details on the running time comparison can be found in Appendix ??.

6 Conclusion

We have proposed a novel framework for time-aware deep learning of time-evolving objects which takes advantage of both the higher-order interactions among the data substructures, described as simplices, and the most intrinsic time-conditioned topological information exhibited by the object, characterized via zigzag persistent homology. By leveraging the power of the simplicial convolution operation and of zigzag persistence for time-indexed data, ZFC-SHCN has been shown to yield highly competitive forecasting performance while requiring fewer computational resources than its closest competitors. Still, computational complexity and the limited theoretical results on statistical inference for zigzag persistence remain among the major limitations of ZFC and, more generally, of all topological methods for time-dependent processes. In the future, we plan to investigate these theoretical and methodological challenges and to extend the ZFC-SHCN idea to anomaly detection in streaming time-dependent processes.

Acknowledgments

This work was partially supported by the National Science Foundation (NSF) under awards # ECCS-2039701 and # ECCS-2039716, the Department of the Navy, Office of Naval Research (ONR) under ONR award # N00014-21-1-2530, the C3.ai Digital Transformation Institute, and NASA AIST grant 21-AIST21_2-0059. Part of this material is also based upon work supported by (while serving at) the NSF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, ONR, C3.ai DTI, or NASA.
1. What is the focus and contribution of the paper on time-aware GNN networks?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and effectiveness?
3. What are the weaknesses of the paper, especially regarding the experiment comparisons?
4. Do you have any concerns or suggestions for improving the experimental analysis?
5. Are there any limitations or areas for future work regarding the proposed ZFC-SHCN approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a new time-aware GNN architecture called Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN). Specifically, the network comprises (1) simplicial neural networks with an efficient zigzag persistence curve and (2) a new temporal multiplex graph representation module. In practice, the experimental results on several synthetic and real-world datasets demonstrate the effectiveness and efficiency of the proposed ZFC-SHCN.

Strengths And Weaknesses
Strengths
The paper is well-written and well-organized. The idea, amplifying the power of SNNs with a time-conditioned topological knowledge representation in the form of zigzag persistence, is interesting and novel. The experimental results show ZFC-SHCN outperforms other baselines. Codes are provided.

Weaknesses
It is worth noting that the gains from the ZFC convolution and the supra-Hodge convolution in Table 4 are not as significant as expected. It would be fairer if the authors could provide more comparison experiments, e.g., replacing ZFC with a zigzag persistence image (ZPI) to show the gain from ZFC, replacing the supra-Hodge convolution with GCN_temp to show the gain from the supra-Hodge convolution, etc. Alternatively, a theoretical analysis of the differences could be given.

Questions
Same as weaknesses.

Limitations
N/A
NIPS
Title Time-Conditioned Dances with Simplicial Complexes: Zigzag Filtration Curve based Supra-Hodge Convolution Networks for Time-series Forecasting

Abstract Graph neural networks (GNNs) offer a powerful new alternative for multivariate time series forecasting, demonstrating remarkable success in a variety of spatio-temporal applications, from urban flow monitoring systems to health care informatics to financial analytics. Yet, such GNN models predominantly capture only lower-order interactions, that is, pairwise relations among nodes, and also largely ignore intrinsic time-conditioned information on the underlying topology of multivariate time series. To address these limitations, we propose a new time-aware GNN architecture which amplifies the power of the recently emerged simplicial neural networks with a time-conditioned topological knowledge representation in the form of zigzag persistence. That is, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), is built upon two main components: (i) a new, highly computationally efficient zigzag persistence curve which allows us to systematically encode time-conditioned topological information, and (ii) a new temporal multiplex graph representation module for learning higher-order network interactions. We discuss theoretical properties of the proposed time-conditioned topological knowledge representation and extensively validate the new time-aware ZFC-SHCN model in conjunction with time series forecasting on a broad range of synthetic and real-world datasets: traffic flows, COVID-19 biosurveillance, Ethereum blockchain, surface air temperature, wind energy, and vector autoregressions. Our experiments demonstrate that ZFC-SHCN achieves state-of-the-art performance with lower computational costs.

1 Introduction

Over the last few years, graph neural networks (GNNs) have emerged as a powerful new alternative to traditional statistical and machine learning models in conjunction with univariate and multivariate time series forecasting tasks [27, 4, 40, 28]. Such successful applications of GNNs range from urban traffic analytics to forecasting COVID-19 hospitalizations to electrocardiogram monitoring [3, 36, 56, 10, 20]. However, most GNNs remain inherently static and do not explicitly incorporate the inherent time characteristics of the encoded knowledge [59, 42]. In turn, limitations in capturing the time dimension in the knowledge representation and learning mechanisms for time-evolving data result in GNNs becoming less relevant over time and, hence, requiring frequent updates. Furthermore, GNNs tend to focus predominantly on information propagation among nodes and are limited in their ability to describe polyadic relationships among multiple substructures of multivariate time series or multi-node interactions in dynamic graphs. However, as recently shown by [6, 21], such higher-order interactions might be the key to a better understanding of the underlying mechanisms of many real-world graph-structured phenomena. This challenge of polyadic graph interactions has recently been addressed by [24, 8, 7], who propose to model higher-order substructures as simplices. Then, by borrowing concepts from the Hodge theory, these approaches allow for generalizing the combinatorial graph Laplacian, which describes diffusion from node to node via edges, to diffusion over simplices.
Such a Hodge Laplacian construction allows for extending the notion of convolution to simplicial convolution, and the resulting simplicial neural networks (SNNs) are arguably at one of the frontiers of graph learning today. However, these ideas have not yet been applied in conjunction with knowledge representation and learning of time-evolving objects. Our goal here is to bridge the emerging concept of time-aware learning with the recent notions of simplicial convolution, with a particular focus on explicitly integrating the core time-conditioned topological characteristics. In particular, we amplify the power of SNNs with a time-conditioned topological knowledge representation in the form of zigzag persistence for time-indexed data and, more specifically, its new, highly computationally efficient summary, the Zigzag Filtration Curve. As a result, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), enables us to systematically learn the most intrinsic time-conditioned information both on the underlying topology of the time-evolving data and on the higher-order interactions among various substructures. The significance of our contributions can be summarized as follows:

• ZFC-SHCN is the first approach bringing the concepts of simplicial convolution and SNNs to time-aware learning.

• We propose a new, highly computationally efficient summary of persistence for time-indexed data, the Zigzag Filtration Curve, and derive its theoretical stability guarantees.

• We validate the utility of ZFC-SHCN in conjunction with forecasting multivariate time series from diverse application domains such as traffic networks, COVID-19 biosurveillance, surface air temperature, token prices on the Ethereum blockchain, wind energy, and vector autoregressions. Our findings indicate that ZFC-SHCN delivers state-of-the-art forecasting performance, with a significant margin, and demonstrates higher computational efficiency.

2 Related Work

Time-series Forecasting and Spatio-temporal Graph Convolutional Networks Time-series forecasting is one of the core subfields of the statistical sciences [15, 9]. Most recently, a number of unconventional machine learning approaches to time-series forecasting have appeared. In particular, graph convolutional network (GCN)-based models for spatio-temporal network data have emerged as a promising forecasting tool. For instance, DCRNN [42] introduces spectral graph convolution into spatio-temporal network data prediction, which can capture spatio-temporal dependencies. STGCN [59] uses convolutional neural networks (CNNs) to model temporal correlations. Moreover, to infer hidden inter-dependencies between different traffic variables, [57, 3, 10] conduct a convolution operation in the spatial dimension through adaptive adjacency matrices. The recent Z-GCNETs [20] develops a zigzag topological layer, equipped with a zigzag persistence image, within a GCN framework to model temporal correlations. Another promising recent direction for time series forecasting beyond GCNs is the fractional-order dynamical model proposed by [27]. This approach offers an alternating scheme to determine the best estimate of the model parameters and unknown stimuli. In turn, [28] proposes a Padé approximation based exponential neural operator (Padé Exp), aiming to improve time-series forecasting with exponential operators in neural operator learning schemes. However, all of the above methods focus only on node-level representations.
In contrast, in this paper we focus on both higher-order structure representation and topological information learning.

Topological Data Analysis for Graph Learning Persistent homology [25, 62] is a suite of tools within topological data analysis (TDA) that provides a way of measuring topological features of shapes and functions. The extracted topological features have recently been shown to provide invaluable insights into hidden mechanisms behind the organization and functionality of graph-structured data. In particular, topological features have been actively used for node classification [61, 17], link prediction [58], and graph classification [31, 32, 14, 30]. For instance, [31] is one of the first approaches to integrate topological features into neural networks for graph classification, while [14] proposes a versatile framework for learning multiple vectorizations of persistence diagrams on graphs. In turn, [61, 17, 58] apply topological features to GNNs to understand and improve the message passing between nodes. Finally, [33] proposes a topological graph layer with learnable filtration functions for graph and node classification tasks, while [13] advances the ideas of multipersistence to graph learning.

Zigzag Persistent Homology Despite its promise, regular persistent homology does not explicitly model the geometric and topological information of a sequence of topological spaces. To address this limitation, a generalization of ordinary persistence, i.e., zigzag persistent homology, based on the theory of quiver representations, has been proposed by [12]. Zigzag persistence allows us to systematically describe how the homology changes over a sequence of spaces. Despite its high potential, especially in conjunction with the analysis of time-evolving data, zigzag persistence still remains largely a theoretical concept, and its applications are still scarce. Recent results for time-dependent data include, for example, zigzag-based clustering [37], bifurcation analysis of dynamic systems [55], and time series forecasting [20]. The memory and computational costs of zigzag persistence are among its most daunting challenges. Inspired by [44], we propose a novel, highly computationally efficient representation of zigzag persistence for learning time-evolving data, that is, the zigzag filtration curve.

Simplicial Neural Networks Modeling higher-order interactions on graphs is an emerging direction in graph representation learning. While the role of higher-order structures in graph learning has been documented for a number of years [1, 34] and spans such diverse applications as graph signal processing in image recognition [23] and the dynamics of disease transmission in biological networks, the integration of higher-order graph substructures into deep learning on graphs emerged only in 2020. As shown by [6, 50], higher-order network structures can be leveraged to boost graph learning performance. Indeed, several recent approaches [24, 49, 8, 18] propose to leverage simplicial information in neural networks on graphs. However, none of these simplicial neural networks (SNNs) is integrated with a topology-based graph convolution layer that would allow learning both time-aware persistent topological features and the simplicial geometry of graphs. In this paper, we propose ZFC-SHCN to address this limitation.
1. What is the focus of the paper in terms of capturing information beyond pairwise relations in GNN? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical properties? 3. What are the weaknesses of the paper regarding its complexity and potential applications? 4. Are there any concerns or limitations regarding the proposed method's ability to capture diverse types of tasks?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper addresses the problem of capturing information beyond pairwise relations in GNNs. A time-conditioned topological knowledge representation is proposed, and related theoretical properties are presented. The topic is very interesting and influential.
Strengths And Weaknesses
Strengths
- Good idea.
- There are theoretical stability guarantees.
Weaknesses
- The complexity should be discussed.
- It is not very convincing that the expressiveness is sufficient for a variety of tasks.
Questions
The complexity should be discussed, and some other types of tasks should be evaluated.
Limitations
No suggestions.
NIPS
Title Time-Conditioned Dances with Simplicial Complexes: Zigzag Filtration Curve based Supra-Hodge Convolution Networks for Time-series Forecasting Abstract Graph neural networks (GNNs) offer a powerful new alternative for multivariate time series forecasting, demonstrating remarkable success in a variety of spatio-temporal applications, from urban flow monitoring systems to health care informatics to financial analytics. Yet, such GNN models predominantly capture only lower-order interactions, that is, pairwise relations among nodes, and also largely ignore intrinsic time-conditioned information on the underlying topology of multivariate time series. To address these limitations, we propose a new time-aware GNN architecture which amplifies the power of the recently emerged simplicial neural networks with a time-conditioned topological knowledge representation in the form of zigzag persistence. That is, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), is built upon two main components: (i) a new, highly computationally efficient zigzag persistence curve which allows us to systematically encode time-conditioned topological information, and (ii) a new temporal multiplex graph representation module for learning higher-order network interactions. We discuss theoretical properties of the proposed time-conditioned topological knowledge representation and extensively validate the new time-aware ZFC-SHCN model in conjunction with time series forecasting on a broad range of synthetic and real-world datasets: traffic flows, COVID-19 biosurveillance, Ethereum blockchain, surface air temperature, wind energy, and vector autoregressions. Our experiments demonstrate that ZFC-SHCN achieves state-of-the-art performance with lower requirements on computational costs. 1 Introduction Over the last few years, graph neural networks (GNNs) have emerged as a powerful new alternative to traditional statistical and machine learning models in conjunction with univariate and multivariate time series forecasting tasks [27, 4, 40, 28]. Such successful applications of GNNs range from urban traffic analytics to forecasting COVID-19 hospitalizations to electrocardiogram monitoring [3, 36, 56, 10, 20]. However, most GNNs remain inherently static and do not explicitly incorporate the inherent time characteristics of the encoded knowledge [59, 42]. In turn, limitations in capturing the time dimension in the knowledge representation and learning mechanisms for time-evolving data result in GNNs becoming less relevant over time and, hence, requiring frequent updates. Furthermore, GNNs tend to focus predominantly on information propagation among nodes and are also limited in their ability to describe polyadic relationships among multiple substructures of multivariate time series or multi-node interactions in dynamic graphs. However, as recently shown by [6, 21], such higher-order interactions might be the key toward a better understanding of the underlying mechanisms of many real-world graph-structured phenomena. This challenge of polyadic graph interactions has recently been addressed by [24, 8, 7], who propose to model higher-order substructures as simplices. Then, by borrowing the concepts of Hodge theory, these approaches generalize the combinatorial graph Laplacian, which describes diffusion from node to node via edges, to diffusion over simplices.
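To make this generalization concrete, the following minimal NumPy sketch (our illustration, not the authors' code) builds the node-edge and edge-triangle incidence matrices of a toy complex and assembles the Hodge 1-Laplacian $L_1 = B_1^\top B_1 + B_2 B_2^\top$, which drives diffusion over edges rather than over nodes.

```python
# Hodge 1-Laplacian of a single filled triangle (toy sketch).
import numpy as np

nodes = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]   # 1-simplices, vertices listed in increasing order
triangles = [(0, 1, 2)]            # 2-simplices (filled triangles)

# B1: oriented node-to-edge incidence matrix of shape (N, M).
B1 = np.zeros((len(nodes), len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1.0, 1.0

# B2: oriented edge-to-triangle incidence matrix of shape (M, Q); the boundary
# of the triangle (a, b, c) is (b, c) - (a, c) + (a, b).
edge_index = {e: j for j, e in enumerate(edges)}
B2 = np.zeros((len(edges), len(triangles)))
for q, (a, b, c) in enumerate(triangles):
    B2[edge_index[(a, b)], q] = 1.0
    B2[edge_index[(a, c)], q] = -1.0
    B2[edge_index[(b, c)], q] = 1.0

L1 = B1.T @ B1 + B2 @ B2.T         # diffusion operator over edges (1-simplices)
print(L1)
```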
Such a Hodge Laplacian construction allows for extending the notion of the convolution operation to simplicial convolution, and the resulting simplicial neural networks (SNNs) are arguably one of the frontlines in graph learning today. However, these ideas have not yet been applied in conjunction with knowledge representation and learning of time-evolving objects. Our goal here is to bridge the emerging concept of time-aware learning with the recent notions of simplicial convolution, with a particular focus on explicitly integrating the core time-conditioned topological characteristics. In particular, we amplify the power of SNNs with a time-conditioned topological knowledge representation in the form of zigzag persistence for time-indexed data and, more specifically, its new, highly computationally efficient summary, the Zigzag Filtration Curve. As a result, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), enables us to systematically learn the most intrinsic time-conditioned information both on the underlying topology of the time-evolving data and on higher-order interactions among various substructures. The significance of our contributions can be summarized as follows: • ZFC-SHCN is the first approach bringing the concepts of simplicial convolution and SNNs to time-aware learning. • We propose a new, highly computationally efficient summary of persistence for time-indexed data, the Zigzag Filtration Curve, and derive its theoretical stability guarantees. • We validate the utility of ZFC-SHCN in conjunction with forecasting multivariate time series from diverse application domains such as traffic networks, COVID-19 biosurveillance, surface air temperature, token prices on the Ethereum blockchain, wind energy, and vector autoregressions. Our findings indicate that ZFC-SHCN delivers state-of-the-art forecasting performance by a significant margin and demonstrates higher computational efficiency. 2 Related Work Time-series Forecasting and Spatio-temporal Graph Convolutional Networks Time-series forecasting is one of the core subfields of the statistical sciences [15, 9]. Most recently, a number of unconventional machine learning approaches to time-series forecasting have appeared. In particular, graph convolutional network (GCN)-based models for spatio-temporal network data have emerged as a promising forecasting tool. For instance, DCRNN [42] introduces spectral graph convolution into spatio-temporal network data prediction, which can capture spatio-temporal dependencies. STGCN [59] uses convolutional neural networks (CNNs) to model temporal correlations. Moreover, to infer hidden inter-dependencies between different traffic variables, [57, 3, 10] conduct a convolution operation in the spatial dimension through adaptive adjacency matrices. The recent Z-GCNETs [20] develops a zigzag topological layer, equipped with a zigzag persistence image, within a GCN framework to model temporal correlations. Another promising recent direction for time series forecasting beyond GCNs is the fractional-order dynamical model proposed by [27]. This approach offers an alternating scheme to determine the best estimate of the model parameters and unknown stimuli. In turn, [28] proposes a Padé approximation based exponential neural operator (Padé Exp), aiming to improve time-series forecasting with exponential operators in neural operator learning schemes. However, all of the above methods focus only on node-level representations.
In contrast, in this paper we focus on both higher-order structure representation and topological information learning. Topological Data Analysis for Graph Learning Persistent homology [25, 62] is a suite of tools within topological data analysis (TDA) that provides a way of measuring topological features of shapes and functions. The extracted topological features have recently been shown to provide invaluable insights into the hidden mechanisms behind the organization and functionality of graph-structured data. In particular, topological features have been actively used for node classification [61, 17], link prediction [58], and graph classification [31, 32, 14, 30]. For instance, [31] is one of the first approaches to integrate topological features into neural networks for graph classification, while [14] proposes a versatile framework for learning multiple vectorizations of persistence diagrams on graphs. In turn, [61, 17, 58] apply topological features to GNNs to understand and improve the message passing between nodes. Finally, [33] proposes a topological graph layer with learnable filtration functions for graph and node classification tasks, while [13] advances the ideas of multipersistence to graph learning. Zigzag Persistent Homology Despite its promise, regular persistent homology does not explicitly model the geometric and topological information from a sequence of topological spaces. To address this limitation, a generalization of ordinary persistence, i.e., zigzag persistent homology, based on the theory of quiver representations, has been proposed by [12]. Zigzag persistence allows us to systematically describe how the homology changes over a sequence of spaces. Despite its high potential, especially in conjunction with the analysis of time-evolving data, zigzag persistence still remains largely a theoretical concept, and its applications are still scarce. Recent results for time-dependent data include, for example, zigzag-based clustering [37], bifurcation analysis of dynamic systems [55], and time series forecasting [20]. The memory and computational cost of zigzag persistence is one of its daunting challenges. Inspired by [44], we propose a novel, highly computationally efficient representation of zigzag persistence for learning time-evolving data, namely, the zigzag filtration curve. Simplicial Neural Networks Modeling higher-order interactions on graphs is an emerging direction in graph representation learning. While the role of higher-order structures in graph learning has been documented for a number of years [1, 34] and involves such diverse applications as graph signal processing in image recognition [23] and the dynamics of disease transmission and biological networks, the integration of higher-order graph substructures into deep learning on graphs has emerged only in 2020. As shown by [6, 50], higher-order network structures can be leveraged to boost graph learning performance. Indeed, several recent approaches [24, 49, 8, 18] propose to leverage simplicial information in neural networks on graphs. However, none of these Simplicial Neural Networks (SNNs) is integrated with a topology-based graph convolution layer that would allow us to learn both time-aware persistent topological features and the simplicial geometry of graphs. In this paper, we propose ZFC-SHCN to address this limitation.
3 Time-Aware Topological Learning with Zigzag Curves Spatio-temporal Graph Construction A spatio-temporal graph is a collection of snapshots at different time steps, denoted by $\mathcal{G} = \{G_1, G_2, \cdots, G_T\}$, where $T$ is the maximum timestamp. Here $G_t = (\mathcal{V}_t, \mathcal{E}_t, A_t, X_t)$ is the graph observed at time step $t \in [1, T]$, where $\mathcal{V}_t$ is a finite set of $|\mathcal{V}| = N$ nodes, $\mathcal{E}_t$ is a set of edges, $A_t \in \mathbb{R}^{N \times N}$ is the adjacency matrix, and $X_t \in \mathbb{R}^{N \times d}$ is the node feature matrix. Specifically, each row of $X_t$ is a $d$-dimensional feature vector of the corresponding node. For the sake of notation, wherever applicable below, we omit the subscript $t$ and denote the graph $G_t$ at time $t$ as $G$. Background on Ordinary Persistence Tools of ordinary persistence, or persistent homology (PH), allow us to study salient data shape patterns along various dimensions. By shape here we broadly understand data properties that are invariant under continuous transformations, that is, transformations that do not alter "holes" in the data, for example, bending, twisting, and stretching. The key idea is to choose some suitable scale parameter $\nu$ and then to study a graph $G$ not as a single object but as a nested sequence of graphs, or graph filtration, $G_1 \subseteq \ldots \subseteq G_n = G$, which is induced by monotonic changes of the scale $\nu$. For example, if $G$ is an edge-weighted graph $(\mathcal{V}, \mathcal{E}, w)$ with weight function $w: \mathcal{E} \to \mathbb{R}$, then for each $\nu_j$, $j = 1, \ldots, n$, we set $G_{\leq \nu_j} = (\mathcal{V}, \mathcal{E}, w^{-1}(-\infty, \nu_j])$, yielding the induced edge-weighted filtration. We can also consider only induced subgraphs of $G$ with maximal degree $\nu_j$ for each $j = 1, \ldots, n$, resulting in the degree sublevel set filtration. (For more discussion on graph filtrations see [30].) Armed with this construction, we can track which shape patterns, for example, independent components, loops, and voids, emerge as the scale $\nu$ varies. To make the process of pattern counting more systematic and efficient, we build an abstract simplicial complex $\mathcal{K}(G_j)$ on each $G_j$. We also record the complex indices $j_b$ (birth) and $j_d$ (death) at which we first or last observe each shape feature. Topological features with longer lifespans are said to persist and are likelier to yield important information on the structural organization of $G$. Learning Shapes of Time-Conditioned Data with Zigzag Persistence This construction enables us to extract the key topological descriptors from a single graph $G$. However, in our case, we observe not a single graph but a sequence of time-evolving graphs $\{G_1, \ldots, G_T\}$. How can we track shape signatures which are not specific to each individual time stamp but characterize intrinsic properties of the observed object over time? One approach for bringing PH tools to the analysis of time-conditioned objects is zigzag persistence. Based on the theory of quiver representations, zigzag persistence generalizes ordinary persistence to track characteristics of graphs (or other topological spaces) with inclusions going in different directions [12, 11]. In particular, given a time-indexed sequence of graphs $\{G_1, \ldots, G_T\}$, we first form a set of graph inclusions over time, $$G_1 \hookrightarrow G_1 \cup G_2 \hookleftarrow G_2 \hookrightarrow G_2 \cup G_3 \hookleftarrow G_3 \hookrightarrow G_3 \cup G_4 \hookleftarrow G_4 \hookrightarrow \cdots,$$ and then assess the compatibility of persistent topological features across the unions of graphs. That is, we record the indices at which topological features (dis)appear, for some given scale $\nu^*$.
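To illustrate the two constructions above, here is a minimal sketch, assuming networkx (a toy example rather than the paper's pipeline), of the edge-weighted sublevel graph $G_{\leq \nu}$ and of the interleaved zigzag sequence $G_1, G_1 \cup G_2, G_2, G_2 \cup G_3, \ldots$ whose inclusions are tracked over time.

```python
# Sublevel filtration of one snapshot and the zigzag union sequence (toy sketch).
import networkx as nx

def sublevel_graph(G: nx.Graph, nu: float) -> nx.Graph:
    """G_{<=nu}: keep the full vertex set, drop edges with weight > nu."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from((u, v) for u, v, w in G.edges(data="weight") if w <= nu)
    return H

def zigzag_sequence(graphs: list[nx.Graph]) -> list[nx.Graph]:
    """Interleave each snapshot with the union of consecutive snapshots."""
    seq = []
    for G_prev, G_next in zip(graphs, graphs[1:]):
        seq += [G_prev, nx.compose(G_prev, G_next)]  # G_j, then G_j united with G_{j+1}
    return seq + [graphs[-1]]

G1, G2 = nx.path_graph(4), nx.cycle_graph(4)
print([g.number_of_edges() for g in zigzag_sequence([G1, G2])])
```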
If for a given $\nu^*$ a topological feature $\rho$ (i.e., a $p$-dimensional hole, $0 \leq p \leq K$, where $K$ is the dimension of the simplicial complex $\mathcal{K}(G)$) is first recorded in $\mathcal{K}(G_j)$, we say that the feature's birth is $j$, and if $\rho$ first appears in $\mathcal{K}(G_j \cup G_{j+1})$, we record its birth as $j + 1/2$. In turn, if $\rho$ is last seen in $\mathcal{K}(G_j)$, we record its death as $j$, while if it is last seen in $\mathcal{K}(G_j \cup G_{j+1})$, we say that its death is at $j + 1/2$. Let $J$ be the set of all observed topological features for a given $\nu^*$. Collecting the births and deaths over $J$, we summarize all extracted information as a multiset $D_{\nu^*} = \{(b_\rho, d_\rho) \in \mathbb{R}^2 \mid b_\rho < d_\rho, \rho \in J\}$, called a zigzag persistence diagram (ZPD), where $b_\rho$ and $d_\rho$ are the birth and death of the topological feature $\rho$, respectively. This makes zigzag persistence particularly attractive for the analysis of dynamic objects which are naturally indexed by time. (The idea of zigzag persistence is, however, applicable far beyond learning time-evolving objects.) Nevertheless, zigzag persistence still remains largely a theoretical concept, with only a handful of applications so far, and one of the roadblocks hindering a broader proliferation of zigzag-based methods in practice is their computational cost. Here we take a step toward bringing a more computationally efficient summary of zigzag persistence to real-world applications. Time-Aware Zigzag Filtration Curves Consider a sequence of time intervals associated with a zigzag filtration over a time period $[t_1, t_N]$: $$(t_1, t_1 + \tfrac{1}{2}),\; (t_1 + \tfrac{1}{2}, t_2),\; (t_2, t_2 + \tfrac{1}{2}),\; \ldots,\; (t_{N-1} + \tfrac{1}{2}, t_N).$$ Let $\mathrm{Dgm}ZZ_{\nu^*}$ be the resulting ZPD for a given $\nu^*$ and $M$ be the number of off-diagonal topological features in the ZPD $\mathrm{Dgm}ZZ_{\nu^*}$. Inspired by the recent results on stabilized Betti sequences by [35] and filtration curves by [44] for ordinary persistence, we propose a new simple and computationally efficient summary of zigzag persistence, called a Zigzag Filtration Curve. Definition 3.1 (Zigzag Filtration Curve (ZFC)). The zigzag filtration curve evaluated at $\Delta t_i^- = (t_{i-1} + \tfrac{1}{2}, t_i)$, $i = \{1, 2, \ldots, N\}$, for a given $\nu^*$, is defined as $$\mathrm{ZFC}^p_{\nu^*}(\Delta t_i^-) = \sum_{j=1}^{M} \xi_i(t_{b_j}, t_{d_j})\,\omega_i,$$ where $(t_{b_j}, t_{d_j}) \in \mathbb{R}^2$ is a vector containing the birth and death of the $j$-th off-diagonal $p$-dimensional topological feature in $\mathrm{Dgm}ZZ_{\nu^*}$ (as such, $t_{b_j} < t_{d_j}$), $j = \{1, 2, \ldots, M\}$, $0 \leq p \leq K$; $\xi_i: \mathbb{R}^2 \to \mathbb{R}$ is some suitable Lipschitz continuous function with Lipschitz constant $L_i$, for example, a Gaussian density; and $\omega_i > 0$, $i = \{1, 2, \ldots, N\}$, are weights such that $\sum_i \omega_i = 1$. The zigzag filtration curve at $\Delta t_i^+ = (t_i, t_i + \tfrac{1}{2})$ is defined analogously. (For the sake of notational simplicity, wherever applicable in the further exposition we suppress the index $p$ in ZFC.) Motivated by [35], here as the Lipschitz continuous function $\xi_i$ for intervals $\Delta t_i^-$ we use a Gaussian density $f$ with mean $(t_{i-1} + 1/2, t_i)$, while for intervals $\Delta t_i^+$ we set the mean of $f$ to $(t_i, t_i + 1/2)$, $i = 1, 2, \ldots, N$. For both $\Delta t_i^-$ and $\Delta t_i^+$, we choose the $2 \times 2$ variance-covariance matrix $\Sigma$ to be the identity matrix. (See Appendix ?? for more discussion on sensitivity analysis.) Another suitable choice of $\xi$ is the arctan function. As we show below, the proposed ZFC also enjoys important theoretical stability guarantees in terms of the Wasserstein-1 distance. Proposition 3.2 (Stability of the Zigzag Filtration Curve). Let $\mathrm{Dgm}ZZ_{\nu^*}$ be a zigzag persistence diagram and $\mathrm{Dgm}ZZ'_{\nu^*}$ be its perturbed copy such that $W_1(\mathrm{Dgm}ZZ_{\nu^*}, \mathrm{Dgm}ZZ'_{\nu^*}) < \epsilon$, where $W_1$ is the Wasserstein-1 distance.
Then, the ZFC is stable with respect to the Wasserstein-1 distance. In practice, topological features of the various dimensions $p$, $p = 0, 1, \ldots, K$, may play different roles in the learning task performance, and these roles are not known a priori. Hence, to harness the time-conditioned information encoded in the ZFCs corresponding to the different dimensions $p$, we propose Multi-Zigzag Filtration Curves (M-ZFCs), $\text{M-ZFCs}_{\nu^*} \in \mathbb{R}^{K \times \frac{N-1}{2}}$, obtained by stacking $\mathrm{ZFC}_0, \mathrm{ZFC}_1, \ldots, \mathrm{ZFC}_K$. Figure ?? in Appendix ?? shows both the 0- and 1-dimensional curves obtained with the proposed ZFC. In the following section, we demonstrate how the ZFC can be integrated into neural network architectures for graph learning tasks. 4 Zigzag Filtration Curve Based Supra-Hodge Convolution Networks Given a graph $G$ and its historical $\omega$-step graph signals $\mathcal{X}_\omega = \{X_{t-\omega+1}, \ldots, X_t\} \in \mathbb{R}^{\omega \times N \times F}$ ($F$ is the node feature dimensionality), the time-series forecasting problem is to learn a mapping function $f$ that maps the historical data $\{X_{t-\omega+1}, \ldots, X_t\}$ into the next $h$-step data $\{X_{t+1}, \ldots, X_{t+h}\}$. The mapping relation is represented as $$\{X_{t-\omega+1}, \ldots, X_t\} \xrightarrow{f} \{X_{t+1}, \ldots, X_{t+h}\}.$$ 4.1 Graph convolution in the spatial dimension Given the node embedding dictionary $W^\phi = (w^\phi_1, w^\phi_2, \ldots, w^\phi_N) \in \mathbb{R}^{N \times d_c}$ (where $w^\phi_u \in \mathbb{R}^{d_c}$ and $d_c$ is the dimension of the node embedding), we aim to seek a non-negative function $S_{u,v} = \mathcal{G}(w^\phi_u, w^\phi_v)$ which represents the pairwise similarity between any two nodes $u$ and $v$. Concretely, the multiplication between $W^\phi$ and $(W^\phi)^\top$ can (i) give a sum pooling of second-order features from the outer product of all the embedding vector pairs $(w^\phi_u, w^\phi_v)$ and (ii) infer the hidden spatial dependencies of nodes: $$S_{uv} = \mathcal{G}(w^\phi_u, w^\phi_v) = \frac{\exp(\mathrm{ReLU}(w^\phi_u (w^\phi_v)^\top))}{\sum_{u=1}^{N} \exp(\mathrm{ReLU}(w^\phi_u (w^\phi_v)^\top))},$$ where $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$ is a nonlinear activation function used to proactively eliminate weak connections, and the softmax function is applied to normalize the learned graph $S$. Inspired by recent advancements in random walk-based graph embedding learning [47, 26], we perform a graph convolution in the spatial dimension, feeding a power series of the learned graph $S$ with varying random walk steps $\{1, 2, \cdots, r\}$ ($r \in \mathbb{Z}^+$), as follows: $$H^{(\ell+1)}_{t,GC} = \sigma(\mathrm{Stack}(I, S, \cdots, S^r)\, H^{(\ell)}_{t,GC}\, \Theta^{(\ell)}_{GC}), \quad (1)$$ where $\sigma(\cdot)$ stands for a nonlinear activation function, $\mathrm{Stack}(\cdot)$ is the function which stacks the $r$ powered learned graphs, $H^{(\ell)}_{t,GC}$ and $H^{(\ell+1)}_{t,GC}$ are the input and output activations of layer $\ell$ (where $H^{(0)}_{t,GC} = X_t \in \mathbb{R}^{N \times F}$), and $\Theta^{(\ell)}_{GC} \in \mathbb{R}^{d^{GC}_\ell \times d^{GC}_{\ell+1}}$ contains the $\ell$-th layer's trainable weights. Next, we introduce representation learning of the higher-order graph (sub)structures using the supra-Hodge Laplacian, which allows us to systematically leverage the underlying topological information. 4.2 Supra-Hodge convolution in the temporal dimension Time-evolving data such as multivariate time series, spatio-temporal processes, and dynamic networks often exhibit highly complex dependencies among their substructures that go far beyond what can be described by dyadic (or pairwise) interactions among nodes. Instead, such higher-order polyadic interactions can be systematically addressed using Hodge theory. In particular, discrete Hodge theory allows us to generalize the notion of the standard combinatorial graph Laplacian, which describes diffusion on a graph $G$ from node to node via edges, to diffusion over higher-order substructures of $G$ [43, 6]. In turn, higher-order substructures can be modeled as $k$-simplices of $G$. (See Appendix ??
for background information on Hodge Laplacians.) Convolutional architectures on simplicial complexes based on the associated concepts of Hodge theory have emerged as a recent direction in graph neural networks but have not yet been applied to learning time-evolving data. Our goal here is to introduce the notion of simplicial convolution and the ideas of Hodge Laplacians to time-aware learning. In particular, to capture time-conditioned higher-order interactions on $G$ and to describe the diffusion of information over simplices along the temporal dimension, we build a supra-Hodge convolution operation based on multiplex network representation learning. (In the following, for simplicity, notation without the sub/superscript $k$ stands for node-level quantities, and in our experiments we always consider $k \in \mathbb{Z}^+$.) First, given the historical spatio-temporal network series $\mathcal{G}_{t-\omega+1:t} = \{G_{t-\omega+1}, G_{t-\omega+2}, \ldots, G_t\}$, we consider a directed connected node-aligned multiplex network, which is made up of $\omega$ layers with $N$ nodes on each layer. That is, the adjacency matrix $A^\alpha = \{a^\alpha_{uv}\}_{N \times N}$ (where $\alpha \in \{t-\omega+1, \ldots, t\}$) defines the intra-connections between nodes $u$ and $v$ in layer $\alpha$, and a distance matrix $D^{\alpha\beta} = \{d^{\alpha\beta}_{uu}\}_{N \times N}$ quantifies the transition probability of moving from node $u$ of layer $\alpha$ to node $u$ of layer $\beta$. (Here $\beta > \alpha$, since we consider information diffusion procedures only along the temporal dimension.) Next, based on discrete Hodge theory, we propose a new Hodge $k$-Laplacian for multiplex graphs, called the supra-Hodge $k$-Laplacian $L^{Sup}_k \in \mathbb{R}^{\phi_k\omega \times \phi_k\omega}$: $$L^{Sup}_k = \begin{pmatrix} (L^{11}_k)^r & D^{12}_{k+1} & \cdots & D^{1\omega}_{k+1} \\ 0 & (L^{22}_k)^r & \cdots & D^{2\omega}_{k+1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & (L^{\omega\omega}_k)^r \end{pmatrix}, \quad (2)$$ where $L^{\alpha\alpha}_k$ is the Hodge $k$-Laplacian in layer $\alpha$, $D_{k+1}$ is the diagonal matrix of degrees of each $k$-simplex, i.e., $D_{k+1} = \max(\mathrm{diag}(|B_{k+1}|\mathbf{1}), I)$, where $B_{k+1}$ is the $k$-simplex-to-$(k+1)$-simplex incidence matrix, and the $r$-th power of $L^{\alpha\alpha}_k$ represents an $r$-step random walk on the Hodge $k$-Laplacian of layer $\alpha$, which allows every $k$-simplex to accumulate information from its neighbors. Hence, when $k = 1$, we can infer the spatial dependencies between each pair of edges and capture meaningful edge information in both the spatial and temporal dimensions through the lens of the supra-Hodge 1-Laplacian. For instance, in molecule networks, each node represents an atom and each edge is a bond connecting two atoms; the bond (i.e., edge) features include bond type, ring status, and molecular charge, which are closely related to atom (i.e., node) features (such as atomic total and partial charges). Since the goal of the forecasting task is to predict node (i.e., 0-simplex) attribute(s) in the next few time steps, we propose a novel diffusion supra-Hodge convolution on the sliding window $\mathcal{G}_{t-\omega+1:t}$. We then update the nodes' representations by transforming the multiplex $k$-simplex embedding to nodes via incidence matrices: $$H^{(\ell+1)}_{t,k,SH} = \sigma(L^{Sup}_k H^{(\ell)}_{t,k,SH} \Theta^{(\ell)}_{k,SH}), \quad (3)$$ $$H^{(\ell+1)}_{t,SH} = (B^\top_1 \cdots B^\top_k)\, H^{(\ell+1)}_{t,k,SH}, \quad (4)$$ where (i) in Equation 3: $\Theta^{(\ell)}_{k,SH} \in \mathbb{R}^{d^{SH}_{k;\ell} \times d^{SH}_{k;\ell+1}}$ is a learnable filter matrix for layer $\ell$ (here $d^{SH}_{k;\ell}$ and $d^{SH}_{k;\ell+1}$ are the intermediate and output dimensions of the $\ell$-th layer), and $H^{(\ell)}_{t,k,SH}$ and $H^{(\ell+1)}_{t,k,SH}$ are the input and output activations of layer $\ell$, where $H^{(0)}_{t,k,SH} = \bar{X}_{k;t-\omega+1:t} \in \mathbb{R}^{\phi_k\omega \times d^{in}_k}$ and the historical $k$-simplex features of the spatio-temporal networks
$X_{k;t-\omega+1:t} = \{X_{k;t-\omega+1}, X_{k;t-\omega+2}, \ldots, X_{k;t}\} \in \mathbb{R}^{\phi_k \times \omega \times d^{in}_k}$ are reshaped as a matrix $\bar{X}_{k;t-\omega+1:t}$ of shape $\phi_k\omega \times d^{in}_k$); and (ii) in Equation 4: we transform the $k$-simplex embedding $H^{(\ell+1)}_{t,k,SH}$ to the node embedding $H^{(\ell+1)}_{t,SH} \in \mathbb{R}^{N \times d^{SH}_{k;\ell+1}}$ through the incidence matrices. 4.3 ZFC convolution: a bridge between the spatial and time dimensions Armed with the representation learning of graph (sub)structures at each timestamp, we now discuss the ZFC convolution, which allows us to preserve and propagate both spatial and time-aware topological information simultaneously. The intuition behind the ZFC convolution is that it learns a strong connection between the two dimensions via two 1D convolution layers, i.e., time-wise and node-wise. The ZFC convolution consists of three key components: (i) a linear embedding on the M-ZFCs, which can learn the importance of the time-aware topological features for each node to form a time-dimension-specific node embedding; (ii) a time-wise 1D convolution layer, which gathers the time-aware topological features from the entire space into a compact set; and (iii) a node-wise 1D convolution layer, which can capture relations between different nodes. The resulting ZFC convolution operation over $\text{M-ZFCs}_\omega$ is defined as $$H_{t,\text{M-ZFC}} = F_\theta(F_\psi(\Theta_{\text{M-ZFC}}\,\text{M-ZFCs}_\omega)^\top)^\top, \quad (5)$$ where $\omega$ is the size of the window for sequence learning, $\text{M-ZFCs}_\omega$ denotes the M-ZFCs features extracted from the time window of size $\omega$, $\Theta_{\text{M-ZFC}} \in \mathbb{R}^{N \times d_q}$ is a weight matrix to be learned, $F_\theta$ and $F_\psi$ are 1D convolutional layers, and $H_{t,\text{M-ZFC}} \in \mathbb{R}^{N \times d^{\text{M-ZFC}}_{out}}$ is the $d^{\text{M-ZFC}}_{out}$-dimensional output. We then combine the embeddings from the graph convolution, the M-ZFCs convolution, and the supra-Hodge convolution to get the final embedding $H^{(\ell+1)}_{t,out}$: $$H^{(\ell+1)}_{t,out} = [H^{(\ell+1)}_{t,GC}, H_{t,\text{M-ZFC}}, H^{(\ell+1)}_{t,SH}], \quad (6)$$ where $[\cdot, \cdot, \cdot]$ denotes the concatenation of the outputs of the three convolution operations, and $H^{(\ell+1)}_{t,out} \in \mathbb{R}^{N \times d_{out}}$ (where $d_{out} = d^{GC}_{\ell+1} + d^{ZFC}_{out} + d^{SH}_{\ell+1}$). 4.4 Gated Recurrent Unit with ZFC-SHCN To describe the complex spatio-temporal dependencies among the time series and assess the hidden state of the nodes at a future timestamp, we feed the final embedding $H^{(\ell+1)}_{t,out}$ into Gated Recurrent Units (GRUs). Formally, we set the forward propagation equations of the GRUs as $$\Re_t = \eta(W_\Re[\Psi_{t-1}, H^{(\ell+1)}_{t,out}] + b_\Re), \quad \Im_t = \eta(W_\Im[\Psi_{t-1}, H^{(\ell+1)}_{t,out}] + b_\Im),$$ $$\Psi_t = \tanh(W_\Psi[\Im_t \odot \Psi_{t-1}, H^{(\ell+1)}_{t,out}] + b_\Psi), \quad \tilde{\Psi}_t = \Re_t \odot \Psi_{t-1} + (1 - \Re_t) \odot \Psi_t,$$ where $\eta(\cdot)$ is an activation function (e.g., ReLU, LeakyReLU), $\odot$ is the elementwise product, $\Re_t$ is the update gate, and $\Im_t$ is the reset gate. Here $b_\Re$, $b_\Im$, $b_\Psi$, $W_\Re$, $W_\Im$, and $W_\Psi$ are learnable parameters, while $[\Psi_{t-1}, H^{(\ell+1)}_{t,out}]$ and $\Psi_t$ are the input and output of the GRU model, respectively. We then obtain $\tilde{\Psi}_t$, which contains both the spatio-temporal and the time-aware information. 5 Experiments 5.1 Datasets We validate our ZFC-SHCN model on six diverse data types: (i) COVID-19 datasets [51]: CA, PA, and TX represent the numbers of COVID-19 hospitalizations in California (CA), Pennsylvania (PA), and Texas (TX), respectively; (ii) traffic datasets [16]: PeMSD4 and PeMSD8 are two real-time traffic datasets from California; (iii) synthetic multivariate time-series (MTS) datasets based on vector autoregression (VAR) [29, 45] (where the VAR model is a generalization of the univariate AR process with more than one time-evolving component); (iv) daily surface air temperature in CA, PA, and TX over 02/01/2020–12/31/2020; (v) Bytom token prices on the Ethereum blockchain over 07/27/2017–05/07/2018 [41, 53]; and (vi) wind speed data from 57 stations on the East Coast.
The results on (i)–(iii) are presented in the main body, and the analysis of (iv) and (v) is in Appendices ?? and ??. A detailed description of each dataset is given in Appendix ??. We also report results on the wind speed dataset in Appendix ??. 5.2 Baselines We compare our proposed ZFC-SHCN with 14 state-of-the-art baselines (SOAs), including FC-LSTM [54], SFM [60], N-BEATS [46], DCRNN [42], LSTNet [38], STGCN [59], TCN [4], DeepState [48], GraphWaveNet [57], DeepGLO [52], LRGCN [39], AGCRN [3], StemGNN [10], and Z-GCNETs [20]. 5.3 Experimental settings We implement ZFC-SHCN within a PyTorch framework on an NVIDIA GeForce RTX 3090 GPU. We optimize all the models using an Adam optimizer for a maximum of 200 epochs. The learning rate is searched in {0.001, 0.003, 0.005, 0.01, 0.05} and the embedding dimension is searched in {1, 2, 3, 5, 10}. Our ZFC-SHCN is trained with batch sizes of 64 and 8 on PeMSD4 and PeMSD8, respectively. On both the COVID-19 and surface air temperature datasets (i.e., CA, PA, and TX), we set the batch size to 8. We train two 1D convolutional layers for the ZFC representation learning with the same hidden layer dimension nhid, where nhid ∈ {8, 16, 32, 64, 128}. For PeMSD4 and PeMSD8, we consider the window size ω = 12 and the horizon h = 3; for both the COVID-19 and surface air temperature datasets, we consider a window size ω = 5 and a horizon h = 15; for the two simulated VAR datasets VAR_T1 and VAR_T2, we set the window size to ω = 10, the horizon to h = 5, and the batch size to 8; for Bytom, we consider the window size ω = 7 and the horizon h = 7, with a batch size of 8; for the wind speed dataset, we consider the window size ω = 12 and the horizon h = 12, with a batch size of 8. All models are evaluated in terms of the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). The best results are shown in bold font and the second-best results are shown with dotted underlines. We also perform a one-sided two-sample t-test between the best result and the best performance achieved by the runner-up, where *, **, and *** denote p-value < 0.1, 0.05, and 0.01 (i.e., significant, statistically significant, and highly statistically significant results, respectively). Code is available at https://github.com/zfcshcn/ZFC-SHCN.git. 5.4 Experimental results Real datasets The experimental results on the PeMSD4 and PeMSD8 traffic data are reported in Table 2. As Table 2 shows, ZFC-SHCN achieves the best MAE, RMSE, and MAPE compared with the SOAs on both PeMSD4 and PeMSD8. Compared to RNN-based methods such as FC-LSTM, SFM, N-BEATS, LSTNet, and TCN, ZFC-SHCN achieves relative gains in RMSE over the runner-ups ranging from 17.68% to 65.41% on both PeMSD4 and PeMSD8. In turn, DCRNN, STGCN, GraphWaveNet, AGCRN, and StemGNN focus only on learning node-level representations. Compared to them, ZFC-SHCN captures interactions and encodes higher-order structure correlations beyond pairwise relations among nodes and yields a relative gain of 2.06% to 5.63% in RMSE on the traffic datasets. In addition, we compare ZFC-SHCN with the method based on the zigzag persistence image, i.e., Z-GCNETs, and find that ZFC-SHCN outperforms Z-GCNETs by 1.75% on PeMSD4 and 5.36% on PeMSD8 in terms of RMSE. Table 3 presents the COVID-19 hospitalization prediction results (RMSE) in CA, PA, and TX, from which we observe the following. First, our proposed ZFC-SHCN achieves state-of-the-art performance on all three datasets.
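As an implementation-level illustration of the ZFC convolution of Equation 5, here is a rough PyTorch sketch chaining a linear embedding of the stacked zigzag filtration curves with a time-wise and a node-wise 1D convolution. The class name, tensor shapes, and kernel sizes are our assumptions for illustration; they are not taken from the released code.

```python
# Toy ZFC convolution: linear embedding + time-wise and node-wise 1D convolutions.
import torch
import torch.nn as nn

class ZFCConvolution(nn.Module):
    def __init__(self, n_nodes: int, k_dims: int, n_times: int, d_out: int):
        super().__init__()
        # Theta_{M-ZFC}: maps the K stacked curves to a node-specific embedding.
        self.theta = nn.Parameter(torch.randn(n_nodes, k_dims))
        # F_psi: time-wise 1D convolution (channels = nodes, length = time).
        self.f_psi = nn.Conv1d(n_nodes, n_nodes, kernel_size=3, padding=1)
        # F_theta: node-wise 1D convolution applied after transposing.
        self.f_theta = nn.Conv1d(n_times, d_out, kernel_size=3, padding=1)

    def forward(self, m_zfcs: torch.Tensor) -> torch.Tensor:
        # m_zfcs: (K, T) multi-zigzag filtration curves for the current window.
        h = self.theta @ m_zfcs              # (N, T) per-node curve embedding
        h = self.f_psi(h.unsqueeze(0))       # time-wise convolution: (1, N, T)
        h = self.f_theta(h.transpose(1, 2))  # node-wise convolution: (1, d_out, N)
        return h.squeeze(0).transpose(0, 1)  # H_{t, M-ZFC} of shape (N, d_out)

m_zfcs = torch.randn(2, 12)                  # K = 2 homology dimensions, T = 12
print(ZFCConvolution(n_nodes=30, k_dims=2, n_times=12, d_out=16)(m_zfcs).shape)
```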
1. What is the focus and contribution of the paper regarding time series prediction of graphs with topological data analysis (TDA)? 2. What are the strengths of the proposed approach, particularly in terms of its ability to incorporate topological information? 3. What are the weaknesses of the paper, specifically regarding experiment setup and feature necessity? 4. Do you have any concerns about the choice of features introduced by the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
In this paper, the authors propose a method for predicting time series of graphs with the addition of TDA concepts. There is no discussion of graph construction methods here, but rather the general setting of the existence of time-varying graphs. The proposed method is basically a conventional graph convolution-recurrent neural network with an additional mechanism to consider topological information. The zigzag filtration curve and supra-Hodge convolution are used as topological information. The authors also prove the stability of ZFC and demonstrate its effectiveness in comparison to conventional methods in their experiments.
Strengths And Weaknesses
Strengths
- The authors provide a framework for time series prediction of graphs that includes topological information that has not been previously taken into account.
- Introduced zigzag filtration as topological information.
- The authors have organized the mathematical background of the algorithm before introducing the algorithm.
- The authors have tested the effectiveness of this method compared to many conventional methods, from multiple perspectives, on multiple datasets.
Weaknesses
- Although the authors have compared the proposed method with conventional methods that do not include topology information, they have not examined whether the proposed method is appropriate as a method that takes topology information into account.
- There are doubts about some of the experimental setups.
Overall, the prediction of graph time series is a very important issue, including from a practical viewpoint, and it is worthwhile to have provided an effective framework for this problem with the addition of TDA-based information. Since I have some doubts, I would like to make a final decision based on the answers to the following questions.
Questions
1. The proposed method introduces zigzag filtration features and supra-Hodge convolution features in addition to the conventional graph convolution features. Although the overall framework has been evaluated, it is not clear which features have what effect. Since it is not possible to determine whether all of the elements are necessary, I wonder if a comparison with a system that excludes any one kind of information would make this observation possible. Are such comparisons being made?
2. Creating features using zigzag filtration is a natural concept, but I am sure there are other possibilities. Are there any comparisons with other methods? Of course, since the effectiveness of the system as a whole has been demonstrated, it is fine if this is only a proposal as a first step and improvements using other TDA methods are left as future work.
3. The experimental results, especially in Table 3, show that the proposed method seems to use different models for different evaluation indicators on a single problem. The same model should be used when comparing different indicators in the evaluation. If you want to show that each indicator can be tuned for separately, you should compare them separately.
Limitations
The authors have clearly defined the application, clearly described the performance, and adequately addressed the limitations of their work.
Title Time-Conditioned Dances with Simplicial Complexes: Zigzag Filtration Curve based Supra-Hodge Convolution Networks for Time-series Forecasting Abstract Graph neural networks (GNNs) offer a new powerful alternative for multivariate time series forecasting, demonstrating remarkable success in a variety of spatiotemporal applications, from urban flow monitoring systems to health care informatics to financial analytics. Yet, such GNN models pre-dominantly capture only lower order interactions, that is, pairwise relations among nodes, and also largely ignore intrinsic time-conditioned information on the underlying topology of multivariate time series. To address these limitations, we propose a new time-aware GNN architecture which amplifies the power of the recently emerged simplicial neural networks with a time-conditioned topological knowledge representation in a form of zigzag persistence. That is, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN) is built upon the two main components: (i) a new highly computationally efficient zigzag persistence curve which allows us to systematically encode time-conditioned topological information, and (ii) a new temporal multiplex graph representation module for learning higherorder network interactions. We discuss theoretical properties of the proposed time-conditioned topological knowledge representation and extensively validate the new time-aware ZFC-SHCN model in conjunction with time series forecasting on a broad range of synthetic and real-world datasets: traffic flows, COVID-19 biosurveillance, Ethereum blockchain, surface air temperature, wind energy, and vector autoregressions. Our experiments demonstrate that the ZFC-SHCN achieves the state-of-the-art performance with lower requirements on computational costs. 1 Introduction Over the last few years, graph neural networks (GNNs) have emerged as a new powerful alternative to traditional statistical and machine learning models in conjunction with univariate and multivariate time series forecasting tasks [27, 4, 40, 40, 28]. Such successful applications of GNNs range from urban traffic analytics to forecasting COVID-19 hospitalizations to electrocardiogram monitoring [3, 36, 56, 10, 20]. However, most GNNs remain inherently static and do not explicitly incorporate the inherent time characteristics of the encoded knowledge [59, 42]. In turn, limitations in capturing the 36th Conference on Neural Information Processing Systems (NeurIPS 2022). time dimension in the knowledge representation and learning mechanisms for time-evolving data results in GNNs becoming less relevant over time and, hence, requiring frequent updates. Furthermore, GNNs tend to pre-dominantly focus only on information propagation among nodes and also be limited in their ability to describe polyadic relationships among multiple substructures of multivariate time series or multi-node interactions in dynamics graphs. However, as recently shown by [6, 21], such higher-order interactions might be the key toward better understanding of the underlying mechanisms of many real-world graph-structured phenomena. This challenge on polyadic graph interactions has been recently addressed by [24, 8, 7] who propose to model higher order substructures as simplices. Then, by borrowing the concepts of the Hodge theory, these approaches allow for generalization of the ideas of the combinatorial graph Laplacian which describes a diffusion from node to node via edges to a case of diffusion over simplices. 
Such Hodge Laplacian construction allows for extending the notion of convolution operation to simplicial convolution, and the resulting simplicial neural networks (SNNs) are arguably one of the frontlines in graph learning today. However, these ideas have never been yet applied in conjunction with knowledge representation and learning of time-evolving objects. Our goal here is to bridge the emerging concept of time-aware learning with the recent notions of simplicial convolution, with a particular focus on explicitly integrating the core time-conditioned topological characteristics. In particular, we amplify the power of SNNs with a time-conditioned topological knowledge representation in a form of zigzag persistence for time-indexed data and, more specifically, its new highly computationally efficient summary, Zigzag Filtration Curve. As a result, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN) enables us to systematically learn the most intrinsic time-conditioned information both on the underlying topology of the time-evolving data and higher-order interactions among various substructures. Significance of our contributions can be summarized as follows: • ZFC-SHCN is the first approach bringing the concepts of simplicial convolution and SNNs to time-aware learning. • We propose a new highly computationally efficient summary of persistence for time-indexed data, Zigzag Filtration Curve, and derive its theoretical stability guarantees. • We validate the utility of ZFC-SHCN in conjunction with forecasting multivariate time series from diverse application domains such as traffic networks, COVID-19 biosurveillance, surface air temperature, token prices on Ethereum blockchain, wind energy, and vector autoregressions. Our findings indicate that ZFC-SHCN delivers the state-of-the-art forecasting performance, with a significant margin and demonstrates higher computational efficiency. 2 Related Work Time-series Forecasting and Spatio-temporal Graph Convolutional Networks Time-series forecasting is one of the core subfields in statistical sciences [15, 9]. Most recently, there have appeared a number of unconventional machine learning approaches to time-series forecasting. In particular, graph convolutional network (GCN)-based models for spatio-temporal network data have emerged as a promising forecasting tool. For instance, DCRNN [42] introduces spectral graph convolution into spatio-temporal network data prediction, which can capture spatio-temporal dependencies. STGCN [59] uses convolutional neural networks (CNNs) to model temporal correlations. Moreover, to infer hidden inter-dependencies between different traffic variables, [57, 3, 10] conduct a convolution operation in spatial dimension through adaptive adjacency matrices. Recent Z-GCNETs [20] develops a zigzag topological layer equipped with a zigzag persistence image into a GCN framework to model temporal correlations. Another promising recent direction for time series forecasting beyond GCN is a fractional-order dynamical model proposed by [27]. This approach offers an alternating scheme to determine the best estimate of the model parameters and unknown stimuli. In turn, [28] proposes a Padé approximation based exponential neural operator (Padé Exp), aiming to improve time-series forecasting with exponential operators in neural operator learning schemes. However, all of the above methods only focus on node-level representations. 
In contrast, in this paper, we focus on both higher-order structure representation and topological information learning. Topological Data Analysis for Graph Learning Persistent homology [25, 62] is a suite of tools within topological data analysis (TDA) that provides a way for measuring topological features of shapes and functions. The extracted topological features have been recently shown to provide invaluable insights into hidden mechanisms behind the organization and functionality of graph structured data. In particular, topological features have been actively used for node classification [61, 17], link prediction [58], and graph classification [31, 32, 14, 30]. For instance, [31] is one of the first approaches to integrate topological features into neural networks for graph classification, while [14] proposes a versatile framework for learning multiple vectorizations of persistent diagrams on graphs. In turn, [61, 17, 58] apply topological features to GNNs to understand and improve the message passing between nodes. Finally, [33] proposes a topological graph layer with learnable filtration functions for graph and node classification tasks, while [13] advances the ideas of multipersistence to graph learning. Zigzag Persistent Homology Despite its promise, regular persistent homology does not explicitly model the geometric and topological information from a sequence of topological spaces. To address this limitation, a generalization of ordinary persistence, i.e., zigzag persistent homology, based on the theory of quiver representation, has been proposed by [12]. Zigzag persistence allows us to systematically describe how the homology changes over a sequence of spaces. Despite its high potential, especially in conjunction with analysis of time-evolving data, zigzag persistence still remains largely a theoretical concept, and its applications are yet scarce. The recent results for time-dependent data studies include, for example, zigzag-based clustering [37], bifurcation analysis of dynamic systems [55], and time series forecasting [20]. The memory and computational efficiency of zigzag persistence is one of the daunting challenges. Inspired by [44], we propose a novel highly computationally efficient representation of zigzag persistence for learning time-evolving data, that is, zigzag filtration curve. Simplicial Neural Networks Modeling higher-order interactions on graphs is an emerging direction in graph representation learning. While the role of higher-order structures for graph learning has been documented for a number of years [1, 34] and involves such diverse applications as graph signal processing in image recognition [23], dynamics of disease transmission and biological networks, integration of higher-order graph substructures into deep learning on graphs has emerged only in 2020. As shown by [6, 50], higher-order network structures can be leveraged to boost graph learning performance. Indeed, several recent approaches [24, 49, 8, 18] propose to leverage simplicial information to perform neural networks on graphs. However, neither of these Simplicial Neural Networks (SNNs) are integrated with a topology-based graph convolution layer allowing us to learn both time-aware persistent topological features and simplicial geometry of graphs. In this paper, we propose ZFC-SHCN to address this limitation. 
3 Time-Aware Topological Learning with Zigzag Curves Spatio-temporal Graph Construction A spatio-temporal graph is a collection of snapshots at different time steps, denoted by G = {G1,G2, · · · ,GT }, where T is the maximum timestamp. Here Gt = (Vt, Et,At,Xt) is the graph observed at time step t ∈ [1, T ], where Vt is a finite set of |V| = N nodes, Et is a set of edges, At ∈ RN×N is the adjacency matrix, and Xt ∈ RN×d is the node feature matrix. Specifically, each row of Xt is a d-dimensional feature vector of the corresponding node. For sake of notations, wherever applicable below, we omit the subscript t and denote graph Gt at time t as G. Background on Ordinary Persistence Tools of ordinary persistence, or persistent homology (PH), allow us to study salient data shape patterns along various dimensions. By shape here we broadly understand data properties that are invariant under continuous transformations, that is, transformations that do not alter “holes” in the data, for example, bending, twisting, and stretching. The key idea is to choose some suitable scale parameter ν and then to study a graph G not as a single object but as a nested sequence of graphs, or graph filtration G1 ⊆ . . . ⊆ Gn = G, which is induced by monotonic changes of scale ν. For example, if G is an edge-weighted graph (V, E , w) with weight function w : E 7! R, then for each νj , j = 1, . . . , n, we set G≤νj = (V, E , w−1(−∞, νj ]), yielding the induced edge-weighted filtration. We can also consider only induced subgraphs of G with maximal degree of νj for each j = 1, . . . , n, resulting in the degree sublevel set filtration. (For more discussion on graph filtrations see [30].) Armed with this construction, we can track which shape patterns, for example, independent components, loops, and voids, emerge as the scale ν varies. To make the process of pattern counting more systematic and efficient, we build an abstract simplicial complex K (Gj) on each Gj . We also record complex indices jb (birth) and jd (death) at which we first or last observe each shape feature. Topological features with longer lifespans are said to persist and are likelier to yield important information on the structural organization of G. Learning Shapes of Time-Conditioned Data with Zigzag Persistence This construction enables us to extract the key topological descriptors from a single graph G. However, in our case, we observe not a single graph but a sequence of time-evolving graphs {G1, . . . ,GT }. How can we track shape signatures which are not just individualistic for each time stamp but characterize intrinsic properties of the observed object over time? One approach to how we can bring PH tools to analysis of timeconditioned objects is zigzag persistence. Based on the theory of quiver representations, zigzag persistence generalizes ordinary persistence to track characteristics of graphs (or other topological spaces) with inclusions going in different directions [12, 11]. In particular, given a time-indexed sequence of graphs {G1, . . . ,GT }, we first form a set of graph inclusions over time G1 ∪ G2 G2 ∪ G3 G3 ∪ G4 . . . ↗ ↖ ↗ ↖ ↗ ↖ ↗ G1 G2 G3 G4 and then assess the compatibility of persistent topological features across unions of graphs. That is, we record indices at which topological features (dis)appear, for some given scale ν∗. 
If for a given ν∗ topological feature ρ (i.e., p-dimensional hole, 0 ≤ p ≤ K, where K is the dimension of the simplicial complex K(G)) is first recorded in K(Gj), we say that the feature’s birth is j, and if ρ first appears in K(Gj ∪Gj+1), we record its birth as j +1/2. In turn, if ρ is last seen in K(Gj), we record its death as j, while if it is last seen in K(Gj ∪ Gj+1), we say that its death is at j + 1/2. Let J be the set of all observed topological features for a given ν∗. Collecting then births and deaths over J, we summarize all extracted information as a multiset Dν∗ = {(bρ, dρ) ∈ R2|bρ < dρ, ρ ∈ J}, called a zigzag persistent diagram (ZPD) (where bρ and dρ are the birth and death of the topological feature ρ respectively). This makes zigzag persistence particularly attractive for the analysis of dynamic objects which are naturally indexed by time. However, the idea of zigzag persistence is applicable far beyond learning time-evolving objects. Nevertheless, zigzag persistence still remains largely a theoretical concept, with yet only a handful of applications, and one of the roadblocks hindering a broader proliferation of zigzag-based methods in practice is their computational costs. Here we take a step toward bringing a more computationally efficient summary of zigzag persistence to real-world applications. Time-Aware Zigzag Filtration Curves Consider a sequence of time intervals associated with a zigzag filtration over a time period [t1, tN ]( t1, t1 + 1 2 ) , ( t1 + 1 2 , t2 ) , ( t2, t2 + 1 2 ) , . . . , ( tN−1 + 1 2 , tN ) . Let DgmZZν∗ be the resulting ZPD for a given ν∗ and M be the number of off-diagonal topological features in ZPD, i.e., DgmZZν∗ . Inspired by the recent results on stabilized Betti sequences by [35] and filtration curves by [44] for ordinary persistence, we propose a new simple and computationally efficient summary of zigzag persistence, called a Zigzag Filtration Curve. Definition 3.1 (Zigzag Filtration Curve (ZFC)). The zigzag filtration curve evaluated at ∆t−i = (ti−1 + 1 2 , ti), i = {1, 2, . . . ,N}, for a given ν∗, is defined as ZFCpν∗(∆t − i ) = M∑ j=1 ξi(tbj , tdj )ωi, where (tbj , tdj ) ∈ R2 is a vector containing the birth and death of the j-th off-diagonal p-dimensional topological feature in DgmZZν∗ (as such, tbj < tdj ), j = {1, 2, . . . ,M}, 0 ≤ p ≤ K; ξi : R 2 7! R is some suitable Lipschitz continuous function with Lipschitz constant Li, for example, a Gaussian density; and ωi > 0, i = {1, 2, . . . ,N} are weights such that ∑ i ωi = 1. Zigzag filtration curve at ∆t+i = (ti, ti + 1 2 ) is defined analogously. (For the sake of notational simplicity, wherever applicable in the further exposition we suppress the index p in ZFC.) Motivated by [35], here as the Lipschitz continuous function ξi for intervals ∆t−i , we use a Gaussian density f with mean (ti−1 + 1/2, ti), while for intervals ∆t+i , we set the mean of f to (ti, ti + 1/2), i = 1, 2, . . .N . For both ∆t−i and ∆t + i , we choose the 2 × 2-variance-covariance matrix Σ to be the identity matrix. (See Appendix ?? for more discussion on sensitivity analysis.) Another suitable choice of ξ is the arctan function. As we show below, the proposed ZPC also enjoys important theoretical stability guarantees in terms of Wasserstein-1 distance. Proposition 3.2 (Stability of Zigzag Filtration Curve). Let DgmZZν∗ be a zigzag persistence diagram and DgmZZ′c∗ be its perturbed copy such that W1 ( DgmZZν∗ ,DgmZZ ′ ν∗ ) < ϵ, where W1 is Wasserstein-1 distance. 
Proposition 3.2 (Stability of Zigzag Filtration Curve). Let DgmZZ_{ν∗} be a zigzag persistence diagram and DgmZZ′_{ν∗} be its perturbed copy such that W_1(DgmZZ_{ν∗}, DgmZZ′_{ν∗}) < ϵ, where W_1 is the Wasserstein-1 distance. Then, ZFC is stable with respect to the Wasserstein-1 distance.

In practice topological features of various dimensions p, p = 0, 1, . . . , K, may play different roles in the learning task performance, and these roles are not known a priori. Hence, to harness the time-conditioned information encoded in the ZFCs corresponding to different dimensions p, we propose Multi-Zigzag Filtration Curves (M-ZFCs), M-ZFCs_{ν∗} ∈ R^{K×(N−1)/2}, by stacking ZFC^0, ZFC^1, . . . , ZFC^K. The corresponding figure in the Appendix shows both the 0- and 1-dimensional ZFCs. In the following section, we demonstrate how ZFC can be integrated into neural network architectures for graph learning tasks.

4 Zigzag Filtration Curve Based Supra-Hodge Convolution Networks

Given a graph G and its historical ω-step graph signals X_ω = {X_{t−ω+1}, . . . , X_t} ∈ R^{ω×N×F} (F is the node feature dimensionality), the time-series forecasting problem is to learn a mapping function f that maps the historical data {X_{t−ω+1}, . . . , X_t} into the next h-step data {X_{t+1}, . . . , X_{t+h}}. The mapping relation is represented as follows:

{X_{t−ω+1}, . . . , X_t} →_f {X_{t+1}, . . . , X_{t+h}}.

4.1 Graph convolution in the spatial dimension

Given the node embedding dictionary W^φ = (w^φ_1, w^φ_2, . . . , w^φ_N) ∈ R^{N×d_c} (where w^φ_u ∈ R^{d_c} and d_c is the dimension of the node embedding), we aim to seek a non-negative function S_{uv} = G(w^φ_u, w^φ_v) which represents the pairwise similarity between any two nodes u and v. Concretely, the multiplication between W^φ and (W^φ)^⊤ can (i) give a sum pooling of second-order features from the outer product of all the embedding vector pairs (w^φ_u, w^φ_v) and (ii) infer the hidden spatial dependencies of nodes:

S_{uv} = G(w^φ_u, w^φ_v) = exp(ReLU(w^φ_u (w^φ_v)^⊤)) / Σ^N_{u=1} exp(ReLU(w^φ_u (w^φ_v)^⊤)),

where ReLU(·) = max(0, ·) is a nonlinear activation function used to proactively eliminate weak connections, and the softmax function is applied to normalize the learned graph S. Inspired by recent advancements in random walk-based graph embedding learning [47, 26], we perform a graph convolution in the spatial dimension, feeding a power series of the learned graph S with varying random walk steps {1, 2, · · · , r} (r ∈ Z^+), as follows:

H^{(ℓ+1)}_{t,GC} = σ(Stack(I, S, · · · , S^r) H^{(ℓ)}_{t,GC} Θ^{(ℓ)}_{GC}),    (1)

where σ(·) stands for a nonlinear activation function, Stack(·) is the function which stacks the r powered learned graphs, H^{(ℓ)}_{t,GC} and H^{(ℓ+1)}_{t,GC} are the input and output activations for layer ℓ (where H^{(0)}_{t,GC} = X_t ∈ R^{N×F}), and Θ^{(ℓ)}_{GC} ∈ R^{d^{GC}_ℓ × d^{GC}_{ℓ+1}} is the ℓ-th layer's trainable weight matrix. Next, we introduce representation learning of higher-order graph (sub)structures using the supra-Hodge Laplacian, which allows us to systematically leverage the underlying topological information.
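A short PyTorch sketch (ours; shapes and initialisation are illustrative) of Section 4.1: the learned graph S is formed from the node embedding dictionary W^φ and then used in the stacked random-walk convolution of Equation (1). We read Stack(I, S, . . . , S^r) as concatenating the r+1 propagated feature maps along the feature dimension; other readings of the Stack operator are possible.

```python
# A sketch of the learned graph S = softmax(ReLU(W^φ (W^φ)^T)) and the
# stacked random-walk convolution of Equation (1).
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    def __init__(self, num_nodes, d_embed, d_in, d_out, r=2):
        super().__init__()
        self.W_phi = nn.Parameter(torch.randn(num_nodes, d_embed))  # node dictionary
        self.theta = nn.Parameter(torch.randn((r + 1) * d_in, d_out) * 0.01)
        self.r = r

    def forward(self, x):                        # x: (N, d_in) node features X_t
        logits = torch.relu(self.W_phi @ self.W_phi.T)
        s = torch.softmax(logits, dim=1)         # learned graph S, rows normalised
        powers, h = [x], x
        for _ in range(self.r):                  # S x, S^2 x, ..., S^r x
            h = s @ h
            powers.append(h)
        stacked = torch.cat(powers, dim=-1)      # Stack(I, S, ..., S^r) applied to x
        return torch.relu(stacked @ self.theta)  # σ(... Θ)

conv = SpatialGraphConv(num_nodes=10, d_embed=4, d_in=3, d_out=8)
print(conv(torch.randn(10, 3)).shape)            # torch.Size([10, 8])
```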
4.2 Supra-Hodge convolution in the temporal dimension

Time-evolving data such as multivariate time series, spatio-temporal processes, and dynamic networks often exhibit a highly complex dependency among their substructures that goes far beyond what can be described by dyadic (or pairwise) interactions among nodes. Instead, such higher-order polyadic interactions can be systematically addressed using Hodge theory. In particular, discrete Hodge theory allows us to generalize the notion of the standard combinatorial graph Laplacian, which describes diffusion on a graph G from node to node via edges, to diffusion over higher-order substructures of G [43, 6]. In turn, higher-order substructures can be modeled as k-simplices of G. (See the Appendix for background information on Hodge Laplacians.) Convolutional architectures on simplicial complexes based on the associated concepts of Hodge theory have emerged as a recent direction in graph neural networks but have not yet been applied to learning time-evolving data. Our goal here is to introduce the notion of simplicial convolution and the ideas of Hodge Laplacians to time-aware learning. In particular, to capture time-conditioned higher-order interactions on G and to describe the diffusion of information over simplices along the temporal dimension, we build a supra-Hodge convolution operation based on multiplex network representation learning. (In the following, for simplicity, notation without the sub/superscript k stands for node-level quantities, and in our experiments we always consider k ∈ Z^+.)

First, given the historical spatio-temporal network series G_{t−ω+1:t} = {G_{t−ω+1}, G_{t−ω+2}, . . . , G_t}, we consider a directed connected node-aligned multiplex network, which is made up of ω layers with N nodes in each layer. That is, the adjacency matrix A^α = {a^α_{uv}}_{N×N} (where α ∈ {t−ω+1, . . . , t}) defines the intra-connections between nodes u and v in layer α, and a distance matrix D^{αβ} = {d^{αβ}_{uu}}_{N×N} quantifies the transition probability of moving from node u of layer α to node u of layer β. (Here β > α, since we consider information diffusion only along the temporal dimension.) Next, based on discrete Hodge theory, we propose a new Hodge k-Laplacian for multiplex graphs, called the supra-Hodge k-Laplacian L^Sup_k ∈ R^{φ_k ω × φ_k ω}:

L^Sup_k =
| (L^{11}_k)^r   D^{12}_{k+1}   · · ·   D^{1ω}_{k+1}  |
| 0              (L^{22}_k)^r   · · ·   D^{2ω}_{k+1}  |
| ...            ...            . . .   ...           |
| 0              0              · · ·   (L^{ωω}_k)^r  |    (2)

where L^{αα}_k is the Hodge k-Laplacian of layer α; D_{k+1} is the diagonal matrix of degrees of each k-simplex, i.e., D_{k+1} = max(diag(|B_{k+1}|1), I), where B_{k+1} is the k-simplex-to-(k+1)-simplex incidence matrix; and the r-th power of L^{αα}_k represents an r-step random walk on the Hodge k-Laplacian of layer α, which allows every k-simplex to accumulate information from its neighbors. Hence, when k = 1, we can infer the spatial dependencies between each pair of edges and capture meaningful edge information in both the spatial and temporal dimensions – through the lens of the supra-Hodge 1-Laplacian. For instance, in molecule networks, each node represents an atom and each edge is a bond connecting two atoms; the bond (i.e., edge) features include bond type, ring status, and molecular charge, which are closely related to atom (i.e., node) features (such as atomic total and partial charges). Since the goal of the forecasting task is to predict node (i.e., 0-simplex) attributes over the next few time steps, we propose a novel diffusion supra-Hodge convolution on the sliding window G_{t−ω+1:t}. We then update the nodes' representations by transforming the multiplex k-simplex embeddings to nodes via incidence matrices:

H^{(ℓ+1)}_{t,k,SH} = σ(L^Sup_k H^{(ℓ)}_{t,k,SH} Θ^{(ℓ)}_{k,SH}),    (3)
H^{(ℓ+1)}_{t,SH} = (B^⊤_1 · · · B^⊤_k) H^{(ℓ+1)}_{t,k,SH},    (4)

where (i) in Equation 3: Θ^{(ℓ)}_{k,SH} ∈ R^{d^{SH}_{k;ℓ} × d^{SH}_{k;ℓ+1}} is a learnable filter matrix for layer ℓ (here d^{SH}_{k;ℓ} and d^{SH}_{k;ℓ+1} are the input and output dimensions of the ℓ-th layer), and H^{(ℓ)}_{t,k,SH} and H^{(ℓ+1)}_{t,k,SH} are the input and output activations for layer ℓ, where H^{(0)}_{t,k,SH} = X̄_{k;t−ω+1:t} ∈ R^{φ_k ω × d^in_k} and the historical k-simplex features of the spatio-temporal networks X_{k;t−ω+1:t} = {X_{k;t−ω+1}, X_{k;t−ω+2}, . . . , X_{k;t}} ∈ R^{φ_k × ω × d^in_k} are reshaped as a matrix X̄_{k;t−ω+1:t} with shape φ_k ω × d^in_k; and (ii) in Equation 4: we transform the k-simplex embedding H^{(ℓ+1)}_{t,k,SH} to the node embedding H^{(ℓ+1)}_{t,SH} ∈ R^{N × d^{SH}_{k;ℓ+1}} through incidence matrices.
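Below is a numpy sketch (ours) assembling the block upper-triangular supra-Hodge k-Laplacian of Equation (2); the layer Laplacians and the inter-layer degree blocks D^{αβ}_{k+1} are toy placeholders.

```python
# A sketch assembling the supra-Hodge k-Laplacian of Equation (2) for ω layers.
import numpy as np

def supra_hodge_laplacian(L_layers, D_blocks, r=2):
    """L_layers: list of ω dense Hodge k-Laplacians, each (φ_k, φ_k);
    D_blocks[(a, b)]: inter-layer diagonal degree matrix for layers a < b."""
    omega, phi_k = len(L_layers), L_layers[0].shape[0]
    L_sup = np.zeros((omega * phi_k, omega * phi_k))
    for a in range(omega):
        ra = slice(a * phi_k, (a + 1) * phi_k)
        L_sup[ra, ra] = np.linalg.matrix_power(L_layers[a], r)  # (L_k^{aa})^r
        for b in range(a + 1, omega):
            rb = slice(b * phi_k, (b + 1) * phi_k)
            L_sup[ra, rb] = D_blocks[(a, b)]                    # D_{k+1}^{ab}
    return L_sup

phi_k, omega = 4, 3
Ls = [np.eye(phi_k) * (a + 1) for a in range(omega)]            # toy L_k^{aa}
Ds = {(a, b): np.eye(phi_k) for a in range(omega) for b in range(a + 1, omega)}
print(supra_hodge_laplacian(Ls, Ds).shape)                      # (12, 12)
```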
4.3 ZFC convolution: a bridge between the spatial and time dimensions

Armed with representation learning of graph (sub)structures at each timestamp, we now discuss the ZFC convolution, which allows us to preserve and propagate both spatial and time-aware topological information simultaneously. The intuition behind the ZFC convolution is that it learns a strong connection between the two dimensions via two 1D convolution layers, i.e., time-wise and node-wise. The ZFC convolution consists of three key components: (i) a linear embedding on M-ZFCs, which learns the importance of time-aware topological features for each node to form a time-dimension-specific node embedding; (ii) a time-wise 1D convolution layer, which gathers time-aware topological features from the entire space into a compact set; and (iii) a node-wise 1D convolution layer, which captures relations between different nodes. The resulting ZFC convolution operation over M-ZFCs_ω is defined as

H_{t,M-ZFC} = F_θ(F_ψ(Θ_{M-ZFC} M-ZFCs_ω)^⊤)^⊤,    (5)

where ω is the size of the window for sequence learning, M-ZFCs_ω denotes the M-ZFCs feature extracted from the time window of size ω, Θ_{M-ZFC} ∈ R^{N×d_q} is a weight matrix to be learned, F_θ and F_ψ are 1D convolutional layers, and H_{t,M-ZFC} ∈ R^{N×d^{M-ZFC}_out} is the d^{M-ZFC}_out-dimensional output. We then combine the embeddings from the graph convolution, the M-ZFC convolution, and the supra-Hodge convolution to obtain the final embedding H^{(ℓ+1)}_{t,out}:

H^{(ℓ+1)}_{t,out} = [H^{(ℓ+1)}_{t,GC}, H_{t,M-ZFC}, H^{(ℓ+1)}_{t,SH}],    (6)

where [·, ·, ·] denotes the concatenation of the outputs of the three convolution operations, and H^{(ℓ+1)}_{t,out} ∈ R^{N×d_out} (where d_out = d^{GC}_{ℓ+1} + d^{ZFC}_out + d^{SH}_{ℓ+1}).

4.4 Gated Recurrent Unit with ZFC-SHCN

To describe the complex spatio-temporal dependencies among time series and assess the hidden state of nodes at a future timestamp, we feed the final embedding H^{(ℓ+1)}_{t,out} into Gated Recurrent Units (GRUs). Formally, we set the forward propagation equations of the GRUs as

ℜ_t = η(W_ℜ [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_ℜ),
ℑ_t = η(W_ℑ [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_ℑ),
Ψ_t = tanh(W_Ψ [ℑ_t ⊙ Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] + b_Ψ),
Ψ̃_t = ℜ_t ⊙ Ψ_{t−1} + (1 − ℜ_t) ⊙ Ψ_t,

where η(·) is an activation function (e.g., ReLU, LeakyReLU), ⊙ is the elementwise product, ℜ_t is the update gate, and ℑ_t is the reset gate. Here b_ℜ, b_ℑ, b_Ψ, W_ℜ, W_ℑ, and W_Ψ are learnable parameters, while [Ψ_{t−1}, H^{(ℓ+1)}_{t,out}] and Ψ_t are the input and output of the GRU model, respectively. We then obtain Ψ̃_t, which contains both the spatio-temporal and time-aware information. A minimal sketch of this GRU step is given below.
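A PyTorch sketch (ours) of the GRU step above, written out explicitly so the gate equations are easy to follow; we use sigmoid gates here, while the paper allows a generic activation η(·) such as ReLU or LeakyReLU.

```python
# A sketch of the Section 4.4 gated update: Ψ̃_t = ℜ_t ⊙ Ψ_{t-1} + (1-ℜ_t) ⊙ Ψ_t.
import torch
import torch.nn as nn

class ZfcShcnGru(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.update = nn.Linear(d_hidden + d_in, d_hidden)  # gate ℜ_t
        self.reset = nn.Linear(d_hidden + d_in, d_hidden)   # gate ℑ_t
        self.cand = nn.Linear(d_hidden + d_in, d_hidden)    # candidate Ψ_t

    def forward(self, h_out, psi_prev):          # h_out: final embedding H_{t,out}
        z = torch.cat([psi_prev, h_out], dim=-1)
        r_t = torch.sigmoid(self.update(z))                 # ℜ_t
        i_t = torch.sigmoid(self.reset(z))                  # ℑ_t
        psi_t = torch.tanh(self.cand(torch.cat([i_t * psi_prev, h_out], dim=-1)))
        return r_t * psi_prev + (1 - r_t) * psi_t           # Ψ̃_t

gru = ZfcShcnGru(d_in=16, d_hidden=32)
print(gru(torch.randn(8, 16), torch.zeros(8, 32)).shape)    # torch.Size([8, 32])
```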
5 Experiments

5.1 Datasets

We validate our ZFC-SHCN model on six diverse data types: (i) COVID-19 datasets [51]: CA, PA, and TX represent the number of COVID-19 hospitalizations in California (CA), Pennsylvania (PA), and Texas (TX), respectively; (ii) traffic datasets [16]: PeMSD4 and PeMSD8 are two real-time traffic datasets from California; (iii) synthetic multivariate time-series (MTS) datasets based on vector autoregression (VAR) [29, 45] (where the VAR model is a generalization of the univariate AR process with more than one time-evolving component); (iv) daily surface air temperature in CA, PA, and TX over 02/01/2020–12/31/2020; (v) Bytom token prices of the Ethereum blockchain over 07/27/2017–05/07/2018 [41, 53]; and (vi) wind speed data of 57 stations on the East Coast. The results on (i)–(iii) are presented in the main body; the analysis of (iv) and (v), the results on the wind speed dataset, and a detailed description of each dataset are given in the Appendix.

5.2 Baselines

We compare our proposed ZFC-SHCN with 14 state-of-the-art baselines (SOAs): FC-LSTM [54], SFM [60], N-BEATS [46], DCRNN [42], LSTNet [38], STGCN [59], TCN [4], DeepState [48], GraphWaveNet [57], DeepGLO [52], LRGCN [39], AGCRN [3], StemGNN [10], and Z-GCNETs [20].

5.3 Experimental settings

We implement ZFC-SHCN in PyTorch on an NVIDIA GeForce RTX 3090 GPU. We optimize all models using the Adam optimizer for a maximum of 200 epochs. The learning rate is searched in {0.001, 0.003, 0.005, 0.01, 0.05} and the embedding dimension is searched in {1, 2, 3, 5, 10}. ZFC-SHCN is trained with batch sizes of 64 and 8 on PeMSD4 and PeMSD8, respectively. On both the COVID-19 and surface air temperature datasets (i.e., CA, PA, and TX), we set the batch size to 8. We train two 1D convolutional layers for ZFC representation learning with the same hidden layer dimension n_hid, where n_hid ∈ {8, 16, 32, 64, 128}. For PeMSD4 and PeMSD8, we use window size ω = 12 and horizon h = 3; for both the COVID-19 and surface air temperature datasets, window size ω = 5 and horizon h = 15; for the two simulated VAR datasets VAR_{T1} and VAR_{T2}, window size ω = 10, horizon h = 5, and batch size 8; for Bytom, window size ω = 7, horizon h = 7, and batch size 8; for the wind speed dataset, window size ω = 12, horizon h = 12, and batch size 8. All models are evaluated in terms of Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE); a sketch of these metrics is given below. The best results are shown in bold font and the second-best results are shown with dotted underlines. We also perform a one-sided two-sample t-test between the best result and the best performance achieved by the runner-up, where *, **, and *** denote p-value < 0.1, 0.05, and 0.01 (i.e., significant, statistically significant, and highly statistically significant results), respectively. Code is available at https://github.com/zfcshcn/ZFC-SHCN.git.
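For completeness, here is a small numpy sketch (ours) of the three evaluation metrics used throughout this section; the toy values are hypothetical.

```python
# A sketch of the MAE, RMSE, and MAPE metrics.
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100.0  # assumes y has no zeros

y, yhat = np.array([100.0, 120.0, 90.0]), np.array([110.0, 115.0, 95.0])
print(mae(y, yhat), rmse(y, yhat), mape(y, yhat))
```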
5.4 Experimental results

Real datasets The experimental results on the PeMSD4 and PeMSD8 traffic data are reported in Table 2. As Table 2 shows, ZFC-SHCN achieves the best MAE, RMSE, and MAPE compared with the SOAs on both PeMSD4 and PeMSD8. Compared to RNN-based methods such as FC-LSTM, SFM, N-BEATS, LSTNet, and TCN, ZFC-SHCN achieves relative gains in RMSE over the runner-ups ranging from 17.68% to 65.41% on both PeMSD4 and PeMSD8. In turn, DCRNN, STGCN, GraphWaveNet, AGCRN, and StemGNN focus only on learning node-level representations. Compared to them, ZFC-SHCN captures interactions and encodes higher-order structural correlations beyond pairwise relations among nodes, and yields a relative gain from 2.06% to 5.63% in RMSE on the traffic datasets. In addition, we compare ZFC-SHCN with the method based on the zigzag persistence image, i.e., Z-GCNETs, and find that ZFC-SHCN outperforms Z-GCNETs by 1.75% on PeMSD4 and 5.36% on PeMSD8 in terms of RMSE.

Table 3 presents the COVID-19 hospitalization prediction results (RMSE) in CA, PA, and TX, and we observe the following. First, our proposed ZFC-SHCN achieves state-of-the-art performance on all three datasets. For instance, ZFC-SHCN yields 3.61%, 1.47%, and 65.55% relative gains in RMSE over the runner-ups (including both GCN-based and zigzag persistence image-based methods) on the three biosurveillance datasets. These results indicate that the ZFC mechanism and the higher-order representation learning module play significant roles in capturing both topological information and higher-order structures. Second, as shown in the corresponding figure in the Appendix, we find that, compared to the runner-up (i.e., Z-GCNETs), the predicted COVID-19 hospitalization values are more consistent with the ground truth. Finally, tables in the Appendix present the overall prediction performance of ZFC-SHCN and representative baselines on the surface air temperature and Ethereum blockchain datasets. We find that our proposed ZFC-SHCN consistently outperforms all baselines with either a significant or (highly) statistically significant margin across all data, except surface air temperature in TX, where ZFC-SHCN still yields the best performance across all models.

Synthetic datasets The evaluation results on the two VAR datasets are summarized in Table 1. Compared to the three strongest baselines (i.e., AGCRN, StemGNN, and Z-GCNETs), we observe that our proposed ZFC-SHCN consistently yields the best performance on all synthetic datasets. More precisely, ZFC-SHCN outperforms the runner-ups by 8.89% to 10.52% on VAR_{T1} and VAR_{T2}. Furthermore, to assess the time-wise and higher-order network interactions, we use the global clustering coefficient (GCC) and the Euler-Poincaré characteristic (EPC) as measures of higher-order substructures [5]. We find that the average GCC for VAR_{T1} and VAR_{T2} are 4.96 and 5.87, respectively, while the average EPC for VAR_{T1} and VAR_{T2} are 7.47 and 6.91, respectively. Interestingly (although it could be expected), higher GCC and lower EPC tend to be associated with higher relative gains delivered by ZFC-SHCN. Finally, in the Appendix, we present the sensitivity analysis of ZFC as a function of the covariance matrix in the VAR models.

5.5 Ablation studies

To evaluate the contribution of the different components of our ZFC-SHCN model, we perform an extensive ablation study with three setups: (i) ZFC-SHCN without graph convolution in the spatial dimension (W/o Graph convolution in spatial dimension), (ii) ZFC-SHCN without ZFC convolution (W/o ZFC convolution), and (iii) ZFC-SHCN without supra-Hodge convolution (W/o Supra-Hodge convolution). The experimental results, shown in Table 4, validate each component. As Table 4 indicates, the comparison with ZFC-SHCN w/o ZFC convolution shows that the zigzag homological features are vital for capturing the topological structure of the spatio-temporal graph, and that our proposed graph convolution operation on ZFC significantly improves forecasting performance. The comparison with ZFC-SHCN w/o supra-Hodge convolution illustrates the significance of higher-order structure representation learning for guiding the model in capturing information on higher-order interactions. Also, ZFC-SHCN w/o graph convolution in the spatial dimension demonstrates that the learned graph obtained from trainable weights can capture hidden information and enhance (multivariate) time-series representation learning.
5.6 Computational complexity

For higher-order simplices, the incidence matrices B_1 and B_2 can be calculated efficiently with complexity O(N + M) and O(M + Q), respectively, where N is the number of 0-simplices (i.e., nodes), M is the number of 1-simplices (i.e., edges), and Q is the number of 2-simplices (i.e., filled triangles). The computational complexity of ZFC is O(Υ^δ) [2, 22], where Υ represents the number of points in the time interval and δ ∈ [2, 2.373). The computational complexity of the overall approach is O(N^2 + Υ^δ + Ξ_k ω F_k d_out + Ξ_k ω^2 d_out/2 + d_out Σ^{t−1}_{ℓ=t−ω} Ξ^{(ℓ)}_{k+1} + W_GRU), comprising (i) graph convolution in the spatial dimension: O(N^2); (ii) the zigzag filtration curve: O(Υ^δ); (iii) supra-Hodge convolution in the temporal dimension: O(Ξ_k ω F_k d_out + Ξ_k ω^2 d_out/2 + d_out Σ^{t−1}_{ℓ=t−ω} Ξ^{(ℓ)}_{k+1}), where F_k is the number of k-simplex attribute features, ω is the sliding window size, d_out is the output dimension of the supra-Hodge convolution layer, and Ξ^{(ℓ)}_{k+1} is the number of (k+1)-simplices at the ℓ-th layer; and (iv) the GRU: O(W_GRU). We also compare our ZFC-SHCN with the most recent approach based on multipersistence GNNs [19] (i.e., TAMP-S2GCNets). We find that ZFC-SHCN yields performance on par with or better than TAMP-S2GCNets, while significantly improving computational efficiency (see the Appendix for more details, including a running time comparison).

6 Conclusion

We have proposed a novel framework for time-aware deep learning on time-evolving objects which takes advantage of both the higher-order interactions among the data substructures, described as simplices, and the most intrinsic time-conditioned topological information exhibited by the object, characterized via zigzag persistent homology. By leveraging the power of the simplicial convolution operation and zigzag persistence for time-indexed data, ZFC-SHCN has been shown to yield the most competitive forecasting performance while requiring fewer computational resources than its closest competitors. Still, computational complexity and the limited theoretical results on statistical inference for zigzag persistence remain major limitations of ZFC and, more generally, of all topological methods for time-dependent processes. In the future, we plan to investigate these theoretical and methodological challenges and to extend the ZFC-SHCN idea to anomaly detection in streaming time-dependent processes.

Acknowledgments

This work was partially supported by the National Science Foundation (NSF) under awards # ECCS-2039701 and # ECCS-2039716, the Department of the Navy, Office of Naval Research (ONR) under ONR award # N00014-21-1-2530, the C3.ai Digital Transformation Institute, and NASA AIST grant 21-AIST21_2-0059. Part of this material is also based upon work supported by (while serving at) the NSF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, ONR, C3.ai DTI, or NASA.
1. What is the focus and contribution of the paper regarding time-aware persistent topological features and the simplicial geometry of graphs?
2. What are the strengths of the proposed framework, particularly in connecting dynamical behavior and persistent homology analysis?
3. What are the weaknesses of the paper, especially regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the construction of graphs or simplicial complexes on each dataset?
6. Is there any concern about the embedding dimension mentioned in the paper?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper

The authors propose the Zigzag Filtration Curve based Supra-Hodge Convolution Network (ZFC-SHCN), which is able to learn time-aware persistent topological features and the simplicial geometry of graphs. The theoretical stability of the zigzag filtration curve is studied in the paper. Experimental results show that ZFC-SHCN achieves the best forecasting performance with lower computational requirements.

Strengths And Weaknesses

Strengths:
- The proposed novel framework connects the dynamical behavior of complex systems and persistent homology analysis. The authors claim that this paper is the first work that brings the concepts of simplicial convolution to time-aware learning.
- The experiments on varying datasets, including COVID-19 cases and traffic flow, show the superiority of the proposed method on prediction tasks. ZFC-SHCN is also faster compared with the runner-ups.

Weaknesses:
- I recommend adding some references in the introduction section to support the statements. I suggest the authors check these related papers on time-series prediction: https://ieeexplore.ieee.org/abstract/document/8430866 and https://openreview.net/forum?id=d2TT6gK9qZn
- The writing quality of this paper needs to be improved; please see below some questions that need to be clarified in the paper's main body.
- The contribution is clear, as stated in the introduction section. However, the connections among and inside Sections 3 and 4 (and their subsections) are not very clear.
- Are the tasks to predict the future COVID-19 cases, traffic flow, etc. both as time series and as node attributes of a graph?
- How are the graphs or the simplicial complexes constructed for each dataset? The appendix says the graph represents border connections, but this is unclear for the other datasets.
- How are the simplices constructed in the experiment section? Do they only consider nodes and edges, or do they also involve actual higher-order information such as 2-simplices, 3-simplices, or more? If not, where does the higher-order information appear in the graph structure?
- Is the embedding dimension mentioned in line 281 the same as the dimension of the features of each node (d_e) or something else? Is it predefined or computed following any rules? The numbers here look rather small; is there any reason for such a choice?

Questions

Please see the weaknesses.

Limitations

Limitations and potential negative societal impact have been addressed.
NIPS
Title Practical Deep Learning with Bayesian Principles

Abstract Bayesian methods promise to fix many shortcomings of deep learning, but they are impractical and rarely match the performance of standard methods, let alone improve them. In this paper, we demonstrate practical training of deep networks with natural-gradient variational inference. By applying techniques such as batch normalisation, data augmentation, and distributed training, we achieve similar performance in about the same number of epochs as the Adam optimiser, even on large datasets such as ImageNet. Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted. This work enables practical deep learning while preserving the benefits of Bayesian principles. A PyTorch implementation¹ is available as a plug-and-play optimiser.

1 Introduction

Deep learning has been extremely successful in many fields such as computer vision [29], speech processing [17], and natural-language processing [39], but it is also plagued with several issues that make its application difficult in many other fields. For example, it requires a large amount of high-quality data and it can overfit when the dataset size is small. Similarly, sequential learning can cause forgetting of past knowledge [27], and a lack of reliable confidence estimates and other robustness issues can make it vulnerable to adversarial attacks [6]. Ultimately, due to such issues, application of deep learning remains challenging, especially for applications where human lives are at risk.

Bayesian principles have the potential to address such issues. For example, we can represent uncertainty using the posterior distribution, enable sequential learning using Bayes' rule, and reduce overfitting with Bayesian model averaging [19]. The use of such Bayesian principles for neural networks has been advocated from very early on. Methods for Bayesian inference on neural networks were proposed in the 90s, e.g., using MCMC methods [41], Laplace's method [35], and variational inference (VI) [18, 2, 49, 1]. Benefits of Bayesian principles are even discussed in machine-learning textbooks [36, 3]. Despite this, they are rarely employed in practice. This is mainly due to computational concerns, which unfortunately overshadow their theoretical advantages.

The difficulty lies in the computation of the posterior distribution, which is especially challenging for deep learning. Even approximation methods, such as VI and MCMC, have historically been difficult to scale to large datasets such as ImageNet [47]. Due to this, it is common to use less principled approximations, such as MC-dropout [9], even though they are not ideal when it comes to fixing the issues of deep learning. For example, MC-dropout is unsuitable for continual learning [27] since its posterior approximation does not have mass over the whole weight space. It is also found to perform poorly for sequential decision making [45]. The form of the approximation used by such methods is usually rigid and cannot be easily improved, e.g., to other forms such as a mixture of Gaussians.

* These two authors contributed equally. † This work was conducted during an internship at the RIKEN Center for AI Project. ‡ Corresponding author: [email protected] ¹ The code is available at https://github.com/team-approx-bayes/dl-with-bayes.
The goal of this paper is to make more principled Bayesian methods, such as VI, practical for deep learning, thereby helping researchers tackle its key limitations. We demonstrate practical training of deep networks by using recently proposed natural-gradient VI methods. These methods resemble the Adam optimiser, enabling us to leverage existing techniques for initialisation, momentum, batch normalisation, data augmentation, and distributed training. As a result, we obtain similar performance in about the same number of epochs as Adam when training many popular deep networks (e.g., LeNet, AlexNet, ResNet) on datasets such as CIFAR-10 and ImageNet (see Fig. 1). The results show that, despite using an approximate posterior, the training methods preserve the benefits coming from Bayesian principles. Compared to standard deep-learning methods, the predictive probabilities are well-calibrated, uncertainties on out-of-distribution inputs are improved, and performance on continual-learning tasks is boosted. Our work shows that practical deep learning is possible with Bayesian methods and aims to support further research in this area.

Related work. Previous VI methods, notably by Graves [14] and Blundell et al. [4], require significant implementation and tuning effort to perform well, e.g., on convolutional neural networks (CNNs). Slow convergence is found to be especially problematic for sequential problems [45]. There appear to be no reported results with complex networks on large problems, such as ImageNet. Our work solves these issues by applying deep-learning techniques to natural-gradient VI [24, 56]. In their paper, Zhang et al. [56] also employed data augmentation and batch normalisation for a natural-gradient method called Noisy K-FAC (see Appendix A) and showed results for VGG on CIFAR-10. However, a mean-field method called Noisy Adam was found to be unstable with batch normalisation. In contrast, we show that a similar method, called Variational Online Gauss-Newton (VOGN), proposed by Khan et al. [24], works well with such techniques. We show results for distributed training with Noisy K-FAC on ImageNet, but do not provide extensive comparisons since tuning it is time-consuming. Many of our techniques can speed up Noisy K-FAC, which is promising.

Many other approaches have recently been proposed to compute posterior approximations by training deterministic networks [46, 37, 38]. Similarly to MC-dropout, their posterior approximations are not flexible, making it difficult to improve the accuracy of their approximations. On the other hand, VI offers a much more flexible alternative for applying Bayesian principles to deep learning.

2 Deep Learning with Bayesian Principles and Its Challenges

The success of deep learning is partly due to the availability of scalable and practical methods for training deep neural networks (DNNs). Network training is formulated as an optimisation problem where a loss between the data and the DNN's predictions is minimised. For example, in a supervised learning task with a dataset D of N inputs x_i and corresponding outputs y_i of length K, we minimise a loss of the following form: ℓ̄(w) + δ w^⊤ w, where ℓ̄(w) := (1/N) Σ_i ℓ(y_i, f_w(x_i)), f_w(x) ∈ R^K denotes the DNN outputs with weights w, ℓ(y, f) denotes a differentiable loss function between an output y and the function f, and δ > 0 is the L2 regulariser.² Deep learning relies on stochastic-gradient (SG) methods to minimise such loss functions.
The most commonly used optimisers, such as stochastic-gradient descent (SGD), RMSprop [53], and Adam [25], take the following form³ (all operations below are element-wise):

w_{t+1} ← w_t − α_t (ĝ(w_t) + δw_t) / (√(s_{t+1}) + ε),    s_{t+1} ← (1 − β_t) s_t + β_t (ĝ(w_t) + δw_t)²,    (1)

where t is the iteration, α_t > 0 and 0 < β_t < 1 are learning rates, ε > 0 is a small scalar, and ĝ(w) is the stochastic gradient at w, defined as ĝ(w) := (1/M) Σ_{i∈M_t} ∇_w ℓ(y_i, f_w(x_i)) using a minibatch M_t of M data examples. This simple update scales extremely well and can be applied to very large problems. With techniques such as initialisation protocols, momentum, weight-decay, batch normalisation, and data augmentation, it also achieves good performance for many problems.

In contrast, the full Bayesian approach to deep learning is computationally very expensive. The posterior distribution can be obtained using Bayes' rule: p(w|D) = exp(−N ℓ̄(w)/τ) p(w) / p(D), where 0 < τ ≤ 1.⁴ This is costly due to the computation of the marginal likelihood p(D), a high-dimensional integral that is difficult to compute for large networks. Variational inference (VI) is a principled approach to more scalably estimate an approximation to p(w|D). The main idea is to employ a parametric approximation, e.g., a Gaussian q(w) := N(w|µ, Σ) with mean µ and covariance Σ. The parameters µ and Σ can then be estimated by maximising the evidence lower bound (ELBO):

ELBO: L(µ, Σ) := −N E_q[ℓ̄(w)] − τ D_KL[q(w) ‖ p(w)],    (2)

where D_KL[·] denotes the Kullback-Leibler divergence. By using more complex approximations, we can further reduce the approximation error, but at a computational cost. By formulating Bayesian inference as an optimisation problem, VI enables a practical application of Bayesian principles.

Despite this, VI has remained impractical for training large deep networks on large datasets. Existing methods, such as Graves [14] and Blundell et al. [4], directly apply popular SG methods to optimise the variational parameters in the ELBO, yet they fail to achieve reasonable performance on large problems, usually converging very slowly. The failure of such direct applications of deep-learning methods to VI is not surprising. The techniques used in one field may not directly lead to improvements in the other, but it will be useful if they do, e.g., if we can optimise the ELBO in a way that allows us to exploit the tricks and techniques of deep learning and boost the performance of VI. The goal of this work is to do just that. We now describe our methods in detail.

3 Practical Deep Learning with Natural-Gradient Variational Inference

In this paper, we propose natural-gradient VI methods for practical deep learning with Bayesian principles. The natural-gradient update takes a simple form when estimating exponential-family approximations [23, 22]. When p(w) := N(w|0, I/δ), the update of the natural parameter λ is performed by using the stochastic gradient of the expected regularised loss:

λ_{t+1} = (1 − τρ) λ_t − ρ ∇_µ E_q[ℓ̄(w) + ½ τδ w^⊤ w],    (3)

² This regulariser is sometimes set to 0 or a very small value.
³ Alternate versions with weight-decay and momentum differ from this update [34]. We present a form useful for establishing the connection between SG methods and natural-gradient VI.
⁴ This is a tempered posterior [54] setup where τ is set ≠ 1 when we expect model misspecification and/or adversarial examples [10]. Setting τ = 1 recovers standard Bayesian inference.
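For intuition, here is a PyTorch sketch (ours, not the paper's implementation) of a one-sample Monte-Carlo estimate of the ELBO in Equation (2), for a mean-field Gaussian q(w) = N(µ, diag(σ²)) and prior p(w) = N(0, I/δ); the closed-form Gaussian KL term is standard, and the toy loss function is hypothetical.

```python
# A sketch of a reparameterised one-sample estimate of
# L(µ, Σ) = -N E_q[ℓ̄(w)] - τ D_KL[q(w) || p(w)].
import math
import torch

def elbo_estimate(mu, log_sigma, nll_fn, n_data, delta=1.0, tau=1.0):
    """nll_fn(w) returns the average per-example loss ℓ̄(w) on a minibatch."""
    sigma = log_sigma.exp()
    w = mu + sigma * torch.randn_like(mu)   # w ~ q via reparameterisation
    # Closed-form KL[N(µ, diag(σ²)) || N(0, I/δ)], summed over dimensions.
    kl = 0.5 * (delta * (sigma**2 + mu**2) - 1.0
                - 2.0 * log_sigma - math.log(delta)).sum()
    return -n_data * nll_fn(w) - tau * kl

mu = torch.zeros(5, requires_grad=True)
log_sigma = torch.zeros(5, requires_grad=True)
loss = -elbo_estimate(mu, log_sigma, lambda w: (w**2).mean(), n_data=1000)
loss.backward()                              # gradients for µ and log σ
```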
where ρ > 0 is the learning rate, and we note that the stochastic gradients are computed with respect to µ, the expectation parameter of q. The moving average above helps to deal with the stochasticity of the gradient estimates, and is very similar to the moving average used in deep learning (see (1)). When τ is set to 0, the update essentially minimises the regularised loss (see Section 5 in Khan et al. [24]). These properties of natural-gradient VI make it an ideal candidate for deep learning.

Recent work by Khan et al. [24] and Zhang et al. [56] further shows that, when q is Gaussian, the update (3) assumes a form that is strikingly similar to the update (1). For example, the Variational Online Gauss-Newton (VOGN) method of Khan et al. [24] estimates a Gaussian with mean µ_t and a diagonal covariance matrix Σ_t using the following update:

µ_{t+1} ← µ_t − α_t (ĝ(w_t) + δ̃ µ_t) / (s_{t+1} + δ̃),    s_{t+1} ← (1 − τβ_t) s_t + β_t (1/M) Σ_{i∈M_t} (g_i(w_t))²,    (4)

where g_i(w_t) := ∇_w ℓ(y_i, f_{w_t}(x_i)), w_t ∼ N(w|µ_t, Σ_t) with Σ_t := diag(1/(N(s_t + δ̃))), δ̃ := τδ/N, and α_t, β_t > 0 are learning rates. Operations are performed element-wise. Similarly to (1), the vector s_t adapts the learning rate and is updated using a moving average. A major difference in VOGN is that the update of s_t is now based on a Gauss-Newton approximation [14] which uses (1/M) Σ_{i∈M_t} (g_i(w_t))². This is fundamentally different from the SG update in (1), which instead uses the gradient magnitude ((1/M) Σ_{i∈M_t} g_i(w_t) + δw_t)² [5]. The first approach uses the sum outside the square while the second approach uses it inside. VOGN is therefore a second-order method and, similarly to Newton's method, does not need a square root over s_t. Implementation of this step requires an additional calculation (see Appendix B), which makes VOGN a bit slower than Adam, but VOGN is expected to give better variance estimates (see Theorem 1 in Khan et al. [24]).

The main contribution of this paper is to demonstrate practical training of deep networks using VOGN. Since VOGN takes a similar form to SG methods, we can easily borrow existing deep-learning techniques to improve performance. We will now describe these techniques in detail. Pseudo-code for VOGN is shown in Algorithm 1.

Batch normalisation: Batch normalisation [20] has been found to significantly speed up and stabilise training of neural networks, and is widely used in deep learning. BatchNorm layers are inserted between neural network layers. They help stabilise each layer's input distribution by normalising the running average of the inputs' mean and variance. In our VOGN implementation, we simply use the existing implementation with default hyperparameter settings. We do not apply L2 regularisation and weight decay to BatchNorm parameters, as in Goyal et al. [13], or maintain uncertainty over the BatchNorm parameters. This straightforward application of batch normalisation works for VOGN.

Data Augmentation: When training on image datasets, data augmentation (DA) techniques can improve performance drastically [13]. We consider two common real-time data augmentation techniques: random cropping and horizontal flipping. After randomly selecting a minibatch at each iteration, we use a randomly selected cropped version of all images. Each image in the minibatch has a 50% chance of being horizontally flipped. We find that directly applying DA gives slightly worse performance than expected, and also affects the calibration of the resulting uncertainty. However, DA increases the effective sample size.
We therefore modify N to be ρN, where ρ ≥ 1, improving performance (see step 2 in Algorithm 1). The reason for this performance boost might be the complex relationship between the regularisation and N. For the regularised loss ℓ̄(w) + δ w^⊤ w, the two are unidentifiable, i.e., we can multiply δ by a constant and reduce N by the same constant without changing the minimum. However, in a Bayesian setting (like in (2)), the two quantities are separate, and therefore changing the data might also change the optimal prior variance hyperparameter in a complicated way. This needs further theoretical investigation, but our simple fix of scaling N seems to work well in the experiments. We set ρ by considering the specific DA techniques used. When training on CIFAR-10, the random cropping DA step involves first padding the 32x32 images to size 40x40, and then taking randomly selected 28x28 crops. We consider this as effectively increasing the dataset size by a factor of 5 (4 images for each corner, and one central image). The horizontal flipping DA step doubles the dataset size (one dataset of unflipped images, one of flipped images). Combined, this gives ρ = 10. Similar arguments for the ImageNet DA techniques give ρ = 5. Even though ρ is another hyperparameter to set, we find that its precise value does not matter much. Typically, after setting an estimate for ρ, tuning a little seems to work well (see Appendix E).

Algorithm 1: Variational Online Gauss-Newton (VOGN)
1: Initialise µ₀, s₀, m₀.
2: N ← ρN, δ̃ ← τδ/N.
3: repeat
4: Sample a minibatch M of size M.
5: Split M into each GPU (local minibatch M_local).
6: for each GPU in parallel do
7: for k = 1, 2, . . . , K do
8: Sample ε ∼ N(0, I).
9: w^(k) ← µ + σε with σ ← (1/(N(s + δ̃ + γ)))^{1/2}.
10: Compute g^(k)_i ← ∇_w ℓ(y_i, f_{w^(k)}(x_i)), ∀i ∈ M_local, using the method described in Appendix B.
11: ĝ_k ← (1/M) Σ_{i∈M_local} g^(k)_i.
12: ĥ_k ← (1/M) Σ_{i∈M_local} (g^(k)_i)².
13: end for
14: ĝ ← (1/K) Σ^K_{k=1} ĝ_k and ĥ ← (1/K) Σ^K_{k=1} ĥ_k.
15: end for
16: AllReduce ĝ, ĥ.
17: m ← β₁ m + (ĝ + δ̃µ).
18: s ← (1 − τβ₂) s + β₂ ĥ.
19: µ ← µ − α m / (s + δ̃ + γ).
20: until stopping criterion is met
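A compact numpy sketch (ours) of a single-worker VOGN step following Algorithm 1; `per_example_grads` stands in for the per-example gradient computation of step 10, and the toy linear-regression demo is hypothetical.

```python
# A sketch of one VOGN step (Algorithm 1, single worker, K MC samples).
import numpy as np

def vogn_step(mu, s, m, per_example_grads, n_eff, alpha=1e-3, beta1=0.9,
              beta2=0.999, delta_tilde=1e-4, gamma=1e-8, tau=1.0, K=1):
    g_hat = np.zeros_like(mu)
    h_hat = np.zeros_like(mu)
    for _ in range(K):
        sigma = np.sqrt(1.0 / (n_eff * (s + delta_tilde + gamma)))
        w = mu + sigma * np.random.randn(*mu.shape)  # w^(k) ~ q   (steps 8-9)
        g_i = per_example_grads(w)                   # (M, dim)    (step 10)
        g_hat += g_i.mean(axis=0) / K                # ĝ           (step 11)
        h_hat += (g_i ** 2).mean(axis=0) / K         # ĥ           (step 12)
    m = beta1 * m + (g_hat + delta_tilde * mu)       # step 17
    s = (1.0 - tau * beta2) * s + beta2 * h_hat      # step 18
    mu = mu - alpha * m / (s + delta_tilde + gamma)  # step 19
    return mu, s, m

# Toy demo: per-example gradients of 0.5*(x_i·w - y_i)^2 for a linear model.
X, y = np.random.randn(32, 5), np.random.randn(32)
grads = lambda w: (X @ w - y)[:, None] * X
mu, s, m = np.zeros(5), np.ones(5), np.zeros(5)
mu, s, m = vogn_step(mu, s, m, grads, n_eff=320)
```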
[Figure 2 diagram: the minibatch M is split into local minibatches M_local across GPUs; each GPU draws Monte-Carlo samples w^(1), . . . , w^(K) ∼ q(w) and computes ĝ and ĥ, which are then AllReduced.]

Algorithmic hyperparameters for VOGN:
Learning rate α
Momentum rate β₁
Exp. moving average rate β₂
Prior precision δ
External damping factor γ
Tempering parameter τ
# MC samples for training K
Data augmentation factor ρ

Figure 2: A pseudo-code for our distributed VOGN algorithm is shown in Algorithm 1, and the distributed scheme is shown in the right figure. The computation in line 10 requires an extra calculation (see Appendix B), making VOGN slower than Adam. The bottom table gives a list of algorithmic hyperparameters needed for VOGN.
Momentum and initialisation: It is well known that both momentum and good initialisation can improve the speed of convergence of SG methods in deep learning [51]. Since VOGN is similar to Adam, we can implement momentum in a similar way. This is shown in step 17 of Algorithm 1, where β₁ is the momentum rate. We initialise the mean µ in the same way the weights are initialised in Adam (we use init.xavier_normal in PyTorch [11]). For the momentum term m, we use the same initialisation as Adam (initialised to 0). VOGN requires an additional initialisation for the variance σ². For this, we first run a forward pass through the first minibatch, calculate the average of the squared gradients, and initialise the scale s₀ with it (see step 1 in Algorithm 1). This implies that the variance is initialised to σ₀² = τ/(N(s₀ + δ̃)). For the tempering parameter τ, we use a schedule where it is increased from a small value (e.g., 0.1) to 1. With these initialisation protocols, VOGN is able to mimic the convergence behaviour of Adam in the beginning.

Learning rate scheduling: A common approach to quickly achieve high validation accuracies is to use a specific learning rate schedule [13]. The learning rate (denoted by α in Algorithm 1) is regularly decayed by a factor (typically a factor of 10). The frequency and timings of this decay are usually pre-specified. In VOGN, we use the same schedule used for Adam, which works well.

Distributed training: We also employ distributed training for VOGN to perform large experiments quickly. We can parallelise computation both over data and over Monte-Carlo (MC) samples. Data parallelism is useful to split up large minibatch sizes. This is followed by averaging over multiple MC samples and their losses on a single GPU. MC sample parallelism is useful when the minibatch size is small, and we can copy the entire minibatch and process it on a single GPU. Algorithm 1 and Figure 2 illustrate our distributed scheme. We use a combination of these two parallelism techniques with different MC samples for different inputs. This theoretically reduces the variance during training (see Equation 5 in Kingma et al. [26]), but sometimes requires averaging over multiple MC samples to get a sufficiently low variance in the early iterations. Overall, we find that this type of distributed training is essential for fast training on large problems such as ImageNet.

Implementation of the Gauss-Newton update in VOGN: As discussed earlier, VOGN uses the Gauss-Newton approximation, which is fundamentally different from Adam. In this approximation, the gradients on individual data examples are first squared and then averaged (see step 12 in Algorithm 1, which implements the update for s_t shown in (4)). We need extra computation to get access to individual gradients, due to which VOGN is slower than Adam or SGD (e.g., in Fig. 1). However, this is not a theoretical limitation, and it can be improved if a framework enables easy computation of the individual gradients. Details of our implementation are described in Appendix B. This implementation is much more efficient than a naive one where the gradients over examples are stored and the sum over the squares is computed sequentially. Our implementation usually brings the running time of VOGN to within 2-5 times of the time that Adam takes.
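To illustrate the extra computation, here is a plain-autograd sketch (ours, and deliberately naive: one backward pass per example) of the squared-then-averaged per-example gradients; the paper's Appendix B describes a much faster layer-wise implementation.

```python
# A sketch of VOGN's Gauss-Newton statistic: square individual gradients
# g_i first, then average (the sum goes OUTSIDE the square).
import torch

def squared_grad_average(model, loss_fn, xs, ys):
    sq_sum = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                 # one backward pass per example
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        for acc, g in zip(sq_sum, grads):
            acc += g ** 2                    # square *before* averaging
    return [acc / len(xs) for acc in sq_sum]

model = torch.nn.Linear(4, 3)
xs, ys = torch.randn(8, 4), torch.randint(0, 3, (8,))
h = squared_grad_average(model, torch.nn.functional.cross_entropy, xs, ys)
print([t.shape for t in h])
```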
Tuning VOGN: Currently, there is no common recipe for tuning the algorithmic hyperparameters for VI, especially for large-scale tasks like ImageNet classification. One key idea we use in our experiments is to start with Adam hyperparameters and then make sure that VOGN training closely follows an Adam-like trajectory in the beginning of training. To achieve this, we divide the tuning into an optimisation part and a regularisation part. In the optimisation part, we first tune the hyperparameters of a deterministic version of VOGN, called the Online Gauss-Newton (OGN) method. This method, described in Appendix C, is more stable than VOGN since it does not require MC sampling, and can be used as a stepping stone when moving from Adam/SGD to VOGN. After reaching performance competitive with Adam/SGD using OGN, we move to the regularisation part, where we tune the prior precision δ, the tempering parameter τ, and the number of MC samples K for VOGN. We initialise our search by setting the prior precision using the L2-regularisation parameter used for OGN, as well as the dataset size N. Another technique is to warm up the parameter τ towards τ = 1 (also see the "momentum and initialisation" part). Setting τ to smaller values usually stabilises the training, and increasing it slowly also helps during tuning. We also add an external damping factor γ > 0 to the moving average s_t. This increases the lower bound of the eigenvalues of the diagonal covariance Σ_t and prevents the noise and the step size from becoming too large. We find that a mix of these techniques works well for the problems we considered.

4 Experiments

In this section, we present experiments on fitting several deep networks to CIFAR-10 and ImageNet. Our experiments demonstrate practical training using VOGN on these benchmarks and show performance that is competitive with Adam and SGD. We also assess the quality of the posterior approximation, finding that the benefits of Bayesian principles are preserved. CIFAR-10 [28] contains 10 classes with 50,000 images for training and 10,000 images for validation. For ImageNet, we train with 1.28 million training examples and validate on 50,000 examples, classifying between 1,000 classes. We use a large minibatch size M = 4,096 and parallelise across 128 GPUs (NVIDIA Tesla P100). We compare the following methods on CIFAR-10: Adam and MC-dropout [9]. For ImageNet, we also compare to SGD, K-FAC, and Noisy K-FAC. We do not consider Noisy K-FAC for the other comparisons since tuning it is difficult. We compare 3 architectures: LeNet-5, AlexNet, and ResNet-18. We only compare to Bayes by Backprop (BBB) [4] on CIFAR-10 with LeNet-5 since it is very slow to converge for larger-scale experiments. We carefully set the hyperparameters of all methods, following the best practices of large distributed training [13] as the initial point of our hyperparameter tuning. The full set of hyperparameters is in Appendix D.

4.1 Performance on CIFAR-10 and ImageNet

We start by showing the effectiveness of momentum and batch normalisation for boosting the performance of VOGN. Figure 3a shows that these methods significantly speed up convergence and improve performance (in terms of both accuracy and log likelihoods). Figures 1 and 4 compare the convergence of VOGN to Adam (for all experiments), SGD (on ImageNet), and MC-dropout (on the rest). VOGN shows similar convergence and its performance is competitive with these methods. We also try BBB on LeNet-5, where it converges prohibitively slowly, performing very poorly.
We are not able to successfully train other architectures using this approach. We found it far simpler to tune VOGN because we can borrow all the techniques used for Adam. Figure 4 also shows the importance of DA in improving performance. Table 1 gives a final comparison of train/validation accuracies, negative log likelihoods, epochs required for convergence, and run-time per epoch. We can see that the accuracies, log likelihoods, and numbers of epochs are comparable. VOGN is 2-5 times slower than Adam and SGD. This is mainly due to the computation of individual gradients required in VOGN (see the discussion in Section 3). We clearly see that, by using deep-learning techniques on VOGN, we can perform practical deep learning. This is not possible with methods such as BBB.

Due to the Bayesian nature of VOGN, there are some trade-offs to consider. Reducing the prior precision (λ in Algorithm 1) results in higher validation accuracy, but also a larger train-test gap (more overfitting). This is shown in Appendix E for VOGN on ResNet-18 on ImageNet. As expected, when the prior precision is small, performance is similar to non-Bayesian methods. We also show the effect of changing the effective dataset size via ρ in Appendix E: note that, since we are going to tune the prior variance anyway, it is sufficient to set ρ to its correct order of magnitude. Another trade-off concerns the number of Monte-Carlo (MC) samples, shown in Appendix F. Increasing the number of training MC samples (up to a limit) improves VOGN's convergence rate and stability, but also increases the computation. Increasing the number of MC samples during testing improves generalisation, as expected due to averaging.

Finally, a few comments on the performance of the other methods. Adam regularly overfits the training set in most settings, with large train-test differences in both validation accuracy and log likelihood. One exception is LeNet-5, most likely because the small architecture results in underfitting (this is consistent with the low validation accuracies obtained). In contrast to Adam, MC-dropout has a small train-test gap, usually smaller than VOGN's. However, we will see in Section 4.2 that this is because of underfitting. Moreover, the performance of MC-dropout is highly sensitive to the dropout rate (see Appendix G for a comparison of different dropout rates). On ImageNet, Noisy K-FAC performs well too. It is slower than VOGN per epoch, but it takes fewer epochs; overall, its wall-clock time is about the same as VOGN's.

4.2 Quality of the Predictive Probabilities

In this section, we compare the quality of the predictive probabilities of the various methods. For Bayesian methods, we compute these probabilities by averaging over samples from the posterior approximations (see Appendix H for details). For non-Bayesian methods, they are obtained using the point estimate of the weights. We compare the probabilities using the following metrics: validation negative log-likelihood (NLL), area under ROC (AUROC), and expected calibration error (ECE) [40, 15]; a minimal sketch of the ECE computation is given below. For the first and third metrics, a lower number is better, while for the second, a higher number is better. See Appendix H for an explanation of these metrics. Results are summarised in Table 1. VOGN's uncertainty performance is more consistent and marginally better than the other methods', as expected from a more principled Bayesian method.
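As a concrete reference for the calibration metric, the following is a minimal sketch of a standard binned ECE computation; the bin count and names are illustrative assumptions, not taken from the paper.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    # confidences: max predicted probability per example; correct: 0/1 array.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += mask.mean() * gap  # weight each bin by its fraction of examples
    return ece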
Out of the 15 metrics (NLL, ECE, and AUROC on 5 dataset/architecture combinations), VOGN performs best or tied best on 10, and is second-best on the other 5. In contrast, both MC-dropout's and Adam's performance varies significantly, sometimes poor and sometimes decent. MC-dropout is best on 4, and Adam is best on 1 (on LeNet-5; as argued earlier, the small architecture may result in underfitting). We also show calibration curves [7] in Figures 1 and 14. Adam is consistently over-confident, with its calibration curve below the diagonal. Conversely, MC-dropout is usually under-confident. On ImageNet, MC-dropout performs well on ECE (all methods are very similar on AUROC), but this required an excessively tuned dropout rate (see Appendix G).

We also compare performance on out-of-distribution datasets. When testing on datasets that are different from the training datasets, predictions should be more uncertain. We use the experimental protocol from the literature [16, 31, 8, 32] to compare VOGN, Adam, and MC-dropout on CIFAR-10. We also borrow metrics from other works [16, 30], showing predictive-entropy histograms and reporting AUROC and FPR at 95% TPR. See Appendix I for further details on the datasets and metrics. Ideally, we want predictive entropy to be high on out-of-distribution data and low on in-distribution data. Our results are summarised in Figure 5 and Appendix I. On ResNet-18 and AlexNet, VOGN's predictive-entropy histograms show the desired behaviour: a spread of entropies on the in-distribution data, and high entropies on out-of-distribution data. Adam has many predictive entropies at zero, indicating that Adam tends to classify out-of-distribution data too confidently. Conversely, MC-dropout's predictive entropies are generally high (particularly in-distribution), indicating that MC-dropout has too much noise. On LeNet-5, we observe the same result as before: Adam and MC-dropout both perform well. The metrics (AUROC and FPR at 95% TPR) do not provide a clear story across architectures.

4.2.1 Performance on a Continual-learning Task

The goal of continual learning is to avoid forgetting old tasks while sequentially observing new tasks. The past tasks are never visited again, making it difficult to remember them. The field of continual learning has recently grown, with many approaches proposed to tackle this problem [27, 33, 43, 48, 50]. Most approaches consider a simple setting where the tasks (such as classifying a subset of classes) arrive sequentially, and all the data from that task is available. We consider the same setup in our experiments. We compare to Elastic Weight Consolidation (EWC) [27] and a VI-based approach called Variational Continual Learning (VCL) [43]. VCL employs BBB for each task, and we expect to boost its performance by replacing BBB with VOGN. Figure 3b shows results on a common benchmark called Permuted MNIST. We use the same experimental setup as in Swaroop et al. [52]. In Permuted MNIST, each task consists of the entire MNIST dataset (10-way classification) with a different fixed random permutation applied to the input images' pixels. We run each method 20 times, with different random seeds for both the benchmark's permutations and the model training. See Appendix D.2 for hyperparameter settings and further details. We see that VOGN performs at least as well as VCL, and far better than EWC [27].
Additionally, as found in the batch-learning setting, VOGN is much quicker than BBB: we run VOGN for only 100 epochs per task, whereas VCL requires 800 epochs per task to achieve its best results [52].

5 Conclusions

We successfully train deep networks with a natural-gradient variational inference method, VOGN, on a variety of architectures and datasets, even scaling up to ImageNet. This is made possible by the similarity of VOGN to Adam, which enables us to boost performance by borrowing deep-learning techniques. Our accuracies and convergence rates are comparable to those of SGD and Adam. Unlike them, however, VOGN retains the benefits of Bayesian principles, with well-calibrated uncertainty and good performance on out-of-distribution data. Better uncertainty estimates open up a whole range of potential future experiments, for example, small-data experiments, active learning, adversarial experiments, and sequential decision making. Our results on a continual-learning task confirm this. Another potential avenue for research is to consider structured covariance approximations.

Acknowledgements

We would like to thank Hikaru Nakata (Tokyo Institute of Technology) and Ikuro Sato (Denso IT Laboratory, Inc.) for their help with the PyTorch implementation. We are also thankful for the RAIDEN computing system and its support team at the RIKEN Center for AI Project, which we used extensively for our experiments. This research used computational resources of the HPCI system provided by Tokyo Institute of Technology (TSUBAME3.0) through the HPCI System Research Project (Project ID: hp190122). K. O. is a Research Fellow of JSPS and is supported by JSPS KAKENHI Grant Number JP19J13477.
1. What is the focus of the paper regarding deep learning training strategies?
2. What are the claimed benefits of the proposed approach compared to other baselines?
3. Do the experimental results support the claims made in the paper?
4. How does the reviewer assess the generalizability of the proposed method to different deep models?
Review
Review

This paper proposes a deep learning training strategy using natural-gradient variational inference, and claims that this preserves the benefits of Bayesian principles. However, the experimental results of the proposed method are not very impressive compared with the other baselines, despite the more complicated training process. In addition, I think it would be better if the authors could discuss further how easily the proposed method can be generalised to different deep models.
NIPS
Title Practical Deep Learning with Bayesian Principles

Abstract Bayesian methods promise to fix many shortcomings of deep learning, but they are impractical and rarely match the performance of standard methods, let alone improve them. In this paper, we demonstrate practical training of deep networks with natural-gradient variational inference. By applying techniques such as batch normalisation, data augmentation, and distributed training, we achieve similar performance in about the same number of epochs as the Adam optimiser, even on large datasets such as ImageNet. Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted. This work enables practical deep learning while preserving the benefits of Bayesian principles. A PyTorch implementation¹ is available as a plug-and-play optimiser.

1 Introduction

Deep learning has been extremely successful in many fields such as computer vision [29], speech processing [17], and natural-language processing [39], but it is also plagued with several issues that make its application difficult in many other fields. For example, it requires a large amount of high-quality data, and it can overfit when the dataset size is small. Similarly, sequential learning can cause forgetting of past knowledge [27], and a lack of reliable confidence estimates and other robustness issues can make it vulnerable to adversarial attacks [6]. Ultimately, due to such issues, the application of deep learning remains challenging, especially for applications where human lives are at risk.

Bayesian principles have the potential to address such issues. For example, we can represent uncertainty using the posterior distribution, enable sequential learning using Bayes' rule, and reduce overfitting with Bayesian model averaging [19]. The use of such Bayesian principles for neural networks has been advocated from very early on. Methods for Bayesian inference on neural networks were proposed in the 90s, e.g., using MCMC methods [41], Laplace's method [35], and variational inference (VI) [18, 2, 49, 1]. Benefits of Bayesian principles are even discussed in machine-learning textbooks [36, 3]. Despite this, they are rarely employed in practice. This is mainly due to computational concerns, which unfortunately overshadow their theoretical advantages.

The difficulty lies in the computation of the posterior distribution, which is especially challenging for deep learning. Even approximation methods, such as VI and MCMC, have historically been difficult to scale to large datasets such as ImageNet [47]. Due to this, it is common to use less principled approximations, such as MC-dropout [9], even though they are not ideal when it comes to fixing the issues of deep learning. For example, MC-dropout is unsuitable for continual learning [27] since its posterior approximation does not have mass over the whole weight space. It is also found to perform poorly for sequential decision making [45]. The form of the approximation used by such methods is usually rigid and cannot be easily improved, e.g., to other forms such as a mixture of Gaussians.

* These two authors contributed equally. † This work was conducted during an internship at the RIKEN Center for AI Project. ‡ Corresponding author: [email protected] ¹ The code is available at https://github.com/team-approx-bayes/dl-with-bayes. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
The goal of this paper is to make more principled Bayesian methods, such as VI, practical for deep learning, thereby helping researchers tackle its key limitations. We demonstrate practical training of deep networks by using recently proposed natural-gradient VI methods. These methods resemble the Adam optimiser, enabling us to leverage existing techniques for initialisation, momentum, batch normalisation, data augmentation, and distributed training. As a result, we obtain similar performance in about the same number of epochs as Adam when training many popular deep networks (e.g., LeNet, AlexNet, ResNet) on datasets such as CIFAR-10 and ImageNet (see Fig. 1). The results show that, despite using an approximate posterior, the training methods preserve the benefits coming from Bayesian principles. Compared to standard deep-learning methods, the predictive probabilities are well-calibrated, uncertainties on out-of-distribution inputs are improved, and performance on continual-learning tasks is boosted. Our work shows that practical deep learning is possible with Bayesian methods and aims to support further research in this area.

Related work. Previous VI methods, notably by Graves [14] and Blundell et al. [4], require significant implementation and tuning effort to perform well, e.g., on convolutional neural networks (CNNs). Slow convergence is found to be especially problematic for sequential problems [45]. There appear to be no reported results with complex networks on large problems, such as ImageNet. Our work solves these issues by applying deep-learning techniques to natural-gradient VI [24, 56]. In their paper, Zhang et al. [56] also employed data augmentation and batch normalisation for a natural-gradient method called Noisy K-FAC (see Appendix A) and showed results on VGG on CIFAR-10. However, a mean-field method called Noisy Adam was found to be unstable with batch normalisation. In contrast, we show that a similar method, called Variational Online Gauss-Newton (VOGN), proposed by Khan et al. [24], works well with such techniques. We show results for distributed training with Noisy K-FAC on ImageNet, but do not provide extensive comparisons since tuning it is time-consuming. Many of our techniques can speed up Noisy K-FAC, which is promising. Many other approaches have recently been proposed to compute posterior approximations by training deterministic networks [46, 37, 38]. Similarly to MC-dropout, their posterior approximations are not flexible, making it difficult to improve the accuracy of the approximations. On the other hand, VI offers a much more flexible alternative for applying Bayesian principles to deep learning.

2 Deep Learning with Bayesian Principles and Its Challenges

The success of deep learning is partly due to the availability of scalable and practical methods for training deep neural networks (DNNs). Network training is formulated as an optimisation problem where a loss between the data and the DNN's predictions is minimised. For example, in a supervised learning task with a dataset D of N inputs x_i and corresponding outputs y_i of length K, we minimise a loss of the form

ℓ̄(w) + δ w⊤w,  where  ℓ̄(w) := (1/N) Σ_i ℓ(y_i, f_w(x_i)),

f_w(x) ∈ R^K denotes the DNN outputs with weights w, ℓ(y, f) denotes a differentiable loss function between an output y and the function f, and δ > 0 is the L2 regulariser.² Deep learning relies on stochastic-gradient (SG) methods to minimise such loss functions.
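For concreteness, a minimal sketch of this regularised loss is given below; the names are illustrative, and cross-entropy is assumed as the loss ℓ.

import torch
import torch.nn.functional as F

def regularised_loss(model, x, y, delta):
    # loss(w) = (1/N) * sum_i l(y_i, f_w(x_i)) + delta * w^T w
    logits = model(x)                               # f_w(x)
    data_loss = F.cross_entropy(logits, y)          # mean over the N examples
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return data_loss + delta * l2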
The most commonly used optimisers, such as stochastic-gradient descent (SGD), RMSprop [53], and Adam [25], take the following form³ (all operations below are element-wise):

w_{t+1} ← w_t − α_t (ĝ(w_t) + δw_t) / (√(s_{t+1}) + ε),   s_{t+1} ← (1 − β_t) s_t + β_t (ĝ(w_t) + δw_t)²,   (1)

where t is the iteration, α_t > 0 and 0 < β_t < 1 are learning rates, ε > 0 is a small scalar, and ĝ(w) is the stochastic gradient at w, defined as ĝ(w) := (1/M) Σ_{i∈M_t} ∇_w ℓ(y_i, f_w(x_i)) using a minibatch M_t of M data examples. This simple update scales extremely well and can be applied to very large problems. With techniques such as initialisation protocols, momentum, weight decay, batch normalisation, and data augmentation, it also achieves good performance for many problems.

In contrast, the full Bayesian approach to deep learning is computationally very expensive. The posterior distribution can be obtained using Bayes' rule: p(w|D) = exp(−N ℓ̄(w)/τ) p(w)/p(D), where 0 < τ ≤ 1.⁴ This is costly due to the computation of the marginal likelihood p(D), a high-dimensional integral that is difficult to compute for large networks. Variational inference (VI) is a principled approach to more scalably estimate an approximation to p(w|D). The main idea is to employ a parametric approximation, e.g., a Gaussian q(w) := N(w|µ, Σ) with mean µ and covariance Σ. The parameters µ and Σ can then be estimated by maximising the evidence lower bound (ELBO):

ELBO:  L(µ, Σ) := −N E_q[ℓ̄(w)] − τ D_KL[q(w) ‖ p(w)],   (2)

where D_KL[·] denotes the Kullback-Leibler divergence. By using more complex approximations, we can further reduce the approximation error, but at a computational cost. By formulating Bayesian inference as an optimisation problem, VI enables a practical application of Bayesian principles. Despite this, VI has remained impractical for training large deep networks on large datasets. Existing methods, such as Graves [14] and Blundell et al. [4], directly apply popular SG methods to optimise the variational parameters in the ELBO, yet they fail to get a reasonable performance on large problems, usually converging very slowly. The failure of such direct applications of deep-learning methods to VI is not surprising. The techniques used in one field may not directly lead to improvements in the other, but it will be useful if they do, e.g., if we can optimise the ELBO in a way that allows us to exploit the tricks and techniques of deep learning and boost the performance of VI. The goal of this work is to do just that. We now describe our methods in detail.

3 Practical Deep Learning with Natural-Gradient Variational Inference

In this paper, we propose natural-gradient VI methods for practical deep learning with Bayesian principles. The natural-gradient update takes a simple form when estimating exponential-family approximations [23, 22]. When p(w) := N(w|0, I/λ), the update of the natural parameters η is performed by using the stochastic gradient of the expected regularised loss:

η_{t+1} = (1 − τρ) η_t − ρ ∇_µ E_q[ℓ̄(w) + (τλ/(2N)) w⊤w],   (3)

² This regulariser is sometimes set to 0 or a very small value. ³ Alternate versions with weight decay and momentum differ from this update [34]. We present a form useful to establish the connection between SG methods and natural-gradient VI. ⁴ This is a tempered posterior [54] setup where τ is set ≠ 1 when we expect model misspecification and/or adversarial examples [10]. Setting τ = 1 recovers standard Bayesian inference.
where ρ > 0 is the learning rate, and we note that the stochastic gradients are computed with respect to µ, the expectation parameters of q. The moving average above helps to deal with the stochasticity of the gradient estimates, and is very similar to the moving average used in deep learning (see (1)). When τ is set to 0, the update essentially minimises the regularised loss (see Section 5 in Khan et al. [24]). These properties of natural-gradient VI make it an ideal candidate for deep learning.

Recent work by Khan et al. [24] and Zhang et al. [56] further shows that, when q is Gaussian, the update (3) assumes a form that is strikingly similar to the update (1). For example, the Variational Online Gauss-Newton (VOGN) method of Khan et al. [24] estimates a Gaussian with mean µ_t and a diagonal covariance matrix Σ_t using the following update:

µ_{t+1} ← µ_t − α_t (ĝ(w_t) + λ̃µ_t) / (s_{t+1} + λ̃),   s_{t+1} ← (1 − τβ_t) s_t + β_t (1/M) Σ_{i∈M_t} (g_i(w_t))²,   (4)

where g_i(w_t) := ∇_w ℓ(y_i, f_{w_t}(x_i)), w_t ∼ N(w|µ_t, Σ_t) with Σ_t := diag(1/(N(s_t + λ̃))), λ̃ := τλ/N, and α_t, β_t > 0 are learning rates. Operations are performed element-wise. Similarly to (1), the vector s_t adapts the learning rate and is updated using a moving average. A major difference in VOGN is that the update of s_t is now based on a Gauss-Newton approximation [14], which uses (1/M) Σ_{i∈M_t} (g_i(w_t))². This is fundamentally different from the SG update in (1), which instead uses the gradient magnitude ((1/M) Σ_{i∈M_t} g_i(w_t) + δw_t)² [5]. The first approach uses the sum outside the square, while the second approach uses it inside. VOGN is therefore a second-order method and, similarly to Newton's method, does not need a square root over s_t. Implementing this step requires an additional calculation (see Appendix B), which makes VOGN a bit slower than Adam, but VOGN is expected to give better variance estimates (see Theorem 1 in Khan et al. [24]).

The main contribution of this paper is to demonstrate practical training of deep networks using VOGN. Since VOGN takes a similar form to SG methods, we can easily borrow existing deep-learning techniques to improve performance. We will now describe these techniques in detail. Pseudo-code for VOGN is shown in Algorithm 1.

Batch normalisation: Batch normalisation [20] has been found to significantly speed up and stabilise training of neural networks, and is widely used in deep learning. BatchNorm layers are inserted between neural network layers. They help stabilise each layer's input distribution by normalising the running average of the inputs' mean and variance. In our VOGN implementation, we simply use the existing implementation with default hyperparameter settings. We do not apply L2 regularisation and weight decay to BatchNorm parameters, as in Goyal et al. [13], nor do we maintain uncertainty over the BatchNorm parameters. This straightforward application of batch normalisation works for VOGN.

Data augmentation: When training on image datasets, data augmentation (DA) techniques can improve performance drastically [13]. We consider two common real-time data augmentation techniques: random cropping and horizontal flipping. After randomly selecting a minibatch at each iteration, we use a randomly selected cropped version of all images. Each image in the minibatch has a 50% chance of being horizontally flipped. We find that directly applying DA gives slightly worse performance than expected, and also affects the calibration of the resulting uncertainty. However, DA increases the effective sample size.
We therefore modify it to be ρN, where ρ ≥ 1, improving performance (see step 2 in Algorithm 1). The reason for this performance boost might be the complex relationship between the regularisation and N. For the regularised loss ℓ̄(w) + δ w⊤w, the two are unidentifiable, i.e., we can multiply δ by a constant and reduce N by the same constant without changing the minimum. However, in a Bayesian setting (like in (2)), the two quantities are separate, and therefore changing the data might also change the optimal prior-variance hyperparameter in a complicated way. This needs further theoretical investigation, but our simple fix of scaling N seems to work well in the experiments. We set ρ by considering the specific DA techniques used. When training on CIFAR-10, the random-cropping DA step involves first padding the 32x32 images to size 40x40 and then taking randomly selected 28x28 crops. We consider this as effectively increasing the dataset size by a factor of 5 (4 images for each corner, and one central image). The horizontal-flipping DA step doubles the dataset size (one dataset of unflipped images, one of flipped images). Combined, this gives ρ = 10. Similar arguments for ImageNet DA techniques give ρ = 5. Even though ρ is another hyperparameter to set, we find that its precise value does not matter much. Typically, after setting an estimate for ρ, tuning a little seems to work well (see Appendix E).

Algorithm 1: Variational Online Gauss-Newton (VOGN)
1: Initialise µ₀, s₀, m₀.
2: N ← ρN, λ̃ ← τλ/N.
3: repeat
4:   Sample a minibatch M of size M.
5:   Split M across GPUs (local minibatch M_local).
6:   for each GPU in parallel do
7:     for k = 1, 2, ..., K do
8:       Sample ε ∼ N(0, I).
9:       w⁽ᵏ⁾ ← µ + σε with σ ← (1/(N(s + λ̃ + γ)))^(1/2).
10:      Compute g_i⁽ᵏ⁾ ← ∇_w ℓ(y_i, f_{w⁽ᵏ⁾}(x_i)), for all i ∈ M_local, using the method described in Appendix B.
11:      ĝ_k ← (1/M) Σ_{i∈M_local} g_i⁽ᵏ⁾.
12:      ĥ_k ← (1/M) Σ_{i∈M_local} (g_i⁽ᵏ⁾)².
13:    end for
14:    ĝ ← (1/K) Σ_{k=1}^{K} ĝ_k and ĥ ← (1/K) Σ_{k=1}^{K} ĥ_k.
15:  end for
16:  AllReduce ĝ, ĥ.
17:  m ← β₁m + (ĝ + λ̃µ).
18:  s ← (1 − τβ₂)s + β₂ĥ.
19:  µ ← µ − αm/(s + λ̃ + γ).
20: until stopping criterion is met

Algorithmic hyperparameters for VOGN: learning rate α; momentum rate β₁; exponential moving-average rate β₂; prior precision λ; external damping factor γ; tempering parameter τ; number of MC samples for training K; data augmentation factor ρ.

Figure 2: A pseudo-code for our distributed VOGN algorithm is shown in Algorithm 1, and the distributed scheme is shown in the right figure. The computation in line 10 requires an extra calculation (see Appendix B), making VOGN slower than Adam.
The bottom table gives a list of the algorithmic hyperparameters needed for VOGN.

Momentum and initialisation: It is well known that both momentum and good initialisation can improve the speed of convergence for SG methods in deep learning [51]. Since VOGN is similar to Adam, we can implement momentum in a similar way. This is shown in step 17 of Algorithm 1, where β₁ is the momentum rate. We initialise the mean µ in the same way the weights are initialised in Adam (we use init.xavier_normal in PyTorch [11]). For the momentum term m, we use the same initialisation as Adam (initialised to 0). VOGN requires an additional initialisation for the variance σ². For this, we first run a forward pass through the first minibatch, calculate the average of the squared gradients, and initialise the scale s₀ with it (see step 1 in Algorithm 1). This implies that the variance is initialised to σ₀² = τ/(N(s₀ + λ̃)). For the tempering parameter τ, we use a schedule where it is increased from a small value (e.g., 0.1) to 1. With these initialisation protocols, VOGN is able to mimic the convergence behaviour of Adam in the beginning.

Learning rate scheduling: A common approach to quickly achieve high validation accuracies is to use a specific learning-rate schedule [13]. The learning rate (denoted by α in Algorithm 1) is regularly decayed by a factor (typically a factor of 10). The frequency and timings of this decay are usually pre-specified. In VOGN, we use the same schedule used for Adam, which works well.

Distributed training: We also employ distributed training for VOGN to perform large experiments quickly. We can parallelise computation both over data and over Monte-Carlo (MC) samples. Data parallelism is useful to split up large minibatch sizes; the losses over multiple MC samples are then averaged on a single GPU. MC-sample parallelism is useful when the minibatch size is small: the entire minibatch is copied and processed with a different MC sample on each GPU. Algorithm 1 and Figure 2 illustrate our distributed scheme (a minimal sketch of the all-reduce step is given below). We use a combination of these two parallelism techniques with different MC samples for different inputs. This theoretically reduces the variance during training (see Equation 5 in Kingma et al. [26]), but sometimes requires averaging over multiple MC samples to get a sufficiently low variance in the early iterations. Overall, we find that this type of distributed training is essential for fast training on large problems such as ImageNet.

Implementation of the Gauss-Newton update in VOGN: As discussed earlier, VOGN uses the Gauss-Newton approximation, which is fundamentally different from Adam. In this approximation, the gradients on individual data examples are first squared and then averaged (see step 12 in Algorithm 1, which implements the update for sₜ shown in (4)). We need extra computation to get access to individual gradients, due to which VOGN is slower than Adam or SGD (e.g., in Fig. 1). However, this is not a theoretical limitation, and it can be improved if a framework enables easy computation of the individual gradients. Details of our implementation are described in Appendix B. This implementation is much more efficient than a naive one where per-example gradients are stored and the sum of their squares is computed sequentially. Our implementation usually brings the running time of VOGN to within 2-5 times of the time that Adam takes.
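The sketch below illustrates the reduction in step 16 of Algorithm 1; it is our own illustration, not the released implementation, and assumes each worker has already computed its local ĝ and ĥ (over its data shard and/or its MC samples).

import torch.distributed as dist

def all_reduce_statistics(g_hat, h_hat):
    # Average ĝ and ĥ across all workers (data- and/or MC-sample-parallel).
    world = dist.get_world_size()
    dist.all_reduce(g_hat, op=dist.ReduceOp.SUM)
    dist.all_reduce(h_hat, op=dist.ReduceOp.SUM)
    g_hat /= world
    h_hat /= world
    return g_hat, h_hat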
Tuning VOGN: Currently, there is no common recipe for tuning the algorithmic hyperparameters for VI, especially for large-scale tasks like ImageNet classification. One key idea we use in our experiments is to start with Adam hyperparameters and then make sure that VOGN training closely follows an Adam-like trajectory in the beginning of training. To achieve this, we divide the tuning into an optimisation part and a regularisation part. In the optimisation part, we first tune the hyperparameters of a deterministic version of VOGN, called the online Gauss-Newton (OGN) method. This method, described in Appendix C, is more stable than VOGN since it does not require MC sampling, and can be used as a stepping stone when moving from Adam/SGD to VOGN. After reaching a performance competitive with Adam/SGD using OGN, we move to the regularisation part, where we tune the prior precision λ, the tempering parameter τ, and the number of MC samples K for VOGN. We initialise our search by setting the prior precision using the L2-regularisation parameter used for OGN, as well as the dataset size N. Another technique is to warm up the parameter τ towards τ = 1 (also see the "momentum and initialisation" part). Setting τ to smaller values usually stabilises the training, and increasing it slowly also helps during tuning. We also add an external damping factor γ > 0 to the moving average sₜ. This increases the lower bound of the eigenvalues of the diagonal covariance Σₜ and prevents the noise and the step size from becoming too large. We find that a mix of these techniques works well for the problems we considered.

4 Experiments

In this section, we present experiments on fitting several deep networks on CIFAR-10 and ImageNet. Our experiments demonstrate practical training using VOGN on these benchmarks and show performance that is competitive with Adam and SGD. We also assess the quality of the posterior approximation, finding that the benefits of Bayesian principles are preserved. CIFAR-10 [28] contains 10 classes with 50,000 images for training and 10,000 images for validation. For ImageNet, we train with 1.28 million training examples and validate on 50,000 examples, classifying between 1,000 classes. We use a large minibatch size M = 4,096 and parallelise across 128 GPUs (NVIDIA Tesla P100). We compare the following methods on CIFAR-10: Adam and MC-dropout [9]. For ImageNet, we also compare to SGD, K-FAC, and Noisy K-FAC. We do not consider Noisy K-FAC for other comparisons since its tuning is difficult. We compare 3 architectures: LeNet-5, AlexNet, and ResNet-18. We only compare to Bayes by Backprop (BBB) [4] for CIFAR-10 with LeNet-5 since it is very slow to converge for larger-scale experiments. We carefully set the hyperparameters of all methods, following the best practice of large distributed training [13] as the initial point of our hyperparameter tuning. The full set of hyperparameters is in Appendix D.

4.1 Performance on CIFAR-10 and ImageNet

We start by showing the effectiveness of momentum and batch normalisation in boosting the performance of VOGN. Figure 3a shows that these methods significantly speed up convergence and improve performance (in terms of both accuracy and log likelihoods). Figures 1 and 4 compare the convergence of VOGN to Adam (for all experiments), SGD (on ImageNet), and MC-dropout (on the rest). VOGN shows similar convergence, and its performance is competitive with these methods. We also try BBB on LeNet-5, where it converges prohibitively slowly, performing very poorly.
We are not able to successfully train other architectures using this approach. We found it far simpler to tune VOGN because we can borrow all the techniques used for Adam. Figure 4 also shows the importance of DA in improving performance. Table 1 gives a final comparison of train/validation accuracies, negative log likelihoods, epochs required for convergence, and run-time per epoch. We can see that the accuracies, log likelihoods, and numbers of epochs are comparable. VOGN is 2-5 times slower than Adam and SGD. This is mainly due to the computation of individual gradients required in VOGN (see the discussion in Section 3). We clearly see that, by using deep-learning techniques on VOGN, we can perform practical deep learning. This is not possible with methods such as BBB.

Due to the Bayesian nature of VOGN, there are some trade-offs to consider. Reducing the prior precision (λ in Algorithm 1) results in higher validation accuracy, but also a larger train-test gap (more overfitting). This is shown in Appendix E for VOGN on ResNet-18 on ImageNet. As expected, when the prior precision is small, performance is similar to non-Bayesian methods. We also show the effect of changing the effective dataset size via ρ in Appendix E: note that, since we are going to tune the prior variance anyway, it is sufficient to set ρ to its correct order of magnitude. Another trade-off concerns the number of Monte-Carlo (MC) samples, shown in Appendix F. Increasing the number of training MC samples (up to a limit) improves VOGN's convergence rate and stability, but also increases the computation. Increasing the number of MC samples during testing improves generalisation, as expected due to averaging.

Finally, a few comments on the performance of the other methods. Adam regularly overfits the training set in most settings, with large train-test differences in both validation accuracy and log likelihood. One exception is LeNet-5, most likely because the small architecture results in underfitting (this is consistent with the low validation accuracies obtained). In contrast to Adam, MC-dropout has a small train-test gap, usually smaller than VOGN's. However, we will see in Section 4.2 that this is because of underfitting. Moreover, the performance of MC-dropout is highly sensitive to the dropout rate (see Appendix G for a comparison of different dropout rates). On ImageNet, Noisy K-FAC performs well too. It is slower than VOGN per epoch, but it takes fewer epochs; overall, its wall-clock time is about the same as VOGN's.

4.2 Quality of the Predictive Probabilities

In this section, we compare the quality of the predictive probabilities of the various methods. For Bayesian methods, we compute these probabilities by averaging over samples from the posterior approximations (see Appendix H for details; a minimal sketch of this averaging is given below). For non-Bayesian methods, they are obtained using the point estimate of the weights. We compare the probabilities using the following metrics: validation negative log-likelihood (NLL), area under ROC (AUROC), and expected calibration error (ECE) [40, 15]. For the first and third metrics, a lower number is better, while for the second, a higher number is better. See Appendix H for an explanation of these metrics. Results are summarised in Table 1. VOGN's uncertainty performance is more consistent and marginally better than the other methods', as expected from a more principled Bayesian method.
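A minimal sketch of this Monte-Carlo averaging follows; the helper sample_weights, which is assumed to draw w ∼ q(w) into the model in place, and the other names are illustrative, not the paper's API.

import torch
import torch.nn.functional as F

def mc_predictive_probs(model, sample_weights, x, num_samples=10):
    # Average the softmax outputs over posterior samples.
    probs = None
    with torch.no_grad():
        for _ in range(num_samples):
            sample_weights(model)              # assumed helper: draws w ~ q(w)
            p = F.softmax(model(x), dim=1)     # per-sample class probabilities
            probs = p if probs is None else probs + p
    return probs / num_samples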
Out of the 15 metrics (NLL, ECE, and AUROC on 5 dataset/architecture combinations), VOGN performs best or tied best on 10, and is second-best on the other 5. In contrast, both MC-dropout's and Adam's performance varies significantly, sometimes poor and sometimes decent. MC-dropout is best on 4, and Adam is best on 1 (on LeNet-5; as argued earlier, the small architecture may result in underfitting). We also show calibration curves [7] in Figures 1 and 14. Adam is consistently over-confident, with its calibration curve below the diagonal. Conversely, MC-dropout is usually under-confident. On ImageNet, MC-dropout performs well on ECE (all methods are very similar on AUROC), but this required an excessively tuned dropout rate (see Appendix G).

We also compare performance on out-of-distribution datasets. When testing on datasets that are different from the training datasets, predictions should be more uncertain. We use the experimental protocol from the literature [16, 31, 8, 32] to compare VOGN, Adam, and MC-dropout on CIFAR-10. We also borrow metrics from other works [16, 30], showing predictive-entropy histograms and reporting AUROC and FPR at 95% TPR. See Appendix I for further details on the datasets and metrics. Ideally, we want predictive entropy to be high on out-of-distribution data and low on in-distribution data. Our results are summarised in Figure 5 and Appendix I. On ResNet-18 and AlexNet, VOGN's predictive-entropy histograms show the desired behaviour: a spread of entropies on the in-distribution data, and high entropies on out-of-distribution data. Adam has many predictive entropies at zero, indicating that Adam tends to classify out-of-distribution data too confidently. Conversely, MC-dropout's predictive entropies are generally high (particularly in-distribution), indicating that MC-dropout has too much noise. On LeNet-5, we observe the same result as before: Adam and MC-dropout both perform well. The metrics (AUROC and FPR at 95% TPR) do not provide a clear story across architectures.

4.2.1 Performance on a Continual-learning Task

The goal of continual learning is to avoid forgetting old tasks while sequentially observing new tasks. The past tasks are never visited again, making it difficult to remember them. The field of continual learning has recently grown, with many approaches proposed to tackle this problem [27, 33, 43, 48, 50]. Most approaches consider a simple setting where the tasks (such as classifying a subset of classes) arrive sequentially, and all the data from that task is available. We consider the same setup in our experiments. We compare to Elastic Weight Consolidation (EWC) [27] and a VI-based approach called Variational Continual Learning (VCL) [43]. VCL employs BBB for each task, and we expect to boost its performance by replacing BBB with VOGN. Figure 3b shows results on a common benchmark called Permuted MNIST. We use the same experimental setup as in Swaroop et al. [52]. In Permuted MNIST, each task consists of the entire MNIST dataset (10-way classification) with a different fixed random permutation applied to the input images' pixels. We run each method 20 times, with different random seeds for both the benchmark's permutations and the model training. See Appendix D.2 for hyperparameter settings and further details. We see that VOGN performs at least as well as VCL, and far better than EWC [27].
Additionally, as found in the batch-learning setting, VOGN is much quicker than BBB: we run VOGN for only 100 epochs per task, whereas VCL requires 800 epochs per task to achieve its best results [52].

5 Conclusions

We successfully train deep networks with a natural-gradient variational inference method, VOGN, on a variety of architectures and datasets, even scaling up to ImageNet. This is made possible by the similarity of VOGN to Adam, which enables us to boost performance by borrowing deep-learning techniques. Our accuracies and convergence rates are comparable to those of SGD and Adam. Unlike them, however, VOGN retains the benefits of Bayesian principles, with well-calibrated uncertainty and good performance on out-of-distribution data. Better uncertainty estimates open up a whole range of potential future experiments, for example, small-data experiments, active learning, adversarial experiments, and sequential decision making. Our results on a continual-learning task confirm this. Another potential avenue for research is to consider structured covariance approximations.

Acknowledgements

We would like to thank Hikaru Nakata (Tokyo Institute of Technology) and Ikuro Sato (Denso IT Laboratory, Inc.) for their help with the PyTorch implementation. We are also thankful for the RAIDEN computing system and its support team at the RIKEN Center for AI Project, which we used extensively for our experiments. This research used computational resources of the HPCI system provided by Tokyo Institute of Technology (TSUBAME3.0) through the HPCI System Research Project (Project ID: hp190122). K. O. is a Research Fellow of JSPS and is supported by JSPS KAKENHI Grant Number JP19J13477.
1. What is the main contribution of the paper regarding applying tricks from deep learning literature to VOGN?
2. What are the strengths and weaknesses of the paper regarding its technical novelty, quality, clarity, significance, and focus on Bayesian inference?
3. Do you have any concerns about the experiments conducted in the paper, particularly regarding their small calibration improvements and lack of online learning or fine-tuning experiments?
4. How do you assess the numbers in Table 1, and what do they indicate about the performance of Bayesian deep learning on smaller datasets?
5. What are your thoughts on the comparison between VOGN and BBB, specifically regarding their optimization of the same objective with an approximate posterior from the same parametric family?
6. Do you think the paper could provide more insight into why VOGN works better than other methods like BBB, and what might be some interesting directions for future research in this area?
Review
Review

Originality: Rather low. The main technical novelty lies in applying tricks from the deep learning literature to VOGN. The experiments are fairly standard.

Quality: High. That being said, the experiments seem to be carefully executed and described in detail, and the overall method is technically sound. While not overly ambitious in terms of technical novelty, I think this is a well-executed piece of work.

Clarity: High. The paper is well-written and easy to follow.

Significance: Mixed. I find that the paper does itself a bit of a disservice by putting so much focus on technicalities. I believe this is an attempt to appeal to readers with an interest in deep learning rather than Bayesian inference; however, I don't find the empirical part of the paper to make a particularly strong case for using Bayesian methods in deep learning. My main takeaway from the experiments would be that "being Bayesian" does not matter too much on a large dataset like ImageNet (or even CIFAR-10), and the small calibration improvements as in Figure 1 are probably not worth the extra headache. If the authors indeed wish to make a case for Bayesian deep learning to a larger audience, I think the paper would be much stronger if it had some online-learning or fine-tuning experiments using the approximate posterior as a prior on a much smaller dataset, where ignoring parameter uncertainty would most likely lead to dramatically worse performance. The numbers in Table 1 are too close/inconsistent to be really convincing in an empirical paper, and for the out-of-distribution uncertainty as in Figure 5 it is unclear whether it is a good metric, since we don't know the uncertainty of the true posterior. Alternatively, this could also be a much more relevant contribution to the Bayesian deep learning subfield if the paper made an attempt to gain insight into why VOGN works better than e.g. BBB. The paragraph in lines 91 to 97 does not make much sense to me, since (unless I misunderstood something) both methods optimise the same objective with an approximate posterior from the same parametric family - the difference is that VOGN is a natural-gradient method. So the failure of BBB can't be attributed to the ELBO if VOGN works. But if the argument is that natural gradient is necessary, I find it surprising that Noisy K-FAC is apparently difficult to tune. Digging a bit deeper here would probably lead to interesting insights.
NIPS
Title Practical Deep Learning with Bayesian Principles Abstract Bayesian methods promise to fix many shortcomings of deep learning, but they are impractical and rarely match the performance of standard methods, let alone improve them. In this paper, we demonstrate practical training of deep networks with natural-gradient variational inference. By applying techniques such as batch normalisation, data augmentation, and distributed training, we achieve similar performance in about the same number of epochs as the Adam optimiser, even on large datasets such as ImageNet. Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted. This work enables practical deep learning while preserving benefits of Bayesian principles. A PyTorch implementation1 is available as a plug-and-play optimiser.

1 Introduction

Deep learning has been extremely successful in many fields such as computer vision [29], speech processing [17], and natural-language processing [39], but it is also plagued with several issues that make its application difficult in many other fields. For example, it requires a large amount of high-quality data and it can overfit when the dataset size is small. Similarly, sequential learning can cause forgetting of past knowledge [27], and a lack of reliable confidence estimates and other robustness issues can make it vulnerable to adversarial attacks [6]. Ultimately, due to such issues, application of deep learning remains challenging, especially for applications where human lives are at risk. Bayesian principles have the potential to address such issues. For example, we can represent uncertainty using the posterior distribution, enable sequential learning using Bayes' rule, and reduce overfitting with Bayesian model averaging [19]. The use of such Bayesian principles for neural networks has been advocated from very early on: approximate Bayesian inference for neural networks was already proposed in the 90s, e.g., by using MCMC methods [41], Laplace's method [35], and variational inference (VI) [18, 2, 49, 1]. Benefits of Bayesian principles are even discussed in machine-learning textbooks [36, 3]. Despite this, they are rarely employed in practice. This is mainly due to computational concerns, unfortunately overshadowing their theoretical advantages. The difficulty lies in the computation of the posterior distribution, which is especially challenging for deep learning. Even approximation methods, such as VI and MCMC, have historically been difficult to scale to large datasets such as ImageNet [47]. Due to this, it is common to use less principled approximations, such as MC-dropout [9], even though they are not ideal when it comes to fixing the issues of deep learning. For example, MC-dropout is unsuitable for continual learning [27] since its posterior approximation does not have mass over the whole weight space. It is also found to perform poorly for sequential decision making [45]. The form of the approximation used by such methods is usually rigid and cannot be easily improved, e.g., to other forms such as a mixture of Gaussians.

Footnotes: * These two authors contributed equally. † This work is conducted during an internship at RIKEN Center for AI project. ‡ Corresponding author: emtiyaz.khan@riken.jp. 1 The code is available at https://github.com/team-approx-bayes/dl-with-bayes.
The goal of this paper is to make more principled Bayesian methods, such as VI, practical for deep learning, thereby helping researchers tackle its key limitations. We demonstrate practical training of deep networks by using recently proposed natural-gradient VI methods. These methods resemble the Adam optimiser, enabling us to leverage existing techniques for initialisation, momentum, batch normalisation, data augmentation, and distributed training. As a result, we obtain similar performance in about the same number of epochs as Adam when training many popular deep networks (e.g., LeNet, AlexNet, ResNet) on datasets such as CIFAR-10 and ImageNet (see Fig. 1). The results show that, despite using an approximate posterior, the training methods preserve the benefits coming from Bayesian principles. Compared to standard deep-learning methods, the predictive probabilities are well-calibrated, uncertainties on out-of-distribution inputs are improved, and performance for continual-learning tasks is boosted. Our work shows that practical deep learning is possible with Bayesian methods and aims to support further research in this area.

Related work. Previous VI methods, notably by Graves [14] and Blundell et al. [4], require significant implementation and tuning effort to perform well, e.g., on convolutional neural networks (CNNs). Slow convergence is found to be especially problematic for sequential problems [45]. There appear to be no reported results with complex networks on large problems, such as ImageNet. Our work solves these issues by applying deep-learning techniques to natural-gradient VI [24, 56]. In their paper, Zhang et al. [56] also employed data augmentation and batch normalisation for a natural-gradient method called Noisy K-FAC (see Appendix A) and showed results on VGG on CIFAR-10. However, a mean-field method called Noisy Adam was found to be unstable with batch normalisation. In contrast, we show that a similar method, called Variational Online Gauss-Newton (VOGN), proposed by Khan et al. [24], works well with such techniques. We show results for distributed training with Noisy K-FAC on ImageNet, but do not provide extensive comparisons since tuning it is time-consuming. Many of our techniques can speed up Noisy K-FAC, which is promising. Many other approaches have recently been proposed to compute posterior approximations by training deterministic networks [46, 37, 38]. Similarly to MC-dropout, their posterior approximations are not flexible, making it difficult to improve the accuracy of their approximations. On the other hand, VI offers a much more flexible alternative to apply Bayesian principles to deep learning.

2 Deep Learning with Bayesian Principles and Its Challenges

The success of deep learning is partly due to the availability of scalable and practical methods for training deep neural networks (DNNs). Network training is formulated as an optimisation problem where a loss between the data and the DNN's predictions is minimised. For example, in a supervised learning task with a dataset $\mathcal{D}$ of $N$ inputs $x_i$ and corresponding outputs $y_i$ of length $K$, we minimise a loss of the following form: $\bar{\ell}(w) + \delta w^\top w$, where $\bar{\ell}(w) := \frac{1}{N} \sum_i \ell(y_i, f_w(x_i))$, $f_w(x) \in \mathbb{R}^K$ denotes the DNN outputs with weights $w$, $\ell(y, f)$ denotes a differentiable loss function between an output $y$ and the function $f$, and $\delta > 0$ is the L2 regulariser.2 Deep learning relies on stochastic-gradient (SG) methods to minimise such loss functions.
The most commonly used optimisers, such as stochastic-gradient descent (SGD), RMSprop [53], and Adam [25], take the following form3 (all operations below are element-wise):

$w_{t+1} \leftarrow w_t - \alpha_t \, \frac{\hat{g}(w_t) + \delta w_t}{\sqrt{s_{t+1}} + \epsilon}$, \quad $s_{t+1} \leftarrow (1 - \beta_t) s_t + \beta_t \, (\hat{g}(w_t) + \delta w_t)^2$, (1)

where $t$ is the iteration, $\alpha_t > 0$ and $0 < \beta_t < 1$ are learning rates, $\epsilon > 0$ is a small scalar, and $\hat{g}(w)$ is the stochastic gradient at $w$, defined as $\hat{g}(w) := \frac{1}{M} \sum_{i \in \mathcal{M}_t} \nabla_w \ell(y_i, f_w(x_i))$ using a minibatch $\mathcal{M}_t$ of $M$ data examples. This simple update scales extremely well and can be applied to very large problems. With techniques such as initialisation protocols, momentum, weight-decay, batch normalisation, and data augmentation, it also achieves good performance for many problems. In contrast, the full Bayesian approach to deep learning is computationally very expensive. The posterior distribution can be obtained using Bayes' rule: $p(w|\mathcal{D}) = \exp(-N \bar{\ell}(w)/\tau)\, p(w) / p(\mathcal{D})$, where $0 < \tau \le 1$.4 This is costly due to the computation of the marginal likelihood $p(\mathcal{D})$, a high-dimensional integral that is difficult to compute for large networks. Variational inference (VI) is a principled approach to more scalably estimate an approximation to $p(w|\mathcal{D})$. The main idea is to employ a parametric approximation, e.g., a Gaussian $q(w) := \mathcal{N}(w|\mu, \Sigma)$ with mean $\mu$ and covariance $\Sigma$. The parameters $\mu$ and $\Sigma$ can then be estimated by maximising the evidence lower bound (ELBO):

ELBO: $\mathcal{L}(\mu, \Sigma) := -N \, \mathbb{E}_q[\bar{\ell}(w)] - \tau \, D_{KL}[q(w) \,\|\, p(w)]$, (2)

where $D_{KL}[\cdot]$ denotes the Kullback-Leibler divergence. By using more complex approximations, we can further reduce the approximation error, but at a computational cost. By formulating Bayesian inference as an optimisation problem, VI enables a practical application of Bayesian principles. Despite this, VI has remained impractical for training large deep networks on large datasets. Existing methods, such as Graves [14] and Blundell et al. [4], directly apply popular SG methods to optimise the variational parameters in the ELBO, yet they fail to get a reasonable performance on large problems, usually converging very slowly. The failure of such direct applications of deep-learning methods to VI is not surprising. The techniques used in one field may not directly lead to improvements in the other, but it will be useful if they do, e.g., if we can optimise the ELBO in a way that allows us to exploit the tricks and techniques of deep learning and boost the performance of VI. The goal of this work is to do just that. We now describe our methods in detail.

3 Practical Deep Learning with Natural-Gradient Variational Inference

In this paper, we propose natural-gradient VI methods for practical deep learning with Bayesian principles. The natural-gradient update takes a simple form when estimating exponential-family approximations [23, 22]. When $p(w) := \mathcal{N}(w|0, I/\delta)$, the update of the natural parameter $\lambda$ is performed by using the stochastic gradient of the expected regularised loss:

$\lambda_{t+1} = (1 - \tau\rho) \lambda_t - \rho \, \nabla_\mu \mathbb{E}_q\!\left[\bar{\ell}(w) + \tfrac{1}{2\tau} \delta\, w^\top w\right]$, (3)

where $\rho > 0$ is the learning rate, and we note that the stochastic gradients are computed with respect to $\mu$, the expectation parameters of $q$.

Footnotes: 2 This regulariser is sometimes set to 0 or a very small value. 3 Alternate versions with weight-decay and momentum differ from this update [34]. We present a form useful to establish the connection between SG methods and natural-gradient VI. 4 This is a tempered posterior [54] setup where $\tau$ is set $\neq 1$ when we expect model misspecification and/or adversarial examples [10]. Setting $\tau = 1$ recovers standard Bayesian inference.
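Updates (1) and (3) share this moving-average structure. As a concrete illustration, the following is a minimal NumPy sketch of one step of the generic adaptive update (1); the function and variable names are ours, not the paper's.

import numpy as np

def adaptive_sg_step(w, s, g_hat, alpha, beta, delta, eps):
    # Stochastic gradient of the regularised loss (all ops element-wise).
    reg_grad = g_hat + delta * w
    # Moving average of squared gradients, as in Eq. (1).
    s = (1.0 - beta) * s + beta * reg_grad ** 2
    # Scaled parameter update.
    w = w - alpha * reg_grad / (np.sqrt(s) + eps)
    return w, s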
The moving average above helps to deal with the stochasticity of the gradient estimates, and is very similar to the moving average used in deep learning (see (1)). When $\tau$ is set to 0, the update essentially minimises the regularised loss (see Section 5 in Khan et al. [24]). These properties of natural-gradient VI make it an ideal candidate for deep learning. Recent work by Khan et al. [24] and Zhang et al. [56] further shows that, when $q$ is Gaussian, the update (3) assumes a form that is strikingly similar to the update (1). For example, the Variational Online Gauss-Newton (VOGN) method of Khan et al. [24] estimates a Gaussian with mean $\mu_t$ and a diagonal covariance matrix $\Sigma_t$ using the following update:

$\mu_{t+1} \leftarrow \mu_t - \alpha_t \, \frac{\hat{g}(w_t) + \tilde{\delta} \mu_t}{s_{t+1} + \tilde{\delta}}$, \quad $s_{t+1} \leftarrow (1 - \tau \beta_t) s_t + \beta_t \, \frac{1}{M} \sum_{i \in \mathcal{M}_t} (g_i(w_t))^2$, (4)

where $g_i(w_t) := \nabla_w \ell(y_i, f_{w_t}(x_i))$, $w_t \sim \mathcal{N}(w|\mu_t, \Sigma_t)$ with $\Sigma_t := \mathrm{diag}(1/(N(s_t + \tilde{\delta})))$, $\tilde{\delta} := \tau \delta / N$, and $\alpha_t, \beta_t > 0$ are learning rates. Operations are performed element-wise. Similarly to (1), the vector $s_t$ adapts the learning rate and is updated using a moving average. A major difference in VOGN is that the update of $s_t$ is now based on a Gauss-Newton approximation [14] which uses $\frac{1}{M} \sum_{i \in \mathcal{M}_t} (g_i(w_t))^2$. This is fundamentally different from the SG update in (1), which instead uses the gradient magnitude $\left(\frac{1}{M} \sum_{i \in \mathcal{M}_t} g_i(w_t) + \delta w_t\right)^2$ [5]. The first approach uses the sum outside the square while the second approach uses it inside. VOGN is therefore a second-order method and, similarly to Newton's method, does not need a square root over $s_t$. Implementation of this step requires an additional calculation (see Appendix B) which makes VOGN a bit slower than Adam, but VOGN is expected to give better variance estimates (see Theorem 1 in Khan et al. [24]). The main contribution of this paper is to demonstrate practical training of deep networks using VOGN. Since VOGN takes a similar form to SG methods, we can easily borrow existing deep-learning techniques to improve performance. We will now describe these techniques in detail. Pseudo-code for VOGN is shown in Algorithm 1.
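The distinction between the two scale estimates can be made concrete with a short sketch: given a matrix of per-example gradients for one minibatch, VOGN averages element-wise squares, while the Adam-style estimate squares the averaged gradient. Names here are illustrative, not the paper's implementation.

import numpy as np

def scale_estimates(per_example_grads):
    # per_example_grads: array of shape [M, num_params], one gradient row
    # per data example. (The delta*w regulariser term of Eq. (1) is
    # omitted here for brevity.)
    g = per_example_grads
    gauss_newton = np.mean(g ** 2, axis=0)    # sum outside the square (VOGN)
    grad_magnitude = np.mean(g, axis=0) ** 2  # sum inside the square (Adam-style)
    return gauss_newton, grad_magnitude

By Jensen's inequality, the first estimate is element-wise at least as large as the second.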
Batch normalisation: Batch normalisation [20] has been found to significantly speed up and stabilise training of neural networks, and is widely used in deep learning. BatchNorm layers are inserted between neural network layers. They help stabilise each layer's input distribution by normalising the running average of the inputs' mean and variance. In our VOGN implementation, we simply use the existing implementation with default hyperparameter settings. We do not apply L2 regularisation and weight decay to BatchNorm parameters, like in Goyal et al. [13], or maintain uncertainty over the BatchNorm parameters. This straightforward application of batch normalisation works for VOGN.

Data Augmentation: When training on image datasets, data augmentation (DA) techniques can improve performance drastically [13]. We consider two common real-time data augmentation techniques: random cropping and horizontal flipping. After randomly selecting a minibatch at each iteration, we use a randomly selected cropped version of all images. Each image in the minibatch has a 50% chance of being horizontally flipped. We find that directly applying DA gives slightly worse performance than expected, and also affects the calibration of the resulting uncertainty. However, DA increases the effective sample size. We therefore modify it to be $\rho N$ where $\rho \ge 1$, improving performance (see step 2 in Algorithm 1). The reason for this performance boost might be due to the complex relationship between the regularisation and $N$. For the regularised loss $\bar{\ell}(w) + \delta w^\top w$, the two are unidentifiable, i.e., we can multiply $\delta$ by a constant and reduce $N$ by the same constant without changing the minimum. However, in a Bayesian setting (like in (2)), the two quantities are separate, and therefore changing the data might also change the optimal prior variance hyperparameter in a complicated way. This needs further theoretical investigations, but our simple fix of scaling $N$ seems to work well in the experiments. We set $\rho$ by considering the specific DA techniques used. When training on CIFAR-10, the random cropping DA step involves first padding the 32x32 images to become of size 40x40, and then taking randomly selected 28x28 cropped images. We consider this as effectively increasing the dataset size by a factor of 5 (4 images for each corner, and one central image). The horizontal flipping DA step doubles the dataset size (one dataset of unflipped images, one of flipped images). Combined, this gives $\rho = 10$. Similar arguments for ImageNet DA techniques give $\rho = 5$. Even though $\rho$ is another hyperparameter to set, we find that its precise value does not matter much. Typically, after setting an estimate for $\rho$, tuning a little seems to work well (see Appendix E).

Algorithm 1: Variational Online Gauss Newton (VOGN)
1: Initialise $\mu_0$, $s_0$, $m_0$.
2: $N \leftarrow \rho N$, $\tilde{\delta} \leftarrow \tau \delta / N$.
3: repeat
4: Sample a minibatch $\mathcal{M}$ of size $M$.
5: Split $\mathcal{M}$ into each GPU (local minibatch $\mathcal{M}_{local}$).
6: for each GPU in parallel do
7: for $k = 1, 2, \dots, K$ do
8: Sample $\epsilon \sim \mathcal{N}(0, I)$.
9: $w^{(k)} \leftarrow \mu + \sigma \epsilon$ with $\sigma \leftarrow (1/(N(s + \tilde{\delta} + \gamma)))^{1/2}$.
10: Compute $g_i^{(k)} \leftarrow \nabla_w \ell(y_i, f_{w^{(k)}}(x_i))$, $\forall i \in \mathcal{M}_{local}$, using the method described in Appendix B.
11: $\hat{g}_k \leftarrow \frac{1}{M} \sum_{i \in \mathcal{M}_{local}} g_i^{(k)}$.
12: $\hat{h}_k \leftarrow \frac{1}{M} \sum_{i \in \mathcal{M}_{local}} (g_i^{(k)})^2$.
13: end for
14: $\hat{g} \leftarrow \frac{1}{K} \sum_{k=1}^{K} \hat{g}_k$ and $\hat{h} \leftarrow \frac{1}{K} \sum_{k=1}^{K} \hat{h}_k$.
15: end for
16: AllReduce $\hat{g}$, $\hat{h}$.
17: $m \leftarrow \beta_1 m + (\hat{g} + \tilde{\delta} \mu)$.
18: $s \leftarrow (1 - \tau \beta_2) s + \beta_2 \hat{h}$.
19: $\mu \leftarrow \mu - \alpha \, m / (s + \tilde{\delta} + \gamma)$.
20: until stopping criterion is met

Hyperparameters of VOGN (table in Figure 2): learning rate $\alpha$; momentum rate $\beta_1$; exp. moving average rate $\beta_2$; prior precision $\delta$; external damping factor $\gamma$; tempering parameter $\tau$; number of MC samples for training $K$; data augmentation factor $\rho$.

Figure 2: A pseudo-code for our distributed VOGN algorithm is shown in Algorithm 1, and the distributed scheme is shown in the right figure. The computation in line 10 requires an extra calculation (see Appendix B), making VOGN slower than Adam.
The bottom table gives a list of algorithmic hyperparameters needed for VOGN.

Momentum and initialisation: It is well known that both momentum and good initialisation can improve the speed of convergence for SG methods in deep learning [51]. Since VOGN is similar to Adam, we can implement momentum in a similar way. This is shown in step 17 of Algorithm 1, where $\beta_1$ is the momentum rate. We initialise the mean $\mu$ in the same way the weights are initialised in Adam (we use init.xavier_normal in PyTorch [11]). For the momentum term $m$, we use the same initialisation as Adam (initialised to 0). VOGN requires an additional initialisation for the variance $\sigma^2$. For this, we first run a forward pass through the first minibatch, calculate the average of the squared gradients and initialise the scale $s_0$ with it (see step 1 in Algorithm 1). This implies that the variance is initialised to $\sigma_0^2 = \tau / (N(s_0 + \tilde{\delta}))$. For the tempering parameter $\tau$, we use a schedule where it is increased from a small value (e.g., 0.1) to 1. With these initialisation protocols, VOGN is able to mimic the convergence behaviour of Adam in the beginning.

Learning rate scheduling: A common approach to quickly achieve high validation accuracies is to use a specific learning rate schedule [13]. The learning rate (denoted by $\alpha$ in Algorithm 1) is regularly decayed by a factor (typically a factor of 10). The frequency and timings of this decay are usually pre-specified. In VOGN, we use the same schedule used for Adam, which works well.

Distributed training: We also employ distributed training for VOGN to perform large experiments quickly. We can parallelise computation both over data and Monte-Carlo (MC) samples. Data parallelism is useful to split up large minibatch sizes. This is followed by averaging over multiple MC samples and their losses on a single GPU. MC sample parallelism is useful when the minibatch size is small, and we can copy the entire minibatch and process it on a single GPU. Algorithm 1 and Figure 2 illustrate our distributed scheme. We use a combination of these two parallelism techniques with different MC samples for different inputs. This theoretically reduces the variance during training (see Equation 5 in Kingma et al. [26]), but sometimes requires averaging over multiple MC samples to get a sufficiently low variance in the early iterations. Overall, we find that this type of distributed training is essential for fast training on large problems such as ImageNet.

Implementation of the Gauss-Newton update in VOGN: As discussed earlier, VOGN uses the Gauss-Newton approximation, which is fundamentally different from Adam. In this approximation, the gradients on individual data examples are first squared and then averaged afterwards (see step 12 in Algorithm 1, which implements the update for $s_t$ shown in (4)). We need extra computation to get access to individual gradients, due to which VOGN is slower than Adam or SGD (e.g., in Fig. 1). However, this is not a theoretical limitation, and it can be improved if a framework enables an easy computation of the individual gradients. Details of our implementation are described in Appendix B. This implementation is much more efficient than a naive one where gradients over examples are stored and the sum over the square is computed sequentially. Our implementation usually brings the running time of VOGN to within 2-5 times of the time that Adam takes.
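To tie the pieces of Algorithm 1 together, here is a minimal single-GPU NumPy sketch of one VOGN step. per_example_grad is an assumed user-supplied routine returning per-example gradients (the efficient computation of these is what Appendix B addresses), and all names and default values are illustrative rather than the paper's implementation.

import numpy as np

def vogn_step(mu, s, m, per_example_grad, minibatch, N,
              alpha=1e-3, beta1=0.9, beta2=0.999,
              delta=1.0, gamma=1e-8, tau=1.0, K=1, rng=np.random):
    # One single-GPU step of Algorithm 1. per_example_grad(w, batch)
    # must return an array of shape [M, num_params].
    delta_t = tau * delta / N
    sigma = np.sqrt(1.0 / (N * (s + delta_t + gamma)))   # step 9
    g_hat = np.zeros_like(mu)
    h_hat = np.zeros_like(mu)
    for _ in range(K):                                   # average over K MC samples
        w = mu + sigma * rng.standard_normal(mu.shape)   # sample weights from q
        g = per_example_grad(w, minibatch)
        g_hat += g.mean(axis=0) / K                      # step 11/14
        h_hat += (g ** 2).mean(axis=0) / K               # step 12/14: square, then average
    m = beta1 * m + (g_hat + delta_t * mu)               # step 17: momentum
    s = (1.0 - tau * beta2) * s + beta2 * h_hat          # step 18: scale update
    mu = mu - alpha * m / (s + delta_t + gamma)          # step 19: mean update
    return mu, s, m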
Tuning VOGN: Currently, there is no common recipe for tuning the algorithmic hyperparameters for VI, especially for large-scale tasks like ImageNet classification. One key idea we use in our experiments is to start with Adam hyperparameters and then make sure that VOGN training closely follows an Adam-like trajectory in the beginning of training. To achieve this, we divide the tuning into an optimisation part and a regularisation part. In the optimisation part, we first tune the hyperparameters of a deterministic version of VOGN, called the online Gauss-Newton (OGN) method. This method, described in Appendix C, is more stable than VOGN since it does not require MC sampling, and can be used as a stepping stone when moving from Adam/SGD to VOGN. After reaching performance competitive with Adam/SGD using OGN, we move to the regularisation part, where we tune the prior precision $\delta$, the tempering parameter $\tau$, and the number of MC samples $K$ for VOGN. We initialise our search by setting the prior precision using the L2-regularisation parameter used for OGN, as well as the dataset size $N$. Another technique is to warm up the parameter $\tau$ towards $\tau = 1$ (also see the "momentum and initialisation" part). Setting $\tau$ to smaller values usually stabilises the training, and increasing it slowly also helps during tuning. We also add an external damping factor $\gamma > 0$ to the moving average $s_t$. This increases the lower bound of the eigenvalues of the diagonal covariance $\Sigma_t$ and prevents the noise and the step size from becoming too large. We find that a mix of these techniques works well for the problems we considered.

4 Experiments

In this section, we present experiments on fitting several deep networks on CIFAR-10 and ImageNet. Our experiments demonstrate practical training using VOGN on these benchmarks and show performance that is competitive with Adam and SGD. We also assess the quality of the posterior approximation, finding that the benefits of Bayesian principles are preserved. CIFAR-10 [28] contains 10 classes with 50,000 images for training and 10,000 images for validation. For ImageNet, we train with 1.28 million training examples and validate on 50,000 examples, classifying between 1,000 classes. We used a large minibatch size M = 4,096 and parallelise it across 128 GPUs (NVIDIA Tesla P100). We compare the following methods on CIFAR-10: Adam and MC-dropout [9]. For ImageNet, we also compare to SGD, K-FAC, and Noisy K-FAC. We do not consider Noisy K-FAC for other comparisons since tuning is difficult. We compare 3 architectures: LeNet-5, AlexNet, and ResNet-18. We only compare to Bayes by Backprop (BBB) [4] for CIFAR-10 with LeNet-5 since it is very slow to converge for larger-scale experiments. We carefully set the hyperparameters of all methods, following the best practice of large distributed training [13] as the initial point of our hyperparameter tuning. The full set of hyperparameters is in Appendix D.

4.1 Performance on CIFAR-10 and ImageNet

We start by showing the effectiveness of momentum and batch normalisation for boosting the performance of VOGN. Figure 3a shows that these methods significantly speed up convergence and performance (in terms of both accuracy and log likelihoods). Figures 1 and 4 compare the convergence of VOGN to Adam (for all experiments), SGD (on ImageNet), and MC-dropout (on the rest). VOGN shows similar convergence and its performance is competitive with these methods. We also try BBB on LeNet-5, where it converges prohibitively slowly, performing very poorly.
We are not able to successfully train other architectures using this approach. We found it far simpler to tune VOGN because we can borrow all the techniques used for Adam. Figure 4 also shows the importance of DA in improving performance. Table 1 gives a final comparison of train/validation accuracies, negative log likelihoods, epochs required for convergence, and run-time per epoch. We can see that the accuracy, log likelihoods, and the number of epochs are comparable. VOGN is 2-5 times slower than Adam and SGD. This is mainly due to the computation of individual gradients required in VOGN (see the discussion in Section 3). We clearly see that by using deep-learning techniques on VOGN, we can perform practical deep learning. This is not possible with methods such as BBB. Due to the Bayesian nature of VOGN, there are some trade-offs to consider. Reducing the prior precision ($\delta$ in Algorithm 1) results in higher validation accuracy, but also a larger train-test gap (more overfitting). This is shown in Appendix E for VOGN on ResNet-18 on ImageNet. As expected, when the prior precision is small, performance is similar to non-Bayesian methods. We also show the effect of changing the effective dataset size $\rho$ in Appendix E: note that, since we are going to tune the prior variance anyway, it is sufficient to set $\rho$ to its correct order of magnitude. Another trade-off concerns the number of Monte-Carlo (MC) samples, shown in Appendix F. Increasing the number of training MC samples (up to a limit) improves VOGN's convergence rate and stability, but also increases the computation. Increasing the number of MC samples during testing improves generalisation, as expected due to averaging. Finally, a few comments on the performance of the other methods. Adam regularly overfits the training set in most settings, with large train-test differences in both validation accuracy and log likelihood. One exception is LeNet-5, which is most likely due to the small architecture, which results in underfitting (this is consistent with the low validation accuracies obtained). In contrast to Adam, MC-dropout has a small train-test gap, usually smaller than VOGN's. However, we will see in Section 4.2 that this is because of underfitting. Moreover, the performance of MC-dropout is highly sensitive to the dropout rate (see Appendix G for a comparison of different dropout rates). On ImageNet, Noisy K-FAC performs well too. It is slower than VOGN, but it takes fewer epochs. Overall, wall-clock time is about the same as VOGN.

4.2 Quality of the Predictive Probabilities

In this section, we compare the quality of the predictive probabilities for various methods. For Bayesian methods, we compute these probabilities by averaging over the samples from the posterior approximations (see Appendix H for details). For non-Bayesian methods, these are obtained using the point estimate of the weights. We compare the probabilities using the following metrics: validation negative log-likelihood (NLL), area under ROC (AUROC), and expected calibration error (ECE) [40, 15]. For the first and third metric, a lower number is better, while for the second, a higher number is better. See Appendix H for an explanation of these metrics.
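The Bayesian predictive probabilities above are Monte-Carlo averages over posterior samples; a minimal sketch under a diagonal Gaussian posterior (names are ours, and logits_fn is an assumed user-supplied forward pass):

import numpy as np

def predictive_probs(logits_fn, mu, sigma, x, num_samples=10, rng=np.random):
    # Monte-Carlo estimate of p(y|x) = E_q[softmax(f_w(x))] under
    # q(w) = N(mu, diag(sigma^2)).
    probs = 0.0
    for _ in range(num_samples):
        w = mu + sigma * rng.standard_normal(mu.shape)  # sample weights from q
        z = logits_fn(w, x)                             # class logits, shape [K]
        z = z - z.max()                                 # numerically stable softmax
        probs += np.exp(z) / np.exp(z).sum() / num_samples
    return probs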
Results are summarised in Table 1. VOGN's uncertainty performance is more consistent and marginally better than the other methods, as expected from a more principled Bayesian method. Out of the 15 metrics (NLL, ECE and AUROC on 5 dataset/architecture combinations), VOGN performs the best or tied best on 10, and is second-best on the other 5. In contrast, both MC-dropout's and Adam's performance varies significantly, sometimes performing poorly, sometimes performing decently. MC-dropout is best on 4, and Adam is best on 1 (on LeNet-5; as argued earlier, the small architecture may result in underfitting). We also show calibration curves [7] in Figures 1 and 14. Adam is consistently over-confident, with its calibration curve below the diagonal. Conversely, MC-dropout is usually under-confident. On ImageNet, MC-dropout performs well on ECE (all methods are very similar on AUROC), but this required an excessively tuned dropout rate (see Appendix G). We also compare performance on out-of-distribution datasets. When testing on datasets that are different from the training datasets, predictions should be more uncertain. We use the experimental protocol from the literature [16, 31, 8, 32] to compare VOGN, Adam and MC-dropout on CIFAR-10. We also borrow metrics from other works [16, 30], showing predictive entropy histograms and also reporting AUROC and FPR at 95% TPR. See Appendix I for further details on the datasets and metrics. Ideally, we want predictive entropy to be high on out-of-distribution data and low on in-distribution data. Our results are summarised in Figure 5 and Appendix I. On ResNet-18 and AlexNet, VOGN's predictive entropy histograms show the desired behaviour: a spread of entropies for the in-distribution data, and high entropies for out-of-distribution data. Adam has many predictive entropies at zero, indicating that Adam tends to classify out-of-distribution data too confidently. Conversely, MC-dropout's predictive entropies are generally high (particularly in-distribution), indicating that MC-dropout has too much noise. On LeNet-5, we observe the same result as before: Adam and MC-dropout both perform well. The metrics (AUROC and FPR at 95% TPR) do not provide a clear story across architectures.

4.2.1 Performance on a Continual-learning task

The goal of continual learning is to avoid forgetting old tasks while sequentially observing new tasks. The past tasks are never visited again, making it difficult to remember them. The field of continual learning has recently grown, with many approaches proposed to tackle this problem [27, 33, 43, 48, 50]. Most approaches consider a simple setting where the tasks (such as classifying a subset of classes) arrive sequentially, and all the data from that task is available. We consider the same setup in our experiments. We compare to Elastic Weight Consolidation (EWC) [27] and a VI-based approach called Variational Continual Learning (VCL) [43]. VCL employs BBB for each task, and we expect to boost its performance by replacing BBB with VOGN. Figure 3b shows results on a common benchmark called Permuted MNIST. We use the same experimental setup as in Swaroop et al. [52]. In Permuted MNIST, each task consists of the entire MNIST dataset (10-way classification) with a different fixed random permutation applied to the input images' pixels. We run each method 20 times, with different random seeds for both the benchmark's permutations and model training. See Appendix D.2 for hyperparameter settings and further details. We see that VOGN performs at least as well as VCL, and far better than a popular approach called EWC [27].
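The Permuted MNIST construction is simple enough to sketch: each task fixes one random pixel permutation and applies it to every image (function and variable names are illustrative):

import numpy as np

def make_permuted_tasks(x, num_tasks, seed=0):
    # x: flattened images of shape [n, num_pixels]. Each task applies one
    # fixed random pixel permutation to all images, as in Permuted MNIST.
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(num_tasks):
        perm = rng.permutation(x.shape[1])  # one fixed permutation per task
        tasks.append(x[:, perm])
    return tasks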
Additionally, as found in the batch learning setting, VOGN is much quicker than BBB: we run VOGN for only 100 epochs per task, whereas VCL requires 800 epochs per task to achieve best results [52]. 5 Conclusions We successfully train deep networks with a natural-gradient variational inference method, VOGN, on a variety of architectures and datasets, even scaling up to ImageNet. This is made possible due to the similarity of VOGN to Adam, enabling us to boost performance by borrowing deep-learning techniques. Our accuracies and convergence rates are comparable to SGD and Adam. Unlike them, however, VOGN retains the benefits of Bayesian principles, with well-calibrated uncertainty and good performance on out-of-distribution data. Better uncertainty estimates open up a whole range of potential future experiments, for example, small data experiments, active learning, adversarial experiments, and sequential decision making. Our results on a continual-learning task confirm this. Another potential avenue for research is to consider structured covariance approximations. Acknowledgements We would like to thank Hikaru Nakata (Tokyo Institute of Technology) and Ikuro Sato (Denso IT Laboratory, Inc.) for their help on the PyTorch implementation. We are also thankful for the RAIDEN computing system and its support team at the RIKEN Center for AI Project which we used extensively for our experiments. This research used computational resources of the HPCI system provided by Tokyo Institute of Technology (TSUBAME3.0) through the HPCI System Research Project (Project ID:hp190122). K. O. is a Research Fellow of JSPS and is supported by JSPS KAKENHI Grant Number JP19J13477.
1. What is the main contribution of the paper in the field of Bayesian Neural Networks? 2. What are the strengths of the proposed approach, particularly in its ability to incorporate various tricks into BNN training? 3. How does the reviewer assess the novelty of the paper compared to prior works in BNNs and deep learning? 4. What are some potential limitations or areas for improvement regarding the proposed method? 5. How does the reviewer evaluate the clarity and quality of the paper's content?
Review
Review This paper proposes a perspective on training Bayesian Neural Networks (BNNs) that motivates how to best incorporate different tricks (such as batch normalization and momentum) into BNN training. The resulting algorithm scales to large inference problems like fitting a BNN to ImageNet and achieves well-calibrated predictions. The starting point is an existing approach (VOGN) for fitting BNNs with natural gradients. The authors observe that the update equations of VOGN are similar to the update equations of popular SGD methods with adaptive learning rates. From this perspective, they can derive by analogy how to best incorporate different tricks for practical deep learning (batch normalization, data augmentation, distributed training). The extensive experimental study supports the claims of the authors. Topic-wise, this work is a good fit for the NeurIPS community. There seem to be no 'new ideas' in this paper (VOGN comes from ref [22] and batch normalization, data augmentation, etc. come from the deep learning literature), so I would rate it lower on originality. Yet, I find it an important contribution to bridging the gap between Bayesian neural networks and practical deep learning. The ideas and how they are connected are described clearly. This work is an interesting step in the direction of finding the right trade-off between computational efficiency and well-calibrated predictions in Bayesian deep learning.
NIPS
Title A Communication-Efficient Parallel Algorithm for Decision Tree Abstract Decision tree (and its extensions such as Gradient Boosting Decision Trees and Random Forest) is a widely used machine learning algorithm, due to its practical effectiveness and model interpretability. With the emergence of big data, there is an increasing need to parallelize the training process of decision tree. However, most existing attempts along this line suffer from high communication costs. In this paper, we propose a new algorithm, called Parallel Voting Decision Tree (PV-Tree), to tackle this challenge. After partitioning the training data onto a number of (e.g., M) machines, this algorithm performs both local voting and global voting in each iteration. For local voting, the top-k attributes are selected from each machine according to its local data. Then, globally top-2k attributes are determined by a majority voting among these local candidates. Finally, the full-grained histograms of the globally top-2k attributes are collected from local machines in order to identify the best (most informative) attribute and its split point. PV-Tree can achieve a very low communication cost (independent of the total number of attributes) and thus can scale out very well. Furthermore, theoretical analysis shows that this algorithm can learn a near optimal decision tree, since it can find the best attribute with a large probability. Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the trade-off between accuracy and efficiency.

1 Introduction

Decision tree [16] is a widely used machine learning algorithm, since it is practically effective and the rules it learns are simple and interpretable. Based on decision tree, people have developed other algorithms such as Random Forest (RF) [3] and Gradient Boosting Decision Trees (GBDT) [7], which have demonstrated very promising performance in various learning tasks [5]. In recent years, with the emergence of very big training data (which cannot be held in one single machine), there has been an increasing need to parallelize the training process of decision tree. To this end, there have been two major categories of attempts.2

Footnotes: * Denotes equal contribution. This work was done when the first author was visiting Microsoft Research Asia. 2 There is another category of works that parallelize the tasks of sub-tree training once a node is split [15], which require the training data to be moved from machine to machine many times and are thus inefficient. Moreover, there are also some other works accelerating decision tree construction by using pre-sorting [13] [19] [11] and binning [17] [8] [10], or employing a shared-memory-processors approach [12] [1]. However, they are out of our scope.

Attribute-parallel: Training data are vertically partitioned according to the attributes and allocated to different machines, and then in each iteration, the machines work on non-overlapping sets of attributes in parallel in order to find the best attribute and its split point (suppose this best attribute locates at the i-th machine) [19] [11] [20]. This process is very efficient in terms of communication. However, after that, the re-partition of the data on machines other than the i-th machine will induce very high communication costs (proportional to the number of data samples).
This is because those machines have no information about the best attribute at all, and in order to fulfill the re-partitioning, they must retrieve the partition information of every data sample from the i-th machine. Furthermore, as each worker still holds the full sample set, the partition process is not parallelized, which slows down the algorithm.

Data-parallel: Training data are horizontally partitioned according to the samples and allocated to different machines. Then the machines communicate with each other the local histograms of all attributes (according to their own data samples) in order to obtain the global attribute distributions and identify the best attribute and split point [12] [14]. It is clear that the corresponding communication cost is very high and proportional to the total number of attributes and the histogram size. To reduce the cost, in [2] and [21] [10], it was proposed to exchange quantized histograms between machines when estimating the global attribute distributions. However, this does not really solve the problem – the communication cost is still proportional to the total number of attributes, not to mention that the quantization may hurt the accuracy.

In this paper, we propose a new data-parallel algorithm for decision tree, called Parallel Voting Decision Tree (PV-Tree), which can achieve a much better balance between communication efficiency and accuracy. The key difference between the conventional data-parallel decision tree algorithm and PV-Tree is that the former only trusts the globally aggregated histogram information, while the latter leverages the local statistical information contained in each machine through a two-stage voting process, and can thus significantly reduce the communication cost. Specifically, PV-Tree contains the following steps in each iteration. 1) Local voting. On each machine, we select the top-k attributes based on its local data according to the informativeness scores (e.g., risk reduction for regression, and information gain for classification). 2) Global voting. We determine global top-2k attributes by a majority voting among the local candidates selected in the previous step. That is, we rank the attributes according to the number of local machines who select them, and choose the top 2k attributes from the ranked list. 3) Best attribute identification. We collect the full-grained histograms of the globally top-2k attributes from local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. It is easy to see that the PV-Tree algorithm has a very low communication cost. It does not need to communicate the information of all attributes; instead, it only communicates the indices of the locally top-k attributes per machine and the histograms of the globally top-2k attributes. In other words, its communication cost is independent of the total number of attributes. This makes PV-Tree highly scalable. On the other hand, it can be proven that PV-Tree can find the best attribute with a large probability, and the probability will approach 1 regardless of k when the training data become sufficiently large. In contrast, the data-parallel algorithm based on quantized histograms could fail in finding the best attribute, since the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large.
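As a concrete illustration of the two-stage voting just described, here is a minimal Python sketch; the names are ours, not the paper's implementation:

from collections import Counter

def pv_tree_global_candidates(local_scores, k):
    # local_scores: one dict per machine, mapping attribute index to its
    # informativeness score (information gain or variance gain) on local data.
    votes = Counter()
    for scores in local_scores:
        # Local voting: each machine nominates its top-k attributes.
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        votes.update(top_k)
    # Global voting: rank attributes by how many machines selected them
    # and keep the top-2k as candidates for full-histogram aggregation.
    return [attr for attr, _ in votes.most_common(2 * k)]

Full-grained histograms are then gathered only for these 2k candidates, which is why the communication cost stays independent of the total number of attributes.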
We have conducted experiments on real-world datasets to evaluate the performance of PV-Tree. The experimental results show that PV-Tree has consistently higher accuracy and training speed than all the baselines we implemented. We further conducted experiments to evaluate the performance of PV-Tree in different settings (e.g., with different numbers of machines, different values of k). The experimental results are in accordance with our theoretical analysis.

2 Decision Tree

Suppose the training data set $D_n = \{(x_{i,j}, y_i); i = 1, \dots, n, j = 1, \dots, d\}$ are independently sampled from $\prod_{j=1}^{d} \mathcal{X}_j \times \mathcal{Y}$ according to $(\prod_{j=1}^{d} P_{X_j}) P_{Y|X}$. The goal is to learn a regression or classification model $f \in \mathcal{F}: \prod_{j=1}^{d} \mathcal{X}_j \to \mathcal{Y}$ by minimizing loss functions on the training data, which hopefully could achieve accurate prediction for the unseen test data. Decision tree [16, 18] is a widely used model for both regression [4] and classification [18]. A typical decision tree algorithm is described in Alg 1. As can be seen, the tree growth procedure is recursive, and the nodes will not stop growing until they reach the stopping criteria. There are two important functions in the algorithm: FindBestSplit returns the best split point {attribute, threshold} of a node, and Split splits the training data according to the best split point. The details of FindBestSplit are given in Alg 2: first, histograms of the attributes are constructed (for continuous attributes, one usually converts their numerical values to finite bins for ease of computation) by going over all training data on the current node; then all bins (split points) are traversed from left to right, and leftSum and rightSum are used to accumulate the sums of the left and right parts of the split point, respectively. When selecting the best split point, an informativeness measure is adopted. The widely used informativeness measures are information gain and variance gain for classification and regression, respectively.

Algorithm 1 BuildTree
Input: Node N, Dataset D
if StoppingCriteria(D) then
  N.output = Prediction(D)
else
  bestSplit = FindBestSplit(D)
  (DL, DR) = Split(D, N, bestSplit)
  BuildTree(N.leftChild, DL)
  BuildTree(N.rightChild, DR)
end if

Definition 2.1 [6][16] In classification, the information gain (IG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the entropy reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,

$IG_j(w; O) = H_j - (H_j^l(w) + H_j^r(w)) = P(w_1 \le X_j \le w_2) H(Y | w_1 \le X_j \le w_2) - P(w_1 \le X_j < w) H(Y | w_1 \le X_j < w) - P(w \le X_j \le w_2) H(Y | w \le X_j \le w_2)$,

where $H(\cdot|\cdot)$ denotes the conditional entropy. In regression, the variance gain (VG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the variance reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,

$VG_j(w; O) = \sigma_j - (\sigma_j^l(w) + \sigma_j^r(w)) = P(w_1 \le X_j \le w_2) \mathrm{Var}[Y | w_1 \le X_j \le w_2] - P(w_1 \le X_j < w) \mathrm{Var}[Y | w_1 \le X_j < w] - P(w \le X_j \le w_2) \mathrm{Var}[Y | w \le X_j \le w_2]$,

where $\mathrm{Var}[\cdot|\cdot]$ denotes the conditional variance.

3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and can achieve a good trade-off between communication efficiency and learning accuracy. PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just like in [2] [21]. However, its design principle is very different.
3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and achieves a good trade-off between communication efficiency and learning accuracy. PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just like [2] [21]; however, its design principle is very different. In [2][21], one does not trust the local information about the attributes on each machine, and the best attribute and split point are decided only from the aggregated global histograms of the attributes. In contrast, PV-Tree leverages the meaningful statistical information about the attributes contained in each local machine and makes decisions through a two-stage (local and then global) voting process. In this way, the communication cost is significantly reduced: we do not need to communicate the histogram information of all the attributes across machines, only the histograms of those attributes that survive the voting process.

The flow of the PV-Tree algorithm is very similar to that of the standard decision tree, except for the function FindBestSplit. We therefore only give the new implementation of this function, in Alg 3, which contains the following three steps.

Local Voting: We select the top-k attributes for each machine based on its local data set (according to the informativeness scores, e.g., information gain for classification and variance reduction for regression), and then exchange the indices of the selected attributes among machines. The communication cost for this step is very low, because only the indices of a small number of (i.e., k × M) attributes need to be communicated.

Global Voting: We determine the globally top-2k attributes by a majority voting among all the locally selected attributes from the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top-2k attributes from the ranked list. It can be proven that when the local data are big enough to be statistically representative, there is a very high probability that the top-2k attributes obtained by this majority voting contain the globally best attribute. This step does not induce any communication cost.

Best Attribute Identification: We collect the full-grained histograms of the globally top-2k attributes from the local machines in order to compute their global distributions, and then identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. The communication cost for this step is also low, because we only need to communicate the histograms of the 2k pre-selected attributes, not of all attributes. (As indicated by our theoretical analysis and empirical study in the next sections, a very small k already leads to good performance.)

As a result, the PV-Tree algorithm scales very well, since its communication cost is independent of both the total number of attributes and the total number of samples in the dataset. In the next section, we provide a theoretical analysis of the accuracy guarantee of PV-Tree.

Algorithm 2 FindBestSplit
Input: Dataset D
for all X in D.Attribute do
  ▷ Construct histogram
  H = new Histogram()
  for all x in X do
    H.binAt(x.bin).Put(x.label)
  end for
  ▷ Find best split
  leftSum = new HistogramSum()
  for all bin in H do
    leftSum = leftSum + H.binAt(bin)
    rightSum = H.AllSum - leftSum
    split.gain = CalSplitGain(leftSum, rightSum)
    bestSplit = ChoiceBetterOne(split, bestSplit)
  end for
end for
return bestSplit

Algorithm 3 PV-Tree_FindBestSplit
Input: Dataset D
localHistograms = ConstructHistograms(D)
▷ Local voting
splits = []
for all H in localHistograms do
  splits.Push(H.FindBestSplit())
end for
localTop = splits.TopKByGain(K)
▷ Gather all candidates
allCandidates = AllGather(localTop)
▷ Global voting
globalTop = allCandidates.TopKByMajority(2*K)
▷ Merge global histograms
globalHistograms = Gather(globalTop, localHistograms)
bestSplit = globalHistograms.FindBestSplit()
return bestSplit
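A minimal single-process sketch of the last step of Alg 3 (ours, not the authors' code): the per-machine histograms of a voted attribute are merged element-wise, and the merged statistics are scanned for the best split. Storing (count, sum of y, sum of y squared) per bin is our assumption; it suffices to compute the variance gain of Definition 2.1 up to scaling.

import numpy as np

def merge(local_hists):
    # Element-wise sum of per-machine histograms, each of shape (bins, 3).
    return np.sum(local_hists, axis=0)

def best_split_by_variance_gain(hist):
    cnt, s, s2 = hist[:, 0], hist[:, 1], hist[:, 2]
    def sse(c, sm, sq):  # sum of squared errors = count * variance
        return sq - sm * sm / c if c > 0 else 0.0
    c_tot, s_tot, s2_tot = cnt.sum(), s.sum(), s2.sum()
    total_sse = sse(c_tot, s_tot, s2_tot)
    best_gain, best_bin = -1.0, None
    cl = sl = sl2 = 0.0
    for i in range(len(hist) - 1):
        cl, sl, sl2 = cl + cnt[i], sl + s[i], sl2 + s2[i]
        gain = total_sse - sse(cl, sl, sl2) - sse(c_tot - cl, s_tot - sl, s2_tot - sl2)
        if gain > best_gain:
            best_gain, best_bin = gain, i + 1
    return best_bin, best_gain

# Two machines, one voted attribute, 4 bins of (count, sum_y, sum_y2) each.
m0 = np.array([[5, 2.0, 1.2], [5, 3.0, 2.1], [5, 9.5, 18.5], [5, 10.0, 20.4]])
m1 = np.array([[5, 2.5, 1.5], [5, 2.8, 1.9], [5, 9.0, 16.6], [5, 10.5, 22.3]])
print(best_split_by_variance_gain(merge([m0, m1])))  # split between bins 1 and 2

With this additive bin layout and identical binning across machines, the merged histogram yields the same split that pooling all the data for that attribute would yield.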
4 Theoretical Analysis

In this section, we present a theoretical analysis of the proposed PV-Tree algorithm. Specifically, we prove that PV-Tree selects the best (most informative) attribute with large probability, for both classification and regression. To state the theorem, we first introduce some notation; since the entire analysis concerns one arbitrarily fixed node $O$, we omit $O$ from the notation. In classification, we denote $IG_j = \max_w IG_j(w)$ and rank $\{IG_j;\ j \in [d]\}$ from large to small as $\{IG_{(1)}, \dots, IG_{(d)}\}$. We call attribute $(1)$ the most informative attribute. We then define $l_{(j)}(k) = |IG_{(1)} - IG_{(j)}| / 2$ for all $j \ge k + 1$, i.e., half the gap between the largest IG and the $j$-th largest IG. In regression, $l_{(j)}(k)$ is defined in the same way, with IG replaced by VG.

Theorem 4.1 Suppose we have $M$ local machines, each holding $n$ training data. At an arbitrary tree node, PV-Tree with local voting size $k$ and global majority voting size $2k$ selects the most informative attribute with probability at least
$\sum_{m=[M/2+1]}^{M} C_M^m \Big(1 - \sum_{j=k+1}^{d} \delta_{(j)}(n, k)\Big)^m \Big(\sum_{j=k+1}^{d} \delta_{(j)}(n, k)\Big)^{M-m}$,
where $\delta_{(j)}(n, k) = \alpha_{(j)}(n) + 4 e^{-c_{(j)} n (l_{(j)}(k))^2}$, with $\lim_{n \to \infty} \alpha_{(j)}(n) = 0$ and $c_{(j)}$ a constant.

Due to space restrictions, we only sketch the proof idea here and leave the detailed proof to the supplementary materials. The proof contains two parts. (1) For local voting, we find a sufficient condition that guarantees a similar ranking of the attributes ordered by information gain computed on the local data and on the full data, and we derive a lower bound on the probability that this condition holds using concentration inequalities. (2) For global voting, which selects the top-2k attributes, it is easy to prove that the most informative attribute is selected as long as no fewer than $[M/2 + 1]$ of the machines select it. (In fact, the global voting size can be $\beta k$ with $\beta > 1$; the sufficient condition then becomes that no fewer than $[M/\beta + 1]$ of the machines select the most informative attribute.) The probability in the theorem then follows from the binomial distribution.

Regarding Theorem 4.1, we discuss the factors that affect the lower bound on the probability of selecting the best attribute.

1. Size of the local training data $n$: Since $\delta_{(j)}(n, k)$ decreases with $n$, the lower bound increases as the local training data grow. That is, with sufficiently large data, PV-Tree selects the best attribute with probability approaching 1.

2. Input dimension $d$: Clearly, for fixed local voting size $k$ and global voting size $2k$, the lower bound decreases as $d$ increases. Consider the case where the number of attributes becomes 100 times larger: the number of terms in the summation (from $\sum_{j=k+1}^{d}$ to $\sum_{j=k+1}^{100d}$) is roughly 100 times larger for a relatively small $k$. However, many of the added attributes must be far from attribute $(1)$, so their $l_{(j)}(k)$ is large, which results in a small $\delta_{(j)}(n, k)$. Thus the bound in the theorem is not sensitive to $d$.

3. Number of machines $M$: Assume the total training data size $N$ is fixed, so the local data size is $n = N/M$. On the one hand, as $M$ increases, $n$ decreases, and the lower bound decreases due to the larger $\delta_{(j)}(n, k)$. On the other hand, because the function $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ approaches 1 as $M$ increases whenever $p > 0.5$ [23], the lower bound increases; a quick numerical check of this majority-voting factor is given below.
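(The per-machine success probabilities p below are assumed for illustration, not derived from the theorem.)

from math import comb

def majority_prob(M, p):
    # P(more than half of M machines succeed) when each succeeds with prob p.
    return sum(comb(M, m) * p ** m * (1 - p) ** (M - m)
               for m in range(M // 2 + 1, M + 1))

for p in (0.6, 0.7, 0.9):
    print(p, [round(majority_prob(M, p), 4) for M in (4, 8, 16, 32, 64)])
# For any p > 0.5 the probability climbs toward 1 as M grows; in PV-Tree,
# however, a larger M also shrinks each machine's data and pushes p down.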
In other words, the number of machines $M$ has a dual effect on the lower bound: with more machines, the local data size becomes smaller, which reduces the accuracy of local voting; however, more machines also provide more copies of the local votes, which increases the reliability of global voting. Therefore, in terms of accuracy, there should be an optimal number of machines for a fixed-size training data set. (Note that using more machines also reduces local computing time, so the optimal number of machines may be larger in terms of speed-up.)

4. Local/global voting sizes $k$/$2k$: The voting sizes influence $l_{(j)}(k)$ and the number of terms in the summation in the lower bound. As $k$ increases, $l_{(j)}(k)$ increases and the number of terms in the summation decreases, so the lower bound increases. However, increasing $k$ brings more communication and computation time, so a moderate $k$ is preferable. For some distributions, especially distributions over high-dimensional spaces, $l_{(j)}(k)$ is less sensitive to $k$, and one can then choose a relatively small $k$ to save communication time.

As a comparison, we also prove a theorem for the data-parallel algorithm based on quantized histograms (please refer to the supplementary material for its proof). The theorem tells us that the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large; as a result, the corresponding algorithm can fail to find the best attribute. (The theorem for regression holds in the same way, with IG replaced by VG.) This could be a critical weakness of this algorithm in the big data scenario.

Theorem 4.2 Denote by $P^b$ the quantized histogram with $b$ bins of the underlying distribution $P$, by $P_n^b$ that of the empirical distribution $P_n$, and by $IG_j^b$ and $IG_{n,j}^b$ the information gain of $X_j$ calculated under $P^b$ and $P_n^b$, respectively, and let $f_j(b) \triangleq |IG_j - IG_j^b|$. Then, for $\epsilon \le \min_{j=1,\dots,d} f_j(b)$, with probability at least $\delta_j(n, f_j(b) - \epsilon)$, we have $|IG_{n,j}^b - IG_j| > \epsilon$.

5 Experiments

In this section, we report experimental comparisons between PV-Tree and baseline algorithms. We used two data sets, one for learning to rank (LTR) and the other for ad click prediction (CTR); see Table 1 for details. (We use private data for the LTR experiments and the data of KDD Cup 2012 track 2 for the CTR experiments.) For LTR, we extracted about 1200 numerical attributes per data sample and used NDCG [5] as the evaluation measure. For CTR, we extracted about 800 numerical attributes [9] and used AUC as the evaluation measure.

Table 1: Datasets
Task  #Train  #Test  #Attribute  Source
LTR   11M     1M     1200        Private
CTR   235M    31M    800         KDD Cup

Table 2: Convergence time (seconds)
Task  Sequential  Data-Parallel  Attribute-Parallel  PV-Tree
LTR   28690       32260          14660               5825
CTR   154112      9209           26928               5349

According to recent industrial practice, a single decision tree is often not strong enough to learn an effective model for complicated tasks such as ranking and click prediction, so decision-tree-based boosting algorithms (e.g., GBDT) are usually used instead. In this paper, we also use GBDT as the platform to examine the efficiency and effectiveness of decision tree parallelization.
That is, we used PV-Tree or one of the baseline algorithms to parallelize the decision tree construction in each iteration of GBDT, and compared their performance. Our experimental environment is a cluster of servers (each with 12 CPU cores and 32 GB RAM) inter-connected with 1 Gbps Ethernet. For the experiments on LTR, we used 8 machines for parallel training; for the experiments on CTR, we used 32 machines, since the dataset is much larger.

5.1 Comparison with Other Parallel Decision Trees

For comparison with PV-Tree, we implemented an attribute-parallel algorithm, in which a binary vector indicating the split information is exchanged across machines. In addition, we implemented a data-parallel algorithm following [2, 21], which can communicate either full-grained or quantized histograms. All the parallel algorithms are compared together with the sequential (single-machine) version. The experimental results are shown in Figures 1a and 1b, from which we make the following observations.

For LTR, since the number of data samples is relatively small, communicating the per-sample split information does not take much time, so the attribute-parallel algorithm appears efficient. Since most attributes in this dataset take numerical values, the full-grained histograms have many bins; the data-parallel algorithm that communicates full-grained histograms is therefore quite slow, even slower than the sequential algorithm. When the number of histogram bins is reduced to 10%, the data-parallel algorithm becomes much more efficient, but its convergence point is not good (consistent with our theory: the bias in quantized histograms leads to an accuracy drop).

For CTR, the attribute-parallel algorithm becomes very slow, since the number of data samples is very large. In contrast, many attributes in CTR take binary or discrete values, so the full-grained histograms have a limited number of bins. As a result, the data-parallel algorithm with full-grained histograms is faster than the sequential algorithm. The data-parallel algorithm with quantized histograms is even faster, but its convergence point is once again not very good. PV-Tree reaches the best point achieved by the sequential algorithm within the shortest time on both the LTR and CTR tasks.

For a more quantitative comparison of efficiency, Table 2 lists the time each algorithm (on 8 machines for LTR and 32 machines for CTR) needs to reach the convergent accuracy of the sequential algorithm. For LTR, PV-Tree took 5825 seconds, while the data-parallel algorithm (with full-grained histograms; the variant with 10% bins could not reach the accuracy of the sequential algorithm and is therefore omitted from the table) and the attribute-parallel algorithm took 32260 and 14660 seconds, respectively. Compared with the sequential algorithm (which took 28690 seconds to converge), PV-Tree achieves a 4.9x speed-up on 8 machines. For CTR, PV-Tree took 5349 seconds, while the data-parallel and attribute-parallel algorithms took 9209 and 26928 seconds, respectively. Compared with the sequential algorithm (which took 154112 seconds to converge), PV-Tree achieves a 28.8x speed-up on 32 machines.

We also conducted independent experiments to obtain a clear comparison of the communication costs of the different parallel algorithms under a typical big-data workload; the results are listed in Table 3.
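Before turning to those measurements, a rough back-of-envelope model of the per-split communication volume illustrates why the three algorithms scale so differently (all sizes and byte widths below are our assumptions for illustration, not the measured values of Table 3):

def attribute_parallel_bytes(n_samples):
    # One bit per sample for the binary split-indicator vector.
    return n_samples / 8

def data_parallel_bytes(n_attributes, bins, entry_bytes=8):
    # Histograms of every attribute are exchanged.
    return n_attributes * bins * entry_bytes

def pv_tree_bytes(k, n_machines, bins, entry_bytes=8, index_bytes=4):
    # k attribute indices per machine, plus histograms of the 2k voted attributes.
    return k * n_machines * index_bytes + 2 * k * bins * entry_bytes

# A CTR-like setting: 235M samples, 800 attributes, 256 bins, 32 machines, k = 20.
print(f"attribute-parallel: {attribute_parallel_bytes(235e6) / 1e6:7.1f} MB")
print(f"data-parallel:      {data_parallel_bytes(800, 256) / 1e6:7.2f} MB")
print(f"PV-Tree:            {pv_tree_bytes(20, 32, 256) / 1e6:7.3f} MB")
# Only PV-Tree's volume is independent of both the sample and attribute counts.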
We find that the cost of the attribute-parallel algorithm is proportional to the size of the training data $N$, and the cost of the data-parallel algorithm is proportional to the number of attributes $d$. In contrast, the cost of PV-Tree is constant.

Table 3: Comparison of communication cost when training one tree with depth 6.

Table 4: Convergence time and accuracy w.r.t. the global voting parameter k for PV-Tree.

5.2 Trade-off between Speed-up and Accuracy in PV-Tree

The previous subsection showed that PV-Tree is more efficient than the other algorithms. Here we take a closer look at how its key parameters affect the trade-off between efficiency and accuracy. According to Theorem 4.1, two parameters are critical to PV-Tree: the number of machines M and the voting size k.

5.2.1 On Different Numbers of Machines

When more machines join the distributed training process, the data throughput grows, but the amortized training data on each machine shrinks. When the data size on each machine becomes too small, our theorem no longer guarantees the accuracy of the voting procedure, so it is important to set the number of machines appropriately. To gain more insight, we conducted additional experiments, whose results are shown in Figures 2a and 2b. For LTR, when the number of machines grows from 2 to 8, the training process is significantly accelerated; however, with 16 machines, the convergence speed is even lower than with 8 machines. Similar results are observed for CTR. These observations are consistent with our theoretical findings. Note that PV-Tree is designed for the big data scenario: only when the entire training data set is huge (so that the distribution of the training data on each local machine is similar to that of the entire training data) can the full power of PV-Tree be realized. Otherwise, one should have a reasonable expectation of the speed-up and choose a smaller number of machines to parallelize the training.

5.2.2 On Different Sizes of Voting

PV-Tree has a parameter k, which controls the number of top attributes selected during local and global voting. Intuitively, a larger k increases the probability of finding the globally best attribute among the local candidates, but it also means a higher communication cost. According to our theorem, the choice of k should depend on the size of the local training data: if the local training data are large, the locally best attributes will be similar to the globally best one, and one can safely choose a small k; otherwise, a relatively larger k should be chosen. The corresponding experimental results are shown in Table 4, where M refers to the number of machines. We make two observations. First, in both cases, a large k is not needed to achieve good accuracy: the accuracy is already very good when k ≤ 40. Second, with a small number of machines, k can be set even smaller, e.g., k = 5. This is because, for fixed-size training data, fewer machines means more training data per machine, so a smaller k already guarantees the approximation accuracy.
5.3 Comparison with Other Parallel GBDT Algorithms

While the previous subsections focused on parallelizing the decision tree construction inside GBDT, one can also parallelize GBDT in other ways. For example, in [22, 20], each machine learns its own decision tree separately, without communication; these decision trees are then aggregated by winner-takes-all or output ensemble. Although such methods are not the focus of our paper, it is still interesting to compare with them. For this purpose, we implemented the algorithms proposed in [22] and [20], which we denote Svore and Yu, respectively, for ease of reference. Their performance is shown in Figures 3a and 3b. PV-Tree outperforms both Svore and Yu: although the two algorithms converge at a speed similar to that of PV-Tree, they reach much worse convergence points. To our understanding, these two algorithms lack a solid theoretical guarantee: since the candidate decision trees are trained separately and independently, without the necessary information exchange, they may have non-negligible bias, which leads to an accuracy drop in the end. In contrast, we can clearly characterize the theoretical properties of PV-Tree and use it in an appropriate setting so as to avoid an observable accuracy drop.

To sum up, the experiments show that, with appropriately set parameters, PV-Tree achieves a very good trade-off between efficiency and accuracy, and outperforms both the other parallel decision tree algorithms and the methods designed specifically for GBDT parallelization.

6 Conclusions

In this paper, we proposed a novel parallel algorithm for decision trees, called Parallel Voting Decision Tree (PV-Tree), which achieves high accuracy at a very low communication cost. Experiments on both ranking and ad click prediction show the advantage of PV-Tree over a number of baseline algorithms. As future work, we plan to generalize the idea of PV-Tree to parallelize other machine learning algorithms. We will also open-source the PV-Tree algorithm to benefit more researchers and practitioners.
1. What is the main contribution of the paper regarding decision trees? 2. What are the strengths of the proposed data-parallel algorithm, particularly in terms of communication efficiency? 3. Do you have any concerns about the theoretical analysis, specifically regarding the success probability? 4. How does the reviewer assess the effectiveness of the voting scheme and the empirical results? 5. Are there any minor issues or suggestions for improvement in the paper?
Review
Review The authors propose a data-parallel algorithm for learning decision trees which greatly improves communication efficiency compared to previously proposed algorithms. Instead of computing the global histogram for all attributes, each local worker votes for k attributes, and the global histogram is computed only for the top-2k attributes that receive the most votes. The authors provide a theoretical analysis that characterizes the probability that an optimal attribute is chosen. Empirical comparisons against previously proposed attribute-parallel and data-parallel algorithms are provided, and the results are encouraging. The proposed voting scheme is an intuitive and appealing way of reducing the communication cost: first figure out which attributes are important using only crude information (votes instead of histograms), and then concentrate the communication cost on the attractive set of attributes. The guarantee in Theorem 4.1 seems to be quite weak, as the success probability would decrease exponentially as a function of d. However, the empirical results alleviate this concern, as near-optimal performance seems to be achieved even when there are 128 machines and only one attribute is elected (k=1). Minor comments: 1. Line 38 in the appendix is a bit hand-wavy; it would be nice if the proof of equation (7) could be included in the appendix. 2. 'attribute (1)' and 'h_{(j)}(k)' in line 156 do not seem to be defined. I guess the authors mean the most informative attribute by '(1)'.
NIPS
1. What is the main contribution of the paper regarding parallel decision tree training? 2. What are the strengths and weaknesses of the proposed algorithm compared to existing methods? 3. How does the reviewer assess the theoretical analysis and experimental results presented in the paper? 4. What are the concerns raised by the reviewer regarding the performance and practicality of the proposed method? 5. Are there any suggestions or recommendations for future work related to the paper's topic?
Review
Review This paper proposes a new parallel algorithm for training decision trees (for classification or regression). The main idea is to distribute the data across different nodes; at each tree node, the algorithm identifies the top-k splits on each data subset, determines the global top-2k splits by voting over the individual top-k lists, and then identifies the best one among them by combining histograms from the individual workers. A theoretical analysis is provided that gives a lower bound on the probability of finding the globally optimal split as a function of the main method parameters. Several experiments are conducted on two large-scale datasets, where PV-Tree is shown to outperform other parallel decision tree implementations. Given the popularity of decision trees, proposing an efficient parallel implementation of this method is of course very relevant. The proposed parallelization is original with respect to existing methods and should indeed lead to less communication than other methods. The theoretical analysis is sound, and I like the discussion of the impact of the main problem and method parameters that follows from the lower bound provided in Theorem 4.1. Experiments are conducted on two very large problems where, within the limits of the tested settings (see below), PV-Tree is clearly shown to outperform other parallel implementations, in terms of both computing time to reach a given accuracy level and communication cost. I nevertheless have two major concerns with the proposed parallelization. First, given the way it works, the performance of the algorithm is clearly affected by the number of workers, and this number thus needs to be tuned for optimal performance. This is a serious drawback, as it means that the algorithm cannot benefit from a large number of workers. From my understanding, this problem is not shared by the other parallelizations: using more machines with these methods might not always significantly improve computing time, but at least it will not deteriorate predictive performance. While the experiments show the impact of the number of workers on PV-Tree, the impact on the other methods should also be studied in comparison. Experiments should show whether PV-Tree is still the best method on LTR (resp. CTR) when the number of machines grows (far) beyond 8 (resp. 32) machines. Second, I am also puzzled by this statement from the authors in Section 5.2.1 (similar statements are given in Section 4): "Only when the entire training data are huge (and thus distribution of the training data on each local machine can be similar to that of the entire training data), the full power of PV-Tree can be realized." If this is indeed true, then I wonder why it is actually necessary to distribute the data over different machines, and why just selecting the best split over a single random subset of N/M examples at each node is not enough. (Or, similarly, is the tree built using PV-Tree really better than a tree grown using only the subset of data available at one of the workers?) Indeed, if the distribution of the training data on each local machine is assumed, for the method to work, to be similar to that of the entire data, then the local top-k splits are expected to be the same as the global top-k splits, and there is no need to vote over several workers. I think this should be checked in the experiments: do we really need to vote over several workers, or is one worker not actually enough, in the conditions (in terms of the number of workers) where PV-Tree works best?
The paper is very pleasant to read. Several important details are, however, missing from the textual description of the algorithms and from the pseudocode. Some points that need to be clarified are as follows:
- How are PV-Tree and the competitors implemented (language, framework, etc.)?
- The pseudocode does not clearly indicate how and when the data is distributed. At each node, or only once globally? What about the other methods? What is included in their communication cost?
- I do not find the pseudocode very clear, probably because of the use of an object-oriented formalism. Most functions/methods are not defined, and it is not always easy to guess what they do from their names. I think this could be improved.
- Which information is contained in the histograms, in particular in the case of continuous attributes? How is this information merged between workers to yield the global histogram, and how is the split threshold determined from this histogram? I do not have a clear idea of what kind of information is actually communicated between the machines.
- Is the global split found by the call 'globalHistograms.FindBestSplit()' at the end of Algorithm 3 exactly the same as the split that would have been obtained with all available data (assuming the optimal variable is among the 2k selected ones)?
NIPS
Title A Communication-Efficient Parallel Algorithm for Decision Tree Abstract Decision tree (and its extensions such as Gradient Boosting Decision Trees and Random Forest) is a widely used machine learning algorithm, due to its practical effectiveness and model interpretability. With the emergence of big data, there is an increasing need to parallelize the training process of decision tree. However, most existing attempts along this line suffer from high communication costs. In this paper, we propose a new algorithm, called Parallel Voting Decision Tree (PV-Tree), to tackle this challenge. After partitioning the training data onto a number of (e.g., M ) machines, this algorithm performs both local voting and global voting in each iteration. For local voting, the top-k attributes are selected from each machine according to its local data. Then, globally top-2k attributes are determined by a majority voting among these local candidates. Finally, the full-grained histograms of the globally top-2k attributes are collected from local machines in order to identify the best (most informative) attribute and its split point. PV-Tree can achieve a very low communication cost (independent of the total number of attributes) and thus can scale out very well. Furthermore, theoretical analysis shows that this algorithm can learn a near optimal decision tree, since it can find the best attribute with a large probability. Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the trade-off between accuracy and efficiency. 1 Introduction Decision tree [16] is a widely used machine learning algorithm, since it is practically effective and the rules it learns are simple and interpretable. Based on decision tree, people have developed other algorithms such as Random Forest (RF) [3] and Gradient Boosting Decision Trees (GBDT) [7], which have demonstrated very promising performances in various learning tasks [5]. In recent years, with the emergence of very big training data (which cannot be held in one single machine), there has been an increasing need of parallelizing the training process of decision tree. To this end, there have been two major categories of attempts: 2. ∗Denotes equal contribution. This work was done when the first author was visiting Microsoft Research Asia. 2There is another category of works that parallelize the tasks of sub-tree training once a node is split [15], which require the training data to be moved from machine to machine for many times and are thus inefficient. Moreover, there are also some other works accelerating decision tree construction by using pre-sorting [13] [19] [11] and binning [17] [8] [10], or employing a shared-memory-processors approach [12] [1]. However, they are out of our scope. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Attribute-parallel: Training data are vertically partitioned according to the attributes and allocated to different machines, and then in each iteration, the machines work on non-overlapping sets of attributes in parallel in order to find the best attribute and its split point (suppose this best attribute locates at the i-th machine) [19] [11] [20]. This process is communicationally very efficient. However, after that, the re-partition of the data on other machines than the i-th machine will induce very high communication costs (proportional to the number of data samples). 
This is because those machines have no information about the best attribute at all, and in order to fulfill the re-partitioning, they must retrieve the partition information of every data sample from the i-th machine. Furthermore, since each worker still holds the full sample set, the partition process is not parallelized, which slows down the algorithm.

Data-parallel: Training data are horizontally partitioned according to the samples and allocated to different machines. The machines then communicate with each other the local histograms of all attributes (computed on their own data samples) in order to obtain the global attribute distributions and identify the best attribute and split point [12] [14]. The corresponding communication cost is clearly very high: it is proportional to the total number of attributes and to the histogram size. To reduce this cost, [2] and [21] [10] proposed to exchange quantized histograms between machines when estimating the global attribute distributions. However, this does not really solve the problem: the communication cost is still proportional to the total number of attributes, not to mention that the quantization may hurt accuracy.

In this paper, we propose a new data-parallel algorithm for decision tree, called Parallel Voting Decision Tree (PV-Tree), which achieves a much better balance between communication efficiency and accuracy. The key difference between conventional data-parallel decision tree algorithms and PV-Tree is that the former only trust the globally aggregated histogram information, while the latter leverages the local statistical information contained in each machine through a two-stage voting process, and can thus significantly reduce the communication cost. Specifically, PV-Tree performs the following steps in each iteration. 1) Local voting: on each machine, we select the top-k attributes based on its local data according to the informativeness scores (e.g., risk reduction for regression and information gain for classification). 2) Global voting: we determine the globally top-2k attributes by a majority voting among the local candidates selected in the previous step; that is, we rank the attributes according to the number of local machines that select them, and choose the top 2k attributes from the ranked list. 3) Best attribute identification: we collect the full-grained histograms of the globally top-2k attributes from the local machines in order to compute their global distributions, and then identify the best attribute and its split point according to the informativeness scores calculated from the global distributions.

It is easy to see that the PV-Tree algorithm has a very low communication cost. It does not need to communicate information about all attributes; instead, it only communicates the indices of the locally top-k attributes per machine and the histograms of the globally top-2k attributes. In other words, its communication cost is independent of the total number of attributes. This makes PV-Tree highly scalable. On the other hand, it can be proven that PV-Tree finds the best attribute with large probability, and this probability approaches 1, regardless of k, as the training data become sufficiently large. In contrast, a data-parallel algorithm based on quantized histograms can fail to find the best attribute, since the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large.
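To make the three voting steps concrete, the following is a minimal, self-contained Python sketch of one FindBestSplit round for classification. This is our own illustration, not the authors' implementation: the attribute ids, the Counter-based per-bin label counts, and all helper names are assumptions we introduce here.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy of a dict/Counter of class-label counts."""
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values() if c) if n else 0.0

def best_gain(hist):
    """Best information gain over all split points of one attribute.
    `hist` is a list of per-bin Counters of class labels, so this is the
    empirical version of the information gain defined below."""
    total = Counter()
    for b in hist:
        total.update(b)
    n, base = sum(total.values()), entropy(total)
    best, left = 0.0, Counter()
    for b in hist[:-1]:  # sweep candidate thresholds left to right
        left.update(b)
        n_left = sum(left.values())
        cond = (n_left / n) * entropy(left) + ((n - n_left) / n) * entropy(total - left)
        best = max(best, base - cond)
    return best

def merge(hists):
    """Element-wise sum of per-bin label counts collected from all machines."""
    return [sum((h[i] for h in hists), Counter()) for i in range(len(hists[0]))]

def pv_tree_find_best_split(machines, k):
    """One PV-Tree FindBestSplit round; `machines` is a list (one entry
    per machine) of dicts mapping attribute id -> local histogram."""
    # 1) Local voting: each machine proposes its top-k attributes by local gain.
    local_tops = [sorted(h, key=lambda a: best_gain(h[a]), reverse=True)[:k]
                  for h in machines]
    # 2) Global voting: keep the 2k attributes with the most local votes
    #    (only the k*M attribute indices have been communicated so far).
    votes = Counter(a for top in local_tops for a in top)
    global_top = [a for a, _ in votes.most_common(2 * k)]
    # 3) Best attribute identification: gather full-grained histograms of
    #    the 2k survivors only, merge them, and pick the best attribute.
    merged = {a: merge([m[a] for m in machines]) for a in global_top}
    return max(merged, key=lambda a: best_gain(merged[a]))
```

Returning only the arg-max attribute keeps the sketch short; the actual FindBestSplit described in the paper also returns the winning threshold.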
We have conducted experiments on real-world datasets to evaluate the performance of PV-Tree. The experimental results show that PV-Tree consistently achieves higher accuracy and training speed than all the baselines we implemented. We further conducted experiments to evaluate the performance of PV-Tree in different settings (e.g., with different numbers of machines and different values of k). The experimental results are in accordance with our theoretical analysis.

2 Decision Tree

Suppose the training data set $D_n = \{(x_{i,j}, y_i);\, i = 1, \dots, n,\, j = 1, \dots, d\}$ is independently sampled from $\prod_{j=1}^{d} \mathcal{X}_j \times \mathcal{Y}$ according to $\big(\prod_{j=1}^{d} P_{X_j}\big) P_{Y|X}$. The goal is to learn a regression or classification model $f \in \mathcal{F}: \prod_{j=1}^{d} \mathcal{X}_j \to \mathcal{Y}$ by minimizing a loss function on the training data, which hopefully achieves accurate prediction on unseen test data.

Decision tree [16, 18] is a widely used model for both regression [4] and classification [18]. A typical decision tree algorithm is described in Algorithm 1. As can be seen, the tree growth procedure is recursive, and the nodes do not stop growing until they reach the stopping criteria. There are two important functions in the algorithm: FindBestSplit returns the best split point {attribute, threshold} of a node, and Split splits the training data according to the best split point. The details of FindBestSplit are given in Algorithm 2: first, histograms of the attributes are constructed by going over all training data on the current node (for continuous attributes, one usually converts the numerical values to a finite number of bins for ease of computation); then all bins (split points) are traversed from left to right, with leftSum and rightSum accumulating the statistics of the left and right parts of the split point, respectively. When selecting the best split point, an informativeness measure is adopted; the widely used measures are information gain for classification and variance gain for regression.

Algorithm 1 BuildTree
Input: Node N, Dataset D
if StoppingCriteria(D) then
  N.output = Prediction(D)
else
  bestSplit = FindBestSplit(D)
  (DL, DR) = Split(D, N, bestSplit)
  BuildTree(N.leftChild, DL)
  BuildTree(N.rightChild, DR)
end if

Definition 2.1 [6][16] In classification, the information gain (IG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the entropy reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$IG_j(w; O) = H_j - \big(H^l_j(w) + H^r_j(w)\big) = P(w_1 \le X_j \le w_2)\, H(Y \mid w_1 \le X_j \le w_2) - P(w_1 \le X_j < w)\, H(Y \mid w_1 \le X_j < w) - P(w \le X_j \le w_2)\, H(Y \mid w \le X_j \le w_2),$$
where $H(\cdot \mid \cdot)$ denotes the conditional entropy. In regression, the variance gain (VG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the variance reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$VG_j(w; O) = \sigma_j - \big(\sigma^l_j(w) + \sigma^r_j(w)\big) = P(w_1 \le X_j \le w_2)\, \mathrm{Var}[Y \mid w_1 \le X_j \le w_2] - P(w_1 \le X_j < w)\, \mathrm{Var}[Y \mid w_1 \le X_j < w] - P(w \le X_j \le w_2)\, \mathrm{Var}[Y \mid w \le X_j \le w_2],$$
where $\mathrm{Var}[\cdot \mid \cdot]$ denotes the conditional variance.
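For intuition, here is a tiny worked instance of the classification case (the numbers are ours, not from the paper). Take a node with 8 samples, 4 positive and 4 negative, and a split that sends 3 positives and 1 negative left and 1 positive and 3 negatives right. In the usual weighted form of Definition 2.1:
$$\begin{aligned}
H(Y) &= -\tfrac{1}{2}\log_2 \tfrac{1}{2} - \tfrac{1}{2}\log_2 \tfrac{1}{2} = 1 \text{ bit},\\
H(Y \mid \text{left}) = H(Y \mid \text{right}) &= -\tfrac{3}{4}\log_2 \tfrac{3}{4} - \tfrac{1}{4}\log_2 \tfrac{1}{4} \approx 0.811 \text{ bits},\\
IG &= 1 - \big(\tfrac{1}{2}\cdot 0.811 + \tfrac{1}{2}\cdot 0.811\big) \approx 0.189 \text{ bits}.
\end{aligned}$$
FindBestSplit evaluates exactly this quantity (estimated from bin counts) at every candidate threshold and keeps the maximizer.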
3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and achieves a good trade-off between communication efficiency and learning accuracy. PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just as in [2] [21]. However, its design principle is very different. In [2][21], one does not trust the local information about the attributes on each machine, and decides the best attribute and split point based only on the aggregated global histograms of the attributes. In contrast, in PV-Tree, we leverage the meaningful statistical information about the attributes contained in each local machine, and make decisions through a two-stage (local and then global) voting process. In this way, we can significantly reduce the communication cost, since we do not need to communicate the histogram information of all attributes across machines, but only the histograms of those attributes that survive the voting process.

The flow of the PV-Tree algorithm is very similar to that of the standard decision tree, except for the function FindBestSplit. We therefore only give the new implementation of this function, in Algorithm 3, which contains the following three steps:

Local voting: We select the top-k attributes for each machine based on its local data set (according to the informativeness scores, e.g., information gain for classification and variance reduction for regression), and then exchange the indices of the selected attributes among the machines. Note that the communication cost of this step is very low, because only the indices of a small number of (i.e., k × M) attributes need to be communicated.

Global voting: We determine the globally top-2k attributes by a majority voting among all locally selected attributes from the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top-2k attributes from the ranked list. It can be proven that when the local data are big enough to be statistically representative, there is a very high probability that the top-2k attributes obtained by this majority voting contain the globally best attribute. Note that this step does not induce any communication cost.

Best attribute identification: We collect the full-grained histograms of the globally top-2k attributes from the local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. Note that the communication cost of this step is also low, because we only need to communicate the histograms of 2k pre-selected attributes (but not of all attributes). (As indicated by our theoretical analysis and the empirical study in the next sections, a very small k already leads to good performance in PV-Tree.)

As a result, the PV-Tree algorithm scales very well, since its communication cost is independent of both the total number of attributes and the total number of samples in the dataset. In the next section, we provide a theoretical analysis of the accuracy guarantee of the PV-Tree algorithm.

Algorithm 2 FindBestSplit
Input: Dataset D
for all X in D.Attribute do
  ▷ Construct histogram
  H = new Histogram()
  for all x in X do
    H.binAt(x.bin).Put(x.label)
  end for
  ▷ Find best split
  leftSum = new HistogramSum()
  for all bin in H do
    leftSum = leftSum + H.binAt(bin)
    rightSum = H.AllSum - leftSum
    split.gain = CalSplitGain(leftSum, rightSum)
    bestSplit = ChoiceBetterOne(split, bestSplit)
  end for
end for
return bestSplit

Algorithm 3 PV-Tree_FindBestSplit
Input: Dataset D
localHistograms = ConstructHistograms(D)
▷ Local voting
splits = []
for all H in localHistograms do
  splits.Push(H.FindBestSplit())
end for
localTop = splits.TopKByGain(K)
▷ Gather all candidates
allCandidates = AllGather(localTop)
▷ Global voting
globalTop = allCandidates.TopKByMajority(2*K)
▷ Merge global histograms
globalHistograms = Gather(globalTop, localHistograms)
bestSplit = globalHistograms.FindBestSplit()
return bestSplit

4 Theoretical Analysis

In this section, we conduct a theoretical analysis of the proposed PV-Tree algorithm. Specifically, we prove that PV-Tree selects the best (most informative) attribute with large probability, for both classification and regression. To present the theorem, we first introduce some notation (since all the analysis is for one arbitrarily fixed node $O$, we omit $O$ from the notation). In classification, we denote $IG_j = \max_w IG_j(w)$ and rank $\{IG_j;\, j \in [d]\}$ from large to small as $\{IG_{(1)}, \dots, IG_{(d)}\}$. We call attribute $(1)$ the most informative attribute. Then we denote $l_{(j)}(k) = \frac{|IG_{(1)} - IG_{(j)}|}{2}$ for all $j \ge k+1$, which measures the gap between the largest and the $j$-th largest IG. In regression, $l_{(j)}(k)$ is defined in the same way, with IG replaced by VG.

Theorem 4.1 Suppose we have M local machines, each holding n training data. At an arbitrary tree node, PV-Tree with local voting size k and global majority voting size 2k selects the most informative attribute with probability at least
$$\sum_{m=\lfloor M/2 \rfloor + 1}^{M} \binom{M}{m} \Big(1 - \sum_{j=k+1}^{d} \delta_{(j)}(n,k)\Big)^{m} \Big(\sum_{j=k+1}^{d} \delta_{(j)}(n,k)\Big)^{M-m},$$
where $\delta_{(j)}(n,k) = \alpha_{(j)}(n) + 4 e^{-c_{(j)} n (l_{(j)}(k))^2}$, with $\lim_{n \to \infty} \alpha_{(j)}(n) = 0$ and $c_{(j)}$ a constant.

Due to space restrictions, we briefly illustrate the proof idea here and leave the detailed proof to the supplementary materials. Our proof contains two parts. (1) For local voting, we find a sufficient condition that guarantees a similar ranking of attributes when ordered by information gain computed on the local data and on the full data; we then derive a lower bound on the probability that this sufficient condition holds, using concentration inequalities. (2) For global voting, we select the top-2k attributes. It is easy to prove that the most informative attribute is selected as long as no fewer than $\lfloor M/2 \rfloor + 1$ of the machines select it. (In fact, the global voting size can be $\beta k$ with $\beta > 1$; the sufficient condition then becomes that no fewer than $\lfloor M/\beta \rfloor + 1$ of the machines select the most informative attribute.) Therefore, we can calculate the probability in the theorem using the binomial distribution.

Regarding Theorem 4.1, we discuss the factors that impact the lower bound on the probability of selecting the best attribute.

1. Size of local training data n: Since $\delta_{(j)}(n,k)$ decreases with n, the lower bound increases with more local training data. That means that with sufficiently large data, PV-Tree selects the best attribute with probability approaching 1.

2. Input dimension d: Clearly, for fixed local voting size k and global voting size 2k, the lower bound decreases as d increases. Consider the case where the number of attributes becomes 100 times larger. The number of terms in the summation (from $\sum_{j=k+1}^{d}$ to $\sum_{j=k+1}^{100d}$) then grows roughly 100-fold for a relatively small k. However, many of these attributes are far from attribute $(1)$, so $l_{(j)}(k)$ is large, which makes $\delta_{(j)}(n,k)$ small. Thus the bound in the theorem is not very sensitive to d.

3. Number of machines M: Assume the total training data size N is fixed and the local data size is $n = N/M$. On the one hand, as M increases, n decreases, and the lower bound therefore decreases due to the larger $\delta_{(j)}(n,k)$. On the other hand, since $\sum_{m=\lfloor M/2 \rfloor+1}^{M} \binom{M}{m} p^m (1-p)^{M-m}$ approaches 1 as M increases whenever $p > 0.5$ [23], the lower bound increases. In other words, the number of machines M has a dual effect on the lower bound: with more machines, the local data size becomes smaller, which reduces the accuracy of local voting; however, it also yields more copies of the local votes and thus increases the reliability of global voting. Therefore, in terms of accuracy, there should be an optimal number of machines for a fixed-size training data set. (Note that using more machines also reduces local computing time, so in terms of speed-up the optimal number of machines may be larger.)
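The amplification effect in the global vote is easy to evaluate numerically. Below is a small sketch of ours, where p stands for the per-machine success probability $1 - \sum_{j=k+1}^{d} \delta_{(j)}(n,k)$; keep in mind that in the theorem p itself shrinks as M grows, because $n = N/M$.

```python
from math import comb, floor

def majority_amplification(M, p):
    """P(more than half of M machines pick the best attribute) when each
    succeeds independently with probability p, i.e., the outer sum in
    Theorem 4.1."""
    return sum(comb(M, m) * p ** m * (1 - p) ** (M - m)
               for m in range(floor(M / 2) + 1, M + 1))

for M in (4, 8, 16, 64):
    print(M, round(majority_amplification(M, 0.7), 4))  # grows toward 1 for p > 0.5
```

For fixed p > 0.5 this term indeed approaches 1 as M grows; the dual effect arises because p decreases with M through the shrinking local sample size.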
4. Local/global voting size k/2k: The voting sizes influence $l_{(j)}(k)$ and the number of terms in the summation in the lower bound. As k increases, $l_{(j)}(k)$ increases and the number of terms in the summation decreases, so the lower bound increases. But increasing k brings more communication and computation time; a moderate k is therefore preferable. For some distributions, especially distributions over high-dimensional spaces, $l_{(j)}(k)$ is not very sensitive to k, and we can then choose a relatively small k to save communication time.

As a comparison, we also prove a theorem for the data-parallel algorithm based on quantized histograms (please refer to the supplementary material for its proof). The theorem basically tells us that the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large; as a result, the corresponding algorithm can fail to find the best attribute. (The theorem for regression holds in the same way, with IG replaced by VG.) This could be the critical weakness of this algorithm in the big-data scenario.

Theorem 4.2 Denote the quantized histogram with b bins of the underlying distribution $P$ by $P^b$, that of the empirical distribution $P_n$ by $P^b_n$, the information gain of $X_j$ calculated under $P^b$ and $P^b_n$ by $IG^b_j$ and $IG^b_{n,j}$, respectively, and let $f_j(b) \triangleq |IG_j - IG^b_j|$. Then, for $\epsilon \le \min_{j=1,\dots,d} f_j(b)$, with probability at least $\delta_j(n, f_j(b) - \epsilon)$, we have $|IG^b_{n,j} - IG_j| > \epsilon$.

5 Experiments

In this section, we report experimental comparisons between PV-Tree and baseline algorithms. We used two data sets, one for learning to rank (LTR) and the other for ad click prediction (CTR); see Table 1 for details. (We use private data in the LTR experiments and the data of KDD Cup 2012 track 2 in the CTR experiments.) For LTR, we extracted about 1200 numerical attributes per data sample and used NDCG [5] as the evaluation measure. For CTR, we extracted about 800 numerical attributes [9] and used AUC as the evaluation measure.

Table 1: Datasets
Task | #Train | #Test | #Attribute | Source
LTR  | 11M    | 1M    | 1200       | Private
CTR  | 235M   | 31M   | 800        | KDD Cup

Table 2: Convergence time (seconds)
Task | Sequential | Data-Parallel | Attribute-Parallel | PV-Tree
LTR  | 28690      | 32260         | 14660              | 5825
CTR  | 154112     | 9209          | 26928              | 5349

According to recent industrial practice, a single decision tree may not be strong enough to learn an effective model for complicated tasks like ranking and click prediction. Therefore, people usually use decision-tree-based boosting algorithms (e.g., GBDT) to perform such tasks. In this paper, we also use GBDT as the platform to examine the efficiency and effectiveness of decision tree parallelization.
That is, we used PV-Tree or one of the baseline algorithms to parallelize the decision tree construction process in each iteration of GBDT, and compared their performance. Our experimental environment is a cluster of servers (each with 12 CPU cores and 32 GB RAM) inter-connected with 1 Gbps Ethernet. For the experiments on LTR we used 8 machines for parallel training, and for the experiments on CTR we used 32 machines, since the dataset is much larger.

5.1 Comparison with Other Parallel Decision Trees

For comparison with PV-Tree, we implemented an attribute-parallel algorithm, in which a binary vector indicating the split information is exchanged across machines. In addition, we implemented a data-parallel algorithm according to [2, 21], which can communicate both full-grained and quantized histograms. All parallel algorithms and the sequential (single-machine) version are compared together. The experimental results can be found in Figures 1a and 1b, from which we make the following observations.

For LTR, since the number of data samples is relatively small, communicating the split information about the samples does not take too much time; as a result, the attribute-parallel algorithm appears efficient. Since most attributes in this dataset take numerical values, the full-grained histograms have quite a lot of bins. Therefore, the data-parallel algorithm that communicates full-grained histograms is quite slow, even slower than the sequential algorithm. When the number of bins in the histogram is reduced to 10%, the data-parallel algorithm becomes much more efficient; however, its convergence point is not good (consistent with our theory: the bias in quantized histograms leads to an accuracy drop).

For CTR, the attribute-parallel algorithm becomes very slow, since the number of data samples is very large. In contrast, many attributes in CTR take binary or discrete values, which gives the full-grained histograms a limited number of bins. As a result, the data-parallel algorithm with full-grained histograms is faster than the sequential algorithm. The data-parallel algorithm with quantized histograms is even faster; however, its convergence point is once again not very good. PV-Tree reaches the best point achieved by the sequential algorithm within the shortest time in both the LTR and CTR tasks.

For a more quantitative comparison of efficiency, Table 2 lists the time each algorithm (8 machines for LTR and 32 machines for CTR) needs to reach the convergent accuracy of the sequential algorithm. For LTR, PV-Tree took 5825 seconds, while the data-parallel algorithm (with full-grained histograms; the variant with 10% bins could not reach the accuracy of the sequential algorithm and is therefore not in the table) and the attribute-parallel algorithm took 32260 and 14660 seconds, respectively. Compared with the sequential algorithm (which took 28690 seconds to converge), PV-Tree achieves a 4.9x speed-up on 8 machines. For CTR, PV-Tree took 5349 seconds, while the data-parallel algorithm (with full-grained histograms) and the attribute-parallel algorithm took 9209 and 26928 seconds, respectively. Compared with the sequential algorithm (which took 154112 seconds to converge), PV-Tree achieves a 28.8x speed-up on 32 machines.

We also conducted independent experiments to obtain a clear comparison of the communication cost of the different parallel algorithms under a typical big-data workload. The result is listed in Table 3: the cost of the attribute-parallel algorithm scales with the size of the training data N, the cost of the data-parallel algorithm scales with the number of attributes d, and the cost of PV-Tree is constant.
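As a rough back-of-envelope illustration of what Table 3 measures, here is a sketch under our own simplifying assumptions (a 1-bit partition vector for the attribute-parallel scheme, 8-byte histogram bins, and illustrative values for the bin count and k; real costs depend on encodings and on the collective-communication implementation):

```python
def per_split_comm_bytes(N, d, n_bins, k, M,
                         bytes_per_bin=8, bytes_per_index=4):
    """Approximate bytes communicated to split one node under each scheme."""
    attribute_parallel = N / 8                       # one partition bit per sample
    data_parallel = M * d * n_bins * bytes_per_bin   # all local histograms
    pv_tree = (M * k * bytes_per_index               # local top-k indices
               + M * 2 * k * n_bins * bytes_per_bin) # histograms of 2k survivors
    return attribute_parallel, data_parallel, pv_tree

# CTR-like scale: N=235M samples, d=800 attributes, M=32 machines;
# n_bins=255 and k=20 are illustrative choices, not values from the paper.
print(per_split_comm_bytes(235e6, 800, 255, 20, 32))
```

Only the PV-Tree term is independent of both N and d, matching the scaling reported in the text.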
Table 3: Comparison of communication cost for training one tree of depth 6.

Table 4: Convergence time and accuracy w.r.t. the voting parameter k for PV-Tree.

5.2 Trade-off between Speed-up and Accuracy in PV-Tree

In the previous subsection, we showed that PV-Tree is more efficient than the other algorithms. Here we take a deeper look at how its key parameters affect the trade-off between efficiency and accuracy. According to Theorem 4.1, two parameters are critical to PV-Tree: the number of machines M and the voting size k.

5.2.1 On Different Numbers of Machines

When more machines join the distributed training process, the data throughput grows larger, but the amount of training data per machine gets smaller. When the data size on each machine becomes too small, there is no guarantee on the accuracy of the voting procedure, according to our theorem. It is therefore important to set the number of machines appropriately. To gain more insight, we conducted additional experiments, whose results are shown in Figures 2a and 2b. For LTR, when the number of machines grows from 2 to 8, the training process is significantly accelerated; however, with 16 machines the convergence speed is even lower than with 8. Similar results are observed for CTR. These observations are consistent with our theoretical findings. Note that PV-Tree is designed for the big-data scenario: only when the entire training data set is huge (so that the distribution of the training data on each local machine can be similar to that of the entire training data) can the full power of PV-Tree be realized. Otherwise, one should have a reasonable expectation of the speed-up and choose a smaller number of machines to parallelize the training.

5.2.2 On Different Sizes of Voting

In PV-Tree, the parameter k controls the number of top attributes selected during local and global voting. Intuitively, a larger k increases the probability of finding the globally best attribute among the local candidates, but it also means a higher communication cost. According to our theorem, the choice of k should depend on the size of the local training data. If the local training data are large, the locally best attributes will be similar to the globally best one, and one can safely choose a small k; otherwise, a relatively larger k should be chosen. To gain more insight, we conducted experiments whose results are shown in Table 4, where M refers to the number of machines. We make two observations. First, in both cases one does not need a large k to achieve good accuracy: for k ≤ 40, the accuracy is already very good. Second, when using a small number of machines, k can be set to an even smaller value, e.g., k = 5. This is because, for a fixed-size training data set, fewer machines means more training data per machine, so a smaller k already guarantees the approximation accuracy.
5.3 Comparison with Other Parallel GBDT Algorithms

While the previous subsections focused on parallelizing the decision tree construction process inside GBDT, one could also parallelize GBDT in other ways. For example, in [22, 20], each machine learns its own decision tree separately, without communication; these decision trees are then aggregated by means of winner-takes-all or output ensembling. Although these works are not the focus of our paper, it is still interesting to compare with them. For this purpose, we implemented the algorithms proposed in [22] and [20], which we denote Svore and Yu, respectively, for ease of reference. Their performance is shown in Figures 3a and 3b. PV-Tree outperforms both Svore and Yu: although the two algorithms converge at a similar speed to PV-Tree, they have much worse convergence points. To our understanding, these two algorithms lack a solid theoretical guarantee: since the candidate decision trees are trained separately and independently, without the necessary information exchange, they may have non-negligible bias, which leads to an accuracy drop in the end. In contrast, we can clearly characterize the theoretical properties of PV-Tree and use it in an appropriate setting so as to avoid an observable accuracy drop.

To sum up the experiments, with appropriately set parameters PV-Tree achieves a very good trade-off between efficiency and accuracy, and it outperforms both the other parallel decision tree algorithms and the methods designed specifically for GBDT parallelization.

6 Conclusions

In this paper, we proposed a novel parallel algorithm for decision tree, called Parallel Voting Decision Tree (PV-Tree), which achieves high accuracy at a very low communication cost. Experiments on both ranking and ad click prediction indicate that PV-Tree has an advantage over a number of baseline algorithms. As future work, we plan to generalize the idea of PV-Tree to parallelize other machine learning algorithms. Furthermore, we will open-source the PV-Tree algorithm to benefit more researchers and practitioners.
1. What is the focus of the paper regarding decision trees?
2. What are the strengths of the proposed algorithm compared to prior works?
3. What are the weaknesses of the paper, particularly regarding its claims and experimental results?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns or suggestions regarding the presentation of the algorithm and its analysis?
Review
Review
In this paper, a new data-parallel algorithm is proposed for the parallel training of decision trees. This algorithm is claimed to achieve a better trade-off between accuracy and efficiency than existing parallel decision tree algorithms. Specifically, both local voting and global voting are performed before the most informative attribute is identified from the collection of local machines. The proposed algorithm sacrifices some computational efficiency on the local machines for a reduction of the overall communication cost.

The presentation is clear, though the references seem not quite up-to-date (no citation published later than 2012). The experiments and results support the claim that PV-Tree outperforms existing parallel decision tree algorithms, if not by a significant margin. Specifically, Table 2 and Table 3 illustrate the advantage of PV-Tree over the other algorithms in terms of efficiency and communication cost, respectively. Since the abstract claims that PV-Tree outperforms other algorithms in the trade-off between accuracy and efficiency, a question arises: how is this trade-off evaluated? Is there any weighting involved in the balance between accuracy and efficiency, or are they equally important? This does not seem to have been explained.

The results are analyzed quite well. For instance, the authors point out the limitation that PV-Tree is designed for the big-data scenario, and that a larger number of machines does not necessarily yield faster training than a smaller number when the data split on each machine is small. The tables are good, but one suggestion might be helpful: an explanatory figure may better visualize the proposed algorithm.
NIPS
1. What is the main contribution of the paper, and how does it address the problem of decision tree building in big data? 2. What are the strengths and weaknesses of the proposed PV-Tree algorithm, particularly regarding its parallelization approach and theoretical guarantees? 3. Do you have any concerns or questions regarding the proof of the theorem in the paper, especially regarding the size of local training data, input dimension, and rate of convergence? 4. How does the reviewer assess the clarity and quality of the paper's writing, organization, and presentation? 5. Are there any minor issues or suggestions for improvement in the review, such as typos, terminology, or explanations?
Review
Review This paper introduces a new parallel algorithm for decision tree, called PV-Tree. The main goal is to significantly decrease the communication cost of such decision tree building. To do that, the authors propose to be non-comprehensive in each machine (which run in parallel) when communicating information about the splits. The authors prove a theorem to justify their approach: indeed, with high probability, PV-Tree manages to find the actual best split (i.e. the one which would be found if all data were processed on one single machine). Finally, experiments are presented on two relatively large datasets, and comparisons with other parallel decision trees are made.

The subject of the paper is interesting, especially in the context of big data, where obviously standard implementations of decision trees are not reasonable (or even possible). The paper is well written and clear. My main concerns are about the remarks following Theorem 4.1:

1. size of local training data n: it is said that if n is large enough, PV-Tree will select the best attribute with almost probability one. I agree that the probability increases with n, but I wonder why the question of computation time is not addressed here. My point also applies to the sentence l. 126, where the authors say that PV-Tree can scale independently of the number of samples.

2. input dimension d: the conclusion of the remark is that the bound in the theorem is not sensitive to d. But the rest of the remark suggests the opposite. How do we know that the \delta_{(j)} are small enough to compensate the sum (which gets 100 times more terms)?

3. The same here: it is said that a probability tends to one as M increases, but the rate of convergence is important here.

4. I think there is a mistake here: "as k increases, the lower bound decreases". In my view, the more k increases, the more chance we have to get the actual best attribute in each machine, hence the lower bound should increase.

My last main concern is about the sentence l. 247-249. How do we know that the training data in each local machine are similar to the entire data? I think this point is central in the big data context, and the authors never address the fact that the way the data are distributed over several machines is very important. Indeed, it is not because every machine sees a huge amount of data that all samples have the same distribution.

Minor:
- l. 145: it's easy to PROVE
- l. 230-231: I think #data and #attribute should be avoided.
NIPS
Title A Communication-Efficient Parallel Algorithm for Decision Tree

Abstract Decision tree (and its extensions such as Gradient Boosting Decision Trees and Random Forest) is a widely used machine learning algorithm, due to its practical effectiveness and model interpretability. With the emergence of big data, there is an increasing need to parallelize the training process of decision tree. However, most existing attempts along this line suffer from high communication costs. In this paper, we propose a new algorithm, called Parallel Voting Decision Tree (PV-Tree), to tackle this challenge. After partitioning the training data onto a number of (e.g., M) machines, this algorithm performs both local voting and global voting in each iteration. For local voting, the top-k attributes are selected from each machine according to its local data. Then, the globally top-2k attributes are determined by a majority voting among these local candidates. Finally, the full-grained histograms of the globally top-2k attributes are collected from local machines in order to identify the best (most informative) attribute and its split point. PV-Tree can achieve a very low communication cost (independent of the total number of attributes) and thus can scale out very well. Furthermore, theoretical analysis shows that this algorithm can learn a near-optimal decision tree, since it can find the best attribute with a large probability. Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the trade-off between accuracy and efficiency.

1 Introduction

Decision tree [16] is a widely used machine learning algorithm, since it is practically effective and the rules it learns are simple and interpretable. Based on decision tree, people have developed other algorithms such as Random Forest (RF) [3] and Gradient Boosting Decision Trees (GBDT) [7], which have demonstrated very promising performances in various learning tasks [5]. In recent years, with the emergence of very big training data (which cannot be held on one single machine), there has been an increasing need for parallelizing the training process of decision tree. To this end, there have been two major categories of attempts:2

∗Denotes equal contribution. This work was done when the first author was visiting Microsoft Research Asia.
2There is another category of works that parallelize the tasks of sub-tree training once a node is split [15], which require the training data to be moved from machine to machine many times and are thus inefficient. Moreover, there are also some other works accelerating decision tree construction by using pre-sorting [13] [19] [11] and binning [17] [8] [10], or employing a shared-memory-processors approach [12] [1]. However, they are out of our scope.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Attribute-parallel: Training data are vertically partitioned according to the attributes and allocated to different machines; then, in each iteration, the machines work on non-overlapping sets of attributes in parallel in order to find the best attribute and its split point (suppose this best attribute is located on the i-th machine) [19] [11] [20]. This process is very efficient in terms of communication. However, after that, the re-partition of the data on machines other than the i-th machine will induce very high communication costs (proportional to the number of data samples).
This is because those machines have no information about the best attribute at all, and in order to fulfill the re-partitioning, they must retrieve the partition information of every data sample from the i-th machine. Furthermore, as each worker still holds the full sample set, the partition process is not parallelized, which slows down the algorithm.

Data-parallel: Training data are horizontally partitioned according to the samples and allocated to different machines. Then the machines communicate with each other the local histograms of all attributes (according to their own data samples) in order to obtain the global attribute distributions and identify the best attribute and split point [12] [14]. It is clear that the corresponding communication cost is very high, proportional to the total number of attributes and the histogram size. To reduce the cost, in [2] and [21] [10], it was proposed to exchange quantized histograms between machines when estimating the global attribute distributions. However, this does not really solve the problem: the communication cost is still proportional to the total number of attributes, not to mention that the quantization may hurt the accuracy.

In this paper, we propose a new data-parallel algorithm for decision tree, called Parallel Voting Decision Tree (PV-Tree), which can achieve a much better balance between communication efficiency and accuracy. The key difference between the conventional data-parallel decision tree algorithm and PV-Tree lies in that the former only trusts the globally aggregated histogram information, while the latter leverages the local statistical information contained in each machine through a two-stage voting process, and thus can significantly reduce the communication cost. Specifically, PV-Tree contains the following steps in each iteration. 1) Local voting. On each machine, we select the top-k attributes based on its local data according to the informativeness scores (e.g., risk reduction for regression, and information gain for classification). 2) Global voting. We determine the global top-2k attributes by a majority voting among the local candidates selected in the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top 2k attributes from the ranked list. 3) Best attribute identification. We collect the full-grained histograms of the globally top-2k attributes from local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions (a small code sketch of these three steps follows below).

It is easy to see that the PV-Tree algorithm has a very low communication cost. It does not need to communicate the information of all attributes; instead, it only communicates the indices of the locally top-k attributes per machine and the histograms of the globally top-2k attributes. In other words, its communication cost is independent of the total number of attributes. This makes PV-Tree highly scalable. On the other hand, it can be proven that PV-Tree can find the best attribute with a large probability, and the probability will approach 1 regardless of k when the training data become sufficiently large. In contrast, the data-parallel algorithm based on quantized histograms could fail in finding the best attribute, since the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large.
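To make the two-stage voting procedure concrete, below is a minimal Python sketch of one PV-Tree split-finding round. It is an illustrative reconstruction, not the paper's implementation: the per-machine gain vectors and the per-attribute (count, sum, sum-of-squares) histograms are assumed to be precomputed, and gain_from_histogram is a simple variance-gain placeholder of our own.

import numpy as np

def gain_from_histogram(hist):
    # hist: array of shape (B, 3) holding per-bin (count, sum_y, sum_y^2).
    total = hist.sum(axis=0)
    def sse(c, s, s2):  # within-partition sum of squared errors
        return s2 - s * s / c if c > 0 else 0.0
    best_gain, best_bin = -np.inf, None
    left = np.zeros(3)
    for b in range(len(hist) - 1):  # candidate split after bin b
        left = left + hist[b]
        right = total - left
        gain = sse(*total) - sse(*left) - sse(*right)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_gain, best_bin

def pv_tree_find_best_split(local_gains, local_histograms, k):
    # local_gains[m][j]: informativeness score of attribute j on machine m.
    # local_histograms[m][j]: machine m's (B, 3) histogram for attribute j.
    M, d = len(local_gains), len(local_gains[0])
    # 1) Local voting: each machine proposes its locally top-k attributes.
    local_top = [np.argsort(g)[::-1][:k] for g in local_gains]
    # 2) Global voting: keep the 2k attributes with the most machine votes;
    #    only k*M attribute indices are communicated for this step.
    votes = np.bincount(np.concatenate(local_top), minlength=d)
    global_top = np.argsort(votes)[::-1][:2 * k]
    # 3) Best attribute identification: merge the full-grained histograms of
    #    the 2k survivors only, then pick the best split globally.
    best = (None, -np.inf, None)
    for j in global_top:
        merged = sum(local_histograms[m][j] for m in range(M))
        gain, split_bin = gain_from_histogram(merged)
        if gain > best[1]:
            best = (j, gain, split_bin)
    return best  # (attribute index, gain, split bin)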
We have conducted experiments on real-world datasets to evaluate the performance of PV-Tree. The experimental results show that PV-Tree has consistently higher accuracy and training speed than all the baselines we implemented. We further conducted experiments to evaluate the performance of PV-Tree in different settings (e.g., with different numbers of machines and different values of k). The experimental results are in accordance with our theoretical analysis.

2 Decision Tree

Suppose the training data set $D_n = \{(x_{i,j}, y_i);\ i = 1, \dots, n,\ j = 1, \dots, d\}$ is independently sampled from $\prod_{j=1}^{d} \mathcal{X}_j \times \mathcal{Y}$ according to $\big(\prod_{j=1}^{d} P_{X_j}\big) P_{Y|X}$. The goal is to learn a regression or classification model $f \in \mathcal{F}: \prod_{j=1}^{d} \mathcal{X}_j \to \mathcal{Y}$ by minimizing loss functions on the training data, which hopefully achieves accurate predictions for the unseen test data. Decision tree [16, 18] is a widely used model for both regression [4] and classification [18]. A typical decision tree algorithm is described in Alg 1. As can be seen, the tree growth procedure is recursive, and the nodes do not stop growing until they reach the stopping criteria. There are two important functions in the algorithm: FindBestSplit returns the best split point {attribute, threshold} of a node, and Split splits the training data according to the best split point. The details of FindBestSplit are given in Alg 2: first, histograms of the attributes are constructed (for continuous attributes, one usually converts their numerical values to finite bins for ease of computation) by going over all training data on the current node; then all bins (split points) are traversed from left to right, and leftSum and rightSum are used to accumulate the sums of the left and right parts of the split point, respectively. When selecting the best split point, an informativeness measure is adopted. The widely used informativeness measures are information gain for classification and variance gain for regression.

Algorithm 1 BuildTree
Input: Node N, Dataset D
if StoppingCriteria(D) then
    N.output = Prediction(D)
else
    bestSplit = FindBestSplit(D)
    (DL, DR) = Split(D, N, bestSplit)
    BuildTree(N.leftChild, DL)
    BuildTree(N.rightChild, DR)
end if

Definition 2.1 [6][16] In classification, the information gain (IG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the entropy reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$IG_j(w; O) = H_j - \big(H_j^l(w) + H_j^r(w)\big) = P(w_1 \le X_j \le w_2)\, H(Y \mid w_1 \le X_j \le w_2) - P(w_1 \le X_j < w)\, H(Y \mid w_1 \le X_j < w) - P(w \le X_j \le w_2)\, H(Y \mid w \le X_j \le w_2),$$
where $H(\cdot \mid \cdot)$ denotes the conditional entropy. In regression, the variance gain (VG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the variance reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$VG_j(w; O) = \sigma_j - \big(\sigma_j^l(w) + \sigma_j^r(w)\big) = P(w_1 \le X_j \le w_2)\, \mathrm{Var}[Y \mid w_1 \le X_j \le w_2] - P(w_1 \le X_j < w)\, \mathrm{Var}[Y \mid w_1 \le X_j < w] - P(w \le X_j \le w_2)\, \mathrm{Var}[Y \mid w \le X_j \le w_2],$$
where $\mathrm{Var}[\cdot \mid \cdot]$ denotes the conditional variance.
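To ground Definition 2.1, the following small Python sketch evaluates the empirical information gain of a single threshold on toy data; the binary labels and the threshold are our own assumptions for illustration, not the paper's code.

import numpy as np

def entropy(y):
    # Empirical entropy H(Y) of a label array.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(x, y, w):
    # IG of splitting attribute values x at threshold w: entropy of Y
    # before the split minus the probability-weighted entropy after it.
    left, right = y[x < w], y[x >= w]
    p_left = len(left) / len(y)
    return entropy(y) - p_left * entropy(left) - (1 - p_left) * entropy(right)

# Toy data: the threshold w = 0.5 separates most of the positive labels.
x = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9])
y = np.array([0, 0, 0, 1, 1, 0])
print(round(information_gain(x, y, 0.5), 3))  # 0.459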
3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and can achieve a good trade-off between communication efficiency and learning accuracy. PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just like in [2] [21]. However, its design principle is very different. In [2] [21], one does not trust the local information about the attributes on each machine, and decides the best attribute and split point only based on the aggregated global histograms of the attributes. In contrast, in PV-Tree, we leverage the meaningful statistical information about the attributes contained in each local machine, and make decisions through a two-stage (local and then global) voting process. In this way, we can significantly reduce the communication cost, since we do not need to communicate the histogram information of all the attributes across machines; instead, only the histograms of those attributes that survive the voting process are communicated.

The flow of the PV-Tree algorithm is very similar to that of the standard decision tree, except for the function FindBestSplit. So we only give the new implementation of this function in Alg 3, which contains the following three steps:

Local Voting: We select the top-k attributes for each machine based on its local data set (according to the informativeness scores, e.g., information gain for classification and variance reduction for regression), and then exchange the indices of the selected attributes among machines. Please note that the communication cost for this step is very low, because only the indices of a small number of (i.e., $k \times M$) attributes need to be communicated.

Global Voting: We determine the globally top-2k attributes by a majority voting among all the locally selected attributes from the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top-2k attributes from the ranked list. It can be proven that when the local data are big enough to be statistically representative, there is a very high probability that the top-2k attributes obtained by this majority voting will contain the globally best attribute. Please note that this step does not induce any communication cost.

Best Attribute Identification: We collect the full-grained histograms of the globally top-2k attributes from local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. Please note that the communication cost for this step is also low, because we only need to communicate the histograms of the 2k pre-selected attributes (but not all attributes).3

As a result, the PV-Tree algorithm can scale very well, since its communication cost is independent of both the total number of attributes and the total number of samples in the dataset. In the next section, we will provide a theoretical analysis of the accuracy guarantee of the PV-Tree algorithm.

Algorithm 2 FindBestSplit
Input: DataSet D
for all X in D.Attribute do
    ▷ Construct Histogram
    H = new Histogram()
    for all x in X do
        H.binAt(x.bin).Put(x.label)
    end for
    ▷ Find Best Split
    leftSum = new HistogramSum()
    for all bin in H do
        leftSum = leftSum + H.binAt(bin)
        rightSum = H.AllSum - leftSum
        split.gain = CalSplitGain(leftSum, rightSum)
        bestSplit = ChoiceBetterOne(split, bestSplit)
    end for
end for
return bestSplit

Algorithm 3 PV-Tree_FindBestSplit
Input: Dataset D
localHistograms = ConstructHistograms(D)
▷ Local Voting
splits = []
for all H in localHistograms do
    splits.Push(H.FindBestSplit())
end for
localTop = splits.TopKByGain(K)
▷ Gather all candidates
allCandidates = AllGather(localTop)
▷ Global Voting
globalTop = allCandidates.TopKByMajority(2*K)
▷ Merge global histograms
globalHistograms = Gather(globalTop, localHistograms)
bestSplit = globalHistograms.FindBestSplit()
return bestSplit

4 Theoretical Analysis

In this section, we conduct a theoretical analysis of the proposed PV-Tree algorithm. Specifically, we prove that PV-Tree can select the best (most informative) attribute with a large probability, for both classification and regression. In order to better present the theorem, we first introduce some notation.4 In classification, we denote $IG_j = \max_w IG_j(w)$, and rank $\{IG_j;\ j \in [d]\}$ from large to small as $\{IG_{(1)}, \dots, IG_{(d)}\}$. We call attribute $j_{(1)}$ the most informative attribute. Then, we denote $l_{(j)}(k) = \frac{|IG_{(1)} - IG_{(j)}|}{2},\ \forall j \ge k + 1$, to indicate the gap between the largest and the $j$-th largest IG. In regression, $l_{(j)}(k)$ is defined in the same way, except that IG is replaced with VG.

Theorem 4.1 Suppose we have $M$ local machines, and each one holds $n$ training data. PV-Tree at an arbitrary tree node, with local voting size $k$ and global majority voting size $2k$, will select the most informative attribute with probability at least
$$\sum_{m=[M/2+1]}^{M} C_M^m \left(1 - \sum_{j=k+1}^{d} \delta_{(j)}(n, k)\right)^{m} \left(\sum_{j=k+1}^{d} \delta_{(j)}(n, k)\right)^{M-m},$$
where $\delta_{(j)}(n, k) = \alpha_{(j)}(n) + 4e^{-c_{(j)} n (l_{(j)}(k))^2}$ with $\lim_{n \to \infty} \alpha_{(j)}(n) = 0$ and $c_{(j)}$ a constant.

Due to space restrictions, we briefly illustrate the proof idea here and leave the detailed proof to the supplementary materials. Our proof contains two parts. (1) For local voting, we find a sufficient condition to guarantee a similar rank of the attributes ordered by information gain computed on the local data and on the full data. Then, we derive a lower bound on the probability that the sufficient condition holds by using concentration inequalities. (2) For global voting, we select the top-2k attributes. It is easy to prove that we can select the most informative attribute if no less than $[M/2 + 1]$ of all machines select it.5 Therefore, we can calculate the probability in the theorem using the binomial distribution.

3As indicated by our theoretical analysis and empirical study (see the next sections), a very small k already leads to good performance in the PV-Tree algorithm.
4Since all analyses are for one arbitrarily fixed node O, we omit the notation O here.

Regarding Theorem 4.1, we have the following discussions on the factors that impact the lower bound on the probability of selecting the best attribute.

1. Size of local training data n: Since $\delta_{(j)}(n, k)$ decreases with $n$, with more and more local training data, the lower bound will increase. That means, if we have sufficiently large data, PV-Tree will select the best attribute with probability almost 1.

2. Input dimension d: It is clear that for fixed local voting size $k$ and global voting size $2k$, the lower bound decreases as $d$ increases. Consider the case that the number of attributes becomes 100 times larger. Then the number of terms in the summation (from $\sum_{j=k+1}^{d}$ to $\sum_{j=k+1}^{100d}$) is roughly 100 times larger for a relatively small $k$. However, many of these attributes will be far from attribute $(1)$ in informativeness, so their $l_{(j)}(k)$ is large, which results in a small $\delta_{(j)}(n, k)$. Thus we can say that the bound in the theorem is not sensitive to $d$.

3. Number of machines M: We assume the whole training data size $N$ is fixed and the local data size is $n = N/M$. Then, on the one hand, as $M$ increases, $n$ decreases, and therefore the lower bound will decrease due to the larger $\delta_{(j)}(n, k)$. On the other hand, because the function $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ approaches 1 as $M$ increases when $p > 0.5$ [23], the lower bound will increase (a numerical illustration follows these remarks). In other words, the number of machines $M$ has a dual effect on the lower bound: with more machines, the local data size becomes smaller, which reduces the accuracy of local voting; however, it also leads to more copies of the local votes and thus increases the reliability of global voting. Therefore, in terms of accuracy, there should be an optimal number of machines given fixed-size training data.6

4. Local/global voting size k/2k: The local/global voting sizes $k/2k$ influence $l_{(j)}(k)$ and the number of terms in the summation in the lower bound. As $k$ increases, $l_{(j)}(k)$ increases and the number of terms in the summation decreases, so the lower bound increases. But increasing $k$ brings more communication and computation time. Therefore, it is better to select a moderate $k$. For some distributions, especially distributions over high-dimensional spaces, $l_{(j)}(k)$ is less sensitive to $k$, and we can then choose a relatively smaller $k$ to save communication time.
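As a quick numerical illustration of the majority-voting part of the bound, the short Python script below evaluates $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ for a few values of $M$; the per-machine success probability p = 0.8 is an assumed toy value, not a quantity from the paper.

from math import comb

def majority_vote_prob(p, M):
    # Probability that more than half of M machines each independently
    # vote for the best attribute, when each succeeds with probability p.
    return sum(comb(M, m) * p**m * (1 - p)**(M - m)
               for m in range(M // 2 + 1, M + 1))

for M in (4, 8, 16, 32):
    print(M, round(majority_vote_prob(0.8, M), 4))
# With p > 0.5 the probability climbs toward 1 as M grows, matching the
# second effect in the discussion of the number of machines above.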
As a comparison, we also prove a theorem for the data-parallel algorithm based on quantized histograms as follows (please refer to the supplementary material for its proof). The theorem basically tells us that the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large, and as a result the corresponding algorithm could fail in finding the best attribute.7 This could be the critical weakness of this algorithm in the big data scenario.

Theorem 4.2 We denote the quantized histogram with $b$ bins of the underlying distribution $P$ as $P^b$, that of the empirical distribution $P_n$ as $P_n^b$, the information gain of $X_j$ calculated under the distributions $P^b$ and $P_n^b$ as $IG_j^b$ and $IG_{n,j}^b$ respectively, and $f_j(b) \triangleq |IG_j - IG_j^b|$. Then, for $\epsilon \le \min_{j=1,\dots,d} f_j(b)$, with probability at least $\delta_j(n, f_j(b) - \epsilon)$, we have $|IG_{n,j}^b - IG_j| > \epsilon$.

5 Experiments

In this section, we report the experimental comparisons between PV-Tree and the baseline algorithms. We used two data sets, one for learning to rank (LTR) and the other for ad click prediction (CTR)8 (see Table 1 for details). For LTR, we extracted about 1200 numerical attributes per data sample and used NDCG [5] as the evaluation measure. For CTR, we extracted about 800 numerical attributes [9] and used AUC as the evaluation measure.

5In fact, the global voting size can be $\beta k$ with $\beta > 1$. Then the sufficient condition becomes that no less than $[M/\beta + 1]$ of all machines select the most informative attribute.
6Please note that using more machines will reduce the local computing time; thus the optimal number of machines may be larger in terms of speed-up.
7The theorem for regression holds in the same way, with IG replaced by VG.
8We use private data in the LTR experiments and data from KDD Cup 2012 track 2 in the CTR experiments.

Table 1: Datasets
Task  #Train  #Test  #Attribute  Source
LTR   11M     1M     1200        Private
CTR   235M    31M    800         KDD Cup

Table 2: Convergence time (seconds)
Task  Sequential  Data-Parallel  Attribute-Parallel  PV-Tree
LTR   28690       32260          14660               5825
CTR   154112      9209           26928               5349

According to recent industrial practices, a single decision tree might not be strong enough to learn an effective model for complicated tasks like ranking and click prediction. Therefore, people usually use decision-tree-based boosting algorithms (e.g., GBDT) to perform such tasks. In this paper, we also use GBDT as a platform to examine the efficiency and effectiveness of decision tree parallelization.
That is, we used PV-Tree or other baseline algorithms to parallelize the decision tree construction process in each iteration of GBDT, and compared their performance. Our experimental environment is a cluster of servers (each with 12 CPU cores and 32 GB RAM) inter-connected with 1 Gbps Ethernet. For the experiments on LTR, we used 8 machines for parallel training; for the experiments on CTR, we used 32 machines, since the dataset is much larger.

5.1 Comparison with Other Parallel Decision Trees

For comparison with PV-Tree, we implemented an attribute-parallel algorithm, in which a binary vector is used to indicate the split information and is exchanged across machines. In addition, we implemented a data-parallel algorithm according to [2, 21], which can communicate both full-grained histograms and quantized histograms. All the parallel algorithms and the sequential (single machine) version are compared together. The experimental results can be found in Figures 1a and 1b. From these figures, we have the following observations: For LTR, since the number of data samples is relatively small, the communication of the split information about the samples does not take too much time. As a result, the attribute-parallel algorithm appears to be efficient. Since most attributes take numerical values in this dataset, the full-grained histogram has quite a lot of bins. Therefore, the data-parallel algorithm which communicates full-grained histograms is quite slow, even slower than the sequential algorithm. When reducing the bins in the histogram to 10%, the data-parallel algorithm becomes much more efficient; however, its convergence point is not good (consistent with our theory: the bias in quantized histograms leads to an accuracy drop). For CTR, the attribute-parallel algorithm becomes very slow, since the number of data samples is very large. In contrast, many attributes in CTR take binary or discrete values, which makes the full-grained histogram have a limited number of bins. As a result, the data-parallel algorithm with full-grained histograms is faster than the sequential algorithm. The data-parallel algorithm with quantized histograms is even faster; however, its convergence point is once again not very good. PV-Tree reaches the best point achieved by the sequential algorithm within the shortest time in both the LTR and CTR tasks. For a more quantitative comparison on efficiency, we list the time for each algorithm (8 machines for LTR and 32 machines for CTR) to reach the convergent accuracy of the sequential algorithm in Table 2. From the table, we can see that, for LTR, it took PV-Tree 5825 seconds, while it took the data-parallel algorithm (with full-grained histograms9) and the attribute-parallel algorithm 32260 and 14660 seconds, respectively. As compared with the sequential algorithm (which took 28690 seconds to converge), PV-Tree achieves a 4.9x speed-up on 8 machines. For CTR, it took PV-Tree 5349 seconds, while it took the data-parallel algorithm (with full-grained histograms) and the attribute-parallel algorithm 9209 and 26928 seconds, respectively. As compared with the sequential algorithm (which took 154112 seconds to converge), PV-Tree achieves a 28.8x speed-up on 32 machines.
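As a sanity check on the reported speed-ups, the snippet below recomputes them from the convergence times in Table 2; it is simple arithmetic on the numbers quoted above, not a new measurement.

# Convergence times in seconds, taken from Table 2.
times = {
    "LTR": {"sequential": 28690, "pv_tree": 5825},   # 8 machines
    "CTR": {"sequential": 154112, "pv_tree": 5349},  # 32 machines
}
for task, t in times.items():
    print(task, round(t["sequential"] / t["pv_tree"], 1), "x speed-up")
# LTR -> 4.9x, CTR -> 28.8x, matching the speed-ups quoted in the text.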
We also conducted independent experiments to get a clear comparison of the communication cost of the different parallel algorithms under a typical big data workload setting. The results are listed in Table 3. We find that the cost of the attribute-parallel algorithm grows with the size of the training data N, and the cost of the data-parallel algorithm grows with the number of attributes d. In contrast, the cost of PV-Tree is constant (a back-of-the-envelope model of these costs is sketched at the end of this subsection).

9The data-parallel algorithm with 10% bins could not achieve the same accuracy as the sequential algorithm, and thus we did not include it in the table.

Table 3: Comparison of communication cost for training one tree with depth = 6.
Table 4: Convergence time and accuracy w.r.t. the global voting parameter k for PV-Tree.

5.2 Trade-off between Speed-up and Accuracy in PV-Tree

In the previous subsection, we showed that PV-Tree is more efficient than the other algorithms. Here we take a deeper dive into PV-Tree to see how its key parameters affect the trade-off between efficiency and accuracy. According to Theorem 4.1, the following two parameters are critical to PV-Tree: the number of machines M and the voting size k.

5.2.1 On Different Numbers of Machines

When more machines join the distributed training process, the data throughput grows larger, but the amortized training data on each machine become smaller. When the data size on each machine becomes too small, there is no guarantee on the accuracy of the voting procedure, according to our theorem. So it is important to set the number of machines appropriately. To gain more insight, we conducted additional experiments, whose results are shown in Figures 2a and 2b. From these figures, we can see that for LTR, when the number of machines grows from 2 to 8, the training process is significantly accelerated. However, when the number goes up to 16, the convergence speed is even lower than that of using 8 machines. Similar results can be observed for CTR. These observations are consistent with our theoretical findings. Please note that PV-Tree is designed for the big data scenario. Only when the entire training data are huge (and thus the distribution of the training data on each local machine can be similar to that of the entire training data) can the full power of PV-Tree be realized. Otherwise, we need to have a reasonable expectation of the speed-up, and should choose a smaller number of machines to parallelize the training.

5.2.2 On Different Sizes of Voting

In PV-Tree, the parameter k controls the number of top attributes selected during local and global voting. Intuitively, a larger k increases the probability of finding the globally best attribute among the local candidates; however, it also means a higher communication cost. According to our theorem, the choice of k should depend on the size of the local training data. If the local training data are large, the locally best attributes will be similar to the globally best one; in this case, one can safely choose a small value of k. Otherwise, we should choose a relatively larger k. To gain more insight, we conducted experiments whose results are shown in Table 4, where M refers to the number of machines. From the table, we have the following observations. First, in both cases, one does not need a large k to achieve good accuracy: when k ≤ 40, the accuracy is already very good. Second, when using a small number of machines, k can be set to an even smaller value, e.g., k = 5. This is because, given fixed-size training data, using fewer machines means more training data per machine, so a smaller k can already guarantee the approximation accuracy.
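The communication trade-offs discussed in this subsection (and in the Table 3 comparison above) can be captured in a rough back-of-the-envelope model. The Python sketch below counts communicated units per split under simplifying assumptions of our own (one unit per sample index, histogram bin, or attribute index); it mirrors the qualitative comparison of Table 3 rather than its measured values.

def comm_cost_per_split(N, d, bins, k, M):
    # Illustrative per-split communication model (abstract units, not bytes).
    attribute_parallel = N                 # split info for every sample
    data_parallel = d * bins * M           # histograms of all attributes
    pv_tree = k * M + 2 * k * bins * M     # vote indices + 2k full histograms
    return attribute_parallel, data_parallel, pv_tree

# LTR-like scale (11M samples, 1200 attributes); bins and k are assumed values.
print(comm_cost_per_split(N=11_000_000, d=1200, bins=64, k=20, M=8))
# Only the PV-Tree term is free of N and d: it depends on k, bins and M alone.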
5.3 Comparison with Other Parallel GBDT Algorithms

While the previous subsections mainly focused on how to parallelize the decision tree construction process inside GBDT, one could also parallelize GBDT in other ways. For example, in [22, 20], each machine learns its own decision tree separately, without communication. After that, these decision trees are aggregated by means of winner-takes-all or an output ensemble. Although these works are not the focus of our paper, it is still interesting to compare with them. For this purpose, we implemented the algorithms proposed in [22] and [20]. For ease of reference, we denote them as Svore and Yu, respectively. Their performances are shown in Figures 3a and 3b. From the figures, we can see that PV-Tree outperforms both Svore and Yu: although these two algorithms converge at a similar speed to PV-Tree, they have much worse convergence points. To our understanding, these two algorithms lack a solid theoretical guarantee: since the candidate decision trees are trained separately and independently, without the necessary information exchange, they may have a non-negligible bias, which leads to an accuracy drop in the end. In contrast, we can clearly characterize the theoretical properties of PV-Tree, and use it in an appropriate setting so as to avoid an observable accuracy drop. To sum up all the experiments, we can see that, with appropriately set parameters, PV-Tree achieves a very good trade-off between efficiency and accuracy, and outperforms both other parallel decision tree algorithms and the algorithms designed specifically for GBDT parallelization.

6 Conclusions

In this paper, we proposed a novel parallel algorithm for decision trees, called Parallel Voting Decision Tree (PV-Tree), which can achieve high accuracy at a very low communication cost. Experiments on both ranking and ad click prediction indicate that PV-Tree has an advantage over a number of baseline algorithms. As for future work, we plan to generalize the idea of PV-Tree to parallelize other machine learning algorithms. Furthermore, we will open-source the PV-Tree algorithm to benefit more researchers and practitioners.
1. What is the focus of the paper, and what are the key contributions of the proposed algorithm? 2. What are the strengths of the paper regarding theoretical analysis and experimental results? 3. Do you have any concerns or questions about the choice of global voting size, the effect of input dimension on the bound, and the algorithm's performance with large datasets? 4. Are there any typos or unclear explanations in the paper that you would like to bring to the authors' attention?
Review
Review The paper introduces a novel algorithm, named the Parallel Voting Decision Tree (PV-Tree), which effectively parallelizes the training process of the decision tree. The algorithm partitions the training data onto the different machines and then proceeds in two steps. In the first step, the algorithm finds the top-k attributes from the local data for each machine. In the second step, the algorithm globally finds the top 2k attributes among the previous set of local attributes. Finally, the full-grained histograms of these top 2k attributes are collected from the local machines in order to find their global distributions and identify the best attribute and its split point. The algorithm is much more efficient than existing algorithms that parallelize the training process of decision trees. It is more efficient than both data-parallel and attribute-parallel algorithms due to a very low communication cost: the algorithm only communicates the indices of the top k attributes for every machine and the histograms of the globally top 2k attributes. The author also proves that the best attribute is accurately chosen by the algorithm with a very large probability that converges to one as the training sample size increases. The accuracy holds regardless of the value of k chosen. The proposed novel algorithm is very well supported with theoretical results as well as experimental results. Overall a very good paper. There were a few issues, though, that were not clearly explained.

1. The global voting size throughout the paper has been chosen to be 2k, which is twice the local voting size. The author mentions that it can be chosen to be anything greater than the local voting size, but it has not been discussed how the global voting size affects the efficiency or accuracy of the algorithm. It has also not been explained why 2k was chosen.

2. Lines 153-158: It is not very clear how the input dimension d affects the bound. The author mentions that the lower bound decreases with increasing dimension, but concludes by saying that the bound is not sensitive to the dimension. The explanation given is not clear.

3. Lines 247-249: The author mentions that the algorithm works well for large data sets. How large does a training set need to be for the algorithm to work well?

4. Figure 2(b): The convergence rate for 64 machines seems to be lower in the figure than for both 32 machines and 128 machines. The accuracy appears to be the same regardless of the number of machines. Is there a reason why the algorithm works worse when 64 machines are used?

Some minor points:
Line 88: In the equation, H(Y|·) has not been defined earlier.
Line 98: "However its design principal..." should be "However its principal design...".
Line 156: h(j)(k) has not been introduced before. It is probably a typo and should be l(j)(k) instead.
NIPS
Title A Communication-Efficient Parallel Algorithm for Decision Tree

Abstract Decision tree (and its extensions such as Gradient Boosting Decision Trees and Random Forest) is a widely used machine learning algorithm, due to its practical effectiveness and model interpretability. With the emergence of big data, there is an increasing need to parallelize the training process of decision tree. However, most existing attempts along this line suffer from high communication costs. In this paper, we propose a new algorithm, called Parallel Voting Decision Tree (PV-Tree), to tackle this challenge. After partitioning the training data onto a number of (e.g., M) machines, this algorithm performs both local voting and global voting in each iteration. For local voting, the top-k attributes are selected from each machine according to its local data. Then, the globally top-2k attributes are determined by a majority voting among these local candidates. Finally, the full-grained histograms of the globally top-2k attributes are collected from local machines in order to identify the best (most informative) attribute and its split point. PV-Tree can achieve a very low communication cost (independent of the total number of attributes) and thus can scale out very well. Furthermore, theoretical analysis shows that this algorithm can learn a near-optimal decision tree, since it can find the best attribute with a large probability. Our experiments on real-world datasets show that PV-Tree significantly outperforms the existing parallel decision tree algorithms in the trade-off between accuracy and efficiency.

1 Introduction

Decision tree [16] is a widely used machine learning algorithm, since it is practically effective and the rules it learns are simple and interpretable. Based on decision tree, people have developed other algorithms such as Random Forest (RF) [3] and Gradient Boosting Decision Trees (GBDT) [7], which have demonstrated very promising performances in various learning tasks [5]. In recent years, with the emergence of very big training data (which cannot be held on one single machine), there has been an increasing need for parallelizing the training process of decision tree. To this end, there have been two major categories of attempts:2

∗Denotes equal contribution. This work was done when the first author was visiting Microsoft Research Asia.
2There is another category of works that parallelize the tasks of sub-tree training once a node is split [15], which require the training data to be moved from machine to machine many times and are thus inefficient. Moreover, there are also some other works accelerating decision tree construction by using pre-sorting [13] [19] [11] and binning [17] [8] [10], or employing a shared-memory-processors approach [12] [1]. However, they are out of our scope.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Attribute-parallel: Training data are vertically partitioned according to the attributes and allocated to different machines; then, in each iteration, the machines work on non-overlapping sets of attributes in parallel in order to find the best attribute and its split point (suppose this best attribute is located on the i-th machine) [19] [11] [20]. This process is very efficient in terms of communication. However, after that, the re-partition of the data on machines other than the i-th machine will induce very high communication costs (proportional to the number of data samples).
This is because those machines have no information about the best attribute at all, and in order to fulfill the re-partitioning, they must retrieve the partition information of every data sample from the i-th machine. Furthermore, as each worker still holds the full sample set, the partition process is not parallelized, which slows down the algorithm.

Data-parallel: Training data are horizontally partitioned according to the samples and allocated to different machines. Then the machines communicate with each other the local histograms of all attributes (according to their own data samples) in order to obtain the global attribute distributions and identify the best attribute and split point [12] [14]. It is clear that the corresponding communication cost is very high, proportional to the total number of attributes and the histogram size. To reduce the cost, in [2] and [21] [10], it was proposed to exchange quantized histograms between machines when estimating the global attribute distributions. However, this does not really solve the problem: the communication cost is still proportional to the total number of attributes, not to mention that the quantization may hurt the accuracy.

In this paper, we propose a new data-parallel algorithm for decision tree, called Parallel Voting Decision Tree (PV-Tree), which can achieve a much better balance between communication efficiency and accuracy. The key difference between the conventional data-parallel decision tree algorithm and PV-Tree lies in that the former only trusts the globally aggregated histogram information, while the latter leverages the local statistical information contained in each machine through a two-stage voting process, and thus can significantly reduce the communication cost. Specifically, PV-Tree contains the following steps in each iteration. 1) Local voting. On each machine, we select the top-k attributes based on its local data according to the informativeness scores (e.g., risk reduction for regression, and information gain for classification). 2) Global voting. We determine the global top-2k attributes by a majority voting among the local candidates selected in the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top 2k attributes from the ranked list. 3) Best attribute identification. We collect the full-grained histograms of the globally top-2k attributes from local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions (a small code sketch of these three steps follows below).

It is easy to see that the PV-Tree algorithm has a very low communication cost. It does not need to communicate the information of all attributes; instead, it only communicates the indices of the locally top-k attributes per machine and the histograms of the globally top-2k attributes. In other words, its communication cost is independent of the total number of attributes. This makes PV-Tree highly scalable. On the other hand, it can be proven that PV-Tree can find the best attribute with a large probability, and the probability will approach 1 regardless of k when the training data become sufficiently large. In contrast, the data-parallel algorithm based on quantized histograms could fail in finding the best attribute, since the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large.
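To make the two-stage voting procedure concrete, below is a minimal Python sketch of one PV-Tree split-finding round. It is an illustrative reconstruction, not the paper's implementation: the per-machine gain vectors and the per-attribute (count, sum, sum-of-squares) histograms are assumed to be precomputed, and gain_from_histogram is a simple variance-gain placeholder of our own.

import numpy as np

def gain_from_histogram(hist):
    # hist: array of shape (B, 3) holding per-bin (count, sum_y, sum_y^2).
    total = hist.sum(axis=0)
    def sse(c, s, s2):  # within-partition sum of squared errors
        return s2 - s * s / c if c > 0 else 0.0
    best_gain, best_bin = -np.inf, None
    left = np.zeros(3)
    for b in range(len(hist) - 1):  # candidate split after bin b
        left = left + hist[b]
        right = total - left
        gain = sse(*total) - sse(*left) - sse(*right)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_gain, best_bin

def pv_tree_find_best_split(local_gains, local_histograms, k):
    # local_gains[m][j]: informativeness score of attribute j on machine m.
    # local_histograms[m][j]: machine m's (B, 3) histogram for attribute j.
    M, d = len(local_gains), len(local_gains[0])
    # 1) Local voting: each machine proposes its locally top-k attributes.
    local_top = [np.argsort(g)[::-1][:k] for g in local_gains]
    # 2) Global voting: keep the 2k attributes with the most machine votes;
    #    only k*M attribute indices are communicated for this step.
    votes = np.bincount(np.concatenate(local_top), minlength=d)
    global_top = np.argsort(votes)[::-1][:2 * k]
    # 3) Best attribute identification: merge the full-grained histograms of
    #    the 2k survivors only, then pick the best split globally.
    best = (None, -np.inf, None)
    for j in global_top:
        merged = sum(local_histograms[m][j] for m in range(M))
        gain, split_bin = gain_from_histogram(merged)
        if gain > best[1]:
            best = (j, gain, split_bin)
    return best  # (attribute index, gain, split bin)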
We have conducted experiments on real-world datasets to evaluate the performance of PV-Tree. The experimental results show that PV-Tree has consistently higher accuracy and training speed than all the baselines we implemented. We further conducted experiments to evaluate the performance of PV-Tree in different settings (e.g., with different numbers of machines and different values of k). The experimental results are in accordance with our theoretical analysis.

2 Decision Tree

Suppose the training data set $D_n = \{(x_{i,j}, y_i);\ i = 1, \dots, n,\ j = 1, \dots, d\}$ is independently sampled from $\prod_{j=1}^{d} \mathcal{X}_j \times \mathcal{Y}$ according to $\big(\prod_{j=1}^{d} P_{X_j}\big) P_{Y|X}$. The goal is to learn a regression or classification model $f \in \mathcal{F}: \prod_{j=1}^{d} \mathcal{X}_j \to \mathcal{Y}$ by minimizing loss functions on the training data, which hopefully achieves accurate predictions for the unseen test data. Decision tree [16, 18] is a widely used model for both regression [4] and classification [18]. A typical decision tree algorithm is described in Alg 1. As can be seen, the tree growth procedure is recursive, and the nodes do not stop growing until they reach the stopping criteria. There are two important functions in the algorithm: FindBestSplit returns the best split point {attribute, threshold} of a node, and Split splits the training data according to the best split point. The details of FindBestSplit are given in Alg 2: first, histograms of the attributes are constructed (for continuous attributes, one usually converts their numerical values to finite bins for ease of computation) by going over all training data on the current node; then all bins (split points) are traversed from left to right, and leftSum and rightSum are used to accumulate the sums of the left and right parts of the split point, respectively. When selecting the best split point, an informativeness measure is adopted. The widely used informativeness measures are information gain for classification and variance gain for regression.

Algorithm 1 BuildTree
Input: Node N, Dataset D
if StoppingCriteria(D) then
    N.output = Prediction(D)
else
    bestSplit = FindBestSplit(D)
    (DL, DR) = Split(D, N, bestSplit)
    BuildTree(N.leftChild, DL)
    BuildTree(N.rightChild, DR)
end if

Definition 2.1 [6][16] In classification, the information gain (IG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the entropy reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$IG_j(w; O) = H_j - \big(H_j^l(w) + H_j^r(w)\big) = P(w_1 \le X_j \le w_2)\, H(Y \mid w_1 \le X_j \le w_2) - P(w_1 \le X_j < w)\, H(Y \mid w_1 \le X_j < w) - P(w \le X_j \le w_2)\, H(Y \mid w \le X_j \le w_2),$$
where $H(\cdot \mid \cdot)$ denotes the conditional entropy. In regression, the variance gain (VG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the variance reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$VG_j(w; O) = \sigma_j - \big(\sigma_j^l(w) + \sigma_j^r(w)\big) = P(w_1 \le X_j \le w_2)\, \mathrm{Var}[Y \mid w_1 \le X_j \le w_2] - P(w_1 \le X_j < w)\, \mathrm{Var}[Y \mid w_1 \le X_j < w] - P(w \le X_j \le w_2)\, \mathrm{Var}[Y \mid w \le X_j \le w_2],$$
where $\mathrm{Var}[\cdot \mid \cdot]$ denotes the conditional variance.
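To ground Definition 2.1, the following small Python sketch evaluates the empirical information gain of a single threshold on toy data; the binary labels and the threshold are our own assumptions for illustration, not the paper's code.

import numpy as np

def entropy(y):
    # Empirical entropy H(Y) of a label array.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(x, y, w):
    # IG of splitting attribute values x at threshold w: entropy of Y
    # before the split minus the probability-weighted entropy after it.
    left, right = y[x < w], y[x >= w]
    p_left = len(left) / len(y)
    return entropy(y) - p_left * entropy(left) - (1 - p_left) * entropy(right)

# Toy data: the threshold w = 0.5 separates most of the positive labels.
x = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9])
y = np.array([0, 0, 0, 1, 1, 0])
print(round(information_gain(x, y, 0.5), 3))  # 0.459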
3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and can achieve a good trade-off between communication efficiency and learning accuracy. PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just like in [2] [21]. However, its design principle is very different. In [2] [21], one does not trust the local information about the attributes on each machine, and decides the best attribute and split point only based on the aggregated global histograms of the attributes. In contrast, in PV-Tree, we leverage the meaningful statistical information about the attributes contained in each local machine, and make decisions through a two-stage (local and then global) voting process. In this way, we can significantly reduce the communication cost, since we do not need to communicate the histogram information of all the attributes across machines; instead, only the histograms of those attributes that survive the voting process are communicated.

The flow of the PV-Tree algorithm is very similar to that of the standard decision tree, except for the function FindBestSplit. So we only give the new implementation of this function in Alg 3, which contains the following three steps:

Local Voting: We select the top-k attributes for each machine based on its local data set (according to the informativeness scores, e.g., information gain for classification and variance reduction for regression), and then exchange the indices of the selected attributes among machines. Please note that the communication cost for this step is very low, because only the indices of a small number of (i.e., $k \times M$) attributes need to be communicated.

Global Voting: We determine the globally top-2k attributes by a majority voting among all the locally selected attributes from the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top-2k attributes from the ranked list. It can be proven that when the local data are big enough to be statistically representative, there is a very high probability that the top-2k attributes obtained by this majority voting will contain the globally best attribute. Please note that this step does not induce any communication cost.

Best Attribute Identification: We collect the full-grained histograms of the globally top-2k attributes from local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. Please note that the communication cost for this step is also low, because we only need to communicate the histograms of the 2k pre-selected attributes (but not all attributes).3

As a result, the PV-Tree algorithm can scale very well, since its communication cost is independent of both the total number of attributes and the total number of samples in the dataset. In the next section, we will provide a theoretical analysis of the accuracy guarantee of the PV-Tree algorithm.

Algorithm 2 FindBestSplit
Input: DataSet D
for all X in D.Attribute do
    ▷ Construct Histogram
    H = new Histogram()
    for all x in X do
        H.binAt(x.bin).Put(x.label)
    end for
    ▷ Find Best Split
    leftSum = new HistogramSum()
    for all bin in H do
        leftSum = leftSum + H.binAt(bin)
        rightSum = H.AllSum - leftSum
        split.gain = CalSplitGain(leftSum, rightSum)
        bestSplit = ChoiceBetterOne(split, bestSplit)
    end for
end for
return bestSplit

Algorithm 3 PV-Tree_FindBestSplit
Input: Dataset D
localHistograms = ConstructHistograms(D)
▷ Local Voting
splits = []
for all H in localHistograms do
    splits.Push(H.FindBestSplit())
end for
localTop = splits.TopKByGain(K)
▷ Gather all candidates
allCandidates = AllGather(localTop)
▷ Global Voting
globalTop = allCandidates.TopKByMajority(2*K)
▷ Merge global histograms
globalHistograms = Gather(globalTop, localHistograms)
bestSplit = globalHistograms.FindBestSplit()
return bestSplit

4 Theoretical Analysis

In this section, we conduct a theoretical analysis of the proposed PV-Tree algorithm. Specifically, we prove that PV-Tree can select the best (most informative) attribute with a large probability, for both classification and regression. In order to better present the theorem, we first introduce some notation.4 In classification, we denote $IG_j = \max_w IG_j(w)$, and rank $\{IG_j;\ j \in [d]\}$ from large to small as $\{IG_{(1)}, \dots, IG_{(d)}\}$. We call attribute $j_{(1)}$ the most informative attribute. Then, we denote $l_{(j)}(k) = \frac{|IG_{(1)} - IG_{(j)}|}{2},\ \forall j \ge k + 1$, to indicate the gap between the largest and the $j$-th largest IG. In regression, $l_{(j)}(k)$ is defined in the same way, except that IG is replaced with VG.

Theorem 4.1 Suppose we have $M$ local machines, and each one holds $n$ training data. PV-Tree at an arbitrary tree node, with local voting size $k$ and global majority voting size $2k$, will select the most informative attribute with probability at least
$$\sum_{m=[M/2+1]}^{M} C_M^m \left(1 - \sum_{j=k+1}^{d} \delta_{(j)}(n, k)\right)^{m} \left(\sum_{j=k+1}^{d} \delta_{(j)}(n, k)\right)^{M-m},$$
where $\delta_{(j)}(n, k) = \alpha_{(j)}(n) + 4e^{-c_{(j)} n (l_{(j)}(k))^2}$ with $\lim_{n \to \infty} \alpha_{(j)}(n) = 0$ and $c_{(j)}$ a constant.

Due to space restrictions, we briefly illustrate the proof idea here and leave the detailed proof to the supplementary materials. Our proof contains two parts. (1) For local voting, we find a sufficient condition to guarantee a similar rank of the attributes ordered by information gain computed on the local data and on the full data. Then, we derive a lower bound on the probability that the sufficient condition holds by using concentration inequalities. (2) For global voting, we select the top-2k attributes. It is easy to prove that we can select the most informative attribute if no less than $[M/2 + 1]$ of all machines select it.5 Therefore, we can calculate the probability in the theorem using the binomial distribution.

3As indicated by our theoretical analysis and empirical study (see the next sections), a very small k already leads to good performance in the PV-Tree algorithm.
4Since all analyses are for one arbitrarily fixed node O, we omit the notation O here.

Regarding Theorem 4.1, we have the following discussions on the factors that impact the lower bound on the probability of selecting the best attribute.

1. Size of local training data n: Since $\delta_{(j)}(n, k)$ decreases with $n$, with more and more local training data, the lower bound will increase. That means, if we have sufficiently large data, PV-Tree will select the best attribute with probability almost 1.

2. Input dimension d: It is clear that for fixed local voting size $k$ and global voting size $2k$, the lower bound decreases as $d$ increases. Consider the case that the number of attributes becomes 100 times larger. Then the number of terms in the summation (from $\sum_{j=k+1}^{d}$ to $\sum_{j=k+1}^{100d}$) is roughly 100 times larger for a relatively small $k$. However, many of these attributes will be far from attribute $(1)$ in informativeness, so their $l_{(j)}(k)$ is large, which results in a small $\delta_{(j)}(n, k)$. Thus we can say that the bound in the theorem is not sensitive to $d$.

3. Number of machines M: We assume the whole training data size $N$ is fixed and the local data size is $n = N/M$. Then, on the one hand, as $M$ increases, $n$ decreases, and therefore the lower bound will decrease due to the larger $\delta_{(j)}(n, k)$. On the other hand, because the function $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ approaches 1 as $M$ increases when $p > 0.5$ [23], the lower bound will increase (a numerical illustration follows these remarks). In other words, the number of machines $M$ has a dual effect on the lower bound: with more machines, the local data size becomes smaller, which reduces the accuracy of local voting; however, it also leads to more copies of the local votes and thus increases the reliability of global voting. Therefore, in terms of accuracy, there should be an optimal number of machines given fixed-size training data.6

4. Local/global voting size k/2k: The local/global voting sizes $k/2k$ influence $l_{(j)}(k)$ and the number of terms in the summation in the lower bound. As $k$ increases, $l_{(j)}(k)$ increases and the number of terms in the summation decreases, so the lower bound increases. But increasing $k$ brings more communication and computation time. Therefore, it is better to select a moderate $k$. For some distributions, especially distributions over high-dimensional spaces, $l_{(j)}(k)$ is less sensitive to $k$, and we can then choose a relatively smaller $k$ to save communication time.
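As a quick numerical illustration of the majority-voting part of the bound, the short Python script below evaluates $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ for a few values of $M$; the per-machine success probability p = 0.8 is an assumed toy value, not a quantity from the paper.

from math import comb

def majority_vote_prob(p, M):
    # Probability that more than half of M machines each independently
    # vote for the best attribute, when each succeeds with probability p.
    return sum(comb(M, m) * p**m * (1 - p)**(M - m)
               for m in range(M // 2 + 1, M + 1))

for M in (4, 8, 16, 32):
    print(M, round(majority_vote_prob(0.8, M), 4))
# With p > 0.5 the probability climbs toward 1 as M grows, matching the
# second effect in the discussion of the number of machines above.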
As a comparison, we also prove a theorem for the data-parallel algorithm based on quantized histograms as follows (please refer to the supplementary material for its proof). The theorem basically tells us that the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large, and as a result the corresponding algorithm could fail in finding the best attribute.7 This could be the critical weakness of this algorithm in the big data scenario.

Theorem 4.2 We denote the quantized histogram with $b$ bins of the underlying distribution $P$ as $P^b$, that of the empirical distribution $P_n$ as $P_n^b$, the information gain of $X_j$ calculated under the distributions $P^b$ and $P_n^b$ as $IG_j^b$ and $IG_{n,j}^b$ respectively, and $f_j(b) \triangleq |IG_j - IG_j^b|$. Then, for $\epsilon \le \min_{j=1,\dots,d} f_j(b)$, with probability at least $\delta_j(n, f_j(b) - \epsilon)$, we have $|IG_{n,j}^b - IG_j| > \epsilon$.

5 Experiments

In this section, we report the experimental comparisons between PV-Tree and the baseline algorithms. We used two data sets, one for learning to rank (LTR) and the other for ad click prediction (CTR)8 (see Table 1 for details). For LTR, we extracted about 1200 numerical attributes per data sample and used NDCG [5] as the evaluation measure. For CTR, we extracted about 800 numerical attributes [9] and used AUC as the evaluation measure.

5In fact, the global voting size can be $\beta k$ with $\beta > 1$. Then the sufficient condition becomes that no less than $[M/\beta + 1]$ of all machines select the most informative attribute.
6Please note that using more machines will reduce the local computing time; thus the optimal number of machines may be larger in terms of speed-up.
7The theorem for regression holds in the same way, with IG replaced by VG.
8We use private data in the LTR experiments and data from KDD Cup 2012 track 2 in the CTR experiments.

Table 1: Datasets
Task  #Train  #Test  #Attribute  Source
LTR   11M     1M     1200        Private
CTR   235M    31M    800         KDD Cup

Table 2: Convergence time (seconds)
Task  Sequential  Data-Parallel  Attribute-Parallel  PV-Tree
LTR   28690       32260          14660               5825
CTR   154112      9209           26928               5349

According to recent industrial practices, a single decision tree might not be strong enough to learn an effective model for complicated tasks like ranking and click prediction. Therefore, people usually use decision-tree-based boosting algorithms (e.g., GBDT) to perform such tasks. In this paper, we also use GBDT as a platform to examine the efficiency and effectiveness of decision tree parallelization.
That is, we used PV-Tree or the baseline algorithms to parallelize the decision tree construction process in each iteration of GBDT, and compared their performance. Our experimental environment is a cluster of servers (each with 12 CPU cores and 32 GB RAM) inter-connected with 1 Gbps Ethernet. For the experiments on LTR, we used 8 machines for parallel training; for the experiments on CTR, we used 32 machines, since the dataset is much larger.

5.1 Comparison with Other Parallel Decision Trees

For comparison with PV-Tree, we implemented an attribute-parallel algorithm, in which a binary vector indicating the split information is exchanged across machines. In addition, we implemented a data-parallel algorithm according to [2, 21], which can communicate either full-grained histograms or quantized histograms. All parallel algorithms and the sequential (single-machine) version are compared together. The experimental results can be found in Figures 1a and 1b. From these figures, we make the following observations. For LTR, since the number of data samples is relatively small, communicating the per-sample split information does not take too much time; as a result, the attribute-parallel algorithm appears efficient. Since most attributes in this dataset take numerical values, the full-grained histogram has quite a lot of bins. Therefore, the data-parallel algorithm that communicates full-grained histograms is quite slow, even slower than the sequential algorithm. When the number of bins in the histogram is reduced to 10%, the data-parallel algorithm becomes much more efficient; however, its convergence point is not good (consistent with our theory: the bias in quantized histograms leads to an accuracy drop). For CTR, the attribute-parallel algorithm becomes very slow, since the number of data samples is very large. In contrast, many attributes in CTR take binary or discrete values, which makes the full-grained histogram have a limited number of bins. As a result, the data-parallel algorithm with full-grained histograms is faster than the sequential algorithm. The data-parallel algorithm with quantized histograms is even faster; however, its convergence point is once again not very good. PV-Tree reaches the best point achieved by the sequential algorithm within the shortest time on both the LTR and CTR tasks.

For a more quantitative comparison of efficiency, we list in Table 2 the time each algorithm (8 machines for LTR and 32 machines for CTR) takes to reach the convergent accuracy of the sequential algorithm. From the table, we can see that, for LTR, PV-Tree took 5825 seconds, while the data-parallel algorithm (with full-grained histograms9) and the attribute-parallel algorithm took 32260 and 14660 seconds respectively. Compared with the sequential algorithm (which took 28690 seconds to converge), PV-Tree achieves a 4.9x speed-up on 8 machines. For CTR, PV-Tree took 5349 seconds, while the data-parallel algorithm (with full-grained histograms) and the attribute-parallel algorithm took 9209 and 26928 seconds respectively. Compared with the sequential algorithm (which took 154112 seconds to converge), PV-Tree achieves a 28.8x speed-up on 32 machines. We also conducted independent experiments to obtain a clear comparison of the communication costs of the different parallel algorithms under a typical big-data workload setting. The results are listed in Table 3.
We find that the communication cost of the attribute-parallel algorithm grows with the size of the training data N, and the cost of the data-parallel algorithm grows with the number of attributes d. In contrast, the cost of PV-Tree is constant.

9 The data-parallel algorithm with 10% bins could not achieve the same accuracy as the sequential algorithm, and thus we did not put it in the table.

Table 3: Comparison of communication cost, training one tree with depth = 6.
Table 4: Convergence time and accuracy w.r.t. the global voting parameter k for PV-Tree.

5.2 Trade-off between Speed-up and Accuracy in PV-Tree

In the previous subsection, we showed that PV-Tree is more efficient than the other algorithms. Here we take a closer look at PV-Tree to see how its key parameters affect the trade-off between efficiency and accuracy. According to Theorem 4.1, the following two parameters are critical to PV-Tree: the number of machines M and the voting size k.

5.2.1 On Different Numbers of Machines

When more machines join the distributed training process, the data throughput grows, but the amortized training data on each machine shrinks. When the data size on each machine becomes too small, there is no guarantee on the accuracy of the voting procedure, according to our theorem. So it is important to set the number of machines appropriately. To gain more insight, we conducted additional experiments, whose results are shown in Figures 2a and 2b. From these figures, we can see that for LTR, when the number of machines grows from 2 to 8, the training process is significantly accelerated. However, when the number goes up to 16, the convergence speed is even lower than with 8 machines. Similar results can be observed for CTR. These observations are consistent with our theoretical findings. Please note that PV-Tree is designed for the big-data scenario: only when the entire training set is huge (and thus the distribution of the training data on each local machine can be similar to that of the entire training set) can the full power of PV-Tree be realized. Otherwise, we need to have a reasonable expectation of the speed-up, and should choose a smaller number of machines to parallelize the training.

5.2.2 On Different Sizes of Voting

In PV-Tree, the parameter k controls the number of top attributes selected during local and global voting. Intuitively, a larger k increases the probability of finding the globally best attribute among the local candidates; however, it also means a higher communication cost. According to our theorem, the choice of k should depend on the size of the local training data: if the local training data is large, the locally best attributes will be similar to the globally best one, and one can safely choose a small value of k; otherwise, a relatively larger k should be chosen. To gain more insight, we conducted experiments whose results are shown in Table 4, where M refers to the number of machines. From the table, we make two observations. First, in both cases, a large k is not needed to achieve good accuracy: a k of at most 40 already gives very good accuracy. Second, when using a small number of machines, k can be set to an even smaller value, e.g., k = 5. This is because, given fixed-size training data, fewer machines means more training data per machine, so a smaller k already guarantees the approximation accuracy.
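Relatedly, the qualitative communication costs summarized in Table 3 can be written down directly. The sketch below compares per-tree communication volume under stated assumptions; every constant in it is an illustrative guess of ours, not a measurement.

```python
def comm_volume_per_tree(N, d, bins=255, k=20, depth=6):
    """Rough per-tree communication volume (in transferred values) for the
    three parallelization schemes discussed in the text. The bin count,
    voting size and accounting are illustrative assumptions only."""
    splits = 2 ** depth - 1
    attribute_parallel = N * depth                 # sample-split bit-vectors, one per level
    data_parallel = d * bins * splits              # full-grained histograms, every attribute
    pv_tree = 2 * k * bins * splits + k * splits   # histograms for 2k candidates + votes
    return attribute_parallel, data_parallel, pv_tree

# PV-Tree's volume is independent of both N and d:
print(comm_volume_per_tree(N=11_000_000, d=1200))
print(comm_volume_per_tree(N=235_000_000, d=800))
```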
5.3 Comparison with Other Parallel GBDT Algorithms

While we have mainly focused on parallelizing the decision tree construction process inside GBDT in the previous subsections, one could also parallelize GBDT in other ways. For example, in [22, 20], each machine learns its own decision tree separately, without communication. After that, these decision trees are aggregated by means of winner-takes-all or an output ensemble. Although these works are not the focus of our paper, it is still interesting to compare with them. For this purpose, we implemented the algorithms proposed in [22] and [20]; for ease of reference, we denote them Svore and Yu respectively. Their performance is shown in Figures 3a and 3b. From the figures, we can see that PV-Tree outperforms both Svore and Yu: although these two algorithms converge at a speed similar to PV-Tree, they have much worse convergence points. To our understanding, these two algorithms lack a solid theoretical guarantee: since the candidate decision trees are trained separately and independently, without the necessary information exchange, they may have non-negligible bias, which leads to an accuracy drop in the end. In contrast, we can clearly characterize the theoretical properties of PV-Tree, and use it in an appropriate setting so as to avoid an observable accuracy drop. To sum up the experiments: with appropriately set parameters, PV-Tree achieves a very good trade-off between efficiency and accuracy, and outperforms both other parallel decision tree algorithms and algorithms designed specifically for GBDT parallelization.

6 Conclusions

In this paper, we proposed a novel parallel algorithm for decision trees, called Parallel Voting Decision Tree (PV-Tree), which achieves high accuracy at a very low communication cost. Experiments on both ranking and ad click prediction indicate that PV-Tree has advantages over a number of baseline algorithms. As future work, we plan to generalize the idea of PV-Tree to parallelize other machine learning algorithms. Furthermore, we will open-source the PV-Tree algorithm to benefit more researchers and practitioners.
1. What is the main contribution of the paper in terms of decision tree algorithms? 2. How does the proposed method, PV-Tree, reduce communication costs? 3. What factors affect the statistical performance of PV-Tree, and how are they addressed in the paper? 4. How does the choice of 2k impact the performance of the algorithm, and what are the implications of choosing other multiples of k? 5. Would assigning different weights to local candidates improve the selection process, and why? 6. Are there any limitations or areas for improvement in the numerical experiments presented in the paper? 7. How does the paper's organization and writing style contribute to its overall clarity and readability?
Review
Review This paper proposes a communication-efficient algorithm for decision trees when data are split across a number of machines. The PV-Tree method adopts both local and global voting mechanisms to leverage local information and reduce the communication cost. Factors that affect the statistical performance are discussed based on the theorem, and numerical experiments further support the analysis. The idea of leveraging local information to avoid redundant communication is novel. The main communication cost of the algorithm lies in the best-attribute identification. It seems to me that the reason for the choice of 2k is that as long as [M/2+1] of all machines select an attribute, it is guaranteed to be in the global list. As mentioned in the footnote, we can choose other multiples of k, so I wonder if the performance would be significantly affected if more or fewer candidates are considered. Currently the local candidates are treated equally when aggregating to the global list; would assigning different weights be a reasonable idea to improve the selection? The numerical experiments are comprehensive and well support the algorithm and theoretical results. It would be better to give a definition or some explanation of the NDCG measure. In general, the algorithm is novel and can have many applications. The paper is well written and organized.
NIPS
Title Efficient Projection onto the Perfect Phylogeny Model

Abstract Several algorithms build on the perfect phylogeny model to infer evolutionary trees. This problem is particularly hard when evolutionary trees are inferred from the fraction of genomes that have mutations in different positions, across different samples. Existing algorithms might do extensive searches over the space of possible trees. At the center of these algorithms is a projection problem that assigns a fitness cost to phylogenetic trees. In order to perform a wide search over the space of the trees, it is critical to solve this projection problem fast. In this paper, we use Moreau's decomposition for proximal operators, and a tree reduction scheme, to develop a new algorithm to compute this projection. Our algorithm terminates with an exact solution in a finite number of steps, and is extremely fast. In particular, it can search over all evolutionary trees with fewer than 11 nodes, a size relevant for several biological problems (more than 2 billion trees), in about 2 hours.

1 Introduction

The perfect phylogeny model (PPM) [1, 2] is used in biology to study evolving populations. It assumes that the same position in the genome never mutates twice; hence mutations only accumulate. Consider a population of organisms evolving under the PPM. The evolution process can be described by a labeled rooted tree, $T = (r, \mathcal{V}, \mathcal{E})$, where $r$ is the root, i.e., the common oldest ancestor, the nodes $\mathcal{V}$ are the mutants, and the edges $\mathcal{E}$ are mutations acquired between older and younger mutants. Since each position in the genome only mutates once, we can associate with each node $v \neq r$ a unique mutated position: the mutation associated to the ancestral edge of $v$. By convention, let us associate with the root $r$ a null mutation that is shared by all mutants in $T$. This allows us to refer to each node $v \in \mathcal{V}$ as both a mutation in a position in the genome (the mutation associated to the ancestral edge of $v$) and a mutant (the mutant with the fewest mutations that has mutation $v$). Hence, without loss of generality, $\mathcal{V} = \{1, \ldots, q\}$ and $\mathcal{E} = \{2, \ldots, q\}$, where $q$ is the length of the genome, and $r = 1$ refers to both the oldest common ancestor and the null mutation shared by all.

One very important use of the PPM is to infer how mutants of a common ancestor evolve [3–8]. A common type of data used for this purpose is the frequency with which different positions in the genome mutate across multiple samples, obtained, e.g., from whole-genome or targeted deep sequencing [9]. Consider a sample $s$, one of $p$ samples, obtained at a given stage of the evolution process. This sample has many mutants, some with the same genome, some with different genomes. Let $F \in \mathbb{R}^{q \times p}$ be such that $F_{v,s}$ is the fraction of genomes in $s$ with a mutation in position $v$ of the genome. Let $M \in \mathbb{R}^{q \times p}$ be such that $M_{v,s}$ is the fraction of mutant $v$ in $s$. By definition, the columns of $M$ must sum to 1. Let $U \in \{0,1\}^{q \times q}$ be such that $U_{v,v'} = 1$ if and only if mutant $v$ is an ancestor of mutant $v'$, or $v = v'$. We denote the sets of all possible $U$ matrices, $M$ matrices and labeled rooted trees $T$ by $\mathcal{U}$, $\mathcal{M}$ and $\mathcal{T}$, respectively. See Figure 1 for an illustration. The PPM implies
$$F = UM. \qquad (1)$$
Our work contributes to the problem of inferring clonal evolution from mutation frequencies: how do we infer $M$ and $U$ from $F$? Note that finding $U$ is the same as finding $T$ (see Lemma B.2).

* Bei Jia is currently with Element AI.
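As a small sanity check of relation (1), the following sketch builds U from a made-up four-node tree and evaluates F = UM with numpy; the tree and mixture values are ours, chosen only for illustration.

```python
import numpy as np

# Hypothetical 4-node tree: node 1 is the root; parent[v] gives v's parent.
parent = {2: 1, 3: 1, 4: 2}
q = 4

# U[v, v'] = 1 iff v is an ancestor of v' or v == v' (1-indexed nodes -> 0-indexed arrays).
U = np.eye(q, dtype=int)
for v in range(2, q + 1):
    a = v
    while a in parent:          # walk up to the root collecting ancestors
        a = parent[a]
        U[a - 1, v - 1] = 1

M = np.array([[0.4], [0.3], [0.2], [0.1]])  # one sample; column sums to 1
F = U @ M                                    # fraction of genomes mutated at each position
print(F.ravel())  # root row equals 1.0, since the null mutation is shared by all
```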
Although model (1) is simple, simultaneously inferring $M$ and $U$ from $F$ can be hard [3]. One popular inference approach is the following optimization problem over $U$, $M$ and $F$:
$$\min_{U \in \mathcal{U}} C(U), \qquad (2)$$
$$C(U) = \min_{M, F \in \mathbb{R}^{q \times p}} \|\hat{F} - F\| \quad \text{subject to } F = UM,\ M \geq 0,\ M^\top \mathbf{1} = \mathbf{1}, \qquad (3)$$
where $\|\cdot\|$ is the Frobenius norm, and $\hat{F} \in \mathbb{R}^{q \times p}$ contains the measured fractions of mutations per position in each sample, which are known and fixed. In a nutshell, we want to project our measurement $\hat{F}$ onto the space of valid PPM models. Problem (2) is a hard mixed integer-continuous optimization problem. To approximately solve it, we might find a finite subset $\{U_i\} \subset \mathcal{U}$ that corresponds to a "heuristically good" subset of trees, $\{T_i\} \subset \mathcal{T}$, and, for each fixed matrix $U_i$, solve (3), which is a convex optimization problem. We can then return $T_x$, where $x \in \arg\min_i C(U_i)$. Fortunately, in many biological applications, e.g., [3–8], the reconstructed evolutionary tree involves a very small number of mutated positions, e.g., $q \leq 11$. In practice, a position $v$ might be an effective position that is a cluster of multiple real positions in the genome. For small $q$, we can compute $C(U)$ for many trees, and hence approximate $M$ and $U$ and get uncertainty measures for these estimates. This is important, since data is generally scarce and noisy.

Contributions: (i) we propose a new algorithm to compute $C(U)$ exactly in $O(q^2 p)$ steps, the first non-iterative algorithm to compute $C(U)$; (ii) we compare its performance against state-of-the-art iterative algorithms, and observe much faster convergence; in particular, our algorithm scales much faster than $O(q^2 p)$ in practice; (iii) we implement our algorithm on a GPU, and show that it computes the cost of all (more than 2 billion) trees with $\leq 11$ nodes in $\leq 2.5$ hours.

2 Related work

A problem related to ours, but somewhat different, is that of inferring a phylogenetic tree from single-cell whole-genome sequencing data. Given all the mutations in a set of mutants, the problem is to arrange the mutants in a phylogenetic tree [10, 11]. Mathematically, this corresponds to inferring $T$ from a partial or corrupted observation of $U$. If the PPM is assumed, and all the mutations of all the mutants are correctly observed, this problem can be solved in linear time, e.g., [12]. In general, this problem is equivalent to finding a minimum-cost Steiner tree on a hypercube, whose nodes and edges represent mutants and mutations respectively, a problem known to be hard [13].

We mention a few works on clonality inference, based on the PPM, that try to infer both $U$ and $M$ from $\hat{F}$. No previous work solves problem (2) exactly in general, even for trees of size $q \leq 11$. Using our fast projection algorithm, we can solve (2) exactly by searching over all trees, if $q \leq 11$. Ref. [3] (AncesTree) reduces the space of possible trees $\mathcal{T}$ to subtrees of a heuristically constructed DAG. The authors use the element-wise 1-norm in (3) and, after introducing more variables to linearize the product $UM$, reduce this search to solving a MILP, which they try to solve via branch and bound. Ref. [6] (CITUP) searches the space of all unlabeled trees and, for each unlabeled tree, tries to solve an MIQP, again using branch-and-bound techniques, which finds a labeling for the unlabeled tree and simultaneously minimizes the distance $\|\hat{F} - F\|$. Refs. [5] and [14] (PhyloSub/PhyloWGS) use a stochastic model to sample trees that are likely to explain the data. Their model is based on [15], which generates hierarchical clusterings of objects, and from which lineage trees can be formed. A score is then computed for these trees, and the highest scoring trees are returned.

Procedure (2) can be justified as MLE if we assume the stochastic model $\hat{F} = F + \mathcal{N}(0, I\sigma^2)$, where $F$, $U$ and $M$ satisfy the PPM, and $\mathcal{N}(0, I\sigma^2)$ represents additive, component-wise, Gaussian measurement noise with zero mean and covariance $I\sigma^2$. Alternative stochastic models can be assumed, e.g., $M - U^{-1}\hat{F} = \mathcal{N}(0, I\sigma^2)$, where $M$ is non-negative and its columns must sum to one, and $\mathcal{N}(0, I\sigma^2)$ is as described before. For this model, and for each matrix $U$, the cost $C(U)$ is a projection of $U^{-1}\hat{F}$ onto the probability simplex $M \geq 0$, $M^\top \mathbf{1} = \mathbf{1}$. Several fast algorithms are known for this problem, e.g., [16–20] and references therein. In a $pq$-dimensional space, the exact projection onto the simplex can be done in $O(qp)$ steps. Our algorithm is the first to solve (3) exactly in a finite number of steps. We can also use iterative methods to solve (3). One advantage of our algorithm is that it has no tuning parameters, and requires no effort to check for convergence at a given accuracy. Since iterative algorithms can converge very fast, we numerically compare the speed of our algorithm with different implementations of the Alternating Direction Method of Multipliers (ADMM) [21], which, if properly tuned, has a convergence rate that equals the fastest convergence rate among all first-order methods [22] under some convexity assumptions, and is known to produce good solutions for several other kinds of problems, even non-convex ones [23–29].
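The simplex projection mentioned above has a standard sort-and-threshold form (in the spirit of [16–20]); a minimal sketch, with no claim that this is the exact variant those references use:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {m : m >= 0, sum(m) = 1},
    via the standard sort-and-threshold construction."""
    u = np.sort(v)[::-1]                 # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]   # last active index
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

print(project_to_simplex(np.array([2.0, 0.0])))  # -> [1., 0.]
```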
3 Main results

We now state our main results, and explain the ideas behind their proofs. Detailed proofs can be found in the Appendix. Our algorithm computes $C(U)$ and minimizers of (3), resp. $M^*$ and $F^*$, by solving an equivalent problem. Without loss of generality, we assume that $p = 1$, since, by squaring the objective in (3), it decomposes into $p$ independent problems. Sometimes we denote $C(U)$ by $C(T)$, since given $U$ we can specify $T$, and vice versa. Let $\bar{i}$ be the closest ancestor of $i$ in $T = (r, \mathcal{V}, \mathcal{E})$. Let $\Delta_i$ be the set of all the ancestors of $i$ in $T$, plus $i$. Let $\partial i$ be the set of children of $i$ in $T$.

Theorem 3.1 (Equivalent formulation). Problem (3) can be solved by solving
$$\min_{t \in \mathbb{R}} \ t + L(t), \qquad (4)$$
$$L(t) = \min_{Z \in \mathbb{R}^q} \ \frac{1}{2}\sum_{i \in \mathcal{V}} (Z_i - Z_{\bar{i}})^2 \quad \text{subject to } Z_i \leq t - N_i,\ \forall i \in \mathcal{V}, \qquad (5)$$
where $N_i = \sum_{j \in \Delta_i} \hat{F}_j$ and, by convention, $Z_{\bar{i}} = 0$ for $i = r$. In particular, if $t^*$ minimizes (4), $Z^*$ minimizes (5) for $t = t^*$, and $M^*, F^*$ minimize (3), then
$$M^*_i = -Z^*_i + Z^*_{\bar{i}} + \sum_{r \in \partial i} (Z^*_r - Z^*_{\bar{r}}) \quad \text{and} \quad F^*_i = -Z^*_i + Z^*_{\bar{i}},\ \forall i \in \mathcal{V}. \qquad (6)$$
Furthermore, $t^*$, $M^*$, $F^*$ and $Z^*$ are unique.

Theorem 3.1 comes from a dual form of (3), which we build using Moreau's decomposition [30].

3.1 Useful observations

Let $Z^*(t)$ be the unique minimizer of (5) for some $t$. The main ideas behind our algorithm depend on a few simple properties of the paths $\{Z^*(t)\}$ and $\{L'(t)\}$, the derivative of $L(t)$ with respect to $t$. Note that $L$ is also a function of $N$, as defined in Theorem 3.1, which depends on the input data $\hat{F}$.

Lemma 3.2. $L(t)$ is a convex function of $t$ and $N$. Furthermore, $L(t)$ is continuous in $t$ and $N$, and $L'(t)$ is non-decreasing in $t$.

Lemma 3.3. $Z^*(t)$ is continuous as a function of $t$ and $N$. $Z^*(t^*)$ is continuous as a function of $N$.

Let $\mathcal{B}(t) = \{i : Z^*(t)_i = t - N_i\}$, i.e., the set of components of the solution at the boundary of (5). Variables in $\mathcal{B}$ are called fixed, and we call the other variables free. Free (resp. fixed) nodes are nodes corresponding to free (resp. fixed) variables.
Lemma 3.4. $\mathcal{B}(t)$ is piecewise constant in $t$.

Consider dividing the tree $T = (r, \mathcal{V}, \mathcal{E})$ into subtrees, each with at least one free node, using $\mathcal{B}(t)$ as separation points. See Figure 4 in Appendix A for an illustration. Each $i \in \mathcal{B}(t)$ belongs to at most degree($i$) different subtrees, where degree($i$) is the degree of node $i$, and each $i \in \mathcal{V}\setminus\mathcal{B}(t)$ belongs to exactly one subtree. Let $T_1, \ldots, T_k$ be the set of resulting (rooted, labeled) trees. Let $T_w = (r_w, \mathcal{V}_w, \mathcal{E}_w)$, where the root $r_w$ is the closest node in $T_w$ to $r$. We call $\{T_w\}$ the subtrees induced by $\mathcal{B}(t)$. We define $\mathcal{B}_w(t) = \mathcal{B}(t) \cap \mathcal{V}_w$, and, when it does not create ambiguity, we drop the index $t$ in $\mathcal{B}_w(t)$. Note that different $\mathcal{B}_w(t)$'s might have elements in common. Also note that, by construction, if $i \in \mathcal{B}_w$, then $i$ must be a leaf of $T_w$ or the root of $T_w$.

Definition 3.5. The $(T_w, \mathcal{B}_w)$-problem is the optimization problem over $|\mathcal{V}_w\setminus\mathcal{B}(t)|$ variables
$$\min_{\{Z_j : j \in \mathcal{V}_w\setminus\mathcal{B}(t)\}} \ (1/2) \sum_{j \in \mathcal{V}_w} (Z_j - Z_{\bar{j}})^2, \qquad (7)$$
where $\bar{j}$ is the parent of $j$ in $T_w$, $Z_{\bar{j}} = 0$ if $j = r_w$, and $Z_j = Z^*(t)_j = t - N_j$ if $j \in \mathcal{B}_w(t)$.

Lemma 3.6. Problem (5) decomposes into $k$ independent problems. In particular, the minimizers $\{Z^*(t)_j : j \in \mathcal{V}_w\setminus\mathcal{B}(t)\}$ are determined as the solution of the $(T_w, \mathcal{B}_w)$-problem. If $j \in \mathcal{V}_w$, then $Z^*(t)_j = c_1 t + c_2$, where $c_1$ and $c_2$ depend on $j$ but not on $t$, and $0 \leq c_1 \leq 1$.

Lemma 3.7. $Z^*(t)$ and $L'(t)$ are piecewise linear and continuous in $t$. Furthermore, $Z^*(t)$ and $L'(t)$ change linear segments if and only if $\mathcal{B}(t)$ changes.

Lemma 3.8. If $t \leq t'$, then $\mathcal{B}(t') \subseteq \mathcal{B}(t)$. In particular, $\mathcal{B}(t)$ changes at most $q$ times with $t$.

Lemma 3.9. $Z^*(t)$ and $L'(t)$ have fewer than $q + 1$ different linear segments.

3.2 The Algorithm

In a nutshell, our algorithm computes the solution path $\{Z^*(t)\}_{t \in \mathbb{R}}$ and the derivative $\{L'(t)\}_{t \in \mathbb{R}}$. From these paths, it finds the unique $t^*$ at which
$$\left.\frac{d(t + L(t))}{dt}\right|_{t = t^*} = 0 \iff L'(t^*) = -1. \qquad (8)$$
It then evaluates the path $Z^*(t)$ at $t = t^*$, and uses this value, along with (6), to find $M^*$ and $F^*$, the unique minimizers of (3). Finally, we compute $C(T) = \|\hat{F} - F^*\|$.

We know that $\{Z^*(t)\}$ and $\{L'(t)\}$ are continuous piecewise linear, with a finite number of different linear segments (Lemmas 3.7, 3.8 and 3.9). Hence, to describe $\{Z^*(t)\}$ and $\{L'(t)\}$, we only need to evaluate them at the critical values, $t_1 > t_2 > \cdots > t_k$, at which $Z^*(t)$ and $L'(t)$ change linear segments. We will later use Lemma 3.7 as a criterion to find the critical values. Namely, $\{t_i\}$ are the values of $t$ at which, as $t$ decreases, new variables become fixed and $\mathcal{B}(t)$ changes. Note that variables never become free once fixed, by Lemma 3.8, which also implies that $k \leq q$.

The values $\{Z^*(t_i)\}$ and $\{L'(t_i)\}$ are computed sequentially as follows. If $t$ is very large, the constraint in (5) is not active, and $Z^*(t) = L(t) = L'(t) = 0$. Lemma 3.7 tells us that, as we decrease $t$, the first critical value is the largest $t$ for which this constraint becomes active, and at which $\mathcal{B}(t)$ changes for the first time. Hence, if $i = 1$, we have $t_i = \max_s\{N_s\}$, $Z^*(t_i) = L'(t_i) = 0$, and $\mathcal{B}(t_i) = \arg\max_s\{N_s\}$. Once we have $t_i$, we compute the rates $Z'^*(t_i)$ and $L''(t_i)$ from $\mathcal{B}(t_i)$ and $T$, as explained in Section 3.3. Since the paths are piecewise linear, derivatives are not defined at critical points. Hence, here and throughout this section, these derivatives are taken from the left, i.e., $Z'^*(t_i) = \lim_{t \uparrow t_i} (Z^*(t_i) - Z^*(t))/(t_i - t)$ and $L''(t_i) = \lim_{t \uparrow t_i} (L'(t_i) - L'(t))/(t_i - t)$. (A small numerical illustration of this piecewise-linear structure is sketched below.)
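To see the piecewise-linear structure concretely, one can solve (5) on a toy tree over a grid of t values with a generic solver; the tree, the N values and the solver choice (SciPy's SLSQP) below are ours, purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

parent = {2: 1, 3: 1}            # toy 3-node star rooted at node 1
N = {1: 0.9, 2: 0.5, 3: 0.3}     # illustrative N_i values

def L(t, q=3):
    # 0.5 * sum_i (Z_i - Z_ibar)^2 with Z_ibar = 0 for the root
    obj = lambda Z: 0.5 * sum(
        (Z[i - 1] - (Z[parent[i] - 1] if i in parent else 0.0)) ** 2
        for i in range(1, q + 1))
    # constraints Z_i <= t - N_i, written as (t - N_i) - Z_i >= 0
    cons = [{"type": "ineq", "fun": lambda Z, i=i: (t - N[i]) - Z[i - 1]}
            for i in range(1, q + 1)]
    return minimize(obj, np.zeros(q), constraints=cons).fun

ts = np.linspace(-1.0, 1.5, 11)
vals = [L(t) for t in ts]
print(np.round(np.diff(vals) / np.diff(ts), 3))  # slopes change only at critical values
```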
Since $Z'^*(t)$ and $L''(t)$ are constant for $t \in (t_{i+1}, t_i]$, for $t \in (t_{i+1}, t_i]$ we have
$$Z^*(t) = Z^*(t_i) + (t - t_i)Z'^*(t_i), \qquad L'(t) = L'(t_i) + (t - t_i)L''(t_i), \qquad (9)$$
and the next critical value, $t_{i+1}$, is the largest $t < t_i$ for which new variables become fixed and $\mathcal{B}(t)$ changes. The value $t_{i+1}$ is found by solving for $t < t_i$ in
$$Z^*(t)_r = Z^*(t_i)_r + (t - t_i)Z'^*(t_i)_r = t - N_r, \qquad (10)$$
and keeping the largest solution among all $r \notin \mathcal{B}$. Once $t_{i+1}$ is computed, we update $\mathcal{B}$ with the new variables that became fixed, and we obtain $Z^*(t_{i+1})$ and $L'(t_{i+1})$ from (9). The process then repeats.

By Lemma 3.2, $L'$ never increases. Hence, we stop this process (a) as soon as $L'(t_i) < -1$, or (b) when all the variables are in $\mathcal{B}$, and thus there are no more critical values to compute. If (a), let $t_k$ be the last critical value with $L'(t_k) > -1$; if (b), let $t_k$ be the last computed critical value. We use $t_k$ and (9) to compute $t^*$, at which $L'(t^*) = -1$, and also $Z^*(t^*)$. From $Z^*(t^*)$ we then compute $M^*$, $F^*$ and $C(U) = \|\hat{F} - F^*\|$.

The algorithm is shown compactly in Alg. 1. Its inputs are $\hat{F}$ and $T$, represented, e.g., using a linked-nodes data structure. Its outputs are minimizers of (3). It makes use of a procedure ComputeRates, which we explain later; this procedure terminates in $O(q)$ steps and uses $O(q)$ memory. Line 5 comes from solving (10) for $t$. In line 14, the symbols $M^*(Z^*, T)$ and $F^*(Z^*, T)$ remind us that $M^*$ and $F^*$ are computed from $Z^*$ and $T$ using (6). The correctness of Alg. 1 follows from the Lemmas in Section 3.1 and the explanation above. In particular, since there are at most $q + 1$ different linear regimes, the bound $q$ in the for-loop does not prevent us from finding any critical value. Its time complexity is $O(q^2)$, since each line completes in $O(q)$ steps and is executed at most $q$ times.

Theorem 3.10 (Complexity). Algorithm 1 finishes in $O(q^2)$ steps, and requires $O(q)$ memory.

Theorem 3.11 (Correctness). Algorithm 1 outputs the solution to (3).

Algorithm 1 Projection onto the PPM (input: $T$ and $\hat{F}$; output: $M^*$ and $F^*$)
1: $N_i = \sum_{j \in \Delta_i} \hat{F}_j$, for all $i \in \mathcal{V}$ ▷ This takes $O(q)$ steps using a DFS; see proof of Theorem 3.10
2: $i = 1$, $t_i = \max_r\{N_r\}$, $\mathcal{B}(t_i) = \arg\max_r\{N_r\}$, $Z^*(t_i) = 0$, $L'(t_i) = 0$ ▷ Initialize
3: for $i = 1$ to $q$ do
4:   $(Z'^*(t_i), L''(t_i))$ = ComputeRates($\mathcal{B}(t_i)$, $T$) ▷ Update rates of change
5:   $P = \{P_r : P_r = \frac{N_r + Z^*(t_i)_r - t_i Z'^*(t_i)_r}{1 - Z'^*(t_i)_r}$ if $r \notin \mathcal{B}(t_i)$ and $P_r < t_i$, and $P_r = -\infty$ otherwise$\}$
6:   $t_{i+1} = \max_r P_r$ ▷ Update next critical value from (9)
7:   $\mathcal{B}(t_{i+1}) = \mathcal{B}(t_i) \cup \arg\max_r P_r$ ▷ Update list of fixed variables
8:   $Z^*(t_{i+1}) = Z^*(t_i) + (t_{i+1} - t_i)Z'^*(t_i)$ ▷ Update solution path
9:   $L'(t_{i+1}) = L'(t_i) + (t_{i+1} - t_i)L''(t_i)$ ▷ Update objective's derivative
10:  if $L'(t_{i+1}) < -1$ then break ▷ If already past $t^*$, exit the for-loop
11: end for
12: $t^* = t_i - \frac{1 + L'(t_i)}{L''(t_i)}$ ▷ Find the solution to (8)
13: $Z^* = Z^*(t_i) + (t^* - t_i)Z'^*(t_i)$ ▷ Find the minimizer of (5) for $t = t^*$
14: return $M^*(Z^*, T)$, $F^*(Z^*, T)$ ▷ Return the solution to (3) using (6), which takes $O(q)$ steps

3.3 Computing the rates

We now explain how the procedure ComputeRates works. Recall that it takes as input the tree $T$ and the set $\mathcal{B}(t_i)$, and it outputs the derivatives $Z'^*(t_i)$ and $L''(t_i)$. A simple calculation shows that if we compute $Z'^*(t_i)$, then computing $L''(t_i)$ is easy.

Lemma 3.12. $L''(t_i)$ can be computed from $Z'^*(t_i)$ in $O(q)$ steps and with $O(1)$ memory as
$$L''(t_i) = \sum_{j \in \mathcal{V}} (Z'^*(t_i)_j - Z'^*(t_i)_{\bar{j}})^2, \qquad (11)$$
where $\bar{j}$ is the closest ancestor of $j$ in $T$.
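Equation (11) is immediate to implement once the rates are known; a short sketch, with a dict-based tree representation of our choosing:

```python
def l_second_derivative(rates, parent):
    """Eq. (11): L''(t_i) = sum_j (Z'*(t_i)_j - Z'*(t_i)_jbar)^2.
    rates[j] is Z'*(t_i)_j (equal to 1 for fixed j); parent[root] is None,
    and the root's ancestor variable has rate 0 by the Z_rbar = 0 convention."""
    total = 0.0
    for j, r in rates.items():
        p = parent[j]
        r_bar = rates[p] if p is not None else 0.0
        total += (r - r_bar) ** 2
    return total
```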
We note that if $j \in \mathcal{B}(t_i)$, then, by definition, $Z'^*(t_i)_j = 1$. Assume now that $j \in \mathcal{V}\setminus\mathcal{B}(t_i)$. Lemma 3.6 implies that we can find $Z'^*(t_i)_j$ by solving the $(T_w = (r_w, \mathcal{V}_w, \mathcal{E}_w), \mathcal{B}_w)$-problem as a function of $t$, where $w$ is such that $j \in \mathcal{V}_w$. In a nutshell, ComputeRates is a recursive procedure that solves all the $(T_w, \mathcal{B}_w)$-problems as explicit functions of $t$. It suffices to explain how ComputeRates solves one particular $(T_w, \mathcal{B}_w)$-problem explicitly. To simplify notation, in the rest of this section we refer to $T_w$ and $\mathcal{B}_w$ as $T$ and $\mathcal{B}$. Recall that, by the definition of $T = T_w$ and $\mathcal{B} = \mathcal{B}_w$, if $i \in \mathcal{B}$, then $i$ must be a leaf of $T$ or the root of $T$.

Definition 3.13. Consider a rooted tree $T = (r, \mathcal{V}, \mathcal{E})$, a set $\mathcal{B} \subseteq \mathcal{V}$, and variables $\{Z_j : j \in \mathcal{V}\}$ such that, if $j \in \mathcal{B}$, then $Z_j = \alpha_j t + \beta_j$ for some $\alpha$ and $\beta$. We define the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem as
$$\min_{\{Z_j : j \in \mathcal{V}\setminus\mathcal{B}\}} \ \frac{1}{2}\sum_{j \in \mathcal{V}} \gamma_j (Z_j - Z_{\bar{j}})^2, \qquad (12)$$
where $\gamma > 0$, $\bar{j}$ is the closest ancestor of $j$ in $T$, and $Z_{\bar{j}} = 0$ if $j = r$. We refer to the solution of the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem as $\{Z^*_j : j \in \mathcal{V}\setminus\mathcal{B}\}$, which uniquely minimizes (12).

Note that (12) is unconstrained, and its solution, $Z^*$, is a linear function of $t$. Furthermore, the $(T_w, \mathcal{B}_w)$-problem is the same as the $(T_w, \mathcal{B}_w, \mathbf{1}, -N, \mathbf{1})$-problem, which is what we actually solve. We now state three useful lemmas that help us solve any $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem efficiently.

Lemma 3.14 (Pruning). Consider the solution $Z^*$ of the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem. Let $j \in \mathcal{V}\setminus\mathcal{B}$ be a leaf. Then $Z^*_j = Z^*_{\bar{j}}$. Furthermore, consider the $(\tilde{T}, \mathcal{B}, \alpha, \beta, \gamma)$-problem, where $\tilde{T} = (\tilde{r}, \tilde{\mathcal{V}}, \tilde{\mathcal{E}})$ is equal to $T$ with node $j$ pruned, and let its solution be $\tilde{Z}^*$. We have that $Z^*_i = \tilde{Z}^*_i$ for all $i \in \tilde{\mathcal{V}}$.

Lemma 3.15 (Star problem). Let $T$ be a star such that node 1 is the center node, node 2 is the root, and nodes $3, \ldots, r$ are leaves. Let $\mathcal{B} = \{2, \ldots, r\}$. Let $Z^*_1 \in \mathbb{R}$ be the solution of the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem. Then,
$$Z^*_1 = \left(\frac{\gamma_1\alpha_2 + \sum_{i=3}^{r}\gamma_i\alpha_i}{\gamma_1 + \sum_{i=3}^{r}\gamma_i}\right)t + \left(\frac{\gamma_1\beta_2 + \sum_{i=3}^{r}\gamma_i\beta_i}{\gamma_1 + \sum_{i=3}^{r}\gamma_i}\right). \qquad (13)$$
In particular, to find the rate at which $Z^*_1$ changes with $t$, we only need to know $\alpha$ and $\gamma$, not $\beta$.

Lemma 3.16 (Reduction). Consider the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem such that $j, \bar{j} \in \mathcal{V}\setminus\mathcal{B}$, and such that $j$ has all its children $1, \ldots, r \in \mathcal{B}$. Let $Z^*$ be its solution. Consider the $(\tilde{T}, \tilde{\mathcal{B}}, \tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$-problem, where $\tilde{T} = (\tilde{r}, \tilde{\mathcal{V}}, \tilde{\mathcal{E}})$ is equal to $T$ with nodes $1, \ldots, r$ removed, and $\tilde{\mathcal{B}} = (\mathcal{B}\setminus\{1, \ldots, r\}) \cup \{j\}$. Let $\tilde{Z}^*$ be its solution. If $(\tilde{\alpha}_i, \tilde{\beta}_i, \tilde{\gamma}_i) = (\alpha_i, \beta_i, \gamma_i)$ for all $i \in \mathcal{B}\setminus\{1, \ldots, r\}$, and $\tilde{\alpha}_j$, $\tilde{\beta}_j$ and $\tilde{\gamma}_j$ satisfy
$$\tilde{\alpha}_j = \frac{\sum_{i=1}^{r}\gamma_i\alpha_i}{\sum_{i=1}^{r}\gamma_i}, \qquad \tilde{\beta}_j = \frac{\sum_{i=1}^{r}\gamma_i\beta_i}{\sum_{i=1}^{r}\gamma_i}, \qquad \tilde{\gamma}_j = \left((\gamma_j)^{-1} + \Big(\sum_{i=1}^{r}\gamma_i\Big)^{-1}\right)^{-1}, \qquad (14)$$
then $Z^*_i = \tilde{Z}^*_i$ for all $i \in \mathcal{V}\setminus\{j\}$.

Lemma 3.15 and Lemma 3.16 allow us to recursively solve any $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem, and obtain for it an explicit solution of the form $Z^*(t) = c_1 t + c_2$, where $c_1$ and $c_2$ do not depend on $t$. Assume that we have already repeatedly pruned $T$, by repeatedly invoking Lemma 3.14, so that every leaf $i$ satisfies $i \in \mathcal{B}$. See Figure 2-(left). First, we find some node $j \in \mathcal{V}\setminus\mathcal{B}$ such that all of its children are in $\mathcal{B}$. If $\bar{j} \in \mathcal{B}$, then $\bar{j}$ must be the root, and the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem must be a star problem as in Lemma 3.15; we can use Lemma 3.15 to solve it explicitly. Alternatively, if $\bar{j} \in \mathcal{V}\setminus\mathcal{B}$, then we invoke Lemma 3.16, and reduce the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem to a strictly smaller $(\tilde{T}, \tilde{\mathcal{B}}, \tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$-problem, which we solve recursively. Once the $(\tilde{T}, \tilde{\mathcal{B}}, \tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$-problem is solved, we have an explicit expression $Z^*_i(t) = c_{1i}t + c_{2i}$ for all $i \in \mathcal{V}\setminus\{j\}$, and, in particular, we have an explicit expression $Z^*_{\bar{j}}(t) = c_{1\bar{j}}t + c_{2\bar{j}}$. The only free variable of the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem still to be determined is $Z^*_j(t)$ (the two primitive steps just described are sketched in code below).
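Lemmas 3.15 and 3.16 translate directly into code. Below is a sketch of the two primitive steps (function names and data layout are ours):

```python
def star_rate(gamma1, alpha2, leaves):
    """Rate (coefficient of t) of Z*_1 in Lemma 3.15.
    leaves: list of (alpha_i, gamma_i) for the fixed leaves i = 3..r."""
    num = gamma1 * alpha2 + sum(g * a for a, g in leaves)
    den = gamma1 + sum(g for _, g in leaves)
    return num / den

def reduce_children(gamma_j, children):
    """Lemma 3.16: fold the fixed children of j into j itself.
    children: list of (alpha_i, beta_i, gamma_i). Returns (alpha~, beta~, gamma~)."""
    gsum = sum(g for _, _, g in children)
    alpha = sum(g * a for a, _, g in children) / gsum
    beta = sum(g * b for _, b, g in children) / gsum
    gamma = 1.0 / (1.0 / gamma_j + 1.0 / gsum)
    return alpha, beta, gamma
```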
To compute $Z^*_j(t)$, we apply Lemma 3.15 to the $(\overset{\approx}{T}, \overset{\approx}{\mathcal{B}}, \overset{\approx}{\alpha}, \overset{\approx}{\beta}, \overset{\approx}{\gamma})$-problem, where $\overset{\approx}{T}$ is a star around $j$, $\overset{\approx}{\gamma}$ are the components of $\gamma$ corresponding to nodes that are neighbors of $j$, $\overset{\approx}{\alpha}$ and $\overset{\approx}{\beta}$ are such that $Z^*_i(t) = \overset{\approx}{\alpha}_i t + \overset{\approx}{\beta}_i$ for all $i$ that are neighbors of $j$ and for which $Z^*_i(t)$ is already known, and $\overset{\approx}{\mathcal{B}}$ consists of all the neighbors of $j$. See Figure 2-(right).

The algorithm is compactly described in Alg. 2. It is slightly different from the description above, for computational efficiency: instead of computing $Z^*(t) = c_1 t + c_2$, we keep track only of $c_1$, the rates, and we do so only for the variables in $\mathcal{V}\setminus\mathcal{B}$. The algorithm assumes that the input $T$ has been pruned. The inputs $T$, $\mathcal{B}$, $\alpha$, $\beta$ and $\gamma$ are passed by reference; they are modified inside the algorithm but, once ComputeRatesRec finishes, they keep their initial values. Throughout the execution of the algorithm, $T = (r, \mathcal{V}, \mathcal{E})$ encodes (a) a doubly-linked list where each node points to its children and its parent, which we call $T.a$, and (b) a doubly-linked list of all the nodes in $\mathcal{V}\setminus\mathcal{B}$ for which all the children are in $\mathcal{B}$, which we call $T.b$. In the proof of Theorem 3.17, we show how this representation of $T$ can be kept updated with little computational effort. The input $Y$, also passed by reference, starts as an uninitialized array of size $q$, in which we will store the rates $\{Z'^*_i\}$. At the end, we read $Z'^*$ from $Y$.

Algorithm 2 ComputeRatesRec (input: $T = (r, \mathcal{V}, \mathcal{E})$, $\mathcal{B}$, $\alpha$, $\beta$, $\gamma$, $Y$)
1: Let $j$ be some node in $\mathcal{V}\setminus\mathcal{B}$ whose children are all in $\mathcal{B}$ ▷ We read $j$ from $T.b$ in $O(1)$ steps
2: if $\bar{j} \in \mathcal{B}$ then
3:   Set $Y_j$ using (13) in Lemma 3.15 ▷ If $\bar{j} \in \mathcal{B}$, the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem is star-shaped
4: else
5:   Modify $(T, \mathcal{B}, \alpha, \beta, \gamma)$ to match $(\tilde{T}, \tilde{\mathcal{B}}, \tilde{\alpha}, \tilde{\beta}, \tilde{\gamma})$ defined by Lemma 3.16 for $j$ in line 1
6:   ComputeRatesRec($T$, $\mathcal{B}$, $\alpha$, $\beta$, $\gamma$, $Y$) ▷ Sets $Y_i = Z'^*_i$ for all $i \in \mathcal{V}\setminus\mathcal{B}$; $Y_j$ is not yet defined
7:   Restore $(T, \mathcal{B}, \alpha, \beta, \gamma)$ to its original value before line 5 was executed
8:   Compute $Y_j$ from (13), using for $\alpha, \beta, \gamma$ in (13) the values $\overset{\approx}{\alpha}, \overset{\approx}{\beta}, \overset{\approx}{\gamma}$, where $\overset{\approx}{\gamma}$ are the components of $\gamma$ corresponding to the neighbors of $j$ in $T$, and $\overset{\approx}{\alpha}$ and $\overset{\approx}{\beta}$ are such that $Z^*_i = \overset{\approx}{\alpha}_i t + \overset{\approx}{\beta}_i$ for all neighbors $i$ of $j$ in $T$ for which $Z^*_i$ is already known
9: end if

Let $q$ be the number of nodes of the tree $T$ that is the input at the zeroth level of the recursion.

Theorem 3.17. Algorithm 2 correctly computes $Z'^*$ for the $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem, and it can be implemented to finish in $O(q)$ steps and to use $O(q)$ memory.

The correctness of Algorithm 2 follows from Lemmas 3.14–3.16 and the explanation above. Its complexity is bounded by the total time spent on the two lines that actually compute rates during the whole recursion, lines 3 and 8; all the other lines only transform the input problem into a more computable form. Lines 3 and 8 solve a star-shaped problem with at most degree($j$) variables, which, by inspecting (13), we know can be done in $O(\mathrm{degree}(j))$ steps. Since $j$ never takes the same value twice, the overall complexity is bounded by $O(\sum_{j \in \mathcal{V}} \mathrm{degree}(j)) = O(|\mathcal{E}|) = O(q)$. The $O(q)$ bound on memory is possible because all the variables that occupy significant memory are passed by reference and are modified in place during the whole recursive procedure.

The following lemma shows how the recursive procedure that solves a $(T, \mathcal{B}, \alpha, \beta, \gamma)$-problem can be used to compute the rates of change of $Z^*(t)$ for a $(T, \mathcal{B})$-problem.
Its proof follows from the observation that the rate of change of the solution with $t$ in (13) in Lemma 3.15 depends only on $\alpha$ and $\gamma$, and that the reduction equations (14) in Lemma 3.16 never make $\tilde{\alpha}$ or $\tilde{\gamma}$ depend on $\beta$.

Lemma 3.18 (Rates only). Let $Z^*(t)$ be the solution of the $(T, \mathcal{B})$-problem, and let $\tilde{Z}^*(t)$ be the solution of the $(T, \mathcal{B}, \mathbf{1}, \mathbf{0}, \mathbf{1})$-problem. Then $Z^*(t) = c_1 t + c_2$ and $\tilde{Z}^*(t) = c_1 t$ for some $c_1$ and $c_2$.

We finally present the full algorithm to compute $Z'^*(t_i)$ and $L''(t_i)$ from $T$ and $\mathcal{B}(t_i)$.

Algorithm 3 ComputeRates (input: $T$ and $\mathcal{B}(t_i)$; output: $Z'^*(t_i)$ and $L''(t_i)$)
1: $Z'^*(t_i)_j = 1$ for all $j \in \mathcal{B}(t_i)$
2: for each $(T_w, \mathcal{B}_w)$-problem induced by $\mathcal{B}(t_i)$ do
3:   Set $\tilde{T}_w$ to be $T_w$ pruned of all leaf nodes not in $\mathcal{B}_w$, by repeatedly invoking Lemma 3.14
4:   ComputeRatesRec($\tilde{T}_w$, $\mathcal{B}_w$, $\mathbf{1}$, $\mathbf{0}$, $\mathbf{1}$, $\tilde{Z}'^*$)
5:   $Z'^*(t_i)_j = \tilde{Z}'^*_j$ for all $j \in \mathcal{V}_w\setminus\mathcal{B}$
6: end for
7: Compute $L''(t_i)$ from $Z'^*(t_i)$ using Lemma 3.12
8: return $Z'^*(t_i)$ and $L''(t_i)$

The following theorem follows almost directly from Theorem 3.17.

Theorem 3.19. Algorithm 3 correctly computes $Z'^*(t_i)$ and $L''(t_i)$ in $O(q)$ steps, and uses $O(q)$ memory.

4 Reducing computation time in practice

Our numerical results are obtained with an improved version of Algorithm 1; we now explain the main idea behind it. The bulk of the complexity of Alg. 1 comes from line 4, i.e., computing the rates $\{Z'^*(t_i)_j\}_{j \in \mathcal{V}\setminus\mathcal{B}(t_i)}$ from $\mathcal{B}(t_i)$ and $T$. For a fixed $j \in \mathcal{V}\setminus\mathcal{B}(t_i)$, and by Lemma 3.6, the rate $Z'^*(t_i)_j$ depends only on one particular $(T_w = (r_w, \mathcal{V}_w, \mathcal{E}_w), \mathcal{B}_w)$-problem induced by $\mathcal{B}(t_i)$. If exactly this same problem is induced by both $\mathcal{B}(t_i)$ and $\mathcal{B}(t_{i+1})$, which happens if the new nodes that become fixed in line 7 of round $i$ of Algorithm 1 are not in $\mathcal{V}_w\setminus\mathcal{B}_w$, then we can save computation time in round $i+1$ by not recomputing any rates for $j \in \mathcal{V}_w\setminus\mathcal{B}_w$, and using for $Z'^*(t_{i+1})_j$ the value $Z'^*(t_i)_j$. Furthermore, if only a few $\{Z'^*_j\}$ change from round $i$ to round $i+1$, then we can also save computation time in computing $L''$ from $Z'^*$: we subtract from the sum on the right-hand side of equation (11) the terms that depend on the previous, now changed, rates, and add the new terms that depend on the new rates. Finally, if the rate $Z'^*_j$ does not change, then the value of $t < t_i$ at which $Z^*_j(t)$ might intersect $t - N_j$, and become fixed, given by $P_j$ in line 5, also does not change. (Note that this is not obvious from the formula for $P_r$ in line 5.) If not all $\{P_r\}$ change from round $i$ to round $i+1$, we can also save computation time in computing the maximum, and the maximizers, in line 7 by storing $P$ in a maximum binary heap, and executing lines 5 and 7 by extracting all the maximal values from the top of the heap. Each time any $P_r$ changes, the heap needs to be updated (a minimal sketch of this bookkeeping is given below).

5 Numerical results

Our algorithm solves (3) exactly in a finite number of steps and is of interest in itself. Still, it is interesting to compare it with other algorithms. In particular, we compare the convergence rate of our algorithm with two popular methods that solve (3) iteratively: the Alternating Direction Method of Multipliers (ADMM) and the Projected Gradient Descent (PGD) method. We apply ADMM and PGD to both the primal formulation (3) and the dual formulation (4). We implemented all the algorithms in C, and derived closed-form updates for ADMM and PGD; see Appendix F. We ran all algorithms on a single core of an Intel Core i5 2.5 GHz processor.
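The maximum binary heap with lazy invalidation mentioned in Section 4 can be sketched as follows; this is our own minimal structure, not the paper's C implementation.

```python
import heapq

class LazyMaxHeap:
    """Max-heap over (P_r, r) with lazy updates: stale entries are skipped
    on pop, so updating one P_r does not require rescanning all nodes."""
    def __init__(self):
        self.heap, self.current = [], {}

    def update(self, r, P_r):
        self.current[r] = P_r
        heapq.heappush(self.heap, (-P_r, r))   # negate for max-heap behavior

    def pop_max(self):
        while self.heap:
            negP, r = heapq.heappop(self.heap)
            if self.current.get(r) == -negP:   # entry still fresh?
                del self.current[r]
                return -negP, r
        return None
```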
Figure 5-(left) compares the different algorithms for a random Galton–Watson input tree, truncated to have $q = 1000$ nodes, with the number of children of each node chosen uniformly within a fixed range, and for a random input $\hat{F} \in \mathbb{R}^q$ with entries chosen i.i.d. from a normal distribution. We observed the same behavior for every random instance tested. We gave ADMM and PGD an advantage by optimally tuning them for each individual problem instance tested. In contrast, our algorithm requires no tuning, which is a clear advantage. At each iteration, the error is measured as $\max_j\{|M_j - M^*_j|\}$. Our algorithm is about 74× faster than its closest competitor (PGD-primal) at $10^{-3}$ accuracy. In Figure 5-(right), we show the average run time of our algorithm versus the problem size, for random inputs of the same form. The scaling of our algorithm is (almost) linear, and much faster than our $O(q^2 p)$, $p = 1$, theoretical bound.

[Figure 3: (Left) Time that the different algorithms (ADMM primal, ADMM dual, PGD primal, PGD dual, and our algorithm, at 0.0027 seconds) take to solve our problem for trees with 1000 nodes. (Right) Average run time of our algorithm for problems of different sizes. For each size, each point is averaged over 500 random problem instances.]

Finally, we use our algorithm to exactly solve (2) by computing $C(U)$ for all trees and a given input $\hat{F}$. Exactly solving (2) is very important for biology, since several relevant phylogenetic tree inference problems deal with trees of small sizes. We use an NVIDIA Quadro P5000 GPU to compute the cost of all possible trees with $q$ nodes in parallel, and return the tree with the smallest cost. Basically, we assign to each GPU virtual thread a unique tree, using Prüfer sequences [31], and then have each thread compute the cost for its tree. For $q = 10$, we compute the cost of all 100 million trees in about 8 minutes, and for $q = 11$, we compute the cost of all 2.5 billion trees in slightly less than 2.5 hours. Code to solve (3) using Alg. 1, with the improvements of Section 4, can be found in [32]. More results obtained with our algorithm can be found in Appendix G.

6 Conclusions and future work

We propose a new direct algorithm that, for a given tree, computes how close the matrix of frequencies of mutations per position is to satisfying the perfect phylogeny model. Our algorithm is faster than the state-of-the-art iterative methods for the same problem, even when they are optimally tuned. We use the proposed algorithm to build a GPU-based phylogenetic tree inference engine for trees of relevant biological sizes. Unlike existing algorithms, which only heuristically search a small part of the space of possible trees, our algorithm performs a complete search over all trees relatively fast. It is an open problem to find direct algorithms that provably solve our problem in linear time on average, or even for a worst-case input.

Acknowledgement: This work was partially funded by NIH/1U01AI124302, NSF/IIS-1741129, and an NVIDIA hardware grant.
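The per-thread tree enumeration uses Prüfer sequences, whose decoding into a labeled tree is standard; a minimal sketch:

```python
import heapq

def prufer_to_edges(seq):
    """Decode a Prufer sequence over labels 1..q (len(seq) == q - 2)
    into the edge list of the corresponding labeled tree."""
    q = len(seq) + 2
    degree = [1] * (q + 1)                 # index 0 unused
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(1, q + 1) if degree[v] == 1]
    heapq.heapify(leaves)                  # smallest-label leaf first
    edges = []
    for v in seq:
        edges.append((heapq.heappop(leaves), v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))  # last two nodes
    return edges

print(prufer_to_edges([4, 4]))  # -> [(1, 4), (2, 4), (3, 4)], the star on 4 nodes
```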
1. What is the focus of the paper, and what is the author's contribution to the field? 2. How does the paper's approach differ from previous works in the field? 3. What are the strengths and weaknesses of the proposed method? 4. Are there any concerns or limitations regarding the method's applicability or effectiveness? 5. Does the paper provide sufficient evidence or case studies to support its claims?
Review
Review The paper studies an algorithm for efficient projection onto the perfect phylogeny model. The major contribution of the paper is to use Moreau's decomposition for the proximal operator, together with a tree reduction scheme, to solve the projection problem. I find it hard to justify the novelty of the paper, as the formulation is not too technical. The contribution of the paper should come from the application perspective. However, the paper did not provide sufficient numerical studies, in particular no real-data case studies. I thus do not suggest acceptance. After reading the rebuttal, I still do not see enough new contribution from this paper. I think this should be a rejection.
NIPS
Title Efficient Projection onto the Perfect Phylogeny Model Abstract Several algorithms build on the perfect phylogeny model to infer evolutionary trees. This problem is particularly hard when evolutionary trees are inferred from the fraction of genomes that have mutations in different positions, across different samples. Existing algorithms might do extensive searches over the space of possible trees. At the center of these algorithms is a projection problem that assigns a fitness cost to phylogenetic trees. In order to perform a wide search over the space of the trees, it is critical to solve this projection problem fast. In this paper, we use Moreau’s decomposition for proximal operators, and a tree reduction scheme, to develop a new algorithm to compute this projection. Our algorithm terminates with an exact solution in a finite number of steps, and is extremely fast. In particular, it can search over all evolutionary trees with fewer than 11 nodes, a size relevant for several biological problems (more than 2 billion trees) in about 2 hours. 1 Introduction The perfect phylogeny model (PPM) [1, 2] is used in biology to study evolving populations. It assumes that the same position in the genome never mutates twice, hence mutations only accumulate. Consider a population of organisms evolving under the PPM. The evolution process can be described by a labeled rooted tree, T = (r,V, E), where r is the root, i.e., the common oldest ancestor, the nodes V are the mutants, and the edges E are mutations acquired between older and younger mutants. Since each position in the genome only mutates once, we can associate with each node v 6= r, a unique mutated position, the mutation associated to the ancestral edge of v. By convention, let us associate with the root r, a null mutation that is shared by all mutants in T . This allows us to refer to each node v ∈ V as both a mutation in a position in the genome (the mutation associated to the ancestral edge of v), and a mutant (the mutant with the fewest mutations that has a mutation v). Hence, without loss of generality, V = {1, . . . , q}, E = {2, . . . , q}, where q is the length of the genome, and r = 1 refers to both the oldest common ancestor and the null mutation shared by all. One very important use of the PPM is to infer how mutants of a common ancestor evolve [3–8]. A common type of data used for this purpose is the frequency, with which different positions in the genome mutate across multiple samples, obtained, e.g., from whole-genome or targeted deep sequencing [9]. Consider a sample s, one of p samples, obtained at a given stage of the evolution process. This sample has many mutants, some with the same genome, some with different genomes. Let F ∈ Rq×p be such that Fv,s is the fraction of genomes in s with a mutation in position v in the genome. Let M ∈ Rq×p be such that Mv,s is the fraction of mutant v in s. By definition, the columns of M must sum to 1. Let U ∈ {0, 1}q×q be such that Uv,v′ = 1, if and only if mutant v is an ancestor of mutant v′, or if v = v′. We denote the set of all possible U matrices, M matrices and labeled rooted trees T , by U ,M and T , respectively. See Figure 1 for an illustration. The PPM implies F = UM. (1) Our work contributes to the problem of inferring clonal evolution from mutation-frequencies: How do we infer M and U from F? Note that finding U is the same as finding T (see Lemma B.2). ∗Bei Jia is currently with Element AI. 
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Although model (1) is simple, simultaneously inferring M and U from F can be hard [3]. One popular inference approach is the following optimization problem over U , M and F , min U∈U C(U), (2) C(U) = min M,F∈Rq×p ‖F̂ − F‖ subject to F = UM,M ≥ 0,M>1 = 1, (3) where ‖ · ‖ is the Frobenius norm, and F̂ ∈ Rq×p contains the measured fractions of mutations per position in each sample, which are known and fixed. In a nutshell, we want to project our measurement F̂ onto the space of valid PPM models. Problem (2) is a hard mixed integer-continuous optimization problem. To approximately solve it, we might find a finite subset {Ui} ⊂ U , that corresponds to a “heuristically good” subset of trees, {Ti} ⊂ T , and, for each fixed matrix Ui, solve (3), which is a convex optimization problem. We can then return Tx, where x ∈ arg mini C(Ui). Fortunately, in many biological applications, e.g., [3–8], the reconstructed evolutionary tree involves a very small number of mutated positions, e.g., q ≤ 11. In practice, a position v might be an effective position that is a cluster of multiple real positions in the genome. For a small q, we can compute C(U) for many trees, and hence approximate M , U , and get uncertainty measures for these estimates. This is important, since data is generally scarce and noisy. Contributions: (i) we propose a new algorithm to compute C(U) exactly in O(q2p) steps, the first non-iterative algorithm to compute C(U); (ii) we compare its performance against state-of-the-art iterative algorithms, and observe a much faster convergence. In particular, our algorithm scales much faster thanO(q2p) in practice; (iii) we implement our algorithm on a GPU, and show that it computes the cost of all (more than 2 billion) trees with ≤ 11 nodes, in ≤ 2.5 hours. 2 Related work A problem related to ours, but somewhat different, is that of inferring a phylogenetic tree from single-cell whole-genome sequencing data. Given all the mutations in a set of mutants, the problem is to arrange the mutants in a phylogenetic tree, [10, 11]. Mathematically, this corresponds to inferring T from partial or corrupted observation of U . If the PPM is assumed, and all the mutations of all the mutants are correctly observed, this problem can be solved in linear time, e.g., [12]. In general, this problem is equivalent to finding a minimum cost Steiner tree on a hypercube, whose nodes and edges represent mutants and mutations respectively, a problem known to be hard [13]. We mention a few works on clonality inference, based on the PPM, that try to infer both U and M from F̂ . No previous work solves problem (2) exactly in general, even for trees of size q ≤ 11. Using our fast projection algorithm, we can solve (2) exactly by searching over all trees, if q ≤ 11. Ref. [3] (AncesTree) reduces the space of possible trees T to subtrees of a heuristically constructed DAG. The authors use the element-wise 1-norm in (3) and, after introducing more variables to linearize the product UM , reduce this search to solving a MILP, which they try to solve via branch and bound. Ref. [6] (CITUP) searches the space of all unlabeled trees, and, for each unlabeled tree, tries to solve an MIQP, again using branch and bound techniques, which finds a labeling for the unlabeled tree, and simultaneously minimizes the distance ‖F̂ − F‖. Refs. [5] and [14] (PhyloSub/PhyloWGS), use a stochastic model to sample trees that are likely to explain the data. 
Their model is based on [15], which generates hierarchical clusterings of objects, and from which lineage trees can be formed. A score is then computed for these trees, and the highest scoring trees are returned. Procedure (2) can be justified as MLE if we assume the stochastic model F̂ = F + N (0, Iσ2), where F , U and M satisfy the PPM model, and N (0, Iσ2) represents additive, component-wise, Gaussian measurement noise, with zero mean and covariance Iσ2. Alternative stochastic models can be assumed, e.g., as M − U−1F̂ = N (0, Iσ2), where M is non-negative and its columns must sum to one, andN (0, Iσ2) is as described before. For this model, and for each matrix U , the cost C(U) is a projection of U−1F̂ onto the probability simplex M ≥ 0,M>1 = 1. Several fast algorithms are known for this problem, e.g., [16–20] and references therein. In a pq-dimensional space, the exact projection onto the simplex can be done in O(qp) steps. Our algorithm is the first to solve (3) exactly in a finite number of steps. We can also use iterative methods to solve (3). One advantage of our algorithm is that it has no tuning parameters, and requires no effort to check for convergence for a given accuracy. Since iterative algorithms can converge very fast, we numerically compare the speed of our algorithm with different implementations of the Alternating Direction Method of Multipliers (ADMM) [21], which, if properly tuned, has a convergence rate that equals the fastest convergence rate among all first order methods [22] under some convexity assumptions, and is known to produce good solutions for several other kinds of problems, even for non-convex ones [23–29]. 3 Main results We now state our main results, and explain the ideas behind their proofs. Detailed proofs can be found in the Appendix. Our algorithm computes C(U) and minimizers of (3), resp. M∗ and F ∗, by solving an equivalent problem. Without loss of generality, we assume that p = 1, since, by squaring the objective in (3), it decomposes into p independent problems. Sometimes we denote C(U) by C(T ), since given U , we can specify T , and vice-versa. Let ī be the closest ancestor of i in T = (r,V, E). Let ∆i be the set of all the ancestors of i in T , plus i. Let ∂i be the set of children of i in T . Theorem 3.1 (Equivalent formulation). Problem (3) can be solved by solving min t∈R t+ L(t), (4) L(t) = min Z∈Rq 1 2 ∑ i∈V (Zi − Zī)2 subject to Zi ≤ t−Ni ,∀i ∈ V, (5) where Ni = ∑ j∈∆i F̂j , and, by convention, Zī = 0 for i = r. In particular, if t ∗ minimizes (4), Z∗ minimizes (5) for t = t∗, and M∗, F ∗ minimize (3), then M∗i = −Z∗i + Z∗ī + ∑ r∈∂i (Z∗r − Z∗r̄ ) and F ∗i = −Z∗i + Z∗ī ,∀i ∈ V. (6) Furthermore, t∗, M∗, F ∗ and Z∗ are unique. Theorem 3.1 comes from a dual form of (3), which we build using Moreau’s decomposition [30]. 3.1 Useful observations Let Z∗(t) be the unique minimizer of (5) for some t. The main ideas behind our algorithm depend on a few simple properties of the paths {Z∗(t)} and {L′(t)}, the derivative of L(t) with respect to t. Note that L is also a function of N , as defined in Theorem 3.1, which depends on the input data F̂ . Lemma 3.2. L(t) is a convex function of t and N . Furthermore, L(t) is continuous in t and N , and L′(t) is non-decreasing with t. Lemma 3.3. Z∗(t) is continuous as a function of t and N . Z∗(t∗) is continuous as a function of N . Let B(t) = {i : Z∗(t)i = t−Ni}, i.e., the set of components of the solution at the boundary of (5). Variables in B are called fixed, and we call other variables free. Free (resp. 
fixed) nodes are nodes corresponding to free (resp. fixed) variables. Lemma 3.4. B(t) is piecewise constant in t. Consider dividing the tree T = (r,V, E) into subtrees, each with at least one free node, using B(t) as separation points. See Figure 4 in Appendix A for an illustration. Each i ∈ B(t) belongs to at most degree(i) different subtrees, where degree(i) is the degree of node i, and each i ∈ V\B(t) belongs exactly to one subtree. Let T1, . . . , Tk be the set of resulting (rooted, labeled) trees. Let Tw = (rw,Vw, Ew), where the root rw is the closest node in Tw to r. We call {Tw} the subtrees induced by B(t). We define Bw(t) = B(t) ∩ Vw, and, when it does not create ambiguity, we drop the index t in Bw(t). Note that different Bw(t)’s might have elements in common. Also note that, by construction, if i ∈ Bw, then i must be a leaf of Tw, or the root of Tw. Definition 3.5. The (Tw,Bw)-problem is the optimization problem over |Vw\B(t)| variables min {Zj :j∈Vw\B(t)} (1/2) ∑ j∈Vw (Zj − Zj̄)2, (7) where j̄ is the parent of j in Tw, Zj̄ = 0 if j = rw, and Zj = Z∗(t)j = t−Nj if j ∈ Bw(t). Lemma 3.6. Problem (5) decomposes into k independent problems. In particular, the minimizers {Z∗(t)j : j ∈ Vw\B(t)} are determined as the solution of the (Tw,Bw)-problem. If j ∈ Vw, then Z∗(t)j = c1t+ c2 , where c1 and c2 depend on j but not on t, and 0 ≤ c1 ≤ 1. Lemma 3.7. Z∗(t) and L′(t) are piecewise linear and continuous in t. Furthermore, Z∗(t) and L′(t) change linear segments if and only if B(t) changes. Lemma 3.8. If t ≤ t′, then B(t′) ⊆ B(t). In particular, B(t) changes at most q times with t. Lemma 3.9. Z∗(t) and L′(t) have less than q + 1 different linear segments. 3.2 The Algorithm In a nutshell, our algorithm computes the solution path {Z∗(t)}t∈R and the derivative {L′(t)}t∈R. From these paths, it finds the unique t∗, at which d(t+ L(t))/dt = 0|t=t∗ ⇔ L′(t∗) = −1. (8) It then evaluates the path Z∗(t) at t = t∗, and uses this value, along with (6), to find M∗ and F ∗, the unique minimizers of (3). Finally, we compute C(T ) = ‖F̂ − F ∗‖. We know that {Z∗(t)} and {L′(t)} are continuous piecewise linear, with a finite number of different linear segments (Lemmas 3.7, 3.8 and 3.9). Hence, to describe {Z∗(t)} and {L′(t)}, we only need to evaluate them at the critical values, t1 > t2 > · · · > tk, at which Z∗(t) and L′(t) change linear segments. We will later use Lemma 3.7 as a criteria to find the critical values. Namely, {ti} are the values of t at which, as t decreases, new variables become fixed, and B(t) changes. Note that variables never become free once fixed, by Lemma 3.8, which also implies that k ≤ q. The values {Z∗(ti)} and {L′(ti)} are computed sequentially as follows. If t is very large, the constraint in (5) is not active, and Z∗(t) = L(t) = L′(t) = 0. Lemma 3.7 tells us that, as we decrease t, the first critical value is the largest t for which this constraint becomes active, and at which B(t) changes for the first time. Hence, if i = 1, we have ti = maxs{Ns}, Z∗(ti) = L′(ti) = 0, and B(ti) = arg maxs{Ns}. Once we have ti, we compute the rates Z ′∗(ti) and L′′(ti) from B(ti) and T , as explained in Section 3.3. Since the paths are piecewise linear, derivatives are not defined at critical points. Hence, here, and throughout this section, these derivatives are taken from the left, i.e., Z ′∗(ti) = limt↑ti(Z ∗(ti)− Z∗(t))/(ti − t) and L′′(ti) = limt↑ti(L′(ti)− L′(t))/(ti − t). 
3.2 The Algorithm

In a nutshell, our algorithm computes the solution path {Z∗(t)}_{t∈R} and the derivative {L′(t)}_{t∈R}. From these paths, it finds the unique t∗ at which
\[
\frac{d}{dt}\big(t + L(t)\big)\Big|_{t = t^*} = 0 \;\Longleftrightarrow\; L'(t^*) = -1. \tag{8}
\]
It then evaluates the path Z∗(t) at t = t∗, and uses this value, along with (6), to find M∗ and F∗, the unique minimizers of (3). Finally, we compute C(T) = ‖F̂ − F∗‖.

We know that {Z∗(t)} and {L′(t)} are continuous piecewise linear, with a finite number of different linear segments (Lemmas 3.7, 3.8 and 3.9). Hence, to describe {Z∗(t)} and {L′(t)}, we only need to evaluate them at the critical values, t1 > t2 > ... > tk, at which Z∗(t) and L′(t) change linear segments. We will later use Lemma 3.7 as a criterion to find the critical values. Namely, {ti} are the values of t at which, as t decreases, new variables become fixed and B(t) changes. Note that variables never become free once fixed, by Lemma 3.8, which also implies that k ≤ q.

The values {Z∗(ti)} and {L′(ti)} are computed sequentially as follows. If t is very large, the constraint in (5) is not active, and Z∗(t) = L(t) = L′(t) = 0. Lemma 3.7 tells us that, as we decrease t, the first critical value is the largest t for which this constraint becomes active, and at which B(t) changes for the first time. Hence, for i = 1, we have t_i = max_s{N_s}, Z∗(t_i) = L′(t_i) = 0, and B(t_i) = arg max_s{N_s}. Once we have t_i, we compute the rates Z′∗(t_i) and L′′(t_i) from B(t_i) and T, as explained in Section 3.3. Since the paths are piecewise linear, derivatives are not defined at the critical points. Hence, here and throughout this section, these derivatives are taken from the left, i.e., Z′∗(t_i) = lim_{t↑t_i} (Z∗(t_i) − Z∗(t))/(t_i − t) and L′′(t_i) = lim_{t↑t_i} (L′(t_i) − L′(t))/(t_i − t).

Since Z′∗(t) and L′′(t) are constant for t ∈ (t_{i+1}, t_i], for t in this interval we have
\[
Z^*(t) = Z^*(t_i) + (t - t_i)\,Z'^*(t_i), \qquad L'(t) = L'(t_i) + (t - t_i)\,L''(t_i), \tag{9}
\]
and the next critical value, t_{i+1}, is the largest t < t_i for which new variables become fixed and B(t) changes. The value t_{i+1} is found by solving for t < t_i in
\[
Z^*(t)_r = Z^*(t_i)_r + (t - t_i)\,Z'^*(t_i)_r = t - N_r, \tag{10}
\]
and keeping the largest solution among all r ∉ B. Once t_{i+1} is computed, we update B with the new variables that became fixed, and we obtain Z∗(t_{i+1}) and L′(t_{i+1}) from (9). The process then repeats.

By Lemma 3.2, L′ never increases. Hence, we stop this process (a) as soon as L′(t_i) < −1, or (b) when all the variables are in B, and thus there are no more critical values to compute. If (a), let t_k be the last critical value with L′(t_k) > −1, and if (b), let t_k be the last computed critical value. We use t_k and (9) to compute t∗, at which L′(t∗) = −1, and also Z∗(t∗). From Z∗(t∗) we then compute M∗, F∗, and C(U) = ‖F̂ − F∗‖.

The algorithm is shown compactly in Alg. 1. Its inputs are F̂ and T, represented, e.g., using a linked-nodes data structure. Its outputs are the minimizers of (3). It makes use of a procedure ComputeRates, which we explain later. This procedure terminates in O(q) steps and uses O(q) memory. Line 5 comes from solving (10) for t. In line 14, the symbols M∗(Z∗, T) and F∗(Z∗, T) remind us that M∗ and F∗ are computed from Z∗ and T using (6). The correctness of Alg. 1 follows from the lemmas in Section 3.1 and the explanation above. In particular, since there are at most q + 1 different linear regimes, the bound q in the for-loop does not prevent us from finding any critical value. Its time complexity is O(q²), since each line completes in O(q) steps and is executed at most q times.

Theorem 3.10 (Complexity). Algorithm 1 finishes in O(q²) steps, and requires O(q) memory.

Theorem 3.11 (Correctness). Algorithm 1 outputs the solution to (3).

Algorithm 1 Projection onto the PPM (input: T and F̂; output: M∗ and F∗)
1: N_i = Σ_{j∈Δi} F̂_j for all i ∈ V ▷ takes O(q) steps using a DFS; see proof of Theorem 3.10
2: i = 1, t_i = max_r{N_r}, B(t_i) = arg max_r{N_r}, Z∗(t_i) = 0, L′(t_i) = 0 ▷ initialize
3: for i = 1 to q do
4:   (Z′∗(t_i), L′′(t_i)) = ComputeRates(B(t_i), T) ▷ update rates of change
5:   P_r = (N_r + Z∗(t_i)_r − t_i Z′∗(t_i)_r)/(1 − Z′∗(t_i)_r) if r ∉ B(t_i) and this value is < t_i; P_r = −∞ otherwise
6:   t_{i+1} = max_r P_r ▷ update next critical value from (9)
7:   B(t_{i+1}) = B(t_i) ∪ arg max_r P_r ▷ update list of fixed variables
8:   Z∗(t_{i+1}) = Z∗(t_i) + (t_{i+1} − t_i) Z′∗(t_i) ▷ update solution path
9:   L′(t_{i+1}) = L′(t_i) + (t_{i+1} − t_i) L′′(t_i) ▷ update objective's derivative
10:  if L′(t_{i+1}) < −1 then break ▷ if already past t∗, exit the for-loop
11: end for
12: t∗ = t_i − (1 + L′(t_i))/L′′(t_i) ▷ find the solution to (8)
13: Z∗ = Z∗(t_i) + (t∗ − t_i) Z′∗(t_i) ▷ find the minimizer of (5) for t = t∗
14: return M∗(Z∗, T), F∗(Z∗, T) ▷ return the solution to (3) using (6), which takes O(q) steps
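The path-following loop of Algorithm 1 is short when written out explicitly. The following Python sketch (ours, not the paper's C implementation) traces the path for p = 1; it reuses compute_N from the earlier sketch, and takes the rate computation as a callback compute_rates(B, parent) returning (Z′∗, L′′), for which a brute-force version is sketched at the end of Section 3.3. Tie handling by exact float equality is a simplification.

```python
import numpy as np

def project_onto_ppm(parent, F_hat, compute_rates):
    """Path-following sketch of Alg. 1 (p = 1)."""
    q = len(parent)
    N = compute_N(parent, F_hat)               # from the earlier sketch
    t = N.max()
    B = {r for r in range(q) if N[r] == t}     # line 2: first critical value
    Z, Lp = np.zeros(q), 0.0
    for _ in range(q):
        Zp, Lpp = compute_rates(B, parent)     # line 4
        P = np.full(q, -np.inf)                # line 5, from solving (10)
        for r in range(q):
            if r not in B and Zp[r] < 1.0:
                P[r] = (N[r] + Z[r] - t * Zp[r]) / (1.0 - Zp[r])
        t_next = P.max()
        if t_next == -np.inf:                  # case (b): all variables fixed
            break
        Lp_next = Lp + (t_next - t) * Lpp      # line 9
        if Lp_next < -1.0:                     # line 10: we passed t*
            break
        Z = Z + (t_next - t) * Zp              # line 8
        B = B | {r for r in range(q) if P[r] == t_next}   # line 7
        t, Lp = t_next, Lp_next
    Zp, Lpp = compute_rates(B, parent)         # rates at the last good t_i
    t_star = t - (1.0 + Lp) / Lpp              # line 12: solves L'(t*) = -1
    Z_star = Z + (t_star - t) * Zp             # line 13
    return recover_MF(parent, Z_star) + (t_star,)

def recover_MF(parent, Z):
    """Recover M* and F* from Z* via (6)."""
    q = len(parent)
    Zbar = np.array([Z[parent[i]] if parent[i] != -1 else 0.0
                     for i in range(q)])
    F = Zbar - Z                               # F*_i = -Z*_i + Z*_ibar
    M = F.copy()
    for i in range(q):
        if parent[i] != -1:
            M[parent[i]] += Z[i] - Z[parent[i]]   # child terms of (6)
    return M, F
```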
3.3 Computing the rates

We now explain how the procedure ComputeRates works. Recall that it takes as input the tree T and the set B(t_i), and it outputs the derivatives Z′∗(t_i) and L′′(t_i). A simple calculation shows that if we compute Z′∗(t_i), then computing L′′(t_i) is easy.

Lemma 3.12. L′′(t_i) can be computed from Z′∗(t_i) in O(q) steps and with O(1) memory as
\[
L''(t_i) = \sum_{j \in \mathcal{V}} \big(Z'^*(t_i)_j - Z'^*(t_i)_{\bar{j}}\big)^2, \tag{11}
\]
where j̄ is the closest ancestor of j in T.

We note that if j ∈ B(t_i), then, by definition, Z′∗(t_i)_j = 1. Assume now that j ∈ V\B(t_i). Lemma 3.6 implies we can find Z′∗(t_i)_j by solving the (T_w = (r_w, V_w, E_w), B_w)-problem as a function of t, where w is such that j ∈ V_w. In a nutshell, ComputeRates is a recursive procedure that solves all the (T_w, B_w)-problems as explicit functions of t. It suffices to explain how ComputeRates solves one particular (T_w, B_w)-problem explicitly. To simplify notation, in the rest of this section we refer to T_w and B_w as T and B. Recall that, by the definition of T = T_w and B = B_w, if i ∈ B, then i must be a leaf of T or the root of T.

Definition 3.13. Consider a rooted tree T = (r, V, E), a set B ⊆ V, and variables {Z_j : j ∈ V} such that, if j ∈ B, then Z_j = α_j t + β_j for some α and β. We define the (T, B, α, β, γ)-problem as
\[
\min_{\{Z_j : j \in \mathcal{V} \setminus \mathcal{B}\}} \; \frac{1}{2} \sum_{j \in \mathcal{V}} \gamma_j (Z_j - Z_{\bar{j}})^2, \tag{12}
\]
where γ > 0, j̄ is the closest ancestor of j in T, and Z_j̄ = 0 if j = r. We refer to the solution of the (T, B, α, β, γ)-problem as {Z∗_j : j ∈ V\B}, which uniquely minimizes (12).

Note that (12) is unconstrained, and its solution, Z∗, is a linear function of t. Furthermore, the (T_w, B_w)-problem is the same as the (T_w, B_w, 1, −N, 1)-problem, which is what we actually solve. We now state three useful lemmas that help us solve any (T, B, α, β, γ)-problem efficiently.

Lemma 3.14 (Pruning). Consider the solution Z∗ of the (T, B, α, β, γ)-problem. Let j ∈ V\B be a leaf. Then Z∗_j = Z∗_j̄. Furthermore, consider the (T̃, B, α, β, γ)-problem, where T̃ = (r̃, Ṽ, Ẽ) is equal to T with node j pruned, and let its solution be Z̃∗. We have that Z∗_i = Z̃∗_i for all i ∈ Ṽ.

Lemma 3.15 (Star problem). Let T be a star such that node 1 is the center node, node 2 is the root, and nodes 3, ..., r are leaves. Let B = {2, ..., r}. Let Z∗_1 ∈ R be the solution of the (T, B, α, β, γ)-problem. Then,
\[
Z^*_1 = \left( \frac{\gamma_1 \alpha_2 + \sum_{i=3}^{r} \gamma_i \alpha_i}{\gamma_1 + \sum_{i=3}^{r} \gamma_i} \right) t + \left( \frac{\gamma_1 \beta_2 + \sum_{i=3}^{r} \gamma_i \beta_i}{\gamma_1 + \sum_{i=3}^{r} \gamma_i} \right). \tag{13}
\]
In particular, to find the rate at which Z∗_1 changes with t, we only need to know α and γ, not β.

Lemma 3.16 (Reduction). Consider the (T, B, α, β, γ)-problem such that j, j̄ ∈ V\B, and such that j has all its children 1, ..., r ∈ B. Let Z∗ be its solution. Consider the (T̃, B̃, α̃, β̃, γ̃)-problem, where T̃ = (r̃, Ṽ, Ẽ) is equal to T with nodes 1, ..., r removed, and B̃ = (B\{1, ..., r}) ∪ {j}. Let Z̃∗ be its solution. If (α̃_i, β̃_i, γ̃_i) = (α_i, β_i, γ_i) for all i ∈ B\{1, ..., r}, and α̃_j, β̃_j and γ̃_j satisfy
\[
\tilde{\alpha}_j = \frac{\sum_{i=1}^{r} \gamma_i \alpha_i}{\sum_{i=1}^{r} \gamma_i}, \qquad \tilde{\beta}_j = \frac{\sum_{i=1}^{r} \gamma_i \beta_i}{\sum_{i=1}^{r} \gamma_i}, \qquad \tilde{\gamma}_j = \left( (\gamma_j)^{-1} + \Big( \sum_{i=1}^{r} \gamma_i \Big)^{-1} \right)^{-1}, \tag{14}
\]
then Z∗_i = Z̃∗_i for all i ∈ V\{j}.

Lemma 3.15 and Lemma 3.16 allow us to recursively solve any (T, B, α, β, γ)-problem, and obtain for it an explicit solution of the form Z∗(t) = c1 t + c2, where c1 and c2 do not depend on t.
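Equations (13) and (14) are explicit, so they transcribe directly into code. The sketch below (names ours) returns the rate and offset of the star center for Lemma 3.15, and the collapsed boundary data (α̃_j, β̃_j, γ̃_j) for Lemma 3.16; note the series-combination form of γ̃_j, analogous to springs in series. ComputeRatesRec, described next, alternates these two operations (plus the pruning of Lemma 3.14) until only a star remains.

```python
def star_center(alpha, beta, gamma):
    """Lemma 3.15, eq. (13): Z*_1(t) = rate*t + offset for the star center.
    alpha/beta/gamma are dicts keyed by node; key 1 is the center, key 2
    the root, remaining keys are the fixed leaves."""
    leaves = [i for i in gamma if i not in (1, 2)]
    denom = gamma[1] + sum(gamma[i] for i in leaves)
    rate = (gamma[1] * alpha[2] + sum(gamma[i] * alpha[i] for i in leaves)) / denom
    offset = (gamma[1] * beta[2] + sum(gamma[i] * beta[i] for i in leaves)) / denom
    return rate, offset

def reduce_children(alphas, betas, gammas, gamma_j):
    """Lemma 3.16, eq. (14): collapse the fixed children 1..r of a free node
    j into one fixed boundary condition on j itself. alphas/betas/gammas are
    the children's values; gamma_j is j's own edge weight."""
    s = sum(gammas)
    alpha_j = sum(g * a for g, a in zip(gammas, alphas)) / s
    beta_j = sum(g * b for g, b in zip(gammas, betas)) / s
    gamma_j_new = 1.0 / (1.0 / gamma_j + 1.0 / s)   # "springs in series"
    return alpha_j, beta_j, gamma_j_new
```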
Assume that we have already repeatedly pruned T, by repeatedly invoking Lemma 3.14, so that every leaf i satisfies i ∈ B. See Figure 2-(left). First, we find some node j ∈ V\B such that all of its children are in B. If j̄ ∈ B, then j̄ must be the root, and the (T, B, α, β, γ)-problem must be a star problem as in Lemma 3.15; we can use Lemma 3.15 to solve it explicitly. Alternatively, if j̄ ∉ B, then we invoke Lemma 3.16 and reduce the (T, B, α, β, γ)-problem to a strictly smaller (T̃, B̃, α̃, β̃, γ̃)-problem, which we solve recursively. Once the (T̃, B̃, α̃, β̃, γ̃)-problem is solved, we have an explicit expression Z∗_i(t) = c1_i t + c2_i for all i ∈ V\{j} and, in particular, an explicit expression Z∗_j̄(t) = c1_j̄ t + c2_j̄. The only free variable of the (T, B, α, β, γ)-problem left to determine is Z∗_j(t). To compute Z∗_j(t), we apply Lemma 3.15 to the (≈T, ≈B, ≈α, ≈β, ≈γ)-problem, where ≈T is a star around j, ≈γ are the components of γ corresponding to the neighbors of j, ≈α and ≈β are such that Z∗_i(t) = ≈α_i t + ≈β_i for all neighbors i of j for which Z∗_i(t) is already known, and ≈B is the set of all neighbors of j. See Figure 2-(right).

The algorithm is compactly described in Alg. 2. It is slightly different from the description above, for computational efficiency. Instead of computing Z∗(t) = c1 t + c2, we keep track only of c1, the rates, and we do so only for the variables in V\B. The algorithm assumes that the input T has been pruned. The inputs T, B, α, β and γ are passed by reference; they are modified inside the algorithm but, once ComputeRatesRec finishes, they retain their initial values. Throughout the execution of the algorithm, T = (r, V, E) encodes (a) a doubly-linked list where each node points to its children and its parent, which we call T.a, and (b) a doubly-linked list of all the nodes in V\B for which all the children are in B, which we call T.b. In the proof of Theorem 3.17, we show how this representation of T can be kept updated with little computational effort. The input Y, also passed by reference, starts as an uninitialized array of size q, in which we store the rates {Z′∗_i}. At the end, we read Z′∗ from Y.

Algorithm 2 ComputeRatesRec (input: T = (r, V, E), B, α, β, γ, Y)
1: Let j be some node in V\B whose children are all in B ▷ we read j from T.b in O(1) steps
2: if j̄ ∈ B then
3:   Set Y_j using (13) in Lemma 3.15 ▷ if j̄ ∈ B, the (T, B, α, β, γ)-problem is star-shaped
4: else
5:   Modify (T, B, α, β, γ) to match the (T̃, B̃, α̃, β̃, γ̃) defined by Lemma 3.16 for the j in line 1
6:   ComputeRatesRec(T, B, α, β, γ, Y) ▷ sets Y_i = Z′∗_i for all i ∈ V\B; Y_j is not yet defined
7:   Restore (T, B, α, β, γ) to its original value before line 5 was executed
8:   Compute Y_j from (13), using for α, β, γ in (13) the values ≈α, ≈β, ≈γ, where ≈γ are the components of γ corresponding to the neighbors of j in T, and ≈α and ≈β are such that Z∗_i = ≈α_i t + ≈β_i for all neighbors i of j in T for which Z∗_i is already known
9: end if

Let q be the number of nodes of the tree T that is the input at the zeroth level of the recursion.

Theorem 3.17. Algorithm 2 correctly computes Z′∗ for the (T, B, α, β, γ)-problem, and it can be implemented to finish in O(q) steps and to use O(q) memory.

The correctness of Algorithm 2 follows from Lemmas 3.14–3.16 and the explanation above. Its complexity is bounded by the total time spent on the two lines that actually compute rates during the whole recursion, lines 3 and 8. All the other lines only transform the input problem into a more computable form. Lines 3 and 8 solve a star-shaped problem with at most degree(j) variables, which, by inspecting (13), we know can be done in O(degree(j)) steps. Since j never takes the same value twice, the overall complexity is bounded by O(Σ_{j∈V} degree(j)) = O(|E|) = O(q). The O(q) bound on memory is possible because all the variables that occupy significant memory are passed by reference, and are modified in place during the whole recursive procedure.
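As a sanity check for ComputeRatesRec, the rates can also be obtained by brute force. For a fixed B, the objective of (5) restricted to the free variables is an unconstrained quadratic, so its optimality conditions are linear in Z and t; differentiating them with respect to t shows that the rates Z′∗ satisfy the same "harmonic" equations, with every fixed variable's rate equal to 1 (since Z_i = t − N_i on B) and the root's phantom parent at rate 0. The dense O(q³) NumPy sketch below is ours and is meant for testing, not a substitute for the paper's O(q) recursion; it also evaluates (11) and packages both as the compute_rates callback used in the Alg. 1 sketch.

```python
import numpy as np

def rates_dense(parent, B):
    """Brute-force Z'* for a given fixed set B via a dense linear solve."""
    q = len(parent)
    children = [[] for _ in range(q)]
    for i in range(q):
        if parent[i] != -1:
            children[parent[i]].append(i)
    free = [i for i in range(q) if i not in B]
    idx = {j: k for k, j in enumerate(free)}
    A = np.zeros((len(free), len(free)))
    b = np.zeros(len(free))
    for j in free:
        k = idx[j]
        A[k, k] = 1 + len(children[j])   # parent edge (or phantom root edge) + child edges
        neighbors = children[j] + ([parent[j]] if parent[j] != -1 else [])
        for n in neighbors:
            if n in idx:
                A[k, idx[n]] -= 1.0      # free neighbor: unknown rate
            else:
                b[k] += 1.0              # fixed neighbor: its rate is 1
    rates = np.ones(q)                   # rates on B equal 1 by definition
    if free:
        rates[free] = np.linalg.solve(A, b)
    return rates

def Lpp_from_rates(parent, rates):
    """Lemma 3.12, eq. (11), with the convention Z'_rbar = 0 at the root."""
    return sum((rates[i] - (rates[parent[i]] if parent[i] != -1 else 0.0)) ** 2
               for i in range(len(parent)))

def compute_rates(B, parent):            # callback for the Alg. 1 sketch
    r = rates_dense(parent, B)
    return r, Lpp_from_rates(parent, r)
```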
The following lemma shows how the recursive procedure that solves a (T, B, α, β, γ)-problem can be used to compute the rates of change of Z∗(t) for a (T, B)-problem. Its proof follows from the observation that the rate of change of the solution with t in (13) in Lemma 3.15 depends only on α and γ, and that the reduction equations (14) in Lemma 3.16 never make α̃ or γ̃ depend on β.

Lemma 3.18 (Rates only). Let Z∗(t) be the solution of the (T, B)-problem, and let Z̃∗(t) be the solution of the (T, B, 1, 0, 1)-problem. Then Z∗(t) = c1 t + c2 and Z̃∗(t) = c1 t, for some c1 and c2.

We finally present the full algorithm to compute Z′∗(t_i) and L′′(t_i) from T and B(t_i).

Algorithm 3 ComputeRates (input: T and B(t_i); output: Z′∗(t_i) and L′′(t_i))
1: Z′∗(t_i)_j = 1 for all j ∈ B(t_i)
2: for each (T_w, B_w)-problem induced by B(t_i) do
3:   Set T̃_w to be T_w pruned of all leaf nodes not in B_w, by repeatedly invoking Lemma 3.14 ▷ each pruned leaf's rate equals its parent's rate
4:   ComputeRatesRec(T̃_w, B_w, 1, 0, 1, Z̃′∗)
5:   Z′∗(t_i)_j = Z̃′∗_j for all j ∈ V_w\B
6: end for
7: Compute L′′(t_i) from Z′∗(t_i) using Lemma 3.12
8: return Z′∗(t_i) and L′′(t_i)

The following theorem follows almost directly from Theorem 3.17.

Theorem 3.19. Alg. 3 correctly computes Z′∗(t_i) and L′′(t_i) in O(q) steps, and uses O(q) memory.

4 Reducing computation time in practice

Our numerical results are obtained with an improved version of Algorithm 1, whose main ideas we now explain. The bulk of the complexity of Alg. 1 comes from line 4, i.e., computing the rates {Z′∗(t_i)_j}_{j∈V\B(t_i)} from B(t_i) and T. For a fixed j ∈ V\B(t_i), by Lemma 3.6, the rate Z′∗(t_i)_j depends only on one particular (T_w = (r_w, V_w, E_w), B_w)-problem induced by B(t_i). If exactly the same problem is induced by both B(t_i) and B(t_{i+1}), which happens if the new nodes that become fixed in line 7 of round i of Algorithm 1 are not in V_w\B_w, then we can save computation time in round i + 1 by not recomputing any rates for j ∈ V_w\B_w, and reusing for Z′∗(t_{i+1})_j the value Z′∗(t_i)_j.

Furthermore, if only a few {Z′∗_j} change from round i to round i + 1, then we can also save computation time in computing L′′ from Z′∗, by subtracting from the sum on the right-hand side of equation (11) the terms that depend on the previous, now changed, rates, and adding the new terms that depend on the new rates.

Finally, if the rate Z′∗_j does not change, then the value of t < t_i at which Z∗_j(t) might intersect t − N_j, and become fixed, given by P_j in line 5, also does not change. (Note that this is not obvious from the formula for P_r in line 5.) If not all {P_r} change from round i to round i + 1, we can also save computation time in computing the maximum, and the maximizers, in line 7, by storing P in a maximum binary heap and executing lines 5 and 7 by extracting all the maximal values from the top of the heap. Each time any P_r changes, the heap needs to be updated.
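The heap bookkeeping in the last paragraph can be realized with the standard lazy-invalidation pattern: since Python's heapq is a min-heap, push (−P_r, r) and discard stale entries at pop time. A minimal sketch (class name ours, not from the paper's C code):

```python
import heapq

class LazyMaxHeap:
    """Max-heap over the values {P_r} with O(log q) updates; stale entries
    are discarded lazily when popping."""
    def __init__(self):
        self.heap = []        # entries (-P_r, r); heapq is a min-heap
        self.current = {}     # r -> latest P_r (older pushed copies are stale)

    def update(self, r, P_r):
        self.current[r] = P_r
        heapq.heappush(self.heap, (-P_r, r))

    def pop_max(self):
        while self.heap:
            negP, r = heapq.heappop(self.heap)
            if self.current.get(r) == -negP:   # still the live value for r?
                del self.current[r]            # r becomes fixed and leaves P
                return r, -negP
        return None
```

With this structure, lines 5-7 of Alg. 1 become: call update only for the P_r whose rates changed in this round, then call pop_max repeatedly to collect all the maximizers.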
5 Numerical results

Our algorithm solves (3) exactly in a finite number of steps and is of interest in itself. Still, it is interesting to compare it with other algorithms. In particular, we compare the convergence rate of our algorithm with two popular methods that solve (3) iteratively: the Alternating Direction Method of Multipliers (ADMM) and the Projected Gradient Descent (PGD) method. We apply ADMM and PGD to both the primal formulation (3) and the dual formulation (4). We implemented all the algorithms in C, and derived closed-form updates for ADMM and PGD; see Appendix F. We ran all algorithms on a single core of an Intel Core i5 2.5GHz processor.

Figure 3-(left) compares the different algorithms for a random Galton–Watson input tree truncated to have q = 1000 nodes, with the number of children of each node chosen uniformly within a fixed range, and for a random input F̂ ∈ R^q with entries chosen i.i.d. from a normal distribution. We observe the same behavior on all the random instances we tested. We gave ADMM and PGD an advantage by optimally tuning them for each individual problem instance tested. In contrast, our algorithm requires no tuning, which is a clear advantage. At each iteration, the error is measured as max_j{|M_j − M∗_j|}. Our algorithm is about 74× faster than its closest competitor (PGD-primal) at 10⁻³ accuracy. In Figure 3-(right), we show the average run time of our algorithm versus the problem size, for random inputs of the same form. The scaling of our algorithm is (almost) linear, and much faster than our O(q²p), p = 1, theoretical bound.

[Figure 3 here: (left) error versus time in seconds for ADMM primal, ADMM dual, PGD primal, PGD dual, and our algorithm, which terminates at 0.0027 seconds; (right) average run time versus problem size.]
Figure 3: (Left) Time that the different algorithms take to solve our problem for trees with 1000 nodes. (Right) Average run time of our algorithm for problems of different sizes. For each size, each point is averaged over 500 random problem instances.

Finally, we use our algorithm to solve (2) exactly by computing C(U) for all trees for a given input F̂. Exactly solving (2) is very important for biology, since several relevant phylogenetic tree inference problems deal with trees of small sizes. We use an NVIDIA Quadro P5000 GPU to compute the cost of all possible trees with q nodes in parallel, and return the tree with the smallest cost. Basically, we assign to each GPU virtual thread a unique tree, using Prüfer sequences [31], and then have each thread compute the cost for its tree. For q = 10, we compute the cost of all 100 million trees in about 8 minutes, and for q = 11, we compute the cost of all 2.5 billion trees in slightly less than 2.5 hours. Code to solve (3) using Alg. 1, with the improvements of Section 4, can be found in [32]. More results using our algorithm can be found in Appendix G.

6 Conclusions and future work

We propose a new direct algorithm that, for a given tree, computes how close the matrix of frequencies of mutations per position is to satisfying the perfect phylogeny model. Our algorithm is faster than state-of-the-art iterative methods for the same problem, even when they are optimally tuned. We use the proposed algorithm to build a GPU-based phylogenetic tree inference engine for trees of relevant biological sizes. Unlike existing algorithms, which only heuristically search a small part of the space of possible trees, our algorithm performs a complete search over all trees relatively fast. It remains an open problem to find direct algorithms that provably solve our problem in linear time on average, or even for a worst-case input.

Acknowledgement: This work was partially funded by NIH/1U01AI124302, NSF/IIS-1741129, and an NVIDIA hardware grant.
1. What is the main contribution of the paper in terms of efficiency in phylogenetic model reconstruction?
2. How does the proposed algorithm efficiently search the entire subspace around a proposed model?
3. Can you explain the time complexity of the algorithm and how it compares to other search algorithms implemented?
4. Are there any limitations or trade-offs in the efficiency achieved by the proposed algorithm?
5. How do the experimental results support the claims of efficiency improvement over other search algorithms?
Review
This paper provides a very efficient algorithm to reconstruct phylogenetic models (trees). The key insights are efficient ways to search the entire subspaces around a proposed model, so that a full search can be made efficient overall. The algorithm runs in O(q^2 p) steps for a genome of length q and p samples. Experiments show that the algorithm runs significantly faster than the other search algorithms the authors implemented, a 74× speed-up! The paper is well written, although fairly dense. Re REBUTTAL: Thank you for the time you took to provide a detailed rebuttal. However, I wish you had ONLY addressed the main concerns from Reviewer #2; I felt lost in the point of most of the comments.
1. What is the primary contribution of the paper?
2. What is the significance of the optimization subproblem addressed in the paper?
3. How does the proposed algorithm compare to existing iterative solvers in terms of efficiency and optimality?
4. What are the practical techniques proposed for evaluating the algorithm efficiently?
5. How do the experimental results support the effectiveness of the proposed algorithm?
6. Who might be interested in this paper beyond those studying phylogenetic trees?
Review
This paper attempts to solve an optimization subproblem which arises in a matrix factorization used to study phylogenetic evolutionary trees. In particular, the overall problem is: given a matrix F, find the matrices U and M such that F = UM, where U is a binary matrix and M is a positive matrix whose columns sum to 1. The subproblem is, given U, to find the least-squares solution F, M such that ||\hat{F} - F|| is minimized subject to the constraint that F = UM, where \hat{F} is the measured matrix. The paper proposes an algorithm which can solve this subproblem directly in polynomial time (as compared with iterative solvers), and the major contribution is the algorithm for computing this, along with theoretical proofs of its optimality. Beyond this analysis, practical techniques are proposed for evaluating the algorithm efficiently. The algorithm is then applied and compared against current state-of-the-art iterative solvers, specifically ADMM and PGD. Experimental results show that the proposed algorithm is dramatically faster in practice when compared against existing iterative solvers. Overall the paper seems well written and the results significant. I suspect the paper will be of interest beyond those studying phylogenetic trees, as the main contribution relates to the optimization subproblem. UPDATE: Having read the author feedback I remain largely positive on the paper. In addition, the additional experimental comparison against other methods that the authors included seems to be a very positive and compelling addition to the paper.
NIPS
Title Efficient Projection onto the Perfect Phylogeny Model Abstract Several algorithms build on the perfect phylogeny model to infer evolutionary trees. This problem is particularly hard when evolutionary trees are inferred from the fraction of genomes that have mutations in different positions, across different samples. Existing algorithms might do extensive searches over the space of possible trees. At the center of these algorithms is a projection problem that assigns a fitness cost to phylogenetic trees. In order to perform a wide search over the space of the trees, it is critical to solve this projection problem fast. In this paper, we use Moreau’s decomposition for proximal operators, and a tree reduction scheme, to develop a new algorithm to compute this projection. Our algorithm terminates with an exact solution in a finite number of steps, and is extremely fast. In particular, it can search over all evolutionary trees with fewer than 11 nodes, a size relevant for several biological problems (more than 2 billion trees) in about 2 hours. 1 Introduction The perfect phylogeny model (PPM) [1, 2] is used in biology to study evolving populations. It assumes that the same position in the genome never mutates twice, hence mutations only accumulate. Consider a population of organisms evolving under the PPM. The evolution process can be described by a labeled rooted tree, T = (r,V, E), where r is the root, i.e., the common oldest ancestor, the nodes V are the mutants, and the edges E are mutations acquired between older and younger mutants. Since each position in the genome only mutates once, we can associate with each node v 6= r, a unique mutated position, the mutation associated to the ancestral edge of v. By convention, let us associate with the root r, a null mutation that is shared by all mutants in T . This allows us to refer to each node v ∈ V as both a mutation in a position in the genome (the mutation associated to the ancestral edge of v), and a mutant (the mutant with the fewest mutations that has a mutation v). Hence, without loss of generality, V = {1, . . . , q}, E = {2, . . . , q}, where q is the length of the genome, and r = 1 refers to both the oldest common ancestor and the null mutation shared by all. One very important use of the PPM is to infer how mutants of a common ancestor evolve [3–8]. A common type of data used for this purpose is the frequency, with which different positions in the genome mutate across multiple samples, obtained, e.g., from whole-genome or targeted deep sequencing [9]. Consider a sample s, one of p samples, obtained at a given stage of the evolution process. This sample has many mutants, some with the same genome, some with different genomes. Let F ∈ Rq×p be such that Fv,s is the fraction of genomes in s with a mutation in position v in the genome. Let M ∈ Rq×p be such that Mv,s is the fraction of mutant v in s. By definition, the columns of M must sum to 1. Let U ∈ {0, 1}q×q be such that Uv,v′ = 1, if and only if mutant v is an ancestor of mutant v′, or if v = v′. We denote the set of all possible U matrices, M matrices and labeled rooted trees T , by U ,M and T , respectively. See Figure 1 for an illustration. The PPM implies F = UM. (1) Our work contributes to the problem of inferring clonal evolution from mutation-frequencies: How do we infer M and U from F? Note that finding U is the same as finding T (see Lemma B.2). ∗Bei Jia is currently with Element AI. 
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Although model (1) is simple, simultaneously inferring M and U from F can be hard [3]. One popular inference approach is the following optimization problem over U , M and F , min U∈U C(U), (2) C(U) = min M,F∈Rq×p ‖F̂ − F‖ subject to F = UM,M ≥ 0,M>1 = 1, (3) where ‖ · ‖ is the Frobenius norm, and F̂ ∈ Rq×p contains the measured fractions of mutations per position in each sample, which are known and fixed. In a nutshell, we want to project our measurement F̂ onto the space of valid PPM models. Problem (2) is a hard mixed integer-continuous optimization problem. To approximately solve it, we might find a finite subset {Ui} ⊂ U , that corresponds to a “heuristically good” subset of trees, {Ti} ⊂ T , and, for each fixed matrix Ui, solve (3), which is a convex optimization problem. We can then return Tx, where x ∈ arg mini C(Ui). Fortunately, in many biological applications, e.g., [3–8], the reconstructed evolutionary tree involves a very small number of mutated positions, e.g., q ≤ 11. In practice, a position v might be an effective position that is a cluster of multiple real positions in the genome. For a small q, we can compute C(U) for many trees, and hence approximate M , U , and get uncertainty measures for these estimates. This is important, since data is generally scarce and noisy. Contributions: (i) we propose a new algorithm to compute C(U) exactly in O(q2p) steps, the first non-iterative algorithm to compute C(U); (ii) we compare its performance against state-of-the-art iterative algorithms, and observe a much faster convergence. In particular, our algorithm scales much faster thanO(q2p) in practice; (iii) we implement our algorithm on a GPU, and show that it computes the cost of all (more than 2 billion) trees with ≤ 11 nodes, in ≤ 2.5 hours. 2 Related work A problem related to ours, but somewhat different, is that of inferring a phylogenetic tree from single-cell whole-genome sequencing data. Given all the mutations in a set of mutants, the problem is to arrange the mutants in a phylogenetic tree, [10, 11]. Mathematically, this corresponds to inferring T from partial or corrupted observation of U . If the PPM is assumed, and all the mutations of all the mutants are correctly observed, this problem can be solved in linear time, e.g., [12]. In general, this problem is equivalent to finding a minimum cost Steiner tree on a hypercube, whose nodes and edges represent mutants and mutations respectively, a problem known to be hard [13]. We mention a few works on clonality inference, based on the PPM, that try to infer both U and M from F̂ . No previous work solves problem (2) exactly in general, even for trees of size q ≤ 11. Using our fast projection algorithm, we can solve (2) exactly by searching over all trees, if q ≤ 11. Ref. [3] (AncesTree) reduces the space of possible trees T to subtrees of a heuristically constructed DAG. The authors use the element-wise 1-norm in (3) and, after introducing more variables to linearize the product UM , reduce this search to solving a MILP, which they try to solve via branch and bound. Ref. [6] (CITUP) searches the space of all unlabeled trees, and, for each unlabeled tree, tries to solve an MIQP, again using branch and bound techniques, which finds a labeling for the unlabeled tree, and simultaneously minimizes the distance ‖F̂ − F‖. Refs. [5] and [14] (PhyloSub/PhyloWGS), use a stochastic model to sample trees that are likely to explain the data. 
Their model is based on [15], which generates hierarchical clusterings of objects, and from which lineage trees can be formed. A score is then computed for these trees, and the highest scoring trees are returned. Procedure (2) can be justified as MLE if we assume the stochastic model F̂ = F + N(0, σ^2 I), where F, U and M satisfy the PPM model, and N(0, σ^2 I) represents additive, component-wise, Gaussian measurement noise, with zero mean and covariance σ^2 I. Alternative stochastic models can be assumed, e.g., M − U^{-1}F̂ = N(0, σ^2 I), where M is non-negative and its columns must sum to one, and N(0, σ^2 I) is as described before. For this model, and for each matrix U, the cost C(U) is a projection of U^{-1}F̂ onto the probability simplex M ≥ 0, M^⊤1 = 1. Several fast algorithms are known for this problem, e.g., [16–20] and references therein. In a pq-dimensional space, the exact projection onto the simplex can be done in O(qp) steps. Our algorithm is the first to solve (3) exactly in a finite number of steps. We can also use iterative methods to solve (3). One advantage of our algorithm is that it has no tuning parameters, and requires no effort to check for convergence for a given accuracy. Since iterative algorithms can converge very fast, we numerically compare the speed of our algorithm with different implementations of the Alternating Direction Method of Multipliers (ADMM) [21], which, if properly tuned, has a convergence rate that equals the fastest convergence rate among all first order methods [22] under some convexity assumptions, and is known to produce good solutions for several other kinds of problems, even for non-convex ones [23–29]. 3 Main results We now state our main results, and explain the ideas behind their proofs. Detailed proofs can be found in the Appendix. Our algorithm computes C(U) and minimizers of (3), resp. M^* and F^*, by solving an equivalent problem. Without loss of generality, we assume that p = 1, since, by squaring the objective in (3), it decomposes into p independent problems. Sometimes we denote C(U) by C(T), since given U, we can specify T, and vice-versa. Let ī be the closest ancestor of i in T = (r, V, E). Let ∆_i be the set of all the ancestors of i in T, plus i. Let ∂_i be the set of children of i in T. Theorem 3.1 (Equivalent formulation). Problem (3) can be solved by solving
min_{t∈R} t + L(t), (4)
L(t) = min_{Z∈R^q} (1/2) Σ_{i∈V} (Z_i − Z_ī)^2 subject to Z_i ≤ t − N_i, ∀i ∈ V, (5)
where N_i = Σ_{j∈∆_i} F̂_j, and, by convention, Z_ī = 0 for i = r. In particular, if t^* minimizes (4), Z^* minimizes (5) for t = t^*, and M^*, F^* minimize (3), then
M^*_i = −Z^*_i + Z^*_ī + Σ_{r∈∂_i} (Z^*_r − Z^*_r̄) and F^*_i = −Z^*_i + Z^*_ī, ∀i ∈ V. (6)
Furthermore, t^*, M^*, F^* and Z^* are unique. Theorem 3.1 comes from a dual form of (3), which we build using Moreau’s decomposition [30]. 3.1 Useful observations Let Z^*(t) be the unique minimizer of (5) for some t. The main ideas behind our algorithm depend on a few simple properties of the paths {Z^*(t)} and {L′(t)}, the derivative of L(t) with respect to t. Note that L is also a function of N, as defined in Theorem 3.1, which depends on the input data F̂. Lemma 3.2. L(t) is a convex function of t and N. Furthermore, L(t) is continuous in t and N, and L′(t) is non-decreasing with t. Lemma 3.3. Z^*(t) is continuous as a function of t and N. Z^*(t^*) is continuous as a function of N. Let B(t) = {i : Z^*(t)_i = t − N_i}, i.e., the set of components of the solution at the boundary of (5). Variables in B are called fixed, and we call other variables free.
Free (resp. fixed) nodes are nodes corresponding to free (resp. fixed) variables. Lemma 3.4. B(t) is piecewise constant in t. Consider dividing the tree T = (r, V, E) into subtrees, each with at least one free node, using B(t) as separation points. See Figure 4 in Appendix A for an illustration. Each i ∈ B(t) belongs to at most degree(i) different subtrees, where degree(i) is the degree of node i, and each i ∈ V\B(t) belongs to exactly one subtree. Let T_1, . . . , T_k be the set of resulting (rooted, labeled) trees. Let T_w = (r_w, V_w, E_w), where the root r_w is the closest node in T_w to r. We call {T_w} the subtrees induced by B(t). We define B_w(t) = B(t) ∩ V_w, and, when it does not create ambiguity, we drop the index t in B_w(t). Note that different B_w(t)’s might have elements in common. Also note that, by construction, if i ∈ B_w, then i must be a leaf of T_w, or the root of T_w. Definition 3.5. The (T_w, B_w)-problem is the optimization problem over |V_w\B(t)| variables
min_{Z_j : j∈V_w\B(t)} (1/2) Σ_{j∈V_w} (Z_j − Z_j̄)^2, (7)
where j̄ is the parent of j in T_w, Z_j̄ = 0 if j = r_w, and Z_j = Z^*(t)_j = t − N_j if j ∈ B_w(t). Lemma 3.6. Problem (5) decomposes into k independent problems. In particular, the minimizers {Z^*(t)_j : j ∈ V_w\B(t)} are determined as the solution of the (T_w, B_w)-problem. If j ∈ V_w, then Z^*(t)_j = c_1 t + c_2, where c_1 and c_2 depend on j but not on t, and 0 ≤ c_1 ≤ 1. Lemma 3.7. Z^*(t) and L′(t) are piecewise linear and continuous in t. Furthermore, Z^*(t) and L′(t) change linear segments if and only if B(t) changes. Lemma 3.8. If t ≤ t′, then B(t′) ⊆ B(t). In particular, B(t) changes at most q times with t. Lemma 3.9. Z^*(t) and L′(t) have fewer than q + 1 different linear segments. 3.2 The Algorithm In a nutshell, our algorithm computes the solution path {Z^*(t)}_{t∈R} and the derivative {L′(t)}_{t∈R}. From these paths, it finds the unique t^* at which
d(t + L(t))/dt |_{t=t^*} = 0 ⇔ L′(t^*) = −1. (8)
It then evaluates the path Z^*(t) at t = t^*, and uses this value, along with (6), to find M^* and F^*, the unique minimizers of (3). Finally, we compute C(T) = ‖F̂ − F^*‖. We know that {Z^*(t)} and {L′(t)} are continuous piecewise linear, with a finite number of different linear segments (Lemmas 3.7, 3.8 and 3.9). Hence, to describe {Z^*(t)} and {L′(t)}, we only need to evaluate them at the critical values, t_1 > t_2 > · · · > t_k, at which Z^*(t) and L′(t) change linear segments. We will later use Lemma 3.7 as a criterion to find the critical values. Namely, {t_i} are the values of t at which, as t decreases, new variables become fixed, and B(t) changes. Note that variables never become free once fixed, by Lemma 3.8, which also implies that k ≤ q. The values {Z^*(t_i)} and {L′(t_i)} are computed sequentially as follows. If t is very large, the constraint in (5) is not active, and Z^*(t) = L(t) = L′(t) = 0. Lemma 3.7 tells us that, as we decrease t, the first critical value is the largest t for which this constraint becomes active, and at which B(t) changes for the first time. Hence, if i = 1, we have t_i = max_s{N_s}, Z^*(t_i) = L′(t_i) = 0, and B(t_i) = arg max_s{N_s}. Once we have t_i, we compute the rates Z′^*(t_i) and L′′(t_i) from B(t_i) and T, as explained in Section 3.3. Since the paths are piecewise linear, derivatives are not defined at critical points. Hence, here, and throughout this section, these derivatives are taken from the left, i.e., Z′^*(t_i) = lim_{t↑t_i} (Z^*(t_i) − Z^*(t))/(t_i − t) and L′′(t_i) = lim_{t↑t_i} (L′(t_i) − L′(t))/(t_i − t).
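As a quick numerical illustration of Lemma 3.6 (and of the (T_w, B_w)-problem (7)), the following Python sketch solves one subproblem for several values of t, with the fixed variables held at Z_i = t − N_i, and checks that every free coordinate is an affine function of t. The chain tree, the values of N, and the choice of fixed set are made up for illustration.

import numpy as np

# Chain tree on 4 nodes: 0 -> 1 -> 2 -> 3, node 0 is the root; made-up N values.
parent = [-1, 0, 1, 2]
N = np.array([0.9, 0.6, 0.8, 0.3])
q = len(parent)
B = [3]                           # pretend variable 3 is fixed: Z_3 = t - N_3
free = [i for i in range(q) if i not in B]

# Edge-difference matrix D so that the objective of (5) is (1/2)||D Z||^2,
# with one row per node i for the term Z_i - Z_ibar (Z_ibar = 0 at the root).
D = np.zeros((q, q))
for i in range(q):
    D[i, i] = 1.0
    if parent[i] != -1:
        D[i, parent[i]] = -1.0

def solve_subproblem(t):
    zb = t - N[B]                 # boundary values of the fixed variables
    zf, *_ = np.linalg.lstsq(D[:, free], -D[:, B] @ zb, rcond=None)
    return zf

z0, z1, z2 = solve_subproblem(0.0), solve_subproblem(1.0), solve_subproblem(2.0)
assert np.allclose(z2 - z1, z1 - z0)     # equal increments => affine in t
print("rates c1 =", z1 - z0)             # here [0.25, 0.5, 0.75], each in [0, 1]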
Since Z′^*(t) and L′′(t) are constant for t ∈ (t_{i+1}, t_i], we have, for t in this interval,
Z^*(t) = Z^*(t_i) + (t − t_i) Z′^*(t_i), L′(t) = L′(t_i) + (t − t_i) L′′(t_i), (9)
and the next critical value, t_{i+1}, is the largest t < t_i for which new variables become fixed, and B(t) changes. The value t_{i+1} is found by solving for t < t_i in
Z^*(t)_r = Z^*(t_i)_r + (t − t_i) Z′^*(t_i)_r = t − N_r, (10)
and keeping the largest solution among all r ∉ B. Once t_{i+1} is computed, we update B with the new variables that became fixed, and we obtain Z^*(t_{i+1}) and L′(t_{i+1}) from (9). The process then repeats. By Lemma 3.2, L′ never increases. Hence, we stop this process (a) as soon as L′(t_i) < −1, or (b) when all the variables are in B, and thus there are no more critical values to compute. If (a), let t_k be the last critical value with L′(t_k) > −1, and if (b), let t_k be the last computed critical value. We use t_k and (9) to compute t^*, at which L′(t^*) = −1, and also Z^*(t^*). From Z^*(t^*) we then compute M^* and F^* and C(U) = ‖F̂ − F^*‖. The algorithm is shown compactly in Alg. 1. Its inputs are F̂ and T, represented, e.g., using a linked-nodes data structure. Its outputs are minimizers to (3). It makes use of a procedure ComputeRates, which we will explain later. This procedure terminates in O(q) steps and uses O(q) memory. Line 5 comes from solving (10) for t. In line 14, the symbols M^*(Z^*, T) and F^*(Z^*, T) remind us that M^* and F^* are computed from Z^* and T using (6). The correctness of Alg. 1 follows from the Lemmas in Section 3.1, and the explanation above. In particular, since there are at most q + 1 different linear regimes, the bound q in the for-loop does not prevent us from finding any critical value. Its time complexity is O(q^2), since each line completes in O(q) steps, and is executed at most q times. Theorem 3.10 (Complexity). Algorithm 1 finishes in O(q^2) steps, and requires O(q) memory. Theorem 3.11 (Correctness). Algorithm 1 outputs the solution to (3).
Algorithm 1 Projection onto the PPM (input: T and F̂; output: M^* and F^*)
1: N_i = Σ_{j∈∆_i} F̂_j, for all i ∈ V ▷ This takes O(q) steps using a DFS, see proof of Theorem 3.10
2: i = 1, t_i = max_r{N_r}, B(t_i) = arg max_r{N_r}, Z^*(t_i) = 0, L′(t_i) = 0 ▷ Initialize
3: for i = 1 to q do
4: (Z′^*(t_i), L′′(t_i)) = ComputeRates(B(t_i), T) ▷ Update rates of change
5: P = {P_r : P_r = (N_r + Z^*(t_i)_r − t_i Z′^*(t_i)_r)/(1 − Z′^*(t_i)_r) if r ∉ B(t_i) and P_r < t_i, and P_r = −∞ otherwise}
6: t_{i+1} = max_r P_r ▷ Update next critical value from (9)
7: B(t_{i+1}) = B(t_i) ∪ arg max_r P_r ▷ Update list of fixed variables
8: Z^*(t_{i+1}) = Z^*(t_i) + (t_{i+1} − t_i) Z′^*(t_i) ▷ Update solution path
9: L′(t_{i+1}) = L′(t_i) + (t_{i+1} − t_i) L′′(t_i) ▷ Update objective’s derivative
10: if L′(t_{i+1}) < −1 then break ▷ If already passed by t^*, then exit the for-loop
11: end for
12: t^* = t_i − (1 + L′(t_i))/L′′(t_i) ▷ Find solution to (8)
13: Z^* = Z^*(t_i) + (t^* − t_i) Z′^*(t_i) ▷ Find minimizers of (5) for t = t^*
14: return M^*(Z^*, T), F^*(Z^*, T) ▷ Return solution to (3) using (6), which takes O(q) steps
3.3 Computing the rates We now explain how the procedure ComputeRates works. Recall that it takes as input the tree T and the set B(t_i), and it outputs the derivatives Z′^*(t_i) and L′′(t_i). A simple calculation shows that if we compute Z′^*(t_i), then computing L′′(t_i) is easy. Lemma 3.12. L′′(t_i) can be computed from Z′^*(t_i) in O(q) steps and with O(1) memory as
L′′(t_i) = Σ_{j∈V} (Z′^*(t_i)_j − Z′^*(t_i)_j̄)^2, (11)
where j̄ is the closest ancestor to j in T. We note that if j ∈ B(t_i), then, by definition, Z′^*(t_i)_j = 1. Assume now that j ∈ V\B(t_i).
Lemma 3.6 implies we can find Z′^*(t_i)_j by solving the (T_w = (r_w, V_w, E_w), B_w)-problem as a function of t, where w is such that j ∈ V_w. In a nutshell, ComputeRates is a recursive procedure to solve all the (T_w, B_w)-problems as an explicit function of t. It suffices to explain how ComputeRates solves one particular (T_w, B_w)-problem explicitly. To simplify notation, in the rest of this section, we refer to T_w and B_w as T and B. Recall that, by the definition of T = T_w and B = B_w, if i ∈ B, then i must be a leaf of T, or the root of T. Definition 3.13. Consider a rooted tree T = (r, V, E), a set B ⊆ V, and variables {Z_j : j ∈ V} such that, if j ∈ B, then Z_j = α_j t + β_j for some α and β. We define the (T, B, α, β, γ)-problem as
min_{Z_j : j∈V\B} (1/2) Σ_{j∈V} γ_j (Z_j − Z_j̄)^2, (12)
where γ > 0, j̄ is the closest ancestor to j in T, and Z_j̄ = 0 if j = r. We refer to the solution of the (T, B, α, β, γ)-problem as {Z^*_j : j ∈ V\B}, which uniquely minimizes (12). Note that (12) is unconstrained and its solution, Z^*, is a linear function of t. Furthermore, the (T_w, B_w)-problem is the same as the (T_w, B_w, 1, −N, 1)-problem, which is what we actually solve. We now state three useful lemmas that help us solve any (T, B, α, β, γ)-problem efficiently. Lemma 3.14 (Pruning). Consider the solution Z^* of the (T, B, α, β, γ)-problem. Let j ∈ V\B be a leaf. Then Z^*_j = Z^*_j̄. Furthermore, consider the (T̃, B, α, β, γ)-problem, where T̃ = (r̃, Ṽ, Ẽ) is equal to T with node j pruned, and let its solution be Z̃^*. We have that Z^*_i = Z̃^*_i for all i ∈ Ṽ. Lemma 3.15 (Star problem). Let T be a star such that node 1 is the center node, node 2 is the root, and nodes 3, . . . , r are leaves. Let B = {2, . . . , r}. Let Z^*_1 ∈ R be the solution of the (T, B, α, β, γ)-problem. Then,
Z^*_1 = ((γ_1 α_2 + Σ_{i=3}^r γ_i α_i)/(γ_1 + Σ_{i=3}^r γ_i)) t + ((γ_1 β_2 + Σ_{i=3}^r γ_i β_i)/(γ_1 + Σ_{i=3}^r γ_i)). (13)
In particular, to find the rate at which Z^*_1 changes with t, we only need to know α and γ, not β. Lemma 3.16 (Reduction). Consider the (T, B, α, β, γ)-problem such that j, j̄ ∈ V\B, and such that j has all its children 1, . . . , r ∈ B. Let Z^* be its solution. Consider the (T̃, B̃, α̃, β̃, γ̃)-problem, where T̃ = (r̃, Ṽ, Ẽ) is equal to T with nodes 1, . . . , r removed, and B̃ = (B\{1, . . . , r}) ∪ {j}. Let Z̃^* be its solution. If (α̃_i, β̃_i, γ̃_i) = (α_i, β_i, γ_i) for all i ∈ B\{1, . . . , r}, and α̃_j, β̃_j and γ̃_j satisfy
α̃_j = (Σ_{i=1}^r γ_i α_i)/(Σ_{i=1}^r γ_i), β̃_j = (Σ_{i=1}^r γ_i β_i)/(Σ_{i=1}^r γ_i), γ̃_j = ((γ_j)^{-1} + (Σ_{i=1}^r γ_i)^{-1})^{-1}, (14)
then Z^*_i = Z̃^*_i for all i ∈ V\{j}. Lemma 3.15 and Lemma 3.16 allow us to recursively solve any (T, B, α, β, γ)-problem, and obtain for it an explicit solution of the form Z^*(t) = c_1 t + c_2, where c_1 and c_2 do not depend on t. Assume that we have already repeatedly pruned T, by repeatedly invoking Lemma 3.14, such that, if i is a leaf, then i ∈ B. See Figure 2-(left). First, we find some node j ∈ V\B such that all of its children are in B. If j̄ ∈ B, then j̄ must be the root, and the (T, B, α, β, γ)-problem must be a star problem as in Lemma 3.15. We can use Lemma 3.15 to solve it explicitly. Alternatively, if j̄ ∈ V\B, then we invoke Lemma 3.16, and reduce the (T, B, α, β, γ)-problem to a strictly smaller (T̃, B̃, α̃, β̃, γ̃)-problem, which we solve recursively. Once the (T̃, B̃, α̃, β̃, γ̃)-problem is solved, we have an explicit expression Z^*_i(t) = c_{1i} t + c_{2i} for all i ∈ V\{j}, and, in particular, we have an explicit expression Z^*_j̄(t) = c_{1j̄} t + c_{2j̄}. The only free variable of the (T, B, α, β, γ)-problem to be determined is Z^*_j(t).
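Before Z^*_j(t) is resolved in the text below, here is a minimal Python sketch of the two formulas just stated, (13) and (14); the function names and argument conventions are our own.

def star_center(alphas, betas, gammas):
    # Lemma 3.15: explicit solution Z*_1(t) = c1*t + c2 of a star problem.
    # Entry 0 of each list refers to the root neighbour (node 2): gammas[0] is
    # gamma_1, the weight of the edge from the centre to the root; the remaining
    # entries refer to the fixed leaves 3..r with their edge weights gamma_i.
    den = sum(gammas)
    c1 = sum(g * a for g, a in zip(gammas, alphas)) / den
    c2 = sum(g * b for g, b in zip(gammas, betas)) / den
    return c1, c2

def reduce_fixed_children(alphas, betas, gammas, gamma_j):
    # Lemma 3.16, eq. (14): absorb the fixed children 1..r of a free node j,
    # whose boundary lines are alpha_i*t + beta_i with edge weights gammas,
    # into a single boundary condition at j; gamma_j is the weight of j's edge.
    s = sum(gammas)
    alpha_t = sum(g * a for g, a in zip(gammas, alphas)) / s
    beta_t = sum(g * b for g, b in zip(gammas, betas)) / s
    gamma_t = 1.0 / (1.0 / gamma_j + 1.0 / s)
    return alpha_t, beta_t, gamma_t

# Example: root line Z_2 = t, one fixed leaf Z_3 = t - 0.4, unit weights:
# the centre has rate (1*1 + 1*1)/(1 + 1) = 1 and offset (0 - 0.4)/2 = -0.2.
print(star_center([1.0, 1.0], [0.0, -0.4], [1.0, 1.0]))   # (1.0, -0.2)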
To compute Z^*_j(t), we apply Lemma 3.15 to the (≈T, ≈B, ≈α, ≈β, ≈γ)-problem, where ≈T is a star around j, ≈γ are the components of γ corresponding to nodes that are neighbors of j, ≈α and ≈β are such that Z^*_i(t) = ≈α_i t + ≈β_i for all i that are neighbors of j, and for which Z^*_i(t) is already known, and ≈B are all the neighbors of j. See Figure 2-(right). The algorithm is compactly described in Alg. 2. It is slightly different from the description above for computational efficiency. Instead of computing Z^*(t) = c_1 t + c_2, we keep track only of c_1, the rates, and we do so only for the variables in V\B. The algorithm assumes that the input T has been pruned. The inputs T, B, α, β and γ are passed by reference. They are modified inside the algorithm but, once ComputeRatesRec finishes, they keep their initial values. Throughout the execution of the algorithm, T = (r, V, E) encodes (a) a doubly-linked list where each node points to its children and its parent, which we call T.a, and (b) a doubly-linked list of all the nodes in V\B for which all the children are in B, which we call T.b. In the proof of Theorem 3.17, we show how this representation of T can be kept updated with little computational effort. The input Y, also passed by reference, starts as an uninitialized array of size q, where we will store the rates {Z′^*_i}. At the end, we read Z′^* from Y.
Algorithm 2 ComputeRatesRec (input: T = (r, V, E), B, α, β, γ, Y)
1: Let j be some node in V\B whose children are in B ▷ We read j from T.b in O(1) steps
2: if j̄ ∈ B then
3: Set Y_j using (13) in Lemma 3.15 ▷ If j̄ ∈ B, then the (T, B, α, β, γ)-problem is star-shaped
4: else
5: Modify (T, B, α, β, γ) to match (T̃, B̃, α̃, β̃, γ̃) defined by Lemma 3.16 for j in line 1
6: ComputeRatesRec(T, B, α, β, γ, Y) ▷ Sets Y_i = Z′^*_i for all i ∈ V\B; Y_j is not yet defined
7: Restore (T, B, α, β, γ) to its original value before line 5 was executed
8: Compute Y_j from (13), using for α, β, γ in (13) the values ≈α, ≈β, ≈γ, where ≈γ are the components of γ corresponding to nodes that are neighbors of j in T, and ≈α and ≈β are such that Z^*_i = ≈α_i t + ≈β_i for all i that are neighbors of j in T, and for which Z^*_i is already known
9: end if
Let q be the number of nodes of the tree T that is the input at the zeroth level of the recursion. Theorem 3.17. Algorithm 2 correctly computes Z′^* for the (T, B, α, β, γ)-problem, and it can be implemented to finish in O(q) steps, and to use O(q) memory. The correctness of Algorithm 2 follows from Lemmas 3.14-3.16, and the explanation above. Its complexity is bounded by the total time spent on the two lines that actually compute rates during the whole recursion, lines 3 and 8. All the other lines only transform the input problem into a more computable form. Lines 3 and 8 solve a star-shaped problem with at most degree(j) variables, which, by inspecting (13), we know can be done in O(degree(j)) steps. Since j never takes the same value twice, the overall complexity is bounded by O(Σ_{j∈V} degree(j)) = O(|E|) = O(q). The O(q) bound on memory is possible because all the variables that occupy significant memory are being passed by reference, and are modified in place during the whole recursive procedure. The following lemma shows how the recursive procedure to solve a (T, B, α, β, γ)-problem can be used to compute the rates of change of Z^*(t) of a (T, B)-problem.
Its proof follows from the observation that the rate of change of the solution with t in (13) in Lemma 3.15 only depends on α and γ, and that the reduction equations (14) in Lemma 3.16 never make α̃ or γ̃ depend on β. Lemma 3.18 (Rates only). Let Z^*(t) be the solution of the (T, B)-problem, and let Z̃^*(t) be the solution of the (T, B, 1, 0, 1)-problem. Then, Z^*(t) = c_1 t + c_2, and Z̃^*(t) = c_1 t for some c_1 and c_2. We finally present the full algorithm to compute Z′^*(t_i) and L′′(t_i) from T and B(t_i).
Algorithm 3 ComputeRates (input: T and B(t_i); output: Z′^*(t_i) and L′′(t_i))
1: Z′^*(t_i)_j = 1 for all j ∈ B(t_i)
2: for each (T_w, B_w)-problem induced by B(t_i) do
3: Set T̃_w to be T_w pruned of all leaf nodes not in B_w, by repeatedly invoking Lemma 3.14
4: ComputeRatesRec(T̃_w, B_w, 1, 0, 1, Z̃′^*)
5: Z′^*(t_i)_j = Z̃′^*_j for all j ∈ V_w\B
6: end for
7: Compute L′′(t_i) from Z′^*(t_i) using Lemma 3.12
8: return Z′^*(t_i) and L′′(t_i)
The following theorem follows almost directly from Theorem 3.17. Theorem 3.19. Alg. 3 correctly computes Z′^*(t_i) and L′′(t_i) in O(q) steps, and uses O(q) memory. 4 Reducing computation time in practice Our numerical results are obtained for an improved version of Algorithm 1. We now explain the main idea behind this algorithm. The bulk of the complexity of Alg. 1 comes from line 4, i.e., computing the rates {Z′^*(t_i)_j}_{j∈V\B(t_i)} from B(t_i) and T. For a fixed j ∈ V\B(t_i), and by Lemma 3.6, the rate Z′^*(t_i)_j depends only on one particular (T_w = (r_w, V_w, E_w), B_w)-problem induced by B(t_i). If exactly this same problem is induced by both B(t_i) and B(t_{i+1}), which happens if the new nodes that become fixed in line 7 of round i of Algorithm 1 are not in V_w\B_w, then we can save computation time in round i + 1, by not recomputing any rates for j ∈ V_w\B_w, and using for Z′^*(t_{i+1})_j the value Z′^*(t_i)_j. Furthermore, if only a few {Z′^*_j} change from round i to round i + 1, then we can also save computation time in computing L′′ from Z′^* by subtracting from the sum in the right hand side of equation (11) the terms that depend on the previous, now changed, rates, and adding new terms that depend on the new rates. Finally, if the rate Z′^*_j does not change, then the value of t < t_i at which Z^*_j(t) might intersect t − N_j, and become fixed, given by P_j in line 5, also does not change. (Note that this is not obvious from the formula for P_r in line 5.) If not all {P_r} change from round i to round i + 1, we can also save computation time in computing the maximum, and maximizers, in line 7 by storing P in a maximum binary heap, and executing lines 5 and 7 by extracting all the maximal values from the top of the heap. Each time any P_r changes, the heap needs to be updated. 5 Numerical results Our algorithm to solve (3) exactly in a finite number of steps is of interest in itself. Still, it is interesting to compare it with other algorithms. In particular, we compare the convergence rate of our algorithm with two popular methods that solve (3) iteratively: the Alternating Direction Method of Multipliers (ADMM), and the Projected Gradient Descent (PGD) method. We apply the ADMM, and the PGD, to both the primal formulation (3), and the dual formulation (4). We implemented all the algorithms in C, and derived closed-form updates for ADMM and PGD, see Appendix F. We ran all algorithms on a single core of an Intel Core i5 2.5GHz processor.
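Returning to the bookkeeping just described in Section 4: below is a minimal Python sketch of a max-heap over the candidate values P_r with lazy invalidation, so the maximum can be extracted without rescanning all r whenever only a few P_r change. This is our own illustration, not the authors' C implementation.

import heapq

class LazyMaxHeap:
    def __init__(self):
        self.heap = []          # entries (-value, key, version)
        self.version = {}       # current version of each key

    def update(self, key, value):
        # Stale entries are left in the heap and skipped at pop time.
        v = self.version.get(key, 0) + 1
        self.version[key] = v
        heapq.heappush(self.heap, (-value, key, v))

    def pop_max(self):
        while self.heap:
            neg, key, v = heapq.heappop(self.heap)
            if self.version.get(key) == v:   # skip stale entries
                return key, -neg
        return None

h = LazyMaxHeap()
h.update("a", 1.5); h.update("b", 2.0); h.update("b", 0.7)
print(h.pop_max())   # ('a', 1.5): b's old entry 2.0 is stale and skipped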
Figure 3-(left) compares different algorithms for a random Galton–Watson input tree truncated to have q = 1000 nodes, with the number of children of each node chosen uniformly within a fixed range, and for a random input F̂ ∈ R^q, with entries chosen i.i.d. from a normal distribution. We observe the same behavior for all random instances that were tested. We gave ADMM and PGD an advantage by optimally tuning them for each individual problem-instance tested. In contrast, our algorithm requires no tuning, which is a clear advantage. At each iteration, the error is measured as max_j{|M_j − M^*_j|}. Our algorithm is about 74× faster than its closest competitor (PGD-primal) for 10^{-3} accuracy. In Figure 3-(right), we show the average run time of our algorithm versus the problem size, for random inputs of the same form. The scaling of our algorithm is (almost) linear, and much faster than our O(q^2 p), p = 1, theoretical bound.
Figure 3: (Left) Time that the different algorithms take to solve our problem for trees with 1000 nodes (legend: ADMM Primal, ADMM Dual, Projected Gradient Descent Primal, Projected Gradient Descent Dual; Our Algorithm = 0.0027 seconds). (Right) Average run time of our algorithm for problems of different sizes. For each size, each point is averaged over 500 random problem instances.
Finally, we use our algorithm to exactly solve (2) by computing C(U) for all trees and a given input F̂. Exactly solving (2) is very important for biology, since several relevant phylogenetic tree inference problems deal with trees of small sizes. We use an NVIDIA Quadro P5000 GPU to compute the cost of all possible trees with q nodes in parallel, and return the tree with the smallest cost. Basically, we assign to each GPU virtual thread a unique tree, using Prüfer sequences [31], and then have each thread compute the cost for its tree. For q = 10, we compute the cost of all 100 million trees in about 8 minutes, and for q = 11, we compute the cost of all 2.5 billion trees in slightly less than 2.5 hours. Code to solve (3) using Alg. 1, with the improvements of Section 4, can be found in [32]. More results using our algorithm can be found in Appendix G. 6 Conclusions and future work We propose a new direct algorithm that, for a given tree, computes how close the matrix of frequency of mutations per position is to satisfying the perfect phylogeny model. Our algorithm is faster than the state-of-the-art iterative methods for the same problem, even if we optimally tune them. We use the proposed algorithm to build a GPU-based phylogenetic tree inference engine for the trees of relevant biological sizes. Unlike existing algorithms, which only heuristically search a small part of the space of possible trees, our algorithm performs a complete search over all trees relatively fast. It is an open problem to find direct algorithms that can provably solve our problem in linear time on average, or even for a worst-case input. Acknowledgement: This work was partially funded by NIH/1U01AI124302, NSF/IIS-1741129, and a NVIDIA hardware grant.
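As a sketch of the exhaustive search described in Section 5, the following Python code enumerates all labeled trees on q nodes via Prüfer sequences, roots each tree at node 0, and keeps the tree of smallest projection cost. Here cost_C is a stand-in for a projection routine such as Alg. 1; this is not the authors' GPU implementation.

import heapq
from itertools import product

def prufer_to_edges(seq, q):
    # Standard decoding of a Prufer sequence over labels 0..q-1 into tree edges.
    degree = [1] * q
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(q) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)    # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def root_at(edges, root=0):
    # Orient the edges away from the root; parent[root] = -1.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    parent = {root: -1}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    return parent

def exhaustive_search(F_hat, q, cost_C):
    # Scores all q^(q-2) labeled trees (Cayley's formula) with the projection
    # cost; cost_C(parent, F_hat) is an assumed oracle, e.g. Alg. 1.
    best_cost, best_parent = float("inf"), None
    for seq in product(range(q), repeat=q - 2):
        parent = root_at(prufer_to_edges(list(seq), q))
        c = cost_C(parent, F_hat)
        if c < best_cost:
            best_cost, best_parent = c, parent
    return best_cost, best_parent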
1. What is the focus of the paper, and how does it contribute to the field of inferring perfect phylogeny? 2. Can you explain the proposed 2-stage procedure and how it addresses the problem of noisy data? 3. How does the paper compare its approach to other implementations, and what are the advantages or disadvantages of the proposed method? 4. Are there any concerns or limitations regarding the applicability of the method for practical use, especially with large datasets? 5. What are the strengths and weaknesses of the paper, particularly in terms of its theoretical analysis and experimental results?
Review
Review This is a nice mixed-programming formulation of a topical problem: inferring the most plausible perfect phylogeny in the setting of noisy data, that is, where the input matrix contains the fraction of the population with a mutation at each respective position. The authors propose a 2-stage procedure in which, in the first stage, they find good candidate matrices U from the space of valid trees, and in the second stage they compute the exact goodness of that matrix/tree. The space of these matrices is exponential in the dimension of the matrix U (although these matrices are 0/1 only), and therefore the emphasis is on this stage. The authors claim they achieve reasonable results for q=11, and they compared their method to other implementations. I cannot see a real practical use for such a magnitude, but I trust that the authors obtained good results.
NIPS
Title Constraints Based Convex Belief Propagation Abstract Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest to tackle consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as the Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency potential based approach, while being at least an order of magnitude faster. 1 Introduction Markov random fields (MRFs) [10] are widely used across different domains from computer vision and natural language processing to computational biology, because they are a general tool to describe distributions that involve multiple variables. The dependencies between variables are conveniently encoded via potentials that define the structure of a graph. Besides encoding dependencies, in a variety of real-world applications we often want consistent solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore, various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing constraints on the feasible instances. This was shown to be effective in practice. However for each new constraint we may need to design a specifically tailored algorithm. Therefore, the most common approach to impose consistency is usage of PN-consistency potentials [9]. This permits reuse of existing message passing solvers, however, at the expense of an additional computational burden, as real-world applications may involve hundreds of additional factors. Our goal in this work is to bypass this computational burden while being generally applicable. To do so, we consider the problem of inference when probabilistic equalities are imposed over the beliefs of the model rather than its feasible instances. As we show in Sec. 3, the adaptive nature of message passing algorithms conveniently allows for such probabilistic equality constraints within its framework. Since our method eliminates potentially many multivariate factors, inference is much more scalable than using PN-consistency potentials [9]. In this paper, for notational simplicity, we illustrate the belief constraints based message passing rules using a framework known as convex belief propagation (CBP). We refer to the illustrated algorithm as constraints based CBP (CBCBP).
However we note that the same derivation can be used to obtain, e.g., a constraints based tree-reweighted message passing algorithm. We evaluate the benefits of our algorithm on semantic image segmentation and machine translation tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude faster than CBP. 2 Background In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables X = (X_1, . . . , X_n). The distribution p(x_1, . . . , x_n) is assumed to factor into a product of non-negative potential functions, i.e., p(x_1, . . . , x_n) ∝ exp(Σ_r θ_r(x_r)), where r ⊂ {1, ..., n} is a subset of variable indices, which we use to restrict the domain via x_r = (x_i)_{i∈r}. The real-valued functions θ_r(x_r) assign a preference to each of the variables in the subset r. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function θ_r(x_r) corresponds to a node. Nodes θ_r and θ_p can be connected if r ⊂ p. Hence the parent set P(r) of a region r contains index sets p ∈ P(r) if r ⊂ p. Conversely we define the set of children of region r as C(r) = {c : r ∈ P(c)}. An important inference task is computation of the marginal probabilities p(x_r) = Σ_{x\x_r} p(x). Whenever the region graph has no cycles, marginals are easily computed using belief propagation. Unfortunately, this algorithm may not converge in the presence of cycles. To fix convergence, a variety of approximations have been suggested, one of which is known as convex belief propagation (CBP). CBP performs block-coordinate descent over the dual function of the following program:
max_b Σ_{r,x_r} b_r(x_r) θ_r(x_r) + Σ_r H(b_r) s.t. ∀r: b_r(x_r) ≥ 0, Σ_{x_r} b_r(x_r) = 1; ∀r, p ∈ P(r), x_r: Σ_{x_p\x_r} b_p(x_p) = b_r(x_r). (1)
This program is defined over marginal distributions b_r(x_r) and incorporates their entropy H(b_r) in addition to the potential function θ_r. In many real world applications we require the solution to be consistent, i.e., hard constraints between some of the involved variables exist. For example, consider the case where X_1, X_2 are two binary variables such that for every feasible joint assignment, x_1 = x_2. To encourage consistency while reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model:
θ_{1,2}(x_1, x_2) = 0 if x_1 = x_2, and −c otherwise. (2)
Hereby c is a positive constant that is tuned to penalize for the violation of consistency. As c increases, the following constraint holds:
b_1(X_1 = x_1) = b_2(X_2 = x_2). (3)
However, usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter c, (iii) large values of c reduce convergence, and (iv) large values of c result in corresponding beliefs being assigned zero probability mass, which is not desirable. To alleviate these issues we suggest to enforce the equality constraints given in Eq. (3) directly during optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as consistency constraints. At this point two notes are in order.
First we emphasize that utilizing consistency constraints instead of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that correspond to consistency potentials. Therefore it results in an optimization problem with fewer functions, which we expect to be more efficiently solvable. Second we highlight that the two approaches are not equivalent. Intuitively, as c increases, we expect consistency constraints to yield better results than usage of PN-potentials. Indeed, as c increases, the PN-consistency potential enforces the joint distribution to be diagonal, i.e., b(X_1 = i, X_2 = j) = 0, ∀i ≠ j. However, the consistency constraint as specified in Eq. (3) only requires the univariate marginals to agree. The latter is a considerably weaker requirement, as a diagonal pairwise distribution implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently, using consistency constraints results in a larger search space, which is desirable.
Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP)
Repeat until convergence:
Update λ messages - for each r update for all p ∈ P(r), x_r:
μ_{p→r}(x_r) = ln Σ_{x_p\x_r} exp( θ_p(x_p) − Σ_{p′∈P(p)} λ_{p→p′}(x_p) + Σ_{r′∈C(p)\r} λ_{r′→p}(x_{r′}) − Σ_{k∈K_p} ν_{p→k}(x_p) )
λ_{r→p}(x_r) ∝ (1/(1 + |P(r)|)) ( θ_r(x_r) + Σ_{c∈C(r)} λ_{c→r}(x_c) + Σ_{p′∈P(r)} μ_{p′→r}(x_r) − Σ_{k∈K_r} ν_{r→k}(x_r) ) − μ_{p→r}(x_r)
Update ν messages - for each k ∈ K update for all r ∈ N(k) using α_{r,k} as defined in Eq. (6):
ν_{r→k}(x_r^k) = log α_{r,k} − (1/|N(k)|) Σ_{r′∈N(k)} log α_{r′,k}
Figure 1: The CBCBP algorithm. Shown are the update rules for the λ and ν messages.
Next we derive a general message-passing algorithm that aims at solving the optimization problem given in Eq. (1) subject to consistency constraints of the form given in Eq. (3). 3 Constraints Based Convex Belief Propagation (CBCBP) To enforce consistency of beliefs we want to incorporate constraints of the form b_{r_1}(x_{r_1}) = . . . = b_{r_m}(x_{r_m}). Each constraint involves a set of regions r_i and some of their assignments x_{r_i}. If this constraint involves more than two regions, i.e., if m > 2, it is easier to formulate the constraint as a series of constraints b_{r_i}(x_{r_i}) = v, i ∈ {1, . . . , m}, for some constant v that eventually cancels. Generally, given a constraint k, we define the set of its neighbours N(k) to be the involved regions r_i^k as well as the involved assignments x_{r_i}^k, i.e., N(k) = {(r_i^k, x_{r_i}^k)}_{i=1}^{m_k}. To simplify notation we subsequently use r ∈ N(k) instead of (r, x_r) ∈ N(k). However, it should be clear from the context that each region r is matched with a value x_r^k. We subsume all constraints within the set K. Additionally, we let K_r denote the set of all those constraints k which depend on region r, i.e., K_r = {k : r ∈ N(k)}. Using the aforementioned notation we are now ready to augment the conventional CBP given in Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows:
max_b Σ_{r,x_r} b_r(x_r) θ_r(x_r) + Σ_r H(b_r)
s.t. ∀r: b_r(x_r) ≥ 0, Σ_{x_r} b_r(x_r) = 1
∀r, p ∈ P(r), x_r: Σ_{x_p\x_r} b_p(x_p) = b_r(x_r)
∀k ∈ K, r ∈ N(k): b_r(x_r^k) = v_k. (4)
To solve this program we observe that its constraint space exhibits a rich structure, defined on the one hand by the parent set P, and on the other hand by the neighborhood of the constraint subsumed in the set K. To exploit this structure, we aim at deriving the dual, which is possible because the program is strictly convex.
Importantly we can subsequently derive block-coordinate updates for the dual variables, which are efficiently computable in closed form. Hence solving the program given in Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing how to efficiently solve it. Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained by using the Lagrangian as shown in the following lemma. Lemma 3.1. The dual problem associated with the primal program given in Eq. (4) is:
min_{λ,ν} Σ_r log Σ_{x_r} exp( θ_r(x_r, λ) − Σ_{k∈K_r} ν_{r→k}(x_r) ) s.t. ∀k ∈ K: Σ_{r∈N(k)} ν_{r→k}(x_r^k) = 0,
where we set ν_{r→k}(x_r) = 0 ∀k ∈ K, r ∈ N(k), x_r ≠ x_r^k, and where we introduced θ_r(x_r, λ) = θ_r(x_r) − Σ_{p∈P(r)} λ_{r→p}(x_r) + Σ_{c∈C(r)} λ_{c→r}(x_c). Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all r, p ∈ P(r), x_r we let λ_{r→p}(x_r) be the Lagrange multiplier associated with the marginalization constraint Σ_{x_p\x_r} b_p(x_p) = b_r(x_r). Similarly, for all k ∈ K, r ∈ N(k), we let ν_{r→k}(x_r^k) be the Lagrange multiplier that is associated with the constraint b_r(x_r^k) = v_k. The corresponding Lagrangian is then given by
L(b, λ, ν) = Σ_{r,x_r} b_r(x_r) ( θ_r(x_r, λ) − Σ_{k∈K_r} ν_{r→k}(x_r) ) + Σ_r H(b_r) + Σ_{k∈K, r∈N(k)} ν_{r→k}(x_r^k) v_k,
where θ_r(x_r, λ) = θ_r(x_r) − Σ_{p∈P(r)} λ_{r→p}(x_r) + Σ_{c∈C(r)} λ_{c→r}(x_c) and ν_{r→k}(x_r) = 0 for all k, r ∈ N(k), x_r ≠ x_r^k. Due to conjugate duality between the entropy and the log-sum-exp function [25], the dual function is:
D(λ, ν) = max_b L(b, λ, ν) = Σ_r log Σ_{x_r} exp( θ_r(x_r, λ) − Σ_{k∈K_r} ν_{r→k}(x_r) ) + Σ_k v_k Σ_{r∈N(k)} ν_{r→k}(x_r^k).
The result follows since the dual function is unbounded with respect to the Lagrange multipliers ν_{r→k}(x_r^k) unless the stated constraints are imposed. Derivation of message passing update rules: As mentioned before, we can derive block-coordinate descent update rules for the dual which are computable in closed form. Hence the dual given in Lemma 3.1 can be solved efficiently, which is summarized in the following theorem: Theorem 3.2. Block-coordinate descent over the dual problem given in Lemma 3.1 results in a message passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP algorithm. It is guaranteed to converge. Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithm, each region r sends a message to its parents via the dual variable λ_{r→p}. Differently from CBP but similar to distributed variants [19], our algorithm has another type of messages, i.e., the ν messages. Conceptually, think of the constraints as a new node. A constraint node k is connected to a region r if r ∈ N(k). Hence, a region r ‘informs’ the constraint node using the dual variable ν_{r→k}. We now show how to derive the message passing rules to optimize the dual. Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6]. Next we begin by optimizing the dual function given in Lemma 3.1 with respect to the λ parameters. Specifically, for a chosen region r we optimize the dual w.r.t. a block of Lagrange multipliers λ_{r→p}(x_r) ∀p ∈ P(r), x_r. To this end we differentiate the dual with respect to λ_{r→p}(x_r) while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by the messages ν_{r→k}. It yields the update rules given in Fig. 1. Next we turn to optimizing the dual with respect to the Lagrange multipliers ν.
Recall that each constraint k ∈ K in the dual function given in Lemma 3.1 is associated with the linear constraint Σ_{r∈N(k)} ν_{r→k}(x_r^k) = 0. Therefore we employ a Lagrange multiplier γ_k for each k. For compact exposition, we introduce the Lagrangian that is associated with a constraint k, denoted by L_k:
L_k(λ, ν) = Σ_{r∈N(k)} log Σ_{x_r} exp( θ_r(x_r, λ) − Σ_{k′∈K_r} ν_{r→k′}(x_r) ) + γ_k Σ_{r∈N(k)} ν_{r→k}(x_r^k).
Differentiating L_k with respect to ν_{r→k} ∀r ∈ N(k) and using optimality conditions, we then arrive at:
ν_{r→k}(x_r^k) = log( α_{r,k} · (1 + γ_k)/(−γ_k) ) (5)
for all r ∈ N(k), where
α_{r,k} = exp( θ_r(x_r^k, λ) − Σ_{k′∈K_r\k} ν_{r→k′}(x_r^k) ) / Σ_{x_r≠x_r^k} exp( θ_r(x_r, λ) − Σ_{k′∈K_r} ν_{r→k′}(x_r) ). (6)
Summing the right hand side of Eq. (5) over r ∈ N(k) and using the constraint Σ_{r∈N(k)} ν_{r→k}(x_r^k) = 0 results in
(1 + γ_k)/(−γ_k) = Π_{r∈N(k)} (1/α_{r,k})^{1/|N(k)|}.
Finally, substituting this result back into Eq. (5) yields the desired update rule. We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation. 4 Experiments We first demonstrate the applicability of the procedure using synthetic data. We then turn to image segmentation and machine translation, using real-world datasets. As our work directly improves the standard CBP approach, we use it as a baseline. 4.1 Synthetic Evaluation Consider two variables X and Y, where Y is binary and the support of X consists of L levels, {1, . . . , L}. Assume we are given the following PN-consistency potential:
θ_{x,y}(x, y) = 0 if (y = 1 ∧ x = 1) ∨ (y = 0 ∧ x ≠ 1), and −c otherwise, (7)
where c is some positive parameter. This potential encourages the assignment y = 1 to agree with the assignment x = 1 and y = 0 to agree with x ∈ {2, . . . , L}. Phrased differently, this potential favours beliefs such that:
b_y(y = 1) = b_x(x = 1), b_y(y = 0) = b_x(x ≠ 1). (8)
Therefore, one may replace the above potential using a single consistency constraint. Note that the above two constraints complement each other, hence it suffices to include one of them. We use the left consistency constraint since it fits our derivation. We test this hypothesis by constructing four networks that consist of n = 2v, v = 50, 100, 150, 200 variables, where v variables are binary, denoted by Y, and the other v variables are multi-level, subsumed within X. Note that the support of variable X_i, 1 ≤ i ≤ v, consists of i states. Each multi-level variable is matched with a binary one. For each variable we randomly generate unary potentials according to the standard Gaussian distribution. We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in Eq. (7) with c = 1. In a next step we replace each such potential by its corresponding consistency constraint following Eq. (8). For each network we repeat this process 10 times and report the mean running time and standard deviation in Tab. 1. As expected, CBCBP is significantly faster than the standard CBP. Quantitatively, CBCBP was approximately 25 times faster for the smallest, and more than 31 times faster for the largest graphs. Obviously, different values of c affect the convexity of the problem and therefore also the running time of both CBP and CBCBP. To quantify its impact we repeat the experiment with n = 200 for distinct values of c ∈ {2, 4, 6, 8, 10}. In Tab. 2 we report the mean speedup factor over 10 repetitions, for each value of c. As clearly evident, the speedup factor substantially increases with c.
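As a minimal Python sketch of the closed-form ν update just derived (Fig. 1), assuming the values α_{r,k} from Eq. (6) have already been computed for one constraint k:

import math

def update_nu_for_constraint(alpha):
    # alpha maps each region r in N(k) to alpha_{r,k} from Eq. (6).
    # Returns nu_{r->k}(x_r^k) = log(alpha_{r,k}) - mean_{r'} log(alpha_{r',k}).
    mean_log = sum(math.log(a) for a in alpha.values()) / len(alpha)
    return {r: math.log(a) - mean_log for r, a in alpha.items()}

nu = update_nu_for_constraint({"r1": 0.2, "r2": 0.8})
print(nu, sum(nu.values()))  # the updates sum to zero, as the dual requires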
4.2 Image Segmentation We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as well as the PascalVOC 2012 [4] dataset. Both contain 21 foreground classes. Each variable X_i in our model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a binary variable Y_i that indicates whether the super-pixel belongs to the foreground, i.e., y_i = 1, or to the background, i.e., y_i = 0. The model potentials are: Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a convolutional neural network following the VGG16 architecture. Foreground/Background unary potentials: For MSRC-21 we let the value of the potential at y_i = 1 equal the value of the super-pixel unary potential that corresponds to the ‘void’ label, and for y_i = 0 we define it to be the maximum value of the super-pixel unary potential among the other labels. For PascalVOC 2012 we obtain the foreground/background potential by training another convolutional neural network, following again the VGG16 architecture. Super-pixel - foreground/background consistency: We define pairwise potentials between super-pixels and the foreground/background labels using Eq. (7) and set c = 1. Naturally, these consistency potentials encourage CBP to favour beliefs where pixels that are labeled as ‘void’ are also labeled as ‘background’ and vice versa. This can also be formulated using the constraints b_i(X_i = 0) = b_i(Y_i = 0) and b_i(X_i ≠ 1) = b_i(Y_i = 1). We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the standard error measure of average per class accuracy and average per pixel accuracy, denoted as global. Performance results are provided in Tab. 3. Appealingly, our results indicate that CBCBP outperforms the standard CBP across both metrics. Moreover, as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is equal to or higher than CBP. Finally, CBCBP is more than 65 times faster than CBP. In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric for the VOC2012 data. We observe CBCBP to perform better, since it is able to transfer information between the foreground-background classification and the semantic segmentation. 4.3 Machine Translation We now consider the task of machine translation. We define a phrase-based translation model as a factor graph with many large constraints and use CBCBP to efficiently incorporate them during inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source language, the output of our phrase-based model consists of a segmentation of the source sentence into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of the phrase translations. See Fig. 2 for an illustration. We index variables in our model by i = 1, . . . , n, which include source words (sw), source phrases (sp), and translation phrase slots (tp). The sequence of source words is first segmented into source phrases. The possible values for source word sw are X_sw = {(sw_1, sw_2) : (sw_1 ≤ sw ≤ sw_2) ∧ (sw_2 − sw_1 < m)}, where m is the maximum phrase length. If source phrase sp is used in the derivation, we say that sp aligns to a translation phrase slot tp. If sp is not used, it aligns to ∅.
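A small Python sketch of the X_sw domains just defined; the function name and 1-based word numbering are our own.

def source_word_domain(sw, n_words, m):
    # X_sw from Section 4.3: all phrases (sw1, sw2) that contain word sw and
    # have length at most m (words are numbered 1..n_words).
    return [(sw1, sw2)
            for sw1 in range(1, n_words + 1)
            for sw2 in range(sw1, n_words + 1)
            if sw1 <= sw <= sw2 and sw2 - sw1 < m]

print(source_word_domain(sw=2, n_words=4, m=3))
# [(1, 2), (1, 3), (2, 2), (2, 3), (2, 4)]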
We define variables X_sp to indicate what sp aligns to: X_sp = {tp : sw_1 − d ≤ tp ≤ sw_2 + d} ∪ {∅}, i.e., all translation phrase slots tp (numbered from left to right in the translation) such that the slot number is at most distance d from an edge of sp.¹ (Footnote 1: Our distortion limit d is based on distances from source words to translation slots, rather than distances between source words as in the Moses system [7].) Each translation phrase slot tp generates actual target-language words which comprise the translation. We define variables X_tp ranging over the possible target-language word sequences (translation phrases) that can be generated at slot tp. However, not all translation phrase slots must be filled in with translations. Beyond some value of tp (equaling the number of source phrases used in the derivation), they must all be empty. To enforce this, we also permit a null (∅) translation. Consistency constraints: Many derivations defined by the discrete product space X_1 × · · · × X_n are semantically inconsistent. For example, a derivation may place the first source word into the source phrase (1, 2) and the second source word into (2, 3). This is problematic because the phrases overlap; each source word must be placed into exactly one source phrase. We introduce source word consistency constraints: ∀sp, ∀sw ∈ sp : b_sw(sp) = b(sp). These constraints force the source word beliefs b_sw(x_sw) to agree on their span. There are other consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a translation phrase slot if and only if the source phrase is consistently chosen by all of its source words. Formally, ∀sp : b(sp) = Σ_{x_sp≠∅} b_sp(x_sp). Phrase translation potentials: We use pairwise potential functions between source phrases sp = (sw_1, sw_2) and their aligned translation phrase slots tp. We include a factor 〈sp, tp〉 ∈ E if sw_1 − d ≤ tp ≤ sw_2 + d. Letting π_sp be the actual words in sp, the potentials θ_{sp,tp}(x_sp, x_tp) determine the preference of the phrase translation 〈π_sp, x_tp〉 using a phrase table feature function τ : 〈π, π′〉 → R^k. In particular, θ_{sp,tp}(x_sp, x_tp) = γ_p^⊤ τ(〈π_sp, x_tp〉) if x_sp = tp, and a large negative value otherwise, where γ_p is the weight vector for the Moses phrase table feature vector. Language model potentials: To include n-gram language models, we add potentials that score pairs of consecutive target phrases, i.e., θ_{tp−1,tp}(x_{tp−1}, x_tp) = γ_ℓ Σ_{i=1}^{|x_tp|} log Pr(x_tp^{(i)} | x_{tp−1} · x_tp^{(1)} · ... · x_tp^{(i−1)}), where |x_tp| is the number of words in x_tp, x_tp^{(i)} is the i-th word in x_tp, · denotes string concatenation, and γ_ℓ is the feature weight. This potential sums n-gram log-probabilities of words in the second of the two target phrases. Internal n-gram features and the standard word penalty feature [7] are computed in the θ_tp potentials, since they depend only on the words in x_tp. Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them aligning to the same translation slot. We also prevent two overlapping source phrases from both aligning to non-null slots (i.e., one must align to ∅). We include a factor between two source phrases if there is a translation phrase slot that may relate to both, namely 〈sp_1, sp_2〉 ∈ E if ∃tp : 〈sp_1, tp〉 ∈ E, 〈sp_2, tp〉 ∈ E. The source phrase separation potential θ_{sp_1,sp_2}(x_sp_1, x_sp_2) is −∞ if either x_sp_1 = x_sp_2 ≠ ∅, or sp_1 ∩ sp_2 ≠ ∅ ∧ x_sp_1 ≠ ∅ ∧ x_sp_2 ≠ ∅.
Otherwise, it is −γ_d |δ(sp_1, sp_2) − |x_sp_1 − x_sp_2||, where δ(sp_1, sp_2) returns the number of source words between the spans sp_1 and sp_2. This favors similar distances between source phrases and their aligned slots. Experimental Setup: We consider German-to-English translation. As training data for constructing the phrase table, we use the WMT2011 parallel data [2], which contains 1.9M sentence pairs. We use the phrase table to compute θ_{sp,tp} and to fill X_tp. We use a bigram language model estimated from the English side of the parallel data along with 601M tokens of randomly-selected sentences from the Linguistic Data Consortium’s Gigaword corpus. This is used when computing the θ_{tp−1,tp} potentials. As our test set, we use the first 150 sentences from the WMT2009 test set. Results below are (uncased) %BLEU scores [17] on this 150-sentence set. We use maximum phrase length m = 3 and distortion limit d = 3. We run 250 iterations of CBCBP for each sentence. For the feature weights (γ), we use the default weights in Moses, since our features are analogous to theirs. Learning the weights is left to future work. Results: We compare to a simplified version of our model that omits the sw variables and all constraints and terms pertaining to them. This variation still contains all sp and tp variables and their factors. This comparison shows the contribution of our novel handling of consistency constraints. Tab. 6 shows our results. The consistency constraints lead to a large improvement for our model at negligible increase in runtime, due to our closed-form update rules. We found it impractical to attempt to obtain these results using the standard CBP algorithm for any source sentences of typical length. For comparison to a standard benchmark, we also trained a Moses system [7], a state-of-the-art phrase-based system, on the same data. We used default settings and feature weights, except we used max phrase length 3 and no lexicalized reordering model, in order to more closely match the setting of our model. The Moses %BLEU on this dataset is 17.88. When using the source word consistency constraints, we are within 1.2% of Moses. Our model has the virtue of being able to compute marginals for downstream applications and also permits us to study particular forms of constraints in phrase-based translation modeling. Future work can add or remove constraints like we did in our experiments here in order to determine the most effective constraints for phrase-based translation. Our efficient inference framework makes such exploration possible. 5 Related Work Variational approaches to inference have been extensively studied in the past. We address approximate inference using the entropy barrier function, and there has been extensive work in this direction, e.g., [24, 14, 23, 5, 19, 20] to name a few. Our work differs since we incorporate consistency constraints within the inference engine. We show that closed-form update rules are still available. Consistency constraints are implied when using PN-potentials [9]. However, pairwise functions are included for every constraint, which is expensive if many constraints are involved. In contrast, constraints over the feasible instances are considered in [22, 13, 16, 12, 1]. While impressive results have been shown, each different restriction of the feasible set may require a tailored algorithm. Instead, we propose to include probabilistic equalities among the model beliefs, which permits derivation of an algorithm that is generally applicable.
6 Conclusions In this work we tackled the problem of inference with belief-based equality constraints, which arises when consistency among variables in the network is required. We introduced the CBCBP algorithm that directly incorporates constraints into the CBP framework and results in closed-form update rules. We demonstrated the merit of CBCBP both on synthetic data and on two real-world tasks. Our experiments indicate that CBCBP outperforms PN-potentials in both speed and accuracy. In the future we intend to incorporate our approximate inference with consistency constraints into learning frameworks, e.g., [15, 3].
1. What is the main contribution of the paper regarding convex Bayesian programming? 2. What are the strengths of the proposed approach, particularly in its ability to handle certain constraints? 3. Do you have any concerns or questions about the formulation and optimization details of the method? 4. How does the reviewer assess the clarity and effectiveness of the presentation, including the discussion of speed and experimental results? 5. Are there any suggestions for improving the paper, such as providing more concrete comparisons or implementation details?
Review
Review The basic idea is that one would like to do convex BP but including certain constraints, e.g. that b(x1)=b(x2). (Note that this is very different from including x1=x2, which is trivial). This is basically done by taking the regular formulation for convex BP where inference is phrased as an optimization over the local polytope. Then, an extra set of Lagrange multipliers is added to impose the constraint, and the rest of the optimization details go through slightly changed to include these. This is an interesting and plausible idea. I only have a few random comments. - On line 63, please describe more clearly the problem with (iv) (Difference between x1=x2 and b(x1)=b(x2)) - In Eq. 4, should the maximum also take place over v? - I think the general discussion of speed needs more discussion. The introduction implies that it should be faster *per iteration* compared to regular convex BP with a "c penalty". Please formalize how much faster. In addition, the experiments often compare speed, but this is not clearly described enough to be useful. Why is it faster? Is it because of faster iterations or fewer iterations? The experiments don't discuss stopping criteria, basically making them useless as written. (Some discussion of implementation details should also be made.) How do we know the speedup isn't an illusion caused by the convergence threshold? Some plots of, e.g., accuracy vs time would be much more convincing. - In Sections 4.2/4.3, what is the comparison? To some fixed value of c, or to just discarding the constraints?
NIPS
Title Constraints Based Convex Belief Propagation Abstract Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest to tackle consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as the Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency potential based approach, while being at least an order of magnitude faster. N/A Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest to tackle consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as the Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency potential based approach, while being at least an order of magnitude faster. 1 Introduction Markov random fields (MRFs) [10] are widely used across different domains from computer vision and natural language processing to computational biology, because they are a general tool to describe distributions that involve multiple variables. The dependencies between variables are conveniently encoded via potentials that define the structure of a graph. Besides encoding dependencies, in a variety of real-world applications we often want consistent solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore, various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing constraints on the feasible instances. This was shown to be effective in practice. However for each new constraint we may need to design a specifically tailored algorithm. Therefore, the most common approach to impose consistency is usage of PN-consistency potentials [9]. This permits reuse of existing message passing solvers, however, at the expense of an additional computational burden, as real-world applications may involve hundreds of additional factors. Our goal in this work is to bypass this computational burden while being generally applicable. To do so, we consider the problem of inference when probabilistic equalities are imposed over the beliefs of the model rather than its feasible instances. As we show in Sec. 3, the adaptive nature of message passing algorithms conveniently allows for such probabilistic equality constraints within its framework. Since our method eliminates potentially many multivariate factors, inference is much more scalable than using PN-consistency potentials [9]. In this paper, for notational simplicity, we illustrate the belief constraints based message passing rules using a framework known as convex belief propagation (CBP). We refer to the illustrated algorithm as constraints based CBP (CBCBP). 
However we note that the same derivation can be used to obtain, e.g., a constraints based tree-reweighted message passing algorithm. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. We evaluate the benefits of our algorithm on semantic image segmentation and machine translation tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude faster than CBP.

2 Background

In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables $X = (X_1, \ldots, X_n)$. The distribution $p(x_1, \ldots, x_n)$ is assumed to factor into a product of non-negative potential functions, i.e., $p(x_1, \ldots, x_n) \propto \exp\big(\sum_r \theta_r(x_r)\big)$, where $r \subset \{1, \ldots, n\}$ is a subset of variable indices, which we use to restrict the domain via $x_r = (x_i)_{i \in r}$. The real-valued functions $\theta_r(x_r)$ assign a preference to each configuration of the variables in the subset $r$. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function $\theta_r(x_r)$ corresponds to a node. Nodes $\theta_r$ and $\theta_p$ can be connected if $r \subset p$. Hence the parent set $P(r)$ of a region $r$ contains index sets $p \in P(r)$ if $r \subset p$. Conversely, we define the set of children of region $r$ as $C(r) = \{c : r \in P(c)\}$.

An important inference task is computation of the marginal probabilities $p(x_r) = \sum_{x \setminus x_r} p(x)$. Whenever the region graph has no cycles, marginals are easily computed using belief propagation. Unfortunately, this algorithm may not converge in the presence of cycles. To fix convergence a variety of approximations have been suggested, one of which is known as convex belief propagation (CBP). CBP performs block-coordinate descent over the dual function of the following program:

$$\max_{b} \; \sum_{r, x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r, x_r & b_r(x_r) \ge 0, \;\; \sum_{x_r} b_r(x_r) = 1, \\ \forall r, p \in P(r), x_r & \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r). \end{cases} \quad (1)$$

This program is defined over marginal distributions (beliefs) $b_r(x_r)$ and incorporates their entropy $H(b_r) = -\sum_{x_r} b_r(x_r)\log b_r(x_r)$ in addition to the potential functions $\theta_r$. In many real world applications we require the solution to be consistent, i.e., hard constraints between some of the involved variables exist. For example, consider the case where $X_1, X_2$ are two binary variables such that for every feasible joint assignment $x_1 = x_2$. To encourage consistency while reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model:

$$\theta_{1,2}(x_1, x_2) = \begin{cases} 0 & x_1 = x_2 \\ -c & \text{otherwise.} \end{cases} \quad (2)$$

Hereby $c$ is a positive constant that is tuned to penalize violations of consistency. As $c$ increases, the following constraint is enforced ever more strictly:

$$b_1(X_1 = x) = b_2(X_2 = x) \quad \forall x. \quad (3)$$

However, usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter $c$, (iii) large values of $c$ slow convergence, and (iv) large values of $c$ result in the corresponding beliefs being assigned zero probability mass, which is not desirable. To alleviate these issues we suggest to enforce the equality constraints given in Eq. (3) directly during optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as consistency constraints. At this point two notes are in place.
First we emphasize that utilizing consistency constraints instead of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that correspond to consistency potentials. Therefore it results in an optimization problem with fewer functions, which is expected to be more efficiently solvable. Second we highlight that the two approaches are not equivalent. Intuitively, as $c$ increases, we expect consistency constraints to yield better results than usage of PN-potentials. Indeed, as $c$ increases, the PN-consistency potential enforces the joint distribution to be diagonal, i.e., $b(X_1 = i, X_2 = j) = 0$ for all $i \neq j$. However, the consistency constraint as specified in Eq. (3) only requires the univariate marginals to agree. The latter is a considerably weaker requirement: a diagonal pairwise distribution implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently, using consistency constraints results in a larger search space, which is desirable.

Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP)
Repeat until convergence:
Update $\lambda$ messages — for each $r$ update for all $p \in P(r), x_r$:
$$\mu_{p \to r}(x_r) = \ln \sum_{x_p \setminus x_r} \exp\Big( \theta_p(x_p) - \sum_{p' \in P(p)} \lambda_{p \to p'}(x_p) + \sum_{r' \in C(p) \setminus r} \lambda_{r' \to p}(x_{r'}) - \sum_{k \in K_p} \nu_{p \to k}(x_p) \Big)$$
$$\lambda_{r \to p}(x_r) \propto \frac{1}{1 + |P(r)|} \Big( \theta_r(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c) + \sum_{p' \in P(r)} \mu_{p' \to r}(x_r) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) - \mu_{p \to r}(x_r)$$
Update $\nu$ messages — for each $k \in K$ update for all $r \in N(k)$, using $\alpha_{r,k}$ as defined in Eq. (6):
$$\nu_{r \to k}(x_r^k) = \log \alpha_{r,k} - \frac{1}{|N(k)|} \sum_{r' \in N(k)} \log \alpha_{r',k}$$
Figure 1: The CBCBP algorithm. Shown are the update rules for the $\lambda$ and $\nu$ messages.

Next we derive a general message-passing algorithm that aims at solving the optimization problem given in Eq. (1) subject to consistency constraints of the form given in Eq. (3).

3 Constraints Based Convex Belief Propagation (CBCBP)

To enforce consistency of beliefs we want to incorporate constraints of the form $b_{r_1}(x_{r_1}) = \ldots = b_{r_m}(x_{r_m})$. Each constraint involves a set of regions $r_i$ and one of their assignments $x_{r_i}$. If a constraint involves more than two regions, i.e., if $m > 2$, it is easier to formulate it as a series of constraints $b_{r_i}(x_{r_i}) = v$, $i \in \{1, \ldots, m\}$, for some constant $v$ that eventually cancels. Generally, given a constraint $k$, we define the set of its neighbours $N(k)$ to be the involved regions $r_i^k$ together with the involved assignments $x_{r_i}^k$, i.e., $N(k) = \{(r_i^k, x_{r_i}^k)\}_{i=1}^{m_k}$. To simplify notation we subsequently write $r \in N(k)$ instead of $(r, x_r) \in N(k)$; it should be clear from the context that each region $r \in N(k)$ is matched with a value $x_r^k$. We subsume all constraints within the set $K$. Additionally, we let $K_r$ denote the set of all those constraints $k$ which depend on region $r$, i.e., $K_r = \{k : r \in N(k)\}$.

Using the aforementioned notation we are now ready to augment the conventional CBP program given in Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows:

$$\max_{b, v} \; \sum_{r, x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r, x_r & b_r(x_r) \ge 0, \;\; \sum_{x_r} b_r(x_r) = 1 \\ \forall r, p \in P(r), x_r & \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r) \\ \forall k \in K, r \in N(k) & b_r(x_r^k) = v_k. \end{cases} \quad (4)$$

To solve this program we observe that its constraint space exhibits a rich structure, defined on the one hand by the parent sets $P$, and on the other hand by the neighborhoods of the constraints subsumed in the set $K$. To exploit this structure, we aim at deriving the dual, which is possible because the program is strictly convex.
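Before turning to the dual, a small numpy check (ours, purely illustrative) makes the second note above concrete: a pairwise belief whose univariate marginals agree need not be diagonal, so the belief constraint of Eq. (3) admits joints that a hard PN-potential (c → ∞) would rule out.

```python
import numpy as np

# Joint belief over two binary variables (rows: X1, cols: X2).
# Its marginals agree (b1 = b2 = [0.5, 0.5]), yet the joint is not
# diagonal, so it satisfies the Eq. (3)-style constraint b1 = b2
# without satisfying the hard constraint x1 = x2.
b_joint = np.array([[0.25, 0.25],
                    [0.25, 0.25]])

b1 = b_joint.sum(axis=1)  # marginal of X1
b2 = b_joint.sum(axis=0)  # marginal of X2

assert np.allclose(b1, b2)                                   # marginals agree
assert not np.allclose(b_joint, np.diag(np.diag(b_joint)))   # joint is not diagonal
print("b1 =", b1, " b2 =", b2)
```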
Importantly we can subsequently derive block-coordinate updates for the dual variables, which are efficiently computable in closed form. Hence solving the program given in Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing how to efficiently solve it.

Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained via the Lagrangian, as shown in the following lemma.

Lemma 3.1: The dual problem associated with the primal program given in Eq. (4) is

$$\min_{\lambda, \nu} \; \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) \quad \text{s.t.} \quad \forall k \in K, \;\; \sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0,$$

where we set $\nu_{r \to k}(x_r) = 0$ for all $k \in K$, $r \in N(k)$, $x_r \neq x_r^k$, and where we introduced $\theta_r(x_r, \lambda) = \theta_r(x_r) - \sum_{p \in P(r)} \lambda_{r \to p}(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c)$.

Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all $r$, $p \in P(r)$, $x_r$ we let $\lambda_{r \to p}(x_r)$ be the Lagrange multiplier associated with the marginalization constraint $\sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r)$. Similarly, for all $k \in K$, $r \in N(k)$, we let $\nu_{r \to k}(x_r^k)$ be the Lagrange multiplier associated with the constraint $b_r(x_r^k) = v_k$. The corresponding Lagrangian is then given by

$$L(b, \lambda, \nu) = \sum_{r, x_r} b_r(x_r)\Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) + \sum_r H(b_r) + \sum_{k \in K} \sum_{r \in N(k)} \nu_{r \to k}(x_r^k)\, v_k,$$

where $\theta_r(x_r, \lambda) = \theta_r(x_r) - \sum_{p \in P(r)} \lambda_{r \to p}(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c)$ and $\nu_{r \to k}(x_r) = 0$ for all $k$, $r \in N(k)$, $x_r \neq x_r^k$. Due to conjugate duality between the entropy and the log-sum-exp function [25], the dual function is

$$D(\lambda, \nu) = \max_{b} L(b, \lambda, \nu) = \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) + \sum_k v_k \sum_{r \in N(k)} \nu_{r \to k}(x_r^k).$$

The result follows since the term that is linear in the free constants $v_k$ renders the dual unbounded from below unless $\sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0$ for every $k$, which yields the stated constraints.

Derivation of message passing update rules: As mentioned before, we can derive block-coordinate descent update rules for the dual which are computable in closed form. Hence the dual given in Lemma 3.1 can be solved efficiently, which is summarized in the following theorem.

Theorem 3.2: Block-coordinate descent over the dual problem given in Lemma 3.1 results in a message passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP algorithm. It is guaranteed to converge.

Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithms, each region $r$ sends a message to its parents via the dual variable $\lambda_{r \to p}$. Differently from CBP but similar to distributed variants [19], our algorithm has another type of message, the $\nu$ messages. Conceptually, think of each constraint as a new node: a constraint node $k$ is connected to a region $r$ if $r \in N(k)$, and a region $r$ 'informs' the constraint node via the dual variable $\nu_{r \to k}$. We now show how to derive the message passing rules to optimize the dual.

Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6]. Next we begin by optimizing the dual function given in Lemma 3.1 with respect to the $\lambda$ parameters. Specifically, for a chosen region $r$ we optimize the dual w.r.t. a block of Lagrange multipliers $\lambda_{r \to p}(x_r)$, $\forall p \in P(r), x_r$. To this end we differentiate the dual with respect to $\lambda_{r \to p}(x_r)$ while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by the messages $\nu_{r \to k}$. It yields the update rules given in Fig. 1. Next we turn to optimizing the dual with respect to the Lagrange multipliers $\nu$.
Recall that each constraint $k \in K$ in the dual function given in Lemma 3.1 is associated with the linear constraint $\sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0$. Therefore we employ a Lagrange multiplier $\gamma_k$ for each $k$. For compact exposition, we introduce the Lagrangian associated with a constraint $k$, denoted by $L_k$:

$$L_k(\lambda, \nu) = \sum_{r \in N(k)} \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r) \Big) + \gamma_k \sum_{r \in N(k)} \nu_{r \to k}(x_r^k).$$

Differentiating $L_k$ with respect to $\nu_{r \to k}$ for all $r \in N(k)$ and using the optimality conditions, we arrive at

$$\nu_{r \to k}(x_r^k) = \log\Big( \alpha_{r,k} \cdot \frac{1 + \gamma_k}{-\gamma_k} \Big) \quad (5)$$

for all $r \in N(k)$, where

$$\alpha_{r,k} = \frac{\exp\big( \theta_r(x_r^k, \lambda) - \sum_{k' \in K_r \setminus k} \nu_{r \to k'}(x_r^k) \big)}{\sum_{x_r \neq x_r^k} \exp\big( \theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r) \big)}. \quad (6)$$

Summing the right hand side of Eq. (5) over $r \in N(k)$ and using the constraint $\sum_{r \in N(k)} \nu_{r \to k}(x_r^k) = 0$ results in

$$\frac{1 + \gamma_k}{-\gamma_k} = \prod_{r \in N(k)} \Big( \frac{1}{\alpha_{r,k}} \Big)^{1/|N(k)|}.$$

Finally, substituting this result back into Eq. (5) yields the desired update rule. We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation.

4 Experiments

We first demonstrate the applicability of the procedure using synthetic data. We then turn to image segmentation and machine translation, using real-world datasets. As our work directly improves the standard CBP approach, we use it as a baseline.

4.1 Synthetic Evaluation

Consider a binary variable $Y$ and a variable $X$ whose support consists of $L$ levels, $\{1, \ldots, L\}$. Assume we are given the following PN-consistency potential:

$$\theta_{x,y}(x, y) = \begin{cases} 0 & (y = 1 \wedge x = 1) \vee (y = 0 \wedge x \neq 1) \\ -c & \text{otherwise,} \end{cases} \quad (7)$$

where $c$ is some positive parameter. This potential encourages the assignment $y = 1$ to agree with the assignment $x = 1$, and $y = 0$ to agree with $x \in \{2, \ldots, L\}$. Phrased differently, this potential favours beliefs such that

$$b_y(y = 1) = b_x(x = 1), \qquad b_y(y = 0) = b_x(x \neq 1). \quad (8)$$

Therefore, one may replace the above potential by a single consistency constraint. Note that the above two constraints complement each other; hence, it suffices to include one of them. We use the first consistency constraint since it fits our derivation. We test this hypothesis by constructing four networks that consist of $n = 2v$ variables, $v = 50, 100, 150, 200$, where $v$ variables are binary, denoted by $Y$, and the other $v$ variables are multi-level, subsumed within $X$. Note that the support of variable $X_i$, $1 \le i \le v$, consists of $i$ states. Each multi-level variable is matched with a binary one. For each variable we randomly generate unary potentials according to the standard Gaussian distribution. We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in Eq. (7) with $c = 1$. In a next step we replace each such potential by its corresponding consistency constraint following Eq. (8). For each network we repeat this process 10 times and report the mean running time and standard deviation in Tab. 1. As expected, CBCBP is significantly faster than the standard CBP. Quantitatively, CBCBP was approximately 25 times faster for the smallest, and more than 31 times faster for the largest graphs. Obviously, different values of $c$ affect the conditioning of the problem and therefore also the running time of both CBP and CBCBP. To quantify this impact we repeat the experiment with $n = 200$ for distinct values of $c \in \{2, 4, 6, 8, 10\}$. In Tab. 2 we report the mean speedup factor over 10 repetitions for each value of $c$. As is clearly evident, the speedup factor increases substantially with $c$.
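For readers who want to trace the ν update of Fig. 1 numerically, the following sketch implements Eqs. (5)-(6) for a single constraint k over a set of regions. It assumes each region's reparameterized potentials θ_r(·, λ) are given as arrays and that k is the only constraint touching these regions (so the K_r \ k sums vanish); the function name and data layout are our choices, not the authors' code.

```python
import numpy as np

def nu_update(thetas, idx):
    """One closed-form nu-message update for a single constraint k (Fig. 1).

    thetas : list of 1-D arrays, thetas[r][x] = theta_r(x, lambda) for region r in N(k)
    idx    : list of ints, idx[r] = constrained assignment x_r^k of region r
    Returns nu[r] = nu_{r->k}(x_r^k); assumes k is the only constraint on these regions.
    """
    log_alpha = []
    for th, x_k in zip(thetas, idx):
        num = th[x_k]                          # log numerator of Eq. (6)
        mask = np.ones_like(th, dtype=bool)
        mask[x_k] = False                      # sum over x_r != x_r^k
        den = np.log(np.exp(th[mask]).sum())   # log denominator of Eq. (6)
        log_alpha.append(num - den)
    log_alpha = np.array(log_alpha)
    # Fig. 1: nu_{r->k}(x_r^k) = log alpha_{r,k} - mean_{r'} log alpha_{r',k}
    nu = log_alpha - log_alpha.mean()
    assert abs(nu.sum()) < 1e-10               # dual constraint: sum_r nu_{r->k} = 0
    return nu

# Toy example: two regions with three states each, constrained assignments (0, 2).
print(nu_update([np.array([1.0, 0.2, -0.5]), np.array([0.3, 0.1, 0.9])], [0, 2]))
```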
4.2 Image Segmentation

We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as well as the PascalVOC 2012 [4] dataset. Both contain 21 foreground classes. Each variable $X_i$ in our model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a binary variable $Y_i$ that indicates whether the super-pixel belongs to the foreground, i.e., $y_i = 1$, or to the background, i.e., $y_i = 0$. The model potentials are:

Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a convolutional neural network following the VGG16 architecture.

Foreground/background unary potentials: For MSRC-21 we let the value of the potential at $y_i = 1$ equal the value of the super-pixel unary potential that corresponds to the 'void' label, and for $y_i = 0$ we define it to be the maximum value of the super-pixel unary potential among the other labels. For PascalVOC 2012 we obtain the foreground/background potential by training another convolutional neural network, again following the VGG16 architecture.

Super-pixel - foreground/background consistency: We define pairwise potentials between super-pixels and the foreground/background labels using Eq. (7) and set $c = 1$. Naturally, these consistency potentials encourage CBP to favour beliefs where pixels that are labeled as 'void' are also labeled as 'background' and vice versa. This can also be formulated using the constraints $b_i(X_i = 0) = b_i(Y_i = 0)$ and $b_i(X_i \neq 1) = b_i(Y_i = 1)$.

We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the standard error measures of average per-class accuracy and average per-pixel accuracy, denoted as global. Performance results are provided in Tab. 3. Appealingly, our results indicate that CBCBP outperforms the standard CBP across both metrics. Moreover, as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is equal to or higher than CBP. Finally, CBCBP is more than 65 times faster than CBP. In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric for the VOC2012 data. We observe CBCBP to perform better since it is able to transfer information between the foreground/background classification and the semantic segmentation.

4.3 Machine Translation

We now consider the task of machine translation. We define a phrase-based translation model as a factor graph with many large constraints and use CBCBP to efficiently incorporate them during inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source language, the output of our phrase-based model consists of a segmentation of the source sentence into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of the phrase translations. See Fig. 2 for an illustration. We index variables in our model by $i = 1, \ldots, n$; they include source words (sw), source phrases (sp), and translation phrase slots (tp). The sequence of source words is first segmented into source phrases. The possible values for source word $sw$ are $X_{sw} = \{(sw_1, sw_2) : (sw_1 \le sw \le sw_2) \wedge (sw_2 - sw_1 < m)\}$, where $m$ is the maximum phrase length. If source phrase $sp$ is used in the derivation, we say that $sp$ aligns to a translation phrase slot $tp$. If $sp$ is not used, it aligns to $\emptyset$.
We define variables $X_{sp}$ to indicate what $sp$ aligns to: $X_{sp} = \{tp : sw_1 - d \le tp \le sw_2 + d\} \cup \{\emptyset\}$, i.e., all translation phrase slots $tp$ (numbered from left to right in the translation) such that the slot number is at most distance $d$ from an edge of $sp$. (Our distortion limit $d$ is based on distances from source words to translation slots, rather than distances between source words as in the Moses system [7].) Each translation phrase slot $tp$ generates actual target-language words which comprise the translation. We define variables $X_{tp}$ ranging over the possible target-language word sequences (translation phrases) that can be generated at slot $tp$. However, not all translation phrase slots must be filled in with translations. Beyond some value of $tp$ (equaling the number of source phrases used in the derivation), they must all be empty. To enforce this, we also permit a null ($\emptyset$) translation.

Consistency constraints: Many derivations defined by the discrete product space $X_1 \times \cdots \times X_n$ are semantically inconsistent. For example, a derivation may place the first source word into the source phrase (1, 2) and the second source word into (2, 3). This is problematic because the phrases overlap; each source word must be placed into exactly one source phrase. We introduce source word consistency constraints: $\forall sp, \forall sw \in sp : b_{sw}(sp) = b(sp)$. These constraints force the source word beliefs $b_{sw}(x_{sw})$ to agree on their span. There are other consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a translation phrase slot if and only if the source phrase is consistently chosen by all of its source words. Formally, $\forall sp : b(sp) = \sum_{x_{sp} \neq \emptyset} b_{sp}(x_{sp})$.

Phrase translation potentials: We use pairwise potential functions between source phrases $sp = (sw_1, sw_2)$ and their aligned translation phrase slots $tp$. We include a factor $\langle sp, tp \rangle \in E$ if $sw_1 - d \le tp \le sw_2 + d$. Letting $\pi_{sp}$ be the actual words in $sp$, the potentials $\theta_{sp,tp}(x_{sp}, x_{tp})$ determine the preference for the phrase translation $\langle \pi_{sp}, x_{tp} \rangle$ using a phrase table feature function $\tau : \langle \pi, \pi' \rangle \to \mathbb{R}^k$. In particular, $\theta_{sp,tp}(x_{sp}, x_{tp}) = \gamma_p^\top \tau(\langle \pi_{sp}, x_{tp} \rangle)$ if $x_{sp} = tp$, and a large negative value otherwise, where $\gamma_p$ is the weight vector for the Moses phrase table feature vector.

Language model potentials: To include n-gram language models, we add potentials that score pairs of consecutive target phrases, i.e., $\theta_{tp-1,tp}(x_{tp-1}, x_{tp}) = \gamma_\ell \sum_{i=1}^{|x_{tp}|} \log \Pr\big(x_{tp}^{(i)} \mid x_{tp-1} \cdot x_{tp}^{(1)} \cdots x_{tp}^{(i-1)}\big)$, where $|x_{tp}|$ is the number of words in $x_{tp}$, $x_{tp}^{(i)}$ is the $i$-th word in $x_{tp}$, $\cdot$ denotes string concatenation, and $\gamma_\ell$ is the feature weight. This potential sums n-gram log-probabilities of words in the second of the two target phrases. Internal n-gram features and the standard word penalty feature [7] are computed in the $\theta_{tp}$ potentials, since they depend only on the words in $x_{tp}$.

Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them from aligning to the same translation slot. We also prevent two overlapping source phrases from both aligning to non-null slots (i.e., one must align to $\emptyset$). We include a factor between two source phrases if there is a translation phrase slot that may relate to both, namely $\langle sp_1, sp_2 \rangle \in E$ if $\exists tp : \langle sp_1, tp \rangle \in E, \langle sp_2, tp \rangle \in E$. The source phrase separation potential $\theta_{sp_1,sp_2}(x_{sp_1}, x_{sp_2})$ is $-\infty$ if either $x_{sp_1} = x_{sp_2} \neq \emptyset$ or $sp_1 \cap sp_2 \neq \emptyset \wedge x_{sp_1} \neq \emptyset \wedge x_{sp_2} \neq \emptyset$.
Otherwise, it is $-\gamma_d \big| \delta(sp_1, sp_2) - |x_{sp_1} - x_{sp_2}| \big|$, where $\delta(sp_1, sp_2)$ returns the number of source words between the spans $sp_1$ and $sp_2$. This favors similar distances between source phrases and their aligned slots.

Experimental Setup: We consider German-to-English translation. As training data for constructing the phrase table, we use the WMT2011 parallel data [2], which contains 1.9M sentence pairs. We use the phrase table to compute $\theta_{sp,tp}$ and to fill $X_{tp}$. We use a bigram language model estimated from the English side of the parallel data along with 601M tokens of randomly-selected sentences from the Linguistic Data Consortium's Gigaword corpus. This is used when computing the $\theta_{tp-1,tp}$ potentials. As our test set, we use the first 150 sentences from the WMT2009 test set. Results below are (uncased) %BLEU scores [17] on this 150-sentence set. We use maximum phrase length $m = 3$ and distortion limit $d = 3$. We run 250 iterations of CBCBP for each sentence. For the feature weights ($\gamma$), we use the default weights in Moses, since our features are analogous to theirs. Learning the weights is left to future work.

Results: We compare to a simplified version of our model that omits the sw variables and all constraints and terms pertaining to them. This variation still contains all sp and tp variables and their factors. This comparison shows the contribution of our novel handling of consistency constraints. Tab. 6 shows our results. The consistency constraints lead to a large improvement for our model at a negligible increase in runtime, thanks to our closed-form update rules. We found it impractical to attempt to obtain these results using the standard CBP algorithm for source sentences of typical length. For comparison to a standard benchmark, we also trained a Moses system [7], a state-of-the-art phrase-based system, on the same data. We used default settings and feature weights, except that we used maximum phrase length 3 and no lexicalized reordering model, in order to more closely match the setting of our model. The Moses %BLEU on this dataset is 17.88. When using the source word consistency constraints, we are within 1.2 %BLEU of Moses. Our model has the virtue of being able to compute marginals for downstream applications, and it also permits us to study particular forms of constraints in phrase-based translation modeling. Future work can add or remove constraints, as we did in our experiments here, in order to determine the most effective constraints for phrase-based translation. Our efficient inference framework makes such exploration possible.

5 Related Work

Variational approaches to inference have been extensively studied in the past. We address approximate inference using the entropy barrier function, and there has been extensive work in this direction, e.g., [24, 14, 23, 5, 19, 20] to name a few. Our work differs since we incorporate consistency constraints within the inference engine; we show that closed-form update rules are still available. Consistency constraints are implied when using PN-potentials [9]; however, pairwise functions are included for every constraint, which is expensive if many constraints are involved. In contrast, constraints over the feasible instances are considered in [22, 13, 16, 12, 1]. While impressive results have been shown, each different restriction of the feasible set may require a tailored algorithm. In contrast, we propose to include probabilistic equalities among the model beliefs, which permits derivation of an algorithm that is generally applicable.
6 Conclusions

In this work we tackled the problem of inference with belief-based equality constraints, which arises when consistency among variables in the network is required. We introduced the CBCBP algorithm, which directly incorporates the constraints into the CBP framework and results in closed-form update rules. We demonstrated the merit of CBCBP both on synthetic data and on two real-world tasks. Our experiments indicate that CBCBP outperforms PN-potentials in both speed and accuracy. In the future we intend to incorporate our approximate inference with consistency constraints into learning frameworks, e.g., [15, 3].
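As a closing illustration of where the reported speedup comes from, the sketch below (our construction, not the authors' code; the function name and 0-based indexing of x are our choices) builds the PN-consistency potential of Eq. (7) for an L-state variable and contrasts the quantities a message-passing solver must maintain with the single belief constraint of Eq. (8) that replaces it.

```python
import numpy as np

def pn_potential(L, c=1.0):
    """PN-consistency potential of Eq. (7) between an L-state X and a binary Y.

    theta[x, y] with x in {0, ..., L-1} (index 0 plays the role of x = 1)
    and y in {0, 1}.
    """
    theta = np.full((L, 2), -c)   # default: penalty -c for inconsistent pairs
    theta[0, 1] = 0.0             # y = 1 agrees with x = 1
    theta[1:, 0] = 0.0            # y = 0 agrees with x in {2, ..., L}
    return theta

L = 100
theta = pn_potential(L)
# With the potential, CBP must maintain a pairwise belief with 2L entries plus
# the associated marginalization messages; with the belief constraint
# b_y(1) = b_x(1) of Eq. (8), the pairwise factor disappears entirely and one
# scalar nu-message per involved region replaces it.
print("pairwise entries kept by CBP:", theta.size)   # 200
print("extra dual variables for CBCBP:", 2)          # one nu per region in N(k)
```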
1. What is the focus and contribution of the paper on discrete graphical models?
2. What are the strengths and weaknesses of the proposed belief propagation algorithm?
3. How does the reviewer assess the novelty and significance of the derived closed-form solutions?
4. What are the limitations regarding the empirical comparison with other algorithms?
5. How does the reviewer evaluate the related work and previous research on inference with higher-order potentials?
6. Is the paper's content clear, well-organized, and easy to follow?
Review
Review The authors propose belief propagation for discrete graphical models with certain consistency constraints, related to PN-potentials. The contribution lies in deriving closed-form solutions to the belief-propagation operations in this more general model and showing empirically that it is advantageous over naively doing belief propagation on general factors that explicitly model the consistency constraints.
General comments:
(i) The authors only solve a new special kind of higher-order consistency constraint, generalizing soft PN-potentials, but not a truly general class of constraints, as indicated in the title or in the abstract.
(ii) I do not agree with what the authors say in lines 72-78. In the case of MAP-inference, which is normally desired, the goal is to obtain a single assignment which satisfies all given linear constraints. The proposed model (i.e., computing marginals) is then less desirable. The relaxed model the authors optimize is simply a byproduct of looking for marginals instead of MAP-assignments (the added entropy is responsible for this). In the case of vanishing entropy one gets the same model. Hence there certainly remains the disadvantage of a parameter in the PN-potential, but now hidden in the entropy. Additionally, when a MAP-solution is wanted, the proposed algorithm is in fact disadvantageous, as inference is not done w.r.t. the energy of a single MAP-solution, and rounding results in some arbitrariness of the obtained solution.
(iii) Experimental comparison:
- No comparison against solving MAP-inference with PN-potentials and existing inference algorithms is given.
- Experiments are very small scale, e.g., image segmentation is only done on superpixels. I do not consider such microbenchmarks very informative. One can usually just plug everything into an off-the-shelf LP-solver in such cases.
- To be able to really judge the algorithm, convergence plots would be helpful, but none are given.
In general, I deem the experimental comparison unconvincing: while many experiments have been performed, no real comparison against any algorithm other than CBP is performed.
(iv) Related work: Many references to work on inference with higher-order potentials are lacking. To name a few:
- Komodakis: Beyond pairwise energies: Efficient optimization for higher-order MRFs
- Tarlow et al.: HOP-MAP: Efficient Message Passing with High Order Potentials
- Kappes: Higher-order Segmentation via Multicuts
E.g., the work of Kappes shows how to very efficiently include PN-potentials in an LP-relaxation. Generally, efficient ways to handle PN-potentials have been explored before, which is not acknowledged in the authors' text.
(v) The work is rather incremental: the main contribution is the derivation of a new updating formula for convex belief propagation with consistency constraints.
Detailed comments:
- The entropy H is never defined.
- Notation is scattered around: some is defined in Section 2 and some in Section 3, but only informally in the text, making the article harder to read.
- Line 8: What is the standard approach?
- Line 71: "which is expect" -> "which is expected"
- Line 96: One can also derive duals when the primal is not strictly convex (duals exist even for non-convex programs).
NIPS
1. What is the focus of the paper regarding convexified belief propagation?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the experimental validation and argument support in the paper?
4. What are some concerns or suggestions regarding the inclusion of constraints and its impact on performance?
5. Are there any questions regarding the extension of CBCBP to a max-product version?
Review
Review This paper studies adding constraints on the values of beliefs during convexified belief propagation, as opposed to including factors in the model that penalize disagreement between random variables. This is a weaker form of constraint, but the authors argue that it is sufficient for many applications and show that it is less computationally expensive on synthetic data and benchmark problems without sacrificing significant accuracy.

The proposed method is appealing in that it provides a convenient way to bypass tuning the weights of PN-potentials for convexified BP. My opinion is that it is a straightforward derivation, but one that is worth sharing with the community. The experimental validation is sound: it supports the argument that constrained beliefs are sufficient for enforcing the domain knowledge typically encoded as PN-potentials. If anything, I think the argument that needs to be supported more is the claim that such constraints are useful in practice. It is sometimes considered conventional wisdom, and reinforced by the papers addressing the topic, but it would be good to also evaluate in this manuscript how accurate a model is on these tasks when it just includes untuned PN-potentials (so that they're probabilistic dependencies, not hard constraints), as well as some baseline that doesn't include them at all. I'm curious to know how much the inclusion of constraints improves performance.

Regarding related work, the discussion on lines 271-276 should be expanded. Some of the cited references, such as [1, 12, 13], use the alternating direction method of multipliers; for this reason, incorporating additional linear constraints on beliefs (or on continuous random variables in the case of [1]) is trivial. I think the distinguishing feature of this work is that it incorporates constrained beliefs into a BP algorithm that optimizes the objective using block-coordinate descent. Also, the references [1, 12, 13] are primarily focused on MAP inference. Would there be any complication in extending CBCBP to a max-product version?

After author response: I think the authors' plan of including empirical evidence supporting the argument for using constraints will strengthen the manuscript.
NIPS
Title Constraints Based Convex Belief Propagation

Abstract Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately, this comes at the price of a tremendous computational burden. In this paper we suggest tackling consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency-potential based approach, while being at least an order of magnitude faster.

1 Introduction

Markov random fields (MRFs) [10] are widely used across different domains, from computer vision and natural language processing to computational biology, because they are a general tool to describe distributions that involve multiple variables. The dependencies between variables are conveniently encoded via potentials that define the structure of a graph. Besides encoding dependencies, in a variety of real-world applications we often want consistent solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore, various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing constraints on the feasible instances. This was shown to be effective in practice; however, for each new constraint we may need to design a specifically tailored algorithm. Therefore, the most common approach to impose consistency is the usage of PN-consistency potentials [9]. This permits reuse of existing message-passing solvers, albeit at the expense of an additional computational burden, as real-world applications may involve hundreds of additional factors. Our goal in this work is to bypass this computational burden while remaining generally applicable. To do so, we consider the problem of inference when probabilistic equalities are imposed over the beliefs of the model rather than over its feasible instances. As we show in Sec. 3, the adaptive nature of message-passing algorithms conveniently allows for such probabilistic equality constraints within its framework. Since our method eliminates potentially many multivariate factors, inference is much more scalable than using PN-consistency potentials [9]. In this paper, for notational simplicity, we illustrate the belief-constraints-based message-passing rules using a framework known as convex belief propagation (CBP). We refer to the resulting algorithm as constraints based CBP (CBCBP).
However, we note that the same derivation can be used to obtain, e.g., a constraints-based tree-reweighted message-passing algorithm. We evaluate the benefits of our algorithm on semantic image segmentation and machine translation tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude faster than CBP.

2 Background

In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables $X = (X_1, \ldots, X_n)$. The distribution $p(x_1, \ldots, x_n)$ is assumed to factor into a product of non-negative potential functions, i.e., $p(x_1, \ldots, x_n) \propto \exp\left(\sum_r \theta_r(x_r)\right)$, where $r \subset \{1, \ldots, n\}$ is a subset of variable indices, which we use to restrict the domain via $x_r = (x_i)_{i \in r}$. The real-valued functions $\theta_r(x_r)$ assign a preference to each configuration of the variables in the subset $r$. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function $\theta_r(x_r)$ corresponds to a node. Nodes $\theta_r$ and $\theta_p$ can be connected if $r \subset p$. Hence the parent set $P(r)$ of a region $r$ contains index sets $p \in P(r)$ if $r \subset p$. Conversely, we define the set of children of region $r$ as $C(r) = \{c : r \in P(c)\}$. An important inference task is the computation of the marginal probabilities $p(x_r) = \sum_{x \setminus x_r} p(x)$. Whenever the region graph has no cycles, marginals are easily computed using belief propagation. Unfortunately, this algorithm may not converge in the presence of cycles. To address convergence, a variety of approximations have been suggested, one of which is known as convex belief propagation (CBP). CBP performs block-coordinate descent over the dual function of the following program:

$$\max_{b} \; \sum_{r, x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r: \; b_r(x_r) \ge 0, \;\; \sum_{x_r} b_r(x_r) = 1, \\ \forall r,\, p \in P(r),\, x_r: \; \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r). \end{cases} \quad (1)$$

This program is defined over marginal distributions $b_r(x_r)$ and incorporates their entropy $H(b_r)$ in addition to the potential functions $\theta_r$. In many real-world applications we require the solution to be consistent, i.e., hard constraints between some of the involved variables exist. For example, consider the case where $X_1, X_2$ are two binary variables such that for every feasible joint assignment $x_1 = x_2$. To encourage consistency while reusing general-purpose solvers, a PN-consistency potential [9] is often incorporated into the model:

$$\theta_{1,2}(x_1, x_2) = \begin{cases} 0 & x_1 = x_2 \\ -c & \text{otherwise.} \end{cases} \quad (2)$$

Hereby $c$ is a positive constant that is tuned to penalize violations of consistency. As $c$ increases, the following constraint holds:

$$b_1(X_1 = x_1) = b_2(X_2 = x_2). \quad (3)$$

However, the usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter $c$, (iii) large values of $c$ slow convergence, and (iv) large values of $c$ result in the corresponding beliefs being assigned zero probability mass, which is not desirable. To alleviate these issues we suggest enforcing the equality constraints given in Eq. (3) directly during optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as consistency constraints.
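As a quick numerical illustration of Eq. (2) (a snippet of our own, not part of the paper), exponentiating the PN-consistency potential under uniform unaries shows how a large c pushes the pairwise belief onto the diagonal:

import numpy as np

def pn_potential(L, c):
    # theta(x1, x2) = 0 on the diagonal (agreement), -c otherwise; cf. Eq. (2).
    theta = np.full((L, L), -c)
    np.fill_diagonal(theta, 0.0)
    return theta

theta = pn_potential(L=4, c=10.0)
joint = np.exp(theta)
joint /= joint.sum()    # pairwise belief b(x1, x2) under uniform unaries
print(joint.round(4))   # essentially diagonal: off-diagonal mass ~ exp(-c)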
At this point two notes are in order. First, we emphasize that utilizing consistency constraints instead of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that correspond to consistency potentials. It therefore results in an optimization problem with fewer functions, which is expected to be solvable more efficiently. Second, we highlight that the two approaches are not equivalent. Intuitively, as $c$ increases, we expect consistency constraints to yield better results than usage of PN-potentials. Indeed, as $c$ increases, the PN-consistency potential forces the joint distribution to be diagonal, i.e., $b(X_1 = i, X_2 = j) = 0$ for all $i \neq j$. However, the consistency constraint specified in Eq. (3) only requires the univariate marginals to agree. The latter is a considerably weaker requirement: a diagonal pairwise distribution implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently, using consistency constraints results in a larger search space, which is desirable.

Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP). Repeat until convergence:

Update $\lambda$ messages: for each $r$, update for all $p \in P(r)$, $x_r$:
$$\mu_{p \to r}(x_r) = \ln \sum_{x_p \setminus x_r} \exp\Big( \theta_p(x_p) - \sum_{p' \in P(p)} \lambda_{p \to p'}(x_p) + \sum_{r' \in C(p) \setminus r} \lambda_{r' \to p}(x_{r'}) - \sum_{k \in K_p} \nu_{p \to k}(x_p) \Big)$$
$$\lambda_{r \to p}(x_r) \propto \frac{1}{1 + |P(r)|} \Big( \theta_r(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c) + \sum_{p \in P(r)} \mu_{p \to r}(x_r) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) - \mu_{p \to r}(x_r)$$

Update $\nu$ messages: for each $k \in K$, update for all $r \in N(k)$, using $\alpha_{r,k}$ as defined in Eq. (6):
$$\nu_{r \to k}(x^k_r) = \log \alpha_{r,k} - \frac{1}{|N(k)|} \sum_{r' \in N(k)} \log \alpha_{r',k}$$

Figure 1: The CBCBP algorithm. Shown are the update rules for the $\lambda$ and $\nu$ messages.

Next we derive a general message-passing algorithm that aims at solving the optimization problem given in Eq. (1) subject to consistency constraints of the form given in Eq. (3).

3 Constraints Based Convex Belief Propagation (CBCBP)

To enforce consistency of beliefs we want to incorporate constraints of the form $b_{r_1}(x_{r_1}) = \ldots = b_{r_m}(x_{r_m})$. Each constraint involves a set of regions $r_i$ and some of their assignments $x_{r_i}$. If a constraint involves more than two regions, i.e., if $m > 2$, it is easier to formulate it as a series of constraints $b_{r_i}(x_{r_i}) = v$, $i \in \{1, \ldots, m\}$, for some constant $v$ that eventually cancels. Generally, given a constraint $k$, we define the set of its neighbours $N(k)$ to be the involved regions $r^k_i$ together with the involved assignments $x^k_{r_i}$, i.e., $N(k) = \{(r^k_i, x^k_{r_i})\}_{i=1}^{m_k}$. To simplify notation we subsequently write $r \in N(k)$ instead of $(r, x_r) \in N(k)$; it should be clear from the context that each region $r \in N(k)$ is matched with a value $x^k_r$. We subsume all constraints within the set $K$. Additionally, we let $K_r$ denote the set of all those constraints $k$ which depend on region $r$, i.e., $K_r = \{k : r \in N(k)\}$. Using the aforementioned notation, we are now ready to augment the conventional CBP program given in Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows:

$$\max_{b} \; \sum_{r, x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.} \quad \begin{cases} \forall r: \; b_r(x_r) \ge 0, \;\; \sum_{x_r} b_r(x_r) = 1 \\ \forall r,\, p \in P(r),\, x_r: \; \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r) \\ \forall k \in K,\, r \in N(k): \; b_r(x^k_r) = v_k. \end{cases} \quad (4)$$

To solve this program we observe that its constraint space exhibits a rich structure, defined on the one hand by the parent sets $P$ and on the other hand by the neighborhoods of the constraints subsumed in the set $K$. To exploit this structure, we aim at deriving the dual, which is possible because the program is strictly convex.
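Before deriving the dual, here is one possible bookkeeping for the sets N(k) and K_r introduced above (a sketch in our own notation; the dictionaries, region names, and states are illustrative, not from the authors' code):

constraints = {
    0: [("X1", 0), ("X2", 0)],   # constraint 0: b_{X1}(0) = b_{X2}(0)
    1: [("X2", 1), ("X3", 1)],   # constraint 1: b_{X2}(1) = b_{X3}(1)
}  # N(k) is the list of (region, state) pairs stored under key k

def invert_to_Kr(constraints):
    # K_r = {k : r in N(k)}: map each region to the constraints touching it.
    Kr = {}
    for k, neighbors in constraints.items():
        for region, _state in neighbors:
            Kr.setdefault(region, []).append(k)
    return Kr

print(invert_to_Kr(constraints))  # {'X1': [0], 'X2': [0, 1], 'X3': [1]}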
Importantly, we can subsequently derive block-coordinate updates for the dual variables which are computable in closed form. Hence solving the program given in Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing how to solve it efficiently.

Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained via the Lagrangian, as shown in the following lemma.

Lemma 3.1: The dual problem associated with the primal program given in Eq. (4) is
$$\min_{\lambda, \nu} \; \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) \quad \text{s.t.} \quad \forall k \in K: \; \sum_{r \in N(k)} \nu_{r \to k}(x^k_r) = 0,$$
where we set $\nu_{r \to k}(x_r) = 0$ for all $k \in K$, $r \in N(k)$, $x_r \neq x^k_r$, and where we introduced $\theta_r(x_r, \lambda) = \theta_r(x_r) - \sum_{p \in P(r)} \lambda_{r \to p}(x_r) + \sum_{c \in C(r)} \lambda_{c \to r}(x_c)$.

Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all $r$, $p \in P(r)$, $x_r$ we let $\lambda_{r \to p}(x_r)$ be the Lagrange multiplier associated with the marginalization constraint $\sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r)$. Similarly, for all $k \in K$, $r \in N(k)$, we let $\nu_{r \to k}(x^k_r)$ be the Lagrange multiplier associated with the constraint $b_r(x^k_r) = v_k$. The corresponding Lagrangian is then given by
$$L(b, \lambda, \nu) = \sum_{r, x_r} b_r(x_r) \Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) + \sum_r H(b_r) + \sum_{k \in K, \, r \in N(k)} \nu_{r \to k}(x^k_r)\, v_k,$$
with $\theta_r(x_r, \lambda)$ as above and $\nu_{r \to k}(x_r) = 0$ for all $k$, $r \in N(k)$, $x_r \neq x^k_r$. Due to conjugate duality between the entropy and the log-sum-exp function [25], the dual function is
$$D(\lambda, \nu) = \max_b L(b, \lambda, \nu) = \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k \in K_r} \nu_{r \to k}(x_r) \Big) + \sum_k v_k \sum_{r \in N(k)} \nu_{r \to k}(x^k_r).$$
The result follows since the dual function is linear in each $v_k$ and hence unbounded unless $\sum_{r \in N(k)} \nu_{r \to k}(x^k_r) = 0$, which yields the stated constraints.

Derivation of message-passing update rules: As mentioned before, we can derive block-coordinate descent update rules for the dual which are computable in closed form. Hence the dual given in Lemma 3.1 can be solved efficiently, as summarized in the following theorem.

Theorem 3.2: Block-coordinate descent over the dual problem given in Lemma 3.1 results in a message-passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP algorithm. It is guaranteed to converge.

Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithms, each region $r$ sends a message to its parents via the dual variable $\lambda_{r \to p}$. Differently from CBP but similar to distributed variants [19], our algorithm has another type of message, the $\nu$ messages. Conceptually, think of each constraint as a new node: a constraint node $k$ is connected to a region $r$ if $r \in N(k)$, and a region $r$ 'informs' the constraint node via the dual variable $\nu_{r \to k}$. We now show how to derive the message-passing rules that optimize the dual.

Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6]. We begin by optimizing the dual function given in Lemma 3.1 with respect to the $\lambda$ parameters. Specifically, for a chosen region $r$ we optimize the dual w.r.t. the block of Lagrange multipliers $\lambda_{r \to p}(x_r)$, $\forall p \in P(r), x_r$. To this end we differentiate the dual with respect to $\lambda_{r \to p}(x_r)$ while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by the messages $\nu_{r \to k}$; it yields the update rules given in Fig. 1.
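For intuition, each region's contribution to the dual of Lemma 3.1 is just a log-sum-exp of the reparameterized potential. A small sketch of evaluating one such term (our notation, with a standard stability shift; the inputs are made up):

import numpy as np

def region_dual_term(theta_r_lambda, nu_r):
    # One summand of D(lambda, nu): log sum_x exp(theta_r(x, lambda) - sum_k nu_{r->k}(x)).
    z = theta_r_lambda - nu_r
    m = z.max()
    return m + np.log(np.exp(z - m).sum())   # numerically stable log-sum-exp

theta_r_lambda = np.array([0.5, -0.2, 1.0])  # reparameterized potential over X_r
nu_r = np.array([0.0, 0.3, 0.0])             # nu is nonzero only at x_r = x_r^k
print(region_dual_term(theta_r_lambda, nu_r))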
Next we turn to optimizing the dual with respect to the Lagrange multipliers $\nu$. Recall that each constraint $k \in K$ in the dual function given in Lemma 3.1 is associated with the linear constraint $\sum_{r \in N(k)} \nu_{r \to k}(x^k_r) = 0$. We therefore employ a Lagrange multiplier $\gamma_k$ for each $k$. For compact exposition, we introduce the Lagrangian associated with a constraint $k$, denoted by $L_k$:
$$L_k(\lambda, \nu) = \sum_{r \in N(k)} \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r) \Big) + \gamma_k \sum_{r \in N(k)} \nu_{r \to k}(x^k_r).$$
Differentiating $L_k$ with respect to $\nu_{r \to k}$ for all $r \in N(k)$ and using the optimality conditions, we arrive at
$$\nu_{r \to k}(x^k_r) = \log\Big( \alpha_{r,k} \cdot \frac{1 + \gamma_k}{-\gamma_k} \Big) \quad (5)$$
for all $r \in N(k)$, where
$$\alpha_{r,k} = \frac{\exp\big( \theta_r(x^k_r, \lambda) - \sum_{k' \in K_r \setminus k} \nu_{r \to k'}(x^k_r) \big)}{\sum_{x_r \setminus x^k_r} \exp\big( \theta_r(x_r, \lambda) - \sum_{k' \in K_r} \nu_{r \to k'}(x_r) \big)}. \quad (6)$$
Summing the right-hand side of Eq. (5) over $r \in N(k)$ and using the constraint $\sum_{r \in N(k)} \nu_{r \to k}(x^k_r) = 0$ results in
$$\frac{1 + \gamma_k}{-\gamma_k} = \prod_{r \in N(k)} \Big( \frac{1}{\alpha_{r,k}} \Big)^{1/|N(k)|}.$$
Finally, substituting this result back into Eq. (5) yields the desired update rule. We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation.

4 Experiments

We first demonstrate the applicability of the procedure using synthetic data. We then turn to image segmentation and machine translation, using real-world datasets. As our work directly improves the standard CBP approach, we use it as a baseline.

4.1 Synthetic Evaluation

Consider a binary variable $Y$ and a variable $X$ whose support consists of $L$ levels, $\{1, \ldots, L\}$. Assume we are given the following PN-consistency potential:
$$\theta_{x,y}(x, y) = \begin{cases} 0 & (y = 1 \wedge x = 1) \vee (y = 0 \wedge x \neq 1) \\ -c & \text{otherwise,} \end{cases} \quad (7)$$
where $c$ is some positive parameter. This potential encourages the assignment $y = 1$ to agree with $x = 1$, and $y = 0$ to agree with $x \in \{2, \ldots, L\}$. Phrased differently, this potential favours beliefs such that
$$b_y(y = 1) = b_x(x = 1), \qquad b_y(y = 0) = b_x(x \neq 1). \quad (8)$$
Therefore, one may replace the above potential with a single consistency constraint. Note that the two constraints above complement each other; hence it suffices to include one of them, and we use the left one since it fits our derivation. We test this hypothesis by constructing four networks that consist of $n = 2v$, $v = 50, 100, 150, 200$ variables, where $v$ variables are binary, denoted by $Y$, and the other $v$ variables are multi-level, subsumed within $X$. Note that the support of variable $X_i$, $1 \le i \le v$, consists of $i$ states. Each multi-level variable is matched with a binary one. For each variable we randomly generate unary potentials according to the standard Gaussian distribution. We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in Eq. (7) with $c = 1$. In a next step we replace each such potential by its corresponding consistency constraint following Eq. (8). For each network we repeat this process 10 times and report the mean running time and standard deviation in Tab. 1. As expected, CBCBP is significantly faster than standard CBP: approximately 25 times faster for the smallest graphs, and more than 31 times faster for the largest. Obviously, different values of $c$ affect the convexity of the problem and therefore also the running time of both CBP and CBCBP. To quantify this impact we repeat the experiment with $n = 200$ for distinct values of $c \in \{2, 4, 6, 8, 10\}$. In Tab. 2 we report the mean speedup factor over 10 repetitions for each value of $c$. As is clearly evident, the speedup factor increases substantially with $c$.
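Before turning to the real-world tasks, here is a toy rendition of the closed-form ν update from Fig. 1, i.e., Eq. (5) after substituting for γ_k (our code, not the authors'): centering the log α_{r,k} values makes the messages satisfy the sum-to-zero dual constraint by construction.

import numpy as np

def nu_update(alpha_k):
    # nu_{r->k}(x_r^k) = log alpha_{r,k} - (1/|N(k)|) sum_{r'} log alpha_{r',k}
    log_alpha = np.log(alpha_k)
    return log_alpha - log_alpha.mean()

alpha_k = np.array([0.8, 1.3, 0.5])  # one alpha_{r,k} per region r in N(k)
nu = nu_update(alpha_k)
print(nu, nu.sum())                  # the messages sum to zero (up to float error)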
4.2 Image Segmentation

We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as well as the PascalVOC 2012 dataset [4]. Both contain 21 foreground classes. Each variable $X_i$ in our model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a binary variable $Y_i$ that indicates whether the super-pixel belongs to the foreground ($y_i = 1$) or to the background ($y_i = 0$). The model potentials are:

Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a convolutional neural network following the VGG16 architecture.

Foreground/background unary potentials: For MSRC-21 we let the value of the potential at $y_i = 1$ equal the value of the super-pixel unary potential that corresponds to the 'void' label, and for $y_i = 0$ we define it to be the maximum value of the super-pixel unary potential among the other labels. For PascalVOC 2012 we obtain the foreground/background potential by training another convolutional neural network, again following the VGG16 architecture.

Super-pixel - foreground/background consistency: We define pairwise potentials between the super-pixel and the foreground/background labels using Eq. (7) and set $c = 1$. Naturally, these consistency potentials encourage CBP to favour beliefs where pixels that are labeled as 'void' are also labeled as 'background' and vice versa. This can also be formulated using the constraints $b_i(X_i = \text{'void'}) = b_i(Y_i = 0)$ and $b_i(X_i \neq \text{'void'}) = b_i(Y_i = 1)$.

We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the standard error measures of average per-class accuracy and average per-pixel accuracy, denoted as global. Performance results are provided in Tab. 3. Appealingly, our results indicate that CBCBP outperforms standard CBP across both metrics. Moreover, as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is equal to or higher than CBP's. Finally, CBCBP is more than 65 times faster than CBP. In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric for the VOC2012 data. We observe CBCBP to perform better, since it is able to transfer information between the foreground-background classification and the semantic segmentation.

4.3 Machine Translation

We now consider the task of machine translation. We define a phrase-based translation model as a factor graph with many large constraints and use CBCBP to incorporate them efficiently during inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source language, the output of our phrase-based model consists of a segmentation of the source sentence into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of the phrase translations. See Fig. 2 for an illustration. We index variables in our model by $i = 1, \ldots, n$; they include source words (sw), source phrases (sp), and translation phrase slots (tp). The sequence of source words is first segmented into source phrases. The possible values for source word $sw$ are $X_{sw} = \{(sw_1, sw_2) : (sw_1 \le sw \le sw_2) \wedge (sw_2 - sw_1 < m)\}$, where $m$ is the maximum phrase length. If source phrase $sp$ is used in the derivation, we say that $sp$ aligns to a translation phrase slot $tp$; if $sp$ is not used, it aligns to $\varnothing$.
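As a quick illustration of the domain X_sw just defined, here is a helper of our own (not from the paper) that enumerates all spans of length at most m covering source position sw in a sentence of n words, with 1-indexed positions:

def source_word_domain(sw, n, m=3):
    # X_sw = {(sw1, sw2) : sw1 <= sw <= sw2 and sw2 - sw1 < m}
    return [(sw1, sw2)
            for sw1 in range(max(1, sw - m + 1), sw + 1)
            for sw2 in range(sw, min(n, sw1 + m - 1) + 1)]

print(source_word_domain(sw=2, n=5))  # [(1, 2), (1, 3), (2, 2), (2, 3), (2, 4)]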
We define variables $X_{sp}$ to indicate what $sp$ aligns to: $X_{sp} = \{tp : sw_1 - d \le tp \le sw_2 + d\} \cup \{\varnothing\}$, i.e., all translation phrase slots $tp$ (numbered from left to right in the translation) such that the slot number is at most distance $d$ from an edge of $sp$. (Our distortion limit $d$ is based on distances from source words to translation slots, rather than on distances between source words as in the Moses system [7].) Each translation phrase slot $tp$ generates actual target-language words which comprise the translation. We define variables $X_{tp}$ ranging over the possible target-language word sequences (translation phrases) that can be generated at slot $tp$. However, not all translation phrase slots must be filled with translations: beyond some value of $tp$ (equaling the number of source phrases used in the derivation), they must all be empty. To enforce this, we also permit a null ($\varnothing$) translation.

Consistency constraints: Many derivations defined by the discrete product space $X_1 \times \cdots \times X_n$ are semantically inconsistent. For example, a derivation may place the first source word into the source phrase $(1, 2)$ and the second source word into $(2, 3)$. This is problematic because the phrases overlap; each source word must be placed into exactly one source phrase. We introduce source word consistency constraints: $\forall sp, \forall sw \in sp: b_{sw}(sp) = b(sp)$. These constraints force the source word beliefs $b_{sw}(x_{sw})$ to agree on their span. There are other consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a translation phrase slot if and only if the source phrase is consistently chosen by all of its source words. Formally, $\forall sp: b(sp) = \sum_{x_{sp} \neq \varnothing} b_{sp}(x_{sp})$.

Phrase translation potentials: We use pairwise potential functions between source phrases $sp = (sw_1, sw_2)$ and their aligned translation phrase slots $tp$. We include a factor $\langle sp, tp \rangle \in E$ if $sw_1 - d \le tp \le sw_2 + d$. Letting $\pi_{sp}$ be the actual words in $sp$, the potentials $\theta_{sp,tp}(x_{sp}, x_{tp})$ determine the preference for the phrase translation $\langle \pi_{sp}, x_{tp} \rangle$ using a phrase table feature function $\tau: \langle \pi, \pi' \rangle \to \mathbb{R}^k$. In particular, $\theta_{sp,tp}(x_{sp}, x_{tp}) = \gamma_p^\top \tau(\langle \pi_{sp}, x_{tp} \rangle)$ if $x_{sp} = tp$, and a large negative value otherwise, where $\gamma_p$ is the weight vector for the Moses phrase table feature vector.

Language model potentials: To include $n$-gram language models, we add potentials that score pairs of consecutive target phrases, i.e., $\theta_{tp-1,tp}(x_{tp-1}, x_{tp}) = \gamma_\ell \sum_{i=1}^{|x_{tp}|} \log \Pr\big(x^{(i)}_{tp} \mid x_{tp-1} \cdot x^{(1)}_{tp} \cdots x^{(i-1)}_{tp}\big)$, where $|x_{tp}|$ is the number of words in $x_{tp}$, $x^{(i)}_{tp}$ is the $i$-th word of $x_{tp}$, $\cdot$ denotes string concatenation, and $\gamma_\ell$ is the feature weight. This potential sums $n$-gram log-probabilities of the words in the second of the two target phrases. Internal $n$-gram features and the standard word penalty feature [7] are computed in the $\theta_{tp}$ potentials, since they depend only on the words in $x_{tp}$.

Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them from aligning to the same translation slot. We also prevent two overlapping source phrases from both aligning to non-null slots (i.e., one must align to $\varnothing$). We include a factor between two source phrases if there is a translation phrase slot that may relate to both, namely $\langle sp_1, sp_2 \rangle \in E$ if $\exists tp: \langle sp_1, tp \rangle \in E \wedge \langle sp_2, tp \rangle \in E$. The source phrase separation potential $\theta_{sp_1,sp_2}(x_{sp_1}, x_{sp_2})$ is $-\infty$ if either $x_{sp_1} = x_{sp_2} \neq \varnothing$, or $sp_1 \cap sp_2 \neq \varnothing \wedge x_{sp_1} \neq \varnothing \wedge x_{sp_2} \neq \varnothing$; otherwise it is the distance-based penalty $-\gamma_d \,\big|\delta(sp_1, sp_2) - |x_{sp_1} - x_{sp_2}|\big|$ given above.
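To make the separation potential concrete, here is a minimal Python sketch of the full scoring rule (function name and conventions are ours, not the paper's): spans are (start, end) word-index pairs, slots are integers, None stands in for the null slot, and we assume a zero score when either phrase aligns to null.

import math

def separation_potential(sp1, sp2, slot1, slot2, gamma_d=1.0):
    # Two source phrases may never share a non-null translation slot.
    if slot1 is not None and slot1 == slot2:
        return -math.inf
    # Overlapping source phrases: at least one must align to the null slot.
    overlap = not (sp1[1] < sp2[0] or sp2[1] < sp1[0])
    if overlap and slot1 is not None and slot2 is not None:
        return -math.inf
    if slot1 is None or slot2 is None:
        return 0.0  # assumption: no distortion penalty for null alignments
    # delta(sp1, sp2): number of source words strictly between the two spans.
    delta = max(0, max(sp1[0], sp2[0]) - min(sp1[1], sp2[1]) - 1)
    return -gamma_d * abs(delta - abs(slot1 - slot2))

print(separation_potential((1, 2), (4, 5), 1, 2))  # -0.0: one-word gap, adjacent slots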
1. What is the main contribution of the paper in terms of marginal inference? 2. What is the novelty of the proposed dual belief propagation algorithm? 3. What are the limitations of the paper regarding its applications and impact? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any suggestions for improving the paper or expanding its scope?
Review
Review In this paper, the authors propose a dual belief propagation algorithm for marginal inference with a special type of constraint: some variables' assignments have to agree. They enforce this by requiring the beliefs of these variables to be equal, adding these equalities to the local marginal polytope to form a new constraint set. Together with the original linear objective function (with an entropy term), this forms an LP problem whose dual admits a belief propagation algorithm. The idea is straightforward and comes as no surprise.

Minor issues: (1) The title is not informative; it does not really say what the task is or how it is addressed. (2) 'Consistency structure' is not a widely accepted term and perhaps should not be used in the abstract; it would be even better to avoid it altogether. (3) The constraint that the proposed BP handles is quite special, which restricts its impact.

Overall, I am leaning towards accepting it, for it is a useful addition to the community.

== Post rebuttal == I still would like to accept it.
NIPS
Title Constraints Based Convex Belief Propagation Abstract Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest to tackle consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as the Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency potential based approach, while being at least an order of magnitude faster. N/A Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest to tackle consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as the Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency potential based approach, while being at least an order of magnitude faster. 1 Introduction Markov random fields (MRFs) [10] are widely used across different domains from computer vision and natural language processing to computational biology, because they are a general tool to describe distributions that involve multiple variables. The dependencies between variables are conveniently encoded via potentials that define the structure of a graph. Besides encoding dependencies, in a variety of real-world applications we often want consistent solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore, various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing constraints on the feasible instances. This was shown to be effective in practice. However for each new constraint we may need to design a specifically tailored algorithm. Therefore, the most common approach to impose consistency is usage of PN-consistency potentials [9]. This permits reuse of existing message passing solvers, however, at the expense of an additional computational burden, as real-world applications may involve hundreds of additional factors. Our goal in this work is to bypass this computational burden while being generally applicable. To do so, we consider the problem of inference when probabilistic equalities are imposed over the beliefs of the model rather than its feasible instances. As we show in Sec. 3, the adaptive nature of message passing algorithms conveniently allows for such probabilistic equality constraints within its framework. Since our method eliminates potentially many multivariate factors, inference is much more scalable than using PN-consistency potentials [9]. In this paper, for notational simplicity, we illustrate the belief constraints based message passing rules using a framework known as convex belief propagation (CBP). We refer to the illustrated algorithm as constraints based CBP (CBCBP). 
However we note that the same derivation can be used to obtain, e.g., a constraints based tree-reweighted message passing algorithm. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. We evaluate the benefits of our algorithm on semantic image segmentation and machine translation tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude faster than CBP. 2 Background In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables X = (X1, . . . , Xn). The distribution p(x1, . . . , xn) is assumed to factor into a product of non-negative potential functions, i.e., p(x1, . . . , xn) ∝ exp ( ∑ r θr(xr)) , where r ⊂ {1, ..., n} is a subset of variable indices, which we use to restrict the domain via xr = (xi)i∈r. The real-valued functions θr(xr) assign a preference to each of the variables in the subset r. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function θr(xr) corresponds to a node. Nodes θr and θp can be connected if r ⊂ p. Hence the parent set P (r) of a region r contains index sets p ∈ P (r) if r ⊂ p. Conversely we define the set of children of region r as C(r) = {c : r ∈ P (c)}. An important inference task is computation of the marginal probabilities p(xr) = ∑ x\xr p(x). Whenever the region graph has no cycles, marginals are easily computed using belief propagation. Unfortunately, this algorithm may not converge in the presence of cycles. To fix convergence a variety of approximations have been suggested, one of which is known as convex belief propagation (CBP). CBP performs block-coordinate descent over the dual function of the following program: max br ∑ r,xr br(xr)θr(xr)+ ∑ r H(br) s.t. { ∀r br(xr) ≥ 0, ∑ xr br(xr) = 1, ∀r, p ∈ P (r), xr ∑ xp\xr bp(xp) = br(xr). (1) This program is defined over marginal distributions br(xr) and incorporates their entropy H(br) in addition to the potential function θr. In many real world applications we require the solution to be consistent, i.e., hard constraints between some of the involved variables exist. For example, consider the case where X1, X2 are two binary variables such that for every feasible joint assignment, x1 = x2. To encourage consistency while reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model: θ1,2(x1, x2) = { 0 x1 = x2 −c otherwise . (2) Hereby c is a positive constant that is tuned to penalize for the violation of consistency. As c increases, the following constraint holds: b1(X1 = x1) = b2(X2 = x2). (3) However, usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter c, (iii) large values of c reduce convergence, and (iv) large values of c result in corresponding beliefs being assigned zero probability mass which is not desirable. To alleviate these issues we suggest to enforce the equality constraints given in Eq. (3) directly during optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as consistency constraints. At this point two notes are in place. 
First we emphasize that utilizing consistency constraints instead of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that correspond to consistency potentials. Therefore it results in an optimization problem with fewer functions, which is expect to be more efficiently solvable. Second we highlight that the two approaches are not equivalent. Intuitively, as c increases, we expect consistency constraints to yield better results than usage of PN-potentials. Indeed, as c increases, the PN-consistency potential enforces the joint distribution to be diagonal, i.e., b(X1 = i,X2 = j) = 0, ∀i 6= j. However, the consistency constraint as specified in Eq. (3) only requires the univariate marginals to agree. The latter is a considerably weaker requirement, as a diagonal pairwise distribution implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently, using consistency constraints results in a larger search space, which is desirable. Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP) Repeat until convergence: Update λ messages - for each r update for all p ∈ P (r), xr: µp→r(xr)= ln ∑ xp\xr exp θr(xr)−∑ p′∈P (p) λp→p′(xp) + ∑ r′∈C(p)\r λr′→p(xr′)− ∑ k∈Kp νp→k(xp) λr→p(xr)∝ 1 1 + |P (r)| θr(xr) +∑ c∈C(r) λc→r(xc) + ∑ p∈P (r) µp→r(xr)− ∑ k∈Kr νr→k(xr) −µp→r(xr) Update ν messages - for each k ∈ K update for all r ∈ N(k) using αr,k as defined in Eq. (6): νr→k(s k r ) = logαr,k − 1 |N(k)| ∑ r′∈N(k) logαr′,k Figure 1: The CBCBP algorithm. Shown are the update rules for the λ and ν messages. Next we derive a general message-passing algorithm that aims at solving the optimization problem given in Eq. (1) subject to consistency constraints of the form given in Eq. (3). 3 Constraints Based Convex Belief Propagation (CBCBP) To enforce consistency of beliefs we want to incorporate constraints of the form br1(xr1) = . . . = brm(xrm). Each constraint involves a set of regions ri and some of their assignments xri . If this constraint involves more than two regions, i.e., if m > 2, it is easier to formulate the constraint as a series of constraints bri(xri) = v, i ∈ {1, . . . ,m}, for some constant v that eventually cancels. Generally, given a constraint k, we define the set of its neighbours N(k) to be the involved regions rki as well as the involved assignment x k ri , i.e., N(k) = {r k i , x k ri} mk i=1. To simplify notation we subsequently use r ∈ N(k) instead of (r, xr) ∈ N(k). However, it should be clear from the context that each region rk is matched with a value xkr . We subsume all constraints within the set K. Additionally, we let Kr denote the set of all those constraints k which depend on region r, i.e., Kr = {k : r ∈ N(k)}. Using the aforementioned notation we are now ready to augment the conventional CBP given in Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows: max br ∑ r,xr br(xr)θr(xr) + ∑ r H(br) s.t. ∀r br(xr) ≥ 0, ∑ xr br(xr) = 1 ∀r, p ∈ P (r), xr ∑ xp\xr bp(xp) = br(xr) ∀k ∈ K, r ∈ N(k) br(xkr ) = vk . (4) To solve this program we observe that its constraint space exhibits a rich structure, defined on the one hand by the parent set P , and on the other hand by the neighborhood of the constraint subsumed in the set K. To exploit this structure, we aim at deriving the dual which is possible because the program is strictly convex. 
Importantly we can subsequently derive block-coordinate updates for the dual variables, which are efficiently computable in closed form. Hence solving the program given in Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing how to efficiently solve it. Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained by using the Lagrangian as shown in the following lemma. Lemma 3.1.: The dual problem associated with the primal program given in Eq. (4) is: min λ,ν ∑ r log ∑ xr exp ( θr(xr, λ)− ∑ k∈Kr νr→k(xr) ) s.t. ∀k ∈ K, ∑ r∈N(k) νr→k(x k r ) = 0, where we set νr→k(xr) = 0 ∀k ∈ K, r ∈ N(k), xr 6= xkr and where we introduced θr(xr, λ) = θr(xr)− ∑ p∈P (r) λr→p(xr) + ∑ c∈C(r) λc→r(sc). Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all r, p ∈ P (r), xr we let λr→p(xr) be the Lagrange multiplier associated with the marginalization constraint ∑ xp\xr bp(xp) = br(xr). Similarly for all k ∈ K, r ∈ N(k), we let νr→k(x k r ) be the Lagrange multiplier that is associated with the constraint br(x k r ) = vk. The corresponding Lagrangian is then given by L(b, λ, ν) = ∑ r,xr br(xr) ( θr(xr, λ)− ∑ k∈Kr νr→k(xr) ) + ∑ r H(br) + ∑ k∈K,r∈N(k) νr→k(x k r )vk, where θr(xr, λ) = θr(xr) − ∑ p∈P (r) λr→p(xr) + ∑ c∈C(r) λc→r(xc) and νr→k(xr) = 0 for all k, r ∈ N(k), xr 6= xkr . Due to conjugate duality between the entropy and the log-sum-exp function [25], the dual function is: D(λ, ν) = max b L(b, λ, ν) = ∑ r log ∑ xr exp ( θr(xr, λ)− ∑ k∈Kr νr→k(xr) ) + ∑ k vk ∑ r∈N(k) νr→k(x k r ). The result follows since the dual function is unbounded from below with respect to the Lagrange multipliers νr→k(xkr ), requiring constraints. Derivation of message passing update rules: As mentioned before we can derive blockcoordinate descent update rules for the dual which are computable in closed form. Hence the dual given in Lemma 3.1 can be solved efficiently, which is summarized in the following theorem: Theorem 3.2.: A block coordinates descent over the dual problem giving in Lemma 3.1 results in a message passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP algorithm. It is guaranteed to converge. Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithm, each region r sends a message to its parents via the dual variable λr→p. Differently from CBP but similar to distributed variants [19], our algorithm has another type of messages, i.e., the ν messages. Conceptually, think of the constraints as a new node. A constraint node k is connected to a region r if r ∈ N(k). Hence, a region r ‘informs’ the constraint node using the dual variable νr→k. We now show how to derive the message passing rules to optimize the dual. Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6]. Next we begin by optimizing the dual function given in Lemma 3.1 with respect to the λ parameters. Specifically, for a chosen region r we optimize the dual w.r.t. a block of Lagrange multipliers λr→p(xr) ∀p ∈ P (r), xr. To this end we derive the dual with respect to λr→p(xr) while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by messages νr→k. It yields the update rules given in Fig. 1. Next we turn to optimizing the dual with respect to the Lagrange multipliers ν. 
Recall that each constraint k ∈ K in the dual function given in Lemma 3.1 is associated with the linear constraint∑ r∈N(k) νr→k(x k r ) = 0. Therefore we employ a Lagrange multiplier γk for each k. For compact exposition, we introduce the Lagrangian that is associated with a constraint k, denoted by Lk: Lk(λ, ν) = ∑ r∈N(k) log ∑ xr exp ( θr(xr, λ)− ∑ k∈Kr νr→k(xr) ) + γk ∑ r∈N(k) νr→k(x k r ) . Deriving Lk with respect to νr→k ∀r ∈ N(k) and using optimality conditions, we then arrive at: νr→k(x k r ) = log ( αr,k · 1 + γk −γk ) (5) for all r ∈ N(k), where αr,k = exp ( θr(x k r , λ)− ∑ k′∈Kr\k νr→k′(x k r ) ) ∑ xr\xkr exp ( θr(xr, λ)− ∑ k′∈Kr νr→k′(xr) ) . (6) Summing the right hand side of Eq. (5) over r ∈ N(k) and using the constraint ∑ r∈N(k) νr→k(x k r ) = 0 results in 1 + γk −γk = ∏ r∈N(k) 1 αr,k 1|N(k)| . Finally, substituting this result back into Eq. (5) yields the desired update rule. We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation. 4 Experiments We first demonstrate the applicability of the procedure using synthetic data. We then turn to image segmentation and machine translation, using real-world datasets. As our work directly improves the standard CBP approach, we use it as a baseline. 4.1 Synthetic Evaluation Consider two binary variables X and Y whose support consists of L levels, {1, . . . , L}. Assume we are given the following PN-consistency potential: θx,y(x, y) = { 0 (y = 1 ∧ x = 1) ∨ (y = 0 ∧ x 6= 1) −c otherwise, (7) where c is some positive parameter. This potential encourages the assignment y = 1 to agree with the assignment x = 1 and y = 0 to agree with x = {2, . . . , L}. Phrased differently, this potential favours beliefs such that: by(y = 1) = bx(x = 1), by(y = 0) = bx(x 6= 1). (8) Therefore, one may replace the above potential using a single consistency constraint. Note that the above two constraints complement each other, hence, it suffices to include one of them. We use the left consistency constraint since it fits our derivation. We test this hypothesis by constructing four networks that consist of n = 2v, v = 50, 100, 150, 200 variables, where v variables are binary, denoted by Y and the other v variables are multi-levels, subsumed within X. Note that the support of variable Xi, 1 ≤ i ≤ v, consists of i states. Each multi-level variable is matched with a binary one. For each variable we randomly generate unary potentials according to the standard Gaussian distribution. We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in Eq. (7) with c = 1. In a next step we replace each such potential by its corresponding consistency constraint following Eq. (8). For each network we repeat this process 10 times and report the mean running time and standard deviation in Tab. 1. As expected, CBCBP is significantly faster than the standard CBP. Quantitatively, CBCBP was approximately 25 times faster for the smallest, and more than 31 times faster for the largest graphs. Obviously, different values of c effect the convexity of the problem and therefore also the running time of both CBP and CBCBP. To quantify its impact we repeat the experiment with n = 200 for distinct values of c ∈ {2, 4, 6, 8, 10}. In Tab. 2 we report the mean speedup factor over 10 repetitions, for each value of c. As clearly evident, the speedup factors substantially increases with c. 
4.2 Image Segmentation We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as well as the PascalVOC 2012 [4] dataset. Both contain 21 foreground classes. Each variable Xi in our model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a binary variable Yi, that indicates whether the super-pixel belongs to the foreground, i.e., yi = 1, or to the background, i.e., yi = 0. The model potentials are: Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a convolutional neural network following the VGG16 architecture. Foreground/Background unary potentials: For MSRC-21 we let the value of the potential at yi = 1 equal the value of the super-pixel unary potential that corresponds to the ‘void’ label, and for yi = 0 we define it to be the maximum value of the super-pixel unary potential among the other labels. For PascalVOC 2012 we obtain the foreground/background potential by training another convolutional neural network following again the VGG16 architecture. Super pixel - foreground/background consistency: We define pairwise potentials between superpixel and the foreground/background labels using Eq. (7) and set c = 1. Naturally, these consistency potentials encourage CBP to favour beliefs where pixels that are labeled as ‘void’ are also labeled as ‘background’ and vice versa. This can also be formulated using the constraints bi(Xi = 0) = bi(Yi = 0) and bi(Xi 6= 1) = bi(Yi = 1). We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the standard error measure of average per class accuracy and average per pixel accuracy, denoted as global. Performances results are provided in Tab. 3. Appealingly, our results indicate that CBCBP outperforms the standard CBP, across both metrics. Moreover and as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is equal to or higher than CBP. At last, CBCBP is more than 65 times faster than CBP. In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric for the VOC2012 data. We observe CBCBP to perform better since it is able to transfer information between the foreground-background classification and the semantic segmentation. 4.3 Machine Translation We now consider the task of machine translation. We define a phrase-based translation model as a factor graph with many large constraints and use CBCBP to efficiently incorporate them during inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source language, the output of our phrase-based model consists of a segmentation of the source sentence into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of the phrase translations. See Fig. 2 for an illustration. We index variables in our model by i = 1, . . . , n, which include source words (sw), source phrases (sp), and translation phrase slots (tp). The sequence of source words is first segmented into source phrases. The possible values for source word sw are Xsw = {(sw1, sw2) : (sw1 ≤ sw ≤ sw2) ∧ (sw2 − sw1 < m)}, where m is the maximum phrase length. If source phrase sp is used in the derivation, we say that sp aligns to a translation phrase slot tp. If sp is not used, it aligns to ∅. 
We define variables Xsp to indicate what sp aligns to: Xsp = {tp : sw1 − d ≤ tp ≤ sw2 + d} ∪ {∅}, i.e., all translation phrase slots tp (numbered from left to right in the translation) such that the slot number is at most distance d from an edge of sp.1 Each translation phrase slot tp generates actual target-language words which comprise the translation. We define variables Xtp ranging over the possible target-language word sequences (translation phrases) that can be generated at slot tp. However, not all translation phrase slots must be filled in with translations. Beyond some value of tp (equaling the number of source phrases used in the derivation), they must all be empty. To enforce this, we also permit a null (∅) translation. Consistency constraints: Many derivations defined by the discrete product space X1 × · · · ×Xn are semantically inconsistent. For example, a derivation may place the first source word into the source phrase (1, 2) and the second source word into (2, 3). This is problematic because the phrases overlap; each source word must be placed into exactly one source phrase. We introduce source word consistency constraints: ∀sp,∀sw ∈ sp : bsw(sp) = b(sp). These constraints force the source word beliefs bsw(xsw) to agree on their span. There are other consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a translation phrase slot if and only if the source phrase is consistently chosen by all of its source words. Formally, ∀ sp : b(sp) = ∑ xsp 6=∅ bsp(xsp). Phrase translation potentials: We use pairwise potential functions between source phrases sp = (sw1, sw2) and their aligned translation phrase slots tp. We include a factor 〈sp, tp〉 ∈ E if sw1− d ≤ tp ≤ sw2+d. Letting πsp be the actual words in sp, the potentials θsp,tp(xsp, xtp) determine the preference of the phrase translation 〈πsp, xtp〉 using a phrase table feature function τ : 〈π, π′〉 → Rk. In particular, θsp,tp(xsp, xtp) = γ>p τ(〈πsp, xtp〉) if xsp = tp and a large negative value otherwise, where γp is the weight vector for the Moses phrase table feature vector. Language model potentials: To include n-gram language models, we add potentials that score pairs of consecutive target phrases, i.e., θtp−1,tp(xtp−1, xtp) = γ` ∑|xtp| i=1 log Pr(x (i) tp |xtp−1 · x (1) tp · ... · x(i−1)tp ), where |xtp| is the number of words in xtp, x (i) tp is the i-th word in xtp, · denotes string concatenation, and γ` is the feature weight. This potential sums n-gram log-probabilities of words in the second of the two target phrases. Internal n-gram features and the standard word penalty feature [7] are computed in the θtp potentials, since they depend only on the words in xtp. Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them aligning to the same translation slot. We also prevent two overlapping source phrases 1Our distortion limit d is based on distances from source words to translation slots, rather than distances between source words as in the Moses system [7]. from both aligning to non-null slots (i.e., one must align to ∅). We include a factor between two sources phrases if there is a translation phrase that may relate to both, namely 〈sp1, sp2〉 ∈ E if ∃ tp : 〈sp1, tp〉 ∈ E, 〈sp2, tp〉 ∈ E. The source phrase separation potential θsp1,sp2(xsp1 , xsp2) is −∞ if either xsp1 = xsp2 6= ∅ or sp1∩sp2 6= ∅∧xsp1 6= ∅∧xsp2 6= ∅. 
Otherwise, it is−γd|(δ(sp1, sp2)− |xsp1 − xsp2 |)|, where δ(sp1, sp2) returns the number of source words between the spans sp1 and sp2. This favors similar distances between source phrases and their aligned slots. Experimental Setup: We consider German-to-English translation. As training data for constructing the phrase table, we use the WMT2011 parallel data [2], which contains 1.9M sentence pairs. We use the phrase table to compute θsp,tp and to fill Xtp. We use a bigram language model estimated from the English side of the parallel data along with 601M tokens of randomly-selected sentences from the Linguistic Data Consortium’s Gigaword corpus. This is used when computing the θtp−1,tp potentials. As our test set, we use the first 150 sentences from the WMT2009 test set. Results below are (uncased) %BLEU scores [17] on this 150-sentence set. We use maximum phrase length m = 3 and distortion limit d = 3. We run 250 iterations of CBCBP for each sentence. For the feature weights (γ), we use the default weights in Moses, since our features are analogous to theirs. Learning the weights is left to future work. Results: We compare to a simplified version of our model that omits the sw variables and all constraints and terms pertaining to them. This variation still contains all sp and tp variables and their factors. This comparison shows the contribution of our novel handling of consistency constraints. Tab. 6 shows our results. The consistency constraints lead to a large improvement for our model at negligible increase in runtime due to our closed-form update rules. We found it impractical to attempt to obtain these results using the standard CBP algorithm for any source sentences of typical length. For comparison to a standard benchmark, we also trained a Moses system [7], a state-of-the-art phrase-based system, on the same data. We used default settings and feature weights, except we used max phrase length 3 and no lexicalized reordering model, in order to more closely match the setting of our model. The Moses %BLEU on this dataset is 17.88. When using the source word consistency constraints, we are within 1.2% of Moses. Our model has the virtue of being able to compute marginals for downstream applications and also permits us to study particular forms of constraints in phrase-based translation modeling. Future work can add or remove constraints like we did in our experiments here in order to determine the most effective constraints for phrase-based translation. Our efficient inference framework makes such exploration possible. 5 Related Work Variational approaches to inference have been extensively studied in the past. We address approximate inference using the entropy barrier function and there has been extensive work in this direction, e.g., [24, 14, 23, 5, 19, 20] to name a few. Our work differs since we incorporate consistency constraints within the inference engine. We show that closed-form update rules are still available. Consistency constraints are implied when using PN-potentials [9]. However, pairwise functions are included for every constraint which is expensive if many constraints are involved. In contrast, constraints over the feasible instances are considered in [22, 13, 16, 12, 1]. While impressive results have been shown, each different restrictions of the feasible set may require a tailored algorithm. In contrast, we propose to include probabilistic equalities among the model beliefs, which permits derivation of an algorithm that is generally applicable. 
6 Conclusions
In this work we tackled the problem of inference with belief-based equality constraints, which arises when consistency among variables in the network is required. We introduced the CBCBP algorithm, which directly incorporates the constraints into the CBP framework and results in closed-form update rules. We demonstrated the merit of CBCBP both on synthetic data and on two real-world tasks. Our experiments indicate that CBCBP outperforms PN-potentials in both speed and accuracy. In the future we intend to incorporate our approximate inference with consistency constraints into learning frameworks, e.g., [15, 3].
1. What is the focus of the paper regarding approximate marginal inference of discrete Markov random fields?
2. What are the strengths and weaknesses of the proposed method compared to previous approaches?
3. Do you have any concerns about the technical aspects of the paper, such as the transformation used in the variational formulation?
4. How does the reviewer assess the novelty and potential impact of the paper's contributions?
5. Are there any questions regarding the clarity and presentation of the paper?
Review
Review This paper considers a special case of approximate marginal inference in discrete Markov random fields, where hard equality constraints on label assignments are enforced for certain nodes. Unlike the previous approach [9], which introduces additional consistency potentials, the proposed method explicitly imposes equality constraints on the variational formulation of marginal inference and solves it with dual coordinate descent. The experiments on synthetic data, image segmentation, and machine translation show its computational and statistical efficiency over the baseline [9].

Technical quality:
> The main contribution of this paper is the idea of writing the transitive equality constraints of consistency as simpler equality constraints involving common constants. However, I am afraid the transformation (i.e., (4)) is just a relaxation of the original problem rather than an equivalent formulation. One simple argument is that v_k can take different values in [0,1], leading to different optimization problems; the optimal one among those problems gives rise to the original problem. I am also not sure whether v_k can be canceled out in the dual problem. The explanation in lines 115-116 is vague. Even if we cancel out the linear term, the log-sum-exp function itself could be unbounded below. The soundness of the derived sum-to-zero constraint on the ν variables is thus arguable. Also, it seems nontrivial to recover (4) from the dual of the dual given in Lemma 3.1. But all of these could be matters of detail.
> Regarding experiments, did you check the beliefs numerically to verify the correctness of those consistency constraints?
> It is not 100% clear why the sums in (6) are taken without including k and x_r^k, i.e., \sum_{k^\prime \in K_r \setminus k} and \sum_{x_r \setminus x_r^k}. More steps or explanations are needed.

Novelty and potential impact: As far as I know, the formulation of the label consistency constraint is novel. However, the proposed method works only for a particular type of constraint, so its usage could be limited.

Clarity and presentation: This is a well-written paper with a clear presentation and sufficient experiments. It would be better if a convergence analysis and/or variants were presented.

After rebuttal: The "maximizing over v_k" issue doesn't affect the main results of the paper. However, the proof of Lemma 3.1 should be majorly revised: by taking the derivative of the Lagrangian w.r.t. v_k, we get the sum-to-zero constraints immediately. The other issues have clear responses in the rebuttal.
NIPS
Title
Constraints Based Convex Belief Propagation
Abstract
Inference in Markov random fields subject to consistency structure is a fundamental problem that arises in many real-life applications. In order to enforce consistency, classical approaches utilize consistency potentials or encode constraints over feasible instances. Unfortunately this comes at the price of a tremendous computational burden. In this paper we suggest tackling consistency by incorporating constraints on beliefs. This permits derivation of a closed-form message-passing algorithm which we refer to as Constraints Based Convex Belief Propagation (CBCBP). Experiments show that CBCBP outperforms the conventional consistency-potential-based approach, while being at least an order of magnitude faster.
1 Introduction
Markov random fields (MRFs) [10] are widely used across different domains, from computer vision and natural language processing to computational biology, because they are a general tool to describe distributions that involve multiple variables. The dependencies between variables are conveniently encoded via potentials that define the structure of a graph. Besides encoding dependencies, in a variety of real-world applications we often want consistent solutions that are physically plausible, e.g., when jointly reasoning about multiple tasks or when enforcing geometric constraints in 3D indoor scene understanding applications [18]. Therefore, various methods [22, 13, 16, 12, 1] enforce consistency structure during inference by imposing constraints on the feasible instances. This was shown to be effective in practice; however, for each new constraint we may need to design a specifically tailored algorithm. Therefore, the most common approach to impose consistency is the use of PN-consistency potentials [9]. This permits reuse of existing message passing solvers, however at the expense of an additional computational burden, as real-world applications may involve hundreds of additional factors. Our goal in this work is to bypass this computational burden while remaining generally applicable. To do so, we consider the problem of inference when probabilistic equalities are imposed over the beliefs of the model rather than its feasible instances. As we show in Sec. 3, the adaptive nature of message passing algorithms conveniently allows for such probabilistic equality constraints within its framework. Since our method eliminates potentially many multivariate factors, inference is much more scalable than using PN-consistency potentials [9]. In this paper, for notational simplicity, we illustrate the belief-constraints-based message passing rules using a framework known as convex belief propagation (CBP). We refer to the illustrated algorithm as constraints based CBP (CBCBP).
However, we note that the same derivation can be used to obtain, e.g., a constraints based tree-reweighted message passing algorithm. We evaluate the benefits of our algorithm on semantic image segmentation and machine translation tasks. Our results indicate that CBCBP improves accuracy while being at least an order of magnitude faster than CBP.
2 Background
In this section we review the standard CBP algorithm. To this end we consider joint distributions defined over a set of discrete random variables X = (X_1, . . . , X_n). The distribution p(x_1, . . . , x_n) is assumed to factor into a product of non-negative potential functions, i.e.,

$$p(x_1, \ldots, x_n) \propto \exp\Big(\sum_r \theta_r(x_r)\Big),$$

where r ⊂ {1, ..., n} is a subset of variable indices, which we use to restrict the domain via x_r = (x_i)_{i∈r}. The real-valued functions θ_r(x_r) assign a preference to each of the variables in the subset r. To visualize the factorization structure we use a region graph, i.e., a generalization of factor graphs. In this graph, each real-valued function θ_r(x_r) corresponds to a node. Nodes θ_r and θ_p can be connected if r ⊂ p. Hence the parent set P(r) of a region r contains index sets p ∈ P(r) if r ⊂ p. Conversely, we define the set of children of region r as C(r) = {c : r ∈ P(c)}. An important inference task is the computation of the marginal probabilities p(x_r) = Σ_{x\x_r} p(x). Whenever the region graph has no cycles, marginals are easily computed using belief propagation. Unfortunately, this algorithm may not converge in the presence of cycles. To fix convergence, a variety of approximations have been suggested, one of which is known as convex belief propagation (CBP). CBP performs block-coordinate descent over the dual function of the following program:

$$\max_{b_r} \sum_{r,x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.}\quad \begin{cases} \forall r:\; b_r(x_r) \ge 0,\; \sum_{x_r} b_r(x_r) = 1,\\ \forall r,\, p \in P(r),\, x_r:\; \sum_{x_p \setminus x_r} b_p(x_p) = b_r(x_r). \end{cases} \quad (1)$$

This program is defined over marginal distributions b_r(x_r) and incorporates their entropy H(b_r) in addition to the potential functions θ_r. In many real-world applications we require the solution to be consistent, i.e., hard constraints between some of the involved variables exist. For example, consider the case where X_1, X_2 are two binary variables such that for every feasible joint assignment, x_1 = x_2. To encourage consistency while reusing general purpose solvers, a PN-consistency potential [9] is often incorporated into the model:

$$\theta_{1,2}(x_1, x_2) = \begin{cases} 0 & x_1 = x_2\\ -c & \text{otherwise.} \end{cases} \quad (2)$$

Hereby c is a positive constant that is tuned to penalize the violation of consistency. As c increases, the following constraint holds:

$$b_1(X_1 = x_1) = b_2(X_2 = x_2). \quad (3)$$

However, usage of PN-potentials raises concerns: (i) increasing the number of pairwise constraints decreases computational efficiency, (ii) enforcing consistency in a soft manner requires tuning of an additional parameter c, (iii) large values of c reduce convergence, and (iv) large values of c result in corresponding beliefs being assigned zero probability mass, which is not desirable. To alleviate these issues we suggest to enforce the equality constraints given in Eq. (3) directly during optimization of the program given in Eq. (1). We refer to the additionally introduced constraints as consistency constraints. At this point two notes are in place.
First, we emphasize that utilizing consistency constraints instead of PN-consistency potentials has a computational advantage, since it omits all pairwise beliefs that correspond to consistency potentials. It therefore results in an optimization problem with fewer functions, which is expected to be more efficiently solvable. Second, we highlight that the two approaches are not equivalent. Intuitively, as c increases, we expect consistency constraints to yield better results than PN-potentials. Indeed, as c increases, the PN-consistency potential enforces the joint distribution to be diagonal, i.e., b(X_1 = i, X_2 = j) = 0 for all i ≠ j. However, the consistency constraint specified in Eq. (3) only requires the univariate marginals to agree. The latter is a considerably weaker requirement, as a diagonal pairwise distribution implies agreement of the univariate marginals, but the opposite direction does not hold. Consequently, using consistency constraints results in a larger search space, which is desirable.

Algorithm 1 Constraints Based Convex Belief Propagation (CBCBP)
Repeat until convergence:
Update λ messages - for each r, update for all p ∈ P(r), x_r:

$$\mu_{p\to r}(x_r) = \ln \sum_{x_p \setminus x_r} \exp\Big( \theta_p(x_p) - \sum_{p' \in P(p)} \lambda_{p\to p'}(x_p) + \sum_{r' \in C(p)\setminus r} \lambda_{r'\to p}(x_{r'}) - \sum_{k \in K_p} \nu_{p\to k}(x_p) \Big)$$

$$\lambda_{r\to p}(x_r) \propto \frac{1}{1+|P(r)|}\Big( \theta_r(x_r) + \sum_{c\in C(r)} \lambda_{c\to r}(x_c) + \sum_{p'\in P(r)} \mu_{p'\to r}(x_r) - \sum_{k\in K_r} \nu_{r\to k}(x_r) \Big) - \mu_{p\to r}(x_r)$$

Update ν messages - for each k ∈ K, update for all r ∈ N(k), using α_{r,k} as defined in Eq. (6):

$$\nu_{r\to k}(x_r^k) = \log \alpha_{r,k} - \frac{1}{|N(k)|} \sum_{r'\in N(k)} \log \alpha_{r',k}$$

Figure 1: The CBCBP algorithm. Shown are the update rules for the λ and ν messages.

Next we derive a general message-passing algorithm that aims at solving the optimization problem given in Eq. (1) subject to consistency constraints of the form given in Eq. (3).

3 Constraints Based Convex Belief Propagation (CBCBP)
To enforce consistency of beliefs we want to incorporate constraints of the form b_{r_1}(x_{r_1}) = . . . = b_{r_m}(x_{r_m}). Each constraint involves a set of regions r_i and some of their assignments x_{r_i}. If a constraint involves more than two regions, i.e., if m > 2, it is easier to formulate it as a series of constraints b_{r_i}(x_{r_i}) = v, i ∈ {1, . . . , m}, for some constant v that eventually cancels. Generally, given a constraint k, we define the set of its neighbors N(k) to be the involved regions r_i^k together with the involved assignments x_{r_i}^k, i.e., N(k) = {r_i^k, x_{r_i}^k}_{i=1}^{m_k}. To simplify notation we subsequently write r ∈ N(k) instead of (r, x_r) ∈ N(k); it should be clear from the context that each region r is matched with a value x_r^k. We subsume all constraints within the set K. Additionally, we let K_r denote the set of all those constraints k which depend on region r, i.e., K_r = {k : r ∈ N(k)}. Using the aforementioned notation we are now ready to augment the conventional CBP program given in Eq. (1) with one additional set of constraints. The CBCBP program then reads as follows:

$$\max_{b_r} \sum_{r,x_r} b_r(x_r)\,\theta_r(x_r) + \sum_r H(b_r) \quad \text{s.t.}\quad \begin{cases} \forall r:\; b_r(x_r) \ge 0,\; \sum_{x_r} b_r(x_r) = 1\\ \forall r,\, p\in P(r),\, x_r:\; \sum_{x_p\setminus x_r} b_p(x_p) = b_r(x_r)\\ \forall k\in K,\, r\in N(k):\; b_r(x_r^k) = v_k. \end{cases} \quad (4)$$

To solve this program we observe that its constraint space exhibits a rich structure, defined on the one hand by the parent sets P, and on the other hand by the neighborhoods of the constraints subsumed in the set K. To exploit this structure, we aim at deriving the dual, which is possible because the program is strictly convex.
Importantly, we can subsequently derive block-coordinate updates for the dual variables, which are efficiently computable in closed form. Hence solving the program given in Eq. (4) via its dual is much more effective. In the following we first present the dual before discussing how to efficiently solve it.

Derivation of the dual program: The dual program of the task given in Eq. (4) is obtained by using the Lagrangian, as shown in the following lemma.

Lemma 3.1: The dual problem associated with the primal program given in Eq. (4) is:

$$\min_{\lambda,\nu} \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k\in K_r} \nu_{r\to k}(x_r) \Big) \quad \text{s.t.}\quad \forall k \in K:\; \sum_{r\in N(k)} \nu_{r\to k}(x_r^k) = 0,$$

where we set ν_{r→k}(x_r) = 0 for all k ∈ K, r ∈ N(k), x_r ≠ x_r^k, and where we introduced θ_r(x_r, λ) = θ_r(x_r) − Σ_{p∈P(r)} λ_{r→p}(x_r) + Σ_{c∈C(r)} λ_{c→r}(x_c).

Proof: We begin by defining a Lagrange multiplier for each of the constraints given in Eq. (4). Concretely, for all r, p ∈ P(r), x_r we let λ_{r→p}(x_r) be the Lagrange multiplier associated with the marginalization constraint Σ_{x_p\x_r} b_p(x_p) = b_r(x_r). Similarly, for all k ∈ K, r ∈ N(k), we let ν_{r→k}(x_r^k) be the Lagrange multiplier associated with the constraint b_r(x_r^k) = v_k. The corresponding Lagrangian is then given by

$$L(b, \lambda, \nu) = \sum_{r,x_r} b_r(x_r)\Big( \theta_r(x_r, \lambda) - \sum_{k\in K_r} \nu_{r\to k}(x_r) \Big) + \sum_r H(b_r) + \sum_{k\in K,\, r\in N(k)} \nu_{r\to k}(x_r^k)\, v_k,$$

where θ_r(x_r, λ) = θ_r(x_r) − Σ_{p∈P(r)} λ_{r→p}(x_r) + Σ_{c∈C(r)} λ_{c→r}(x_c) and ν_{r→k}(x_r) = 0 for all k, r ∈ N(k), x_r ≠ x_r^k. Due to the conjugate duality between the entropy and the log-sum-exp function [25], the dual function is:

$$D(\lambda, \nu) = \max_b L(b, \lambda, \nu) = \sum_r \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k\in K_r} \nu_{r\to k}(x_r) \Big) + \sum_k v_k \sum_{r\in N(k)} \nu_{r\to k}(x_r^k).$$

The result follows since the dual function is unbounded from below with respect to the Lagrange multipliers ν_{r→k}(x_r^k) unless the sum-to-zero constraints hold.

Derivation of message passing update rules: As mentioned before, we can derive block-coordinate descent update rules for the dual which are computable in closed form. Hence the dual given in Lemma 3.1 can be solved efficiently, which is summarized in the following theorem:

Theorem 3.2: Block-coordinate descent over the dual problem given in Lemma 3.1 results in a message passing algorithm whose details are given in Fig. 1 and which we refer to as the CBCBP algorithm. It is guaranteed to converge.

Before proving this result, we provide intuition for the update rules: as in the standard and distributed [19] CBP algorithms, each region r sends a message to its parents via the dual variable λ_{r→p}. Differently from CBP but similar to distributed variants [19], our algorithm has another type of message, the ν messages. Conceptually, think of each constraint as a new node: a constraint node k is connected to a region r if r ∈ N(k), and a region r 'informs' the constraint node using the dual variable ν_{r→k}. We now show how to derive the message passing rules to optimize the dual.

Proof: First we note that convergence is guaranteed by the strict convexity of the primal problem [6]. We begin by optimizing the dual function given in Lemma 3.1 with respect to the λ parameters. Specifically, for a chosen region r we optimize the dual w.r.t. a block of Lagrange multipliers λ_{r→p}(x_r), ∀p ∈ P(r), x_r. To this end we differentiate the dual with respect to λ_{r→p}(x_r) while keeping all other variables fixed. The technique for solving the optimality conditions follows existing literature, augmented by the messages ν_{r→k}. It yields the update rules given in Fig. 1. Next we turn to optimizing the dual with respect to the Lagrange multipliers ν.
Recall that each constraint k ∈ K in the dual function given in Lemma 3.1 is associated with the linear constraint Σ_{r∈N(k)} ν_{r→k}(x_r^k) = 0. Therefore we employ a Lagrange multiplier γ_k for each k. For a compact exposition, we introduce the Lagrangian associated with a constraint k, denoted by L_k:

$$L_k(\lambda, \nu) = \sum_{r\in N(k)} \log \sum_{x_r} \exp\Big( \theta_r(x_r, \lambda) - \sum_{k'\in K_r} \nu_{r\to k'}(x_r) \Big) + \gamma_k \sum_{r\in N(k)} \nu_{r\to k}(x_r^k).$$

Differentiating L_k with respect to ν_{r→k} for all r ∈ N(k) and using the optimality conditions, we arrive at:

$$\nu_{r\to k}(x_r^k) = \log\Big( \alpha_{r,k} \cdot \frac{1+\gamma_k}{-\gamma_k} \Big) \quad (5)$$

for all r ∈ N(k), where

$$\alpha_{r,k} = \frac{\exp\big( \theta_r(x_r^k, \lambda) - \sum_{k'\in K_r\setminus k} \nu_{r\to k'}(x_r^k) \big)}{\sum_{x_r\setminus x_r^k} \exp\big( \theta_r(x_r, \lambda) - \sum_{k'\in K_r} \nu_{r\to k'}(x_r) \big)}. \quad (6)$$

Summing the right hand side of Eq. (5) over r ∈ N(k) and using the constraint Σ_{r∈N(k)} ν_{r→k}(x_r^k) = 0 results in

$$\frac{1+\gamma_k}{-\gamma_k} = \prod_{r\in N(k)} \Big(\frac{1}{\alpha_{r,k}}\Big)^{\frac{1}{|N(k)|}}.$$

Finally, substituting this result back into Eq. (5) yields the desired update rule. We summarized the resulting algorithm in Fig. 1 and now turn our attention to its evaluation.

4 Experiments
We first demonstrate the applicability of the procedure using synthetic data. We then turn to image segmentation and machine translation, using real-world datasets. As our work directly improves the standard CBP approach, we use it as a baseline.

4.1 Synthetic Evaluation
Consider a variable X whose support consists of L levels {1, . . . , L} and a binary variable Y. Assume we are given the following PN-consistency potential:

$$\theta_{x,y}(x, y) = \begin{cases} 0 & (y = 1 \wedge x = 1) \vee (y = 0 \wedge x \neq 1)\\ -c & \text{otherwise,} \end{cases} \quad (7)$$

where c is some positive parameter. This potential encourages the assignment y = 1 to agree with the assignment x = 1, and y = 0 to agree with x ∈ {2, . . . , L}. Phrased differently, this potential favors beliefs such that:

$$b_y(y = 1) = b_x(x = 1), \qquad b_y(y = 0) = b_x(x \neq 1). \quad (8)$$

Therefore, one may replace the above potential by a single consistency constraint. Note that the above two constraints complement each other; hence, it suffices to include one of them. We use the left consistency constraint since it fits our derivation. We test this hypothesis by constructing four networks that consist of n = 2v, v = 50, 100, 150, 200 variables, where v variables are binary, denoted by Y, and the other v variables are multi-level, subsumed within X. Note that the support of variable X_i, 1 ≤ i ≤ v, consists of i states. Each multi-level variable is matched with a binary one. For each variable we randomly generate unary potentials according to the standard Gaussian distribution. We then run the standard CBP algorithm using the aforementioned PN-consistency potential given in Eq. (7) with c = 1. In a next step we replace each such potential by its corresponding consistency constraint following Eq. (8). For each network we repeat this process 10 times and report the mean running time and standard deviation in Tab. 1. As expected, CBCBP is significantly faster than the standard CBP. Quantitatively, CBCBP was approximately 25 times faster for the smallest, and more than 31 times faster for the largest graphs. Obviously, different values of c affect the convexity of the problem and therefore also the running time of both CBP and CBCBP. To quantify the impact we repeat the experiment with n = 200 for distinct values of c ∈ {2, 4, 6, 8, 10}. In Tab. 2 we report the mean speedup factor over 10 repetitions for each value of c. As is clearly evident, the speedup factor increases substantially with c.
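For completeness, here is a minimal numpy sketch of the closed-form ν update of Eqs. (5)-(6), after the γ_k elimination described above. The data layout (dicts of arrays, with all messages zero-initialized) is our own illustrative choice, not the paper's implementation.

```python
import numpy as np

def nu_update(theta, nu, k, neighbors):
    """One block update of the nu messages for a single constraint k (Fig. 1).

    theta[r]  : 1-D array holding theta_r(., lambda) for region r
    nu[r]     : dict k' -> array nu_{r->k'}(.), zero-initialized (including k)
    neighbors : list of (r, x_k) pairs, i.e. N(k) with the constrained entries x_r^k
    """
    log_alpha = {}
    for r, xk in neighbors:
        # scores(x_r) = theta_r(x_r, lambda) - sum_{k' in K_r} nu_{r->k'}(x_r)
        scores = theta[r] - sum(nu[r].values())
        # numerator of Eq. (6): exclude the message to k itself, evaluated at x_r^k
        num = theta[r][xk] - sum(m[xk] for kp, m in nu[r].items() if kp != k)
        mask = np.ones_like(scores, dtype=bool)
        mask[xk] = False                      # denominator sums over x_r != x_r^k
        log_alpha[r] = num - np.log(np.exp(scores[mask]).sum())
    mean_log_alpha = np.mean(list(log_alpha.values()))
    for r, xk in neighbors:                   # updated messages sum to zero over N(k)
        nu[r][k][xk] = log_alpha[r] - mean_log_alpha
    return nu
```

A simple numerical sanity check after each sweep is to verify that the updated messages for each constraint indeed sum to zero, and that the resulting beliefs agree on the constrained assignments.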
4.2 Image Segmentation
We evaluate our approach on the task of semantic segmentation using the MSRC-21 dataset [21] as well as the PascalVOC 2012 dataset [4]. Both contain 21 foreground classes. Each variable X_i in our model corresponds to a super-pixel in an image. In addition, each super-pixel is associated with a binary variable Y_i that indicates whether the super-pixel belongs to the foreground, i.e., y_i = 1, or to the background, i.e., y_i = 0. The model potentials are:

Super-pixel unary potentials: For MSRC-21 these potentials are computed by averaging the TextonBoost [11] pixel-potentials inside each super-pixel. For the PascalVOC 2012 dataset we train a convolutional neural network following the VGG16 architecture.

Foreground/background unary potentials: For MSRC-21 we let the value of the potential at y_i = 1 equal the value of the super-pixel unary potential that corresponds to the 'void' label, and for y_i = 0 we define it to be the maximum value of the super-pixel unary potential among the other labels. For PascalVOC 2012 we obtain the foreground/background potential by training another convolutional neural network, again following the VGG16 architecture.

Super-pixel - foreground/background consistency: We define pairwise potentials between super-pixels and the foreground/background labels using Eq. (7) and set c = 1. Naturally, these consistency potentials encourage CBP to favor beliefs where pixels that are labeled as 'void' are also labeled as 'background' and vice versa. This can also be formulated using the constraints b_i(X_i = 0) = b_i(Y_i = 0) and b_i(X_i ≠ 1) = b_i(Y_i = 1).

We compare the CBCBP algorithm with the standard CBP approach. For MSRC-21 we use the standard error measures of average per-class accuracy and average per-pixel accuracy, denoted as global. Performance results are provided in Tab. 3. Appealingly, our results indicate that CBCBP outperforms the standard CBP across both metrics. Moreover, as summarized in Tab. 4, in 19 out of 21 classes CBCBP achieves an accuracy that is equal to or higher than CBP. At last, CBCBP is more than 65 times faster than CBP. In Tab. 5 we present the average pixel accuracy as well as the Intersection over Union (IoU) metric for the VOC2012 data. We observe CBCBP to perform better since it is able to transfer information between the foreground-background classification and the semantic segmentation.

4.3 Machine Translation
We now consider the task of machine translation. We define a phrase-based translation model as a factor graph with many large constraints and use CBCBP to efficiently incorporate them during inference. Our model is inspired by the widely-used approach of [8]. Given a sentence in a source language, the output of our phrase-based model consists of a segmentation of the source sentence into phrases (subsequences of words), a phrase translation for each source phrase, and an ordering of the phrase translations. See Fig. 2 for an illustration. We index variables in our model by i = 1, . . . , n; they include source words (sw), source phrases (sp), and translation phrase slots (tp). The sequence of source words is first segmented into source phrases. The possible values for source word sw are X_sw = {(sw_1, sw_2) : (sw_1 ≤ sw ≤ sw_2) ∧ (sw_2 − sw_1 < m)}, where m is the maximum phrase length. If source phrase sp is used in the derivation, we say that sp aligns to a translation phrase slot tp. If sp is not used, it aligns to ∅.
We define variables X_sp to indicate what sp aligns to: X_sp = {tp : sw_1 − d ≤ tp ≤ sw_2 + d} ∪ {∅}, i.e., all translation phrase slots tp (numbered from left to right in the translation) such that the slot number is at most distance d from an edge of sp. (Our distortion limit d is based on distances from source words to translation slots, rather than distances between source words as in the Moses system [7].) Each translation phrase slot tp generates actual target-language words which comprise the translation. We define variables X_tp ranging over the possible target-language word sequences (translation phrases) that can be generated at slot tp. However, not all translation phrase slots must be filled in with translations: beyond some value of tp (equal to the number of source phrases used in the derivation), they must all be empty. To enforce this, we also permit a null (∅) translation.

Consistency constraints: Many derivations defined by the discrete product space X_1 × · · · × X_n are semantically inconsistent. For example, a derivation may place the first source word into the source phrase (1, 2) and the second source word into (2, 3). This is problematic because the phrases overlap; each source word must be placed into exactly one source phrase. We introduce source word consistency constraints: ∀sp, ∀sw ∈ sp : b_sw(sp) = b(sp). These constraints force the source word beliefs b_sw(x_sw) to agree on their span. There are other consistencies we wish to enforce in our model. Specifically, we must match a source phrase to a translation phrase slot if and only if the source phrase is consistently chosen by all of its source words. Formally, ∀sp : b(sp) = Σ_{x_sp ≠ ∅} b_sp(x_sp).

Phrase translation potentials: We use pairwise potential functions between source phrases sp = (sw_1, sw_2) and their aligned translation phrase slots tp. We include a factor ⟨sp, tp⟩ ∈ E if sw_1 − d ≤ tp ≤ sw_2 + d. Letting π_sp be the actual words in sp, the potentials θ_{sp,tp}(x_sp, x_tp) determine the preference of the phrase translation ⟨π_sp, x_tp⟩ using a phrase table feature function τ : ⟨π, π′⟩ → R^k. In particular, θ_{sp,tp}(x_sp, x_tp) = γ_p^⊤ τ(⟨π_sp, x_tp⟩) if x_sp = tp and a large negative value otherwise, where γ_p is the weight vector for the Moses phrase table feature vector.

Language model potentials: To include n-gram language models, we add potentials that score pairs of consecutive target phrases, i.e.,

$$\theta_{tp-1,tp}(x_{tp-1}, x_{tp}) = \gamma_\ell \sum_{i=1}^{|x_{tp}|} \log \Pr\big(x_{tp}^{(i)} \mid x_{tp-1} \cdot x_{tp}^{(1)} \cdots x_{tp}^{(i-1)}\big),$$

where |x_tp| is the number of words in x_tp, x_tp^{(i)} is the i-th word in x_tp, · denotes string concatenation, and γ_ℓ is the feature weight. This potential sums n-gram log-probabilities of the words in the second of the two target phrases. Internal n-gram features and the standard word penalty feature [7] are computed in the θ_tp potentials, since they depend only on the words in x_tp.

Source phrase separation potentials: We use pairwise potentials between source phrases to prevent them from aligning to the same translation slot. We also prevent two overlapping source phrases from both aligning to non-null slots (i.e., one must align to ∅). We include a factor between two source phrases if there is a translation phrase slot that may relate to both, namely ⟨sp_1, sp_2⟩ ∈ E if ∃tp : ⟨sp_1, tp⟩ ∈ E, ⟨sp_2, tp⟩ ∈ E. The source phrase separation potential θ_{sp_1,sp_2}(x_{sp_1}, x_{sp_2}) is −∞ if either x_{sp_1} = x_{sp_2} ≠ ∅ or (sp_1 ∩ sp_2 ≠ ∅ ∧ x_{sp_1} ≠ ∅ ∧ x_{sp_2} ≠ ∅).
Otherwise, it is −γ_d |δ(sp_1, sp_2) − |x_{sp_1} − x_{sp_2}||, where δ(sp_1, sp_2) returns the number of source words between the spans sp_1 and sp_2. This favors similar distances between source phrases and their aligned slots.

Experimental Setup: We consider German-to-English translation. As training data for constructing the phrase table, we use the WMT2011 parallel data [2], which contains 1.9M sentence pairs. We use the phrase table to compute θ_{sp,tp} and to fill X_tp. We use a bigram language model estimated from the English side of the parallel data along with 601M tokens of randomly-selected sentences from the Linguistic Data Consortium's Gigaword corpus. This is used when computing the θ_{tp−1,tp} potentials. As our test set, we use the first 150 sentences from the WMT2009 test set. Results below are (uncased) %BLEU scores [17] on this 150-sentence set. We use maximum phrase length m = 3 and distortion limit d = 3. We run 250 iterations of CBCBP for each sentence. For the feature weights (γ), we use the default weights in Moses, since our features are analogous to theirs. Learning the weights is left to future work.

Results: We compare to a simplified version of our model that omits the sw variables and all constraints and terms pertaining to them. This variation still contains all sp and tp variables and their factors. This comparison shows the contribution of our novel handling of consistency constraints. Tab. 6 shows our results. The consistency constraints lead to a large improvement for our model at a negligible increase in runtime, thanks to our closed-form update rules. We found it impractical to attempt to obtain these results using the standard CBP algorithm for source sentences of typical length. For comparison to a standard benchmark, we also trained a Moses system [7], a state-of-the-art phrase-based system, on the same data. We used default settings and feature weights, except that we used max phrase length 3 and no lexicalized reordering model, in order to more closely match the setting of our model. The Moses %BLEU on this dataset is 17.88. When using the source word consistency constraints, we are within 1.2% of Moses. Our model has the virtue of being able to compute marginals for downstream applications, and it also permits us to study particular forms of constraints in phrase-based translation modeling. Future work can add or remove constraints, as we did in our experiments here, in order to determine the most effective constraints for phrase-based translation. Our efficient inference framework makes such exploration possible.

5 Related Work
Variational approaches to inference have been extensively studied in the past. We address approximate inference using the entropy barrier function, and there has been extensive work in this direction, e.g., [24, 14, 23, 5, 19, 20] to name a few. Our work differs since we incorporate consistency constraints within the inference engine and show that closed-form update rules are still available. Consistency constraints are implied when using PN-potentials [9]; however, pairwise functions are included for every constraint, which is expensive if many constraints are involved. Constraints over the feasible instances are considered in [22, 13, 16, 12, 1]. While impressive results have been shown, each different restriction of the feasible set may require a tailored algorithm. In contrast, we propose to include probabilistic equalities among the model beliefs, which permits derivation of an algorithm that is generally applicable.
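As a concrete illustration, here is a minimal Python sketch of the source phrase separation potential defined above. Spans are (start, end) word-index pairs and slots are integers, with None standing for ∅; returning 0 when exactly one slot is null is our assumption, since the distance term is only defined when both slots are non-null.

```python
import numpy as np

def separation_potential(sp1, sp2, x1, x2, gamma_d=1.0):
    """Sketch of theta_{sp1,sp2}(x_{sp1}, x_{sp2}).

    sp1, sp2 : source phrase spans (start, end), inclusive word indices
    x1, x2   : aligned translation slot indices, or None for the null slot
    """
    if x1 is not None and x1 == x2:
        return -np.inf                        # two phrases in the same non-null slot
    overlap = not (sp1[1] < sp2[0] or sp2[1] < sp1[0])
    if overlap and x1 is not None and x2 is not None:
        return -np.inf                        # overlapping phrases cannot both be used
    if x1 is None or x2 is None:
        return 0.0                            # assumed: no distance term with a null slot
    # delta = number of source words strictly between the two spans
    delta = max(sp2[0] - sp1[1] - 1, sp1[0] - sp2[1] - 1, 0)
    return -gamma_d * abs(delta - abs(x1 - x2))
```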
6 Conclusions
In this work we tackled the problem of inference with belief-based equality constraints, which arises when consistency among variables in the network is required. We introduced the CBCBP algorithm, which directly incorporates the constraints into the CBP framework and results in closed-form update rules. We demonstrated the merit of CBCBP both on synthetic data and on two real-world tasks. Our experiments indicate that CBCBP outperforms PN-potentials in both speed and accuracy. In the future we intend to incorporate our approximate inference with consistency constraints into learning frameworks, e.g., [15, 3].
1. What is the focus of the paper regarding convex belief propagation?
2. What are the strengths of the proposed approach, particularly in terms of flexibility and computational speedup?
3. What are the weaknesses of the paper, especially regarding its contributions and improvements over the baseline?
4. How does the reviewer assess the overall technical quality of the paper's content?
5. Are there any suggestions for improving the clarity and explanations in certain parts of the paper?
Review
Review The authors of this paper extend previous work on convex belief propagation by including consistency constraints. The presented benefits of the proposed model include increased flexibility in constraint definition and a computational speedup. The algorithm and a proof of convergence are given. The model is compared with the baseline on one synthetic and two real tasks, semantic image segmentation and machine translation. This paper introduces an incremental improvement over previous work on convex belief propagation. The contribution seems minor, but the results show a notable improvement over the baseline. The overall technical quality of the paper is good, but some parts could use additional clarification. For example, Eq. (1) is hard to follow if the reader is not familiar with prior work. Additional explanation (even a single additional sentence) regarding the constraints would make it much more comprehensible.
NIPS
Title
Partial Optimal Transport with Applications on Positive-Unlabeled Learning
Abstract
The classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications, such as color or shape matching, since the distributions may have arbitrary masses and/or only a fraction of the total mass has to be transported. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulation in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context, and we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings. We then demonstrate that partial Gromov-Wasserstein metrics are efficient in scenarios in which the samples from the positive and the unlabeled datasets come from different domains or have different features.
1 Introduction
Optimal transport (OT) has been gaining increasing attention in the machine learning community in recent years, mainly due to its capacity to exploit the geometric properties of the samples. Generally speaking, OT is a mathematical tool to compare distributions by computing a mass transportation plan from a source to a target distribution. Distances based on OT are referred to as Monge-Kantorovich or Wasserstein distances (Villani, 2009) and have been successfully employed in a wide variety of machine learning applications, including clustering (Ho et al., 2017), computer vision (Bonneel et al., 2011; Solomon et al., 2015), generative adversarial networks (Arjovsky et al., 2017) and domain adaptation (Courty et al., 2017). A key limitation of the Wasserstein distance is that it relies on the assumption of aligned distributions, namely, they must belong to the same ground space, or at least a meaningful distance across domains must be computable. Nevertheless, source and target distributions can be collected under distinct environments, representing different times of collection, contexts or measurements (see Fig. 1, left and right). To benefit from OT in such heterogeneous distribution settings, one can compute the Gromov-Wasserstein (GW) distance (Sturm, 2006; Mémoli, 2011) to overcome the lack of intrinsic correspondence between the distribution spaces. GW extends Wasserstein by computing a distance between metrics defined within each of the source and target spaces. From a computational point of view, it involves a non-convex quadratic problem (Peyré and Cuturi, 2019), hard to lift to large-scale settings. A remedy to such a heavy computational burden lies in a prevalent approach referred to as regularized OT (Cuturi, 2013), which adds an entropic regularization penalty to the original problem. Peyré et al. (2016); Solomon et al. (2016) propose the entropic GW discrepancy, which can be solved by Sinkhorn iterations (Cuturi, 2013; Benamou et al., 2015). A major bottleneck of OT in its traditional formulation is that it requires the two input measures to have the same total probability mass and/or that all the mass has to be transported.
This is too restrictive for many applications, such as color matching or shape registration (Bonneel and Coeurjolly, 2019), since mass changes may occur, due to creation or annihilation, while computing an OT plan. To tackle this limitation, one may employ strategies such as partial or unbalanced transport (Guittet, 2002; Figalli, 2010; Caffarelli and McCann, 2010). Chizat et al. (2018) propose to relax the marginal constraints of unbalanced total masses using divergences such as the Kullback-Leibler divergence or the Total Variation, allowing the use of generalized Sinkhorn iterations. Yang and Uhler (2019) generalize this approach to GANs, and Lee et al. (2019) present an ADMM algorithm for the relaxed partial OT. Most of these approaches concentrate on partial Wasserstein. This paper deals with exact partial Wasserstein (partial-W) and Gromov-Wasserstein (partial-GW). Some strategies for computing partial-W require relaxations of the marginal constraints. We rather build our approach upon adding virtual or dummy points onto the marginals, which is a common practice in OT works. Among the latter, Caffarelli and McCann (2010) attach such points to allow choosing the maximum distance mass that can be transported. Pele and Werman (2009) threshold ground distances and send the extra mass to a dummy point to compute a robust EMD distance. Gramfort et al. (2015) consider the case of unnormalized measures and use a dummy point to "fill" the distributions, the extended problem then having both marginals summing to one. More recently, Sarlin et al. (2020) deal with the partial assignment problem by extending the initial problem and filling the ground distance matrix with a single learnable parameter. In this paper, the dummy points are used as a buffer when comparing distributions with different probability masses, allowing partial-W to boil down to solving an extended but standard Wasserstein problem. The main advantage of our approach is that it defines explicitly the mass to be transported, and it leads to computing sparse transport plans and hence exact partial-W or -GW distances, instead of the regularized discrepancies obtained by running Sinkhorn algorithms. Regarding partial-GW, our approach relies on a Frank-Wolfe optimization algorithm (Frank and Wolfe, 1956) that builds on computations of partial-W. Tackling partial-OT problems that preserve sparsity is motivated by the fact that they are more suitable for some applications, such as the Positive-Unlabeled (PU) learning (see Bekker and Davis (2020) for a review) we target in this paper. To our knowledge, this is the first application of OT to solving PU learning tasks. In a nutshell, PU classification is a variant of the binary classification problem in which we only have access to labeled samples from the positive (Pos) class in the training stage. The aim is to assign classes to the points of an unlabeled (Unl) set which mixes data from both positive and negative classes. Using OT allows identifying the positive points within Unl, even when Pos and Unl samples do not lie in the same space (see Fig. 1). The paper is organized as follows: we first recall some background on OT. In Section 3, we propose an algorithm to solve an exact partial-W problem, together with a Frank-Wolfe-based algorithm to compute the partial-GW solution. After describing in more detail the PU learning task and the use of partial-OT to solve it, we illustrate the advantage of partial-GW when the source and the target distributions are collected in distinct environments.
We finally give some perspectives.

Notations: Σ_N = { p ∈ R_+^N : Σ_i p_i = 1 } denotes the set of histograms of N bins, and δ is the Dirac function. Let 1_n be the n-dimensional vector of ones. ⟨·, ·⟩_F stands for the Frobenius dot product. |p| indicates the length of the vector p.

2 Background on optimal transport
Let X = {x_i}_{i=1}^n and Y = {y_j}_{j=1}^m be two point clouds representing the source and target samples, respectively. We assume two empirical distributions (p, q) ∈ Σ_n × Σ_m over X and Y,

$$p = \sum_{i=1}^{n} p_i \delta_{x_i} \quad \text{and} \quad q = \sum_{j=1}^{m} q_j \delta_{y_j},$$

where Σ_n and Σ_m are histograms of |p| = n and |q| = m bins respectively. The set of all admissible couplings Π(p, q) between histograms is given by

$$\Pi(p, q) = \big\{ T \in \mathbb{R}_+^{|p|\times|q|} \;\big|\; T\mathbf{1}_{|q|} = p,\; T^\top \mathbf{1}_{|p|} = q \big\},$$

where T = (T_{ij})_{i,j} is a coupling matrix whose entry T_{ij} describes the amount of mass p_i found at x_i flowing toward the mass q_j of y_j. OT addresses the problem of optimally transporting p toward q, given a cost D_{ij} measured as a geometric distance between x_i and y_j. More precisely, when the ground cost C = D^p = (D_{ij}^p)_{i,j} is a distance matrix, the p-Wasserstein distance on Σ_n × Σ_m at the power p is defined as:

$$W_p^p(p, q) = \min_{T\in\Pi(p,q)} \langle C, T\rangle_F = \min_{T\in\Pi(p,q)} \sum_{i=1}^{n}\sum_{j=1}^{m} C_{ij} T_{ij}.$$

In some applications, the two distributions are not registered (i.e., we cannot compute a ground cost between x_i and y_j) or do not lie in the same underlying space. The Gromov-Wasserstein distance addresses this bottleneck by extending the Wasserstein distance to such settings, also allowing invariance to translation, rotation or scaling. Informally, it measures the distortion incurred when transporting the whole set of points from one space to another. It relies on intra-domain distance matrices of the source, C^s = (C_{ik}^s)_{i,k} = (C^s(x_i, x_k))_{i,k} ∈ R_+^{n×n}, and the target, C^t = (C_{jl}^t)_{j,l} = (C^t(y_j, y_l))_{j,l} ∈ R_+^{m×m}, and is defined as in Mémoli (2011):

$$GW_p^p(p, q) = \min_{T\in\Pi(p,q)} \sum_{i,k=1}^{n}\sum_{j,l=1}^{m} \big|C_{ik}^s - C_{jl}^t\big|^p\, T_{ij} T_{kl}.$$

3 Exact Partial Wasserstein and Gromov-Wasserstein distances
We first detail how extending a balanced Wasserstein problem allows solving a partial Wasserstein one. We then propose a Frank-Wolfe scheme that relies on computing partial-W to solve the partial-GW problem.

3.1 Partial Wasserstein distance
The previous OT distances require the two distributions to have the same total probability mass, ‖p‖_1 = ‖q‖_1, and that all the mass be transported. This may be a problematic assumption when some mass variation or partial mass displacement should be handled. The partial OT problem focuses on transporting only a fraction 0 ≤ s ≤ min(‖p‖_1, ‖q‖_1) of the mass as cheaply as possible. In that case, the set of admissible couplings becomes

$$\Pi^u(p, q) = \big\{ T \in \mathbb{R}_+^{|p|\times|q|} \;\big|\; T\mathbf{1}_{|q|} \le p,\; T^\top \mathbf{1}_{|p|} \le q,\; \mathbf{1}_{|p|}^\top T \mathbf{1}_{|q|} = s \big\},$$

and the partial-W distance reads as

$$PW_p^p(p, q) = \min_{T\in\Pi^u(p,q)} \langle C, T\rangle_F.$$

This problem has been studied by (Caffarelli and McCann, 2010; Figalli, 2010); numerical solutions have notably been provided by (Benamou et al., 2015; Chizat et al., 2018) in the entropic-regularized Wasserstein case. We propose here to directly solve the exact partial-W problem by adding dummy or virtual points x_{n+1} and y_{m+1} (with any features) and extending the cost matrix as follows:

$$\bar{C} = \begin{bmatrix} C & \xi \mathbf{1}_{|p|} \\ \xi \mathbf{1}_{|q|}^\top & 2\xi + A \end{bmatrix} \quad (1)$$

in which A > 0 and ξ is a fixed positive or null scalar. When the mass of these dummy points is set such that p_{n+1} = ‖q‖_1 − s and q_{m+1} = ‖p‖_1 − s, computing the partial-W distance boils down to solving a standard Wasserstein problem W_p^p(p̄, q̄) = min_{T̄∈Π(p̄,q̄)} ⟨C̄, T̄⟩_F, where p̄ = [p, ‖q‖_1 − s] and q̄ = [q, ‖p‖_1 − s].
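As a minimal sketch of this construction (our own illustration, using the exact ot.emd solver from the POT toolbox mentioned in Section 5), one can extend C as in Eq. (1), solve the balanced problem, and drop the dummy row and column:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def partial_wasserstein_plan(p, q, C, s, xi=0.0, A=None):
    """Partial-W transport plan via the dummy-point extension of Eq. (1).

    p, q : histograms (1-D arrays), C : (n, m) ground cost, s : mass to transport
    """
    n, m = C.shape
    if A is None:
        A = C.max()                       # any A > 0 works
    C_ext = np.full((n + 1, m + 1), xi)   # dummy row/column cost xi
    C_ext[:n, :m] = C
    C_ext[n, m] = 2 * xi + A              # dummy-to-dummy cost
    p_ext = np.append(p, q.sum() - s)     # p_bar = [p, ||q||_1 - s]
    q_ext = np.append(q, p.sum() - s)     # q_bar = [q, ||p||_1 - s]
    T_ext = ot.emd(p_ext, q_ext, C_ext)   # exact solver on the balanced problem
    return T_ext[:n, :m]                  # plan of total mass s
```

With ξ = 0, ⟨C, T⟩_F evaluated at the returned plan equals PW_p^p(p, q), in line with Proposition 1 below.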
The intuitive derivation of this equivalent formulation is exposed in Appendix 1.1.

Proposition 1: Assume that A > 0 and that ξ is a positive or null scalar. One has

$$W_p^p(\bar p, \bar q) - PW_p^p(p, q) = \xi(\|p\|_1 + \|q\|_1 - 2s),$$

and the optimal transport plan T* of the partial Wasserstein problem is the optimal transport plan T̄* of W_p^p(p̄, q̄) deprived of its last row and column. The proof is postponed to Appendix 1.2.

3.2 Partial Gromov-Wasserstein
We are now interested in the partial extension of Gromov-Wasserstein. In the case of a quadratic cost, p = 2, the partial-GW problem writes as

$$PGW_2^2(p, q) = \min_{T\in\Pi^u(p,q)} J_{C^s,C^t}(T), \quad \text{where}\quad J_{C^s,C^t}(T) = \frac{1}{2}\sum_{i,k=1}^{n}\sum_{j,l=1}^{m} (C_{ik}^s - C_{jl}^t)^2\, T_{ij} T_{kl}. \quad (2)$$

The loss function J_{C^s,C^t} is non-convex, while the couplings feasibility domain Π^u(p, q) is convex and compact. One may expect to introduce virtual points in the GW formulation in order to solve the partial-GW problem. Nevertheless, this strategy is no longer valid, as GW involves pairwise distances that do not allow the computations related to the dummy points to be isolated (see Appendix 1.3). In the following, we build upon a Frank-Wolfe optimization scheme (Frank and Wolfe, 1956), a.k.a. the conditional gradient method (Demyanov and Rubinov, 1970). It has received significant renewed interest in machine learning (Jaggi, 2013; Lacoste-Julien and Jaggi, 2015) and in the OT community, since it serves as a basis to approximate penalized OT problems (Ferradans et al., 2013; Courty et al., 2017) or GW distances (Peyré et al., 2016; Vayer et al., 2020). Our proposed Frank-Wolfe iterations strongly rely on computing partial-W distances and, as such, achieve a sparse transport plan (Ferradans et al., 2013).

Let us first introduce some additional notations. For any tensor M = (M_{ijkl})_{i,j,k,l} ∈ R^{n×n×m×m}, we denote by M ∘ T the matrix in R^{n×m} whose (i, j)-th element is defined as

$$(\mathcal{M} \circ T)_{ij} = \sum_{k=1}^{n}\sum_{l=1}^{m} M_{ijkl} T_{kl} \quad \text{for all } i = 1, \ldots, n,\; j = 1, \ldots, m.$$

Introducing the 4th-order tensor M(C^s, C^t) = ½((C_{ik}^s − C_{jl}^t)²)_{i,j,k,l}, we notice, following Peyré et al. (2016), that J_{C^s,C^t}(T) can be written as J_{C^s,C^t}(T) = ⟨M(C^s, C^t) ∘ T, T⟩_F.

The Frank-Wolfe algorithm for partial-GW is shown in Algorithm 1. Like the classical Frank-Wolfe procedure, it is summarized in three steps for each iteration k, as detailed below. A theoretical study of the convergence of the Frank-Wolfe algorithm for partial-GW is given in Appendix 2.2, together with a detailed derivation of the line search step (Appendix 2.1).

Step 1: Compute a linear minimization oracle over the set Π^u(p, q), i.e.,

$$\tilde T^{(k)} \leftarrow \underset{T\in\Pi^u(p,q)}{\operatorname{argmin}} \;\langle \nabla J_{C^s,C^t}(T^{(k)}), T\rangle_F. \quad (3)$$

To do so, we solve an extended Wasserstein problem with the ground metric ∇J_{C^s,C^t}(T^{(k)}) extended as in Eq. (1):

$$\bar T^{(k)} \leftarrow \underset{T\in\Pi(\bar p,\bar q)}{\operatorname{argmin}} \;\langle \bar\nabla J_{C^s,C^t}(T^{(k)}), T\rangle_F, \quad (4)$$

and get T̃^(k) from T̄^(k) by removing its last row and column.

Step 2: Determine the optimal step size γ^(k) subject to

$$\gamma^{(k)} \leftarrow \underset{\gamma\in[0,1]}{\operatorname{argmin}} \;J_{C^s,C^t}\big((1-\gamma)T^{(k)} + \gamma \tilde T^{(k)}\big). \quad (5)$$

It can be shown that γ^(k) takes the following values, with E^(k) = T̃^(k) − T^(k):
- if ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F < 0, then γ^(k) = 1;
- if ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F > 0, then γ^(k) = min(1, −⟨M(C^s, C^t) ∘ E^(k), T^(k)⟩_F / ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F).

Step 3: Update T^(k+1) ← (1 − γ^(k)) T^(k) + γ^(k) T̃^(k).

Algorithm 1 Frank-Wolfe algorithm for partial-GW
1: Input: source and target samples (X, p) and (Y, q), mass s, p = 2, initial guess T^(0)
2: Compute cost matrices C^s and C^t, build p̄ = [p, ‖q‖_1 − s] and q̄ = [q, ‖p‖_1 − s]
3: for k = 0, 1, 2, 3, . . . do
4:   G^(k) ← M(C^s, C^t) ∘ T^(k)  // compute the gradient ∇J_{C^s,C^t}(T^(k))
5:   T̄^(k) ← argmin_{T∈Π(p̄,q̄)} ⟨Ḡ^(k), T⟩_F  // compute partial-W, with Ḡ extended as in Eq. (1)
6:   Get T̃^(k) from T̄^(k)  // remove last row and column
7:   Compute γ^(k) as in Eq. (5)  // line search
8:   T^(k+1) ← (1 − γ^(k)) T^(k) + γ^(k) T̃^(k)  // update
9: end for
10: Return: T^(k)
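Here is a numpy sketch of the two computational primitives of Algorithm 1: the tensor product M(C^s, C^t) ∘ T, computed with matrix products only via the factorization of Peyré et al. (2016) for the quadratic loss, and the closed-form line search of Step 2. The function names are ours, and the a = 0 branch (objective linear in γ) is our own safeguard.

```python
import numpy as np

def tensor_product(Cs, Ct, T):
    """(M(Cs, Ct) o T)_{ij} = 0.5 * sum_{k,l} (Cs_ik - Ct_jl)^2 T_kl."""
    a = (Cs ** 2) @ T.sum(axis=1)         # sum_k Cs_ik^2 * (sum_l T_kl)
    b = (Ct ** 2) @ T.sum(axis=0)         # sum_l Ct_jl^2 * (sum_k T_kl)
    return 0.5 * (a[:, None] + b[None, :]) - Cs @ T @ Ct.T

def line_search_gamma(Cs, Ct, T, T_tilde):
    """Closed-form Step 2: optimal step size on the segment [T, T_tilde]."""
    E = T_tilde - T
    ME = tensor_product(Cs, Ct, E)
    a = np.sum(ME * E)                    # <M o E, E>_F
    if a < 0:
        return 1.0
    if a > 0:
        b = np.sum(ME * T)                # <M o E, T>_F
        return float(np.clip(-b / a, 0.0, 1.0))
    return 1.0 if np.sum(ME * T) < 0 else 0.0   # objective linear in gamma
```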
4 Optimal transport for PU learning
We hereafter investigate the application of partial optimal transport to learning from positive and unlabeled (PU) data. After introducing PU learning, we present how to formulate a PU learning problem as a partial-OT one.

4.1 Overview of PU learning
Learning from PU data is a variant of the classical binary classification problem, in which the training data consist of only positive points, and the test data are composed of unlabeled positives and negatives. Let Pos = {x_i}_{i=1}^{n_P} be the positive samples drawn according to the conditional distribution p(x|y = 1) and Unl = {x_i^U}_{i=1}^{n_U} the unlabeled set sampled according to the marginal p(x) = π p(x|y = 1) + (1 − π) p(x|y = −1). The true proportion of positives, called the class prior, is π = p(y = 1), and p(x|y = −1) is the distribution of the negative samples, which are all unlabeled. The goal is to learn a binary classifier solely using Pos and Unl. A broad overview of existing PU learning approaches can be found in (Bekker and Davis, 2020). Most PU learning methods commonly rely on the selected completely at random (SCAR) assumption (Elkan and Noto, 2008), which assumes that the labeled samples are drawn at random from the positive distribution, independently of their attributes. Nevertheless, this assumption is often violated in real-world scenarios, and PU data are often subject to selection biases, e.g., when part of the data may be easier to collect. Recently, a less restrictive assumption has been studied: the selected at random (SAR) setting (Bekker and Davis, 2018), which assumes that the positives are labeled according to a subset of the features of the samples. Kato et al. (2019) move a step further and consider that the sampling scheme of the positives is such that p(o = 1|x, y = 1) (o = 1 means observed label) preserves the ordering induced by the posterior distribution p(y = 1|x) over the samples. Other approaches, as in (Hsieh et al., 2019), consider a classical PU learning problem adjoined with a small proportion of observed negative samples. Those negatives are selected with bias, following the distribution p(x|y = −1).

4.2 PU learning formulation using partial optimal transport
We propose in this paper to build on partial optimal transport to perform PU learning. In a nutshell, we aim at transporting a mass s = π from the unlabeled (source) dataset to the positive (target) one. As such, the transport matrix T should be such that the unlabeled positive points are mapped to the positive samples (as they have similar features or intra-domain distance matrices) while the negatives are discarded (in our context, they are not transported at all).

Defining the optimal transport point of view of PU learning. More formally, the unlabeled points Unl represent the source distribution X and the positive points Pos form the target dataset Y. We set the total probability mass to be transported to the proportion of positives in the unlabeled set, that is, s = π.
We look for an optimal transport plan that belongs to the following set of couplings, assuming n = n_U, m = n_P, p_i = 1/n and q_j = s/m:

$$\Pi^{PU}(p, q) = \big\{ T \in \mathbb{R}_+^{|p|\times|q|} \;\big|\; T\mathbf{1}_{|q|} \in \{p, 0\},\; T^\top \mathbf{1}_{|p|} \le q,\; \mathbf{1}_{|p|}^\top T \mathbf{1}_{|q|} = s \big\}, \quad (6)$$

in which T1_{|q|} ∈ {p, 0} means that Σ_{j=1}^m T_ij equals p_i exactly or 0 for all i, to avoid matching only part of the mass of an unlabeled negative with a positive. This set is not empty as long as s mod p_i = 0. The problem that we aim at solving is the following:

$$PUW_p^p(p, q) = \min_{T\in\Pi^{PU}(p,q)} \sum_{i=1}^{n}\sum_{j=1}^{m} C_{ij} T_{ij}.$$

Though the positive samples Pos are assumed easy to label, their features may be corrupted with noise or they may be mislabeled. Let 0 ≤ α ≤ 1 − s denote the noise level.

Solving the PU problem. To enforce the condition T1_{|q|} ∈ {p, 0}, we adopt a regularized point of view of the partial-OT problem, as in Courty et al. (2017), and we solve the following problem:

$$\bar T^* = \underset{\bar T\in\Pi(\bar p,\bar q)}{\operatorname{argmin}} \;\sum_{i=1}^{n+1}\sum_{j=1}^{m+1} \bar C_{ij} \bar T_{ij} + \eta\, \Omega(\bar T) \quad (7)$$

where p_i = (1 − α)/n, q_j = (s + α)/m, and p̄, q̄, C̄_ij are defined as in Section 3.1; η ≥ 0 is a regularization parameter and α is the percentage of Pos that we assume to be noisy (that is to say, we do not want to map them to a point of Unl). We choose Ω(T̄) = Σ_{i=1}^n ( ‖T̄_{i(:m)}‖_2 + ‖T̄_{i(m+1)}‖_2 ), where T̄_{i(:m)} is the vector that contains the entries of the i-th row of T̄ associated with the first m columns. This group-lasso regularization leads to a sparse transportation map and enforces each of the Unl samples x_i to be mapped either to the Pos samples only or to the dummy point y_{m+1}. An illustration is provided in Appendix 5. When partial-GW is involved, we use this regularized OT in step (i) of the Frank-Wolfe algorithm. We can establish that solving problem (7) provides the solution to PU learning using partial-OT.

Proposition 2: Assume that A > 0 and that ξ is a constant. There exists a large enough η > 0 such that

$$W_p^{*p}(\bar p, \bar q) - PUW_p^p(p, q) = \xi(1 - s),$$

where W*_p^p(p̄, q̄) = Σ_{i=1}^{n+1} Σ_{j=1}^{m+1} C̄_ij T̄*_ij with T̄* the solution of Eq. (7). The proof is postponed to Appendix 3.

5 Experiments
5.1 Experimental design
We illustrate the behavior of partial-W and -GW on real datasets in a PU learning context. First, we consider a SCAR assumption, then a SAR one, and finally a more general setting in which the underlying distributions of the samples come from different domains or do not belong to the same metric space. Algorithm 1 has been implemented and is available in the Python Optimal Transport (POT) toolbox (Flamary and Courty, 2017). Following previous works (Kato et al., 2019; Hsieh et al., 2019), we assume that the class prior π is known throughout the experiments; otherwise, it can be estimated from {x_i}_{i=1}^{n_P} and {x_i^U}_{i=1}^{n_U} using off-the-shelf methods, e.g., Zeiberg and Radivojac (2020); Plessis et al. (2017); Jain and Radivojac (2016). For both partial-W and partial-GW, we choose p = 2 and the cost matrices C are computed using the Euclidean distance. We carry out experiments on real-world datasets under the aforementioned scenarios. We rely on six datasets (Mushrooms, Shuttle, Pageblocks, USPS, Connect-4, Spambase) from the UCI repository (https://archive.ics.uci.edu/ml/datasets.php), following Kato et al. (2019)'s setting, and on colored MNIST (Arjovsky et al., 2019) to illustrate our method in the SCAR and SAR settings, respectively. We also consider the Caltech office dataset, a common benchmark for domain adaptation (Courty et al., 2017), to explore the effectiveness of our method on heterogeneous distribution settings.
Whenever they contain several classes, these datasets are converted into binary classification problems following Kato et al. (2019), and the positives are the samples that belong to the class y = 1. For the UCI and colored MNIST datasets, we randomly draw n_P = 400 positive and n_U = 800 unlabeled points among the remaining data. As the Caltech office datasets are smaller, we choose n_P = 100 and n_U = 100 in that context. To ease the presentation, we report here the results with the class prior π set to the true proportion of the positive class in the dataset, and we provide additional results in Appendix 6.3 when varying s. We ran the experiments 10 times and report the mean accuracy rate (standard deviations are shown in Appendix 6.1). We test two levels of noise in Pos, α = 0 or α = 0.025, fix ξ = 0 and A = max(C), and choose a large η = 10^6. For the experiments, we consider the unbiased PU learning method (denoted PU in the sequel) (Du Plessis et al., 2014) and the most recent and effective method addressing PU learning with a selection bias (called PUSB below), which tries to weaken the SCAR assumption (Kato et al., 2019). Whenever possible (that is to say, when source and target samples share the same features), we compare our approaches P-W and P-GW with PU and PUSB; if not, we are not aware of any competitive PU learning method able to handle different features in Pos and Unl. The GW formulation is a non-convex problem and the quality of the solution is highly dependent on the initialization. We explore several initializations of the transport matrix for P-GW and report the results that yield the lowest partial-OT distance (see Appendix 4 for details).

5.2 Partial-W and partial-GW in PU learning under a SCAR assumption
Under SCAR, the Pos dataset and the positives in Unl are assumed independently and identically drawn according to the distribution p(x|y = 1) from a set of positive points. We experiment on the UCI datasets, and Table 1 (top) summarizes our findings. Except for Connect-4 and Spambase, partial-W has similar results to, or consistently outperforms, PU and PUSB. Including some noise has little impact on the results, except for the Connect-4 dataset. Partial-GW has competitive results, showing that relying on intra-domain matrices may allow discriminating the classes. It nevertheless under-performs relative to partial-W, as the distance matrix C between Pos and Unl is more informative than intra-domain matrices alone.

5.3 Experiments under a SAR assumption
The SAR assumption supposes that Pos is drawn according to some features of the samples. To implement such a setting, we take inspiration from (Arjovsky et al., 2019) and construct a colored version of MNIST: each digit is colored either in green or red, with a probability of 90% of being colored in red. The probability of labeling a digit y = 1 as positive depends on its color, with only green y = 1 digits composing the positive set. The Unl dataset is then mostly composed of red digits. Results under this setting are provided in Table 1 (middle). When we consider a SCAR scenario, partial-W exhibits the best performance. However, its effectiveness drops sharply when a covariate shift appears between the distributions p(x|y = 1) of the Pos and Unl datasets, as in this SAR scenario. On the opposite, partial-GW maintains a comparable level of accuracy, as the discriminative information is preserved in the intra-domain distance matrices.
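For reference, here is a sketch of the partial-GW pipeline used in these PU experiments, assuming a recent POT release that exposes ot.partial.partial_gromov_wasserstein; the decision threshold on the transported mass is our own choice, not the paper's.

```python
import numpy as np
import ot
from scipy.spatial.distance import cdist

def pu_predict_partial_gw(Unl, Pos, class_prior):
    """Predict positives in Unl from intra-domain distances only.

    Unl : (n_u, d_u) unlabeled samples, Pos : (n_p, d_p) positive samples;
    the feature dimensions d_u and d_p may differ (SURF vs. DECAF setting).
    """
    Cs = cdist(Unl, Unl)                              # source intra-domain distances
    Ct = cdist(Pos, Pos)                              # target intra-domain distances
    p = np.full(len(Unl), 1.0 / len(Unl))             # p_i = 1/n
    q = np.full(len(Pos), class_prior / len(Pos))     # q_j = s/m with s = pi
    T = ot.partial.partial_gromov_wasserstein(Cs, Ct, p, q, m=class_prior)
    # unlabeled points that ship (nearly) all their mass are predicted positive
    return T.sum(axis=1) > 0.5 / len(Unl)
```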
5.4 Partial-W and -GW in PU learning with different domains and/or feature spaces

To further validate the proposed method in a different context, we apply partial-W and partial-GW to a domain adaptation task. We consider the Caltech Office dataset, which consists of four domains: Caltech 256 (C) (Griffin et al., 2007), Amazon (A), Webcam (W) and DSLR (D) (Saenko et al., 2010). There exists a high inter-domain variability, as the objects may be subject to different illumination, orientation, etc. Following a standard protocol, each image of each domain is described by a set of SURF features (Saenko et al., 2010) consisting of a normalized 800-bin histogram, and by a set of DECAF features (Donahue et al., 2014), which are 4096-dimensional features extracted from a neural network. The Pos dataset consists of images from Caltech 256. The unlabeled samples are formed by the Amazon, Webcam and DSLR images together with the Caltech 256 images that are not included in Pos. We perform a PCA to project the data onto d = 10 dimensions for the SURF features and d = 40 for the DECAF ones. We first investigate the case where the objects are represented by the same features but belong to the same or different domains. Results are given in Table 1 (bottom). For both feature sets, we first notice that PU and PUSB have performances similar to partial-W when the domains are the same. As soon as the two domains differ, partial-GW exhibits the best performances, suggesting that it is able to capture some domain shift. We then consider a scenario where the source and target objects are described by different features (Table 2). In that case, only partial-GW is applicable, and its performances suggest that it is able to efficiently leverage the discriminative information conveyed in the intra-domain similarity matrices, especially when using SURF features to make predictions based on DECAF ones.

6 Conclusion and future work

In this paper, we build on partial-W and -GW distances to solve a PU learning problem. We propose a scheme relying on iterations of a Frank-Wolfe algorithm to compute a partial-GW solution, in which each iteration requires solving a partial-W problem that is derived from the solution of an extended Wasserstein problem. We show that those distances compete with and sometimes outperform the state-of-the-art PU learning methods, and that partial-GW allows remarkable improvements when the underlying spaces of the positive and unlabeled datasets are distinct or even unregistered. While this work considers only features (with partial-W) or intra-domain distances (with partial-GW), it can be extended to define a partial Fused Gromov-Wasserstein distance (Vayer et al., 2020) that combines both aspects. Another line of work will focus on lowering the computational complexity by using sliced partial-GW, building on existing works on sliced partial-W (Bonneel and Coeurjolly, 2019) and sliced GW (Vayer et al., 2019). Regarding the application viewpoint, we envision a potential use of the approach for subgraph matching (Kriege and Mutzel, 2012) or PU learning on graphs (Zhao et al., 2011), as GW has proven effective for comparing structured data such as graphs. In addition, we also target applications such as detecting out-of-distribution examples or open-set domain adaptation (Saito et al., 2018).
Finally, we plan to derive an extension of this work to PU learning in which the proportion of positives in the dataset will be estimated within a unified optimal transport formulation, building on results of a GW-based test of isomorphism between distributions (Brécheteau, 2019).

Broader impact

This work does not present any significant societal, environmental or ethical consequence.

Acknowledgments

This work is partially funded through the projects OATMIL ANR-17-CE23-0012, MULTISCALE ANR-18-CE23-0022-01 and RAIMO ANR-20-CHIA-0021-01.
1. What is the main contribution of the paper regarding partial Wasserstein and Gromov-Wasserstein problems? 2. What are the strengths of the proposed approach, particularly in its application to the partial assignment problem? 3. What are the weaknesses of the paper, especially regarding the choice of parameters and computational efficiency?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper shows that the solution of the partial Wasserstein problem can be obtained by first solving an extended Wasserstein problem and then dropping the added row and column. The paper then applies the same trick to the partial Gromov-Wasserstein problem and provides a way to solve it. Strengths Introducing an outlier bin into the optimal transport problem is not a new idea; see for example [1]. In this paper, the authors use this idea to solve the partial assignment problem and extend it to the partial GW problem with theoretical proof. This part is novel and the paper provides detailed mathematical proofs. The proposed method has been tested on several datasets and compared with other baseline methods, sometimes showing better performance. [1] SuperGlue: Learning Feature Matching with Graph Neural Networks, CVPR 2020 Weaknesses 1. Although the partial assignment problem is addressed, how to set the mass of the dummy point and the ξ in the cost matrix might be a potential issue. These parameters depend on the dataset and could strongly affect the result; they might be difficult to tune. 2. I am a little curious about the running time, which is entirely missing from the paper.
NIPS
Title Partial Optimal Transport with Applications on Positive-Unlabeled Learning

Abstract The classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications, such as color or shape matching, since the distributions may have arbitrary masses and/or only a fraction of the total mass has to be transported. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulation in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context, and we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings. We then demonstrate that partial Gromov-Wasserstein metrics are efficient in scenarios in which the samples from the positive and the unlabeled datasets come from different domains or have different features.

1 Introduction

Optimal transport (OT) has gained increasing attention in the machine learning community in recent years, mainly due to its capacity to exploit the geometric properties of the samples. Generally speaking, OT is a mathematical tool to compare distributions by computing a mass transportation plan from a source to a target distribution. Distances based on OT are referred to as Monge-Kantorovich or Wasserstein distances (Villani, 2009) and have been successfully employed in a wide variety of machine learning applications, including clustering (Ho et al., 2017), computer vision (Bonneel et al., 2011; Solomon et al., 2015), generative adversarial networks (Arjovsky et al., 2017) and domain adaptation (Courty et al., 2017). A key limitation of the Wasserstein distance is that it relies on the assumption of aligned distributions, namely that they must belong to the same ground space, or at least that a meaningful distance across domains can be computed. Nevertheless, source and target distributions can be collected under distinct environments, representing different times of collection, contexts or measurements (see Fig. 1, left and right). To benefit from OT in such heterogeneous distribution settings, one can compute the Gromov-Wasserstein (GW) distance (Sturm, 2006; Mémoli, 2011) to overcome the lack of intrinsic correspondence between the distribution spaces. GW extends Wasserstein by computing a distance between metrics defined within each of the source and target spaces. From a computational point of view, it involves a non-convex quadratic problem (Peyré and Cuturi, 2019), hard to lift to large-scale settings. A remedy to such a heavy computational burden lies in a prevalent approach referred to as regularized OT (Cuturi, 2013), which allows one to add an entropic regularization penalty to the original problem. Peyré et al. (2016); Solomon et al. (2016) propose the entropic GW discrepancy, which can be solved by Sinkhorn iterations (Cuturi, 2013; Benamou et al., 2015). A major bottleneck of OT in its traditional formulation is that it requires the two input measures to have the same total probability mass and/or that all the mass has to be transported.
This is too restrictive for many applications, such as color matching or shape registration (Bonneel and Coeurjolly, 2019), since mass changes may occur, due to creation or annihilation, while computing an OT plan. To tackle this limitation, one may employ strategies such as partial or unbalanced transport (Guittet, 2002; Figalli, 2010; Caffarelli and McCann, 2010). Chizat et al. (2018) propose to relax the marginal constraints of unbalanced total masses using divergences such as Kullback-Leibler or Total Variation, allowing the use of generalized Sinkhorn iterations. Yang and Uhler (2019) generalize this approach to GANs, and Lee et al. (2019) present an ADMM algorithm for the relaxed partial OT. Most of these approaches concentrate on partial-Wasserstein. This paper deals with exact partial Wasserstein (partial-W) and Gromov-Wasserstein (partial-GW). Some strategies for computing such partial-W require relaxations of the marginal constraints. We rather build our approach upon adding virtual or dummy points onto the marginals, which is a common practice in OT works. Among the latter, Caffarelli and McCann (2010) attach such points to allow choosing the maximum distance over which mass can be transported. Pele and Werman (2009) threshold ground distances and send the extra mass to a dummy point to compute a robust EMD distance. Gramfort et al. (2015) consider the case of unnormalized measures and use a dummy point to “fill” the distributions, the extended problem then having both marginals summing to one. More recently, Sarlin et al. (2020) deal with the partial assignment problem by extending the initial problem and filling the ground distance matrix with a single learnable parameter. In this paper, the dummy points are used as a buffer when comparing distributions with different probability masses, allowing partial-W to boil down to solving an extended but standard Wasserstein problem. The main advantage of our approach is that it explicitly defines the mass to be transported, and that it leads to computing sparse transport plans and hence exact partial-W or -GW distances, instead of regularized discrepancies obtained by running Sinkhorn algorithms. Regarding partial-GW, our approach relies on a Frank-Wolfe optimization algorithm (Frank and Wolfe, 1956) that builds on computations of partial-W. Tackling partial-OT problems while preserving sparsity is motivated by the fact that sparse plans are more suitable for applications such as the Positive-Unlabeled (PU) learning (see Bekker and Davis (2020) for a review) we target in this paper. We note that this is the first application of OT to solving PU learning tasks. In a nutshell, PU classification is a variant of the binary classification problem in which we only have access to labeled samples from the positive (Pos) class at the training stage. The aim is to assign classes to the points of an unlabeled (Unl) set which mixes data from both positive and negative classes. Using OT allows identifying the positive points within Unl, even when the Pos and Unl samples do not lie in the same space (see Fig. 1). The paper is organized as follows: we first recall some background on OT. In Section 3, we propose an algorithm to solve an exact partial-W problem, together with a Frank-Wolfe-based algorithm to compute the partial-GW solution. After describing in more detail the PU learning task and the use of partial-OT to solve it, we illustrate the advantage of partial-GW when the source and the target distributions are collected in distinct environments.
We finally give some perspectives.

Notations Σ_N is the set of histograms with N bins, {p ∈ R_+^N, Σ_i p_i = 1}, and δ is the Dirac function. Let 1_n be the n-dimensional vector of ones. ⟨·,·⟩_F stands for the Frobenius dot product. |p| indicates the length of the vector p.

2 Background on optimal transport

Let X = {x_i}_{i=1}^n and Y = {y_j}_{j=1}^m be two point clouds representing the source and target samples, respectively. We assume two empirical distributions (p, q) ∈ Σ_n × Σ_m over X and Y,

p = Σ_{i=1}^n p_i δ_{x_i} and q = Σ_{j=1}^m q_j δ_{y_j},

where Σ_n and Σ_m are histograms of |p| = n and |q| = m bins respectively. The set of all admissible couplings Π(p, q) between histograms is given by

Π(p, q) = {T ∈ R_+^{|p|×|q|} | T 1_{|q|} = p, T^⊤ 1_{|p|} = q},

where T = (T_ij)_{i,j} is a coupling matrix whose entry T_ij describes the amount of mass p_i found at x_i flowing toward the mass q_j of y_j. OT addresses the problem of optimally transporting p toward q, given a cost D_ij measured as a geometric distance between x_i and y_j. More precisely, when the ground cost C = D^p = (D_ij^p)_{i,j} is a distance matrix raised to the power p, the p-Wasserstein distance on Σ_n × Σ_m at the power p is defined as:

W_p^p(p, q) = min_{T ∈ Π(p,q)} ⟨C, T⟩_F = min_{T ∈ Π(p,q)} Σ_{i=1}^n Σ_{j=1}^m C_ij T_ij.

In some applications, the two distributions are not registered (i.e. we cannot compute a ground cost between x_i and y_j) or do not lie in the same underlying space. The Gromov-Wasserstein distance addresses this bottleneck by extending the Wasserstein distance to such settings, also allowing invariance to translation, rotation or scaling. Informally, it measures the distortion induced when transporting the whole set of points from one space to another. It relies on intra-domain distance matrices of the source, C^s = (C_ik^s)_{i,k} = (C^s(x_i, x_k))_{i,k} ∈ R_+^{n×n}, and of the target, C^t = (C_jl^t)_{j,l} = (C^t(y_j, y_l))_{j,l} ∈ R_+^{m×m}, and is defined as in Mémoli (2011):

GW_p^p(p, q) = min_{T ∈ Π(p,q)} Σ_{i,k=1}^n Σ_{j,l=1}^m |C_ik^s − C_jl^t|^p T_ij T_kl.

3 Exact partial Wasserstein and Gromov-Wasserstein distances

We first detail how extending a balanced Wasserstein problem allows solving a partial-Wasserstein one. We then propose a Frank-Wolfe scheme that relies on computing partial-W to solve the partial-GW problem.

3.1 Partial Wasserstein distance

The previous OT distances require the two distributions to have the same total probability mass, ‖p‖₁ = ‖q‖₁, and that all the mass be transported. This may be a problematic assumption when some mass variation or partial mass displacement should be handled. The partial OT problem focuses on transporting only a fraction 0 ≤ s ≤ min(‖p‖₁, ‖q‖₁) of the mass as cheaply as possible. In that case, the set of admissible couplings becomes

Π^u(p, q) = {T ∈ R_+^{|p|×|q|} | T 1_{|q|} ≤ p, T^⊤ 1_{|p|} ≤ q, 1_{|p|}^⊤ T 1_{|q|} = s},

and the partial-W distance reads

PW_p^p(p, q) = min_{T ∈ Π^u(p,q)} ⟨C, T⟩_F.

This problem has been studied by Caffarelli and McCann (2010); Figalli (2010); numerical solutions have notably been provided by Benamou et al. (2015); Chizat et al. (2018) in the entropic-regularized Wasserstein case. We propose here to directly solve the exact partial-W problem by adding dummy or virtual points x_{n+1} and y_{m+1} (with arbitrary features) and extending the cost matrix as follows:

C̄ = [ C            ξ 1_{|p|} ]
    [ ξ 1_{|q|}^⊤    2ξ + A  ],   (1)

in which A > 0 and ξ is a fixed positive or null scalar. When the mass of these dummy points is set such that p_{n+1} = ‖q‖₁ − s and q_{m+1} = ‖p‖₁ − s, computing the partial-W distance boils down to solving the standard Wasserstein problem W_p^p(p̄, q̄) = min_{T̄ ∈ Π(p̄,q̄)} ⟨C̄, T̄⟩_F, where p̄ = [p, ‖q‖₁ − s] and q̄ = [q, ‖p‖₁ − s].
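As a concrete reading of this construction, here is a minimal NumPy/POT sketch (ours, not the toolbox implementation); it builds the extended problem above, and its last two lines anticipate Proposition 1 below by stripping the dummy row and column.

import numpy as np
import ot  # Python Optimal Transport toolbox

def partial_wasserstein(p, q, C, s, xi=0.0):
    """Exact partial-W through the dummy-point extension of eq. (1).
    p, q: histograms (possibly of different total masses); C: (n, m)
    ground cost; s: mass to transport, 0 <= s <= min(||p||_1, ||q||_1).
    Any A > 0 is valid; A = max(C) + 1 keeps the dummy-dummy cell
    unattractive."""
    n, m = C.shape
    A = C.max() + 1.0
    C_bar = np.zeros((n + 1, m + 1))
    C_bar[:n, :m] = C
    C_bar[:n, m] = xi                  # cost of leaving source mass behind
    C_bar[n, :m] = xi                  # cost of leaving target mass behind
    C_bar[n, m] = 2 * xi + A
    p_bar = np.append(p, q.sum() - s)  # p_{n+1} = ||q||_1 - s
    q_bar = np.append(q, p.sum() - s)  # q_{m+1} = ||p||_1 - s
    T_bar = ot.emd(p_bar, q_bar, C_bar)
    T = T_bar[:n, :m]                  # strip the dummy row and column
    return T, (C * T).sum()            # partial plan and PW_p^p(p, q)

With ξ = 0, as used in the experiments of Section 5, transport to the dummies is free, so only the mass constraints determine how much is left untransported.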
The intuitive derivation of this equivalent formulation is exposed in Appendix 1.1.

Proposition 1 Assume that A > 0 and that ξ is a positive or null scalar. Then

W_p^p(p̄, q̄) − PW_p^p(p, q) = ξ(‖p‖₁ + ‖q‖₁ − 2s),

and the optimal transport plan T* of the partial Wasserstein problem is the optimal transport plan T̄* of W_p^p(p̄, q̄) deprived of its last row and column. The proof is postponed to Appendix 1.2.

3.2 Partial Gromov-Wasserstein

We are now interested in the partial extension of Gromov-Wasserstein. In the case of a quadratic cost, p = 2, the partial-GW problem writes

PGW_2^2(p, q) = min_{T ∈ Π^u(p,q)} J_{C^s,C^t}(T), where J_{C^s,C^t}(T) = (1/2) Σ_{i,k=1}^n Σ_{j,l=1}^m (C_ik^s − C_jl^t)² T_ij T_kl.   (2)

The loss function J_{C^s,C^t} is non-convex, and the coupling feasibility domain Π^u(p, q) is convex and compact. One may expect to introduce virtual points in the GW formulation in order to solve the partial-GW problem. Nevertheless, this strategy is no longer valid, as GW involves pairwise distances that do not allow the computations related to the dummy points to be isolated (see Appendix 1.3). In the following, we build upon a Frank-Wolfe optimization scheme (Frank and Wolfe, 1956), a.k.a. the conditional gradient method (Demyanov and Rubinov, 1970). It has received significant renewed interest in machine learning (Jaggi, 2013; Lacoste-Julien and Jaggi, 2015) and in the OT community, since it serves as a basis to approximate penalized OT problems (Ferradans et al., 2013; Courty et al., 2017) or GW distances (Peyré et al., 2016; Vayer et al., 2020). Our proposed Frank-Wolfe iterations strongly rely on computing partial-W distances and, as such, achieve a sparse transport plan (Ferradans et al., 2013). Let us first introduce some additional notations. For any tensor M = (M_ijkl)_{i,j,k,l} ∈ R^{n×m×n×m}, we denote by M ∘ T the matrix in R^{n×m} whose (i, j)-th element is defined as

(M ∘ T)_ij = Σ_{k=1}^n Σ_{l=1}^m M_ijkl T_kl for all i = 1, ..., n, j = 1, ..., m.

Introducing the 4th-order tensor M(C^s, C^t) = (1/2) ((C_ik^s − C_jl^t)²)_{i,j,k,l}, we notice that J_{C^s,C^t}(T), following Peyré et al. (2016), can be written as J_{C^s,C^t}(T) = ⟨M(C^s, C^t) ∘ T, T⟩_F. The Frank-Wolfe algorithm for partial-GW is shown in Algorithm 1. Like the classical Frank-Wolfe procedure, each iteration k consists of three steps, as detailed below. A theoretical study of the convergence of the Frank-Wolfe algorithm for partial-GW is given in Appendix 2.2, together with a detailed derivation of the line-search step (see Appendix 2.1).

Step 1. Compute a linear minimization oracle over the set Π^u(p, q), i.e.,

T̃^(k) ← argmin_{T ∈ Π^u(p,q)} ⟨∇J_{C^s,C^t}(T^(k)), T⟩_F.   (3)

To do so, we solve an extended Wasserstein problem with the ground metric ∇J_{C^s,C^t}(T^(k)) extended as in eq. (1):

T̄^(k) ← argmin_{T ∈ Π(p̄,q̄)} ⟨∇̄J_{C^s,C^t}(T^(k)), T⟩_F,   (4)

and get T̃^(k) from T̄^(k) by removing its last row and column.

Step 2. Determine the optimal step size γ^(k) as

γ^(k) ← argmin_{γ ∈ [0,1]} J_{C^s,C^t}((1 − γ) T^(k) + γ T̃^(k)).   (5)

It can be shown that γ^(k) takes the following values, with E^(k) = T̃^(k) − T^(k):
• if ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F < 0, then γ^(k) = 1;
• if ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F > 0, then γ^(k) = min(1, −⟨M(C^s, C^t) ∘ E^(k), T^(k)⟩_F / ⟨M(C^s, C^t) ∘ E^(k), E^(k)⟩_F).

Step 3. Update T^(k+1) ← (1 − γ^(k)) T^(k) + γ^(k) T̃^(k).

Algorithm 1 Frank-Wolfe algorithm for partial-GW
1: Input: source and target samples (X, p) and (Y, q), mass s, p = 2, initial guess T^(0)
2: Compute cost matrices C^s and C^t, build p̄ = [p, ‖q‖₁ − s] and q̄ = [q, ‖p‖₁ − s]
3: for k = 0, 1, 2, ... do
4:   G^(k) ← M(C^s, C^t) ∘ T^(k)   // compute the gradient ∇J_{C^s,C^t}(T^(k))
5:   T̄^(k) ← argmin_{T ∈ Π(p̄,q̄)} ⟨Ḡ^(k), T⟩_F   // compute partial-W, with Ḡ extended as in eq. (1)
6:   Get T̃^(k) from T̄^(k)   // remove last row and column
7:   Compute γ^(k) as in eq. (5)   // line search
8:   T^(k+1) ← (1 − γ^(k)) T^(k) + γ^(k) T̃^(k)   // update
9: end for
10: Return T^(k)

4 Optimal transport for PU learning

We hereafter investigate the application of partial optimal transport to learning from positive and unlabeled (PU) data. After introducing PU learning, we present how to formulate a PU learning problem as a partial-OT one.

4.1 Overview of PU learning

Learning from PU data is a variant of the classical binary classification problem, in which the training data consist of only positive points, while the test data are composed of unlabeled positives and negatives. Let Pos = {x_i}_{i=1}^{n_P} be the positive samples, drawn according to the conditional distribution p(x|y = 1), and Unl = {x_i^U}_{i=1}^{n_U} the unlabeled set, sampled according to the marginal p(x) = π p(x|y = 1) + (1 − π) p(x|y = −1). The true proportion of positives, called the class prior, is π = p(y = 1), and p(x|y = −1) is the distribution of negative samples, which are all unlabeled. The goal is to learn a binary classifier solely using Pos and Unl. A broad overview of existing PU learning approaches can be found in (Bekker and Davis, 2020). Most PU learning methods rely on the selected completely at random (SCAR) assumption (Elkan and Noto, 2008), which assumes that the labeled samples are drawn at random from the positive distribution, independently of their attributes. Nevertheless, this assumption is often violated in real-world scenarios, and PU data are often subject to selection biases, e.g. when part of the data is easier to collect. Recently, a less restrictive assumption has been studied: the selected at random (SAR) setting (Bekker and Davis, 2018), which assumes that the positives are labeled according to a subset of the features of the samples. Kato et al. (2019) move a step further and consider a sampling scheme of the positives such that p(o = 1|x, y = 1) (where o = 1 means the label is observed) preserves the ordering induced by the posterior distribution p(y = 1|x) over the samples. Other approaches, as in (Hsieh et al., 2019), consider a classical PU learning problem augmented with a small proportion of observed negative samples, those negatives being selected with a bias following the distribution p(x|y = −1).

4.2 PU learning formulation using partial optimal transport

We propose in this paper to build on partial optimal transport to perform PU learning. In a nutshell, we aim at transporting a mass s = π from the unlabeled (source) dataset to the positive (target) one. As such, the transport matrix T should be such that the unlabeled positive points are mapped to the positive samples (as they have similar features or intra-domain distance matrices) while the negatives are discarded (in our context, they are not transported at all).

Defining the optimal transport point of view of PU learning. More formally, the unlabeled points Unl represent the source distribution X and the positive points Pos are the target dataset Y. We set the total probability mass to be transported to the proportion of positives in the unlabeled set, that is, s = π.
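Before specializing the couplings to PU learning below, note that Algorithm 1 admits a compact sketch (ours, not the POT implementation; it reuses the hypothetical partial_wasserstein helper from the Section 3.1 sketch as the linear oracle). The key computational point is that M(C^s, C^t) ∘ T never needs to be formed from the 4th-order tensor: for the quadratic loss it expands into three matrix products.

import numpy as np

def tens_prod(Cs, Ct, T):
    """(M ∘ T)_ij = 0.5 * sum_{k,l} (Cs_ik - Ct_jl)^2 T_kl, expanded so that
    the 4th-order tensor is never materialized."""
    row = (Cs ** 2) @ T.sum(axis=1)    # sum_k Cs_ik^2 (sum_l T_kl)
    col = (Ct ** 2) @ T.sum(axis=0)    # sum_l Ct_jl^2 (sum_k T_kl)
    return 0.5 * (row[:, None] + col[None, :]) - Cs @ T @ Ct.T

def partial_gw(Cs, Ct, p, q, s, n_iter=100):
    """Frank-Wolfe iterations of Algorithm 1 for partial-GW (a sketch)."""
    T = s * np.outer(p, q) / (p.sum() * q.sum())       # feasible initial guess
    for _ in range(n_iter):
        G = tens_prod(Cs, Ct, T)                       # gradient, up to a factor 2
        T_tilde, _ = partial_wasserstein(p, q, G, s)   # step 1: LP oracle
        E = T_tilde - T
        ME = tens_prod(Cs, Ct, E)
        a, b = (ME * E).sum(), (ME * T).sum()          # <M∘E,E>_F and <M∘E,T>_F
        gamma = min(1.0, max(0.0, -b / a)) if a > 0 else 1.0   # step 2
        T = T + gamma * E                               # step 3
    return T

The factor 2 dropped from the gradient does not change the argmin of the linear oracle, and the max(0, ·) clamp in the line search is a safety net: a Frank-Wolfe direction guarantees ⟨M ∘ E, T⟩_F ≤ 0, so the unclamped step is already nonnegative.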
We look for an optimal transport plan that belongs to the following set of couplings, assuming n = n_U, m = n_P, p_i = 1/n and q_j = s/m:

Π^PU(p, q) = {T ∈ R_+^{|p|×|q|} | T 1_{|q|} ∈ {p, 0}, T^⊤ 1_{|p|} ≤ q, 1_{|p|}^⊤ T 1_{|q|} = s},   (6)

in which T 1_{|q|} ∈ {p, 0} means that Σ_{j=1}^m T_ij equals either p_i exactly or 0 for every i, to avoid matching part of the mass of an unlabeled negative with a positive. This set is not empty as long as s mod p_i = 0, i.e. s is a multiple of 1/n here. The problem that we aim at solving is the following:

PUW_p^p(p, q) = min_{T ∈ Π^PU(p,q)} Σ_{i=1}^n Σ_{j=1}^m C_ij T_ij.

Though the positive samples Pos are assumed easy to label, their features may be corrupted with noise or they may be mislabeled. Let us denote by α, with 0 ≤ α ≤ 1 − s, the noise level.

Solving the PU problem. To enforce the condition T 1_{|q|} ∈ {p, 0}, we adopt a regularized point of view of the partial-OT problem, as in Courty et al. (2017), and we solve the following problem:

T̄* = argmin_{T̄ ∈ Π(p̄,q̄)} Σ_{i=1}^{n+1} Σ_{j=1}^{m+1} C̄_ij T̄_ij + η Ω(T̄),   (7)

where p_i = (1 − α)/n, q_j = (s + α)/m, and p̄, q̄ and C̄_ij are defined as in Section 3.1, η ≥ 0 is a regularization parameter, and α is the percentage of Pos that we assume to be noisy (that is to say, we do not want to map them to a point of Unl). We choose Ω(T̄) = Σ_{i=1}^n (‖T̄_i(:m)‖₂ + ‖T̄_i(m+1)‖₂), where T̄_i(:m) is the vector containing the entries of the i-th row of T̄ associated with the first m columns. This group-lasso regularization leads to a sparse transportation map and enforces each of the Unl samples x_i to be mapped either only to Pos samples or to the dummy point y_{m+1}. An illustration is provided in Appendix 5. When partial-GW is involved, we use this regularized OT in Step 1 of the Frank-Wolfe algorithm. We can establish that solving problem (7) provides the solution to PU learning using partial-OT.

Proposition 2 Assume that A > 0 and that ξ is a constant. Then there exists a large enough η > 0 such that

W*_p^p(p̄, q̄) − PUW_p^p(p, q) = ξ(1 − s),

where W*_p^p(p̄, q̄) = Σ_{i=1}^{n+1} Σ_{j=1}^{m+1} C̄_ij T̄*_ij, with T̄* the solution of eq. (7). The proof is postponed to Appendix 3.

5 Experiments

5.1 Experimental design

We illustrate the behavior of partial-W and -GW on real datasets in a PU learning context. First, we consider a SCAR assumption, then a SAR one, and finally a more general setting in which the underlying distributions of the samples come from different domains or do not belong to the same metric space. Algorithm 1 has been implemented and is available in the Python Optimal Transport (POT) toolbox (Flamary and Courty, 2017). Following previous works (Kato et al., 2019; Hsieh et al., 2019), we assume that the class prior π is known throughout the experiments; otherwise, it can be estimated from {x_i}_{i=1}^{n_P} and {x_i^U}_{i=1}^{n_U} using off-the-shelf methods, e.g. Zeiberg and Radivojac (2020); Plessis et al. (2017); Jain and Radivojac (2016). For both partial-W and partial-GW, we choose p = 2 and the cost matrices C are computed using the Euclidean distance. We carry out experiments on real-world datasets under the aforementioned scenarios. We rely on six datasets, Mushrooms, Shuttle, Pageblocks, USPS, Connect-4 and Spambase, from the UCI repository (https://archive.ics.uci.edu/ml/datasets.php), following Kato et al. (2019)'s setting, and on colored MNIST (Arjovsky et al., 2019), to illustrate our method in the SCAR and SAR settings respectively. We also consider the Caltech office dataset, a common benchmark in domain adaptation (Courty et al., 2017), to explore the effectiveness of our method on heterogeneous distribution settings.
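Before turning to the datasets, note that the group-lasso term of (7) translates line by line into code; in this sketch (our notation), T_bar is the (n+1)×(m+1) extended plan of problem (7).

import numpy as np

def omega(T_bar):
    """Ω(T̄) = Σ_{i=1}^n ( ||T̄_i(:m)||_2 + ||T̄_i(m+1)||_2 ): per Unl row,
    the norm of the entries sent to Pos plus the (scalar) mass sent to the
    dummy point, pushing each row to use one group only."""
    rows = T_bar[:-1]                              # the n Unl rows
    to_pos = np.linalg.norm(rows[:, :-1], axis=1)  # ||T̄_i(:m)||_2
    to_dummy = np.abs(rows[:, -1])                 # ||T̄_i(m+1)||_2
    return float((to_pos + to_dummy).sum())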
Whenever they contain several classes, these datasets are converted into binary classification problems following Kato et al. (2019), the positives being the samples that belong to the y = 1 class. For the UCI and colored MNIST datasets, we randomly draw n_P = 400 positive and n_U = 800 unlabeled points among the remaining data. As the Caltech office datasets are smaller, we choose n_P = 100 and n_U = 100 in that context. To ease the presentation, we report here the results with the class prior π set to the true proportion of the positive class in the dataset, and provide in Appendix 6.3 additional results when varying s. We ran the experiments 10 times and report the mean accuracy rate (standard deviations are shown in Appendix 6.1). We test two levels of noise in Pos, α = 0 and α = 0.025, fix ξ = 0 and A = max(C), and choose a large η = 10^6. For the experiments, we consider the unbiased PU learning method (denoted PU in the sequel) (Du Plessis et al., 2014) and the most recent and effective method addressing PU learning with a selection bias (called PUSB below), which tries to weaken the SCAR assumption (Kato et al., 2019). Whenever possible (that is to say, when source and target samples share the same features), we compare our approaches P-W and P-GW with PU and PUSB; if not, we are not aware of any competitive PU learning method able to handle different features in Pos and Unl. The GW formulation is a non-convex problem and the quality of the solution is highly dependent on the initialization. We explore several initializations of the transport matrix for P-GW and report the results that yield the lowest partial OT distance (see Appendix 4 for details).

5.2 Partial-W and partial-GW in PU learning under a SCAR assumption

Under SCAR, the Pos dataset and the positives in Unl are assumed independently and identically drawn according to the distribution p(x|y = 1). We experiment on the UCI datasets, and Table 1 (top) summarizes our findings. Except for Connect-4 and Spambase, partial-W obtains similar results to, or consistently outperforms, PU and PUSB. Including some noise has little impact on the results, except on the Connect-4 dataset. Partial-GW obtains competitive results, showing that relying on intra-domain matrices may suffice to discriminate the classes. It nevertheless under-performs relative to partial-W, as the distance matrix C between Pos and Unl is more informative than the intra-domain matrices alone.

5.3 Experiments under a SAR assumption

The SAR assumption supposes that Pos is drawn according to some features of the samples. To implement such a setting, we take inspiration from Arjovsky et al. (2019) and construct a colored version of MNIST: each digit is colored either green or red, with a probability of 90% of being colored red. The probability of labeling a digit y = 1 as positive depends on its color, with only green y = 1 digits composing the positive set. The Unl dataset is then mostly composed of red digits. Results under this setting are provided in Table 1 (middle). In a SCAR scenario, partial-W exhibits the best performance. However, its effectiveness drops sharply when a covariate shift appears between the distributions p(x|y = 1) of the Pos and Unl datasets, as in this SAR scenario. In contrast, partial-GW maintains a comparable level of accuracy, as the discriminative information is preserved in the intra-domain distance matrices.
5.4 Partial-W and -GW in PU learning with different domains and/or feature spaces

To further validate the proposed method in a different context, we apply partial-W and partial-GW to a domain adaptation task. We consider the Caltech Office dataset, which consists of four domains: Caltech 256 (C) (Griffin et al., 2007), Amazon (A), Webcam (W) and DSLR (D) (Saenko et al., 2010). There exists a high inter-domain variability, as the objects may be subject to different illumination, orientation, etc. Following a standard protocol, each image of each domain is described by a set of SURF features (Saenko et al., 2010) consisting of a normalized 800-bin histogram, and by a set of DECAF features (Donahue et al., 2014), which are 4096-dimensional features extracted from a neural network. The Pos dataset consists of images from Caltech 256. The unlabeled samples are formed by the Amazon, Webcam and DSLR images together with the Caltech 256 images that are not included in Pos. We perform a PCA to project the data onto d = 10 dimensions for the SURF features and d = 40 for the DECAF ones. We first investigate the case where the objects are represented by the same features but belong to the same or different domains. Results are given in Table 1 (bottom). For both feature sets, we first notice that PU and PUSB have performances similar to partial-W when the domains are the same. As soon as the two domains differ, partial-GW exhibits the best performances, suggesting that it is able to capture some domain shift. We then consider a scenario where the source and target objects are described by different features (Table 2). In that case, only partial-GW is applicable, and its performances suggest that it is able to efficiently leverage the discriminative information conveyed in the intra-domain similarity matrices, especially when using SURF features to make predictions based on DECAF ones.

6 Conclusion and future work

In this paper, we build on partial-W and -GW distances to solve a PU learning problem. We propose a scheme relying on iterations of a Frank-Wolfe algorithm to compute a partial-GW solution, in which each iteration requires solving a partial-W problem that is derived from the solution of an extended Wasserstein problem. We show that those distances compete with and sometimes outperform the state-of-the-art PU learning methods, and that partial-GW allows remarkable improvements when the underlying spaces of the positive and unlabeled datasets are distinct or even unregistered. While this work considers only features (with partial-W) or intra-domain distances (with partial-GW), it can be extended to define a partial Fused Gromov-Wasserstein distance (Vayer et al., 2020) that combines both aspects. Another line of work will focus on lowering the computational complexity by using sliced partial-GW, building on existing works on sliced partial-W (Bonneel and Coeurjolly, 2019) and sliced GW (Vayer et al., 2019). Regarding the application viewpoint, we envision a potential use of the approach for subgraph matching (Kriege and Mutzel, 2012) or PU learning on graphs (Zhao et al., 2011), as GW has proven effective for comparing structured data such as graphs. In addition, we also target applications such as detecting out-of-distribution examples or open-set domain adaptation (Saito et al., 2018).
Finally, we plan to derive an extension of this work to PU learning in which the proportion of positives in the dataset will be estimated within a unified optimal transport formulation, building on results of a GW-based test of isomorphism between distributions (Brécheteau, 2019).

Broader impact

This work does not present any significant societal, environmental or ethical consequence.

Acknowledgments

This work is partially funded through the projects OATMIL ANR-17-CE23-0012, MULTISCALE ANR-18-CE23-0022-01 and RAIMO ANR-20-CHIA-0021-01.
1. What is the focus of the paper in terms of optimal transport metrics? 2. What are the main contributions of the paper, particularly in reducing partial-W to a full Wasserstein problem and the proposal of a Frank-Wolfe algorithm for partial-GW? 3. What are the strengths of the paper regarding its clarity and novelty? 4. Do you have any concerns or questions about the paper's content, such as the definition of the Gromov-Wasserstein distance?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper tackles the problem of computing partial optimal transport metrics. More precisely, the authors consider both the partial-Wassserstein (partial-W) and the partial-Gromov-Wasserstein (partial-GW) distances. Their first contribution is to show that the partial-W can be reduced to a (full) Wasserstein problem by adding dummy points. For the partial-GW approach, a Frank-Wolfe algorithm is proposed, where a partial-W distance is computed at each iteration. The second contribution is to show that the PU-learning problem can be solved by using partial-W and partial-GW distances. Experiments on various datasets illustrate the efficiency of this approach. Strengths - This work has pedagogical qualities: the different variants of the Wasserstein distance are recalled very clearly. - The "dummy point" trick to compute the partial-W is very smart and interesting. Weaknesses The definition of the Gromov-Wasserstein distance should be discussed a bit more, as it is less usual than the Wasserstein distance. Otherwise it seems a bit arbitrary ... For instance, is there a case where both W and GW coincide?
NIPS
Title Partial Optimal Tranport with applications on Positive-Unlabeled Learning Abstract Classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications such as color or shape matching, since the distributions may have arbitrary masses and/or only a fraction of the total mass has to be transported. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulation in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context and we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings. We then demonstrate that partial GromovWasserstein metrics are efficient in scenarii in which the samples from the positive and the unlabeled datasets come from different domains or have different features. 1 Introduction Optimal transport (OT) has been gaining in recent years an increasing attention in the machine learning community, mainly due to its capacity to exploit the geometric property of the samples. Generally speaking, OT is a mathematical tool to compare distributions by computing a transportation mass plan from a source to a target distribution. Distances based on OT are referred to as the Monge-Kantorovich or Wasserstein distances (Villani, 2009) and have been successfully employed in a wide variety of machine learning applications including clustering (Ho et al., 2017), computer vision (Bonneel et al., 2011; Solomon et al., 2015), generative adversarial networks (Arjovsky et al., 2017) or domain adaptation (Courty et al., 2017). A key limitation of the Wasserstein distance is that it relies on the assumption of aligned distributions, namely they must belong to the same ground space or at least a meaningful distance across domains can be computed. Nevertheless, source and target distributions can be collected under distinct environments, representing different times of collection, contexts or measurements (see Fig. 1, left and right). To get benefit from OT on such heterogeneous distribution settings, one can compute the Gromov-Wasserstein (GW) distance (Sturm, 2006; Mémoli, 2011) to overcome the lack of intrinsic correspondence between the distribution spaces. GW extends Wasserstein by computing a distance between metrics defined within each of the source and target spaces. From a computational point view, it involves a non convex quadratic problem (Peyré and Cuturi, 2019), hard to lift to large scale settings. A remedy to such a heavy computation burden lies in a prevalent approach referred to as 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. regularized OT (Cuturi, 2013), allowing one to add an entropic regularization penalty to the original problem. Peyré et al. (2016); Solomon et al. (2016) propose the entropic GW discrepancy, that can be solved by Sinkhorn iterations (Cuturi, 2013; Benamou et al., 2015). A major bottleneck of OT in its traditional formulation is that it requires the two input measures to have the same total probability mass and/or that all the mass has to be transported. 
This is too restrictive for many applications, such as in color matching or shape registration (Bonneel and Coeurjolly, 2019), since mass changes may occur due to a creation or an annihilation while computing an OT plan. To tackle this limitation, one may employ strategies such as partial or unbalanced transport (Guittet, 2002; Figalli, 2010; Caffarelli and McCann, 2010). Chizat et al. (2018) propose to relax the marginal constraints of unbalanced total masses using divergences such as Kullback-Leibler or Total Variation, allowing the use of generalized Sinkhorn iterations. Yang and Uhler (2019) generalize this approach to GANs and Lee et al. (2019) present an ADMM algorithm for the relaxed partial OT. Most of all these approaches concentrate on partial-Wasserstein. This paper deals with exact partial Wassertein (partial-W) and Gromov-Wasserstein (partial-GW). Some strategies for computing such partial-W require relaxations of the marginals constraints. We rather build our approach upon adding virtual or dummy points onto the marginals, which is a common practice in OT works. Among the latter, Caffarelli and McCann (2010) attach such points to allow choosing the maximum distance mass that can be transported. Pele and Werman (2009) threshold ground distances and send the extra mass to a dummy point to compute a robust EMD distance. Gramfort et al. (2015) consider the case of unnormalized measures and use a dummy point to “fill” the distributions, the extended problem then having both marginals summing to one. More recently, Sarlin et al. (2020) deal with the partial assignment problem by extending the initial problem and fill the ground distance matrix with a single learnable parameter. In this paper, the dummy points are used as a buffer when comparing distributions with different probability masses, allowing partial-W to boil down to solving an extended but standard Wasserstein problem. The main advantage of our approach is that it defines explicitly the mass to be transported and it leads to computing sparse transport plans and hence exact partial-W or -GW distances instead of regularized discrepancies obtained by running Sinkhorn algorithms. Regarding partial-GW, our approach relies on a Frank-Wolfe optimization algorithm (Frank and Wolfe, 1956) that builds on computations of partial-W. Tackling partial-OT problems that preserve sparsity is motivated by the fact that they are more suitable to some applications such as the Positive-Unlabeled (PU) learning (see Bekker and Davis (2020) for a review) we target in this paper. We shall notice that this is the first application of OT for solving PU learning tasks. In a nutshell, PU classification is a variant of the binary classification problem, in which we have only access to labeled samples from the positive (Pos) class in the training stage. The aim is to assign classes to the points of an unlabeled (Unl) set which mixes data from both positive and negative classes. Using OT allows identifying the positive points within Unl, even when Pos and Unl samples do not lie in the same space (see Fig. 1). The paper is organized as follows: we first recall some background on OT. In Section 3, we propose an algorithm to solve an exact partial-W problem, together with a Frank-Wolfe based algorithm to compute the partial-GW solution. After describing in more details the PU learning task and the use of partial-OT to solve it, we illustrate the advantage of partial-GW when the source and the target distributions are collected onto distinct environments. 
We finally give some perspectives. Notations ΣN is an histogram of N bins with { p ∈ RN+ , ∑ i pi = 1 } and δ is the Dirac function. Let 1n be the n-dimensional vector of ones. 〈·, ·〉F stands for the Frobenius dot product. |p| indicates the length of vector p. 2 Background on optimal transport Let X = {xi}ni=1 and Y = {yj}mj=1 be two point clouds representing the source and target samples, respectively. We assume two empirical distributions (p, q) ∈ Σn × Σm over X and Y , p = n∑ i=1 piδxi and q = m∑ j=1 qjδyj , where Σn and Σm are histograms of |p| = n and |q| = m bins respectively. The set of all admissible couplings Π(p, q) between histograms is given by Π(p, q) = {T ∈ R|p|×|q|+ |T1|q| = p,T>1|p| = q}, where T = (Tij)i,j is a coupling matrix with an entry Tij that describes the amount of mass pi found at xi flowing toward the mass qj of yj . OT addresses the problem of optimally transporting p toward q, given a cost Dij measured as a geometric distance between xi and yj . More precisely, when the ground cost C = Dp = ( Dpij ) i,j is a distance matrix, the p-Wassertein distance on Σn × Σm at the power of p is defined as: W pp (p, q) = min T∈Π(p,q) 〈C,T 〉F = min T∈Π(p,q) n∑ i=1 m∑ j=1 CijTij . In some applications, the two distributions are not registered (i.e. we can not compute a ground cost between xi and yj) or do not lie in the same underlying space. The Gromov-Wasserstein distance addresses this bottleneck by extending the Wasserstein distance to such settings, also allowing invariance to translation, rotation or scaling. Informally, it defines the distortion when transporting the whole set of points from one space to another. It relies on intra-domain distance matrices of source Cs = (Csik)i,k = (C s(xi,xk))i,k ∈ Rn×n+ and target Ct = (Ctjl)j,l = (Ct(yj ,yl))j,l ∈ R m×m + , and is defined as in Mémoli (2011): GW pp (p, q) = min T∈Π(p,q) n∑ i,k=1 m∑ j,l=1 ∣∣Csik − Ctjl∣∣p TijTkl. 3 Exact Partial Wasserstein and Gromov-Wasserstein distance We first detail how extending a balanced Wasserstein problem allows solving a partial-Wasserstein one. We then propose a Frank-Wolfe scheme that relies on computing partial-W to solve the partial-GW problem. 3.1 Partial Wasserstein distance The previous OT distances require the two distributions to have the same total probability mass ‖p‖1 = ‖q‖1 and that all the mass has to be transported. This may be a problematic assumption where some mass variation or partial mass displacement should be handled. The partial OT problem focuses on transporting only a fraction 0 ≤ s ≤ min(‖p‖1, ‖q‖1) of the mass as cheaply as possible. In that case, the set of admissible couplings becomes Πu(p, q) = {T ∈ R|p|×|q|+ |T1|q| ≤ p,T>1|p| ≤ q,1>|p|T1|q| = s}, and the partial-W distance reads as PW pp (p, q) = min T∈Πu(p,q) n∑ i=1 m∑ j=1 〈C,T 〉F . This problem has been studied by (Caffarelli and McCann, 2010; Figalli, 2010); numerical solutions have notably been provided by (Benamou et al., 2015; Chizat et al., 2018) in the entropic-regularized Wasserstein case. We propose here to directly solve the exact partial-W problem by adding dummy or virtual points xn+1 and ym+1 (with any features) and extending the cost matrix as follows: C̄ = [ C ξ1|q| ξ1>|p| 2ξ +A ] (1) in which A > 0 and ξ is a fixed positive or nul scalar. When the mass of these dummy points is set such that pn+1 = ‖q‖1 − s and qm+1 = ‖p‖1 − s, computing partial-W distance boils down to solving a unconstrained problem W pp (p̄, q̄) = minT̄∈Π(p̄,q̄)〈C̄, T̄ 〉F , where p̄ = [p, ‖q‖1 − s] and q̄ = [q, ‖p‖1 − s]. 
The intuitive derivation of this equivalent formulation is exposed in Appendix 1.1. Proposition 1 Assume that A > 0 and that ξ is a positive or nul scalar, one has W pp (p̄, q̄)− PW pp (p, q) = ξ(‖p‖1 + ‖q‖1 − 2s) and the optimum transport plan T ∗ of the partial Wasserstein problem is the optimum transport plan T̄ ∗ of W pp (p̄, q̄) deprived from its last row and column. The proof is postponed to Appendix 1.2. 3.2 Partial Gromov-Wasserstein We are now interested in the partial extension of Gromov-Wasserstein. In the case of a quadratic cost, p = 2, the partial-GW problem writes as PGW 22 (p, q) = min T∈Πu(p,q) JCs,Ct(T ) where JCs,Ct(T ) = 1 2 n∑ i,k=1 m∑ j,l=1 (Csik − Ctjl)2TijTkl. (2) The loss function JCs,Ct is non-convex and the couplings feasibility domain Πu(p, q) is convex and compact. One may expect to introduce virtual points in the GW formulation in order to solve the partial-GW problem. Nevertheless, this strategy is no longer valid as GW involves pairwise distances that do not allow the computations related to the dummy points to be isolated (see Appendix 1.3). In the following, we build upon a Frank-Wolfe optimization scheme (Frank and Wolfe, 1956) a.k.a. the conditional gradient method (Demyanov and Rubinov, 1970). It has received significant renewed interest in machine learning (Jaggi, 2013; Lacoste-Julien and Jaggi, 2015) and in OT community, since it serves as a basis to approximate penalized OT problems (Ferradans et al., 2013; Courty et al., 2017) or GW distances (Peyré et al., 2016; Vayer et al., 2020). Our proposed Frank-Wolfe iterations strongly rely on computing partial-W distances and as such, achieve a sparse transport plan (Ferradans et al., 2013). Let us first introduce some additional notations. For any tensorM = (Mijkl)i,j,k,l ∈ Rn×n×m×m, we denote byM◦ T the matrix in Rn×m such that its (i, j)-th element is defined as (M◦ T )i,j = n∑ k=1 m∑ l=1 MijklTkl for all i = 1, . . . , n, j = 1, . . . ,m. Introducing the 4-th order tensor M(Cs,Ct) = 12 ((C s ik − Ctjl) 2)i,j,k,l, we notice that JCs,Ct(T ), following Peyré et al. (2016), can be written as JCs,Ct(T ) = 〈M(Cs,Ct) ◦ T ,T 〉F . The Frank-Wolfe algorithm for partial-GW is shown in Algorithm 1. Like classical Frank-Wolfe procedure, it is summarized in three steps for each iteration k, as detailed below. A theoretical study of the convergence of the Frank-Wolfe algorithm for partial-GW is given in Appendix 2.2, together with a detailed derivation of the line search step (see Appendix 2.1). Step1 Compute a linear minimization oracle over the set Πu(p, q), i.e., T̃ (k) ← argmin T∈Πu(p,q) 〈∇JCs,Ct(T (k)),T 〉F , (3) To do so, we solve an extended Wasserstein problem with the ground metric∇JCs,Ct(T (k)) extended as in eq. (1): T̄ (k) ← argmin T∈Π(p̄,q̄) 〈∇̄JCs,Ct(T (k)),T 〉F , (4) and get T̃ (k) from T̄ (k) by removing its last row and column. Step2 Determine optimal step-size γ(k) subject to γ(k) ← argmin γ∈[0,1] {JCs,Ct((1− γ)T (k) + γT̃ (k))}. (5) It can be shown that γ(k) can take the following values, with E(k) = T̃ (k) − T (k): • if 〈M(Cs,Ct) ◦E(k),E(k)〉F < 0 we have γ(k) = 1 • if 〈M(Cs,Ct) ◦E(k),E(k)〉F > 0 we have γ(k) = min ( 1,−〈M(C s,Ct) ◦E(k),T (k)〉F 〈M(Cs,Ct) ◦E(k),E(k)〉F ) Step3 Update T (k+1) ← (1− γ(k))T (k) + γ(k)T̃ (k). Algorithm 1 Frank-Wolfe algorithm for partial-GW 1: Input: Source and target samples: (X ,p) and (Y, q), mass s, p = 2, initial guess T (0) 2: Compute cost matrices Cs and Ct, build p̄ = [p, ‖q‖1 − s] and q̄ = [q, ‖p‖1 − s] 3: for k = 0, 1, 2, 3, . . . 
do 4: G(k) ←M(Cs,Ct) ◦ T (k) // Compute the gradient ∇JCs,Ct(T (k)) 5: T̄ (k) ← argminT∈Π(p̄,q̄)〈Ḡ (k) ,T 〉F // Compute partial-W, with Ḡ as in eq. (1) 6: Get T̃ (k) from T̄ (k) // Remove last row and column 7: Compute γ(k) as in Eq. (5) // Line-search 8: T (k+1) ← (1− γ(k))T (k) + γ(k)T̃ (k) // Update 9: end for 10: Return: T (k) 4 Optimal transport for PU learning We hereafter investigate the application of partial optimal transport for learning from Positive and Unlabeled (PU) data. After introducing PU learning, we present how to formulate a PU learning problem into a partial-OT one. 4.1 Overview of PU learning Learning from PU data is a variant of classical binary classification problem, in which the training data consist of only positive points, and the test data is composed of unlabeled positives and negatives. Let Pos = {xi}nPi=1 be the positive samples drawn according to the conditional distribution p(x|y = 1) and Unl = {xUi } nU i=1 the unlabeled set sampled according to the marginal p(x) = πp(x|y = 1) + (1− π)p(x|y = −1). The true proportion of positives, called class prior, is π = p(y = 1) and p(x|y = −1) is the distribution of negative samples which are all unlabeled.The goal is to learn a binary classifier solely using Pos and Unl. A broad overview of existing PU learning approaches can be seen in (Bekker and Davis, 2020). Most PU learning methods commonly rely on the selected completely at random (SCAR) assumption (Elkan and Noto, 2008) which assumes that the labeled samples are drawn at random among the positive distribution, independently of their attributes. Nevertheless, this assumption is often violated in real-case scenarii and PU data are often subject to selection biases, e.g. when part of the data may be easier to collect. Recently, a less restrictive assumption has been studied: the selected at random (SAR) setting (Bekker and Davis, 2018) which assumes that the positives are labeled according to a subset of features of the samples. Kato et al. (2019) move a step further and consider that the sampling scheme of the positives is such that p(o = 1|x, y = 1) (o = 1 means observed label) preserves the ordering induced by the posterior distribution p(y = 1|x) over the samples. Other approaches, as in (Hsieh et al., 2019), consider a classical PU learning problem adjuncted with a small proportion of observed negative samples. Those negatives are selected with bias following the distribution p(x|y = −1). 4.2 PU learning formulation using partial optimal transport We propose in this paper to build on partial optimal transport to perform PU learning. In a nutshell, we aim at transporting a mass s = π from the unlabeled (source) dataset to the positive (target) one. As such, the transport matrix T should be such that the unlabeled positive points are mapped to the positive samples (as they have similar features or intra-domain distance matrices) while the negatives are discarded (in our context, they are not transported at all). Defining the optimal transport point-of-view of PU learning. More formally, the unlabeled points Unl represent the source distribution X and the positive points Pos are the target dataset Y . We set the total probability mass to be transported as the proportion of positives in the unlabeled set, that is s = π. 
We look for an optimal transport plan that belongs to the following set of couplings, assuming n = nU , m = nP , pi = 1n and qj = s m : ΠPU (p, q) = {T ∈ R|p|×|q|+ |T1|q| = {p, 0},T>1|p| ≤ q,1>|p|T1|q| = s}, (6) in which T1|q| = {p, 0} means that ∑m j=1 Tij = pi exactly or 0, ∀i to avoid matching part of the mass of an unlabeled negative with a positive. This set is not empty as long as s mod pi = 0. The problem that we aim at solving is the following: PUW pp (p, q) = min T∈ΠPU (p,q) n∑ i=1 m∑ j=1 CijTij . Though the positive samples Pos are assumed easy to label, their features may be corrupted with noise or they may be mislabeled. Let assume 0 ≤ α ≤ 1− s, the noise level. Solving the PU problem. To enforce the condition T1|q| = {p, 0}, we adopt a regularized point of view of the partial-OT problem as in Courty et al. (2017) and we solve the following problem: T̄ ∗ = argmin T̄∈Π(p̄,q̄) n+1∑ i=1 m+1∑ j=1 C̄ij T̄ij + ηΩ(T̄ ) (7) where pi = 1−αn , qj = s+α m , p̄, q̄, C̄ij are defined as in Section 3.1, η ≥ 0 is a regularization parameter and α is the percentage of Pos that we assume to be noisy (that is to say we do not want to map them to a point of Unl). We choose Ω(T̄ ) = ∑n i=1 ( ‖T̄i(:m)‖2 + ‖T̄i(m+1)‖2 ) where T̄i(:m) is a vector that contains the entries of the ith row of T̄ associated to the first m columns. This group-lasso regularization leads to a sparse transportation map and enforces each of the Unl samples xi to be mapped to only the Pos samples or to the dummy point ym+1. An illustration is provided in Appendix 5. When partial-GW is involved, we use this regularized-OT in the step (i) of the Frank-Wolfe algorithm. We can establish that solving problem (7) provides the solution to PU learning using partial-OT. Proposition 2 Assume that A > 0, ξ is a constant, there exists a large η > 0 such that: W ∗pp (p̄, q̄)− PUW pp (p, q) = ξ(1− s). where W ∗pp (p̄, q̄) = ∑n+1 i=1 ∑m+1 j=1 C̄ij T̄ ∗ ij with T̄ solution of eq. (7). The proof is postponed to Appendix 3. 5 Experiments 5.1 Experimental design We illustrate the behavior of partial-W and -GW on real datasets in a PU learning context. First, we consider a SCAR assumption, then a SAR one and finally a more general setting, in which the underlying distributions of the samples come from different domains, or do not belong to the same metric space. Algorithm 1 has been implemented and is avalaible on the Python Optimal Transport (POT) toolbox (Flamary and Courty, 2017). Following previous works (Kato et al., 2019; Hsieh et al., 2019), we assume that the class prior π is known throughout the experiments; otherwise, it can be estimated from {xi}nPi=1 and {xUi } nU i=1 using off-the-shelf methods, e.g. Zeiberg and Radivojac (2020); Plessis et al. (2017); Jain and Radivojac (2016). For both partial-W and partial-GW, we choose p = 2 and the cost matrices C are computed using Euclidean distance. We carry experiments on real-world datasets under the aforementioned scenarii. We rely on six datasets Mushrooms, Shuttle, Pageblocks, USPS, Connect-4, Spambase from the UCI repository1 (following Kato et al. (2019)’s setting) and colored MNIST (Arjovsky et al., 2019) to illustrate our method in SCAR and SAR settings respectively. We also consider the Caltech office dataset, which is a common application of domain adaptation (Courty et al., 2017) to explore the effectiveness of our method on heterogeneous distribution settings. 
Whenever they contain several classes, these datasets are converted into binary classification problems following Kato et al. (2019), and the positives are the samples that belong to the y = 1 class. For UCI and colored MNIST datasets, we randomly draw nP = 400 positive and nU = 800 unlabeled points among the remaining data. As the Caltech office datasets are smaller, we choose nP = 100 and nU = 100 in that context. To ease the presentation, we report here the results with class prior π set as the true proportion of positive class in the dataset and provide in Appendix 6.3 additional results when varying s. We ran the experiments 10 times and report the mean accuracy rate (standard deviations are shown in Appendix 6.1). We test 2 levels of noise in Pos: α = 0 or α = 0.025, fix ξ = 0, A = max(C) and choose a large η = 106. For the experiments, we consider unbiased PU learning method (denoted by PU in the sequel) (Du Plessis et al., 2014) and the most recent and effective method to address PU learning with a selection bias (called PUSB below) that tries to weaken the SCAR assumption (Kato et al., 2019). Whenever possible (that is to say when source and target samples share the same features), we compare our approaches P-W and P-GW with PU and PUSB; if not, we are not aware of any competitive PU learning method able to handle different features in Pos and Unl. The GW formulation is a non convex problem and the quality of the solution is highly dependent on the initialization. We explore several initializations of the transport matrix for P-GW and report the results that yield to the lowest partial OT-distance (see Appendix 4 for details). 5.2 Partial-W and partial-GW in a PU learning under a SCAR assumption Under SCAR, the Pos dataset and the positives in Unl are assumed independently and identically drawn according to the distribution p(x|y = 1) from a set of positive points. We experiment on the UCI datasets and Table 1 (top) summarizes our findings. Except for Connect-4 and spambase, partial-W has similar results or consistently outperforms PU and PUSB. Including some noise has little impact on the results, except for the connect-4 dataset. Partial-GW has competitive results, showing that relying on intra-domain matrices may allow discriminating the classes. It nevertheless 1https://archive.ics.uci.edu/ml/datasets.php under-performs relatively to partial-W, as the distance matrix C between Pos and Unl is more informative than only relying on intra-domain matrices. 5.3 Experiments under a SAR assumption The SAR assumption supposes that Pos is drawn according to some features of the samples. To implement such a setting, we inspire from (Arjovsky et al., 2019) and we construct a colored version of MNIST: each digit is colored, either in green or red, with a probability of 90% to be colored in red. The probability to label a digit y = 1 as positive depends on its color, with only green y = 1 composing the positive set. The Unl dataset is then mostly composed of red digits. Results under this setting are provided in Table 1 (middle). When we consider a SCAR scenario, partial-W exhibits the best performance. However, its effectiveness highly drops when a covariate shift appears in the distributions p(x|y = 1) of the Pos and Unl datasets as in this SAR scenario. On the opposite, partial-GW allows maintaining a comparable level of accuracy as the discriminative information are preserved in intra-domain distance matrices. 
5.4 Partial-W and -GW in PU learning with different domains and/or feature spaces To further validate the proposed method in a different context, we apply partial-W and partial-GW to a domain adaptation task. We consider the Caltech Office dataset, which consists of four domains: Caltech 256 (C) (Griffin et al., 2007), Amazon (A), Webcam (W) and DSLR (D) (Saenko et al., 2010). There exists a high inter-domain variability, as the objects may face different illumination, orientation, etc. Following a standard protocol, each image of each domain is described by a set of SURF features (Saenko et al., 2010) consisting of a normalized 800-bin histogram, and by a set of DECAF features (Donahue et al., 2014), which are 4096-dimensional features extracted from a neural network. The Pos dataset consists of images from Caltech 256. The unlabeled samples are formed by the Amazon, Webcam, and DSLR images together with the Caltech 256 images that are not included in Pos. We perform a PCA to project the data onto d = 10 dimensions for the SURF features and d = 40 for the DECAF ones. We first investigate the case where the objects are represented by the same features but belong to the same or different domains. Results are given in Table 1 (bottom). For both feature sets, we first notice that PU and PUSB have similar performance to partial-W when the domains are the same. As soon as the two domains differ, partial-GW exhibits the best performance, suggesting that it is able to capture some domain shift. We then consider a scenario where the source and target objects are described by different features (Table 2). In that case, only partial-GW is applicable, and its performance suggests that it is able to efficiently leverage the discriminative information conveyed in the intra-domain similarity matrices, especially when using SURF features to make predictions based on DECAF ones. 6 Conclusion and future work In this paper, we build on partial-W and -GW distances to solve a PU learning problem. We propose a scheme relying on iterations of a Frank-Wolfe algorithm to compute a partial-GW solution, in which each iteration requires solving a partial-W problem that is derived from the solution of an extended Wasserstein problem. We show that those distances compete with, and sometimes outperform, state-of-the-art PU learning methods, and that partial-GW allows remarkable improvements when the underlying spaces of the positive and unlabeled datasets are distinct or even unregistered. While this work considers only features (with partial-W) or intra-domain distances (with partial-GW), it can be extended to define a partial Fused Gromov-Wasserstein distance (Vayer et al., 2020) that combines both aspects. Another line of work will focus on lowering the computational complexity by using sliced partial-GW, building on existing works on sliced partial-W (Bonneel and Coeurjolly, 2019) and sliced GW (Vayer et al., 2019). From the application viewpoint, we envision a potential use of the approach for subgraph matching (Kriege and Mutzel, 2012) or PU learning on graphs (Zhao et al., 2011), as GW has been shown to be effective for comparing structured data such as graphs. In addition, we also target applications such as detecting out-of-distribution examples or open-set domain adaptation (Saito et al., 2018).
Finally, we plan to derive an extension of this work to PU learning in which the proportion of positives in the dataset will be estimated within a unified optimal transport formulation, building on results of the GW-based test of isomorphism between distributions (Brécheteau, 2019). Broader impact This work does not present any significant societal, environmental or ethical consequences. Acknowledgments This work is partially funded through the projects OATMIL ANR-17-CE23-0012, MULTISCALE ANR-18-CE23-0022-01 and RAIMO ANR-20-CHIA-0021-01.
1. What is the main contribution of the paper regarding optimal transport and its application to positive-unlabeled learning? 2. What are the strengths and weaknesses of the proposed algorithms for partial optimal transport and their theoretical implications? 3. How does the reviewer assess the novelty and technical soundness of the paper's content? 4. Are there any concerns regarding the practicality and limitations of the approach, particularly in handling biased positives and class prior estimation? 5. How does the reviewer evaluate the comparisons with existing PU learning methods and the sensitivity to class prior? 6. Are there any suggestions for improving the paper's readability, notation, and intuition?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper concerns optimal transport (OT) plans minimizing Wasserstein (W) and Gromov-Wasserstein (GW) distances between the empirical distributions of a pair of samples, and their application to positive-unlabeled (PU) learning. A transport plan between two samples is represented as a matrix with linear equality constraints. The OT plan is formulated as a solution to an optimization problem over a space of transport matrices. The formulation for the Wasserstein distance is based on a single cost matrix representing the pairwise distances between the points of the two samples, whereas the Gromov-Wasserstein distance is based on two cost matrices representing within-sample pairwise distances. The paper's main contribution concerns partial optimal transport plans (for both W and GW distances) that constrain the mass transported between the samples. The formulation relies on the same optimization criteria as full optimal transport, although with different constraints on the transport matrix. In the case of the Wasserstein distance, the formulation is relaxed back to the original constraints by adding two dummy points to the two samples and modifying the cost matrix. In the case of the Gromov-Wasserstein (GW) distance, a Frank-Wolfe based algorithm is derived to solve the optimization problem. PU learning is set up as a partial OT problem with an additional regularization term. The approach is tested empirically with comparisons to existing PU learning methods. Strengths The paper contains novel algorithms for learning partial optimal transport and technically sound theory to support the algorithms. The application to PU learning is intuitive, and it is the first approach to address domain adaptation with different features in the PU context. Weaknesses Overall: When applied to PU learning, the theoretical implications are not well understood, and an important state-of-the-art method is not compared with. Assuming the class prior to be known is a significant limitation. Furthermore, the time complexity of the algorithm might make it impractical. The paper is difficult to read, and significant effort should be spent to simplify the notation, make the paper self-contained, and, wherever possible, state the meaning and implications of formulas and constraints. Details: 1) The partial OT formulation enforces that the mass transported back and forth between the two samples is equal. The way the PU learning problem is set up, with p_i = 1/n and q_i = 1/m, as partial OT seems to be suboptimal. The mass \pi (positives) from the unlabeled sample gets transported to only mass \pi of the labeled sample. However, ideally it should get transported to the entirety of the labeled sample. If p_i is defined as 1/(n\pi), then the positives in the unlabeled sample would correspond to a mass of 1 and could be transported to the entirety of the labeled sample. 2) An important PU method is not included in the comparisons [3]. 3) The algorithms for estimation of the class prior are only suitable for the unbiased PU setting. Assuming that the class prior can be reliably estimated in the biased case is not realistic. Some experiments on the sensitivity to the class prior should be conducted. 4) Solving PU learning with bias is an intractable problem in general. That's why most algorithms for biased PU learning first make an assumption on the bias and then derive a method specific to handling that bias. What kind of bias does the proposed method handle?
What prevents the method from learning the biased positives from the unlabeled data? If the colored MNIST experiment were designed with the unlabeled data containing both red and green colored digits, and the prior were misspecified or set closer to the proportion of the green positives in the unlabeled data, would the partial-GW method assign a higher score to the red positives compared to the negatives? Maybe simulating a one-dimensional biased PU dataset could be used to illustrate the debiasing effects of the proposed method. 5) The text following equation 6, including the introduction of \alpha and the regularization, is quite challenging to follow. Please attempt to give more intuition and make the paper self-contained. The symbol g and the notation T(i, I_g) are not defined. 6) Would the partial-GW method still work if the negatives were obtained by adding a constant to the positives (think of one-dimensional data)? The pairwise distances between the negatives would be similar to the pairwise distances between the positives. Is it conceivable that the labeled positives are equally likely to be transported to the negatives or the positives in the unlabeled data? [3] Kiryo, R., Niu, G., Du Plessis, M. C., and Sugiyama, M. Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems, 2017 (pp. 1675-1685). ========= After Rebuttal ========= I still find the section on the regularization unclear. The clarification provided by the authors relates to a typo, not to the regularization. If accepted, the authors must spend significant effort making this clear.
NIPS
Title Partial Optimal Transport with Applications on Positive-Unlabeled Learning Abstract The classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications, such as color or shape matching, since the distributions may have arbitrary masses and/or only a fraction of the total mass has to be transported. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulation in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context, and we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings. We then demonstrate that partial Gromov-Wasserstein metrics are efficient in scenarios in which the samples from the positive and the unlabeled datasets come from different domains or have different features. 1 Introduction Optimal transport (OT) has gained increasing attention in the machine learning community in recent years, mainly due to its capacity to exploit the geometric properties of the samples. Generally speaking, OT is a mathematical tool to compare distributions by computing a transportation mass plan from a source to a target distribution. Distances based on OT are referred to as Monge-Kantorovich or Wasserstein distances (Villani, 2009) and have been successfully employed in a wide variety of machine learning applications, including clustering (Ho et al., 2017), computer vision (Bonneel et al., 2011; Solomon et al., 2015), generative adversarial networks (Arjovsky et al., 2017), and domain adaptation (Courty et al., 2017). A key limitation of the Wasserstein distance is that it relies on the assumption of aligned distributions, namely that they belong to the same ground space or that at least a meaningful distance across domains can be computed. Nevertheless, source and target distributions can be collected under distinct environments, representing different times of collection, contexts, or measurements (see Fig. 1, left and right). To benefit from OT in such heterogeneous distribution settings, one can compute the Gromov-Wasserstein (GW) distance (Sturm, 2006; Mémoli, 2011) to overcome the lack of intrinsic correspondence between the distribution spaces. GW extends Wasserstein by computing a distance between metrics defined within each of the source and target spaces. From a computational point of view, it involves a non-convex quadratic problem (Peyré and Cuturi, 2019) that is hard to lift to large-scale settings. A remedy to this heavy computational burden lies in a prevalent approach referred to as regularized OT (Cuturi, 2013), which adds an entropic regularization penalty to the original problem. Peyré et al. (2016); Solomon et al. (2016) propose the entropic GW discrepancy, which can be solved by Sinkhorn iterations (Cuturi, 2013; Benamou et al., 2015). A major bottleneck of OT in its traditional formulation is that it requires the two input measures to have the same total probability mass and/or that all the mass has to be transported.
This is too restrictive for many applications, such as color matching or shape registration (Bonneel and Coeurjolly, 2019), since mass changes, due to creation or annihilation, may occur while computing an OT plan. To tackle this limitation, one may employ strategies such as partial or unbalanced transport (Guittet, 2002; Figalli, 2010; Caffarelli and McCann, 2010). Chizat et al. (2018) propose to relax the marginal constraints of unbalanced total masses using divergences such as Kullback-Leibler or Total Variation, allowing the use of generalized Sinkhorn iterations. Yang and Uhler (2019) generalize this approach to GANs, and Lee et al. (2019) present an ADMM algorithm for the relaxed partial OT. Most of these approaches concentrate on partial-Wasserstein. This paper deals with exact partial Wasserstein (partial-W) and Gromov-Wasserstein (partial-GW) problems. Some strategies for computing partial-W require relaxations of the marginal constraints. We instead build our approach upon adding virtual or dummy points onto the marginals, which is a common practice in OT works. Among the latter, Caffarelli and McCann (2010) attach such points to allow choosing the maximum distance over which mass can be transported. Pele and Werman (2009) threshold ground distances and send the extra mass to a dummy point to compute a robust EMD distance. Gramfort et al. (2015) consider the case of unnormalized measures and use a dummy point to "fill" the distributions, the extended problem then having both marginals summing to one. More recently, Sarlin et al. (2020) deal with the partial assignment problem by extending the initial problem and filling the ground distance matrix with a single learnable parameter. In this paper, the dummy points are used as a buffer when comparing distributions with different probability masses, allowing partial-W to boil down to solving an extended but standard Wasserstein problem. The main advantage of our approach is that it defines explicitly the mass to be transported and leads to sparse transport plans, and hence to exact partial-W or -GW distances instead of the regularized discrepancies obtained by running Sinkhorn algorithms. Regarding partial-GW, our approach relies on a Frank-Wolfe optimization algorithm (Frank and Wolfe, 1956) that builds on computations of partial-W. Tackling partial-OT problems that preserve sparsity is motivated by the fact that they are more suitable for applications such as the positive-unlabeled (PU) learning task (see Bekker and Davis (2020) for a review) that we target in this paper. To our knowledge, this is the first application of OT to solving PU learning tasks. In a nutshell, PU classification is a variant of the binary classification problem in which we have access only to labeled samples from the positive (Pos) class in the training stage. The aim is to assign classes to the points of an unlabeled (Unl) set which mixes data from both positive and negative classes. Using OT allows identifying the positive points within Unl, even when the Pos and Unl samples do not lie in the same space (see Fig. 1). The paper is organized as follows: we first recall some background on OT. In Section 3, we propose an algorithm to solve an exact partial-W problem, together with a Frank-Wolfe based algorithm to compute the partial-GW solution. After describing in more detail the PU learning task and the use of partial-OT to solve it, we illustrate the advantage of partial-GW when the source and the target distributions are collected in distinct environments.
We finally give some perspectives.
Notations. $\Sigma_N$ is the set of histograms with $N$ bins, $\{p \in \mathbb{R}_+^N, \sum_i p_i = 1\}$, and $\delta$ is the Dirac function. $\mathbf{1}_n$ is the $n$-dimensional vector of ones, $\langle \cdot, \cdot \rangle_F$ stands for the Frobenius dot product, and $|p|$ denotes the length of vector $p$.
2 Background on optimal transport
Let $X = \{x_i\}_{i=1}^n$ and $Y = \{y_j\}_{j=1}^m$ be two point clouds representing the source and target samples, respectively. We assume two empirical distributions $(p, q) \in \Sigma_n \times \Sigma_m$ over $X$ and $Y$,
$$p = \sum_{i=1}^n p_i \delta_{x_i} \quad \text{and} \quad q = \sum_{j=1}^m q_j \delta_{y_j},$$
where $\Sigma_n$ and $\Sigma_m$ are histograms of $|p| = n$ and $|q| = m$ bins, respectively. The set of all admissible couplings $\Pi(p,q)$ between histograms is given by
$$\Pi(p,q) = \{T \in \mathbb{R}_+^{|p| \times |q|} \mid T \mathbf{1}_{|q|} = p,\ T^\top \mathbf{1}_{|p|} = q\},$$
where $T = (T_{ij})_{i,j}$ is a coupling matrix whose entry $T_{ij}$ describes the amount of mass $p_i$ found at $x_i$ flowing toward the mass $q_j$ of $y_j$. OT addresses the problem of optimally transporting $p$ toward $q$, given a cost $D_{ij}$ measured as a geometric distance between $x_i$ and $y_j$. More precisely, when the ground cost $C = D^p = (D_{ij}^p)_{i,j}$ is a distance matrix, the $p$-Wasserstein distance on $\Sigma_n \times \Sigma_m$, raised to the power $p$, is defined as
$$W_p^p(p,q) = \min_{T \in \Pi(p,q)} \langle C, T \rangle_F = \min_{T \in \Pi(p,q)} \sum_{i=1}^n \sum_{j=1}^m C_{ij} T_{ij}.$$
In some applications, the two distributions are not registered (i.e., we cannot compute a ground cost between $x_i$ and $y_j$) or do not lie in the same underlying space. The Gromov-Wasserstein distance addresses this bottleneck by extending the Wasserstein distance to such settings, also allowing invariance to translation, rotation, or scaling. Informally, it measures the distortion incurred when transporting the whole set of points from one space to another. It relies on intra-domain distance matrices of the source $C^s = (C^s_{ik})_{i,k} = (C^s(x_i, x_k))_{i,k} \in \mathbb{R}_+^{n \times n}$ and the target $C^t = (C^t_{jl})_{j,l} = (C^t(y_j, y_l))_{j,l} \in \mathbb{R}_+^{m \times m}$, and is defined as in Mémoli (2011):
$$GW_p^p(p,q) = \min_{T \in \Pi(p,q)} \sum_{i,k=1}^n \sum_{j,l=1}^m \left|C^s_{ik} - C^t_{jl}\right|^p T_{ij} T_{kl}.$$
3 Exact partial Wasserstein and Gromov-Wasserstein distances
We first detail how extending a balanced Wasserstein problem allows solving a partial-Wasserstein one. We then propose a Frank-Wolfe scheme that relies on computing partial-W to solve the partial-GW problem.
3.1 Partial Wasserstein distance
The previous OT distances require the two distributions to have the same total probability mass, $\|p\|_1 = \|q\|_1$, and that all the mass be transported. This is a problematic assumption in settings where some mass variation or partial mass displacement should be handled. The partial OT problem focuses on transporting only a fraction $0 \le s \le \min(\|p\|_1, \|q\|_1)$ of the mass as cheaply as possible. In that case, the set of admissible couplings becomes
$$\Pi^u(p,q) = \{T \in \mathbb{R}_+^{|p| \times |q|} \mid T \mathbf{1}_{|q|} \le p,\ T^\top \mathbf{1}_{|p|} \le q,\ \mathbf{1}_{|p|}^\top T \mathbf{1}_{|q|} = s\},$$
and the partial-W distance reads
$$PW_p^p(p,q) = \min_{T \in \Pi^u(p,q)} \langle C, T \rangle_F.$$
This problem has been studied by Caffarelli and McCann (2010); Figalli (2010); numerical solutions have notably been provided by Benamou et al. (2015); Chizat et al. (2018) in the entropic-regularized Wasserstein case. We propose here to directly solve the exact partial-W problem by adding dummy or virtual points $x_{n+1}$ and $y_{m+1}$ (with arbitrary features) and extending the cost matrix as follows:
$$\bar C = \begin{bmatrix} C & \xi \mathbf{1}_{|q|} \\ \xi \mathbf{1}_{|p|}^\top & 2\xi + A \end{bmatrix} \quad (1)$$
in which $A > 0$ and $\xi$ is a fixed nonnegative scalar. When the mass of these dummy points is set such that $p_{n+1} = \|q\|_1 - s$ and $q_{m+1} = \|p\|_1 - s$, computing the partial-W distance boils down to solving a standard balanced problem $W_p^p(\bar p, \bar q) = \min_{\bar T \in \Pi(\bar p, \bar q)} \langle \bar C, \bar T \rangle_F$, where $\bar p = [p, \|q\|_1 - s]$ and $\bar q = [q, \|p\|_1 - s]$.
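As a sanity check on this construction (formalized in Proposition 1 below), one can compare the extended balanced problem against a direct partial solver. The sketch below assumes `ot.partial.partial_wasserstein` is available in the POT toolbox; if it is not, the comparison can simply be skipped.

```python
# Hedged numerical check: the extended balanced problem should match the
# partial-W value up to xi * (||p||_1 + ||q||_1 - 2s) (Proposition 1).
import numpy as np
import ot

rng = np.random.RandomState(0)
X, Y = rng.randn(20, 2), rng.randn(30, 2)
p, q = np.full(20, 1 / 20), np.full(30, 1 / 30)
s, xi = 0.6, 0.0
C = ot.dist(X, Y)                      # squared Euclidean ground cost
A = C.max()

C_bar = np.block([[C, xi * np.ones((20, 1))],
                  [xi * np.ones((1, 30)), np.array([[2 * xi + A]])]])
p_bar = np.append(p, q.sum() - s)      # dummy masses as in the text
q_bar = np.append(q, p.sum() - s)
T_bar = ot.emd(p_bar, q_bar, C_bar)
pw_dummy = np.sum(C_bar * T_bar) - xi * (p.sum() + q.sum() - 2 * s)

# Reference value from POT's partial solver (API assumed available).
T_ref = ot.partial.partial_wasserstein(p, q, C, m=s)
pw_ref = np.sum(C * T_ref)             # should be close to pw_dummy
```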
The intuitive derivation of this equivalent formulation is exposed in Appendix 1.1.
Proposition 1 Assume that $A > 0$ and that $\xi$ is a nonnegative scalar. Then
$$W_p^p(\bar p, \bar q) - PW_p^p(p, q) = \xi(\|p\|_1 + \|q\|_1 - 2s)$$
and the optimal transport plan $T^*$ of the partial Wasserstein problem is the optimal transport plan $\bar T^*$ of $W_p^p(\bar p, \bar q)$ deprived of its last row and column. The proof is postponed to Appendix 1.2.
3.2 Partial Gromov-Wasserstein
We are now interested in the partial extension of Gromov-Wasserstein. In the case of a quadratic cost, $p = 2$, the partial-GW problem writes as
$$PGW_2^2(p,q) = \min_{T \in \Pi^u(p,q)} J_{C^s, C^t}(T) \quad \text{where} \quad J_{C^s,C^t}(T) = \frac{1}{2} \sum_{i,k=1}^n \sum_{j,l=1}^m (C^s_{ik} - C^t_{jl})^2\, T_{ij} T_{kl}. \quad (2)$$
The loss function $J_{C^s,C^t}$ is non-convex, and the feasibility domain $\Pi^u(p,q)$ of the couplings is convex and compact. One might expect to introduce virtual points in the GW formulation in order to solve the partial-GW problem. Nevertheless, this strategy is no longer valid, as GW involves pairwise distances that do not allow the computations related to the dummy points to be isolated (see Appendix 1.3). In the following, we build upon a Frank-Wolfe optimization scheme (Frank and Wolfe, 1956), a.k.a. the conditional gradient method (Demyanov and Rubinov, 1970). It has received significant renewed interest in machine learning (Jaggi, 2013; Lacoste-Julien and Jaggi, 2015) and in the OT community, since it serves as a basis to approximate penalized OT problems (Ferradans et al., 2013; Courty et al., 2017) or GW distances (Peyré et al., 2016; Vayer et al., 2020). Our proposed Frank-Wolfe iterations strongly rely on computing partial-W distances and, as such, achieve a sparse transport plan (Ferradans et al., 2013).
Let us first introduce some additional notations. For any tensor $\mathcal{M} = (M_{ijkl})_{i,j,k,l} \in \mathbb{R}^{n \times m \times n \times m}$, we denote by $\mathcal{M} \circ T$ the matrix in $\mathbb{R}^{n \times m}$ whose $(i,j)$-th element is defined as
$$(\mathcal{M} \circ T)_{ij} = \sum_{k=1}^n \sum_{l=1}^m M_{ijkl} T_{kl} \quad \text{for all } i = 1, \dots, n,\ j = 1, \dots, m.$$
Introducing the 4th-order tensor $\mathcal{M}(C^s, C^t) = \frac{1}{2}\big((C^s_{ik} - C^t_{jl})^2\big)_{i,j,k,l}$, we notice that, following Peyré et al. (2016), $J_{C^s,C^t}(T)$ can be written as $J_{C^s,C^t}(T) = \langle \mathcal{M}(C^s, C^t) \circ T, T \rangle_F$.
The Frank-Wolfe algorithm for partial-GW is shown in Algorithm 1. Like the classical Frank-Wolfe procedure, each iteration $k$ consists of three steps, detailed below. A theoretical study of the convergence of the Frank-Wolfe algorithm for partial-GW is given in Appendix 2.2, together with a detailed derivation of the line-search step (see Appendix 2.1).
Step 1. Compute a linear minimization oracle over the set $\Pi^u(p,q)$, i.e.,
$$\tilde T^{(k)} \leftarrow \operatorname*{argmin}_{T \in \Pi^u(p,q)} \langle \nabla J_{C^s,C^t}(T^{(k)}), T \rangle_F. \quad (3)$$
To do so, we solve an extended Wasserstein problem with the ground metric $\nabla J_{C^s,C^t}(T^{(k)})$ extended as in eq. (1):
$$\bar T^{(k)} \leftarrow \operatorname*{argmin}_{T \in \Pi(\bar p, \bar q)} \langle \bar\nabla J_{C^s,C^t}(T^{(k)}), T \rangle_F, \quad (4)$$
and get $\tilde T^{(k)}$ from $\bar T^{(k)}$ by removing its last row and column.
Step 2. Determine the optimal step size $\gamma^{(k)}$:
$$\gamma^{(k)} \leftarrow \operatorname*{argmin}_{\gamma \in [0,1]} J_{C^s,C^t}\big((1-\gamma) T^{(k)} + \gamma \tilde T^{(k)}\big). \quad (5)$$
It can be shown that $\gamma^{(k)}$ takes the following values, with $E^{(k)} = \tilde T^{(k)} - T^{(k)}$:
- if $\langle \mathcal{M}(C^s,C^t) \circ E^{(k)}, E^{(k)} \rangle_F < 0$, then $\gamma^{(k)} = 1$;
- if $\langle \mathcal{M}(C^s,C^t) \circ E^{(k)}, E^{(k)} \rangle_F > 0$, then $\gamma^{(k)} = \min\left(1, -\frac{\langle \mathcal{M}(C^s,C^t) \circ E^{(k)}, T^{(k)} \rangle_F}{\langle \mathcal{M}(C^s,C^t) \circ E^{(k)}, E^{(k)} \rangle_F}\right)$.
Step 3. Update $T^{(k+1)} \leftarrow (1 - \gamma^{(k)}) T^{(k)} + \gamma^{(k)} \tilde T^{(k)}$.
Algorithm 1 Frank-Wolfe algorithm for partial-GW
1: Input: source and target samples $(X, p)$ and $(Y, q)$, mass $s$, $p = 2$, initial guess $T^{(0)}$
2: Compute cost matrices $C^s$ and $C^t$; build $\bar p = [p, \|q\|_1 - s]$ and $\bar q = [q, \|p\|_1 - s]$
3: for $k = 0, 1, 2, 3, \dots$ do
4:   $G^{(k)} \leftarrow \mathcal{M}(C^s,C^t) \circ T^{(k)}$  // compute the gradient $\nabla J_{C^s,C^t}(T^{(k)})$
5:   $\bar T^{(k)} \leftarrow \operatorname{argmin}_{T \in \Pi(\bar p, \bar q)} \langle \bar G^{(k)}, T \rangle_F$  // compute partial-W, with $\bar G$ extended as in eq. (1)
6:   Get $\tilde T^{(k)}$ from $\bar T^{(k)}$  // remove last row and column
7:   Compute $\gamma^{(k)}$ as in eq. (5)  // line search
8:   $T^{(k+1)} \leftarrow (1 - \gamma^{(k)}) T^{(k)} + \gamma^{(k)} \tilde T^{(k)}$  // update
9: end for
10: Return: $T^{(k)}$ (a numerical sketch of the line-search step is provided at the end of this subsection)
4 Optimal transport for PU learning
We hereafter investigate the application of partial optimal transport to learning from positive and unlabeled (PU) data. After introducing PU learning, we present how to formulate a PU learning problem as a partial-OT one.
4.1 Overview of PU learning
Learning from PU data is a variant of the classical binary classification problem, in which the training data consist only of positive points, and the test data are composed of unlabeled positives and negatives. Let Pos $= \{x_i\}_{i=1}^{n_P}$ be the positive samples drawn according to the conditional distribution $p(x|y = 1)$ and Unl $= \{x_i^U\}_{i=1}^{n_U}$ the unlabeled set sampled according to the marginal $p(x) = \pi\, p(x|y = 1) + (1 - \pi)\, p(x|y = -1)$. The true proportion of positives, called the class prior, is $\pi = p(y = 1)$, and $p(x|y = -1)$ is the distribution of the negative samples, which are all unlabeled. The goal is to learn a binary classifier solely using Pos and Unl. A broad overview of existing PU learning approaches is given in (Bekker and Davis, 2020). Most PU learning methods commonly rely on the selected completely at random (SCAR) assumption (Elkan and Noto, 2008), which assumes that the labeled samples are drawn at random from the positive distribution, independently of their attributes. Nevertheless, this assumption is often violated in real-world scenarios, and PU data are often subject to selection biases, e.g., when part of the data is easier to collect. Recently, a less restrictive assumption has been studied: the selected at random (SAR) setting (Bekker and Davis, 2018), which assumes that the positives are labeled according to a subset of the samples' features. Kato et al. (2019) move a step further and consider that the sampling scheme of the positives is such that $p(o = 1|x, y = 1)$ (where $o = 1$ means the label is observed) preserves the ordering induced by the posterior distribution $p(y = 1|x)$ over the samples. Other approaches, as in (Hsieh et al., 2019), consider a classical PU learning problem augmented with a small proportion of observed negative samples; those negatives are selected with bias, following the distribution $p(x|y = -1)$.
4.2 PU learning formulation using partial optimal transport
We propose in this paper to build on partial optimal transport to perform PU learning. In a nutshell, we aim at transporting a mass $s = \pi$ from the unlabeled (source) dataset to the positive (target) one. As such, the transport matrix $T$ should be such that the unlabeled positive points are mapped to the positive samples (as they have similar features or intra-domain distance matrices), while the negatives are discarded (in our context, they are not transported at all).
Defining the optimal transport point of view of PU learning. More formally, the unlabeled points Unl represent the source distribution $X$ and the positive points Pos form the target dataset $Y$. We set the total probability mass to be transported as the proportion of positives in the unlabeled set, that is, $s = \pi$.
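As noted after Algorithm 1, here is a minimal numerical sketch of the closed-form line search of Step 2 (eq. (5)); `M_circ` expands the product $\mathcal{M}(C^s, C^t) \circ T$ without materializing the fourth-order tensor, and both helper names are ours.

```python
# Hedged sketch of the Frank-Wolfe line search for partial-GW (Step 2).
import numpy as np

def M_circ(Cs, Ct, T):
    # (M o T)_ij = 0.5 * sum_kl (Cs_ik - Ct_jl)^2 T_kl, expanded term by term
    r, c = T.sum(axis=1), T.sum(axis=0)      # marginals of the current plan
    term1 = (Cs ** 2) @ r                    # depends only on row index i
    term2 = (Ct ** 2) @ c                    # depends only on column index j
    return 0.5 * (term1[:, None] + term2[None, :]) - Cs @ T @ Ct.T

def line_search_gamma(Cs, Ct, T, T_tilde):
    E = T_tilde - T
    ME = M_circ(Cs, Ct, E)
    a = np.sum(ME * E)                       # <M o E, E>_F
    if a < 0:
        return 1.0
    if a > 0:
        b = np.sum(ME * T)                   # <M o E, T>_F
        return float(min(1.0, max(0.0, -b / a)))
    return 0.0
```

The closed form follows because the objective restricted to the segment is the quadratic $J(T) + 2\gamma\langle \mathcal{M}\circ E, T\rangle_F + \gamma^2\langle \mathcal{M}\circ E, E\rangle_F$.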
We look for an optimal transport plan that belongs to the following set of couplings, assuming $n = n_U$, $m = n_P$, $p_i = \frac{1}{n}$ and $q_j = \frac{s}{m}$:
$$\Pi^{PU}(p,q) = \{T \in \mathbb{R}_+^{|p|\times|q|} \mid T\mathbf{1}_{|q|} = \{p, 0\},\ T^\top \mathbf{1}_{|p|} \le q,\ \mathbf{1}_{|p|}^\top T \mathbf{1}_{|q|} = s\}, \quad (6)$$
in which $T\mathbf{1}_{|q|} = \{p, 0\}$ means that $\sum_{j=1}^m T_{ij} = p_i$ exactly or $0$ for all $i$, so that part of the mass of an unlabeled negative is never matched with a positive. This set is not empty as long as $s \bmod p_i = 0$. The problem that we aim to solve is the following:
$$PUW_p^p(p,q) = \min_{T \in \Pi^{PU}(p,q)} \sum_{i=1}^n \sum_{j=1}^m C_{ij} T_{ij}.$$
Though the positive samples Pos are assumed easy to label, their features may be corrupted with noise or they may be mislabeled. Let us denote by $\alpha$, with $0 \le \alpha \le 1 - s$, the noise level.
Solving the PU problem. To enforce the condition $T\mathbf{1}_{|q|} = \{p, 0\}$, we adopt a regularized point of view of the partial-OT problem, as in Courty et al. (2017), and we solve the following problem:
$$\bar T^* = \operatorname*{argmin}_{\bar T \in \Pi(\bar p, \bar q)} \sum_{i=1}^{n+1} \sum_{j=1}^{m+1} \bar C_{ij} \bar T_{ij} + \eta\, \Omega(\bar T) \quad (7)$$
where $p_i = \frac{1-\alpha}{n}$, $q_j = \frac{s+\alpha}{m}$, and $\bar p$, $\bar q$, $\bar C_{ij}$ are defined as in Section 3.1; $\eta \ge 0$ is a regularization parameter and $\alpha$ is the fraction of Pos that we assume to be noisy (that is, positives we do not want to map to a point of Unl). We choose $\Omega(\bar T) = \sum_{i=1}^n \left( \|\bar T_{i(:m)}\|_2 + \|\bar T_{i(m+1)}\|_2 \right)$, where $\bar T_{i(:m)}$ is the vector containing the entries of the $i$th row of $\bar T$ associated with the first $m$ columns. This group-lasso regularization leads to a sparse transport map and enforces each Unl sample $x_i$ to be mapped either only to Pos samples or to the dummy point $y_{m+1}$. An illustration is provided in Appendix 5. When partial-GW is involved, we use this regularized OT in step (i) of the Frank-Wolfe algorithm. We can establish that solving problem (7) provides the solution to PU learning using partial-OT.
Proposition 2 Assume that $A > 0$ and that $\xi$ is a constant; then there exists a large $\eta > 0$ such that
$$W_p^{p\,*}(\bar p, \bar q) - PUW_p^p(p, q) = \xi(1 - s),$$
where $W_p^{p\,*}(\bar p, \bar q) = \sum_{i=1}^{n+1} \sum_{j=1}^{m+1} \bar C_{ij} \bar T^*_{ij}$ with $\bar T^*$ the solution of eq. (7). The proof is postponed to Appendix 3.
5 Experiments
5.1 Experimental design
We illustrate the behavior of partial-W and -GW on real datasets in a PU learning context. First, we consider a SCAR assumption, then a SAR one, and finally a more general setting in which the underlying distributions of the samples come from different domains or do not belong to the same metric space. Algorithm 1 has been implemented and is available in the Python Optimal Transport (POT) toolbox (Flamary and Courty, 2017). Following previous works (Kato et al., 2019; Hsieh et al., 2019), we assume that the class prior $\pi$ is known throughout the experiments; otherwise, it can be estimated from $\{x_i\}_{i=1}^{n_P}$ and $\{x_i^U\}_{i=1}^{n_U}$ using off-the-shelf methods, e.g. Zeiberg and Radivojac (2020); Plessis et al. (2017); Jain and Radivojac (2016). For both partial-W and partial-GW, we choose $p = 2$ and the cost matrices $C$ are computed using the Euclidean distance. We carry out experiments on real-world datasets under the aforementioned scenarios. We rely on six datasets (Mushrooms, Shuttle, Pageblocks, USPS, Connect-4, Spambase) from the UCI repository¹ (following Kato et al. (2019)'s setting) and on colored MNIST (Arjovsky et al., 2019) to illustrate our method in the SCAR and SAR settings, respectively. We also consider the Caltech office dataset, a common benchmark for domain adaptation (Courty et al., 2017), to explore the effectiveness of our method in heterogeneous distribution settings.
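As a complement to the formulation above, here is a short sketch of how the group-lasso regularizer $\Omega(\bar T)$ of Eq. (7) could be evaluated on a candidate plan; `omega` is a hypothetical helper, and the indexing convention follows the text (rows are Unl points, column m is the dummy point).

```python
# Hedged sketch: group-lasso penalty over each Unl row, split into the group of
# Pos columns and the dummy column, as in Omega(T_bar) of Eq. (7).
import numpy as np

def omega(T_bar, m):
    rows = T_bar[:-1, :]                     # the n Unl rows (drop the dummy row)
    pos_group = np.linalg.norm(rows[:, :m], axis=1)   # ||T_i(:m)||_2 per row
    dummy_group = np.abs(rows[:, m])                  # ||T_i(m+1)||_2 (a scalar)
    return float(np.sum(pos_group + dummy_group))
```

Because the two group norms compete row by row, a large $\eta$ drives each Unl row to put all of its mass either on Pos columns or on the dummy column, which is exactly the $\{p, 0\}$ constraint of Eq. (6).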
Whenever they contain several classes, these datasets are converted into binary classification problems following Kato et al. (2019), and the positives are the samples that belong to the y = 1 class. For the UCI and colored MNIST datasets, we randomly draw nP = 400 positive and nU = 800 unlabeled points among the remaining data. As the Caltech office datasets are smaller, we choose nP = 100 and nU = 100 in that context. To ease the presentation, we report here the results with the class prior π set to the true proportion of the positive class in the dataset and provide in Appendix 6.3 additional results when varying s. We ran the experiments 10 times and report the mean accuracy rate (standard deviations are shown in Appendix 6.1). We test two levels of noise in Pos, α = 0 or α = 0.025, fix ξ = 0 and A = max(C), and choose a large η = 10^6. For the experiments, we consider the unbiased PU learning method (denoted PU in the sequel) (Du Plessis et al., 2014) and the most recent and effective method addressing PU learning with a selection bias (called PUSB below), which tries to weaken the SCAR assumption (Kato et al., 2019). Whenever possible (that is, when source and target samples share the same features), we compare our approaches P-W and P-GW with PU and PUSB; if not, we are not aware of any competitive PU learning method able to handle different features in Pos and Unl. The GW formulation is a non-convex problem and the quality of the solution is highly dependent on the initialization. We explore several initializations of the transport matrix for P-GW and report the results that yield the lowest partial OT-distance (see Appendix 4 for details).
5.2 Partial-W and partial-GW in PU learning under a SCAR assumption
Under SCAR, the Pos dataset and the positives in Unl are assumed to be drawn independently and identically from the distribution p(x|y = 1). We experiment on the UCI datasets, and Table 1 (top) summarizes our findings. Except for Connect-4 and Spambase, partial-W obtains similar results to, or consistently outperforms, PU and PUSB. Including some noise has little impact on the results, except for the Connect-4 dataset. Partial-GW has competitive results, showing that relying on intra-domain matrices may allow discriminating the classes. It nevertheless under-performs relative to partial-W, as the distance matrix C between Pos and Unl is more informative than intra-domain matrices alone.
¹ https://archive.ics.uci.edu/ml/datasets.php
5.3 Experiments under a SAR assumption
The SAR assumption supposes that Pos is drawn according to some features of the samples. To implement such a setting, we take inspiration from (Arjovsky et al., 2019) and construct a colored version of MNIST: each digit is colored either green or red, with a probability of 90% of being colored red. The probability of labeling a digit y = 1 as positive depends on its color, with only green y = 1 digits composing the positive set. The Unl dataset is then mostly composed of red digits. Results under this setting are provided in Table 1 (middle). When we consider a SCAR scenario, partial-W exhibits the best performance. However, its effectiveness drops sharply when a covariate shift appears between the distributions p(x|y = 1) of the Pos and Unl datasets, as in this SAR scenario. In contrast, partial-GW maintains a comparable level of accuracy, as the discriminative information is preserved in the intra-domain distance matrices.
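For completeness, a hedged sketch of how the accuracies reported above could be read off a transport plan: unlabeled points that send mass to Pos (rather than to the dummy point) are predicted positive. The threshold and helper name are illustrative, not taken from the released code.

```python
# Hedged sketch: decoding PU predictions from an extended plan T_bar.
import numpy as np

def predict_from_plan(T_bar, n, m, tol=1e-12):
    mass_to_pos = T_bar[:n, :m].sum(axis=1)   # mass each Unl point sends to Pos
    return (mass_to_pos > tol).astype(int)    # 1 = predicted positive

# y_hat = predict_from_plan(T_bar, n=800, m=400)
# accuracy = (y_hat == y_true).mean()
```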
5.4 Partial-W and -GW in PU learning with different domains and/or feature spaces To further validate the proposed method in a different context, we apply partial-W and partial-GW to a domain adaptation task. We consider the Caltech Office dataset, which consists of four domains: Caltech 256 (C) (Griffin et al., 2007), Amazon (A), Webcam (W) and DSLR (D) (Saenko et al., 2010). There exists a high inter-domain variability, as the objects may face different illumination, orientation, etc. Following a standard protocol, each image of each domain is described by a set of SURF features (Saenko et al., 2010) consisting of a normalized 800-bin histogram, and by a set of DECAF features (Donahue et al., 2014), which are 4096-dimensional features extracted from a neural network. The Pos dataset consists of images from Caltech 256. The unlabeled samples are formed by the Amazon, Webcam, and DSLR images together with the Caltech 256 images that are not included in Pos. We perform a PCA to project the data onto d = 10 dimensions for the SURF features and d = 40 for the DECAF ones. We first investigate the case where the objects are represented by the same features but belong to the same or different domains. Results are given in Table 1 (bottom). For both feature sets, we first notice that PU and PUSB have similar performance to partial-W when the domains are the same. As soon as the two domains differ, partial-GW exhibits the best performance, suggesting that it is able to capture some domain shift. We then consider a scenario where the source and target objects are described by different features (Table 2). In that case, only partial-GW is applicable, and its performance suggests that it is able to efficiently leverage the discriminative information conveyed in the intra-domain similarity matrices, especially when using SURF features to make predictions based on DECAF ones. 6 Conclusion and future work In this paper, we build on partial-W and -GW distances to solve a PU learning problem. We propose a scheme relying on iterations of a Frank-Wolfe algorithm to compute a partial-GW solution, in which each iteration requires solving a partial-W problem that is derived from the solution of an extended Wasserstein problem. We show that those distances compete with, and sometimes outperform, state-of-the-art PU learning methods, and that partial-GW allows remarkable improvements when the underlying spaces of the positive and unlabeled datasets are distinct or even unregistered. While this work considers only features (with partial-W) or intra-domain distances (with partial-GW), it can be extended to define a partial Fused Gromov-Wasserstein distance (Vayer et al., 2020) that combines both aspects. Another line of work will focus on lowering the computational complexity by using sliced partial-GW, building on existing works on sliced partial-W (Bonneel and Coeurjolly, 2019) and sliced GW (Vayer et al., 2019). From the application viewpoint, we envision a potential use of the approach for subgraph matching (Kriege and Mutzel, 2012) or PU learning on graphs (Zhao et al., 2011), as GW has been shown to be effective for comparing structured data such as graphs. In addition, we also target applications such as detecting out-of-distribution examples or open-set domain adaptation (Saito et al., 2018).
Finally, we plan to derive an extension of this work to PU learning in which the proportion of positives in the dataset will be estimated within a unified optimal transport formulation, building on results of the GW-based test of isomorphism between distributions (Brécheteau, 2019). Broader impact This work does not present any significant societal, environmental or ethical consequences. Acknowledgments This work is partially funded through the projects OATMIL ANR-17-CE23-0012, MULTISCALE ANR-18-CE23-0022-01 and RAIMO ANR-20-CHIA-0021-01.
1. What is the focus and contribution of the paper regarding Wasserstein and Gromov-Wasserstein problems? 2. What are the strengths of the proposed method, particularly its novelty and effectiveness in solving positive-unlabeled learning problems? 3. What are the weaknesses of the paper, including typos, limited experiment scope, and potential integration with deep learning models? 4. Do you have any concerns about the figures presented in the paper, such as their origin and relevance to real-world experiments? 5. How does the reviewer assess the clarity and consistency of the paper's writing, including citation styles and sentence tenses?
Summary and Contributions Strengths Weaknesses
Summary and Contributions 1. The paper proposes an algorithm to solve Wasserstein and Gromov-Wasserstein problems. 2. The proposed method can be applied to solve the positive-unlabeled learning problem, and the experimental results demonstrate the effectiveness of the proposed method. Strengths 1. The paper is well-written and easy to follow. 2. The proposed method is novel and can be applied to solve the positive-unlabeled learning problem, and the results are OK. Weaknesses My main concerns are listed as follows: 1. There are some typos in the manuscript, e.g., in the Abstract, "betwenn". 2. It is a pity that the authors only perform experiments on positive-unlabeled learning; optimal transport techniques have been used in many applications. More results on other applications such as transfer learning, few-shot learning, or zero-shot learning would be better, with more baseline methods being compared. 3. In recent years, both optimal transport and deep learning have been hot research topics. The authors are encouraged to explain how to extend the proposed method to integrate with deep learning models. 4. For Figure 1, are the figures generated by real experiments or artificially? If they are artificially generated, can the authors conduct some real-world experiments to support the phenomenon shown in these figures? This would be an important evaluation of the proposed method. 5. When citing literature, the tense of sentences is inconsistent, e.g., "Peyré et al. (2016) proposed" and "Chizat et al. (2018) propose".
NIPS
Title Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge Abstract Scaling up the convolutional neural network (CNN) size (e.g., width, depth, etc.) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place an undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL due to its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT designs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable. Compared to edge training using FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai). 1 Introduction The size of convolutional neural networks (CNNs) matters. As seen in both manually designed neural architectures (ResNet [1]) and automated architectures discovered by neural architecture search (DARTS [2], MiLeNAS [3], EfficientNets [4]), scaling up CNN size (e.g., width, depth, etc.) is known to be an effective approach for improving model accuracy. Unfortunately, training large CNNs is challenging for resource-constrained edge devices (e.g., smartphones, IoT devices, and edge servers). The demand for edge-based training is increasing, as evinced by a recent surge of interest in federated learning (FL) [5]. FL is a distributed learning paradigm that can collaboratively train a global model for many edge devices without centralizing any device's dataset [6, 7, 8]. FL can boost model accuracy in situations when a single organization or user does not have sufficient or relevant data. Consequently, many FL services have been deployed commercially. For instance, Google has improved the accuracy of item ranking and language models on Android smartphones by using FL [9]. FL is also a promising solution when data centralization is undesirable or infeasible due to privacy and regulatory constraints [5]. However, one significant impediment to edge training is the gap between the computational demand of large CNNs and the meager computational power of edge devices. FL approaches such as FedAvg [6] can reduce communication frequency by local SGD and model averaging [10], but they only evaluate the convergence property on small CNNs, or assume the client has enough computational power, with GPUs, to train large CNNs, which is unrealistic in a real-world system.
To tackle the computational limitation of edge nodes, model parallelism-based split learning (SL) [11, 12] partitions a large model and offloads some portion of the neural architecture to the cloud, but SL has a severe straggler problem because a single mini-batch iteration requires multiple rounds of communication between the server and the edges. In this paper, we propose Group Knowledge Transfer (FedGKT), an efficient federated learning framework for resource-constrained edge devices. FedGKT aims to incorporate benefits from both FedAvg [6] and SL [11, 12] by training with local SGD as in FedAvg while also placing low compute demand at the edge as in SL. FedGKT can transfer knowledge from many compact CNNs trained at the edge to a large CNN trained at a cloud server. The essence of FedGKT is that it reformulates FL as an alternating minimization (AM) approach [13, 14, 15, 16, 17, 18], which optimizes two random variables (the edge model and the server model) by alternately fixing one and optimizing the other. Under this reformulation, FedGKT not only makes training CNNs at the edge tractable but also contributes a new knowledge distillation (KD) paradigm, group knowledge transfer, to boost the performance of the server model. Fig. 1(a) provides an overview of FedGKT. The compact CNN on the edge device consists of a lightweight feature extractor and classifier that can be trained efficiently using its private data (1 - local training). After local training, all the edge nodes agree to generate exactly the same tensor dimensions as the output of the feature extractor. The larger server model is trained by taking the features extracted from the edge-side model as inputs, and it uses a KD-based loss function that minimizes the gap between the ground truth and the soft label (the probabilistic prediction in KD [19, 20, 21, 22]) predicted by the edge-side model (2 - periodic transfer). To boost the edge model's performance, the server sends its predicted soft labels to the edge, and the edge then also trains on its local dataset with a KD-based loss function using the server-side soft labels (3 - transfer back). Thus, the server's performance is essentially boosted by knowledge transferred from the edge models, and vice versa. Once the training is complete, the final model is a combination of the local feature extractor and the shared server model (4 - edge-sided model). The primary trade-off is that FedGKT shifts the computing burden from edge devices to the powerful server. FedGKT unifies multiple advantages into a single framework: 1. FedGKT is memory and computation efficient, similar to SL; 2. FedGKT can train in a local SGD manner like FedAvg to reduce the communication frequency; 3. Exchanging hidden features as in SL, as opposed to exchanging the entire model as in FedAvg, reduces the communication bandwidth requirement; 4. FedGKT naturally supports asynchronous training, which circumvents the severe synchronization issue in SL: the server model can immediately start training when it receives inputs from any client. We develop FedGKT based on the FedML research library [23] and comprehensively evaluate FedGKT using edge and server CNNs designed based on ResNet [1] (as shown in Fig. 1(b)). We train on three datasets with varying training difficulties (CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]) and their non-I.I.D. (non-identically and independently distributed) variants. As for model accuracy, our results on both I.I.D. and non-I.I.D.
datasets show that FedGKT can obtain accuracy comparable to FedAvg [6]. More importantly, FedGKT makes edge training affordable. Compared to FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. To understand FedGKT comprehensively, asynchronous training and ablation studies are performed. Some limitations are also discussed. 2 Related Works Federated Learning. Existing FL methods such as FedAvg [6], FedOpt [26], and FedMA [8] face significant hurdles in training large CNNs on resource-constrained devices. Recent works FedNAS [27, 3] and [28] work on large CNNs, but they rely on GPU training to complete their evaluations. Others [29, 30, 31, 32, 33, 34, 35, 36, 37] optimize the communication cost without considering edge computational limitations. Model parallelism-based split learning [11, 12] attempts to break the computational constraint, but it requires frequent communication with the server. Knowledge Distillation (KD). We use KD [19] in a different manner from existing and concurrent works [38, 39, 40, 41, 42, 43, 44, 45]. Previous works only consider transferring knowledge from a large network to a smaller one [19, 20, 21, 22], or they transfer knowledge from a group, but each member in the group shares the same large model architecture or a large portion of the neural architecture with specific tail or head layers [46, 47, 48, 49, 50, 51]. Moreover, all teachers and students in distillation share the same dataset [50, 52, 53, 54], while in our setting each member (client) can only access its own independent dataset. Previous methods use centralized training, whereas we utilize an alternating training method. Efficient On-device Deep Learning. Our work also relates to efficient deep learning on edge devices, such as model compression [55, 56, 57], manually designed architectures (MobileNets [58], ShuffleNets [59], SqueezeNets [60]), and efficient neural architecture search (EfficientNets [4], FBNet [61]). However, all of these techniques are tailored for the inference phase rather than the training phase. 3 Group Knowledge Transfer 3.1 Preliminary We aim to collaboratively train large convolutional neural networks (e.g., ResNet) on many resource-constrained devices that are not equipped with GPU accelerators, without centralizing each device's dataset on the server side. We specifically consider supervised learning with $C$ categories in the entire dataset $\mathcal{D}$. We assume that there are $K$ clients (edge devices) in the network. The $k$th node has its own dataset $\mathcal{D}^k := \{(X_i^k, y_i)\}_{i=1}^{N^{(k)}}$, where $X_i^k$ is the $i$th training sample, $y_i \in \{1, 2, \dots, C\}$ is the corresponding label of $X_i^k$ (a multi-classification learning task), and $N^{(k)}$ is the number of samples in dataset $\mathcal{D}^k$. $\mathcal{D} = \{\mathcal{D}^1, \mathcal{D}^2, \dots, \mathcal{D}^K\}$ and $N = \sum_{k=1}^K N^{(k)}$. In general, we can formulate CNN-based federated learning as the distributed optimization problem
$$\min_W F(W) \stackrel{\text{def}}{=} \min_W \sum_{k=1}^K \frac{N^{(k)}}{N} \cdot f^{(k)}(W), \quad \text{where } f^{(k)}(W) = \frac{1}{N^{(k)}} \sum_{i=1}^{N^{(k)}} \ell(W; X_i, y_i), \quad (1)$$
where $W$ represents the network weights of a global CNN in each client, $f^{(k)}(W)$ is the $k$th client's local objective function that measures the local empirical risk over the heterogeneous dataset $\mathcal{D}^k$, and $\ell$ is the loss function of the global CNN model.
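As a small illustration of Eq. (1), here is a sketch of evaluating the weighted federated objective; `client_loss` stands for the local empirical risk $f^{(k)}(W)$ and is an assumed callable, not taken from FedML.

```python
# Hedged sketch: the sample-size-weighted federated objective F(W) of Eq. (1).
def federated_objective(W, client_datasets, client_loss):
    N = sum(len(D) for D in client_datasets)        # total number of samples
    return sum(len(D) / N * client_loss(W, D)       # N^(k)/N * f^(k)(W)
               for D in client_datasets)
```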
Most off-the-shelf federated optimization methods (e.g., FedAvg [6], FedProx [62], FedNova [63], and FedOpt [26]) propose to solve the objective function (1) with variants of local SGD [10] for communication-efficient training and demonstrate their characteristics with experiments on linear models (logistic regression) or shallow neural networks (2 convolutional layers). However, as shown in Fig. 2(a), the main drawback is that these methods cannot train large CNNs on resource-constrained edge devices due to the lack of GPU accelerators and sufficient memory. Model parallelism-based split learning [11, 12], as shown in Fig. 2(b), attempts to break the computational constraint by splitting $W$ into two portions and offloading the larger portion to the server side, but a single mini-batch iteration requires remote forward propagation and backpropagation. For edge computing, such a highly frequent synchronization mechanism may lead to a severe straggler problem that significantly slows down the training process.
3.2 Reformulation
Non-convex Optimization. To solve the resource-constrained problem in existing FL, we consider an alternative methodology for the FL optimization problem. As illustrated in Fig. 2(c), we divide the global CNN $W$ in Eq. (1) into two partitions: a small feature extractor model $W_e$ and a large-scale server-side model $W_s$, and place them on the edge and the server, respectively. We also add a classifier $W_c$ on top of $W_e$ to create a small but fully trainable model on the edge. Consequently, we reformulate the single global model optimization into a non-convex optimization problem that requires us to solve the server objective $F_s$ and the edge objective $F_c$ simultaneously. Our reformulation is as follows:
$$\operatorname*{argmin}_{W_s} F_s(W_s, W_e^*) = \operatorname*{argmin}_{W_s} \sum_{k=1}^K \sum_{i=1}^{N^{(k)}} \ell_s\big(f_s(W_s; H_i^{(k)}), y_i^{(k)}\big) \quad (2)$$
$$\text{subject to: } H_i^{(k)} = f_e^{(k)}(W_e^{(k)}; X_i^{(k)}) \quad (3)$$
$$\operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} F_c(W_e^{(k)}, W_c^{(k)}) = \operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f^{(k)}((W_e^{(k)}, W_c^{(k)}); X_i^{(k)}), y_i^{(k)}\big) \quad (4)$$
$$= \operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}), y_i^{(k)}\big) \quad (5)$$
where $\ell_s$ and $\ell_c$ are general loss functions for the server model and the edge model, respectively. $f_s$ is the server model, and $f^{(k)}$ is the edge-side model, which consists of a feature extractor $f_e^{(k)}$ followed by a classifier $f_c^{(k)}$. $W_s$, $W_e^{(k)}$, and $W_c^{(k)}$ are the network weights of $f_s$, $f_e^{(k)}$, and $f_c^{(k)}$, respectively. $H_i^{(k)}$ is the $i$th sample's feature map (a hidden vector or tensor) output by the feature extractor $f_e^{(k)}$ (Eq. (3)). Note that Eq. (5) can be solved independently on each client: the $k$th client model $f^{(k)}$ is trained on its local dataset (Eq. (5)), while the server model $f_s$ is trained using the $H_i^{(k)}$ as input features (Eq. (2)). During the inference phase, the final trained model architecture for client $k$ stacks the architecture of the feature extractor $f_e^{(k)}$ and the architecture of the server model $f_s$. In practice, the client can either run offline inference by downloading the server model $f_s$ and using it locally, or perform online inference through a network connection with the server.
Advantages and Challenges. The core advantage of the above reformulation is that, when we assume the model size of $f^{(k)}$ is multiple orders of magnitude smaller than that of $f_s$, the edge training is affordable (a minimal sketch of this split forward pass is given below).
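As referenced above, a minimal PyTorch sketch of the split forward pass of Eqs. (2)-(5): a small edge extractor/classifier and a larger server model consuming the feature map $H$. The layer shapes are placeholders, not the ResNet-8/ResNet-55 pair of Fig. 1(b).

```python
# Hedged sketch of the edge/server split: f_e^(k), f_c^(k), and f_s.
import torch
import torch.nn as nn

edge_extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # f_e^(k)
edge_classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(16, 10))                           # f_c^(k)
server_model = nn.Sequential(nn.Conv2d(16, 64, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(64, 10))                              # f_s

x = torch.randn(8, 3, 32, 32)    # a toy mini-batch of CIFAR-sized images
H = edge_extractor(x)            # Eq. (3): feature map sent to the server
z_c = edge_classifier(H)         # edge logits, used in Eq. (5)
z_s = server_model(H)            # server logits, used in Eq. (2)
```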
Moreover, as discussed in [11, 12], for large CNN training, the communication bandwidth for transferring $H_i^{(k)}$ to the server is substantially smaller than that needed to communicate all model parameters, as is done in traditional federated learning. Conversely, we also observe the difficulty of the reformulated optimization problem. First, each client is expected to adequately solve the inner optimization (Eq. (5)). Namely, each client should train its feature extractor $f_e^{(k)}$ well to ensure that Eq. (3) can accurately generate $H_i^{(k)}$ for any given input image. However, in the FL setting, the dataset on each edge device is small and thus may be inadequate for training a CNN-based feature extractor solely on the local dataset. In addition, the outer optimization (Eq. (2)) and the inner optimization (Eq. (5)) are correlated: Eq. (2) relies on the quality of $H_i^{(k)}$, which is optimized by Eq. (5). This correlation further makes the outer optimization (Eq. (2)) difficult to converge if the individual client-side feature extractors $f_e^{(k)}$ are not trained adequately.
3.3 Group Knowledge Transfer (FedGKT)
Scaling Past Edge Dataset Limitations with Knowledge Transfer. Given the above challenges, we incorporate a knowledge distillation loss into the optimization equations to circumvent the optimization difficulty. The intuition is that knowledge transferred from the server model can boost the optimization on the edge (Eq. (5)). As such, we propose to transfer group knowledge bidirectionally: the server CNN absorbs the knowledge from many edges, and each individual edge CNN obtains enhanced knowledge from the server CNN. To be more specific, in Eqs. (2) and (5), we design $\ell_s$ and $\ell_c$ as follows:
$$\ell_s = \ell_{CE} + \sum_{k=1}^{K} \ell_{KD}\big(z_s, z_c^{(k)}\big) = \ell_{CE} + \sum_{k=1}^{K} D_{KL}(p_k \,\|\, p_s) \quad (6)$$
$$\ell_c^{(k)} = \ell_{CE} + \ell_{KD}\big(z_s, z_c^{(k)}\big) = \ell_{CE} + D_{KL}(p_s \,\|\, p_k) \quad (7)$$
$\ell_{CE}$ is the cross-entropy loss between the predicted values and the ground-truth labels. $D_{KL}$ is the Kullback-Leibler (KL) divergence, which serves as a term in the loss functions $\ell_s$ and $\ell_c$ to transfer knowledge from one network to another. The probabilistic predictions
$$p_k^i = \frac{\exp\big(z_c^{(k,i)}/T\big)}{\sum_{i=1}^{C} \exp\big(z_c^{(k,i)}/T\big)} \quad \text{and} \quad p_s^i = \frac{\exp\big(z_s^i/T\big)}{\sum_{i=1}^{C} \exp\big(z_s^i/T\big)}$$
of the edge model $f^{(k)}$ and the server model $f_s$, respectively, are calculated as the softmax of the logits $z$. The logits $z_s$ and $z_c^{(k)}$ are the outputs of the last fully connected layer of the server model and the client model, respectively, and $T$ is the temperature hyperparameter of the softmax function. Intuitively, each KL divergence term pulls a model's prediction toward the other model's soft labels. In doing so, the server model absorbs the knowledge gained by each of the edge models; similarly, the edge models bring their predictions closer to the server model's prediction and thereby absorb the server model's knowledge to improve their feature extraction capability.
Improved Alternating Minimization. After plugging Eqs. (6) and (7) into our reformulation (Eqs.
(2) and (5)), we propose a variant of alternating minimization (AM) [13, 14, 15, 16, 17, 18] to solve the reformulated optimization problem as follows:
$$\operatorname*{argmin}_{W_s} F_s(W_s, W_e^{(k)*}) = \operatorname*{argmin}_{W_s} \sum_{k=1}^K \sum_{i=1}^{N^{(k)}} \ell_{CE}\big(f_s(W_s; \underbrace{f_e^{(k)}(W_e^{(k)*}; X_i^{(k)})}_{H_i^{(k)}}), y_i^{(k)}\big) + \sum_{k=1}^K \ell_{KD}\big(z_c^{(k)*}, z_s\big) \quad (8)$$
$$\text{where } z_c^{(k)*} = f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)*}; X_i^{(k)})}_{H_i^{(k)}}) \quad \text{and} \quad z_s = f_s(W_s; H_i^{(k)}) \quad (9)$$
$$\operatorname*{argmin}_{W^{(k)}} F_c(W_s^*, W^{(k)}) = \operatorname*{argmin}_{W^{(k)}} \sum_{i=1}^{N^{(k)}} \ell_{CE}\big(f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}), y_i^{(k)}\big) + \ell_{KD}\big(z_s^*, z_c^{(k)}\big) \quad (10)$$
$$\text{where } z_c^{(k)} = f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}) \quad \text{and} \quad z_s^* = f_s(W_s^*; H_i^{(k)}) \quad (11)$$
where the $*$ superscript indicates that the corresponding variables are held fixed during optimization, and $W^{(k)}$ is the combination of $W_e^{(k)}$ and $W_c^{(k)}$. AM is a solver used in convex and non-convex optimization theory and practice that optimizes two random variables alternately. In Eq. (8), we fix $W^{(k)}$ and optimize (train) $W_s$ for several epochs; we then switch to Eq. (10), fixing $W_s$ and optimizing $W^{(k)}$ for several epochs. This alternation between Eq. (8) and Eq. (10) proceeds over many rounds until reaching a convergence state.
Key Insight. The essence of our reformulation is that the alternating minimization (Eqs. (8) and (10)) uses knowledge distillation across all edges to simplify the optimization, which scales past the dataset limitation on each edge in federated learning. In particular, we achieve this objective using a local cross-entropy loss computed based only on the ground truth and the model output, and a second loss that uses the KL divergence across the edges and the server, which effectively captures the contribution (knowledge) from multiple client datasets. Moreover, each minimization subproblem can be solved with SGD and its variants (e.g., SGD with momentum [64], ADAM [65, 66]).
Algorithm 1 Group Knowledge Transfer. The subscripts s and k stand for the server and the kth edge, respectively. E is the number of local epochs, T is the number of communication rounds, and η is the learning rate; X^(k) represents the input images at edge k; H^(k) is the extracted feature map from X^(k); Z_s and Z_c^(k) are the logit tensors from the server and the client, respectively.
1: ServerExecute():
2: for each round t = 1, 2, ..., T do
3:   for each client k in parallel do
4:     // the server broadcasts Z_s^(k) to the client
5:     H^(k), Z_c^(k), Y^(k) ← ClientTrain(k, Z_s^(k))
6:   Z_s ← empty dictionary
7:   for each local epoch i from 1 to E_s do
8:     for each client k do
9:       for idx, b ∈ {H^(k), Z_c^(k), Y^(k)} do
10:        W_s ← W_s − η_s ∇ℓ_s(W_s; b)
11:        if i == E_s then
12:          Z_s^(k)[idx] ← f_s(W_s; h^(k))
13:  // illustrated as "transfer back" in Fig. 1(a)
14:  for each client k in parallel do
15:    send the server logits Z_s^(k) to client k
16:
17: ClientTrain(k, Z_s^(k)):
18: // illustrated as "local training" in Fig. 1(a)
19: for each local epoch i from 1 to E_c do
20:   for batch b ∈ {X^(k), Z_s^(k), Y^(k)} do
21:     // ℓ_c^(k) is computed using Eq. (7)
22:     W^(k) ← W^(k) − η_k ∇ℓ_c^(k)(W^(k); b)
23: // extract features and logits
24: H^(k), Z_c^(k) ← empty dictionaries
25: for idx, batch x^(k), y^(k) ∈ {X^(k), Y^(k)} do
26:   h^(k) ← f_e^(k)(W_e^(k); x^(k))
27:   z_c^(k) ← f_c(W_c^(k); h^(k))
28:   H^(k)[idx] ← h^(k)
29:   Z_c^(k)[idx] ← z_c^(k)
30: return H^(k), Z_c^(k), Y^(k) to the server
Training Algorithm. To elaborate, we illustrate the alternating training algorithm FedGKT in Fig. 1(a) and summarize it as Algorithm 1; a hedged sketch of one round follows.
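The sketch below gives one FedGKT round in the spirit of Algorithm 1: a client step with the KD loss of Eq. (7) and a server step with one client's term of Eq. (6). The model attributes (`edge.extractor`, `edge.classifier`), the optimizers, and the temperature value are assumptions, not the released FedML implementation.

```python
# Hedged PyTorch sketch of FedGKT's alternating round.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=3.0):
    # KL(p_teacher || p_student) with softened distributions, as in Eqs. (6)-(7)
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean')

def client_step(edge, x, y, z_server, opt):
    H = edge.extractor(x)
    z_c = edge.classifier(H)
    loss = F.cross_entropy(z_c, y) + kd_loss(z_c, z_server)   # Eq. (7)
    opt.zero_grad(); loss.backward(); opt.step()
    return H.detach(), z_c.detach()      # uploaded to the server

def server_step(server, H, y, z_client, opt):
    z_s = server(H)
    loss = F.cross_entropy(z_s, y) + kd_loss(z_s, z_client)   # one client's term of Eq. (6)
    opt.zero_grad(); loss.backward(); opt.step()
    return z_s.detach()                  # sent back to the edge ("transfer back")
```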
During each round of training, the client uses local SGD to train for several epochs and then sends the extracted feature maps and the related logits to the server. When the server receives the extracted features and logits from each client, it trains the much larger server-side CNN. The server then sends back its global logits to each client. This process iterates over multiple rounds, and during each round the knowledge of all clients is transferred to the server model and vice versa. For the FedGKT training framework, the remaining step is to design specific neural architectures for the client model and the server model. To evaluate the effectiveness of FedGKT, we design CNN architectures based on ResNet [1], which are shown in Fig. 1(b). More details can also be found in Appendix B.3.

4 Experiments

4.1 Experimental Setup

Implementation. We develop the FedGKT training framework based on FedML [23], an open-source federated learning research library that simplifies new algorithm development and deployment in a distributed computing environment. Our server node has 4 NVIDIA RTX 2080Ti GPUs with sufficient GPU memory for large model training. We use several CPU-based nodes as clients to train the small CNNs.

Task and Dataset. Our training task is image classification on CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]. We also generate their non-I.I.D. variants by splitting the training samples into K unbalanced partitions. Details of these three datasets are introduced in Appendix A.1. The test images are used for a global test after each round. For the different methods, we record the top-1 test accuracy as the metric to compare model performance. Note that we do not use the LEAF [67] benchmark datasets because the benchmark models provided are tiny (CNNs with only two convolutional layers) or the datasets they contain are too easy for modern CNNs (e.g., Federated EMNIST), which makes them unable to adequately evaluate our algorithm running on large CNN models. Compared to LEAF, the FedML [23] benchmark supports CIFAR-10, CIFAR-100, and CINIC-10 (which contains images from ImageNet).

Baselines. We compare FedGKT with the state-of-the-art FL method FedAvg [6] and a centralized training approach. A split learning-based method [11, 12] is used to compare the communication cost. Note that we do not compare with FedProx [62] because it performs worse than FedAvg in the large CNN setting, as demonstrated in [8]. We also do not compare with FedMA [8] because it cannot work on modern DNNs that contain batch normalization layers (e.g., ResNet).

Model Architectures. Two modern CNN architectures are evaluated: ResNet-56 and ResNet-110 [1]. The baseline FedAvg requires all edge nodes to train using these two CNNs. For FedGKT, the edge-side and server-side models are designed based on these two CNNs. On the edges, we design a tiny CNN architecture called ResNet-8, a compact CNN containing 8 convolutional layers (described in Fig. 1(b) and Table 7 in the Appendix). The server-side model architectures are ResNet-55 and ResNet-109 (Tables 8 and 9 in the Appendix), which have the same input dimension to match the output of the edge-side feature extractor. For split learning, we use the extractor in ResNet-8 as the edge-side partition of the CNNs, while the server-side partitions of the CNNs are also ResNet-55 and ResNet-109.
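The structural split can be illustrated with a toy PyTorch module pair. The layer counts and widths below are placeholders, not the ResNet-8/ResNet-55 definitions from the appendix; only the interface matters: the edge model emits a feature map H plus client logits, and the server model consumes H instead of raw images.

```python
import torch.nn as nn

class EdgeModel(nn.Module):
    """Tiny edge CNN = feature extractor f_e followed by classifier f_c."""
    def __init__(self, num_classes=10, channels=16):
        super().__init__()
        self.extractor = nn.Sequential(            # f_e: produces the shared H
            nn.Conv2d(3, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.classifier = nn.Sequential(           # f_c: turns H into client logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes),
        )

    def forward(self, x):
        h = self.extractor(x)      # H^(k): the tensor that is sent to the server
        return h, self.classifier(h)

class ServerModel(nn.Module):
    """Large server CNN f_s that consumes feature maps H instead of raw images."""
    def __init__(self, num_classes=10, channels=16, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.BatchNorm2d(width), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes),
        )

    def forward(self, h):
        return self.body(h)        # server logits z_s
```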
4.2 Result of Model Accuracy

For the standard experiments, we run on 16 clients and a GPU server for all datasets and models. Fig. 3 shows the test accuracy curves during training of the ResNet-56-based models on the three datasets. It includes the results of centralized training, FedAvg, and FedGKT. We also summarize all numerical results of ResNet-56 and ResNet-110 in Table 1. In both the I.I.D. and non-I.I.D. settings, FedGKT obtains comparable or even better accuracy than FedAvg.

Hyperparameters. There are four important hyperparameters in our FedGKT framework: the number of communication rounds (line #2 of Algorithm 1), the edge-side epoch number, the server-side epoch number, and the server-side learning rate. After a tuning effort, we find that the edge-side epoch number can simply be 1. The server-side epoch number depends on the data distribution: for I.I.D. data the value is 20, while for non-I.I.D. data the value depends on the level of data bias. For I.I.D. data, the Adam optimizer [65] works better than SGD with momentum [64], while for non-I.I.D. data, SGD with momentum works better. During training, we reduce the learning rate once the accuracy has plateaued [68, 69]. We use the same data augmentation techniques for a fair comparison (random crop, random horizontal flip, and normalization). More details of the hyperparameters are described in Appendix B.4.

4.3 Efficiency Evaluation

To compare the computational demand of training, we count the number of FLOPs (floating-point operations) performed on the edge using prior methods [70, 71]. We report the result on CIFAR-100 in Fig. 4. Compared to the FedAvg baseline, the edge-side computational cost of FedGKT (ResNet-8) is 9 times less than that of ResNet-56 and 17 times less than that of ResNet-110. (The memory cost can be roughly compared via parameter counts: ResNet-8 has 11K parameters, which is 54 times fewer than ResNet-56 and 105 times fewer than ResNet-110.) We also test the CPU running time per mini-batch (batch size 64) of forward-backward propagation on an Intel i7 CPU (which is more powerful than current edge devices). The results show that ResNet-8 requires only 3% of ResNet-110's training time (30 ms vs. 950 ms). To compare communication costs, we use SL [11, 12] as the baseline, which also exchanges hidden feature maps rather than the entire model. The communication cost is calculated using Eq. (12) and (13) in Appendix B.2 without using data compression techniques. The results are shown in Fig. 5 (X-axis units: GBytes). FedGKT uses fewer feature map exchanges with the server than SL.
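A rough way to reproduce this style of efficiency comparison is to count parameters and time a forward-backward pass directly. The snippet below reuses the illustrative EdgeModel from the earlier sketch and is not the paper's measurement code (which counts FLOPs with the tools of [70, 71]).

```python
import time
import torch

def param_count(model):
    return sum(p.numel() for p in model.parameters())

def time_minibatch(model, x, y, steps=10):
    """Average forward-backward wall time per mini-batch (CPU)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    t0 = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        _, logits = model(x)              # EdgeModel returns (feature map, logits)
        loss_fn(logits, y).backward()
        opt.step()
    return (time.perf_counter() - t0) / steps

edge = EdgeModel()                        # illustrative module from the sketch above
x = torch.randn(64, 3, 32, 32)            # batch size 64, as in the CPU timing test
y = torch.randint(0, 10, (64,))
print(f"edge params: {param_count(edge):,}")
print(f"sec/mini-batch (CPU): {time_minibatch(edge, x, y):.3f}")
```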
4.4 Ablation Study: Understanding FedGKT under Different Settings

The Effectiveness of Knowledge Transfer. Table 2 shows the results on the efficacy of using the distillation loss $\ell_{KD}$ in Eq. (7) and Eq. (6).

Table 2: Ablation Study on Loss Functions

          CIFAR-10    CIFAR-100   CINIC-10
None      -/diverge   -/diverge   -/diverge
S->E      92.97       68.44       81.51
S<->E     90.53       69.57       80.01

We created a scenario in which both the client and the server only use $\ell_{CE}$ without using $\ell_{KD}$ (labeled "None"). In this setting, the accuracy is low (e.g., 40%) or the training diverges (uniformly notated as "-/diverge"). In another scenario, only the clients use $\ell_{KD}$ to update their local models, but the server does not (noted as single-directional transfer S->E). We observe that the transfer from the server to the edge is always helpful, while the bidirectional transfer (S<->E) is more effective as the dataset becomes increasingly difficult (CIFAR-100).

Asynchronous Training. Since the server does not need to wait for updates from all clients to start training, FedGKT naturally supports asynchronous training. We present the experimental results in Table 3. The results show that asynchronous training does not negatively affect model accuracy. This demonstrates the advantage of our method over SL, in which every edge requires multiple synchronizations for each mini-batch iteration.

FedGKT with Different Edge Numbers. To understand the scalability of FedGKT, we evaluate its performance with varying numbers of edge nodes. The test accuracy results are shown in Table 4. In general, adding more edge nodes does not negatively affect accuracy.

Table 4: FedGKT with Different # of Edges

          8       16      64      128
FedGKT    69.51   69.57   69.65   69.59

Smaller Architectures. We test the performance of FedGKT using even smaller edge models: ResNet-4 and ResNet-6 on CIFAR-10. ResNet-4 and ResNet-6 use one and two BasicBlock components (each including two convolutional layers), respectively. The result is shown in Table 5. While reducing the edge model to ResNet-8 did not reduce accuracy, shrinking the model even more substantially does reduce the overall accuracy.

5 Discussion

Federated learning (FL) is an art of trade-offs among many aspects, including model accuracy, data privacy, computational efficiency, communication cost, and scalability. We recognize the challenges of developing a universal method that can address all problems; thus, we discuss some limitations of our method.

1. Privacy and robustness: [72] shows that federated learning can be backdoored. Although our work does not address this privacy concern, we believe existing methods such as differential privacy (DP) and multi-party computation (MPC) can defend data privacy against hidden vector reconstruction attacks. Intuitively, exchanging hidden feature maps is safer than exchanging the model or gradient. Note that the hidden map exchange happens during the training phase. This makes an attack more difficult because the attacker's access is limited to the evolving and untrained feature map rather than a fully trained feature map that represents the raw data. Given that model and gradient exchange may also leak privacy, the lack of analysis and comparison of the degree of privacy leakage among these three settings (gradient, model, and hidden map) is the first limitation of our work.

2. Communication cost: compared to the entire model weight or gradient, the hidden vector is much smaller (e.g., the hidden vector size of ResNet-110 is around 64 KB, while the entire gradient/model size is 4.6 MB for 32x32 images). Even in high-resolution vision settings, this observation still holds (e.g., when the image size is 224x224, the hidden feature map size is only about 1 MB, compared to roughly 100 MB for a full ResNet). Since the hidden vector for each data point can be transmitted independently, FedGKT has a smaller bandwidth requirement than gradient or model exchange. However, our proposed method has a potential drawback in that the total communication cost depends on the number of data points, although our experimental results demonstrate that our method has smaller communication costs than split learning because it requires fewer communication rounds to converge. In settings where the sample number is extremely large and the image resolution is extremely high, both our method and split learning would have a high total communication cost.
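As a back-of-the-envelope check of these numbers, the sketch below computes the per-sample feature-map size for an assumed 32x32x16 fp32 tensor (the exact FedGKT feature-map shape is in the appendix, so this shape is an assumption) and shows how the total upload grows with the local sample count:

```python
BYTES_PER_FLOAT = 4  # fp32

def tensor_bytes(*shape):
    n = 1
    for d in shape:
        n *= d
    return n * BYTES_PER_FLOAT

# Assumed per-sample feature-map shape for a 32x32 input: H x W x C = 32 x 32 x 16.
feature_map = tensor_bytes(32, 32, 16)      # 65,536 bytes, i.e. the ~64 KB in the text
model_bytes = int(4.6e6)                    # ~4.6 MB ResNet-110 model/gradient (text)
print(f"per-sample feature map: {feature_map / 1024:.0f} KB")
print(f"model exchange:         {model_bytes / 1e6:.1f} MB")

# FedGKT's total upload grows with the local sample count, unlike model exchange:
n_samples = 3125                            # e.g., 50k CIFAR images over 16 clients
print(f"one full-dataset feature upload: {feature_map * n_samples / 1e6:.0f} MB")
```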
3. Label deficiency: the proposed FedGKT only works for supervised learning, yet label deficiency is a practical problem that cannot be ignored. Many application cases do not have sufficient labels, since it is difficult to design mechanisms that incentivize users to label their private local data.

4. Scalability (a large number of clients): in the cross-device setting, we need to collaboratively train models with numerous smartphones (e.g., a client number as high as 1 million). One way to mitigate the scalability issue is to select a subset of clients in each round with a uniform sampling strategy [6]. We ran experiments under this setting but found that this sampling method requires many more rounds of training to converge. Even though the communication cost is acceptable, this sampling method is still imperfect in practice ([9] describes many constraints that a production system might face). We argue that uniform sampling may not be the best practice and that scalability is a common limitation of most existing works. In summary, we concede that our proposed method does not have an advantage in addressing the scalability challenge.

5. Model personalization: the final trained model under our FedGKT framework is a combination of the global server model and the client model, which is a potential way to help clients learn personalized models. For example, we could fine-tune the client model for several epochs to see if the combination of such a personalized client model and the server model is more effective. We do not explicitly demonstrate this in our experiments, but we hope to explore this possibility in future work.

6 Conclusion

In this work, to tackle the resource-constrained reality, we reformulate FL as a group knowledge transfer (FedGKT) training algorithm. FedGKT can efficiently train small CNNs on edges and periodically transfer their knowledge by knowledge distillation to a large-capacity server-side CNN. FedGKT achieves several advantages in a single framework: reduced demand for edge computation, lower communication cost for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FL. To simplify the edge training, we also develop a distributed training system based on FedGKT. We evaluate FedGKT by training modern CNN architectures (ResNet-56 and ResNet-110) on three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain accuracy comparable to, or even slightly higher than, FedAvg. More importantly, FedGKT makes edge training affordable: compared to edge training using FedAvg, FedGKT costs 9 to 17 times less computational power (FLOPs) and requires 54 to 105 times fewer parameters.

Broader Impact

FedGKT can efficiently train large deep neural networks (CNNs) on resource-constrained edge devices (such as smartphones, IoT devices, and edge servers). Unlike past FL approaches, FedGKT demonstrates the feasibility of training a large server-side model by using many small client models. FedGKT preserves the data privacy requirements of the FL approach while also working within the constraints of an edge computing environment. Smartphone users may benefit from this technique because their private data is protected, and they may simultaneously obtain a high-quality model service. Organizations such as hospitals and other non-profit entities with limited training resources can collaboratively train a large CNN model without revealing their datasets while achieving significant training cost savings.
They can also meet requirements regarding the protection of intellectual property, confidentiality, regulatory restrictions, and legal constraints. As for the potential risks of our method, a client can maliciously send incorrect hidden feature maps and soft labels to the server, which may impact the overall model accuracy. Such effects must be detected and addressed to maintain overall system stability. Second, the relative benefits for each client may vary. For instance, in terms of fairness, edge nodes with smaller datasets may obtain greater accuracy improvements from collaborative training than those with larger amounts of training data. Our training framework does not consider how to balance the interests of these different parties.

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract Nos. HR001117C0053 and FA8750-19-2-1005, ARO award W911NF1810400, NSF grants CCF-1703575 and CCF-1763673, and ONR Award No. N00014-16-12189. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
1. What is the focus of the paper regarding collaborative training at the edge?
2. What are the strengths of the proposed approach, particularly in efficiency improvements?
3. What are the weaknesses of the method, especially concerning dataset size and privacy preservation?
4. How does the method communicate hidden features for knowledge distillation in a distributed setting?
5. Are there any potential applications of the method in healthcare or industry?
6. Can the method scale to high-resolution images, and how would it handle large CNNs in such settings?
7. What is the significance of the paper's contribution, and how does it compare to other state-of-the-art federated learning methods?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

The authors propose a method for collaborative training of large CNNs at the edge, which is not feasible today because of the computational cost. Instead, the paper trains small CNNs locally on each device, and then uses knowledge distillation to distill the small CNNs into a larger CNN that runs on the server side. Importantly, instead of communicating the model weights as in federated learning, the approach communicates the hidden features to train the server and edge networks.

Strengths

- The contribution is novel and addresses an important problem within edge device learning. They creatively adapt knowledge distillation to share information in a distributed setting.
- There are potential applications in health care as well, where high-resolution CNNs may need to be learned in a distributed setting.
- The improvements in efficiency are substantial.

Weaknesses

- The weakness of this method, which should be made more clear, is that by communicating the hidden features, and not the weights, there is now a dependence of bandwidth on the dataset size. This is called out partly in lines 141-143 and lines 172-178, but it's unclear if the GKT formulation removes this dataset limitation, since H is still being transferred between the edge and the server. Any empirical data would have been ideal here, but not required.
- Large CNNs can either be deep (e.g., RN-56 and RN-110 addressed here) or have high-resolution inputs (e.g., in medical settings). Since both are quite prevalent in industry, the paper's significance could be strengthened by a discussion on how these methods would (or would not) scale to high-resolution images.
- The privacy-preserving properties of this method have not been described.

[response to rebuttal] Thanks for taking the time to put the rebuttal together. The authors acknowledged that privacy preservation is not significantly analyzed in the paper. While the rebuttal provides some response, without data it is not convincing. Especially as the paper compared against other SOTA FL methods, this seems to be a critical weakness. Furthermore, the authors' rebuttal states that the feature maps are much smaller than the weights, but uses a 32x32 image example. In most real-world examples with image sizes of 224x224 (e.g., ImageNet) or 1000x1000 (segmentation datasets), the opposite is true. For example, a typical 3x3 conv layer would have H=128, W=128, and C=K=64. Then the feature map is CHW ~ 1M elements, whereas the weights are 3x3xCxK ~ 37K elements. This indicates that the method's communication characteristics measured in this paper do not scale to real-world CV environments. However, I understand that in some cases the trade-off between on-device compute and communication may be worthwhile. For these reasons, and having read through the other rebuttals, I have downgraded my score.
NIPS
Title Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge

Abstract Scaling up the convolutional neural network (CNN) size (e.g., width, depth, etc.) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL due to its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT designs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable. Compared to the edge training using FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai).

1 Introduction

The size of convolutional neural networks (CNN) matters. As seen in both manually designed neural architectures (ResNet [1]) and automated architectures discovered by neural architecture search (DARTS [2], MiLeNAS [3], EfficientNets [4]), scaling up CNN size (e.g., width, depth, etc.) is known to be an effective approach for improving model accuracy. Unfortunately, training large CNNs is challenging for resource-constrained edge devices (e.g., smartphones, IoT devices, and edge servers). The demand for edge-based training is increasing, as evinced by a recent surge of interest in Federated Learning (FL) [5]. FL is a distributed learning paradigm that can collaboratively train a global model for many edge devices without centralizing any device's dataset [6, 7, 8]. FL can boost model accuracy in situations when a single organization or user does not have sufficient or relevant data. Consequently, many FL services have been deployed commercially. For instance, Google has improved the accuracy of item ranking and language models on Android smartphones by using FL [9]. FL is also a promising solution when data centralization is undesirable or infeasible due to privacy and regulatory constraints [5]. However, one significant impediment in edge training is the gap between the computational demand of large CNNs and the meager computational power on edge devices. FL approaches such as FedAvg [6] can reduce communication frequency by local SGD and model averaging [10], but they only evaluate the convergence property on small CNNs, or assume the client has enough computational power, with GPUs, to train large CNNs, which is unrealistic in a real-world system.
To tackle the computational limitation of edge nodes, model parallelism-based split learning (SL) [11, 12] partitions a large model and offloads some portion of the neural architecture to the cloud, but SL has a severe straggler problem because a single mini-batch iteration requires multiple rounds of communication between the server and the edges. In this paper, we propose Group Knowledge Transfer (FedGKT), an efficient federated learning framework for resource-constrained edge devices. FedGKT aims to incorporate benefits from both FedAvg [6] and SL [11, 12] by training with local SGD as in FedAvg while also placing low compute demand at the edge as in SL. FedGKT can transfer knowledge from many compact CNNs trained at the edge to a large CNN trained at a cloud server. The essence of FedGKT is that it reformulates FL as an alternating minimization (AM) approach [13, 14, 15, 16, 17, 18], which optimizes two random variables (the edge model and the server model) by alternately fixing one and optimizing the other. Under this reformulation, FedGKT not only boosts training CNNs at the edge but also contributes a new knowledge distillation (KD) paradigm, group knowledge transfer, to boost the performance of the server model. Fig. 1(a) provides an overview of FedGKT. The compact CNN on the edge device consists of a lightweight feature extractor and classifier that can be trained efficiently using its private data (1 - local training). After local training, all the edge nodes agree to generate exactly the same tensor dimensions as the output of the feature extractor. The larger server model is trained by taking the features extracted by the edge-side model as inputs, using a KD-based loss function that minimizes the gap between the ground truth and the soft labels (probabilistic predictions in KD [19, 20, 21, 22]) predicted by the edge-side model (2 - periodic transfer). To boost the edge model's performance, the server sends its predicted soft labels to the edge, and the edge then also trains on its local dataset with a KD-based loss function using the server-side soft labels (3 - transfer back). Thus, the server's performance is essentially boosted by knowledge transferred from the edge models and vice versa. Once the training is complete, the final model is a combination of the local feature extractor and the shared server model (4 - edge-sided model). The primary trade-off is that FedGKT shifts the computing burden from edge devices to the powerful server. FedGKT unifies multiple advantages into a single framework: 1. FedGKT is memory and computation efficient, similar to SL; 2. FedGKT can train in a local SGD manner like FedAvg to reduce the communication frequency; 3. Exchanging hidden features as in SL, as opposed to exchanging the entire model as in FedAvg, reduces the communication bandwidth requirement; 4. FedGKT naturally supports asynchronous training, which circumvents the severe synchronization issue in SL: the server model can immediately start training when it receives inputs from any client. We develop FedGKT based on the FedML research library [23] and comprehensively evaluate FedGKT using edge and server CNNs designed based on ResNet [1] (as shown in Fig. 1(b)). We train on three datasets with varying training difficulties (CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]) and their non-I.I.D. (non-identically and independently distributed) variants. As for the model accuracy, our results on both I.I.D. and non-I.I.D.
datasets show that FedGKT can obtain accuracy comparable to FedAvg [6]. More importantly, FedGKT makes edge training affordable. Compared to FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. To understand FedGKT comprehensively, asynchronous training and ablation studies are performed. Some limitations are also discussed.

2 Related Works

Federated Learning. Existing FL methods such as FedAvg [6], FedOpt [26], and FedMA [8] face significant hurdles in training large CNNs on resource-constrained devices. Recent works FedNAS [27, 3] and [28] work on large CNNs, but they rely on GPU training to complete their evaluations. Others [29, 30, 31, 32, 33, 34, 35, 36, 37] optimize the communication cost without considering edge computational limitations. Model parallelism-based split learning [11, 12] attempts to break the computational constraint, but it requires frequent communication with the server.

Knowledge Distillation (KD). We use KD [19] in a different manner from existing and concurrent works [38, 39, 40, 41, 42, 43, 44, 45]. Previous works only consider transferring knowledge from a large network to a smaller one [19, 20, 21, 22], or they transfer knowledge from a group in which each member shares the same large model architecture or a large portion of the neural architecture with specific tail or head layers [46, 47, 48, 49, 50, 51]. Moreover, all teachers and students in those distillation settings share the same dataset [50, 52, 53, 54], while in our setting each member (client) can only access its own independent dataset. Previous methods use centralized training, whereas we utilize an alternating training method.

Efficient On-device Deep Learning. Our work also relates to efficient deep learning on edge devices, such as model compression [55, 56, 57], manually designed architectures (MobileNets [58], ShuffleNets [59], SqueezeNets [60]), and efficient neural architecture search (EfficientNets [4], FBNet [61]). However, all of these techniques are tailored for the inference phase rather than the training phase.

3 Group Knowledge Transfer

3.1 Preliminary

We aim to collaboratively train large convolutional neural networks (e.g., ResNet) on many resource-constrained devices that are not equipped with GPU accelerators, without centralizing each device's dataset on the server side. We specifically consider supervised learning with C categories in the entire dataset $\mathcal{D}$. We assume that there are K clients (edge devices) in the network. The kth node has its own dataset $\mathcal{D}^k := \{(X_i^k, y_i)\}_{i=1}^{N^{(k)}}$, where $X_i$ is the ith training sample, $y_i \in \{1, 2, \ldots, C\}$ is the corresponding label of $X_i$ (a multi-classification learning task), and $N^{(k)}$ is the number of samples in dataset $\mathcal{D}^k$. $\mathcal{D} = \{\mathcal{D}^1, \mathcal{D}^2, \ldots, \mathcal{D}^K\}$ and $N = \sum_{k=1}^{K} N^{(k)}$. In general, we can formulate CNN-based federated learning as the distributed optimization problem:

$$\min_{W} F(W) \stackrel{\text{def}}{=} \min_{W} \sum_{k=1}^{K} \frac{N^{(k)}}{N} \cdot f^{(k)}(W), \quad \text{where } f^{(k)}(W) = \frac{1}{N^{(k)}} \sum_{i=1}^{N^{(k)}} \ell\big(W; X_i, y_i\big) \qquad (1)$$

where W represents the network weights of a global CNN shared by all clients, $f^{(k)}(W)$ is the kth client's local objective function that measures the local empirical risk over the heterogeneous dataset $\mathcal{D}^k$, and $\ell$ is the loss function of the global CNN model.
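Eq. (1) is just a sample-weighted average of local empirical risks; a small, self-contained Python sketch (with a generic loss callable standing in for $\ell$) makes the weighting explicit:

```python
def global_objective(W, client_datasets, loss):
    """F(W) from Eq. (1): the sample-weighted average of local empirical risks."""
    N = sum(len(data) for data in client_datasets)
    total = 0.0
    for data in client_datasets:            # data: list of (x, y) pairs on one client
        f_k = sum(loss(W, x, y) for x, y in data) / len(data)   # local risk f^(k)(W)
        total += (len(data) / N) * f_k      # weight N^(k) / N
    return total
```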
Most off-the-shelf federated optimization methods (e.g., FedAvg [6], FedProx [62], FedNova [63], and FedOpt [26]) propose to solve objective function (1) with variants of local SGD [10] for communication-efficient training and demonstrate their characteristics with experiments on linear models (logistic regression) or shallow neural networks (two convolutional layers). However, as shown in Fig. 2(a), the main drawback is that these methods cannot train large CNNs on resource-constrained edge devices due to the lack of GPU accelerators and sufficient memory. Model parallelism-based split learning [11, 12], as shown in Fig. 2(b), attempts to break the computational constraint by splitting W into two portions and offloading the larger portion to the server side, but a single mini-batch iteration requires remote forward propagation and backpropagation. For edge computing, such a highly frequent synchronization mechanism may lead to a severe straggler problem that significantly slows down the training process.

3.2 Reformulation

Non-convex Optimization. To solve the resource-constrained problem in existing FL, we reconsider the methodology for solving the FL optimization problem. As illustrated in Fig. 2(c), we divide the global CNN W in Eq. (1) into two partitions: a small feature extractor model $W_e$ and a large-scale server-side model $W_s$, and place them on the edge and the server, respectively. We also add a classifier $W_c$ on top of $W_e$ to create a small but fully trainable model on the edge. Consequently, we reformulate the single global model optimization into a non-convex optimization problem that requires us to solve the server model $F_s$ and the edge model $F_c$ simultaneously. Our reformulation is as follows:

$$\operatorname*{argmin}_{W_s} F_s(W_s, W_e^*) = \operatorname*{argmin}_{W_s} \sum_{k=1}^{K} \sum_{i=1}^{N^{(k)}} \ell_s\big(f_s(W_s; H_i^{(k)}), y_i^{(k)}\big) \qquad (2)$$

$$\text{subject to: } H_i^{(k)} = f_e^{(k)}\big(W_e^{(k)}; X_i^{(k)}\big) \qquad (3)$$

$$\operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} F_c\big(W_e^{(k)}, W_c^{(k)}\big) = \operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f^{(k)}\big((W_e^{(k)}, W_c^{(k)}); X_i^{(k)}\big), y_i^{(k)}\big) \qquad (4)$$

$$= \operatorname*{argmin}_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f_c^{(k)}\big(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}\big), y_i^{(k)}\big) \qquad (5)$$

where $\ell_s$ and $\ell_c$ are general loss functions for the server model and the edge model, respectively. $f_s$ is the server model, and $f^{(k)}$ is the edge-side model, which consists of a feature extractor $f_e^{(k)}$ followed by a classifier $f_c^{(k)}$. $W_s$, $W_e^{(k)}$, and $W_c^{(k)}$ are the network weights of $f_s$, $f_e^{(k)}$, and $f_c^{(k)}$, respectively. $H_i^{(k)}$ is the ith sample's feature map (a hidden vector or tensor) output by the feature extractor $f_e^{(k)}$ (Eq. (3)). Note that Eq. (5) can be solved independently on each client. The kth client model $f^{(k)}$ is trained on its local dataset (Eq. (5)), while the server model $f_s$ is trained using $H_i^{(k)}$ as input features (Eq. (2)). During the inference phase, the final trained model architecture for client k stacks the architecture of the feature extractor $f_e^{(k)}$ and the architecture of the server model $f_s$. In practice, the client can either run offline inference by downloading the server model $f_s$ and using it locally or perform online inference through a network connection with the server.

Advantages and Challenges. The core advantage of the above reformulation is that when we assume the model size of $f^{(k)}$ is multiple orders of magnitude smaller than that of $f_s$, the edge training is affordable.
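The inference-phase stacking described above can be sketched as a single module; the class name is our own, and the constituent modules (e.g., the illustrative EdgeModel.extractor and ServerModel sketched earlier) are assumptions rather than the paper's released architectures.

```python
import torch.nn as nn

class StackedInferenceModel(nn.Module):
    """Client k's deployed model: local extractor f_e^(k) followed by the shared f_s."""
    def __init__(self, edge_extractor, server_model):
        super().__init__()
        self.f_e = edge_extractor
        self.f_s = server_model

    def forward(self, x):
        h = self.f_e(x)       # Eq. (3): H = f_e^(k)(W_e^(k); x)
        return self.f_s(h)    # server logits, the f_s(W_s; H) term in Eq. (2)
```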
1. What is the main contribution of the paper, and how does it improve upon existing federated learning algorithms?
2. What are the strengths of the paper, particularly in terms of its research area and the advantage it offers over other methods?
3. What are the weaknesses of the paper, and how could they be addressed?
4. How does the paper's proposed method, Group Knowledge Transfer (GKT), compare to other federated learning algorithms, such as FedAvg and FedMD?
5. Are there any limitations or potential risks associated with the use of GKT, such as privacy concerns?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

This paper presents Group Knowledge Transfer (GKT), a novel federated learning algorithm that takes advantage of knowledge distillation on both the clients-to-server and server-to-clients sides to improve both the global model quality and the communication efficiency in federated learning applications. Moreover, in GKT the clients only need to train compact model architectures, which improves the training efficiency, especially in scenarios where the participating clients are mobile devices. Extensive experimental results are provided to demonstrate the effectiveness of GKT in federated learning scenarios.

Strengths

The paper is well-written. The research area of improving the communication efficiency and effectiveness of existing federated learning algorithms is promising. The advantage of allowing clients to train smaller local models in GKT is convincing, as it can improve on-device training efficiency when the participating clients are mobile devices. I commend the authors for providing extensive experimental results.

Weaknesses

My concerns on this paper are summarized below; I will be happy to improve my evaluation score if they are addressed:

(1) In the FedAvg [1] algorithm, not all clients are sampled to participate in a particular federated learning round. However, in GKT, it seems (from Algorithm 1) that all available clients participate in every federated learning round. Would it be possible for GKT to support subsampling a batch of clients for each federated learning round? The subsampling approach would also save communication, since fewer clients are allowed to communicate with the data center.

(2) From the experimental results, it seems GKT works well for convolutional neural networks. But the knowledge distillation framework seems easy to extend to language models, e.g., LSTMs and Transformer architectures [2]. It would be helpful to show the effectiveness of GKT on several NLP tasks, e.g., language translation and sentiment analysis.

(3) FedMD [3] also uses knowledge distillation in federated learning applications. It would be useful to compare GKT with FedMD to better understand the effectiveness of GKT.

[1] https://arxiv.org/pdf/1602.05629.pdf
[2] https://arxiv.org/abs/1706.03762
[3] https://arxiv.org/abs/1910.03581

-----------------------------------------------------------------------------
Update after the author response: I appreciate the authors for providing the response, which addresses part of my concerns on the scalability of the proposed method and its compatibility with the subsampling approach people usually take in FL. The proposed method is novel to me, although the privacy issue remains open for further discussion. At this stage, I tend to keep my overall evaluation score unchanged.
-----------------------------------------------------------------------------
NIPS
Title
Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge
Abstract
Scaling up the convolutional neural network (CNN) size (e.g., width, depth, etc.) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL due to its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT designs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable. Compared to the edge training using FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai).
1 Introduction
The size of convolutional neural networks (CNNs) matters. As seen in both manually designed neural architectures (ResNet [1]) and automated architectures discovered by neural architecture search (DARTS [2], MiLeNAS [3], EfficientNets [4]), scaling up CNN size (e.g., width, depth, etc.) is known to be an effective approach for improving model accuracy. Unfortunately, training large CNNs is challenging for resource-constrained edge devices (e.g., smartphones, IoT devices, and edge servers). The demand for edge-based training is increasing, as evinced by a recent surge of interest in Federated Learning (FL) [5]. FL is a distributed learning paradigm that can collaboratively train a global model for many edge devices without centralizing any device's dataset [6, 7, 8]. FL can boost model accuracy in situations where a single organization or user does not have sufficient or relevant data. Consequently, many FL services have been deployed commercially. For instance, Google has improved the accuracy of item ranking and language models on Android smartphones by using FL [9]. FL is also a promising solution when data centralization is undesirable or infeasible due to privacy and regulatory constraints [5]. However, one significant impediment to edge training is the gap between the computational demand of large CNNs and the meager computational power on edge devices. FL approaches such as FedAvg [6] can reduce communication frequency by local SGD and model averaging [10], but they only evaluate the convergence property on small CNNs, or they assume the client has enough computational power, with GPUs, to train large CNNs, which is unrealistic in a real-world system.
To tackle the computational limitation of edge nodes, model parallelism-based split learning (SL) [11, 12] partitions a large model and offloads some portion of the neural architecture to the cloud, but SL has a severe straggler problem because a single mini-batch iteration requires multiple rounds of communication between the server and the edges.
In this paper, we propose Group Knowledge Transfer (FedGKT), an efficient federated learning framework for resource-constrained edge devices. FedGKT aims to incorporate benefits from both FedAvg [6] and SL [11, 12] by training with local SGD as in FedAvg while also placing low compute demand at the edge as in SL. FedGKT can transfer knowledge from many compact CNNs trained at the edge to a large CNN trained at a cloud server. The essence of FedGKT is that it reformulates FL as an alternating minimization (AM) approach [13, 14, 15, 16, 17, 18], which optimizes two random variables (the edge model and the server model) by alternately fixing one and optimizing the other. Under this reformulation, FedGKT not only boosts training CNNs at the edge but also contributes a new knowledge distillation (KD) paradigm, group knowledge transfer, to boost the performance of the server model.
Fig. 1(a) provides an overview of FedGKT. The compact CNN on the edge device consists of a lightweight feature extractor and classifier that can be trained efficiently using its private data (1 - local training). After local training, all the edge nodes agree to generate exactly the same tensor dimensions as the output of the feature extractor. The larger server model is trained by taking the features extracted from the edge-side model as inputs, using a KD-based loss function that minimizes the gap between the ground truth and the soft label (the probabilistic prediction in KD [19, 20, 21, 22]) predicted by the edge-side model (2 - periodic transfer). To boost the edge model's performance, the server sends its predicted soft labels to the edge, and the edge then trains on its local dataset with a KD-based loss function using the server-side soft labels (3 - transfer back). Thus, the server's performance is essentially boosted by knowledge transferred from the edge models and vice versa. Once training is complete, the final model is a combination of the local feature extractor and the shared server model (4 - edge-sided model). The primary trade-off is that FedGKT shifts the computing burden from edge devices to the powerful server.
FedGKT unifies multiple advantages into a single framework:
1. FedGKT is memory and computation efficient, similar to SL;
2. FedGKT can train in a local SGD manner like FedAvg to reduce the communication frequency;
3. exchanging hidden features, as in SL, rather than the entire model, as in FedAvg, reduces the communication bandwidth requirement;
4. FedGKT naturally supports asynchronous training, which circumvents the severe synchronization issue in SL: the server model can immediately start training when it receives inputs from any client.
We develop FedGKT based on the FedML research library [23] and comprehensively evaluate FedGKT using edge and server CNNs designed based on ResNet [1] (as shown in Fig. 1(b)). We train on three datasets with varying training difficulties (CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]) and their non-I.I.D. (not independent and identically distributed) variants. As for model accuracy, our results on both I.I.D. and non-I.I.D. datasets show that FedGKT can obtain accuracy comparable to FedAvg [6].
More importantly, FedGKT makes edge training affordable. Compared to FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. To understand FedGKT comprehensively, asynchronous training and ablation studies are performed, and some limitations are also discussed.
2 Related Works
Federated Learning. Existing FL methods such as FedAvg [6], FedOpt [26], and FedMA [8] face significant hurdles in training large CNNs on resource-constrained devices. Recent works FedNAS [27, 3] and [28] work on large CNNs, but they rely on GPU training to complete their evaluations. Others [29, 30, 31, 32, 33, 34, 35, 36, 37] optimize the communication cost without considering edge computational limitations. Model parallelism-based split learning [11, 12] attempts to break the computational constraint, but it requires frequent communication with the server.
Knowledge Distillation (KD). We use KD [19] in a different manner from existing and concurrent works [38, 39, 40, 41, 42, 43, 44, 45]. Previous works only consider transferring knowledge from a large network to a smaller one [19, 20, 21, 22], or they transfer knowledge from a group in which each member shares the same large model architecture or a large portion of the neural architecture with specific tail or head layers [46, 47, 48, 49, 50, 51]. Moreover, all teachers and students in such distillation share the same dataset [50, 52, 53, 54], while in our setting each member (client) can only access its own independent dataset. Previous methods use centralized training, whereas we utilize an alternating training method.
Efficient On-device Deep Learning. Our work also relates to efficient deep learning on edge devices, such as model compression [55, 56, 57], manually designed architectures (MobileNets [58], ShuffleNets [59], SqueezeNets [60]), and efficient neural architecture search (EfficientNets [4], FBNet [61]). However, all of these techniques are tailored for the inference phase rather than the training phase.
3 Group Knowledge Transfer
3.1 Preliminary
We aim to collaboratively train large convolutional neural networks (e.g., ResNet) on many resource-constrained devices that are not equipped with GPU accelerators, without centralizing each device's dataset on the server side. We specifically consider supervised learning with $C$ categories in the entire dataset $\mathcal{D}$. We assume that there are $K$ clients (edge devices) in the network. The $k$th node has its own dataset $\mathcal{D}_k := \{(X_i^{(k)}, y_i)\}_{i=1}^{N^{(k)}}$, where $X_i$ is the $i$th training sample, $y_i \in \{1, 2, \ldots, C\}$ is the corresponding label of $X_i$ (a multi-class classification task), and $N^{(k)}$ is the number of samples in $\mathcal{D}_k$; $\mathcal{D} = \{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_K\}$ and $N = \sum_{k=1}^{K} N^{(k)}$. In general, we can formulate CNN-based federated learning as a distributed optimization problem:

$$\min_{W} F(W) \stackrel{\text{def}}{=} \min_{W} \sum_{k=1}^{K} \frac{N^{(k)}}{N} \cdot f^{(k)}(W), \quad \text{where } f^{(k)}(W) = \frac{1}{N^{(k)}} \sum_{i=1}^{N^{(k)}} \ell(W; X_i, y_i) \tag{1}$$

where $W$ represents the network weights of a global CNN in each client, $f^{(k)}(W)$ is the $k$th client's local objective function that measures the local empirical risk over the heterogeneous dataset $\mathcal{D}_k$, and $\ell$ is the loss function of the global CNN model.
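To make Eq. (1) concrete, the following minimal Python sketch evaluates the global objective $F(W)$ from per-client risks; the function name and the toy numbers are illustrative, not part of the paper.

```python
def global_objective(local_losses, sample_counts):
    """F(W) from Eq. (1): the sample-weighted average of per-client risks.

    local_losses[k]  -- f^(k)(W), the mean loss of client k's model on D_k
    sample_counts[k] -- N^(k), the number of samples held by client k
    """
    n_total = sum(sample_counts)  # N = sum_k N^(k)
    return sum(n * f / n_total for f, n in zip(local_losses, sample_counts))

# Toy usage: three clients with unbalanced datasets.
print(global_objective([0.52, 0.61, 0.47], [1000, 250, 4000]))
```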
Most off-the-shelf federated optimization methods (e.g., FedAvg [6], FedProx [62], FedNova [63], and FedOpt [26]) propose to solve objective function (1) with variants of local SGD [10] for communication-efficient training and demonstrate their characteristics with experiments on linear models (logistic regression) or shallow neural networks (two convolutional layers). However, as shown in Fig. 2(a), the main drawback is that these methods cannot train large CNNs at resource-constrained edge devices due to the lack of GPU accelerators and sufficient memory. Model parallelism-based split learning [11, 12], as shown in Fig. 2(b), attempts to break the computational constraint by splitting $W$ into two portions and offloading the larger portion to the server side, but a single mini-batch iteration requires remote forward propagation and backpropagation. For edge computing, such a highly frequent synchronization mechanism may lead to a severe straggler problem that significantly slows down the training process.
3.2 Reformulation
Non-convex Optimization. To solve the resource-constrained problem in existing FL, we reconsider another methodology for solving the FL optimization problem. As illustrated in Fig. 2(c), we divide the global CNN $W$ in Eq. (1) into two partitions: a small feature-extractor model $W_e$ and a large-scale server-side model $W_s$, and we put them on the edge and the server, respectively. We also add a classifier $W_c$ for $W_e$ to create a small but fully trainable model on the edge. Consequently, we reformulate the single global model optimization into a non-convex optimization problem that requires us to solve the server model $F_s$ and the edge model $F_c$ simultaneously. Our reformulation is as follows:

$$\arg\min_{W_s} F_s(W_s, W_e^*) = \arg\min_{W_s} \sum_{k=1}^{K} \sum_{i=1}^{N^{(k)}} \ell_s\big(f_s(W_s; H_i^{(k)}),\, y_i^{(k)}\big) \tag{2}$$
$$\text{subject to: } H_i^{(k)} = f_e^{(k)}(W_e^{(k)}; X_i^{(k)}) \tag{3}$$
$$\arg\min_{(W_e^{(k)}, W_c^{(k)})} F_c(W_e^{(k)}, W_c^{(k)}) = \arg\min_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f^{(k)}((W_e^{(k)}, W_c^{(k)}); X_i^{(k)}),\, y_i^{(k)}\big) \tag{4}$$
$$= \arg\min_{(W_e^{(k)}, W_c^{(k)})} \sum_{i=1}^{N^{(k)}} \ell_c\big(f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}),\, y_i^{(k)}\big) \tag{5}$$

where $\ell_s$ and $\ell_c$ are general loss functions for the server model and the edge model, respectively; $f_s$ is the server model, and $f^{(k)}$ is the edge-side model, which consists of a feature extractor $f_e^{(k)}$ followed by a classifier $f_c^{(k)}$; $W_s$, $W_e^{(k)}$, and $W_c^{(k)}$ are the network weights of $f_s$, $f_e^{(k)}$, and $f_c^{(k)}$, respectively; and $H_i^{(k)}$ is the $i$th sample's feature map (a hidden vector or tensor) output by the feature extractor $f_e^{(k)}$ (Eq. (3)). Note that Eq. (5) can be solved independently on each client. The $k$th client model $f^{(k)}$ is trained on its local dataset (Eq. (5)), while the server model $f_s$ is trained using $H_i^{(k)}$ as input features (Eq. (2)). During the inference phase, the final trained model architecture for client $k$ stacks the architecture of the feature extractor $f_e^{(k)}$ and the architecture of the server model $f_s$. In practice, the client can either run offline inference by downloading the server model $f_s$ and using it locally or perform online inference through a network connection with the server.
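For intuition only, here is a minimal PyTorch sketch of the three components in Eqs. (2)-(5): the edge feature extractor $f_e$, the edge classifier $f_c$, and the server model $f_s$. The layer sizes are illustrative assumptions and deliberately much simpler than the paper's ResNet-8/ResNet-55 designs.

```python
import torch.nn as nn

class EdgeExtractor(nn.Module):
    """f_e^(k): the lightweight feature extractor on the device."""
    def __init__(self, out_channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x)  # H^(k), the feature map sent to the server

class EdgeClassifier(nn.Module):
    """f_c^(k): makes the edge model fully trainable end-to-end."""
    def __init__(self, in_channels=16, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, num_classes),
        )

    def forward(self, h):
        return self.head(h)  # edge logits z_c^(k)

class ServerModel(nn.Module):
    """f_s: the large server-side CNN that consumes H^(k)."""
    def __init__(self, in_channels=16, num_classes=10, width=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, num_classes),
        )

    def forward(self, h):
        return self.body(h)  # server logits z_s
```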
Advantages and Challenges. The core advantage of the above reformulation is that, when we assume the model size of $f^{(k)}$ is multiple orders of magnitude smaller than that of $f_s$, edge training is affordable. Moreover, as discussed in [11, 12], for large CNN training, the communication bandwidth for transferring $H_i^{(k)}$ to the server is substantially less than communicating all model parameters as is done in traditional federated learning. Conversely, we also observe the difficulty of the reformulated optimization problem. First, each client is expected to adequately solve the inner optimization (Eq. (5)). Namely, each client should train its feature extractor $f_e^{(k)}$ well to ensure that Eq. (3) can accurately generate $H_i^{(k)}$ for any given input image. However, in the FL setting, the dataset on each edge device is small and thus may be inadequate for training a CNN-based feature extractor solely on the local dataset. In addition, the outer optimization Eq. (2) and the inner optimization Eq. (5) are correlated: Eq. (2) relies on the quality of $H_i^{(k)}$, which is optimized by Eq. (5). This correlation makes the outer optimization Eq. (2) difficult to converge if the individual client-side feature extractors $f_e^{(k)}$ are not trained adequately.
3.3 Group Knowledge Transfer (FedGKT)
Scaling Edge Dataset Limitations with Knowledge Transfer. Given the above challenges, we incorporate a knowledge distillation loss into the optimization equations to circumvent the optimization difficulty. The intuition is that knowledge transferred from the server model can boost the optimization on the edge (Eq. (5)). As such, we propose to transfer group knowledge bidirectionally: the server CNN absorbs the knowledge from many edges, and each individual edge CNN obtains enhanced knowledge from the server CNN. To be more specific, in Eq. (2) and (5), we design $\ell_s$ and $\ell_c$ as follows:

$$\ell_s = \ell_{CE} + \sum_{k=1}^{K} \ell_{KD}\big(z_s, z_c^{(k)}\big) = \ell_{CE} + \sum_{k=1}^{K} D_{KL}\big(p_k \,\|\, p_s\big) \tag{6}$$
$$\ell_c^{(k)} = \ell_{CE} + \ell_{KD}\big(z_s, z_c^{(k)}\big) = \ell_{CE} + D_{KL}\big(p_s \,\|\, p_k\big) \tag{7}$$

$\ell_{CE}$ is the cross-entropy loss between the predicted values and the ground-truth labels. $D_{KL}$ is the Kullback-Leibler (KL) divergence, which serves as a term in the loss functions $\ell_s$ and $\ell_c$ to transfer knowledge from one network to another. $p_k^i = \frac{\exp(z_c^{(k,i)}/T)}{\sum_{i=1}^{C} \exp(z_c^{(k,i)}/T)}$ and $p_s^i = \frac{\exp(z_s^i/T)}{\sum_{i=1}^{C} \exp(z_s^i/T)}$ are the probabilistic predictions of the edge model $f^{(k)}$ and the server model $f_s$, respectively, calculated as the softmax of the logits $z$. The logits $z_s$ and $z_c^{(k)}$ are the outputs of the last fully connected layer in the server model and the client model, respectively, and $T$ is the temperature hyperparameter of the softmax function. Intuitively, the KL divergence loss attempts to bring the model's prediction and the received soft label close to each other. In doing so, the server model absorbs the knowledge gained by each of the edge models. Similarly, the edge models attempt to bring their predictions closer to the server model's prediction and thereby absorb the server model's knowledge to improve their feature extraction capability.
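A minimal PyTorch sketch of these losses (Eq. (6) and (7)) follows. The temperature handling and the "batchmean" reduction follow common KD practice and are our assumptions where the paper defers details to its appendix; the usual $T^2$ gradient-scaling factor is omitted for clarity.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=3.0):
    """l_KD as D_KL(p_teacher || p_student) with temperature-T softmax."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # F.kl_div(input=log q, target=p) computes KL(p || q)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

def edge_loss(z_c, z_s, labels, T=3.0):
    """Eq. (7): l_c^(k) = l_CE + D_KL(p_s || p_k); the client is the student."""
    return F.cross_entropy(z_c, labels) + kd_loss(z_c, z_s, T)

def server_loss(z_s, client_logits, labels, T=3.0):
    """Eq. (6): l_s = l_CE + sum_k D_KL(p_k || p_s); the server is the student."""
    loss = F.cross_entropy(z_s, labels)
    for z_c in client_logits:
        loss = loss + kd_loss(z_s, z_c, T)
    return loss
```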
Improved Alternating Minimization. After plugging Eq. (6) and (7) into our reformulation (Eq. (2) and (5)), we propose a variant of Alternating Minimization (AM) [13, 14, 15, 16, 17, 18] to solve the reformulated optimization problem as follows:

$$\arg\min_{W_s} F_s(W_s, W_e^{(k)*}) = \arg\min_{W_s} \sum_{k=1}^{K} \sum_{i=1}^{N^{(k)}} \ell_{CE}\big(f_s(W_s; \underbrace{f_e^{(k)}(W_e^{(k)*}; X_i^{(k)})}_{H_i^{(k)}}),\, y_i^{(k)}\big) + \sum_{k=1}^{K} \ell_{KD}\big(z_c^{(k)*}, z_s\big) \tag{8}$$
$$\text{where } z_c^{(k)*} = f_c^{(k)}\big(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)*}; X_i^{(k)})}_{H_i^{(k)}}\big), \text{ and } z_s = f_s(W_s; H_i^{(k)}) \tag{9}$$
$$\arg\min_{W^{(k)}} F_c(W_s^*, W^{(k)}) = \arg\min_{W^{(k)}} \sum_{i=1}^{N^{(k)}} \ell_{CE}\big(f_c^{(k)}(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}),\, y_i^{(k)}\big) + \ell_{KD}\big(z_s^*, z_c^{(k)}\big) \tag{10}$$
$$\text{where } z_c^{(k)} = f_c^{(k)}\big(W_c^{(k)}; \underbrace{f_e^{(k)}(W_e^{(k)}; X_i^{(k)})}_{H_i^{(k)}}\big), \text{ and } z_s^* = f_s(W_s^*; H_i^{(k)}) \tag{11}$$

where the $*$ superscript indicates that the related random variables are held fixed during optimization, and $W^{(k)}$ is the combination of $W_e^{(k)}$ and $W_c^{(k)}$. AM is a solver from convex and non-convex optimization theory and practice that optimizes two random variables alternately. In Eq. (8), we fix $W^{(k)}$ and optimize (train) $W_s$ for several epochs, and then we switch to Eq. (10) to fix $W_s$ and optimize $W^{(k)}$ for several epochs. This alternation between Eq. (8) and Eq. (10) continues over many rounds until reaching a convergence state.
Key Insight. The essence of our reformulation is that the alternating minimization (Eq. (8) and Eq. (10)) uses knowledge distillation across all edges to simplify the optimization, which mitigates the dataset limitation on each edge in federated learning. In particular, we achieve this objective using a local cross-entropy loss computed based only on the ground truth and the model output, and a second loss that uses the KL divergence between edges and the server, which effectively captures the contribution (knowledge) from multiple client datasets. Moreover, each minimization subproblem can be solved with SGD and its variants (e.g., SGD with momentum [64], Adam [65, 66]).
Algorithm 1 Group Knowledge Transfer. The subscripts s and k stand for the server and the kth edge, respectively; E is the number of local epochs; T is the number of communication rounds; η is the learning rate; X(k) represents the input images at edge k; H(k) is the extracted feature map from X(k); Z(k)c and Z(k)s are the logit tensors from the client and the server, respectively.
1: ServerExecute():
2: for each round t = 1, 2, ..., T do
3:   for each client k in parallel do
4:     // the server broadcasts Z(k)s to the client
5:     H(k), Z(k)c, Y(k) ← ClientTrain(k, Z(k)s)
6:   Zs ← empty dictionary
7:   for each local epoch i from 1 to Es do
8:     for each client k do
9:       for idx, b ∈ {H(k), Z(k)c, Y(k)} do
10:        Ws ← Ws − ηs∇ℓs(Ws; b)
11:        if i == Es then
12:          Z(k)s[idx] ← fs(Ws; h(k))
13:  // illustrated as "transfer back" in Fig. 1(a)
14:  for each client k in parallel do
15:    send the server logits Z(k)s to client k
16:
17: ClientTrain(k, Z(k)s):
18: // illustrated as "local training" in Fig. 1(a)
19: for each local epoch i from 1 to Ec do
20:   for batch b ∈ {X(k), Z(k)s, Y(k)} do
21:     // ℓ(k)c is computed using Eq. (7)
22:     W(k) ← W(k) − ηk∇ℓ(k)c(W(k); b)
23: // extract features and logits
24: H(k), Z(k)c ← empty dictionary
25: for idx, batch x(k), y(k) ∈ {X(k), Y(k)} do
26:   h(k) ← f(k)e(W(k)e; x(k))
27:   z(k)c ← fc(W(k)c; h(k))
28:   H(k)[idx] ← h(k)
29:   Z(k)c[idx] ← z(k)c
30: return H(k), Z(k)c, Y(k) to server
Training Algorithm. To elaborate, we illustrate the alternating training algorithm FedGKT in Fig. 1(a) and summarize it as Algorithm 1.
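To make the alternating loop concrete, here is a compressed, single-process, synchronous sketch of one communication round of Algorithm 1. The client objects and their two methods are hypothetical helpers, and the inline loss mirrors Eq. (8); the real implementation in FedML is distributed.

```python
import torch.nn.functional as F

def fedgkt_round(server, clients, server_opt, T=3.0, E_s=1):
    """One synchronous communication round of Algorithm 1 (sketch).

    `server` maps feature maps H to logits; each client exposes the
    hypothetical methods local_train_and_extract() -> (H, Z_c, Y) and
    receive_server_logits(Z_s).
    """
    # Step 1 -- local training (lines 17-30): each client runs E_c epochs
    # with the loss of Eq. (7), then returns features, logits, and labels.
    payloads = [client.local_train_and_extract() for client in clients]

    # Step 2 -- server training (lines 6-12): minimize Eq. (8) on the
    # transferred features for E_s epochs.
    for _ in range(E_s):
        for H, Z_c, Y in payloads:
            server_opt.zero_grad()
            z_s = server(H)
            loss = F.cross_entropy(z_s, Y) + F.kl_div(
                F.log_softmax(z_s / T, dim=1),
                F.softmax(Z_c / T, dim=1),
                reduction="batchmean",
            )
            loss.backward()
            server_opt.step()

    # Step 3 -- transfer back (lines 13-15): fresh server logits per client.
    for client, (H, _, _) in zip(clients, payloads):
        client.receive_server_logits(server(H).detach())
```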
During each round of training, the client uses local SGD to train for several epochs and then sends the extracted feature maps and related logits to the server. When the server receives the extracted features and logits from each client, it trains the much larger server-side CNN. The server then sends back its global logits to each client. This process iterates over multiple rounds, and during each round the knowledge of all clients is transferred to the server model and vice versa. The remaining step in the FedGKT training framework is to design specific neural architectures for the client model and the server model. To evaluate the effectiveness of FedGKT, we design CNN architectures based on ResNet [1], which are shown in Fig. 1(b). More details can be found in Appendix B.3.
4 Experiments
4.1 Experimental Setup
Implementation. We develop the FedGKT training framework based on FedML [23], an open-source federated learning research library that simplifies new algorithm development and deployment in a distributed computing environment. Our server node has 4 NVIDIA RTX 2080Ti GPUs with sufficient GPU memory for large model training. We use several CPU-based nodes as clients training small CNNs.
Task and Dataset. Our training task is image classification on CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]. We also generate their non-I.I.D. variants by splitting the training samples into K unbalanced partitions. Details of these three datasets are given in Appendix A.1. The test images are used for a global test after each round. For the different methods, we record the top-1 test accuracy as the metric for comparing model performance. Note that we do not use the LEAF [67] benchmark datasets because the benchmark models provided are tiny (CNNs with only two convolutional layers) or the datasets they contain are too easy for modern CNNs (e.g., Federated EMNIST), and are thus unable to adequately evaluate our algorithm running on large CNN models. Compared to LEAF, the FedML [23] benchmark supports CIFAR-10, CIFAR-100, and CINIC-10 (which contains images from ImageNet).
Baselines. We compare FedGKT with the state-of-the-art FL method FedAvg [6] and a centralized training approach. A split learning-based method [11, 12] is used to compare the communication cost. Note that we do not compare with FedProx [62] because it performs worse than FedAvg in the large CNN setting, as demonstrated in [8]. We also do not compare with FedMA [8] because it cannot work on modern DNNs that contain batch normalization layers (e.g., ResNet).
Model Architectures. Two modern CNN architectures are evaluated: ResNet-56 and ResNet-110 [1]. The baseline FedAvg requires all edge nodes to train using these two CNNs. For FedGKT, the edge- and server-side models are designed based on these two CNNs. On the edges, we design a tiny CNN architecture called ResNet-8, a compact CNN containing 8 convolutional layers (described in Fig. 1(b) and Table 7 in the Appendix). The server-side model architectures are ResNet-55 and ResNet-109 (Tables 8 and 9 in the Appendix), which have the same input dimension to match the output of the edge-side feature extractor. For split learning, we use the extractor in ResNet-8 as the edge-side partition of the CNNs, while the server-side partitions are again ResNet-55 and ResNet-109.
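Before moving to the results, a note on the unbalanced splits mentioned in the Task and Dataset paragraph: the paper defers its exact partitioning scheme to Appendix A.1, so the Dirichlet-based split below is a common stand-in rather than the authors' code.

```python
import numpy as np

def unbalanced_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices into K unbalanced, label-skewed partitions.

    NOTE: this Dirichlet allocation is an assumption; the authors' exact
    non-I.I.D. scheme is described in their Appendix A.1.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Dirichlet proportions control how skewed class c is across clients.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_indices[k].extend(part.tolist())
    return client_indices
```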
4.2 Result of Model Accuracy
For the standard experiments, we run on 16 clients and a GPU server for all datasets and models. Fig. 3 shows the test accuracy curves during training of the ResNet-56 model on the three datasets, including the results of centralized training, FedAvg, and FedGKT. We also summarize all numerical results for ResNet-56 and ResNet-110 in Table 1. In both the I.I.D. and non-I.I.D. settings, FedGKT obtains comparable or even better accuracy than FedAvg.
Hyperparameters. There are four important hyperparameters in our FedGKT framework: the number of communication rounds (line #2 of Algorithm 1), the edge-side epoch number, the server-side epoch number, and the server-side learning rate. After a tuning effort, we find that the edge-side epoch number can simply be 1. The server epoch number depends on the data distribution: for I.I.D. data the value is 20, while for non-I.I.D. data the value depends on the level of data bias. For I.I.D. data, the Adam optimizer [65] works better than SGD with momentum [64], while for non-I.I.D. data, SGD with momentum works better. During training, we reduce the learning rate once the accuracy has plateaued [68, 69]. We use the same data augmentation techniques for a fair comparison (random crop, random horizontal flip, and normalization). More details of the hyperparameters are described in Appendix B.4.
4.3 Efficiency Evaluation
To compare the computational demand of training, we count the number of FLOPs (floating-point operations) performed on the edge using prior methods [70, 71]. We report the results on CIFAR-100 in Fig. 4. Compared to the FedAvg baseline, the computational cost on the edge of our FedGKT (ResNet-8) is 9 times less than that of ResNet-56 and 17 times less than that of ResNet-110. The memory cost can be roughly compared via the model parameter counts: ResNet-8 has 11K parameters, which is 54 times fewer than ResNet-56 and 105 times fewer than ResNet-110. We also test the CPU running time per mini-batch (batch size 64) of forward-backward propagation on an Intel i7 CPU (which is considerably more powerful than current edge devices). The results show that ResNet-8 requires only 3% of ResNet-110's training time (30 ms vs. 950 ms).
To compare communication costs, we use SL [11, 12] as the baseline, which also exchanges hidden feature maps rather than the entire model. The communication cost is calculated using Eq. (12) and (13) in Appendix B.2, without data compression techniques. The results are shown in Fig. 5 (X-axis units: GBytes). FedGKT requires fewer feature-map exchanges with the server than SL.
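The parameter-count figures quoted above are easy to reproduce for any PyTorch model. A minimal sketch follows; note that the FLOPs profilers cited in [70, 71] are separate tools, and this only counts trainable weights.

```python
def count_parameters(model):
    """Number of trainable weights, a rough proxy for edge memory cost."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Usage with the illustrative modules sketched earlier, e.g.:
# print(count_parameters(EdgeExtractor()), count_parameters(ServerModel()))
```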
4.4 Ablation Study: Understanding FedGKT under Different Settings
The Effectiveness of Knowledge Transfer. Table 2 shows the results on the efficacy of using the distillation loss $\ell_{KD}$ in Eq. (7) and Eq. (6). We created a scenario in which both the client and the server use only $\ell_{CE}$ without $\ell_{KD}$ (labeled "None"). In this setting, the accuracy is low (e.g., 40%) or the training diverges (uniformly notated as "-/diverge"). In another scenario, only the clients use $\ell_{KD}$ to update their local models, but the server does not (noted as the single-directional transfer S->E). We observe that the transfer from the server to the edge is always helpful, while the bidirectional transfer (S<->E) is more effective as the dataset becomes more difficult (CIFAR-100).

Table 2: Ablation study on loss functions (test accuracy, %)
            CIFAR-10    CIFAR-100   CINIC-10
None        -/diverge   -/diverge   -/diverge
S->E        92.97       68.44       81.51
S<->E       90.53       69.57       80.01

Asynchronous Training. Since the server does not need to wait for updates from all clients to start training, FedGKT naturally supports asynchronous training. We present the experimental results in Table 3. The results show that asynchronous training does not negatively affect model accuracy. This demonstrates the advantage of our method over SL, in which every edge requires multiple synchronizations for each mini-batch iteration.
FedGKT with Different Numbers of Edges. To understand the scalability of FedGKT, we evaluate its performance with varying numbers of edge nodes. The test accuracy results are shown in Table 4. In general, adding more edge nodes does not negatively affect accuracy.

Table 4: FedGKT with different numbers of edge nodes (test accuracy, %)
# edges    8       16      64      128
FedGKT     69.51   69.57   69.65   69.59

Smaller Architectures. We test the performance of FedGKT using even smaller edge models on CIFAR-10: ResNet-4 and ResNet-6, which use one and two BasicBlock components (each containing two convolutional layers), respectively. The results are shown in Table 5. While reducing the edge model to ResNet-8 did not reduce accuracy, reducing the model size even more substantially does reduce the overall accuracy.
5 Discussion
Federated learning (FL) is an art of trade-offs among many aspects, including model accuracy, data privacy, computational efficiency, communication cost, and scalability. We recognize the challenge of developing a universal method that can address all of these problems; thus, we discuss some limitations of our method.
1. Privacy and robustness: [72] shows that federated learning can be backdoored. Although our work does not address the privacy concern, we believe existing methods such as differential privacy (DP) and multi-party computation (MPC) can defend data privacy against hidden-vector reconstruction attacks. Intuitively, exchanging hidden feature maps is safer than exchanging the model or gradient. Note that the hidden-map exchange happens during the training phase, which makes an attack more difficult because the attacker's access is limited to an evolving, partially trained feature map rather than the fully trained feature map that represents the raw data. Given that model and gradient exchange may also leak privacy, the lack of analysis and comparison of the degree of privacy leakage among these three settings (gradient, model, and hidden map) is the first limitation of our work.
2. Communication cost: compared to the entire model weights or gradients, the hidden vector is much smaller (e.g., the hidden vector size of ResNet-110 is around 64 KB, while the entire gradient/model size is 4.6 MB for 32x32 images); a back-of-the-envelope check of these numbers is sketched after this list. This observation also holds in high-resolution vision settings (e.g., for a 224x224 image, the hidden feature map is only about 1 MB, compared to roughly 100 MB for a large ResNet). Since the hidden vector for each data point can be transmitted independently, FedGKT has a smaller bandwidth requirement than gradient or model exchange. However, our proposed method has a potential drawback in that the total communication cost depends on the number of data points, although our experimental results demonstrate that our method has smaller communication costs than split learning because it needs fewer communication rounds for convergence. In settings where the sample number is extremely large and the image resolution is extremely high, both our method and split learning would have a high total communication cost.
3. Label deficiency: the proposed FedGKT only works for supervised learning. However, label deficiency is a practical problem that cannot be ignored. Many application cases do not have sufficient labels, since it is difficult to design mechanisms that incentivize users to label their private local data.
4. Scalability (a large number of clients): in the cross-device setting, we need to collaboratively train models with numerous smartphones (e.g., a client number as high as 1 million). One way to mitigate the scalability issue is to select clients in each round with a uniform sampling strategy [6]. We ran experiments under this setting but found that this sampling method requires many more rounds of training to converge. Even though the communication cost is acceptable, this sampling method is still imperfect in practice ([9] describes many constraints that a production system might face). We argue that uniform sampling may not be the best practice and that scalability is a common limitation of most existing works. In summary, we concede that our proposed method does not have an advantage in addressing the scalability challenge.
5. Model personalization: the final trained model under our FedGKT framework is a combination of the global server model and the client model, which is a potential way to help clients learn personalized models. For example, we could fine-tune the client model for several epochs to see whether the combination of such a personalized client model and the server model is more effective. We do not explicitly demonstrate this in our experiments, but we hope to explore this possibility in future work.
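As referenced in item 2 above, a quick sanity check of the quoted sizes. The 16-channel float32 feature-map shape and the ~1.15M-weight count are assumptions back-solved from the figures the authors quote, not values taken from the paper's appendix.

```python
# Back-of-the-envelope check of the sizes quoted in item 2.
bytes_per_float = 4                                  # float32
hidden_map = 16 * 32 * 32 * bytes_per_float          # assumed per-sample feature map
model = 1_150_000 * bytes_per_float                  # assumed full model/gradient size
print(f"hidden feature map: {hidden_map / 1024:.0f} KB")  # ~64 KB
print(f"model/gradient:     {model / 1e6:.1f} MB")        # ~4.6 MB
```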
6 Conclusion
In this work, to tackle the resource-constrained reality of edge devices, we reformulated FL as a group knowledge transfer (FedGKT) training algorithm. FedGKT can efficiently train small CNNs on the edges and periodically transfer their knowledge, by knowledge distillation, to a large-capacity server-side CNN. FedGKT achieves several advantages in a single framework: reduced demand for edge computation, lower communication cost for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. To simplify edge training, we also developed a distributed training system based on FedGKT. We evaluated FedGKT by training modern CNN architectures (ResNet-56 and ResNet-110) on three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain accuracy comparable to, or even slightly higher than, FedAvg. More importantly, FedGKT makes edge training affordable: compared to edge training with FedAvg, FedGKT costs 9 to 17 times less computational power (FLOPs) and requires 54 to 105 times fewer parameters.
Broader Impact
FedGKT can efficiently train large deep neural networks (CNNs) on resource-constrained edge devices (such as smartphones, IoT devices, and edge servers). Unlike past FL approaches, FedGKT demonstrates the feasibility of training a large server-side model by using many small client models. FedGKT preserves the data privacy requirements of the FL approach while also working within the constraints of an edge computing environment. Smartphone users may benefit from this technique because their private data is protected, and they may also simultaneously obtain a high-quality model service. Organizations such as hospitals and other non-profit entities with limited training resources can collaboratively train a large CNN model without revealing their datasets while achieving significant training cost savings.
1. What is the main contribution of the paper regarding training large CNNs on decentralized data?
2. What are the strengths and weaknesses of the proposed alternating minimization approach?
3. How does the reviewer assess the novelty and applicability of the approach in real-world scenarios?
4. What are the concerns regarding privacy in the proposed method?
5. How does the reviewer suggest improving the motivation and discussion of the paper?
6. What are the issues with the ablation study and experimental results presented in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper describes an alternating minimization approach to training large CNNs on decentralized data (multiple edge devices, each with a local dataset). The approach offers communication and local-computation advantages over previous approaches such as split learning and the FedAvg algorithm for federated learning.
Strengths
This is an interesting and relatively novel approach. The experiments, including the ablation studies, do a good job of demonstrating the value of the new approach. The hyperparameters are well documented. The communication and computation savings demonstrated are large.
Weaknesses
The pseudocode is critical to understanding the proposed approach precisely; however, notational inconsistencies make it considerably harder to understand. See the comments under "clarity" below.
For this work to be well motivated, more details need to be provided indicating the real-world scenarios in which it might be helpful and how the constraints or characteristics of those settings are addressed by the algorithm. For example, the description seems to imply that all clients participate in every round, which would rule out application to the cross-device FL setting (see [A], Table 1). Similarly, it is worth clarifying whether clients need to maintain state across rounds, which is typically also not possible in cross-device settings. How do the algorithms perform when, on each round, devices are sampled from a very large set? (See the comments below about optimizer state in particular.)
Update based on the author response: Thank you for addressing the above point. Please make sure this point is also addressed in the revised version, particularly by being explicit about client state and what assumptions need to hold for this approach to apply. In particular, I am not sure that the "pre-defined client selection strategy" sketched is practical from an infrastructure perspective. See, e.g., https://arxiv.org/abs/1902.01046, which describes the many constraints a production system might face that could limit the ability of groups of clients to participate repeatedly.
Privacy is one of the primary motivations for federated learning and other collaborative training techniques, and unfortunately it is not addressed. A key principle in FL specifically is using aggregates of user messages, which allows composability with other privacy-enhancing technologies, including secure aggregation (the server only sees the sum of updates, not any individual device's update) and differential privacy. See the definition of FL in [A], as well as Sec. 4. Unfortunately, the feature maps H^k and labels Y^k might reveal significant private information about client k (in fact, the whole point seems to be that H^k preserves as much semantic information about the examples as possible).
It would be preferable to test the algorithm on standard benchmark datasets; for example, [B] proposes a natural non-IID partitioning of CIFAR-10 and gives strong baselines for FL on this dataset. Using such a standard federated dataset would make comparison to the baselines in this work possible.
It is implied that there are significant communication savings relative to FedAvg, but this comparison has not been done explicitly (either via a table showing the communication cost per round for each algorithm, or by using total bytes communicated as the x-axis in, e.g., Fig. 3).
It is important that the ablation study is included, though currently it is not clear whether the ablation study was run on the IID or non-IID versions of the datasets; please report results for both. These experiments also highlight two points that deserve further attention. First, "diverges" for the problem without the KT loss functions is unconvincing (I assume this is equivalent to solving Eqs. (4)-(5) and then (2) and (3), but this should be clarified); I would expect that with an appropriately chosen optimizer and learning rate you should get some results from this approach. Second, the differences between "Only S->E" and "Both" are small, and it is not clear whether they are significant (were the results compared across multiple different randomized initializations, for example?). The takeaway seems to be that S->E is what matters, and the early parts of the paper should be written with this in mind (for example, when the KT losses are introduced, it would be good to mention this result already).
[A] Advances and Open Problems in Federated Learning. https://arxiv.org/abs/1912.04977
[B] Adaptive Federated Optimization. https://arxiv.org/abs/2003.00295
NIPS
Title Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge Abstract Scaling up the convolutional neural network (CNN) size (e.g., width, depth, etc.) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL due to its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT designs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable. Compared to the edge training using FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai). 1 Introduction The size of convolutional neural networks (CNN) matters. As seen in both manually designed neural architectures (ResNet [1]) and automated architectures discovered by neural architecture search (DARTS [2], MiLeNAS [3], EfficientNets [4]), scaling up CNN size (e.g., width, depth, etc.) is known to be an effective approach for improving model accuracy. Unfortunately, training large CNNs is challenging for resource-constrained edge devices (e.g., smartphones, IoT devices, and edge servers). The demand for edge-based training is increasing as evinced by a recent surge of interest in Federated Learning (FL) [5]. FL is a distributed learning paradigm that can collaboratively train a global model for many edge devices without centralizing any device’s dataset [6, 7, 8]. FL can boost model accuracy in situations when a single organization or user does not have sufficient or relevant data. Consequently, many FL services have been deployed commercially. For instance, Google has improved the accuracy of item ranking and language models on Android smartphones by using FL [9]. FL is also a promising solution when data centralization is undesirable or infeasible due to privacy and regulatory constraints [5]. However, one significant impediment in edge training is the gap between the computational demand of large CNNs and the meager computational power on edge devices. FL approaches, such as FedAvg [6] can reduce communication frequency by local SGD and model averaging [10], but they only evaluate the convergence property on small CNNs, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. or assume the client has enough computational power with GPUs to train large CNNs, which is unrealistic in a real-world system. 
To tackle the computational limitation of edge nodes, model parallelism-based split learning (SL) [11, 12] partitions a large model and offloads some portion of the neural architecture to the cloud, but SL has a severe straggler problem because a single mini-batch iteration requires multiple rounds of communication between the server and edges. In this paper, we propose Group Knowledge Transfer (FedGKT), an efficient federated learning framework for resource-constrained edge devices. FedGKT aims to incorporate benefits from both FedAvg [6] and SL [11, 12] by training using local SGD as in FedAvg but also placing low compute demand at the edge as in SL. FedGKT can transfer knowledge from many compact CNNs trained at the edge to a large CNN trained at a cloud server. The essence of FedGKT is that it reformulates FL as an alternating minimization (AM) approach [13, 14, 15, 16, 17, 18], which optimizes two random variables (the edge model and the server model) by alternatively fixing one and optimizing another. Under this reformulation, FedGKT not only boosts training CNNs at the edge but also contributes to the development of a new knowledge distillation (KD) paradigm, group knowledge transfer, to boost the performance of the server model. Fig. 1(a) provides an overview of FedGKT. The compact CNN on the edge device consists of a lightweight feature extractor and classifier that can be trained efficiently using its private data (1 - local training). After local training, all the edge nodes agree to generate exactly the same tensor dimensions as an output from the feature extractor. The larger server model is trained by taking features extracted from the edge-side model as inputs to the model, and then uses KD-based loss function that can minimize the gap between the ground truth and soft label (probabilistic prediction in KD [19, 20, 21, 22]) predicted from the edge-side model (2 - periodic transfer). To boost the edge model’s performance, the server sends its predicted soft labels to the edge, then the edge also trains its local dataset with a KD-based loss function using server-side soft labels (3 - transfer back). Thus, the server’s performance is essentially boosted by knowledge transferred from the edge models and vice-versa. Once the training is complete, the final model is a combination of its local feature extractor and shared server model (4 - edge-sided model). The primary trade-off is that FedGKT shifts the computing burden from edge devices to the powerful server. FedGKT unifies multiple advantages into a single framework: 1. FedGKT is memory and computation efficient, similar to SL; 2. FedGKT can train in a local SGD manner like FedAvg to reduce the communication frequency; 3. Exchanging hidden features as in SL, as opposed to exchanging the entire model as in FedAvg, reduces the communication bandwidth requirement. 4. FedGKT naturally supports asynchronous training, which circumvents the severe synchronization issue in SL. The server model can immediately start training when it receives inputs from any client. We develop FedGKT based on the FedML research library [23] and comprehensively evaluate FedGKT using edge and server CNNs designed based on ResNet [1] (as shown in Fig. 1(b)). We train on three datasets with varying training difficulties (CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]) and their non-I.I.D. (non identical and independent distribution) variants. As for the model accuracy, our results on both I.I.D. and non-I.I.D. 
datasets show that FedGKT can obtain accuracy comparable to FedAvg [6]. More importantly, FedGKT makes edge training affordable. Compared to FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. To understand FedGKT comprehensively, asynchronous training and ablation studies are performed. Some limitations are also discussed. 2 Related Works Federated Learning. Existing FL methods such as FedAvg [6], FedOpt [26], and FedMA [8] face significant hurdles in training large CNNs on resource-constrained devices. Recent works FedNAS [27, 3] and [28] work on large CNNs, but they rely on GPU training to complete the evaluations. Others [29, 30, 31, 32, 33, 34, 35, 36, 37] optimize the communication cost without considering edge computational limitations. Model parallelism-based split learning [11, 12] attempts to break the computational constraint, but it requires frequent communication with the server. Knowledge Distillation (KD). We use KD [19] in a different manner from existing and concurrent works [38, 39, 40, 41, 42, 43, 44, 45]. Previous works only consider transferring knowledge from a large network to a smaller one [19, 20, 21, 22], or they transfer knowledge from a group, but each member in the group shares the same large model architecture or a large portion of the neural architecture with specific tail or head layers [46, 47, 48, 49, 50, 51]. Moreover, all teachers and students in distillation share the same dataset [50, 52, 53, 54], while in our setting each member (client) can only access its own independent dataset. Previous methods use centralized training, but we utilize an alternating training method. Efficient On-device Deep Learning. Our work also relates to efficient deep learning on edge devices, such as model compression [55, 56, 57], manually designed architectures (MobileNets [58], ShuffeNets [59], SqueezeNets [60]), or even efficient neural architecture search (EfficientNets [4], FBNet [61]). However, all of these techniques are tailored for the inference phase rather than the training phase. 3 Group Knowledge Transfer 3.1 Preliminary We aim to collaboratively train large convolutional neural networks (e.g., ResNet) on many resourceconstrained devices that are not equipped with GPU accelerators, without centralizing each device’s dataset to the server side. We specifically consider supervised learning with C categories in the entire dataset D. We assume that there are K clients (edge devices) in the network. The kth node has its own dataset Dk := {( Xki , yi )}N(k) i=1 , where Xi is the ith training sample, yi is the corresponding label of Xi, yi ∈ {1, 2, . . . , C} (a multi-classification learning task), and N (k) is the sample number in dataset Dk. D = {D1,D2, ...,Dk}, N = ∑K k=1N (k). In general, we can formulate CNN-based federated learning as a distributed optimization problem: min W F (W ) def = min W K∑ k=1 N (k) N · f (k)(W ),where f (k)(W ) = 1 N (k) N(k)∑ i=1 `(W ;Xi, yi) (1) where W represents the network weight of a global CNN in each client. f (k)(W ) is the kth client’s local objective function that measures the local empirical risk over the heterogeneous dataset Dk. ` is the loss function of the global CNN model. 
Most off-the-shelf federated optimization methods (e.g., FedAvg [6], FedProx [62], FedNova [63], and FedOpt [26]) propose to solve objective function (1) with variant local SGD [10] optimization methods for communication-efficient training and demonstrate their characteristics with experiments on linear models (logistic regression) or shallow neural networks (2 convolutional layers). However, as shown in Fig. 2(a), the main drawback is that these methods cannot train large CNN at the resource-constrained edge devices due to lack of GPU accelerators and sufficient memory. Model parallelism-based split learning [11, 12], as shown in Fig. 2(b), attempts to break the computational constraint by splitting W into two portions and offloading the larger portion into the server-side, but a single mini-batch iteration requires remote forward propagation and backpropagation. For edge computing, such a highly frequent synchronization mechanism may lead to the severe straggler problem that significantly slows down the training process. 3.2 Reformulation Non-convex Optimization. To solve the resource-constrained problem in existing FL, we reconsider another methodology to solve the FL optimization problem. As illustrated in Fig. 2(c), we divide the global CNN W in Eq. (1) into two partitions: a small feature extractor model W e and a large-scale server-side model W s, and put them on the edge and the server, respectively. We also add a classifier W c for W e to create a small but fully trainable model on the edge. Consequently, we reformulate a single global model optimization into an non-convex optimization problem that requires us to solve the server model Fs and the edge model Fc simultaneously. Our reformulation is as follows: argmin W s Fs(W s,W ∗ e) = argmin W s K∑ k=1 N(k)∑ i=1 `s ( fs(W s;H (k) i ), y (k) i ) (2) subject to: H(k)i = f (k) e (W (k) e ;X (k) i ) (3) argmin (W (k) e ,W (k) c ) Fc(W (k) e ,W (k) c ) = argmin (W (k) e ,W (k) c ) N(k)∑ i=1 `c ( f (k) ( (W (k)e ,W (k) c );X (k) i ) , y (k) i ) (4) = argmin (W (k) e ,W (k) c ) N(k)∑ i=1 `c ( f (k)c (W (k) c ; f (k) e (W (k) e ;X (k) i︸ ︷︷ ︸ H (k) i )), y (k) i ) (5) Where `s and `c are general loss functions for the server model and the edge model, respectively. fs is the server model, and f (k) is the edge-side model which consists of feature extractor f (k)e followed by a classifier f (k)c . W s, W (k)e , W (k) c are the network weights of fs, f (k) e , f (k) c , respectively. H (k) i is i-th sample’s feature map (a hidden vector or tensor) output by feature extractor f (k)e (Eq. (3)). Note that Eq. (5) can be solved independently on each client. The kth client model f (k) is trained on its local dataset (Eq. (5)), while the server model fs is trained using H (k) i as input features (Eq. (2)). During the inference phase, the final trained model architecture for client k is stacked by the architecture of the feature extractor f (k)e and the architecture of the server model fs. In practice, the client can either run offline inference by downloading the server model fs and using it locally or perform online inference through a network connection with the server. Advantages and Challenges. The core advantage of the above reformulation is that when we assume the model size of f (k) is multiple orders of magnitude smaller than that of fs, the edge training is affordable. 
Moreover, as discussed in [11, 12], for large CNN training, the communication bandwidth for transferring H(k)i to the server is substantially less than communicating all model parameters as is done in traditional federated learning. Conversely, we also observe the difficulty of the reformulated optimization problem. First, each client is expected to adequately solve the inner optimization (Eq. (5)). Namely, each client should train its feature extractor f (k)e well to ensure that Eq. (3) can accurately generate H (k) i for any given input image. However, in the FL setting, the dataset on each edge device is small and thus may be inadequate in training a CNN-based feature extractor solely based on the local dataset. In addition, the outer optimization Eq. (2) and inter optimization Eq. (5) are correlated: Eq. (2) relies on the quality of H(k)i which is optimized by Eq. (5). This correlation further makes the outer optimization Eq. (2) difficult to converge if the individual client-side feature extractors f (k)e are not trained adequately. 3.3 Group Knowledge Transfer (FedGKT) Scaling Edge Dataset Limitations with Knowledge Transfer. Given the above challenges, we incorporate knowledge distillation loss into the optimization equations to circumvent the optimization difficulty. The intuition is that knowledge transferred from the the server model can boost the optimization on the edge (Eq. (5)). As such, we propose to transfer group knowledge bidirectionally. The server CNN absorbs the knowledge from many edges, and an individual edge CNN obtains enhanced knowledge from the server CNN. To be more specific, in Eq. (2) and (5), we design `s and `c as follows. `s = `CE + K∑ k=1 `KD ( zs, z (k) c ) = `CE + K∑ k=1 DKL (pk‖ps) (6) `(k)c = `CE + `KD ( zs, z (k) c ) = `CE +DKL (ps‖pk) (7) `CE is the cross-entropy loss between the predicted values and the ground truth labels. DKL is the Kullback Leibler (KL) Divergence function that serves as a term in the loss function `s and `c to transfer knowledge from a network to another. pik = exp(z(k,i)c /T)∑C i=1 exp ( z (k,i) c /T ) and pis = exp(zis/T)∑C i=1 exp(z i s/T ) . They are the probabilistic prediction of the edge model f (k) and the server model fs, respectively. They are calculated with the softmax of logits z. The logit zs and z (k) c are the output of the last fully connected layer in the server model and the client model, respectively. T is the temperature hyperparameter of the softmax function. Intuitively, the KL divergence loss attempts to bring the soft label and the ground truth close to each other. In doing so, the server model absorbs the knowledge gained from each of the edge models. Similarly, the edge models attempt to bring their predictions closer to the server model’s prediction and thereby absorb the server model knowledge to improve their feature extraction capability. Improved Alternating Minimization. After plugging Eq. (6) and (7) into our reformulation (Eq. 
Improved Alternating Minimization. After plugging Eq. (6) and (7) into our reformulation (Eq. (2) and (5)), we propose a variant of Alternating Minimization (AM) [13, 14, 15, 16, 17, 18] to solve the reformulated optimization problem as follows:

    argmin_{W_s} F_s(W_s, W_e^(k)*) = argmin_{W_s} Σ_{k=1}^{K} Σ_{i=1}^{N^(k)} ℓ_CE( f_s(W_s; H_i^(k)), y_i^(k) ) + Σ_{k=1}^{K} ℓ_KD( z_c^(k)*, z_s )     (8)

    where z_c^(k)* = f_c^(k)(W_c^(k); f_e^(k)(W_e^(k)*; X_i^(k))),  z_s = f_s(W_s; H_i^(k)),  and  H_i^(k) = f_e^(k)(W_e^(k)*; X_i^(k)),                (9)

    argmin_{W^(k)} F_c(W_s^*, W^(k)) = argmin_{W^(k)} Σ_{i=1}^{N^(k)} ℓ_CE( f_c^(k)(W_c^(k); f_e^(k)(W_e^(k); X_i^(k))), y_i^(k) ) + ℓ_KD( z_s^*, z_c^(k) )   (10)

    where z_c^(k) = f_c^(k)(W_c^(k); f_e^(k)(W_e^(k); X_i^(k))),  z_s^* = f_s(W_s^*; H_i^(k)),  and  H_i^(k) = f_e^(k)(W_e^(k); X_i^(k)),                (11)

where the ∗ superscript indicates that the corresponding variables are held fixed during the optimization, and W^(k) is the combination of W_e^(k) and W_c^(k). AM is a standard technique in convex and non-convex optimization theory and practice that optimizes two sets of variables alternately. In Eq. (8), we fix W^(k) and optimize (train) W_s for several epochs, and then we switch to Eq. (10) to fix W_s and optimize W^(k) for several epochs. This alternation between Eq. (8) and (10) continues over many rounds until convergence.

Key Insight. The essence of our reformulation is that the alternating minimization (Eq. (8) and Eq. (10)) uses knowledge distillation across all edges to simplify the optimization, which mitigates the limited-dataset problem on each edge in federated learning. In particular, we achieve this with a local cross-entropy loss computed only from the ground truth and the model output, plus a second loss using the KL divergence between edges and the server, which effectively captures the contribution (knowledge) of multiple client datasets. Moreover, each minimization subproblem can be solved with SGD and its variants (e.g., SGD with momentum [64], ADAM [65, 66]).

Algorithm 1 Group Knowledge Transfer. The subscripts s and k stand for the server and the k-th edge, respectively. E is the number of local epochs; T is the number of communication rounds; η is the learning rate; X^(k) represents the input images at edge k; H^(k) is the feature map extracted from X^(k); Z_c^(k) and Z_s^(k) are the logit tensors from the client and the server, respectively.

     1: ServerExecute():
     2:   for each round t = 1, 2, ..., T do
     3:     for each client k in parallel do
     4:       // the server broadcasts Z_s^(k) to the client
     5:       H^(k), Z_c^(k), Y^(k) <- ClientTrain(k, Z_s^(k))
     6:     Z_s <- empty dictionary
     7:     for each local epoch i from 1 to E_s do
     8:       for each client k do
     9:         for idx, b in {H^(k), Z_c^(k), Y^(k)} do
    10:           W_s <- W_s - η_s ∇ℓ_s(W_s; b)
    11:           if i == E_s then
    12:             Z_s^(k)[idx] <- f_s(W_s; h^(k))
    13:     // illustrated as "transfer back" in Fig. 1(a)
    14:     for each client k in parallel do
    15:       send the server logits Z_s^(k) to client k
    16:
    17: ClientTrain(k, Z_s^(k)):
    18:   // illustrated as "local training" in Fig. 1(a)
    19:   for each local epoch i from 1 to E_c do
    20:     for batch b in {X^(k), Z_s^(k), Y^(k)} do
    21:       // ℓ_c^(k) is computed using Eq. (7)
    22:       W^(k) <- W^(k) - η_k ∇ℓ_c^(k)(W^(k); b)
    23:   // extract features and logits
    24:   H^(k), Z_c^(k) <- empty dictionaries
    25:   for idx, batch x^(k), y^(k) in {X^(k), Y^(k)} do
    26:     h^(k) <- f_e^(k)(W_e^(k); x^(k))
    27:     z_c^(k) <- f_c^(k)(W_c^(k); h^(k))
    28:     H^(k)[idx] <- h^(k)
    29:     Z_c^(k)[idx] <- z_c^(k)
    30:   return H^(k), Z_c^(k), Y^(k) to the server

Training Algorithm. To elaborate, we illustrate the alternating training algorithm FedGKT in Fig. 1(a) and summarize it as Algorithm 1.
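For readers who prefer code, here is a compressed, single-process Python sketch of one communication round of Algorithm 1, reusing the edge_loss and server_loss helpers from the sketch above. The client dictionary fields (extractor, classifier, opt, loader, loader_with_server_logits) are hypothetical names introduced for illustration; a real deployment would exchange the cached tensors over the network.

```python
import torch

def fedgkt_round(server, clients, server_opt, Es=1, Ec=1):
    """One simplified communication round of Algorithm 1. Each client is a
    dict holding its extractor, classifier, optimizer, and data loaders."""
    cache = []                                   # (H, Z_c, Y) per client
    for c in clients:                            # ClientTrain(k, Z_s^(k))
        for _ in range(Ec):
            for x, y, z_s in c["loader_with_server_logits"]():
                c["opt"].zero_grad()
                z_c = c["classifier"](c["extractor"](x))
                edge_loss(z_c, z_s, y).backward()        # Eq. (7)
                c["opt"].step()
        with torch.no_grad():                    # extract features and logits
            H, Zc, Y = [], [], []
            for x, y in c["loader"]():
                h = c["extractor"](x)
                H.append(h); Zc.append(c["classifier"](h)); Y.append(y)
        cache.append((H, Zc, Y))
    for _ in range(Es):                          # ServerExecute()
        for H, Zc, Y in cache:
            for h, z_c, y in zip(H, Zc, Y):
                server_opt.zero_grad()
                server_loss(server(h), z_c, y).backward()   # Eq. (6)
                server_opt.step()
    with torch.no_grad():                        # "transfer back" Z_s^(k)
        for c, (H, _, _) in zip(clients, cache):
            c["server_logits"] = [server(h) for h in H]
```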
During each round of training, the client uses local SGD for several epochs and then sends the extracted feature maps and the related logits to the server. When the server receives the extracted features and logits from each client, it trains the much larger server-side CNN. The server then sends its global logits back to each client. This process iterates over multiple rounds, and during each round the knowledge of all clients is transferred to the server model and vice versa. For the FedGKT training framework, the remaining step is to design specific neural architectures for the client model and the server model. To evaluate the effectiveness of FedGKT, we design CNN architectures based on ResNet [1], shown in Fig. 1(b). More details can be found in Appendix B.3.

4 Experiments

4.1 Experimental Setup

Implementation. We develop the FedGKT training framework based on FedML [23], an open-source federated learning research library that simplifies new algorithm development and deployment in a distributed computing environment. Our server node has 4 NVIDIA RTX 2080Ti GPUs with sufficient GPU memory for large model training. We use several CPU-based nodes as clients for training the small CNNs.

Task and Dataset. Our training task is image classification on CIFAR-10 [24], CIFAR-100 [24], and CINIC-10 [25]. We also generate their non-I.I.D. variants by splitting the training samples into K unbalanced partitions. Details of these three datasets are introduced in Appendix A.1. The test images are used for a global test after each round. For the different methods, we record the top-1 test accuracy as the metric to compare model performance. Note that we do not use the LEAF [67] benchmark datasets, because the benchmark models provided are tiny (CNNs with only two convolutional layers) or the datasets they contain are too easy for modern CNNs (e.g., Federated EMNIST), making them unable to adequately evaluate our algorithm on large CNN models. Compared to LEAF, the FedML [23] benchmark supports CIFAR-10, CIFAR-100, and CINIC-10 (which contains images from ImageNet).

Baselines. We compare FedGKT with the state-of-the-art FL method FedAvg [6] and with a centralized training approach. The split learning-based method [11, 12] is used to compare communication cost. Note that we do not compare with FedProx [62], because it performs worse than FedAvg in the large-CNN setting, as demonstrated in [8]. We also do not compare with FedMA [8], because it cannot work on modern DNNs that contain batch normalization layers (e.g., ResNet).

Model Architectures. Two modern CNN architectures are evaluated: ResNet-56 and ResNet-110 [1]. The baseline FedAvg requires all edge nodes to train using these two CNNs. For FedGKT, the edge- and server-side models are designed based on these two CNNs. On the edges, we design a tiny CNN architecture called ResNet-8, a compact CNN containing 8 convolutional layers (described in Fig. 1(b) and Table 7 in the Appendix). The server-side model architectures are ResNet-55 and ResNet-109 (Tables 8 and 9 in the Appendix), whose input dimensions match the output of the edge-side feature extractor. For split learning, we use the extractor in ResNet-8 as the edge-side partition of the CNN, while the server-side partitions are again ResNet-55 and ResNet-109.

4.2 Results on Model Accuracy

For the standard experiments, we run on 16 clients and a GPU server for all datasets and models. Fig. 3 shows the test accuracy curves during training of the ResNet-56 model on the 3 datasets.
It includes the results of centralized training, FedAvg, and FedGKT. We also summarize all numerical results for ResNet-56 and ResNet-110 in Table 1. In both the I.I.D. and non-I.I.D. settings, FedGKT obtains comparable or even better accuracy than FedAvg.

Hyperparameters. There are four important hyperparameters in our FedGKT framework: the number of communication rounds (line #2 of Algorithm 1), the edge-side epoch number, the server-side epoch number, and the server-side learning rate. After a tuning effort, we find that the edge-side epoch number can simply be 1. The server-side epoch number depends on the data distribution: for I.I.D. data the value is 20, while for non-I.I.D. data it depends on the level of data bias. For I.I.D. data, the Adam optimizer [65] works better than SGD with momentum [64], while for non-I.I.D. data, SGD with momentum works better. During training, we reduce the learning rate once the accuracy has plateaued [68, 69]. We use the same data augmentation techniques for a fair comparison (random crop, random horizontal flip, and normalization). More details on hyperparameters are described in Appendix B.4.

4.3 Efficiency Evaluation

To compare the computational demands of training, we count the number of FLOPs (floating-point operations) performed on the edge using prior methods [70, 71]. We report the result on CIFAR-100 in Fig. 4. Compared to the FedAvg baseline, the edge-side computational cost of our FedGKT (ResNet-8) is 9 times less than that of ResNet-56 and 17 times less than that of ResNet-110. The memory cost can be roughly compared via the number of model parameters: ResNet-8 has 11K parameters, 54 times fewer than ResNet-56 and 105 times fewer than ResNet-110. We also test the CPU running time of a forward-backward propagation per mini-batch (batch size 64) on an Intel i7 CPU (which is more powerful than current edge devices). The results show that ResNet-8 requires only 3% of ResNet-110's training time (30 ms vs. 950 ms). To compare communication costs, we use SL [11, 12] as the baseline, which also exchanges hidden feature maps rather than the entire model. The communication cost is calculated using Eq. (12) and (13) in Appendix B.2, without data compression techniques. The results are shown in Fig. 5 (X-axis unit: GBytes). FedGKT requires fewer feature-map exchanges with the server than SL.

4.4 Ablation Study: Understanding FedGKT under Different Settings

Table 2: Ablation Study on Loss Functions

                CIFAR-10    CIFAR-100   CINIC-10
    None        -/diverge   -/diverge   -/diverge
    S->E        92.97       68.44       81.51
    S<->E       90.53       69.57       80.01

The Effectiveness of Knowledge Transfer. Table 2 shows the results on the efficacy of the distillation loss ℓ_KD in Eq. (6) and Eq. (7). We first created a scenario in which both the client and the server use only ℓ_CE without ℓ_KD (labeled None). In this setting, the accuracy is low (e.g., 40%) or the training diverges (uniformly denoted "-/diverge"). In another scenario, only the clients use ℓ_KD to update their local models, but the server does not (denoted single-directional transfer S->E). We observe that the transfer from the server to the edge is always helpful, while the bidirectional transfer (S<->E) is more effective as the dataset becomes more difficult (CIFAR-100).

Asynchronous Training. Since the server does not need to wait for updates from all clients to start training, FedGKT naturally supports asynchronous training. We present the experimental results in Table 3.
The results show that asynchronous training does not negatively affect model accuracy. This demonstrates the advantage of our method over SL, in which every edge requires multiple synchronizations for each mini-batch iteration.

Table 4: FedGKT with Different # of Edges

              8        16       64       128
    FedGKT    69.51    69.57    69.65    69.59

FedGKT with Different Edge Numbers. To understand the scalability of FedGKT, we evaluate its performance with varying numbers of edge nodes. The test accuracy results are shown in Table 4. In general, adding more edge nodes does not negatively affect accuracy.

Smaller Architectures. We test the performance of FedGKT using even smaller edge models, ResNet-4 and ResNet-6, on CIFAR-10. ResNet-4 and ResNet-6 use one and two BasicBlock components (each containing two convolutional layers), respectively. The results are shown in Table 5. While reducing the edge model size to ResNet-8 did not reduce accuracy, reducing the model size even more substantially does reduce the overall accuracy.

5 Discussion

Federated learning (FL) is an art of trade-offs among many aspects, including model accuracy, data privacy, computational efficiency, communication cost, and scalability. We recognize the challenges of developing a universal method that can address all of these problems; thus, we discuss some limitations of our method.

1. Privacy and robustness: [72] shows that federated learning can be backdoored. Although our work does not address this privacy concern, we believe existing methods such as differential privacy (DP) and multi-party computation (MPC) can defend against hidden-vector reconstruction attacks. Intuitively, exchanging hidden feature maps is safer than exchanging the model or gradients. Note that the hidden-map exchange happens during the training phase. This makes the attack more difficult, because the attacker only has access to an evolving, partially trained feature map rather than a fully trained feature map that represents the raw data. Given that model and gradient exchange may also leak privacy, the lack of analysis and comparison of the degree of privacy leakage among these three settings (gradient, model, and hidden map) is the first limitation of our work.

2. Communication cost: compared to the entire model weights or gradients, the hidden vector is definitely much smaller (e.g., the hidden vector of ResNet-110 is around 64 KB, while the entire gradient/model is 4.6 MB for 32x32 images). Even in high-resolution vision settings, this observation holds (e.g., for 224x224 images, the hidden feature map is only about 1 MB, compared to roughly 100 MB for the ResNet model). Since the hidden vector for each data point can be transmitted independently, FedGKT has a smaller bandwidth requirement than gradient or model exchange. However, our proposed method has a potential drawback in that the total communication cost depends on the number of data points, although our experimental results demonstrate that our method has a smaller communication cost than split learning because it needs fewer communication rounds to converge. In settings where the number of samples is extremely large and the image resolution is extremely high, both our method and split learning would have a high total communication cost.

3. Label deficiency: the proposed FedGKT only works for supervised learning. However, label deficiency is a practical problem that cannot be ignored.
Many applications do not have sufficient labels, since it is difficult to design mechanisms that incentivize users to label their private local data.

4. Scalability (a large number of clients): in the cross-device setting, we need to collaboratively train models with numerous smartphones (e.g., a client number as high as 1 million). One way to mitigate the scalability issue is to select clients in each round with a uniform sampling strategy [6]. We ran experiments under this setting but found that this sampling method requires many more rounds of training to converge. Even though the communication cost is acceptable, this sampling method is still imperfect in practice ([9] describes many constraints that a production system may face). We argue that uniform sampling may not be best practice and that scalability is a common limitation of most existing works. In summary, we concede that our proposed method does not have an advantage in addressing the scalability challenge.

5. Model personalization: the final trained model under our FedGKT framework is a combination of the global server model and the client model, which is a potential way to help clients learn personalized models. For example, we could fine-tune the client model for several epochs to see whether the combination of such a personalized client model and the server model is more effective. We do not explicitly demonstrate this in our experiments, but we hope to explore this possibility in future work.

6 Conclusion

In this work, to address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer (FedGKT) training algorithm. FedGKT can efficiently train small CNNs on edges and periodically transfer their knowledge, via knowledge distillation, to a large-capacity server-side CNN. FedGKT achieves several advantages in a single framework: reduced demand for edge computation, lower communication cost for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FL. To simplify edge training, we also develop a distributed training system based on FedGKT. We evaluate FedGKT by training modern CNN architectures (ResNet-56 and ResNet-110) on three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy. More importantly, FedGKT makes edge training affordable: compared to edge training with FedAvg, FedGKT requires 9 to 17 times less computation (FLOPs) and 54 to 105 times fewer parameters.

Broader Impact

FedGKT can efficiently train large deep neural networks (CNNs) on resource-constrained edge devices (such as smartphones, IoT devices, and edge servers). Unlike past FL approaches, FedGKT demonstrates the feasibility of training a large server-side model by using many small client models. FedGKT preserves the data privacy requirements of the FL approach while also working within the constraints of an edge computing environment. Smartphone users may benefit from this technique because their private data is protected, and they may simultaneously obtain a high-quality model service. Organizations such as hospitals and other non-profit entities with limited training resources can collaboratively train a large CNN model without revealing their datasets while achieving significant training cost savings.
They can also meet requirements regarding the protection of intellectual property, confidentiality, regulatory restrictions, and legal constraints. As for the potential risks of our method, a client may maliciously send incorrect hidden feature maps and soft labels to the server, which may degrade the overall model accuracy. Such effects must be detected and addressed to maintain overall system stability. Second, the relative benefit for each client may vary. For instance, in terms of fairness, edge nodes with smaller datasets may obtain more model accuracy improvement from collaborative training than those with larger amounts of training data. Our training framework does not consider how to balance the interests of the different parties.

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001117C0053 and FA8750-19-2-1005, ARO award W911NF1810400, NSF grants CCF-1703575 and CCF-1763673, and ONR Award No. N00014-16-12189. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths of the proposed approach, particularly in terms of privacy preservation and affordability?
3. What are the weaknesses of the paper, especially regarding data privacy and communication costs?
4. Do you have any concerns about the method's ability to protect sensitive information?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions

The paper reformulates federated learning (FL) as a group knowledge transfer (GKT) training algorithm. The authors can train small CNNs on clients and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. They evaluate GKT by training ResNet-56 and ResNet-110 on CIFAR-10, CIFAR-100, and CINIC-10 and their non-IID variants. The experimental results show that GKT can achieve comparable or slightly higher accuracy.

Update: I have read the authors' rebuttal and the other reviews. I acknowledge the authors for addressing my review. I have increased my score in light of the authors' rebuttal and the reviewer discussion. I hope the authors add the corresponding explanations to the revised manuscript.

Strengths

- The authors claim that GKT makes edge training affordable and preserves the data privacy requirements of the FL approach.

Weaknesses

However, there are two main concerns:

- In Algorithm 1, ClientTrain has to send features and logits to the server. [1] shows that even gradients may cause deep leakage, i.e., the private training set can be recovered from publicly shared gradients. The features and logits contain much richer information than the gradients. Thus, this work has a serious privacy problem, and the motivation of this work, applying knowledge distillation to FL, is not convincing.
- The authors mention that GKT requires lower communication cost. Sending features and logits to the server may not always save communication cost, because it depends on the sizes of the features and logits.

[1] Zhu, Ligeng, Zhijian Liu, and Song Han. "Deep leakage from gradients." Advances in Neural Information Processing Systems. 2019.
Title
Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables

Abstract
We present a new concentration of measure inequality for sums of independent bounded random variables, which we name a split-kl inequality. The inequality is particularly well-suited for ternary random variables, which naturally show up in a variety of problems, including analysis of excess losses in classification, analysis of weighted majority votes, and learning with abstention. We demonstrate that for ternary random variables the inequality is simultaneously competitive with the kl inequality, the Empirical Bernstein inequality, and the Unexpected Bernstein inequality, and in certain regimes outperforms all of them. It resolves an open question by Tolstikhin and Seldin [2013] and Mhammedi et al. [2019] on how to simultaneously match the combinatorial power of the kl inequality when the distribution happens to be close to binary and the power of Bernstein inequalities to exploit low variance when the probability mass is concentrated on the middle value. We also derive a PAC-Bayes-split-kl inequality and compare it with the PAC-Bayes-kl, PAC-Bayes-Empirical-Bennett, and PAC-Bayes-Unexpected-Bernstein inequalities in an analysis of excess losses and in an analysis of a weighted majority vote for several UCI datasets. Last, but not least, our study provides the first direct comparison of the Empirical Bernstein and Unexpected Bernstein inequalities and their PAC-Bayes extensions.

1 Introduction

Concentration of measure inequalities for sums of independent random variables are the most fundamental analysis tools in statistics and many other domains [Boucheron et al., 2013]. Their history stretches almost a century back, and inequalities such as Hoeffding's [Hoeffding, 1963] and Bernstein's [Bernstein, 1946] are the main workhorses of learning theory. For binary random variables, one of the tightest concentration of measure inequalities is the kl inequality [Maurer, 2004, Langford, 2005, Foong et al., 2021, 2022], which is based on combinatorial properties of a sum of n independent random variables.¹ However, while being extremely tight for binary random variables and applicable to any bounded random variables, the kl inequality is not necessarily a good choice for sums of bounded random variables that can take more than two values. In the latter case, the Empirical Bernstein [Mnih et al., 2008, Audibert et al., 2009, Maurer and Pontil, 2009] and the Unexpected Bernstein [Cesa-Bianchi et al., 2007, Mhammedi et al., 2019] inequalities can be significantly tighter due to their ability to exploit low variance, as shown by Tolstikhin and Seldin [2013]. However, the Empirical and Unexpected Bernstein inequalities are loose for binary random variables [Tolstikhin and Seldin, 2013].

¹ The Binomial tail bound is slightly tighter, but it does not extend to the PAC-Bayes setting [Langford, 2005]. Our split-kl approach can be directly applied to obtain a "split-Binomial-tail" inequality.

The challenge of exploiting low variance and, at the same time, matching the tightness of the kl inequality if a distribution happens to be close to binary was faced by multiple prior works [Tolstikhin and Seldin, 2013, Mhammedi et al., 2019, Wu et al., 2021], but remained an open question. We resolve this question for the case of ternary random variables. Such random variables appear in a variety of applications, and we illustrate two of them.
One is the study of excess losses, which are differences between the zero-one losses of a prediction rule h and a reference prediction rule h*, Z = ℓ(h(X), Y) − ℓ(h*(X), Y) ∈ {−1, 0, 1}. Mhammedi et al. [2019] have applied the PAC-Bayes-Unexpected-Bernstein bound to excess losses in order to improve generalization bounds for classification. Another example of ternary random variables is the tandem loss with an offset, defined by ℓ_α(h(X), h′(X), Y) = (ℓ(h(X), Y) − α)(ℓ(h′(X), Y) − α) ∈ {α², −α(1 − α), (1 − α)²}. Wu et al. [2021] have applied the PAC-Bayes-Empirical-Bennett inequality to the tandem loss with an offset to obtain a generalization bound for the weighted majority vote. Yet another potential application, which we leave for future work, is learning with abstention [Cortes et al., 2018, Thulasidasan et al., 2019].

We present the split-kl inequality, which simultaneously matches the tightness of the Empirical/Unexpected Bernstein and the kl, and outperforms both for certain distributions. It works for sums of any bounded random variables Z_1, ..., Z_n, not only ternary ones, but it is best suited for ternary random variables, for which it is almost tight (in the same sense that the kl is tight for binary random variables). The idea behind the split-kl inequality is to write a random variable Z as Z = µ + Z⁺ − Z⁻, where µ is a constant, Z⁺ = max{0, Z − µ}, and Z⁻ = max{0, µ − Z}. Then E[Z] = µ + E[Z⁺] − E[Z⁻] and, given an i.i.d. sample Z_1, ..., Z_n, we can bound the distance between (1/n) Σ_{i=1}^n Z_i and E[Z] by using kl upper and lower bounds on the distances between (1/n) Σ_{i=1}^n Z_i⁺ and E[Z⁺], and between (1/n) Σ_{i=1}^n Z_i⁻ and E[Z⁻], respectively. For ternary random variables Z ∈ {a, b, c} with a ≤ b ≤ c, the best split is to take µ = b; then both Z⁺ and Z⁻ are binary, the kl upper and lower bounds for their rescaled versions are tight, and, therefore, the split-kl inequality for Z is also tight. Thus, this approach provides the best of both worlds: the combinatorial tightness of the kl bound and exploitation of low variance when the probability mass on the middle value happens to be large, as in Empirical Bernstein inequalities. We further elevate the idea to the PAC-Bayes domain and derive a PAC-Bayes-split-kl inequality.

We present an extensive set of experiments, where we first compare the kl, Empirical Bernstein, Unexpected Bernstein, and split-kl inequalities applied to (individual) sums of independent random variables on simulated data, and then compare the PAC-Bayes-kl, PAC-Bayes-Unexpected-Bernstein, PAC-Bayes-split-kl, and, in some of the setups, PAC-Bayes-Empirical-Bennett, for several prediction models on several UCI datasets. In particular, we evaluate the bounds in the linear classification setup studied by Mhammedi et al. [2019] and in the weighted majority prediction setup studied by Wu et al. [2021]. To the best of our knowledge, this is also the first time that the Empirical Bernstein and the Unexpected Bernstein inequalities are directly compared, with and without the PAC-Bayesian extension. In Appendix A.2 we also show that an inequality introduced by Cesa-Bianchi et al. [2007] yields a relaxation of the Unexpected Bernstein inequality of Mhammedi et al. [2019].

2 Concentration of Measure Inequalities for Sums of Independent Random Variables

We start with the most basic question in probability theory and statistics: how far can an average of an i.i.d. sample Z_1, ..., Z_n deviate from its expectation?
We cite the major existing inequalities: the kl, the Empirical Bernstein, and the Unexpected Bernstein; we then derive the new split-kl inequality and provide a numerical comparison.

2.1 Background

We use KL(ρ∥π) to denote the Kullback-Leibler divergence between two probability distributions ρ and π [Cover and Thomas, 2006]. We further use kl(p∥q) as a shorthand for the Kullback-Leibler divergence between two Bernoulli distributions with biases p and q, namely kl(p∥q) = KL((1 − p, p)∥(1 − q, q)). For p̂ ∈ [0, 1] and ε ≥ 0 we define the upper and lower inverses of kl, respectively, as

    kl^{-1,+}(p̂, ε) := max{p : p ∈ [0, 1] and kl(p̂∥p) ≤ ε},
    kl^{-1,-}(p̂, ε) := min{p : p ∈ [0, 1] and kl(p̂∥p) ≤ ε}.

The first inequality that we cite is the kl inequality.

Theorem 1 (kl Inequality [Langford, 2005, Foong et al., 2021, 2022]). Let Z_1, ..., Z_n be i.i.d. random variables bounded in the [0, 1] interval and with E[Z_i] = p for all i. Let p̂ = (1/n) Σ_{i=1}^n Z_i be their empirical mean. Then, for any δ ∈ (0, 1):

    P( kl(p̂∥p) ≥ (1/n) ln(1/δ) ) ≤ δ

and, by inversion of the kl,

    P( p ≥ kl^{-1,+}(p̂, (1/n) ln(1/δ)) ) ≤ δ,     (1)
    P( p ≤ kl^{-1,-}(p̂, (1/n) ln(1/δ)) ) ≤ δ.     (2)

We note that the PAC-Bayes-kl inequality (Theorem 5 below) is based on the inequality E[e^{n kl(p̂∥p)}] ≤ 2√n [Maurer, 2004], which gives P( kl(p̂∥p) ≥ (1/n) ln(2√n/δ) ) ≤ δ. Foong et al. [2021, 2022] reduce the logarithmic factor down to ln(1/δ) by basing the proof on Chernoff's inequality, but this proof technique cannot be combined with PAC-Bayes. Therefore, when we move on to PAC-Bayes we pay the extra ln(2√n) factor in the bounds. It is a long-standing open question whether this factor can be reduced in the PAC-Bayesian setting [Foong et al., 2021].

Next we cite two versions of the Empirical Bernstein inequality.

Theorem 2 (Empirical Bernstein Inequality [Maurer and Pontil, 2009]). Let Z_1, ..., Z_n be i.i.d. random variables bounded in an interval [a, b] for some a, b ∈ ℝ, and with E[Z_i] = p for all i. Let p̂ = (1/n) Σ_{i=1}^n Z_i be the empirical mean and let σ̂ = (1/(n − 1)) Σ_{i=1}^n (Z_i − p̂)² be the empirical variance. Then for any δ ∈ (0, 1):

    P( p ≥ p̂ + √( 2 σ̂ ln(2/δ) / n ) + 7 (b − a) ln(2/δ) / (3 (n − 1)) ) ≤ δ.     (3)

Theorem 3 (Unexpected Bernstein Inequality [Fan et al., 2015, Mhammedi et al., 2019]). Let Z_1, ..., Z_n be i.i.d. random variables bounded from above by b for some b > 0, and with E[Z_i] = p for all i. Let p̂ = (1/n) Σ_{i=1}^n Z_i be the empirical mean and let σ̂ = (1/n) Σ_{i=1}^n Z_i² be the empirical mean of the second moments. Let ψ(u) := u − ln(1 + u) for u > −1. Then, for any γ ∈ (0, 1/b) and any δ ∈ (0, 1):

    P( p ≥ p̂ + (ψ(−γb) / (γ b²)) σ̂ + ln(1/δ) / (γ n) ) ≤ δ.     (4)

To facilitate a comparison with other bounds, Theorem 3 provides a slightly different form of the Unexpected Bernstein inequality than the one used by Mhammedi et al. [2019]. We provide a proof of the theorem in Appendix A.1, based on the Unexpected Bernstein Lemma [Fan et al., 2015]. We note that an inequality proposed by Cesa-Bianchi et al. [2007] can be used to derive a relaxed version of the Unexpected Bernstein inequality, as discussed in Appendix A.2.
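The inverses kl^{-1,+} and kl^{-1,-} defined above have no closed form, but since kl(p̂∥p) is monotone in p on either side of p̂, they can be evaluated numerically. The following is a minimal Python sketch using plain bisection; the function names, clamping constant, and iteration count are our own choices rather than anything prescribed by the paper.

```python
import math

def kl_bernoulli(p_hat, p):
    """kl(p_hat || p) between Bernoulli(p_hat) and Bernoulli(p)."""
    eps = 1e-12  # clamp away from {0, 1} to avoid log(0)
    p_hat = min(max(p_hat, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return (p_hat * math.log(p_hat / p)
            + (1 - p_hat) * math.log((1 - p_hat) / (1 - p)))

def kl_inv_upper(p_hat, eps, iters=60):
    """kl^{-1,+}(p_hat, eps): largest p >= p_hat with kl(p_hat||p) <= eps.
    kl(p_hat||p) is increasing in p on [p_hat, 1], so bisection applies."""
    lo, hi = p_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= eps:
            lo = mid          # mid is feasible; push the bound up
        else:
            hi = mid
    return lo

def kl_inv_lower(p_hat, eps, iters=60):
    """kl^{-1,-}(p_hat, eps): smallest p <= p_hat with kl(p_hat||p) <= eps.
    kl(p_hat||p) is decreasing in p on [0, p_hat]."""
    lo, hi = 0.0, p_hat
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= eps:
            hi = mid          # mid is feasible; push the bound down
        else:
            lo = mid
    return hi
```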
2.2 The Split-kl Inequality

Let Z be a random variable bounded in an interval [a, b] for some a, b ∈ ℝ and let µ ∈ [a, b] be a constant. We decompose Z = µ + Z⁺ − Z⁻, where Z⁺ = max(0, Z − µ) and Z⁻ = max(0, µ − Z). Let p = E[Z], p⁺ = E[Z⁺], and p⁻ = E[Z⁻]. For an i.i.d. sample Z_1, ..., Z_n let p̂⁺ = (1/n) Σ_{i=1}^n Z_i⁺ and p̂⁻ = (1/n) Σ_{i=1}^n Z_i⁻. With these definitions we present the split-kl inequality.

Theorem 4 (Split-kl Inequality). Let Z_1, ..., Z_n be i.i.d. random variables taking values in an interval [a, b] for some a, b ∈ ℝ. Then for any µ ∈ [a, b] and δ ∈ (0, 1):

    P( p ≥ µ + (b − µ) kl^{-1,+}( p̂⁺/(b − µ), (1/n) ln(2/δ) ) − (µ − a) kl^{-1,-}( p̂⁻/(µ − a), (1/n) ln(2/δ) ) ) ≤ δ.     (5)

Proof. We have

    P( p ≥ µ + (b − µ) kl^{-1,+}( p̂⁺/(b − µ), (1/n) ln(2/δ) ) − (µ − a) kl^{-1,-}( p̂⁻/(µ − a), (1/n) ln(2/δ) ) )
        ≤ P( p⁺ ≥ (b − µ) kl^{-1,+}( p̂⁺/(b − µ), (1/n) ln(2/δ) ) ) + P( p⁻ ≤ (µ − a) kl^{-1,-}( p̂⁻/(µ − a), (1/n) ln(2/δ) ) )
        ≤ δ,

where the last inequality follows by applying the kl upper and lower bounds from Theorem 1 to the first and second terms, respectively.

For ternary random variables the best choice is to take µ to be the middle value; then the resulting Z⁺ and Z⁻ are binary, the corresponding kl upper and lower bounds on p⁺ and p⁻ are tight, and the resulting split-kl bound is tight. The inequality can be applied to any bounded random variables but, in the same way that the kl inequality is not necessarily a good choice for bounded random variables with a non-binary distribution, the split-kl is not necessarily a good choice if the distribution is not ternary.
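For concreteness, here is a direct numerical rendering of the bound of Theorem 4 for a sample from a bounded (e.g., ternary) random variable, reusing kl_inv_upper and kl_inv_lower from the sketch above; it assumes a < µ < b so that the rescalings are well defined.

```python
def split_kl_upper(z, a, mu, b, delta):
    """Upper confidence bound of Theorem 4 on E[Z], from an i.i.d. sample z
    of values in [a, b] with split point mu (the middle value for ternary Z).
    Assumes a < mu < b."""
    n = len(z)
    eps = math.log(2 / delta) / n
    p_hat_plus = sum(max(0.0, zi - mu) for zi in z) / n    # empirical mean of Z+
    p_hat_minus = sum(max(0.0, mu - zi) for zi in z) / n   # empirical mean of Z-
    return (mu
            + (b - mu) * kl_inv_upper(p_hat_plus / (b - mu), eps)
            - (mu - a) * kl_inv_lower(p_hat_minus / (mu - a), eps))
```

For a ternary sample in {−1, 0, 1} one would call, e.g., split_kl_upper(z, a=-1.0, mu=0.0, b=1.0, delta=0.05).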
2.3 Empirical Comparison

We present an empirical comparison of the tightness of the four concentration inequalities above: the kl, the Empirical Bernstein, the Unexpected Bernstein, and the split-kl. We take n i.i.d. samples Z_1, ..., Z_n taking values in {−1, 0, 1}. The choice is motivated both by instructiveness of presentation and by the subsequent applications to excess losses. We let p_{-1} = P(Z = −1), p_0 = P(Z = 0), and p_1 = P(Z = 1), where p_{-1} + p_0 + p_1 = 1. Then p = E[Z] = p_1 − p_{-1}. We also let p̂ = (1/n) Σ_{i=1}^n Z_i. In Figure 1 we plot the difference between the bounds on p given by inequalities (1), (3), (4), and (5) and p̂. Lower values in the plot correspond to tighter bounds. To compute the kl bound we first rescale the losses to the [0, 1] interval, and then rescale the bound back to the [−1, 1] interval. For the Empirical Bernstein bound we take a = −1 and b = 1. For the Unexpected Bernstein bound we take a grid of γ ∈ {1/(2b), ..., 1/(2^k b)} for k = ⌈log₂(√(n/ln(1/δ))/2)⌉ and a union bound over the grid, as proposed by Mhammedi et al. [2019]. For the split-kl bound we take µ to be the middle value, 0, of the ternary random variable. In the experiments we take δ = 0.05 and truncate the bounds at 1.

In the first experiment, presented in Figure 1a, we take p_{-1} = p_1 = (1 − p_0)/2 and plot the difference between the values of the bounds and p̂ as a function of p_0. For p_0 = 0 the random variable Z is Bernoulli and, as expected, the kl inequality performs best, followed by the split-kl and then the Unexpected Bernstein. As p_0 grows closer to 1, the variance of Z decreases and, also as expected, the kl inequality falls behind, whereas the split-kl and Unexpected Bernstein go closely together. The Empirical Bernstein falls behind all other bounds throughout most of the range, except for slightly outperforming the kl when p_0 gets very close to 1.

In the second experiment, presented in Figure 1b, we take a skewed random variable with p_1 = 0.99(1 − p_0) and p_{-1} = 0.01(1 − p_0), and again plot the difference between the values of the bounds and p̂ as a function of p_0. This time the kl also starts well for p_0 close to zero, but then falls behind due to its inability to properly handle values inside the interval. The Unexpected Bernstein exhibits the opposite trend, because it is based on the uncentered second moment, which is high when p_0 is close to zero even though the variance is small in that case. The Empirical Bernstein lags behind all other bounds for most of the range due to poor constants, whereas the split-kl matches the tightest of the other bounds, the kl and the Unexpected Bernstein, at the endpoints of the range of p_0, and outperforms all other bounds in the middle of the range, around p_0 = 0.6, due to its ability to exploit the combinatorics of the problem.

The experiments demonstrate that for ternary random variables the split-kl is a powerful alternative to existing concentration of measure inequalities. To the best of our knowledge, this is also the first empirical evaluation of the Unexpected Bernstein inequality, and it shows that in many cases it too is a powerful inequality. We also observe that in most settings the Empirical Bernstein is weaker than the other three inequalities we consider. Numerical evaluations in additional settings are provided in Appendix D.

Figure 1: Empirical comparison of the concentration bounds. (a) n = 100, δ = 0.05, and p_{-1} = p_1 = 0.5(1 − p_0). (b) n = 100, δ = 0.05, p_1 = 0.99(1 − p_0), and p_{-1} = 0.01(1 − p_0).

3 PAC-Bayesian Inequalities

Now we elevate the basic concentration of measure inequalities to the PAC-Bayesian domain. We start with the supervised learning problem setup, then provide background on existing PAC-Bayesian inequalities, and finish with the presentation of the PAC-Bayes-split-kl inequality.

3.1 Supervised Learning Problem Setup and Notations

Let X be a sample space, Y be a label space, and let S = {(X_i, Y_i)}_{i=1}^n be an i.i.d. sample drawn according to an unknown distribution D on the product space X × Y. Let H be a hypothesis space containing hypotheses h : X → Y. The quality of a hypothesis h is measured using the zero-one loss ℓ(h(X), Y) = 1(h(X) ≠ Y), where 1(·) is the indicator function. The expected loss of h is denoted by L(h) = E_{(X,Y)∼D}[ℓ(h(X), Y)], and the empirical loss of h on a sample S is denoted by L̂(h, S) = (1/|S|) Σ_{(X,Y)∈S} ℓ(h(X), Y). We use E_D[·] as a shorthand for E_{(X,Y)∼D}[·].

PAC-Bayesian bounds bound the generalization error of Gibbs prediction rules. For each input X ∈ X, the Gibbs prediction rule associated with a distribution ρ on H randomly draws a hypothesis h ∈ H according to ρ and predicts h(X). The expected loss of the Gibbs prediction rule is E_{h∼ρ}[L(h)] and the empirical loss is E_{h∼ρ}[L̂(h, S)]. We use E_ρ[·] as a shorthand for E_{h∼ρ}[·].

3.2 PAC-Bayesian Analysis Background

Now we present a brief background on the relevant results from PAC-Bayesian analysis.

PAC-Bayes-kl Inequality. The PAC-Bayes-kl inequality cited below is one of the tightest known generalization bounds on the expected loss of the Gibbs prediction rule.

Theorem 5 (PAC-Bayes-kl Inequality [Seeger, 2002, Maurer, 2004]). For any probability distribution π on H that is independent of S and any δ ∈ (0, 1):

    P( ∃ρ ∈ P : kl( E_ρ[L̂(h, S)] ∥ E_ρ[L(h)] ) ≥ (KL(ρ∥π) + ln(2√n/δ)) / n ) ≤ δ,     (6)

where P is the set of all possible probability distributions on H that can depend on S.

The following relaxation of the PAC-Bayes-kl inequality, based on the refined Pinsker relaxation of the kl divergence, helps to build intuition about the bound [McAllester, 2003]. With probability at least 1 − δ, for all ρ ∈ P we have

    E_ρ[L(h)] ≤ E_ρ[L̂(h, S)] + √( 2 E_ρ[L̂(h, S)] (KL(ρ∥π) + ln(2√n/δ)) / n ) + 2 (KL(ρ∥π) + ln(2√n/δ)) / n.     (7)
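Evaluating the PAC-Bayes-kl bound of Theorem 5 again reduces to a kl inversion. A minimal sketch, reusing kl_inv_upper from Section 2 and assuming E_ρ[L̂(h, S)] and KL(ρ∥π) have already been computed for the posterior at hand:

```python
def pac_bayes_kl_bound(emp_loss, kl_rho_pi, n, delta):
    """Upper bound on E_rho[L(h)] from Theorem 5: invert
    kl(emp_loss || L) <= (KL(rho||pi) + ln(2 sqrt(n)/delta)) / n."""
    eps = (kl_rho_pi + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inv_upper(emp_loss, eps)
```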
If E_ρ[L̂(h, S)] is close to zero, the middle term in the inequality above vanishes, leading to so-called "fast convergence rates" (convergence of E_ρ[L̂(h, S)] to E_ρ[L(h)] at the rate of 1/n). However, achieving a low E_ρ[L̂(h, S)] is not always possible [Dziugaite and Roy, 2017, Zhou et al., 2019]. Subsequent research in PAC-Bayesian analysis has focused on two goals: (1) achieving fast convergence rates when the variance of the prediction errors is low (and not necessarily the errors themselves), and (2) reducing the KL(ρ∥π) term, which may be quite large for large hypothesis spaces. For the first goal, Tolstikhin and Seldin [2013] developed the PAC-Bayes-Empirical-Bernstein inequality, and Mhammedi et al. [2019] proposed to use excess losses and also derived the alternative PAC-Bayes-Unexpected-Bernstein inequality. For the second goal, Ambroladze et al. [2007] suggested using informed priors, and Mhammedi et al. [2019] perfected the idea by proposing to average over a "forward" and "backward" construction with informed priors. Next we explain the ideas behind excess losses and informed priors in more detail.

Excess Losses. Let h* be a reference prediction rule that is independent of S. We define the excess loss of a prediction rule h with respect to the reference h* by

    Δℓ(h(X), h*(X), Y) = ℓ(h(X), Y) − ℓ(h*(X), Y).

If ℓ is the zero-one loss, the excess loss naturally gives rise to ternary random variables, but it is well-defined for any real-valued loss function. We use ΔL(h, h*) = E_D[Δℓ(h(X), h*(X), Y)] = L(h) − L(h*) to denote the expected excess loss of h relative to h*, and ΔL̂(h, h*, S) = (1/|S|) Σ_{(X,Y)∈S} Δℓ(h(X), h*(X), Y) = L̂(h, S) − L̂(h*, S) to denote the empirical excess loss of h relative to h*. The expected loss of a Gibbs prediction rule can then be written as E_ρ[L(h)] = E_ρ[ΔL(h, h*)] + L(h*). A bound on E_ρ[L(h)] can thus be decomposed into a sum of a PAC-Bayes bound on E_ρ[ΔL(h, h*)] and a bound on L(h*). When the variance of the excess loss is small, we can use tools that exploit small variance, such as the PAC-Bayes-Empirical-Bernstein, PAC-Bayes-Unexpected-Bernstein, or the PAC-Bayes-split-kl inequality proposed below, to achieve fast convergence rates for the excess loss. Bounding L(h*) involves just a single prediction rule and does not depend on the value of KL(ρ∥π). We note that it is essential that the variance, and not just the magnitude, of the excess loss is small. For example, if the excess losses primarily take values in {−1, 1} and average out to zero, fast convergence rates are impossible.

Informed Priors. The idea behind informed priors is to split the data into two subsets, S = S1 ∪ S2, to use S1 to learn a prior π_{S1}, and then to use it to learn a posterior on S2 [Ambroladze et al., 2007]. Note that since the size of S2 is smaller than the size of S, this approach gains by having a potentially smaller KL(ρ∥π_{S1}), but loses by having a smaller sample size in the denominator of the PAC-Bayes bounds. The balance between the advantage and the disadvantage depends on the data: for some datasets it strengthens the bounds, but for others it weakens them. Mhammedi et al. [2019] perfected the approach by proposing to use it in the "forward" and "backward" directions and average over the two. Let S1 and S2 be of equal size. The "forward" part uses S1 to train π_{S1} and then computes a posterior on S2, while the "backward" part uses S2 to train π_{S2} and then computes a posterior on S1.
Finally, the two posteriors are averaged with equal weights and the KL term becomes (1/2)(KL(ρ∥π_{S1}) + KL(ρ∥π_{S2})). See Mhammedi et al. [2019] for the derivation.

Excess Losses and Informed Priors. Excess losses and informed priors make an ideal combination. If we split S into two equal parts, S = S1 ∪ S2, we can use S1 to train both a reference prediction rule h_{S1} and a prior π_{S1}, then learn a PAC-Bayes posterior on S2, and the other way around. By combining the "forward" and "backward" approaches we can write

    E_ρ[L(h)] = (1/2) E_ρ[ΔL(h, h_{S1})] + (1/2) E_ρ[ΔL(h, h_{S2})] + (1/2)(L(h_{S1}) + L(h_{S2})),     (8)

and we can use PAC-Bayes to bound the first term using the prior π_{S1} and the data in S2, to bound the second term using the prior π_{S2} and the data in S1, and to bound L(h_{S1}) and L(h_{S2}) using the "complementary" data in S2 and S1, respectively.

PAC-Bayes-Empirical-Bernstein Inequalities. The excess losses are ternary random variables taking values in {−1, 0, 1} and, as we have already discussed, the kl inequality is not well-suited for them. PAC-Bayesian inequalities tailored to non-binary random variables were derived by Seldin et al. [2012], Tolstikhin and Seldin [2013], Wu et al. [2021], and Mhammedi et al. [2019]. Seldin et al. [2012] derived the PAC-Bayes-Bernstein oracle bound, which assumes knowledge of the variance. Tolstikhin and Seldin [2013] made it into an empirical bound by deriving the PAC-Bayes-Empirical-Bernstein bound for the variance and plugging it into the PAC-Bayes-Bernstein bound of Seldin et al. Wu et al. [2021] derived an oracle PAC-Bayes-Bennett inequality, which again assumes oracle knowledge of the variance, showed that it is always at least as tight as the PAC-Bayes-Bernstein, and then also plugged in the PAC-Bayes-Empirical-Bernstein bound on the variance. Mhammedi et al. [2019] derived the PAC-Bayes-Unexpected-Bernstein inequality, which directly uses the empirical second moment. Since we have already shown that the Unexpected Bernstein inequality is tighter than the Empirical Bernstein, and since the approach of Wu et al. requires a combination of two inequalities (PAC-Bayes-Empirical-Bernstein for the variance and PAC-Bayes-Bennett for the loss), whereas the approach of Mhammedi et al. makes only a single application of the PAC-Bayes-Unexpected-Bernstein, we only compare our work to the latter.

We cite the inequality of Mhammedi et al. [2019], which applies to an arbitrary loss function. We use ℓ̃ and matching tilde-marked quantities to distinguish it from the zero-one loss ℓ. For any h ∈ H, let L̃(h) = E_D[ℓ̃(h(X), Y)] be the expected tilde-loss of h, and let L̃̂(h, S) = (1/|S|) Σ_{(X,Y)∈S} ℓ̃(h(X), Y) be the empirical tilde-loss of h on a sample S.

Theorem 6 (PAC-Bayes-Unexpected-Bernstein Inequality [Mhammedi et al., 2019]). Let ℓ̃(·, ·) be an arbitrary loss function bounded from above by b for some b > 0, and assume that Ṽ̂(h, S) = (1/|S|) Σ_{(X,Y)∈S} ℓ̃(h(X), Y)² is finite for all h. Let ψ(u) := u − ln(1 + u) for u > −1. Then for any distribution π on H that is independent of S, any γ ∈ (0, 1/b), and any δ ∈ (0, 1):

    P( ∃ρ ∈ P : E_ρ[L̃(h)] ≥ E_ρ[L̃̂(h, S)] + (ψ(−γb)/(γb²)) E_ρ[Ṽ̂(h, S)] + (KL(ρ∥π) + ln(1/δ)) / (γn) ) ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S.

When optimizing the bound, we take the same grid of γ ∈ {1/(2b), ..., 1/(2^k b)} with k = ⌈log₂(√(n/ln(1/δ))/2)⌉ and a union bound over the grid, as we did for Theorem 3.

3.3 PAC-Bayes-Split-kl Inequality

Now we present our PAC-Bayes-split-kl inequality.
For an arbitrary loss function ℓ̃ taking values in an interval [a, b] for some a, b ∈ ℝ, let ℓ̃⁺ := max{0, ℓ̃ − µ} and ℓ̃⁻ := max{0, µ − ℓ̃} for some µ ∈ [a, b]. For any h ∈ H, let L̃⁺(h) = E_D[ℓ̃⁺(h(X), Y)] and L̃⁻(h) = E_D[ℓ̃⁻(h(X), Y)]. The corresponding empirical losses are denoted by L̃̂⁺(h, S) = (1/n) Σ_{i=1}^n ℓ̃⁺(h(X_i), Y_i) and L̃̂⁻(h, S) = (1/n) Σ_{i=1}^n ℓ̃⁻(h(X_i), Y_i).

Theorem 7 (PAC-Bayes-Split-kl Inequality). Let ℓ̃(·, ·) be an arbitrary loss function taking values in an interval [a, b] for some a, b ∈ ℝ. Then for any distribution π on H that is independent of S, any µ ∈ [a, b], and any δ ∈ (0, 1):

    P[ ∃ρ ∈ P : E_ρ[L̃(h)] ≥ µ + (b − µ) kl^{-1,+}( E_ρ[L̃̂⁺(h, S)]/(b − µ), (KL(ρ∥π) + ln(4√n/δ))/n )
                               − (µ − a) kl^{-1,-}( E_ρ[L̃̂⁻(h, S)]/(µ − a), (KL(ρ∥π) + ln(4√n/δ))/n ) ] ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S.

Proof. We have E_ρ[L̃(h)] = µ + E_ρ[L̃⁺(h)] − E_ρ[L̃⁻(h)]. Similarly to the proof of Theorem 4, we take a union bound of the PAC-Bayes-kl upper bound on E_ρ[L̃⁺(h)] and the PAC-Bayes-kl lower bound on E_ρ[L̃⁻(h)].

3.4 PAC-Bayes-split-kl with Excess Losses and Informed Priors

Looking back at the expected loss decomposition in Equation (8), we can use the PAC-Bayes-split-kl to bound the first two terms and a bound on the Binomial tail distribution to bound the last term. For n i.i.d. Bernoulli random variables Z_1, ..., Z_n with bias p ∈ (0, 1), we define the Binomial tail distribution Bin(n, k, p) = P( Σ_{i=1}^n Z_i ≤ k ) and its inverse Bin^{-1}(n, k, δ) = max{p : p ∈ [0, 1] and Bin(n, k, p) ≥ δ}. The following theorem relates p̂ = (1/n) Σ_{i=1}^n Z_i and p.

Theorem 8 (Test Set Bound [Langford, 2005]). Let Z_1, ..., Z_n be n i.i.d. Bernoulli random variables with bias p ∈ (0, 1) and let p̂ = (1/n) Σ_{i=1}^n Z_i be the empirical mean. Then for any δ ∈ (0, 1):

    P( p ≥ Bin^{-1}(n, n p̂, δ) ) ≤ δ.

By applying Theorems 7 and 8 to Equation (8) we obtain the following result.

Theorem 9. For any µ ∈ [−1, 1] and any δ ∈ (0, 1):

    P( ∃ρ ∈ P : E_ρ[L(h)] ≥ µ + (1 − µ)(a) − (µ + 1)(b) + (1/2)(c) ) ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S,

    (a) = kl^{-1,+}( (1/2) E_ρ[Δ⁺L̂(h, h_{S1}, S2)]/(1 − µ) + (1/2) E_ρ[Δ⁺L̂(h, h_{S2}, S1)]/(1 − µ), (KL(ρ∥π) + ln(8√(n/2)/δ)) / (n/2) ),

    (b) = kl^{-1,-}( (1/2) E_ρ[Δ⁻L̂(h, h_{S1}, S2)]/(µ + 1) + (1/2) E_ρ[Δ⁻L̂(h, h_{S2}, S1)]/(µ + 1), (KL(ρ∥π) + ln(8√(n/2)/δ)) / (n/2) ),

in which π = (1/2)π_{S1} + (1/2)π_{S2}, and

    (c) = Bin^{-1}( n/2, (n/2) L̂(h_{S1}, S2), δ/4 ) + Bin^{-1}( n/2, (n/2) L̂(h_{S2}, S1), δ/4 ).

The proof is postponed to Appendix C.
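Numerically, Theorem 7 is evaluated exactly like Theorem 4, with the complexity term added to the kl budget. A minimal sketch under the same assumption a < µ < b, reusing the kl inverses from Section 2 and assuming that the ρ-averaged empirical split losses and KL(ρ∥π) have already been computed:

```python
def pac_bayes_split_kl_bound(emp_plus, emp_minus, a, mu, b,
                             kl_rho_pi, n, delta):
    """Upper bound of Theorem 7 on E_rho[L~(h)], where emp_plus and
    emp_minus are E_rho[L~^+ hat(h, S)] and E_rho[L~^- hat(h, S)]."""
    eps = (kl_rho_pi + math.log(4 * math.sqrt(n) / delta)) / n
    return (mu
            + (b - mu) * kl_inv_upper(emp_plus / (b - mu), eps)
            - (mu - a) * kl_inv_lower(emp_minus / (mu - a), eps))
```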
4 Experiments

We evaluate the performance of the PAC-Bayes-split-kl inequality in linear classification and in weighted majority voting, using several datasets from the UCI and LibSVM repositories [Dua and Graff, 2019, Chang and Lin, 2011]. An overview of the datasets is provided in Appendix E.1. For linear classification we reproduce the experimental setup of Mhammedi et al. [2019], and for the weighted majority vote we reproduce the experimental setup of Wu et al. [2021].

4.1 The Experimental Setup of Mhammedi et al. [2019]: Linear Classifiers

In the first experiment we follow the experimental setup of Mhammedi et al. [2019], who consider binary classification problems with linear classifiers in ℝ^d and Gaussian priors and posteriors. A classifier h_w associated with a vector w ∈ ℝ^d makes a prediction on an input X by h_w(X) = 1(w⊤X > 0). The posteriors have the form of Gaussian distributions centered at w_S ∈ ℝ^d, with a covariance Σ_S that depends on a sample S, ρ = N(w_S, Σ_S). The informed priors π_{S1} = N(w_{S1}, Σ_{S1}) and π_{S2} = N(w_{S2}, Σ_{S2}) are also taken to be Gaussian distributions centered at w_{S1} and w_{S2}, with covariances Σ_{S1} and Σ_{S2}, respectively. We take the classifier associated with w_{S1} as the reference classifier h_{S1} and the classifier associated with w_{S2} as the reference classifier h_{S2}. More details on the construction are provided in Appendix E.2.

Figure 2 compares the PAC-Bayes-Unexpected-Bernstein bound (PBUB) and the PAC-Bayes-split-kl bound (PBSkl) with excess losses and informed priors. The ternary random variables in this setup take values in {−1, 0, 1}, and we select µ to be the middle value, 0. Since the PAC-Bayes-kl bound (PBkl) is one of the tightest known generalization bounds, we take the PBkl with informed priors as a baseline. Details on bound calculation and optimization are provided in Appendix E.2.

In this experiment all three bounds, PBkl, PBUB, and PBSkl, performed comparably. We believe that the reason is that with informed priors the KL(ρ∥π) term is small. From the relaxation of the PBkl bound in Equation (7), we observe that a small KL(ρ∥π) term implies a smaller difference between the fast and slow convergence rates, and thus a smaller advantage of bounding the excess loss instead of the raw loss. In other words, we believe that the effect of using informed priors dominates the effect of using excess losses. We note that in order to use excess losses we need to train the reference hypothesis h* on part of the data and, therefore, training an informed prior on the same data comes at no extra cost.

4.2 The Experimental Setup of Wu et al. [2021]: Weighted Majority Vote

In the second experiment we reproduce the experimental setup of Wu et al. [2021], who consider multiclass classification by a weighted majority vote. Given an input X ∈ X, a hypothesis space H, and a distribution ρ on H, a ρ-weighted majority vote classifier predicts MV_ρ(X) = argmax_{y∈Y} E_ρ[1(h(X) = y)]. One of the tightest bounds for the majority vote is the tandem bound (TND) proposed by Masegosa et al. [2020], which is based on the tandem losses for pairs of hypotheses, ℓ(h(X), h′(X), Y) = 1(h(X) ≠ Y) 1(h′(X) ≠ Y), and the second-order Markov inequality. Wu et al. [2021] proposed two improved forms of the bound, both based on a parametric form of the Chebyshev-Cantelli inequality. The first, CCTND, uses Chebyshev-Cantelli with the tandem losses and the PAC-Bayes-kl bound for bounding the tandem losses. The second, CCPBB, uses tandem losses with an offset, defined by ℓ_α(h(X), h′(X), Y) = (1(h(X) ≠ Y) − α)(1(h′(X) ≠ Y) − α) for α < 0.5, and the PAC-Bayes-Empirical-Bennett inequality for bounding the tandem losses with an offset. We note that while the tandem losses are binary random variables, tandem losses with an offset are ternary random variables taking values in {α², −α(1 − α), (1 − α)²}, and, therefore, application of Empirical Bernstein type inequalities makes sense. However, in the experiments of Wu et al., CCPBB lagged behind TND and CCTND. We replaced the PAC-Bayes-Empirical-Bennett with the PAC-Bayes-Unexpected-Bernstein (CCPBUB) and with the PAC-Bayes-split-kl (CCPBSkl) and show that the weakness of CCPBB was caused by the looseness of the PAC-Bayes-Empirical-Bernstein, and that CCPBUB and CCPBSkl lead to tighter bounds that are competitive with, and sometimes outperform, TND and CCTND. For the PAC-Bayes-split-kl bound we took µ to be the middle value of the tandem loss with an offset, namely µ = α² for α ≥ 0 and µ = −α(1 − α) for α < 0.
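As a small worked example of the quantities involved, the following sketch computes the tandem loss with an offset and the middle value used as µ; here err_h and err_hp denote the 0/1 errors 1(h(X) ≠ Y) and 1(h′(X) ≠ Y), and the function names are ours.

```python
def tandem_loss_with_offset(err_h, err_hp, alpha):
    """Tandem loss with offset: (1(h wrong) - alpha)(1(h' wrong) - alpha),
    taking values in {alpha^2, -alpha(1 - alpha), (1 - alpha)^2}."""
    return (err_h - alpha) * (err_hp - alpha)

def split_point(alpha):
    """Middle value of the ternary tandem loss with offset, used as mu
    in the CCPBSkl bound (alpha < 0.5)."""
    return alpha ** 2 if alpha >= 0 else -alpha * (1 - alpha)
```

For example, with α = 0.3 the three possible loss values are −0.21, 0.09, and 0.49, and split_point(0.3) returns the middle value 0.09.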
In Figure 3 we present a comparison of the TND, CCTND, CCPBB, CCPBUB, and CCPBSkl bounds on a weighted majority vote of heterogeneous classifiers (Linear Discriminant Analysis, k-Nearest Neighbors, Decision Tree, Logistic Regression, and Gaussian Naive Bayes), which adds the two new bounds, CCPBUB and CCPBSkl, to the experiment done by Wu et al. [2021]. A more detailed description of the experiment and results for additional datasets are provided in Appendix E.3. We note that CCPBUB and CCPBSkl consistently outperform CCPBB, demonstrating that they are more appropriate for tandem losses with an offset. The former two bounds perform comparably to TND and CCTND, which operate on tandem losses without an offset. In Appendix E.4 we replicate another experiment of Wu et al., where the bounds are used to reweigh the trees of a random forest classifier. The results are similar to those for the heterogeneous classifiers.

5 Discussion

We have presented the split-kl and PAC-Bayes-split-kl inequalities. The inequalities answer a long-standing open question on how to exploit the structure of ternary random variables in order to provide tight concentration bounds. The proposed split-kl and PAC-Bayes-split-kl inequalities are as tight for ternary random variables as the kl and PAC-Bayes-kl inequalities are for binary random variables. In our empirical evaluation the split-kl inequality was always competitive with the kl and Unexpected Bernstein inequalities and outperformed both in certain regimes, whereas the Empirical Bernstein typically lagged behind. In our experiments in the PAC-Bayesian setting, the PAC-Bayes-split-kl was always comparable to the PAC-Bayes-Unexpected-Bernstein, whereas the PAC-Bayes-Empirical-Bennett most often lagged behind. The first two inequalities were usually comparable to the PAC-Bayes-kl, although in some cases the attempt to exploit low variance did not pay off and the PAC-Bayes-kl outperformed them, which is also the trend observed earlier by Mhammedi et al. [2019]. To the best of our knowledge, this is the first time that the various approaches to exploiting low variance have been directly compared, and the proposed split-kl emerged as a clear winner in the basic setting, whereas in the PAC-Bayes setting in our experiments the PAC-Bayes-Unexpected-Bernstein and the PAC-Bayes-split-kl were comparable, and preferable over the PAC-Bayes-Empirical-Bernstein and PAC-Bayes-Empirical-Bennett.

Acknowledgments and Disclosure of Funding

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. The authors also acknowledge partial support by the Independent Research Fund Denmark, grant number 0135-00259B.
1. What is the focus and contribution of the paper regarding concentration of measure inequality?
2. What are the strengths of the proposed split-kl inequality, particularly in its empirical performance?
3. What are the weaknesses of the paper, especially in understanding the difference between the proposed inequality and the existing kl inequality?
4. Do you have any concerns or suggestions regarding the choice of μ and its impact on the bound's tightness?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The authors present a new concentration of measure inequality for sums of independent bounded random variables, namely the split-kl inequality. They derive this new inequality by combining the kl inequalities (1 and 2) in a clever way. They provide an empirical comparison of this new inequality with existing concentration inequalities such as the kl inequality, the Empirical Bernstein inequality, and the Unexpected Bernstein inequality, and show that their new inequality is tighter than all of these inequalities in some regimes. They further extend their contribution to the PAC-Bayes setting and derive the PAC-Bayes-split-kl inequality. Again, they empirically (on synthetic and real-world data) identify regimes where their inequality performs better than other existing inequalities such as the PAC-Bayes-kl, PAC-Bayes-Empirical-Bernstein, PAC-Bayes-Unexpected-Bernstein, and PAC-Bayes-Empirical-Bennett inequalities.

Strengths And Weaknesses

Strengths:
- The paper is easy to follow and the claims stem from logical arguments.
- The experiments are extensive and support the claims made by the authors.
- Theoretically, the idea is simple but, interestingly, it leads to good empirical results.

Weaknesses:
- It is difficult to understand how this new inequality is fundamentally different from the kl inequality. Without a careful choice of μ, I am not sure that this new inequality would always be tighter than the kl inequality in all regimes. My observation comes from the following argument: consider Z ∈ [a, b]. Take μ = a; then Z⁺ = Z − a and Z⁻ = 0. Similarly, take μ = b; then Z⁺ = 0 and Z⁻ = b − Z. In both of these cases we are just translating Z, and both the kl inequality and the split-kl inequality should behave similarly for these choices of μ. Of course, there might be a clever choice of μ which makes one perform better than the other, but I am not sure how to make that choice.

Questions

- Can you add some experiments to show the dependency of the bound on the choice of μ?
- It would also be helpful to discuss the tightness of the various bounds as n increases.

Limitations

The limitations are discussed adequately.
NIPS
Title Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables

Abstract We present a new concentration of measure inequality for sums of independent bounded random variables, which we name a split-kl inequality. The inequality is particularly well-suited for ternary random variables, which naturally show up in a variety of problems, including analysis of excess losses in classification, analysis of weighted majority votes, and learning with abstention. We demonstrate that for ternary random variables the inequality is simultaneously competitive with the kl inequality, the Empirical Bernstein inequality, and the Unexpected Bernstein inequality, and in certain regimes outperforms all of them. It resolves an open question by Tolstikhin and Seldin [2013] and Mhammedi et al. [2019] on how to match simultaneously the combinatorial power of the kl inequality when the distribution happens to be close to binary and the power of Bernstein inequalities to exploit low variance when the probability mass is concentrated on the middle value. We also derive a PAC-Bayes-split-kl inequality and compare it with the PAC-Bayes-kl, PAC-Bayes-Empirical-Bennett, and PAC-Bayes-Unexpected-Bernstein inequalities in an analysis of excess losses and in an analysis of a weighted majority vote for several UCI datasets. Last, but not least, our study provides the first direct comparison of the Empirical Bernstein and Unexpected Bernstein inequalities and their PAC-Bayes extensions.

1 Introduction

Concentration of measure inequalities for sums of independent random variables are the most fundamental analysis tools in statistics and many other domains [Boucheron et al., 2013]. Their history stretches almost a century back, and inequalities such as Hoeffding's [Hoeffding, 1963] and Bernstein's [Bernstein, 1946] are the main workhorses of learning theory. For binary random variables, one of the tightest concentration of measure inequalities is the kl inequality [Maurer, 2004, Langford, 2005, Foong et al., 2021, 2022], which is based on combinatorial properties of a sum of n independent random variables.¹ However, while being extremely tight for binary random variables and applicable to any bounded random variables, the kl inequality is not necessarily a good choice for sums of bounded random variables that can take more than two values. In the latter case, the Empirical Bernstein [Mnih et al., 2008, Audibert et al., 2009, Maurer and Pontil, 2009] and the Unexpected Bernstein [Cesa-Bianchi et al., 2007, Mhammedi et al., 2019] inequalities can be significantly tighter due to their ability to exploit low variance, as shown by Tolstikhin and Seldin [2013]. However, the Empirical and Unexpected Bernstein inequalities are loose for binary random variables [Tolstikhin and Seldin, 2013].

¹ The Binomial tail bound is slightly tighter, but it does not extend to the PAC-Bayes setting [Langford, 2005]. Our split-kl approach can be directly applied to obtain a "split-Binomial-tail" inequality.

The challenge of exploiting low variance and, at the same time, matching the tightness of the kl inequality if a distribution happens to be close to binary, was faced by multiple prior works [Tolstikhin and Seldin, 2013, Mhammedi et al., 2019, Wu et al., 2021], but remained an open question. We resolve this question for the case of ternary random variables. Such random variables appear in a variety of applications, and we illustrate two of them.
One is a study of excess losses, which are differences between the zero-one losses of a prediction rule h and a reference prediction rule h∗, $Z = \ell(h(X), Y) - \ell(h^*(X), Y) \in \{-1, 0, 1\}$. Mhammedi et al. [2019] have applied the PAC-Bayes-Unexpected-Bernstein bound to excess losses in order to improve generalization bounds for classification. Another example of ternary random variables is the tandem loss with an offset, defined by $\ell_\alpha(h(X), h'(X), Y) = (\ell(h(X), Y) - \alpha)(\ell(h'(X), Y) - \alpha) \in \{\alpha^2, -\alpha(1-\alpha), (1-\alpha)^2\}$. Wu et al. [2021] have applied the PAC-Bayes-Empirical-Bennett inequality to the tandem loss with an offset to obtain a generalization bound for the weighted majority vote. Yet another potential application, which we leave for future work, is learning with abstention [Cortes et al., 2018, Thulasidasan et al., 2019].

We present the split-kl inequality, which simultaneously matches the tightness of the Empirical/Unexpected Bernstein and the kl, and outperforms both for certain distributions. It works for sums of any bounded random variables $Z_1, \ldots, Z_n$, not only the ternary ones, but it is best suited for ternary random variables, for which it is almost tight (in the same sense as the kl is tight for binary random variables). The idea behind the split-kl inequality is to write a random variable Z as $Z = \mu + Z^+ - Z^-$, where µ is a constant, $Z^+ = \max\{0, Z - \mu\}$, and $Z^- = \max\{0, \mu - Z\}$. Then $\mathbb{E}[Z] = \mu + \mathbb{E}[Z^+] - \mathbb{E}[Z^-]$ and, given an i.i.d. sample $Z_1, \ldots, Z_n$, we can bound the distance between $\frac{1}{n}\sum_{i=1}^n Z_i$ and $\mathbb{E}[Z]$ by using kl upper and lower bounds on the distances between $\frac{1}{n}\sum_{i=1}^n Z_i^+$ and $\mathbb{E}[Z^+]$, and $\frac{1}{n}\sum_{i=1}^n Z_i^-$ and $\mathbb{E}[Z^-]$, respectively. For ternary random variables $Z \in \{a, b, c\}$ with $a \le b \le c$, the best split is to take $\mu = b$; then both $Z^+$ and $Z^-$ are binary and the kl upper and lower bounds for their rescaled versions are tight and, therefore, the split-kl inequality for Z is also tight. Thus, this approach provides the best of both worlds: the combinatorial tightness of the kl bound and exploitation of low variance when the probability mass on the middle value happens to be large, as in Empirical Bernstein inequalities. We further elevate the idea to the PAC-Bayes domain and derive a PAC-Bayes-split-kl inequality.

We present an extensive set of experiments, where we first compare the kl, Empirical Bernstein, Unexpected Bernstein, and split-kl inequalities applied to (individual) sums of independent random variables in simulated data, and then compare the PAC-Bayes-kl, PAC-Bayes-Unexpected-Bernstein, PAC-Bayes-split-kl, and, in some of the setups, PAC-Bayes-Empirical-Bennett, for several prediction models on several UCI datasets. In particular, we evaluate the bounds in the linear classification setup studied by Mhammedi et al. [2019] and in the weighted majority prediction setup studied by Wu et al. [2021]. To the best of our knowledge, this is also the first time when the Empirical Bernstein and the Unexpected Bernstein inequalities are directly compared, with and without the PAC-Bayesian extension. In Appendix A.2 we also show that an inequality introduced by Cesa-Bianchi et al. [2007] yields a relaxation of the Unexpected Bernstein inequality by Mhammedi et al. [2019].

2 Concentration of Measure Inequalities for Sums of Independent Random Variables

We start with the most basic question in probability theory and statistics: how far can an average of an i.i.d. sample $Z_1, \ldots, Z_n$ deviate from its expectation?
We cite the major existing inequalities, the kl, Empirical Bernstein, and Unexpected Bernstein, then derive the new split-kl inequality, and then provide a numerical comparison.

2.1 Background

We use $\mathrm{KL}(\rho\|\pi)$ to denote the Kullback-Leibler divergence between two probability distributions, ρ and π [Cover and Thomas, 2006]. We further use $\mathrm{kl}(p\|q)$ as a shorthand for the Kullback-Leibler divergence between two Bernoulli distributions with biases p and q, namely $\mathrm{kl}(p\|q) = \mathrm{KL}((1-p, p)\|(1-q, q))$. For $\hat p \in [0,1]$ and $\varepsilon \ge 0$ we define the upper and lower inverse of kl, respectively, as $\mathrm{kl}^{-1,+}(\hat p, \varepsilon) := \max\{p : p \in [0,1] \text{ and } \mathrm{kl}(\hat p\|p) \le \varepsilon\}$ and $\mathrm{kl}^{-1,-}(\hat p, \varepsilon) := \min\{p : p \in [0,1] \text{ and } \mathrm{kl}(\hat p\|p) \le \varepsilon\}$. The first inequality that we cite is the kl inequality.

Theorem 1 (kl Inequality [Langford, 2005, Foong et al., 2021, 2022]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded in the $[0,1]$ interval and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be their empirical mean. Then, for any $\delta \in (0,1)$:
$$\mathbb{P}\left(\mathrm{kl}(\hat p\|p) \ge \frac{\ln\frac{1}{\delta}}{n}\right) \le \delta$$
and, by inversion of the kl,
$$\mathbb{P}\left(p \ge \mathrm{kl}^{-1,+}\left(\hat p, \tfrac{1}{n}\ln\tfrac{1}{\delta}\right)\right) \le \delta, \quad (1)$$
$$\mathbb{P}\left(p \le \mathrm{kl}^{-1,-}\left(\hat p, \tfrac{1}{n}\ln\tfrac{1}{\delta}\right)\right) \le \delta. \quad (2)$$

We note that the PAC-Bayes-kl inequality (Theorem 5 below) is based on the inequality $\mathbb{E}\left[e^{n\,\mathrm{kl}(\hat p\|p)}\right] \le 2\sqrt{n}$ [Maurer, 2004], which gives $\mathbb{P}\left(\mathrm{kl}(\hat p\|p) \ge \frac{\ln\frac{2\sqrt{n}}{\delta}}{n}\right) \le \delta$. Foong et al. [2021, 2022] reduce the logarithmic factor down to $\ln\frac{1}{\delta}$ by basing the proof on Chernoff's inequality, but this proof technique cannot be combined with PAC-Bayes. Therefore, when we move on to PAC-Bayes we pay the extra $\ln 2\sqrt{n}$ factor in the bounds. It is a long-standing open question whether this factor can be reduced in the PAC-Bayesian setting [Foong et al., 2021]. Next we cite two versions of the Empirical Bernstein inequality.

Theorem 2 (Empirical Bernstein Inequality [Maurer and Pontil, 2009]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded in a $[a,b]$ interval for some $a, b \in \mathbb{R}$, and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be the empirical mean and let $\hat\sigma = \frac{1}{n-1}\sum_{i=1}^n (Z_i - \hat p)^2$ be the empirical variance. Then for any $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \hat p + \sqrt{\frac{2\hat\sigma\ln\frac{2}{\delta}}{n}} + \frac{7(b-a)\ln\frac{2}{\delta}}{3(n-1)}\right) \le \delta. \quad (3)$$

Theorem 3 (Unexpected Bernstein Inequality [Fan et al., 2015, Mhammedi et al., 2019]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded from above by b for some $b > 0$, and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be the empirical mean and let $\hat\sigma = \frac{1}{n}\sum_{i=1}^n Z_i^2$ be the empirical mean of the second moments. Let $\psi(u) := u - \ln(1+u)$ for $u > -1$. Then, for any $\gamma \in (0, 1/b)$ and any $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \hat p + \frac{\psi(-\gamma b)}{\gamma b^2}\hat\sigma + \frac{\ln\frac{1}{\delta}}{\gamma n}\right) \le \delta. \quad (4)$$

To facilitate a comparison with other bounds, Theorem 3 provides a slightly different form of the Unexpected Bernstein inequality than the one used by Mhammedi et al. [2019]. We provide a proof of the theorem in Appendix A.1, which is based on the Unexpected Bernstein Lemma [Fan et al., 2015]. We note that an inequality proposed by Cesa-Bianchi et al. [2007] can be used to derive a relaxed version of the Unexpected Bernstein inequality, as discussed in Appendix A.2.

2.2 The Split-kl Inequality

Let Z be a random variable bounded in a $[a,b]$ interval for some $a, b \in \mathbb{R}$ and let $\mu \in [a,b]$ be a constant. We decompose $Z = \mu + Z^+ - Z^-$, where $Z^+ = \max(0, Z - \mu)$ and $Z^- = \max(0, \mu - Z)$. Let $p = \mathbb{E}[Z]$, $p^+ = \mathbb{E}[Z^+]$, and $p^- = \mathbb{E}[Z^-]$. For an i.i.d. sample $Z_1, \ldots, Z_n$ let $\hat p^+ = \frac{1}{n}\sum_{i=1}^n Z_i^+$ and $\hat p^- = \frac{1}{n}\sum_{i=1}^n Z_i^-$. With these definitions we present the split-kl inequality.

Theorem 4 (Split-kl Inequality). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables in a $[a,b]$ interval for some $a, b \in \mathbb{R}$. Then for any $\mu \in [a,b]$ and $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \mu + (b-\mu)\,\mathrm{kl}^{-1,+}\left(\frac{\hat p^+}{b-\mu}, \frac{1}{n}\ln\frac{2}{\delta}\right) - (\mu-a)\,\mathrm{kl}^{-1,-}\left(\frac{\hat p^-}{\mu-a}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) \le \delta. \quad (5)$$

Proof. Since $p = \mu + p^+ - p^-$, the probability in (5) is upper bounded by
$$\mathbb{P}\left(p^+ \ge (b-\mu)\,\mathrm{kl}^{-1,+}\left(\frac{\hat p^+}{b-\mu}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) + \mathbb{P}\left(p^- \le (\mu-a)\,\mathrm{kl}^{-1,-}\left(\frac{\hat p^-}{\mu-a}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) \le \delta,$$
where the last inequality follows by application of the kl upper and lower bounds from Theorem 1 to the first and second terms, respectively.

For ternary random variables the best choice is to take µ to be the middle value; then the resulting $Z^+$ and $Z^-$ are binary, the corresponding kl upper and lower bounds on $p^+$ and $p^-$ are tight, and the resulting split-kl bound is tight. The inequality can be applied to any bounded random variables but, in the same way as the kl inequality is not necessarily a good choice for bounded random variables if the distribution is not binary, the split-kl is not necessarily a good choice if the distribution is not ternary.
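To make the computation in (5) concrete, here is a minimal numerical sketch in Python. It assumes NumPy and SciPy are available; the function names (kl_bernoulli, kl_inv_upper, kl_inv_lower, split_kl_bound) are our own illustration, not code from the paper, and the kl inverses are found by root finding since no closed form exists.

```python
# Minimal sketch of the split-kl bound (5); all names are our own illustration.
import numpy as np
from scipy.optimize import brentq

EPS = 1e-12

def kl_bernoulli(p_hat, p):
    """kl(p_hat || p) between two Bernoulli distributions."""
    p_hat = min(max(p_hat, EPS), 1 - EPS)
    p = min(max(p, EPS), 1 - EPS)
    return p_hat * np.log(p_hat / p) + (1 - p_hat) * np.log((1 - p_hat) / (1 - p))

def kl_inv_upper(p_hat, eps):
    """kl^{-1,+}(p_hat, eps): largest p with kl(p_hat || p) <= eps."""
    if kl_bernoulli(p_hat, 1 - EPS) <= eps:
        return 1.0
    return brentq(lambda p: kl_bernoulli(p_hat, p) - eps, p_hat, 1 - EPS)

def kl_inv_lower(p_hat, eps):
    """kl^{-1,-}(p_hat, eps): smallest p with kl(p_hat || p) <= eps."""
    if kl_bernoulli(p_hat, EPS) <= eps:
        return 0.0
    return brentq(lambda p: kl_bernoulli(p_hat, p) - eps, EPS, p_hat)

def split_kl_bound(z, a, b, mu, delta):
    """Upper bound on E[Z] from Theorem 4; mu must lie strictly inside (a, b)."""
    n = len(z)
    p_plus = np.mean(np.maximum(0.0, z - mu)) / (b - mu)    # rescaled to [0, 1]
    p_minus = np.mean(np.maximum(0.0, mu - z)) / (mu - a)   # rescaled to [0, 1]
    eps = np.log(2 / delta) / n
    return mu + (b - mu) * kl_inv_upper(p_plus, eps) - (mu - a) * kl_inv_lower(p_minus, eps)

rng = np.random.default_rng(0)
z = rng.choice([-1.0, 0.0, 1.0], size=100, p=[0.1, 0.6, 0.3])  # ternary sample
print(split_kl_bound(z, a=-1.0, b=1.0, mu=0.0, delta=0.05))
```

The sample distribution and the choice µ = 0 (the middle value) here are purely illustrative.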
2.3 Empirical Comparison

We present an empirical comparison of the tightness of the above four concentration inequalities: the kl, the Empirical Bernstein, the Unexpected Bernstein, and the split-kl. We take n i.i.d. samples $Z_1, \ldots, Z_n$ taking values in $\{-1, 0, 1\}$. The choice is motivated both by instructiveness of presentation and by subsequent applications to excess losses. We let $p_{-1} = \mathbb{P}(Z = -1)$, $p_0 = \mathbb{P}(Z = 0)$, and $p_1 = \mathbb{P}(Z = 1)$, where $p_{-1} + p_0 + p_1 = 1$. Then $p = \mathbb{E}[Z] = p_1 - p_{-1}$. We also let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$. In Figure 1 we plot the difference between the bounds on p given by the inequalities (1), (3), (4), and (5), and $\hat p$. Lower values in the plot correspond to tighter bounds. To compute the kl bound we first rescale the losses to the $[0,1]$ interval, and then rescale the bound back to the $[-1,1]$ interval. For the Empirical Bernstein bound we take $a = -1$ and $b = 1$. For the Unexpected Bernstein bound we take a grid of $\gamma \in \{1/(2b), \ldots, 1/(2^k b)\}$ for $k = \lceil \log_2(\sqrt{n/\ln(1/\delta)}/2) \rceil$ and a union bound over the grid, as proposed by Mhammedi et al. [2019]. For the split-kl bound we take µ to be the middle value, 0, of the ternary random variable. In the experiments we take $\delta = 0.05$, and truncate the bounds at 1.

In the first experiment, presented in Figure 1a, we take $p_{-1} = p_1 = (1 - p_0)/2$ and plot the difference between the values of the bounds and $\hat p$ as a function of $p_0$. For $p_0 = 0$ the random variable Z is Bernoulli and, as expected, the kl inequality performs the best, followed by split-kl, and then Unexpected Bernstein. As $p_0$ grows closer to 1, the variance of Z decreases and, also as expected, the kl inequality falls behind, whereas split-kl and Unexpected Bernstein go closely together. Empirical Bernstein falls behind all other bounds throughout most of the range, except slightly outperforming kl when $p_0$ gets very close to 1.

In the second experiment, presented in Figure 1b, we take a skewed random variable with $p_1 = 0.99(1 - p_0)$ and $p_{-1} = 0.01(1 - p_0)$, and again plot the difference between the values of the bounds and $\hat p$ as a function of $p_0$. This time the kl also starts well for $p_0$ close to zero, but then falls behind due to its inability to properly handle values inside the interval. Unexpected Bernstein exhibits the opposite trend due to being based on the uncentered second moment, which is high when $p_0$ is close to zero, even though the variance is small in this case. Empirical Bernstein lags behind all other bounds for most of the range due to poor constants, whereas split-kl matches the tightest bounds, the kl and Unexpected Bernstein, at the endpoints of the range of $p_0$, and outperforms all other bounds in the middle of the range, around $p_0 = 0.6$, due to being able to exploit the combinatorics of the problem.

The experiments demonstrate that for ternary random variables the split-kl is a powerful alternative to existing concentration of measure inequalities. To the best of our knowledge, this is also the first empirical evaluation of the Unexpected Bernstein inequality, and it shows that in many cases it is also a powerful inequality. We also observe that in most settings the Empirical Bernstein is weaker than the other three inequalities we consider. Numerical evaluations in additional settings are provided in Appendix D.

Figure 1: Empirical comparison of the concentration bounds. (a) $n = 100$, $\delta = 0.05$, and $p_{-1} = p_1 = 0.5(1-p_0)$. (b) $n = 100$, $\delta = 0.05$, $p_1 = 0.99(1-p_0)$, and $p_{-1} = 0.01(1-p_0)$.

3 PAC-Bayesian Inequalities

Now we elevate the basic concentration of measure inequalities to the PAC-Bayesian domain. We start with the supervised learning problem setup, then provide a background on existing PAC-Bayesian inequalities, and finish with presentation of the PAC-Bayes-split-kl inequality.

3.1 Supervised Learning Problem Setup and Notations

Let $\mathcal{X}$ be a sample space, $\mathcal{Y}$ be a label space, and let $S = \{(X_i, Y_i)\}_{i=1}^n$ be an i.i.d. sample drawn according to an unknown distribution $\mathcal{D}$ on the product space $\mathcal{X} \times \mathcal{Y}$. Let $\mathcal{H}$ be a hypothesis space containing hypotheses $h : \mathcal{X} \to \mathcal{Y}$. The quality of a hypothesis h is measured using the zero-one loss $\ell(h(X), Y) = \mathbb{1}(h(X) \ne Y)$, where $\mathbb{1}(\cdot)$ is the indicator function. The expected loss of h is denoted by $L(h) = \mathbb{E}_{(X,Y)\sim\mathcal{D}}[\ell(h(X), Y)]$, and the empirical loss of h on a sample S is denoted by $\hat L(h, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \ell(h(X), Y)$. We use $\mathbb{E}_{\mathcal{D}}[\cdot]$ as a shorthand for $\mathbb{E}_{(X,Y)\sim\mathcal{D}}[\cdot]$.

PAC-Bayesian bounds bound the generalization error of Gibbs prediction rules. For each input $X \in \mathcal{X}$, the Gibbs prediction rule associated with a distribution ρ on $\mathcal{H}$ randomly draws a hypothesis $h \in \mathcal{H}$ according to ρ and predicts $h(X)$. The expected loss of the Gibbs prediction rule is $\mathbb{E}_{h\sim\rho}[L(h)]$ and the empirical loss is $\mathbb{E}_{h\sim\rho}[\hat L(h, S)]$. We use $\mathbb{E}_\rho[\cdot]$ as a shorthand for $\mathbb{E}_{h\sim\rho}[\cdot]$.

3.2 PAC-Bayesian Analysis Background

Now we present a brief background on the relevant results from the PAC-Bayesian analysis.

PAC-Bayes-kl Inequality The PAC-Bayes-kl inequality cited below is one of the tightest known generalization bounds on the expected loss of the Gibbs prediction rule.

Theorem 5 (PAC-Bayes-kl Inequality [Seeger, 2002, Maurer, 2004]). For any probability distribution π on $\mathcal{H}$ that is independent of S and any $\delta \in (0,1)$:
$$\mathbb{P}\left(\exists \rho \in \mathcal{P} : \mathrm{kl}\left(\mathbb{E}_\rho[\hat L(h, S)] \,\Big\|\, \mathbb{E}_\rho[L(h)]\right) \ge \frac{\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)}{n}\right) \le \delta, \quad (6)$$
where $\mathcal{P}$ is the set of all possible probability distributions on $\mathcal{H}$ that can depend on S.

The following relaxation of the PAC-Bayes-kl inequality, based on the refined Pinsker's relaxation of the kl divergence, helps getting some intuition about the bound [McAllester, 2003]. With probability at least $1 - \delta$, for all $\rho \in \mathcal{P}$ we have
$$\mathbb{E}_\rho[L(h)] \le \mathbb{E}_\rho[\hat L(h, S)] + \sqrt{\frac{2\,\mathbb{E}_\rho[\hat L(h, S)]\left(\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)\right)}{n}} + \frac{2\left(\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)\right)}{n}. \quad (7)$$

If $\mathbb{E}_\rho[\hat L(h, S)]$ is close to zero, the middle term in the inequality above vanishes, leading to so-called "fast convergence rates" (convergence of $\mathbb{E}_\rho[\hat L(h, S)]$ to $\mathbb{E}_\rho[L(h)]$ at the rate of $1/n$). However, achieving low $\mathbb{E}_\rho[\hat L(h, S)]$ is not always possible [Dziugaite and Roy, 2017, Zhou et al., 2019]. Subsequent research in PAC-Bayesian analysis has focused on two goals: (1) achieving fast convergence rates when the variance of prediction errors is low (and not necessarily the errors themselves), and (2) reducing the $\mathrm{KL}(\rho\|\pi)$ term, which may be quite large for large hypothesis spaces. For the first goal Tolstikhin and Seldin [2013] developed the PAC-Bayes-Empirical-Bernstein inequality, and Mhammedi et al. [2019] proposed to use excess losses and also derived the alternative PAC-Bayes-Unexpected-Bernstein inequality. For the second goal Ambroladze et al. [2007] suggested to use informed priors, and Mhammedi et al. [2019] perfected the idea by proposing to average over "forward" and "backward" constructions with informed priors. Next we explain the ideas behind the excess losses and informed priors in more detail.

Excess Losses Let h∗ be a reference prediction rule that is independent of S. We define the excess loss of a prediction rule h with respect to the reference h∗ by $\Delta_\ell(h(X), h^*(X), Y) = \ell(h(X), Y) - \ell(h^*(X), Y)$. If ℓ is the zero-one loss, the excess loss naturally gives rise to ternary random variables, but it is well-defined for any real-valued loss function. We use $\Delta L(h, h^*) = \mathbb{E}_{\mathcal{D}}[\Delta_\ell(h(X), h^*(X), Y)] = L(h) - L(h^*)$ to denote the expected excess loss of h relative to h∗ and $\Delta\hat L(h, h^*, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \Delta_\ell(h(X), h^*(X), Y) = \hat L(h, S) - \hat L(h^*, S)$ to denote the empirical excess loss of h relative to h∗. The expected loss of a Gibbs prediction rule can then be written as $\mathbb{E}_\rho[L(h)] = \mathbb{E}_\rho[\Delta L(h, h^*)] + L(h^*)$. A bound on $\mathbb{E}_\rho[L(h)]$ can thus be decomposed into a summation of a PAC-Bayes bound on $\mathbb{E}_\rho[\Delta L(h, h^*)]$ and a bound on $L(h^*)$. When the variance of the excess loss is small, we can use tools that exploit small variance, such as the PAC-Bayes-Empirical-Bernstein, PAC-Bayes-Unexpected-Bernstein, or PAC-Bayes-split-kl inequalities proposed below, to achieve fast convergence rates for the excess loss. Bounding $L(h^*)$ involves just a single prediction rule and does not depend on the value of $\mathrm{KL}(\rho\|\pi)$. We note that it is essential that the variance, and not just the magnitude, of the excess loss is small. For example, if the excess losses primarily take values in $\{-1, 1\}$ and average out to zero, fast convergence rates are impossible.
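The following small illustration (our own construction, not the paper's code) shows how zero-one excess losses yield ternary variables; all arrays are synthetic.

```python
# How zero-one excess losses give ternary random variables in {-1, 0, 1}.
import numpy as np

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=10)        # labels
h = rng.integers(0, 2, size=10)        # predictions of h
h_star = rng.integers(0, 2, size=10)   # predictions of the reference h*

excess = (h != y).astype(int) - (h_star != y).astype(int)  # values in {-1, 0, 1}
print(excess, excess.mean(), excess.var())  # low variance is what enables fast rates
```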
Informed Priors The idea behind informed priors is to split the data into two subsets, $S = S_1 \cup S_2$, and to use $S_1$ to learn a prior $\pi_{S_1}$, and then use it to learn a posterior on $S_2$ [Ambroladze et al., 2007]. Note that since the size of $S_2$ is smaller than the size of S, this approach gains in having a potentially smaller $\mathrm{KL}(\rho\|\pi_{S_1})$, but loses in having a smaller sample size in the denominator of the PAC-Bayes bounds. The balance between the advantage and disadvantage depends on the data: for some data sets it strengthens the bounds, but for some it weakens them. Mhammedi et al. [2019] perfected the approach by proposing to use it in the "forward" and "backward" directions and average over the two. Let $S_1$ and $S_2$ be of equal size. The "forward" part uses $S_1$ to train $\pi_{S_1}$ and then computes a posterior on $S_2$, while the "backward" part uses $S_2$ to train $\pi_{S_2}$ and then computes a posterior on $S_1$. Finally, the two posteriors are averaged with equal weight and the KL term becomes $\frac{1}{2}\left(\mathrm{KL}(\rho\|\pi_{S_1}) + \mathrm{KL}(\rho\|\pi_{S_2})\right)$. See [Mhammedi et al., 2019] for the derivation.

Excess Losses and Informed Priors Excess losses and informed priors make an ideal combination. If we split S into two equal parts, $S = S_1 \cup S_2$, we can use $S_1$ to train both a reference prediction rule $h_{S_1}$ and a prior $\pi_{S_1}$, and then learn a PAC-Bayes posterior on $S_2$, and the other way around. By combining the "forward" and "backward" approaches we can write
$$\mathbb{E}_\rho[L(h)] = \frac{1}{2}\mathbb{E}_\rho[\Delta L(h, h_{S_1})] + \frac{1}{2}\mathbb{E}_\rho[\Delta L(h, h_{S_2})] + \frac{1}{2}\left(L(h_{S_1}) + L(h_{S_2})\right), \quad (8)$$
and we can use PAC-Bayes to bound the first term using the prior $\pi_{S_1}$ and the data in $S_2$, and to bound the second term using the prior $\pi_{S_2}$ and the data in $S_1$, and we can bound $L(h_{S_1})$ and $L(h_{S_2})$ using the "complementary" data in $S_2$ and $S_1$, respectively.

PAC-Bayes-Empirical-Bernstein Inequalities The excess losses are ternary random variables taking values in $\{-1, 0, 1\}$ and, as we have already discussed, the kl inequality is not well-suited for them. PAC-Bayesian inequalities tailored to non-binary random variables were derived by Seldin et al. [2012], Tolstikhin and Seldin [2013], Wu et al. [2021], and Mhammedi et al. [2019]. Seldin et al. [2012] derived the PAC-Bayes-Bernstein oracle bound, which assumes knowledge of the variance. Tolstikhin and Seldin [2013] made it into an empirical bound by deriving the PAC-Bayes-Empirical-Bernstein bound for the variance and plugging it into the PAC-Bayes-Bernstein bound of Seldin et al. Wu et al. [2021] derived an oracle PAC-Bayes-Bennett inequality, which again assumes oracle knowledge of the variance, and showed that it is always at least as tight as the PAC-Bayes-Bernstein, and then also plugged in the PAC-Bayes-Empirical-Bernstein bound on the variance. Mhammedi et al. [2019] derived the PAC-Bayes-Unexpected-Bernstein inequality, which directly uses the empirical second moment. Since we have already shown that the Unexpected Bernstein inequality is tighter than the Empirical Bernstein, and since the approach of Wu et al. requires a combination of two inequalities, PAC-Bayes-Empirical-Bernstein for the variance and PAC-Bayes-Bennett for the loss, whereas the approach of Mhammedi et al. only makes a single application of PAC-Bayes-Unexpected-Bernstein, we only compare our work to the latter.

We cite the inequality of Mhammedi et al. [2019], which applies to an arbitrary loss function. We use $\tilde\ell$ and matching tilde-marked quantities to distinguish it from the zero-one loss ℓ. For any $h \in \mathcal{H}$, let $\tilde L(h) = \mathbb{E}_{\mathcal{D}}[\tilde\ell(h(X), Y)]$ be the expected tilde-loss of h, and let $\hat{\tilde L}(h, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \tilde\ell(h(X), Y)$ be the empirical tilde-loss of h on a sample S.

Theorem 6 (PAC-Bayes-Unexpected-Bernstein Inequality [Mhammedi et al., 2019]). Let $\tilde\ell(\cdot,\cdot)$ be an arbitrary loss function bounded from above by b for some $b > 0$, and assume that $\hat{\tilde V}(h, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \tilde\ell(h(X), Y)^2$ is finite for all h. Let $\psi(u) := u - \ln(1+u)$ for $u > -1$. Then for any distribution π on $\mathcal{H}$ that is independent of S, any $\gamma \in (0, 1/b)$, and any $\delta \in (0,1)$:
$$\mathbb{P}\left(\exists \rho \in \mathcal{P} : \mathbb{E}_\rho[\tilde L(h)] \ge \mathbb{E}_\rho[\hat{\tilde L}(h, S)] + \frac{\psi(-\gamma b)}{\gamma b^2}\mathbb{E}_\rho[\hat{\tilde V}(h, S)] + \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{1}{\delta}}{\gamma n}\right) \le \delta,$$
where $\mathcal{P}$ is the set of all possible probability distributions on $\mathcal{H}$ that can depend on S.

In optimization of the bound, we take the same grid of $\gamma \in \{1/(2b), \ldots, 1/(2^k b)\}$ for $k = \lceil \log_2(\sqrt{n/\ln(1/\delta)}/2) \rceil$ and a union bound over the grid, as we did for Theorem 3.
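A minimal sketch (our own code, not the paper's) of the geometric grid of γ values is given below; splitting the confidence as δ/k over the k grid points is one standard way to implement the union bound, stated here as an assumption rather than the paper's exact choice.

```python
# Geometric grid of gamma values for optimizing Theorems 3 and 6.
import numpy as np

def gamma_grid(n, b, delta):
    k = max(1, int(np.ceil(np.log2(np.sqrt(n / np.log(1 / delta)) / 2))))
    return [1.0 / (2 ** j * b) for j in range(1, k + 1)], delta / k

gammas, delta_per_gamma = gamma_grid(n=1000, b=1.0, delta=0.05)
print(gammas, delta_per_gamma)  # e.g. [0.5, 0.25, 0.125, 0.0625] and 0.0125
```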
3.3 PAC-Bayes-Split-kl Inequality

Now we present our PAC-Bayes-split-kl inequality. For an arbitrary loss function $\tilde\ell$ taking values in a $[a,b]$ interval for some $a, b \in \mathbb{R}$, let $\tilde\ell^+ := \max\{0, \tilde\ell - \mu\}$ and $\tilde\ell^- := \max\{0, \mu - \tilde\ell\}$ for some $\mu \in [a,b]$. For any $h \in \mathcal{H}$, let $\tilde L^+(h) = \mathbb{E}_{\mathcal{D}}[\tilde\ell^+(h(X), Y)]$ and $\tilde L^-(h) = \mathbb{E}_{\mathcal{D}}[\tilde\ell^-(h(X), Y)]$. The corresponding empirical losses are denoted by $\hat{\tilde L}^+(h, S) = \frac{1}{n}\sum_{i=1}^n \tilde\ell^+(h(X_i), Y_i)$ and $\hat{\tilde L}^-(h, S) = \frac{1}{n}\sum_{i=1}^n \tilde\ell^-(h(X_i), Y_i)$.

Theorem 7 (PAC-Bayes-Split-kl Inequality). Let $\tilde\ell(\cdot,\cdot)$ be an arbitrary loss function taking values in a $[a,b]$ interval for some $a, b \in \mathbb{R}$. Then for any distribution π on $\mathcal{H}$ that is independent of S, any $\mu \in [a,b]$, and any $\delta \in (0,1)$:
$$\mathbb{P}\left[\exists \rho \in \mathcal{P} : \mathbb{E}_\rho[\tilde L(h)] \ge \mu + (b-\mu)\,\mathrm{kl}^{-1,+}\left(\frac{\mathbb{E}_\rho[\hat{\tilde L}^+(h, S)]}{b-\mu}, \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{4\sqrt{n}}{\delta}}{n}\right) - (\mu-a)\,\mathrm{kl}^{-1,-}\left(\frac{\mathbb{E}_\rho[\hat{\tilde L}^-(h, S)]}{\mu-a}, \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{4\sqrt{n}}{\delta}}{n}\right)\right] \le \delta,$$
where $\mathcal{P}$ is the set of all possible probability distributions on $\mathcal{H}$ that can depend on S.

Proof. We have $\mathbb{E}_\rho[\tilde L(h)] = \mu + \mathbb{E}_\rho[\tilde L^+(h)] - \mathbb{E}_\rho[\tilde L^-(h)]$. Similar to the proof of Theorem 4, we take a union bound of the PAC-Bayes-kl upper bound on $\mathbb{E}_\rho[\tilde L^+(h)]$ and the PAC-Bayes-kl lower bound on $\mathbb{E}_\rho[\tilde L^-(h)]$.

3.4 PAC-Bayes-split-kl with Excess Loss and Informed Prior

Looking back at the expected loss decomposition in equation (8), we can use PAC-Bayes-split-kl to bound the first two terms and a bound on the binomial tail distribution to bound the last term. For n i.i.d. Bernoulli random variables $Z_1, \ldots, Z_n$ with bias $p \in (0,1)$, we define the binomial tail distribution $\mathrm{Bin}(n, k, p) = \mathbb{P}\left(\sum_{i=1}^n Z_i \le k\right)$ and its inverse $\mathrm{Bin}^{-1}(n, k, \delta) = \max\{p : p \in [0,1] \text{ and } \mathrm{Bin}(n, k, p) \ge \delta\}$. The following theorem relates $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ and p.

Theorem 8 (Test Set Bound [Langford, 2005]). Let $Z_1, \ldots, Z_n$ be n i.i.d. Bernoulli random variables with bias $p \in (0,1)$ and let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be the empirical mean. Then for any $\delta \in (0,1)$: $\mathbb{P}\left(p \ge \mathrm{Bin}^{-1}(n, n\hat p, \delta)\right) \le \delta$.

By applying Theorems 7 and 8 to equation (8) we obtain the following result.

Theorem 9. For any $\mu \in [-1,1]$ and any $\delta \in (0,1)$:
$$\mathbb{P}\left(\exists \rho \in \mathcal{P} : \mathbb{E}_\rho[L(h)] \ge \mu + (1-\mu)\,(a) - (\mu+1)\,(b) + \frac{1}{2}(c)\right) \le \delta,$$
where $\mathcal{P}$ is the set of all possible probability distributions on $\mathcal{H}$ that can depend on S,
$$(a) = \mathrm{kl}^{-1,+}\left(\frac{\frac{1}{2}\mathbb{E}_\rho[\Delta^+\hat L(h, h_{S_1}, S_2)] + \frac{1}{2}\mathbb{E}_\rho[\Delta^+\hat L(h, h_{S_2}, S_1)]}{1-\mu}, \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{8\sqrt{n/2}}{\delta}}{n/2}\right),$$
$$(b) = \mathrm{kl}^{-1,-}\left(\frac{\frac{1}{2}\mathbb{E}_\rho[\Delta^-\hat L(h, h_{S_1}, S_2)] + \frac{1}{2}\mathbb{E}_\rho[\Delta^-\hat L(h, h_{S_2}, S_1)]}{\mu+1}, \frac{\mathrm{KL}(\rho\|\pi) + \ln\frac{8\sqrt{n/2}}{\delta}}{n/2}\right),$$
in which $\pi = \frac{1}{2}\pi_{S_1} + \frac{1}{2}\pi_{S_2}$, and
$$(c) = \mathrm{Bin}^{-1}\left(\frac{n}{2}, \frac{n}{2}\hat L(h_{S_1}, S_2), \frac{\delta}{4}\right) + \mathrm{Bin}^{-1}\left(\frac{n}{2}, \frac{n}{2}\hat L(h_{S_2}, S_1), \frac{\delta}{4}\right).$$
The proof is postponed to Appendix C.
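A minimal sketch (our own code, with illustrative parameter values) of the inverse binomial tail used in Theorems 8 and 9 follows; it assumes SciPy's binom.cdf for $\mathrm{Bin}(n, k, p) = \mathbb{P}(\sum_i Z_i \le k)$ and inverts it by root finding.

```python
# Inverse binomial tail bound of Theorem 8.
import numpy as np
from scipy.stats import binom
from scipy.optimize import brentq

def bin_inv(n, k, delta):
    """Largest p in [0, 1] with Bin(n, k, p) >= delta."""
    f = lambda p: binom.cdf(k, n, p) - delta  # decreasing in p
    if f(1.0 - 1e-12) >= 0:
        return 1.0
    return brentq(f, 1e-12, 1.0 - 1e-12)

# Test set bound: with probability >= 1 - delta, p <= bin_inv(n, n * p_hat, delta).
print(bin_inv(n=100, k=12, delta=0.05))  # for p_hat = 0.12, roughly 0.2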
4 Experiments

We evaluate the performance of the PAC-Bayes-split-kl inequality in linear classification and in weighted majority vote using several data sets from the UCI and LibSVM repositories [Dua and Graff, 2019, Chang and Lin, 2011]. An overview of the data sets is provided in Appendix E.1. For linear classification we reproduce the experimental setup of Mhammedi et al. [2019], and for the weighted majority vote we reproduce the experimental setup of Wu et al. [2021].

4.1 The Experimental Setup of Mhammedi et al. [2019]: Linear Classifiers

In the first experiment we follow the experimental setup of Mhammedi et al. [2019], who consider binary classification problems with linear classifiers in $\mathbb{R}^d$ and Gaussian priors and posteriors. A classifier $h_w$ associated with a vector $w \in \mathbb{R}^d$ makes a prediction on an input X by $h_w(X) = \mathbb{1}(w^\top X > 0)$. The posteriors have the form of Gaussian distributions centered at $w_S \in \mathbb{R}^d$, with covariance $\Sigma_S$ that depends on a sample S, $\rho = \mathcal{N}(w_S, \Sigma_S)$. The informed priors $\pi_{S_1} = \mathcal{N}(w_{S_1}, \Sigma_{S_1})$ and $\pi_{S_2} = \mathcal{N}(w_{S_2}, \Sigma_{S_2})$ are also taken to be Gaussian distributions centered at $w_{S_1}$ and $w_{S_2}$, with covariances $\Sigma_{S_1}$ and $\Sigma_{S_2}$, respectively. We take the classifier associated with $w_{S_1}$ as the reference classifier $h_{S_1}$ and the classifier associated with $w_{S_2}$ as the reference classifier $h_{S_2}$. More details on the construction are provided in Appendix E.2.

Figure 2 compares the PAC-Bayes-Unexpected-Bernstein bound (PBUB) and the PAC-Bayes-split-kl bound (PBSkl) with excess losses and informed priors. The ternary random variables in this setup take values in $\{-1, 0, 1\}$, and we select µ to be the middle value, 0. Since the PAC-Bayes-kl bound (PBkl) is one of the tightest known generalization bounds, we take PBkl with informed priors as a baseline. The details on bound calculation and optimization are provided in Appendix E.2. In this experiment all three bounds, PBkl, PBUB, and PBSkl, performed comparably. We believe that the reason is that with informed priors the $\mathrm{KL}(\rho\|\pi)$ term is small. From the relaxation of the PBkl bound in equation (7), we observe that a small $\mathrm{KL}(\rho\|\pi)$ term implies a smaller difference between fast and slow convergence rates, and thus a smaller advantage to bounding the excess loss instead of the raw loss. In other words, we believe that the effect of using informed priors dominates the effect of using excess losses. We note that in order to use excess losses we need to train the reference hypothesis h∗ on part of the data and, therefore, training an informed prior on the same data comes at no extra cost.

4.2 The Experimental Setup of Wu et al. [2021]: Weighted Majority Vote

In the second experiment we reproduce the experimental setup of Wu et al. [2021], who consider multiclass classification by a weighted majority vote. Given an input $X \in \mathcal{X}$, a hypothesis space $\mathcal{H}$, and a distribution ρ on $\mathcal{H}$, a ρ-weighted majority vote classifier predicts $\mathrm{MV}_\rho(X) = \mathrm{argmax}_{y\in\mathcal{Y}} \mathbb{E}_\rho[\mathbb{1}(h(X) = y)]$. One of the tightest bounds for the majority vote is the tandem bound (TND) proposed by Masegosa et al. [2020], which is based on tandem losses for pairs of hypotheses, $\ell(h(X), h'(X), Y) = \mathbb{1}(h(X) \ne Y)\,\mathbb{1}(h'(X) \ne Y)$, and the second order Markov's inequality. Wu et al. [2021] proposed two improved forms of the bound, both based on a parametric form of the Chebyshev-Cantelli inequality. The first, CCTND, uses Chebyshev-Cantelli with the tandem losses and the PAC-Bayes-kl bound for bounding the tandem losses. The second, CCPBB, uses tandem losses with an offset, defined by $\ell_\alpha(h(X), h'(X), Y) = (\mathbb{1}(h(X) \ne Y) - \alpha)(\mathbb{1}(h'(X) \ne Y) - \alpha)$ for $\alpha < 0.5$, and the PAC-Bayes-Empirical-Bennett inequality for bounding the tandem losses with an offset. We note that while the tandem losses are binary random variables, tandem losses with an offset are ternary random variables taking values in $\{\alpha^2, -\alpha(1-\alpha), (1-\alpha)^2\}$ and, therefore, application of Empirical Bernstein type inequalities makes sense. However, in the experiments of Wu et al. CCPBB lagged behind TND and CCTND. We replaced PAC-Bayes-Empirical-Bennett with PAC-Bayes-Unexpected-Bernstein (CCPBUB) and PAC-Bayes-split-kl (CCPBSkl) and showed that the weakness of CCPBB was caused by looseness of PAC-Bayes-Empirical-Bernstein, and that CCPBUB and CCPBSkl lead to tighter bounds that are competitive with, and sometimes outperform, TND and CCTND. For the PAC-Bayes-split-kl bound we took µ to be the middle value of the tandem loss with an offset, namely, for $\alpha \ge 0$ we took $\mu = \alpha^2$, and for $\alpha < 0$ we took $\mu = -\alpha(1-\alpha)$.
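The following small sketch (our own code, not the paper's) makes the offset tandem loss and the middle-value choice of µ concrete; the value of α is illustrative.

```python
# Tandem loss with an offset and the middle-value choice of mu for CCPBSkl.
import numpy as np

def tandem_loss_offset(l1, l2, alpha):
    """(l1 - alpha)(l2 - alpha) for zero-one losses l1, l2 in {0, 1}."""
    return (l1 - alpha) * (l2 - alpha)

def middle_mu(alpha):
    # The three possible values are alpha^2, -alpha(1 - alpha), (1 - alpha)^2;
    # the middle one is alpha^2 for alpha >= 0 and -alpha(1 - alpha) otherwise.
    return alpha ** 2 if alpha >= 0 else -alpha * (1 - alpha)

alpha = 0.2
vals = sorted({tandem_loss_offset(a, b, alpha) for a in (0, 1) for b in (0, 1)})
print(vals, middle_mu(alpha))  # approximately [-0.16, 0.04, 0.64] and mu = 0.04
```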
In Figure 3 we present a comparison of the TND, CCTND, CCPBB, CCPBUB, and CCPBSkl bounds on the weighted majority vote of heterogeneous classifiers (Linear Discriminant Analysis, k-Nearest Neighbors, Decision Tree, Logistic Regression, and Gaussian Naive Bayes), which adds the two new bounds, CCPBUB and CCPBSkl, to the experiment done by Wu et al. [2021]. A more detailed description of the experiment and results for additional data sets are provided in Appendix E.3. We note that CCPBUB and CCPBSkl consistently outperform CCPBB, demonstrating that they are more appropriate for tandem losses with an offset. The former two bounds perform comparably to TND and CCTND, which operate on tandem losses without an offset. In Appendix E.4 we replicate another experiment of Wu et al., where we use the bounds to reweigh trees in a random forest classifier. The results are similar to the results for heterogeneous classifiers.

5 Discussion

We have presented the split-kl and PAC-Bayes-split-kl inequalities. The inequalities answer a long-standing open question on how to exploit the structure of ternary random variables in order to provide tight concentration bounds. The proposed split-kl and PAC-Bayes-split-kl inequalities are as tight for ternary random variables as the kl and PAC-Bayes-kl inequalities are tight for binary random variables. In our empirical evaluation the split-kl inequality was always competitive with the kl and Unexpected Bernstein inequalities and outperformed both in certain regimes, whereas Empirical Bernstein typically lagged behind. In our experiments in the PAC-Bayesian setting the PAC-Bayes-split-kl was always comparable to PAC-Bayes-Unexpected-Bernstein, whereas PAC-Bayes-Empirical-Bennett most often lagged behind. The first two inequalities were usually comparable to PAC-Bayes-kl, although in some cases the attempt to exploit low variance did not pay off and PAC-Bayes-kl outperformed, which is also the trend observed earlier by Mhammedi et al. [2019]. To the best of our knowledge, this is the first time when the various approaches to exploitation of low variance were directly compared, and the proposed split-kl emerged as a clear winner in the basic setting, whereas in the PAC-Bayes setting in our experiments the PAC-Bayes-Unexpected-Bernstein and PAC-Bayes-split-kl were comparable, and preferable over PAC-Bayes-Empirical-Bernstein and PAC-Bayes-Empirical-Bennett.

Acknowledgments and Disclosure of Funding

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. The authors also acknowledge partial support by the Independent Research Fund Denmark, grant number 0135-00259B.
1. What is the focus and contribution of the paper regarding concentration inequalities and PAC-Bayes bounds? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and comprehensiveness? 3. What are the weaknesses of the paper, especially regarding the significance and impact of the proposed method when combined with existing approaches? 4. How much does the use of "informed prior" affect the effectiveness of the generalization bounds in this context? 5. Could the authors provide additional references to support their approach using "informed prior"? 6. Would it be possible to visually distinguish the proposed bound in Figures 2 and 3? 7. How would the proposed method perform for more complex models such as LeNet for MNIST? 8. Are there any potential negative societal impacts associated with the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors introduced a new approach to a concentration inequality for random variables over a bounded interval, called the "split-kl inequality", which first decomposes the original random variable into three terms and then applies an existing bound, the "kl inequality", to the decomposed terms. The authors then proposed to use the split-kl inequality for PAC-Bayes bounds on the generalisation error of learning algorithms, as well as to combine it with the existing approaches of excess loss and informed prior. The derived PAC-Bayes generalisation error bounds were compared and examined in a few different experiments.

Strengths And Weaknesses The reviewer is personally very much fond of the authors' writing in this paper, which explains important matters of this work and other existing works in an intuitive and comprehensive manner. For example, the motivation of this work is nicely lined up with a proper technical level for wide audiences in the introduction. In addition, the advantage of the split-kl inequality has been made clear in Figure 1. Comprehensive presentation and simplicity of the idea are a clear strength of this work. My main concern is the significance / impact when we combine this idea with PAC-Bayes bounds. The derived new generalisation bound in Figures 2 and 3 seemed similar to the other existing bounds at first glance, or it was unclear how to interpret the level of improvement. For the first experiment, for example, since the authors combined their idea of the split-kl inequality with the existing approach of "informed priors", some might get the impression from these figures that the "informed prior" part has already done the majority of the work of lowering each bound, and they may wonder how critical the improvement by the split-kl part is.

Questions How crucial is the "informed prior" to producing meaningful generalisation bounds in this context? — Would it be possible to see how much improvement of the bounds is due to the "informed prior"? I was also not familiar with how commonly or frequently the "informed prior" is used in the PAC-Bayes domain. It may be helpful to more strongly justify that the "informed prior" is a reasonable approach to use in practice by providing additional references. It would be visually helpful to make clear which bound is the proposed one in Figures 2 and 3, e.g. by adding "(Ours)" or something similar to the name label of the proposed one. The models in the experiments dealt with in this paper seem relatively simple. I understand that proving generalisation bounds for complex models is challenging, but I was personally interested in seeing if the generalisation bound still works for more complex models, e.g. LeNet for MNIST.

Limitations There would be no concern about potential negative societal impact. To me personally, the current limitation is that it is difficult to interpret from the experiments or equations whether the proposed idea of PAC-Bayes-split-kl inequalities has improved the generalisation bounds to a fair degree or not. For example, would the difference in the numbers in the figures be significant in the context of PAC-Bayes? The reviewer's position on this paper is neutral and the reviewer is happy to increase the score if the technical or practical impact is well justified.
NIPS
Title Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables

Abstract We present a new concentration of measure inequality for sums of independent bounded random variables, which we name a split-kl inequality. The inequality is particularly well-suited for ternary random variables, which naturally show up in a variety of problems, including analysis of excess losses in classification, analysis of weighted majority votes, and learning with abstention. We demonstrate that for ternary random variables the inequality is simultaneously competitive with the kl inequality, the Empirical Bernstein inequality, and the Unexpected Bernstein inequality, and in certain regimes outperforms all of them. It resolves an open question by Tolstikhin and Seldin [2013] and Mhammedi et al. [2019] on how to match simultaneously the combinatorial power of the kl inequality when the distribution happens to be close to binary and the power of Bernstein inequalities to exploit low variance when the probability mass is concentrated on the middle value. We also derive a PAC-Bayes-split-kl inequality and compare it with the PAC-Bayes-kl, PAC-Bayes-Empirical-Bennett, and PAC-Bayes-Unexpected-Bernstein inequalities in an analysis of excess losses and in an analysis of a weighted majority vote for several UCI datasets. Last, but not least, our study provides the first direct comparison of the Empirical Bernstein and Unexpected Bernstein inequalities and their PAC-Bayes extensions.

1 Introduction

Concentration of measure inequalities for sums of independent random variables are the most fundamental analysis tools in statistics and many other domains [Boucheron et al., 2013]. Their history stretches almost a century back, and inequalities such as Hoeffding's [Hoeffding, 1963] and Bernstein's [Bernstein, 1946] are the main workhorses of learning theory. For binary random variables, one of the tightest concentration of measure inequalities is the kl inequality [Maurer, 2004, Langford, 2005, Foong et al., 2021, 2022], which is based on combinatorial properties of a sum of n independent random variables.¹ However, while being extremely tight for binary random variables and applicable to any bounded random variables, the kl inequality is not necessarily a good choice for sums of bounded random variables that can take more than two values. In the latter case, the Empirical Bernstein [Mnih et al., 2008, Audibert et al., 2009, Maurer and Pontil, 2009] and the Unexpected Bernstein [Cesa-Bianchi et al., 2007, Mhammedi et al., 2019] inequalities can be significantly tighter due to their ability to exploit low variance, as shown by Tolstikhin and Seldin [2013]. However, the Empirical and Unexpected Bernstein inequalities are loose for binary random variables [Tolstikhin and Seldin, 2013].

¹ The Binomial tail bound is slightly tighter, but it does not extend to the PAC-Bayes setting [Langford, 2005]. Our split-kl approach can be directly applied to obtain a "split-Binomial-tail" inequality.

The challenge of exploiting low variance and, at the same time, matching the tightness of the kl inequality if a distribution happens to be close to binary, was faced by multiple prior works [Tolstikhin and Seldin, 2013, Mhammedi et al., 2019, Wu et al., 2021], but remained an open question. We resolve this question for the case of ternary random variables. Such random variables appear in a variety of applications, and we illustrate two of them.
One is a study of excess losses, which are differences between the zero-one losses of a prediction rule h and a reference prediction rule h∗, $Z = \ell(h(X), Y) - \ell(h^*(X), Y) \in \{-1, 0, 1\}$. Mhammedi et al. [2019] have applied the PAC-Bayes-Unexpected-Bernstein bound to excess losses in order to improve generalization bounds for classification. Another example of ternary random variables is the tandem loss with an offset, defined by $\ell_\alpha(h(X), h'(X), Y) = (\ell(h(X), Y) - \alpha)(\ell(h'(X), Y) - \alpha) \in \{\alpha^2, -\alpha(1-\alpha), (1-\alpha)^2\}$. Wu et al. [2021] have applied the PAC-Bayes-Empirical-Bennett inequality to the tandem loss with an offset to obtain a generalization bound for the weighted majority vote. Yet another potential application, which we leave for future work, is learning with abstention [Cortes et al., 2018, Thulasidasan et al., 2019].

We present the split-kl inequality, which simultaneously matches the tightness of the Empirical/Unexpected Bernstein and the kl, and outperforms both for certain distributions. It works for sums of any bounded random variables $Z_1, \ldots, Z_n$, not only the ternary ones, but it is best suited for ternary random variables, for which it is almost tight (in the same sense as the kl is tight for binary random variables). The idea behind the split-kl inequality is to write a random variable Z as $Z = \mu + Z^+ - Z^-$, where µ is a constant, $Z^+ = \max\{0, Z - \mu\}$, and $Z^- = \max\{0, \mu - Z\}$. Then $\mathbb{E}[Z] = \mu + \mathbb{E}[Z^+] - \mathbb{E}[Z^-]$ and, given an i.i.d. sample $Z_1, \ldots, Z_n$, we can bound the distance between $\frac{1}{n}\sum_{i=1}^n Z_i$ and $\mathbb{E}[Z]$ by using kl upper and lower bounds on the distances between $\frac{1}{n}\sum_{i=1}^n Z_i^+$ and $\mathbb{E}[Z^+]$, and $\frac{1}{n}\sum_{i=1}^n Z_i^-$ and $\mathbb{E}[Z^-]$, respectively. For ternary random variables $Z \in \{a, b, c\}$ with $a \le b \le c$, the best split is to take $\mu = b$; then both $Z^+$ and $Z^-$ are binary and the kl upper and lower bounds for their rescaled versions are tight and, therefore, the split-kl inequality for Z is also tight. Thus, this approach provides the best of both worlds: the combinatorial tightness of the kl bound and exploitation of low variance when the probability mass on the middle value happens to be large, as in Empirical Bernstein inequalities. We further elevate the idea to the PAC-Bayes domain and derive a PAC-Bayes-split-kl inequality.

We present an extensive set of experiments, where we first compare the kl, Empirical Bernstein, Unexpected Bernstein, and split-kl inequalities applied to (individual) sums of independent random variables in simulated data, and then compare the PAC-Bayes-kl, PAC-Bayes-Unexpected-Bernstein, PAC-Bayes-split-kl, and, in some of the setups, PAC-Bayes-Empirical-Bennett, for several prediction models on several UCI datasets. In particular, we evaluate the bounds in the linear classification setup studied by Mhammedi et al. [2019] and in the weighted majority prediction setup studied by Wu et al. [2021]. To the best of our knowledge, this is also the first time when the Empirical Bernstein and the Unexpected Bernstein inequalities are directly compared, with and without the PAC-Bayesian extension. In Appendix A.2 we also show that an inequality introduced by Cesa-Bianchi et al. [2007] yields a relaxation of the Unexpected Bernstein inequality by Mhammedi et al. [2019].

2 Concentration of Measure Inequalities for Sums of Independent Random Variables

We start with the most basic question in probability theory and statistics: how far can an average of an i.i.d. sample $Z_1, \ldots, Z_n$ deviate from its expectation?
We cite the major existing inequalities, the kl, Empirical Bernstein, and Unexpected Bernstein, then derive the new split-kl inequality, and then provide a numerical comparison.

2.1 Background

We use $\mathrm{KL}(\rho\|\pi)$ to denote the Kullback-Leibler divergence between two probability distributions, ρ and π [Cover and Thomas, 2006]. We further use $\mathrm{kl}(p\|q)$ as a shorthand for the Kullback-Leibler divergence between two Bernoulli distributions with biases p and q, namely $\mathrm{kl}(p\|q) = \mathrm{KL}((1-p, p)\|(1-q, q))$. For $\hat p \in [0,1]$ and $\varepsilon \ge 0$ we define the upper and lower inverse of kl, respectively, as $\mathrm{kl}^{-1,+}(\hat p, \varepsilon) := \max\{p : p \in [0,1] \text{ and } \mathrm{kl}(\hat p\|p) \le \varepsilon\}$ and $\mathrm{kl}^{-1,-}(\hat p, \varepsilon) := \min\{p : p \in [0,1] \text{ and } \mathrm{kl}(\hat p\|p) \le \varepsilon\}$. The first inequality that we cite is the kl inequality.

Theorem 1 (kl Inequality [Langford, 2005, Foong et al., 2021, 2022]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded in the $[0,1]$ interval and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be their empirical mean. Then, for any $\delta \in (0,1)$:
$$\mathbb{P}\left(\mathrm{kl}(\hat p\|p) \ge \frac{\ln\frac{1}{\delta}}{n}\right) \le \delta$$
and, by inversion of the kl,
$$\mathbb{P}\left(p \ge \mathrm{kl}^{-1,+}\left(\hat p, \tfrac{1}{n}\ln\tfrac{1}{\delta}\right)\right) \le \delta, \quad (1)$$
$$\mathbb{P}\left(p \le \mathrm{kl}^{-1,-}\left(\hat p, \tfrac{1}{n}\ln\tfrac{1}{\delta}\right)\right) \le \delta. \quad (2)$$

We note that the PAC-Bayes-kl inequality (Theorem 5 below) is based on the inequality $\mathbb{E}\left[e^{n\,\mathrm{kl}(\hat p\|p)}\right] \le 2\sqrt{n}$ [Maurer, 2004], which gives $\mathbb{P}\left(\mathrm{kl}(\hat p\|p) \ge \frac{\ln\frac{2\sqrt{n}}{\delta}}{n}\right) \le \delta$. Foong et al. [2021, 2022] reduce the logarithmic factor down to $\ln\frac{1}{\delta}$ by basing the proof on Chernoff's inequality, but this proof technique cannot be combined with PAC-Bayes. Therefore, when we move on to PAC-Bayes we pay the extra $\ln 2\sqrt{n}$ factor in the bounds. It is a long-standing open question whether this factor can be reduced in the PAC-Bayesian setting [Foong et al., 2021]. Next we cite two versions of the Empirical Bernstein inequality.

Theorem 2 (Empirical Bernstein Inequality [Maurer and Pontil, 2009]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded in a $[a,b]$ interval for some $a, b \in \mathbb{R}$, and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be the empirical mean and let $\hat\sigma = \frac{1}{n-1}\sum_{i=1}^n (Z_i - \hat p)^2$ be the empirical variance. Then for any $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \hat p + \sqrt{\frac{2\hat\sigma\ln\frac{2}{\delta}}{n}} + \frac{7(b-a)\ln\frac{2}{\delta}}{3(n-1)}\right) \le \delta. \quad (3)$$

Theorem 3 (Unexpected Bernstein Inequality [Fan et al., 2015, Mhammedi et al., 2019]). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables bounded from above by b for some $b > 0$, and with $\mathbb{E}[Z_i] = p$ for all i. Let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$ be the empirical mean and let $\hat\sigma = \frac{1}{n}\sum_{i=1}^n Z_i^2$ be the empirical mean of the second moments. Let $\psi(u) := u - \ln(1+u)$ for $u > -1$. Then, for any $\gamma \in (0, 1/b)$ and any $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \hat p + \frac{\psi(-\gamma b)}{\gamma b^2}\hat\sigma + \frac{\ln\frac{1}{\delta}}{\gamma n}\right) \le \delta. \quad (4)$$

To facilitate a comparison with other bounds, Theorem 3 provides a slightly different form of the Unexpected Bernstein inequality than the one used by Mhammedi et al. [2019]. We provide a proof of the theorem in Appendix A.1, which is based on the Unexpected Bernstein Lemma [Fan et al., 2015]. We note that an inequality proposed by Cesa-Bianchi et al. [2007] can be used to derive a relaxed version of the Unexpected Bernstein inequality, as discussed in Appendix A.2.

2.2 The Split-kl Inequality

Let Z be a random variable bounded in a $[a,b]$ interval for some $a, b \in \mathbb{R}$ and let $\mu \in [a,b]$ be a constant. We decompose $Z = \mu + Z^+ - Z^-$, where $Z^+ = \max(0, Z - \mu)$ and $Z^- = \max(0, \mu - Z)$. Let $p = \mathbb{E}[Z]$, $p^+ = \mathbb{E}[Z^+]$, and $p^- = \mathbb{E}[Z^-]$. For an i.i.d. sample $Z_1, \ldots, Z_n$ let $\hat p^+ = \frac{1}{n}\sum_{i=1}^n Z_i^+$ and $\hat p^- = \frac{1}{n}\sum_{i=1}^n Z_i^-$. With these definitions we present the split-kl inequality.

Theorem 4 (Split-kl Inequality). Let $Z_1, \ldots, Z_n$ be i.i.d. random variables in a $[a,b]$ interval for some $a, b \in \mathbb{R}$. Then for any $\mu \in [a,b]$ and $\delta \in (0,1)$:
$$\mathbb{P}\left(p \ge \mu + (b-\mu)\,\mathrm{kl}^{-1,+}\left(\frac{\hat p^+}{b-\mu}, \frac{1}{n}\ln\frac{2}{\delta}\right) - (\mu-a)\,\mathrm{kl}^{-1,-}\left(\frac{\hat p^-}{\mu-a}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) \le \delta. \quad (5)$$

Proof. Since $p = \mu + p^+ - p^-$, the probability in (5) is upper bounded by
$$\mathbb{P}\left(p^+ \ge (b-\mu)\,\mathrm{kl}^{-1,+}\left(\frac{\hat p^+}{b-\mu}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) + \mathbb{P}\left(p^- \le (\mu-a)\,\mathrm{kl}^{-1,-}\left(\frac{\hat p^-}{\mu-a}, \frac{1}{n}\ln\frac{2}{\delta}\right)\right) \le \delta,$$
where the last inequality follows by application of the kl upper and lower bounds from Theorem 1 to the first and second terms, respectively.

For ternary random variables the best choice is to take µ to be the middle value; then the resulting $Z^+$ and $Z^-$ are binary, the corresponding kl upper and lower bounds on $p^+$ and $p^-$ are tight, and the resulting split-kl bound is tight. The inequality can be applied to any bounded random variables but, in the same way as the kl inequality is not necessarily a good choice for bounded random variables if the distribution is not binary, the split-kl is not necessarily a good choice if the distribution is not ternary.
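Before the empirical comparison that follows, here is a minimal sketch (our own code, with illustrative parameter values) of how the Empirical Bernstein (3) and Unexpected Bernstein (4) upper bounds can be evaluated for a bounded i.i.d. sample.

```python
# Empirical Bernstein (3) and Unexpected Bernstein (4) upper bounds on E[Z].
import numpy as np

def empirical_bernstein(z, a, b, delta):
    n = len(z)
    var = np.var(z, ddof=1)  # empirical variance with 1/(n - 1) normalization
    return (np.mean(z) + np.sqrt(2 * var * np.log(2 / delta) / n)
            + 7 * (b - a) * np.log(2 / delta) / (3 * (n - 1)))

def unexpected_bernstein(z, b, gamma, delta):
    n = len(z)
    second_moment = np.mean(np.asarray(z) ** 2)  # uncentered second moment
    psi = lambda u: u - np.log1p(u)
    return (np.mean(z) + psi(-gamma * b) / (gamma * b ** 2) * second_moment
            + np.log(1 / delta) / (gamma * n))

rng = np.random.default_rng(2)
z = rng.choice([-1.0, 0.0, 1.0], size=100, p=[0.05, 0.8, 0.15])
print(empirical_bernstein(z, a=-1.0, b=1.0, delta=0.05))
print(unexpected_bernstein(z, b=1.0, gamma=0.25, delta=0.05))
```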
2.3 Empirical Comparison

We present an empirical comparison of the tightness of the above four concentration inequalities: the kl, the Empirical Bernstein, the Unexpected Bernstein, and the split-kl. We take n i.i.d. samples $Z_1, \ldots, Z_n$ taking values in $\{-1, 0, 1\}$. The choice is motivated both by instructiveness of presentation and by subsequent applications to excess losses. We let $p_{-1} = \mathbb{P}(Z = -1)$, $p_0 = \mathbb{P}(Z = 0)$, and $p_1 = \mathbb{P}(Z = 1)$, where $p_{-1} + p_0 + p_1 = 1$. Then $p = \mathbb{E}[Z] = p_1 - p_{-1}$. We also let $\hat p = \frac{1}{n}\sum_{i=1}^n Z_i$. In Figure 1 we plot the difference between the bounds on p given by the inequalities (1), (3), (4), and (5), and $\hat p$. Lower values in the plot correspond to tighter bounds. To compute the kl bound we first rescale the losses to the $[0,1]$ interval, and then rescale the bound back to the $[-1,1]$ interval. For the Empirical Bernstein bound we take $a = -1$ and $b = 1$. For the Unexpected Bernstein bound we take a grid of $\gamma \in \{1/(2b), \ldots, 1/(2^k b)\}$ for $k = \lceil \log_2(\sqrt{n/\ln(1/\delta)}/2) \rceil$ and a union bound over the grid, as proposed by Mhammedi et al. [2019]. For the split-kl bound we take µ to be the middle value, 0, of the ternary random variable. In the experiments we take $\delta = 0.05$, and truncate the bounds at 1.

In the first experiment, presented in Figure 1a, we take $p_{-1} = p_1 = (1 - p_0)/2$ and plot the difference between the values of the bounds and $\hat p$ as a function of $p_0$. For $p_0 = 0$ the random variable Z is Bernoulli and, as expected, the kl inequality performs the best, followed by split-kl, and then Unexpected Bernstein. As $p_0$ grows closer to 1, the variance of Z decreases and, also as expected, the kl inequality falls behind, whereas split-kl and Unexpected Bernstein go closely together. Empirical Bernstein falls behind all other bounds throughout most of the range, except slightly outperforming kl when $p_0$ gets very close to 1.

In the second experiment, presented in Figure 1b, we take a skewed random variable with $p_1 = 0.99(1 - p_0)$ and $p_{-1} = 0.01(1 - p_0)$, and again plot the difference between the values of the bounds and $\hat p$ as a function of $p_0$. This time the kl also starts well for $p_0$ close to zero, but then falls behind due to its inability to properly handle values inside the interval. Unexpected Bernstein exhibits the opposite trend due to being based on the uncentered second moment, which is high when $p_0$ is close to zero, even though the variance is small in this case. Empirical Bernstein lags behind all other bounds for most of the range due to poor constants, whereas split-kl matches the tightest bounds, the kl and Unexpected Bernstein, at the endpoints of the range of $p_0$, and outperforms all other bounds in the middle of the range, around $p_0 = 0.6$, due to being able to exploit the combinatorics of the problem.

The experiments demonstrate that for ternary random variables the split-kl is a powerful alternative to existing concentration of measure inequalities. To the best of our knowledge, this is also the first empirical evaluation of the Unexpected Bernstein inequality, and it shows that in many cases it is also a powerful inequality. We also observe that in most settings the Empirical Bernstein is weaker than the other three inequalities we consider. Numerical evaluations in additional settings are provided in Appendix D.

Figure 1: Empirical comparison of the concentration bounds. (a) $n = 100$, $\delta = 0.05$, and $p_{-1} = p_1 = 0.5(1-p_0)$. (b) $n = 100$, $\delta = 0.05$, $p_1 = 0.99(1-p_0)$, and $p_{-1} = 0.01(1-p_0)$.

3 PAC-Bayesian Inequalities

Now we elevate the basic concentration of measure inequalities to the PAC-Bayesian domain. We start with the supervised learning problem setup, then provide a background on existing PAC-Bayesian inequalities, and finish with presentation of the PAC-Bayes-split-kl inequality.

3.1 Supervised Learning Problem Setup and Notations

Let $\mathcal{X}$ be a sample space, $\mathcal{Y}$ be a label space, and let $S = \{(X_i, Y_i)\}_{i=1}^n$ be an i.i.d. sample drawn according to an unknown distribution $\mathcal{D}$ on the product space $\mathcal{X} \times \mathcal{Y}$. Let $\mathcal{H}$ be a hypothesis space containing hypotheses $h : \mathcal{X} \to \mathcal{Y}$. The quality of a hypothesis h is measured using the zero-one loss $\ell(h(X), Y) = \mathbb{1}(h(X) \ne Y)$, where $\mathbb{1}(\cdot)$ is the indicator function. The expected loss of h is denoted by $L(h) = \mathbb{E}_{(X,Y)\sim\mathcal{D}}[\ell(h(X), Y)]$, and the empirical loss of h on a sample S is denoted by $\hat L(h, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \ell(h(X), Y)$. We use $\mathbb{E}_{\mathcal{D}}[\cdot]$ as a shorthand for $\mathbb{E}_{(X,Y)\sim\mathcal{D}}[\cdot]$.

PAC-Bayesian bounds bound the generalization error of Gibbs prediction rules. For each input $X \in \mathcal{X}$, the Gibbs prediction rule associated with a distribution ρ on $\mathcal{H}$ randomly draws a hypothesis $h \in \mathcal{H}$ according to ρ and predicts $h(X)$. The expected loss of the Gibbs prediction rule is $\mathbb{E}_{h\sim\rho}[L(h)]$ and the empirical loss is $\mathbb{E}_{h\sim\rho}[\hat L(h, S)]$. We use $\mathbb{E}_\rho[\cdot]$ as a shorthand for $\mathbb{E}_{h\sim\rho}[\cdot]$.

3.2 PAC-Bayesian Analysis Background

Now we present a brief background on the relevant results from the PAC-Bayesian analysis.

PAC-Bayes-kl Inequality The PAC-Bayes-kl inequality cited below is one of the tightest known generalization bounds on the expected loss of the Gibbs prediction rule.

Theorem 5 (PAC-Bayes-kl Inequality [Seeger, 2002, Maurer, 2004]). For any probability distribution π on $\mathcal{H}$ that is independent of S and any $\delta \in (0,1)$:
$$\mathbb{P}\left(\exists \rho \in \mathcal{P} : \mathrm{kl}\left(\mathbb{E}_\rho[\hat L(h, S)] \,\Big\|\, \mathbb{E}_\rho[L(h)]\right) \ge \frac{\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)}{n}\right) \le \delta, \quad (6)$$
where $\mathcal{P}$ is the set of all possible probability distributions on $\mathcal{H}$ that can depend on S.

The following relaxation of the PAC-Bayes-kl inequality, based on the refined Pinsker's relaxation of the kl divergence, helps getting some intuition about the bound [McAllester, 2003]. With probability at least $1 - \delta$, for all $\rho \in \mathcal{P}$ we have
$$\mathbb{E}_\rho[L(h)] \le \mathbb{E}_\rho[\hat L(h, S)] + \sqrt{\frac{2\,\mathbb{E}_\rho[\hat L(h, S)]\left(\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)\right)}{n}} + \frac{2\left(\mathrm{KL}(\rho\|\pi) + \ln(2\sqrt{n}/\delta)\right)}{n}. \quad (7)$$

If $\mathbb{E}_\rho[\hat L(h, S)]$ is close to zero, the middle term in the inequality above vanishes, leading to so-called "fast convergence rates" (convergence of $\mathbb{E}_\rho[\hat L(h, S)]$ to $\mathbb{E}_\rho[L(h)]$ at the rate of $1/n$). However, achieving low $\mathbb{E}_\rho[\hat L(h, S)]$ is not always possible [Dziugaite and Roy, 2017, Zhou et al., 2019]. Subsequent research in PAC-Bayesian analysis has focused on two goals: (1) achieving fast convergence rates when the variance of prediction errors is low (and not necessarily the errors themselves), and (2) reducing the $\mathrm{KL}(\rho\|\pi)$ term, which may be quite large for large hypothesis spaces. For the first goal Tolstikhin and Seldin [2013] developed the PAC-Bayes-Empirical-Bernstein inequality, and Mhammedi et al. [2019] proposed to use excess losses and also derived the alternative PAC-Bayes-Unexpected-Bernstein inequality. For the second goal Ambroladze et al. [2007] suggested to use informed priors, and Mhammedi et al. [2019] perfected the idea by proposing to average over "forward" and "backward" constructions with informed priors. Next we explain the ideas behind the excess losses and informed priors in more detail.

Excess Losses Let h∗ be a reference prediction rule that is independent of S. We define the excess loss of a prediction rule h with respect to the reference h∗ by $\Delta_\ell(h(X), h^*(X), Y) = \ell(h(X), Y) - \ell(h^*(X), Y)$. If ℓ is the zero-one loss, the excess loss naturally gives rise to ternary random variables, but it is well-defined for any real-valued loss function. We use $\Delta L(h, h^*) = \mathbb{E}_{\mathcal{D}}[\Delta_\ell(h(X), h^*(X), Y)] = L(h) - L(h^*)$ to denote the expected excess loss of h relative to h∗ and $\Delta\hat L(h, h^*, S) = \frac{1}{|S|}\sum_{(X,Y)\in S} \Delta_\ell(h(X), h^*(X), Y) = \hat L(h, S) - \hat L(h^*, S)$ to denote the empirical excess loss of h relative to h∗. The expected loss of a Gibbs prediction rule can then be written as $\mathbb{E}_\rho[L(h)] = \mathbb{E}_\rho[\Delta L(h, h^*)] + L(h^*)$. A bound on $\mathbb{E}_\rho[L(h)]$ can thus be decomposed into a summation of a PAC-Bayes bound on $\mathbb{E}_\rho[\Delta L(h, h^*)]$ and a bound on $L(h^*)$. When the variance of the excess loss is small, we can use tools that exploit small variance, such as the PAC-Bayes-Empirical-Bernstein, PAC-Bayes-Unexpected-Bernstein, or PAC-Bayes-split-kl inequalities proposed below, to achieve fast convergence rates for the excess loss. Bounding $L(h^*)$ involves just a single prediction rule and does not depend on the value of $\mathrm{KL}(\rho\|\pi)$. We note that it is essential that the variance, and not just the magnitude, of the excess loss is small. For example, if the excess losses primarily take values in $\{-1, 1\}$ and average out to zero, fast convergence rates are impossible.

Informed Priors The idea behind informed priors is to split the data into two subsets, $S = S_1 \cup S_2$, and to use $S_1$ to learn a prior $\pi_{S_1}$, and then use it to learn a posterior on $S_2$ [Ambroladze et al., 2007]. Note that since the size of $S_2$ is smaller than the size of S, this approach gains in having a potentially smaller $\mathrm{KL}(\rho\|\pi_{S_1})$, but loses in having a smaller sample size in the denominator of the PAC-Bayes bounds. The balance between the advantage and disadvantage depends on the data: for some data sets it strengthens the bounds, but for some it weakens them. Mhammedi et al. [2019] perfected the approach by proposing to use it in the "forward" and "backward" directions and average over the two. Let $S_1$ and $S_2$ be of equal size. The "forward" part uses $S_1$ to train $\pi_{S_1}$ and then computes a posterior on $S_2$, while the "backward" part uses $S_2$ to train $\pi_{S_2}$ and then computes a posterior on $S_1$. Finally, the two posteriors are averaged with equal weight and the KL term becomes $\frac{1}{2}\left(\mathrm{KL}(\rho\|\pi_{S_1}) + \mathrm{KL}(\rho\|\pi_{S_2})\right)$. See [Mhammedi et al., 2019] for the derivation.
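For Gaussian priors and posteriors (the case used in the experiments in Section 4.1), the averaged KL term has a standard closed form. A minimal sketch follows; the dimensions and parameter values are our own illustration, not the paper's.

```python
# Averaged KL term (KL(rho||pi_S1) + KL(rho||pi_S2)) / 2 for Gaussians.
import numpy as np

def kl_gaussians(m0, S0, m1, S1):
    """KL(N(m0, S0) || N(m1, S1)) via the closed form for Gaussians."""
    d = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

d = 3
rho_m, rho_S = np.zeros(d), np.eye(d)                # posterior rho
pi1_m, pi1_S = 0.1 * np.ones(d), np.eye(d)           # informed prior pi_S1
pi2_m, pi2_S = -0.1 * np.ones(d), 2.0 * np.eye(d)    # informed prior pi_S2
print(0.5 * (kl_gaussians(rho_m, rho_S, pi1_m, pi1_S)
             + kl_gaussians(rho_m, rho_S, pi2_m, pi2_S)))
```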
Informed Priors The idea behind informed priors is to split the data into two subsets, S = S1 ∪ S2, use S1 to learn a prior πS1, and then use it to learn a posterior on S2 [Ambroladze et al., 2007]. Since the size of S2 is smaller than the size of S, this approach gains from a potentially smaller KL(ρ∥πS1), but loses from the smaller sample size in the denominator of the PAC-Bayes bounds. The balance between the advantage and the disadvantage depends on the data: for some data sets the approach strengthens the bounds, while for others it weakens them. Mhammedi et al. [2019] perfected the approach by proposing to use it in the "forward" and "backward" directions and average over the two. Let S1 and S2 be of equal size. The "forward" part uses S1 to train πS1 and then computes a posterior on S2, while the "backward" part uses S2 to train πS2 and then computes a posterior on S1. Finally, the two posteriors are averaged with equal weight and the KL term becomes (1/2)( KL(ρ∥πS1) + KL(ρ∥πS2) ). See Mhammedi et al. [2019] for the derivation.

Excess Losses and Informed Priors Excess losses and informed priors make an ideal combination. If we split S into two equal parts, S = S1 ∪ S2, we can use S1 to train both a reference prediction rule hS1 and a prior πS1, and then learn a PAC-Bayes posterior on S2, and the other way around. By combining the "forward" and "backward" approaches we can write

Eρ[L(h)] = (1/2) Eρ[∆L(h, hS1)] + (1/2) Eρ[∆L(h, hS2)] + (1/2)( L(hS1) + L(hS2) ), (8)

and we can use PAC-Bayes to bound the first term using the prior πS1 and the data in S2, to bound the second term using the prior πS2 and the data in S1, and to bound L(hS1) and L(hS2) using the "complementary" data in S2 and S1, respectively.

PAC-Bayes-Empirical-Bernstein Inequalities The excess losses are ternary random variables taking values in {−1, 0, 1} and, as we have already discussed, the kl inequality is not well-suited for them. PAC-Bayesian inequalities tailored to non-binary random variables were derived by Seldin et al. [2012], Tolstikhin and Seldin [2013], Wu et al. [2021], and Mhammedi et al. [2019]. Seldin et al. [2012] derived the PAC-Bayes-Bernstein oracle bound, which assumes knowledge of the variance. Tolstikhin and Seldin [2013] turned it into an empirical bound by deriving the PAC-Bayes-Empirical-Bernstein bound for the variance and plugging it into the PAC-Bayes-Bernstein bound of Seldin et al. Wu et al. [2021] derived an oracle PAC-Bayes-Bennett inequality, which again assumes oracle knowledge of the variance, showed that it is always at least as tight as the PAC-Bayes-Bernstein, and then also plugged in the PAC-Bayes-Empirical-Bernstein bound on the variance. Mhammedi et al. [2019] derived the PAC-Bayes-Unexpected-Bernstein inequality, which directly uses the empirical second moment. Since we have already shown that the Unexpected Bernstein inequality is tighter than the Empirical Bernstein, and since the approach of Wu et al. requires a combination of two inequalities (PAC-Bayes-Empirical-Bernstein for the variance and PAC-Bayes-Bennett for the loss) whereas the approach of Mhammedi et al. makes only a single application of PAC-Bayes-Unexpected-Bernstein, we compare our work only to the latter.

We cite the inequality of Mhammedi et al. [2019], which applies to an arbitrary loss function. We use ℓ̃ and matching tilde-marked quantities to distinguish it from the zero-one loss ℓ. For any h ∈ H, let L̃(h) = ED[ℓ̃(h(X), Y)] be the expected tilde-loss of h, and let ˆ̃L(h, S) = (1/|S|) ∑_{(X,Y)∈S} ℓ̃(h(X), Y) be the empirical tilde-loss of h on a sample S.

Theorem 6 (PAC-Bayes-Unexpected-Bernstein Inequality [Mhammedi et al., 2019]). Let ℓ̃(·, ·) be an arbitrary loss function bounded from above by b for some b > 0, and assume that ˆ̃V(h, S) = (1/|S|) ∑_{(X,Y)∈S} ℓ̃(h(X), Y)² is finite for all h. Let ψ(u) := u − ln(1 + u) for u > −1. Then for any distribution π on H that is independent of S, any γ ∈ (0, 1/b), and any δ ∈ (0, 1):

P( ∃ρ ∈ P : Eρ[L̃(h)] ≥ Eρ[ˆ̃L(h, S)] + ( ψ(−γb) / (γb²) ) Eρ[ˆ̃V(h, S)] + ( KL(ρ∥π) + ln(1/δ) ) / (γn) ) ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S. In optimization of the bound, we take the same grid γ ∈ {1/(2b), . . . , 1/(2^k b)} for k = ⌈log₂(√(n/ln(1/δ))/2)⌉ and a union bound over the grid, as we did for Theorem 3.
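As an illustration, the following sketch evaluates the bound of Theorem 6 for given ρ-averaged empirical quantities, minimizing over the γ grid described above with the δ budget split evenly over the k grid points; the function name and argument conventions are ours.

import math

def pb_unexpected_bernstein(emp_loss, emp_2nd_moment, kl_rho_pi, n, b, delta):
    # emp_loss = E_rho of the empirical tilde-loss, emp_2nd_moment = E_rho of
    # the empirical second moment. Union bound: delta/k per grid point.
    k = max(1, math.ceil(math.log2(math.sqrt(n / math.log(1 / delta)) / 2)))
    best = float("inf")
    for j in range(1, k + 1):
        gamma = 1.0 / (2 ** j * b)
        psi = -gamma * b - math.log(1 - gamma * b)   # psi(-gamma * b)
        bound = (emp_loss
                 + psi / (gamma * b ** 2) * emp_2nd_moment
                 + (kl_rho_pi + math.log(k / delta)) / (gamma * n))
        best = min(best, bound)
    return best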
3.3 PAC-Bayes-Split-kl Inequality

We now present our PAC-Bayes-split-kl inequality. For an arbitrary loss function ℓ̃ taking values in an interval [a, b] for some a, b ∈ R, let ℓ̃⁺ := max{0, ℓ̃ − µ} and ℓ̃⁻ := max{0, µ − ℓ̃} for some µ ∈ [a, b]. For any h ∈ H, let L̃⁺(h) = ED[ℓ̃⁺(h(X), Y)] and L̃⁻(h) = ED[ℓ̃⁻(h(X), Y)]. The corresponding empirical losses are denoted by ˆ̃L⁺(h, S) = (1/n) ∑_{i=1}^{n} ℓ̃⁺(h(Xi), Yi) and ˆ̃L⁻(h, S) = (1/n) ∑_{i=1}^{n} ℓ̃⁻(h(Xi), Yi).

Theorem 7 (PAC-Bayes-Split-kl Inequality). Let ℓ̃(·, ·) be an arbitrary loss function taking values in an interval [a, b] for some a, b ∈ R. Then for any distribution π on H that is independent of S, any µ ∈ [a, b], and any δ ∈ (0, 1):

P[ ∃ρ ∈ P : Eρ[L̃(h)] ≥ µ + (b − µ) kl^{−1,+}( Eρ[ˆ̃L⁺(h, S)] / (b − µ), ( KL(ρ∥π) + ln(4√n/δ) ) / n ) − (µ − a) kl^{−1,−}( Eρ[ˆ̃L⁻(h, S)] / (µ − a), ( KL(ρ∥π) + ln(4√n/δ) ) / n ) ] ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S.

Proof. We have Eρ[L̃(h)] = µ + Eρ[L̃⁺(h)] − Eρ[L̃⁻(h)]. As in the proof of Theorem 4, we take a union bound of the PAC-Bayes-kl upper bound on Eρ[L̃⁺(h)] and the PAC-Bayes-kl lower bound on Eρ[L̃⁻(h)].

3.4 PAC-Bayes-split-kl with Excess Loss and Informed Prior

Looking back at the expected loss decomposition in equation (8), we can use PAC-Bayes-split-kl to bound the first two terms and a bound on the binomial tail distribution to bound the last term. For n i.i.d. Bernoulli random variables Z1, . . . , Zn with bias p ∈ (0, 1), we define the binomial tail distribution Bin(n, k, p) = P( ∑_{i=1}^{n} Zi ≤ k ) and its inverse Bin^{−1}(n, k, δ) = max{ p : p ∈ [0, 1] and Bin(n, k, p) ≥ δ }. The following theorem relates p̂ = (1/n) ∑_{i=1}^{n} Zi and p.

Theorem 8 (Test Set Bound [Langford, 2005]). Let Z1, . . . , Zn be n i.i.d. Bernoulli random variables with bias p ∈ (0, 1) and let p̂ = (1/n) ∑_{i=1}^{n} Zi be their empirical mean. Then for any δ ∈ (0, 1):

P( p ≥ Bin^{−1}(n, np̂, δ) ) ≤ δ.

By applying Theorems 7 and 8 to equation (8) we obtain the following result.

Theorem 9. For any µ ∈ [−1, 1] and any δ ∈ (0, 1):

P( ∃ρ ∈ P : Eρ[L(h)] ≥ µ + (1 − µ)(a) − (µ + 1)(b) + (1/2)(c) ) ≤ δ,

where P is the set of all possible probability distributions on H that can depend on S,

(a) = kl^{−1,+}( [ (1/2) Eρ[∆⁺L̂(h, hS1, S2)] + (1/2) Eρ[∆⁺L̂(h, hS2, S1)] ] / (1 − µ), [ KL(ρ∥π) + ln(8√(n/2)/δ) ] / (n/2) ),

(b) = kl^{−1,−}( [ (1/2) Eρ[∆⁻L̂(h, hS1, S2)] + (1/2) Eρ[∆⁻L̂(h, hS2, S1)] ] / (µ + 1), [ KL(ρ∥π) + ln(8√(n/2)/δ) ] / (n/2) ),

in which π = (1/2)πS1 + (1/2)πS2, and

(c) = Bin^{−1}( n/2, (n/2) L̂(hS1, S2), δ/4 ) + Bin^{−1}( n/2, (n/2) L̂(hS2, S1), δ/4 ).

The proof is postponed to Appendix C.
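To make the quantities in Theorem 7 concrete, here is a minimal sketch of how the bound can be evaluated, reusing the kl_inv helper sketched in Section 3.2 above; the function name and arguments are ours, and the caller supplies the ρ-averaged empirical positive and negative parts, the KL term, and a < µ < b.

import math

def pb_split_kl(emp_plus, emp_minus, kl_rho_pi, n, a, b, mu, delta):
    # emp_plus  = E_rho of the empirical positive part around mu,
    # emp_minus = E_rho of the empirical negative part around mu.
    # Assumes a < mu < b so both rescalings are well defined.
    eps = (kl_rho_pi + math.log(4 * math.sqrt(n) / delta)) / n
    up = kl_inv(emp_plus / (b - mu), eps, upper=True)    # kl^{-1,+}
    lo = kl_inv(emp_minus / (mu - a), eps, upper=False)  # kl^{-1,-}
    return mu + (b - mu) * up - (mu - a) * lo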
4 Experiments

We evaluate the performance of the PAC-Bayes-split-kl inequality in linear classification and in weighted majority vote using several data sets from the UCI and LibSVM repositories [Dua and Graff, 2019, Chang and Lin, 2011]. An overview of the data sets is provided in Appendix E.1. For linear classification we reproduce the experimental setup of Mhammedi et al. [2019], and for the weighted majority vote we reproduce the experimental setup of Wu et al. [2021].

4.1 The Experimental Setup of Mhammedi et al. [2019]: Linear Classifiers

In the first experiment we follow the experimental setup of Mhammedi et al. [2019], who consider binary classification problems with linear classifiers in R^d and Gaussian priors and posteriors. A classifier hw associated with a vector w ∈ R^d makes a prediction on an input X by hw(X) = 1(w⊤X > 0). The posteriors have the form of Gaussian distributions ρ = N(wS, ΣS) centered at wS ∈ R^d, with covariance ΣS that depends on the sample S. The informed priors πS1 = N(wS1, ΣS1) and πS2 = N(wS2, ΣS2) are likewise Gaussian distributions centered at wS1 and wS2, with covariances ΣS1 and ΣS2, respectively. We take the classifier associated with wS1 as the reference classifier hS1 and the classifier associated with wS2 as the reference classifier hS2. More details on the construction are provided in Appendix E.2.

Figure 2 compares the PAC-Bayes-Unexpected-Bernstein bound (PBUB) and the PAC-Bayes-split-kl bound (PBSkl) with excess losses and informed priors. The ternary random variables in this setup take values in {−1, 0, 1}, and we select µ to be the middle value 0. Since the PAC-Bayes-kl bound (PBkl) is one of the tightest known generalization bounds, we take PBkl with informed priors as a baseline. The details of bound calculation and optimization are provided in Appendix E.2. In this experiment all three bounds, PBkl, PBUB, and PBSkl, performed comparably. We believe the reason is that with informed priors the KL(ρ∥π) term is small. From the relaxation of the PBkl bound in equation (7), we observe that a small KL(ρ∥π) term implies a smaller difference between fast and slow convergence rates, and thus a smaller advantage to bounding the excess loss instead of the raw loss. In other words, we believe that the effect of using informed priors dominates the effect of using excess losses. We note that in order to use excess losses we need to train the reference hypothesis h∗ on part of the data and, therefore, training an informed prior on the same data comes at no extra cost.

4.2 The Experimental Setup of Wu et al. [2021]: Weighted Majority Vote

In the second experiment we reproduce the experimental setup of Wu et al. [2021], who consider multiclass classification by a weighted majority vote. Given an input X ∈ X, a hypothesis space H, and a distribution ρ on H, a ρ-weighted majority vote classifier predicts MVρ(X) = argmax_{y∈Y} Eρ[1(h(X) = y)]. One of the tightest bounds for the majority vote is the tandem bound (TND) proposed by Masegosa et al. [2020], which is based on the tandem losses for pairs of hypotheses, ℓ(h(X), h′(X), Y) = 1(h(X) ≠ Y) 1(h′(X) ≠ Y), and the second-order Markov inequality. Wu et al. [2021] proposed two improved forms of the bound, both based on a parametric form of the Chebyshev-Cantelli inequality. The first, CCTND, uses Chebyshev-Cantelli with the tandem losses and the PAC-Bayes-kl bound for bounding the tandem losses. The second, CCPBB, uses tandem losses with an offset, defined by ℓα(h(X), h′(X), Y) = (1(h(X) ≠ Y) − α)(1(h′(X) ≠ Y) − α) for α < 0.5, and the PAC-Bayes-Empirical-Bennett inequality for bounding the tandem losses with an offset. We note that while the tandem losses are binary random variables, tandem losses with an offset are ternary random variables taking values in {α², −α(1 − α), (1 − α)²} and, therefore, the application of Empirical Bernstein type inequalities makes sense. However, in the experiments of Wu et al., CCPBB lagged behind TND and CCTND. We replaced PAC-Bayes-Empirical-Bennett with PAC-Bayes-Unexpected-Bernstein (CCPBUB) and PAC-Bayes-split-kl (CCPBSkl) and showed that the weakness of CCPBB was caused by the looseness of PAC-Bayes-Empirical-Bernstein, and that CCPBUB and CCPBSkl lead to tighter bounds that are competitive with and sometimes outperform TND and CCTND. For the PAC-Bayes-split-kl bound we took µ to be the middle value of the tandem loss with an offset, namely, for α ≥ 0 we took µ = α², and for α < 0 we took µ = −α(1 − α).
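For concreteness, here is a small sketch (names ours) of the tandem loss with an offset and the choice of µ used for CCPBSkl; the middle-value logic follows the ordering of the three loss values stated above.

import numpy as np

def tandem_loss_offset(h_preds, h2_preds, y, alpha):
    # Tandem loss with offset: (1[h != y] - alpha)(1[h' != y] - alpha),
    # a ternary variable in {alpha^2, -alpha(1 - alpha), (1 - alpha)^2}.
    e1 = (h_preds != y).astype(float) - alpha
    e2 = (h2_preds != y).astype(float) - alpha
    return e1 * e2

def middle_mu(alpha):
    # Middle value of the ternary tandem loss with an offset (alpha < 0.5).
    return alpha ** 2 if alpha >= 0 else -alpha * (1 - alpha)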
In Figure 3 we present a comparison of the TND, CCTND, CCPBB, CCPBUB, and CCPBSkl bounds on the weighted majority vote of heterogeneous classifiers (Linear Discriminant Analysis, k-Nearest Neighbors, Decision Tree, Logistic Regression, and Gaussian Naive Bayes), which adds the two new bounds, CCPBUB and CCPBSkl, to the experiment done by Wu et al. [2021]. A more detailed description of the experiment and results for additional data sets are provided in Appendix E.3. We note that CCPBUB and CCPBSkl consistently outperform CCPBB, demonstrating that they are more appropriate for tandem losses with an offset. The former two bounds perform comparably to TND and CCTND, which operate on tandem losses without an offset. In Appendix E.4 we replicate another experiment of Wu et al., where we use the bounds to reweigh the trees in a random forest classifier. The results are similar to those for heterogeneous classifiers.

5 Discussion

We have presented the split-kl and PAC-Bayes-split-kl inequalities. The inequalities answer a long-standing open question of how to exploit the structure of ternary random variables in order to provide tight concentration bounds. The proposed split-kl and PAC-Bayes-split-kl inequalities are as tight for ternary random variables as the kl and PAC-Bayes-kl inequalities are for binary random variables. In our empirical evaluation the split-kl inequality was always competitive with the kl and Unexpected Bernstein inequalities and outperformed both in certain regimes, whereas Empirical Bernstein typically lagged behind. In our experiments in the PAC-Bayesian setting, PAC-Bayes-split-kl was always comparable to PAC-Bayes-Unexpected-Bernstein, whereas PAC-Bayes-Empirical-Bennett most often lagged behind. The first two inequalities were usually comparable to PAC-Bayes-kl, although in some cases the attempt to exploit low variance did not pay off and PAC-Bayes-kl outperformed them, a trend also observed earlier by Mhammedi et al. [2019]. To the best of our knowledge, this is the first time the various approaches to exploiting low variance have been directly compared. The proposed split-kl emerged as a clear winner in the basic setting, whereas in the PAC-Bayes setting, in our experiments, PAC-Bayes-Unexpected-Bernstein and PAC-Bayes-split-kl were comparable, and preferable to PAC-Bayes-Empirical-Bernstein and PAC-Bayes-Empirical-Bennett.

Acknowledgments and Disclosure of Funding

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199. The authors also acknowledge partial support by the Independent Research Fund Denmark, grant number 0135-00259B.
1. What is the focus and contribution of the paper regarding concentration inequalities?
2. What are the strengths of the proposed approach, particularly in its novel technique?
3. What are the weaknesses of the paper, especially in its empirical evaluation?
4. Do you have any suggestions for improving the paper's content or presentation?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper

The paper introduces a new concentration inequality for the sum of i.i.d. bounded random variables. The paper uses a technique of splitting the samples at a threshold and then applying a kl inequality to each part. This splitting allows using both the lower- and upper-bound kl inequalities. The resulting bound enjoys both the tightness of the kl inequality and the ability to exploit the lower variance of random variables that take values within a segment. The empirical comparison clearly shows the tightness of the new split-kl bound in different regimes, compared to the Empirical Bernstein and the standard kl inequalities. The paper then derives a PAC-Bayes-split-kl inequality and applies it to the excess loss of a binary classification problem. The new bound exploits the lowered variance of the excess losses compared to the binary losses and, therefore, the overall split-kl PAC-Bayes bound can be competitive with the standard kl PAC-Bayes bound, as demonstrated on synthetic and real-world data.

Strengths And Weaknesses

Strengths
- I believe the work is original and well-motivated.
- The use of the splitting technique is clever and novel, as far as I know.
- The paper is well-written and clear.
- The authors provide an adequate survey of related work.
- The empirical evaluation of the split-kl inequality clearly shows its merits.

Weaknesses
- The empirical evaluation of the split-kl PAC-Bayes bound does not seem to give definitive conclusions, besides the looseness of PAC-Bayes-Empirical-Bennett on certain datasets. I suggest adding more controlled synthetic experiments, as were done in Fig. 1 for the concentration bounds, since they can give good intuition about when certain bounds are preferable.

Questions

No additional questions.

Limitations

No additional limitations.
NIPS
1. What is the main contribution of the paper regarding PAC-Bayes bounds for low-variance losses?
2. How does the proposed split-kl bound compare to previous works, such as [1, 2], in terms of technical contribution and originality?
3. What are the strengths and weaknesses of the experimental results presented in the paper?
4. Are there any limitations to the applicability of the proposed bound, especially when applied to real-world scenarios?
5. Can the authors provide further explanations or comparisons regarding the use of different bounds for the weighted majority vote, particularly the first-order bound L(MV) ≤ 2L(ρ), in their experiments?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper

The authors address the question of providing PAC-Bayes bounds for losses when the (empirical) variance is low, as previously addressed by e.g. [1, 2]. A special case of this is finding bounds for ternary losses in {−1, 0, 1}, which arises in two important ways: bounds on the excess misclassification loss, which can also be used as per [1] to tighten PAC-Bayes bounds on the non-excess loss, and, in conjunction with the Cantelli-Chebyshev relaxation given by [3], bounds on the (non-randomized) weighted majority vote via PAC-Bayes.

For losses in {0, 1} the small-kl PAC-Bayes bound [e.g. 4] is usually the tightest, even when the variance is low, but not for losses in [−1, 1] (after rescaling the bound). In order to leverage this, the authors translate each random variable in the sum and decompose it into positive and negative parts, Z_i = µ + Z_i⁺ − Z_i⁻ = µ + max(0, Z_i − µ) − max(0, µ − Z_i), before applying the small-kl bound to the sums of the Z_i⁺ and the Z_i⁻ separately (both of which are {0, 1}-valued in the ternary untranslated case). This is called the split-kl (PAC-Bayes) bound. It is used to prove new concentration and PAC-Bayes bounds. These are further combined with the excess risk and informed prior ideas from [1], or the Cantelli-Chebyshev relaxation from [3], and evaluated in experimental setups taken from the above.

[1] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality.
[2] Ilya Tolstikhin and Yevgeny Seldin. PAC-Bayes-Empirical-Bernstein inequality.
[3] Yi-Shan Wu, Andres Masegosa, Stephan Lorenzen, Christian Igel, and Yevgeny Seldin. Chebyshev-Cantelli PAC-Bayes-Bennett inequality for the weighted majority vote.
[4] John Langford. Tutorial on practical prediction theory for classification.

UPDATE: Overall I am not satisfied with the quite limited evaluation of this bound, which does not show clear improvements over previous results. This weakens the motivation for the paper too, because of the limited number of new technical ideas. Therefore I find myself much more on the borderline than in my original review, and I do agree with some of the criticisms of reviewer nL9t. However, given that related work has previously appeared at NeurIPS with similarly negligible empirical improvements, I will keep my "weak accept" score.

Strengths And Weaknesses

Strengths
- Clarity and motivation: the paper is very well written and was a pleasure to read. The relationship to previous works [1, 2] was very well explained and the incorporation of ideas from [1] was well motivated. The alternative form of the main result from [1] is an improvement in clarity over how it is stated therein, and the situation of this work within its wider context was reasonably clear. My only minor criticism is that the experiments in Section 4.2 do not sufficiently explain the use of the Chebyshev-Cantelli bound and majority votes as used there. This is a shame, as I think the use of the split-kl bound for majority votes is a good use case.
- Relevance: I think the paper makes a contribution to an important and highly active area of machine learning, improving PAC-Bayes bounds, which are among the most useful in contemporary learning theory. The authors bring some ideas from [1] to a wider application, which is a valuable contribution.

Weaknesses
- Technical contribution and originality: here I think the paper falls down a bit. The main technical result is simply a decomposition of a random variable into positive and negative parts, combined with an application of the small-kl PAC-Bayes inequality. This is combined with the excess loss idea from [1] and the experimental setup therein, or the Chebyshev-Cantelli bound from [3] and their experimental setup, all of which is straightforward. Such simple ideas can be very valuable when they lead to breakthroughs, but that does not seem to be the case here, and most of the ideas used in the paper and discussed at length originated with [1].
- Experimental results: in the more important PAC-Bayes setting the new results are quite weak, with the new bound giving very similar results to that of [1]. The bound is not shown to be any improvement as an optimization objective either. The simpler concentration inequality setting is not particularly interesting except as a motivation, and for the ternary r.v.s used an even better bound would be obtained by applying the test set bound (Th. 8) to the decomposition Z = Z⁺ − Z⁻ (i.e. a "split-Binomial" bound).

Questions
- Is the bound stated in Theorem 3 equivalent to the different form given by [1]? It would be nice to show this.
- In the experiments in Section 4.2, it seems all of the bounds are based on the Chebyshev-Cantelli relaxation (with the tandem bound being α = 0). Why have you not also compared to other bounds for the weighted majority vote, in particular the first-order bound L(MV) ≤ 2L(ρ) with the small-kl, which is often the tightest?

Limitations

N/A; the results are primarily of a theoretical nature.
NIPS
Title
Estimators for Multivariate Information Measures in General Probability Spaces

Abstract
Information theoretic quantities play an important role in various settings in machine learning, including causality testing, structure inference in graphical models, time-series problems, feature selection, as well as in providing privacy guarantees. A key quantity of interest is the mutual information and generalizations thereof, including conditional mutual information, multivariate mutual information, total correlation, and directed information. While the aforementioned information quantities are well defined in arbitrary probability spaces, existing estimators add or subtract entropies (we term them ΣH methods). These methods work only in the purely discrete or the purely continuous case, since entropy (or differential entropy) is well defined only in those regimes. In this paper, we define a general graph divergence measure (GDM) as a measure of incompatibility between the observed distribution and a given graphical model structure. This generalizes the aforementioned information measures, and we construct a novel estimator via a coupling trick that directly estimates these multivariate information measures using the Radon-Nikodym derivative. These estimators are proven to be consistent in a general setting which includes several cases where the existing estimators fail, thus providing the only known estimators for the following settings: (1) the data has some discrete and some continuous valued components; (2) some (or all) of the components themselves are discrete-continuous mixtures; (3) the data is real-valued but does not have a joint density on the entire space, rather it is supported on a low-dimensional manifold. We show that our proposed estimators significantly outperform known estimators on synthetic and real datasets.

1 Introduction

Information theoretic quantities, such as mutual information and its generalizations, play an important role in various settings in machine learning and statistical estimation and inference. Here we briefly summarize the role of some generalizations of mutual information in learning (cf. Sec. 2.1 for a mathematical definition of these quantities).

1. Conditional mutual information (CMI) measures the amount of information between two variables X and Y given a third variable Z, and is zero iff X is independent of Y given Z. CMI finds a wide range of applications in learning, including causality testing [1, 2], structure inference in graphical models [3], feature selection [4], as well as in providing privacy guarantees [5].

2. Total correlation measures the degree to which a set of N variables are independent of each other, and appears as a natural metric of interest in several machine learning problems. For example, in independent component analysis, the objective is to maximize the independence of the variables, quantified through total correlation [6]. In feature selection, ensuring the independence of the selected features is one goal, pursued using total correlation in [7, 8].

3. Multivariate mutual information measures the amount of information shared between multiple variables [9, 10] and is useful in feature selection [11, 12] and clustering [13].

4. Directed information measures the amount of information between two random processes [14, 15] and has been shown to be the correct metric for identifying time-series graphical models [16–21].
Estimation of these information-theoretic quantities from observed samples is a non-trivial problem that needs to be solved in order to utilize these quantities in the aforementioned applications. While there is a long history of work on entropy estimation [22–25], with renewed recent interest [26–28], much less effort has been devoted to the multivariate versions. A standard approach to estimating general information-theoretic quantities is to write them out as a sum or difference of entropy (usually denoted H) terms, which are then estimated directly; we term this the ΣH paradigm. However, the ΣH paradigm is applicable only when the variables involved are all discrete or when there is a joint density on the space of all variables (in which case differential entropy h can be utilized). The underlying information measures themselves are well defined in very general probability spaces, for example, involving mixtures of discrete and continuous variables; however, no known estimators exist. We motivate the need for estimators in general probability spaces with some examples from contemporary machine learning and statistical inference. 1. It is commonplace in machine learning to have datasets where some variables are discrete and some are continuous. For example, in recent work on utilizing the information bottleneck to understand deep learning [29], an important step is to quantify the mutual information between the training samples (which are discrete) and the layer output (which is continuous). The employed methodology was to quantize the continuous variables; this is common practice, even though it is highly sub-optimal. 2. Some variables involved in the calculation may be mixtures of discrete and continuous variables. For example, the output of a ReLU neuron will not have a density even when the input data has a density. Instead, the neuron will have a discrete mass at 0 (or wherever the ReLU breakpoint is) but a continuous distribution on the positive values. This is also the case in gene expression data, where a gene may have a discrete mass at expression 0, due to an effect called drop-out [30], but a continuous distribution elsewhere. 3. The variables involved may have a joint density only on a low-dimensional manifold. For example, when calculating mutual information between the input and output of a neural network, some of the neurons are deterministic functions of the input variables, and hence they will have a joint density supported on a low-dimensional manifold rather than on the entire space. In the aforementioned cases, no existing estimators are known to work. It is not merely a matter of having provable guarantees, either: when we plug estimators that assume a joint density into data that does not have one, the estimated information measure can be strongly negative. We summarize our main contributions below: 1. General paradigm (Section 2): We define a general paradigm of graph divergence measures which captures the aforementioned generalizations of mutual information as special cases. Given a directed acyclic graph (DAG) between n variables, the graph divergence is defined as the Kullback-Leibler (KL) divergence between the true data distribution PX and a restricted distribution P̄X defined on the Bayesian network, and can be thought of as a measure of incompatibility with the given graphical model structure. These graph divergence measures are defined using Radon-Nikodym derivatives, which are well defined for general probability spaces.
2. Novel estimators (Section 3): We propose novel estimators for these graph divergence measures, which directly estimate the corresponding Radon-Nikodym derivatives. To the best of our knowledge, these are the first family of estimators that are well defined for general probability spaces (breaking the ΣH paradigm). 3. Consistency proofs (Section 4): We prove that the proposed estimators converge to the true value of the corresponding graph divergence measures as the number of observed samples increases, in a general setting which includes several cases: (1) the data has some discrete and some continuous valued components; (2) some (or all) of the components themselves are discrete-continuous mixtures; (3) the data is real-valued but does not have a joint density on the entire space and is instead supported on a low-dimensional manifold. 4. Numerical results (Section 5): Extensive numerical results demonstrate that (1) existing algorithms have severe failure modes in general probability spaces (strongly negative values, for example), and (2) our proposed estimator achieves consistency as well as significantly better finite-sample performance. 2 Graph Divergence Measure In this section, we define the family of graph divergence measures. To begin with, we first define some notational preliminaries. We denote any random variable by an uppercase letter such as X. The sample space of the variable X is denoted by X, and any value in X is denoted by the lowercase letter x. For any subset A ⊆ X, the probability of A for a given distribution function PX(·) over X is denoted by PX(A). We note that the random variable X can be a d-dimensional vector of random variables, i.e., X ≡ (X1, . . . , Xd). The N observed samples drawn from the distribution PX are denoted by x(1), x(2), . . . , x(N), i.e., x(i) is the ith observed sample. Sometimes we might be interested in a subset of components of a random variable, S ⊆ {X1, . . . , Xd}, instead of the entire vector X. Accordingly, the sample space of the variable S is denoted by S; for instance, X = (X1, X2, X3, X4) and S = (X1, X2). Throughout the paper, unless otherwise stated, there is a one-to-one correspondence between the notation for X and that for any S: for any value x ∈ X, the corresponding value in S is simply denoted by s, and s(i) ∈ S represents the lower-dimensional sample corresponding to the ith observed sample x(i) ∈ X. Furthermore, any marginal distribution defined over S with respect to PX is denoted by PS. Consider a directed acyclic graph (DAG) G defined over d nodes (corresponding to the d components of the random variable X). A probability measure Q over X is said to be compatible with the graph G if it is a Bayesian network on G. Given a graph G and a distribution PX, there is a natural measure P̄X(·) which is compatible with the graph and is defined as follows:
\[
  \bar{P}_X = \prod_{l=1}^{d} P_{X_l \mid \mathrm{pa}(X_l)} \tag{1}
\]
where pa(Xl) ⊂ X is the set of parent nodes of the random variable Xl, with sample space Xpa(l) and sample values xpa(l) corresponding to x. The distribution PXl|pa(Xl) is the conditional distribution of Xl given pa(Xl). Throughout the paper, whenever mentioning the variable Xl together with its own parents pa(Xl), we write pa+(Xl), i.e., pa+(Xl) ≡ (Xl, pa(Xl)). An example is shown in Fig. 1a.
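As a quick worked example of Eq. (1), consider three variables with edges X1 → X2 → X3 (this chain DAG is illustrative and is not one of the paper's figures):

```latex
% Illustrative example (not from the paper): for the chain X1 -> X2 -> X3,
% pa(X_1) = \emptyset, pa(X_2) = \{X_1\}, pa(X_3) = \{X_2\}, so Eq. (1) gives
\[
  \bar{P}_X \;=\; P_{X_1}\; P_{X_2 \mid X_1}\; P_{X_3 \mid X_2},
\]
% and GDM(X, G) = D(P_X \,\|\, \bar{P}_X) vanishes exactly when the data
% distribution is Markov along the chain X_1 - X_2 - X_3.
```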
We note that PS|X\S is well defined for any subset of variables S ⊂ X. Therefore, letting S = X \ pa(Xl), the conditional PX\pa(Xl)|pa(Xl) is well defined for any l ∈ {1, . . . , d}. By marginalizing over X \ pa+(Xl), we see that PXl|pa(Xl), and thus the distribution P̄X, is well defined. The graph divergence measure is now defined as a function of the probability measure PX and the graph G. In this work we focus only on the KL divergence as the distance measure; hence, unless otherwise stated, D(· ‖ ·) = DKL(· ‖ ·). Let us first consider the case where PX is absolutely continuous with respect to P̄X, so that the Radon-Nikodym derivative dPX/dP̄X exists. For a given set of random variables X and a Bayesian network G, we define the graph divergence measure (GDM) as:
\[
  \mathrm{GDM}(X, \mathcal{G}) = D(P_X \,\|\, \bar{P}_X) = \int_{\mathcal{X}} \log \frac{dP_X}{d\bar{P}_X}\, dP_X \tag{2}
\]
Here we implicitly assume that log(dPX/dP̄X) is measurable and integrable with respect to the measure PX. The GDM is set to infinity wherever the Radon-Nikodym derivative does not exist. Clearly, GDM(X,G) = 0 if and only if the data distribution is compatible with the graphical model; thus the GDM can be thought of as a measure of incompatibility with the given graphical model structure. We also have the following variational characterization of the graph divergence measure, which can be harnessed to compute upper and lower bounds (more details in the supplementary material): Proposition 2.1. For a random variable X and a DAG G, let Π(G) be the set of measures QX defined on the Bayesian network G; then GDM(X,G) = inf over QX ∈ Π(G) of D(PX‖QX). Furthermore, let C denote the set of functions h : X → R such that EQX[exp(h(X))] < ∞. If GDM(X,G) < ∞, then for every h ∈ C, EPX[h(X)] exists and:
\[
  \mathrm{GDM}(X, \mathcal{G}) = \sup_{h \in \mathcal{C}} \; \mathbb{E}_{P_X}[h(X)] - \log \mathbb{E}_{Q_X}[\exp(h(X))] \tag{3}
\]
2.1 Special cases For specific choices of X and Bayesian network G, Equation (2) reduces to well-known information measures. Some examples are as follows: Mutual Information (MI): X = (X1, X2) and G has no directed edge between X1 and X2. Thus P̄X = PX1 · PX2, and we get GDM(X,G) = I(X1;X2) = D(PX1X2 ‖ PX1PX2). Conditional Mutual Information (CMI): We recover the conditional mutual information of X1 and X2 given X3 by constraining G to be the one in Fig. 1b, since P̄X = PX3 · PX2|X3 · PX1|X3, i.e., GDM(X,G) = I(X1;X2|X3) = D(PX1X2X3 ‖ PX1|X3PX2|X3PX3). Total Correlation (TC): When X = (X1, · · · , Xd) and G is the graph with no edges (as in Fig. 1c), we recover the total correlation of the random variables X1, . . . , Xd, since P̄X = PX1 · · · PXd, i.e., GDM(X,Gdc) = TC(X1, . . . , Xd) = D(PX1...Xd ‖ PX1 · · · PXd). Multivariate Mutual Information (MMI): While the MMI defined by [9] is not positive in general, there is a different definition by [10] which is both non-negative and has an operational interpretation. Since MMI can be defined as the optimal total correlation after clustering, we can utilize our definition to define MMI (cf. supplementary material). Directed Information: Suppose there are two stationary random processes X and Y; the directed information rate from X to Y, as first introduced by Massey [31], is defined as:
\[
  I(X \to Y) = \frac{1}{T} \sum_{t=1}^{T} I\!\left(X^{t}; Y_{t} \,\middle|\, Y^{t-1}\right)
\]
It can be seen that the directed information can be written as I(X → Y) = GDM((X^T, Y^T), GI) − GDM((X^T, Y^T), GC), where the graphical model GI corresponds to the independent distribution between X^T and Y^T, and GC corresponds to the causal distribution from X to Y (more details are provided in the supplementary material).
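To make the definition concrete before turning to the general estimator, here is a minimal plug-in sketch of the GDM for fully discrete data; the function name and interface are ours, and this is not the paper's estimator (which is developed in Section 3 for general spaces):

```python
import numpy as np
from collections import Counter

def plugin_gdm_discrete(samples, parents):
    """Plug-in GDM for fully discrete data (illustrative only).

    samples : (N, d) integer array; parents : dict mapping column l to a
    tuple of parent columns, encoding the DAG G.
    """
    samples = np.asarray(samples)
    N, d = samples.shape

    def empirical(cols):
        # Empirical probability of each sample's projection onto `cols`.
        keys = [tuple(row) for row in samples[:, cols]]
        counts = Counter(keys)
        return np.array([counts[k] / N for k in keys])

    p_joint = empirical(list(range(d)))        # P_X(x^{(i)})
    log_pbar = np.zeros(N)                     # log Pbar_X(x^{(i)})
    for l in range(d):
        pa = list(parents.get(l, ()))
        p_pa_plus = empirical(pa + [l])        # P(pa(X_l), X_l)
        if pa:
            log_pbar += np.log(p_pa_plus) - np.log(empirical(pa))
        else:
            log_pbar += np.log(p_pa_plus)      # no parents: marginal
    # GDM = E_P[log dP/dPbar], estimated by resubstitution.
    return np.mean(np.log(p_joint) - log_pbar)

# Sanity check: with two variables and an edgeless graph, GDM reduces to
# MI. Here x2 is a noisy copy of x1 (10% flips), so MI is about 0.37 nats.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=5000)
x2 = (x1 + (rng.random(5000) < 0.1)) % 2
print(plugin_gdm_discrete(np.column_stack([x1, x2]), parents={}))
```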
3 Estimators 3.1 Prior Art Estimators for entropy date back to Shannon, who guesstimated the entropy rate of English [32]. Discrete entropy estimation is a well-studied topic, and the minimax rate of this problem is well understood as a function of the alphabet size [33–35]. The estimation of differential entropy is considerably harder and has also been studied extensively in the literature [23, 25, 26, 36–39]; the approaches can be broadly divided into two groups, based either on kernel density estimates [40, 41] or on k-nearest-neighbor estimation [27, 42, 43]. In a remarkable work, Kozachenko and Leonenko suggested a nearest-neighbor method for entropy estimation [22], which was later generalized to a kth-nearest-neighbor approach [44]. In this method, the distance to the kth nearest neighbor (KNN) is measured for each data point, and based on this the probability density around each data point is estimated and substituted into the entropy expression. When k is fixed, each density estimate is noisy and the estimator accrues a bias; a remarkable result is that the bias is distribution-independent and can be subtracted out [45]. While the entropy estimation problem is well studied, mutual information and its generalizations are typically estimated using a sum of signed entropy (H) terms, which are estimated first; we term such estimators ΣH estimators. In the discrete-alphabet case, this principle has been shown to be worst-case optimal [28]. In the case of distributions with a joint density, an estimator that breaks the ΣH principle is the KSG estimator [46], which builds on the KNN estimation paradigm but couples the estimates in order to reduce the bias. This estimator is widely used and has excellent practical performance. The original paper did not provide any consistency guarantees, and its convergence rates were only recently established [47]. There have been some extensions of the KSG estimator to other information measures, such as conditional mutual information [48, 49] and directed information [50], but none of them provide theoretical guarantees on the consistency of the estimators; furthermore, they fail completely on mixture distributions. When the data distribution is neither discrete nor admits a joint density, the ΣH approach is no longer feasible, as the individual entropy terms are not well defined. This is the regime of interest in our paper. Recently, Gao et al. [51] proposed a mutual-information estimator based on the KNN principle which can handle such continuous-discrete mixture cases, and its consistency was demonstrated. However, it is not clear how it generalizes even to conditional mutual information (CMI) estimation, let alone to other generalizations of mutual information. In this paper, we build on that estimator in order to design an estimator for general graph divergence measures, and we establish its consistency for generic probability spaces. 3.2 Proposed Estimator The proposed estimator is given in Algorithm 1, where ψ(·) is the digamma function and 1{·} is the indicator function. The process is shown schematically in Fig. 3 (cf. supplementary material). We use the ℓ∞-norm everywhere in our algorithm and proofs. Intuitively, the estimator estimates the GDM by the resubstitution estimate $\frac{1}{N}\sum_{i=1}^{N}\log \hat{f}(x^{(i)})$, in which f̂(x(i)) is the estimate of the Radon-Nikodym derivative at each sample x(i). If x(i) lies in a region where there is a density, the RN derivative equals gX(x(i))/ḡX(x(i)), in which gX(·) and ḡX(·) are density functions corresponding to PX and P̄X respectively. On the other hand, if x(i) lies on a point where there is a discrete mass, the RN derivative equals hX(x(i))/h̄X(x(i)), in which hX(·) and h̄X(·)
are mass functions corresponding to PX and P̄X, respectively. For continuous components, the density function ḡX(x(i)) can be written as
\[
  \bar{g}_X\big(x^{(i)}\big) = \prod_{l=1}^{d} \frac{g_{\mathrm{pa}^{+}(X_l)}\big(x^{(i)}_{\mathrm{pa}^{+}(l)}\big)}{g_{\mathrm{pa}(X_l)}\big(x^{(i)}_{\mathrm{pa}(l)}\big)},
\]
and equivalently the mass function h̄X(x(i)) can be written as
\[
  \bar{h}_X\big(x^{(i)}\big) = \prod_{l=1}^{d} \frac{h_{\mathrm{pa}^{+}(X_l)}\big(x^{(i)}_{\mathrm{pa}^{+}(l)}\big)}{h_{\mathrm{pa}(X_l)}\big(x^{(i)}_{\mathrm{pa}(l)}\big)}.
\]
Thus we need to estimate the density functions g(·) and the mass functions h(·) according to the type of x(i). Existing kth-nearest-neighbor algorithms suffer when estimating the mass functions h(·), since ρ_{nS,i} (the distance to the nS-th nearest neighbor in subspace S) equals zero at such points for large N. Our algorithm, however, is designed so that it can dynamically approximate both the g(·) functions, as ≈ (nS/N) · 1/(ρ_{nS,i})^{dS}, and the h(·) functions, as ≈ nS/N, for any subset S ⊆ X. This is achieved by setting the ρ_{nS,i} terms such that all of them cancel out, yielding the estimator in Eq. (4).

Input: parameter k ∈ Z+; samples x(1), x(2), . . . , x(N); Bayesian network G on variables X = (X1, X2, · · · , Xd)
Output: ĜDM(N)(X,G)
1: for i = 1 to N do
2:   Query:
3:     ρk,i = ℓ∞-distance to the kth nearest neighbor of x(i) in the space X
4:   Inquire:
5:     k̃i = # points within the ρk,i-neighborhood of x(i) in the space X
6:     n(i)_pa(Xl) = # points within the ρk,i-neighborhood of x(i) in the space Xpa(l)
7:     n(i)_pa+(Xl) = # points within the ρk,i-neighborhood of x(i) in the space Xpa+(l)
8:   Compute:
9:     ζi = ψ(k̃i) + Σ_{l=1}^{d} ( 1{pa(Xl) ≠ ∅} log(n(i)_pa(Xl) + 1) − log(n(i)_pa+(Xl) + 1) )
10: end for
11: Final estimator:
\[
  \widehat{\mathrm{GDM}}^{(N)}(X,\mathcal{G}) = \frac{1}{N}\sum_{i=1}^{N}\zeta_i + \Big(\sum_{l=1}^{d} \mathbb{1}\{\mathrm{pa}(X_l)=\emptyset\} - 1\Big)\log N \tag{4}
\]
Algorithm 1: Estimating the graph divergence measure GDM(X,G)
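A compact NumPy sketch of Algorithm 1 follows; this is a best-effort reading, and the handling of ties and of the query point itself are our assumptions rather than details fixed by the paper:

```python
import numpy as np
from scipy.special import digamma

def gdm_estimate(x, parents, k=5):
    """Sketch of Algorithm 1 (our own reading; self-point and tie
    conventions below are assumptions, not guaranteed to match the paper).

    x : (N, d) array of samples; parents : dict l -> tuple of parent
    columns for the DAG G; k : nearest-neighbor parameter.
    """
    x = np.asarray(x, dtype=float)
    N, d = x.shape

    def count_within(cols, i, rho):
        # Number of other points whose l_inf-distance to x[i], restricted
        # to the coordinates in `cols`, is at most rho.
        diff = np.abs(x[:, cols] - x[i, cols]).max(axis=1)
        return int(np.sum(diff <= rho)) - 1  # exclude the point itself

    zeta = np.empty(N)
    for i in range(N):
        # l_inf-distance to the k-th nearest neighbor in the full space.
        dist = np.abs(x - x[i]).max(axis=1)
        rho = np.sort(dist)[k]               # index 0 is the point itself
        k_tilde = count_within(list(range(d)), i, rho)  # >= k under ties
        zeta[i] = digamma(k_tilde)
        for l in range(d):
            pa = list(parents.get(l, ()))
            if pa:
                zeta[i] += np.log(count_within(pa, i, rho) + 1)
            zeta[i] -= np.log(count_within(pa + [l], i, rho) + 1)
    n_roots = sum(1 for l in range(d) if not parents.get(l, ()))
    return zeta.mean() + (n_roots - 1) * np.log(N)
```

For instance, estimating I(X1; X2) corresponds to parents={} (the edgeless two-node graph), matching the MI special case in Section 2.1.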
4 Proof of Consistency The proof of consistency for our estimator consists of two steps: first we prove that the expected value of the estimator in Eq. (4) converges to the true value as N → ∞, and second we prove that the variance of the estimator converges to zero as N → ∞. Let us begin with the definition of PX(x, r):
\[
  P_X(x, r) = P_X\big\{\, a \in \mathcal{X} : \|a - x\|_\infty \le r \,\big\} = P_X\big\{ B_r(x) \big\} \tag{5}
\]
Thus PX(x, r) is the probability of a hypercube with edge length 2r centered at the point x. We then state the following assumptions: Assumption 1. We make the following assumptions to prove the consistency of our method: 1. k is set such that lim_{N→∞} k = ∞ and lim_{N→∞} (k log N)/N = 0. 2. The set of discrete points {x : PX(x, 0) > 0} is finite. 3. ∫_X |log f(x)| dPX < +∞, where f ≡ dPX/dP̄X is the Radon-Nikodym derivative. Assumptions 1.1 and 1.2 together control the boundary effect between the continuous and the discrete regions; with these assumptions we ensure that all k nearest neighbors of each point belong to the same region almost surely (i.e., all of them are either continuous or discrete). Assumption 1.3 is the log-integrability of the Radon-Nikodym derivative. These assumptions are satisfied under mild technical conditions whenever the distribution PX over the set X is (1) finitely discrete; (2) continuous; (3) finitely discrete over some dimensions and continuous over others; (4) a mixture of the previous cases; or (5) has a joint density supported on a lower-dimensional manifold. These cases cover almost all real-world data. As an example of a case not conforming to the above, consider singular distributions, among which the Cantor distribution (whose cumulative distribution function is the Cantor function) is a notable example. This distribution has neither a probability density function nor a probability mass function, although its cumulative distribution function is continuous. It is thus neither a discrete nor an absolutely continuous probability distribution, nor is it a mixture of these. Theorem 1 formally states the mean-convergence of the estimator, while Theorem 2 formally states the convergence of the variance to zero. Theorem 1. Under Assumption 1, we have $\lim_{N\to\infty} \mathbb{E}\big[\widehat{\mathrm{GDM}}^{(N)}(X,\mathcal{G})\big] = \mathrm{GDM}(X,\mathcal{G})$. Theorem 2. In addition to Assumption 1, assume that $(k_N \log N)^2/N \to 0$ as N goes to infinity. Then $\lim_{N\to\infty} \mathrm{Var}\big[\widehat{\mathrm{GDM}}^{(N)}(X,\mathcal{G})\big] = 0$. Theorems 1 and 2 combined yield the consistency of the estimator in Eq. (4). The proof of Theorem 1 starts with writing the Radon-Nikodym derivative explicitly. We then need to upper-bound the term $\big|\mathbb{E}\big[\widehat{\mathrm{GDM}}^{(N)}(X,\mathcal{G})\big] - \mathrm{GDM}(X,\mathcal{G})\big|$. To achieve this, we partition the domain of X into three parts as X = Ω1 ∪ Ω2 ∪ Ω3, where Ω1 = {x : f(x) = 0}, Ω2 = {x : f(x) > 0, PX(x, 0) > 0} and Ω3 = {x : f(x) > 0, PX(x, 0) = 0}. We show that PX(Ω1) = 0. The sets Ω2 and Ω3 correspond to the discrete and continuous regions, respectively. For each of the two regions, we introduce an upper bound which goes to zero as N grows without bound; equivalently, we show that the mean of the estimate ζi is close to log f(x) for any x. The proof of Theorem 2 is based on the Efron-Stein inequality, which bounds the variance of any function of independent samples by the sum of expected squared single-sample perturbations, $\mathrm{Var}(f) \le \frac{1}{2}\sum_{j=1}^{N} \mathbb{E}\big[(f(X) - f(X^{(j)}))^2\big]$, where $X^{(j)}$ denotes the sample set with the jth sample replaced by an independent copy. For any sample x(i), we upper-bound the term |ζi(X) − ζi(X\j)| by separating the samples into various cases and examining each case separately; here ζi(X) is the estimate using all the samples x(1), . . . , x(N), and ζi(X\j) is the estimate when the jth sample is removed. Summing over all i, we obtain an upper bound which converges to 0 as N goes to infinity. 5 Empirical Results In this section, we evaluate the performance of our proposed estimator against other estimators via numerical experiments. The estimators evaluated here are our estimator, referred to as GDM; the plain KSG-based estimators for continuous distributions, to which we refer as KSG; the binning estimators; and the noise-induced ΣH estimators. A more detailed discussion can be found in Section G. Experiment 1: Markov chain model with continuous-discrete mixture. For the first experiment, we simulated an X-Z-Y Markov chain model in which the random variable X is a uniform random variable U(0, 1) clipped at a threshold 0 < α1 < 1 from above. Then Z = min(X, α2) and Y = min(Z, α3), in which 0 < α3 < α2 < α1. We simulated this system for various numbers of samples, setting α1 = 0.9, α2 = 0.8 and α3 = 0.7. For each set of samples we estimated I(X;Y |Z) via the different methods; the theoretical value of I(X;Y |Z) is 0. The results are shown in Figure 2a. We can see that in this regime only the GDM estimator converges correctly; the KSG estimator and the ΣH estimator show large negative biases, and the binning estimator shows a positive bias.
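A minimal sketch of the Experiment 1 data-generating process follows (the function name and defaults are ours); any CMI estimator, e.g., the Algorithm 1 sketch above, can then be run on its output:

```python
import numpy as np

def experiment1_samples(n, a1=0.9, a2=0.8, a3=0.7, seed=0):
    """X-Z-Y Markov chain with continuous-discrete mixtures (Experiment 1).

    X is U(0,1) clipped from above at a1; Z = min(X, a2); Y = min(Z, a3).
    Each variable has an atom at its threshold plus a continuous part,
    and I(X; Y | Z) = 0 by the Markov property.
    """
    rng = np.random.default_rng(seed)
    x = np.minimum(rng.random(n), a1)
    z = np.minimum(x, a2)
    y = np.minimum(z, a3)
    return x, z, y

x, z, y = experiment1_samples(10_000)
# Roughly 10% of the X samples sit exactly on the atom at a1 = 0.9:
print((x == 0.9).mean())
```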
Experiment 2: Mixture of AWGN and BSC channels with variable error probability. For the second scheme of our experiments, we considered an additive white Gaussian noise (AWGN) channel in parallel with a binary symmetric channel (BSC), where only one of the two can be active at a time. The random variable Z = min(α, Z̃), where Z̃ ∼ U(0, 1), controls which channel is activated: if Z is lower than the threshold β, the AWGN channel is activated; otherwise the BSC channel is used, where Z also determines the error probability at each time point. We set α = 0.3, β = 0.2, the BSC channel input as X ∼ Bern(0.5), and the AWGN input and noise standard deviations as σX = 1 and σN = 0.1, respectively, and obtained estimates of I(X;Y |Z, Z², Z³) for the various estimators. The theoretical value equals I(X;Y |Z) = 0.53241, yet the conditioning variables lie on a low-dimensional manifold in a high-dimensional space. The results are shown in Figure 2b. As in the previous experiment, the GDM estimator converges correctly to the true value; the ΣH and binning estimators show a negative bias, and the KSG estimator gets totally lost. Experiment 3: Total correlation for independent mixtures. In this experiment, we estimate the total correlation of three independent variables X, Y and Z. The samples for the variable X are generated as follows: first toss a fair coin; if heads appears, we fix X at αX, otherwise we draw X from a uniform distribution between 0 and 1. Samples of Y and Z are generated in the same way, independently, with parameters αY and αZ respectively. For this setup, TC(X, Y, Z) = 0. We set αX = 1, αY = 1/2 and αZ = 1/4, and generated datasets of various lengths. The estimated total correlation values are shown in Figure 2c. Experiment 4: Total correlation for independent uniforms with correlated zero-inflation. Here we first consider four auxiliary uniform variables X̃1, X̃2, X̃3 and X̃4 drawn from U(0.5, 1.5). Each sample is then erased with a Bernoulli probability; i.e., X1 = α1X̃1, X2 = α1X̃2 and X3 = α2X̃3, X4 = α2X̃4, in which α1 ∼ Bern(p1) and α2 ∼ Bern(p2). After zero-inflation, X1 and X2 become correlated, as do X3 and X4, while still (X1, X2) ⊥ (X3, X4). In the experiment, we set p1 = p2 = 0.6. The results of running the different algorithms on these data are shown in Figure 2d. For the total correlation experiments 3 and 4, as for the conditional mutual information experiments 1 and 2, only the GDM estimator estimates the true value well. The ΣH estimator was removed from the figures due to its high inaccuracy.
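Similarly, a minimal sketch of the Experiment 4 generator (again with our own function name and defaults):

```python
import numpy as np

def experiment4_samples(n, p1=0.6, p2=0.6, seed=0):
    """Independent uniforms with correlated zero-inflation (Experiment 4).

    X1, X2 share the erasure mask a1 ~ Bern(p1) and X3, X4 share
    a2 ~ Bern(p2), so the pairs are internally dependent while
    (X1, X2) remains independent of (X3, X4).
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.5, 1.5, size=(n, 4))
    a1 = rng.random(n) < p1          # keep with prob p1, zero otherwise
    a2 = rng.random(n) < p2
    u[:, :2] *= a1[:, None]          # zero out X1, X2 together
    u[:, 2:] *= a2[:, None]          # zero out X3, X4 together
    return u
```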
Experiment 5: Gene regulatory networks. In this experiment we use the different estimators to perform gene regulatory network inference based on conditional restricted directed information (cRDI) [20]. We test on a simulated neuron-cell development process, based on a model from [52]. In this model, the time-series vector X consists of 13 random variables, each corresponding to a single gene in the development process. We simulated the development process for various time-series lengths, adding noise N ∼ N(0, 0.03) to all genes and then subjecting every single sample to erasure (i.e., replacement by 0) with probability 0.5. We then applied the cRDI method using the various CMI estimators and calculated the area under the ROC curve (AUROC). The results are shown in Figure 2e: the cRDI method implemented with the GDM estimator outperforms the other estimators by at least 10% in terms of AUROC. In these tests, cRDI for each pair (Xi, Xj) is conditioned on the node k ≠ i with the highest RDI value to j. We note that the causal signals are heavily corrupted by the zero-inflation, so we should not expect high causal-inference performance on these data. We did not include the ΣH estimator results due to their very low performance. Experiment 6: Feature selection by conditional mutual information maximization. Feature selection is an important pre-processing step in many learning tasks, and the application of information-theoretic measures to feature selection is well studied in the literature [7]. Among the well-known methods is conditional mutual information maximization (CMIM), first introduced by Fleuret [4]; a variation called CMIM-2 was introduced later [53]. Both methods use conditional mutual information as their core measure to select features, so the performance of the estimators can significantly influence the performance of the methods. In our experiment, we generated a vector X = (X1, . . . , X15) of 15 random variables, each drawn from N(0, 1) and then clipped from above at a threshold αi, taken initially at random from U(0.25, 0.3) and then kept constant during sample generation. Then Y is generated as $Y = \cos\big(\sum_{i=1}^{5} X_i\big)$. We then ran the CMIM-2 algorithm with the various CMI estimators to evaluate their performance in extracting the relevant features X1, . . . , X5. The AUROC values for each algorithm versus the number of generated samples are shown in Figure 2f. The feature selection method implemented with the GDM estimator outperforms those using the other estimators. 6 Discussion and Future Work A general paradigm of graph divergence measures, and novel estimators thereof for general probability spaces, is proposed; these capture several generalizations of mutual information. In the future, we would like to derive more efficient estimators for high-dimensional data. In the current work, the estimators are shown to be consistent when the parameter k scales to infinity; in the future, we would like to understand the finite-k performance of the estimators, as well as guarantees on sample complexity and rates of convergence. Another potential direction is to use the variational characterization of the graph divergence measure to design estimators. Improving the computational efficiency of the estimator is a further direction for future work: recent literature, including [54], provides a methodology for estimating mutual information in a computationally efficient manner, and leveraging these ideas for the generalized measures and general probability distributions is a promising direction. 7 Acknowledgement This work was partially supported by NSF grants 1651236, 1703403 and NIH grant 5R01HG008164. The authors also would like to thank Yihan Jiang for presenting our work at the NeurIPS conference.
1. What are the strengths and weaknesses of the paper regarding its contributions and advancements in multivariate information measures? 2. How does the reviewer assess the relevance and adequacy of the reference list and prior art sections? 3. What are the unique aspects of the paper compared to other works in the field, particularly those mentioned in the review? 4. How does the reviewer evaluate the technical content, writing style, and suitability of the paper for different academic venues? 5. Are there any suggestions for improving the presentation of numerical results or the selection of experiments?
Review
Review This paper develops multivariate information measures in fairly general probability spaces based on Bayesian networks. The references include a variety of works on multivariate and conditional information theory, although some related work has appeared at NIPS in recent years, such as [47] and other work by the authors of [47]. The reference list and prior art sections appear to be adequate, though explicit pointers to a set of papers prototypical of the ΣH paradigm would be useful. The extension to multivariate information measures is an original advance in the context of the prior literature, and highlights some defects in prior work (lines 180-183). The paper is well written overall. I feel the technical content and style of this paper are more suited to a statistics or information theory venue, like IEEE Transactions on Information Theory, in line with many of the references presented, but it is still within the purview of NIPS. Figure 3 is useful in the context of describing the proposed estimator. The presentation of numerical results in Figure 2 should include confidence intervals for the estimates, for better comparison of methods. The selection of experiments in Section 5 seems sufficient, though calling your estimator GDM or something similar, rather than "mixture", may make it a bit clearer. Update: Re-scored in light of the authors' responses and reviews.
NIPS
1. What is the focus of the paper regarding information theoretic quantities? 2. What are the strengths of the proposed approach, particularly in terms of its applicability under minimal assumptions? 3. What are the weaknesses of the paper, especially when it comes to dealing with violations of the assumptions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper introduces consistent estimators for several information theoretic quantities, given access to iid samples from the underlying distribution. Known estimators impose very stringent assumptions on the underlying distribution (finite support or underlying density etc.), while the major thrust of this article is to introduce a method which is valid fairly generally, and has guarantees under minimal assumptions. I believe the problem to be of significant importance, and very well-aligned with the interests of this community. The paper is very well written, motivates the problem in very clear terms, and proposes a fairly general solution. The authors provide a very succinct summary of the state of the art, and compare their proposed procedure to other alternative and existing estimators through extensive numerical experiments. As a comment, I request the authors to also discuss situations where the Assumptions (L209) are violated, and scenarios where their estimator might be expected to behave poorly.
NIPS
Title Estimators for Multivariate Information Measures in General Probability Spaces Abstract Information theoretic quantities play an important role in various settings in machine learning, including causality testing, structure inference in graphical models, time-series problems, feature selection, as well as in providing privacy guarantees. A key quantity of interest is the mutual information and generalizations thereof, including conditional mutual information, multivariate mutual information, total correlation and directed information. While the aforementioned information quantities are well defined in arbitrary probability spaces, existing estimators add or subtract entropies (we term them ΣH methods). These methods work only in the purely discrete or the purely continuous case, since entropy (or differential entropy) is well defined only in those regimes. In this paper, we define a general graph divergence measure (GDM), as a measure of incompatibility between the observed distribution and a given graphical model structure. This generalizes the aforementioned information measures, and we construct a novel estimator via a coupling trick that directly estimates these multivariate information measures using the Radon-Nikodym derivative. These estimators are proven to be consistent in a general setting which includes several cases where the existing estimators fail, thus providing the only known estimators for the following settings: (1) the data has some discrete and some continuous-valued components; (2) some (or all) of the components themselves are discrete-continuous mixtures; (3) the data is real-valued but does not have a joint density on the entire space, rather is supported on a low-dimensional manifold. We show that our proposed estimators significantly outperform known estimators on synthetic and real datasets. 1 Introduction Information theoretic quantities, such as mutual information and its generalizations, play an important role in various settings in machine learning and statistical estimation and inference. Here we briefly summarize the role of some generalizations of mutual information in learning (cf. Sec. 2.1 for a mathematical definition of these quantities). 1. Conditional mutual information measures the amount of information between two variables X and Y given a third variable Z and is zero iff X is independent of Y given Z. CMI finds a wide range of applications in learning, including causality testing [1, 2], structure inference in graphical models [3], feature selection [4], as well as in providing privacy guarantees [5]. 2. Total correlation measures the degree to which a set of N variables are independent of each other, and appears as a natural metric of interest in several machine learning problems; for example, in independent component analysis, the objective is to maximize the independence of the variables, quantified through total correlation [6]. In feature selection, ensuring the independence of selected features is one goal, pursued using total correlation in [7, 8]. 3. Multivariate mutual information measures the amount of information shared between multiple variables [9, 10] and is useful in feature selection [11, 12] and clustering [13]. 4. Directed information measures the amount of information between two random processes [14, 15] and is shown to be the correct metric for identifying time-series graphical models [16–21].
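Before turning to estimation, here is a two-line illustration of the kind of distribution at issue (our own toy example, with illustrative values, not from the paper): clipping a continuous variable creates a point mass, so the result is a discrete-continuous mixture for which neither entropy nor differential entropy is defined.

import numpy as np

rng = np.random.default_rng(0)
x = np.minimum(rng.uniform(0, 1, 100_000), 0.7)  # U(0,1) clipped from above at 0.7
print((x == 0.7).mean())  # ~0.3: an atom at 0.7 coexists with a density on (0, 0.7)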
Estimation of these information-theoretic quantities from observed samples is a non-trivial problem that needs to be solved in order to utilize these quantities in the aforementioned applications. While there is a long history of entropy estimation [22–25], and renewed recent interest [26–28], much less effort has been spent on the multivariate versions. A standard approach to estimating general information theoretic quantities is to write them out as a sum or difference of entropy (usually denoted H) terms, which are then directly estimated; we term such a paradigm the ΣH paradigm. However, the ΣH paradigm is applicable only when the variables involved are all discrete or there is a joint density on the space of all variables (in which case differential entropy h can be utilized). The underlying information measures themselves are well defined in very general probability spaces, for example, involving mixtures of discrete and continuous variables; however, no known estimators exist. We motivate the need for estimators in general probability spaces by some examples in contemporary machine learning and statistical inference. 1. It is commonplace in machine learning to have datasets where some variables are discrete and some are continuous. For example, in recent work on utilizing the information bottleneck to understand deep learning [29], an important step is to quantify the mutual information between the training samples (which are discrete) and the layer output (which is continuous). The employed methodology was to quantize the continuous variables; this is common practice, even though highly sub-optimal. 2. Some variables involved in the calculation may be mixtures of discrete and continuous variables. For example, the output of a ReLU neuron will not have a density even when the input data has a density. Instead, the neuron will have a discrete mass at 0 (or wherever the ReLU breakpoint is) but will have a continuous distribution on the positive values. This is also the case in gene expression data, where a gene may have a discrete mass at expression 0 due to an effect called drop-out [30] but have a continuous distribution elsewhere. 3. The variables involved may have a joint density only on a low-dimensional manifold. For example, when calculating mutual information between the input and output of a neural network, some of the neurons are deterministic functions of the input variables and hence they will have a joint density supported on a low-dimensional manifold rather than the entire space. In the aforementioned cases, no existing estimators are known to work. It is not merely a matter of having provable guarantees either: when we plug estimators that assume a joint density into data that does not have one, the estimated information measure can be strongly negative. We summarize our main contributions below: 1. General paradigm (Section 2): We define a general paradigm of graph divergence measures which captures the aforementioned generalizations of mutual information as special cases. Given a directed acyclic graph (DAG) between n variables, the graph divergence is defined as the Kullback-Leibler (KL) divergence between the true data distribution PX and a restricted distribution P̄X defined on the Bayesian network, and can be thought of as a measure of incompatibility with the given graphical model structure. These graph divergence measures are defined using Radon-Nikodym derivatives, which are well-defined for general probability spaces. 2.
Novel estimators (Section 3): We propose novel estimators for these graph divergence measures, which directly estimate the corresponding Radon-Nikodym derivatives. To the best of our knowledge, these are the first family of estimators that are well defined for general probability spaces (breaking the ΣH paradigm). 3. Consistency proofs (Section 4): We prove that the proposed estimators converge to the true value of the corresponding graph divergence measures as the number of observed samples increases, in a general setting which includes several cases: (1) the data has some discrete and some continuous-valued components; (2) some (or all) of the components themselves are discrete-continuous mixtures; (3) the data is real-valued but does not have a joint density on the entire space but is supported on a low-dimensional manifold. 4. Numerical results (Section 5): Extensive numerical results demonstrate that (1) existing algorithms have severe failure modes in general probability spaces (strongly negative values, for example), and (2) our proposed estimator achieves consistency as well as significantly better finite-sample performance. 2 Graph Divergence Measure In this section, we define the family of graph divergence measures. To begin with, we first define some notational preliminaries. We denote any random variable by an uppercase letter such as X. The sample space of the variable X is denoted by 𝒳 and any value in 𝒳 is denoted by the lowercase letter x. For any subset A ⊆ 𝒳, the probability of A for a given distribution function PX(.) over 𝒳 is denoted by PX(A). We note that the random variable X can be a d-dimensional vector of random variables, i.e. X ≡ (X1, . . . , Xd). The N observed samples drawn from the distribution PX are denoted by x(1), x(2), . . . , x(N), i.e. x(i) is the ith observed sample. Sometimes we might be interested in a subset of components of a random variable, S ⊆ {X1, . . . , Xd}, instead of the entire vector X. Accordingly, the sample space of the variable S is denoted by 𝒮. For instance, X = (X1, X2, X3, X4) and S = (X1, X2). Throughout the entire paper, unless otherwise stated, there is a one-to-one correspondence between the notations of X and any S. For example, for any value x ∈ 𝒳, the corresponding value in 𝒮 is simply denoted by s. Further, s(i) ∈ 𝒮 represents the lower-dimensional sample corresponding to the ith observed sample x(i) ∈ 𝒳. Furthermore, any marginal distribution defined over 𝒮 with respect to PX is denoted by PS. Consider a directed acyclic graph (DAG) G defined over d nodes (corresponding to the d components of the random variable X). A probability measure Q over 𝒳 is said to be compatible with the graph G if it is a Bayesian network on G. Given a graph G and a distribution PX, there is a natural measure P̄X(.) which is compatible with the graph and is defined as follows: P̄X = ∏_{l=1}^{d} P_{X_l | pa(X_l)} (1) where pa(Xl) ⊂ X is the set of the parent nodes of the random variable Xl, with the sample space denoted by 𝒳_{pa(l)}, and the sample values x_{pa(l)} corresponding to x. The distribution P_{X_l|pa(X_l)} is the conditional distribution of Xl given pa(Xl). Throughout the paper, whenever mentioning the variable Xl together with its own parents pa(Xl) we indicate it by pa+(Xl), i.e. pa+(Xl) ≡ (Xl, pa(Xl)). An example is shown in Fig. 1a. We note that P_{S|X\S} is well defined for any subset of variables S ⊂ X. Therefore, if we let S = X \ pa(Xl), then P_{X\pa(Xl) | pa(Xl)} is well defined for any l ∈ {1, . . . , d}.
By marginalizing over X \ pa+(Xl) we see that P_{X_l|pa(X_l)}, and thus the distribution P̄X, is well defined. The graph divergence measure is now defined as a function of the probability measure PX and the graph G. In this work we will focus only on the KL divergence as the distance metric; hence, unless otherwise stated, D(· ‖ ·) = DKL(· ‖ ·). Let us first consider the case where PX is absolutely continuous with respect to P̄X, so that the Radon-Nikodym derivative dPX/dP̄X exists. Then, for a given set of random variables X and a Bayesian network G, we define the Graph Divergence Measure (GDM) as: GDM(X, G) = D(PX ‖ P̄X) = ∫_𝒳 log(dPX/dP̄X) dPX (2) Here we implicitly assume that log(dPX/dP̄X) is measurable and integrable with respect to the measure PX. The GDM is set to infinity wherever the Radon-Nikodym derivative does not exist. It is clear that GDM(X, G) = 0 if and only if the data distribution is compatible with the graphical model; thus the GDM can be thought of as a measure of incompatibility with the given graphical model structure. We now state a variational characterization of our graph divergence measure, which can be harnessed to compute upper and lower bounds (more details in the supplementary material): Proposition 2.1. For a random variable X and a DAG G, let Π(G) be the set of measures QX defined on the Bayesian network G. Then GDM(X, G) = inf_{QX ∈ Π(G)} D(PX ‖ QX). Furthermore, let C denote the set of functions h : 𝒳 → R such that E_{QX}[exp(h(X))] < ∞. If GDM(X, G) < ∞, then for every h ∈ C, E_{PX}[h(X)] exists and: GDM(X, G) = sup_{h ∈ C} E_{PX}[h(X)] − log E_{QX}[exp(h(X))] (3) 2.1 Special cases For specific choices of X and Bayesian network G, Equation 2 reduces to well-known information measures. Some examples of these measures are as follows: Mutual Information (MI): X = (X1, X2) and G has no directed edge between X1 and X2. Thus P̄X = PX1 · PX2, and we get GDM(X, G) = I(X1; X2) = D(PX1X2 ‖ PX1PX2). Conditional Mutual Information (CMI): We recover the conditional mutual information of X1 and X2 given X3 by constraining G to be the one in Fig. 1b, since P̄X = PX3 · PX2|X3 · PX1|X3, i.e., GDM(X, G) = I(X1; X2 | X3) = D(PX1X2X3 ‖ PX1|X3 PX2|X3 PX3). Total Correlation (TC): When X = (X1, · · · , Xd) and G is the graph with no edges (as in Fig. 1c), we recover the total correlation of the random variables X1, . . . , Xd, since P̄X = PX1 . . . PXd, i.e., GDM(X, Gdc) = TC(X1, . . . , Xd) = D(PX1...Xd ‖ PX1 . . . PXd). Multivariate Mutual Information (MMI): While the MMI defined by [9] is not positive in general, there is a different definition by [10] which is both non-negative and has an operational interpretation. Since MMI can be defined as the optimal total correlation after clustering, we can utilize our definition to define MMI (cf. supplementary material). Directed Information: Suppose there are two stationary random processes X and Y. The directed information rate from X to Y, as first introduced by Massey [31], is defined as: I(X → Y) = (1/T) ∑_{t=1}^{T} I(X^t; Y_t | Y^{t−1}). It can be seen that the directed information can be written as: I(X → Y) = GDM((X^T, Y^T), G_I) − GDM((X^T, Y^T), G_C), where the graphical model G_I corresponds to the independent distribution between X^T and Y^T, and G_C corresponds to the causal distribution from X to Y (more details provided in the supplementary material). 3 Estimators 3.1 Prior Art Estimators for entropy date back to Shannon, who guesstimated the entropy rate of English [32].
Discrete entropy estimation is a well-studied topic and minimax rate of this problem is well-understood as a function of the alphabet size [33–35]. The estimation of differential entropy is considerably harder and also studied extensively in literature [23,25,26,36–39] and can be broadly divided into two groups; based on either Kernel density estimates [40,41] or based on k-nearest-neighbor estimation [27,42,43]. In a remarkable work, Kozachenko and Leonenko suggested a nearest neighbor method for entropy estimation [22] which was then generalized to a kth nearest neighbor approach [44]. In this method, the distance to the kth nearest neighbor (KNN) is measured for each data-point, and based on this the probability density around each data point is estimated and substituted into the entropy expression. When k is fixed, each density estimate is noisy and the estimator accrues a bias and a remarkable result is that the bias is distribution-independent and can be subtracted out [45]. While the entropy estimation problem is well-studied, mutual information and its generalizations are typically estimated using a sum of signed entropy (H) terms, which are estimated first; we term such estimators as ΣH estimators. In the discrete alphabet case, this principle has been shown to be worst-case optimal [28]. In the case of distributions with a joint density, an estimator that breaks the ΣH principle is the KSG estimator [46], which builds on the KNN estimation paradigm but couples the estimates in order to reduce the bias. This estimator is widely used and has excellent practical performance. The original paper did not have any consistency guarantees and its convergence rates were recently established [47]. There have been some extensions to the KSG estimator for other information measures such as conditional mutual information [48, 49], directed information [50] but none of them show theoretical guarantees on consistency of the estimators, furthermore they fail completely in mixture distributions. When the data distribution is neither discrete nor admits a joint density, the ΣH approach is no longer feasible as the individual entropy terms are not well defined. This is the regime of interest in our paper. Recently, Gao et al [51] proposed a mutual-information estimator based on KNN principle, which can handle such continuous-discrete mixture cases, and the consistency was demonstrated. However it is not clear how it generalizes to even Conditional Mutual Information (CMI) estimation, let alone other generalizations of mutual information. In this paper, we build on that estimator in order to design an estimator for general graph divergence measures and establish its consistency for generic probability spaces. 3.2 Proposed Estimator The proposed estimator is given in Algorithm 1 where ψ(·) is the digamma function and 1{·} is the indicator function. The process is schematically shown in Fig. 3 (cf. supplementary material). We used the `∞-norm everywhere in our algorithm and proofs. The estimator intuitively estimates the GDM by the resubstitution estimate 1N ∑N i=1 log f̂(x (i)) in which f̂(x(i)) is the estimation of Radon-Nikodym derivative at each sample x(i). If x(i) lies in a region where there is a density, the RN derivative is equal to gX(x(i))/ḡX(x(i)) in which gX(.) and ḡX(.) are density functions corresponding to PX and PX respectively. On the other hand, if x(i) lies on a point where there is a discrete mass, the RN derivative will be equal to hX(x(i))/h̄X(x(i)) in which hX(.) and h̄X(.) 
are the mass functions corresponding to PX and P̄X respectively. The density function ḡX(x(i)) can be written as ∏_{l=1}^{d} ( g_{pa+(Xl)}(x_{pa+(l)}^{(i)}) / g_{pa(Xl)}(x_{pa(l)}^{(i)}) ) for continuous components. Equivalently, the mass function h̄X(x(i)) can be written as ∏_{l=1}^{d} ( h_{pa+(Xl)}(x_{pa+(l)}^{(i)}) / h_{pa(Xl)}(x_{pa(l)}^{(i)}) ). Thus we need to estimate the density functions g(.) and the mass functions h(.) according to the type of x(i). Existing kth-nearest-neighbor algorithms suffer while estimating the mass functions h(.), since ρ_{nS,i} (the distance to the nS-th nearest neighbor in subspace S) for such points will be equal to zero for large N. Our algorithm, however, is designed so that it is capable of approximating both the g(.) functions as ≈ (nS/N) · 1/(ρ_{nS,i})^{dS} and the h(.) functions as ≈ nS/N dynamically for any subset S ⊆ X. This is achieved by setting the ρ_{nS,i} terms such that all of them cancel out, yielding the estimator in Eq. (4).
Input: Parameter: k ∈ Z+; Samples: x(1), x(2), . . . , x(N); Bayesian network: G on variables X = (X1, X2, · · · , Xd)
Output: ĜDM(N)(X, G)
1: for i = 1 to N do
2:   Query:
3:     ρ_{k,i} = ℓ∞-distance to the kth nearest neighbor of x(i) in the space 𝒳
4:   Inquire:
5:     k̃_i = # points within the ρ_{k,i}-neighborhood of x(i) in the space 𝒳
6:     n^(i)_{pa(Xl)} = # points within the ρ_{k,i}-neighborhood of x(i) in the space 𝒳_{pa(l)}
7:     n^(i)_{pa+(Xl)} = # points within the ρ_{k,i}-neighborhood of x(i) in the space 𝒳_{pa+(l)}
8:   Compute:
9:     ζ_i = ψ(k̃_i) + ∑_{l=1}^{d} ( 1{pa(Xl) ≠ ∅} log(n^(i)_{pa(Xl)} + 1) − log(n^(i)_{pa+(Xl)} + 1) )
10: end for
11: Final estimator: ĜDM(N)(X, G) = (1/N) ∑_{i=1}^{N} ζ_i + ( ∑_{l=1}^{d} 1{pa(Xl) = ∅} − 1 ) log N (4)
Algorithm 1: Estimating the Graph Divergence Measure GDM(X, G)
4 Proof of Consistency The proof of consistency for our estimator consists of two steps: first, we prove that the expected value of the estimator in Eq. (4) converges to the true value as N → ∞; second, we prove that the variance of the estimator converges to zero as N → ∞. Let us begin with the definition of PX(x, r): PX(x, r) = PX{ a ∈ 𝒳 : ‖a − x‖∞ ≤ r } = PX{ B_r(x) } (5) Thus PX(x, r) is the probability of a hypercube with edge length 2r centered at the point x. We then state the following assumptions: Assumption 1. We make the following assumptions to prove the consistency of our method: 1. k is set such that lim_{N→∞} k = ∞ and lim_{N→∞} (k log N)/N = 0. 2. The set of discrete points {x : PX(x, 0) > 0} is finite. 3. ∫_𝒳 |log f(x)| dPX < +∞, where f ≡ dPX/dP̄X is the Radon-Nikodym derivative. Assumption 1.1 together with 1.2 controls the boundary effect between the continuous and the discrete regions; with this assumption we make sure that all the k nearest neighbors of each point belong to the same region almost surely (i.e. all of them are either continuous or discrete). Assumption 1.3 is the log-integrability of the Radon-Nikodym derivative. These assumptions are satisfied under mild technical conditions whenever the distribution PX over the set 𝒳 is (1) finitely discrete; (2) continuous; (3) finitely discrete over some dimensions and continuous over some others; (4) a mixture of the previous cases; (5) has a joint density supported over a lower-dimensional manifold. These cases represent almost all real-world data. As an example of a case not conforming to the aforementioned cases, we can consider singular distributions, among which the Cantor distribution is a significant example, whose cumulative distribution function is the Cantor function. This distribution has neither a probability density function nor a probability mass function, although its cumulative distribution function is a continuous function. It is thus neither a discrete nor an absolutely continuous probability distribution, nor is it a mixture of these.
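Before stating the formal guarantees, here is a minimal brute-force NumPy/SciPy sketch of Algorithm 1 (our own illustrative implementation, not the authors' code). The DAG G is encoded as a dict mapping each column index l to the list of its parents pa(X_l) — an encoding convention of our own — and the O(N^2) pairwise distance computation is only meant to make the coupling of neighborhood counts across subspaces explicit.

import numpy as np
from scipy.special import digamma

# Our own encodings of the special cases of Section 2.1 (column indices illustrative):
MI_graph  = {0: [], 1: []}            # X = (X1, X2), no edges: GDM = I(X1; X2)
CMI_graph = {0: [2], 1: [2], 2: []}   # Fig. 1b: GDM = I(X1; X2 | X3)
TC_graph  = {0: [], 1: [], 2: []}     # no edges over d = 3: GDM = TC(X1, X2, X3)

def estimate_gdm(x, parents, k=5):
    """x: (N, d) array of samples; parents: dict col -> list of parent cols (the DAG G)."""
    n, d = x.shape
    # Pairwise l_inf distances in the full space X (self-distance is 0 on the diagonal).
    dists = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
    n_roots = sum(1 for l in range(d) if not parents[l])
    zeta = np.empty(n)
    for i in range(n):
        rho = np.partition(dists[i], k)[k]               # distance to the k-th nearest neighbor
        k_tilde = np.count_nonzero(dists[i] <= rho) - 1  # points within rho, excluding x_i (handles ties)
        z = digamma(k_tilde)
        for l in range(d):
            pa, pa_plus = parents[l], parents[l] + [l]
            if pa:  # term present only when pa(X_l) is nonempty
                d_pa = np.max(np.abs(x[:, pa] - x[i, pa]), axis=1)
                z += np.log(np.count_nonzero(d_pa <= rho))   # equals log(n_pa + 1): x_i itself is counted
            d_pp = np.max(np.abs(x[:, pa_plus] - x[i, pa_plus]), axis=1)
            z -= np.log(np.count_nonzero(d_pp <= rho))       # equals log(n_pa+ + 1)
        zeta[i] = z
    return zeta.mean() + (n_roots - 1) * np.log(n)

For instance, estimate_gdm(samples, CMI_graph) estimates I(X1; X2 | X3) for three-column samples, such as the Experiment 1 data generated in the sketch after Experiment 5 below.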
Theorem 1 formally states the convergence of the estimator's mean, while Theorem 2 formally states the convergence of its variance to zero. Theorem 1. Under Assumption 1, we have lim_{N→∞} E[ĜDM(N)(X, G)] = GDM(X, G). Theorem 2. In addition to Assumption 1, assume that (k_N log N)^2 / N → 0 as N goes to infinity. Then we have lim_{N→∞} Var[ĜDM(N)(X, G)] = 0. Theorems 1 and 2 combined yield the consistency of the estimator in Eq. (4). The proof of Theorem 1 starts with writing the Radon-Nikodym derivative explicitly. We then need to upper-bound the term |E[ĜDM(N)(X, G)] − GDM(X, G)|. To achieve this goal, we segregate the domain of X into three parts as 𝒳 = Ω1 ∪ Ω2 ∪ Ω3, where Ω1 = {x : f(x) = 0}, Ω2 = {x : f(x) > 0, PX(x, 0) > 0} and Ω3 = {x : f(x) > 0, PX(x, 0) = 0}. We show that PX(Ω1) = 0. The sets Ω2 and Ω3 correspond to the discrete and continuous regions respectively. Then, for each of the two regions, we introduce an upper bound which goes to zero as N grows without bound; equivalently, we show that the mean of each estimate ζi is close to log f(x) at any point x. The proof of Theorem 2 is based on the Efron-Stein inequality, which upper-bounds the variance of any estimator computed from the observed samples x(1), . . . , x(N). For each sample x(j), we upper-bound the term |ζi(X) − ζi(X\j)| by segregating the samples into various cases and examining each case separately; here ζi(X) is the estimate using all the samples x(1), . . . , x(N) and ζi(X\j) is the estimate when the jth sample is removed. Summing up over all the i's, we obtain an upper bound which converges to 0 as N goes to infinity. 5 Empirical Results In this section, we evaluate the performance of our proposed estimator in comparison with other estimators via numerical experiments. The estimators evaluated here are our estimator, referred to as GDM; the plain KSG-based estimators for continuous distributions, to which we refer as KSG; the binning estimators; and the noise-induced ΣH estimators. A more detailed discussion can be found in Section G. Experiment 1: Markov chain model with continuous-discrete mixture. For the first experiment, we simulated an X-Z-Y Markov chain model in which the random variable X is a uniform random variable U(0, 1) clipped at a threshold 0 < α1 < 1 from above. Then Z = min(X, α2) and Y = min(Z, α3), in which 0 < α3 < α2 < α1. We simulated this system for various numbers of samples, setting α1 = 0.9, α2 = 0.8 and α3 = 0.7. For each set of samples we estimated I(X; Y | Z) via the different methods. The theoretical value of I(X; Y | Z) is 0. The results are shown in Figure 2a. We can see that in this regime, only the GDM estimator converges correctly. The KSG estimator and the ΣH estimator show high negative biases and the binning estimator shows a positive bias. Experiment 2: Mixture of AWGN and BSC channels with variable error probability. For the second scheme of our experiments, we considered an Additive White Gaussian Noise (AWGN) channel in parallel with a Binary Symmetric Channel (BSC), where only one of the two can be activated at a time. The random variable Z = min(α, Z̃), where Z̃ ∼ U(0, 1), controls which channel is activated; i.e.
if Z is lower than the threshold β, the AWGN channel is activated; otherwise the BSC channel is used, where Z also determines the error probability at each time point. We set α = 0.3, β = 0.2, the BSC channel input as X ∼ Bern(0.5), and the AWGN input and noise standard deviations as σX = 1 and σN = 0.1 respectively, and obtained the estimates of I(X; Y | Z, Z^2, Z^3) for the various estimators. The theoretical value equals I(X; Y | Z) = 0.53241, but here the conditioning is over a low-dimensional manifold in a high-dimensional space. The results are shown in Figure 2b. Similar to the previous experiment, the GDM estimator converges correctly to the true value. The ΣH and binning estimators show a negative bias, and the KSG estimator fails entirely. Experiment 3: Total Correlation for independent mixtures. In this experiment, we estimate the total correlation of three independent variables X, Y and Z. The samples for the variable X are generated in the following fashion: first toss a fair coin; if heads appears we fix X at αX, otherwise we draw X from a uniform distribution between 0 and 1. Samples of Y and Z are generated in the same way, independently, with parameters αY and αZ respectively. For this setup, TC(X, Y, Z) = 0. We set αX = 1, αY = 1/2 and αZ = 1/4, and generated various datasets of different lengths. The estimated total correlation values are shown in Figure 2c. Experiment 4: Total Correlation for independent uniforms with correlated zero-inflation. Here we first consider four auxiliary uniform variables X̃1, X̃2, X̃3 and X̃4, which are drawn from U(0.5, 1.5). Then each sample is erased with a Bernoulli probability; i.e. X1 = α1X̃1, X2 = α1X̃2 and X3 = α2X̃3, X4 = α2X̃4, in which α1 ∼ Bern(p1) and α2 ∼ Bern(p2). As we see, after zero-inflation X1 and X2 become correlated, and so do X3 and X4, while still (X1, X2) ⊥⊥ (X3, X4). In the experiment, we set p1 = p2 = 0.6. The results of running the different algorithms over the data can be seen in Figure 2d. For the total correlation experiments 3 and 4, similar to the conditional mutual information experiments 1 and 2, only the GDM estimator closely estimates the true value. The ΣH estimator was removed from the figures due to its high inaccuracy. Experiment 5: Gene Regulatory Networks. In this experiment we use the different estimators to perform gene regulatory network inference based on the conditional Restricted Directed Information (cRDI) [20]. We test on a simulated neural-cell development process, based on a model from [52]. In this model, the time-series vector X consists of 13 random variables, each corresponding to a single gene in the development process. We simulated the development process for various lengths of time series, in which noise N ∼ 𝒩(0, 0.03) is added to all the genes, and every single sample is then subject to erasure (i.e. replaced by 0s) with a probability of 0.5. We then applied the cRDI method utilizing the various CMI estimators and calculated the area under the ROC curve (AUROC). The results are shown in Figure 2e. The cRDI method implemented with the GDM estimator outperforms the other estimators by at least 10% in terms of AUROC. In the tests, cRDI for each (Xi, Xj) is conditioned on the node k ≠ i with the highest RDI value to j. We note that the causal signals are largely destroyed by the zero-inflation, so we do not expect high performance of causal inference over this data. We did not include the ΣH estimator results due to its very low performance.
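As a concrete illustration of how such mixture data arises, the following is a minimal sketch of the Experiment 1 generator (the clipped-uniform Markov chain, with the parameter values stated above); the resulting samples can be fed to any CMI estimator, e.g. estimate_gdm(samples, CMI_graph) from the sketch of Algorithm 1 above.

import numpy as np

rng = np.random.default_rng(0)
N = 10_000
a1, a2, a3 = 0.9, 0.8, 0.7                 # alpha_1 > alpha_2 > alpha_3 as in the text
X = np.minimum(rng.uniform(0, 1, N), a1)   # U(0,1) clipped from above at alpha_1
Z = np.minimum(X, a2)
Y = np.minimum(Z, a3)
# Each variable is a discrete-continuous mixture (a point mass at its clip value),
# which is exactly the regime where Sigma-H estimators break; here I(X;Y|Z) = 0.
samples = np.column_stack([X, Y, Z])       # columns ordered (X1, X2, X3) = (X, Y, Z)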
Experiment 6: Feature Selection by Conditional Mutual Information Maximization. Feature selection is an important pre-processing step in many learning tasks. The application of information-theoretic measures to feature selection is well studied in the literature [7]. Among the well-known methods is conditional mutual information maximization (CMIM), first introduced by Fleuret [4]; a variation of it, called CMIM-2, was later introduced [53]. Both methods use conditional mutual information as their core measure to select the features. Hence the performance of the estimators can significantly influence the performance of the methods. In our experiment, we generated a vector X = (X1, . . . , X15) of 15 random variables in which all the random variables are drawn from 𝒩(0, 1) and then each random variable Xi is clipped from above at αi, which is initially drawn from U(0.25, 0.3) and then kept constant during the sample generation. Then Y is generated as Y = cos(∑_{i=1}^{5} Xi). We then ran the CMIM-2 algorithm with the various CMI estimators to evaluate their performance in extracting the relevant features X1, . . . , X5. The AUROC values for each algorithm versus the number of samples generated are shown in Figure 2f. The feature selection methods implemented with the GDM estimator outperform the other estimators. 6 Discussion and Future Work We have proposed a general paradigm of graph divergence measures, and novel estimators thereof, for general probability spaces, which estimate several generalizations of mutual information. In the future, we would like to derive further efficient estimators for high-dimensional data. In the current work, the estimators are shown to be consistent with infinite scaling of the parameter k; in the future, we would like to understand the finite-k performance of the estimators as well as guarantees on sample complexity and rates of convergence. Another potential direction is to study the variational characterization of the graph divergence measure to design estimators. Improving the computational efficiency of the estimator is another direction of future work. Recent literature, including [54], provides new methodology to estimate mutual information in a computationally efficient manner, and leveraging these ideas for the generalized measures and general probability distributions can be a promising direction ahead. 7 Acknowledgement This work was partially supported by NSF grants 1651236, 1703403 and NIH grant 5R01HG008164. The authors would also like to thank Yihan Jiang for presenting our work at the NeurIPS conference.
1. What is the focus of the paper regarding mutual information estimation? 2. What are the strengths of the proposed approach compared to existing methods? 3. What are the limitations of the proposed estimator regarding computational complexity? 4. How does the reviewer assess the novelty and significance of the paper's contribution? 5. Are there any concerns or questions regarding the technical aspects of the paper?
Review
Review This paper considered the problem of estimating (multivariate) mutual information from the i.i.d. drawn data samples. While this problem has been rather extensively studied in the literature recently, most current methods focus on estimating entropy/differential entropy and then use them to build a plug-in estimator for mutual information. Apparently, this method does not work for general probability spaces with mixed value components. This paper proposed a new angle of viewing (multivariate) mutual information as divergences between a given distribution and a graphical model structure. The most important advantage of the proposed angle is that the Radon-Nikodym derivative is guaranteed to exist and thus the divergences are well defined for general probability spaces. The authors then proposed an estimator to estimate such divergences based on the coupling trick used to construct the well-known KSG estimator and proved the consistency of the proposed estimator. The paper is well organized and well written. In my opinion, the technical innovation is sufficiently novel for publication. My main complaint about this work is that just like the KSG estimator, the computational complexity of the proposed estimator is very high, especially for multi-dimensional distributions. This limits the applicability of the estimator in practice.
NIPS
Title Improved Sample Complexity for Incremental Autonomous Exploration in MDPs Abstract We investigate the exploration of an unknown environment when no reward function is provided. Building on the incremental exploration setting introduced by Lim and Auer [1], we define the objective of learning the set of ε-optimal goal-conditioned policies attaining all states that are incrementally reachable within L steps (in expectation) from a reference state s0. In this paper, we introduce a novel model-based approach that interleaves discovering new states from s0 and improving the accuracy of a model estimate that is used to compute goal-conditioned policies to reach newly discovered states. The resulting algorithm, DisCo, achieves a sample complexity scaling as Õ(L⁵ S_{L+ε} Γ_{L+ε} A ε⁻²), where A is the number of actions, S_{L+ε} is the number of states that are incrementally reachable from s0 in L + ε steps, and Γ_{L+ε} is the branching factor of the dynamics over such states. This improves over the algorithm proposed in [1] in both ε and L at the cost of an extra Γ_{L+ε} factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-reachable states with minimum cost c_min. Finally, we report preliminary empirical results confirming our theoretical findings. 1 Introduction In cases where the reward signal is not informative enough — e.g., too sparse, time-varying or even absent — a reinforcement learning (RL) agent needs to explore the environment driven by objectives other than reward maximization, see [e.g., 2, 3, 4, 5, 6]. This can be performed by designing intrinsic rewards to drive the learning process, for instance via state visitation counts [7, 8], novelty or prediction errors [9, 10, 11]. Other recent methods perform information-theoretic skill discovery to learn a set of diverse and task-agnostic behaviors [12, 13, 14]. Alternatively, goal-conditioned policies learned by carefully designing the sequence of goals during the learning process are often used to solve sparse reward problems [15] and a variety of goal-reaching tasks [16, 17, 18, 19]. While the approaches reviewed above effectively leverage deep RL techniques and are able to achieve impressive results in complex domains (e.g., Montezuma's Revenge [15] or real-world robotic manipulation tasks [19]), they often lack substantial theoretical understanding and guarantees. Recently, some unsupervised RL objectives were analyzed rigorously. Some of them quantify how well the agent visits the states under a sought-after frequency, e.g., to induce a maximally entropic state distribution [20, 21, 22, 23]. While such strategies provably mimic their desired behavior via a Frank-Wolfe algorithmic scheme, they may not learn how to effectively reach any state of the environment and thus may not be sufficient to efficiently solve downstream tasks. Another relevant take is the reward-free RL paradigm of [24]: following its exploration phase, the agent is able to compute a near-optimal policy for any reward function at test time. While this framework yields strong end-to-end guarantees, it is limited to the finite-horizon setting and the agent is thus unable to tackle tasks beyond finite-horizon, e.g., goal-conditioned tasks.
In this paper, we build on and refine the setting of incremental exploration of [1]: the agent starts at an initial state s0 in an unknown, possibly large environment, and it is provided with a RESET action to restart at s0. At a high level, in this setting the agent should explore the environment and stop when it has identified the tasks within its reach and learned to master each of them sufficiently well. More specifically, the objective of the agent is to learn a goal-conditioned policy for any state that can be reached from s0 within L steps in expectation; such a state is said to be L-controllable. Lim and Auer [1] address this setting with the UcbExplore method, for which they bound the number of exploration steps that are required to identify in an incremental way all L-controllable states (i.e., the algorithm needs to define a suitable stopping condition) and to return a set of policies that are able to reach each of them in at most L + ε steps. A key aspect of UcbExplore is to first focus on simple states (i.e., states that can be reached within a few steps), learn policies to efficiently reach them, and leverage them to identify and tackle states that are increasingly more difficult to reach. This approach aims to avoid wasting exploration in the attempt of reaching states that are further than L steps from s0 or that are too difficult to reach given the limited knowledge available at earlier stages of the exploration process. Our main contributions are: • We strengthen the objective of incremental exploration and require the agent to learn ε-optimal goal-conditioned policies for any L-controllable state. Formally, let V*(s) be the length of the shortest path from s0 to s; then the agent needs to learn a policy to navigate from s0 to s in at most V*(s) + ε steps, while in [1] any policy reaching s in at most L + ε steps is acceptable. • We design DisCo, a novel algorithm for incremental exploration. DisCo relies on an estimate of the transition model to compute goal-conditioned policies to the states observed so far, and then uses those policies to improve the accuracy of the model and incrementally discover new states. • We derive a sample complexity bound for DisCo scaling as¹ Õ(L⁵ S_{L+ε} Γ_{L+ε} A ε⁻²), where A is the number of actions, S_{L+ε} is the number of states that are incrementally controllable from s0 in L + ε steps, and Γ_{L+ε} is the branching factor of the dynamics over such incrementally controllable states. Not only is this sample complexity obtained for a more challenging objective than UcbExplore, but it also improves in both ε and L at the cost of an extra Γ_{L+ε} factor, which is small in most environments of interest. • Leveraging the model-based nature of DisCo, we can also readily compute an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-controllable states with minimum cost c_min. This result serves as a goal-conditioned counterpart to the reward-free exploration framework defined by Jin et al. [24] for the finite-horizon setting. 2 Incremental Exploration to Discover and Control In this section we expand on [1] with a more challenging objective for autonomous exploration. 2.1 L-Controllable States We consider a reward-free Markov decision process [25, Sect. 8.3] M := ⟨S, A, p, s0⟩.
We assume a finite action space A with A = |A| actions, and a finite, possibly large state space S for which an upper bound S on its cardinality is known, i.e., |S| S.2 Each state-action pair (s, a) 2 S ⇥A is characterized by an unknown transition probability distribution p(·|s, a) over next states. We denote by S0 := maxs2S0,ak{p(s0|s, a)}s02S0k0 the largest branching factor of the dynamics over states in any subset S 0 ✓ S . The environment has no extrinsic reward, and s0 2 S is a designated initial state. A deterministic stationary policy ⇡ : S ! A is a mapping between states to actions and we denote by ⇧ the set of all possible policies. Since in environments with arbitrary dynamics the learner may get stuck in a state without being able to return to s0, we introduce the following assumption.3 1We say that f(") = eO("↵) if there are constants a, b, such that f(") a · "↵ logb " . 2Lim and Auer [1] originally considered a countable, possibly infinite state space; however this leads to a technical issue in the analysis of UcbExplore (acknowledged by the authors via personal communication and explained in App. E.3), which disappears by considering only finite state spaces. 3This assumption should be contrasted with the finite-horizon setting, where each policy resets automatically after H steps, or assumptions on the MDP dynamics such as ergodicity or bounded diameter, which guarantee that it is always possible to find a policy navigating between any two states. Assumption 1. The action space contains a RESET action s.t. p(s0|s, RESET) = 1 for any s 2 S . We make explicit the states where a policy ⇡ takes action RESET in the following definition. Definition 1 (Policy restricted on a subset). For any S 0 ✓ S, a policy ⇡ is restricted on S 0 if ⇡(s) = RESET for any s /2 S 0. We denote by ⇧(S 0) the set of policies restricted on S 0. We measure the performance of a policy in navigating the MDP as follows. Definition 2. For any policy ⇡ and a pair of states (s, s0) 2 S2, let ⌧⇡(s ! s0) be the (random) number of steps it takes to reach s0 starting from s when executing policy ⇡, i.e., ⌧⇡(s ! s0) := inf{t 0 : st+1 = s0 | s1 = s,⇡}. We also set v⇡(s ! s0) := E[⌧⇡(s ! s0)] as the expected traveling time, which corresponds to the value function of policy ⇡ in a stochastic shortest-path setting (SSP, [26, Sect. 3]) with initial state s, goal state s0 and unit cost function. Note that we have v⇡(s ! s0) = +1 when the policy ⇡ does not reach s0 from s with probability 1. Furthermore, for any subset S 0 ✓ S and any state s, we denote by V ?S0(s0 ! s) := min ⇡2⇧(S0) v⇡(s0 ! s), the length of the shortest path to s, restricted to policies resetting to s0 from any state outside S 0. The objective of the learning agent is to control efficiently the environment in the vicinity of s0. We say that a state s is controlled if the agent can reliably navigate to it from s0, that is, there exists an effective goal-conditioned policy — i.e., a shortest-path policy — from s0 to s. Definition 3 (L-controllable states). Given a reference state s0, we say that a state s is L-controllable if there exists a policy ⇡ such that v⇡(s0 ! s) L. The set of L-controllable states is then SL := {s 2 S : min ⇡2⇧ v⇡(s0 ! s) L}. (1) We illustrate the concept of controllable states in Fig. 1 for L = 3. Interestingly, in the right figure, the black states are not L-controllable. In fact, there is no policy that can directly choose which one of the black states to reach. 
On the other hand, the red state, despite being in some sense further from s0 than the black states, does belong to SL. In general, there is a crucial difference between the existence of a random realization where a state s is reached from s0 in less than L steps (i.e., black states) and the notion of L-controllability, which means that there exists a policy that consistently reaches the state in a number of steps less or equal than L on average (i.e., red state). This explains the choice of the term controllable over reachable, since a state s is often said to be reachable if there is a policy ⇡ with a non-zero probability to eventually reach it, which is a weaker requirement. Unfortunately, Lim and Auer [1] showed that in order to discover all the states in SL, the learner may require a number of exploration steps that is exponential in L or |SL|. Intuitively, this negative result is due to the fact that the minimum in Eq. 1 is over the set of all possible policies, including those that may traverse states that are not in SL.4 Hence, we similarly constrain the learner to focus on the set of incrementally controllable states. Definition 4 (Incrementally controllable states S! L ). Let be some partial order on S. The set S L of states controllable in L steps w.r.t. is defined inductively as follows. The initial state s0 4We refer the reader to [1, Sect. 2.1] for a more formal and complete characterization of this negative result. belongs to S L by definition and if there exists a policy ⇡ restricted on {s0 2 S L : s0 s} with v⇡(s0 ! s) L, then s 2 S L . The set S!L of incrementally L-controllable states is defined as S! L := [ S L , where the union is over all possible partial orders. By way of illustration, in Fig. 1 for L = 3, it holds that S! L = SL in the left figure, whereas S! L = {s0} 6= SL in the right figure. Indeed, while the red state is L-controllable, it requires traversing the black states, which are not L-controllable. 2.2 AX Objectives We are now ready to formalize two alternative objectives for Autonomous eXploration (AX) in MDPs. Definition 5 (AX sample complexity). Fix any length L 1, error threshold " > 0 and confidence level 2 (0, 1). The sample complexities CAXL(A, L, ", ) and CAX?(A, L, ", ) are defined as the number of time steps required by a learning algorithm A to identify a set K ◆ S! L such that with probability at least 1 , it has learned a set of policies {⇡s}s2K that respectively verifies the following AX requirement (AXL) 8s 2 K, v⇡s(s0 ! s) L+ ", (AX?) 8s 2 K, v⇡s(s0 ! s) V ?S! L (s0 ! s) + ". Designing agents satisfying the objectives defined above introduces critical difficulties w.r.t. standard goal-directed learning in RL. First, the agent has to find accurate policies for a set of goals (i.e., all incrementally L-controllable states) and not just for one specific goal. On top of this, the set of desired goals itself (i.e., the set S! L ) is unknown in advance and has to be estimated online. Specifically, AXL is the original objective introduced in [1] and it requires the agent to discover all the incrementally L-controllable states as fast as possible.5 At the end of the learning process, for each state s 2 S! L the agent should return a policy that can reach s from s0 in at most L steps (in expectation). Unfortunately, this may correspond to a rather poor performance in practice. Consider a state s 2 S! L such that V ?S! L (s0 ! s) ⌧ L, i.e., the shortest path between s0 to s following policies restricted on S! L is much smaller than L. 
Satisfying AXL only guarantees that a policy reaching s in L steps is found. On the other hand, objective AX? is more demanding, as it requires learning a near-optimal shortest-path policy for each state in S! L . Since V ?S! L (s0 ! s) L and the gap between the two quantities may be arbitrarily large, especially for states close to s0 and far from the fringe of S! L , AX? is a significantly tighter objective than AXL and it is thus preferable in practice. We say that an exploration algorithm solves the AX problem if its sample complexity CAX(A, L, ", ) in Def. 5 is polynomial in |K|, A, L, " 1 and log(S). Notice that requiring a logarithmic dependency on the size of S is crucial but nontrivial, since the overall state space may be large and we do not want the agent to waste time trying to reach states that are not L-controllable. The dependency on the (algorithmic-dependent and random) set K can be always replaced using the upper bound |K| |S! L+"|, which is implied with high probability by both AXL and AX? conditions. Finally, notice that the error threshold " > 0 has a two-fold impact on the performance of the algorithm. First, " defines the largest set S! L+" that could be returned by the algorithm: the larger ", the bigger the set. Second, as " increases, the quality (in terms of controllability and navigational precision) of the output policies worsens w.r.t. the shortest-path policy restricted on S! L . 3 The DisCo Algorithm The algorithm DisCo — for Discover and Control — is detailed in Alg. 1. It maintains a set K of “controllable” states and a set U of states that are considered “uncontrollable” so far. A state s is tagged as controllable when a policy to reach s in at most L + " steps (in expectation from s0) has been found with high confidence, and we denote by ⇡s such policy. The states in U are states that have been discovered as potential members of S! L , but the algorithm has yet to produce a policy to control any of them in less than L + " steps. The algorithm stores an estimate of the transition model and it proceeds through rounds, which are indexed by k and incremented whenever a state in U gets transferred to the set K, i.e., when the transition model reaches a level of accuracy sufficient 5Note that we translated in the condition in [1] of a relative error of L" to an absolute error of ", to align it with the common formulation of sample complexity in RL. Algorithm 1: Algorithm DisCo Input: Actions A, initial state s0, confidence parameter 2 (0, 1), error threshold " > 0, L 1 and (possibly adaptive) allocation function : P(S) ! N (where P(S) denotes the power set of S). 1 Initialize k := 0, K0 := {s0}, U0 := {} and a restricted policy ⇡s0 2 ⇧(K0). 2 Set " := min{", 1} and continue := True. 3 while continue do 4 Set k += 1. //new round // ¨ Sample collection on K 5 For each (s, a) 2 Kk ⇥A, execute policy ⇡s until the total number of visits Nk(s, a) to (s, a) satisfies Nk(s, a) nk := (Kk). For each (s, a) 2 Kk ⇥A, add s0 ⇠ p(·|s, a) to Uk if s0 /2 Kk. // ≠ Restriction of candidate states U 6 Compute transitions bpk(s0|s, a) and Wk := n s0 2 Uk : 9(s, a) 2 Kk ⇥A, bpk(s0|s, a) 1 "/2L o · 7 if Wk is empty then 8 Set continue := False. //condition STOP1 9 else // Æ Computation of the optimistic policies on K 10 for each state s0 2 Wk do 11 Compute (eus0 , e⇡s0) := OVISSP(Kk,A, s0, Nk, "6L ), see Alg. 3 in App. D.1. 12 Let s† := argmins2Wk eus(s0) and eu † := eus†(s0). 13 if eu† > L then 14 Set continue := False. 
//condition STOP2 15 else // Ø State transfer from U to K 16 Set Kk+1 := Kk [ {s†}, Uk+1 := Uk \ {s†} and ⇡s† := e⇡s† . // ∞ Policy consolidation: computation on the final set K 17 Set K := k. 18 for each state s 2 KK do 19 Compute (eus, e⇡s) := OVISSP(KK ,A, s,NK , "6L ). 20 Output: the states s in KK and their corresponding policy ⇡s := e⇡s. to compute a policy to control one of the states encountered before. We denote by Kk (resp.Uk) the set of controllable (resp. uncontrollable) states at the beginning of round k. DisCo stops at a round K when it can confidently claim that all the remaining states outside of KK cannot be L-controllable. At each round, the algorithm uses all samples observed so far to build an estimate of the transition model denoted by bp(s0|s, a) = N(s, a, s0)/N(s, a), where N(s, a) and N(s, a, s0) are counters for state-action and state-action-next state visitations. Each round is divided into two phases. The first is a sample collection phase. At the beginning of round k, the agent collects additional samples until nk := (Kk) samples are available at each state-action pair in Kk ⇥A (step ¨). A key challenge lies in the careful (and adaptive) choice of the allocation function , which we report in the statement of Thm. 1 (see Eq. 19 in App. D.4 for its exact definition). Importantly, the incremental construction of Kk entails that sampling at each state s 2 Kk can be done efficiently. In fact, for all s 2 Kk the agent has already confidently learned a policy ⇡s to reach s in at most L+ " steps on average (see how such policy is computed in the second phase). The generation of transitions (s, a, s0) for (s, a) 2 Kk ⇥A achieves two objectives at once. First, it serves as a discovery step, since all observed next states s0 not in Uk are added to it — in particular this guarantees sufficient exploration at the fringe (or border) of the set Kk. Second, it improves the accuracy of the model p in the states in Kk, which is essential in computing near-optimal policies and thus fulfilling the AX? condition. The second phase does not require interacting with the environment and it focuses on the computation of optimistic policies. The agent begins by significantly restricting the set of candidate states in each round to alleviate the computational complexity of the algorithm. Namely, among all the states in Uk, it discards those that do not have a high probability of belonging to S! L by considering a restricted set Wk ✓ Uk (step ≠). In fact, if the estimated probability bpk of reaching a state s 2 Uk from any of the controllable states in Kk is lower than (1 "/2)/L, then no shortest-path policy restricted on Kk could get to s from s0 in less than L+ " steps on average. Then for each state s0 in Wk, DisCo computes an optimistic policy restricted on Kk to reach s0. Formally, for any candidate state s0 2 Wk, we define the induced stochastic shortest path (SSP) MDP M 0 k with goal state s0 as follows. Definition 6. We define the SSP-MDP M 0 k := hS,A0 k (·), c0 k , p0 k i with goal state s0, where the action space is such that A0 k (s) = A for all s 2 Kk and A0k(s) = {RESET} otherwise (i.e., we focus on policies restricted on Kk). The cost function is such that for all a 2 A, c0k(s0, a) = 0, and for any s 6= s0, c0 k (s, a) = 1. The transition model is p0 k (s0|s0, a) = 1 and p0 k (·|s, a) = p(·|s, a) otherwise.6 The solution of M 0 k is the shortest-path policy from s0 to s0 restricted on Kk. 
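Definition 6 specifies a standard unit-cost stochastic shortest-path problem. As a point of reference, here is a minimal value-iteration sketch of our own for solving such an SSP when the transition model is known; DisCo itself must instead run an optimistic variant (OVISSP, described next) on the estimated model, which we do not reproduce. The function name, the boolean mask `in_K`, and the convention that s0 has index 0 are illustrative assumptions.

import numpy as np

def ssp_value_iteration(P, goal, in_K, tol=1e-6, max_iter=100_000):
    # P: (S, A, S) known transition tensor; goal: index of the goal state;
    # in_K: boolean mask of the controllable set K (states outside K only RESET to s0).
    S, A, _ = P.shape
    u = np.zeros(S)
    for _ in range(max_iter):
        q = 1.0 + np.einsum("sap,p->sa", P, u)  # unit cost plus expected next-state value
        u_new = np.min(q, axis=1)
        u_new[~in_K] = 1.0 + u[0]  # outside K: forced RESET, landing in s0 (index 0)
        u_new[goal] = 0.0          # the goal is absorbing and cost-free
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return u  # u[0] approximates the restricted shortest-path value from s0 to the goal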
Since p′_k is unknown, DisCo cannot compute the exact solution of M′_k; instead, it executes optimistic value iteration (OVI_SSP) for SSP [27, 28] to obtain a value function ũ_{s′} and its associated greedy policy π̃_{s′} restricted on K_k (see App. D.1 for more details). The agent then chooses a candidate goal state s† for which the value ũ† := ũ_{s†}(s0) is the smallest. This step can be interpreted as selecting the optimistically most promising new state to control. Two cases are possible. If ũ† ≤ L, then s† is added to K_k (step ④), since the accuracy of the model estimate on the state-action space K_k × A guarantees that the policy π̃_{s†} is able to reach the state s† in less than L + ε steps in expectation with high probability (i.e., s† is incrementally (L + ε)-controllable). Otherwise, we can guarantee that S_L^→ ⊆ K_k with high probability. In the latter case, the algorithm terminates and, using the current estimates of the model, it recomputes an optimistic shortest-path policy π_s restricted on the final set K_K for each state s ∈ K_K (step ⑤). This policy consolidation step is essential to identify near-optimal policies restricted on the final set K_K (and thus on S_L^→): indeed, the expansion of the set of the so-far controllable states may alter and refine the optimal goal-reaching policies restricted on it (see App. A).

Computational Complexity. Note that, algorithmically, we do not need to define M′_k (Def. 6) over the whole state space S, as we can limit it to K_k ∪ {s′}, i.e., the candidate state s′ and the set K_k of so-far controllable states. As shown in Thm. 1, this set can be significantly smaller than S. In particular, this implies that the computational complexity of the value iteration algorithm used to compute the optimistic policies is independent of S (see App. D.9 for more details).

4 Sample Complexity Analysis of DisCo

We now present our main result: a sample complexity guarantee for DisCo for the AX* objective, which directly implies that AXL is also satisfied.

Theorem 1. There exists an absolute constant α > 0 such that for any L ≥ 1, ε ∈ (0, 1], and δ ∈ (0, 1), if we set the allocation function φ as

    φ : X ↦ α · ( (L⁴ Θ̂(X) / ε²) · log²(LSA/(εδ)) + (L² |X| / ε) · log(LSA/(εδ)) ),   (2)

with Θ̂(X) := max_{(s,a)∈X×A} ( Σ_{s′∈X} √( p̂(s′|s, a)(1 − p̂(s′|s, a)) ) )², then the algorithm DisCo (Alg. 1) satisfies the following sample complexity bound for AX*:

    C_{AX*}(DisCo, L, ε, δ) = Õ( (L⁵ Γ_{L+ε} S_{L+ε} A) / ε² + (L³ S²_{L+ε} A) / ε ),   (3)

where S_{L+ε} := |S_{L+ε}^→| and Γ_{L+ε} := max_{(s,a)∈S_{L+ε}^→×A} ‖{p(s′|s, a)}_{s′∈S_{L+ε}^→}‖₀ ≤ S_{L+ε} is the maximal support of the transition probabilities p(·|s, a) restricted to the set S_{L+ε}^→.

Given the definition of AX*, Thm. 1 implies that DisCo 1) terminates after C_{AX*}(DisCo, L, ε, δ) time steps, 2) discovers a set of states K ⊇ S_L^→ with |K| ≤ S_{L+ε}, 3) and for each s ∈ K outputs a policy π_s which is ε-optimal w.r.t. policies restricted on S_L^→, i.e., v_{π_s}(s0 → s) ≤ V*_{S_L^→}(s0 → s) + ε. Note that Eq. 3 displays only a logarithmic dependency on S, the total number of states. This property of the sample complexity of DisCo, along with its S-independent computational complexity, is significant when the state space S grows large w.r.t. the unknown set of interest S_L^→.

⁶In words, all actions at states in K_k behave exactly as in M and suffer a unit cost, in all states outside K_k only the reset action to s0 is available with a unit cost, and all actions at the goal s′ induce a zero-cost self-loop.
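As a simplified stand-in for the OVI_SSP routine invoked above, the sketch below runs plain (non-optimistic) value iteration on the unit-cost SSP of Def. 6, returning a value function and its greedy policy. The actual routine (Alg. 3 in App. D.1) additionally computes these quantities on an optimistically perturbed model; that refinement, as well as the tolerance and iteration-cap defaults, are omissions and assumptions of this illustration. It reuses the P, c representation produced by build_restricted_ssp above.

```python
# A minimal (non-optimistic) value-iteration sketch for the unit-cost SSP of
# Def. 6; the real OVI_SSP is optimistic, which this illustration omits.
def ssp_value_iteration(P, c, goal, states, actions_of, tol=1e-6, max_iter=100_000):
    u = {s: 0.0 for s in states}        # value-to-go estimates, goal pinned at 0
    for _ in range(max_iter):
        u_new = {}
        for s in states:
            if s == goal:
                u_new[s] = 0.0
                continue
            u_new[s] = min(
                c[(s, a)] + sum(p * u[sn] for sn, p in P[(s, a)].items())
                for a in actions_of(s)
            )
        converged = max(abs(u_new[s] - u[s]) for s in states) < tol
        u = u_new
        if converged:
            break
    # greedy policy w.r.t. the final value function u
    pi = {s: min(actions_of(s),
                 key=lambda a: c[(s, a)] + sum(p * u[sn]
                                               for sn, p in P[(s, a)].items()))
          for s in states if s != goal}
    return u, pi
```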
4.1 Proof Sketch of Theorem 1

While the complete proof is reported in App. D, we now provide the main intuition behind the result.

State Transfer from U to K (step ④). Let us focus on a round k and a state s† ∈ U_k that gets added to K_k. For clarity we remove from the notation the round k, the goal state s† and the starting state s0. We denote by v and ṽ the value functions of the candidate policy π̃ in the true and optimistic model respectively, and by ũ the quantity w.r.t. which π̃ is optimistically greedy. We aim to prove that s† ∈ S_{L+ε}^→ (with high probability). The main chain of inequalities underpinning the argument is

    v ≤ |v − ṽ| + ṽ ≤(a) ε/2 + ṽ ≤(b) ε/2 + ũ + ε/2 ≤(c) L + ε,   (4)

where (c) is guaranteed by algorithmic construction and (b) stems from the chosen level of value iteration accuracy. Inequality (a) has the flavor of a simulation lemma for SSP, relating the shortest-path value functions of the same policy between two models (the true one and the optimistic one). Importantly, when restricted to K these two models are close by virtue of the algorithmic design, which enforces the collection of a minimum amount of samples, denoted by n, at each state-action pair of K × A. Specifically, we obtain that

    |v − ṽ| = Õ( √(L⁴ Γ_K / n) + L² |K| / n ),

with Γ_K := max_{(s,a)∈K×A} ‖{p(s′|s, a)}_{s′∈K}‖₀ ≤ |K|. Note that Γ_K is the branching factor restricted to the set K. Our choice of n (given in Eq. 2) is then dictated by the need to upper bound the above quantity by ε/2 in order to satisfy inequality (a). Let us point out that, interestingly yet unfortunately, the structure of the problem does not appear to allow for technical variance-aware improvements seeking to lower the value of n prescribed above (indeed, the AX framework requires to analytically encompass the uncontrollable states U into a single meta-state with higher transitional uncertainty, see App. D for details).

Termination of the Algorithm. Since S_L^→ is unknown, we have to ensure that none of the states in S_L^→ are "missed". As such, we prove that with overwhelming probability, we have S_L^→ ⊆ K_K when the algorithm terminates at a round denoted by K. There remains to justify the final near-optimality guarantee w.r.t. the set of policies Π(S_L^→). Leveraging that step ⑤ recomputes the policies (π_s)_{s∈K_K} on the final set K_K, we establish the following chain of inequalities

    v ≤ |v − ṽ| + ṽ ≤(a) ε/2 + ṽ ≤(b) ε/2 + ũ + ε/2 ≤(c) V*_{K_K} + ε ≤(d) V*_{S_L^→} + ε,   (5)

where (a) and (b) are as in Eq. 4, (c) leverages optimism and (d) stems from the inclusion S_L^→ ⊆ K_K.

Sample Complexity Bound. The choice of allocation function φ in Eq. 2 bounds n_K, which is the total number of samples required at each state-action pair in K_K × A. We then compute a high-probability bound on the time steps needed to collect a given sample, and show that it scales as Õ(L). Since the sample complexity is solely induced by the sample collection phase (step ①), it can be bounded by the quantity n_K |K_K| A. Putting everything together yields the bound of Thm. 1.
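To illustrate how the sample allocation n_k = φ(K_k) of Eq. 2 could be evaluated in practice, the snippet below transcribes the formula with the absolute constant α defaulted to 1 and the logarithmic argument assumed to be LSA/(εδ); the exact expression (Eq. 19 in App. D.4) may differ in constants and log terms, so treat those pieces as assumptions of this sketch.

```python
# An illustrative transcription of the allocation function phi of Eq. 2;
# constants and the exact log arguments are assumptions, not App. D.4's values.
import math

def theta_hat(model, X, n_actions):
    """\hat{Theta}(X): worst (squared) sum over X of sqrt(p_hat * (1 - p_hat))."""
    best = 0.0
    for s in X:
        for a in range(n_actions):
            tot = sum(math.sqrt(model.p_hat(s, a, sn) * (1 - model.p_hat(s, a, sn)))
                      for sn in X)
            best = max(best, tot ** 2)
    return best

def allocation(model, X, n_actions, L, eps, S, A, delta, alpha=1.0):
    log_term = math.log(L * S * A / (eps * delta))
    n = alpha * (L**4 * theta_hat(model, X, n_actions) / eps**2 * log_term**2
                 + L**2 * len(X) / eps * log_term)
    return math.ceil(n)
```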
4.2 Comparison with UcbExplore [1]

We start by recalling the critical distinction that DisCo succeeds in tackling problem AX*, while UcbExplore [1] fails to do so (see App. A for details on the AX objectives). Nonetheless, in the following we show that even if we restrict our attention to AXL, for which UcbExplore is designed, DisCo yields a better sample complexity in most cases. From [1], UcbExplore verifies⁷

    C_{AXL}(UcbExplore, L, ε, δ) = Õ( L⁶ S_{L+ε} A / ε³ ).   (6)

⁷Note that if we replace the error of ε for AXL with an error of Lε as in [1], we recover the sample complexity of Õ(L³ S_{L+ε} A / ε³) stated in [1, Thm. 8].

Eq. 6 shows that the sample complexity of UcbExplore is linear in S_{L+ε}, while for DisCo the dependency is somewhat worse. In the main-order term Õ(1/ε²) of Eq. 3, the bound depends linearly on S_{L+ε} but also grows with the branching factor Γ_{L+ε}, which is not the "global" branching factor but denotes the number of possible next states in S_{L+ε}^→ starting from S_{L+ε}^→. While in general we only have Γ_{L+ε} ≤ S_{L+ε}, in many practical domains (e.g., robotics, user modeling), each state can only transition to a small number of states, i.e., we often have Γ_{L+ε} = O(1) as long as the dynamics is not too "chaotic". While DisCo does suffer from a quadratic dependency on S_{L+ε} in the second term of order Õ(1/ε), we notice that for any S_{L+ε} ≤ L³ε⁻² the bound of DisCo is still preferable. Furthermore, since for ε → 0, S_{L+ε} tends to S_L, the condition is always verified for small enough ε.

Compared to DisCo, the sample complexity of UcbExplore is worse in both ε and L. As stressed in Sect. 2.2, the better dependency on ε both improves the quality of the output goal-reaching policies and reduces the number of incrementally (L + ε)-controllable states returned by the algorithm. It is interesting to investigate why the bound of [1] (Eq. 6) inherits a Õ(ε⁻³) dependency. As reviewed in App. E, UcbExplore alternates between two phases of state discovery and policy evaluation. The optimistic policies computed by UcbExplore solve a finite-horizon problem (with horizon set to H_UCB). However, minimizing the expected time to reach a target state is intrinsically an SSP problem, which is exactly what DisCo leverages. By computing policies that solve a finite-horizon problem (note that UcbExplore resets every H_UCB time steps), [1] sets the horizon to H_UCB := ⌈L + L²ε⁻¹⌉, which leads to a policy-evaluation phase with sample complexity scaling as Õ(H_UCB ε⁻²) = Õ(ε⁻³). Since the rollout budget of Õ(ε⁻³) is hard-coded into the algorithm, the dependency on ε of UcbExplore's sample complexity cannot be improved by a more refined analysis; instead, a different algorithmic approach is required, such as the one employed by DisCo.

4.3 Goal-Free Cost-Free Exploration on S_L^→ with DisCo

A compelling advantage of DisCo is that it achieves an accurate estimation of the environment's dynamics restricted to the unknown subset of interest S_L^→. In contrast to UcbExplore, which needs to restart its sample collection from scratch whenever L, ε or some transition costs change, DisCo is thus robust to changes in such problem parameters. At the end of its exploration phase in Alg. 1, DisCo is able to perform zero-shot planning to solve other tasks restricted on S_L^→, such as cost-sensitive ones. Indeed, in the following we show how the DisCo agent is able to compute an ε/c_min-optimal policy for any stochastic shortest-path problem on S_L^→ with goal state s ∈ S_L^→ (i.e., s is absorbing and zero-cost) and cost function lower bounded by c_min > 0.

Corollary 1. There exists an absolute constant β > 0 such that for any L ≥ 1, ε ∈ (0, 1] and c_min ∈ (0, 1] verifying ε ≤ β · (L c_min), with probability at least 1 − δ, for whatever goal state s ∈ S_L^→ and whatever cost function c in [c_min, 1], DisCo can compute (after its exploration phase, without additional environment interaction) a policy π̂_{s,c} whose SSP value function V_{π̂_{s,c}} verifies

    V_{π̂_{s,c}}(s0 → s) ≤ V*_{S_L^→}(s0 → s) + ε/c_min,

where V_π(s0 → s) := E[ Σ_{t=1}^{τ_π(s0→s)} c(s_t, π(s_t)) | s_1 = s0 ] is the SSP value function of a policy π and V*_{S_L^→}(s0 → s) := min_{π∈Π(S_L^→)} V_π(s0 → s) is the optimal SSP value function restricted on S_L^→.
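The zero-shot planning step promised by Cor. 1 amounts to re-running shortest-path planning on the already-learned model with the new cost function, without further interaction. Below is a minimal sketch of that step, reusing build_restricted_ssp and ssp_value_iteration from the earlier sketches; the cost_fn callable, and the omission of the optimism used in the actual analysis, are our own simplifying assumptions.

```python
# A sketch of Cor. 1's zero-shot planning: plan on the learned model for an
# arbitrary cost function c in [c_min, 1] and an arbitrary goal, with no new
# environment interaction. cost_fn is an assumed user-supplied callable.
def zero_shot_plan(model, K, s0, goal, states, n_actions, cost_fn):
    P, c = build_restricted_ssp(model, K, goal, s0, n_actions, states)
    for (s, a) in list(c.keys()):       # overwrite unit costs, keep goal at 0
        if s != goal:
            c[(s, a)] = cost_fn(s, a)   # assumed to lie in [c_min, 1]
    actions_of = lambda s: (range(n_actions) if s in K else (0,))
    return ssp_value_iteration(P, c, goal, states, actions_of)
```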
It is interesting to compare Cor. 1 with the reward-free exploration framework recently introduced by Jin et al. [24] in the finite-horizon setting. At a high level, the result in Cor. 1 can be seen as a counterpart of [24] beyond finite-horizon problems, specifically in the goal-conditioned setting. While the parameter L defines the horizon of interest for DisCo, resetting after every L steps (as in finite-horizon) would prevent the agent from identifying L-controllable states and lead to poor performance. This explains the distinct technical tools used: while [24] executes finite-horizon no-regret algorithms, DisCo deploys SSP policies restricted on the set of states that it "controls" so far. Algorithmically, both approaches seek to build accurate estimates of the transitions on a specific (unknown) state space of interest: the so-called "significant" states within H steps for [24], and the incrementally L-controllable states S_L^→ for DisCo. Bound-wise, the cost-sensitive AX* problem inherits the critical role of the minimum cost c_min in SSP problems (see App. C and, e.g., [27, 28, 29]), which is reflected in the accuracy of Cor. 1 scaling inversely with c_min. Another interesting element of comparison is the dependency on the size of the state space. While the algorithm introduced in [24] is robust w.r.t. states that can be reached with very low probability, it still displays a polynomial dependency on the total number of states S. On the other hand, DisCo has only a logarithmic dependency on S, while it directly depends on the number of (L + ε)-controllable states, which shows that DisCo effectively adapts to the state space of interest and ignores all other states. This result is significant not only because S_{L+ε} can be arbitrarily smaller than S, but also because the set S_{L+ε}^→ itself is initially unknown to the algorithm.

5 Numerical Simulation

In this section, we provide the first evaluation of algorithms in the incremental autonomous exploration setting. In the implementation of both DisCo and UcbExplore, we remove the logarithmic and constant terms for simplicity. We also boost the empirical performance of UcbExplore in various ways, for example by considering confidence intervals derived from the empirical Bernstein inequality (see [30]) as opposed to Hoeffding's as done in [1]. We refer the reader to App. F for details on the algorithmic configurations and on the environments considered. We compare the sample complexity empirically achieved by DisCo and UcbExplore. Fig. 2 depicts the time needed to identify all the incrementally L-controllable states when L = 4.5 for different values of ε, on a confusing-chain domain. Note that the sample complexity is achieved soon after, when the algorithm can confidently discard all the remaining states as non-controllable (it is reported in Tab. 2 of App. F). We observe that DisCo outperforms UcbExplore for every value of ε. In particular, the gap in performance increases as ε decreases, which matches the theoretical improvement in sample complexity from Õ(ε⁻³) for UcbExplore to Õ(ε⁻²) for DisCo. On a second environment — the combination-lock problem introduced in [31] — we notice that DisCo again outperforms UcbExplore, as shown in App. F. Another important feature of DisCo is that it targets the tighter objective AX*, whereas UcbExplore is only able to fulfill objective AXL and may therefore elect suboptimal policies.
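To get a feel for the ε-scaling that Fig. 2 illustrates, the snippet below evaluates the dominant terms of the two theoretical bounds (Eq. 3 and Eq. 6) side by side, with all constants and logarithmic factors dropped; the specific parameter values (S_{L+ε} = 10, Γ_{L+ε} = 3, A = 4) are made-up illustrations, not the experimental configuration of App. F.

```python
# Back-of-the-envelope comparison of the dominant terms of Eq. 3 (DisCo) and
# Eq. 6 (UcbExplore); constants, logs, and all parameter values are made up.
def disco_bound(L, S_Le, Gamma_Le, A, eps):
    return L**5 * Gamma_Le * S_Le * A / eps**2 + L**3 * S_Le**2 * A / eps

def ucb_explore_bound(L, S_Le, A, eps):
    return L**6 * S_Le * A / eps**3

for eps in (0.4, 0.2, 0.1):
    d = disco_bound(L=4.5, S_Le=10, Gamma_Le=3, A=4, eps=eps)
    u = ucb_explore_bound(L=4.5, S_Le=10, A=4, eps=eps)
    # the ratio grows roughly like L / (Gamma * eps) as eps shrinks
    print(f"eps={eps}: DisCo ~ {d:.2e}, UcbExplore ~ {u:.2e}, ratio {u / d:.1f}")
```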
In App. F we show empirically that, as expected theoretically, this directly translates into higher-quality goal-reaching policies recovered by DisCo.

6 Conclusion and Extensions

Connections to existing deep-RL methods. While we primarily focus our analysis of DisCo on the tabular case, we believe that the formal definition of AX problems and the general structure of DisCo may also serve as a theoretical grounding for many recent approaches to unsupervised exploration. For instance, it is interesting to draw a parallel between DisCo and the ideas behind Go-Explore [32]. Go-Explore similarly exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), (3) then explore from it. Go-Explore assumes that the world is deterministic and resettable, meaning that one can reset the state of the simulator to a previously visited cell. Very recently [15], the same authors proposed a way to relax this requirement by training goal-conditioned policies to reliably return to cells in the archive during the exploration phase. In this paper, we investigated the theoretical dimension of this direction, by provably learning such goal-conditioned policies for the set of incrementally controllable states.

Future work. Interesting directions for future investigation include: 1) deriving a lower bound for the AX problems; 2) integrating DisCo into the meta-algorithm MNM [33], which deals with incremental exploration for AXL in non-stationary environments; 3) extending the problem to continuous state spaces and function approximation; 4) relaxing the definition of incrementally controllable states and relaxing the performance definition to allow the agent a non-zero but limited sample complexity for learning a shortest-path policy for any state at test time.

Broader Impact

This paper makes contributions to the fundamentals of online learning (RL) and, due to its theoretical nature, we see no ethical or immediate societal consequences of our work.
1. What is the main contribution of the paper in the field of artificial intelligence?
2. What are the strengths of the proposed algorithm, particularly in terms of sample complexity and optimality?
3. Are there any weaknesses or limitations in the paper that need to be addressed?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any other relevant works that should be considered when evaluating this paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: The paper proposes an algorithm, DisCo, to compute the ε-optimal goal-conditioned policies covering the states that are incrementally reachable within L steps from a reference state. It improves over the previous sample complexity bound of UcbExplore in terms of L and ε but worsens in terms of S. DisCo is further adapted to find an ε/c_min-optimal policy for cost-sensitive SSP.

Strengths:
1. The paper proposes a new algorithm which is well explained and modular.
2. The paper provides a new bound on sample complexity while satisfying the stronger AX* condition.
3. The paper draws parallels and contrasts to the existing work [1] in detail, which is very helpful.
4. DisCo also provides guarantees for finding optimal policies in cost-sensitive SSP.
5. This paper can be a good step towards analysing popular deep-RL methods like Go-Explore.

Weaknesses:
1. Not enough comparison with the recent reward-free exploration work [24].
2. The non-theoretical motivation behind L-controllable states and, more importantly, its limitations are not discussed.
3. There is no analysis or discussion of computational complexity, which seems to be quite expensive.
4. The error reported in the empirical-evidence section is a bit misleading.
NIPS
Title: Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Abstract: We investigate the exploration of an unknown environment when no reward function is provided. Building on the incremental exploration setting introduced by Lim and Auer [1], we define the objective of learning the set of ε-optimal goal-conditioned policies attaining all states that are incrementally reachable within L steps (in expectation) from a reference state s0. In this paper, we introduce a novel model-based approach that interleaves discovering new states from s0 and improving the accuracy of a model estimate that is used to compute goal-conditioned policies to reach newly discovered states. The resulting algorithm, DisCo, achieves a sample complexity scaling as Õ(L⁵ S_{L+ε} Γ_{L+ε} A ε⁻²), where A is the number of actions, S_{L+ε} is the number of states that are incrementally reachable from s0 in L + ε steps, and Γ_{L+ε} is the branching factor of the dynamics over such states. This improves over the algorithm proposed in [1] in both ε and L at the cost of an extra Γ_{L+ε} factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-reachable states with minimum cost c_min. Finally, we report preliminary empirical results confirming our theoretical findings.

1 Introduction

In cases where the reward signal is not informative enough — e.g., too sparse, time-varying or even absent — a reinforcement learning (RL) agent needs to explore the environment driven by objectives other than reward maximization, see, e.g., [2, 3, 4, 5, 6]. This can be performed by designing intrinsic rewards to drive the learning process, for instance via state visitation counts [7, 8], novelty or prediction errors [9, 10, 11]. Other recent methods perform information-theoretic skill discovery to learn a set of diverse and task-agnostic behaviors [12, 13, 14]. Alternatively, goal-conditioned policies learned by carefully designing the sequence of goals during the learning process are often used to solve sparse-reward problems [15] and a variety of goal-reaching tasks [16, 17, 18, 19]. While the approaches reviewed above effectively leverage deep RL techniques and are able to achieve impressive results in complex domains (e.g., Montezuma's Revenge [15] or real-world robotic manipulation tasks [19]), they often lack substantial theoretical understanding and guarantees. Recently, some unsupervised RL objectives were analyzed rigorously. Some of them quantify how well the agent visits the states under a sought-after frequency, e.g., to induce a maximally entropic state distribution [20, 21, 22, 23]. While such strategies provably mimic their desired behavior via a Frank-Wolfe algorithmic scheme, they may not learn how to effectively reach any state of the environment and thus may not be sufficient to efficiently solve downstream tasks. Another relevant take is the reward-free RL paradigm of [24]: following its exploration phase, the agent is able to compute a near-optimal policy for any reward function at test time. While this framework yields strong end-to-end guarantees, it is limited to the finite-horizon setting and the agent is thus unable to tackle tasks beyond finite-horizon, e.g., goal-conditioned tasks.
In this paper, we build on and refine the setting of incremental exploration of [1]: the agent starts at an initial state s0 in an unknown, possibly large environment, and it is provided with a RESET action to restart at s0. At a high level, in this setting the agent should explore the environment and stop when it has identified the tasks within its reach and learned to master each of them sufficiently well. More specifically, the objective of the agent is to learn a goal-conditioned policy for any state that can be reached from s0 within L steps in expectation; such a state is said to be L-controllable. Lim and Auer [1] address this setting with the UcbExplore method, for which they bound the number of exploration steps required to identify in an incremental way all L-controllable states (i.e., the algorithm needs to define a suitable stopping condition) and to return a set of policies able to reach each of them in at most L + ε steps. A key aspect of UcbExplore is to first focus on simple states (i.e., states that can be reached within a few steps), learn policies to efficiently reach them, and leverage them to identify and tackle states that are increasingly more difficult to reach. This approach aims to avoid wasting exploration in the attempt of reaching states that are further than L steps from s0 or that are too difficult to reach given the limited knowledge available at earlier stages of the exploration process.

Our main contributions are:
• We strengthen the objective of incremental exploration and require the agent to learn ε-optimal goal-conditioned policies for any L-controllable state. Formally, let V*(s) be the length of the shortest path from s0 to s; then the agent needs to learn a policy to navigate from s0 to s in at most V*(s) + ε steps, while in [1] any policy reaching s in at most L + ε steps is acceptable.
• We design DisCo, a novel algorithm for incremental exploration. DisCo relies on an estimate of the transition model to compute goal-conditioned policies to the states observed so far and then uses those policies to improve the accuracy of the model and incrementally discover new states.
• We derive a sample complexity bound for DisCo scaling as¹ Õ(L⁵ S_{L+ε} Γ_{L+ε} A ε⁻²), where A is the number of actions, S_{L+ε} is the number of states that are incrementally controllable from s0 in L + ε steps, and Γ_{L+ε} is the branching factor of the dynamics over such incrementally controllable states. Not only is this sample complexity obtained for a more challenging objective than UcbExplore's, but it also improves in both ε and L at the cost of an extra Γ_{L+ε} factor, which is small in most environments of interest.
• Leveraging the model-based nature of DisCo, we can also readily compute an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-controllable states with minimum cost c_min. This result serves as a goal-conditioned counterpart to the reward-free exploration framework defined by Jin et al. [24] for the finite-horizon setting.

2 Incremental Exploration to Discover and Control

In this section we expand on [1] with a more challenging objective for autonomous exploration.

2.1 L-Controllable States

We consider a reward-free Markov decision process [25, Sect. 8.3] M := ⟨S, A, p, s0⟩.
We assume a finite action space A with A = |A| actions, and a finite, possibly large state space S for which an upper bound S on its cardinality is known, i.e., |S| ≤ S.² Each state-action pair (s, a) ∈ S × A is characterized by an unknown transition probability distribution p(·|s, a) over next states. We denote by Γ_{S′} := max_{s∈S′, a} ‖{p(s′′|s, a)}_{s′′∈S′}‖₀ the largest branching factor of the dynamics over states in any subset S′ ⊆ S. The environment has no extrinsic reward, and s0 ∈ S is a designated initial state. A deterministic stationary policy π : S → A is a mapping from states to actions, and we denote by Π the set of all possible policies. Since in environments with arbitrary dynamics the learner may get stuck in a state without being able to return to s0, we introduce the following assumption.³

¹We say that f(ε) = Õ(ε^α) if there exist constants a, b such that f(ε) ≤ a · ε^α · log^b(ε⁻¹).
²Lim and Auer [1] originally considered a countable, possibly infinite state space; however, this leads to a technical issue in the analysis of UcbExplore (acknowledged by the authors via personal communication and explained in App. E.3), which disappears by considering only finite state spaces.
³This assumption should be contrasted with the finite-horizon setting, where each policy resets automatically after H steps, or with assumptions on the MDP dynamics such as ergodicity or bounded diameter, which guarantee that it is always possible to find a policy navigating between any two states.

Assumption 1. The action space contains a RESET action such that p(s0|s, RESET) = 1 for any s ∈ S.

We make explicit the states where a policy π takes action RESET in the following definition.

Definition 1 (Policy restricted on a subset). For any S′ ⊆ S, a policy π is restricted on S′ if π(s) = RESET for any s ∉ S′. We denote by Π(S′) the set of policies restricted on S′.

We measure the performance of a policy in navigating the MDP as follows.

Definition 2. For any policy π and a pair of states (s, s′) ∈ S², let τ_π(s → s′) be the (random) number of steps it takes to reach s′ starting from s when executing policy π, i.e., τ_π(s → s′) := inf{t ≥ 0 : s_{t+1} = s′ | s_1 = s, π}. We also set v_π(s → s′) := E[τ_π(s → s′)] as the expected traveling time, which corresponds to the value function of policy π in a stochastic shortest-path setting (SSP, [26, Sect. 3]) with initial state s, goal state s′ and unit cost function. Note that we have v_π(s → s′) = +∞ when the policy π does not reach s′ from s with probability 1. Furthermore, for any subset S′ ⊆ S and any state s, we denote by V*_{S′}(s0 → s) := min_{π∈Π(S′)} v_π(s0 → s) the length of the shortest path to s, restricted to policies resetting to s0 from any state outside S′.

The objective of the learning agent is to efficiently control the environment in the vicinity of s0. We say that a state s is controlled if the agent can reliably navigate to it from s0, that is, if there exists an effective goal-conditioned policy — i.e., a shortest-path policy — from s0 to s.

Definition 3 (L-controllable states). Given a reference state s0, we say that a state s is L-controllable if there exists a policy π such that v_π(s0 → s) ≤ L. The set of L-controllable states is then

    S_L := {s ∈ S : min_{π∈Π} v_π(s0 → s) ≤ L}.   (1)
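Def. 2 reduces to a standard first-passage computation: for a fixed policy π, the expected traveling times solve the linear system v(s) = 1 + Σ_{s′′≠goal} p(s′′|s, π(s)) v(s′′) with v(goal) = 0. The sketch below makes this concrete under assumed names and a dictionary-based transition model P as in the earlier sketches; Def. 3 then corresponds to minimizing the resulting value over policies.

```python
# A minimal sketch of Def. 2: solve (I - P_restricted) v = 1 for the expected
# traveling time of a fixed policy pi toward a goal state. Names are assumed.
import numpy as np

def expected_traveling_time(P, pi, states, goal):
    idx = {s: i for i, s in enumerate(s for s in states if s != goal)}
    n = len(idx)
    A_mat, b = np.eye(n), np.ones(n)
    for s, i in idx.items():
        for sn, p in P[(s, pi[s])].items():
            if sn != goal:
                A_mat[i, idx[sn]] -= p
    try:
        v = np.linalg.solve(A_mat, b)   # (near-)singular if goal not reached w.p. 1
    except np.linalg.LinAlgError:
        return {s: float("inf") for s in idx}
    return {s: v[i] for s, i in idx.items()}

# A state s' is L-controllable iff some policy achieves v_pi(s0 -> s') <= L.
```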
We illustrate the concept of controllable states in Fig. 1 for L = 3. Interestingly, in the right figure, the black states are not L-controllable. In fact, there is no policy that can directly choose which one of the black states to reach. On the other hand, the red state, despite being in some sense further from s0 than the black states, does belong to S_L. In general, there is a crucial difference between the existence of a random realization where a state s is reached from s0 in fewer than L steps (i.e., the black states) and the notion of L-controllability, which means that there exists a policy that consistently reaches the state in a number of steps less than or equal to L on average (i.e., the red state). This explains the choice of the term controllable over reachable, since a state s is often said to be reachable if there is a policy π with a non-zero probability of eventually reaching it, which is a weaker requirement. Unfortunately, Lim and Auer [1] showed that in order to discover all the states in S_L, the learner may require a number of exploration steps that is exponential in L or |S_L|.⁴ Intuitively, this negative result is due to the fact that the minimum in Eq. 1 is over the set of all possible policies, including those that may traverse states that are not in S_L. Hence, we similarly constrain the learner to focus on the set of incrementally controllable states.

⁴We refer the reader to [1, Sect. 2.1] for a more formal and complete characterization of this negative result.

Definition 4 (Incrementally controllable states S_L^→). Let ⪯ be some partial order on S. The set S_L^⪯ of states controllable in L steps w.r.t. ⪯ is defined inductively as follows. The initial state s0 belongs to S_L^⪯ by definition, and if there exists a policy π restricted on {s′ ∈ S_L^⪯ : s′ ⪯ s} with v_π(s0 → s) ≤ L, then s ∈ S_L^⪯. The set S_L^→ of incrementally L-controllable states is defined as S_L^→ := ∪_⪯ S_L^⪯, where the union is over all possible partial orders.

By way of illustration, in Fig. 1 for L = 3, it holds that S_L^→ = S_L in the left figure, whereas S_L^→ = {s0} ≠ S_L in the right figure. Indeed, while the red state is L-controllable, it requires traversing the black states, which are not L-controllable.

2.2 AX Objectives

We are now ready to formalize two alternative objectives for Autonomous eXploration (AX) in MDPs.

Definition 5 (AX sample complexity). Fix any length L ≥ 1, error threshold ε > 0 and confidence level δ ∈ (0, 1). The sample complexities C_{AXL}(A, L, ε, δ) and C_{AX*}(A, L, ε, δ) are defined as the number of time steps required by a learning algorithm A to identify a set K ⊇ S_L^→ such that, with probability at least 1 − δ, it has learned a set of policies {π_s}_{s∈K} that respectively verifies the following AX requirement:

    (AXL) ∀s ∈ K, v_{π_s}(s0 → s) ≤ L + ε,
    (AX*) ∀s ∈ K, v_{π_s}(s0 → s) ≤ V*_{S_L^→}(s0 → s) + ε.

Designing agents satisfying the objectives defined above introduces critical difficulties w.r.t. standard goal-directed learning in RL. First, the agent has to find accurate policies for a set of goals (i.e., all incrementally L-controllable states) and not just for one specific goal. On top of this, the set of desired goals itself (i.e., the set S_L^→) is unknown in advance and has to be estimated online. Specifically, AXL is the original objective introduced in [1], and it requires the agent to discover all the incrementally L-controllable states as fast as possible. At the end of the learning process, for each state s ∈ S_L^→ the agent should return a policy that can reach s from s0 in at most L steps (in expectation). Unfortunately, this may correspond to a rather poor performance in practice: consider a state s ∈ S_L^→ such that V*_{S_L^→}(s0 → s) ≪ L, i.e., the shortest path from s0 to s following policies restricted on S_L^→ is much smaller than L; satisfying AXL only guarantees finding a policy that reaches s within L steps, which may be far from the shortest path, whereas AX* requires a near-optimal one.
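Under the (strong) assumption that the true transition model is available, Def. 4 suggests a natural greedy construction of S_L^→: grow a set K from {s0} and add any state whose shortest-path value from s0 under policies restricted on K is at most L, the order of additions playing the role of the partial order. The sketch below reuses build_restricted_ssp and ssp_value_iteration from the earlier sketches (with model exposing the same p_hat interface, here assumed to return the true p); it is an exact-model illustration of what DisCo must accomplish from samples.

```python
# An exact-model reading of Def. 4 (our interpretation, assuming p is known):
# repeatedly add any state controllable within L via policies restricted on K.
def incrementally_controllable(model, states, s0, n_actions, L):
    K = {s0}
    changed = True
    while changed:
        changed = False
        for s in sorted(set(states) - K):
            P, c = build_restricted_ssp(model, K, s, s0, n_actions, states)
            actions_of = lambda q: (range(n_actions) if q in K else (0,))
            u, _ = ssp_value_iteration(P, c, s, states, actions_of)
            if u[s0] <= L:              # s is reachable in <= L expected steps
                K.add(s)
                changed = True
    return K
```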
1. What is the primary contribution of the paper regarding optimizing exploration in MDPs?
2. What are the strengths of the paper, particularly in its relevance to the NeurIPS community?
3. What are the weaknesses of the paper regarding its theoretical analysis and experimental results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: In this paper, the authors are interested in optimizing the exploration of the state space in MDPs when, for example, the reward function is sparse. The aim is to find out which states can be reached in a given average number of time steps, and to compute the associated policies. This work builds on the incremental exploration setting of Lim and Auer [2012], who proposed UcbExplore. After formalizing the framework and the notions of state controllability, the authors propose a new criterion for policy optimization that is stronger than the previous one. They then present their model-based algorithm DisCo, which returns the (incrementally) controllable states and the associated policies. They deduce a bound on the sample complexity that is often better than that of UcbExplore. Finally, the authors present an empirical evaluation of their algorithm and conclude by discussing links with deep-RL methods.

Strengths: This paper is very interesting, well written and well presented. It focuses on autonomous exploration, which is a problem of interest in reinforcement learning, especially when the reward function is sparse. This paper is therefore very relevant to the NeurIPS community.

Weaknesses: My main concerns with this paper are the lack of proof sketches in the main paper for the theoretical results and the limited experimental results. Indeed, the main contributions are Theorem 1 and Corollary 1, so it would be relevant to provide short justifications for them. Finally, the algorithm is only tested on a single environment and compared to UcbExplore for a single value of L (the average number of steps to reach the desired states).
NIPS
Title Improved Sample Complexity for Incremental Autonomous Exploration in MDPs Abstract We investigate the exploration of an unknown environment when no reward function is provided. Building on the incremental exploration setting introduced by Lim and Auer [1], we define the objective of learning the set of "-optimal goal-conditioned policies attaining all states that are incrementally reachable within L steps (in expectation) from a reference state s0. In this paper, we introduce a novel modelbased approach that interleaves discovering new states from s0 and improving the accuracy of a model estimate that is used to compute goal-conditioned policies to reach newly discovered states. The resulting algorithm, DisCo, achieves a sample complexity scaling as e O(LSL+" L+"A " 2), where A is the number of actions, SL+" is the number of states that are incrementally reachable from s0 in L + " steps, and L+" is the branching factor of the dynamics over such states. This improves over the algorithm proposed in [1] in both " and L at the cost of an extra L+" factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an "/cmin-optimal policy for any cost-sensitive shortest-path problem defined on the L-reachable states with minimum cost cmin. Finally, we report preliminary empirical results confirming our theoretical findings. 1 Introduction In cases where the reward signal is not informative enough — e.g., too sparse, time-varying or even absent — a reinforcement learning (RL) agent needs to explore the environment driven by objectives other than reward maximization, see [e.g., 2, 3, 4, 5, 6]. This can be performed by designing intrinsic rewards to drive the learning process, for instance via state visitation counts [7, 8], novelty or prediction errors [9, 10, 11]. Other recent methods perform information-theoretic skill discovery to learn a set of diverse and task-agnostic behaviors [12, 13, 14]. Alternatively, goal-conditioned policies learned by carefully designing the sequence of goals during the learning process are often used to solve sparse reward problems [15] and a variety of goal-reaching tasks [16, 17, 18, 19]. While the approaches reviewed above effectively leverage deep RL techniques and are able to achieve impressive results in complex domains (e.g., Montezuma’s Revenge [15] or real-world robotic manipulation tasks [19]), they often lack substantial theoretical understanding and guarantees. Recently, some unsupervised RL objectives were analyzed rigorously. Some of them quantify how well the agent visits the states under a sought-after frequency, e.g., to induce a maximally entropic state distribution [20, 21, 22, 23]. While such strategies provably mimic their desired behavior via a Frank-Wolfe algorithmic scheme, they may not learn how to effectively reach any state of the environment and thus may not be sufficient to efficiently solve downstream tasks. Another relevant take is the reward-free RL paradigm of [24]: following its exploration phase, the agent is able to 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. compute a near-optimal policy for any reward function at test time. While this framework yields strong end-to-end guarantees, it is limited to the finite-horizon setting and the agent is thus unable to tackle tasks beyond finite-horizon, e.g., goal-conditioned tasks. 
In this paper, we build on and refine the setting of incremental exploration of [1]: the agent starts at an initial state s₀ in an unknown, possibly large environment, and it is provided with a RESET action to restart at s₀. At a high level, in this setting the agent should explore the environment and stop when it has identified the tasks within its reach and learned to master each of them sufficiently well. More specifically, the objective of the agent is to learn a goal-conditioned policy for any state that can be reached from s₀ within L steps in expectation; such a state is said to be L-controllable. Lim and Auer [1] address this setting with the UcbExplore method, for which they bound the number of exploration steps that are required to identify in an incremental way all L-controllable states (i.e., the algorithm needs to define a suitable stopping condition) and to return a set of policies that are able to reach each of them in at most L + ε steps. A key aspect of UcbExplore is to first focus on simple states (i.e., states that can be reached within a few steps), learn policies to efficiently reach them, and leverage them to identify and tackle states that are increasingly more difficult to reach. This approach aims to avoid wasting exploration in the attempt of reaching states that are farther than L steps from s₀ or that are too difficult to reach given the limited knowledge available at earlier stages of the exploration process.

Our main contributions are:
• We strengthen the objective of incremental exploration and require the agent to learn ε-optimal goal-conditioned policies for any L-controllable state. Formally, let V*(s) be the length of the shortest path from s₀ to s; then the agent needs to learn a policy to navigate from s₀ to s in at most V*(s) + ε steps, while in [1] any policy reaching s in at most L + ε steps is acceptable.
• We design DisCo, a novel algorithm for incremental exploration. DisCo relies on an estimate of the transition model to compute goal-conditioned policies to the states observed so far, and then uses those policies to improve the accuracy of the model and incrementally discover new states.
• We derive a sample complexity bound for DisCo scaling as¹ Õ(L⁵ S_{L+ε} Γ_{L+ε} A ε⁻²), where A is the number of actions, S_{L+ε} is the number of states that are incrementally controllable from s₀ in L + ε steps, and Γ_{L+ε} is the branching factor of the dynamics over such incrementally controllable states. Not only is this sample complexity obtained for a more challenging objective than UcbExplore's, but it also improves in both ε and L at the cost of an extra Γ_{L+ε} factor, which is small in most environments of interest.
• Leveraging the model-based nature of DisCo, we can also readily compute an ε/c_min-optimal policy for any cost-sensitive shortest-path problem defined on the L-controllable states with minimum cost c_min. This result serves as a goal-conditioned counterpart to the reward-free exploration framework defined by Jin et al. [24] for the finite-horizon setting.

2 Incremental Exploration to Discover and Control

In this section we expand on [1] with a more challenging objective for autonomous exploration.

2.1 L-Controllable States

We consider a reward-free Markov decision process [25, Sect. 8.3] M := ⟨S, A, p, s₀⟩.
We assume a finite action space A with A = |A| actions, and a finite, possibly large state space S for which an upper bound S on its cardinality is known, i.e., |S| ≤ S.² Each state-action pair (s, a) ∈ S × A is characterized by an unknown transition probability distribution p(·|s, a) over next states. We denote by Γ_{S′} := max_{s∈S′,a} ‖{p(s′|s, a)}_{s′∈S′}‖₀ the largest branching factor of the dynamics over states in any subset S′ ⊆ S. The environment has no extrinsic reward, and s₀ ∈ S is a designated initial state. A deterministic stationary policy π : S → A is a mapping from states to actions, and we denote by Π the set of all possible policies. Since in environments with arbitrary dynamics the learner may get stuck in a state without being able to return to s₀, we introduce the following assumption.³

¹We say that f(ε) = Õ(ε^α) if there are constants a, b such that f(ε) ≤ a · ε^α log^b(1/ε).
²Lim and Auer [1] originally considered a countable, possibly infinite state space; however, this leads to a technical issue in the analysis of UcbExplore (acknowledged by the authors via personal communication and explained in App. E.3), which disappears by considering only finite state spaces.
³This assumption should be contrasted with the finite-horizon setting, where each policy resets automatically after H steps, or with assumptions on the MDP dynamics such as ergodicity or bounded diameter, which guarantee that it is always possible to find a policy navigating between any two states.

Assumption 1. The action space contains a RESET action s.t. p(s₀|s, RESET) = 1 for any s ∈ S.

We make explicit the states where a policy π takes action RESET in the following definition.

Definition 1 (Policy restricted on a subset). For any S′ ⊆ S, a policy π is restricted on S′ if π(s) = RESET for any s ∉ S′. We denote by Π(S′) the set of policies restricted on S′.

We measure the performance of a policy in navigating the MDP as follows.

Definition 2. For any policy π and a pair of states (s, s′) ∈ S², let τ_π(s → s′) be the (random) number of steps it takes to reach s′ starting from s when executing policy π, i.e., τ_π(s → s′) := inf{t ≥ 0 : s_{t+1} = s′ | s₁ = s, π}. We also set v_π(s → s′) := E[τ_π(s → s′)] as the expected traveling time, which corresponds to the value function of policy π in a stochastic shortest-path setting (SSP, [26, Sect. 3]) with initial state s, goal state s′ and unit cost function. Note that we have v_π(s → s′) = +∞ when the policy π does not reach s′ from s with probability 1. Furthermore, for any subset S′ ⊆ S and any state s, we denote by V*_{S′}(s₀ → s) := min_{π∈Π(S′)} v_π(s₀ → s) the length of the shortest path to s, restricted to policies resetting to s₀ from any state outside S′.

The objective of the learning agent is to efficiently control the environment in the vicinity of s₀. We say that a state s is controlled if the agent can reliably navigate to it from s₀, that is, there exists an effective goal-conditioned policy — i.e., a shortest-path policy — from s₀ to s.

Definition 3 (L-controllable states). Given a reference state s₀, we say that a state s is L-controllable if there exists a policy π such that v_π(s₀ → s) ≤ L. The set of L-controllable states is then

S_L := {s ∈ S : min_{π∈Π} v_π(s₀ → s) ≤ L}.   (1)
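To make Definition 2 concrete, here is a minimal sketch (not from the paper) of computing the expected traveling time of a fixed policy with a known model. The dict-of-dicts transition format (p[s][a] maps next states to probabilities), the integer state labels, and the function name are illustrative assumptions, not the authors' implementation.

```python
# A sketch of Definition 2: for a fixed policy pi (a list mapping state -> action)
# and a known model p, the expected traveling time v_pi(s -> goal) is the value
# function of the unit-cost SSP, i.e., the solution of the linear system
# v(s) = 1 + sum_s' p(s'|s, pi[s]) v(s') with v(goal) = 0. This assumes pi reaches
# the goal with probability 1 from every state; otherwise v_pi = +infinity and
# the linear system below is not meaningful.
import numpy as np

def expected_traveling_time(p, pi, goal):
    states = [s for s in range(len(p)) if s != goal]      # non-goal states
    pos = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for s_next, prob in p[s][pi[s]].items():
            if s_next != goal:
                P[pos[s], pos[s_next]] = prob
    v = np.linalg.solve(np.eye(len(states)) - P, np.ones(len(states)))
    return {goal: 0.0, **{s: v[pos[s]] for s in states}}  # v_pi(s -> goal)
```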
We illustrate the concept of controllable states in Fig. 1 for L = 3. Interestingly, in the right figure, the black states are not L-controllable. In fact, there is no policy that can directly choose which one of the black states to reach. On the other hand, the red state, despite being in some sense farther from s₀ than the black states, does belong to S_L. In general, there is a crucial difference between the existence of a random realization where a state s is reached from s₀ in fewer than L steps (i.e., black states) and the notion of L-controllability, which means that there exists a policy that consistently reaches the state in a number of steps less than or equal to L on average (i.e., red state). This explains the choice of the term controllable over reachable, since a state s is often said to be reachable if there is a policy π with a non-zero probability to eventually reach it, which is a weaker requirement.

Unfortunately, Lim and Auer [1] showed that in order to discover all the states in S_L, the learner may require a number of exploration steps that is exponential in L or |S_L|. Intuitively, this negative result is due to the fact that the minimum in Eq. 1 is over the set of all possible policies, including those that may traverse states that are not in S_L.⁴ Hence, we similarly constrain the learner to focus on the set of incrementally controllable states.

Definition 4 (Incrementally controllable states S→_L). Let ≺ be some partial order on S. The set S≺_L of states controllable in L steps w.r.t. ≺ is defined inductively as follows. The initial state s₀ belongs to S≺_L by definition, and if there exists a policy π restricted on {s′ ∈ S≺_L : s′ ≺ s} with v_π(s₀ → s) ≤ L, then s ∈ S≺_L. The set S→_L of incrementally L-controllable states is defined as S→_L := ∪_≺ S≺_L, where the union is over all possible partial orders.

⁴We refer the reader to [1, Sect. 2.1] for a more formal and complete characterization of this negative result.

By way of illustration, in Fig. 1 for L = 3, it holds that S→_L = S_L in the left figure, whereas S→_L = {s₀} ≠ S_L in the right figure. Indeed, while the red state is L-controllable, it requires traversing the black states, which are not L-controllable.

2.2 AX Objectives

We are now ready to formalize two alternative objectives for Autonomous eXploration (AX) in MDPs.

Definition 5 (AX sample complexity). Fix any length L ≥ 1, error threshold ε > 0 and confidence level δ ∈ (0, 1). The sample complexities C_{AX_L}(A, L, ε, δ) and C_{AX*}(A, L, ε, δ) are defined as the number of time steps required by a learning algorithm A to identify a set K ⊇ S→_L such that, with probability at least 1 − δ, it has learned a set of policies {π_s}_{s∈K} that respectively verifies the following AX requirements:

(AX_L) ∀s ∈ K, v_{π_s}(s₀ → s) ≤ L + ε,
(AX*) ∀s ∈ K, v_{π_s}(s₀ → s) ≤ V*_{S→_L}(s₀ → s) + ε.

Designing agents satisfying the objectives defined above introduces critical difficulties w.r.t. standard goal-directed learning in RL. First, the agent has to find accurate policies for a set of goals (i.e., all incrementally L-controllable states) and not just for one specific goal. On top of this, the set of desired goals itself (i.e., the set S→_L) is unknown in advance and has to be estimated online.
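Before moving on, the set S_L of Definition 3 can itself be sketched in code for a known model: value iteration on the unit-cost SSP gives the optimal expected hitting time min_π v_π(s₀ → s) for each candidate goal, and S_L collects the goals whose value is at most L. For Definition 4 one would additionally restrict, at each stage of the inductive construction, the available policies to those resetting outside the already-controllable set. The snippet below is illustrative only and reuses the dict-of-dicts model format of the previous sketch.

```python
# A brute-force sketch of Definition 3 with a known model: value iteration on
# the unit-cost SSP  u(s) = 1 + min_a sum_s' p(s'|s, a) u(s'),  u(goal) = 0.
# Starting from u = 0 the iterates increase monotonically towards the optimal
# expected hitting time, and they diverge for goals that no policy reaches with
# probability 1, so enough iterations push such goals past any threshold L.

def optimal_hitting_times(p, goal, num_actions, iters=5000):
    u = [0.0] * len(p)
    for _ in range(iters):
        u = [0.0 if s == goal else
             1.0 + min(sum(prob * u[s2] for s2, prob in p[s][a].items())
                       for a in range(num_actions))
             for s in range(len(p))]
    return u

def l_controllable_set(p, s0, L, num_actions):
    # S_L of Eq. 1: goals whose optimal expected hitting time from s0 is <= L
    return {s for s in range(len(p))
            if optimal_hitting_times(p, s, num_actions)[s0] <= L}
```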
Specifically, AX_L is the original objective introduced in [1], and it requires the agent to discover all the incrementally L-controllable states as fast as possible.⁵ At the end of the learning process, for each state s ∈ S→_L the agent should return a policy that can reach s from s₀ in at most L steps (in expectation). Unfortunately, this may correspond to a rather poor performance in practice. Consider a state s ∈ S→_L such that V*_{S→_L}(s₀ → s) ≪ L, i.e., the shortest path from s₀ to s following policies restricted on S→_L is much smaller than L. Satisfying AX_L only guarantees that a policy reaching s in L steps is found. On the other hand, objective AX* is more demanding, as it requires learning a near-optimal shortest-path policy for each state in S→_L. Since V*_{S→_L}(s₀ → s) ≤ L and the gap between the two quantities may be arbitrarily large, especially for states close to s₀ and far from the fringe of S→_L, AX* is a significantly tighter objective than AX_L and it is thus preferable in practice.

We say that an exploration algorithm solves the AX problem if its sample complexity C_AX(A, L, ε, δ) in Def. 5 is polynomial in |K|, A, L, ε⁻¹ and log(S). Notice that requiring a logarithmic dependency on the size of S is crucial but nontrivial, since the overall state space may be large and we do not want the agent to waste time trying to reach states that are not L-controllable. The dependency on the (algorithm-dependent and random) set K can always be replaced using the upper bound |K| ≤ |S→_{L+ε}|, which is implied with high probability by both the AX_L and AX* conditions. Finally, notice that the error threshold ε > 0 has a two-fold impact on the performance of the algorithm. First, ε defines the largest set S→_{L+ε} that could be returned by the algorithm: the larger ε, the bigger the set. Second, as ε increases, the quality (in terms of controllability and navigational precision) of the output policies worsens w.r.t. the shortest-path policy restricted on S→_L.

⁵Note that we translated the condition in [1] of a relative error of Lε into an absolute error of ε, to align it with the common formulation of sample complexity in RL.

3 The DisCo Algorithm

The algorithm DisCo — for Discover and Control — is detailed in Alg. 1. It maintains a set K of "controllable" states and a set U of states that are considered "uncontrollable" so far. A state s is tagged as controllable when a policy to reach s in at most L + ε steps (in expectation from s₀) has been found with high confidence, and we denote by π_s such a policy. The states in U have been discovered as potential members of S→_L, but the algorithm has yet to produce a policy that controls any of them in fewer than L + ε steps. The algorithm stores an estimate of the transition model and proceeds through rounds, which are indexed by k and incremented whenever a state in U gets transferred to the set K, i.e., when the transition model reaches a level of accuracy sufficient to compute a policy that controls one of the states encountered before.

Algorithm 1: Algorithm DisCo
Input: Actions A, initial state s₀, confidence parameter δ ∈ (0, 1), error threshold ε > 0, L ≥ 1 and (possibly adaptive) allocation function φ : P(S) → ℕ (where P(S) denotes the power set of S).
1  Initialize k := 0, K₀ := {s₀}, U₀ := {} and a restricted policy π_{s₀} ∈ Π(K₀).
2  Set ε := min{ε, 1} and continue := True.
3  while continue do
4    Set k += 1. // new round
     // ① Sample collection on K
5    For each (s, a) ∈ K_k × A, execute policy π_s until the total number of visits N_k(s, a) to (s, a) satisfies N_k(s, a) ≥ n_k := φ(K_k). For each (s, a) ∈ K_k × A, add s′ ∼ p(·|s, a) to U_k if s′ ∉ K_k.
     // ② Restriction of candidate states U
6    Compute transitions p̂_k(s′|s, a) and W_k := {s′ ∈ U_k : ∃(s, a) ∈ K_k × A, p̂_k(s′|s, a) ≥ (1 − ε/2)/L}.
7    if W_k is empty then
8      Set continue := False. // condition STOP1
9    else
       // ③ Computation of the optimistic policies on K
10     for each state s′ ∈ W_k do
11       Compute (ũ_{s′}, π̃_{s′}) := OVI_SSP(K_k, A, s′, N_k, ε/(6L)), see Alg. 3 in App. D.1.
12     Let s† := argmin_{s∈W_k} ũ_s(s₀) and ũ† := ũ_{s†}(s₀).
13     if ũ† > L then
14       Set continue := False. // condition STOP2
15     else
         // ④ State transfer from U to K
16       Set K_{k+1} := K_k ∪ {s†}, U_{k+1} := U_k \ {s†} and π_{s†} := π̃_{s†}.
   // ⑤ Policy consolidation: computation on the final set K
17 Set K := k.
18 for each state s ∈ K_K do
19   Compute (ũ_s, π̃_s) := OVI_SSP(K_K, A, s, N_K, ε/(6L)).
20 Output: the states s in K_K and their corresponding policy π_s := π̃_s.
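The round structure of Alg. 1 can be compressed into the following Python skeleton. It is a sketch only: collect_samples (step ①) and ovi_ssp (standing in for the optimistic value iteration OVI_SSP of App. D.1, assumed here to return the optimistic value at s₀ together with the greedy policy) are hypothetical black boxes, and all confidence-interval machinery is hidden inside them.

```python
# A sketch of the DisCo loop of Alg. 1; 'collect_samples' and 'ovi_ssp' are
# hypothetical stand-ins for step 1 and for OVI_SSP respectively, not the
# paper's implementation.
def disco(env, num_actions, s0, L, eps, phi, collect_samples, ovi_ssp):
    eps = min(eps, 1.0)
    K, U, policies = {s0}, set(), {s0: None}
    while True:
        n_k = phi(K)                                          # allocation, Eq. 2
        p_hat, N, U = collect_samples(env, K, policies, n_k, U)      # step 1
        W = {sp for sp in U                                   # step 2: prune U
             if any(p_hat.get((s, a), {}).get(sp, 0.0) >= (1 - eps / 2) / L
                    for s in K for a in range(num_actions))}
        if not W:
            break                                             # STOP1
        cand = {sp: ovi_ssp(K, num_actions, sp, N, eps / (6 * L))    # step 3
                for sp in W}
        s_dag = min(cand, key=lambda sp: cand[sp][0])   # most promising goal
        u_dag, pi_dag = cand[s_dag]
        if u_dag > L:
            break                                             # STOP2
        K.add(s_dag); U.discard(s_dag); policies[s_dag] = pi_dag     # step 4
    # step 5: policy consolidation on the final set K
    return {s: ovi_ssp(K, num_actions, s, N, eps / (6 * L))[1] for s in K}
```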
We denote by K_k (resp. U_k) the set of controllable (resp. uncontrollable) states at the beginning of round k. DisCo stops at a round K when it can confidently claim that all the remaining states outside of K_K cannot be L-controllable.

At each round, the algorithm uses all samples observed so far to build an estimate of the transition model, denoted by p̂(s′|s, a) = N(s, a, s′)/N(s, a), where N(s, a) and N(s, a, s′) are counters for state-action and state-action-next-state visitations. Each round is divided into two phases. The first is a sample collection phase. At the beginning of round k, the agent collects additional samples until n_k := φ(K_k) samples are available at each state-action pair in K_k × A (step ①). A key challenge lies in the careful (and adaptive) choice of the allocation function φ, which we report in the statement of Thm. 1 (see Eq. 19 in App. D.4 for its exact definition). Importantly, the incremental construction of K_k entails that sampling at each state s ∈ K_k can be done efficiently. In fact, for all s ∈ K_k the agent has already confidently learned a policy π_s to reach s in at most L + ε steps on average (see how such a policy is computed in the second phase). The generation of transitions (s, a, s′) for (s, a) ∈ K_k × A achieves two objectives at once. First, it serves as a discovery step, since all observed next states s′ not already in K_k are added to U_k — in particular this guarantees sufficient exploration at the fringe (or border) of the set K_k. Second, it improves the accuracy of the model p in the states in K_k, which is essential in computing near-optimal policies and thus fulfilling the AX* condition.

The second phase does not require interacting with the environment and focuses on the computation of optimistic policies. The agent begins by significantly restricting the set of candidate states in each round to alleviate the computational complexity of the algorithm. Namely, among all the states in U_k, it discards those that do not have a high probability of belonging to S→_L by considering a restricted set W_k ⊆ U_k (step ②). In fact, if the estimated probability p̂_k of reaching a state s ∈ U_k from any of the controllable states in K_k is lower than (1 − ε/2)/L, then no shortest-path policy restricted on K_k could get to s from s₀ in fewer than L + ε steps on average. Then for each state s′ in W_k, DisCo computes an optimistic policy restricted on K_k to reach s′. Formally, for any candidate state s′ ∈ W_k, we define the induced stochastic shortest path (SSP) MDP M′_k with goal state s′ as follows.

Definition 6. We define the SSP-MDP M′_k := ⟨S, A′_k(·), c′_k, p′_k⟩ with goal state s′, where the action space is such that A′_k(s) = A for all s ∈ K_k and A′_k(s) = {RESET} otherwise (i.e., we focus on policies restricted on K_k). The cost function is such that for all a ∈ A, c′_k(s′, a) = 0, and for any s ≠ s′, c′_k(s, a) = 1. The transition model is p′_k(s′|s′, a) = 1 and p′_k(·|s, a) = p(·|s, a) otherwise.⁶ The solution of M′_k is the shortest-path policy from s₀ to s′ restricted on K_k.
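Definition 6 translates directly into code. The sketch below reuses the dict-of-dicts model format of the earlier snippets (an illustrative convention, not the paper's); DisCo would instantiate it with the empirical estimate p̂_k rather than the true p, and only over K_k ∪ {goal}.

```python
# A sketch of the induced SSP-MDP M'_k of Definition 6: inside K the full action
# set is available at unit cost; outside K only RESET (back to s0, unit cost);
# every action at the goal is a zero-cost self-loop, making the goal absorbing.
def induced_ssp_mdp(p, K, goal, s0, num_actions, reset_action=0):
    actions, cost, trans = {}, {}, {}
    for s in range(len(p)):
        if s == goal:
            actions[s] = list(range(num_actions))
            for a in actions[s]:
                cost[(s, a)], trans[(s, a)] = 0.0, {goal: 1.0}
        elif s in K:
            actions[s] = list(range(num_actions))   # behaves exactly as in M
            for a in actions[s]:
                cost[(s, a)], trans[(s, a)] = 1.0, p[s][a]
        else:
            actions[s] = [reset_action]              # policies restricted on K
            cost[(s, reset_action)] = 1.0
            trans[(s, reset_action)] = {s0: 1.0}
    return actions, cost, trans
```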
Since p′_k is unknown, DisCo cannot compute the exact solution of M′_k; instead, it executes optimistic value iteration (OVI_SSP) for SSP [27, 28] to obtain a value function ũ_{s′} and its associated greedy policy π̃_{s′} restricted on K_k (see App. D.1 for more details). The agent then chooses a candidate goal state s† for which the value ũ† := ũ_{s†}(s₀) is the smallest. This step can be interpreted as selecting the optimistically most promising new state to control. Two cases are possible. If ũ† ≤ L, then s† is added to K_k (step ④), since the accuracy of the model estimate on the state-action space K_k × A guarantees that the policy π̃_{s†} is able to reach the state s† in fewer than L + ε steps in expectation with high probability (i.e., s† is incrementally (L + ε)-controllable). Otherwise, we can guarantee that S→_L ⊆ K_k with high probability. In the latter case, the algorithm terminates and, using the current estimates of the model, it recomputes an optimistic shortest-path policy π_s restricted on the final set K_K for each state s ∈ K_K (step ⑤). This policy consolidation step is essential to identify near-optimal policies restricted on the final set K_K (and thus on S→_L): indeed, the expansion of the set of so-far controllable states may alter and refine the optimal goal-reaching policies restricted on it (see App. A).

Computational Complexity. Note that algorithmically we do not need to define M′_k (Def. 6) over the whole state space S, as we can limit it to K_k ∪ {s′}, i.e., the candidate state s′ and the set K_k of so-far controllable states. As shown in Thm. 1, this set can be significantly smaller than S. In particular, this implies that the computational complexity of the value iteration algorithm used to compute the optimistic policies is independent of S (see App. D.9 for more details).

4 Sample Complexity Analysis of DisCo

We now present our main result: a sample complexity guarantee for DisCo for the AX* objective, which directly implies that AX_L is also satisfied.

Theorem 1. There exists an absolute constant α > 0 such that for any L ≥ 1, ε ∈ (0, 1], and δ ∈ (0, 1), if we set the allocation function φ as

φ : X → α · ( (L⁴ Θ̂(X)/ε²) log²(LSA/(εδ)) + (L²|X|/ε) log(LSA/(εδ)) ),   (2)

with Θ̂(X) := max_{(s,a)∈X×A} ( Σ_{s′∈X} √(p̂(s′|s, a)(1 − p̂(s′|s, a))) )², then the algorithm DisCo (Alg. 1) satisfies the following sample complexity bound for AX*:

C_{AX*}(DisCo, L, ε, δ) = Õ( L⁵ Γ_{L+ε} S_{L+ε} A / ε² + L³ S²_{L+ε} A / ε ),   (3)

where S_{L+ε} := |S→_{L+ε}| and Γ_{L+ε} := max_{(s,a)∈S→_{L+ε}×A} ‖{p(s′|s, a)}_{s′∈S→_{L+ε}}‖₀ ≤ S_{L+ε} is the maximal support of the transition probabilities p(·|s, a) restricted to the set S→_{L+ε}.

Given the definition of AX*, Thm. 1 implies that DisCo 1) terminates after C_{AX*}(DisCo, L, ε, δ) time steps, 2) discovers a set of states K ⊇ S→_L with |K| ≤ S_{L+ε}, and 3) for each s ∈ K outputs a policy π_s which is ε-optimal w.r.t. policies restricted on S→_L, i.e., v_{π_s}(s₀ → s) ≤ V*_{S→_L}(s₀ → s) + ε. Note that Eq. 3 displays only a logarithmic dependency on S, the total number of states. This property of the sample complexity of DisCo, along with its S-independent computational complexity, is significant when the state space S grows large w.r.t. the unknown set of interest S→_L.

⁶In words, all actions at states in K_k behave exactly as in M and suffer a unit cost; in all states outside K_k only the reset action to s₀ is available, with a unit cost; and all actions at the goal s′ induce a zero-cost self-loop.
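For concreteness, Eq. 2 can be read as code. The snippet below is a sketch only: the absolute constant α is unspecified in the theorem, the log arguments follow the reconstruction above, and the empirical-model format is the illustrative convention of the earlier snippets.

```python
# A sketch of the allocation function phi of Eq. 2, with Theta-hat(X) computed
# from empirical transition estimates p_hat restricted to the set X.
import math

def allocation(p_hat, X, num_actions, L, S, A, eps, delta, alpha=1.0):
    theta_hat = max(
        sum(math.sqrt(q * (1.0 - q))
            for s2, q in p_hat[s][a].items() if s2 in X) ** 2
        for s in X for a in range(num_actions))
    log_term = math.log(L * S * A / (eps * delta))
    return math.ceil(alpha * (L**4 * theta_hat / eps**2 * log_term**2
                              + L**2 * len(X) / eps * log_term))
```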
4.1 Proof Sketch of Theorem 1

While the complete proof is reported in App. D, we now provide the main intuition behind the result.

State Transfer from U to K (step ④). Let us focus on a round k and a state s† ∈ U_k that gets added to K_k. For clarity we drop from the notation the round k, the goal state s† and the starting state s₀. We denote by v and ṽ the value functions of the candidate policy π̃ in the true and optimistic model respectively, and by ũ the quantity w.r.t. which π̃ is optimistically greedy. We aim to prove that s† ∈ S→_{L+ε} (with high probability). The main chain of inequalities underpinning the argument is

v ≤ |v − ṽ| + ṽ ≤⁽ᵃ⁾ ε/2 + ṽ ≤⁽ᵇ⁾ ε/2 + ũ + ε/2 ≤⁽ᶜ⁾ L + ε,   (4)

where (c) is guaranteed by algorithmic construction and (b) stems from the chosen level of value iteration accuracy. Inequality (a) has the flavor of a simulation lemma for SSP, relating the shortest-path value function of the same policy between two models (the true one and the optimistic one). Importantly, when restricted to K these two models are close by virtue of the algorithmic design, which enforces the collection of a minimum amount of samples, denoted by n, at each state-action pair of K × A. Specifically, we obtain that

|v − ṽ| = Õ( √(L⁴ Γ_K / n) + L²|K| / n ),

with Γ_K := max_{(s,a)∈K×A} ‖{p(s′|s, a)}_{s′∈K}‖₀ ≤ |K|. Note that Γ_K is the branching factor restricted to the set K. Our choice of n (given in Eq. 2) is then dictated by the need to upper bound the above quantity by ε/2 in order to satisfy inequality (a). Let us point out that, interestingly yet unfortunately, the structure of the problem does not appear to allow for technical variance-aware improvements seeking to lower the value of n prescribed above (indeed, the AX framework requires analytically encompassing the uncontrollable states U into a single meta-state with higher transitional uncertainty; see App. D for details).

Termination of the Algorithm. Since S→_L is unknown, we have to ensure that none of the states in S→_L are "missed". As such, we prove that with overwhelming probability we have S→_L ⊆ K_K when the algorithm terminates at a round denoted by K. There remains to justify the final near-optimality guarantee w.r.t. the set of policies Π(S→_L). Leveraging the fact that step ⑤ recomputes the policies (π_s)_{s∈K_K} on the final set K_K, we establish the following chain of inequalities

v ≤ |v − ṽ| + ṽ ≤⁽ᵃ⁾ ε/2 + ṽ ≤⁽ᵇ⁾ ε/2 + ũ + ε/2 ≤⁽ᶜ⁾ V*_{K_K} + ε ≤⁽ᵈ⁾ V*_{S→_L} + ε,   (5)

where (a) and (b) are as in Eq. 4, (c) leverages optimism and (d) stems from the inclusion S→_L ⊆ K_K.

Sample Complexity Bound. The choice of the allocation function φ in Eq. 2 bounds n_K, which is the total number of samples required at each state-action pair in K_K × A. We then compute a high-probability bound on the time steps needed to collect a given sample, and show that it scales as Õ(L). Since the sample complexity is solely induced by the sample collection phase (step ①), it can be bounded by the quantity n_K |K_K| A. Putting everything together yields the bound of Thm. 1.

4.2 Comparison with UcbExplore [1]

We start by recalling the critical distinction that DisCo succeeds in tackling problem AX*, while UcbExplore [1] fails to do so (see App. A for details on the AX objectives). Nonetheless, in the following we show that even if we restrict our attention to AX_L, for which UcbExplore is designed, DisCo yields a better sample complexity in most cases. From [1], UcbExplore verifies⁷

C_{AX_L}(UcbExplore, L, ε, δ) = Õ( L⁶ S_{L+ε} A / ε³ ).   (6)

Eq. 6 shows that the sample complexity of UcbExplore is linear in S_{L+ε}, while for DisCo the dependency is somewhat worse.
In the main-order term Õ(1/ε²) of Eq. 3, the bound depends linearly on S_{L+ε} but also grows with the branching factor Γ_{L+ε}, which is not the "global" branching factor Γ_S but rather the number of possible next states in S→_{L+ε} starting from S→_{L+ε}. While in general we only have Γ_{L+ε} ≤ S_{L+ε}, in many practical domains (e.g., robotics, user modeling) each state can only transition to a small number of states, i.e., we often have Γ_{L+ε} = O(1) as long as the dynamics is not too "chaotic". While DisCo does suffer from a quadratic dependency on S_{L+ε} in the second term of order Õ(1/ε), we notice that for any S_{L+ε} ≤ L³ε⁻² the bound of DisCo is still preferable. Furthermore, since S_{L+ε} tends to S_L as ε → 0, the condition is always verified for small enough ε.

⁷Note that if we replace the error of ε for AX_L with an error of Lε as in [1], we recover the sample complexity of Õ(L³ S_{L+ε} A / ε³) stated in [1, Thm. 8].

Compared to DisCo, the sample complexity of UcbExplore is worse in both ε and L. As stressed in Sect. 2.2, the better dependency on ε both improves the quality of the output goal-reaching policies and reduces the number of incrementally (L + ε)-controllable states returned by the algorithm. It is interesting to investigate why the bound of [1] (Eq. 6) inherits a Õ(ε⁻³) dependency. As reviewed in App. E, UcbExplore alternates between two phases of state discovery and policy evaluation. The optimistic policies computed by UcbExplore solve a finite-horizon problem (with horizon set to H_UCB). However, minimizing the expected time to reach a target state is intrinsically an SSP problem, which is exactly what DisCo leverages. By computing policies that solve a finite-horizon problem (note that UcbExplore resets every H_UCB time steps), [1] sets the horizon to H_UCB := ⌈L + L²ε⁻¹⌉, which leads to a policy-evaluation phase with sample complexity scaling as Õ(H_UCB ε⁻²) = Õ(ε⁻³). Since the rollout budget of Õ(ε⁻³) is hard-coded into the algorithm, the dependency on ε of UcbExplore's sample complexity cannot be improved by a more refined analysis; instead, a different algorithmic approach is required, such as the one employed by DisCo.

4.3 Goal-Free Cost-Free Exploration on S→_L with DisCo

A compelling advantage of DisCo is that it achieves an accurate estimation of the environment's dynamics restricted to the unknown subset of interest S→_L. In contrast to UcbExplore, which needs to restart its sample collection from scratch whenever L, ε or some transition costs change, DisCo is thus robust to changes in such problem parameters. At the end of its exploration phase in Alg. 1, DisCo is able to perform zero-shot planning to solve other tasks restricted on S→_L, such as cost-sensitive ones. Indeed, in the following we show how the DisCo agent is able to compute an ε/c_min-optimal policy for any stochastic shortest-path problem on S→_L with goal state s ∈ S→_L (i.e., s is absorbing and zero-cost) and cost function lower bounded by c_min > 0.

Corollary 1. There exists an absolute constant γ > 0 such that for any L ≥ 1, ε ∈ (0, 1] and c_min ∈ (0, 1] verifying ε ≤ γ · (L c_min), with probability at least 1 − δ, for whatever goal state s ∈ S→_L and whatever cost function c in [c_min, 1], DisCo can compute (after its exploration phase, without additional environment interaction) a policy π̂_{s,c} whose SSP value function V_{π̂_{s,c}} verifies

V_{π̂_{s,c}}(s₀ → s) ≤ V*_{S→_L}(s₀ → s) + ε/c_min,

where V_π(s₀ → s) := E[ Σ_{t=1}^{τ_π(s₀→s)} c(s_t, π(s_t)) | s₁ = s₀ ] is the SSP value function of a policy π and V*_{S→_L}(s₀ → s) := min_{π∈Π(S→_L)} V_π(s₀ → s)
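The zero-shot planning behind Corollary 1 can be sketched as plain SSP value iteration on the learned model: build the induced MDP of Definition 6 with the cost function c in place of unit costs (and p̂ in place of p), then iterate the Bellman operator. The snippet below reuses the names of the earlier illustrative sketches and is not the paper's implementation.

```python
# A sketch of the offline planning of Sec. 4.3: given the model learned on K,
# run SSP value iteration with an arbitrary cost c in [c_min, 1] and goal in K.
# 'actions', 'cost', 'trans' are as returned by induced_ssp_mdp above, with the
# cost entries overwritten by c; strictly positive costs ensure convergence.
def ssp_value_iteration(actions, cost, trans, goal, iters=2000):
    V = {s: 0.0 for s in actions}
    for _ in range(iters):
        V = {s: 0.0 if s == goal else
                min(cost[(s, a)] + sum(pr * V[s2]
                                       for s2, pr in trans[(s, a)].items())
                    for a in actions[s])
             for s in actions}
    policy = {s: min(actions[s],
                     key=lambda a: cost[(s, a)] + sum(pr * V[s2]
                          for s2, pr in trans[(s, a)].items()))
              for s in actions}
    return V, policy
```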
is the optimal SSP value function restricted on S→_L.

It is interesting to compare Cor. 1 with the reward-free exploration framework recently introduced by Jin et al. [24] in the finite-horizon setting. At a high level, the result in Cor. 1 can be seen as a counterpart of [24] beyond finite-horizon problems, specifically in the goal-conditioned setting. While the parameter L defines the horizon of interest for DisCo, resetting after every L steps (as in finite-horizon) would prevent the agent from identifying L-controllable states and lead to poor performance. This explains the distinct technical tools used: while [24] executes finite-horizon no-regret algorithms, DisCo deploys SSP policies restricted on the set of states that it "controls" so far. Algorithmically, both approaches seek to build accurate estimates of the transitions on a specific (unknown) state space of interest: the so-called "significant" states within H steps for [24], and the incrementally L-controllable states S→_L for DisCo. Bound-wise, the cost-sensitive AX* problem inherits the critical role of the minimum cost c_min in SSP problems (see App. C and, e.g., [27, 28, 29]), which is reflected in the accuracy of Cor. 1 scaling inversely with c_min. Another interesting element of comparison is the dependency on the size of the state space. While the algorithm introduced in [24] is robust w.r.t. states that can be reached with very low probability, it still displays a polynomial dependency on the total number of states S. On the other hand, DisCo has only a logarithmic dependency on S, while it directly depends on the number of (L + ε)-controllable states, which shows that DisCo effectively adapts to the state space of interest and ignores all other states. This result is significant not only because S_{L+ε} can be arbitrarily smaller than S, but also because the set S→_{L+ε} itself is initially unknown to the algorithm.

5 Numerical Simulation

In this section, we provide the first evaluation of algorithms in the incremental autonomous exploration setting. In the implementation of both DisCo and UcbExplore, we remove the logarithmic and constant terms for simplicity. We also boost the empirical performance of UcbExplore in various ways, for example by considering confidence intervals derived from the empirical Bernstein inequality (see [30]) as opposed to the Hoeffding inequality used in [1]. We refer the reader to App. F for details on the algorithmic configurations and on the environments considered.

We compare the sample complexity empirically achieved by DisCo and UcbExplore. Fig. 2 depicts the time needed to identify all the incrementally L-controllable states when L = 4.5 for different values of ε, on a confusing chain domain. Note that the sample complexity is achieved soon after, when the algorithm can confidently discard all the remaining states as non-controllable (it is reported in Tab. 2 of App. F). We observe that DisCo outperforms UcbExplore for any value of ε. In particular, the gap in performance increases as ε decreases, which matches the theoretical improvement in sample complexity from Õ(ε⁻³) for UcbExplore to Õ(ε⁻²) for DisCo. On a second environment — the combination lock problem introduced in [31] — we notice that DisCo again outperforms UcbExplore, as shown in App. F. Another important feature of DisCo is that it targets the tighter objective AX*, whereas UcbExplore is only able to fulfill objective AX_L and may therefore elect suboptimal policies.
In App. F we show empirically that, as expected from the theory, this directly translates into higher-quality goal-reaching policies recovered by DisCo.

6 Conclusion and Extensions

Connections to existing deep-RL methods. While we primarily focus the analysis of DisCo on the tabular case, we believe that the formal definition of AX problems and the general structure of DisCo may also serve as a theoretical grounding for many recent approaches to unsupervised exploration. For instance, it is interesting to draw a parallel between DisCo and the ideas behind Go-Explore [32]. Go-Explore similarly exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), and (3) then explore from it. Go-Explore assumes that the world is deterministic and resettable, meaning that one can reset the state of the simulator to a previously visited cell. Very recently [15], the same authors proposed a way to relax this requirement by training goal-conditioned policies to reliably return to cells in the archive during the exploration phase. In this paper, we investigated the theoretical dimension of this direction, by provably learning such goal-conditioned policies for the set of incrementally controllable states.

Future work. Interesting directions for future investigation include: 1) deriving a lower bound for the AX problems; 2) integrating DisCo into the meta-algorithm MNM [33], which deals with incremental exploration for AX_L in non-stationary environments; 3) extending the problem to continuous state spaces and function approximation; and 4) relaxing the definition of incrementally controllable states as well as the performance criterion, towards allowing the agent a non-zero but limited sample complexity for learning a shortest-path policy for any state at test time.

Broader Impact

This paper makes contributions to the fundamentals of online learning (RL) and, due to its theoretical nature, we see no ethical or immediate societal consequences of our work.
1. What is the main contribution of the paper, and how does it improve over previous algorithms?
2. What are the strengths of the proposed algorithm, particularly in its ability to efficiently explore states?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and its applicability to general RL problems?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper introduces a new algorithm for efficiently exploring the states that are within an expected L steps from an initial state s_0 and solving the shortest-path problem between those states. It improves over the UcbExplore algorithm, which focused just on the first of those problems with a hard horizon of L rather than an expectation. The new algorithm proceeds by incrementally growing a set of known states in each round, sampling new actions from all of these either to refine the existing shortest-path calculations or to discover unknown states that are feasible, targeting the closest of these for exploration in the next round. Theoretical bounds on the sample complexity are proven and compare favorably to UcbExplore, and a numerical simulation shows the performance translates to empirical gains.

Strengths
The DisCo algorithm is very clever, building on the long history of algorithms that grow a set of known states but also introducing new machinery in the calculations and adapting them to the two-objective problem in this paper. The pseudocode is well laid out, the text description is very intuitive, and the supplemental material was helpful in understanding the nuances of how a subcomponent like OptiVI was used. The theory is novel and compares favorably with UcbExplore, though I think some more caveats may be needed (see below). The empirical results are on a single domain but useful here just to show that the theory translates to a real performance gain.

Weaknesses
Overall I am happy with the paper, but I think there are some areas where the wording can be cleaned up or where claims need to be given with more context or nuance, particularly around the comparison to the UcbExplore bounds and the discussion of the algorithm for general RL beyond stochastic shortest-path problems, as outlined in the detailed sections below.
NIPS
Title Improved Sample Complexity for Incremental Autonomous Exploration in MDPs Abstract We investigate the exploration of an unknown environment when no reward function is provided. Building on the incremental exploration setting introduced by Lim and Auer [1], we define the objective of learning the set of "-optimal goal-conditioned policies attaining all states that are incrementally reachable within L steps (in expectation) from a reference state s0. In this paper, we introduce a novel modelbased approach that interleaves discovering new states from s0 and improving the accuracy of a model estimate that is used to compute goal-conditioned policies to reach newly discovered states. The resulting algorithm, DisCo, achieves a sample complexity scaling as e O(LSL+" L+"A " 2), where A is the number of actions, SL+" is the number of states that are incrementally reachable from s0 in L + " steps, and L+" is the branching factor of the dynamics over such states. This improves over the algorithm proposed in [1] in both " and L at the cost of an extra L+" factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an "/cmin-optimal policy for any cost-sensitive shortest-path problem defined on the L-reachable states with minimum cost cmin. Finally, we report preliminary empirical results confirming our theoretical findings. 1 Introduction In cases where the reward signal is not informative enough — e.g., too sparse, time-varying or even absent — a reinforcement learning (RL) agent needs to explore the environment driven by objectives other than reward maximization, see [e.g., 2, 3, 4, 5, 6]. This can be performed by designing intrinsic rewards to drive the learning process, for instance via state visitation counts [7, 8], novelty or prediction errors [9, 10, 11]. Other recent methods perform information-theoretic skill discovery to learn a set of diverse and task-agnostic behaviors [12, 13, 14]. Alternatively, goal-conditioned policies learned by carefully designing the sequence of goals during the learning process are often used to solve sparse reward problems [15] and a variety of goal-reaching tasks [16, 17, 18, 19]. While the approaches reviewed above effectively leverage deep RL techniques and are able to achieve impressive results in complex domains (e.g., Montezuma’s Revenge [15] or real-world robotic manipulation tasks [19]), they often lack substantial theoretical understanding and guarantees. Recently, some unsupervised RL objectives were analyzed rigorously. Some of them quantify how well the agent visits the states under a sought-after frequency, e.g., to induce a maximally entropic state distribution [20, 21, 22, 23]. While such strategies provably mimic their desired behavior via a Frank-Wolfe algorithmic scheme, they may not learn how to effectively reach any state of the environment and thus may not be sufficient to efficiently solve downstream tasks. Another relevant take is the reward-free RL paradigm of [24]: following its exploration phase, the agent is able to 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. compute a near-optimal policy for any reward function at test time. While this framework yields strong end-to-end guarantees, it is limited to the finite-horizon setting and the agent is thus unable to tackle tasks beyond finite-horizon, e.g., goal-conditioned tasks. 
In this paper, we build on and refine the setting of incremental exploration of [1]: the agent starts at an initial state s0 in an unknown, possibly large environment, and it is provided with a RESET action to restart at s0. At a high level, in this setting the agent should explore the environment and stop when it has identified the tasks within its reach and learned to master each of them sufficiently well. More specifically, the objective of the agent is to learn a goal-conditioned policy for any state that can be reached from s0 within L steps in expectation; such a state is said to be L-controllable. Lim and Auer [1] address this setting with the UcbExplore method for which they bound the number of exploration steps that are required to identify in an incremental way all L-controllable states (i.e., the algorithm needs to define a suitable stopping condition) and to return a set of policies that are able to reach each of them in at most L+ " steps. A key aspect of UcbExplore is to first focus on simple states (i.e., states that can be reached within a few steps), learn policies to efficiently reach them, and leverage them to identify and tackle states that are increasingly more difficult to reach. This approach aims to avoid wasting exploration in the attempt of reaching states that are further than L steps from s0 or that are too difficult to reach given the limited knowledge available at earlier stages of the exploration process. Our main contributions are: • We strengthen the objective of incremental exploration and require the agent to learn "-optimal goal-conditioned policies for any L-controllable state. Formally, let V ?(s) be the length of the shortest path from s0 to s, then the agent needs to learn a policy to navigate from s0 to s in at most V ?(s) + " steps, while in [1] any policy reaching s in at most L+ " steps is acceptable. • We design DisCo, a novel algorithm for incremental exploration. DisCo relies on an estimate of the transition model to compute goal-conditioned policies to the states observed so far and then use those policies to improve the accuracy of the model and incrementally discover new states. • We derive a sample complexity bound for DisCo scaling as1 eO(L5SL+" L+"A " 2), where A is the number of actions, SL+" is the number of states that are incrementally controllable from s0 in L + " steps, and L+" is the branching factor of the dynamics over such incrementally controllable states. Not only is this sample complexity obtained for a more challenging objective than UcbExplore, but it also improves in both " and L at the cost of an extra L+" factor, which is small in most environments of interest. • Leveraging the model-based nature of DisCo, we can also readily compute an "/cmin-optimal policy for any cost-sensitive shortest-path problem defined on the L-controllable states with minimum cost cmin. This result serves as a goal-conditioned counterpart to the reward-free exploration framework defined by Jin et al. [24] for the finite-horizon setting. 2 Incremental Exploration to Discover and Control In this section we expand [1], with a more challenging objective for autonomous exploration. 2.1 L-Controllable States We consider a reward-free Markov decision process [25, Sect. 8.3] M := hS,A, p, s0i. 
We assume a finite action space A with A = |A| actions, and a finite, possibly large state space S for which an upper bound S on its cardinality is known, i.e., |S| S.2 Each state-action pair (s, a) 2 S ⇥A is characterized by an unknown transition probability distribution p(·|s, a) over next states. We denote by S0 := maxs2S0,ak{p(s0|s, a)}s02S0k0 the largest branching factor of the dynamics over states in any subset S 0 ✓ S . The environment has no extrinsic reward, and s0 2 S is a designated initial state. A deterministic stationary policy ⇡ : S ! A is a mapping between states to actions and we denote by ⇧ the set of all possible policies. Since in environments with arbitrary dynamics the learner may get stuck in a state without being able to return to s0, we introduce the following assumption.3 1We say that f(") = eO("↵) if there are constants a, b, such that f(") a · "↵ logb " . 2Lim and Auer [1] originally considered a countable, possibly infinite state space; however this leads to a technical issue in the analysis of UcbExplore (acknowledged by the authors via personal communication and explained in App. E.3), which disappears by considering only finite state spaces. 3This assumption should be contrasted with the finite-horizon setting, where each policy resets automatically after H steps, or assumptions on the MDP dynamics such as ergodicity or bounded diameter, which guarantee that it is always possible to find a policy navigating between any two states. Assumption 1. The action space contains a RESET action s.t. p(s0|s, RESET) = 1 for any s 2 S . We make explicit the states where a policy ⇡ takes action RESET in the following definition. Definition 1 (Policy restricted on a subset). For any S 0 ✓ S, a policy ⇡ is restricted on S 0 if ⇡(s) = RESET for any s /2 S 0. We denote by ⇧(S 0) the set of policies restricted on S 0. We measure the performance of a policy in navigating the MDP as follows. Definition 2. For any policy ⇡ and a pair of states (s, s0) 2 S2, let ⌧⇡(s ! s0) be the (random) number of steps it takes to reach s0 starting from s when executing policy ⇡, i.e., ⌧⇡(s ! s0) := inf{t 0 : st+1 = s0 | s1 = s,⇡}. We also set v⇡(s ! s0) := E[⌧⇡(s ! s0)] as the expected traveling time, which corresponds to the value function of policy ⇡ in a stochastic shortest-path setting (SSP, [26, Sect. 3]) with initial state s, goal state s0 and unit cost function. Note that we have v⇡(s ! s0) = +1 when the policy ⇡ does not reach s0 from s with probability 1. Furthermore, for any subset S 0 ✓ S and any state s, we denote by V ?S0(s0 ! s) := min ⇡2⇧(S0) v⇡(s0 ! s), the length of the shortest path to s, restricted to policies resetting to s0 from any state outside S 0. The objective of the learning agent is to control efficiently the environment in the vicinity of s0. We say that a state s is controlled if the agent can reliably navigate to it from s0, that is, there exists an effective goal-conditioned policy — i.e., a shortest-path policy — from s0 to s. Definition 3 (L-controllable states). Given a reference state s0, we say that a state s is L-controllable if there exists a policy ⇡ such that v⇡(s0 ! s) L. The set of L-controllable states is then SL := {s 2 S : min ⇡2⇧ v⇡(s0 ! s) L}. (1) We illustrate the concept of controllable states in Fig. 1 for L = 3. Interestingly, in the right figure, the black states are not L-controllable. In fact, there is no policy that can directly choose which one of the black states to reach. 
On the other hand, the red state, despite being in some sense further from s0 than the black states, does belong to SL. In general, there is a crucial difference between the existence of a random realization where a state s is reached from s0 in less than L steps (i.e., black states) and the notion of L-controllability, which means that there exists a policy that consistently reaches the state in a number of steps less or equal than L on average (i.e., red state). This explains the choice of the term controllable over reachable, since a state s is often said to be reachable if there is a policy ⇡ with a non-zero probability to eventually reach it, which is a weaker requirement. Unfortunately, Lim and Auer [1] showed that in order to discover all the states in SL, the learner may require a number of exploration steps that is exponential in L or |SL|. Intuitively, this negative result is due to the fact that the minimum in Eq. 1 is over the set of all possible policies, including those that may traverse states that are not in SL.4 Hence, we similarly constrain the learner to focus on the set of incrementally controllable states. Definition 4 (Incrementally controllable states S! L ). Let be some partial order on S. The set S L of states controllable in L steps w.r.t. is defined inductively as follows. The initial state s0 4We refer the reader to [1, Sect. 2.1] for a more formal and complete characterization of this negative result. belongs to S L by definition and if there exists a policy ⇡ restricted on {s0 2 S L : s0 s} with v⇡(s0 ! s) L, then s 2 S L . The set S!L of incrementally L-controllable states is defined as S! L := [ S L , where the union is over all possible partial orders. By way of illustration, in Fig. 1 for L = 3, it holds that S! L = SL in the left figure, whereas S! L = {s0} 6= SL in the right figure. Indeed, while the red state is L-controllable, it requires traversing the black states, which are not L-controllable. 2.2 AX Objectives We are now ready to formalize two alternative objectives for Autonomous eXploration (AX) in MDPs. Definition 5 (AX sample complexity). Fix any length L 1, error threshold " > 0 and confidence level 2 (0, 1). The sample complexities CAXL(A, L, ", ) and CAX?(A, L, ", ) are defined as the number of time steps required by a learning algorithm A to identify a set K ◆ S! L such that with probability at least 1 , it has learned a set of policies {⇡s}s2K that respectively verifies the following AX requirement (AXL) 8s 2 K, v⇡s(s0 ! s) L+ ", (AX?) 8s 2 K, v⇡s(s0 ! s) V ?S! L (s0 ! s) + ". Designing agents satisfying the objectives defined above introduces critical difficulties w.r.t. standard goal-directed learning in RL. First, the agent has to find accurate policies for a set of goals (i.e., all incrementally L-controllable states) and not just for one specific goal. On top of this, the set of desired goals itself (i.e., the set S! L ) is unknown in advance and has to be estimated online. Specifically, AXL is the original objective introduced in [1] and it requires the agent to discover all the incrementally L-controllable states as fast as possible.5 At the end of the learning process, for each state s 2 S! L the agent should return a policy that can reach s from s0 in at most L steps (in expectation). Unfortunately, this may correspond to a rather poor performance in practice. Consider a state s 2 S! L such that V ?S! L (s0 ! s) ⌧ L, i.e., the shortest path between s0 to s following policies restricted on S! L is much smaller than L. 
Satisfying AXL only guarantees that a policy reaching s in L steps is found. On the other hand, objective AX? is more demanding, as it requires learning a near-optimal shortest-path policy for each state in S! L . Since V ?S! L (s0 ! s) L and the gap between the two quantities may be arbitrarily large, especially for states close to s0 and far from the fringe of S! L , AX? is a significantly tighter objective than AXL and it is thus preferable in practice. We say that an exploration algorithm solves the AX problem if its sample complexity CAX(A, L, ", ) in Def. 5 is polynomial in |K|, A, L, " 1 and log(S). Notice that requiring a logarithmic dependency on the size of S is crucial but nontrivial, since the overall state space may be large and we do not want the agent to waste time trying to reach states that are not L-controllable. The dependency on the (algorithmic-dependent and random) set K can be always replaced using the upper bound |K| |S! L+"|, which is implied with high probability by both AXL and AX? conditions. Finally, notice that the error threshold " > 0 has a two-fold impact on the performance of the algorithm. First, " defines the largest set S! L+" that could be returned by the algorithm: the larger ", the bigger the set. Second, as " increases, the quality (in terms of controllability and navigational precision) of the output policies worsens w.r.t. the shortest-path policy restricted on S! L . 3 The DisCo Algorithm The algorithm DisCo — for Discover and Control — is detailed in Alg. 1. It maintains a set K of “controllable” states and a set U of states that are considered “uncontrollable” so far. A state s is tagged as controllable when a policy to reach s in at most L + " steps (in expectation from s0) has been found with high confidence, and we denote by ⇡s such policy. The states in U are states that have been discovered as potential members of S! L , but the algorithm has yet to produce a policy to control any of them in less than L + " steps. The algorithm stores an estimate of the transition model and it proceeds through rounds, which are indexed by k and incremented whenever a state in U gets transferred to the set K, i.e., when the transition model reaches a level of accuracy sufficient 5Note that we translated in the condition in [1] of a relative error of L" to an absolute error of ", to align it with the common formulation of sample complexity in RL. Algorithm 1: Algorithm DisCo Input: Actions A, initial state s0, confidence parameter 2 (0, 1), error threshold " > 0, L 1 and (possibly adaptive) allocation function : P(S) ! N (where P(S) denotes the power set of S). 1 Initialize k := 0, K0 := {s0}, U0 := {} and a restricted policy ⇡s0 2 ⇧(K0). 2 Set " := min{", 1} and continue := True. 3 while continue do 4 Set k += 1. //new round // ¨ Sample collection on K 5 For each (s, a) 2 Kk ⇥A, execute policy ⇡s until the total number of visits Nk(s, a) to (s, a) satisfies Nk(s, a) nk := (Kk). For each (s, a) 2 Kk ⇥A, add s0 ⇠ p(·|s, a) to Uk if s0 /2 Kk. // ≠ Restriction of candidate states U 6 Compute transitions bpk(s0|s, a) and Wk := n s0 2 Uk : 9(s, a) 2 Kk ⇥A, bpk(s0|s, a) 1 "/2L o · 7 if Wk is empty then 8 Set continue := False. //condition STOP1 9 else // Æ Computation of the optimistic policies on K 10 for each state s0 2 Wk do 11 Compute (eus0 , e⇡s0) := OVISSP(Kk,A, s0, Nk, "6L ), see Alg. 3 in App. D.1. 12 Let s† := argmins2Wk eus(s0) and eu † := eus†(s0). 13 if eu† > L then 14 Set continue := False. 
//condition STOP2 15 else // Ø State transfer from U to K 16 Set Kk+1 := Kk [ {s†}, Uk+1 := Uk \ {s†} and ⇡s† := e⇡s† . // ∞ Policy consolidation: computation on the final set K 17 Set K := k. 18 for each state s 2 KK do 19 Compute (eus, e⇡s) := OVISSP(KK ,A, s,NK , "6L ). 20 Output: the states s in KK and their corresponding policy ⇡s := e⇡s. to compute a policy to control one of the states encountered before. We denote by Kk (resp.Uk) the set of controllable (resp. uncontrollable) states at the beginning of round k. DisCo stops at a round K when it can confidently claim that all the remaining states outside of KK cannot be L-controllable. At each round, the algorithm uses all samples observed so far to build an estimate of the transition model denoted by bp(s0|s, a) = N(s, a, s0)/N(s, a), where N(s, a) and N(s, a, s0) are counters for state-action and state-action-next state visitations. Each round is divided into two phases. The first is a sample collection phase. At the beginning of round k, the agent collects additional samples until nk := (Kk) samples are available at each state-action pair in Kk ⇥A (step ¨). A key challenge lies in the careful (and adaptive) choice of the allocation function , which we report in the statement of Thm. 1 (see Eq. 19 in App. D.4 for its exact definition). Importantly, the incremental construction of Kk entails that sampling at each state s 2 Kk can be done efficiently. In fact, for all s 2 Kk the agent has already confidently learned a policy ⇡s to reach s in at most L+ " steps on average (see how such policy is computed in the second phase). The generation of transitions (s, a, s0) for (s, a) 2 Kk ⇥A achieves two objectives at once. First, it serves as a discovery step, since all observed next states s0 not in Uk are added to it — in particular this guarantees sufficient exploration at the fringe (or border) of the set Kk. Second, it improves the accuracy of the model p in the states in Kk, which is essential in computing near-optimal policies and thus fulfilling the AX? condition. The second phase does not require interacting with the environment and it focuses on the computation of optimistic policies. The agent begins by significantly restricting the set of candidate states in each round to alleviate the computational complexity of the algorithm. Namely, among all the states in Uk, it discards those that do not have a high probability of belonging to S! L by considering a restricted set Wk ✓ Uk (step ≠). In fact, if the estimated probability bpk of reaching a state s 2 Uk from any of the controllable states in Kk is lower than (1 "/2)/L, then no shortest-path policy restricted on Kk could get to s from s0 in less than L+ " steps on average. Then for each state s0 in Wk, DisCo computes an optimistic policy restricted on Kk to reach s0. Formally, for any candidate state s0 2 Wk, we define the induced stochastic shortest path (SSP) MDP M 0 k with goal state s0 as follows. Definition 6. We define the SSP-MDP M 0 k := hS,A0 k (·), c0 k , p0 k i with goal state s0, where the action space is such that A0 k (s) = A for all s 2 Kk and A0k(s) = {RESET} otherwise (i.e., we focus on policies restricted on Kk). The cost function is such that for all a 2 A, c0k(s0, a) = 0, and for any s 6= s0, c0 k (s, a) = 1. The transition model is p0 k (s0|s0, a) = 1 and p0 k (·|s, a) = p(·|s, a) otherwise.6 The solution of M 0 k is the shortest-path policy from s0 to s0 restricted on Kk. 
Since p0k is unknown, DisCo cannot compute the exact solution of M 0 k , but instead, it executes optimistic value iteration (OVISSP) for SSP [27, 28] to obtain a value function eus0 and its associated greedy policy e⇡s0 restricted on Kk (see App. D.1 for more details). The agent then chooses a candidate goal state s† for which the value eu† := eus†(s0) is the smallest. This step can be interpreted as selecting the optimistically most promising new state to control. Two cases are possible. If eu† L, then s† is added to Kk (step Ø), since the accuracy of the model estimate on the state-action space Kk ⇥ A guarantees that the policy e⇡s† is able to reach the state s† in less than L + " steps in expectation with high probability (i.e., s† is incrementally (L + ")-controllable). Otherwise, we can guarantee that S! L ✓ Kk with high probability. In the latter case, the algorithm terminates and, using the current estimates of the model, it recomputes an optimistic shortest-path policy ⇡s restricted on the final set KK for each state s 2 KK (step ∞). This policy consolidation step is essential to identify near-optimal policies restricted on the final set KK (and thus on S! L ): indeed the expansion of the set of the so far controllable states may alter and refine the optimal goal-reaching policies restricted on it (see App. A). Computational Complexity. Note that algorithmically, we do not need to define M 0 k (Def. 6) over the whole state space S as we can limit it to Kk [ {s0}, i.e., the candidate state s0 and the set Kk of so far controllable states. As shown in Thm. 1, this set can be significantly smaller than S . In particular this implies that the computational complexity of the value iteration algorithm used to compute the optimistic policies is independent from S (see App. D.9 for more details). 4 Sample Complexity Analysis of DisCo We now present our main result: a sample complexity guarantee for DisCo for the AX? objective, which directly implies that AXL is also satisfied. Theorem 1. There exists an absolute constant ↵ > 0 such that for any L 1, " 2 (0, 1], and 2 (0, 1), if we set the allocation function as : X ! ↵ · L4b⇥(X ) "2 log2 ✓ LSA " ◆ + L2|X | " log ✓ LSA " ◆! , (2) with b⇥(X ) := max(s,a)2X⇥A P s02X p bp(s0|s, a)(1 bp(s0|s, a)) 2 , then the algorithm DisCo (Alg. 1) satisfies the following sample complexity bound for AX? CAX?(DisCo, L, ", ) = eO ✓ L5 L+"SL+"A "2 + L3S2 L+"A " ◆ , (3) where SL+" := |S!L+"| and L+" := max (s,a)2S! L+"⇥A k{p(s0|s, a)}s02S! L+" k0 SL+" is the maximal support of the transition probabilities p(·|s, a) restricted to the set S! L+". Given the definition of AX?, Thm. 1 implies that DisCo 1) terminates after CAX?(DisCo, L, ", ) time steps, 2) discovers a set of states K ◆ S! L with |K| SL+", 3) and for each s 2 K outputs a policy ⇡s which is "-optimal w.r.t. policies restricted on S!L , i.e., v⇡s(s0 ! s) V ?S! L (s0 ! s) + ". Note that Eq. 3 displays only a logarithmic dependency on S, the total number of states. This property on the sample complexity of DisCo, along with its S-independent computational complexity, is significant when the state space S grows large w.r.t. the unknown set of interest S! L . 6In words, all actions at states in Kk behave exactly as in M and suffer a unit cost, in all states outside Kk only the reset action to s0 is available with a unit cost, and all actions at the goal s0 induce a zero-cost self-loop. 4.1 Proof Sketch of Theorem 1 While the complete proof is reported in App. 
4.1 Proof Sketch of Theorem 1

While the complete proof is reported in App. D, we now provide the main intuition behind the result.

State Transfer from U to K (step ③). Let us focus on a round k and a state s† ∈ U_k that gets added to K_k. For clarity we drop from the notation the round k, the goal state s† and the starting state s₀. We denote by v and ṽ the value functions of the candidate policy π̃ in the true and optimistic model respectively, and by ũ the quantity w.r.t. which π̃ is optimistically greedy. We aim to prove that s† ∈ S→_{L+ε} (with high probability). The main chain of inequalities underpinning the argument is

v ≤ |v − ṽ| + ṽ ≤(a) ε/2 + ṽ ≤(b) ε/2 + (ũ + ε/2) ≤(c) L + ε,   (4)

where (c) is guaranteed by algorithmic construction and (b) stems from the chosen level of value iteration accuracy. Inequality (a) has the flavor of a simulation lemma for SSP, relating the shortest-path value function of the same policy between two models (the true one and the optimistic one). Importantly, when restricted to K these two models are close by virtue of the algorithmic design, which enforces the collection of a minimum amount of samples at each state-action pair of K × A, denoted by n. Specifically, we obtain that

|v − ṽ| = Õ( √(L⁴ Γ_K / n) + L² |K| / n ),

with Γ_K := max_{(s,a)∈K×A} ‖{p(s′|s, a)}_{s′∈K}‖₀ ≤ |K|. Note that Γ_K is the branching factor restricted to the set K. Our choice of n (given in Eq. 2) is then dictated to upper bound the above quantity by ε/2 in order to satisfy inequality (a). Let us point out that, interestingly yet unfortunately, the structure of the problem does not appear to allow for technical variance-aware improvements seeking to lower the value of n prescribed above (indeed the AX framework requires to analytically encompass the uncontrollable states U into a single meta-state with higher transitional uncertainty, see App. D for details).

Termination of the Algorithm. Since S→_L is unknown, we have to ensure that none of the states in S→_L are "missed". As such, we prove that with overwhelming probability, we have S→_L ⊆ K_K when the algorithm terminates at a round denoted by K. It remains to justify the final near-optimality guarantee w.r.t. the set of policies Π(S→_L). Leveraging that step ④ recomputes the policies (π_s)_{s∈K_K} on the final set K_K, we establish the following chain of inequalities

v ≤ |v − ṽ| + ṽ ≤(a) ε/2 + ṽ ≤(b) ε/2 + (ũ + ε/2) ≤(c) V*_{K_K} + ε ≤(d) V*_{S→_L} + ε,   (5)

where (a) and (b) are as in Eq. 4, (c) leverages optimism and (d) stems from the inclusion S→_L ⊆ K_K.

Sample Complexity Bound. The choice of allocation function φ in Eq. 2 bounds n_K, which is the total number of samples required at each state-action pair in K_K × A. We then compute a high-probability bound on the time steps needed to collect a given sample, and show that it scales as Õ(L). Since the sample complexity is solely induced by the sample collection phase (step ①), it can be bounded by the quantity n_K |K_K| A. Putting everything together yields the bound of Thm. 1.

4.2 Comparison with UcbExplore [1]

We start by recalling the critical distinction that DisCo succeeds in tackling problem AX*, while UcbExplore [1] fails to do so (see App. A for details on the AX objectives). Nonetheless, in the following we show that even if we restrict our attention to AX_L, for which UcbExplore is designed, DisCo yields a better sample complexity in most of the cases. From [1], UcbExplore verifies

C_{AX_L}(UcbExplore, L, ε, δ) = Õ( L⁶ S_{L+ε} A / ε³ ).   (6)

(Note that if we replace the error of ε for AX_L with an error of Lε as in [1], we recover the sample complexity of Õ(L³ S_{L+ε} A / ε³) stated in [1, Thm. 8].) Eq. 6 shows that the sample complexity of UcbExplore is linear in S_{L+ε}, while for DisCo the dependency is somewhat worse; the back-of-the-envelope comparison below makes the trade-off concrete.
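The following computation is ours, not the paper's: it plugs made-up values of L, A, S_{L+ε} and Γ_{L+ε} into the leading terms of Eq. 3 and Eq. 6, dropping constants and logarithmic factors just as the Õ(·) notation does.

```python
# Illustrative only: leading terms of Eq. 3 (DisCo) vs. Eq. 6 (UcbExplore).
def disco_bound(L, S, Gamma, A, eps):
    return L**5 * Gamma * S * A / eps**2 + L**3 * S**2 * A / eps

def ucbexplore_bound(L, S, A, eps):
    return L**6 * S * A / eps**3

L, A, S_Leps, Gamma = 5, 4, 50, 3   # sparse dynamics: Gamma = O(1) << S_Leps
for eps in (0.5, 0.1, 0.05):
    d = disco_bound(L, S_Leps, Gamma, A, eps)
    u = ucbexplore_bound(L, S_Leps, A, eps)
    print(f"eps={eps}: DisCo ~ {d:.2e}, UcbExplore ~ {u:.2e}")
# The 1/eps^3 term makes UcbExplore's bound deteriorate much faster as eps
# shrinks, matching the discussion that follows.
```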
In the main-order term Õ(1/ε²) of Eq. 3, the bound depends linearly on S_{L+ε} but also grows with the branching factor Γ_{L+ε}, which is not the "global" branching factor but denotes the number of possible next states in S→_{L+ε} starting from S→_{L+ε}. While in general we only have Γ_{L+ε} ≤ S_{L+ε}, in many practical domains (e.g., robotics, user modeling), each state can only transition to a small number of states, i.e., we often have Γ_{L+ε} = O(1) as long as the dynamics is not too "chaotic". While DisCo does suffer from a quadratic dependency on S_{L+ε} in the second term of order Õ(1/ε), we notice that for any S_{L+ε} ≤ L³ ε⁻² the bound of DisCo is still preferable. Furthermore, since for ε → 0, S_{L+ε} tends to S_L, the condition is always verified for small enough ε. Compared to DisCo, the sample complexity of UcbExplore is worse in both ε and L. As stressed in Sect. 2.2, the better dependency on ε both improves the quality of the output goal-reaching policies and reduces the number of incrementally (L + ε)-controllable states returned by the algorithm. It is interesting to investigate why the bound of [1] (Eq. 6) inherits a Õ(ε⁻³) dependency. As reviewed in App. E, UcbExplore alternates between two phases of state discovery and policy evaluation. The optimistic policies computed by UcbExplore solve a finite-horizon problem (with horizon set to H_UCB). However, minimizing the expected time to reach a target state is intrinsically an SSP problem, which is exactly what DisCo leverages. By computing policies that solve a finite-horizon problem (note that UcbExplore resets every H_UCB time steps), [1] sets the horizon to H_UCB := ⌈L + L² ε⁻¹⌉, which leads to a policy-evaluation phase with sample complexity scaling as Õ(H_UCB ε⁻²) = Õ(ε⁻³). Since the rollout budget of Õ(ε⁻³) is hard-coded into the algorithm, the dependency on ε of UcbExplore's sample complexity cannot be improved by a more refined analysis; instead a different algorithmic approach is required, such as the one employed by DisCo.

4.3 Goal-Free Cost-Free Exploration on S→_L with DisCo

A compelling advantage of DisCo is that it achieves an accurate estimation of the environment's dynamics restricted to the unknown subset of interest S→_L. In contrast to UcbExplore, which needs to restart its sample collection from scratch whenever L, ε or some transition costs change, DisCo can thus be robust to changes in such problem parameters. At the end of its exploration phase in Alg. 1, DisCo is able to perform zero-shot planning to solve other tasks restricted on S→_L, such as cost-sensitive ones. Indeed, in the following we show how the DisCo agent is able to compute an ε/c_min-optimal policy for any stochastic shortest-path problem on S→_L with goal state s ∈ S→_L (i.e., s is absorbing and zero-cost) and cost function lower bounded by c_min > 0.

Corollary 1. There exists an absolute constant β > 0 such that for any L ≥ 1, ε ∈ (0, 1] and c_min ∈ (0, 1] verifying ε ≤ β · L c_min, with probability at least 1 − δ, for whatever goal state s ∈ S→_L and whatever cost function c in [c_min, 1], DisCo can compute (after its exploration phase, without additional environment interaction) a policy π̂_{s,c} whose SSP value function V_{π̂_{s,c}} verifies

V_{π̂_{s,c}}(s₀ → s) ≤ V*_{S→_L}(s₀ → s) + ε / c_min,

where V_π(s₀ → s) := E[ Σ_{t=1}^{τ_π(s₀→s)} c(s_t, π(s_t)) | s₁ = s₀ ] is the SSP value function of a policy π and V*_{S→_L}(s₀ → s) := min_{π∈Π(S→_L)} V_π(s₀ → s) is the optimal SSP value function restricted on S→_L.
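As a rough illustration of the zero-shot planning behind Cor. 1 (again our own sketch with hypothetical names, reusing `TabularModel` from above): once exploration is over, a new cost function can be planned for offline with plain value iteration on the learned model. The paper's analysis relies on an optimistic variant and carefully handles the probability mass leaking outside K; both refinements are elided here.

```python
def zero_shot_ssp(model, K, actions, cost, goal, iters=2000, tol=1e-6):
    """Offline VI for an SSP with a new cost function c in [c_min, 1];
    no further environment interaction is needed (cf. Cor. 1)."""
    assert goal in K, "Cor. 1 plans toward a goal inside the learned set"
    states = list(K)
    V = {s: 0.0 for s in states}
    policy = {s: None for s in states}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s == goal:
                continue                        # absorbing, zero-cost goal
            best_val, best_a = float("inf"), None
            for a in actions:
                # expected cost-to-go under the estimated model; mass leaking
                # outside K is ignored in this simplified sketch
                q = cost(s, a) + sum(model.p_hat(s, a, sn) * V[sn]
                                     for sn in states)
                if q < best_val:
                    best_val, best_a = q, a
            delta = max(delta, abs(best_val - V[s]))
            V[s], policy[s] = best_val, best_a
        if delta < tol:
            break
    return V, policy
```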
It is interesting to compare Cor. 1 with the reward-free exploration framework recently introduced by Jin et al. [24] in finite-horizon. At a high level, the result in Cor. 1 can be seen as a counterpart of [24] beyond finite-horizon problems, specifically in the goal-conditioned setting. While the parameter L defines the horizon of interest for DisCo, resetting after every L steps (as in finite-horizon) would prevent the agent from identifying L-controllable states and lead to poor performance. This explains the distinct technical tools used: while [24] executes finite-horizon no-regret algorithms, DisCo deploys SSP policies restricted on the set of states that it "controls" so far. Algorithmically, both approaches seek to build accurate estimates of the transitions on a specific (unknown) state space of interest: the so-called "significant" states within H steps for [24], and the incrementally L-controllable states S→_L for DisCo. Bound-wise, the cost-sensitive AX* problem inherits the critical role of the minimum cost c_min in SSP problems (see App. C and, e.g., [27, 28, 29]), which is reflected in the accuracy of Cor. 1 scaling inversely with c_min. Another interesting element of comparison is the dependency on the size of the state space. While the algorithm introduced in [24] is robust w.r.t. states that can be reached with very low probability, it still displays a polynomial dependency on the total number of states S. On the other hand, DisCo has only a logarithmic dependency on S, while it directly depends on the number of (L + ε)-controllable states, which shows that DisCo effectively adapts to the state space of interest and ignores all other states. This result is significant not only because S_{L+ε} can be arbitrarily smaller than S, but also because the set S→_{L+ε} itself is initially unknown to the algorithm.

5 Numerical Simulation

In this section, we provide the first evaluation of algorithms in the incremental autonomous exploration setting. In the implementation of both DisCo and UcbExplore, we remove the logarithmic and constant terms for simplicity. We also boost the empirical performance of UcbExplore in various ways, for example by considering confidence intervals derived from the empirical Bernstein inequality (see [30]) as opposed to Hoeffding as done in [1]. We refer the reader to App. F for details on the algorithmic configurations and on the environments considered. We compare the sample complexity empirically achieved by DisCo and UcbExplore. Fig. 2 depicts the time needed to identify all the incrementally L-controllable states when L = 4.5 for different values of ε, on a confusing chain domain. Note that the sample complexity is achieved soon after, when the algorithm can confidently discard all the remaining states as non-controllable (it is reported in Tab. 2 of App. F). We observe that DisCo outperforms UcbExplore for any value of ε. In particular, the gap in performance increases as ε decreases, which matches the theoretical improvement in sample complexity from Õ(ε⁻³) for UcbExplore to Õ(ε⁻²) for DisCo. On a second environment — the combination lock problem introduced in [31] — we notice that DisCo again outperforms UcbExplore, as shown in App. F. Another important feature of DisCo is that it targets the tighter objective AX*, whereas UcbExplore is only able to fulfill objective AX_L and may therefore elect suboptimal policies.
In App. F we show empirically that, as expected theoretically, this directly translates into higher-quality goal-reaching policies recovered by DisCo.

6 Conclusion and Extensions

Connections to existing deep-RL methods. While we primarily focus the analysis of DisCo in the tabular case, we believe that the formal definition of AX problems and the general structure of DisCo may also serve as a theoretical grounding of many recent approaches to unsupervised exploration. For instance, it is interesting to draw a parallel between DisCo and the ideas behind Go-Explore [32]. Go-Explore similarly exploits the following principles: (1) remember states that have previously been visited, (2) first return to a promising state (without exploration), (3) then explore from it. Go-Explore assumes that the world is deterministic and resettable, meaning that one can reset the state of the simulator to a previous visit to that cell. Very recently [15], the same authors proposed a way to relax this requirement by training goal-conditioned policies to reliably return to cells in the archive during the exploration phase. In this paper, we investigated the theoretical dimension of this direction, by provably learning such goal-conditioned policies for the set of incrementally controllable states.

Future work. Interesting directions for future investigation include: 1) Deriving a lower bound for the AX problems; 2) Integrating DisCo into the meta-algorithm MNM [33] which deals with incremental exploration for AX_L in non-stationary environments; 3) Extending the problem to continuous state space and function approximation; 4) Relaxing the definition of incrementally controllable states and relaxing the performance definition towards allowing the agent to have a non-zero but limited sample complexity of learning a shortest-path policy for any state at test time.

Broader Impact

This paper makes contributions to the fundamentals of online learning (RL) and due to its theoretical nature, we see no ethical or immediate societal consequence of our work.
1. What is the focus and contribution of the paper regarding pure exploration in MDPs? 2. What are the strengths of the proposed approach, particularly in its performance measure and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its practical applicability and performance bounds? 4. Do you have any concerns about the algorithm's ability to handle large MDPs or its immediate practicality? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes an algorithm for pure exploration in MDPs. It gives an asymptotic performance guarantee, which shows performance is better in terms of the maximum navigation time (L) and the tolerance epsilon compared to an existing guarantee for UcbExplore. Update after reviewer discussion: thanks for clarifying my points! Strengths A. Pure exploration in MDPs is a very relevant problem. In many practical cases, we either want to solve several versions of an MDP with different rewards but the same transition kernels, or else the reward is much more expensive to obtain than access to simulated transitions. B. I agree with the authors that the performance measure AX* is more natural than AXL - we do pure exploration to get efficient policies, not policies that get us to the goal eventually. C. The paper is VERY well-written. Weaknesses This work does not have any major weaknesses. I list a couple of minor points below. A. Performance in the table-lookup setting The immediate practical applicability of the paper remains limited. In small tabular MDPs, where this work can be applied directly, the performance difference shown in Figure 2 isn't huge (although I respect the authors for putting the figure in - honest evaluation of claims makes the paper stronger). B. Practical applicability in large MDPs In practical cases where we have MDPs with huge or infinite state spaces, we would have to construct some kind of a "latent MDP" to profitably apply the algorithm. Constructing this "latent MDP" well is probably a harder problem than pure exploration in a tabular MDP. I know that covering this use case would be very hard in a single paper. However, you do mention Montezuma's Revenge in the intro so I was kind of expecting a discussion along these lines. Having said that, what the algorithm does, it does really well, so I don't think this is a major problem in terms of the score. C. How tight are the bounds? Theorem 1 is stated in the tilde-O notation, which ignores constants a, b and lower-order terms. Is it possible to evaluate the bounds in practice? It would be useful to discuss how far the derived bounds are from actual performance on the example given in Section 5.
NIPS
Title Introspective Distillation for Robust Question Answering

Abstract Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones.

1 Introduction

[Figure 1: Recent debiasing methods achieve high OOD accuracy with the sacrifice of ID accuracy. Our proposed IntroD makes the best of both worlds. Axes: OOD accuracy (VQA-CP v2 test) vs. ID accuracy (VQA v2 val); plotted methods: baselines UpDn [4] and S-MRL [6]; debiasing methods LMH [11], CFVQA [25], CSS [7]; ours CFVQA+IntroD, CSS+IntroD, LMH+IntroD.]

Question answering (QA), which requires machines to answer questions given a context, is one of the most fundamental AI tasks. Popular contexts are vision (e.g., image for VQA [5]) and natural language (e.g., passage for extractive QA [27]). A common observation is that QA models prefer to over-exploit the training bias, which bypasses the context comprehension for a shortcut answer. For example, by only using the linguistic correlations between questions and answers, VQA models can answer most questions correctly [16, 2, 5, 20]. Similarly, extractive QA models may use the spurious positional cues to locate the answer in the passage [22]. As a result, QA models that have already achieved strong in-distribution (ID) performance may inevitably fail in out-of-distribution (OOD) test scenarios, regardless of the scale of training data and models [14, 22, 37]. Recently, several debiasing methods aim to close the gap between the ID and OOD performances [6, 11, 7, 25]. However, many of them hold the assumption that the training and test distributions are very different or even reversed, e.g., if there are more "yes" answers in training, there must be more "no" answers in testing. As a result, these methods encounter a severe performance drop under the ID evaluation, although they significantly outperform non-debiasing baselines in terms of OOD performance. An interesting observation from Figure 1 is that non-debiasing methods (circles) obtain high ID but low OOD performance, while debiasing methods (squares) achieve high OOD but low ID performance. This observation motivates us to ask: can we make the best of both worlds?

[Figure 2: three example cases, each panel showing a question type and the corresponding answer distributions.]

In this paper, we take a step forward to building robust QA models that achieve strong performances in both ID and OOD evaluations. We point out that if the model is over-exploiting the bias in one world, the performance in the other one would be significantly degraded.
Therefore, the "best of both" model should be fair with the inductive bias in either world. To this end, we present a simple yet effective training paradigm—Introspective Distillation (IntroD)—to blend the inductive bias of both worlds fairly. Suppose that we have two expert teacher models: ID-teacher and OOD-teacher, each of which captures the ID or OOD inductive bias and represents the corresponding world. Figure 2 illustrates three cases of how an introspective student learns from the two very different teachers. Case 1: if ID-bias > OOD-bias, then ID-teacher < OOD-teacher. ID inductive bias dominates the learning, and the student should listen more to OOD-teacher. This case occurs when ID-teacher has a low training loss while OOD-teacher has a high one. As shown in Figure 2 (a), it is hard for QA models to conclude whether the oven is electric or not without additional context. Due to the inductive bias in the training data, i.e., most questions starting with "is" are answered by "yes", ID-teacher concludes with over-confidence while OOD-teacher does not. Case 2: if ID-bias < OOD-bias, then ID-teacher > OOD-teacher. OOD inductive bias dominates the learning, and the student should listen more to ID-teacher. This case occurs when ID-teacher has a high training loss while OOD-teacher has a low one. As shown in Figure 2 (c), there are at least two older men, one in a blue shirt selling fruits and one in a white shirt walking in the crowd. Therefore, both "blue" and "white" should be correct. However, as most training questions starting with "what color" are labeled with the "white" answer, the bias of "OOD should be different from ID" forces OOD-teacher to downplay "white" unfairly while ID-teacher does not. Case 3: if ID ≈ OOD, then ID-teacher ≈ OOD-teacher. Learning is fair and the student should listen to both teachers equally. This case occurs when the training losses of the two are close. As shown in Figure 2 (b), the ID-teacher and OOD-teacher produce similar predictions. The above introspection can be represented as a blended knowledge of the two teachers, which is distilled to the student model [18]. Yet, an unsolved challenge is how to obtain the "oracle" teachers, especially the OOD-teacher, because the OOD distribution is unseen in training, let alone training a teacher model on it. Thanks to the recent causality-based approach [25], we can approximate the OOD-teacher using a causal model that imagines the unseen world by counterfactual reasoning. Without loss of generality, we take visual QA and extractive QA as case studies. Experiments on VQA-CP [2], VQA v2 [16], and SQuAD [27] validate the effectiveness of our proposed IntroD. Interestingly, extensive ablations demonstrate that the success of IntroD indeed comes from the causal introspection and not from a simple ensemble.

2 Related Work

Visual Question Answering (VQA) [5, 3, 16] is to answer a question given a visual context, i.e., an image. Traditional VQA models are found to exploit the language priors in the training data [16, 2, 20]. For example, in the first version of the VQA dataset, VQA v1.0, about 40% of the sports-related questions are answered as "tennis". Although utilizing the shortcut bias helps with the in-distribution (ID) performance, the out-of-distribution (OOD) one is severely hurt [2].
In order to mitigate the language bias, recent methods propose to utilize extra annotations for accurate visual grounding [28, 33], generate synthetic data for data augmentation [7, 1, 14, 30, 31], modify language modules [19, 23], or explicitly formulate and exclude the language prior [6, 11, 25]. These methods obtain significant OOD improvement on the VQA-CP [2] dataset, whose answer distributions in training and testing are reversed. However, the OOD improvement is achieved at the cost of a severe ID performance drop. Therefore, it is still a challenge to achieve strong performances in both ID and OOD evaluations.

Extractive Question Answering (extractive QA) is to answer a question given a natural language context, i.e., a passage [27]. Extractive QA assumes that the answer is always located in the passage, and further reduces the generative QA task to a classification task, i.e., position prediction. Recent years have witnessed many influential works [35, 29, 10, 39, 12, 38, 9]. However, directly predicting the answer positions has a severe side effect, i.e., correlating answers with positions [22]. For example, if a language model is trained on a biased dataset where answers are always located in the first sentence of the passage, the model will tend to ground the answer in the first sentence. Recently, a new variant of the reading comprehension dataset SQuAD [27] was proposed to evaluate whether language models are robust to the position bias [22]. Similar to VQA, the answer position distribution is skewed in the training set. In this paper, we follow Ko et al. [22] to evaluate the robustness of extractive QA models.

Ensemble-based methods for debiasing explicitly formulate and exclude the shortcut bias in the training data [6, 11, 7, 25, 8]. The shortcut bias can be captured by a separate branch [6] or statistical priors [11]. These methods are further interpreted as causality-based approaches [25]. However, most of these methods achieve promising performance under the out-of-distribution (OOD) evaluation but sacrifice the performance under the in-distribution (ID) evaluation. The reason is that these methods hold an assumption that the training and test distributions are very different or even reversed. In this paper, we implement our ID-teacher and OOD-teacher using the causality-based methods, and further achieve a good trade-off between ID and OOD evaluations. Previous OOD-teachers, i.e., causality-based methods, only generate the OOD-prediction for debiased inference and ignore the role of the ID-prediction. We further point out that the ID-prediction is crucial in introspecting the training process and achieving a good trade-off between ID performance and OOD performance.

Knowledge Distillation was first proposed for model compression by transferring the teacher's knowledge to a small student model [18, 15]. The idea of knowledge distillation has been further extended to establish debiasing models in natural language understanding (NLU) tasks [32, 13] and long-tail classification [34, 42, 17]. The idea of "introspection" is related to "self distillation", which considers a student model itself as the teacher for the next training epoch or stage [24, 41, 21, 36, 40]. Although our introspection and self distillation share a similar idea of "self-teaching", they are fundamentally different: the latter is still in-distribution and has no comparative reasoning about the seen factual and unseen counterfactual.
This difference reveals the key reason why introspection introduces new blended knowledge rather than just an old copy. Also, different from traditional knowledge distillation methods that use a fixed weight as a hyper-parameter, our IntroD weights the models based on the introspective weights, which does not require a careful selection of hyper-parameters.

3 Introspective Distillation

We present a simple yet effective training paradigm, Introspective Distillation (IntroD), to achieve a good trade-off between the in-distribution (ID) and out-of-distribution (OOD) performances for robust QA. Given a visual or natural language context C=c and a question Q=q as input, the QA model generates an answer A=a. Generally, the model is prototyped not as a generation task but as a multi-class classification, to reduce the prediction space, i.e., a ∈ A. For VQA [5], the context refers to an image, and the answers are selected from a pre-defined candidate set. For extractive QA [27], the context refers to a passage, and the answers are locations in it. Our IntroD aims to blend the ID and OOD inductive bias fairly. As illustrated in Figure 3, it consists of three key parts: 1) a causal teacher for capturing the ID and OOD inductive bias, 2) introspection for blending the two different inductive biases, and 3) distillation for a robust student model.

3.1 ID-Teacher and OOD-Teacher

We expect ID-teacher and OOD-teacher to delineate the ID and OOD worlds, respectively. However, without access to the OOD distribution, it is difficult to obtain the "oracle" OOD-teacher. Thanks to the recently proposed causality-based method [25], OOD-teacher can be approximated by counterfactual reasoning. Also, ID-teacher can be approximated using the same causal model by factual reasoning. We briefly introduce the key concepts of the causal method below, and encourage readers to refer to Niu et al. [25] for more details. The causal QA models formulate the causal relations between the input {Q, C} and the output A. The ID inductive bias is formulated as the direct effect of inputs on the output, e.g., the language prior in VQA as Q→A and the position bias in extractive QA as C→A. Compared to traditional QA models that can only conduct factual reasoning to formulate the seen ID world, the causal QA models can also imagine the unseen OOD world by counterfactual reasoning. Therefore, we can implement ID-teacher and OOD-teacher using the same causal model. By factual reasoning, the causal QA model predicts the answers as P^ID, which includes the ID inductive bias in the total causal effect. By counterfactual reasoning, the causal QA model explicitly estimates the direct causal effect to exclude the inductive bias, and generates the counterfactual predictions P^OOD, i.e., the total indirect effect [25] or natural indirect effect [6, 11], which reflect the unseen OOD world. The training of the ID and OOD teachers strictly follows their corresponding methods. The teacher model is trained with the standard cross-entropy loss on the ID data, and we do not separately train the ID and OOD teachers.

3.2 Introspection of Inductive Bias

Introspection first examines whether the model over-exploits the inductive bias of either the ID or the OOD world, and then blends the ID and OOD inductive bias fairly. If the inductive bias of one world dominates the learning, we expect the student model to learn more from the other world for debiasing. This raises two questions: how to define "dominate" and "more". In other words, how to introspect and how to weight the bias.
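Before answering these two questions, it may help to see how a single causal teacher (Sec. 3.1) can emit both predictions. The snippet below is a deliberately loose sketch in the spirit of the ensemble-based causal methods [6, 11, 25], not the exact CF-VQA formulation: it additively fuses a (V, Q) branch with a question-only shortcut branch, keeping the shortcut for the factual (ID) prediction and dropping it for the counterfactual (OOD) one. The fusion choice and the variable names are our assumptions.

```python
import torch.nn.functional as F

def causal_teacher_predictions(logits_vq, logits_q):
    """logits_vq: logits of the full vision-and-question branch;
    logits_q: logits of the question-only shortcut branch.
    Returns (P_ID, P_OOD): the factual prediction with the shortcut effect
    kept, and the counterfactual prediction with the direct Q->A effect
    removed."""
    p_id = F.softmax(logits_vq + logits_q, dim=-1)   # total effect (ID world)
    p_ood = F.softmax(logits_vq, dim=-1)             # shortcut excluded (OOD)
    return p_id, p_ood
```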
Introspecting the bias. We introspect the effect of inductive bias by comparing the predictions of ID-teacher and OOD-teacher. If the inductive bias dominates the learning of a sample, ID-teacher's confidence (i.e., predicted probability) on the ground-truth answers would be much larger than that of OOD-teacher. We denote the confidence as:

s^ID = Σ_{a∈A_GT} P^ID(a),  s^OOD = Σ_{a∈A_GT} P^OOD(a),   (1)

where A_GT denotes the set of ground-truth answers (the number of answers can be one for single-label classification or multiple for multi-label classification). These scores reflect how well the training sample is matched with the inductive bias. The introspection is realized by comparing s^ID and s^OOD. If s^ID > s^OOD, we think the sample's learning is dominated by the ID inductive bias (see Figure 2 (a)), and vice versa (see Figure 2 (c)). Note that the cross entropy between the ground-truth answers and predictions, XE, is inversely proportional to the confidence. Therefore, we can also use the standard cross-entropy loss to define the matching scores s^ID and s^OOD:

s^ID = 1 / XE(P^GT, P^ID) = 1 / ( Σ_{a∈A} −P^GT(a) log P^ID(a) ),
s^OOD = 1 / XE(P^GT, P^OOD) = 1 / ( Σ_{a∈A} −P^GT(a) log P^OOD(a) ),   (2)

where P^GT denotes the ground-truth labels. We empirically found that the cross-entropy loss achieves more stable improvements compared to the confidence in the implementation (see Table 3).

Weighting the bias. We blend the ID and OOD knowledge by a weighted sum. The purpose of knowledge blending is to mix the ID and OOD inductive bias fairly. If the learning is biased toward one world, the model may suffer from over-exploiting the corresponding inductive bias. As illustrated in Figure 2 (a), it is difficult to judge whether the oven is electric or not without external knowledge. However, ID-teacher is over-confident in its prediction due to over-exploitation of the training answer distribution, i.e., s^ID > s^OOD. In this case, the model should learn less from ID-teacher. We realize this by increasing the weight of OOD-knowledge w^OOD and decreasing the weight of ID-knowledge w^ID, i.e., w^ID < w^OOD. Similarly, for the training samples that are over-confidently predicted by OOD-teacher (see Figure 2 (c)), i.e., s^ID < s^OOD, we set w^ID > w^OOD. We determine the knowledge weights by setting them inversely proportional to the matching scores, i.e., w ∝ s⁻¹. The weights are normalized to lie between 0 and 1:

w^ID = (s^ID)⁻¹ / ( (s^ID)⁻¹ + (s^OOD)⁻¹ ) = s^OOD / (s^ID + s^OOD),  w^OOD = 1 − w^ID = s^ID / (s^ID + s^OOD).   (3)

We take VQA as an example to show how the distribution of knowledge weights reflects the effect of the inductive bias, i.e., the language prior. Recall that VQA v2 [16] is proposed to balance the answer distribution to remove the language bias, while VQA-CP v2 [2] is proposed to evaluate whether VQA models memorize the language priors. As a result, the VQA v2 train split contains little language bias, while the bias in VQA-CP v2 is artificially severe. Figure 4 illustrates the distribution of w^ID on the two training sets using CF-VQA [25] as the causal teacher. It can be clearly observed that the distributions of w^ID are totally different, which exactly reflects how the data bias affects the training process. Note that a small w^ID indicates a high ID-bias. Here are three interesting observations:

• The w^ID of most samples is around 0.5 for both of the datasets. This indicates that most of the samples are learned unbiasedly and predicted fairly (e.g., Figure 2 (b)).
• Both of the distributions are left-skewed.
In particular, only 4% of the samples have w^ID larger than 0.6, while the ratio for w^ID < 0.4 is 40% on VQA-CP v2 and 25% on VQA v2. The reason is that ID-teacher is directly optimized on the ID data, while OOD-teacher is indirectly approximated. Therefore, ID-teacher outperforms OOD-teacher on the seen ID data in most cases, i.e., w^ID < 0.5.
• A spike lies at the left side of the VQA-CP v2 distribution. In particular, 9.6% of the samples have w^ID lower than 0.05, while the ratio is only 0.4% on VQA v2. Also, the difference between the percentages becomes larger as w^ID decreases for w^ID < 0.5. This observation indicates that VQA models tend to exploit the training bias on the imbalanced VQA-CP v2 dataset but not on the balanced one. Recall that the VQA-CP training set is artificially modified to "encourage" the models to learn from the language prior. Without the memorized priors, VQA models cannot answer the questions confidently or correctly in a few extreme cases (e.g., Figure 2 (a)).

We also define a stochastic hard variant to weigh the bias:

w^ID = 1 if s^ID ≤ s^OOD, and w^ID = 0 otherwise.   (4)

The hard weighting forces the student to entirely learn from the OOD teacher for most of the training samples to maintain its OOD performance. In practice, one may choose the soft or hard variant based on the trade-off between ID and OOD performances. We empirically use the soft variant for strong OOD-teachers and the hard variant for weak ones that achieve relatively lower OOD performance. Based on the knowledge weights, the ID-knowledge and OOD-knowledge are blended as:

P^T = w^ID · ID-Knowledge + w^OOD · OOD-Knowledge.   (5)

Considering that the ID ground-truth labels P^GT are more accurate than the ID-predictions P^ID, we use P^GT as the "oracle" ID-Knowledge. Since the OOD distribution is unobserved in training, it is impossible to obtain the oracle OOD-Knowledge. Thanks to the causal teacher, we can use the OOD-prediction P^OOD to approximate the OOD-knowledge.

3.3 Distillation of Fair Knowledge

After obtaining the blended fair knowledge from the causal teacher, we train a student model in a knowledge distillation manner [18]:

L = KL(P^T, P^S) = Σ_{a∈A} P^T(a) log( P^T(a) / P^S(a) ),   (6)

where P^S denotes the output of the student model. The difference between the teacher model and the student model is their architectures. The student model is simply the baseline model, e.g., UpDn [4] for VQA and BERT [12] for extractive QA. Besides the baseline model, the teacher model ensembles a separate branch to formulate the shortcut bias, e.g., Q→A for VQA and C→A for extractive QA. Therefore, the student is more efficient in both parameters and inference speed compared to the causal teacher model. We fix the causal teacher and only update the student model during distillation. A code sketch of the full introspection-and-distillation pipeline follows.
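As announced above, here is a minimal PyTorch sketch of the introspection-and-distillation pipeline (Eqs. 2-6). It assumes batched probability tensors of shape (batch, num_answers); the variable names are ours, and the released implementation may differ in details.

```python
import torch
import torch.nn.functional as F

def introd_targets(p_id, p_ood, p_gt, hard=False, eps=1e-12):
    """Blend the teacher knowledge (Eqs. 2-5). `p_gt` is the ground-truth label
    distribution, used as the 'oracle' ID-knowledge; `p_ood` approximates the
    OOD-knowledge."""
    # Eq. 2: matching scores as inverse cross-entropies
    xe_id = -(p_gt * torch.log(p_id + eps)).sum(dim=-1)
    xe_ood = -(p_gt * torch.log(p_ood + eps)).sum(dim=-1)
    s_id, s_ood = 1.0 / (xe_id + eps), 1.0 / (xe_ood + eps)
    if hard:                                   # Eq. 4: hard variant
        w_id = (s_id <= s_ood).float()
    else:                                      # Eq. 3: soft variant
        w_id = s_ood / (s_id + s_ood)
    w_id = w_id.unsqueeze(-1)                  # per-sample weight
    return w_id * p_gt + (1.0 - w_id) * p_ood  # Eq. 5: blended teacher P^T

def distillation_loss(student_logits, p_teacher, eps=1e-12):
    """Eq. 6: KL(P^T || P^S) between the blended target and the student."""
    log_p_s = F.log_softmax(student_logits, dim=-1)
    kl = (p_teacher * (torch.log(p_teacher + eps) - log_p_s)).sum(dim=-1)
    return kl.mean()
```

The student is then trained on this loss alone, with the causal teacher frozen, as described above.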
4 Experiments

We take visual QA and extractive QA, two representative QA tasks, as examples to evaluate our proposed Introspective Distillation (IntroD); code is available at https://github.com/yuleiniu/introd.

4.1 Visual QA

Dataset. We conducted experiments on the benchmark datasets VQA v2 [16] and VQA-CP v2 [2]. VQA v2 is a balanced VQA dataset that significantly reduces the language bias. For each question in the dataset, VQA v2 has two different answers for two different images. VQA-CP v2 is a variant of VQA v2 designed to evaluate whether the model answers the questions by simply memorizing the language priors. VQA-CP v2 reverses the priors between the training and validation splits. For example, most of the "what sports" questions are answered as "tennis" in the training set but as "baseball" in the test set.

Metric and setting. The standard evaluation metric for VQA is accuracy. In order to evaluate the robustness of VQA methods, we conducted experiments in two settings: the in-distribution (ID) setting and the out-of-distribution (OOD) setting. For the ID setting, we report the results on the VQA v2 val set. For the OOD setting, we report the results on the VQA-CP v2 test set. For the VQA-CP dataset, we also followed Teney et al. [31] and held out 8k samples from the training set as the val set for ID evaluation. We further report the harmonic mean (HM) of the accuracies on the VQA-CP v2 test and VQA v2 val sets. We use this metric to evaluate the trade-off between ID and OOD evaluations.

Methods. According to the causal explanation [25], we implemented the counterfactual teacher as RUBi [6], LMH [11], CSS [7] and CF-VQA [25]. In particular, the earlier works RUBi and LMH used the natural indirect effect (NIE) [26] for inference. CSS is a variant of LMH that generates counterfactual training samples for data augmentation. CF-VQA proposed to use the total indirect effect (TIE) [26] for debiasing, and improved RUBi by replacing NIE with TIE. We denote this variant as RUBi-CF. Following previous works, we used UpDn [4] and S-MRL [6] as the backbone. Based on the debiasing ability, we used the soft variant of the weights for LMH, CSS, RUBi-CF and CF-VQA, and the hard variant for RUBi (see Table 5). More training details are in the appendix.

Overall results. Tables 1 and 2 show how our proposed IntroD strengthens the existing causal models. First, according to the HM metric, IntroD improves the trade-off ability of all the causal teachers. In particular, CSS+IntroD achieves an accuracy of over 60% in both the ID and OOD settings, the only combination to do so. Second, looking closely at the OOD evaluation, IntroD shows a competitive debiasing ability. Surprisingly, IntroD even slightly increases the OOD performance of the causal teachers except for LMH. Third, looking closely at the ID evaluation, IntroD outperforms RUBi by 0.7% and the other teachers by over 2.4%. The biggest winners are LMH and CSS, which suffer from a significant drop in ID performance; their gains in ID performance exceed 5.5%. Similar conclusions can be drawn from Table 2. Furthermore, IntroD with CF-VQA obtains higher ID performance (63.40%) than the baseline S-MRL (63.12%), achieving the best of both the ID and OOD worlds. These results demonstrate the effectiveness of our proposed IntroD on top of different causal VQA models. Also, the results indicate that the OOD approximation has an impact on the OOD performance of the students. Overall, the OOD performance of the student is proportional to that of the teacher, while there is no clue whether the student's ID performance is correlated with that of the OOD-teacher. As shown in Table 1, CSS+IntroD with the best OOD teacher CSS (58.95%) achieves the highest accuracy (60.17%) among all students on the VQA-CP v2 test set. Also, IntroD increases the OOD performance of CSS by 1.22%, while the improvement over CF-VQA is much slighter (0.12%). The student achieves an even decreased accuracy over the comparatively weakest LMH (−0.70%).

Ablation studies. We further conducted ablation studies to evaluate the introspection and distillation strategy.
We compared the alternatives with ID-teacher and OOD-teacher, i.e., the factual and counterfactual predictions of the same causal model. The ablations aimed to answer the following questions. Note that Q1 concerns "introspecting the bias", Q2-Q5 concern "weighting the bias", and Q6 and Q7 concern "distillation of fair knowledge" in Section 3.

Q1: Can we use the predicted probability of the ground-truth answer ("Prob." for short) as the matching scores? Better not. As shown in Table 3, although using "Prob." achieves even better ID performance than ID-teacher, the OOD performance drops by ~7% compared to LMH and 4.5% compared to CSS. As a result, the trade-off metric HM decreases with LMH, and increases only marginally (<1%) with CF-VQA and CSS.

Q2: Can the student learn more from the more accurate teacher, i.e., setting w ∝ s? No. This is a natural question because we hope to learn the best from the best. Unfortunately, this alternative ("Weight Avg." for short) enhances the inductive bias rather than reducing it. As shown in Table 4, the alternative "Weight Avg." achieves the best ID performance on top of different causal teachers, even beating ID-teacher. However, the students fail to learn the debiasing ability from the OOD-teachers and achieve much lower OOD performance than the OOD-teachers. This observation verifies that the "best" here should refer to the debiasing ability against the inductive bias rather than the fitting ability.

Q3: Can the student learn equally from the ID and OOD teachers, i.e., setting w^ID = w^OOD = 0.5? No. This alternative can be regarded as a simple average ensemble ("Simple Avg." for short) of the ID and OOD teachers. As shown in Table 4, similar to Q2, the students outperform the ID-teachers on the ID evaluation at the sacrifice of OOD performance compared to the OOD-teachers. Besides, there is a large gap between "Simple Avg." and our IntroD with different causal models, e.g., >2% for LMH and CF-VQA, and ~5% for CSS. This observation indicates that our IntroD is not just a simple ensemble method that combines two teacher models into a bigger one.

Q4: Can the student learn only from OOD-teacher? Yes, but worse than IntroD. This alternative can be called counterfactual distillation ("CFD" for short), as the student model only learns from the counterfactual teacher. As shown in Table 4, CFD also achieves a better trade-off on top of different causal teachers, and in particular improves the OOD performance over every OOD-teacher. However, there is a large gap between IntroD's and CFD's ID performances because the ID-knowledge is not utilized. As a result, for the HM metric, IntroD outperforms CFD by a small margin (<0.4%) on LMH and CF-VQA and a large margin (>2%) on CSS.

Q5: Should we use the hard or soft variant to calculate the knowledge weights? It depends on the debiasing ability of the causal teacher. There are some interesting observations from Table 5. First, the OOD performance is proportional to the OOD-teachers' debiasing ability. Second, the hard variants marginally improve the OOD-teachers' OOD performances in all cases. Third, the hard variants cannot fully avoid the sacrifice in ID performance relative to the ID teacher. Empirically, we use the hard variant for the weaker OOD-teacher, e.g., RUBi, and the soft variant for the stronger OOD-teachers, e.g., LMH, CF-VQA, and CSS.

Q6: Can we use the ID-Prediction P^ID as the ID-Knowledge? No. As shown in Table 6, using P^ID as the ID-Knowledge significantly degrades the OOD performance for LMH and CF-VQA.
This observation indicates that it is better to use the oracle knowledge if available.

Q7: Can we ensemble the two teacher models and directly use that without distillation? In other words, is IntroD just an ensemble method? No. Recall that our goal is to achieve the best of both the ID and OOD worlds, i.e., high OOD performance with little or no sacrifice of ID performance. However, the naive ensemble strategy simply combines the two models' predictions using a fixed weight without figuring out whether a sample comes from the ID or the OOD distribution. As a result, the ensemble method only inherits the disadvantages of the two teacher models rather than their advantages. Empirical results in Tables 7 and 8 further verify our analysis. Here we report the results of ensembling the two teachers with different values of w^ID, the weight of the ID teacher. In particular, w^ID = 0 denotes the OOD teacher and w^ID = 1 denotes the ID teacher. We can see that (1) as w^ID increases, the ID performance keeps improving, but the OOD performance gradually decreases, and (2) all of the ensemble alternatives achieve a lower HM than the OOD teacher. These results indicate that (1) a simple ensemble of the two teacher models fails to achieve a good trade-off between ID and OOD performances, and (2) our IntroD is not simply an ensemble method.

4.2 Extractive QA

Dataset and settings. We conducted experiments on the reading comprehension benchmark dataset SQuAD [27]. SQuAD requires QA models to extract the answer from a passage. Recently, a new setting [22] was proposed to evaluate whether extractive QA models suffer from the position bias. This setting carves subsets out of the training set SQuAD_train based on the position of the answers. For example, SQuAD^{k=1}_train denotes the subset where all answers are in the first sentence of the passage. The test set is divided into two subsets: SQuAD^{k=1}_dev for ID evaluation and SQuAD^{k≠1}_dev for OOD evaluation.

Metrics and method. The standard evaluation metrics are exact match (EM) and F1 score [27]. Following Ko et al. [22], we used XLNet [38] and BERT [12] as the backbone models, and LM [11] as the causal teacher. We empirically used the hard variant for the knowledge weight calculation.

Results. Table 9 shows the main analysis with SQuAD^{k=1}_train as the biased training set. The results are reproduced based on the released code (https://github.com/dmis-lab/position-bias). Overall, LM increases the OOD performance by a large margin but slightly sacrifices the ID performance. As a comparison, our IntroD achieves the best of both the ID and OOD performances. Table 10 further shows that IntroD can improve LM under different answer-position biases and different numbers of training samples. In particular, when trained on the less biased training subset SQuAD^{k≥5}_train, where the answers are located in sentences other than the first four, LM achieves less improvement in overall performance, while IntroD stably improves LM. Furthermore, using the original training set SQuAD_train for unbiased training, LM slightly degrades the performance, while IntroD can still beat the baseline models. This observation indicates that IntroD does not over-correct the inductive bias.

5 Conclusion

In this paper, we proposed a novel training paradigm, Introspective Distillation (IntroD), to achieve a fair trade-off between in-distribution (ID) and out-of-distribution (OOD) evaluations for question answering tasks, e.g., visual QA and extractive QA.
IntroD uses a causal teacher to estimate the ID and OOD inductive bias, introspects whether one of the inductive biases dominates the learning, blends the inductive bias fairly, and distills the knowledge into the student model. Experiments on VQA v2, VQA-CP v2, and SQuAD demonstrated that our IntroD is able to achieve the best of both the ID and OOD worlds. The main limitation of our IntroD is that its OOD performance heavily relies on the OOD-teacher. In the future, we will explore how to establish a stronger OOD-teacher.

Acknowledgement

We thank the anonymous ACs and reviewers for their valuable discussion and insightful suggestions. This work was supported in part by NTU-Alibaba JRI and an MOE AcRF Tier 2 grant.
1. What is the main contribution of the paper regarding training VQA systems? 2. How does the proposed method differ from previous approaches in handling ID and OOD settings? 3. What are the strengths of the paper, particularly in terms of technical soundness and experimentation? 4. Are there any minor issues or areas for improvement in the writing or presentation of the paper? 5. How significant are the improvements achieved by the proposed system compared to baseline models?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a novel method for training VQA systems that can achieve competitive performance both in-distribution and out-of-distribution. Specifically, the proposed system leverages a recent causality-based QA model to estimate the OOD distribution (via counterfactual reasoning), thus building two sub-modules to handle the ID and OOD settings respectively. The two modules are then blended via knowledge distillation to train a single student model that can better handle the inductive bias. The authors perform extensive experiments on VQA and text-only question answering, with multiple baseline models, and show the proposed training paradigm can consistently improve the overall performance (measured by harmonic mean over ID and OOD settings). ======================================================================================== Thanks for the response from the authors. I've updated my rating after reading the rebuttal as well as the comments from other reviewers. Review Originality The main idea of the paper can be summarized as "proportionally distilling two teacher models into a single one to handle both ID and OOD settings" in question answering, which is quite neat and novel. The authors cleverly leveraged a recent study that approximates the OOD performance via causal QA and counterfactual reasoning. The other parts of the paradigm are quite intuitive and straightforward, including estimating the relative weights based on cross-entropy or predicted probability, and transferring knowledge to a new model via distillation. The combination of these techniques is shown to be effective for building a more robust QA system. Quality The proposed system is technically sound, and the authors have done extensive experiments on VQA and reading comprehension to validate its effectiveness, and important ablation studies are provided. From the experiments, the proposed system consistently shows improvements over different baseline models. Clarity In general, the paper is well-organized and quite easy to follow, but there still exist some minor issues in the writing. For example, in L151 and L165 the authors claim "setting the weights disproportionate to the matching scores". My guess here (based on the equations) is that the authors meant to say "inversely proportional" rather than "disproportionate". Otherwise I could not follow the logic. Significance This paper achieves quite significant improvements over baselines and the proposed system seems compatible with a wide range of systems.
NIPS
Title Introspective Distillation for Robust Question Answering Abstract Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones. N/A 1 Introduction UpDn [4] S-MRL [6] CFVQA+IntroD CSS+IntroD LMH+IntroD Ensemble-based Methods Ours LMH [11] CFVQA [25] CSS [7] Baseline Debiasing Methods Ours OOD Accuracy (VQA-CP v2 test) ID Ac cu ra cy (V Q A v2 va l) Figure 1: Recent debiasing methods achieve high OOD accuracy with the sacrifice of ID accuracy. Our proposed IntroD makes the best of both worlds. Question answering (QA), which requires machines to answer questions given a context, is one of the most fundamental AI tasks. Popular contexts are vision (e.g., image for VQA [5]) and natural language (e.g., passage for extractive QA [27]). A common observation is that QA models prefer to over-exploit the training bias, which bypasses the context comprehension for a shortcut answer. For example, by only using the linguistic correlations between questions and answers, VQA models can answer most questions correctly [16, 2, 5, 20]. Similarly, extractive QA models may use the spurious positional cues to locate the answer in the passage [22]. As a result, QA models that have already achieved strong in-distribution (ID) performance may inevitably fail in out-of-distribution (OOD) test scenarios, regardless of the scale of training data and models [14, 22, 37]. Recently, several debiasing methods aim to close the gap between the ID and OOD performances [6, 11, 7, 25]. However, many of them hold the assumption that the training and test distributions are very different or even reversed, e.g., if there are more “yes” answers in training, there must be more “no” answers in testing. As a result, these methods encounter a severe performance drop under the ID evaluation, although they significantly outperform non-debiasing baselines in terms of OOD performance. An interesting observation from Figure 1 is that non-debiasing methods (circles) obtain high ID but low OOD performance, while debiasing methods (squares) achieve high OOD but low ID performance. This observation motivates us to ask: can we make the best of both worlds? 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Question type Answer Distribution Question type Answer Distribution Question type Answer Distribution In this paper, we take a step forward to building robust QA models that achieve strong performances in both ID and ODD evaluations. We point out that if the model is over-exploiting the bias in one world, the performance in the other one would be significantly degraded. 
Therefore, the “best of both” model should be fair with the inductive bias in either world. To this end, we present a simple yet effective training paradigm—Introspective Distillation (IntroD)—to blend the inductive bias of both worlds fairly. Suppose that we have two expert teacher models: ID-teacher and OOD-teacher, each of which captures the ID or OOD inductive bias and represents the corresponding world. Figure 2 illustrates three cases about how an introspective student learns from the two very different teachers. Case 1: if ID-bias > OOD-bias, then ID-teacher < OOD-teacher. ID inductive bias dominates the learning, and the student should listen more to OOD-teacher. This case occurs when ID-teacher has a low training loss while OOD-teacher has a high one. As shown in Figure 2 (a), it is hard for QA models to conclude whether the oven is electric or not without additional context. Due to the inductive bias in the training data, i.e., most questions starting with “is” are answered by “yes”, ID-teacher concludes with over-confidence while OOD-teacher does not. Case 2: if ID-bias < OOD-bias, then ID-teacher > OOD-teacher. OOD inductive bias dominates the learning, and the student should listen more to ID-teacher. This case occurs when ID-teacher has a high training loss while OOD-teacher has a low one. As shown in Figure 2 (c), there are at least two older men, one in a blue shirt selling fruits and one in a white shirt walking in the crowd. Therefore, both “blue” and “white” should be correct. However, as most training questions starting with “what color” are labeled by “white” answer, the bias of “OOD should be different from ID” enforces OOD-teacher to downplay “white” unfairly while ID-teacher does not. Case 3: if ID ≈ OOD, then ID-teacher ≈ OOD-teacher. Learning is fair and the student should listen to both teachers equally. This case occurs when the training losses of the two are close. As shown in Figure 2 (b), the ID-teacher and OOD-teacher produce similar predictions. The above introspection can be represented as a blended knowledge of the two teachers, which is distilled to the student model [18]. Yet, an unsolved challenge is how to obtain the “oracle” teachers, especially the OOD-teacher, because the OOD distribution is unseen in training, not mentioning to train a teacher model. Thanks to the recent causality-based approach [25], we can approximate the OOD-teacher using a causal model that imagines the unseen world by counterfactual reasoning. Without loss of generality, we take visual QA and extractive QA as case studies. Experiments on VQA-CP [2], VQA v2 [16], and SQuAD [27] validate the effectiveness of our proposed IntroD. Interestingly, extensive ablations demonstrate that the success of IntroD is indeed from the causal introspection but not from the simple ensemble. 2 Related Work Visual Question Answering (VQA) [5, 3, 16] is to answer the question given a visual context, i.e., image. Traditional VQA models are found to exploit the language priors in the training data [16, 2, 20]. For example, in the first version of the VQA dataset VQA v1.0, about 40% of the sports- related questions are answered as “tennis”. Although utilizing the shortcut bias helps with the in-distribution (ID) performance, the out-of-distribution (OOD) one is severely hurt [2]. 
In order to mitigate the language bias, recent methods have proposed to utilize extra annotations for accurate visual grounding [28, 33], generate synthetic data for data augmentation [7, 1, 14, 30, 31], modify language modules [19, 23], or explicitly formulate and exclude the language prior [6, 11, 25]. These methods obtain significant OOD improvements on the VQA-CP [2] dataset, whose answer distributions in training and testing are reversed. However, the OOD improvement is achieved at the cost of a severe ID performance drop. Therefore, it remains a challenge to achieve strong performances in both ID and OOD evaluations.

Extractive Question Answering (extractive QA) is to answer a question given a natural language context, i.e., a passage [27]. Extractive QA assumes that the answer is always located in the passage, and thus reduces the generative QA task to a classification task, i.e., position prediction. Recent years have witnessed many influential works [35, 29, 10, 39, 12, 38, 9]. However, directly predicting answer positions has a severe side effect, i.e., correlating answers with positions [22]. For example, if a language model is trained on a biased dataset where the answers are always located in the first sentence of the passage, the model will tend to ground the answer in the first sentence. Recently, a new variant of the reading comprehension dataset SQuAD [27] was proposed to evaluate whether language models are robust to the position bias [22]. Similar to VQA, the answer position distribution is skewed in the training set. In this paper, we follow Ko et al. [22] to evaluate robustness for extractive QA.

Ensemble-based methods for debiasing explicitly formulate and exclude the shortcut bias in the training data [6, 11, 7, 25, 8]. The shortcut bias can be captured by a separate branch [6] or by statistical priors [11]. These methods can be further interpreted as causality-based approaches [25]. However, most of these methods achieve promising performance under the out-of-distribution (OOD) evaluation but sacrifice performance under the in-distribution (ID) evaluation. The reason is that these methods assume that the training and test distributions are very different or even reversed. In this paper, we implement our ID-teacher and OOD-teacher using the causality-based methods, and further achieve a good trade-off between ID and OOD evaluations. Previous OOD-teachers, i.e., causality-based methods, only generate the OOD-prediction for debiased inference and ignore the role of the ID-prediction. We further point out that the ID-prediction is crucial for introspecting the training process and achieving a good trade-off between ID performance and OOD performance.

Knowledge Distillation was first proposed for model compression by transferring the teacher’s knowledge to a small student model [18, 15]. The idea of knowledge distillation has been further extended to build debiasing models in natural language understanding (NLU) tasks [32, 13] and long-tail classification [34, 42, 17]. The idea of “introspection” is related to “self-distillation”, which uses the student model itself as the teacher for the next training epoch or stage [24, 41, 21, 36, 40]. Although our introspection and self-distillation share the similar idea of “self-teaching”, they are fundamentally different: the latter is still in-distribution and has no comparative reasoning about the seen factual world and the unseen counterfactual one.
This difference reveals the key reason why introspection introduces new blended knowledge rather than just an old copy. Also, unlike traditional knowledge distillation methods that use a fixed weight as a hyper-parameter, our IntroD weights the teachers with introspective weights, which does not require a careful selection of hyper-parameters.

3 Introspective Distillation

We present a simple yet effective training paradigm, Introspective Distillation (IntroD), to achieve a good trade-off between the in-distribution (ID) and out-of-distribution (OOD) performances for robust QA. Given a visual or natural language context C=c and a question Q=q as input, the QA model generates an answer A=a. In practice, the model is usually formulated not as generation but as multi-class classification to reduce the prediction space, i.e., a ∈ A. For VQA [5], the context refers to an image, and the answers are selected from a pre-defined candidate set. For extractive QA [27], the context refers to a passage, and the answers are locations in it. Our IntroD aims to blend the ID and OOD inductive biases fairly. As illustrated in Figure 3, it consists of three key parts: 1) a causal teacher for capturing the ID and OOD inductive biases, 2) introspection for blending the two different inductive biases, and 3) distillation for a robust student model.

3.1 ID-Teacher and OOD-Teacher

We expect ID-teacher and OOD-teacher to delineate the ID and OOD worlds, respectively. However, without access to the OOD distribution, it is difficult to obtain the “oracle” OOD-teacher. Thanks to the recently proposed causality-based method [25], OOD-teacher can be approximated by counterfactual reasoning. Likewise, ID-teacher can be approximated using the same causal model by factual reasoning. We briefly introduce the key concepts of the causal method below, and encourage readers to refer to Niu et al. [25] for more details. The causal QA models formulate the causal relations between the input {Q,C} and the output A. The ID inductive bias is formulated as the direct effect of the inputs on the output, e.g., the language prior in VQA as Q→A and the position bias in extractive QA as C→A. Compared to traditional QA models, which can only conduct factual reasoning to formulate the seen ID world, the causal QA models can also imagine the unseen OOD world by counterfactual reasoning. Therefore, we can implement ID-teacher and OOD-teacher using the same causal model. By factual reasoning, the causal QA model predicts the answers as $P^{\mathrm{ID}}$, which includes the ID inductive bias as part of the total causal effect. By counterfactual reasoning, the causal QA model explicitly estimates the direct causal effect to exclude the inductive bias, and generates the counterfactual predictions $P^{\mathrm{OOD}}$, i.e., the total indirect effect [25] or the natural indirect effect [6, 11], which reflect the unseen OOD world. The training of the ID and OOD teachers strictly follows their corresponding methods. The teacher model is trained with the standard cross-entropy loss on the ID data, and we do not separately train the ID and OOD teachers.

3.2 Introspection of Inductive Bias

Introspection first examines whether the model over-exploits the inductive bias of either the ID or the OOD world, and then blends the ID and OOD inductive biases fairly. If the inductive bias of one world dominates the learning, we expect the student model to learn more from the other world for debiasing. This raises two questions: how to define “dominate” and how to define “more”. In other words, how to introspect and how to weight the inductive bias.
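To make Section 3.1 concrete, the following is a minimal, hypothetical PyTorch sketch of how a single causal teacher can emit both $P^{\mathrm{ID}}$ (factual reasoning) and $P^{\mathrm{OOD}}$ (counterfactual reasoning via a TIE-style subtraction). The two-branch structure, the SUM-style fusion, and all names are illustrative assumptions rather than the exact formulation of Niu et al. [25]:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalTeacher(nn.Module):
    """Illustrative causal teacher: a multimodal QA branch plus a
    question-only shortcut branch (Q -> A). One model yields both
    the factual (ID) and counterfactual (OOD) predictions."""

    def __init__(self, dim, num_answers):
        super().__init__()
        self.qa_branch = nn.Linear(2 * dim, num_answers)  # uses Q and C
        self.q_branch = nn.Linear(dim, num_answers)       # shortcut Q -> A
        # learned logit used when the multimodal branch is "blocked"
        self.c = nn.Parameter(torch.zeros(num_answers))

    def forward(self, q_feat, c_feat):
        z_qa = self.qa_branch(torch.cat([q_feat, c_feat], dim=-1))
        z_q = self.q_branch(q_feat)
        z_fact = F.logsigmoid(z_qa + z_q)      # factual fusion: total effect
        z_cfact = F.logsigmoid(self.c + z_q)   # context blocked: direct effect
        p_id = F.softmax(z_fact, dim=-1)             # P^ID: factual reasoning
        p_ood = F.softmax(z_fact - z_cfact, dim=-1)  # P^OOD: TIE-style debiasing
        return p_id, p_ood
```

The actual fusion functions and the debiased inference (TIE vs. NIE) follow the specific causal teacher, e.g., RUBi, LMH, CSS, or CF-VQA.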
Introspecting the bias. We introspect the effect of the inductive bias by comparing the predictions of ID-teacher and OOD-teacher. If the inductive bias dominates the learning of a sample, ID-teacher’s confidence (i.e., predicted probability) on the ground-truth answers will be much larger than that of OOD-teacher. We denote the confidence as:

$s^{\mathrm{ID}} = \sum_{a \in \mathcal{A}_{\mathrm{GT}}} P^{\mathrm{ID}}(a), \quad s^{\mathrm{OOD}} = \sum_{a \in \mathcal{A}_{\mathrm{GT}}} P^{\mathrm{OOD}}(a), \quad (1)$

where $\mathcal{A}_{\mathrm{GT}}$ denotes the set of ground-truth answers (there can be one answer for single-label classification or multiple for multi-label classification). These scores reflect how well the training sample matches the inductive bias. The introspection is realized by comparing $s^{\mathrm{ID}}$ and $s^{\mathrm{OOD}}$. If $s^{\mathrm{ID}} > s^{\mathrm{OOD}}$, we consider the sample’s learning to be dominated by the ID inductive bias (see Figure 2 (a)), and vice versa (see Figure 2 (c)). Note that the cross entropy XE between the ground-truth answers and the predictions is inversely related to the confidence. Therefore, we can also use the standard cross-entropy loss to define the matching scores $s^{\mathrm{ID}}$ and $s^{\mathrm{OOD}}$:

$s^{\mathrm{ID}} = \frac{1}{\mathrm{XE}(P^{\mathrm{GT}}, P^{\mathrm{ID}})} = \frac{1}{\sum_{a \in \mathcal{A}} -P^{\mathrm{GT}}(a) \log P^{\mathrm{ID}}(a)}, \quad s^{\mathrm{OOD}} = \frac{1}{\mathrm{XE}(P^{\mathrm{GT}}, P^{\mathrm{OOD}})} = \frac{1}{\sum_{a \in \mathcal{A}} -P^{\mathrm{GT}}(a) \log P^{\mathrm{OOD}}(a)}, \quad (2)$

where $P^{\mathrm{GT}}$ denotes the ground-truth labels. We empirically found that the cross-entropy form achieves more stable improvements than the confidence form in our implementation (see Table 3).

Weighting the bias. We blend the ID and OOD knowledge by a weighted sum. The purpose of knowledge blending is to mix the ID and OOD inductive biases fairly. If the learning is biased toward one world, the model may suffer from over-exploiting the corresponding inductive bias. As illustrated in Figure 2 (a), it is difficult to judge whether the oven is electric or not without external knowledge. However, ID-teacher is over-confident in its prediction due to over-exploitation of the training answer distribution, i.e., $s^{\mathrm{ID}} > s^{\mathrm{OOD}}$. In this case, the model should learn less from ID-teacher. We realize this by increasing the weight of the OOD-knowledge $w^{\mathrm{OOD}}$ and decreasing the weight of the ID-knowledge $w^{\mathrm{ID}}$, i.e., $w^{\mathrm{ID}} < w^{\mathrm{OOD}}$. Similarly, for training samples on which OOD-teacher is over-confident (see Figure 2 (c)), i.e., $s^{\mathrm{ID}} < s^{\mathrm{OOD}}$, we set $w^{\mathrm{ID}} > w^{\mathrm{OOD}}$. We determine the knowledge weights by setting them inversely proportional to the matching scores, i.e., $w \propto s^{-1}$, normalized to lie between 0 and 1:

$w^{\mathrm{ID}} = \frac{(s^{\mathrm{ID}})^{-1}}{(s^{\mathrm{ID}})^{-1} + (s^{\mathrm{OOD}})^{-1}} = \frac{s^{\mathrm{OOD}}}{s^{\mathrm{ID}} + s^{\mathrm{OOD}}}, \quad w^{\mathrm{OOD}} = 1 - w^{\mathrm{ID}} = \frac{s^{\mathrm{ID}}}{s^{\mathrm{ID}} + s^{\mathrm{OOD}}}. \quad (3)$

We take VQA as an example to show how the distribution of knowledge weights reflects the effect of the inductive bias, i.e., the language prior. Recall that VQA v2 [16] was proposed to balance the answer distribution to remove the language bias, while VQA-CP v2 [2] was proposed to evaluate whether VQA models memorize the language priors. As a result, the VQA v2 train split contains little language bias, while the bias in VQA-CP v2 is artificially severe. Figure 4 illustrates the distribution of $w^{\mathrm{ID}}$ on the two training sets using CF-VQA [25] as the causal teacher. It can be clearly observed that the two distributions are totally different, which reflects exactly how the data bias affects the training process. Note that a small $w^{\mathrm{ID}}$ indicates a high ID bias. Here are three interesting observations:

• The $w^{\mathrm{ID}}$ of most samples is around 0.5 for both datasets. This indicates that most of the samples are learned unbiasedly and predicted fairly (e.g., Figure 2 (b)).

• Both of the distributions are left-skewed.
In particular, only 4% of the samples have $w^{\mathrm{ID}}$ larger than 0.6, while the ratio for $w^{\mathrm{ID}} < 0.4$ is 40% on VQA-CP v2 and 25% on VQA v2. The reason is that ID-teacher is directly optimized on the ID data, while OOD-teacher is only indirectly approximated. Therefore, ID-teacher outperforms OOD-teacher on the seen ID data in most cases, i.e., $w^{\mathrm{ID}} < 0.5$.

• A spike lies at the left side of the VQA-CP v2 distribution. In particular, 9.6% of the samples have $w^{\mathrm{ID}}$ lower than 0.05, while the ratio is only 0.4% on VQA v2. Also, the gap between the two percentages grows as $w^{\mathrm{ID}}$ decreases below 0.5. This observation indicates that VQA models tend to exploit the training bias on the imbalanced VQA-CP v2 dataset but not on the balanced one. Recall that the VQA-CP training set is artificially modified to “encourage” the models to learn from the language prior. Without the memorized priors, VQA models cannot answer the questions confidently or correctly in a few extreme cases (e.g., Figure 2 (a)).

We also define a hard variant to weight the bias:

$w^{\mathrm{ID}} = \begin{cases} 1, & \text{if } s^{\mathrm{ID}} \le s^{\mathrm{OOD}}, \\ 0, & \text{otherwise}. \end{cases} \quad (4)$

The hard weighting forces the student to learn entirely from the OOD-teacher for most of the training samples in order to maintain its OOD performance. In practice, one may choose the soft or the hard variant based on the desired trade-off between ID and OOD performances. We empirically use the soft variant for strong OOD-teachers and the hard variant for weak ones that achieve relatively lower OOD performance. Based on the knowledge weights, the ID-knowledge and OOD-knowledge are blended as:

$P^{\mathrm{T}} = w^{\mathrm{ID}} \cdot \text{ID-Knowledge} + w^{\mathrm{OOD}} \cdot \text{OOD-Knowledge}. \quad (5)$

Considering that the ID ground-truth labels $P^{\mathrm{GT}}$ are more accurate than the ID-predictions $P^{\mathrm{ID}}$, we use $P^{\mathrm{GT}}$ as the “oracle” ID-knowledge. Since the OOD distribution is unobserved in training, it is impossible to obtain the oracle OOD-knowledge. Thanks to the causal teacher, we can use the OOD-prediction $P^{\mathrm{OOD}}$ to approximate the OOD-knowledge.

3.3 Distillation of Fair Knowledge

After obtaining the blended fair knowledge from the causal teacher, we train a student model in a knowledge distillation manner [18]:

$\mathcal{L} = \mathrm{KL}(P^{\mathrm{T}}, P^{\mathrm{S}}) = \sum_{a \in \mathcal{A}} P^{\mathrm{T}}(a) \log \frac{P^{\mathrm{T}}(a)}{P^{\mathrm{S}}(a)}, \quad (6)$

where $P^{\mathrm{S}}$ denotes the output of the student model. The difference between the teacher model and the student model lies in their architectures. The student model is simply the baseline model, e.g., UpDn [4] for VQA and BERT [12] for extractive QA. Besides the baseline model, the teacher model ensembles a separate branch to formulate the shortcut bias, e.g., Q→A for VQA and C→A for extractive QA. Therefore, the student is more efficient in both parameters and inference speed than the causal teacher. We fix the causal teacher and only update the student model during distillation. A minimal sketch of this training step is given below.
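Putting Eqs. (1)–(6) together, here is a minimal Python sketch of one IntroD training step, assuming the teacher probabilities are precomputed; the tensor shapes, the eps smoothing, and the function names are our assumptions:

```python
import torch
import torch.nn.functional as F

def introspective_targets(p_id, p_ood, p_gt, hard=False, eps=1e-12):
    """Blend ID and OOD knowledge per sample (Eqs. 1-5).

    p_id, p_ood: teacher probabilities of shape (batch, num_answers)
    p_gt: ground-truth label distribution of the same shape
    """
    # Eq. (2): matching scores as inverse cross-entropy with the labels
    xe_id = -(p_gt * torch.log(p_id + eps)).sum(dim=-1)
    xe_ood = -(p_gt * torch.log(p_ood + eps)).sum(dim=-1)
    s_id, s_ood = 1.0 / (xe_id + eps), 1.0 / (xe_ood + eps)
    if hard:  # Eq. (4): hard variant
        w_id = (s_id <= s_ood).float()
    else:     # Eq. (3): weights inversely proportional to the scores
        w_id = s_ood / (s_id + s_ood)
    w_id = w_id.unsqueeze(-1)
    # Eq. (5): P^GT is the oracle ID-knowledge; P^OOD approximates OOD-knowledge
    return w_id * p_gt + (1.0 - w_id) * p_ood

def distillation_loss(p_teacher, student_logits, eps=1e-12):
    """Eq. (6): KL(P^T || P^S) between the blended teacher and the student."""
    log_p_s = F.log_softmax(student_logits, dim=-1)
    kl = (p_teacher * (torch.log(p_teacher + eps) - log_p_s)).sum(dim=-1)
    return kl.mean()
```

During training, the causal teacher stays frozen: only `student_logits` carries gradients, so the student alone is updated, as stated above.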
4 Experiments

We take visual QA and extractive QA, two representative QA tasks, as examples to evaluate our proposed Introspective Distillation (IntroD); code is available at https://github.com/yuleiniu/introd.

4.1 Visual QA

Dataset. We conducted experiments on the benchmark datasets VQA v2 [16] and VQA-CP v2 [2]. VQA v2 is a balanced VQA dataset that significantly reduces the language bias. For each question in the dataset, VQA v2 has two different answers for two different images. VQA-CP v2 is a variant of VQA v2 designed to evaluate whether a model answers the questions by simply memorizing the language priors. VQA-CP v2 reverses the priors in the training and validation splits. For example, most of the “what sports” questions are answered with “tennis” in the training set but with “baseball” in the test set.

Metric and setting. The standard evaluation metric for VQA is accuracy. In order to evaluate the robustness of VQA methods, we conducted experiments in two settings: the in-distribution (ID) setting and the out-of-distribution (OOD) setting. For the ID setting, we reported the results on the VQA v2 val set. For the OOD setting, we reported the results on the VQA-CP v2 test set. For the VQA-CP dataset, we also followed Teney et al. [31] and held out 8k samples from the training set as a val set for ID evaluation. We further reported the harmonic mean (HM) of the accuracies on the VQA-CP v2 test set and the VQA v2 val set. We use this metric to evaluate the trade-off between ID and OOD evaluations.

Methods. According to the causal explanation [25], we implemented the counterfactual teacher as RUBi [6], LMH [11], CSS [7], and CF-VQA [25]. In particular, the earlier works RUBi and LMH used the natural indirect effect (NIE) [26] for inference. CSS is a variant of LMH that generates counterfactual training samples for data augmentation. CF-VQA proposed to use the total indirect effect (TIE) [26] for debiasing, and improved RUBi by replacing NIE with TIE; we denote this variant as RUBi-CF. Following previous works, we used UpDn [4] and S-MRL [6] as the backbones. Based on the debiasing ability, we used the soft variant of the weights for LMH, CSS, RUBi-CF, and CF-VQA, and the hard variant for RUBi (see Table 5). More training details are in the appendix.

Overall results. Tables 1 and 2 show how our proposed IntroD strengthens the existing causal models. First, according to the HM metric, IntroD improves the trade-off ability of all the causal teachers. In particular, CSS+IntroD achieves an accuracy of over 60% under both the ID and OOD settings, the only combination to do so. Second, looking more closely at the OOD evaluation, IntroD shows competitive debiasing ability. Surprisingly, IntroD even slightly increases the OOD performance of the causal teachers, except for LMH. Third, looking more closely at the ID evaluation, IntroD outperforms RUBi by 0.7% and the other teachers by over 2.4%. The biggest winners are LMH and CSS, which suffer from a significant drop in ID performance; their gains in ID performance exceed 5.5%. Similar conclusions can be drawn from Table 2. Furthermore, IntroD with CF-VQA obtains higher ID performance (63.40%) than the baseline S-MRL (63.12%), achieving the best of both ID and OOD worlds. These results demonstrate the effectiveness of our proposed IntroD on top of different causal VQA models. Also, the results indicate that the OOD approximation has an impact on the OOD performance of the students. Overall, the OOD performance of the student is proportional to that of the teacher, while there is no clue whether the student’s ID performance is correlated with that of the OOD-teacher. As shown in Table 1, CSS+IntroD, which uses the best OOD-teacher CSS (58.95%), achieves the highest accuracy (60.17%) among all students on the VQA-CP v2 test set. Also, IntroD increases the OOD performance of CSS by 1.22%, while the improvement over CF-VQA is much smaller (0.12%). The student even shows decreased accuracy with the comparatively weakest teacher, LMH (−0.70%).
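For reference, the harmonic mean (HM) used above as the trade-off metric can be computed as follows; the accuracy values in the usage lines are placeholders, not numbers from the tables:

```python
def harmonic_mean(acc_id, acc_ood):
    """HM trade-off metric: high only when both accuracies are high."""
    return 2.0 * acc_id * acc_ood / (acc_id + acc_ood)

print(harmonic_mean(60.0, 60.0))  # 60.0 -- balanced ID/OOD
print(harmonic_mean(70.0, 40.0))  # ~50.9 -- penalizes imbalance
```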
Ablation studies. We further conducted ablation studies to evaluate the introspection and distillation strategies. We compared the alternatives with ID-teacher and OOD-teacher, i.e., the factual and counterfactual predictions of the same causal model. The ablations aimed to answer the following questions. Note that Q1 is for “introspecting the bias”, Q2–Q5 are for “weighting the bias”, and Q6 and Q7 are for “distillation of fair knowledge” in Section 3.

Q1: Can we use the predicted probability of the ground-truth answer (“Prob.” for short) as the matching scores? Better not. As shown in Table 3, although using “Prob.” achieves even better ID performance than ID-teacher, the OOD performance drops by ∼7% compared to LMH and by 4.5% compared to CSS. As a result, the trade-off metric HM decreases with LMH, and increases only marginally (<1%) with CF-VQA and CSS.

Q2: Can the student learn more from the more accurate teacher, i.e., setting $w \propto s$? No. This is a natural question because we hope to learn the best from the best. Unfortunately, this alternative (“Weight Avg.” for short) enhances the inductive bias rather than reducing it. As shown in Table 4, the alternative “Weight Avg.” achieves the best ID performance on top of the different causal teachers, even beating ID-teacher. However, the students fail to learn the debiasing ability from the OOD-teachers and achieve much lower OOD performance than the OOD-teachers. This observation verifies that the “best” here should refer to the debiasing ability against the inductive bias rather than the fitting ability.

Q3: Can the student learn equally from the ID and OOD teachers, i.e., setting $w^{\mathrm{ID}} = w^{\mathrm{OOD}} = 0.5$? No. This alternative can be regarded as a simple average ensemble (“Simple Avg.” for short) of the ID and OOD teachers. As shown in Table 4, similar to Q2, the students outperform the ID-teachers in the ID evaluation at the sacrifice of OOD performance compared to the OOD-teachers. Besides, there is a large gap between “Simple Avg.” and our IntroD with the different causal models, e.g., >2% for LMH and CF-VQA, and ∼5% for CSS. This observation indicates that our IntroD is not just a simple ensemble method that combines two teacher models into a bigger one.

Q4: Can the student learn only from OOD-teacher? Yes, but this works worse than IntroD. This alternative can be called counterfactual distillation (“CFD” for short), as the student model only learns from the counterfactual teacher. As shown in Table 4, CFD also achieves a better trade-off on top of the different causal teachers, and in particular improves the OOD performance over OOD-teacher in all cases. However, there is a large gap between IntroD’s and CFD’s ID performances because the ID-knowledge is not utilized. As a result, for the HM metric, IntroD outperforms CFD by a small margin (<0.4%) on LMH and CF-VQA and by a large margin (>2%) on CSS.

Q5: Should we use the hard or the soft variant to calculate the knowledge weights? It depends on the debiasing ability of the causal teacher. There are some interesting observations in Table 5. First, the OOD performance is proportional to the OOD-teachers’ debiasing ability. Second, the hard variants marginally improve the OOD-teachers’ OOD performances in all cases. Third, the hard variants cannot fully overcome the sacrifice of degraded ID performance compared to the ID-teacher. Empirically, we use the hard variant for the weaker OOD-teacher, e.g., RUBi, and the soft variant for the stronger OOD-teachers, e.g., LMH, CF-VQA, and CSS.

Q6: Can we use the ID-prediction $P^{\mathrm{ID}}$ as the ID-knowledge? No. As shown in Table 6, using $P^{\mathrm{ID}}$ as the ID-knowledge significantly degrades the OOD performance for LMH and CF-VQA.
This observation indicates that it is better to use the oracle knowledge when available.

Q7: Can we ensemble the two teacher models and directly use the ensemble without distillation? In other words, is IntroD just an ensemble method? No. Recall that our goal is to achieve the best of both the ID and OOD worlds, i.e., high OOD performance with little or no sacrifice of ID performance. However, the naive ensemble strategy simply combines the two models’ predictions using a fixed weight without figuring out whether a sample comes from the ID or the OOD distribution. As a result, the ensemble method only inherits the disadvantages of the two teacher models rather than their advantages. Empirical results in Tables 7 and 8 further verify our analysis. Here we report the results of ensembling the two teachers with different values of $w^{\mathrm{ID}}$, the weight of the ID-teacher. In particular, $w^{\mathrm{ID}}=0$ denotes the OOD-teacher and $w^{\mathrm{ID}}=1$ denotes the ID-teacher. We can see that (1) as $w^{\mathrm{ID}}$ increases, the ID performance keeps improving, but the OOD performance gradually decreases, and (2) all of the ensemble alternatives achieve a lower HM than the OOD-teacher. These results indicate that (1) a simple ensemble of the two teacher models fails to achieve a good trade-off between the ID and OOD performances, and (2) our IntroD is not simply an ensemble method.

4.2 Extractive QA

Dataset and settings. We conducted experiments on the reading comprehension benchmark dataset SQuAD [27]. SQuAD requires QA models to extract the answer from a passage. Recently, a new setting [22] was proposed to evaluate whether extractive QA models suffer from the position bias. This setting carves out subsets of the training set $\text{SQuAD}_{\mathrm{train}}$ based on the position of the answers. For example, $\text{SQuAD}^{k=1}_{\mathrm{train}}$ denotes the subset where all answers are in the first sentence. The test set is divided into two subsets: $\text{SQuAD}^{k=1}_{\mathrm{dev}}$ for ID evaluation and $\text{SQuAD}^{k \neq 1}_{\mathrm{dev}}$ for OOD evaluation.

Metrics and method. The standard evaluation metrics are exact match (EM) and F1 score [27]. Following Ko et al. [22], we used XLNet [38] and BERT [12] as the backbone models, and LM [11] as the causal teacher. We empirically used the hard variant for the knowledge-weight calculation.

Results. Table 9 shows the main analysis with $\text{SQuAD}^{k=1}_{\mathrm{train}}$ as the biased training set. The results are reproduced based on the released code (https://github.com/dmis-lab/position-bias). Overall, LM increases the OOD performance by a large margin but slightly sacrifices the ID performance. In comparison, our IntroD achieves the best of both the ID and OOD performances. Table 10 further shows that IntroD can promote LM under different answer position biases and different numbers of training samples. In particular, when trained on the less biased training subset $\text{SQuAD}^{k \geq 5}_{\mathrm{train}}$, where the answers are located in sentences other than the first four, LM achieves a smaller improvement in overall performance, while IntroD stably promotes LM. Furthermore, when using the original training set $\text{SQuAD}_{\mathrm{train}}$ for unbiased training, LM slightly degrades the performance, while IntroD can still beat the baseline models. This observation indicates that IntroD does not over-correct the inductive bias.
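To illustrate how such position-biased subsets can be built, here is a rough sketch that filters a SQuAD-style training set by the sentence index of the answer span; the example field names and the naive sentence splitter are our assumptions, not the released preprocessing of Ko et al. [22]:

```python
def sentence_char_offsets(text):
    """Naive sentence boundaries via '.' (illustrative only)."""
    offsets, start = [], 0
    for i, ch in enumerate(text):
        if ch == ".":
            offsets.append((start, i + 1))
            start = i + 1
    if start < len(text):
        offsets.append((start, len(text)))
    return offsets

def position_biased_subset(examples, k=1):
    """Keep examples whose answer starts in the k-th sentence,
    mimicking a SQuAD^{k=1}_train-style subset (field names hypothetical)."""
    subset = []
    for ex in examples:
        bounds = sentence_char_offsets(ex["context"])
        ans_start = ex["answers"]["answer_start"][0]
        sent_idx = next(
            (i for i, (lo, hi) in enumerate(bounds) if lo <= ans_start < hi),
            None,
        )
        if sent_idx is not None and sent_idx + 1 == k:
            subset.append(ex)
    return subset
```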
5 Conclusion

In this paper, we proposed a novel training paradigm, Introspective Distillation (IntroD), to achieve a fair trade-off between the in-distribution (ID) and out-of-distribution (OOD) evaluations for question answering tasks, e.g., visual QA and extractive QA. IntroD uses a causal teacher to estimate the ID and OOD inductive biases, introspects whether one of the inductive biases dominates the learning, blends the two inductive biases fairly, and distills the blended knowledge into the student model. Experiments on VQA v2, VQA-CP v2, and SQuAD demonstrated that our IntroD is able to achieve the best of both the ID and OOD worlds. The main limitation of our IntroD is that its OOD performance heavily relies on the OOD-teacher. In the future, we will explore how to establish a stronger OOD-teacher.

Acknowledgement We thank the anonymous ACs and reviewers for their valuable discussion and insightful suggestions. This work was supported in part by NTU-Alibaba JRI and an MOE AcRF Tier 2 grant.
1. What is the main contribution of the paper regarding OOD generalizability?
2. What are the key components of the Introspective Distillation (IntroD) training paradigm?
3. How does the reviewer assess the effectiveness and significance of the proposed approach?
4. Are there any suggestions for additional experiments to further support the effectiveness of IntroD?
5. Can the proposed approach be applied to other annotation biases in extractive QA?
Summary Of The Paper

This paper proposed Introspective Distillation (IntroD) to achieve good OOD generalizability without sacrificing ID performance. The training paradigm has three key components: (1) a factual-reasoning ID teacher model and a counterfactual-reasoning OOD teacher model to capture the ID and OOD inductive biases, (2) introspection between the two inductive biases by comparing the predictions of the ID teacher and the OOD teacher, and (3) knowledge distillation into a strong student model. They conduct experiments on visual QA and extractive QA tasks. On VQA-CP v2 and VQA v2, built upon four counterfactual teacher models (i.e., RUBi, LMH, CSS, and CF-VQA), their IntroD has shown large ID improvements, leading to considerably large improvements in the harmonic mean of the ID and OOD accuracies. They also conduct detailed ablation studies to identify the best introspection and distillation strategies. For the extractive QA experiments, they follow the experimental setting from prior work (Ko et al., 2020), where the dataset is divided into subsets based on the position of the answers. IntroD has shown its effectiveness on the extractive QA experiments as well.

Review

Originality: The ID performance deterioration caused by debiasing has been pointed out before, and this work addresses that important problem with a simple yet effective approach. By combining prior counterfactual models with their proposed IntroD, they significantly improve the ID performance while maintaining or improving the original OOD performance.

Quality: I think the proposed approach is technically sound and well-motivated, and the rigorous experimental results support its effectiveness. The detailed analysis helps us understand the best training strategy to address the aforementioned issue. One suggestion is to add more experiments on extractive QA; for example, testing on another extractive QA dataset with a similar position bias, or studying another annotation bias seen in extractive QA (e.g., lexical overlap), would make this paper even stronger.

Clarity: I think this paper is generally well-written and easy to read.

Significance: I think this paper addresses an important issue and provides a simple yet effective solution that can be combined with current or new debiasing techniques. The experimental results show its effectiveness on two tasks, namely VQA and extractive QA. As mentioned above, the experimental results on extractive QA may be weak, and more experiments on that task would be helpful. The position issue is one of the core issues in extractive QA, but there are several major annotation biases in the task, and I wonder whether the proposed approach would be useful for those (more complex) inductive biases.
NIPS
Title Introspective Distillation for Robust Question Answering Abstract Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on visual QA datasets VQA v2, VQA-CP, and reading comprehension dataset SQuAD demonstrate that our proposed IntroD maintains the competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones. N/A 1 Introduction UpDn [4] S-MRL [6] CFVQA+IntroD CSS+IntroD LMH+IntroD Ensemble-based Methods Ours LMH [11] CFVQA [25] CSS [7] Baseline Debiasing Methods Ours OOD Accuracy (VQA-CP v2 test) ID Ac cu ra cy (V Q A v2 va l) Figure 1: Recent debiasing methods achieve high OOD accuracy with the sacrifice of ID accuracy. Our proposed IntroD makes the best of both worlds. Question answering (QA), which requires machines to answer questions given a context, is one of the most fundamental AI tasks. Popular contexts are vision (e.g., image for VQA [5]) and natural language (e.g., passage for extractive QA [27]). A common observation is that QA models prefer to over-exploit the training bias, which bypasses the context comprehension for a shortcut answer. For example, by only using the linguistic correlations between questions and answers, VQA models can answer most questions correctly [16, 2, 5, 20]. Similarly, extractive QA models may use the spurious positional cues to locate the answer in the passage [22]. As a result, QA models that have already achieved strong in-distribution (ID) performance may inevitably fail in out-of-distribution (OOD) test scenarios, regardless of the scale of training data and models [14, 22, 37]. Recently, several debiasing methods aim to close the gap between the ID and OOD performances [6, 11, 7, 25]. However, many of them hold the assumption that the training and test distributions are very different or even reversed, e.g., if there are more “yes” answers in training, there must be more “no” answers in testing. As a result, these methods encounter a severe performance drop under the ID evaluation, although they significantly outperform non-debiasing baselines in terms of OOD performance. An interesting observation from Figure 1 is that non-debiasing methods (circles) obtain high ID but low OOD performance, while debiasing methods (squares) achieve high OOD but low ID performance. This observation motivates us to ask: can we make the best of both worlds? 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Question type Answer Distribution Question type Answer Distribution Question type Answer Distribution In this paper, we take a step forward to building robust QA models that achieve strong performances in both ID and ODD evaluations. We point out that if the model is over-exploiting the bias in one world, the performance in the other one would be significantly degraded. 
Therefore, the “best of both” model should be fair with the inductive bias in either world. To this end, we present a simple yet effective training paradigm—Introspective Distillation (IntroD)—to blend the inductive bias of both worlds fairly. Suppose that we have two expert teacher models: ID-teacher and OOD-teacher, each of which captures the ID or OOD inductive bias and represents the corresponding world. Figure 2 illustrates three cases about how an introspective student learns from the two very different teachers. Case 1: if ID-bias > OOD-bias, then ID-teacher < OOD-teacher. ID inductive bias dominates the learning, and the student should listen more to OOD-teacher. This case occurs when ID-teacher has a low training loss while OOD-teacher has a high one. As shown in Figure 2 (a), it is hard for QA models to conclude whether the oven is electric or not without additional context. Due to the inductive bias in the training data, i.e., most questions starting with “is” are answered by “yes”, ID-teacher concludes with over-confidence while OOD-teacher does not. Case 2: if ID-bias < OOD-bias, then ID-teacher > OOD-teacher. OOD inductive bias dominates the learning, and the student should listen more to ID-teacher. This case occurs when ID-teacher has a high training loss while OOD-teacher has a low one. As shown in Figure 2 (c), there are at least two older men, one in a blue shirt selling fruits and one in a white shirt walking in the crowd. Therefore, both “blue” and “white” should be correct. However, as most training questions starting with “what color” are labeled by “white” answer, the bias of “OOD should be different from ID” enforces OOD-teacher to downplay “white” unfairly while ID-teacher does not. Case 3: if ID ≈ OOD, then ID-teacher ≈ OOD-teacher. Learning is fair and the student should listen to both teachers equally. This case occurs when the training losses of the two are close. As shown in Figure 2 (b), the ID-teacher and OOD-teacher produce similar predictions. The above introspection can be represented as a blended knowledge of the two teachers, which is distilled to the student model [18]. Yet, an unsolved challenge is how to obtain the “oracle” teachers, especially the OOD-teacher, because the OOD distribution is unseen in training, not mentioning to train a teacher model. Thanks to the recent causality-based approach [25], we can approximate the OOD-teacher using a causal model that imagines the unseen world by counterfactual reasoning. Without loss of generality, we take visual QA and extractive QA as case studies. Experiments on VQA-CP [2], VQA v2 [16], and SQuAD [27] validate the effectiveness of our proposed IntroD. Interestingly, extensive ablations demonstrate that the success of IntroD is indeed from the causal introspection but not from the simple ensemble. 2 Related Work Visual Question Answering (VQA) [5, 3, 16] is to answer the question given a visual context, i.e., image. Traditional VQA models are found to exploit the language priors in the training data [16, 2, 20]. For example, in the first version of the VQA dataset VQA v1.0, about 40% of the sports- related questions are answered as “tennis”. Although utilizing the shortcut bias helps with the in-distribution (ID) performance, the out-of-distribution (OOD) one is severely hurt [2]. 
In order to mitigate the language bias, recent methods proposed to utilize extra annotations for accurate visual grounding [28, 33], generate synthetic data for data augmentation [7, 1, 14, 30, 31], modifying language modules [19, 23], or explicitly formulate and exclude the language prior [6, 11, 25]. These methods obtain significant OOD improvement on the VQA-CP [2] dataset whose answer distributions in training and testing are reversed. However, the OOD improvement is achieved with the cost of a severe ID performance drop. Therefore, it is still a challenge to achieve strong performances in both ID and OOD evaluations. Extractive Question Answering (extractive QA) is to answer the question given a natural language context, i.e., passage [27]. Extractive QA assumes that the answer always locates in the passage, and further reduces the generative QA task to a classification task, i.e., position prediction. Recent years have witness many influential works [35, 29, 10, 39, 12, 38, 9]. However, directly predicting the answer positions has a severe side effect, i.e., correlating answers with positions [22]. For example, if a language model is trained on a biased dataset where answers always locate in the first sentence of the passage, the model will tend to ground the answer in the first sentence. Recently, a new variant of the reading comprehensive dataset SQuAD [27] is proposed to evaluate whether language models are robust to the position bias [22]. Similar to VQA, the answer position distribution is skewed in the training set. In this paper, we follow Ko et al. [22] to evaluate the robustness for extractive QA. Ensemble-based methods for debiasing explicitly formulate and exclude the shortcut bias in the training data [6, 11, 7, 25, 8]. The shortcut bias can be captured by a separate branch [6] or statistical priors [11]. These methods are further interpreted as causality-based approaches [25]. However, most of these methods achieve promising performance under the out-of-distribution (OOD) evaluation but sacrifice the performance under the in-distribution (ID) evaluation. The reason is that these methods hold an assumption that the training and test distribution are very different or even reversed. In this paper, we implement our ID-teacher and OOD-teacher using the causality-based methods, and further achieve a good trade-off between ID and OOD evaluations. Previous OOD-teachers, i.e., causality-based methods, only generate the OOD-prediction for debiased inference and ignore the role of ID-prediction. We further point out that the ID-prediction is crucial in introspecting the training process and achieving a good trade-off between ID performance and OOD performance. Knowledge Distillation is first proposed for model compression by transfering the teacher’s knowledge to a small student model [18, 15]. The idea of knowledge distillation has been further extended to establish debiasing models in natural language understanding (NLU) tasks [32, 13] and long-tail classification [34, 42, 17]. The idea of “introspection” is related to “self distillation”, which considers a student model itself as the teacher for the next training epoch or stage [24, 41, 21, 36, 40]. Although our introspection and self distillation both share the similar idea of “self-teaching”, they are fundamentally different: the latter is still in-distribution and has no comparative reasoning about the seen factual and unseen counterfactual. 
This difference reveals the key reason why introspection introduces new blended knowledge rather than just an old copy. Also, different from traditional knowledge distillation methods that use a fixed weight as hyper-parameter, our IntroD weights the models based on the introspective weights, which does not require a careful selection of hyper-parameters. 3 Introspective Distillation We present a simple yet effective training paradigm, Introspective Distillation (IntroD), to achieve a good trade-off between the in-distribution (ID) and out-of-distribution (OOD) performances for robust QA. Given a visual or natural language context C=c and a question Q=q as input, the QA model generates an answer A=a. Generally, the model is usually not prototyped as a generation but a multi-classification for prediction space reduction, i.e., a ∈ A. For VQA [5], the context refers to an image, and the answers are selected from a pre-defined candidate set. For extractive QA [27], the context refers to a passage, and the answers are locations in it. Our IntroD aims to blend the ID and OOD inductive bias fairly. As illustrated in Figure 3, it consists of three key parts: 1) causal teacher for capturing the ID and OOD inductive bias, 2) introspection for blending the two different inductive biases, and 3) distillation for a robust student model. 3.1 ID-Teacher and OOD-Teacher We expect ID-teacher and OOD-teacher to delineate the ID and OOD worlds, respectively. However, without access to the OOD distribution, it is difficult to obtain the “oracle” OOD-teacher. Thanks to the recently proposed causality-based method [25], OOD-teacher can be approximated by counterfactual reasoning. Also, ID-teacher can be approximated using the same causal model by factual reasoning. We briefly introduce the key concepts of the causal method below, and encourage readers to refer to Niu et al. [25] for more details. The causal QA models formulate the causal relations between the input {Q,C} and the output A. The ID inductive bias is formulated as the direct effect of inputs on the output, e.g., the language prior in VQA as Q→A and the position bias in extractive QA as C→A. Compared to traditional QA models that can only conduct factual reasoning to formulate the seen ID world, the causal QA models can also imagine the unseen OOD world by counterfactual reasoning. Therefore, we can implement ID-teacher and OOD-teacher using the same causal model. By factual reasoning, the causal QA model predicts the answers as P ID that include the ID inductive bias into total causal effect. By counterfactual reasoning, the causal QA model explicitly estimates the direct causal effect to exclude the inductive bias, and generate the counterfactual predictions P OOD, i.e., total indirect effect [25] or natural indirect effect [6, 11], that reflect the unseen OOD world. The training of ID and OOD teachers strictly follows their corresponding methods. The teacher model is trained with standard cross-entropy loss on the ID data, and we do not separately train the ID and OOD teachers. 3.2 Introspection of Inductive Bias Introspection first examines whether the model over-exploits the inductive bias in either ID or OOD world, and then blends the ID and OOD inductive bias fairly. If the ID inductive bias in one world dominates the learning, we expect the student model to learn more from the other world for debiasing. This raises two questions, how to define “dominate” and “more”. In other words, how to introspect and weight the inductive bias. 
Introspecting the bias. We introspect the effect of inductive bias by comparing the predictions of ID-teacher and OOD-teacher. If the inductive bias dominates the learning of a sample, ID-teacher’s confidence (i.e., predicted probability) on the ground-truth answers would be much larger than that of OOD-teacher. We denote the confidence as: sID = ∑ a∈AGT P ID(a), sOOD = ∑ a∈AGT P OOD(a), (1) where AGT denotes the set of ground-truth answers1. These scores reflect how well the training sample is matched with the inductive bias. The introspection is realized by comparing sID and sOOD. 1The number of answers can be one for single-label classification or multiple for multi-label classification. If sID>sOOD, we think the sample’s learning is dominated by the ID inductive bias (see Figure 2 (a)), and vice versa (see Figure 2 (c)). Note that the cross entropy between the ground-truth answers and predictions, XE, is inversely proportional to the confidence. Therefore, we can also use the standard cross-entropy loss to denote the matching scores sID and sOOD: sID = 1 XE(P GT,P ID) = 1∑ a∈A −P GT(a) logP ID(a) , sOOD = 1 XE(P GT,P OOD) = 1∑ a∈A −P GT(a) logP OOD(a) , (2) where P GT denotes the ground-truth labels. We empirically found that the cross-entropy loss achieves more stable improvements compared to the confidence in the implementation (see Table 3). Weighting the bias. We blend the ID and OOD knowledge by a weighted sum of their knowledge. The purpose of knowledge blending is to mix the ID and OOD inductive bias fairly. If the learning is biased to one world, the model may suffer from over-exploiting the corresponding inductive bias. As illustrated in Figure 2 (a), it is difficult to judge whether the oven is electric or not without external knowledge. However, ID-teacher is over-confident in its prediction due to the over-exploitation of the training answer distribution, i.e., sID>sOOD. In this case, the model should learn less from ID-teacher. We realize this by increasing the weight of OOD-knowledge wOOD and decreasing the weight of ID-knowledge wID, i.e., wID <wOOD. Similarly, for the training samples that is overconfident by OOD-teacher (see Figure 2 (c)), i.e., sID<sOOD, we set wID>wOOD. We determine the knowledge weights by setting the weights inversely proportional to the matching scores, i.e., w ∝ s−1. The weights are normalized by scaling it between 0 and 1: wID = (sID)−1 (sID)−1 + (sOOD)−1 = sOOD sID + sOOD , wOOD = 1− wID = s ID sID + sOOD . (3) We take VQA as an example to show how the distribution of knowledge weights reflect the effect of inductive bias, i.e., language prior. Recall that VQA v2 [16] is proposed to balance the answer distribution to remove the language bias, while VQA-CP v2 [2] is proposed to evaluate whether VQA models memorize the language priors. As a result, the VQA v2 train split contains little language bias, while the bias in VQA-CP v2 is artificially severe. Figure 4 illustrates the distribution of wID on the two training sets using CF-VQA [25] as the causal teacher. It can be clearly observed that the distributions of wID are totally different, which exactly reflects how the data bias affects the training process. Note that a small wID indicates a high ID-bias. Here are three interesting observations: • The wID of most samples is around 0.5 for both of the datasets. This indicates that most of the samples are learned unbiasedly and predicted fairly (e.g., Figure 2 (b)). • Both of the distributions are left-skewed. 
In particular, only 4% of the samples have wID that is larger than 0.6, while the ratio for wID < 0.4 is 40% on VQA-CP v2 and 25% on VQA v2. The reason is that ID-teacher is directly optimized on the ID data, while OOD-teacher is indirectly approximated. Therefore, ID-teacher outperforms OOD-teacher on the seen ID data in most cases, i.e., wID < 0.5. • A spike lies at the left side of the VQA-CP v2 distribution. In particular, 9.6% of the samples have wID that is lower than 0.05, while the ratio is only 0.4% on VQA v2. Also, the difference between the percentages becomes larger with a decreasing wID and wID < 0.5. This observation indicates that VQA models tend to exploit the training bias on the imbalanced VQA-CP v2 dataset while not on the balanced one. Recall that the VQA-CP training set is artificially modified to “encourage” the models to learn from the language prior. Without the memorized priors, VQA models cannot answer the questions confidently or correctly in a few extreme cases (e.g., Figure 2 (a)). We also define a stochastic hard variant to weigh the bias: wID = { 1 , if sID ≤ sOOD, 0 , otherwise. (4) The hard weighting forces the student to entirely learn from the OOD teacher for most of the training samples to maintain its OOD performance. In practice, one may choose soft or hard variants based on the trade-off between ID and OOD performances. We empirically use the soft variant for strong OOD-teachers and the hard variant for weak ones that achieve relatively lower OOD performance. Based on the knowledge weights, the ID-knowledge and OOD-knowledge are blended as: P T = wID · ID-Knowledge + wOOD · OOD-Knowledge. (5) Considering that the ID ground-truth labels P GT are more accurate than the ID-predictions P ID, we use P GT as the “oracle” ID-Knowledge. Since the OOD distribution is unobserved in training, it is impossible to obtain the oracle OOD-Knowledge. Thanks to the causal teacher, we can use the OOD-prediction P OOD to approximate the OOD-knowledge. 3.3 Distillation of Fair Knowledge After obtaining the blended fair knowledge from the causal teacher, we train a student model using a knowledge distillation manner [18]: L = KL(P T,P S) = ∑ a∈A P T(a) log P T(a) P S(a) , (6) where P S denotes the output of the student model. The difference between the teacher model and the student model is their architectures. The student model is simply the baseline model, e.g., UpDn [4] for VQA and BERT [12] for extractive QA. Besides the baseline model, the teacher model ensembles a separate branch to formulate the shortcut bias, e.g., Q→A for VQA and C→A for extractive QA. Therefore, the student is more efficient in both parameters and inference speed compared to the causal teacher model. We fix the causal teacher and only update the student model during distillation. 4 Experiments We take visual QA and extractive QA, two representative QA tasks, as examples to evaluate our proposed Introspective Distillation (IntroD)2. 4.1 Visual QA Dataset. We conducted experiments on the benchmark datasets VQA v2 [16] and VQA-CP v2 [2]. VQA v2 is a balanced VQA dataset that significantly reduces the language bias. For each question in the dataset, VQA v2 has two different answers for two different images. VQA-CP v2 is a variant of VQA v2 to evaluate whether the model answers the questions by simply memorizing the language priors. VQA-CP v2 reverses the priors in the training and validation splits. 
For example, most of “what sports” questions are answered as “tennis” in the training set while “baseball” in the test set. Metric and setting. The standard evaluation metric for VQA is accuracy. In order to evaluate the robustness of VQA methods, we conducted experiments on two settings: in-distribution (ID) setting and out-of-distribution (OOD) setting. For the ID setting, we reported the results on VQA v2 val set. For the OOD setting, we report the results on VQA-CP v2 test set. For the VQA-CP dataset, we 2Code are available at https://github.com/yuleiniu/introd. also followed Teney et al. [31] and held out 8k samples from the training set as the val set for ID evaluation. We further reported the harmonic mean (HM) of the accuracies on VQA-CP v2 test and VQA v2 val set. We use this metric to evaluate the trade-off between ID and OOD evaluations. Methods. According to the causal explanation [25], we implemented the counterfactual teacher as RUBi [6], LMH [11], CSS [7] and CF-VQA [25]. In particular, the earlier works RUBi and LMH used natural indirect effect (NIE) [26] for inference. CSS is a variant of LMH that generates counterfactual training samples for data augmentation. CF-VQA proposed to use total indirect effect (TIE) [26] for debiasing, and improved RUBi by replacing NIE with TIE. We denote this variant as RUBi-CF. Following previous works, we used UpDn [4] and S-MRL [6] as the backbone. Based on the debiasing ability, we used the soft variant of weights for LMH, CSS, RUBi-CF and CF-VQA, and the hard variant for RUBi (see Table 5). More training details are in the appendix. Overall results. Table 1 and 2 show how our proposed IntroD strengthens the existing causal models. First, according to the HM metric, IntroD improves the trade-off ability of all the causal teachers. In particular, CSS+IntroD achieves an accuracy of over 60% under both ID and OOD settings, which is the only among all the combinations. Second, with a deep look at the OOD evaluation, IntroD shows its competitive debiasing ability. Surprisingly, IntroD even slightly increases the OOD performance of causal teachers except for LMH. Third, with a deep look at the ID evaluation, IntroD outperforms RUBi by 0.7% and other teachers by over 2.4%. The biggest winners are LMH and CSS which suffer from a significant drop in the ID performance. Their increases in ID performance are over 5.5%. Similar conclusions can be obtained based on Table 2. Furthermore, IntroD with CF-VQA obtains higher ID performance (63.40%) than the baseline S-MRL (63.12%), which achieves the best of both ID and OOD worlds. These results demonstrate the effectiveness of our proposed IntroD on top of different causal VQA models. Also, the results indicate that the OOD approximation has an impact on the OOD performance of students. Overall, the OOD performance of the student is proportional to that of the teacher, while there is no clue whether the student’s ID performance is correlated to that of the OOD-teacher. As shown in Table 1, CSS+IntroD with the best OOD teacher CSS (58.95%) achieves the highest accuracy (60.17%) compared to other students on VQA-CP v2 test set. Also, IntroD increases the OOD performance of CSS by 1.22%, while the improvement over CF-VQA is much slighter (0.12%). The student achieves even decreased accuracy over the comparatively weakest LMH (-0.70%). Ablation studies. We further conducted ablation studies to evaluate the introspection and distillation strategy. 
We compared the alternatives with ID-teacher and OOD-teacher, i.e., factual and counterfactual predictions of the same causal model. The ablations aimed to answer the following questions. Note that Q1 is for “introspecting the bias”, Q2-Q5 are for “weighing the bias”, and Q6 and Q7 are for “distillation of fair knowledge” in Section 3. Q1: Can we use the predicted probability of the ground-truth answer (“Prob.” for short) as the matching scores? Better not. As shown in Table 3, although using “Prob.” achieves even better ID performance than ID-teacher, the OOD-performance drops by ∼7% compared to LMH and 4.5% compared to CSS. As a result, the trade-off metric HM decreases with LMH, and increases marginally (<1%) with CF-VQA and CSS. Q2: Can the student learn more from the more accurate teacher, i.e., setting w∝ s? No. This is a natural question because we hope to learn the best from the best. Unfortunately, this alternative (“Weight Avg.” for short) enhances the inductive bias rather than reduces it. As shown in Table 4, the alternative “Weight Avg.” achieves the best ID performance on top of different causal teachers, even beat ID-teacher. However, the students fail to learn the debiasing ability from OOD-teachers and achieves much lower OOD performance compared to OOD-teachers. This observation verifies that the “best” here should be the debiasing ability to the inductive bias rather than the fitting ability. Q3: Can the student equally learn from ID and OOD teachers, i.e., setting wID=wOOD=0.5? No. This alternative can be regarded as a simple average ensemble (“Simple Avg.” for short) of ID and OOD teachers. As shown in Table 4, similar to Q2, the students outperform ID-teachers on the ID evaluation with the sacrifice of OOD-performance compared to OOD-teachers. Besides, there is a large gap between “Simple Avg.” and our IntroD with difference causal models, e.g., >2% for LMH and CF-VQA, and ∼5% for CSS. This observation indicates that our IntroD is not just a simple ensemble method that combines two teacher models into a bigger one. Q4: Can the student only learn from OOD-teacher? Yes, but worse than IntroD. This alternative can be called counterfactual distillation (“CFD” for short) as the student model only learns from the counterfactual teacher. As shown in Table 4, CFD also achieves a better trade-off on top of different causal teachers, especially promote all of the OOD performance compared to OOD-teacher. However, there is a large gap between IntroD’s and CFD’s ID performances because the ID-knowledge is not utilized. As a result, for the HM metric, IntroD outperforms CFD by a small margin (<0.4%) on LMH and CF-VQA and a large margin (> 2%) on CSS. Q5: Should we use the hard or soft variant to calculate the knowledge weights? It depends on the debiasing ability of the causal teacher. There are some interesting observations from Table 5. First, the OOD performance is proportional to OOD-teachers’ debiasing ability. Second, the hard variants marginally improve OOD-teacher’s OOD performances in all cases. Third, the hard variants cannot fully overcome the sacrifice of degrading ID performance compared to the ID teacher. Empirically, we use the hard variant for the weaker OOD-teacher, e.g., RUBi, and the soft variant for the stronger OOD-teachers, e.g., LMH, CF-VQA, and CSS. Q6: Can we use the ID-Prediction P ID as the ID-Knowledge? No. As shown in Table 6, using P ID as the ID-Knowledge significantly degrades the OOD performance for LMH and CF-VQA. 
This observation indicates that it is better to use the oracle knowledge if available. Q7: Can we ensemble the two teacher models and directly use that without distillation? In other words, is IntroD just an ensemble method? No. Recall that our goal is to achieve the best of both ID and OOD worlds, i.e., a high OOD performance with less or no sacrifice of ID performance. However, the naive ensemble strategy simply combines two models’ predictions using a fixed weight without figuring out whether a sample comes from ID or OOD distribution. As a result, the ensemble method only inherits the disadvantages of the two teacher models rather than their advantages. Empirical results in Table 7 and 8 further verify our analysis. Here we report the results of ensembling two teachers with different wID, the weight of ID teacher. In particular, wID=0 denotes the OOD teacher and wID=1 denotes the ID teacher. We can see that (1) with wID increasing, the ID performance keeps improving, but the OOD performance is gradually decreasing, (2) all of the ensemble alternatives achieve a lower HM compared to the OOD teacher. These results indicate that (1) a simple ensemble of the two teacher models fails to achieve a good trade-off between ID and OOD performances, (2) our IntroD is not simply an ensemble method. 4.2 Extractive QA Dataset and settings. We conducted experiments on the reading comprehension benchmark dataset SQuAD [27]. SQuAD requires QA models to extract the answer from a passage. Recently, a new setting[22] was proposed to evaluate whether the extractive QA models suffer from the position bias. This setting divided a subset from the training set SQuADtrain based on the position of answers. For example, SQuADk=1train denotes the subset where all answers are in the first sentences. The test set is divided into two subsets: SQuADk=1dev for ID evaluation and SQuAD k ̸=1 dev for OOD evaluation. Metrics and method. The standard evaluation metrics are exact match (EM) and F1 score [27]. Following Ko et al. [22], we used XLNet [38] and BERT [12] as the backbone models, and LM [11] as the causal teacher. We empirically used the hard variant for the knowledge weights calculation. Results. Table 9 shows the main analysis with SQuADk=1train as the biased training set. The results are reproduced based on the released code3. Overall, LM increases the OOD performance by a large margin but slightly sacrifices the ID performance. As a comparison, our IntroD achieves the best of both ID and OOD performances. Table 10 further shows that IntroD can promote LM with different answer position bias and different numbers of training samples. In particular, when trained on the less biased training subset SQuADk≤5train where the answers locate in sentences except the first four, LM achieves less improvement on the overall performance, while IntroD stably promotes LM. Furthermore, using the origin training set SQuADtrain for unbiased training, LM slightly degrades the performance, while IntroD can still beat the baseline models. This observation indicates that IntroD does not over-correct the inductive bias. 5 Conclusion In this paper, we proposed a novel training paradigm, Introspective Distillation (IntroD), to achieve a fair trade-off between in-distribution (ID) and out-of-distribution (OOD) evaluations for question answering tasks, e.g., visual QA and extractive QA. 
IntroD uses a causal teacher to estimate the ID and OOD inductive biases, introspects whether one of them dominates the learning, blends the two fairly, and distills the blended knowledge to the student model. Experiments on VQA v2, VQA-CP v2, and SQuAD demonstrated that our IntroD is able to achieve the best of both ID and OOD worlds. The main limitation of our IntroD is that its OOD performance heavily relies on the OOD-teacher. In the future, we will explore how to establish a stronger OOD-teacher.
Acknowledgement
We thank the anonymous ACs and reviewers for their valuable discussion and insightful suggestions. This work was supported in part by NTU-Alibaba JRI and an MOE AcRF Tier 2 grant.
1. What is the main contribution of the paper, and how does it improve upon previous works? 2. What are the strengths of the proposed method, particularly in terms of its effectiveness and simplicity? 3. What are the weaknesses of the paper, especially regarding the lack of clarity in certain sections and the concerns about the OOD/ID testing setup? 4. How does the reviewer assess the novelty and significance of the paper's content? 5. Are there any questions or concerns that the reviewer has regarding the paper's methodology, results, or conclusions?
Summary Of The Paper Review
Summary Of The Paper
The paper claims to provide a better balance of in-domain and out-of-distribution settings. Normally, algorithms optimized for one hurt the other. The paper proposes to use a weighting mechanism to balance a model's reliance on the in-domain world (modeled as the causal factual world) and the out-of-distribution world (modeled as the causal counterfactual world). Experiments on VQA-CP and SQuAD datasets show that the proposed method does indeed help improve the ID performance of several algorithms optimized for OOD accuracy while maintaining/slightly improving their OOD performance.
Review
Strengths
[S1] Interesting and well-motivated method: The proposed weighting scheme, dividing the training samples into those belonging to the "ID world" and the "OOD world", is well-motivated and interesting. Despite its simplicity, it is quite effective (S2).
[S2] The paper boasts really good results: The paper consistently improves the ID performance of several OOD-oriented algorithms without sacrificing their "debiasing" abilities. This is, furthermore, achieved with a well-motivated approach, which is more than can be said for several recent works.
[S3] Clearly written except for Section 3.1: The paper, generally, raises, addresses, and answers all of the most relevant questions adequately and clearly. It is easy to understand the motivations, implementations, and results of those choices. The ablation experiments also show a clear picture of the different model choices. The only caveat is the details about the ID and OOD teachers, which are sorely lacking (even with the supplemental materials). More on this below.
Weaknesses
[W1] Very unclear about how the ID and OOD teachers are trained/modeled (aka Section 3.1 + supplementary): I am assuming the paper simply followed the implementation of [23]. If that is the case, then there are no problems and I am familiar with [23]. However, it is super unclear what is happening with regards to the ID and OOD teachers, even after reading the supplemental. E.g., the notations used for TIE and NIE are not consistent with the paper -- e.g., compare the paper's presentation with Figure 3 from [23], which shows that counterfactual VQA used only Q to predict the answer, whereas the current paper shows links from inputs X, which contains both V and Q, to answer Y, which is not correct. Please clarify the exact formulation of the ID and OOD teachers in the revised version.
[W2] Some concerns about the OOD/ID testing setup:
W2.1 The use of soft/hard weighting for strong vs. weak teachers is not very well motivated. Why is RUBi (or, by extension, NIE methods) a weak teacher? Why does using hard weighting help "weak models"?
W2.2 Are ID and OOD performance measured for the same model/data choices? As mentioned in [30], retraining before reporting ID performance can inflate the results on ID evaluation and effectively create two different versions of the model. Upon reading the paper, I don't think that this has been addressed.
Overall: Overall, I think this is a good paper and I currently recommend a weak acceptance. However, it could be slightly higher if the clarity issues were resolved.
NIPS
Title Introspective Distillation for Robust Question Answering
Abstract Question answering (QA) models are well-known to exploit data bias, e.g., the language prior in visual QA and the position bias in reading comprehension. Recent debiasing methods achieve good out-of-distribution (OOD) generalizability with a considerable sacrifice of the in-distribution (ID) performance. Therefore, they are only applicable in domains where the test distribution is known in advance. In this paper, we present a novel debiasing method called Introspective Distillation (IntroD) to make the best of both worlds for QA. Our key technical contribution is to blend the inductive bias of OOD and ID by introspecting whether a training sample fits in the factual ID world or the counterfactual OOD one. Experiments on the visual QA datasets VQA v2 and VQA-CP, and the reading comprehension dataset SQuAD, demonstrate that our proposed IntroD maintains competitive OOD performance compared to other debiasing methods, while sacrificing little or even achieving better ID performance compared to the non-debiasing ones.
1 Introduction
Figure 1: Recent debiasing methods achieve high OOD accuracy with the sacrifice of ID accuracy. Our proposed IntroD makes the best of both worlds. (Plot of ID accuracy on VQA v2 val against OOD accuracy on VQA-CP v2 test for the baselines UpDn [4] and S-MRL [6], the debiasing methods LMH [11], CF-VQA [25], and CSS [7], and ours: LMH/CF-VQA/CSS + IntroD.)
Question answering (QA), which requires machines to answer questions given a context, is one of the most fundamental AI tasks. Popular contexts are vision (e.g., an image for VQA [5]) and natural language (e.g., a passage for extractive QA [27]). A common observation is that QA models prefer to over-exploit the training bias, which bypasses context comprehension for a shortcut answer. For example, by only using the linguistic correlations between questions and answers, VQA models can answer most questions correctly [16, 2, 5, 20]. Similarly, extractive QA models may use spurious positional cues to locate the answer in the passage [22]. As a result, QA models that have already achieved strong in-distribution (ID) performance may inevitably fail in out-of-distribution (OOD) test scenarios, regardless of the scale of training data and models [14, 22, 37]. Recently, several debiasing methods have aimed to close the gap between the ID and OOD performances [6, 11, 7, 25]. However, many of them hold the assumption that the training and test distributions are very different or even reversed, e.g., if there are more "yes" answers in training, there must be more "no" answers in testing. As a result, these methods encounter a severe performance drop under the ID evaluation, although they significantly outperform non-debiasing baselines in terms of OOD performance. An interesting observation from Figure 1 is that non-debiasing methods (circles) obtain high ID but low OOD performance, while debiasing methods (squares) achieve high OOD but low ID performance. This observation motivates us to ask: can we make the best of both worlds?
(Figure 2: three example panels, each showing a question type and its answer distribution.)
In this paper, we take a step toward building robust QA models that achieve strong performance in both ID and OOD evaluations. We point out that if the model over-exploits the bias in one world, the performance in the other one will be significantly degraded.
Therefore, the "best of both" model should be fair with respect to the inductive bias of either world. To this end, we present a simple yet effective training paradigm—Introspective Distillation (IntroD)—to blend the inductive biases of both worlds fairly. Suppose that we have two expert teacher models: ID-teacher and OOD-teacher, each of which captures the ID or OOD inductive bias and represents the corresponding world. Figure 2 illustrates three cases of how an introspective student learns from the two very different teachers.
Case 1: if ID-bias > OOD-bias, then ID-teacher < OOD-teacher. The ID inductive bias dominates the learning, and the student should listen more to OOD-teacher. This case occurs when ID-teacher has a low training loss while OOD-teacher has a high one. As shown in Figure 2 (a), it is hard for QA models to conclude whether the oven is electric or not without additional context. Due to the inductive bias in the training data, i.e., most questions starting with "is" are answered by "yes", ID-teacher concludes with over-confidence while OOD-teacher does not.
Case 2: if ID-bias < OOD-bias, then ID-teacher > OOD-teacher. The OOD inductive bias dominates the learning, and the student should listen more to ID-teacher. This case occurs when ID-teacher has a high training loss while OOD-teacher has a low one. As shown in Figure 2 (c), there are at least two older men, one in a blue shirt selling fruits and one in a white shirt walking in the crowd. Therefore, both "blue" and "white" should be correct. However, as most training questions starting with "what color" are labeled with the answer "white", the bias that "OOD should be different from ID" forces OOD-teacher to downplay "white" unfairly, while ID-teacher does not.
Case 3: if ID ≈ OOD, then ID-teacher ≈ OOD-teacher. The learning is fair, and the student should listen to both teachers equally. This case occurs when the training losses of the two are close. As shown in Figure 2 (b), ID-teacher and OOD-teacher produce similar predictions.
The above introspection can be represented as blended knowledge of the two teachers, which is distilled to the student model [18]. Yet, an unsolved challenge is how to obtain the "oracle" teachers, especially OOD-teacher, because the OOD distribution is unseen in training, let alone used to train a teacher model. Thanks to the recent causality-based approach [25], we can approximate OOD-teacher using a causal model that imagines the unseen world by counterfactual reasoning. Without loss of generality, we take visual QA and extractive QA as case studies. Experiments on VQA-CP [2], VQA v2 [16], and SQuAD [27] validate the effectiveness of our proposed IntroD. Interestingly, extensive ablations demonstrate that the success of IntroD indeed comes from the causal introspection and not from simple ensembling.
2 Related Work
Visual Question Answering (VQA) [5, 3, 16] is to answer a question given a visual context, i.e., an image. Traditional VQA models are found to exploit the language priors in the training data [16, 2, 20]. For example, in the first version of the VQA dataset, VQA v1.0, about 40% of the sports-related questions are answered by "tennis". Although utilizing the shortcut bias helps with the in-distribution (ID) performance, the out-of-distribution (OOD) performance is severely hurt [2].
In order to mitigate the language bias, recent methods have proposed to utilize extra annotations for accurate visual grounding [28, 33], generate synthetic data for data augmentation [7, 1, 14, 30, 31], modify language modules [19, 23], or explicitly formulate and exclude the language prior [6, 11, 25]. These methods obtain significant OOD improvements on the VQA-CP [2] dataset, whose answer distributions in training and testing are reversed. However, the OOD improvement comes at the cost of a severe ID performance drop. Therefore, it is still a challenge to achieve strong performance in both ID and OOD evaluations.
Extractive Question Answering (extractive QA) is to answer a question given a natural language context, i.e., a passage [27]. Extractive QA assumes that the answer always lies in the passage, and further reduces the generative QA task to a classification task, i.e., position prediction. Recent years have witnessed many influential works [35, 29, 10, 39, 12, 38, 9]. However, directly predicting the answer positions has a severe side effect, i.e., correlating answers with positions [22]. For example, if a language model is trained on a biased dataset where answers always lie in the first sentence of the passage, the model will tend to ground the answer in the first sentence. Recently, a new variant of the reading comprehension dataset SQuAD [27] was proposed to evaluate whether language models are robust to the position bias [22]. Similar to VQA, the answer position distribution is skewed in the training set. In this paper, we follow Ko et al. [22] to evaluate the robustness of extractive QA.
Ensemble-based methods for debiasing explicitly formulate and exclude the shortcut bias in the training data [6, 11, 7, 25, 8]. The shortcut bias can be captured by a separate branch [6] or by statistical priors [11]. These methods were further interpreted as causality-based approaches [25]. However, most of these methods achieve promising performance under the out-of-distribution (OOD) evaluation but sacrifice the performance under the in-distribution (ID) evaluation. The reason is that these methods hold the assumption that the training and test distributions are very different or even reversed. In this paper, we implement our ID-teacher and OOD-teacher using the causality-based methods, and further achieve a good trade-off between the ID and OOD evaluations. Previous OOD-teachers, i.e., causality-based methods, only generate the OOD-prediction for debiased inference and ignore the role of the ID-prediction. We further point out that the ID-prediction is crucial in introspecting the training process and achieving a good trade-off between ID and OOD performance.
Knowledge Distillation was first proposed for model compression by transferring the teacher's knowledge to a small student model [18, 15]. The idea of knowledge distillation has been further extended to establish debiasing models in natural language understanding (NLU) tasks [32, 13] and long-tailed classification [34, 42, 17]. The idea of "introspection" is related to "self-distillation", which considers a student model itself as the teacher for the next training epoch or stage [24, 41, 21, 36, 40]. Although our introspection and self-distillation share the similar idea of "self-teaching", they are fundamentally different: the latter is still in-distribution and has no comparative reasoning about the seen factual and unseen counterfactual worlds.
This difference reveals the key reason why introspection introduces new blended knowledge rather than just an old copy. Also, different from traditional knowledge distillation methods that use a fixed weight as a hyper-parameter, our IntroD weighs the teachers using the introspective weights, which does not require a careful selection of hyper-parameters.
3 Introspective Distillation
We present a simple yet effective training paradigm, Introspective Distillation (IntroD), to achieve a good trade-off between the in-distribution (ID) and out-of-distribution (OOD) performances for robust QA. Given a visual or natural language context C=c and a question Q=q as input, the QA model generates an answer A=a. In general, the model is formulated not as a generation task but as a multi-class classification for prediction-space reduction, i.e., a ∈ A. For VQA [5], the context refers to an image, and the answers are selected from a pre-defined candidate set. For extractive QA [27], the context refers to a passage, and the answers are locations in it. Our IntroD aims to blend the ID and OOD inductive biases fairly. As illustrated in Figure 3, it consists of three key parts: 1) a causal teacher for capturing the ID and OOD inductive biases, 2) introspection for blending the two different inductive biases, and 3) distillation for a robust student model.
3.1 ID-Teacher and OOD-Teacher
We expect ID-teacher and OOD-teacher to delineate the ID and OOD worlds, respectively. However, without access to the OOD distribution, it is difficult to obtain the "oracle" OOD-teacher. Thanks to the recently proposed causality-based method [25], OOD-teacher can be approximated by counterfactual reasoning. Likewise, ID-teacher can be approximated using the same causal model by factual reasoning. We briefly introduce the key concepts of the causal method below, and encourage readers to refer to Niu et al. [25] for more details. The causal QA models formulate the causal relations between the input {Q,C} and the output A. The ID inductive bias is formulated as the direct effect of the inputs on the output, e.g., the language prior in VQA as Q→A and the position bias in extractive QA as C→A. Compared to traditional QA models that can only conduct factual reasoning to formulate the seen ID world, the causal QA models can also imagine the unseen OOD world by counterfactual reasoning. Therefore, we can implement ID-teacher and OOD-teacher using the same causal model. By factual reasoning, the causal QA model predicts the answers as P^ID, which includes the ID inductive bias in the total causal effect. By counterfactual reasoning, the causal QA model explicitly estimates the direct causal effect to exclude the inductive bias, and generates the counterfactual predictions P^OOD, i.e., the total indirect effect [25] or natural indirect effect [6, 11], that reflect the unseen OOD world. The training of the ID and OOD teachers strictly follows their corresponding methods. The teacher model is trained with the standard cross-entropy loss on the ID data, and we do not separately train the ID and OOD teachers.
3.2 Introspection of Inductive Bias
Introspection first examines whether the model over-exploits the inductive bias of either the ID or the OOD world, and then blends the ID and OOD inductive biases fairly. If the inductive bias of one world dominates the learning, we expect the student model to learn more from the other world for debiasing. This raises two questions: how to define "dominate" and how to define "more"? In other words, how to introspect and how to weight the inductive bias.
Introspecting the bias. We introspect the effect of the inductive bias by comparing the predictions of ID-teacher and OOD-teacher. If the inductive bias dominates the learning of a sample, ID-teacher's confidence (i.e., predicted probability) on the ground-truth answers will be much larger than that of OOD-teacher. We denote the confidence as:
s^{ID} = \sum_{a \in \mathcal{A}_{GT}} P^{ID}(a), \quad s^{OOD} = \sum_{a \in \mathcal{A}_{GT}} P^{OOD}(a), \quad (1)
where \mathcal{A}_{GT} denotes the set of ground-truth answers¹. These scores reflect how well the training sample is matched with the inductive bias. The introspection is realized by comparing s^ID and s^OOD. If s^ID > s^OOD, we think the sample's learning is dominated by the ID inductive bias (see Figure 2 (a)), and vice versa (see Figure 2 (c)). Note that the cross entropy between the ground-truth answers and the predictions, XE, is inversely proportional to the confidence. Therefore, we can also use the standard cross-entropy loss to define the matching scores s^ID and s^OOD:
s^{ID} = \frac{1}{XE(P^{GT}, P^{ID})} = \frac{1}{\sum_{a \in \mathcal{A}} -P^{GT}(a) \log P^{ID}(a)}, \quad s^{OOD} = \frac{1}{XE(P^{GT}, P^{OOD})} = \frac{1}{\sum_{a \in \mathcal{A}} -P^{GT}(a) \log P^{OOD}(a)}, \quad (2)
where P^GT denotes the ground-truth labels. We empirically found that the cross-entropy loss achieves more stable improvements than the confidence in our implementation (see Table 3).
¹The number of answers can be one for single-label classification or multiple for multi-label classification.
Weighting the bias. We blend the ID and OOD knowledge by a weighted sum. The purpose of knowledge blending is to mix the ID and OOD inductive biases fairly. If the learning is biased toward one world, the model may suffer from over-exploiting the corresponding inductive bias. As illustrated in Figure 2 (a), it is difficult to judge whether the oven is electric or not without external knowledge. However, ID-teacher is over-confident in its prediction due to over-exploitation of the training answer distribution, i.e., s^ID > s^OOD. In this case, the model should learn less from ID-teacher. We realize this by increasing the weight of the OOD-knowledge w^OOD and decreasing the weight of the ID-knowledge w^ID, i.e., w^ID < w^OOD. Similarly, for the training samples on which OOD-teacher is over-confident (see Figure 2 (c)), i.e., s^ID < s^OOD, we set w^ID > w^OOD. We determine the knowledge weights by setting them inversely proportional to the matching scores, i.e., w ∝ s^{-1}, normalized so that they lie in [0, 1] and sum to one:
w^{ID} = \frac{(s^{ID})^{-1}}{(s^{ID})^{-1} + (s^{OOD})^{-1}} = \frac{s^{OOD}}{s^{ID} + s^{OOD}}, \quad w^{OOD} = 1 - w^{ID} = \frac{s^{ID}}{s^{ID} + s^{OOD}}. \quad (3)
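To make the introspection step concrete, here is a minimal numpy sketch of Eqs. (1)–(3); the function name and array interface are ours for illustration and do not come from the paper's released code:

```python
import numpy as np

def introspective_weights(p_id, p_ood, p_gt, eps=1e-12):
    """Compute the soft knowledge weights (w^ID, w^OOD) of Eq. (3).

    p_id, p_ood: teacher predictions over answers, shape (num_answers,).
    p_gt: ground-truth label distribution, shape (num_answers,).
    The matching scores follow Eq. (2): s = 1 / XE(P^GT, P).
    """
    xe_id = -np.sum(p_gt * np.log(p_id + eps))    # cross entropy of ID-teacher
    xe_ood = -np.sum(p_gt * np.log(p_ood + eps))  # cross entropy of OOD-teacher
    s_id, s_ood = 1.0 / (xe_id + eps), 1.0 / (xe_ood + eps)
    # Weights are inversely proportional to the matching scores, Eq. (3):
    # w^ID = s^OOD / (s^ID + s^OOD).
    w_id = s_ood / (s_id + s_ood)
    return w_id, 1.0 - w_id
```

Using the inverse cross-entropy as the matching score keeps the weights well-defined even when a teacher assigns the ground truth a very small probability, which is consistent with the stability observation above.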
We take VQA as an example to show how the distribution of the knowledge weights reflects the effect of the inductive bias, i.e., the language prior. Recall that VQA v2 [16] was proposed to balance the answer distribution to remove the language bias, while VQA-CP v2 [2] was proposed to evaluate whether VQA models memorize the language priors. As a result, the VQA v2 train split contains little language bias, while the bias in VQA-CP v2 is artificially severe. Figure 4 illustrates the distribution of w^ID on the two training sets using CF-VQA [25] as the causal teacher. It can be clearly observed that the distributions of w^ID are totally different, which exactly reflects how the data bias affects the training process. Note that a small w^ID indicates a high ID-bias. Here are three interesting observations:
• The w^ID of most samples is around 0.5 for both datasets. This indicates that most of the samples are learned unbiasedly and predicted fairly (e.g., Figure 2 (b)).
• Both distributions are left-skewed. In particular, only 4% of the samples have w^ID larger than 0.6, while the ratio for w^ID < 0.4 is 40% on VQA-CP v2 and 25% on VQA v2. The reason is that ID-teacher is directly optimized on the ID data, while OOD-teacher is indirectly approximated. Therefore, ID-teacher outperforms OOD-teacher on the seen ID data in most cases, i.e., w^ID < 0.5.
• A spike lies at the left side of the VQA-CP v2 distribution. In particular, 9.6% of the samples have w^ID lower than 0.05, while the ratio is only 0.4% on VQA v2. Also, the gap between the two percentages widens as w^ID decreases below 0.5. This observation indicates that VQA models tend to exploit the training bias on the imbalanced VQA-CP v2 dataset but not on the balanced one. Recall that the VQA-CP training set is artificially modified to "encourage" the models to learn from the language prior. Without the memorized priors, VQA models cannot answer the questions confidently or correctly in a few extreme cases (e.g., Figure 2 (a)).
We also define a stochastic hard variant to weight the bias:
w^{ID} = \begin{cases} 1, & \text{if } s^{ID} \le s^{OOD}, \\ 0, & \text{otherwise}. \end{cases} \quad (4)
The hard weighting forces the student to learn entirely from the OOD teacher for most of the training samples so as to maintain its OOD performance. In practice, one may choose the soft or hard variant based on the desired trade-off between ID and OOD performance. We empirically use the soft variant for strong OOD-teachers and the hard variant for weak ones that achieve relatively low OOD performance. Based on the knowledge weights, the ID-knowledge and OOD-knowledge are blended as:
P^{T} = w^{ID} \cdot \text{ID-Knowledge} + w^{OOD} \cdot \text{OOD-Knowledge}. \quad (5)
Considering that the ID ground-truth labels P^GT are more accurate than the ID-predictions P^ID, we use P^GT as the "oracle" ID-Knowledge. Since the OOD distribution is unobserved in training, it is impossible to obtain the oracle OOD-Knowledge. Thanks to the causal teacher, we can use the OOD-prediction P^OOD to approximate the OOD-knowledge.
3.3 Distillation of Fair Knowledge
After obtaining the blended fair knowledge from the causal teacher, we train a student model in a knowledge distillation manner [18]:
\mathcal{L} = KL(P^{T}, P^{S}) = \sum_{a \in \mathcal{A}} P^{T}(a) \log \frac{P^{T}(a)}{P^{S}(a)}, \quad (6)
where P^S denotes the output of the student model. The difference between the teacher model and the student model lies in their architectures. The student model is simply the baseline model, e.g., UpDn [4] for VQA and BERT [12] for extractive QA. On top of the baseline model, the teacher model ensembles a separate branch that formulates the shortcut bias, e.g., Q→A for VQA and C→A for extractive QA. Therefore, the student is more efficient in both parameters and inference speed than the causal teacher model. We fix the causal teacher and only update the student model during distillation.
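Continuing the sketch above, the hard variant, the knowledge blending, and the distillation loss of Eqs. (4)–(6) can be written as follows; the names are again hypothetical, with P^GT playing the role of the ID-knowledge and P^OOD that of the OOD-knowledge:

```python
import numpy as np

def blend_knowledge(w_id, p_gt, p_ood, hard=False):
    """Blend oracle ID-knowledge (P^GT) with OOD-knowledge (P^OOD), Eqs. (4)-(5)."""
    if hard:
        # Hard variant, Eq. (4): w^ID = 1 iff s^ID <= s^OOD,
        # which is equivalent to the soft w^ID being at least 0.5.
        w_id = 1.0 if w_id >= 0.5 else 0.0
    return w_id * p_gt + (1.0 - w_id) * p_ood  # blended teacher P^T, Eq. (5)

def distill_loss(p_teacher, p_student, eps=1e-12):
    """KL-divergence distillation loss of Eq. (6): sum_a P^T(a) log(P^T(a)/P^S(a))."""
    return np.sum(p_teacher * np.log((p_teacher + eps) / (p_student + eps)))
```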
4 Experiments
We take visual QA and extractive QA, two representative QA tasks, as examples to evaluate our proposed Introspective Distillation (IntroD)².
4.1 Visual QA
Dataset. We conducted experiments on the benchmark datasets VQA v2 [16] and VQA-CP v2 [2]. VQA v2 is a balanced VQA dataset that significantly reduces the language bias: for each question in the dataset, VQA v2 has two different answers for two different images. VQA-CP v2 is a variant of VQA v2 that evaluates whether the model answers questions by simply memorizing the language priors. VQA-CP v2 reverses the priors in the training and validation splits. For example, most "what sports" questions are answered by "tennis" in the training set but by "baseball" in the test set.
Metric and setting. The standard evaluation metric for VQA is accuracy. In order to evaluate the robustness of VQA methods, we conducted experiments in two settings: the in-distribution (ID) setting and the out-of-distribution (OOD) setting. For the ID setting, we reported the results on the VQA v2 val set. For the OOD setting, we reported the results on the VQA-CP v2 test set. For the VQA-CP dataset, we also followed Teney et al. [31] and held out 8k samples from the training set as a val set for ID evaluation. We further reported the harmonic mean (HM) of the accuracies on the VQA-CP v2 test set and the VQA v2 val set; we use this metric to evaluate the trade-off between the ID and OOD evaluations.
Methods. Following the causal explanation [25], we implemented the counterfactual teacher as RUBi [6], LMH [11], CSS [7], and CF-VQA [25]. In particular, the earlier works RUBi and LMH used the natural indirect effect (NIE) [26] for inference. CSS is a variant of LMH that generates counterfactual training samples for data augmentation. CF-VQA proposed to use the total indirect effect (TIE) [26] for debiasing, and improved RUBi by replacing NIE with TIE; we denote this variant as RUBi-CF. Following previous works, we used UpDn [4] and S-MRL [6] as the backbones. Based on the debiasing ability, we used the soft variant of the weights for LMH, CSS, RUBi-CF, and CF-VQA, and the hard variant for RUBi (see Table 5). More training details are in the appendix.
Overall results. Tables 1 and 2 show how our proposed IntroD strengthens the existing causal models. First, according to the HM metric, IntroD improves the trade-off ability of all the causal teachers. In particular, CSS+IntroD achieves an accuracy of over 60% under both the ID and OOD settings, the only combination to do so. Second, looking closely at the OOD evaluation, IntroD shows competitive debiasing ability; surprisingly, IntroD even slightly increases the OOD performance of the causal teachers except for LMH. Third, on the ID evaluation, IntroD outperforms RUBi by 0.7% and the other teachers by over 2.4%. The biggest winners are LMH and CSS, which originally suffer a significant drop in ID performance; their gains in ID performance exceed 5.5%. Similar conclusions can be drawn from Table 2. Furthermore, IntroD with CF-VQA obtains higher ID performance (63.40%) than the baseline S-MRL (63.12%), achieving the best of both ID and OOD worlds. These results demonstrate the effectiveness of our proposed IntroD on top of different causal VQA models. The results also indicate that the OOD approximation has an impact on the OOD performance of the students. Overall, the OOD performance of the student is proportional to that of the teacher, while there is no clear evidence that the student's ID performance is correlated with that of the OOD-teacher. As shown in Table 1, CSS+IntroD, with the best OOD-teacher CSS (58.95%), achieves the highest accuracy (60.17%) among all students on the VQA-CP v2 test set. Also, IntroD increases the OOD performance of CSS by 1.22%, while the improvement over CF-VQA is much smaller (0.12%); with the comparatively weakest teacher, LMH, the student's OOD accuracy even decreases slightly (−0.70%).
Ablation studies. We further conducted ablation studies to evaluate the introspection and distillation strategies.
²Code is available at https://github.com/yuleiniu/introd.
We compared the alternatives with ID-teacher and OOD-teacher, i.e., the factual and counterfactual predictions of the same causal model. The ablations aimed to answer the following questions. Note that Q1 is for "introspecting the bias", Q2–Q5 are for "weighting the bias", and Q6 and Q7 are for "distillation of fair knowledge" in Section 3.
Q1: Can we use the predicted probability of the ground-truth answer ("Prob." for short) as the matching scores? Better not. As shown in Table 3, although using "Prob." achieves even better ID performance than ID-teacher, the OOD performance drops by ∼7% compared to LMH and by 4.5% compared to CSS. As a result, the trade-off metric HM decreases with LMH, and increases only marginally (<1%) with CF-VQA and CSS.
Q2: Can the student learn more from the more accurate teacher, i.e., setting w ∝ s? No. This is a natural question because we hope to learn the best from the best. Unfortunately, this alternative ("Weight Avg." for short) enhances the inductive bias rather than reduces it. As shown in Table 4, "Weight Avg." achieves the best ID performance on top of different causal teachers, even beating ID-teacher. However, the students fail to learn the debiasing ability from OOD-teachers and achieve much lower OOD performance than OOD-teachers. This observation verifies that the "best" here should be measured by the debiasing ability against the inductive bias rather than by the fitting ability.
Q3: Can the student learn equally from ID and OOD teachers, i.e., setting w^ID = w^OOD = 0.5? No. This alternative can be regarded as a simple average ensemble ("Simple Avg." for short) of the ID and OOD teachers. As shown in Table 4, similar to Q2, the students outperform ID-teachers on the ID evaluation at the cost of OOD performance compared to OOD-teachers. Besides, there is a large gap between "Simple Avg." and our IntroD with different causal models, e.g., >2% for LMH and CF-VQA, and ∼5% for CSS. This observation indicates that our IntroD is not just a simple ensemble method that combines two teacher models into a bigger one.
Q4: Can the student learn only from OOD-teacher? Yes, but worse than IntroD. This alternative can be called counterfactual distillation ("CFD" for short), as the student model only learns from the counterfactual teacher. As shown in Table 4, CFD also achieves a better trade-off on top of different causal teachers, and in particular improves the OOD performance over OOD-teacher in all cases. However, there is a large gap between IntroD's and CFD's ID performances because the ID-knowledge is not utilized. As a result, for the HM metric, IntroD outperforms CFD by a small margin (<0.4%) on LMH and CF-VQA and by a large margin (>2%) on CSS.
Q5: Should we use the hard or the soft variant to calculate the knowledge weights? It depends on the debiasing ability of the causal teacher. There are some interesting observations in Table 5. First, the OOD performance is proportional to the OOD-teachers' debiasing ability. Second, the hard variants marginally improve OOD-teacher's OOD performance in all cases. Third, the hard variants cannot fully recover the ID performance sacrificed relative to the ID-teacher. Empirically, we use the hard variant for the weaker OOD-teacher, e.g., RUBi, and the soft variant for the stronger OOD-teachers, e.g., LMH, CF-VQA, and CSS.
Q6: Can we use the ID-prediction P^ID as the ID-Knowledge? No. As shown in Table 6, using P^ID as the ID-Knowledge significantly degrades the OOD performance for LMH and CF-VQA.
This observation indicates that it is better to use the oracle knowledge if available.
Q7: Can we ensemble the two teacher models and directly use the ensemble without distillation? In other words, is IntroD just an ensemble method? No. Recall that our goal is to achieve the best of both ID and OOD worlds, i.e., high OOD performance with little or no sacrifice of ID performance. However, the naive ensemble strategy simply combines the two models' predictions using a fixed weight, without figuring out whether a sample comes from the ID or OOD distribution. As a result, the ensemble only inherits the disadvantages of the two teacher models rather than their advantages. Empirical results in Tables 7 and 8 further verify our analysis. Here we report the results of ensembling the two teachers with different w^ID, the weight of the ID teacher. In particular, w^ID = 0 denotes the OOD teacher and w^ID = 1 denotes the ID teacher. We can see that (1) as w^ID increases, the ID performance keeps improving but the OOD performance gradually decreases, and (2) all of the ensemble alternatives achieve a lower HM than the OOD teacher. These results indicate that (1) a simple ensemble of the two teacher models fails to achieve a good trade-off between ID and OOD performance, and (2) our IntroD is not simply an ensemble method.
4.2 Extractive QA
Dataset and settings. We conducted experiments on the reading comprehension benchmark dataset SQuAD [27]. SQuAD requires QA models to extract the answer from a passage. Recently, a new setting [22] was proposed to evaluate whether extractive QA models suffer from the position bias. This setting draws subsets from the training set SQuAD_train based on the position of the answers. For example, SQuAD_train^{k=1} denotes the subset where all answers lie in the first sentence of the passage. The test set is divided into two subsets: SQuAD_dev^{k=1} for ID evaluation and SQuAD_dev^{k≠1} for OOD evaluation.
Metrics and method. The standard evaluation metrics are exact match (EM) and F1 score [27]. Following Ko et al. [22], we used XLNet [38] and BERT [12] as the backbone models, and LM [11] as the causal teacher. We empirically used the hard variant for calculating the knowledge weights.
Results. Table 9 shows the main analysis with SQuAD_train^{k=1} as the biased training set. The results are reproduced based on the released code³. Overall, LM increases the OOD performance by a large margin but slightly sacrifices the ID performance. In comparison, our IntroD achieves the best of both ID and OOD performances. Table 10 further shows that IntroD can improve LM under different answer position biases and different numbers of training samples. In particular, when trained on the less biased training subset SQuAD_train^{k≥5}, where the answers lie outside the first four sentences, LM achieves less improvement on the overall performance, while IntroD stably improves LM. Furthermore, when using the original training set SQuAD_train for unbiased training, LM slightly degrades the performance, while IntroD can still beat the baseline models. This observation indicates that IntroD does not over-correct the inductive bias.
5 Conclusion
In this paper, we proposed a novel training paradigm, Introspective Distillation (IntroD), to achieve a fair trade-off between in-distribution (ID) and out-of-distribution (OOD) evaluations for question answering tasks, e.g., visual QA and extractive QA.
³https://github.com/dmis-lab/position-bias
IntroD uses a causal teacher to estimate the ID and OOD inductive biases, introspects whether one of them dominates the learning, blends the two fairly, and distills the blended knowledge to the student model. Experiments on VQA v2, VQA-CP v2, and SQuAD demonstrated that our IntroD is able to achieve the best of both ID and OOD worlds. The main limitation of our IntroD is that its OOD performance heavily relies on the OOD-teacher. In the future, we will explore how to establish a stronger OOD-teacher.
Acknowledgement
We thank the anonymous ACs and reviewers for their valuable discussion and insightful suggestions. This work was supported in part by NTU-Alibaba JRI and an MOE AcRF Tier 2 grant.
1. What is the main contribution of the paper regarding learning models for visual and text-only question answering? 2. What are the strengths of the paper, particularly in its pitch and ablation study? 3. What are the weaknesses of the paper, especially regarding its novelty and clarity in certain aspects of the approach? 4. Do you have any concerns about the effectiveness of knowledge distillation in this dataset, or the necessity of using it versus ensembling the teacher models directly?
Summary Of The Paper Review
Summary Of The Paper
This paper studies learning models for visual and text-only question answering (VQA and SQuAD) that do well on both in-distribution test sets as well as out-of-distribution ones. The paper takes a knowledge distillation approach and designs two "teacher" modules: an "ID-teacher" that captures in-domain bias, and an "OOD-teacher" in which the in-domain bias is reduced using [23] (Niu et al. 2021)'s method. The models are then ensembled and the aggregate probabilities are used to supervise the training of a student model. The paper evaluates this approach on VQA and SQuAD, with a variety of different methods as teachers. The method seems to increase performance on both OOD and ID sets (which is perhaps surprising, since one might hypothesize that the ID teacher ought to do best by itself on ID data, and likewise for the OOD teacher).
Review
To this reviewer, this paper looks promising overall; however, there are a few key concerns that hold me back from wanting to accept it at this moment. If they are addressed, I would be willing to increase my score in the rebuttal.
Strengths:
To this reviewer, the pitch of the paper seems interesting: using knowledge distillation as a way to effectively ensemble a model that is robust on out-of-distribution data with one that is robust on in-distribution data. It could be helpful for people working on VQA.
The ablation study answers a lot of reasonable questions about model performance, e.g., that knowledge distillation from just the OOD teacher isn't as good, and which ways of ensembling knowledge work best for which models.
Weaknesses:
The novelty of this paper is a bit unclear to this reviewer. One argument made in this paper is that prior work on ensemble-based methods (like [10]; Clark et al. 2019, "Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases") explicitly formulates what the language prior is. However, it is not clear to me how the causal approach proposed in this work (which seems to be from [23]; Niu et al. 2021) does not explicitly formulate what the language prior is.
Adding onto this point, to this reviewer it wasn't super clear how the ID teacher and OOD teacher are trained, or what aspects of [23] (Niu et al. 2021) are being used. I looked through the appendix as well but was still confused.
Relatedly, one concern that comes to mind is whether knowledge distillation helps in this dataset, or whether, for the "harmonic mean" evaluation on both VQA-CP v2 and VQA v2, the proposed weighting method is what is important. In other words, could one ensemble the two teacher models and directly use that (without knowledge distillation)?
NIPS
Title Subspace Recovery from Heterogeneous Data with Non-isotropic Noise
Abstract Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from n users, with user i contributing data samples from a d-dimensional distribution with mean µ_i. Our goal is to recover the linear subspace shared by µ_1, . . . , µ_n using the data points from all users, where every data point from user i is formed by adding an independent mean-zero noise vector to µ_i. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and the amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup.
1 Introduction
We study the problem of learning low-dimensional structure amongst data distributions, given multiple samples from each distribution.
This problem arises naturally in settings such as federated learning, where we want to learn from data coming from a set of individuals, each of whom has samples from their own distribution. These distributions are, however, related to each other, and in this work we consider the setting where the distributions have means lying in a low-dimensional subspace. The goal is to learn this subspace, even when the distributions may have different (and potentially non-spherical) variances. This heterogeneity can manifest itself in practice as a differing number of samples per user, or as the variance differing across individuals, possibly depending on their means. Recovery of the subspace containing the means can in turn help better estimate the individual means. In other words, it can allow for learning good estimators for all individual means by leveraging information from all the individuals.
The irregularity of the noise makes this task challenging even when we have sufficiently many individual distributions. For example, suppose we have n individuals and, for every i = 1, . . . , n, an unknown µ_i ∈ R^d. For simplicity, suppose that µ_1, . . . , µ_n are distributed independently as N(0, σ² uu^T) for σ ∈ R_{≥0} and an unknown unit vector u ∈ R^d. In this setting, our goal is to recover the one-dimensional subspace, equivalently the vector u. For every i, we have a data point x_i = µ_i + z_i, where z_i ∈ R^d is a mean-zero noise vector. If z_i is drawn independently from a spherical Gaussian N(0, α²I), we can recover the unknown subspace with arbitrary accuracy as n grows to infinity, because \frac{1}{n}\sum_i x_i x_i^T concentrates to E[x_i x_i^T] = σ² uu^T + α² I, whose top eigenvector is ±u. However, if the noise z_i is drawn from a non-spherical distribution, the top eigenvector of \frac{1}{n}\sum_i x_i x_i^T can deviate from ±u significantly. To make things worse, if the noise z_i is drawn independently from the non-spherical Gaussian N(0, σ²(I − uu^T) + α²I), then our data points x_i = µ_i + z_i are distributed independently as N(0, (σ² + α²)I), giving no information about the vector u.¹
¹This information-theoretic impossibility naturally extends to recovering k-dimensional subspaces for k > 1 by replacing the unit vector u ∈ R^d with a matrix U ∈ R^{d×k} with orthonormal columns.
The information-theoretic impossibility in this example, however, disappears as soon as one has at least two samples from each distribution. Indeed, given two data points x_{i1} = µ_i + z_{i1} and x_{i2} = µ_i + z_{i2} from user i, as long as the noise vectors z_{i1}, z_{i2} are independent and have zero mean, we always have E[x_{i1} x_{i2}^T] = σ² uu^T, regardless of the specific distributions of z_{i1} and z_{i2}. This allows us to recover the subspace in this example, as long as we have sufficiently many users each contributing at least two examples. As this is commonly the case in our motivating examples, we make this assumption of multiple data points per user and show that this intuition extends well beyond this particular example.
We design efficiently computable estimators for this subspace recovery problem given samples from multiple heteroscedastic distributions (see Section 1.1 for details). We prove upper bounds on the error of our estimator measured in the maximum principal angle (see Section 2 for the definition). We also prove an information-theoretic error lower bound, showing that our estimator achieves the optimal error up to a constant factor in general scenarios where the number of data points and the amount of noise can vary across users. Somewhat surprisingly, our lower bound holds even when the noise is distributed as a spherical Gaussian. Thus non-spherical noise in this setting does not lead to increased error.
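The failure of single-sample PCA and the success of the paired cross-moment can be checked numerically. The following self-contained simulation (all variable names are ours, for illustration only) instantiates the adversarial noise N(0, σ²(I − uu^T) + α²I) described above and recovers u from the symmetrized cross-moment of two samples per user:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, alpha = 20, 50_000, 1.0, 1.0
u = np.zeros(d); u[0] = 1.0                        # unknown direction (ground truth)

# User means mu_i ~ N(0, sigma^2 u u^T), stacked as rows of an (n, d) array.
mu = sigma * rng.standard_normal(n)[:, None] * u
P = np.eye(d) - np.outer(u, u)                     # projector orthogonal to u
cov_noise = sigma**2 * P + alpha**2 * np.eye(d)    # adversarial noise covariance
L = np.linalg.cholesky(cov_noise)
z1 = rng.standard_normal((n, d)) @ L.T             # noise for sample 1 of each user
z2 = rng.standard_normal((n, d)) @ L.T             # noise for sample 2 of each user
x1, x2 = mu + z1, mu + z2                          # each x alone is N(0, (sigma^2+alpha^2) I)

A = (x1.T @ x2 + x2.T @ x1) / (2 * n)              # symmetrized cross-moment estimate
top = np.linalg.eigh(A)[1][:, -1]                  # top eigenvector of A
print(abs(top @ u))                                # close to 1: direction recovered
```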
We then show that our techniques extend beyond the mean estimation problem to a linear regression setting where, for each µ_i, we get (at least two) samples (x_{ij}, x_{ij}^T µ_i + z_{ij}), where z_{ij} is mean-zero noise from some noise distribution that depends on i and x_{ij}. This turns out to be a model that was recently studied in the meta-learning literature under more restrictive assumptions (e.g., z_{ij} is independent of x_{ij}) [Kong et al., 2020, Tripuraneni et al., 2021, Collins et al., 2021, Thekumparampil et al., 2021]. We show a simple estimator achieving an error upper bound matching the ones in prior work without making these restrictive assumptions.
1.1 Our contributions
PCA with heterogeneous and non-isotropic noise: Upper Bounds. In the PCA setting, the data points from each user i are drawn from a user-specific distribution with mean µ_i ∈ R^d, and we assume that µ_1, . . . , µ_n lie in a shared k-dimensional subspace that we want to recover. Specifically, we have m_i data points x_{ij} ∈ R^d from user i for j = 1, . . . , m_i, and each data point is given by x_{ij} = µ_i + z_{ij}, where z_{ij} ∈ R^d is a noise vector drawn independently from a mean-zero distribution. We allow the distribution of z_{ij} to be non-spherical and non-identical across different pairs (i, j). We use η_i ∈ R_{≥0} to quantify the amount of noise in user i's data points by assuming that z_{ij} is an η_i-sub-Gaussian random vector. As mentioned earlier, if we only have a single data point from each user, it is information-theoretically impossible to recover the subspace. Thus, we focus on the case where m_i ≥ 2 for every i = 1, . . . , n. In this setting, for appropriate weights w_1, . . . , w_n ∈ R_{≥0}, we compute the matrix A:
A = \sum_{i=1}^{n} \frac{w_i}{m_i(m_i - 1)} \sum_{j_1 \neq j_2} x_{ij_1} x_{ij_2}^T, \quad (1)
where the inner summation is over all pairs j_1, j_2 ∈ {1, . . . , m_i} satisfying j_1 ≠ j_2. Our estimator is then defined as the subspace spanned by the top-k eigenvectors of A. Although the inner summation is over m_i(m_i − 1) terms, the time complexity for computing it need not grow quadratically with m_i, because of the following identity:
\sum_{j_1 \neq j_2} x_{ij_1} x_{ij_2}^T = \left( \sum_{j=1}^{m_i} x_{ij} \right) \left( \sum_{j=1}^{m_i} x_{ij} \right)^T - \sum_{j=1}^{m_i} x_{ij} x_{ij}^T.
The flexibility in the weights w_1, . . . , w_n allows us to deal with variations in m_i and η_i across users i. In the special case where η_1 = · · · = η_n = η and m_1 = · · · = m_n = m, we choose w_1 = · · · = w_n = 1/n, and we show that our estimator achieves the following error upper bound with success probability at least 1 − δ:
\sin\theta = O\left( \left( \frac{\eta \sigma_1}{\sigma_k^2 \sqrt{m}} + \frac{\eta^2}{\sigma_k^2 m} \right) \sqrt{\frac{d + \log(1/\delta)}{n}} \right).
Here, θ is the maximum principal angle between our estimator and the true subspace shared by µ_1, . . . , µ_n, and we define σ_ℓ ≥ 0 such that σ_ℓ² is the ℓ-th largest eigenvalue of \sum_{i=1}^n w_i µ_i µ_i^T. Our error upper bound for general m_i, η_i, w_i is given in Theorem 3.1. We instantiate our error upper bound in the case where µ_1, . . . , µ_n are drawn iid from a Gaussian distribution N(0, σ²UU^T), where the columns of U ∈ R^{d×k} form an orthonormal basis of the subspace containing µ_1, . . . , µ_n. By choosing the weights w_1, . . . , w_n according to m_1, . . . , m_n and η_1, . . . , η_n, our estimator achieves the error upper bound
\sin\theta \le O\left( \sqrt{\frac{d + \log(1/\delta)}{\sum_{i=1}^n \gamma_i'}} \right) \quad (2)
under a mild assumption (Assumption 3.2), where γ_i' is defined in Definition 3.1 and often equals \left( \frac{\eta_i^2}{\sigma^2 m_i} + \frac{\eta_i^4}{\sigma^4 m_i^2} \right)^{-1}.
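A short sketch of the estimator in Eq. (1), using the displayed identity so that each user's term is computed without enumerating all m_i(m_i − 1) pairs; the function name and interface are ours, not from the paper:

```python
import numpy as np

def pca_subspace_estimator(user_data, weights, k):
    """Top-k eigenspace of A = sum_i w_i/(m_i (m_i-1)) * sum_{j1 != j2} x_{ij1} x_{ij2}^T.

    user_data: list of arrays; user i's array has shape (m_i, d) with m_i >= 2.
    weights:   nonnegative user weights w_1, ..., w_n.
    Returns a (d, k) matrix whose columns span the estimated subspace.
    """
    d = user_data[0].shape[1]
    A = np.zeros((d, d))
    for X, w in zip(user_data, weights):
        m = X.shape[0]
        s = X.sum(axis=0)
        # sum_{j1 != j2} x_{j1} x_{j2}^T = (sum_j x_j)(sum_j x_j)^T - sum_j x_j x_j^T
        A += (w / (m * (m - 1))) * (np.outer(s, s) - X.T @ X)
    A = (A + A.T) / 2                      # symmetrize against round-off error
    eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    return eigvecs[:, -k:]                 # eigenvectors of the k largest eigenvalues
```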
PCA: Lower Bounds. We show that the error upper bound (2) is optimal up to a constant factor by proving a matching information-theoretic lower bound (Theorem 3.7). Our lower bound holds for general m_i and η_i that can vary among users i, and it holds even when the noise vectors z_{ij} are drawn from spherical Gaussians, showing that our estimator essentially pays no additional cost in error or sample complexity due to non-isotropic noise. We prove the lower bound using Fano's method on a local packing over the Grassmannian manifold. We carefully select a non-trivial hard distribution so that the strength of our lower bound is not affected by a group of fewer than k users each having a huge number of data points with little noise.
Linear Models. While the PCA setting is the main focus of our paper, we extend our research to a related linear models setting that has recently been well studied in the meta-learning and federated learning literature [Kong et al., 2020, Tripuraneni et al., 2021, Collins et al., 2021, Thekumparampil et al., 2021]. Here, the user-specific distribution of each user i is parameterized by β_i ∈ R^d, and we again assume that β_1, . . . , β_n lie in a k-dimensional linear subspace that we want to recover. From each user i we observe m_i data points (x_{ij}, y_{ij}) ∈ R^d × R for j = 1, . . . , m_i drawn from the user-specific distribution satisfying y_{ij} = x_{ij}^T β_i + z_{ij}, for an O(1)-sub-Gaussian measurement vector x_{ij} ∈ R^d with zero mean and identity covariance and an η_i-sub-Gaussian mean-zero noise term z_{ij} ∈ R. While it may seem that non-isotropic noise is less of a challenge in this setting since each noise term z_{ij} is a scalar, our goal is to handle a challenging scenario where the variances of the noise terms z_{ij} can depend on the realized measurements x_{ij}, which is a more general and widely applicable setting than those in prior work. Similarly to the PCA setting, our relaxed assumptions on the noise make it information-theoretically impossible to do subspace recovery if we only have one data point from each user (see Section 4), and thus we assume each user contributes at least two data points. For appropriate weights w_1, . . . , w_n ∈ R_{≥0}, we use the subspace spanned by the top-k eigenvectors of the following matrix A as our estimator:
A = \sum_{i=1}^{n} \frac{w_i}{m_i(m_i - 1)} \sum_{j_1 \neq j_2} (x_{ij_1} y_{ij_1})(x_{ij_2} y_{ij_2})^T. \quad (3)
In the special case where η_1 = · · · = η_n = η, m_1 = · · · = m_n = m, and ∥β_i∥_2 ≤ r for all i, our estimator achieves the following error upper bound using weights w_1 = · · · = w_n = 1/n:
\sin\theta \le O\left( \log^3(nd/\delta) \sqrt{\frac{d(r^4 + r^2\eta^2 + \eta^4/m)}{m n \sigma_k^4}} \right), \quad (4)
where θ is the maximum principal angle between our estimator and the true subspace shared by β_1, . . . , β_n, and σ_k² is the k-th largest eigenvalue of \sum_{i=1}^n w_i β_i β_i^T (Corollary L.2). Our error upper bound extends smoothly to more general cases where η_i and m_i vary among users (Theorem L.1). Moreover, our upper bound matches the ones in prior work [e.g., Tripuraneni et al., 2021, Theorem 3] despite requiring less restrictive assumptions.
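The linear-model estimator of Eq. (3) admits the same pairwise-sum trick, applied to the vectors v_{ij} = x_{ij} y_{ij}; a hedged sketch under our own naming:

```python
import numpy as np

def linear_subspace_estimator(user_xy, weights, k):
    """Top-k eigenspace of A in Eq. (3), built from v_{ij} = x_{ij} * y_{ij}.

    user_xy: list of (X, y) pairs; X has shape (m_i, d), y has shape (m_i,).
    weights: nonnegative user weights w_1, ..., w_n.
    """
    d = user_xy[0][0].shape[1]
    A = np.zeros((d, d))
    for (X, y), w in zip(user_xy, weights):
        m = X.shape[0]
        V = X * y[:, None]                 # rows are x_{ij} * y_{ij}
        s = V.sum(axis=0)
        # Same identity as in the PCA case, applied to the rows of V.
        A += (w / (m * (m - 1))) * (np.outer(s, s) - V.T @ V)
    A = (A + A.T) / 2                      # symmetrize against round-off error
    return np.linalg.eigh(A)[1][:, -k:]    # top-k eigenvectors
```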
1.2 Related Work
Principal component analysis under non-isotropic noise has been studied by Vaswani and Narayanamurthy [2017], Zhang et al. [2018], and Narayanamurthy and Vaswani [2020]. When translated to our setting, these papers focus on having only one data point from each user, and thus they require additional assumptions—either the level of non-isotropy is low, or the noise is coordinate-wise independent and the subspace is incoherent. The estimation error guarantees in these papers depend crucially on how well these additional assumptions are satisfied. Zhu et al. [2019] and Cai et al. [2021] study PCA with noise and missing data, and Chen et al. [2021] and Cheng et al. [2021] study eigenvalue and eigenvector estimation under heteroscedastic noise. These four papers all assume that the noise is coordinate-wise independent and the subspace/eigenspace is incoherent.
The linear models setting we consider has recently been studied as a basic setting of meta-learning and federated learning by Kong et al. [2020], Tripuraneni et al. [2021], Collins et al. [2021], and Thekumparampil et al. [2021]. These papers all make the assumption that the noise terms z_{ij} are independent of the measurements x_{ij}, an assumption that we relax in this paper. Collins et al. [2021] and Thekumparampil et al. [2021] make improvements in sample complexity and error guarantees compared to the earlier work by Kong et al. [2020] and Tripuraneni et al. [2021], but Collins et al. [2021] focus on the noiseless setting (z_{ij} = 0) and Thekumparampil et al. [2021] require at least Ω(k²) examples per user. Tripuraneni et al. [2021] and Thekumparampil et al. [2021] assume that the measurements x_{ij} are drawn from the standard (multivariate) Gaussian distribution, whereas Kong et al. [2020], Collins et al. [2021], and our work make the relaxed assumption that the x_{ij} are sub-Gaussian with identity covariance, which, in particular, allows the fourth-order moments of x_{ij} to be non-isotropic. There is a large body of prior work on meta-learning beyond the linear setting [see e.g. Maurer et al., 2016, Tripuraneni et al., 2020, Du et al., 2020].
When collecting data from users, it is often important to ensure that private information about users is not revealed through the release of the learned estimator. Many recent works proposed and analyzed estimators that achieve user-level differential privacy in settings including mean estimation [Levy et al., 2021, Esfandiari et al., 2021], meta-learning [Jain et al., 2021], and PAC learning [Ghazi et al., 2021]. Recently, Cummings et al. [2021] studied one-dimensional mean estimation in a setting similar to ours, under a differential privacy constraint.
The matrix A defined in (1) is a weighted sum of A_i := \frac{1}{m_i(m_i-1)} \sum_{j_1 \neq j_2} x_{ij_1} x_{ij_2}^T over users i = 1, . . . , n, and each A_i has the form of a U-statistic [Halmos, 1946, Hoeffding, 1948]. U-statistics have been applied to many statistical tasks, including tensor completion [Xia and Yuan, 2019] and various testing problems [Zhong and Chen, 2011, He et al., 2021, Schrab et al., 2022]. In our definition of A_i, we do not assume that the distributions of x_{i1}, . . . , x_{im_i} are identical, although this assumption is commonly used in applications of U-statistics. The matrix A in (3) is also a weighted sum of U-statistics, where we again do not make the identical-distribution assumption.
1.3 Paper Organization
In Section 2, we formally define the maximum principal angle and other notions we use throughout the paper. Our results in the PCA setting and the linear models setting are presented in Sections 3 and 4, respectively. We defer most technical proofs to the appendices.
2 Preliminaries
We use ∥A∥ to denote the spectral norm of a matrix A, and ∥u∥_2 to denote the ℓ_2 norm of a vector u. For positive integers k ≤ d, we use O_{d,k} to denote the set of matrices A ∈ R^{d×k} satisfying A^T A = I_k, where I_k is the k × k identity matrix.
We use $O_d$ to denote $O_{d,d}$, the set of $d \times d$ orthogonal matrices. We use $\mathrm{col}(A)$ to denote the linear subspace spanned by the columns of a matrix $A$. We use the base-$e$ logarithm throughout the paper.

Maximum Principal Angle. Let $U, \hat U \in O_d$ be two orthogonal matrices. Suppose the columns of $U$ and $\hat U$ are partitioned as $U = [U_1\ U_2]$ and $\hat U = [\hat U_1\ \hat U_2]$, where $U_1, \hat U_1 \in O_{d,k}$ for an integer $k$ satisfying $0 < k < d$. Let $\Gamma$ (resp. $\hat\Gamma$) be the $k$-dimensional linear subspace spanned by the columns of $U_1$ (resp. $\hat U_1$). Originating from [Jordan, 1875], the maximum principal angle $\theta \in [0, \pi/2]$ between $\Gamma$ and $\hat\Gamma$, denoted by $\angle(\Gamma, \hat\Gamma)$ or $\angle(U_1, \hat U_1)$, is defined by
$$\sin\theta = \|U_1U_1^{\mathsf T} - \hat U_1\hat U_1^{\mathsf T}\| = \|U_1^{\mathsf T}\hat U_2\| = \|U_2^{\mathsf T}\hat U_1\|.$$
It is not hard to see that the maximum principal angle depends only on the subspaces $\Gamma, \hat\Gamma$ and not on the choices of $U$ and $\hat U$, and that $\sin\angle(\Gamma, \hat\Gamma)$ is a natural metric between $k$-dimensional subspaces (see Appendix A for more details, where we discuss the definition of principal angles for any two subspaces with possibly different dimensions). With the definition of the maximum principal angle, we can now state a variant of the Davis–Kahan $\sin\theta$ theorem [Davis and Kahan, 1970] that will be useful in our analysis (see Appendix E for proof):

Theorem 2.1 (Variant of Davis–Kahan $\sin\theta$ theorem). Let $A, \hat A \in \mathbb{R}^{d \times d}$ be symmetric matrices. Let $\lambda_i$ denote the $i$-th largest eigenvalue of $A$. For a positive integer $k$ smaller than $d$, let $\theta$ denote the maximum principal angle between the subspaces spanned by the top-$k$ eigenvectors of $A$ and $\hat A$. Assuming $\lambda_k > \lambda_{k+1}$,
$$\sin\theta \le \frac{2\|A - \hat A\|}{\lambda_k - \lambda_{k+1}}.$$

Sub-Gaussian and sub-exponential distributions. We say a random variable $x \in \mathbb{R}$ with expectation $\mathbb{E}[x]$ has sub-Gaussian constant $b \in \mathbb{R}_{\ge 0}$ if $\mathbb{E}[|x - \mathbb{E}[x]|^p]^{1/p} \le b\sqrt{p}$ for every $p \ge 1$. We say $x$ has sub-exponential constant $b \in \mathbb{R}_{\ge 0}$ if $\mathbb{E}[|x - \mathbb{E}[x]|^p]^{1/p} \le bp$ for every $p \ge 1$. We say a random vector $y \in \mathbb{R}^d$ has sub-Gaussian (resp. sub-exponential) constant $b \in \mathbb{R}_{\ge 0}$ if for every unit vector $u \in \mathbb{R}^d$ (i.e., $\|u\|_2 = 1$), the random variable $u^{\mathsf T}y \in \mathbb{R}$ has sub-Gaussian (resp. sub-exponential) constant $b$. We say $y$ is $b$-sub-Gaussian (resp. $b$-sub-exponential) if it has sub-Gaussian (resp. sub-exponential) constant $b$.

3 Principal Component Analysis
In the principal component analysis (PCA) setting, our goal is to recover the $k$-dimensional subspace $\Gamma$ spanned by the user-specific means $\mu_1, \dots, \mu_n \in \mathbb{R}^d$ of the $n$ users. From each user $i$, we have $m_i \ge 2$ data points
$$x_{ij} = \mu_i + z_{ij} \quad \text{for } j = 1, \dots, m_i. \tag{5}$$
We assume the noise $z_{ij} \in \mathbb{R}^d$ is drawn independently from a mean-zero distribution with sub-Gaussian constant $\eta_i$. We do not assume that the variance of $z_{ij}$ is the same along every direction, nor do we assume that the distribution of $z_{ij}$ is the same for different pairs $(i, j)$. We first show an error upper bound for our estimator when the user-specific means $\mu_1, \dots, \mu_n$ are deterministic vectors (Section 3.1) and then apply this result to the case where $\mu_1, \dots, \mu_n$ are drawn from a sub-Gaussian distribution (Section 3.2). In Section 3.3 we prove an information-theoretic error lower bound matching our upper bound.

3.1 Fixed User-Specific Means
We first focus on the case where $\mu_1, \dots, \mu_n$ are deterministic vectors; all the randomness in the data then comes from the noise $z_{ij}$. Our estimator is the subspace $\hat\Gamma$ spanned by the top-$k$ eigenvectors of $A$ defined in (1). For $\ell = 1, \dots, d$, we define $\sigma_\ell \ge 0$ such that $\sigma_\ell^2$ is the $\ell$-th largest eigenvalue of $\sum_{i=1}^{n} w_i\mu_i\mu_i^{\mathsf T}$. Since $\mu_1, \dots, \mu_n$ share a $k$-dimensional subspace, $\sigma_\ell = 0$ for $\ell > k$.
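A minimal numpy sketch of this estimator, together with the $\sin\theta$ metric defined above, follows; variable names and data layout are ours. Expanding the pair sum as $(\sum_j x_{ij})(\sum_j x_{ij})^{\mathsf T} - \sum_j x_{ij}x_{ij}^{\mathsf T}$ keeps the per-user cost linear in $m_i$:

```python
import numpy as np

def pca_subspace(X_list, weights, k):
    """Sketch of the PCA estimator: top-k eigenvectors of A from (1).
    X_list[i] is an (m_i, d) array holding user i's points x_ij (m_i >= 2)."""
    d = X_list[0].shape[1]
    A = np.zeros((d, d))
    for X, w in zip(X_list, weights):
        m = X.shape[0]
        S = X.sum(axis=0)
        # pairwise trick: sum_{j1 != j2} x_j1 x_j2^T = S S^T - sum_j x_j x_j^T
        A += (w / (m * (m - 1))) * (np.outer(S, S) - X.T @ X)
    _, eigvecs = np.linalg.eigh((A + A.T) / 2)
    return eigvecs[:, -k:]                 # (d, k) orthonormal basis of the estimate

def sin_max_principal_angle(U, V):
    """sin of the maximum principal angle between col(U) and col(V), computed as
    the spectral norm of U U^T - V V^T (U, V must have orthonormal columns)."""
    return np.linalg.norm(U @ U.T - V @ V.T, ord=2)
```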
We prove the following general theorem on the error guarantee of our estimator:

Theorem 3.1. Define $\xi^2 = \big\|\sum_{i=1}^{n} w_i^2\,\eta_i^2\,\mu_i\mu_i^{\mathsf T}/m_i\big\|$ and let $\theta$ denote the maximum principal angle between our estimator $\hat\Gamma$ and the true subspace $\Gamma$ spanned by $\mu_1, \dots, \mu_n$. For any $\delta \in (0, 1/2)$, with probability at least $1 - \delta$,
$$\sin\theta = O\left(\sigma_k^{-2}\sqrt{(d + \log(1/\delta))\Big(\xi^2 + \sum_{i=1}^{n} \frac{w_i^2\eta_i^4}{m_i^2}\Big)} + \sigma_k^{-2}\,(d + \log(1/\delta))\max_i \frac{w_i\eta_i^2}{m_i}\right). \tag{6}$$

We can simplify the bound in Theorem 3.1 by considering special cases:

Corollary 3.2. Assume $\max\{\eta_1/\sqrt{m_1}, \dots, \eta_n/\sqrt{m_n}\} = t$ and choose $w_1 = \dots = w_n = 1/n$. For any $\delta \in (0, 1/2)$, with probability at least $1 - \delta$,
$$\sin\theta = O\left(\frac{t\sigma_1 + t^2}{\sigma_k^2}\sqrt{\frac{d + \log(1/\delta)}{n}}\right). \tag{7}$$
In particular, when $\eta_1 = \dots = \eta_n = \eta$ and $m_1 = \dots = m_n = m$, error bound (7) becomes
$$\sin\theta = O\left(\Big(\frac{\eta\sigma_1}{\sigma_k^2\sqrt{m}} + \frac{\eta^2}{\sigma_k^2 m}\Big)\sqrt{\frac{d + \log(1/\delta)}{n}}\right).$$

We defer the complete proofs of Theorem 3.1 and Corollary 3.2 to Appendices F and G. Our proof is based on the Davis–Kahan $\sin\theta$ theorem (Theorem 2.1). Since $\sigma_{k+1}^2 = 0$, Theorem 2.1 implies
$$\sin\theta \le \frac{2\left\|A - \sum_{i=1}^{n} w_i\mu_i\mu_i^{\mathsf T}\right\|}{\sigma_k^2}. \tag{8}$$
This reduces our goal to proving an upper bound on the spectral norm of $A - \sum_{i=1}^{n} w_i\mu_i\mu_i^{\mathsf T}$. Since for distinct $j_1$ and $j_2$ in $\{1, \dots, m_i\}$ we have $\mathbb{E}[x_{ij_1}x_{ij_2}^{\mathsf T}] = \mu_i\mu_i^{\mathsf T}$, our construction of $A$ in (1) guarantees $\mathbb{E}[A] = \sum_{i=1}^{n} w_i\mu_i\mu_i^{\mathsf T}$. Our goal therefore becomes controlling the deviation of $A$ from its expectation, which we achieve using techniques for matrix concentration inequalities.

3.2 Sub-Gaussian User-Specific Means
We apply the error upper bound of Theorem 3.1 to the case where $\mu_1, \dots, \mu_n \in \mathbb{R}^d$ are drawn iid from $N(0, \sigma^2 UU^{\mathsf T})$ for an unknown $U \in O_{d,k}$. We still assume that each data point $x_{ij} \in \mathbb{R}^d$ is generated by adding a noise vector $z_{ij} \in \mathbb{R}^d$ to the user-specific mean $\mu_i$ as in (5). We do not assume that the noise vectors $(z_{ij})_{1 \le i \le n, 1 \le j \le m_i}$ are independent of the user-specific means $(\mu_i)_{1 \le i \le n}$, but we assume that, conditioned on $(\mu_i)_{1 \le i \le n}$, every noise vector $z_{ij}$ independently follows a distribution with mean zero and sub-Gaussian constant $\eta_i$. We use the same estimator $\hat\Gamma$ as before: the subspace spanned by the top-$k$ eigenvectors of $A$ defined in (1). We determine the optimal weights $w_1, \dots, w_n$ in (1) as long as $m_1, \dots, m_n$ and $\eta_1, \dots, \eta_n$ satisfy a mild assumption (Assumption 3.2), achieving the error upper bound in Theorem 3.4. In the next subsection, we prove an error lower bound (Theorem 3.7) that matches our upper bound (Theorem 3.4) up to a constant factor, assuming $d \ge (1 + \Omega(1))k$ and $\delta = \Theta(1)$.

We prove our error upper bound in a slightly more general setting than $\mu_1, \dots, \mu_n$ drawn iid from $N(0, \sigma^2 UU^{\mathsf T})$. Specifically, we make the following assumption on the distribution of $\mu_1, \dots, \mu_n$:

Assumption 3.1. The user-specific means $\mu_1, \dots, \mu_n \in \mathbb{R}^d$ are mean-zero independent random vectors supported on an unknown $k$-dimensional subspace $\Gamma$. Moreover, for a parameter $\sigma > 0$ and every $i = 1, \dots, n$, $\mu_i$ has sub-Gaussian constant $O(\sigma)$, and the $k$-th largest eigenvalue of $\mathbb{E}[\mu_i\mu_i^{\mathsf T}]$ is at least $\sigma^2$.

Under this assumption, we have the following lower bound on the quantity $\sigma_k^2$ appearing in Theorem 3.1 (see Appendix H for proof):

Claim 3.3. Under Assumption 3.1, let $w_1, \dots, w_n \in \mathbb{R}_{\ge 0}$ be user weights satisfying $w_1 + \dots + w_n = 1$, and let $\sigma_k^2$ be the $k$-th largest eigenvalue of $\sum_{i=1}^{n} w_i\mu_i\mu_i^{\mathsf T}$. There exists an absolute constant $C_* > 1$ such that for any $\delta \in (0, 1/2)$, as long as $\max_{1 \le i \le n} w_i \le 1/(C_*(k + \log(1/\delta)))$, we have $\sigma_k^2 \ge \sigma^2/2$ with probability at least $1 - \delta/2$.
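Before turning to the choice of weights, a small simulation (entirely ours; the parameter values are arbitrary) illustrates why the pairwise estimator tolerates non-isotropic noise while naive pooled PCA does not. The noise covariance below mirrors the adversarial construction from the introduction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m, sigma, alpha = 20, 5000, 2, 1.0, 0.5
u = np.zeros(d); u[0] = 1.0                       # true 1-dimensional subspace

# adversarial non-isotropic noise covariance: sigma^2 (I - u u^T) + alpha^2 I
noise_cov = sigma**2 * (np.eye(d) - np.outer(u, u)) + alpha**2 * np.eye(d)
L = np.linalg.cholesky(noise_cov)

mus = sigma * rng.normal(size=n)[:, None] * u     # mu_i ~ N(0, sigma^2 u u^T)
X = mus[:, None, :] + rng.normal(size=(n, m, d)) @ L.T   # x_ij = mu_i + z_ij

# naive estimator: top eigenvector of the pooled second-moment matrix; its
# expectation is (sigma^2 + alpha^2) I, so the direction u is unidentifiable
flat = X.reshape(-1, d)
naive = np.linalg.eigh(flat.T @ flat / (n * m))[1][:, -1]

# pairwise estimator (1) with m = 2 and w_i = 1/n, symmetrized
A = X[:, 0, :].T @ X[:, 1, :] / n
pairwise = np.linalg.eigh((A + A.T) / 2)[1][:, -1]

print("naive    |<v, u>| =", abs(naive @ u))      # typically far from 1
print("pairwise |<v, u>| =", abs(pairwise @ u))   # close to 1
```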
The following definition is important for choosing the weights $w_1, \dots, w_n$ in (1) optimally:

Definition 3.1. Define $\gamma_i = \left(\frac{\eta_i^2}{\sigma^2 m_i} + \frac{\eta_i^4}{\sigma^4 m_i^2}\right)^{-1}$ and assume w.l.o.g. that $\gamma_1 \ge \dots \ge \gamma_n$. Define $\gamma_i' = \gamma_i$ if $i \ge k$, and $\gamma_i' = \gamma_k$ if $i < k$.

Intuitively, we can view $\gamma_i$ as measuring the "amount of information" provided by the data points from user $i$: $\gamma_i$ increases as the number $m_i$ of data points from user $i$ increases, and decreases as the noise magnitude $\eta_i$ of user $i$ increases. With the users sorted so that $\gamma_1 \ge \dots \ge \gamma_n$, the quantity $\gamma_i'$ is defined to be $\gamma_k$ for the $k$ most "informative" users $i = 1, \dots, k$, and $\gamma_i' = \gamma_i$ for the other users. We make the following mild assumption on the $\gamma_i'$, under which we achieve optimal estimation error:

Assumption 3.2. $\sum_{i=1}^{n} \gamma_i' \ge C_*(k + \log(1/\delta))\gamma_1'$ for $C_*$ defined in Claim 3.3.

By the definition of $\gamma_i'$, it is easy to show that Assumption 3.2 is equivalent to $\sum_{i=k+1}^{n} \gamma_i \ge ((C_* - 1)k + C_*\log(1/\delta))\gamma_k$. Therefore, if we view $\gamma_i$ as the "amount of information" from user $i$, Assumption 3.2 requires that a significant contribution to the total "information" comes from outside the $k$ most "informative" users. This assumption rules out degenerate cases such as having exactly $n = k$ users: in that case, we would only have $\sigma_k^2 \approx \sigma^2/k^2$ for uniform weights $w_1 = \dots = w_n$ (see [Rudelson and Vershynin, 2008] and references therein), as opposed to the desired $\sigma_k^2 \ge \sigma^2/2$ in Claim 3.3. Assumption 3.2 is a mild assumption: for example, when $\gamma_k = \dots = \gamma_n$, it holds as long as $n \ge C_*(k + \log(1/\delta))$. Also, since $\gamma_1' = \dots = \gamma_k' \ge \gamma_{k+1}' \ge \dots \ge \gamma_n' \ge 0$, it trivially holds that $\sum_{i=1}^{n} \gamma_i' \ge k\gamma_1'$; Assumption 3.2 is relatively mild when compared to this trivial inequality.

Under Assumption 3.2, we show that it is optimal to choose the weights $w_1, \dots, w_n$ as
$$w_i = \frac{\gamma_i'}{\sum_{\ell=1}^{n} \gamma_\ell'}. \tag{9}$$
Specifically, if we plug (9) into Theorem 3.1 and bound $\xi$ and $\sigma_k$ based on the distribution of $\mu_1, \dots, \mu_n$, we get the following error upper bound, which matches our lower bound (Theorem 3.7) in Section 3.3. We defer its proof to Appendix I.

Theorem 3.4. Under Assumptions 3.1 and 3.2, if we choose $w_1, \dots, w_n$ as in (9) and define $\theta = \angle(\Gamma, \hat\Gamma)$, then for any $\delta \in (0, 1/2)$, with probability at least $1 - \delta$,
$$\sin\theta \le O\left(\sqrt{\frac{d + \log(1/\delta)}{\sum_{i=1}^{n} \gamma_i'}}\right). \tag{10}$$

For comparison, consider the setting where $\sigma = \eta_i = 1$ for every $i = 1, \dots, n$. The result then says that $\sin\theta$ is bounded by approximately $\sqrt{d/\sum_{i=1}^{n} m_i}$. This is the same rate we would get from $\sum_{i=1}^{n} m_i$ users each contributing a single independent data point with homogeneous spherical noise. Thus, as long as the data points are not too concentrated on fewer than $k$ users, the heterogeneity comes at no additional cost.
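A short sketch of this weight computation, assuming the per-user noise levels and point counts are given as arrays (the function name is ours):

```python
import numpy as np

def optimal_weights(etas, ms, sigma, k):
    """Weights from Definition 3.1 and (9):
    gamma_i = (eta_i^2 / (sigma^2 m_i) + eta_i^4 / (sigma^4 m_i^2))^{-1},
    clipped to gamma_k for the k most informative users, then normalized.
    Assumes 1 <= k <= len(ms), as in the theorem statements."""
    etas, ms = np.asarray(etas, float), np.asarray(ms, float)
    gamma = 1.0 / (etas**2 / (sigma**2 * ms) + etas**4 / (sigma**4 * ms**2))
    order = np.argsort(-gamma)             # sort users so gamma is non-increasing
    g_sorted = gamma[order]
    g_prime = g_sorted.copy()
    g_prime[:k - 1] = g_sorted[k - 1]      # gamma'_i = gamma_k for i < k (1-indexed)
    w = np.empty_like(g_prime)
    w[order] = g_prime / g_prime.sum()     # undo the sort; weights follow (9)
    return w
```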
3.3 Lower Bound
We prove a lower bound matching the upper bound in Theorem 3.4 up to a constant factor in the setting where $\delta = \Theta(1)$ and $d \ge (1 + \Omega(1))k$. For every positive integer $d$, there is a natural "uniform" distribution over $O_d$ given by Haar's theorem [Haar, 1933] (see e.g. [Diestel and Spalsbury, 2014] for a textbook treatment). We denote this distribution by $\mathrm{Haar}(O_d)$. A random matrix $A$ drawn from $\mathrm{Haar}(O_d)$ has the following invariance property: for any deterministic matrix $B \in O_d$, the random matrices $A$, $AB$ and $BA$ all have the same distribution. For an integer $k \le d$, we can construct a random matrix $A_1 \in O_{d,k}$ by first drawing $A \in \mathbb{R}^{d \times d}$ from $\mathrm{Haar}(O_d)$ and then taking the first $k$ columns of $A$. We denote the distribution of $A_1$ by $\mathrm{Haar}(O_{d,k})$. The invariance property of $\mathrm{Haar}(O_d)$ immediately implies the following claims:

Claim 3.5. Let $A \in O_d$ be a random matrix drawn from $\mathrm{Haar}(O_d)$ and let $B \in O_{d,k}$ be a fixed matrix. Then $AB$ is distributed as $\mathrm{Haar}(O_{d,k})$.

Proof. The matrix $B$ can be written as the first $k$ columns of a matrix $C \in O_d$. Now $AB$ is the first $k$ columns of $AC$, and $AC$ is distributed as $\mathrm{Haar}(O_d)$ by the invariance property. This implies that $AB$ is distributed as $\mathrm{Haar}(O_{d,k})$.

Claim 3.6. Let $B \in O_{d,k}$ be a random matrix. Assume that for every fixed matrix $A \in O_d$, the random matrices $B$ and $AB$ have the same distribution. Then $B \sim \mathrm{Haar}(O_{d,k})$.

Proof. If we draw $A$ independently from $\mathrm{Haar}(O_d)$, the random matrices $B$ and $AB$ still have the same distribution. By Claim 3.5, $AB$ is distributed as $\mathrm{Haar}(O_{d,k})$, so $B$ must also be distributed as $\mathrm{Haar}(O_{d,k})$.

With the definition of $\mathrm{Haar}(O_{d,k})$, we state our lower bound in the following theorem:

Theorem 3.7. Let $k, d, n$ be positive integers satisfying $k < d$ and $k \le n$. Let $m_1, \dots, m_n$ be positive integers and $\sigma, \eta_1, \dots, \eta_n$ be positive real numbers. Suppose we draw $U \in O_{d,k}$ from $\mathrm{Haar}(O_{d,k})$ and then draw $\mu_1, \dots, \mu_n$ independently from $N(0, \sigma^2 UU^{\mathsf T})$. For every $i = 1, \dots, n$, we draw $m_i$ data points $x_{ij} = \mu_i + z_{ij}$ for $j = 1, \dots, m_i$, where each $z_{ij}$ is drawn independently from the spherical Gaussian $N(0, \eta_i^2 I)$. Let $\hat\Gamma$ be any estimator mapping $(x_{ij})_{1 \le i \le n, 1 \le j \le m_i}$ to a (possibly randomized) $k$-dimensional subspace of $\mathbb{R}^d$, and let $\theta$ denote the maximum principal angle between $\hat\Gamma((x_{ij})_{1 \le i \le n, 1 \le j \le m_i})$ and the true subspace $\Gamma = \mathrm{col}(U)$. If real numbers $t \ge 0$ and $\delta \in [0, 1/2)$ satisfy $\Pr[\sin\theta \le t] \ge 1 - \delta$, then
$$t \ge \Omega\left(\min\left\{1, \sqrt{\frac{(d - k)(1 - \delta)}{\sum_{i=k}^{n} \gamma_i}}\right\}\right), \tag{11}$$
where $\gamma_1, \dots, \gamma_n$ are defined in Definition 3.1.

Note that $\gamma_i' = \gamma_i$ for $i \ge k$, so our upper bound in (10) matches the lower bound (11) up to a constant factor assuming $\delta = \Theta(1)$ and $d \ge (1 + \Omega(1))k$. We use the local Fano method to prove the lower bound, relying on the technical lemmas in Appendix D. In particular, we reduce our goal to proving an upper bound on the KL divergence between Gaussian distributions whose covariance matrices are defined by matrices $U, \hat U \in O_{d,k}$ with $\|UU^{\mathsf T} - \hat U\hat U^{\mathsf T}\|_F$ bounded. We prove the following lemma in Appendix J, which upper bounds the KL divergence in terms of $\|UU^{\mathsf T} - \hat U\hat U^{\mathsf T}\|_F$:

Lemma 3.8. For $\sigma \in \mathbb{R}_{\ge 0}$, $\eta \in \mathbb{R}_{>0}$, and $U, \hat U \in O_{d,k}$, define $\Sigma = \sigma^2 UU^{\mathsf T} + \eta^2 I$ and $\hat\Sigma = \sigma^2 \hat U\hat U^{\mathsf T} + \eta^2 I$. Then
$$D_{\mathrm{kl}}(N(0, \hat\Sigma)\,\|\,N(0, \Sigma)) = \frac{\sigma^4\|UU^{\mathsf T} - \hat U\hat U^{\mathsf T}\|_F^2}{4(\sigma^2\eta^2 + \eta^4)}.$$

Lemma 3.8 and the results in Appendix D allow us to prove a version of (11) in which the sum in the denominator is over $i = 1, \dots, n$. This, however, is weaker and less useful than (11), where the sum in the denominator is over $i = k, k+1, \dots, n$.
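Lemma 3.8 is easy to check numerically against the standard closed-form KL divergence between centered Gaussians; a small sketch with arbitrary parameter values of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, sigma, eta = 8, 3, 1.3, 0.7

# orthonormal bases with Haar-distributed column spaces (QR of Gaussian matrices)
U,  _ = np.linalg.qr(rng.normal(size=(d, k)))
Uh, _ = np.linalg.qr(rng.normal(size=(d, k)))

Sigma  = sigma**2 * U  @ U.T  + eta**2 * np.eye(d)
Sigmah = sigma**2 * Uh @ Uh.T + eta**2 * np.eye(d)

# closed-form KL divergence between centered Gaussians:
# D(N(0, Sh) || N(0, S)) = (tr(S^{-1} Sh) - d + log det S - log det Sh) / 2
inv = np.linalg.inv(Sigma)
kl = 0.5 * (np.trace(inv @ Sigmah) - d
            + np.linalg.slogdet(Sigma)[1] - np.linalg.slogdet(Sigmah)[1])

rhs = (sigma**4 * np.linalg.norm(U @ U.T - Uh @ Uh.T, 'fro')**2
       / (4 * (sigma**2 * eta**2 + eta**4)))
print(kl, rhs)   # the two values agree up to floating-point error
```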
To prove Theorem 3.7, we extract a hard distribution in which the data points from users $1, \dots, k-1$ are "useless" for subspace recovery. Let $\Gamma_1$ be the $(k-1)$-dimensional subspace spanned by $\mu_1, \dots, \mu_{k-1}$. We let $v_1, \dots, v_{k-1}$ be a random orthonormal basis of $\Gamma_1$, and we append another vector $v_k \in \Gamma$ to form an orthonormal basis $v_1, \dots, v_k$ of $\Gamma$. We define $V_1 = [v_1 \cdots v_{k-1}] \in O_{d,k-1}$ and $V = [v_1 \cdots v_k] \in O_{d,k}$. In Figure 1 we show a graphical model describing the dependencies among the random objects we defined. Let us focus on the joint distribution of $(V_1, V, (\mu_1, \dots, \mu_{k-1}))$. By the invariance property, for any matrices $\tilde V_1 \in O_{d,k-1}$, $\tilde V \in O_{d,k}$, measurable set $S \subseteq (\mathbb{R}^d)^{k-1}$, and orthogonal matrix $G \in O_d$,
$$\Pr[(\mu_1, \dots, \mu_{k-1}) \in S \mid V = \tilde V, V_1 = \tilde V_1] = \Pr[(\mu_1, \dots, \mu_{k-1}) \in SG \mid V = G\tilde V, V_1 = G\tilde V_1],$$
where $SG = \{(G\tilde\mu_1, \dots, G\tilde\mu_{k-1}) : (\tilde\mu_1, \dots, \tilde\mu_{k-1}) \in S\}$. For any $\tilde V, \tilde V' \in O_{d,k}$ whose first $k-1$ columns both equal $\tilde V_1$, there exists $G \in O_d$ such that $\tilde V' = G\tilde V$ and thus $\tilde V_1 = G\tilde V_1$. This implies that for any $\tilde\mu \in \mathrm{col}(\tilde V_1)$ we have $G\tilde\mu = \tilde\mu$, and thus $(S \cap \mathrm{col}(\tilde V_1)^{k-1})G = S \cap \mathrm{col}(\tilde V_1)^{k-1}$ for any measurable $S \subseteq (\mathbb{R}^d)^{k-1}$. Here, $\mathrm{col}(\tilde V_1)^{k-1} = \{(\tilde\mu_1, \dots, \tilde\mu_{k-1}) : \tilde\mu_i \in \mathrm{col}(\tilde V_1) \text{ for } i = 1, \dots, k-1\} \subseteq (\mathbb{R}^d)^{k-1}$. Conditioned on $V_1 = \tilde V_1$, for every $i = 1, \dots, k-1$ we have $\mu_i \in \Gamma_1 = \mathrm{col}(V_1) = \mathrm{col}(\tilde V_1)$, which implies $(\mu_1, \dots, \mu_{k-1}) \in \mathrm{col}(\tilde V_1)^{k-1}$. Therefore,
$$\begin{aligned}
\Pr[(\mu_1, \dots, \mu_{k-1}) \in S \mid V = \tilde V, V_1 = \tilde V_1]
&= \Pr[(\mu_1, \dots, \mu_{k-1}) \in S \cap \mathrm{col}(\tilde V_1)^{k-1} \mid V = \tilde V, V_1 = \tilde V_1] \\
&= \Pr[(\mu_1, \dots, \mu_{k-1}) \in (S \cap \mathrm{col}(\tilde V_1)^{k-1})G \mid V = G\tilde V, V_1 = G\tilde V_1] \\
&= \Pr[(\mu_1, \dots, \mu_{k-1}) \in S \cap \mathrm{col}(\tilde V_1)^{k-1} \mid V = \tilde V', V_1 = \tilde V_1] \\
&= \Pr[(\mu_1, \dots, \mu_{k-1}) \in S \mid V = \tilde V', V_1 = \tilde V_1].
\end{aligned}$$
This shows that $(\mu_1, \dots, \mu_{k-1})$ and $V$ are conditionally independent given $V_1$. Therefore, the joint distribution of $(V_1, V, (\mu_1, \dots, \mu_{k-1}))$ can be formed by first drawing $V$ and $V_1$, and then drawing $\mu_1, \dots, \mu_{k-1}$ based only on $V_1$ and not on $V$. Since $\mu_k, \dots, \mu_n$ are drawn iid from $N(0, \sigma^2 UU^{\mathsf T}) = N(0, \sigma^2 VV^{\mathsf T})$, we obtain the graphical model shown in Figure 2. By Claim 3.6, the marginal distribution of $V$ is $\mathrm{Haar}(O_{d,k})$. By Claim 3.5, we can implement this distribution by first drawing $W \sim \mathrm{Haar}(O_d)$, then drawing $E$ independently from any distribution over $O_{d,k}$, and letting $V = WE$. We choose the distribution of $E$ later, ensuring that the first $k-1$ columns of $E$ always equal $\begin{bmatrix} I_{k-1} \\ 0 \end{bmatrix}$. This guarantees that the first $k-1$ columns of $W$ and $V$ coincide, so $V_1$ is exactly the first $k-1$ columns of $W$, resulting in the graphical model shown in Figure 3. Note that in Figure 3 there is no directed path from $E$ to $(\mu_1, \dots, \mu_{k-1})$; intuitively, this means that knowing $(\mu_1, \dots, \mu_{k-1})$ gives us no information about $E$. By choosing the distribution of $E$ appropriately, we can then prove (11) with a denominator that does not contain $\gamma_1, \dots, \gamma_{k-1}$. We defer the complete proof of Theorem 3.7 to Appendix K.

4 Linear Models
In the linear models setting, the data distribution of user $i$ is parameterized by an unknown vector $\beta_i \in \mathbb{R}^d$. As before, we assume that the vectors $\beta_1, \dots, \beta_n$ of the $n$ users lie in an unknown $k$-dimensional subspace $\Gamma$, and our goal is to recover this subspace using the following data. For every $i = 1, \dots, n$, we have $m_i$ data points from user $i$: $(x_{i1}, y_{i1}), \dots, (x_{im_i}, y_{im_i}) \in \mathbb{R}^d \times \mathbb{R}$. For every $j = 1, \dots, m_i$, we assume the measurement $x_{ij} \in \mathbb{R}^d$ is a random vector drawn independently from an $O(1)$-sub-Gaussian distribution with zero mean and identity covariance matrix. The measurement outcome is determined by $y_{ij} = x_{ij}^{\mathsf T}\beta_i + z_{ij}$, where the random noise $z_{ij} \in \mathbb{R}$ can depend on the measurements $x_{i1}, \dots, x_{im_i}$. Conditioned on $x_{i1}, \dots, x_{im_i}$, we assume every $z_{ij}$ for $j = 1, \dots, m_i$ is independently drawn from an $\eta_i$-sub-Gaussian distribution with zero mean, but we do not assume that the conditional distribution of $z_{ij}$ is the same for every $j$. The (in)dependence among the $x_{ij}$ and $z_{ij}$ for $i = 1, \dots, n$ and $j = 1, \dots, m_i$ is summarized by the example graphical model in Figure 4.

Since we allow the noise $z_{ij}$ to depend on the measurements $x_{ij}$, it is information-theoretically impossible to recover the subspace if we only have one data point from every user. Consider the scenario where every $\beta_i$ is drawn independently from $N(0, \sigma^2 uu^{\mathsf T})$ for an unknown unit vector $u \in \mathbb{R}^d$, and every $x_{ij}$ is drawn independently and uniformly from $\{-1, 1\}^d$. If we set $z_{ij} = x_{ij}^{\mathsf T}\nu_{ij}$, where $\nu_{ij}$ is independently drawn from $N(0, \sigma^2(I - uu^{\mathsf T}))$, then every $y_{ij}$ satisfies $y_{ij} = x_{ij}^{\mathsf T}(\beta_i + \nu_{ij})$, where $\beta_i + \nu_{ij}$ is distributed as $N(0, \sigma^2 I)$ independently of $x_{ij}$. This implies that the joint distribution of $((x_{i1}, y_{i1}))_{i=1,\dots,n}$ does not change with $u$, i.e., we get no information about $u$ from one data point per user.
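This construction can be checked numerically; the following small simulation (our own, with arbitrary parameter values) shows that the statistic $y_i x_i$ has zero mean and an isotropic second moment, and therefore carries no directional signal about $u$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, sigma = 10, 100000, 1.0
u = np.zeros(d); u[0] = 1.0                     # unknown direction to be recovered

beta = sigma * rng.normal(size=n)[:, None] * u  # beta_i ~ N(0, sigma^2 u u^T)
x = rng.choice([-1.0, 1.0], size=(n, d))        # x_i uniform on {-1, 1}^d
g = rng.normal(size=(n, d))
nu = sigma * (g - np.outer(g @ u, u))           # nu_i ~ N(0, sigma^2 (I - u u^T))
y = np.einsum('ij,ij->i', x, beta + nu)         # y_i = x_i^T (beta_i + nu_i)

# beta_i + nu_i ~ N(0, sigma^2 I) independently of x_i, so E[y_i x_i] = 0 and
# E[(y_i x_i)(y_i x_i)^T] = sigma^2 d I is isotropic
V = x * y[:, None]
print(np.linalg.norm(V.mean(axis=0)))           # close to 0: no first-moment signal
top = np.linalg.eigh(V.T @ V / n)[1][:, -1]
print(abs(top @ u))                             # no better than a random direction
```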
Thus, we assume $m_i \ge 2$ for every user $i$. In this case, we achieve error upper bounds that match those in [Tripuraneni et al., 2021] despite our relaxed assumptions on the noise. Our estimator is the subspace $\hat\Gamma$ spanned by the top-$k$ eigenvectors of $A$ defined in (3). We defer the analysis of our estimator to Appendix L.

Acknowledgments and Disclosure of Funding
Part of this work was performed while LH was interning at Apple. LH is also supported by Omer Reingold's NSF Award IIS-1908774, Omer Reingold's Simons Foundation Investigators Award 689988, and Moses Charikar's Simons Foundation Investigators Award.
1. What is the focus of the paper regarding subspace estimation?
2. What are the strengths of the proposed approach, particularly in terms of technical novelty?
3. What are the weaknesses of the paper, especially regarding the assumptions and writing clarity?
4. Do you have any questions or concerns about the paper's content?
5. What are the limitations of the proposed method that the authors did not address?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors consider the problem of estimating a subspace spanned by unknown vectors, given noisy observations of those vectors. Specifically, each user i holds a vector μ_i, the observations are x_ij = μ_i + z_ij, and the authors want to estimate the k-dimensional subspace spanned by μ_1, ..., μ_n. The authors propose an estimator that takes a weighted combination of the observed vectors to form a sample covariance matrix; the final estimate is the subspace spanned by the top-k eigenvectors of this matrix. The upper bound shows that the maximal angle between the true subspace and the estimated subspace is small under certain sub-Gaussianity assumptions on the μ vectors. The authors also construct a hard distribution over the μ_i's and prove an almost-matching lower bound, showing that any algorithm must incur an error close to the prescribed upper bound.

Strengths And Weaknesses
Strengths:
- The analysis of the upper bound is original, with good technical novelty. I find the estimator and the prescribed optimal weights quite interesting.
- The lower bound uses a "standard" approach of applying Fano's inequality to a maximally separated set, but the construction of the hard distribution and the subsequent analysis are interesting and novel.
- The application to linear models / meta-learning is interesting, although it is not the main focus of the paper.
Weaknesses:
- The authors state that having non-isotropic noise in the observations is a very challenging problem, as the noise may be strongly correlated with the distribution of the user's vectors; in the example provided at the beginning of the paper, the noise is strictly orthogonal to the user. However, later on, they assume that the noise is sub-Gaussian and the user vectors are sufficiently anti-concentrated within their subspace, and it is no longer clear whether the same hardness holds.
- The writing can be a little confusing at times. The bound in Theorem 2.1 requires σ_k to depend on w, and this is only resolved in the next section; even there, the authors do not state how σ_k is ultimately lower bounded.
- The assumptions can be somewhat cryptic, such as Assumption 3.2. I suppose Assumption 3.2 helps ensure that users k+1, ..., n (after sorting by relative importance) provide useful information. I am not sure the paragraph after Assumption 3.2 helps explain its significance / requirement.

Questions
I do not have major concerns about the paper. However, a more detailed discussion of the first point in the "Weaknesses" section would be appreciated.

Limitations
The authors do not really address the limitations of their work. They do not state conditions under which their assumptions will fail, and addressing that is important.
NIPS
We show that the error upper bound (2) is optimal up to a constant factor by proving a matching information-theoretic lower bound (Theorem 3.7). Our lower bound holds for general mi and ηi that can vary among users i, and it holds even when the noise vectors zij are drawn from spherical Gaussians, showing that our estimator essentially pays no additional cost in error or sample complexity due to non-isotropic noise. We prove the lower bound using Fano’s method on a local packing over the Grassmannian manifold. We carefully select a non-trivial hard distribution so that the strength of our lower bound is not affected by a group of fewer than k users each having a huge amount of data points with little noise. Linear Models. While the PCA setting is the main focus of our paper, we extend our research to a related linear models setting that has recently been well studied in the meta-learning and federated learning literature [Kong et al., 2020, Tripuraneni et al., 2021, Collins et al., 2021, Thekumparampil et al., 2021]. Here, the user-specific distribution of each user i is parameterized by βi ∈ Rd, and we again assume that β1, . . . , βn lie in a k-dimensional linear subspace that we want to recover. From each user i we observe mi data points (xij , yij) ∈ Rd × R for j = 1, . . . ,mi drawn from the user-specific distribution satisfying yij = xTijβi + zij for an O(1)-sub-Gaussian measurement vector xij ∈ Rd with zero mean and identity covariance and an ηi-sub-Gaussian mean-zero noise term zij ∈ R. While it may seem that non-isotropic noise is less of a challenge in this setting since each noise term zij is a scalar, our goal is to handle a challenging scenario where the variances of the noise terms zij can depend on the realized measurements xij , which is a more general and widely applicable setting compared to those in prior work. Similarly to the PCA setting, our relaxed assumptions on the noise make it information-theoretically impossible to do subspace recovery if we only have one data point from each user (see Section 4), and thus we assume each user contributes at least two data points. For appropriate weights w1, . . . , wn ∈ R≥0, we use the subspace spanned by the top-k eigenvectors of the following matrix A as our estimator: A = n∑ i=1 wi mi(mi − 1) ∑ j1 ̸=j2 (xij1yij1)(xij2yij2) T. (3) In the special case where η1 = · · · = ηn = η,m1 = · · · = mn = m, and ∥βi∥2 ≤ r for all i, our estimator achieves the following error upper bound using weights w1 = · · · = wn = 1/n: sin θ ≤ O ( log3(nd/δ) √ d(r4 + r2η2 + η4/m) mnσ4k ) , (4) where θ is the maximum principal angle between our estimator and the true subspace shared by β1, . . . , βn, and σ2k is the k-th largest eigenvalue of ∑n i=1 wiβiβ T i (Corollary L.2). Our error upper bound extends smoothly to more general cases where ηi and mi vary among users (Theorem L.1). Moreover, our upper bound matches the ones in prior work [e.g. Tripuraneni et al., 2021, Theorem 3] despite requiring less restrictive assumptions. 1.2 Related Work Principal component analysis under non-isotropic noise has been studied by Vaswani and Narayanamurthy [2017], Zhang et al. [2018] and Narayanamurthy and Vaswani [2020]. When translated to our setting, these papers focus on having only one data point from each user and thus they require additional assumptions—either the level of non-isotropy is low, or the noise is coordinate-wise independent and the subspace is incoherent. 
The estimation error guarantees in these papers depend crucially on how well these additional assumptions are satisfied. Zhu et al. [2019] and Cai et al. [2021] study PCA with noise and missing data, and Chen et al. [2021] and Cheng et al. [2021] study eigenvalue and eigenvector estimation under heteroscedastic noise. These four papers all assume that the noise is coordinate-wise independent and the subspace/eigenspace is incoherent. The linear models setting we consider has recently been studied as a basic setting of meta-learning and federated learning by Kong et al. [2020], Tripuraneni et al. [2021], Collins et al. [2021], and Thekumparampil et al. [2021]. These papers all make the assumption that the noise terms zij are independent of the measurements xij , an assumption that we relax in this paper. Collins et al. [2021] and Thekumparampil et al. [2021] make improvements in sample complexity and error guarantees compared to earlier work by Kong et al. [2020] and Tripuraneni et al. [2021], but Collins et al. [2021] focus on the noiseless setting (zij = 0) and Thekumparampil et al. [2021] require at least Ω(k2) examples per user. Tripuraneni et al. [2021] and Thekumparampil et al. [2021] assume that the measurements xij are drawn from the standard (multivariate) Gaussian distribution, where as Kong et al. [2020], Collins et al. [2021] and our work make the relaxed assumption that xij are sub-Gaussian with identity covariance, which, in particular, allows the fourth-order moments of xij to be non-isotropic. There is a large body of prior work on meta-learning beyond the linear setting [see e.g. Maurer et al., 2016, Tripuraneni et al., 2020, Du et al., 2020]. When collecting data from users, it is often important to ensure that private information about users is not revealed through the release of the learned estimator. Many recent works proposed and analyzed estimators that achieve user-level differential privacy in settings including mean estimation [Levy et al., 2021, Esfandiari et al., 2021], meta-learning [Jain et al., 2021] and PAC learning [Ghazi et al., 2021]. Recently, Cummings et al. [2021] study one-dimensional mean estimation in a setting similar to ours, under a differential privacy constraint. The matrix A we define in (1) is a weighted sum of Ai := 1mi(mi−1) ∑ j1 ̸=j2 xij1x T ij2 over users i = 1, . . . , n, and each Ai has the form of a U -statistic [Halmos, 1946, Hoeffding, 1948]. U -statistics have been applied to many statistical tasks including tensor completion [Xia and Yuan, 2019] and various testing problems [Zhong and Chen, 2011, He et al., 2021, Schrab et al., 2022]. In our definition of Ai, we do not make the assumption that the distributions of xi1, . . . , ximi are identical although the assumption is commonly used in applications of U -statistics. The matrix A in (3) is also a weighted sum of U -statistics where we again do not make the assumption of identical distribution. 1.3 Paper Organization In Section 2, we formally define the maximum principal angle and other notions we use throughout the paper. Our results in the PCA setting and the linear models setting are presented in Sections 3 and 4, respectively. We defer most technical proofs to the appendices. 2 Preliminaries We use ∥A∥ to denote the spectral norm of a matrix A, and use ∥u∥2 to denote the ℓ2 norm of a vector u. For positive integers k ≤ d, we use Od,k to denote the set of matrices A ∈ Rd×k satisfying ATA = Ik, where Ik is the k × k identity matrix. 
We use Od to denote Od,d, which is the set of d× d orthogonal matrices. We use col(A) to denote the linear subspace spanned by the columns of a matrix A. We use the base-e logarithm throughout the paper. Maximum Principal Angle. Let U, Û ∈ Od be two orthogonal matrices. Suppose the columns of U and Û are partitioned as U = [U1 U2], Û = [Û1 Û2] where U1, Û1 ∈ Od,k for an integer k satisfying 0 < k < d. Let Γ (resp. Γ̂) be the k-dimensional linear subspace spanned by the columns of U1 (resp. Û1). Originating from [Jordan, 1875], the maximum principal angle θ ∈ [0, π/2] between Γ and Γ̂, denoted by ∠(Γ, Γ̂) or ∠(U1, Û1), is defined by sin θ = ∥U1UT1 − Û1ÛT1 ∥ = ∥UT1 Û2∥ = ∥UT2 Û1∥. It is not hard to see that the maximum principal angle depend only on the subspaces Γ, Γ̂ and not on the choices of U and Û , and sin∠(Γ, Γ̂) is a natural metric between k-dimensional subspaces (see Appendix A for more details where we discuss the definition of principal angles for any two subspaces with possibly different dimensions). With the definition of the maximum principal angle, we can now state a variant of the Davis–Kahan sin θ theorem [Davis and Kahan, 1970] that will be useful in our analysis (see Appendix E for proof): Theorem 2.1 (Variant of Davis–Kahan sin θ theorem). Let A, Â ∈ Rd×d be symmetric matrices. Let λi denote the i-th largest eigenvalue of A. For a positive integer k smaller than d, let θ denote the maximum principal angle between the subspaces spanned by the top-k eigenvectors of A and Â. Assuming λk > λk+1, sin θ ≤ 2∥A− Â∥ λk − λk+1 . Sub-Gaussian and sub-exponential distributions. We say a random variable x ∈ R with expectation E[x] ∈ R has sub-Gaussian constant b ∈ R≥0 if E[|x − E[x]|p]1/p ≤ b √ p for every p ≥ 1. We say x has sub-exponential constant b ∈ R≥0 if E[|x− E[x]|p]1/p ≤ bp for every p ≥ 1. We say a random vector y ∈ Rd has sub-Gaussian (resp. sub-exponential) constant b ∈ R≥0 if for every unit vector u ∈ Rd (i.e., ∥u∥2 = 1), the random variable uTy ∈ R has sub-Gaussian (resp. sub-exponential) constant b. We say y is b-sub-Gaussian (resp. b-sub-exponential) if it has sub-Gaussian (resp. sub-exponential) constant b. 3 Principal Component Analysis In the principal component analysis (PCA) setting, our goal is to recover the k-dimensional subspace Γ spanned by the user-specific means µ1, . . . , µn ∈ Rd of the n users. From each user i, we have mi ≥ 2 data points xij = µi + zij for j = 1, . . . ,mi. (5) We assume the noise zij ∈ Rd is drawn independently from a mean zero distribution with subGaussian constant ηi. We do not assume that the variance of zij is the same along every direction, nor do we assume that the distribution of zij is the same for different (i, j). We first show an error upper bound for our estimator when the user-specific means µ1, . . . , µn are deterministic vectors (Section 3.1) and then apply this result to the case where µ1, . . . , µn are drawn from a sub-Gaussian distribution (Section 3.2). In Section 3.3 we prove an information-theoretic error lower bound matching our upper bound. 3.1 Fixed User-Specific Means We first focus on the case where µ1, . . . , µn are deterministic vectors. In this case, all the randomness in the data comes from the noise zij . Our estimator is the subspace Γ̂ spanned by the top-k eigenvectors of A defined in (1). For ℓ = 1, . . . , d, we define σℓ ≥ 0 such that σ2ℓ is the ℓ-th largest eigenvalue of ∑n i=1 wiµiµ T i . Since µ1, . . . , µn share a k-dimensional subspace, σℓ = 0 for ℓ > k. 
We prove the following general theorem on the error guarantee of our estimator: Theorem 3.1. Define ξ2 = ∥ ∑n i=1 w 2 i µiµ T i η 2 i /mi∥ and let θ denote the maximum principal angle between our estimator Γ̂ and the true subspace Γ spanned by µ1, . . . , µn. For any δ ∈ (0, 1/2), with probability at least 1− δ, sin θ = O σ−2k √√√√(d+ log(1/δ))(ξ2 + n∑ i=1 w2i η 4 i m2i ) + σ−2k (d+ log(1/δ))maxi wiη 2 i mi . (6) We can simplify the bound in Theorem 3.1 by considering special cases: Corollary 3.2. Assume max{η1/ √ m1, . . . , ηn/ √ mn} = t and we choose w1 = · · · = wn = 1/n. For any δ ∈ (0, 1/2), with probability at least 1− δ, sin θ = O ( tσ1 + t 2 σ2k √ d+ log(1/δ) n ) . (7) In particular, when η1 = · · · = ηn = η, and m1 = · · · = mn = m, error bound (7) becomes sin θ = O (( ησ1 σ2k √ m + η2 σ2km )√ d+ log(1/δ) n ) . We defer the complete proof of Theorem 3.1 and Corollary 3.2 to Appendices F and G. Our proof is based on the Davis-Kahan sin θ theorem (Theorem 2.1). Since σ2k+1 = 0, Theorem 2.1 implies sin θ ≤ 2∥A− ∑n i=1 wiµiµ T i ∥ σ2k . (8) This reduces our goal to proving an upper bound on the spectral norm of A− ∑n i=1 wiµiµ T i . Since for distinct j1 and j2 in {1, . . . ,mi} we have E[xij1xTij2 ] = µiµ T i , our construction of A in (1) guarantees E[A] = ∑n i=1 wiµiµ T i . Therefore, our goal becomes controlling the deviation of A from its expectation, and we achieve this goal using techniques for matrix concentration inequalities. 3.2 Sub-Gaussian User-Specific Means We apply our error upper bound in Theorem 3.1 to the case where µ1, . . . , µn ∈ Rd are drawn iid from N(0, σ2UUT) for an unknown U ∈ Od,k. We still assume that each data point xij ∈ Rd is generated by adding a noise vector zij ∈ Rd to the user-specific mean µi as in (5). We do not assume that the noise vectors (zij)1≤i≤n,1≤j≤mi are independent of the user-specific means (µi)1≤i≤n, but we assume that when conditioned on (µi)1≤i≤n, every noise vector zij independently follows a distribution with mean zero and sub-Gaussian constant ηi. We use the same estimator Γ̂ as before: Γ̂ is the subspace spanned by the top-k eigenvectors of A defined in (1). We determine the optimal weights w1, . . . , wn in (1) as long as m1, . . . ,mn and η1, . . . , ηn satisfy a mild assumption (Assumption 3.2), achieving an error upper bound in Theorem 3.4. In the next subsection, we prove an error lower bound (Theorem 3.7) that matches our upper bound (Theorem 3.4) up to a constant factor, assuming d ≥ (1 + Ω(1))k and δ = Θ(1). We prove our error upper bound in a slightly more general setting than µ1, . . . , µn drawn iid from N(0, σ2UUT). Specifically, we make the following assumption on the distribution of µ1, . . . , µn: Assumption 3.1. The user-specific means µ1, . . . , µn ∈ Rd are mean-zero independent random vectors supported on an unknown k-dimensional subspace Γ. Moreover, for a parameter σ > 0, for every i = 1, . . . , n, µi has sub-Gaussian constant O(σ), and the k-th largest eigenvalue of E[µiµTi ] is at least σ2. Under this assumption, we have the following lower bound on the σ2k in Theorem 3.1 (see Appendix H for proof): Claim 3.3. Under Assumption 3.1, let w1, . . . , wn ∈ R≥0 be user weights satisfying w1+ · · ·+wn = 1 and σ2k be the k-th largest eigenvalue of ∑n i=1 wiµiµ T i . There exists an absolute constant C∗ > 1 such that for any δ ∈ (0, 1/2), as long as max1≤i≤n wi ≤ 1/C∗(k + log(1/δ)), then σ2k ≥ σ2/2 with probability at least 1− δ/2. The following definition is important for us to choose the weights w1, . . . 
, wn in (1) optimally: Definition 3.1. Define γi = ( η2i σ2mi + η4i σ4m2i )−1 and assume w.l.o.g. that γ1 ≥ · · · ≥ γn. Define γ′i = γi if i ≥ k, and γ′i = γk if i < k. Intuitively, we can view γi as measuring the “amount of information” provided by the data points from user i. This is consistent with the fact that γi increases as the number mi of data points from user i increases, and γi decreases as the noise magnitude ηi from user i increases. With the users sorted so that γ1 ≥ · · · ≥ γn, the quantity γ′i is then defined to be γk for the k most “informative” users i = 1, . . . , k, and γ′i = γi for other users. We make the following mild assumption on γ ′ i under which we achieve optimal estimation error: Assumption 3.2. ∑n i=1 γ ′ i ≥ C∗(k + log(1/δ))γ′1 for C∗ defined in Claim 3.3. By the definition of γ′i, it is easy to show that Assumption 3.2 is equivalent to ∑n i=k+1 γi ≥ ((C∗ − 1)k + C∗ log(1/δ))γk. Therefore, if we view γi as the “amount of information” from user i, Assumption 3.2 intuitively requires that a significant contribution to the total “information” comes from outside the k most “informative” users. This assumption allows us to avoid the case where we only have exactly n = k users: in that case, we would have σ2k ≈ σ2/k2 for uniform weights w1 = · · · = wn (see [Rudelson and Vershynin, 2008] and references therein), as opposed to the desired σ2k ≥ σ2/2 in Claim 3.3. Assumption 3.2 is a mild assumption. For example, when γk = · · · = γn, Assumption 3.2 holds as long as n ≥ C∗(k+log(1/δ)). Also, since γ′1 = · · · = γ′k ≥ γ′k+1 ≥ · · · ≥ γ′n ≥ 0, it trivially holds that ∑n i=1 γ ′ i ≥ kγ′1. Assumption 3.2 is relatively mild when compared to this trivial inequality. Under Assumption 3.2, we show that it is optimal to choose the weights w1, . . . , wn as wi = γ′i∑n ℓ=1 γ ′ ℓ . (9) Specifically, if we plug (9) into Theorem 3.1 and bound ξ and σk based on the distribution of µ1, . . . , µn, we get the following error upper bound which matches our lower bound (Theorem 3.7) in Section 3.3. We defer its proof to Appendix I. Theorem 3.4. Under Assumptions 3.1 and 3.2, if we choose w1, . . . , wn as in (9) and define θ = ∠(Γ, Γ̂), for δ ∈ (0, 1/2), with probability at least 1− δ, sin θ ≤ O (√ d+ log(1/δ)∑n i=1 γ ′ i ) . (10) For comparison, consider the setting when σ = ηi = 1 for every i = 1, . . . , n. The result then says that sin θ is bounded by approximately √ d∑n i=1 mi . This is the same rate as we would get if we have∑n i=1 mi users each contributing a single independent data point with homogeneous spherical noise. Thus as long as the data points are not too concentrated on fewer than k users, the heterogeneity comes at no additional cost. 3.3 Lower Bound We prove a lower bound matching the upper bound in Theorem 3.4 up to constant in the setting where δ = Θ(1), d ≥ (1 + Ω(1))k. For every positive integer d, there is a natural “uniform” distribution over Od given by Haar’s theorem [Haar, 1933] (see e.g. [Diestel and Spalsbury, 2014] for a textbook). We denote this distribution by Haar(Od). A random matrix A drawn from Haar(Od) has the following invariance property: for any deterministic matrix B ∈ Od, the random matrices A,AB and BA all have the same distribution. For an integer k ≤ d, we can construct a random matrix A1 ∈ Od,k by first drawing A ∈ Rd×d from Haar(Od) and then take the first k columns of A. We denote the distribution of A1 by Haar(Od,k). The invariance property of Haar(Od) immediately implies the following claims: Claim 3.5. 
Let A ∈ Od be a random matrix drawn from Haar(Od) and let B ∈ Od,k be a fixed matrix. Then AB distributes as Haar(Od,k). Proof. The matrix B can be written as the first k columns of a matrix C ∈ Od. Now AB is the first k columns of AC, where AC distributes as Haar(Od) by the invariance property. This implies that AB distributes as Haar(Od,k). Claim 3.6. Let B ∈ Od,k be a random matrix. Assume for every fixed matrix A ∈ Od, the random matrices B and AB have the same distribution. Then B ∼ Haar(Od,k). Proof. If we draw A independently from Haar(Od), the random matrices B and AB still have the same distribution. By Claim 3.5, AB distributes as Haar(Od,k), so B must also distribute as Haar(Od,k). With the definition of Haar(Od,k), we state our lower bound in the following theorem: Theorem 3.7. Let k, d, n be positive integers satisfying k < d and k ≤ n. Let m1, . . . ,mn be positive integers and σ, η1, . . . , ηn be positive real numbers. Suppose we draw U ∈ Od,k from Haar(Od,k) and then draw µ1, . . . , µn independently from N(0, σ2UUT). For every i = 1, . . . , n, we draw mi data points xij for j = 1, . . . ,mi as xij = µi + zij , where each zij is drawn independently from the spherical Gaussian N(0, η2i I). Let Γ̂ be any estimator mapping (xij)1≤i≤n,1≤j≤mi to a (possibly randomized) k-dimensional subspace of Rd. Let θ denote the maximum principal angle between Γ̂((xij)1≤i≤n,1≤j≤mi) and the true subspace Γ = col(U). If real numbers t ≥ 0 and δ ∈ [0, 1/2) satisfy Pr[sin θ ≤ t] ≥ 1− δ, then t ≥ Ω ( min { 1, √ (d− k)(1− δ)∑n i=k γi }) , (11) where γ1, . . . , γn are defined in Definition 3.1. Note that γ′i = γi for i ≥ k, so our upper bound in (10) matches the lower bound (11) up to a constant factor assuming δ = Θ(1) and d ≥ (1 + Ω(1))k. We use the local Fano method to prove the lower bound using the technical lemmas in Appendix D. In particular, we reduce our goal to proving an upper bound on the KL divergence between Gaussian distributions whose covariance matrices are defined based on matrices U, Û ∈ Od,k with ∥UUT − Û ÛT∥F bounded. We prove the following lemma in Appendix J that upper bounds the KL divergence using ∥UUT − Û ÛT∥F : Lemma 3.8. For σ ∈ R≥0, η ∈ R>0, U, Û ∈ Od,k, define Σ = σ2UUT + η2I and Σ̂ = σ2Û ÛT + η2I . Then, Dkl(N(0, Σ̂)∥N(0,Σ)) = σ4∥UUT − Û ÛT∥2F 4(σ2η2 + η4) . Lemma 3.8 and the results in Appendix D allow us to prove a version of (11) in which the sum in the demoninator is over i = 1, . . . , n. This, however, is weaker and less useful than (11) in which the sum in the denominator is over i = k, k + 1, . . . , n. To prove Theorem 3.7, we extract a hard distribution in which the data points from users 1, . . . , k − 1 are “useless” in terms of subspace recovery. Let Γ1 be the (k − 1)-dimensional subspace spanned by µ1, . . . , µk−1. We let v1, . . . , vk−1 be a random orthonormal basis of Γ1, and we append another vector vk ∈ Γ to form an orthonormal basis v1, . . . , vk of Γ. We define V1 = [v1 · · · vk−1] ∈ Od,k−1 and V = [v1 · · · vk] ∈ Od,k. In Figure 1 we show a graphical model demonstrating the dependency among the random objects we defined. Let us focus on the joint distribution of (V1, V, (µ1, . . . , µk−1)). By the invariance property, for any matrices Ṽ1 ∈ Od,k−1, Ṽ ∈ Od,k, measurable set S ⊆ (Rd)k−1, and orthogonal matrix G ∈ Od, Pr[(µ1, . . . , µk−1) ∈ S|V = Ṽ , V1 = Ṽ1] = Pr[(µ1, . . . , µk−1) ∈ SG|V = GṼ , V1 = GṼ1], where SG = {(Gµ̃1, . . . , Gµ̃k−1) : (µ̃1, . . . , µ̃k−1) ∈ S}. 
For any Ṽ , Ṽ ′ ∈ Od,k whose first k − 1 columns are both Ṽ1, there exists G ∈ Od such that Ṽ ′ = GṼ and thus Ṽ1 = GṼ1. This implies that for any µ̃ ∈ col(Ṽ1), we have Gµ̃ = µ̃, and thus (S ∩ col(Ṽ1)k−1)G = S ∩ col(Ṽ1)k−1 for any measurable S ⊆ (Rd)k−1. Here, col(Ṽ1)k−1 = {(µ̃1, . . . , µ̃k−1) : µ̃i ∈ col(Ṽ1) for i = 1, . . . , k − 1} ⊆ (Rd)k−1. When conditioned on V1 = Ṽ1, for every i = 1, . . . , k − 1 we have µi ∈ Γ1 = col(V1) = col(Ṽ1), which implies that (µ1, . . . , µk−1) ∈ col(Ṽ1)k−1. Therefore, Pr[(µ1, . . . , µk−1) ∈ S|V = Ṽ , V1 = Ṽ1] = Pr[(µ1, . . . , µk−1) ∈ S ∩ col(Ṽ1)k−1|V = Ṽ , V1 = Ṽ1] = Pr[(µ1, . . . , µk−1) ∈ (S ∩ col(Ṽ1)k−1)G|V = GṼ , V1 = GṼ1] = Pr[(µ1, . . . , µk−1) ∈ S ∩ col(Ṽ1)k−1|V = Ṽ ′, V1 = Ṽ1] = Pr[(µ1, . . . , µk−1) ∈ S|V = Ṽ ′, V1 = Ṽ1]. This implies that (µ1, . . . , µk−1) and V are conditionally independent given V1. Therefore, the joint distribution of (V1, V, (µ1, . . . , µk−1)) can be formed by first drawing V and V1, and then drawing µ1, . . . , µk−1 based only on V1 and not on V . Since µk, . . . , µn are drawn iid from N(0, σ2UUT) = N(0, σ2V V T), we have the graphical model shown in Figure 2. By Claim 3.6, the marginal distribution of V is Haar(Od,k). By Claim 3.5, we can implement this distribution by first drawing W ∼ Haar(Od) and then drawing E independently from any distribution over Od,k and let V = WE. We choose the distribution of E later, where we ensure that the first k − 1 columms of E is always [ Ik−1 0 ] . This guarantees that the first k − 1 columns of W and V are the same, and thus V1 is exactly the first k− 1 columns of W , resulting in the graphical model shown in Figure 3. Note that in Figure 3 there is no directed path from E to (µ1, . . . , µk−1). Intuitively, this means that knowing (µ1, . . . , µk−1) gives us no information about E. Now by choosing the distribution of E appropriately, we can prove (11) in which the denominator does not contain γ1, . . . , γk−1. We defer the complete proof of Theorem 3.7 to Appendix K. 4 Linear Models In the linear models setting, the data distribution of user i is parameterized by an unknown vector βi ∈ Rd. As before, we assume that the vectors β1, . . . , βn from the n users lie in an unknown k-dimensional subspace Γ. Our goal is to recover the subspace using the following data. For every i = 1, . . . , n, we have mi data points from user i: (xi1, yi1), . . . , (ximi , yimi) ∈ Rd × R. For every j = 1, . . . ,mi, we assume the measurement xij ∈ Rd is a random vector drawn independently from an O(1)-sub-Gaussian distribution with zero mean and identity covariance matrix. The measurement outcome yij is determined by yij = xTijβi + zij , where the random noise zij ∈ R can depend on the measurements xi1, . . . , ximi . When conditioned on xi1, . . . , ximi , we assume every zij for j = 1, . . . ,mi is independently drawn from an ηi-sub-Gaussian distribution with zero mean, but we do not assume that the conditional distribution of zij is the same for every j = 1, . . . ,mi. The (in)dependence among xij and zij for i = 1, . . . , n and j = 1, . . . ,mi can be summarized by the example graphical model in Figure 4. Since we allow the noise zij to depend on the measurements xij , it is information-theoretically impossible to recover the subspace if we only have one data point from every user. Consider the scenario where every βi is drawn independently from N(0, σ2uuT) for an unknown unit vector u ∈ Rd and every xij is drawn independently and uniformly from {−1, 1}d. 
4 Linear Models

In the linear models setting, the data distribution of user $i$ is parameterized by an unknown vector $\beta_i \in \mathbb{R}^d$. As before, we assume that the vectors $\beta_1, \ldots, \beta_n$ from the $n$ users lie in an unknown $k$-dimensional subspace $\Gamma$, and our goal is to recover this subspace using the following data. For every $i = 1, \ldots, n$, we have $m_i$ data points from user $i$: $(x_{i1}, y_{i1}), \ldots, (x_{im_i}, y_{im_i}) \in \mathbb{R}^d \times \mathbb{R}$. For every $j = 1, \ldots, m_i$, we assume the measurement $x_{ij} \in \mathbb{R}^d$ is a random vector drawn independently from an $O(1)$-sub-Gaussian distribution with zero mean and identity covariance matrix. The measurement outcome $y_{ij}$ is determined by $y_{ij} = x_{ij}^{\mathsf{T}}\beta_i + z_{ij}$, where the random noise $z_{ij} \in \mathbb{R}$ can depend on the measurements $x_{i1}, \ldots, x_{im_i}$. When conditioned on $x_{i1}, \ldots, x_{im_i}$, we assume every $z_{ij}$ for $j = 1, \ldots, m_i$ is independently drawn from an $\eta_i$-sub-Gaussian distribution with zero mean, but we do not assume that the conditional distribution of $z_{ij}$ is the same for every $j = 1, \ldots, m_i$. The (in)dependence among $x_{ij}$ and $z_{ij}$ for $i = 1, \ldots, n$ and $j = 1, \ldots, m_i$ can be summarized by the example graphical model in Figure 4.

Since we allow the noise $z_{ij}$ to depend on the measurements $x_{ij}$, it is information-theoretically impossible to recover the subspace if we only have one data point from every user. Consider the scenario where every $\beta_i$ is drawn independently from $N(0, \sigma^2 uu^{\mathsf{T}})$ for an unknown unit vector $u \in \mathbb{R}^d$ and every $x_{ij}$ is drawn independently and uniformly from $\{-1, 1\}^d$. If we set $z_{ij} = x_{ij}^{\mathsf{T}}\nu_{ij}$, where $\nu_{ij}$ is drawn independently from $N(0, \sigma^2(I - uu^{\mathsf{T}}))$, then every $y_{ij}$ satisfies $y_{ij} = x_{ij}^{\mathsf{T}}(\beta_i + \nu_{ij})$, where $\beta_i + \nu_{ij}$ distributes as $N(0, \sigma^2 I)$ independently of $x_{ij}$. This implies that the joint distribution of $((x_{i1}, y_{i1}))_{i=1,\ldots,n}$ does not change with $u$, i.e., we get no information about $u$ from one data point per user.

Thus, we assume $m_i \ge 2$ for every user $i$. In this case, we achieve error upper bounds that match the ones in [Tripuraneni et al., 2021] despite our relaxed assumptions on the noise. Our estimator is the subspace $\hat{\Gamma}$ spanned by the top-$k$ eigenvectors of $A$ defined in (3). We defer the analysis of our estimator to Appendix L.
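To make the role of $m_i \ge 2$ concrete, here is a small simulation sketch (our own illustration with arbitrarily chosen parameters, for the case $k = 1$): it generates the adversarial noise $z_{ij} = x_{ij}^{\mathsf{T}}\nu_{ij}$ described above, forms the matrix $A$ from (3) with uniform weights and $m = 2$ points per user, and checks that the top eigenvector of $A$ aligns with $u$, even though a single point per user would carry no information about $u$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, m, sigma = 10, 20000, 2, 1.0

u = rng.standard_normal(d)
u /= np.linalg.norm(u)
P = np.eye(d) - np.outer(u, u)  # projector onto the complement of span(u)

A = np.zeros((d, d))
for _ in range(n):
    beta = sigma * rng.standard_normal() * u       # beta_i ~ N(0, sigma^2 u u^T)
    x = rng.choice([-1.0, 1.0], size=(m, d))       # x_ij ~ Uniform{-1, 1}^d
    nu = rng.standard_normal((m, d)) @ P * sigma   # nu_ij ~ N(0, sigma^2 (I - u u^T))
    y = np.sum(x * (beta + nu), axis=1)            # y_ij = x_ij^T (beta_i + nu_ij)
    v = x * y[:, None]                             # rows are x_ij * y_ij
    # Sum over ordered pairs j1 != j2 via the same identity as in (1),
    # normalized by m(m-1); uniform weights w_i = 1/n.
    s = v.sum(axis=0)
    A += (np.outer(s, s) - v.T @ v) / (m * (m - 1) * n)

eigvals, eigvecs = np.linalg.eigh(A)
top = eigvecs[:, -1]      # eigenvector for the largest eigenvalue
print(abs(top @ u))       # close to 1: the estimator recovers span(u)
```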
Acknowledgments and Disclosure of Funding

Part of this work was performed while LH was interning at Apple. LH is also supported by Omer Reingold’s NSF Award IIS-1908774, Omer Reingold’s Simons Foundation Investigators Award 689988, and Moses Charikar’s Simons Foundation Investigators Award.

1. What is the focus of the paper regarding principal component analysis?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. Do you have any concerns or questions about the established bounds and their dependence on certain dimensions?
4. How do you think the paper could be improved regarding numerical verification and optimization?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors consider the problem of principal component analysis from heterogeneous data with non-isotropic noise. An upper bound on the estimation error is established for a specific estimator, and a lower bound is obtained using Fano’s method. The upper bound matches the lower bound up to a constant factor.

Strengths And Weaknesses
Strengths: The main contributions of this work are the theoretical results, i.e., establishing lower and upper bounds that match each other up to a constant factor.
Weaknesses: There is no numerical verification to justify the optimality of the bound established in the theorems. The parameters lie in a $k$-dimensional subspace, so it is unclear why the optimal bound established in the theorems depends on the dimension $d$ but not on $k$; it would be expected to depend on $k$ instead.

Questions
It should be explained clearly why the optimal bound depends on the dimension $d$ but not on $k$. Numerical simulations, such as a phase-transition study, are expected to verify the optimality of the theoretical bound.

Limitations
NA.
NIPS
1. What is the focus and contribution of the paper on subspace recovery?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and optimality proof?
3. Do you have any concerns regarding the novelty of the proposed method, considering similar works in the literature?
4. How could the authors improve their bibliography and address potential gaps in their literature review?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper considers the problem of subspace recovery from heterogeneous data. Specifically, it is assumed that there exist $n$ different distributions and that the $i$-th distribution generates data according to $x_i = \mu_i + z_i$, where $z_i$ is a zero-mean noise vector. The goal is to estimate the subspace spanned by $\{\mu_i\}_{i=1}^{n}$. The authors propose an estimator and provide an upper bound on its estimation error. Moreover, a matching lower bound is established to show the optimality of the estimator, which also shows that non-spherical noise does not make the problem harder. Moving beyond, the authors also apply the method to the setting of mixed linear regression, leading to improvements over existing works.

Strengths And Weaknesses
Strengths: Overall, this paper is well written and easy to read. The theoretical results seem correct and sound, and the estimation error upper bound is shown to be optimal via a matching lower bound.
Weaknesses: The U-statistic-type estimator in (1) is not novel; similar estimators have already appeared in the literature, e.g., "On Polynomial Time Methods for Exact Low Rank Tensor Completion" by Dong Xia and Ming Yuan. I am therefore not sure about the novelty of the results, and this makes me believe that there might exist other works on this topic. Could the authors complete their bibliography and check whether there are similar results in the literature?

Questions
See above.

Limitations
N/A.
NIPS
Title Subspace Recovery from Heterogeneous Data with Non-isotropic Noise Abstract Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: the principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from n users with user i contributing data samples from a d-dimensional distribution with mean μi. Our goal is to recover the linear subspace shared by μ1, . . . , μn using the data points from all users, where every data point from user i is formed by adding an independent mean-zero noise vector to μi. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup. N/A Recovering linear subspaces from data is a fundamental and important task in statistics and machine learning. Motivated by heterogeneity in Federated Learning settings, we study a basic formulation of this problem: the principal component analysis (PCA), with a focus on dealing with irregular noise. Our data come from n users with user i contributing data samples from a d-dimensional distribution with mean µi. Our goal is to recover the linear subspace shared by µ1, . . . , µn using the data points from all users, where every data point from user i is formed by adding an independent mean-zero noise vector to µi. If we only have one data point from every user, subspace recovery is information-theoretically impossible when the covariance matrices of the noise vectors can be non-spherical, necessitating additional restrictive assumptions in previous work. We avoid these assumptions by leveraging at least two data points from each user, which allows us to design an efficiently-computable estimator under non-spherical and user-dependent noise. We prove an upper bound for the estimation error of our estimator in general scenarios where the number of data points and amount of noise can vary across users, and prove an information-theoretic error lower bound that not only matches the upper bound up to a constant factor, but also holds even for spherical Gaussian noise. This implies that our estimator does not introduce additional estimation error (up to a constant factor) due to irregularity in the noise. We show additional results for a linear regression problem in a similar setup. 1 Introduction We study the problem of learning low-dimensional structure amongst data distributions, given multiple samples from each distribution. 
This problem arises naturally in settings such as federated learning, where we want to learn from data coming from a set of individuals, each of which has samples from their own distribution. These distributions however are related to each other, and in this work, we consider the setting when these distributions have means lying in a low-dimensional subspace. The goal is to learn this subspace, even when the distributions may have different (and potentially non-spherical) variances. This heterogeneity can manifest itself in practice as differing number of samples per user, or the variance differing across individuals, possibly depending on their mean. Recovery of the subspace containing the means can in turn help better estimate individual means. In other words, this can allow for learning good estimator for all individual means, by leveraging information from all the individuals. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). The irregularity of the noise makes this task challenging even when we have sufficiently many individual distributions. For example, suppose we have n individuals and for every i = 1, . . . , n, an unknown µi ∈ Rd. For simplicity, suppose that µ1, . . . , µn are distributed independently as N(0, σ2uuT) for σ ∈ R≥0 and an unknown unit vector u ∈ Rd. In this setting, our goal is to recover the one-dimensional subspace, equivalently the vector u. For every i, we have a data point xi = µi+zi where zi ∈ Rd is a mean-zero noise vector. If zi is drawn independently from a spherical Gaussian N(0, α2I), we can recover the unknown subspace with arbitrary accuracy as n grows to infinity because 1n ∑ xix T i concentrates to E[xixTi ] = σ2uuT + α2I , whose top eigenvector is ±u. However, if the noise zi is drawn from a non-spherical distribution, the top eigenvector of 1n ∑ xix T i can deviate from ±u significantly, and to make things worse, if the noise zi is drawn independently from a non-spherical Gaussian N(0, σ2(I−uuT)+α2I), then our data points xi = µi+zi distribute independently as N(0, (σ2 + α2)I), giving no information about the vector u.1 The information-theoretic impossibility in this example however disappears as soon as one has at least two samples from each distribution. Indeed, given two data points xi1 = µi + zi1 and xi2 = µi + zi2 from user i, as long as the noise zi1, zi2 are independent and have zero mean, we always have E[xi1xTi2] = σ2uuT regardless of the specific distributions of zi1 and zi2. This allows us to recover the subspace in this example, as long as we have sufficiently many users each contributing at least two examples. As this is commonly the case in our motivating examples, we make this assumption of multiple data points per user, and show that this intuition extends well beyond this particular example. We design efficiently computable estimators for this subspace recovery problem given samples from multiple heteroscedastic distributions (see Section 1.1 for details). We prove upper bounds on the error of our estimator measured in the maximum principal angle (see Section 2 for definition). We also prove an information-theoretic error lower bound, showing that our estimator achieves the optimal error up to a constant factor in general scenarios where the number of data points and the amount of noise can vary across users. Somewhat surprisingly, our lower bound holds even when the noise distributes as spherical Gaussians. Thus non-spherical noise in setting does not lead to increased error. 
We then show that our techniques extend beyond the mean estimation problem to a linear regression setting where for each µi, we get (at least two) samples (xij , xTijµi + zij) where zij is zero-mean noise from some noise distribution that depends on i and xij . This turns out to be a model that was recently studied in the meta-learning literature under more restrictive assumptions (e.g. zij is independent of xij) [Kong et al., 2020, Tripuraneni et al., 2021, Collins et al., 2021, Thekumparampil et al., 2021]. We show a simple estimator achieving an error upper bound matching the ones in prior work without making these restrictive assumptions. 1.1 Our contributions PCA with heterogeneous and non-isotropic noise: Upper Bounds. In the PCA setting, the data points from each user i are drawn from a user-specific distribution with mean µi ∈ Rd, and we assume that µ1, . . . , µn lie in a shared k-dimensional subspace that we want to recover. Specifically, we have mi data points xij ∈ Rd from user i for j = 1, . . . ,mi, and each data point is determined by xij = µi + zij where zij ∈ Rd is a noise vector drawn independently from a mean zero distribution. We allow the distribution of zij to be non-spherical and non-identical across different pairs (i, j). We use ηi ∈ R≥0 to quantify the amount of noise in user i’s data points by assuming that zij is an ηi-sub-Gaussian random variable. As mentioned earlier, if we only have a single data point from each user, it is information-theoretically impossible to recover the subspace. Thus, we focus on the case where mi ≥ 2 for every i = 1, . . . , n. In this setting, for appropriate weights w1, . . . , wn ∈ R≥0, we compute a matrix A: A = n∑ i=1 wi mi(mi − 1) ∑ j1 ̸=j2 xij1x T ij2 , (1) where the inner summation is over all pairs j1, j2 ∈ {1, . . . ,mi} satisfying j1 ̸= j2. Our estimator is then defined by the subspace spanned by the top-k eigenvectors of A. Although the inner summation 1This information-theoretic impossibility naturally extends to recovering k-dimensional subspaces for k > 1 by replacing the unit vector u ∈ Rd with a matrix U ∈ Rd×k with orthonormal columns. is over mi(mi − 1) terms, the time complexity for computing it need not grow quadratically with mi because of the following equation: ∑ j1 ̸=j2 xij1x T ij2 = mi∑ j=1 xij mi∑ j=1 xij T − mi∑ j=1 xijx T ij . The flexibility in the weights w1, . . . , wn allows us to deal with variations in mi and ηi for different users i. In the special case where η1 = · · · = ηn = η and m1 = · · · = mn = m, we choose w1 = · · · = wn = 1/n and we show that our estimator achieves the following error upper bound with success probability at least 1− δ: sin θ = O (( ησ1 σ2k √ m + η2 σ2km )√ d+ log(1/δ) n ) . Here, θ is the maximum principal angle between our estimator and the true subspace shared by µ1, . . . , µn, and we define σℓ ≥ 0 such that σ2ℓ is the ℓ-th largest eigenvalue of ∑n i=1 wiµiµ T i . Our error upper bound for general mi, ηi, wi is given in Theorem 3.1. We instantiate our error upper bound to the case where µ1, . . . , µn are drawn iid from a Gaussian distribution N(0, σ2UUT), where the columns of U ∈ Rd×k form an orthonormal basis of the subspace containing µ1, . . . , µn. By choosing the weights w1, . . . , wn according to m1, . . . ,mn and η1, . . . , ηn, our estimator achieves the error upper bound sin θ ≤ O (√ d+ log(1/δ)∑n i=1 γ ′ i ) (2) under a mild assumption (Assumption 3.2), where γ′i is defined in Definition 3.1 and often equals( η2i σ2mi + η4i σ4m2i )−1 . PCA: Lower Bounds. 
We show that the error upper bound (2) is optimal up to a constant factor by proving a matching information-theoretic lower bound (Theorem 3.7). Our lower bound holds for general mi and ηi that can vary among users i, and it holds even when the noise vectors zij are drawn from spherical Gaussians, showing that our estimator essentially pays no additional cost in error or sample complexity due to non-isotropic noise. We prove the lower bound using Fano’s method on a local packing over the Grassmannian manifold. We carefully select a non-trivial hard distribution so that the strength of our lower bound is not affected by a group of fewer than k users each having a huge amount of data points with little noise. Linear Models. While the PCA setting is the main focus of our paper, we extend our research to a related linear models setting that has recently been well studied in the meta-learning and federated learning literature [Kong et al., 2020, Tripuraneni et al., 2021, Collins et al., 2021, Thekumparampil et al., 2021]. Here, the user-specific distribution of each user i is parameterized by βi ∈ Rd, and we again assume that β1, . . . , βn lie in a k-dimensional linear subspace that we want to recover. From each user i we observe mi data points (xij , yij) ∈ Rd × R for j = 1, . . . ,mi drawn from the user-specific distribution satisfying yij = xTijβi + zij for an O(1)-sub-Gaussian measurement vector xij ∈ Rd with zero mean and identity covariance and an ηi-sub-Gaussian mean-zero noise term zij ∈ R. While it may seem that non-isotropic noise is less of a challenge in this setting since each noise term zij is a scalar, our goal is to handle a challenging scenario where the variances of the noise terms zij can depend on the realized measurements xij , which is a more general and widely applicable setting compared to those in prior work. Similarly to the PCA setting, our relaxed assumptions on the noise make it information-theoretically impossible to do subspace recovery if we only have one data point from each user (see Section 4), and thus we assume each user contributes at least two data points. For appropriate weights w1, . . . , wn ∈ R≥0, we use the subspace spanned by the top-k eigenvectors of the following matrix A as our estimator: A = n∑ i=1 wi mi(mi − 1) ∑ j1 ̸=j2 (xij1yij1)(xij2yij2) T. (3) In the special case where η1 = · · · = ηn = η,m1 = · · · = mn = m, and ∥βi∥2 ≤ r for all i, our estimator achieves the following error upper bound using weights w1 = · · · = wn = 1/n: sin θ ≤ O ( log3(nd/δ) √ d(r4 + r2η2 + η4/m) mnσ4k ) , (4) where θ is the maximum principal angle between our estimator and the true subspace shared by β1, . . . , βn, and σ2k is the k-th largest eigenvalue of ∑n i=1 wiβiβ T i (Corollary L.2). Our error upper bound extends smoothly to more general cases where ηi and mi vary among users (Theorem L.1). Moreover, our upper bound matches the ones in prior work [e.g. Tripuraneni et al., 2021, Theorem 3] despite requiring less restrictive assumptions. 1.2 Related Work Principal component analysis under non-isotropic noise has been studied by Vaswani and Narayanamurthy [2017], Zhang et al. [2018] and Narayanamurthy and Vaswani [2020]. When translated to our setting, these papers focus on having only one data point from each user and thus they require additional assumptions—either the level of non-isotropy is low, or the noise is coordinate-wise independent and the subspace is incoherent. 
The estimation error guarantees in these papers depend crucially on how well these additional assumptions are satisfied. Zhu et al. [2019] and Cai et al. [2021] study PCA with noise and missing data, and Chen et al. [2021] and Cheng et al. [2021] study eigenvalue and eigenvector estimation under heteroscedastic noise. These four papers all assume that the noise is coordinate-wise independent and the subspace/eigenspace is incoherent. The linear models setting we consider has recently been studied as a basic setting of meta-learning and federated learning by Kong et al. [2020], Tripuraneni et al. [2021], Collins et al. [2021], and Thekumparampil et al. [2021]. These papers all make the assumption that the noise terms zij are independent of the measurements xij , an assumption that we relax in this paper. Collins et al. [2021] and Thekumparampil et al. [2021] make improvements in sample complexity and error guarantees compared to earlier work by Kong et al. [2020] and Tripuraneni et al. [2021], but Collins et al. [2021] focus on the noiseless setting (zij = 0) and Thekumparampil et al. [2021] require at least Ω(k2) examples per user. Tripuraneni et al. [2021] and Thekumparampil et al. [2021] assume that the measurements xij are drawn from the standard (multivariate) Gaussian distribution, where as Kong et al. [2020], Collins et al. [2021] and our work make the relaxed assumption that xij are sub-Gaussian with identity covariance, which, in particular, allows the fourth-order moments of xij to be non-isotropic. There is a large body of prior work on meta-learning beyond the linear setting [see e.g. Maurer et al., 2016, Tripuraneni et al., 2020, Du et al., 2020]. When collecting data from users, it is often important to ensure that private information about users is not revealed through the release of the learned estimator. Many recent works proposed and analyzed estimators that achieve user-level differential privacy in settings including mean estimation [Levy et al., 2021, Esfandiari et al., 2021], meta-learning [Jain et al., 2021] and PAC learning [Ghazi et al., 2021]. Recently, Cummings et al. [2021] study one-dimensional mean estimation in a setting similar to ours, under a differential privacy constraint. The matrix A we define in (1) is a weighted sum of Ai := 1mi(mi−1) ∑ j1 ̸=j2 xij1x T ij2 over users i = 1, . . . , n, and each Ai has the form of a U -statistic [Halmos, 1946, Hoeffding, 1948]. U -statistics have been applied to many statistical tasks including tensor completion [Xia and Yuan, 2019] and various testing problems [Zhong and Chen, 2011, He et al., 2021, Schrab et al., 2022]. In our definition of Ai, we do not make the assumption that the distributions of xi1, . . . , ximi are identical although the assumption is commonly used in applications of U -statistics. The matrix A in (3) is also a weighted sum of U -statistics where we again do not make the assumption of identical distribution. 1.3 Paper Organization In Section 2, we formally define the maximum principal angle and other notions we use throughout the paper. Our results in the PCA setting and the linear models setting are presented in Sections 3 and 4, respectively. We defer most technical proofs to the appendices. 2 Preliminaries We use ∥A∥ to denote the spectral norm of a matrix A, and use ∥u∥2 to denote the ℓ2 norm of a vector u. For positive integers k ≤ d, we use Od,k to denote the set of matrices A ∈ Rd×k satisfying ATA = Ik, where Ik is the k × k identity matrix. 
We use O_d to denote O_{d,d}, the set of d × d orthogonal matrices. We use col(A) to denote the linear subspace spanned by the columns of a matrix A. We use the base-e logarithm throughout the paper.

Maximum Principal Angle. Let U, Û ∈ O_d be two orthogonal matrices. Suppose the columns of U and Û are partitioned as U = [U_1 U_2], Û = [Û_1 Û_2], where U_1, Û_1 ∈ O_{d,k} for an integer k satisfying 0 < k < d. Let Γ (resp. Γ̂) be the k-dimensional linear subspace spanned by the columns of U_1 (resp. Û_1). Originating from [Jordan, 1875], the maximum principal angle θ ∈ [0, π/2] between Γ and Γ̂, denoted by ∠(Γ, Γ̂) or ∠(U_1, Û_1), is defined by

\sin\theta = \|U_1 U_1^T - \hat U_1 \hat U_1^T\| = \|U_1^T \hat U_2\| = \|U_2^T \hat U_1\|.

It is not hard to see that the maximum principal angle depends only on the subspaces Γ, Γ̂ and not on the choices of U and Û, and sin∠(Γ, Γ̂) is a natural metric between k-dimensional subspaces (see Appendix A for more details, where we discuss the definition of principal angles for any two subspaces with possibly different dimensions). With the definition of the maximum principal angle, we can now state a variant of the Davis–Kahan sin θ theorem [Davis and Kahan, 1970] that will be useful in our analysis (see Appendix E for proof):

Theorem 2.1 (Variant of Davis–Kahan sin θ theorem). Let A, Â ∈ R^{d×d} be symmetric matrices. Let λ_i denote the i-th largest eigenvalue of A. For a positive integer k smaller than d, let θ denote the maximum principal angle between the subspaces spanned by the top-k eigenvectors of A and Â. Assuming λ_k > λ_{k+1},

\sin\theta \le \frac{2\|A - \hat A\|}{\lambda_k - \lambda_{k+1}}.

Sub-Gaussian and sub-exponential distributions. We say a random variable x ∈ R with expectation E[x] ∈ R has sub-Gaussian constant b ∈ R_{≥0} if E[|x − E[x]|^p]^{1/p} ≤ b√p for every p ≥ 1. We say x has sub-exponential constant b ∈ R_{≥0} if E[|x − E[x]|^p]^{1/p} ≤ bp for every p ≥ 1. We say a random vector y ∈ R^d has sub-Gaussian (resp. sub-exponential) constant b ∈ R_{≥0} if for every unit vector u ∈ R^d (i.e., ‖u‖_2 = 1), the random variable u^T y ∈ R has sub-Gaussian (resp. sub-exponential) constant b. We say y is b-sub-Gaussian (resp. b-sub-exponential) if it has sub-Gaussian (resp. sub-exponential) constant b.

3 Principal Component Analysis

In the principal component analysis (PCA) setting, our goal is to recover the k-dimensional subspace Γ spanned by the user-specific means µ_1, ..., µ_n ∈ R^d of the n users. From each user i, we have m_i ≥ 2 data points

x_{ij} = \mu_i + z_{ij} \quad \text{for } j = 1, \dots, m_i.    (5)

We assume the noise z_ij ∈ R^d is drawn independently from a mean-zero distribution with sub-Gaussian constant η_i. We do not assume that the variance of z_ij is the same along every direction, nor do we assume that the distribution of z_ij is the same for different (i, j). We first show an error upper bound for our estimator when the user-specific means µ_1, ..., µ_n are deterministic vectors (Section 3.1), and then apply this result to the case where µ_1, ..., µ_n are drawn from a sub-Gaussian distribution (Section 3.2). In Section 3.3 we prove an information-theoretic error lower bound matching our upper bound.

3.1 Fixed User-Specific Means

We first focus on the case where µ_1, ..., µ_n are deterministic vectors. In this case, all the randomness in the data comes from the noise z_ij. Our estimator is the subspace Γ̂ spanned by the top-k eigenvectors of A defined in (1). For ℓ = 1, ..., d, we define σ_ℓ ≥ 0 such that σ_ℓ² is the ℓ-th largest eigenvalue of \sum_{i=1}^n w_i \mu_i \mu_i^T. Since µ_1, ..., µ_n share a k-dimensional subspace, σ_ℓ = 0 for ℓ > k.
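To make the estimator and the error metric concrete, here is a short numpy sketch (the names and data layout are our assumptions) that builds A as in (1) from per-user samples and evaluates the sine of the maximum principal angle against a reference subspace.

```python
import numpy as np

def pca_subspace_estimator(X, w, k):
    """X: list of (m_i, d) arrays of points x_ij; w: weights w_i; k: target dimension."""
    d = X[0].shape[1]
    A = np.zeros((d, d))
    for Xi, wi in zip(X, w):
        mi = Xi.shape[0]
        S = Xi.sum(axis=0)
        # sum over j1 != j2 of x_ij1 x_ij2^T = (sum_j x_ij)(sum_j x_ij)^T - sum_j x_ij x_ij^T
        A += wi / (mi * (mi - 1)) * (np.outer(S, S) - Xi.T @ Xi)
    return np.linalg.eigh(A)[1][:, -k:]  # top-k eigenvectors (eigh sorts ascending)

def sin_max_principal_angle(U1, V1):
    """U1, V1: (d, k) orthonormal bases; sin(theta) = ||U1 U1^T - V1 V1^T|| in spectral norm."""
    return np.linalg.norm(U1 @ U1.T - V1 @ V1.T, ord=2)
```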
We prove the following general theorem on the error guarantee of our estimator:

Theorem 3.1. Define ξ² = \| \sum_{i=1}^n w_i^2 \mu_i \mu_i^T \eta_i^2 / m_i \| and let θ denote the maximum principal angle between our estimator Γ̂ and the true subspace Γ spanned by µ_1, ..., µ_n. For any δ ∈ (0, 1/2), with probability at least 1 − δ,

\sin\theta = O\left( \sigma_k^{-2} \sqrt{ (d + \log(1/\delta))\left( \xi^2 + \sum_{i=1}^n \frac{w_i^2 \eta_i^4}{m_i^2} \right) } + \sigma_k^{-2} (d + \log(1/\delta)) \max_i \frac{w_i \eta_i^2}{m_i} \right).    (6)

We can simplify the bound in Theorem 3.1 by considering special cases:

Corollary 3.2. Assume max{η_1/√m_1, ..., η_n/√m_n} = t and we choose w_1 = ... = w_n = 1/n. For any δ ∈ (0, 1/2), with probability at least 1 − δ,

\sin\theta = O\left( \frac{t\sigma_1 + t^2}{\sigma_k^2} \sqrt{ \frac{d + \log(1/\delta)}{n} } \right).    (7)

In particular, when η_1 = ... = η_n = η and m_1 = ... = m_n = m, error bound (7) becomes

\sin\theta = O\left( \left( \frac{\eta\sigma_1}{\sigma_k^2 \sqrt{m}} + \frac{\eta^2}{\sigma_k^2 m} \right) \sqrt{ \frac{d + \log(1/\delta)}{n} } \right).

We defer the complete proofs of Theorem 3.1 and Corollary 3.2 to Appendices F and G. Our proof is based on the Davis–Kahan sin θ theorem (Theorem 2.1). Since σ_{k+1}² = 0, Theorem 2.1 implies

\sin\theta \le \frac{2\left\| A - \sum_{i=1}^n w_i \mu_i \mu_i^T \right\|}{\sigma_k^2}.    (8)

This reduces our goal to proving an upper bound on the spectral norm of A − \sum_{i=1}^n w_i \mu_i \mu_i^T. Since for distinct j_1 and j_2 in {1, ..., m_i} we have E[x_{ij_1} x_{ij_2}^T] = \mu_i \mu_i^T, our construction of A in (1) guarantees E[A] = \sum_{i=1}^n w_i \mu_i \mu_i^T. Therefore, our goal becomes controlling the deviation of A from its expectation, and we achieve this goal using techniques for matrix concentration inequalities.

3.2 Sub-Gaussian User-Specific Means

We apply our error upper bound in Theorem 3.1 to the case where µ_1, ..., µ_n ∈ R^d are drawn i.i.d. from N(0, σ²UU^T) for an unknown U ∈ O_{d,k}. We still assume that each data point x_ij ∈ R^d is generated by adding a noise vector z_ij ∈ R^d to the user-specific mean µ_i as in (5). We do not assume that the noise vectors (z_ij)_{1≤i≤n, 1≤j≤m_i} are independent of the user-specific means (µ_i)_{1≤i≤n}, but we assume that when conditioned on (µ_i)_{1≤i≤n}, every noise vector z_ij independently follows a distribution with mean zero and sub-Gaussian constant η_i. We use the same estimator Γ̂ as before: Γ̂ is the subspace spanned by the top-k eigenvectors of A defined in (1). We determine the optimal weights w_1, ..., w_n in (1) as long as m_1, ..., m_n and η_1, ..., η_n satisfy a mild assumption (Assumption 3.2), achieving an error upper bound in Theorem 3.4. In the next subsection, we prove an error lower bound (Theorem 3.7) that matches our upper bound (Theorem 3.4) up to a constant factor, assuming d ≥ (1 + Ω(1))k and δ = Θ(1).

We prove our error upper bound in a slightly more general setting than µ_1, ..., µ_n drawn i.i.d. from N(0, σ²UU^T). Specifically, we make the following assumption on the distribution of µ_1, ..., µ_n:

Assumption 3.1. The user-specific means µ_1, ..., µ_n ∈ R^d are mean-zero independent random vectors supported on an unknown k-dimensional subspace Γ. Moreover, for a parameter σ > 0, for every i = 1, ..., n, µ_i has sub-Gaussian constant O(σ), and the k-th largest eigenvalue of E[µ_i µ_i^T] is at least σ².

Under this assumption, we have the following lower bound on the σ_k² appearing in Theorem 3.1 (see Appendix H for proof):

Claim 3.3. Under Assumption 3.1, let w_1, ..., w_n ∈ R_{≥0} be user weights satisfying w_1 + ... + w_n = 1 and let σ_k² be the k-th largest eigenvalue of \sum_{i=1}^n w_i \mu_i \mu_i^T. There exists an absolute constant C* > 1 such that for any δ ∈ (0, 1/2), as long as max_{1≤i≤n} w_i ≤ 1/(C*(k + log(1/δ))), then σ_k² ≥ σ²/2 with probability at least 1 − δ/2.
The following definition is important for choosing the weights w_1, ..., w_n in (1) optimally:

Definition 3.1. Define

\gamma_i = \left( \frac{\eta_i^2}{\sigma^2 m_i} + \frac{\eta_i^4}{\sigma^4 m_i^2} \right)^{-1}

and assume w.l.o.g. that γ_1 ≥ ... ≥ γ_n. Define γ'_i = γ_i if i ≥ k, and γ'_i = γ_k if i < k.

Intuitively, we can view γ_i as measuring the "amount of information" provided by the data points from user i. This is consistent with the fact that γ_i increases as the number m_i of data points from user i increases, and γ_i decreases as the noise magnitude η_i of user i increases. With the users sorted so that γ_1 ≥ ... ≥ γ_n, the quantity γ'_i is defined to be γ_k for the k most "informative" users i = 1, ..., k, and γ'_i = γ_i for the other users. We make the following mild assumption on γ'_i, under which we achieve optimal estimation error:

Assumption 3.2. \sum_{i=1}^n \gamma'_i \ge C^*(k + \log(1/\delta))\,\gamma'_1 for the constant C* defined in Claim 3.3.

By the definition of γ'_i, it is easy to show that Assumption 3.2 is equivalent to \sum_{i=k+1}^n \gamma_i \ge ((C^*-1)k + C^*\log(1/\delta))\,\gamma_k. Therefore, if we view γ_i as the "amount of information" from user i, Assumption 3.2 intuitively requires that a significant contribution to the total "information" comes from outside the k most "informative" users. This assumption allows us to avoid the case where we have exactly n = k users: in that case, we would have σ_k² ≈ σ²/k² for uniform weights w_1 = ... = w_n (see [Rudelson and Vershynin, 2008] and references therein), as opposed to the desired σ_k² ≥ σ²/2 in Claim 3.3. Assumption 3.2 is a mild assumption. For example, when γ_k = ... = γ_n, Assumption 3.2 holds as long as n ≥ C*(k + log(1/δ)). Also, since γ'_1 = ... = γ'_k ≥ γ'_{k+1} ≥ ... ≥ γ'_n ≥ 0, it trivially holds that \sum_{i=1}^n \gamma'_i \ge k\gamma'_1; Assumption 3.2 is relatively mild when compared to this trivial inequality.

Under Assumption 3.2, we show that it is optimal to choose the weights w_1, ..., w_n as

w_i = \frac{\gamma'_i}{\sum_{\ell=1}^n \gamma'_\ell}.    (9)

Specifically, if we plug (9) into Theorem 3.1 and bound ξ and σ_k based on the distribution of µ_1, ..., µ_n, we get the following error upper bound, which matches our lower bound (Theorem 3.7) in Section 3.3. We defer its proof to Appendix I.

Theorem 3.4. Under Assumptions 3.1 and 3.2, if we choose w_1, ..., w_n as in (9) and define θ = ∠(Γ, Γ̂), then for δ ∈ (0, 1/2), with probability at least 1 − δ,

\sin\theta \le O\left( \sqrt{ \frac{d + \log(1/\delta)}{\sum_{i=1}^n \gamma'_i} } \right).    (10)

For comparison, consider the setting where σ = η_i = 1 for every i = 1, ..., n. The result then says that sin θ is bounded by approximately \sqrt{d / \sum_{i=1}^n m_i}. This is the same rate as we would get if we had \sum_{i=1}^n m_i users, each contributing a single independent data point with homogeneous spherical noise. Thus, as long as the data points are not too concentrated on fewer than k users, the heterogeneity comes at no additional cost.
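The weight choice in (9) is mechanical given η_i, m_i, and σ; a short sketch (with assumed array inputs and names of our choosing) is:

```python
import numpy as np

def optimal_weights(eta, m, sigma, k):
    """eta: (n,) noise levels eta_i; m: (n,) sample counts m_i; sigma: signal scale."""
    gamma = 1.0 / (eta**2 / (sigma**2 * m) + eta**4 / (sigma**4 * m**2))
    order = np.argsort(-gamma)        # users sorted so gamma is non-increasing
    g = gamma[order].astype(float)
    g[:k - 1] = g[k - 1]              # gamma'_i = gamma_k for the k-1 most informative users
    w = np.empty_like(g)
    w[order] = g / g.sum()            # normalize and map back to the original user order
    return w
```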
3.3 Lower Bound

We prove a lower bound matching the upper bound in Theorem 3.4 up to a constant in the setting where δ = Θ(1) and d ≥ (1 + Ω(1))k. For every positive integer d, there is a natural "uniform" distribution over O_d given by Haar's theorem [Haar, 1933] (see e.g. [Diestel and Spalsbury, 2014] for a textbook treatment). We denote this distribution by Haar(O_d). A random matrix A drawn from Haar(O_d) has the following invariance property: for any deterministic matrix B ∈ O_d, the random matrices A, AB and BA all have the same distribution. For an integer k ≤ d, we can construct a random matrix A_1 ∈ O_{d,k} by first drawing A ∈ R^{d×d} from Haar(O_d) and then taking the first k columns of A. We denote the distribution of A_1 by Haar(O_{d,k}). The invariance property of Haar(O_d) immediately implies the following claims:

Claim 3.5. Let A ∈ O_d be a random matrix drawn from Haar(O_d) and let B ∈ O_{d,k} be a fixed matrix. Then AB is distributed as Haar(O_{d,k}).

Proof. The matrix B can be written as the first k columns of a matrix C ∈ O_d. Now AB is the first k columns of AC, where AC is distributed as Haar(O_d) by the invariance property. This implies that AB is distributed as Haar(O_{d,k}).

Claim 3.6. Let B ∈ O_{d,k} be a random matrix. Assume that for every fixed matrix A ∈ O_d, the random matrices B and AB have the same distribution. Then B ∼ Haar(O_{d,k}).

Proof. If we draw A independently from Haar(O_d), the random matrices B and AB still have the same distribution. By Claim 3.5, AB is distributed as Haar(O_{d,k}), so B must also be distributed as Haar(O_{d,k}).

With the definition of Haar(O_{d,k}), we state our lower bound in the following theorem:

Theorem 3.7. Let k, d, n be positive integers satisfying k < d and k ≤ n. Let m_1, ..., m_n be positive integers and σ, η_1, ..., η_n be positive real numbers. Suppose we draw U ∈ O_{d,k} from Haar(O_{d,k}) and then draw µ_1, ..., µ_n independently from N(0, σ²UU^T). For every i = 1, ..., n, we draw m_i data points x_ij for j = 1, ..., m_i as x_ij = µ_i + z_ij, where each z_ij is drawn independently from the spherical Gaussian N(0, η_i²I). Let Γ̂ be any estimator mapping (x_ij)_{1≤i≤n, 1≤j≤m_i} to a (possibly randomized) k-dimensional subspace of R^d. Let θ denote the maximum principal angle between Γ̂((x_ij)_{1≤i≤n, 1≤j≤m_i}) and the true subspace Γ = col(U). If real numbers t ≥ 0 and δ ∈ [0, 1/2) satisfy Pr[sin θ ≤ t] ≥ 1 − δ, then

t \ge \Omega\left( \min\left\{ 1, \sqrt{ \frac{(d-k)(1-\delta)}{\sum_{i=k}^n \gamma_i} } \right\} \right),    (11)

where γ_1, ..., γ_n are defined in Definition 3.1.

Note that γ'_i = γ_i for i ≥ k, so our upper bound in (10) matches the lower bound (11) up to a constant factor assuming δ = Θ(1) and d ≥ (1 + Ω(1))k. We use the local Fano method to prove the lower bound, building on the technical lemmas in Appendix D. In particular, we reduce our goal to proving an upper bound on the KL divergence between Gaussian distributions whose covariance matrices are defined based on matrices U, Û ∈ O_{d,k} with ‖UU^T − ÛÛ^T‖_F bounded. We prove the following lemma in Appendix J, which upper bounds the KL divergence in terms of ‖UU^T − ÛÛ^T‖_F:

Lemma 3.8. For σ ∈ R_{≥0}, η ∈ R_{>0}, and U, Û ∈ O_{d,k}, define Σ = σ²UU^T + η²I and Σ̂ = σ²ÛÛ^T + η²I. Then,

D_{kl}\big(N(0, \hat\Sigma)\,\|\,N(0, \Sigma)\big) = \frac{\sigma^4 \|UU^T - \hat U \hat U^T\|_F^2}{4(\sigma^2\eta^2 + \eta^4)}.

Lemma 3.8 and the results in Appendix D allow us to prove a version of (11) in which the sum in the denominator is over i = 1, ..., n. This, however, is weaker and less useful than (11), in which the sum in the denominator is over i = k, k+1, ..., n. To prove Theorem 3.7, we extract a hard distribution in which the data points from users 1, ..., k−1 are "useless" for subspace recovery. Let Γ_1 be the (k−1)-dimensional subspace spanned by µ_1, ..., µ_{k−1}. We let v_1, ..., v_{k−1} be a random orthonormal basis of Γ_1, and we append another vector v_k ∈ Γ to form an orthonormal basis v_1, ..., v_k of Γ. We define V_1 = [v_1 ··· v_{k−1}] ∈ O_{d,k−1} and V = [v_1 ··· v_k] ∈ O_{d,k}. In Figure 1 we show a graphical model demonstrating the dependency among the random objects we have defined. Let us focus on the joint distribution of (V_1, V, (µ_1, ..., µ_{k−1})). By the invariance property, for any matrices Ṽ_1 ∈ O_{d,k−1}, Ṽ ∈ O_{d,k}, measurable set S ⊆ (R^d)^{k−1}, and orthogonal matrix G ∈ O_d,

Pr[(µ_1, ..., µ_{k−1}) ∈ S | V = Ṽ, V_1 = Ṽ_1] = Pr[(µ_1, ..., µ_{k−1}) ∈ S_G | V = GṼ, V_1 = GṼ_1],

where S_G = {(Gµ̃_1, ..., Gµ̃_{k−1}) : (µ̃_1, ..., µ̃_{k−1}) ∈ S}.
For any Ṽ, Ṽ' ∈ O_{d,k} whose first k−1 columns both equal Ṽ_1, there exists G ∈ O_d such that Ṽ' = GṼ and thus Ṽ_1 = GṼ_1. This implies that for any µ̃ ∈ col(Ṽ_1) we have Gµ̃ = µ̃, and thus (S ∩ col(Ṽ_1)^{k−1})_G = S ∩ col(Ṽ_1)^{k−1} for any measurable S ⊆ (R^d)^{k−1}. Here, col(Ṽ_1)^{k−1} = {(µ̃_1, ..., µ̃_{k−1}) : µ̃_i ∈ col(Ṽ_1) for i = 1, ..., k−1} ⊆ (R^d)^{k−1}. When conditioned on V_1 = Ṽ_1, for every i = 1, ..., k−1 we have µ_i ∈ Γ_1 = col(V_1) = col(Ṽ_1), which implies that (µ_1, ..., µ_{k−1}) ∈ col(Ṽ_1)^{k−1}. Therefore,

Pr[(µ_1, ..., µ_{k−1}) ∈ S | V = Ṽ, V_1 = Ṽ_1]
= Pr[(µ_1, ..., µ_{k−1}) ∈ S ∩ col(Ṽ_1)^{k−1} | V = Ṽ, V_1 = Ṽ_1]
= Pr[(µ_1, ..., µ_{k−1}) ∈ (S ∩ col(Ṽ_1)^{k−1})_G | V = GṼ, V_1 = GṼ_1]
= Pr[(µ_1, ..., µ_{k−1}) ∈ S ∩ col(Ṽ_1)^{k−1} | V = Ṽ', V_1 = Ṽ_1]
= Pr[(µ_1, ..., µ_{k−1}) ∈ S | V = Ṽ', V_1 = Ṽ_1].

This implies that (µ_1, ..., µ_{k−1}) and V are conditionally independent given V_1. Therefore, the joint distribution of (V_1, V, (µ_1, ..., µ_{k−1})) can be formed by first drawing V and V_1, and then drawing µ_1, ..., µ_{k−1} based only on V_1 and not on V. Since µ_k, ..., µ_n are drawn i.i.d. from N(0, σ²UU^T) = N(0, σ²VV^T), we have the graphical model shown in Figure 2. By Claim 3.6, the marginal distribution of V is Haar(O_{d,k}). By Claim 3.5, we can implement this distribution by first drawing W ∼ Haar(O_d), then drawing E independently from any distribution over O_{d,k}, and letting V = WE. We choose the distribution of E later, ensuring that the first k−1 columns of E are always [I_{k−1}; 0]. This guarantees that the first k−1 columns of W and V are the same, and thus V_1 is exactly the first k−1 columns of W, resulting in the graphical model shown in Figure 3. Note that in Figure 3 there is no directed path from E to (µ_1, ..., µ_{k−1}). Intuitively, this means that knowing (µ_1, ..., µ_{k−1}) gives us no information about E. Now, by choosing the distribution of E appropriately, we can prove (11) in which the denominator does not contain γ_1, ..., γ_{k−1}. We defer the complete proof of Theorem 3.7 to Appendix K.

4 Linear Models

In the linear models setting, the data distribution of user i is parameterized by an unknown vector β_i ∈ R^d. As before, we assume that the vectors β_1, ..., β_n from the n users lie in an unknown k-dimensional subspace Γ. Our goal is to recover the subspace using the following data. For every i = 1, ..., n, we have m_i data points from user i: (x_{i1}, y_{i1}), ..., (x_{im_i}, y_{im_i}) ∈ R^d × R. For every j = 1, ..., m_i, we assume the measurement x_ij ∈ R^d is a random vector drawn independently from an O(1)-sub-Gaussian distribution with zero mean and identity covariance matrix. The measurement outcome y_ij is determined by y_ij = x_ij^T β_i + z_ij, where the random noise z_ij ∈ R can depend on the measurements x_{i1}, ..., x_{im_i}. When conditioned on x_{i1}, ..., x_{im_i}, we assume every z_ij for j = 1, ..., m_i is independently drawn from an η_i-sub-Gaussian distribution with zero mean, but we do not assume that the conditional distribution of z_ij is the same for every j = 1, ..., m_i. The (in)dependence among the x_ij and z_ij for i = 1, ..., n and j = 1, ..., m_i is summarized by the example graphical model in Figure 4.

Since we allow the noise z_ij to depend on the measurements x_ij, it is information-theoretically impossible to recover the subspace if we only have one data point from every user. Consider the scenario where every β_i is drawn independently from N(0, σ²uu^T) for an unknown unit vector u ∈ R^d and every x_ij is drawn independently and uniformly from {−1, 1}^d. If we set z_ij = x_ij^T ν_ij, where ν_ij is independently drawn from N(0, σ²(I − uu^T)), then every y_ij satisfies y_ij = x_ij^T(β_i + ν_ij), where β_i + ν_ij is distributed as N(0, σ²I) independently of x_ij. This implies that the joint distribution of ((x_{i1}, y_{i1}))_{i=1,...,n} does not change with u, i.e., we get no information about u from one data point per user. Thus, we assume m_i ≥ 2 for every user i.
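The hard instance above is easy to simulate; the following sketch (names and interface are our assumptions) generates one data point per user with noise z_ij = x_ij^T ν_ij, so that, as argued above, the joint distribution of (x, y) carries no information about u.

```python
import numpy as np

def one_point_per_user_instance(n, d, sigma, rng):
    """Hard instance: z_ij = x_ij^T nu_ij with nu_ij ~ N(0, sigma^2 (I - u u^T))."""
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    P = np.eye(d) - np.outer(u, u)            # projector onto the complement of u
    X = rng.choice([-1.0, 1.0], size=(n, d))  # x_ij uniform on {-1, 1}^d
    beta = sigma * np.outer(rng.standard_normal(n), u)  # beta_i ~ N(0, sigma^2 u u^T)
    nu = sigma * rng.standard_normal((n, d)) @ P        # nu_ij ~ N(0, sigma^2 (I - u u^T))
    y = np.einsum("nd,nd->n", X, beta + nu)   # y_ij = x_ij^T (beta_i + nu_ij)
    return X, y
```

Because beta + nu is distributed as N(0, sigma^2 I) independently of X, no estimator applied to (X, y) can do better than chance at recovering u, which is why at least two points per user are required.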
In this case, we achieve error upper bounds that match the ones in [Tripuraneni et al., 2021] despite our relaxed assumptions on the noise. Our estimator is the subspace Γ̂ spanned by the top-k eigenvectors of A defined in (3). We defer the analysis of our estimator to Appendix L.

Acknowledgments and Disclosure of Funding

Part of this work was performed while LH was interning at Apple. LH is also supported by Omer Reingold's NSF Award IIS-1908774, Omer Reingold's Simons Foundation Investigators Award 689988, and Moses Charikar's Simons Foundation Investigators Award.
1. What is the focus of the paper regarding subspace recovery from noisy linear measurements?
2. What are the strengths of the proposed spectral estimator, particularly in terms of recovery guarantees?
3. What are the weaknesses of the paper regarding its flow and arrangement?
4. Do you have any concerns about the assumptions made in Section 3.2?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

Consider m = Σ_{i=1}^n m_i points, where for i = 1, ..., n, m_i (≥ 2) of them have been drawn independently from m_i sub-Gaussian distributions with a shared unknown mean (µ_i) and a shared known sub-Gaussian constant (η_i); see Lines 184-189. Assume that the m_i's and η_i's are known. Further, assume that µ_1, ..., µ_n belong to a subspace. The goal is to estimate said subspace given the points. The authors study a spectral estimator and establish recovery guarantees in terms of the maximum principal angle between the estimated subspace and the true subspace; Theorem 3.1. They also derive special cases of this guarantee under different assumptions on the µ_i's (being equal, being drawn from a certain distribution). A matching lower bound, in certain regimes, has also been provided. In addition to the "PCA" scenario, subspace recovery from noisy linear measurements of the µ_i's has also been addressed. Standard proof techniques for spectral estimators, such as the ones utilized in this work, have been used; they are outlined at the top of page 6, and for the lower bound.

Strengths And Weaknesses

I was not able to form a big picture of the multitude of assumptions in Section 3.2; on k, C*, the η_i's, d, the w_i's, the γ_i's, and the γ'_i's. Further discussion, elaboration, or examples (special cases) of the scenario under consideration would be helpful. While the results seem sound, I personally had a hard time following the arguments in a linear read; a sample of milestones (for the arguments and proofs) has been mentioned in the main body, but I personally was not able to get a clear picture from these samples. Example: Line 286, "we reduce our goal to ...", before which it would be useful to know why Gaussians are being compared. On the other hand, I think the arguments in Lines 297-321 (an entire page) can safely be summarized within the main body and moved to the appendices. I believe the current manuscript requires a revision in flow and arrangement before publication.

Questions

- Could you state the intuitive discussion on top of page two using a general subspace, to avoid possible confusion due to uu^T being a special case?
- Please provide a proof/reference for the statement on Line 75.
- Please clarify the assumption on the knowledge of the η's, e.g., in (2). It would also be helpful to provide a discussion on the availability/estimability of such information in the federated learning setup.
- Please clarify the setting in which the upper and lower bounds match.
- Top of page 4: please provide a brief definition for "the subspace is incoherent".
- I am not sure if the "equivalence" claim on Line 238 is correct.
- Providing a reference for Lines 256-272 could be helpful to readers, to connect to the rest of the relevant literature. Similarly, please provide references for the background reviewed in Appendix A; something similar to Appendix B.

Limitations

Please see the comment on the assumptions in Section 3.2.
NIPS
Title
Learning with Average Top-k Loss

Abstract
In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.

1 Introduction

Supervised learning concerns the inference of a function f : X → Y that predicts a target y ∈ Y from data/features x ∈ X using a set of labeled training examples {(x_i, y_i)}_{i=1}^n. This is typically achieved by seeking a function f that minimizes an aggregate loss formed from individual losses evaluated over all training samples. To be more specific, the individual loss for a sample (x, y) is given by ℓ(f(x), y), in which ℓ is a nonnegative bivariate function that evaluates the quality of the prediction made by the function f. For example, for binary classification (i.e., y_i ∈ {±1}), commonly used forms for the individual loss include the 0-1 loss, I_{yf(x)≤0}, which is 1 when y and f(x) have different signs and 0 otherwise, the hinge loss, max(0, 1 − yf(x)), and the logistic loss, log_2(1 + exp(−yf(x))), all of which can be further simplified as the so-called margin loss, i.e., ℓ(y, f(x)) = ℓ(yf(x)). For regression, the squared difference (y − f(x))² and the absolute difference |y − f(x)| are the two most popular forms of individual loss, which can be simplified as ℓ(y, f(x)) = ℓ(|y − f(x)|). Usually the individual loss is chosen to be a convex function of its input, but recent works also propose various types of non-convex individual losses (e.g., [10, 15, 27, 28]). The supervised learning problem is then formulated as min_f {L(L_z(f)) + Ω(f)}, where L(L_z(f)) is the aggregate loss that accumulates all individual losses over training samples, i.e., L_z(f) = {ℓ_i(f)}_{i=1}^n, with ℓ_i(f) being the shorthand notation for ℓ(f(x_i), y_i), and Ω(f) is the regularizer on f. However, in contrast to the plethora of types of individual losses, there are only a few choices when we consider the aggregate loss:

• the average loss: L_avg(L_z(f)) = (1/n) Σ_{i=1}^n ℓ_i(f), i.e., the mean of all individual losses;
• the maximum loss: L_max(L_z(f)) = max_{1≤i≤n} ℓ_i(f), i.e., the largest individual loss;
• the top-k loss [20]: L_top-k(L_z(f)) = ℓ_[k](f) for 1 ≤ k ≤ n, i.e., the k-th largest (top-k) individual loss, where the top-k element of a set S = {s_1, ..., s_n} is defined as s_[k] such that s_[1] ≥ s_[2] ≥ ... ≥ s_[n].

The average loss is unarguably the most widely used aggregate loss, as it is an unbiased approximation to the expected risk and leads to the empirical risk minimization in learning theory [1, 7, 22, 25, 26].
Further, minimizing the average loss affords simple and efficient stochastic gradient descent algorithms [3, 21]. On the other hand, the work in [20] shows that constructing a learning objective based on the maximum loss may lead to improved performance for data with separate typical and rare subpopulations. The top-k loss [20] generalizes the maximum loss, as L_max(L_z(f)) = L_top-1(L_z(f)), and can alleviate the sensitivity to outliers of the latter. However, unlike the average loss or the maximum loss, the top-k loss in general does not lead to a convex learning objective, as it is not convex in all the individual losses L_z(f).

In this work, we propose a new type of aggregate loss that we term the average top-k (ATk) loss, the average of the largest k individual losses, defined as:

L_{avt-k}(L_z(f)) = \frac{1}{k} \sum_{i=1}^k \ell_{[i]}(f).    (1)

We refer to learning objectives based on minimizing the ATk loss as MATk learning. The ATk loss generalizes the average loss (k = n) and the maximum loss (k = 1), yet it is less susceptible to their corresponding drawbacks, i.e., it is less sensitive to outliers than the maximum loss and can adapt to imbalanced and/or multi-modal data distributions better than the average loss. This is illustrated with two toy examples of synthesized 2D data for binary classification in Fig.1 (see supplementary materials for a complete illustration). As these plots show, the linear classifier obtained with the maximum loss is not optimal due to the existence of outliers, while the linear classifier corresponding to the average loss has to accommodate the requirement to minimize individual losses across all training data, and sacrifices smaller sub-clusters of data (e.g., the rare population of the + class in the top row and the smaller dataset of the − class in the bottom row). In contrast, using the ATk loss with k = 10 can better protect such smaller sub-clusters and leads to linear classifiers closer to the optimal Bayesian linear classifier. This is also corroborated by the plots of the corresponding misclassification rate of ATk vs. the value of k in Fig.1, which show that the minimum misclassification rates occur at k values other than 1 (maximum loss) or n (average loss). The ATk loss is a tight upper bound of the top-k loss, as L_{avt-k}(L_z(f)) ≥ L_{top-k}(L_z(f)) with equality when k = 1 or when ℓ_i(f) is constant, and it is a convex function of the individual losses (see Section 2). Indeed, we can express ℓ_[k](f) as the difference of two convex functions, kL_{avt-k}(L_z(f)) − (k−1)L_{avt-(k−1)}(L_z(f)), which shows that in general L_top-k(L_z(f)) is not convex with regards to the individual losses.
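As a quick numeric sanity check of these properties (a toy snippet of our own, not from the paper's code), the ATk loss interpolates between the maximum and average losses and upper-bounds the top-k loss:

```python
import numpy as np

def atk(losses, k):
    return np.sort(losses)[::-1][:k].mean()   # average of the k largest losses

losses = np.array([3.0, 1.0, 4.0, 1.5, 9.0])
assert np.isclose(atk(losses, 1), losses.max())             # k = 1: the maximum loss
assert np.isclose(atk(losses, len(losses)), losses.mean())  # k = n: the average loss
k = 3
topk = np.sort(losses)[::-1][k - 1]   # k-th largest individual loss
assert atk(losses, k) >= topk         # ATk upper-bounds the top-k loss
```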
In the sequel, we provide a detailed analysis of the ATk loss and MATk learning. First, we establish a reformulation of the ATk loss as the minimum of the average of the individual losses over all training examples transformed by a hinge function. This reformulation leads to a simple and effective stochastic gradient-based algorithm for MATk learning, and interprets the effect of the ATk loss as shifting down and truncating at zero the individual loss to reduce the undesirable penalty on correctly classified data. When combined with the hinge function as the individual loss, the ATk aggregate loss leads to a new variant of the SVM algorithm that we term ATk-SVM, which generalizes the C-SVM and ν-SVM algorithms [19]. We further study the learning theory of MATk learning, focusing on the classification calibration of the ATk loss function and error bounds of the ATk-SVM algorithm. This provides a theoretical lower bound on k for reliable classification performance. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. The main contributions of this work can be summarized as follows.

• We introduce the ATk loss for supervised learning, which can balance the pros and cons of the average and maximum losses, and allows the learning algorithm to better adapt to imbalanced and multi-modal data distributions.
• We provide an algorithm and an interpretation of the ATk loss, suggesting that most existing learning algorithms can take advantage of it without a significant increase in computation.
• We further study the theoretical aspects of the ATk loss on classification calibration and error bounds of minimum average top-k learning for ATk-SVM.
• We perform extensive experiments to validate the effectiveness of MATk learning.

2 Formulation and Interpretation

The original ATk loss, though intuitive, is not convenient to work with because of the sorting procedure involved. This also obscures its connection with the statistical view of supervised learning as minimizing the expectation of the individual loss with regards to the underlying data distribution. Yet, it affords an equivalent form, which is based on the following result.

Lemma 1 (Lemma 1, [16]). \sum_{i=1}^k x_{[i]} is a convex function of (x_1, ..., x_n). Furthermore, for x_i ≥ 0 and i = 1, ..., n, we have

\sum_{i=1}^k x_{[i]} = \min_{\lambda \ge 0} \left\{ k\lambda + \sum_{i=1}^n [x_i - \lambda]_+ \right\},

where [a]_+ = max{0, a} is the hinge function.

For completeness, we include a proof of Lemma 1 in the supplementary materials. Using Lemma 1, we can reformulate the ATk loss (1) as

L_{avt-k}(L_z(f)) = \frac{1}{k} \sum_{i=1}^k \ell_{[i]}(f) \;\propto\; \min_{\lambda \ge 0} \left\{ \frac{1}{n} \sum_{i=1}^n [\ell_i(f) - \lambda]_+ + \frac{k}{n}\lambda \right\}.    (2)

In other words, the ATk loss is equivalent to the minimum of the average of individual losses that are shifted and truncated by the hinge function controlled by λ.
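Lemma 1 is easy to verify numerically: since kλ + Σ_i [x_i − λ]_+ is piecewise linear in λ with breakpoints at the x_i, scanning λ over {x_1, ..., x_n, 0} suffices to find the minimum. A small sketch (with names of our choosing) is:

```python
import numpy as np

def atk_direct(losses, k):
    return np.sort(losses)[::-1][:k].mean()

def atk_variational(losses, k):
    # k*lam + sum([l_i - lam]_+) is piecewise linear in lam with breakpoints
    # at the losses, so scanning lam over {l_1, ..., l_n, 0} finds the minimum.
    cands = np.append(losses, 0.0)
    vals = [k * lam + np.maximum(losses - lam, 0.0).sum() for lam in cands]
    return min(vals) / k

rng = np.random.default_rng(0)
l = rng.exponential(size=20)       # nonnegative losses, as Lemma 1 requires
assert np.isclose(atk_direct(l, 5), atk_variational(l, 5))
```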
This sheds more light on the ATk loss, which is particularly easy to illustrate in the context of binary classification using the margin losses, ℓ(f(x), y) = ℓ(yf(x)). In binary classification, the "gold standard" of individual loss is the 0-1 loss I_{yf(x)≤0}, which exerts a constant penalty of 1 on examples that are misclassified by f and no penalty on correctly classified examples. However, the 0-1 loss is difficult to work with, as it is neither continuous nor convex. In practice, it is usually replaced by a surrogate convex loss. Such convex surrogates afford efficient algorithms, but as continuous and convex upper bounds of the 0-1 loss, they typically also penalize correctly classified examples, i.e., for y and x that satisfy yf(x) > 0, ℓ(yf(x)) > 0, whereas I_{yf(x)≤0} = 0 (Fig.2). This implies that when the average of individual losses across all training examples is minimized, examples correctly classified by f that are "too close" to the classification boundary may be sacrificed to reduce the average loss, as is shown in Fig.1. In contrast, after the individual loss is combined with the hinge function, i.e., [ℓ(yf(x)) − λ]_+ with λ > 0, it has the effect of "shifting down" the original individual loss function and truncating it at zero; see Fig.2. The transformation of the individual loss reduces penalties on all examples, and in particular benefits correctly classified data. In particular, if such examples are "far enough" from the decision boundary then, as with the 0-1 loss, their penalty becomes zero. This alleviates the likelihood of misclassification on those rare sub-populations of data that are close to the decision boundary.

Algorithm: The reformulation of the ATk loss in Eq.(2) also facilitates the development of optimization algorithms for minimum ATk learning. As practical supervised learning problems usually use a parametric form of f, written f(x; w) with parameter w, the corresponding minimum ATk objective becomes

\min_{w, \lambda \ge 0} \left\{ \frac{1}{n} \sum_{i=1}^n [\ell(f(x_i; w), y_i) - \lambda]_+ + \frac{k}{n}\lambda + \Omega(w) \right\}.    (3)

It is not hard to see that if ℓ(f(x; w), y) is convex with respect to w, the objective function in Eq.(3) is a convex function of w and λ jointly. This leads to an immediate stochastic (projected) gradient descent method [3, 21] for solving (3). For instance, with Ω(w) = \frac{1}{2C}‖w‖², where C > 0 is a regularization factor, at the t-th iteration the corresponding MATk objective can be minimized by first randomly sampling (x_{i_t}, y_{i_t}) from the training set and then updating the parameters as

w^{(t+1)} \leftarrow w^{(t)} - \eta_t \left( \partial_w \ell(f(x_{i_t}; w^{(t)}), y_{i_t}) \cdot I_{[\ell(f(x_{i_t}; w^{(t)}), y_{i_t}) > \lambda^{(t)}]} + \frac{w^{(t)}}{C} \right)

\lambda^{(t+1)} \leftarrow \left[ \lambda^{(t)} - \eta_t \left( \frac{k}{n} - I_{[\ell(f(x_{i_t}; w^{(t)}), y_{i_t}) > \lambda^{(t)}]} \right) \right]_+    (4)

where ∂_w ℓ(f(x; w), y) denotes the sub-gradient with respect to w, and η_t ∼ 1/√t is the step size.
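A direct transcription of update (4) with the individual logistic loss and Ω(w) = ‖w‖²/(2C) looks as follows; this is a sketch under assumed data arrays and names, not the authors' released code.

```python
import numpy as np

def matk_sgd_logistic(Xtr, ytr, k, C, T=10000, seed=0):
    """Sketch of update (4) with the individual logistic loss log2(1 + exp(-y w^T x))."""
    rng = np.random.default_rng(seed)
    n, d = Xtr.shape
    w, lam = np.zeros(d), 0.0
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)                    # step size eta_t ~ 1/sqrt(t)
        i = rng.integers(n)
        margin = ytr[i] * (Xtr[i] @ w)
        loss = np.log2(1.0 + np.exp(-margin))
        active = float(loss > lam)                # indicator [loss_i(w) > lambda]
        g = -ytr[i] * Xtr[i] / (np.log(2.0) * (1.0 + np.exp(margin)))  # sub-gradient in w
        w = w - eta * (active * g + w / C)
        lam = max(lam - eta * (k / n - active), 0.0)  # projected gradient step for lambda
    return w, lam
```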
ATk-SVM: As a general aggregate loss, the ATk loss can be combined with any functional form of individual loss. In the case of binary classification, the ATk loss combined with the individual hinge loss for a prediction function f from a reproducing kernel Hilbert space (RKHS) [18] leads to the ATk-SVM model. Specifically, we consider a function f that is a member of an RKHS H_K with norm ‖·‖_K, induced from a reproducing kernel K : X × X → R. Using the individual hinge loss, [1 − y_i f(x_i)]_+, the corresponding MATk learning objective in the RKHS becomes

\min_{f \in H_K, \lambda \ge 0} \frac{1}{n} \sum_{i=1}^n \big[ [1 - y_i f(x_i)]_+ - \lambda \big]_+ + \frac{k}{n}\lambda + \frac{1}{2C}\|f\|_K^2,    (5)

where C > 0 is the regularization factor. Furthermore, the outer hinge function in (5) can be removed due to the following result.

Lemma 2. For a ≥ 0 and b ≥ 0, it holds that [[a − ℓ]_+ − b]_+ = [a − b − ℓ]_+.

The proof of Lemma 2 can be found in the supplementary materials. In addition, note that for any minimizer (f_z, λ_z) of (5), setting f(x) = 0 and λ = 1 in the objective function of (5) gives

\frac{k}{n}\lambda_z \le \frac{1}{n} \sum_{i=1}^n \big[ [1 - y_i f_z(x_i)]_+ - \lambda_z \big]_+ + \frac{k}{n}\lambda_z + \frac{1}{2C}\|f_z\|_K^2 \le \frac{k}{n},

so we have 0 ≤ λ_z ≤ 1, which means that the minimization can be restricted to 0 ≤ λ ≤ 1. Using these results and introducing ρ = 1 − λ, Eq.(5) can be rewritten as

\min_{f \in H_K,\, 0 \le \rho \le 1} \frac{1}{n} \sum_{i=1}^n [\rho - y_i f(x_i)]_+ - \frac{k}{n}\rho + \frac{1}{2C}\|f\|_K^2.    (6)

The ATk-SVM objective generalizes several existing SVM models. For example, when k = n, it equals the standard C-SVM [5]. When C = 1 and under the condition K(x_i, x_i) ≤ 1 for every i, ATk-SVM reduces to ν-SVM [19] with ν = k/n. Furthermore, similar to the conventional SVM model, writing (6) in its dual form leads to a convex quadratic programming problem that can be solved efficiently. See the supplementary materials for more detailed explanations.

Choosing k. The number of top individual losses in the ATk loss is a critical parameter that affects the learning performance. In concept, using the ATk loss will be no worse than using the average or maximum losses, as they correspond to specific choices of k. In practice, k can be chosen during training on a validation dataset, as in the experiments in Section 4. As k is an integer, a simple grid search usually suffices to find a satisfactory value. Besides, Theorem 1 in Section 3 establishes a theoretical lower bound on k that guarantees reliable classification based on the Bayes error. If we have information about the proportion of outliers, we can also narrow the search space for k based on the fact that the ATk loss is the convex upper bound of the top-k loss, which is similar to [20].

3 Statistical Analysis

In this section, we address the statistical properties of the ATk objective in the context of binary classification. Specifically, we investigate the classification calibration property [1] of the ATk aggregate objective, and derive bounds on the misclassification error of the ATk-SVM model in the framework of statistical learning theory (e.g. [1, 7, 23, 26]).

3.1 Classification Calibration under the ATk Loss

We assume the training data z = {(x_i, y_i)}_{i=1}^n are i.i.d. samples from an unknown distribution p on X × {±1}. Let p_X be the marginal distribution of p on the input space X. The misclassification error of a classifier f : X → {±1} is denoted by R(f) = Pr(y ≠ f(x)) = E[I_{yf(x)≤0}]. The Bayes error is given by R* = inf_f R(f), where the infimum is over all measurable functions. No function can achieve less risk than the Bayes rule f_c(x) = sign(η(x) − 1/2), where η(x) = Pr(y = 1|x) [8].

In practice, one uses a surrogate loss ℓ : R → [0, ∞) which is convex and upper-bounds the 0-1 loss. The population ℓ-risk (generalization error) is given by E_ℓ(f) = E[ℓ(yf(x))]. Denote the optimal ℓ-risk by E*_ℓ = inf_f E_ℓ(f). A very basic requirement for using such a surrogate loss ℓ is the so-called classification calibration (a point-wise form of Fisher consistency) [1, 14]. Specifically, a loss ℓ is classification calibrated with respect to a distribution p if, for any x, the minimizer f*_ℓ of E_ℓ(f) has the same sign as the Bayes rule f_c(x), i.e., sign(f*_ℓ(x)) = sign(f_c(x)) whenever f_c(x) ≠ 0. An appealing result concerning the classification calibration of a loss function ℓ was obtained in [1], which states that ℓ is classification calibrated if ℓ is convex, differentiable at 0, and ℓ'(0) < 0.

In the same spirit, we investigate the classification calibration property of the ATk loss. Specifically, we first obtain the population form of the ATk objective as the infinite-sample limit of (2):

\frac{1}{n} \sum_{i=1}^n [\ell(y_i f(x_i)) - \lambda]_+ + \frac{k}{n}\lambda \;\xrightarrow[n \to \infty,\; k/n \to \nu]{}\; E\big[[\ell(yf(x)) - \lambda]_+\big] + \nu\lambda.

We then consider the optimization problem

(f^*, \lambda^*) = \arg\inf_{f,\, \lambda \ge 0} E\big[[\ell(yf(x)) - \lambda]_+\big] + \nu\lambda,    (7)

where the infimum is taken over all measurable functions f : X → R. We say the ATk (aggregate) loss is classification calibrated with respect to p if f* has the same sign as the Bayes rule f_c. The following theorem establishes such conditions.

Theorem 1. Suppose the individual loss ℓ : R → R_+ is convex, differentiable at 0, and ℓ'(0) < 0. Without loss of generality, assume that ℓ(0) = 1. Let (f*, λ*) be defined as in (7).
(i) If ν > E*_ℓ, then the ATk loss is classification calibrated.
(ii) If, moreover, ℓ is monotonically decreasing and the ATk aggregate loss is classification calibrated, then ν ≥ \int_{\eta(x) \ne 1/2} \min(\eta(x), 1 - \eta(x)) \, dp_X(x).

The proof of Theorem 1 can be found in the supplementary materials. Parts (i) and (ii) of the above theorem address, respectively, the sufficient and necessary conditions on ν for the ATk loss to be classification calibrated. Since ℓ is an upper-bound surrogate of the 0-1 loss, the optimal ℓ-risk E*_ℓ is larger than the Bayes error R*, i.e., E*_ℓ ≥ R*.
In particular, if the individual loss ℓ is the hinge loss, then E*_ℓ = 2R*. Part (i) of the above theorem indicates that the ATk aggregate loss is classification calibrated if ν = lim_{n→∞} k/n is larger than the optimal generalization error E*_ℓ associated with the individual loss. The choice of k > nE*_ℓ thus guarantees classification calibration, which gives a lower bound on k. This result also provides a theoretical underpinning of the sensitivity to outliers of the maximum loss (the ATk loss with k = 1). If the probability of the set {x : η(x) = 1/2} is zero, then

R^* = \int_X \min(\eta(x), 1-\eta(x)) \, dp_X(x) = \int_{\eta(x) \ne 1/2} \min(\eta(x), 1-\eta(x)) \, dp_X(x).

Theorem 1 indicates that in this case, if the maximum loss is calibrated, one must have 1/n ≈ ν ≥ R*. In other words, as the number of training data increases, the Bayes error has to be arbitrarily small, which is consistent with the empirical observation that the maximum loss works well in the well-separable data setting but is sensitive to outliers and non-separable data.

3.2 Error bounds of ATk-SVM

We next study the excess misclassification error of the ATk-SVM model, i.e., R(sign(f_z)) − R*. Let (f_z, ρ_z) be the minimizer of the ATk-SVM objective (6) in the RKHS setting. Let f_H be the minimizer of the generalization error over the RKHS H_K, i.e., f_H = argmin_{f ∈ H_K} E_h(f), where we use the notation E_h(f) = E[[1 − yf(x)]_+] to denote the risk of the hinge loss. In the finite-dimensional case, the existence of f_H follows from the direct method in the variational calculus, as E_h(·) is lower bounded by zero, coercive, and weakly sequentially lower semi-continuous by its convexity. For an infinite-dimensional H_K, we assume the existence of f_H. We also assume that E_h(f_H) < 1, since even a naïve zero classifier achieves E_h(0) = 1. Denote the approximation error by A(H_K) = inf_{f ∈ H_K} E_h(f) − E_h(f_c) = E_h(f_H) − E_h(f_c), and let κ = sup_{x ∈ X} √K(x, x). The main theorem can be stated as follows.

Theorem 2. Consider the ATk-SVM in an RKHS (6). For any ε ∈ (0, 1] and µ ∈ (0, 1 − E_h(f_H)), choose k = ⌈n(E_h(f_H) + µ)⌉. Then it holds that

\Pr\left\{ R(\mathrm{sign}(f_z)) - R^* \ge \mu + A(H_K) + \varepsilon + \frac{1 + C_{\kappa,H}}{\sqrt{n}\,\mu} \right\} \le 2\exp\left( -\frac{n\mu^2\varepsilon^2}{(1 + C_{\kappa,H})^2} \right),

where C_{κ,H} = κ(2√(2C) + 4‖f_H‖_K).

The complete proof of Theorem 2 is given in the supplementary materials. The main idea is to show that ρ_z is bounded from below by a positive constant with high probability, and then to bound the excess misclassification error R(sign(f_z)) − R* by E_h(f_z/ρ_z) − E_h(f_c). If K is a universal kernel, then A(H_K) = 0 [23]. In this case, letting µ = ε ∈ (0, 1 − E_h(f_H)), Theorem 2 gives

\Pr\left\{ R(\mathrm{sign}(f_z)) - R^* \ge 2\varepsilon + \frac{1 + C_{\kappa,H}}{\sqrt{n}\,\varepsilon} \right\} \le 2\exp\left( -\frac{n\varepsilon^4}{(1 + C_{\kappa,H})^2} \right).

Consequently, choosing C such that lim_{n→∞} C/n = 0, which is equivalent to lim_{n→∞} (1 + C_{κ,H})²/n = 0, R(sign(f_z)) can be arbitrarily close to the Bayes error R* with high probability, as long as n is sufficiently large.

4 Experiments

We have demonstrated that the ATk loss provides a continuum between the average loss and the maximum loss, which can potentially alleviate the drawbacks of both. A natural question is whether this advantage actually benefits practical learning problems. In this section, we demonstrate the behavior of MATk learning coupled with different individual losses for binary classification and regression on synthetic and real datasets, with minimizing the average loss and the maximum loss treated as special cases for k = n and k = 1, respectively.
For simplicity, in all experiments we use homogeneous linear prediction functions f(x) = w^T x with parameters w and the Tikhonov regularizer Ω(w) = \frac{1}{2C}‖w‖², and optimize the MATk learning objective with the stochastic gradient descent method given in (4).

Binary Classification: We conduct experiments on binary classification using eight benchmark datasets from the UCI (https://archive.ics.uci.edu/ml/datasets.html) and KEEL (http://sci2s.ugr.es/keel/datasets.php) data repositories to illustrate the potential effects of using the ATk loss in practical learning to adapt to different underlying data distributions. A detailed description of the datasets is given in the supplementary materials. The standard individual logistic loss and hinge loss are combined with different aggregate losses. Note that the average loss combined with the individual logistic loss corresponds to the logistic regression model, and the average loss combined with the individual hinge loss leads to the C-SVM algorithm [5]. For each dataset, we randomly sample 50%, 25%, and 25% of the examples as training, validation, and testing sets, respectively. During training, we select the parameters C (regularization factor) and k (number of top losses) on the validation set. The parameter C is searched on grids of log10 scale in the range [10^{-5}, 10^5] (extended when the optimal value is on the boundary), and k is searched on grids of log10 scale in the range [1, n]. We use k* to denote the optimal k selected on the validation set.

We report the average performance over 10 random training/validation/testing splits for each dataset with MATk learning objectives formed from the individual logistic loss and hinge loss. Table 1 gives the experimental results in terms of misclassification rate (results in terms of other classification quality metrics are given in the supplementary materials). Note that on these datasets the average loss consistently outperforms the maximum loss, but the performance can be further improved with the ATk loss, which is more adaptable to different data distributions. This advantage of the ATk loss is particularly conspicuous for the Monk and Australian datasets. To further understand the behavior of MATk learning on individual datasets, we show plots of the misclassification rate on the testing set vs. k for four representative datasets in Fig.3 (in which C is fixed to 10² and k ∈ [1, n]). As these plots show, on all four datasets there is a clear range of k values with better classification performance than the two extreme cases k = 1 and k = n, corresponding to the maximum and average loss, respectively. To be more specific, when k = 1, potential noise and outliers have the strongest negative effect on the learned classifier, and the resulting classification performance is very poor. As k increases, the negative effects of noise and outliers diminish and the classification performance improves; this is most significant on the Monk, Australian, and Splice datasets. However, if k keeps increasing, the classification performance may decrease (e.g., when k = n). This may be because, as k increases, more and more well-classified samples are included, and the non-zero loss on these samples has a negative effect on the learned classifier (see our analysis in Section 2), especially for the Monk, Australian, and Phoneme datasets.
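The validation protocol described above amounts to a simple two-dimensional grid search; a sketch is below, where `fit` and `error` are assumed user-supplied helpers (train a model on the training split, measure its error on the validation split).

```python
import itertools
import numpy as np

def select_C_k(train, val, n, fit, error):
    """Pick (C, k) on a validation set: C on a log10 grid, k on a
    log10-spaced integer grid in [1, n]."""
    C_grid = [10.0**p for p in range(-5, 6)]
    k_grid = np.unique(np.logspace(0, np.log10(n), 20).astype(int))
    best = None
    for C, k in itertools.product(C_grid, k_grid):
        err = error(fit(train, C=C, k=int(k)), val)
        if best is None or err < best[0]:
            best = (err, C, int(k))
    return best[1], best[2]
```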
Regression: Next, we report experimental results of linear regression on one synthetic dataset (Sinc) and three real datasets from [4], with a detailed description of these datasets given in the supplementary materials. The standard square loss and absolute loss are adopted as individual losses. Note that the average loss coupled with the individual square loss is the standard ridge regression model, and the average loss coupled with the individual absolute loss reduces to ν-SVR [19]. We normalize the target output to [0, 1] and report the root mean square error (RMSE) in Table 2, with the optimal C and k* obtained by a grid search as in the classification case (performance in terms of mean absolute error (MAE) is given in the supplementary materials). Similar to the classification case, using the ATk loss usually improves performance in comparison to the average loss or maximum loss.

5 Related Works

Most work on learning objectives focuses on designing individual losses, and only a few works are dedicated to new forms of aggregate losses. Recently, aggregate losses considering the order of training data have been proposed in curriculum learning [2] and self-paced learning [11, 9], which suggest organizing the training process in several passes and including samples gradually from easy to hard. It is interesting to note that each pass of self-paced learning [11] is equivalent to minimizing the average of the k smallest individual losses, i.e., \frac{1}{k} \sum_{i=n-k+1}^{n} \ell_{[i]}(f), which we term the average bottom-k loss, in contrast to the average top-k loss in our case. In [20], the pros and cons of the maximum loss and the average loss are compared, and the top-k loss, i.e., ℓ_[k](f), is advocated as a remedy to the problems of both. However, unlike the ATk loss, in general neither the average bottom-k loss nor the top-k loss is a convex function of the individual losses.

Minimizing top-k errors has also been used in individual losses. For ranking problems, the work of [17, 24] describes a form of individual loss that gives more weight to the top examples in a ranked list. In multi-class classification, the top-1 loss is commonly used, which incurs a penalty when the top-1 predicted class is not the same as the target class label [6]. This has been further extended in [12, 13] to the top-k multi-class loss, in which, for a class label that can take m different values, the classifier is only penalized when the correct value does not show up among the k most confident predicted values. As individual losses, these works are complementary to the ATk loss, and they can be combined to improve learning performance.

6 Discussion

In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. We demonstrate that the ATk loss can better protect small subsets of hard samples from being swamped by a large number of easy ones, especially for imbalanced problems. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data.
We further study the theoretical aspects of the ATk loss on classification calibration and error bounds of minimum average top-k learning for ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. There are many interesting questions left unanswered regarding the use of the ATk loss as a learning objective. Currently, we use conventional gradient-based algorithms for its optimization, but we are investigating special instantiations of MATk learning for which more efficient optimization methods can be developed. Furthermore, the ATk loss can also be used for unsupervised learning problems (e.g., clustering), which is a focus of our subsequent study. It is also of practical importance to combine the ATk loss with other successful learning paradigms such as deep learning, and to apply it to large-scale real-life datasets. Lastly, it would be very interesting to derive error bounds of MATk with general individual loss functions.

7 Acknowledgments

We thank the anonymous reviewers for their constructive comments. This work was completed when the first author was a visiting student at SUNY Albany, supported by a scholarship from the University of Chinese Academy of Sciences (UCAS). Siwei Lyu is supported by the National Science Foundation (NSF, Grant IIS-1537257) and Yiming Ying is supported by the Simons Foundation (#422504) and the 2016-2017 Presidential Innovation Fund for Research and Scholarship (PIFRS) program from SUNY Albany. This work is also partially supported by the National Science Foundation of China (NSFC, Grant 61620106003) for Bao-Gang Hu and Yanbo Fan.
1. What is the focus of the paper, and what are the proposed approaches?
2. What are the strengths of the paper, particularly in the theoretical analysis?
3. Do you have any questions regarding the paper?
4. What are the weaknesses of the paper, especially in the experiment section?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review

This is an interesting paper that introduces and analyses a new way of aggregating individual losses over training examples, offering an alternative to the commonly used average loss and the recently introduced maximum loss. The proposed average top-k loss lies in between those two existing approaches. The premises concerning an alternative to the average loss shown in the beginning of the paper seem very valid. Indeed, the behavior of the average loss in the situations presented in Figure 1 and described in the corresponding paragraphs is a good justification for this research topic. Also, the analysis of the behavior of the maximum loss in comparison with the average and average top-k losses for the case of non-separable data is interesting. Interestingly, in the experiments on all the datasets the accuracy of the model optimized for the average loss is better than the one optimized for the max loss. According to [19] there are data sets for which the max loss should perform better than the average loss. It would be interesting to include such data sets in the experimental study and see how the average top-k loss performs on them. One could also try to use other aggregation functions over the individual losses to be optimized on a training set (e.g., median, quantiles, OVA, different types of integrals)? Could you comment on that?

Minor comments:
- Is the name "ensemble loss" often used? For me, it sounds somehow confusing.
- Let \hat{k}^* be tuned on a validation set of size \hat{n}. If we use the entire training set for learning a final model, should not k^* be appropriately adjusted to reflect the ratio \hat{k}^*/\hat{n}?
- line 5 and 307: can combines => can combine

After rebuttal: I thank the authors for their response.
NIPS
Title Learning with Average Top-k Loss Abstract In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. 1 Introduction Supervised learning concerns the inference of a function f : X 7→ Y that predicts a target y ∈ Y from data/features x ∈ X using a set of labeled training examples {(xi, yi)}ni=1. This is typically achieved by seeking a function f that minimizes an aggregate loss formed from individual losses evaluated over all training samples. To be more specific, the individual loss for a sample (x, y) is given by `(f(x), y), in which ` is a nonnegative bivariate function that evaluates the quality of the prediction made by function f . For example, for binary classification (i.e., yi ∈ {±1}), commonly used forms for individual loss include the 0-1 loss, Iyf(x)≤0, which is 1 when y and f(x) have different sign and 0 otherwise, the hinge loss, max(0, 1 − yf(x)), and the logistic loss, log2(1 + exp(−yf(x))), all of which can be further simplified as the so-called margin loss, i.e., `(y, f(x)) = `(yf(x)). For regression, squared difference (y−f(x))2 and absolute difference |y−f(x)| are two most popular forms for individual loss, which can be simplified as `(y, f(x)) = `(|y − f(x)|). Usually the individual loss is chosen to be a convex function of its input, but recent works also propose various types of non-convex individual losses (e.g., [10, 15, 27, 28]). The supervised learning problem is then formulated as minf {L(Lz(f)) + Ω(f)}, where L(Lz(f)) is the aggregate loss accumulates all individual losses over training samples, i.e., Lz(f) = {`i(f)}ni=1, with `i(f) being the shorthand notation for `(f(xi), yi), and Ω(f) is the regularizer on f . However, in contrast to the plethora of the types of individual losses, there are only a few choices when we consider the aggregate loss: ∗Corresponding author. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. • the average loss: Lavg(Lz(f)) = 1n ∑n i=1 `i(f), i.e., the mean of all individual losses;• the maximum loss: Lmax(Lz(f)) = max1≤k≤n `i(f), i.e., the largest individual loss; • the top-k loss [20]: Ltop-k(Lz(f)) = `[k](f)2 for 1 ≤ k ≤ n, i.e., the k-th largest (top-k) individual loss. The average loss is unarguably the most widely used aggregate loss, as it is a unbiased approximation to the expected risk and leads to the empirical risk minimization in learning theory [1, 7, 22, 25, 26]. 
On the other hand, the work in [20] shows that constructing a learning objective based on the maximum loss may lead to improved performance for data with separate typical and rare subpopulations. The top-k loss [20] generalizes the maximum loss, as $\mathcal{L}_{\max}(L_z(f)) = \mathcal{L}_{\text{top-}1}(L_z(f))$, and can alleviate the sensitivity to outliers of the latter. However, unlike the average loss or the maximum loss, the top-k loss in general does not lead to a convex learning objective, as it is not convex in the individual losses $L_z(f)$. In this work, we propose a new type of aggregate loss that we term the average top-k (ATk) loss, which is the average of the largest k individual losses, defined as: $\mathcal{L}_{\text{avt-}k}(L_z(f)) = \frac{1}{k}\sum_{i=1}^{k} \ell_{[i]}(f)$. (1) We refer to learning objectives based on minimizing the ATk loss as MATk learning. The ATk loss generalizes the average loss ($k = n$) and the maximum loss ($k = 1$), yet it is less susceptible to their corresponding drawbacks, i.e., it is less sensitive to outliers than the maximum loss and can adapt to imbalanced and/or multi-modal data distributions better than the average loss. This is illustrated with two toy examples of synthesized 2D data for binary classification in Fig.1 (see supplementary materials for a complete illustration). As these plots show, the linear classifier obtained with the maximum loss is not optimal due to the existence of outliers, while the linear classifier corresponding to the average loss has to accommodate the requirement to minimize individual losses across all training data, and sacrifices smaller sub-clusters of data (e.g., the rare population of the + class in the top row and the smaller dataset of the − class in the bottom row). In contrast, using the ATk loss with $k = 10$ can better protect such smaller sub-clusters and leads to linear classifiers closer to the optimal Bayesian linear classifier. This is also corroborated by the plots of the corresponding misclassification rate of ATk vs. the value of $k$ in Fig.1, which show that the minimum misclassification rates occur at $k$ values other than 1 (maximum loss) or $n$ (average loss). The ATk loss is a tight upper bound of the top-k loss, as $\mathcal{L}_{\text{avt-}k}(L_z(f)) \ge \mathcal{L}_{\text{top-}k}(L_z(f))$, with equality holding when $k = 1$ or $\ell_i(f) = \text{constant}$, and it is a convex function of the individual losses (see Section 2). Indeed, we can express $\ell_{[k]}(f)$ as the difference of two convex functions, $k\mathcal{L}_{\text{avt-}k}(L_z(f)) - (k-1)\mathcal{L}_{\text{avt-}(k-1)}(L_z(f))$, which shows that in general $\mathcal{L}_{\text{top-}k}(L_z(f))$ is not convex with regards to the individual losses. In the sequel, we will provide a detailed analysis of the ATk loss and MATk learning. First, we establish a reformulation of the ATk loss as the minimum of the average of the individual losses over all training examples transformed by a hinge function. This reformulation leads to a simple and effective stochastic gradient-based algorithm for MATk learning, and interprets the effect of the ATk loss as shifting down the individual loss and truncating it at zero to reduce the undesirable penalty on correctly classified data. When combined with the hinge function as the individual loss, the ATk aggregate loss leads to a new variant of the SVM algorithm that we term ATk-SVM, which generalizes the C-SVM and ν-SVM algorithms [19].
We further study the learning theory of MATk learning, focusing on the classification calibration of the ATk loss function and error bounds of the ATk-SVM algorithm. This provides a theoretical lower bound on k for reliable classification performance. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. The main contributions of this work can be summarized as follows. • We introduce the ATk loss for supervised learning, which can balance the pros and cons of the average and maximum losses, and allows the learning algorithm to better adapt to imbalanced and multi-modal data distributions. • We provide an algorithm and an interpretation of the ATk loss, suggesting that most existing learning algorithms can take advantage of it without a significant increase in computation. • We further study the theoretical aspects of the ATk loss on classification calibration and error bounds of minimum average top-k learning for ATk-SVM. • We perform extensive experiments to validate the effectiveness of MATk learning. 2 Formulation and Interpretation The original ATk loss, though intuitive, is not convenient to work with because of the sorting procedure involved. This also obscures its connection with the statistical view of supervised learning as minimizing the expectation of the individual loss with regards to the underlying data distribution. Yet, it affords an equivalent form, which is based on the following result. Lemma 1 (Lemma 1, [16]). $\sum_{i=1}^{k} x_{[i]}$ is a convex function of $(x_1, \cdots, x_n)$. Furthermore, for $x_i \ge 0$ and $i = 1, \cdots, n$, we have $\sum_{i=1}^{k} x_{[i]} = \min_{\lambda \ge 0}\left\{ k\lambda + \sum_{i=1}^{n} [x_i - \lambda]_+ \right\}$, where $[a]_+ = \max\{0, a\}$ is the hinge function. For completeness, we include a proof of Lemma 1 in the supplementary materials. Using Lemma 1, we can reformulate the ATk loss (1) as $\mathcal{L}_{\text{avt-}k}(L_z(f)) = \frac{1}{k}\sum_{i=1}^{k} \ell_{[i]}(f) \propto \min_{\lambda \ge 0}\left\{ \frac{1}{n}\sum_{i=1}^{n} [\ell_i(f) - \lambda]_+ + \frac{k}{n}\lambda \right\}$. (2) In other words, the ATk loss is equivalent to the minimum of the average of individual losses that are shifted and truncated by the hinge function controlled by $\lambda$. This sheds more light on the ATk loss, which is particularly easy to illustrate in the context of binary classification using margin losses, $\ell(f(x), y) = \ell(yf(x))$. In binary classification, the "gold standard" of individual loss is the 0-1 loss $\mathbb{I}_{yf(x) \le 0}$, which exerts a constant penalty 1 on examples that are misclassified by $f$ and no penalty on correctly classified examples. However, the 0-1 loss is difficult to work with, as it is neither continuous nor convex. In practice, it is usually replaced by a surrogate convex loss. Such convex surrogates afford efficient algorithms, but as continuous and convex upper bounds of the 0-1 loss, they typically also penalize correctly classified examples, i.e., for $y$ and $x$ that satisfy $yf(x) > 0$, $\ell(yf(x)) > 0$, whereas $\mathbb{I}_{yf(x) \le 0} = 0$ (Fig.2). This implies that when the average of individual losses across all training examples is minimized, examples correctly classified by $f$ that are "too close" to the classification boundary may be sacrificed to accommodate reducing the average loss, as is shown in Fig.1. In contrast, after the individual loss is combined with the hinge function, i.e., $[\ell(yf(x)) - \lambda]_+$ with $\lambda > 0$, it has the effect of "shifting down" the original individual loss function and truncating it at zero, see Fig.2. The transformation of the individual loss reduces the penalties of all examples, and in particular benefits correctly classified data. In particular, if such examples are "far enough" from the decision boundary, as in the 0-1 loss, their penalty becomes zero. This alleviates the likelihood of misclassification on those rare sub-populations of data that are close to the decision boundary.
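The equivalence in Lemma 1 (and hence in Eq.(2)) is easy to verify numerically; the following sketch is our own illustration, not code from the paper, and exploits the fact that the minimizer over $\lambda$ can be taken at one of the $x_i$ themselves:

# Our own numerical check (not from the paper) of Lemma 1:
# the sum of the k largest x_i equals min over lambda >= 0 of
# k*lambda + sum_i [x_i - lambda]_+ .
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 5.0, size=20)  # nonnegative individual losses
k = 4

top_k_sum = np.sort(x)[::-1][:k].sum()

# The minimum is attained at lambda = x_[k], so scanning the x_i
# themselves (plus 0) suffices.
candidates = np.concatenate(([0.0], x))
variational = min(k * lam + np.maximum(x - lam, 0.0).sum() for lam in candidates)

assert np.isclose(top_k_sum, variational)
print(top_k_sum, variational)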
Algorithm: The reformulation of the ATk loss in Eq.(2) also facilitates the development of optimization algorithms for minimum ATk learning. As practical supervised learning problems usually use a parametric form of $f$, written $f(x; w)$ where $w$ is the parameter, the corresponding minimum ATk objective becomes $\min_{w, \lambda \ge 0}\left\{ \frac{1}{n}\sum_{i=1}^{n} [\ell(f(x_i; w), y_i) - \lambda]_+ + \frac{k}{n}\lambda + \Omega(w) \right\}$. (3) It is not hard to see that if $\ell(f(x; w), y)$ is convex with respect to $w$, the objective function in Eq.(3) is a convex function of $w$ and $\lambda$ jointly. This leads to an immediate stochastic (projected) gradient descent method [3, 21] for solving (3). For instance, with $\Omega(w) = \frac{1}{2C}\|w\|^2$, where $C > 0$ is a regularization factor, at the $t$-th iteration the corresponding MATk objective can be minimized by first randomly sampling $(x_{i_t}, y_{i_t})$ from the training set and then updating the parameters as $w^{(t+1)} \leftarrow w^{(t)} - \eta_t \left( \partial_w \ell(f(x_{i_t}; w^{(t)}), y_{i_t}) \cdot \mathbb{I}_{[\ell(f(x_{i_t}; w^{(t)}), y_{i_t}) > \lambda^{(t)}]} + \frac{w^{(t)}}{C} \right)$ and $\lambda^{(t+1)} \leftarrow \left[ \lambda^{(t)} - \eta_t \left( \frac{k}{n} - \mathbb{I}_{[\ell(f(x_{i_t}; w^{(t)}), y_{i_t}) > \lambda^{(t)}]} \right) \right]_+$, (4) where $\partial_w \ell(f(x; w), y)$ denotes the sub-gradient with respect to $w$, and $\eta_t \sim \frac{1}{\sqrt{t}}$ is the step size. ATk-SVM: As a general aggregate loss, the ATk loss can be combined with any functional form of individual loss. In the case of binary classification, the ATk loss combined with the individual hinge loss for a prediction function $f$ from a reproducing kernel Hilbert space (RKHS) [18] leads to the ATk-SVM model. Specifically, we consider a function $f$ as a member of an RKHS $\mathcal{H}_K$ with norm $\|\cdot\|_K$, which is induced from a reproducing kernel $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Using the individual hinge loss, $[1 - y_i f(x_i)]_+$, the corresponding MATk learning objective in the RKHS becomes $\min_{f \in \mathcal{H}_K, \lambda \ge 0} \frac{1}{n}\sum_{i=1}^{n} \big[ [1 - y_i f(x_i)]_+ - \lambda \big]_+ + \frac{k}{n}\lambda + \frac{1}{2C}\|f\|_K^2$, (5) where $C > 0$ is the regularization factor. Furthermore, the outer hinge function in (5) can be removed due to the following result. Lemma 2. For $a \ge 0$, $b \ge 0$, it holds that $\big[ [a - \ell]_+ - b \big]_+ = [a - b - \ell]_+$. The proof of Lemma 2 can be found in the supplementary materials. In addition, note that for any minimizer $(f_z, \lambda_z)$ of (5), setting $f(x) = 0$, $\lambda = 1$ in the objective function of (5), we have $\frac{k}{n}\lambda_z \le \frac{1}{n}\sum_{i=1}^{n}\big[ [1 - y_i f_z(x_i)]_+ - \lambda_z \big]_+ + \frac{k}{n}\lambda_z + \frac{1}{2C}\|f_z\|_K^2 \le \frac{k}{n}$, so $0 \le \lambda_z \le 1$, which means that the minimization can be restricted to $0 \le \lambda \le 1$. Using these results and introducing $\rho = 1 - \lambda$, Eq.(5) can be rewritten as $\min_{f \in \mathcal{H}_K, 0 \le \rho \le 1} \frac{1}{n}\sum_{i=1}^{n} [\rho - y_i f(x_i)]_+ - \frac{k}{n}\rho + \frac{1}{2C}\|f\|_K^2$. (6) The ATk-SVM objective generalizes several existing SVM models. For example, when $k = n$, it equals the standard C-SVM [5]. When $C = 1$ and under the condition $K(x_i, x_i) \le 1$ for all $i$, ATk-SVM reduces to ν-SVM [19] with $\nu = \frac{k}{n}$. Furthermore, similar to the conventional SVM model, writing (6) in its dual form leads to a convex quadratic programming problem that can be solved efficiently. See supplementary materials for more detailed explanations.
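For the linear, non-kernelized case, the stochastic update (4) is only a few lines of code. The sketch below is our own hedged illustration (hinge individual loss; the function name matk_sgd and all variable names are ours, not the authors'):

# Our own sketch (not the authors' code) of the stochastic update in
# Eq.(4) for MATk learning with a linear model and hinge individual loss.
import numpy as np

def matk_sgd(X, y, k, C=1.0, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, lam, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / np.sqrt(t)                      # step size ~ 1/sqrt(t)
            loss_i = max(0.0, 1.0 - y[i] * X[i] @ w)    # hinge individual loss
            active = float(loss_i > lam)                # indicator in Eq.(4)
            grad_w = -y[i] * X[i] if loss_i > 0 else np.zeros(d)  # subgradient
            w -= eta * (grad_w * active + w / C)
            lam = max(0.0, lam - eta * (k / n - active))  # projection onto lambda >= 0
    return w, lam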
Choosing k. The number of top individual losses in the ATk loss is a critical parameter that affects the learning performance. In concept, using the ATk loss will not be worse than using the average or maximum losses, as they correspond to specific choices of k. In practice, k can be chosen during training from a validation dataset, as in the experiments in Section 4. As k is an integer, a simple grid search usually suffices to find a satisfactory value. Besides, Theorem 1 in Section 3 establishes a theoretical lower bound on k, based on the Bayes error, to guarantee reliable classification. If we have information about the proportion of outliers, we can also narrow the search space of k based on the fact that the ATk loss is the convex upper bound of the top-k loss, similar to [20]. 3 Statistical Analysis In this section, we address the statistical properties of the ATk objective in the context of binary classification. Specifically, we investigate the classification calibration property [1] of the general ATk objective, and derive bounds for the misclassification error of the ATk-SVM model in the framework of statistical learning theory (e.g., [1, 7, 23, 26]). 3.1 Classification Calibration under the ATk Loss We assume the training data $z = \{(x_i, y_i)\}_{i=1}^n$ are i.i.d. samples from an unknown distribution $p$ on $\mathcal{X} \times \{\pm 1\}$. Let $p_{\mathcal{X}}$ be the marginal distribution of $p$ on the input space $\mathcal{X}$. Then, the misclassification error of a classifier $f: \mathcal{X} \to \{\pm 1\}$ is denoted by $\mathcal{R}(f) = \Pr(y \ne f(x)) = \mathbb{E}[\mathbb{I}_{yf(x) \le 0}]$. The Bayes error is given by $\mathcal{R}^* = \inf_f \mathcal{R}(f)$, where the infimum is over all measurable functions. No function can achieve less risk than the Bayes rule $f_c(x) = \mathrm{sign}(\eta(x) - \frac{1}{2})$, where $\eta(x) = \Pr(y = 1 \mid x)$ [8]. In practice, one uses a surrogate loss $\ell: \mathbb{R} \to [0, \infty)$ which is convex and upper-bounds the 0-1 loss. The population $\ell$-risk (generalization error) is given by $\mathcal{E}_\ell(f) = \mathbb{E}[\ell(yf(x))]$. Denote the optimal $\ell$-risk by $\mathcal{E}_\ell^* = \inf_f \mathcal{E}_\ell(f)$. A very basic requirement for using such a surrogate loss $\ell$ is the so-called classification calibration (a point-wise form of Fisher consistency) [1, 14]. Specifically, a loss $\ell$ is classification calibrated with respect to a distribution $p$ if, for any $x$, the minimizer $f_\ell^* = \arg\inf_f \mathcal{E}_\ell(f)$ has the same sign as the Bayes rule $f_c(x)$, i.e., $\mathrm{sign}(f_\ell^*(x)) = \mathrm{sign}(f_c(x))$ whenever $f_c(x) \ne 0$. An appealing result concerning the classification calibration of a loss function $\ell$ was obtained in [1], which states that $\ell$ is classification calibrated if $\ell$ is convex, differentiable at 0 and $\ell'(0) < 0$. In the same spirit, we investigate the classification calibration property of the ATk loss. Specifically, we first obtain the population form of the ATk objective as the infinite-sample limit of (2), $\frac{1}{n}\sum_{i=1}^{n} [\ell(y_i f(x_i)) - \lambda]_+ + \frac{k}{n}\lambda \;\xrightarrow[n \to \infty]{k/n \to \nu}\; \mathbb{E}\big[ [\ell(yf(x)) - \lambda]_+ \big] + \nu\lambda$. We then consider the optimization problem $(f^*, \lambda^*) = \arg\inf_{f, \lambda \ge 0} \mathbb{E}\big[ [\ell(yf(x)) - \lambda]_+ \big] + \nu\lambda$, (7) where the infimum is taken over all measurable functions $f: \mathcal{X} \to \mathbb{R}$. We say the ATk (aggregate) loss is classification calibrated with respect to $p$ if $f^*$ has the same sign as the Bayes rule $f_c$. The following theorem establishes such conditions. Theorem 1. Suppose the individual loss $\ell: \mathbb{R} \to \mathbb{R}_+$ is convex, differentiable at 0 and $\ell'(0) < 0$. Without loss of generality, assume that $\ell(0) = 1$. Let $(f^*, \lambda^*)$ be defined as in (7). (i) If $\nu > \mathcal{E}_\ell^*$ then the ATk loss is classification calibrated. (ii) If, moreover, $\ell$ is monotonically decreasing and the ATk aggregate loss is classification calibrated, then $\nu \ge \int_{\eta(x) \ne \frac{1}{2}} \min(\eta(x), 1 - \eta(x))\, dp_{\mathcal{X}}(x)$. The proof of Theorem 1 can be found in the supplementary materials. Parts (i) and (ii) of the above theorem address, respectively, the sufficient and necessary conditions on $\nu$ for the ATk loss to be classification calibrated. Since $\ell$ is an upper-bound surrogate of the 0-1 loss, the optimal $\ell$-risk $\mathcal{E}_\ell^*$ is larger than the Bayes error $\mathcal{R}^*$, i.e., $\mathcal{E}_\ell^* \ge \mathcal{R}^*$. In particular, if the individual loss $\ell$ is the hinge loss, then $\mathcal{E}_\ell^* = 2\mathcal{R}^*$.
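To put a number on this bound (our own illustrative arithmetic; the figures are hypothetical, not from the paper): with the hinge loss, part (i) guarantees calibration whenever $\nu > \mathcal{E}_\ell^* = 2\mathcal{R}^*$, so, for instance,

$$\mathcal{R}^* = 0.05,\quad n = 10^4 \;\Longrightarrow\; k > n\,\mathcal{E}_\ell^* = 2n\mathcal{R}^* = 1000,$$

i.e., one would need to average over at least the top 10% of the individual losses for this hypothetical Bayes error.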
Part (i) of the above theorem indicates that the ATk aggregate loss is classification calibrated if $\nu = \lim_{n \to \infty} k/n$ is larger than the optimal generalization error $\mathcal{E}_\ell^*$ associated with the individual loss. The choice $k > n\mathcal{E}_\ell^*$ thus guarantees classification calibration, which gives a lower bound on k. This result also provides a theoretical underpinning for the sensitivity to outliers of the maximum loss (the ATk loss with $k = 1$). If the probability of the set $\{x : \eta(x) = 1/2\}$ is zero, then $\mathcal{R}^* = \int_{\mathcal{X}} \min(\eta(x), 1 - \eta(x))\, dp_{\mathcal{X}}(x) = \int_{\eta(x) \ne 1/2} \min(\eta(x), 1 - \eta(x))\, dp_{\mathcal{X}}(x)$. Theorem 1 indicates that in this case, if the maximum loss is calibrated, one must have $\frac{1}{n} \approx \nu \ge \mathcal{R}^*$. In other words, as the number of training data increases, the Bayes error has to become arbitrarily small, which is consistent with the empirical observation that the maximum loss works well in the well-separable data setting but is sensitive to outliers and non-separable data. 3.2 Error Bounds of ATk-SVM We next study the excess misclassification error of the ATk-SVM model, i.e., $\mathcal{R}(\mathrm{sign}(f_z)) - \mathcal{R}^*$. Let $(f_z, \rho_z)$ be the minimizer of the ATk-SVM objective (6) in the RKHS setting. Let $f_{\mathcal{H}}$ be the minimizer of the generalization error over the RKHS $\mathcal{H}_K$, i.e., $f_{\mathcal{H}} = \arg\min_{f \in \mathcal{H}_K} \mathcal{E}_h(f)$, where we use the notation $\mathcal{E}_h(f) = \mathbb{E}\big[ [1 - yf(x)]_+ \big]$ to denote the $\ell$-risk of the hinge loss. In the finite-dimensional case, the existence of $f_{\mathcal{H}}$ follows from the direct method in the variational calculus, as $\mathcal{E}_h(\cdot)$ is lower-bounded by zero, coercive, and weakly sequentially lower semi-continuous by its convexity. For an infinite-dimensional $\mathcal{H}_K$, we assume the existence of $f_{\mathcal{H}}$. We also assume that $\mathcal{E}_h(f_{\mathcal{H}}) < 1$, since even a naïve zero classifier can achieve $\mathcal{E}_h(0) = 1$. Denote the approximation error by $\mathcal{A}(\mathcal{H}_K) = \inf_{f \in \mathcal{H}_K} \mathcal{E}_h(f) - \mathcal{E}_h(f_c) = \mathcal{E}_h(f_{\mathcal{H}}) - \mathcal{E}_h(f_c)$, and let $\kappa = \sup_{x \in \mathcal{X}} \sqrt{K(x, x)}$. The main theorem can be stated as follows. Theorem 2. Consider the ATk-SVM in the RKHS setting (6). For any $\varepsilon \in (0, 1]$ and $\mu \in (0, 1 - \mathcal{E}_h(f_{\mathcal{H}}))$, choose $k = \lceil n(\mathcal{E}_h(f_{\mathcal{H}}) + \mu) \rceil$. Then it holds that $\Pr\left\{ \mathcal{R}(\mathrm{sign}(f_z)) - \mathcal{R}^* \ge \mu + \mathcal{A}(\mathcal{H}_K) + \varepsilon + \frac{1 + C_{\kappa,\mathcal{H}}}{\sqrt{n}\,\mu} \right\} \le 2\exp\left( -\frac{n\mu^2\varepsilon^2}{(1 + C_{\kappa,\mathcal{H}})^2} \right)$, where $C_{\kappa,\mathcal{H}} = \kappa(2\sqrt{2C} + 4\|f_{\mathcal{H}}\|_K)$. The complete proof of Theorem 2 is given in the supplementary materials. The main idea is to show that $\rho_z$ is bounded from below by a positive constant with high probability, and then to bound the excess misclassification error $\mathcal{R}(\mathrm{sign}(f_z)) - \mathcal{R}^*$ by $\mathcal{E}_h(f_z/\rho_z) - \mathcal{E}_h(f_c)$. If $K$ is a universal kernel, then $\mathcal{A}(\mathcal{H}_K) = 0$ [23]. In this case, letting $\mu = \varepsilon \in (0, 1 - \mathcal{E}_h(f_{\mathcal{H}}))$, Theorem 2 gives $\Pr\left\{ \mathcal{R}(\mathrm{sign}(f_z)) - \mathcal{R}^* \ge 2\varepsilon + \frac{1 + C_{\kappa,\mathcal{H}}}{\sqrt{n}\,\varepsilon} \right\} \le 2\exp\left( -\frac{n\varepsilon^4}{(1 + C_{\kappa,\mathcal{H}})^2} \right)$. Consequently, choosing $C$ such that $\lim_{n \to \infty} C/n = 0$, which is equivalent to $\lim_{n \to \infty} (1 + C_{\kappa,\mathcal{H}})^2/n = 0$, $\mathcal{R}(\mathrm{sign}(f_z))$ can be arbitrarily close to the Bayes error $\mathcal{R}^*$ with high probability, as long as $n$ is sufficiently large. 4 Experiments We have demonstrated that the ATk loss provides a continuum between the average loss and the maximum loss, which can potentially alleviate their drawbacks. A natural question is whether this advantage actually benefits practical learning problems. In this section, we demonstrate the behavior of MATk learning coupled with different individual losses for binary classification and regression on synthetic and real datasets, with minimizing the average loss and the maximum loss treated as special cases for $k = n$ and $k = 1$, respectively.
For simplicity, in all experiments we use homogeneous linear prediction functions $f(x) = w^T x$ with parameters $w$ and the Tikhonov regularizer $\Omega(w) = \frac{1}{2C}\|w\|^2$, and optimize the MATk learning objective with the stochastic gradient descent method given in (4). Binary Classification: We conduct experiments on binary classification using eight benchmark datasets from the UCI (https://archive.ics.uci.edu/ml/datasets.html) and KEEL (http://sci2s.ugr.es/keel/datasets.php) data repositories to illustrate the potential effects of using the ATk loss in practical learning to adapt to different underlying data distributions. A detailed description of the datasets is given in the supplementary materials. The standard individual logistic loss and hinge loss are combined with the different aggregate losses. Note that the average loss combined with the individual logistic loss corresponds to the logistic regression model, and the average loss combined with the individual hinge loss leads to the C-SVM algorithm [5]. For each dataset, we randomly sample 50%, 25% and 25% of the examples as training, validation and testing sets, respectively. During training, we select the parameters $C$ (regularization factor) and $k$ (number of top losses) on the validation set. The parameter $C$ is searched on a $\log_{10}$-scale grid in the range $[10^{-5}, 10^5]$ (extended when the optimal value lies on the boundary), and $k$ is searched on a $\log_{10}$-scale grid in the range $[1, n]$. We use $k^*$ to denote the optimal $k$ selected on the validation set. We report the average performance over 10 random splits into training/validation/testing sets for each dataset with MATk learning objectives formed from the individual logistic loss and hinge loss. Table 1 gives the experimental results in terms of misclassification rate (results in terms of other classification quality metrics are given in the supplementary materials). Note that on these datasets the average loss consistently outperforms the maximum loss, but the performance can be further improved with the ATk loss, which is more adaptable to different data distributions. This advantage of the ATk loss is particularly conspicuous for the Monk and Australian datasets. To further understand the behavior of MATk learning on individual datasets, we show plots of the misclassification rate on the testing set vs. $k$ for four representative datasets in Fig.3 (in which $C$ is fixed to $10^2$ and $k \in [1, n]$). As these plots show, on all four datasets there is a clear range of $k$ values with better classification performance than the two extreme cases $k = 1$ and $k = n$, corresponding to the maximum and average loss, respectively. To be more specific, when $k = 1$, potential noise and outliers have the largest negative effect on the learned classifier, and the resulting classification performance is very poor. As $k$ increases, the negative effects of noise and outliers diminish and the classification performance improves; this is most significant on the Monk, Australian and Splice datasets. However, as $k$ keeps increasing, the classification performance may decrease again (e.g., when $k = n$). This may be because, as $k$ increases, more and more well-classified samples are included, and the non-zero loss on these samples has a negative effect on the learned classifier (see our analysis in Section 2), especially for the Monk, Australian and Phoneme datasets.
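The validation protocol above is straightforward to reproduce; the following is our own minimal sketch (it reuses the hypothetical matk_sgd routine from the earlier sketch, and is not the authors' released code):

# Our own sketch of the (C, k) selection protocol described above.
import numpy as np

def select_C_and_k(X_tr, y_tr, X_val, y_val):
    n = len(y_tr)
    C_grid = [10.0 ** p for p in range(-5, 6)]                 # log10-scale grid
    k_grid = np.unique(np.logspace(0, np.log10(n), 20).astype(int))
    best = (None, None, np.inf)
    for C in C_grid:
        for k in k_grid:
            w, _ = matk_sgd(X_tr, y_tr, k=k, C=C)              # hypothetical trainer
            err = np.mean(np.sign(X_val @ w) != y_val)         # validation error
            if err < best[2]:
                best = (C, k, err)
    return best  # (C*, k*, validation misclassification rate)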
Regression. Next, we report experimental results of linear regression on one synthetic dataset (Sinc) and three real datasets from [4]; a detailed description of these datasets is given in the supplementary materials. The standard square loss and absolute loss are adopted as individual losses. Note that the average loss coupled with the individual square loss is the standard ridge regression model, and the average loss coupled with the individual absolute loss reduces to ν-SVR [19]. We normalize the target output to $[0, 1]$ and report the root mean square error (RMSE) in Table 2, with the optimal $C$ and $k^*$ obtained by a grid search as in the classification case (performance in terms of mean absolute error (MAE) is given in the supplementary materials). Similar to the classification case, using the ATk loss usually improves performance in comparison to the average loss or the maximum loss. 5 Related Works Most work on learning objectives focuses on designing individual losses, and only a few works are dedicated to new forms of aggregate losses. Recently, aggregate losses that consider the order of training data have been proposed in curriculum learning [2] and self-paced learning [11, 9], which suggest organizing the training process in several passes, with samples included from easy to hard gradually. It is interesting to note that each pass of self-paced learning [11] is equivalent to minimizing the average of the k smallest individual losses, i.e., $\frac{1}{k}\sum_{i=n-k+1}^{n} \ell_{[i]}(f)$, which we term the average bottom-k loss, in contrast to the average top-k loss in our case. In [20], the pros and cons of the maximum loss and the average loss are compared, and the top-k loss, i.e., $\ell_{[k]}(f)$, is advocated as a remedy to the problems of both. However, unlike the ATk loss, in general neither the average bottom-k loss nor the top-k loss is a convex function with regards to the individual losses. Minimizing top-k errors has also been used in individual losses. For ranking problems, the work of [17, 24] describes a form of individual loss that gives more weight to the top examples in a ranked list. In multi-class classification, the top-1 loss is commonly used, which incurs a penalty when the top-1 predicted class is not the same as the target class label [6]. This has been further extended in [12, 13] to the top-k multi-class loss, in which, for a class label that can take m different values, the classifier is only penalized when the correct value does not show up among the k most confident predicted values. As individual losses, these works are complementary to the ATk loss, and they can be combined to improve learning performance. 6 Discussion In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. We demonstrate that the ATk loss can better protect small subsets of hard samples from being swamped by a large number of easy ones, especially for imbalanced problems. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data.
We further study the theoretical aspects of the ATk loss on classification calibration and error bounds of minimum average top-k learning for ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets. There are many interesting questions left unanswered regarding the use of the ATk loss as a learning objective. Currently, we use conventional gradient-based algorithms for its optimization, but we are investigating special instantiations of MATk learning for which more efficient optimization methods can be developed. Furthermore, the ATk loss can also be used for unsupervised learning problems (e.g., clustering), which is a focus of our subsequent study. It is also of practical importance to combine the ATk loss with other successful learning paradigms such as deep learning, and to apply it to large-scale real-life datasets. Lastly, it would be very interesting to derive error bounds of MATk learning with general individual loss functions. 7 Acknowledgments We thank the anonymous reviewers for their constructive comments. This work was completed when the first author was a visiting student at SUNY Albany, supported by a scholarship from the University of Chinese Academy of Sciences (UCAS). Siwei Lyu is supported by the National Science Foundation (NSF, Grant IIS-1537257) and Yiming Ying is supported by the Simons Foundation (#422504) and the 2016-2017 Presidential Innovation Fund for Research and Scholarship (PIFRS) program from SUNY Albany. This work is also partially supported by the National Science Foundation of China (NSFC, Grant 61620106003) for Bao-Gang Hu and Yanbo Fan.
1. What is the focus of the paper regarding supervised learning? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. Do you have any concerns or questions regarding the paper's content? 4. How does the reviewer assess the clarity and effectiveness of the presented algorithm? 5. What are the limitations of the paper regarding the choice of parameters and performance comparisons?
Review
Review This paper investigates a new learning setting: optimizing the average of the k largest (top-k) individual losses for supervised learning. This setting is different from standard empirical risk minimization (ERM), which optimizes the average loss function over the dataset. The proposed setting is also different from the maximum loss (Shalev-Shwartz and Wexler 2016), which optimizes the maximum loss. This paper tries to optimize the average of the top-k loss functions, which can be viewed as a natural generalization of ERM and the maximum loss. The authors formulate it as a convex optimization problem, which can be solved with conventional gradient-based methods. The authors give some learning-theoretic analyses of the setting, on the classification calibration of the top-k loss and the error bounds of ATk-SVM. Finally, the authors present some experiments to verify the effectiveness of the proposed algorithm. This work is generally well-written, with some advantages as follows: 1) The authors introduce a new direction for supervised learning, which is a natural generalization of ERM and the work of (Shalev-Shwartz and Wexler 2016). 2) Some theoretical analyses are presented for the proposed learning setting. 3) The authors present a learning algorithm. Cons: 1) Some statements are not clear; for example, "top-k loss" is similar to top-k ranking; more importantly, "ensemble loss" gives the impression of ensemble learning, whereas they are totally different. 2) When should the average top-k loss be used? I do not think the authors make this clear. Intuitively, the performance (w.r.t. accuracy) of the average top-k loss is lower than ERM's without noise and outliers, while I guess the average top-k loss algorithm may perform well when dealing with noisy data and outliers. 3) How should the parameter k be chosen? The authors use cross-validation in the experiments, but no analysis is given. 4) The authors should present some t-tests on the performance on the benchmark datasets. I doubt some of the experimental results; for example, the accuracy on German is about 0.79 for standard learning algorithms. 5) How about the efficiency in comparison with ERM and the work of (Shalev-Shwartz and Wexler 2016)?
1. What is the focus and contribution of the paper on ensemble loss for supervised learning problems?
2. What are the strengths of the proposed approach, particularly in terms of its convex formulation and sample complexity bound?
3. What are the weaknesses of the paper regarding its experimental results and analysis?
4. How does the reviewer assess the clarity and quality of the paper's content, including its figures and tables?
5. Are there any concerns or suggestions regarding the selection of datasets, the choice of evaluation metrics, and the interpretation of the results?
Review
Review

This paper proposes a new ensemble loss (the average top-k loss) for supervised learning problems: the average over the k largest individual losses on a training set is used as the objective function for supervised training. The authors propose a convex formulation to implement this idea and cast the overall problem as a convex optimization that can be solved using gradient methods. The authors also analyze how the free parameter k relates to classification problems and how to set it optimally. As with the standard average loss, the authors provide a sample complexity bound for the ATk-based SVM formulation.

On experiments:
- It would be better to provide some details on the datasets and why these datasets were selected.
- If 10 random train/test splits are used, it would be better to report standard deviations in Table 1.
- It would be great to include more comments on why the k-plots in Figure 3 show different trends for the four datasets. Is this connected to some property of the datasets? It is not clear from these plots how to select k in general.
- For the regression problem, RMSE is used as a metric, but the authors consider both the square loss and the absolute loss. It would be good to use both RMSE and MAE to measure performance.

Figure 1 has four different synthetic datasets, but it is really difficult to parse the information without a detailed explanation. It would be more helpful to illustrate the key idea in the introduction by explaining the key differences among the four synthetic examples and commenting on the cases in which the ATk loss makes more sense and helps reduce certain errors.
NIPS
Title
Coresets for Clustering with Fairness Constraints

Abstract
In a recent work, [20] studied the following “fair” variants of classical clustering problems such as k-means and k-median: given a set of n data points in $\mathbb{R}^d$ and a binary type associated to each data point, the goal is to cluster the points while ensuring that the proportion of each type in each cluster is roughly the same as its underlying proportion. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types such as race and gender [7], or on addressing the problem that the clustering algorithms in the above work do not scale well [42, 8, 6]. The main contribution of this paper is an approach to clustering with fairness constraints that involves multiple, non-disjoint types and is also scalable. Our approach is based on novel constructions of coresets: for the k-median objective, we construct an ε-coreset of size $O(\Gamma k^2 \varepsilon^{-d})$, where Γ is the number of distinct collections of groups that a point may belong to, and for the k-means objective, we show how to construct an ε-coreset of size $O(\Gamma k^3 \varepsilon^{-d-1})$. The former result is the first known coreset construction for the fair clustering problem with the k-median objective, and the latter result removes the dependence on the size of the full dataset present in [42] and generalizes it to multiple, non-disjoint types. Plugging our coresets into existing algorithms for fair clustering such as [6] results in the fastest algorithms for several cases. Empirically, we assess our approach on the Adult, Bank, Diabetes and Athlete datasets, and show that the coreset sizes are much smaller than the full dataset; applying coresets indeed accelerates the running time of computing the fair clustering objective while ensuring that the resulting objective difference is small. We also achieve a speed-up of recent fair clustering algorithms [6, 7] by incorporating our coreset construction.

1 Introduction

Clustering algorithms are widely used in automated decision-making tasks, e.g., unsupervised learning [43], feature engineering [33, 27], and recommendation systems [10, 40, 21]. With the increasing application of clustering algorithms in human-centric contexts, there is a growing concern that, if left unchecked, they can lead to discriminatory outcomes for protected groups, e.g., females or black people. For instance, the proportion of a minority group assigned to some cluster can be far from its underlying proportion, even if the clustering algorithm does not take the sensitive attribute into its decision making [20]. Such an outcome may, in turn, lead to unfair treatment of minority groups; e.g., women may receive proportionally fewer job recommendations with high salary [22, 38] due to their underrepresentation in the cluster of high-salary recommendations. To address this issue, Chierichetti et al. [20] recently proposed the fair clustering problem, which requires the clustering assignment to be balanced with respect to a binary sensitive type, e.g., sex.2 Given a set X of n data points in $\mathbb{R}^d$ and a binary type associated to each data point, the goal is to cluster the points such that the proportion of each type in each cluster is roughly the same as its underlying proportion, while ensuring that the clustering objective is minimized.

∗Authors are listed in alphabetical order of family names. Full version: [31].
2A type consists of several disjoint groups, e.g., the sex type consists of females and males.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types [7] (Definition 2.3), or on addressing the problem that these clustering algorithms do not scale well [20, 41, 42, 8, 6]. Due to the large scale of datasets, several existing fair clustering algorithms have to take samples instead of using the full dataset, since their running time is at least quadratic in the input size [20, 41, 8, 7]. Very recently, Backurs et al. [6] proposed a nearly linear approximation algorithm for fair k-median, but it only works for a binary type. It is still unknown whether there exists a scalable approximation algorithm for multiple sensitive types [6].

To improve the running time of fair clustering algorithms, a powerful technique called a coreset was introduced. Roughly, a coreset for fair clustering is a small weighted point set such that, for any k-subset and any fairness constraint, the fair clustering objective computed over the coreset is approximately the same as that computed from the full dataset (Definition 2.1). Thus, a coreset can be used as a proxy for the full dataset: one can apply any fair clustering algorithm on the coreset, achieve a good approximate solution on the full dataset, and hope to speed up the algorithm. As mentioned in [6], using coresets can indeed accelerate computation and save storage space for fair clustering problems. Another benefit is that one may want to compare the clustering performance under different fairness constraints, and hence it may be more efficient to repeatedly use coresets. Currently, the only known result on coresets for fair clustering is by Schmidt et al. [42], who constructed an ε-coreset for fair k-means clustering. However, their coreset size includes a log n factor and is restricted to a single sensitive type. Moreover, there is no known coreset construction for other commonly used clustering objectives, e.g., fair k-median.

Our contributions. Our main contribution is an efficient construction of coresets for clustering with fairness constraints that involve multiple, non-disjoint types. Technically, we show efficient constructions of ε-coresets of size independent of n for both fair k-median and fair k-means, summarized in Table 1. Let Γ denote the number of distinct collections of groups that a point may belong to (see the first paragraph of Section 4 for the formal definition).

• Our coreset for fair k-median is of size $O(\Gamma k^2 \varepsilon^{-d})$ (Theorem 4.1), which is the first known coreset to the best of our knowledge.
• For fair k-means, our coreset is of size $O(\Gamma k^3 \varepsilon^{-d-1})$ (Theorem 4.2), which improves the result of [42] by a $\Theta(\frac{\log n}{\varepsilon k^2})$ factor and generalizes it to multiple, non-disjoint types.
• As mentioned in [6], applying coresets can accelerate the running time of fair clustering algorithms, while suffering only an additional (1 + ε) factor in the approximation ratio. Setting ε = Ω(1) and plugging our coresets into existing algorithms [42, 7, 6], we directly obtain scalable fair clustering algorithms, summarized in Table 2.

We present novel technical ideas to deal with fairness constraints for coresets.

• Our first technical contribution is a reduction to the case Γ = 1 (Theorem 4.3), which greatly simplifies the problem. Our reduction works not only for our specific construction, but for all coreset constructions in general.
• Furthermore, to deal with the Γ = 1 case, we provide several interesting geometric observations about the optimal fair k-median/means clustering (Lemma 4.1), which may be of independent interest.

We implement our algorithm and conduct experiments on the Adult, Bank, Diabetes and Athlete datasets.

• A vanilla implementation results in a coreset whose size depends on $\varepsilon^{-d}$. Our implementation is inspired by our theoretical results and produces coresets whose size is much smaller in practice. This improved implementation is still within the framework of our analysis, and the same worst-case theoretical bound still holds.
• To validate the performance of our implementation, we experiment with varying ε for both fair k-median and k-means. As expected, the empirical error is well under the theoretical guarantee ε, and the size does not suffer from the $\varepsilon^{-d}$ factor. Specifically, for fair k-median we achieve 5% empirical error using only 3% of the points of the original datasets, and we achieve similar error using 20% of the points in the k-means case. In addition, our coreset for fair k-means is better than uniform sampling and that of [42] in terms of empirical error.

1.1 Other related works

There are other fair variants of clustering problems. Ahmadian et al. [4] studied a variant of the fair k-center problem in which the number of points of each type in each cluster has an upper bound, and proposed a bi-criteria approximation algorithm. Chen et al. [19] studied the fair clustering problem in which any n/k points are entitled to form their own cluster if there is another center closer in distance for all of them. Kleindessner et al. [34] investigated the fair k-center problem in which each center has a type, and the selection of the k-subset is restricted to include a fixed number of centers of each type. In another paper [35], they developed fair variants of spectral clustering (a heuristic k-means clustering framework) by incorporating the proportional fairness constraints proposed by [20].

The notion of a coreset was first proposed by Agarwal et al. [2]. There has been a large body of work on unconstrained clustering problems in Euclidean spaces [3, 28, 18, 29, 36, 24, 25, 9]. For the general (k, z)-clustering problem, Feldman and Langberg [24] presented an ε-coreset of size $\tilde{O}(dk\varepsilon^{-2z})$ computable in $\tilde{O}(nk)$ time. Huang et al. [30] showed an ε-coreset of size $\tilde{O}(\mathrm{ddim}(X) \cdot k^3 \varepsilon^{-2z})$, where $\mathrm{ddim}(X)$ is the doubling dimension, which measures the intrinsic dimensionality of a space. For the special case of k-means, Braverman et al. [9] improved the size to $\tilde{O}(k\varepsilon^{-2} \cdot \min\{k/\varepsilon, d\})$ by a dimension-reduction approach. Works such as [24] use an importance sampling technique that avoids the size factor $\varepsilon^{-d}$, but it is unknown whether such approaches can be used in fair clustering.

2 Problem definition

Consider a set $X \subseteq \mathbb{R}^d$ of n data points, an integer k (the number of clusters), and l groups $P_1, \dots, P_l \subseteq X$. An assignment constraint, proposed by Schmidt et al. [42], is a $k \times l$ integer matrix F. A clustering $\mathcal{C} = \{C_1, \dots, C_k\}$, which is a k-partitioning of X, is said to satisfy assignment constraint F if $|C_i \cap P_j| = F_{ij}$ for all $i \in [k]$, $j \in [l]$. For a k-subset $C = \{c_1, \dots, c_k\} \subseteq X$ (the center set) and $z \in \mathbb{R}_{>0}$, we define $\mathcal{K}_z(X, F, C)$ as the minimum value of $\sum_{i \in [k]} \sum_{x \in C_i} d^z(x, c_i)$ among all clusterings $\mathcal{C} = \{C_1, \dots, C_k\}$ that satisfy F, which we call the optimal fair (k, z)-clustering value. If there is no clustering satisfying F, $\mathcal{K}_z(X, F, C)$ is set to infinity.
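For intuition, here is a small brute-force sketch (ours, not the paper's) that evaluates $\mathcal{K}_z(X, F, C)$ on tiny instances by enumerating all assignments satisfying F, assuming for simplicity that each point belongs to exactly one group; practical implementations would instead use an ILP or min-cost flow:

```python
import itertools
import numpy as np

def fair_clustering_value(X, groups, F, C, z):
    """Brute-force K_z(X, F, C) for tiny inputs.

    X: (n, d) array of points; groups: length-n array of group ids in [l];
    F: (k, l) integer matrix of required counts; C: (k, d) array of centers.
    Returns the minimum of sum_i sum_{x in C_i} d(x, c_i)^z over assignments
    satisfying F, or infinity if no assignment satisfies F.
    """
    n, k = len(X), len(C)
    best = np.inf
    # Enumerate every assignment of points to the k centers (k^n options).
    for assign in itertools.product(range(k), repeat=n):
        counts = np.zeros_like(F)
        for x_idx, c_idx in enumerate(assign):
            counts[c_idx, groups[x_idx]] += 1
        if not np.array_equal(counts, F):
            continue  # violates the assignment constraint
        cost = sum(np.linalg.norm(X[i] - C[a]) ** z
                   for i, a in enumerate(assign))
        best = min(best, cost)
    return best

# Tiny example: 4 points, 2 groups, 2 centers, balanced constraint.
X = np.array([[0.0], [1.0], [9.0], [10.0]])
groups = np.array([0, 1, 0, 1])
F = np.array([[1, 1], [1, 1]])  # each cluster gets one point of each group
C = np.array([[0.5], [9.5]])
print(fair_clustering_value(X, groups, F, C, z=1))  # prints 2.0
```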
The following is our notion of coresets for fair (k, z)-clustering. It generalizes the notion introduced in [42], which only considers a partitioned group structure.

Definition 2.1 (Coreset for fair clustering). Given a set $X \subseteq \mathbb{R}^d$ of n points and l groups $P_1, \dots, P_l \subseteq X$, a weighted point set $S \subseteq \mathbb{R}^d$ with weight function $w : S \to \mathbb{R}_{>0}$ is an ε-coreset for the fair (k, z)-clustering problem if, for each k-subset $C \subseteq \mathbb{R}^d$ and each assignment constraint $F \in \mathbb{Z}_{\geq 0}^{k \times l}$, it holds that $\mathcal{K}_z(S, F, C) \in (1 \pm \varepsilon) \cdot \mathcal{K}_z(X, F, C)$.

Since points in S might receive fractional weights, we change the definition of $\mathcal{K}_z$ slightly, so that in evaluating $\mathcal{K}_z(S, F, C)$ a point $x \in S$ may be partially assigned to more than one cluster, and the total amount of assignments of x equals w(x). The currently most general notion of fairness in clustering was proposed by [7]; it enforces both upper and lower bounds on any group's proportion in a cluster.

Definition 2.2 ((α, β)-proportionally-fair). A clustering $\mathcal{C} = (C_1, \dots, C_k)$ is (α, β)-proportionally-fair ($\alpha, \beta \in [0, 1]^l$) if, for each cluster $C_i$ and each $j \in [l]$, it holds that $\alpha_j \leq \frac{|C_i \cap P_j|}{|C_i|} \leq \beta_j$.

The above definition directly implies that for each cluster $C_i$ and any two groups $P_{j_1}, P_{j_2}$ with $j_1, j_2 \in [l]$, $\frac{\alpha_{j_1}}{\beta_{j_2}} \leq \frac{|C_i \cap P_{j_1}|}{|C_i \cap P_{j_2}|} \leq \frac{\beta_{j_1}}{\alpha_{j_2}}$. In other words, the ratio of the numbers of points belonging to groups $P_{j_1}$ and $P_{j_2}$ in each cluster is bounded from both sides. Indeed, similar fairness constraints have been investigated in work on other fundamental algorithmic problems such as data summarization [14], ranking [16, 44], elections [12], personalization [17, 13], classification [11], and online advertising [15]. Naturally, Bera et al. [7] also defined the fair clustering problem with respect to (α, β)-proportional fairness as follows.

Definition 2.3 ((α, β)-proportionally-fair (k, z)-clustering). Given a set $X \subseteq \mathbb{R}^d$ of n points, l groups $P_1, \dots, P_l \subseteq X$, and two vectors $\alpha, \beta \in [0, 1]^l$, the objective of (α, β)-proportionally-fair (k, z)-clustering is to find a k-subset $C = \{c_1, \dots, c_k\} \subseteq \mathbb{R}^d$ and an (α, β)-proportionally-fair clustering $\mathcal{C} = \{C_1, \dots, C_k\}$ such that the objective function $\sum_{i \in [k]} \sum_{x \in C_i} d^z(x, c_i)$ is minimized.

Our notion of coresets is very general, and we relate it to the (α, β)-proportionally-fair clustering problem via the following observation, which is similar to Proposition 5 in [42].

Proposition 2.1. Given a k-subset C, the assignment restriction required by (α, β)-proportional fairness can be modeled as a collection of assignment constraints. As a result, if a weighted set S is an ε-coreset satisfying Definition 2.1, then for any $\alpha, \beta \in [0, 1]^l$, the (α, β)-proportionally-fair (k, z)-clustering value computed from S must be a (1 ± ε)-approximation of that computed from X.

3 Technical overview

We introduce novel techniques to tackle the assignment constraints. Recall that Γ denotes the number of distinct collections of groups that a point may belong to. Our first technical contribution is a general reduction to the Γ = 1 case, which works for any coreset construction algorithm (Theorem 4.3). The idea is to divide X into Γ parts according to the collection of groups that a point belongs to, and to construct a fair coreset with parameter Γ = 1 for each part. The observation is that the union of these coresets is a coreset for the original instance with its original Γ. Our coreset construction for the case Γ = 1 is based on the framework of [29], in which unconstrained k-median/means coresets were provided.
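Before turning to the construction, a quick illustration of Definition 2.2: a minimal check (our sketch, not the paper's code) that a given clustering is (α, β)-proportionally-fair:

```python
def is_proportionally_fair(clusters, groups, alpha, beta):
    """Check Definition 2.2: clusters is a list of lists of point indices,
    groups[j] is the set of point indices in group P_j, and alpha/beta are
    length-l lists of lower/upper proportion bounds."""
    for cluster in clusters:
        if not cluster:
            continue  # skip empty clusters
        for j, group in enumerate(groups):
            fraction = len(set(cluster) & group) / len(cluster)
            if not (alpha[j] <= fraction <= beta[j]):
                return False
    return True

# Example: two clusters over points {0,1,2,3}, two groups P_1 and P_2.
clusters = [[0, 1], [2, 3]]
groups = [{0, 2}, {1, 3}]
print(is_proportionally_fair(clusters, groups, alpha=[0.4, 0.4], beta=[0.6, 0.6]))
# True: each cluster is a 50/50 split of the two groups.
```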
The main observation of [29] is that it suffices to deal with X that lies on a line. Specifically, they show that it suffices to construct at most $O(k\varepsilon^{-d+1})$ lines, project X onto the closest lines, and construct an ε/3-coreset for each line. The coreset for each line is then constructed by partitioning the line into poly(k/ε) contiguous sub-intervals, designating at most two points to represent each sub-interval, and including these points in the coreset. Their analysis crucially uses the property that, for any given centers, the clustering partitions X into k contiguous parts on the line, since each point must be assigned to its nearest center. However, this property might not hold in fair clustering, which is our main difficulty. Nonetheless, we manage to show a new structural lemma: the optimal fair k-median/means clustering partitions X into O(k) contiguous intervals. Specifically, for fair k-median, the key geometric observation is that there always exists a center whose corresponding optimal fair k-median cluster forms a contiguous interval (Claim 4.1), and this, combined with an induction, implies that the optimal fair clustering partitions X into 2k − 1 intervals. For fair k-means, we show that each optimal fair cluster actually forms a single contiguous interval. Thanks to these new structural properties, plugging a slightly different set of parameters into [29] yields fair coresets.

4 Coresets for fair clustering

For each $x \in X$, denote by $P_x = \{i \in [l] : x \in P_i\}$ the collection of groups that x belongs to. Let $\Gamma_X$ denote the number of distinct $P_x$'s, i.e., $\Gamma_X := |\{P_x : x \in X\}|$. Let $T_z(n)$ denote the running time of a constant-approximation algorithm for the (k, z)-clustering problem. The main theorems are as follows.

Theorem 4.1 (Coreset for fair k-median (z = 1)). There exists an algorithm that constructs an ε-coreset for the fair k-median problem of size $O(\Gamma k^2 \varepsilon^{-d})$, in $O(k\varepsilon^{-d+1} n + T_1(n))$ time.

Theorem 4.2 (Coreset for fair k-means (z = 2)). There exists an algorithm that constructs an ε-coreset for the fair k-means problem of size $O(\Gamma k^3 \varepsilon^{-d-1})$, in $O(k\varepsilon^{-d+1} n + T_2(n))$ time.

Note that $\Gamma_X$ is usually small. For instance, if there is only one sensitive attribute [42], then each $P_x$ is a singleton and hence $\Gamma_X = l$. More generally, let Λ denote the maximum number of groups that any point belongs to; then $\Gamma_X \leq l^\Lambda$, and there are often only O(1) sensitive attributes per point. As noted above, the main technical difficulty in the coreset construction is dealing with the assignment constraints. We make an important observation (Theorem 4.3) that one only needs to prove Theorem 4.1 for the case l = 1. The proof of Theorem 4.3 can be found in the full version. This theorem is a generalization of Theorem 7 in [42], and the coreset of [42] in fact extends to arbitrary group structure thanks to our theorem.

Theorem 4.3 (Reduction from l groups to a single group). Suppose there exists an algorithm that computes an ε-coreset of size t for the fair (k, z)-clustering problem of $\hat{X}$ with l = 1, in time $T(|\hat{X}|, \varepsilon, k, z)$. Then there exists an algorithm that takes a set X and computes an ε-coreset of size $\Gamma_X \cdot t$ for the fair (k, z)-clustering problem, in time $\Gamma_X \cdot T(|X|, \varepsilon, k, z)$.

Our coreset constructions for both fair k-median and fair k-means are similar to those in [29], except for using a different set of parameters. At a high level, the algorithm reduces general instances to instances where the data lie on a line, and it only remains to give a coreset for the line case.
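The reduction of Theorem 4.3 is simple to operationalize. Below is a minimal sketch (ours; `coreset_single_group` is a hypothetical stand-in for any Γ = 1 construction):

```python
from collections import defaultdict

def fair_coreset(points, memberships, coreset_single_group, eps, k, z):
    """Theorem 4.3 reduction: partition the points by their group-membership
    signature P_x, build a Gamma = 1 coreset for each part, and return the union.

    points: list of points; memberships: list of frozensets, memberships[i]
    is the collection of groups point i belongs to; coreset_single_group is
    any routine returning a weighted coreset [(point, weight), ...] for a
    single-group instance (hypothetical here).
    """
    parts = defaultdict(list)
    for p, sig in zip(points, memberships):
        parts[sig].append(p)  # one part per distinct signature P_x
    coreset = []
    for sig, part in parts.items():
        # Tag each weighted coreset point with its signature so downstream
        # assignment constraints can still be enforced per group.
        coreset += [(p, w, sig) for p, w in coreset_single_group(part, eps, k, z)]
    return coreset
```

The union is a coreset of size at most $\Gamma_X \cdot t$, matching the theorem's statement.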
Next, we focus on fair k-median; the construction for the k-means case is similar and can be found in the full version.

Remark 4.1. Theorem 4.3 can be applied to construct an ε-coreset of size $O(\Gamma_X k \varepsilon^{-d+1})$ for the fair k-center clustering problem, since Har-Peled's coreset result [28] directly provides an ε-coreset of size $O(k\varepsilon^{-d+1})$ for the case l = 1.

4.1 The line case

Since l = 1, we interpret F as an integer vector in $\mathbb{Z}_{\geq 0}^k$. For a weighted point set S with weights $w : S \to \mathbb{R}_{\geq 0}$, we define the mean of S by $\bar{S} := \frac{1}{|S|} \sum_{p \in S} w(p) \cdot p$ and the error of S by $\Delta(S) := \sum_{p \in S} w(p) \cdot d(p, \bar{S})$. Denote by OPT the optimal value of the unconstrained k-median clustering. Our construction is similar to [29], and we summarize it in Algorithm 1. An illustration of Algorithm 1 may be found in Figure 1.

Input: $X = \{x_1, \dots, x_n\} \subset \mathbb{R}^d$ lying on the real line, where $x_1 \leq \dots \leq x_n$; an integer $k \in [n]$; a number OPT equal to the optimal value of k-median clustering.
Output: an ε-coreset S of X together with weights $w : S \to \mathbb{R}_{\geq 0}$.
1 Set a threshold ξ satisfying $\xi = \frac{\varepsilon \cdot \mathrm{OPT}}{30k}$;
2 Consider the points from $x_1$ to $x_n$ and group them into batches in a greedy way: each batch B is a maximal point set satisfying $\Delta(B) \leq \xi$;
3 Denote by $\mathcal{B}(X)$ the collection of all batches. Let $S \leftarrow \bigcup_{B \in \mathcal{B}(X)} \bar{B}$;
4 For each point $x = \bar{B} \in S$, set $w(x) \leftarrow |B|$;
5 Return (S, w);
Algorithm 1: FairMedian-1D(X, k)

Theorem 4.4 (Coreset for fair k-median when X lies on a line). Algorithm 1 computes an ε/3-coreset S for fair k-median clustering of X, in time O(|X|).

The running time is immediate, since for each batch $B \in \mathcal{B}(X)$ it only costs O(|B|) time to compute $\bar{B}$; hence Algorithm 1 runs in O(|X|) time. We focus on correctness in the following. In [29], it was shown that S is an ε/3-coreset for the unconstrained k-median clustering problem. Their analysis crucially uses the fact that the optimal clustering partitions X into k contiguous intervals. Unfortunately, this nice “contiguous” property does not hold in our case because of the assignment constraint $F \in \mathbb{Z}_{\geq 0}^k$. To resolve this issue, we prove a new structural property (Lemma 4.1): the optimal fair k-median clustering actually partitions X into only O(k) contiguous intervals. With this property, Theorem 4.4 follows by an argument similar to that in [29]. The detailed proof can be found in the full version.

Lemma 4.1 (Fair k-median clustering consists of 2k − 1 contiguous intervals). Suppose $X := \{x_1, \dots, x_n\} \subset \mathbb{R}^d$ lies on the real line, where $x_1 \leq \dots \leq x_n$. For every k-subset $C = (c_1, \dots, c_k) \subset \mathbb{R}^d$ and every assignment constraint $F \in \mathbb{Z}_{\geq 0}^k$, there exists an optimal fair k-median clustering that partitions X into at most 2k − 1 contiguous intervals.

Proof. We prove the lemma by induction on k. The induction hypothesis is that, for every k ≥ 1, Lemma 4.1 holds for any data set X, any k-subset $C \subseteq \mathbb{R}^d$, and any assignment constraint $F \in \mathbb{Z}_{\geq 0}^k$. The base case k = 1 holds trivially, since all points in X must be assigned to $c_1$. Assume the lemma holds for k − 1 (k ≥ 2); we prove the inductive step for k. Let $C^\star_1, \dots, C^\star_k$ be the optimal fair k-median clustering w.r.t. C and F, where $C^\star_i \subseteq X$ is the subset assigned to center $c_i$. We present the structural property in Claim 4.1, whose proof can be found in the full version.

Claim 4.1. There exists $i_0 \in [k]$ such that $C^\star_{i_0}$ consists of exactly one contiguous interval.

We continue the proof of the inductive step by constructing a reduced instance (X′, F′, C′), where a) $C' := C \setminus \{c_{i_0}\}$; b) $X' = X \setminus C^\star_{i_0}$; c) F′ is formed by removing the $i_0$-th coordinate of F.
Applying the hypothesis to (X′, F′, C′), we know that the optimal fair (k − 1)-median clustering consists of at most 2k − 3 contiguous intervals. Combining this with $C^\star_{i_0}$, which is exactly one contiguous interval, increases the number of intervals by at most 2. Thus, we conclude that the optimal fair k-median clustering for (X, F, C) has at most 2k − 1 contiguous intervals. This finishes the inductive step.

4.2 Extending to higher dimension

The extension is the same as that of [29]. We start with a set of k centers that is an O(1)-approximate solution $C^\star$ to unconstrained k-median. Then we emit $O(\varepsilon^{-d+1})$ rays around each center c in $C^\star$ (corresponding to an O(ε)-net on the unit sphere centered at c) and project each data point onto its nearest ray, such that the total projection cost is at most ε · OPT/3. For each line, we then compute an ε/3-coreset for fair k-median by Theorem 4.4, and let S denote the union of the coresets generated from all lines. By the same argument as in Theorem 2.9 of [29], S is an ε-coreset for fair k-median clustering, which implies Theorem 4.1. The detailed proof can be found in the full version.

Remark 4.2. In fact, it suffices to emit an arbitrary set of rays such that the total projection cost is at most ε · OPT/3. This observation is crucially used in our implementation (Section 5) to reduce the size of the coreset, in particular to avoid the construction of the O(ε)-net, which is of size $O(\varepsilon^{-d})$.

5 Empirical results

We implement our algorithm and evaluate its performance on real datasets.3 The implementation mostly follows our description of the algorithms, but a vanilla implementation would bring an $\varepsilon^{-d}$ factor into the coreset size. To avoid this, as observed in Remark 4.2, we may emit any set of rays as long as the total projection cost is bounded, instead of $\varepsilon^{-d}$ rays. We implement this idea by finding the smallest integer m, together with m lines, such that the minimum cost of projecting the data onto the m lines is within the error threshold. In our implementation for fair k-means, we adopt the widely used Lloyd's heuristic [37] to find the m lines; the only change to Lloyd's heuristic is that, for each cluster, we need to find a line (rather than a point) that minimizes the projection cost, and we use the SVD to find this line optimally and efficiently, as sketched below. Unfortunately, this approach does not work for fair k-median, since the SVD does not give the optimal line in that case. As a result, we still need to construct the ε-net, but we employ heuristics to find the net adaptively w.r.t. the dataset.

Our evaluation is conducted on four datasets: Adult (~50k), Bank (~45k) and Diabetes (~100k) from the UCI Machine Learning Repository [23], and Athlete (~200k) from [1], all of which were also considered in previous papers [20, 42, 7]. For all datasets, we choose numerical features to form a vector in $\mathbb{R}^d$ for each record, where d = 6 for Adult, d = 10 for Bank, d = 29 for Diabetes, and d = 3 for Athlete. We choose two sensitive types for the first three datasets: sex and marital for Adult (9 groups, Γ = 14); marital and default for Bank (7 groups, Γ = 12); sex and age for Diabetes (12 groups, Γ = 20); and we choose a single binary sensitive type, sex, for Athlete (2 groups, Γ = 2). In addition, in the full version we also discuss how the following affect the results: a) choosing a binary type as the sensitive type, and b) normalization of the dataset. We pick k = 3 (the number of clusters) throughout our experiments.
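The SVD step referenced above can be illustrated as follows (our sketch, not the authors' code): the line minimizing the total squared projection cost of a cluster passes through the cluster mean in the direction of the top right-singular vector of the centered points.

```python
import numpy as np

def best_fit_line(points):
    """Return (anchor, direction) of the line minimizing the sum of squared
    distances from `points` (an (m, d) array) to the line.

    The optimal line passes through the mean of the points, in the direction
    of the first principal component (top right-singular vector).
    """
    mean = points.mean(axis=0)
    centered = points - mean
    # SVD of the centered data; rows of Vt are right-singular vectors.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[0]

def project_onto_line(points, anchor, direction):
    """Project each point onto the line {anchor + t * direction}."""
    t = (points - anchor) @ direction
    return anchor + np.outer(t, direction)

# Example: noisy points near the line y = 2x.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
pts = np.column_stack([x, 2 * x]) + rng.normal(scale=0.1, size=(100, 2))
anchor, direction = best_fit_line(pts)
cost = np.sum((pts - project_onto_line(pts, anchor, direction)) ** 2)
print(direction, cost)  # direction ~ (1, 2)/sqrt(5), small residual cost
```

Note the squared cost is the k-means setting; for k-median (sum of unsquared distances) the SVD line is not optimal, which is why the implementation falls back to adaptive ε-net heuristics there.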
We define the empirical error as $\left|\frac{\mathcal{K}_z(S,F,C)}{\mathcal{K}_z(X,F,C)} - 1\right|$ (which is the same measure as ε) for a given F and C. To evaluate the empirical error, we draw 500 independent random samples of (F, C) and report the maximum empirical error over these samples. For each (F, C), the fair clustering objectives $\mathcal{K}_z(\cdot, F, C)$ may be formulated as integer linear programs (ILPs). We use CPLEX [32] to solve the ILPs, report the average running times4 $T_X$ and $T_S$ for evaluating the objective on the dataset X and the coreset S respectively, and also report the running time $T_C$ for constructing the coreset S.

For both k-median and k-means, we employ uniform sampling (Uni) as a baseline, in which we partition X into Γ parts according to the distinct $P_x$'s (the collection of groups that x belongs to) and take uniform samples from each part. Additionally, for k-means, we select another baseline from a recent work [42] that presented a coreset construction for fair k-means, whose implementation is based on the BICO library, a high-performance coreset-based library for computing k-means clusterings [26]. We evaluate the performance of our coreset for fair k-means against BICO and Uni. As a remark on the BICO and Uni implementations: they do not support specifying the parameter ε directly, only a hinted size for the resulting coreset. Hence, we first evaluate our coreset, and then set the hinted size for Uni and BICO to the size of our coreset.

3 https://github.com/sfjiang1990/Coresets-for-Clustering-with-Fairness-Constraints.
4 The experiments are conducted on a 4-core desktop CPU with 64 GB RAM.

We also showcase the speed-up of two recently published approximation algorithms when a 0.5-coreset is applied. The first is a practically efficient, O(log n)-approximate algorithm for fair k-median [6] that works for a binary type, referred to as FairTree. The other is a bicriteria approximation algorithm [7] for both fair k-median and k-means, referred to as FairLP. We slightly modify the implementations of FairTree and FairLP to enable them to work with our coreset, in particular making them handle weighted inputs efficiently. We run experiments on a large dataset, Census1990, which consists of about 2.5 million records (from which we select d = 13 features and a binary type), in addition to the above-mentioned Adult, Bank, Diabetes and Athlete datasets.

5.1 Results

Tables 3 and 4 summarize the accuracy-size trade-off of our coresets for fair k-median and k-means, respectively, under different error guarantees ε. Since the coreset construction time $T_C$ for Uni is very small (usually less than 50 ms), we do not report it in the tables. A key finding is that the size of the coreset does not suffer from the $\varepsilon^{-d}$ factor, thanks to our optimized implementation. For fair k-median, the empirical error of our coreset is well under control. In particular, to achieve 5% empirical error, less than 3 percent of the data is necessary for all datasets, and this results in a ~200x acceleration in evaluating the objective and a 10x acceleration even when the coreset construction time is taken into account.5 Regarding the running time, our coreset construction time scales roughly linearly with the size of the coreset, which means our algorithm is output-sensitive. The empirical error of Uni is comparable to ours on Diabetes, but its worst-case error is unbounded (2x-10x that of our coreset, even larger than ε) in general and appears unstable as ε varies.
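Before turning to the k-means results, note that the error-measurement protocol above is straightforward to reproduce. A hedged sketch, where `k_z(data, F, C)` stands in for any routine (hypothetical here, e.g., an ILP-solver wrapper) that evaluates the fair clustering objective:

```python
import numpy as np

def max_empirical_error(k_z, X, S, sampled_constraints):
    """Estimate the coreset error max |K_z(S,F,C)/K_z(X,F,C) - 1| over a
    list of sampled (F, C) pairs, following the protocol above.

    k_z(data, F, C) evaluates the fair clustering objective (hypothetical
    solver wrapper); X is the full dataset, S the weighted coreset.
    """
    errors = []
    for F, C in sampled_constraints:
        full = k_z(X, F, C)
        if np.isfinite(full) and full > 0:  # skip infeasible constraints
            errors.append(abs(k_z(S, F, C) / full - 1.0))
    return max(errors)
```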
Our coreset works well for fair k-means, and it also offers a significant acceleration in evaluating the objective. Compared with BICO, our coreset achieves a smaller empirical error for fixed ε, and its construction time is between 0.5x and 2x that of BICO. Again, the empirical error of Uni can be 2x smaller than ours and BICO's on Diabetes, but its worst-case error is unbounded in general.

Table 5 demonstrates the speed-up of FairTree and FairLP with the help of our coreset. We observe that the adoption of our coresets offers a 5x-15x speed-up for FairTree and a 15x-30x speed-up for FairLP on all datasets, even when the coreset construction time is taken into account. Specifically, the runtime of FairLP on top of our coreset is less than 1 s for all datasets, which is extremely fast. We also observe that the clustering objective $obj'_{ALG}$ obtained on top of our coresets is usually within 0.6-1.2 times $obj_{ALG}$, the objective without the coreset (noting that coresets might shrink the objective). The only exception is FairLP on Census1990, where $obj'_{ALG}$ is only 35% of $obj_{ALG}$. A possible reason is that an important step in the implementation of FairLP is computing an approximate (unconstrained) k-means clustering solution on the dataset using the sklearn library [39]. However, sklearn tends to trade accuracy for speed when the dataset gets large. As a result, FairLP actually finds a better approximate k-means solution on the coreset than on the large Census1990 dataset, and hence applying coresets can yield a much smaller clustering objective.

6 Future work

This paper constructs ε-coresets for the fair k-median/means clustering problem whose size is independent of the size of the full dataset, and which handle data with multiple, non-disjoint types. Our coreset for fair k-median is the first known coreset construction, to the best of our knowledge. For fair k-means, we improve the coreset size of the prior result [42] and extend it to multiple, non-disjoint types. The empirical results show that our coresets are indeed much smaller than the full dataset and lead to significant reductions in the running time of computing the fair clustering objective. Our work leaves several interesting future directions. For unconstrained clustering, several works use sampling approaches such that the coreset size does not depend exponentially on the Euclidean dimension d. It would be interesting to investigate whether sampling approaches can be applied to construct fair coresets and achieve size bounds similar to the unconstrained setting. Another direction is to construct coresets for general fair (k, z)-clustering beyond k-median/means/center.

5 The same coreset may be used for clustering with any assignment constraints, so its construction time would be averaged out if multiple fair clustering tasks are performed.

Acknowledgments

This research was supported in part by NSF CCF-1908347, SNSF 200021_182527, ONR Award N00014-18-1-2364, and a Minerva Foundation grant.
1. What is the main contribution of the paper regarding fair clustering?
2. What are the strengths and weaknesses of the proposed approach compared to previous work?
3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
4. Are there any suggestions for improving the presentation, such as including an algorithm box or visual description?
5. What additional comparisons or experiments could be made to enhance the paper's impact?
Review
Review

This paper introduces a new coreset construction mechanism for fair clustering in which the points can belong to multiple, non-disjoint types. As in classic fair clustering, the goal of this work is to construct a clustering in which the types represented in each cluster are balanced. Unlike previous work, the focus here is on constructing the clustering efficiently via coresets. This work provides a coreset construction algorithm for fair k-median (previously unknown) and improves the previously known coreset construction algorithm for fair k-means. In addition to theoretical contributions with respect to coreset size and construction time, the authors also provide a small empirical study.

The main strength of this paper is theoretical. The authors prove that their algorithms construct coresets with size independent of N for fair k-median and k-means. To do so, they adapt the proof technique of a previous paper that does not deal with fair clustering to the fair clustering setting. This is a meaningful and timely contribution, as fair clustering is an active and quickly growing research area.

The primary weakness of this paper is the clarity of the technical sections. Section 3 helps with this, but Sections 4 and 5 are difficult to understand without carefully studying reference [25] (in fact, there is a longer version of Section 5 in the appendix!). Elements such as an algorithm box and/or a visual description of the 1-d case (Section 4.1) would improve the presentation. The source code was helpful here, although an algorithm box may be better for readers without access to the source.

In the experiments, this paper compares its coreset construction algorithm to one other coreset construction algorithm for fair k-means. While this comparison is informative, the work misses other clear comparisons that would improve the paper. For example, there are no comparisons to fair clustering approaches that do not use coresets (reference [4] in particular) to help explore the trade-off between error and speed.

**Edit**: thank you for including the additional experimentation. I think this does improve your paper. Additionally, your commitment to include a visualization and pseudocode should improve readability.

Originality: the paper exhibits a reasonable amount of originality. Many of the proof techniques and other ideas stem from reference [25]. However, the authors show how that work can be extended to the fair clustering setting with multiple, non-disjoint types, which requires novel analysis.

Clarity: the writing and overall storyline of the paper are relatively clear, but many details and definitions are omitted, which makes understanding the paper challenging. For example, the definitions of, and differences between, the terms “groups” and “types” are never made explicit. Also, this paper draws heavily on reference [25]; understanding the details of this work was challenging for me without familiarity with [25].

Significance: this paper contributes to an active and important area of coresets for fair clustering problems, covering both fair k-median and fair k-means. The results presented in this paper will be important for other researchers studying scalable approaches to fair clustering with coresets.

Quality: the theoretical pieces of this paper were challenging to evaluate but seem to be correct.
=== Smaller issues ===

- Section 1: “Due the scale at which one is required to clustering”: ungrammatical.
- Section 1: “Save the storage” → “Save storage”.
- Section 1: “size independent on N” → “size independent of N”.
- Section 1: “with size depend on” → “with size that depends on”.
- Definition 2.1: K_z(S,F,C) \in (1 \pm \epsilon) K_z(X,F,C). I don’t think that “\in” is the correct notation here, since K_z is a value and not a set.
- Section 4: “In a high level” → “at a high level”.
- Section 4: “is the same to” → “is the same as”.
- Section 5: “we show how the construction of coresets” → “we show how to construct coresets”.
- Section 5: “same to [25]” → “similar to [25]”.
- General comment: I think the terms “collections of groups” and “types” in this paper are closely connected but never defined, and thus a bit confusing. Define (roughly) these terms and give examples in the introduction. Later on this becomes clearer.
NIPS
Title Coresets for Clustering with Fairness Constraints Abstract In a recent work, [20] studied the following “fair” variants of classical clustering problems such as k-means and k-median: given a set of n data points in R and a binary type associated to each data point, the goal is to cluster the points while ensuring that the proportion of each type in each cluster is roughly the same as its underlying proportion. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types such as race and gender [7], or to address the problem that the clustering algorithms in the above work do not scale well [42, 8, 6]. The main contribution of this paper is an approach to clustering with fairness constraints that involve multiple, non-disjoint types, that is also scalable. Our approach is based on novel constructions of coresets: for the k-median objective, we construct an ε-coreset of size O(Γk2ε−d) where Γ is the number of distinct collections of groups that a point may belong to, and for the k-means objective, we show how to construct an ε-coreset of size O(Γk3ε−d−1). The former result is the first known coreset construction for the fair clustering problem with the k-median objective, and the latter result removes the dependence on the size of the full dataset as in [42] and generalizes it to multiple, non-disjoint types. Plugging our coresets into existing algorithms for fair clustering such as [6] results in the fastest algorithms for several cases. Empirically, we assess our approach over the Adult, Bank, Diabetes and Athlete dataset, and show that the coreset sizes are much smaller than the full dataset; applying coresets indeed accelerates the running time of computing the fair clustering objective while ensuring that the resulting objective difference is small. We also achieve a speed-up to recent fair clustering algorithms [6, 7] by incorporating our coreset construction. 1 Introduction Clustering algorithms are widely used in automated decision-making tasks, e.g., unsupervised learning [43], feature engineering [33, 27], and recommendation systems [10, 40, 21]. With the increasing applications of clustering algorithms in human-centric contexts, there is a growing concern that, if left unchecked, they can lead to discriminatory outcomes for protected groups, e.g., females/black people. For instance, the proportion of a minority group assigned to some cluster can be far from its underlying proportion, even if clustering algorithms do not take the sensitive attribute into its decision making [20]. Such an outcome may, in turn, lead to unfair treatment of minority groups, e.g., women may receive proportionally fewer job recommendations with high salary [22, 38] due to their underrepresentation in the cluster of high salary recommendations. To address this issue, Chierichetti et al. [20] recently proposed the fair clustering problem that requires the clustering assignment to be balanced with respect to a binary sensitive type, e.g., sex.2 Given a set X of n data points in Rd and a binary type associated to each data point, the goal is to cluster the points such that the proportion of each type in each cluster is roughly the same as ∗Authors are listed in alphabetical order of family names. Full version: [31]. 2A type consists of several disjoint groups, e.g., the sex type consists of females and males. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
its underlying proportion, while ensuring that the clustering objective is minimized. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types [7] (Definition 2.3), or to address the problem that the clustering algorithms do not scale well [20, 41, 42, 8, 6]. Due to the large scale of datasets, several existing fair clustering algorithms have to take samples instead of using the full dataset, since their running time is at least quadratic in the input size [20, 41, 8, 7]. Very recently, Backurs et al. [6] propose a nearly linear approximation algorithm for fair k-median, but it only works for a binary type. It is still unknown whether there exists a scalable approximation algorithm for multiple sensitive types [6]. To improve the running time of fair clustering algorithms, a powerful technique called coreset was introduced. Roughly, a coreset for fair clustering is a small weighted point set, such that for any k-subset and any fairness constraint, the fair clustering objective computed over the coreset is approximately the same as that computed from the full dataset (Definition 2.1). Thus, a coreset can be used as a proxy for the full dataset – one can apply any fair clustering algorithm on the coreset, achieve a good approximate solution on the full dataset, and hope to speed up the algorithm. As mentioned in [6], using coresets can indeed accelerate the computation time and save storage space for fair clustering problems. Another benefit is that one may want to compare the clustering performance under different fairness constraints, and hence it may be more efficient to repeatedly use coresets. Currently, the only known result for coresets for fair clustering is by Schmidt et al. [42], who constructed an ε-coreset for fair k-means clustering. However, their coreset size includes a log n factor and only restricts to a sensitive type. Moreover, there is no known coreset construction for other commonly-used clusterings, e.g., fair k-median. Our contributions. Our main contribution is an efficient construction of coresets for clustering with fairness constraints that involve multiple, non-disjoint types. Technically, we show efficient constructions of ε-coresets of size independent of n for both fair k-median and fair k-means, summarized in Table 1. Let Γ denote the number of distinct collections of groups that a point may belong to (see the first paragraph of Section 4 for the formal definition). • Our coreset for fair k-median is of size O(Γk2ε−d) (Theorem 4.1), which is the first known coreset to the best of our knowledge. • For fair k-means, our coreset is of size O(Γk3ε−d−1) (Theorem 4.2), which improves the result of [42] by an Θ( lognεk2 ) factor and generalizes it to multiple, non-disjoint types. • As mentioned in [6], applying coresets can accelerate the running time of fair clustering algorithms, while suffering only an additional (1+ε) factor in the approxiation ratio. Setting ε = Ω(1) and plugging our coresets into existing algorithms [42, 7, 6], we directly achieve scalable fair clustering algorithms, summarized in Table 2. We present novel technical ideas to deal with fairness constraints for coresets. • Our first technical contribution is a reduction to the case Γ = 1 (Theorem 4.3) which greatly simplifies the problem. Our reduction not only works for our specific construction, but also for all coreset constructions in general. 
• Furthermore, to deal with the Γ = 1 case, we provide several interesting geometric observations for the optimal fair k-median/means clustering (Lemma 4.1), which may be of independent interest. We implement our algorithm and conduct experiments on Adult, Bank, Diabetes and Athlete datasets. • A vanilla implementation results in a coreset with size that depends on ε−d. Our implementation is inspired by our theoretical results and produces coresets whose size is much smaller in practice. This improved implementation is still within the framework of our analysis, and the same worst case theoretical bound still holds. • To validate the performance of our implementation, we experiment with varying ε for both fair k-median and k-means. As expected, the empirical error is well under the theoretical guarantee ε, and the size does not suffer from the ε−d factor. Specifically, for fair k-median, we achieve 5% empirical error using only 3% points of the original data sets, and we achieve similar error using 20% points of the original data set for the k-means case. In addition, our coreset for fair k-means is better than uniform sampling and that of [42] in the empirical error. 1.1 Other related works There are other fair variants of clustering problems. Ahmadian et al. [4] studied a variant of the fair k-center problem in which the number of each type in each cluster has an upper bound, and proposed a bi-criteria approximation algorithm. Chen et al. [19] studied the fair clustering problem in which any n/k points are entitled to form their own cluster if there is another center closer in distance for all of them. Kleindessner et al. [34] investigate the fair k-center problem in which each center has a type, and the selection of the k-subset is restricted to include a fixed amount of centers belonging to each type. In another paper [35], they developed fair variants of spectral clusterings (a heuristic k-means clustering framework) by incorporating the proportional fairness constraints proposed by [20]. The notion of coreset was first proposed by Agarwal et al. [2]. There has been a large body of work for unconstrained clustering problems in Euclidean spaces [3, 28, 18, 29, 36, 24, 25, 9]). Apart from these, for the general (k, z)-clustering problem, Feldman and Langberg [24] presented an ε-coreset of size Õ(dkε−2z) in Õ(nk) time. Huang et al. [30] showed an ε-coreset of size Õ(ddim(X) ·k3ε−2z), where ddim(X) is doubling dimension that measures the intrinsic dimensionality of a space. For the special case of k-means, Braverman et al. [9] improved the size to Õ(kε−2 ·min {k/ε, d}) by a dimension reduction approach. Works such as [24] use importance sampling technique which avoid the size factor ε−d, but it is unknown if such approaches can be used in fair clustering. 2 Problem definition Consider a set X ⊆ Rd of n data points, an integer k (number of clusters), and l groups P1, . . . , Pl ⊆ X . An assignment constraint, which was proposed by Schmidt et al. [42], is a k × l integer matrix F . A clustering C = {C1, . . . , Ck}, which is a k-partitioning of X , is said to satisfy assignment constraint F if |Ci ∩ Pj | = Fij , ∀i ∈ [k], j ∈ [l]. For a k-subset C = {c1, . . . , ck} ⊆ X (the center set) and z ∈ R>0, we define Kz(X,F,C) as the minimum value of ∑ i∈[k] ∑ x∈Ci d z(x, ci) among all clustering C = {C1, . . . , Ck} that satisfies F , which we call the optimal fair (k, z)-clustering value. If there is no clustering satisfying F , Kz(X,F,C) is set to be infinity. 
The following is our notion of coresets for fair (k, z)-clustering. This generalizes the notion introduced in [42] which only considers a partitioned group structure. Definition 2.1 (Coreset for fair clustering). Given a set X ⊆ Rd of n points and l groups P1, . . . , Pl ⊆ X , a weighted point set S ⊆ Rd with weight function w : S → R>0 is an εcoreset for the fair (k, z)-clustering problem, if for each k-subset C ⊆ Rd and each assignment constraint F ∈ Zk×l≥0 , it holds that Kz(S, F,C) ∈ (1± ε) · Kz(X,F,C). Since points in S might receive fractional weights, we change the definition of Kz a little, so that in evaluating Kz(S, F,C), a point x ∈ S may be partially assigned to more than one cluster and the total amount of assignments of x equals w(x). The currently most general notion of fairness in clustering was proposed by [7], which enforces both upper bounds and lower bounds of any group’s proportion in a cluster. Definition 2.2 ((α, β)-proportionally-fair). A clustering C = (C1, . . . , Ck) is (α, β)proportionally-fair (α, β ∈ [0, 1]l), if for each clusterCi and j ∈ [l], it holds that αj ≤ |Ci∩Pj ||Ci| ≤ βj . The above definition directly implies for each cluster Ci and any two groups Pj1 , Pj2 ∈ [l], αj1 βj2 ≤ |Ci∩Pj1 | |Ci∩Pj2 | ≤ βj1αj2 . In other words, the fraction of points belonging to groups Pj1 , Pj2 in each cluster is bounded from both sides. Indeed, similar fairness constraints have been investigated by works on other fundamental algorithmic problems such as data summarization [14], ranking [16, 44], elections [12], personalization [17, 13], classification [11], and online advertising [15]. Naturally, Bera et al. [7] also defined the fair clustering problem with respect to (α, β)-proportionally-fairness as follows. Definition 2.3 ((α, β)-proportionally-fair (k, z)-clustering). Given a set X ⊆ Rd of n points, l groups P1, . . . , Pl ⊆ X , and two vectors α, β ∈ [0, 1]l, the objective of (α, β)-proportionallyfair (k, z)-clustering is to find a k-subset C = {c1, . . . , ck} ∈ Rd and (α, β)-proportionally-fair clustering C = {C1, . . . , Ck}, such that the objective function ∑ i∈[k] ∑ x∈Ci d z(x, ci) is minimized. Our notion of coresets is very general, and we relate our notion of coresets to the (α, β)-proportionallyfair clustering problem, via the following observation, which is similar to Proposition 5 in [42]. Proposition 2.1. Given a k-subset C, the assignment restriction required by (α, β)-proportionallyfairness can be modeled as a collection of assignment constraints. As a result, if a weighted set S is an ε-coreset satisfying Definition 2.1, then for any α, β ∈ [0, 1]l, the (α, β)-proportionally-fair (k, z)-clustering value computed from S must be a (1± ε)-approximation of that computed from X . 3 Technical overview We introduce novel techniques to tackle the assignment constraints. Recall that Γ denotes the number of distinct collections of groups that a point may belong to. Our first technical contribution is a general reduction to the Γ = 1 case which works for any coreset construction algorithm (Theorem 4.3). The idea is to divide X into Γ parts with respect to the groups that a point belongs to, and construct a fair coreset with parameter Γ = 1 for each group. The observation is that the union of these coresets is a coreset for the original instance and Γ. Our coreset construction for the case Γ = 1 is based on the framework of [29] in which unconstrained k-median/means coresets were provided. 
The main observation of [29] is that it suffices to deal with X that lies on a line. Specifically, they show that it suffices to construct at most O(kε−d+1) lines, project X to their closest lines and construct an ε/3-coreset for each line. The coreset for each line is then constructed by partitioning the line into poly(k/ε) contiguous sub-intervals, and designate at most two points to represent each sub-interval and include these points in the coreset. In their analysis, a crucially used property is that the clustering for any given centers partitions X into k contiguous parts on the line, since each point must be assigned to its nearest center. However, this property might not hold in fair clustering, which is our main difficulty. Nonetheless, we manage to show a new structural lemma, that the optimal fair k-median/means clustering partitions X into O(k) contiguous intervals. Specifically, for fair k-median, the key geometric observation is that there always exists a center whose corresponding optimal fair k-median cluster forms a contiguous interval (Claim 4.1), and this combined with an induction implies the optimal fair clustering partitions X into 2k − 1 intervals. For fair k-means, we show that each optimal fair cluster actually forms a single contiguous interval. Thanks to the new structural properties, plugging in a slightly different set of parameters in [29] yields fair coresets. 4 Coresets for fair clustering For each x ∈ X , denote Px = {i ∈ [l] : x ∈ Pi} as the collection of groups that x belongs to. Let ΓX denote the number of distinct Px’s, i.e. ΓX := |{Px : x ∈ X}|. Let Tz(n) denote the running time of a constant approximation algorithm for the (k, z)-clustering problem. The main theorems are as follows. Theorem 4.1 (Coreset for fair k-median (z = 1)). There exists an algorithm that constructs an ε-coreset for the fair k-median problem of size O(Γk2ε−d), in O(kε−d+1n+ T1(n)) time. Theorem 4.2 (Coreset for fair k-means (z = 2)). There exists an algorithm that constructs εcoreset for the fair k-means problem of size O(Γk3ε−d−1), in O(kε−d+1n+ T2(n)) time. Note that ΓX is usually small. For instance, if there is only one sensitive attribute [42], then each Px is singleton and hence ΓX = l. More generally, let Λ denote the maximum number of groups that any point belongs to, then ΓX ≤ lΛ, but there is often only O(1) sensitive attributes for each point. As noted above, the main technical difficulty for the coreset construction is to deal with the assignment constraints. We make an important observation (Theorem 4.3), that one only needs to prove Theorem 4.1 for the case l = 1.The proof of Theorem 4.3 can be found in the full version. This theorem is a generalization of Theorem 7 in [42], and the coreset of [42] actually extends to arbitrary group structure thanks to our theorem. Theorem 4.3 (Reduction from l groups to a single group). Suppose there exists an algorithm that computes an ε-coreset of size t for the fair (k, z)-clustering problem of X̂ with l = 1, in time T (|X̂|, ε, k, z). Then there exists an algorithm that takes a set X , and computes an ε-coreset of size ΓX · t for the fair (k, z)-clustering problem, in time ΓX · T (|X|, ε, k, z). Our coreset construction for both fair k-median and k-means are similar to that in [29], except using a different set of parameters. At a high level, the algorithm reduces general instances to instances where data lie on a line, and it only remains to give a coreset for the line case. 
Next, we focus on fair k-median, and the construction for the k-means case is similar and can be found in the full version. Remark 4.1. Theorem 4.3 can be applied to construct an ε-coreset of size O(ΓXkε−d+1) for the fair k-center clustering problem, since Har-Peled’s coreset result [28] directly provides an ε-coreset of size O(kε−d+1) for the case of l = 1. 4.1 The line case Since l = 1, we interpret F as an integer vector in Zk≥0. For a weighted point set S with weight w : S → R≥0, we define the mean of S by S := 1|S| ∑ p∈S w(p) · p and the error of S by ∆(S) := ∑ p∈S w(p) · d(p, S). Denote OPT as the optimal value of the unconstrained k-median clustering. Our construction is similar to [29] and we summarize it in Algorithm 1. An illustration of Algorithm 1 may be found in Figure 1. Input: X = {x1, . . . , xn} ⊂ Rd lying on the real line where x1 ≤ . . . ≤ xn, an integer k ∈ [n], a number OPT as the optimal value of k-median clustering. Output: an ε-coreset S of X together with weights w : S → R≥0. 1 Set a threshold ξ satisfying that ξ = ε·OPT30k ; 2 Consider the points from x1 to xn and group them into batches in a greedy way: each batch B is a maximal point set satisfying that ∆(B) ≤ ξ; 3 Denote B(X) as the collection of all batches. Let S ← ⋃ B∈B(X)B; 4 For each point x = B ∈ S, w(x)← |B|; 5 Return (S,w); Algorithm 1: FairMedian-1D(X, k) Theorem 4.4 (Coreset for fair k-median when X lies on a line). Algorithm 1 computes an ε/3coreset S for fair k-median clustering of X , in time O(|X|). The running time is immediate since for each batch B ∈ B(X), it only costs O(|B|) time to compute B. Hence, Algorithm 1 runs in O(|X|) time. We focus on correctness in the following. In [29], it was shown that S is an ε/3-coreset for the unconstrained k-median clustering problem. In their analysis, it is crucially used that the optimal clustering partitions X into k contiguous intervals. Unfortunately, the nice “contiguous” property does not hold in our case because of the assignment constraint F ∈ Rk. To resolve this issue, we prove a new structural property (Lemma 4.1) that the optimal fair k-median clustering actually partitions X into only O(k) contiguous intervals. With this property, Theorem 4.4 is implied by a similar argument as in [29]. The detailed proof can be found in the full version. Lemma 4.1 (Fair k-median clustering consists of 2k − 1 contiguous intervals). Suppose X := {x1, . . . , xn} ⊂ Rd lies on the real line where x1 ≤ . . . ≤ xn. For every k-subset C = (c1, . . . , ck) ∈ Rd and every assignment constraints F ∈ Zk≥0, there exists an optimal fair k-median clustering that partitions X into at most 2k − 1 contiguous intervals. Proof. We prove by induction on k. The induction hypothesis is that, for every k ≥ 1, Lemma 4.1 holds for any data set X , any k-subset C ⊆ Rd and any assignment constraint F ∈ Zk≥0. The base case k = 1 holds trivially since all points in X must be assigned to c1. Assume the lemma holds for k−1 (k ≥ 2) and we will prove the inductive step k. Let C?1 , . . . , C?k be the optimal fair k-median clustering w.r.t. C and F , where C?i ⊆ X is the subset assigned to center ci. We present the structural property in Claim 4.1, whose proof can be found in the full version. Claim 4.1. There exists i ∈ [k] such that C?i consists of exactly one contiguous interval. We continue the proof of the inductive step by constructing a reduced instance (X ′, F ′, C ′) where a) C ′ := C \ {ci0}; b) X ′ = X \C?i0 ; c) F ′ is formed by removing the i0-th coordinate of F . 
Applying the hypothesis on (X′, F′, C′), we know the optimal fair (k − 1)-median clustering consists of at most 2k − 3 contiguous intervals. Combining with C*_{i_0}, which has exactly one contiguous interval, increases the number of intervals by at most 2: inserting the single interval C*_{i_0} back can split at most one of the existing intervals into two. Thus, we conclude that the optimal fair k-median clustering for (X, F, C) has at most 2k − 1 contiguous intervals. This finishes the inductive step.

4.2 Extending to higher dimension

The extension is the same as that of [29]. We start with a set of k centers that is an O(1)-approximate solution C* to unconstrained k-median. Then we emit O(ε^{-d+1}) rays around each center c in C* (which correspond to an O(ε)-net on the unit sphere centered at c), and project data points to the nearest ray, such that the total projection cost is at most ε · OPT/3. Then for each line, we compute an ε/3-coreset for fair k-median by Theorem 4.4, and let S denote the combination of coresets generated from all lines. By the same argument as in Theorem 2.9 of [29], S is an ε-coreset for fair k-median clustering, which implies Theorem 4.1. The detailed proof can be found in the full version.

Remark 4.2. In fact, it suffices to emit an arbitrary set of rays such that the total projection cost is at most ε · OPT/3. This observation is crucially used in our implementations (Section 5) to reduce the size of the coreset, particularly to avoid the construction of the O(ε)-net, which is of O(ε^{-d}) size.

5 Empirical results

We implement our algorithm and evaluate its performance on real datasets.³ The implementation mostly follows our description of the algorithms, but a vanilla implementation would bring in an ε^{-d} factor in the coreset size. To avoid this, as observed in Remark 4.2, we may actually emit any set of rays as long as the total projection cost is bounded, instead of ε^{-d} rays. We implement this idea by finding the smallest integer m and m lines, such that the minimum cost of projecting data onto the m lines is within the error threshold. In our implementation for fair k-means, we adopt the widely used Lloyd's heuristic [37] to find the m lines, where the only change to Lloyd's heuristic is that, for each cluster, we need to find a line that minimizes the projection cost instead of a point, and we use SVD to efficiently find this line optimally. Unfortunately, the above approach does not work for fair k-median, as the SVD does not give the optimal line. As a result, we still need to construct the ε-net, but we alternatively employ some heuristics to find the net adaptively w.r.t. the dataset.

Our evaluation is conducted on four datasets: Adult (~50k), Bank (~45k) and Diabetes (~100k) from the UCI Machine Learning Repository [23], and Athlete (~200k) from [1], which are also considered in previous papers [20, 42, 7]. For all datasets, we choose numerical features to form a vector in R^d for each record, where d = 6 for Adult, d = 10 for Bank, d = 29 for Diabetes and d = 3 for Athlete. We choose two sensitive types for the first three datasets: sex and marital for Adult (9 groups, Γ = 14); marital and default for Bank (7 groups, Γ = 12); sex and age for Diabetes (12 groups, Γ = 20); and we choose a binary sensitive type sex for Athlete (2 groups, Γ = 2). In addition, in the full version, we will also discuss how the following affects the result: a) choosing a binary type as the sensitive type, or b) normalization of the dataset. We pick k = 3 (i.e., the number of clusters) throughout our experiments.
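The two computational kernels above, the greedy 1-D batching of Algorithm 1 and the SVD step used to find the best-fit line per cluster in the fair k-means implementation, are both short. A minimal Python sketch of each follows, assuming unweighted input points; the function names and interfaces are illustrative, not the released code.

import numpy as np

def fair_median_1d(xs, k, OPT, eps):
    """Sketch of Algorithm 1: greedily batch sorted 1-d points so that each
    batch B satisfies Delta(B) <= xi, then keep (mean, |B|) per batch."""
    xi = eps * OPT / (30.0 * k)                  # threshold from step 1
    S, w, batch = [], [], []
    for x in np.sort(np.asarray(xs, dtype=float)):
        trial = batch + [x]
        m = float(np.mean(trial))
        if np.abs(np.asarray(trial) - m).sum() <= xi:
            batch = trial                        # batch can still grow
        else:
            S.append(float(np.mean(batch))); w.append(len(batch))
            batch = [x]                          # start a new batch
    if batch:
        S.append(float(np.mean(batch))); w.append(len(batch))
    return S, w

def best_fit_line(P):
    """Line minimising the total squared projection cost of points P
    (an n x d array): it passes through the mean of P, in the direction
    of the top right-singular vector of the centred data."""
    mu = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
    return mu, Vt[0]

Inside the modified Lloyd iteration, one would assign each point to its nearest line, refit each line with best_fit_line, and increase m until the total projection cost falls below ε·OPT/3.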
We define the empirical error as |K_z(S, F, C)/K_z(X, F, C) − 1| (which is the same measure as ε) for some F and C. To evaluate the empirical error, we draw 500 independent random samples of (F, C) and report the maximum empirical error among these samples. For each (F, C), the fair clustering objectives K_z(·, F, C) may be formulated as integer linear programs (ILPs). We use CPLEX [32] to solve the ILPs, report the average running times⁴ T_X and T_S for evaluating the objective on dataset X and coreset S respectively, and also report the running time T_C for constructing coreset S.

For both k-median and k-means, we employ uniform sampling (Uni) as a baseline, in which we partition X into Γ parts according to distinct P_x's (the collection of groups that x belongs to) and take uniform samples from each part. Additionally, for k-means, we select another baseline from a recent work [42] that presented a coreset construction for fair k-means, whose implementation is based on the BICO library, a high-performance coreset-based library for computing k-means clustering [26]. We evaluate the performance of our coreset for fair k-means against BICO and Uni. As a remark on the BICO and Uni implementations: they do not support specifying the parameter ε, only a hinted size for the resulting coreset. Hence, we start with evaluating our coreset, and set the hinted size for Uni and BICO as the size of our coreset.

³ https://github.com/sfjiang1990/Coresets-for-Clustering-with-Fairness-Constraints.
⁴ The experiments are conducted on a 4-core desktop CPU with 64 GB RAM.

We also showcase the speed-up brought to two recently published approximation algorithms by applying a 0.5-coreset. The first algorithm is a practically efficient, O(log n)-approximate algorithm for fair k-median [6] that works for a binary type, referred to as FairTree. The other one is a bicriteria approximation algorithm [7] for both fair k-median and k-means, referred to as FairLP. We slightly modify the implementations of FairTree and FairLP to enable them to work with our coreset, particularly making them handle weighted inputs efficiently. We do experiments on a large dataset, Census1990, which consists of about 2.5 million records (where we select d = 13 features and a binary type), in addition to the above-mentioned Adult, Bank, Diabetes and Athlete datasets.

5.1 Results

Tables 3 and 4 summarize the accuracy-size trade-off of our coresets for fair k-median and k-means respectively, under different error guarantees ε. Since the coreset construction time T_C for Uni is very small (usually less than 50 ms), we do not report it in the tables. From the tables, a key finding is that the size of the coreset does not suffer from the ε^{-d} factor, thanks to our optimized implementation.

As for fair k-median, the empirical error of our coreset is well under control. In particular, to achieve 5% empirical error, less than 3 percent of the data is necessary for all datasets, and this results in a ~200x acceleration in evaluating the objective and a 10x acceleration even taking the coreset construction time into consideration.⁵ Regarding the running time, our coreset construction time scales roughly linearly with the size of the coreset, which means our algorithm is output-sensitive. The empirical error of Uni is comparable to ours on Diabetes, but its worst-case error is unbounded in general (2x-10x that of our coreset, sometimes even larger than ε) and seems unstable as ε varies.
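The ILP for a fixed (F, C) is standard. Below is a minimal sketch for the single-group case (l = 1), written with the open-source PuLP front end as a stand-in for CPLEX; the function name, the binary-assignment encoding, and the use of the default CBC solver are illustrative assumptions rather than the exact CPLEX model used in the experiments.

import numpy as np
import pulp  # open-source ILP front end; the experiments use CPLEX instead

def fair_objective_ilp(X, C, F, z=1):
    """K_z(X, F, C) for l = 1: minimise the assignment cost subject to the
    cluster-size constraints F (a length-k integer vector with sum(F) == n).
    If no feasible assignment exists, K_z is infinity by convention."""
    n, k = len(X), len(C)
    d = [[np.linalg.norm(np.array(x) - np.array(c)) ** z for c in C] for x in X]
    prob = pulp.LpProblem("fair_kz", pulp.LpMinimize)
    a = pulp.LpVariable.dicts("a", (range(n), range(k)), cat="Binary")
    prob += pulp.lpSum(d[i][j] * a[i][j] for i in range(n) for j in range(k))
    for i in range(n):                        # every point assigned once
        prob += pulp.lpSum(a[i][j] for j in range(k)) == 1
    for j in range(k):                        # cluster sizes match F
        prob += pulp.lpSum(a[i][j] for i in range(n)) == F[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)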
Our coreset works well for fair k-means, and it also offers a significant acceleration of evaluating the objective. Compared with BICO, our coreset achieves smaller empirical error for fixed ε, and its construction time is between 0.5x and 2x that of BICO. Again, the empirical error of Uni could be 2x smaller than ours and BICO's on Diabetes, but its worst-case error is unbounded in general.

Table 5 demonstrates the speed-up to FairTree and FairLP with the help of our coreset. We observed that the adaptation of our coresets offers a 5x-15x speed-up to FairTree and a 15x-30x speed-up to FairLP for all datasets, even taking the coreset construction time into consideration. Specifically, the runtime on top of our coreset for FairLP is less than 1s for all datasets, which is extremely fast. We also observe that the clustering objective obj′_ALG on top of our coresets is usually within 0.6-1.2 times obj_ALG, the objective without the coreset (noting that coresets might shrink the objective). The only exception is FairLP on Census1990, in which obj′_ALG is only 35% of obj_ALG. A possible reason is that in the implementation of FairLP, an important step is to compute an approximate (unconstrained) k-means clustering solution on the dataset by employing the sklearn library [39]. However, sklearn tends to trade accuracy for speed when the dataset gets large. As a result, FairLP actually finds a better approximate k-means solution on the coreset than on the large dataset Census1990, and hence applying coresets can achieve a much smaller clustering objective.

6 Future work

This paper constructs ε-coresets for the fair k-median/means clustering problem, of size independent of the size of the full dataset, and when the data may have multiple, non-disjoint types. Our coreset for fair k-median is the first known coreset construction to the best of our knowledge. For fair k-means, we improve the coreset size of the prior result [42], and extend it to multiple non-disjoint types. The empirical results show that our coresets are indeed much smaller than the full dataset and result in significant reductions in the running time of computing the fair clustering objective.

Our work leaves several interesting future directions. For unconstrained clustering, there exist several works using the sampling approach such that the coreset size does not depend exponentially on the Euclidean dimension d. It is interesting to investigate whether sampling approaches can be applied for constructing fair coresets and achieve similar size bounds as in the unconstrained setting. Another direction is to construct coresets for general fair (k, z)-clustering beyond k-median/means/center.

⁵ The same coreset may be used for clustering with any assignment constraints, so its construction time would be averaged out if multiple fair clustering tasks are performed.

Acknowledgments

This research was supported in part by NSF CCF-1908347, SNSF 200021_182527, ONR Award N00014-18-1-2364 and a Minerva Foundation grant.
1. What is the focus of the paper, particularly regarding clustering algorithms?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the originality and significance of the paper's contributions?
4. What are the concerns regarding the empirical errors presented in the paper?
5. Are there any suggestions for improvements or alternative approaches proposed by the reviewer?
Review
Review
(Originality) The authors claim the novelty lies in the construction of novel epsilon-coresets for the k-median and k-means objectives. The constructed coresets, which are smaller than the full set, are used in existing fair clustering algorithms. This construction is very similar to that proposed in [25]. Originality seems minor.
(Quality) It would be more meaningful if the coreset construction were also conducted with the k-center objective.
(Clarity) Well written and relatively clear; it would be better if there were illustrations.
(Significance) The empirical error appears superlinear with increasing epsilon. Is this a concern?
NIPS
Title Coresets for Clustering with Fairness Constraints

Abstract In a recent work, [20] studied the following "fair" variants of classical clustering problems such as k-means and k-median: given a set of n data points in R^d and a binary type associated to each data point, the goal is to cluster the points while ensuring that the proportion of each type in each cluster is roughly the same as its underlying proportion. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types such as race and gender [7], or on addressing the problem that the clustering algorithms in the above work do not scale well [42, 8, 6]. The main contribution of this paper is an approach to clustering with fairness constraints that involves multiple, non-disjoint types, and that is also scalable. Our approach is based on novel constructions of coresets: for the k-median objective, we construct an ε-coreset of size O(Γk^2 ε^{-d}), where Γ is the number of distinct collections of groups that a point may belong to, and for the k-means objective, we show how to construct an ε-coreset of size O(Γk^3 ε^{-d-1}). The former result is the first known coreset construction for the fair clustering problem with the k-median objective, and the latter result removes the dependence on the size of the full dataset as in [42] and generalizes it to multiple, non-disjoint types. Plugging our coresets into existing algorithms for fair clustering such as [6] results in the fastest algorithms for several cases. Empirically, we assess our approach over the Adult, Bank, Diabetes and Athlete datasets, and show that the coreset sizes are much smaller than the full dataset; applying coresets indeed accelerates the running time of computing the fair clustering objective while ensuring that the resulting objective difference is small. We also achieve a speed-up to recent fair clustering algorithms [6, 7] by incorporating our coreset construction.

1 Introduction

Clustering algorithms are widely used in automated decision-making tasks, e.g., unsupervised learning [43], feature engineering [33, 27], and recommendation systems [10, 40, 21]. With the increasing applications of clustering algorithms in human-centric contexts, there is a growing concern that, if left unchecked, they can lead to discriminatory outcomes for protected groups, e.g., females/black people. For instance, the proportion of a minority group assigned to some cluster can be far from its underlying proportion, even if clustering algorithms do not take the sensitive attribute into their decision making [20]. Such an outcome may, in turn, lead to unfair treatment of minority groups, e.g., women may receive proportionally fewer job recommendations with high salary [22, 38] due to their underrepresentation in the cluster of high-salary recommendations. To address this issue, Chierichetti et al. [20] recently proposed the fair clustering problem, which requires the clustering assignment to be balanced with respect to a binary sensitive type, e.g., sex.²

∗ Authors are listed in alphabetical order of family names. Full version: [31].
² A type consists of several disjoint groups, e.g., the sex type consists of females and males.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Given a set X of n data points in R^d and a binary type associated to each data point, the goal is to cluster the points such that the proportion of each type in each cluster is roughly the same as
its underlying proportion, while ensuring that the clustering objective is minimized. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types [7] (Definition 2.3), or on addressing the problem that the clustering algorithms do not scale well [20, 41, 42, 8, 6].

Due to the large scale of datasets, several existing fair clustering algorithms have to take samples instead of using the full dataset, since their running time is at least quadratic in the input size [20, 41, 8, 7]. Very recently, Backurs et al. [6] proposed a nearly linear-time approximation algorithm for fair k-median, but it only works for a binary type. It is still unknown whether there exists a scalable approximation algorithm for multiple sensitive types [6].

To improve the running time of fair clustering algorithms, a powerful technique, the coreset, was introduced. Roughly, a coreset for fair clustering is a small weighted point set, such that for any k-subset and any fairness constraint, the fair clustering objective computed over the coreset is approximately the same as that computed from the full dataset (Definition 2.1). Thus, a coreset can be used as a proxy for the full dataset: one can apply any fair clustering algorithm on the coreset, achieve a good approximate solution on the full dataset, and hope to speed up the algorithm. As mentioned in [6], using coresets can indeed accelerate the computation time and save storage space for fair clustering problems. Another benefit is that one may want to compare the clustering performance under different fairness constraints, and hence it may be more efficient to repeatedly use coresets. Currently, the only known result for coresets for fair clustering is by Schmidt et al. [42], who constructed an ε-coreset for fair k-means clustering. However, their coreset size includes a log n factor and is restricted to a single sensitive type. Moreover, there is no known coreset construction for other commonly-used clusterings, e.g., fair k-median.

Our contributions. Our main contribution is an efficient construction of coresets for clustering with fairness constraints that involve multiple, non-disjoint types. Technically, we show efficient constructions of ε-coresets of size independent of n for both fair k-median and fair k-means, summarized in Table 1. Let Γ denote the number of distinct collections of groups that a point may belong to (see the first paragraph of Section 4 for the formal definition).
• Our coreset for fair k-median is of size O(Γk^2 ε^{-d}) (Theorem 4.1), which is the first known coreset to the best of our knowledge.
• For fair k-means, our coreset is of size O(Γk^3 ε^{-d-1}) (Theorem 4.2), which improves the result of [42] by a Θ((log n)/(εk^2)) factor and generalizes it to multiple, non-disjoint types.
• As mentioned in [6], applying coresets can accelerate the running time of fair clustering algorithms, while suffering only an additional (1+ε) factor in the approximation ratio. Setting ε = Ω(1) and plugging our coresets into existing algorithms [42, 7, 6], we directly achieve scalable fair clustering algorithms, summarized in Table 2.

We present novel technical ideas to deal with fairness constraints for coresets.
• Our first technical contribution is a reduction to the case Γ = 1 (Theorem 4.3) which greatly simplifies the problem. Our reduction not only works for our specific construction, but also for all coreset constructions in general.
• Furthermore, to deal with the Γ = 1 case, we provide several interesting geometric observations for the optimal fair k-median/means clustering (Lemma 4.1), which may be of independent interest.

We implement our algorithm and conduct experiments on the Adult, Bank, Diabetes and Athlete datasets.
• A vanilla implementation results in a coreset whose size depends on ε^{-d}. Our implementation is inspired by our theoretical results and produces coresets whose size is much smaller in practice. This improved implementation is still within the framework of our analysis, and the same worst-case theoretical bound still holds.
• To validate the performance of our implementation, we experiment with varying ε for both fair k-median and k-means. As expected, the empirical error is well under the theoretical guarantee ε, and the size does not suffer from the ε^{-d} factor. Specifically, for fair k-median, we achieve 5% empirical error using only 3% of the points of the original data sets, and we achieve similar error using 20% of the points of the original data set in the k-means case. In addition, our coreset for fair k-means is better than uniform sampling and that of [42] in the empirical error.

1.1 Other related works

There are other fair variants of clustering problems. Ahmadian et al. [4] studied a variant of the fair k-center problem in which the number of each type in each cluster has an upper bound, and proposed a bi-criteria approximation algorithm. Chen et al. [19] studied the fair clustering problem in which any n/k points are entitled to form their own cluster if there is another center closer in distance for all of them. Kleindessner et al. [34] investigated the fair k-center problem in which each center has a type, and the selection of the k-subset is restricted to include a fixed number of centers belonging to each type. In another paper [35], they developed fair variants of spectral clustering (a heuristic k-means clustering framework) by incorporating the proportional fairness constraints proposed by [20].

The notion of coreset was first proposed by Agarwal et al. [2]. There has been a large body of work for unconstrained clustering problems in Euclidean spaces [3, 28, 18, 29, 36, 24, 25, 9]. Apart from these, for the general (k, z)-clustering problem, Feldman and Langberg [24] presented an ε-coreset of size Õ(dkε^{-2z}) in Õ(nk) time. Huang et al. [30] showed an ε-coreset of size Õ(ddim(X) · k^3 ε^{-2z}), where ddim(X) is the doubling dimension that measures the intrinsic dimensionality of a space. For the special case of k-means, Braverman et al. [9] improved the size to Õ(kε^{-2} · min{k/ε, d}) by a dimension reduction approach. Works such as [24] use an importance sampling technique which avoids the size factor ε^{-d}, but it is unknown if such approaches can be used in fair clustering.

2 Problem definition

Consider a set X ⊆ R^d of n data points, an integer k (number of clusters), and l groups P_1, . . . , P_l ⊆ X. An assignment constraint, which was proposed by Schmidt et al. [42], is a k × l integer matrix F. A clustering C = {C_1, . . . , C_k}, which is a k-partitioning of X, is said to satisfy assignment constraint F if |C_i ∩ P_j| = F_ij, ∀i ∈ [k], j ∈ [l]. For a k-subset C = {c_1, . . . , c_k} ⊆ R^d (the center set) and z ∈ R_{>0}, we define K_z(X, F, C) as the minimum value of ∑_{i∈[k]} ∑_{x∈C_i} d^z(x, c_i) among all clusterings C = {C_1, . . . , C_k} that satisfy F, which we call the optimal fair (k, z)-clustering value. If there is no clustering satisfying F, K_z(X, F, C) is set to infinity.
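To make the assignment-constraint notation concrete, the following tiny Python check verifies whether a given clustering satisfies an assignment constraint F in the sense just defined. The representation of clusters and groups as sets of point indices is an illustrative choice.

def satisfies_constraint(clusters, groups, F):
    """clusters: list of k sets of point ids; groups: list of l sets of
    point ids; F: k x l integer matrix. Checks |C_i ∩ P_j| == F[i][j]
    for all i in [k], j in [l]."""
    for i, C_i in enumerate(clusters):
        for j, P_j in enumerate(groups):
            if len(C_i & P_j) != F[i][j]:
                return False
    return True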
The following is our notion of coresets for fair (k, z)-clustering. It generalizes the notion introduced in [42], which only considers a partitioned group structure.

Definition 2.1 (Coreset for fair clustering). Given a set X ⊆ R^d of n points and l groups P_1, . . . , P_l ⊆ X, a weighted point set S ⊆ R^d with weight function w : S → R_{>0} is an ε-coreset for the fair (k, z)-clustering problem, if for each k-subset C ⊆ R^d and each assignment constraint F ∈ Z^{k×l}_{≥0}, it holds that K_z(S, F, C) ∈ (1 ± ε) · K_z(X, F, C).

Since points in S might receive fractional weights, we change the definition of K_z a little, so that in evaluating K_z(S, F, C), a point x ∈ S may be partially assigned to more than one cluster and the total amount of assignments of x equals w(x). The currently most general notion of fairness in clustering was proposed by [7], which enforces both upper bounds and lower bounds on any group's proportion in a cluster.

Definition 2.2 ((α, β)-proportionally-fair). A clustering C = (C_1, . . . , C_k) is (α, β)-proportionally-fair (α, β ∈ [0, 1]^l), if for each cluster C_i and j ∈ [l], it holds that α_j ≤ |C_i ∩ P_j|/|C_i| ≤ β_j.

The above definition directly implies that for each cluster C_i and any two groups P_{j_1}, P_{j_2} with j_1, j_2 ∈ [l], α_{j_1}/β_{j_2} ≤ |C_i ∩ P_{j_1}|/|C_i ∩ P_{j_2}| ≤ β_{j_1}/α_{j_2}. In other words, the fraction of points belonging to groups P_{j_1}, P_{j_2} in each cluster is bounded from both sides. Indeed, similar fairness constraints have been investigated by works on other fundamental algorithmic problems such as data summarization [14], ranking [16, 44], elections [12], personalization [17, 13], classification [11], and online advertising [15]. Naturally, Bera et al. [7] also defined the fair clustering problem with respect to (α, β)-proportional fairness as follows.

Definition 2.3 ((α, β)-proportionally-fair (k, z)-clustering). Given a set X ⊆ R^d of n points, l groups P_1, . . . , P_l ⊆ X, and two vectors α, β ∈ [0, 1]^l, the objective of (α, β)-proportionally-fair (k, z)-clustering is to find a k-subset C = {c_1, . . . , c_k} ⊆ R^d and an (α, β)-proportionally-fair clustering C = {C_1, . . . , C_k}, such that the objective function ∑_{i∈[k]} ∑_{x∈C_i} d^z(x, c_i) is minimized.

Our notion of coresets is very general, and we relate it to the (α, β)-proportionally-fair clustering problem via the following observation, which is similar to Proposition 5 in [42].

Proposition 2.1. Given a k-subset C, the assignment restriction required by (α, β)-proportional fairness can be modeled as a collection of assignment constraints. As a result, if a weighted set S is an ε-coreset satisfying Definition 2.1, then for any α, β ∈ [0, 1]^l, the (α, β)-proportionally-fair (k, z)-clustering value computed from S must be a (1 ± ε)-approximation of that computed from X.

3 Technical overview

We introduce novel techniques to tackle the assignment constraints. Recall that Γ denotes the number of distinct collections of groups that a point may belong to. Our first technical contribution is a general reduction to the Γ = 1 case which works for any coreset construction algorithm (Theorem 4.3). The idea is to divide X into Γ parts with respect to the groups that a point belongs to, and construct a fair coreset with parameter Γ = 1 for each part. The observation is that the union of these coresets is a coreset for the original instance, whose size is larger only by a factor of Γ. Our coreset construction for the case Γ = 1 is based on the framework of [29] in which unconstrained k-median/means coresets were provided.
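The proportional-fairness condition of Definition 2.2 is equally mechanical to verify. A small Python sketch, again with clusters and groups represented as index sets (an assumed representation); the convention that an empty cluster imposes no constraint is an illustrative choice.

def is_proportionally_fair(clusters, groups, alpha, beta):
    """Definition 2.2: every group's fraction in every cluster must lie
    in [alpha[j], beta[j]]."""
    for C_i in clusters:
        if not C_i:
            continue  # assumed convention: empty clusters are unconstrained
        for j, P_j in enumerate(groups):
            frac = len(C_i & P_j) / len(C_i)
            if not (alpha[j] <= frac <= beta[j]):
                return False
    return True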
1. What is the main contribution of the paper regarding theoretical coreset construction?
2. What are the strengths and weaknesses of the experimental evaluation?
3. Do you have any questions or concerns about the paper's assumptions or definitions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review
The paper clearly identifies which parts/techniques are taken from other papers (i.e., the known coreset construction from Har-Peled et al.), and where extensions have been made. It also gives a good overview of the contributions and experiments, and is easy to follow.

The main techniques behind the theoretical coreset constructions are well-known from [25]. The two new components are the observation regarding the reduction of constraint types (Theorem 4.2) and the lemmata regarding the analysis of the number of "contiguous intervals" (Lemma 4.1 and Lemma 5.1 / B.2). (Note that Theorem 4.2 is a generalisation of Lemma 6 in [36].)

It is great that, in contrast to many theoretical clustering papers, this paper includes an evaluation of a practical version of the algorithm that's analysed in the theoretical part of the paper. However, there are also some things about the experiments that I struggle with:
* As always, it is a bit hard to interpret the cost alone (without any goal of what actually will be done with the clustering). For errors of up to 10%, the results seem to be comparable with the BICO solution [36]. After that, there's a small gap (up to about 5%) between the solutions. I'm not sure if this is significant or not.
* There is *no* comparison to simpler baselines, e.g. to uniform sampling where, to ensure fairness, one could just sample uniformly at random per "constraint type". (One might suspect that as the number of constraint types grows, the performance of this simple baseline should become better and better, since the constraint types pre-partition the space, in a sense.)
* The experiments are only on two specific data sets. Given the fast processing times (cf. T_S and T_X in Tables 3 and 4), it is surprising that not more datasets have been considered.
* ... and the times needed to compute the coresets are not reported.

Moreover, there is one point (right in the problem definition section) that remains unclear to me: the re-definition described in ll. 131 (after Definition 2.1): "a point [...] may be partially assigned to more than one cluster". This seems unnecessary. I guess the original intent was just to point out that the pointwise costs are now weighted? (Otherwise, I might be missing something here.)

Besides, some minor remarks:
- Theorem 4.2 is not a pure "existence" statement (and it would be useless in the following if it were just that) - this should be made clear.
- Table 2 is a bit confusing since some rows still contain eps, which should be Omega(1).
- It should be made clear that a practical variant of the theoretical algorithm is evaluated.
- There are some typos: several upper/lower-case typos in the references, l. 40 ("due to the scale at which one is required to clustering"), l. 175 ("let ... denotes the").
NIPS
Title Heterogeneous Multi-output Gaussian Process Prediction

Abstract We present a novel extension of multi-output Gaussian processes for handling heterogeneous outputs. We assume that each output has its own likelihood function and use a vector-valued Gaussian process prior to jointly model the parameters in all likelihoods as latent functions. Our multi-output Gaussian process uses a covariance function with a linear model of coregionalisation form. Assuming conditional independence across the underlying latent functions together with an inducing variable framework, we are able to obtain tractable variational bounds amenable to stochastic variational inference. We illustrate the performance of the model on synthetic data and two real datasets: a human behavioral study and a demographic high-dimensional dataset.

1 Introduction

Multi-output Gaussian processes (MOGP) generalise the powerful Gaussian process (GP) predictive model to the vector-valued random field setup (Alvarez et al., 2012). It has been experimentally shown that by simultaneously exploiting correlations between multiple outputs and across the input space, it is possible to provide better predictions, particularly in scenarios with missing or noisy data (Bonilla et al., 2008; Dai et al., 2017). The main focus in the literature for MOGP has been on the definition of a suitable cross-covariance function between the multiple outputs that allows for the treatment of the outputs as a single GP with a properly defined covariance function (Alvarez et al., 2012). The two classical alternatives to define such cross-covariance functions are the linear model of coregionalisation (LMC) (Journel and Huijbregts, 1978) and process convolutions (Higdon, 2002). In the former case, each output corresponds to a weighted sum of shared latent random functions. In the latter, each output is modelled as the convolution integral between a smoothing kernel and a latent random function common to all outputs. In both cases, the unknown latent functions follow Gaussian process priors, leading to straightforward expressions to compute the cross-covariance functions among different outputs. More recent alternatives to build valid covariance functions for MOGP include the work by Ulrich et al. (2015) and Parra and Tobar (2017), that build the cross-covariances in the spectral domain.

Regarding the type of outputs that can be modelled, most alternatives focus on multiple-output regression for continuous variables. Traditionally, each output is assumed to follow a Gaussian likelihood where the mean function is given by one of the outputs of the MOGP and the variance in that distribution is treated as an unknown parameter. Bayesian inference is tractable for these models.
In this paper, we are interested in the heterogeneous case for which the outputs are a mix of continuous, categorical, binary or discrete variables with different likelihood functions.

There have been few attempts to extend the MOGP to other types of likelihoods. For example, Skolidis and Sanguinetti (2011) use the outputs of a MOGP for jointly modelling several binary classification problems, each of which uses a probit likelihood. They use an intrinsic coregionalisation model (ICM), a particular case of LMC. Posterior inference is performed using expectation-propagation (EP) and variational mean field. Both Chai (2012) and Dezfouli and Bonilla (2015) have also used ICM for modeling a single categorical variable with a multinomial logistic likelihood. The outputs of the ICM model are used as replacements for the linear predictors in the softmax function. Chai (2012) derives a particular variational bound for the marginal likelihood and computes Gaussian posterior distributions; Dezfouli and Bonilla (2015) introduce a scalable inference procedure that uses a mixture of Gaussians to approximate the posterior distribution using automated variational inference (AVI) (Nguyen and Bonilla, 2014a), which requires sampling from univariate Gaussians.

For the single-output GP case, the usual practice for handling non-Gaussian likelihoods has been to replace the parameters or linear predictors of the non-Gaussian likelihood by one or more independent GP priors. Since computing posterior distributions becomes intractable, different alternatives have been offered for approximate inference. Examples include the heteroscedastic Gaussian regression model with variational inference (Lázaro-Gredilla and Titsias, 2011), the Laplace approximation (Vanhatalo et al., 2013), and stochastic variational inference (SVI) (Saul et al., 2016). This last reference uses the same idea for modulating the parameters of a Student-t likelihood, a log-logistic distribution, a beta distribution and a Poisson distribution. The generalised Wishart process (Wilson and Ghahramani, 2011) is another example where the entries of the scale matrix of a Wishart distribution are modulated by independent GPs.

Our main contribution in this paper is to provide an extension of multiple-output Gaussian processes for prediction in heterogeneous datasets. The key principle in our model is to use the outputs of a MOGP as the latent functions that modulate the parameters of several likelihood functions, one likelihood function per output. We tackle the model's intractability using variational inference. Furthermore, we use the inducing variable formalism for MOGP introduced by Alvarez and Lawrence (2009) and compute a variational bound suitable for stochastic optimisation as in Hensman et al. (2013). We experimentally provide evidence of the benefits of simultaneously modeling heterogeneous outputs in different applied problems. Our model can be seen as a generalisation of Saul et al. (2016) for multiple correlated output functions of a heterogeneous nature. Our Python implementation follows the spirit of Hadfield et al. (2010), where the user only needs to specify a list of likelihood functions likelihood_list = [Bernoulli(), Poisson(), HetGaussian()], where HetGaussian refers to the heteroscedastic Gaussian distribution, and the number of latent parameter functions per likelihood is assigned automatically.

32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
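A hypothetical sketch of this interface is shown below: each likelihood declares how many latent parameter functions it needs, and the model sums these to size the multi-output GP. The class names follow the example just given, but the num_params attribute and the assembly logic are illustrative assumptions, not the actual API of the released implementation.

class Bernoulli:
    num_params = 1          # probability of success

class Poisson:
    num_params = 1          # rate

class HetGaussian:
    num_params = 2          # mean and variance

likelihood_list = [Bernoulli(), Poisson(), HetGaussian()]
J = sum(lik.num_params for lik in likelihood_list)   # J = sum_d J_d = 4
print(f"{len(likelihood_list)} outputs require {J} latent parameter functions")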
2 Heterogeneous Multi-output Gaussian process

Consider a set of output functions Y = {y_d(x)}_{d=1}^D, with x ∈ R^p, that we want to jointly model using Gaussian processes. Traditionally, the literature has considered the case for which each y_d(x) is continuous and Gaussian distributed. In this paper, we are interested in the heterogeneous case for which the outputs in Y are a mix of continuous, categorical, binary or discrete variables with several different distributions. In particular, we will assume that the distribution over y_d(x) is completely specified by a set of parameters θ_d(x) ∈ X^{J_d}, where we have a generic X domain for the parameters and J_d is the number of parameters that define the distribution. Each parameter θ_{d,j}(x) ∈ θ_d(x) is a non-linear transformation of a Gaussian process prior f_{d,j}(x), that is, θ_{d,j}(x) = g_{d,j}(f_{d,j}(x)), where g_{d,j}(·) is a deterministic function that maps the GP output to the appropriate domain for the parameter θ_{d,j}.

To make the notation concrete, let us assume a heterogeneous multiple-output problem for which D = 3. Assume that output y_1(x) is binary and that it will be modelled using a Bernoulli distribution. The Bernoulli distribution uses a single parameter (the probability of success), J_1 = 1, restricted to values in the range [0, 1]. This means that θ_1(x) = θ_{1,1}(x) = g_{1,1}(f_{1,1}(x)), where g_{1,1}(·) could be modelled using the logistic sigmoid function σ(z) = 1/(1 + exp(−z)), which maps σ : R → [0, 1]. Assume now that the second output y_2(x) corresponds to a count variable that can take values y_2(x) ∈ N ∪ {0}. The count variable can be modelled using a Poisson distribution with a single parameter (the rate), J_2 = 1, restricted to the positive reals. This means that θ_2(x) = θ_{2,1}(x) = g_{2,1}(f_{2,1}(x)), where g_{2,1}(·) could be modelled as an exponential function g_{2,1}(·) = exp(·) to ensure strictly positive values for the parameter. Finally, y_3(x) is a continuous variable with heteroscedastic noise. It can be modelled using a Gaussian distribution where both the mean and the variance are functions of x. This means that θ_3(x) = [θ_{3,1}(x), θ_{3,2}(x)]^⊤ = [g_{3,1}(f_{3,1}(x)), g_{3,2}(f_{3,2}(x))]^⊤, where the first function is used to model the mean of the Gaussian, and the second function is used to model the variance. Therefore, we can assume that g_{3,1}(·) is the identity function and g_{3,2}(·) is a function that ensures that the variance takes strictly positive values, e.g. the exponential function.

Let us define a vector-valued function y(x) = [y_1(x), y_2(x), · · · , y_D(x)]^⊤. We assume that the outputs are conditionally independent given the vector of parameters θ(x) = [θ_1(x), θ_2(x), · · · , θ_D(x)]^⊤, defined by specifying the vector of latent functions f(x) = [f_{1,1}(x), f_{1,2}(x), · · · , f_{1,J_1}(x), f_{2,1}(x), f_{2,2}(x), · · · , f_{D,J_D}(x)]^⊤ ∈ R^{J×1}, where J = ∑_{d=1}^D J_d:

p(y(x)|θ(x)) = p(y(x)|f(x)) = ∏_{d=1}^D p(y_d(x)|θ_d(x)) = ∏_{d=1}^D p(y_d(x)|f̃_d(x)),   (1)

where we have defined f̃_d(x) = [f_{d,1}(x), · · · , f_{d,J_d}(x)]^⊤ ∈ R^{J_d×1}, the set of latent functions that specify the parameters in θ_d(x). Notice that J ≥ D. That is, there is not always a one-to-one map from f(x) to y(x). Most previous work has assumed that D = 1, and that the corresponding elements in θ_d(x), that is, the latent functions in f̃_1(x) = [f_{1,1}(x), · · · , f_{1,J_1}(x)]^⊤, are drawn from independent Gaussian processes. Important exceptions are Chai (2012) and Dezfouli and Bonilla (2015), that assumed a categorical variable y_1(x), where the elements in f̃_1(x) were drawn from an intrinsic coregionalisation model.
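For the running D = 3 example, the link maps g_{d,j} can be written down directly. A small NumPy sketch follows, where the stacking order of the latent values and the function names are illustrative assumptions.

import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def link_parameters(f):
    """f: latent function values [f_{1,1}, f_{2,1}, f_{3,1}, f_{3,2}] at one x."""
    theta_1 = sigmoid(f[0])         # Bernoulli success probability in [0, 1]
    theta_2 = np.exp(f[1])          # Poisson rate, strictly positive
    theta_3 = (f[2], np.exp(f[3]))  # heteroscedastic Gaussian (mean, variance)
    return theta_1, theta_2, theta_3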
In what follows, we generalise these models for D > 1 and potentially heterogeneous outputs y_d(x). We will use the word "output" to refer to the elements y_d(x) and "latent parameter function" (LPF) or "parameter function" (PF) to refer to f_{d,j}(x).

2.1 A multi-parameter GP prior

Our main departure from previous work is in the modeling of f(x) using a multi-parameter Gaussian process that allows correlations between the parameter functions f_{d,j}(x). We will use a linear model of coregionalisation type of covariance function for expressing correlations between functions f_{d,j}(x) and f_{d′,j′}(x′). The particular construction is as follows. Consider an additional set of independent latent functions U = {u_q(x)}_{q=1}^Q that will be linearly combined to produce the J LPFs {f_{d,j}(x)}_{j=1,d=1}^{J_d,D}. Each latent function u_q(x) is assumed to be drawn from an independent GP prior such that u_q(·) ∼ GP(0, k_q(·, ·)), where k_q can be any valid covariance function, and the zero mean is assumed for simplicity. Each latent parameter function f_{d,j}(x) is then given as

f_{d,j}(x) = ∑_{q=1}^Q ∑_{i=1}^{R_q} a^i_{d,j,q} u^i_q(x),   (2)

where the u^i_q(x) are IID samples from u_q(·) ∼ GP(0, k_q(·, ·)) and a^i_{d,j,q} ∈ R. The mean function for f_{d,j}(x) is zero, and the cross-covariance function k_{f_{d,j}f_{d′,j′}}(x, x′) = cov[f_{d,j}(x), f_{d′,j′}(x′)] is equal to ∑_{q=1}^Q b^q_{(d,j),(d′,j′)} k_q(x, x′), where b^q_{(d,j),(d′,j′)} = ∑_{i=1}^{R_q} a^i_{d,j,q} a^i_{d′,j′,q}.

Let us define X = {x_n}_{n=1}^N ∈ R^{N×p} as a set of common input vectors for all outputs y_d(x), although the presentation could be extended to the case of a different set of inputs per output. Let us also define f_{d,j} = [f_{d,j}(x_1), · · · , f_{d,j}(x_N)]^⊤ ∈ R^{N×1}; f̃_d = [f_{d,1}^⊤ · · · f_{d,J_d}^⊤]^⊤ ∈ R^{J_d N×1}; and f = [f̃_1^⊤ · · · f̃_D^⊤]^⊤ ∈ R^{JN×1}. The generative model for the heterogeneous MOGP is as follows. We sample f ∼ N(0, K), where K is a block-wise matrix with blocks given by {K_{f_{d,j}f_{d′,j′}}}_{d=1,d′=1,j=1,j′=1}^{D,D,J_d,J_{d′}}. In turn, the elements in K_{f_{d,j}f_{d′,j′}} are given by k_{f_{d,j}f_{d′,j′}}(x_n, x_m), with x_n, x_m ∈ X. For the particular case of equal inputs X for all LPFs, K can also be expressed as the sum of Kronecker products K = ∑_{q=1}^Q A_q A_q^⊤ ⊗ K_q = ∑_{q=1}^Q B_q ⊗ K_q, where A_q ∈ R^{J×R_q} has entries {a^i_{d,j,q}}_{d=1,j=1,i=1}^{D,J_d,R_q} and B_q has entries {b^q_{(d,j),(d′,j′)}}_{d=1,d′=1,j=1,j′=1}^{D,D,J_d,J_{d′}}. The matrix K_q ∈ R^{N×N} has entries given by k_q(x_n, x_m) for x_n, x_m ∈ X. The matrices B_q ∈ R^{J×J} are known as the coregionalisation matrices.

Once we obtain the sample for f, we evaluate the vector of parameters θ = [θ_1^⊤ · · · θ_D^⊤]^⊤, where θ_d = f̃_d. Having specified θ, we can generate samples for the output vector y = [y_1^⊤ · · · y_D^⊤]^⊤ ∈ X^{DN×1}, where the elements in y_d are obtained by sampling from the conditional distributions p(y_d(x)|θ_d(x)). To keep the notation uncluttered, we will assume from now on that R_q = 1, meaning that A_q = a_q ∈ R^{J×1} and the coregionalisation matrices are rank-one. In the literature, such a model is known as the semiparametric latent factor model (Teh et al., 2005).

2.2 Scalable variational inference

Given a heterogeneous dataset D = {X, y}, we would like to compute the posterior distribution p(f|D), which is intractable in our model. In what follows, we use similar ideas to Alvarez and Lawrence (2009); Álvarez et al. (2010), which introduce the inducing variable formalism for computational efficiency in MOGP. However, instead of marginalising the latent functions U to obtain a variational lower bound, we keep their presence in a way that allows us to apply stochastic variational inference as in Hensman et al. (2013); Saul et al. (2016).
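Before turning to the inducing-variable machinery, the prior of Section 2.1 is easy to assemble explicitly for small N. A NumPy sketch, assuming RBF kernels k_q (the kernel family is not fixed by the model) and the Kronecker ordering used above, with rank-one B_q = a_q a_q^⊤:

import numpy as np

def rbf(X, Z, lengthscale):
    """Squared-exponential kernel matrix between row-wise inputs X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lmc_covariance(X, A, lengthscales):
    """K = sum_q B_q kron K_q with rank-one B_q = a_q a_q^T.
    A: J x Q mixing matrix (column q is a_q); one lengthscale per latent
    process u_q. Returns the JN x JN prior covariance of f."""
    J, Q = A.shape
    N = X.shape[0]
    K = np.zeros((J * N, J * N))
    for q in range(Q):
        B_q = np.outer(A[:, q], A[:, q])        # coregionalisation matrix
        K += np.kron(B_q, rbf(X, X, lengthscales[q]))
    return K

Drawing f ~ N(0, K) from this matrix and slicing it into J blocks of length N yields one correlated sample per latent parameter function, matching the generative description above.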
2.2.1 Inducing variables for MOGP

A key idea for reducing computational complexity in Gaussian process models is to introduce auxiliary variables or inducing variables. These variables have already been used in the context of MOGP (Alvarez and Lawrence, 2009; Álvarez et al., 2010). A subtle difference from the single-output case is that the inducing variables are not taken from the same latent process, say f_1(x), but from the latent processes U also used to build the model for multiple outputs. We will follow the same formalism here.

We start by defining the set of M inducing variables per latent function u_q(x) as u_q = [u_q(z_1), · · · , u_q(z_M)]^⊤, evaluated at a set of inducing inputs Z = {z_m}_{m=1}^M ∈ R^{M×p}. We also define u = [u_1^⊤, · · · , u_Q^⊤]^⊤ ∈ R^{QM×1}. For simplicity in the exposition, we have assumed that all the inducing variables, for all q, have been evaluated at the same set of inputs Z. Instead of marginalising {u_q(x)}_{q=1}^Q from the model in (2), we explicitly use the joint Gaussian prior p(f, u) = p(f | u)p(u). Due to the assumed independence of the latent functions u_q(x), the distribution p(u) factorises as p(u) = Π_{q=1}^Q p(u_q), with u_q ∼ N(0, K_q), where K_q ∈ R^{M×M} has entries k_q(z_i, z_j) with z_i, z_j ∈ Z. Notice that the dimensions of this K_q are different from the dimensions of K_q in Section 2.1. The LPFs f_{d,j} are conditionally independent given u, so we can write the conditional distribution p(f | u) as

$$p(\mathbf{f}\mid\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} \mathcal{N}\big(\mathbf{f}_{d,j}\mid \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{u},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} - \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^{\top}\big),$$

where K_uu ∈ R^{QM×QM} is a block-diagonal matrix with blocks given by K_q, and K_{f_{d,j} u} ∈ R^{N×QM} is the cross-covariance matrix computed from the cross-covariances between f_{d,j}(x) and u_q(z). The expression for this cross-covariance function can be obtained from (2), leading to k_{f_{d,j} u_q}(x, z) = a_{d,j,q} k_q(x, z). This form for the cross-covariance between the LPF f_{d,j}(x) and u_q(z) is a key difference between the inducing variable methods for the single-output GP case and the MOGP case.

2.2.2 Variational Bounds

Exact posterior inference is intractable in our model due to the presence of an arbitrary number of non-Gaussian likelihoods. We use variational inference to compute a lower bound L for the marginal log-likelihood log p(y) and to approximate the posterior distribution p(f, u | D). Following Álvarez et al. (2010), the posterior over the LPFs f and the latent functions u can be approximated as

$$p(\mathbf{f},\mathbf{u}\mid\mathbf{y},\mathbf{X}) \approx q(\mathbf{f},\mathbf{u}) = p(\mathbf{f}\mid\mathbf{u})\,q(\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u}) \prod_{q=1}^{Q} q(\mathbf{u}_q),$$

where q(u_q) = N(u_q | μ_{u_q}, S_{u_q}) are Gaussian variational distributions whose parameters {μ_{u_q}, S_{u_q}}_{q=1}^Q must be optimised. Building on previous work by Saul et al. (2016); Hensman et al. (2015), we derive a lower bound that accepts any log-likelihood function that can be modulated by the LPFs f. The lower bound L for log p(y) is obtained as follows:

$$\log p(\mathbf{y}) = \log \int p(\mathbf{y}\mid\mathbf{f})\,p(\mathbf{f}\mid\mathbf{u})\,p(\mathbf{u})\,d\mathbf{f}\,d\mathbf{u} \geq \int q(\mathbf{f},\mathbf{u}) \log \frac{p(\mathbf{y}\mid\mathbf{f})\,p(\mathbf{f}\mid\mathbf{u})\,p(\mathbf{u})}{q(\mathbf{f},\mathbf{u})}\,d\mathbf{f}\,d\mathbf{u} = \mathcal{L}.$$

We can further simplify L to obtain

$$\mathcal{L} = \int\!\!\int p(\mathbf{f}\mid\mathbf{u})\,q(\mathbf{u})\log p(\mathbf{y}\mid\mathbf{f})\,d\mathbf{f}\,d\mathbf{u} - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big) = \int\!\!\int \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u})\,q(\mathbf{u})\log p(\mathbf{y}\mid\mathbf{f})\,d\mathbf{u}\,d\mathbf{f} - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big),$$

where KL is the Kullback–Leibler divergence. Moreover, the approximate marginal posterior for f_{d,j} is q(f_{d,j}) = ∫ p(f_{d,j} | u) q(u) du, leading to

$$q(\mathbf{f}_{d,j}) = \mathcal{N}\big(\mathbf{f}_{d,j}\mid \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\boldsymbol{\mu}_{\mathbf{u}},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} + \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}(\mathbf{S}_{\mathbf{u}} - \mathbf{K}_{\mathbf{u}\mathbf{u}})\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^{\top}\big),$$

where μ_u = [μ_{u_1}^⊤, · · · , μ_{u_Q}^⊤]^⊤ and S_u is a block-diagonal matrix with blocks given by S_{u_q}.
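The marginal posterior q(f_{d,j}) above has closed-form moments. The function below is a small sketch of that computation, assuming the covariance blocks and the variational parameters are already available; a practical implementation would replace the explicit inverse with Cholesky solves.

import numpy as np

def q_f_moments(Kfu, Kuu, Kff, mu_u, S_u, jitter=1e-8):
    # Mean and covariance of q(f_{d,j}) given the variational q(u).
    # Kfu: (N, QM) cross-covariances a_{d,j,q} k_q(x, z) stacked over q
    # Kuu: (QM, QM) block-diagonal prior covariance of u
    # Kff: (N, N) prior covariance of f_{d,j}
    # mu_u: (QM,) variational mean; S_u: (QM, QM) variational covariance
    Kuu_inv = np.linalg.inv(Kuu + jitter * np.eye(Kuu.shape[0]))
    A = Kfu @ Kuu_inv                          # K_{fu} K_{uu}^{-1}
    mean = A @ mu_u
    cov = Kff + A @ (S_u - Kuu) @ A.T
    return mean, cov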
The expression for log p(y|f) factorises according to (1): log p(y|f) = Σ_{d=1}^D log p(y_d | f̃_d) = Σ_{d=1}^D log p(y_d | f_{d,1}, · · · , f_{d,J_d}). Using this expression for log p(y|f) leads to the following expression for the bound:

$$\mathcal{L} = \sum_{d=1}^{D}\mathbb{E}_{q(\mathbf{f}_{d,1})\cdots q(\mathbf{f}_{d,J_d})}\big[\log p(\mathbf{y}_d\mid\mathbf{f}_{d,1},\ldots,\mathbf{f}_{d,J_d})\big] - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

When D = 1 in the expression above, we recover the bound obtained in Saul et al. (2016). To maximise this lower bound, we need to find the optimal variational parameters {μ_{u_q}}_{q=1}^Q and {S_{u_q}}_{q=1}^Q. We represent each matrix S_{u_q} as S_{u_q} = L_{u_q} L_{u_q}^⊤ and, to ensure positive definiteness of S_{u_q}, we estimate L_{u_q} instead of S_{u_q}. Computation of the posterior distributions over f_{d,j} can be done analytically. There is still an intractability issue in the variational expectations of the log-likelihood functions. Since we construct these bounds to accept any possible data type, we need a general way to solve these integrals. One obvious solution is to apply Monte Carlo methods; however, it would be slow to maximise the lower bound and update the variational parameters by sampling thousands of times (to approximate the expectations) at each iteration. Instead, we address this problem by using Gauss–Hermite quadrature as in Hensman et al. (2015); Saul et al. (2016).

Stochastic Variational Inference. The conditional expectations in the bound above are also valid across data observations, so we can express the bound as

$$\mathcal{L} = \sum_{d=1}^{D}\sum_{n=1}^{N}\mathbb{E}_{q(f_{d,1}(\mathbf{x}_n))\cdots q(f_{d,J_d}(\mathbf{x}_n))}\big[\log p(y_d(\mathbf{x}_n)\mid f_{d,1}(\mathbf{x}_n),\ldots,f_{d,J_d}(\mathbf{x}_n))\big] - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

This functional form allows the use of mini-batches of smaller sets of training samples, performing the optimisation using noisy estimates of the gradient of the global objective, in a similar fashion to Hoffman et al. (2013); Hensman et al. (2013, 2015); Saul et al. (2016). This scalable bound makes our multi-output model applicable to large heterogeneous datasets. Notice that the computational complexity is dominated by the inversion of K_uu, with a cost of O(QM³), and by products like K_fu, with a cost of O(JNQM²).

Hyperparameter learning. The hyperparameters in our model include Z, {B_q}_{q=1}^Q, and {γ_q}_{q=1}^Q, the hyperparameters associated with the covariance functions {k_q(·, ·)}_{q=1}^Q. Since the variational distribution q(u) is sensitive to changes in the hyperparameters, we optimise the variational parameters of q(u) and the hyperparameters using a variational EM algorithm (Beal, 2003) when employing the full dataset, or its stochastic version when using mini-batches (Hoffman et al., 2013).

2.3 Predictive distribution

Consider a set of test inputs X∗. Assuming that p(u|y) ≈ q(u), the predictive distribution p(y∗) can be approximated as p(y∗|y) ≈ ∫ p(y∗|f∗) q(f∗) df∗, where q(f∗) = ∫ p(f∗|u) q(u) du. Computing the expression q(f∗) = Π_{d=1}^D Π_{j=1}^{J_d} q(f_{d,j,∗}) involves evaluating K_{f_{d,j,∗} u} at X∗. As in the case of the lower bound, the integral above is intractable for the non-Gaussian likelihoods p(y∗|f∗). We can once again make use of Monte Carlo integration or quadrature to approximate the integral. Simpler integration problems are obtained if we are only interested in the predictive mean, E[y∗], and the predictive variance, var[y∗].

3 Related Work

The works most closely related to ours are Skolidis and Sanguinetti (2011), Chai (2012), Dezfouli and Bonilla (2015), and Saul et al. (2016). We differ from Skolidis and Sanguinetti (2011) because we allow more general heterogeneous outputs beyond the specific case of several binary classification problems.
Our inference method also scales to large datasets. The works by Chai (2012) and Dezfouli and Bonilla (2015) do use a MOGP, but they only handle a single categorical variable. Our inference approach scales better than the one in Chai (2012), and it is fundamentally different from the one in Dezfouli and Bonilla (2015), since we do not use AVI. Our model also differs from Saul et al. (2016), since we allow for several dependent outputs, D > 1, and our scalable approach is more akin to applying SVI to the inducing variable approach of Álvarez et al. (2010). More recently, Vanhatalo et al. (2018) used additive multi-output GP models to account for interdependencies between count and binary observations; they use the Laplace approximation for approximating the posterior distribution. Similarly, Pourmohamad and Lee (2016) perform combined regression and binary classification with a multi-output GP learned via sequential Monte Carlo. Nguyen and Bonilla (2014b) also use the idea from Álvarez et al. (2010) to provide scalability for multiple-output GP models by conditioning the latent parameter functions f_{d,j}(x) on the inducing variables u, but they only consider the multivariate regression case.

It is also important to mention that multi-output Gaussian processes have been considered as alternative models for multi-task learning (Alvarez et al., 2012). Multi-task learning also addresses multiple prediction problems together within a single inference framework. Most previous work in this area has focused on problems where all tasks are exclusively regression or classification problems. When tasks are heterogeneous, the common practice is to introduce a regulariser per data type in a global cost function (Zhang et al., 2012; Han et al., 2017). Usually, these cost functions are composed of additive terms, each one referring to a single task, while the correlation assumption among heterogeneous likelihoods is addressed by mixing regularisers in a global penalty term (Li et al., 2014) or by forcing different tasks to share a common mean (Ngufor et al., 2015). Another natural way of treating both continuous and discrete tasks is to assume that all of them share a common input set that varies its influence on each output; then, by sharing a joint sparsity pattern, it is possible to optimise a global cost function with a single regularisation parameter on the level of sparsity (Yang et al., 2009). There have also been efforts to model heterogeneous data outside the label of multi-task learning, including mixed graphical models (Yang et al., 2014), where varied types of data are assumed to be combinations of exponential families, and latent feature models (Valera et al., 2017), with heterogeneous observations being mappings of a set of Gaussian-distributed variables.

4 Experiments

In this section, we evaluate our model on different heterogeneous scenarios¹. To demonstrate its performance in terms of multi-output learning, prediction and scalability, we have explored several applications with both synthetic and real data. For all the experiments, we consider an RBF kernel for each covariance function k_q(·, ·) and we set Q = 3. For standard optimisation we used the L-BFGS-B algorithm. When SVI was needed, we used ADADELTA as included in the climin library, with a mini-batch size of 500 samples for every output. All performance metrics are given in terms of the negative log-predictive density (NLPD), calculated on a test subset and applicable to any type of likelihood.
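The Gauss–Hermite quadrature mentioned in Section 2.2.2 also underlies metrics such as the NLPD for non-Gaussian likelihoods. The snippet below is a sketch, for a Bernoulli output, of how the one-dimensional expectations E_{q(f)}[log p(y | f)] can be computed with a fixed set of quadrature nodes; it is illustrative, not the paper's code.

import numpy as np
from scipy.special import expit   # logistic sigmoid

def gh_expected_loglik(y, mu, var, n_points=20):
    # E_{N(f | mu, var)}[log Bernoulli(y | sigmoid(f))], per data point
    x, w = np.polynomial.hermite.hermgauss(n_points)            # nodes/weights for weight e^{-t^2}
    f = mu[:, None] + np.sqrt(2.0 * var)[:, None] * x[None, :]  # f = mu + sqrt(2 var) t
    p = expit(f)
    loglik = y[:, None] * np.log(p) + (1 - y[:, None]) * np.log1p(-p)
    return (w[None, :] * loglik).sum(axis=1) / np.sqrt(np.pi)

The same routine with log p(y | f) replaced by p(y | f) gives the predictive density E_{q(f)}[p(y | f)], whose negative logarithm is the NLPD.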
Further details about the experiments are included in the appendix.

Missing Gap Prediction: In our first experiment, we evaluate whether our model is able to predict observations in one output using training information from another one. We set up a toy problem which consists of D = 2 heterogeneous outputs, where the first function y_1(x) is real-valued and y_2(x) is binary. Assuming that the heterogeneous outputs do not share a common input set, we observe N_1 = 600 and N_2 = 500 samples, respectively. All inputs are uniformly distributed in the input range [0, 1], and we generate a gap only in the set of binary observations by removing N_test = 150 samples in the interval [0.7, 0.9]. Using the remaining points from both outputs for training, we fitted our MOGP model. In Figures 1(a,b) we can see how the uncertainty in the binary test predictions is reduced by learning from the first output. In contrast, Figure 1(c) shows wider variance in the predicted parameter when it is trained independently. For the multi-output case we obtained an NLPD value on test data of 32.5 ± 0.2 × 10⁻², while in the single-output case the NLPD was 40.51 ± 0.08 × 10⁻².

¹The code is publicly available in the repository github.com/pmorenoz/HetMOGP/

Human Behavior Data: In this experiment, we are interested in modeling human behavior in psychiatric patients. Previous work by Soleimani et al. (2018) already explores the application of scalable MOGP models to healthcare for reliable predictions from multivariate time series. Our data come from a medical study that asked patients to download a monitoring app (EB2)² on their smartphones. The system captures information about mobility, communication metadata and interactions in social media. The work has a particular interest in mental health, since shifts or misalignments in the circadian feature of human behavior (24 h cycles) can be interpreted as early signs of crisis. In particular, we obtained a binary indicator variable of presence/absence at home by monitoring latitude–longitude and measuring its distance from the patient's home location within a 50 m radius range. Then, using the already measured distances, we generated a mobility sequence with all log-distance values. Our last output consists of binary samples representing use/non-use of the Whatsapp application on the smartphone; at each monitoring time instant, we used its differential data consumption to determine use or non-use of the application. We considered an entire week in seconds as the input domain, normalised to the range [0, 1].

²This smartphone application can be found at https://www.eb2.tech/.

In Figure (2), after training on N = 750 samples, we find that the circadian feature is mainly contained in the first output. During the learning process, this periodicity is transferred to the other outputs through the latent functions, improving the performance of the entire model. Experimentally, we verified that this circadian pattern was not captured in the mobility and social data when training the outputs independently. In Table 1 we can see prediction metrics for multi-output and independent prediction.

London House Price Data: Following the large-scale experiments in Hensman et al. (2013), we obtained the complete register of properties sold in the Greater London County during 2017 (https://www.gov.uk/government/collections/price-paid-data). We preprocessed it to translate all property addresses to latitude–longitude points. For each spatial input, we considered two observations, one binary and one real.
The first one indicates whether or not the property is a flat (zero would mean detached, semi-detached, terraced, etc.), and the second one is the sale price of the house. Our goal is to predict features of houses given a certain location in the London area. We used a training set of N = 20,000 samples, 1,000 samples for test predictions and M = 100 inducing points. Results in Figure (3) show a portion of the entire heterogeneous dataset and its test prediction curves. We obtained a global NLPD score of 16.44 ± 0.01 using the MOGP and 17.31 ± 1.06 in the independent-outputs setting (both × 10⁻²). There is an improvement in performance when training our multi-output model even on large-scale datasets. See Table (2) for scores per output.

High Dimensional Input Data: In our last experiment, we tested our MOGP model on the arrhythmia dataset from the UCI repository (http://archive.ics.uci.edu/ml/). We use a dataset of dimensionality p = 255 and 452 samples, which we divide into training, validation and test sets (more details are in the appendix). We use our model to predict a binary output (gender) and a continuous output (logarithmic age), and we compared against independent Chained GPs per output. The binary output is modelled as a Bernoulli distribution and the continuous one as a Gaussian. We obtained an average NLPD value of 0.0191 for both the multi-output and the independent-output models, with a slight difference in the standard deviation.

5 Conclusions

In this paper we have introduced a novel extension of multi-output Gaussian processes for handling heterogeneous observations. Our model is able to work on large-scale datasets by using sparse approximations within stochastic variational inference. Experimental results show relevant improvements with respect to independent learning of heterogeneous data in different scenarios. In future work, it would be interesting to employ convolutional processes (CPs) as an alternative way to build the multi-output GP prior. Also, instead of hand-coding definitions of heterogeneous likelihoods, we may consider discovering them automatically (Valera and Ghahramani, 2017) as an input block in a pipeline setup of our tool.

Acknowledgments

The authors want to thank Wil Ward for his constructive comments and Juan José Giraldo for his useful advice about SVI experiments and simulations. We also thank Alan Saul and David Ramírez for their recommendations about scalable inference and feedback on the equations. We are grateful to Eero Siivola and Marcelo Hartmann for sharing their Python module for heterogeneous likelihoods and to Francisco J. R. Ruiz for his illuminating help with the stochastic version of the VEM algorithm. Also, we would like to thank Juan José Campaña for his assistance with the London House Price dataset. Pablo Moreno-Muñoz acknowledges the support of his doctoral FPI grant BES2016-077626 and was also supported by the Ministerio de Economía of Spain under the project Macro-ADOBE (TEC2015-67719-P). Antonio Artés-Rodríguez acknowledges the support of projects ADVENTURE (TEC2015-69868-C2-1-R), AID (TEC2014-62194-EXP) and CASI-CAM-CM (S2013/ICE-2845). Mauricio A. Álvarez has been partially financed by the Engineering and Physical Sciences Research Council (EPSRC) Research Projects EP/N014162/1 and EP/R034303/1.
1. What is the main contribution of the paper, and how does it extend the usefulness of the model further?
2. What are the concerns regarding the novelty of the technical contribution, and how does it relate to previous works?
3. What are the strengths and weaknesses of the paper in terms of its writing, experiments, and conclusions?
4. What are the minor questions or issues that the reviewer has regarding the paper's content, such as the Monte Carlo approach, standard deviations, features, and training set size?
5. How does the reviewer assess the impact and potential spread of multi-output models for a specific type of data, given the Python implementation provided with the paper?
Review
Review
*** Update after author feedback ***
Thank you for your feedback. I didn't understand at first that the model can be used not only with LMC models but also with convolutions. This is great and extends the usefulness of the model further.
*****************************************
The paper introduces a new multi-output model for heterogeneous outputs, i.e. each output can have its own likelihood function. The model builds upon the linear model of coregionalization for coupling the latent regressors of the multiple outputs and uses stochastic variational inference to allow for non-Gaussian likelihoods.
The main concern I have about the paper is its novelty in terms of technical contribution. The paper combines the LMC model for the Gaussian case (Bonilla et al., 2014; I missed the citation!) with progress for other types of likelihoods (Saul et al., 2016). Are both models special cases of the aforementioned model, or are there any differences in the inference scheme? However, the paper is well written and the experiments are convincing, which is why I would recommend a "weak accept". The paper also comes with a Python implementation that allows one to quickly try out multi-output models with new data types, which could lead to a wider spread of multi-output models for that kind of data.
Minors:
L 186: Why would the Monte Carlo approach be slower than Gauss–Hermite quadrature? Please be more explicit.
L 291: Please state the standard deviations.
L 267: What are the features? I first thought it was the time, but then the sentence (270-272) does not make any sense.
L 268: The dataset consists of 604,800 samples but only 750 samples are used for training. How does the performance behave w.r.t. runtime/NLPD if the training set size is increased?
Literature:
- Nguyen, Trung V., and Edwin V. Bonilla. "Collaborative Multi-output Gaussian Processes." UAI. 2014.
- Saul, Alan D., et al. "Chained Gaussian Processes." Artificial Intelligence and Statistics. 2016.
NIPS
Title
Heterogeneous Multi-output Gaussian Process Prediction

Abstract
We present a novel extension of multi-output Gaussian processes for handling heterogeneous outputs. We assume that each output has its own likelihood function and use a vector-valued Gaussian process prior to jointly model the parameters in all likelihoods as latent functions. Our multi-output Gaussian process uses a covariance function with a linear model of coregionalisation form. Assuming conditional independence across the underlying latent functions together with an inducing variable framework, we are able to obtain tractable variational bounds amenable to stochastic variational inference. We illustrate the performance of the model on synthetic data and two real datasets: a human behavioral study and a demographic high-dimensional dataset.

1 Introduction
Multi-output Gaussian processes (MOGP) generalise the powerful Gaussian process (GP) predictive model to the vector-valued random field setup (Alvarez et al., 2012). It has been experimentally shown that, by simultaneously exploiting correlations between multiple outputs and across the input space, it is possible to provide better predictions, particularly in scenarios with missing or noisy data (Bonilla et al., 2008; Dai et al., 2017). The main focus in the literature for MOGP has been on the definition of a suitable cross-covariance function between the multiple outputs that allows for the treatment of the outputs as a single GP with a properly defined covariance function (Alvarez et al., 2012). The two classical alternatives for defining such cross-covariance functions are the linear model of coregionalisation (LMC) (Journel and Huijbregts, 1978) and process convolutions (Higdon, 2002). In the former case, each output corresponds to a weighted sum of shared latent random functions. In the latter, each output is modelled as the convolution integral between a smoothing kernel and a latent random function common to all outputs. In both cases, the unknown latent functions follow Gaussian process priors, leading to straightforward expressions for computing the cross-covariance functions among different outputs. More recent alternatives to build valid covariance functions for MOGP include the work by Ulrich et al. (2015) and Parra and Tobar (2017), who build the cross-covariances in the spectral domain.

Regarding the type of outputs that can be modelled, most alternatives focus on multiple-output regression for continuous variables. Traditionally, each output is assumed to follow a Gaussian likelihood where the mean function is given by one of the outputs of the MOGP and the variance in that distribution is treated as an unknown parameter. Bayesian inference is tractable for these models.
In this paper, we are interested in the heterogeneous case for which the outputs are a mix of continuous, categorical, binary or discrete variables with different likelihood functions.

There have been few attempts to extend the MOGP to other types of likelihoods. For example, Skolidis and Sanguinetti (2011) use the outputs of a MOGP for jointly modelling several binary classification problems, each of which uses a probit likelihood. They use an intrinsic coregionalisation model (ICM), a particular case of the LMC. Posterior inference is performed using expectation propagation (EP) and variational mean field. Both Chai (2012) and Dezfouli and Bonilla (2015) have also used the ICM for modeling a single categorical variable with a multinomial logistic likelihood. The outputs of the ICM model are used as replacements for the linear predictors in the softmax function. Chai (2012) derives a particular variational bound for the marginal likelihood and computes Gaussian posterior distributions; Dezfouli and Bonilla (2015) introduce a scalable inference procedure that uses a mixture of Gaussians to approximate the posterior distribution via automated variational inference (AVI) (Nguyen and Bonilla, 2014a), which requires sampling from univariate Gaussians.

For the single-output GP case, the usual practice for handling non-Gaussian likelihoods has been to replace the parameters or linear predictors of the non-Gaussian likelihood by one or more independent GP priors. Since computing the posterior distributions becomes intractable, different alternatives have been offered for approximate inference: for example, the Gaussian heteroscedastic regression model with variational inference (Lázaro-Gredilla and Titsias, 2011), the Laplace approximation (Vanhatalo et al., 2013), and stochastic variational inference (SVI) (Saul et al., 2016). This last reference uses the same idea for modulating the parameters of a Student-t likelihood, a log-logistic distribution, a beta distribution and a Poisson distribution. The generalised Wishart process (Wilson and Ghahramani, 2011) is another example, where the entries of the scale matrix of a Wishart distribution are modulated by independent GPs.

Our main contribution in this paper is to provide an extension of multiple-output Gaussian processes for prediction in heterogeneous datasets. The key principle in our model is to use the outputs of a MOGP as the latent functions that modulate the parameters of several likelihood functions, one likelihood function per output. We tackle the model's intractability using variational inference. Furthermore, we use the inducing variable formalism for MOGP introduced by Alvarez and Lawrence (2009) and compute a variational bound suitable for stochastic optimisation as in Hensman et al. (2013). We experimentally provide evidence of the benefits of simultaneously modeling heterogeneous outputs in different applied problems. Our model can be seen as a generalisation of Saul et al. (2016) to multiple correlated output functions of a heterogeneous nature. Our Python implementation follows the spirit of Hadfield et al. (2010), where the user only needs to specify a list of likelihood functions, likelihood_list = [Bernoulli(), Poisson(), HetGaussian()], where HetGaussian refers to the heteroscedastic Gaussian distribution, and the number of latent parameter functions per likelihood is assigned automatically.
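The likelihood-list interface mentioned above can be illustrated with a short sketch. The class and attribute names below are hypothetical stand-ins, not the actual HetMOGP API; the point is only that each likelihood declares how many latent parameter functions it needs, so J is derived automatically.

# Hypothetical sketch of the likelihood-list idea (not the actual HetMOGP API)
class Bernoulli:    num_params = 1   # one LPF: probability of success
class Poisson:      num_params = 1   # one LPF: rate
class HetGaussian:  num_params = 2   # two LPFs: mean and variance

likelihood_list = [Bernoulli(), Poisson(), HetGaussian()]
J = sum(lik.num_params for lik in likelihood_list)   # total number of LPFs, here J = 4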
2 Heterogeneous Multi-output Gaussian process

Consider a set of output functions Y = {y_d(x)}_{d=1}^D, with x ∈ R^p, that we want to jointly model using Gaussian processes. Traditionally, the literature has considered the case for which each y_d(x) is continuous and Gaussian distributed. In this paper, we are interested in the heterogeneous case for which the outputs in Y are a mix of continuous, categorical, binary or discrete variables with several different distributions. In particular, we will assume that the distribution over y_d(x) is completely specified by a set of parameters θ_d(x) ∈ X^{J_d}, where X is a generic domain for the parameters and J_d is the number of parameters that define the distribution. Each parameter θ_{d,j}(x) ∈ θ_d(x) is a non-linear transformation of a Gaussian process prior f_{d,j}(x), that is, θ_{d,j}(x) = g_{d,j}(f_{d,j}(x)), where g_{d,j}(·) is a deterministic function that maps the GP output to the appropriate domain for the parameter θ_{d,j}.

To make the notation concrete, let us assume a heterogeneous multiple-output problem for which D = 3. Assume that output y_1(x) is binary and that it will be modelled using a Bernoulli distribution. The Bernoulli distribution uses a single parameter (the probability of success), J_1 = 1, restricted to values in the range [0, 1]. This means that θ_1(x) = θ_{1,1}(x) = g_{1,1}(f_{1,1}(x)), where g_{1,1}(·) could be modelled using the logistic sigmoid function σ(z) = 1/(1 + exp(−z)), which maps σ : R → [0, 1]. Assume now that the second output y_2(x) corresponds to a count variable that can take values y_2(x) ∈ N ∪ {0}. The count variable can be modelled using a Poisson distribution with a single parameter (the rate), J_2 = 1, restricted to the positive reals. This means that θ_2(x) = θ_{2,1}(x) = g_{2,1}(f_{2,1}(x)), where g_{2,1}(·) could be modelled as an exponential function, g_{2,1}(·) = exp(·), to ensure strictly positive values for the parameter. Finally, y_3(x) is a continuous variable with heteroscedastic noise. It can be modelled using a Gaussian distribution where both the mean and the variance are functions of x. This means that θ_3(x) = [θ_{3,1}(x), θ_{3,2}(x)]^⊤ = [g_{3,1}(f_{3,1}(x)), g_{3,2}(f_{3,2}(x))]^⊤, where the first function is used to model the mean of the Gaussian and the second is used to model the variance. Therefore, we can assume that g_{3,1}(·) is the identity function and that g_{3,2}(·) is a function that ensures that the variance takes strictly positive values, e.g. the exponential function.

Let us define the vector-valued function y(x) = [y_1(x), y_2(x), · · · , y_D(x)]^⊤. We assume that the outputs are conditionally independent given the vector of parameters θ(x) = [θ_1(x), θ_2(x), · · · , θ_D(x)]^⊤, defined by specifying the vector of latent functions f(x) = [f_{1,1}(x), · · · , f_{1,J_1}(x), f_{2,1}(x), · · · , f_{D,J_D}(x)]^⊤ ∈ R^{J×1}, where J = Σ_{d=1}^D J_d,

$$p(\mathbf{y}(\mathbf{x})\mid\boldsymbol{\theta}(\mathbf{x})) = p(\mathbf{y}(\mathbf{x})\mid\mathbf{f}(\mathbf{x})) = \prod_{d=1}^{D} p(y_d(\mathbf{x})\mid\boldsymbol{\theta}_d(\mathbf{x})) = \prod_{d=1}^{D} p(y_d(\mathbf{x})\mid\tilde{\mathbf{f}}_d(\mathbf{x})), \qquad (1)$$

where we have defined f̃_d(x) = [f_{d,1}(x), · · · , f_{d,J_d}(x)]^⊤ ∈ R^{J_d×1}, the set of latent functions that specify the parameters in θ_d(x). Notice that J ≥ D; that is, there is not always a one-to-one map from f(x) to y(x). Most previous work has assumed that D = 1 and that the corresponding elements in θ_d(x), that is, the latent functions in f̃_1(x) = [f_{1,1}(x), · · · , f_{1,J_1}(x)]^⊤, are drawn from independent Gaussian processes. Important exceptions are Chai (2012) and Dezfouli and Bonilla (2015), who assumed a categorical variable y_1(x) where the elements in f̃_1(x) were drawn from an intrinsic coregionalisation model.
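To complement the running example, the following sketch shows the last step of the generative story, sampling heterogeneous outputs once the parameters θ_d have been produced by the links; the parameter values here are placeholders, not fitted quantities.

import numpy as np

rng = np.random.default_rng(0)
N = 5
p_success = rng.uniform(0.1, 0.9, N)   # theta_1: Bernoulli probability in [0, 1]
rate = rng.uniform(0.5, 4.0, N)        # theta_2: Poisson rate, strictly positive
mean = rng.normal(0.0, 1.0, N)         # theta_{3,1}: heteroscedastic Gaussian mean
var = rng.uniform(0.1, 1.0, N)         # theta_{3,2}: input-dependent variance

y1 = rng.binomial(1, p_success)        # binary output
y2 = rng.poisson(rate)                 # count output
y3 = rng.normal(mean, np.sqrt(var))    # continuous output with heteroscedastic noise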
In what follows, we generalise these models for D > 1 and potentially heterogeneous outputs y_d(x). We will use the word "output" to refer to the elements y_d(x) and "latent parameter function" (LPF) or "parameter function" (PF) to refer to f_{d,j}(x).

2.1 A multi-parameter GP prior

Our main departure from previous work is in the modeling of f(x) using a multi-parameter Gaussian process that allows correlations between the parameter functions f_{d,j}(x). We will use a linear-model-of-coregionalisation type of covariance function for expressing correlations between functions f_{d,j}(x) and f_{d',j'}(x'). The particular construction is as follows. Consider an additional set of independent latent functions U = {u_q(x)}_{q=1}^Q that will be linearly combined to produce the J LPFs {f_{d,j}(x)}_{j=1,d=1}^{J_d,D}. Each latent function u_q(x) is assumed to be drawn from an independent GP prior such that u_q(·) ∼ GP(0, k_q(·, ·)), where k_q can be any valid covariance function, and the zero mean is assumed for simplicity. Each latent parameter function f_{d,j}(x) is then given as

$$f_{d,j}(\mathbf{x}) = \sum_{q=1}^{Q} \sum_{i=1}^{R_q} a_{d,j,q}^{i}\, u_{q}^{i}(\mathbf{x}), \qquad (2)$$

where u_q^i(x) are IID samples from u_q(·) ∼ GP(0, k_q(·, ·)) and a_{d,j,q}^i ∈ R. The mean function for f_{d,j}(x) is zero, and the cross-covariance function k_{f_{d,j} f_{d',j'}}(x, x') = cov[f_{d,j}(x), f_{d',j'}(x')] is equal to Σ_{q=1}^Q b^q_{(d,j),(d',j')} k_q(x, x'), where b^q_{(d,j),(d',j')} = Σ_{i=1}^{R_q} a^i_{d,j,q} a^i_{d',j',q}.

Let us define X = {x_n}_{n=1}^N ∈ R^{N×p} as a set of common input vectors for all outputs y_d(x), although the presentation could be extended to the case of a different set of inputs per output. Let us also define f_{d,j} = [f_{d,j}(x_1), · · · , f_{d,j}(x_N)]^⊤ ∈ R^{N×1}; f̃_d = [f_{d,1}^⊤ · · · f_{d,J_d}^⊤]^⊤ ∈ R^{J_d N×1}; and f = [f̃_1^⊤ · · · f̃_D^⊤]^⊤ ∈ R^{JN×1}. The generative model for the heterogeneous MOGP is as follows. We sample f ∼ N(0, K), where K is a block-wise matrix with blocks given by {K_{f_{d,j} f_{d',j'}}}_{d=1,d'=1,j=1,j'=1}^{D,D,J_d,J_{d'}}. In turn, the elements in K_{f_{d,j} f_{d',j'}} are given by k_{f_{d,j} f_{d',j'}}(x_n, x_m), with x_n, x_m ∈ X. For the particular case of equal inputs X for all LPFs, K can also be expressed as a sum of Kronecker products,

$$\mathbf{K} = \sum_{q=1}^{Q} \mathbf{A}_q \mathbf{A}_q^{\top} \otimes \mathbf{K}_q = \sum_{q=1}^{Q} \mathbf{B}_q \otimes \mathbf{K}_q,$$

where A_q ∈ R^{J×R_q} has entries {a^i_{d,j,q}} and B_q has entries {b^q_{(d,j),(d',j')}}. The matrix K_q ∈ R^{N×N} has entries given by k_q(x_n, x_m) for x_n, x_m ∈ X. The matrices B_q ∈ R^{J×J} are known as the coregionalisation matrices. Once we obtain the sample for f, we evaluate the vector of parameters θ = [θ_1^⊤ · · · θ_D^⊤]^⊤, where θ_d = f̃_d. Having specified θ, we can generate samples for the output vector y = [y_1^⊤ · · · y_D^⊤]^⊤ ∈ X^{DN×1}, where the elements in y_d are obtained by sampling from the conditional distributions p(y_d(x)|θ_d(x)). To keep the notation uncluttered, we will assume from now on that R_q = 1, meaning that A_q = a_q ∈ R^{J×1} and that the coregionalisation matrices are rank one. In the literature, such a model is known as the semiparametric latent factor model (Teh et al., 2005).

2.2 Scalable variational inference

Given a heterogeneous dataset D = {X, y}, we would like to compute the posterior distribution p(f | D), which is intractable in our model. In what follows, we use ideas similar to Alvarez and Lawrence (2009); Álvarez et al. (2010), who introduced the inducing variable formalism for computational efficiency in MOGP. However, instead of marginalising the latent functions U to obtain a variational lower bound, we keep their presence in a way that allows us to apply stochastic variational inference as in Hensman et al. (2013); Saul et al. (2016).
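The two covariance expressions implied by (2) under R_q = 1 can be written compactly. The sketch below is illustrative, with hypothetical helper names, and also includes the f–u cross-covariance that becomes relevant once inducing variables are introduced.

import numpy as np

def rbf(X, X2, ell=0.2):
    # Squared-exponential covariance k_q(x, x')
    d2 = ((X[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

kernels = [lambda X, X2, l=l: rbf(X, X2, ell=l) for l in (0.1, 0.3, 1.0)]   # Q = 3

def cross_cov_ff(X, X2, a_j, a_j2):
    # k_{f_{d,j} f_{d',j'}}(x, x') = sum_q a_{d,j,q} a_{d',j',q} k_q(x, x'), with R_q = 1
    return sum(a_j[q] * a_j2[q] * k(X, X2) for q, k in enumerate(kernels))

def cross_cov_fu(X, Z, a_j, q):
    # k_{f_{d,j} u_q}(x, z) = a_{d,j,q} k_q(x, z)
    return a_j[q] * kernels[q](X, Z)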
2.2.1 Inducing variables for MOGP

A key idea for reducing computational complexity in Gaussian process models is to introduce auxiliary variables or inducing variables. These variables have already been used in the context of MOGP (Alvarez and Lawrence, 2009; Álvarez et al., 2010). A subtle difference from the single-output case is that the inducing variables are not taken from the same latent process, say f_1(x), but from the latent processes U also used to build the model for multiple outputs. We will follow the same formalism here.

We start by defining the set of M inducing variables per latent function u_q(x) as u_q = [u_q(z_1), · · · , u_q(z_M)]^⊤, evaluated at a set of inducing inputs Z = {z_m}_{m=1}^M ∈ R^{M×p}. We also define u = [u_1^⊤, · · · , u_Q^⊤]^⊤ ∈ R^{QM×1}. For simplicity in the exposition, we have assumed that all the inducing variables, for all q, have been evaluated at the same set of inputs Z. Instead of marginalising {u_q(x)}_{q=1}^Q from the model in (2), we explicitly use the joint Gaussian prior p(f, u) = p(f | u)p(u). Due to the assumed independence of the latent functions u_q(x), the distribution p(u) factorises as p(u) = Π_{q=1}^Q p(u_q), with u_q ∼ N(0, K_q), where K_q ∈ R^{M×M} has entries k_q(z_i, z_j) with z_i, z_j ∈ Z. Notice that the dimensions of this K_q are different from the dimensions of K_q in Section 2.1. The LPFs f_{d,j} are conditionally independent given u, so we can write the conditional distribution p(f | u) as

$$p(\mathbf{f}\mid\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} \mathcal{N}\big(\mathbf{f}_{d,j}\mid \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{u},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} - \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^{\top}\big),$$

where K_uu ∈ R^{QM×QM} is a block-diagonal matrix with blocks given by K_q, and K_{f_{d,j} u} ∈ R^{N×QM} is the cross-covariance matrix computed from the cross-covariances between f_{d,j}(x) and u_q(z). The expression for this cross-covariance function can be obtained from (2), leading to k_{f_{d,j} u_q}(x, z) = a_{d,j,q} k_q(x, z). This form for the cross-covariance between the LPF f_{d,j}(x) and u_q(z) is a key difference between the inducing variable methods for the single-output GP case and the MOGP case.

2.2.2 Variational Bounds

Exact posterior inference is intractable in our model due to the presence of an arbitrary number of non-Gaussian likelihoods. We use variational inference to compute a lower bound L for the marginal log-likelihood log p(y) and to approximate the posterior distribution p(f, u | D). Following Álvarez et al. (2010), the posterior over the LPFs f and the latent functions u can be approximated as

$$p(\mathbf{f},\mathbf{u}\mid\mathbf{y},\mathbf{X}) \approx q(\mathbf{f},\mathbf{u}) = p(\mathbf{f}\mid\mathbf{u})\,q(\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u}) \prod_{q=1}^{Q} q(\mathbf{u}_q),$$

where q(u_q) = N(u_q | μ_{u_q}, S_{u_q}) are Gaussian variational distributions whose parameters {μ_{u_q}, S_{u_q}}_{q=1}^Q must be optimised. Building on previous work by Saul et al. (2016); Hensman et al. (2015), we derive a lower bound that accepts any log-likelihood function that can be modulated by the LPFs f. The lower bound L for log p(y) is obtained as follows:

$$\log p(\mathbf{y}) = \log \int p(\mathbf{y}\mid\mathbf{f})\,p(\mathbf{f}\mid\mathbf{u})\,p(\mathbf{u})\,d\mathbf{f}\,d\mathbf{u} \geq \int q(\mathbf{f},\mathbf{u}) \log \frac{p(\mathbf{y}\mid\mathbf{f})\,p(\mathbf{f}\mid\mathbf{u})\,p(\mathbf{u})}{q(\mathbf{f},\mathbf{u})}\,d\mathbf{f}\,d\mathbf{u} = \mathcal{L}.$$

We can further simplify L to obtain

$$\mathcal{L} = \int\!\!\int p(\mathbf{f}\mid\mathbf{u})\,q(\mathbf{u})\log p(\mathbf{y}\mid\mathbf{f})\,d\mathbf{f}\,d\mathbf{u} - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big) = \int\!\!\int \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}\mid\mathbf{u})\,q(\mathbf{u})\log p(\mathbf{y}\mid\mathbf{f})\,d\mathbf{u}\,d\mathbf{f} - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big),$$

where KL is the Kullback–Leibler divergence. Moreover, the approximate marginal posterior for f_{d,j} is q(f_{d,j}) = ∫ p(f_{d,j} | u) q(u) du, leading to

$$q(\mathbf{f}_{d,j}) = \mathcal{N}\big(\mathbf{f}_{d,j}\mid \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\boldsymbol{\mu}_{\mathbf{u}},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} + \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}(\mathbf{S}_{\mathbf{u}} - \mathbf{K}_{\mathbf{u}\mathbf{u}})\mathbf{K}_{\mathbf{u}\mathbf{u}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^{\top}\big),$$

where μ_u = [μ_{u_1}^⊤, · · · , μ_{u_Q}^⊤]^⊤ and S_u is a block-diagonal matrix with blocks given by S_{u_q}.
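Given the Gaussian marginals q(f_{d,j}) above, expectations of any transformed quantity can be estimated by sampling. The sketch below, with hypothetical names, estimates the predictive mean of a Bernoulli output by Monte Carlo over q(f); a one-dimensional quadrature would work equally well.

import numpy as np
from scipy.special import expit

def bernoulli_predictive_mean(mu_f, var_f, n_samples=2000, seed=0):
    # E[y*] = E_{q(f*)}[sigmoid(f*)], estimated by Monte Carlo
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((mu_f.size, n_samples))
    f = mu_f[:, None] + np.sqrt(var_f)[:, None] * eps
    return expit(f).mean(axis=1)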
The expression for log p(y|f) factorises according to (1): log p(y|f) = Σ_{d=1}^D log p(y_d | f̃_d) = Σ_{d=1}^D log p(y_d | f_{d,1}, · · · , f_{d,J_d}). Using this expression for log p(y|f) leads to the following expression for the bound:

$$\mathcal{L} = \sum_{d=1}^{D}\mathbb{E}_{q(\mathbf{f}_{d,1})\cdots q(\mathbf{f}_{d,J_d})}\big[\log p(\mathbf{y}_d\mid\mathbf{f}_{d,1},\ldots,\mathbf{f}_{d,J_d})\big] - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

When D = 1 in the expression above, we recover the bound obtained in Saul et al. (2016). To maximise this lower bound, we need to find the optimal variational parameters {μ_{u_q}}_{q=1}^Q and {S_{u_q}}_{q=1}^Q. We represent each matrix S_{u_q} as S_{u_q} = L_{u_q} L_{u_q}^⊤ and, to ensure positive definiteness of S_{u_q}, we estimate L_{u_q} instead of S_{u_q}. Computation of the posterior distributions over f_{d,j} can be done analytically. There is still an intractability issue in the variational expectations of the log-likelihood functions. Since we construct these bounds to accept any possible data type, we need a general way to solve these integrals. One obvious solution is to apply Monte Carlo methods; however, it would be slow to maximise the lower bound and update the variational parameters by sampling thousands of times (to approximate the expectations) at each iteration. Instead, we address this problem by using Gauss–Hermite quadrature as in Hensman et al. (2015); Saul et al. (2016).

Stochastic Variational Inference. The conditional expectations in the bound above are also valid across data observations, so we can express the bound as

$$\mathcal{L} = \sum_{d=1}^{D}\sum_{n=1}^{N}\mathbb{E}_{q(f_{d,1}(\mathbf{x}_n))\cdots q(f_{d,J_d}(\mathbf{x}_n))}\big[\log p(y_d(\mathbf{x}_n)\mid f_{d,1}(\mathbf{x}_n),\ldots,f_{d,J_d}(\mathbf{x}_n))\big] - \sum_{q=1}^{Q}\mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

This functional form allows the use of mini-batches of smaller sets of training samples, performing the optimisation using noisy estimates of the gradient of the global objective, in a similar fashion to Hoffman et al. (2013); Hensman et al. (2013, 2015); Saul et al. (2016). This scalable bound makes our multi-output model applicable to large heterogeneous datasets. Notice that the computational complexity is dominated by the inversion of K_uu, with a cost of O(QM³), and by products like K_fu, with a cost of O(JNQM²).

Hyperparameter learning. The hyperparameters in our model include Z, {B_q}_{q=1}^Q, and {γ_q}_{q=1}^Q, the hyperparameters associated with the covariance functions {k_q(·, ·)}_{q=1}^Q. Since the variational distribution q(u) is sensitive to changes in the hyperparameters, we optimise the variational parameters of q(u) and the hyperparameters using a variational EM algorithm (Beal, 2003) when employing the full dataset, or its stochastic version when using mini-batches (Hoffman et al., 2013).

2.3 Predictive distribution

Consider a set of test inputs X∗. Assuming that p(u|y) ≈ q(u), the predictive distribution p(y∗) can be approximated as p(y∗|y) ≈ ∫ p(y∗|f∗) q(f∗) df∗, where q(f∗) = ∫ p(f∗|u) q(u) du. Computing the expression q(f∗) = Π_{d=1}^D Π_{j=1}^{J_d} q(f_{d,j,∗}) involves evaluating K_{f_{d,j,∗} u} at X∗. As in the case of the lower bound, the integral above is intractable for the non-Gaussian likelihoods p(y∗|f∗). We can once again make use of Monte Carlo integration or quadrature to approximate the integral. Simpler integration problems are obtained if we are only interested in the predictive mean, E[y∗], and the predictive variance, var[y∗].

3 Related Work

The works most closely related to ours are Skolidis and Sanguinetti (2011), Chai (2012), Dezfouli and Bonilla (2015), and Saul et al. (2016). We differ from Skolidis and Sanguinetti (2011) because we allow more general heterogeneous outputs beyond the specific case of several binary classification problems.
Our inference method also scales to large datasets. The works by Chai (2012) and Dezfouli and Bonilla (2015) do use a MOGP, but they only handle a single categorical variable. Our inference approach scales better than the one in Chai (2012), and it is fundamentally different from the one in Dezfouli and Bonilla (2015), since we do not use AVI. Our model also differs from Saul et al. (2016), since we allow for several dependent outputs, D > 1, and our scalable approach is more akin to applying SVI to the inducing variable approach of Álvarez et al. (2010). More recently, Vanhatalo et al. (2018) used additive multi-output GP models to account for interdependencies between count and binary observations; they use the Laplace approximation for approximating the posterior distribution. Similarly, Pourmohamad and Lee (2016) perform combined regression and binary classification with a multi-output GP learned via sequential Monte Carlo. Nguyen and Bonilla (2014b) also use the idea from Álvarez et al. (2010) to provide scalability for multiple-output GP models by conditioning the latent parameter functions f_{d,j}(x) on the inducing variables u, but they only consider the multivariate regression case.

It is also important to mention that multi-output Gaussian processes have been considered as alternative models for multi-task learning (Alvarez et al., 2012). Multi-task learning also addresses multiple prediction problems together within a single inference framework. Most previous work in this area has focused on problems where all tasks are exclusively regression or classification problems. When tasks are heterogeneous, the common practice is to introduce a regulariser per data type in a global cost function (Zhang et al., 2012; Han et al., 2017). Usually, these cost functions are composed of additive terms, each one referring to a single task, while the correlation assumption among heterogeneous likelihoods is addressed by mixing regularisers in a global penalty term (Li et al., 2014) or by forcing different tasks to share a common mean (Ngufor et al., 2015). Another natural way of treating both continuous and discrete tasks is to assume that all of them share a common input set that varies its influence on each output; then, by sharing a joint sparsity pattern, it is possible to optimise a global cost function with a single regularisation parameter on the level of sparsity (Yang et al., 2009). There have also been efforts to model heterogeneous data outside the label of multi-task learning, including mixed graphical models (Yang et al., 2014), where varied types of data are assumed to be combinations of exponential families, and latent feature models (Valera et al., 2017), with heterogeneous observations being mappings of a set of Gaussian-distributed variables.

4 Experiments

In this section, we evaluate our model on different heterogeneous scenarios¹. To demonstrate its performance in terms of multi-output learning, prediction and scalability, we have explored several applications with both synthetic and real data. For all the experiments, we consider an RBF kernel for each covariance function k_q(·, ·) and we set Q = 3. For standard optimisation we used the L-BFGS-B algorithm. When SVI was needed, we used ADADELTA as included in the climin library, with a mini-batch size of 500 samples for every output. All performance metrics are given in terms of the negative log-predictive density (NLPD), calculated on a test subset and applicable to any type of likelihood.
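Two pieces of the stochastic bound used with these mini-batches are easy to state in code: the exact KL term between Gaussians and the unbiased rescaling of the mini-batch data term. The sketch below assumes Cholesky factors are available; all names are illustrative, not the paper's implementation.

import numpy as np

def gauss_kl(mu, L_S, L_K):
    # KL( N(mu, S) || N(0, K) ) with S = L_S L_S^T and K = L_K L_K^T
    M = mu.size
    alpha = np.linalg.solve(L_K, mu)
    W = np.linalg.solve(L_K, L_S)
    logdet_K = 2.0 * np.log(np.diag(L_K)).sum()
    logdet_S = 2.0 * np.log(np.diag(L_S)).sum()
    return 0.5 * ((W ** 2).sum() + alpha @ alpha - M + logdet_K - logdet_S)

def minibatch_elbo(batch_expectations, N, batch_size, kl_terms):
    # Unbiased estimate: rescale the summed per-point expectations by N / |B|
    return (N / batch_size) * np.sum(batch_expectations) - np.sum(kl_terms)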
Further details about the experiments are included in the appendix.

Missing Gap Prediction: In our first experiment, we evaluate whether our model is able to predict observations in one output using training information from another one. We set up a toy problem which consists of D = 2 heterogeneous outputs, where the first function y_1(x) is real-valued and y_2(x) is binary. Assuming that the heterogeneous outputs do not share a common input set, we observe N_1 = 600 and N_2 = 500 samples, respectively. All inputs are uniformly distributed in the input range [0, 1], and we generate a gap only in the set of binary observations by removing N_test = 150 samples in the interval [0.7, 0.9]. Using the remaining points from both outputs for training, we fitted our MOGP model. In Figures 1(a,b) we can see how the uncertainty in the binary test predictions is reduced by learning from the first output. In contrast, Figure 1(c) shows wider variance in the predicted parameter when it is trained independently. For the multi-output case we obtained an NLPD value on test data of 32.5 ± 0.2 × 10⁻², while in the single-output case the NLPD was 40.51 ± 0.08 × 10⁻².

¹The code is publicly available in the repository github.com/pmorenoz/HetMOGP/

Human Behavior Data: In this experiment, we are interested in modeling human behavior in psychiatric patients. Previous work by Soleimani et al. (2018) already explores the application of scalable MOGP models to healthcare for reliable predictions from multivariate time series. Our data come from a medical study that asked patients to download a monitoring app (EB2)² on their smartphones. The system captures information about mobility, communication metadata and interactions in social media. The work has a particular interest in mental health, since shifts or misalignments in the circadian feature of human behavior (24 h cycles) can be interpreted as early signs of crisis. In particular, we obtained a binary indicator variable of presence/absence at home by monitoring latitude–longitude and measuring its distance from the patient's home location within a 50 m radius range. Then, using the already measured distances, we generated a mobility sequence with all log-distance values. Our last output consists of binary samples representing use/non-use of the Whatsapp application on the smartphone; at each monitoring time instant, we used its differential data consumption to determine use or non-use of the application. We considered an entire week in seconds as the input domain, normalised to the range [0, 1].

²This smartphone application can be found at https://www.eb2.tech/.

In Figure (2), after training on N = 750 samples, we find that the circadian feature is mainly contained in the first output. During the learning process, this periodicity is transferred to the other outputs through the latent functions, improving the performance of the entire model. Experimentally, we verified that this circadian pattern was not captured in the mobility and social data when training the outputs independently. In Table 1 we can see prediction metrics for multi-output and independent prediction.

London House Price Data: Following the large-scale experiments in Hensman et al. (2013), we obtained the complete register of properties sold in the Greater London County during 2017 (https://www.gov.uk/government/collections/price-paid-data). We preprocessed it to translate all property addresses to latitude–longitude points. For each spatial input, we considered two observations, one binary and one real.
The first one indicates whether or not the property is a flat (zero would mean detached, semi-detached, terraced, etc.), and the second one is the sale price of the house. Our goal is to predict features of houses given a certain location in the London area. We used a training set of N = 20,000 samples, 1,000 samples for test predictions and M = 100 inducing points. Results in Figure (3) show a portion of the entire heterogeneous dataset and its test prediction curves. We obtained a global NLPD score of 16.44 ± 0.01 using the MOGP and 17.31 ± 1.06 in the independent-outputs setting (both × 10⁻²). There is an improvement in performance when training our multi-output model even on large-scale datasets. See Table (2) for scores per output.

High Dimensional Input Data: In our last experiment, we tested our MOGP model on the arrhythmia dataset from the UCI repository (http://archive.ics.uci.edu/ml/). We use a dataset of dimensionality p = 255 and 452 samples, which we divide into training, validation and test sets (more details are in the appendix). We use our model to predict a binary output (gender) and a continuous output (logarithmic age), and we compared against independent Chained GPs per output. The binary output is modelled as a Bernoulli distribution and the continuous one as a Gaussian. We obtained an average NLPD value of 0.0191 for both the multi-output and the independent-output models, with a slight difference in the standard deviation.

5 Conclusions

In this paper we have introduced a novel extension of multi-output Gaussian processes for handling heterogeneous observations. Our model is able to work on large-scale datasets by using sparse approximations within stochastic variational inference. Experimental results show relevant improvements with respect to independent learning of heterogeneous data in different scenarios. In future work, it would be interesting to employ convolutional processes (CPs) as an alternative way to build the multi-output GP prior. Also, instead of hand-coding definitions of heterogeneous likelihoods, we may consider discovering them automatically (Valera and Ghahramani, 2017) as an input block in a pipeline setup of our tool.

Acknowledgments

The authors want to thank Wil Ward for his constructive comments and Juan José Giraldo for his useful advice about SVI experiments and simulations. We also thank Alan Saul and David Ramírez for their recommendations about scalable inference and feedback on the equations. We are grateful to Eero Siivola and Marcelo Hartmann for sharing their Python module for heterogeneous likelihoods and to Francisco J. R. Ruiz for his illuminating help with the stochastic version of the VEM algorithm. Also, we would like to thank Juan José Campaña for his assistance with the London House Price dataset. Pablo Moreno-Muñoz acknowledges the support of his doctoral FPI grant BES2016-077626 and was also supported by the Ministerio de Economía of Spain under the project Macro-ADOBE (TEC2015-67719-P). Antonio Artés-Rodríguez acknowledges the support of projects ADVENTURE (TEC2015-69868-C2-1-R), AID (TEC2014-62194-EXP) and CASI-CAM-CM (S2013/ICE-2845). Mauricio A. Álvarez has been partially financed by the Engineering and Physical Sciences Research Council (EPSRC) Research Projects EP/N014162/1 and EP/R034303/1.
1. What is the main contribution of the paper?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the significance and originality of the paper's content?
4. What are the concerns regarding the experiments and comparisons with other works?
5. Are there any suggestions for improving the paper or its impact?
Review
Review
The paper proposes using different likelihoods for each of the outputs of a multi-output Gaussian process. The paper addresses most areas: a model, an approximation (with variational bounds), stochastic inference for large datasets, hyperparameter tuning, how prediction is done in the approximate model, experiments on synthetic data, and experiments on real-world data. Hence, the paper is very thorough. However, there are a number of shortcomings, which are highlighted below.
While I have not seen any paper prior to this that explicitly places different likelihoods on the outputs of a multi-output Gaussian process, it is not hard to imagine this, especially when variational inference is used, which reduces the (log-)likelihood terms to a summation. The difficulty is to find convincing applications/data for which such a model is useful. I think the Human Behaviour Data is a good fit for this model, except that the presence/absence-at-home output is somewhat redundant and contrived: isn't this output a thresholded version of the distance-from-home output? I also find the choice of the London House Price Data rather inappropriate, as the authors have used the property type as an output when it is best used as an input. For the High Dimensional Input Data, the authors have chosen to predict the gender and age, which are attributes/covariates/inputs in the dataset, while side-stepping the more interesting, important and originally intended task of distinguishing between "the presence and absence of cardiac arrhythmia and to classify it in one of the 16 groups." Instead of these last two datasets, I recommend that the authors concentrate on the Human Behavior Data, for example, using a counts model for the number of Whatsapp messages sent, and/or analyzing the effect of the presence/absence-at-home output. In addition, I also think a covariance function that incorporates periodicity is a better fit than a vanilla RBF kernel for this data.
L65: A better characterization of the method may be to say that it *combines* MOGP with Chained GP, and that it *generalises*
L154 and L157: Here, the concepts of independence in the model and independence in the approximation are not clearly stated.
In section 4, it may be clearer to state that Chained GP with a Bernoulli likelihood is simply the classical binary-classification Gaussian process.
Minor: The two [Valera et al. 2017] references need to be given different labels.
[Quality] This submission is technically sound. However, I feel that the experiments are lacking to fully evaluate the benefits of this model. In addition, the authors should compare with the work of Yang et al. 2009 using the datasets therein (both simulated and the asthma data), or the work of Valera et al. 2017 using the datasets therein.
[Clarity] This submission can be clearer by addressing some of the points above.
[Originality] This paper is a new but rather obvious combination of previous work.
[Significance] I think the work, especially with the released code, will be widely used on datasets of this nature.
[Comments after author rebuttal] The reply has addressed my reservations on "convincing applications and data", and I have revised my score upwards.
NIPS
Title
Heterogeneous Multi-output Gaussian Process Prediction

Abstract
We present a novel extension of multi-output Gaussian processes for handling heterogeneous outputs. We assume that each output has its own likelihood function and use a vector-valued Gaussian process prior to jointly model the parameters in all likelihoods as latent functions. Our multi-output Gaussian process uses a covariance function with a linear model of coregionalisation form. Assuming conditional independence across the underlying latent functions together with an inducing variable framework, we are able to obtain tractable variational bounds amenable to stochastic variational inference. We illustrate the performance of the model on synthetic data and two real datasets: a human behavioral study and a demographic high-dimensional dataset.

1 Introduction
Multi-output Gaussian processes (MOGP) generalise the powerful Gaussian process (GP) predictive model to the vector-valued random field setup (Alvarez et al., 2012). It has been experimentally shown that, by simultaneously exploiting correlations between multiple outputs and across the input space, it is possible to provide better predictions, particularly in scenarios with missing or noisy data (Bonilla et al., 2008; Dai et al., 2017). The main focus in the literature for MOGP has been on the definition of a suitable cross-covariance function between the multiple outputs that allows for the treatment of the outputs as a single GP with a properly defined covariance function (Alvarez et al., 2012). The two classical alternatives for defining such cross-covariance functions are the linear model of coregionalisation (LMC) (Journel and Huijbregts, 1978) and process convolutions (Higdon, 2002). In the former case, each output corresponds to a weighted sum of shared latent random functions. In the latter, each output is modelled as the convolution integral between a smoothing kernel and a latent random function common to all outputs. In both cases, the unknown latent functions follow Gaussian process priors, leading to straightforward expressions for computing the cross-covariance functions among different outputs. More recent alternatives to build valid covariance functions for MOGP include the work by Ulrich et al. (2015) and Parra and Tobar (2017), who build the cross-covariances in the spectral domain.

Regarding the type of outputs that can be modelled, most alternatives focus on multiple-output regression for continuous variables. Traditionally, each output is assumed to follow a Gaussian likelihood where the mean function is given by one of the outputs of the MOGP and the variance in that distribution is treated as an unknown parameter. Bayesian inference is tractable for these models.
In this paper, we are interested in the heterogeneous case for which the outputs are a mix of continuous, categorical, binary or discrete variables with different likelihood functions. There have been few attempts to extend the MOGP to other types of likelihoods. For example, Skolidis and Sanguinetti (2011) use the outputs of a MOGP for jointly modelling several binary classification problems, each of which uses a probit likelihood. They use an intrinsic coregionalisation model (ICM), a particular case of LMC. Posterior inference is performed using expectation-propagation (EP) and variational mean field. Both Chai (2012) and Dezfouli and Bonilla (2015) have also used ICM for modeling a single categorical variable with a multinomial logistic likelihood. The outputs of the ICM model are used as replacements for the linear predictors in the softmax function. Chai (2012) derives a particular variational bound for the marginal likelihood and computes Gaussian posterior distributions; and Dezfouli and Bonilla (2015) introduce a scalable inference procedure that uses a mixture of Gaussians to approximate the posterior distribution using automated variational inference (AVI) (Nguyen and Bonilla, 2014a), which requires sampling from univariate Gaussians. For the single-output GP case, the usual practice for handling non-Gaussian likelihoods has been replacing the parameters or linear predictors of the non-Gaussian likelihood by one or more independent GP priors. Since computing posterior distributions becomes intractable, different alternatives have been offered for approximate inference. Examples include the Gaussian heteroscedastic regression model with variational inference (Lázaro-Gredilla and Titsias, 2011), the Laplace approximation (Vanhatalo et al., 2013), and stochastic variational inference (SVI) (Saul et al., 2016). This last reference uses the same idea for modulating the parameters of a Student-t likelihood, a log-logistic distribution, a beta distribution and a Poisson distribution. The generalised Wishart process (Wilson and Ghahramani, 2011) is another example where the entries of the scale matrix of a Wishart distribution are modulated by independent GPs. Our main contribution in this paper is to provide an extension of multiple-output Gaussian processes for prediction in heterogeneous datasets. The key principle in our model is to use the outputs of a MOGP as the latent functions that modulate the parameters of several likelihood functions, one likelihood function per output. We tackle the model's intractability using variational inference. Furthermore, we use the inducing variable formalism for MOGP introduced by Alvarez and Lawrence (2009) and compute a variational bound suitable for stochastic optimisation as in Hensman et al. (2013). We experimentally provide evidence of the benefits of simultaneously modeling heterogeneous outputs in different applied problems. Our model can be seen as a generalisation of Saul et al. (2016) for multiple correlated output functions of a heterogeneous nature. Our Python implementation follows the spirit of Hadfield et al. (2010), where the user only needs to specify a list of likelihood functions likelihood_list = [Bernoulli(), Poisson(), HetGaussian()], where HetGaussian refers to the heteroscedastic Gaussian distribution, and the number of latent parameter functions per likelihood is assigned automatically.
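To make that interface concrete, here is a minimal, self-contained sketch of how a likelihood list of this kind can determine the total number of latent parameter functions. Only the likelihood_list line is taken from the description above; the stub classes standing in for the real implementation are purely illustrative.

# Minimal sketch of the interface described above. The likelihood_list line
# mirrors the paper's example; these stub classes only record how many latent
# parameter functions (J_d) each likelihood needs and are illustrative.
class Bernoulli:
    n_params = 1   # probability of success, mapped through a sigmoid link

class Poisson:
    n_params = 1   # rate, mapped through an exponential link

class HetGaussian:
    n_params = 2   # mean (identity link) and variance (exponential link)

likelihood_list = [Bernoulli(), Poisson(), HetGaussian()]
J = sum(lik.n_params for lik in likelihood_list)
print(J)  # J = 4 latent parameter functions in total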
2 Heterogeneous Multi-output Gaussian process

Consider a set of output functions $\mathcal{Y} = \{y_d(\mathbf{x})\}_{d=1}^{D}$, with $\mathbf{x} \in \mathbb{R}^p$, that we want to jointly model using Gaussian processes. Traditionally, the literature has considered the case for which each $y_d(\mathbf{x})$ is continuous and Gaussian distributed. In this paper, we are interested in the heterogeneous case for which the outputs in $\mathcal{Y}$ are a mix of continuous, categorical, binary or discrete variables with several different distributions. In particular, we will assume that the distribution over $y_d(\mathbf{x})$ is completely specified by a set of parameters $\boldsymbol{\theta}_d(\mathbf{x}) \in \mathcal{X}^{J_d}$, where we have a generic domain $\mathcal{X}$ for the parameters and $J_d$ is the number of parameters that define the distribution. Each parameter $\theta_{d,j}(\mathbf{x}) \in \boldsymbol{\theta}_d(\mathbf{x})$ is a non-linear transformation of a Gaussian process prior $f_{d,j}(\mathbf{x})$, that is, $\theta_{d,j}(\mathbf{x}) = g_{d,j}(f_{d,j}(\mathbf{x}))$, where $g_{d,j}(\cdot)$ is a deterministic function that maps the GP output to the appropriate domain for the parameter $\theta_{d,j}$. To make the notation concrete, let us assume a heterogeneous multiple-output problem for which $D = 3$. Assume that output $y_1(\mathbf{x})$ is binary and that it will be modelled using a Bernoulli distribution. The Bernoulli distribution uses a single parameter (the probability of success), $J_1 = 1$, restricted to values in the range $[0, 1]$. This means that $\theta_1(\mathbf{x}) = \theta_{1,1}(\mathbf{x}) = g_{1,1}(f_{1,1}(\mathbf{x}))$, where $g_{1,1}(\cdot)$ could be modelled using the logistic sigmoid function $\sigma(z) = 1/(1 + \exp(-z))$, which maps $\sigma: \mathbb{R} \to [0, 1]$. Assume now that the second output $y_2(\mathbf{x})$ corresponds to a count variable that can take values $y_2(\mathbf{x}) \in \mathbb{N} \cup \{0\}$. The count variable can be modelled using a Poisson distribution with a single parameter (the rate), $J_2 = 1$, restricted to the positive reals. This means that $\theta_2(\mathbf{x}) = \theta_{2,1}(\mathbf{x}) = g_{2,1}(f_{2,1}(\mathbf{x}))$, where $g_{2,1}(\cdot)$ could be modelled as an exponential function $g_{2,1}(\cdot) = \exp(\cdot)$ to ensure strictly positive values for the parameter. Finally, $y_3(\mathbf{x})$ is a continuous variable with heteroscedastic noise. It can be modelled using a Gaussian distribution where both the mean and the variance are functions of $\mathbf{x}$. This means that $\boldsymbol{\theta}_3(\mathbf{x}) = [\theta_{3,1}(\mathbf{x})\ \theta_{3,2}(\mathbf{x})]^\top = [g_{3,1}(f_{3,1}(\mathbf{x}))\ g_{3,2}(f_{3,2}(\mathbf{x}))]^\top$, where the first function is used to model the mean of the Gaussian and the second function is used to model the variance. Therefore, we can assume that $g_{3,1}(\cdot)$ is the identity function and $g_{3,2}(\cdot)$ is a function that ensures that the variance takes strictly positive values, e.g. the exponential function. Let us define a vector-valued function $\mathbf{y}(\mathbf{x}) = [y_1(\mathbf{x}), y_2(\mathbf{x}), \cdots, y_D(\mathbf{x})]^\top$. We assume that the outputs are conditionally independent given the vector of parameters $\boldsymbol{\theta}(\mathbf{x}) = [\boldsymbol{\theta}_1(\mathbf{x}), \boldsymbol{\theta}_2(\mathbf{x}), \cdots, \boldsymbol{\theta}_D(\mathbf{x})]^\top$, defined by specifying the vector of latent functions $\mathbf{f}(\mathbf{x}) = [f_{1,1}(\mathbf{x}), f_{1,2}(\mathbf{x}), \cdots, f_{1,J_1}(\mathbf{x}), f_{2,1}(\mathbf{x}), f_{2,2}(\mathbf{x}), \cdots, f_{D,J_D}(\mathbf{x})]^\top \in \mathbb{R}^{J \times 1}$, where $J = \sum_{d=1}^{D} J_d$:

$$p(\mathbf{y}(\mathbf{x})|\boldsymbol{\theta}(\mathbf{x})) = p(\mathbf{y}(\mathbf{x})|\mathbf{f}(\mathbf{x})) = \prod_{d=1}^{D} p(y_d(\mathbf{x})|\boldsymbol{\theta}_d(\mathbf{x})) = \prod_{d=1}^{D} p(y_d(\mathbf{x})|\tilde{\mathbf{f}}_d(\mathbf{x})), \quad (1)$$

where we have defined $\tilde{\mathbf{f}}_d(\mathbf{x}) = [f_{d,1}(\mathbf{x}), \cdots, f_{d,J_d}(\mathbf{x})]^\top \in \mathbb{R}^{J_d \times 1}$, the set of latent functions that specify the parameters in $\boldsymbol{\theta}_d(\mathbf{x})$. Notice that $J \geq D$; that is, there is not always a one-to-one map from $\mathbf{f}(\mathbf{x})$ to $\mathbf{y}(\mathbf{x})$. Most previous work has assumed that $D = 1$ and that the corresponding elements in $\boldsymbol{\theta}_d(\mathbf{x})$, that is, the latent functions in $\tilde{\mathbf{f}}_1(\mathbf{x}) = [f_{1,1}(\mathbf{x}), \cdots, f_{1,J_1}(\mathbf{x})]^\top$, are drawn from independent Gaussian processes. Important exceptions are Chai (2012) and Dezfouli and Bonilla (2015), who assumed a categorical variable $y_1(\mathbf{x})$, where the elements in $\tilde{\mathbf{f}}_1(\mathbf{x})$ were drawn from an intrinsic coregionalisation model.
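As a small numerical companion to the three-output example above, the following sketch applies the stated link functions to the latent parameter functions at a single input; the latent values are made-up illustrative draws.

# Link functions g_{d,j} for the D = 3 example: sigmoid for the Bernoulli
# probability, exp for the Poisson rate, identity/exp for the heteroscedastic
# Gaussian mean and variance. The latent values below are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

f_11, f_21, f_31, f_32 = 0.3, -1.2, 0.5, -0.7   # f_{d,j}(x) at one input x

theta_1 = sigmoid(f_11)              # Bernoulli success probability in [0, 1]
theta_2 = np.exp(f_21)               # Poisson rate in (0, inf)
theta_3 = (f_31, np.exp(f_32))       # Gaussian mean and (positive) variance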
In what follows, we generalise these models for $D > 1$ and potentially heterogeneous outputs $y_d(\mathbf{x})$. We will use the word "output" to refer to the elements $y_d(\mathbf{x})$ and "latent parameter function" (LPF) or "parameter function" (PF) to refer to $f_{d,j}(\mathbf{x})$.

2.1 A multi-parameter GP prior

Our main departure from previous work is in the modeling of $\mathbf{f}(\mathbf{x})$ using a multi-parameter Gaussian process that allows correlations between the parameter functions $f_{d,j}(\mathbf{x})$. We will use a linear model of coregionalisation type of covariance function for expressing correlations between functions $f_{d,j}(\mathbf{x})$ and $f_{d',j'}(\mathbf{x}')$. The particular construction is as follows. Consider an additional set of independent latent functions $\mathcal{U} = \{u_q(\mathbf{x})\}_{q=1}^{Q}$ that will be linearly combined to produce the $J$ LPFs $\{f_{d,j}(\mathbf{x})\}_{j=1,d=1}^{J_d,D}$. Each latent function $u_q(\mathbf{x})$ is assumed to be drawn from an independent GP prior such that $u_q(\cdot) \sim \mathcal{GP}(0, k_q(\cdot,\cdot))$, where $k_q$ can be any valid covariance function, and the zero mean is assumed for simplicity. Each latent parameter function $f_{d,j}(\mathbf{x})$ is then given as

$$f_{d,j}(\mathbf{x}) = \sum_{q=1}^{Q} \sum_{i=1}^{R_q} a^i_{d,j,q}\, u^i_q(\mathbf{x}), \quad (2)$$

where the $u^i_q(\mathbf{x})$ are IID samples from $u_q(\cdot) \sim \mathcal{GP}(0, k_q(\cdot,\cdot))$ and $a^i_{d,j,q} \in \mathbb{R}$. The mean function for $f_{d,j}(\mathbf{x})$ is zero and the cross-covariance function $k_{f_{d,j} f_{d',j'}}(\mathbf{x},\mathbf{x}') = \mathrm{cov}[f_{d,j}(\mathbf{x}), f_{d',j'}(\mathbf{x}')]$ is equal to $\sum_{q=1}^{Q} b^q_{(d,j),(d',j')} k_q(\mathbf{x},\mathbf{x}')$, where $b^q_{(d,j),(d',j')} = \sum_{i=1}^{R_q} a^i_{d,j,q} a^i_{d',j',q}$. Let us define $\mathbf{X} = \{\mathbf{x}_n\}_{n=1}^{N} \in \mathbb{R}^{N \times p}$ as a set of common input vectors for all outputs $y_d(\mathbf{x})$, though the presentation could be extended to the case of a different set of inputs per output. Let us also define $\mathbf{f}_{d,j} = [f_{d,j}(\mathbf{x}_1), \cdots, f_{d,j}(\mathbf{x}_N)]^\top \in \mathbb{R}^{N \times 1}$; $\tilde{\mathbf{f}}_d = [\mathbf{f}_{d,1}^\top \cdots \mathbf{f}_{d,J_d}^\top]^\top \in \mathbb{R}^{J_d N \times 1}$; and $\mathbf{f} = [\tilde{\mathbf{f}}_1^\top \cdots \tilde{\mathbf{f}}_D^\top]^\top \in \mathbb{R}^{JN \times 1}$. The generative model for the heterogeneous MOGP is as follows. We sample $\mathbf{f} \sim \mathcal{N}(\mathbf{0}, \mathbf{K})$, where $\mathbf{K}$ is a block-wise matrix with blocks given by $\{\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d',j'}}\}_{d=1,d'=1,j=1,j'=1}^{D,D,J_d,J_{d'}}$. In turn, the elements in $\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d',j'}}$ are given by $k_{f_{d,j}f_{d',j'}}(\mathbf{x}_n,\mathbf{x}_m)$, with $\mathbf{x}_n,\mathbf{x}_m \in \mathbf{X}$. For the particular case of equal inputs $\mathbf{X}$ for all LPFs, $\mathbf{K}$ can also be expressed as a sum of Kronecker products $\mathbf{K} = \sum_{q=1}^{Q} \mathbf{A}_q\mathbf{A}_q^\top \otimes \mathbf{K}_q = \sum_{q=1}^{Q} \mathbf{B}_q \otimes \mathbf{K}_q$, where $\mathbf{A}_q \in \mathbb{R}^{J \times R_q}$ has entries $\{a^i_{d,j,q}\}_{d=1,j=1,i=1}^{D,J_d,R_q}$ and $\mathbf{B}_q$ has entries $\{b^q_{(d,j),(d',j')}\}_{d=1,d'=1,j=1,j'=1}^{D,D,J_d,J_{d'}}$. The matrix $\mathbf{K}_q \in \mathbb{R}^{N \times N}$ has entries given by $k_q(\mathbf{x}_n,\mathbf{x}_m)$ for $\mathbf{x}_n,\mathbf{x}_m \in \mathbf{X}$. The matrices $\mathbf{B}_q \in \mathbb{R}^{J \times J}$ are known as the coregionalisation matrices. Once we obtain the sample for $\mathbf{f}$, we evaluate the vector of parameters $\boldsymbol{\theta} = [\boldsymbol{\theta}_1^\top \cdots \boldsymbol{\theta}_D^\top]^\top$, where $\boldsymbol{\theta}_d = \tilde{\mathbf{f}}_d$. Having specified $\boldsymbol{\theta}$, we can generate samples for the output vector $\mathbf{y} = [\mathbf{y}_1^\top \cdots \mathbf{y}_D^\top]^\top \in \mathcal{X}^{DN \times 1}$, where the elements in $\mathbf{y}_d$ are obtained by sampling from the conditional distributions $p(y_d(\mathbf{x})|\boldsymbol{\theta}_d(\mathbf{x}))$. To keep the notation uncluttered, we will assume from now on that $R_q = 1$, meaning that $\mathbf{A}_q = \mathbf{a}_q \in \mathbb{R}^{J \times 1}$ and the coregionalisation matrices are rank-one. In the literature such a model is known as the semiparametric latent factor model (Teh et al., 2005).

2.2 Scalable variational inference

Given a heterogeneous dataset $\mathcal{D} = \{\mathbf{X}, \mathbf{y}\}$, we would like to compute the posterior distribution $p(\mathbf{f}|\mathcal{D})$, which is intractable in our model. In what follows, we use similar ideas to Alvarez and Lawrence (2009); Álvarez et al. (2010), who introduce the inducing variable formalism for computational efficiency in MOGP. However, instead of marginalising the latent functions $\mathcal{U}$ to obtain a variational lower bound, we keep their presence in a way that allows us to apply stochastic variational inference as in Hensman et al. (2013); Saul et al. (2016).
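Before moving to inference, here is a brief sketch of the generative construction just described, assuming $R_q = 1$ as in the rest of the paper. The RBF kernel, the sizes, and the random coefficients $\mathbf{a}_q$ are illustrative choices, not the paper's settings.

# Sketch of the LMC prior (R_q = 1): K = sum_q (a_q a_q^T) kron K_q, followed
# by a joint draw of the J latent parameter functions at N inputs.
import numpy as np

def rbf(X, lengthscale=0.2):
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, J, Q = 50, 4, 3
X = np.sort(rng.uniform(0, 1, N))                 # scalar inputs for simplicity
A = [rng.normal(size=(J, 1)) for _ in range(Q)]   # rank-one a_q per latent function
K = sum(np.kron(a @ a.T, rbf(X)) for a in A)      # (JN, JN) prior covariance
L = np.linalg.cholesky(K + 1e-8 * np.eye(J * N))  # jitter for numerical stability
f = L @ rng.normal(size=J * N)                    # one sample of f ~ N(0, K)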
2.2.1 Inducing variables for MOGP

A key idea to reduce computational complexity in Gaussian process models is to introduce auxiliary or inducing variables. These variables have already been used in the context of MOGP (Alvarez and Lawrence, 2009; Álvarez et al., 2010). A subtle difference from the single-output case is that the inducing variables are not taken from the same latent process, say $f_1(\mathbf{x})$, but from the latent processes $\mathcal{U}$ also used to build the model for multiple outputs. We follow the same formalism here. We start by defining the set of $M$ inducing variables per latent function $u_q(\mathbf{x})$ as $\mathbf{u}_q = [u_q(\mathbf{z}_1), \cdots, u_q(\mathbf{z}_M)]^\top$, evaluated at a set of inducing inputs $\mathbf{Z} = \{\mathbf{z}_m\}_{m=1}^{M} \in \mathbb{R}^{M \times p}$. We also define $\mathbf{u} = [\mathbf{u}_1^\top, \cdots, \mathbf{u}_Q^\top]^\top \in \mathbb{R}^{QM \times 1}$. For simplicity in the exposition, we have assumed that all the inducing variables, for all $q$, are evaluated at the same set of inputs $\mathbf{Z}$. Instead of marginalising $\{u_q(\mathbf{x})\}_{q=1}^{Q}$ from the model in (2), we explicitly use the joint Gaussian prior $p(\mathbf{f},\mathbf{u}) = p(\mathbf{f}|\mathbf{u})p(\mathbf{u})$. Due to the assumed independence of the latent functions $u_q(\mathbf{x})$, the distribution $p(\mathbf{u})$ factorises as $p(\mathbf{u}) = \prod_{q=1}^{Q} p(\mathbf{u}_q)$, with $\mathbf{u}_q \sim \mathcal{N}(\mathbf{0}, \mathbf{K}_q)$, where $\mathbf{K}_q \in \mathbb{R}^{M \times M}$ has entries $k_q(\mathbf{z}_i,\mathbf{z}_j)$ with $\mathbf{z}_i,\mathbf{z}_j \in \mathbf{Z}$. Notice that the dimensions of this $\mathbf{K}_q$ are different from those of $\mathbf{K}_q$ in Section 2.1. The LPFs $\mathbf{f}_{d,j}$ are conditionally independent given $\mathbf{u}$, so we can write the conditional distribution $p(\mathbf{f}|\mathbf{u})$ as

$$p(\mathbf{f}|\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}|\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} \mathcal{N}\big(\mathbf{f}_{d,j}\,\big|\,\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{u},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} - \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^\top\big),$$

where $\mathbf{K}_{\mathbf{uu}} \in \mathbb{R}^{QM \times QM}$ is a block-diagonal matrix with blocks given by $\mathbf{K}_q$, and $\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}} \in \mathbb{R}^{N \times QM}$ is the cross-covariance matrix computed from the cross-covariances between $f_{d,j}(\mathbf{x})$ and $u_q(\mathbf{z})$. The expression for this cross-covariance function can be obtained from (2), leading to $k_{f_{d,j}u_q}(\mathbf{x},\mathbf{z}) = a_{d,j,q}k_q(\mathbf{x},\mathbf{z})$. This form for the cross-covariance between the LPF $f_{d,j}(\mathbf{x})$ and $u_q(\mathbf{z})$ is a key difference between the inducing variable methods for the single-output GP case and the MOGP case.

2.2.2 Variational Bounds

Exact posterior inference is intractable in our model due to the presence of an arbitrary number of non-Gaussian likelihoods. We use variational inference to compute a lower bound $\mathcal{L}$ for the marginal log-likelihood $\log p(\mathbf{y})$ and to approximate the posterior distribution $p(\mathbf{f},\mathbf{u}|\mathcal{D})$. Following Álvarez et al. (2010), the posterior over the LPFs $\mathbf{f}$ and the latent functions $\mathbf{u}$ can be approximated as

$$p(\mathbf{f},\mathbf{u}|\mathbf{y},\mathbf{X}) \approx q(\mathbf{f},\mathbf{u}) = p(\mathbf{f}|\mathbf{u})q(\mathbf{u}) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}|\mathbf{u}) \prod_{q=1}^{Q} q(\mathbf{u}_q),$$

where $q(\mathbf{u}_q) = \mathcal{N}(\mathbf{u}_q|\boldsymbol{\mu}_{\mathbf{u}_q}, \mathbf{S}_{\mathbf{u}_q})$ are Gaussian variational distributions whose parameters $\{\boldsymbol{\mu}_{\mathbf{u}_q}, \mathbf{S}_{\mathbf{u}_q}\}_{q=1}^{Q}$ must be optimised. Building on previous work by Saul et al. (2016); Hensman et al. (2015), we derive a lower bound that accepts any log-likelihood function that can be modulated by the LPFs $\mathbf{f}$. The lower bound $\mathcal{L}$ for $\log p(\mathbf{y})$ is obtained as follows:

$$\log p(\mathbf{y}) = \log \int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\mathbf{u})p(\mathbf{u})\,d\mathbf{f}\,d\mathbf{u} \geq \int q(\mathbf{f},\mathbf{u}) \log \frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\mathbf{u})p(\mathbf{u})}{q(\mathbf{f},\mathbf{u})}\,d\mathbf{f}\,d\mathbf{u} = \mathcal{L}.$$

We can further simplify $\mathcal{L}$ to obtain

$$\mathcal{L} = \int\!\!\int p(\mathbf{f}|\mathbf{u})\,q(\mathbf{u}) \log p(\mathbf{y}|\mathbf{f})\,d\mathbf{f}\,d\mathbf{u} - \sum_{q=1}^{Q} \mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big) = \int\!\!\int \prod_{d=1}^{D}\prod_{j=1}^{J_d} p(\mathbf{f}_{d,j}|\mathbf{u})\,q(\mathbf{u}) \log p(\mathbf{y}|\mathbf{f})\,d\mathbf{u}\,d\mathbf{f} - \sum_{q=1}^{Q} \mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big),$$

where KL is the Kullback-Leibler divergence. Moreover, the approximate marginal posterior for $\mathbf{f}_{d,j}$ is $q(\mathbf{f}_{d,j}) = \int p(\mathbf{f}_{d,j}|\mathbf{u})q(\mathbf{u})\,d\mathbf{u}$, leading to

$$q(\mathbf{f}_{d,j}) = \mathcal{N}\big(\mathbf{f}_{d,j}\,\big|\,\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{uu}}^{-1}\boldsymbol{\mu}_{\mathbf{u}},\ \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{f}_{d,j}} + \mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}\mathbf{K}_{\mathbf{uu}}^{-1}(\mathbf{S}_{\mathbf{u}} - \mathbf{K}_{\mathbf{uu}})\mathbf{K}_{\mathbf{uu}}^{-1}\mathbf{K}_{\mathbf{f}_{d,j}\mathbf{u}}^\top\big),$$

where $\boldsymbol{\mu}_{\mathbf{u}} = [\boldsymbol{\mu}_{\mathbf{u}_1}^\top, \cdots, \boldsymbol{\mu}_{\mathbf{u}_Q}^\top]^\top$ and $\mathbf{S}_{\mathbf{u}}$ is a block-diagonal matrix with blocks given by $\mathbf{S}_{\mathbf{u}_q}$.
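As a computational sketch of the closed-form marginal $q(\mathbf{f}_{d,j})$ above (the kernel-matrix arguments are stand-ins for the blocks defined in the text):

# Sketch: mean and covariance of q(f_dj) given q(u) = N(mu_u, S_u), following
# the closed-form expression above. The kernel-matrix arguments are stand-ins.
import numpy as np

def q_f_moments(K_fu, K_uu, K_ff, mu_u, S_u):
    A = np.linalg.solve(K_uu, K_fu.T).T      # A = K_fu K_uu^{-1} (K_uu symmetric)
    mean = A @ mu_u
    cov = K_ff + A @ (S_u - K_uu) @ A.T
    return mean, cov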
The expression for $\log p(\mathbf{y}|\mathbf{f})$ factorises according to (1): $\log p(\mathbf{y}|\mathbf{f}) = \sum_{d=1}^{D} \log p(\mathbf{y}_d|\tilde{\mathbf{f}}_d) = \sum_{d=1}^{D} \log p(\mathbf{y}_d|\mathbf{f}_{d,1}, \cdots, \mathbf{f}_{d,J_d})$. Using this expression for $\log p(\mathbf{y}|\mathbf{f})$ leads to the following expression for the bound:

$$\mathcal{L} = \sum_{d=1}^{D} \mathbb{E}_{q(\mathbf{f}_{d,1})\cdots q(\mathbf{f}_{d,J_d})}\big[\log p(\mathbf{y}_d|\mathbf{f}_{d,1}, \cdots, \mathbf{f}_{d,J_d})\big] - \sum_{q=1}^{Q} \mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

When $D = 1$ in the expression above, we recover the bound obtained in Saul et al. (2016). To maximise this lower bound, we need to find the optimal variational parameters $\{\boldsymbol{\mu}_{\mathbf{u}_q}\}_{q=1}^{Q}$ and $\{\mathbf{S}_{\mathbf{u}_q}\}_{q=1}^{Q}$. We represent each matrix $\mathbf{S}_{\mathbf{u}_q}$ as $\mathbf{S}_{\mathbf{u}_q} = \mathbf{L}_{\mathbf{u}_q}\mathbf{L}_{\mathbf{u}_q}^\top$ and, to ensure positive definiteness of $\mathbf{S}_{\mathbf{u}_q}$, we estimate $\mathbf{L}_{\mathbf{u}_q}$ instead of $\mathbf{S}_{\mathbf{u}_q}$. Computation of the posterior distributions over $\mathbf{f}_{d,j}$ can be done analytically. There is still an intractability issue in the variational expectations of the log-likelihood functions. Since we construct these bounds to accept any possible data type, we need a general way to solve these integrals. One obvious solution is to apply Monte Carlo methods; however, it would be slow to both maximise the lower bound and update the variational parameters if we had to sample thousands of times (to approximate the expectations) at each iteration. Instead, we address this problem by using Gauss-Hermite quadrature, as in Hensman et al. (2015); Saul et al. (2016).

Stochastic Variational Inference. The conditional expectations in the bound above also factorise across data observations, so that we can express the bound as

$$\mathcal{L} = \sum_{d=1}^{D}\sum_{n=1}^{N} \mathbb{E}_{q(f_{d,1}(\mathbf{x}_n))\cdots q(f_{d,J_d}(\mathbf{x}_n))}\big[\log p(y_d(\mathbf{x}_n)|f_{d,1}(\mathbf{x}_n), \cdots, f_{d,J_d}(\mathbf{x}_n))\big] - \sum_{q=1}^{Q} \mathrm{KL}\big(q(\mathbf{u}_q)\,\|\,p(\mathbf{u}_q)\big).$$

This functional form allows the use of mini-batches of smaller sets of training samples, performing the optimization using noisy estimates of the global objective gradient in a similar fashion to Hoffman et al. (2013); Hensman et al. (2013, 2015); Saul et al. (2016). This scalable bound makes our multi-output model applicable to large heterogeneous datasets. Notice that the computational complexity is dominated by the inversion of $\mathbf{K}_{\mathbf{uu}}$, with a cost of $\mathcal{O}(QM^3)$, and by products like $\mathbf{K}_{\mathbf{fu}}$, with a cost of $\mathcal{O}(JNQM^2)$.

Hyperparameter learning. Hyperparameters in our model include $\mathbf{Z}$, $\{\mathbf{B}_q\}_{q=1}^{Q}$, and $\{\gamma_q\}_{q=1}^{Q}$, the hyperparameters associated with the covariance functions $\{k_q(\cdot,\cdot)\}_{q=1}^{Q}$. Since the variational distribution $q(\mathbf{u})$ is sensitive to changes in the hyperparameters, we optimize the variational parameters of $q(\mathbf{u})$ and the hyperparameters using a variational EM algorithm (Beal, 2003) when employing the full dataset, or its stochastic version when using mini-batches (Hoffman et al., 2013).

2.3 Predictive distribution

Consider a set of test inputs $\mathbf{X}_*$. Assuming that $p(\mathbf{u}|\mathbf{y}) \approx q(\mathbf{u})$, the predictive distribution $p(\mathbf{y}_*)$ can be approximated as $p(\mathbf{y}_*|\mathbf{y}) \approx \int p(\mathbf{y}_*|\mathbf{f}_*)q(\mathbf{f}_*)\,d\mathbf{f}_*$, where $q(\mathbf{f}_*) = \int p(\mathbf{f}_*|\mathbf{u})q(\mathbf{u})\,d\mathbf{u}$. Computing the expression $q(\mathbf{f}_*) = \prod_{d=1}^{D}\prod_{j=1}^{J_d} q(\mathbf{f}_{d,j,*})$ involves evaluating $\mathbf{K}_{\mathbf{f}_{d,j,*}\mathbf{u}}$ at $\mathbf{X}_*$. As in the case of the lower bound, the integral above is intractable for non-Gaussian likelihoods $p(\mathbf{y}_*|\mathbf{f}_*)$. We can once again make use of Monte Carlo integration or quadrature to approximate the integral. Simpler integration problems are obtained if we are only interested in the predictive mean, $\mathbb{E}[\mathbf{y}_*]$, and the predictive variance, $\mathrm{var}[\mathbf{y}_*]$.

3 Related Work

The works most closely related to ours are Skolidis and Sanguinetti (2011), Chai (2012), Dezfouli and Bonilla (2015) and Saul et al. (2016). We differ from Skolidis and Sanguinetti (2011) because we allow more general heterogeneous outputs beyond the specific case of several binary classification problems.
Our inference method also scales to large datasets. The works by Chai (2012) and Dezfouli and Bonilla (2015) do use a MOGP, but they only handle a single categorical variable. Our inference approach scales when compared to the one in Chai (2012) and is fundamentally different from the one in Dezfouli and Bonilla (2015), since we do not use AVI. Our model is also different from Saul et al. (2016), since we allow for several dependent outputs, $D > 1$, and our scalable approach is more akin to applying SVI to the inducing variable approach of Álvarez et al. (2010). More recently, Vanhatalo et al. (2018) used additive multi-output GP models to account for interdependencies between count and binary observations. They use the Laplace approximation for approximating the posterior distribution. Similarly, Pourmohamad and Lee (2016) perform combined regression and binary classification with a multi-output GP learned via sequential Monte Carlo. Nguyen and Bonilla (2014b) also use the same idea from Álvarez et al. (2010) to provide scalability for multiple-output GP models, conditioning the latent parameter functions $f_{d,j}(\mathbf{x})$ on the inducing variables $\mathbf{u}$, but they only consider the multivariate regression case. It is also important to mention that multi-output Gaussian processes have been considered as alternative models for multi-task learning (Alvarez et al., 2012). Multi-task learning also addresses multiple prediction problems together within a single inference framework. Most previous work in this area has focused on problems where all tasks are exclusively regression or classification problems. When tasks are heterogeneous, the common practice is to introduce a regularizer per data type in a global cost function (Zhang et al., 2012; Han et al., 2017). Usually, these cost functions are composed of additive terms, each one referring to a single task, while the correlation assumption among heterogeneous likelihoods is addressed by mixing regularizers in a global penalty term (Li et al., 2014) or by forcing different tasks to share a common mean (Ngufor et al., 2015). Another natural way of treating both continuous and discrete tasks is to assume that all of them share a common input set that varies its influence on each output. Then, by sharing a joint sparsity pattern, it is possible to optimize a global cost function with a single regularization parameter on the level of sparsity (Yang et al., 2009). There have also been efforts to model heterogeneous data outside the label of multi-task learning, including mixed graphical models (Yang et al., 2014), where varied types of data are assumed to be combinations of exponential families, and latent feature models (Valera et al., 2017), with heterogeneous observations being mappings of a set of Gaussian distributed variables.

4 Experiments

In this section, we evaluate our model in different heterogeneous scenarios (the code is publicly available in the repository github.com/pmorenoz/HetMOGP/). To demonstrate its performance in terms of multi-output learning, prediction and scalability, we have explored several applications with both synthetic and real data. For all the experiments, we consider an RBF kernel for each covariance function $k_q(\cdot,\cdot)$ and we set $Q = 3$. For standard optimization we used the L-BFGS-B algorithm. When SVI was needed, we used ADADELTA, included in the climin library, and a mini-batch size of 500 samples for every output. All performance metrics are given in terms of the negative log-predictive density (NLPD), calculated on a test subset and applicable to any type of likelihood.
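Since NLPD is the metric used throughout and must work for any likelihood, here is a small sketch of estimating it by Monte Carlo over the Gaussian marginals $q(f_*)$ (a sampling alternative to the quadrature used in the paper; all names are illustrative):

# Sketch: Monte Carlo estimate of the negative log-predictive density (NLPD)
# for one output. log_lik(y, f) is any likelihood's log-density evaluated at
# latent samples f; for several latent functions per output, sample each one.
import numpy as np

def nlpd(y_test, f_mean, f_var, log_lik, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    f = f_mean + np.sqrt(f_var) * rng.normal(size=(n_samples, len(y_test)))
    ll = log_lik(y_test, f)                                  # (n_samples, N_test)
    log_pred = np.logaddexp.reduce(ll, axis=0) - np.log(n_samples)
    return -np.mean(log_pred)

# Example: Bernoulli log-likelihood with a sigmoid link (softplus form).
bern_ll = lambda y, f: -(y * np.logaddexp(0, -f) + (1 - y) * np.logaddexp(0, f))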
Further details about the experiments are included in the appendix. Missing Gap Prediction: In our first experiment, we evaluate whether our model is able to predict observations in one output using training information from another one. We set up a toy problem which consists of $D = 2$ heterogeneous outputs, where the first function $y_1(x)$ is real-valued and $y_2(x)$ is binary. Assuming that the heterogeneous outputs do not share a common input set, we observe $N_1 = 600$ and $N_2 = 500$ samples respectively. All inputs are uniformly distributed in the input range $[0, 1]$, and we generate a gap only in the set of binary observations by removing $N_{test} = 150$ samples in the interval $[0.7, 0.9]$. Using the remaining points from both outputs for training, we fitted our MOGP model. In Figures 1(a,b) we can see how uncertainty in the binary test predictions is reduced by learning from the first output. In contrast, Figure 1(c) shows a wider variance in the predicted parameter when it is trained independently. For the multi-output case we obtained an NLPD value on test data of 32.5 ± 0.2 (×10−2), while in the single-output case the NLPD was 40.51 ± 0.08 (×10−2). Human Behavior Data: In this experiment, we are interested in modeling human behavior in psychiatric patients. Previous work by Soleimani et al. (2018) already explores the application of scalable MOGP models to healthcare for reliable predictions from multivariate time-series. Our data comes from a medical study that asked patients to download a monitoring app (EB2, available at https://www.eb2.tech/) on their smartphones. The system captures information about mobility, communication metadata and interactions in social media. The work has a particular interest in mental health, since shifts or misalignments in the circadian feature of human behavior (24h cycles) can be interpreted as early signs of crisis. In particular, we obtained a binary indicator variable of presence/absence at home by monitoring latitude-longitude and measuring its distance from the patient's home location within a 50m radius range. Then, using the already measured distances, we generated a mobility sequence with all log-distance values. Our last output consists of binary samples representing use/non-use of the Whatsapp application on the smartphone. At each monitoring time instant, we used its differential data consumption to determine use or non-use of the application. We considered an entire week in seconds as the input domain, normalized to the range $[0, 1]$. In Figure 2, after training on $N = 750$ samples, we find that the circadian feature is mainly contained in the first output. During the learning process, this periodicity is transferred to the other outputs through the latent functions, improving the performance of the entire model. Experimentally, we verified that this circadian pattern was not captured in the mobility and social data when training the outputs independently. In Table 1 we show prediction metrics for multi-output and independent prediction. London House Price Data: Based on the large scale experiments in Hensman et al. (2013), we obtained the complete register of properties sold in the Greater London County during 2017 (https://www.gov.uk/government/collections/price-paid-data). We preprocessed it to translate all property addresses to latitude-longitude points. For each spatial input, we considered two observations, one binary and one real.
The first one indicates whether or not the property is a flat (zero meaning detached, semi-detached, terraced, etc.), and the second one is the sale price of the house. Our goal is to predict features of houses given a certain location in the London area. We used a training set of N = 20,000 samples, 1,000 samples for test predictions, and M = 100 inducing points. Results in Figure 3 show a portion of the entire heterogeneous dataset and its test prediction curves. We obtained a global NLPD score of 16.44 ± 0.01 using the MOGP and 17.31 ± 1.06 in the independent-outputs setting (both ×10−2). There is an improvement in performance when training our multi-output model even on large scale datasets. See Table 2 for scores per output. High Dimensional Input Data: In our last experiment, we tested our MOGP model on the arrhythmia dataset in the UCI repository (http://archive.ics.uci.edu/ml/). We use a dataset of dimensionality p = 255 and 452 samples that we divide into training, validation and test sets (more details are in the appendix). We use our model for predicting a binary output (gender) and a continuous output (logarithmic age), and we compared against independent Chained GPs per output. The binary output is modelled as a Bernoulli distribution and the continuous one as a Gaussian. We obtained an average NLPD value of 0.0191 for both the multi-output and independent-output models, with a slight difference in the standard deviation.

5 Conclusions

In this paper we have introduced a novel extension of multi-output Gaussian processes for handling heterogeneous observations. Our model is able to work on large scale datasets by using sparse approximations within stochastic variational inference. Experimental results show relevant improvements with respect to independent learning of heterogeneous data in different scenarios. In future work, it would be interesting to employ convolutional processes (CPs) as an alternative way to build the multi-output GP prior. Also, instead of hand-coding definitions of heterogeneous likelihoods, we may consider discovering them automatically (Valera and Ghahramani, 2017) as an input block in a pipeline setup of our tool. Acknowledgments The authors want to thank Wil Ward for his constructive comments and Juan José Giraldo for his useful advice about SVI experiments and simulations. We also thank Alan Saul and David Ramírez for their recommendations about scalable inference and feedback on the equations. We are grateful to Eero Siivola and Marcelo Hartmann for sharing their Python module for heterogeneous likelihoods, and to Francisco J. R. Ruiz for his illuminating help with the stochastic version of the VEM algorithm. Also, we would like to thank Juan José Campaña for his assistance with the London House Price dataset. Pablo Moreno-Muñoz acknowledges the support of his doctoral FPI grant BES2016-077626 and was also supported by Ministerio de Economía of Spain under the project Macro-ADOBE (TEC2015-67719-P). Antonio Artés-Rodríguez acknowledges the support of projects ADVENTURE (TEC2015-69868-C2-1-R), AID (TEC2014-62194-EXP) and CASI-CAM-CM (S2013/ICE-2845). Mauricio A. Álvarez has been partially financed by the Engineering and Physical Sciences Research Council (EPSRC) Research Projects EP/N014162/1 and EP/R034303/1.
1. What is the main contribution of the paper, and how does it extend previous work on multi-output Gaussian processes? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to handle heterogeneous outputs and learn correlations between parameters? 3. How does the paper address the issue of scalability for large datasets, and what are the tradeoffs involved in the proposed approach? 4. Are there any limitations or assumptions in the model that could be improved or relaxed in future work? 5. How do the results of the empirical analysis support the effectiveness and versatility of the proposed approach, and what additional experiments or scenarios would be interesting to explore? 6. Are there any minor issues or suggestions for improvement in the paper's presentation or organization, such as providing more detail in certain derivations or clarifying specific points?
Review
Review ## [Updated after author feedback] Thank you for your feedback. I am happy to see the updated results and I hope you will add them to the paper. While I agree with the other reviewers that the individual parts of the idea are not new, I find the combination elegant - a whole that is greater than the sum of its parts. I will, therefore, keep my score. ## Summary The paper presents an extension to multi-output Gaussian processes enabling them to deal with heterogeneous outputs specified by different distributions, thus requiring different likelihood functions. By assuming that each output is completely specified by a distribution, the task is to infer the parameters of the distributions. Each parameter is modelled as the (non-linear transformation of the) output of a latent parameter function f, which itself is a linear combination of Q latent functions u. The main novelty is then to impose a multi-output GP prior on f, allowing the model to learn correlations between all parameters for all outputs. The authors introduce inducing variables and derive bounds allowing for stochastic variational inference, thus making the model applicable to large datasets. ## Quality The paper is of high technical quality. Some of the derivations are left out or moved to the supplementary material. In some sense this is justified, as they follow previous work which is adequately cited. However, a pointer to the gradient specification in the appendix would be appreciated. The authors present four experiments for empirical analysis. I hate to be that reviewer asking for additional experiments, but I think it would be interesting to see how the model performs on a dataset with a large number of outputs. The maximum number of outputs evaluated is three, whereas a high-dimensional input problem is considered. A high-dimensional output problem is, in my opinion, at least as interesting. Using synthetic data as in the first problem would be just fine. In Tables 1 and 2, I cannot find information on how the uncertainties were obtained. That should be included. Also, I am unsure what is meant by the "Global" column. How does the covariance function look here? ## Clarity The paper is clear and well-written. The authors have clearly spent time on the writing and the structure. The main problem and contribution are both clearly stated. I really like the paragraph headlines in section 2.2 - they provide a structured and easy to follow overview. One suggestion is to consider making a sketch of the setup. With both latent functions and parameter functions, things quickly get complex with lots of subscripts and indices. Not that the text is unclear; it is just a complex problem to wrap your head around. ## Originality The related work section is clear and concise. I could not find papers that have been missed. To my knowledge, a multi-output GP model capable (in principle) of handling any type and number of outputs has not been proposed before. ## Significance The paper addresses an important issue, namely learning correlations between heterogeneous outputs using GPs. The authors further make it scalable to large datasets by casting it in the stochastic variational inference framework. This method is an important contribution to the field of GPs. ## Comments line 100: "there is no always" -> "there is not always". line 170: I believe the mean should be \mu_{u_q} instead of u_q? line 291: "slighty difference" -> "slight difference"
NIPS
Title Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias Abstract Data-driven approaches to solving robotic tasks have gained a lot of traction in recent years. However, most existing policies are trained on large-scale datasets collected in curated lab settings. If we aim to deploy these models in unstructured visual environments like people's homes, they will be unable to cope with the mismatch in data distribution. In this light, we present the first systematic effort in collecting a large dataset for robotic grasping in homes. First, to scale and parallelize data collection, we built a low cost mobile manipulator assembled for under 3K USD. Second, data collected using low cost robots suffer from noisy labels due to imperfect execution and calibration errors. To handle this, we develop a framework which factors out the noise as a latent variable. Our model is trained on 28K grasps collected in several houses under an array of different environmental conditions. We evaluate our models by physically executing grasps on a collection of novel objects in multiple unseen homes. The models trained with our home dataset showed a marked improvement of 43.7% over a baseline model trained with data collected in the lab. Our architecture, which explicitly models the latent noise in the dataset, also performed 10% better than one that did not factor out the noise. We hope this effort inspires the robotics community to look outside the lab and embrace learning based approaches to handle inaccurate cheap robots. ∗Equal contribution. Direct correspondence to: {abhinavg,amurali,dgandhi,lerrelp}@cs.cmu.edu

1 Introduction

Powered by the availability of cheaper robots, robust simulators and greater processing speeds, the last decade has witnessed the rise of data-driven approaches in robotics. Instead of using hand-designed models, these approaches focus on the collection of large-scale datasets to learn policies that map from high-dimensional observations to actions. Current data-driven approaches mostly focus on using simulators since it is considerably less expensive to collect simulated data than on an actual robot in real-time. The hope is that these approaches will either be robust enough to domain shifts or that the models can be adapted using a small amount of real world data via transfer learning. However, beyond simple robotic picking tasks [1, 2, 3], there exists little support for this level of optimism. One major reason for this is the wide "reality gap" between simulators and the real world. Therefore, there has concurrently been a push in the robotics community to collect real-world physical interaction data [4, 5, 6, 7, 8, 9, 10, 11] in multiple robotics labs. A major driving force behind this effort is the declining cost of hardware, which allows scaling up data collection efforts for a variety of robotic tasks. This approach has indeed been quite successful at tasks such as grasping, pushing, poking and imitation learning. However, these learned models have often been shown to overfit (even after increasing the number of datapoints) and the performance of these robot learning methods tends to plateau fast. This leads us to an important question: why does robotic action data not lead to similar gains as we see in other prominent areas such as computer vision [12] and natural language processing [13]? The key to answering this question lies in the word: "real".
Many approaches claim that the data collected in the lab is real-world data. But is this really true? How often do we see white tablecloths or green backgrounds in real-world scenarios? In this paper, we argue that current robotic datasets lack the diversity of environments required for data-driven approaches to learn invariances. Therefore, the key lies in moving data collection efforts from a lab setting to the real-world homes of people. We argue that learning based approaches in robotics need to move out of simulators and labs and enter the homes of people, where the "real" data lives. There are, however, several challenges in moving data collection efforts inside the home. First, even the cheapest industrial robots like the Sawyer or the Baxter are too expensive (>20K USD). In order to collect data in homes, we need a cheap and compact robot. But the challenge with low-cost robots is that the lack of accurate control makes the data unreliable. Furthermore, data collection in homes cannot receive 24/7 supervision by humans, which, coupled with external factors, will lead to more noise in the data collection. Finally, there is a chicken-and-egg problem for home robotics: current robots are not good enough to collect data in homes; but to improve robots we need data from homes. In this paper, we propose to break this chicken-and-egg problem and present the first systematic effort in collecting a dataset inside homes. Towards this goal: (a) we assemble a robot which costs less than 3K USD; (b) we use this robot to collect data inside 6 different homes for training and 3 homes for testing; (c) we present an approach that models and factors out the noise in labeled data; (d) we demonstrate how data collected from these diverse home environments leads to superior performance and requires little-to-no domain adaptation. We hope this effort drives the robotics community to move out of the lab and use learning based approaches to handle inaccurate cheap robots.

2 Overview

The goal of our paper is to highlight the importance of diversifying the data and environments for robot learning. We want to show that data collected from homes will be less biased and in turn allow for greater generalization. For the purposes of this paper, we focus on the task of grasping. Even for simple manipulation primitives like grasping, current datasets suffer from strong biases such as simple backgrounds and the same environment dynamics (friction of the tabletop, etc.). We argue that current learning approaches exploit these biases and are not able to learn truly generalizable models. Of course, an important question is what kind of hardware we should use to collect large-scale data inside homes. Since we envision needing to collect data from hundreds or thousands of homes, one of the prime requirements for scaling is significantly reducing the cost of the robot. Towards this goal, we assembled a customized mobile manipulator as described below. Hardware Setup: Our robot consists of a Dobot Magician robotic arm [14] mounted on a Kobuki mobile base [15]. The robotic arm came with four degrees of freedom (DOF) and we customized the last link with a two-axis wrist. We also replaced the original pneumatic gripper with a two-fingered electric gripper [16]. The resulting robotic arm has five DOFs - x, y, z, roll & pitch - with a payload capacity of 0.3kg. The arm is rigidly attached on top of the moving base. The Kobuki base is about 0.2m high with 4.5kg of payload capacity.
An Intel R200 RGBD [17] camera was also mounted with a pan-tilt attachment at a height of 1m above the ground. All the processing for the robot is performed on an on-board laptop [18] attached at the back. The laptop has an Intel Core i5-8250U processor with 8GB of RAM and runs for around three hours on a single charge. The battery in the base is used to power both the base and the arm. With a single charge, the system can run for 1.5 hours. One unavoidable consequence of significant cost reduction is inaccurate control due to cheap motors. Unlike expensive setups such as the Sawyer or Baxter, our setup has higher calibration errors and lower accuracy due to inaccurate kinematics and hardware execution errors. Therefore, unlike existing self-supervised datasets, our dataset is diverse and huge but the labels are noisy. For example, the robot might be trying to grasp at location (x, y) but, due to noise, the execution is at (x + δx, y + δy). Therefore, the success/failure label corresponds to a different location. In order to tackle this challenge, we present an approach to learn from noisy data. Specifically, we model noise as a latent variable and use two networks: one which predicts the likely noise and another that predicts the action to execute.

3 Learning on Low Cost Robot Data

We now present our method for learning a robotic grasping model given low-cost data. We first introduce the patch grasping framework presented in Pinto and Gupta [4]. Unlike data collected with industrial/collaborative robots like the Sawyer and Baxter, there is a higher tendency for noisy labels in datasets collected with cheap robots. This error in position control can be attributed to a myriad of factors: hardware execution error, inaccurate kinematics, camera calibration, proprioception, wear and tear, etc. We present an architecture to disentangle the noise between the low-cost robot's actual and commanded executions. 3.1 Grasping Formulation Similar to [4], we are interested in the problem of planar grasping. This means that every object in the dataset is grasped at the same height (fixed cartesian z) and perpendicular to the ground (fixed end-effector pitch). The goal is to find a grasp configuration (x, y, θ) given an observation I of the object. Here x and y are the translational degrees of freedom, while θ represents the rotational degree of freedom (roll of the end-effector). Since our main baseline comparison is with the lab data collected in Pinto and Gupta [4], we follow a model architecture similar to theirs. Instead of directly predicting (x, y, θ) on the entire image I, several smaller patches I_P centered at different locations (x, y) are sampled and the angle of grasp θ is predicted from each patch. The angle is discretized as θ_D into N bins to allow for multimodal predictions. For training, each datapoint consists of an image I, the executed grasp (x, y, θ) and the grasp success label g. This is converted to the image patch I_P and the discrete angle θ_D. A binary cross entropy loss is then used to minimize the classification error between the predicted and ground truth label g. We use an ImageNet pre-trained convolutional neural network as initialization. 3.2 Modeling Noise as Latent Variable Unlike [4], where a relatively accurate industrial arm is used along with well calibrated cameras, our low-cost setup suffered from inaccurate position control and calibration. Though the executions are noisy, there is some structure in the noise which is dependent on both the design and individual robots.
This means that the structure of the noise can be modelled as a latent variable and decoupled during training [19]. Our approach is summarized in Fig 2. The conventional approach [4] models the grasp success probability for image patch $I_P$ at angle $\theta_D$ as $P(g|I_P, \theta_D; \mathcal{R})$. Here $\mathcal{R}$ represents variables of the environment which can introduce noise into the system. In the case of standard commercial robots with high accuracy, $\mathcal{R}$ does not play a significant role. However, in the low cost setting with multiple robots collecting data in parallel, it becomes an important consideration for learning. For instance, given an observed execution at patch $I_P$, the actual execution could have been at a neighbouring patch. Here, $z$ models the latent variable of the actual patch executed, and $\hat{I}_P$ belongs to a set of hypothesised neighbouring patches $\mathcal{P}$. We considered a total of nine patches centered around $I_P$, as explained in Fig 2. The conditional probability of grasping at a noisy image patch $I_P$ can hence be computed by marginalizing over $z$:

$$P(g|I_P, \theta_D, \mathcal{R}) = \sum_{\hat{I}_P \in \mathcal{P}} P(g|z = \hat{I}_P, \theta_D, \mathcal{R}) \cdot P(z = \hat{I}_P|\theta_D, I_P, \mathcal{R}) \quad (1)$$

Here $P(z = \hat{I}_P|\theta_D, I_P, \mathcal{R})$ represents the noise, which is dependent on the environment variables $\mathcal{R}$, while $P(g|z = \hat{I}_P, \theta_D, \mathcal{R})$ represents the grasp prediction probability given the true patch. The first part of the equation is implemented as a standard grasp network, which we refer to as the Grasp Prediction Network (GPN). Specifically, we feed in nine possible patches and obtain their respective success probability distributions. The second probability distribution, over the noise, is modeled via a separate network, which we call the Noise Modelling Network (NMN). The overall grasp model Robust-Grasp is defined by GPN ⊗ NMN, where ⊗ is the marginalization operator. 3.3 Learning the latent noise model Thus far, we have presented our Robust-Grasp architecture, which models the true grasping distribution and the latent noise. What should the inputs to the NMN network be, and how should it be trained? We assume that $z$ is conditionally independent of the local patch-specific variables $(\theta_D, I_P)$ given the global information $\mathcal{R}$, i.e., $P(z = \hat{I}_P|\theta_D, I_P, \mathcal{R}) \equiv P(z = \hat{I}_P|\mathcal{R})$. Apart from the patch $I_P$ and the grasp information $(x, y, \theta)$, other auxiliary information is stored: the image of the entire scene, the ID of the specific robot that collected a datapoint, and the raw pixel location of the grasp. The image of the whole scene might contain essential cues about the system, such as the relative location of the camera to the ground, which may change over the lifetime of the robot. The identification number of the robot might give cues about errors specific to particular hardware. Finally, the raw pixel location of the execution contains calibration-specific information, since calibration error is coupled with pixel location (we use a least-squares fit to compute the calibration parameters). It is important to emphasize that we do not have explicit labels to train NMN. Since we have to estimate the latent variable $z$, one could use Expectation Maximization (EM) [20]. But inspired by Misra et al. [19], we use direct optimization to jointly learn both NMN and GPN with the noisy labels from our dataset. The entire image of the scene along with the environment information is passed into NMN. This outputs a probability distribution over the patches where the grasps might have been executed. Finally, we apply the binary cross entropy loss on the overall marginalized output GPN ⊗ NMN and the true grasp label $g$.
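A compact PyTorch sketch of this marginalization follows. The nine candidate patches and the marginalization over z come from the text; the layer shapes, the number of angle bins, and the stand-in feature extractors are illustrative placeholders rather than the paper's exact architecture.

# Sketch of the Robust-Grasp forward pass (GPN (x) NMN) following Eq. (1).
import torch
import torch.nn as nn

N_PATCHES, N_BINS = 9, 18   # N_BINS: discretized grasp angles (illustrative)

class RobustGrasp(nn.Module):
    def __init__(self):
        super().__init__()
        self.gpn = nn.Sequential(nn.Flatten(), nn.LazyLinear(N_BINS))      # stand-in for the ResNet-18 GPN
        self.nmn = nn.Sequential(nn.LazyLinear(N_PATCHES), nn.Softmax(-1)) # stand-in for the NMN MLP

    def forward(self, patches, scene_feat, robot_id, pixel_xy):
        # patches: (B, 9, C, H, W) candidate "true" patches around the noisy one
        B = patches.shape[0]
        g = torch.sigmoid(self.gpn(patches.flatten(0, 1)))  # P(g | z, theta): (B*9, N_BINS)
        g = g.view(B, N_PATCHES, N_BINS)
        z = self.nmn(torch.cat([scene_feat, robot_id, pixel_xy], -1))  # P(z | R): (B, 9)
        return (z.unsqueeze(-1) * g).sum(1)                 # marginalize over z: (B, N_BINS)

# Training: binary cross entropy between the marginalized probability at the
# executed angle bin and the success label g, with GPN pre-trained alone first
# (the two-stage schedule described in Section 3.4).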
3.4 Training details We used PyTorch [21] to implement our models. Instead of learning the visual representations from scratch, we fine-tune a pretrained ResNet-18 [22] model. For the noise modelling network (NMN), we concatenate the 512-dimensional ResNet feature with a one-hot vector of the robot's ID and the raw pixel location of the grasp. This passes through a series of three fully connected layers and a SoftMax layer to convert the correct-patch predictions into a probability distribution. For the grasp prediction network (GPN), we extract nine candidate correct patches as input. One of these inputs is the original noisy patch, while the others are equidistant from the original patch. The angle predictions for all the patches are passed through a sigmoid activation at the end to obtain the grasp success probability for a specific patch at a specific angle. We train our network in two stages. First, we train only GPN using the noisy patch, which allows it to learn a good initialization for grasp prediction and in turn provide better gradients to NMN. This training is done over five epochs of the data. In the second stage, we add the NMN and the marginalization operator to simultaneously train NMN and GPN in an end-to-end fashion. This is done over 25 epochs of the data. We note that this two-stage approach is crucial for effective training of our networks, without which NMN trivially selects the same patch irrespective of the input. The optimizer used for training is Adam [23].

4 Results

In our experimental evaluation, we demonstrate that collecting data in diverse households is crucial for our learned models to generalize to unseen home environments. Furthermore, we also show that modelling the error of low cost robots in our Robust-Grasp architecture significantly improves grasping performance. From here on, we refer to our robot as the Low Cost Arm (LCA). Data Collection: First, we describe our methodology for collecting grasp data. We collected a diverse set (see Fig 3) of planar grasps in six homes. Each home has several environments and the data was collected in parallel using multiple robots. Since we are collecting data in homes, which have very unstructured visual input, we used an object detector (specifically tiny-YOLO, due to compute and memory constraints on the LCA) [24]. This results in bounding box predictions for the objects amidst clutter and diverse backgrounds, of which we only use the 2D location and discard the object class information. Once we have the location of the object in image space, we first sample a grasp and then compute the 3D grasp location from the noisy point cloud. The motion planning pipeline is carefully designed since our under-constrained robot only has 5 DOFs. When collecting training data, we scattered a diverse set of objects and let the mobile base randomly move and grasp objects. The base was constrained to a 2m wide area to prevent the robot from colliding with obstacles beyond its zone of operation. We collected a dataset of about 28K grasps. Quantitative Evaluation: For quantitative evaluation, we use three different test settings: • Binary Classification (Held-out Data): For our first test, we collect a held-out test set by performing random grasps on objects. We measure the performance of binary classification where, given a location and grasp angle, the model has to predict whether the grasp would be successful or not. This methodology allows us to evaluate a large number of models without needing to run them on a real robot.
For our experiments, we use three different environments/set-ups for held-out data. We collected two held-out datasets using the LCA in lab and the LCA in home environments. Our third dataset is the publicly available Baxter robot data [4]. • Real Low Cost Arm (Real-LCA): In this setting, we evaluated the physical grasping performance of our learned models on the low cost arm. For testing, we used 20 novel objects in four canonical orientations in three homes not seen in training. Since neither the homes nor the objects are seen in training, this metric tests the generalization of our learned model. • Real Sawyer (Real-Sawyer): For the third metric, we measure the physical grasping performance of our learned models on an industrial robotic arm (Sawyer). Similar to the Real-LCA metric, we grasp 20 novel objects in four canonical orientations in our lab environment. The goal of this experiment is to show that training models with data collected in homes also improves task performance in curated environments like the lab. Since the Sawyer is more accurate and better calibrated, we evaluate our Robust-Grasp model against a model which does not disentangle the noise in the data. Baselines: Next, we describe the baselines used in our experiments. Since we want to evaluate the performance of both the home robot dataset (Home-LCA) and the Robust-Grasp architecture, we used baselines for both the data and the model. We used two datasets as baselines: the grasp data collected by [4] (Lab-Baxter) as well as data collected with our low cost arms in a single environment (Lab-LCA). To benchmark our Robust-Grasp model, we compared against the noise-independent patch grasping model [4], which we call Patch-Grasp. We also compared our data and model with DexNet-3.0 from Mahler et al. [25] (DexNet) as a strong real-world grasping baseline. 4.1 Experiment 1: Performance on held-out data To demonstrate the importance of learning from home data, we train a Robust-Grasp model on both the Lab-Baxter and Lab-LCA datasets and compare it to the model trained on the Home-LCA dataset. As shown in Table 1, models trained on only lab data overfit to their respective environments and do not generalize to the more challenging Home-LCA environment, corresponding to a lower binary classification accuracy score. On the other hand, the model trained on Home-LCA performs well on both home and curated lab environments. To illustrate the importance of collecting a large Home-LCA dataset, we compare to a common domain adaptation baseline: fine-tuning the model learned on Lab-LCA with 5K home grasps ('Fine-tuned' in Table 1). We notice that this is significantly worse than the model trained from scratch on just home data. Our hypothesis is that the feature representation learned from lab data is insufficient to capture the richer variety present in home data. Further, to demonstrate the importance of the NMN for noise modelling, we compare to a baseline model without NMN, feeding the robot_id to the grasp prediction network directly ('Robot-ID Conditioned' in Table 1), similar to Hardware Conditioned Policies [26]. This baseline gives competitive results when testing on the Lab-LCA and Lab-Baxter datasets; however, it does not fare as well as Robust-Grasp. This demonstrates the importance of NMN and of sharing data across different LCAs. 4.2 Experiment 2: Performance on Real LCA Robot In Real-LCA, our most challenging evaluation, we compare our model against a pre-trained DexNet baseline model and the model trained on the Lab-Baxter dataset.
The models were benchmarked on their physical grasping performance on novel objects in unseen environments. We observe a significant improvement of 43.7% (see Table 2) when training on the Home-LCA dataset over the Lab-Baxter dataset. Moreover, our model is also 33% better than DexNet, though the latter has achieved state-of-the-art results on the bin-picking task [25]. The relatively low performance of DexNet in these environments can be attributed to the high quality depth sensing it requires. Since our robots are tested in homes, which typically have a lot of natural light, the depth images are quite noisy. This effect is further compounded by the cheap commodity RGBD cameras that we use on our robot. We used the Robust-Grasp model to train on the Home-LCA dataset. 4.3 Does factoring out the noise in data improve performance? To evaluate the performance of our Robust-Grasp model vis-à-vis the Patch-Grasp model, we would ideally need a noise-free dataset for fair comparisons. Since it is difficult to collect noise-free data on our home robots, we use Lab-Baxter for benchmarking. The Baxter robot is more accurate and better calibrated than the LCA and thus has less noisy labels. Testing is done on the Sawyer robot to ensure that the testing robot is different from both training robots. Results for Real-Sawyer are reported in Table 3. On this metric, our Robust-Grasp model trained on Home-LCA achieves 77.5% grasping accuracy. This is a significant improvement over the 56.25% grasping accuracy of the Patch-Grasp baseline trained on the same dataset. We also note that our grasp accuracy is similar to the performance reported (around 80%) in several recent learning-to-grasp papers [7]. However, unlike these methods, we train in a completely different environment (homes) and test in the lab. The improvement of the Robust-Grasp model is also demonstrated with the binary classification metric in Table 1, where it outperforms Patch-Grasp by about 4% on the Lab-Baxter and Home-LCA datasets. Moreover, our visualizations of predicted noise corrections in Fig 4 show that the corrections depend on both the pixel locations of the noisy grasp and the specific robot.

5 Related Work

5.1 Large scale robot learning Over the last few years there has been a growing interest in scaling up robot learning with large scale robot datasets. The Cornell Grasp Dataset [27] was among the first works to release a hand-annotated grasping dataset. Following this, Pinto and Gupta [4] created a self-supervised grasping dataset in which a Baxter robot collected and self-annotated the data. Levine et al. [7] took the next step in robotic data collection by employing an Arm-Farm of several industrial manipulators to learn grasping using reinforcement learning. All of these works use data in a restrictive lab environment with high-cost data labelling mechanisms. In our work, we show how low-cost data from a variety of homes can be used to train grasping models. Apart from grasping, there has also been a significant effort in collecting data for other robotic tasks. Agarwal et al. [8], Finn et al. [9], and Pinto and Gupta [28] collected data of a manipulator pushing objects on a table. Similarly, Nair et al. [10] collected data for manipulating a rope on a table, while Yahya et al. [29] used several robots in parallel to train a policy to open a door. Erickson et al. [30], Murali et al. [31], and Calandra et al.
[32] collected datasets of robotic tactile interactions for material recognition and grasp stability estimation. Again, all of this data was collected in a lab environment. We also note several pioneering works in lifelong robotics, such as Veloso et al. [33] and Hawes et al. [34]. In contrast to our work, they focus on navigation and long-term autonomy. 5.2 Grasping Grasping is one of the fundamental problems in robotic manipulation, and we refer readers to the recent surveys of Bicchi and Kumar [35] and Bohg et al. [36] for a comprehensive review. Classical approaches focus on physics-based analysis of stability [37] and usually require explicit 3D models of the objects. Recent papers have focused on data-driven approaches that directly learn a mapping from visual observations to grasp control [27, 4, 7]. For large-scale data collection, both simulation [25, 38, 39, 40] and real-world robots [4, 7] have been used. Mahler et al. [25] propose a versatile grasping model that achieves 90% grasping performance in the lab on the bin-picking task. However, since this method uses depth as input, we demonstrate that it is challenging to use for home robots, which may not have accurate depth sensing in these environments. 5.3 Learning with low cost robots Given that most labs run experiments with standard collaborative or industrial robots, there is very limited research on learning with low cost robots and manipulators. Deisenroth et al. [41] used model-based RL to teach a cheap, inaccurate 6-DOF robot to stack multiple blocks. Though mobile robots like iRobot’s Roomba have been in the home consumer electronics market for a decade, it is not clear whether they use learning approaches alongside mapping and planning. 5.4 Modelling noise in data Learning from noisy inputs is a challenging problem that has received significant attention in computer vision. Nettleton et al. [42] show that training models on noisy data detrimentally impacts performance. However, as Frénay and Verleysen [43] point out, the noise can be either independent of the environment or statistically dependent on it. This means that models that can account for and correct noise [19, 44] are valuable. Inspired by Misra et al. [19], we present a model that disentangles the noise in the training grasping data to learn a better grasping model. 6 Conclusion In summary, we present the first effort in collecting large-scale robot data inside diverse environments like people’s homes. We first assemble a mobile manipulator costing under 3K USD and collect a dataset of about 28K grasps in six homes under varying environmental conditions. Collecting data with cheap, inaccurate robots introduces the challenge of noisy labels, and we present an architectural framework which factors out the noise in the data. We demonstrate that it is crucial to train models on data collected in households if the goal is to eventually test them in homes. To evaluate our models, we physically tested them by grasping a set of 20 novel objects in the lab and in three unseen home environments from Airbnb. The model trained with our home dataset showed a 43.7% improvement over a model trained with data collected in the lab. Furthermore, our framework performed 33% better than a baseline DexNet model, which struggled with the typically poor depth sensing in common household environments with a lot of natural light. We also demonstrate that our model improves grasp performance in curated environments like the lab.
Our model was also able to successfully disentangle the structured noise in the data and improved performance by about 10%. ACKNOWLEDGEMENTS This work was supported by ONR MURI N000141612007. Abhinav was supported in part by a Sloan Research Fellowship and Adithya was partly supported by an Uber Fellowship.
1. What is the focus of the paper regarding robot grasping tasks?
2. What are the strengths of the proposed approach, particularly in its motivation and results?
3. Are there any concerns or limitations regarding the experimental evaluations?
4. How does the reviewer assess the novelty and potential impact of the paper's contributions?
5. Can the proposed method be applied to more complex control tasks or RL approaches?
Review
Review In this paper, a new dataset for the robot grasping task is proposed. In contrast to grasping data collected in a lab environment, the authors propose to collect the data from real-world environments (homes). To collect data in the wild, the authors propose to use cheap robots (measured by dollar cost) with low DoF. To compensate for the noisy behavior of these less-calibrated robots, the authors model the noise as a latent variable and learn it jointly with the grasping task. Results show that the combination of these ideas yields a grasping model that works well in both lab environments and new real-world environments.
Pros:
1. I really like the motivation of the paper, as it is, to the best of my knowledge, the first to emphasize two very important perspectives in robot learning: (a) how to develop and use cheap robots with less-calibrated mechanical components (sensors, actuators, hardware wear-out, etc.), and (b) how to extend robots to the real world so that much richer representations can be learned. The paper does a good job from this perspective and potentially opens up a new research direction.
2. Results are competitive and clearly suggest the efficacy and generalization advantage of learning in the wild.
3. As the major technical contribution (though the specific technique has been developed and used in [18]), the noise modelling network shows promising results and is worth further discussion.
Cons: As probably the first attempt, the paper is still very preliminary. For cheap robots, cheapness is currently measured only by cost. That is acceptable, but it might be even more helpful to set up a standard of system identification for cheap robots, i.e., what are the possible factors that make a robot cheap? It is reasonable to model the noise as a simple latent variable in this vision-based grasping task; however, for harder control tasks, or with an RL approach, we might need a better idea of what could possibly go wrong. One interesting thing to try would be to see how cheap robots perform after a long period of operation without human recalibration, and whether the robust learning algorithm can handle that to some extent. Of course, this is beyond the scope of this paper. For real-world environments, what are the major differences between different homes? Is it just a different background image for grasping (floor/carpet)? Do you also deploy the robots in places with different physical properties, e.g., on a table, on a bed, or in a bathtub? What really worries me is the experimental evaluation. I have the feeling that the gap between lab and home can be easily reduced by simple data augmentation. The paper mentions that in lab environments people usually only vary the objects being grasped, and I think the major difference of doing this in real homes (aside from the issue of cheap robots, etc.) is adding a new augmentation dimension: the grasping background. One experiment I can think of is, in the lab environment, using different carpets on the workstation as the "background" of the grasping task; I would imagine this would completely match the performance of home-collected data. From the algorithm perspective, since this is basically a vision task (no need to worry about, say, RL policy adaptation), simple domain adaptation could also help reduce the gap.
I might be careless, but I am wondering what the justification is for the absence of a fine-tuning experiment in the evaluation (learning on lab data, then fine-tuning on the real-world data for a small number of epochs). Overall, this is an interesting paper. We are far from solving the sim2real transfer problem, but it is good to think about the "lab2real" transfer problem, and this paper, though not perfect, is a good initial attempt.
NIPS
Title Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias Abstract Data-driven approaches to solving robotic tasks have gained a lot of traction in recent years. However, most existing policies are trained on large-scale datasets collected in curated lab settings. If we aim to deploy these models in unstructured visual environments like people’s homes, they will be unable to cope with the mismatch in data distribution. In this light, we present the first systematic effort in collecting a large dataset for robotic grasping in homes. First, to scale and parallelize data collection, we built a low cost mobile manipulator assembled for under 3K USD. Second, data collected using low cost robots suffers from noisy labels due to imperfect execution and calibration errors. To handle this, we develop a framework which factors out the noise as a latent variable. Our model is trained on 28K grasps collected in several houses under an array of different environmental conditions. We evaluate our models by physically executing grasps on a collection of novel objects in multiple unseen homes. The models trained with our home dataset showed a marked improvement of 43.7% over a baseline model trained with data collected in the lab. Our architecture, which explicitly models the latent noise in the dataset, also performed 10% better than one that did not factor out the noise. We hope this effort inspires the robotics community to look outside the lab and embrace learning-based approaches to handle inaccurate, cheap robots. 1 Introduction Powered by the availability of cheaper robots, robust simulators and greater processing speeds, the last decade has witnessed the rise of data-driven approaches in robotics. Instead of using hand-designed models, these approaches focus on the collection of large-scale datasets to learn policies that map from high-dimensional observations to actions. Current data-driven approaches mostly focus on using simulators, since it is considerably less expensive to collect simulated data than data on an actual robot in real time. The hope is that these approaches will either be robust enough to domain shifts or that the models can be adapted using a small amount of real-world data via transfer learning. However, beyond simple robotic picking tasks [1, 2, 3], there exists little support for this level of optimism. One major reason for this is the wide “reality gap” between simulators and the real world. Therefore, there has concurrently been a push in the robotics community to collect real-world physical interaction data [4, 5, 6, 7, 8, 9, 10, 11] in multiple robotics labs. A major driving force behind this effort is the declining cost of hardware, which allows scaling up data collection efforts for a variety of robotic tasks. This approach has indeed been quite successful at tasks such as grasping, pushing, poking and imitation learning. However, these learned models have often been shown to overfit (even after increasing the number of datapoints), and the performance of these robot learning methods tends to plateau fast. This leads us to an important question: why does robotic action data not lead to gains similar to those we see in other prominent areas such as computer vision [12] and natural language processing [13]? The key to answering this question lies in the word: “real”.
Many approaches claim that the data collected in the lab is real-world data. But is this really true? How often do we see white tablecloths or green backgrounds in real-world scenarios? In this paper, we argue that current robotic datasets lack the diversity of environments required for data-driven approaches to learn invariances. Therefore, the key lies in moving data collection efforts from the lab setting to the real-world homes of people. We argue that learning-based approaches in robotics need to move out of simulators and labs and enter the homes of people, where the “real” data lives. There are, however, several challenges in moving data collection efforts inside the home. First, even the cheapest industrial robots like the Sawyer or the Baxter are too expensive (>20K USD). In order to collect data in homes, we need a cheap and compact robot. But the challenge with low-cost robots is that the lack of accurate control makes the data unreliable. Furthermore, data collection in homes cannot receive 24/7 supervision by humans, which, coupled with external factors, leads to more noise in the data collection. Finally, there is a chicken-and-egg problem for home robotics: current robots are not good enough to collect data in homes, but to improve robots we need data from homes. In this paper, we propose to break this chicken-and-egg problem and present the first systematic effort in collecting a dataset inside homes. Towards this goal: (a) we assemble a robot which costs less than 3K USD; (b) we use this robot to collect data inside 6 different homes for training and 3 homes for testing; (c) we present an approach that models and factors out the noise in the labeled data; (d) we demonstrate how data collected from these diverse home environments leads to superior performance and requires little-to-no domain adaptation. We hope this effort drives the robotics community to move out of the lab and use learning-based approaches to handle inaccurate, cheap robots. 2 Overview The goal of our paper is to highlight the importance of diversifying the data and environments for robot learning. We want to show that data collected from homes will be less biased and in turn allow for greater generalization. For the purposes of this paper, we focus on the task of grasping. Even for a simple manipulation primitive like grasping, current datasets suffer from strong biases such as simple backgrounds and fixed environment dynamics (friction of the tabletop, etc.). We argue that current learning approaches exploit these biases and are not able to learn truly generalizable models. Of course, one important question is what kind of hardware we should use for collecting large-scale data inside homes. Since we envision collecting data from hundreds and thousands of homes, one of the prime requirements for scaling is significantly reducing the cost of the robot. Towards this goal, we assembled a customized mobile manipulator as described below. Hardware Setup: Our robot consists of a Dobot Magician robotic arm [14] mounted on a Kobuki mobile base [15]. The robotic arm came with four degrees of freedom (DOF), and we customized the last link with a two-axis wrist. We also replaced the original pneumatic gripper with a two-fingered electric gripper [16]. The resulting robotic arm has five DOFs - x, y, z, roll & pitch - with a payload capacity of 0.3kg. The arm is rigidly attached on top of the moving base. The Kobuki base is about 0.2m high with 4.5kg of payload capacity.
An Intel R200 RGBD camera [17] was also mounted on a pan-tilt attachment at a height of 1m above the ground. All the processing for the robot is performed on an on-board laptop [18] attached at the back. The laptop has an Intel Core i5-8250U processor with 8GB of RAM and runs for around three hours on a single charge. The battery in the base is used to power both the base and the arm; on a single charge, this system can run for 1.5 hours. One unavoidable consequence of significant cost reduction is inaccurate control due to cheap motors. Unlike expensive setups such as the Sawyer or Baxter, our setup has higher calibration errors and lower accuracy due to inaccurate kinematics and hardware execution errors. Therefore, unlike existing self-supervised datasets, our dataset is diverse and huge, but the labels are noisy. For example, the robot might be trying to grasp at location (x, y), but due to noise the execution is at (x + δx, y + δy); the success/failure label therefore corresponds to a different location. In order to tackle this challenge, we present an approach to learn from noisy data. Specifically, we model the noise as a latent variable and use two networks: one which predicts the likely noise and another which predicts the action to execute. 3 Learning on Low Cost Robot Data We now present our method for learning a robotic grasping model given low-cost data. We first introduce the patch grasping framework presented in Pinto and Gupta [4]. Unlike data collected with industrial/collaborative robots like the Sawyer and Baxter, datasets collected with cheap robots have a higher tendency for noisy labels. This error in position control can be attributed to a myriad of factors: hardware execution error, inaccurate kinematics, camera calibration, proprioception, wear and tear, etc. We present an architecture to disentangle the low-cost robot's actual and commanded executions. 3.1 Grasping Formulation Similar to [4], we are interested in the problem of planar grasping. This means that every object in the dataset is grasped at the same height (fixed Cartesian z) and perpendicular to the ground (fixed end-effector pitch). The goal is to find a grasp configuration (x, y, θ) given an observation I of the object. Here x and y are the translational degrees of freedom, while θ represents the rotational degree of freedom (roll of the end-effector). Since our main baseline comparison is with the lab data collected in Pinto and Gupta [4], we follow a model architecture similar to theirs. Instead of directly predicting (x, y, θ) on the entire image I, several smaller patches I_P centered at different locations (x, y) are sampled, and the angle of grasp θ is predicted from each patch. The angle is discretized as θ_D into N bins to allow for multimodal predictions. For training, each datapoint consists of an image I, the executed grasp (x, y, θ), and the grasp success label g. This is converted to the image patch I_P and the discrete angle θ_D. A binary cross entropy loss is then used to minimize the classification error between the predicted and ground truth label g. We use an ImageNet pre-trained convolutional neural network as initialization.
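To make this formulation concrete, below is a minimal PyTorch sketch of a patch-based grasp prediction network. The patch size, the number of angle bins (N = 18), and the exact head are illustrative assumptions; only the overall recipe (ImageNet-initialized CNN, per-angle sigmoid outputs, binary cross entropy against the success label) follows the text.

import torch
import torch.nn as nn
import torchvision.models as models

N_BINS = 18  # assumed discretization of the grasp angle theta into N bins

class PatchGraspNet(nn.Module):
    """Predicts a grasp success probability for each discretized angle of a
    candidate patch, following the patch-grasping formulation above."""
    def __init__(self, n_bins: int = N_BINS):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet init
        backbone.fc = nn.Linear(backbone.fc.in_features, n_bins)
        self.backbone = backbone

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (B, 3, H, W) crop centered at the sampled location (x, y)
        logits = self.backbone(patch)   # (B, n_bins)
        return torch.sigmoid(logits)    # per-angle grasp success probability

def grasp_loss(model, patch, theta_d, g):
    """Binary cross entropy between the predicted success probability at the
    executed (discretized) angle theta_d (a LongTensor) and the label g."""
    probs = model(patch)                                     # (B, n_bins)
    p_exec = probs.gather(1, theta_d.view(-1, 1)).squeeze(1)
    return nn.functional.binary_cross_entropy(p_exec, g.float())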
3.2 Modeling Noise as Latent Variable Unlike [4], where a relatively accurate industrial arm is used along with well-calibrated cameras, our low-cost setup suffers from inaccurate position control and calibration. Though the executions are noisy, there is structure in the noise which depends on both the design and the individual robots. This means that the structure of the noise can be modelled as a latent variable and decoupled during training [19]. Our approach is summarized in Fig 2. The conventional approach [4] models the grasp success probability for image patch I_P at angle θ_D as P(g | I_P, θ_D; R). Here R represents variables of the environment which can introduce noise into the system. In the case of standard commercial robots with high accuracy, R does not play a significant role. However, in the low-cost setting with multiple robots collecting data in parallel, it becomes an important consideration for learning. For instance, given an observed execution at patch I_P, the actual execution could have been at a neighbouring patch. Here, z models the latent variable of the actual patch executed, and Î_P belongs to a set P of hypothesized neighbouring patches. We consider a total of nine patches centered around I_P, as explained in Fig 2. The conditional probability of grasping at a noisy image patch I_P can hence be computed by marginalizing over z:

P(g \mid I_P, \theta_D, \mathcal{R}) = \sum_{\hat{I}_P \in \mathcal{P}} P(g \mid z = \hat{I}_P, \theta_D, \mathcal{R}) \cdot P(z = \hat{I}_P \mid \theta_D, I_P, \mathcal{R}) \qquad (1)

Here P(z = Î_P | θ_D, I_P, R) represents the noise, which depends on the environment variables R, while P(g | z = Î_P, θ_D, R) represents the grasp success probability given the true patch. The first part of the equation is implemented as a standard grasp network, which we refer to as the Grasp Prediction Network (GPN). Specifically, we feed in the nine possible patches and obtain their respective success probability distributions. The second probability distribution, over the noise, is modeled via a separate network, which we call the Noise Modelling Network (NMN). The overall grasp model Robust-Grasp is defined by GPN ⊗ NMN, where ⊗ is the marginalization operator. 3.3 Learning the latent noise model Thus far, we have presented our Robust-Grasp architecture, which models the true grasping distribution and the latent noise. What should be the inputs to the NMN, and how should it be trained? We assume that z is conditionally independent of the local patch-specific variables (θ_D, I_P) given the global information R, i.e., P(z = Î_P | θ_D, I_P, R) ≡ P(z = Î_P | R). Apart from the patch I_P and grasp information (x, y, θ), auxiliary information such as the image of the entire scene, the ID of the specific robot that collected a datapoint, and the raw pixel location of the grasp is stored. The image of the whole scene might contain essential cues about the system, such as the relative location of the camera to the ground, which may change over the lifetime of the robot. The identification number of the robot might give cues about errors specific to a particular piece of hardware. Finally, the raw pixel location of the execution contains calibration-specific information: since we compute calibration parameters with a least-squares fit, calibration error is coupled with pixel location. It is important to emphasize that we do not have explicit labels to train the NMN. Since we have to estimate the latent variable z, one could use Expectation Maximization (EM) [20]. But inspired by Misra et al. [19], we use direct optimization to jointly learn both NMN and GPN with the noisy labels from our dataset. The entire image of the scene, along with the environment information, is passed into the NMN. This outputs a probability distribution over the patches where the grasps might actually have been executed. Finally, we apply the binary cross entropy loss on the overall marginalized output GPN ⊗ NMN and the true grasp label g.
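As a concrete illustration of Eq. (1), here is a minimal sketch of the GPN ⊗ NMN marginalization. The NMN input and hidden dimensions, the number of robots, and the robust_grasp_prob helper's argument layout are illustrative assumptions; the structure (a softmax over nine patch hypotheses, marginalized against per-patch grasp probabilities from a GPN such as the PatchGraspNet sketched earlier) follows the text.

import torch
import torch.nn as nn

class NoiseModellingNetwork(nn.Module):
    """NMN: predicts P(z = I_hat | R), a distribution over the nine
    hypothesized 'true' patches, from global information R (a scene feature,
    a one-hot robot ID, and the raw pixel location of the commanded grasp)."""
    def __init__(self, scene_feat_dim=512, n_robots=4, n_patches=9):
        super().__init__()
        in_dim = scene_feat_dim + n_robots + 2  # scene feature + ID + (u, v)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_patches),
        )

    def forward(self, scene_feat, robot_onehot, pixel_uv):
        x = torch.cat([scene_feat, robot_onehot, pixel_uv], dim=1)
        return torch.softmax(self.mlp(x), dim=1)  # P(z | R), shape (B, 9)

def robust_grasp_prob(gpn, nmn, patches, theta_d, scene_feat, robot_onehot, uv):
    """Marginal grasp success probability of Eq. (1): sum over the nine
    candidate patches of P(g | z, theta_D) * P(z | R)."""
    B, K = patches.shape[:2]                       # patches: (B, 9, 3, H, W)
    flat = patches.flatten(0, 1)                   # (B*9, 3, H, W)
    probs = gpn(flat).view(B, K, -1)               # (B, 9, n_bins)
    idx = theta_d.view(B, 1, 1).expand(B, K, 1)    # executed angle bin (long)
    p_g_given_z = probs.gather(2, idx).squeeze(2)  # (B, 9)
    p_z = nmn(scene_feat, robot_onehot, uv)        # (B, 9)
    return (p_g_given_z * p_z).sum(dim=1)          # (B,)

The binary cross entropy loss from the previous sketch can then be applied directly to this marginal probability and the noisy label g, matching the end-to-end training described next.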
3.4 Training details We used PyTorch [21] to implement our models. Instead of learning the visual representations from scratch, we fine-tune a pretrained ResNet-18 [22] model. For the noise modelling network (NMN), we concatenate the 512-dimensional ResNet feature with a one-hot vector of the robot's ID and the raw pixel location of the grasp. This passes through a series of three fully connected layers and a SoftMax layer to convert the correct-patch predictions into a probability distribution. For the grasp prediction network (GPN), we extract nine candidate correct patches as input. One of these is the original noisy patch, while the others are equidistant from the original patch. The angle predictions for all the patches are passed through a sigmoid activation at the end to obtain the grasp success probability for a specific patch at a specific angle. We train our network in two stages. First, we train only the GPN using the noisy patch, which allows it to learn a good initialization for grasp prediction and in turn provide better gradients to the NMN. This training is done over five epochs of the data. In the second stage, we add the NMN and the marginalization operator to train NMN and GPN simultaneously in an end-to-end fashion. This is done over 25 epochs of the data. We note that this two-stage approach is crucial for effective training of our networks; without it, the NMN trivially selects the same patch irrespective of the input. The optimizer used for training is Adam [23]. 4 Results In our experimental evaluation, we demonstrate that collecting data in diverse households is crucial for our learned models to generalize to unseen home environments. Furthermore, we show that modelling the error of low cost robots in our Robust-Grasp architecture significantly improves grasping performance. From here onwards, we refer to our robot as the Low Cost Arm (LCA). Data Collection: First, we describe our methodology for collecting grasp data. We collected a diverse set (see Fig 3) of planar grasps in six homes. Each home has several environments, and the data was collected in parallel using multiple robots. Since we are collecting data in homes, which have very unstructured visual input, we used an object detector (specifically tiny-YOLO, due to compute and memory constraints on the LCA) [24]. This produces bounding box predictions for objects amidst clutter and diverse backgrounds, of which we only use the 2D location and discard the object class information. Once we have the location of the object in image space, we first sample a grasp and then compute the 3D grasp location from the noisy point cloud. The motion planning pipeline is carefully designed since our under-constrained robot only has 5 DOFs. When collecting training data, we scattered a diverse set of objects and let the mobile base randomly move and grasp objects. The base was constrained to a 2m-wide area to prevent the robot from colliding with obstacles beyond its zone of operation. We collected a dataset of about 28K grasps. Quantitative Evaluation: For quantitative evaluation, we use three different test settings: • Binary Classification (Held-out Data): For our first test, we collect a held-out test set by performing random grasps on objects. We measure binary classification performance: given a location and grasp angle, the model has to predict whether the grasp would be successful or not. This methodology allows us to evaluate a large number of models without needing to run them on a real robot (a minimal sketch of this evaluation follows below).
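Because this first metric is purely offline, it reduces to standard binary-classification scoring. A minimal sketch follows; the batch layout (success label last), the 0.5 decision threshold, and the model_prob_fn callable (e.g., the illustrative robust_grasp_prob helper above) are assumptions rather than details given in the text.

import torch

@torch.no_grad()
def heldout_binary_accuracy(model_prob_fn, loader, device="cpu"):
    """Fraction of held-out grasps whose success/failure is predicted
    correctly from the patch, location, and angle alone."""
    correct, total = 0, 0
    for batch in loader:
        inputs, g = batch[:-1], batch[-1]      # assumed: label is last field
        p = model_prob_fn(*[x.to(device) for x in inputs])
        pred = (p > 0.5).long()                # assumed decision threshold
        correct += (pred == g.to(device).long()).sum().item()
        total += g.numel()
    return correct / total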
1. What are the contributions and strengths of the paper regarding robot learning for object grasping?
2. What are the limitations and concerns regarding the dataset used in the study?
3. How does the reviewer assess the significance of the low-cost robot platform used in the research?
4. What are some examples of existing research on robot long-term autonomy that the paper overlooked?
5. How does the reviewer evaluate the discussion of grasping in Section 5.2 of the paper?
6. Are there any suggestions for improving the paper, particularly in addressing the low-cost platforms and related work on "looking outside the lab"?
Review
Review This paper is on robot learning to grasp objects in homes (an everyday environment). The work makes a set of contributions, including a dataset of 28K grasps collected using real robots in homes, an architecture for grasp planning and scene modeling (although the individual components of the architecture are not new), and a set of comparisons using different learning methods and existing datasets. The paper is overall well written, and I have the following concerns about the work. It is claimed in the paper that the dataset was collected in real-world home environments (which is true), but the robot was still constrained to a 2m-wide area, and the grasps are limited to strictly downward grasps. All the robot sees is objects placed on the ground. From this reviewer's perspective, the work is not facing the real challenge of home environments. The point of highlighting the low-cost robot platform is unclear. Of course, low-cost robots are more readily available. But even if we consider only the 3K robots, there can be very different designs. Also, it is (unlikely but still) possible that a Baxter robot's price is significantly reduced in a few years. How would we position this work given the cost reduction of robot platforms? It would be better to highlight the quantitative features of the low-cost platform, such as sensing range, precision of the arm, and battery capacity. The work overlooks existing research on robot long-term autonomy. Examples include the EU STRANDS robots, CMU CoBots, and UT Austin BWIBots. These robots have traveled thousands of kilometers and served thousands of people. For instance, the EU STRANDS robots have been able to operate without (or with minimal) human involvement for weeks, and were able to collect various types of data. While most of these robots do not have arms, the robotics community has been "looking outside the lab" for many years. The discussion of grasping in Section 5.2 is unfair to many of the existing works on grasping. The main reason is that grasping in a high-dimensional space is extremely challenging. This work greatly simplifies the problem by fixing the height and keeping the arm perpendicular to the ground; the challenging part of trajectory planning in high-dimensional spaces is thus avoided. AFTER REBUTTAL: Thanks for the detailed reply in the response letter. The paper (already in good shape) can be further improved by better addressing the low-cost platforms and by elaborating, in related work, on efforts on "looking outside the lab", as agreed by the authors in the letter.
NIPS
Title Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias Abstract Data-driven approaches to solving robotic tasks have gained a lot of traction in recent years. However, most existing policies are trained on large-scale datasets collected in curated lab settings. If we aim to deploy these models in unstructured visual environments like people’s homes, they will be unable to cope with the mismatch in data distribution. In such light, we present the first systematic effort in collecting a large dataset for robotic grasping in homes. First, to scale and parallelize data collection, we built a low cost mobile manipulator assembled for under 3K USD. Second, data collected using low cost robots suffer from noisy labels due to imperfect execution and calibration errors. To handle this, we develop a framework which factors out the noise as a latent variable. Our model is trained on 28K grasps collected in several houses under an array of different environmental conditions. We evaluate our models by physically executing grasps on a collection of novel objects in multiple unseen homes. The models trained with our home dataset showed a marked improvement of 43.7% over a baseline model trained with data collected in lab. Our architecture which explicitly models the latent noise in the dataset also performed 10% better than one that did not factor out the noise. We hope this effort inspires the robotics community to look outside the lab and embrace learning based approaches to handle inaccurate cheap robots. 1 Introduction Powered by the availability of cheaper robots, robust simulators and greater processing speeds, the last decade has witnessed the rise of data-driven approaches in robotics. Instead of using hand-designed models, these approaches focus on the collection of large-scale datasets to learn policies that map from high-dimensional observations to actions. Current data-driven approaches mostly focus on using simulators since it is considerably less expensive to collect simulated data than on an actual robot in real-time. The hope is that these approaches will either be robust enough to domain shifts or that the models can be adapted using a small amount of real world data via transfer learning. However, beyond simple robotic picking tasks [1, 2, 3], there exist little support to this level of optimism. One major reason for this is the wide “reality gap” between simulators and the real world. Therefore, there has concurrently been a push in the robotics community to collect real-world physical interaction data [4, 5, 6, 7, 8, 9, 10, 11] in multiple robotics labs. A major driving force behind this effort is the declining costs of hardware which allows scaling up data collection efforts for a variety of robotic tasks. This approach has indeed been quite successful at tasks such as grasping, pushing, poking and imitation learning. However, these learned models have often been shown to overfit (even after increasing the number of datapoints) and the performance of these robot learning methods tends ∗Equal contribution. Direct correspondence to: {abhinavg,amurali,dgandhi,lerrelp}@cs.cmu.edu 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. to plateau fast. This leads us to an important question: why does robotic action data not lead to similar gains as we see in other prominent areas such as computer vision [12] and natural language processing [13]? The key to answering this question lies in the word: “real”. 
Many approaches claim that the data collected in the lab is real-world data. But is this really true? How often do we see white table-clothes or green backgrounds in real-world scenarios? In this paper, we argue that current robotic datasets lack the diversity of environments required for data-driven approaches to learn invariances. Therefore, the key lies in moving data collection efforts from a lab setting to real-world homes of people. We argue that learning based approaches in robotics need to move out of simulators and labs and enter the homes of people where the “real” data lives. There are however several challenges in moving the data collection efforts inside the home. First, even the cheapest industrial robots like the Sawyer or the Baxter are too expensive (>20K USD). In order to collect data in homes, we need a cheap and compact robot. But the challenge with low-cost robots is that the lack of accurate control makes the data unreliable. Furthermore, data collection in homes cannot receive 24/7 supervision by humans, which coupled with external factors will lead to more noise in the data collection. Finally, there is a chicken-egg problem for home-robotics: current robots are not good enough to collect data in homes; but to improve robots we need data in homes. In this paper, we propose to break this chicken-egg problem and present the first systematic effort in collecting a dataset inside the homes. Towards this goal: (a) we assemble a robot which costs less than 3K USD; (b) we use this robot to collect data inside 6 different homes for training and 3 homes for testing; (c) we present an approach that models and factors the noise in labeled data; (d) we demonstrate how data collected from these diverse home environment leads to superior performance and requires little-to-no domain adaptation. We hope this effort drives the robotics community to move out of the lab and use learning based approaches to handle inaccurate cheap robots. 2 Overview The goal of our paper is to highlight the importance of diversifying the data and environments for robot learning. We want to show that data collected from homes will be less biased and in turn allow for greater generalization. For the purposes of this paper, we focus on the task of grasping. Even for simple manipulation primitive tasks like grasping, current datasets suffer from strong biases such as simple backgrounds and the same environment dynamics (friction of tabletop etc.). We argue that current learning approaches exploit these biases and are not able to learn truly generalizable models. Of-course one important question is what kind of hardware should we use for collecting the largescale data inside the homes. We envision that since we would need to collect data from hundreds and thousands of homes; one of the prime-requirement for scaling is significantly reducing the cost of the robot. Towards this goal, we assembled a customized mobile manipulator as described below. Hardware Setup: Our robot consists of a Dobot Magician robotic arm [14] mounted on a Kobuki mobile base [15]. The robotic arm came with four degrees of freedom (DOF) and we customized the last link with a two axis wrist. We also modified the original pneumatic gripper with a two-fingered electric gripper [16]. The resulting robotic arm has five DOFs - x, y, z, roll & pitch - with a payload capacity of 0.3kg. The arm is rigidly attached on top of the moving base. The Kobuki base is about 0.2m high with 4.5kg of payload capacity. 
An Intel R200 RGBD [17] camera was also mounted with a pan-tilt attachment at a height of 1m above the ground. All the processing for the robot is performed an on-board laptop [18] attached on the back. The laptop has intel core i5-8250U processor with 8GB of RAM and runs for around three hours on a single charge. The battery in the base is used to power both the base and the arm. With a single charge, the system can run for 1.5 hours. One unavoidable consequence of significant cost reduction is the inaccurate control due to cheap motors. Unlike expensive setups such as Sawyer or Baxter, our setup has higher calibration errors and lower accuracy due to in-accuracte kinematics and hardware execution errors. Therefore, unlike existing self-supervised datasets; our dataset is diverse and huge but the labels are noisy. For example, the robot might be trying to grasp at location x, y but to due to noise the execution is at (x+ δx, y + δy). Therefore, the success/failure label corresponds to a different location. In order to tackle this challenge, we present an approach to learn from noisy data. Specifically, we model noise as a latent variable and use two networks: one which predicts the likely noise and other that predicts the action to execute. 3 Learning on Low Cost Robot Data We now present our method for learning a robotic grasping model given low-cost data. We first introduce the patch grasping framework presented in Pinto and Gupta [4]. Unlike the data collected in industrial/collaborative robots like the Sawyer and Baxter, there is a higher tendency for noisy labels in the datasets collected with cheap robots. This error in position control can be attributed to a myraid of factors: hardware execution error, inaccurate kinematics, camera calibration, proprioception, wear and tear, etc. We present an architecture to disentangle the noise of the low-cost robot’s actual and commanded executions. 3.1 Grasping Formulation Similar to [4], we are interested in the problem of planar grasping. This means that every object in the dataset is grasped at the same height (fixed cartesian z) and perpendicular to the ground (fixed end-effector pitch). The goal is find a grasp configuration (x, y, θ) given an observation I of the object. Here x and y are the translational degrees of freedom, while θ represents the rotational degrees of freedom (roll of the end-effector). Since our main baseline comparison is with the lab data collected in Pinto and Gupta [4], we follow a model architecture similar to theirs. Instead of directly predicting (x, y, θ) on the entire image I , several smaller patches IP centered at different locations (x, y) are sampled and the angle of grasp θ is predicted from this patch. The angle is discretized as θD into N bins to allow for multimodal predictions. For training, each datapoint consists of an image I , the executed grasp (x, y, θ) and the grasp success label g. This is converted to the image patch IP and the discrete angle θD. A binary cross entropy loss is then used to minimize the classification error between the predicted and ground truth label g. We use a Imagenet pre-trained convolutional neural network as initialization. 3.2 Modeling Noise as Latent Variable Unlike [4] where a relatively accurate industrial arm is used along with well calibrated cameras, our low-cost setup suffered from inaccurate position control and calibration. Though the executions are noisy, there is some structure in the noise which is dependent on both the design and individual robots. 
This means that the structure of noise can be modelled as a latent variable and decoupled during training [19]. Our approach is summarized in Fig 2. The conventional approach [4] models the grasp success probability for image patch IP at angle θD as P (g|IP , θD;R). HereR represents variables of the environment which can introduce noise in the system. In the case of standard commercial robots with high accuracy,R does not play a significant role. However, in the low cost setting with multiple robots collecting data in parallel, it becomes AlexNet Pretrained Parameters Learnt Parameters an important consideration for learning. For instance, given an observed execution of patch IP , the actual execution could have been at a neighbouring patch. Here, z models the latent variable of the actual patch executed, and ÎP belongs to a set of possible hypothesis neighbouring patches P . We considered a total of nine patches centered around IP , as explained in Fig 2. The conditional probability of grasping at a noisy image patch IP can hence be computed by marginalizing over z: P (g|IP , θD,R) = ∑ ÎP∈P P (g|z = ÎP , θD,R) · P (z = ÎP |θD, IP ,R) (1) Here P (z = ÎP |θD, IP ,R) represents the noise which is dependent on the environment variablesR, while P (g|z = ÎP , θD,R) represents the grasp prediction probability given the true patch. The first part of the equation is implemented as a standard grasp network, which we refer to as the Grasp Prediction Network (GPN). Specifically, we feed in nine possible patches and obtain their respective success probability distribution. The second probability distribution over noise is modeled via a separate network, which we call Noise Modelling Network (NMN). The overall grasp model Robust-Grasp is defined by GPN ⊗NMN, where ⊗ is the marginalization operator. 3.3 Learning the latent noise model Thus far, we have presented our Robust-Grasp architecture which models the true grasping distribution and latent noise. What should be the inputs to the NMN network and how should it be trained? We assume that z is conditionally independent of the local patch-specific variables (θD, IP ) given the global information R, i.e P (z = ÎP |θD, IP ,R) ≡ P (z = ÎP |R). Apart from the patch IP and grasp information (x, y, θ), other auxiliary information such as the image of the entire scene, ID of the specific robot that collected a datapoint and the raw pixels location of the grasp are stored. The image of the whole scene might contain essential cues about the system, such as the relative location of camera to the ground which may change over the lifetime of the robot. The identification number of the robot might give cues about errors specific to a particular hardware. Finally, the raw pixels of execution contain calibration specific information, since calibration error is coupled with pixel location, since we do least squares fit to compute calibration parameters. It is important to emphasize that we do not have explicit labels to train NMN. Since we have to estimate the latent variable z, one could use Expectation Maximization (EM) [20]. But inspired from Misra et al. [19], we use direct optimization to jointly learn both NMN and GPN with the noisy labels from our dataset. The entire image of the scene along with the environment information is passed into NMN. This outputs a probability distribution over the patches where the grasps might have been executed. Finally, we apply the binary cross entropy loss on the overall marginalized output GPN⊗NMN and the true grasp label g. 
3.4 Training details We used PyTorch [21] to implement our models. Instead of learning the visual representations from scratch, we finetune on a pretrained ResNet-18 [22] model. For the noise modelling network (NMN), we concatenate the 512 dimensional ResNet feature with a one-hot vector of the robot’s ID and the raw pixel location of the grasp. This passes through a series of three fully connected layers and a SoftMax layer to convert the correct patch predictions to a probability distribution. For the grasp prediction network (GPN), we extract nine candidate correct patches to input. One of these inputs is the original noisy patch, while the others are equidistant from the original patch. The angle predictions for all the patches are passed through a sigmoid activation at the end to obtain grasp success probability for a specific patch at a specific angle. We train our network in two stages. First, we only train GPN using the noisy patch which allows it to learn a good initialization for grasp prediction and in turn provide better gradients to NMN. This training is done over five epochs of the data. In the second stage, we add the NMN and marginalization operator to simultaneously train NMN and GPN in an end-to-end fashion. This is done over 25 epochs of the data. We note that this two-stage approach is crucial for effective training of our networks, without which NMN trivially selects the same patch irrespective of the input. The optimizer used for training is Adam [23]. 4 Results In our experimental evaluation, we demonstrate that collecting data in diverse households is crucial for our learned models to generalize to unseen home environments. Furthermore, we also show that modelling the error of low cost robots in our Robust-Grasp architecture significantly improves grasping performance. We here onwards refer to our robot as the Low Cost Arm (LCA). Data Collection: First, we describe our methodology for collecting grasp data. We collected a diverse set (see Fig 3) of planar grasping in six homes. Each home has several environments and the data was collected in parallel using multiple robots. Since we are collecting data in homes which have very unstructured visual input, we used an object detector (specifically tiny-YOLO, due to compute and memory constraints on LCA) [24]. This results in bounding box predictions for the objects amidst clutter and diverse backgrounds, of which we only use the 2D location and discard the object class information. Once we have the location of the object in image space, we first sample a grasp and then compute the 3D grasp location from the noisy PointCloud. The motion planning pipeline is carefully designed since our under-constrained robot only has 5 DOFs. When collecting training data, we scattered a diverse set of objects and let the mobile base randomly move and grasp objects. The base was constrained to a 2m wide area to prevent the robot from colliding with obstacles beyond its zone of operation. We collected a dataset of about 28K grasps. Quantitative Evaluation: For quantitative evaluation, we use three different test settings: • Binary Classification (Held-out Data): For our first test, we collect a held-out test set by performing random grasps on objects. We measure the performance of binary classification where given a location and grasp angle; the model has to predict whether the grasp would be successful or not. This methodology allows us evaluate a large number models without needing to run them on a real robot. 
4 Results

In our experimental evaluation, we demonstrate that collecting data in diverse households is crucial for our learned models to generalize to unseen home environments. Furthermore, we also show that modelling the error of low cost robots in our Robust-Grasp architecture significantly improves grasping performance. We henceforth refer to our robot as the Low Cost Arm (LCA).

Data Collection: First, we describe our methodology for collecting grasp data. We collected a diverse set (see Fig 3) of planar grasps in six homes. Each home has several environments, and the data was collected in parallel using multiple robots. Since we are collecting data in homes, which have very unstructured visual input, we used an object detector (specifically tiny-YOLO, due to compute and memory constraints on LCA) [24]. This results in bounding box predictions for the objects amidst clutter and diverse backgrounds, of which we only use the 2D location and discard the object class information. Once we have the location of the object in image space, we first sample a grasp and then compute the 3D grasp location from the noisy PointCloud. The motion planning pipeline is carefully designed since our under-constrained robot only has 5 DOFs. When collecting training data, we scattered a diverse set of objects and let the mobile base randomly move and grasp objects. The base was constrained to a 2m wide area to prevent the robot from colliding with obstacles beyond its zone of operation. We collected a dataset of about 28K grasps.

Quantitative Evaluation: For quantitative evaluation, we use three different test settings:

• Binary Classification (Held-out Data): For our first test, we collect a held-out test set by performing random grasps on objects. We measure the performance of binary classification where, given a location and grasp angle, the model has to predict whether the grasp would be successful or not. This methodology allows us to evaluate a large number of models without needing to run them on a real robot. For our experiments, we use three different environments/set-ups for held-out data. We collected two held-out datasets using LCA in lab and LCA in home environments. Our third dataset is the publicly available Baxter robot data [4].

• Real Low Cost Arm (Real-LCA): In this setting, we evaluate the physical grasping performance of our learned models on the low cost arm. For testing, we used 20 novel objects in four canonical orientations in three homes not seen in training. Since both the homes and the objects are not seen in training, this metric tests the generalization of our learned model.

• Real Sawyer (Real-Sawyer): In the third metric, we measure the physical grasping performance of our learned models on an industrial robotic arm (Sawyer). Similar to the Real-LCA metric, we grasp 20 novel objects in four canonical orientations in our lab environment. The goal of this experiment is to show that training models with data collected in homes also improves task performance in curated environments like the lab. Since the Sawyer is more accurate and better calibrated, we evaluate our Robust-Grasp model against the model which does not disentangle the noise in the data.

Baselines: Next, we describe the baselines used in our experiments. Since we want to evaluate the performance of both the home robot dataset (Home-LCA) and the Robust-Grasp architecture, we used baselines for both the data and the model. We used two datasets as baselines: grasp data collected by [4] (Lab-Baxter) as well as data collected with our low cost arms in a single environment (Lab-LCA). To benchmark our Robust-Grasp model, we compared to the noise-independent patch grasping model [4], which we call Patch-Grasp. We also compared our data and model with DexNet-3.0 from Mahler et al. [25] (DexNet) for a strong real-world grasping baseline.

4.1 Experiment 1: Performance on held-out data

To demonstrate the importance of learning from home data, we train a Robust-Grasp model on both the Lab-Baxter and Lab-LCA datasets and compare it to the model trained with the Home-LCA dataset. As shown in Table 1, models trained on only lab data overfit to their respective environments and do not generalize to the more challenging Home-LCA environment, as reflected in a lower binary classification accuracy. On the other hand, the model trained on Home-LCA performs well on both home and curated lab environments. To illustrate the importance of collecting a large Home-LCA dataset, we compare to a common domain adaptation baseline: fine-tuning the model learned on Lab-LCA with 5K home grasps ('Fine-tuned' in Table 1). We notice that this is significantly worse than the model trained from scratch with just home data. Our hypothesis is that the feature representation learned from lab data is insufficient to capture the richer variety present in home data. Further, to demonstrate the importance of the NMN for noise modelling, we compare to a baseline model without NMN that feeds the robot_id to the grasp prediction network directly ('Robot-ID Conditioned' in Table 1), similar to Hardware Conditioned Policies [26]. This baseline gives competitive results when testing on the Lab-LCA and Lab-Baxter datasets; however, it did not fare as well as Robust-Grasp. This demonstrates the importance of NMN and of sharing data across different LCAs.

4.2 Experiment 2: Performance on Real LCA Robot

In Real-LCA, our most challenging evaluation, we compare our model against a pre-trained DexNet baseline model and the model trained on the Lab-Baxter dataset.
The models were benchmarked on their physical grasping performance on novel objects in unseen environments. We observe a significant improvement of 43.7% (see Table 2) when training on the Home-LCA dataset over the Lab-Baxter dataset. Moreover, our model is also 33% better than DexNet, though the latter has achieved state-of-the-art results on the bin-picking task [25]. The relatively low performance of DexNet in these environments can be attributed to the high quality depth sensing it requires. Since our robots are tested in homes, which typically have a lot of natural light, the depth images are quite noisy. This effect is further compounded by the cheap commodity RGBD cameras that we use on our robot. In these comparisons, the Robust-Grasp model was trained on the Home-LCA dataset.

4.3 Does factoring out the noise in data improve performance?

To evaluate the performance of our Robust-Grasp model vis-à-vis the Patch-Grasp model, we would ideally need a noise-free dataset for fair comparisons. Since it is difficult to collect noise-free data on our home robots, we use Lab-Baxter for benchmarking. The Baxter robot is more accurate and better calibrated than the LCA robot and thus has less label noise. Testing is done on the Sawyer robot to ensure the testing robot is different from both training robots. Results for the Real-Sawyer are reported in Table 3. On this metric, our Robust-Grasp model trained on Home-LCA achieves 77.5% grasping accuracy. This is a significant improvement over the 56.25% grasping accuracy of the Patch-Grasp baseline trained on the same dataset. We also note that our grasp accuracy is similar to the performance reported (around 80%) in several recent learning-to-grasp papers [7]. However, unlike these methods, we train in a completely different environment (homes) and test in the lab. The improvements of the Robust-Grasp model are also demonstrated with the binary classification metric in Table 1, where it outperforms Patch-Grasp by about 4% on the Lab-Baxter and Home-LCA datasets. Moreover, our visualizations of predicted noise corrections in Fig 4 show that the corrections depend on both the pixel location of the noisy grasp and the specific robot.

5 Related Work

5.1 Large scale robot learning

Over the last few years there has been a growing interest in scaling up robot learning with large scale robot datasets. The Cornell Grasp Dataset [27] was among the first works that released a hand-annotated grasping dataset. Following this, Pinto and Gupta [4] created a self-supervised grasping dataset in which a Baxter robot collected and self-annotated the data. Levine et al. [7] took the next step in robotic data collection by employing an Arm-Farm of several industrial manipulators to learn grasping using reinforcement learning. All of these works use data collected in a restrictive lab environment using high-cost data labelling mechanisms. In our work, we show how low-cost data from a variety of homes can be used to train grasping models. Apart from grasping, there has also been a significant effort in collecting data for other robotic tasks. Agarwal et al. [8], Finn et al. [9], and Pinto and Gupta [28] collected data of a manipulator pushing objects on a table. Similarly, Nair et al. [10] collected data for manipulating a rope on a table, while Yahya et al. [29] used several robots in parallel to train a policy to open a door. Erickson et al. [30], Murali et al. [31], and Calandra et al.
[32] collected datasets of robotic tactile interactions for material recognition and grasp stability estimation. Again, all of this data is collected in a lab environment. We also note several pioneering works in lifelong robotics, such as Veloso et al. [33] and Hawes et al. [34]. In contrast to our work, they focus on navigation and long-term autonomy.

5.2 Grasping

Grasping is one of the fundamental problems in robotic manipulation, and we refer readers to the recent surveys by Bicchi and Kumar [35] and Bohg et al. [36] for a comprehensive review. Classical approaches focused on physics-based analysis of stability [37] and usually require explicit 3D models of the objects. Recent papers have focused on data-driven approaches that directly learn a mapping from visual observations to grasp control [27, 4, 7]. For large-scale data collection, both simulation [25, 38, 39, 40] and real-world robots [4, 7] have been used. Mahler et al. [25] propose a versatile grasping model that achieves 90% grasping performance in the lab on the bin-picking task. However, since this method uses depth as input, we demonstrate that it is challenging to use for home robots, which may not have accurate depth sensing in these environments.

5.3 Learning with low cost robots

Given that most labs run experiments with standard collaborative or industrial robots, there is very limited research on learning with low cost robots and manipulators. Deisenroth et al. [41] used model-based RL to teach a cheap, inaccurate 6 DOF robot to stack multiple blocks. Though mobile robots like iRobot's Roomba have been in the home consumer electronics market for a decade, it is not clear whether they use learning approaches alongside mapping and planning.

5.4 Modelling noise in data

Learning from noisy inputs is a challenging problem that has received significant attention in computer vision. Nettleton et al. [42] show that training models on noisy data detrimentally impacts performance. However, as the work of Frénay and Verleysen [43] points out, the noise can be either independent of the environment or statistically dependent on it. This means that creating models that can account for and correct noise [19, 44] is valuable. Inspired by Misra et al. [19], we present a model that disentangles the noise in the training grasping data to learn a better grasping model.

6 Conclusion

In summary, we present the first effort in collecting large scale robot data inside diverse environments like people's homes. We first assemble a mobile manipulator which costs under 3K USD and collect a dataset of about 28K grasps in six homes under varying environmental conditions. Collecting data with cheap, inaccurate robots introduces the challenge of noisy labels, and we present an architectural framework which factors out the noise in the data. We demonstrate that it is crucial to train models with data collected in households if the goal is to eventually test them in homes. To evaluate our models, we physically tested them by grasping a set of 20 novel objects in the lab and in three unseen home environments from Airbnb. The model trained with our home dataset showed a 43.7% improvement over a model trained with data collected in the lab. Furthermore, our framework performed 33% better than a baseline DexNet model, which struggled with the typically poor depth sensing in common household environments with a lot of natural light. We also demonstrate that our model improves grasp performance in curated environments like the lab.
Our model was also able to successfully disentangle the structured noise in the data, improving performance by about 10%.

ACKNOWLEDGEMENTS

This work was supported by ONR MURI N000141612007. Abhinav was supported in part by a Sloan Research Fellowship and Adithya was partly supported by an Uber Fellowship.
1. What is the main contribution of the paper regarding robot grasping data collection? 2. How does the proposed method differ from other deep learning works in robotics? 3. What are the strengths and weaknesses of the paper's experiments and comparisons with other works? 4. Are there any suggestions or alternatives to improve the proposed method? 5. Are there any grammatical errors or unclear descriptions in the paper that need correction?
Review
Review The authors presented a system to collect large scale robot grasping data in diverse home environments, and showed that the model improved the grasp performance compared to the model trained in a lab environment. To do the data collection, the authors built a low cost mobile manipulation robot platform. To incorporate the noise in the robot and environment, they proposed a noise modeling network to model noise as a latent variable. Compared to other deep learning works in robotics, which are normally done in a simulated or lab environment, the authors tackled an important and also more challenging problem in robotics: how to learn in an unstructured real world environment. The learning framework is built on top of the patch grasping presented by Pinto and Gupta. The authors made modifications to add another noise modeling network to handle real world noise. The paper is well written and easy to understand in most parts. The experiments are very thorough, and are conducted in the real world with comparison to other baseline methods. The results show the benefits of learning in the real world environment, which is not very surprising. The following papers on learning grasping with large scale data collection in simulation are also related and should be cited: "Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping", Bousmalis et al., ICRA 2018; "Multi-task Domain Adaptation for Deep Learning of Instance Grasping from Simulation", Fang et al., ICRA 2018. The following are some detailed questions and comments to the authors: (1) Instead of using the noise modeling network, have you considered using the robot auxiliary information, e.g. the robot id, as input to the grasp prediction network directly? (2) It is not clear what the input "raw pixel location" exactly is from lines 143-145, or from Figure 2. Is it the grasp position in the image space? (3) In line 84, "to due to", extra "to".
NIPS
Title Breaking the centralized barrier for cross-device federated learning

Abstract Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients, which can cause a client drift phenomenon. In fact, designing an algorithm for FL that is uniformly better than simple centralized training has been a major open problem thus far. In this work, we propose a general algorithmic framework, MIME, which i) mitigates client drift and ii) adapts an arbitrary centralized optimization algorithm such as momentum and Adam to the cross-device federated learning setting. MIME uses a combination of control-variates and server-level optimizer state (e.g. momentum) at every client-update step to ensure that each local update mimics that of the centralized method run on i.i.d. data. We prove a reduction result showing that MIME can translate the convergence of a generic algorithm in the centralized setting into convergence in the federated setting. Moreover, we show that, when combined with momentum-based variance reduction, MIME is provably faster than any centralized method–the first such result. We also perform a thorough experimental exploration of MIME's performance on real world datasets (implemented here).

1 Introduction

Federated learning (FL) is an increasingly important large-scale learning framework where the training data remains distributed over a large number of clients, which may be mobile phones or network sensors [38, 37, 43, 44, 28]. A server then orchestrates the clients to train a single model, here referred to as a server model, without ever transmitting client data over the network, thereby providing some basic levels of data privacy and security. Two important settings are distinguished in FL [28, Table 1]: the cross-device and the cross-silo settings. The cross-silo setting corresponds to a relatively small number of reliable clients, typically organizations such as medical or financial institutions. In contrast, in the cross-device federated learning setting, the number of clients may be extremely large and include, for example, all 3.5 billion active Android phones [25]. Thus, in that setting, we may never make even a single pass over the entire clients' data during training. The cross-device setting is further characterized by resource-poor clients communicating over a highly unreliable network. Together, the essential features of this setting give rise to unique challenges not present in the cross-silo setting. In this work, we are interested in the more challenging cross-device setting, for which we will formalize and study stochastic optimization algorithms. Importantly, recent advances in FL optimization, such as SCAFFOLD [32] or FedDyn [1], are no longer applicable since they are designed for the cross-silo setting.

∗This work also appears under the alternative title "Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning" [31].

The problem. The de facto standard algorithm for the cross-device setting is FEDAVG [43], which performs multiple SGD updates on the available clients before communicating to the server. While this approach can reduce the frequency of communication required, performing multiple steps on the same client can lead to 'over-fitting' to its atypical local data, a phenomenon known as client drift [32].
This in turn leads to slower convergence and can, somewhat counter-intuitively, require larger total communication [69]. Despite significant attention from the optimization community, the communication complexity of heterogeneous cross-device FL has not improved upon that of simple centralized methods, which take no local steps (aka SERVER-ONLY methods). Furthermore, algorithmic innovations such as momentum [59, 14], adaptivity [35, 75, 77], and clipping [71, 72, 76] are critical to the success of deep learning applications. The lack of a theoretical understanding of the impact of multiple client steps has also hindered adapting these techniques in a principled manner into the client updates, in order to replace the vanilla SGD update of FEDAVG. To overcome such deficiencies, we propose a new framework, MIME, that mitigates client drift and can adapt an arbitrary centralized optimization algorithm, e.g. SGD with momentum or Adam, to the federated setting. In each local client update, MIME uses global optimizer state, e.g. momentum or adaptive learning rates, and an SVRG-style correction to mimic the updates of the centralized algorithm run on i.i.d. data. This optimizer state is computed only at the server level and kept fixed throughout the local steps, thereby avoiding overfitting to the atypical local data of any single client.

Contributions. We summarize our main results below.
• MIME framework. We formalize the cross-device federated learning problem, and propose a new framework MIME that can adapt arbitrary centralized algorithms to this setting.
• Convergence result. We prove a result showing that MIME successfully reduces client drift. We also prove that the convergence of any generic algorithm in the centralized setting translates into convergence of its MIME version in the federated setting.
• Speed-up over centralized methods. By carefully tracking the bias introduced due to multiple local steps, we prove that MIME with momentum-based variance reduction (MVR) can beat a lower bound for centralized methods, thus breaking a fundamental barrier. This is the first such result in FL, and also the first general result showing asymptotic speed-up due to local steps.
• Empirical validation. We propose a simpler variant, MIMELITE, with empirical performance similar to MIME. We report the results of a thorough experimental analysis demonstrating that both MIME and MIMELITE indeed converge faster than FEDAVG.

Related work. Analysis of FEDAVG: Much of the recent work in federated learning has focused on analyzing FEDAVG. For identical clients, FEDAVG coincides with parallel SGD, for which [78] derived an analysis with asymptotic convergence. Sharper and more refined analyses of the same method, sometimes called local SGD, were provided by [56], and more recently by [57], [47], [34], and [70], for identical functions. Their analysis was extended to heterogeneous clients in [68, 74, 32, 34, 36]. [11] derived a tight characterization of FedAvg with quadratic functions and demonstrated the sensitivity of the algorithm to both client and server step sizes. Matching upper and lower bounds were recently given by [32] and [69] for general functions, proving that FEDAVG can be slower than even SGD for heterogeneous data, due to client drift.

Comparison to SCAFFOLD: For the cross-silo setting, where the number of clients is relatively low, [32] proposed the SCAFFOLD algorithm, which uses control-variates (similar to SVRG) to correct for client drift.
However, their algorithm crucially relies on stateful clients which repeatedly participate in the training process. FedDyn [1] reduces the communication requirements, but also requires persistent stateful clients. In contrast, we focus on the cross-device setting, where clients may be visited only once during training and where they are stateless (and thus SCAFFOLD and FedDyn are inapplicable). This is akin to the difference between the finite-sum (corresponding to cross-silo) and stochastic (cross-device) settings in traditional centralized optimization [39].

Comparison to FedAvg and variants: [26] and [67] observed that using server momentum significantly improves over vanilla FEDAVG. This idea was generalized by [49], who replaced the server update with an arbitrary optimizer, e.g. Adam. However, these methods only modify the server update while using SGD for the client updates. We henceforth refer to this meta algorithm as FedAvg. FedAvgSGD, FedAvgMom, and FedAvgAdam denote specific instantiations of the server optimizer in FedAvg with SGD, Momentum, or Adam. MIME, on the other hand, ensures that every local client update resembles the centralized optimizer; e.g. MIME would apply momentum in every client update and not just at the server level. Beyond this, [40] proposed to add a regularizer to ensure client updates remain close. However, this may slow down convergence (cf. Fig. 5 and [32, 66]). Other orthogonal directions which can be combined with MIME include tackling computation heterogeneity, where some clients perform many more updates than others [66], improving fairness by modifying the objective [44, 41], incorporating differential privacy [20, 2, 61], Byzantine adversaries [48, 65, 30], secure aggregation [8, 24], etc. We defer additional discussion to the extensive survey by [28].

Momentum based variance reduction. Initial optimal methods for stochastic non-convex optimization, like SPIDER [17] and SARAH [46], required intermittently computing very large batch gradients. Subsequently, it was shown that momentum based variance reduction (MVR) methods obtain a similar optimal rate without needing such large batch gradient computations [62, 14]. Momentum is an exponential moving average of many stochastic gradients, and so it has much smaller variance than the stochastic gradients themselves. However, because these gradients are computed at different parameters, it also has a bias. MVR adds a small additional correction term which significantly reduces this bias and provides improved rates.
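To make this concrete, here is a minimal sketch of the MVR recursion on a toy quadratic objective. It is an illustration under stated assumptions, not the paper's implementation: the objective, the noise model, and all names are ours. The key point is that the same stochastic sample is evaluated at both the current and previous iterate, so that the correction term cancels most of the bias of plain momentum.

```python
import numpy as np

def stoch_grad(x, noise):
    # stochastic gradient of the toy objective f(x) = 0.5 * ||x||^2
    return x + noise

def mvr(x0, eta=0.1, a=0.1, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = stoch_grad(x, 0.1 * rng.standard_normal(x.shape))  # initial estimate
    for _ in range(steps):
        x_new = x - eta * d
        zeta = 0.1 * rng.standard_normal(x.shape)  # ONE sample, reused twice
        # momentum plus the MVR bias correction: the same sample zeta is
        # evaluated at both x_new and x
        d = stoch_grad(x_new, zeta) + (1 - a) * (d - stoch_grad(x, zeta))
        x = x_new
    return x
```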
2 Problem setup

This section formalizes the problem of cross-device federated learning [28]. Cross-device FL is characterized by a large number of client devices, like mobile phones, which may potentially connect to the server at most once. Due to their transient nature, it is not possible to store any state on the clients, precluding an algorithm like SCAFFOLD. Furthermore, each client has only a few samples, and there is wide heterogeneity in the samples across clients. Finally, communication is a major bottleneck, and a key metric for optimization in this setting is the number of communication rounds. Thus, our objective will be to minimize the following quantity within the fewest number of client-server communication rounds:

f(x) = E_{i∼C} [ f_i(x) := (1/n_i) Σ_{ν=1}^{n_i} f_i(x; ζ_{i,ν}) ] .   (1)

Here, f_i denotes the loss function of client i and {ζ_{i,1}, . . . , ζ_{i,n_i}} its local data. Since the number of clients is extremely large, while the size of each local dataset is rather modest, we represent the former as an expectation and the latter as a finite sum. In each round, the algorithm samples a subset of clients (of size S) and performs some updates to the server model. Due to the transient and heterogeneous nature of the clients, it is easy to see that the problem becomes intractable with arbitrarily dissimilar clients. Thus, it is necessary to assume bounded dissimilarity across clients.

(A1) G²-BGV or bounded inter-client gradient variance: there exists G ≥ 0 such that E_{i∼C}[ ‖∇f_i(x) − ∇f(x)‖² ] ≤ G² , ∀x .

Next, we also characterize the variance in the Hessians.

(A2) δ-BHV or bounded Hessian variance: Almost surely, the loss function of any client i satisfies ‖∇²f_i(x; ζ) − ∇²f(x)‖ ≤ δ , ∀x .

This is in contrast to the usual smoothness assumption, which can be stated as:

(A2*) L-smooth: ‖∇²f_i(x; ζ)‖ ≤ L , ∀x , a.s. for any i.

Note that if f_i(x; ζ) is L-smooth then (A2) is satisfied with δ ≤ 2L, and hence (A2) is weaker than (A2*). In realistic examples we expect the clients to be similar and hence that δ ≪ L. In addition, we assume that f(x) is bounded from below by f* and is L-smooth, as is standard.

3 Mime framework

In this section we describe how to adapt an arbitrary centralized optimizer (referred to as the "base" optimizer), which may have internal state (e.g. momentum), to the federated learning problem (1) while ensuring there is no client drift. Algorithm 1 describes our framework. We develop two variants, MIME and MIMELITE, which consist of three components: i) a base optimizer we are seeking to mimic, ii) the global (server) optimizer state computation, and iii) the local client updates.

Algorithm 1 Mime and MimeLite
  input: initial x and s, learning rate η, and base optimizer B = (U, V)
  for each round t = 1, · · · , T do
    sample subset S of clients
    communicate (x, s) to all clients i ∈ S
    communicate c ← (1/|S|) Σ_{j∈S} ∇f_j(x)   (only Mime)
    on client i ∈ S in parallel do
      initialize local model y_i ← x
      for k = 1, · · · , K do
        sample mini-batch ζ from local data
        g_i ← ∇f_i(y_i; ζ) − ∇f_i(x; ζ) + c   (Mime)
        g_i ← ∇f_i(y_i; ζ)   (MimeLite)
        update y_i ← y_i − η U(g_i, s)
      end for
      compute full local-batch gradient ∇f_i(x)
      communicate (y_i, ∇f_i(x))
    end on client
    s ← V( (1/|S|) Σ_{i∈S} ∇f_i(x), s )   (update optimizer state)
    x ← (1/|S|) Σ_{i∈S} y_i   (update server parameters)
  end for

Base optimizer. We assume the centralized base optimizer we are imitating can be decomposed into two steps: an update step U, which updates the parameters x, and an optimizer state update step V(·), which keeps track of the global optimizer state s. Each step of the base optimizer B = (U, V) uses a gradient g to update the parameter x and the optimizer state s as follows:

x ← x − η U(g, s) ,  s ← V(g, s) .   (BASEOPT)

As an example, consider SGD with momentum. The state here is the momentum m_t, with the following update steps:

x_t = x_{t−1} − η ((1 − β)∇f_i(x_{t−1}) + β m_{t−1}) ,  m_t = (1 − β)∇f_i(x_{t−1}) + β m_{t−1} .

Thus, SGD with momentum can be represented in the above generic form with U(g, s) = (1 − β)g + βs and V(g, s) = (1 − β)g + βs. Table 5 in the Appendix shows how other algorithms like Adam, Adagrad, etc. can be represented in this manner. We keep the update U linear in the gradient g, whereas V can be more complicated. This implies that the parameter update step U is relatively resilient to receiving a biased gradient g, while V can be much more sensitive.
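As a minimal, purely illustrative sketch (plain Python, names ours), the (U, V) decomposition for SGD with momentum looks as follows. Note that U is linear in g for a fixed state s, which is exactly the property the analysis below relies on.

```python
def U(g, s, beta=0.9):
    # parameter update direction; linear in the gradient g for a fixed state s
    return (1 - beta) * g + beta * s

def V(g, s, beta=0.9):
    # optimizer state (momentum) update
    return (1 - beta) * g + beta * s

def base_opt_step(x, s, g, eta=0.01):
    # one centralized step (BASEOPT); works on floats or NumPy arrays
    return x - eta * U(g, s), V(g, s)
```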
Compute optimizer state globally, apply locally. When updating the optimizer state of the base algorithm, we use only the gradient computed at the server parameters. Further, this state remains fixed throughout the local updates of the clients. This ensures that the optimizer state remains unbiased and representative of the global function f(·). At the end of the round, the server performs

s ← V( (1/|S|) Σ_{i∈S} ∇f_i(x), s ) ,  where ∇f_i(x) = (1/n_i) Σ_{ν=1}^{n_i} ∇f_i(x; ζ_{i,ν}) .   (OPTSTATE)

Note that we use full-batch gradients computed at the server parameters x, not the client parameters y_i.

Local client updates. Each client i ∈ S performs K updates using U of the base algorithm and a minibatch gradient. Two variants are possible, corresponding to MIME and MIMELITE. Starting from y_i ← x, repeat the following K times:

y_i ← y_i − η U(g_i, s) ,   (CLIENTSTEP)

where g_i ← ∇f_i(y_i; ζ) for MIMELITE, and g_i ← ∇f_i(y_i; ζ) − ∇f_i(x; ζ) + (1/|S|) Σ_{j∈S} ∇f_j(x) for MIME. MIMELITE simply uses the local minibatch gradient, whereas MIME uses an SVRG-style correction [27]. This is done to reduce the noise from sampling a local mini-batch. While this correction yields faster rates in theory (and in practice for convex problems), in deep learning applications we found that MIMELITE closely matches the performance of MIME. Finally, there are two modifications made in practical FL: we weight all averages across the clients by the number of datapoints n_i [43], and we perform K epochs instead of K steps [66]. A minimal sketch of the resulting client update is given below.
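The sketch below shows one client's local loop (CLIENTSTEP) with the fixed server momentum state. It is a hedged illustration in plain Python/NumPy: grad_fi (the client's minibatch gradient oracle) and sample_batch are hypothetical stand-ins, and the momentum form of U matches the earlier (U, V) sketch.

```python
import numpy as np

def mime_client_update(x, s, c, grad_fi, sample_batch,
                       K=10, eta=0.01, beta=0.9, lite=False):
    # x: server parameters; s: server momentum state (kept FIXED locally);
    # c: average full-batch gradient over sampled clients, computed at x.
    y = np.array(x, dtype=float)
    for _ in range(K):
        batch = sample_batch()  # a local mini-batch zeta
        if lite:
            g = grad_fi(y, batch)                          # MimeLite
        else:
            g = grad_fi(y, batch) - grad_fi(x, batch) + c  # Mime (SVRG-style)
        y = y - eta * ((1 - beta) * g + beta * s)          # CLIENTSTEP with U
    return y
```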
4 Theoretical analysis of Mime

Table 1 summarizes the rates of MIME and MIMELITE and compares them to SERVER-ONLY methods when using SGD, Adam, and momentum methods as the base algorithms. We will first examine the convergence of MIME and MIMELITE with a generic base optimizer and show that its properties are preserved in the federated setting. We then examine a specific momentum-based base optimizer, and prove that Mime and MimeLite can be asymptotically faster than the best server-only method. This is the first result to prove the usefulness of local steps and demonstrate asymptotic speed-ups.

4.1 Convergence with a generic base optimizer

We will prove a generic reduction result demonstrating that if the underlying base algorithm converges, and is robust to slight perturbations, then MIME and MIMELITE also preserve the convergence of the algorithm when applied to the federated setting with additional local steps.

Theorem I. Suppose that we have G² inter-client gradient variance (A1), L-smooth {f_i} (A2*), and σ² intra-client gradient variance (A3). Further, suppose that the update step U of our base optimizer B = (U, V) satisfies i) linearity for a fixed state s: U(g_1 + g_2; s) = U(g_1; s) + U(g_2; s), and ii) Lipschitzness: ‖U(g; s)‖ ≤ B‖g‖ for some B ≥ 0. Then, running MIME or MIMELITE with K local updates and step-size η is equivalent to running a centralized algorithm with step-size η̃ := Kη ≤ 1/(2LB), and updates

x_t ← x_{t−1} − η̃ U(g_t + e_t, s_{t−1}) , and s_t ← V(g_t, s_{t−1}) ,

where we have an unbiased gradient E_t[g_t] = ∇f(x_{t−1}), with variance bounded as

E_t‖g_t − ∇f(x_{t−1})‖² ≤ G²/S for MIME, and ≤ G²/S + σ²/(KS) for MIMELITE,

and finally a small error bounded as

(1/(B²L²η̃²)) E_t‖e_t‖² ≤ E_t‖g_t‖² for MIME, and ≤ E_t‖g_t‖² + G² + σ²/K for MIMELITE.

Here, we have proven that MIME and MIMELITE truly mimic the centralized base algorithm with very small perturbations: the magnitude of e_t is O(η̃²). The key to the result is the linearity of the parameter update step U(· ; s). By separating the base optimizer into a very simple parameter step U and a more complicated optimizer state update step V, we can ensure that commonly used algorithms such as momentum, Adam, Adagrad, and others all satisfy this property. Armed with this general reduction, we can easily obtain specific convergence results.

Corollary II (Mime/MimeLite with SGD). Given that the conditions in Theorem I are satisfied, let us run T rounds with K local steps using SGD as the base optimizer and output x_out. This output satisfies E‖∇f(x_out)‖² ≤ ε for F := f(x_0) − f*, G̃² := G² + σ²/K, and
• µ-PL inequality: η = Õ( 1/(µKT) ), and T = Õ( LG²/(µSε) + (LF/µ) log(1/ε) ) for MIME, and T = Õ( LG̃²/(µSε) + LG̃/(µ√ε) + (LF/µ) log(1/ε) ) for MIMELITE.
• Non-convex: η = O( √(FS/(LG̃²TK²)) ), and T = O( LG²F/(Sε²) + LF/ε ) for MIME, and T = O( LG̃²F/(Sε²) + L²G̃F/ε^{3/2} + LF/ε ) for MIMELITE.

Table 1: Number of communication rounds required to reach ‖∇f(x)‖² ≤ ε (log factors are ignored) with S clients sampled each round. All analyses except SCAFFOLD assume G² bounded gradient dissimilarity (A1). All analyses assume L-smooth losses, except MimeLiteMVR and MimeMVR, which only assume δ bounded Hessian dissimilarity (A2). Convergence of SCAFFOLD depends on the total number of clients N, which is potentially infinite. FEDAVG and MIMELITE are slightly slower than the server-only methods due to additional drift terms in most cases. MIME is the fastest and either matches or improves upon the optimal statistical rates (first term in the rates). In fact, MimeMVR and MimeLiteMVR beat lower bounds for any server-only method when δ ≪ L.

Algorithm | Non-convex | µ-PL inequality
SCAFFOLD^a [32] | (N/S)^{2/3} · L/ε | N/S + L/µ
SGD:
  SERVER-ONLY [21] | LG²/(Sε²) + L/ε | G²/(µSε) + L/µ
  MimeLiteSGD ≡ FedAvgSGD^c | LG²/(Sε²) + L²G/ε^{3/2} + L/ε | G²/(µSε) + LG/(µ√ε) + L/µ
  MimeSGD | LG²/(Sε²) + L/ε | G²/(µSε) + L/µ
ADAM:
  SERVER-ONLY [75]^b | L/(ε − G²/S) | –
  MimeLiteAdam^{b,c} | L√S/(ε − G²/S) | –
  MimeAdam^b | L/(ε − G²/S) | –
Momentum Variance Reduction (MVR):
  SERVER-ONLY [14] | LG/(√S · ε^{3/2}) + G²/(Sε) + L/ε | –
  MimeLiteMVR^d | δ(G + σ)/ε^{3/2} + (G² + σ²)/ε + δ/ε | –
  MimeMVR^d | δG/(√S · ε^{3/2}) + G²/(Sε) + δ/ε | –
SERVER-ONLY lower bound [5] | Ω( LG/(√S · ε^{3/2}) + G²/(Sε) + L/ε ) | Ω( G²/(Sε) )

a Num. clients (N) can be of the same order as the total number of rounds, or even ∞, making the bounds vacuous.
b Adam requires a large batch size S ≥ G²/ε to converge [50, 75]. Convergence of FedAdam with client sampling is unknown ([49] only analyze full client participation).
c Requires K ≥ σ²/G² local updates. Typically, intra-client variance is small (σ² ≲ G²).
d Requires K ≥ L/δ local updates. Faster than the lower bound (and hence any SERVER-ONLY algorithm) when δ ≪ L, i.e. our methods can take advantage of Hessian similarity, whereas SERVER-ONLY methods cannot. In the worst case, δ ≈ L and all methods are comparable.

If we take a sufficient number of local steps K ≥ G²/σ², then we have G̃ = O(G) in the above rates. On comparing with the rates in Table 1 for SERVER-ONLY SGD, we see that MIME exactly matches its rates. MIMELITE matches the asymptotic term but has a few higher order terms. Note that when using SGD as the base optimizer, MIMELITE becomes exactly the same as FEDAVG and hence has the same rate of convergence.

Corollary III (Mime/MimeLite with Adam). Suppose that the conditions in Theorem I are satisfied, and further |∇_j f_i(x)| ≤ H for any coordinate j ∈ [d]. Then let us run T rounds using Adam as the base optimizer with K local steps, β_1 = 0, ε_0 > 0, η ≤ ε_0²/(KL(H + ε_0)), and any β_2 ∈ [0, 1). The output x_out, chosen randomly from {x_1, . . . , x_T}, satisfies E‖∇f(x_out)‖² ≤ ε for

T = O( LF(H + ε_0)² / (ε_0²(ε − G̃²/S)) ) for MIME Adam, and T = O( LF(H + ε_0)²√S / (ε_0²(ε − G̃²/S)) ) for MIMELITE Adam,

where F := f(x_0) − f* and G̃² := G² + σ²/K.
Note that here ε_0 represents a small positive parameter used in Adam for regularization, and is different from the error ε. Similar to the SERVER-ONLY analysis of Adam [75], we assume β_1 = 0 and that the batch size is large enough such that S ≥ G²/ε. A similar analysis can also be carried out for AdaGrad and other novel variants of Adam [42].

4.2 Circumventing server-only lower bounds

The rates obtained above, while providing a safety check, do not beat those of the SERVER-ONLY approach. The previous best rates for cross-device FL correspond to MimeLiteSGD, which is O( LG²/(Sε²) + L²G/ε^{3/2} ) [34, 36, 69]. While using a separate server learning rate can remove the effect of the second term [33], this at best matches the rate of SERVER-ONLY SGD, O( LG²/(Sε²) ). This is significantly slower than simply using momentum based variance reduction (MVR) in the FL setting (SERVER-ONLY MVR), which has a communication complexity of O( LG/(√S · ε^{3/2}) ) [14]. Thus, even though the main reason for studying local-step methods was to improve the communication complexity, none thus far show such an improvement.

The above difficulty of beating SERVER-ONLY may not be surprising given the two sets of strong lower bounds known.

Necessity of local steps. Firstly, [5] show a gradient oracle lower bound of Ω( LG/(√S · ε^{3/2}) ). This matches the complexity of MVR, and hence at first glance it seems that SERVER-ONLY MVR is optimal. However, the lower bound is really only on the number of gradients computed, and not on the number of clients sampled (sample complexity) [18] or the number of rounds of communication required. In particular, multiple local updates increase the number of gradients computed without needing additional communication, offering us a potential way to side-step such lower bounds. A careful analysis of the bias introduced as a result of such local steps is a key part of our analysis.

Necessity of δ-BHD. A second set of lower bounds directly studies the number of communication rounds required in heterogeneous optimization [6, 69]. These results prove that there exist settings where local steps provide no advantage and SERVER-ONLY methods are optimal. This, however, contradicts real world experimental evidence [43]. As before, the disparity arises due to the contrived settings considered by the lower bounds. For distributed optimization (with full client participation) and convex quadratic objectives, δ-BHD (A2) was shown to be a sufficient [54, 51] and necessary [6] condition to circumvent these lower bounds and yield highly performant methods. We similarly leverage δ-BHD (A2) to design novel methods which significantly extend prior results to i) all smooth non-convex functions (not just quadratics), and ii) cross-device FL with client sampling.

We now state our convergence results with momentum based variance reduction (MVR) as the base algorithm, since it is known to be optimal in the SERVER-ONLY setting.

Theorem IV. For L-smooth f with G² gradient dissimilarity (A1), δ Hessian dissimilarity (A2), and F := f(x_0) − f*, let us run MVR as the base algorithm for T rounds with K ≥ L/δ local steps and generate an output x_out. This output satisfies E‖∇f(x_out)‖² ≤ ε for
• MimeMVR: η = O( min( 1/(δK), (SF/(G²TK³))^{1/3} ) ), momentum β = 1 − O( δ²S^{2/3}/(TG²)^{2/3} ), and T = O( δGF/(√S · ε^{3/2}) + G²/(Sε) + δF/ε ).
• MimeLiteMVR: η = O( min( 1/(δK), (F/(Ĝ²TK³))^{1/3} ) ), momentum β = 1 − O( δ²/(TĜ²)^{2/3} ), and T = O( δĜF/ε^{3/2} + Ĝ²/ε + δF/ε ).
Here, we define Ĝ² := G² + σ², and the expectation in E‖∇f(x_out)‖² ≤ ε is taken over the sampling of the clients during the running of the algorithm, the sampling of the mini-batches in the local updates, and the choice of x_out (which is chosen randomly from the client iterates y_i). Remarkably, the rates of our methods are independent of L and only depend on δ. Thus, when δ ≤ L and δ ≤ L/S for MimeMVR and MimeLiteMVR respectively, the rates beat the server-only lower bound of Ω( LG/(√S · ε^{3/2}) ). In fact, if the Hessian variance is small and δ ≈ 0, our methods only need O(1/ε) rounds of communication. Intuitively, our results show that local steps are very useful when heterogeneity (represented by δ) is smaller than the optimization difficulty (captured by the smoothness constant L). MimeMVR uses a momentum parameter β of the order of 1 − O((TG²)^{−2/3}), i.e. as T increases, β asymptotically approaches 1. In contrast, previous analyses of distributed momentum (e.g. [73]) prove rates of the form G²/(S(1 − β)ε²), which are worse than that of standard SGD by a factor of 1/(1 − β). Thus, ours is also the first result which theoretically showcases the usefulness of using large momentum in distributed and federated learning. While we only prove the utility of local steps for MimeMVR, we believe our theory can be extended to other local update methods as well. Our analysis is highly non-trivial and involves two crucial ingredients: i) computing the momentum at the server level to ensure that it remains unbiased, and then applying it locally during every client update to reduce variance, and ii) carefully keeping track of the bias introduced via additional local steps. Our experiments (Sec. 5) verify that our theoretical insights are indeed applicable in deep learning settings as well. See App. B for a proof sketch and App. G–H for detailed proofs.

5 Experimental analysis on real world datasets

We run experiments on natively federated datasets to confirm our theory and accurately measure real world performance. Our main findings are: i) MIME and MIMELITE consistently outperform FEDAVG, and ii) momentum and adaptivity significantly improve performance.

5.1 Setup

Algorithms. We consider three (meta) algorithms: FEDAVG, MIME, and MIMELITE. Each of these adapts four base optimizers: SGD, momentum, Adam, and Adagrad. FEDAVG follows [49], who run multiple epochs of SGD on each client sampled and then aggregate the net client updates. This aggregated update is used as a pseudo-gradient in the base optimizer (called the server optimizer). The learning rate for the server optimizer is fixed to 1, as in [67]. This is done to ensure all algorithms have the same number of hyper-parameters. MIME and MIMELITE follow Algorithm 1 and also run a fixed number of epochs on the client. However, note that this requires communicating both the full local-batch gradient as well as the parameter updates, doubling the communication sent by the client. For a fairer comparison, we split the sampled clients in MIME and MIMELITE into two groups: the first communicates only the full local-batch gradient and the latter communicates only the parameter updates. Thus, all methods have equal client-to-server communication. This variant retains the convergence guarantees up to constants (details in the Appendix). We also run Loc-MIME, where instead of keeping the global optimizer state fixed, we update it locally within the client.
The optimizer state is reset after the round finishes. In all methods, aggregation is weighted by the number of samples on the clients.

Datasets and models. We run five simulations on three real-world federated datasets: EMNIST62 with i) a linear classifier, ii) an MLP, and iii) a CNN; iv) a charRNN on Shakespeare; and v) an LSTM for next word prediction on StackOverflow, all accessed through TensorFlow Federated [60]. The learning rates were individually tuned, and other optimizer hyper-parameters such as β for momentum and β_1, β_2, ε_0 for Adam and AdaGrad were left at their default values, unless explicitly stated otherwise. We refer to Appendix C for additional setup details and discussion.

5.2 Ablation and comparative study

In order to study the different algorithms, we train a 2-hidden-layer (300-100) MLP on EMNIST62 with 10 local epochs for 1k rounds, and use SGD+momentum (with tuned β) as the base optimizer.

Mime ≈ MimeLite > FedAvg > SCAFFOLD > FedProx. Fig. 1 (left) shows that MIME and MIMELITE have nearly identical performance, and are about 7× faster than FedAvg. This implies that our strategy of applying momentum to client updates is faster than simply using server momentum. FedProx [40] uses an additional regularizer with weight µ, tuned over [0.1, 0.5, 1] (µ = 0 is the same as FedAvg). Regularization does not seem to reduce client drift but still slows down convergence [66]. SCAFFOLD [32] is also slower than Mime and FedAvg in this setup. This is because, in the cross-device setting, the large number of clients (N = 3.4k) means that each client is visited fewer than 6 times during the entire training (20 clients per round for 1k rounds). Hence, the correction term utilized by SCAFFOLD uses control-variates which are quite stale (computed about 200 rounds ago), which slows down convergence. In contrast, the SVRG correction term in Mime is computed using clients sampled in the current or previous round, and so is much more accurate.

With momentum > without momentum. Fig. 1 (center) examines the impact of momentum on FedAvg and Mime. Momentum slightly improves the performance of FedAvg, whereas it has a significant impact on the performance of Mime. This is also in line with our theory and confirms that Mime's strategy of applying momentum locally at every client update makes better use of it.

Fixed > locally updated optimizer state. Finally, we check how the performance of Mime changes if, instead of keeping the momentum fixed throughout a round, we let it change. The latter is a way to combine global and local momentum. The momentum is reset at the end of the round, ignoring the changes the clients make to it. Fig. 1 (right) shows that this worsens performance, confirming that it is better to keep the global optimizer state fixed, as predicted by our theory. Together, the above observations validate all aspects of the Mime (and MimeLite) design: compute statistics at the server level, and apply them unchanged at every client update.

5.3 Large scale comparison with equal server and client communication

We perform a larger scale study closely matching the setup of [49]. For both MIME and MIMELITE, only half the clients compute and transmit the updated parameters, while the other half transmit the full local-batch gradients. Hence, the client-to-server communication cost is the same for all methods and all clients. However, MIME and MIMELITE require sending additional optimizer state to the clients.
Hence, we also reduce the number of clients sampled in each round to ensure that the sum total of communication in each round is 40× the model size for the EMNIST and Shakespeare experiments, and 100× the model size for the StackOverflow next word prediction experiment. Since we only perform 1 local epoch, the hyper-parameters (e.g. epsilon for the adaptive methods) are more carefully chosen following [49], and MIME and MIMELITE use significantly fewer clients per round; hence, the difference between FEDAVG and MIME is smaller here. Table 2 summarizes the results. For the image classification tasks of EMNIST62 logistic and EMNIST62 CNN, Mime and MimeLite with Adam achieve the best performance. Using momentum (both with SGD and in Adam) significantly improves their performance. In contrast, FedAvgAdam is more unstable, with worse performance. This is because FedAvg is excessively sensitive to hyper-parameters (cf. App. E). We next consider the character prediction task on the Shakespeare dataset, and next word prediction on StackOverflow. Here, the momentum based methods (SGD+momentum and Adam) are slower than their non-momentum counterparts (vanilla SGD and AdaGrad). This is because the mini-batch gradients in these tasks are sparse, with the gradients corresponding to tokens not in the mini-batch being zero. This sparsity structure is, however, destroyed when using momentum or Adam. For the same reason, Mime, which uses an SVRG correction, also significantly increases the gradient density.

Discussion. For traditional tasks such as image classification, we observe that Mime (especially with Adam) usually outperforms MimeLite, which in turn outperforms FedAvg. These methods are able to successfully leverage momentum and adaptivity to improve performance. For tasks where the client gradients are sparse, the SVRG correction used by Mime hinders performance. Adapting our techniques to work with sparse gradients (à la Yogi [75]) could lead to further improvements. Also, note that we reduce communication by naïvely reducing the number of participating clients per round. More sophisticated approaches to save on client communication, including quantization or sparsification [58, 3], or even novel algorithmic innovations [1], could be explored. Further, server communication could be reduced using memory-efficient optimizers, e.g. AdaFactor [55] or SM3 [4].

6 Conclusion

Our work initiated a formal study of the cross-device federated learning problem and provided theoretically justified algorithms. We introduced a new framework, MIME, which overcomes the natural client heterogeneity in such a setting and can adapt arbitrary centralized algorithms such as Adam without additional hyper-parameters. We demonstrated the superiority of MIME via strong convergence guarantees and empirical evaluations. Further, we proved that a particular instance of our method, MimeMVR, beats centralized lower bounds, demonstrating for the first time that additional local steps can yield asymptotic improvements. We believe our analysis will be of independent interest beyond the federated setting for understanding the sample complexity of non-convex optimization, and for yielding improved analyses of decentralized optimization algorithms.
1. What is the main contribution of the paper in converting centralized optimization algorithms to federated learning? 2. What are the strengths of the proposed framework, particularly in its theoretical analysis and experimental performance? 3. Do you have any concerns or questions regarding the paper's claims and comparisons with other works?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a framework MIME to convert centralized optimization algorithms to the federated learning setting. The key components are some control variates to reduce the effect of data distribution heterogeneity. Contribution: A framework to convert centralized algorithms to the federated learning setting. Theoretical analysis to characterize the convergence of converted algorithms Experiments show MIME framework can have better performance than FedAvg. Review Strengths: The convergence rate of converted algorithms matches their centralized versions. MimeMVR provides an improved convergence rate assuming small Hessian variance. This is an interesting theoretical result. The algorithms converted by Mime show strong empirical performance. Weaknesses: Since the 'breaking the lower bound' contribution is highlighted in the title, this point may deserve more discussion. More specifically, it will be helpful to discuss some technicality on how to use the Hessian variance to improve convergence rate and why centralized algorithms cannot achieve such a rate. --------------after rebuttal------------- My concerns are addressed.
NIPS
Title Breaking the centralized barrier for cross-device federated learning Abstract Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients which can cause a client drift phenomenon. In fact, designing an algorithm for FL that is uniformly better than simple centralized training has been a major open problem thus far. In this work, we propose a general algorithmic framework, MIME, which i) mitigates client drift and ii) adapts an arbitrary centralized optimization algorithm such as momentum and Adam to the cross-device federated learning setting. MIME uses a combination of control-variates and server-level optimizer state (e.g. momentum) at every client-update step to ensure that each local update mimics that of the centralized method run on i.i.d. data. We prove a reduction result showing that MIME can translate the convergence of a generic algorithm in the centralized setting into convergence in the federated setting. Moreover, we show that, when combined with momentum-based variance reduction, MIME is provably faster than any centralized method–the first such result. We also perform a thorough experimental exploration of MIME’s performance on real world datasets (implemented here). 1 Introduction Federated learning (FL) is an increasingly important large-scale learning framework where the training data remains distributed over a large number of clients, which may be mobile phones or network sensors [38, 37, 43, 44, 28]. A server then orchestrates the clients to train a single model, here referred to as a server model, without ever transmitting client data over the network, thereby providing some basic levels of data privacy and security. Two important settings are distinguished in FL [28, Table 1]: the cross-device and the cross-silo settings. The cross-silo setting corresponds to a relatively small number of reliable clients, typically organizations, such as medical or financial institutions. In contrast, in the cross-device federated learning setting, the number of clients may be extremely large and include, for example, all 3.5 billion active android phones [25]. Thus, in that setting, we may never make even a single pass over ∗This work was also appears under the alternative title “Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning” [31]. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). the entire clients’ data during training. The cross-device setting is further characterized by resourcepoor clients communicating over a highly unreliable network. Together, the essential features of this setting give rise to unique challenges not present in the cross-silo setting. In this work, we are interested in the more challenging cross-device setting, for which we will formalize and study stochastic optimization algorithms. Importantly, recent advances in FL optimization, such as SCAFFOLD [32] or FedDyn [1], are not anymore applicable since they are designed for the cross-silo setting. The problem. The de facto standard algorithm for the cross-device setting is FEDAVG [43], which performs multiple SGD updates on the available clients before communicating to the server. While this approach can reduce the frequency of communication required, performing multiple steps on the same client can lead to ‘over-fitting’ to its atypical local data, a phenomenon known as client drift [32]. 
This in turn leads to slower convergence and can, somewhat counter-intuitively, require larger total communication [69]. Despite significant attention received from the optimization community, the communication complexity of heterogeneous cross-device has not improved upon that of simple centralized methods, which take no local steps (aka SERVER-ONLY methods). Furthermore, algorithmic innovations such as momentum [59, 14], adaptivity [35, 75, 77], and clipping [71, 72, 76] are critical to the success of deep learning applications. The lack of a theoretical understanding of the impact of multiple client steps has also hindered adapting these techniques in a principled manner into the client updates, in order to replace the vanilla SGD update of FEDAVG. To overcome such deficiencies, we propose a new framework, MIME, that mitigates client drift and can adapt an arbitrary centralized optimization algorithm, e.g. SGD with momentum or Adam, to the federated setting. In each local client update, MIME uses global optimizer state, e.g. momentum or adaptive learning rates, and an SVRG-style correction to mimic the updates of the centralized algorithm run on i.i.d. data. This optimizer state is computed only at the server level and kept fixed throughout the local steps, thereby avoiding overfitting to the atypical local data of any single client. Contributions. We summarize our main results below. • MIME framework. We formalize the cross-device federated learning problem, and propose a new framework MIME that can adapt arbitrary centralized algorithms to this setting. • Convergence result. We prove a result showing that MIME successfully reduces client drift. We also prove that the convergence of any generic algorithm in the centralized setting translates convergence of its MIME version in the federated setting. • Speed-up over centralized methods. By carefully tracking the bias introduced due to multiple local steps, we prove that MIME with momentum-based variance reduction (MVR) can beat a lower bound for centralized methods, thus breaking a fundamental barrier. This is the first such result in FL, and also the first general result showing asymptotic speed-up due to local steps. • Empirical validation. We propose a simpler variant, MIMELITE, with an empirical performance similar to MIME. We report the results of thorough experimental analysis demonstrating that both MIME and MIMELITE indeed converge faster than FEDAVG. Related work. Analysis of FEDAVG: Much of the recent work in federated learning has focused on analyzing FEDAVG. For identical clients, FEDAVG coincides with parallel SGD, for which [78] derived an analysis with asymptotic convergence. Sharper and more refined analyses of the same method, sometimes called local SGD, were provided by [56], and more recently by [57], [47], [34], and [70], for identical functions. Their analysis was extended to heterogeneous clients in [68, 74, 32, 34, 36]. [11] derived a tight characterization of FedAvg with quadratic functions and demonstrated the sensitivity of the algorithm to both client and server step sizes. Matching upper and lower bounds were recently given by [32] and [69] for general functions, proving that FEDAVG can be slower than even SGD for heterogeneous data, due to the client-drift. Comparison to SCAFFOLD: For the cross-silo setting where the number of clients is relatively low, [32] proposed the SCAFFOLD algorithm, which uses control-variates (similar to SVRG) to correct for client drift. 
However, their algorithm crucially relies on stateful clients which repeatedly participate in the training process. FedDyn [1] reduces the communication requirements, but also requires persistent stateful clients. In contrast, we focus on the cross-device setting where clients may be visited only once during training and where they are stateless (and thus SCAFFOLD and FedDyn are inapplicable). This is akin to the difference between the finite-sum (corresponding to cross-silo) and stochastic (cross-device) settings in traditional centralized optimization [39]. Comparison to FedAvg and variants: [26] and [67] observed that using server momentum significantly improves over vanilla FEDAVG. This idea was generalized by [49], who replaced the server update with an arbitrary optimizer, e.g. Adam. However, these methods only modify the server update while using SGD for the client updates. We henceforth refer to this meta algorithm as FedAvg. FedAvgSGD, FedAvgMom, FedAvgAdam denote specific instantiations of the server optimizer in FedAvg with SGD, Momentum or Adam. MIME, on the other hand, ensures that every local client update resembles the optimizer e.g. MIME would apply momentum in every client update and not just at the server level. Beyond this, [40] proposed to add a regularizer to ensure client updates remain close. However, this may slow down convergence (cf. Fig. 5 and [32, 66]). Other orthogonal directions which can be combined with MIME include tackling computation heterogeneity, where some clients perform many more updates than others [66], improving fairness by modifying the objective [44, 41], incorporating differential privacy [20, 2, 61], Byzantine adversaries [48, 65, 30], secure aggregation [8, 24], etc. We defer additional discussion to the extensive survey by [28]. Momentum based variance reduction. Initial optimal methods for stochastic non-convex optimization like SPIDER [17] and SARAH [46] required intermittently computing very large batch gradients. Subsequently, it was shown that momentum based variance reduction (MVR) methods obtained a similar optimal rate without needing such large batch gradient computations [62, 14]. Momentum is an exponential moving average of many stochastic gradients and so it has much smaller variance than the stochastic gradients themselves. However, because these gradients are computed at different parameters it also has a bias. MVR adds a small additional correction term which significantly reduces this bias and provides improved rates. 2 Problem setup This section formalizes the problem of cross-device federated learning [28]. Cross-device FL is characterized by a large number of client devices like mobile phones which may potentially connect to the server at most once. Due to their transient nature, it is not possible to store any state on the clients, precluding an algorithm like SCAFFOLD. Furthermore, each client has only a few samples, and there is wide heterogeneity in the samples across clients. Finally, communication is a major bottleneck and a key metric for optimization in this setting is the number of communication rounds. Thus, our objective will be to minimize the following quantity within the fewest number of clientserver communication rounds: f(x) = Ei∼C [ fi(x) := 1 ni ni∑ ν=1 fi(x; ζi,ν) ] . (1) Here, fi denotes the loss function of client i and {ζi,1, . . . , ζi,ni} its local data. 
Due to the transient and heterogeneous nature of the clients, it is easy to see that the problem becomes intractable with arbitrarily dissimilar clients. Thus, it is necessary to assume bounded dissimilarity across clients.

(A1) $G^2$-BGV, or bounded inter-client gradient variance: there exists $G \geq 0$ such that $\mathbb{E}_{i \sim \mathcal{C}}\big[\|\nabla f_i(x) - \nabla f(x)\|^2\big] \leq G^2$ for all $x$.

Next, we also characterize the variance in the Hessians.

(A2) $\delta$-BHV, or bounded Hessian variance: almost surely, the loss function of any client $i$ satisfies $\|\nabla^2 f_i(x;\zeta) - \nabla^2 f(x)\| \leq \delta$ for all $x$.

This is in contrast to the usual smoothness assumption, which can be stated as:

(A2*) $L$-smooth: $\|\nabla^2 f_i(x;\zeta)\| \leq L$ for all $x$, almost surely for any $i$.

Note that if $f_i(x;\zeta)$ is $L$-smooth then (A2) is satisfied with $\delta \leq 2L$ (by the triangle inequality, $\|\nabla^2 f_i(x;\zeta) - \nabla^2 f(x)\| \leq \|\nabla^2 f_i(x;\zeta)\| + \|\nabla^2 f(x)\| \leq 2L$), and hence (A2) is weaker than (A2*). In realistic examples we expect the clients to be similar, and hence that $\delta \ll L$. We will also use a bound on the intra-client gradient variance: (A3) within each client $i$, the stochastic gradient is unbiased, $\mathbb{E}_{\zeta}[\nabla f_i(x;\zeta)] = \nabla f_i(x)$, with variance bounded as $\mathbb{E}_{\zeta}\|\nabla f_i(x;\zeta) - \nabla f_i(x)\|^2 \leq \sigma^2$ for all $x$. In addition, we assume that $f(x)$ is bounded from below by $f^\star$ and is $L$-smooth, as is standard.

3 Mime framework

In this section we describe how to adapt an arbitrary centralized optimizer (referred to as the "base" optimizer), which may have internal state (e.g. momentum), to the federated learning problem (1) while ensuring there is no client drift. Algorithm 1 describes our framework. We develop two variants, MIME and MIMELITE, which consist of three components: i) a base optimizer we are seeking to mimic, ii) the global (server) optimizer state computation, and iii) the local client updates.

Algorithm 1: Mime and MimeLite
input: initial x and s, learning rate η, and base optimizer B = (U, V)
for each round t = 1, ..., T do
    sample subset S of clients
    communicate (x, s) to all clients i ∈ S
    communicate c ← (1/|S|) Σ_{j∈S} ∇f_j(x)    (only Mime)
    on each client i ∈ S, in parallel do
        initialize local model y_i ← x
        for k = 1, ..., K do
            sample mini-batch ζ from local data
            g_i ← ∇f_i(y_i; ζ) − ∇f_i(x; ζ) + c    (Mime)
            g_i ← ∇f_i(y_i; ζ)                     (MimeLite)
            update y_i ← y_i − η U(g_i, s)
        end for
        compute full local-batch gradient ∇f_i(x)
        communicate (y_i, ∇f_i(x))
    end (on client)
    s ← V((1/|S|) Σ_{i∈S} ∇f_i(x), s)    (update optimizer state)
    x ← (1/|S|) Σ_{i∈S} y_i              (update server parameters)
end for

Base optimizer. We assume the centralized base optimizer we are imitating can be decomposed into two steps: an update step $\mathcal{U}$ which updates the parameters $x$, and an optimizer state update step $\mathcal{V}(\cdot)$ which keeps track of the global optimizer state $s$. Each step of the base optimizer $\mathcal{B} = (\mathcal{U}, \mathcal{V})$ uses a gradient $g$ to update the parameter $x$ and the optimizer state $s$ as follows:

$$x \leftarrow x - \eta\, \mathcal{U}(g, s), \qquad s \leftarrow \mathcal{V}(g, s). \qquad \text{(BASEOPT)}$$

As an example, consider SGD with momentum. The state here is the momentum $m_t$, with the following update steps:

$$x_t = x_{t-1} - \eta\big((1-\beta)\nabla f_i(x_{t-1}) + \beta m_{t-1}\big), \qquad m_t = (1-\beta)\nabla f_i(x_{t-1}) + \beta m_{t-1}.$$

Thus, SGD with momentum can be represented in the above generic form with $\mathcal{U}(g, s) = (1-\beta)g + \beta s$ and $\mathcal{V}(g, s) = (1-\beta)g + \beta s$. Table 5 in the Appendix shows how other algorithms like Adam, Adagrad, etc. can be represented in this manner. We keep the update $\mathcal{U}$ linear in the gradient $g$, whereas $\mathcal{V}$ can be more complicated. This implies that the parameter update step $\mathcal{U}$ is relatively resilient to receiving a biased gradient $g$, while $\mathcal{V}$ can be much more sensitive.
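As a concrete illustration, here is a minimal sketch (ours, not the paper's code) of the $(\mathcal{U}, \mathcal{V})$ decomposition for SGD with momentum, and for Adam in the $\beta_1 = 0$ case analyzed later in Corollary III; note that each $\mathcal{U}$ is linear in $g$ once the state is held fixed:

```python
import numpy as np

# SGD with momentum: U and V happen to be the same affine map.
def momentum_U(g, s, beta=0.9):
    return (1 - beta) * g + beta * s   # update direction; linear in g for fixed s

def momentum_V(g, s, beta=0.9):
    return (1 - beta) * g + beta * s   # new momentum state

# Adam with beta1 = 0: the state is the second-moment accumulator v,
# and U rescales g coordinate-wise (still linear in g for fixed v).
def adam_U(g, v, eps0=1e-8):
    return g / (np.sqrt(v) + eps0)

def adam_V(g, v, beta2=0.999):
    return beta2 * v + (1 - beta2) * g ** 2

def base_opt_step(x, s, g, U, V, eta=0.1):
    """One generic (BASEOPT) step: move the parameters, then update the state."""
    return x - eta * U(g, s), V(g, s)
```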
Compute optimizer state globally, apply locally. When updating the optimizer state of the base algorithm, we use only gradients computed at the server parameters. Further, the state remains fixed throughout the local updates of the clients. This ensures that the optimizer state remains unbiased and representative of the global function $f(\cdot)$. At the end of the round, the server performs

$$s \leftarrow \mathcal{V}\Big(\tfrac{1}{|\mathcal{S}|}\textstyle\sum_{i \in \mathcal{S}} \nabla f_i(x),\ s\Big), \qquad \nabla f_i(x) = \tfrac{1}{n_i}\textstyle\sum_{\nu=1}^{n_i} \nabla f_i(x; \zeta_{i,\nu}). \qquad \text{(OPTSTATE)}$$

Note that we use full-batch gradients computed at the server parameters $x$, not the client parameters $y_i$.

Local client updates. Each client $i \in \mathcal{S}$ performs $K$ updates using $\mathcal{U}$ of the base algorithm and a minibatch gradient. Two variants are possible, corresponding to MIME and MIMELITE. Starting from $y_i \leftarrow x$, repeat the following $K$ times:

$$y_i \leftarrow y_i - \eta\, \mathcal{U}(g_i, s), \qquad \text{(CLIENTSTEP)}$$

where $g_i = \nabla f_i(y_i; \zeta)$ for MIMELITE, and $g_i = \nabla f_i(y_i; \zeta) - \nabla f_i(x; \zeta) + \tfrac{1}{|\mathcal{S}|}\sum_{j \in \mathcal{S}} \nabla f_j(x)$ for MIME. MIMELITE simply uses the local minibatch gradient, whereas MIME uses an SVRG-style correction [27]. This is done to reduce the noise from sampling a local mini-batch. While this correction yields faster rates in theory (and in practice for convex problems), in deep learning applications we found that MIMELITE closely matches the performance of MIME. Finally, two modifications are made in practical FL: we weight all averages across the clients by the number of datapoints $n_i$ [43], and we perform $K$ epochs instead of $K$ steps [66].
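Putting the pieces together, here is a compact simulation sketch of one MIME round (our illustration of Algorithm 1, reusing momentum_U/momentum_V from the sketch above; the grad/full_grad callables are placeholders for the client loss gradients):

```python
import numpy as np

def mime_round(x, s, clients, grad, full_grad, U=momentum_U, V=momentum_V,
               eta=0.01, K=5, rng=np.random.default_rng(1)):
    """One communication round of Mime (cf. Algorithm 1).

    clients:   list of local datasets for the clients sampled this round
    grad:      grad(params, data, batch) -> minibatch gradient
    full_grad: full_grad(params, data)   -> full local-batch gradient
    """
    # SVRG-style correction c: average full-batch gradient at the server
    # parameters x, broadcast to all sampled clients.
    c = np.mean([full_grad(x, d) for d in clients], axis=0)

    ys = []
    for data in clients:
        y = x.copy()
        for _ in range(K):
            batch = rng.integers(len(data), size=2)
            # Mime client step: local minibatch gradient, debiased by the
            # same minibatch evaluated at x, plus the global correction c.
            g = grad(y, data, batch) - grad(x, data, batch) + c
            y = y - eta * U(g, s)       # the state s stays FIXED locally
        ys.append(y)

    s = V(c, s)                 # server: update optimizer state at x ...
    x = np.mean(ys, axis=0)     # ... and average the returned client models
    return x, s
```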
4 Theoretical analysis of Mime

Table 1 summarizes the rates of MIME and MIMELITE and compares them to SERVER-ONLY methods when using SGD, Adam, and momentum methods as the base algorithms. We will first examine the convergence of MIME and MIMELITE with a generic base optimizer and show that its properties are preserved in the federated setting. We then examine a specific momentum-based base optimizer, and prove that Mime and MimeLite can be asymptotically faster than the best server-only method. This is the first result to prove the usefulness of local steps and demonstrate asymptotic speed-ups.

4.1 Convergence with a generic base optimizer

We will prove a generic reduction result demonstrating that if the underlying base algorithm converges and is robust to slight perturbations, then MIME and MIMELITE preserve the convergence of the algorithm when applied to the federated setting with additional local steps.

Theorem I. Suppose that we have $G^2$ inter-client gradient variance (A1), $L$-smooth $\{f_i\}$ (A2*), and $\sigma^2$ intra-client gradient variance (A3). Further, suppose that the update step $\mathcal{U}$ of our base optimizer $\mathcal{B} = (\mathcal{U}, \mathcal{V})$ satisfies i) linearity for a fixed state $s$: $\mathcal{U}(g_1 + g_2; s) = \mathcal{U}(g_1; s) + \mathcal{U}(g_2; s)$, and ii) Lipschitzness: $\|\mathcal{U}(g; s)\| \leq B\|g\|$ for some $B \geq 0$. Then, running MIME or MIMELITE with $K$ local updates and step-size $\eta$ is equivalent to running a centralized algorithm with step-size $\tilde\eta := K\eta \leq \tfrac{1}{2LB}$ and updates

$$x_t \leftarrow x_{t-1} - \tilde\eta\, \mathcal{U}(g_t + e_t,\ s_{t-1}), \qquad s_t \leftarrow \mathcal{V}(g_t, s_{t-1}),$$

where we have an unbiased gradient $\mathbb{E}_t[g_t] = \nabla f(x_{t-1})$ with variance bounded as

$$\mathbb{E}_t\|g_t - \nabla f(x_{t-1})\|^2 \leq \begin{cases} \tfrac{G^2}{S} & \text{MIME}, \\[2pt] \tfrac{G^2}{S} + \tfrac{\sigma^2}{KS} & \text{MIMELITE}, \end{cases}$$

and finally a small error bounded as

$$\tfrac{1}{B^2 L^2 \tilde\eta^2}\, \mathbb{E}_t\|e_t\|^2 \leq \begin{cases} \mathbb{E}_t\|g_t\|^2 & \text{MIME}, \\[2pt] \mathbb{E}_t\|g_t\|^2 + G^2 + \tfrac{\sigma^2}{K} & \text{MIMELITE}. \end{cases}$$

Here, we have proven that MIME and MIMELITE truly mimic the centralized base algorithm with very small perturbations—the magnitude of $e_t$ is $O(\tilde\eta^2)$. The key to the result is the linearity of the parameter update step $\mathcal{U}(\cdot\,; s)$. By separating the base optimizer into a very simple parameter step $\mathcal{U}$ and a more complicated optimizer state update step $\mathcal{V}$, we can ensure that commonly used algorithms such as momentum, Adam, Adagrad, and others all satisfy this property. Armed with this general reduction, we can easily obtain specific convergence results.

Corollary II (Mime/MimeLite with SGD). Given that the conditions in Theorem I are satisfied, let us run $T$ rounds with $K$ local steps using SGD as the base optimizer and output $x^{out}$. This output satisfies $\mathbb{E}\|\nabla f(x^{out})\|^2 \leq \epsilon$ for $F := f(x_0) - f^\star$, $\tilde G^2 := G^2 + \sigma^2/K$, and

• µ-PL inequality: $\eta = \tilde O\big(\tfrac{1}{\mu K T}\big)$, and $T = \tilde O\big(\tfrac{LG^2}{\mu S \epsilon} + \tfrac{LF}{\mu}\log\tfrac{1}{\epsilon}\big)$ for MIME, and $T = \tilde O\big(\tfrac{L\tilde G^2}{\mu S \epsilon} + \tfrac{L\tilde G}{\mu\sqrt\epsilon} + \tfrac{LF}{\mu}\log\tfrac{1}{\epsilon}\big)$ for MIMELITE.

• Non-convex: $\eta = O\Big(\sqrt{\tfrac{FS}{L\tilde G^2 T K^2}}\Big)$, and $T = O\big(\tfrac{LG^2 F}{S\epsilon^2} + \tfrac{LF}{\epsilon}\big)$ for MIME, and $T = O\big(\tfrac{L\tilde G^2 F}{S\epsilon^2} + \tfrac{L^2 \tilde G F}{\epsilon^{3/2}} + \tfrac{LF}{\epsilon}\big)$ for MIMELITE.

Table 1: Number of communication rounds required to reach $\|\nabla f(x)\|^2 \leq \epsilon$ (log factors are ignored) with $S$ clients sampled each round. All analyses except SCAFFOLD assume $G^2$ bounded gradient dissimilarity (A1). All analyses assume $L$-smooth losses, except MimeLiteMVR and MimeMVR, which only assume $\delta$ bounded Hessian dissimilarity (A2). Convergence of SCAFFOLD depends on the total number of clients $N$, which is potentially infinite. FEDAVG and MIMELITE are slightly slower than the server-only methods due to additional drift terms in most cases. MIME is the fastest and either matches or improves upon the optimal statistical rates (first term in the rates). In fact, MimeMVR and MimeLiteMVR beat lower bounds for any server-only method when $\delta \ll L$.

Algorithm | Non-convex | µ-PL inequality
SCAFFOLD^a [32] | $(\tfrac{N}{S})^{2/3}\tfrac{L}{\epsilon}$ | $\tfrac{N}{S} + \tfrac{L}{\mu}$
SGD:
SERVER-ONLY [21] | $\tfrac{LG^2}{S\epsilon^2} + \tfrac{L}{\epsilon}$ | $\tfrac{G^2}{\mu S\epsilon} + \tfrac{L}{\mu}$
MimeLiteSGD ≡ FedAvgSGD^c | $\tfrac{LG^2}{S\epsilon^2} + \tfrac{L^2 G}{\epsilon^{3/2}} + \tfrac{L}{\epsilon}$ | $\tfrac{G^2}{\mu S\epsilon} + \tfrac{LG}{\mu\sqrt\epsilon} + \tfrac{L}{\mu}$
MimeSGD | $\tfrac{LG^2}{S\epsilon^2} + \tfrac{L}{\epsilon}$ | $\tfrac{G^2}{\mu S\epsilon} + \tfrac{L}{\mu}$
Adam:
SERVER-ONLY^b [75] | $\tfrac{L}{\epsilon - G^2/S}$ | –
MimeLiteAdam^{b,c} | $\tfrac{L\sqrt{S}}{\epsilon - G^2/S}$ | –
MimeAdam^b | $\tfrac{L}{\epsilon - G^2/S}$ | –
Momentum Variance Reduction (MVR):
SERVER-ONLY [14] | $\tfrac{LG}{\sqrt{S}\,\epsilon^{3/2}} + \tfrac{G^2}{S\epsilon} + \tfrac{L}{\epsilon}$ | –
MimeLiteMVR^d | $\tfrac{\delta(G+\sigma)}{\epsilon^{3/2}} + \tfrac{G^2+\sigma^2}{\epsilon} + \tfrac{\delta}{\epsilon}$ | –
MimeMVR^d | $\tfrac{\delta G}{\sqrt{S}\,\epsilon^{3/2}} + \tfrac{G^2}{S\epsilon} + \tfrac{\delta}{\epsilon}$ | –
SERVER-ONLY lower bound [5] | $\Omega\big(\tfrac{LG}{\sqrt{S}\,\epsilon^{3/2}} + \tfrac{G^2}{S\epsilon} + \tfrac{L}{\epsilon}\big)$ | $\Omega\big(\tfrac{G^2}{S\epsilon}\big)$

a: The number of clients $N$ can be of the same order as the total number of rounds, or even $\infty$, making the bounds vacuous.
b: Adam requires a large batch size $S \geq G^2/\epsilon$ to converge [50, 75]. Convergence of FedAdam with client sampling is unknown ([49] only analyze full client participation).
c: Requires $K \geq \sigma^2/G^2$ local updates. Typically, the intra-client variance is small ($\sigma^2 \lesssim G^2$).
d: Requires $K \geq L/\delta$ local updates. Faster than the lower bound (and hence any SERVER-ONLY algorithm) when $\delta \ll L$, i.e. our methods can take advantage of Hessian similarity, whereas SERVER-ONLY methods cannot. In the worst case, $\delta \approx L$ and all methods are comparable.

If we take a sufficient number of local steps, $K \geq \sigma^2/G^2$, then we have $\tilde G = O(G)$ in the above rates.
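To spell this out, directly from the definition of $\tilde G$:

$$\tilde G^2 = G^2 + \frac{\sigma^2}{K} \leq G^2 + G^2 = 2G^2 \quad \text{whenever } K \geq \frac{\sigma^2}{G^2},$$

so that $\tilde G = O(G)$.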
On comparing with the rates in Table 1 for SERVER-ONLY SGD, we see that MIME exactly matches its rates, while MIMELITE matches the asymptotic term but has a few higher-order terms. Note that when using SGD as the base optimizer, MIMELITE becomes exactly the same as FEDAVG and hence has the same rate of convergence.

Corollary III (Mime/MimeLite with Adam). Suppose that the conditions in Theorem I are satisfied and, further, $|\nabla_j f_i(x)| \leq H$ for any coordinate $j \in [d]$. Then let us run $T$ rounds using Adam as the base optimizer with $K$ local steps, $\beta_1 = 0$, $\epsilon_0 > 0$, $\eta \leq \epsilon_0^2/(KL(H+\epsilon_0))$, and any $\beta_2 \in [0, 1)$. The output $x^{out}$, chosen randomly from $\{x_1, \dots, x_T\}$, satisfies $\mathbb{E}\|\nabla f(x^{out})\|^2 \leq \epsilon$ for

$$T = O\Big(\tfrac{LF(H+\epsilon_0)^2}{\epsilon_0^2(\epsilon - \tilde G^2/S)}\Big) \ \text{for MIME Adam}, \qquad T = O\Big(\tfrac{LF(H+\epsilon_0)^2\sqrt{S}}{\epsilon_0^2(\epsilon - \tilde G^2/S)}\Big) \ \text{for MIMELITE Adam},$$

where $F := f(x_0) - f^\star$ and $\tilde G^2 := G^2 + \sigma^2/K$. Note that here $\epsilon_0$ represents a small positive parameter used in Adam for regularization, and is different from the error $\epsilon$. Similar to the SERVER-ONLY analysis of Adam [75], we assume $\beta_1 = 0$ and that the batch size is large enough such that $S \geq G^2/\epsilon$. A similar analysis can also be carried out for AdaGrad and other novel variants of Adam [42].

4.2 Circumventing server-only lower bounds

The rates obtained above, while providing a safety check, do not beat those of the SERVER-ONLY approach. The previous best rates for cross-device FL correspond to MimeLiteSGD, which is $O\big(\tfrac{LG^2}{S\epsilon^2} + \tfrac{L^2 G}{\epsilon^{3/2}}\big)$ [34, 36, 69]. While using a separate server learning rate can remove the effect of the second term [33], this at best matches the rate of SERVER-ONLY SGD, $O\big(\tfrac{LG^2}{S\epsilon^2}\big)$. This is significantly slower than simply running momentum-based variance reduction (MVR) without local steps in the FL setting (SERVER-ONLY MVR), which has a communication complexity of $O\big(\tfrac{LG}{\sqrt{S}\,\epsilon^{3/2}}\big)$ [14]. Thus, even though the main reason for studying local-step methods was to improve the communication complexity, none thus far show such an improvement.

The above difficulty of beating SERVER-ONLY may not be surprising given the two sets of strong lower bounds known.

Necessity of local steps. Firstly, [5] show a gradient oracle lower bound of $\Omega\big(\tfrac{LG}{\sqrt{S}\,\epsilon^{3/2}}\big)$. This matches the complexity of MVR, and hence at first glance it seems that SERVER-ONLY MVR is optimal. However, the lower bound is really only on the number of gradients computed, and not on the number of clients sampled (sample complexity) [18] or the number of rounds of communication required. In particular, multiple local updates increase the number of gradients computed without requiring additional communication, offering a potential way to side-step such lower bounds. A careful analysis of the bias introduced as a result of such local steps is a key part of our analysis.

Necessity of δ-BHD. A second set of lower bounds directly studies the number of communication rounds required in heterogeneous optimization [6, 69]. These results prove that there exist settings where local steps provide no advantage and SERVER-ONLY methods are optimal. This, however, contradicts real-world experimental evidence [43]. As before, the disparity arises due to the contrived settings considered by the lower bounds. For distributed optimization (with full client participation) and convex quadratic objectives, δ-BHD (A2) was shown to be a sufficient [54, 51] and necessary [6] condition to circumvent these lower bounds and yield highly performant methods. We similarly leverage δ-BHD (A2) to design novel methods which significantly extend prior results to i) all smooth non-convex functions (not just quadratics), and ii) cross-device FL with client sampling.

We now state our convergence results with momentum-based variance reduction (MVR) as the base algorithm, since it is known to be optimal in the SERVER-ONLY setting.

Theorem IV. For $L$-smooth $f$ with $G^2$ gradient dissimilarity (A1), $\delta$ Hessian dissimilarity (A2), and $F := f(x_0) - f^\star$, let us run MVR as the base algorithm for $T$ rounds with $K \geq L/\delta$ local steps and generate an output $x^{out}$. This output satisfies $\mathbb{E}\|\nabla f(x^{out})\|^2 \leq \epsilon$ for

• MimeMVR: $\eta = O\Big(\min\Big(\tfrac{1}{\delta K},\ \big(\tfrac{SF}{G^2 T K^3}\big)^{1/3}\Big)\Big)$, momentum $\beta = 1 - O\Big(\tfrac{\delta^2 S^{2/3}}{(T G^2)^{2/3}}\Big)$, and $T = O\Big(\tfrac{\delta G F}{\sqrt{S}\,\epsilon^{3/2}} + \tfrac{G^2}{S\epsilon} + \tfrac{\delta F}{\epsilon}\Big)$.
• MimeLiteMVR: $\eta = O\Big(\min\Big(\tfrac{1}{\delta K},\ \big(\tfrac{F}{\hat G^2 T K^3}\big)^{1/3}\Big)\Big)$, momentum $\beta = 1 - O\Big(\tfrac{\delta^2}{(T \hat G^2)^{2/3}}\Big)$, and $T = O\Big(\tfrac{\delta \hat G F}{\epsilon^{3/2}} + \tfrac{\hat G^2}{\epsilon} + \tfrac{\delta F}{\epsilon}\Big)$.

Here, we define $\hat G^2 := G^2 + \sigma^2$, and the expectation in $\mathbb{E}\|\nabla f(x^{out})\|^2 \leq \epsilon$ is taken over the sampling of the clients during the running of the algorithm, the sampling of the mini-batches in the local updates, and the choice of $x^{out}$ (which is chosen randomly from the client iterates $y_i$).

Remarkably, the rates of our methods are independent of $L$ and only depend on $\delta$. Thus, when $\delta \leq L$ for MimeMVR and $\delta \leq L/S$ for MimeLiteMVR, the rates beat the server-only lower bound of $\Omega\big(\tfrac{LG}{\sqrt{S}\,\epsilon^{3/2}}\big)$. In fact, if the Hessian variance is small and $\delta \approx 0$, our methods only need $O(1/\epsilon)$ rounds of communication. Intuitively, our results show that local steps are very useful when heterogeneity (represented by $\delta$) is smaller than the optimization difficulty (captured by the smoothness constant $L$).

MimeMVR uses a momentum parameter $\beta$ of the order of $1 - O\big((TG^2)^{-2/3}\big)$, i.e. as $T$ increases, $\beta$ asymptotically approaches 1. In contrast, previous analyses of distributed momentum (e.g. [73]) prove rates of the form $\tfrac{G^2}{S(1-\beta)\epsilon^2}$, which are worse than that of standard SGD by a factor of $\tfrac{1}{1-\beta}$. Thus, ours is also the first result which theoretically showcases the usefulness of large momentum in distributed and federated learning. While we only prove the utility of local steps for MimeMVR, we believe our theory can be extended to other local update methods as well.

Our analysis is highly non-trivial and involves two crucial ingredients: i) computing the momentum at the server level to ensure that it remains unbiased, and then applying it locally during every client update to reduce variance, and ii) carefully keeping track of the bias introduced via additional local steps. Our experiments (Sec. 5) verify that our theoretical insights are indeed applicable in deep learning settings as well. See App. B for a proof sketch and App. G–H for detailed proofs.
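For intuition, here is a sketch of the centralized MVR update that serves as the base algorithm (our paraphrase of momentum-based variance reduction [14, 62]; the exact federated version, which indeed needs gradients at two consecutive points, is given in the Appendix):

```python
import numpy as np

def mvr_step(x, x_prev, d, grad, eta=0.01, a=0.1,
             rng=np.random.default_rng(2)):
    """One centralized MVR step; grad(params, zeta) is a placeholder oracle.

    d is the running gradient estimate (a variance-reduced momentum). The
    correction grad(x) - grad(x_prev), computed on the SAME sample zeta,
    removes most of the bias that a plain momentum average of stale
    gradients would carry.
    """
    zeta = rng.integers(10**6)        # shared sample index for both points
    d = grad(x, zeta) + (1 - a) * (d - grad(x_prev, zeta))
    return x - eta * d, x, d          # new x, new x_prev, new estimate d
```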
5 Experimental analysis on real-world datasets

We run experiments on natively federated datasets to confirm our theory and accurately measure real-world performance. Our main findings are that i) MIME and MIMELITE consistently outperform FEDAVG, and ii) momentum and adaptivity significantly improve performance.

5.1 Setup

Algorithms. We consider three (meta) algorithms: FEDAVG, MIME, and MIMELITE. Each of these adapts four base optimizers: SGD, momentum, Adam, and Adagrad. FEDAVG follows [49], who run multiple epochs of SGD on each sampled client and then aggregate the net client updates. This aggregated update is used as a pseudo-gradient in the base optimizer (called the server optimizer). The learning rate for the server optimizer is fixed to 1, as in [67]. This is done to ensure all algorithms have the same number of hyper-parameters. MIME and MIMELITE follow Algorithm 1 and also run a fixed number of epochs on the client. Note, however, that this requires communicating both the full local-batch gradient and the parameter updates, doubling the communication sent by the client. For a fairer comparison, we split the sampled clients in MIME and MIMELITE into two groups: the first communicates only the full local-batch gradient, and the second communicates only the parameter updates. Thus, all methods have equal client-to-server communication. This variant retains the convergence guarantees up to constants (details in the Appendix). We also run Loc-MIME, where instead of keeping the global optimizer state fixed, we update it locally within the client. The optimizer state is reset after the round finishes. In all methods, aggregation is weighted by the number of samples on the clients.

Datasets and models. We run five simulations on three real-world federated datasets: EMNIST62 with i) a linear classifier, ii) an MLP, and iii) a CNN; iv) a charRNN on Shakespeare; and v) an LSTM for next-word prediction on StackOverflow, all accessed through TensorFlow Federated [60]. The learning rates were individually tuned, and other optimizer hyper-parameters such as β for momentum and β1, β2, ε0 for Adam and AdaGrad were left at their default values, unless explicitly stated otherwise. We refer to Appendix C for additional setup details and discussion.

5.2 Ablation and comparative study

In order to study the different algorithms, we train a 2-hidden-layer (300-100) MLP on EMNIST62 with 10 local epochs for 1k rounds and use SGD+momentum (with tuned β) as the base optimizer.

Mime ≈ MimeLite > FedAvg > SCAFFOLD > FedProx. Fig. 1 (left) shows that MIME and MIMELITE have nearly identical performance and are about 7× faster than FedAvg. This implies that our strategy of applying momentum to client updates is faster than simply using server momentum. FedProx [40] uses an additional regularizer µ tuned over {0.1, 0.5, 1} (µ = 0 is the same as FedAvg). Regularization does not seem to reduce client drift but still slows down convergence [66]. SCAFFOLD [32] is also slower than Mime and FedAvg in this setup. This is because, in the cross-device setting, the large number of clients (N = 3.4k) means that each client is visited fewer than 6 times during the entire training run (20 clients per round for 1k rounds). As a result, the correction term utilized by SCAFFOLD uses control-variates which are quite stale (computed about 200 rounds ago), which slows down the convergence. In contrast, the SVRG correction term in Mime is computed using clients sampled in the current or previous rounds, and so is much more accurate.

With momentum > without momentum. Fig. 1 (center) examines the impact of momentum on FedAvg and Mime. Momentum slightly improves the performance of FedAvg, whereas it has a significant impact on the performance of Mime. This is also in line with our theory and confirms that Mime's strategy of applying momentum locally at every client update makes better use of it.

Fixed > locally updated optimizer state. Finally, we check how the performance of Mime changes if, instead of keeping the momentum fixed throughout a round, we let it change. The latter is a way to combine global and local momentum. The momentum is reset at the end of the round, ignoring the changes the clients make to it. Fig. 1 (right) shows that this worsens the performance, confirming that it is better to keep the global optimizer state fixed, as predicted by our theory. Together, the above observations validate all aspects of the Mime (and MimeLite) design: compute statistics at the server level, and apply them unchanged at every client update.

5.3 Large-scale comparison with equal server and client communication

We perform a larger-scale study closely matching the setup of [49]. For both MIME and MIMELITE, only half the clients compute and transmit the updated parameters, and the other half transmit the full local-batch gradients. Hence, the client-to-server communication cost is the same for all methods for all clients. However, MIME and MIMELITE require sending additional optimizer state to the clients. Hence, we also reduce the number of clients sampled in each round to ensure that the total communication in each round is 40× the model size for the EMNIST and Shakespeare experiments, and 100× the model size for the StackOverflow next-word-prediction experiment. Since we perform only 1 local epoch, the hyper-parameters (e.g. ε0 for the adaptive methods) are chosen more carefully following [49], and since MIME and MIMELITE use significantly fewer clients per round, the difference between FEDAVG and MIME is smaller here. Table 2 summarizes the results.

For the image classification tasks of EMNIST62 logistic and EMNIST62 CNN, Mime and MimeLite with Adam achieve the best performance. Using momentum (both with SGD and in Adam) significantly improves their performance. In contrast, FedAvgAdam is more unstable, with worse performance. This is because FedAvg is excessively sensitive to hyper-parameters (cf. App. E). We next consider the character prediction task on the Shakespeare dataset and next-word prediction on StackOverflow. Here, the momentum-based methods (SGD+momentum and Adam) are slower than their non-momentum counterparts (vanilla SGD and AdaGrad). This is because the mini-batch gradients in these tasks are sparse, with the gradients corresponding to tokens not in the mini-batch being zero. This sparsity structure is, however, destroyed when using momentum or Adam. For the same reason, Mime, which uses an SVRG correction, also significantly increases the gradient density.
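A toy illustration of this effect (ours, not from the paper): a sparse "embedding-style" minibatch gradient becomes dense once averaged into a momentum buffer.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, nnz = 10_000, 50          # vocabulary size; tokens seen per minibatch

m = np.zeros(dim)              # momentum buffer
for _ in range(100):           # accumulate 100 sparse minibatch gradients
    g = np.zeros(dim)
    g[rng.choice(dim, size=nnz, replace=False)] = 1.0
    m = 0.1 * g + 0.9 * m

print((g != 0).mean())         # density of one gradient: 0.005
print((m != 0).mean())         # density of the momentum buffer: roughly 0.4
```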
Discussion. For traditional tasks such as image classification, we observe that Mime (especially with Adam) usually outperforms MimeLite, which in turn outperforms FedAvg. These methods are able to successfully leverage momentum and adaptivity to improve performance. For tasks where the client gradients are sparse, the SVRG correction used by Mime hinders performance; adapting our techniques to work with sparse gradients (à la Yogi [75]) could lead to further improvements. Also, note that we reduce communication by naïvely reducing the number of participating clients per round. More sophisticated approaches to save on client communication, including quantization or sparsification [58, 3] or even novel algorithmic innovations [1], could be explored. Further, server communication could be reduced using memory-efficient optimizers, e.g. AdaFactor [55] or SM3 [4].

6 Conclusion

Our work initiated a formal study of the cross-device federated learning problem and provided theoretically justified algorithms. We introduced a new framework, MIME, which overcomes the natural client heterogeneity in such a setting and can adapt arbitrary centralized algorithms such as Adam without additional hyper-parameters. We demonstrated the superiority of MIME via strong convergence guarantees and empirical evaluations. Further, we proved that a particular instance of our method, MimeMVR, beats centralized lower bounds, demonstrating for the first time that additional local steps can yield asymptotic improvements. We believe our analysis will be of independent interest beyond the federated setting for understanding the sample complexity of non-convex optimization, and for yielding improved analyses of decentralized optimization algorithms.
1. What is the main contribution of the paper on federated learning?
2. What are the strengths of the proposed MIME framework, particularly in adapting arbitrary centralized algorithms to the FL setting?
3. Do you have any concerns or questions regarding the technical content of the paper, such as the proof of Theorem I and Assumptions A3 and A2?
4. How does the reviewer assess the novelty and significance of the MIME framework compared to prior works in FL?
5. Are there any minor issues or typos in the paper that need to be addressed?
Summary Of The Paper

The paper proposes MIME, a general algorithmic framework for federated learning (FL) that can adapt to (and match the performance of) arbitrary centralized algorithms like Adam, momentum SGD, and SGD. The authors show that via the MIME framework the convergence of any centralized algorithm can be translated into the convergence of the algorithm in the FL setting. The authors also propose a momentum-based variance reduction (MVR) algorithm that the authors claim to be faster than any centralized algorithm. Finally, the authors conduct extensive numerical experiments to show that the algorithms under the MIME framework perform well on real datasets.

Review

Overall the paper is well written with a clear presentation. The ideas presented in the paper are significantly original and of importance to the research community. However, there are a few issues with the paper, predominantly with the technical content, which the authors should address. The detailed comments are listed below:

Theorem I (Corollaries II and III): It seems that there is a technical issue with the results presented in Theorem I, which then affects the results presented in Corollaries II and III too. Please clarify the following issue. In the proof of Theorem I, the authors have used the fact that $\sum_k \nabla f_i(x; \zeta_{i,k}^t) = K \nabla f_i(x)$ (in lines 873 and 877 of the supplementary material), which follows from assuming $K$ to be the number of epochs. This implies that either the $\zeta_{i,k}^t$ are sampled without replacement or $\zeta_{i,k}^t$ represents the full data, i.e., $\nabla f_i(x; \zeta_{i,k}^t) = \nabla f_i(x)$. In my understanding, this means that the stochastic gradient is not an unbiased estimate of the local gradient (or it is the full gradient). However, in Assumption (A3) this stochastic gradient is assumed to be an unbiased estimate of the local client's gradient. The proof of Theorem I (via Lemmas 7 and 8) and Corollaries II and III rely on Assumption (A3), which might not hold if $\sum_k \nabla f_i(x; \zeta_{i,k}^t) = K \nabla f_i(x)$; in my understanding, Assumption (A3) and $\sum_k \nabla f_i(x; \zeta_{i,k}^t) = K \nabla f_i(x)$ cannot both be true at the same time. If my observation is correct, the proofs of Theorem I and Corollaries II and III do not hold with the stated set of assumptions. Please clarify.

Assumptions: Note that the linearity of the update, $\mathcal{U}(g_1 + g_2) = \mathcal{U}(g_1) + \mathcal{U}(g_2)$, in Algorithm 1 (and Theorem I) might not hold in general unless $g_1$ and $g_2$ come from the same round of Algorithm 1. This can be seen from the fact that for adaptive algorithms like Adam the updates are not linear in $g$; however, since the adaptive step is fixed at the start of the round, linearity will hold, but $g_1$ and $g_2$ must be generated within the same round. Please update the assumption accordingly.

Speed-up over centralized methods: The authors claim that MIME with MVR beats the lower bound for centralized methods, thus breaking a fundamental barrier. These claims are a bit misleading, as the lower bounds corresponding to the centralized algorithms are on the computations, whereas the authors measure the total communication rounds required to reach an approximate stationary point, wherein within each round the clients compute multiple gradients. MimeMVR requires total communication of $O(1/\epsilon^{3/2})$; however, it is well known that for FL problems a better $O(1/\epsilon)$ communication requirement can be achieved.
The authors should include Assumption (A3) in the main text of the paper, as it is important and $\sigma^2$ appears in the statements of the theorems and corollaries. Moreover, the assumption $\mathbb{E}_i[\nabla f_i(x)] = \nabla f(x)$ is not stated in the paper; however, it is used in the proofs.

The bounded Hessian variance and L-smoothness assumptions (Assumptions (A2) and (A2*)) are strong assumptions, as they are made on the stochastic samples of the local clients' data. In fact, (A2) loosely forces the data at each client to be homogeneous. Generally, for FL problems (A2) is not required, and a weaker version of L-smoothness (A2*), namely mean-square smoothness, is required if some sort of variance reduction is used. It seems the analysis relies on these stronger assumptions. The authors mention that Hessian similarity is crucial for federated optimization and refer to [6]; however, [6] considers a convex problem that is different from the non-convex problems the authors tackle. Please clarify. Moreover, the MIME framework relies on computing the full gradients of the sampled clients at each round, which seems to be a stronger requirement compared to existing FL algorithms.

In the discussion of Theorem IV the authors note that if $\delta \approx 0$ then MimeMVR requires $O(1/\epsilon)$ communication rounds. However, the statement of Theorem IV requires $K \geq L/\delta$, which implies that as $\delta \to 0$ we will have $K \to \infty$. What implication will this have on the performance of MimeMVR?

In the Appendix, the authors mention that MimeMVR does not exactly fit the MIME framework, as it relies on gradient computation at two consecutive points. I think instead of just including the convergence result of MVR in the main paper, the authors should also include the MVR algorithm in the main paper, since it seems to be a non-trivial construction. Also, in Table 5 it would be helpful if the authors could add the tracking step used by the MIME framework rather than the tracking step of the centralized algorithm.

Experiments: The authors have conducted sufficient experiments to validate the proposed algorithms. A few concerns are discussed next. In the Experiments section, from Figure 1 it seems that Mime and MimeLite clearly outperform the rest of the algorithms; however, in Table 1 the reported performance of Mime and MimeLite is very close to that of the other algorithms. Why is this happening? Moreover, it seems that Mime and MimeLite compute more stochastic gradients per client than FedAvg between two communication rounds. If this is true, isn't the improved performance of the Mime framework expected?

Typos and minor issues:
- $x^{out}$ in the statement of Corollary II is not defined.
- Line 832: replace $(\mathbb{E}[X - \mathbb{E}[X]])^2$ by $\mathbb{E}[X - \mathbb{E}[X]]^2$.
- Line 853: the inequality is in the wrong direction.
- Expression in the statement of Lemma 4: please use braces on the l.h.s. term to show that the sum over $i \in \mathcal{S}$ is over the complete term.
- In the proofs of the theorems, lemmas, and corollaries, the authors should explain the reasoning behind the inequalities (and equalities).
- Lines 974-975: please state the tracking step for MimeAdam and MimeLiteAdam.
- The authors first use $e_t$ to denote the perturbation in the update direction of the centralized algorithm, and later in MVR to denote the error in the server momentum. Please use a different symbol for each if possible.
NIPS
Title Breaking the centralized barrier for cross-device federated learning Abstract Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients, which can cause a client drift phenomenon. In fact, designing an algorithm for FL that is uniformly better than simple centralized training has been a major open problem thus far. In this work, we propose a general algorithmic framework, MIME, which i) mitigates client drift and ii) adapts an arbitrary centralized optimization algorithm such as momentum and Adam to the cross-device federated learning setting. MIME uses a combination of control-variates and server-level optimizer state (e.g. momentum) at every client-update step to ensure that each local update mimics that of the centralized method run on i.i.d. data. We prove a reduction result showing that MIME can translate the convergence of a generic algorithm in the centralized setting into convergence in the federated setting. Moreover, we show that, when combined with momentum-based variance reduction, MIME is provably faster than any centralized method—the first such result. We also perform a thorough experimental exploration of MIME's performance on real world datasets (implemented here). 1 Introduction Federated learning (FL) is an increasingly important large-scale learning framework where the training data remains distributed over a large number of clients, which may be mobile phones or network sensors [38, 37, 43, 44, 28]. A server then orchestrates the clients to train a single model, here referred to as a server model, without ever transmitting client data over the network, thereby providing some basic levels of data privacy and security. Two important settings are distinguished in FL [28, Table 1]: the cross-device and the cross-silo settings. The cross-silo setting corresponds to a relatively small number of reliable clients, typically organizations, such as medical or financial institutions. In contrast, in the cross-device federated learning setting, the number of clients may be extremely large and include, for example, all 3.5 billion active Android phones [25]. Thus, in that setting, we may never make even a single pass over the entire clients' data during training. The cross-device setting is further characterized by resource-poor clients communicating over a highly unreliable network. Together, the essential features of this setting give rise to unique challenges not present in the cross-silo setting. In this work, we are interested in the more challenging cross-device setting, for which we will formalize and study stochastic optimization algorithms. Importantly, recent advances in FL optimization, such as SCAFFOLD [32] or FedDyn [1], are no longer applicable, since they are designed for the cross-silo setting. The problem. The de facto standard algorithm for the cross-device setting is FEDAVG [43], which performs multiple SGD updates on the available clients before communicating to the server. While this approach can reduce the frequency of communication required, performing multiple steps on the same client can lead to 'over-fitting' to its atypical local data, a phenomenon known as client drift [32]. *This work also appears under the alternative title "Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning" [31].
This in turn leads to slower convergence and can, somewhat counter-intuitively, require larger total communication [69]. Despite significant attention received from the optimization community, the communication complexity of heterogeneous cross-device has not improved upon that of simple centralized methods, which take no local steps (aka SERVER-ONLY methods). Furthermore, algorithmic innovations such as momentum [59, 14], adaptivity [35, 75, 77], and clipping [71, 72, 76] are critical to the success of deep learning applications. The lack of a theoretical understanding of the impact of multiple client steps has also hindered adapting these techniques in a principled manner into the client updates, in order to replace the vanilla SGD update of FEDAVG. To overcome such deficiencies, we propose a new framework, MIME, that mitigates client drift and can adapt an arbitrary centralized optimization algorithm, e.g. SGD with momentum or Adam, to the federated setting. In each local client update, MIME uses global optimizer state, e.g. momentum or adaptive learning rates, and an SVRG-style correction to mimic the updates of the centralized algorithm run on i.i.d. data. This optimizer state is computed only at the server level and kept fixed throughout the local steps, thereby avoiding overfitting to the atypical local data of any single client. Contributions. We summarize our main results below. • MIME framework. We formalize the cross-device federated learning problem, and propose a new framework MIME that can adapt arbitrary centralized algorithms to this setting. • Convergence result. We prove a result showing that MIME successfully reduces client drift. We also prove that the convergence of any generic algorithm in the centralized setting translates convergence of its MIME version in the federated setting. • Speed-up over centralized methods. By carefully tracking the bias introduced due to multiple local steps, we prove that MIME with momentum-based variance reduction (MVR) can beat a lower bound for centralized methods, thus breaking a fundamental barrier. This is the first such result in FL, and also the first general result showing asymptotic speed-up due to local steps. • Empirical validation. We propose a simpler variant, MIMELITE, with an empirical performance similar to MIME. We report the results of thorough experimental analysis demonstrating that both MIME and MIMELITE indeed converge faster than FEDAVG. Related work. Analysis of FEDAVG: Much of the recent work in federated learning has focused on analyzing FEDAVG. For identical clients, FEDAVG coincides with parallel SGD, for which [78] derived an analysis with asymptotic convergence. Sharper and more refined analyses of the same method, sometimes called local SGD, were provided by [56], and more recently by [57], [47], [34], and [70], for identical functions. Their analysis was extended to heterogeneous clients in [68, 74, 32, 34, 36]. [11] derived a tight characterization of FedAvg with quadratic functions and demonstrated the sensitivity of the algorithm to both client and server step sizes. Matching upper and lower bounds were recently given by [32] and [69] for general functions, proving that FEDAVG can be slower than even SGD for heterogeneous data, due to the client-drift. Comparison to SCAFFOLD: For the cross-silo setting where the number of clients is relatively low, [32] proposed the SCAFFOLD algorithm, which uses control-variates (similar to SVRG) to correct for client drift. 
However, their algorithm crucially relies on stateful clients which repeatedly participate in the training process. FedDyn [1] reduces the communication requirements, but also requires persistent stateful clients. In contrast, we focus on the cross-device setting where clients may be visited only once during training and where they are stateless (and thus SCAFFOLD and FedDyn are inapplicable). This is akin to the difference between the finite-sum (corresponding to cross-silo) and stochastic (cross-device) settings in traditional centralized optimization [39]. Comparison to FedAvg and variants: [26] and [67] observed that using server momentum significantly improves over vanilla FEDAVG. This idea was generalized by [49], who replaced the server update with an arbitrary optimizer, e.g. Adam. However, these methods only modify the server update while using SGD for the client updates. We henceforth refer to this meta algorithm as FedAvg. FedAvgSGD, FedAvgMom, FedAvgAdam denote specific instantiations of the server optimizer in FedAvg with SGD, Momentum or Adam. MIME, on the other hand, ensures that every local client update resembles the optimizer e.g. MIME would apply momentum in every client update and not just at the server level. Beyond this, [40] proposed to add a regularizer to ensure client updates remain close. However, this may slow down convergence (cf. Fig. 5 and [32, 66]). Other orthogonal directions which can be combined with MIME include tackling computation heterogeneity, where some clients perform many more updates than others [66], improving fairness by modifying the objective [44, 41], incorporating differential privacy [20, 2, 61], Byzantine adversaries [48, 65, 30], secure aggregation [8, 24], etc. We defer additional discussion to the extensive survey by [28]. Momentum based variance reduction. Initial optimal methods for stochastic non-convex optimization like SPIDER [17] and SARAH [46] required intermittently computing very large batch gradients. Subsequently, it was shown that momentum based variance reduction (MVR) methods obtained a similar optimal rate without needing such large batch gradient computations [62, 14]. Momentum is an exponential moving average of many stochastic gradients and so it has much smaller variance than the stochastic gradients themselves. However, because these gradients are computed at different parameters it also has a bias. MVR adds a small additional correction term which significantly reduces this bias and provides improved rates. 2 Problem setup This section formalizes the problem of cross-device federated learning [28]. Cross-device FL is characterized by a large number of client devices like mobile phones which may potentially connect to the server at most once. Due to their transient nature, it is not possible to store any state on the clients, precluding an algorithm like SCAFFOLD. Furthermore, each client has only a few samples, and there is wide heterogeneity in the samples across clients. Finally, communication is a major bottleneck and a key metric for optimization in this setting is the number of communication rounds. Thus, our objective will be to minimize the following quantity within the fewest number of clientserver communication rounds: f(x) = Ei∼C [ fi(x) := 1 ni ni∑ ν=1 fi(x; ζi,ν) ] . (1) Here, fi denotes the loss function of client i and {ζi,1, . . . , ζi,ni} its local data. 
Since the number of clients is extremely large, while the size of each local data is rather modest, we represent the former as an expectation and the latter as a finite sum. In each round, the algorithm samples a subset of clients (of size S) and performs some updates to the server model. Due to the transient and heterogeneous nature of the clients, it is easy to see that the problem becomes intractable with arbitrarily dissimilar clients. Thus, it is necessary to assume bounded dissimilarity across clients. (A1) G2-BGV or bounded inter-client gradient variance: there exists G ≥ 0 such that Ei∼C [‖∇fi(x)−∇f(x)‖2] ≤ G2 , ∀x . Next, we also characterize the variance in the Hessians. (A2) δ-BHV or bounded Hessian variance: Almost surely, the loss function of any client i satisfies ‖∇2fi(x; ζ)−∇2f(x)‖ ≤ δ , ∀x . This is in contrast to the usual smoothness assumption that can be stated as: (A2*) L-smooth: ‖∇2fi(x; ζ)‖ ≤ L , ∀x , a.s. for any i. Note that if fi(x; ζ) is L-smooth then (A2) is satisfied with δ ≤ 2L, and hence (A2) is weaker than (A2*). In realistic examples we expect the clients to be similar and hence that δ L. In addition, we assume that f(x) is bounded from below by f? and is L-smooth, as is standard. 3 Mime framework In this section we describe how to adapt an arbitrary centralized optimizer (referred to as the “base” optimizer) which may have internal state (e.g. momentum) to the federated learning problem (1) while ensuring there is no client-drift. Algorithm 4 describes our framework. We develop two variants, MIME and MIMELITE, which consist of three components i) a base optimizer we are seeking to mimic, ii) the global (server) optimizer state computation, and iii) the local client updates. Algorithm 1 Mime and MimeLite input: initial x and s, learning rate η and base optimizer B = (U ,V) for each round t = 1, · · · , T do sample subset S of clients communicate (x, s) to all clients i ∈ S communicate c← 1|S| ∑ j∈S ∇fj(x) (only Mime) on client i ∈ S in parallel do initialize local model yi ← x for k = 1, · · · ,K do sample mini-batch ζ from local data gi ← ∇fi(yi; ζ)−∇fi(x; ζ) + c (Mime) gi ← ∇fi(yi; ζ) (MimeLite) update yi ← yi − ηU(gi, s) end for compute full local-batch gradient∇fi(x) communicate (yi,∇fi(x)) end on client s ← V ( 1 |S| ∑ i∈S ∇fi(x), s ) (update optimizer state) x← 1|S| ∑ i∈S yi (update server parameters) end for Base optimizer. We assume the centralized base optimizer we are imitating can be decomposed into two steps: an update step U which updates the parameters x, and a optimizer state update step V(·) which keeps track of global optimizer state s. Each step of the base optimizer B = (U ,V) uses a gradient g to update the parameter x and the optimizer state s as follows: x← x− η U(g, s) , s← V(g, s) . (BASEOPT) As an example, consider SGD with momentum. The state here is the momentum mt and uses the following update steps: xt = xt−1 − η ((1− β)∇fi(xt−1) + βmt−1) , mt = (1− β)∇fi(xt−1) + βmt−1 . Thus, SGD with momentum can be represented in the above generic form with U(g, s) = (1 − β)g + βs and V(g, s) = (1 − β)g + βs. Table 5 in Appendix shows how other algo- rithms like Adam, Adagrad, etc. can be represented in this manner. We keep the update U to be linear in the gradient g, whereas V can be more complicated. This implies that while the parameter update step U is relatively resilient to receiving a biased gradient g while V can be much more sensitive. Compute optimizer state globally, apply locally. 
When updating the optimizer state of the base algorithm, we use only the gradient computed at the server parameters. Further, they remain fixed throughout the local updates of the clients. This ensures that these optimizer state remain unbiased and representative of the global function f(·). At the end of the round, the server performs s← V ( 1 |S| ∑ i∈S ∇fi(x), s ) , ∇fi(x) = 1ni ∑ni ν=1∇fi(x; ζi,ν) . (OPTSTATE) Note that we use full-batch gradients computed at the server parameters x, not client parameters yi. Local client updates. Each client i ∈ S performs K updates using U of the base algorithm and a minibatch gradient. There are two variants possible corresponding to MIME and MIMELITE differentiated using colored boxes. Starting from yi ← x, repeat the following K times yi ← yi − ηU(gi, s) (CLIENTSTEP) where gi ← ∇fi(yi; ζ) for MIMELITE, and gi ← ∇fi(yi; ζ)−∇fi(x; ζ) + 1|S| ∑ j∈S ∇fj(x) for MIME. MIMELITE simply uses the local minibatch gradient whereas MIME uses an SVRG style correction [27]. This is done to reduce the noise from sampling a local mini-batch. While this correction yields faster rates in theory (and in practice for convex problems), in deep learning applications we found that MIMELITE closely matches the performance of MIME. Finally, there are two modifications made in practical FL: we weight all averages across the clients by the number of datapoints ni [43], and we perform K epochs instead of K steps [66]. 4 Theoretical analysis of Mime Table 1 summarizes the rates of MIME (highlighted in blue) and MIMELITE (highlighted in green) and compares them to SERVER-ONLY methods when using SGD, Adam and momentum methods as the base algorithms. We will first examine the convergence of MIME and MIMELITE with a generic base optimizer and show that its properties are preserved in the federated setting. We then examine a specific momentum based base optimizer, and prove that Mime and MimeLite can be asymptotically faster than the best server-only method. This is the first result to prove the usefulness of local steps and demonstrate asymptotic speed-ups. 4.1 Convergence with a generic base optimizer We will prove a generic reduction result demonstrating that if the underlying base algorithm converges, and is robust to slight perturbations, then MIME and MIMELITE also preserve the convergence of the algorithm when applied to the federated setting with additinoal local steps. Theorem I. Suppose that we have G2 inter-client gradient variance (A1), L-smooth {fi} (A2*), and σ2 intra-client gradient variance (A3). Further, suppose that the updater U of our baseoptimizer B = (U ,V) satisfies i) linearity for a fixed state s: U(g1 + g2; s) = U(g1; s) + U(g2; s), and ii) Lipschitzness: ‖U(g; s)‖ ≤ B‖g‖ for some B ≥ 0. Then, running MIME or MIMELITE with K local updates and step-size η is equivalent to running a centralized algorithm with step-size η̃ := Kη ≤ 12LB , and updates xt ← xt−1 − η̃ U(gt + et , st−1) , and st ← V(gt, st−1) , where we have an unbiased gradient Et[gt] = ∇f(xt−1), with variance bounded as Et‖gt −∇f(xt−1)‖2 ≤ { G2 S MIME , G2 S + σ2 KS MIMELITE . and finally a small error bounded as 1 B2L2η̃2 Et‖ et ‖ 2 ≤ { Et‖gt‖2 MIME , Et‖gt‖2 +G2 + σ 2 K MIMELITE . Here, we have proven that MIME and MIMELITE truly mimic the centralized base algorithm with very small perturbations—the magnitude of et is O(η̃2). The key to the result is the linearity of the parameter update step U( · ; s). 
By separating the base optimizer into a very simple parameter step U and a more complicated optimizer state update step V , we can ensure that commonly used algorithms such as momentum, Adam, Adagrad, and others all satisfy this property. Armed with this general reduction, we can easily obtain specific convergence results. Corollary II ((Mime/MimeLite) with SGD). Given that the conditions in Theorem I are satisfied, let us run T rounds withK local steps using SGD as the base optimizer and output xout. This output satisfies E‖∇f(xout)‖2 ≤ for F := f(x0)− f?, G̃2 := G2 + σ2/K and • µ-PL inequality: η = Õ ( 1 µKT ) , and T = Õ ( LG2 µS + LF µ log ( 1 )) MIME , Õ ( LG̃2 µS + LG̃ µ √ + LFµ log ( 1 )) MIMELITE . • Non-convex: for η = O (√ FS LG̃2TK2 ) , and T = O ( LG2F S 2 + LF ) MIME , O ( LG̃2F S 2 + L2G̃F 3/2 + LF ) MIMELITE . Table 1: Number of communication rounds required to reach ‖∇f(x)‖2 ≤ (log factors are ignored) with S clients sampled each round. All analyses except SCAFFOLD assume G2 bounded gradient dissimilarity (A1). All analyses assume L-smooth losses, except MimeLiteMVR and MimeMVR, which only assume δ bounded Hessian dissimilarity (A2). Convergence of SCAFFOLD depends on the total number of clientsN which is potentially infinite. FEDAVG and MIMELITE are slightly slower than the server-only methods due to additional drift terms in most cases. MIME is the fastest and either matches or improves upon the optimal statistical rates (first term in the rates). In fact, MimeMVR and MimeLiteMVR beat lower bounds for any server-only method when δ L. Algorithm Non-convex µ-PL inequality SCAFFOLDa [32] ( N S ) 2 3 L N S + L µ SGD SERVER-ONLY [21] LG 2 S 2 + L G2 µS + L µ MimeLiteSGD≡ FedAvgSGD c LG 2 S 2 + L 2G 3/2 + L G2 µS + LG µ √ + L µ MimeSGD LG 2 S 2 + L G2 µS + L µ ADAM SERVER-ONLY [75]b L −G2/S – MimeLiteAdambc L √ S −G2/S – MimeAdamb L −G2/S – Momentum Variance Reduction (MVR) SERVER-ONLY [14] LG√ S 3/2 + G 2 S + L – MimeLiteMVRd δ(G+σ) 3/2 + G 2+σ2 + δ – MimeMVRd δG√ S 3/2 + G 2 S + δ – SERVER-ONLY lower bound [5] Ω ( LG√ S 3/2 + G 2 S + L ) Ω ( G2 S ) a Num. clients (N ) can be same order as num. total rounds or even∞, making the bounds vacuous. b Adam requires large batch-size S ≥ G2/ to converge [50, 75]. Convergence of FedAdam with client sampling is unknown ([49] only analyze with full client participation). c RequiresK ≥ σ2/G2 number of local updates. Typically, intra-client variance is small (σ2 . G2). d RequiresK ≥ L/δ number of local updates. Faster than the lower bound (and hence any SERVERONLY algorithm) when δ L i.e. our methods can take advantage of Hessian similarity, whereas SERVER-ONLY methods cannot. In worst case, δ ≈ L and all methods are comparable. If we take a sufficient number of local steps K ≥ G2/σ2, then we have G̃ = O(G) in the above rates. On comparing with the rates in Table 1 for SERVER-ONLY SGD, we see that MIME exactly matches its rates. MIMELITE matches the asymptotic term but has a few higher order terms. Note that when using SGD as the base optimizer, MIMELITE becomes exactly the same as FEDAVG and hence has the same rate of convergence. Corollary III ((Mime/MimeLite) with Adam). Suppose that the conditions in Theorem I are satisfied, and further |∇jfi(x)| ≤ H for any coordinate j ∈ [d]. Then let us run T rounds using Adam as the base optimizer withK local steps, β1 = 0, ε0 > 0, η ≤ ε20/KL(H+ε0), and any β2 ∈ [0, 1). Output xout chosen randomly from {x1, . . 
.xT } satisfies E‖∇f(xout)‖2 ≤ for T = O ( LF (H+ε0) 2 ε20( −G̃2/S) ) MIME Adam , O ( LF (H+ε0) 2 √ S ε20( −G̃2/S) ) MIMELITE Adam . where F := f(x0)− f?, G̃2 := G2 + σ2/K. Note that here ε0 represents a small positive parameter used in Adam for regularization, and is different from the error . Similar to the SERVER-ONLY analysis of Adam [75], we assume β1 = 0 and that batch size is large enough such that S ≥ G2/ . A similar analysis can also be carried out for AdaGrad, and other novel variants of Adam [42]. 4.2 Circumventing server-only lower bounds The rates obtained above, while providing a safety-check, do not beat those of the SERVER-ONLY approach. The previous best rates for cross-device FL correspond to MimeLiteSGD which is O(LG 2 S 2 + L2G 3/2 ) [34, 36, 69]. While, using a separate server-learning rate can remove the effect of the second term [33], this at best matches the rate of SERVER-ONLY SGD O(LG 2 S 2 ). This is significantly slower than simply using momentum based variance reduction (MVR) as in in the FL setting (SERVER-ONLY MVR) which has a communication complexity of O( LG√ S 3/2 ) [14]. Thus, even though the main reason for studying local-step methods was to improve the communication complexity, none thus far show such improvement. The above difficulty of beating SERVER-ONLY may not be surprising given the two sets of strong lower bounds known. Necessity of local steps. Firstly, [5] show a gradient oracle lower bound of Ω( LG√ S 3/2 ). This matches the complexity of MVR, and hence at first glance it seems that SERVER-ONLY MVR is optimal. However, the lower bound is really only on the number of gradients computed and not on the number of clients sampled (sample complexity) [18], or number of rounds of communication required. In particular, multiple local updates increases number of gradients computed without needing additional communication offers us a potential way to side-step such lower bounds. A careful analysis of the bias introduced as a result of such local steps is a key part of our analysis. Necessity of δ-BHD. A second set of lower bounds directly study the number of communication rounds required in heterogeneous optimization [6, 69]. These results prove that there exist settings where local steps provide no advantage and SERVER-ONLY methods are optimal. This however contradicts real world experimental evidence [43]. As before, the disparity arises due to the contrived settings considered by the lower bounds. For distributed optimization (with full client participation) and convex quadratic objectives, δ-BHD (A2) was shown to be a sufficient [54, 51] and necessary [6] condition to circumvent these lower bounds and yield highly performant methods. We similarly leverage δ-BHD (A2) to design novel methods which significantly extend prior results to i) all smooth non-convex functions (not just quadratics), and ii) cross-device FL with client sampling. We now state our convergence results with momentum based variance reduction (MVR) as the basealgorithm since it is known to be optimal in the SERVER-ONLY setting. Theorem IV. For L-smooth f with G2 gradient dissimilarity (A1), δ Hessian dissimilarity (A2) and F := (f(x0) − f?), let us run MVR as the base algorithm for T rounds with K ≥ L/δ local steps and generate an output xout. This output satisfies E‖∇f(xout)‖2 ≤ for • MimeMVR : η = O ( min ( 1 δK , ( SF G2TK3 ) 1/3 )) , momentum β = 1−O( δ 2S2/3 (TG2)2/3 ), and T = O ( δGF√ S 3/2 + G2 S + δF ) . 
• MimeLiteMVR : η = O ( min ( 1 δK , ( F Ĝ2TK3 )1/3 )) , momentum β = 1−O( δ 2 (TĜ2)2/3 ), and T = O (δĜF 3/2 + Ĝ2 + δF ) . Here, we define Ĝ2 := G2 + σ2 and the expectation in E‖∇f(xout)‖2 ≤ is taken both over the sampling of the clients during the running of the algorithm, the sampling of the mini-batches in local updates, and the choice of xout (which is chosen randomly from the client iterates yi). Remarkably, the rates of our methods are independent of L and only depend on δ. Thus, when δ ≤ L and δ ≤ L/S for MimeMVR and MimeLiteMVR, the rates beat the server only lower bound of Ω( LG√ S 3/2 ). In fact, if the Hessian variance is small and δ ≈ 0, our methods only needO(1/ ) rounds to communicate. Intuitively, our results show that local steps are very useful when heterogeneity (represented by δ) is smaller than optimization difficulty (captured by smoothness constant L). MimeMVR uses a momentum parameter β of the order of (1 − O(TG2)−2/3) i.e. as T increases, β asymptotically approaches 1. In contrast, previous analyses of distributed momentum (e.g. [73]) prove rates of the form G 2 S(1−β) 2 , which are worse than that of standard SGD by a factor of 1 1−β . Thus, ours is also the first result which theoretically showcases the usefulness of using large momentum in distributed and federated learning. While we only prove the utility of local steps for MimeMVR, we believe our theory can be extended to other local update methods as well. Our analysis is highly non-trivial and involves two crucial ingredients: i) computing the momentum at the server level to ensure that it remains unbiased and then applying it locally during every client update to reduce variance, and ii) carefully keeping track of the bias introduced via additional local steps. Our experiments (Sec. 5) verify our theoretical insights are indeed applicable in deep learning settings as well. See App. B for a proof sketch and App. G–H detailed proofs. 5 Experimental analysis on real world datasets We run experiments on natively federated datasets to confirm our theory and accurately measure real world performance. Our main findings are i) MIME and MIMELITE consistently outperform FEDAVG, and ii) momentum and adaptivity significantly improves performance. 5.1 Setup Algorithms. We consider three (meta) algorithms: FEDAVG, MIME, and MIMELITE. Each of these adapt four base optimizers: SGD, momentum, Adam, and Adagrad. FEDAVG follows [49] who run multiple epochs of SGD on each client sampled, and then aggregate the net client updates. This aggregated update is used as a pseudo-gradient in the base optimizer (called server optimizer). The learning rate for the server optimizer is fixed to 1 as in [67]. This is done to ensure all algorithms have the same number of hyper-parameters. MIME and MIMELITE follow Algorithm 4 and also run a fixed number of epochs on the client. However, note that this requires communicating both the full local-batch gradient as well as the parameter updates doubling the communication required to be sent by the client. For a fairer comparison, we split the sampled clients in MIME and MIMELITE into two groups–the first communicates only full local-batch gradient and the latter communicates only parameter updates. Thus, all methods have equal client communication to the server. This variant retains the convergence guarantees up to constants (details in the Appendix). We also run Loc-MIME where instead of keeping the global optimizer state fixed, we update it locally within the client. 
The optimizer state is reset after the round finishes. In all methods, aggregation is weighted by the number of samples on the clients.

Datasets and models. We run five simulations on three real-world federated datasets: EMNIST62 with i) a linear classifier, ii) an MLP, and iii) a CNN, iv) a charRNN on Shakespeare, and v) an LSTM for next word prediction on StackOverflow, all accessed through Tensorflow Federated [60]. The learning rates were individually tuned, and other optimizer hyper-parameters such as β for momentum and β₁, β₂, ε₀ for Adam and AdaGrad were left at their default values, unless explicitly stated otherwise. We refer to Appendix C for additional setup details and discussion.

5.2 Ablation and comparative study

In order to study the different algorithms, we train a two-hidden-layer (300-100) MLP on EMNIST62 with 10 local epochs for 1k rounds and use SGD+momentum (with tuned β) as the base optimizer.

Mime ≈ MimeLite > FedAvg > SCAFFOLD > FedProx. Fig. 1 (left) shows MIME and MIMELITE have nearly identical performance, and are about 7× faster than FedAvg. This implies our strategy of applying momentum to client updates is faster than simply using server momentum. FedProx [40] uses an additional regularizer µ tuned over [0.1, 0.5, 1] (µ = 0 is the same as FedAvg). Regularization does not seem to reduce client drift but still slows down convergence [66]. SCAFFOLD [32] is also slower than Mime and FedAvg in this setup. This is because, in the cross-device setting, the large number of clients (N = 3.4k) means that each client is visited fewer than 6 times during the entire training (20 clients per round for 1k rounds). Consequently, the correction term utilized by SCAFFOLD relies on control-variates that are quite stale (computed about 200 rounds ago), which slows down convergence. In contrast, the SVRG correction term in Mime is computed using clients sampled in the current or previous rounds, and so is much more accurate.

With momentum > without momentum. Fig. 1 (center) examines the impact of momentum on FedAvg and Mime. Momentum slightly improves the performance of FedAvg, whereas it has a significant impact on the performance of Mime. This is also in line with our theory and confirms that Mime's strategy of applying it locally at every client update makes better use of momentum.

Fixed > locally updated optimizer state. Finally, we check how the performance of Mime changes if, instead of keeping the momentum fixed throughout a round, we let it change. The latter is a way to combine global and local momentum. The momentum is reset at the end of the round, ignoring the changes the clients make to it. Fig. 1 (right) shows that this worsens the performance, confirming that it is better to keep the global optimizer state fixed, as predicted by our theory.

Together, the above observations validate all aspects of the Mime (and MimeLite) design: compute statistics at the server level, and apply them unchanged at every client update.

5.3 Large scale comparison with equal server and client communication

We perform a larger scale study closely matching the setup of [49]. For both MIME and MIMELITE, only half the clients compute and transmit the updated parameters, while the other half transmit the full local-batch gradients. Hence, the client-to-server communication cost is the same for all methods for all clients. However, MIME and MIMELITE require sending additional optimization state to the clients.
Hence, we also reduce the number of clients sampled in each round to ensure that the total communication in each round is 40× the model size for the EMNIST and Shakespeare experiments, and 100× the model size for the StackOverflow next word prediction experiment. Since we perform only 1 local epoch, the hyper-parameters (e.g. epsilon for adaptive methods) are chosen more carefully following [49], and since MIME and MIMELITE use significantly fewer clients per round, the difference between FEDAVG and MIME is smaller here. Table 2 summarizes the results.

For the image classification tasks of EMNIST62 logistic and EMNIST62 CNN, Mime and MimeLite with Adam achieve the best performance. Using momentum (both with SGD and in Adam) significantly improves their performance. In contrast, FedAvgAdam is more unstable, with worse performance. This is because FedAvg is excessively sensitive to hyper-parameters (cf. App. E). We next consider the character prediction task on the Shakespeare dataset, and next word prediction on StackOverflow. Here, the momentum based methods (SGD+momentum and Adam) are slower than their non-momentum counterparts (vanilla SGD and AdaGrad). This is because the mini-batch gradients in these tasks are sparse, with the gradients corresponding to tokens not in the mini-batch being zero. This sparsity structure is, however, destroyed when using momentum or Adam. For the same reason, Mime, which uses an SVRG correction, also significantly increases the gradient density.

Discussion. For traditional tasks such as image classification, we observe that Mime (especially with Adam) usually outperforms MimeLite, which in turn outperforms FedAvg. These methods are able to successfully leverage momentum and adaptivity to improve performance. For tasks where the client gradients are sparse, the SVRG correction used by Mime hinders performance. Adapting our techniques to work with sparse gradients (à la Yogi [75]) could lead to further improvements. Also, note that we reduce communication by naïvely reducing the number of participating clients per round. More sophisticated approaches to save on client communication, including quantization or sparsification [58, 3], or even novel algorithmic innovations [1], could be explored. Further, server communication could be reduced using memory-efficient optimizers, e.g. AdaFactor [55] or SM3 [4].

6 Conclusion

Our work initiated a formal study of the cross-device federated learning problem and provided theoretically justified algorithms. We introduced a new framework, MIME, which overcomes the natural client-heterogeneity in such a setting and can adapt arbitrary centralized algorithms such as Adam without additional hyper-parameters. We demonstrated the superiority of MIME via strong convergence guarantees and empirical evaluations. Further, we proved that a particular instance of our method, MimeMVR, beats centralized lower bounds, demonstrating for the first time that additional local steps can yield asymptotic improvements. We believe our analysis will be of independent interest beyond the federated setting for understanding the sample complexity of non-convex optimization, and for yielding improved analyses of decentralized optimization algorithms.
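To make the Mime framework concrete, the following is a minimal NumPy sketch of a single Mime / MimeLite round with SGD+momentum as the base optimizer B = (U, V), in the spirit of Algorithm 1. The `clients` interface (full_grad, grad, sample_batch) and all constants are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def U(g, s, beta=0.9):
    # Parameter update step of the base optimizer: linear in g for fixed s.
    return (1 - beta) * g + beta * s

def V(g, s, beta=0.9):
    # Optimizer state (momentum) update step of the base optimizer.
    return (1 - beta) * g + beta * s

def mime_round(x, s, clients, eta=0.1, K=10, lite=False):
    # c: SVRG-style control variate -- the average full local-batch
    # gradient at the server parameters x (communicated only for Mime).
    c = np.mean([cl.full_grad(x) for cl in clients], axis=0)

    ys = []
    for cl in clients:
        y = x.copy()
        for _ in range(K):                  # K local steps; s is held fixed
            batch = cl.sample_batch()
            g = cl.grad(y, batch)
            if not lite:                    # Mime: de-bias using the same batch
                g = g - cl.grad(x, batch) + c
            y = y - eta * U(g, s)
        ys.append(y)

    # Server: update the optimizer state only with unbiased server-point
    # gradients, then average the client models.
    s = V(c, s)
    x = np.mean(ys, axis=0)
    return x, s
```

Note how the state s is computed from gradients at the server parameters and held fixed during the K local steps, mirroring the "compute optimizer state globally, apply locally" principle of the framework.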
1. What are the strengths and weaknesses of the proposed method for adapting optimization algorithms in federated learning settings? 2. How does the proposed method compare to other approaches in terms of convergence guarantees, communication cost, and practicality? 3. Are there any concerns regarding the applicability of the proposed method in real-world scenarios, such as client heterogeneity and resource constraints? 4. How does the paper structure and presentation of the material affect the reader's understanding of the proposed method and its limitations? 5. Are there any suggestions or recommendations for future work related to the proposed method and its applications?
Summary Of The Paper Review
Summary Of The Paper

The paper introduces a new generalized algorithmic optimization framework named MIME for adapting, and analyzing the convergence of, any optimization algorithm applied in centralized settings into cross-device federated learning environments. The algorithmic reduction is accomplished by decoupling the centralized optimization algorithm into optimizer (server-side) and parameter state (client-side) updates in federated settings. The convergence of the proposed adaptation is shown both theoretically and empirically over a range of challenging real-world datasets for different optimization algorithms. Interestingly, when applying the proposed federated adaptation to a specific momentum-based optimization algorithm, the adaptation is proven to be asymptotically faster than its centralized counterpart.

Review

Pros
- Interesting adaptation of any centralized optimization algorithm in federated learning settings, by decoupling the optimization problem into optimizer state (w/ complex) updates and parameter state (w/ linear) updates.
- Proven upper bound convergence guarantees of the MIME (MIMELite) framework for different optimizers and comparison against their centralized counterparts. The upper bound convergence rate of the momentum-based variance-reduced federated adaptation is proven to beat the lower bound of the centralized version when clients' heterogeneity is small.
- Experiments on real-world datasets and a large-scale experiment setup evaluation for the EMNIST62, Shakespeare, and StackOverflow datasets.

Cons
- The proposed method introduces additional communication cost in federated cross-device settings.
- Even though the empirical evaluation is sufficient, further clarification is needed.
- The paper structure is not necessarily coherent; some methods are introduced differently in some parts of the paper and are hard to follow.

Discussion

Communication. With your proposed framework, there seems to be a lot more extra communication cost from server to clients (global model, server state, and SVRG-style correction, lines 140, 141) and from clients to the server (updated local model weights, plus full-batch gradients). Can you propose any approaches on how this extra cost (at least on the client end) can be mitigated? You address some part of the latter case in line 274 and in Appendix section C.3, but I think this should be introduced earlier in the text to alleviate some concerns. From my understanding, your analysis focuses on the perspective that the local batch gradient and parameter update are performed at the same client. Does the creation of the two subgroups (local batch gradient and parameter update) affect your convergence? It is clear that out of S clients, S/2 send full-batch gradients and S/2 parameter updates. Instead of creating two subgroups, would a single subgroup of size S/2 lead to the same performance? My concern here is that, since clients have heterogeneous distributions, considering only local batch gradients from half of the clients and parameter updates from the remaining might introduce additional noise in the state updates. Is such a perspective captured in your theoretical evaluation? Is there a case where some of the participating clients cannot compute the full local batch gradient because of resource constraints (Line 146, Algorithm 1)? Should the subsampling method take this constraint into account?

Experiments. Does Figure 1 show test accuracy or validation accuracy?
Why do you provide the validation accuracy in Table 2? The testing accuracy would have been more informative. If this is not possible, then how were the validation datasets constructed and how was the validation accuracy computed across all clients? Moreover, showing the learning curves of the training methods presented in Table 2, based on federation round, would greatly simplify the understanding of the communication cost for each method. Finally, it would be nice to provide/plot the performance of the centralized model (e.g., as a horizontal line) in Figure 1.

Paper structure. In Table 1 you introduce the μ-PL inequality, but in the text there is no discussion of why this convergence analysis is important, what its intricacies are (except of course in Appendix E.1), and why its analysis is not provided for ADAM and MVR – it would be better to discuss this in the text. When presenting the empirical evaluation with Momentum, it is not very clear whether you are discussing server-side or client-side momentum (additional reading is needed). Moreover, the presented momentum results are without variance reduction (not MVR), correct? If that's the case, then what is the convergence of MVR? In addition, in Table 1 you provide convergence guarantees for SGD, Adam, and MVR; however, from the Algorithm it is not clear how the MVR or the Adam variant is applied, since the correction term (c) seems to always be present in MIME, which is also why MIME is different from MIMELite. Can you please elaborate more in the text to address these discrepancies?

Additional Related Work. In [1], Momentum GD is used as part of the client local step (without server-side optimization) and is shown to provide accelerated convergence compared to vanilla SGD (FedAvg). Their approach seems to be very similar to your FedAvgMomentum training method. Regarding the related work section, "Comparison to FedAvg and variants": in [2], the authors also assign more local steps to computationally fast learners in cross-silo settings and (empirically) show faster convergence. [1] https://arxiv.org/pdf/1910.03197.pdf, [2] https://arxiv.org/pdf/2102.02849.pdf

Text corrections / suggestions
- Lines 100, 107: The notation D is a bit misleading when referring to clients (most of the time it denotes a dataset); would another notation (e.g., C) be more appropriate?
- Line 119: please add "eq." or "equation" before 1
- Line 130: change the table 4 reference to table 5
- Line 133: "while … while" fragmentation
- Table 1: Why is MimeLiteSGD equivalent to FedSGD? Did you mean FedAvg? FedSGD is different from FedAvg [3]; similarly in Appendix section C.5, Line 759.
- Line 204: does epsilon (ε) refer to accuracy or error? In your analysis, it refers to error
- Line 221: "which increases" to "increase"
- Line 259: Appendix D does not refer to the proof sketch; change to B
- Line 285: Appendix B does not refer to the experimental setup; change to C
- Please be consistent with the Mime or MIME naming (see sections 4 and 5).
- Please align the subfigures in Figure 1.
- Line 331: The statement slightly contradicts your empirical evaluation, which is also shown in Table 2 – MIME does not always outperform MIMELite.
[3] https://arxiv.org/pdf/1602.05629.pdf

POST AUTHORS RESPONSE COMMENT: I carefully read the response of the authors to my points and the points raised by the other reviewers. My primary concern is the additional communication cost introduced by the framework in cross-device federated settings.
In their follow-up answer, the authors proposed an approach to reduce this cost through sampling, and indicated (in another response) that the lower bound analysis of the communication complexity remains an open question. I would like to see this question, and others raised during the reviewing process, as part of the paper's discussion/future work section, since it can help readers to further investigate these open problems. My score remains unchanged.
NIPS
Title Breaking the centralized barrier for cross-device federated learning

Abstract Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients, which can cause a client drift phenomenon. In fact, designing an algorithm for FL that is uniformly better than simple centralized training has been a major open problem thus far. In this work, we propose a general algorithmic framework, MIME, which i) mitigates client drift and ii) adapts an arbitrary centralized optimization algorithm, such as momentum and Adam, to the cross-device federated learning setting. MIME uses a combination of control-variates and server-level optimizer state (e.g. momentum) at every client-update step to ensure that each local update mimics that of the centralized method run on i.i.d. data. We prove a reduction result showing that MIME can translate the convergence of a generic algorithm in the centralized setting into convergence in the federated setting. Moreover, we show that, when combined with momentum-based variance reduction, MIME is provably faster than any centralized method – the first such result. We also perform a thorough experimental exploration of MIME's performance on real world datasets (implemented here).

1 Introduction

Federated learning (FL) is an increasingly important large-scale learning framework where the training data remains distributed over a large number of clients, which may be mobile phones or network sensors [38, 37, 43, 44, 28]. A server then orchestrates the clients to train a single model, here referred to as a server model, without ever transmitting client data over the network, thereby providing some basic levels of data privacy and security. Two important settings are distinguished in FL [28, Table 1]: the cross-device and the cross-silo settings. The cross-silo setting corresponds to a relatively small number of reliable clients, typically organizations, such as medical or financial institutions. In contrast, in the cross-device federated learning setting, the number of clients may be extremely large and include, for example, all 3.5 billion active Android phones [25]. Thus, in that setting, we may never make even a single pass over the entire clients' data during training.

∗This work also appears under the alternative title "Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning" [31].

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

The cross-device setting is further characterized by resource-poor clients communicating over a highly unreliable network. Together, the essential features of this setting give rise to unique challenges not present in the cross-silo setting. In this work, we are interested in the more challenging cross-device setting, for which we will formalize and study stochastic optimization algorithms. Importantly, recent advances in FL optimization, such as SCAFFOLD [32] or FedDyn [1], are no longer applicable, since they are designed for the cross-silo setting.

The problem. The de facto standard algorithm for the cross-device setting is FEDAVG [43], which performs multiple SGD updates on the available clients before communicating to the server. While this approach can reduce the frequency of communication required, performing multiple steps on the same client can lead to 'over-fitting' to its atypical local data, a phenomenon known as client drift [32].
This in turn leads to slower convergence and can, somewhat counter-intuitively, require larger total communication [69]. Despite significant attention received from the optimization community, the communication complexity of heterogeneous cross-device FL has not improved upon that of simple centralized methods, which take no local steps (aka SERVER-ONLY methods). Furthermore, algorithmic innovations such as momentum [59, 14], adaptivity [35, 75, 77], and clipping [71, 72, 76] are critical to the success of deep learning applications. The lack of a theoretical understanding of the impact of multiple client steps has also hindered adapting these techniques in a principled manner into the client updates, in order to replace the vanilla SGD update of FEDAVG.

To overcome such deficiencies, we propose a new framework, MIME, that mitigates client drift and can adapt an arbitrary centralized optimization algorithm, e.g. SGD with momentum or Adam, to the federated setting. In each local client update, MIME uses global optimizer state, e.g. momentum or adaptive learning rates, and an SVRG-style correction to mimic the updates of the centralized algorithm run on i.i.d. data. This optimizer state is computed only at the server level and kept fixed throughout the local steps, thereby avoiding overfitting to the atypical local data of any single client.

Contributions. We summarize our main results below.
• MIME framework. We formalize the cross-device federated learning problem, and propose a new framework MIME that can adapt arbitrary centralized algorithms to this setting.
• Convergence result. We prove a result showing that MIME successfully reduces client drift. We also prove that the convergence of any generic algorithm in the centralized setting translates into convergence of its MIME version in the federated setting.
• Speed-up over centralized methods. By carefully tracking the bias introduced due to multiple local steps, we prove that MIME with momentum-based variance reduction (MVR) can beat a lower bound for centralized methods, thus breaking a fundamental barrier. This is the first such result in FL, and also the first general result showing asymptotic speed-up due to local steps.
• Empirical validation. We propose a simpler variant, MIMELITE, with an empirical performance similar to MIME. We report the results of a thorough experimental analysis demonstrating that both MIME and MIMELITE indeed converge faster than FEDAVG.

Related work. Analysis of FEDAVG: Much of the recent work in federated learning has focused on analyzing FEDAVG. For identical clients, FEDAVG coincides with parallel SGD, for which [78] derived an analysis with asymptotic convergence. Sharper and more refined analyses of the same method, sometimes called local SGD, were provided by [56], and more recently by [57], [47], [34], and [70], for identical functions. Their analysis was extended to heterogeneous clients in [68, 74, 32, 34, 36]. [11] derived a tight characterization of FedAvg with quadratic functions and demonstrated the sensitivity of the algorithm to both client and server step sizes. Matching upper and lower bounds were recently given by [32] and [69] for general functions, proving that FEDAVG can be slower than even SGD for heterogeneous data, due to client drift.

Comparison to SCAFFOLD: For the cross-silo setting where the number of clients is relatively low, [32] proposed the SCAFFOLD algorithm, which uses control-variates (similar to SVRG) to correct for client drift.
However, their algorithm crucially relies on stateful clients which repeatedly participate in the training process. FedDyn [1] reduces the communication requirements, but also requires persistent stateful clients. In contrast, we focus on the cross-device setting where clients may be visited only once during training and where they are stateless (and thus SCAFFOLD and FedDyn are inapplicable). This is akin to the difference between the finite-sum (corresponding to cross-silo) and stochastic (cross-device) settings in traditional centralized optimization [39].

Comparison to FedAvg and variants: [26] and [67] observed that using server momentum significantly improves over vanilla FEDAVG. This idea was generalized by [49], who replaced the server update with an arbitrary optimizer, e.g. Adam. However, these methods only modify the server update while using SGD for the client updates. We henceforth refer to this meta algorithm as FedAvg. FedAvgSGD, FedAvgMom, and FedAvgAdam denote specific instantiations of the server optimizer in FedAvg with SGD, Momentum, or Adam. MIME, on the other hand, ensures that every local client update resembles the optimizer, e.g. MIME would apply momentum in every client update and not just at the server level. Beyond this, [40] proposed to add a regularizer to ensure client updates remain close. However, this may slow down convergence (cf. Fig. 5 and [32, 66]). Other orthogonal directions which can be combined with MIME include tackling computation heterogeneity, where some clients perform many more updates than others [66], improving fairness by modifying the objective [44, 41], incorporating differential privacy [20, 2, 61], Byzantine adversaries [48, 65, 30], secure aggregation [8, 24], etc. We defer additional discussion to the extensive survey by [28].

Momentum based variance reduction. Initial optimal methods for stochastic non-convex optimization like SPIDER [17] and SARAH [46] required intermittently computing very large batch gradients. Subsequently, it was shown that momentum based variance reduction (MVR) methods obtain a similar optimal rate without needing such large batch gradient computations [62, 14]. Momentum is an exponential moving average of many stochastic gradients, and so it has much smaller variance than the stochastic gradients themselves. However, because these gradients are computed at different parameters, it also has a bias. MVR adds a small additional correction term which significantly reduces this bias and provides improved rates.

2 Problem setup

This section formalizes the problem of cross-device federated learning [28]. Cross-device FL is characterized by a large number of client devices, like mobile phones, which may potentially connect to the server at most once. Due to their transient nature, it is not possible to store any state on the clients, precluding an algorithm like SCAFFOLD. Furthermore, each client has only a few samples, and there is wide heterogeneity in the samples across clients. Finally, communication is a major bottleneck, and a key metric for optimization in this setting is the number of communication rounds. Thus, our objective will be to minimize the following quantity within the fewest number of client-server communication rounds:

f(x) = E_{i∼C}[ f_i(x) := (1/n_i) ∑_{ν=1}^{n_i} f_i(x; ζ_{i,ν}) ] .   (1)

Here, f_i denotes the loss function of client i and {ζ_{i,1}, . . . , ζ_{i,n_i}} its local data.
Since the number of clients is extremely large, while the size of each local dataset is rather modest, we represent the former as an expectation and the latter as a finite sum. In each round, the algorithm samples a subset of clients (of size S) and performs some updates to the server model. Due to the transient and heterogeneous nature of the clients, it is easy to see that the problem becomes intractable with arbitrarily dissimilar clients. Thus, it is necessary to assume bounded dissimilarity across clients.

(A1) G²-BGV or bounded inter-client gradient variance: there exists G ≥ 0 such that E_{i∼C}[‖∇f_i(x) − ∇f(x)‖²] ≤ G² , ∀x .

Next, we also characterize the variance in the Hessians.

(A2) δ-BHV or bounded Hessian variance: Almost surely, the loss function of any client i satisfies ‖∇²f_i(x; ζ) − ∇²f(x)‖ ≤ δ , ∀x .

This is in contrast to the usual smoothness assumption, which can be stated as:

(A2*) L-smooth: ‖∇²f_i(x; ζ)‖ ≤ L , ∀x , a.s. for any i.

Note that if f_i(x; ζ) is L-smooth then (A2) is satisfied with δ ≤ 2L, and hence (A2) is weaker than (A2*). In realistic examples we expect the clients to be similar, and hence that δ ≪ L. In addition, we assume that f(x) is bounded from below by f* and is L-smooth, as is standard.

3 Mime framework

In this section we describe how to adapt an arbitrary centralized optimizer (referred to as the "base" optimizer), which may have internal state (e.g. momentum), to the federated learning problem (1) while ensuring there is no client-drift. Algorithm 1 describes our framework. We develop two variants, MIME and MIMELITE, which consist of three components: i) a base optimizer we are seeking to mimic, ii) the global (server) optimizer state computation, and iii) the local client updates.

Algorithm 1 Mime and MimeLite
input: initial x and s, learning rate η, and base optimizer B = (U, V)
for each round t = 1, · · · , T do
  sample subset S of clients
  communicate (x, s) to all clients i ∈ S
  communicate c ← (1/|S|) ∑_{j∈S} ∇f_j(x) (only Mime)
  on client i ∈ S in parallel do
    initialize local model y_i ← x
    for k = 1, · · · , K do
      sample mini-batch ζ from local data
      g_i ← ∇f_i(y_i; ζ) − ∇f_i(x; ζ) + c (Mime)
      g_i ← ∇f_i(y_i; ζ) (MimeLite)
      update y_i ← y_i − η U(g_i, s)
    end for
    compute full local-batch gradient ∇f_i(x)
    communicate (y_i, ∇f_i(x))
  end on client
  s ← V( (1/|S|) ∑_{i∈S} ∇f_i(x), s ) (update optimizer state)
  x ← (1/|S|) ∑_{i∈S} y_i (update server parameters)
end for

Base optimizer. We assume the centralized base optimizer we are imitating can be decomposed into two steps: an update step U, which updates the parameters x, and an optimizer state update step V(·), which keeps track of the global optimizer state s. Each step of the base optimizer B = (U, V) uses a gradient g to update the parameter x and the optimizer state s as follows:

x ← x − η U(g, s) ,  s ← V(g, s) .   (BASEOPT)

As an example, consider SGD with momentum. The state here is the momentum m_t, and it uses the following update steps:

x_t = x_{t−1} − η ((1 − β)∇f_i(x_{t−1}) + β m_{t−1}) ,  m_t = (1 − β)∇f_i(x_{t−1}) + β m_{t−1} .

Thus, SGD with momentum can be represented in the above generic form with U(g, s) = (1 − β)g + βs and V(g, s) = (1 − β)g + βs. Table 5 in the Appendix shows how other algorithms like Adam, Adagrad, etc. can be represented in this manner. We keep the update U linear in the gradient g, whereas V can be more complicated. This implies that the parameter update step U is relatively resilient to receiving a biased gradient g, while V can be much more sensitive.

Compute optimizer state globally, apply locally.
When updating the optimizer state of the base algorithm, we use only the gradient computed at the server parameters. Further, the state remains fixed throughout the local updates of the clients. This ensures that the optimizer state remains unbiased and representative of the global function f(·). At the end of the round, the server performs

s ← V( (1/|S|) ∑_{i∈S} ∇f_i(x), s ) ,  where ∇f_i(x) = (1/n_i) ∑_{ν=1}^{n_i} ∇f_i(x; ζ_{i,ν}) .   (OPTSTATE)

Note that we use full-batch gradients computed at the server parameters x, not the client parameters y_i.

Local client updates. Each client i ∈ S performs K updates using U of the base algorithm and a minibatch gradient. There are two variants possible, corresponding to MIME and MIMELITE, differentiated using colored boxes. Starting from y_i ← x, repeat the following K times:

y_i ← y_i − η U(g_i, s) ,   (CLIENTSTEP)

where g_i ← ∇f_i(y_i; ζ) for MIMELITE, and g_i ← ∇f_i(y_i; ζ) − ∇f_i(x; ζ) + (1/|S|) ∑_{j∈S} ∇f_j(x) for MIME. MIMELITE simply uses the local minibatch gradient, whereas MIME uses an SVRG-style correction [27]. This is done to reduce the noise from sampling a local mini-batch. While this correction yields faster rates in theory (and in practice for convex problems), in deep learning applications we found that MIMELITE closely matches the performance of MIME. Finally, there are two modifications made in practical FL: we weight all averages across the clients by the number of datapoints n_i [43], and we perform K epochs instead of K steps [66].

4 Theoretical analysis of Mime

Table 1 summarizes the rates of MIME (highlighted in blue) and MIMELITE (highlighted in green) and compares them to SERVER-ONLY methods when using SGD, Adam and momentum methods as the base algorithms. We will first examine the convergence of MIME and MIMELITE with a generic base optimizer and show that its properties are preserved in the federated setting. We then examine a specific momentum based base optimizer, and prove that Mime and MimeLite can be asymptotically faster than the best server-only method. This is the first result to prove the usefulness of local steps and demonstrate asymptotic speed-ups.

4.1 Convergence with a generic base optimizer

We will prove a generic reduction result demonstrating that if the underlying base algorithm converges and is robust to slight perturbations, then MIME and MIMELITE preserve the convergence of the algorithm when applied to the federated setting with additional local steps.

Theorem I. Suppose that we have G² inter-client gradient variance (A1), L-smooth {f_i} (A2*), and σ² intra-client gradient variance (A3). Further, suppose that the update step U of our base optimizer B = (U, V) satisfies i) linearity for a fixed state s: U(g₁ + g₂; s) = U(g₁; s) + U(g₂; s), and ii) Lipschitzness: ‖U(g; s)‖ ≤ B‖g‖ for some B ≥ 0. Then, running MIME or MIMELITE with K local updates and step-size η is equivalent to running a centralized algorithm with step-size η̃ := Kη ≤ 1/(2LB) and updates

x_t ← x_{t−1} − η̃ U(g_t + e_t, s_{t−1}) ,  and  s_t ← V(g_t, s_{t−1}) ,

where we have an unbiased gradient E_t[g_t] = ∇f(x_{t−1}), with variance bounded as

E_t‖g_t − ∇f(x_{t−1})‖² ≤ G²/S for MIME,  and  ≤ G²/S + σ²/(KS) for MIMELITE,

and finally a small error bounded as

(1/(B²L²η̃²)) E_t‖e_t‖² ≤ E_t‖g_t‖² for MIME,  and  ≤ E_t‖g_t‖² + G² + σ²/K for MIMELITE.

Here, we have proven that MIME and MIMELITE truly mimic the centralized base algorithm with very small perturbations—the magnitude of e_t is O(η̃²). The key to the result is the linearity of the parameter update step U(·; s).
By separating the base optimizer into a very simple parameter step U and a more complicated optimizer state update step V, we can ensure that commonly used algorithms such as momentum, Adam, Adagrad, and others all satisfy this property. Armed with this general reduction, we can easily obtain specific convergence results.

Corollary II (Mime/MimeLite with SGD). Given that the conditions in Theorem I are satisfied, let us run T rounds with K local steps using SGD as the base optimizer and output xout. This output satisfies E‖∇f(xout)‖² ≤ ε for F := f(x₀) − f*, G̃² := G² + σ²/K, and

• µ-PL inequality: η = Õ( 1/(µKT) ), and T = Õ( LG²/(µSε) + (LF/µ) log(1/ε) ) for MIME,  and  Õ( LG̃²/(µSε) + LG̃/(µ√ε) + (LF/µ) log(1/ε) ) for MIMELITE.

• Non-convex: η = O( √( FS/(LG̃²TK²) ) ), and T = O( LG²F/(Sε²) + LF/ε ) for MIME,  and  O( LG̃²F/(Sε²) + L²G̃F/ε^{3/2} + LF/ε ) for MIMELITE.

Table 1: Number of communication rounds required to reach ‖∇f(x)‖² ≤ ε (log factors are ignored) with S clients sampled each round. All analyses except SCAFFOLD assume G² bounded gradient dissimilarity (A1). All analyses assume L-smooth losses, except MimeLiteMVR and MimeMVR, which only assume δ bounded Hessian dissimilarity (A2). Convergence of SCAFFOLD depends on the total number of clients N, which is potentially infinite. FEDAVG and MIMELITE are slightly slower than the server-only methods due to additional drift terms in most cases. MIME is the fastest and either matches or improves upon the optimal statistical rates (first term in the rates). In fact, MimeMVR and MimeLiteMVR beat lower bounds for any server-only method when δ ≪ L.

Algorithm | Non-convex | µ-PL inequality
SCAFFOLD^a [32] | (N/S)^{2/3} · L/ε | N/S + L/µ
SGD:
  SERVER-ONLY [21] | LG²/(Sε²) + L/ε | G²/(µSε) + L/µ
  MimeLiteSGD ≡ FedAvgSGD^c | LG²/(Sε²) + L²G/ε^{3/2} + L/ε | G²/(µSε) + LG/(µ√ε) + L/µ
  MimeSGD | LG²/(Sε²) + L/ε | G²/(µSε) + L/µ
ADAM:
  SERVER-ONLY [75]^b | L/(ε − G²/S) | –
  MimeLiteAdam^{bc} | L√S/(ε − G²/S) | –
  MimeAdam^b | L/(ε − G²/S) | –
Momentum Variance Reduction (MVR):
  SERVER-ONLY [14] | LG/(√S ε^{3/2}) + G²/(Sε) + L/ε | –
  MimeLiteMVR^d | δ(G + σ)/ε^{3/2} + (G² + σ²)/ε + δ/ε | –
  MimeMVR^d | δG/(√S ε^{3/2}) + G²/(Sε) + δ/ε | –
SERVER-ONLY lower bound [5] | Ω( LG/(√S ε^{3/2}) + G²/(Sε) + L/ε ) | Ω( G²/(Sε) )

a The number of clients (N) can be of the same order as the total number of rounds, or even ∞, making the bounds vacuous.
b Adam requires a large batch size S ≥ G²/ε to converge [50, 75]. Convergence of FedAdam with client sampling is unknown ([49] only analyze it with full client participation).
c Requires K ≥ σ²/G² local updates. Typically, the intra-client variance is small (σ² ≲ G²).
d Requires K ≥ L/δ local updates. Faster than the lower bound (and hence any SERVER-ONLY algorithm) when δ ≪ L, i.e. our methods can take advantage of Hessian similarity, whereas SERVER-ONLY methods cannot. In the worst case, δ ≈ L and all methods are comparable.

If we take a sufficient number of local steps K ≥ G²/σ², then we have G̃ = O(G) in the above rates. On comparing with the rates in Table 1 for SERVER-ONLY SGD, we see that MIME exactly matches its rates. MIMELITE matches the asymptotic term but has a few higher order terms. Note that when using SGD as the base optimizer, MIMELITE becomes exactly the same as FEDAVG and hence has the same rate of convergence.

Corollary III (Mime/MimeLite with Adam). Suppose that the conditions in Theorem I are satisfied, and further |∇_j f_i(x)| ≤ H for any coordinate j ∈ [d]. Then let us run T rounds using Adam as the base optimizer with K local steps, β₁ = 0, ε₀ > 0, η ≤ ε₀²/(KL(H + ε₀)), and any β₂ ∈ [0, 1). The output xout, chosen randomly from {x₁,
. . . , xT} satisfies E‖∇f(xout)‖² ≤ ε for

T = O( LF(H + ε₀)² / (ε₀²(ε − G̃²/S)) ) for MimeAdam,  and  T = O( LF(H + ε₀)²√S / (ε₀²(ε − G̃²/S)) ) for MimeLiteAdam,

where F := f(x₀) − f*, G̃² := G² + σ²/K. Note that here ε₀ represents a small positive parameter used in Adam for regularization, and is different from the error ε. Similar to the SERVER-ONLY analysis of Adam [75], we assume β₁ = 0 and that the batch size is large enough such that S ≥ G²/ε. A similar analysis can also be carried out for AdaGrad and other novel variants of Adam [42].

4.2 Circumventing server-only lower bounds

The rates obtained above, while providing a safety-check, do not beat those of the SERVER-ONLY approach. The previous best rates for cross-device FL correspond to MimeLiteSGD, which is O( LG²/(Sε²) + L²G/ε^{3/2} ) [34, 36, 69]. While using a separate server learning rate can remove the effect of the second term [33], this at best matches the rate of SERVER-ONLY SGD, O( LG²/(Sε²) ). This is significantly slower than simply using momentum based variance reduction (MVR) in the FL setting (SERVER-ONLY MVR), which has a communication complexity of O( LG/(√S ε^{3/2}) ) [14]. Thus, even though the main reason for studying local-step methods was to improve the communication complexity, none thus far show such improvement.

The above difficulty of beating SERVER-ONLY may not be surprising given the two sets of strong lower bounds known.

Necessity of local steps. Firstly, [5] show a gradient oracle lower bound of Ω( LG/(√S ε^{3/2}) ). This matches the complexity of MVR, and hence at first glance it seems that SERVER-ONLY MVR is optimal. However, the lower bound is really only on the number of gradients computed, and not on the number of clients sampled (sample complexity) [18] or the number of rounds of communication required. In particular, multiple local updates increase the number of gradients computed without needing additional communication, offering us a potential way to side-step such lower bounds. A careful analysis of the bias introduced as a result of such local steps is a key part of our analysis.

Necessity of δ-BHD. A second set of lower bounds directly studies the number of communication rounds required in heterogeneous optimization [6, 69]. These results prove that there exist settings where local steps provide no advantage and SERVER-ONLY methods are optimal. This, however, contradicts real-world experimental evidence [43]. As before, the disparity arises due to the contrived settings considered by the lower bounds. For distributed optimization (with full client participation) and convex quadratic objectives, δ-BHD (A2) was shown to be a sufficient [54, 51] and necessary [6] condition to circumvent these lower bounds and yield highly performant methods. We similarly leverage δ-BHD (A2) to design novel methods which significantly extend prior results to i) all smooth non-convex functions (not just quadratics), and ii) cross-device FL with client sampling.

We now state our convergence results with momentum based variance reduction (MVR) as the base algorithm, since it is known to be optimal in the SERVER-ONLY setting.

Theorem IV. For L-smooth f with G² gradient dissimilarity (A1), δ Hessian dissimilarity (A2), and F := f(x₀) − f*, let us run MVR as the base algorithm for T rounds with K ≥ L/δ local steps and generate an output xout. This output satisfies E‖∇f(xout)‖² ≤ ε for

• MimeMVR: η = O( min( 1/(δK), ( SF/(G²TK³) )^{1/3} ) ), momentum β = 1 − O( δ²S^{2/3}/(TG²)^{2/3} ), and T = O( δGF/(√S ε^{3/2}) + G²/(Sε) + δF/ε ).
• MimeLiteMVR: η = O( min( 1/(δK), ( F/(Ĝ²TK³) )^{1/3} ) ), momentum β = 1 − O( δ²/(TĜ²)^{2/3} ), and T = O( δĜF/ε^{3/2} + Ĝ²/ε + δF/ε ).

Here, we define Ĝ² := G² + σ², and the expectation in E‖∇f(xout)‖² ≤ ε is taken over the sampling of the clients during the running of the algorithm, the sampling of the mini-batches in local updates, and the choice of xout (which is chosen randomly from the client iterates yi).

Remarkably, the rates of our methods are independent of L and only depend on δ. Thus, when δ ≤ L for MimeMVR and δ ≤ L/√S for MimeLiteMVR, the rates beat the server-only lower bound of Ω( LG/(√S ε^{3/2}) ). In fact, if the Hessian variance is small and δ ≈ 0, our methods only need O(1/ε) rounds of communication. Intuitively, our results show that local steps are very useful when heterogeneity (represented by δ) is smaller than optimization difficulty (captured by the smoothness constant L).

MimeMVR uses a momentum parameter β of the order of 1 − O( (TG²)^{−2/3} ), i.e. as T increases, β asymptotically approaches 1. In contrast, previous analyses of distributed momentum (e.g. [73]) prove rates of the form G²/(S(1 − β)ε²), which are worse than that of standard SGD by a factor of 1/(1 − β). Thus, ours is also the first result which theoretically showcases the usefulness of using large momentum in distributed and federated learning. While we only prove the utility of local steps for MimeMVR, we believe our theory can be extended to other local update methods as well.

Our analysis is highly non-trivial and involves two crucial ingredients: i) computing the momentum at the server level to ensure that it remains unbiased, and then applying it locally during every client update to reduce variance, and ii) carefully keeping track of the bias introduced via additional local steps. Our experiments (Sec. 5) verify that our theoretical insights are indeed applicable in deep learning settings as well. See App. B for a proof sketch and App. G–H for detailed proofs.

5 Experimental analysis on real world datasets

We run experiments on natively federated datasets to confirm our theory and accurately measure real world performance. Our main findings are i) MIME and MIMELITE consistently outperform FEDAVG, and ii) momentum and adaptivity significantly improve performance.

5.1 Setup

Algorithms. We consider three (meta) algorithms: FEDAVG, MIME, and MIMELITE. Each of these adapts four base optimizers: SGD, momentum, Adam, and Adagrad. FEDAVG follows [49], who run multiple epochs of SGD on each client sampled and then aggregate the net client updates. This aggregated update is used as a pseudo-gradient in the base optimizer (called the server optimizer). The learning rate for the server optimizer is fixed to 1 as in [67]. This is done to ensure all algorithms have the same number of hyper-parameters. MIME and MIMELITE follow Algorithm 1 and also run a fixed number of epochs on the client. However, note that this requires communicating both the full local-batch gradient and the parameter updates, doubling the communication required to be sent by the client. For a fairer comparison, we split the sampled clients in MIME and MIMELITE into two groups: the first communicates only the full local-batch gradient, and the latter communicates only the parameter updates. Thus, all methods have equal client communication to the server. This variant retains the convergence guarantees up to constants (details in the Appendix). We also run Loc-MIME where, instead of keeping the global optimizer state fixed, we update it locally within the client.
The optimizer state is reset after the round finishes. In all methods, aggregation is weighted by the number of samples on the clients.

Datasets and models. We run five simulations on three real-world federated datasets: EMNIST62 with i) a linear classifier, ii) an MLP, and iii) a CNN, iv) a charRNN on Shakespeare, and v) an LSTM for next word prediction on StackOverflow, all accessed through Tensorflow Federated [60]. The learning rates were individually tuned, and other optimizer hyper-parameters such as β for momentum and β₁, β₂, ε₀ for Adam and AdaGrad were left at their default values, unless explicitly stated otherwise. We refer to Appendix C for additional setup details and discussion.

5.2 Ablation and comparative study

In order to study the different algorithms, we train a two-hidden-layer (300-100) MLP on EMNIST62 with 10 local epochs for 1k rounds and use SGD+momentum (with tuned β) as the base optimizer.

Mime ≈ MimeLite > FedAvg > SCAFFOLD > FedProx. Fig. 1 (left) shows MIME and MIMELITE have nearly identical performance, and are about 7× faster than FedAvg. This implies our strategy of applying momentum to client updates is faster than simply using server momentum. FedProx [40] uses an additional regularizer µ tuned over [0.1, 0.5, 1] (µ = 0 is the same as FedAvg). Regularization does not seem to reduce client drift but still slows down convergence [66]. SCAFFOLD [32] is also slower than Mime and FedAvg in this setup. This is because, in the cross-device setting, the large number of clients (N = 3.4k) means that each client is visited fewer than 6 times during the entire training (20 clients per round for 1k rounds). Consequently, the correction term utilized by SCAFFOLD relies on control-variates that are quite stale (computed about 200 rounds ago), which slows down convergence. In contrast, the SVRG correction term in Mime is computed using clients sampled in the current or previous rounds, and so is much more accurate.

With momentum > without momentum. Fig. 1 (center) examines the impact of momentum on FedAvg and Mime. Momentum slightly improves the performance of FedAvg, whereas it has a significant impact on the performance of Mime. This is also in line with our theory and confirms that Mime's strategy of applying it locally at every client update makes better use of momentum.

Fixed > locally updated optimizer state. Finally, we check how the performance of Mime changes if, instead of keeping the momentum fixed throughout a round, we let it change. The latter is a way to combine global and local momentum. The momentum is reset at the end of the round, ignoring the changes the clients make to it. Fig. 1 (right) shows that this worsens the performance, confirming that it is better to keep the global optimizer state fixed, as predicted by our theory.

Together, the above observations validate all aspects of the Mime (and MimeLite) design: compute statistics at the server level, and apply them unchanged at every client update.

5.3 Large scale comparison with equal server and client communication

We perform a larger scale study closely matching the setup of [49]. For both MIME and MIMELITE, only half the clients compute and transmit the updated parameters, while the other half transmit the full local-batch gradients. Hence, the client-to-server communication cost is the same for all methods for all clients. However, MIME and MIMELITE require sending additional optimization state to the clients.
Hence, we also reduce the number of clients sampled in each round to ensure that the total communication in each round is 40× the model size for the EMNIST and Shakespeare experiments, and 100× the model size for the StackOverflow next word prediction experiment. Since we perform only 1 local epoch, the hyper-parameters (e.g. epsilon for adaptive methods) are chosen more carefully following [49], and since MIME and MIMELITE use significantly fewer clients per round, the difference between FEDAVG and MIME is smaller here. Table 2 summarizes the results.

For the image classification tasks of EMNIST62 logistic and EMNIST62 CNN, Mime and MimeLite with Adam achieve the best performance. Using momentum (both with SGD and in Adam) significantly improves their performance. In contrast, FedAvgAdam is more unstable, with worse performance. This is because FedAvg is excessively sensitive to hyper-parameters (cf. App. E). We next consider the character prediction task on the Shakespeare dataset, and next word prediction on StackOverflow. Here, the momentum based methods (SGD+momentum and Adam) are slower than their non-momentum counterparts (vanilla SGD and AdaGrad). This is because the mini-batch gradients in these tasks are sparse, with the gradients corresponding to tokens not in the mini-batch being zero. This sparsity structure is, however, destroyed when using momentum or Adam. For the same reason, Mime, which uses an SVRG correction, also significantly increases the gradient density.

Discussion. For traditional tasks such as image classification, we observe that Mime (especially with Adam) usually outperforms MimeLite, which in turn outperforms FedAvg. These methods are able to successfully leverage momentum and adaptivity to improve performance. For tasks where the client gradients are sparse, the SVRG correction used by Mime hinders performance. Adapting our techniques to work with sparse gradients (à la Yogi [75]) could lead to further improvements. Also, note that we reduce communication by naïvely reducing the number of participating clients per round. More sophisticated approaches to save on client communication, including quantization or sparsification [58, 3], or even novel algorithmic innovations [1], could be explored. Further, server communication could be reduced using memory-efficient optimizers, e.g. AdaFactor [55] or SM3 [4].

6 Conclusion

Our work initiated a formal study of the cross-device federated learning problem and provided theoretically justified algorithms. We introduced a new framework, MIME, which overcomes the natural client-heterogeneity in such a setting and can adapt arbitrary centralized algorithms such as Adam without additional hyper-parameters. We demonstrated the superiority of MIME via strong convergence guarantees and empirical evaluations. Further, we proved that a particular instance of our method, MimeMVR, beats centralized lower bounds, demonstrating for the first time that additional local steps can yield asymptotic improvements. We believe our analysis will be of independent interest beyond the federated setting for understanding the sample complexity of non-convex optimization, and for yielding improved analyses of decentralized optimization algorithms.
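As a complement to the SGD+momentum example given in Section 3, here is a hedged sketch of how an adaptive method such as Adam (with β₁ = 0, as assumed in Corollary III) could be written in the B = (U, V) form the framework requires. The state layout s = (v, t) and all names are illustrative assumptions, not the paper's Table 5 verbatim; the point is that U stays linear in g for a fixed state s, which is exactly the property Theorem I needs.

```python
import numpy as np

# Adam with beta1 = 0, decomposed into a parameter update U and a state
# update V. The state s = (v, t) holds the second-moment estimate and the
# step count; layout and constants are illustrative choices.

def adam_U(g, s, beta2=0.999, eps0=1e-8):
    v, t = s
    v_hat = v / (1.0 - beta2 ** t)        # bias-corrected second moment
    return g / (np.sqrt(v_hat) + eps0)    # linear in g for a fixed state s

def adam_V(g, s, beta2=0.999):
    v, t = s
    return (beta2 * v + (1.0 - beta2) * g**2, t + 1)

# One BASEOPT step: x <- x - eta * U(g, s), then s <- V(g, s).
x = np.zeros(3)
s = (np.zeros(3), 1)
g = np.array([0.1, -0.2, 0.3])
x = x - 0.01 * adam_U(g, s)
s = adam_V(g, s)
```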
1. What is the focus and contribution of the paper on federated learning? 2. What are the strengths of the proposed meta-algorithms, particularly in comparison to centralized algorithms? 3. What are the weaknesses of the paper regarding its lack of clarity and missing details? 4. How does the reviewer assess the effectiveness and impact of the proposed methods in practical scenarios? 5. What are the limitations of the paper's analysis and experiments that need to be addressed?
Summary Of The Paper Review
Summary Of The Paper

This paper studies the cross-device federated learning setting, where the number of clients is very large and most clients participate in only one round of communication. Because of this low client participation per round, it is hard to maintain any client state/memory. State is usually helpful in the traditional federated learning setting (e.g., in SCAFFOLD) for estimating client heterogeneity, which can then be used to correct for the bias of the client. To mitigate this, the authors propose two meta-algorithms, MIME and MIMELITE, to adapt a general class of centralized algorithms to this cross-device setting. MIME uses variance reduction at the client (like SCAFFOLD) and MIMELITE does not. The paper analyzes the effect of these meta-algorithms for three standard centralized (server-only) deep learning optimization methods: SGD, Adam, and MomentumVarianceReduction (MVR). However, the reviewer could not find the pseudo-code for MVR in the paper. For SGD and Adam, the authors show that the MIME modification performs at least as well as the centralized base algorithms for nonconvex problems. For MVR, the authors prove that the server-level momentum of MIME helps beat the performance of centralized baselines under low heterogeneity.

Review

The contributions are original and their quality seems solid, but it is hard to verify this due to a lack of clarity (see below). Improving clarity could better convince readers of the claims. Although it is not clear whether the contributions are consistently useful in practice (see below), these analyses and experiments could be impactful for future research and practice. The reviewer is willing to reconsider the score if the clarity and empirical quality can be improved.

Strengths
- If the base centralized algorithm uses momentum, MIME and MIMELITE use an unbiased estimator of the server-level momentum. These algorithms communicate this momentum to the participating clients at the beginning of each round, and the clients use it to correct their bias when taking gradient descent steps. Note that clients do not update the momentum during their whole round, to maintain the unbiasedness of the momentum.
- It is shown that for SGD and Adam, MIME matches the performance of the corresponding centralized (without local gradient steps) base algorithm, whereas MIMELITE is slightly worse and can match the centralized performance only under certain conditions.
- For the MVR algorithm, MIME is shown to perform much better than the centralized algorithm when the client Hessian heterogeneity is smaller than the smoothness of the global loss. This seems to be the first such result for the smooth non-quadratic setting proving the benefit of a federated algorithm over centralized methods. However, it was hard to verify this claim as the MVR algorithm is not given.
- The authors report a (small-scale, single-run) ablation study on a 2-layer MLP with the base SGD+momentum algorithm, to show that MIME & MIMELITE perform better than FedAvg in the cross-device setting. It is shown that momentum helps the MIME variants; however, it doesn't affect FedAvg.
- The reviewer appreciates that the authors report both the positive and negative results they obtain on large-scale datasets. However, from the given single runs it is not clear whether their meta-algorithms are consistently better than the baselines. One way or the other, it would be useful to know.

Comments
Although there are only some subtle differences between the original FedAvg/SCAFFOLD and MIME-SGD/MIMELITE-SGD, the latter get a much better nonconvex rate in the cross-device setting.
Can the authors provide more discussion on this difference and the intuition behind this improvement?

App A: Overall, App A is confusing.
a. Fig 1: It is not clear which algorithm is FedAvg. I think the authors use FedAvg both for some meta-algorithm (not given) and for standard FedAvg.
b. What is "server-momentum" in App A and as mentioned everywhere else? Can the authors please define it? Why is it called "momentum" and not some kind of pseudo "gradient" for FedAvg?

App B is confusing. It could be written more clearly. For example, line 649: where is the convex case in Theorem IV? It is hard to understand the section due to the sudden analogy with the convex case.

Table 1 and Appendix: Can the authors please add the reference for the optimal statistical rate? Where are the full pseudo-codes of the MVR & MIME-MVR algorithms given? One cannot verify the result without the algorithm. It is not given in Table 5. There is no MVR algorithm in [14]. Why are there no simulations of MVR and its MIME variants?

Table 1:
a. Why is the FedAvg rate not given?
b. What is FedSGD? The same holds for FedSGD, FedSGDm, and FedAdam in the Appendix.

What is the FedAvg meta-algorithm used in the experiments? It is confusing to use FedAvg to refer to both the original FedAvg and some other meta-algorithm from [45].

Table 2: It is not clear why the authors are confident that the numbers will hold the same over multiple runs. As this is a paper on optimization methods, it would be appropriate to give the numbers over multiple runs along with their standard errors. Most of the numbers are very close to each other, so it is not conclusive that Mime or MimeLite are useful for large datasets, except for StackOverflow with Adam.

Are the authors using "SGDm" and "Momentum" interchangeably?

Why do the theory (SGD, MVR) and experiments (Adagrad, SGD+momentum) use different base algorithms? I was expecting to see the experiment with MVR for different levels of heterogeneity.

Ablation study: it is not clear why MIME{Adam,Adagrad} is not compared with Fed{Adam,Adagrad}, which are much more practical base algorithms. [https://openreview.net/forum?id=LkFG3lB13U5]

Although MIME predominantly can only match the centralized rates, the authors claim it may outperform centralized methods in practice (a mismatch of theory and practice). Why is this the case?

Why is there no discussion of MIME on convex problems?

Other Comments
- line 890: ± is not a standard notation for adding and subtracting.
- Missing brackets for the argument inside U.
- line 677: uses i both outside and inside the sum.
- line 323: "App ??"
- When saying that "MimeMVR is the first algorithm which beats centralized rates", are the authors claiming this for the general nonconvex setting, or is this applicable to convex problems too? The authors may want to be explicit about this for non-expert readers.
- Algo 1: From lines 140-141 and 146-147 it is not clear how c & s are calculated. The authors could mention the direction of communication when calculating and sending c.
- line 325: Why is sparsity of gradients useful?

—After author response—
I keep my score the same as I am still confused about some parts of the paper. I highly encourage the authors to improve the clarity in the next revision.

—After second author response—
With their latest response, the authors have clarified most of my initial concerns. Unfortunately, I am still keeping my scores the same because now I am not sure whether the empirical results directly validate the theoretical results.
NIPS
Title Robust Generalized Method of Moments: A Finite Sample Viewpoint

Abstract For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant fraction ε of adversarially corrupted samples, and that has an ℓ₂ recovery guarantee of O(√ε). To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as 10 – 15% corruption, significantly improving upon baseline methods.

1 Introduction

Econometric and causal inference methodologies are increasingly being incorporated in automated large scale decision systems. Inevitably these systems need to deal with the plethora of practical issues that arise from automation. One important aspect is being able to deal with corrupted or irregular data, either due to poor data collection, the presence of outliers, or adversarial attacks by malicious agents. Even traditional applications of econometric methods, in social science studies, can greatly benefit from robust inference so as not to draw conclusions solely driven by a handful of samples, as was recently highlighted in [4]. One broad statistical framework that encompasses the most widely used estimation techniques in econometrics and causal inference is the framework of estimating models defined via moment conditions. In this paper we offer a robust estimation algorithm that extends prior recent work in robust statistics to this more general estimation setting.

For a family of distributions {D_θ : θ ∈ Θ}, identifying the parameter θ is often equivalent to solving

E_{X∼D_θ}[g(X, θ)] = 0 ,   (1)

for an appropriate problem-specific vector-valued function g. This formalism encompasses such problems as linear regression (with covariates X, response Y, and moment g((X, Y), θ) = X(Y − X^⊤θ))

∗[email protected]. This work was partially done while the first author was an intern at Microsoft Research New England.
†[email protected]. This work was partially done while the second author was a Principal Researcher at Microsoft Research New England.

36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Under simple identifiability assumptions, moment conditions are statistically tractable, and can be solved by the Generalized Method of Moments (GMM) [16]. Given independent observations X_1, . . . , X_n ∼ D_θ, the (unweighted) GMM estimator is

θ̂ = argmin_{θ∈Θ} ‖(1/n) ∑_{i=1}^n g(X_i, θ)‖₂².

Of course, for general functions g, finding θ̂ (the global minimizer of a potentially non-convex function) may be computationally intractable. Stronger assumptions imply that all approximate local minima of the above function are near the true parameter, in which case the GMM estimator is efficiently approximable. For instrumental variables (IV) linear regression, these assumptions follow from standard non-degeneracy assumptions. Due to its flexibility, the GMM estimator is widely used in practice (along with heuristic variants, in models where it is computationally intractable) [29]. Unfortunately, like most other classical estimators in statistics, the GMM estimator suffers from a lack of robustness: a single outlier in the observations can arbitrarily corrupt the estimate.
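To make the unweighted GMM estimator above concrete, here is a minimal sketch that hands the squared norm of the averaged moments to a generic local optimizer; the use of scipy.optimize.minimize and the starting point are our own choices, and for non-convex g this only finds a local minimum (and has none of the robustness developed below).

```python
import numpy as np
from scipy.optimize import minimize

def gmm_estimate(moments, theta0):
    """Unweighted GMM: minimize || (1/n) sum_i g(X_i, theta) ||_2^2.

    moments(theta) must return an (n, p) array of per-sample moments.
    """
    def objective(theta):
        g_bar = moments(theta).mean(axis=0)    # averaged moment, shape (p,)
        return float(g_bar @ g_bar)            # squared 2-norm
    return minimize(objective, theta0, method="BFGS").x

# e.g. with the IV moments from the previous sketch:
# theta_hat = gmm_estimate(lambda t: moment_iv(X, Y, Z, t), np.zeros(d))
```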
Robust statistics Initiated by Tukey and Huber in the 1960s, robust statistics is a broad field studying estimators which have provable guarantees even in the presence of outliers [18]. Outliers can be modelled as samples from a heavy-tailed distribution, or even as adversarially and arbitrarily corrupted data. Classically, robustness of an estimator against arbitrary outliers is measured by breakdown point (the fraction of outliers which can be tolerated without causing the estimator to become unbounded [14]) and influence (the maximum change in the estimator under an infinitesimal fraction of outliers [15]). These metrics have spurred development and study of numerous statistical estimators which are often used in practice to mitigate the effect of outliers (e.g. Huber loss for mean estimation, linear regression, and other problems [17]). Problems such as robust univariate mean estimation are by now thoroughly understood [24, 22], and have statistically and computationally efficient estimators. Unfortunately, in higher dimensions, there has long appeared to be a tradeoff between robustness and computational tractability; as a result, much of the literature on high-dimensional robust statistics has focused on statistical efficiency at the expense of computational feasibility [5, 23, 13]. While there is a rich literature on IV regression and GMM in the context of robust statistics, those works either present computationally intractable estimators [21, 12] or are robust in the sense of bounded influence [1, 27, 20] but not robust against arbitrary outliers. Until the last few years, most high-dimensional statistical problems lacked robust estimators satisfying the following basic properties [7]:
1. Computational tractability (i.e. evading the curse of dimensionality)
2. Robustness to a constant fraction of arbitrary outliers
3. Quantitative error guarantees without dimension dependence.
Recently, a line of work on algorithmic robust statistics has blossomed within the theoretical computer science community, with the aim of filling this gap in the high-dimensional statistics literature. Estimators with the above properties have been developed for various fundamental high-dimensional problems, including mean and covariance estimation [7, 9], linear regression [10, 3], and stochastic optimization [26, 8]. However, practitioners in econometrics and applied statistics often employ more sophisticated inference methods such as GMM and IV regression [29, 2]. Such methods are not traditionally under the purview of theoretical computer science and learning theory; perhaps as a result, computationally and statistically efficient robust estimators are still lacking.

Our contribution We address this lack. Methodologically speaking, our main contribution is to introduce GMM to the algorithmic robust statistics literature and vice versa (even aside from robustness, basic algorithmic questions about GMM remain open and surprisingly unstudied). Theoretically speaking, we prove that a simple modification to the SEVER algorithm for robust stochastic optimization [8] (based on using higher-derivative information) yields a computationally efficient and provably robust GMM estimator under intuitive deterministic assumptions about the uncorrupted data. We instantiate this estimator for two important special cases of GMM—instrumental variables linear regression and instrumental variables logistic regression—under distributional assumptions about the covariates, instruments, and responses (and in fact our algorithm also applies to the IV generalized linear model under certain conditions on the link function). Experimentally, we apply our algorithm to robustly solve IV linear regression. We find that it performs well for a wide range of instrument strengths. In the important setting of heterogeneous treatment effects, our algorithm tolerates as much as 10% corruption. Applied to a seminal dataset previously used to estimate the effect of education on wages [6], we provide evidence for the robustness of the inference, and demonstrate that our algorithm can recover the original inference from corruptions of the dataset, significantly better than baseline approaches.

Technical Overview Our robust GMM algorithm builds upon the SEVER algorithm and framework introduced in [8] for robust stochastic optimization, which itself builds on seminal work on robust multivariate mean estimation via spectral filtering [7, 9]. In this section, we outline the increasing levels of complexity. First, given samples v_1, . . . , v_n ∈ R^d among which εn are corrupted, robust mean estimation asks for an estimate of the mean of the uncorrupted samples. The spectral filtering approach due to [9] iteratively does the following, until the sample covariance matrix is bounded: remove outliers in the direction of the largest variance. So long as the uncorrupted samples have bounded covariance, the filtering ensures that at termination, the empirical mean will approximate the uncorrupted mean. Second, given functions f_1, . . . , f_n : R^d → R among which εn are corrupted, robust stochastic optimization asks for an approximate critical point of the mean of the uncorrupted functions.
The SEVER algorithm [8] achieves this by alternating between (a) finding a critical point ŵ of the current sample set S, and (b) applying one iteration of spectral filtering to the vectors {∇f_i(ŵ) : i ∈ S}, terminating when no samples are removed from S.³ The termination guarantee of spectral filtering immediately implies that at termination, the average gradient of the uncorrupted samples at ŵ is near the average gradient of the final sample set S at ŵ, which is 0 by part (a). So ŵ at termination is an approximate critical point of the mean of the uncorrupted functions. In our problem, we are given functions g_1, . . . , g_n : R^d → R^p among which εn are corrupted, and wish to find an approximate minimizer of ‖(1/|U|) ∑_{i∈U} g_i(w)‖₂², where U ⊆ [n] is the set of uncorrupted functions. The obvious approach is to alternate between (a) finding a minimizer ŵ of ‖(1/|S|) ∑_{i∈S} g_i(w)‖₂², where S is the current sample set, and (b) applying spectral filtering to the vectors {g_i(ŵ) : i ∈ S}, terminating when no samples are removed from S. The termination guarantee of spectral filtering implies that the final sample average (1/|S|) ∑_{i∈S} g_i(ŵ) is near the uncorrupted average (1/|U|) ∑_{i∈U} g_i(ŵ). Unfortunately, there is no guarantee that (1/|S|) ∑_{i∈S} g_i(ŵ) has small norm: part (a) only implies that ŵ is a local minimizer (and hence critical point) of the norm, so

(1/|S|) ∑_{i∈S} (∇g_i(ŵ))ᵀ · (1/|S|) ∑_{i∈S} g_i(ŵ) = 0.

In the above equality, the sample gradient matrix at ŵ could be arbitrarily corrupted, so the sample average at ŵ could have arbitrarily large norm. In principle, even the global minimizer could have large norm. However, this issue can be fixed by using higher-derivative information: specifically, we also apply spectral filtering to (projections of) the matrices ∇g_i(ŵ). Under appropriate boundedness and smoothness assumptions, it can then be shown that at termination (when neither filtering step removes samples), ŵ is an approximate critical point of the norm of the uncorrupted average ‖(1/|U|) ∑_{i∈U} g_i(w)‖₂². By a "strong identifiability" assumption, this implies that ŵ is near the minimizer of ‖(1/|U|) ∑_{i∈U} g_i(w)‖₂², as desired.
³A related approach simply applies robust mean estimation to estimate the gradients at each step of gradient descent [26].

2 Preliminaries
For real scalars or vectors {ξ_i}_{i∈S} indexed by a set S, we use the notation E_S[ξ_i] for the sample expectation (1/|S|) ∑_{i∈S} ξ_i. Similarly, if the ξ_i are scalars, then we define the sample variance Var_S(ξ_i) = E_S(ξ_i − E_S ξ_i)². If the ξ_i are vectors then we define the sample covariance matrix Cov_S(ξ_i) = E_S(ξ_i − E_S ξ_i)(ξ_i − E_S ξ_i)ᵀ. A random vector X is (4, 2, τ)-hypercontractive if E(⟨X, u⟩)⁴ ≤ τ (E(⟨X, u⟩)²)² for all vectors u.
Definition 2.1. For a closed set H, a function f : H → R, and γ > 0, a γ-approximate critical point of f (in H) is some x ∈ H such that for any vector v with x + δv ∈ H for arbitrarily small δ > 0, it holds that v · ∇f(x) ≥ −γ‖v‖₂.
Definition 2.2. For a closed set H, a γ-approximate critical point oracle L_{γ,H} is an algorithm which, given a differentiable function f : H → R, returns a γ-approximate critical point of f.
Definition 2.3. The (unscaled) logistic function G : R → R is defined by G(x) = 1/(1 + e^{−x}).
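The critical point oracle of Definition 2.2 is used as a black box; as the paper notes later, it can be implemented by gradient descent. The following is a minimal sketch of one such oracle via projected gradient descent on the ball B_R(w0); the step size, iteration cap, and the stopping rule based on the size of the projected-gradient step are our own illustrative choices.

```python
import numpy as np

def critical_point_oracle(grad_f, w0, R, gamma, step=1e-2, max_iter=100_000):
    """Return an (approximately) gamma-critical point of f on the ball B_R(w0).

    grad_f: callable returning the gradient of f at a point w.
    We stop when the projected gradient step is small, a standard
    surrogate for the first-order condition in Definition 2.1.
    """
    w = np.array(w0, dtype=float)
    for _ in range(max_iter):
        w_next = w - step * grad_f(w)
        offset = w_next - w0
        norm = np.linalg.norm(offset)
        if norm > R:                       # project back onto B_R(w0)
            w_next = w0 + offset * (R / norm)
        if np.linalg.norm(w_next - w) <= gamma * step:
            return w_next
        w = w_next
    return w
```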
Outline In Section 3, we describe the robust GMM problem, and we describe deterministic assumptions on a set of corrupted sample moments, under which we'll be able to efficiently estimate the parameter which makes the uncorrupted moments small. In Section 4, we describe a key subroutine of our robust GMM algorithm, which is commonly known in the literature as filtering. In Section 5, we describe the robust GMM algorithm and prove a recovery guarantee under the assumptions from Section 3. In Section 6, we apply this algorithm to instrumental variable linear and logistic regression, proving that under reasonable stochastic assumptions on the uncorrupted data, arbitrarily ε-corrupted moments from these models satisfy the desired deterministic assumptions with high probability. Finally, in Section 7, we evaluate the performance of our algorithm on two corrupted datasets.

3 Robust GMM Model
In this section, we formalize the model in which we will provide a robust GMM algorithm. Classically, the goal of GMM estimation is to identify θ ∈ Θ given data X_1, . . . , X_n ∼ D_θ, using the moment condition E_{X∼D_θ}[g(X, θ)] = 0. We consider the added challenge of the ε-strong contamination model, in which an adversary is allowed to inspect the data X_1, . . . , X_n and replace εn samples with arbitrary data, before the algorithm is allowed to see the data. This corruption model encompasses most reasonable sources of outliers. For our main theorem, we do not make stochastic assumptions about {D_θ : θ ∈ Θ}. Instead, we make deterministic assumptions about the empirical moments g_i(θ) := g(X_i, θ) of the given data, which are robust to ε-strong contamination. Concretely, we make the following assumption.
Assumption 3.1. Given differentiable moments g_1, . . . , g_n : R^d → R^p, a corruption parameter ε > 0, well-conditionedness parameters λ and L, a Lipschitzness parameter L_g, and a noise level parameter σ², there is a set I_good ⊆ [n] with |I_good| ≥ (1 − ε)n (the "uncorrupted samples"), a vector w∗ ∈ R^d (the "true parameter"), and a radius R_0 ≥ ‖w∗‖₂ with the following properties:
• Strong identifiability. σ_min(E_{I_good} ∇g(w∗)) ≥ λ
• Bounded-variance gradient. E_{I_good}(uᵀ∇g(w∗)v)² ≤ L² for all unit vectors u ∈ R^p, v ∈ R^d
• Bounded-variance noise. E_{I_good}(v · g(w∗))² ≤ σ²L for all unit vectors v
• Well-specification. ‖E_{I_good} g(w∗)‖₂ ≤ σ√L
• Lipschitz gradient. ‖E_{I_good} ∇g(w) − E_{I_good} ∇g(w∗)‖_op ≤ L_g‖w − w∗‖₂ for all w ∈ B_{2R_0}(0)
• Stability of gradient. R_0 < λ/(9L_g).
Intuitively, Assumption 3.1 can be thought of as a condition on the uncorrupted samples, because if they satisfy the assumption with parameter ε₀, then after ε-strong contamination, the corrupted samples will still satisfy the assumption with parameter ε₀ + ε. Strong identifiability is needed for parameter recovery (even without corruption). Bounded-variance gradient is a technical condition which e.g. reduces to a 4th moment bound for IV regression. The third and fourth conditions ensure that the data is approximately well-specified by the moment conditions. The fifth and sixth conditions hold trivially for IV linear regression; for non-linear moment problems, such as our logistic IV regression problem, this condition requires that the ℓ2-norm of the parameters be sufficiently small, such that the logits do not approach the flat region of the logistic function, a condition that is natural to avoid loss of gradient information and extreme propensities.

4 The FILTER Algorithm
In many robust statistics algorithms, an important subroutine is a filtering algorithm for robust mean estimation. In this section we describe a filtering algorithm used in numerous prior works, including e.g. [8, 9].
Given a set of vectors {ξ_i : i ∈ S} and a threshold M, the algorithm returns a subset of S, by thresholding outliers in the direction of largest variance. Formally, see Algorithm 1.

Algorithm 1 FILTER
1: procedure FILTER({ξ_i : i ∈ S}, M)
2:   ξ̂ ← E_S[ξ_i], Cov_S(ξ_i) ← E_S[(ξ_i − ξ̂)(ξ_i − ξ̂)ᵀ]
3:   v ← largest eigenvector of Cov_S(ξ_i)
4:   τ_i ← (v · (ξ_i − ξ̂))² for i ∈ S
5:   if (1/|S|) ∑_{i∈S} τ_i ≤ 24M then
6:     return S
7:   else
8:     Sample T ← Unif([0, max τ_i])
9:     return S \ {i ∈ S : τ_i > T}

This algorithm has two important properties. First, if it does not filter any samples, then the sample mean is provably stable, i.e. it cannot have been affected much by the corruptions, so long as the uncorrupted samples had bounded variance (proof in Appendix B.1).
Lemma 4.1 (see e.g. [8, 9]). Suppose that FILTER does not filter out any samples. Then ‖E_S ξ − E_I ξ‖₂ ≤ 3√48 · √(ε(M + ‖Cov_I(ξ)‖_op)) for any I ⊆ [n] and ε > 0 such that |S|, |I| ≥ (1 − ε)n.
Second, if the threshold is chosen appropriately (based on the variance of the uncorrupted samples), then the filtering step always in expectation removes at least as many corrupted samples as uncorrupted samples. Equivalently, the size of the symmetric difference between the current sample set and the uncorrupted samples (i.e. the number of corrupted samples in the current set plus the number of uncorrupted samples which have been filtered out of the current set) always decreases in expectation (proof in Appendix B.1.1).
Lemma 4.2 (see e.g. [8, 9]). Consider an execution of FILTER with sample set S of size |S| ≥ 2n/3, vectors {ξ_i : i ∈ S}, and bound M. Let S′ be the sample set after this iteration's filtering. Let I_good ⊆ [n] satisfy |I_good| ≥ (5/6)n. Suppose that Cov_{I_good}(ξ_i) ⪯ MI. Then E|S′ ∆ I_good| ≤ E|S ∆ I_good|, where the expectation is over the random threshold, and ∆ denotes symmetric difference.
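For concreteness, a direct numpy rendering of Algorithm 1 follows; the variable names, the eigendecomposition via numpy.linalg.eigh, and the boolean-mask return convention are our own choices.

```python
import numpy as np

def filter_step(xi, M, rng=None):
    """One FILTER iteration on row-vectors xi (shape (m, p)) with bound M.

    Returns a boolean mask over the m samples: True = kept.
    Scalar inputs should be passed with shape (m, 1).
    """
    rng = rng or np.random.default_rng()
    mean = xi.mean(axis=0)
    centered = xi - mean
    cov = centered.T @ centered / len(xi)       # sample covariance
    _, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                          # top eigenvector
    tau = (centered @ v) ** 2                   # outlier scores
    if tau.mean() <= 24 * M:
        return np.ones(len(xi), dtype=bool)     # nothing to filter
    T = rng.uniform(0, tau.max())               # random threshold
    return tau <= T
```

The random threshold is what gives Lemma 4.2 its in-expectation guarantee: samples with higher scores are proportionally more likely to be removed.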
5 The ITERATED-GMM-SEVER Algorithm
In this section, we describe and analyze an algorithm ITERATED-GMM-SEVER for robustly solving moment conditions under Assumption 3.1. The key subroutine is the algorithm GMM-SEVER, which given an initial estimate w_0 and a radius R such that the true parameter is contained in B_R(w_0), returns a refined estimate w such that (with large probability) the radius bound can be decreased by a constant factor. We assume access to an approximate constrained critical point oracle L (Definition 2.2), which can be efficiently implemented (for arbitrary smooth bounded functions) by gradient descent.

Algorithm 2 GMM-SEVER
1: procedure GMM-SEVER(L, {g_1, . . . , g_n}, w_0, R, γ, L, σ)
2:   S ← [n]
3:   repeat
4:     Compute a γ-approximate critical point w ← L_{γ,B_R(w_0)}(‖E_S g_i(·)‖₂²)
5:     u ← E_S g_i(w)
6:     S′ ← FILTER({∇g_i(w) · u : i ∈ S}, L²‖u‖₂²)
7:     if S′ ≠ S then
8:       Set S ← S′ and return to line 4
9:     S′′ ← FILTER({g_i(w) : i ∈ S}, σ²L + 4L²R²)
10:    if S′′ ≠ S then
11:      Set S ← S′′ and return to line 4
12:  until S′′ = S
13:  return (w, S)

Algorithm 3 AMPLIFIED-GMM-SEVER
1: procedure AMPLIFIED-GMM-SEVER(L, {g_1, . . . , g_n}, w_0, R, γ, ε, L, σ, δ)
2:   t ← 0
3:   repeat
4:     w, S ← GMM-SEVER(L, {g_1, . . . , g_n}, w_0, R, γ, L, σ)
5:     t ← t + 1
6:   until |S| ≥ (1 − 11ε)n or (1/10)^t ≤ δ
7:   return w

Like the algorithm SEVER [8], our algorithm GMM-SEVER alternates (a) finding a critical point of a function associated to the current samples, and (b) filtering out "outlier" samples. Unlike SEVER, the function we optimize is not simply an empirical mean over the samples, but rather the squared-norm of the sample moments. Moreover, we need two filtering steps: the moments as well as directional derivatives of the moments, in a carefully chosen direction. See Algorithm 2 for the complete description. We will only prove a constant failure probability for GMM-SEVER. However, we will show that it can be amplified to an arbitrarily small failure probability δ. We call the resulting algorithm AMPLIFIED-GMM-SEVER; see Algorithm 3. The algorithm ITERATED-GMM-SEVER then consists of iteratively calling AMPLIFIED-GMM-SEVER to refine the parameter estimate and bound the true parameter within successively smaller balls; see Algorithm 4.

We start by analyzing GMM-SEVER. In the next two lemmas, we show that if the algorithm does not filter out too many samples, then we can bound the distance from the output to w∗. First, we show a first-order criticality condition (in the direction ŵ − w∗) for the norm of the moments of the "good" samples. If there was no corruption, then we would have an inequality of the form

((ŵ − w∗)ᵀ / ‖ŵ − w∗‖₂) · E_{I_good}∇g(ŵ)ᵀ E_{I_good} g(ŵ) ≤ γ.

With ε-corruption, the algorithm is designed so that we can still show the following inequality, matching the above guarantee up to O(√ε) (proof in Appendix C.1):
Lemma 5.1. Suppose that the input parameters R and w_0 satisfy B_R(w_0) ⊆ B_{2R_0}(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1 − 10ε)n, then the output ŵ of GMM-SEVER satisfies

((ŵ − w∗)ᵀ / ‖ŵ − w∗‖₂) · E_{I_good}∇g(ŵ)ᵀ E_{I_good} g(ŵ) ≤ γ + 275σL^{3/2}√ε + 603L²R√ε.

Moreover, we can show that any point satisfying the first-order criticality condition must be close to w∗, using the least singular value bound on the gradient (proof in Appendix C.2).
Lemma 5.2. Suppose that the input parameters R and w_0 satisfy B_R(w_0) ⊆ B_{2R_0}(0). Under Assumption 3.1, suppose that w ∈ B_R(w_0) satisfies (w − w∗)ᵀ E_{I_good}∇g(w)ᵀ E_{I_good} g(w) ≤ κ‖w − w∗‖₂. Then ‖w − w∗‖₂ ≤ 4(κ + σL^{3/2}√ε)/λ².

Algorithm 4 ITERATED-GMM-SEVER
1: procedure ITERATED-GMM-SEVER({g_1, . . . , g_n}, R_0, γ, ε, λ, L, σ, δ)
2:   t ← 1, w_1 ← 0, R_1 ← R_0, δ′ ← cδ/log(R_0√L/(σ√ε)), γ ← σL^{3/2}√ε
3:   repeat
4:     ŵ_t := AMPLIFIED-GMM-SEVER({g_1, . . . , g_n}, w_t, R_t, ε, L, σ, γ, δ′)
5:     R_{t+1} ← 2γ/λ² + C((L²/λ²)R_t√ε + σ(L^{3/2}/λ²)√ε)
6:     t ← t + 1
7:   until R_t > R_{t−1}/2
8:   return ŵ_{t−1}

Putting the above lemmas together, we immediately get the following bound on ‖ŵ − w∗‖₂.
Lemma 5.3. Suppose that the input parameters R and w_0 satisfy B_R(w_0) ⊆ B_{2R_0}(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1 − 10ε)n, then the output ŵ of GMM-SEVER satisfies ‖ŵ − w∗‖₂ ≤ 4γ/λ² + 2412(L²/λ²)R√ε + 1102σ(L^{3/2}/λ²)√ε.
It remains to bound the size of S at termination. We follow the super-martingale argument from [8], which uses Lemma 4.2 (proof in Appendix C.3).
Theorem 5.4. Suppose that the input parameters R and w_0 satisfy B_R(w_0) ⊆ B_{2R_0}(0). Let ŵ be the output of GMM-SEVER. Then with probability at least 9/10, it holds that ‖ŵ − w∗‖₂ ≤ 4γ/λ² + 2412(L²/λ²)R√ε + 1102σ(L^{3/2}/λ²)√ε. The time complexity of GMM-SEVER is O(poly(n, d, p, T_γ)) where T_γ is the time complexity of the γ-approximate learner L. Moreover, for any δ > 0 the success probability can be amplified to 1 − δ by repeating GMM-SEVER O(log 1/δ) times, or until |S| ≥ (1 − 10ε)n at termination. We call this AMPLIFIED-GMM-SEVER, and it has time complexity O(poly(n, d, p, T_γ) · log(1/δ)).
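Putting the pieces together, the outer loop of Algorithm 2 can be sketched as below, reusing the critical_point_oracle and filter_step helpers sketched earlier. The callable signatures are our own illustrative conventions (not the authors' reference implementation), and we read "∇g_i(w) · u" as ∇g_i(w)ᵀu ∈ R^d, the direction appearing in Lemma 5.1.

```python
import numpy as np

def gmm_sever(g, grad_g, n, w0, R, gamma, L, sigma, rng=None):
    """Sketch of GMM-SEVER (Algorithm 2).

    g(w):      (n, p) array of per-sample moments g_i(w).
    grad_g(w): (n, p, d) array of per-sample Jacobians (grad g_i)(w).
    """
    rng = rng or np.random.default_rng()
    S = np.arange(n)
    while True:
        # (line 4) critical point of || E_S g_i(.) ||_2^2 on B_R(w0),
        # whose gradient is 2 * (E_S grad g_i)^T (E_S g_i).
        def grad_obj(w):
            g_bar = g(w)[S].mean(axis=0)        # (p,)
            J_bar = grad_g(w)[S].mean(axis=0)   # (p, d)
            return 2.0 * J_bar.T @ g_bar
        w = critical_point_oracle(grad_obj, w0, R, gamma)
        u = g(w)[S].mean(axis=0)
        # (line 6) filter the directional derivatives grad g_i(w)^T u.
        dirs = np.einsum('spd,p->sd', grad_g(w)[S], u)
        keep = filter_step(dirs, (L ** 2) * float(u @ u), rng)
        if not keep.all():
            S = S[keep]
            continue
        # (line 9) filter the moments themselves.
        keep = filter_step(g(w)[S], sigma ** 2 * L + 4 * L ** 2 * R ** 2, rng)
        if not keep.all():
            S = S[keep]
            continue
        return w, S
```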
With the above guarantee for GMM-SEVER and AMPLIFIED-GMM-SEVER, we can now analyze ITERATED-GMM-SEVER (proof in Appendix C.4).
Theorem 5.5. Suppose that the input to ITERATED-GMM-SEVER consists of functions g_1, . . . , g_n : R^d → R^p, a corruption parameter ε > 0, well-conditionedness parameters λ and L, a Lipschitzness parameter L_g, a noise level parameter σ², a radius bound R_0, and an optimization error parameter γ, such that Assumption 3.1 is satisfied for some unknown parameter w∗ ∈ R^d, and (L²/λ²)√ε ≤ 1/9648.⁴ Suppose that the algorithm is also given a failure probability parameter δ > 0. Then the output ŵ of ITERATED-GMM-SEVER satisfies ‖ŵ − w∗‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ. Moreover, the algorithm has time complexity O(poly(n, d, p, T_γ) · log(1/δ) · log(R_0√L/(σ√ε))), where T_γ is the time complexity of a γ-approximate learner and γ = σL^{3/2}√ε.

6 Applications
In this section, we apply ITERATED-GMM-SEVER to solve linear and logistic instrumental variables regression in the strong contamination model.
⁴This constant may be improved; we focus in this paper on dependence on the parameters of the problem and do not optimize constants.

Robust IV Linear Regression Let Z be the vector of p real-valued instruments, and let X be the vector of d real-valued covariates. Suppose that Z and X are mean-zero. Suppose that the response can be described as Y = Xᵀw∗ + ξ for some fixed w∗ ∈ R^d. The distributional assumptions we will make about X, Y, and Z are described below.
Assumption 6.1. Given a corruption parameter ε > 0, well-conditionedness parameters λ and L, hypercontractivity parameter τ, noise level parameter σ², and norm bound R_0, we assume the following:
(i) Valid instruments: E[ξ|Z] = 0,
(ii) Bounded-variance noise: E[ξ²|Z] ≤ σ²,
(iii) Strong instruments: σ_min(E ZXᵀ) ≥ λ,
(iv) Boundedness: ‖Cov([Z;X])‖_op ≤ L,
(v) Hypercontractivity: [Z;X] is (4, 2, τ)-hypercontractive,
(vi) Bounded 8th moments: max_i E[X_i⁸] ≤ O(τ²L⁴) and max_i E[Z_i⁸] ≤ O(τ²L⁴),
(vii) Bounded norm parameter: ‖w∗‖₂ ≤ R_0.
For intuition, conditions (i – iii) are standard for IV regression even in the absence of corruption; (iv – vi) are conditions on the moments of the distribution, and hold for a variety of reasonable distributions including but not limited to any multivariate Gaussian distribution with bounded-spectral-norm covariance. Condition (vii) essentially states that we need an initial estimate of w∗ (but the time complexity of our algorithm will depend only logarithmically on the initial estimate error R_0). Define the random variable g(w) = Z(Y − Xᵀw) for w ∈ R^d, and let (X_i, Y_i, Z_i) be n independent samples drawn according to (X, Y, Z). Let ε > 0. We prove that under the above assumption, if n is sufficiently large, then with high probability, for any ε-contamination (X′_i, Y′_i, Z′_i)_{i=1}^n of (X_i, Y_i, Z_i)_{i=1}^n, the functions g_i(w) = Z′_i(Y′_i − (X′_i)ᵀw) satisfy Assumption 3.1. Formally, we prove the following theorem (see Appendix D):
Theorem 6.2. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples and any upper bound R_0 ≥ ‖w∗‖₂, Assumption 3.1 is satisfied. In that event, if L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R_0/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w∗‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.
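For the IV linear regression instantiation above, the per-sample moments and their Jacobians have a simple closed form, with ∇g_i(w) = −Z_iX_iᵀ independent of w; a sketch in the conventions of the earlier helpers:

```python
import numpy as np

def iv_moment_functions(X, Y, Z):
    """Build g and grad_g callables for g_i(w) = Z_i (Y_i - X_i^T w).

    X: (n, d) covariates, Y: (n,) responses, Z: (n, p) instruments.
    """
    J = -np.einsum('np,nd->npd', Z, X)        # (n, p, d), constant in w
    def g(w):
        return Z * (Y - X @ w)[:, None]       # (n, p)
    def grad_g(w):
        return J
    return g, grad_g

# g, grad_g = iv_moment_functions(X, Y, Z)
# w_hat, S = gmm_sever(g, grad_g, len(Y), np.zeros(X.shape[1]), R, gamma, L, sigma)
```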
Robust IV Logistic Regression Let Z be a vector of p real-valued instruments, and let X be a vector of d real-valued covariates. Suppose that Z and X are mean-zero. Suppose that the response can be described as Y = G(Xᵀw∗) + ξ for some fixed w∗ ∈ R^d, where G is the (unscaled) logistic function. The proofs only use 1-Lipschitzness of G and G′, and that G′(0) is bounded away from 0. As far as distributional assumptions, we assume in this section that Assumption 6.1 holds, and additionally assume that the norm bound satisfies R_0 ≤ c·min(λ²/L, λ/√(τL³)) for an appropriate constant c, where λ, L, and τ are as required for the Assumption. We obtain the following algorithmic result (proof in Appendix E):
Theorem 6.3. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Suppose that ‖w∗‖₂ ≤ R_0 ≤ c·min(λ²/L, λ/√(τL³)). Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples, Assumption 3.1 is satisfied. In that event, if R_0, L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R_0/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w∗‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.

7 Experiments
In this section we corroborate our theory by applying our algorithm ITERATED-GMM-SEVER to several datasets for IV linear regression. See Appendix G for omitted figures and experimental details (e.g. hyperparameter choices and descriptions of the baselines). Error bars are at 25th and 75th percentiles across independent trials.
Varied Instrument Strength. We construct a synthetic dataset with endogenous noise and 1% corruptions, and evaluate our estimator as the instrument strength is varied. Concretely, for dimension d and strength α, we draw independent samples (X_i, Y_i, Z_i)_{i=1}^n where for unobserved noise η_i ∼ N(0, I_d), we define instruments Z_i ∼ N(0, I_d) and covariates X_i = αZ_i + η_i, and response y_i = ⟨X_i, θ∗⟩ + ⟨η_i, 1⟩. For k = 0.01n of the samples, we introduce corruption by setting Z_i = −A/(k√d) and y_i = √d, where A = ∑_j Z_j y_j, which zeroes out the IV estimate. We take n = 10⁴, d = 20 and θ∗ = (1, 0, . . . , 0), and vary α from 0.1 to 10. For each α, we do 10 independent trials, comparing the median ℓ2 error of ITERATED-GMM-SEVER with classical IV and two-stage Huber regression. We also compare to the "clean IV" error, i.e. the error of IV on the uncorrupted samples. When α is small, essentially no inference is possible (the clean error is large), but as α increases, our estimator starts to outperform the baselines, and roughly tracks the clean error (Figure 1a). Similar results can be seen for d = 100 (Figure 2 in Appendix G.5).
Our next two examples consider IV linear regression with heterogeneous treatment effects, a natural setting in which the instruments and covariates are high-dimensional, necessitating dimension-independent robust estimators. Consider a study in which each sample has a vector X of characteristics, a scalar instrument Z, a scalar treatment T, and a response Y. Assuming that the control response and treatment effect are linear in the characteristics, with unknown coefficients β∗ and θ∗ respectively, and that the response noise is mean-zero conditioned on Z and X (but may correlate with the treatment), we can write the moment conditions E[XZ(Y − T⟨X, θ∗⟩ − ⟨X, β∗⟩)] = E[X(Y − T⟨X, θ∗⟩ − ⟨X, β∗⟩)] = 0. This can be interpreted as an IV linear regression with covariates (TX, X) and instruments (ZX, X), as sketched below.
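A minimal sketch of this reduction (our own helper, assuming scalar treatment T and scalar instrument Z as in the text):

```python
import numpy as np

def he_to_iv(X, T, Z, Y):
    """Stack heterogeneous-effects data into an IV linear regression.

    X: (n, d) characteristics, T: (n,) treatment, Z: (n,) instrument.
    Returns covariates (T*X, X), instruments (Z*X, X), and responses Y;
    the IV parameter is then the stacked (theta, beta) in R^{2d}.
    """
    covariates = np.hstack([T[:, None] * X, X])    # (n, 2d)
    instruments = np.hstack([Z[:, None] * X, X])   # (n, 2d)
    return covariates, instruments, Y
```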
Synthetic HE dataset. For parameters n, d, we generate an unknown d-dimensional parameter vector θ∗ ∼ N(0, I_d). We then generate independent samples (X_i, Y_i, Z_i)_{i=1}^n as follows. Draw X_i ∼ N(0, I_d) and Z_i ∼ Ber(1/2). The binary treatment is drawn T_i ∼ Ber(p_i) with p_i = 1/(1 + exp(−Z_i − U_i X̄_i)), where U_i ∼ N(0, 1) and X̄_i = d^{−1/2}⟨X_i, 1⟩. Finally, the response is Y_i = ⟨X_i, θ∗⟩T_i + ⟨X_i, β∗⟩ + U_i with β∗ := 0. Ordinary least squares would produce a biased estimate of (θ∗, β∗), since TX̄ is correlated with the response noise U. However, U is by construction independent of X and Z. Thus, in the absence of corruption, IV linear regression with covariates (TX, X), response Y, and instrument (ZX, X) should approximately recover the true parameters (θ∗, β∗). For n = 10³ and d = 20, the IV estimate still has significant variance, and in this regime, even with no added corruptions, we find that ITERATED-GMM-SEVER has lower recovery error than the baselines (Table 1 in Appendix G.5). For n = 10⁴ and d = 20, the IV estimate is more accurate. Hence, we corrupt the first εn samples, by setting X_i := 1 and Y_i := 3√d. Varying ε from 0.01 to 0.1, we compute the median ℓ2 recovery error of ITERATED-GMM-SEVER, classical IV, and two-stage Huber regression, across 50 independent trials (for each ε). The results (Figure 1b) demonstrate that our algorithm is resilient to up to 10% corruptions, whereas both baselines rapidly degrade as ε increases.
NLSYM dataset. In this experiment, we use the data of [6] from the National Longitudinal Survey of Young Men for estimating the average treatment effect (ATE) of education on wages. The data consists of 3010 samples with years of education as the treatment, log wages as the response, and proximity to a 4-year college as the instrument, along with 22 covariates (e.g. geographic indicator variables). For simplicity, we restrict the model to only two covariates (years and squared years of labor force experience) and a bias term. We find that the ATE estimated by ITERATED-GMM-SEVER is close to the positive ATE (≈ 0.277) estimated by classical IV, suggesting that Card's inference may be robust (Figure 3 in Appendix G.5). Next, we corrupt a random ε-fraction of the responses, in a way that negates the ATE inferred by classical IV regression (see Appendix G.2 for method). Varying ε from 0.01 to 0.2, we perform 10 independent trials (i.e. resampling the subset of corrupted samples each time). For each trial, we compute the ATE estimate of IV regression, the ATE estimate of two-stage Huber regression, and the median ATE estimate of 50 runs of ITERATED-GMM-SEVER. For each ε, we then plot the median absolute error of each algorithm across the 10 trials. We find that our algorithm outperforms both baselines, and has lower variance than two-stage Huber regression, up to ε ≈ 0.15 (Figure 1c; note that error is on log-scale, so the Huber regression is extremely noisy).
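For completeness, the synthetic HE data generation described above can be sketched as follows (a hypothetical reproduction under the stated distributions; the corruption step is omitted). The resulting samples can be stacked with the he_to_iv helper above and fed to the robust estimator.

```python
import numpy as np

def make_synthetic_he(n, d, seed=0):
    """Generate the synthetic heterogeneous-effects dataset from the text."""
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d)            # unknown effect vector
    X = rng.standard_normal((n, d))
    Z = rng.integers(0, 2, size=n).astype(float)   # Z_i ~ Ber(1/2)
    U = rng.standard_normal(n)                     # endogenous noise
    X_bar = X.sum(axis=1) / np.sqrt(d)             # d^{-1/2} <X_i, 1>
    p = 1.0 / (1.0 + np.exp(-Z - U * X_bar))       # treatment propensity
    T = (rng.uniform(size=n) < p).astype(float)    # T_i ~ Ber(p_i)
    Y = (X @ theta_star) * T + U                   # beta* = 0
    return X, T, Z, Y, theta_star
```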
1. What is the main contribution of the paper regarding the Generalized Method of Moments (GMM) estimator?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in its theoretical analysis and numerical results?
3. Do you have any concerns or questions regarding the paper's assumptions, algorithms, and experimental results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors claim to develop the first computationally efficient Generalized Method of Moments (GMM) estimator that is robust to a constant fraction of arbitrary outliers. The authors instantiate this estimator for two important special cases of GMM, namely instrumental variable (IV) linear regression and IV logistic regression, under distributional assumptions about the covariates, instruments, and responses. Some numerical results are provided to support the theory.

Strengths And Weaknesses
Strengths: A computationally efficient and robust GMM estimator should be of interest to the community. This paper is generally well-written and the theoretical results seem to be reasonable and reliable.

Weaknesses: The algorithmic and theoretical contributions in this work seem incremental to me. In particular, the authors mention that "Estimators with the above properties have been developed for various fundamental high-dimensional problems, including mean and covariance estimation [6, 8], linear regression [9, 2], and stochastic optimization [25, 7]" and that the algorithm proposed in this work is "a simple modification to the SEVER algorithm for robust stochastic optimization [7]". I appreciate the authors' effort to provide the Technical Overview, but I hope that the authors can demonstrate more clearly what the major technical novelties in this submission are (especially compared to [7]).

Personally, I do not like Assumption 3.1. It involves too many conditions. Even if the authors show in Theorems 6.2 and 6.3 that this assumption is satisfied by robust IV linear/logistic regression, there are still too many involved conditions/parameters that make this assumption hard to parse. I guess for Algorithms 2, 3, and 4, the situation is even worse. That is, there are too many input parameters for these algorithms, and for some of the parameters (e.g., the noise level σ² and the well-conditionedness parameters λ and L), I am not sure whether it is reasonable to use them as inputs of the algorithms. In my opinion, practitioners typically prefer simple algorithms, but the algorithms provided in this submission seem too complicated (so many parameters!) for practitioners.

The experimental results are not very convincing. In particular, Figures 1a and 1b are for synthetic data. Figure 1c is for real data, which is more desirable, but it looks quite messy. For some cases, Huber regression outperforms the method provided in this work. In addition, the authors mention that "note that error is on log-scale, so the Huber regression is extremely noisy". It seems that the method proposed in this work is also quite noisy.

Some references are missing. For example, the authors should add references for the sentence "Unfortunately, like most other classical estimators in statistics, the GMM estimator suffers from a lack of robustness: a single outlier in the observations can arbitrarily corrupt the estimate." and the sentence "However, practitioners in econometrics and applied statistics often employ more sophisticated inference methods such as GMM and IV regression." (Since, compared to prior works, IV regression is emphasized in this submission and I am not familiar with IV regression, I am curious about how important IV regression is in practice.) In the Experiments section, the authors should provide references for the baseline methods.
Questions Some minor suggestions are listed as follows:
The authors should provide the full name of the SEVER algorithm (at least when it first appears on Page 2).
The figures in Figure 1 look quite blurry. Perhaps the authors can present them as separate figures (i.e., Figures 1, 2, 3 instead of Figures 1a, 1b, 1c).

Limitations Not applicable.
NIPS
Title Robust Generalized Method of Moments: A Finite Sample Viewpoint Abstract For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant fraction of adversarially corrupted samples, and that has an `2 recovery guarantee of O( √ ). To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as 10 – 15% corruption, significantly improving upon baseline methods. N/A √ ). To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as 10 – 15% corruption, significantly improving upon baseline methods. 1 Introduction Econometric and causal inference methodologies are increasingly being incorporated in automated large scale decision systems. Inevitably these systems need to deal with the plethora of practical issues that arise from automation. One important aspect is being able to deal with corrupted or irregular data, either due to poor data collection, the presence of outliers, or adversarial attacks by malicious agents. Even traditional applications of econometric methods, in social science studies, can greatly benefit from robust inference so as not to draw conclusions solely driven by a handful of samples, as was recently highlighted in [4]. One broad statistical framework, that encompasses the most widely used estimation techniques in econometrics and causal inference, is the framework of estimating models defined via moment conditions. In this paper we offer a robust estimation algorithm that extends prior recent work in robust statistics to this more general estimation setting. For a family of distributions {Dθ : θ ∈ Θ}, identifying the parameter θ is often equivalent to solving EX∼Dθ [g(X, θ)] = 0, (1) for an appropriate problem-specific vector-valued function g. This formalism encompasses such problems as linear regression (with covariates X , response Y , and moment g((X,Y ), θ) = X(Y − ∗[email protected]. This work was partially done while the first author was an intern at Microsoft Research New England. †[email protected]. This work was partially done while the second author was a Principal Researcher at Microsoft Research New England. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
XT θ)) and instrumental variables (IV) linear regression (with covariates X , response Y , instruments Z, and moment g((X,Y, Z), θ) = Z(Y −XT θ)). Under simple identifiability assumptions, moment conditions are statistically tractable, and can be solved by the Generalized Method of Moments (GMM) [16]. Given independent observations X1, . . . , Xn ∼ Dθ, the (unweighted) GMM estimator is θ̂ = argmin θ∈Θ ∥∥∥∥∥ 1n n∑ i=1 g(Xi, θ) ∥∥∥∥∥ 2 2 . Of course, for general functions g, finding θ̂ (the global minimizer of a potentially non-convex function) may be computationally intractable. Stronger assumptions imply that all approximate local minima of the above function are near the true parameter, in which case the GMM estimator is efficiently approximable. For instrumental variables (IV) linear regression, these assumptions follow from standard non-degeneracy assumptions. Due to its flexibility, the GMM estimator is widely used in practice (along with heuristic variants, in models where it is computationally intractable) [29]. Unfortunately, like most other classical estimators in statistics, the GMM estimator suffers from a lack of robustness: a single outlier in the observations can arbitrarily corrupt the estimate. Robust statistics Initiated by Tukey and Huber in the 1960s, robust statistics is a broad field studying estimators which have provable guarantees even in the presence of outliers [18]. Outliers can be modelled as samples from a heavy-tailed distribution, or even as adversarially and arbitrarily corrupted data. Classically, robustness of an estimator against arbitrary outliers is measured by breakdown point (the fraction of outliers which can be tolerated without causing the estimator to become unbounded [14]) and influence (the maximum change in the estimator under an infinitesimal fraction of outliers [15]). These metrics have spurred development and study of numerous statistical estimators which are often used in practice to mitigate the effect of outliers (e.g. Huber loss for mean estimation, linear regression, and other problems [17]). Problems such as robust univariate mean estimation are by now thoroughly understood [24, 22], and have statistically and computationally efficient estimators. Unfortunately, in higher dimensions, there has long appeared to be a tradeoff between robustness and computational tractability; as a result, much of the literature on high-dimensional robust statistics has focused on statistical efficiency at the expense of computational feasibility [5, 23, 13]. While there is a rich literature on IV regression and GMM in the context of robust statistics, those works either present computationally intractable estimators [21, 12] or are robust in the sense of bounded influence [1, 27, 20] but not robust against arbitrary outliers. Until the last few years, most high-dimensional statistical problems lacked robust estimators satisfying the following basic properties [7]: 1. Computational tractability (i.e. evading the curse of dimensionality) 2. Robustness to a constant fraction of arbitrary outliers 3. Quantitative error guarantees without dimension dependence. Recently, a line of work on algorithmic robust statistics has blossomed within the theoretical computer science community, with the aim of filling this gap in the high-dimensional statistics literature. 
Estimators with the above properties have been developed for various fundamental high-dimensional problems, including mean and covariance estimation [7, 9], linear regression [10, 3], and stochastic optimization [26, 8]. However, practitioners in econometrics and applied statistics often employ more sophisticated inference methods such as GMM and IV regression [29, 2]. Such methods are not traditionally under the purview of theoretical computer science and learning theory; perhaps as a result, computationally and statistically efficient robust estimators are still lacking. Our contribution We address this lack. Methodologically speaking, our main contribution is to introduce GMM to the algorithmic robust statistics literature and vice versa (even aside from robustness, basic algorithmic questions about GMM remain open and surprisingly unstudied). Theoretically speaking, we prove that a simple modification to the SEVER algorithm for robust stochastic optimization [8] (based on using higher-derivative information) yields a computationally efficient and provably robust GMM estimator under intuitive deterministic assumptions about the uncorrupted data. We instantiate this estimator for two important special cases of GMM—instrumental variables linear regression and instrumental variables logistic regression—under distributional assumptions about the covariates, instruments, and responses (and in fact our algorithm also applies to the IV generalized linear model under certain conditions on the link function). Experimentally, we apply our algorithm to robustly solve IV linear regression. We find that it performs well for a wide range of instrument strengths. In the important setting of heterogeneous treatment effects, our algorithm tolerates as much as 10% corruption. Applied to a seminal dataset previously used to estimate the effect of education on wages [6], we provide evidence for the robustness of the inference, and demonstrate that our algorithm can recover the original inference from corruptions of the dataset, significantly better than baseline approaches. Technical Overview Our robust GMM algorithm builds upon the SEVER algorithm and framework introduced in [8] for robust stochastic optimization, which itself builds on seminal work on robust multivariate mean estimation via spectral filtering [7, 9]. In this section, we outline the increasing levels of complexity. First, given samples v1, . . . , vn ∈ Rd among which n are corrupted, robust mean estimation asks for an estimate of the mean of the uncorrupted samples. The spectral filtering approach due to [9] iteratively does the following, until the sample covariance matrix is bounded: remove outliers in the direction of the largest variance. So long as the uncorrupted samples have bounded covariance, the filtering ensures that at termination, the empirical mean will approximate the uncorrupted mean. Second, given functions f1, . . . , fn : Rd → R among which n are corrupted, robust stochastic optimization asks for an approximate critical point of the mean of the uncorrupted functions. 
The SEVER algorithm [8] achieves this by alternating between (a) finding a critical point ŵ of the current sample set S, and (b) applying one iteration of spectral filtering to the vectors {∇fi(ŵ) : i ∈ S}, terminating when no samples are removed from S.3 The termination guarantee of spectral filtering immediately implies that at termination, the average gradient of the uncorrupted samples at ŵ is near the average gradient of the final sample set S at ŵ, which is 0 by part (a). So ŵ at termination is an approximate critical point of the mean of the uncorrupted functions. In our problem, we are given functions g1, . . . , gn : Rd → Rp among which n are corrupted, and wish to find an approximate minimizer of ∥∥∥ 1|U |∑i∈U gi(w)∥∥∥2 2 , where U ⊆ [n] is the set of uncorrupted functions. The obvious approach is to alternate between (a) finding a minimizer ŵ of ∥∥∥ 1|S|∑i∈S gi(w)∥∥∥2 2 , where S is the current sample set, and (b) applying spectral filtering to the vectors {gi(ŵ) : i ∈ S}, terminating when no samples are removed from S. The termination guarantee of spectral filtering implies that the final sample average 1|S| ∑ i∈S gi(ŵ) is near the uncorrupted average 1|U | ∑ i∈U gi(ŵ). Unfortunately, there is no guarantee that 1 |S| ∑ i∈S gi(ŵ) has small norm: part (a) only implies that ŵ is a local minimizer (and hence critical point) of the norm, so 1 |S| ∑ i∈S (∇gi(ŵ))T · 1 |S| ∑ i∈S gi(ŵ) = 0. In the above equality, the sample gradient matrix at ŵ could be arbitrarily corrupted, so the sample average at ŵ could have arbitrarily large norm. In principle, even the global minimizer could have large norm. However, this issue can be fixed by using higher-derivative information: specifically, we also apply spectral filtering to (projections of) the matrices ∇gi(ŵ). Under appropriate boundedness and smoothness assumptions, it can then be shown that at termination (when neither filtering step removes samples), ŵ is an approximate critical point of the norm of the uncorrupted average ∥∥∥ 1|U |∑i∈U gi(w)∥∥∥2 2 . By a “strong identifiability” assumption, this implies that ŵ is near the minimizer of ∥∥∥ 1|U |∑i∈U gi(x)∥∥∥2 2 , as desired. 3A related approach simply applies robust mean estimation to estimate the gradients at each step of gradient descent [26]. 2 Preliminaries For real scalars or vectors {ξi}i∈S indexed by a set S, we use the notation ES [ξi] for the sample expectation 1|S| ∑ i∈S ξi. Similarly, if ξi are scalars, then we define the sample variance VarS(ξi) = ES(ξi − ESξi)2. If ξi are vectors then we define the sample covariance matrix CovS(ξi) = ES(ξi − ESξi)(ξi − ESξi)T . A random vector X is (4, 2, τ)-hypercontractive if E(〈X,u〉)4 ≤ τ(E(〈X,u〉)2)2 for all vectors u. Definition 2.1. For a closed setH, a function f : H → R, and γ > 0, a γ-approximate critical point of f (inH) is some x ∈ H such that for any vector v with x+ δv ∈ H for arbitrarily small δ > 0, it holds that v · ∇f(x) ≥ −γ ‖v‖2. Definition 2.2. For a closed setH, a γ-approximate critical point oracle Lγ,H is an algorithm which, given a differentiable function f : H → R returns a γ-approximate critical point of f . Definition 2.3. The (unscaled) logistic function G : R→ R is defined by G(x) = 1/(1 + e−x). Outline In Section 3, we describe the robust GMM problem, and we describe deterministic assumptions on a set of corrupted sample moments, under which we’ll be able to efficiently estimate the parameter which makes the uncorrupted moments small. 
In Section 4, we describe a key subroutine of our robust GMM algorithm, which is commonly known in the literature as filtering. In Section 5, we describe the robust GMM algorithm and prove a recovery guarantee under the assumptions from Section 3. In Section 6, we apply this algorithm to instrumental variable linear and logistic regression, proving that under reasonable stochastic assumptions on the uncorrupted data, arbitrarily -corrupted moments from these models satisfy the desired deterministic assumptions with high probability. Finally, in Section 7, we evaluate the performance of our algorithm on two corrupted datasets. 3 Robust GMM Model In this section, we formalize the model in which we will provide a robust GMM algorithm. Classically, the goal of GMM estimation is to identify θ ∈ Θ given data X1, . . . , Xn ∼ Dθ, using the moment condition EX∼Dθ [g(X, θ)] = 0. We consider the added challenge of the -strong contamination model, in which an adversary is allowed to inspect the data X1, . . . , Xn and replace n samples with arbitrary data, before the algorithm is allowed to see the data. This corruption model encompasses most reasonable sources of outliers. For our main theorem, we do not make stochastic assumptions about {Dθ : θ ∈ Θ}. Instead, we make deterministic assumptions about the empirical moments gi(θ) := g(Xi, θ) of the given data, which are robust to -strong contamination. Concretely, we make the following assumption. Assumption 3.1. Given differentiable moments g1, . . . , gn : Rd → Rp, a corruption parameter > 0, well-conditionedness parameters λ and L, a Lipschitzness parameter Lg, and a noise level parameter σ2, there is a set Igood ⊆ [n] with |Igood| ≥ (1− )n (the “uncorrupted samples”), a vector w∗ ∈ Rd (the “true parameter”), and a radius R0 ≥ ‖w∗‖2 with the following properties: • Strong identifiability. σmin(EIgood∇g(w∗)) ≥ λ • Bounded-variance gradient. EIgood(uT∇g(w∗)v)2 ≤ L2 for all unit-vectors u ∈ Rp, v ∈ Rd • Bounded-variance noise. EIgood(v · g(w∗))2 ≤ σ2L for all unit vectors v • Well-specification. ∥∥EIgoodg(w∗)∥∥2 ≤ σ√L • Lipschitz gradient. ∥∥EIgood∇g(w)− EIgood∇g(w∗)∥∥op ≤ Lg ‖w − w∗‖2 for all w ∈ B2R0(0) • Stability of gradient. R0 < λ/(9Lg). Intuitively, Assumption 3.1 can be thought of as a condition on the uncorrupted samples, because if they satisfy the assumption with parameter 0, then after -strong contamination, the corrupted samples will still satisfy the assumption with parameter 0 + . Strong identifiability is needed for parameter recovery (even without corruption). Bounded-variance gradient is a technical condition which e.g. reduces to a 4th moment bound for IV regression. The third and fourth conditions ensure that the data is approximately well-specified by the moment conditions. The fifth and sixth conditions hold trivially for IV linear regression; for non-linear moment problems, such as our logistic IV regression problem, this condition requires that the `2-norm of the parameters be sufficiently small, such that the logits do not approach the flat region of the logistic function, a condition that is natural to avoid loss of gradient information and extreme propensities. 4 The FILTER Algorithm In many robust statistics algorithms, an important subroutine is a filtering algorithm for robust mean estimation. In this section we describe a filtering algorithm used in numerous prior works, including e.g. [8, 9]. 
Given a set of vectors {ξi : i ∈ S} and a threshold M , the algorithm returns a subset of S, by thresholding outliers in the direction of largest variance. Formally, see Algorithm 1. Algorithm 1 FILTER 1: procedure FILTER({ξi : i ∈ S},M ) 2: ξ̂ ← ES [ξi], CovS(ξi) = ES [(ξi − ξ̂)(ξi − ξ̂)T ] 3: v ← largest eigenvector of CovS(ξi) 4: τi ← (v · (ξi − ξ̂))2 for i ∈ S 5: if 1|S| ∑ i∈S τi ≤ 24M then 6: return S 7: else 8: Sample T ← Unif([0,max τi]) 9: return S \ {i ∈ S : τi > T} This algorithm has two important properties. First, if it does not filter any samples, then the sample mean is provably stable, i.e. it cannot have been affected much by the corruptions, so long as the uncorrupted samples had bounded variance (proof in Appendix B.1). Lemma 4.1 (see e.g. [8, 9]). Suppose that FILTER does not filter out any samples. Then ‖ESξ − EIξ‖2 ≤ 3 √ 48 √ (M + ‖CovI(ξ)‖op) for any I ⊆ [n] and > 0 such that |S|, |I| ≥ (1− )n. Second, if the threshold is chosen appropriately (based on the variance of the uncorrupted samples), then the filtering step always in expectation removes at least as many corrupted samples as uncorrupted samples. Equivalently, the size of the symmetric difference between the current sample set and the uncorrupted samples (i.e. the number of corrupted samples in the current set plus the number of uncorrupted samples which have been filtered out of the current set) always decreases in expectation (proof in Appendix B.1.1). Lemma 4.2 (see e.g. [8, 9]). Consider an execution of FILTER with sample set S of size |S| ≥ 2n/3, and vectors {ξi : i ∈ S}, and bound M . Let S′ be the sample set after this iteration’s filtering. Let Igood ⊆ [n] satisfy |Igood| ≥ (5/6)n. Suppose that CovIgood(ξi) MI , then E|S′4Igood| ≤ E|S4Igood|, where the expectation is over the random threshold, and ∆ denotes symmetric difference. 5 The ITERATED-GMM-SEVER Algorithm In this section, we describe and analyze an algorithm ITERATED-GMM-SEVER for robustly solving moment conditions under Assumption 3.1. The key subroutine is the algorithm GMM-SEVER, which given an initial estimate w0 and a radius R such that the true parameter is contained in BR(w0), returns a refined estimate w such that (with large probability) the radius bound can be decreased by a constant factor. We assume access to an approximate constrained critical point oracle L (Definition 2.2), which can be efficiently implemented (for arbitrary smooth bounded functions) by gradient descent. Algorithm 2 GMM-SEVER 1: procedure GMM-SEVER(L, {g1, . . . , gn}, w0, R, γ, L, σ) 2: S ← [n] 3: repeat 4: Compute a γ-approximate critical point w ← Lγ,BR(w0)(‖ES(gi(·))‖ 2 2) 5: u← ESgi(w) 6: S′ ← FILTER({∇gi(w) · u : i ∈ S}, L2 ‖u‖22) 7: if S′ 6= S then 8: Set S ← S′ and return to line 4 9: S′′ ← FILTER({gi(w) : i ∈ S}, σ2L+ 4L2R2) 10: if S′′ 6= S then 11: Set S ← S′′ and return to line 4 12: until S′′ = S 13: return (w, S) Algorithm 3 AMPLIFIED-GMM-SEVER 1: procedure AMPLIFIED-GMM-SEVER(L, {g1, . . . , gn}, w0, R, γ, , L, σ, δ) 2: t← 0 3: repeat 4: w, S ← GMM-SEVER(L, {g1, . . . , gn}, w0, R, γ, L, σ) 5: t← t+ 1 6: until |S| ≥ (1− 11 )n or (1/10)t ≤ δ 7: return w Like the algorithm SEVER [8], our algorithm GMM-SEVER alternates (a) finding a critical point of a function associated to the current samples, and (b) filtering out “outlier” samples. Unlike SEVER, the function we optimize is not simply an empirical mean over the samples, but rather the squared-norm of the sample moments. 
Moreover, we need two filtering steps: the moments as well as directional derivatives of the moments, in a carefully chosen direction. See Algorithm 2 for the complete description. We will only prove a constant failure probability for GMM-SEVER. However, we will show that it can be amplified to an arbitrarily small failure probability δ. We call the resulting algorithm AMPLIFIED-GMM-SEVER; see Algorithm 3. The algorithm ITERATED-GMM-SEVER then consists of iteratively calling AMPLIFIED-GMM-SEVER to refine the parameter estimate and bound the true parameter within successively smaller balls; see Algorithm 4. We start by analyzing GMM-SEVER. In the next two lemmas, we show that if the algorithm does not filter out too many samples, then we can bound the distance from the output to w∗. First, we show a first-order criticality condition (in the direction ŵ − w∗) for the norm of the moments of the “good" samples. If there was no corruption, then we would have an inequality of the form (ŵ − w∗)T ‖ŵ − w∗‖2 EIgood∇g(ŵ)TEIgoodg(ŵ) ≤ γ. With -corruption, the algorithm is designed so that we can still show the following inequality, matching the above guarantee up to O( √ ) (proof in Appendix C.1): Lemma 5.1. Suppose that the input parameters R and w0 satisfy BR(w0) ⊆ B2R0(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1− 10 )n, then the output ŵ of GMM-SEVER satisfies (ŵ − w∗)T ‖ŵ − w∗‖2 EIgood∇g(ŵ)TEIgoodg(ŵ) ≤ γ + 275σL3/2 √ + 603L2R √ Moreover, we can show that any point satisfying the first-order criticality condition must be close to w∗, using the least singular value bound on the gradient (proof in Appendix C.2). Lemma 5.2. Suppose that the input parameters R and w0 satisfy BR(w0) ⊆ B2R0(0). Under Assumption 3.1, suppose that w ∈ BR(w0) satisfies (w − w∗)TEIgood∇g(w)TEIgoodg(w) ≤ κ ‖w − w∗‖2 . Algorithm 4 ITERATED-GMM-SEVER 1: procedure ITERATED-GMM-SEVER({g1, . . . , gn}, R0, γ, , λ, L, σ, δ) 2: t← 1, w1 ← 0, R1 ← R0, δ′ ← cδ/ log(R √ L/(σ √ ), γ = σL3/2 √ 3: repeat 4: ŵt := AMPLIFIED-GMM-SEVER({g1, . . . , gn}, wt, Rt, , L, σ, γ, δ′) 5: Rt+1 ← 2γ/λ2 + C((L2/λ2)Rt √ + σ(L3/2/λ2) √ ) 6: t← t+ 1 7: until Rt > Rt−1/2 8: return ŵt−1 Then ‖w − w∗‖2 ≤ 4(κ+ σL3/2 √ )/λ2. Putting the above lemmas together, we immediately get the following bound on ‖ŵ − w∗‖2. Lemma 5.3. Suppose that the input parameters R and w0 satisfy BR(w0) ⊆ B2R0(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1− 10 )n, then the output ŵ of GMM-SEVER satisfies ‖ŵ − w∗‖2 ≤ 4γ λ2 + 2412(L2/λ2)R √ + 1102σ(L3/2/λ2) √ . It remains to bound the size of S at termination. We follow the super-martingale argument from [8], which uses Lemma 4.2 (proof in Appendix C.3). Theorem 5.4. Suppose that the input parameters R and w0 satisfy BR(w0) ⊆ B2R0(0). Let ŵ be the output of GMM-SEVER. Then with probability at least 9/10, it holds that ‖ŵ − w∗‖2 ≤ 4γ λ2 + 2412(L2/λ2)R √ + 1102σ(L3/2/λ2) √ . The time complexity of GMM-SEVER is O(poly(n, d, p, Tγ)) where Tγ is the time complexity of the γ-approximate learner L. Moreover, for any δ > 0 the success probability can be amplified to 1− δ by repeating GMM-SEVER O(log 1/δ) times, or until |S| ≥ (1− 10 )n at termination. We call this AMPLIFIED-GMM-SEVER, and it has time complexity O(poly(n, d, p, Tγ) · log(1/δ)). With the above guarantee for GMM-SEVER and AMPLIFIED-GMM-SEVER, we can now analyze ITERATED-GMM-SEVER (proof in Appendix C.4). Theorem 5.5. Suppose that the input to ITERATED-GMM-SEVER consists of functions g1, . . . 
6 Applications

In this section, we apply ITERATED-GMM-SEVER to solve linear and logistic instrumental variables regression in the strong contamination model.

Robust IV Linear Regression. Let Z be the vector of p real-valued instruments, and let X be the vector of d real-valued covariates. Suppose that Z and X are mean-zero, and that the response can be described as Y = X^T w∗ + ξ for some fixed w∗ ∈ R^d. The distributional assumptions we make about X, Y, and Z are described below.

Assumption 6.1. Given a corruption parameter ε > 0, well-conditionedness parameters λ and L, a hypercontractivity parameter τ, a noise level parameter σ², and a norm bound R_0, we assume the following:
(i) Valid instruments: E[ξ | Z] = 0,
(ii) Bounded-variance noise: E[ξ² | Z] ≤ σ²,
(iii) Strong instruments: σ_min(E[ZX^T]) ≥ λ,
(iv) Boundedness: ‖Cov([Z; X])‖_op ≤ L,
(v) Hypercontractivity: [Z; X] is (4, 2, τ)-hypercontractive,
(vi) Bounded 8th moments: max_i E[X_i^8] ≤ O(τ²L⁴) and max_i E[Z_i^8] ≤ O(τ²L⁴),
(vii) Bounded norm parameter: ‖w∗‖₂ ≤ R_0.

For intuition, conditions (i)–(iii) are standard for IV regression even in the absence of corruption; (iv)–(vi) are conditions on the moments of the distribution, and hold for a variety of reasonable distributions, including but not limited to any multivariate Gaussian distribution with bounded-spectral-norm covariance. Condition (vii) essentially states that we need an initial estimate of w∗ (but the time complexity of our algorithm will depend only logarithmically on the initial estimate error R_0).

Define the random variable g(w) = Z(Y − X^T w) for w ∈ R^d, and let (X_i, Y_i, Z_i) be n independent samples drawn according to (X, Y, Z). Let ε > 0. We prove that under the above assumption, if n is sufficiently large, then with high probability, for any ε-contamination (X′_i, Y′_i, Z′_i)_{i=1}^n of (X_i, Y_i, Z_i)_{i=1}^n, the functions g_i(w) = Z′_i(Y′_i − (X′_i)^T w) satisfy Assumption 3.1. Formally, we prove the following theorem (see Appendix D):

Theorem 6.2. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples and any upper bound R_0 ≥ ‖w∗‖₂, Assumption 3.1 is satisfied. In that event, if L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R_0/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w∗‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.
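These moments plug directly into the GMM-SEVER sketch above; a minimal construction (the helper name is ours) is:

```python
import numpy as np

def make_iv_moments(X, Y, Z):
    """IV linear regression moments g_i(w) = Z_i (Y_i - X_i^T w), with
    Jacobian grad g_i(w) = -Z_i X_i^T in the (p, d) convention used above.
    X is (n, d), Y is (n,), Z is (n, p); rows are the (possibly corrupted)
    samples."""
    def g(i, w):
        return Z[i] * (Y[i] - X[i] @ w)

    def grad_g(i, w):
        return -np.outer(Z[i], X[i])   # constant in w for the linear model

    return g, grad_g
```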
Robust IV Logistic Regression. Let Z be a vector of p real-valued instruments, and let X be a vector of d real-valued covariates. Suppose that Z and X are mean-zero, and that the response can be described as Y = G(X^T w∗) + ξ for some fixed w∗ ∈ R^d, where G is the (unscaled) logistic function. The proofs only use the 1-Lipschitzness of G and G′, and the fact that G′(0) is bounded away from 0. As for distributional assumptions, we assume in this section that Assumption 6.1 holds, and additionally that the norm bound satisfies R_0 ≤ c·min(λ²/L, λ/√(τL³)) for an appropriate constant c, where λ, L, and τ are as required for the Assumption. We obtain the following algorithmic result (proof in Appendix E):

Theorem 6.3. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Suppose that ‖w∗‖₂ ≤ R_0 ≤ c·min(λ²/L, λ/√(τL³)). Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples, Assumption 3.1 is satisfied. In that event, if R_0, L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R_0/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w∗‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.

7 Experiments

In this section we corroborate our theory by applying our algorithm ITERATED-GMM-SEVER to several datasets for IV linear regression. See Appendix G for omitted figures and experimental details (e.g. hyperparameter choices and descriptions of the baselines). Error bars are at the 25th and 75th percentiles across independent trials.

Varied Instrument Strength. We construct a synthetic dataset with endogenous noise and 1% corruptions, and evaluate our estimator as the instrument strength is varied. Concretely, for dimension d and strength α, we draw independent samples (X_i, Y_i, Z_i)_{i=1}^n where, for unobserved noise η_i ∼ N(0, I_d), we define instruments Z_i ∼ N(0, I_d), covariates X_i = αZ_i + η_i, and response y_i = ⟨X_i, θ∗⟩ + ⟨η_i, 1⟩. For k = 0.01n of the samples, we introduce corruption by setting Z_i = −A/(k√d) and y_i = √d, where A = Σ_j Z_j y_j; this zeroes out the IV estimate. We take n = 10⁴, d = 20, and θ∗ = (1, 0, ..., 0), and vary α from 0.1 to 10. For each α, we run 10 independent trials, comparing the median ℓ₂ error of ITERATED-GMM-SEVER with classical IV and two-stage Huber regression. We also compare to the "clean IV" error, i.e. the error of IV on the uncorrupted samples. When α is small, essentially no inference is possible (the clean error is large), but as α increases, our estimator starts to outperform the baselines and roughly tracks the clean error (Figure 1a). Similar results hold for d = 100 (Figure 2 in Appendix G.5). A sketch of this construction follows below.
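The sketch below follows our reading of the construction: the helper name is ours, A is computed over the clean samples, and which indices are corrupted is immaterial to the estimators.

```python
import numpy as np

def instrument_strength_data(n=10_000, d=20, alpha=1.0, rng=None):
    """Synthetic IV data with endogenous noise, plus the 1% corruption
    designed to zero out the classical IV estimate."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.eye(d)[0]                 # theta* = (1, 0, ..., 0)
    eta = rng.standard_normal((n, d))    # unobserved noise eta_i ~ N(0, I_d)
    Z = rng.standard_normal((n, d))      # instruments Z_i ~ N(0, I_d)
    X = alpha * Z + eta                  # covariates X_i = alpha Z_i + eta_i
    y = X @ theta + eta.sum(axis=1)      # y_i = <X_i, theta*> + <eta_i, 1>
    k = n // 100                         # k = 0.01 n corrupted samples
    A = Z.T @ y                          # A = sum_j Z_j y_j
    Z[:k] = -A / (k * np.sqrt(d))        # Z_i = -A / (k sqrt(d))
    y[:k] = np.sqrt(d)                   # y_i = sqrt(d)
    return X, y, Z, theta
```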
Our next two examples consider IV linear regression with heterogeneous treatment effects, a natural setting in which the instruments and covariates are high-dimensional, necessitating dimension-independent robust estimators. Consider a study in which each sample has a vector X of characteristics, a scalar instrument Z, a scalar treatment T, and a response Y. Assuming that the control response and treatment effect are linear in the characteristics, with unknown coefficients β∗ and θ∗ respectively, and that the response noise is mean-zero conditioned on Z and X (but may correlate with the treatment), we can write the moment conditions

E[XZ(Y − T⟨X, θ∗⟩ − ⟨X, β∗⟩)] = E[X(Y − T⟨X, θ∗⟩ − ⟨X, β∗⟩)] = 0.

This can be interpreted as an IV linear regression with covariates (TX, X) and instruments (ZX, X).

Synthetic HE dataset. For parameters n and d, we generate an unknown d-dimensional parameter vector θ∗ ∼ N(0, I_d). We then generate independent samples (X_i, Y_i, Z_i)_{i=1}^n as follows. Draw X_i ∼ N(0, I_d) and Z_i ∼ Ber(1/2). The binary treatment is drawn T_i ∼ Ber(p_i) with

p_i = 1/(1 + exp(−Z_i − U_i X̄_i)),

where U_i ∼ N(0, 1) and X̄_i = d^{−1/2}⟨X_i, 1⟩. Finally, the response is Y_i = ⟨X_i, θ∗⟩T_i + ⟨X_i, β∗⟩ + U_i with β∗ := 0 (a sketch of this generator appears at the end of the section). Ordinary least squares would produce a biased estimate of (θ∗, β∗), since TX̄ is correlated with the response noise U. However, U is by construction independent of X and Z. Thus, in the absence of corruption, IV linear regression with covariates (TX, X), response Y, and instruments (ZX, X) should approximately recover the true parameters (θ∗, β∗).

For n = 10³ and d = 20, the IV estimate still has significant variance, and in this regime, even with no added corruptions, we find that ITERATED-GMM-SEVER has lower recovery error than the baselines (Table 1 in Appendix G.5). For n = 10⁴ and d = 20, the IV estimate is more accurate. Hence, we corrupt the first εn samples by setting X_i := 1 and Y_i := 3√d. Varying ε from 0.01 to 0.1, we compute the median ℓ₂ recovery error of ITERATED-GMM-SEVER, classical IV, and two-stage Huber regression across 50 independent trials (for each ε). The results (Figure 1b) demonstrate that our algorithm is resilient to up to 10% corruptions, whereas both baselines rapidly degrade as ε increases.

NLSYM dataset. In this experiment, we use the data of [6] from the National Longitudinal Survey of Young Men for estimating the average treatment effect (ATE) of education on wages. The data consist of 3010 samples, with years of education as the treatment, log wages as the response, and proximity to a 4-year college as the instrument, along with 22 covariates (e.g. geographic indicator variables). For simplicity, we restrict the model to only two covariates (years and squared years of labor force experience) and a bias term. We find that the ATE estimated by ITERATED-GMM-SEVER is close to the positive ATE (≈ 0.277) estimated by classical IV, suggesting that Card's inference may be robust (Figure 3 in Appendix G.5). Next, we corrupt a random ε-fraction of the responses in a way that negates the ATE inferred by classical IV regression (see Appendix G.2 for the method). Varying ε from 0.01 to 0.2, we perform 10 independent trials (i.e. resampling the subset of corrupted samples each time). For each trial, we compute the ATE estimate of IV regression, the ATE estimate of two-stage Huber regression, and the median ATE estimate of 50 runs of ITERATED-GMM-SEVER. For each ε, we then plot the median absolute error of each algorithm across the 10 trials. We find that our algorithm outperforms both baselines, and has lower variance than two-stage Huber regression, up to ε ≈ 0.15 (Figure 1c; note that the error is on a log scale, so the Huber regression is extremely noisy).
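Finally, as referenced above, a sketch of the synthetic HE generator, including the corruption of the first εn samples; the helper name and the float encodings of Z and T are our own choices.

```python
import numpy as np

def synthetic_he_data(n=10_000, d=20, eps=0.0, rng=None):
    """Synthetic heterogeneous-effects dataset, returned in the IV encoding
    with covariates (T_i X_i, X_i) and instruments (Z_i X_i, X_i)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.standard_normal(d)                  # theta* ~ N(0, I_d), beta* = 0
    X = rng.standard_normal((n, d))                 # X_i ~ N(0, I_d)
    Z = rng.binomial(1, 0.5, size=n).astype(float)  # Z_i ~ Ber(1/2)
    U = rng.standard_normal(n)                      # response noise U_i
    Xbar = X.sum(axis=1) / np.sqrt(d)               # Xbar_i = d^{-1/2} <X_i, 1>
    p = 1.0 / (1.0 + np.exp(-(Z + U * Xbar)))       # propensity p_i
    T = rng.binomial(1, p).astype(float)            # T_i ~ Ber(p_i)
    Y = (X @ theta) * T + U                         # Y_i = <X_i, theta*> T_i + U_i
    k = int(eps * n)                                # corrupt first eps*n samples
    X[:k], Y[:k] = 1.0, 3 * np.sqrt(d)              # X_i := 1, Y_i := 3 sqrt(d)
    covariates = np.hstack([T[:, None] * X, X])     # (T X, X)
    instruments = np.hstack([Z[:, None] * X, X])    # (Z X, X)
    return covariates, Y, instruments, theta
```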
1. What is the focus and contribution of the paper regarding robust inference?
2. What are the strengths of the proposed algorithm, particularly in its theoretical guarantee?
3. What are the concerns regarding the optimality of the results, especially in Theorems 5.5, 6.2, and 6.3?
4. How does the reviewer assess the relevance and applicability of the provided concrete applications?
5. Are there any limitations or areas for improvement in the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper studies the problem of robust inference when the loss is taken as the norm of generalized moments. The authors provide a robust GMM algorithm based on the SEVER algorithm and the iterative filtering framework. The robust GMM algorithm filters both the moments and the directional derivatives of the moments.

Strengths And Weaknesses
The authors provide a new algorithm, robust GMM, based on SEVER and iterative filtering. A theoretical guarantee is provided for the proposed algorithm. The authors also provide concrete applications to robust IV linear regression and logistic regression. I enjoyed reading the paper and understanding the techniques. My main concern is about the optimality of the rates, as detailed in the questions section.

Questions
I'd like the authors to comment further on the optimality of the results, in particular for Theorems 5.5, 6.2 and 6.3. It seems to me that in the special case of robust mean estimation with g_i(w) = X_i − w, Assumption 3.1 holds when X_i has bounded covariance, and Theorem 5.5 gives a tight bound except for a worse breakdown point compared to [1] (and the algorithm basically coincides with iterative filtering for mean estimation). If this is correct, it is perhaps worth mentioning after Theorem 5.5. However, I am unsure about the optimality of Theorems 6.2 and 6.3: for a strongly convex function with bounded variance, the parameter estimation error is usually O(ϵ), see e.g. [2], and the rate becomes better with higher moment assumptions; however, the rate here under bounded 8th moments is still O(ϵ). Can the authors comment on whether this rate is tight or not? Also, would strong convexity or any other property help improve the rate in Theorem 5.5?

[1] Zhu, Banghua, Jiantao Jiao, and Jacob Steinhardt. "Robust estimation via generalized quasi-gradients." Information and Inference: A Journal of the IMA 11.2 (2022): 581–636.
[2] Yin, Dong, et al. "Byzantine-robust distributed learning: Towards optimal statistical rates." International Conference on Machine Learning. PMLR, 2018.

Limitations
N/A
1. What is the focus and contribution of the paper regarding Generalized Method of Moments?
2. What are the strengths of the proposed approach, particularly in terms of computational tractability and efficiency?
3. What are the weaknesses of the paper, especially regarding sample complexity and the limitations of the instrumental variables algorithms?
4. Do you have any concerns or suggestions regarding the comparisons with other works and the experimental results?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper an algorithm and its theoretical guarantees are given for the problem of the Generalized Method of Moments, in the setting where an ϵ fraction of data samples may be adversarially corrupted. The guarantees are given under deterministic assumptions on the uncorrupted part of the data and on the moment function g. It is then proved that samples from the instrumental variables linear and logistic regression models satisfy these assumptions with high probability. The arguments build upon a series of recent works on robust estimation of means and local optima; however, the contributions presented here on top of these ideas are considerable. It is stated that the methods presented in this paper are computationally tractable and, moreover, efficient, in contrast to existing work on robust GMM.

Strengths And Weaknesses
As discussed above, I believe this is a well written paper with a solid contribution. The main weaknesses in my view are:
1. The eventual sample complexity n of the instrumental variables algorithms in Section 6 is (d + p)^5, where d is the dimension of the features and p the dimension of the instrumental variable. This is clearly infeasible for all but very small d and p. The synthetic data experiments are performed with n, d, and p that do not satisfy these bounds, and the results indicate that perhaps the bound may be strengthened. The experiment with NLSYM data is performed with d = 2, instead of d = 22 as in the original data.
2. Since the bounds in the paper are not computationally feasible, a deeper comparison to existing methods should be performed. Why is the present work better than sources [20], [11], which are referred to as computationally intractable in the paper? What would be the results of the NLSYM experiment if it were performed with the full d = 22 data?

Questions
I will be glad to see the authors' comments on points 1 and 2 above.

Limitations
The assumptions made in the paper were appropriately discussed.
NIPS
Title Robust Generalized Method of Moments: A Finite Sample Viewpoint Abstract For many inference problems in statistics and econometrics, the unknown parameter is identified by a set of moment conditions. A generic method of solving moment conditions is the Generalized Method of Moments (GMM). However, classical GMM estimation is potentially very sensitive to outliers. Robustified GMM estimators have been developed in the past, but suffer from several drawbacks: computational intractability, poor dimension-dependence, and no quantitative recovery guarantees in the presence of a constant fraction of outliers. In this work, we develop the first computationally efficient GMM estimator (under intuitive assumptions) that can tolerate a constant fraction of adversarially corrupted samples, and that has an `2 recovery guarantee of O( √ ). To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as 10 – 15% corruption, significantly improving upon baseline methods. N/A √ ). To achieve this, we draw upon and extend a recent line of work on algorithmic robust statistics for related but simpler problems such as mean estimation, linear regression and stochastic optimization. As a special case, we apply our algorithm to instrumental variables linear regression with heterogeneous treatment effects, and experimentally demonstrate that it can tolerate as much as 10 – 15% corruption, significantly improving upon baseline methods. 1 Introduction Econometric and causal inference methodologies are increasingly being incorporated in automated large scale decision systems. Inevitably these systems need to deal with the plethora of practical issues that arise from automation. One important aspect is being able to deal with corrupted or irregular data, either due to poor data collection, the presence of outliers, or adversarial attacks by malicious agents. Even traditional applications of econometric methods, in social science studies, can greatly benefit from robust inference so as not to draw conclusions solely driven by a handful of samples, as was recently highlighted in [4]. One broad statistical framework, that encompasses the most widely used estimation techniques in econometrics and causal inference, is the framework of estimating models defined via moment conditions. In this paper we offer a robust estimation algorithm that extends prior recent work in robust statistics to this more general estimation setting. For a family of distributions {Dθ : θ ∈ Θ}, identifying the parameter θ is often equivalent to solving EX∼Dθ [g(X, θ)] = 0, (1) for an appropriate problem-specific vector-valued function g. This formalism encompasses such problems as linear regression (with covariates X , response Y , and moment g((X,Y ), θ) = X(Y − ∗[email protected]. This work was partially done while the first author was an intern at Microsoft Research New England. †[email protected]. This work was partially done while the second author was a Principal Researcher at Microsoft Research New England. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
XT θ)) and instrumental variables (IV) linear regression (with covariates X , response Y , instruments Z, and moment g((X,Y, Z), θ) = Z(Y −XT θ)). Under simple identifiability assumptions, moment conditions are statistically tractable, and can be solved by the Generalized Method of Moments (GMM) [16]. Given independent observations X1, . . . , Xn ∼ Dθ, the (unweighted) GMM estimator is θ̂ = argmin θ∈Θ ∥∥∥∥∥ 1n n∑ i=1 g(Xi, θ) ∥∥∥∥∥ 2 2 . Of course, for general functions g, finding θ̂ (the global minimizer of a potentially non-convex function) may be computationally intractable. Stronger assumptions imply that all approximate local minima of the above function are near the true parameter, in which case the GMM estimator is efficiently approximable. For instrumental variables (IV) linear regression, these assumptions follow from standard non-degeneracy assumptions. Due to its flexibility, the GMM estimator is widely used in practice (along with heuristic variants, in models where it is computationally intractable) [29]. Unfortunately, like most other classical estimators in statistics, the GMM estimator suffers from a lack of robustness: a single outlier in the observations can arbitrarily corrupt the estimate. Robust statistics Initiated by Tukey and Huber in the 1960s, robust statistics is a broad field studying estimators which have provable guarantees even in the presence of outliers [18]. Outliers can be modelled as samples from a heavy-tailed distribution, or even as adversarially and arbitrarily corrupted data. Classically, robustness of an estimator against arbitrary outliers is measured by breakdown point (the fraction of outliers which can be tolerated without causing the estimator to become unbounded [14]) and influence (the maximum change in the estimator under an infinitesimal fraction of outliers [15]). These metrics have spurred development and study of numerous statistical estimators which are often used in practice to mitigate the effect of outliers (e.g. Huber loss for mean estimation, linear regression, and other problems [17]). Problems such as robust univariate mean estimation are by now thoroughly understood [24, 22], and have statistically and computationally efficient estimators. Unfortunately, in higher dimensions, there has long appeared to be a tradeoff between robustness and computational tractability; as a result, much of the literature on high-dimensional robust statistics has focused on statistical efficiency at the expense of computational feasibility [5, 23, 13]. While there is a rich literature on IV regression and GMM in the context of robust statistics, those works either present computationally intractable estimators [21, 12] or are robust in the sense of bounded influence [1, 27, 20] but not robust against arbitrary outliers. Until the last few years, most high-dimensional statistical problems lacked robust estimators satisfying the following basic properties [7]: 1. Computational tractability (i.e. evading the curse of dimensionality) 2. Robustness to a constant fraction of arbitrary outliers 3. Quantitative error guarantees without dimension dependence. Recently, a line of work on algorithmic robust statistics has blossomed within the theoretical computer science community, with the aim of filling this gap in the high-dimensional statistics literature. 
Estimators with the above properties have been developed for various fundamental high-dimensional problems, including mean and covariance estimation [7, 9], linear regression [10, 3], and stochastic optimization [26, 8]. However, practitioners in econometrics and applied statistics often employ more sophisticated inference methods such as GMM and IV regression [29, 2]. Such methods are not traditionally under the purview of theoretical computer science and learning theory; perhaps as a result, computationally and statistically efficient robust estimators are still lacking. Our contribution We address this lack. Methodologically speaking, our main contribution is to introduce GMM to the algorithmic robust statistics literature and vice versa (even aside from robustness, basic algorithmic questions about GMM remain open and surprisingly unstudied). Theoretically speaking, we prove that a simple modification to the SEVER algorithm for robust stochastic optimization [8] (based on using higher-derivative information) yields a computationally efficient and provably robust GMM estimator under intuitive deterministic assumptions about the uncorrupted data. We instantiate this estimator for two important special cases of GMM—instrumental variables linear regression and instrumental variables logistic regression—under distributional assumptions about the covariates, instruments, and responses (and in fact our algorithm also applies to the IV generalized linear model under certain conditions on the link function). Experimentally, we apply our algorithm to robustly solve IV linear regression. We find that it performs well for a wide range of instrument strengths. In the important setting of heterogeneous treatment effects, our algorithm tolerates as much as 10% corruption. Applied to a seminal dataset previously used to estimate the effect of education on wages [6], we provide evidence for the robustness of the inference, and demonstrate that our algorithm can recover the original inference from corruptions of the dataset, significantly better than baseline approaches. Technical Overview Our robust GMM algorithm builds upon the SEVER algorithm and framework introduced in [8] for robust stochastic optimization, which itself builds on seminal work on robust multivariate mean estimation via spectral filtering [7, 9]. In this section, we outline the increasing levels of complexity. First, given samples v1, . . . , vn ∈ Rd among which n are corrupted, robust mean estimation asks for an estimate of the mean of the uncorrupted samples. The spectral filtering approach due to [9] iteratively does the following, until the sample covariance matrix is bounded: remove outliers in the direction of the largest variance. So long as the uncorrupted samples have bounded covariance, the filtering ensures that at termination, the empirical mean will approximate the uncorrupted mean. Second, given functions f1, . . . , fn : Rd → R among which n are corrupted, robust stochastic optimization asks for an approximate critical point of the mean of the uncorrupted functions. 
The SEVER algorithm [8] achieves this by alternating between (a) finding a critical point ŵ of the current sample set S, and (b) applying one iteration of spectral filtering to the vectors {∇f_i(ŵ) : i ∈ S}, terminating when no samples are removed from S.³ The termination guarantee of spectral filtering immediately implies that at termination, the average gradient of the uncorrupted samples at ŵ is near the average gradient of the final sample set S at ŵ, which is 0 by part (a). So ŵ at termination is an approximate critical point of the mean of the uncorrupted functions.

In our problem, we are given functions g₁, ..., g_n : R^d → R^p among which εn are corrupted, and wish to find an approximate minimizer of ‖(1/|U|) Σ_{i∈U} g_i(w)‖₂², where U ⊆ [n] is the set of uncorrupted functions. The obvious approach is to alternate between (a) finding a minimizer ŵ of ‖(1/|S|) Σ_{i∈S} g_i(w)‖₂², where S is the current sample set, and (b) applying spectral filtering to the vectors {g_i(ŵ) : i ∈ S}, terminating when no samples are removed from S. The termination guarantee of spectral filtering implies that the final sample average (1/|S|) Σ_{i∈S} g_i(ŵ) is near the uncorrupted average (1/|U|) Σ_{i∈U} g_i(ŵ). Unfortunately, there is no guarantee that (1/|S|) Σ_{i∈S} g_i(ŵ) has small norm: part (a) only implies that ŵ is a local minimizer (and hence critical point) of the norm, so

( (1/|S|) Σ_{i∈S} ∇g_i(ŵ) )ᵀ · (1/|S|) Σ_{i∈S} g_i(ŵ) = 0.

In the above equality, the sample gradient matrix at ŵ could be arbitrarily corrupted, so the sample average at ŵ could have arbitrarily large norm. In principle, even the global minimizer could have large norm. However, this issue can be fixed by using higher-derivative information: specifically, we also apply spectral filtering to (projections of) the matrices ∇g_i(ŵ). Under appropriate boundedness and smoothness assumptions, it can then be shown that at termination (when neither filtering step removes samples), ŵ is an approximate critical point of the norm of the uncorrupted average ‖(1/|U|) Σ_{i∈U} g_i(w)‖₂². By a "strong identifiability" assumption, this implies that ŵ is near the minimizer of ‖(1/|U|) Σ_{i∈U} g_i(w)‖₂², as desired.

³A related approach simply applies robust mean estimation to estimate the gradients at each step of gradient descent [26].

2 Preliminaries

For real scalars or vectors {ξ_i}_{i∈S} indexed by a set S, we use the notation E_S[ξ_i] for the sample expectation (1/|S|) Σ_{i∈S} ξ_i. Similarly, if the ξ_i are scalars, then we define the sample variance Var_S(ξ_i) = E_S(ξ_i − E_S ξ_i)². If the ξ_i are vectors, then we define the sample covariance matrix Cov_S(ξ_i) = E_S(ξ_i − E_S ξ_i)(ξ_i − E_S ξ_i)ᵀ. A random vector X is (4, 2, τ)-hypercontractive if E⟨X, u⟩⁴ ≤ τ (E⟨X, u⟩²)² for all vectors u.

Definition 2.1. For a closed set H, a function f : H → R, and γ > 0, a γ-approximate critical point of f (in H) is some x ∈ H such that for any vector v with x + δv ∈ H for arbitrarily small δ > 0, it holds that v · ∇f(x) ≥ −γ ‖v‖₂.

Definition 2.2. For a closed set H, a γ-approximate critical point oracle L_{γ,H} is an algorithm which, given a differentiable function f : H → R, returns a γ-approximate critical point of f.

Definition 2.3. The (unscaled) logistic function G : R → R is defined by G(x) = 1/(1 + e^{−x}).

Outline. In Section 3, we describe the robust GMM problem, and we describe deterministic assumptions on a set of corrupted sample moments, under which we will be able to efficiently estimate the parameter which makes the uncorrupted moments small.
In Section 4, we describe a key subroutine of our robust GMM algorithm, which is commonly known in the literature as filtering. In Section 5, we describe the robust GMM algorithm and prove a recovery guarantee under the assumptions from Section 3. In Section 6, we apply this algorithm to instrumental variable linear and logistic regression, proving that under reasonable stochastic assumptions on the uncorrupted data, arbitrarily ε-corrupted moments from these models satisfy the desired deterministic assumptions with high probability. Finally, in Section 7, we evaluate the performance of our algorithm on two corrupted datasets.

3 Robust GMM Model

In this section, we formalize the model in which we will provide a robust GMM algorithm. Classically, the goal of GMM estimation is to identify θ ∈ Θ given data X₁, ..., X_n ∼ D_θ, using the moment condition E_{X∼D_θ}[g(X, θ)] = 0. We consider the added challenge of the ε-strong contamination model, in which an adversary is allowed to inspect the data X₁, ..., X_n and replace εn samples with arbitrary data, before the algorithm is allowed to see the data. This corruption model encompasses most reasonable sources of outliers.

For our main theorem, we do not make stochastic assumptions about {D_θ : θ ∈ Θ}. Instead, we make deterministic assumptions about the empirical moments g_i(θ) := g(X_i, θ) of the given data, which are robust to ε-strong contamination. Concretely, we make the following assumption.

Assumption 3.1. Given differentiable moments g₁, ..., g_n : R^d → R^p, a corruption parameter ε > 0, well-conditionedness parameters λ and L, a Lipschitzness parameter L_g, and a noise level parameter σ², there is a set I_good ⊆ [n] with |I_good| ≥ (1 − ε)n (the "uncorrupted samples"), a vector w* ∈ R^d (the "true parameter"), and a radius R₀ ≥ ‖w*‖₂ with the following properties:

• Strong identifiability: σ_min(E_{I_good} ∇g(w*)) ≥ λ.
• Bounded-variance gradient: E_{I_good}(uᵀ ∇g(w*) v)² ≤ L² for all unit vectors u ∈ R^p, v ∈ R^d.
• Bounded-variance noise: E_{I_good}(v · g(w*))² ≤ σ²L for all unit vectors v.
• Well-specification: ‖E_{I_good} g(w*)‖₂ ≤ σ√(Lε).
• Lipschitz gradient: ‖E_{I_good} ∇g(w) − E_{I_good} ∇g(w*)‖_op ≤ L_g ‖w − w*‖₂ for all w ∈ B_{2R₀}(0).
• Stability of gradient: R₀ < λ/(9 L_g).

Intuitively, Assumption 3.1 can be thought of as a condition on the uncorrupted samples, because if they satisfy the assumption with parameter ε₀, then after ε-strong contamination, the corrupted samples will still satisfy the assumption with parameter ε₀ + ε. Strong identifiability is needed for parameter recovery (even without corruption). Bounded-variance gradient is a technical condition which e.g. reduces to a 4th moment bound for IV regression. The third and fourth conditions ensure that the data is approximately well-specified by the moment conditions. The fifth and sixth conditions hold trivially for IV linear regression; for non-linear moment problems, such as our logistic IV regression problem, this condition requires that the ℓ₂-norm of the parameters be sufficiently small, such that the logits do not approach the flat region of the logistic function, a condition that is natural to avoid loss of gradient information and extreme propensities.

4 The FILTER Algorithm

In many robust statistics algorithms, an important subroutine is a filtering algorithm for robust mean estimation. In this section we describe a filtering algorithm used in numerous prior works, including e.g. [8, 9].
Given a set of vectors {ξ_i : i ∈ S} and a threshold M, the algorithm returns a subset of S, by thresholding outliers in the direction of largest variance. Formally, see Algorithm 1.

Algorithm 1 FILTER
1: procedure FILTER({ξ_i : i ∈ S}, M)
2:   ξ̂ ← E_S[ξ_i], Cov_S(ξ_i) ← E_S[(ξ_i − ξ̂)(ξ_i − ξ̂)ᵀ]
3:   v ← largest eigenvector of Cov_S(ξ_i)
4:   τ_i ← (v · (ξ_i − ξ̂))² for i ∈ S
5:   if (1/|S|) Σ_{i∈S} τ_i ≤ 24M then
6:     return S
7:   else
8:     Sample T ← Unif([0, max_i τ_i])
9:     return S \ {i ∈ S : τ_i > T}

This algorithm has two important properties. First, if it does not filter any samples, then the sample mean is provably stable, i.e. it cannot have been affected much by the corruptions, so long as the uncorrupted samples had bounded variance (proof in Appendix B.1).

Lemma 4.1 (see e.g. [8, 9]). Suppose that FILTER does not filter out any samples. Then

‖E_S ξ − E_I ξ‖₂ ≤ 3√(48 ε (M + ‖Cov_I(ξ)‖_op))

for any I ⊆ [n] and ε > 0 such that |S|, |I| ≥ (1 − ε)n.

Second, if the threshold is chosen appropriately (based on the variance of the uncorrupted samples), then the filtering step always, in expectation, removes at least as many corrupted samples as uncorrupted samples. Equivalently, the size of the symmetric difference between the current sample set and the uncorrupted samples (i.e. the number of corrupted samples in the current set plus the number of uncorrupted samples which have been filtered out of the current set) does not increase in expectation (proof in Appendix B.1.1).

Lemma 4.2 (see e.g. [8, 9]). Consider an execution of FILTER with sample set S of size |S| ≥ 2n/3, vectors {ξ_i : i ∈ S}, and bound M. Let S′ be the sample set after this iteration's filtering. Let I_good ⊆ [n] satisfy |I_good| ≥ (5/6)n. Suppose that Cov_{I_good}(ξ_i) ⪯ M·I. Then E|S′ Δ I_good| ≤ E|S Δ I_good|, where the expectation is over the random threshold, and Δ denotes the symmetric difference.
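A direct numpy transcription of Algorithm 1 might look as follows (a sketch only; the input points are rows of an array, and the function name is ours):

```python
import numpy as np

def filter_step(xi, M, rng=np.random.default_rng()):
    """One pass of FILTER: returns the indices of the retained rows of xi."""
    mean = xi.mean(axis=0)
    centered = xi - mean
    cov = centered.T @ centered / len(xi)
    # Direction of largest variance: top eigenvector of the covariance.
    _, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]
    tau = (centered @ v) ** 2               # outlier scores tau_i
    if tau.mean() <= 24 * M:
        return np.arange(len(xi))           # variance bounded: keep all
    T = rng.uniform(0, tau.max())           # random threshold
    return np.flatnonzero(tau <= T)
```

The random threshold is what makes the supermartingale argument behind Lemma 4.2 go through: points with large scores (disproportionately the outliers) are more likely to exceed a uniform threshold and be removed.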
5 The ITERATED-GMM-SEVER Algorithm

In this section, we describe and analyze an algorithm ITERATED-GMM-SEVER for robustly solving moment conditions under Assumption 3.1. The key subroutine is the algorithm GMM-SEVER, which given an initial estimate w₀ and a radius R such that the true parameter is contained in B_R(w₀), returns a refined estimate w such that (with large probability) the radius bound can be decreased by a constant factor. We assume access to an approximate constrained critical point oracle L (Definition 2.2), which can be efficiently implemented (for arbitrary smooth bounded functions) by gradient descent.

Algorithm 2 GMM-SEVER
1: procedure GMM-SEVER(L, {g₁, ..., g_n}, w₀, R, γ, L, σ)
2:   S ← [n]
3:   repeat
4:     Compute a γ-approximate critical point w ← L_{γ,B_R(w₀)}(‖E_S(g_i(·))‖₂²)
5:     u ← E_S g_i(w)
6:     S′ ← FILTER({∇g_i(w) · u : i ∈ S}, L² ‖u‖₂²)
7:     if S′ ≠ S then
8:       Set S ← S′ and return to line 4
9:     S′′ ← FILTER({g_i(w) : i ∈ S}, σ²L + 4L²R²)
10:    if S′′ ≠ S then
11:      Set S ← S′′ and return to line 4
12:  until S′′ = S
13:  return (w, S)

Algorithm 3 AMPLIFIED-GMM-SEVER
1: procedure AMPLIFIED-GMM-SEVER(L, {g₁, ..., g_n}, w₀, R, γ, ε, L, σ, δ)
2:   t ← 0
3:   repeat
4:     (w, S) ← GMM-SEVER(L, {g₁, ..., g_n}, w₀, R, γ, L, σ)
5:     t ← t + 1
6:   until |S| ≥ (1 − 11ε)n or (1/10)^t ≤ δ
7:   return w

Like the algorithm SEVER [8], our algorithm GMM-SEVER alternates between (a) finding a critical point of a function associated to the current samples, and (b) filtering out "outlier" samples. Unlike SEVER, the function we optimize is not simply an empirical mean over the samples, but rather the squared norm of the sample moments. Moreover, we need two filtering steps: over the moments as well as over directional derivatives of the moments, in a carefully chosen direction. See Algorithm 2 for the complete description.

We will only prove a constant failure probability for GMM-SEVER. However, we will show that it can be amplified to an arbitrarily small failure probability δ. We call the resulting algorithm AMPLIFIED-GMM-SEVER; see Algorithm 3. The algorithm ITERATED-GMM-SEVER then consists of iteratively calling AMPLIFIED-GMM-SEVER to refine the parameter estimate and bound the true parameter within successively smaller balls; see Algorithm 4.

Algorithm 4 ITERATED-GMM-SEVER
1: procedure ITERATED-GMM-SEVER({g₁, ..., g_n}, R₀, γ, ε, λ, L, σ, δ)
2:   t ← 1, w₁ ← 0, R₁ ← R₀, δ′ ← cδ/log(R₀√L/(σ√ε)), γ ← σL^{3/2}√ε
3:   repeat
4:     ŵ_t ← AMPLIFIED-GMM-SEVER({g₁, ..., g_n}, w_t, R_t, ε, L, σ, γ, δ′)
5:     R_{t+1} ← 2γ/λ² + C((L²/λ²) R_t √ε + σ(L^{3/2}/λ²)√ε)
6:     t ← t + 1
7:   until R_t > R_{t−1}/2
8:   return ŵ_{t−1}

We start by analyzing GMM-SEVER. In the next two lemmas, we show that if the algorithm does not filter out too many samples, then we can bound the distance from the output to w*. First, we show a first-order criticality condition (in the direction ŵ − w*) for the norm of the moments of the "good" samples. If there were no corruption, then we would have an inequality of the form

((ŵ − w*)ᵀ/‖ŵ − w*‖₂) E_{I_good}∇g(ŵ)ᵀ E_{I_good} g(ŵ) ≤ γ.

With ε-corruption, the algorithm is designed so that we can still show the following inequality, matching the above guarantee up to O(√ε) (proof in Appendix C.1):

Lemma 5.1. Suppose that the input parameters R and w₀ satisfy B_R(w₀) ⊆ B_{2R₀}(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1 − 10ε)n, then the output ŵ of GMM-SEVER satisfies

((ŵ − w*)ᵀ/‖ŵ − w*‖₂) E_{I_good}∇g(ŵ)ᵀ E_{I_good} g(ŵ) ≤ γ + 275 σL^{3/2}√ε + 603 L²R√ε.

Moreover, we can show that any point satisfying the first-order criticality condition must be close to w*, using the least singular value bound on the gradient (proof in Appendix C.2).

Lemma 5.2. Suppose that the input parameters R and w₀ satisfy B_R(w₀) ⊆ B_{2R₀}(0). Under Assumption 3.1, suppose that w ∈ B_R(w₀) satisfies

(w − w*)ᵀ E_{I_good}∇g(w)ᵀ E_{I_good} g(w) ≤ κ ‖w − w*‖₂.

Then ‖w − w*‖₂ ≤ 4(κ + σL^{3/2}√ε)/λ².

Putting the above lemmas together, we immediately get the following bound on ‖ŵ − w*‖₂.

Lemma 5.3. Suppose that the input parameters R and w₀ satisfy B_R(w₀) ⊆ B_{2R₀}(0). Under Assumption 3.1, at algorithm termination, if |S| ≥ (1 − 10ε)n, then the output ŵ of GMM-SEVER satisfies

‖ŵ − w*‖₂ ≤ 4γ/λ² + 2412(L²/λ²)R√ε + 1102σ(L^{3/2}/λ²)√ε.

It remains to bound the size of S at termination. We follow the super-martingale argument from [8], which uses Lemma 4.2 (proof in Appendix C.3).

Theorem 5.4. Suppose that the input parameters R and w₀ satisfy B_R(w₀) ⊆ B_{2R₀}(0). Let ŵ be the output of GMM-SEVER. Then with probability at least 9/10, it holds that

‖ŵ − w*‖₂ ≤ 4γ/λ² + 2412(L²/λ²)R√ε + 1102σ(L^{3/2}/λ²)√ε.

The time complexity of GMM-SEVER is O(poly(n, d, p, T_γ)), where T_γ is the time complexity of the γ-approximate learner L. Moreover, for any δ > 0 the success probability can be amplified to 1 − δ by repeating GMM-SEVER O(log 1/δ) times, or until |S| ≥ (1 − 10ε)n at termination. We call this AMPLIFIED-GMM-SEVER, and it has time complexity O(poly(n, d, p, T_γ) · log(1/δ)).
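To fix ideas, here is a condensed Python sketch of the GMM-SEVER loop specialized to the IV linear moments g_i(w) = Z_i(Y_i − X_iᵀw), whose gradient is ∇g_i(w) = −Z_i X_iᵀ. The constrained critical-point oracle is replaced by an unconstrained numerical minimizer, and the one-pass filter from the previous sketch is restated so the block is self-contained; this is an illustration under those simplifications, not the exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def filter_step(xi, M, rng):
    centered = xi - xi.mean(axis=0)
    cov = centered.T @ centered / len(xi)
    v = np.linalg.eigh(cov)[1][:, -1]
    tau = (centered @ v) ** 2
    if tau.mean() <= 24 * M:
        return np.arange(len(xi))
    return np.flatnonzero(tau <= rng.uniform(0, tau.max()))

def gmm_sever(X, Y, Z, L, sigma, R, rng=np.random.default_rng(0)):
    S = np.arange(len(Y))
    while True:
        g = lambda w, I: Z[I] * (Y[I] - X[I] @ w)[:, None]   # rows g_i(w)
        obj = lambda w: np.sum(g(w, S).mean(axis=0) ** 2)
        w = minimize(obj, np.zeros(X.shape[1])).x            # oracle stand-in
        u = g(w, S).mean(axis=0)                             # E_S g_i(w)
        # Filter the directional derivatives (grad g_i(w))^T u = -(Z_i . u) X_i.
        keep = filter_step(-(Z[S] @ u)[:, None] * X[S], L**2 * (u @ u), rng)
        if len(keep) < len(S):
            S = S[keep]; continue
        # Filter the raw moments g_i(w).
        keep = filter_step(g(w, S), sigma**2 * L + 4 * L**2 * R**2, rng)
        if len(keep) == len(S):
            return w, S                                      # neither step filtered
        S = S[keep]
```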
With the above guarantee for GMM-SEVER and AMPLIFIED-GMM-SEVER, we can now analyze ITERATED-GMM-SEVER (proof in Appendix C.4).

Theorem 5.5. Suppose that the input to ITERATED-GMM-SEVER consists of functions g₁, ..., g_n : R^d → R^p, a corruption parameter ε > 0, well-conditionedness parameters λ and L, a Lipschitzness parameter L_g, a noise level parameter σ², a radius bound R₀, and an optimization error parameter γ, such that Assumption 3.1 is satisfied for some unknown parameter w* ∈ R^d, and (L²/λ²)√ε ≤ 1/9648.⁴ Suppose that the algorithm is also given a failure probability parameter δ > 0. Then the output ŵ of ITERATED-GMM-SEVER satisfies

‖ŵ − w*‖₂ ≤ O(σ(L^{3/2}/λ²)√ε)

with probability at least 1 − δ. Moreover, the algorithm has time complexity O(poly(n, d, p, T_γ) · log(1/δ) · log(R₀√L/(σ√ε))), where T_γ is the time complexity of a γ-approximate learner and γ = σL^{3/2}√ε.

⁴This constant may be improved; we focus in this paper on dependence on the parameters of the problem and do not optimize constants.

6 Applications

In this section, we apply ITERATED-GMM-SEVER to solve linear and logistic instrumental variables regression in the strong contamination model.

Robust IV Linear Regression. Let Z be the vector of p real-valued instruments, and let X be the vector of d real-valued covariates. Suppose that Z and X are mean-zero. Suppose that the response can be described as Y = Xᵀw* + ξ for some fixed w* ∈ R^d. The distributional assumptions we will make about X, Y, and Z are described below.

Assumption 6.1. Given a corruption parameter ε > 0, well-conditionedness parameters λ and L, a hypercontractivity parameter τ, a noise level parameter σ², and a norm bound R₀, we assume the following:

(i) Valid instruments: E[ξ|Z] = 0,
(ii) Bounded-variance noise: E[ξ²|Z] ≤ σ²,
(iii) Strong instruments: σ_min(E ZXᵀ) ≥ λ,
(iv) Boundedness: ‖Cov([Z;X])‖_op ≤ L,
(v) Hypercontractivity: [Z;X] is (4, 2, τ)-hypercontractive,
(vi) Bounded 8th moments: max_i E[X_i⁸] ≤ O(τ²L⁴) and max_i E[Z_i⁸] ≤ O(τ²L⁴),
(vii) Bounded norm parameter: ‖w*‖₂ ≤ R₀.

For intuition, conditions (i – iii) are standard for IV regression even in the absence of corruption; (iv – vi) are conditions on the moments of the distribution, and hold for a variety of reasonable distributions including but not limited to any multivariate Gaussian distribution with bounded-spectral-norm covariance. Condition (vii) essentially states that we need an initial estimate of w* (but the time complexity of our algorithm will depend only logarithmically on the initial estimate error R₀).

Define the random variable g(w) = Z(Y − Xᵀw) for w ∈ R^d, and let (X_i, Y_i, Z_i) be n independent samples drawn according to (X, Y, Z). Let ε > 0. We prove that under the above assumption, if n is sufficiently large, then with high probability, for any ε-contamination (X′_i, Y′_i, Z′_i)_{i=1}^n of (X_i, Y_i, Z_i)_{i=1}^n, the functions g_i(w) = Z′_i(Y′_i − (X′_i)ᵀw) satisfy Assumption 3.1. Formally, we prove the following theorem (see Appendix D):

Theorem 6.2. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵ τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples and any upper bound R₀ ≥ ‖w*‖₂, Assumption 3.1 is satisfied. In that event, if L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R₀/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w*‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.

Robust IV Logistic Regression. Let Z be a vector of p real-valued instruments, and let X be a vector of d real-valued covariates. Suppose that Z and X are mean-zero.
Suppose that the response can be described as Y = G(Xᵀw*) + ξ for some fixed w* ∈ R^d, where G is the (unscaled) logistic function. The proofs only use 1-Lipschitzness of G and G′, and that G′(0) is bounded away from 0. As for distributional assumptions, we assume in this section that Assumption 6.1 holds, and additionally assume that the norm bound satisfies R₀ ≤ c·min(λ²/L, λ/√(τL³)) for an appropriate constant c, where λ, L, and τ are as required by the Assumption. We obtain the following algorithmic result (proof in Appendix E):

Theorem 6.3. Let ε > 0. Suppose that ε < c·min(λ²/(τL²), λ⁴/L⁴) for a sufficiently small constant c > 0, and suppose that n ≥ C(d + p)⁵ τ log((p + d)/(τε))/ε² for a sufficiently large constant C. Suppose that ‖w*‖₂ ≤ R₀ ≤ c·min(λ²/L, λ/√(τL³)). Then with probability at least 0.95 over the samples (X_i, Y_i, Z_i)_{i=1}^n, the following holds: for any ε-corruption of the samples, Assumption 3.1 is satisfied. In that event, if R₀, L, λ, σ, and ε are known, then there is a poly(n, d, p, log(1/δ), log(R₀/(σ√ε)))-time algorithm which produces an estimate ŵ satisfying ‖ŵ − w*‖₂ ≤ O(σ(L^{3/2}/λ²)√ε) with probability at least 1 − δ.

7 Experiments

In this section we corroborate our theory by applying our algorithm ITERATED-GMM-SEVER to several datasets for IV linear regression. See Appendix G for omitted figures and experimental details (e.g. hyperparameter choices and descriptions of the baselines). Error bars are at the 25th and 75th percentiles across independent trials.

Varied Instrument Strength. We construct a synthetic dataset with endogenous noise and 1% corruptions, and evaluate our estimator as the instrument strength is varied. Concretely, for dimension d and strength α, we draw independent samples (X_i, Y_i, Z_i)_{i=1}^n where, for unobserved noise η_i ∼ N(0, I_d), we define instruments Z_i ∼ N(0, I_d) and covariates X_i = αZ_i + η_i, and response y_i = ⟨X_i, θ*⟩ + ⟨η_i, 1⟩. For k = 0.01n of the samples, we introduce corruption by setting Z_i = −A/(k√d) and y_i = √d, where A = Σ_j Z_j y_j, which zeroes out the IV estimate. We take n = 10⁴, d = 20 and θ* = (1, 0, ..., 0), and vary α from 0.1 to 10. For each α, we run 10 independent trials, comparing the median ℓ₂ error of ITERATED-GMM-SEVER with classical IV and two-stage Huber regression. We also compare to the "clean IV" error, i.e. the error of IV on the uncorrupted samples. When α is small, essentially no inference is possible (the clean error is large), but as α increases, our estimator starts to outperform the baselines, and roughly tracks the clean error (Figure 1a). Similar results can be seen for d = 100 (Figure 2 in Appendix G.5).

Our next two examples consider IV linear regression with heterogeneous treatment effects, a natural setting in which the instruments and covariates are high-dimensional, necessitating dimension-independent robust estimators. Consider a study in which each sample has a vector X of characteristics, a scalar instrument Z, a scalar treatment T, and a response Y. Assuming that the control response and treatment effect are linear in the characteristics, with unknown coefficients β* and θ* respectively, and that the response noise is mean-zero conditioned on Z and X (but may correlate with the treatment), we can write the moment conditions

E[XZ(Y − T⟨X, θ*⟩ − ⟨X, β*⟩)] = E[X(Y − T⟨X, θ*⟩ − ⟨X, β*⟩)] = 0.

This can be interpreted as an IV linear regression with covariates (TX, X) and instruments (ZX, X).
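A small numpy sketch of this stacking (function name ours; the inputs are the per-sample characteristics, treatments, and instruments):

```python
import numpy as np

def stack_hte(X, T, Z):
    """X: (n, d) characteristics, T: (n,) treatment, Z: (n,) instrument.

    Returns the stacked covariates (T*X, X), whose coefficient vector is
    (theta*, beta*), and the stacked instruments (Z*X, X)."""
    covariates = np.hstack([T[:, None] * X, X])
    instruments = np.hstack([Z[:, None] * X, X])
    return covariates, instruments
```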
Synthetic HE dataset. For parameters n, d, we generate an unknown d-dimensional parameter vector θ* ∼ N(0, I_d). We then generate independent samples (X_i, Y_i, Z_i)_{i=1}^n as follows. Draw X_i ∼ N(0, I_d) and Z_i ∼ Ber(1/2). The binary treatment is drawn T_i ∼ Ber(p_i) with

p_i = 1/(1 + exp(−Z_i − U_i X̄_i)),

where U_i ∼ N(0, 1) and X̄_i = d^{−1/2}⟨X_i, 1⟩. Finally, the response is Y_i = ⟨X_i, θ*⟩T_i + ⟨X_i, β*⟩ + U_i with β* := 0. Ordinary least squares would produce a biased estimate of (θ*, β*), since T X̄ is correlated with the response noise U. However, U is by construction independent of X and Z. Thus, in the absence of corruption, IV linear regression with covariates (TX, X), response Y, and instrument (ZX, X) should approximately recover the true parameters (θ*, β*).

For n = 10³ and d = 20, the IV estimate still has significant variance, and in this regime, even with no added corruptions, we find that ITERATED-GMM-SEVER has lower recovery error than the baselines (Table 1 in Appendix G.5). For n = 10⁴ and d = 20, the IV estimate is more accurate. Hence, we corrupt the first εn samples, by setting X_i := 1 and Y_i := 3√d. Varying ε from 0.01 to 0.1, we compute the median ℓ₂ recovery error of ITERATED-GMM-SEVER, classical IV, and two-stage Huber regression, across 50 independent trials (for each ε). The results (Figure 1b) demonstrate that our algorithm is resilient to up to 10% corruptions, whereas both baselines rapidly degrade as ε increases.

NLSYM dataset. In this experiment, we use the data of [6] from the National Longitudinal Survey of Young Men for estimating the average treatment effect (ATE) of education on wages. The data consist of 3010 samples with years of education as the treatment, log wages as the response, and proximity to a 4-year college as the instrument, along with 22 covariates (e.g. geographic indicator variables). For simplicity, we restrict the model to only two covariates (years and squared years of labor force experience) and a bias term. We find that the ATE estimated by ITERATED-GMM-SEVER is close to the positive ATE (≈ 0.277) estimated by classical IV, suggesting that Card's inference may be robust (Figure 3 in Appendix G.5). Next, we corrupt a random ε-fraction of the responses, in a way that negates the ATE inferred by classical IV regression (see Appendix G.2 for the method). Varying ε from 0.01 to 0.2, we perform 10 independent trials (i.e. resampling the subset of corrupted samples each time). For each trial, we compute the ATE estimate of IV regression, the ATE estimate of two-stage Huber regression, and the median ATE estimate over 50 runs of ITERATED-GMM-SEVER. For each ε, we then plot the median absolute error of each algorithm across the 10 trials. We find that our algorithm outperforms both baselines, and has lower variance than two-stage Huber regression, up to ε ≈ 0.15 (Figure 1c; note that the error is on a log scale, so the Huber regression is extremely noisy).
1. What is the focus of the paper regarding robust algorithms for computing generalized method of moments estimators?
2. What are the strengths of the proposed modified SEVER algorithm, particularly in terms of computational tractability and robustness against outliers?
3. Do you have any questions or concerns regarding the technical aspects of the proofs and lemmas in the paper?
4. How does the paper address the issue of a constant fraction of corrupted samples in the dataset?
5. Can you explain the main contribution of the paper in the context of previous research on robust algorithms for generalized method of moments estimators?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The SEVER algorithm was proposed in 2019 as a robust algorithm that aims to find an approximate critical point of a collection f₁, ..., f_n of real-valued functions, with the caveat that a fraction εn of the samples is corrupted. Here, the authors propose a modification of this algorithm to compute generalized method of moments estimators: these are estimators of the form θ̂ = argmin_{θ∈Θ} ‖n^{−1} Σ_{i=1}^n g(X_i, θ)‖². The output of their algorithm is computationally tractable even in high dimension and can be shown to be robust against a constant fraction of outliers. The performance of this estimator is tested on both a synthetic and a non-synthetic dataset.

Strengths And Weaknesses

The paper is well-written and provides a compelling motivation for considering the modified SEVER algorithm: GMM methods can be applied in a wide number of different contexts, so that designing a sound (and computationally tractable) robust method to compute them is a welcome contribution. From a technical point of view, most results and proofs are modifications of corresponding results in the original SEVER paper. As such, there are no new techniques provided. Although mostly clear, there are several parts throughout Sections 4 and 5 where I had trouble following the proofs: there are either small typos with weird numerical constants appearing, or some parts I do not understand (see Questions below). The proofs in Section 6 were not carefully checked.

Questions

- It is stated l.177 that the filtering algorithm is standard. Then, it is surprising that Lemmas 4.1 and 4.2 are not already found in the literature.
- Lemma 5.1: I think we also need an assumption of the type ‖w₀ − w*‖ ≤ R₀ and R ≤ R₀ (such assumptions are implicitly used in the proof).
- Theorem 5.4: what is p?
- In the proof of Lemma A.1: why do we have λ ≤ L?
- The constants in the three last lines of l.461 seem off: we have ‖w − w*‖ ≤ 3R₀ and not 2R₀.
- Proof of Lemma 4.1: once again the constants seem off.
- Proof of Lemma 5.2: once again, it looks like we are using that ‖w₀ − w*‖ ≤ R₀ and R ≤ R₀ in the proof of Lemma 5.2 (paragraph l.532)!
- I do not understand the proof of Theorem 5.4. More specifically, it is stated that "In this event, |S_t| ≥ 2n/3 for all t", and I do not know why this should hold. This is arguably the main theorem in the paper, so more details on this proof are needed. (Also, how is X₁ defined? An "initialization" is missing.)
- Typos: l.182: "had"; l.478: "Lϵ"; l.511: Step 3 is not defined; l.518: define u; l.528, third line: E_{I_good}.

Limitations

Yes.
NIPS
Title Structure-Preserving Embedding of Multi-layer Networks

Abstract This paper investigates structure-preserving embedding for multi-layer networks with community structure. We propose a novel generative tensor-based latent space model (TLSM) that allows heterogeneity among vertices. It embeds vertices into a low-dimensional latent space so that vertices within the same community are close to each other in the ambient space, and captures layer heterogeneity through a layer-effect factor matrix. With a general and flexible tensor decomposition on the expected network adjacency tensor, TLSM is dedicated to preserving the original vertex relations and layer-specific effects in the network embedding. An efficient alternative updating scheme is developed to estimate the model parameters and conduct community detection simultaneously. Theoretically, we establish the asymptotic consistencies of TLSM in terms of both multi-layer network estimation and community detection. The theoretical results are supported by extensive numerical experiments on both synthetic and real-life multi-layer networks.

1 Introduction

Network has arisen as one of the most common structures to represent the relations among entities. In many complex systems, entities can be multi-relational in that they may interact with each other under various circumstances. A multi-layer network, which consists of a common vertex set across all network layers representing the entities and an edge set at each layer to characterize a particular type of relation among entities, is faithful to represent these relations. Examples of multi-layer networks include social networks of multiple interaction channels [42, 15], biological networks of different collaboration schemes [49, 31, 29] and world trading networks [1, 37] of various goods.

In this paper, we propose a structure-preserving embedding framework for multi-layer networks via a tensor-based latent space model. Specifically, TLSM utilizes the factorization of the network adjacency tensor as a building block, embeds the vertices into a low-dimensional latent space, and captures the heterogeneity among different layers through a layer-effect factor matrix.
Consequently, the community structure of the multi-layer network can be detected from a network embedding perspective, such that vertices within the same community are closer to one another in the ambient space than those in different communities. In addition, one key feature of TLSM is that it introduces a sparsity factor into the vanilla logit transformation of the network adjacency tensor, which allows TLSM to model sparse multi-layer networks in a more explicit fashion and accommodate multi-layer networks as sparse as the ones considered in literature [22]. More importantly, this sparsity factor can be estimated from the network adjacency tensor directly.

The main contribution of this paper is three-fold. First, the proposed TLSM is flexible and general in that it includes many popular network models as special cases. It also relaxes the layer-wise positive semi-definite condition that has been frequently employed in literature [6, 35]. Second, a joint modeling framework is constructed for TLSM, consisting of the multi-layer network likelihood and a clustering-type penalty, to estimate the multi-layer network and conduct community detection simultaneously. Its advantages are supported by extensive numerical experiments on both synthetic and real-life multi-layer networks. Third, the asymptotic consistencies of TLSM are established in terms of both multi-layer network estimation and community detection. Notably, the established theoretical results imply that the proposed methods can accommodate the sparsest multi-layer networks considered in literature.

The rest of the paper is organized as follows. The remainder of Section 1 discusses related works and introduces necessary notations. Section 2 presents the proposed TLSM and its estimation scheme with an efficient algorithm. In Section 3, we establish the asymptotic consistencies of TLSM. Extensive numerical experiments of TLSM on synthetic and real-life multi-layer networks, as well as ablation studies on two novel components of the proposed method, are carried out in Section 4. Section 5 concludes the paper. The supplementary materials contain technical proofs and necessary lemmas, additional simulation studies, the detailed parameter tuning process, among others.

1.1 Related work

While there is a growing literature focusing on community detection in single-layer networks [48, 28, 13], community detection in multi-layer networks is still in its infancy. One classical approach is to detect community structure in each layer separately [4, 5], which fails to leverage the homogeneity across different layers. Another approach is to aggregate multi-layer networks into a single-layer one [41, 12, 35], which heavily relies on the assumption of homogeneous linking pattern across multiple layers. Recently, [26] proposed to aggregate the bias-adjusted version of the squared adjacency matrix in each layer to alleviate the information loss in aggregation,
yet it requires the average node degree to grow at a sub-optimal order.

In terms of multi-layer network generative models, [34] extended the seminal stochastic block model (SBM; 19) to the multi-layer stochastic block model (MLSBM; 34), where the probability for any two vertices to form an edge in a given layer depends only on their community memberships. Clearly, MLSBM heavily relies on the assumption of homogeneous vertices within communities. The framework of MLSBM has also been incorporated in degree-corrected network estimation [36], spectral clustering [6, 35, 26], least squares estimation [27] and likelihood-based approaches [45]. In addition, network response regression models [46] and tensor factorization methods [8, 22] have also been proposed to detect community structures in multi-layer networks.

To allow heterogeneous vertices, the latent space model [18] and the random dot product graph model [3] have been extended to multi-layer networks [47, 32, 2]. In addition, graph neural networks and graph convolutional networks have been extended to multi-layer networks for learning the multi-layer network embedding [14, 23, 17, 39].

1.2 Notations

Throughout the paper, we use boldface calligraphic Euler scripts (A) to denote tensors, boldface capital letters (A) or Greek letters (α, β) to denote matrices, boldface lowercase letters (a) to denote vectors, and regular letters (a) to denote scalars. For an order-three tensor A ∈ R^{I₁×I₂×I₃}, A_{i,.,.} ∈ R^{I₂×I₃}, A_{.,j,.} ∈ R^{I₁×I₃}, and A_{.,.,m} ∈ R^{I₁×I₂} are the i-th horizontal slice, j-th lateral slice and m-th frontal slice of A, respectively. Similarly, for a matrix A, A_{i,.} denotes its i-th row and A_{.,j} denotes its j-th column. For a vector a, diag(a) stands for the diagonal matrix whose diagonal is a. We use ‖·‖, ‖·‖_∞, and ‖·‖_F to denote the l₂-norm and l∞-norm of a vector, and the Frobenius norm of a matrix or tensor, respectively. For any integer n, denote [n] = {1, 2, ..., n}.

The mode-1 product between a tensor A ∈ R^{I₁×I₂×I₃} and a matrix U ∈ R^{J₁×I₁} is a tensor A ×₁ U ∈ R^{J₁×I₂×I₃} such that its (j₁, i₂, i₃)-th entry is defined as (A ×₁ U)_{j₁,i₂,i₃} = Σ_{i₁=1}^{I₁} A_{i₁,i₂,i₃} U_{j₁,i₁}. The mode-2 or mode-3 product between A and any matrix of appropriate dimension is defined similarly. The CANDECOMP/PARAFAC (CP) decomposition of A has the form

A = Σ_{r=1}^R a^{(r)} ∘ b^{(r)} ∘ c^{(r)},   (1)

where a^{(r)} ∈ R^{I₁}, b^{(r)} ∈ R^{I₂}, and c^{(r)} ∈ R^{I₃} for r ∈ [R], and ∘ stands for the vector outer product. The CP-rank [24] of the tensor a^{(r)} ∘ b^{(r)} ∘ c^{(r)} is defined to be 1, for r ∈ [R]. The minimal number of rank-1 tensors in the CP decomposition of A is called the CP-rank of A. Let I ∈ {0, 1}^{R×R×R} be the identity tensor such that I_{i₁,i₂,i₃} = 1 if i₁ = i₂ = i₃ and 0 otherwise, and let A ∈ R^{I₁×R}, B ∈ R^{I₂×R}, and C ∈ R^{I₃×R} such that A_{.,r} = a^{(r)}, B_{.,r} = b^{(r)}, and C_{.,r} = c^{(r)}. Equation (1) can then be equivalently written as A = I ×₁ A ×₂ B ×₃ C.
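As a quick illustration of this notation, here is a small numpy sketch (function names ours) that rebuilds a tensor from its CP factor matrices, i.e. computes I ×₁ A ×₂ B ×₃ C as the sum of R rank-1 outer products in (1):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Vectorized form of Equation (1): T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r].
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_reconstruct_loop(A, B, C):
    # Equivalent explicit form, mirroring Equation (1) term by term.
    T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r in range(A.shape[1]):
        T += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
    return T
```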
2 Structure-preserving embedding

In this paper, we consider multi-layer networks that can be represented as an undirected and unweighted M-layer graph G = (V, E), where V = [n] consists of the common n vertices across different layers, and E = {E^{(m)}}_{m=1}^M with E^{(m)} ⊂ V × V representing the m-th relation network among vertices. An order-three adjacency tensor A = (a_{i,j,m}) ∈ {0, 1}^{n×n×M} is then defined to represent G, with entries a_{i,j,m} = 1 if (i, j) ∈ E^{(m)} and 0 otherwise.

2.1 Tensor-based latent space model

To fully characterize the multi-layer network structure, we propose the following generative tensor-based latent space model (TLSM). For any i ≤ j ∈ [n] and m ∈ [M],

a_{i,j,m} = a_{j,i,m} ∼ Bernoulli(p_{i,j,m}) independently, with   (2)
θ_{i,j,m} = log( p_{i,j,m} / (s_n − p_{i,j,m}) ), and   (3)
Θ = I ×₁ α ×₂ α ×₃ β,  α ∈ Ω_α, β ∈ Ω_β,   (4)

where I is the order-three R-dimensional identity tensor. Basically, (2) follows the standard routine in the multi-layer network literature [34, 35, 27, 22] to model that a_{i,j,m} = a_{j,i,m} are independently generated from a Bernoulli distribution, for i ≤ j ∈ [n] and m ∈ [M]. Denote P = (p_{i,j,m}) ∈ R^{n×n×M} as the underlying network probability tensor; then Θ = (θ_{i,j,m}) ∈ R^{n×n×M} is the entry-wise transformation of P by (3). We call the transformation (3) the modified logit transformation, in that the constant 1 in the standard logit transformation is replaced by a sparsity factor s_n, which may vanish with n and M. We further assume all entries of P are of the order s_n; that is, there exists a constant 1/2 ≤ ξ < 1 such that (1−ξ)s_n ≤ p_{i,j,m} ≤ ξs_n, for i, j ∈ [n] and m ∈ [M]. Thus, s_n essentially controls the overall network sparsity, and the entries of Θ are ensured to lie in the interval [−log(ξ/(1−ξ)), log(ξ/(1−ξ))]. More importantly, (4) models the CP decomposition of Θ by the factor matrices α ∈ R^{n×R} and β ∈ R^{M×R} with CP-rank R, which can greatly reduce the number of free parameters from n(n+1)M/2 to (n+M)R. Throughout the paper, the CP-rank R is allowed to diverge with n. In the CP decomposition of Θ, α is the vertex latent position matrix with each row α_{i,.} serving as the embedding of vertex i, and β captures heterogeneity across different layers. Herein, we define the constraint sets for α and β as Ω_α = {α ∈ R^{n×R} : ‖α_{i,.}‖ ≤ √(log(ξ/(1−ξ))), for i ∈ [n]} and Ω_β = {β ∈ R^{M×R} : ‖β_{.,r}‖ = 1, r ∈ [R]}. Note that the constraint on β is necessary for model identification, and a detailed discussion will be presented shortly. The constraint set Ω_α × Ω_β is sufficient to maintain the boundedness condition on Θ, since a general Hölder inequality yields that |θ_{i,j,m}| = |I ×₁ α_{i,.}ᵀ ×₂ α_{j,.}ᵀ ×₃ β_{m,.}ᵀ| ≤ ‖α_{i,.}‖ ‖α_{j,.}‖ ‖β_{m,.}‖_∞ ≤ log(ξ/(1−ξ)). To conclude this paragraph, we remark that the parameter ξ is introduced for theoretical purposes and is not treated as a tuning parameter. One can choose ξ sufficiently close to 1 in empirical studies so that the restriction on α is alleviated.

We make several essential observations about the proposed TLSM. First and foremost, TLSM is flexible and general. It includes the celebrated MLSBM [34, 43, 35, 27, 26, 36, 22] as a special case. Specifically, suppose the vertices come from K disjoint communities; the standard MLSBM assumes that the underlying network probability tensor P = B ×₁ Z ×₂ Z, where B ∈ R^{K×K×M} is a semi-symmetric core probability tensor with B_{k₁,k₂,m} = B_{k₂,k₁,m} for k₁, k₂ ∈ [K] and m ∈ [M], and Z ∈ {0, 1}^{n×K} is the community membership matrix with Z_{i,k} = 1 if vertex i comes from the k-th community and 0 otherwise. That is, the probability of any vertex pair to form an edge in a particular layer depends only on their community memberships.
Equivalently, under the modified logit transformation (3), we have Θ = B̃ ×₁ Z ×₂ Z, where B̃ is the entry-wise transformation of B under (3). Taking R to be the CP-rank of B̃, the CP decomposition of B̃ then has the form B̃ = I ×₁ C ×₂ C ×₃ β for some matrices C ∈ R^{K×R} and β ∈ R^{M×R}, due to semi-symmetry. This leads to the CP decomposition of Θ having the form (4) with α = ZC. It is clear that MLSBM requires vertices within the same community to be homogeneous and exchangeable, while TLSM allows vertices to have different embeddings even when they are in the same community.

Second, TLSM is identifiable when both α and β have full column ranks. When both α and β have full column ranks, the Kruskal k-ranks [25] of α and β satisfy k_α = k_β = R, and then Θ has CP-rank R. Hence, k_α + k_α + k_β ≥ 2R + 2 as long as R ≥ 2. By Theorem 1 of [40], the fixed column l₂-norm constraint on β implies that the tensor factorization in (4) is unique up to column permutations of α and β and column sign flips of α. It is important to remark that the community structure encoded in α remains unchanged under any column permutation or sign flip.

Third, introducing a sparsity factor s_n via a modified logit transformation into TLSM is non-trivial. We take a single-layer network as an example to illustrate the limitation of the standard logit transformation in handling sparse networks. Suppose a vanilla logit link is used to connect the network underlying probability matrix P and its transformation Θ, and the latent space model usually assumes that Θ = ααᵀ. A sparse network requires the entries of Θ to diverge to negative infinity due to the small magnitude of the edge probabilities, which leads to unstable estimation of α in numerical experiments. Moreover, this may conflict with the assumption that vertices within the same community tend to be close in the embedding space and their inner product is likely to be positive. These difficulties can be naturally circumvented when an appropriate s_n is chosen in (3).

2.2 Regularized likelihood

Given a network adjacency tensor A and number of communities K, our goal is to estimate the multi-layer network embedding (α, β) and conduct community detection on the vertices. Throughout this paper, we assume the number of potential communities K is given and may diverge with n. Under the TLSM framework, with slight abuse of notation, we denote the average negative log-likelihood function of the multi-layer network G as L(α, β; A) = L(Θ; A) with

L(Θ; A) = (1/φ(n,M)) Σ_{m=1}^M Σ_{i≤j} L(θ_{i,j,m}; a_{i,j,m}),

where φ(n,M) = n(n+1)M/2 is the number of potential edges, and L(θ; a) = log(1 + s_n/(1 − s_n + e^{−θ})) − a log(s_n/(1 − s_n + e^{−θ})) is the negative log-density of a Bernoulli random variable a. We now introduce a novel regularization term to detect the potential communities in G,

J(α) = min_{Z∈Γ, C∈R^{K×R}} (1/n) ‖α − ZC‖²_F,   (5)

where C encodes the vertex embedding centers and Γ ⊂ {0, 1}^{n×K} is the set of all possible community membership matrices; that is, for any Z ∈ Γ, each row of Z consists of only one 1, indicating the community membership, with all other entries being 0. This leads to the proposed regularized cost function,

L_λ(α, β; A) = L(α, β; A) + λ_n J(α),   (6)

where λ_n is a positive tuning parameter that strikes the balance between network estimation and community detection in the cost function. It is clear that the embeddings of vertices with similar linking patterns will be pushed towards the same center, and thus close to each other in the ambient space, leading to the desired community structure in G.
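A minimal numpy sketch of evaluating the regularized cost (6) for given factors, assuming the exact minimization in (5) is approximated by a single k-means run (function and variable names ours, illustrative only):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def regularized_cost(A, alpha, beta, s_n, lam, K):
    """A: (n, n, M) binary tensor, alpha: (n, R), beta: (M, R)."""
    n, _, M = A.shape
    Theta = np.einsum('ir,jr,mr->ijm', alpha, alpha, beta)
    # Negative Bernoulli log-likelihood under the modified logit link (3).
    nll = np.log(1 + s_n / (1 - s_n + np.exp(-Theta))) \
        - A * np.log(s_n / (1 - s_n + np.exp(-Theta)))
    iu = np.triu_indices(n)                   # sum over i <= j only
    L = nll[iu].sum() / (0.5 * n * (n + 1) * M)
    # J(alpha) in (5), with k-means standing in for the exact minimizer.
    centers, labels = kmeans2(alpha, K, minit='++', seed=0)
    J = np.sum((alpha - centers[labels]) ** 2) / n
    return L + lam * J
```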
2.3 Projected gradient descent algorithm

We develop a scalable projected gradient descent (PGD) algorithm to optimize the penalized cost function (6), which is highly non-convex and can be solved only locally. PGD, which alternately conducts a gradient step and a projection step, is one of the most popular and computationally fast algorithms for tackling non-convex optimization problems [7, 33, 47, 9].

To compute the gradients with respect to α and β, we introduce the following notations. Define T ∈ R^{n×n×M} with entries T_{i,j,m} = [exp(−θ_{i,j,m})/(1 − s_n + exp(−θ_{i,j,m}))] (p_{i,j,m} − a_{i,j,m}), and X^{α,β}_{T(2,3)} ∈ R^{n×R} whose i-th row consists of the diagonal elements of the slice (T ×₂ αᵀ ×₃ βᵀ)_{i,.,.}. That is, X^{α,β}_{T(2,3)}(i, r) = (T ×₂ αᵀ ×₃ βᵀ)_{i,r,r}. Similarly, we define X^{α,α}_{T(1,2)} ∈ R^{R×M}, X^β_{T(3)} ∈ R^{n×R}, and X_{T(1,2)} ∈ R^{n×M}, such that X^{α,α}_{T(1,2)}(r, m) = (T ×₁ αᵀ ×₂ αᵀ)_{r,r,m}, X^β_{T(3)}(i, r) = (T ×₃ βᵀ)_{i,i,r}, and X_{T(1,2)}(i, m) = T_{i,i,m}. Consequently, when the vertex membership matrix Z and the community center matrix C are fixed, the gradients of L_λ(α, β; A) with respect to α and β are

(1/φ(n,M)) ( X^{α,β}_{T(2,3)} + X^β_{T(3)} ∗ α ) + 2λ_n(α − ZC)  and  (1/(2φ(n,M))) ( (X^{α,α}_{T(1,2)})ᵀ + X_{T(1,2)}ᵀ (α ∗ α) ),

respectively. Herein, ∗ denotes the Hadamard (entry-wise) product between two matrices. Let (α̃, β̃) denote the solution given by one gradient step; we then project (α̃, β̃) onto Ω_α × Ω_β by the following steps.

Step 1. Multiply the r-th column α̃_{.,r} by ‖β̃_{.,r}‖^{1/2} for r ∈ [R]. Denote the resultant matrix by α̃′.
Step 2. Regularize each row of α as α_{i,.} = α̃′_{i,.} min{√(log(ξ/(1−ξ))), ‖α̃′_{i,.}‖}/‖α̃′_{i,.}‖, for i ∈ [n].
Step 3. Normalize the columns of β as β_{.,r} = β̃_{.,r}/‖β̃_{.,r}‖, for r ∈ [R].

Next, when (α, β) are given, we apply a (1 + δ)-approximation K-means algorithm on α̃ to update the vertex community membership matrix Z and the community center matrix C. The above steps are conducted alternately until convergence or until reaching the maximum number of iterations. The developed alternating updating scheme is summarized in Algorithm 1 in Appendix A of the supplementary materials.

Several remarks on the algorithm are in order. First, Algorithm 1 can only be guaranteed to converge to a stationary point, not necessarily a local minimizer. We hence employ a transformed higher-order orthogonal iteration (HOOI) algorithm for warm initialization in all the numerical experiments in Sections 4 and 5. Specifically, given a user-specified value τ, we define Θ̃ to mimic the magnitude of Θ such that Θ̃_{i,j,m} = −τ if a_{i,j,m} = 0 and Θ̃_{i,j,m} = τ otherwise. A standard HOOI algorithm [11] is applied to Θ̃ to obtain α^{(0)} and β^{(0)}. We set τ = 100 in all the numerical experiments.
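A short numpy sketch of the projection in Steps 1–3 above (the variable xi plays the role of the constant ξ; function name ours, illustrative only):

```python
import numpy as np

def project(alpha_t, beta_t, xi=0.99):
    """alpha_t: (n, R), beta_t: (M, R) one-step-gradient iterates."""
    col_norms = np.linalg.norm(beta_t, axis=0)            # ||beta_.r||
    alpha_p = alpha_t * np.sqrt(col_norms)                # Step 1
    bound = np.sqrt(np.log(xi / (1 - xi)))
    row_norms = np.maximum(                               # guard against 0
        np.linalg.norm(alpha_p, axis=1, keepdims=True), 1e-12)
    alpha = alpha_p * np.minimum(bound, row_norms) / row_norms   # Step 2
    beta = beta_t / col_norms                             # Step 3
    return alpha, beta
```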
Second, the sparsity factor s_n is an intrinsic quantity of the multi-layer network data, and it should be estimated from the network directly. Note that the minimal and maximal probabilities for any vertex pair to form an edge in any layer are p_min = (1 − ξ)s_n and p_max = ξs_n, respectively. Interestingly, p_min + p_max = s_n, which no longer depends on ξ. Therefore, we propose to estimate s_n as

ŝ_n = min_{i∈[n]} (1/(nM)) Σ_{m=1}^M Σ_{j=1}^n a_{i,j,m} + max_{i∈[n]} (1/(nM)) Σ_{m=1}^M Σ_{j=1}^n a_{i,j,m},   (7)

which is the sum of the minimal and maximal frequencies of a vertex to form edges with all other vertices across all layers. Third, to optimally choose λ_n, we extend the network cross-validation by edge sampling scheme in [30] to multi-layer networks. The detailed tuning procedure is relegated to Appendix B in the supplementary materials.
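The estimator (7) is a one-liner in numpy; a sketch (function name ours) for an adjacency tensor A of shape (n, n, M):

```python
import numpy as np

def estimate_sn(A):
    # Per-vertex edge frequency: (1/(nM)) sum_m sum_j a_{i,j,m}.
    freq = A.sum(axis=(1, 2)) / (A.shape[0] * A.shape[2])
    return freq.min() + freq.max()          # p_min + p_max = s_n
```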
Let ψ∗ : [n] −→ [K] be the true community assignment function such that ψ∗ =226 argminψminC1,...,CK ∑n i=1 ∥α∗i − Cψi∥2, and then the community detection error of any esti-227 mated community assignment function ψ̂ can be evaluated by the minimum scaled Hamming distance228 between ψ̂ and ψ∗ under permutations, which is defined as229 err(ψ∗, ψ̂) = min π∈SK 1 n n∑ i=1 1{ψ∗i ̸= π(ψ̂i)}, (9) where 1{·} is the indicator function and SK is the symmetric group of degree K. Such a scaled230 or unscaled Hamming distance has become a popular metric in quantifying the performance of231 community detection [21, 22].232 Denote N∗k = {i : ψ∗i = k} be the k-th true underlying community whose cardinality is nk. Let233 C∗ ∈ RK×R be the true underlying community centers of the network embedding with C∗k. =234 1 nk ∑ ψ∗i =k α∗i., and let B ∗ = I×1C∗×2C∗×3 β∗. The following assumptions are made to ensure235 that communities within the multi-layer networks are asymptotically identifiable.236 Assumption A. Assume the difference between any two distinct horizontal slides of B∗ satisfies that237 min k,k′∈[K],k ̸=k′ 1√ KM ∥B∗k,.,. −B ∗ k′,.,.∥F ≥ γn, where γn > 0 may vanish with n.238 Assumption B. Assume the tuning parameter λn satisfies that λnϵns −2 n (log s −1 n ) −1 ≥ c2, for an absolute constant c2 that does not depend on any model parameter.239 Assumption C. Denote nmin = mink∈[K] nk as the minimal community size. Assume γnnmin √ K n ≥ cξ √ ϵn sn , where cξ = 4 √ 2 (1−ξ) √ ξ + c3 √ (1+δ)min{M,R} M and c3 is a constant that depends on ξ only.240 Assumption A is the minimal community separation requirement, and similar assumption has been241 employed in [27] with a constant γn. Together with the condition λnJ(α∗) ≤ ϵn in Proposition 1,242 Assumption B gives a feasible interval for λn. Assumption C allows for unbalanced communities243 with vanishing nmin/n if the network is not too sparse. Note that cξ can be further bounded by244 4 √ 2 (1−ξ) √ ξ + c3 √ 1 + δ, and the first term of cξ will dominate the second term if R = o(M).245 Theorem 2. Suppose all the assumptions in Theorem 1 as well as Assumptions A, B and C are satisfied, it holds true that err(ψ∗, ψ̂) ≤ c2ξnϵn nminKγ2nsn , with probability at least 1− 1n2 − 2 exp ( − φ(n,M)ϵn 156 ξ1−ξ+28 log 2 ) .246 Theorem 2 assures that the community structure in a multi-layer network can be consistently recovered247 by the proposed TLSM. As a theoretical example, we consider a sparse case with sn = (logn)1+τ1 nmin{n,M} ,248 where 0 < τ1 < 1, nmax = O(nmin), 1√n ||α ∗ − Z∗C∗||F ≤ (log n)−3/2, and both γn, R and K249 are of constant orders. With λn = (logn)2+2τ1 nmin{n,M} , Theorems 1 and 2 imply that ϵn = (logn)1+τ2 nmin{n,M} with250 0 < τ2 < τ1 and err(ψ∗, ψ̂) = op(1).251 4 Numerical experiments252 In this section, we evaluate the numerical performance of the proposed TLSM in a variety of synthetic253 as well as real-life multi-layer networks, compare it against four competitors in literature, including254 the mean adjacency spectral embeddings (MASE; 16), least square estimation (LSE; 27), Tucker255 decomposition with HOSVD initialization (HOSVD-Tucker; 22), and spectral kernel (SPECK; 35),256 and conduct some ablation studies. The implementations of LSE and SPECK are available at the257 authors’ personal websites, HOSVD-Tucker is implemented in the routine “tucker" of the Python258 package “tensorly", and TLSM and MASE are implemented in Python by ourselves.259 4.1 Synthetic networks260 The multi-layer network A = (ai,j,m) ∈ {0, 1}n×n×M is generated as follows. 
First, we randomly select K = 4 elements uniformly from {2.5·(b₁, b₂, ..., b_R) : b_r ∈ {−1, 1}, r ∈ [R]} as community centers, denoted c_k, k ∈ [K]. Second, the latent space embedding of vertex i is generated as α_i = c_{ψ_i} + e_i with e_i ∼ N(0_R, 1.5·I_R), and ψ_i ∈ [K] independently drawn from the multinomial distribution Multi(1; (1/K)1_K). Third, we generate β = [β₁, ..., β_M]ᵀ with β_{m,r} being independent standard normal random variables, for m ∈ [M] and r ∈ [R]. We then rescale the column norms of β to be 1 for model identifiability. Finally, we generate A according to the proposed TLSM with s_n = 0.1. For the sake of fair comparison, the embedding dimension R is set as K in all scenarios. We aim to illustrate the community detection performance of all methods as the number of vertices and the number of layers increase. To this end, we consider (n, M) ∈ {200, 400, 600, 800} × {5, 10, 15, 20}. The averaged Hamming errors and their standard errors over 50 independent experiments for all methods are reported in Table 1.

It is evident that TLSM consistently outperforms its competitors, and the performances of LSE and HOSVD-Tucker are better than those of MASE and SPECK. This is expected since TLSM, LSE and HOSVD-Tucker work on the multi-layer network adjacency tensor directly, while MASE and SPECK are matrix aggregation methods that suffer from information loss. Furthermore, as the number of vertices and the number of layers increase, the community detection errors of all methods decrease rapidly. Notably, TLSM and LSE converge faster than the other methods, and attain stable performance even for relatively small n and M. Additional simulation studies for various network sparsity levels and unbalanced community sizes are relegated to Appendix C in the supplementary materials.
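A sketch of this synthetic generator in numpy (function name ours; the sketch samples the sign patterns with replacement, whereas the description above draws K distinct elements):

```python
import numpy as np

def generate_tlsm_network(n, M, K=4, R=4, s_n=0.1, seed=0):
    rng = np.random.default_rng(seed)
    centers = 2.5 * rng.choice([-1.0, 1.0], size=(K, R))  # community centers
    psi = rng.integers(K, size=n)                          # labels ~ Multi(1; 1_K/K)
    alpha = centers[psi] + rng.normal(scale=np.sqrt(1.5), size=(n, R))
    beta = rng.normal(size=(M, R))
    beta /= np.linalg.norm(beta, axis=0)                   # unit column norms
    Theta = np.einsum('ir,jr,mr->ijm', alpha, alpha, beta)
    P = s_n / (1 + np.exp(-Theta))        # inverse of the modified logit (3)
    A = (rng.uniform(size=(n, n, M)) < P).astype(int)
    iu = np.triu_indices(n, k=1)
    A[iu[1], iu[0], :] = A[iu[0], iu[1], :]                # symmetrize layers
    return A, psi
```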
For the sake of fair comparison, the embedding dimension $R$ is set to $K$ in all scenarios. We aim to illustrate the community detection performance of all methods as the numbers of vertices and layers increase; to this end, we consider $(n, M) \in \{200, 400, 600, 800\} \times \{5, 10, 15, 20\}$. The averaged Hamming errors and their standard errors over 50 independent experiments for all methods are reported in Table 1.

It is evident that TLSM consistently outperforms its competitors, and the performances of LSE and HOSVD-Tucker are better than those of MASE and SPECK. This is expected, since TLSM, LSE and HOSVD-Tucker work on the multi-layer network adjacency tensor directly, while MASE and SPECK are matrix aggregation methods that suffer from information loss. Furthermore, as the numbers of vertices and layers increase, the community detection errors of all methods decrease rapidly. Notably, TLSM and LSE converge faster than the other methods, and attain stable performance even for relatively small $n$ and $M$. Additional simulation studies for various levels of network sparsity and unbalanced community sizes are relegated to Appendix C in the supplementary materials.

4.2 Real-life networks

We also apply the proposed TLSM method to analyze three real-life multi-layer networks: a social network in the department of Computer Science at Aarhus University (AUCS) [38], a yeast Saccharomyces cerevisiae gene co-expression (YSCGC) network [44], and a worldwide agriculture trading network (WAT) [10]. Specifically, we conduct community detection on the first two networks, whose vertex community memberships are available, and carry out a link prediction task on the third network, whose vertex community memberships are unavailable.

The AUCS dataset is publicly available at http://multilayer.it.uu.se/datasets.html. It is a $61 \times 61 \times 5$ multi-layer network that records pairwise relationships of 5 types among 61 persons at AUCS: current working relationships, repeated leisure activities, regularly eating lunch together, co-authorship of a publication, and friendship on Facebook. Since 54 persons in the dataset come from 7 research groups and the other 7 persons do not belong to any group, the dataset consists of 8 communities, corresponding to the 7 research groups and an outlier community. Applying TLSM and its competitors to this dataset, the numbers of misclassified vertices by TLSM, LSE, MASE, HOSVD-Tucker and SPECK are 8, 21, 19, 23 and 18, respectively. Clearly, TLSM significantly outperforms its competitors, reducing the community detection error by at least 16.39 percentage points.

The YSCGC dataset is publicly available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC156590/, and contains 205 genes from 4 functional categories: protein metabolism and modification; carbohydrate metabolism and catabolism; nucleobase, nucleoside, nucleotide and nucleic acid metabolism; and transportation. We regard these four functional category labels as the community memberships of the genes. The gene expression responses are measured under 20 systematic perturbations with varying genetic and environmental conditions, in 4 replicated hybridizations. We thus construct a gene co-expression network $\mathcal{A} = (a_{i,j,m}) \in \{0,1\}^{205 \times 205 \times 4}$ based on the similarities of the expression profiles, where each layer represents one replicated hybridization. Specifically, the similarity between genes $i$ and $j$ in the $m$-th replication is measured by $w_{i,j,m} = \exp\big(-\|x^{(m)}_i - x^{(m)}_j\|\big)$, where $x^{(m)}_i \in \mathbb{R}^{20}$ contains the expression levels under the 20 perturbations in the $m$-th replicated hybridization, for $i \in [205]$ and $m \in [4]$. The binary value $a_{i,j,m}$ is obtained by thresholding $w_{i,j,m}$, with the threshold set to the 60% quantile of all elements in $\{w_{i,j,m}: i \leq j \in [205], m \in [4]\}$. Applying TLSM and its competitors to this dataset, the numbers of misclassified vertices by TLSM, LSE, MASE, HOSVD-Tucker and SPECK are 6, 9, 12, 48 and 13, respectively. TLSM again outperforms its competitors on the YSCGC dataset.
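To make this construction explicit, the thresholding step can be sketched as follows; here `X` is a hypothetical array of shape (4, 205, 20) holding the expression levels of the four replicated hybridizations.

```python
import numpy as np

def build_coexpression_tensor(X, q=0.60):
    """Binary co-expression tensor A of shape (n, n, M) from expression
    profiles X of shape (M, n, d): similarity w_ijm = exp(-||x_i - x_j||),
    thresholded at the q-quantile of {w_ijm : i <= j, m in [M]}."""
    M, n, _ = X.shape
    W = np.empty((n, n, M))
    for m in range(M):
        diff = X[m][:, None, :] - X[m][None, :, :]
        W[:, :, m] = np.exp(-np.linalg.norm(diff, axis=2))
    iu = np.triu_indices(n)  # pairs with i <= j, as in the quantile set
    tau = np.quantile(np.stack([W[:, :, m][iu] for m in range(M)]), q)
    return (W > tau).astype(int)

# Hypothetical usage with the YSCGC dimensions.
X = np.random.default_rng(0).normal(size=(4, 205, 20))
A = build_coexpression_tensor(X)
```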
The WAT dataset is publicly available at http://www.fao.org, and includes 364 agriculture product trading relationships among 214 countries in 2010. To process the data, we extract the 130 major countries whose average degrees are greater than 9 from the 32 most densely connected agriculture product trading relations, leading to a $130 \times 130 \times 32$ multi-layer network. Investigating the eigen-structure of the mode-1 matricization of the network adjacency tensor, we identify an elbow point [20] at the 7th largest eigenvalue, suggesting 6 potential communities among the countries, and thus we set $K = 6$. The corresponding eigenvalue plot is attached in Appendix D of the supplementary materials. We then randomly select 80% of the entries of the adjacency tensor as the training set, and conduct link prediction on the remaining 20% of the entries. Specifically, we employ TLSM and the adaptations of its competitors to estimate the network expected tensor $\mathcal{P}$, and generate predictions for the missing entries as independent Bernoulli random variables accordingly. The averaged link prediction accuracies of TLSM, LSE, MASE, HOSVD-Tucker and SPECK over 50 independent replications are 79.60%, 76.66%, 75.96%, 77.78% and 79.08%, respectively, where the link prediction accuracy is defined as the percentage of correctly predicted entries. All five methods are competitive in terms of link prediction, while TLSM still delivers the highest averaged accuracy.

4.3 Ablation studies

In this subsection, we carry out ablation studies on two novel components of the proposed method, namely the sparsity factor $s_n$ and the community-inducing regularizer $J(\alpha)$. To study the effectiveness of $s_n$, we generate a $300 \times 300 \times 5$ multi-layer network with 3 communities and true network sparsity $s_n = 0.3$. The blue curve in the left panel of Figure 1 shows the average Hamming error over 50 independent replications given by the proposed method when employing $\hat{s}_n \in \{0.05i: i \in [20]\}$ in the optimization algorithm, and the red line indicates the averaged Hamming error of the proposed method with $\hat{s}_n$ estimated via the proposed data-adapted estimation scheme. The Hamming error at $\hat{s}_n = 1$ is much larger than that when $\hat{s}_n$ is close to 0.3, showing the advantage of the modified logit transformation with $s_n$ over the standard logit transformation when the network indeed exhibits a sparse pattern. Moreover, the red line lies even below the minimum Hamming error on the blue curve, which further confirms the effectiveness of the proposed data-adapted estimation scheme for estimating $s_n$.
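For reference, the data-adapted scheme used here is the degree-based estimator $\hat{s}_n$ of (7) in Section 2.3, the sum of the minimal and maximal per-vertex edge frequencies; a minimal sketch:

```python
import numpy as np

def estimate_sparsity(A):
    """Data-adapted estimate of s_n from Eq. (7): the sum of the minimal
    and maximal frequencies of a vertex forming edges across all layers."""
    n, _, M = A.shape
    freq = A.sum(axis=(1, 2)) / (n * M)  # per-vertex edge frequency
    return float(freq.min() + freq.max())
```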
To study the effectiveness of the community-inducing regularizer in the proposed objective function, we generate an $n \times n \times 5$ multi-layer network with 2 communities, for $n \in \{50, 100, 200, 400\}$. In the right panel of Figure 1, the black pillars indicate the network estimation error $\frac{1}{n\sqrt{5}}\|\hat{\Theta} - \Theta^*\|_F$ given by the proposed method with $\lambda_n = 0$, which corresponds to the absence of $J(\alpha)$, while the red pillars indicate the counterparts given by the proposed method with $\lambda_n$ selected by network cross-validation. There is a clear improvement when the community-inducing regularizer is enforced in all scenarios, particularly for small $n$. This showcases the helpfulness of the community-inducing regularizer in detecting network community structure.

5 Conclusions

In this paper, we propose a novel tensor-based latent space model for community detection in multi-layer networks. The model embeds vertices into a low-dimensional latent space and views the community structure from a network embedding perspective, so that heterogeneous structures in different network layers can be properly integrated. The proposed model is formulated as a regularization framework, which conducts multi-layer network estimation and community detection simultaneously. The advantages of the proposed method are supported by extensive numerical experiments and theoretical results. In particular, the asymptotic consistencies of the proposed method are established in terms of both multi-layer network estimation and community detection, even for relatively sparse networks.

References

[1] Luiz GA Alves, Giuseppe Mangioni, Isabella Cingolani, Francisco Aparecido Rodrigues, Pietro Panzarasa, and Yamir Moreno. The nested structural organization of the worldwide trade multi-layer network. Scientific Reports, 9(1):1–14, 2019.
[2] Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E Priebe, and Joshua T Vogelstein. Inference for multiple heterogeneous networks with a common invariant subspace. Journal of Machine Learning Research, 22(142):1–49, 2021.
[3] Avanti Athreya, Donniell E Fishkind, Minh Tang, Carey E Priebe, Youngser Park, Joshua T Vogelstein, Keith Levin, Vince Lyzinski, and Yichen Qin. Statistical inference on random dot product graphs: a survey. The Journal of Machine Learning Research, 18(1):8393–8484, 2017.
[4] Matteo Barigozzi, Giorgio Fagiolo, and Giuseppe Mangioni. Identifying the community structure of the international-trade multi-network. Physica A: Statistical Mechanics and its Applications, 390(11):2051–2066, 2011.
[5] Michele Berlingerio, Fabio Pinelli, and Francesco Calabrese. Abacus: frequent pattern mining-based community discovery in multidimensional networks. Data Mining and Knowledge Discovery, 27(3):294–320, 2013.
[6] Sharmodeep Bhattacharyya and Shirshendu Chatterjee. Spectral clustering for multiple sparse networks: I. arXiv preprint arXiv:1805.10594, 2018.
[7] Han Chen, Garvesh Raskutti, and Ming Yuan. Non-convex projected gradient descent for generalized low-rank tensor regression. Journal of Machine Learning Research, 20:1–37, 2019.
[8] Zitai Chen, Chuan Chen, Zibin Zheng, and Yi Zhu. Tensor decomposition for multilayer networks clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3371–3378, 2019.
[9] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. Journal of Machine Learning Research, 21(214):1–58, 2020.
[10] Manlio De Domenico, Vincenzo Nicosia, Alexandre Arenas, and Vito Latora. Structural reducibility of multilayer networks. Nature Communications, 6(1):1–9, 2015.
[11] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. On the best rank-1 and rank-(r1, r2, ..., rn) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
[12] Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, and Nikolai Nefedov. Clustering with multi-layer graphs: A spectral perspective. IEEE Transactions on Signal Processing, 60(11):5820–5831, 2012.
[13] Junxian Geng, Anirban Bhattacharya, and Debdeep Pati. Probabilistic community detection with unknown number of communities. Journal of the American Statistical Association, 114(526):893–905, 2019.
[14] Mahsa Ghorbani, Mahdieh Soleymani Baghshah, and Hamid R Rabiee. MGCN: semi-supervised classification in multi-layer graphs with graph convolutional networks. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 208–211, 2019.
[15] Derek Greene and Pádraig Cunningham. Producing a unified graph representation from multiple social network views. In Proceedings of the 5th Annual ACM Web Science Conference, pages 118–121, 2013.
[16] Qiuyi Han, Kevin Xu, and Edoardo Airoldi. Consistent estimation of dynamic and multi-layer block models. In International Conference on Machine Learning, pages 1511–1520. PMLR, 2015.
[17] Xin He, Qiong Liu, and You Yang. MV-GNN: Multi-view graph neural network for compression artifacts reduction. IEEE Transactions on Image Processing, 29:6829–6840, 2020.
[18] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[19] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[20] Pengsheng Ji and Jiashun Jin. Coauthorship and citation networks for statisticians. The Annals of Applied Statistics, 10(4):1779–1812, 2016.
[21] Jiashun Jin. Fast community detection by SCORE. The Annals of Statistics, 43(1):57–89, 2015.
[22] Bing-Yi Jing, Ting Li, Zhongyuan Lyu, and Dong Xia. Community detection on mixture multilayer networks via regularized tensor decomposition. The Annals of Statistics, 49(6):3181–3205, 2021.
[23] Muhammad Raza Khan and Joshua E Blumenstock. Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 606–613, 2019.
[24] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
[25] Joseph B Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977.
[26] Jing Lei. Tail bounds for matrix quadratic forms and bias adjusted spectral clustering in multi-layer stochastic block models. arXiv preprint arXiv:2003.08222, 2020.
[27] Jing Lei, Kehui Chen, and Brian Lynch. Consistent community detection in multi-layer network data. Biometrika, 107(1):61–73, 2020.
[28] Jing Lei and Alessandro Rinaldo. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2015.
[29] Dong Li, Zhisong Pan, Guyu Hu, Graham Anderson, and Shan He. Active module identification from multilayer weighted gene co-expression networks: a continuous optimization approach. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
[30] Tianxi Li, Elizaveta Levina, and Ji Zhu. Network cross-validation by edge sampling. Biometrika, 107(2):257–276, 2020.
[31] Xueming Liu, Enrico Maiorino, Arda Halu, Kimberly Glass, Rashmi B Prasad, Joseph Loscalzo, Jianxi Gao, and Amitabh Sharma. Robustness and lethality in multilayer biological molecular networks. Nature Communications, 11(1):1–12, 2020.
[32] Zhongyuan Lyu, Dong Xia, and Yuan Zhang. Latent space model for higher-order networks and generalized tensor decomposition. arXiv preprint arXiv:2106.16042, 2021.
[33] Zhuang Ma, Zongming Ma, and Hongsong Yuan. Universal latent space model fitting for large networks with edge covariates. Journal of Machine Learning Research, 21(4):1–67, 2020.
[34] Subhadeep Paul and Yuguo Chen. Consistent community detection in multi-relational data through restricted multi-layer stochastic blockmodel. Electronic Journal of Statistics, 10(2):3807–3870, 2016.
[35] Subhadeep Paul and Yuguo Chen. Spectral and matrix factorization methods for consistent community detection in multi-layer networks. The Annals of Statistics, 48(1):230–250, 2020.
[36] Subhadeep Paul and Yuguo Chen. Null models and community detection in multi-layer networks. Sankhya A, pages 1–55, 2021.
[37] Zhuo-Ming Ren, An Zeng, and Yi-Cheng Zhang. Bridging nestedness and economic complexity in multilayer world trade networks. Humanities and Social Sciences Communications, 7(1):1–8, 2020.
[38] Luca Rossi and Matteo Magnani. Towards effective visual analytics on multiplex and multilayer networks. Chaos, Solitons & Fractals, 72:68–76, 2015.
[39] Uday Shankar Shanthamallu, Jayaraman J Thiagarajan, Huan Song, and Andreas Spanias. GrAMME: Semisupervised learning using multilayered graph attention models. IEEE Transactions on Neural Networks and Learning Systems, 31(10):3977–3988, 2019.
[40] Nicholas D Sidiropoulos and Rasmus Bro. On the uniqueness of multilinear decomposition of N-way arrays. Journal of Chemometrics, 14(3):229–239, 2000.
[41] Wei Tang, Zhengdong Lu, and Inderjit S Dhillon. Clustering with multiple graphs. In 2009 Ninth IEEE International Conference on Data Mining, pages 1016–1021. IEEE, 2009.
[42] Edwin JCG Van Den Oord and Ronan Van Rossem. Differences in first graders' school adjustment: The role of classroom characteristics and social structure of the group. Journal of School Psychology, 40(5):371–394, 2002.
[43] James D Wilson, John Palowitch, Shankar Bhamidi, and Andrew B Nobel. Community extraction in multilayer networks with heterogeneous community structure. The Journal of Machine Learning Research, 18(1):5458–5506, 2017.
[44] Ka Yee Yeung, Mario Medvedovic, and Roger E Bumgarner. Clustering gene-expression data with repeated measurements. Genome Biology, 4(5):1–17, 2003.
[45] Yubai Yuan and Annie Qu. Community detection with dependent connectivity. The Annals of Statistics, 49(4):2378–2428, 2021.
[46] Jingfei Zhang, Will Wei Sun, and Lexin Li. Network response regression for modeling population of networks with covariates. arXiv preprint arXiv:1810.03192, 2018.
[47] Xuefei Zhang, Songkai Xue, and Ji Zhu. A flexible latent space model for multilayer networks. In International Conference on Machine Learning, pages 11288–11297. PMLR, 2020.
[48] Yunpeng Zhao, Elizaveta Levina, and Ji Zhu. Consistency of community detection in networks under degree-corrected stochastic block models. The Annals of Statistics, 40(4):2266–2292, 2012.
[49] Wei Zheng, Dingjie Wang, and Xiufen Zou. Control of multilayer biological networks and applied to target identification of complex diseases. BMC Bioinformatics, 20(1):1–12, 2019.

Checklist

1. For all authors...
   (a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See the abstract and the third paragraph of the introduction.
   (b) Did you describe the limitations of your work? [Yes] The optimization algorithm can only be guaranteed to converge to a stationary point.
   (c) Did you discuss any potential negative societal impacts of your work? [No] There should be no negative societal impacts.
   (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Section 3.
   (b) Did you include complete proofs of all theoretical results? [Yes] All technical proofs are provided in Appendix E of the supplementary materials.
3. If you ran experiments...
   (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The URLs for the data are included in Section 4.2, and code with instructions is included in the supplementary materials.
   (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 2.3 and Appendix B in the supplementary materials.
   (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] We show the standard errors in Table 1 and 95% confidence intervals of additional simulation studies in Appendix C in the supplementary materials.
   (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   (a) If your work uses existing assets, did you cite the creators? [Yes] We used publicly available datasets and cite the creators.
   (b) Did you mention the license of the assets? [Yes] All datasets we used are publicly available.
   (c) Did you include any new assets either in the supplemental material or as a URL? [No]
   (d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [No]
   (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] The data we used does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
   (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper on multi-layer network analysis?
2. What are the strengths of the proposed approach, particularly in terms of its flexibility, generality, and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the novelty of the idea and the performance in real-life experiments?
4. Do you have any concerns or questions about the model's time complexity?
5. What are the limitations and potential negative societal impacts of the proposed approach that the authors did not address?
Summary Of The Paper

This paper proposes a generative tensor-based latent space model, dubbed TLSM, which generates node embeddings preserving the community structure of the given multi-layer networks. Instead of directly using the log-likelihood function to encode the heterogeneous structure of the multi-layer networks, the authors also introduce a clustering-type penalty to simultaneously embed the community information between nodes. Projected gradient descent (PGD) is used to optimize the model parameters, and the asymptotic consistencies of the proposed method are also analyzed.

Strengths And Weaknesses

Strengths: Firstly, the proposed model is flexible and general, with many popular network models included. Secondly, the designed regularized likelihood framework enables the proposed model to estimate the multi-layer network and conduct community detection simultaneously. Thirdly, the authors establish a theoretical analysis of the proposed model's asymptotic consistency in terms of both multi-layer network estimation and community detection.

Weaknesses: Firstly, although the proposed framework is flexible and general, the idea of using tensor decomposition is not very novel, and the regularized likelihood design is only incremental. Secondly, I think the PGD algorithm does not guarantee convergence to the global minimum, and thus we may need to select the initial point carefully. Finally, in the real-life experiments, the proposed model does not perform much better than the benchmark approaches.

Questions

I am a bit concerned about the proposed model's time complexity. Could the authors provide more details about the model complexity?

Limitations

The authors did not address the limitations and potential negative societal impact.
Let ψ∗ : [n] −→ [K] be the true community assignment function such that ψ∗ =226 argminψminC1,...,CK ∑n i=1 ∥α∗i − Cψi∥2, and then the community detection error of any esti-227 mated community assignment function ψ̂ can be evaluated by the minimum scaled Hamming distance228 between ψ̂ and ψ∗ under permutations, which is defined as229 err(ψ∗, ψ̂) = min π∈SK 1 n n∑ i=1 1{ψ∗i ̸= π(ψ̂i)}, (9) where 1{·} is the indicator function and SK is the symmetric group of degree K. Such a scaled230 or unscaled Hamming distance has become a popular metric in quantifying the performance of231 community detection [21, 22].232 Denote N∗k = {i : ψ∗i = k} be the k-th true underlying community whose cardinality is nk. Let233 C∗ ∈ RK×R be the true underlying community centers of the network embedding with C∗k. =234 1 nk ∑ ψ∗i =k α∗i., and let B ∗ = I×1C∗×2C∗×3 β∗. The following assumptions are made to ensure235 that communities within the multi-layer networks are asymptotically identifiable.236 Assumption A. Assume the difference between any two distinct horizontal slides of B∗ satisfies that237 min k,k′∈[K],k ̸=k′ 1√ KM ∥B∗k,.,. −B ∗ k′,.,.∥F ≥ γn, where γn > 0 may vanish with n.238 Assumption B. Assume the tuning parameter λn satisfies that λnϵns −2 n (log s −1 n ) −1 ≥ c2, for an absolute constant c2 that does not depend on any model parameter.239 Assumption C. Denote nmin = mink∈[K] nk as the minimal community size. Assume γnnmin √ K n ≥ cξ √ ϵn sn , where cξ = 4 √ 2 (1−ξ) √ ξ + c3 √ (1+δ)min{M,R} M and c3 is a constant that depends on ξ only.240 Assumption A is the minimal community separation requirement, and similar assumption has been241 employed in [27] with a constant γn. Together with the condition λnJ(α∗) ≤ ϵn in Proposition 1,242 Assumption B gives a feasible interval for λn. Assumption C allows for unbalanced communities243 with vanishing nmin/n if the network is not too sparse. Note that cξ can be further bounded by244 4 √ 2 (1−ξ) √ ξ + c3 √ 1 + δ, and the first term of cξ will dominate the second term if R = o(M).245 Theorem 2. Suppose all the assumptions in Theorem 1 as well as Assumptions A, B and C are satisfied, it holds true that err(ψ∗, ψ̂) ≤ c2ξnϵn nminKγ2nsn , with probability at least 1− 1n2 − 2 exp ( − φ(n,M)ϵn 156 ξ1−ξ+28 log 2 ) .246 Theorem 2 assures that the community structure in a multi-layer network can be consistently recovered247 by the proposed TLSM. As a theoretical example, we consider a sparse case with sn = (logn)1+τ1 nmin{n,M} ,248 where 0 < τ1 < 1, nmax = O(nmin), 1√n ||α ∗ − Z∗C∗||F ≤ (log n)−3/2, and both γn, R and K249 are of constant orders. With λn = (logn)2+2τ1 nmin{n,M} , Theorems 1 and 2 imply that ϵn = (logn)1+τ2 nmin{n,M} with250 0 < τ2 < τ1 and err(ψ∗, ψ̂) = op(1).251 4 Numerical experiments252 In this section, we evaluate the numerical performance of the proposed TLSM in a variety of synthetic253 as well as real-life multi-layer networks, compare it against four competitors in literature, including254 the mean adjacency spectral embeddings (MASE; 16), least square estimation (LSE; 27), Tucker255 decomposition with HOSVD initialization (HOSVD-Tucker; 22), and spectral kernel (SPECK; 35),256 and conduct some ablation studies. The implementations of LSE and SPECK are available at the257 authors’ personal websites, HOSVD-Tucker is implemented in the routine “tucker" of the Python258 package “tensorly", and TLSM and MASE are implemented in Python by ourselves.259 4.1 Synthetic networks260 The multi-layer network A = (ai,j,m) ∈ {0, 1}n×n×M is generated as follows. 
First, we randomly261 selectK = 4 elements uniformly from {2.5∗(b1, b2, . . . , bR) : br ∈ {−1, 1}, r ∈ [R]} as community262 centers, which are denoted as ck, k ∈ [K]. Second, the latent space embedding of vertex i is263 generated as αi = cψi + ei with ei ∼ N(0R, 1.5 ∗ IR), and ψi ∈ [K] are independently drawn264 from the multinomial distribution Multi(1; 1K1K). Third, we generate β = [β1, . . . ,βM ] T with265 βm,r being independent standard normal random varibeles, for m ∈ [M ] and r ∈ [R]. We then266 rescale the column norms of β to be 1 for model identifiability. Finally, we generate A according267 to the proposed TLSM with sn = 0.1. For the sake of fair comparisons, the embedding dimension268 R is set as K in all scenarios. We aim to illustrate the community detection performance of269 all methods as the number of vertices and number of layers increase. To this end, we consider270 (n,M) ∈ {200, 400, 600, 800} × {5, 10, 15, 20}. The averaged hamming errors and their standard271 errors over 50 independent experiments of all methods are reported in Table 1.272 It is evident that TLSM consistently outperforms its competitors, and the performances of LSE273 and HOSVD-Tucker are better than those of MASE and SPECK. This is expected since TLSM,274 LSE and HOSVD-Tucker work on the multi-layer network adjacency tensor directly, while MASE275 and SPECK are matrix aggregation methods that suffer form information loss. Furthermore, as the276 number of vertices and number of layers increase, the community detection errors of all methods277 decrease rapidly. Notably, TLSM and LSE converge faster than the other methods, and attain stable278 performance even for relatively small n and M . Additional simulation studies for various network279 sparsity and unbalanced community sizes are relegated to Appendix C in the supplementary materials.280 4.2 Real-life networks281 We also apply the proposed TLSM method to analyze three real-life multi-layer networks, including282 a social network in the department of Computer Science at Aarhus University (AUCS) [38], a yeast283 Saccharomyces cerevisiae gene co-expression (YSCGC) network [44], and a worldwide agriculture284 trading network (WAT) [10]. Specifically, we conduct community detection on the first two networks285 whose vertex community memberships are available, and carry out a link prediction task on the third286 network whose vertex community memberships are unavailable.287 The AUCS dataset is publicly available at http://multilayer.it.uu.se/datasets.html, and288 it is a 61 × 61 × 5 multi-layer network that records pairwise relationships of 5 types among 61289 persons in AUCS, including current working relationships, repeated leisure activities, regularly eating290 lunch together, co-authorship of a publication, and friendship on Facebook. Since 54 persons in291 the dataset come from 7 research groups and the other 7 persons do not belong to any group, the292 dataset consists of 8 communities corresponding to 7 research groups and an outlier community.293 Applying TLSM and its competitors to the dataset, the number of misclassified vertices by TLSM,294 LSE, MASE, HOSVD-Tucker and SPECK, are 8, 21, 19, 23, 18, respectively. 
Clearly, TLSM295 significantly outperforms its competitors by at least reducing 16.39% of community detection error.296 The YSCGC dataset is publicly available at https://www.ncbi.nlm.nih.gov/pmc/articles/297 PMC156590/, and contains 205 genes of 4 functional categories, including protein metabolism298 and modification, carbohydrate metabolism and catabolism, nucleobase, nucleoside, nucleotide299 and nucleic acide metabolism, as well as transportation. We regard these four functional category300 labels as the community memberships of the genes. Further, the gene expression responses are301 measured by 20 systematic perturbations with varying genetic and environmental conditions in302 4 replicated hybridizations. We thus constructed a gene co-expression network A = (ai,j,m) ∈303 R205×205×4 based on the similarities of their expressions, where each layer represents one replicated304 hybridization. Specifically, the similarity between genes i and j in the m-th replication is measured305 by wi,j,m = exp ( − ∥x(m)i − x (m) j ∥ ) , where x(m)i ∈ R20 contains the expression levels of 20306 perturbations in the m-th replicated hybridization for i ∈ [205] and m ∈ [4]. The binary value ai,j,m307 is obtained by thresholding wi,j,m with the thresholding value being the 60% quantile of all elements308 in {wi,j,m : i ≤ j ∈ [205],m ∈ [4]}. Applying TLSM and its competitors to this dataset, the number309 of misclassified vertices by TLSM, LSE, MASE, HOSVD-Tucker and SPECK, are 6, 9, 12, 48, 13,310 respectively. TLSM again outperforms its competitors in this YSCGC dataset.311 The WAT dataset is publicly available at http://www.fao.org, and includes 364 agriculture312 product trading relationships among 214 countries in 2010. To process the data, we extract 130 major313 countries whose average degrees are greater than 9 from the 32 densest connected agriculture product314 trading relations, leading to a 130× 130× 32 multi-layer network. Investigating the eigen-structure315 of the mode-1 matricization of the network adjacency tensor, we identify an elbow point [20] at the316 7th largest eigen-value, suggesting there are 6 potential communities among the countries, and thus317 we set K = 6. The corresponding eigen-value plot is attached in Appendex D of the supplementary318 materials. We then randomly selected 80% of the entries of the adjacency tensor as the training set,319 and conduct link prediction on the remaining 20% of the entries. Specifically, we employ TLSM320 and the adaptations of its competitors to estimate the network expected tensor P and generate321 estimations for the missing entries by independent Bernoulli random variables accordingly. The322 averaged link prediction accuracy of TLSM, LSE, MASE, HOSVD-Tucker and SPECK over 50323 independent replications are 79.60%, 76.66%, 75.96%, 77.78% and 79.08%, respectively, where the324 link prediction accuracy is defined as the percentile of the correctly predicted entries. Clearly, all 5325 methods are comparative in terms of link prediction, while TLSM still deliver highest averaged link326 prediction accuracy.327 4.3 Ablation studies328 In this subsection, we carry out some ablation studies on two novel components of the proposed329 method, namely the sparsity factor sn and the community-inducing regularizer J(α). To study the330 effectiveness of sn, we generate a 300× 300× 5 multi-layer network with 3 communities and the331 true network sparsity sn = 0.3. 
The blue curve in the left panel of Figure 1 shows the average332 Hamming error of 50 independent replications given by the proposed method when employing333 ŝn ∈ {0.05i : i ∈ [20]} in the optimization algorithm, and the red line indicates the averaged334 Hamming error of the proposed method with ŝn estimated via the proposed data-adapted estimation335 scheme. It is clear that the Hamming error at sn = 1 is much larger than that when sn is close336 to 0.3, showing the advantages of the modified logit transformation by sn over the standard logit337 transformation when the network indeed reveals sparse pattern. Moreover, we observe that the red338 line is even lower than the minimum Hamming error in the blue curve. This further confirms the339 effectiveness of the proposed data-adapted estimation scheme for estimating sn. To study the effectiveness of the community-inducing regularizer in the proposed objective function,341 we generate an n× n× 5 multi-layer network with 2 communities, for n ∈ {50, 100, 200, 400}. In342 the right panel of Figure 1, the black pillars indicate the network estimation error 1 n √ 5 ∥Θ̂−Θ∗∥F343 given by the proposed method with λn = 0 which corresponds to the absence of J(α), while the344 red ones indicate the counterparts given by the proposed method with λn is selected by network345 cross-validation. There is a clear improvement when the community-inducing regularizer is enforced346 in all scenarios, particularly for small n. This showcases the helpfulness of the community-inducing347 regularizer in detecting network community structure.348 5 Conclusions349 In this paper, we propose a novel tensor-based latent space model for community detection in350 multi-layer networks. The model embeds vertices into a low-dimensional latent space and views351 the community structure from an network embedding perspective, so that heterogeneous structures352 in different network layers can be properly integrated. The proposed model is formulated as a353 regularization framework, which conducts multi-layer network estimation and community detection354 simultaneously. The advantages of the proposed method are supported by extensive numerical355 experiments and theoretical results. Particularly, the asymptotic consistencies of the proposed method356 are established in terms of both multi-layer network estimation and community detection, even for357 relatively sparse networks.358 References359 [1] Luiz GA Alves, Giuseppe Mangioni, Isabella Cingolani, Francisco Aparecido Rodrigues, Pietro360 Panzarasa, and Yamir Moreno. The nested structural organization of the worldwide trade361 multi-layer network. Scientific reports, 9(1):1–14, 2019.362 [2] Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E Priebe, and Joshua T363 Vogelstein. Inference for multiple heterogeneous networks with a common invariant subspace.364 Journal of Machine Learning Research, 22(142):1–49, 2021.365 [3] Avanti Athreya, Donniell E Fishkind, Minh Tang, Carey E Priebe, Youngser Park, Joshua T366 Vogelstein, Keith Levin, Vince Lyzinski, and Yichen Qin. Statistical inference on random dot367 product graphs: a survey. The Journal of Machine Learning Research, 18(1):8393–8484, 2017.368 [4] Matteo Barigozzi, Giorgio Fagiolo, and Giuseppe Mangioni. Identifying the community369 structure of the international-trade multi-network. Physica A: statistical mechanics and its370 applications, 390(11):2051–2066, 2011.371 [5] Michele Berlingerio, Fabio Pinelli, and Francesco Calabrese. 
Abacus: frequent pattern mining-based community discovery in multidimensional networks. Data Mining and Knowledge Discovery, 27(3):294–320, 2013.
[6] Sharmodeep Bhattacharyya and Shirshendu Chatterjee. Spectral clustering for multiple sparse networks: I. arXiv preprint arXiv:1805.10594, 2018.
[7] Han Chen, Garvesh Raskutti, and Ming Yuan. Non-convex projected gradient descent for generalized low-rank tensor regression. Journal of Machine Learning Research, 20:1–37, 2019.
[8] Zitai Chen, Chuan Chen, Zibin Zheng, and Yi Zhu. Tensor decomposition for multilayer networks clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3371–3378, 2019.
[9] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. Journal of Machine Learning Research, 21(214):1–58, 2020.
[10] Manlio De Domenico, Vincenzo Nicosia, Alexandre Arenas, and Vito Latora. Structural reducibility of multilayer networks. Nature Communications, 6(1):1–9, 2015.
[11] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. On the best rank-1 and rank-(r1, r2, ..., rn) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
[12] Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, and Nikolai Nefedov. Clustering with multi-layer graphs: A spectral perspective. IEEE Transactions on Signal Processing, 60(11):5820–5831, 2012.
[13] Junxian Geng, Anirban Bhattacharya, and Debdeep Pati. Probabilistic community detection with unknown number of communities. Journal of the American Statistical Association, 114(526):893–905, 2019.
[14] Mahsa Ghorbani, Mahdieh Soleymani Baghshah, and Hamid R Rabiee. MGCN: semi-supervised classification in multi-layer graphs with graph convolutional networks. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 208–211, 2019.
[15] Derek Greene and Pádraig Cunningham. Producing a unified graph representation from multiple social network views. In Proceedings of the 5th Annual ACM Web Science Conference, pages 118–121, 2013.
[16] Qiuyi Han, Kevin Xu, and Edoardo Airoldi. Consistent estimation of dynamic and multi-layer block models. In International Conference on Machine Learning, pages 1511–1520. PMLR, 2015.
[17] Xin He, Qiong Liu, and You Yang. MV-GNN: Multi-view graph neural network for compression artifacts reduction. IEEE Transactions on Image Processing, 29:6829–6840, 2020.
[18] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[19] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[20] Pengsheng Ji and Jiashun Jin. Coauthorship and citation networks for statisticians. The Annals of Applied Statistics, 10(4):1779–1812, 2016.
[21] Jiashun Jin. Fast community detection by SCORE. The Annals of Statistics, 43(1):57–89, 2015.
[22] Bing-Yi Jing, Ting Li, Zhongyuan Lyu, and Dong Xia. Community detection on mixture multilayer networks via regularized tensor decomposition. The Annals of Statistics, 49(6):3181–3205, 2021.
[23] Muhammad Raza Khan and Joshua E Blumenstock.
Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 606–613, 2019.
[24] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
[25] Joseph B Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977.
[26] Jing Lei. Tail bounds for matrix quadratic forms and bias adjusted spectral clustering in multi-layer stochastic block models. arXiv preprint arXiv:2003.08222, 2020.
[27] Jing Lei, Kehui Chen, and Brian Lynch. Consistent community detection in multi-layer network data. Biometrika, 107(1):61–73, 2020.
[28] Jing Lei and Alessandro Rinaldo. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2015.
[29] Dong Li, Zhisong Pan, Guyu Hu, Graham Anderson, and Shan He. Active module identification from multilayer weighted gene co-expression networks: a continuous optimization approach. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
[30] Tianxi Li, Elizaveta Levina, and Ji Zhu. Network cross-validation by edge sampling. Biometrika, 107(2):257–276, 2020.
[31] Xueming Liu, Enrico Maiorino, Arda Halu, Kimberly Glass, Rashmi B Prasad, Joseph Loscalzo, Jianxi Gao, and Amitabh Sharma. Robustness and lethality in multilayer biological molecular networks. Nature Communications, 11(1):1–12, 2020.
[32] Zhongyuan Lyu, Dong Xia, and Yuan Zhang. Latent space model for higher-order networks and generalized tensor decomposition. arXiv preprint arXiv:2106.16042, 2021.
[33] Zhuang Ma, Zongming Ma, and Hongsong Yuan. Universal latent space model fitting for large networks with edge covariates. Journal of Machine Learning Research, 21(4):1–67, 2020.
[34] Subhadeep Paul and Yuguo Chen. Consistent community detection in multi-relational data through restricted multi-layer stochastic blockmodel. Electronic Journal of Statistics, 10(2):3807–3870, 2016.
[35] Subhadeep Paul and Yuguo Chen. Spectral and matrix factorization methods for consistent community detection in multi-layer networks. The Annals of Statistics, 48(1):230–250, 2020.
[36] Subhadeep Paul and Yuguo Chen. Null models and community detection in multi-layer networks. Sankhya A, pages 1–55, 2021.
[37] Zhuo-Ming Ren, An Zeng, and Yi-Cheng Zhang. Bridging nestedness and economic complexity in multilayer world trade networks. Humanities and Social Sciences Communications, 7(1):1–8, 2020.
[38] Luca Rossi and Matteo Magnani. Towards effective visual analytics on multiplex and multilayer networks. Chaos, Solitons & Fractals, 72:68–76, 2015.
[39] Uday Shankar Shanthamallu, Jayaraman J Thiagarajan, Huan Song, and Andreas Spanias. GrAMME: Semisupervised learning using multilayered graph attention models. IEEE Transactions on Neural Networks and Learning Systems, 31(10):3977–3988, 2019.
[40] Nicholas D Sidiropoulos and Rasmus Bro. On the uniqueness of multilinear decomposition of N-way arrays. Journal of Chemometrics, 14(3):229–239, 2000.
[41] Wei Tang, Zhengdong Lu, and Inderjit S Dhillon. Clustering with multiple graphs. In 2009 Ninth IEEE International Conference on Data Mining, pages 1016–1021.
IEEE, 2009.
[42] Edwin JCG Van Den Oord and Ronan Van Rossem. Differences in first graders' school adjustment: The role of classroom characteristics and social structure of the group. Journal of School Psychology, 40(5):371–394, 2002.
[43] James D Wilson, John Palowitch, Shankar Bhamidi, and Andrew B Nobel. Community extraction in multilayer networks with heterogeneous community structure. Journal of Machine Learning Research, 18(1):5458–5506, 2017.
[44] Ka Yee Yeung, Mario Medvedovic, and Roger E Bumgarner. Clustering gene-expression data with repeated measurements. Genome Biology, 4(5):1–17, 2003.
[45] Yubai Yuan and Annie Qu. Community detection with dependent connectivity. The Annals of Statistics, 49(4):2378–2428, 2021.
[46] Jingfei Zhang, Will Wei Sun, and Lexin Li. Network response regression for modeling population of networks with covariates. arXiv preprint arXiv:1810.03192, 2018.
[47] Xuefei Zhang, Songkai Xue, and Ji Zhu. A flexible latent space model for multilayer networks. In International Conference on Machine Learning, pages 11288–11297. PMLR, 2020.
[48] Yunpeng Zhao, Elizaveta Levina, and Ji Zhu. Consistency of community detection in networks under degree-corrected stochastic block models. The Annals of Statistics, 40(4):2266–2292, 2012.
[49] Wei Zheng, Dingjie Wang, and Xiufen Zou. Control of multilayer biological networks and applied to target identification of complex diseases. BMC Bioinformatics, 20(1):1–12, 2019.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See the abstract and the third paragraph of the introduction.
(b) Did you describe the limitations of your work? [Yes] The optimization algorithm can only be guaranteed to converge to a stationary point.
(c) Did you discuss any potential negative societal impacts of your work? [No] There should be no negative societal impacts.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See Section 3.
(b) Did you include complete proofs of all theoretical results? [Yes] All technical proofs are provided in Appendix E of the supplementary materials.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The URLs for data are included in Section 4.2, and code with instructions is included in the supplementary materials.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 2.3 and Appendix B in the supplementary materials.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] We show the standard errors in Table 1 and 95% confidence intervals of additional simulation studies in Appendix C in the supplementary materials.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators?
[Yes] We used publicly available datasets and cite the creators.
(b) Did you mention the license of the assets? [Yes] All datasets we used are publicly available.
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [No]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] The data we used do not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus and contribution of the paper regarding multilayer graph learning?
2. What are the strengths of the proposed approach, particularly in terms of its consistency analysis?
3. What are the weaknesses of the paper, especially regarding its empirical analysis and related work?
4. Do you have any concerns about the assumptions made in the consistency analysis?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper deals with the problem of learning node embeddings of multilayer graphs, with applications mainly in community detection and link prediction. A new model is introduced, namely TLSM, which is based on a flexible tensor decomposition framework. Specifically, the methodology is based on a tensor latent space model that satisfies a set of interesting properties: it allows nodes to get different embeddings even if they belong to the same community; it satisfies some key identifiability properties; and finally, it is capable of handling sparse networks through a modified logit transformation. The different claims made in the paper are supported either with theoretical arguments or empirically.
Strengths And Weaknesses
Strengths: The paper addresses an important problem in graph machine learning, with many practical applications. I found particularly interesting the fact that the paper comes with a consistency analysis regarding the proposed methodology. The paper is also well-written, and most of the arguments made are clearly presented. I really enjoyed reading it.
Weaknesses: Missing related work. Despite the fact that the multi-layer community detection literature is not as rich as in the case of single-layer graphs, there are still plenty of methodologies that follow different ideas. The paper mentions a few of them, but the literature review could be further expanded. It would be interesting to also consider some of these models in the empirical analysis. I will just mention the article by Mercado et al. entitled "The Power Mean Laplacian for Multilayer Graph Clustering" (AISTATS '18), and possibly some of the references within this article.
I enjoyed reading the part related to the consistency analysis. However, it is not clear to me how strong these assumptions are from a practical viewpoint. How will the model empirically behave if some of these assumptions are violated?
My main concerns about the paper are related to the empirical analysis. Why is the embedding dimension set to K? The scale of the datasets used is quite small. Is there any particular reason for this choice? There is no discussion about the time complexity of the model or even the empirical running time; I would suggest that the authors discuss this point. Selection of baseline models: as I also mentioned above, I found that the selected models do not cover different methodological ideas on clustering multilayer graphs. Lastly, although the paper is about learning embeddings of multilayer graphs, most of the discussion and analysis concerns the task of community detection. There is one experiment on link prediction, but I believe this is quite limited. Did the authors think of having a more generic experimental framework that would more extensively cover the tasks of link prediction and, possibly, node classification?
Typos:
Line 58: yet
Line 73: Greek letters
Line 123: comes
Questions
The different questions that I would like to ask the authors are provided in the list of weaknesses of the paper.
Limitations
N/A
NIPS
Title
Structure-Preserving Embedding of Multi-layer Networks

Abstract
This paper investigates structure-preserving embedding for multi-layer networks with community structure. We propose a novel generative tensor-based latent space model (TLSM) that allows heterogeneity among vertices. It embeds vertices into a low-dimensional latent space so that vertices within the same community are close to each other in the ambient space, and captures layer heterogeneity through a layer-effect factor matrix. With a general and flexible tensor decomposition of the expected network adjacency tensor, TLSM is dedicated to preserving the original vertex relations and layer-specific effects in the network embedding. An efficient alternating updating scheme is developed to estimate the model parameters and conduct community detection simultaneously. Theoretically, we establish the asymptotic consistencies of TLSM in terms of both multi-layer network estimation and community detection. The theoretical results are supported by extensive numerical experiments on both synthetic and real-life multi-layer networks.

1 Introduction

Network has arisen as one of the most common structures to represent relations among entities. In many complex systems, entities can be multi-relational in that they may interact with each other under various circumstances. A multi-layer network, which consists of a common vertex set across all network layers representing the entities and an edge set at each layer characterizing a particular type of relation among entities, faithfully represents these relations. Examples of multi-layer networks include social networks with multiple interaction channels [42, 15], biological networks of different collaboration schemes [49, 31, 29], and world trading networks of various goods [1, 37].
In this paper, we propose a structure-preserving embedding framework for multi-layer networks via a tensor-based latent space model. Specifically, TLSM utilizes the factorization of the network adjacency tensor as a building block, embeds the vertices into a low-dimensional latent space, and captures the heterogeneity among different layers through a layer-effect factor matrix.
Consequently, the community structure of the multi-layer network can be detected from a network embedding perspective, such that vertices within the same community are closer to one another in the ambient space than those in different communities. In addition, one key feature of TLSM is that it introduces a sparsity factor into the vanilla logit transformation of the network adjacency tensor, which allows TLSM to model sparse multi-layer networks in a more explicit fashion and to accommodate multi-layer networks as sparse as the ones considered in the literature [22]. More importantly, this sparsity factor can be estimated from the network adjacency tensor directly.
The main contribution of this paper is three-fold. First, the proposed TLSM is flexible and general in that it includes many popular network models as special cases. It also relaxes the layer-wise positive semi-definite condition that has been frequently employed in the literature [6, 35]. Second, a joint modeling framework is constructed for TLSM, consisting of the multi-layer network likelihood and a clustering-type penalty, to estimate the multi-layer network and conduct community detection simultaneously. Its advantages are supported by extensive numerical experiments on both synthetic and real-life multi-layer networks. Third, the asymptotic consistencies of TLSM are established in terms of both multi-layer network estimation and community detection. Notably, the established theoretical results imply that the proposed method can accommodate the sparsest multi-layer networks considered in the literature.
The rest of the paper is organized as follows. The remainder of Section 1 discusses related works and introduces necessary notations. Section 2 presents the proposed TLSM and its estimation scheme with an efficient algorithm. In Section 3, we establish the asymptotic consistencies of TLSM. Extensive numerical experiments on synthetic and real-life multi-layer networks, as well as ablation studies on two novel components of the proposed method, are carried out in Section 4. Section 5 concludes the paper. The supplementary materials contain technical proofs and necessary lemmas, additional simulation studies, and the detailed parameter tuning process, among others.

1.1 Related work

While there is a growing body of literature on community detection in single-layer networks [48, 28, 13], community detection in multi-layer networks is still in its infancy. One classical approach is to detect community structure in each layer separately [4, 5], which fails to leverage the homogeneity across different layers. Another approach is to aggregate the multi-layer network into a single-layer one [41, 12, 35], which heavily relies on the assumption of a homogeneous linking pattern across multiple layers. Recently, [26] proposed to aggregate the bias-adjusted version of the squared adjacency matrix in each layer to alleviate the information loss in aggregation,
yet it requires the average node degree to grow at a sub-optimal order.
In terms of multi-layer network generative models, [34] extended the seminal stochastic block model (SBM; [19]) to the multi-layer stochastic block model (MLSBM; [34]), where the probability for any two vertices to form an edge in a given layer depends only on their community memberships. Clearly, MLSBM heavily relies on the assumption of homogeneous vertices within communities. The MLSBM framework has also been incorporated into degree-corrected network estimation [36], spectral clustering [6, 35, 26], least squares estimation [27] and likelihood-based approaches [45]. In addition, the network response regression model [46] and tensor factorization methods [8, 22] have also been proposed to detect community structures in multi-layer networks.
To allow heterogeneous vertices, the latent space model [18] and the random dot product graph model [3] have been extended to multi-layer networks [47, 32, 2]. In addition, graph neural networks and graph convolutional networks have been extended to multi-layer networks for learning multi-layer network embeddings [14, 23, 17, 39].

1.2 Notations

Throughout the paper, we use boldface calligraphic Euler scripts (A) to denote tensors, boldface capital letters (A) or Greek letters (α, β) to denote matrices, boldface lowercase letters (a) to denote vectors, and regular letters (a) to denote scalars. For an order-three tensor A ∈ R^{I1×I2×I3}, A_{i,·,·} ∈ R^{I2×I3}, A_{·,j,·} ∈ R^{I1×I3}, and A_{·,·,m} ∈ R^{I1×I2} are the i-th horizontal slice, j-th lateral slice and m-th frontal slice of A, respectively. Similarly, for a matrix A, A_{i,·} denotes its i-th row and A_{·,j} denotes its j-th column. For a vector a, diag(a) stands for the diagonal matrix whose diagonal is a. We use ∥·∥, ∥·∥∞, and ∥·∥F to denote the l2-norm and l∞-norm of a vector, and the Frobenius norm of a matrix or tensor, respectively. For any integer n, denote [n] = {1, 2, ..., n}.
The mode-1 product between a tensor A ∈ R^{I1×I2×I3} and a matrix U ∈ R^{J1×I1} is a tensor A ×1 U ∈ R^{J1×I2×I3} whose (j1, i2, i3)-th entry is defined as (A ×1 U)_{j1,i2,i3} = Σ_{i1=1}^{I1} A_{i1,i2,i3} U_{j1,i1}. The mode-2 and mode-3 products between A and any matrix of appropriate dimension are defined similarly. The CANDECOMP/PARAFAC (CP) decomposition of A has the form
A = Σ_{r=1}^{R} a^{(r)} ∘ b^{(r)} ∘ c^{(r)},   (1)
where a^{(r)} ∈ R^{I1}, b^{(r)} ∈ R^{I2}, and c^{(r)} ∈ R^{I3} for r ∈ [R], and ∘ stands for the vector outer product. The CP-rank [24] of the tensor a^{(r)} ∘ b^{(r)} ∘ c^{(r)} is defined to be 1, for r ∈ [R]. The minimal number of rank-1 tensors in the CP decomposition of A is called the CP-rank of A. Let I ∈ {0, 1}^{R×R×R} be the identity tensor such that I_{i1,i2,i3} = 1 if i1 = i2 = i3 and 0 otherwise, and let A ∈ R^{I1×R}, B ∈ R^{I2×R}, and C ∈ R^{I3×R} be such that A_{·,r} = a^{(r)}, B_{·,r} = b^{(r)}, and C_{·,r} = c^{(r)}. Equation (1) can then be equivalently written as A = I ×1 A ×2 B ×3 C; a short sketch illustrating this notation is given below.

2 Structure-preserving embedding

In this paper, we consider multi-layer networks that can be represented as an undirected and unweighted M-layer graph G = (V, E), where V = [n] consists of the common n vertices across the different layers, and E = {E^{(m)}}_{m=1}^{M} with E^{(m)} ⊂ V × V representing the m-th relation network among vertices.
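Before proceeding, here is a minimal numpy illustration (ours, not part of the paper) of the notation of Section 1.2: the mode-1 product, the CP reconstruction (1), and its identity-tensor form are each a one-line einsum; the dimensions chosen here are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 5, 6, 3
A_f = rng.normal(size=(I1, R))   # factor matrices A, B, C of (1)
B_f = rng.normal(size=(I2, R))
C_f = rng.normal(size=(I3, R))

# CP reconstruction: T = sum_r a^(r) outer b^(r) outer c^(r)
T = np.einsum('ir,jr,kr->ijk', A_f, B_f, C_f)

# mode-1 product: (T x1 U)_{j,i2,i3} = sum_{i1} T_{i1,i2,i3} U_{j,i1}
U = rng.normal(size=(7, I1))
assert np.einsum('abc,ja->jbc', T, U).shape == (7, I2, I3)

# equivalent identity-tensor form: T = I x1 A x2 B x3 C
Id = np.zeros((R, R, R))
Id[np.arange(R), np.arange(R), np.arange(R)] = 1.0
assert np.allclose(T, np.einsum('pqr,ip,jq,kr->ijk', Id, A_f, B_f, C_f))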
An order-three adjacency tensor A = (a_{i,j,m}) ∈ {0, 1}^{n×n×M} is then defined to represent G, with entries a_{i,j,m} = 1 if (i, j) ∈ E^{(m)} and 0 otherwise.

2.1 Tensor-based latent space model

To fully characterize the multi-layer network structure, we propose the following generative tensor-based latent space model (TLSM). For any i ≤ j ∈ [n] and m ∈ [M],
a_{i,j,m} = a_{j,i,m} ∼ Bernoulli(p_{i,j,m}), independently,   (2)
θ_{i,j,m} = log(p_{i,j,m} / (sn − p_{i,j,m})),   (3)
Θ = I ×1 α ×2 α ×3 β, α ∈ Ωα, β ∈ Ωβ,   (4)
where I is the order-three R-dimensional identity tensor. Basically, (2) follows the standard routine in the multi-layer network literature [34, 35, 27, 22] to model that a_{i,j,m} = a_{j,i,m} are independently generated from a Bernoulli distribution, for i ≤ j ∈ [n] and m ∈ [M]. Denote by P = (p_{i,j,m}) ∈ R^{n×n×M} the underlying network probability tensor; then Θ = (θ_{i,j,m}) ∈ R^{n×n×M} is the entry-wise transformation of P by (3). We call the transformation (3) the modified logit transformation, in that the constant 1 in the standard logit transformation is replaced by a sparsity factor sn, which may vanish with n and M (sketched numerically below). We further assume all entries of P are of the order sn; that is, there exists a constant 1/2 ≤ ξ < 1 such that (1 − ξ)sn ≤ p_{i,j,m} ≤ ξsn, for i, j ∈ [n] and m ∈ [M]. Thus sn essentially controls the overall network sparsity, and the entries of Θ are ensured to lie in the interval [−log(ξ/(1−ξ)), log(ξ/(1−ξ))]. More importantly, (4) models the CP decomposition of Θ by the factor matrices α ∈ R^{n×R} and β ∈ R^{M×R} with CP-rank R, which greatly reduces the number of free parameters from n(n+1)M/2 to (n+M)R. Throughout the paper, the CP-rank R is allowed to diverge with n. In the CP decomposition of Θ, α is the vertex latent position matrix, with each row α_{i,·} serving as the embedding of vertex i, and β captures heterogeneity across different layers. Herein, we define the constraint sets for α and β as Ωα = {α ∈ R^{n×R} : ∥α_{i,·}∥ ≤ (log(ξ/(1−ξ)))^{1/2}, for i ∈ [n]} and Ωβ = {β ∈ R^{M×R} : ∥β_{·,r}∥ = 1, r ∈ [R]}. Note that the constraint on β is necessary for model identification; a detailed discussion will be presented shortly. The constraint set Ωα × Ωβ is sufficient to maintain the boundedness of Θ, since a general Hölder inequality yields |θ_{i,j,m}| = |I ×1 α_{i,·}^T ×2 α_{j,·}^T ×3 β_{m,·}^T| ≤ ∥α_{i,·}∥ ∥α_{j,·}∥ ∥β_{m,·}∥∞ ≤ log(ξ/(1−ξ)). To conclude this paragraph, we remark that the parameter ξ is introduced for theoretical purposes and is not treated as a tuning parameter. One can choose ξ sufficiently close to 1 in empirical studies, so that the restriction on α is alleviated.
We make several essential observations about the proposed TLSM. First and foremost, TLSM is flexible and general. It includes the celebrated MLSBM [34, 43, 35, 27, 26, 36, 22] as a special case. Specifically, suppose the vertices come from K disjoint communities; the standard MLSBM assumes that the underlying network probability tensor is P = B ×1 Z ×2 Z, where B ∈ R^{K×K×M} is a semi-symmetric core probability tensor with B_{k1,k2,m} = B_{k2,k1,m} for k1, k2 ∈ [K] and m ∈ [M], and Z ∈ {0, 1}^{n×K} is the community membership matrix with Z_{i,k} = 1 if vertex i comes from the k-th community and 0 otherwise. That is, the probability of any vertex pair forming an edge in a particular layer depends only on their community memberships.
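As a small aside before completing this MLSBM equivalence, the modified logit transformation (3) and its inverse p = sn · σ(θ) can be sketched numerically as follows (our code, not the authors'; the value of s_n here is arbitrary).

import numpy as np

def modified_logit(p, s_n):
    # theta = log(p / (s_n - p)), transformation (3); requires 0 < p < s_n
    return np.log(p / (s_n - p))

def inverse_modified_logit(theta, s_n):
    # p = s_n * sigmoid(theta); entries always stay in (0, s_n)
    return s_n / (1.0 + np.exp(-theta))

theta = np.linspace(-3.0, 3.0, 7)
p = inverse_modified_logit(theta, s_n=0.1)
assert np.allclose(modified_logit(p, s_n=0.1), theta)
assert np.all(p < 0.1)  # the sparsity factor caps all edge probabilities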
Equivalently, under the modified logit transformation (3), we have Θ = B̃ ×1 Z ×2 Z, where B̃ is the entry-wise transformation of B under (3). Taking R to be the CP-rank of B̃, the CP decomposition of B̃ then has the form B̃ = I ×1 C ×2 C ×3 β for some matrices C ∈ R^{K×R} and β ∈ R^{M×R}, due to semi-symmetry. This leads to the CP decomposition of Θ having the form (4) with α = ZC. It is clear that MLSBM requires vertices within the same community to be homogeneous and exchangeable, while TLSM allows vertices to have different embeddings even when they are in the same community.
Second, TLSM is identifiable when both α and β have full column ranks. In that case, the Kruskal k-ranks [25] of α and β satisfy kα = kβ = R, and Θ has CP-rank R. Hence kα + kα + kβ ≥ 2R + 2 as long as R ≥ 2. By Theorem 1 of [40], the fixed column l2-norm constraint on β implies that the tensor factorization in (4) is unique up to column permutations of α and β and column sign flips of α. It is important to remark that the community structure encoded in α remains unchanged under any column permutation or sign flip.
Third, introducing a sparsity factor sn via a modified logit transformation into TLSM is non-trivial. We take a single-layer network as an example to illustrate the limitation of the standard logit transformation in handling sparse networks. Suppose a vanilla logit link is used to connect the network underlying probability matrix P and its transformation Θ, and the latent space model usually assumes Θ = αα^T. A sparse network requires the entries of Θ to diverge to negative infinity due to the small magnitude of the edge probabilities, which leads to unstable estimation of α in numerical experiments. Moreover, this may conflict with the assumption that vertices within the same community tend to be close in the embedding space, so that their inner product is likely to be positive. These difficulties are naturally circumvented when an appropriate sn is chosen in (3).

2.2 Regularized likelihood

Given a network adjacency tensor A and the number of communities K, our goal is to estimate the multi-layer network embedding (α, β) and conduct community detection on the vertices. Throughout this paper, we assume the number of potential communities K is given and may diverge with n. Under the TLSM framework, with slight abuse of notation, we denote the average negative log-likelihood function of the multi-layer network G as L(α, β; A) = L(Θ; A) with
L(Θ; A) = (1/φ(n, M)) Σ_{m=1}^{M} Σ_{i≤j} L(θ_{i,j,m}; a_{i,j,m}),
where φ(n, M) = n(n+1)M/2 is the number of potential edges, and L(θ; a) = log(1 + sn/(1 − sn + e^{−θ})) − a log(sn/(1 − sn + e^{−θ})) is the negative log-density of a Bernoulli random variable a. We now introduce a novel regularization term to detect the potential communities in G,
J(α) = min_{Z∈Γ, C∈R^{K×R}} (1/n) ∥α − ZC∥_F²,   (5)
where C encodes the vertex embedding centers and Γ ⊂ {0, 1}^{n×K} is the set of all possible community membership matrices; that is, for any Z ∈ Γ, each row of Z consists of a single 1 indicating the community membership, with all other entries being 0. This leads to the proposed regularized cost function
Lλ(α, β; A) = L(α, β; A) + λn J(α),   (6)
where λn is a positive tuning parameter that strikes a balance between network estimation and community detection in the cost function; a small sketch of this objective is given below.
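The following minimal sketch (ours) evaluates the objective (6). It uses scikit-learn's K-means as a convenient stand-in for the (1 + δ)-approximation K-means used in Section 2.3, and all function and variable names are our assumptions, not the authors' API.

import numpy as np
from sklearn.cluster import KMeans

def nll(theta, a, s_n):
    # per-edge loss L(theta; a); r = s_n / (1 - s_n + e^{-theta}) is the odds p/(1-p)
    r = s_n / (1.0 - s_n + np.exp(-theta))
    return np.log(1.0 + r) - a * np.log(r)

def J(alpha, K):
    # regularizer (5): mean squared distance of embeddings to their cluster centers
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(alpha)
    centers = km.cluster_centers_[km.labels_]  # the matrix Z C
    return np.mean(np.sum((alpha - centers) ** 2, axis=1))

def objective(alpha, beta, A, s_n, lam, K):
    # L_lambda(alpha, beta; A) of (6), summing the loss over i <= j in every layer
    n, _, M = A.shape
    theta = np.einsum('ir,jr,mr->ijm', alpha, alpha, beta)
    i_up, j_up = np.triu_indices(n)
    phi = 0.5 * n * (n + 1) * M
    loss = nll(theta[i_up, j_up, :], A[i_up, j_up, :], s_n).sum() / phi
    return loss + lam * J(alpha, K)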
It is clear that the embeddings of vertices with similar linking patterns will be pushed towards the same center, and thus close to each other in the ambient space, leading to the desired community structure in G.

2.3 Projected gradient descent algorithm

We develop a scalable projected gradient descent (PGD) algorithm to optimize the penalized cost function (6), which is highly non-convex and can be solved only locally. PGD, which alternately conducts a gradient step and a projection step, is one of the most popular and computationally fast algorithms for tackling non-convex optimization problems [7, 33, 47, 9].
To compute the gradients with respect to α and β, we introduce the following notation. Define T ∈ R^{n×n×M} with entries T_{i,j,m} = exp(−θ_{i,j,m})/(1 − sn + exp(−θ_{i,j,m})) · (p_{i,j,m} − a_{i,j,m}), and X^{α,β}_{T(2,3)} ∈ R^{n×R} whose i-th row consists of the diagonal elements of the slice (T ×2 α^T ×3 β^T)_{i,·,·}; that is, X^{α,β}_{T(2,3)}(i, r) = (T ×2 α^T ×3 β^T)_{i,r,r}. Similarly, we define X^{α,α}_{T(1,2)} ∈ R^{R×M}, X^{β}_{T(3)} ∈ R^{n×R}, and X_{T(1,2)} ∈ R^{n×M}, such that X^{α,α}_{T(1,2)}(r, m) = (T ×1 α^T ×2 α^T)_{r,r,m}, X^{β}_{T(3)}(i, r) = (T ×3 β^T)_{i,i,r}, and X_{T(1,2)}(i, m) = T_{i,i,m}. Consequently, when the vertex membership matrix Z and the community center matrix C are fixed, the gradients of Lλ(α, β; A) with respect to α and β are
(1/φ(n,M)) (X^{α,β}_{T(2,3)} + X^{β}_{T(3)} ∗ α) + 2λn(α − ZC) and (1/(2φ(n,M))) ((X^{α,α}_{T(1,2)})^T + X_{T(1,2)}^T (α ∗ α)),
respectively. Herein, ∗ denotes the Hadamard (entry-wise) product between two matrices. Let (α̃, β̃) denote the solution given by a one-step gradient descent; we then project (α̃, β̃) onto Ωα × Ωβ via the following steps.
Step 1. Multiply the r-th column α̃_{·,r} by ∥β̃_{·,r}∥^{1/2} for r ∈ [R]. Denote the resultant matrix by α̃′.
Step 2. Regularize each row of α as α_{i,·} = α̃′_{i,·} min{(log(ξ/(1−ξ)))^{1/2}, ∥α̃′_{i,·}∥}/∥α̃′_{i,·}∥, for i ∈ [n].
Step 3. Normalize the columns of β as β_{·,r} = β̃_{·,r}/∥β̃_{·,r}∥, for r ∈ [R].
Next, when (α, β) are given, we apply a (1+δ)-approximation K-means algorithm on α̃ to update the vertex community membership matrix Z and the community center matrix C.
These steps are conducted alternately until convergence or until the maximum number of iterations is reached. The developed alternating updating scheme is summarized in Algorithm 1 in Appendix A of the supplementary materials.
Several remarks on the algorithm are in order. First, Algorithm 1 can only be guaranteed to converge to a stationary point, not necessarily a local minimizer. We hence employ a transformed higher-order orthogonal iteration (HOOI) algorithm for warm initialization in all the numerical experiments in Section 4. Specifically, given a user-specified value τ, we define Θ̃ to mimic the magnitude of Θ such that Θ̃_{i,j,m} = −τ if a_{i,j,m} = 0 and Θ̃_{i,j,m} = τ otherwise. A standard HOOI algorithm [11] is applied to Θ̃ to obtain α^(0) and β^(0). We set τ = 100 in all numerical experiments. Second, the sparsity factor sn is an intrinsic quantity of the multi-layer network data, and it should be estimated from the network directly. Note that the minimal and maximal probabilities for any vertex pair to form an edge in any layer are pmin = (1 − ξ)sn and pmax = ξsn, respectively. Interestingly, pmin + pmax = sn, which no longer depends on ξ.
Therefore, we propose to estimate sn as
ŝn = min_{i∈[n]} (1/(nM)) Σ_{m=1}^{M} Σ_{j=1}^{n} a_{i,j,m} + max_{i∈[n]} (1/(nM)) Σ_{m=1}^{M} Σ_{j=1}^{n} a_{i,j,m},   (7)
which is the sum of the minimal and maximal frequencies of a vertex forming edges with all other vertices across all layers. Third, to optimally choose λn, we extend the network cross-validation by edge sampling scheme of [30] to multi-layer networks. The detailed tuning procedure is relegated to Appendix B in the supplementary materials.

3 Asymptotic theory

3.1 Consistency in estimating Θ∗

Let Ω = {Θ = I ×1 α ×2 α ×3 β : α ∈ Ωα, β ∈ Ωβ} be the parameter space of the problem, and let Θ∗ = I ×1 α∗ ×2 α∗ ×3 β∗ be the true underlying transformed network probability tensor. Denote by KL(Θ∗∥Θ) = φ^{−1}(n,M) Σ_{m=1}^{M} Σ_{i≤j} E(L(θ_{i,j,m}; a_{i,j,m}) − L(θ∗_{i,j,m}; a_{i,j,m})) the averaged Kullback–Leibler divergence between the network generating distributions parametrized by Θ∗ and Θ, for any Θ ∈ Ω. The following large deviation inequality quantifies the behavior of Lλ(Θ; A) for any Θ in the neighborhood of Θ∗ defined by KL(Θ∗∥Θ).
Proposition 1. Suppose λn J(α∗) ≤ ϵn and (n+M) R φ^{−1}(n,M) ϵn^{−1} log(ϵn^{−1/2}) ≤ c1 for some constant c1. Then, with probability at least 1 − 2 exp(−φ(n,M) ϵn / (156 ξ/(1−ξ) + 28 log 2)), we have
Lλ(Θ∗; A) ≤ inf_{Θ∈Ω : KL(Θ∗∥Θ) ≥ 4ϵn} Lλ(Θ; A) − ϵn.
Proposition 1 basically states that any estimator with a sufficiently small objective value must be close to Θ∗ in terms of KL(Θ∗∥Θ). We next study the asymptotic behavior of such estimators more precisely. Let (α̂, β̂) ∈ Ωα × Ωβ be any estimator of (α∗, β∗) such that
Lλ(α̂, β̂; A) ≤ Lλ(α∗, β∗; A) + ϵn,   (8)
and denote Θ̂ = I ×1 α̂ ×2 α̂ ×3 β̂. We have the following theorem.
Theorem 1. Under the conditions of Proposition 1, if (α̂, β̂) satisfies (8), then with probability at least 1 − 2 exp(−φ(n,M) ϵn / (156 ξ/(1−ξ) + 28 log 2)), we have
(1/(n√M)) ∥Θ̂ − Θ∗∥F ≤ 4√2 √ϵn / ((1−ξ) √(ξ sn)).
The condition λn J(α∗) ≤ ϵn in Proposition 1 is mild. It implies that the true embeddings of vertices within the same community are close to one another; we remark that λn J(α∗) equals exactly zero under the MLSBM discussed in Section 2.1. The condition that (n+M) R φ^{−1}(n,M) ϵn^{−1} log(ϵn^{−1/2}) vanishes with n is also mild. When R = O(1), we can take any ϵn such that ϵn ≫ log n / (n min{n, M}). Consequently, to ensure that Θ̂ converges to Θ∗, Theorem 1 implies that the smallest sparsity factor one can take is sn ≫ ϵn ≫ log n / (n min{n, M}), which means that the average degree of a vertex in any particular layer can be as small as n sn. We remark that the common assumption M = O(n) that appears in the literature, such as in [27] and [22], is not necessary in our theory. If we further assume M = O(n), the average degree of a vertex in any layer under the proposed TLSM setup can be smaller than that in [27] by a factor of (M log n)^{−1/2} and than that in [22] by a factor of (log n)^{−3}, showing that our theoretical result accommodates sparser multi-layer networks.

3.2 Consistency in community detection

We now turn to establishing the consistency of community detection in the multi-layer network G.
Let ψ∗ : [n] → [K] be the true community assignment function, defined by ψ∗ = argmin_ψ min_{C1,...,CK} Σ_{i=1}^{n} ∥α∗_{i,·} − C_{ψi}∥². The community detection error of any estimated community assignment function ψ̂ is then evaluated by the minimum scaled Hamming distance between ψ̂ and ψ∗ under permutations, defined as
err(ψ∗, ψ̂) = min_{π∈SK} (1/n) Σ_{i=1}^{n} 1{ψ∗_i ≠ π(ψ̂_i)},   (9)
where 1{·} is the indicator function and SK is the symmetric group of degree K. Such a scaled or unscaled Hamming distance has become a popular metric for quantifying the performance of community detection [21, 22].
Denote by N∗_k = {i : ψ∗_i = k} the k-th true underlying community, whose cardinality is nk. Let C∗ ∈ R^{K×R} be the matrix of true underlying community centers of the network embedding, with C∗_{k,·} = (1/nk) Σ_{ψ∗_i = k} α∗_{i,·}, and let B∗ = I ×1 C∗ ×2 C∗ ×3 β∗. The following assumptions are made to ensure that the communities within the multi-layer network are asymptotically identifiable.
Assumption A. Assume the difference between any two distinct horizontal slices of B∗ satisfies
min_{k,k′∈[K], k≠k′} (1/√(KM)) ∥B∗_{k,·,·} − B∗_{k′,·,·}∥F ≥ γn,
where γn > 0 may vanish with n.
Assumption B. Assume the tuning parameter λn satisfies λn ϵn sn^{−2} (log sn^{−1})^{−1} ≥ c2, for an absolute constant c2 that does not depend on any model parameter.
Assumption C. Denote by nmin = min_{k∈[K]} nk the minimal community size. Assume
γn nmin √K / n ≥ cξ √(ϵn / sn),
where cξ = 4√2/((1−ξ)√ξ) + c3 √((1+δ) min{M, R} / M) and c3 is a constant that depends on ξ only.
Assumption A is the minimal community separation requirement; a similar assumption has been employed in [27] with a constant γn. Together with the condition λn J(α∗) ≤ ϵn in Proposition 1, Assumption B gives a feasible interval for λn. Assumption C allows for unbalanced communities with vanishing nmin/n if the network is not too sparse. Note that cξ can be further bounded by 4√2/((1−ξ)√ξ) + c3 √(1+δ), and the first term of cξ dominates the second if R = o(M).
Theorem 2. Suppose all the assumptions of Theorem 1 as well as Assumptions A, B and C are satisfied. Then it holds that
err(ψ∗, ψ̂) ≤ cξ² n ϵn / (nmin K γn² sn),
with probability at least 1 − 1/n² − 2 exp(−φ(n,M) ϵn / (156 ξ/(1−ξ) + 28 log 2)).
Theorem 2 assures that the community structure of a multi-layer network can be consistently recovered by the proposed TLSM. As a theoretical example, we consider a sparse case with sn = (log n)^{1+τ1} / (n min{n, M}), where 0 < τ1 < 1, nmax = O(nmin), (1/√n) ∥α∗ − Z∗C∗∥F ≤ (log n)^{−3/2}, and γn, R and K are all of constant order. With λn = (log n)^{2+2τ1} / (n min{n, M}), Theorems 1 and 2 imply that ϵn = (log n)^{1+τ2} / (n min{n, M}) with 0 < τ2 < τ1 and err(ψ∗, ψ̂) = op(1).

4 Numerical experiments

In this section, we evaluate the numerical performance of the proposed TLSM on a variety of synthetic and real-life multi-layer networks, compare it against four competitors from the literature, namely the mean adjacency spectral embedding (MASE; [16]), least squares estimation (LSE; [27]), Tucker decomposition with HOSVD initialization (HOSVD-Tucker; [22]), and the spectral kernel method (SPECK; [35]), and conduct ablation studies. The implementations of LSE and SPECK are available on the authors' personal websites, HOSVD-Tucker is implemented in the routine "tucker" of the Python package "tensorly", and TLSM and MASE are implemented in Python by ourselves.

4.1 Synthetic networks

The multi-layer network A = (a_{i,j,m}) ∈ {0, 1}^{n×n×M} is generated as follows.
First, we randomly select K = 4 elements uniformly from {2.5 · (b1, b2, ..., bR) : br ∈ {−1, 1}, r ∈ [R]} as community centers, denoted ck for k ∈ [K]. Second, the latent space embedding of vertex i is generated as αi = c_{ψi} + ei with ei ∼ N(0R, 1.5 · IR), where the ψi ∈ [K] are independently drawn from the multinomial distribution Multi(1; (1/K) 1K). Third, we generate β = [β1, ..., βM]^T with the β_{m,r} being independent standard normal random variables, for m ∈ [M] and r ∈ [R]; we then rescale the column norms of β to 1 for model identifiability. Finally, we generate A according to the proposed TLSM with sn = 0.1. For the sake of fair comparison, the embedding dimension R is set to K in all scenarios. We aim to illustrate the community detection performance of all methods as the number of vertices and the number of layers increase. To this end, we consider (n, M) ∈ {200, 400, 600, 800} × {5, 10, 15, 20}. The average Hamming errors and their standard errors over 50 independent experiments are reported for all methods in Table 1.
It is evident that TLSM consistently outperforms its competitors, and that the performances of LSE and HOSVD-Tucker are better than those of MASE and SPECK. This is expected, since TLSM, LSE and HOSVD-Tucker work on the multi-layer network adjacency tensor directly, while MASE and SPECK are matrix aggregation methods that suffer from information loss. Furthermore, as the number of vertices and the number of layers increase, the community detection errors of all methods decrease rapidly. Notably, TLSM and LSE converge faster than the other methods and attain stable performance even for relatively small n and M. Additional simulation studies for various network sparsities and unbalanced community sizes are relegated to Appendix C in the supplementary materials.

4.2 Real-life networks

We also apply the proposed TLSM to analyze three real-life multi-layer networks: a social network of the Department of Computer Science at Aarhus University (AUCS) [38], a yeast Saccharomyces cerevisiae gene co-expression (YSCGC) network [44], and a worldwide agriculture trading (WAT) network [10]. Specifically, we conduct community detection on the first two networks, whose vertex community memberships are available, and carry out a link prediction task on the third network, whose vertex community memberships are unavailable.
The AUCS dataset is publicly available at http://multilayer.it.uu.se/datasets.html; it is a 61 × 61 × 5 multi-layer network that records pairwise relationships of 5 types among 61 persons at AUCS, including current working relationships, repeated leisure activities, regularly eating lunch together, co-authorship of a publication, and friendship on Facebook. Since 54 persons in the dataset come from 7 research groups and the other 7 persons do not belong to any group, the dataset consists of 8 communities, corresponding to the 7 research groups and an outlier community. Applying TLSM and its competitors to the dataset, the numbers of misclassified vertices by TLSM, LSE, MASE, HOSVD-Tucker and SPECK are 8, 21, 19, 23 and 18, respectively.
Clearly, TLSM significantly outperforms its competitors, reducing the community detection error by at least 16.39%.
The YSCGC dataset is publicly available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC156590/, and contains 205 genes of 4 functional categories, including protein metabolism and modification; carbohydrate metabolism and catabolism; nucleobase, nucleoside, nucleotide and nucleic acid metabolism; and transportation. We regard these four functional category labels as the community memberships of the genes. Further, the gene expression responses are measured by 20 systematic perturbations with varying genetic and environmental conditions in 4 replicated hybridizations. We thus construct a gene co-expression network A = (a_{i,j,m}) ∈ {0, 1}^{205×205×4} based on the similarities of the gene expressions, where each layer represents one replicated hybridization. Specifically, the similarity between genes i and j in the m-th replication is measured by w_{i,j,m} = exp(−∥x_i^{(m)} − x_j^{(m)}∥), where x_i^{(m)} ∈ R^{20} contains the expression levels of the 20 perturbations in the m-th replicated hybridization, for i ∈ [205] and m ∈ [4]. The binary value a_{i,j,m} is obtained by thresholding w_{i,j,m}, with the threshold set at the 60% quantile of all elements in {w_{i,j,m} : i ≤ j ∈ [205], m ∈ [4]}. Applying TLSM and its competitors to this dataset, the numbers of misclassified vertices by TLSM, LSE, MASE, HOSVD-Tucker and SPECK are 6, 9, 12, 48 and 13, respectively. TLSM again outperforms its competitors on this YSCGC dataset.
The WAT dataset is publicly available at http://www.fao.org, and includes 364 agriculture product trading relationships among 214 countries in 2010. To process the data, we extract the 130 major countries whose average degrees are greater than 9 from the 32 most densely connected agriculture product trading relations, leading to a 130 × 130 × 32 multi-layer network. Investigating the eigen-structure of the mode-1 matricization of the network adjacency tensor, we identify an elbow point [20] at the 7th largest eigenvalue, suggesting there are 6 potential communities among the countries, and thus we set K = 6. The corresponding eigenvalue plot is attached in Appendix D of the supplementary materials. We then randomly select 80% of the entries of the adjacency tensor as the training set, and conduct link prediction on the remaining 20% of the entries. Specifically, we employ TLSM and the adaptations of its competitors to estimate the expected network tensor P and generate estimates of the missing entries as independent Bernoulli random variables accordingly. The average link prediction accuracies of TLSM, LSE, MASE, HOSVD-Tucker and SPECK over 50 independent replications are 79.60%, 76.66%, 75.96%, 77.78% and 79.08%, respectively, where link prediction accuracy is defined as the percentage of correctly predicted entries. All 5 methods are thus comparable in terms of link prediction, while TLSM still delivers the highest average link prediction accuracy.

4.3 Ablation studies

In this subsection, we carry out ablation studies on two novel components of the proposed method, namely the sparsity factor sn and the community-inducing regularizer J(α). To study the effectiveness of sn, we generate a 300 × 300 × 5 multi-layer network with 3 communities and true network sparsity sn = 0.3. Community detection performance is again measured by the scaled Hamming distance (9), sketched below.
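A minimal sketch (our code, not the authors') of the scaled Hamming distance (9); the exhaustive search over S_K is fine for small K, such as K = 3 in this ablation.

import numpy as np
from itertools import permutations

def hamming_error(psi_true, psi_hat, K):
    # err(psi*, psi_hat) of (9): misclassification rate minimized over label permutations
    best = 1.0
    for pi in permutations(range(K)):
        relabeled = np.asarray(pi)[psi_hat]
        best = min(best, float(np.mean(psi_true != relabeled)))
    return best

# e.g., hamming_error(np.array([0, 0, 1, 1, 2, 2]), np.array([1, 1, 0, 0, 2, 2]), 3) == 0.0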
The blue curve in the left panel of Figure 1 shows the average Hamming error over 50 independent replications of the proposed method when employing ŝn ∈ {0.05i : i ∈ [20]} in the optimization algorithm, and the red line indicates the average Hamming error of the proposed method with ŝn estimated via the proposed data-adapted estimation scheme. It is clear that the Hamming error at ŝn = 1 is much larger than when ŝn is close to 0.3, showing the advantage of the modified logit transformation with sn over the standard logit transformation when the network indeed exhibits a sparse pattern. Moreover, the red line is even lower than the minimum Hamming error on the blue curve, which further confirms the effectiveness of the proposed data-adapted estimation scheme for sn.
To study the effectiveness of the community-inducing regularizer in the proposed objective function, we generate an n × n × 5 multi-layer network with 2 communities, for n ∈ {50, 100, 200, 400}. In the right panel of Figure 1, the black pillars indicate the network estimation error (1/(n√5)) ∥Θ̂ − Θ∗∥F given by the proposed method with λn = 0, which corresponds to the absence of J(α), while the red pillars indicate the counterparts given by the proposed method with λn selected by network cross-validation. There is a clear improvement when the community-inducing regularizer is enforced in all scenarios, particularly for small n. This showcases the helpfulness of the community-inducing regularizer in detecting network community structure.

5 Conclusions

In this paper, we propose a novel tensor-based latent space model for community detection in multi-layer networks. The model embeds vertices into a low-dimensional latent space and views the community structure from a network embedding perspective, so that heterogeneous structures in different network layers can be properly integrated. The proposed model is formulated as a regularization framework, which conducts multi-layer network estimation and community detection simultaneously. The advantages of the proposed method are supported by extensive numerical experiments and theoretical results. In particular, the asymptotic consistencies of the proposed method are established in terms of both multi-layer network estimation and community detection, even for relatively sparse networks.

References

[1] Luiz GA Alves, Giuseppe Mangioni, Isabella Cingolani, Francisco Aparecido Rodrigues, Pietro Panzarasa, and Yamir Moreno. The nested structural organization of the worldwide trade multi-layer network. Scientific Reports, 9(1):1–14, 2019.
[2] Jesús Arroyo, Avanti Athreya, Joshua Cape, Guodong Chen, Carey E Priebe, and Joshua T Vogelstein. Inference for multiple heterogeneous networks with a common invariant subspace. Journal of Machine Learning Research, 22(142):1–49, 2021.
[3] Avanti Athreya, Donniell E Fishkind, Minh Tang, Carey E Priebe, Youngser Park, Joshua T Vogelstein, Keith Levin, Vince Lyzinski, and Yichen Qin. Statistical inference on random dot product graphs: a survey. Journal of Machine Learning Research, 18(1):8393–8484, 2017.
[4] Matteo Barigozzi, Giorgio Fagiolo, and Giuseppe Mangioni. Identifying the community structure of the international-trade multi-network. Physica A: Statistical Mechanics and its Applications, 390(11):2051–2066, 2011.
[5] Michele Berlingerio, Fabio Pinelli, and Francesco Calabrese.
Abacus: frequent pattern mining-based community discovery in multidimensional networks. Data Mining and Knowledge Discovery, 27(3):294–320, 2013.
[6] Sharmodeep Bhattacharyya and Shirshendu Chatterjee. Spectral clustering for multiple sparse networks: I. arXiv preprint arXiv:1805.10594, 2018.
[7] Han Chen, Garvesh Raskutti, and Ming Yuan. Non-convex projected gradient descent for generalized low-rank tensor regression. Journal of Machine Learning Research, 20:1–37, 2019.
[8] Zitai Chen, Chuan Chen, Zibin Zheng, and Yi Zhu. Tensor decomposition for multilayer networks clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3371–3378, 2019.
[9] Eric C Chi, Brian R Gaines, Will Wei Sun, Hua Zhou, and Jian Yang. Provable convex co-clustering of tensors. Journal of Machine Learning Research, 21(214):1–58, 2020.
[10] Manlio De Domenico, Vincenzo Nicosia, Alexandre Arenas, and Vito Latora. Structural reducibility of multilayer networks. Nature Communications, 6(1):1–9, 2015.
[11] Lieven De Lathauwer, Bart De Moor, and Joos Vandewalle. On the best rank-1 and rank-(r1, r2, ..., rn) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21(4):1324–1342, 2000.
[12] Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, and Nikolai Nefedov. Clustering with multi-layer graphs: A spectral perspective. IEEE Transactions on Signal Processing, 60(11):5820–5831, 2012.
[13] Junxian Geng, Anirban Bhattacharya, and Debdeep Pati. Probabilistic community detection with unknown number of communities. Journal of the American Statistical Association, 114(526):893–905, 2019.
[14] Mahsa Ghorbani, Mahdieh Soleymani Baghshah, and Hamid R Rabiee. MGCN: semi-supervised classification in multi-layer graphs with graph convolutional networks. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 208–211, 2019.
[15] Derek Greene and Pádraig Cunningham. Producing a unified graph representation from multiple social network views. In Proceedings of the 5th Annual ACM Web Science Conference, pages 118–121, 2013.
[16] Qiuyi Han, Kevin Xu, and Edoardo Airoldi. Consistent estimation of dynamic and multi-layer block models. In International Conference on Machine Learning, pages 1511–1520. PMLR, 2015.
[17] Xin He, Qiong Liu, and You Yang. MV-GNN: Multi-view graph neural network for compression artifacts reduction. IEEE Transactions on Image Processing, 29:6829–6840, 2020.
[18] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[19] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[20] Pengsheng Ji and Jiashun Jin. Coauthorship and citation networks for statisticians. The Annals of Applied Statistics, 10(4):1779–1812, 2016.
[21] Jiashun Jin. Fast community detection by SCORE. The Annals of Statistics, 43(1):57–89, 2015.
[22] Bing-Yi Jing, Ting Li, Zhongyuan Lyu, and Dong Xia. Community detection on mixture multilayer networks via regularized tensor decomposition. The Annals of Statistics, 49(6):3181–3205, 2021.
[23] Muhammad Raza Khan and Joshua E Blumenstock.
Multi-GCN: Graph convolutional networks for multi-view networks, with applications to global poverty. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 606–613, 2019.
[24] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
[25] Joseph B Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977.
[26] Jing Lei. Tail bounds for matrix quadratic forms and bias adjusted spectral clustering in multi-layer stochastic block models. arXiv preprint arXiv:2003.08222, 2020.
[27] Jing Lei, Kehui Chen, and Brian Lynch. Consistent community detection in multi-layer network data. Biometrika, 107(1):61–73, 2020.
[28] Jing Lei and Alessandro Rinaldo. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2015.
[29] Dong Li, Zhisong Pan, Guyu Hu, Graham Anderson, and Shan He. Active module identification from multilayer weighted gene co-expression networks: a continuous optimization approach. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
[30] Tianxi Li, Elizaveta Levina, and Ji Zhu. Network cross-validation by edge sampling. Biometrika, 107(2):257–276, 2020.
[31] Xueming Liu, Enrico Maiorino, Arda Halu, Kimberly Glass, Rashmi B Prasad, Joseph Loscalzo, Jianxi Gao, and Amitabh Sharma. Robustness and lethality in multilayer biological molecular networks. Nature Communications, 11(1):1–12, 2020.
[32] Zhongyuan Lyu, Dong Xia, and Yuan Zhang. Latent space model for higher-order networks and generalized tensor decomposition. arXiv preprint arXiv:2106.16042, 2021.
[33] Zhuang Ma, Zongming Ma, and Hongsong Yuan. Universal latent space model fitting for large networks with edge covariates. Journal of Machine Learning Research, 21(4):1–67, 2020.
[34] Subhadeep Paul and Yuguo Chen. Consistent community detection in multi-relational data through restricted multi-layer stochastic blockmodel. Electronic Journal of Statistics, 10(2):3807–3870, 2016.
[35] Subhadeep Paul and Yuguo Chen. Spectral and matrix factorization methods for consistent community detection in multi-layer networks. The Annals of Statistics, 48(1):230–250, 2020.
[36] Subhadeep Paul and Yuguo Chen. Null models and community detection in multi-layer networks. Sankhya A, pages 1–55, 2021.
[37] Zhuo-Ming Ren, An Zeng, and Yi-Cheng Zhang. Bridging nestedness and economic complexity in multilayer world trade networks. Humanities and Social Sciences Communications, 7(1):1–8, 2020.
[38] Luca Rossi and Matteo Magnani. Towards effective visual analytics on multiplex and multilayer networks. Chaos, Solitons & Fractals, 72:68–76, 2015.
[39] Uday Shankar Shanthamallu, Jayaraman J Thiagarajan, Huan Song, and Andreas Spanias. GrAMME: Semisupervised learning using multilayered graph attention models. IEEE Transactions on Neural Networks and Learning Systems, 31(10):3977–3988, 2019.
[40] Nicholas D Sidiropoulos and Rasmus Bro. On the uniqueness of multilinear decomposition of N-way arrays. Journal of Chemometrics, 14(3):229–239, 2000.
[41] Wei Tang, Zhengdong Lu, and Inderjit S Dhillon. Clustering with multiple graphs. In 2009 Ninth IEEE International Conference on Data Mining, pages 1016–1021.
Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See the abstract and the third paragraph of the introduction.
(b) Did you describe the limitations of your work? [Yes] The optimization algorithm can only be guaranteed to converge to a stationary point.
(c) Did you discuss any potential negative societal impacts of your work? [No] There should be no negative societal impacts.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See Section 3.
(b) Did you include complete proofs of all theoretical results? [Yes] All technical proofs are provided in Appendix E of the supplementary materials.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The URLs for data are included in Section 4.2, and code with instructions is included in the supplementary materials.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 2.3 and Appendix B in the supplementary materials.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] We show the standard errors in Table 1 and 95% confidence intervals of additional simulation studies in Appendix C in the supplementary materials.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] We used publicly available datasets and cite the creators.
(b) Did you mention the license of the assets? [Yes] All datasets we used are publicly available.
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [No]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [No] The data we used contain no personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper on community detection in multi-layer networks?
2. What are the strengths and weaknesses of the proposed TLSM model?
3. Do you have any questions regarding the integration of heterogeneous network structures or the computational complexity of TLSM?
4. What are the limitations and potential societal impacts of the proposed model?
Summary Of The Paper
In this paper, the authors propose TLSM, a novel tensor-based latent space model for community detection in multi-layer networks. TLSM integrates the heterogeneous network structure in different layers by embedding nodes into a low-dimensional space, with the aim that nodes within the same community have closer embeddings. Further, TLSM utilizes a regularized framework, consisting of the average negative log-likelihood function of the multi-layer network and a clustering regularizer, to estimate the multi-layer network and conduct community detection simultaneously. The authors also provide theoretical analysis of the asymptotic consistency of TLSM for both multi-layer network estimation and community detection.
Strengths And Weaknesses
Strengths:
• TLSM is a flexible and general framework that contains many multi-layer network generative models, such as the multi-layer stochastic block model.
• TLSM estimates the multi-layer network and performs community detection simultaneously by adding a clustering penalty to the multi-layer network likelihood function.
• TLSM analyzes the asymptotic consistency in terms of both multi-layer network estimation and community detection.
Weaknesses:
• TLSM applies projected gradient descent to optimize the regularized likelihood, which can only guarantee reaching a local optimum. The authors mention employing a transformed higher-order orthogonal iteration (HOOI) algorithm for warm initialization, but it would be great if the authors could discuss it further in detail.
• TLSM outperforms the baseline methods on the synthetic networks, but does not significantly outperform the baselines on the real-world networks. The reviewer wonders if the authors could provide more details about the real-world experiments.
• Although TLSM is flexible and general, the techniques it uses are not novel.
Questions
• It seems that TLSM mainly leverages the tensor CP decomposition to integrate the heterogeneous structure of the multi-layer networks. The reviewer wonders if the integration would be better if TLSM incorporated random walks between different layers.
• The computational complexity of TLSM is not mentioned in the paper; it would be great if the authors could discuss TLSM's complexity further.
Limitations
The authors did not mention the limitations and societal impacts. The reviewer thinks the limitations of the proposed model are mainly the optimality of the learned parameters and the model's performance.
NIPS
Title
AttCAT: Explaining Transformers via Attentive Class Activation Tokens
Abstract
Transformers have improved the state-of-the-art in various natural language processing and computer vision tasks. However, the success of the Transformer model has not yet been duly explained. Current explanation techniques, which dissect either the self-attention mechanism or gradient-based attribution, do not necessarily provide a faithful explanation of the inner workings of Transformers, for the following reasons: first, attention weights alone, without considering the magnitudes of the feature values, are not adequate to reveal the self-attention mechanism; second, whereas most Transformer explanation techniques utilize the self-attention module, the skip-connection module, which contributes a significant portion of the information flow in Transformers, has not yet been sufficiently exploited in explanation; third, the gradient-based attribution of individual features does not incorporate the interaction among features in explaining the model's output. In order to tackle the above problems, we propose a novel Transformer explanation technique via attentive class activation tokens, aka AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for a Transformer's output. Extensive experiments are conducted to demonstrate the superior performance of AttCAT over the baseline methods; it generalizes well to different Transformer architectures, evaluation metrics, datasets, and tasks. Our code is available at: https://github.com/qiangyao1988/AttCAT.
1 Introduction
Transformers have advanced the state-of-the-art on a variety of natural language processing tasks [1, 2] and see increasing popularity in the field of computer vision [3, 4]. The main innovation behind the Transformer models is the stacking of multi-head self-attention layers to extract global features from sequential tokenized inputs. However, the lack of understanding of their mechanism increases the risk of deploying them in real-world applications [5, 6, 7, 8, 9]. This has motivated new research on explaining Transformer outputs to assist trustworthy human decision-making [10, 11, 12, 13, 14, 15, 16, 17]. The self-attention mechanism [18] in Transformers assigns a pairwise score capturing the relative importance between every two tokens or image patches as attention weights. Thus, a common practice is to use these attention weights to explain the Transformer model's output by exhibiting the importance distribution over the input tokens [6]. The baseline method, shown as RawAtt in Figure 2, utilizes the raw attention weights from a single layer or a combination of multiple layers [10]. However, recent studies [11, 12, 13] question whether highly attentive inputs significantly impact the model outputs. Serrano et al. [11] demonstrate that erasing the representations accorded high attention weights does not necessarily lead to a performance decrease. Jain et al. [12] suggest that "attention is not explanation" by observing that attention scores are frequently inconsistent with other feature importance indicators, such as gradient-based measures. Abnar et al. [13] argue that the contextual information from the tokens gets more similar as one goes deeper into the model, leading to unreliable explanations based on the raw attention weights.
The authors propose two methods that combine the attention weights across multiple layers to cope with this issue. Their attention rollout method, shown as Rollout in Figure 2, reassigns importance scores to the tokens through a linear combination of the attention weights across the layers, tracing the information flow in the Transformer. However, the rollout operation cancels out the accumulated importance scores when some deeper layers have almost uniformly distributed attention weights. Their attention flow method is formulated as a max-flow problem by dissecting the graph of pairwise attentions. While it somewhat outperforms the rollout method in specific scenarios, it is not ready to support large-scale evaluations [15].
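For concreteness, the rollout operation can be sketched in a few lines; this is a minimal NumPy illustration based on our reading of [13], not code from the paper, and the equal weighting of attention and identity, together with the toy shapes, are assumptions made for the sketch.

```python
import numpy as np

def attention_rollout(attentions):
    """Combine per-layer attention maps multiplicatively across layers.

    attentions: list of (n, n) matrices, one per layer, already averaged
    over heads. Adding the identity models the skip connection; row
    normalization keeps each row a probability distribution.
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for a in attentions:
        a_hat = 0.5 * a + 0.5 * np.eye(n)            # attention + residual
        a_hat /= a_hat.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = a_hat @ rollout                    # accumulate layer by layer
    return rollout

# Toy usage: 3 layers over 4 tokens, each row a valid attention distribution.
rng = np.random.default_rng(0)
atts = [rng.dirichlet(np.ones(4), size=4) for _ in range(3)]
token_importance = attention_rollout(atts)[0]        # scores w.r.t. token 0
```

The multiplicative accumulation is exactly what makes rollout fragile: one near-uniform layer flattens the whole product, which is the cancellation effect described above.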
Recently, Bastings et al. [19] advocate using saliency methods, as opposed to attention, as explanations. Although some gradient-based methods [20, 21, 22, 23] have been proposed to leverage saliency for explaining a Transformer's output, most of them still focus on the gradients of the attention weights, i.e., Grads and AttGrads as shown in Figure 2, and thus suffer from a limitation similar to that of the above attention-based methods. The Layer-wise Relevance Propagation (LRP) method [24, 25], also considered a type of saliency method, propagates relevance scores from the output layer to the input. There has been a growing body of work on using LRP to explain Transformers [14, 15]. Voita et al. [14] use LRP to capture the relative importance of the attention heads within each Transformer layer (shown as PartialLRP in Figure 2). However, this approach only provides partial information on each self-attention head's relevance; no relevance score is propagated back to the input. To address this problem, Chefer et al. [15] provide a comprehensive treatment of the information propagation within all components of the Transformer model, back-propagating the information through all layers from the output to the input. This method further integrates gradients from the attention weights, shown as TransAtt in Figure 2. However, TransAtt relies on specific LRP rules that are not applicable to other attention modules, e.g., co-attention, and thus cannot provide explanations for all Transformer architectures [26].
As such, the existing Transformer explanation techniques are not completely satisfactory, due to three major issues. First, most attention-based methods disregard the magnitudes of the features. The summation operation (Eq. 2, shown in Figure 1) demonstrates that both the attention weights (the green circles) and the features (the blue circles) contribute to the weighted outputs (the red circles). In other words, since the self-attention mechanism involves the computation of queries, keys, and values, reducing it only to the derived attention weights (the inner products of queries and keys) is not ideal. Second, besides the self-attention mechanism, the skip connection, another major component of the Transformer, is not even considered by current techniques. The skip connection enables the delivery and integration of information by adding an identity mapping from inputs to outputs, addressing the model optimization problem from the perspective of information transfer [27]. Moreover, Lu et al. [28] find that a significant portion of the information flow in BERT goes through the skip connection instead of the attention heads (i.e., three times more often than attention on average). Thus, attention alone, without considering the skip connection, is not sufficient to characterize the inner working mechanism of Transformers. Third, the individual feature attribution-based approaches [15, 14, 29, 30] cannot capture the pairwise interactions of features, since gradients or relevance scores are calculated independently for each individual feature. For example, the gradients go directly through the Transformer layers from the output to the specific input (the token 'like'), as shown in Figure 1.
We propose Attentive Class Activation Tokens (AttCAT) to generate token-level explanations leveraging the features, their gradients, and their self-attention weights. Inspired by GradCAM [31], which uses the gradient information flowing into the last convolutional layer of a Convolutional Neural Network (CNN) to understand the importance of each neuron for the decision of interest, our approach quantifies the impact of each token on the class-specific output via its gradient information. We further leverage the self-attention weights to capture the global contextual information of each token, since they determine the relative importance of a single token with respect to all other tokens in the input sequence. By disentangling the information flow across the Transformer layers for a specific token into the information from the token itself, via the skip connection, and the interaction information among all the tokens, via the self-attention mechanism, we integrate the impact scores generated using AttCAT from multiple layers to give the final explanation.
A summary of the baseline methods and our AttCAT method is shown in Figure 2, demonstrating their main similarities and differences. The RawAtt and Rollout [13] methods simply use the attention weights (α). The Grads method leverages the gradients of the attention weights (∇α^L) from the last Transformer layer, while the AttGrads method [22] integrates the attention weights (α) and their gradients (∇α) from all Transformer layers. The PartialLRP method [14] applies LRP only to the last Transformer layer (R^L). Differently, the TransAtt method [26] integrates the relevance scores (R) from LRP and the gradients of the attention weights (∇α). We use CAT, a new gradient-based attribution method leveraging the features (h) and their gradients (∇h), as our in-house baseline method. We further integrate the attention weights (α) with CAT as the proposed AttCAT method. We state our contributions as follows:
• We propose a novel Transformer explanation technique, AttCAT, leveraging the features and their gradients, together with the attention weights, to generate so-called impact scores that quantify the influence of inputs on the model's outputs.
• AttCAT exploits both the self-attention mechanism and the skip connection to explain the inner working mechanism of Transformers by disentangling the information flows between intermediate layers.
• Furthermore, our class activation based method is capable of discriminating positive and negative impacts on the model's output using the directional information of the gradients.
• Finally, we conduct extensive experiments on different Transformer architectures, datasets, and Natural Language Processing (NLP) tasks, demonstrating a more faithful and confident explanation than the baseline methods using several quantitative metrics and qualitative visualizations.
2 Preliminaries
2.1 Self-Attention Mechanism
The encoders in the Transformer model [1] typically stack L identical layers.
Each layer contains two sublayers: (a) a multi-head self-attention module and (b) a feed-forward network module, coupled with layer normalization and skip connections. As illustrated in Figure 1, each encoder computes the output $h^{(l)}_i \in \mathbb{R}^d$ of the $i$-th token by combining the previous encoder's corresponding output $h^{(l-1)}_i$ from the skip connection and a sequence output $h^{(l-1)} = \{h^{(l-1)}_1, \cdots, h^{(l-1)}_i, \cdots, h^{(l-1)}_n\} \subseteq \mathbb{R}^d$ through the self-attention mechanism:
$$\alpha^{l}_{i,j} := \mathrm{softmax}\!\left(\frac{Q(h^{(l-1)}_i)\,K(h^{(l-1)}_j)^{T}}{\sqrt{d}}\right) \in \mathbb{R}, \qquad (1)$$
$$h^{l}_i = W^{O} \sum_{j=1}^{n} \alpha_{i,j}\, V(h^{(l-1)}_j) + h^{(l-1)}_i, \qquad (2)$$
where $\alpha^{l}_{i,j}$ is the attention weight assigned to the $j$-th token for computing $h^{(l)}_i$, and $d$ denotes the dimension of the vectors. Here, $Q(\cdot)$, $K(\cdot)$, and $V(\cdot)$ are the query, key, and value transformations:
$$Q(h) := W^{Q}h, \quad K(h) := W^{K}h, \quad V(h) := W^{V}h, \quad (W^{Q}, W^{K}, W^{V}) \in \mathbb{R}^{d \times d}, \qquad (3)$$
respectively. We drop the bias parameters in these equations for simplicity. For multi-head attention, we concatenate the output from each head.
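To make Eqs. 1-3 concrete, the following is a minimal single-head NumPy sketch of one encoder step with the skip connection; it deliberately omits layer normalization, the feed-forward sublayer, biases, and multiple heads, and all shapes and weights here are toy assumptions rather than details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encoder_step(h_prev, Wq, Wk, Wv, Wo):
    """One single-head self-attention update with skip connection (Eqs. 1-2).

    h_prev: (n, d) outputs of the previous layer, one row per token.
    Returns the updated representations and the (n, n) attention weights.
    """
    d = h_prev.shape[1]
    Q, K, V = h_prev @ Wq.T, h_prev @ Wk.T, h_prev @ Wv.T   # Eq. 3
    alpha = softmax(Q @ K.T / np.sqrt(d))                   # Eq. 1
    h = (alpha @ V) @ Wo.T + h_prev                         # Eq. 2: attention + skip
    return h, alpha

# Toy usage with n = 5 tokens of dimension d = 8.
rng = np.random.default_rng(0)
n, d = 5, 8
h0 = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
h1, alpha = encoder_step(h0, Wq, Wk, Wv, Wo)
```

The `+ h_prev` term is the skip connection that the rest of the paper leans on: it carries each token's own information forward, while `alpha @ V` carries the token interactions.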
2.2 Class Activation Map
GradCAM [31] is one of the most successful CAM-based methods, using the gradient information flowing into the last convolutional layer of a CNN to understand the importance of each neuron for the decision of interest. In order to obtain the class-discriminative localization map for the explanation, GradCAM first computes the gradient of the score for class $c$, i.e., $y^c$ before the softmax, with respect to the feature maps $A^k$ of a convolutional layer, as $\frac{\partial y^c}{\partial A^k}$. Then, these back-flowing gradients are global-average-pooled to obtain the neuron importance weight $w^c_k$:
$$w^c_k = E\!\left(\frac{\partial y^c}{\partial A^k}\right), \qquad (4)$$
where $E$ denotes global average pooling. The weight $w^c_k$ reflects a partial linearization of the CNN downstream from $A$ and captures the importance of feature map $k$ for a target class $c$. A weighted combination of the forward activation maps is then obtained by:
$$\mathrm{GradCAM}^c = \mathrm{ReLU}\!\left(\sum_k w^c_k A^k\right), \qquad (5)$$
where $\mathrm{ReLU}(\cdot)$ is applied to filter out the negative values, since we are only interested in the features that positively influence the class of interest.
3 Problem Formulation
The objective of a token-level explanation method for Transformers is to generate a separate score for each input token in order to answer the question: Given an input text and a trained Transformer model, which tokens most influence the model's output? There is no standard definition of influence in the literature [32]. Some works use the term 'importance', whereas others use the term 'relevance', depending on the explanation methods being used. Here we note that the token influence should reflect not only the magnitude of the impact but also its directionality. As such, we define a new concept, Impact Score, to measure both Magnitude of Impact and Directionality. The former addresses the question "Which input tokens contribute most to the output?", and the latter addresses the question "Given an input token, has it made a positive or negative contribution to the output?" Formally, we define the Impact Score generated by our AttCAT method as follows:
Definition 1 (Impact Score) Given a pre-trained Transformer $T(\cdot)$, an input token $x$, and our explanation method $E_{\mathrm{AttCAT}}(\cdot)$, the Impact Score is defined as:
$$\text{Impact Score}(E_{\mathrm{AttCAT}}(T(x))) = \begin{cases} |E_{\mathrm{AttCAT}}(T(x))|, & \text{Magnitude of Impact}, \\ \mathrm{Sign}(E_{\mathrm{AttCAT}}(T(x))), & \text{Directionality}. \end{cases} \qquad (6)$$
Remark 1 (Magnitude of Impact) The magnitude of impact indicates how much contribution has been made by each token. A sort function can be applied to the array of scores for the input tokens to retrieve the most impactful tokens for the output.
Remark 2 (Directionality) The sign reveals whether each token makes a positive or negative impact on the output.
4 Attentive Class Activation Tokens
4.1 Disentangling Information Flows in Transformer
To interpret the inner working mechanism of Transformers, it is essential to understand how the information of each input token flows through each intermediate layer and finally reaches the output. Some previous works [13, 22] use heuristics that treat high attention weights and/or their gradients as indicators of important information flows across layers. Others [15, 14] apply LRP, aiming to dissect the information flows via layer-wise back-propagation. However, these approaches either rely on the simple-but-unreliable assumption of a linear combination of the intermediate layers or ignore major components of the Transformer, i.e., the magnitudes of the features and the skip connection. From Figure 1, we observe that the output sequence of the Transformer model has a one-to-one correspondence to its input sequence. The skip connection is a shortcut that bridges the input and output of the self-attention operation. We note that the Transformer encoder is intuitively an operator that adds the representation of token interactions (via the self-attention mechanism) onto the original representation of the token (via the skip connection). Therefore, from the perspective of information flow, we can specify the $i$-th token's information at the $l$-th layer as:
$$\mathrm{Information}(x^{l}_i) = \mathrm{Information}(x^{l-1}_i) + \mathrm{Interaction}(x^{l-1}_i, x^{l-1}_{n/i}), \qquad (7)$$
where $\mathrm{Information}(x^{l-1}_i)$ represents the information contained in the $i$-th token at the $(l-1)$-th layer, and $\mathrm{Interaction}(x^{l-1}_i, x^{l-1}_{n/i})$ reflects the summation of all pairwise interactions between the $i$-th token and all other tokens ($n/i$). This observation motivates us to interpret the inner working mechanism of Transformers by disentangling the information flow in the Transformer. Thus, considering Eq. 7 as a recurrence relation, the final representation of the $i$-th token consists of the original information (the input) plus the token interactions between the $i$-th token and all other tokens at different layers. Since the CNN's last convolutional layer also encodes both high-level semantics and detailed spatial information, corresponding to the original information and the interactions herein, the way GradCAM is used for explaining a CNN model's output inspired us to design Attentive Class Activation Tokens (AttCAT) to understand the impact of each token on a Transformer model's output.
4.2 Class Activation Tokens
For a pre-trained Transformer, we can always find its output $h^l$ at the $l$-th layer. Assume $h^l$ has $n$ columns, where each column corresponds to an input token (including the special tokens, i.e., [CLS] and [SEP]). We write its columns separately as $h^l_1, \cdots, h^l_i, \cdots, h^l_n$. As $h^L_i$ is the output of the $i$-th token from the last Transformer layer $L$, interpreting the impact of the $i$-th token on the final output $y^c$ for class $c$ would be straightforward if we had a linear relationship between $y^c$ and $h^L_i$:
$$y^c = \sum_i^n w^c_i \cdot h^L_i, \qquad (8)$$
where $w^c_i$ is the linear coefficient vector for $h^L_i$. Inspired by GradCAM [31], we obtain the token importance weights as:
$$w^c_i = \nabla h^L_i = \frac{\partial y^c}{\partial h^L_i}, \qquad (9)$$
where $w^c_i$ illustrates a partial linearization from $h^L_i$ and captures the importance of the $i$-th token for a target class $c$. The Class Activation Tokens (CAT) are then obtained through a weighted combination:
$$\mathrm{CAT}^L_i = \nabla h^L_i \odot h^L_i, \qquad (10)$$
where $\odot$ is the Hadamard product. $\mathrm{CAT}^L_i$ denotes the impact score of the $i$-th token at the $L$-th layer towards class $c$. Note that we do not apply $\mathrm{ReLU}(\cdot)$ to filter out the negative scores here, since we also care about the directionality of the impact score.
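A minimal PyTorch sketch of Eqs. 9-10 follows. The linear head standing in for the real classifier, and the reduction of each token's CAT vector to a scalar by summing over the feature dimension, are our assumptions for illustration; the text does not spell out that reduction.

```python
import torch

def class_activation_tokens(h_L, y_c):
    """Eqs. 9-10: CAT^L_i = (dy^c / dh^L_i) ⊙ h^L_i for every token i.

    h_L: (n, d) final-layer token outputs with requires_grad=True.
    y_c: scalar class-c logit computed from h_L.
    Returns one signed score per token (summed over the d features).
    """
    grads = torch.autograd.grad(y_c, h_L, retain_graph=True)[0]  # Eq. 9, (n, d)
    cat = grads * h_L                                            # Eq. 10, Hadamard
    return cat.sum(dim=-1)                                       # (n,) per-token scores

# Toy usage: a linear head over mean-pooled tokens stands in for the model.
torch.manual_seed(0)
n, d = 5, 8
h_L = torch.randn(n, d, requires_grad=True)
w_c = torch.randn(d)
y_c = h_L.mean(dim=0) @ w_c          # scalar logit for class c
token_scores = class_activation_tokens(h_L, y_c)
```

Keeping the raw signs, rather than applying ReLU as GradCAM does, is what lets the scores carry the directionality of Definition 1.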
4.3 Attentive CAT
While CAT explains the model's output according to the attribution of each individual token's encoder output (Eq. 8), it does not consider the interaction among tokens, which is revealed via the self-attention mechanism. The self-attention mechanism [18] assigns a pairwise similarity score between every two tokens as the attention weight, encoding the important interaction information of these tokens. Therefore, we integrate the self-attention weights with CAT to further incorporate the token interaction information, better quantifying the impact of each token on the Transformer model's output. Our Attentive CAT (AttCAT) at the $L$-th layer for the $i$-th token is then formulated as:
$$\mathrm{AttCAT}^L_i = E_H(\alpha^L_i \cdot \mathrm{CAT}^L_i), \qquad (11)$$
where $\alpha^L_i$ denotes the attention weights of the $i$-th token at the $L$-th layer, and $E_H(\cdot)$ denotes averaging over multiple heads. Recall that Eq. 7 represents a recurrence relation, so we can always take the output of the $l$-th layer and assign it as $y^l_i$; we can then use Eqs. 9, 10, and 11 to formulate $\mathrm{AttCAT}^l_i$, the impact score of the $i$-th token at the $l$-th layer. Finally, unlike the Rollout and TransAtt methods, which apply the rollout operation, we sum $\mathrm{AttCAT}^l_i$ over all Transformer layers to obtain the final impact score of the $i$-th token:
$$\mathrm{AttCAT}_i = \sum_{j=1}^{L} \mathrm{AttCAT}^j_i. \qquad (12)$$
We empirically demonstrate in Figure 5 that this summation is more effective than the rollout operation.
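Putting Eqs. 11-12 together, the per-layer aggregation might look as follows. Since the excerpt leaves the exact reduction of the attention row $\alpha^l_i$ to a per-token scalar unspecified, we assume here that each token's CAT score is weighted by the head-averaged attention it receives (a column sum); treat that choice, and the shapes, as illustrative assumptions rather than the paper's definitive recipe.

```python
import torch

def attcat_scores(cats, attentions):
    """Eqs. 11-12: head-average the attention, weight CAT, sum over layers.

    cats: list of L tensors of shape (n,) - per-layer, per-token CAT scores.
    attentions: list of L tensors of shape (H, n, n) - per-layer attention.
    Returns an (n,) tensor of final impact scores, one per token.
    """
    total = torch.zeros_like(cats[0])
    for cat_l, att_l in zip(cats, attentions):
        alpha = att_l.mean(dim=0)          # E_H: average over the H heads, (n, n)
        token_att = alpha.sum(dim=0)       # attention received by each token, (n,)
        total += token_att * cat_l         # Eq. 11, accumulated as in Eq. 12
    return total

# Toy usage: L = 2 layers, H = 4 heads, n = 5 tokens.
torch.manual_seed(0)
cats = [torch.randn(5) for _ in range(2)]
atts = [torch.softmax(torch.randn(4, 5, 5), dim=-1) for _ in range(2)]
impact = attcat_scores(cats, atts)
```

Summing over layers, rather than multiplying as rollout does, keeps the shallow-layer contributions alive even when the deep layers are near-uniform.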
5 Experiments
5.1 Desirable Properties of an Explanation Technique
We first introduce two desirable properties of an explanation method, faithfulness and confidence, along with metrics to systematically evaluate the performance of various explanation techniques.
Faithfulness quantifies the fidelity of an explanation technique by measuring whether the tokens identified indeed impact the output. We adopt two metrics from prior work to evaluate the faithfulness of word-level explanations: the area over the perturbation curve (AOPC) [33, 34] and the Log-odds score [35, 34]. These two metrics measure local fidelity by deleting or masking the top k% scored words and comparing the probability change on the predicted label.
Confidence: A token can receive several saliency scores, indicating its contribution to the prediction of each class. The tokens with higher impact scores for the predicted class c should have lower impact scores for the remaining classes. In other words, an explanation technique should be highly confident in recognizing the most impactful tokens for the desired class (usually the predicted class); these tokens should, in turn, have the most negligible impact on the other classes. We use the Kendall-τ correlation, a statistic measuring the strength of association between the ranked scores of different classes, to evaluate the confidence of an explanation method.
5.2 Experiment Settings
Transformer models: BERT [2] is one of the most representative Transformer models, with impressive performance across a variety of NLP tasks, e.g., sentiment analysis and question answering. We use the BERTbase model and some variants (i.e., DistillBERT [36] and RoBERTa [37]) in our experiments. Our method can be generally applied to other Transformer architectures with minor modifications. The pre-trained models from Huggingface (https://huggingface.co/) are used for validating our explanation method and comparing it to others. More details of these Transformer models and their prediction performance are presented in Appendix A.
Datasets: We evaluate the performance using the following exemplar tasks: sentiment analysis on the SST2 [38], Amazon Polarity, Yelp Polarity [39], and IMDB [40] data sets; natural language inference on the MNLI [41] data set; paraphrase detection on the QQP [42] data set; and question answering on the SQuADv1 [43] and SQuADv2 [44] data sets. More details of these data sets are described in Appendix B.
Baseline methods: Several baseline explanation methods for Transformers are compared in our experiments, including the attention-based methods (i.e., RawAtt and Rollout [13]), the attention-gradient-based methods (i.e., Grads and AttGrads [22]), and the LRP-based methods (i.e., PartialLRP [14] and TransAtt [15]). CAT, which does not incorporate attention weights, is an ablated version of AttCAT. Figure 2 summarizes and compares these methods with formulations.
5.3 Evaluation Metrics
AOPC: Deleting the top k% of words, AOPC calculates the average change of the prediction probability on the predicted class over all test examples as follows:
$$\mathrm{AOPC}(k) = \frac{1}{N}\sum_{i=1}^{N} p(\hat{y} \mid x_i) - p(\hat{y} \mid \tilde{x}^k_i), \qquad (13)$$
where $N$ is the number of examples, $\hat{y}$ is the predicted label, $p(\hat{y} \mid \cdot)$ is the probability on the predicted class, and $\tilde{x}^k_i$ is constructed by removing the top k% scored words from $x_i$. To avoid choosing an arbitrary $k$, we remove 0, 10, 20, ..., 100% of the tokens in order of decreasing saliency, thus arriving at $\tilde{x}^0_i, \tilde{x}^{10}_i, \cdots, \tilde{x}^{100}_i$. Higher AOPC values are better, meaning the deleted words are more impactful on the model's output.
LOdds: The Log-odds score is calculated by averaging the difference of the negative logarithmic probabilities on the predicted class over all test examples before and after masking the top k% scored words with zero paddings:
$$\mathrm{LOdds}(k) = \frac{1}{N}\sum_{i=1}^{N} \log \frac{p(\hat{y} \mid \tilde{x}^k_i)}{p(\hat{y} \mid x_i)}. \qquad (14)$$
The notation is the same as in Eq. 13, with the only difference that $\tilde{x}^k_i$ is constructed by replacing the top k% of words with the special token [PAD] in $x_i$. Lower LOdds scores are better.
Kendall correlation: We use the Kendall-τ to evaluate the confidence of an explanation method; formally:
$$\text{Kendall correlation} = \frac{1}{N}\sum_{i=1}^{N} \text{Kendall-}\tau\big(S(x_i)_c, S(x_i)_{C/c}\big), \qquad (15)$$
where $S(x_i)$ denotes an array of the token indices in order of decreasing saliency (or attribution, relevance, or impact) scores for a test example. A lower Kendall correlation demonstrates that the explanation method is more confident in generating the saliency scores for predicting the class $c$.
Precision@K: Inspired by the original Precision@K used in recommender systems [45], we design a novel Precision@K to evaluate the explanation performance on the SQuAD data sets. For each test example, we count the number of tokens in the answer that appear among the K top-scored tokens as Precision@K. Higher Precision@K scores are therefore better.
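As a sanity check on Eqs. 13-14, the two faithfulness metrics reduce to a few lines once the perturbed probabilities are in hand; the sketch below assumes the model querying (building the x̃ inputs by deleting or masking tokens) has already been done, so only the averaging steps are shown.

```python
import numpy as np

def aopc_and_lodds(p_orig, p_pert):
    """Eqs. 13-14 at a fixed corruption rate k.

    p_orig: (N,) predicted-class probabilities on the original inputs.
    p_pert: (N,) probabilities after deleting (for AOPC) or masking
            (for LOdds) the top-k% scored tokens of each input.
    """
    p_orig, p_pert = np.asarray(p_orig), np.asarray(p_pert)
    aopc = np.mean(p_orig - p_pert)               # Eq. 13, higher is better
    lodds = np.mean(np.log(p_pert / p_orig))      # Eq. 14, lower is better
    return aopc, lodds

# Toy usage with N = 4 examples.
aopc, lodds = aopc_and_lodds([0.9, 0.8, 0.95, 0.7], [0.4, 0.5, 0.6, 0.3])
```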
6 Results and Discussions
6.1 Quantitative Evaluations
The quantitative evaluations in this section demonstrate that our AttCAT method outperforms the baseline methods on the vast majority of data sets and tasks. Table 1 depicts the results of the various explanation methods and data sets. We report the average AOPC and LOdds scores over the k values. Due to computation costs, we experiment on a subset of 2,000 randomly selected samples for the Amazon, Yelp, and IMDB data sets; the entire test sets are used for the other data sets. AttCAT achieves the highest AOPC and lowest LOdds scores in most settings, demonstrating that the most impactful tokens for the model prediction have been deleted or replaced. Among all the compared methods, the attention-based methods (i.e., RawAtt and Rollout) perform worst, since attention weights alone, without considering the magnitudes of the feature values, are not adequate to analyze the inner working mechanism of Transformers. Remarkably, AttCAT also outperforms TransAtt, a recent work representing a strong baseline method. The performance of CAT, shown here as an ablation study, drops markedly, corroborating the effectiveness of using the self-attention weights in AttCAT. We also report the AOPC and LOdds scores of the different methods in explaining BERT when deleting or masking the bottom k% of words on the different data sets in Appendix Table 5. There, our AttCAT achieves the lowest AOPC and highest LOdds, demonstrating that AttCAT efficiently captures the most impactful tokens for the model predictions.
Figure 3 illustrates how the evaluation metrics, namely AOPC and LOdds, change over varying corruption rates (via removing or masking the k% top-scored words). Our AttCAT method achieves the highest AOPC and the lowest LOdds scores within a corruption rate k of 50% or less, further demonstrating that AttCAT detects the most impactful words for the model predictions. Table 2 shows the Kendall-τ based confidence scores of the different explanation techniques for BERT tested on various data sets. We do not report the confidence scores of the attention-based methods, since they are class agnostic. AttCAT achieves the best performance on most data sets; different classes observe distinctively sorted tokens, leading to much lower Kendall correlations. In other words, our AttCAT is highly confident in recognizing the most impactful tokens for predicting the class of interest. We show the Precision@K scores for the SQuAD data sets in Figure 4, with K set to 20. Our results clearly demonstrate that AttCAT is superior to the other methods and generalizes well to various BERT architectures on the SQuAD data sets. The higher score means that AttCAT captures more impactful answer tokens among the top-20 sorted tokens, proving its capability to generate more faithful explanations. The results for varying K values are shown in Appendix Figures 8, 9, 10, and 11.
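The Precision@K metric used above is a simple top-K intersection count; the sketch below follows the definition in Section 5.3, with the raw (unnormalized) count being our literal reading of it.

```python
import numpy as np

def precision_at_k(impact_scores, answer_positions, k=20):
    """Count how many ground-truth answer tokens land in the top-k scores.

    impact_scores: (n,) per-token impact scores for one example.
    answer_positions: set of indices of the answer tokens in the input.
    """
    top_k = np.argsort(impact_scores)[::-1][:k]       # indices, highest first
    return len(set(top_k.tolist()) & set(answer_positions))

# Toy usage: a 10-token input whose answer spans positions 3 and 4.
scores = np.array([0.1, -0.2, 0.05, 0.9, 0.8, 0.0, 0.3, -0.1, 0.2, 0.4])
hits = precision_at_k(scores, {3, 4}, k=3)            # -> 2
```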
6.2 Qualitative Visualizations
Lastly, we show a heatmap of the normalized impact scores generated by AttCAT in Figure 5. The first 12 rows (L0-L11) show the impact scores of each token from the different BERT layers. A darker-shaded token represents a higher score, as shown in the legend, and the signs of the scores indicate their directionality. This heatmap also justifies the effectiveness of the summation operation used in Eq. 12. As shown in the figure, the impact scores become uniform and less impactful as the layers go deeper, which is consistent with the observation from [13], where the authors argue that the embeddings become more contextualized and tend to carry similar information in the deeper layers. Thus, the rollout operation used in [13, 15] attenuates the impact scores at the shallower layers (i.e., L0-L9), since they are multiplied by the scores at the deeper layers (i.e., L10-L11). As shown in the 'Rollout' row of the figure, the rollout operation yields only minimal impact scores for the tokens, indicating that essentially no information has been captured for the explanation. In contrast, the summation operation (ours), shown in the 'Sum' row, generates a faithful explanation incorporating the impact scores from each layer. In terms of the Impact Score, the token 'not', with the highest positive impact score (0.72), contributes most to the negative sentiment of this sentence, whereas the token 'like', with the highest negative impact score (-0.37), contributes inversely.
The ground-truth answer of the question answering example shown in Figure 6a is "denver broncos". AttCAT successfully captures these two tokens with the darkest green shades, corresponding to the highest impact scores. The example from SST2 shown in Figure 6b has a negative sentiment. Both AttCAT and TransAtt capture the most impactful tokens, such as 'boring', 'didn', and 't', which contribute most to the negative sentiment prediction. Besides the tokens explaining the negative sentiment, our AttCAT method also identifies some tokens that contribute inversely to the negative sentiment, e.g., 'like' and 'really' (shown in a dark shade of red), whereas TransAtt is not capable of differentiating positive and negative contributions. RawAtt gives more attention to some irrelevant tokens, i.e., 'overall', 'but', and the punctuation. Rollout only generates uniformly distributed importance scores for the tokens.
7 Conclusion
This work addresses the major issues in generating faithful and confident explanations for Transformers via a novel attentive class activation tokens approach. AttCAT leverages the features, their gradients, and the corresponding attention weights to define so-called impact scores, which quantify the impact of inputs on the model's outputs. The impact score gives both the magnitude and the directionality of an input token's impact. We conduct extensive experiments on different Transformer models and data sets and demonstrate that our AttCAT achieves the best performance among strong baseline methods, as measured by quantitative metrics and qualitative visualizations. Even though our current AttCAT approach is mainly designed for BERT architectures on NLP tasks, it can naturally be extended to Vision Transformer architectures on computer vision tasks as future work. Since there are various versions of Transformer architectures, e.g., ViT [3] and the Swin Transformer [4], which differ substantially from the Transformers used on NLP tasks, this opens up new avenues for extending AttCAT to explain these models' predictions.
Acknowledgments
This work is supported by the National Science Foundation under grant IIS-2211897.
1. What is the focus and contribution of the paper on Transformer explanation techniques?
2. What are the strengths of the proposed approach, particularly in its detailed work-through and formulas?
3. What are the weaknesses of the paper regarding its novelty compared to former methods and its limited qualitative and quantitative evaluation methods?
4. Do you have any concerns or questions regarding the evaluation performance of AttCAT, particularly when compared to other methods such as TransAtt and AttGrads?
5. What are the limitations of the paper, specifically regarding the consistency of AttCAT's improvement across different datasets?
Summary Of The Paper
The paper proposes a Transformer explanation technique via attentive class activation tokens, aka AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for the Transformer's output.
Strengths And Weaknesses
• The work clearly defines the problem and presents a detailed walk-through, with formulas, of the proposed method.
• The work also presents experiments with obvious gains to demonstrate the effectiveness of the proposed method.
• The proposed method seems to have limited novel differences from the former method, CAT.
• The work needs more qualitative and quantitative evaluation methods to prove why AttCAT is better than former methods at helping people understand Transformers, instead of just showing performance numbers.
Questions
• In Table 1, when evaluating on QQP, TransAtt seems to have better evaluation performance than the proposed method, AttCAT. Is there a reason why this happens? A similar pattern can be observed in Table 2, where AttGrads also performs better than the proposed method, AttCAT. Does this mean the evaluation performance heavily depends on the dataset and may not be consistent? Is the improvement of AttCAT therefore also conditional?
• In the qualitative comparison, the authors do not compare AttCAT against CAT. I think this is the most important comparison, especially since the difference between AttCAT and CAT is very limited.
Limitations
The work does not discuss its limitations. In particular, referring to the above, it would be important to know on which datasets AttCAT is better than former methods, and on which datasets, similar to QQP, AttCAT may not perform better.
NIPS
Title AttCAT: Explaining Transformers via Attentive Class Activation Tokens Abstract Transformers have improved the state-of-the-art in various natural language processing and computer vision tasks. However, the success of the Transformer model has not yet been duly explained. Current explanation techniques, which dissect either the self-attention mechanism or gradient-based attribution, do not necessarily provide a faithful explanation of the inner workings of Transformers due to the following reasons: first, attention weights alone without considering the magnitudes of feature values are not adequate to reveal the self-attention mechanism; second, whereas most Transformer explanation techniques utilize self-attention module, the skip-connection module, contributing a significant portion of information flows in Transformers, has not yet been sufficiently exploited in explanation; third, the gradient-based attribution of individual feature does not incorporate interaction among features in explaining the model’s output. In order to tackle the above problems, we propose a novel Transformer explanation technique via attentive class activation tokens, aka, AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for Transformer’s output. Extensive experiments are conducted to demonstrate the superior performance of AttCAT, which generalizes well to different Transformer architectures, evaluation metrics, datasets, and tasks, to the baseline methods. Our code is available at: https://github.com/qiangyao1988/AttCAT. 1 Introduction Transformers have advanced the state-of-the-art on a variety of natural language processing tasks [1, 2] and see increasing popularity in the field of computer vision [3, 4]. The main innovation behind the Transformer models is the stacking of multi-head self-attention layers to extract global features from sequential tokenized inputs. However, the lack of understanding of their mechanism increases the risk of deploying them in real-world applications [5, 6, 7, 8, 9]. This has motivated new research on explaining Transformers output to assist trustworthy human decision-making [10, 11, 12, 13, 14, 15, 16, 17]. The self-attention mechanism [18] in Transformers assigns a pairwise score capturing the relative importance between every two tokens or image patches as attention weights. Thus, a common practice is to use these attention weights to explain the Transformer model’s output by exhibiting the importance distribution over the input tokens [6]. The baseline method, shown as RawAtt in Figure 2, utilizes the raw attention weights from a single layer or a combination of multiple layers [10]. However, recent studies [11, 12, 13] question whether highly attentive inputs significantly impact the model outputs. Serrano et al. [11] demonstrate that erasing the representations accorded high attention weights do not necessarily lead to a performance decrease. Jain et al. [12] suggest that “attention is not explanation” by observing that attention scores are frequently inconsistent with other feature importance indicators like gradient-based measures. Abnar et al. [13] argue that the contextual information from tokens gets more similar as going deeper into the model, leading to unreliable 36th Conference on Neural Information Processing Systems (NeurIPS 2022). explanations using the raw attention weights. 
The authors propose two methods to combine the attention weights across multiple layers to cope with this issue. Their attention rollout method, shown as Rollout in Figure 2, reassigns the important scores to the tokens through the linear combination of attention weights across the layers tracing the information flow in Transformer. However, the rollout operation canceled out the accumulated important scores as some deeper layers have almost uniformly distributed attention weights. The attention flow method is formulated as a max-flow problem by dissecting the graph of pairwise attentions. While it somewhat outperforms the rollout method in specific scenarios, it is not ready to support large-scale evaluations [15]. Recently, Bastings et al. [19] advocate using saliency method as opposed to attention as explanations. Although some gradient-based methods [20, 21, 22, 23] have been proposed to leverage salience for explaining Transformer’s output, most of them still focus on the gradients of attention weights, i.e., Grads and AttGrads as shown in Figure 2. They suffer from a similar limitation to the abovementioned attention-based methods. Layer-wise Relevance Propagation (LRP) method [24, 25], which is also considered as a type of saliency method, propagates relevance scores from the output layer to the input. There has been a growing body of work on using LRP to explain Transformers [14, 15]. Voita et al. [14] use LRP to capture the relative importance of the attention heads within each Transformer layer (shown as PartialLRP in Figure 2). However, this approach is limited by only providing partial information on each self-attention head’s relevance; no relevance score is propagated back to the input. To address this problem, Chefer et al. [15] provide a comprehensive treatment of the information propagation within all components of the Transformer model, which back-propagates the information through all layers from the output back to the input. This method further integrates gradients from the attention weights, shown as TransAtt in Figure 2. However, TransAtt relies on the specific LRP rules that is not applicable for other attention modules, e.g., co-attention. Thus it can not provide explanations for all transformer architectures [26]. As such, the existing Transformer explanation techniques are not completely satisfactory due to three major issues. First, most attention-based methods disregard the magnitudes of the features. The summation operation (Eq. 2 shown in Figure 1) demonstrates both attention weights (the green circles) and the feature (the blue circles) contribute to the weighted outputs (the red circles). In other words, since the self-attention mechanism involves the computation of queries, keys, and values, reducing it only to the derived attention weights (inner products of queries and keys) is not ideal. Second, besides the self-attention mechanism, skip connection as another major component in Transformer is not even considered in current techniques. The latter enables the delivery and integration of information by adding an identity mapping from inputs to outputs, trying to solve the model optimization problem from the perspective of information transfer [27]. Moreover, Lu et al. [28] find that a significant portion of information flow in BERT goes through the skip connection instead of the attention heads (i.e., three times more often than attention on average). 
Thus, attention alone, without considering the skip connection, is not sufficient to characterize the inner working mechanism of Transformers. Third, the individual feature attribution-based approaches [15, 14, 29, 30] cannot capture the pairwise interactions of feature since gradients or relevance scores are calculated independently for each individual feature. For example, the gradients directly go through the Transformer layers from the output to the specific input (the token ‘like’), shown in Figure 1. We propose Attentive Class Activation Tokens (AttCAT) to generate token-level explanations leveraging features, their gradients, and their self-attention weights. Inspired by GradCAM [31], which uses gradient information flowing into the last convolutional layer of the Convolutional Neural Network (CNN) to understand the importance of each neuron for the decision of interest, our approach quantifies the impact of each token to the class-specific output via its gradient information. We further leverage the self-attention weights to capture the global contextual information of each token since it determines the relative importance of a single token concerning all other tokens in the input sequence. By disentangling the information flow across the Transformer layers for a specific token into the information from itself via a skip connection and the interaction information among all the tokens via a self-attention mechanism, we integrate the impact scores, which are generated using AttCAT, from multiple layers to give the final explanation. A summary of the baseline methods and our AttCAT method is shown in Figure 2, demonstrating their main similarities and differences. The RawAtt and Rollout [13] methods simply use the attention weights (α). The Grads method leverages the gradients of attention weights (∇αL) from the last Transformer layer, while the AttGrads method [22] integrates the attention weights (α) and their gradients (∇α) from all Transformer layers. The PartialLRP method [14] applies LRP only on the last Transformer layer (RL). Differently, the TransAtt method [26] integrates the relevance scores (R) from LRP and the gradients of attention weights (∇α). We use CAT, a new gradient-based attribution method leveraging the features (h) and their gradients (∇h), as our in-house baseline method. We further integrate attention weights (α) with CAT as the proposed AttCAT method. We state our contributions as follows: • We propose a novel Transformer explanation technique, AttCAT, leveraging the features, their gradients together with attention weights to generate the so-called impact scores to quantify the influence of inputs on the model’s outputs. • Our AttCAT exploits both the self-attention mechanism and skip connection to explain the inner working mechanism of Transformers via disentangling information flows between intermediate layers. • Furthermore, our class activation based method is capable of discriminating positive and negative impacts toward the model’s output using the directional information of the gradients. • Finally, we conduct extensive experiments on different Transformer architectures, datasets, and Natural Language Processing (NLP) tasks, demonstrating a more faithful and confident explanation than the baseline methods using several quantitative metrics and qualitative visualizations. 2 Preliminaries 2.1 Self-Attention Mechanism The encoders in Transformer model [1] typically stack L identical layers. 
Each contains two sublayers: (a) a multi-head self-attention module and (b) a feed-forward network module, coupled with layer normalization and skip connection. As illustrated in Figure 1, each encoder computes the output h (l) i ∈ Rd of the i-th token combining the previous encoder’s corresponding output h (l−1) i from the skip connection and a sequence output h(l−1) = {h(l−1)1 , · · · ,h (l−1) i , · · · ,h (l−1) n } ⊆ Rd through self-attention mechanism: αli,j := softmax ( Q(h (l−1) i )K(h (l−1) j ) T √ d ) ∈ R, (1) hli = W O n∑ j=1 αi,jV (hj (l−1)) + h (l−1) i , (2) where αli,j is the attention weight assigned to the j-th token for computing h (l) i . d denotes the dimension of the vectors. Here, Q(·), K(·), and V (·) are the query, key, and value transformations: Q(h) := WQh, K(h) := WKh, V (h) := WV h, (WQ,WK ,WV ) ∈ Rd×d, (3) respectively. We drop the bias parameters in these equations for simplicity. For multi-head attentions, we concatenate the output from each head. 2.2 Class Activation Map GradCAM [31] is one the most successful CAM-based methods using the gradient information flowing into the last convolutional layer of CNN to understand the importance of each neuron for the decision of interest. In order to obtain the class discriminative localization map for the explanation, Grad-CAM first computes the gradient of the score for class c, i.e., yc before the softmax, concerning feature maps Ak of a convolutional layer as ∂y c ∂Ak . Then, these flowing back gradients are global-average-pooled to obtain the neuron importance weight wck: wck = E ( ∂yc ∂Ak ) , (4) where E denotes the global-average-pooling. The weight wck reflects a partial linearization of the CNN downstream from A and captures the importance of feature map k for a target class c. Then a weighted combination of forward activation maps is obtained by: GradCAMc = ReLU (∑ k wckA k ) , (5) where ReLU() is applied to filter out the negative values since we are only interested in the features that positively influence the class of interest. 3 Problem Formulation The objective of a token-level explanation method for Transformer is to generate a separate score for each input token in order to answer the question: Given an input text and a trained Transformer model, which tokens mostly influence the model’s output? There is no standard definition of influence in literature [32]. Some works use the term ‘importance’, whereas others use the term ‘relevance’ depending on the explanation methods being used. Here we note that the token influence should reflect not only the magnitude of impact but also its directionality. As such, we define a new concept, Impact Score, to measure both Magnitude of Impact and Directionality. The former addresses the question “Which input tokens contribute mostly to the output?”. And the latter addresses the question “Given an input token, have positive or negative contributions been made to the output?” Formally, we define the Impact Score generated by our AttCAT method as follows: Definition 1 (Impact Score) Given a pre-trained Transformer T (·), an input token x, and our explanation method EAttCAT(·). Impact Score is define as: Impact Score(EAttCAT(T (x))) = { |EAttCAT(T (x))|, Magnitude of Impact, Sign(EAttCAT(T (x))), Directionality. (6) Remark 1 (Magnitude of Impact) The magnitude of impact indicates how much contribution has been made by each token. A sort function can be applied to the array of scores for the input tokens to retrieve the most impactful tokens on the output. 
Remark 2 (Directionality) The sign reveals whether each token makes a positive or negative impact on the output. 4 Attentive Class Activation Tokens 4.1 Disentangling Information Flows in Transformer To interpret the inner working mechanism of Transformers, it is essential to understand how the information of each input token flows through each intermediate layer and finally reaches the output. Some previous works [13, 22] use heuristics to treat high attention weights and/or their gradients as indicators of important information flows across layers. Others [15, 14] apply LRP aiming to dissect the information flows via layer-wise back-propagation. However, these approaches either rely on the simple-but-unreliable assumption of linear combination of the intermediate layers or ignore the major components of Transformer, i.e., the magnitudes of the features and the skip connection. From Figure 1, we observe that the output sequence of the Transformer model has a one-to-one correspondence to its input sequence. The skip connection is a shortcut that bridges the input and output of the self-attention operation. We note that the Transformer encoder intuitively is an operator that adds the representation of token interactions (via self-attention mechanism) onto the original representation of the token (via skip connection). Therefore, from a perspective of information flow, we can specify the i-th token’s information at the (l)-th layer as: Information(xli) = Information(x l−1 i ) + Interaction(x l−1 i ,x l−1 n/i ), (7) where Information(xl−1i ) represents the information contained in the i-th token at the (l-1)-th layer, and Interaction(xl−1i ,x l−1 n/i ) reflects the summation of all pairwise interaction between the i-th token and all other tokens (n/i). This observation motivates us to interpret the inner working mechanism of Transformers via disentangling the information flow Transformer. Thus, considering Eq. 7 as a recurrence relation, the final representation of the i-th token then consists of the original information (the input) plus token interactions between the i-th token and all other tokens at different layers. Since the CNN’s last convolutional layer also encodes both high-level semantics and detailed spatial information, corresponding to the original information and the interactions herein, the way GradCAM used for explaining a CNN model’s output inspired us to design Attentive Class Activation Tokens (AttCAT) to understand the impact of each token on a Transformer model’s output. 4.2 Class Activation Tokens For a pre-trained Transformer, we can always find its output hl at l-th layer. Assume hl has n columns, each column corresponds to an input token (including the paddings, i.e., [CLS] and [SEP]). We write its columns separately as hl1, · · · ,hli, · · · ,hln. As hLi is the output of i-th token from the last Transformer layer L, to interpret the impact of i-th token to the final output yc for class c, it would be straightforward if we have a linear relationship between yc and hLi as follows: yc = n∑ i wci · hLi , (8) where wci is the linear coefficient vector for h L i . Inspired by GradCAM [31], we obtain the token important weights as: wci = ∇hLi = ∂yc ∂hLi , (9) where wci illustrates a partial linearization from h L i and captures the importance of i-th token to a target class c. Class Activation Tokens (CAT) is then obtained through a weighted combination: CATLi = ∇hLi ⊙ hLi , (10) where ⊙ is the Hadamard product. 
4.3 Attentive CAT

While CAT explains the model's output according to the attribution of each individual token's encoder output (Eq. 8), it does not consider the interaction among tokens, which is revealed via the self-attention mechanism. The self-attention mechanism [18] assigns a pairwise similarity score between every two tokens as the attention weight, encoding the important interaction information of these tokens. Therefore, we integrate the self-attention weights with CAT to further incorporate the token interaction information and better quantify the impact of each token on the Transformer model's output. Our Attentive CAT (AttCAT) at the $L$-th layer for the $i$-th token is then formulated as:

$$\mathrm{AttCAT}^L_i = \mathbb{E}_H\big(\alpha^L_i \cdot \mathrm{CAT}^L_i\big), \tag{11}$$

where $\alpha^L_i$ denotes the attention weights of the $i$-th token at the $L$-th layer, and $\mathbb{E}_H(\cdot)$ denotes averaging over the multiple heads. Recall that Eq. 7 represents a recurrence relation; we can therefore take the output of any $l$-th layer, assign it as $y^l_i$, and use Eqs. 9, 10, and 11 to formulate $\mathrm{AttCAT}^l_i$, the impact score of the $i$-th token at the $l$-th layer. Finally, different from the Rollout and TransAtt methods, which apply the rollout operation, we sum $\mathrm{AttCAT}^l_i$ over all Transformer layers to obtain the final impact score of the $i$-th token:

$$\mathrm{AttCAT}_i = \sum_{j=1}^{L} \mathrm{AttCAT}^j_i. \tag{12}$$

We empirically demonstrate that this summation is more effective than the rollout operation in Figure 5.

5 Experiments

5.1 Desirable Properties of an Explanation Technique

We first introduce two desirable properties of an explanation method, faithfulness and confidence, along with metrics to systematically evaluate the performance of various explanation techniques.

Faithfulness quantifies the fidelity of an explanation technique by measuring whether the tokens it identifies indeed impact the output. We adopt two metrics from prior work to evaluate the faithfulness of word-level explanations: the area over the perturbation curve (AOPC) [33, 34] and the Log-odds score [35, 34]. These two metrics measure local fidelity by deleting or masking the top k% scored words and comparing the resulting change in probability on the predicted label.

Confidence: A token can receive several saliency scores, one indicating its contribution to the prediction of each class. Tokens with high impact scores for the predicted class c should have low impact scores for the remaining classes. In other words, an explanation technique should be highly confident in recognizing the most impactful tokens for the desired class (usually the predicted class), and these tokens should have negligible impact on the other classes. We use the Kendall-τ correlation, a statistic measuring the strength of association between the ranked scores of different classes, to evaluate the confidence of an explanation method.

5.2 Experiment Settings

Transformer models: BERT [2] is one of the most representative Transformer models, with impressive performance across a variety of NLP tasks, e.g., sentiment analysis and question answering. We use the BERT-base model and some of its variants (i.e., DistilBERT [36] and RoBERTa [37]) in our experiments. Our method can be generally applied to other Transformer architectures with minor modifications. Pre-trained models from Hugging Face (https://huggingface.co/) are used for validating our explanation method and comparing it to others. More details of these Transformer models and their prediction performance are presented in Appendix A. A minimal loading sketch follows.
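As an illustration of this setup, the sketch below loads a pre-trained BERT sentiment classifier from the Hugging Face hub with attentions and hidden states exposed, which is all the CAT/AttCAT computations above require. The checkpoint name is a stand-in chosen for the example (the checkpoints actually used are listed in Appendix A), and the last line reuses the `cat_scores` sketch from Section 4.2.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint chosen for illustration; Appendix A lists the
# models actually evaluated in the paper.
name = "textattack/bert-base-uncased-SST-2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, output_attentions=True, output_hidden_states=True)
model.eval()

enc = tokenizer("overall it was boring , i didn't really like it",
                return_tensors="pt")
out = model(**enc)
# out.attentions:    L tensors of shape (1, heads, n, n) -> alpha in Eq. 11
# out.hidden_states: L+1 tensors of shape (1, n, d)      -> h^l in Eqs. 9-10
pred = out.logits.argmax(dim=-1).item()
# Signed per-token CAT scores at an intermediate layer (here layer 6),
# using the cat_scores sketch defined earlier:
scores = cat_scores(out.hidden_states[6], out.logits, pred)
```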
Datasets: We evaluate the performance on the following exemplar tasks: sentiment analysis on the SST2 [38], Amazon Polarity, Yelp Polarity [39], and IMDB [40] data sets; natural language inference on the MNLI [41] data set; paraphrase detection on the QQP [42] data set; and question answering on the SQuADv1 [43] and SQuADv2 [44] data sets. More details of these data sets are described in Appendix B.

Baseline methods: Several baseline explanation methods for Transformers are compared in our experiments, including the attention-based methods (i.e., RawAtt and Rollout [13]), the attention-gradient-based methods (i.e., Grads and AttGrads [22]), and the LRP-based methods (i.e., PartialLRP [14] and TransAtt [15]). CAT, which does not incorporate attention weights, serves as an ablation version of AttCAT. Figure 2 summarizes and compares these methods along with their formulations.

5.3 Evaluation Metrics

AOPC: After deleting the top k% of words, AOPC calculates the average change of the prediction probability on the predicted class over all test examples:

$$\mathrm{AOPC}(k) = \frac{1}{N} \sum_{i=1}^{N} \left( p(\hat{y} \mid x_i) - p(\hat{y} \mid \tilde{x}^k_i) \right), \tag{13}$$

where $N$ is the number of examples, $\hat{y}$ is the predicted label, $p(\hat{y} \mid \cdot)$ is the probability on the predicted class, and $\tilde{x}^k_i$ is constructed by removing the k% top-scored words from $x_i$. To avoid choosing an arbitrary k, we remove 0, 10, 20, ..., 100% of the tokens in order of decreasing saliency, thus arriving at $\tilde{x}^0_i, \tilde{x}^{10}_i, \cdots, \tilde{x}^{100}_i$. Higher AOPC values are better, meaning the deleted words are more impactful on the model's output.

LOdds: The Log-odds score is calculated by averaging, over all test examples, the difference of the negative logarithmic probabilities on the predicted class before and after masking the k% top-scored words with zero paddings:

$$\mathrm{LOdds}(k) = \frac{1}{N} \sum_{i=1}^{N} \log \frac{p(\hat{y} \mid \tilde{x}^k_i)}{p(\hat{y} \mid x_i)}. \tag{14}$$

The notation is the same as in Eq. 13, with the only difference being that $\tilde{x}^k_i$ is constructed by replacing the top k% of words in $x_i$ with the special token [PAD]. Lower LOdds scores are better.

Kendall correlation: We use the Kendall-τ to evaluate the confidence of an explanation method, formally:

$$\mathrm{Kendall\ correlation} = \frac{1}{N} \sum_{i=1}^{N} \text{Kendall-}\tau\big(S(x_i)_c,\, S(x_i)_{C/c}\big), \tag{15}$$

where $S(x_i)$ denotes an array of token indices in order of decreasing saliency (or attribution, relevance, or impact) scores for a test example. A lower Kendall correlation indicates that the explanation method is more confident in generating the saliency scores for the predicted class $c$.

Precision@K: Inspired by the original Precision@K used in recommender systems [45], we design a novel Precision@K to evaluate explanation performance on the SQuAD data sets. For each test example, we count the number of answer tokens that appear among the K top-scored tokens as Precision@K. Higher Precision@K scores are therefore better. Minimal sketches of the faithfulness and confidence metrics are given below.
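The faithfulness and confidence metrics are direct to implement from Eqs. 13-15. The sketch below is our minimal rendering, assuming the per-example predicted-class probabilities for the clean and corrupted inputs have already been collected; it is not the paper's evaluation code.

```python
import numpy as np
from scipy.stats import kendalltau

def aopc(p_full, p_deleted):
    """AOPC (Eq. 13): mean drop in predicted-class probability after
    deleting the top-k% scored words; higher is better."""
    return float(np.mean(np.asarray(p_full) - np.asarray(p_deleted)))

def lodds(p_full, p_masked):
    """LOdds (Eq. 14): mean log-ratio of predicted-class probabilities
    after masking the top-k% words with [PAD]; lower is better."""
    return float(np.mean(np.log(np.asarray(p_masked) / np.asarray(p_full))))

def confidence(rank_c, rank_rest):
    """One summand of Eq. 15: Kendall-tau between the token ranking for
    the predicted class and that of the remaining classes; lower is
    better (more confident)."""
    tau, _ = kendalltau(rank_c, rank_rest)
    return tau
```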
6 Results and Discussions

6.1 Quantitative Evaluations

The quantitative evaluations in this section demonstrate that our AttCAT method outperforms the baseline methods on the vast majority of data sets and tasks. Table 1 reports the results of the various explanation methods on the different data sets; we report the average AOPC and LOdds scores over the k values. Due to computation costs, we experiment on a subset of 2,000 randomly selected samples for the Amazon, Yelp, and IMDB data sets; the entire test sets are used for the other data sets.

AttCAT achieves the highest AOPC and lowest LOdds scores in most settings, demonstrating that the tokens it deletes or replaces are indeed the most impactful for the model prediction. Among all the compared methods, the attention-based methods (i.e., RawAtt and Rollout) perform worst, since attention weights alone, without considering the magnitudes of the feature values, are not adequate to analyze the inner working mechanism of Transformers. Remarkably, AttCAT also outperforms TransAtt, a recent work representing a strong baseline. The performance of CAT, shown here as an ablation study, drops markedly, corroborating the effectiveness of using self-attention weights in AttCAT.

We also report the AOPC and LOdds scores of the different methods in explaining BERT when deleting or masking the bottom k% of words on the different data sets in Appendix Table 5. There, AttCAT achieves the lowest AOPC and highest LOdds, indicating that the tokens it ranks lowest are indeed the least impactful for the model predictions.

Figure 3 illustrates how the evaluation metrics, namely AOPC and LOdds, change over varying corruption rates (removing or masking the k% top-scored words). Our AttCAT method achieves the highest AOPC and the lowest LOdds scores for corruption rates k of 50% or less, further demonstrating that AttCAT detects the words most impactful for the model predictions.

Table 2 shows the Kendall-τ-based confidence scores of the different explanation techniques for BERT on the various data sets. We do not report the confidence scores of the attention-based methods since they are class-agnostic. AttCAT achieves the best performance on most data sets: different classes observe distinctly sorted tokens, leading to much lower Kendall correlations. In other words, AttCAT is highly confident in recognizing the most impactful tokens for predicting the class of interest.

We show the Precision@K scores for the SQuAD data sets in Figure 4, with K set to 20. The results clearly demonstrate that AttCAT is superior to the other methods and generalizes well to the various BERT architectures on the SQuAD data sets. A higher score means that AttCAT captures more impactful answer tokens among the top-20 sorted tokens, proving its capability to generate more faithful explanations. Results for varying K values are shown in Appendix Figures 8, 9, 10, and 11.

6.2 Qualitative Visualizations

Lastly, we show a heatmap of the normalized impact scores generated by AttCAT in Figure 5. The first 12 rows (L0-L11) show the impact scores of each token at the different BERT layers; a darker-shaded token represents a higher score, as shown in the legend, and the signs of the scores indicate their directionalities. This heatmap also justifies the effectiveness of the summation operation used in Eq. 12. As shown in the figure, the impact scores become uniform and less pronounced as the layers go deeper, which is consistent with the observation from [13], where the authors argue that the embeddings become more contextualized and tend to carry similar information in the deeper layers. Thus, the rollout operation used in [13, 15] attenuates the impact scores at the shallower layers (i.e., L0-L9), since they are multiplied by the scores at the deeper layers (i.e., L10-L11). As shown in the 'Rollout' row of the figure, the rollout operation yields only minimal impact scores for the tokens, indicating that essentially no information has been captured for the explanation.
The summation operation (ours), shown in the 'Sum' row, in contrast generates a faithful explanation by incorporating the impact scores from every layer. In terms of the Impact Score, the token 'not', with the highest positive impact score (0.72), contributes most to the negative sentiment of this sentence, whereas the token 'like', with the highest negative impact score (-0.37), contributes in the opposite direction.

The ground-truth answer of the question-answering example shown in Figure 6a is "denver broncos". AttCAT successfully captures these two tokens with the darkest green shades, corresponding to the highest impact scores. The example from SST2 shown in Figure 6b has a negative sentiment. Both AttCAT and TransAtt capture the most impactful tokens, such as 'boring', 'didn', and 't', which contribute most to the negative sentiment prediction. Besides the tokens explaining the negative sentiment, our AttCAT method also identifies tokens that contribute in the opposite direction, e.g., 'like' and 'really' (shown in a dark shade of red), whereas TransAtt is not capable of differentiating positive from negative contributions. RawAtt gives more attention to some irrelevant tokens, i.e., 'overall', 'but', and the punctuation, and Rollout only generates near-uniform importance scores for the tokens.

7 Conclusion

This work addresses the major issues in generating faithful and confident explanations for Transformers via a novel attentive class activation tokens approach. AttCAT leverages the features, their gradients, and the corresponding attention weights to define so-called impact scores, which quantify the impact of the inputs on the model's outputs. The impact score captures both the magnitude and the directionality of each input token's impact. We conduct extensive experiments on different Transformer models and data sets and demonstrate that AttCAT achieves the best performance among strong baseline methods under both quantitative metrics and qualitative visualizations. Even though the current AttCAT approach is mainly designed for BERT architectures on NLP tasks, it can be naturally extended to Vision Transformer architectures on computer vision tasks as future work. Since there are various Transformer architectures, e.g., ViT [3] and Swin Transformer [4], that differ substantially from the Transformers used in NLP, this opens up new avenues for extending AttCAT to explain those models' predictions.

Acknowledgments

This work is supported by the National Science Foundation under grant IIS-2211897.
1. What is the focus and contribution of the paper on token importance in Transformers?
2. What are the strengths of the proposed approach, particularly in its motivation and experimental results?
3. What are the weaknesses of the paper, especially regarding the experiment section?
4. Do you have any concerns or suggestions regarding the aggregation of AttCAT?
5. What are the limitations of the paper, and how could they be addressed in future works?
Summary Of The Paper

The paper proposes a new method, termed AttCAT, for evaluating the importance of each input token in Transformers. The proposed method considers two perspectives on token importance: the attention perspective and the gradient perspective. The authors introduce Class Activation Tokens, inspired by GradCAM [26], and combine them with the Transformer attention weights to obtain AttCAT. The experimental results show that the resulting token-importance metric is superior to several other methods.

Strengths And Weaknesses

Strengths:
- The method is well motivated and clearly stated. It makes sense to combine the attention scores and the gradient weights to obtain a new metric for token importance in Transformers.
- The experimental results clearly demonstrate the effectiveness of the proposed metric.

Weaknesses:
- The experiments could be extended. It would be helpful to see how the evaluation metrics, namely AOPC and LOdds, change against the corruption rate k (removing the k% top-scored words). Also, it would be interesting to see how the two metrics change when we remove the k% lowest-scored words. This is different from removing the k% top-scored words, because the inputs to the model differ in the two cases: when removing the k% lowest-scored words, we care about the ordering of the least informative tokens, whereas when removing the k% top-scored words, we care about the ordering of the most informative tokens.

Questions

- Instead of simply averaging AttCAT over the multiple heads and layers, are there better ways to perform this aggregation?
- Could the authors provide the statistics of the data sets, e.g., the number of classes and the data set sizes?

Limitations

The authors mention that they would extend the AttCAT method to explain generative and vision Transformer architectures as future work, but more discussion of the limitations of this work would be useful.
1. What is the focus and contribution of the paper regarding Transformer models?
2. What are the strengths of the proposed approach, particularly in terms of its simplicity and novelty?
3. What are the weaknesses of the paper, especially regarding its limitations?
4. Do you have any concerns or questions about the method's ability to explain the input tokens' impact on the prediction?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The paper proposes "Attentive Class Activation Tokens" (AttCAT), a post-hoc method for the explanation of Transformer models addressing NLP tasks. Different from existing related methods, which focus only on specific components of the models being explained (leading to reduced faithfulness) and/or rely on heuristics, the proposed method stresses the use of (a) the features encoded by the model, (b) their gradients, and (c) their associated attention weights as a complete package to address the weaknesses of existing explanation methods. This enables not only the explanation of which parts of the input (tokens) have a high impact on the prediction made by the model, but also of whether this impact contributed positively or negatively to the prediction (directionality).

Strengths And Weaknesses

Strengths:
- The manuscript has a very good structure and organization, leading to clear content with a good flow. Overall I enjoyed reading this paper, and I applaud the authors for the effort put into its presentation.
- The proposed method is relatively simple, and the inclusion of each of its components (the features encoded by the model, their gradients, and their associated attention weights) is well motivated.
- On the qualitative side, to the best of my knowledge, the proposed method is novel and complementary to what is out there. On the quantitative side, it obtained state-of-the-art results w.r.t. existing methods.
- Empirical validation of the method was conducted on a rich set of well-known components, including several Transformer architectures, several NLP-related data sets, and several model-explanation baselines from the literature.
- Observations made via the proposed method align with observations from previous efforts, e.g., [11] in Sec. 6.2. This already hints at the added value of the proposed method.

Questions

N.A.

Limitations

N.A.
NIPS
Title AttCAT: Explaining Transformers via Attentive Class Activation Tokens Abstract Transformers have improved the state-of-the-art in various natural language processing and computer vision tasks. However, the success of the Transformer model has not yet been duly explained. Current explanation techniques, which dissect either the self-attention mechanism or gradient-based attribution, do not necessarily provide a faithful explanation of the inner workings of Transformers due to the following reasons: first, attention weights alone without considering the magnitudes of feature values are not adequate to reveal the self-attention mechanism; second, whereas most Transformer explanation techniques utilize self-attention module, the skip-connection module, contributing a significant portion of information flows in Transformers, has not yet been sufficiently exploited in explanation; third, the gradient-based attribution of individual feature does not incorporate interaction among features in explaining the model’s output. In order to tackle the above problems, we propose a novel Transformer explanation technique via attentive class activation tokens, aka, AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for Transformer’s output. Extensive experiments are conducted to demonstrate the superior performance of AttCAT, which generalizes well to different Transformer architectures, evaluation metrics, datasets, and tasks, to the baseline methods. Our code is available at: https://github.com/qiangyao1988/AttCAT. 1 Introduction Transformers have advanced the state-of-the-art on a variety of natural language processing tasks [1, 2] and see increasing popularity in the field of computer vision [3, 4]. The main innovation behind the Transformer models is the stacking of multi-head self-attention layers to extract global features from sequential tokenized inputs. However, the lack of understanding of their mechanism increases the risk of deploying them in real-world applications [5, 6, 7, 8, 9]. This has motivated new research on explaining Transformers output to assist trustworthy human decision-making [10, 11, 12, 13, 14, 15, 16, 17]. The self-attention mechanism [18] in Transformers assigns a pairwise score capturing the relative importance between every two tokens or image patches as attention weights. Thus, a common practice is to use these attention weights to explain the Transformer model’s output by exhibiting the importance distribution over the input tokens [6]. The baseline method, shown as RawAtt in Figure 2, utilizes the raw attention weights from a single layer or a combination of multiple layers [10]. However, recent studies [11, 12, 13] question whether highly attentive inputs significantly impact the model outputs. Serrano et al. [11] demonstrate that erasing the representations accorded high attention weights do not necessarily lead to a performance decrease. Jain et al. [12] suggest that “attention is not explanation” by observing that attention scores are frequently inconsistent with other feature importance indicators like gradient-based measures. Abnar et al. [13] argue that the contextual information from tokens gets more similar as going deeper into the model, leading to unreliable 36th Conference on Neural Information Processing Systems (NeurIPS 2022). explanations using the raw attention weights. 
The authors propose two methods to combine the attention weights across multiple layers to cope with this issue. Their attention rollout method, shown as Rollout in Figure 2, reassigns the important scores to the tokens through the linear combination of attention weights across the layers tracing the information flow in Transformer. However, the rollout operation canceled out the accumulated important scores as some deeper layers have almost uniformly distributed attention weights. The attention flow method is formulated as a max-flow problem by dissecting the graph of pairwise attentions. While it somewhat outperforms the rollout method in specific scenarios, it is not ready to support large-scale evaluations [15]. Recently, Bastings et al. [19] advocate using saliency method as opposed to attention as explanations. Although some gradient-based methods [20, 21, 22, 23] have been proposed to leverage salience for explaining Transformer’s output, most of them still focus on the gradients of attention weights, i.e., Grads and AttGrads as shown in Figure 2. They suffer from a similar limitation to the abovementioned attention-based methods. Layer-wise Relevance Propagation (LRP) method [24, 25], which is also considered as a type of saliency method, propagates relevance scores from the output layer to the input. There has been a growing body of work on using LRP to explain Transformers [14, 15]. Voita et al. [14] use LRP to capture the relative importance of the attention heads within each Transformer layer (shown as PartialLRP in Figure 2). However, this approach is limited by only providing partial information on each self-attention head’s relevance; no relevance score is propagated back to the input. To address this problem, Chefer et al. [15] provide a comprehensive treatment of the information propagation within all components of the Transformer model, which back-propagates the information through all layers from the output back to the input. This method further integrates gradients from the attention weights, shown as TransAtt in Figure 2. However, TransAtt relies on the specific LRP rules that is not applicable for other attention modules, e.g., co-attention. Thus it can not provide explanations for all transformer architectures [26]. As such, the existing Transformer explanation techniques are not completely satisfactory due to three major issues. First, most attention-based methods disregard the magnitudes of the features. The summation operation (Eq. 2 shown in Figure 1) demonstrates both attention weights (the green circles) and the feature (the blue circles) contribute to the weighted outputs (the red circles). In other words, since the self-attention mechanism involves the computation of queries, keys, and values, reducing it only to the derived attention weights (inner products of queries and keys) is not ideal. Second, besides the self-attention mechanism, skip connection as another major component in Transformer is not even considered in current techniques. The latter enables the delivery and integration of information by adding an identity mapping from inputs to outputs, trying to solve the model optimization problem from the perspective of information transfer [27]. Moreover, Lu et al. [28] find that a significant portion of information flow in BERT goes through the skip connection instead of the attention heads (i.e., three times more often than attention on average). 
Thus, attention alone, without considering the skip connection, is not sufficient to characterize the inner working mechanism of Transformers. Third, the individual feature attribution-based approaches [15, 14, 29, 30] cannot capture the pairwise interactions of feature since gradients or relevance scores are calculated independently for each individual feature. For example, the gradients directly go through the Transformer layers from the output to the specific input (the token ‘like’), shown in Figure 1. We propose Attentive Class Activation Tokens (AttCAT) to generate token-level explanations leveraging features, their gradients, and their self-attention weights. Inspired by GradCAM [31], which uses gradient information flowing into the last convolutional layer of the Convolutional Neural Network (CNN) to understand the importance of each neuron for the decision of interest, our approach quantifies the impact of each token to the class-specific output via its gradient information. We further leverage the self-attention weights to capture the global contextual information of each token since it determines the relative importance of a single token concerning all other tokens in the input sequence. By disentangling the information flow across the Transformer layers for a specific token into the information from itself via a skip connection and the interaction information among all the tokens via a self-attention mechanism, we integrate the impact scores, which are generated using AttCAT, from multiple layers to give the final explanation. A summary of the baseline methods and our AttCAT method is shown in Figure 2, demonstrating their main similarities and differences. The RawAtt and Rollout [13] methods simply use the attention weights (α). The Grads method leverages the gradients of attention weights (∇αL) from the last Transformer layer, while the AttGrads method [22] integrates the attention weights (α) and their gradients (∇α) from all Transformer layers. The PartialLRP method [14] applies LRP only on the last Transformer layer (RL). Differently, the TransAtt method [26] integrates the relevance scores (R) from LRP and the gradients of attention weights (∇α). We use CAT, a new gradient-based attribution method leveraging the features (h) and their gradients (∇h), as our in-house baseline method. We further integrate attention weights (α) with CAT as the proposed AttCAT method. We state our contributions as follows: • We propose a novel Transformer explanation technique, AttCAT, leveraging the features, their gradients together with attention weights to generate the so-called impact scores to quantify the influence of inputs on the model’s outputs. • Our AttCAT exploits both the self-attention mechanism and skip connection to explain the inner working mechanism of Transformers via disentangling information flows between intermediate layers. • Furthermore, our class activation based method is capable of discriminating positive and negative impacts toward the model’s output using the directional information of the gradients. • Finally, we conduct extensive experiments on different Transformer architectures, datasets, and Natural Language Processing (NLP) tasks, demonstrating a more faithful and confident explanation than the baseline methods using several quantitative metrics and qualitative visualizations. 2 Preliminaries 2.1 Self-Attention Mechanism The encoders in Transformer model [1] typically stack L identical layers. 
2.2 Class Activation Map

GradCAM [31] is one of the most successful CAM-based methods; it uses the gradient information flowing into the last convolutional layer of a CNN to understand the importance of each neuron for the decision of interest. To obtain the class-discriminative localization map for the explanation, GradCAM first computes the gradient of the score for class $c$, i.e., $y^c$ before the softmax, with respect to the feature maps $A^k$ of a convolutional layer, $\frac{\partial y^c}{\partial A^k}$. These back-flowing gradients are then global-average-pooled to obtain the neuron importance weight $w_k^c$:

$$w_k^c = \mathbb{E}\!\left(\frac{\partial y^c}{\partial A^k}\right), \qquad (4)$$

where $\mathbb{E}$ denotes global average pooling. The weight $w_k^c$ reflects a partial linearization of the CNN downstream from $A$ and captures the importance of feature map $k$ for a target class $c$. A weighted combination of the forward activation maps is then obtained by:

$$\mathrm{GradCAM}^c = \mathrm{ReLU}\!\left(\sum_k w_k^c A^k\right), \qquad (5)$$

where ReLU(·) is applied to filter out the negative values, since we are only interested in the features that positively influence the class of interest.
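A minimal sketch of the GradCAM weighting in Eqs. 4–5 follows. The gradients are supplied as precomputed arrays (in practice they would come from back-propagation through the CNN); the array shapes and names are illustrative assumptions.

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """GradCAM localization map (Eqs. 4-5).

    feature_maps: (k, h, w) forward activations A^k of the last conv layer.
    grads:        (k, h, w) gradients dy^c/dA^k for the class score y^c,
                  assumed precomputed by back-propagation.
    """
    w_c = grads.mean(axis=(1, 2))                  # Eq. 4: global average pooling
    cam = np.tensordot(w_c, feature_maps, axes=1)  # weighted sum over the k maps
    return np.maximum(cam, 0.0)                    # Eq. 5: ReLU keeps positive evidence

rng = np.random.default_rng(1)
A = rng.normal(size=(16, 7, 7))    # 16 feature maps of a toy conv layer
dA = rng.normal(size=(16, 7, 7))   # stand-in gradients
print(grad_cam(A, dA).shape)       # (7, 7) class-discriminative map
```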
3 Problem Formulation

The objective of a token-level explanation method for Transformers is to generate a separate score for each input token, answering the question: given an input text and a trained Transformer model, which tokens most influence the model's output? There is no standard definition of influence in the literature [32]. Some works use the term 'importance', whereas others use 'relevance', depending on the explanation methods being used. Here we note that token influence should reflect not only the magnitude of the impact but also its directionality. As such, we define a new concept, the Impact Score, to measure both Magnitude of Impact and Directionality. The former addresses the question "Which input tokens contribute most to the output?", and the latter the question "Given an input token, has it made a positive or negative contribution to the output?" Formally, we define the Impact Score generated by our AttCAT method as follows:

Definition 1 (Impact Score) Given a pre-trained Transformer $T(\cdot)$, an input token $x$, and our explanation method $E_{\mathrm{AttCAT}}(\cdot)$, the Impact Score is defined as:

$$\text{Impact Score}(E_{\mathrm{AttCAT}}(T(x))) = \begin{cases} |E_{\mathrm{AttCAT}}(T(x))|, & \text{Magnitude of Impact}, \\ \mathrm{Sign}(E_{\mathrm{AttCAT}}(T(x))), & \text{Directionality}. \end{cases} \qquad (6)$$

Remark 1 (Magnitude of Impact) The magnitude of impact indicates how much contribution has been made by each token. A sort function can be applied to the array of scores for the input tokens to retrieve the most impactful tokens on the output.

Remark 2 (Directionality) The sign reveals whether each token makes a positive or negative impact on the output.

4 Attentive Class Activation Tokens

4.1 Disentangling Information Flows in Transformer

To interpret the inner working mechanism of Transformers, it is essential to understand how the information of each input token flows through each intermediate layer and finally reaches the output. Some previous works [13, 22] use heuristics, treating high attention weights and/or their gradients as indicators of important information flows across layers. Others [15, 14] apply LRP, aiming to dissect the information flows via layer-wise back-propagation. However, these approaches either rely on the simple-but-unreliable assumption of a linear combination of the intermediate layers or ignore major components of the Transformer, i.e., the magnitudes of the features and the skip connection.

From Figure 1, we observe that the output sequence of the Transformer model has a one-to-one correspondence to its input sequence. The skip connection is a shortcut that bridges the input and output of the self-attention operation. We note that the Transformer encoder is, intuitively, an operator that adds the representation of token interactions (via the self-attention mechanism) onto the original representation of the token (via the skip connection). Therefore, from a perspective of information flow, we can specify the $i$-th token's information at the $l$-th layer as:

$$\mathrm{Information}(x_i^{l}) = \mathrm{Information}(x_i^{l-1}) + \mathrm{Interaction}(x_i^{l-1}, x_{n/i}^{l-1}), \qquad (7)$$

where $\mathrm{Information}(x_i^{l-1})$ represents the information contained in the $i$-th token at the $(l-1)$-th layer, and $\mathrm{Interaction}(x_i^{l-1}, x_{n/i}^{l-1})$ reflects the summation of all pairwise interactions between the $i$-th token and all other tokens ($n/i$). This observation motivates us to interpret the inner working mechanism of Transformers by disentangling the information flow within the Transformer. Thus, considering Eq. 7 as a recurrence relation, the final representation of the $i$-th token consists of the original information (the input) plus the token interactions between the $i$-th token and all other tokens at the different layers. Since the CNN's last convolutional layer also encodes both high-level semantics and detailed spatial information, corresponding to the original information and the interactions here, the way GradCAM explains a CNN model's output inspired us to design Attentive Class Activation Tokens (AttCAT) to understand the impact of each token on a Transformer model's output.

4.2 Class Activation Tokens

For a pre-trained Transformer, we can always find its output $h^l$ at the $l$-th layer. Assume $h^l$ has $n$ columns, each corresponding to an input token (including the special tokens [CLS] and [SEP]). We write its columns separately as $h_1^l, \cdots, h_i^l, \cdots, h_n^l$. As $h_i^L$ is the output of the $i$-th token from the last Transformer layer $L$, interpreting the impact of the $i$-th token on the final output $y^c$ for class $c$ would be straightforward if we had a linear relationship between $y^c$ and $h_i^L$:

$$y^c = \sum_{i=1}^{n} w_i^c \cdot h_i^L, \qquad (8)$$

where $w_i^c$ is the linear coefficient vector for $h_i^L$. Inspired by GradCAM [31], we obtain the token importance weights as:

$$w_i^c = \nabla h_i^L = \frac{\partial y^c}{\partial h_i^L}, \qquad (9)$$

where $w_i^c$ reflects a partial linearization from $h_i^L$ and captures the importance of the $i$-th token to a target class $c$. The Class Activation Tokens (CAT) are then obtained through a weighted combination:

$$\mathrm{CAT}_i^L = \nabla h_i^L \odot h_i^L, \qquad (10)$$

where $\odot$ is the Hadamard product. $\mathrm{CAT}_i^L$ denotes the impact score of the $i$-th token at the $L$-th layer towards class $c$. Note that we do not apply ReLU(·) to filter out negative scores here, since we also care about the directionality of the impact score.
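Under the linearization of Eq. 8, CAT reduces to an elementwise product of the last-layer token outputs with their gradients. Below is a minimal sketch; the gradients are passed in as precomputed arrays, and the reduction of each token's Hadamard product to a single scalar by summing over the $d$ feature dimensions is our assumption, not stated explicitly in the text.

```python
import numpy as np

def cat_scores(h_L, grad_h_L):
    """Class Activation Tokens at layer L (Eqs. 9-10).

    h_L:      (n, d) outputs h_i^L of the n tokens at the last layer.
    grad_h_L: (n, d) gradients dy^c/dh_i^L, assumed precomputed.
    Returns one signed impact score per token (Hadamard product summed
    over the d feature dimensions -- an assumed reduction). No ReLU is
    applied, so negative scores are kept to preserve directionality.
    """
    return (grad_h_L * h_L).sum(axis=1)   # Eq. 10, elementwise product

rng = np.random.default_rng(2)
h, g = rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(cat_scores(h, g))                   # one signed score per token
```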
4.3 Attentive CAT

While CAT explains the model's output through the attribution of each individual token's encoder output (Eq. 8), it does not consider the interactions among tokens, which are revealed via the self-attention mechanism. The self-attention mechanism [18] assigns a pairwise similarity score between every two tokens as the attention weight, encoding the important interaction information of these tokens. Therefore, we integrate the self-attention weights with CAT to further incorporate token interaction information, better quantifying the impact of each token on the Transformer model's output. Our Attentive CAT (AttCAT) at the $L$-th layer for the $i$-th token is then formulated as:

$$\mathrm{AttCAT}_i^L = \mathbb{E}_H\!\left(\alpha_i^L \cdot \mathrm{CAT}_i^L\right), \qquad (11)$$

where $\alpha_i^L$ denotes the attention weights of the $i$-th token at the $L$-th layer, and $\mathbb{E}_H(\cdot)$ averages over multiple heads. Recalling that Eq. 7 represents a recurrence relation, we can always take the output of the $l$-th layer, denote it $y_i^l$, and use Eqs. 9, 10, and 11 to formulate $\mathrm{AttCAT}_i^l$, the impact score of the $i$-th token at the $l$-th layer. Finally, different from the Rollout and TransAtt methods, which apply the rollout operation, we sum $\mathrm{AttCAT}_i^l$ over all Transformer layers to obtain the final impact score of the $i$-th token:

$$\mathrm{AttCAT}_i = \sum_{j=1}^{L} \mathrm{AttCAT}_i^{j}. \qquad (12)$$

We empirically demonstrate in Figure 5 that this summation is more effective than the rollout operation.
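A minimal sketch of the full AttCAT aggregation in Eqs. 11–12 follows. All per-layer outputs, gradients, and attention weights are passed in as precomputed arrays; taking $\alpha_i^l$ to be the head-averaged attention that token $i$ receives is one plausible reading of Eq. 11 and is flagged as an assumption in the comments.

```python
import numpy as np

def attcat(h, grad_h, alpha):
    """AttCAT impact scores (Eqs. 11-12).

    h, grad_h: (L, n, d) per-layer token outputs and their gradients
               with respect to the class score, assumed precomputed.
    alpha:     (L, heads, n, n) per-layer attention weights.
    Returns one signed impact score per token, summed over layers
    (Eq. 12) rather than multiplied as in rollout.
    """
    cat = (grad_h * h).sum(axis=2)        # (L, n): per-layer CAT scores
    # Head-averaged attention received by each token -- one plausible
    # reading of the alpha_i^l weighting in Eq. 11 (an assumption).
    a = alpha.mean(axis=1).mean(axis=1)   # (L, n)
    return (a * cat).sum(axis=0)          # Eq. 12: sum across all layers

rng = np.random.default_rng(3)
L, H_, n, d = 12, 8, 6, 16
scores = attcat(rng.normal(size=(L, n, d)),
                rng.normal(size=(L, n, d)),
                rng.dirichlet(np.ones(n), size=(L, H_, n)))
print(scores)                             # one signed score per token
```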
5 Experiments

5.1 Desirable Properties of an Explanation Technique

We first introduce two desirable properties of an explanation method, faithfulness and confidence, along with metrics to systematically evaluate the performance of various explanation techniques.

Faithfulness quantifies the fidelity of an explanation technique by measuring whether the identified tokens indeed impact the output. We adopt two metrics from prior work to evaluate the faithfulness of word-level explanations: the area over the perturbation curve (AOPC) [33, 34] and the Log-odds score [35, 34]. These two metrics measure local fidelity by deleting or masking the top k% scored words and comparing the change in probability of the predicted label.

Confidence. A token can receive several saliency scores, indicating its contribution to the prediction of each class. Tokens with higher impact scores for the predicted class c should have lower impact scores for the remaining classes. In other words, an explanation technique should be highly confident in recognizing the most impactful tokens of the desired class (usually the predicted class); these tokens should have a negligible impact on the other classes. We use the Kendall-τ correlation, a statistic measuring the strength of association between the ranked scores of different classes, to evaluate the confidence of an explanation method.

5.2 Experiment Settings

Transformer models: BERT [2] is one of the most representative Transformer models, with impressive performance across a variety of NLP tasks, e.g., sentiment analysis and question answering. We use the BERT-base model and some variants (i.e., DistilBERT [36] and RoBERTa [37]) in our experiments. Our method can be generally applied to other Transformer architectures with minor modifications. The pre-trained models from Huggingface¹ are used for validating our explanation method and comparing it to others. More details of these Transformer models and their prediction performance are presented in Appendix A.

¹https://huggingface.co/

Datasets: We evaluate the performance using the following exemplar tasks: sentiment analysis on the SST2 [38], Amazon Polarity, Yelp Polarity [39], and IMDB [40] data sets; natural language inference on the MNLI [41] data set; paraphrase detection on the QQP [42] data set; and question answering on the SQuADv1 [43] and SQuADv2 [44] data sets. More details of these data sets are described in Appendix B.

Baseline methods: Several baseline explanation methods for Transformers are compared in our experiments, including the attention-based methods (i.e., RawAtt and Rollout [13]), the attention-gradient-based methods (i.e., Grads and AttGrads [22]), and the LRP-based methods (i.e., PartialLRP [14] and TransAtt [15]). CAT, without incorporating attention weights, is an ablation version of AttCAT. Figure 2 summarizes and compares these methods with their formulations.

5.3 Evaluation Metrics

AOPC: By deleting the top k% of words, AOPC calculates the average change of the prediction probability on the predicted class over all test examples:

$$\mathrm{AOPC}(k) = \frac{1}{N}\sum_{i=1}^{N}\left(p(\hat{y}\mid x_i) - p(\hat{y}\mid \tilde{x}_i^{k})\right), \qquad (13)$$

where $N$ is the number of examples, $\hat{y}$ is the predicted label, $p(\hat{y}\mid\cdot)$ is the probability on the predicted class, and $\tilde{x}_i^{k}$ is constructed by removing the k% top-scored words from $x_i$. To avoid choosing an arbitrary k, we remove 0, 10, 20, ..., 100% of the tokens in order of decreasing saliency, thus arriving at $\tilde{x}_i^{0}, \tilde{x}_i^{10}, \cdots, \tilde{x}_i^{100}$. Higher AOPC values are better, meaning the deleted words are more impactful on the model's output.

LOdds: The Log-odds score is calculated by averaging the difference of the negative logarithmic probabilities on the predicted class over all test examples, before and after masking the k% top-scored words with zero paddings:

$$\mathrm{LOdds}(k) = \frac{1}{N}\sum_{i=1}^{N} \log \frac{p(\hat{y}\mid \tilde{x}_i^{k})}{p(\hat{y}\mid x_i)}. \qquad (14)$$

The notation is the same as in Eq. 13, with the only difference that $\tilde{x}_i^{k}$ is constructed by replacing the top k% of words with the special token [PAD] in $x_i$. Lower LOdds scores are better.

Kendall correlation: We use the Kendall-τ to evaluate the confidence of an explanation method, formally:

$$\text{Kendall correlation} = \frac{1}{N}\sum_{i=1}^{N} \text{Kendall-}\tau\!\left(S(x_i)_{c},\; S(x_i)_{C/c}\right), \qquad (15)$$

where $S(x_i)$ denotes the array of token indices in order of decreasing saliency (or attribution, relevance, or impact) scores for a test example. A lower Kendall correlation demonstrates that the explanation method is more confident in generating the saliency scores for predicting the class c.

Precision@K: Inspired by the original Precision@K used in recommender systems [45], we design a novel Precision@K to evaluate explanation performance on the SQuAD data sets. For each test example, we count the number of answer tokens that appear among the K top-scored tokens as Precision@K. Higher Precision@K scores are therefore better.
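A minimal sketch of the two faithfulness metrics in Eqs. 13–14, given the model probabilities before and after corruption at one rate k; the arrays are stand-ins for real model outputs, and the small epsilon guard is our numerical-stability addition.

```python
import numpy as np

def aopc(p_orig, p_corrupt):
    """AOPC at one corruption rate k (Eq. 13).

    p_orig:    (N,) probabilities p(yhat|x_i) on the predicted class.
    p_corrupt: (N,) probabilities after deleting the top-k% scored words.
    Higher is better for a faithful explanation.
    """
    return float(np.mean(p_orig - p_corrupt))

def log_odds(p_orig, p_corrupt, eps=1e-12):
    """Log-odds score at one corruption rate k (Eq. 14); lower is better."""
    return float(np.mean(np.log((p_corrupt + eps) / (p_orig + eps))))

rng = np.random.default_rng(4)
p = rng.uniform(0.7, 0.99, size=100)        # confident original predictions
p_k = p * rng.uniform(0.2, 0.9, size=100)   # probabilities after masking
print(aopc(p, p_k), log_odds(p, p_k))
```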
6 Results and Discussions

6.1 Quantitative Evaluations

The quantitative evaluations in this section demonstrate that our AttCAT method outperforms the baseline methods on the vast majority of data sets and tasks. Table 1 reports the results of the various explanation methods and data sets; we report the average AOPC and LOdds scores over k values. Due to computational costs, we experiment on a subset of 2,000 randomly selected samples for the Amazon, Yelp, and IMDB data sets; the entire test sets are used for the other data sets.

AttCAT achieves the highest AOPC and lowest LOdds scores in most settings, demonstrating that the most impactful tokens for the model prediction have been deleted or replaced. Among all the compared methods, the attention-based methods (i.e., RawAtt and Rollout) perform worst, since attention weights alone, without considering the magnitudes of feature values, are inadequate for analyzing the inner working mechanism of Transformers. Remarkably, AttCAT also outperforms TransAtt, a recent work representing a strong baseline. The performance of CAT, shown here as an ablation study, drops markedly, corroborating the effectiveness of using self-attention weights in AttCAT. We also report the AOPC and LOdds scores of the different methods in explaining BERT when deleting or masking the bottom k% of words on the different data sets in Appendix Table 5. There, AttCAT achieves the lowest AOPC and highest LOdds, demonstrating that it efficiently captures the most impactful tokens for the model predictions.

Figure 3 illustrates how the evaluation metrics, namely AOPC and LOdds, change over varying corruption rates (removing or masking the k% top-scored words). Our AttCAT method achieves the highest AOPC and the lowest LOdds scores for corruption rates of 50% or less, further demonstrating that AttCAT detects the most impactful words for the model predictions. Table 2 shows the Kendall-τ based confidence scores of the different explanation techniques for BERT, tested on the various data sets. We do not report confidence scores for the attention-based methods, since they are class agnostic. AttCAT achieves the best performance on most data sets; different classes observe distinctly sorted tokens, leading to much lower Kendall correlations. In other words, AttCAT is highly confident in recognizing the most impactful tokens for predicting the class of interest. We show the Precision@K scores for the SQuAD data sets in Figure 4, with K set to 20. The results clearly demonstrate that AttCAT is superior to the other methods and generalizes well to various BERT architectures on the SQuAD data sets. The higher scores mean that AttCAT captures more impactful answer tokens among the top-20 sorted tokens, evidencing its capability to generate more faithful explanations. Results for varying K values are shown in Appendix Figures 8, 9, 10, and 11.

6.2 Qualitative Visualizations

Lastly, we show a heatmap of the normalized impact scores generated by AttCAT in Figure 5. The first 12 rows (L0-L11) show the impact scores of each token from the different BERT layers. A darker-shaded token represents a higher score, as shown in the legend, and the signs of the scores indicate their directionality. This heatmap also justifies the effectiveness of the summation operation used in Eq. 12. As the figure shows, the impact scores become uniform and less impactful in the deeper layers, which is consistent with the observation of [13], where the authors argue that the embeddings are more contextualized and tend to carry similar information in the deeper layers. Thus, the rollout operation used in [13, 15] attenuates the impact scores at the shallower layers (i.e., L0-L9), since they are multiplied by scores at the deeper layers (i.e., L10-L11). As the 'Rollout' row of the figure shows, the rollout operation yields only minimal impact scores for the tokens, indicating that essentially no information has been captured for the explanation.
In contrast, the summation operation (ours), shown in the 'Sum' row, generates a faithful explanation incorporating the impact scores from each layer. In terms of the Impact Score, the token 'not', with the highest positive impact score (0.72), contributes most to the negative sentiment of this sentence, whereas the token 'like', with the highest negative impact score (-0.37), contributes inversely.

The ground-truth answer of the question-answering example shown in Figure 6a is "denver broncos". AttCAT successfully captures these two tokens with the darkest green shades, corresponding to the highest impact scores. The example from SST2 shown in Figure 6b has a negative sentiment. Both AttCAT and TransAtt capture the most impactful tokens, such as 'boring', 'didn', and 't', which contribute most to the negative sentiment prediction. Besides the tokens explaining the negative sentiment, our AttCAT method also identifies tokens that contribute inversely to the negative sentiment, e.g., 'like' and 'really' (shown in a dark shade of red), whereas TransAtt is not capable of differentiating positive and negative contributions. RawAtt gives more attention to irrelevant tokens, i.e., 'overall', 'but', and the punctuation. Rollout only generates uniformly distributed importance scores for the tokens.

7 Conclusion

This work addresses the major issues in generating faithful and confident explanations for Transformers via a novel attentive class activation tokens approach. AttCAT leverages the features, their gradients, and the corresponding attention weights to define so-called impact scores, which quantify the impact of inputs on the model's outputs. The impact score gives both the magnitude and the directionality of an input token's impact. We conduct extensive experiments on different Transformer models and data sets and demonstrate that our AttCAT achieves the best performance among strong baseline methods, using quantitative metrics and qualitative visualizations. Even though our current AttCAT approach is mainly designed for BERT architectures on NLP tasks, it can be naturally extended to Vision Transformer architectures on computer vision tasks as future work. Since there are various Transformer architectures, e.g., ViT [3] and Swin Transformer [4], that differ substantially from the Transformers used for NLP tasks, extending AttCAT to explain their predictions opens up new avenues.

Acknowledgments

This work is supported by the National Science Foundation under grant IIS-2211897.
1. What are the strengths and weaknesses of the paper regarding its approach to understanding transformer models? 2. How does the paper compare different methods for interpreting transformers, particularly CAT and AttCAT? 3. What are the limitations of the paper regarding its focus on specific tasks and lack of discussion on other domains? 4. Are there any concerns about the absence of critical ablation studies to support the claims made in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes two methods to understand how Transformers work: CAT and AttCAT. They are motivated by the need to take the magnitudes of the features, the gradients, and the skip connection into account when examining (1) which tokens most influence the model's output and (2) whether a token makes a positive or negative contribution to the output. Both CAT and AttCAT are developed based on GradCAM. The paper also studies a lot of previous approaches and mainly compares against weight-based methods, gradient-based methods, and methods based on layer-wise relevance propagation. Results on diverse common benchmarks show the strength of the proposed method, including SST2, QQP, MNLI, Amazon, Yelp, and IMDB. Some qualitative results are included to aid understanding. Strengths And Weaknesses The reviewer believes the studied direction is very important for the related communities, where Transformers are gradually becoming the most common technique used across various tasks. The reviewer also agrees with the authors that we should consider the magnitude of the features and the skip connection, which play big roles in Transformers. The authors carefully studied a lot of literature and mainly compared with three categories: (1) attention-weight-related works, like RawAtt and Rollout; (2) gradient-related works, like Grads and AttGrads; (3) layer-wise relevance propagation methods, like PartialLRP and TransAtt. Figure 2 clearly shows the difference between the proposed method and all the previous approaches. Extensive quantitative experiments are conducted over various datasets including SST2, QQP, MNLI, Amazon, Yelp, IMDB, and SQuAD v1 and v2. Results show the proposed method generally achieves better performance compared with the previous approaches. The reviewer especially likes the red notation in the visualization of the proposed method, which indicates the negative effect of the input texts. The paper currently fuses the introduction and related work into one section, which makes the introduction section lengthy and a bit hard to read. Although the authors describe the differences between the proposed method and previous methods in many places (intro, method, experiments), the reviewer did not clearly understand the advantages of the proposed method and the key point leading to the good performance. The reviewer also did not see ablation studies related to the two major claims that we should exploit (1) the magnitudes of the features and (2) the skip connections. A major omission: the authors could, for example, drop the skip connections when aggregating the information, to test whether considering skip connections really helps the understanding. Besides, the abstract suggests the proposed method addresses a variety of tasks, while it mainly focuses on NLP tasks and currently offers no discussion of tasks in other domains. Questions The reviewer is mainly concerned with the missing ablation studies and will adjust the score if this gets addressed properly. Limitations The reviewer currently did not see sufficient ablation studies to support the claims. Also, the method currently does not provide evidence to support understanding of how Transformers work in vision tasks.
NIPS
Title Generalizing GANs: A Turing Perspective

Abstract

Recently, a new class of machine learning algorithms has emerged, where models and discriminators are generated in a competitive setting. The most prominent example is Generative Adversarial Networks (GANs). In this paper we examine how these algorithms relate to the Turing test, and derive what—from a Turing perspective—can be considered their defining features. Based on these features, we outline directions for generalizing GANs—resulting in the family of algorithms referred to as Turing Learning. One such direction is to allow the discriminators to interact with the processes from which the data samples are obtained, making them "interrogators", as in the Turing test. We validate this idea using two case studies. In the first case study, a computer infers the behavior of an agent while controlling its environment. In the second case study, a robot infers its own sensor configuration while controlling its movements. The results confirm that by allowing discriminators to interrogate, the accuracy of models is improved.

1 Introduction

Generative Adversarial Networks [1] (GANs) are a framework for inferring generative models from training data. They place two neural networks—a model and a discriminator—in a competitive setting. The discriminator's objective is to correctly label samples from either the model or the training data. The model's objective is to deceive the discriminator, in other words, to produce samples that are categorized as training data by the discriminator. The networks are trained using a gradient-based optimization algorithm. Since their inception in 2014, GANs have been applied in a range of contexts [2, 3], but most prominently for the generation of photo-realistic images [1, 4]. In this paper we analyze the striking similarities between GANs and the Turing test [5]. The Turing test probes a machine's ability to display behavior that, to an interrogator, is indistinguishable from that of a human. Developing machines that pass the Turing test could be considered as a canonical problem in computer science [6]. More generally, the problem is that of imitating (and hence inferring) the structure and/or behavior of any system, such as an organism, a device, a computer program, or a process. The idea to infer models in a competitive setting (model versus discriminator) was first proposed in [7]. The paper considered the problem of inferring the behavior of an agent in a simple environment. The behavior was deterministic, simplifying the identification task. In a subsequent work [8], the method, named Turing Learning, was used to infer the behavioral rules of a swarm of memoryless robots. The robots' movements were tracked using an external camera system, providing the training data. Additional robots executed the rules defined by the models.
The contributions of this paper are:
• to examine the defining features of GANs (and variants), assuming a Turing perspective;
• to outline directions for generalizing GANs, in particular, to encourage alternative implementations and novel applications, for example, ones involving physical systems;
• to show, using two case studies, that more accurate models can be obtained if the discriminators are allowed to interact with the processes from which data samples are obtained (as the interrogators in the Turing test).¹

¹Different from [7], we consider substantially more complex case studies, where the discriminators are required to genuinely interact with the systems, as a pre-determined sequence of interventions would be unlikely to reveal all the observable behavioral features.

2 A Turing Perspective

In 1950, Turing proposed an imitation game [5] consisting of three players A, B and C. Figure 1 shows a schematic of this game. Player C, also referred to as the interrogator, is unable to see the other players. However, the interrogator can pose questions to and receive answers from them. Answers from the same player are consistently labelled (without revealing its identity, A or B). At the end of the game, the interrogator has to guess which label belongs to which player. There are two variants of the game, and we focus on the one where player A is a machine, while player B is human (the interrogator is always human). This variant, depicted in Figure 1, is commonly referred to as the Turing test [9, 10]. To pass the test, the machine would have to produce answers that the interrogator believes to originate from a human. If a machine passed this test, it would be considered intelligent. For GANs (and variants), player C, the interrogator, is no longer human, but rather a computer program that learns to discriminate between information originating from players A and B. Player A is a computer program that learns to trick the interrogator. Player B could be any system one wishes to imitate, including humans.

2.1 Defining Features of GANs

Assuming a Turing perspective, we consider the following as the defining features of GANs (and variants); a minimal sketch of the reward structure follows this list:
• a training agent, T, providing genuine data samples (the training data);
• a model agent, M, providing counterfeit data samples;
• a discriminator agent, D, labelling data samples as either genuine or counterfeit;
• a process by which D observes or interacts with M and T;
• D and M are being optimized:
  – D is rewarded for labelling data samples of T as genuine;
  – D is rewarded for labelling data samples of M as counterfeit;
  – M is rewarded for misleading D (to label its data samples as genuine).

It should be noted that in the Turing test there is a bi-directional exchange of information between player C and either player A or B. In GANs, however, during any particular "game", data flows only in one direction: the discriminator agent receives data samples, but is unable to influence the agent at the origin during the sampling process. In the case studies presented in this paper, this limitation is overcome, and it is shown that this can lead to improved model accuracy. This, of course, does not imply that active discriminators are beneficial for every problem domain.
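The following is a minimal sketch of the reward structure listed above, with the discriminator and samples abstracted as placeholders; the concrete sample type, discriminator, and all names here are our illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def score_round(discriminator, training_samples, model_samples):
    """One evaluation round implementing the defining reward structure.

    discriminator: callable returning True if it labels a sample genuine.
    Returns (discriminator reward, model reward).
    """
    d_reward = sum(discriminator(s) for s in training_samples)    # genuine labelled genuine
    d_reward += sum(not discriminator(s) for s in model_samples)  # counterfeit spotted
    m_reward = sum(discriminator(s) for s in model_samples)       # discriminator misled
    return d_reward, m_reward

rng = np.random.default_rng(5)
disc = lambda s: s.mean() > 0.5           # toy discriminator on vector samples
genuine = [rng.uniform(0.4, 1.0, 4) for _ in range(10)]
counterfeit = [rng.uniform(0.0, 0.6, 4) for _ in range(10)]
print(score_round(disc, genuine, counterfeit))
```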
2.2 Implementation Options of (Generalized) GANs

GANs and their generalizations, that is, algorithms that possess the aforementioned defining features, are instances of Turing Learning [8]. The Turing Learning formulation removes (from a Turing perspective, unnecessary) restrictions of the original GAN formulation, for example, the need for models and discriminators to be represented as neural networks, or the need for optimizing these networks using gradient descent. As a result, the Turing Learning formulation is very general and applicable to a wide range of problems (e.g., using models with discrete, continuous or mixed representations). In the following, we present the aspects of implementations that are not considered defining features, but rather implementation options. They allow Turing Learning to be tailored, for example, by using the most suitable model representation and optimization algorithm for the given problem domain. Moreover, users can choose implementation options they are familiar with, making the overall framework² more accessible.

²For an algorithmic description of Turing Learning, see [8].

• Training data. The training data could take any form. It could be artificial (e.g., audio, visual, or textual data in a computer), or physical (e.g., a geological sample, engine, painting or human being).

• Model representation. The model could take any form. In GANs [1], it takes the form of a neural network that generates data when provided with a random input. Other representations include vectors, graphs, and computer programs. In any case, the representation should be expressive enough to allow a model to produce data with the same distribution as the training data. The associated process could involve physical objects (e.g., robots [8]). If the training data originates from physical objects, but the model data originates from simulation, special attention is needed to avoid the so-called reality gap [11]. Any difference caused not by the model but rather by the process used to collect the data (e.g., tracking equipment) may be detected by the discriminators, which could render model inference impossible.

• Discriminator representation. The discriminator could take any form. Its representation should be expressive enough to distinguish between genuine and counterfeit data samples. These samples could be artificial or physical. For example, a discriminator could be networked to an experimental platform, observing and manipulating some physical objects or organisms.

• Optimization algorithms. The optimization algorithms could take any form as long as they are compatible with the solution representations. They could use a single candidate solution or a population of candidate solutions [8, 12]. In the context of GANs, gradient-based optimization algorithms are widely applied [13]. These algorithms, however, require the objective function to be differentiable and (ideally) unimodal. A wide range of metaheuristic algorithms [14] could be explored for domains with more complex objective functions. For example, if the model were represented as a computer program, genetic programming algorithms could be used.

• Coupling mechanism between the model and discriminator optimizers. The optimization processes for the model and discriminator solutions are dependent on each other. Hence they may require careful synchronization [1]. Moreover, if multiple models and/or multiple discriminators are used, choices have to be made for which pairs of solutions to evaluate. Elaborate evaluation schemes may take into account the performance of the opponents in other evaluations (e.g., using niching techniques).
Synchronization challenges include those reported for coevolutionary systems.³ In particular, due to the so-called Red Queen Effect, the absolute quality of solutions in a population may increase while the quality of solutions relative to the other population decreases, or vice versa [18]. Cycling [20] refers to the phenomenon that solutions that have been lost may get rediscovered in later generations. A method for overcoming this problem is to retain promising solutions in an archive—the "hall of fame" [21]. Disengagement can occur when one population (e.g., the discriminators) outperforms the other population, making it hard to reveal differences among the solutions. Methods for addressing disengagement include "resource sharing" [22] and "reducing virulence" [20].

• Termination criterion. Identifying a suitable criterion for terminating the optimization process can be challenging, as performance is defined in relative rather than absolute terms. For example, a model that is found to produce genuine data by each of a population of discriminators may still not be useful (the discriminators may have performed poorly). In principle, however, any criterion can be applied (e.g., convergence data, a fixed time limit, etc.).

³Coevolutionary algorithms have been studied in a range of contexts [15, 16, 17], including system identification [18, 19], though these works differ from GANs and Turing Learning in that no discriminators evolve; rather, pre-defined metrics gauge how similar the model and training data are. For some system identification problems, the use of such pre-defined metrics can result in poor model accuracy, as shown in [8].

3 Case Study 1: Inferring Stochastic Behavioral Processes Through Interaction

3.1 Problem Formulation

This case study is inspired by ethology—the study of animal behavior. Animals are sophisticated agents whose actions depend on both their internal state and the stimuli present in their environment. Additionally, their behavior can have a stochastic component. In the following, we show how Turing Learning can infer the behavior of a simple agent that captures the aforementioned properties. The agent's behavior is governed by the probabilistic finite-state machine (PFSM)⁴ shown in Figure 2; a minimal simulation sketch follows below. It has n states, and it is assumed that each state leads to some observable behavioral feature, v ∈ ℝ, hereafter referred to as the agent's velocity. The agent responds to a stimulus that can take two levels, low (L) or high (H). The agent starts in state 1. If the stimulus is L, it remains in state 1 with certainty. If the stimulus is H, it transitions to state 2 with probability p₁, and remains in state 1 otherwise. In other words, on average, it transitions to state 2 after 1/p₁ steps. In state k = 2, 3, ..., n−1, the behavior is as follows. If the stimulus is identical to the one that brings the agent into state k from state k−1, the state reverts to k−1 with probability p₂ and remains at k otherwise. If the stimulus is different from the one that brings the agent into state k from state k−1, the state progresses to k+1 with probability p₁ and remains at k otherwise. In state n, the only difference is that if the stimulus is different from the one that brought about state n, the agent remains in state n with certainty (as there is no next state to progress to).

⁴PFSMs generalize the concept of Markov chains [23, 24].
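A minimal sketch of one step of this PFSM follows. Since state 2 is entered under H and the entry stimulus alternates along the chain, state k (k ≥ 2) is entered under H when k is even and under L when k is odd; this parity rule is our inference from the description above. The velocity values are illustrative (v₂–v₄ match the training parameters used in Section 3.2; fixing v₁ = 0 is our assumption).

```python
import numpy as np

def pfsm_step(state, stimulus, p1, p2, n, rng):
    """One step of the n-state PFSM of Figure 2; stimulus is 'L' or 'H'."""
    if state == 1:
        return 2 if stimulus == 'H' and rng.random() < p1 else 1
    entry = 'H' if state % 2 == 0 else 'L'    # stimulus that brought the agent here
    if stimulus == entry:                     # same stimulus: revert w.p. p2
        return state - 1 if rng.random() < p2 else state
    if state < n and rng.random() < p1:       # different stimulus: progress w.p. p1
        return state + 1
    return state                              # in state n it can only revert or stay

rng = np.random.default_rng(6)
state, n = 1, 4
velocity = {1: 0.0, 2: 0.2, 3: 0.4, 4: 0.6}   # v_1 = 0 assumed; v_2..v_4 illustrative
for t in range(20):
    state = pfsm_step(state, 'H' if t % 2 == 0 else 'L', 0.1, 1.0, n, rng)
print(state, velocity[state])
```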
By choosing p₁ close to 0 and p₂ = 1, we force the need for interaction if the higher states are to be observed for a meaningful amount of time. This is because once a transition to a higher state happens, the interrogator must immediately toggle the stimulus to prevent the agent from regressing back to the lower state.

3.2 Turing Learning Implementation

We implement Turing Learning for this problem as follows:

• Training data. To obtain the training data, the discriminator interacts with the PFSM shown in Figure 2. The number of states is set to four (n = 4). The parameters used to generate the (genuine) data samples are given by:

$$q = (p_1^*, p_2^*, v_2^*, v_3^*, v_4^*) = (0.1, 1.0, 0.2, 0.4, 0.6). \qquad (1)$$

• Model representation. It is assumed that the structure of the PFSM is known, while the parameters, q, are to be inferred. All parameters can vary in ℝ. To interpret p₁ and p₂ as probabilities, they are mapped to the closest point in [0, 1] if outside this interval. The model data is derived analogously to that of the training data.

• Discriminator representation. The discriminator is implemented as an Elman neural network [25] with 1 input neuron, 5 hidden neurons, and 2 output neurons. At each time step t, the observable feature (the agent's velocity v) is fed into the input neuron.⁵ After updating the neural network, the output of one output neuron determines the stimulus at time step t+1, L or H. At the end of a trial (100 time steps), the output of the other output neuron determines whether the discriminator believes the agent under investigation to be the training agent (T) or a model agent (M).

• Optimization algorithms. We use a standard (µ + λ) evolution strategy with self-adapting mutation strengths [26] for both the model and the discriminator populations, with µ = λ = 50 in both cases. The populations are initialized at random. The parameter values of the optimization algorithm are set as described in [26].

• Coupling mechanism between the model and discriminator optimizers. The coupling comes from the evaluation process, which in turn affects the population selection. Each of the 100 candidate discriminators is evaluated once with each of the 100 models, as well as an additional 100 times with the training agent. It receives a point every time it correctly labels the data as either genuine or counterfeit. At the same time, each model receives a point each time a discriminator mistakenly judges its data as genuine.

• Termination criterion. The optimization process is stopped after 1000 generations.

⁵To emulate a noisy tracking process, the actual speed value is multiplied by a number chosen uniformly at random in the range (0.95, 1.05).
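A minimal sketch of one closed-loop trial follows: an Elman-style recurrent discriminator both sets the stimulus and, at the end, judges the agent. The parameterization, the stand-in agent, and all names are our assumptions for illustration, not the network described in [25, 26].

```python
import numpy as np

def interactive_trial(disc_params, agent_step, steps=100):
    """One closed-loop trial of an interactive discriminator.

    disc_params: dict of weight arrays (hypothetical parameterization).
    agent_step:  callable mapping a stimulus in {'L','H'} to a velocity.
    Returns True if the discriminator labels the agent as genuine.
    """
    W_in, W_rec, W_out = disc_params['in'], disc_params['rec'], disc_params['out']
    h = np.zeros(W_rec.shape[0])
    stimulus = 'L'
    for _ in range(steps):
        v = agent_step(stimulus)                 # observe the agent's velocity
        h = np.tanh(W_in * v + W_rec @ h)        # Elman-style recurrence
        o = W_out @ h                            # two outputs
        stimulus = 'H' if o[0] > 0 else 'L'      # control output sets the stimulus
    return o[1] > 0                              # classification output at trial end

rng = np.random.default_rng(7)
params = {'in': rng.normal(size=5),
          'rec': rng.normal(size=(5, 5)) * 0.3,
          'out': rng.normal(size=(2, 5))}
toy_agent = lambda s: 0.6 if s == 'H' else 0.2   # stand-in for the PFSM agent
print(interactive_trial(params, toy_agent))
```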
3.3 Results

To validate the advantages of the interactive approach, we use three setups for the Turing Learning algorithm. In the default setup, hereafter the "Interactive" setup, the discriminator controls the environmental stimulus while observing the agent. In the other two setups, the discriminator observes the agent in a passive manner; that is, its output is not used to update the stimulus. Instead, the stimulus is chosen uniformly at random at the beginning of the trial, and it is toggled with probability 0.1 at any time step (the stimulus is hence expected to change on average every 10 time steps). In setup "Passive 1", the discriminator has the same input as in the "Interactive" setup (the observable feature, v). In setup "Passive 2", the discriminator has one additional input, the current stimulus (S). All other aspects of the passive setups are identical to the "Interactive" setup.

For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 3(a) shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The "Interactive" setup is the only one that inferred all parameters with good accuracy. Figure 3(b) shows a typical example of how a discriminator interacts with the agent. The discriminator initially sets the environmental stimulus to alternating values (i.e., toggling between H and L). Once the agent advances from state 1 to state 2, the discriminator instantly changes the stimulus to L and holds it constant. Once the agent advances to higher states, the stimulus is switched again, and so forth. This strategy allows the discriminator to observe the agent's velocity in each state.

4 Case Study 2: A Robot Inferring Its Own Sensor Configuration

4.1 Problem Formulation

The reality gap is a well-known problem in robotics: often, behaviors that work well in simulation do not translate effectively into real-world implementations [11]. This is because simulations are generally unable to capture the full range of features of the real world, and therefore make simplifying assumptions. Yet simulations can be important, even on board a physical robot, as they facilitate planning and optimization. This case study investigates how a robot can use Turing Learning to improve the accuracy of a simulation model of itself, through a process of self-discovery, similar to [27]. In a practical scenario, the inference could take place on board a physical platform. For convenience, we use an existing simulation platform [28], which has been extensively verified and shown to be able to cross the reality gap [29].

The robot, an e-puck [30], is represented as a cylinder of diameter 7.4 cm, height 4.7 cm and mass 152 g. It has two symmetrically aligned wheels. Their ground contact velocities (v_left and v_right) can be set within [−12.8, 12.8] cm/s. During motion, random noise is applied to each wheel velocity by multiplying it with a number chosen uniformly at random in the range (0.95, 1.05). The robot has eight infrared proximity sensors distributed around its cylindrical body; see Figure 4(a). The sensors provide noisy reading values (s₁, s₂, ..., s₈). We assume that the robot does not know where the sensors are located (neither their orientations nor their displacements from the center). Situations like this are common in robotics, where uncertainties are introduced when sensors are mounted manually or when the sensor configuration changes during operation (e.g., in a collision with an object, or when the robot itself reconfigures the sensors). The sensor configuration can be described as follows:

$$q = (\theta_1, \theta_2, \ldots, \theta_8, d_1, d_2, \ldots, d_8), \qquad (2)$$

where dᵢ ∈ (0, R] defines the distance of sensor i from the robot's center (R is the robot's radius), and θᵢ ∈ [−π, π] defines the bearing of sensor i relative to the robot's front. The robot operates in a bounded square environment with sides of 50 cm, shown in Figure 4(b). The environment also contains nine movable, cylindrical obstacles, arranged in a grid. The distance between the obstacles is just wide enough for an e-puck to pass through.

4.2 Turing Learning Implementation

We implement Turing Learning for this problem as follows:
• Training data. The training data comes from the eight proximity sensors of a "real" e-puck robot, that is, using the sensor configuration q as defined by the robot (see Figure 4(a)). The discriminator controls the movements of the robot within the environment shown in Figure 4(b), while observing the readings of its sensors.

• Model representation. It is assumed that the sensor configuration, q, is to be inferred. In other words, a total of 16 parameters have to be estimated.

• Discriminator representation. As in Case Study 1, the discriminator is implemented as an Elman neural network with 5 hidden neurons. The network has 8 inputs that receive values from the robot's proximity sensors (s₁, s₂, ..., s₈). In addition to the classification output, the discriminator has two control outputs, which are used to set the robot's wheel velocities (v_left and v_right). In each trial, the robot starts from a random position and random orientation within the environment.⁶ The evaluation lasts for 10 seconds. As the robot's sensors and actuators are updated 10 times per second, this results in 100 time steps.

• The remaining aspects are implemented exactly as in Case Study 1.

⁶As the robot knows neither its relative position to the obstacles nor its sensor configuration, the scenario can be considered a chicken-and-egg problem.

4.3 Results

To validate the advantages of the interactive approach, we again use three setups. In the "Interactive" setup, the discriminator controls the movements of the robot while observing its sensor readings. In the other two setups, the discriminator observes the robot's sensor readings in a passive manner; that is, its output is not used to update the movements of the robot. Rather, the pair of wheel velocities is chosen uniformly at random at the beginning of the trial, and changed, with probability 0.1, at any time step (the movement pattern is hence expected to change on average every 10 time steps). In setup "Passive 1", the discriminator has the same inputs as in the "Interactive" setup (the reading values of the robot's sensors, s₁, s₂, ..., s₈). In setup "Passive 2", the discriminator has two additional inputs, indicating the velocities of the left and right wheels (v_left and v_right). All other aspects of the passive setups are identical to the "Interactive" setup.

For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 5 shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The "Interactive" setup is the only one that inferred the orientations of the proximity sensors with good accuracy. The displacement parameters were inferred with all setups, though none of them provided accurate estimates. Figure 6 shows a typical example of how a discriminator controls the robot. At the beginning, the robot rotates clockwise, registering an obstacle with sensors s₇, s₆, ..., s₂ (in that order). The robot then moves forward, and registers the obstacle with sensors s₁ and/or s₈ while pushing it. This confirms that s₁ and s₈ are indeed forward-facing. Once the robot no longer has any obstacle in its front, it repeats the process.

To validate whether the sensor-to-motor coupling was of any significance for the discrimination task, we recorded the movements of a robot controlled by the best discriminator of each of the 20 runs. The robot used either the genuine sensor configuration (50 trials) or the best model configuration of the corresponding run (50 trials).
In these 2000 "closed-loop" experiments, the discriminator made correct judgments in 69.45% of the cases. We then repeated the 2000 trials, now ignoring the discriminator's control outputs and instead using the movements recorded earlier. In these 2000 "open-loop" experiments, the discriminator made correct judgments in 58.60% of the cases—a significant drop, though still better than guessing (50%).

5 Conclusion

In this paper we analyzed how Generative Adversarial Networks (GANs) relate to the Turing test. We identified the defining features of GANs, assuming a Turing perspective. Other features, including the choice of model representation, discriminator representation, and optimization algorithm, were viewed as implementation options of a generalized version of GANs, also referred to as Turing Learning. It was noted that the discriminator in GANs does not directly influence the sampling process, but rather is provided with a (static) data sample from either the generative model or the training data set. This is in stark contrast to the Turing test, where the discriminator (the interrogator) plays an active role; it poses questions to the players to reveal the information most relevant to the discrimination task. Such interactions are by no means always useful. For the purpose of generating photo-realistic images, for example, they may not be needed.⁷ For the two case studies presented here, however, interactions were shown to improve the accuracy of the models.

The first case study showed how one can infer the behavior of an agent while controlling a stimulus present in its environment. It could serve as a template for studies of animal/human behavior, especially where some behavioral traits are revealed only through meaningful interactions. The inference task was not simple, as the agent's actions depended on a hidden stochastic process. The latter was influenced by the stimulus, which was set to either low or high by the discriminator (100 times). It was not known in advance which of the $2^{100}$ possible sequences are useful. The discriminator thus needed to construct a suitable sequence dynamically, taking the observation data into account.

The second case study focused on a different class of problems: active self-discovery. It showed that a robot can infer its own sensor configuration through controlled movements. This case study could serve as a template for modelling physical devices. The inference task was not simple, as the robot started from a random position in the environment, and its motors and sensors were affected by noise. The discriminator thus needed to dynamically construct a control sequence that let the robot approach an obstacle and perform movements for testing its sensor configuration.

Future work could attempt to build models of more complex behaviors, including those of humans.

Acknowledgments

The authors thank Nathan Lepora for stimulating discussions.

⁷Though if the discriminator could request additional images from the same model or training agent, problems like mode collapse might be prevented.
1. What is the main contribution of the paper regarding GANs? 2. What are the strengths of the paper in terms of its writing quality and methodology? 3. What are the weaknesses of the paper regarding its assumptions about co-evolution and resilience? 4. How does the reviewer suggest the authors improve their approach by discussing it in the context of related work?
Review
Review GANs are a very interesting idea, which I hadn't come across before. Using them for control is very appealing. The paper is very well written, easy to understand, and high in quality, both in methods and results. There are just two comments. Co-evolution and resilience are much older than the authors assume. Co-evolution has been studied, e.g., in Stefano Nolfi and Dario Floreano. 1998. Coevolving Predator and Prey Robots: Do "Arms Races" Arise in Artificial Evolution?. Artif. Life 4, 4 (October 1998), 311-335. DOI=http://dx.doi.org/10.1162/106454698568620, and more can be found in Nolfi & Floreano, Evolutionary Robotics, 2000. Resilience, e.g., learning one's own body model, also under variations of the body, has been investigated in J. Bongard, V. Zykov, and H. Lipson. Resilient machines through continuous self-modeling. Science, 314(5802):1118–1121, 2006. It would be great if the authors could discuss their approach/results in the context of more related work.
NIPS
Title Generalizing GANs: A Turing Perspective Abstract Recently, a new class of machine learning algorithms has emerged, where models and discriminators are generated in a competitive setting. The most prominent example is Generative Adversarial Networks (GANs). In this paper we examine how these algorithms relate to the Turing test, and derive what—from a Turing perspective—can be considered their defining features. Based on these features, we outline directions for generalizing GANs—resulting in the family of algorithms referred to as Turing Learning. One such direction is to allow the discriminators to interact with the processes from which the data samples are obtained, making them “interrogators”, as in the Turing test. We validate this idea using two case studies. In the first case study, a computer infers the behavior of an agent while controlling its environment. In the second case study, a robot infers its own sensor configuration while controlling its movements. The results confirm that by allowing discriminators to interrogate, the accuracy of models is improved. 1 Introduction Generative Adversarial Networks [1] (GANs) are a framework for inferring generative models from training data. They place two neural networks—a model and a discriminator—in a competitive setting. The discriminator’s objective is to correctly label samples from either the model or the training data. The model’s objective is to deceive the discriminator, in other words, to produce samples that are categorized as training data by the discriminator. The networks are trained using a gradient-based optimization algorithm. Since their inception in 2014, GANs have been applied in a range of contexts [2, 3], but most prominently for the generation of photo-realistic images [1, 4]. In this paper we analyze the striking similarities between GANs and the Turing test [5]. The Turing test probes a machine’s ability to display behavior that, to an interrogator, is indistinguishable from that of a human. Developing machines that pass the Turing test could be considered as a canonical problem in computer science [6]. More generally, the problem is that of imitating (and hence inferring) the structure and/or behavior of any system, such as an organism, a device, a computer program, or a process. The idea to infer models in a competitive setting (model versus discriminator) was first proposed in [7]. The paper considered the problem of inferring the behavior of an agent in a simple environment. The behavior was deterministic, simplifying the identification task. In a subsequent work [8], the method, named Turing Learning, was used to infer the behavioral rules of a swarm of memoryless 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. robots. The robot’s movements were tracked using an external camera system, providing the training data. Additional robots executed the rules defined by the models. 
The contributions of this paper are • to examine the defining features of GANs (and variants)—assuming a Turing perspective; • to outline directions for generalizing GANs, in particular, to encourage alternative imple- mentations and novel applications; for example, ones involving physical systems; • to show, using two case studies, that more accurate models can be obtained if the discrimi- nators are allowed to interact with the processes from which data samples are obtained (as the interrogators in the Turing test).1 2 A Turing Perspective In 1950, Turing proposed an imitation game [5] consisting of three players A, B and C. Figure 1 shows a schematic of this game. Player C, also referred to as the interrogator, is unable to see the other players. However, the interrogator can pose questions to and receive answers from them. Answers from the same player are consistently labelled (but not revealing its identity, A or B). At the end of the game, the interrogator has to guess which label belongs to which player. There are two variants of the game, and we focus on the one where player A is a machine, while player B is human (the interrogator is always human). This variant, depicted in Figure 1, is commonly referred to as the Turing test [9, 10]. To pass the test, the machine would have to produce answers that the interrogator believes to originate from a human. If a machine passed this test, it would be considered intelligent. For GANs (and variants), player C, the interrogator, is no longer human, but rather a computer program that learns to discriminate between information originating from players A and B. Player A is a computer program that learns to trick the interrogator. Player B could be any system one wishes to imitate, including humans. 2.1 Defining Features of GANs Assuming a Turing perspective, we consider the following as the defining features of GANs (and variants): • a training agent, T , providing genuine data samples (the training data); • a model agent,M, providing counterfeit data samples; 1Different to [7], we consider substantially more complex case studies, where the discriminators are required to genuinely interact with the systems, as a pre-determined sequence of interventions would be unlikely to reveal all the observable behavioral features. • a discriminator agent, D, labelling data samples as either genuine or counterfeit; • a process by which D observes or interacts withM and T ; • D andM are being optimized: – D is rewarded for labelling data samples of T as genuine; – D is rewarded for labelling data samples ofM as counterfeit; – M is rewarded for misleading D (to label its data samples as genuine). It should be noted that in the Turing test there is a bi-directional exchange of information between player C and either player A or B. In GANs, however, during any particular “game”, data flows only in one direction: The discriminator agent receives data samples, but is unable to influence the agent at the origin during the sampling process. In the case studies presented in this paper, this limitation is overcome, and it is shown that this can lead to improved model accuracy. This, of course, does not imply that active discriminators are beneficial for every problem domain. 2.2 Implementation Options of (Generalized) GANs GANs and their generalizations, that is, algorithms that possess the aforementioned defining features, are instances of Turing Learning [8]. 
The Turing Learning formulation removes (from a Turing perspective unnecessary) restrictions of the original GAN formulation, for example, the need for models and discriminators to be represented as neural networks, or the need for optimizing these networks using gradient descent. As a result of this, the Turing Learning formulation is very general, and applicable to a wide range of problems (e.g., using models with discrete, continuous or mixed representations).

In the following, we present the aspects of implementations that are not considered as defining features, but rather as implementation options. They allow Turing Learning to be tailored, for example, by using the most suitable model representation and optimization algorithm for the given problem domain. Moreover, users can choose implementation options they are familiar with, making the overall framework2 more accessible.

2 For an algorithmic description of Turing Learning, see [8].

• Training data. The training data could take any form. It could be artificial (e.g., audio, visual, textual data in a computer), or physical (e.g., a geological sample, engine, painting or human being).

• Model representation. The model could take any form. In GANs [1], it takes the form of a neural network that generates data when provided with a random input. Other representations include vectors, graphs, and computer programs. In any case, the representation should be expressive enough, allowing a model to produce data with the same distribution as the training data. The associated process could involve physical objects (e.g., robots [8]). If the training data originates from physical objects, but the model data originates from simulation, special attention is needed to avoid the so-called reality gap [11]. Any difference caused not by the model but rather the process to collect the data (e.g., tracking equipment) may be detected by the discriminators, which could render model inference impossible.

• Discriminator representation. The discriminator could take any form. Its representation should be expressive enough to distinguish between genuine and counterfeit data samples. These samples could be artificial or physical. For example, a discriminator could be networked to an experimental platform, observing and manipulating some physical objects or organisms.

• Optimization algorithms. The optimization algorithms could take any form as long as they are compatible with the solution representations. They could use a single candidate solution or a population of candidate solutions [8, 12]. In the context of GANs, gradient-based optimization algorithms are widely applied [13]. These algorithms however require the objective function to be differentiable and (ideally) unimodal. A wide range of metaheuristic algorithms [14] could be explored for domains with more complex objective functions. For example, if the model was represented using a computer program, genetic programming algorithms could be used.

• Coupling mechanism between the model and discriminator optimizers. The optimization processes for the model and discriminator solutions are dependent on each other. Hence they may require careful synchronization [1]. Moreover, if using multiple models and/or multiple discriminators, choices have to be made for which pairs of solutions to evaluate. Elaborate evaluation schemes may take into account the performance of the opponents in other evaluations (e.g., using niching techniques).
Synchronization challenges include those reported for coevolutionary systems.3 In particular, due to the so-called Red Queen Effect, the absolute quality of solutions in a population may increase while the quality of solutions relative to the other population may decrease, or vice versa [18]. Cycling [20] refers to the phenomenon that solutions that have been lost may get rediscovered in later generations. A method for overcoming the problem is to retain promising solutions in an archive—the “hall of fame” [21]. Disengagement can occur when one population (e.g., the discriminators) outperforms the other population, making it hard to reveal differences among the solutions. Methods for addressing disengagement include “resource sharing” [22] and “reducing virulence” [20].

• Termination criterion. Identifying a suitable criterion for terminating the optimization process can be challenging, as the performance is defined in relative rather than absolute terms. For example, a model that is found to produce genuine data by each of a population of discriminators may still not be useful (the discriminators may have performed poorly). In principle, however, any criterion can be applied (e.g., convergence data, fixed time limit, etc.).

3 Coevolutionary algorithms have been studied in a range of contexts [15, 16, 17], including system identification [18, 19], though these works differ from GANs and Turing Learning in that no discriminators evolve, but rather pre-defined metrics gauge how similar the model and training data are. For some system identification problems, the use of such pre-defined metrics can result in poor model accuracy, as shown in [8].

3 Case Study 1: Inferring Stochastic Behavioral Processes Through Interaction

3.1 Problem Formulation

This case study is inspired by ethology—the study of animal behavior. Animals are sophisticated agents, whose actions depend on both their internal state and the stimuli present in their environment. Additionally, their behavior can have a stochastic component. In the following, we show how Turing Learning can infer the behavior of a simple agent that captures the aforementioned properties.

The agent’s behavior is governed by the probabilistic finite-state machine (PFSM)4 shown in Figure 2. It has n states, and it is assumed that each state leads to some observable behavioral feature, v ∈ R, hereafter referred to as the agent’s velocity. The agent responds to a stimulus that can take two levels, low (L) or high (H). The agent starts in state 1. If the stimulus is L, it remains in state 1 with certainty. If the stimulus is H, it transitions to state 2 with probability p1, and remains in state 1 otherwise. In other words, on average, it transitions to state 2 after 1/p1 steps. In state k = 2, 3, ..., n−1, the behavior is as follows. If the stimulus is identical to that which brings the agent into state k from state k−1, the state reverts to k−1 with probability p2 and remains at k otherwise. If the stimulus is different to that which brings the agent into state k from state k−1, the state progresses to k+1 with probability p1 and remains at k otherwise. In state n, the only difference is that if the stimulus is different to that which brought about state n, the agent remains in state n with certainty (as there is no next state to progress to).

4 PFSMs generalize the concept of Markov chains [23, 24].
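To make the transition rules concrete, the following is a minimal simulation sketch of this agent, derived from the description above rather than taken from the paper. Two points are assumptions of the sketch: the velocity of state 1 is set to 0 (the paper infers only the velocities of the higher states), and the observation noise is omitted.

```python
import random

def entering_stimulus(k):
    # State k (k >= 2) is reached from k-1 via stimulus H if k is even and
    # via L if k is odd: state 2 is entered with H, and the stimulus that
    # causes progression alternates from there on.
    return 'H' if k % 2 == 0 else 'L'

def step(state, stimulus, p1, p2, n):
    """One update of the PFSM of Section 3.1."""
    if state == 1:
        if stimulus == 'H' and random.random() < p1:
            return 2
        return 1
    if stimulus == entering_stimulus(state):
        # Same stimulus that brought the agent here: revert w.p. p2.
        return state - 1 if random.random() < p2 else state
    # Different stimulus: progress w.p. p1, except in the last state n,
    # where the agent remains with certainty.
    if state < n and random.random() < p1:
        return state + 1
    return state

def simulate(p1, p2, v, n=4, T=100,
             policy=lambda t, history: random.choice('LH')):
    """Run the agent for T steps and return the observed velocities.
    v maps each state to its velocity; v[1] is an assumption (the paper
    infers only the velocities of states 2..n). The default policy picks
    stimuli at random; an interrogator would replace it."""
    state, out = 1, []
    for t in range(T):
        state = step(state, policy(t, out), p1, p2, n)
        out.append(v[state])
    return out

# Placeholder parameters, chosen to match the form of the training agent:
v = {1: 0.0, 2: 0.2, 3: 0.4, 4: 0.6}
print(simulate(0.1, 1.0, v)[:10])
```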
By choosing p1 close to 0 and p2 = 1, we force the need for interaction if the higher states are to be observed for a meaningful amount of time. This is because once a transition to a higher state happens, the interrogator must immediately toggle the stimulus to prevent the agent from regressing back to the lower state.

3.2 Turing Learning Implementation

We implement Turing Learning for this problem as follows:

• Training data. To obtain the training data, the discriminator interacts with the PFSM, shown in Figure 2. The number of states is set to four (n = 4). The parameters used to generate the (genuine) data samples are given by:

$$q = (p_1^*,\, p_2^*,\, v_2^*,\, v_3^*,\, v_4^*) = (0.1,\ 1.0,\ 0.2,\ 0.4,\ 0.6) \qquad (1)$$

• Model representation. It is assumed that the structure of the PFSM is known, while the parameters, q, are to be inferred. All parameters can vary in R. To interpret p1 and p2 as probabilities, they are mapped to the closest point in [0, 1], if outside this interval. The model data is derived analogously to that of the training data.

• Discriminator representation. The discriminator is implemented as an Elman neural network [25] with 1 input neuron, 5 hidden neurons, and 2 output neurons. At each time step t, the observable feature (the agent’s velocity v) is fed into the input neuron.5 After updating the neural network, the output from one of the output neurons is used to determine the stimulus at time step t+1, L or H. At the end of a trial (100 time steps), the output from the other output neuron is used to determine whether the discriminator believes the agent under investigation to be the training agent (T) or model agent (M).

• Optimization algorithms. We use a standard (µ + λ) evolution strategy with self-adapting mutation strengths [26] for both the model and the discriminator populations. We use µ = λ = 50 in both cases. The populations are initialized at random. The parameter values of the optimization algorithm are set as described in [26].

• Coupling mechanism between the model and discriminator optimizers. The coupling comes from the evaluation process, which in turn affects the population selection. Each of the 100 candidate discriminators is evaluated once with each of the 100 models, as well as an additional 100 times with the training agent. It receives a point every time it correctly labels the data as either genuine or counterfeit. At the same time, each model receives a point for each time a discriminator mistakenly judges its data as genuine.

• Termination criterion. The optimization process is stopped after 1000 generations.

5 To emulate a noisy tracking process, the actual speed value is multiplied with a number chosen with a uniform distribution in the range (0.95, 1.05).

3.3 Results

To validate the advantages of the interactive approach, we use three setups for the Turing Learning algorithm. In the default setup, hereafter the “Interactive” setup, the discriminator controls the environmental stimulus while observing the agent. In the other two setups, the discriminator observes the agent in a passive manner; that is, its output is not used to update the stimulus. Instead, the stimulus is uniformly randomly chosen at the beginning of the trial, and it is toggled with probability 0.1 at any time step (the stimulus is hence expected to change on average every 10 time steps). In setup “Passive 1”, the discriminator has the same input as in the “Interactive” setup (the observable feature, v). In setup “Passive 2”, the discriminator has one additional input, the current stimulus (S). All other aspects of the passive setups are identical to the “Interactive” setup.
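For illustration, the difference between the setups reduces to how the stimulus sequence is produced. The sketch below is an assumption-laden reading of the description above, not the authors' code; in particular, both callables in the interactive variant are hypothetical stand-ins (agent_step could wrap the PFSM sketch from Section 3.1, and discriminator_step one update of the Elman network):

```python
import random

def passive_stimuli(T=100, p_toggle=0.1):
    """Passive setups: the stimulus is chosen at random at the start of the
    trial and toggled with probability 0.1 at each step, so it changes on
    average every 10 steps."""
    s = random.choice('LH')
    seq = []
    for _ in range(T):
        seq.append(s)
        if random.random() < p_toggle:
            s = 'H' if s == 'L' else 'L'
    return seq

def interactive_trial(agent_step, discriminator_step, T=100):
    """Interactive setup: at each step, one discriminator output sets the
    stimulus for the next step; the other output, read at the end of the
    trial, gives the genuine/counterfeit judgement."""
    stimulus, label = 'L', 0.0
    for _ in range(T):
        velocity = agent_step(stimulus)               # observe the agent
        control, label = discriminator_step(velocity)
        stimulus = 'H' if control > 0.5 else 'L'      # stimulus at t+1
    return label > 0.5   # True: judged to be the training agent
```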
For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 3(a) shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The “Interactive” setup is the only one that inferred all parameters with good accuracy.

Figure 3(b) shows a typical example of how a discriminator interacts with the agent. The discriminator initially sets the environmental stimulus to alternating values (i.e., toggling between H and L). Once the agent advances from state 1 to state 2, the discriminator instantly changes the stimulus to L and holds it constant. Once the agent advances to higher states, the stimulus is switched again, and so forth. This strategy allows the discriminator to observe the agent’s velocity in each state.

4 Case Study 2: A Robot Inferring Its Own Sensor Configuration

4.1 Problem Formulation

The reality gap is a well-known problem in robotics: Often, behaviors that work well in simulation do not translate effectively into real-world implementations [11]. This is because simulations are generally unable to capture the full range of features of the real world, and therefore make simplifying assumptions. Yet, simulations can be important, even on-board a physical robot, as they facilitate planning and optimization. This case study investigates how a robot can use Turing Learning to improve the accuracy of a simulation model of itself, through a process of self-discovery, similar to [27]. In a practical scenario, the inference could take place on-board a physical platform. For convenience, we use an existing simulation platform [28], which has been extensively verified and shown to be able to cross the reality gap [29].

The robot, an e-puck [30], is represented as a cylinder of diameter 7.4 cm, height 4.7 cm and mass 152 g. It has two symmetrically aligned wheels. Their ground contact velocity (vleft and vright) can be set within [−12.8, 12.8] (cm/s). During the motion, random noise is applied to each wheel velocity, by multiplying it with a number chosen with a uniform distribution in the range (0.95, 1.05).

The robot has eight infrared proximity sensors distributed around its cylindrical body, see Figure 4(a). The sensors provide noisy reading values (s1, s2, ..., s8). We assume that the robot does not know where the sensors are located (neither their orientations, nor their displacements from the center). Situations like this are common in robotics, where uncertainties are introduced when sensors get mounted manually or when the sensor configuration may change during operation (e.g., at the time of collision with an object, or when the robot itself reconfigures the sensors). The sensor configuration can be described as follows:

$$q = (\theta_1, \theta_2, \ldots, \theta_8,\, d_1, d_2, \ldots, d_8) \qquad (2)$$

where $d_i \in (0, R]$ defines the distance of sensor i from the robot’s center (R is the robot’s radius), and $\theta_i \in [-\pi, \pi]$ defines the bearing of sensor i relative to the robot’s front.

The robot operates in a bounded square environment with sides 50 cm, shown in Figure 4(b). The environment also contains nine movable, cylindrical obstacles, arranged in a grid. The distance between the obstacles is just wide enough for an e-puck to pass through.
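As an aside, under this parameterization each sensor's pose in the world frame follows directly from the robot's pose and $(\theta_i, d_i)$. The helper below is a hypothetical illustration only; it additionally assumes that a sensor faces outward along its bearing, which is a simplification of this sketch rather than a statement of the paper.

```python
import math

def sensor_pose(robot_x, robot_y, robot_heading, theta_i, d_i):
    """World-frame position and facing direction of sensor i, given the
    robot's pose and the parameters of Eq. (2): d_i is the sensor's distance
    from the robot's center (in cm), theta_i its bearing relative to the
    robot's front (in radians). The sensor is assumed to face outward along
    its bearing."""
    bearing = robot_heading + theta_i
    return (robot_x + d_i * math.cos(bearing),
            robot_y + d_i * math.sin(bearing),
            bearing)

# Example: a sensor mounted 3 cm from the center, 45 degrees from the front
# of a robot sitting at the origin and facing along the x-axis.
print(sensor_pose(0.0, 0.0, 0.0, math.pi / 4, 3.0))
```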
4.2 Turing Learning Implementation

We implement Turing Learning for this problem as follows:

• Training data. The training data comes from the eight proximity sensors of a “real” e-puck robot, that is, using the sensor configuration q as defined by the robot (see Figure 4(a)). The discriminator controls the movements of the robot within the environment shown in Figure 4(b), while observing the readings of its sensors.

• Model representation. It is assumed that the sensor configuration, q, is to be inferred. In other words, a total of 16 parameters have to be estimated.

• Discriminator representation. As in Case Study 1, the discriminator is implemented as an Elman neural network with 5 hidden neurons. The network has 8 inputs that receive values from the robot’s proximity sensors (s1, s2, ..., s8). In addition to the classification output, the discriminator has two control outputs, which are used to set the robot’s wheel velocities (vleft and vright). In each trial, the robot starts from a random position and random orientation within the environment.6 The evaluation lasts for 10 seconds. As the robot’s sensors and actuators are updated 10 times per second, this results in 100 time steps.

• The remaining aspects are implemented exactly as in Case Study 1.

6 As the robot knows neither its relative position to the obstacles, nor its sensor configuration, the scenario can be considered as a chicken-and-egg problem.

4.3 Results

To validate the advantages of the interactive approach, we again use three setups. In the “Interactive” setup the discriminator controls the movements of the robot while observing its sensor readings. In the other two setups, the discriminator observes the robot’s sensor readings in a passive manner; that is, its output is not used to update the movements of the robot. Rather, the pair of wheel velocities is uniformly randomly chosen at the beginning of the trial and changed with probability 0.1 at any time step (the movement pattern is hence expected to change on average every 10 time steps). In setup “Passive 1”, the discriminator has the same inputs as in the “Interactive” setup (the reading values of the robot’s sensors, s1, s2, ..., s8). In setup “Passive 2”, the discriminator has two additional inputs, indicating the velocities of the left and right wheels (vleft and vright). All other aspects of the passive setups are identical to the “Interactive” setup.

For each setup, we performed 20 runs of the Turing Learning algorithm. Figure 5 shows the distribution of the inferred models that achieved the highest evaluation value in the 1000th generation. The “Interactive” setup is the only one that inferred the orientations of the proximity sensors with good accuracy. The displacement parameters were inferred with all setups, though none of them was able to provide accurate estimates.

Figure 6 shows a typical example of how a discriminator controls the robot. At the beginning, the robot rotates clockwise, registering an obstacle with sensors s7, s6, ..., s2 (in that order). The robot then moves forward, and registers the obstacle with sensors s1 and/or s8, while pushing it. This confirms that s1 and s8 are indeed forward-facing. Once the robot no longer has any obstacle in its front, it repeats the process.

To validate whether the sensor-to-motor coupling was of any significance for the discrimination task, we recorded the movements of a robot controlled by the best discriminator of each of the 20 runs. The robot used either the genuine sensor configuration (50 trials) or the best model configuration of the corresponding run (50 trials).
In these 2000 “closed-loop” experiments, the discriminator made correct judgments in 69.45% of the cases. We then repeated the 2000 trials, now ignoring the discriminator’s control outputs, but rather using the movements recorded earlier. In these 2000 “open-loop” experiments, the discriminator made correct judgments in 58.60% of the cases—a significant drop, though still better than guessing (50%).

5 Conclusion

In this paper we analyzed how Generative Adversarial Networks (GANs) relate to the Turing test. We identified the defining features of GANs, if assuming a Turing perspective. Other features, including the choice of model representation, discriminator representation, and optimization algorithm, were viewed as implementation options of a generalized version of GANs, also referred to as Turing Learning. It was noted that the discriminator in GANs does not directly influence the sampling process, but rather is provided with a (static) data sample from either the generative model or training data set. This is in stark contrast to the Turing test, where the discriminator (the interrogator) plays an active role; it poses questions to the players, to reveal the information most relevant to the discrimination task. Such interactions are by no means always useful. For the purpose of generating photo-realistic images, for example, they may not be needed.7 For the two case studies presented here, however, interactions were shown to cause an improvement in the accuracy of models.

7 Though if the discriminator could request additional images by the same model or training agent, problems like mode collapse might be prevented.

The first case study showed how one can infer the behavior of an agent while controlling a stimulus present in its environment. It could serve as a template for studies of animal/human behavior, especially where some behavioral traits are revealed only through meaningful interactions. The inference task was not simple, as the agent’s actions depended on a hidden stochastic process. The latter was influenced by the stimulus, which was set to either low or high by the discriminator (100 times). It was not known in advance which of the $2^{100}$ possible stimulus sequences would be useful. The discriminator thus needed to dynamically construct a suitable sequence, taking the observation data into account.

The second case study focused on a different class of problems: active self-discovery. It showed that a robot can infer its own sensor configuration through controlled movements. This case study could serve as a template for modeling physical devices. The inference task was not simple, as the robot started from a random position in the environment, and its motors and sensors were affected by noise. The discriminator thus needed to dynamically construct a control sequence that let the robot approach an obstacle and perform movements for testing its sensor configuration.

Future work could attempt to build models of more complex behaviors, including those of humans.

Acknowledgments

The authors thank Nathan Lepora for stimulating discussions.
1. How does the proposed method handle discrete action spaces?
2. Can the authors provide additional explanations or visualizations to help understand the results presented in Figure 5?
Review
This is a very well-written paper and the problem/approach is very interesting. I wonder though how the authors would propose addressing this problem when the action space is discrete? In addition, Figure 5 is a bit hard to parse. Can the authors summarize some of the statistics (such as correlation) to better demonstrate what the robot is doing?